| entry_id (string, len 33) | published (string, len 14) | title (string, len 17–188) | authors (sequence) | primary_category (string, len 5–18) | categories (sequence) | text (string, len 2–629k) |
|---|---|---|---|---|---|---|
http://arxiv.org/abs/2307.04936v1 | 20230710230605 | Quarkonia pair production as a tool for study of gluon GPDs | [
"Marat Siddikov",
"Ivan Schmidt"
] | hep-ph | [
"hep-ph",
"hep-ex"
] |
Quarkonia pair production as a tool for study of gluon GPDs
Marat Siddikov, Ivan Schmidt
Departamento de Física, Universidad Técnica Federico Santa María, y Centro Científico - Tecnológico de Valparaíso, Casilla 110-V, Valparaíso, Chile
In these proceedings we present our results on the exclusive photoproduction of J/ψ η_c pairs in the collinear factorization framework. We argue that the process might be used as a complementary channel for studying the generalized parton distributions (GPDs) of gluons. We provide numerical estimates for the cross-section in the kinematics of the future Electron Ion Collider.
DIS2023: XXX International Workshop on Deep-Inelastic Scattering and
Related Subjects,
Michigan State University, USA, 27-31 March 2023
§ INTRODUCTION
The understanding of partonic and multiparton distributions in the proton, and in particular of the so-called Generalized Parton Distributions (GPDs) <cit.>,
remains one of the open problems in hadronic physics. The phenomenological extraction of these distributions is challenging for technical (mathematical) reasons. Moreover, it relies on different assumptions and sometimes provides only limited information about
the partonic distributions. For these reasons, it is desirable
to extend the number of channels used for phenomenological studies <cit.>. Recently it has been suggested that 2→3 exclusive processes might be used as a new tool for the study of GPDs and complement existing phenomenological
research in 2→2 channels <cit.>.
We argue that it is possible to extend these studies and use the exclusive photoproduction of heavy quarkonia pairs as an additional probe of the gluon GPDs. From previous research on single-quarkonium production it is known that the heavy quarkonium mass may serve as a natural hard scale in the problem, justifying the use of perturbative methods even in the photoproduction regime.
Due to the different structure of the coefficient function, quarkonia
pair production provides additional constraints on the gluon GPDs,
especially outside the classical x=±ξ line.
Since, due to C-parity constraints, the production of J/ψ J/ψ pairs
is not related to GPDs of the target, in our
study we focus on the production of J/ψ η_c pairs, and
analyze the kinematics of the low-energy runs at the future Electron-Ion
Collider <cit.>.
For high-energy runs, as well as for other future accelerators, it might
be more appropriate to use evaluations in the color dipole picture,
which incorporates saturation effects <cit.>.
These proceedings are structured as follows. Below, in Section <ref>,
we briefly discuss the framework and the structure of the cross-section
of quarkonia pair production (the detailed derivation of these results
may be found in <cit.>). At the end of that section
we present numerical estimates for the cross-section in the EIC kinematics and
draw conclusions.
§ EXCLUSIVE PHOTOPRODUCTION OF MESON PAIRS
The production of light meson pairs was analyzed previously
in Refs. <cit.>,
in Bjorken kinematics. However, for heavy quarkonia this analysis has limited applicability, since in the kinematics of very large photon virtualities
Q^2=-q^2≫ M_1,2^2 (where M_1,2 are the quarkonia masses), the cross-section is vanishingly small. In our study we treat both Q^2 and M_1,2^2 as hard scales, although eventually we will take the photoproduction
limit Q→0.
The cross-section of the photoproduction of heavy quarkonia pairs is given by
dσ_γ p→ M_1M_2p^(L,T)=dy_1dp_1⊥^2dy_2dp_2⊥^2dϕ|𝒜_γ p→ M_1M_2p^(L,T)|^2/4(2π)^4√((W^2+Q^2-m_N^2)^2+4Q^2m_N^2)δ((q+P_1-p_1-p_2)^2-m_N^2)
where y_1,y_2 are the quarkonia rapidities, p_1⊥, p_2⊥
are their transverse momenta, ϕ is the azimuthal angle between
p_1⊥ and p_2⊥; W^2=(q+P_1)^2
is the invariant energy squared of the γ^*p pair, and the δ-function
on the right-hand side stems from the onshellness of the recoil proton.
This δ-function introduces cumbersome constraints on the kinematics
of the produced quarkonia pairs for fixed-energy photons
(see <cit.> for details); however, it can be trivially
taken into account if we treat the quarkonia momenta p_1⊥, p_2⊥
and rapidities y_1,y_2 as independent variables and fix the
photon energy (and hence W) from the onshellness condition.
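As a simple illustration of this bookkeeping, the sketch below (our own illustrative Python/NumPy code, not taken from <cit.>) reconstructs the photon energy in the photoproduction limit Q→0, assuming a frame with the proton at rest, the photon along the +z axis, and illustrative numerical values for masses and kinematics; the quarkonia four-momenta are built from (y_i, p_i⊥), and the recoil-proton onshellness fixes the photon energy ν and hence W^2 = m_N^2 + 2 m_N ν.

```python
import numpy as np

M_JPSI, M_ETAC, M_N = 3.097, 2.984, 0.938  # GeV, illustrative mass values

def four_momentum(mass, y, pt, phi):
    """Build (E, px, py, pz) from rapidity, transverse momentum and azimuth."""
    mt = np.hypot(mass, pt)  # transverse mass
    return np.array([mt * np.cosh(y), pt * np.cos(phi), pt * np.sin(phi), mt * np.sinh(y)])

def photon_energy(y1, pt1, y2, pt2, phi):
    """Photon energy nu (proton at rest, real photon along +z) fixed by the
    recoil-proton onshellness (q + P1 - p1 - p2)^2 = m_N^2."""
    P1 = np.array([M_N, 0.0, 0.0, 0.0])       # proton at rest
    p1 = four_momentum(M_JPSI, y1, pt1, 0.0)  # J/psi
    p2 = four_momentum(M_ETAC, y2, pt2, phi)  # eta_c at relative azimuth phi
    k = P1 - p1 - p2
    k2 = k[0]**2 - np.dot(k[1:], k[1:])       # Minkowski square of k
    # (q + k)^2 = 2 nu (k0 - kz) + k^2 = m_N^2   for q = (nu, 0, 0, nu)
    nu = (M_N**2 - k2) / (2.0 * (k[0] - k[3]))
    W2 = M_N**2 + 2.0 * M_N * nu              # gamma-p invariant energy squared
    return nu, W2

# illustrative (physical) kinematic point
nu, W2 = photon_energy(y1=2.2, pt1=0.3, y2=2.0, pt2=0.3, phi=np.pi)
print(f"nu = {nu:.2f} GeV, W = {np.sqrt(W2):.2f} GeV")
```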
The evaluation of the amplitudes 𝒜_γ p→ M_1M_2p^(L,T)
was done in the collinear factorization framework, assuming the quarkonia
pairs and the recoil proton are kinematically well-separated from
each other. At leading order, the dominant contribution to the
amplitudes of quarkonia production comes from the gluon GPDs. In our
evaluations we will disregard the contributions of the poorly known
transversity gluon GPDs H_T^g, E_T^g, H̃_T^g, Ẽ_T^g,
since existing experimental bounds suggest that they should be negligibly
small (see e.g. explanation in <cit.>).
The contribution of the remaining (chiral-even) GPDs to the squared
amplitude is given by
∑_ spins|𝒜_γ p→ M_1M_2p^(𝔞)|^2 =1/(2-x_B)^2[4(1-x_B)(ℋ_𝔞ℋ_𝔞^*+ℋ̃_𝔞ℋ̃_𝔞^*)
-x_B^2(ℋ_𝔞ℰ_𝔞^*+ℰ_𝔞ℋ_𝔞^*+ℋ̃_𝔞ℰ̃_𝔞^*+ℰ̃_𝔞ℋ̃_𝔞^*)
-(x_B^2+(2-x_B)^2t/4m_N^2)ℰ_𝔞ℰ_𝔞^*-x_B^2t/4m_N^2ℰ̃_𝔞ℰ̃_𝔞^*], 𝔞=L,T
where we introduced the shorthand notations for convolutions
ℋ_𝔞 =∫_-1^1dx c_𝔞(x, y_1, y_2)H_g(x,ξ,t), ℰ_𝔞=∫_-1^1dx c_𝔞(x, y_1, y_2)E_g(x,ξ,t),
ℋ̃_𝔞 =∫_-1^1dx c̃_𝔞(x, y_1, y_2)H̃_g(x,ξ,t), ℰ̃_𝔞=∫_-1^1dx c̃_𝔞(x, y_1, y_2)Ẽ_g(x,ξ,t),
x is the average light-cone momentum fraction of the proton carried
by the gluon before and after the interaction, and ξ is the standard
skewness variable (it can be related to the quarkonia rapidities y_1,y_2).
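For orientation, the following minimal Python sketch (illustrative only) evaluates the spin-summed squared amplitude from the combination quoted above, once the complex-valued convolutions ℋ_𝔞, ℰ_𝔞, ℋ̃_𝔞, ℰ̃_𝔞 have been computed by some external routine; the nucleon mass value is an assumption of the example.

```python
import numpy as np

def amplitude_squared(H, E, Ht, Et, xB, t, mN=0.938):
    """Spin-summed |A|^2 built from the GPD convolutions (calligraphic H, E,
    H-tilde, E-tilde), following the combination quoted above.  The convolutions
    are complex numbers that must be supplied, e.g. from a numerical integration
    of the coefficient functions against the GPDs."""
    pref = 1.0 / (2.0 - xB) ** 2
    term_HH = 4.0 * (1.0 - xB) * (abs(H) ** 2 + abs(Ht) ** 2)
    # H E* + E H* = 2 Re(H conj(E)), and analogously for the tilded pair
    term_HE = xB ** 2 * (2.0 * np.real(H * np.conj(E)) + 2.0 * np.real(Ht * np.conj(Et)))
    term_EE = (xB ** 2 + (2.0 - xB) ** 2 * t / (4.0 * mN ** 2)) * abs(E) ** 2
    term_EtEt = xB ** 2 * t / (4.0 * mN ** 2) * abs(Et) ** 2
    return pref * (term_HH - term_HE - term_EE - term_EtEt)
```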
The partonic amplitudes c_𝔞, c̃_𝔞
might be evaluated perturbatively (see details in <cit.>). For the case in which the quarkonia are well-separated from each other kinematically,
it is possible to express the amplitudes c_𝔞, c̃_𝔞
in terms of the nonperturbative long-distance matrix elements (LDMEs)
of Non-Relativistic QCD (NRQCD) <cit.>, multiplied by a rational function,
c_𝔞, c̃_𝔞∼∑_ℓ𝒫_ℓ(x)/∏_k=1^n_ℓ(x-x_k^(ℓ)+i0)
where 𝒫_ℓ(x) is a smooth polynomial in
the variable x, and the denominator of each term in the sum (<ref>)
may have up to n_ℓ=5 roots (nodes) x_k^(ℓ)
in the integration region. The positions of the poles x_k^(ℓ)
depend on all kinematic variables y_1, y_2, Q; for this reason, by varying the rapidities y_1,y_2 of the observed quarkonia it is possible to probe the gluon GPDs in the full kinematic range (x, ξ).
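Since the poles x_k^(ℓ) lie inside the integration region, the convolutions can be evaluated numerically with the Sokhotski–Plemelj prescription 1/(x-x_k+i0) = PV 1/(x-x_k) - iπδ(x-x_k). The sketch below (illustrative Python/SciPy code with a toy GPD and a single-pole coefficient function, not the actual c_T of this work) shows one way to do this.

```python
import numpy as np
from scipy.integrate import quad

def convolution_single_pole(gpd, numerator, x0, xi=0.3, t=-0.1):
    """Evaluate int_{-1}^{1} dx numerator(x) * gpd(x, xi, t) / (x - x0 + i0)
    via the Sokhotski-Plemelj decomposition:
        1/(x - x0 + i0) = PV 1/(x - x0) - i*pi*delta(x - x0).
    Toy illustration with a single pole; the actual coefficient functions
    contain up to five such poles and a polynomial numerator."""
    f = lambda x: numerator(x) * gpd(x, xi, t)
    # principal value: subtract the singular part and integrate it analytically
    reg = lambda x: (f(x) - f(x0)) / (x - x0)
    pv = quad(reg, -1.0, 1.0, points=[x0])[0] + f(x0) * np.log((1.0 - x0) / (1.0 + x0))
    imag = -np.pi * f(x0)
    return pv + 1j * imag

# toy inputs (purely illustrative, not a realistic gluon GPD)
toy_gpd = lambda x, xi, t: (1.0 - x ** 2) * np.exp(2.0 * t)
toy_num = lambda x: 1.0 + 0.5 * x
print(convolution_single_pole(toy_gpd, toy_num, x0=0.2))
```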
Due to space limitations, here we omit the full expressions for the
amplitudes c_𝔞, c̃_𝔞 (see <cit.> for details). However, in
Figure <ref>
we show a density plot which illustrates the behavior of the coefficient
function c_T(x, y_1, y_2) as a function of its
arguments and allows one to see the dependence of the poles on the variable ξ.
The typical values of the cross-sections in the EIC kinematics range
from a few dozen to a few hundred picobarns, depending on
the kinematics and the chosen parametrization of the gluon GPDs.
In the right panel of Figure <ref>, for the
sake of illustration, we show the cross-section for the lowest-energy
electron-proton beam as a function of the invariant momentum transfer
t, for several parametrizations of the gluon GPDs.
More detailed predictions for the cross-section at various
energies might be found in our recent article <cit.>.
To summarize, our findings demonstrate that the exclusive photoproduction
of J/ψ η_c mesons (as well as other heavy quarkonia pairs
with opposite C-parities) could potentially be used as a viable
channel for the analysis of the gluon GPDs of the target. The amplitude
of this process receives its dominant contribution from the unpolarized
gluon GPD H_g; however, in contrast to classical 2→2 processes,
it is sensitive to the behavior of the GPDs outside the x=±ξ
line, and thus could complement information extracted from DVCS and
single-quarkonium production. Numerically, the evaluated cross-sections are on par with similar
estimates for 2→3 processes suggested recently in the literature <cit.>.
§ ACKNOWLEDGEMENTS
We thank our colleagues at UTFSM university for encouraging discussions.
This research was partially supported by Proyecto ANID PIA/APOYO AFB220004
(Chile) and Fondecyt (Chile) grants 1220242 and 1230391. “Powered@NLHPC:
This research was partially supported by the supercomputing infrastructure
of the NLHPC (ECM-02)”.
Goeke:2001tz K. Goeke, M. V. Polyakov and M. Vanderhaeghen,
Prog. Part. Nucl. Phys. 47, 401 (2001).
Diehl:2003ny M. Diehl, Phys. Rept. 388, 41
(2003).
Guidal:2013rya M. Guidal, H. Moutarde and M. Vanderhaeghen,
Rept. Prog. Phys. 76 (2013), 066202.
Pire:2017yge B. Pire and L. Szymanowski, Phys. Rev. D
96 (2017) no.11, 114008.
Pire:2021dad B. Pire, L. Szymanowski and J. Wagner, Phys.
Rev. D 104 (2021) no.9, 094002.
GPD2x3:9 G. Duplančić, S. Nabeebaccus, K. Passek-Kumerički,
B. Pire, L. Szymanowski and S. Wallon, JHEP 03 (2023) 241; JHEP 11 (2018) 179.
GPD2x3:7 R. Boussarie, B. Pire, L. Szymanowski and S. Wallon,
JHEP 02 (2017) 054.
GPD2x3:6 W. Cosyn and B. Pire, Phys. Rev. D 103 (2021)
114002.
GPD2x3:5 A. Pedrak, B. Pire, L. Szymanowski and J. Wagner,
Phys. Rev. D 101 (2020) 114027.
GPD2x3:4 B. Pire, L. Szymanowski and S. Wallon, Phys. Rev.
D 101 (2020) 074005.
Boussarie:2016qop R. Boussarie, B. Pire, L. Szymanowski
and S. Wallon, JHEP 02 (2017), 054 [erratum: JHEP 10
(2018), 029].
GPD2x3:10 J.-W. Qiu and Z. Yu, JHEP 08 (2022)
103; Phys. Rev D 107 (2023) 1, 014007.
Brambilla:2010cs N. Brambilla et al.,
Eur. Phys. J. C71, 1534 (2011).
Accardi:2012qut A. Accardi et al., Eur. Phys. J. A
52, no. 9, 268 (2016).
AbdulKhalek:2021gbh R. Abdul Khalek et al., Nucl.
Phys. A 1026 (2022) 122447.
Andrade:2022rbn S. Andrade, M. Siddikov and I. Schmidt,
Phys. Rev. D 105 (2022) 7, 076022.
Siddikov:2022bku M. Siddikov and I. Schmidt, Phys. Rev.
D 107 (2023) no.3, 034037 [arXiv:2212.14019 [hep-ph]].
LehmannDronke:2000hlo B. Lehmann-Dronke, A. Schafer, M. V. Polyakov
and K. Goeke, Phys. Rev. D 63 (2001), 114001.
Clerbaux:2000hb B. Clerbaux and M. V. Polyakov, Nucl.
Phys. A 679 (2000), 185-195.
Goloskokov:2013mba S. V. Goloskokov and P. Kroll, Eur.
Phys. J. C 74 (2014), 2725; Eur. Phys. J. A
47, 112 (2011).
http://arxiv.org/abs/2307.04097v1 | 20230709045910 | Restricted Generative Projection for One-Class Classification and Anomaly Detection | [
"Feng Xiao",
"Ruoyu Sun",
"Jicong Fan"
] | cs.LG | [
"cs.LG"
] |
Restricted Generative Projection for One-Class Classification and Anomaly Detection
Feng Xiao, Ruoyu Sun, Jicong Fan Member, IEEE,
The authors are with the School of Data Science, The Chinese University of Hong Kong, Shenzhen, and Shenzhen Research Institute of Big Data. E-mail: [email protected].
May 17, 2023
We present a simple framework for one-class classification and anomaly detection. The core idea is to learn a mapping to transform the unknown distribution of training (normal) data to a known target distribution. Crucially, the target distribution should be sufficiently simple, compact, and informative. The simplicity is to ensure that we can sample from the distribution easily, the compactness is to ensure that the decision boundary between normal data and abnormal data is clear and reliable, and the informativeness is to ensure that the transformed data preserve the important information of the original data. Therefore, we propose to use truncated Gaussian, uniform in hypersphere, uniform on hypersphere, or uniform between hyperspheres, as the target distribution. We then minimize the distance between the transformed data distribution and the target distribution while keeping the reconstruction error for the original data small enough. Comparative studies on multiple benchmark datasets verify the effectiveness of our methods in comparison to baselines.
Anomaly Detection, One-class Classification, Generative Projection.
§ INTRODUCTION
Anomaly detection (AD) under the setting of one-class classification aims to distinguish normal data and abnormal data using a model trained on only normal data <cit.>. AD is useful in numerous real problems such as intrusion detection for video surveillance, fraud detection in finance, and fault detection for sensors. Many AD methods have been proposed in the past decades <cit.>. For instance, Schölkopf et al. <cit.> proposed the one-class support vector machine (OC-SVM) that finds, in a high-dimensional kernel feature space, a hyperplane yielding a large distance between the normal training data and the origin. Tax et al. <cit.> presented the support vector data description (SVDD), which obtains a spherically shaped boundary (with minimum volume) around the normal training data to identify abnormal samples. Hu et al. <cit.> proposed a new kernel function to estimate samples' local densities, together with a weighted neighborhood density estimation to increase the robustness to changes in the neighborhood size.
There are also many deep learning based AD methods including unsupervised AD methods <cit.> and semi-supervised AD methods <cit.>.
Deep learning based AD methods may be organized into three categories. The first category is based on compression and reconstruction. These methods usually use an autoencoder <cit.> to learn a low-dimensional representation to reconstruct the high-dimensional data <cit.>. The autoencoder learned from the normal training data is expected to have a much higher reconstruction error on unknown abnormal data than on normal data.
The second category is based on the combination of classical one-class classification <cit.> and deep learning <cit.>. For instance, Ruff et al.<cit.> proposed a method called deep one-class SVDD. The main idea is to use deep learning to construct a minimum-radius hypersphere to include all the training data, while the unknown abnormal data are expected to fall outside.
The last category is based on generative learning or adversarial learning
<cit.>.
For example, Perera et al. <cit.> proposed to use the generative adversarial network (GAN) <cit.> with a constrained latent representation to detect anomalies in image data. Goyal et al. <cit.> presented a method called deep robust one-class classification (DROCC), which aims to find a low-dimensional manifold to accommodate the normal data via an adversarial optimization approach.
Although deep learning based AD methods have shown promising performance on various datasets, they still have limitations. For instance, one-class classification methods such as Deep SVDD <cit.> only ensure that a hypersphere could include the normal data but cannot guarantee that the normal data are distributed evenly in the hypersphere, which may lead to large empty regions in the hypersphere and hence yield an incorrect decision boundary (see Fig.<ref>). Moreover, the popular hypersphere assumption may not be the best one for providing a compact decision boundary (see Fig.<ref> and Tab.<ref>). The adversarial learning methods such as <cit.> may suffer from instability in optimization.
In this work, we present a restricted generative projection (RGP) framework for one-class classification and anomaly detection. The main idea is to train a deep neural network to convert the distribution of normal training data to a target distribution that is simple, compact, and informative, which will provide a reliable decision boundary to identify abnormal data from normal data. There are many choices for the target distribution, such as truncated Gaussian and uniform on hypersphere. Our contributions are summarized as follows.
* We present a novel framework called RGP for one-class classification and anomaly detection. It aims to transform the data distribution to target distributions that are easily violated by unknown abnormal data.
* We provide four simple, compact, and informative target distributions, analyze their properties theoretically, and show how to sample from them efficiently.
* We propose two extensions for our original RGP method.
* We conduct extensive experiments (on eight benchmark datasets) to compare the performance of different target distributions and compare our method with state-of-the-art baselines. The results verify the effectiveness of our methods.
The rest of this paper is organized as follows. Section <ref> introduces the related work.
Section <ref> details our proposed methods.
Section <ref> presents two extensions of the proposed method.
Section <ref> shows the experiments.
Section <ref> draws conclusions for this paper.
§ RELATED WORK
Before elaborating on our method, in this section we briefly review deep one-class classification, autoencoder-based AD methods, and maximum mean discrepancy (MMD) <cit.>.
We also discuss the connection and difference between our method and these related works.
§.§ Deep One-Class Classification
The Deep SVDD proposed by <cit.> uses a neural network to learn a minimum-radius hypersphere to enclose the normal training data, i.e.,
minimize_𝒲1/n∑^n_i=1‖ϕ(𝐱_i; 𝒲) - 𝐜‖^2 + λ/2∑^L_l=1‖𝐖_l ‖^2_F
where 𝐜∈ℝ^d is a predefined centroid and 𝒲={𝐖_1,…,𝐖_L} denotes the parameters of the L-layer neural network ϕ, and λ is a regularization hyperparameter. In (<ref>), to avoid model collapse, bias terms should not be used and activation functions should be bounded <cit.>. There are also a few variants of Deep SVDD proposed for semi-supervised one-class classification and anomaly detection <cit.>.
Both our method and Deep SVDD as well as its variants aim to project the normal training data into some space such that a decision boundary between normal data and unknown abnormal data can be found easily. However, the sum-of-squares minimization in Deep SVDD and its variants only ensures that the projected data are sufficiently close to the centroid 𝐜 in the sense of Euclidean distance and does not guarantee that the data are sufficiently or evenly distributed in the hypersphere centered at 𝐜. Thus, in the hypersphere, there could be holes or big empty regions without containing any normal data, and hence it is not suitable to assume that the whole space enclosed by the hypersphere is completely a normal space. In other words, the optimal decision boundary between normal data and abnormal data is actually very different from the hypersphere. An intuitive example is shown in Fig.<ref>. We see that there is a large empty space in the hypersphere learned by Deep SVDD. In contrast, the transformed data of our method are sufficiently distributed.
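For concreteness, a minimal PyTorch sketch of the Deep SVDD objective quoted above is given below; the bias-free architecture with a bounded activation follows the requirement mentioned in <cit.>, while the layer sizes and hyperparameters are placeholders, not our exact setup.

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """Toy bias-free encoder with a bounded activation, as Deep SVDD requires."""
    def __init__(self, in_dim=784, out_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128, bias=False), nn.Tanh(),
            nn.Linear(128, out_dim, bias=False),
        )

    def forward(self, x):
        return self.net(x)

def deep_svdd_loss(phi, x, c, lam=1e-3):
    """(1/n) sum_i ||phi(x_i) - c||^2 + (lam/2) sum_l ||W_l||_F^2."""
    dist = ((phi(x) - c) ** 2).sum(dim=1).mean()      # mean squared distance to centroid c
    reg = sum((w ** 2).sum() for w in phi.parameters())  # Frobenius-norm weight decay
    return dist + 0.5 * lam * reg
```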
§.§ Autoencoder-based AD Methods
Our method is similar to but quite different from the variational autoencoder (VAE) <cit.>. Although our model is an autoencoder, the main goal is not to represent or generate data; instead, our model aims to convert distribution to find a reliable decision boundary for anomaly detection. More importantly, the latent distribution in VAE is often Gaussian and not bounded while the latent distribution in our model is more general and bounded, which is essential for anomaly detection. In addition, the optimizations of VAE and our method are also different: VAE involves KL-divergence while our method involves maximum mean discrepancy <cit.>.
It is worth noting that, similar to our method, Perera et al. <cit.> also considered a bounded latent distribution in an autoencoder for anomaly detection. They proposed to train a denoising autoencoder with a hypercube-supported latent space via adversarial training. The latent distribution and optimization are different from ours. In addition, the latent distributions of our method, such as uniform on the hypersphere, are more compact than the multi-dimensional uniform latent distribution of their method.
Compared with the autoencoder-based anomaly detection method NAE <cit.>, which uses the reconstruction error to normalize the autoencoder, our method pays more attention to learning a mapping that can transform the unknown data distribution into a simple and compact target distribution. The two ideas are orthogonal.
§.§ Maximum Mean Discrepancy
In statistics, maximum mean discrepancy (MMD) <cit.> is often used for the two-sample test, and its principle is to find a function that takes different expectations on two different distributions:
MMD[ℱ, p, q] = sup_‖ f ‖_ℋ≤ 1(𝔼_p[f(𝐱)]-𝔼_q[f(𝐲)]),
where p, q are probability distributions, ℱ is a class of functions f:𝕏→ℝ, and ℋ denotes a reproducing kernel Hilbert space.
Using the kernel trick, MMD can be represented as a simple loss function that measures the discrepancy between two distributions from finite samples, which is easy to apply to deep learning and can be efficiently trained by gradient descent. Based on these advantages of MMD, Li et al. <cit.> proposed generative moment matching networks (GMMNs), which lead to a simpler optimization objective compared to the min-max optimization of GAN <cit.>.
Although both our method and GMMNs <cit.> minimize the MMD between data distribution and prior distribution, our goal is not generating new data but detecting anomalies. In addition, we consider a few bounded target distributions and analyze their sampling properties. More importantly, our method has very competitive performance when compared with SOTA methods of anomaly detection and one-class classification.
§ RESTRICTED GENERATIVE PROJECTION
In this section, we introduce our RGP framework, bounded target distributions, and the computation of anomaly scores.
§.§ Restricted Distribution Projection
Suppose we have a set of m-dimensional training data 𝐗={𝐱_1,𝐱_2,…,𝐱_n }
drawn from an unknown bounded distribution 𝒟_𝐱 and any samples drawn from 𝒟_𝐱 are normal data. We want to train a model ℳ on 𝐗 to determine whether a test data 𝐱_new is drawn from 𝒟_𝐱 or not. One may consider estimating the density function (denoted by p_𝐱) of 𝒟_𝐱 using some techniques such as kernel density estimation <cit.>. Suppose the estimation p̂_𝐱 is good enough, then one can determine whether 𝐱_new is normal or not according to the value of p̂_𝐱(𝐱_new): if p̂_𝐱(𝐱_new) is zero or close to zero, 𝐱_new is an abnormal data point; otherwise, 𝐱_new is a normal data point [Here we assume that the distributions of normal data and abnormal data do not overlap. Otherwise, it is difficult to determine whether a single point is normal or not.]. However, the dimensionality of the data is often high and hence it is very difficult to obtain a good estimation p̂_𝐱.
We propose to learn a mapping 𝒯:ℝ^m→ℝ^d to transform the unknown bounded distribution 𝒟_𝐱 to a known distribution 𝒟_𝐳 while there still exists a mapping 𝒯':ℝ^d→ℝ^m that can recover 𝒟_𝐱 from 𝒟_𝐳 approximately.
Let p_𝐳 be the density function of 𝒟_𝐳. Then we can determine whether 𝐱_new is normal or not according to the value of p_𝐳(𝒯(𝐱_new)). To be more precise, we want to solve the following problem
minimize_𝒯, 𝒯' ℳ(𝒯(𝒟_𝐱), 𝒟_𝐳)+λℳ(𝒯'(𝒯(𝒟_𝐱)),𝒟_𝐱),
where ℳ(·, ·) denotes some distance metric between two distributions and λ is a trade-off parameter for the two terms. Note that if λ=0, 𝒯 may convert any distribution to 𝒟_𝐳 and lose the ability of distinguishing normal data and abnormal data.
Based on the universal approximation theorems <cit.> and substantial success of neural networks, we use deep neural networks (DNN) to model 𝒯 and 𝒯' respectively. Let f_θ and g_ϕ be two DNNs with parameters θ and ϕ respectively. We solve
minimize_θ, ϕ ℳ(𝒟_f_θ(𝐱), 𝒟_𝐳)+λℳ(𝒟_g_ϕ(f_θ(𝐱)), 𝒟_𝐱),
where f_θ and g_ϕ serve as encoder and decoder respectively.
However, problem (<ref>) is intractable because 𝒟_𝐱 is unknown and 𝒟_f_θ(𝐱), 𝒟_g_ϕ(f_θ(𝐱)) cannot be computed analytically. Note that the samples of 𝒟_𝐱 and 𝒟_g_ϕ(f_θ(𝐱)) are given and paired. Then the second term in the objective of (<ref>) can be replaced by a sample reconstruction error such as 1/n∑_i=1^n‖𝐱_i-g_ϕ(f_θ(𝐱_i))‖^2. On the other hand, we can also sample from 𝒟_f_θ(𝐱) and 𝒟_𝐳 easily but their samples are not paired. Hence, the metric ℳ in the first term of the objective of (<ref>) should be able to measure the distance between two distributions using their finite samples. To this end, we propose to use the kernel maximum mean discrepancy (MMD) <cit.> to measure the distance between 𝒟_f_θ(𝐱) and 𝒟_𝐳.
Its empirical estimate is
MMD^2[ℱ, X, Y] = 1/(m(m-1))∑_i=1^m∑_j≠ i^m k(𝐱_i, 𝐱_j)
+ 1/(n(n-1))∑_i=1^n∑_j≠ i^n k(𝐲_i, 𝐲_j)
- 2/(mn)∑_i=1^m∑_j=1^n k(𝐱_i, 𝐲_j),
where X = {𝐱_1, …, 𝐱_m} and Y = {𝐲_1, …, 𝐲_n} are samples consisting of i.i.d observations drawn from p and q, respectively. k(·, ·) denotes a kernel function, e.g., k(𝐱, 𝐲)=exp(-γ𝐱-𝐲^2), a Gaussian kernel.
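A minimal PyTorch sketch of this unbiased estimator with the Gaussian kernel is given below (illustrative code; the bandwidth γ is a placeholder).

```python
import torch

def mmd2_unbiased(X, Y, gamma=1.0):
    """Unbiased empirical MMD^2 between samples X (m x d) and Y (n x d)
    with the Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    def gram(A, B):
        return torch.exp(-gamma * torch.cdist(A, B) ** 2)

    m, n = X.shape[0], Y.shape[0]
    Kxx, Kyy, Kxy = gram(X, X), gram(Y, Y), gram(X, Y)
    # drop the diagonal terms (j != i) for the unbiased estimate
    term_xx = (Kxx.sum() - Kxx.diag().sum()) / (m * (m - 1))
    term_yy = (Kyy.sum() - Kyy.diag().sum()) / (n * (n - 1))
    term_xy = 2.0 * Kxy.mean()
    return term_xx + term_yy - term_xy
```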
Based on the above analysis, we obtain an approximation for (<ref>) as
minimize_θ, ϕ MMD^2(𝐙_θ,𝐙_T)+ λ/n∑_i=1^n‖𝐱_i-g_ϕ(f_θ(𝐱_i))‖^2,
where 𝐙_θ={f_θ(𝐱_1),f_θ(𝐱_2),…,f_θ(𝐱_n) } and 𝐙_T={𝐳_i:𝐳_i∼𝒟_𝐳, i=1,…,n}.
The first term of the objective function in (<ref>) makes f_θ learn the mapping 𝒯 from data distribution 𝒟_𝐱 to target distribution 𝒟_𝐳 and the second term ensures that f_θ can preserve the main information of observations provided that λ is sufficiently large.
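For illustration, one mini-batch update of (<ref>) could look like the following sketch, which reuses the mmd2_unbiased routine above; the encoder f_θ, decoder g_ϕ, target sampler, and hyperparameters are placeholders rather than our exact implementation.

```python
import torch

def rgp_step(f_theta, g_phi, x_batch, sample_target, optimizer, lam=1.0, gamma=1.0):
    """One mini-batch update of MMD^2(f(X), Z_T) + lam * mean ||x - g(f(x))||^2."""
    optimizer.zero_grad()
    z = f_theta(x_batch)                    # projected batch Z_theta
    z_target = sample_target(z.shape[0])    # fresh draw from the target distribution D_z
    recon = ((x_batch - g_phi(z)) ** 2).sum(dim=1).mean()
    loss = mmd2_unbiased(z, z_target, gamma) + lam * recon
    loss.backward()
    optimizer.step()
    return loss.item()
```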
§.§ Bounded Target Distributions
Now we introduce four examples of simple and compact 𝒟_𝐳 for (<ref>). The four distributions are Gaussian in Hypersphere (GiHS), Uniform in Hypersphere (UiHS), Uniform between Hyperspheres (UbHS), and
Uniform on Hypersphere (UoHS). Their 2-dimensional examples are visualized in Fig.<ref>.
GiHS (Fig.<ref>.a) is actually a truncated Gaussian. Suppose we want to draw n samples from GiHS. A simple approach is drawing (1+ρ)n samples from a standard d-dimensional Gaussian and discarding the ρ n samples with the largest ℓ_2 norms. The maximum ℓ_2 norm of the remaining n points is the radius of the hypersphere. One may also use the inverse transform method of <cit.>. We have the following results.
Suppose 𝐳_1,𝐳_2,…,𝐳_n are sampled from 𝒩(0,𝐈_d) independently. Then for any r>√(d), we have
Pr(‖𝐳_j‖≥ r) ≤exp(-0.5α), j∈[n],
and
Pr(max_1≤ j≤ n‖𝐳_j‖≤ r)≥ 1-nexp(-0.5α),
where α=√(d+2r^2)-√(d).
Inequality (<ref>) means a hypersphere of radius r can include all the n samples with a high probability if r is sufficiently large. On the other hand, according to (<ref>), if we expect to get n samples in a hypersphere of radius r, we need to sample about n/(1-exp(-0.5α)) points from 𝒩(0,𝐈_d). If d is larger, we need to sample more points.
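A sketch of this rejection scheme for GiHS is given below (illustrative NumPy code).

```python
import numpy as np

def sample_gihs(n, d, rho=0.1, rng=None):
    """Draw n points of the truncated (in-hypersphere) Gaussian by oversampling
    N(0, I_d) by a factor (1 + rho) and discarding the rho*n largest-norm points.
    Returns the samples and the resulting hypersphere radius r."""
    rng = np.random.default_rng() if rng is None else rng
    m = int(np.ceil((1.0 + rho) * n))
    z = rng.standard_normal((m, d))
    norms = np.linalg.norm(z, axis=1)
    keep = np.argsort(norms)[:n]            # keep the n smallest norms
    return z[keep], norms[keep].max()
```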
UiHS (Fig.<ref>.b) is a hyperball in which all the samples are distributed uniformly. To sample from UiHS, we first need to sample from 𝒰(-r,r)^d. Then we discard all the data points outsides the radius-r hyperball centered at the origin.
The following proposition (the proof is in Appendix) shows some probability result of sampling from a d-dimensional uniform distribution.
Suppose 𝐳_1,𝐳_2,…,𝐳_n are sampled from 𝒰(-r,r)^d independently. Then for any t>0, we have
Pr(‖𝐳_j‖≥ rt) ≤ d/(3t^2), j∈[n],
and
Pr(max_1≤ j≤ n‖𝐳_j‖≤ rt)≥ 1-nd/(3t^2).
Inequality (<ref>) means a hypersphere of radius rt can include all the n samples with probability at least 1-nd/(3t^2). On the other hand, inequality (<ref>) indicates that if we draw n/(1-d/(3t^2)) samples from 𝒰(-r,r)^d, the expected number of samples falling into a hypersphere of radius rt is at least n.
Actually, sampling from UiHS is closely related to the curse of dimensionality, and we need to sample a large number of points from 𝒰(-r,r)^d if d is large because only a small fraction of the volume of the hypercube is inside the hyperball. To be more precise, letting V_hypercube be the volume of a hypercube with side length 2r and V_hyperball be the volume of a hyperball with radius r, we have
V_hyperball/V_hypercube=π^d/2/(d· 2^d-1Γ(d/2))≜η,
where Γ is the gamma function. Therefore, we need to draw n/η samples from 𝒰(-r,r)^d to ensure that the expected number of samples included in the hyperball is n, where η is small if d is large.
UbHS (Fig.<ref>.c) can be obtained via UiHS. We first sample from UiHS and then remove all samples included by a smaller hypersphere. Since the volume ratio of a hyperball with radius r' to one with radius r is (r'/r)^d, where r'<r, we need to draw n/(1-(r'/r)^d) samples from UiHS to ensure that the expected number of samples between the two hyperspheres is n. Compared with GiHS and UiHS, UbHS is more compact and hence provides a larger abnormal space for abnormal data to fall in.
UoHS (Fig.<ref>.d) can be easily obtained via sampling from 𝒩(0,𝐈_d). Specifically, for every 𝐳_i drawn from 𝒩(0,𝐈_d), we normalize it as 𝐳_i←r𝐳_i/‖𝐳_i‖, where r is the predefined radius of the hypersphere. UoHS is a special case of UbHS when r'=r.
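Illustrative NumPy sketches of the remaining three samplers, following the recipes above, are given below; for UiHS and UbHS the rejection loops are practical only for moderate d because of the acceptance rates discussed above.

```python
import numpy as np

def sample_uihs(n, d, r=1.0, rng=None):
    """Uniform in the radius-r hyperball: rejection sampling from U(-r, r)^d.
    For large d the acceptance rate eta is tiny (curse of dimensionality)."""
    rng = np.random.default_rng() if rng is None else rng
    out = []
    while sum(len(b) for b in out) < n:
        z = rng.uniform(-r, r, size=(4 * n, d))
        out.append(z[np.linalg.norm(z, axis=1) <= r])
    return np.concatenate(out)[:n]

def sample_ubhs(n, d, r=1.0, r_inner=0.8, rng=None):
    """Uniform between hyperspheres: sample UiHS and drop the inner ball."""
    rng = np.random.default_rng() if rng is None else rng
    out = []
    while sum(len(b) for b in out) < n:
        z = sample_uihs(2 * n, d, r, rng)
        out.append(z[np.linalg.norm(z, axis=1) >= r_inner])
    return np.concatenate(out)[:n]

def sample_uohs(n, d, r=1.0, rng=None):
    """Uniform on the hypersphere: normalize Gaussian draws to norm r."""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal((n, d))
    return r * z / np.linalg.norm(z, axis=1, keepdims=True)
```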
To quantify the compactness of the four target distributions, we define density ρ as the number of data points in unit volume, i.e., ρ=n/V. Consequently, the densities of the four target distributions are reported in Table <ref>.
Since UoHS is more compact than UbHS as well as GiHS and UiHS, it should have better performance in anomaly detection. Indeed, our numerical results show that UoHS outperforms the others in most cases.
§.§ Anomaly Scores
In the test stage, we only use the trained f_θ^* to calculate anomaly scores. For a given test sample
𝐱_new, we define anomaly score s for each target distribution by
s(𝐱_new)=
|‖ f_θ^*(𝐱_new) ‖ - r |, for UoHS;
‖ f_θ^*(𝐱_new) ‖, for GiHS or UiHS;
(‖ f_θ^*(𝐱_new) ‖ - r)· (‖ f_θ^*(𝐱_new) ‖ - r'), for UbHS,
There are clear decision boundaries according to (<ref>) and they can be regarded as `hard boundaries' between normal samples and abnormal samples. However, these `hard boundaries' only work in ideal cases where the projected data exactly match the target distributions. In real cases, due to the noise of data or the non-optimality of optimization, the projected data do not exactly match the target distributions. Therefore, we further propose a `soft boundary' for calculating anomaly scores. Specifically, for a given test sample 𝐱_new, we define anomaly score s for all four target distributions as
s(𝐱_new)= 1/k∑_i ∈ N_k‖ f_θ^*(𝐱_new) - f_θ^*(𝐱_i) ‖
where 𝐱_i denotes a single sample with index i in the training data and N_k denotes the index set of the k nearest training (projected) samples to f_θ^*(𝐱_new).
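A sketch of this soft-boundary score is given below (illustrative NumPy code; it forms the full test–train distance matrix, which is fine for moderately sized datasets).

```python
import numpy as np

def soft_boundary_scores(z_test, z_train, k=3):
    """Soft-boundary anomaly score: for each projected test point, the mean
    Euclidean distance to its k nearest projected training samples."""
    d = np.linalg.norm(z_test[:, None, :] - z_train[None, :, :], axis=-1)
    knn = np.sort(d, axis=1)[:, :k]     # k smallest distances per test point
    return knn.mean(axis=1)
```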
Empirically, we found in the experiments that (<ref>) has better performance than (<ref>) in most cases. Tables <ref>, <ref>, and <ref> report only the results from (<ref>). The comparison results between (<ref>) and (<ref>) are provided in Section <ref>.
We call our method Restricted Generative Projection (RGP), which has four variants, denoted by RGP-GiHS, RGP-UiHS, RGP-UbHS, and RGP-UoHS respectively, though any bounded target distribution applies.
§ EXTENSIONS OF RGP
In this section, based on the general objective in (<ref>), we provide two variants of RGP.
§.§ Double-MMD based RGP
In the objective function of RGP defined by (<ref>), the second term is the reconstruction error for 𝐗, which is only a special example of approximation for the second term in the objective function of (<ref>), i.e., ℳ(𝒟_g_ϕ(f_θ(𝐱)), 𝒟_𝐱). Alternatively, we can use MMD to approximate ℳ(𝒟_g_ϕ(f_θ(𝐱)), 𝒟_𝐱), which yields the following Double-MMD RGP:
minimize_θ, ϕ MMD^2(𝐙_θ,𝐙_T)+ λMMD^2(g_ϕ(𝐙_θ),𝐗).
Compared to the sum of squares reconstruction error used in (<ref>), MMD^2(g_ϕ(𝐙_θ),𝐗) is a weaker approximation for ℳ(𝒟_g_ϕ(f_θ(𝐱)), 𝒟_𝐱),
because it does not exploit the fact that the samples in 𝐙_θ and 𝐗 are paired. Thus, the projection of Double-MMD RGP cannot preserve sufficient information of 𝐗,
which will reduce the detection accuracy. Indeed, as shown by the experimental results in Section
<ref>, our original RGP outperforms Double-MMD RGP.
§.§ Sinkhorn Distance based RGP
Besides MMD, the optimal transport theory can also be used to construct a notion of distance between pairs of probability distributions. In particular, the Wasserstein distance <cit.>, also known as “Earth Mover’s Distance”, has appealing theoretical properties and a very intuitive formulation
𝒲 = ⟨γ^*, 𝐂⟩_F
where 𝐂 denotes a metric cost matrix and γ^* is the optimal transport plan.
Finding the optimal transport plan γ^* might appear to be a hard problem. In particular, the computational cost of the Wasserstein distance can quickly become prohibitive when the data dimension increases. In order to speed up the calculation of the Wasserstein distance, Cuturi <cit.> proposed the Sinkhorn distance, which regularizes the optimal transport problem with an entropic penalty and uses Sinkhorn's algorithm <cit.> to approximately calculate the Wasserstein distance.
Now, if replacing the first term in (<ref>) with the Sinkhorn distance<cit.>, we can get a new optimization objective
minimize_θ,ϕ ⟨γ, ℳ(𝐙_θ ,𝐙_T) ⟩_F + ϵ∑_i,jγ_ijlog(γ_ij)
+ λ/n∑_i=1^n 𝐱_i-g_ϕ(f_θ(𝐱_i))^2
subject to γ1 = 𝐚, γ^T 1 = 𝐛, γ≥ 0
where ℳ(𝐙_θ ,𝐙_T) denotes the metric cost matrix between 𝐙_θ and 𝐙_T, ϵ is the coefficient of entropic regularization term, 𝐚 and 𝐛 are two probability vectors and satisfy 𝐚^T1=1 and 𝐛^T1=1 respectively. We call this method Sinkhorn RGP.
Compared to MMD, Sinkhorn distance is more effective in quantifying the difference between two distributions using their finite samples. Therefore, the Sinkhorn RGP usually has better performance than our original RGP (<ref>), which will be shown by the experimental results in Section <ref>.
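For illustration, a plain Sinkhorn iteration for the entropy-regularized transport cost between a projected batch and a target batch could look like the sketch below (uniform marginals, squared Euclidean cost; a log-domain implementation or an off-the-shelf OT library would be preferable for very small ϵ).

```python
import torch

def sinkhorn_distance(Z, Zt, eps=0.01, n_iter=200):
    """Entropy-regularized OT cost <gamma, C>_F between two equal-weight
    empirical measures, computed with plain Sinkhorn iterations.
    C is the squared Euclidean cost matrix; a and b are uniform."""
    m, n = Z.shape[0], Zt.shape[0]
    C = torch.cdist(Z, Zt) ** 2
    K = torch.exp(-C / eps)                         # Gibbs kernel
    a = torch.full((m,), 1.0 / m, device=Z.device)
    b = torch.full((n,), 1.0 / n, device=Z.device)
    u = torch.ones_like(a)
    for _ in range(n_iter):                         # alternating marginal scaling
        v = b / (K.t() @ u + 1e-16)
        u = a / (K @ v + 1e-16)
    gamma = u[:, None] * K * v[None, :]             # approximate transport plan
    return (gamma * C).sum()
```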
§ EXPERIMENTS
§.§ Datasets and Baselines
We compare the proposed method with several state-of-the-art methods of anomaly detection on five tabular datasets and three widely-used image datasets for one-class classification. The datasets are detailed as follows.
* Abalone[http://archive.ics.uci.edu/ml/datasets/Abalone]<cit.> is a dataset of physical measurements of abalone to predict the age. It contains 1,920 instances with 8 attributes.
* Arrhythmia[http://odds.cs.stonybrook.edu/arrhythmia-dataset/]<cit.> is an ECG dataset. It was used to identify arrhythmic samples in five classes and contains 452 instances with 279 attributes.
* Thyroid[http://odds.cs.stonybrook.edu/thyroid-disease-dataset/]<cit.> is a hypothyroid disease dataset that contains 3,772 instances with 6 attributes.
* KDD[https://kdd.ics.uci.edu/databases/kddcup99/]<cit.> is the KDDCUP99 10 percent dataset from the UCI repository and contains 34 continuous attributes and 7 categorical attributes. The attack samples are regarded as normal data, and the non-attack samples are regarded as abnormal data.
* KDDRev is derived from the KDDCUP99 10 percent dataset. The non-attack samples are regarded as normal data, and the attack samples are regarded as abnormal data.
* MNIST[http://yann.lecun.com/exdb/mnist/]<cit.> is a well-known dataset of handwritten digits and totally contains 70,000 grey-scale images in 10 classes from number 0-9.
* Fashion-MNIST[https://www.kaggle.com/datasets/zalando-research/fashionmnist]<cit.> contains 70,000 grey-scale fashion images (e.g. T-shirt and bag) in 10 classes.
* CIFAR-10[https://www.cs.toronto.edu/ kriz/cifar.html]<cit.> is a widely-used benchmark for image anomaly detection. It contains 60,000 color images in 10 classes.
We compare our method with three classic shallow models, four deep autoencoder based methods, three deep generative model based methods, and some latest anomaly detection methods.
* Classic shallow models: local outlier factor (LOF)<cit.>, one-class support vector machine (OC-SVM)<cit.>, isolation forest (IF)<cit.>.
* Deep autoencoder based methods: denoising auto-encoder (DAE)<cit.>, DCAE<cit.>, E2E-AE, DAGMM<cit.>, DCN <cit.>.
* Deep generative model based methods: AnoGAN<cit.>, ADGAN<cit.>, OCGAN <cit.>.
* Some latest AD methods: DeepSVDD<cit.>, GOAD <cit.>, DROCC <cit.>, HRN <cit.>, SCADN <cit.>, NeuTraL AD <cit.>, GOCC <cit.>, PLAD <cit.>, MOCCA <cit.>.
§.§ Implementation Details and Evaluation Metrics
In this section, we introduce the implementation details of the proposed method RGP and describe experimental settings for image and tabular datasets. Note that our method neither uses any abnormal data during the training process nor utilizes any pre-trained feature extractors.
For the five tabular datasets (Abalone, Arrhythmia, Thyroid, KDD, KDDRev), in our method, f_θ and g_ϕ are both MLPs. We follow the dataset preparation of <cit.> to preprocess the tabular datasets for one-class classification task. The hyper-parameter λ is set to 1.0 for the Abalone, Arrhythmia and Thyroid. For the KDD and KDDRev, λ is set to 0.0001.
For the three image datasets (MNIST, Fashion-MNIST, CIFAR-10), in our method, f_θ and g_ϕ are both CNNs. Since the three image datasets contain 10 different classes, we conduct 10 independent one-class classification tasks on both datasets: one class is regarded as normal data and the remaining nine classes are regarded as abnormal data. In each task on MNIST, there are about 6,000 training samples and 10000 testing samples. In each task on CIFAR-10, there are 5,000 training samples and 10,000 testing samples. In each task on Fashion-MNIST, there are 6,000 training samples and 10,000 testing samples. The hyper-parameter λ is chosen from {1.0, 0.5, 0.1, 0.01, 0.001, 0.0001} and varies for different classes.
In our method, regarding the radius r of GiHS and UiHS, we first generate a large number (denoted by N) of samples from Gaussian or uniform, sort the samples according to their ℓ_2 norms, and set r to be the pN-th smallest ℓ_2 norm, where p=0.9. For UbHS, we need to use the aforementioned method to determine an r with p=0.95 and a r' with p=0.05. We see that {r, r'} are not related to the actual data, they are determined purely by the target distribution.
In each iteration (mini-batch) of the optimization for all four target distributions, we resample 𝐙_T according to r. For UoHS, we draw samples from Gaussian and normalize them to have unit ℓ_2 norm, then they lie on a unit hypersphere uniformly. The procedure is repeated in each iteration (mini-batch) of the optimization.
For hyper-parameter k on the testing stage, we select k=3 for Thyroid, Arrhythmia, KDD, KDDRev, and select k=5 for Abalone dataset. For three image datasets, the hyper-parameter k is chosen from {1, 3, 5, 10} and varies for different classes.
We use Adam <cit.> as the optimizer in our method. For MNIST, Fashion-MNIST, CIFAR-10, Arrhythmia and KDD, the learning rate is set to 0.0001. For Abalone, Thyroid and KDDRev, the learning rate is set to 0.001. Table <ref> shows the detailed implementation settings of RGP on all datasets. All experiments were run on AMD EPYC CPU with 64 cores and with NVIDIA Tesla A100 GPU, CUDA 11.6.
To evaluate the performance of all methods, we follow the previous works such as <cit.> and <cit.> to use AUC (Area Under the ROC curve) for image datasets and F1-score for tabular datasets.
Note that when conducting experiments on the tabular datasets, we found that most of the strong baselines, like DROCC <cit.>, NeuTral AD <cit.>, GOCC <cit.>, used the F1-score and we just followed this convention.
In our method, we get the threshold via simply calculating the dispersion of training data in latent space. Specifically, we first calculated the scores s(𝐗) on training data 𝐗 using (12) or (13), and then sorted s(𝐗) in ascending order and set the threshold to be the pN-th smallest score, where p is a probability varying for different datasets.
§.§ Results on Image Datasets
Tables <ref> and <ref> show the comparison results on Fashion-MNIST and CIFAR-10, respectively. We have the following observations.
* Firstly, in contrast to classic shallow methods such as OC-SVM <cit.> and IF <cit.>, our RGP has significantly higher AUC scores on all classes of Fashion-MNIST and most classes of CIFAR-10. An interesting phenomenon is that most deep learning based methods have inferior performance compared to IF <cit.> on class `Sandal' of Fashion-MNIST and IF <cit.> outperforms all deep learning based methods including ours on class `Deer' of CIFAR-10.
* Our methods outperform the deep autoencoder based methods and generative model based methods in most cases and have competitive performance compared to the state of the art in all cases.
* RGP has superior performance on most classes of Fashion-MNIST and CIFAR-10 under the setting of UoHS (uniform distribution on hypersphere).
Table <ref> shows the average performance on MNIST, Fashion-MNIST, and CIFAR-10 over all 10 classes to provide an overall comparison. We see that RGP achieves the best average AUC on Fashion-MNIST and CIFAR-10 among all competitive methods. The four variants of RGP have relatively close average performance on all three image datasets. The experimental results for each single class on MNIST are reported in the Appendix.
§.§ Results on Tabular Datasets
In Table <ref>, we report the F1-scores of our methods in comparison to ten baselines on the five tabular datasets. Our four variants of RGP significantly outperform all baseline methods on Arrhythmia, Thyroid, and Abalone. In particular, RGP-GiHS has 23.25%, 12.22%, and 19.58% improvements in F1-score over the runner-up on the three datasets, respectively. It is worth mentioning that NeuTraL AD <cit.> and GOCC <cit.> are both specially designed for non-image data but are outperformed by our methods in most cases.
Compared with image datasets, the performance improvements of RGPs on the three tabular datasets are more significant. One possible reason is that, compared to image data, it is easier to convert tabular data to a compact target distribution. Furthermore, we also report the AUC scores on Abalone, Thyroid and Arrhythmia datasets and the results are provided in Appendix.
In addition to the quantitative results, we choose Thyroid (with 6 attributes) as an example and transform the data distribution to 2-dimensional target distributions, which are visualized in Figure <ref>. Plots (a), (b), (c), (d) in Figure <ref> refer to GiHS, UiHS, UbHS, UoHS, respectively. The blue points, orange points, green points, and red points denote samples from target distribution, samples from training data, normal samples from test set, and abnormal samples from test set, respectively. For much clearer illustration, the left figure in each plot of Figure <ref> shows all four kinds of instances and the right figure shows two kinds of instances including normal and abnormal samples from test set.
We see that RGPs are effective to transform the data distribution to the restricted target distributions, though the transformed data do not exactly match the target distributions (it also demonstrates the necessity of using the `soft boundary' defined by (<ref>)).
§.§ Comparison between `soft' and `hard' boundary
We further explore the performance of the two different anomaly scores. Specifically, we compare the `hard boundaries' (<ref>) and the `soft boundary' (<ref>) as anomaly scores during the test stage on the image and tabular datasets. The results are shown in Figures <ref>, <ref>, <ref>. It can be observed that using the `soft boundary' (<ref>) to calculate the anomaly score yields better performance than using the `hard boundaries' (<ref>) on most classes of the image and tabular datasets. Nevertheless, using `hard boundaries' to calculate anomaly scores still achieves remarkable performance on some classes. For example, on the class `Ankle-boot' of Fashion-MNIST and the class `Truck' of CIFAR-10, the best two results are both from RGPs using `hard boundaries' (<ref>) to calculate the anomaly score.
§.§ Experiments of Double-MMD RGP and Sinkhorn RGP
We use Double-MMD RGP (<ref>) to conduct experiments and the results are reported in Table <ref>, <ref>. On image datasets, we just consider the target distribution UoHS (Uniform on HyperSphere) for simplicity.
On tabular datasets, we conduct experiments on the proposed four different target distributions.
From the experimental results in Tables <ref>, <ref>, we found that Double-MMD RGP and the original RGP have similar performance on the three tabular datasets, whereas on the image datasets, including Fashion-MNIST and CIFAR-10, there is an apparent performance gap, despite adjusting λ∈{10.0, 5.0, 1.0, 0.5, 0.1, 0.01} over a large range for Double-MMD RGP (<ref>). Note that Table <ref> reports the average AUC(%) over all classes of Fashion-MNIST and CIFAR-10; the results for single classes are provided in the Appendix.
For this phenomenon, we consider that the tabular datasets in our implementation have fewer features (no more than 279) than the image datasets, and the second term of (<ref>) is a much weaker constraint for preserving data information than that of (<ref>). As a consequence, Double-MMD RGP (<ref>) is able to preserve enough of the key information on the tabular data but loses more important information on the image data than the original RGP (<ref>). Meanwhile, we know that the generalization error of MMD for high-dimensional samples or distributions is often larger than that for low-dimensional ones. To ensure that MMD is able to accurately measure the distance between two high-dimensional distributions, the sample sizes should be sufficiently large.
We use Sinkhorn RGP (<ref>) to conduct experiments on the Abalone, Arrhythmia, and Thyroid datasets and the results are reported in Table <ref>. In all implementations, ϵ is set to 0.01 and 𝐚, 𝐛 are uniform. In keeping with our expectation, the performance of Sinkhorn RGP (<ref>) is similar to or better than that of the original RGP (<ref>) for all four target distributions, whereas the time cost of Sinkhorn RGP (<ref>) is much higher. We do not experiment with Sinkhorn RGP on the image datasets since the time cost is too high.
§.§ Ablation Study
§.§.§ The Gaussian Kernel Function for MMD
We use the Gaussian kernel exp(-γ‖𝐱 - 𝐲‖^2) for MMD in the optimization objective and set γ = 1/d^2 in all experiments, where d=1/(n(n-1))∑^n_i=1∑^n_j=1‖𝐱_i - 𝐱_j ‖ denotes the mean Euclidean distance among all training samples.
To show the influence of γ, we fix γ to values from {0.1, 1, 10, 100} and run experiments on Fashion-MNIST.
As shown in Table <ref>, there are differences in every single case but the gaps in the average results are not significant. This demonstrates that our methods are not sensitive to γ.
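A sketch of this bandwidth rule is given below (illustrative NumPy code; the optional subsampling is our shortcut for keeping the O(n²) pairwise-distance computation tractable and is not part of the rule itself).

```python
import numpy as np

def gamma_from_mean_distance(X, max_points=2000, rng=None):
    """gamma = 1 / dbar^2, with dbar the mean pairwise Euclidean distance
    over the training samples (computed on a random subsample if X is large)."""
    rng = np.random.default_rng() if rng is None else rng
    if X.shape[0] > max_points:
        X = X[rng.choice(X.shape[0], max_points, replace=False)]
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    dbar = d[~np.eye(len(X), dtype=bool)].mean()    # mean over distinct pairs
    return 1.0 / dbar ** 2
```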
§.§.§ The Coefficient λ of Reconstruction Term in Optimization Objective
The coefficient λ is a key hyperparameter in problem (<ref>). Now we explore the influence of λ for model performance.
Figures <ref>, <ref> show the F1-scores of our methods with λ varying from 0 to 1000 on the tabular datasets. It can be observed that a too small or too large λ can lower the performance of RGP. When λ is very small, the reconstruction term of (<ref>) has less impact on the training objective, and f_θ can easily transform the training data to the target distribution but ignores the original data distribution (see Figure <ref>). On the other hand, when λ is very large, the MMD term of the optimization objective becomes trivial for the whole training objective, and f_θ, under the constraint of the reconstruction term, concentrates more on the original data distribution and cannot learn a good mapping from the data distribution to the target distribution. Figure <ref> illustrates the influence of the hyper-parameter λ on the training set of the Thyroid dataset. We see that f_θ transforms the training data to the target distribution better as λ decreases. The blue points and orange points in Figure <ref> denote samples from the target distribution and samples from the training data, respectively.
§ CONCLUSION
We have presented a novel and simple framework for one-class classification and anomaly detection. Our method aims to convert the data distribution to a simple, compact, and informative target distribution that can be easily violated by abnormal data. We presented four target distributions, and the numerical results showed that the four target distributions have relatively close performance, with uniform on the hypersphere being more effective than the other distributions in most cases. Furthermore, we also explored two extensions of the original RGP and analyzed the performance differences among them. Importantly, our methods have performance competitive with state-of-the-art AD methods on all benchmark datasets considered in this paper, and the improvements are remarkable on the tabular datasets.
http://arxiv.org/abs/2307.04086v1 | 20230709024221 | Age of FGK Dwarfs Observed with LAMOST and GALAH: Considering the Oxygen Enhancement | [
"Tiancheng Sun",
"Zhishuai Ge",
"Xunzhou Chen",
"Shaolan Bi",
"Tanda Li",
"Xianfei Zhang",
"Yaguang Li",
"Yaqian Wu",
"Sarah A. Bird",
"Ferguson J. W.",
"Jianzhao Zhou",
"Lifei Ye",
"Liu Long",
"Jinghua Zhang"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.GA"
] |
Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China
Department of Astronomy, Beijing Normal University, Beijing 100875, People's Republic of China
Beijing Planetarium, Beijing Academy of Science and Technology, Beijing 100044, China
Research Center for Intelligent Computing Platforms, Zhejiang Laboratory, Hangzhou 311100, China
School of Physics and Astronomy, University of Birmingham, Birmingham, B15 2TT, United Kingdom
Sydney Institute for Astronomy (SIfA), School of Physics, University of Sydney, NSW 2006, Australia
Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, A20 Datun Rd., Chaoyang District, Beijing 100101, People's Republic of China
Center for Astronomy and Space Sciences, China Three Gorges University, Yichang 443002, People's Republic of China
Department of Physics, Wichita State University, Wichita, KS 67260-0032, USA
Corresponding authors: [email protected]
Varying oxygen abundance could impact the modeling-inferred ages. This work aims to estimate the ages of dwarfs considering observed oxygen abundance. To characterize 67,503 LAMOST and 4,006 GALAH FGK-type dwarf stars, we construct a grid of stellar models which take into account oxygen abundance as an independent model input. Compared with ages determined with commonly-used α-enhanced models, we find a difference of ∼9% on average when the observed oxygen abundance is considered. The age differences between the two types of models are correlated to [Fe/H] and [O/α], and they are relatively significant on stars with [Fe/H] ≲ -0.6 dex. Generally, varying 0.2 dex in [O/α] will alter the age estimates of metal-rich (-0.2 < [Fe/H] < 0.2) stars by ∼10%, and relatively metal-poor (-1 < [Fe/H] < -0.2) stars by ∼15%. Of the low-O stars with [Fe/H] < 0.1 dex and [O/α] ∼ -0.2 dex, many have fractional age differences of ≥ 10%, and even reach up to 27%. The fractional age difference of high-O stars with [O/α] ∼ 0.4 dex reaches up to -33% to -42% at [Fe/H] ≲ -0.6 dex. We also analyze the chemical properties of these stars. We find a decreasing trend of [Fe/H] with age from 7.5–9 Gyr to 5–6.5 Gyr for the stars from the LAMOST and GALAH. The [O/Fe] of these stars increases with decreasing age from 7.5–9 Gyr to 3–4 Gyr, indicating that the younger population is more O-rich.
§ INTRODUCTION
Galactic archaeology uses the chemical abundances, kinematics, and derived ages of resolved stellar populations as fossils to investigate the formation and evolution history of the Milky Way <cit.>. However, in comparison to chemical abundance and kinematics estimation, estimating the ages of field stars is a challenging task due to the inherent uncertainties present in both observational data and the stellar models employed for dating stars <cit.>.
The chemical composition of a star is a fundamental input parameter in the construction of its theoretical model, which is critical in the determination of its age. Notably, at fixed [Fe/H], the abundance variations of individual elements exert a consequential impact on the overall metallicity Z, which subsequently determines the opacity of the stellar models. This, in turn, influences the efficiency of energy transfer and the thermal structure, thereby altering the evolution tracks on the HR diagram and the main-sequence lifetime <cit.>. Consequently, in the context of stellar modeling, it is essential to consider the proper metal mixture in order to accurately characterize stars and determine their ages.
The solar-scaled ([α/Fe] = 0) and α-enhanced mixtures have been commonly used in theoretical model grids like the Y2 isochrones <cit.>, the Dartmouth Stellar Evolution Database <cit.>, and the Padova stellar models <cit.>. These models scale all the α-elements (O, Ne, Mg, Si, S, Ca, Ti) by the same factor.
Observations from high-resolution spectroscopic data have shown that, for many stars, the O enhancement differs considerably from that of the other α-elements <cit.>.
The observed discrepancies in the abundances of oxygen and other α-elements can be attributed to the diverse origins of these elements. Specifically, O and Mg are believed to be primarily synthesized during the hydrostatic burning phase of massive stars and subsequently ejected during the core-collapse supernovae (CCSNe) <cit.>. Nevertheless, some works have provided evidence that Mg might also be partially released into the interstellar medium by SNe Ia <cit.>, while O appears to be solely enriched by CCSNe <cit.>. The other α-elements, namely Si, Ca, and Ti, primarily originate from the explosive burning of CCSNe and are partially contributed by SNe Ia <cit.>.
For instance, 22% of Si and 39% of Ca come from SNe Ia according to the chemical evolution models in <cit.>.
Therefore, not all α-elements vary in lockstep; the abundance of oxygen does not necessarily correlate with the abundances of the other α-elements.
Many works have also discussed the effects of varying individual element abundances on the stellar evolution models <cit.>. Theoretical models showed that the oxygen abundance influences the stellar evolution differently from the other α-elements <cit.>.
Furthermore, <cit.> proposed the CO-extreme models, which treat oxygen abundance differently from the other α-elements and add carbon abundance in the stellar evolution models. The models have been employed to determine the ages of thousands of metal-poor halo stars, disk stars, and main sequence turn-off stars <cit.>. These results showed that increasing oxygen abundance leads to smaller age determination for the stars with [Fe/H] < -0.2. For the stars with [Fe/H] < -0.2 and [O/α] > 0.2 dex, the age difference would be about 1 Gyr. Due to the limited sample sizes of previous studies (<cit.>, with 70 stars, and <cit.>, with 148 stars) or the restricted range of [Fe/H] values <cit.>, there is a pressing need for a large and self-consistent sample to conduct a quantitative analysis regarding the impact of O-enhancement on age determination.
Recently, millions of stars' individual element abundances have been measured by spectroscopic surveys like LAMOST <cit.>, APOGEE <cit.>, and GALAH <cit.>. These large sky surveys provide an excellent opportunity to study the effects of oxygen abundance variations on age determinations across a wide range of stellar parameters. To investigate the systematic effects of O-enhancement on age determination, we study the dwarf stars with available oxygen abundance measurements from LAMOST and GALAH. This paper is organized as follows: Section <ref> mentions the data selection; Section <ref> describes computations of stellar model grids; Section <ref> demonstrates ages differences between the O-enhanced models and α-enhanced models; the resulting age-abundance trends are presented in Section <ref>; and the conclusions of this work are drawn in Section <ref>.
§ TARGET SELECTION
In this work, we make use of spectroscopic data from LAMOST DR5 Value Added Catalogue <cit.> and Third Data Release of GALAH <cit.>,
together with astrometric data from Gaia Data Release 3 <cit.>.
§.§ Spectroscopic Data
LAMOST (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope) DR5 Value Added Catalog <cit.> contains more than 6 million stars with atmosphere parameters (T_ eff, log g, V_mic) and chemical abundances of 16 elements (C, N, O, Na, Mg, Al, Si, Ca, Ti, Cr, Mn, Fe, Co, Ni, Cu, and Ba). Measurements of element abundances are based on the DD–Payne tool <cit.>, which is a data-driven method that incorporates constraints from theoretical spectral models.
It is noteworthy that, as discussed by <cit.>, the direct derivation of oxygen abundances from atomic oxygen lines or oxygen-bearing molecular lines in low-resolution (R ∼ 1800) LAMOST spectra is unfeasible. Alternatively, CH and CN molecular lines can be utilized for indirect estimation of oxygen abundances, as their strengths are sensitive to the amount of carbon locked up in CO molecules. As a result, the LAMOST oxygen abundances are only available in the cooler stars (T_ eff ≲ 5700 K), where the CH and CN lines have sufficient strength to allow a reasonably precise (±0.10 dex) estimate of [O/Fe] <cit.>.
Due to the wide age range and the preservation of initial chemical abundances, the main-sequence star could be a good tracer of stellar populations. Therefore, we select the main-sequence stars with available measurements for [Fe/H], [α/Fe], and [O/Fe] from the catalog. Firstly, we use some recommended labels (T_ eff_flag = 1, log g_flag = 1, [Fe/H]_flag = 1, [X/Fe]_flag[[X/Fe]_flag = 1 for 14 elements (C, N, O, Na, Mg, Al, Si, Ca, Ti, Cr, Mn, Fe, Co, Ni).] = 1, qflag_chi2 = good) to select stars with reliable measurements. Afterward, we remove stars with T_ eff smaller than 5000 K or signal-to-noise ratio (S/N) less than 50 because their [O/Fe] determinations are not robust. <cit.> also provided a tag named “qflag_singlestar” to infer whether a star is single or belongs to a binary system. The tag is determined by the deviation significance of the spectroscopic parallax from the Gaia astrometric parallax. When the deviation is less than 3σ, it suggests an object is a single star. We use this tag to remove all candidate binaries from our sample.
Finally, we choose stars with log g > 4.1, which leaves a total of 187,455 unique stars.
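For readers who wish to reproduce these cuts, a minimal sketch of the selection in Python (pandas) is given below; the file name, column names, and flag encodings are illustrative assumptions about the catalogue layout, not the exact names used in the LAMOST DR5 value-added catalogue.

    import pandas as pd

    cat = pd.read_csv("lamost_dr5_vac.csv")        # hypothetical file name

    good_labels = ((cat["teff_flag"] == 1) & (cat["logg_flag"] == 1) &
                   (cat["feh_flag"] == 1) & (cat["xfe_flag"] == 1) &
                   (cat["qflag_chi2"] == "good"))
    robust_ofe  = (cat["teff"] >= 5000.0) & (cat["snr"] >= 50.0)   # reliable [O/Fe]
    single_star = cat["qflag_singlestar"] == 1     # assumed encoding of "single"
    dwarf       = cat["logg"] > 4.1

    sample = cat[good_labels & robust_ofe & single_star & dwarf]
    sample = sample.drop_duplicates(subset="star_id")   # assumed identifier column
    print(len(sample), "stars selected")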
GALAH (Galactic Archaeology with HERMES) DR3 <cit.> presents stellar parameters (T_ eff, log g, [Fe/H], V_mic, V_broad, V_rad) and up to 30 elemental abundances for 588,571 stars, derived from optical spectra at a typical resolution of R ∼ 28,000.
The oxygen abundance from GALAH DR3 was calculated using the O_ I 777 nm triplet <cit.>, based on a non-LTE method (LTE: local thermodynamic equilibrium) <cit.>.
The same non-LTE method has also been employed for the measurement of [Fe/H] in GALAH.
Following the recommendations in GALAH DR3, we require a SNR > 30, and a quality flag = 0 for reliable stellar parameter determination including iron, α-elements, and oxygen abundances (flag_sp = 0, flag_fe_h = 0, flag_alpha_fe = 0, and flag_o_fe = 0). Additionally, the sample is limited to the stars with e_alpha_fe < 0.1 and e_o_fe < 0.1. We exclude the binary systems identified by <cit.> (which is a catalog of FGK binary stars in GALAH). These cuts give us a sample of 19,512 dwarf stars (log g> 4.1).
§.§ Astrometric Data
We cross-match our selected LAMOST and GALAH samples with Gaia DR3 <cit.> catalog to obtain the luminosity for each star. Given that luminosity is utilized as a key observational constraint for estimating stellar age, we select stars with luminosity uncertainty less than 10%. Additionally, we select single stars by making a cut based on the Gaia re-normalized unit weight error (RUWE) being less than 1.2 (RUWE values are from the Gaia DR3). Our final sample consists of 149,906 stars from LAMOST (5000 K < T_ eff < 5725 K, -1 < [Fe/H] < 0.5, log g> 4.1) and 15,591 stars from GALAH (4500 K < T_ eff < 7000 K, -1 < [Fe/H] < 0.5, log g> 4.1).
We calculate the Galactic Cartesian coordinates (X, Y, Z) and velocities (U, V, W) for the LAMOST sample using the Python package Galpy <cit.>. The distances are estimated by <cit.>. The Sun is located at (X, Y, Z) = (-8.3, 0, 0) kpc, and the solar motion with respect to the local standard of rest is (U_⊙, V_⊙, W_⊙) = (11.1, 12.24, 7.25) km s^-1 <cit.>. We use the Galactic Cartesian coordinates and velocities from the GALAH DR3 value-added catalog (VAC), which is based on astrometry from Gaia EDR3 and radial velocities determined from the GALAH spectra <cit.>.
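As an illustration of this kinematics computation, the heliocentric Galactic Cartesian positions and the U, V, W velocities can be obtained with astropy as sketched below (this work itself uses galpy and the distances of <cit.>); the test-star inputs are made up, while the solar position and motion are the values quoted above, and the exact sign conventions for U, V, W should be checked against the adopted definition.

    import astropy.units as u
    from astropy.coordinates import SkyCoord

    star = SkyCoord(ra=180.0*u.deg, dec=30.0*u.deg, distance=0.5*u.kpc,
                    pm_ra_cosdec=-5.0*u.mas/u.yr, pm_dec=3.0*u.mas/u.yr,
                    radial_velocity=20.0*u.km/u.s)      # made-up test star

    cart = star.galactic.cartesian            # heliocentric Galactic Cartesian position
    vel  = cart.differentials["s"]            # and the corresponding velocity components

    # Positions with the Sun placed at (X, Y, Z) = (-8.3, 0, 0) kpc
    X, Y, Z = -8.3*u.kpc + cart.x, cart.y, cart.z

    # Velocities with respect to the local standard of rest: add the solar motion
    U = (vel.d_x).to(u.km/u.s) + 11.1*u.km/u.s
    V = (vel.d_y).to(u.km/u.s) + 12.24*u.km/u.s
    W = (vel.d_z).to(u.km/u.s) + 7.25*u.km/u.s
    print(X, Y, Z, U, V, W)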
In Figure <ref>, we demonstrate dwarfs from LAMOST and GALAH in the Kiel diagram, and the [α/Fe][The [α/Fe] from both the LAMOST and GALAH catalog are defined as an error-weighted mean of [Mg/Fe], [Si/Fe], [Ca/Fe] and [Ti/Fe].]-[O/Fe] space to inspect their general distributions.
The Kiel diagram in Figure <ref>(a) shows that most of the LAMOST dwarfs are cooler than 5700 K, while the GALAH dwarfs in Figure <ref>(b) cover a wider range of T_ eff (4500 - 7000 K). Note that we do not apply any explicit cut at the high-temperature side of the LAMOST sample; the effective upper limit simply reflects the hottest stars for which reliable oxygen abundances can be measured by <cit.>.
The [α/Fe]-[O/Fe] diagrams in Figure <ref>(c-d) show that [O/Fe] generally increases with increasing [α/Fe]; however, [O/Fe] shows a wide spread at fixed [α/Fe]. The spread is relatively large for low-α stars (especially in the GALAH sample), ranging from -0.4 to +0.6.
Metal Mixtures for the GS98 Solar Mixture, the α-Enhanced Mixture, and the O-Enhanced Mixture.
Element   log N_⊙   log N_αEM   log N_OEM
C 8.52 8.52 8.52
N 7.92 7.92 7.92
O 8.83 8.83+[α/Fe] 8.83+[O/Fe]
F 4.56 4.56 4.56
Ne 8.08 8.08+[α/Fe] 8.08+[α/Fe]
Na 6.33 6.33 6.33
Mg 7.58 7.58+[α/Fe] 7.58+[α/Fe]
Al 6.47 6.47 6.47
Si 7.55 7.55+[α/Fe] 7.55+[α/Fe]
P 5.45 5.45 5.45
S 7.33 7.33+[α/Fe] 7.33+[α/Fe]
Cl 5.50 5.50 5.50
Ar 6.40 6.40 6.40
K 5.12 5.12 5.12
Ca 6.36 6.36+[α/Fe] 6.36+[α/Fe]
Sc 3.17 3.17 3.17
Ti 5.02 5.02+[α/Fe] 5.02+[α/Fe]
V 4.00 4.00 4.00
Cr 5.67 5.67 5.67
Mn 5.39 5.39 5.39
Fe 7.50 7.50 7.50
Co 4.92 4.92 4.92
Ni 6.25 6.25 6.25
Grid of Evolutionary Models with Two Metal Mixture Patterns.
Metal mixture   [O/Fe] (dex)   [α/Fe] (dex)
O-enhanced mixture -0.2 0
0.2 0
0.4 0
-0.1 0.1
0.3 0.1
0.5 0.1
0 0.2
0.4 0.2
0.2 0.3
0.4 0.3
0.5 0.3
0.6 0.3
α-enhanced mixture 0 0
0.1 0.1
0.2 0.2
0.3 0.3
Z Values at Fixed [Fe/H] with Two Metal Mixture Patterns.
[Fe/H] (dex)   [α/Fe] (dex)   [O/Fe] (dex)   Z
-1.0 0.1 0.1 0.0020
-1.0 0.1 0.5 0.0036
-0.8 0.1 0.1 0.0032
-0.8 0.1 0.5 0.0056
-0.6 0.1 0.1 0.0051
-0.6 0.1 0.5 0.0089
-0.4 0.1 0.1 0.0080
-0.4 0.1 0.5 0.0139
-0.2 0.1 0.1 0.0126
-0.2 0.1 0.5 0.0217
0 0.1 0.1 0.0197
0 0.1 0.5 0.0337
Atmosphere Parameters and Chemical Abundances for the Example Stars from LAMOST
Star (sobject_id)   T_eff (K)   [Fe/H] (dex)   Luminosity (L_⊙)   [α/Fe] (dex)   [O/Fe] (dex)
20140313-HD145243N315530B-01-084 5619±22 -0.30±0.04 0.74±0.02 0.06±0.02 0.46±0.09
20141112-HD083415N451147V01-03-165 5652±24 -0.15±0.04 1.57±0.03 0.15±0.02 -0.02±0.08
§ STELLAR MODELS
§.§ Input Physics
We construct a stellar model grid using the Modules for Experiments in Stellar Astrophysics (MESA) code <cit.>. The versions of MESA and MESA SDK we used are Revision 12115 and Version 20.3.1, respectively.
The EOS (Equation of State) tables in MESA are a blend of OPAL <cit.>, SCVH <cit.>, PTEH <cit.>, HELM <cit.>, and PC <cit.> EOS tables. Nuclear reaction rates are a combination of rates from NACRE <cit.>, JINA REACLIB <cit.>, plus additional tabulated weak reaction rates <cit.>. Screening is included via the prescription of <cit.>. Thermal neutrino loss rates are from <cit.>. The helium enrichment law is calibrated with initial abundances of helium and heavy elements of the solar model given by <cit.>, and it results in Y = 0.248 + 1.3324 Z. The mixing-length parameter α_ MLT is fixed to 1.82. Microscopic diffusion and gravitational settling of elements are necessary for stellar models of low-mass stars, which will lead to a modification to the surface abundances and main-sequence (MS) lifetimes <cit.>. Therefore, we include diffusion and gravitational settling using the formulation of <cit.>. We use the solar mixture GS98 from <cit.>. The opacity tables are OPAL high-temperature opacities [<http://opalopacity.llnl.gov/new.html>] supplemented by the low-temperature opacities <cit.>.
We customize metal mixtures by introducing two enhancement factors, one for oxygen and one for all the other α-elements (i.e., Ne, Mg, Si, S, Ca, and Ti). The two factors are applied in the same way as <cit.> to vary the number abundances (log N) of the elements with respect to the GS98 solar mixture, as presented in Table <ref>.
We compute a set of opacity tables by varying the two enhancement factors according to the ranges of [α/Fe] and [O/Fe] values in the star sample. The enhancement values are shown in Table <ref>. Mixtures in which oxygen and the other α-elements share the same enhancement factor are referred to as α-enhanced mixtures (αEM); otherwise, they are referred to as O-enhanced mixtures (OEM).
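To make the connection between the enhancement factors and the resulting Z explicit, the back-of-the-envelope sketch below (not part of our modelling pipeline) estimates Z for a given [Fe/H], [α/Fe], and [O/Fe] from the GS98 log N values of Table <ref> and the helium law Y = 0.248 + 1.3324 Z quoted above; the atomic weights and the uniform scaling of all metals with [Fe/H] are simplifying assumptions of the illustration, so the result only approximately reproduces the tabulated Z values.

    import numpy as np

    ALPHA = {"Ne", "Mg", "Si", "S", "Ca", "Ti"}      # alpha elements scaled by [alpha/Fe]
    logN  = {"C": 8.52, "N": 7.92, "O": 8.83, "F": 4.56, "Ne": 8.08, "Na": 6.33,
             "Mg": 7.58, "Al": 6.47, "Si": 7.55, "P": 5.45, "S": 7.33, "Cl": 5.50,
             "Ar": 6.40, "K": 5.12, "Ca": 6.36, "Sc": 3.17, "Ti": 5.02, "V": 4.00,
             "Cr": 5.67, "Mn": 5.39, "Fe": 7.50, "Co": 4.92, "Ni": 6.25}
    A     = {"C": 12.011, "N": 14.007, "O": 15.999, "F": 18.998, "Ne": 20.180,
             "Na": 22.990, "Mg": 24.305, "Al": 26.982, "Si": 28.086, "P": 30.974,
             "S": 32.06, "Cl": 35.45, "Ar": 39.948, "K": 39.098, "Ca": 40.078,
             "Sc": 44.956, "Ti": 47.867, "V": 50.942, "Cr": 51.996, "Mn": 54.938,
             "Fe": 55.845, "Co": 58.933, "Ni": 58.693}      # assumed atomic weights

    def z_from_mixture(feh, alpha_fe, o_fe):
        """Approximate (Z, Y, X) for a given [Fe/H], [alpha/Fe] and [O/Fe]."""
        zx = 0.0
        for el, ln in logN.items():
            boost = o_fe if el == "O" else (alpha_fe if el in ALPHA else 0.0)
            n_el  = 10.0 ** (ln - 12.0 + feh + boost)   # number abundance relative to H
            zx   += n_el * A[el] / 1.008                # mass relative to hydrogen
        # X + Y + Z = 1 with Y = 0.248 + 1.3324 Z and Z = zx * X gives a closed form
        Z = 0.752 * zx / (1.0 + 2.3324 * zx)
        Y = 0.248 + 1.3324 * Z
        return Z, Y, 1.0 - Y - Z

    # roughly comparable with the [Fe/H] = 0, [O/Fe] = 0.5 row of the Z table above
    print(z_from_mixture(0.0, 0.1, 0.5))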
Fundamental Parameters and Chemical Abundances for the Example Stars from GALAH
Star (sobject_id)   T_eff (K)   [Fe/H] (dex)   Luminosity (L_⊙)   [α/Fe] (dex)   [O/Fe] (dex)   Mass_αEM (M_⊙)   Mass_Buder2021 (M_⊙)   Age_αEM (Gyr)   Age_Buder2021 (Gyr)
171230005802396 6096±76 -0.23±0.06 2.26±0.07 0±0.02 0.02±0.08 1.06±0.03 1.03±0.04 6.08±1.01 6.46±1.17
160529003401378 5846±76 -0.42±0.06 1.67±0.03 0.31±0.03 0.34±0.09 0.97±0.03 0.96±0.03 9.53±1.26 10.04±1.39
* The masses (Mass_ Buder2021) and ages (Age_ Buder2021) of the two example stars from the GALAH value-added catalog <cit.> are calculated based on PARSEC stellar isochrones (the PAdova and TRieste Stellar Evolution Code) <cit.>.
§.§ Grid Computations
We establish stellar model grids that include the various metal-mixture patterns indicated in Table <ref>. The mass range is from 0.6 to 1.2 M_⊙ with a grid step of 0.02 M_⊙. Input [Fe/H] values range from -1.20 to +0.46 dex with a grid step of 0.02 dex. The computation starts at the Hayashi line and terminates at the end of the main sequence, when core hydrogen is exhausted (central hydrogen mass fraction below 10^-12).
The inlist file (for MESA) utilized in the computation of our stellar models is available on Zenodo: [doi:10.5281/zenodo.7866625]https://doi.org/10.5281/zenodo.7866625
To illustrate the effect of oxygen enhancement on the evolutionary tracks, we show representative tracks in Figure <ref>. The corresponding values of Z are listed in Table <ref>. At fixed [Fe/H], varying [O/Fe] changes the opacity, which affects the efficiency of energy transfer and the thermal structure.
We find that a larger [O/Fe] leads to higher opacity at input [Fe/H] ≤ -0.2 and shifts the evolutionary tracks to lower T_ eff.
As seen in Figure <ref>, at [Fe/H] ≤ -0.2, O-rich models are generally cooler than the α-enhanced models at given input [Fe/H], leading to higher modeling-determined masses (smaller ages) for a given position on the HR diagram (left panel of Figure <ref>). However, at input [Fe/H] = 0,
larger [O/Fe] leads to lower opacity, and shifts the evolutionary tracks to higher T_ eff.
The O-rich models are slightly hotter than the α-enhanced models.
Overall, at fixed mass, the T_ eff difference between the two models becomes significant with smaller [Fe/H].
In addition, we note that the 1.1 M_⊙ and 1.2 M_⊙ tracks of O-rich models show different behavior compared with the tracks of 0.7 ∼ 1.0 M_⊙. The O-rich models with 1.1 M_⊙ show a blue hook morphology at [Fe/H] ≤ -0.8, which enlarges the T_ eff difference between two models at this evolutionary phase. At 1.2 M_⊙, both models show a blue hook morphology at the end of main-sequence, and the T_ eff difference keeps approximately constant at [Fe/H] ≤ -0.6.
Figure <ref> presents the stellar evolution tracks of two example stars calculated with αEM and OEM models. Figure <ref>(a) presents the tracks of a star with observed [α/Fe] ∼ 0.1, [O/Fe] ∼ 0.5. Based on the αEM models (input [α/Fe] = 0.1, [O/Fe] = 0.1), we obtain the best-fit values of fundamental parameters for this star: mass = 0.87 ± 0.02 M_⊙, age = 8.69 ± 1.49 Gyr (the fitting method is described in detail in Section <ref>). Using the OEM models (input [α/Fe] = 0.1, [O/Fe] = 0.5), we estimate it to be a young star with mass = 0.90 ± 0.02 M_⊙, age = 5.68 ± 1.44 Gyr. The mean value of masses of OEM models ([O/Fe] = 0.5) inside the observational error box is larger than that of αEM models ([O/Fe] = 0.1), leading to smaller modeling-determined age for this star. Figure <ref>(b) shows the tracks of a star with observed [α/Fe] ∼ 0.2, [O/Fe] ∼ 0. We obtain a mass of 0.99 ± 0.01 M_⊙ and an age of 10.51 ± 0.60 Gyr for this star with αEM models (input [α/Fe] = 0.2, [O/Fe] = 0.2), and a mass of 0.98 ± 0.02 M_⊙ and an age of 11.34 ± 0.51 Gyr with OEM models (input [α/Fe] = 0.2, [O/Fe] = 0). As seen, the OEM models with input [O/Fe] = 0 are generally hotter than the αEM models ([O/Fe] = 0.2) at fixed mass and [Fe/H], leading to smaller modeling-determined mass and larger age for this star.
§.§ Fitting Method
We constrain stellar masses and ages using five observed quantities, i.e., T_ eff, luminosity, [Fe/H], [α/Fe], and [O/Fe]. Note that [O/Fe] is not used when estimating parameters with αEM models.
We follow the fitting method proposed by <cit.>. According to Bayes' theorem, we compare the model predictions with the corresponding observed properties D to calculate the posterior probability of each model M_i given the prior information I,
p(M_i| D,I)=p(M_i| I) p(D| M_i, I)/p(D| I)
where p(M_i | I) represents the uniform prior probability for a specific model, and p(D | M_i, I) is the likelihood function:
p(D| M_i, I) = L(T_eff, [Fe/H], lum) = L_T_eff L_[Fe/H] L_lum
The p(D | I) in Equation <ref> is a normalization factor for the specific model probability:
p(D | I)=∑_j=1^N_m p(M_j| I) p(D | M_j, I)
where N_m is the total number of selected models. The uniform priors p(M_i| I) cancel, so that Equation (1) simplifies to:
p(M_i| D, I)=p(D | M_i, I)/∑_j=1^N_m p(D | M_j, I).
Equation <ref> then gives the probability distribution over the selected models, from which the most probable fundamental parameters are derived.
As demonstrated in Figure <ref>, we fit a Gaussian function to the likelihood distribution of mass and age for each star.
The mean and standard deviation of the resulting Gaussian profile are then adopted as the central value and uncertainty of the fundamental parameters (mass and age) for each star.
To identify stars located near the edge of the model grid, we consider a 3-sigma error box (i.e., three times the observational error, depicted as a blue square in Figure <ref>) on the HR diagram and divide the error box into 100 bins. For a given star, when more than 5 bins do not contain any theoretical model (sampling rate < 95%), we flag the star with “edge effect”.
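Schematically, the grid-based estimate described above can be written as in the sketch below; the array names are placeholders, and, instead of the Gaussian fit to the marginalised likelihood used in this work, the sketch simply takes the posterior-weighted mean and standard deviation as a proxy.

    import numpy as np

    def posterior_age_mass(obs, err, grid):
        """obs/err: dicts with 'teff', 'lum', 'feh'; grid: dict of model arrays."""
        chi2 = (((grid["teff"] - obs["teff"]) / err["teff"]) ** 2
                + ((grid["lum"] - obs["lum"]) / err["lum"]) ** 2
                + ((grid["feh"] - obs["feh"]) / err["feh"]) ** 2)
        like = np.exp(-0.5 * chi2)              # L_Teff * L_[Fe/H] * L_lum
        post = like / like.sum()                # uniform priors cancel
        age  = np.sum(post * grid["age"])       # posterior-weighted estimates
        mass = np.sum(post * grid["mass"])
        age_err  = np.sqrt(np.sum(post * (grid["age"] - age) ** 2))
        mass_err = np.sqrt(np.sum(post * (grid["mass"] - mass) ** 2))
        return (mass, mass_err), (age, age_err)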
To assess the accuracy of our models and investigate potential model dependency in age and mass determination, we present a comparison of results obtained from our αEM models, OEM models, and the GALAH DR3 value-added catalog <cit.>. Figure <ref> shows the comparison of age and mass estimations for ∼4,000 GALAH stars, with age uncertainty of less than 30%, based on
αEM models, OEM models, and GALAH DR3 VAC <cit.>. The ages and masses of stars from GALAH DR3 VAC are calculated using the PARSEC (the PAdova and TRieste Stellar Evolution Code) release v1.2S + COLIBRI stellar isochrone <cit.>, which adopt a solar-scaled metal mixture, i.e., input [α/Fe] = 0. Figure <ref> illustrates that the one-to-one relation of the results is quite good for most stars. It is noteworthy that the adopted approach encompasses a flat prior on age with an age cap of 13.2 Gyr <cit.>. Consequently, the ages of the majority of stars from GALAH DR3 VAC are found to be younger than 12 Gyr (with masses larger than 0.8 M_⊙), which results in a relatively large dispersion of age differences, amounting to 12.4% for αEM models and 13.0% for OEM models.
Significant systematic differences are apparent between the PARSEC and the αEM models in Figure <ref>(a-b), with the former indicating 2.3% older age and 1.5% smaller mass than the latter.
These discrepancies could be attributed to differences in the input physics employed by the two models, such as the input [α/Fe] value, helium abundance, and mixing-length parameter.
In Figure <ref>(c-d), the PARSEC yields 5.5% older age and 1.9% smaller mass than the OEM models.
Compared with the αEM models, the OEM models demonstrate more pronounced systematic differences from PARSEC. These distinctions arise primarily from the consideration of O-enhancement in the OEM models, leading to younger ages and higher masses.
In addition, a comparison of the results obtained from our αEM models and the Yonsei–Yale <cit.> stellar isochrones is shown in Figure <ref> in the Appendix.
§ RESULTS
This work aims to determine the ages of dwarfs considering oxygen abundance and study the chemical and kinematic properties of high-α and low-α populations in the Galactic disk. We give the masses and ages of 149,906 LAMOST dwarfs and 15,591 GALAH dwarfs with αEM models and OEM models. We remove ∼30% of the stars with sampling rate < 95%, located near the edge of the model grid. In addition, we remove ∼3% of the stars whose inferred ages are 2-sigma[For a certain star, age - 2*age_uncertainty > 13.8 Gyr.] larger than the universe age <cit.> due to their significant model systematic bias. Finally, we remove ∼35% of the stars with relative age uncertainties larger than 30%. After these cuts, we obtain the ages of 67,503 dwarfs from LAMOST with a median age uncertainty of ∼16%, and 4,006 dwarfs from GALAH with a median age uncertainty of ∼18%.
The age estimation of dwarf stars is inherently accompanied by considerable uncertainty, which can reach up to 30% within our sample. Furthermore, uncertainties (especially the systematic error) in atmosphere parameters can introduce biases in the age estimation. Consequently, a minority of stars in our sample exhibits ages that exceed the age of the universe. This occurrence is not uncommon, as even samples of subgiants with more precise age determinations have encountered analogous occurrences <cit.>.
§.§ Oxygen Effect on Age Determinations
§.§.§ Mock Data Test
Most of the stars in both the LAMOST and GALAH samples are distributed in a relatively narrow range of [Fe/H] (-0.5 to +0.5 dex).
To systematically investigate the effect of O-enhancement on age determinations over a wide range of T_ eff and [Fe/H], we apply a mock data test based on our grid of stellar models. For each set of stellar model grids with fixed [Fe/H], [α/Fe], and [O/Fe] values, we draw random samples from the distributions of stellar evolution tracks in the H-R diagram.
We adopt 0.05 dex and 30 K as the observational errors for [Fe/H] and T_ eff, and a fractional error of 2% for luminosity. Finally, we generate mock data of 0.15 million stars with age uncertainties of less than 30 percent.
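A minimal sketch of this mock-data generation is given below: points are drawn along the evolutionary tracks and perturbed with the quoted observational errors; the sampling scheme and function names are illustrative rather than the exact procedure used here.

    import numpy as np

    rng = np.random.default_rng(42)

    def make_mock(track_teff, track_lum, track_feh, n_draws=100):
        """track_*: arrays sampled along one evolutionary track of fixed composition."""
        idx  = rng.integers(0, len(track_teff), size=n_draws)
        teff = rng.normal(track_teff[idx], 30.0)                      # 30 K error
        lum  = track_lum[idx] * rng.normal(1.0, 0.02, size=n_draws)   # 2% fractional error
        feh  = rng.normal(track_feh[idx], 0.05)                       # 0.05 dex error
        return teff, lum, feh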
Figure <ref>(a) shows the distribution of mock stars on the HR diagram. Figure <ref>(b-c) presents a comparison between mock data and observational data for the T_ eff and [Fe/H] distributions. Compared with the LAMOST or GALAH dwarfs, the mock stars cover wider ranges of T_ eff (5000 - 7000 K) and [Fe/H] (-1.0 to +0.4 dex). Therefore, the mock data are useful for statistical studies of the oxygen effect on age determinations.
Figure <ref> shows a comparison between ages determined with αEM models (τ_α EM) and OEM models (τ_ OEM). The mock stars are grouped by their [Fe/H] and [O/α] values. The stars with [O/α] > 0 are hereafter referred to as high-O stars and the stars with [O/α] < 0 as low-O stars. Generally, high-O stars have younger ages based on OEM models, while low-O stars become older. The effect of oxygen enhancement on age determination is relatively significant for stars with [Fe/H] < -0.2. At [O/α] = -0.2, the mean fractional age difference ( (τ_ OEM - τ_α EM)/τ_α EM ) is 10.5% for metal-rich stars (-0.2 < [Fe/H] < 0.2), and 15.5% for relatively metal-poor stars (-1 < [Fe/H] < -0.2). The mean fractional age difference at [O/α] = 0.2 is -9.2% for metal-rich stars, and -16.5% for relatively metal-poor stars.
The largest fractional age difference comes from high-O stars with [O/α] = 0.4, which have a mean fractional age difference of -20.2% at -0.2 < [Fe/H] < 0.2, and -30.6% at -1 < [Fe/H] < -0.2.
We find clear age offsets that correlate to the [Fe/H] and [O/α] values. Increasing 0.2 dex in [O/α] will reduce the age estimates of metal-rich stars by ∼10%, and metal-poor stars by ∼15%.
Compared with the observational data, the mock data provide a sufficient number of stars at the metal-poor end to clearly display the age differences at different [O/α] and [Fe/H] values.
§.§.§ Observational Data
Figure <ref> presents the fractional age differences between αEM and OEM models for observational (LAMOST and GALAH) and mock data. The overall average age offset (absolute value of age difference) of stars from LAMOST and GALAH is 8.9% and 8.6%, respectively. Of the low-O stars with [Fe/H] < 0.1 dex and [O/α] ∼ -0.2 dex, many have fractional age differences of ≥ 10%, and even reach up to 27%. The mean fractional age difference of high-O stars with [O/α] ∼0.4 dex is ∼ -25%. The age offsets are relatively significant for metal-poor stars. The largest age differences are -33% to -42% for stars with [Fe/H] ≲ -0.6 dex and [O/α] ∼0.4 dex. For mock data, we note the trend of age offsets versus [Fe/H] is consistent with that of observational data. The age offsets of both samples increase significantly with decreasing metallicity at [Fe/H] ≳ -0.6. Interestingly, there is a slight increase in age offsets with decreasing metallicity at [Fe/H] < -0.6.
This trend of age offsets is consistent with the change of T_ eff difference as a function of [Fe/H] (shown in Figure <ref>), as discussed in Section <ref>.
§.§ Age-Abundance Relations
To trace the chemical evolution history of the Galactic disk, we hereby present the age-abundance relations of the LAMOST sample (consisting of 67,511 stars) and the GALAH sample (consisting of 4,006 stars) using the ages from OEM models. For each sample, we employ local nonparametric regression fitting (LOESS model) to characterize the trends in these relations with enhanced clarity.
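The LOESS trends shown below can be reproduced with, for instance, the lowess smoother of the statsmodels package; the smoothing fraction in the sketch is an illustrative choice, not necessarily the value adopted in our figures.

    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    def age_abundance_trend(age, abundance, frac=0.3):
        """Return the locally smoothed abundance as a function of age."""
        smooth = lowess(abundance, age, frac=frac, return_sorted=True)
        return smooth[:, 0], smooth[:, 1]      # (sorted age, smoothed [X/Fe])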
Figure <ref> illustrates the results for the LAMOST sample. In Figure <ref>(a), a gradual decline in [Fe/H] is observed across the age range of ∼9 Gyr to ∼6.5 Gyr. This trend shows similarities to the metal-rich branch observed in young stars (age < 8 Gyr) as found by <cit.>, where the metallicity range of their metal-rich branch stars spans approximately -0.2 to +0.4. Notably, <cit.> also identifies a trend comparable to our findings, whereby their sample exhibits a [Fe/H] value of 0.4 at 8 Gyr, diminishing to around -0.2 at 6 Gyr. The "two-infall" chemical evolution model <cit.> predicts a process involving the infall of metal-poor gas commencing roughly 9.4 Gyr ago <cit.>. The observed trend of decreasing metallicity from 9 Gyr to 6.5 Gyr in our results may be related to this infalling metal-poor gas. Intriguingly, this "two-infall" model not only anticipates a decline in metallicity but also predicts an increase in the oxygen abundance,which is consistent with the observed trend illustrated in Figure <ref>(b). In Figure <ref>(b), the sample stars from LAMOST exhibit an increase in [O/Fe] as the age decreases from 9 Gyr to 4 Gyr, indicating a slight enrichment of oxygen in the younger stellar population.
Figure <ref> presents the results for the GALAH sample. It is noteworthy that the GALAH stars display a decrease in [Fe/H] from ∼7.5 Gyr to 5 Gyr. Furthermore, the [O/Fe] of the GALAH stars increases slightly as the age decreases from ∼7.5 Gyr to 3 Gyr. The GALAH sample exhibits age-[Fe/H] and age-[O/Fe] trends similar to those observed in LAMOST; however, an overall slight temporal discrepancy can be observed. This incongruity may be ascribed to dissimilarities in sample composition or systematic differences in atmospheric parameters between the two survey datasets. The GALAH sample, on the whole, exhibits higher temperatures compared to the LAMOST sample (5000 - 5700 K), indicating a relatively younger population. Furthermore, the determinations of [Fe/H] and [O/Fe] from GALAH are based on a non-LTE method <cit.>, which can also impact the observed trends.
In conclusion, the analysis of the LAMOST and GALAH samples reveals a decreasing trend of [Fe/H] as the age decreases from 7.5–9 Gyr to 5–6.5 Gyr, and a notable upward trend in [O/Fe] as the age decreases from 7.5–9 Gyr to 3–4 Gyr. These results agree with the prediction of the "two-infall" scenario and suggest that a metal-poor and O-rich gas gradually came to dominate the star formation from 7.5–9 Gyr ago. As discussed in Section <ref>, oxygen has a unique origin, being primarily produced by CCSNe <cit.>. Therefore, the observed age-[O/Fe] trend plays a distinct role in characterizing the chemical evolution history of the Milky Way and constraining chemical evolution models.
Neglecting to account for the independent enhancement of oxygen abundance in age determination would result in significant age biases, as discussed in Section <ref>. Such biases would obscure the age-[O/Fe] relation, as depicted in Figure <ref> in the appendix, where the rising trend of [O/Fe] with decreasing age remains imperceptible at age < 9 Gyr. Therefore, we suggest that considering the oxygen abundance independently in stellar models is crucial. This would aid in accurately characterizing the age-[O/Fe] relation and provide better constraints for Galactic chemical evolution models.
§ CONCLUSIONS
To determine the ages of dwarfs considering observed oxygen abundance, we construct a grid of stellar models which take into account oxygen abundance as an independent model input. We generate mock data with 0.15 million mock stars to systematically study the effect of oxygen abundance on age determination. Based on the α-enhanced models and O-enhanced models, we obtain the masses and ages of 67,503 stars from LAMOST and 4,006 stars from GALAH and analyze the chemical and kinematic properties of these stars combined with ages from O-enhanced models.
Our main conclusions are summarized as follows:
(1) The ages of high-O stars based on O-enhanced models are smaller compared with those determined with α-enhanced models, while low-O stars become older. We find clear age offsets that correlate with the [Fe/H] and [O/α] values. Varying [O/α] by 0.2 dex alters the age estimates of metal-rich (-0.2 < [Fe/H] < 0.2) stars by ∼10%, and of relatively metal-poor (-1 < [Fe/H] < -0.2) stars by ∼15%.
(2) The overall average age offset (absolute value of age difference) between α-enhanced models and O-enhanced models is
8.9% for LAMOST stars, and 8.6% for GALAH stars. Of the low-O stars with [Fe/H] < 0.1 dex and [O/α] ∼ -0.2 dex, many have fractional age differences of ≥ 10%, and even reach up to 27%. The mean fractional age difference of high-O stars with [O/α] ∼0.4 dex is ∼ -25%, and reach up to -33% to -42% at [Fe/H] ≲ -0.6 dex.
(3) Based on the LAMOST and GALAH samples, we observe a decreasing trend of [Fe/H] with age from 7.5–9 Gyr to 5–6.5 Gyr. Furthermore, the [O/Fe] of both samples increases with decreasing age from 7.5–9 Gyr to 3–4 Gyr, which indicates that the younger population of these stars is more O-rich. Our results agree with the prediction of the "two-infall" scenario and suggest that a metal-poor and O-rich gas gradually came to dominate the star formation from 7.5–9 Gyr ago.
We thank the anonymous referee for valuable comments and suggestions that have significantly improved the presentation of the manuscript. This work is based on data acquired through the Guoshoujing Telescope. Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope; LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences.
This work used the data from the GALAH survey, which is based on observations made at the Anglo Australian Telescope, under programs A/2013B/13, A/2014A/25, A/2015A/19, A/2017A/18, and 2020B/23.
This work has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.
This work is supported by National Key R&D Program of China No. 2019YFA0405503, the Joint Research Fund in Astronomy (U2031203,) under cooperative agreement between the National Natural Science Foundation of China (NSFC) and Chinese Academy of Sciences (CAS), and NSFC grants (12090040, 12090042). This work is partially supported by the CSST project, and the Scholar Program of Beijing Academy of Science and Technology (DZ:BS202002). This paper has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (CartographY GA. 804752).
Figure <ref> depicts the age and mass determinations for ∼15,000 LAMOST stars (with [α/Fe] ∼ 0.1) and reveals a satisfactory correspondence between the αEM models and the YY isochrones <cit.>, as the dispersions of the relative age and mass differences between the two sets of models are only 6.4% and 1.1%. However, slight systematic differences are visible in this comparison, as the YY isochrones yield 3.6% older ages and 0.4% smaller masses than the αEM models.
|
http://arxiv.org/abs/2307.07476v1 | 20230714165606 | Low-Scale Leptogenesis with Low-Energy Dirac CP-Violation | [
"Alessandro Granelli",
"Silvia Pascoli",
"Serguey T. Petcov"
] | hep-ph | [
"hep-ph"
] |
[email protected]
Dipartimento di Fisica e Astronomia, Università di Bologna, via Irnerio 46, 40126, Bologna, Italy
INFN, Sezione di Bologna, viale Berti Pichat 6/2, 40127, Bologna, Italy
Dipartimento di Fisica e Astronomia, Università di Bologna, via Irnerio 46, 40126, Bologna, Italy
INFN, Sezione di Bologna, viale Berti Pichat 6/2, 40127,
Bologna, Italy
Also at: Institute of Nuclear Research and Nuclear Energy, Bulgarian Academy of Sciences, 1784 Sofia, Bulgaria.
INFN, Sezione di Trieste, via Valerio 2, 34127 Trieste, Italy
Kavli IPMU (WPI), UTIAS, University of Tokyo, Kashiwa,
Chiba 277-8583, Japan.
We study the freeze-in scenario of leptogenesis via oscillations within the type-I seesaw model with two quasi-degenerate heavy Majorana neutrinos N_1, 2 having masses M_2 > M_1 ∼ (0.1-100) GeV, (M_2-M_1)/M_1 ≪ 1, focusing on the role of the CP-violation provided by the Dirac phase δ of the Pontecorvo-Maki-Nakagawa-Sakata lepton
mixing matrix.
We find that viable leptogenesis can be due solely to CP-violating values of δ and that the N_1, 2 total mixing squared Θ^2=∑_αΘ^2_α needed is within the reach of future experiments, Θ_α parameterising the coupling to the charged lepton α=e, μ, τ. Furthermore, the required parameter space differs from that associated with additional Casas-Ibarra sources of CP-violation.
Future determination of δ, Θ^2 and/or the ratios Θ_τ^2:Θ^2_μ:Θ^2_e would provide a critical test of the considered scenario.
Low-Scale Leptogenesis with Low-Energy Dirac CP-Violation
Serguey T. Petcov
August 12, 2023
===========================================================
Introduction—
In the present observable Universe there is an overabundance of matter over antimatter. The asymmetry in baryons, or the baryon asymmetry of the Universe (BAU), can be parameterised by the baryon-to-photon ratio η_B.
Observations of the cosmic microwave background anisotropies and the abundances of light primordial elements agree on the present value of η_B≃ 6.1× 10^-10 <cit.>. The generation of this asymmetry in the early Universe is referred to as baryogenesis (see <cit.> for a recent review); it cannot be realised within the Standard Model (SM) of particle physics, and new physics is required.
An alternative attractive mechanism is that of baryogenesis via
leptogenesis (LG) <cit.>, consisting of an early generation of a lepton asymmetry, which is then converted into the present BAU by the SM sphaleron processes <cit.>. The simplest scenario of LG is realised within the type-I seesaw extension of the SM <cit.>, which also provides a mechanism for the generation of the light neutrino masses by augmenting the SM with right-handed sterile neutrinos.
The type-I seesaw extension with two right-handed neutrinos
and, correspondingly, with two heavy Majorana neutrinos N_1, 2 with definite masses M_1, 2 > 0, is the minimal set-up in which LG can be realised, while being also compatible with current data on light neutrino masses and mixing.
Many realisations of LG within the type-I seesaw extension are possible depending on the mass scale <cit.>, through lepton number, C- and CP-violating, out-of-equilibrium processes involving
the heavy Majorana neutrinos, the Higgs and left-handed lepton doublets,
which satisfy the necessary Sakharov conditions <cit.>. In this work, we focus on the
freeze-in mechanism proposed in <cit.> and extensively studied <cit.>, in which the oscillations of the right-handed neutrinos during their out-of-equilibrium production are crucial for the generation of the BAU.
This scenario of LG via oscillations
can be successful for heavy Majorana neutrinos mass scales
as low as 100 MeV, thus being accessible to low-energy searches of heavy neutral leptons <cit.>.
A physically interesting possibility is when the requisite
CP-violation in LG is only due to the phases of the Pontecorvo-Maki-Nakagawa-Sakata (PMNS)
lepton mixing matrix <cit.>. In this case, there would be a direct link between the BAU and CP-violating phenomena in low-energy neutrino physics, such as, e.g., in neutrino oscillations or in neutrinoless double beta
decay (see e.g., <cit.>).
At present, only indications of CP-violation in neutrino oscillations involving the Dirac phase δ exist. However, δ is determined in the global analyses with relatively large uncertainties <cit.> and CP-conserving values are not yet excluded. Current experiments such as T2K <cit.> and NOνA <cit.> will be able to provide additional information in the near future, potentially reaching ∼ 3σ for hints of CP-violation.
The experiments DUNE <cit.> and T2Hyper-Kamiokande (T2HK) <cit.>, currently under construction, will have much stronger sensitivity, aiming at a 5σ discovery of leptonic CP-violation for a large fraction of the possible values of δ.
It is possible that the Dirac phase δ is the only source
of CP-violation in the lepton sector.
LG with Dirac CP-violation has been shown to work in the thermal high-scale scenarios <cit.>, emerging as one of the motivations for the current and future neutrino oscillation experimental programme. Given the considerable attention devoted to searches for heavy neutral leptons at the GeV scale <cit.>, the question of whether low-scale LG via oscillations can be successful with low-energy CP-violation solely from the Dirac phase should be answered in this context as well. In this paper, we examine this physically interesting possibility with particular attention to the related low-energy phenomenology. This could serve as further motivation for neutrino oscillation experiments and suggest new directions for heavy neutral lepton searches.
The Framework— We consider the
minimal version of the type-I seesaw extension of the SM
with two heavy Majorana neutrinos neutrinos N_1,2 having masses
M_2 > M_1∼ (0.1 - 100) GeV and a mass splitting
Δ M≡ M_2 - M_1≪ M_1
in the range Δ M/M_1 ∼ (10^-11 - 10^-4).
In the type-I seesaw,
after the neutral component of the Higgs doublet acquires a non-vanishing
vacuum expectation value v = 246 GeV, one gets
the well known relation
(m_ν)_αβ≃ -(v^2/2) ∑_j=1,2Y_α jY_β j M_j^-1,
α,β =e, μ, τ, for the entries of the tree-level
light neutrino mass matrix m_ν, where Y_α j is the Yukawa coupling
of N_j with the Higgs and left-handed lepton doublet of flavour α.
The matrix m_ν can be diagonalised as m̂_ν = U^† m_ν U^*,
where m̂_ν≡diag(m_1, m_2, m_3) and U represents
the PMNS lepton
mixing matrix.
We adopt the standard parameterisation for U <cit.>
in terms of three neutrino mixing angles θ_12, θ_23 and
θ_13, the Dirac phase δ, and two Majorana phases α_21
and α_31 <cit.>.
In the case of
two heavy Majorana neutrinos, the lightest
neutrino is massless at tree and
one-loop levels and the light neutrino mass spectrum is hierarchical
with either normal ordering (NO) m_1≃ 0 ≪ m_2 < m_3, or
inverted ordering (IO) m_3≃ 0 ≪ m_1 < m_2.
In the numerical analysis that follows, we consider the best-fit values
of θ_12, θ_23 and θ_13, and the two neutrino mass
squared differences obtained
in
<cit.>,
but treat
δ as a free parameter due to the relatively large uncertainty in its determination. The Majorana phases α_21 and
α_31 cannot be constrained by the neutrino oscillation
experiments <cit.>
and are undetermined at present. In the studied case,
only the combination α_23≡α_21-α_31
(the phase α_21) is physical in the hierarchical NO (IO) case.
We treat α_23(21) as free parameters. For reasons that will be clearer throughout the text, we concentrate the analysis mostly on the light-neutrino mass spectrum with NO and leave the IO case for a future longer work. Global analyses including data from atmospheric, reactor and long-baseline neutrino experiments give a mild preference for NO against the spectrum with IO <cit.>.
We consider the Casas-Ibarra (CI) parameterisation for the Yukawa matrix
<cit.>:
Y_α j = ± i (√(2)/v) ∑_a=1, 2, 3U_α a√(m_a)O_ja√(M_j) .
The arbitrary CI matrix O is complex with orthonormal rows and has entries O_11(13) = O_21(23) =0,
O_23(22) = φ O_12(11) = φcosθ
and O_13(12) = -φ O_22(21) = φsinθ in the NO (IO) case,
with θ≡ω + i ξ, ω and ξ being free real
parameters and φ = ±1.
We choose to work with φ = +1 but extend the range of the Majorana phases α_23(21) from [0, 2π] to [0, 4π]. In this way, the same full sets of CI and Yukawa matrices are considered <cit.>.
The SM flavour neutrinos also mix with the heavy Majorana neutrinos.
The mixing Θ_α j≃ (v/√(2))Y_α j/M_j sets the
coupling between N_j and
the charged lepton α (ν_α) in the weak charged
(neutral) current,
thus being important for low-energy phenomenology.
For instance, direct searches at colliders, beam-dump and kaon experiments are sensitive to
Θ^2_α≡∑_j=1^2 |Θ_α j|^2 and
Θ^2 ≡∑_α = e, μ, τΘ_α^2.
The same quantities are crucial in
LG as they determine
the strength of the wash-out processes.
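For concreteness, the sketch below (not the code used for our scans) implements the Casas-Ibarra construction numerically: it builds U and O, forms the Yukawa matrix, verifies the tree-level seesaw relation and evaluates Θ^2_α and Θ^2. The oscillation parameters, the heavy-neutrino masses and the value of ξ are representative choices made only for illustration.

    import numpy as np

    v  = 246.0     # Higgs vev in GeV
    eV = 1e-9      # 1 eV in GeV

    def pmns(t12, t13, t23, delta, a21, a31):
        """Standard parameterisation of the PMNS matrix (angles in radians)."""
        s12, c12 = np.sin(t12), np.cos(t12)
        s13, c13 = np.sin(t13), np.cos(t13)
        s23, c23 = np.sin(t23), np.cos(t23)
        ed = np.exp(1j * delta)
        U = np.array([
            [c12*c13,                      s12*c13,                      s13/ed],
            [-s12*c23 - c12*s23*s13*ed,    c12*c23 - s12*s23*s13*ed,     s23*c13],
            [ s12*s23 - c12*c23*s13*ed,   -c12*s23 - s12*c23*s13*ed,     c23*c13]])
        return U @ np.diag([1.0, np.exp(1j*a21/2), np.exp(1j*a31/2)])

    def O_matrix(omega, xi):
        """CI matrix for NO (m1 = 0) with theta = omega + i*xi and phi = +1."""
        th = omega + 1j*xi
        return np.array([[0.0,  np.cos(th), np.sin(th)],
                         [0.0, -np.sin(th), np.cos(th)]])

    # Representative NO inputs: best-fit-like angles, delta = 3pi/2, alpha_23 = pi
    U = pmns(0.59, 0.15, 0.84, 3*np.pi/2, np.pi, 0.0)
    m = np.array([0.0, np.sqrt(7.4e-5), np.sqrt(2.5e-3)]) * eV    # light masses in GeV
    M = np.array([1.0, 1.0 + 1e-8])                               # M1, M2 in GeV
    O = O_matrix(0.0, 4.0)                                        # omega = 0, xi = 4

    # Y_{alpha j} = i (sqrt(2)/v) sum_a U_{alpha a} sqrt(m_a) O_{j a} sqrt(M_j)
    Y = 1j * np.sqrt(2.0)/v * (U @ np.diag(np.sqrt(m)) @ O.T) @ np.diag(np.sqrt(M))

    # Tree-level seesaw cross-check: the singular values of m_nu should reproduce
    # the input light-neutrino masses (printed here in eV)
    m_nu = -(v**2/2) * Y @ np.diag(1.0/M) @ Y.T
    print(np.sort(np.sqrt(np.abs(np.linalg.eigvals(m_nu @ m_nu.conj().T)))) / eV)

    # Mixings Theta_{alpha j} ~ (v/sqrt(2)) Y_{alpha j}/M_j and the summed quantities
    Theta  = (v/np.sqrt(2.0)) * Y / M[None, :]
    Theta2 = np.sum(np.abs(Theta)**2, axis=1)     # (Theta_e^2, Theta_mu^2, Theta_tau^2)
    print("Theta^2_alpha =", Theta2, "  Theta^2 =", Theta2.sum())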
Low-Energy CP-Violation— Within the considered
CI parameterisation, the CP-violating matrices can be either U, O or both.
In the case of low-energy CP-violation (LECPV) we are interested in,
the only CP-violating matrix is U, with the CI matrix being CP-conserving.
LECPV can be achieved <cit.>
either by setting i) ξ=0 and ω≠ 0,
with real CI matrix;
or ii) ω = kπ, k=0,1,2, and ξ≠ 0, so that O_12O_13 (O_11O_12) in the NO (IO) case is purely imaginary.
Case ii) is associated with relatively large values of the mixings
Θ^2_α and Θ^2, as the condition |ξ|≫ 1 leads to an overall
exponential enhancement. Since we are interested in connecting with
experimental searches of heavy Majorana neutrinos, we focus the analysis on
the case with ω = kπ and ξ≠ 0. We stress that the
condition ω≠ kπ when ξ≠ 0
would result in a CP-violating CI matrix (CICPV) <cit.>.
To have LECPV, the phases in the PMNS matrix
should be CP-violating, i.e.,
δ≠ 0, π, and/or α_21≠ k_21π and/or
α_31≠ k_31π,
k_21 = 0, 1, 2, ..., k_31 = 0, 1, 2, .... It is also possible, however, that CP is violated even when U and O
are CP-conserving, but Y is not <cit.>.
In this case, CP is broken due to an
interplay between the PMNS and CI matrices in the CI parameterisation
of the Yukawa matrix. When ξ≠ 0 and ω = kπ, this can be realised
for the CP-conserving values of the PMNS phases satisfying, additionally,
α_23≠± (2n + 1)π (α_21≠ (2n + 1)π),
n = 0, 1, in the NO (IO) case <cit.>. For the purpose of
studying the case of LECPV uniquely from δ,
we shall consider α_23(21) = π or 3π.
CP-Violation in Leptogenesis— All the
CP-violating physical observables are expected to depend upon specific
basis-independent quantities written in terms of the flavour parameters of
the model, the so-called CP-violating invariants.
For instance, the magnitude of CP-violation in
ν_α→ν_β and
ν̅_α→ν̅_β
oscillations (α≠β)
is determined by the rephasing invariant J_CP = Im[U_μ 3 U^*_e3 U_e2 U^*_μ 2] <cit.>,
analogous to the Jarlskog invariant
in the quark sector <cit.>.
Several
CP-invariants can be derived in the type-I seesaw extension of the SM
starting from the Yukawa and heavy Majorana neutrino mass matrices
<cit.>, and those that are relevant to LG (at leading order and in
the case of two quasi-degenerate in mass
heavy Majorana neutrinos)
can be constructed out of the following two building blocks
(see <cit.> for a recent derivation):
J^LNC_α = Im[Y_α 1^*Y_α 2(Y^† Y)_21]
and
J^LNV_α = Im[Y_α 1^*Y_α 2(Y^† Y)_12].
At leading order, the BAU arising in LG is proportional to a combination of J^LNC_α and J_α^LNV weighted over the lepton flavours <cit.>. For LECPV with ω = kπ and ξ≠ 0,
we have that J^LNC_α = -J^LNV_α∝ Re[U_α 3(2)^*U_α 2(1)] sinh(2ξ) in the NO (IO) case.
For M_1≳ 100 GeV, outside the mass range of interest to this study,
low-scale LG has been shown to reconnect with the resonant freeze-out
mechanism <cit.>.
In the resonant LG scenario and within the Boltzmann equations formalism, the lepton asymmetry of flavour
α is proportional to the sum of the two invariants
<cit.>
J_α^LNC + J_α^LNV∝sin(2ω), up to corrections of 𝒪(Δ M/M_1), which vanishes when ω = kπ, contrary to what happens in the low-scale LG scenario via oscillations. This highlights the importance of the oscillation mechanism
in the considered scenario.
We further note that, in the IO case,
Re[U_e 2^*U_e 1]∝cos(α_21/2), so that, when
α_21 = π, 3π, J_e^LNC = J_e^LNV = 0,
J_μ^LNC = -J_τ^LNC and J_μ^LNV = -J_τ^LNV.
In this case, higher order CP-invariants can be relevant to LG, making the IO case more involved.
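Continuing the numerical sketch of the previous section, the invariants introduced above can be evaluated directly from U and Y; the short fragment below assumes the Im[...] definitions quoted in the text (0-based indices: e, μ, τ correspond to rows 0, 1, 2 and N_1, N_2 to columns 0, 1).

    # Oscillation invariant J_CP and the flavoured building blocks from U and Y
    J_CP  = np.imag(U[1, 2] * np.conj(U[0, 2]) * U[0, 1] * np.conj(U[1, 1]))

    YdY   = Y.conj().T @ Y                                      # (Y^dagger Y)
    J_LNC = np.imag(np.conj(Y[:, 0]) * Y[:, 1] * YdY[1, 0])     # one entry per flavour
    J_LNV = np.imag(np.conj(Y[:, 0]) * Y[:, 1] * YdY[0, 1])

    print("J_CP =", J_CP)
    print("J_LNC_alpha =", J_LNC)     # for omega = k*pi one finds J_LNC = -J_LNV
    print("J_LNV_alpha =", J_LNV)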
Results— We perform a numerical scan of
the parameter space of viable LG. To calculate the BAU in the scenario
of interest, we solve the momentum-averaged density matrix equations
<cit.> for the evolution of the lepton asymmetries
and heavy Majorana neutrino abundances. We consider the equations as
in <cit.> and make use of
the latest version of the Python package
<cit.>. We list in what follows the results of our numerical analysis.
* We show in Fig. <ref> the region in
the Θ^2-M_1 plane
where LG with LECPV from δ
is successful in reproducing
the observed value
of the BAU. For illustrative purposes, we choose δ = 3π/2,
α_23= π and ω = 0, vary Δ M/M_1 in the range
[10^-11, 10^-4] and focus on the NO case.
A qualitatively similar figure for the same choice of parameters
is obtained in the IO case, but not shown here.
The upper (lower) solid black line in the plot is the curve of maximal
(minimal) mixing Θ^2 compatible with viable LG.
The shaded blue area between the two black lines correspond to successful LG
for certain choices of Δ M/M_1 and δ. We paint in darker
(lighter) blue the regions of successful LG corresponding to larger (smaller)
values of Δ M /M_1.
We find that the extreme values of Θ^2 can be obtained for
Δ M/M_1≲ 10^-6, while, for larger splittings, the viable
region reduces in size, with the maximal (minimal) allowed mixing
taking smaller (larger) values.
The LG parameter space is bounded from below by the requirement of reproducing
the light neutrino masses (lower grey region) and from above by the
experimental
limits on the couplings of the heavy Majorana neutrinos to
the electron
<cit.>, the muon <cit.> and
the tauon <cit.> flavour. Numerous planned and proposed
experiments aim at improving the sensitivity to
these couplings
further <cit.>.
The total mixing is also constrained by the Big Bang Nucleosynthesis (BBN)
<cit.>. The reported limits and projections on
Θ^2, however, are currently based on the assumption that only
the mixing in a particular flavour α is non-zero, i.e. Θ^2=Θ_α^2 for either α = e, μ or τ.
In Fig. <ref>, as long as large mixings are
considered, i.e. |ξ|≫ 1, we find that LG is compatible with the
condition Θ^2_τ> Θ_μ^2>Θ_e^2 (see further).
For this reason, we only consider the bounds on Θ^2_τ when showing
the region excluded from past and present searches <cit.>
(upper grey region) and BBN <cit.> (yellow).
Moreover, we project the expected sensitivities on Θ^2_τ
of upcoming and proposed experiments <cit.>
(purple dot-dashed line).
The prospective sensitivity on Θ^2 of the
discussed FCC-ee <cit.> is also reported
(green dashed line).
* The maximal allowed values of Θ^2 compatible with viable LG with
LECPV from δ depend on the value of the Dirac phase.
For the case in Fig. <ref> with δ = 3π/2,
these are Θ^2 ≃ 9× 10^-6, 5× 10^-7, 6× 10^-9, 9× 10^-12 when M_1 = 0.1, 1, 10, 100 GeV, respectively.
By fixing δ = 195^∘ (345^∘), we find Θ^2 ≃ 2× 10^-6 (1.5× 10^-5), 9× 10^-8 (1.5× 10^-6), 2× 10^-9 (1.2× 10^-8), 6× 10^-12(4× 10^-11) at M_1 = 0.1, 1, 10, 100 GeV.
We compare these results with the case of
CICPV fixing ω = π/4 or 3π/4
(in this case the results depend only slightly on δ),
for which we get Θ^2 ≃ 3× 10^-5, 3× 10^-6,
2.5× 10^-8, 4× 10^-11 at M_1 = 0.1, 1, 10,
100 GeV (see also the results of
<cit.>). The differences in the values
obtained with LECPV from δ
and from CICPV
reveal a separation between the parameter spaces of successful
LG in the two cases.
The magnitude of this gap depends on δ and M_1.
* We show in Fig. <ref> the possible values of the mixing ratios
Θ^2_α/Θ^2 in a ternary plot. The four triangular regions
in the plot are obtained for α_23(21) = π and ω = 0, and
by marginalising over δ in the range [0,π],
(or, equivalently, [π,2π]), with the green and blue (yellow and red)
triangles corresponding respectively to ξ≥0 and ξ≤ 0 in the
NO (IO) case. For |ξ|≫ 1, the triangles reduce to the shorter
solid edges, while the intersection points correspond to ξ = 0.
The larger and fainter blue (red) region is obtained by varying
δ, α_23(21) and ω within their entire allowed ranges of
possible values. In the NO case, one has Θ_μ,τ^2>Θ^2_e, and,
depending on whether ξ≫ 1, ≪ -1 or ∼ 0, either
Θ^2_μ>Θ^2_τ, Θ^2_τ>Θ^2_μ or
Θ^2_τ∼Θ^2_μ.
* Concentrating on the NO case, we scan the LG space
over δ across the entire ranges of
masses and splittings considered. We find the results to be symmetric under the simultaneous change δ→δ±π and ξ→ -ξ. Moreover, the present η_B
can be reproduced with the correct sign only for i) ξ> 0 and
0 < δ < π, or ii) ξ< 0 and
π < δ < 2π. When large
mixings are considered, i.e. |ξ|≫ 1, the above two cases correspond
respectively to
* Θ^2_μ>Θ^2_τ> Θ^2_e with 0.005≲Θ^2_e/Θ^2≲ 0.12, 0.69≲Θ^2_μ/Θ^2≲ 0.76 and 0.19≲Θ^2_τ/Θ^2≲ 0.24;
* Θ^2_τ>Θ^2_μ> Θ^2_e with 0.005≲Θ^2_e/Θ^2 ≲ 0.12, 0.13≲Θ^2_μ/Θ^2≲ 0.16 and 0.75≲Θ^2_τ/Θ^2 ≲ 0.83.
The situation is more involved for the IO spectrum: a change of the sign of ξ modifies the mixings significantly, and the leading-order CP-invariant in the electron flavour vanishes for
LECPV from δ. The IO case will be discussed in more details elsewhere.
The results were obtained for ω = 0 and α_23(21)=π.
Everything would be the same for ω = π, 2π, while setting
α_23(21) = 3π would imply equivalent results provided that
the overall sign of ξ is changed.
Conclusions—
The results we have found indicate quite remarkably not only
that LG with low-energy CP-violation solely from δ
is viable in the mass range 0.1≤ M_1/GeV≤ 100, but also that it
is compatible with rather large values of Θ^2. As the sensitivity reaches of the proposed experiments extend into the region of viable LG over the entire considered mass range, these experiments could potentially probe the parameter space of the LG scenario discussed in this work. Moreover, we find
viable LG for broad ranges of δ values within 0 < δ < π and π < δ < 2π. Qualitatively similar results hold in the IO case as well.
We have found a correspondence between the sign of the BAU and that
of sinδ in the NO case, which is reflected in the differences in the
flavour hierarchies. More specifically, LG with LECPV
from δ is successful in reproducing the positive BAU for either 0 < δ < π and
Θ^2_μ>Θ^2_τ > Θ^2_e
or π < δ < 2π and
Θ^2_τ>Θ^2_μ > Θ^2_e. As the physical observables
at direct searches of heavy neutral leptons depend on the ratios
Θ^2_τ:Θ^2_μ:Θ^2_e, the two cases are phenomenologically
different. Possible future signatures favouring a certain flavour hierarchy
and a measurement of δ establishing whether
0 < δ < π or π < δ < 2π
could either support or falsify the scenario considered in this work.
Finally,
we have shown that there is a gap between the parameter spaces of LG
with LECPV and CICPV, with the separation depending on δ and M_1.
A measurement of δ and Θ^2 at a certain mass scale
in the associated gap would indicate the necessity of having additional
sources of CP-violation other than δ in low-scale LG via oscillations.
We thank S. Sandner
and B. Shuve for useful email exchanges. A.G. is grateful to the Kavli IPMU for the kind hospitality offered during the first part of this project. We acknowledge the use of computational resources from the parallel computing cluster of the Open Physics Hub (https://site.unibo.it/openphysicshub/enhttps://site.unibo.it/openphysicshub/en) at the Physics and Astronomy Department in Bologna. This work was supported in part by the European Union's Horizon research and innovation programme under the Marie Skłodowska-Curie grant agreements No. 860881-HIDDeN and No. 101086085-ASYMMETRY, and by the Italian INFN program on Theoretical Astroparticle Physics. S.T.P. acknowledges partial support from the World Premier International Research Center Initiative (WPI Initiative, MEXT), Japan.
References
N. Aghanim et al., Astron. Astrophys. 641, A6 (2020), arXiv:1807.06209.
R. J. Cooke, M. Pettini, and C. C. Steidel, Astrophys. J. 855, 102 (2018), arXiv:1710.11129.
D. Bodeker and W. Buchmuller, Rev. Mod. Phys. 93, 035004 (2021), arXiv:2009.07294.
M. Fukugita and T. Yanagida, Phys. Lett. B 174, 45 (1986).
V. A. Kuzmin, V. A. Rubakov, and M. E. Shaposhnikov, Phys. Lett. B 155, 36 (1985).
P. Minkowski, Phys. Lett. B 67, 421 (1977).
T. Yanagida, Conf. Proc. C7902131, 95 (1979).
M. Gell-Mann, P. Ramond, and R. Slansky, Conf. Proc. C790927, 315 (1979), arXiv:1306.4669.
S. Glashow, NATO Advanced Study Institutes Series 61, 687 (1980).
R. N. Mohapatra and G. Senjanovic, Phys. Rev. Lett. 44, 912 (1980).
A. Pilaftsis, Phys. Rev. D 56, 5431 (1997), arXiv:hep-ph/9707235.
A. Pilaftsis and T. E. J. Underwood, Nucl. Phys. B 692, 303 (2004), arXiv:hep-ph/0309342.
E. K. Akhmedov, V. A. Rubakov, and A. Y. Smirnov, Phys. Rev. Lett. 81, 1359 (1998), arXiv:hep-ph/9803255.
T. Asaka and M. Shaposhnikov, Phys. Lett. B 620, 17 (2005), arXiv:hep-ph/0505013.
J. Racker, M. Pena, and N. Rius, JCAP 07, 030 (2012), arXiv:1205.1948.
A. D. Sakharov, Sov. Phys. Usp. 5, 32 (1991) [Usp. Fiz. Nauk 161, 61 (1991)].
M. Shaposhnikov, Nucl. Phys. B 763, 49 (2007), arXiv:hep-ph/0605047.
T. Asaka, S. Eijima, and H. Ishida, JCAP 02, 021 (2012), arXiv:1112.5565.
L. Canetti, M. Drewes, T. Frossard, and M. Shaposhnikov, Phys. Rev. D 87, 093006 (2013), arXiv:1208.4607.
B. Shuve and I. Yavin, Phys. Rev. D 89, 075014 (2014), arXiv:1401.2459.
P. Hernández, M. Kekic, J. López-Pavón, J. Racker, and N. Rius, JHEP 10, 067 (2015), arXiv:1508.03676.
M. Drewes, B. Garbrecht, D. Gueter, and J. Klarić, JHEP 12, 150 (2016), arXiv:1606.06690.
P. Hernández, M. Kekic, J. López-Pavón, J. Racker, and J. Salvado, JHEP 08, 157 (2016), arXiv:1606.06719.
M. Drewes, B. Garbrecht, D. Gueter, and J. Klarić, JHEP 08, 018 (2017), arXiv:1609.09069.
T. Asaka, S. Eijima, H. Ishida, K. Minogawa, and T. Yoshii, Phys. Rev. D 96, 083010 (2017), arXiv:1704.02692.
J. Ghiglieri and M. Laine, JHEP 05, 132 (2017), arXiv:1703.06087.
M. Drewes, B. Garbrecht, P. Hernández, M. Kekic, J. Lopez-Pavon, J. Racker, N. Rius, J. Salvado, and D. Teresi, Int. J. Mod. Phys. A 33, 1842002 (2018), arXiv:1711.02862.
A. Abada, G. Arcadi, V. Domcke, M. Drewes, J. Klarić, and M. Lucente, JHEP 01, 164 (2019), arXiv:1810.12463.
J. Klarić, M. Shaposhnikov, and I. Timiryasov, Phys. Rev. Lett. 127, 111802 (2021), arXiv:2008.13771.
J. Klarić, M. Shaposhnikov, and I. Timiryasov, Phys. Rev. D 104, 055010 (2021), arXiv:2103.16545.
P. Hernandez, J. Lopez-Pavon, N. Rius, and S. Sandner, JHEP 12, 012 (2022), arXiv:2207.01651.
M. Drewes, J. Klarić, and J. López-Pavón, Eur. Phys. J. C 82, 1176 (2022), arXiv:2207.02742.
S. Sandner, P. Hernandez, J. Lopez-Pavon, and N. Rius, arXiv:2305.14427 (2023).
A. M. Abdullahi et al., J. Phys. G 50, 020501 (2023), arXiv:2203.08039.
NoStop
[Antel et al.(2023)Antel
et al.]Antel:2023hkf
author author C. Antel et al., in @noop booktitle
Workshop on Feebly-Interacting Particles (year 2023) http://arxiv.org/abs/2305.01715 arXiv:2305.01715 [hep-ph]
NoStop
[Pascoli et al.(2007a)Pascoli, Petcov, and Riotto]Pascoli:2006ie
author author S. Pascoli, author S. T. Petcov,
and author A. Riotto, 10.1103/PhysRevD.75.083511 journal journal Physical Review D volume 75, pages 083511 (year 2007a), http://arxiv.org/abs/hep-ph/0609125 arXiv:hep-ph/0609125 NoStop
[Pascoli et al.(2007b)Pascoli, Petcov, and Riotto]Pascoli:2006ci
author author S. Pascoli, author S. T. Petcov,
and author A. Riotto, 10.1016/j.nuclphysb.2007.02.019 journal journal Nuclear Physics B volume 774, pages 1 (year 2007b), http://arxiv.org/abs/hep-ph/0611338 arXiv:hep-ph/0611338 NoStop
[Blanchet and Di Bari(2007)]Blanchet:2006be
author author S. Blanchet and author P. Di Bari, 10.1088/1475-7516/2007/03/018 journal journal Journal of Cosmology and Astroparticle Physics volume 03, pages 018 (year 2007), http://arxiv.org/abs/hep-ph/0607330 arXiv:hep-ph/0607330 NoStop
[Branco et al.(2007)Branco,
Gonzalez Felipe, and Joaquim]Branco:2006ce
author author G. C. Branco, author R. Gonzalez Felipe, and author F. R. Joaquim, 10.1016/j.physletb.2006.12.060
journal journal Physics Letters B volume 645, pages 432 (year
2007), http://arxiv.org/abs/hep-ph/0609297
arXiv:hep-ph/0609297 NoStop
[Anisimov et al.(2008)Anisimov, Blanchet, and Di Bari]Anisimov:2007mw
author author A. Anisimov, author S. Blanchet,
and author P. Di Bari, 10.1088/1475-7516/2008/04/033 journal journal Journal of Cosmology and Astroparticle Physics volume 04, pages 033
(year 2008), http://arxiv.org/abs/0707.3024
arXiv:0707.3024 [hep-ph] NoStop
[Molinaro and Petcov(2009a)]Molinaro:2009lud
author author E. Molinaro and author S. T. Petcov, 10.1140/epjc/s10052-009-0985-3 journal journal The European Physical Journal C volume
61, pages 93 (year 2009a), http://arxiv.org/abs/0803.4120 arXiv:0803.4120 [hep-ph]
NoStop
[Molinaro and Petcov(2009b)]Molinaro:2008cw
author author E. Molinaro and author S. T. Petcov, 10.1016/j.physletb.2008.11.047 journal journal Physics Letters B volume
671, pages 60 (year 2009b), http://arxiv.org/abs/0808.3534 arXiv:0808.3534 [hep-ph]
NoStop
[Dolan et al.(2018)Dolan,
Dutka, and Volkas]Dolan:2018qpy
author author M. J. Dolan, author T. P. Dutka, and author R. R. Volkas, 10.1088/1475-7516/2018/06/012 journal
journal Journal of Cosmology and Astroparticle Physics volume 06, pages 012 (year 2018), http://arxiv.org/abs/1802.08373 arXiv:1802.08373 [hep-ph] NoStop
[Moffat et al.(2019)Moffat,
Pascoli, Petcov, and Turner]Moffat:2018smo
author author K. Moffat, author S. Pascoli,
author S. T. Petcov, and author J. Turner, 10.1007/JHEP03(2019)034 journal journal
Journal of High Energy Physics volume 03, pages 034 (year 2019), http://arxiv.org/abs/1809.08251 arXiv:1809.08251
[hep-ph] NoStop
[Granelli et al.(2021a)Granelli, Moffat, and Petcov]Granelli:2021fyc
author author A. Granelli, author K. Moffat, and author S. T. Petcov, 10.1007/JHEP11(2021)149 journal journal Journal of High Energy Physics volume 11, pages 149
(year 2021a), http://arxiv.org/abs/2107.02079 arXiv:2107.02079 [hep-ph] NoStop
[K. Nakamura and S.T. Petcov, in M. Tanabashi et al.
(Particle Data Group collaboration)(2018)]Tanabashi:2018oca
author author K. Nakamura and S.T.
Petcov, in M. Tanabashi et al. (Particle Data Group collaboration), 10.1103/PhysRevD.98.030001 journal journal Physical Review D volume 98, pages 030001 (year 2018)NoStop
[Capozzi et al.(2020)Capozzi, Di Valentino, Lisi, Marrone, Melchiorri, and Palazzo]Capozzi_2020
author author F. Capozzi, author E. Di Valentino, author E. Lisi,
author A. Marrone, author A. Melchiorri, and author A. Palazzo, 10.1103/PhysRevD.101.116013 journal journal
Physical Review D volume 101, pages
116013 (year 2020)NoStop
[Esteban et al.(2020)Esteban, Gonzalez-Garcia, Maltoni,
Schwetz, and Zhou]Esteban:2020cvm
author author I. Esteban, author M. C. Gonzalez-Garcia, author M. Maltoni, author T. Schwetz, and author A. Zhou, 10.1007/JHEP09(2020)178 journal journal
Journal of High Energy Physics volume 09, pages 178 (year 2020), http://arxiv.org/abs/2007.14792 arXiv:2007.14792
[hep-ph] NoStop
[nuf()]nufit
@noop title Nufit v5.2, howpublished <http://www.nu-fit.org>NoStop
[Abe et al.(2011)Abe et al.]T2K:2011qtm
author author K. Abe et al. (collaboration T2K), 10.1016/j.nima.2011.06.067 journal journal
Nuclear Instruments and Methods in Physics Research A volume 659, pages 106 (year 2011), http://arxiv.org/abs/1106.1238 arXiv:1106.1238 [physics.ins-det]
NoStop
[Acero et al.(2022)Acero
et al.]NOvA:2021nfi
author author M. A. Acero et al. (collaboration NOvA), 10.1103/PhysRevD.106.032004 journal journal
Physical Review D volume 106, pages
032004 (year 2022), http://arxiv.org/abs/2108.08219 arXiv:2108.08219 [hep-ex] NoStop
[Hewes et al.(2021)Hewes
et al.]DUNE:2021tad
author author V. Hewes et al. (collaboration DUNE), 10.3390/instruments5040031 journal journal
Instruments volume 5, pages 31
(year 2021), http://arxiv.org/abs/2103.13910
arXiv:2103.13910 [physics.ins-det] NoStop
[Bian et al.(2022)Bian et al.]Hyper-Kamiokande:2022smq
author author J. Bian et al. (collaboration Hyper-Kamiokande), in @noop booktitle Snowmass 2021 (year 2022) http://arxiv.org/abs/2203.02029 arXiv:2203.02029
[hep-ex] NoStop
[Bilenky et al.(1980)Bilenky, Hosek, and Petcov]Bilenky:1980cx
author author S. M. Bilenky, author J. Hosek, and author S. T. Petcov, 10.1016/0370-2693(80)90927-2 journal
journal Physics Letters B volume 94, pages 495 (year 1980)NoStop
[Casas and Ibarra(2001)]Casas:2001sr
author author J. A. Casas and author A. Ibarra, 10.1016/S0550-3213(01)00475-8 journal
journal Nuclear Physics B volume
618, pages 171 (year 2001), http://arxiv.org/abs/hep-ph/0103065 arXiv:hep-ph/0103065 NoStop
[Molinaro and Petcov(2009c)]Molinaro:2008rg
author author E. Molinaro and author S. T. Petcov, 10.1140/epjc/s10052-009-0985-3 journal journal The European Physical Journal C volume
61, pages 93 (year 2009c), http://arxiv.org/abs/0803.4120 arXiv:0803.4120 [hep-ph]
NoStop
[Krastev and Petcov(1988)]Krastev:1988yu
author author P. I. Krastev and author S. T. Petcov, 10.1016/0370-2693(88)90404-2 journal journal Physics Letters B volume
205, pages 84 (year 1988)NoStop
[Jarlskog(1985a)]Jarlskog:1985ht
author author C. Jarlskog, 10.1103/PhysRevLett.55.1039 journal journal Physical Review Letters volume 55, pages 1039 (year
1985a)NoStop
[Jarlskog(1985b)]Jarlskog:1985cw
author author C. Jarlskog, 10.1007/BF01565198 journal
journal Zeitschrift für Physik C volume 29, pages 491 (year 1985b)NoStop
[Bernabeu et al.(1986)Bernabeu, Branco, and Gronau]Bernabeu:1986fc
author author J. Bernabeu, author G. C. Branco, and author M. Gronau, 10.1016/0370-2693(86)90659-3 journal journal Physics Letters B volume
169, pages 243 (year 1986)NoStop
[Branco et al.(2001)Branco,
Morozumi, Nobre, and Rebelo]Branco:2001pq
author author G. C. Branco, author T. Morozumi,
author B. M. Nobre, and author M. N. Rebelo, 10.1016/S0550-3213(01)00425-4 journal journal Nuclear Physics B volume 617, pages 475 (year 2001), http://arxiv.org/abs/hep-ph/0107164 arXiv:hep-ph/0107164 NoStop
[Branco and Rebelo(2005)]Branco:2004hu
author author G. C. Branco and author M. N. Rebelo, 10.1088/1367-2630/7/1/086 journal
journal New Journal of Physics volume 7, pages 86 (year 2005), http://arxiv.org/abs/hep-ph/0411196 arXiv:hep-ph/0411196 NoStop
[Jenkins and Manohar(2008)]Jenkins:2007ip
author author E. E. Jenkins and author A. V. Manohar, 10.1016/j.nuclphysb.2007.09.031 journal journal Nuclear Physics B volume
792, pages 187 (year 2008), http://arxiv.org/abs/0706.4313 arXiv:0706.4313 [hep-ph] NoStop
[Jenkins and Manohar(2009)]Jenkins:2009dy
author author E. E. Jenkins and author A. V. Manohar, 10.1088/1126-6708/2009/10/094 journal journal Journal of High Energy Physics volume 10, pages 094 (year 2009), http://arxiv.org/abs/0907.4763 arXiv:0907.4763 [hep-ph] NoStop
[Wang et al.(2021)Wang,
Yu, and Zhou]Wang:2021wdq
author author Y. Wang, author B. Yu, and author S. Zhou, 10.1007/JHEP09(2021)053 journal journal Journal of High Energy Physics volume 09, pages 053 (year
2021), http://arxiv.org/abs/2107.06274 arXiv:2107.06274
[hep-ph] NoStop
[Yu and Zhou(2021)]Yu:2021cco
author author B. Yu and author S. Zhou, 10.1007/JHEP10(2021)017 journal journal Journal of High Energy Physics volume 10, pages 017
(year 2021), http://arxiv.org/abs/2107.11928
arXiv:2107.11928 [hep-ph] NoStop
[Flanz et al.(1995)Flanz,
Paschos, and Sarkar]Flanz:1994yx
author author M. Flanz, author E. A. Paschos,
and author U. Sarkar, 10.1016/0370-2693(94)01555-Q journal journal Physics Letters B volume 345, pages 248 (year 1995), note [Erratum:
Phys.Lett.B 384, 487–487 (1996), Erratum: Phys.Lett.B 382, 447–447
(1996)], http://arxiv.org/abs/hep-ph/9411366
arXiv:hep-ph/9411366 NoStop
[Covi and Roulet(1997)]Covi:1996fm
author author L. Covi and author E. Roulet, https://doi.org/10.1016/S0370-2693(97)00287-6 journal journal Physics Letters B volume 399, pages 113 (year 1997), http://arxiv.org/abs/hep-ph/9611425 hep-ph/9611425 NoStop
[Buchmüller and Plümacher(1998)]Buchmuller:1997yu
author author W. Buchmüller and author M. Plümacher, https://doi.org/10.1016/S0370-2693(97)01548-7 journal
journal Physics Letters B volume
431, pages 354 (year 1998), http://arxiv.org/abs/hep-ph/9710460 hep-ph/9710460 NoStop
[Hambye and Teresi(2017)]Hambye:2017elz
author author T. Hambye and author D. Teresi, 10.1103/PhysRevD.96.015031 journal journal Physical Review D volume 96, pages 015031 (year 2017), http://arxiv.org/abs/1705.00016 arXiv:1705.00016 [hep-ph]
NoStop
[Covi et al.(1996)Covi,
Roulet, and Vissani]COVI1996169
author author L. Covi, author E. Roulet, and author F. Vissani, https://doi.org/10.1016/0370-2693(96)00817-9 journal
journal Physics Letters B volume
384, pages 169 (year 1996), http://arxiv.org/abs/hep-ph/9605319 hep-ph/9605319 NoStop
[Granelli et al.(2021b)Granelli, Moffat, and Petcov]Granelli:2020ysj
author author A. Granelli, author K. Moffat, and author S. T. Petcov, 10.1016/j.nuclphysb.2021.115597 journal
journal Nuclear Physics B volume 973, pages 115597 (year 2021b), http://arxiv.org/abs/2009.03166 arXiv:2009.03166 [hep-ph] NoStop
[Canetti et al.(2013b)Canetti, Drewes,
Frossard, and Shaposhnikov]Canetti:2012kh
author author L. Canetti, author M. Drewes,
author T. Frossard, and author M. Shaposhnikov, 10.1103/PhysRevD.87.093006 journal journal Physical Review D volume 87, pages 093006 (year 2013b), http://arxiv.org/abs/1208.4607 arXiv:1208.4607 [hep-ph] NoStop
[Ghiglieri and Laine(2018)]Ghiglieri:2017csp
author author J. Ghiglieri and author M. Laine, 10.1007/JHEP02(2018)078 journal
journal Journal of High Energy Physics volume 02, pages 078 (year 2018), http://arxiv.org/abs/1711.08469 arXiv:1711.08469 [hep-ph] NoStop
[Eijima et al.(2019)Eijima,
Shaposhnikov, and Timiryasov]Eijima:2018qke
author author S. Eijima, author M. Shaposhnikov, and author I. Timiryasov, 10.1007/JHEP07(2019)077 journal journal Journal of High Energy Physics volume 07, pages 077 (year 2019), http://arxiv.org/abs/1808.10833 arXiv:1808.10833 [hep-ph]
NoStop
[Granelli et al.(2021c)Granelli, Moffat,
Perez-Gonzalez, Schulz, and Turner]Granelli:2020pim
author author A. Granelli, author K. Moffat,
author Y. F. Perez-Gonzalez,
author H. Schulz, and author J. Turner, 10.1016/j.cpc.2020.107813 journal journal
Computer Physics Communications volume 262, pages 107813 (year 2021c), http://arxiv.org/abs/2007.09150 arXiv:2007.09150 [hep-ph] NoStop
[Granelli et al.(2023)Granelli, Leslie, Perez-Gonzalez,
Schulz, Shuve, Turner, and Walker]Granelli:2023vcm
author author A. Granelli, author C. Leslie,
author Y. F. Perez-Gonzalez,
author H. Schulz, author B. Shuve, author
J. Turner, and author
R. Walker, 10.1016/j.cpc.2023.108834 journal journal
Computer Physics Communications volume 291, pages 108834 (year 2023c), http://arxiv.org/abs/2301.05722 arXiv:2301.05722 [hep-ph] NoStop
[Bergsma et al.(1985)Bergsma
et al.]CHARM:1985anb
author author F. Bergsma et al. (collaboration CHARM), 10.1016/0370-2693(85)90400-9 journal journal Physics Letters B volume 157, pages 458 (year 1985)NoStop
[Abreu et al.(1997)Abreu
et al.]DELPHI:1996qcc
author author P. Abreu et al. (collaboration DELPHI), 10.1007/s002880050370 journal journal
Zeitschrift für Physik C volume 74, pages 57
(year 1997), note [Erratum: Z.Phys.C 75, 580
(1997)]NoStop
[Abe et al.(2019)Abe et al.]T2K:2019jwa
author author K. Abe et al. (collaboration T2K), 10.1103/PhysRevD.100.052006 journal journal
Physical Review D volume 100, pages
052006 (year 2019), http://arxiv.org/abs/1902.07598 arXiv:1902.07598 [hep-ex] NoStop
[Acciarri et al.(2021)Acciarri et al.]ArgoNeuT:2021clc
author author R. Acciarri et al. (collaboration ArgoNeuT), 10.1103/PhysRevLett.127.121801 journal journal Physical Review Letters volume 127, pages 121801 (year 2021), http://arxiv.org/abs/2106.13684 arXiv:2106.13684 [hep-ex] NoStop
[Lees et al.(2023)Lees et al.]BaBar:2022cqj
author author J. P. Lees et al. (collaboration BaBar), 10.1103/PhysRevD.107.052009 journal journal
Physical Review D volume 107, pages
052009 (year 2023), http://arxiv.org/abs/2207.09575 arXiv:2207.09575 [hep-ex] NoStop
[Barouki et al.(2022)Barouki, Marocco, and Sarkar]Barouki:2022bkt
author author R. Barouki, author G. Marocco, and author S. Sarkar, 10.21468/SciPostPhys.13.5.118 journal journal SciPost Physics volume 13, pages 118 (year 2022), http://arxiv.org/abs/2208.00416 arXiv:2208.00416 [hep-ph] NoStop
[Sabti et al.(2020)Sabti,
Magalich, and Filimonova]Sabti:2020yrt
author author N. Sabti, author A. Magalich, and author A. Filimonova, 10.1088/1475-7516/2020/11/056 journal
journal Journal of Cosmology and Astroparticle Physics volume 11, pages 056 (year 2020), http://arxiv.org/abs/2006.07387 arXiv:2006.07387 [hep-ph] NoStop
[Boyarsky et al.(2021)Boyarsky, Ovchynnikov, Ruchayskiy, and Syvolap]Boyarsky:2020dzc
author author A. Boyarsky, author M. Ovchynnikov, author O. Ruchayskiy, and author V. Syvolap, 10.1103/PhysRevD.104.023517 journal journal Physical Review D volume
104, pages 023517 (year 2021), http://arxiv.org/abs/2008.00749 arXiv:2008.00749 [hep-ph] NoStop
[Ariga et al.(2019)Ariga
et al.]FASER:2018eoc
author author A. Ariga et al. (collaboration FASER), 10.1103/PhysRevD.99.095011 journal journal Physical Review D volume 99, pages 095011
(year 2019), http://arxiv.org/abs/1811.12522
arXiv:1811.12522 [hep-ph] NoStop
[Dib et al.(2020)Dib,
Helo, Nayak, Neill,
Soffer, and Zamora-Saa]Dib:2019tuj
author author C. O. Dib, author J. C. Helo,
author M. Nayak, author N. A. Neill, author
A. Soffer, and author
J. Zamora-Saa, 10.1103/PhysRevD.101.093003 journal journal
Physical Review D volume 101, pages
093003 (year 2020), http://arxiv.org/abs/1908.09719 arXiv:1908.09719 [hep-ph] NoStop
[Aielli et al.(2020)Aielli
et al.]Aielli:2019ivi
author author G. Aielli et al., 10.1140/epjc/s10052-020-08711-3
journal journal The European Physical Journal C volume 80, pages 1177 (year
2020), http://arxiv.org/abs/1911.00481 arXiv:1911.00481
[hep-ex] NoStop
[Batell et al.(2021)Batell,
Evans, Gori, and Rai]Batell:2020vqn
author author B. Batell, author J. A. Evans,
author S. Gori, and author M. Rai, 10.1007/JHEP05(2021)049 journal journal Journal of High Energy Physics volume 05, pages 049 (year
2021), http://arxiv.org/abs/2008.08108 arXiv:2008.08108
[hep-ph] NoStop
[Cortina Gil et al.(2022)Cortina Gil et al.]HIKE:2022qra
author author E. Cortina Gil et al. (collaboration HIKE), @noop
(year 2022), http://arxiv.org/abs/2211.16586
arXiv:2211.16586 [hep-ex] NoStop
[Aberle et al.(2022)Aberle
et al.]Aberle:2839677
author author O. Aberle et al. (collaboration SHiP), https://cds.cern.ch/record/2839677 title BDF/SHiP at
the ECN3 high-intensity beam facility, type Tech. Rep. (institution CERN, address Geneva, year 2022)NoStop
[Alpigiani et al.(2020)Alpigiani et al.]MATHUSLA:2020uve
author author C. Alpigiani et al. (collaboration MATHUSLA), @noop (year 2020), http://arxiv.org/abs/2009.01693 arXiv:2009.01693 [physics.ins-det]
NoStop
[Abada et al.(2019b)Abada et al.]FCC:2018byv
author author A. Abada et al. (collaboration FCC), 10.1140/epjc/s10052-019-6904-3 journal journal
The European Physical Journal C volume 79, pages 474 (year 2019b)NoStop
[Abada et al.(2019c)Abada et al.]FCC:2018evy
author author A. Abada et al. (collaboration FCC), 10.1140/epjst/e2019-900045-4 journal journal The
European Physical Journal Special Topics volume
228, pages 261 (year
2019c)NoStop
[Bernardi et al.(1988)Bernardi et al.]Bernardi:1987ek
author author G. Bernardi et al., 10.1016/0370-2693(88)90563-1
journal journal Physics Letters B volume 203, pages 332 (year
1988)NoStop
[Liventsev et al.(2013)Liventsev et al.]Belle:2013ytx
author author D. Liventsev et al. (collaboration Belle), 10.1103/PhysRevD.87.071102 journal journal Physical Review D volume 87, pages 071102 (year 2013), note [Erratum:
Phys.Rev.D 95, 099903 (2017)], http://arxiv.org/abs/1301.1105
arXiv:1301.1105 [hep-ex] NoStop
[Aguilar-Arevalo et al.(2018)Aguilar-Arevalo et al.]PIENU:2017wbj
author author A. Aguilar-Arevalo et al. (collaboration PIENU), 10.1103/PhysRevD.97.072012 journal journal Physical Review D volume 97, pages 072012 (year 2018), http://arxiv.org/abs/1712.03275 arXiv:1712.03275 [hep-ex] NoStop
[Sirunyan et al.(2018)Sirunyan et al.]CMS:2018iaf
author author A. M. Sirunyan et al. (collaboration CMS), 10.1103/PhysRevLett.120.221801 journal journal Physical Review Letters volume 120, pages 221801 (year 2018), http://arxiv.org/abs/1802.02965 arXiv:1802.02965 [hep-ex] NoStop
[Aad et al.(2019)Aad et al.]ATLAS:2019kpx
author author G. Aad et al. (collaboration ATLAS), 10.1007/JHEP10(2019)265 journal journal Journal of High Energy Physics volume 10, pages 265 (year
2019), http://arxiv.org/abs/1905.09787 arXiv:1905.09787
[hep-ex] NoStop
[Cortina Gil et al.(2021a)Cortina Gil et al.]NA62:2020xlg
author author E. Cortina Gil et al. (collaboration NA62), 10.1007/JHEP03(2021)058 journal journal
Journal of High Energy Physics volume 03, pages 058 (year 2021a), http://arxiv.org/abs/2011.11329
arXiv:2011.11329 [hep-ex] NoStop
[Tumasyan et al.(2022)Tumasyan et al.]CMS:2022fut
author author A. Tumasyan et al. (collaboration CMS), 10.1007/JHEP07(2022)081 journal journal
Journal of High Energy Physics volume 07, pages 081 (year 2022), http://arxiv.org/abs/2201.05578 arXiv:2201.05578
[hep-ex] NoStop
[ATL(2022)]ATLAS:2022atq
@noop (year 2022), http://arxiv.org/abs/2204.11988 arXiv:2204.11988 [hep-ex] NoStop
[Abratenko et al.(2020)Abratenko et al.]MicroBooNE:2019izn
author author P. Abratenko et al. (collaboration MicroBooNE), 10.1103/PhysRevD.101.052001 journal journal Physical Review D volume 101, pages 052001 (year 2020), http://arxiv.org/abs/1911.10545 arXiv:1911.10545 [hep-ex] NoStop
[Cortina Gil et al.(2021b)Cortina Gil et al.]NA62:2021bji
author author E. Cortina Gil et al. (collaboration NA62), 10.1016/j.physletb.2021.136259 journal journal Physics Letters B volume 816, pages 136259 (year 2021b), http://arxiv.org/abs/2101.12304 arXiv:2101.12304 [hep-ex] NoStop
[Abratenko et al.(2022)Abratenko et al.]MicroBooNE:2022ctm
author author P. Abratenko et al. (collaboration MicroBooNE), 10.1103/PhysRevD.106.092006 journal journal Physical Review D volume 106, pages 092006 (year 2022), http://arxiv.org/abs/2207.03840 arXiv:2207.03840 [hep-ex] NoStop
[Beacham et al.(2020)Beacham
et al.]Beacham:2019nyx
author author J. Beacham et al., 10.1088/1361-6471/ab4cd2
journal journal Journal of Physics G volume 47, pages 010501 (year 2020), http://arxiv.org/abs/1901.09966 arXiv:1901.09966 [hep-ex]
NoStop
[Altmannshofer et al.(2022)Altmannshofer et al.]PIONEER:2022yag
author author W. Altmannshofer et al. (collaboration PIONEER), @noop (year 2022), http://arxiv.org/abs/2203.01981 arXiv:2203.01981 [hep-ex] NoStop
[Blinov et al.(2022)Blinov,
Kowalczyk, and Wynne]Blinov:2021say
author author N. Blinov, author E. Kowalczyk, and author M. Wynne, 10.1007/JHEP02(2022)036 journal journal
Journal of High Energy Physics volume 02, pages 036 (year 2022), http://arxiv.org/abs/2112.09814 arXiv:2112.09814
[hep-ph] NoStop
[Alviggi et al.(2022)Alviggi
et al.]Alviggi:2839484
author author M. Alviggi et al. (collaboration SHADOWS), https://cds.cern.ch/record/2839484 title SHADOWS
Letter of Intent, type Tech. Rep. (institution
CERN, address Geneva, year 2022)NoStop
|
http://arxiv.org/abs/2307.06343v1 | 20230712132801 | Sequential Experimental Design for X-Ray CT Using Deep Reinforcement Learning | [
"Tianyuan Wang",
"Felix Lucka",
"Tristan van Leeuwen"
] | eess.IV | [
"eess.IV",
"cs.CV",
"cs.LG"
] |
Tianyuan Wang [1], Felix Lucka [1], Tristan van Leeuwen [1,2]
[1] Computational Imaging, Centrum Wiskunde & Informatica, Amsterdam, The Netherlands
[2] Utrecht University, Mathematical Institute, Utrecht, 3584 CD, The Netherlands
Sequential Experimental Design for X-Ray CT Using Deep Reinforcement Learning
August 12, 2023
In X-ray Computed Tomography (CT), projections from many angles are acquired and used for 3D reconstruction. To make CT suitable for in-line quality control, reducing the number of angles while maintaining reconstruction quality is necessary. Sparse-angle tomography is a popular approach for obtaining 3D reconstructions from limited data. To optimize its performance, one can adapt scan angles sequentially to select the most informative angles for each scanned object. Mathematically, this corresponds to solving an optimal experimental design (OED) problem. OED problems are high-dimensional, non-convex, bi-level optimization problems that cannot be solved online, i.e., during the scan. To address these challenges, we pose the OED problem as a partially observable Markov decision process in a Bayesian framework, and solve it through deep reinforcement learning. The approach learns efficient non-greedy policies to solve a given class of OED problems through extensive offline training rather than solving a given OED problem directly via numerical optimization. As such, the trained policy can successfully find the most informative scan angles online. We use a policy training method based on the Actor-Critic approach and evaluate its performance on 2D tomography with synthetic data.
X-ray CT, optimal experimental design, adaptive angle selection, reinforcement learning.
§ INTRODUCTION
X-ray Computed Tomography (CT) is a non-destructive method widely used to evaluate the quality of complex internal structures in industrial parts. However, there is a trade-off between high-quality reconstruction and scanning speed, as a time-consuming full 360-degree rotation is typically needed to obtain comprehensive information. Kazantsev <cit.> and Varga et al. <cit.> have pointed out that angles are not equally informative. Therefore, reducing the number of angles by extracting more informative data can help to improve the trade-off between reconstruction quality and scanning efficiency. This trade-off can be formulated as a bi-level optimization problem with respect to angle parameters <cit.>. The low-level optimization problem formulates the image reconstruction based on the chosen, limited projection data, while the high-level optimization problem finds angles that optimize the reconstruction quality.
Bayesian Optimal Experimental Design (OED) is a mathematical framework that enables the acquisition of informative experimental designs while minimizing experimental costs <cit.>. In Bayesian OED, the prior distribution represents the current belief about the underlying ground truth, while the posterior distribution refers to the updated belief after taking into account the new measurements obtained through the selected design. The difference between the prior and updated posterior reflects the change in uncertainty or equivalently the amount of information gained from the experiments.
In simultaneous experimental design, we apply this procedure to select the optimal viewing angles in a single step, while in sequential experimental design, the goal is to select the viewing angles step-by-step, based on the projection data that has been collected so far. It is this variant of the experimental design problem that we are interested in, as it can adapt the selected viewing angles to the object under investigation.
Two widely used methods for measuring the uncertainty reduction or information gain in Bayesian OED are D-optimality and A-optimality <cit.>. D-optimality measures the information gain using the Kullback-Leibler divergence to compare the posterior and prior distributions, while A-optimality computes the expected error between the underlying ground truth and the reconstruction.
However, the high dimensionality, computational cost, and typically unknown or unobtainable prior distribution prevent the direct application of the aforementioned techniques to sequential optimal design in real-time CT imaging.
Several methods have been proposed to address these issues. Implicit prior information has been the focus of some researchers. To this end, Batenburg et al. <cit.> and Dabravolski et al. <cit.> used a set of template images comprising Gaussian blobs to represent prior distribution samples and introduced an upper bound <cit.> to approximate the information gain, indicating the solution set's diameter. A Gaussian distribution has been used as a tractable choice for the prior distribution in <cit.>. Burger et al. sequentially selected the projection angle and the source-receiver pair's lateral position for a specific region of interest and explored Bayesian A- and D-optimality, updating the posterior covariance matrix and mean after each experimental step. Helin et al. <cit.> extended this work to non-Gaussian distributions and employed a Total Variation (TV) prior to enhance edges. In practice, a lagged diffusivity iteration generated a series of Gaussian approximations for the TV prior. Additionally, Barbano et al. <cit.> proposed a linearized deep image prior that incorporated information from the pilot measurements as a data-dependent prior. They then used a conjugate Gaussian-linear model to determine the next informative angles sequentially. However, these methods can be time-consuming and are not well-suited for fast in-line applications.
In an industrial context, the use of Computer-Aided Design (CAD) models is a common form of prior information. CAD models enable offline optimization by allowing angle acquisition using simulation tools. Fischer et al. <cit.> used a CAD model of the object to optimize task-specific trajectories based on the detectability index proposed by Stayman et al. <cit.>. The detectability index is computed using the modulation transfer function and noise power spectrum to evaluate its fitness with a user-defined frequency template. In addition to task-specific optimization, Herl et al. <cit.> considered data completeness optimization using a Tuy-based metric. Meanwhile, Victor et al. <cit.> obtained a complete set of angles using either a simulation model or a CAD model and then used the discrete empirical interpolation method and related variants to sub-sample from the set of angles. Once a trajectory has been optimized offline using a CAD model, it can be applied quickly in the real application. Nonetheless, the alignment of the optimized trajectory outcome to the real-world coordinate system through proper registration is crucial before executing the real scan <cit.>. Hence, these methods lack genuine adaptability in in-line applications.
For the methods discussed above, achieving adaptivity while maintaining a fast scan for in-line settings still presents a significant challenge. Additionally, informative angles are typically selected in a greedy manner after evaluating all available angle candidates. In the field of medical CT, Shen et al. <cit.> addressed this issue by training a deep reinforcement learning agent on a medical CT image data set to personalize the scanning strategy sequentially. They utilized a gated recurrent unit as a policy network that maps all the previous measurements to a probability distribution over discrete angles and a radiation dose fraction. The next angle is chosen by sampling from this distribution. This way, around 60 angles are chosen sequentially.
We also leverage deep reinforcement learning to address the aforementioned challenges in our work but we focus on the application of industrial, in-line CT inspection instead of medical CT: We are considering very few scan angles (< 10), simple image features, but a potentially large inter-subject variation due to arbitrary placement and changing samples. For these reasons, we diverge from <cit.> by using the reconstruction space as the main state variable, avoiding problems caused by the increasing number of measurements. Due to this we use very different network architectures to parameterize the learned policy. By employing a deep reinforcement learning approach, we can train the policy to facilitate adaptive angle selection, offering a more efficient alternative to solving the high-dimensional, non-convex, bi-level optimization problem. Figure (<ref>) illustrates the proposed reinforcement learning approach for X-ray CT to solve this OED problem.
The contributions of this work include a novel formulation of the angle selection problem as a POMDP, the use of the Actor-Critic approach from the field of reinforcement learning to address the OED problem, and the development of an adaptive approach that can be applied quickly in in-line CT settings.
The structure of this paper is as follows. In section 2, we present the background on CT reconstruction, Bayesian OED, and reinforcement learning. In section 3, we discuss the formulation of this experimental design as a POMDP and describe the computation of the policy gradient using the Actor-Critic approach. We provide a set of numerical experiments in section 4 to assess the performance of our proposed method. Finally, in section 5 and section 6, we discuss and summarize our findings.
§ BACKGROUND
§.§ CT Reconstruction
In sparse-angle tomography, the challenge lies in accurately reconstructing an image from incomplete measurement data, where only a limited number of angles are acquired. This inverse problem is severely ill-posed, meaning that small errors in the measurements could result in a large reconstruction error, or that several reconstructions are consistent with the measurements <cit.>. The Filtered Back-Projection (FBP) algorithm, a traditional analytical reconstruction method, has limitations when used for sparse-angle tomography. It assumes that the measurements are acquired with low noise over the full angular range, resulting in inferior reconstructions when applied to limited data <cit.>.
To address this challenge, it is necessary to incorporate prior information into the reconstruction algorithm to compensate for the limited data <cit.>. Regularised algebraic reconstruction methods have been proposed to incorporate such prior information efficiently. When applied to limited data, these can result in more stable and accurate reconstructions.
Therefore, we represent the object that we would like to reconstruct as x∈ℝ^n where n ∈ℕ represents the number of pixels or voxels. A single noisy measurement y at angle θ is generated as
y(θ) = A(θ)x + ϵ,
with ϵ∼𝒩(0, σ^2 I) and A(θ) is a discretization of the Radon transform along angle θ.
The reconstructed image from M measurements along angles θ = {θ_1, …, θ_M} is obtained via
x(θ) = argmin_x 1/2∑_k=1^M ‖A(θ_k)x - y(θ_k)‖_2^2 + α L(x),
where L(x) is a regularization term representing prior information for x.
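To make the reconstruction step concrete, the following is a minimal NumPy sketch of a SIRT-type iteration with box constraints, the variant used in the experiments below; here the box constraint plays the role of the prior term L(x), the matrix A stands for the discretized Radon transform stacked over the selected angles, and all names are illustrative rather than taken from the actual implementation.

import numpy as np

def sirt_box(A, y, n_iter=150, lo=0.0, hi=1.0):
    """SIRT iterations with box constraints [lo, hi].

    A : (m, n) array, discretized Radon transform for the chosen angles
    y : (m,) array, stacked noisy projections
    """
    m, n = A.shape
    # Diagonal preconditioners: inverse row and column sums of A.
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # (m,)
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # (n,)
    x = np.zeros(n)
    for _ in range(n_iter):
        residual = y - A @ x
        x = x + C * (A.T @ (R * residual))
        x = np.clip(x, lo, hi)                   # box constraint acts as the prior
    return x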
§.§ Bayesian OED
Bayesian OED is a statistical framework that optimizes the design of an experiment by trading off the information gain with the cost of an experiment.
In the context of X-ray CT experimental design, the utility function in Bayesian OED measures the reconstruction quality, where the true underlying ground truth x is estimated by x(θ) from measurements y∈𝒴 obtained under experimental conditions specified by θ∈𝒟. The optimal design θ^* maximizes the expectation of the utility function over the design space 𝒟 with respect to the measured data y and the model parameter x.
Sequential OED is an approach that adjusts the design parameters as new data is acquired. This is achieved by treating the experiment as a sequential decision-making process, where the aim is to select the most informative design parameters based on the observed data to maximize the utility function. In the k_th step of an X-ray CT experiment, the process involves generating observed data using a data model π_data(y_k|x;θ_k) (as shown in Equation (<ref>)), updating the posterior distribution of x given the observed data up to step k (denoted by π_post(x_k|y_1:k; θ_1:k) in Equation (<ref>)), obtaining the reconstruction for the underlying ground truth. Subsequently, the most informative angle θ_k+1 is selected as the next design parameter to be used, which maximizes the utility function.
§.§ Reinforcement Learning
Reinforcement learning is a widely used approach for sequential decision-making, allowing agents to learn how to map the current state to actions that maximize the total reward for the entire process <cit.>. Since it considers the long-term effects of actions, reinforcement learning can realize non-greedy sequential decision-making. This approach is based on the Markov Decision Processes (MDPs) framework {𝒮, 𝒜, π_t, R }, which consists of a set of states 𝒮, a set of actions 𝒜, a transition operator π_t representing the conditional probability distribution from the current state to the next state after selecting an action, and a reward function: 𝒮×𝒜→ℝ that provides feedback from the environment at each time step.
A policy π_policy in reinforcement learning is a mapping from the current state to a probability distribution of actions: π_policy(a_k|s_k).
In MDPs with a finite number of states, the process begins from an initial state s_1 with a probability distribution π_s(s_1). The agent follows a policy that maps the initial state to the first action, leading the agent to transition to the next state and receive a reward from the environment. This process is repeated until a terminal state is reached, generating a trajectory or an episode τ = (s_1, a_1, r_1,...,s_M, a_M, r_M) of M steps.
In practical applications, the problems encountered may not conform to the idealized framework of a reinforcement learning problem. To address this, POMDPs are utilized. A POMDP can be defined as a tuple {𝒮, 𝒜, 𝒪, π_t, π_e, R}, where two additional components are included in addition to the ones in the standard MDP formulation: a finite observation set 𝒪 and an observation function π_e that defines the conditional probability distribution over the observation in the underlying state after executing an action. Since the agent has limited knowledge about the underlying state in POMDPs, the policy must either map historical observations to the next action or extract information from historical observations in the form of a belief state.
Reinforcement learning aims to find the optimal policy with parameters w, denoted as π^*_policy(.;w), that generates the trajectory or episode τ to maximize the expected total reward. The objective function for reinforcement learning can be expressed as follows:
J(w) = 𝔼_τ∼π_chain∑_k=1^Mγ^k-1 r_k,
where π_chain = π_s(s_1) ∏_k=1^Mπ_policy(a_k|s_k;w)π_t(s_k+1|s_k, a_k).
The objective function measures the expected total reward with a discount factor γ∈ (0,1] to account for future uncertainty, and π_chain represents the trajectory generation process by the policy.
The total rewards for one trajectory are obtained after the agent completes an episode. The expectation over all trajectories can be estimated by sampling many trajectories. To enhance efficiency, some reinforcement learning approaches utilize value functions that evaluate the expected future benefits from the k-th step onwards when following the policy.
V(s_k) = 𝔼_π_policy[∑_k^'=k^Mγ^k^'-k r_k^'|s_k].
The state-value function V(s_k) quantifies the expected cumulative reward from state s_k, taking into account all possible trajectories following the current policy that start from this state.
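As a small illustration of the quantities defined above, the discounted total reward of a sampled episode and a simple Monte-Carlo estimate of the objective J(w) could be computed as follows (plain Python; the episode sampler is assumed to be given):

def discounted_return(rewards, gamma=0.99):
    """Compute sum_k gamma^(k-1) r_k for one episode, rewards = [r_1, ..., r_M]."""
    g, discount = 0.0, 1.0
    for r in rewards:
        g += discount * r
        discount *= gamma
    return g

def estimate_objective(sample_episode, n_episodes=100, gamma=0.99):
    """Monte-Carlo estimate of J(w): average discounted return over rollouts.

    `sample_episode` is a hypothetical callable that runs the current policy
    once and returns the list of rewards of that episode.
    """
    returns = [discounted_return(sample_episode(), gamma) for _ in range(n_episodes)]
    return sum(returns) / n_episodes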
§ METHODS
§.§ Sequential OED as a POMDP
We take the reconstruction as a belief state rather than considering measurements as the state, as done in <cit.>. To formulate the problem, we adopt a Bayesian OED framework and model it as a POMDP. The POMDP formulation for the X-ray CT experiment is defined as follows:
* Observation space 𝒪: The observation space is defined as the set of measurements generated by the data model expressed in Equation (<ref>).
* State space 𝒮: The ground truth x represents the underlying state. The current reconstruction (belief state) of the underlying state, denoted by x(θ_1:k), is obtained using the SIRT algorithm with box constraints <cit.> as specified in Equation (<ref>). For ease of notation, we use x_k to represent the reconstruction at the k-th step. In addition, we maintain a vector b_k to keep track of the angles that have been selected before the k-th experiment to prevent repeating the same angles.
* Action space 𝒜: The action space is a discrete design space consisting of 180 integer angles from the range [0^∘, 180^∘).
* Transition function π_t and observation function π_e: The transition function π_t is deterministic, as the underlying state remains unchanged. On the other hand, the data model π_e given by Equation (<ref>) serves as the observation function, from which we only consider measurement samples.
* Reward function R: The reward function is defined based on the PSNR value between the reconstruction obtained after selecting the angle θ_k and the ground truth. Two reward settings are considered, both of which correspond to A-optimality in Bayesian OED (a short code sketch of both settings is given after this list):
r_A(x_k^(i), θ_k^(i)) =
PSNR(x_k^(i), x^(i))   if PSNR(x_k^(i), x^(i)) > e or k > 180
-λ   otherwise
where e is the expected value for the PSNR. An episode is terminated once the expected PSNR value or the maximal number of angles is reached. Because the size of the design space is 180, we force each episode to have at most 180 steps to avoid endlessly long episodes. Otherwise, the agent receives a penalty λ.
* End-to-end setting: The reward is given as follows:
R(x_k+1, x)=
PSNR(x_k+1, x) if k = M
0 otherwise
If the fixed number of angles M is reached, the episode terminates, and the final PSNR value is given. Otherwise, the agent receives a reward of 0.
* Incremental setting: The reward is given as follows:
R(x_k+1, x_k, x)=
PSNR(x_k+1, x) - PSNR(x_k, x)
The reward represents the improvement in the current reconstruction quality compared to the previous step.
* Initial angles and state: Due to the random rotation and scaling in the training set, the probability distribution for the initial state is uniform.
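As announced in the reward-function item above, the two PSNR-based reward settings could be implemented along the following lines (a minimal sketch; the PSNR helper assumes images scaled to [0,1], as for our binary phantoms):

import numpy as np

def psnr(recon, gt, data_range=1.0):
    """Peak signal-to-noise ratio between a reconstruction and the ground truth."""
    mse = np.mean((recon - gt) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse) if mse > 0 else np.inf

def reward_end_to_end(recon_next, gt, k, M):
    """Final-step reward: PSNR of the last reconstruction, 0 before that."""
    return psnr(recon_next, gt) if k == M else 0.0

def reward_incremental(recon_next, recon_prev, gt):
    """Per-step reward: PSNR improvement over the previous reconstruction."""
    return psnr(recon_next, gt) - psnr(recon_prev, gt)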
Due to the discrete nature of the design space, a soft-max policy is considered, which is parameterized by w:
π_policy(θ_k|x_k-1^(i), b_k;w) = e^G(x_k-1^(i), b_k,θ_k;w)/∑_θ_k' ∈𝒜e^G(x_k-1^(i), b_k,θ_k';w),
where w represents the policy parameters, and G(x_k-1^(i), b_k, θ_k;w) is the score that the parameterized policy network assigns to the state-action pair. The output of the soft-max policy is a probability distribution over the 180 angles in the design space.
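In code, drawing the next angle from this soft-max policy amounts to sampling from a categorical distribution over the 180 scores; the sketch below additionally masks angles already marked in b_k, which is one possible way of using the action vector to prevent repetitions and is an assumption on our part rather than a literal transcription of the implementation.

import numpy as np

def sample_angle(scores, b, rng, mask_chosen=True):
    """Sample the next angle index from a soft-max over network scores.

    scores : (180,) array of scores G(x_{k-1}, b_k, theta; w) for all candidates
    b      : (180,) binary array marking angles that were already selected
    mask_chosen : if True, previously chosen angles are excluded
                  (an assumed, not prescribed, use of b_k)
    """
    logits = np.asarray(scores, dtype=float).copy()
    if mask_chosen:
        logits[np.asarray(b, dtype=bool)] = -np.inf
    logits -= logits.max()            # numerical stability of the soft-max
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# usage: theta_k = sample_angle(actor_scores, b_k, np.random.default_rng(0))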
§.§ Actor-Critic method for policy optimization
The Actor-Critic method is a class of reinforcement learning algorithms for computing the policy gradient of the objective function described in Equation (<ref>). It leverages the concept of value functions and utilizes a parameterized state-value function to estimate the expected future rewards at the current state, thereby expediting the learning process. The one-step Temporal-Difference (TD) error <cit.> measures the discrepancy between the estimated value of the current state and the sum of the current reward and the discounted estimated value of the next state. This enables the state-value function to be updated through bootstrapping and provides a direction for the policy gradient.
To further reduce variance, the TD error is used as an estimate of the advantage function, with the parameterized state-value function V(x, b;w_2) acting as a baseline; only the state-value function is updated through bootstrapping. At the beginning of each episode, a zero matrix and a zero vector are used as the initial state and action vector, respectively, and the same zero matrix is used to initialize the state-value function. The complete procedure is presented in Algorithm (<ref>).
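To make this update concrete, a single one-step Actor-Critic update with a TD(0) advantage estimate could look as follows in PyTorch-style Python; this is a generic sketch of the step summarized in Algorithm (<ref>), not a verbatim excerpt of our code (the critic weight of 0.5 anticipates the loss weighting used in the experiments below).

import torch

def actor_critic_step(policy_net, value_net, opt, state, action, reward,
                      next_state, done, gamma=0.99):
    """One TD(0) Actor-Critic update (sketch).

    policy_net(state) -> 1D tensor of log-probabilities over the 180 angles
    value_net(state)  -> scalar tensor with the state-value estimate
    opt               -> optimizer over the parameters of both networks
    """
    v = value_net(state)
    with torch.no_grad():
        v_next = torch.zeros_like(v) if done else value_net(next_state)
        td_target = reward + gamma * v_next
    td_error = td_target - v                          # advantage estimate

    log_probs = policy_net(state)
    actor_loss = -log_probs[action] * td_error.detach()
    critic_loss = td_error.pow(2).mean()

    opt.zero_grad()
    (actor_loss + 0.5 * critic_loss).backward()
    opt.step()
    return td_error.item()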
§.§ Network architecture
The proposed method requires the agent to extract relevant features from high-dimensional images to increase learning efficiency, which is accomplished using a deep neural encoder network. The architecture of the encoder network and the Actor-Critic network is shown in Figure (<ref>), with the input image being of dimension 128 × 128. The neural network's connection weights represent the policy parameters w_1 and the state-value function parameters w_2.
The proposed model adopts a shared encoder network between the actor and critic networks. This shared encoder network comprises three convolutional neural networks (CNNs) each with padding and group normalization, followed by a leaky Rectified Linear Unit (ReLU) activation and a max pooling operation for down-sampling. The shared encoder network consists of a total of 13,320 parameters. Furthermore, the following actor and critic networks are separate and have 170,820 and 900,601 parameters, respectively.
Figure (<ref>) outlines the process by which the network operates in the context of the actor-critic method. The encoder network takes the reconstruction x_k as input and produces a feature vector in the bottleneck layer, which is flattened into a 1D vector and concatenated with the 1D action vector b_k. The resulting information is then fed into the following actor and critic networks.
The actor network uses a Soft-max policy to map the information to a probability distribution over all possible angle candidates in the action space, while the critic network estimates the state-value function V(x_k, b_k;w_2). Based on the probability distribution generated by the actor network, the agent selects the next angle θ_k and subsequently collects measurements to obtain a new reconstruction x_k+1. The action vector is updated as b_k+1 accordingly.
To compute the policy gradient and update the parameters in the value function using TD error in Algorithm (<ref>), the new reconstruction x_k+1 and the new action vector b_k+1 are fed into the network again. This is done to calculate a new state-value function V(x_k+1, b_k+1;w_2). Once an angle is selected, both the policy parameters w_1 and the value function parameters w_2 are updated once.
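A PyTorch sketch of such a shared encoder with separate actor and critic heads is given below; the channel counts and hidden sizes are illustrative guesses and will not reproduce the parameter counts quoted above.

import torch
import torch.nn as nn

class EncoderActorCritic(nn.Module):
    """Shared CNN encoder with separate actor and critic heads (sketch)."""

    def __init__(self, n_angles=180, channels=(8, 16, 8)):
        super().__init__()
        blocks, in_ch = [], 1
        for out_ch in channels:                       # three conv blocks
            blocks += [nn.Conv2d(in_ch, out_ch, 3, padding=1),
                       nn.GroupNorm(4, out_ch),
                       nn.LeakyReLU(),
                       nn.MaxPool2d(2)]
            in_ch = out_ch
        self.encoder = nn.Sequential(*blocks)         # 128x128 -> 16x16 feature maps
        feat_dim = channels[-1] * 16 * 16 + n_angles  # flattened features + action vector b_k
        self.actor = nn.Sequential(nn.Linear(feat_dim, 256), nn.LeakyReLU(),
                                   nn.Linear(256, n_angles))
        self.critic = nn.Sequential(nn.Linear(feat_dim, 256), nn.LeakyReLU(),
                                    nn.Linear(256, 1))

    def forward(self, x, b):
        z = self.encoder(x).flatten(start_dim=1)      # x: (B, 1, 128, 128)
        z = torch.cat([z, b], dim=1)                  # b: (B, 180)
        return torch.log_softmax(self.actor(z), dim=1), self.critic(z)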
§ NUMERICAL EXPERIMENTS
We examine in intuitive numerical experiments whether the learned policies are really able to sequentially adapt the scan angles to the object (a-posteriori adaptation). For this, we use various simple numerical phantoms for which the informative angles are well-known. Throughout our experiments, we focus on parallel-beam geometry and simple 2D tomography using synthetic data. The code and synthetic data are available on Github [<https://github.com/tianyuan1wang/SeqAngleRL>].
§.§ Data sets
In our numerical experiments, we consider several shapes depicted in Figure (<ref>). All phantoms in the data sets have a size of 128 × 128 and are binary images. To assess the adaptability of the agent to dynamic environments, each data set includes phantoms with different rotations, causing a shift in their informative angles. These rotations are represented by 36 equally spaced angles ranging from 0^∘ to 179^∘. Additionally, the phantoms in each data set exhibit various scaling and shifts. Nonetheless, these modifications do not alter the informative angles, thereby preserving the consistency of informative angles across the scaled and shifted phantoms. By including these scaling and shifts, we aim to ensure the agent's ability to recognize the same object despite its size and location variations.
d1) Circles: The first data set consists of circles with varying locations and radii. Due to its uniform curvature, a circle does not have a relatively higher concentration of informative angles. To obtain an accurate reconstruction, angles must be equidistantly distributed.
d2) Ellipses: Unlike circles, ellipses have a major axis and a minor axis. The major axis serves as a preferential direction, making angles around it more informative, as shown in references <cit.> and <cit.>.
d3) Triangles: Triangles, characterized by one angle of 90^∘ and two angles of 45^∘, possess three preferential directions, causing the informative angles to be tangential to their edges.
d4) Mixed phantoms: The final data set consists of a mixture of phantoms, including triangles from d3), regular pentagons, and regular hexagons, each of which has its own preferential directions.
§.§ Implementation
For all of our experiments, the sequential experimental process for each data set in Figure (<ref>) follows Algorithm (<ref>). To generate the measurement data, we utilize the ASTRA Toolbox <cit.>, considering a projection size of 1.5 × 128. The reconstruction is performed using the SIRT algorithm with box constraints [0,1] for 150 iterations.
The encoder and Actor-Critic neural network architectures are illustrated in Figure (<ref>). During training, we set the discount factor γ to 0.99 and assign weights of 1.0 and 0.5 to the actor loss and critic loss, respectively. To encourage exploration during training, we incorporate an entropy loss with a weight of 0.01. For optimization of the parameters, we employ the Adam optimizer <cit.> with a learning rate of 10^-4 and weight decay of 10^-5.
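For reference, the weighted objective and optimizer configuration described above could be assembled as in the following sketch (the stand-in module is only there to make the snippet self-contained):

import torch

def combined_loss(actor_loss, critic_loss, entropy):
    """Weighted training objective with the weights quoted above (1.0, 0.5, 0.01)."""
    return 1.0 * actor_loss + 0.5 * critic_loss - 0.01 * entropy

# Optimizer configuration from the text; `model` would be the encoder/actor-critic
# network, replaced here by a stand-in module for illustration.
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)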
The number of phantoms used for training differs between experiments, with experiments d1) to d3) consisting of 3,000 training phantoms, while experiment d4) has 9,000 training phantoms. The number of episodes required for convergence during training also varies among experiments, with experiment d1) requiring 100,000 episodes, experiments d2) and d3) requiring 150,000 episodes, and experiment d4) requiring 300,000 episodes.
To assess the generalization ability of the Actor-Critic agent, a set of testing experiments for d2) to d4) is performed to evaluate its ability to identify previously unseen rotations of phantoms, representing out-of-distribution data. The number of test phantoms varies across the conducted experiments: 300 test phantoms are used in experiments d2) and d3), while 900 are used in experiment d4).
In addition, we consider two reward functions: incremental and end-to-end settings. An equidistant policy is introduced as a benchmark to compare the performance of the Actor-Critic policy with an uninformed, non-adaptive angle selection method.
The subsequent sections will present the training and testing outcomes on the aforementioned data set and compare the Actor-Critic policies utilizing two reward settings and the equidistant policy.
§.§ Experiment 1 - Uniform informative angles
In the first experiment, we aim to evaluate the performance of the Actor-Critic policy on the circles data set, which has a uniform distribution of informative angles. It is known that the equidistant benchmark is the optimal policy for this data set. Our objective is to investigate whether the Actor-Critic policy approaches the equidistant policy in performance.
As depicted in Figure (<ref>), the equidistant policy exhibits enhanced performance for the circular phantoms compared to the Actor-Critic policies with diverse reward configurations. Furthermore, we observed that the performance of the Actor-Critic policy with end-to-end reward surpasses that of the policy with incremental reward as the number of angles increases.
Figure (<ref>) presents two samples considering three and seven angles obtained from the Actor-Critic policy with the end-to-end reward setting. This result demonstrates that the Actor-Critic agent tends to distribute the selected angles evenly, although the number of angles is different.
§.§ Experiment 2 - Non-uniform informative angles
In contrast to circles, the informative angles of an ellipse are concentrated around its major axis.
The training outcomes of the ellipse phantoms over the final 2,000 episodes are shown in Figure (<ref>), which indicates that the Actor-Critic policies exhibit superior performance. As the number of angles increases, the results for the three policies get closer. This is because a sufficient number of angles around the major axis have already been obtained, even for the equidistant policy, to achieve a high-quality reconstruction. Notably, the Actor-Critic policy with the end-to-end reward setting achieves the best performance. Figure (<ref>) presents the results for two ellipse phantoms, demonstrating that the agent can discern the rotation of the ellipse and concentrate the distribution of the angles around the informative area. As the number of angles increases, the agent increases the number of angles around the major axis.
Table (<ref>) reports the test outcomes for the ellipses data set with three to seven angles. Consistent with the training results, the Actor-Critic policies demonstrate superior performance compared to the benchmark, with the policies becoming progressively closer as the number of angles increases. The end-to-end reward setting still shows the best average performance, though it has a higher variance.
§.§ Experiment 3 - Explicit informative angles
The third experiment focuses on evaluating the ability of the Actor-Critic agent to identify explicit informative angles for phantoms with sharp edges, namely triangles. These phantoms have well-defined informative angles that are tangential to their edges, and thus, it is of interest to investigate if the agent can successfully locate these angles. The results of this experiment will provide insight into the performance of the Actor-Critic agent in detecting preferential directions for phantoms with sharp edges.
In this study, a fixed number of five angles is employed. As shown in Figure (<ref>), both reward settings for the Actor-Critic agent outperform the equidistant policy. Specifically, training using the incremental reward setting demonstrates faster convergence, whereas the end-to-end reward setting yields the best performance.
The training results demonstrate that the Actor-Critic agent tends to select the first two angles as fixed angles, with particular emphasis on the first angle, while the second angle exhibits some uncertainty. Subsequently, the agent would select three informative angles to optimize the reconstruction process. This behavior is consistent with the fact that the initial state is set as a zero matrix and a zero vector with no prior information, and the agent, therefore, prioritizes gathering information by fixing the first angle or first two angles before personalizing the strategies based on the different phantoms encountered.
Figure (<ref>) presents two samples of the agent's performance in an end-to-end reward setting, in which the agent selects the initial two angles of 44^∘ and 153^∘. Subsequently, for the right phantom, the agent selects 76^∘, 115^∘, and 165^∘ as the following three angles, while for the left phantom, the agent chooses 97^∘, 136^∘, and 3^∘. Notably, these angles are almost tangential to the edges of the triangle phantoms.
We observe that the agent tends to select more angles around the informative angles or repeat its selection when the first two angles are close to the informative angles. Again, this behavior can be explained by the informative angles containing the most relevant information for accurate reconstruction.
Regarding the out-of-distribution test, Table (<ref>) demonstrates that the Actor-Critic policies outperform the equidistant benchmark. Furthermore, it is observed that the end-to-end reward setting achieves the highest reconstruction quality.
§.§ Experiment 4 - Mixed phantoms with explicit informative angles
In this study, we aim to investigate the capacity of an Actor-Critic agent to recognize and distinguish between different phantoms with varying informative angles and rotations. In this experiment, a fixed number of seven angles is employed.
Similar to Experiment 3, the training for the mixed phantoms, shown in Figure (<ref>), reveals that the incremental reward setting facilitates faster convergence, while the end-to-end reward setting results in better performance. Figure (<ref>) illustrates the performance of the end-to-end reward setting. It fixes the first two angles to 137^∘ and 46^∘ for the hexagon and triangle, respectively, while it selects 137^∘ and 65^∘ for the pentagon as the first two angles because of some uncertainty in the second angle, as mentioned in Experiment 3. The agent then selects the subsequent informative angles based on the prior information provided by these fixed angles.
Furthermore, we investigate the impact of Gaussian noise on the Actor-Critic agent's ability to select informative angles for phantoms. Our results, presented in Figure (<ref>), demonstrate that the performance gap between the equidistant and Actor-Critic policies is reduced in the presence of 5% Gaussian noise on the measurements. Specifically, our training results show that the incremental reward setting yields nearly identical total rewards to the equidistant policy, suggesting that the presence of noise negatively impacts the training process. Additionally, comparing the result samples from end-to-end rewards in Figures (<ref>) and (<ref>), we find that the noise in the measurements has a substantial influence on the angle selection strategy, including the fixed angles and the order in which the informative angles are selected afterward. To better understand the differences between the two policies in Figures (<ref>) and (<ref>), we show the policy results for triangles in Figure (<ref>). Our analysis reveals that the policy for clean data is tightly clustered around informative angles, whereas the policy for noisy data is more broadly distributed. We also observe that the policy realizes adaptive angle selection: once an angle has been chosen, its probability decreases significantly, followed by an increase in the probability of some angles that previously had only a small probability.
Based on the results for the PSNR values presented in Table (<ref>), it can be observed that the Actor-Critic policies, with both end-to-end and incremental rewards, outperform the equidistant policy. In addition, adding noise to the measurements reduces the performance gap between the policies. Notably, the end-to-end reward setting still exhibits the highest performance. Moreover, when testing on noisy measurements, the model trained on clean measurements performs better than the model trained on noisy measurements, confirming that training on noisy measurements degrades the performance of the Actor-Critic policies.
§ DISCUSSION
The results demonstrate that for classes of phantoms with clear informative angles, the reinforcement learning policies are able to achieve superior performance compared to the uninformed, equidistant policy. As the informative angles differed for each individual phantom and were not shared across the whole class of phantoms, no informative angles could be known a-priori. We therefore demonstrated that the learned policies truly perform a-posteriori adaptation. This complements the findings from <cit.>, whose numerical studies could not answer this important question.
Importantly, the trained reinforcement learning policies exhibit generalization capabilities on the test dataset, including rotations not encountered during training. Interestingly, adding measurement noise reduces the achievable gains in performance, and understanding and mitigating the reason for this will be a topic for future research.
In addition, we conducted numerical experiments to compare end-to-end and incremental reward functions. The end-to-end reward function achieves the highest average performance on both the training and test datasets. This indicates its effectiveness in guiding the reinforcement learning agent toward optimal solutions. On the other hand, the incremental reward function demonstrates faster convergence during training. In the future, we will investigate further how to design reward functions that share both of these desirable properties.
In the future, our work can be extended in the following ways: Firstly, instead of using SIRT as an image reconstruction method, we will use deep learning-based reconstruction methods, trained end-to-end. Secondly, more complex reward functions can be designed to achieve task-specific angle selection. For example, to detect defects in in-line quality control, one could reward angle selection policies that improve the contrast between the defect and its embedding. Thirdly, we restricted ourselves to a simple 2D parallel-beam geometry to obtain scenarios in which optimal angle selection strategies are known, and the results of trained policies can be interpreted more easily. In the future, we will extend the approach to more complex and realistic 3D geometries with additional degrees of freedom, such as tilting and zooming.
§ CONCLUSION
Compared to classical, computationally prohibitive approaches to solve the sequential OED problem of adaptive angle selection in X-ray CT, deep reinforcement learning avoids direct gradient computation on the high-dimensional, non-convex, bi-level optimization problem. Instead, it learns non-greedy strategies to solve it for a particular class of phantoms during an offline training phase, which can then be applied quickly and efficiently online to scans of new phantoms. We posed the sequential OED problem as a POMDP and utilized the Actor-Critic network combining a shared encoder network to learn an optimal policy. In our numerical studies with 2D CT scenarios mimicking industrial, in-line CT inspection, we demonstrated that our approach learns efficient, truly adaptive policies that achieve better performance in terms of reconstruction quality. We introduced two different reward function settings, namely, the end-to-end and incremental reward settings. Both settings lead to stable learning processes, consolidating reinforcement learning as a reliable and extremely promising method for sequential OED. To conclude, our work demonstrates the potential of using reinforcement learning for solving sequential OED problems in inverse problems and imaging - in particular to automate angle selection and improve CT imaging efficiency, providing a flexible and adaptive approach for various CT imaging scenarios in Industry 4.0.
§ ACKNOWLEDGEMENT
This research was co-financed by the European Union H2020-MSCA-ITN-2020 under grant agreement no. 956172 (xCTing). We would like to express our gratitude to Chat Generative Pre-trained Transformer (ChatGPT) for its assistance in refining the English writing of this paper.
§ BIOGRAPHY SECTION
Tianyuan Wang received a Bsc. in Automation from Central South University in China and a Msc. in Computer Engineering from RWTH Aachen University in Germany. Currently, he is working as a Ph.D. student in the Computational Imaging group of the Centrum Wiskunde & Informatics (CWI) in the Netherlands. His research is part of the xCTing network and aims to realize adaptive angle selection for in-line CT.
Felix Lucka is a senior researcher in the Computational Imaging group at the Centrum Wiskunde & Informatica (CWI). After obtaining a first degree in mathematics and physics in 2011, he completed a PhD in applied mathematics at WWU Münster (Germany) in 2015 followed by a postdoc at University College London until 2017. His main interests are mathematical challenges arising from biomedical imaging applications that have a classical inverse problem described by partial differential equations at their core.
Tristan van Leeuwen is the group leader of the Computational Imaging group at the Centrum Wiskunde & Informatica (CWI) in the Netherlands. He received his BSc. and MSc. in Computational Science from Utrecht University. He obtained his PhD. in geophysics at Delft University in 2010. After spending some time as a postdoctoral researcher at the University of British Columbia in Vancouver, Canada and the CWI, he returned to Utrecht University in 2014 as an assistant professor at the mathematical institute. In 2021, he moved to his current position. His research interests include: inverse problems, computational imaging, tomography and numerical optimization.
|
http://arxiv.org/abs/2307.05849v1 | 20230711234736 | Time resolved eye diagrams to exploit hidden high energy branches in a nonlinear wideband vibration energy harvester | [
"Kankana Paul",
"Saibal Roy",
"Andreas Amann"
] | physics.app-ph | [
"physics.app-ph"
] |
[email protected]
Micropower-Nanomagnetics group, Micro-Nano-Systems Center, Tyndall National Institute, Cork, Ireland
[email protected]
Micropower-Nanomagnetics group, Micro-Nano-Systems Center, Tyndall National Institute, Cork, Ireland
Department of Physics, University College Cork, Cork, Ireland
[email protected]
School of Mathematical Science, University College Cork, Cork, Ireland
A wideband vibration energy harvester with multiple nonlinear forces is investigated. The nonlinearities are due to repulsive magnets and hardening springs, which gives rise to multistabilities between a number of energy branches. Not all branches are accessible by a simple up or down sweep of the driving frequency and in particular the highest energy branch is often hidden, requiring a suitable frequency schedule to be accessed. Detailed theoretical understanding of the energy branch structure along with robust experimental methods are essential for characterizing each of the energy branches to enhance the energy output from such vibration energy harvesting system. We introduce a graphical representation in the form of eye diagrams based on time-resolved measurements of acceleration and output voltage to study the dynamical features of the different branches. This generic approach allows us to optimize the design, which results in 1.3mW of power generated at 1g over 44Hz frequency bandwidth while maintaining a small footprint of 1.23 cm^3. The energy conversion ratio of the energy harvester at 120Hz drive frequency is 0.52 for the high energy branch.
Time resolved eye diagrams to exploit hidden high energy branches in a nonlinear wideband vibration energy harvester
Andreas Amann
August 12, 2023
====================================================================================================================
§ INTRODUCTION
In this epoch of Internet of Things (IoT), the lack of a sustainable power source significantly impedes the pervasive deployment of autonomous sensors nodes. To address this cardinal issue, Vibration Energy Harvesters (VEHs) have emerged as a promising renewable energy source <cit.> due to the abundance of vibrations in the domestic and industrial environment. However, the characteristically narrow frequency bandwidth, and hence the poor off-resonance performance of traditional linear VEHs <cit.> makes them unsuitable for harnessing substantial mechanical energy from ambient vibrations, which is spread over a broad spectrum of frequency. The challenge is therefore to design a VEH with large energy output over a wide frequency range.
A wider operable bandwidth is obtainable by using a VEH with a nonlinear restoring force <cit.> and in the past VEHs possessing monostable <cit.>, bistable <cit.>, tristable <cit.>, quadstable <cit.>, and polystable<cit.> potential energy functions have been studied experimentally. From a theoretical point of view, VEHs can be modelled as driven nonlinear oscillators <cit.>, which also appear in many other fields, including optics, photonics <cit.>, biomechanics <cit.>, and electronics <cit.>. It is well known that, even in simple periodically driven systems the presence of nonlinearity can give rise to complex dynamical features, including multistability, dynamic symmetry breaking and frequency locking <cit.>.
In the context of VEHs, the phenomenon of multistability translates into the presence of multiple energy branches which coexist for a given set of driving parameters. For example, in the classical case of a hardening nonlinearity <cit.> as in the Duffing oscillator <cit.>, high-energy and low-energy branches coexist. The selection of the dynamical state depends on the initial conditions and the frequency schedule of the drive. Additionally, fully isolated resonances with large amplitudes are also possible <cit.>.
From an application point of view, it is desirable to maintain the system in the branch with the highest energy output, and various mechanisms for achieving and sustaining these high energy branches have been devised in the past <cit.>. Thus a route to further increase the energy output and frequency bandwidth using more sophisticated nonlinearities appears possible. However, the increased complexity of the resulting energy branch structure requires detailed theoretical understanding and powerful experimental methods to characterize different branches.
The graphical representation of the dynamics of linear and nonlinear oscillators is a powerful tool for the estimation of energy generation <cit.>, energy transfer as well as the comparison of the performance with an ideal oscillator <cit.>. Particularly, the area enclosed in the force-displacement plane is useful for the investigation of the involved damping mechanism and the energy dissipated through the oscillator <cit.>. However, the potential of this method for characterizing complex nonlinear systems and the associated energy branches, which could lead to a more efficient energy harvesting system, is still unexplored.
In this work, we present a wideband vibration energy harvester that combines nonlinear forces arising from the spring-hardening and from repulsive magnetic interactions in a single device. We investigated the complex energy branch structure in this case. It was found that the highest-energy branch may be hidden in the sense that a particular frequency schedule is required to reach it. Using our knowledge of the branch structure, we achieved this high energy branch even at a low level of excitation. To characterize the various energy branches experimentally, we took time-resolved measurements of acceleration and voltage outputs. This allows us to plot eye diagrams in a force-displacement plane, where the enclosed areas (eyes) represent the energy transacted within one period of the external drive. The different eye shapes bear useful information about the nonlinearities involved and allow us to efficiently characterise the various energy branches experimentally. The visual representation using eye-diagrams is similar to the well-known thermodynamic cycles in the context of combustion engines <cit.>, where the enclosed area also represents the transacted energy per cycle. By demonstrating the usefulness of eye-diagrams in improving the design of our device, we seek to establish this as a generic tool for wider application in the VEH community and beyond.
§ FREQUENCY RESPONSE: HIDDEN ENERGY BRANCH
The employed tapered FR4 (Flame Retardant 4) spring architecture (laser micromachined), as shown in Fig. <ref>, exploits the unique stress distribution <cit.> arising from the tapered geometry to introduce a strong cubic nonlinear restoring force, while maintaining a small footprint of 1.23 cm^3. Two pairs of repulsive permanent magnets, one pair fixed to the FR4 spring and the other mounted on movable rails are used (bottom Fig. <ref>) to destabilize the central position of the load. Different parameters of the VEH are listed in Table <ref>. The experimental set-up is shown in Fig. <ref>. The electrodynamical characterization of the developed prototype has been performed with a Bruel and Kjær LDS V455 permanent magnet shaker that emulates real-world vibrations in a laboratory environment. The vibration of the shaker is controlled by an LDS Comet controller, and the output signal from the controller is fed to an LDS PA 1000L power amplifier. With a sweep rate of 1Hz/sec, the frequency of the excitation has been ramped up from 50Hz to 200Hz, and similarly swept back to 50Hz for different amplitudes of excitation (0.1g to 2.0g). A small piezoelectric CCLD accelerometer (DeltaTron 4517-002) placed near the harvester monitors the acceleration over frequency sweeps and feeds it back to the vibration controller (A1). Simultaneously, another accelerometer (A2) placed near the harvester monitors the amplitude of excitation and feeds to the g-meter (Environmental Equipments Ltd. Model 2025). The response from the harvester across an optimized load resistance (2kΩ) is recorded with a digital oscilloscope (Picoscope 3000 series). Concurrently, the output from the g-meter is recorded with the same oscilloscope. The 3D printed rails, as shown in Fig. <ref> have been used to vary the distance d between the repulsive sets of magnets.
Fig. <ref>(a) shows the variation of load power during driving frequency sweeps while the distance between the repulsive pairs of magnets and the amplitude of excitation are fixed at 2.5mm and 0.8g, respectively. As the driving frequency is swept up from 50Hz to 200Hz, the extracted power increases slowly (magenta) up to 0.1mW at point A. Beyond this point, the power gradually decreases for higher driving frequencies. On sweeping the frequency of the drive down from 200Hz to 50Hz, the response jumps up at B, and the VEH delivers more power across the load, maximizing up to 0.28mW (grey). The load power then reduces for lower driving frequencies and jumps down at the point C to a low energy state. The top and bottom inset of Fig. <ref>(a) shows the time trace of small amplitude oscillations exhibited by the VEH in the vicinity of A and B.
Interestingly, as shown in Fig. <ref>(b), higher energy output can be achieved by designing a specific drive frequency schedule. While sweeping the frequency of the external drive down from 200Hz, instead of going all the way up to 50Hz, we turn the drive frequency up from 92Hz which is between the two jumping points B and C, as depicted by the arrow in Fig. <ref>(b). During this up sweep, the VEH now delivers large power output (brown) of up to 0.85mW at the point D before falling down to a low energy state. This high output is not achieved in the simple up and down sweep as shown in Fig. <ref>(a) and it is a consequence of the existence of multiple stable energy branches that have been shown later in Fig. <ref>(a). More explicitly, there is a low energy branch EB1 (dark blue) which is the only stable branch at low frequencies and it terminates at the point A. Then there is an intermediate energy branch EB2 (light blue), the only stable branch at the high frequencies which terminates at B. Finally, there is a high energy output branch EB3 (yellow) which extends from C to D, and can only be selected through particular frequency schedule of the drive. It is interesting to note that, between the points B and A, all the three energy branches co-exist, and the frequency schedule of the drive determines which of the three branches is selected.
The energy branches also depend on the acceleration of the drive. In Fig. <ref>(b), we show the extent of the energy branches in the acceleration-frequency plane (for d =2.5mm). The dot symbols mark the experimentally obtained boundaries of the respective energy branches. The linear frequency of the oscillator is at 115Hz for very low acceleration (0.1g) of the external drive. With increasing acceleration, the two energy branches EB1 and EB2 overlap and form a hysteresis region of up to 14Hz at 0.4g. Then at a drive of 0.5g the high energy branch EB3 starts to emerge (yellow region in Fig. <ref>(b)). As a consequence of its position in the overlap region between EB1 and EB2, the branch EB3 can only be achieved by following the frequency schedule as explained before. This energy branch (EB3) extends up to 76Hz as the external drive increases to 2g and provides the largest energy output of all branches. It should be noted that the high energy branch EB3 aids the system to generate more energy over a considerably wider bandwidth of operable frequencies, which makes it a potential candidate for harnessing mechanical energy from broadband vibrations. However, the multistable characteristics of this VEH make it difficult to achieve and sustain the high energy state consistently. Using controlled electrical actuation <cit.> is a viable route to switch the state of this system to a higher energy state, while enabling the VEH to capture substantial mechanical energy from real-world vibrations.
§ REDUCED ORDER MODEL FOR THE VEH
To describe the dynamics of the nonlinear VEH system, let us consider the following general equation of motion for our driven system,
mz̈ + cż + γ I + ∂U (z)/∂ z = F sinω_0 t
here, z is the vertical displacement of the moving magnet and m is its mass; c is the mechanical damping parameter, γ is the electromagnetic coupling factor, U(z) is the potential energy, and F sinω_0 t is the external drive.The interaction between the coil and the magnet has been emulated using the finite element analysis tool Ansys Maxwell <cit.>. The electromagnetic coupling factor γ represents the spatial gradient of magnetic flux that is experienced by the coil under consideration, which has been calculated to be 15mWb/m in this case.
I is the current through the load resistor which is expressed as,
I = γż/(R_C + R_L)
where R_L is the load resistance and R_C is the coil resistance. To model the effect of the external magnet, we consider the repulsive interaction of two magnets with magnetic dipole moments m_1 and m_2 separated by a distance d, as shown in Fig. <ref>. The potential energy U(z) for this interaction is given by <cit.>,
U(z) = μ_0/(4 π (z^2+d^2)^3/2) [3 m_1 m_2 d^2/(z^2+d^2) - m_1 m_2]
= μ_0 m_1 m_2/(4 π d^3) (2 - z^2/d^2)/(1+z^2/d^2)^5/2
Taking into account the contribution from the repulsive set of magnets as well as the linear and nonlinear spring force arising from the spring bending and stretching respectively (k and k_n being the linear and nonlinear spring stiffness coefficient), the total force F_ overall(z) arising from the spring and the magnets can be expressed as,
F_overall= k z + k_n z^3- ∂ U (z)/∂ z
= k z + k_n z^3+F_mag(z)
Then the equation of motion of the VEH under consideration takes the form
z̈ + (c/m)ż + γ^2/(m (R_C + R_L)) ż
+ (k/m)z + (k_n/m)z^3 + F_mag(z)/m = (F/m) sin ω_0 t
It is convenient to non-dimensionalise this equation. First we choose the dimensionless time parameter τ = ω t,
d/dt = (dτ/dt) d/dτ = ω d/dτ
Substituting this in the equation of motion and selecting ω=√(k/m), the equation of motion takes the following form,
d^2z/dτ^2 + (c/m)√(m/k) dz/dτ + γ^2/(m (R_C + R_L))√(m/k) dz/dτ
+ z + (k_n/k) z^3 + F_mag(z)/k = (F/k) sin ω̂τ
Scaling z such that ẑ = z a, choosing k_n/k=a^2, and combining the two damping terms using c/√(mk) +γ^2/√(mk) (R_C + R_L) =D_total, the simplified form of the equation of motion is,
d^2ẑ/dτ^2 + D_total dẑ/dτ + ẑ
+ ẑ^3
+μ_0 m_1 m_2 a /4 π m d^3 k[-2 ẑ/a/d^2 (ẑ^2/a^2 d^2 +1)^5/2-5 ẑ/a (2 - ẑ^2/a^2 d^2)/d^2 (ẑ^2/a^2 d^2 +1)^7/2]
= (F a/k) sin ω̂τ
We further introduce the nondimensionalized parameters as d̂= a d, F̂=Fa/k and (3/4 π) ( μ_0 m_1 m_2/k) =p̂/a^5. We then get the following dynamical equation for the Reduced Order Model (ROM),
d^2ẑ/dτ^2 + D_total dẑ/dτ + ẑ
+ ẑ^3
+ (p̂ẑ/d̂^5) (ẑ^2/d̂^2 - 4)/(ẑ^2/d̂^2 + 1)^7/2
= F̂ sin ω̂τ
Equation (<ref>) is solved by using the fourth-order Runge-Kutta method in MATLAB. We will now utilize this nondimensionalized model to investigate the complex dynamics associated with the presented system. The linear and nonlinear parameters that have been used in this model are shown in Table <ref>. The linear and nonlinear spring stiffness coefficients of the spring structure have been estimated using the solid mechanics module of COMSOL Multiphysics platform. The stationary analysis in COMSOL is used to excite the spring structure at the fundamental vibrational frequency of 94Hz while sweeping the external force from -2N to 2N and recording the displacement of the spring. The spring stiffness coefficients are extracted from the force-displacement relationship. The mass, coil and load resistance are measured and are fed into the reduced order parameters.
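For readers who wish to reproduce the qualitative branch structure, a minimal Python sketch of such a time integration of the nondimensionalized ROM is given below. It uses SciPy's adaptive Runge-Kutta solver rather than the fixed-step fourth-order scheme mentioned above, and the numerical values assigned to D_total, p̂, d̂, F̂ and ω̂ are illustrative placeholders, not the identified parameters of Table <ref>.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder (illustrative) parameters of the nondimensionalized ROM
D_total, p_hat, d_hat, F_hat, w_hat = 0.02, 1.0, 2.0, 0.1, 0.8

def rom_rhs(tau, y):
    z, v = y                                  # z: scaled displacement, v: dz/dtau
    u = (z / d_hat) ** 2
    f_mag = p_hat * z / d_hat**5 * (u - 4.0) / (1.0 + u) ** 3.5   # magnetic term of the ROM
    return [v, F_hat * np.sin(w_hat * tau) - D_total * v - z - z**3 - f_mag]

sol = solve_ivp(rom_rhs, (0.0, 2000.0), [0.0, 0.0], max_step=0.05)
z_ss = sol.y[0][sol.t > 1500.0]               # discard the transient part
print("steady-state amplitude:", 0.5 * (z_ss.max() - z_ss.min()))
```

Sweeping ω̂ up or down while reusing the final state of the previous frequency step as the new initial condition reproduces the hysteretic selection of the different energy branches.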
We use the ROM to investigate the dynamic response of the system. The magnitude of the applied force and the distance between the magnets are kept fixed at 0.86g and 2.0765mm respectively. As shown in Fig.6, with traditional up and down sweep of frequency, a sudden jump is observed at ω̂=0.6, which is highlighted as point B. On designing a frequency routine as explained before to achieve the higher energy state, we observed a similar high energy response from the system as the drive frequency is swept up from 0.56 (point C) to 2.2 (point D). Similar to the previously explained experimental observation, we can also notice here a low energy branch EB1 that terminates at A, the higher energy branch EB2 that terminates at B, and the hidden energy branch EB3, which is achieved through the designed frequency routine, extends from C to D. Hence, we conclude that the ROM is able to reproduce the previously presented experimental results.
§ EYE DIAGRAMS
In order to investigate the dynamical features of the branches, and in particular to compare their energy output, we now introduce the concept of eye diagrams, which provide intuitive insight into the various dynamical states on the basis of experimentally observable quantities.
Revisiting the equation of motion (<ref>), let us now assume that the period of the solution z(t) equals the period of the external drive T= 2π/ω_0 and let us further assume that at a time T_0 the displacement z(T_0) is at a maximum. Multiplying both sides of equation (<ref>) by ż and integrating over one period T leads to the condition,
E_m+E_e = E_f,
with
E_m = ∫_T_0^T_0+T c ż^2 dt,
E_e = ∫_T_0^T_0+Tγ I(t) ż dt,
E_f = ∫_T_0^T_0+T F sin(ω_0 t) ż dt .
Here, E_m is the mechanical contribution to the dissipated energy over one period and E_e is the electromagnetically transduced energy over one period of the drive. E_f is the energy injected through the external drive.
Equation (<ref>) represents the energy balance of these energies associated with the energy harvesting system. Motivated by similar representations of the energy balance in combustion engines, let us now represent the two sides of equation (<ref>) as an enclosed area in a suitable force-displacement plane. We do this by substituting the integration in time t by an integration in the displacement z. Since z(t) is a non-monotonous function, the substitution is split up into time intervals [T_k-1,T_k], where z(T_k) is a maximum (minimum) for even (odd) k, with k=0,…, 2n and n the number of minima per period T. This also implies T_2n=T_0+T. This is illustrated for the case n=1 in Fig. <ref>. Defining t̂_k(z) as the inverse function of z(t) on the interval [T_k-1,T_k] then yields
E_x = ∑_k=1^2n∫_Z_k-1^Z_k F_xk(z)dz,
where the subscript x refers to either m, e or f, Z_k = z(T_k), and F_mk(z)=cż(t̂_k (z)), F_ek(z)=γ I(t̂_k (z)), F_fk(z)=Fsin(ω_0 t̂_k (z)) are the corresponding forces. Equation (<ref>) shows that the energies E_m, E_e and E_f appearing in (<ref>) can be interpreted intuitively as the areas enclosed by the functions F_mk(z), F_ek(z) and F_fk(z) in a displacement versus force diagram.
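Numerically, these enclosed areas can be evaluated directly from time series sampled over one drive period. The short Python sketch below illustrates one way to do this; the function and variable names are ours and do not correspond to any published code of this work.

```python
import numpy as np

def cycle_energy(force, z):
    """Signed area enclosed in the force-displacement plane over one drive period,
    i.e. the transacted energy E_x of the equation above; `force` and `z` must be
    sampled over exactly one period so that the curve forms a closed loop."""
    f = np.append(force, force[0])   # close the loop
    x = np.append(z, z[0])
    return np.trapz(f, x)            # handles the non-monotonous z automatically

# Example usage with measured or simulated series over one period:
# E_e = cycle_energy(gamma * I, z)              # electromagnetically transduced energy
# E_f = cycle_energy(F0 * np.sin(w0 * t), z)    # energy injected by the drive
# E_m = cycle_energy(c * zdot, z)               # mechanically dissipated energy
```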
§ EXPERIMENTAL METHODS, RESULTS AND DISCUSSIONS
To experimentally determine the function F_ek(z), the electromagnetic force, we measure the VEH's output voltage V_L. This allows us to obtain ż= V_L (R_c+R_L)/γ R_L. We then calculate the displacement z by integrating the velocity ż.
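A possible implementation of this reconstruction step is sketched below in Python; γ and R_L are taken from the values quoted above (15mWb/m and 2kΩ), while the coil resistance R_C and the drift-removal step are assumptions of this sketch.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

gamma = 15e-3      # electromagnetic coupling factor, Wb/m (from the text)
R_L   = 2.0e3      # load resistance, ohm (from the text)
R_C   = 1.0e3      # coil resistance, ohm (placeholder value, not measured here)

def displacement_from_voltage(t, V_L):
    """Reconstruct the velocity and displacement of the moving magnet
    from the measured load voltage V_L(t)."""
    zdot = V_L * (R_C + R_L) / (gamma * R_L)          # zdot = V_L (R_C + R_L) / (gamma R_L)
    z = cumulative_trapezoid(zdot, t, initial=0.0)    # integrate the velocity
    return zdot, z - z.mean()                         # remove the arbitrary offset
```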
Fig. <ref> depicts the resulting force-displacement diagram for fixed acceleration and various values of the interspacing between the repulsive magnets (d). The enclosed area corresponds to the energy E_e transduced in one period. As this shape resembles the shape of an eye, we call this an eye diagram. The eye for the VEH topology with d=2.5mm encloses the largest area among all eyes and therefore represents the largest energy transaction into the electrical domain per forcing period.
Further, to experimentally determine the function F_fk(z), we measure the external acceleration fed to the oscillator using a piezoelectric accelerometer attached to the base of the excitation source and we multiply this with the mass of the system (3 × 10^-3kg) to obtain F_fk(z). The displacement is obtained from the electromotive force measurement as explained before. An example is shown in Fig. <ref>(f) for the two branches EB1 and EB3. The area enclosed in the F_fk-z plane stands for the amount of mechanical energy fed to the VEH from the drive. Due to the energy balance in equation (<ref>), this corresponds to the area in the F_ek-z plane as shown in Fig. <ref>(a). In Fig. <ref>(a)-(j) we compare the eye diagrams in the F_ek-z plane with the corresponding diagrams in the F_fk-z plane for various frequencies. We observe that the shape of the eye evolves differently for the three different branches that are shown in Fig. <ref>(a). In particular the branch EB3 corresponds to large areas enclosed in Fig. <ref>(d) and (i), while the co-existing branch EB2 only encloses a small area.
The area enclosed by the eye corresponding to this high energy branch EB3 in Fig. <ref>(i) represents 98μ J mechanical energy that is fed into the nonlinear VEH through the external excitation (E_f). On the other hand, the area enclosed by the eye for EB3 in Fig. <ref>(d) depicts the fraction of this mechanical energy, 51μ J, that is transacted into electrical domain by the VEH (E_e) per cycle. Now we define the energy conversion ratio from mechanical to electrical domain as,
Energy Conversion Ratio =E_e/E_f
It is important to note that this energy E_e is dissipated in both the coil and the load resistance; only the fraction that is dissipated across the load resistor represents the usable energy which could be utilized in a target application. The energy values mentioned above, obtained from the areas of the eyes, reflect an energy conversion ratio of 0.52. On the other hand, the eye corresponding to EB2 in Fig. <ref>(i) represents only 2.5μ J mechanical energy acquired from the external force, and only 0.3μ J of this energy gets transacted as usable electrical energy, yielding a conversion efficiency of 0.12. Similarly, EB1 only converts 0.9μ J of energy into the electrical domain, a fraction of the 8μ J energy that the drive provides to the VEH, resulting in a conversion efficiency of only 0.11. Interestingly, this efficiency increases to 0.91 when the energy contribution from the energy branch EB3 is taken into account. The energy conversion ratio corresponding to each energy branch for the 90Hz, 100Hz, 110Hz and 120Hz drive frequencies has been summarized in Table <ref>.
Furthermore, the shape of the eye diagrams for EB3 deviate strongly from the simple ellipse, which is characteristic for a harmonic oscillator. We therefore conclude that EB3, which is only obtained through a special frequency schedule, is inherently connected to the nonlinear force in our system. The eye diagrams are therefore a useful tool to experimentally explore the nonlinear character of the various co-existing branches.
To connect to the well-studied linear case, let us consider the shape of the eyes corresponding to the energy branch EB1 in Fig. <ref>(a) and (f). They are close to the shape of an ellipse, which suggests that the VEH performs harmonic oscillations, similar to a simple linear harmonic oscillator. In this context, the question arises whether we could simply define an appropriate phase which is able to characterize the response of the system. In fact, the phase is a useful tool for linear oscillators, where the periodic response to an external drive of the form F(t) = sin(ω t) only consists of a single harmonic component, i.e. z(t) = z_0 sin(ω t - ϕ_0). In this case the quantity ϕ_0 uniquely defines the phase of the response, which can also be used to characterise the energy transaction in the linear case. However, in a nonlinear system multiple frequency components are present in the response, i.e. z(t) = z_0 sin(ω t- ϕ_0) + z_1 sin(2 ω t - ϕ_1) + …, and the relationship between drive and response cannot be expressed in terms of a single phase. In this case, the eye diagrams introduced before prove more useful, as they take into account all frequency components of the response. In Appendix-A we provide a simple example which shows that in a nonlinear oscillator the energy transaction depends on the phases of higher frequency components.
In this case, the phase difference between displacement and the input excitation determines the enclosed area and thereby the energy transacted during one cycle. This corresponds to the well known role of the phase in the linear oscillator, where a phase difference of π/2 corresponds to the peak of the resonance. On the other hand, all of the eyes from the high energy state EB3 have an asymmetrical shape, which indicates that the strong nonlinear restoring force arising from the stretching of the spring as well as from the repulsive magnetic interactions predominates here. Similarly, the energy state EB2 possesses very little asymmetry in the eyes, corresponding again to weak nonlinearities.
As discussed above, the shape of the eyes are different for each energy branch. This fact can be exploited for the discovery of previously unknown branches. Let us for example revisit the frequency down-sweep in Fig. <ref>(a). In this case, the small jump in the load power at point B reveals the presence of another branch. Such a jump is however not guaranteed to be visible in all cases where a transition between branches occurs. In contrast, if we consider the transition between the light blue eye in Fig. <ref>(g) for 100Hz and the yellow eye in Fig. <ref>(f) for 90Hz, we see that the shape and orientation of the eye markedly changes in addition to the enclosed area. This provides therefore a much stronger signal for a branch change and in this case prompts us to further explore the hidden branch EB3, which turns out to feature the largest energy output available in this device.
In order to illustrate the difference in power output obtainable for given input frequency and acceleration, let us consider Fig. <ref>(a) and (b). In
Fig. <ref>(a) we show the load power as the drive frequency is simply swept down from 250Hz to 50Hz, while the acceleration is fixed for each sweep. On the other hand, in Fig. <ref>(b) the power following the specific frequency schedule as explained before to achieve higher energy states is shown. We observe that in the parameter regime, where the branches EB2 and EB3 overlap, the branch EB3 has a much higher energy output which is reached in Fig. <ref>(b) but not Fig. <ref>(a).
For example, at 0.5g drive amplitude, the peak load power approximately doubles from 0.18mW to 0.33mW when the system follows EB3 instead of EB2. The delivered power increases to 1.3mW for 1g, with a 44Hz bandwidth. As shown in Fig. <ref>(b), the load power further increases with increasing acceleration up to 2.8mW while offering a large bandwidth of up to 76Hz at 2g. This wide operable frequency bandwidth corresponding to the branch EB3 is a feature due to the nonlinearity of our device. This offers a unique benefit for harnessing real-world vibrations where no a priori knowledge of the prevalent frequency components is available. In the weakly excited regime, this VEH essentially behaves as a linear resonator. The obtainable frequency bandwidth is as low as 3.19Hz for 0.1g excitation, which restricts the efficiency of harvesting energy from broadband vibrations. Since the bandwidth is a key performance metric, this outlines the advantage of employing a nonlinear device compared to a linear one, which despite meeting the resonance condition offers a low frequency bandwidth.
Fig. <ref> shows a comparison of the peak load power obtained from this VEH for different values of the interspacing between the repulsive magnets (d). When the magnets are very close (d=1mm), the strong repulsive force between them results in very low electrical power generation. For example, for a 1g drive, the extracted peak load power is only 0.32mW. In contrast, if the magnets are at a large distance (d=7mm), the VEH exhibits larger oscillations about the equilibrium state. This improves the deliverable power to 1mW. As the magnets are placed at an intermediate value of d=2.5mm, the associated nonlinear effects both from the spring stretching and the magnetic interaction result in a large peak load power of 1.3mW that is extracted from the energy branch EB3. This corresponds to approximately 30% and 300% improvement in the power outcome as compared with the VEH topologies with d=7mm and d=1mm, respectively. Therefore, this demonstrates the effectiveness of using a repulsive pair of magnets at an optimized distance to enhance the overall performance of the VEH.
§ CONCLUSION
To summarize, a wideband vibration energy harvester is presented with multiple nonlinear forces acting on the system, which give rise to a number of energy branches. Some of these branches are “hidden” in the sense that they are not fully reached by simple frequency up or down sweeps. We designed a particular frequency schedule to reach those branches, which substantially improved the energy output. The different branches have been experimentally characterised through eye diagrams, which directly illustrate the magnitude of the transacted energy per cycle. The energy harvesting device yields 1.3mW power (at 1g) across a suitable load resistor providing an enhanced operable frequency bandwidth of 44Hz. This energy harvesting system transduces mechanical energy into usable electrical energy at a conversion efficiency of 52%.
The author would like to thank Tony Compagno for the help in executing the experiments and the useful discussions. This work is financially supported by a research grant from Science Foundation Ireland (SFI) and is co-funded under the European Regional Development Fund Grant Number 13/RC/2077. This is also part funded by the EU-H-2020 project ‘Enables’, Project ID: 73095 and the Science Foundation Ireland (SFI) Frontiers for the Future Programme (FFP) Award Grant (Grant ID: 21/FFPA/10003).
§ APPENDIX A
To explicitly highlight the contrast of the role of phase in linear and nonlinear systems, we here provide a mathematical example. Let us first consider a linear system that is driven with harmonic excitation of the form F(t) = F_0 sin(ω t) and the response of the system is expressed through the displacement z(t) = z_0 sin(ω t - ϕ_0). where ω is the frequency and ϕ_0 is the phase difference between the applied force and the response of the system. We can find the energy that is injected into the system through the external drive,
E_linear = ∫_0^T F(t) ż(t) dt
= F_0 z_0 ω/2∫_0^T[sin(2 ω t - ϕ) + sin (ϕ) ] dt
= F_0 z_0 ω T/2sin(ϕ)
This shows the contribution of phase ϕ in controlling the energy transaction and hence the performance of such a linear system.
Let us now consider a nonlinear system with displacement of the form z(t) = z_0 sin(ω t - ϕ_0) +z_1 sin(2ω t - ϕ_1) which comprises higher harmonic components along with different phases ϕ_0 and ϕ_1. Now, considering the same external forcing as that of the linear system, the energy fed into this nonlinear system can be expressed as,
E_nonlinear = ∫_0^T F_0 sin(ω t) [ω z_0 cos(ω t - ϕ_0) ] dt
+ ∫_0^T F_0 sin(ω t) [ 2ω z_1 cos(2ω t - ϕ_1) ] dt
= F_0 z_0 ω/2∫_0^T[sin(2 ω t - ϕ_0) + sin (ϕ_0) ] dt
+ 2 F_0 z_1 ω/2∫_0^T[sin(3 ω t - ϕ_1) + sin (ϕ_1) ] dt
= F_0 z_0 ω T/2 sin(ϕ_0) + F_0 z_1 ω T sin(ϕ_1)
This points towards the fact that the energy transacted does not depend on a single phase in a nonlinear system, and therefore the introduction of a single phase quantity is often not very useful. As an alternative we propose the use of the eye diagrams to relate to the energy transaction for such a nonlinear system.
|
http://arxiv.org/abs/2307.04513v1 | 20230710122005 | CoactSeg: Learning from Heterogeneous Data for New Multiple Sclerosis Lesion Segmentation | [
"Yicheng Wu",
"Zhonghua Wu",
"Hengcan Shi",
"Bjoern Picker",
"Winston Chong",
"Jianfei Cai"
] | eess.IV | [
"eess.IV",
"cs.CV"
] |
CoactSeg for New MS Lesion Segmentation
Yicheng Wu et al.
1 Department of Data Science & AI, Faculty of Information Technology, Monash University, Melbourne, VIC 3168, Australia
[email protected]
2 SenseTime Research, Singapore, 069547, Singapore
3 Alfred Health Radiology, Alfred Health, Melbourne, VIC 3004, Australia
4 Central Clinical School, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, VIC 3800
CoactSeg: Learning from Heterogeneous Data for New Multiple Sclerosis Lesion Segmentation
Yicheng Wu1() Zhonghua Wu 2 Hengcan Shi 1 Bjoern Picker 3,4 Winston Chong 3,4 Jianfei Cai1
August 12, 2023
==============================================================================================
New lesion segmentation is essential to estimate the disease progression and therapeutic effects during multiple sclerosis (MS) clinical treatments. However, the expensive data acquisition and expert annotation restrict the feasibility of applying large-scale deep learning models. Since single-time-point samples with all-lesion labels are relatively easy to collect, exploiting them to train deep models is highly desirable to improve new lesion segmentation.
Therefore, we proposed a coaction segmentation (CoactSeg) framework to exploit the heterogeneous data (i.e., new-lesion annotated two-time-point data and all-lesion annotated single-time-point data) for new MS lesion segmentation.
The CoactSeg model is designed as a unified model, with the same three inputs (the baseline, follow-up, and their longitudinal brain differences) and the same three outputs (the corresponding all-lesion and new-lesion predictions), no matter which type of heterogeneous data is being used.
Moreover, a simple and effective relation regularization is proposed to ensure the longitudinal relations among the three outputs to improve the model learning.
Extensive experiments demonstrate that utilizing the heterogeneous data and the proposed longitudinal relation constraint can significantly improve the performance for both new-lesion and all-lesion segmentation tasks.
Meanwhile, we also introduce an in-house MS-23v1 dataset, including 38 Oceania single-time-point samples with all-lesion labels. Codes and the dataset are released at <https://github.com/ycwu1997/CoactSeg>.
§ INTRODUCTION
Multiple sclerosis (MS) is a common inflammatory disease in the central nervous system (CNS), affecting millions of people worldwide <cit.> and even leading to the disability of young population <cit.>. During the clinical treatment of MS, lesion changes, especially the emergence of new lesions, are crucial criteria for estimating the effects of given anti-inflammatory disease-modifying drugs <cit.>. However, MS lesions are usually small, numerous, and appear similar to Gliosis or other types of brain lesions, e.g., ischemic vasculopathy <cit.>. Identifying MS lesion changes from multi-time-point data is still a heavy burden for clinicians. Therefore, automatically quantifying MS lesion changes is essential in constructing a computer-aided diagnosis (CAD) system for clinical applications.
Deep learning has been widely used for MS lesion segmentation from brain MRI sequences <cit.>. For example, the icobrain 5.1 framework <cit.> combined supervised and unsupervised approaches and designed manual rules to fuse the final segmentation results. Some works <cit.> further studied the complementary features from other MRI modalities for MS lesion segmentation. Meanwhile, to train a better deep model, class-imbalance issues <cit.> and prior brain structures <cit.> have been respectively investigated to improve the performance.
With the impressive performance achieved by existing pure MS lesion segmentation methods <cit.>, recent attention has been shifted to analyze the longitudinal MS changes <cit.>, such as stable, new, shrinking, and enlarging lesions, with the focus on new MS lesion segmentation <cit.>.
However, collecting adequate well-labeled longitudinal MS lesion data for model learning is highly challenging since it needs multi-time-point data from the same set of patients, and requires costly and time-consuming expert annotations.
Fig. <ref> shows the three types of heterogeneous MS lesion data: new-lesion annotated two-time-point data, all-lesion annotated two-time-point data, and all-lesion annotated single-time-point data, each of which is associated with different costs. New-lesion annotated two-time-point data is the ideal one for learning new lesion segmentation, but with the highest data acquisition and annotation costs. Annotating all lesions in two-time-point data can reduce the annotation cost, but it requires accurate brain registration and rule-based post-processing to identify lesion changes, which cannot avoid noise accumulation and often leads to sub-optimal performance. All-lesion annotated single-time-point data is with the cheapest data acquisition and annotation costs. This motivates us to raise the question: “Can we leverage all-lesion annotated single-time-point data to promote the new MS lesion segmentation?”
Therefore, in this paper, we proposed a deep Coaction Segmentation (CoactSeg) model that can unify heterogeneous data and annotations for the new MS lesion segmentation task. Specifically, CoactSeg takes three-channel inputs, including the baseline, follow-up, and corresponding differential brains, and produces all-lesion and new-lesion segmentation results at the same time.
Moreover, a longitudinal relation constraint (e.g., new lesions should only appear at the follow-up scans) is proposed to regularize the model learning in order to integrate the two tasks (new and all lesion segmentation) and boost each other. Extensive experiments on two MS datasets demonstrate that our proposed CoactSeg model is able to achieve superior performance for both new and all MS lesion segmentation, e.g., obtaining 63.82% Dice on the public MICCAI-21 dataset <cit.> and 72.32% Dice on our in-house MS-23v1 dataset, respectively. It even outperforms two neuro-radiologists on MICCAI-21.
Overall, the contributions of this work are three-fold:
* We propose a simple unified model CoactSeg that can be trained on both new-lesion annotated two-time-point data and all-lesion annotated single-time-point data in the same way, with the same input and output format;
* We design a relation regularizer to ensure the longitudinal relations among all and new lesion predictions of the baseline, follow-up, and corresponding differential brains;
* We construct an in-house MS-23v1 dataset, which includes 38 Oceania single-time-point 3D FLAIR scans with manual all-lesion annotations by experienced human experts. We will release this dataset publicly.
§ DATASETS
We trained and evaluated our CoactSeg model on two MS segmentation datasets, as shown in Table <ref>. On the public MICCAI-21 dataset[<https://portal.fli-iam.irisa.fr/msseg-2/>], we only use its training set since it does not provide official labels of testing samples. Specifically, 40 two-time-point 3D FLAIR scans are captured by 15 MRI scanners at different locations. Among them, 11 scans do not contain any new MS lesions. The follow-up data were obtained around 1-3 years after the first examination. Four neuro-radiologists from different centers manually annotated new MS lesions, and a majority voting strategy was used to obtain the final ground truth. For pre-processing, the organizers only performed a rigid brain registration, and we further normalized all MRI scans to a fixed resolution of [0.5, 0.75, 0.75] mm.
Since the public MS lesion data is not adequate <cit.>, we further collected 38 single-time-point 3D FLAIR sequences as a new MS dataset (MS-23v1). Specifically, all samples were anonymized and captured by a 3T Siemens scanner in Alfred Health, Australia. To the best of our knowledge, this will be the first open-source dataset from Oceania for MS lesion segmentation, contributing to the diversity of existing public MS data. Two neuro-radiologists and one senior neuro-scientist segmented all MS lesions individually and in consensus using the MRIcron segmentation tool[<https://www.nitrc.org/projects/mricron/>]. The voxel spacing of all samples is then normalized to an isotropic resolution of [0.8, 0.8, 0.8] mm.
Finally, when conducting the mixed training, we used a fixed data split in this paper (i.e., 62 samples for training and 16 for validation in total). Note that we followed the setting of the public challenge <cit.>, which selects the new validation set from MICCAI-21 that does not include samples without any new MS lesions.
§ METHOD
§.§ Overview
Fig. <ref> illustrates the overall pipeline of our proposed CoactSeg model F_θ. We construct a quadruple set (X_b, X_fu, X_d, Y) for the model training. Here, the longitudinal difference map x_d ∈ X_d is obtained by a subtraction operation between the baseline brain x_b ∈ X_b and its follow-up x_fu∈ X_fu (i.e., x_d = x_fu-x_b). Therefore, given heterogeneous annotations, i.e., all-lesion labels y_al^s ∈ Y_al^s in single-time-point data and new-lesion labels y_nl^t ∈ Y_nl^t in two-time-point data, the CoactSeg model F_θ is designed to exploit both for the model training.
§.§ Multi-head Architecture
Fig. <ref> shows that new-lesion regions are highlighted in the brain difference map x_d. Hence, besides x_b and x_fu, CoactSeg also receives x_d as inputs. It generates all-lesion and new-lesion predictions as
p_al^s1, p_al^s2, p_nl^s = F_θ(x_b^s, x_fu^s, x_d^0)
p_al^t1, p_al^t2, p_nl^t = F_θ(x_b^t, x_fu^t, x_d^t).
For single-time-point samples x^s ∈ X^s, x_b^s and x_fu^s are identical as x^s, and the difference map becomes an all-zero matrix x_d^0, with p_al^s1, p_al^s2 and p_nl^s being the corresponding all-lesion and new-lesion predictions of x^s. For two-time-point data x^t ∈ X^t,
x_b^t and x_fu^t respectively denote the first and second time-point data samples, with p_al^t1, p_al^t2 and p_nl^t being the all-lesion segmentation results at the first and second time-point and the new-lesion results of x^t, respectively.
In this way, we unify the learning of both single and two-time-point data with heterogeneous annotations by using the same model F_θ, with the same input and output formats.
Note that, inspired by semi-supervised learning <cit.>, we mix x^s and x^t samples into each batch for training. Given the heterogeneous annotations, i.e., all-lesion labels for single-time-point data and new-lesion labels for two-time-point data, we apply the following corresponding supervisions:
L_al = Dice(p_al^s1, y_al^s) + Dice(p_al^s2, y_al^s)
L_nl = Dice(p_nl^t, y_nl^t)
where Dice refers to the common Dice loss for medical segmentation tasks. We use a 3D VNet <cit.> as the backbone of F_θ and three prediction heads are designed as individual convolutional blocks. Note that, the last prediction head also receives the features from the first two in order to capture the all-lesion information. Compared to the recent work <cit.> for exploiting heterogeneous data, our architecture avoids the complicated design of dynamic prediction heads.
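A simplified PyTorch-style sketch of how such a heterogeneous batch can be supervised is given below; it is an illustration of the losses L_al and L_nl defined above rather than the actual training code, and the dictionary keys, the dice_loss helper and the model interface are our own assumptions.

```python
import torch

def dice_loss(pred, target, eps=1e-5):
    # soft Dice loss over the foreground channel (simplified helper, not the released code)
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def heterogeneous_losses(model, batch):
    """model(x_b, x_fu, x_d) -> (p_al1, p_al2, p_nl), as in the formulation above."""
    # Single-time-point sample: x_b = x_fu = x, and the difference map is all zeros.
    x_s, y_al = batch["single_img"], batch["all_lesion_label"]
    p_al_s1, p_al_s2, _ = model(x_s, x_s, torch.zeros_like(x_s))
    L_al = dice_loss(p_al_s1, y_al) + dice_loss(p_al_s2, y_al)

    # Two-time-point sample: only the new-lesion label is available.
    x_b, x_fu, y_nl = batch["baseline"], batch["follow_up"], batch["new_lesion_label"]
    _, _, p_nl_t = model(x_b, x_fu, x_fu - x_b)
    L_nl = dice_loss(p_nl_t, y_nl)
    return L_al, L_nl
```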
§.§ Longitudinal Relation Regularization
Human experts usually identify new MS lesions by comparing the brain MRI scans at different time points. Inspired by this, we further propose a longitudinal relation constraint to compare samples from different time points:
L_rr = ||p_al^s1, p_al^s2||_2 + ||p_al^t1⊗ y_nl^t, 0||_2 + ||p_al^t2⊗ y_nl^t, 1||_2
where ⊗ is a masking operation. The first term in (<ref>) is to encourage the all-lesion predictions p_al^s1 and p_al^s2 to be the same since there is no brain difference for single-time-point data. The second and third terms in (<ref>) are to ensure that the new-lesion region can be correctly segmented as the foreground in p_al^t2 and as the background in p_al^t1 in two-time-point data with only new lesion labels y_nl^t.
Finally, the overall loss function to train our CoactSeg model becomes a weighted sum of L_al, L_nl, and the regularization L_rr:
L = L_al + λ_1 × L_nl +λ_2 × L_rr
where λ_1 and λ_2 are constants to balance different tasks.
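The relation regularization can be realized, for instance, as in the following sketch, which uses mean-squared variants of the three norm terms; the exact normalization used in the released implementation may differ.

```python
import torch

def relation_regularizer(p_al_s1, p_al_s2, p_al_t1, p_al_t2, y_nl_t):
    """Sketch of L_rr: prediction consistency for single-time-point data and
    longitudinal constraints inside the annotated new-lesion region y_nl_t."""
    consistency = ((p_al_s1 - p_al_s2) ** 2).mean()             # identical inputs -> identical outputs
    mask = y_nl_t.float()
    n = mask.sum().clamp(min=1.0)
    absent_at_t1 = ((p_al_t1 * mask) ** 2).sum() / n            # new lesions are background at baseline
    present_at_t2 = (((p_al_t2 - 1.0) * mask) ** 2).sum() / n   # and foreground at follow-up
    return consistency + absent_at_t1 + present_at_t2

# total loss: L = L_al + lambda1 * L_nl + lambda2 * L_rr
```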
§ RESULTS
§.§.§ Implementation Details.
For training, we normalize all inputs as zero mean and unit variance. Then, among common augmentation operations, we use the random flip or rotation to perturb inputs. Since MS lesions are always small, we apply a weighted cropping strategy to extract 3D patches of size 80×80×80 to relieve the class imbalance problem <cit.>. Specifically, if the input sample contains the foreground, we randomly select one of the foreground voxels as the patch center and shift the patch via a maximum margin of [-10, 10] voxels. Otherwise, we randomly crop 3D patches. The batch size is set as eight (i.e., four new-lesion two-time-point samples and four all-lesion single-time-point samples). We apply Adam optimizer with a learning rate of 1e-2. The overall training iterations are 20k. In the first 10k iterations, λ_1 and λ_2 are set to 1 and 0, respectively, in order to train the model for segmenting MS lesions at the early training stage. After that, we set λ_2 as 1 to apply the relation regularization. During testing, we extract the overlapped patches by a stride of 20×20×20 and then re-compose them into the entire results.
Note that we follow <cit.> to mask the non-brain regions and all experiments are only conducted in the brain regions with the same environment (Hardware: Single NVIDIA Tesla V100 GPU; Software: PyTorch 1.8.0, Python 3.8.10; Random Seed: 1337). The computational complexity of our model is 42.34 GMACs, and the number of parameters is 9.48 M.
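The weighted cropping strategy described above can be sketched as follows; the handling of the volume border and the choice of random number generator are assumptions of this sketch, and the brain volume is assumed to be at least 80 voxels along each axis.

```python
import numpy as np

def weighted_crop(volume, label, patch=(80, 80, 80), max_shift=10, rng=np.random):
    """Extract an 80x80x80 patch centred (with a random shift of at most
    +/- max_shift voxels) on a random foreground voxel if any lesion is present,
    and a fully random patch otherwise."""
    fg = np.argwhere(label > 0)
    if len(fg) > 0:
        centre = fg[rng.randint(len(fg))] + rng.randint(-max_shift, max_shift + 1, size=3)
    else:
        centre = np.array([rng.randint(0, s) for s in volume.shape])
    lo = [int(np.clip(c - p // 2, 0, s - p)) for c, p, s in zip(centre, patch, volume.shape)]
    sl = tuple(slice(l, l + p) for l, p in zip(lo, patch))
    return volume[sl], label[sl]
```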
§.§.§ Performance for MS Lesion Segmentation.
Two MS tasks (i.e., new-lesion segmentation on MICCAI-21 and all-lesion segmentation on our MS-23v1 dataset) are used to evaluate the proposed CoactSeg. Besides common segmentation metrics <cit.> including Dice, Jaccard, 95% Hausdorff Distance (95HD), and Average Surface Distance (ASD), we further follow <cit.> to use the instance-level F1 score (F1) to denote the lesion-wise segmentation performance. Here, tiny lesions (i.e., fewer than 11 voxels) are not included in the F1 calculation as <cit.>.
Fig. <ref> illustrates that our proposed CoactSeg accurately segments the tiny new lesions on MICCAI-21. Compared to the recent work <cit.>, our model can even predict new lesions with low contrast (indicated by the enlarged yellow rectangles in Fig. <ref>). Table <ref> gives the quantitative results on MICCAI-21. We can see that: 1) Our model achieves good segmentation performance for new MS lesion segmentation and outperforms the second-best method <cit.> by 7.01% in Dice; 2) Compared with human experts, our proposed model also outperforms two of them (i.e., #3 and #4) in terms of the segmentation and the shape-related metrics; 3) For the lesion-wise F1 score, our method
significantly reduces the performance gap between deep models and human experts, achieving a comparable F1 with expert #3 (i.e., 61.96% vs. 62.88%).
Fig. <ref> shows the all-lesion segmentation results of our CoactSeg model on our in-house MS-23v1 dataset. It can be seen that CoactSeg is able to segment most MS lesions, even for very tiny ones (highlighted by red arrows). Moreover, we can see that the segmentation results of the first two prediction heads are relatively consistent (i.e., the 2nd and 3rd columns of Fig. <ref>), demonstrating the effectiveness of our proposed relation regularization.
§.§.§ Ablation Study.
Table <ref> further shows the ablation study for both new and all MS lesion segmentation tasks. It reveals that: 1) Introducing the heterogeneous data significantly improves the performance of new-lesion segmentation on MICCAI-21 with an average Dice gain of 2.64%; 2) Exploiting the relation regularization for mixed training can further improve the performance on the two datasets; 3) The simple stage-by-stage training strategy (See the Implementation Details <ref>) can better balance two tasks and achieve the overall best segmentation performance for both tasks.
§ CONCLUSION
In this paper, we have presented a unified model CoactSeg for new MS lesion segmentation, which can predict new MS lesions according to the two-time-point inputs and their differences while at the same time segmenting all MS lesions. Our model effectively exploits heterogeneous data for training via a multi-head architecture and a relation regularization. Experimental results demonstrated that introducing all-lesion single-time-point data can significantly improve the new-lesion segmentation performance. Moreover, the relation constraint also facilitates the model to capture the longitudinal MS changes, leading to a further performance gain. Our in-house MS-23v1 dataset will be made public to help the MS lesion research.
Future works will explore more longitudinal relations to study the fine-grained MS changes as well as consider more powerful constraints to address the domain gap <cit.> and fairness <cit.> problems. Moreover, we plan to collect and annotate more MS lesion data to improve the possibility of training large-scale deep models for clinical applications <cit.>.
§.§.§ Acknowledgement.
This work was supported in part by the Monash FIT Start-up Grant, in part by the Novartis (ID: 76765455), and in part by the Monash Institute of Medical Engineering (MIME) Project: 2022-13. We here appreciate the public repositories of SNAC <cit.> and Neuropoly <cit.>, and also thanks for the efforts to collect and share the MS dataset <cit.> and the MS-23v1 dataset from Alfred Health, Australia.
|
http://arxiv.org/abs/2307.04351v1 | 20230710052343 | MD-HIT: Machine learning for materials property prediction with dataset redundancy control | [
"Qin Li",
"Nihang Fu",
"Sadman Sadeed Omee",
"Jianjun Hu"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci",
"cs.LG"
] |
Materials datasets typically contain many redundant (highly similar) materials due to the tinkering-style material design practices used throughout the history of materials research. For example, the Materials Project database has many perovskite cubic structure materials similar to SrTiO_3. This sample redundancy makes random train/test splitting unreliable for machine learning model evaluation, so ML models tend to achieve over-estimated predictive performance, which is misleading for the materials science community. This issue is well known in the field of bioinformatics for protein function prediction, in which a redundancy reduction procedure (CD-Hit <cit.>) is routinely applied to reduce sample redundancy by ensuring that no pair of samples has a sequence similarity greater than a given threshold. This paper surveys the over-estimated ML performance reported in the literature for both composition-based and structure-based material property prediction. We then propose a materials dataset redundancy reduction algorithm called MD-HIT and evaluate it with several composition- and structure-based distance thresholds for reducing dataset sample redundancy. We show that with this control, the reported performance tends to better reflect a model's true prediction capability. Our MD-HIT code can be freely accessed at <https://github.com/usccolumbia/MD-HIT>
§ INTRODUCTION
Density functional theory (DFT)-level accuracy for material property prediction <cit.> and R^2 > 0.95 for thermal conductivity prediction <cit.> with fewer than a hundred training samples have recently been reported by a growing list of machine learning algorithms in the materials informatics community. In <cit.>, an AI model was shown to predict the formation energy of a hold-out test set containing 137 entries from structure and composition with a mean absolute error (MAE) of 0.064 eV/atom, which significantly outperforms DFT computations for the same task (discrepancies of >0.076 eV/atom). In another related work in Nature Communications by the same group <cit.>, an MAE of 0.07 eV/atom was achieved for composition-only formation energy prediction using deep transfer learning, which is comparable to the MAE of DFT computation. Pasini et al. <cit.> reported that their multitasking neural networks can estimate material properties (total energy, charge density and magnetic moment) for a specific configuration hundreds of times faster than first-principles DFT calculations while achieving comparable accuracy. In <cit.>, the authors claimed their graph neural network models can predict the formation energies, band gaps, and elastic moduli of crystals with better-than-DFT accuracy over a much larger dataset. In <cit.>, Farb et al. showed numerical evidence that ML model predictions deviate from DFT less than DFT deviates from experiment for all nine properties they evaluated on the QM9 molecule dataset, and claimed that the out-of-sample prediction errors with respect to the hybrid DFT reference were on par with, or close to, chemical accuracy. In <cit.>, Tian et al. reported that current ML models can achieve accurate property prediction (formation energy, band gap, bulk and shear moduli) using composition alone, without structure information, especially for compounds close to the thermodynamic convex hull. However, this good performance may be partially due to the over-represented redundancy in their test samples, obtained with 6:2:2 random selection from matminer datasets without redundancy control. To illustrate this point, Figure <ref> shows the formation energy and band gap landscapes over the MP composition space, generated by mapping the Magpie features of all unique MP compositions to 2D space using t-SNE and then plotting the property surface. Both figures show that there exist a large number of local areas with smooth or similar property values. Random splitting of samples in those areas into training and test sets may lead to information leakage and over-estimation of the prediction performance.
Despite these encouraging successes, the DFT-accuracy claims of these ML models for material property prediction should be interpreted cautiously, as they all report average performance evaluated on randomly held-out samples drawn from unexpectedly redundant datasets. Materials databases such as the Materials Project and OQMD are characterized by the existence of many redundant (highly similar) materials due to the tinkering material design practice over the history of materials research. For example, the Materials Project database has many perovskite cubic structure materials similar to SrTiO_3. This sample redundancy makes random-split evaluation of machine learning models unreliable, so that ML models tend to achieve over-estimated predictive performance, which is misleading for the materials science community. This issue is well known in ecology <cit.> and in bioinformatics for protein function prediction, where a redundancy reduction procedure (CD-Hit <cit.>) is required to reduce sample redundancy by ensuring that no pair of samples has a sequence similarity greater than a given threshold, e.g., 95% sequence identity. In a recent work in 2023, it was also shown that excellent benchmark scores may not imply good generalization performance <cit.>.
The over-estimation of ML performance for materials has been investigated in a few studies. In <cit.>, Meredig et al. examined the extrapolation performance of ML methods for materials discovery. They found that traditional ML metrics (even with cross-validation (CV)) overestimate model performance for materials discovery, and introduced leave-one-(material)-cluster-out cross-validation (LOCO CV) to objectively evaluate the extrapolation performance of ML models. They especially highlighted that materials scientists often intend to extrapolate with trained ML models rather than interpolate when searching for new functional materials, and that sampling in materials training data is typically highly non-uniform. Thus, the high interpolation performance of ML models trained on datasets with high sample redundancy (e.g., due to doping) does not indicate a strong capability to discover new materials (out-of-domain (OOD) samples). They showed that current ML models have much more difficulty generalizing from the training clusters to a distinct test cluster, and suggested the use of uncertainty quantification (UQ) on top of ML models to evaluate and explore candidates in new regions of the design space. Stanev et al. <cit.> also discussed this generalization issue across different superconductor families. In <cit.>, Xiong et al. propose K-fold forward cross-validation (FCV) as a new way of evaluating exploration performance in materials property prediction by first sorting the samples by their property values before CV splitting. They showed that the prediction performance of current ML models is actually very low as measured by their proposed FCV evaluation method and exploratory prediction accuracy. A similar study for thermal conductivity prediction <cit.> also showed that ML models trained on samples with low property values are usually not good at predicting samples with high property values, indicating weak extrapolation capability. These studies show the need for material property model developers to focus more on extrapolative prediction performance rather than on average interpolation performance over test samples that are highly similar to training samples due to dataset redundancy.
The materials dataset redundancy issue has also been studied recently from the point of view of training efficient ML models or achieving sample efficiency. In <cit.>, Magar and Farimani proposed an adaptive sampling strategy to select informative samples for training machine learning models with the least amount of data. They assumed that the informative samples for a model are those with the highest K (e.g., 250) MAEs in the test set, which are added iteratively to an initial training set of 1000 samples. Another selection approach is to add samples similar to the data points of the training set having the maximum MAE during training. They showed that their sampling algorithms can create smaller training sets that obtain better performance than the baseline CGCNN model trained with all training samples. This approach can be used with active learning to build high-performance ML models in a data-efficient way. In a more recent work <cit.>, Li et al. studied the redundancy in large material datasets and found that a significant degree of redundancy is present across multiple large datasets for various material properties, and that up to 95% of the data can be removed from ML model training with little impact on prediction performance for test sets sampled randomly from the same distribution. They further showed that the redundant data is due to over-represented material types and does not help improve the low performance on out-of-distribution samples. They proposed a pruning algorithm similar to <cit.>, which first splits the training set into A and B, trains an ML model on A, and evaluates the prediction errors on samples in B; test samples with low MAEs are then pruned, and the remaining samples are merged and split into A and B again, and so on. Both approaches rely on the iterative training of ML models and are specific to a given material property. They also proposed an uncertainty-quantification-based active learning method to generate sample-efficient training sets for model training. While these works recognize the possibility of building data-efficient training sets, they did not address how redundancy leads to the over-estimated ML model performance commonly seen in the literature. Moreover, all these approaches for building informative training sets are specific to a single material property, which makes it difficult to generate one non-redundant benchmark dataset for benchmarking material property prediction algorithms across all material properties. Another limitation of these methods is that they yield different similarity thresholds when applied to different datasets, so the resulting non-redundant datasets have different minimum distances among their samples.
Since material property prediction research is now pivoting toward developing highly accurate ML models that are generalizable and transferable between different materials (including materials of different families), a sound evaluation of ML algorithms is needed to recognize the limitations of existing models and to guide the development of new ones. In this context, reducing the dataset redundancy of both training and test sets can avoid over-estimation of ML model performance, ameliorate the training bias towards samples in crowded areas, and push model developers to focus on improving extrapolation performance instead of only interpolation performance.
In this paper, we argue for the importance of redundancy control in training and test set selection to achieve objective performance evaluation. Neglecting this has led to many overestimated ML performance reports in the literature for both composition-based and structure-based material property prediction. We then conduct ML experiments to show that the over-estimated models usually fail on samples that are distant from the training samples (i.e., they lack extrapolation performance). We developed two redundancy reduction algorithms (MD-HIT-composition and MD-HIT-structure), with open-source code, for reducing the dataset redundancy of both composition and structure datasets. These two algorithms are based on composition and structure distance metrics, and only add samples whose distances to the already selected samples are above a defined threshold. After this data redundancy control, the dataset can be split randomly into training, validation, and test sets to achieve objective performance evaluation. We show that with this dataset redundancy control, the reported performance tends to reflect a model's true prediction capability.
§ METHOD
§.§ MD-HIT-composition algorithm for redundancy reduction of composition datasets
The early version of the CD-HIT algorithm <cit.> from bioinformatics was originally developed to handle large-scale sequence datasets efficiently. It employs a clustering approach to group similar sequences together based on a defined sequence identity threshold. Within each cluster, only one representative sequence, called the "centroid," is retained, while the rest of the highly similar sequences are considered duplicates and removed. However, this clustering approach is still inefficient for datasets with hundreds of thousands of sequences. The next generation of CD-HIT further improved efficiency by using a greedy algorithm <cit.>.
Both our MD-HIT-composition and MD-HIT-structure redundancy reduction algorithms are greedy incremental algorithms designed based on this idea. For compositions, MD-HIT starts the selection process with a seed material (H2O by default). It then sorts the remaining materials by the number of atoms (rather than by formula length) and, one by one, classifies each as a redundant or a representative material based on its similarity to the representatives already selected. The composition similarities are estimated using the ElMD (Earth Mover's Distance) package, which provides options for linear, chemically derived, and machine-learned similarity measures. By default, we used the Mendeleev similarity and the Magpie similarity <cit.> for generating our non-redundant composition datasets. The Magpie distance function is defined as the Euclidean distance between the widely used Magpie composition feature vectors <cit.> of two materials. The matminer materials informatics package provides several other composition descriptors that could be used as well. Here we focus on the ElMD and Magpie-feature-based distance functions for redundancy control of composition datasets for material property prediction.
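For illustration, the following is a minimal sketch of this greedy incremental selection loop. It assumes that the candidate formulas are pre-sorted by atom count and that a fixed-length feature vector (e.g., Magpie features) is available for each formula; the Euclidean distance used here is a stand-in for whichever composition metric (ElMD Mendeleev, Magpie, etc.) is selected in the actual implementation.

```python
import numpy as np


def md_hit_composition(formulas, features, threshold, seed="H2O"):
    """Greedy incremental redundancy reduction over compositions (sketch).

    formulas:  list of formula strings, assumed pre-sorted by atom count
    features:  dict mapping each formula to a fixed-length feature vector
    threshold: minimum allowed distance between any two kept samples
    """
    def distance(a, b):
        # Placeholder metric: Euclidean distance between feature vectors.
        return float(np.linalg.norm(features[a] - features[b]))

    kept = [seed] if seed in features else [formulas[0]]
    for formula in formulas:
        if formula in kept:
            continue
        # A candidate is redundant if it is closer than `threshold`
        # to any representative already selected.
        if all(distance(formula, rep) >= threshold for rep in kept):
            kept.append(formula)
    return kept
```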
The complete composition similarity metrics can be found in Table <ref>.
§.§ MD-HIT-Structure algorithm for redundancy reduction of structure datasets
The MD-HIT-structure algorithm uses the same greedy adding approach as MD-HIT-composition, except that it uses a structure-based distance metric. However, because different crystals contain different numbers of atoms, comparing two structures is non-trivial: most structure descriptors have different dimensions for structures with different numbers of atoms. Here we chose two structure distances for redundancy reduction. The first is a distance metric based on XRD features calculated from the crystal structures. We apply a Gaussian smoothing operation to the XRD pattern calculated with the Pymatgen XRDCalculator module and then sample 900 points evenly distributed between 0 and 90 degrees, which leads to XRD features with a fixed dimension of 900.
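A sketch of this XRD featurization is shown below, assuming pymatgen's XRDCalculator; the smoothing width sigma and the normalization are illustrative choices rather than the exact values used in our code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from pymatgen.core import Structure
from pymatgen.analysis.diffraction.xrd import XRDCalculator


def xrd_feature(cif_path, n_points=900, sigma=2.0):
    """Fixed-length (900-d) XRD descriptor of a crystal structure (sketch)."""
    structure = Structure.from_file(cif_path)
    pattern = XRDCalculator().get_pattern(structure, two_theta_range=(0, 90))

    # Drop the stick pattern onto an evenly spaced 2-theta grid between 0 and 90 degrees.
    grid = np.linspace(0, 90, n_points)
    feature = np.zeros(n_points)
    for angle, intensity in zip(pattern.x, pattern.y):
        feature[np.argmin(np.abs(grid - angle))] += intensity

    # Gaussian smoothing so nearby peaks contribute to the same region of the
    # descriptor, followed by normalization so distances are comparable.
    feature = gaussian_filter1d(feature, sigma=sigma)
    return feature / (np.linalg.norm(feature) + 1e-12)
```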
We also selected the OrbitalFieldMatrix (OFM) feature to calculate the distance between two structures. This feature has also been used in <cit.> to select informative samples for ML model training. It is a set of descriptors that encodes the electronic structure of a material, providing information about the distribution of electrons in different atomic orbitals within a crystal structure. OFM features give a comprehensive representation of the electronic structure and bonding characteristics of materials and have a fixed dimension (1024).
Similar to MD-HIT-composition, the MD-HIT-structure algorithm starts the selection process by placing a seed material (H2O by default) in the non-redundant set. It then sorts the remaining materials in the candidate set by the number of atoms and, one by one, classifies each as a redundant or a representative material based on its similarity (the Euclidean distance between XRD or OFM features) to the representatives already selected into the non-redundant set. Redundant samples are discarded, while non-redundant ones are added to the non-redundant set until the candidate set is empty.
§.§ Composition based materials property prediction algorithms
We evaluate two state-of-the-art composition-based material property prediction algorithms, Roost <cit.> and CrabNet (the Compositionally Restricted Attention-Based Network) <cit.>, to study the impact of dataset redundancy on their performance. Roost is a machine learning approach specifically designed for predicting material properties from composition alone. It utilizes a graph neural network framework to learn relationships between material compositions and their corresponding properties. CrabNet is a transformer self-attention-based model for composition-only material property prediction that matches or exceeds current best-practice methods on nearly all of 28 benchmark datasets.
§.§ Structure based material property prediction algorithms
We evaluate two state-of-the-art structure-based material property prediction algorithms, ALIGNN (Atomistic Line Graph Neural Network) <cit.> and DeeperGATGNN <cit.>, to study the impact of dataset redundancy on their performance. ALIGNN addresses a major limitation of most current graph neural network (GNN) models used for atomistic predictions, which rely only on atomic distances while overlooking bond angles. Bond angles play a crucial role in distinguishing atomic structures, and small deviations in bond angles can significantly impact several material properties. ALIGNN is a GNN architecture that conducts message passing on both the interatomic bond graph and its corresponding line graph, the latter specifically designed for bond angles. It has achieved state-of-the-art performance on most benchmark problems of Matbench <cit.>. DeeperGATGNN is a global-attention-based graph neural network that uses differentiable group normalization and residual connections to build deep graph neural networks without performance degradation, and it has achieved superior results on a set of material property prediction tasks.
§.§ Evaluation criteria
We use the following performance metrics to evaluate the impact of dataset redundancy on model performance: Mean Absolute Error (MAE), R-squared (R^2), and Root Mean Squared Error (RMSE).
Mean Absolute Error (MAE):
MAE = 1/n∑_i=1^n| y_i - ŷ_i |
R-squared (R^2):
R^2 = 1 - ∑_i=1^n (y_i - ŷ_i)^2/∑_i=1^n (y_i - y̅)^2
Where y_i represents the observed or true values, ŷ_i represents the predicted values, and y̅ represents the mean of the observed values. The summation symbol ∑ is used to calculate the sum of values, and n represents the number of data points in the dataset.
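For completeness, the RMSE listed above follows its standard definition:
RMSE = √(1/n∑_i=1^n (y_i - ŷ_i)^2)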
§ RESULTS
§.§ Datasets generation
We downloaded 125,619 CIF structures from the Materials Project database, which contain 89,354 unique compositions. For compositions that correspond to multiple polymorphs, we use the average material property value as the default value for that composition, except for formation energy, for which we use the minimum value. We also dropped mp-101974 (HeSiO2), whose Magpie features could not be computed. We then removed all formulas with more than 50 atoms and obtained a non-duplicate composition dataset with 86,741 samples. We then use different similarity (distance) thresholds to generate non-redundant datasets. For the Mendeleev similarity, we use distance thresholds of 0.5, 0.8, 1, 1.5, 2, 2.5 and 3 to generate seven non-redundant datasets, whose sizes range from 86,740 to 3,177. Similarly, we generate eight Matscholar non-redundant datasets, whose sizes range from 50.82% to 2.33% of the total. We also applied the MD-HIT-structure algorithm to all 125,619 CIF structures and used different thresholds to generate seven XRD non-redundant datasets and eight OFM non-redundant datasets.
After removing redundancy at varying degrees of sample identity using the MD-HIT algorithms, the details of all non-redundant datasets are shown in Table 2.
To visually understand the effect of redundancy removal, Figure <ref> shows the t-SNE maps of the material distributions of the full dataset and two non-redundant datasets. For each dataset, we calculate the Magpie composition features of all samples and then use the t-SNE dimension reduction algorithm to map the features to a two-dimensional space. Figure 2(a) shows the distribution of the whole dataset, which is densely packed with highly redundant samples. Figure 2(b) shows the less redundant Matscholar-nr dataset generated with a threshold of 0.1; it contains only 50.82% of the samples while still covering the whole map. Figure 2(c) shows the Mendeleev-nr non-redundant dataset with only 4,930 samples, just 5.68% of the whole dataset, while still covering the whole map with much lower redundancy. The non-redundant datasets thus allow us to test the true generalization capability of models trained and tested on them.
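Maps of this kind can be reproduced with a short script along the following lines; it assumes matminer's Magpie preset featurizer and scikit-learn's t-SNE with default settings, which may differ from the exact configuration used for our figures.

```python
import numpy as np
from pymatgen.core import Composition
from matminer.featurizers.composition import ElementProperty
from sklearn.manifold import TSNE


def tsne_map(formulas, random_state=0):
    """2D t-SNE embedding of Magpie composition features (sketch)."""
    featurizer = ElementProperty.from_preset("magpie")
    feats = np.array([featurizer.featurize(Composition(f)) for f in formulas])
    return TSNE(n_components=2, random_state=random_state).fit_transform(feats)
```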
§.§ Composition based material property prediction with redundancy control
To examine the material property prediction performance of ML models on datasets with Mendeleev-distance and Matscholar-distance redundancy control, we conducted a series of experiments to explore how the degree of redundancy affects ML performance for formation energy and band gap prediction. The non-redundant datasets derived from the whole MP composition dataset of 86,741 samples using different thresholds were divided into training, validation, and testing sets with a ratio of 8:1:1. Figures <ref> and <ref> compare the performance of Roost and CrabNet for formation energy and band gap prediction on datasets of different sizes, filtered by Mendeleev distance thresholds of 0, 0.5, 0.8, 1, 1.5, 2, 2.5 and 3 and Matscholar distance thresholds of 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35 and 0.4.
Figure <ref>(a) shows the prediction performance (MAE and R^2) of Roost and CrabNet for formation energy prediction evaluated on the whole dataset and the non-redundant datasets. The performance of both models deteriorates as the threshold increases (i.e., as data redundancy decreases), as evidenced by the decreasing R^2 and increasing MAE scores. For band gap prediction (Figure <ref>(b)), the R^2 scores of both models decrease gradually as the threshold increases. The MAE scores show a general uptrend, but they do not increase consistently with the threshold; instead, they jump abruptly at certain points. This could be due to outliers in the band-gap datasets, and it also shows that band gap prediction is more challenging.
Figure <ref> shows the ML performance on the Matscholar-controlled non-redundant datasets. In Figure <ref>(a), we find that the correlations between the prediction performance of Roost and CrabNet and the thresholds (i.e., the degree of data redundancy) are much stronger than those in Figure <ref>(a), indicating that the Matscholar distance tends to generate more evenly distributed non-redundant datasets than the Mendeleev distance. However, these consistent trends of MAE and R^2 do not hold for the band gap prediction performance shown in Figure <ref>(b), where the R^2 curves are similar to those in Figure <ref>(b) while the band gap MAEs vary widely across thresholds. We checked this phenomenon by running multiple experiments for each threshold and obtained similar results. One possible reason is that a large percentage of band gap samples have zero values. Overall, we find that removing dataset redundancy allows us to obtain more objective performance estimates for ML models.
Through these experiments, we observe that without redundancy reduction, a significant portion of the test samples is concentrated in crowded regions with low prediction errors. This occurs because the model may rely too heavily on information from these redundant samples during learning while disregarding more diverse data features. Excessive sample redundancy can therefore lead to deceptively good results on the test set.
§.§ Structure based material property prediction with redundancy control
To investigate redundancy control for structure-based material datasets, we downloaded the whole Materials Project database of 123,108 crystal structures along with their formation energies per atom and band gaps. We then use the XRD and OFM features of the crystal structures to define the similarity between pairs of structures, and control the structure redundancy using thresholds on the minimum XRD/OFM distance between any pair of samples. For the XRD-based non-redundant datasets, we used thresholds of 0.5, 0.6, 0.8, and 0.9. We then evaluated the material property prediction performance of two state-of-the-art graph neural network algorithms, DeeperGATGNN and ALIGNN. The results are shown in Figure <ref>(a) for formation energy prediction and Figure <ref>(b) for band gap prediction.
First, we find that the XRD distance provides good control of data redundancy: the MAEs of both algorithms gradually increase with increasing XRD thresholds, which correspond to lower dataset redundancy (Figure <ref>(a)), while the R^2 scores decrease as the thresholds go up. For the band gap prediction results in Figure <ref>(b), the degree of dataset redundancy also affects the performance of both algorithms, though with a more complex effect than for formation energy. The R^2 scores of both algorithms drop with increasing thresholds. However, while the MAEs of DeeperGATGNN go up overall with increasing thresholds, the MAEs of ALIGNN on the non-redundant datasets with thresholds 0.8 and 0.9 are actually lower than on the dataset with threshold 0.6, even though the R^2 scores are lower. This discrepancy indicates that the band gap prediction problem involves higher nonlinearity, and that outlier band gap values may also play a role. This phenomenon is also observed in the composition-based results in Figures <ref> and <ref>.
We further evaluated how OFM-controlled data redundancy affects the algorithms' performance. Figures <ref>(a) and (b) show how the MAE and R^2 change with decreasing redundancy (increasing thresholds). For formation energy prediction (Figure <ref>(a)), both algorithms behave consistently: the R^2 scores generally decrease with increasing thresholds while the MAE scores increase. This indicates that the OFM distance metric can serve as a good redundancy control method for crystal structure datasets. For band gap prediction, however, Figure <ref>(b) shows a surprising result: the R^2 scores go down with increasing threshold as expected for both algorithms, but the MAE scores also go down, which is unexpected since lower redundancy should make property prediction more challenging. To investigate this, we counted the percentages of near-zero band gap (<0.01 eV) samples in the test sets of the five datasets with thresholds 0, 0.15, 0.2, 0.45, and 0.7. While the whole redundant dataset contains only 48.64% near-zero band gap samples, our MD-HIT algorithm inadvertently tends to pick higher percentages of near-zero band gap samples: 64.09%, 67.81%, 84.52%, and 92.43% for thresholds 0.15, 0.2, 0.45, and 0.7, respectively. This makes the prediction much easier and explains why the MAEs drop. To further illustrate this data bias, we plotted scatter plots of the band gaps predicted by DeeperGATGNN on the whole dataset and on two non-redundant datasets. We can clearly see the dominance (92.43%) of near-zero samples in the non-redundant dataset with threshold 0.7, which makes the prediction much easier compared to the whole dataset. This bias might be reduced by choosing a different seed structure rather than the SrTiO_3 used in this experiment. It also shows the importance of watching for data bias, which can easily lead to over-estimated ML model performance in material property prediction.
§ CONCLUSION
Large materials databases such as the Materials Project usually contain a high degree of redundancy, which causes biased ML models and over-estimated performance evaluations due to the redundancy between randomly selected test samples and the remaining training samples. The DFT-level accuracy claimed in the literature, averaged over all data samples, deviates from the common needs of materials scientists, who usually want to discover new materials that are different from the known training samples; this makes it important to evaluate and report extrapolative rather than interpolative material property prediction performance.
Here we propose and develop two material dataset redundancy reduction algorithms based on a greedy strategy inspired by the CD-HIT algorithm from bioinformatics. We use two composition distance metrics and two structure distance metrics, with distance thresholds controlling the sample redundancy of our composition and structure datasets. Our benchmark results for two composition-based and two structure-based material property prediction algorithms on two material properties (formation energy and band gap) show that the prediction performance of current ML models tends to degrade after the removal of redundant samples, leading to a more realistic measure of the prediction capability of current ML material property models. The availability of our easy-to-use, open-source MD-HIT-composition and MD-HIT-structure code makes it easy for researchers to conduct objective evaluations and report realistic performance of their ML models for material property prediction. It should also be noted that the current multi-threaded implementation of our MD-HIT algorithms is still slow, and further improvements are highly desirable.
§ DATA AND CODE AVAILABILITY
The source code and the non-redundant datasets can be freely accessed at https://github.com/usccolumbia/MD-HIT
§ CONTRIBUTION
Conceptualization, J.H.; methodology,J.H. Q.L.,S.L.,E.S.,Y.Z.; software, J.H., S.S.,Y.S., S.O.; resources, J.H.; writing–original draft preparation, J.H., S.S., Y.S.,S.O.,S.L.,E.S.,Y.Z.; writing–review and editing, J.H; visualization, J.H. and S.S.; supervision, J.H.; funding acquisition, J.H.
§ ACKNOWLEDGEMENT
Qin Li would like to thank for the computing support of the State Key Laboratory of Public Big Data, Guizhou University.
|
http://arxiv.org/abs/2307.04028v1 | 20230708183125 | Measuring the Success of Diffusion Models at Imitating Human Artists | [
"Stephen Casper",
"Zifan Guo",
"Shreya Mogulothu",
"Zachary Marinov",
"Chinmay Deshpande",
"Rui-Jie Yew",
"Zheng Dai",
"Dylan Hadfield-Menell"
] | cs.CV | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Measuring the Success of Diffusion Models at Imitating Human Artists

Stephen Casper (MIT, equal contribution), Zifan Guo (MIT, equal contribution), Shreya Mogulothu (MIT), Zachary Marinov (MIT), Chinmay Deshpande (Harvard University), Rui-Jie Yew (MIT and Brown University), Zheng Dai (MIT), Dylan Hadfield-Menell (MIT)

Correspondence: Stephen Casper ([email protected])
§ OVERVIEW
Modern diffusion models have set the state-of-the-art in AI image generation.
Their success is due, in part, to training on Internet-scale data which often includes copyrighted work. This prompts questions about the extent to which these models learn from, imitate, or copy the work of human artists.
This work suggests that questions involving copyright liability should factor in a model's capacity to imitate an artist.
Tying copyright liability to the capabilities of the model may be useful given the evolving ecosystem of generative models.
Specifically, much of the legal analysis of copyright and generative systems focuses on the use of protected data for training <cit.>.
However, generative systems are often the result of multiple training processes. As a result, the connections between data, training, and the system are often obscured.
In our approach, we consider simple image classification techniques to measure a model's ability to imitate specific artists. Specifically, we use Contrastive Language-Image Pretrained (CLIP) <cit.> encoders to classify images in a zero-shot fashion.
Our process first prompts a model to imitate a specific artist. Then, we test whether CLIP can be used to reclassify the artist (or the artist's work) from the imitation. If these tests match the imitation back to the original artist, this suggests the model can imitate that artist's expression.
Our approach is simple and quantitative. Furthermore, it uses standard techniques and does not require additional training. We demonstrate our approach with an audit of Stable Diffusion's <cit.> capacity to imitate 70 professional digital artists with copyrighted work online. When Stable Diffusion is prompted to imitate an artist from this set, we find that the artist can be identified from the imitation with an average accuracy of 81.0%. Finally, we also show that a sample of the artist's work can be matched to these imitation images with a high degree of statistical reliability. Overall, these results suggest that Stable Diffusion is broadly successful at imitating individual human artists. Code is available here: https://colab.research.google.com/drive/1ScHo9uMdUgId0DlSr4W4RgnMD44dLiku?usp=sharing
§ BACKGROUND
Contrastive Language-Image Pretraining (CLIP): CLIP <cit.> is a technique for training AI systems that encode images and text into fixed-length vector representations.
CLIP image and text encoders are trained to produce similar encodings of image/caption pairs and dissimilar encodings of image/caption non-pairs.
The more geometrically distant two encodings of images or captions are, the less related they are according to the encoder, and vice versa.
Using this principle, <cit.> introduced a method to classify an image among a set of labels based on the distances between encodings. We use this method in our proposed test.
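As an illustration, a zero-shot classifier of this kind can be sketched with the Hugging Face implementation of the ViT-B/32 CLIP encoders used in our experiments; the candidate labels passed to it are placeholders.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")


def classify(image: Image.Image, labels: list) -> dict:
    """Return the zero-shot probability CLIP assigns to each candidate label."""
    inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape (1, num_labels)
    probs = logits.softmax(dim=-1)[0]
    return dict(zip(labels, probs.tolist()))
```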
Diffusion Models: Diffusion models <cit.> such as Stable Diffusion <cit.> and Midjourney <cit.>, are capable of generating images from arbitrary, user-specified prompts.
Their success has largely been due to training on large amounts of text/image data, often including copyrighted works <cit.>.
Modern image-generation diffusion models are trained using CLIP-style encoders.
When given an encoding of a caption, a diffusion model is trained to generate an image corresponding to the caption <cit.>.
Accordingly, a diffusion model that generates images from these embeddings is trained to be the inverse of a CLIP image encoder.
Legal Motivation: In the United States, <cit.> established that copyright infringement “is measured by considering the qualitative and quantitative significance of the copied portion in relation to the plaintiff’s work as a whole”. However, the subjective nature of these determinations makes practical enforcement complicated <cit.>.
In evaluating copyright questions involving AI systems, legal analyses have focused on how copyrighted work is used in the system's training data <cit.>, but such a focus on training data does not connect liability to an AI system's ability to copy an artist.
In contrast, we show how standard image classification techniques can be used to help determine how successful AI image generators are at imitating individual human artists.
This approach is consistent, quantitative, and connected to the capabilities of the resulting AI system.
Our goal, however, is not to automate determinations of infringement but to demonstrate how tried and tested image classification techniques from machine learning can be used to analyze legal claims.
§ EXPERIMENTS
We conduct two complementary experiments to evaluate Stable Diffusion's ability to imitate human artists. First, we classify human artists from imitations of their work, and second, we match real work from human artists to imitations. Both experiments suggest that Stable Diffusion is broadly successful at imitating human artists.
§.§ Identifying Artists from Imitations
Method: We used CLIP encoders to classify artists from Stable Diffusion's imitations of them. We selected 70 artists from the LAION-aesthetics dataset <cit.>, the dataset used to train Stable Diffusion. We selected these 70 as artists who may potentially be harmed by digital imitations using several criteria: each artist is alive, has a presence on digital art platforms (Instagram, DeviantArt, and ArtStation), publishes artwork or sells their artwork (e.g., prints or digital works), and has more than 100 images in the LAION dataset.
Figure <ref> outlines our method.
We prompted Stable Diffusion v1.5 (https://huggingface.co/runwayml/stable-diffusion-v1-5) to generate images in the style of each artist, using prompts of the form “Artwork from <artist’s name>”.
Example images are in Figure <ref>.
We then used CLIP encoders (https://huggingface.co/openai/clip-vit-base-patch32) to classify each image among a set of 73 labels.
The 73 labels consisted of each of the 70 artist's prompts (“Artwork from <artist’s name>”) plus three default labels: “Artwork”, “Digital Artwork”, and “Artwork from the public domain.”
These additional labels lend insight into how confident CLIP is that an image imitates a particular artist's style instead of some more generic style.
We then classified each imitation image among these labels using the technique from <cit.>.
CLIP-based classification produces a probability of an image matching each label, and we evaluate the model on the correctness of its most-likely prediction and its confidence in the correct artist.
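The generation-and-reclassification loop can be sketched as follows, reusing a CLIP zero-shot classify helper like the one in the Background section; the artist names are placeholders, and generation settings are left at the library defaults rather than reflecting our exact configuration.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

artists = ["<artist 1>", "<artist 2>"]  # placeholder names
labels = [f"Artwork from {a}" for a in artists] + [
    "Artwork", "Digital Artwork", "Artwork from the public domain"]

correct = 0
for artist in artists:
    image = pipe(f"Artwork from {artist}").images[0]
    probs = classify(image, labels)            # CLIP zero-shot step
    if max(probs, key=probs.get) == f"Artwork from {artist}":
        correct += 1
print(f"identification accuracy: {correct / len(artists):.1%}")
```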
Results: We repeated the experiment with the 70 artists ten times to reduce the effect of random variation. On average, CLIP correctly classified 81.0% of the generated images as works made by artists whose names were used to generate them.
Over the ten trials, 69 of the 70 artists were correctly classified in a plurality of the ten trials.
Overall, these results suggest that Stable Diffusion has a broad-ranging ability to imitate the styles of individual artists.
We compared these results to two baselines.
First, we implemented a random-name baseline by running the same experiment with 70 random names from a random name generator (https://randomwordgenerator.com/name.php).
Since Stable Diffusion was not trained on artists with these names (unless a random name is coincidentally the same as some artist's), this experiment serves as a proxy for how Stable Diffusion would handle artists not in its training data.
In this case, only 6 names (8.6%) were guessed correctly.
Second, a random guess would only result in a successful classification every 1 in 73 attempts (1.4%) on average.
We visualize results from our main experiment alongside the controls in Figure <ref>.
Results are Robust to Different Sets of Artists: To test whether our 70 artists were especially classifiable, we ran the original experiment but with a larger set of indiscriminately-selected artists and found similar results. We selected the 250 artists with the highest number of images in the LAION dataset and found that CLIP correctly classified 81.2% of the images.
This demonstrates that successful classification transcends a particular specific set of artists.
§.§ Matching Artwork to Imitations
Method: Our first experiment tested how easily artists could be identified from diffusion model imitations of them.
To provide a complementary perspective, we also directly study the similarity of artists' digital works to Stable Diffusion's imitations of them. For each of the 70 artists, we retrieve the top result obtained by Google Image searching “<artist's name> art.”
As before, we then use Stable Diffusion to generate 10 images for each artist with the prompt “Artwork from [artist's name].” We then compare the real images and generated images. Distances are measured by first encoding images
using the CLIP image encoder and calculating the cosine distance between encodings.
Results: For each artist, we calculate whether real images from artists are more similar to imitations of that artist or other artists. The significance was calculated using a rank sum test with a Bonferroni correction factor of 70. Results are in Figure <ref>.
90% (63/70) of the experiments produce p values less than 0.05. This compares to an average of 22.8% (16/70) for a control experiment using random artist assignments of real images. These results further support that Stable Diffusion is broadly successful at imitating artists.
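The per-artist significance test described above could be implemented roughly as follows, given CLIP image embeddings of the real work and of the imitation images; the use of scipy's rank-sum test and the simple cosine-distance helper are illustrative choices consistent with the description, not the exact analysis script.

```python
import numpy as np
from scipy.stats import ranksums


def cosine_distance(a, b):
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def artist_p_value(real_emb, own_imitations, other_imitations, n_artists=70):
    """Rank-sum test of whether an artist's real work is closer to imitations
    of that artist than to imitations of other artists, with a Bonferroni
    correction factor equal to the number of artists."""
    own = [cosine_distance(real_emb, e) for e in own_imitations]
    other = [cosine_distance(real_emb, e) for e in other_imitations]
    _, p = ranksums(own, other)
    return min(1.0, p * n_artists)
```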
§ CONCLUSION
We have demonstrated how AI image classification can help to measure the success of diffusion models imitating human artists.
We argue that these methods can provide a practical way to tie questions about copyright liability to the capabilities of a model instead of its training data alone.
By matching imitation images to both artists' names and works, we find that Stable Diffusion is broadly successful at imitating human digital artists.
We hope that future work can use image classification to analyze legal claims and to test defenses against AI imitation of copyrighted work.
§ ACKNOWLEDGEMENTS
We thank Taylor Lynn Curtis and Lennart Schulze for feedback.
|
http://arxiv.org/abs/2307.03886v1 | 20230708033922 | On Regularization and Inference with Label Constraints | [
"Kaifu Wang",
"Hangfeng He",
"Tin D. Nguyen",
"Piyush Kumar",
"Dan Roth"
] | cs.LG | [
"cs.LG",
"stat.ML"
] |
On Regularization and Inference with Label Constraints (ICML 2023)

Kaifu Wang (University of Pennsylvania, Philadelphia, PA, USA), Hangfeng He (University of Rochester, Rochester, NY, USA; part of the work done while at the University of Pennsylvania), Tin D. Nguyen (Massachusetts Institute of Technology, Cambridge, MA, USA), Piyush Kumar (Systems and Technology Research, Woburn, MA, USA), Dan Roth (University of Pennsylvania, Philadelphia, PA, USA)

Correspondence: Piyush Kumar ([email protected]), Dan Roth ([email protected])
Prior knowledge and symbolic rules in machine learning are often expressed in the form of label constraints, especially in structured prediction problems.
In this work, we compare two common strategies for encoding label constraints in a machine learning pipeline, regularization with constraints and constrained inference, by quantifying their impact on model performance.
For regularization, we show that it narrows the generalization gap by precluding models that are inconsistent with the constraints. However, its preference for small violations introduces a bias toward a suboptimal model.
For constrained inference, we show that it reduces the population risk by correcting a model's violation, and hence turns the violation into an advantage.
Given these differences, we further explore the use of two approaches together and propose conditions for constrained inference to compensate for the bias introduced by regularization, aiming to improve both the model complexity and optimal risk.
§ INTRODUCTION
Domain knowledge in machine learning is often framed as constraints on the output label space.
Such label constraints have been widely identified in natural language processing tasks
<cit.>
and studied in the context of structured prediction
<cit.>.
For example, in temporal reasoning <cit.> where the model is asked to label the relations (“before” or “after”) among a set of events, the assigned labels will need to satisfy a transitivity constraint which means, for example, the facts that an event E_1 is after E_2 and that E_2 is after E_3 imply that E_1 is after E_3.
The central question is how to encode such a constraint into a learning algorithm to ensure better performance and generalization of the learned model.
Practitioners have developed two techniques to encode a label constraint in a machine learning pipeline. The first, called regularization with constraints, penalizes a model for its violation of the constraint in addition to the classification loss <cit.>. The second, called inference with constraints, modifies prediction rules directly by enforcing strictly constrained inference <cit.> or balancing the original model's output with the constraint in a soft way <cit.>.
Although these two learning algorithms have been shown to be empirically successful, we are not aware of theoretical analyses that elucidate each algorithm's advantages or disadvantages in comparison with the other. Natural questions include: how do these two differ in their impact on the learned model? Moreover, in practice, the constraints could be noisy, i.e., even the gold labels may violate them <cit.>. In such cases, do they still improve the model performance? If so, by how much?
Focusing on multiclass classification with label constraints, we compare regularization with constraints and constrained inference.
For each algorithm, we quantify its optimal risk (aka approximation error) and its generalization gap (aka estimation error).
Specifically, in Section <ref>, we show that regularization with constraints achieves a smaller generalization error by reducing the model complexity, but will introduce a bias towards a suboptimal model if the risk minimizer and the violation minimizer do not coincide.
In Section <ref>, we study a broad family of constrained inference model called Constrained Conditional Model (CCM) <cit.> and point out that the constrained inference could reduce the risk of a model if and only if the model violates the constraint more than the true data distribution.
This further suggests finding models with higher violation, which contrasts with the learning objective used in regularization, which discourages violation.
Given these contrasts, we further study the combination and interaction of the two methods in Section <ref> and describe how constrained inference could compensate for the bias introduced by regularization.
To the best of our knowledge, our analysis is the first to provide a theoretical view on comparing the two approaches. We believe in the importance of this comparison and hope to bring this problem to the attention of the machine learning community.
In summary, our contributions include:
* We provide an error bound (Theorem <ref>) that describes the tradeoff between the generalization gap and the optimal risk when performing regularization with constraints.
* We propose a sufficient and necessary condition (Theorem <ref>) for constrained inference to improve a model by quantifying its reduction in risk.
Based on this, we further argue that constrained inference, when used at training time, implicitly modifies the training objective in an opposite direction as in the regularization approach (Proposition <ref>).
* We study the combination of regularization and constrained inference, and propose sufficient (Theorem <ref>) as well as necessary (Theorem <ref>) conditions for the combined algorithm to achieve improvement in both optimal risk and model complexity.
Proofs of all the theoretical results are in the appendix.
§ PRELIMINARIES
Our goal is to learn a mapping from the instance space X to the output space Y.
The learner has access to a set of labeled training data S_ L of size m_ L, which contains i.i.d. samples of a distribution P on X ×Y.
The marginal distribution of X is denoted as P_X.
In this work, we assume the ground truth label associated with x ∈X is generated by a deterministic mapping y_:X →Y (_ is short for oracle). We also denote the true label as y_ when the context is clear.
Model.
The scoring class F contains scoring functions f:X ×Y →R.
We will also call a f∈F a classifier.
Let Δ_Y be the |Y|-dimensional probability simplex. Each scoring function
induces a probabilistic prediction P_f(·|x) ∈Δ_Y by performing softmax inference as P(y|x) ∝exp(f(x,y)).
Loss Function.
The prediction of f at x is evaluated by the classification error (or ℓ^1 loss) L(x,y_orc,f) := 1 - P_f(y_orc|x), which is half the ℓ^1 distance between the one-hot distribution e_y_orc and P_f on Δ_Y.
It can also be viewed as a smoothed version of the standard zero-one loss in the sense that lim_t →∞ L(x,y_orc,tf) = 1{argmax_y∈Y f(x,y) ≠ y_orc}.
More background on the definition of the ℓ^1 loss is provided in Appendix <ref>.
A scoring function f is evaluated by its risk R(f) := E[L(x,y_orc,f)]. The empirical estimate of the risk using the labeled examples in S_ L is denoted as R̂(f, S_ L).
We also consider the cross-entropy surrogate loss defined as L_ce(x,y_orc,f) := -logP_f(y_orc|x) and refer to its expectation R_ce(f) = E[L_ce(x,y_orc,f)] as the cross-entropy risk.
Label constraint.
A label constraint (or constraint for short) is a deterministic mapping C:X → 2^Y-{∅}. Namely, C maps an instance x to a nonempty subset of Y, which may or may not contain the true label y_orc(x). In particular, we say a constraint C is noise-free if P(y_orc(x) ∈ C(x))=1. Otherwise, C is said to be a noisy constraint and its noise rate is denoted as V_orc := P(y_orc(x) ∉ C(x)).
Violation.
A constraint C is equipped with a violation function, which is an indicator function v_C(x,y) = 1{y∉ C(x)}. We also overload the notation v and define the violation of a classifier f at an instance x as v_C(x,f):= 1-P_f(C(x)|x) = ∑_y∉ C(x)P_f(y|x). Its expectation is V_C(f):= E[v_C(x,f)]. We elide the subscript C and write them as v(x,y), v(x,f) and V(f) when the context is clear. Similar to the classification error, we consider a cross-entropy surrogate of the violation function defined as v_ce(x,f):= -logP_f(C(x)|x) and its expectation V_ce(f) = E[v_ce(x,f)].
Rademacher complexity.
We use the following version of Rademacher complexity that is adopted from <cit.> to characterize the generalization ability of the scoring space of multiclass classifiers F:
The empirical Rademacher complexity of scoring class F with respect to a set S = {x_i}_i=1^m that contains m samples of the instance is defined as
ℜ_m(F;S)
:=
1/mE_ϵ[
sup_f∈F∑_i=1^m
∑_y∈Yϵ_i,y f(x_i,y)
]
where ϵ=(ϵ_i,y)_i∈ [m],y∈Y are independent Rademacher random variables, each of which is uniformly distributed over {-1,+1}. The Rademacher complexity of scoring class F is the expectation of the empirical version:
ℜ_m(F)
:= E_S ∼P_X^m[ℜ_m(F;S)]
This definition of Rademacher complexity is a special case of the factor graph complexity proposed by <cit.>, which is defined for more general structured prediction models. It is hence possible to extend our results of the generalization bounds to structured models by replacing the Rademacher complexity with factor graph complexity. In this work, we focus on multiclass classifiers for the simplicity of presentation.
§ REGULARIZATION WITH CONSTRAINTS
In a standard machine learning algorithm, the learner receives a set of labeled data S_ L ∈∪_m=1^∞(X ×Y)^m and finds the empirical risk minimizer, which is defined as argmin_f ∈FR̂(f;S_ L).
In this section, we consider a method that modifies this learning objective by adding a regularization term defined with the constraint C. Precisely, we consider minimizing an augmented objective defined as
L_ρ (f)
:= R(f) + ρ V(f)
where ρ≥ 0 is a fixed tradeoff parameter.
The idea of regularizing the model by adding a penalty for the violation of the constraints on an unlabeled dataset is widely adopted in the literature. In particular, the cross entropy violation is known as the semantic loss <cit.> in the context of logical constraints. Other designs of the regularization term include using the KL-divergence on the probability space in the posterior regularization algorithm <cit.> and using the t-norms from fuzzy logic <cit.>.
We will show this algorithm improves the generalization error by reducing the complexity of the scoring space (Theorem <ref>), but in general leads to a larger classification risk in the long run (Proposition <ref>), thus resulting in a tradeoff between estimation and approximation errors.
§.§ Semi-supervised Regularization with Constraints
We consider a semi-supervised approach where the learner has access to an unlabeled dataset S_ U that contains m_ U independent samples of the instance X, resulting in the following definition.
Given a labeled dataset S_ L of size m_ L and an unlabeled dataset S_ U of size m_ U, a scoring space F and a tradeoff parameter ρ≥ 0, we define and denote the empirical risk and violation minimizer (ERVM) as:
f̂_ρ(S_ L,S_ U)
:= argmin_f∈F (
1/m_ L∑_(x,y)∈ S_ L L(x,y,f) .
. + ρ/m_ U∑_x∈ S_ U v_C(x,f)
).
We also denote the expected version as:
f_ρ := argmin_f ∈F R(f) + ρ V_C(f).
For example, with our notation, f̂_0 is the ERM and f_∞ is the minimizer of the expected violation function. Notice that the minimizer in general is non-unique. Therefore, when we state any proposition that is related to f_ρ or f̂_ρ, we mean the proposition will hold for any of the minimizers.
§.§ Deviation from The Optimal Risk
In this section, we study how the risk of the minimizer f_ρ deviates from the optimal risk in F. The reason we are interested in bounding R(f_ρ) is that, in general, the minimizer f_ρ is non-unique, and different minimizers may have different risks. Therefore, to describe the risk of ERVM in the long run (in Theorem <ref>), we provide an upper bound for all the possible risks of f_ρ.
For any constraint C and ρ≥ 0, the following holds.
R(f_0)
≤R(f_ρ)
≤R(f_0) + ρ (V(f_0) - V(f_∞))
.
The same relation also holds for the empirical estimates R̂ and V̂. Moreover, for any ρ>0, there exists a scoring space and data distribution so that the RHS can be reached even with a noise-free constraint C.
This result shows the minimizer of the regularized objective in general has a suboptimal risk over F. On the other hand, if the risk minimizer is simultaneously a violation minimizer, i.e., V(f_0) = V(f_∞), this relation implies consistency, i.e., R(f_ρ) = R(f_0).
This quantity V(f_0) can be small when the noise rate V_orc is small and the model is expressive enough (e.g., a deep neural net) to approximate the true model.
§.§ Generalization Bounds
Now we discuss how regularization could reduce the complexity of the hypothesis class. The first step is to show that the violation of the target hypothesis is not too large. In particular, the following bound is a direct consequence of minimizing the regularized objective:
Let f_ρ be the minimizer of the regularized learning objective defined in (<ref>). If the minimum violation in F is upper bounded by a known constant u ≥ 0, i.e., V(f_∞) ≤ u, then V(f_ρ) ≤ 1/ρ + u.
The upper bound u can be made arbitrarily small by adding a baseline model defined as f_t(x,y) = t·1{y∈ C(x)} and driving t to infinity. This construction is possible because the mapping C is known to the learner. The benefits of knowing C will be further explored in Section <ref> when we discuss inference with constraints.
For any B ≥ 0, we let F_B := {f ∈F| V(f) ≤ B} be the set of classifiers with small violation.
From the above discussion, we know that the target hypothesis f_ρ will lie in a smaller space F_u+1/ρ, which is characterized by the violation function and hence can be identified only with unlabeled data. To this end, we describe how the violation as well as the risk can be estimated with data.
Given a labeled dataset S_ L of size m_ L, for any δ>0, with probability at least 1-δ, the following inequality holds uniformly for f ∈F:
R(f)
≤R̂(f;S_ L) + ℜ_m_ L(F) + √(log(1/δ)/2m_ L)
Given an unlabeled dataset S_ U of size m_ U, for any δ>0, with probability at least 1-δ, the following inequality holds uniformly for f ∈F:
V(f)
≤V̂(f;S_ U) + ℜ_m_ U(F) + √(log(1/δ)/2m_ U)
The proof of this result relies on a contraction lemma established in <cit.>, which was used to analyze the argmax inference with margin losses. Our analysis extends their results to softmax inference, which may be of independent interest.
Furthermore, if the size of the constrained set C(x) is a constant, namely |C(x)|=c_0 < c = |Y| for all x ∈X, then the Rademacher complexity term of equation (<ref>) can be improved to √(2)/2√(1/c-c_0 + 1/c_0)ℜ_m_ U(F) (see the discussion in the proof).
This term is symmetric with the transformation c_0 ↦ c-c_0, due to the fact that estimating the violation V_C of a constraint C is equivalent to estimating V_Y-C.
In particular, when c_0 < c/2, if the constraint is more restrictive and informative (so that c_0 is small), it can be more difficult to estimate the violation.
Assuming lim_m→∞ℜ_m(F) = 0, this result implies that L_ρ can be approximated by its empirical version L̂_ρ given a sufficient amount of data. On the other hand, since L̂_ρ is upper bounded by its cross-entropy surrogate R̂_ + ρV̂_, we further have that
L_ρ(f)
≤R̂_(f,S_ L) + ρV̂_(f,S_ U) + o_m_ L, m_ U(1)
where o_m_ L, m_ U(1) converges to 0 as m_ L, m_ U →∞.
Therefore, in practice one can minimize this upper bound by solving the convex surrogate problem
min_f ∈FR̂_(f,S_ L) + ρV̂_(f,S_ U).
where R̂_(f,S_ L) and V̂_(f,S_ U) are the empirical averages of the cross-entropy loss and the violation, respectively.
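A possible training loop for this surrogate is sketched below in PyTorch; the model, optimizer, and batching are assumptions, and the violation term is one common instantiation, namely the negative log-probability of the constrained set, mirroring the semantic-loss-style losses cited in the related work.

import torch
import torch.nn.functional as F

def violation_ce(logits, mask):
    # cross-entropy violation surrogate: -log P_f(C(x) | x), mask is True on labels in C(x)
    log_probs = F.log_softmax(logits, dim=-1)
    log_p_C = torch.logsumexp(log_probs.masked_fill(~mask, float('-inf')), dim=-1)
    return -log_p_C.mean()

def train_step(model, opt, x_lab, y_lab, x_unlab, mask_unlab, rho):
    opt.zero_grad()
    loss = F.cross_entropy(model(x_lab), y_lab)                    # empirical cross-entropy risk
    loss = loss + rho * violation_ce(model(x_unlab), mask_unlab)   # + rho * empirical violation
    loss.backward()
    opt.step()
    return loss.item()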
Finally, using these results, we bound the risk of the classifier learned by ERVM. For simplicity, we will denote the generalization gap B(δ, m, F) := ℜ_m(F) + 2√(log(1/δ)/2m).
We have with probability at least 1-6δ that
R(f̂_ρ)
≤ R(f_0) + ρ V(f_0) - ρ V(f_∞)
+ ℜ_m_ L(F_1/ρ + u + B(δ, m_ U, ℱ))
+ ρℜ_m_ U(F_1/ρ + u + B(δ, m_ U, ℱ))
+ 2 √(log(2/δ)/2m_ L) + 2ρ√(log(2/δ)/2m_ U)
where ℜ(·) is the Rademacher complexity defined in (<ref>).
First, we show f̂_ρ and f_ρ both lie in the subspace F_1/ρ + u + B(δ, m_ U, ℱ) with high probability since the violation can be well-approximated, according to Lemma <ref>.
Then, the gap between the objective L(f_ρ) and L(f̂_ρ) is controlled by the Rademacher complexity of F_1/ρ + u + B(δ, m_ U, ℱ).
Finally, we use the inequalities established in Lemma <ref> to further upper bound the term L(f_ρ) using the risk and violation of f_0.
Using the same proof technique, this result can be extended to other choices of loss function as long as:
(a) The loss is bounded so that the optimal regularized model has a small violation, as in Lemma <ref>. (b) The loss is Lipschitz with the model scores so that a generalization bound associated with the loss holds, as in Lemma <ref>.
Reducing the generalization gap.
The bound (<ref>) contains three parts: the first line is the worst risk that can be achieved by f_ρ, as described in Proposition <ref>; the second and third lines give the complexity of the classifiers that have a small violation; and the last line collects error terms that are independent of the model.
This bound (<ref>) is most preferable when a large set of unlabeled data is available so that the approximation errors of violations (i.e., term B(δ/2, m_ U, ℱ), ℜ_m_ U(F_1/ρ + u + B(δ/2, m_ U, ℱ)) and √(log(1/δ)/2m_ U)) are all small. Then, the model complexity is mainly described by the term ℜ_m_ L(F_1/ρ + u), which is the Rademacher complexity of a proper subset of F.
In this sense, the regularization method reduces the generalization gap by reducing the model complexity of the scoring space.
Tradeoff in regularization.
In situations where m_ U is large, the tradeoff parameter ρ balances two quantities: a larger ρ leads to a smaller scoring space F_1/ρ + u, but brings more bias depending on the suboptimality of f_0 in violation, measured by V(f_0)-V(f_∞).
The benefit of regularization is greater if fewer classifiers can achieve a violation that is close to the optimal value V(f_∞).
We provide the following example to illustrate how the Rademacher complexity can be reduced in linear models.
[Logistic Regression]
Consider a linear model for multiclass classification where Y=[c] and f(x,j)=w_j^ T x with ∑_j=1^c w_j_2^2 ≤ 1.
Suppose x ∈R^p is distributed in the unit sphere x_2 ≤ 1 with expectation E[x] = α∈R^p and covariance matrix σ^2I_p× p.
Without constraint, the Rademacher complexity is upper bounded as ℜ_m(F) ≤√(c/m) as in <cit.> (Theorem 2).
Now, consider a constraint that removes exactly one label so that C(x) ≡ [c-1].
With regularization, for sufficient small t<1/(c+2), we have the following bound
ℜ_m(F_t)
≤1/2(√(c/m) + √(c-σ^2-α_2^2/m))
which is strictly tighter than the standard bound. Intuitively, if x is concentrated around the origin 0, the prediction by any classifier will tend to be a uniform distribution. Therefore, a large bias and variance in x (captured by σ^2+α_2^2) help to distinguish models with different levels of violation.
Compare to existing results.
Previous works mostly consider a zero-one loss for both classification and violation under the assumption that the risk minimizer also achieves zero violation.
Then, one can simply preclude all the classifiers f∈F that have nonzero empirical violations on the unlabeled dataset and find the ERM among the remaining classifiers.
This approach has been theoretically studied in <cit.> for binary classification and <cit.> in a similar manner for regression by characterizing the complexity of the reduced set of hypotheses that achieve zero violation.
Conceptually, we can regard this algorithm as a special case of problem (<ref>) when ρ = ∞.
Our study, therefore, extends previous works with a soft learning objective to multiclass classification problems.
§ INFERENCE WITH CONSTRAINTS
An inference algorithm is a mapping F ×X →Δ_Y.
By default, we define it as the softmax inference: (f,x) ↦P_f(·|x).
When performing inference with constraints (or constrained inference), we modify this softmax mapping for the given function f using the additional information of C.
In this section, we study the Constrained Conditional Model (CCM) <cit.>, a broad family of models that perform inference with constraints.
We show that, at testing time, whether CCM reduces the risk depends on whether the model's expected violation is larger than the noise rate of the constraint V_ (Theorem <ref>).
In particular, when the constraint is noise-free, CCM always achieves a smaller or equal risk.
Furthermore, we show better risks are achieved if the constrained inference is also performed at training time, and pursuing this optimal risk leads to a learning objective that contrasts with the one used in the regularization approach (Proposition <ref>).
To distinguish the two, we will refer to a model in the original space F as a base model and to an augmented model as a constrained model.
§.§ Constrained Conditional Model
CCM augments existing scoring functions using a linear combination with the violation function. Precisely, given a vanilla scoring space F, the scoring space of CCM is defined as follows.
Given a scoring space F, a constraint C and a fixed tradeoff parameter μ∈ [0, ∞], the scoring space of the Constrained Conditional Model (CCM) is defined as:
F^μ
:= { (x,y) ↦ f(x,y) - μ v_C(x,y) | f∈F}
We will also denote
f^μ(x,y)
:= f(x,y) - μ v_C(x,y)
to be the augmented scoring function for a given f∈F. In particular, setting μ = ∞ will assign a score -∞ to any y ∉ C(x), which implies P_f^∞(y|x)=0, namely forcing strictly-constrained inference.
The tradeoff parameter μ allows CCM to improve the base model f despite noisy constraints, as we will discuss in detail in the following sections. Otherwise, if the noise rate is large, performing strictly-constrained inference can be harmful because it assigns 0 probability mass to any label y that is outside C(x) and hence has a classification loss L(x,y_,f^∞)=1 at any x where y_∉ C(x).
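As an illustration, CCM inference can be realized as a post-hoc adjustment of the base scores; the sketch below is an assumed implementation covering both soft (finite μ) and strictly-constrained (μ = ∞) softmax inference, and is not code from the paper.

import numpy as np

def ccm_probabilities(scores, mask, mu):
    """Softmax inference for the CCM score f^mu(x, y) = f(x, y) - mu * v_C(x, y).

    scores: (num_labels,) base scores f(x, .)
    mask:   (num_labels,) boolean, True where y is in C(x)
    mu:     tradeoff in [0, inf]; mu = np.inf recovers strictly-constrained inference
    """
    penalty = np.where(mask, 0.0, mu)            # v_C(x, y) = 1 for labels outside C(x)
    adjusted = scores - penalty                  # CCM scores f^mu(x, y)
    adjusted = adjusted - adjusted[mask].max()   # stabilize; the max is taken inside C(x)
    p = np.exp(adjusted)
    return p / p.sum()

# Example: three labels, C(x) = {0, 1}; mu = inf puts zero mass on label 2.
print(ccm_probabilities(np.array([1.0, 0.5, 2.0]), np.array([True, True, False]), np.inf))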
The learner can choose whether or not to perform constrained inference at training time. This choice leads to the following two approaches:
* On-training approach: perform constrained inference both at training and testing time, and directly find the ERM over F^μ using labeled data (also known as Inference Based Training in <cit.>).
* Post-training approach: first find the ERM over the vanilla F using labeled data, and then perform constrained inference at the testing time (also known as Learning Plus Inference in <cit.>).
For both approaches, the generalization ability of CCM is characterized by the complexity of F^μ. So, we first point out that CCM does not increase the Rademacher complexity.
For any fixed μ≥ 0 and m ∈N, we have the following identity:
ℜ_m(F^μ)
= ℜ_m(F)
§.§ Post-training Constrained Inference
For a given and fixed classifier f (presumably trained with data), how does performing constrained inference impact the model performance?
In this section, we study the change in risk when the learner chooses to augment f as a CCM f^μ defined in (<ref>).
It is most convenient to characterize the risk of a CCM using the cross-entropy loss, although we will also conduct the same analysis for the hinge and ℓ^1 losses, as we will point out later.
To start with, for any f and μ∈ [0, ∞], we let
Δ^μ_(f)
:=R_(f) - R_(f^μ)
be the difference in the risk between the base model and the CCM (the larger the better).
We have:
* For any fixed model f, there exists a μ_0 > 0 such that R_(f^μ_0) < R_(f) if and only if
V(f) > V_
* The change in risk can be lower bounded as
Δ^μ_(f)
≥ V(f)(1-^-μ) - μ V_
* In particular, if the constraint is noise-free, we have
Δ^∞_(f)
= V_(f)
The first result describes the sufficient and necessary condition for constrained inference to be helpful.
It requires f to have a larger violation (measured by ℓ^1 violation) than the true data on average so that it has the potential to be improved. This condition is easier to satisfy when the constraint is less noisy.
The second result further quantifies the risk reduction as an explicit function of μ.
The last result shows that in the noise-free case, the maximum risk reduction is exactly the expected violation measured by cross-entropy. Its consequences will be further discussed in the next section.
We present the counterparts of Theorem <ref> for hinge loss and ℓ^1 loss in the Appendix <ref>.
The information delivered by those results is consistent with Theorem <ref> in the sense that (1) whether CCM can reduce the risk depends on the comparison between the violation of the original model and the oracle.
(2) the reduction can be described or lower bounded by some measures of the violation.
The drawback of the hinge loss is its non-smoothness due to the discontinuity of the argmax inference. The drawback of the ℓ^1 loss is that the range of μ such that R(f^μ) ≤ R(f) can be disconnected and difficult to describe. Therefore, we provide weaker results by deriving only sufficient or necessary conditions for CCM to reduce the risks.
As an application of Theorem <ref>, we derive a sufficient condition under which CCM achieves smaller risks.
Assuming V(f) ≥ V_, then R_(f^μ) ≤ R_(f) if the following condition holds:
μ≤ W(-η/e^η)+η
where η := V(f)/V_ is the relative violation rate and W is the Lambert W function, whose value W(t) is defined as the solution w of the equation w e^w = t.
The RHS of (<ref>) increases with η and vanishes as η→ 1.
In particular, when the constraint is noise-free, one should encourage strictly-constrained inference and set μ = ∞. We also provide a plot of the RHS in the proof in the appendix.
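The threshold in (<ref>) is easy to evaluate numerically; the following sketch assumes scipy is available for the Lambert W function and that estimates of V(f) and V_ are given as inputs.

import numpy as np
from scipy.special import lambertw

def max_mu(v_model, v_true):
    """Upper end of the mu-range covered by the sufficient condition mu <= W(-eta/e^eta) + eta,
    where eta = V(f) / V_true."""
    if v_true == 0.0:
        return np.inf          # noise-free constraint: any mu, including mu = inf
    eta = v_model / v_true
    if eta <= 1.0:
        return 0.0             # no improvement guaranteed by the condition
    w = lambertw(-eta * np.exp(-eta)).real  # principal branch, value in (-1, 0)
    return w + eta

print(max_mu(v_model=0.3, v_true=0.1))  # eta = 3 -> threshold around 2.82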
§.§ On-training Constrained Inference
In this subsection, we study the on-training approach where we perform constrained inference both at the training and testing time. We use the results we established in the last subsection to describe the learning objective of the on-training approach, and argue that it achieves better risks than the post-training approach. Based on this, we further show that minimizing the cross entropy over CCM encourages a large violation of the base model, which contrasts the learning objective (<ref>) that is used in regularization.
We provide a simplified analysis for the noise-free setting where we choose μ = ∞ and perform strictly-constrained inference.
Then, the on-training approach aims to find the optimal (in terms of cross entropy) base model as follows:
:=
_f ∈F R_(f^∞)
(recall that f^∞ means performing strictly-constrained inference with f). We characterize the behavior of this minimizer with the following results, which are direct corollaries of Theorem <ref>.
Assuming C is noise-free, we can reformulate the learning objective (<ref>) as
= _ f∈F R_(f) - V_(f)
A fundamental difference.
Surprisingly, the reformulated learning objective (<ref>) is opposite to the surrogate regularized objective defined in (<ref>) in its attitude towards violations. This contrast suggests a fundamental difference between regularization and constrained inference: the regularization method treats violation as undesirable and precludes classifiers with substantial violations, whereas constrained inference corrects a model from its violation, so a large violation means a great potential for improvement.
On-training vs post-training.
Loosely speaking, this result also suggests that in general, the best constrained model is not the constrained best model. To be more precise, suppose we perform post-training constrained inference for the cross-entropy risk minimizer in the vanilla model, i.e., := _f∈F R_ (f).
Then, we can reformulate the definition of as
:= _f∈F(R_(f) - V_(f))_objective in (<ref>), post-training risk + V_(f)
which can be regarded as a “regularized” version of (<ref>). Therefore, similar to Proposition <ref>, we can argue that the risk minimizer over F, as a base model of CCM, contains a bias towards a higher risk than the on-training method's as follows:
R_(^∞)
≤ R_(^∞)
≤ R_() - min_f∈F V_(f)
The proof is included in the proof of Proposition <ref>.
Computational considerations.
In practical structured prediction problems where the output is sequential or graphical, performing constrained inference during training time is typically expensive due to the complexity of the constraints. For example, as pointed out by <cit.>, when the constraint is defined by a logical expression over several output variables, computing the probability of the constraint being satisfied corresponds to the problem of weighted model counting (WMC) and is #P-complete <cit.>.
Therefore, to implement the on-training approach in practice, one can alternatively use approximate inference to ensure tractability.
For example, strictly constrained inference, formulated as Integer Linear Programming <cit.>, can be further relaxed as Linear Programming <cit.>.
Another example is amortized inference <cit.>, which accelerates convergence to the optimal model while performing exact inference only once every τ>1 iterations.
Compare to existing results.
There has been limited theoretical work discussing the impact of performing constrained inference. The most related one is <cit.>, which derives VC-style generalization bounds for linear structured models to argue that (1) performing strictly constrained inference in a post-training manner (Learning Plus Inference in the paper) improves the model performance and (2) the on-training approach (Inference Based Training in the paper) further reduces the error in the long run. Our approach directly analyses the classification risk and extends the comparison to noisy constraints and soft-constrained inference with CCM.
§ REGULARIZATION WITH CONSTRAINED INFERENCE
We have seen that regularization and constrained inference have different impacts on the generalization gap and the risk.
On one hand, CCM has the same Rademacher complexity ℜ(F) as the original model (Proposition <ref>), which can be reduced by regularization. So, applying the regularization method to CCM also reduces the generalization gap.
On the other hand, their impacts on the risk are contradictory, as summarized in Figure <ref>.
In this section, we aim to describe how these impacts can interact with each other by applying our established results to explore the usage of these two methods together.
We show both positive and negative results for the combination. On one hand, we propose sufficient conditions under which the bias introduced by regularization can be compensated by performing constrained inference (Proposition <ref>).
On the other hand, we study whether post-training constrained inference can reduce the risk of the optimal classifier f_ρ. We show that with a noisy constraint, choosing a large value of ρ in the regularized objective (<ref>) will make CCM incapable of reducing the risk (Proposition <ref>).
§.§ CCM Compensates for Regularization Bias
As the red part of Figure <ref> summarizes, we have shown that the regularization and constrained inference have contradicting influences on the risk. Moreover, the regularization bias is controlled by the violation of the risk minimizer (Proposition <ref>), which can be reduced by constrained inference. This suggests the possibility for CCM to reduce the additional risk introduced by regularization.
We formally describe this phenomenon by considering the following combination: an on-training approach that aims to find the minimizer of the following regularized surrogate objective over the CCM F^μ:
f_⋆^μ
:= _g∈F^μ R_(g) + ρ V_(g)
Recall that R_() is the minimum cross-entropy risk that can be achieved in F.
We show that unlike the vanilla regularized objective (<ref>), it is possible for this algorithm to achieve a smaller risk than R_() as follows.
If
CCM improves so that Δ^μ_()> 0,
then letting
ρ
< V_()-μ V_/V_(^μ) - 1
will imply R_(f_⋆^μ) < R_().
This result shows a small choice of ρ allows the regularized optimizer f_⋆^μ to achieve better cross-entropy.
A less noisy constraint allows more choices of ρ to make this happen.
In particular, when the constraint is noise-free, since V_(^μ) → 0 as μ→∞, driving μ to ∞ will make R(f_⋆^μ) < R() for all ρ > 0.
As a cost, regularization will be less effective in reducing the Rademacher complexity with a large value of μ. In the extreme case, all the classifiers in F^∞ make zero violation, and hence cannot be distinguished by the regularization objective.
§.§ Post-regularized-training Constrained Inference
Finally, as the blue part of Figure <ref> summarizes, we have shown that post-training inference is beneficial only if the average violation of f is larger than V_ (Theorem <ref>). However, the minimizer of the regularized objective f_ρ tends to have a small violation (Proposition <ref>) scaled with 1/ρ.
Therefore, it is possible that choosing a large value of ρ will make post-training constrained inference incapable of reducing the risk when the constraint is noisy.
Formally, assuming a model is already trained with the vanilla regularized ℓ^1 objective as in (<ref>), we have the following holds.
Recall V(f_∞) is the minimal expected violation that can be achieved by F. If V_≥ V(f_∞) and
ρ≥1/V_ - V(f_∞)
then the minimizer f_ρ of the regularized objective (<ref>) will not be improved by post-training constrained inference for any μ∈ (0, ∞] in the sense that R_(f_ρ) ≤ R_((f_ρ)^μ).
The RHS of (<ref>) shrinks with a larger noise rate V_ and smaller V(f_∞). Intuitively, a more noisy constraint is less helpful (Theorem <ref>), while a small value of V(f_∞) allows f_ρ to violate less (Proposition <ref>) and hence gains fewer benefits from constrained inference (Theorem <ref>).
As a consequence, with a noisy constraint, choosing a large ρ in the regularized objective will make post-training constrained inference unnecessary or even harmful.
§ RELATED WORKS
Regularization with constraints.
In the context of structured prediction, the Posterior Regularization (PR) framework <cit.> proposed to regularize the log-likelihood by adding a distance of the probabilistic prediction to the constrained subspace of distributions.
The CoDL algorithm <cit.> is a semi-supervised algorithm that repetitively assigns constrained pseudo-labels to the unlabeled dataset and uses pseudo-labels to retrain the model.
CoDL and PR are further unified in <cit.> as special cases of a parameterized EM algorithm.
More recent works have proposed injecting logical constraints into deep models by augmenting the training objective with explicitly defined violation functions, such as the semantic loss <cit.>, the DL2 loss <cit.> and the inconsistency loss <cit.>, which motivate our theoretical formulation in (<ref>).
Inference with constraints.
The idea of injecting prior knowledge directly into a predictive model dates back to <cit.>, which formulates the problem of inference with hard constraints as Integer Linear Programming (ILP).
The idea of constrained inference has been followed and developed by NLP researchers and empirically shown to be effective in various problems such as summarization <cit.>, temporal reasoning <cit.>, semantic parsing <cit.> and text generation <cit.>.
<cit.> further defines the CCM to incorporate soft constraints into linear models.
Another related work is <cit.>, which uses Bayesian networks to model the label correlations and define an order to the labels.
The order information is then taken as extended features at inference time.
Theoretically, <cit.> provides a comparison between the on-training and post-training constrained inference using VC-style error bounds.
Semi-supervised learning theory.
Several theoretical semi-supervised learning frameworks such as <cit.> and <cit.> illustrate how hard constraints on the hypothesis space could reduce the generalization error. A detailed comparison can be seen in the discussion at the end of Section <ref>.
Learning with partial labels.
The problem of learning with constraints is closely related to the problem of learning from partial labels (also known as superset labels) <cit.> where each instance x in the dataset is assigned with a partial label s which also takes value in 2^Y.
The difference is that the constraint mapping itself is known to the learner and hence can be encoded in the inference algorithm directly, for example, via the CCM. Another difference is that the partial labels are typically more informative and can guarantee learnability alone <cit.>. In contrast, the constraints that appear in practice typically provide only side information and need to be used with gold labels together.
§ CONCLUSION AND FUTURE WORKS
In this paper, we presented a theoretical study of two methods to encode label constraints into a learning system: regularization and constrained inference.
We compared these two approaches by quantifying their impact on the optimal risk as well as the generalization error.
Our study revealed that the success of these two approaches relies on different data assumptions:
the regularization method requires the optimal classifier in the model to have a small violation while constrained inference requires the true data to have a small violation.
We further elucidated the detrimental consequences that arise when these assumptions fail to hold.
Finally, we demonstrate how their impacts on the model can interact when used together.
We have focused on multiclass classification, aiming to provide a starting point for understanding the different mechanisms of the two methods. For future work, we will extend the discussion to structured prediction problems where complex constraints are naturally defined. In particular, while the presence of constraints can improve the model performance, it also suggests a strong dependency inside the structure, which may hurt the generalization performance, as pointed out by <cit.>.
§ ACKNOWLEDGEMENTS
This work was partially supported by Contract FA8750-19-2-0201 with the US Defense Advanced Research Projects Agency (DARPA). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.
This work was also partially sponsored by the Army Research Office and was accomplished under Grant Number W911NF-20-1-0080. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
This work was also partially funded by ONR Contract N00014-19-1-2620.
Appendix
§ DETAILS ON LOSS FUNCTION
The ℓ^1 loss is a smoothed alternative to the zero-one loss and has been used in the theoretical analysis for the generalization error, see, for example, in <cit.> (Section 6.2). It can be related to other common loss functions as follows.
As distances on the probability simplex.
Let e_y ∈R^|Y| be a one-hot vector with the y^th coordinate be 1 and all others be 0. We then have that
L(x,y_,f)
:= 1 - P_f(y_|x)
= 1/2 ‖e_y_ - P_f‖_1
Moreover, since our label space Y is of finite cardinality, we further have that 1/2 ‖e_y_ - P_f‖_1 = TV(e_y_, P_f), the total variation distance.
Relation to zero-one loss.
By introducing a temperature parameter t ∈R_≥ 0 into the softmax function, it is well known that softmax(tu) converges to the one-hot vector of argmax_j u_j as t →∞. This implies
lim_t →∞ L(x,y_,tf)
= 1 - 1{argmax_y∈Y f(x,y) = y_}
= 1{argmax_y∈Y f(x,y) ≠ y_}
which is the zero-one loss.
Since performing softmax inference with temperature t can be equivalently regarded as performing softmax inference for the scoring space tF, for the simplicity of our presentation, we omit the temperature parameter in the softmax inference.
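A quick numerical check of this limit, using an arbitrary example score vector chosen purely for illustration:

import numpy as np

def softmax(u):
    z = u - u.max()
    e = np.exp(z)
    return e / e.sum()

u = np.array([1.0, 2.0, 0.5])
for t in [1, 10, 100]:
    print(t, np.round(softmax(t * u), 4))
# As t grows, the distribution concentrates on argmax_j u_j (index 1),
# so the l1 loss 1 - P_tf(y|x) approaches the zero-one loss.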
Relation to cross-entropy.
The total variation distance to a one-hot probability can be lower bounded by cross-entropy due to Pinsker's inequality. More directly, in our case, we have 1-p ≤ -log(p) for any p ∈ [0,1] from basic inequality. This implies L(x,y,f) ≤ L_(x,y,f).
In conclusion, the ℓ^1 loss is an ℓ^1 (equivalently, total variation) distance on the probability simplex, is a smoothed version of the zero-one loss, and is upper bounded by the cross-entropy. It is differentiable and bounded, so we can derive generalization bounds with Rademacher complexity. Another reason we are interested in softmax inference will become clearer in the discussion of constrained inference, where, in Theorems <ref>, <ref> and <ref>, the change in the expected cross-entropy and ℓ^1 losses can be lower bounded by a smooth function. With argmax inference, by contrast, the risk is in general not continuous and needs to be assumed to be Lipschitz to obtain similar results.
§ PROOFS FROM SECTION 3
§.§ Proof of Proposition <ref>
The first inequality is straightforward. For the second inequality, by definition (<ref>) we have
R(f_ρ) + ρ V(f_ρ)
≤ R(f_0) + ρ V(f_0)
and
V(f_ρ) ≥ V(f_∞)
.
Combining the two above inequalities yields
R(f_ρ) + ρ V(f_∞)
≤ R(f_0) + ρ V(f_0)
.
The desired inequality follows by rearranging these terms. This argument also holds if we replace the expectations with empirical estimates.
To see how the RHS bound can be reached, consider the following scoring space that contains two classifiers, f_0 and f_∞, and an instance space X that only contains one point x. Let C(x) = Y∖{y'} for some label y' ≠ y_, so that the constraint is noise-free. Let f_0 be such that P_f_0(y_)=a∈(0,1) and P_f_0(y')=b, and let f_∞ be such that P_f_∞(y_)=a-ϵ_1 and P_f_∞(y')=b-ϵ_2, with ϵ_1 < ρϵ_2. Then
R(f_∞) + ρ V(f_∞)
≤ 1 - (a - ϵ_1) + ρ (b-ϵ_2)
< 1-a + ρ b
= R(f_0) + ρ V(f_0)
which means f_∞ will be preferred to f_0 by the regularized objective.
§.§ Proof of Lemma <ref>
By definitions, we have
ρV(f_ρ)
≤R(f_ρ) + ρV(f_ρ)
≤R(f_∞) + ρV(f_∞)
≤ 1 + ρV(f_∞)
≤ 1 + ρ u
Therefore, we have that V(f_ρ) ≤ u + 1/ρ.
§.§ Proof of Lemma <ref>
To prove this theorem, we need the following lemmas. The first one is a contraction inequality established in <cit.>.
Let H be a set of functions mapping X to R^N. Suppose Φ_i is μ_i-Lipschitz with respect to the 2-norm, i.e.,
|Φ_i(v') - Φ_i(v)|
≤μ_i ‖v'-v‖_2
∀ v,v'∈R^N
Then for any set of m points x_1,…, x_m ∈X, the following inequality holds
1/mE_σ[
sup_h ∈H∑_i=1^m σ_i Φ_i(h(x_i))
]
≤√(2)/mE_ϵ[
sup_h∈H∑_i=1^m ∑_j=1^N ϵ_ijμ_i h_j(x_i)
]
where σ_is and ϵ_ijs are independent Rademacher variables uniformly distributed over {-1,+1}.
The second one computes the Lipschitz constants of the ℓ^1 losses by bounding its gradient's 2-norm.
Given a scoring function f:X ×Y →R, let f(x) = [f(x,y)]_y ∈Y∈R^|Y| be the vector of scores for each label.
For any two scoring functions f,f' and data (x,y), we have that
|P_f(y|x) - P_f'(y|x)|
≤√(2)/4 ‖f(x) - f'(x)‖_2
Furthermore, for any constraint C, we have
|P_f(C|x) - P_f'(C|x)|
≤1/4√(1 + 1/|C(x)|) ‖f(x) - f'(x)‖_2
where P_f(C|x)=P_f(C(x)|x)=∑_y ∈ C(x)P_f(y|x).
We start with the second claim.
Suppose C(x) = Y; then P_f(C|x) = 1 for any scoring function f, so the left-hand side vanishes and the inequality trivially holds.
Next, we assume C(x) ⊂Y.
Given a constraint C:X → 2^𝒴, the derivative of its violation function with respect to the score for a label y is
P_f(C|x)/ f(x,y) = ∑_y' ∈ C(x)P_f(y'|x)/ f(x,y)
= ∑_y' ∈ C(x)P_f(y|x) 1{y' = y} - P_f(y|x) P_f(y'|x)
The 2-norm of the gradient of the mapping f(x) ↦P_f(y|x) is then
(
∑_y ∈Y( ∑_y' ∈ C(x)P_f(y|x) 1{y' = y} - P_f(y|x) P_f(y'|x) )^2
)^1/2
which is maximized when P_f(y|x) = 1/2|C(x)| for all y ∈ C(x) and P_f(y|x) = 1/2(Y-|C(x)|) for all y ∉ C(x) (so that P_f(C|x)=1/2). The maximum is then
(
∑_y ∈ C(x)( ∑_y' ∈ C(x)P_f(y|x) 1{y' = y} - P_f(y|x) P_f(y'|x) )^2
+ ∑_y ∉ C(x)( ∑_y' ∈ C(x)P_f(y|x) P_f(y'|x) )^2
)^1/2
= √(|C(x)|(1/4|C(x)|)^2 + |Y-C(x)| (1/2|Y-C(x)|)^2)
= √(1/16 |C(x)| + 1/16|Y-C(x)|)
≤√(1/16 |C(x)| + 1/16)
= 1/4√(1 + 1/|C(x)|)
The boundedness of the gradient implies that the function f(x) ↦P_f(C|x) is Lipschitz with a Lipschitz constant 1/4√(1 + 1/|C(x)|).
The first claim then follows by considering the special constraint C(x) := {y_(x)} so that |C(x)| = 1.
Next, we present the proof of the theorem. By standard Rademacher complexity bounds, given a labeled dataset S of size m, for any δ>0, with probability at least 1-δ, the following inequality holds uniformly for f ∈F:
R(f)
≤R̂(f;S_ L) + 2 ℜ_m(H) + √(log (1/δ)/2m)
where
H
:= {(
x,y) ↦ 1- P_f(y|x): f ∈F
}
By the contraction lemma and Lipschitzness, we have
ℜ_m(H)
= 1/mE_SE_σ[
sup_f ∈F∑_i=1^m σ_i ( 1 - P_f(y_i|x_i))
]
≤√(2)/mE_SE_ϵ[
sup_f ∈F∑_i=1^m ∑_y ∈Yϵ_iy√(2)/4 f(x, y)
]
= 1/2mE_SE_ϵ[
sup_f ∈F∑_i=1^m ∑_y ∈Yϵ_iy f(x, y)
]
This implies
R(f)
≤R̂(f;S_ L) + ℜ_m(F) + √(log (1/δ)/2m)
The proof for the generalization bound of violation follows from the same argument. In particular, if the size of the constrained set C(x) is a constant, namely |C(x)|=c_0 < c = |Y| for all x ∈X, then from Equation (<ref>), we know that the mapping x ↦ 1- P_f(y|x) is Lipschitz with a Lipschitz constant 1/4√(1/c_0 + 1/c-c_0). So in this case, the generalization bound for the violation function can be improved as
V(f)
≤V̂(f;S_ U)
+ √(2)/2√(1/c_0 + 1/c-c_0)ℜ_m_ U(F)
+ √(log(1/δ)/2m_ U)
§.§ Proof of Theorem <ref>
Step 1. Showing the expected violation of f̂_̂ρ̂ is bounded.
First, we have with probability 1-δ,
ρV̂(f̂_ρ)
≤R̂(f̂_ρ) + ρV̂(f̂_ρ)
≤R̂(f_∞) + ρV̂(f_∞)
≤ 1 + ρV̂(f_∞)
≤ 1 + ρ(u + √(log(1/δ)/2m_ U))
where the last step follows by applying Hoeffding's inequality to V̂(f_∞). This result implies V̂(f̂_ρ) ≤1/ρ + u + √(log(1/δ)/2m_ U).
Second, Theorem <ref> claims that with probability 1-δ, the following inequality holds:
V(f̂_ρ) - V̂(f̂_ρ) ≤ℜ_m_ U(F) + √(log(1/δ)/2m_ U)
Putting these two inequalities together using union bound, we know with probability 1-2δ,
V(f̂_ρ)
≤1/ρ + u + ℜ_m_ U(F) + √(log(1/δ)/2m_ U) + √(log(1/δ)/2m_ U)
= 1/ρ + u + B(δ,m_ U,F)
Namely, with probability no less than 1-2δ, f̂_ρ lies in F_1/ρ + u + B(δ,m_ U,F), which is a fixed hypothesis class.
Step 2. Bounding the generalization gap of L_ρ.
Since f̂_ρ∈F_1/ρ + u + B(δ,m_ U,F), we can bound the generalization gap of L_ρ using the uniform convergence property of F_1/ρ + u + B(δ,m_ U,F). By standard decomposition,
L_ρ (f̂_ρ) - L_ρ (f_ρ)
=
L_ρ (f̂_ρ) - L̂_ρ (f̂_ρ)_(*)
+ L̂_ρ (f̂_ρ) - L̂_ρ (f_ρ)_≤ 0
+ L̂_ρ (f_ρ) - L_ρ (f_ρ)_(**)
For term (*), combining the two inequalities in Lemma <ref> and Step 1 via union bound, we know with probability 1-4δ,
(*)
≤ℜ_m_ L(F_1/ρ + u + B(δ,m_ U,F)) + √(log(1/δ)/2m_ L) + ρ( ℜ_m_ U(F_1/ρ + u + B(δ,m_ U,F)) + √(log(1/δ)/2m_ U))
For term (**), using Hoeffding's inequality for the risk and violation separately, we have with probability 1-2δ,
(**)
≤√(log(2/δ)/2m_ L) + ρ√(log(2/δ)/2m_ U)
By union bound, with probability 1-6δ,
L_ρ (f̂_ρ) - L_ρ (f_ρ)
≤ℜ_m_ L(F_1/ρ + u + B(δ,m_ U,F)) + ρℜ_m_ U(F_1/ρ + u + B(δ,m_ U,F)) + 2 √(log(2/δ)/2m_ L) + 2ρ√(log(2/δ)/2m_ U)_for convenience, denote these terms as B'
Step 3. Bounding the risk of f_ρ.
By Step 2, we have with probability 1-6δ,
R(f̂_ρ)
≤ R(f_ρ) + ρ V(f_ρ) - ρ V(f̂_ρ) + B'
≤ R(f_0) + ρ V(f_0) - ρ V(f̂_ρ) + B'
≤ R(f_0) + ρ V(f_0) - ρ V(f_∞) + B'
We conclude that with probability 1-6δ,
R(f̂_ρ)
≤ R(f_0) + ρ V(f_0) - ρ V(f_∞)
+ ℜ_m_ L(F_1/ρ + u + B(δ,m_ U,F)) + ρℜ_m_ U(F_1/ρ + u + B(δ,m_ U,F)) + 2 √(log(2/δ)/2m_ L) + 2ρ√(log(2/δ)/2m_ U)
as claimed.
§.§ Proof of Example <ref>
The normalizing factor ∑_j=1^c e^w_j^ T x is maximized at w_1=x=[1,0,0,…,0] and w_2=…=w_c=0 so that
∑_j=1^c e^w_j^ T x ≤ e + (c-1)
≤ c+2
This implies P_w(y_c) ≥ (e^w_c^ T x)/(c+2). Therefore, E[P_w(y_c)] ≤ t implies t(c+2) ≥E[e^w_c^ T x] ≥ e^E[w_c^ T x] = e^α^ T w_c, or equivalently α^ T w_c ≤log(t(c+2)).
Therefore, given a set of data S={x_i}_i=1^m and Rademacher random variables ϵ, the inner supremum in the definition of Rademacher complexity can be upper bounded by solving the following program
max ∑_i=1^m ∑_j=1^c ϵ_i, j w_j^ T x_i
s.t. ∑_j=1^c w_j^ T w_j ≤ 1
α^ T w_c ≤log(t(c+2))
Consider its Lagrangian
L(w, λ, μ)
= ∑_i=1^m ∑_j=1^c ϵ_i,j w_j^ T x_i
+ λ(1 - ∑_j=1^n w_j^ T w_j )
+ ν(log(t(c+2)) - α^ T w_c )
Denote ξ_j := ∑_i=1^m ϵ_i,jx_i. The Lagrangian is then maximized at w_j = ξ_j/(2λ) for j<c and w_c = (ξ_c- να)/(2λ). The dual function then writes:
g(λ, ν)
= νlog(t(c+2)) + λ + ∑_j=1^c-1ξ_j ^2_2/4 λ +ξ_c - να^2_2/4 λ≥νlog(t(c+2)) + √(∑_j=1^c-1ξ_j _2^2 + ξ_c - να_2^2 )
By weak duality, we have that
ℜ̂_m (F_t)
≤1/mE_ϵ[
min_ν≥ 0(
νlog(t(c+2)) + √(∑_j=1^c-1ξ_j _2^2 + ξ_c - να_2^2 ))
]
Assuming t<1/(c+2) so that log(t(c+2))<0. We can upper bound (<ref>) as
1/mE_ϵ[
min_ν≥ 0(
√(∑_j=1^c-1ξ_j _2^2 + ξ_c - να_2^2 ))
]
The function ∑_j=1^c-1ξ_j _2^2 + ξ_c - να_2^2 is minimized at ν = 0 if ξ_c^ T α≤ 0 and ν = ξ_c^ T α /α_2^2 otherwise. Denote the event ξ_c^ T α≤ 0 as E. By symmetry, we have that P(E) = 1/2 so that
1/mE_ϵ[
min_ν≥ 0(
√(∑_j=1^c-1ξ_j _2^2 + ξ_c - να_2^2 ))
]
= 1/2E_ϵ[ √(∑_j=1^cξ_j _2^2)| E ]
+ 1/2E_ϵ[√(∑_j=1^cξ_j _2^2 - (ξ_c^ T α)^2/α_2^2)| E]
Again by symmetry, the quantity (ξ_c^ T α)^2 is independent of E. Therefore, by Jensen's inequality, we have that
E_S,ϵ[√(∑_j=1^cξ_j _2^2 - (ξ_c^ T α)^2/α_2^2)| E]
≤√(E_S,ϵ[
∑_j=1^cξ_j _2^2 - (ξ_c^ T α)^2/α_2^2]
)
≤√(
cm - E_S,ϵ[ (ξ_c^ T α)^2/α_2^2]
)
= √(
cm - Var(ξ_c^ T α)/α_2^2)
= √(
cm - mσ^2 α_2^2+α_2^4/α_2^2)
= √(
(c-σ^2-α_2^2)m
)
Similarly, we can use Jensen's inequality to bound E_S,ϵ[ √(∑_j=1^cξ_j _2^2)| E ] ≤√(cm). Putting these together, we have that
ℜ_m (F_t)
=E_x[ℜ̂_m (F_t)]
≤1/2√(c/m) +1/2√(c-σ^2-α_2^2/m)
§ PROOFS FROM SECTION 4
§.§ Proof of Propostion <ref>
First, we show the Rademacher complexity of the singleton mapping is zero:
ℜ_m({(x,y)↦ -μ v(x,y)})
= 1/mE_x, ϵ[
∑_i=1^m∑_y ∈Y -ϵ_i,yμ v(x_i,y)
]
= 1/mE_x[
∑_i=1^m∑_y ∈Y -E[ϵ_i,y] μ v(x_i,y)
]
= 0
Second, we use the linearity of Rademacher complexity to obtain the desired result.
ℜ_m(F^μ)
= 1/mE_x, ϵ[ sup_f ∈F∑_i=1^m∑_y ∈Yϵ_i,y (f(x_i,y) - μ v(x_i,y))
]
= 1/mE_x, ϵ[ sup_f ∈F∑_i=1^m∑_y ∈Yϵ_i,y f(x_i,y)
] + 1/mE_x, ϵ[
∑_i=1^m∑_y ∈Y -ϵ_i,yμ v(x_i,y)
]
= ℜ_m(F) + ℜ_m({(x,y)↦ -μ v(x,y)}) = ℜ_m(F)
§.§ Proof of Proposition <ref>
* Given any scoring function f, let Z_f^C(x) := ∑_y ∈ C(x)exp(f(x,y)) and Z_f^-C(x) := ∑_y ∉ C(x)exp(f(x,y)). We have
μΔ^μ_ (f)
= μE[logexp(f(x,y_)-μ v(x,y_))/Z_f^C(x) + Z_f^-C(x)/^μ]
= E[ μlogexp(f(x,y_)-μ v(x,y_))/Z_f^C(x) + Z_f^-C(x)/^μ]
= E[
Z^-C_f(x)/^μ/Z_f^C(x) + Z_f^-C(x)/^μ - v(x,y_)
]
= V(f^μ) - V_
Moreover,
μ V(f^μ)
= E[ μZ_f^μ^-C(x)/Z_f^μ(x)]
= E[ Z_f^μ(x)(-Z_f^μ^C(x)) + (Z_f^μ^C(x))^2/(Z_f^μ(x))^2]
= E[ P_f^μ^2(-C) - P_f^μ(-C) ]
which is negative and bounded, implying V(f^μ) - V_ is decreasing and Lipschitz with μ. Therefore, there is a μ > 0 such that R_(f^μ) < R_(f) if and only if the derivative is positive at μ = 0, i.e., V(f) > V_.
* By (<ref>),
Δ^μ_ (f)
= ∫^μ_0 (V(f^t) - V_) t
= E[ ∫^μ_0
Z^-C_f(x)/^t/Z_f^C(x) + Z_f^-C(x)/^t t
] - μ V_
≥E[ ∫^μ_0
Z^-C_f(x)/^t/Z_f^C(x) + Z_f^-C(x) t
] - μ V_
= (1-^-μ) E[
Z^-C_f(x)/Z_f^C(x) + Z_f^-C(x)] - μ V_
= (1-^-μ) V(f) - μ V_
* If V_=0, we have
Δ^∞_ (f)
= ∫^∞_0 E[
Z^-C_f(x)/^t/Z_f^C(x) + Z_f^-C(x)/^t] t
= E[ ∫^∞_0
Z^-C_f(x)/^t/Z_f^C(x) + Z_f^-C(x)/^t t ]
= E[
log(Z_f^C(x) + Z_f^-C/Z_f^C)
]
= V_(f)
§.§ Proof of Corollary <ref>
Using Proposition <ref> (b), this result follows by solving the following equation
(1-^-μ) V(f) - μ V_≥ 0
It is known that the solution to the inequality u ≤ a + b e^cu in u is u ≤ a - 1/c W(-bc e^ac). Substituting a=η=V(f)/V_=-b and c=-1 yields the desired result:
μ≤ W(-η/e^η)+η
where the RHS is positive only when η>1. A plot of this solution as a function of η is presented below in Figure <ref>.
§.§ Proof of Proposition <ref>
This claim follows from the fact that R_(f^∞)=R_(f)-V_(f) from Proposition <ref> (c).
For equation (<ref>), the first inequality follows from the optimality of . For the second inequality, by definition we have
R_(^∞) + V_() = R_()
≤ R_()
⇒ R_(^∞) ≤ R_() - V_() ≤ R_() - min_f∈F V_(f)
§ ANALYSIS FOR HINGE LOSS AND ℓ^1 LOSS
§.§ Hinge Loss
The margin of a scoring function f at a sample (x,y_) is defined as
m(x,y_, f)
:= max_y∈Y{f(x,y)} - f(x,y_)
We denote its expectation as M(f) = E[m(x,y_,f)].
Given a loss function ℓ:Y×Y →R, the structured hinge loss <cit.> is defined as the margin of the loss augmented scoring function f+ℓ: (x,y)↦ f(x,y) + ℓ(y, y_). Namely,
L_hinge (x,y_, f)
:= m(x,y_, f+ℓ)
Therefore, we can study the impact of constrained inference on the hinge loss via the impact on the margin. Let Δ_margin^μ(f) = M(f) - M(f^μ). We present the following result.
The following results hold:
* For any fixed model f, there exists a μ_0 > 0 such that M(f^μ) ≤ M(f) only if
V_01(f) > V_
where V_01(f) is the zero-one style violation defined as E[1{argmax_y ∈Y f(x,y) ∉ C(x)}], i.e., the probability that the argmax prediction violates the constraint.
* In particular, if the constraint is noise-free, we have
Δ^∞_margin(f)
= E[ max_y ∈Y f(x,y) - max_y∈ C(x) f(x,y) ]
= E[ (max_y ∉ C(x) f(x,y) - max_y∈ C(x) f(x,y))_+ ]
* The derivative of the change of the margin is
μΔ^μ_margin(f) =
-μ M(f^μ)
= - μE [
max_y ∈Y{ f(x,y) - μ v(x,y) } - f(x,y_) + μ v (x,y_)
]
= E[v(x,y_f^μ) - v(x,y_)]
where y_f^μ:= _y ∈Y{ f(x,y) - μ v(x,y)} is the argmax inference output of CCM. Moreover, this derivative is non-increasing with μ. Therefore, a necessary condition for CCM to reduce the margin is
E[v(x,y_f)] = V_01(f)
> V_
* This follows directly by taking the difference between M(f) and M(f^∞).
Due to the discontinuous nature of the argmax inference, the function v(x,y_f^μ) is in general not continuous with μ. On the other hand, if we assume μ↦E[v(x,y_f^μ)] is Lipschitz continuous, the condition proposed in (a) is also sufficient, as in the analysis for cross-entropy.
The impact of constrained inference on the hinge loss can be investigated by substituting f with f+ℓ. For example, a sufficient condition for improving the average hinge loss is V_01(f+ℓ) > V_.
The quantity (max_y ∉ C(x) f(x,y) - max_y∈ C(x) f(x,y))_+ is closely related to the integrality loss defined in <cit.>. It is a hinge-stye surrogate loss function for the zero-one style violation function of f with argmax inference:
P{max_y ∉ C(x) f(x,y) - max_y∈ C(x) f(x,y)
≥ 0
}
= V_01(f)
§.§ ℓ^1 Loss
To facilitate our discussion, we first present the following lemmas that will be useful in this section.
For any constraint C we have the following:
* The derivative of the predicted probability is
μP_f^μ(y|x)
= P_f^μ(y) (P_f^μ(-C|x) - v(x,y))
* The second order derivative of the probability is
μP_f^μ(-C|x)
= P_f^μ(y|x) (
( P_f^μ(-C|x) - v(x,y))^2 + P_f^μ^2(-C|x) - P_f^μ(-C|x)
)
Recall that given any scoring function f, we denote
Z_f^C(x) := ∑_y ∈ C(x)exp(f(x,y))
and
Z_f^-C(x) := ∑_y ∉ C(x)exp(f(x,y))
We also let Z_f(x) = Z_f^C(x) + Z_f^-C(x).
* The pointwise derivative of CCM's l^1 risk with respect to μ is then
μP_f^μ(y|x)
= μ^f(x,y) - μ v(x,y)/Z_f^μ(x)
= 1/(Z_f^μ(x))^2( Z_f^μ(x) (-v(x,y) ^f(x,y) - μ v(x,y)) + Z_f^μ^-C(x) ^f(x,y) - μ v(x,y))
= P_f^μ(y) (P_f^μ(-C) - v(x,y))
where the second equality follows from the fact that μ Z_f^μ(x) = -Z_f^μ^-C(x).
* Based on (a),
^2/^2 μP_f^μ(y|x)
= (P_f^μ(y) (P_f^μ(-C) - v(x,y)))(P_f^μ(-C) - v(x,y))
+ P_f^μ(y) (P_f^μ^2(-C) - P_f^μ(-C))
= P_f^μ(y|x) (
( P_f^μ(-C|x) - v(x,y))^2 + P_f^μ^2(-C|x) - P_f^μ(-C|x)
)
Now we discuss the change in ℓ^1 risk that is defined as Δ^μ(f):=R(f)-R(f^μ).
The following results hold:
* For any fixed model f, there exists a μ_0 > 0 such that R(f^μ) < R(f) if
E[P_f(y_)P_f(-C)]
> E[P_f(y_)v(x,y_)]
* The change of risk can be lower bounded by
Δ^μ(f)
≥1-^-2μ/2E_x[P_f(y_)P_f(-C)] - μ V_
* In particular, if the constraint is noise-free, we have
Δ^∞(f)
≥E_x[P_f(y_)P_f(-C)]
* From Lemma <ref> (a) we know the derivative of the risk with respect to μ at μ=0 is
E[P_f(y_)P_f(-C)] - E[P_f(y_)v(x,y_)]
Further, Lemma <ref> (b) implies this derivative is Lipschitz with respect to μ since for any μ,
| P_f^μ(y|x) (
( P_f(-C|x) - v(x,y))^2 + P_f^μ^2(-C|x) - P_f^μ(-C|x)
) |
≤ 1
Therefore, a sufficient condition for the existence of an μ_0 > 0 such that R(f^μ) < R(f) is that E[P_f(y_)P_f(-C)] > E[P_f(y_)v(x,y_)].
* First, we note for any y and μ that
P_f^μ(y)P_f^μ(-C)
= ^f(x,y)-μ v(x,y) Z_f^-C(x)/^μ/(Z_f^μ(x))^2
≥^f(x,y)-μ v(x,y) Z_f^-C(x)/^μ/(Z_f(x))^2
≥^f(x,y)-μ Z_f^-C(x)/^μ/(Z_f(x))^2
= P_f(y)P_f(-C)^-2μ
Also,
E[P_f(y_)v(x,y_)]
≤E[v(x,y_)]
= V_
Integrating the derivative gives
Δ^μ(f)
≥∫^μ_0 E[
P_f(y_)P_f(-C)^-2t - V_] t
= 1-^-2μ/2E_x[P_f(y_)P_f(-C)] - μ V_
* With noise-free constraints,
P_f^μ(y_)P_f^μ(-C)
= ^f(x,y_) Z_f^-C(x)/^μ/(Z_f^μ(x))^2
≥^f(x,y_) Z_f^-C(x)/^μ/(Z_f(x))^2
= P_f(y_)P_f(-C)^-μ
Integrating both sides gives
Δ^μ(f)
≥∫^μ_0 E[
P_f(y_)P_f(-C)^-t] t
= E_x[P_f(y_)P_f(-C)]
The term E_x[P_f(y_)P_f(-C)] plays a key role in these results, and it measures the average violation of the model f, weighted by the model's confidence of the true label. The first result shows that if this weighted average violation is larger than that of the true data distribution, then CCM is helpful. The last result shows that a model with a larger weighted violation obtains more benefits from strictly constrained inference.
§ PROOFS FROM SECTION 5
§.§ Proof of Theorem <ref>
Recall f_⋆^μ = _g∈F^μ R_(g) + ρ V_(g) is the optimal CCM for the regularized surrogate objective and is the cross entropy risk minimizer in F. According to our notation, ^μ is the constrained model with base model .
By this definition, we have
R_(f_⋆^μ ) +ρ V_(f_⋆^μ)
≤ R_(^μ) +ρ V_(^μ)
Therefore,
R_(f_⋆^μ)
≤ R_(^μ) + ρ (V_(^μ) - V_(f_∞^μ))
≤ R_(^μ) + ρ V_(^μ)
≤ R_() - Δ_^μ() + ρ V_(^μ)
Therefore, a sufficient condition for R_(f_⋆^μ) ≤ R_() is that ρ V_(^μ) < Δ_^μ(). Furthermore, recall for any scoring function f, we define Z_f^C(x) := ∑_y ∈ C(x)exp(f(x,y)) and Z_f^-C(x) := ∑_y ∉ C(x)exp(f(x,y)). We then have
V_(f) - V_(f^μ)
= E[
-log( Z_f^C(x)/Z_f^C(x) + Z_f^-C(x))
] - E[
-log( Z_f^C(x)/Z_f^C(x) + Z_f^-C(x)/^μ)
]
= E[
-log( Z_f^C(x) + Z_f^-C(x)/^μ/Z_f^C(x) + Z_f^-C(x))
]
= ∫^μ_0 E[
Z^-C_f(x)/^t/Z_f^C(x) + Z_f^-C(x)/^t] t
= Δ^μ_(f) + μ V_ (compare to equation (<ref>))
Therefore, Δ^μ_() = V_() - V_(^μ) - μ V_. So, the sufficient condition can be reformulated as
ρ
< V_() - V_(^μ) - μ V_/V_(^μ)
§.§ Proof of Theorem <ref>
We have seen in Theorem <ref> that for any scoring function f, there is a μ > 0 such that R_(f^μ) < R_(f) if and only if V(f) > V_. On the other hand, we know from Lemma <ref> that
V(f_ρ)
≤ V(f_∞) + 1/ρ
Therefore, if
ρ≥1/V_ - V(f_∞)
we must have V(f_ρ) ≤ V_, which implies there is no μ > 0 such that R_((f_ρ)^μ) < R_(f_ρ).
|
http://arxiv.org/abs/2307.04677v1 | 20230710162135 | Practical Trustworthiness Model for DNN in Dedicated 6G Application | [
"Anouar Nechi",
"Ahmed Mahmoudi",
"Christoph Herold",
"Daniel Widmer",
"Thomas Kürner",
"Mladen Berekovic",
"Saleh Mulhem"
] | cs.NI | [
"cs.NI",
"eess.SP"
] |
Practical Trustworthiness Model for DNN in Dedicated 6G Application
This work was partially supported by the DFG Project Nr. 403579441, ”Meteracom: Metrology for parallel THz communication channels.”
Anouar Nechi^1, Ahmed Mahmoudi^1, Christoph Herold^2, Daniel Widmer^1, Thomas Kürner^2, Mladen Berekovic^1,
and Saleh Mulhem^1
^1Institute of Computer Engineering, University of Lübeck, Lübeck, Germany
^2Institute for Communications Technology, Technische Universität Braunschweig, Braunschweig, Germany
^1{name.surname}@uni-luebeck.de, ^2{surname}@ifn.ing.tu-bs.de
August 12, 2023
==========================================================================================================================================================================================================================================================================================================================================================================================
Artificial intelligence (AI) is considered an efficient response to several challenges facing 6G technology. However, AI still suffers from a significant trust issue because of the opaque way it makes predictions. Therefore, there is a need for a method to evaluate AI's trustworthiness in practice for future 6G applications. This paper presents a practical model to analyze the trustworthiness of AI in a dedicated 6G application. In particular, we present two customized deep neural networks (DNNs) to solve the automatic modulation recognition (AMR) problem in Terahertz communications-based 6G technology. Then, a specific trustworthiness model and its attributes, namely data robustness, parameter sensitivity, and security covering adversarial examples, are introduced. The evaluation results indicate that the proposed trustworthiness attributes are crucial for evaluating the trustworthiness of DNNs in this 6G application.
6G communication, Terahertz band, AI, Modulation recognition, Trustworthiness
§ INTRODUCTION
The sixth-generation (6G) network technology aims to outperform current wireless standards by utilizing frequencies above 100 GHz <cit.>. Hence, designing efficient communication systems at these frequencies is far more complex than at lower frequencies. 6G technology increasingly relies on two main pillars: Terahertz communications (THzCom) and Machine Learning (ML). While 6G has made a further step toward THzCom with the IEEE 802.15.3d standard <cit.>, ML is recommended as a novel solution for 6G performance optimization <cit.>. In other words, the ultra-wide THz band ranging from 0.1 to 10 THz is foreseen as an excellent candidate for 6G, whereas ML has proved its efficiency in solving technical problems in communication systems <cit.>.
Furthermore, ML is often applied as a black box to problems in wireless communications. For instance, Deep Neural Networks (DNNs), as a subset of ML, have been used in a black-box manner to solve the automatic modulation recognition problem <cit.>. Therefore, there is a pressing need to understand the risk of deploying such an Artificial Intelligence (AI) algorithm. According to the Independent High-Level Expert Group on AI <cit.>, the only way to achieve the maximum benefits of AI is to ensure its trustworthiness during all steps of development and use. The concept of trustworthy AI can be perceived as a response to mitigate the risks of deploying AI <cit.>. Several works have proposed definitions of system trustworthiness <cit.>, <cit.> or specified definitions of trustworthy AI <cit.>, <cit.>. However, these definitions are still general and introduce principles rather than practical approaches <cit.>. In practice, a model for evaluating trustworthiness in a dedicated 6G application is still missing. To our knowledge, the literature comprises studies investigating AI, especially DNNs, in communication settings such as THzCom-based 6G technology. Nevertheless, the open literature on applying DNNs in the 6G domain has not yet addressed the problem of trustworthiness evaluation.
§ RESEARCH METHODOLOGY & BACKGROUND
Our proposed research methodology is carried out as follows:
* We first define one of the 6G problems. In particular, we choose a THzCom-based automatic modulation recognition (AMR) problem to demonstrate.
* We propose two customized DNNs to solve this dedicated problem.
* Then, we study trustworthiness attributes that need to be considered for this problem, and we present the so-called trustworthiness model based on these attributes.
* Finally, we apply this model as a practical approach to evaluate the trustworthiness of the customized DNNs.
The focus of this research methodology is not on developing DNNs to solve the AMR problem but on using the customized DNNs as practical examples to evaluate their trustworthiness in the 6G environment.
In the following, we introduce THzCom-based AMR as one of the 6G problems, and we review the available DNN-based solutions for such a problem. Then, we investigate the available trustworthiness models for DNNs.
§.§ Deep Learning-based AMR for THz Communication
In modern communication systems, a transmitter can use a pool of modulation schemes to control data rate and bandwidth usage. While the transmitter adaptively selects the modulation type, the receiver may or may not know the modulation type. This problem is usually perceived as a classification problem, where the receiver aims at recognizing and classifying the modulation. To solve such a problem, modulation information can be supplied in each signal frame, allowing the receiver to identify the modulation type and react accordingly. However, this approach has become more expensive since modern wireless networks are very heterogeneous, and the number of users is increasing significantly. Therefore, such an approach may not be efficient enough in real-world scenarios as it degrades spectrum efficiency due to the additional information in each signal frame <cit.>.
AMR has been proposed to detect the modulation scheme of received signals without any potential overhead in the network protocol. Ultimately, the signals are demodulated, and the received data is recovered correctly. Further, conventional AMR approaches require a huge amount of computation or experts’ feature extraction experience <cit.>.
To overcome these issues, deep learning (DL) is considered a powerful tool that can be used for AMR to provide high classification accuracy. DL does not require prior pre-processing or feature extraction, making it more efficient than conventional approaches.
For instance, Convolutional Neural Networks (CNNs) were used in <cit.> to extract features from raw I/Q data and perform classification. In <cit.>, Recurrent Neural Network (RNN)-based AMR has been proposed to extract sequence-correlated features of I/Q signal components and amplitude/phase signal components to recognize modulation schemes. Other works employed RNNs to estimate signal parameters and correct signal distortions like Carrier Frequency Offset (CFO) and multipath fading <cit.>. The results revealed that the proposed RNN model provides not only good accuracy in signal distortion estimation but also outperforms many DL methods in terms of classification accuracy.
§.§ AI Trustworthiness
Several works have investigated the concepts of trustworthiness and dependability to determine their attributes. In system design, availability, reliability, safety, integrity, and maintainability are defined as dependability attributes <cit.>. Nevertheless, this definition does not cover all security attributes, as it excludes confidentiality. In <cit.>, trustworthiness is defined as a twin of dependability that includes the following attributes: reliability, safety, maintainability, availability, integrity, and confidentiality. This definition considers security as one of the dependability attributes. In AI-based system design, the above definitions of trustworthiness do not cover recent AI requirements. AI is highly data-dependent and needs dedicated attributes for its trustworthiness. Therefore, new attributes of trustworthiness have been introduced, mainly security, robustness, safety, transparency, and fairness <cit.>. However, these attributes are general and not specified for a dedicated AI application. To determine the trustworthiness attributes of a DNN performing AMR in THzCom-based 6G technology, the interaction between the DNN and its host environment needs to be carefully investigated and described.
§.§ Paper's Contribution
As 6G is still in an early stage of its development, it is the right time to consider the trustworthiness of DL deployed in this technology. This paper proposes a trustworthiness model to analyze DNNs designed for recognizing modulation schemes in THzCom-based 6G technology. To the best of our knowledge, this work introduces the first practical approach to evaluating the trustworthiness of DNNs designed for AMR in THzCom-based 6G technology.
§ DEEP LEARNING-BASED AUTOMATIC MODULATION RECOGNITION
§.§ Synthetic THz dataset
A dataset of transmitted I/Q samples has been used for the AMR task. The THz dataset contains seven modulation schemes: BPSK, QPSK, 8PSK, QAM16, QAM64, 8APSK, and OOK. Each modulation scheme consists of 26 Signal-to-Noise-Ratio (SNR) levels with 4096 examples per level. The total number of samples in the dataset is 745,472. It was generated using the link-level simulation module of the Simulator for Mobile Networks (SiMoNe) <cit.>. The link-level simulation module was developed to simulate point-to-point communication links under the influence of realistic propagation effects in accordance with the IEEE 802.15.3d standard <cit.>. The simulated transmission was performed using a Root-Raised-Cosine (RRC) transmit pulse and an AWGN channel. The Nyquist bandwidth is 880 MHz with an oversampling factor of 8, and no channel coding technique was applied. All samples share the same representation, which simplifies data processing. The THz dataset samples have a 1024 × 2 shape (I/Q representation).
§.§ Two DNN Models for Automatic Modulation Classification
A DNN consists of multiple layers that process input data and generate a set of class probabilities (the classification). Each layer comprises a set of parameters (weights and biases) that, in conjunction with an activation function, produce the layer output. In the following, two DNNs are trained on the proposed THz dataset to classify THz modulation schemes. The resulting DNN classifiers use a 32-bit floating-point (FP) parameter format.
§.§.§ CNN for AMR
CNNs have been widely used for computer vision problems <cit.>. A CNN model can learn directly from raw data without prior expert feature extraction or pre-processing. To benefit from this property in AMR, we construct a CNN with three convolutional layers. Each layer is followed by a batch normalization layer, a ReLU activation function, and a MaxPooling layer. We feed the raw I/Q samples of each radio signal into the CNN model. The extracted feature maps are then forwarded to the fully connected region of the network for classification, where we employ the Scaled Exponential Linear Unit (SeLU) activation function and an Alpha dropout. The proposed CNN classifier has 555,287 parameters, and its layout is shown in Table <ref>.
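Since the exact channel widths, kernel sizes, and hidden sizes appear only in Table <ref>, the following PyTorch sketch is an assumed reconstruction of the described layout (three Conv-BatchNorm-ReLU-MaxPool blocks over the 2-channel, 1024-sample I/Q input, followed by a SeLU/Alpha-dropout fully connected head); the concrete numbers are illustrative assumptions, not the exact network used here.

import torch
import torch.nn as nn

class AMRCNN(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv1d(c_in, c_out, kernel_size=7, padding=3),
                nn.BatchNorm1d(c_out),
                nn.ReLU(),
                nn.MaxPool1d(2),
            )
        # input: (batch, 2, 1024) -- I and Q treated as two channels
        self.features = nn.Sequential(block(2, 32), block(32, 48), block(48, 64))
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 128, 128),   # 1024 -> 512 -> 256 -> 128 after three MaxPool layers
            nn.SELU(),
            nn.AlphaDropout(0.1),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = AMRCNN()
logits = model(torch.randn(4, 2, 1024))  # (4, 7) class scores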
The CNN classifier achieves, on average, 68.8% accuracy across all SNR levels. Fig. <ref> shows the confusion matrix of the CNN for each modulation scheme. The CNN classifier struggles with the higher-order modulation schemes, namely 16QAM and 64QAM, achieving only 56.2% and 55.9% correct predictions, respectively. In contrast, the low-order modulation schemes appear to be the least confused, achieving 84.6% for BPSK and 93.1% for OOK.
§.§.§ ResNet for AMR
Deep Residual Networks (ResNets) are enhanced versions of CNNs. A ResNet uses skip connections to process features at multiple scales and depths throughout the network. Moreover, it is possible to use wider layers, train effectively with fewer epochs, and achieve better results compared to a traditional CNN <cit.>. We construct a ResNet layout similar to <cit.> for radio signal classification. Fig. <ref> shows the proposed ResNet architecture. It consists of six residual units, each with two skip connections, followed by a fully connected region with the same configuration as the proposed CNN; the resulting classifier has only 159,015 parameters.
The ResNet classifier achieves 70.8% accuracy across all SNR levels, i.e., 2% higher accuracy than the CNN while using fewer parameters. This result emphasizes the effectiveness of ResNets over conventional CNN classifiers. Fig. <ref> shows the confusion matrix of the ResNet. Only a 16.8% confusion between 16QAM and 64QAM is noted. For the remaining modulation schemes, we observe a slight improvement in accuracy.
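A single residual unit of the kind described (two skip connections per unit) might look as follows; the channel width and kernel size are assumptions, since the precise configuration is only given in Fig. <ref>.

import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    """One residual unit with two skip connections: two convolutional blocks,
    each bypassed by an identity shortcut."""
    def __init__(self, channels=32, kernel_size=5):
        super().__init__()
        pad = kernel_size // 2
        self.conv1 = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels), nn.ReLU(),
        )
        self.conv2 = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels), nn.ReLU(),
        )

    def forward(self, x):
        x = x + self.conv1(x)   # first skip connection
        x = x + self.conv2(x)   # second skip connection
        return x

unit = ResidualUnit()
out = unit(torch.randn(4, 32, 1024))  # shape preserved: (4, 32, 1024)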
§ TRUSTWORTHINESS: MODEL & ATTRIBUTES
To determine the trustworthiness attributes of DNN regarding AMR in THzCom-based 6G technology, we first formulate DNN as a function of multiple inputs and parameters, and we link this formulation and the trustworthiness attributes as follows.
The layer i of a DNN can be seen as an operation f_i[p_i](x_i-1), where p_i represents a set of layer i's parameters p_i=(W^j_i,b_i) including j weights and one bias, and x_i-1 is the output of the previous layer. A composition of these operations defines the DNN classifier f_DNN as,
f_DNN(x_in; p)=f_l[p_l] ∘⋯∘ f_2[p_2] ∘ f_1[p_1](x_in)
where x_in is an input signal, p is the set of DNN parameters p={p_1, ..., p_l}, and l is the number of DNN layers. The values of the DNN parameters and the model hyperparameters are determined during the training phase based on the THz dataset. In the prediction phase, the DNN classifier f_DNN(x_in; p) can be viewed as a function of two inputs: the trained parameters p and the signal x_in as an input variable.
Therefore, the proposed trustworthiness model of such a DNN considers only the input signals x_in and the DNN parameters p={p_1, ..., p_l}. Other building blocks of the DNN, such as the activation functions, are considered reliable and trustworthy. This model helps to explain how the DNN interacts with the THzCom environment and the user. Fig. <ref> illustrates the three trustworthiness attributes that need to be considered in DNN-based AMR, described as follows:
* Data robustness analysis helps to understand when the DNN classifiers exhibit low accuracy due to environmental variation. It aims at evaluating such variations and their impact on the quality of DNN classifiers <cit.>. Precisely, DNN robustness analysis investigates a noisy environment effect on the input signals x_in and its impact on the DNN classification accuracy. Here, different SNR levels are applied to input signal x_in, and then, the drop in the DNN accuracy is observed and estimated.
* Parameter sensitivity analysis provides a deep understanding of the DNN's reliability, and in particular of the causes of unreliable classification. Reliability can be evaluated by analyzing the sensitivity of the DNN parameters p={p_1, ..., p_l} for given signals. Reliability means that the DNN classifier should perform with its intended accuracy and without failure (where failure denotes unreliable classification with less than 50% accuracy). The proposed sensitivity analysis follows a random bit-flipping model <cit.>.
* Adversarial examples indicate the impact of deterministic signal changes on the DNN classifiers introduced by an attacker. Adversarial example attacks can be performed for Security evaluation. Here, the attacker chooses and generates inputs x_in to confuse DNN classifiers during the inference phase, resulting in misclassification <cit.>.
Our trustworthiness model excludes the transparency of DNN from its attributes as AMR doesn't use private data, and it also ignores DNN fairness as the used dataset is balanced. The misclassification leads to selecting an incorrect scheme, and the received signals cannot be demodulated. This event and its consequence are already involved in the reliability attribute. Therefore, DNN safety can be seen as a subset of reliability in our application.
§ TRUSTWORTHINESS ANALYSIS OF DNN FOR AMR
In this section, we analyze the trustworthiness of the proposed CNN and ResNet by using our trustworthiness model and its attributes.
§.§ Data Robustness Analysis
The impact of environmental variation on the trained DNN model is considered a significant factor of trustworthiness. In other words, the trained DNN model should be aware of the diverse data distribution regarding different environmental scenarios <cit.>. In this context, the impact of a noisy environment on DL-based AMR is evaluated. This problem is critical as it affects the data robustness of the DNN model.
SNR is a crucial metric in any communication system. SNR quantifies the environmental variation by indicating the signal quality concerning a communication channel noise. To analyze the data robustness of DL-based AMR, the following steps are carried out: (1) The dataset is split into a training and testing set with consideration of the various SNR levels to maintain a balanced dataset, (2) DNN models are trained based on the resulting dataset, and (3) the accuracy of the proposed DNN models is evaluated considering the various SNR levels.
Further, we apply the mentioned steps to the proposed CNN and ResNet models. Fig. <ref> shows that data samples with low SNR ranging from -20 to -4 dB are hard to classify and score a maximum accuracy of 50%. With such a noise level, the constellation of the received signals is random and does not form meaningful clusters to distinguish between the different modulation schemes. It's worth noting that the model accuracy increases when the SNR increases from -2 dB to 10 dB. The model accuracy attains 99% as SNR approaches 10 dB. The highest model accuracy is achieved starting from 10 dB.
Moreover, the ResNet model exhibits better accuracy than the CNN model in the SNR interval of -2 dB to 10 dB, while the accuracies of both models closely match outside this interval. As a result, ResNet-based AMR is more robust than CNN-based AMR with respect to noisy channel variation.
§.§ Sensitivity Analysis
Sensitivity analysis determines vulnerable bits that significantly decrease the classification accuracy when flipped. It relies on a bit-flipping model of the AI parameters and aims to provide a deeper understanding of the AI's behavior, giving some hints towards explaining its decision-making.
To conduct the sensitivity analysis of CNN and ResNet classifiers, a single-bit flip is randomly introduced to the DNN's parameters <cit.>. Both the bit position and the targeted parameter are uniformly distributed. First, we randomly inject single-bit faults 1000 times at different bit positions and parameter locations of each layer of the CNN classifier. In the case of the ResNet, we randomly inject single-bit faults in the residual block, convolution, and dense layers. Nevertheless, the injected faults in the convolution and dense layers are performed similarly to the CNN. However, the faults in the residual block are randomly injected at different bit positions, parameter locations, and layers. The above fault injection experiments are conducted during inference.
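The following is a minimal sketch of the random single-bit fault model applied to one 32-bit FP parameter; the parameter value and the demonstrated bit positions are illustrative.
```python
import numpy as np

rng = np.random.default_rng(0)

def flip_bit(value, bit=None):
    """Flip one bit of a 32-bit float (bit 31: sign, bits 30-23: exponent, bits 22-0: mantissa)."""
    if bit is None:
        bit = int(rng.integers(0, 32))                 # uniformly chosen bit position
    as_int = np.array([value], dtype=np.float32).view(np.uint32)
    as_int ^= np.uint32(1) << np.uint32(bit)           # XOR flips the selected bit
    return as_int.view(np.float32)[0]

w = np.float32(0.731)                                  # an illustrative trained weight
for b in (3, 25, 30):
    print(b, w, "->", flip_bit(w, b))                  # flipping bit 30 blows the value up
```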
The single-bit faults injected into the 32-bit FP parameters indicate that the exponent (bit positions 23 to 30) is more sensitive than the mantissa (bit positions 0 to 22). This is consistent with well-known results in the literature.
To better understand the exponent sensitivity, we divide the vulnerable exponent bits into two categories: the first category includes the vulnerable bits resulting in misclassification (unreliable classifier with accuracy lower than 50%), and the second covers the vulnerable bits resulting in accuracy degradation.
Fig. <ref>-a illustrates the impact of single-bit faults on the CNN classifier for the convolution layers C1, C2, C3 and the dense layers D1, D2, D3. Unreliable classification is observed when flipping bit 25 in C1 and bit 30 in C1, C2, C3, D2, and D3. However, flipping bit 30 in D1 results only in accuracy degradation. It should be noted that the faults injected in the remaining layers cause insignificant accuracy degradation. Fig. <ref>-b shows the impact of bit flipping on all layers of the ResNet classifier. Bit position 30 (i.e., the 31st bit) is more sensitive than the others, as flipping it causes unreliable classification across all layers. The remaining vulnerable bits cause only an accuracy drop.
As a result, flipping the vulnerable 30th-bit causes the misclassification of both classifiers. Other vulnerable bits show only accuracy degradation.
§.§ Security Analysis
In <cit.>, several neural network models were shown to be vulnerable to adversarial examples, where the attacker generates inputs that lead to misclassification. These inputs differ only slightly from original inputs that are classified correctly, yet they are likely to cause such misclassification. Adversarial examples mainly arise from “linear behavior in high-dimensional spaces” <cit.>. This observation has motivated many efficient adversarial example attacks such as the Fast Gradient Method <cit.> and Projected Gradient Descent <cit.>.
To analyze the impact of adversarial examples, we launch eight attacks to generate adversarial examples against the investigated CNN and ResNet classifiers using the Adversarial Robustness Toolbox (v1.2.0) <cit.>. We set up each attack using the predefined sets of attack parameters. Then, we perform the same attacks on both classifiers. Table <ref> shows the attack results of the Fast Gradient Method (FGM) <cit.>, Projected Gradient Descent (PGD) <cit.>, NewtonFool <cit.>, DeepFool <cit.>, HopSkipJump <cit.>, Zeroth Order Optimization (Zoo) <cit.>, and the Carlini & Wagner methods (C&W) <cit.> over L_2 and L_∞. The adversarial example resistance (AER) quantifies how often a model still classifies correctly despite the generated adversarial examples. For instance, both classifiers exhibit comparably high AER against C&W over L_2, while the adversarial examples generated by PGD severely degrade both. Generally, the two models attain different AER values for the respective attacks. Thus, which model is more resistant against adversarial examples remains dependent on the chosen attacks.
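For illustration, the core of the fast-gradient attack family can be sketched in a few lines of plain PyTorch (the paper itself relies on the ART implementations; the perturbation budget ε below is an illustrative choice):
```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, labels, eps=0.05):
    """Generate L_inf fast-gradient-sign adversarial examples for I/Q inputs."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), labels)
    loss.backward()
    # One step in the direction that increases the loss, bounded by eps in L_inf norm.
    return (x_adv + eps * x_adv.grad.sign()).detach()

# Illustrative usage with a trained classifier and a test batch (x, y):
# x_adv = fgsm_attack(model, x, y, eps=0.05)
# aer = (model(x_adv).argmax(dim=1) == y).float().mean()   # adversarial example resistance
```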
§.§ Trustworthiness Evaluation
According to the above analysis, the trustworthiness level of the investigated classifiers can be evaluated as follows. First, when the SNR ranges from 0 to 30 dB, both classifiers are robust against environmental variations. Here, ResNet shows greater robustness than the CNN classifier. Second, the sensitivity analysis of the classifier’s parameters indicates that flipping the vulnerable 30th-bit results in unreliable classifications. Finally, both classifiers show different levels of resistance against selected adversarial example attacks without yielding a clear verdict for either DNN. Future work may identify suitable attacks to provide the required metrics.
§ CONCLUSION
In this paper, we introduced a methodology to build a practical trustworthiness model of Deep Neural Networks (DNNs) dedicated to one of the 6G applications. The need for such a model is significant, as 6G technology requires higher levels of reliability and security compared to prior generations.
In particular, we constructed two DNN classifiers addressing the automatic modulation recognition (AMR) problem in a THzCom-based 6G environment. Then, we applied our trustworthiness model to analyze the classifiers w.r.t. attributes chosen to meet this environment: robustness, DNN parameter reliability, and DNN adversarial example resistance. Based on our experimental results, we conclude that our trustworthiness model is a suitable approach to analyze the trustworthiness of DNNs used for AMR in THzCom-based 6G technology.
|
http://arxiv.org/abs/2307.04734v1 | 20230710175005 | Quandle coloring quivers and 2-bridge links | [
"Tirasan Khandhawit",
"Korn Kruaykitanon",
"Puttipong Pongtanapaisan"
] | math.QA | [
"math.QA"
] |
The quandle coloring quiver was introduced by Cho and Nelson as a categorification of the quandle coloring number. In some cases, it has been shown that the quiver invariant offers more information than other quandle enhancements. In this paper, we compute the quandle coloring quivers of 2-bridge links with respect to the dihedral quandles.
§ INTRODUCTION
A quandle is an algebraic structure whose axioms are inspired by the Reidemeister moves on link diagrams <cit.>. There is a natural quandle Q() associated to each link called the fundamental quandle, which gives rise to an invariant of the link. In fact, Q() is a complete invariant when the link has one component <cit.>. Studying presentations of Q() can be difficult, and therefore, it is common to extract some information by considering the set of homomorphisms from Q() to a different quandle X. The cardinality of such a set |Hom(Q(),X)| is often called the quandle coloring number, which has been investigated by many quandle theorists over the years.
Since a set contains more information in addition to its cardinality, the quandle coloring number can be enhanced to give a stronger link invariant. For more details on some examples of useful enhancements such as cocycle and module enhancements, the readers are encouraged to consult <cit.>. This paper concerns a particular enhancement introduced by Cho and Nelson called the quandle coloring quiver 𝒬() <cit.>. Roughly, elements of Hom(Q(),X) can be thought of as vertices scattered all over the place, where each vertex represents an assignment of a coloring to . The quiver-valued invariant 𝒬() gives a way to organize these vertices into a directed graph.
For some particular choices of target quandles X appearing in Hom(Q(),X), the quandle coloring quivers have been determined for various families of links <cit.>. It has also been shown that in some cases, the quiver gives more information than cocycle and module enhancements <cit.>. In this paper, we calculate the quandle quivers for all 2-bridge links with respect to any choice of dihedral quandle. This is particularly interesting when we use the dihedral quandle ℤ_n^dih of composite order n since the quandle coloring quiver is determined by the coloring number when n=p_1p_2⋯ p_k, where p_i is prime <cit.>. To demonstrate this, we give some more examples where our computations offer more information than the quandle counting invariants in the final section.
§.§ Organization
This paper is organized as follows. In Section <ref>, we discuss basic definitions from quandle theory and knot theory. In Section <ref>, we calculate the quandle coloring number of 2-bridge links. The coloring number is needed as it is the number of vertices of the quandle coloring quiver invariant. In Section <ref>, we prove our main result. Before stating the result in full generality, we discuss the case when n is a power of a prime for ease of reading. We end the paper with more examples where our quiver computations give proper quandle enhancements.
§ PRELIMINARIES
In this section, we review some relevant terminologies.
§.§ Quandles
A quandle is a nonempty set X equipped with a binary operation ▷ : X× X→ X such that the following properties hold:
Q1: x ▷ x=x for all x∈ X.
Q2: The map β_y : X→ X, given by β_y(x)= x ▷ y, is invertible for all y∈ X.
Q3: (x ▷ y ) ▷ z = (x ▷ z) ▷ (y ▷ z) for all x,y,z∈ X.
Since β_y is invertible, we have a bijection β_y^-1:X→ X. Define ▷^-1: X× X → X by x ▷^-1 y := β_y^-1(x). If β_y=β_y^-1 for all y∈ X, or equivalently ▷=▷^-1, then the quandle is said to be involutory. In this paper, we primarily work with dihedral quandles, which are in fact involutory quandles.
For each n∈ℕ, the operation x ▷ y := 2y-x (mod n) on ℤ_n={0,1,2,…,n-1} defines the dihedral quandle of order n. Denote by ℤ_n^dih the dihedral quandle of order n. For all x,y∈ℤ_n, we have β_y∘β_y(x)≡β_y(2y-x)≡ 2y-(2y-x)=x (mod n). From here we see that dihedral quandles are involutory.
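These axioms are easy to check numerically; the following is a small sketch that verifies Q1-Q3 and the involutory property for ℤ_n^dih by brute force (n = 12 is an arbitrary choice):
```python
from itertools import product

def dihedral_op(x, y, n):
    """Quandle operation of Z_n^dih: x |> y = 2y - x (mod n)."""
    return (2 * y - x) % n

def is_quandle(op, elements):
    Q1 = all(op(x, x) == x for x in elements)
    Q2 = all(len({op(x, y) for x in elements}) == len(elements) for y in elements)
    Q3 = all(op(op(x, y), z) == op(op(x, z), op(y, z))
             for x, y, z in product(elements, repeat=3))
    return Q1 and Q2 and Q3

n = 12
elems = range(n)
op = lambda x, y: dihedral_op(x, y, n)
print(is_quandle(op, elems))                                             # True
print(all(op(op(x, y), y) == x for x, y in product(elems, repeat=2)))    # involutory: True
```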
Often, it is useful to study maps between quandles that behave well with the quandle axioms.
A quandle homomorphism from (X,▷_X) to (Y,▷_Y) is a map f:X→ Y such that f(x ▷_X y)=f(x) ▷_Y f(y). Denote by Hom(X,Y) the set of all quandle homomorphisms from X to Y.
Let X be a quandle. A quandle endomorphism on X is a quandle homomorphism from X to itself. A quandle automorphism on X is a quandle endomorphism on X that is also a bijection. Denote by (X) the set of all quandle endomorphisms on X and by (X) the set of all quandle automorphisms on X.
Under the usual composition, (X) has monoid structure, whereas (X) has group structure.
There is a particularly natural quandle that can be defined from a link diagram.
Let be an oriented link and D be an oriented diagram of with n strands, x_1,x_2,…,x_n. The fundamental quandle of D is the quandle freely generated by x_1,x_2,…,x_n with relations from each crossing as in Figure <ref>. The fundamental quandle of an oriented link is defined to be the fundamental quandle of an oriented diagram of .
A basic way to study homomorphisms between quandles is to count how many there are.
Let L be a link and X be a finite quandle. An X-quandle coloring of L is a quandle homomorphism Q(L)→ X. The X-quandle counting invariant of the link L is the number of X-quandle colorings of L, i.e. the size of the set Hom(Q(L),X). This cardinality is also called the quandle coloring number.
For any link L and any quandle X, fix x∈ X. Then ψ_x : Q(L)→ X, given by ψ_x(x_i):=x, defines a quandle homomorphism, since at each crossing we have ψ_x(x_i ▷ x_j)=x=x ▷ x= ψ_x(x_i) ▷ψ_x(x_j). Such quandle colorings are called trivial quandle colorings. In general, we have {ψ_x:x∈ X}⊆Hom(Q(L),X), so |Hom(Q(L),X)|≥ |X|.
Since the set of homomorphisms contains more information than its cardinality, various quandle enhancements have been defined. The following concept is particularly relevant to this paper.
Let X be a finite quandle. Fix S⊆(X,X). The X-quandle coloring quiver _X^S() of a link with respect to S is the direct graph with vertex set (Q(),X) and directed edges ψ_1 f→ψ_2 whenever ψ_2=f∘ψ_1 and f∈ S. When S=(X,X), we denote the corresponding quiver by simply _X() and call it the full quandle coloring quiver.
Denote by (K_n,m) the directed graph with n vertices where every vertex has m directed edges from itself to each vertex. For each graph G and H, define G ∇_mH to be the disjoint union graph G⊔ H with additional m directed edges from every vertex of H to each vertex of G.
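As a small illustration of the quiver definition, the following sketch computes the full ℤ_3^dih-coloring quiver of the trefoil by brute force, using its standard three-arc diagram (the enumeration of endomorphisms is also brute force, so this only scales to very small quandles):
```python
from itertools import product

n = 3                                    # the dihedral quandle Z_3^dih
op = lambda x, y: (2 * y - x) % n

# Colorings of the trefoil: arcs x1, x2, x3 with one relation per crossing.
colorings = [c for c in product(range(n), repeat=3)
             if op(c[0], c[1]) == c[2] and op(c[1], c[2]) == c[0] and op(c[2], c[0]) == c[1]]

# Endomorphisms of Z_n^dih, found by brute force over all maps Z_n -> Z_n.
endos = [f for f in product(range(n), repeat=n)
         if all(f[op(x, y)] == op(f[x], f[y]) for x, y in product(range(n), repeat=2))]

# Quiver: one vertex per coloring, one directed edge psi -> f∘psi per endomorphism f.
edges = [(c, tuple(f[a] for a in c)) for c in colorings for f in endos]
print(len(colorings), len(endos), len(edges))     # 9 colorings, 9 endomorphisms, 81 edges
```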
§.§ Rational Tangles and Links
An n-string tangle is a collection of n properly embedded disjoint arcs in the 3-ball. In this paper, we will work exclusively with 2-string tangles. Thus, we will simply refer to 2-string tangles as tangles for brevity. A tangle can also be defined diagrammatically.
A tangle diagram is a portion of a link diagram surrounded by a circle intersecting the link diagram in four points labelled NE,NW,SE,SW. Two tangle diagrams are equivalent if and only if one can be obtained from another by Reidemeister moves in finitely many steps inside the surrounding circle while the four points remain fixed.
We will now give a definition of rational tangles. We note that there are other ways to define the equivalent object in the literature.
Let [0] denote the horizontal tangle shown in Figure <ref> (left). For an integer p≠ 0, let [p] denote the tangle obtained from twisting the NE and SE endpoints p times, where the sign is positive (resp. negative) if the overstrand has positive (resp. negative) slope (see Figure <ref>).
Given two tangles T_1 and T_2, we can connect the two tangles into a new one. Let us denote by T_1T_2 the tangle obtained from reflecting T_1 along NW-SE line and connecting it to T_2 from the left. (see Figure <ref>) Note that in general, T_1T_2≠ T_2T_1.
Let N≥ 1, and p_1,p_2,…,p_N be integers. Let [p_1p_2… p_N] be the tangle T_N, where T_1=[p_1] and T_j=T_j-1[p_j] for 1≤ j≤ N. This kind of tangle is called a rational tangle.
To each rational tangle [p_1p_2… p_N], there is an associated rational number
p_N + 1/(p_{N-1} + ⋯ + 1/(p_2 + 1/p_1))
that is a complete tangle invariant. That is, Conway showed that two rational tangles are equivalent if and only if their rational numbers are equal <cit.>.
The numerator closure of a rational tangle yields a rational link. It can be shown that rational links are precisely two-bridge links. Let us denote by (p_1p_2… p_N), or (p_1p_2… p_N) the closure of the rational tangle [p_1p_2… p_N] (see Figure <ref>).
We note that any rational tangle can be put in a canonical form so that each p_i in (p_1p_2… p_N) has the same sign <cit.>. Since ((-p_1)(-p_2)… (-p_N)) is the mirror image of (p_1p_2… p_N), their involutorized fundamental quandles are isomorphic. Hence, their quandle enhancements, e.g. coloring number, quiver, are isomorphic. From now on, we shall assume that p_1,p_2,…,p_N>0.
§ THE QUANDLE COLORING NUMBERS
The main goal of this section is to determine the number of colorings of 2-bridge links by dihedral quandles. We begin by discussing a presentation for the fundamental quandle of 2-bridge links:
Q((p_1p_2… p_N)) =⟨ x_j,i for 1≤ j≤ N and 1≤ i≤ p_j+2|
x_j,i = x_j,i-2 ▷^±1 x_j,i-1 for 1≤ j≤ N and 3≤ i≤ p_j+2,
x_2,1=x_1,2, x_2,2=x_1,p_1+2,
x_j,1=x_j-2,p_j-2+1, x_j,2=x_j-1,p_j-1+2 for 3≤ j≤ N,
x_N,p_N+1=x_1,1 , x_N,p_N+2=x_N-1,p_N-1+1⟩,
Since dihedral quandles are involutory, i.e. ▷=▷^-1, for any quandle homomorphism ψ:Q((p_1p_2… p_N))→ℤ_n^dih we have the following relations
ψ(x_j,i )= ψ(x_j,i-2) ▷ψ(x_j,i-1) for 1≤ j≤ N and 3≤ i≤ p_j+2,
ψ(x_2,1)=ψ(x_1,2), ψ(x_2,2)=ψ(x_1,p_1+2),
ψ(x_j,1)=ψ(x_j-2,p_j-2+1), ψ(x_j,2)=ψ(x_j-1,p_j-1+2) for 3≤ j≤ N,
ψ(x_N,p_N+1)=ψ(x_1,1 ), ψ(x_N,p_N+2)=ψ(x_N-1,p_N-1+1).
Moreover, any map ψ : {x_j,i| 1≤ j≤ N and 1≤ i≤ p_j+2}→ satisfying the relations extends to a unique quandle homomorphism ψ̃: Q((p_1p_2… p_N))→.
Next, we prove an important proposition relating the colorings of two generating strands. This generalizes Proposition 2.4 of <cit.>.
For a rational tangle [p_1p_2… p_N], let Δ_j be the numerator of the rational number p_1p_2… p_j and also denote by Δ := Δ_N. Note that Δ_j satisfies recurrence relation
Δ_0 := 1, Δ_1 = p_1, Δ_j = p_j Δ_j-1 + Δ_j-2 .
In fact, the number Δ is the determinant of (p_1p_2… p_N) (see <cit.>).
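The recurrence is straightforward to evaluate; as a quick sanity check, the following sketch compares Δ_N computed from the recurrence with the numerator of the continued fraction above for an arbitrary choice of the p_i:
```python
from fractions import Fraction

def delta(ps):
    """Numerators of [p_1 p_2 ... p_N] via the recurrence D_j = p_j D_{j-1} + D_{j-2}."""
    d_prev, d = 1, ps[0]                 # Delta_0 = 1, Delta_1 = p_1
    for p in ps[1:]:
        d_prev, d = d, p * d + d_prev
    return d

def continued_fraction(ps):
    """p_N + 1/(p_{N-1} + ... + 1/p_1) as an exact fraction."""
    val = Fraction(ps[0])
    for p in ps[1:]:
        val = p + 1 / val
    return val

ps = [2, 3, 2]                           # the rational tangle [2 3 2], an arbitrary example
print(delta(ps), continued_fraction(ps).numerator)   # both print 16
```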
For ψ∈(Q((p_1p_2… p_N)),), we have
Δψ(x_1,1)≡Δψ(x_1,2) (mod n).
Since the term of the form aψ(x_1,2)-(a-1)ψ(x_1,1) appears frequently in the proofs, we let [a]:=aψ(x_1,2)-(a-1)ψ(x_1,1). We observe that for all 1≤ j≤ N, we have ψ(x_j,p_j+1)=p_jψ(x_j,2)-(p_j-1)ψ(x_j,1), and ψ(x_j,p_j+2)=(p_j+1)ψ(x_j,2)-p_jψ(x_j,1).
Claim: For all 1≤ j≤ N, we have ψ(x_j,p_j+1)=[Δ_j] and ψ(x_j,p_j+2)=[Δ_j+Δ_j-1].
We prove the claim by induction. For base case j=1, we note that ψ(x_1,p_1+1)=[p_1]=[Δ_1], and ψ(x_1,p_1+2)=[p_1+1]=[Δ_1+Δ_0]. Let 1≤ j≤ N. Suppose that the claim hold true for all positive integer less than j.
Case 1 j=2. We have ψ(x_2,1)=ψ(x_1,2)=[1] and ψ(x_2,2)=ψ(x_1,p_1+2)=[p_1+1]. This gives ψ(x_2,p_j+1)=p_2 [p_1+1]-(p_2-1)[1]=[p_2p_1+1]=[Δ_2], and ψ(x_2,p_j+2)=(p_2+1)[p_1+1]-p_2[1]=[p_2p_1+p_1+1]=[Δ_2+Δ_1].
Case 2 j≥ 3. As j-1,j-2≥ 1, we apply inductive hypothesis and obtain
ψ(x_j,1) = ψ(x_j-2,p_j-2+1) = [Δ_j-2],
ψ(x_j,2) = ψ(x_j-1,p_j-1+2) = [Δ_j-1+Δ_j-2],
ψ(x_j,p_j+1) = p_j[Δ_j-1+Δ_j-2] - (p_j-1) [Δ_j-2] = [p_jΔ_j-1+Δ_j-2]=[Δ_j],
ψ(x_j,p_j+2) = (p_j+1)[Δ_j-1+Δ_j-2] - p_j [Δ_j-2]
= [p_jΔ_j-1+Δ_j-2+Δ_j-1] = [Δ_j+Δ_j-1].
Thus, the claim is verified.
With the claim proved, we have ψ(x_N,p_N+1)=[Δ_N] and ψ(x_N,p_N+2)=[Δ_N + Δ_N-1]. The relations from the closure of the tangle give a single equation
Δ_N ψ(x_1,1) ≡Δ_N ψ(x_1,2) n.
Hence, the assertion is proved.
Any map ψ: {x_1,1,x_1,2}→ such that Δψ(x_1,1)≡Δψ(x_1,2)n extends to a unique quandle homomorphism ψ̃: Q((p_1p_2… p_N)) →, i.e. the diagram
{x_1,1,x_1,2}[r,"ψ"] [d,hook,"i"]
Q((p_1p_2… p_N))[ru,dashed,"ψ̃"swap]
commutes.
We extend ψ to ψ̅: {x_j,i| 1≤ j≤ N and 1≤ i≤ p_j+2}→ uniquely to other generators recursively using the following relations
ψ̅(x_j,i ) = ψ̅(x_j,i-2) ψ̅(x_j,i-1) for 1≤ j≤ N and 3≤ i≤ p_j+2,
ψ̅(x_2,1) =ψ̅(x_1,2), ψ̅(x_2,2)=ψ̅(x_1,p_1+2),
ψ̅(x_j,1) =ψ̅(x_j-2,p_j-2+1), ψ̅(x_j,2)=ψ̅(x_j-1,p_j-1+2) for 3≤ j≤ N.
From the proof of Proposition <ref>, we see that ψ̅(x_N,p_N+1)=[Δ_N] and ψ̅(x_N,p_N+2)=[Δ_N + Δ_N-1]. Hence, the relations
ψ̅(x_N,p_N+1)=ψ̅(x_1,1 ), ψ̅(x_N,p_N+2)=ψ̅(x_N-1,p_N-1+1)
hold and ψ̅ extends to a unique quandle homomorphism ψ̃: Q((p_1p_2… p_N))→.
The quandle coloring number of a 2-bridge link is given by the formula |Hom(Q((p_1p_2… p_N)),ℤ_n^dih)|=n(Δ,n), of which n are trivial quandle colorings.
By Propositions <ref> and <ref>, |Hom(Q((p_1p_2… p_N)),ℤ_n^dih)| is equal to the number of choices of (ψ(x_1,1),ψ(x_1,2))∈ℤ_n×ℤ_n such that Δψ(x_1,1)≡Δψ(x_1,2) (mod n), which is exactly n(Δ,n). Among all the colorings, there are n trivial colorings, corresponding to the choices ψ(x_1,1)=ψ(x_1,2)∈ℤ_n.
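As a numerical illustration of this corollary, the sketch below counts ℤ_n^dih-colorings of the closure of the one-row tangle [N] (for which Δ = N) by brute-force propagation along the twist region of the standard diagram, and compares the count with n·gcd(N,n):
```python
from math import gcd
from itertools import product

def twist_closure_colorings(N, n):
    """Count Z_n^dih-colorings of the closure of the tangle [N]."""
    count = 0
    for a, b in product(range(n), repeat=2):
        x = [a, b]
        for _ in range(N):                       # propagate x_{i+2} = x_i |> x_{i+1} = 2 x_{i+1} - x_i
            x.append((2 * x[-1] - x[-2]) % n)
        if x[N] == x[0] and x[N + 1] == x[1]:    # closure identifies the last two strands with the first two
            count += 1
    return count

for N, n in [(3, 3), (3, 9), (4, 6), (5, 10)]:
    assert twist_closure_colorings(N, n) == n * gcd(N, n)
    print(N, n, twist_closure_colorings(N, n))   # 9, 27, 12, 50
```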
§ THE QUANDLE COLORING QUIVER OF 2-BRIDGE LINKS
It will turn out that the quiver invariant can be organized based on how the automorphism group () acts on the colorings.
Throughout this section, we consider a 2-bridge link (N/M), where N and M are positive and relatively prime.
By Proposition <ref>, we denote by [a,b] the unique quandle homomorphism ψ∈(Q((N/M)), ) such that ψ(x_1,1)=a and ψ(x_1,2)=b.
Note that such a and b satisfy n | N(b-a).
Analogously, for x,y ∈ℤ_n, there is a unique endomorphism f ∈() such that f(0) = x and f(1) =y. We denote such an endomorphism by x,y
Observe that f(a) = ay - (a-1)x = (y-x)a + x n. Moreover, x,y is an automorphism precisely when y-x∈ℤ_n^×.
() acts on (Q((N/M)), ) by post-composition, i.e. f [a,b] :=f∘ [a,b]=[f(a),f(b)].
For [a,b]∈(Q((N/M)), ) and f= x,y∈(), we see that n| N(y-x)(b-a)=N(f(b)-f(a)), i.e. [f(a),f(b)]∈(Q((N/M)), ). Since composition is associative and 1_∈() fixes any [a,b], we see that all the group action axioms are satisfied.
Write ψ∼ϕ if ψ and ϕ lie in the same orbit under the action.
For ψ,ψ',ϕ,ϕ' ∈(Q((N/M)),) such that ψ∼ψ' and ϕ∼ϕ', we have
|{f∈(): ϕ=f∘ψ}|=|{f∈(): ϕ'=f∘ψ'}|.
Since ψ∼ψ' and ϕ∼ϕ', there exist g,h∈() such that ψ'=g∘ψ and ϕ'=h∘ϕ. Define two maps T: {f∈(): ϕ=f∘ψ}→{f∈(): ϕ'=f∘ψ' } by f↦ h∘ f∘ g^-1, and S: {f∈(): ϕ'=f∘ψ' }→{f∈(): ϕ=f∘ψ} by f↦ h^-1∘ f∘ g. We see that T and S are inverse to each other. Hence, two sets are of the same size.
By translation, any orbit contains an element of the form [0,a].
Consequently, it suffices to consider edges between them, i.e.
|{f∈(): [0,b]=f∘ [0,a]}|=|{f∈(): f(0)=0, f(a)=b}|.
For a,b∈, we have
|{f∈(): f(0)=0, f(a)=b}|=|{x∈ℤ_n: ax≡ b n}|.
Two maps f↦ f(1) and x↦ 0,x are inverses.
It is a basic number theory result that
{x∈ℤ_n: ax≡ b n}=(a,n) if (a,n)| b,
0 else..
We immediately have our result.
For a,b∈, we have
|{f∈(): [0,b]=f∘ [0,a]}|=(a,n) if (a,n)| b,
0 else.
§.§ The quiver when n is a power of a prime
Let us first consider the case when n=p^α, where p a prime and α is a positive integer.
The p-adic valuation of an integer m, denoted by ν_p(m), is the highest power of p dividing m.
Given p, α, and N, we set β=min{α,ν_p(N)}. We now characterize orbits of
(Q((N/M)), ) and count endomorphisms between them.
Under the action of () on (Q((N/M)), ), for α-β≤ j,j'≤α , we have
* [0,p^j]∈(Q((N/M)), ).
* [0,p^j] and [0,p^j'] lie in same orbit if and only if j=j'.
* The size of the orbit of [0,p^j], denoted by n_j, is given by
n_j= p^2α-j-1(p-1) if j<α,
p^α if j=α.
* (Q((N/M)), ) is partitioned into orbits with {[0,p^j]: α-β≤ j≤α} being a complete set of representatives.
* The number of endomorphisms of ℤ_p^α^dih sending [0, p^j] to [0, p^j'], denoted by n_j,j', is given by
n_j,j'= 0 if j>j',
p^j if j≤ j'.
* Since j ≥α-β≥α-ν_p(N), we have p^α| Np^j.
* The converse is obvious. Without loss of generality, let us suppose that j>j'.
We see that (p^j,p^α) = p^j ∤ p^j', so there is no automorphism from [0,p^j] to [0,p^j'] by Proposition <ref>.
* We first determine the size of stabilizer of [0,p^j], which is equal to the number of x,y∈() such that x,y [0,p^j]=[0,p^j]. Note that the size of () is equal to |ℤ_p^α| · |ℤ_p^α^×| = p^αϕ(p^α) = p^2α-1(p-1).
Case 1 j=α. In this case, it is equivalent to count a number of x,y such that x=0 and y∈ℤ_p^α^×, so the stabilizer of [0,p^α] = [0,0] is of the size |ℤ_p^α^×|=ϕ(p^α). By orbit-stabilizer theorem, the size of the orbit of [0,p^α] is p^αϕ(p^α)/ϕ(p^α)=p^α.
Case 2 j<α. In this case, we count a number of x,y such that x=0 and y p^j = p^j p^α. The last condition is equivalent to y = 1+ kp^α-j for a nonnegative integer k<p^j. Hence, the stabilizer of [0,p^j] is of the size p^j. By orbit-stabilizer theorem, the size of the orbit of [0,p^j] is p^αϕ(p^α)/p^j=p^2α-j-1(p-1).
* Consider the total size of the orbit of [0,p^j] for all α-β≤ j≤α
∑_α-β≤ j ≤α n_j = p^α + ∑_α-β≤ j < αp^2α-j-1(p-1)
= p^α + p^2α-1(p-1)·1/p^α-β∑_0≤ j< β1/p^j
=p^α + p^2α-1(p-1)·1/p^α-β·1-1/p^β/1-1/p
= p^α+β.
One the other hand, we have |(Q((N/M)), )|=p^α(N,p^α)=p^α+β by corollary <ref>. Hence [0,p^j] for α-β≤ j≤α are complete representatives.
* This follows from Proposition <ref>.
Combining all the results from Lemma <ref> and Lemma <ref>, we are able to determine the full quandle coloring quiver of the two-bridge link (N/M) with respect to the quandle .
Let p be a prime, α≥ 1 be an integer, and N,M∈ℕ with (N,M)=1. The full coloring quiver of the two-bridge link (N/M) with respect to the quandle is given by
_((N/M))≅ G_β,
where β=min{ν_p(N),α}, G_0:= (K_p^α,p^α) and G_j:=G_j-1∇_p^α-j(K_p^α+j-1(p-1),p^α-j) for 1≤ j (see Figure <ref>).
In short terms, the full coloring quiver 𝒬_((N/M)) has its vertex set partitioned into orbits of [0,p^j] for α-β≤ j≤α, each of which induces a regular complete subgraph, and has p^i directed edges from each vertex from the orbit of [0,p^i] to each vertex from the orbit of [0,p^j] whenever i≤ j. If the order of the dihedral quandle is fixed, then the number β determines the number of components of the quiver.
For instance, suppose that L is the 4-crossing torus link and our quandle is ℤ_4^dih. Then, {[0,0],[1,1],[2,2],[3,3]} constitutes an orbit,
{[0,1],[1,2],[2,3],[3,0],[0,3],[1,0],[2,1],[3,2]} constitutes an orbit, and
{[0,2],[1,3],[2,0],[3,1]} constitutes an orbit.
Let p be a prime and N,M∈ℕ with (N,M)=1. Then, the quiver
_ℤ_p^dih((N/M))≅(K_p,p)∇_1(K_p(p-1),1) if p| N,
(K_p,p) if p∤ N.
Set α=1 in theorem <ref>.
§.§ The general case
For convenience, we start using multi-index notation. For a fix positive integer n, we write the prime decomposition n=∏_i p_i^α_i as p^α, where p is regarded as the sequence of distinct prime factors and α is regarded as the sequence of corresponding exponents. For sequences of nonnegative integers j=(j_i) and j'=(j_i') with the same length as p, we write p^j:= ∏_i p_i^j_i, and define j≼j' iff j_i≤ j_i' for all i.
The next result generalizes Lemma <ref>. In a similar manner, we set the sequence β with β_i=min{α_i,ν_p_i(N)}.
Under the action of () on (Q((N/M)), ) , for α-β≼ j,j'≼α we have
* [0, p^j]∈(Q((N/M)), ).
* [0,p^j] and [0,p^j'] lie in same orbit if and only if j=j'.
* The size of the orbit of [0,p^j] is given by n_j :=∏_i n_j_i, where
n_j_i= p_i^2α_i-j_i-1(p_i-1) if j_i<α_i,
p_i^α_i if j_i =α_i.
* (Q((N/M)), ) is partitioned into orbits with {[0,p^j]: α-β≼ j≼α} being a complete set of representatives.
* The number n_j,j' of endomorphisms of sending [0,p^j] to [0,p^j'] is given by
n_j,j'= 0 if j⋠j',
p^j if j≼ j'.
The proof also closely follows the proof of Lemma <ref>
* For each i, we have α_i≤ν_p_i(N)+j_i since α_i-ν_p_i(N)≤α_i-β_i ≤ j_i. Thus, p^α| Np^j and [0,p^j]∈(Q((N/M)), ).
* The converse is obvious. Without loss of generality, suppose that j_i<j_i' for some index i. Suppose for contradiction that there is x,y∈() such that [x,p^j(y-x)+x]= x,y [0,p^j]=[0,p^j']. We see that x=0 and p^j y≡ p^j'p^α. This implies p_i | y and (y,n)≥ p_i>1, which contradicts with y ∈ℤ_n^×. Hence, [0,p^j] and [0,p^j'] lie in different orbits.
* We also try to determine the size of the stabilizer of [0,p^j], which is equal to the number of x,y∈() such that x,y [0,p^j]=[0,p^j]. We see that x=0 and y∈ℤ_n^× satisfying p^jy≡ p^j n. By looking at each prime, the condition is equivalent to the solving the system p^j_i y≡ p^j_ip_i^α_i with (y,p_i^α_i)=1 for each i.
Case 1 j_i=α_i. The condition p^j_i y≡ p^j_ip_i^α_i is trivial, so there are ϕ(p_i^α_i) solutions.
Case 2 j_i<α_i. In this case, there are p_i^j_i solutions of the form y = 1+ kp_i^α_i-j_ip_i^α_i, where 0≤ k<p_i^j_i. Note that the solutions satisfy (y,p_i^α_i)=1.
Let us define
m_j_i= p_i^j_i if j_i<α_i,
ϕ(p_i^α_i) if j_i =α_i.
By Chinese remainder theorem, the size of the stabilizer of [0,p^j] is ∏_i m_j_i. Thus, by orbit-stabilizer theorem, the size of the orbit of [0,p^j] is
nϕ(n)/∏_i m_j_i=∏_i p_i^α_iϕ(p_i^α_i)/m_j_i=∏_i p_i^2α_i-1(p_i-1)/m_j_i=∏_i n_j_i.
* Consider the total of the size of orbits
∑_α-β≼ j ≼α n_j = ∑_α_I-β_I ≤ j_I ≤α_I…∑_α_1-β_1 ≤ j_1 ≤α_1∏_i n_j_i
= ∏_i∑_α_i-β_i ≤ j_i ≤α_i n_j_i
= ∏_i[p_i^α_i + ∑_α_i-β_i ≤ j_i < α_ip_i^2α_i-j_i-1(p_i-1) ]
= ∏_i[ p_i^α_i + p_i^2α_i-1(p_i-1)·1/p_i^α_i-β_i∑_0≤ j_i< β_i1/p_i^j_i]
=∏_i[ p_i^α_i + p_i^2α_i-1(p_i-1)·1/p_i^α_i-β_i·1-1/p_i^β_i/1-1/p_i]
=∏_i p_i^α_i+β_i = p^α+β.
Since |(Q((N/M)), )|=n(N,n)=p^α+β, we have all elements from these orbits.
* This also follows from Proposition <ref>.
Combining all the results from Lemma <ref> and Lemma <ref>, we are able to determine the full quandle coloring quiver of the two-bridge link (N/M) with respect to the quandle .
Let Λ be a set, G={G_λ}_λ∈Λ be a family of graphs indexed by Λ, and w:Λ×Λ→ℕ_0 be a map. Denote by ∇_w G the disjoint union graph _λ∈ΛG_λ with additional w(λ,μ) directed edges from each vertex of G_λ to each vertex of G_μ. With this notion, G_2∇_m̂ G_1 = ∇_w {G_1,G_2}, where w:{1,2}×{1,2}→ℕ_0 is given by w(1,2)=m and w(2,1)=w(1,1)=w(2,2)=0.
Let n be a positive integer and write n=∏_i p_i^α_i, where p_i are distinct primes and α_i > 0. Let N,M be positive integers with (N,M)=1 and set β_i=min{α_i,ν_p_i(N)}. Let Λ={j: α-β≼ j≼α}. The full quandle coloring quiver of the two-bridge link (N/M) with respect to the quandle is given by
_((N/M))≅∇_w {(K_n_j,p^j):j∈Λ},
where w:Λ×Λ→ℕ_0 is given by
w(j,j')= p^j if j≼ j' and j≠ j',
0 else.
The full quandle coloring quiver _((N/M)) is a higher dimensional generalization of that when n is a prime power. Its vertex set is partitioned into orbits that can be arranged into a higher dimensional grid with width in the i-th dimension depending only on β_i. We can see in the proof of Lemma <ref> that problems reduce to subproblems for each prime dividing the order of the dihedral quandle. Roughly speaking, the orbits and stabilizers split into "products". See section 4 of <cit.> for more rigorous discussion of this situation.
The torus link (N,2)≅(N/1). The full quandle coloring quiver _ℤ_12^dih((36,2)) is shown in Figure <ref>.
§.§ Applications and remarks
The formulas of quandle cocycle invariants of 2-bridge links are given in <cit.> for dihedral quandles of prime orders. This information can be combined with our results to calculate the quandle cocycle quivers of 2-bridge links <cit.>. Similarly, the authors of <cit.> computed quandle module invariants using some dihedral quandles, which can be used to compute the quandle module quivers <cit.> when combined with our result.
By a result of Taniguchi <cit.>, the quandle coloring quiver is not a stronger invariant if one uses the dihedral quandle of order n=p_1p_2⋯ p_k, where p_i is a prime number. To find an instance of proper enhancement, we may have to consider a quandle whose order is a power of a prime.
Consider the dihedral quandle Q=ℤ_4^dih. Then, the quandle coloring number of T(9,3) and T(4,2) by Q are both 16. By the main result of this paper and a result in <cit.>, the associated quiver invariants are not equal. In particular, the quiver for T(4,2) contains three complete graphs K_4, K_4, and K_8. On the other hand, the quiver for T(9,3) contains four copies of complete graphs that are all K_4 as shown schematically in Figure 6 of <cit.> (merging parallel edges). More examples can be obtained by replacing 9 with 6k+3 where k=1,2,3,...
Of course, other invariants already distinguish the links in the examples above, but our computations offer additional tools for potential use in the future to distinguish unknown knotted objects.
§.§ Acknowledgments
The research conducted for this paper is supported by the Pacific Institute for the Mathematical Sciences (PIMS). The first author is supported by the Centre of Excellence
in Mathematics, the Commission on Higher Education, Thailand. The research and findings may not reflect those of the Institute. The third author thanks Nicholas Cazet for helpful conversations and for introducing him to Fielder's work. We are grateful to Chris Soteros for support.
|
http://arxiv.org/abs/2307.05439v1 | 20230711170523 | Metropolis Sampling for Constrained Diffusion Models | [
"Nic Fishman",
"Leo Klarner",
"Emile Mathieu",
"Michael Hutchinson",
"Valentin de Bortoli"
] | cs.LG | [
"cs.LG"
] |
Denoising diffusion models have recently emerged as the predominant paradigm for generative modelling. Their extension to Riemannian manifolds has facilitated their application to an array of problems in the natural sciences. Yet, in many practical settings, such manifolds are defined by a set of constraints and are not covered by the existing (Riemannian) diffusion model methodology. Recent work has attempted to address this issue by employing novel noising processes based on logarithmic barrier methods or reflected Brownian motions. However, the associated samplers are computationally burdensome as the complexity of the constraints increases. In this paper, we introduce an alternative simple noising scheme based on Metropolis sampling that affords substantial gains in computational efficiency and empirical performance compared to the earlier samplers. Of independent interest, we prove that this new process corresponds to a valid discretisation of the reflected Brownian motion. We demonstrate the scalability and flexibility of our approach on a range of problem settings with convex and non-convex constraints, including applications from geospatial modelling, robotics and protein design.
§ INTRODUCTION
In recent years, denoising diffusion models
<cit.> have
emerged as a powerful paradigm for generative modelling, achieving
state-of-the-art performance across a range of domains. They work by
progressively adding noise to data following a Stochastic Differential Equation
(SDE)—the forward noising process—until it is close to the invariant
distribution of the SDE. The generative model is then given by an approximation of
the associated time-reversed denoising process, which is also an SDE
whose drift depends on the gradient of the logarithmic densities of the forward
process, referred to as the Stein score.
Building on the success of diffusion models for image generation tasks,
<cit.> and <cit.> have recently
extended this framework to a wide range of Riemannian manifolds, enabling the
specification of inherent structural properties of the modelled domain. This
has broadened the applicability of diffusion models to problems in the natural and engineering sciences, including the
conformational modelling of small molecules <cit.>, proteins
<cit.> and robotic platforms
<cit.>.
However, in many data-scarce or safety-critical settings, researchers may want to restrict the modelled domain further by specifying problem-informed constraints to make maximal use of limited experimental data or prevent unwanted behaviour <cit.>. As illustrated in <Ref>, such domain-informed constraints can be naturally represented as a Riemannian manifold with boundary. Training diffusion models on such constrained manifolds is thus
an important problem that requires principled noising processes—and corresponding discretisations—that stay within the constrained set.
Recent work by <cit.> has attempted to derive such
processes and extend the applicability of diffusion models to
inequality-constrained manifolds by investigating the generative modelling
applications of classic sampling schemes based on log-barrier methods
<cit.>
and the reflected Brownian motion
<cit.>. While
empirically promising, the proposed algorithms can be computationally and
numerically burdensome, and require bespoke implementations for different
manifolds and constraints. Concurrently, <cit.> have
investigated the use of reflected diffusion models for image modelling. They
focus on the high-dimensional hypercube, as this setting admits a theoretically
grounded treatment of the static thresholding method which is widely used
in image models such as <cit.>. Their method exhibits
robust scaling properties and impressive visual results in this
framework. However, the introduced samplers suffer from the same limitations as
<cit.> for more complex manifolds and constraints.
Here, we propose a new method for generative modelling on constrained manifolds
based on a Metropolis-based discretisation of the reflected Brownian motion. The
Metropolised process' chief advantage is that it is lightweight: the only
additional requirement over those outlined in <cit.>
that is needed to implement a constrained diffusion model is an efficient binary
function that indicates whether any given point is within the constrained
set. The Metropolised approximation of the reflected Brownian motion is
substantially easier to implement, faster to compute and more numerically stable
than the previously considered sampling schemes. Our core theoretical
contribution is to show that this new discretisation converges to the reflected
SDE by using the invariance principle for SDEs with boundary
<cit.>. To the best of our knowledge, this is the first
time that such a process has been investigated. We demonstrate that our method
attains improved empirical results on manifolds with convex and non-convex constraints by applying it to a range of problems from geospatial modelling, robotics and protein design.
§ BACKGROUND
Riemannian manifolds.
A Riemannian manifold is defined as a tuple (, ) with a smooth
manifold and a metric defining an inner product on tangent spaces. In
this work, we will use the exponential map
exp_x: T_x →, as well as the extension of the
gradient ∇, divergence div and Laplace Δ operators to
. All of these quantities can be defined in local coordinates in terms of the
metric[if the connection chosen on the Riemannian manifold is the
Lévi-Civita connection]. The extension of the Laplace operator to is
called the Laplace-Beltrami operator, also denoted Δ when there is no
ambiguity.
Using Δ, we can define a Brownian motion on ,
denoted (_t)_t ≥ 0 and with density w.r.t. the volume form of
denoted p_t for any t > 0.
We refer to <cit.> for a thorough
treatment of Riemannian manifolds and to <cit.> for details
on stochastic analysis on manifolds.
In the following, we consider a manifold defined by
ℳ = { x ∈𝒩 : f_i(x) < 0 , i ∈ℐ },
where (𝒩, g) is a Riemannian manifold, ℐ is an arbitrary finite indexing family and for any i ∈ℐ, f_i∈ C(𝒩, ℝ). Since ℐ is finite and f_i continuous for any i ∈ℐ, ℳ is an open set of 𝒩 and inherits its metric g. This captures simple Euclidean polytopes and complex constrained spaces like <Ref>.
Denoising diffusion models.
Denoising diffusion models <cit.> work as follows: let (X_t)_t ∈ [0,T] be a noising process that corrupts the original data distribution p_0. We assume that (X_t)_t ≥ 0 converges to N(0,σ^2), with σ > 0. Several such processes exist, but in practice we consider the Ornstein-Uhlenbeck (OU) process, also referred to as the VP-SDE, which is defined by the following Stochastic Differential Equation (SDE)
dX_t = - (1/2) X_t dt + σ dB_t, X_0 ∼ p_0.
Under conditions on p_0, for any T > 0, (Y_t)_t ∈ [0,T] = (X_T-t)_t ∈ [0,T] is also the (weak) solution to an SDE <cit.>
dY_t = { (1/2) Y_t + σ^2 ∇log p_T-t(Y_t) } dt + σ dB_t, Y_0 ∼ p_T,
where p_t denotes the density of X_t.
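As a concrete illustration of these two equations, the following is a minimal sketch of their Euler-Maruyama discretisation on a toy one-dimensional Gaussian example, where the score ∇log p_t is available in closed form (the example, σ and step sizes are illustrative choices and are not taken from the paper):
```python
import numpy as np

rng = np.random.default_rng(0)
sigma, T, n_steps = 1.0, 5.0, 500
dt = T / n_steps

def forward_noising(x0):
    """Euler-Maruyama discretisation of dX_t = -0.5 X_t dt + sigma dB_t."""
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x - 0.5 * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

def reverse_sampling(score, xT):
    """Euler-Maruyama discretisation of the time-reversed SDE, given (an oracle for) the score."""
    y = np.array(xT, dtype=float)
    for k in range(n_steps):
        t = T - k * dt
        y = y + (0.5 * y + sigma**2 * score(t, y)) * dt + sigma * np.sqrt(dt) * rng.standard_normal(y.shape)
    return y

# Toy check with p_0 = N(0, s0^2): then p_t is Gaussian and its score is known exactly.
s0 = 0.25
var = lambda t: s0**2 * np.exp(-t) + sigma**2 * (1.0 - np.exp(-t))
score = lambda t, x: -x / var(t)

print(forward_noising(s0 * rng.standard_normal(10_000)).std())                       # ~ sigma = 1.0
print(reverse_sampling(score, np.sqrt(var(T)) * rng.standard_normal(10_000)).std())  # ~ s0 = 0.25
```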
In practice, ∇log p_t is approximated with a score network
(t,x) ↦ s_θ(t,x) trained by minimising either a denoising score
matching (dsm) loss or an implicit score matching (ism) loss
<cit.>
ℓ(θ) = 𝔼_t ∼𝒰([0,T]), (X_0, X_t)[ λ_t ( (1/2) ‖ s_θ(t, X_t) ‖^2 + div(s_θ)(t, X_t) ) ],
where λ_t >0. For a flexible score network, the global minimiser
θ^⋆ = argmin_θℒ(θ) satisfies
s_θ^⋆(t, ·)=∇log p_t. <cit.>
and <cit.> have extended denoising diffusion models to the
Riemannian setting. The time-reversal formula (<ref>) remains
the same, replacing the Euclidean gradient with its Riemannian equivalent. The
ism loss can still be computed in that setting. However, the samplers
used in the Riemannian setting differ from the classical Euler-Maruyama
discretisation used in the Euclidean framework. <cit.>
use Geodesic Random Walks <cit.>, which ensure that the
samples remain on the manifold at every step. In this paper, we propose a
sampler with similar properties in the case of constrained manifolds.
Reflected SDE. We conclude this section by recalling the framework
for studying reflected SDEs, which is introduced via the notion of the
Skorokhod problem. For simplicity, we present this in the
Euclidean space ^d, but note that reflected processes can be defined on arbitrary
smooth manifolds 𝒩. In the case
of the Brownian motion, a solution to the Skorokhod problem is a process of the form (X̄_t, k_t)_t ≥ 0. Locally, (X̄_t)_t ≥ 0 can be seen as a regular Brownian motion (B_t)_t ≥ 0 while (k_t)_t ≥ 0 forces (X̄_t)_t ≥ 0 to remain in ℳ̄. Under mild additional regularity conditions on ℳ and (X̄_t, k_t)_t ≥ 0, see <cit.>, (X̄_t, k_t)_t ≥ 0 is a solution to the Skorokhod problem if for any t ≥ 0
X̄_t = X̄_0 + B_t - k_t ∈ℳ̄,
|k|_t = ∫_0^t 1_{X̄_s ∈∂ℳ} d|k|_s and
k_t = ∫_0^t n(X̄_s) d|k|_s, where
(|k|_t)_t ≥ 0 is the total variation of (k_t)_t ≥ 0 [in this case (k_t)_t ≥ 0 is not regular enough, but if it were of class C^1, its total variation would be given by ∫_0^t |∂_s k_s| ds in the one-dimensional case].
Let us provide some intuition on this definition. When (X̄_t)_t ≥ 0 hits the boundary ∂ℳ, the term -dk_t pushes the process back into ℳ along the inward normal -n(X̄_t), according to k_t = ∫_0^t n(X̄_s) d|k|_s. The condition |k|_t = ∫_0^t 1_{X̄_s ∈∂ℳ} d|k|_s is more technical and can be seen as imposing that |k|_t remains constant so long as (X̄_t)_t ≥ 0 does not hit ∂ℳ. We refer to <cit.> and <cit.> for a more thorough introduction of these notions in the context of diffusion models.
§ DIFFUSION MODELS FOR CONSTRAINED MANIFOLDS VIA METROPOLIS SAMPLING
In <Ref>, we highlight the practical limitations of existing constrained diffusion models and propose an alternative Metropolis sampling-based approach.
In <Ref>, we outline our proof that this process corresponds to a valid discretisation of the reflected Brownian motion, justifying its use in diffusion models.
An overview of the
samplers we cover in this section is presented in <Ref>.
§.§ Practical limitations of existing
constrained diffusion models
Barrier metrics.
In the barrier approach, the constrained manifold is transformed into an
unconstrained space via a barrier metric. This metric is defined by
∇^2 ϕ(x) with ϕ(x) = ∑_i∈ℐϕ_i(d(x, f_i)) where
d(x, f_i) is the minimum distance from the point x to the set defined by
f_i(x) = 0 and ϕ_i is a monotone decreasing function such that
lim_z → 0ϕ_i(z) = ∞ . Under additional regularity
assumptions, ϕ is called a barrier function (see
<cit.>). This definition ensures that the barrier function induces a well-defined exponential map on the manifold, making the Riemannian
diffusion model frameworks of <cit.> and
<cit.> applicable. In practice, evaluating ϕ
requires computing d(x, ∂) (and its derivatives), which can be
prohibitively expensive.
Furthermore, since the exponential under the induced manifold is not easy to compute, the barrier methods in <cit.> approximate it by projecting the exponential on the original manifold back into the constrained set, incurring additional bias and necessitating a projection, which can itself be computationally intractable, as we discuss in more detail below.
Reflected stochastic processes.
<cit.> and <cit.> introduce diffusion models based on the
reflected Brownian motion (RBM). One possible
discretisation of the reflected SDE is to
* consider a classical step of the Euler-Maruyama discretization (or the Geodesic Random Walk in the Riemannian setting) and
* reflect this step according to the boundary defined by ∂ℳ.
To compute the reflection, one must check whether the step crosses the boundary. If it does, the point of intersection needs to be calculated, the ray reflected at this point, and the step continued in the reflected direction.
This can require an arbitrarily large number of reflections depending on the step size,
the geodesic on the manifold, and the geometry of the bounded region within the manifold.
We refer to <Ref> for the pseudocode of the reflection step and
additional comments.
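On a general domain, each reflection requires such a geodesic/boundary intersection; on a simple box constraint, however, the whole chain of reflections collapses into a coordinate-wise folding, as the following sketch on an assumed unit hypercube illustrates:
```python
import numpy as np

rng = np.random.default_rng(0)

def fold(y, lo=0.0, hi=1.0):
    """Map a proposed point back into the box [lo, hi]^d, accounting for any number of reflections."""
    width = hi - lo
    z = np.mod(y - lo, 2.0 * width)
    return lo + np.where(z > width, 2.0 * width - z, z)

def reflected_step(x, gamma):
    """One reflected Brownian step on the unit hypercube."""
    return fold(x + np.sqrt(gamma) * rng.standard_normal(x.shape))

x = np.full(3, 0.9)
for _ in range(1_000):
    x = reflected_step(x, gamma=0.01)
print(bool(np.all((0.0 <= x) & (x <= 1.0))))   # True: the iterate stays in the box
```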
An alternative approach to discretising a reflected SDE is to replace the
reflection with a projection, see <cit.> for
instance. However, the projection requires the most expensive part of the
reflection algorithm: computing the intersection of the geodesic with the
boundary.
Metropolis approximation.
As outlined above, both of the existing approaches for constrained diffusion models require manifold- and constraint-specific implementations and become computationally intractable as the complexity and dimension of the modelled geometry increases.
This limits their practical usefulness even for relatively simple manifolds with well-defined exponential maps and linear inequality constraints (such as e.g. polytopes).
In the following, we introduce a method that aims to solve both of these problems.
The sampler we propose is similar to a classical Euler-Maruyama discretisation of
the Brownian motion, except that, whenever a step would carry the Brownian
motion outside of the constrained region, we reject it (see <Ref>).
This is a Metropolised version of the usual discretisation and is almost trivial to implement compared to the
existing barrier, reflection and projection methods.
Hence, this method enables the principled extension of diffusion models to arbitrarily constrained domains at virtually no added implementational complexity or computational expense.
§.§ Relating the Metropolis sampler to the reflected Brownian motion
In this section, we prove that the proposed Metropolis sampling-based process (<Ref>) corresponds to a valid discretization of the reflected process, justifying its use in diffusion models.
Here we focus on a concise presentation of the core concepts
and the main results. A full proof can be found in
<Ref>. For simplicity, we restrict ourselves to the Euclidean setting. All of our results require particular assumptions on , which we discuss at the end of this section.
We begin with a definition of the
Metropolis approximation of RBM.
For any γ >0 and k ∈ℕ, let X_0^γ∈ℳ and X_k+1^γ = X_k^γ + √(γ) Z_k^γ if X_k^γ + √(γ) Z_k^γ∈ℳ and X_k+1^γ = X_k^γ otherwise, where (Z_k^γ)_k ∈ℕ are i.i.d. standard Gaussian random variables. The sequence (X_k^γ)_k ∈ℕ is called the Metropolis approximation of RBM.
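A sketch of this recursion is given below; the only problem-specific ingredient is a membership oracle for ℳ, here instantiated with an illustrative unit-ball constraint:
```python
import numpy as np

rng = np.random.default_rng(0)

def in_domain(x):
    """Membership oracle for the constrained set M; the open unit ball, purely as an example."""
    return float(np.sum(x**2)) < 1.0

def metropolis_rbm(x0, gamma, n_steps):
    """Metropolis approximation of the reflected Brownian motion: reject steps that leave M."""
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        proposal = x + np.sqrt(gamma) * rng.standard_normal(x.shape)
        if in_domain(proposal):
            x = proposal           # accept the Gaussian step
        # otherwise X_{k+1} = X_k: the chain stays put for this step
    return x

x_final = metropolis_rbm(np.zeros(2), gamma=1e-3, n_steps=5_000)
print(x_final, in_domain(x_final))  # the iterate never leaves M
```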
For any γ > 0, we consider (_t^γ)_t ≥ 0, the linear
interpolation of (X^γ_k)_k ∈ such that for any k ∈,
_k γ^γ = X_k^γ. The following result is the main theoretical
contribution of our paper.
Under assumptions on ,
for any T ≥ 0, (_t^γ)_t ∈0,T weakly converges to the RBM
(_t)_t ∈0,T as γ→ 0.
The rest of the section is devoted to a high level presentation of the proof of
<Ref>. It is theoretically impractical to work
directly with the Metropolis approximation of RBM. Instead, we introduce an
auxiliary process, show this converges to the RBM, and finally prove that the
convergence of the auxiliary process implies the convergence of our Metropolis
discretisation.
For any γ > 0 and k ∈ℕ, let X̂_0^γ = x ∈ℳ and X̂_k+1^γ = X̂_k^γ + √(γ) Z_k^γ with Z_k^γ a Gaussian random variable conditioned on X̂_k^γ + √(γ) Z_k^γ∈ℳ. The sequence (X̂_k^γ)_k ∈ℕ is called the Rejection approximation of RBM.
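A minimal sketch of one step of this auxiliary process is given below (reusing a membership oracle such as in_domain above); in contrast to the Metropolis rule, the Gaussian increment is redrawn rather than the chain staying put:
```python
import numpy as np

rng = np.random.default_rng(1)

def rejection_rbm_step(x, gamma, in_domain, max_tries=10_000):
    """One step of the rejection approximation: redraw Z until x + sqrt(gamma) Z lands in M."""
    for _ in range(max_tries):
        proposal = x + np.sqrt(gamma) * rng.standard_normal(x.shape)
        if in_domain(proposal):
            return proposal
    raise RuntimeError("no admissible increment found; try a smaller gamma")
```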
We call this process Rejection approximation of RBM since in practice, Z_k^γ is
sampled using rejection sampling, see <Ref>. For any
γ > 0, we also consider (_t^γ)_t ≥ 0, the linear
interpolation of (X̂^γ_k)_k ∈ such that for any
k ∈, _k γ^γ = X̂_k^γ. In <Ref>, we prove the following result.
Under assumptions on ,
for any T ≥ 0, (_t^γ)_t ∈0,T weakly converges to the Reflected Brownian Motion
(_t)_t ∈0,T as γ→ 0.
Here we give some elements of the proof. Details and full
derivations are postponed to <Ref>. Our approach is
based on the invariance principle of <cit.>. More
precisely, we show that we can compute an equivalent `drift' and `diffusion
matrix' for the discretised process and that, as γ→ 0, the drift
converges to zero and the diffusion matrix converges to the identity. In the
Euclidean setting, this result, accompanied by mild regularity and growth
assumptions, ensures that the discretization weakly converges to the original SDE. However, the case with boundary is much more complicated, primarily
because the approximate drift might explode near the boundary, thus we need to
verify exactly how the drift behaves as γ→ 0 and as the process
approaches the boundary. We show that the normalised drift converges to the inward normal near the boundary. This ensures that
* in the interior of ℳ the drift converges to zero, i.e. locally in the interior of ℳ the Brownian motion and the Reflected Brownian Motion coincide,
* on the boundary, the drift pushes the samples inside the manifold according to the inward normal, mimicking (k_t)_t ≥ 0 in (<ref>).
Finally, with results from <cit.> and <cit.>, we show the convergence to the RBM.
Our next step is to show that the approximate drift and diffusion matrix of the Metropolised
process are upper and lower bounded by their counterparts
in the rejection process. While the upper-bound is easy to derive, the
lower-bound requires the following result.
Under assumptions on ℳ, for any ε > 0 there exists γ̅ > 0 such that for any x ∈ℳ, any γ∈ (0, γ̅] and Z ∼ N(0, Id), we have ℙ(x + √(γ) Z ∈ℳ) ≥ 1/2 - ε.
<Ref> tells us that locally the boundary
looks like a half-space when integrating w.r.t. a Gaussian measure. A
corollary is that, for γ > 0 small enough and for any k ∈, the
probability that X_k+1^γ = X_k^γ is upper bounded uniformly
w.r.t. X_k^γ∈. The proof of <Ref> uses
<Ref> in <Ref>, whose proof relies
on the concept of tubular neighborhoods <cit.>.
Having established the lower and upper bound, we can conclude the proof by
noting that the approximate drift and the diffusion matrix in the rejection
and Metropolis case coincide as γ→ 0. This is enough to apply the
same results as before, giving the desired convergence.
Assumptions on ℳ. Before concluding this section, we detail the assumptions we make on ℳ. For <Ref> to hold, we assume that ℳ = { x ∈ℝ^d : Φ(x) > 0 } is bounded, with Φ∈ C^2(ℝ^d, ℝ) concave. We have that ∂ℳ = { x ∈ℝ^d : Φ(x) = 0 }. In addition, we assume that for any x ∈∂ℳ, ‖∇Φ(x)‖ = 1. These
assumptions match those <cit.> use for their study of the
existence of solutions to the RBM. While it seems possible to relax the
global existence of Φ to a local one, the regularity
assumption of the domain is key. This regularity is essential to
establish <Ref> and the associated geometrical
result on tubular neighborhoods. We also emphasize that the smoothness of the
domain is central in the results of <cit.> on the
equivalence of two definitions of RBMs which we rely on.
§ RELATED WORK ON APPROXIMATIONS OF REFLECTED SDES
Several schemes have been introduced to approximately sample from reflected
Stochastic Differential Equations. They can be interpreted as modifications of
classical Euler-Maruyama schemes used to discretise SDEs without
boundary. One of the most common approaches is to use the Euler-Maruyama
discretisation and project the solution onto the boundary if it escapes from the
domain . In this case, mean-square error rates of order almost 1/2
have been proven under various conditions
<cit.>.
Concretely, this means that the mean-square error between the reflected process at time t and X_n^{t/n} is O(n^{-1+ε}), with ε > 0 arbitrarily small, where (X_k^γ)_{k ∈ℕ} is the projection scheme.
The rate 1/2 is tight
<cit.>. It is possible to use the Euler-Peano method to get slight improvements for a mean-square error rate of
order of 1/2, but this is impractical as it assumes that
one can solve a (simplified) Skorokhod problem, which is usually intractable.
<cit.> introduced a penalised
method which pushes the solution away from the boundary and shows a mean-square error
of order 1/4, see also <cit.>. Weak errors of order
1 have been obtained in <cit.> and <cit.> by
introducing a reflection component in the discretisation or using some local
approximation of the domain to a half-space. We refer to
<cit.> for an introduction to the discretisation of
reflected SDEs. Closer to our work, <cit.> consider three
different methods to approximate reflected Brownian motions on general domains
(two based on discrete methods and one based on killed diffusions). Only
qualitative results are provided. To the best of our knowledge, no previous work in the
probability literature has investigated the Metropolised scheme we
propose.
Our Metropolis scheme is also related to the ball walk
<cit.>, which replaces the Gaussian random variable with a
uniform over the ball (or the Dikin ellipsoid). <cit.> and <cit.> have
studied the asymptotic convergence rate of the ball walk, but, to the best of our
knowledge, its limiting behaviour when the step size goes to zero has not been
investigated.
§ EXPERIMENTAL RESULTS
To demonstrate the practical utility and empirical performance of the proposed Metropolis diffusion models, we conduct a comprehensive evaluation on a range of synthetic and real-world tasks.
In <Ref>, we assess the scalability of our method by applying it to synthetic distributions on hypercubes and simplices of increasing dimensionality.
In <Ref>, we extend the evaluation to real-world tasks on manifolds with convex constraints by applying our method to the robotics and protein design datasets presented in <cit.>.
In <Ref>, we additionally demonstrate that our method extends to constrained manifolds with highly non-convex boundaries—a setting that is intractable with existing approaches. As we found—in line with <cit.>—that log-barrier diffusion models perform strictly worse than reflected approaches across all experimental settings, we focus on a more detailed comparison
with the latter here and postpone additional empirical results to <Ref>.
For all experiments, we use a simple 6-layer MLP with sine activations and a score rescaling function that forces the score to zero at the boundary, scaling linearly into the interior of the constrained set as in <cit.> and <cit.>. We set T=1, β_0=1e-3 and tune β_1 so that the forward process reaches the invariant distribution under a linear β-schedule. We use a learning rate of 2e-4 with a cosine learning rate schedule and an implicit score matching (ISM) loss with a modified loss weighting function of (1 + t), a batch size of 256 and 8 repeats per batch. All models were trained on a single NVIDIA GeForce GTX 1080 GPU.
Additional details are provided in <Ref>.
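As a concrete illustration of these hyperparameter choices, a minimal sketch of the noise schedule and loss weighting is given below; the value of β_1 shown is a placeholder, since in our experiments it is tuned per dataset.

import numpy as np

def beta_t(t, beta0=1e-3, beta1=8.0):
    # Linear beta-schedule on t in [0, 1]; beta1 = 8.0 is an illustrative
    # placeholder, tuned per dataset so that the forward process reaches the
    # invariant distribution.
    return beta0 + t * (beta1 - beta0)

def loss_weight(t):
    # Modified weighting applied to the per-sample ISM loss (assumption:
    # multiplied onto the score-matching objective at time t).
    return 1.0 + t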
§.§ Synthetic distributions on simple polytopes
Log-likelihood (↑) of a held-out test set from a synthetic bimodal distribution over convex subsets of ^d bounded by the hypercube [-1,1]^d and the unit simplex Δ^d. Means and standard deviations are computed over 3 different runs. Training time in hours is listed in parentheses.

Constraint    d    Reflected (hours)        Metropolis (hours)
[-1,1]^d      2    2.25 ± 0.01  (8.95)      2.32 ± 0.05  (0.72)
              3    3.77 ± 0.13  (8.97)      4.15 ± 0.15  (0.71)
              10   7.42 ± 0.77  (10.15)     10.80 ± 0.34 (0.90)
Δ^d           2    1.01 ± 0.01  (9.17)      1.06 ± 0.02  (0.82)
              3    2.64 ± 0.01  (9.43)      3.23 ± 0.17  (0.78)
              10   7.00 ± 0.13  (10.53)     7.81 ± 0.20  (0.97)
In this section, we investigate the scalability of the proposed Metropolis diffusion models by applying them to synthetic bimodal distributions over the d-dimensional hypercube [-1, 1]^d and unit simplex Δ^d. A quantitative comparison of the log-likelihood of a held-out test set is presented in <Ref>, while a visual comparison is postponed to <Ref>.
We find that our Metropolis models outperform reflected approaches across all dimensions and constraint geometries by a substantial and statistically significant margin while training in one tenth of the time.
The degree of improvement seems to scale with the dimensionality of the problem: the larger the dimension of the experiment, the larger the gain in performance from using our proposed Metropolis scheme.
§.§ Modelling proteins and robotic arms under convex constraints
In addition to illustrating our method's scalability on high-dimensional synthetic tasks, we follow the experimental setup from <cit.> to additionally demonstrate its practical utility and favourable empirical performance on two real-world problems from robotics and protein design.
Constrained configurational modelling of robotic arms. The problem of modelling the configurations and trajectories of a robotic arm can be formulated as learning a distribution over the locations and manipulability ellipsoids of its joints, parameterised on ^dת_++^d, where Ş_++^d is the manifold of symmetric positive-definite (SPD) d× d matrices <cit.>. For practical robotics applications, it may be desirable to restrict the maximal velocity with which a robotic arm can move or the maximum force it can exert. This manifests in a trace constraint C>0 on Ş_++^d, resulting in a constrained manifold {M∈Ş_++^d:∑_i=1^dM_ii < C}. Following <cit.>, we parametrise this constraint via the Cholesky decomposition <cit.> and use the resulting setup to model the dataset presented in <cit.>.
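For concreteness, a minimal sketch of the resulting trace constraint under the Cholesky parametrisation is shown below; the flattened parameter layout is our own assumption for illustration.

import numpy as np

def trace_constraint(chol_params, C, d):
    # Constraint function for {M in S_++^d : trace(M) < C}, with M = L L^T and
    # L lower triangular; positive values indicate feasible points.
    # `chol_params` holds the d*(d+1)/2 lower-triangular entries of L
    # (an assumed layout for this sketch).
    L = np.zeros((d, d))
    L[np.tril_indices(d)] = np.asarray(chol_params, dtype=float)
    return C - np.sum(L ** 2)   # trace(L L^T) equals the sum of squared entries of L

# A Metropolis proposal in this parametrisation is accepted only if
# trace_constraint(proposal, C, d) > 0.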
Conformational modelling of protein backbones. Modelling the conformational ensembles of proteins is a data-scarce problem with a range of important applications in biotechnology and drug discovery <cit.>. In many practical settings, it may often be unnecessary to model the structural ensembles of an entire protein, as researchers are primarily interested in specific functional sites that are embedded in a structurally conserved scaffold <cit.>. Modelling the conformational ensembles of such substructural elements requires positional constraints on their endpoints to ensure that they can be accommodated by the remaining protein. Using the parametrisation and dataset presented in <cit.>, we formulate the problem of modelling the backbone conformations of a cyclic peptide of length N=6 as learning a distribution over the product of a polytope ℙ⊂^3 and the hypertorus 𝕋^4.
We quantify the empirical performance of different methods by evaluating the log-likelihood of a held-out test set and present the resulting performance metrics in <Ref>.
Again, we find that our Metropolis model outperforms the reflected approach by a statistically significant margin while training 7-8 times as fast.
Qualitative visual comparisons of samples from the true distribution, the trained diffusion models and the uniform distribution
are presented in <Ref>, with full univariate marginal and pairwise bivariate correlation plots postponed to <Ref>.
§.§ Modelling geospatial data within non-convex country borders
Motivated by the strong empirical performance of our approach on tasks with challenging convex constraints, we investigate its ability to approximate
distributions whose support is restricted to manifolds with highly non-convex boundaries—a setting that is intractable with existing approaches. To this end, we derived a geospatial dataset based on wildfire incidence rates within the continental United States (see <Ref> for full details) and trained a Metropolis diffusion model constrained by the corresponding country borders on the sphere 𝒮^2. A qualitative visual comparison of samples from the true distribution, our model, and the uniform distribution is presented in <Ref>, demonstrating that our approach is able to successfully model challenging multimodal and sparse distributions on constrained manifolds with highly non-convex boundaries.
§ CONCLUSION
Accurately modelling distributions on constrained Riemannian manifolds is a challenging problem with a range of impactful practical applications. In this work, we have proposed a mathematically principled and computationally scalable extension of the existing diffusion model methodology to this setting. Based on a Metropolisation of random walks in Euclidean spaces and on Riemannian manifolds, we have shown that our approach corresponds to a valid discretisation of the reflected Brownian motion, justifying its use in diffusion models. To demonstrate the practical utility of our method, we have performed an extensive empirical evaluation, showing that it outperforms existing constrained diffusion models on a range of synthetic and real-world tasks defined on manifolds with convex boundaries, including applications from robotics and protein design.
Leveraging the flexibility and simplicity of our method, we have also demonstrated that it extends beyond convex constraints and is able to successfully model distributions on manifolds with highly non-convex boundaries.
While we found our method to perform well across the synthetic and real-world applications we considered, we expect it to perform poorly on certain constraint geometries. For instance, the current implementation relies on an isotropic noise distribution which could impede its performance on exceedingly narrow constraint geometries, even with correspondingly small step sizes. In this context, an important direction of future research would be to investigate whether we can instead sample from more suitable distributions, e.g. a Dikin ellipsoid, while maintaining the simplicity and efficiency of the Metropolis approach.
More general topics of future work include the derivation of quantitative weak and mean square errors of our proposed discretisation scheme, as well as its application to more semantic constraints, for example with the objective of imposing sparsity in a basis or fixing the colour or style of an image.
§ OVERVIEW
In <Ref>, we recall some basic concepts of Riemannian
geometry which are key to defining discretisations of the reflected Brownian motion. In
<Ref>, we give some details on the reflection step
in reflected discretizations. In <Ref>, we prove the
convergence of the rejection and Metropolis discretizations to the true
reflected Brownian Motion. The geospatial dataset with non-convex constraints based on wildfire incidence rates in the continental United States is presented
<Ref>. All supplementary experimental details and empirical results are
gathered in <Ref>.
§ MANIFOLD CONCEPTS
In the following, we aim to introduce key concepts that underpin diffusion models on Riemannian manifolds, with a particular focus on notions relevant to the reflected Brownian motion that we build on in <Ref>. For a more thorough treatment with reference to reflected diffusion models, we refer to <cit.>. For a detailed presentation of smooth manifolds, see <cit.>.
A Riemannian manifold is a tuple (, ) with a smooth manifold and a metric that imbues the manifold with a notion of distance and curvature and is defined as a smooth positive-definite inner product on each of the tangent spaces of the manifold:
(p): T_pM̧×T_pM̧→.
The tangent space T_p of a point p on a manifold is an extension of the notion of tangent planes and can be thought of as the space of derivatives of scalar functions on the manifold at that point.
To establish how different tangent spaces relate to one another, we need to additionally introduce the concept of a connection. This is a map that takes two vector fields and produces a derivative of the first with respect to the second, typically written as ∇(X, Y) = ∇_X Y.
While there are infinitely many connections on any given manifold, the Levi-Cevita emerges as a natural choice if we impose the following two conditions:
* X · ((Y,Z)) = (∇_X Y, Z) + (Y, ∇_X Z),
* X, Y = ∇_X Y - ∇_Y X,
where ·, · is the Lie bracket. These conditions ensure that the connection is (i) metric-preserving and (ii) torsion-free, with the latter guaranteeing a unique connection and integrability on the manifold.
Using the metric and Levi-Cevita connection, we can define a number of key concepts:
Geodesic. Geodesics extend the Euclidean notion of `straight lines' to manifolds. They are defined as the unique path γ: (0,1) →M̧ such that ∇_γ'γ' = 0 and are the shortest path between two points on a manifold, in the sense that
L(γ) = ∫_0^1 √((γ(t))(γ'(t), γ(t)))dt
is minimal.
Exponential map. The exponential map on a manifold is given by the mapping between an element v̌∈ T_pM̧ of the tangent space at point p and the endpoint of the unique geodesic γ with γ(0) = p and γ'(0) = v̌.
Intersection. The intersection along a geodesic is the first point at which the geodesic intersects the boundary. We recall that the boundary is defined by f=0. We can define this via an optimisation problem: compute the minimum t > 0 such that exp_x(t z) is a root of f, i.e. f(exp_x(t z)) = 0. We refer to exp_x(t z) as the intersection point and to t as the intersection time associated with (x, z; f).
Parallel transport.
We say that a vector field X is parallel to a curve γ: (0,1)→M̧ if
∇_γ' X = 0,
where γ': (0,1)→T_γ(t)M̧.
For two points on the manifold p,q∈M̧ that are connected by a curve γ, and an initial vector X_0 ∈T_pM̧, there is a unique vector field X that is parallel to γ such that X(p) = X_0. This induces a map between the tangent spaces at p and q
τ_γ: T_pM̧→T_qM̧, which is referred to as the parallel transport of tangent vectors between p and q and satisfies the condition that for v̌, ǔ∈T_pM̧
(p)(v̌, ǔ) = (q)(τ_γ(v̌), τ_γ(ǔ)).
Reflection.
For an element v̌∈ T_pM̧ in the tangent space of the manifold at point p and a constraint characterised by its unit normal vector ň∈ T_pM̧, the reflection of v̌ in the tangent space is given by v̌' = v̌ - 2(v̌,ň) ň.
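A direct transcription of this reflection formula, assuming the metric at p is represented by a positive-definite matrix and that the normal is unit-length in that metric, is:

import numpy as np

def reflect(v, n, metric=None):
    # Reflect the tangent vector v across the hyperplane orthogonal to the
    # unit normal n: v' = v - 2 g(v, n) n. With metric=None the Euclidean
    # inner product is used; otherwise `metric` is the matrix of g at p.
    inner = float(v @ n) if metric is None else float(v @ metric @ n)
    return v - 2.0 * inner * n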
§ FULL REFLECTED DISCRETISATION
Here, we reproduce the central algorithm for the full discretisation of the reflected Brownian motion (<Ref>) derived for Euclidean models in <cit.> and for Reimannian models in <cit.>. Its central component is the Reflected Step Algorithm (<Ref>), which gives a generic computation for the reflection in any manifold.
Due to the need to balance speed and numerical instability issues around the boundary, an efficient practical implementation of the reflected step is highly non-trivial, even for simple polytopes in Euclidean space.
More complex geometries and boundaries make this problem significantly worse: a constraint on the trace of SPD matrices
under the log-Cholesky metric of <cit.> requires solving
complex non-convex optimisation problems for each sample at each discretised
sampling step in both the forward and reverse process. This motivates our work in this paper.
These problems motivated the development of our Metropolis approximation, which significantly simplifies the random walk. Instead of requiring the intersection, parallel transport and reflection, we simply need to be able to evaluate the constraint functions f_i. We highlight this simplicity in <Ref>.
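To make this concrete, here is a minimal sketch of one Metropolis step on a constrained manifold; the callables for the exponential map, the tangent Gaussian sampler and the constraint functions are assumptions about how the geometry is exposed in code.

import numpy as np

def metropolis_step(x, gamma, exp_map, sample_tangent, constraints, rng):
    # One step of the Metropolis approximation of the reflected Brownian
    # motion: propose exp_x(sqrt(gamma) * Z) with Z a tangent Gaussian and
    # stay at x whenever any constraint f_i is violated.
    v = np.sqrt(gamma) * sample_tangent(x, rng)
    proposal = exp_map(x, v)
    if all(f(proposal) > 0 for f in constraints):
        return proposal
    return x  # the rejected proposal keeps the chain at its current position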
§ CONVERGENCE TO THE REFLECTED PROCESS
K
In this note, we assume that = x ∈^dΦ(x) > 0
is compact, with Φ∈^2(^d, ). We have that
∂ = x ∈^dΦ(x) = 0. In addition, we
assume that for any x ∈∂, ∇Φ(x) = 1 and
that Φ is concave. The closure of is denoted . The assumption
that Φ is concave is only used in <Ref>-<ref> and
can be dropped. We consider it for simplicity.
Let (X̂_k^γ)_k ∈ given for any γ >0 and k ∈ by
X̂_0^γ = x ∈ and for
X̂_k+1^γ = X̂_k^γ + √(γ) Z_k^γ with Z_k^γ a
Gaussian random variable conditioned on
X̂_k^γ + √(γ) Z_k^γ∈. In practice, Z_k^γ
is sampled using rejection sampling. We define
^γ : _+ → given for any k ∈ by
^γ_kγ = X̂_k^γ and for any
t ∈kγ, (k+1)γ, ^γ_t = X̂_k^γ. Note
that (_t)_t ∈0,T is a (0,T, ) valued
random variable, where (0,T, ) is the space of
right-continuous with left-limit processes which take values in . We denote
^γ the distribution of (_t^γ)_t ∈0,T on
(0,T, ).
Our goal is to show the following theorem.
For any T ≥ 0, (_t^γ)_t ∈0,T weakly converges to
(_t)_t ∈0,T such that for any t ∈0,T
_t = x + _t - _t , _t = ∫_0^t 1__s ∈∂_s , _t = ∫_0^t (_s) _s .
In order to prove the result, we prove that the distribution of the Markov
chain seen as an element of (0,T, ) converges to a
solution of the Skorokhod problem (<ref>). In particular, we
first show that the limiting distribution satisfies a submartingale problem
following <cit.>. The transition from a
solution of a submartingale problem to a weak solution of the Skorokhod
problem is given by <cit.>
and <cit.>. In order to apply
<cit.>, we define an intermediate drift and
diffusion matrix, see (<ref>) and
(<ref>). To prove the theorem one needs to control
the drift and diffusion matrix inside and show that it converges to 0
and respectively. The technical part of the proof comes from the control
of the drift coefficient near the boundary. In particular, we show that if the
intermediate drift is large then we are close to the boundary and the
intermediate drift is pointing inward. To investigate the local properties of
the drift near the boundary we rely on the notion of tubular neighborhood, see
<cit.>.
Some key properties of the tubular neighborhood are stated in
<Ref>. We then establish a few technical lemmas about the
tail probability of some distributions in <Ref>. Controls
on the diffusion matrix and lower bounds on the probability of belonging in
are given in <Ref>. Properties of large drift terms are given
in <Ref>. The convergence of the drift and diffusion
matrix on compact sets is given in <Ref>. The
convergence of the boundary terms is investigated in
<Ref>. Finally, we conclude the proof in
<Ref>.
§.§ Properties of the tubular neighborhood
Using the results of <cit.>, we establish the existence of an open
set of (for the induced topology of ^d) satisfying several
important properties.
There exist ⊂ open and C≥ 1, r̅ >0 such that
for any γ∈0, γ̅ with =1 the following
properties hold:
* For any x ∈, there exist a unique x̅∈∂ and
α̅ >0 such that
x = x̅ + α̅∇Φ(x̅).
* For any α̅∈0,r̅ and x̅∈∂
such that x̅ + α̅∇Φ(x̅) ∈, let
x = x̅ + α̅∇Φ(x̅) and (x, γ) such
that x + √(γ)z ∈(x,γ) if
-α̅γ^-1/2≤α < r̅γ^-1/2 , v^2 ≤ ( αγ^1/2 + α̅)/(Cγ),
with z = α∇Φ(x̅) + v, with v ⊥∇Φ(x̅). Then (x, γ) ⊂.
* = x̅ + α∇Φ(x̅)x̅∈∂, α∈0,r̅ is open in .
* For any x ∈,
x + √(γ)z ∈∩(x,γ)^ then
α≥r̅γ^-1/2 or
v^2 ≥ ( αγ^1/2 + α̅)/(Cγ) and
αγ^1/2 + α̅≥ 0, with
z = α∇Φ(x̅) + v, with x̅ given in
<ref> and v ⊥∇Φ(x̅). .
* There exists R > 0 such that
x ∈d(x, ∂) ≤ 2R ⊂.
Let γ∈0, with = 1. First, note that for
any x̅∈∂, the normal space is given by
α∇Φ(x̅)α∈. Using this
result and <cit.> there exists r̃_0 > 0
such that
_0 = x̅ + α∇Φ(x̅)x̅∈∂, α∈-r̃_0, r̃_0⊂^d is open[This is the tubular neighborhood theorem which is
key to the rest of the proof.]. We have that for any
α∈-r_0, 0 and x̅∈∂
Φ(x̅+α∇Φ(x̅)) = Φ(x̅) + α∇Φ(x̅)^2 + ∫_0^1 ∇^2 Φ(x̅+tα∇Φ(x̅))(α∇Φ(x̅))^⊗ 2 t
≤α + C̃_0 α^2 < 0,
with r_0 = min(r̃_0, 1/(2C̃_0 )),
where we have used that Φ(x̅) = 0, ∇Φ(x̅)=1 and
defined
C̃_0 = sup∇^2 Φ(x̅+α∇Φ(x̅))x̅∈∂, α∈-r̃_0,r̃_0.
Reciprocally, we have for any α∈0,r_0 and x̅∈∂
Φ(x̅+α∇Φ(x̅)) = Φ(x̅) + α∇Φ(x̅)^2 + ∫_0^1 ∇^2 Φ(x̅+tα∇Φ(x̅))(α∇Φ(x̅))^⊗ 2 t ≥α - C_0 α^2,
where we have used that Φ(x̅) =0,
∇Φ(x̅)=1 and defined
C_0 = sup∇^2 Φ(x̅+α∇Φ(x̅))x̅∈∂, α∈-r_0,r_0. Let r_1 = min(r_0, 1/(2C_0)). Then,
_1 = x̅ + α∇Φ(x̅)x̅∈∂, α∈-r_1, r_1⊂^d is open and
_1 ∩ = x̅ + α∇Φ(x̅)x̅∈∂, α∈0, r_1 .
In what follows, we define = _1 ∩. Note that is
open for the induced topology and that ∂⊂. In
particular, ∂ is compact, ^ is closed and
∂∩^ = ∅. Hence, there exists r > 0
such that d(∂, ^) ≥ 4r. Without loss of
generality we can assume that r ≤ 1/2. We also have
x ∈d(x, ∂) ≤ 2r ⊂. The
proof of <ref> follows from the definition of _0. In the rest of
the proof, we define
C^1/2 = 2max(1, sup∇^2 Φ(x̅ + u) x̅∈∂, u^2 ≤ r(r+1)) , r̅ = min(1/(2C^1/2), r/2).
Let us prove <ref>. Consider
x + √(γ) z ∈(x,γ) with (x,γ) given by
(<ref>) and x = x̅ + α̅∇Φ(x̅) and
z = α∇Φ(x̅) + v with v ⊥∇Φ(x̅).
In particular, we recall that we have
-α̅γ^-1/2≤α < r̅γ^-1/2 , v^2 ≤ ( αγ^1/2 + α̅)/(Cγ) .
This implies that
α̅ + √(γ)α≤ 2 r̅ , γv^2 ≤ 2 r̅ / C .
First, using that C ≥1, ∇Φ(x̅)=1,
(<ref>) and (<ref>), we have
x+√(γ)z - x̅^2 = (α̅ + √(γ)α)^2 + γv^2 ≤ r^2 + r/C ≤ r(r+1) .
Then, we have that
Φ(x+√(γ)z) = Φ(x̅) + α̅ + √(γ)α + ∫_0^1 ∇^2 Φ(x̅+t(x+√(γ)z-x̅))(x+√(γ)z-x̅)^⊗ 2 t
≥α̅ + √(γ)α - (C^1/2/2) ((α̅ + √(γ)α)^2 + γv^2) ,
where we recall that
C^1/2 = 2max(1, sup∇^2 Φ(x̅ + u) x̅∈∂, u^2 ≤ r(r+1)) , r̅ = min(1/(2C^1/2), r/2).
First, using that r≤ 1/2 and (<ref>), we have
α̅ + √(γ)α≤ 2r ≤ 1. Since,
v^2 ≤ (α̅ +√(γ)α)/(Cγ) and we have
that v^2 < 1/(C γ). Let
P(X) = X - (C^1/2/2)X^2 - (C^1/2/2)γv^2. We have that P(x) ≥ 0 if and
only if x ∈x_min, x_max with
x_min = (1 -(1 -Cγv^2)^1/2)/C^1/2, x_max = (1 +(1 -Cγv^2)^1/2)/C^1/2.
Using that for any t ∈0,1, (1-t)^1/2≥ 1 - t we have that
x_min≤γ C v^2/2 , x_max≥ 1/C^1/2 .
Since v^2 ≤ (√(γ)α + α̅)/(γ C),
we have that α̅ + √(γ)α≥ x_min. In addition,
using that α̅ + √(γ)α≤ 2 r̅≤ 1/C^1/2≤ x_max,
we get that P(α̅ + √(γ)α) ≥ 0 and therefore
x + √(γ)z ∈ since Φ(x+ √(γ)z) ≥ 0. This
concludes the proof of <ref>. Note that the condition
α≥ -γ^-1/2α̅ is implied by the condition
v^2 ≤ (√(γ)α + α̅)/(γ C).
Using that
⊂{x ∈^d : d(x,∂) ≤ 2r}⊂, <ref> is a direct consequence of <cit.>.
Next,
we prove <ref>. Let
x + √(γ)z ∈∩(x,γ)^. If
α <-α̅γ^-1/2 then since Φ is concave, we have
Φ(x+√(γ)z) = Φ(x̅) + α̅ + √(γ)α + ∫_0^1 ∇^2 Φ(x̅ + t(x+√(γ)z-x̅))(x+√(γ)z-x̅)^⊗ 2 t < 0 ,
where we have used that Φ(x̅) = 0. This is absurd, hence either
α≥r̅γ^-1/2 or
v^2 ≥ ( αγ^1/2 + α̅)/(Cγ) and
αγ^1/2 + α̅≥ 0, which concludes the proof. The
proof of <ref> is similar to the proof that
x ∈d(x, ∂) ≤ 2r ⊂.
The main message of <Ref> is that, using <Ref>-<ref>, if we move in the direction of ∇Φ(x̅) (the inward normal) with magnitude α then we are allowed to move in the orthogonal directions with magnitude α^1/2. In the next paragraph, we discuss this fact in detail and show why it is necessary for the rest of our study.
The necessity of <Ref>-<ref>.
At first sight, one may wonder whether the statement of <Ref>-<ref> could be simplified. In particular, it would be simpler to replace this statement with the following: for any
α̅∈0,r̅ and x̅∈∂ such that
x̅ + α̅∇Φ(x̅) ∈, let
x = x̅ + α̅∇Φ(x̅) and (x, γ) such
that x + √(γ)z ∈(x,γ) if
-α̅γ^-1/2≤α < r̅γ^-1/2 , v^2 ≤ ( αγ^1/2 + α̅)^2/(Cγ),
with z = α∇Φ(x̅) + v, with
v ⊥∇Φ(x̅). Then (x, γ) ⊂. Note
that v^2 ≤ ( αγ^1/2 + α̅)/(Cγ)
is replaced by
v^2 ≤ ( αγ^1/2 + α̅)^2/(Cγ),
see <Ref> for an illustration. However, in that case
<Ref>-<ref> becomes: in addition, if
x + √(γ)z ∈∩(x,γ)^ then
α≥ rγ^-1/2 or
v^2 ≥ ( αγ^1/2 + α̅)^2/(Cγ) and
αγ^1/2 + α̅≥ 0.
In what follows, when controlling the properties of large drift, see the
proof of <Ref> and the proof of
<Ref>, we need to control quantities of the form
x + √(γ) Z ∈(x, γ)^∩ /√(γ)[The division by √(γ) comes
from the definition of the intermediate drift
(<ref>).] Using the original
<Ref>-<ref> it is possible to show that this
quantity is bounded. However, if one uses the updated version of
<Ref>-<ref> then one needs to show that there
exists M ≥ 0 and >0 such that for any
γ∈0, (here we have assumed that
α̅ = 0, i.e. x ∈∂ for simplicity)
∫_0^r/γ^-1/2∫_∇Φ(x̅)^⊥1_v^2 ≥α^2φ(v) φ(α) v α≤ M √(γ) ,
which is absurd.
§.§ Technical lemmas
We start with a few technical lemmas which will allow us to control some
Gaussian probabilities outside of a compact set. We denote
Ψ: _+ ×→0,1 such that for any k ∈,
Ψ(·, k) is the tail probability of a χ-squared random variable
with parameter k, i.e. for any k ∈ and t ≥ 0 we have
Ψ(t, k) = Z^2 ≥ t,
with Z a Gaussian random variable in ^k with zero mean and identity
covariance matrix. We will make extensive use of the following lemma which is a
direct consequence of <cit.>.
For any k ∈ and t ∈_+ with t ≥ 5k, Ψ(t, k) ≤exp[-t/5].
Let k ∈. First, note that for any x ≥ k, we have that
k + 2 (k x)^1/2 + 2x ≤ 5x. Combining this result and <cit.>, we have that for any x ≥ k
X^2 ≥ 5 x≤exp[-x] ,
with X a ^k-valued Gaussian random variable with zero mean and
identity covariance matrix. This concludes the proof upon letting t = 5x.
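The bound is easy to check numerically; the following snippet (an independent sanity check, not part of the proof) compares the chi-squared survival function against exp(-t/5) on a grid of t ≥ 5k.

import numpy as np
from scipy import stats

# Numerical sanity check of Psi(t, k) <= exp(-t/5) for t >= 5k, where
# Psi(., k) is the survival function of a chi-squared variable with k
# degrees of freedom.
for k in (1, 2, 5, 20):
    t = np.linspace(5 * k, 50 * k, 200)
    assert np.all(stats.chi2.sf(t, df=k) <= np.exp(-t / 5))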
Let φ: ℝ^p →ℝ_+ be given for any u ∈ℝ^p by φ(u) = (2π)^-p/2exp[-‖u‖^2/2] [In the rest of the supplementary, we never specify the dimension p ∈ℕ, which can be deduced from the variable.], i.e. the density of a Gaussian random variable with zero mean and identity covariance. While <Ref> appears technical, it will be central to providing quantitative upper bounds on the rejection probability, see <Ref> for instance.
For any k ∈, α_0>0, β_0 ∈0,1 and δ > 0 we have
ψ(δ) = sup∫_0^+∞Ψ(α_0 t/δ, k)^β_0φ(t - t_0/δ) tt_0 ≥ 0≤ C_0 δ ,
with C_0 = 5(2)^-1/2(k+1)/(α_0 β_0).
Let k ∈, α_0 >0, β_0 ∈0,1 and δ > 0. Let
t_δ = 5 k δ /α_0. Note that if t ≥ t_δ then,
α_0 t /δ≥ 5k. In addition, we have
∫_0^+∞Ψ(α_0 t/δ, k)^β_0φ(t - t_0/δ) t ≤ (2 )^-1/2∫_0^+∞Ψ(α_0 t/δ, k)^β_0 t
≤ (2 )^-1/2∫_0^t_δΨ(α_0 t/δ, k)^β_0 t + (2 )^-1/2∫_t_δ^+∞Ψ(α_0 t/δ, k)^β_0 t .
Using that for any w >0,
∫_0^+∞exp[-w t] t ≤ (1/w), that for any
u ≥ 0, Ψ(u, k) ≤ 1 and that if u ≥ 5k,
Ψ(u,k) ≤exp[-u/5], we get for any t_0 ≥ 0
∫_0^+∞Ψ(α_0 t/δ, k) φ(t - t_0/δ) ≤ (2)^-1/2 [5k δ/α_0 + 5 δ/(α_0 β_0)] ≤ (5(2)^-1/2(k+1)/(α_0β_0)) δ ,
which concludes the proof.
Finally, we have the following lemma, which is similar to
<Ref> but will be used to control quantities related to
the norm.
For any k ∈, α_0>0, β_0 ∈0,1 and δ > 0 we have
ψ(δ) = ∫_0^+∞Ψ(α_0 t/δ, k)^β_0 t φ(t) t≤ C_0 δ^2 ,
with C_0 = 25(2)^-1(k^2+1)/(α_0β_0)^2.
Let k ∈, α_0 >0, β_0 ∈0,1 and δ > 0.
Let t_δ = 5 k δ /α_0. Note that if t ≥ t_δ then,
α_0 t /δ≥ 5k. In addition, we have
∫_0^+∞Ψ(α_0 t/δ, k)^β_0 t φ(t) t ≤ (2 )^-1∫_0^t_δΨ(α_0 t/δ, k)^β_0 t t + (2)^-1∫_t_δ^+∞Ψ(α_0 t/δ, k)^β_0 t t .
In addition, using that if u ≥ 5k then Ψ(u,k) ≤exp[-u/5], we get
(2)^-1∫_t_δ^+∞Ψ(α_0 t/δ, k)^β_0 t t ≤ (2)^-1∫_0^+∞exp[-α_0β_0 t /(5δ)] t t ≤ (2)^-1 25 δ^2 / (α_0 β_0)^2 .
Finally, using that for any u ≥ 0, Ψ(u,k)≤1, we have
(2 )^-1∫_0^t_δΨ(α_0 t/δ, k)^β_0 t t≤ (2 )^-1 25 k^2 δ^2 / α_0^2 ,
which concludes the proof.
§.§ Lower bound on the inside probability and control of moments of order two and higher
Lower bound on the inside probability. We begin with the following
lemma which controls the expectation of 1 + Z outside of
(x,γ). We recall that is defined in
<Ref>-<ref>.
Let = 1. Let x ∈, Z ∈∼N(0, ) and
γ∈0, γ̅ then we have
max(1_x + √(γ) Z ∈∩(x,γ)^, ⟨ Z, ∇Φ(x̅)⟩1_x + √(γ) Z ∈∩(x,γ)^) ≤ψ(γ) ,
with ψ: _+ →_+ such that
lim sup_t → 0ψ(t)/t^1/2 < +∞.
Let r̅ > 0 given by <Ref>. First, we have that
∫_∫_^d-1 (1 + α + v) 1_α≥r̅ /γ^1/2φ(α) φ(v) α v
≤ d ∫_ (1 + α) 1_α≥r̅ /γ^1/2φ(α) α≤ d (Ψ(r̅^2/γ,1) + exp[-r̅^2/(2γ)]) .
Second, using <Ref>, we have that
∫_∫_^d-11_v^2≥ (α̅+√(γ)α)/(Cγ)1_α̅+√(γ)α≥ 0φ(α) φ(v) α v
≤∫_1_α̅+√(γ)α≥ 0Ψ((α̅+√(γ)α)/(Cγ), d-1) φ(α) α
≤∫_0^+∞Ψ(α/Cγ^1/2, d-1) φ(α-α̅/γ^1/2) α≤Ψ_1(γ^1/2) .
Second, using <Ref>, we have that
∫_∫_^d-1α1_v^2≥ (α̅+√(γ)α)/(Cγ)1_α̅+√(γ)α≥ 0φ(α) φ(v) α v
= ∫_αΨ((α̅+√(γ)α)/(Cγ),d-1) 1_α̅+√(γ)α≥ 0φ(α) α
≤∫_0^+∞Ψ(α/Cγ^1/2,d-1) αφ(α) α≤Ψ_2(γ^1/2).
Note that we have
lim sup_γ→ 0Ψ_2(γ^1/2) + Ψ_1(γ^1/2) <
+∞. We conclude upon combining (<ref>),
(<ref>) and (<ref>) with
<Ref>-<ref> and the fact that Φ(x̅) =1.
The following lemma allows us to lower bound the probability that x+√(γ)Z belongs to the domain, uniformly w.r.t. x.
There exists >0 such that for any γ∈0,
and for any x ∈, γ∈0, γ̅ and
Z ∼N(0, ) we have
1_x + √(γ) Z ∈≥ 1/4 .
Let γ∈0,.
If x ∉ then B(x,2R) ⊂ using
<Ref>-<ref> and therefore
1_x + √(γ) Z ∈≥ 1/4 for
> 0 small enough. Now, assume that x ∈. Using
<Ref>, we have that
1_x + √(γ) Z ∈∩(x,
γ)^≤ψ(γ). In addition, using
<Ref>-<ref>, we have that for any γ >0
1_x + √(γ) Z ∈ ≥1_x + √(γ) Z ∈(x, γ)
≥∫_-α̅γ^-1/2^r γ^-1/2∫_∇Φ(x̅)^⊥1_v^2≤ (α̅+γ^1/2α)/(Cγ)φ(α)φ(v) α v
≥∫_-α̅γ^-1/2^r γ^-1/2 (1 - Ψ((α̅+γ^1/2α)/(Cγ),d-1)) φ(α) α
≥ (1/2) - Ψ(r^2/γ, 1) - ∫_-α̅γ^-1/2^+∞Ψ((α̅+γ^1/2α)/(Cγ),d-1) φ(α) α.
Hence, using <Ref> and
<Ref>, there exists >0 such that for any
γ∈0,,
Ψ(r^2/γ, 1) + ∫_0^+∞Ψ(α/(Cγ^1/2),d)
φ(α-γ^1/2α̅) α≤ 1/4, which
concludes the proof.
Note that the result of <Ref> can be improved to 1/2 - ε for any ε > 0. In particular, this result tells us that for γ > 0 small enough, the domain looks like a half-space from the point of view of the Gaussian with variance γ centered on ∂.
Bound on moments of order two and higher.
In what follows, we define for any γ >0,
Δ^γ: →_+ given for any x ∈ by
Δ^γ(x) = (1/γ) ∫_^d1_x + √(γ) z ∈√(γ) z^4 φ(z) z / ∫_^d1_x + √(γ) z ∈φ(z) z.
We have lim_γ→ 0supΔ^γ(x)x ∈ = 0.
Let >0 given by <Ref>. Let x ∈ and
γ∈0,. We have using <Ref>
∫_^d1_x + √(γ) z ∈φ(z) z ≥ 1/4 .
We also have that
(1/γ) ∫_^d1_x + √(γ) z ∈√(γ) z^4 φ(z) z ≤ 3 γ d^2 .
Therefore, we get that for any γ∈0,,
Δ^γ(x) ≤ 12 γ d^2, which concludes the proof.
In what follows, we define for any γ >0,
Σ̂^γ: →S_d^+() given for any x ∈ by
Σ̂^γ(x) = ∫_^d1_x + √(γ) z ∈ z ⊗ z φ(z) z / ∫_^d1_x + √(γ) z ∈φ(z) z.
There exists >0 such that for any x ∈ and
γ∈0, we have
Σ̂^γ(x)≤ 4d .
Let x ∈ and >0 given by
<Ref>. For any γ∈0,, we
have using <Ref>
∫_^d1_x + √(γ) z ∈φ(z) z ≥ 1/4 .
We also have that
∫_^d1_x + √(γ) z ∈z^2 φ(z) z ≤ d ,
which concludes the proof.
§.§ Properties of large drift terms
Finally, we define for any γ >0,
b̂^γ: →^d given for any x ∈ by
b̂^γ(x) = γ^-1/2∫_^d1_x + √(γ) z ∈ z φ(z) z / ∫_^d1_x + √(γ) z ∈φ(z) z.
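These quantities are straightforward to estimate numerically, which is a useful sanity check on the statements that follow; the sketch below is for the Euclidean setting with a domain {Φ > 0} and uses plain Monte Carlo.

import numpy as np

def drift_diffusion_estimate(x, gamma, phi, n_samples=200_000, seed=0):
    # Monte Carlo estimates of a^gamma(x), b_hat^gamma(x) and Sigma_hat^gamma(x):
    # expectations of 1, z and z z^T under a standard Gaussian restricted to
    # the event {x + sqrt(gamma) z inside the domain}.
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_samples, x.shape[0]))
    inside = np.array([phi(x + np.sqrt(gamma) * zi) > 0 for zi in z])
    z_in = z[inside]
    a_hat = inside.mean()                                   # acceptance probability
    b_hat = z_in.mean(axis=0) / np.sqrt(gamma)              # intermediate drift
    sigma_hat = (z_in[:, :, None] * z_in[:, None, :]).mean(axis=0)  # diffusion matrix
    return a_hat, b_hat, sigma_hat

# On the unit ball {x : 1 - ||x||^2 > 0}, b_hat is close to zero at the centre
# and points inward (along the normal) for x near the boundary, while
# sigma_hat stays close to the identity.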
First, we show that away from the boundary the drift b̂^γ converges to zero.
There exists >0 such that for any γ∈0,,
r > 0 and x ∈ such that d(x, ∂) ≥ r we have
b̂^γ(x)≤ 2d Ψ(r/γ,d)^1/2/γ^1/2.
Let x ∈ and >0 given by
<Ref>. For any γ∈0, we have
using <Ref>
∫_^d1_x + √(γ) z ∈φ(z) z ≥ 1/4 .
We also have that
∫_^d1_x + √(γ) z ∈ z φ(z) z ≤∫_^d1_z≤ r/γ^1/2 z φ(z) z + ∫_^d1_z≥ r/γ^1/2zφ(z) z
≤2 ∫_^d1_z≥ r/γ^1/2zφ(z) z≤ 2d Ψ(r/γ,d)^1/2/γ^1/2 ,
which concludes the proof.
We have the following corollary.
There exists >0 such that for any δ >0 there exists
M_δ >0 such that for any γ∈0, and
x ∈, b̂^γ(x)≥ M_δ, then
Φ(x) ≤δ.
Let >0 given by <Ref>. Let
f: _+ →_+ given for any r > 0 by
f(r) = supγ > 0Ψ(r/γ,1)^1/2/γ^1/2. We
have that f is non-increasing and lim_r → 0 f(r)=+∞. Let
δ > 0 and M_δ =2df(δ/C) with
C = sup∇Φ(x)x ∈. Let
γ∈0, and x ∈ such that
b̂^γ(x)≥ M_δ then using
<Ref> we have that
d(x, ∂) ≤δ / C. Let x̅∈∂ such that
x - x̅ = d(x, ∂). We have
Φ(x) ≤Φ(x̅) + ∫_0^1 ⟨∇Φ(x̅ + t(x-x̅)), x - x̅⟩ t ≤δ,
which concludes the proof.
For ease of notation, for any γ >0, we define
b̅^γ = γ^1/2b̂^γ, the renormalized version
of the drift. First, we have the following result which will ensure that the
drift projected on the normal component does not vanish.
There exists > 0 such that for any γ∈0, and x ∈ we have
⟨b̅^γ(x), ∇Φ(x̅) ⟩≥b̅^γ(x) - ψ(γ) ,
with ψ: _+ →_+ such that lim sup_γ→ 0ψ(γ)/√(γ) <+∞.
Let x ∈ and >0 given by
<Ref>. For any γ∈0, we have
using <Ref>
∫_^d1_x + √(γ) z ∈φ(z) z ≥ 1/4 .
In addition, we have
∫_^d1_x + √(γ)z ∈⟨ z, ∇Φ(x̅) ⟩φ(z) z ≥∫_^d1_x + √(γ)z ∈(x,γ)⟨ z, ∇Φ(x̅) ⟩φ(z) z
-∫_^d1_x + √(γ)z ∈∩(x,γ)^⟨ z, ∇Φ(x̅) ⟩φ(z) .
Using <Ref>, we get that
∫_^d1_x + √(γ)z ∈⟨ z, ∇Φ(x̅) ⟩φ(z) z≥∫_^d1_x + √(γ)z ∈(x,γ)⟨ z, ∇Φ(x̅) ⟩φ(z) z - ψ(γ) .
Let {e_i }_i=1^d-1 a basis of ∇Φ(x̅)^⊥.
Using <Ref>-<ref>, we have that for any i ∈{1, …, d-1}
∫_^d1_x + √(γ)z ∈(x,γ)⟨ z, e_i ⟩φ(z) z = ∫_-α̅/γ^1/2^r/γ^1/2∫_∇Φ(x̅)^⊥1_v^2 ≤ (γ^1/2α + α̅)/γ⟨ v, e_i ⟩φ(v) φ(α) v α.
Hence, combining this result and the Cauchy-Schwarz inequality we have for any i ∈{1, …, d-1}
(∫_^d1_x + √(γ)z ∈(x,γ)⟨ z, e_i ⟩φ(z) z)^2 = (∫_-α̅/γ^1/2^r/γ^1/2∫_∇Φ(x̅)^⊥1_v^2 ≥ (γ^1/2α + α̅)/γ⟨ v, e_i ⟩φ(v) φ(α) v α)^2
≤∫_∇Φ(x̅)^⊥⟨ v, e_i ⟩^2 φ(v) v (∫_-α̅/γ^1/2^r/γ^1/2Ψ((α̅ + αγ^1/2)/γ, d-1)^1/2φ(α) α )^2
≤ (∫_-α̅/γ^1/2^r/γ^1/2Ψ((α̅ + αγ^1/2)/γ, d-1)^1/2φ(α) α )^2 .
Hence, using <Ref>, we get that
∑_i=1^d-1( ∫_^d1_x + √(γ)z ∈(x,γ)⟨ z, e_i ⟩φ(z) z)^2≤ (d-1) ψ^2(γ) ,
with ψ given by <Ref> with β_0=1/2. Therefore, we get that
(∫_^d1_x + √(γ)z ∈(x,γ)⟨ z, ∇Φ(z̅) ⟩φ(z) z)^2
= (∫_^d1_x + √(γ) z ∈φ(z) z)^2 b̅^γ(x)^2 - ∑_i=1^d-1( ∫_^d1_x + √(γ)z ∈(x,γ)⟨ z, e_i ⟩φ(z) z)^2
≥ (∫_^d1_x + √(γ) z ∈φ(z) z)^2 b̅^γ(x)^2 - ψ(γ)^2.
We conclude the proof upon using that for any a,b ≥ 0, (a+b)^1/2≤ a^1/2 + b^1/2 and (<ref>).
We are now ready to state the following lower bound on the drift.
There exist >0, M ≥ 0 and c >0 such that for any
x ∈ and γ∈0, if
b̂^γ(x)≥ M then x ∈ and
min(⟨b̂^γ(x), ∇Φ(x) ⟩, ⟨b̂^γ(x), ∇Φ(x̅) ⟩) ≥ c b̂^γ(x) .
Let >0 given by <Ref> and
M_0 = 4 supψ(γ)/γ^1/2γ∈0,. In addition, let c = 1/4. Using
<Ref> and <Ref>-<ref>,
there exists M_1 ≥ 0 such that for any any x ∈, if
b̂^γ(x)≥ M_1 then x ∈ and
x = x̅ + α∇Φ(x̅) with α≤ 1/(4C) and
C = sup∇^2 Φ(x)x ∈. We
denote M = max(M_0, M_1). Let γ∈0, and
x ∈ such that b̂^γ(x)≥ M. Using
<Ref>, we have that
⟨b̂^γ(x), ∇Φ(x̅) ⟩≥b̂^γ(x) - ψ(γ)/γ^1/2 .
Using that ψ(γ)/γ^1/2≤ M/2 ≤b̂^γ(x)/2, we have
⟨b̂^γ(x), ∇Φ(x̅) ⟩≥ (1/2) b̂^γ(x) .
Since x - x̅≤α≤ 1/(4C) we have
⟨b̂^γ(x), ∇Φ(x) ⟩≥ (1/2 - C α )
b̂^γ(x)≥b̂^γ(x) / 4, which concludes the
proof.
§.§ Convergence on compact sets
In this section, we show the convergence of the drift and diffusion matrix on
compact sets. We recall that does not include its boundary
∂.
For any compact set ⊂ and >0, there exists
>0 such that for any γ∈0, we have for any
x ∈
b̂^γ(x)≤ , Σ̂^γ(x) - ≤ .
Let ⊂ be a compact set and γ >0. Since
∩∂ = ∅, there exists r > 0 such that for any
x ∈, d(x, ∂) > r. Therefore, we have that for any x ∈
b̂^γ(x) = γ^-1/2∫_x + √(γ)z ∈ z φ(z) z / ∫_x + √(γ)z ∈φ(z) z .
In addition, using the Cauchy-Schwarz inequality we have
∫_x + √(γ)z ∈ z φ(z) z ≤∫_^d z φ(z) z + ∫_^zφ(z) z
≤∫_^d1_z≥ r / γ^1/2zφ(z) z ≤√(d)Ψ(r^2/γ, d)^1/2 .
Using <Ref> and <Ref>, there
exists _0 >0 such that for any γ∈0, _0 we
have that for any x ∈
b̂^γ(x)≤ 4dΨ(r^2/γ,1)^1/2/γ^1/2≤ ,
which concludes the first part of the proof.
Similarly, we have that for any x ∈
∫_x + √(γ)z ∈ (z ⊗ z - ) φ(z) z ≤∫_^d (z⊗ z - ) φ(z) z + ∫_^zφ(z) z
≤∫_^d1_z≥ r / γ^1/2z ⊗ z - φ(z) z
≤√(2)(1 + 3d^2)^1/2Ψ(r^2/γ, d)^1/2 .
Using <Ref> and <Ref>, there
exists _1 >0 such that for any γ∈0, _1, we
have that for any x ∈
Σ̂^γ(x) - ≤ 4 √(2)(1 + 3d^2)^1/2Ψ(r^2/γ, 1)^1/2≤ ,
which concludes the proof upon letting = min(_0, _1).
§.§ Convergence on the boundary
Finally, we investigate the behavior of the diffusion matrix and the drift at the boundary. First, we show that the diffusion matrix is lower bounded near the boundary. Second, we show that the renormalized drift converges to the inward normal.
There exist c > 0 and >0 such that for any
γ∈0,, u ∈^d and x ∈ we have
⟨ u, Σ̂^γ(x) u ⟩≥ c u^2 .
In particular, there exist r, >0 such that for any
γ∈0, and x ∈ with
d(x, ∂) ≤ r
⟨∇Φ(x), Σ̂^γ(x) ∇Φ(x) ⟩≥ .
First, we show (<ref>). Let x ∈. We have for any u ∈^d
⟨ u, Σ̂^γ(x) u ⟩ = ∫_^d1_x +√(γ)z ∈⟨ z, u⟩^2 φ(z) z / ∫_^d1_x +√(γ)z ∈ z
≥∫_^d1_x+√(γ) z ∈(x,γ)⟨ z, u⟩^2 φ(z) z .
For any u ∈^d, let α_u = ⟨ u, ∇Φ(x̅) ⟩.
Using <Ref>-<ref> we have for any u ∈^d
∫_^d1_x+√(γ) z ∈(x,γ)⟨ z, u⟩^2 φ(z) z
= ∫_-α̅/γ^1/2^r/γ^1/2∫_∇Φ(x̅)^⊥ (⟨ u, v ⟩ + αα_u)^2
1_v^2 ≤ (αγ^1/2 + α̅)/γφ(v) φ(α) v α
≥∫_0^r/γ^1/2∫_∇Φ(x̅)^⊥ (⟨ u, v ⟩^2 + α^2 α_u^2) 1_v^2 ≤ (αγ^1/2 + α̅)/γφ(v) φ(α) v α
≥α_u^2 ∫_0^r/γ^1/2α^2 φ(α) α + ∫_-α̅/γ^1/2^r/γ^1/2∫_∇Φ(x̅)^⊥⟨ u, v ⟩^2 1_v^2 ≤ (αγ^1/2 + α̅)/γφ(v) φ(α) v α .
Using Cauchy-Schwarz inequality, we have
∫_0^r/γ^1/2α^2 φ(α) α = (1/2) - ∫_r/γ^1/2^+∞α^2 φ(α) α≥ (1/2) - 3 Φ(r^2/γ,1)^1/2.
In addition, using the Cauchy-Schwarz inequality, we have that
∫_-α̅/γ^1/2^r/γ^1/2∫_∇Φ(x̅)^⊥⟨ u, v ⟩^2 1_v^2 ≤ (αγ^1/2 + α̅)/γφ(v) φ(α) v α
= ∫_∇Φ(x̅)^⊥⟨ u, v ⟩^2 φ(v) v ∫_-α̅/γ^1/2^r/γ^1/2φ(α) α
- ∫_-α̅/γ^1/2^r/γ^1/2∫_∇Φ(x̅)^⊥⟨ u, v ⟩^2 1_v^2 ≥ (αγ^1/2 + α̅)/γφ(v) φ(α) v α
≥ (u^2 - α_u^2) ((1/2) - Φ(r^2/γ,1))
- √(3)(d-1) u^2 ∫_0^+∞Φ(α/γ^1/2,d-1)^1/2φ(α - α̅/γ^1/2) α .
Combining this result, (<ref>), (<ref>) and <Ref> there exists > 0 such that for any γ∈0, and u ∈^d
∫_^d1_x+√(γ) z ∈(x,γ)⟨ z, u⟩^2 φ(z) z≥ (1/4) u^2 ,
which concludes the proof of (<ref>). Finally, using
<Ref>-<ref>, we have that for any x ∈ if
d(x, ∂) ≤ R then x ∈. Let r = min(R, 1/(2C)) with
C = sup∇^2 Φ(x)x ∈. We have
that for any x ∈ such that d(x, ∂)≤ r
∇Φ(x)≥∇Φ(x̅_0) - C r ≥ 1/2 ,
where x̅_0 is such that x - x̅_0≤ r and
x̅_0 ∈∂. Combining this result and
(<ref>) concludes the proof upon letting
= 1/16.
Finally, we investigate the behavior of the normalized drift near the boundary.
For any x̅_0 ∈∂ and >0, there exist , r, M >0 such that for any
x ∈ and γ∈0, with
x - x̅_0≤ r and b̂^γ(x)≥ M
b̂^γ(x)/⟨b̂^γ(x), ∇Φ(x) ⟩ - ∇Φ(x̅_0)≤ .
Let be given by <Ref>. Let ψ given by
<Ref> and
M_0 = supψ(γ)/γ^1/2γ∈0,
< +∞. Let M = 16 M_0 / (c^1/2) with c given in
<Ref>. Let R > 0 given by
<Ref>-<ref> such that for any x ∈
with d(x, ∂) there exist x̅∈∂ and
α∈0, c /(4C) such that
x = x̅ + α∇Φ(x̅) with
C = sup∇^2 Φ(x)x ∈ and c
given in <Ref>. Let
r = min(r̅, c /4, R) and x ∈M with
x - x̅_0≤ r. First, since
d(x, ∂) ≤ R, there exist x̅∈∂ and
α∈0,/(4C) such that
x = x̅ + α∇Φ(x̅). Therefore, we get that
x̅ - x̅_0≤/(2C) and therefore
∇Φ(x̅_0) - ∇Φ(x̅)≤ /2.
In addition, we have that
b̂^γ(x)/⟨b̂^γ(x), ∇Φ(x) ⟩ - b̂^γ(x)/⟨b̂^γ(x), ∇Φ(x̅) ⟩
≤b̂^γ(x)^2 ∇Φ(x) - ∇Φ(x̅) / (⟨b̂^γ(x), ∇Φ(x) ⟩⟨b̂^γ(x), ∇Φ(x̅) ⟩) .
Using <Ref>, we get that
b̂^γ(x)/⟨b̂^γ(x), ∇Φ(x) ⟩ - b̂^γ(x)/⟨b̂^γ(x), ∇Φ(x̅) ⟩≤/4 .
In what follows, we show that
b̂^γ(x)/⟨b̂^γ(x), ∇Φ(x̅) ⟩ - ∇Φ(x̅)^2 ≤/2 .
In particular, we show that for any u ∈∇Φ(x̅)^⊥ with u=1,
⟨b̂^γ(x), u ⟩ ^2 ≤ (/16) ⟨b̂^γ(x), ∇Φ(x̅) ⟩^2.
Assuming (<ref>), letting
u = (b̂^γ(x) - ⟨b̂^γ(x), ∇Φ(x̅⟩)) /
(b̂^γ(x)^2 - ⟨b̂^γ(x), ∇Φ(x̅⟩^2)^1/2 and using that
b̂^γ(x) = ⟨b̂^γ(x), u⟩ u + ⟨b̂^γ(x), ∇Φ(x̅) ⟩∇Φ(x̅) we have
b̂^γ(x) / ⟨b̂^γ(x), ∇Φ(x) ⟩ - ∇Φ(x̅) ≤b̂^γ(x) / ⟨b̂^γ(x), ∇Φ(x̅) ⟩ - ∇Φ(x̅)
+ b̂^γ(x) / ⟨b̂^γ(x), ∇Φ(x) ⟩ - b̂^γ(x) / ⟨b̂^γ(x), ∇Φ(x̅)⟩
≤⟨b̂^γ(x), u⟩ / ⟨b̂^γ(x), ∇Φ(x̅) ⟩ + / 4 ≤ / 2 ,
which concludes the proof. Let u ∈∇Φ(x̅)^⊥ with
u=1 and {e_i }_i=1^d-1 an orthonormal basis of
∇Φ(x̅)^⊥. There exist {a_i}_i=1^d-1 such that
∑_i=1^d-1 a_i^2 = 1 and u = ∑_i=1^d-1 a_i e_i. Using
<Ref>-<ref>, we have that for any
i ∈{1, …, d-1}
∫_^d1_x + √(γ)z ∈(x,γ)⟨ z, e_i ⟩φ(z) z = ∫_-α̅/γ^1/2^r/γ^1/2∫_∇Φ(x̅)^⊥1_v^2 ≤ (γ^1/2α + α̅)/γ⟨ v, e_i ⟩φ(v) φ(α) v α
= ∫_-α̅/γ^1/2^r/γ^1/2∫_∇Φ(x̅)^⊥1_v^2 ≥ (γ^1/2α + α̅)/γ⟨ v, e_i ⟩φ(v) φ(α) v α
Hence, combining this result and the Cauchy-Schwarz inequality we have for any i ∈{1, …, d-1}
(∫_^d1_x + √(γ)z ∈(x,γ)⟨ z, e_i ⟩φ(z) z)^2 = (∫_-α̅/γ^1/2^r/γ^1/2∫_∇Φ(x̅)^⊥1_v^2 ≥ (γ^1/2α + α̅)/γ⟨ v, e_i ⟩φ(v) φ(α) v α)^2
≤∫_∇Φ(x̅)^⊥⟨ v, e_i ⟩^2 φ(v) v (∫_-α̅/γ^1/2^r/γ^1/2Ψ((α̅ + αγ^1/2)/γ, d-1)^1/2φ(α) α )^2 .
Hence, we get that
∑_i=1^d-1a_i^2 ( ∫_^d1_x + √(γ)z ∈(x,γ)⟨ z, e_i ⟩φ(z) z)^2≤u^2 ψ^2(γ) ,
with ψ given by <Ref>. Recalling that b̂^γ(x)≥ M we have
⟨b̂^γ(x), u ⟩^2 ≤ 16ψ(γ)^2/γ≤ c^2 ( / 16) M^2 ≤ ( / 16) ⟨b̂^γ(x), ∇Φ(x̅) ⟩^2 ,
which concludes the proof.
§.§ Submartingale problem and weak solution
We are now ready to conclude the proof.
There exists ^⋆ a distribution on (0,T, ) such
that lim_γ→ 0^γ = ^⋆. In addition, for any
f ∈^1,2(0,T×, ) with
⟨∇Φ(x̅), ∇ f(x) ⟩≥ 0 for any
t ∈0,T and x ∈∂, we have that the process
(f(t, ω(t)))_t ∈0,T given for any t ∈0,T
f(t, ω(t)) - ∫_0^t (∂_s f(s, ω(s)) + (1/2) Δ f(s, ω(s))) 1_(ω(s)) ds ,
is a submartingale.
Condition (A) <cit.> is a consequence of
<Ref>. Condition (B) <cit.> is
a consequence of <Ref>. Condition (C)
<cit.> is a consequence of
<Ref>. Condition (D) <cit.> is
a consequence of <Ref>. We fix ρ =0 and condition (1)
<cit.> is a consequence of
<Ref>. Condition (2)-(iii)
<cit.> is a consequence of
<Ref>. Condition (2)-(iv)
<cit.> is a consequence of
<Ref>. We conclude upon using <cit.> and <cit.>.
We finally conclude the proof of <Ref> upon using the
results of <cit.> which establish the link between a weak
solution to the reflected SDE and the solution to a submartingale problem.
For any T ≥ 0, (_t^γ)_t ∈0,T weakly converges to
(_t)_t ∈0,T such that for any t ∈0,T
_t = x + _t - _t , _t = ∫_0^t 1__s ∈∂_s , _t = ∫_0^t (_s) _s .
Using <Ref> and <cit.>, we have that the limiting process in <Ref> is associated with a solution to the extended Skorokhod problem. We conclude that a solution to the extended Skorokhod problem is a solution to the Skorokhod problem using <cit.>.
§.§ Extension to the Metropolis process
We recall that the Metropolis process is defined as follows. Let
(X_k^γ)_k ∈ given for any γ >0 and k ∈ by
X_0^γ = x ∈ and for
X_k+1^γ = X_k^γ + √(γ) Z_k if
X_k^γ + √(γ) Z_k^γ∈ and X_k^γ otherwise,
Z_k ∼N(0, ). We recall that b̂^γ,
Σ̂^γ and Δ̂^γ are given by
(<ref>), (<ref>) and
(<ref>). In particular, denoting ^γ the
Markov kernel associated with (X̂_k^γ)_k ∈, i.e.
^γ: ×→0,1 such that for any
x ∈, ^γ(x, ·) is a probability measure, for any
∈, ^γ(·, ) is a measurable function
and
1_(X̂_1^γ) | X̂_0^γ=x =
^γ(x, ). We have that for any γ > 0 and x ∈
b̂^γ(x) = (1/γ) ∫_ (y - x) ^γ(x, y) ,
Σ̂^γ(x) = (1/γ)∫_ (y - x)^⊗ 2^γ(x, y) ,
Δ̂^γ(x) = (1/γ)∫_y - x^4 ^γ(x, y) .
In what follows, we denote
a^γ(x) = 1_x + √(γ) Z_0 ∈. Denote
^γ the kernel associated with (X_k^γ)_k ∈. We have
that for any ∈, γ >0 and x ∈
^γ(x,) = 1_X_k+1^γ∈1_x + √(γ) Z_k+1∈ + (1-a^γ(x)) 1_(x)
= a^γ(x) ^γ(x, ) + (1-a^γ(x)) 1_(x) .
We define for any γ > 0 and x ∈
b^γ(x) = (1/γ) ∫_ (y - x) ^γ(x, y) ,
Σ^γ(x) = (1/γ)∫_ (y - x)^⊗ 2^γ(x, y) ,
Δ^γ(x) = (1/γ)∫_y - x^4 ^γ(x, y) .
Using (<ref>), we get that for any γ >0 and x ∈
b^γ(x) = a^γ(x) b̂^γ(x) , Σ^γ(x) = a^γ(x) Σ̂^γ(x) , Δ^γ(x) = a^γ(x) Δ̂^γ(x) .
Using <Ref>, we have that for any
γ∈0, and x ∈, a^γ(x) ≥ 1/4.
In order to conclude for the convergence of the Metropolis process we adapt
<Ref> and <Ref>. We define
^γ : _+ → given for any k ∈ by
^γ_kγ = X_k^γ and for any
t ∈kγ, (k+1)γ, ^γ_t = X_k^γ. Note
that (_t)_t ∈0,T is a (0,T, ) valued
random variable, where (0,T, ) is the space of
right-continuous with left-limit processes which take values in . We
denote ^γ the distribution of (_t)_t ∈0,T on
(0,T, ).
There exists ^⋆ a distribution on (0,T, ) such
that lim_γ→ 0^γ = ^⋆. In addition, for any
f ∈^1,2(0,T×, ) with
⟨∇Φ(x̅), ∇ f(x) ⟩≥ 0 for any
t ∈0,T and x ∈∂, we have that the process
(f(t, ω(t)))_t ∈0,T given for any t ∈0,T
f(t, ω(t)) - ∫_0^t (∂_s f(s, ω(s)) + (1/2) Δ f(s, ω(s))) 1_(ω(s)) ds ,
is a submartingale.
Condition (A) <cit.> is a consequence of
<Ref> and (<ref>). Condition (B)
<cit.> is a consequence of
<Ref> and (<ref>). Condition (C)
<cit.> is a consequence of <Ref>
and (<ref>). Condition (D) <cit.>
is a consequence of <Ref> and (<ref>). We
fix ρ =0 and condition (1) <cit.> is a
consequence of <Ref> and that lim_γ→ 0 a^γ = 1
uniformly on compact subsets ⊂. Condition (2)-(iii)
<cit.> is a consequence of
<Ref> and (<ref>). Condition (2)-(iv)
<cit.> is a consequence of
<Ref> and (<ref>). We conclude upon
using <cit.> and <cit.>.
For any T ≥ 0, (_t^γ)_t ∈0,T weakly converges to
(_t)_t ∈0,T such that for any t ∈0,T
_t = x + _t - _t , _t = ∫_0^t 1__s ∈∂_s , _t = ∫_0^t (_s) _s .
The proof is identical to <Ref>.
§ MODELLING GEOSPATIAL DATA WITHIN NON-CONVEX BOUNDARIES
To demonstrate the ability of the proposed method to model distributions whose support is restricted to manifolds with highly non-convex boundaries, we derived a geospatial dataset based on the historical wildfire incidence rate within the continental United States (described in <Ref>) and, using the corresponding country borders, trained a constrained diffusion model by adapting the point-in-spherical-polytope conditions outlined in <cit.> (described in <Ref>).
§.§ Derivation of bounded geospatial dataset
Specifically, we retrieved the rasterised version of the wildfire data provided by <cit.>, converted it to a spherical geodetic coordinate system using the Cartopy library <cit.>, and drew a weighted subsample of size 1e6. We then retrieved the country borders of the continental United States from <cit.> and mapped them to the same geodetic reference frame as the wildfire data. A visualization of the resulting dataset is presented in <Ref>.
§.§ Point-in-spherical-polytope algorithms
The support of the data-generating distribution we aim to approximate is thus restricted to a highly non-convex spherical polytope ℙ∈𝒮^2 given by the country borders of the continental United States. To determine whether a query point q∈𝒮^2 is within ℙ, we adapt an efficient reformulation of the point-in-spherical-polygon algorithm <cit.> presented in <cit.>.
The algorithm requires the provision of a reference point r∈𝒮^2 known to be located in ℙ and determines whether q is inside or outside the polygon by checking whether the geodesic between r and q crosses the polygon an even or odd number of times.
Letting x∈ℝ^3 denote the Cartesian coordinates of a point x∈𝒮^2, <cit.> rely on a Cartesian reference coordinate system Q (with its z-axis given by r) and the corresponding spherical coordinate system Q to decompose the edge-crossing condition of <cit.> into two efficiently computable parts. That is, the geodesic between q and r crosses an edge e_i=(v_i, v_j) of the polygon if:
* the longitude of q in Q is bounded by the longitudes of v_i and v_j in Q, i.e.
ϕ_Q(q)∈[min(ϕ_Q(v_i), ϕ_Q(v_j)), max(ϕ_Q(v_i), ϕ_Q(v_j))],
* the plane specified by the normal vector p_i=v_i×v_j represents an equator that separates q and r into two different hemispheres, i.e.
sign(⟨p_i, r⟩·⟨p_i, q⟩) = -1.
Especially when ℙ is fixed and the corresponding coordinate transformations and normal vectors can be precomputed for each edge, this algorithm affords an efficient and parallelisable approach to determining whether any given point on 𝒮^2 is contained by a spherical polytope.
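A compact sketch of this test is given below; the frame construction, the handling of edges that wrap around the longitude discontinuity of the rotated frame, and the even/odd bookkeeping are simplified, so it should be read as an illustration of the two conditions rather than a production implementation.

import numpy as np

def rotation_to_pole(r):
    # Rotation matrix sending the unit vector r to the z-axis; this is one
    # (assumed) way of constructing the reference frame Q.
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(r, z)
    s, c = np.linalg.norm(v), float(np.dot(r, z))
    if s < 1e-12:
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx * ((1.0 - c) / s ** 2)

def in_spherical_polygon(q, r, vertices):
    # Even/odd test: with r known to lie inside the polygon, q is inside iff
    # the geodesic from r to q crosses the boundary an even number of times.
    # Edges spanning the longitude discontinuity of frame Q are not treated
    # specially here.
    R = rotation_to_pole(np.asarray(r, dtype=float))
    lon = lambda x: np.arctan2((R @ x)[1], (R @ x)[0])
    crossings = 0
    for i in range(len(vertices)):
        vi, vj = vertices[i], vertices[(i + 1) % len(vertices)]
        lo, hi = sorted((lon(vi), lon(vj)))
        if not (lo <= lon(q) <= hi):
            continue                              # condition (i): longitude bracket
        p = np.cross(vi, vj)                      # normal of the edge's great circle
        if np.sign(np.dot(p, r) * np.dot(p, q)) == -1.0:
            crossings += 1                        # condition (ii): opposite hemispheres
    return crossings % 2 == 0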
§ SUPPLEMENTARY EXPERIMENTAL RESULTS
§.§ Evaluating log-barrier models
Following <cit.>, we evaluate our Metropolis model empirically by computing the maximum mean discrepancy (MMD) <cit.> between samples from the true distribution and the trained diffusion models. The MMD is a statistic that quantifies the similarity of two samples by computing the distance between their respective mean embeddings in a reproducing kernel Hilbert space. For this, we use RBF kernels with the same length scales as the standard deviations of the normal distributions used to generate the synthetic distribution, summed with weights given by the corresponding components of the synthetic Gaussian mixture model.
From the results in <Ref>, it is clear that the log-barrier approach performs significantly worse than the Reflected model across all but one setting and worse than the Metropolis models across all settings. This, together with numerical instabilities we encountered when attempting to evaluate sample likelihoods with the log-barrier models as presented in <cit.>, motivated us to focus on the Reflected and Metropolis models in the main text.
§.§ Implementational details
An anonymised repository with the datasets and code necessary to reproduce our experiments can be found under the following links: The main repository is available https://anonymous.4open.science/r/constrained-diffusion-models/here and the supporting package which handles the geometry can be found under https://anonymous.4open.science/r/geomstats-with-boundary/here.
We use the same architecture in all of our experiments: a 6-layer MLP with 512 hidden units and sine activation functions, except in the output layer, which uses a linear activation function. Following <cit.>, we implement a simple linear function that scales the score by the distance to the boundary, approaching zero within ϵ = 0.01 of the boundary. This ensures the score obeys the Neumann boundary conditions required by the reflected Brownian Motion. For the geospatial dataset within non-convex country borders, we do not use distance rescaling. Instead, we substitute it with a series of step functions to rescale the score. This is a proof-of-concept to show that even when computing the distance is hard, simple and efficient approximations suffice. When constructing Riemannian diffusion models on the torus and sphere for the protein and geospatial datasets, we follow <cit.> and include an additional preconditioner for the score on the manifold. We do not use the residual trick or the standard deviation trick, which are both common score-rescaling functions in image model architectures; in our setting, we find that they adversely affect model training.
For the forward/reverse process we always set T=1, β_0=1e-3 and then tune β_1 to ensure that the forward process just reaches the invariant distribution with a linear β-schedule. At sampling time we use N=100 steps of the discretised process. We discretise the training process by selecting a random N between 0 and 100 for each example, rolling out to that time point. This lets us cheaply implement a simple variance reduction technique: we take multiple samples from this trajectory by selecting multiple random N to save for each example. This technique was originally described in <cit.> and we find it is also helpful for our Metropolis models. For all experiments, we use the ism loss with a modified weighting function of (1 + t), which we found to be essential to model training. All experiments use a batch size of 256 with 8 repeats per batch. For training, we use a learning rate of 2e-4 with a cosine learning rate schedule. We trained for 100,000 batches on the synthetic examples and 300,000 batches on the real-world examples (robotics, proteins, wildfires).
We selected these hyperparameters from a systematic search over learning rates (6e-4, 2e-4, 6e-5, 2e-5), learning rate schedules (cosine, log-linear), and batch sizes (128, 256, 512, 1024) on synthetic examples for the reflected and log-barrier models. Similar parameters worked well for both, and we used those for our Metropolis models to allow a straightforward comparison. We tried N=100,1000 for several synthetic examples but found that very large rollout times actually hurt performance for the Metropolis model, though the log-barrier performed a bit better with longer rollouts and the reflected was the same.
All models were trained on a single NVIDIA GeForce GTX 1080 GPU. All of the Metropolis models presented here can easily be trained on this hardware in under 4 hours. The runtime for the log-barrier and reflected models is considerably longer.
§.§ Synthetic Distributions on Constrained Manifolds of Increasing Dimensionality
§.§ Constrained Configurational Modelling of Robotic Arms
The following univariate marginal and pairwise bivariate plots visualise the distribution of different samples in
* the three dimensions needed to describe an ellipsoid M=[ l_1 l_2; l_2 l_3 ]∈Ş_++^2 and
* the two dimensions needed to describe a location in ^2.
§.§.§ Visualisation of samples from the data distribution
§.§.§ Visualisation of samples from our Metropolis sampling-based diffusion model
§.§.§ Visualisation of samples from a reflected Brownian motion-based diffusion model
§.§.§ Visualisation of samples from the uniform distribution
§.§ Conformational Modelling of Protein Backbones
The following univariate marginal and pairwise bivariate plots visualise the distribution of different samples in
[label=(*)]
* the polytope ℙ⊂^3 and
* the torus 𝕋^4
used to parametrise the conformations of a polypeptide chain of length N=6 with coinciding endpoints. We refer to <cit.> for full detail on the reparametrisation and to <cit.> for a full description of the dataset.
§.§.§ Visualisation of samples from the data distribution
§.§.§ Visualisation of samples from our Metropolis sampling-based diffusion model
§.§.§ Visualisation of samples from a reflected Brownian motion-based diffusion model
§.§.§ Visualisation of samples from the uniform distribution
|
http://arxiv.org/abs/2307.04976v1 | 20230711023334 | Multi-fidelity Emulator for Cosmological Large Scale 21 cm Lightcone Images: a Few-shot Transfer Learning Approach with GAN | [
"Kangning Diao",
"Yi Mao"
] | astro-ph.CO | [
"astro-ph.CO",
"astro-ph.IM"
] |
Multi-fidelity Emulator for Cosmological Large Scale 21 cm Lightcone Images:
a Few-shot Transfer Learning Approach with GAN
Kangning Diao, Yi Mao
Department of Astronomy, Tsinghua University, Beijing, China
Correspondence to: Kangning [email protected], Yi [email protected]
Keywords: Machine Learning, ICML
Large-scale numerical simulations (≳ 500Mpc) of cosmic reionization are required to match the large survey volume of the upcoming Square Kilometre Array (SKA). We present a multi-fidelity emulation technique for generating large-scale lightcone images of cosmic reionization. We first train generative adversarial networks (GAN) on small-scale simulations and transfer that knowledge to large-scale simulations with hundreds of training images. Our method achieves high accuracy in generating lightcone images, as measured by various statistics with mostly percentage errors. This approach saves computational resources by 90% compared to conventional training methods. Our technique enables efficient and accurate emulation of large-scale images of the Universe.
§ INTRODUCTION
In preparation for the upcoming era of 21 cm cosmology, many models have been developed to extract information from observations. These models range from semi-numerical simulations, e.g. <cit.>, to hydrodynamical radiative transfer simulations, e.g. <cit.>, with varying levels of accuracy and computational cost. In addition, different approaches have been applied to infer cosmological and astrophysical parameters, ranging from Markov Chain Monte Carlo (MCMC) codes, e.g. <cit.>, to machine-learning-boosted simulation-based inference <cit.>. However, parameter inference typically requires many forward simulations. Given the large field of view of next-generation telescopes, large-scale simulations are required to fully exploit the information contained in the observations. These large-scale simulations are computationally expensive, which has inspired the development of emulators as an alternative.
Building emulators typically requires numerous training samples. For large-scale simulations, the cost of obtaining these training samples can be prohibitive in and of itself. To address this issue, the concept of multi-fidelity emulation <cit.> has been proposed. This approach first uses low-cost (low-fidelity) simulations to create an emulator. The emulator is then calibrated with a small number of high-cost (high-fidelity) simulations, reducing the computational cost while still maintaining the output quality.
Here we choose GAN <cit.> as our emulation model. GAN emulation has previously demonstrated the ability to produce high-quality samples. However, GAN training is known to suffer very often from mode collapse, especially with a dataset smaller than ∼ 1000 images. In the context of 21 cm lightcone emulation, this would typically require ≳ 1000 expensive simulations which are sometimes impossibly costly. In this paper, we propose the few-shot transfer learning <cit.> to train a faithful large-scale 21 cm lightcone image emulator with a limited number of simulations. Few-shot transfer learning allows us to learn a new task with a limited number of samples, which serves as the `calibrating' procedure in multi-fidelity emulation. This multi-fidelity emulation allows us to significantly reduce the number of simulations required to train an accurate lightcone image emulator.
§ METHODOLOGY
Our approach involves a two-step process. First, we train our GAN with 120000 small-scale (size of (2,64,512)) images. In the second step, we train our large-scale GAN on 320 large-scale (size of (2,256,512)) images while preserving the diversity of GAN results. We will explain our approach in detail in the following.
StyleGAN 2:
The GAN architecture used in this work is StyleGAN 2 <cit.>. Our generator G consists of two parts: First, a mapping network f takes the astrophysical parameter 𝐜 and a random vector 𝐳 and returns a style vector 𝐰. Second, a synthesis network g uses the style vector 𝐰 to shift the weights in convolution kernels, and Gaussian random noise is injected into the feature map right after each convolution to provide variations in different scales of features. Our discriminator D has a ResNet <cit.>-like architecture.
Cross-Domain Correspondence (CDC): Assuming we have a good small-scale StyleGAN emulator, we expand the size of the generator's first layer, resulting in a final output size of (2,256,512).
Next, we retrain our GAN with large-scale images. We first employ the patchy-level discriminator and cross-domain correspondence as described in <cit.>. We denote the small-scale GAN as our source model G_s and the large-scale GAN as the target model G_t. First, we feed the same batch of vectors (𝐳,𝐜) to both G_s and G_t, obtaining the corresponding small-scale images G_s(𝐳,𝐜) and large-scale images G_t(𝐳,𝐜). Then we calculate the cosine similarity s_(i,j) between any pair of images in G_s(𝐳,𝐜) as
𝐒_s(𝐳,𝐜)={cos(G_s(z_i,c_i),G_s(z_j,c_j))_∀ i≠ j}
and similarly for G_t we have:
𝐒_t(𝐳,𝐜)={cos(G_t(z_i,c_i),G_t(z_j,c_j))_∀ i≠ j}
Here the cos denotes the cosine similarity. Next, we normalize these two vectors using softmax and calculate the KL divergence between vectors:
ℒ_ CDC = D_ KL(Softmax(𝐒_s),Softmax(𝐒_t))
In this way, one encourages G_t to generate samples with a diversity similar to that of G_s, alleviating the mode collapse problem.
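The CDC loss can be sketched as follows (a minimal PyTorch sketch of the three equations above; computing the similarities on raw generator outputs rather than on intermediate activations is a simplifying assumption):

import torch
import torch.nn.functional as F

def cdc_loss(imgs_s, imgs_t):
    # imgs_s, imgs_t: batches G_s(z, c) and G_t(z, c) generated from the same (z, c) vectors.
    def pairwise_cos(x):
        x = F.normalize(x.flatten(start_dim=1), dim=1)    # (B, D), unit norm
        sim = x @ x.t()                                   # (B, B) cosine similarities
        b = x.shape[0]
        mask = ~torch.eye(b, dtype=torch.bool, device=x.device)
        return sim[mask].view(b, b - 1)                   # drop the i == j terms

    p_s = F.softmax(pairwise_cos(imgs_s), dim=1)          # Softmax(S_s)
    log_p_t = F.log_softmax(pairwise_cos(imgs_t), dim=1)  # log Softmax(S_t)
    return F.kl_div(log_p_t, p_s, reduction='batchmean')  # D_KL(Softmax(S_s), Softmax(S_t))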
Other Techniques: A patch-level discriminator is also adopted in this work. We divide the astrophysical parameter space into two parts: the anchor region and the rest. The anchor region is a union of small spherical neighborhoods around the training-set parameters. In this region, a GAN image G_t(𝐳,𝐜_ anch) has a good training sample to compare with, so we apply the full discriminator for these parameters. If 𝐜 is located outside the anchor region, we only apply a patch discriminator: in this case, the discriminator does not calculate the loss of the whole image but of different patches of the image.
Since the small-scale information in both training sets is identical, we freeze the first two layers of the discriminator <cit.>. We also add the loss of the small-scale discriminator D_s to ensure the correctness of small-scale information. Our code is publicly available in this GitHub repo[<https://github.com/dkn16/multi-fidel-gan-21cm>].
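One possible realization of the anchor-region switch is sketched below; the radius, the distance metric in parameter space, and the assumption that the discriminator is fully convolutional (so that it accepts crops) are illustrative choices rather than the exact implementation:

import torch

def discriminator_score(D, fake_img, c, anchor_params, radius=0.05, n_patches=4, patch=64):
    # Inside the anchor region: score the full image; outside: score random patches only.
    dist = torch.cdist(c, anchor_params).min()   # distance to the nearest training parameters
    if dist <= radius:
        return D(fake_img, c).mean()
    _, _, h, w = fake_img.shape
    scores = []
    for _ in range(n_patches):
        i = torch.randint(0, h - patch + 1, (1,)).item()
        j = torch.randint(0, w - patch + 1, (1,)).item()
        scores.append(D(fake_img[:, :, i:i + patch, j:j + patch], c).mean())
    return torch.stack(scores).mean()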
§ DATASET
The training dataset for this project consists of two parts: a small-scale dataset and a large-scale dataset. All the data are generated with <cit.>, and each simulation has distinct reionization parameters. Our parameters are the ionizing efficiency ζ and the minimum virial temperature T_ vir. We explored a range of 10<ζ<250 and 4<log T_ vir<6, and the parameters are sampled with Latin-Hypercube Sampling<cit.>.
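The parameter sampling itself is straightforward; a sketch using SciPy's quasi-Monte Carlo module follows (the seed is arbitrary, and the sample size of 80 shown here corresponds to the large-scale set):

from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=42)
unit = sampler.random(n=80)                                   # samples in the unit square
params = qmc.scale(unit, l_bounds=[10.0, 4.0], u_bounds=[250.0, 6.0])
zeta, log_tvir = params[:, 0], params[:, 1]                   # 10 < zeta < 250, 4 < log T_vir < 6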
The small-scale dataset has a resolution of (64,64,512) and consists of 30,000 simulations with a comoving box length of (128,128,1024) Mpc. The third axis (z-axis) is along the line of sight (LoS), spanning a redshift range of 7.51<z<11.93. For each redshift, we run a realization
and select the corresponding slice for our final data. We include the matter overdensity field δ_m and the 21 cm brightness temperature field T_b for training. Since the overdensity field is highly correlated with other intensity mapping (IM) signals such as the CO and [CII] lines, we expect our method to transfer smoothly to other IM images. For each sample, we cut four image slices, resulting in 120000 lightcone images of size (2,64,512) in our small-scale dataset, each containing both the overdensity and brightness temperature fields.
The large-scale dataset has a (256,256,512) resolution and consists of 80 simulations with a comoving box length of (512,512,1024) Mpc, covering the same redshift range. As before, for each sample, we cut four slices and obtained 320 lightcone images with a size of (2,256,512) in our large-scale dataset.
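The slicing step can be sketched as follows (array names are illustrative, and cutting at evenly spaced transverse positions is an assumption; the actual pipeline may choose the slices differently):

import numpy as np

def cut_lightcone_slices(delta_m, t_b, n_slices=4):
    # delta_m, t_b: lightcone boxes of shape (H, H, 512), z-axis along the line of sight.
    # Returns n_slices images of shape (2, H, 512), stacking overdensity and brightness temperature.
    box = np.stack([delta_m, t_b], axis=0)            # (2, H, H, 512)
    h = delta_m.shape[0]
    idx = np.linspace(0, h - 1, n_slices, dtype=int)  # evenly spaced transverse positions
    return [box[:, i, :, :] for i in idx]

# images = cut_lightcone_slices(np.zeros((64, 64, 512)), np.zeros((64, 64, 512)))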
§ RESULTS
Here we present the evaluation of our model results. A visual inspection of generated samples is shown in Fig. <ref>. We tested our results on three parameter combinations, each with a distinct evolution history. For each parameter combination, we ran four simulations with distinct initial conditions generated from different random seeds for testing.
Global Signal:
We calculated the global 21 cm signal of the GAN results. Given the limited size of the test set, the mean value is calculated over 1024 image samples. Our result is shown in Fig. <ref>. We see that the GAN works well, with an error mostly below 5% and a well-matched 2σ region.
Power spectrum (PS):
Fig. <ref> shows the T_b auto-PS, the T_b-δ_m cross-PS and the δ_m auto-PS. The GAN results perform well on small scales, with errors below 10%, except where the PS is close to zero. On extremely large scales, the error can exceed 50%. This is unsurprising, because we lack training samples on these scales. The GAN still captures the large-scale power when the T_b signal has a high amplitude. Moreover, the relative error is insignificant compared with the sampling variance.
The T_b auto-PS (Fig. <ref>, top row) shows the expected evolution with time, with power being transferred from small to large scales. The accuracy of the cross-PS (Fig. <ref>, middle row) guarantees the correlation between T_b and δ_m. At early stages, the HI traces the matter field well, and the GAN T_b and δ_m fields have positive cross-correlation at all scales. Later, the cross-correlation becomes negative, because dense regions hosted ionizing sources earlier and ionized first. Our GAN reproduces these features well. The GAN samples with different parameters have similar matter PS (Fig. <ref>, bottom row), in agreement with the truth.
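For reference, the binned auto-/cross-power spectra of 2D lightcone slices can be estimated with a few lines of numpy (a sketch; the normalization convention is a simple volume factor and may differ from the one used in the figures):

import numpy as np

def cross_power_spectrum_2d(a, b, lengths, n_bins=20):
    # a, b: 2D fields of the same shape; b = a gives the auto-PS.
    # lengths: physical extent (Ly, Lx) of the patch in Mpc.
    ny, nx = a.shape
    vol = lengths[0] * lengths[1]
    fa = np.fft.fftn(a) * vol / (ny * nx)
    fb = np.fft.fftn(b) * vol / (ny * nx)
    cross = (fa * np.conj(fb)).real / vol
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=lengths[0] / ny)
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=lengths[1] / nx)
    kk = np.sqrt(ky[:, None] ** 2 + kx[None, :] ** 2).ravel()
    p = cross.ravel()
    bins = np.logspace(np.log10(kk[kk > 0].min()), np.log10(kk.max()), n_bins + 1)
    idx = np.digitize(kk, bins)
    ps = np.array([p[idx == i].mean() if np.any(idx == i) else np.nan
                   for i in range(1, n_bins + 1)])
    return 0.5 * (bins[1:] + bins[:-1]), ps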
Non-Gaussianity:
Here we employ the scattering transform (ST) <cit.> coefficients as a non-Gaussian statistic to evaluate our GAN. A detailed description can be found in e.g. <cit.>. We calculated the second-order ST coefficients S_2 as measures of non-Gaussianity with Kymatio <cit.>. Since the images are larger, we set the kernel scales to j=0,3,6 to capture more large-scale information. Results are shown in Fig. <ref>. For (j_1,j_2) = (0,3), the error is moderate, ≲ 10%, while for j_2=6 the error exceeds 20%.
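A minimal sketch of the coefficient computation with Kymatio is given below; the variable t_b_slice, the choice J=7, and the exact output layout are assumptions, and selecting individual (j_1,j_2) pairs additionally requires the scattering meta information, which is omitted here:

import numpy as np
from kymatio.numpy import Scattering2D

scattering = Scattering2D(J=7, shape=(256, 512), L=8)  # kernels up to scale 2^J
coeffs = scattering(t_b_slice.astype(np.float32))      # zeroth-, first- and second-order coefficients
s_avg = coeffs.mean(axis=(-2, -1))                     # spatially averaged coefficients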
§ SUMMARY
In this paper, we introduce the few-shot transfer learning technique to build an emulator for large-scale 21 cm simulations. The large-scale GAN is trained with 80 simulations, and the relative error of statistics is less than 10% on small scales. On large scales, a mild increase in error arises due to insufficient training samples.
Generating our multi-fidelity dataset requires ∼ 1.2× 10^5 CPU hours, while a purely large-scale dataset of 5000 simulations, an optimistic estimate of the required dataset size consistent with e.g. <cit.>, would require ∼ 1.5× 10^6 CPU hours. Our method thus reduces the computational cost by about 90%, which will enable us to emulate more complex simulations in the future.
§ ACKNOWLEDGEMENTS
This work is supported by the National SKA Program of China (grant No. 2020SKA0110401), NSFC (grant No. 11821303), and the National Key R&D Program of China (grant No. 2018YFA0404502). We thank Xiaosheng Zhao, Ce Sui, and especially Richard Grumitt for inspiring discussions.
We acknowledge the Tsinghua Astrophysics High-Performance Computing platform at Tsinghua University for providing computational and data storage resources that have contributed to the research results reported within this paper.
§ COMPARISON WITH PREVIOUS WORK
Several noteworthy applications of GANs in astronomy have been explored in previous studies <cit.>. These works have made significant progress by utilizing innovative GAN structures such as the progressively growing GAN (PGGAN) <cit.> and stabilized GANs <cit.>. They have demonstrated sub-percent-level accuracy, as assessed by various statistical measures, for unconditional emulation, and accuracy at the ten-percent level for conditional emulation. A comparison between our results and previous findings is presented in Table <ref>. By employing the StyleGAN2 architecture, we achieve percent-level accuracy in conditional emulation with sufficient training samples, as validated by various statistical measures. In the few-shot learning scenario, our GAN exhibits similar accuracy on small scales and a moderate increase in error on large scales. Furthermore, our large-scale GAN, combined with few-shot transfer learning, allows for computational resource savings ranging from 90% to 99%, depending on the estimate.
§ TEST ON MODE COLLAPSE
§.§ Visual inspection
To assess the diversity of our model, we conducted a visual inspection. We generated multiple realizations for both GAN samples and simulation samples, as illustrated in Fig. <ref>. Upon careful inspection, we find that the shape and size of the ionized bubbles vary across different GAN samples, indicating no specific preference for particular bubble features. Furthermore, the locations of the ionized bubbles also appear random, as no discernible trend or pattern was observed among the samples we examined.
§.§ Pixel level variance
In addition to visual inspections, we also computed the standard deviation of the T_b field for each pixel, as depicted in Fig. <ref>. Our aim was to observe any potential decrease in the standard deviation, which could indicate mode collapse. Upon analyzing the results in Fig. <ref>, we noticed that the variance for both GAN and simulation samples appeared similar, particularly for higher T_b values. However, we observed mild fluctuations in the standard deviation when the T_b value was low. Based on this analysis, we can conclude that there is no clear evidence of significant mode collapse at the pixel level.
§.§ Feature level variance
Lastly, we computed the 2σ scatter of the second-order ST coefficients (S_2) for the T_b field, which serves as a representation of image features. The results are presented in Figures <ref>-<ref>. Consistent with the analysis in Section <ref>, we selected the scales (j_1,j_2) as (0,3), (0,6), and (3,6) to capture both small and large-scale features.
Upon examination, we observed that in most cases, the 2σ scatter of GAN features overlapped with that of simulation samples, indicating the absence of mode collapse at the feature level. However, in the bottom subplot of Fig. <ref>, we noticed a deviation in both the mean value and 2σ scatter for certain features at the super-large scale. This suggests a slight mode collapse issue in the generated images at that particular scale.
In conclusion, our analysis indicates that there is no strong evidence of mode collapse at the feature level. The GAN samples generally mimic the behavior of the simulation samples quite well, except when the T_b approaches zero.
|
http://arxiv.org/abs/2307.05547v1 | 20230709055046 | Robust Routing Made Easy: Reinforcing Networks Against Non-Benign Faults | [
"Christoph Lenzen",
"Moti Medina",
"Mehrdad Saberi",
"Stefan Schmid"
] | cs.DC | [
"cs.DC"
] |
Robust Routing Made Easy:
Reinforcing Networks Against Non-Benign Faults
Research supported by the Federal Ministry of Education and Research (BMBF), grant 16KISK020K, 2021-2025.
This article extends work presented at SSS
2017 <cit.>.
Christoph Lenzen^1 Moti Medina^2 Mehrdad Saberi^3 Stefan Schmid^4
^1CISPA Helmholtz Center for Information Security, Germany ^2Faculty of Engineering, Bar-Ilan University, Ramat Gan, Israel
^3University of Maryland, College Park, USA ^4TU Berlin, Germany
August 12, 2023
With the increasing scale of communication networks,
the likelihood of failures grows as well.
Since these networks form a critical backbone
of our digital society, it is important that they rely on
robust routing algorithms which ensure connectivity
despite such failures. While most modern communication
networks feature robust routing mechanisms, these mechanisms
are often fairly complex to design and verify, as they
need to account for the effects of failures and rerouting
on communication.
This paper conceptualizes the design of robust routing mechanisms,
with the aim to avoid such complexity. In particular,
we showcase simple and generic blackbox transformations that increase the resilience of routing against independently distributed failures, allowing the routing scheme of the original network to be simulated even in the presence of non-benign node failures (henceforth called faults). This is attractive
as the system specification and routing policy can simply be preserved.
We present a scheme for constructing such a reinforced network, given
an existing (synchronous) network and a routing scheme. We prove that
this algorithm comes with small constant overheads, and only requires a minimal
amount of additional node and edge resources;
in fact, if the failure probability is smaller than 1/n,
the algorithm can come without any overhead at all.
At the same time,
it allows to tolerate a large number of
independent random (node) faults,
asymptotically almost surely.
We complement our analytical results with simulations on different real-world topologies.
§ INTRODUCTION
Communication networks have become a critical backbone
of our digital society. For example, many datacentric applications
related to entertainment, social networking, or health, among others,
are distributed and rely on the high availability and
dependability of the interconnecting network (e.g., a
datacenter network or a wide-area network).
At the same time, with the increasing scale of
today's distributed and networked systems (often relying
on commodity hardware as a design choice
<cit.>), the number of
failures is likely to increase as well
<cit.>.
It is hence important that communication networks can tolerate
such failures and
remain operational despite the failure of some of their
components.
Robust routing mechanisms aim to provide such guarantees:
by rerouting traffic quickly upon failures,
reachability is preserved. Most communication
networks readily feature robust routing mechanisms,
in the control plane (e.g.
<cit.>), in
the data plane (e.g. <cit.>), as well as on higher
layers (e.g. <cit.>).
However, the design of such robust routing mechanisms is
still challenging and comes with tradeoffs, especially if
resilience should extend to multiple failures <cit.>.
Besides a fast reaction time and re-establishing connectivity, the
resulting routes typically need to fulfill certain additional properties,
related to the network specification and policy.
Ensuring such properties however can be fairly complex,
as packets inevitably follow different paths after failures.
Interestingly, while the problem of how to re-establish reachability
after failures is well explored,
the problem of providing specific properties on the failover
paths is much less understood.
This paper conceptualizes the design of robust routing, presenting a new approach that differs significantly from the existing literature by relying on proactive reinforcement (rather than reacting to failures).
In particular, our approach aims to overcome the complexities involved in designing
robust routing algorithms, by simply sticking to the original
network and routing specification.
To achieve this, our approach is to mask the effects of failures
using redundancy: in the spirit of error correction,
we proactively reinforce networks by adding a minimal number of
additional nodes and links, rather than
coping with failed components when they occur.
The latter is crucial
for practicability: significant refactoring of existing systems
and/or accommodating substantial design constraints is rarely
affordable.
In this paper, to ensure robustness while maintaining
the network and routing specification, we aim to
provide a high degree of fault-tolerance,
which goes beyond simple equipment and failstop failures,
but accounts for more general faults which include non-benign
failures of entire nodes.
While our approach presented in this paper will be general
and applies to any network topology, we are particularly
interested in datacenter networks (e.g., based on low-dimensional
hypercubes or d-dimensional tori <cit.>)
as well as in wide-area
networks (which are typically sparse <cit.>).
We will show that our approach works especially well for these networks.
§.§ The Challenge
More specifically,
we are given a network G=(V,E) and a routing scheme, i.e.,
a set of routes in G.
We seek to reinforce the network G by
allocating additional resources, in terms of nodes and edges,
and to provide a corresponding routing strategy to simulate the routing scheme
on the original network despite non-benign node failures.
The main goal is to maximize the probability that the network withstands
failures (in particular, random failures of entire nodes),
while minimizing the resource overhead.
Furthermore, we want to ensure that the network transformation is simple
to implement, and that it interferes as little as possible with the existing system design and operation, e.g., it
does not change the reinforced system's specification.
Toward this goal, in this paper, we make a number of simplifying assumptions.
First and most notably, we assume independent failures,
that is, we aim at masking faults with little or no correlation among each other.
Theoretically, this is motivated by the fact that
guaranteeing full functionality despite having f adversarially placed faults trivially requires redundancy (e.g., node degrees) larger than f.
There is also practical motivation to consider independent faults:
many distributed systems proactively avoid fault clusters
<cit.> and there is also empirical
evidence that in certain scenarios, failures are only weakly correlated <cit.>.
Second, we treat nodes and their outgoing links as fault-containment regions (according to <cit.>), i.e., they are the basic components our systems are comprised of.
This choice is made for the sake of concreteness;
similar results could be obtained when considering, e.g., edge failures, without changing the gist of results or techniques.
With these considerations in mind, the probability of uniformly random
node failures that the reinforced system can tolerate is a canonical choice for measuring resilience.
Third, we focus on synchronous networks, for
several reasons:
synchrony not only helps in handling faults, both on the theoretical level (as illustrated by the famous FLP theorem <cit.>) and for ensuring correct implementation, but it also
simplifies presentation, making it easier to focus on the proposed concepts.
In this sense, we believe
that our approach is of particular interest in the context of real-time systems,
where the requirement of meeting hard deadlines makes synchrony an especially attractive choice.
§.§ Contributions and Techniques
This paper proposes a novel and simple approach to robust routing,
which decouples the task of designing a reinforced network from the task of
designing a routing scheme over the input network. By virtue of this decoupling,
our approach supports arbitrary routing schemes and objectives,
from load minimization to throughput maximization and beyond,
in various models of computation, e.g., centralized or distributed, randomized
or deterministic, online or offline, or oblivious.
We first consider a trivial approach:
we simply replace each node by ℓ∈ copies
and for each edge we connect each pair of copies of its endpoints,
where ℓ is a constant.[Choosing concreteness over generality,
we focus on the, in our view, most interesting case of constant ℓ. It is straightforward to generalize the analysis.]
Whenever a message would be sent over an edge in the original graph,
it should be sent over each copy of the edge in the reinforced graph.
If not too many copies of a given node fail, this enables each receiving copy to recover the correct message.
Thus, each non-faulty copy of a node can run the routing algorithm as if it were the original node, guaranteeing that it has the same view of the system state as its original in the corresponding fault-free execution of the routing scheme on the original graph.
When analyzing this approach,
we observe that asymptotically almost surely (a.a.s., with probability 1-o(1)) and with ℓ=2f+1, this reinforcement can sustain Byzantine node failures <cit.> occurring with independent probability p, for any p∈ o(n^-1/(f+1)); that is, faulty nodes may violate the protocol in any arbitrary way (and may hence also collude).
This threshold is sharp up to (small) constant factors: for p∈ω(n^-1/(f+1)), a.a.s. there is some node for which more than f of its copies fail.
If we restrict the fault model to omission faults
(faulty nodes may skip sending some messages but otherwise act according to the protocol), ℓ=f+1 suffices.
The cost of this reinforcement is that the number of nodes and edges increase by factors of ℓ and ℓ^2, respectively.
Therefore, already this simplistic solution can support non-crash faults of probability p∈ o(1/√(n)) at a factor-4 overhead.
We note that the simulation introduces no large computational overhead and
does not change the way the system works, enabling to use it as a blackbox.
Also randomized algorithms can be simulated in a similar fashion,
provided that all copies of a node have access to a shared source of randomness.
Note that this requirement is much weaker than globally shared randomness:
it makes sense to place the copies of a node in physical proximity to approximately preserve the geometrical layout of the physical realization of the network topology.
Our approach above raises the question whether
we can reduce the involved overhead further.
In this paper, we will answer this question positively:
We propose to apply the above strategy only to a small
subset E' of the edge set.
Denoting by v_1,…,v_ℓ the copies of node v∈ V, for
any remaining edge {v,w}∈ E∖ E' we add only edges
{v_i,w_i}, i∈ [ℓ], to the reinforced graph.
The idea is to choose E' in a way such that the connected components
induced by E∖ E' are of constant size, yet |E'|=ε |E|.
This results in the same asymptotic threshold for p, while the number of edges of the reinforced graph drops to ((1-ε)ℓ+εℓ^2)|E|.
For any constant choice of ε, we give constructions with this property for grids or tori of constant dimension and minor-free graphs of bounded degree.
Again, we consider the case of f=1 of particular interest:
in many typical network topologies, we can reinforce the network to boost the failure probability that can be tolerated from Θ(1/n) to Ω(1/√(n)) by roughly doubling (omission faults) or tripling (Byzantine faults) the number of nodes and edges.
The redundancy in this second construction is near-optimal under the constraint that we want to simulate an arbitrary routing scheme in a blackbox fashion,
as it entails that we need a surviving copy of each edge, and thus in particular each node.
In many cases, the paid price will be smaller than the price for making each individual component sufficiently reliable to avoid this overhead.
Furthermore, we will argue that the simplicity of our constructions enables us to re-purpose the redundant resources in applications with less strict reliability requirements.
Our results show that while approach is general and can be applied to any
existing network topology (we will describe and analyze valid reinforcements for
our faults models on general graphs), it can be refined and is particularly
interesting in the context of networks that
admit suitable partitionings. Such networks include
sparse, minor-free graphs, which are practically relevant topologies in
wide-area networks, as well as torus graphs and low-dimensional
hypercubes, which arise in datacenters and parallel architectures.
To complement our theoretical findings and investigate the reinforcement
cost in real networks, we conducted experiments on the Internet Topology Zoo <cit.>.
We find that our approach achieves robustness at significantly lower cost compared to
the naive replication strategy often employed in dependable networks.
§.§ Putting Things Into Perspective
In contrast to much existing robust routing literature on reactive
approaches to link failures <cit.> (which come with a delay),
we consider a proactive approach by enhancing the network with redundancy.
Our proactive approach also allows us to replicate the routing scheme (and hence the network policy) on the new network.
In particular, we show that if the failure probability is smaller than 1/n, there is a good probability that our approach works even without any overhead at all.
Furthermore, there are two ways in which our system can be used. One approach is to replicate the entire node (including the compute part), and then forward the traffic to its two associated peers. Alternatively, traffic can also simply be replicated to multiple NICs, without additional compute requirements, depending on the failure model. More generally, our contribution can also be viewed more abstractly, with the robust routing taking place on a logical level, depending on the failure scenario.
Also, we show that the absence of a valid message can simply be ignored, as the rest of the system continues to operate correctly.
The most closely related work to ours is NetCo <cit.>,
which also relies on network reinforcement and can handle malicious behavior.
NetCo is based on a robust
combiner concept known from cryptography, and complements each router with two additional routers.
Using software-defined networking, traffic is replicated across the three (untrusted) devices and then merged again, using a consensus algorithm. While a high degree of robustness is achieved, the three-fold overhead is significant. More importantly, however, in contrast to our approach, Netco requires special hardware for splitting and merging the traffic; while the functionality of this hardware can be simple, it still needs to be trusted. The consensus requirement dramatically reduces the throughput, as shown in the empirical evaluation of NetCo in <cit.>.
Our solution does not require such components and is hence not only more practical but also significantly more performant.
§.§ Organization
In <ref>, we sketch the properties of our approach and state a number of potential applications. In <ref>, we formalize the fault models that we tackle in this article alongside the notion of a valid reinforcement and its complexity measures. In <ref> and <ref>, we study valid reinforcements on general graphs, and in <ref>, we study more efficient reinforcements for specific graphs.
We complement our analytical results with an empirical simulation study in
<ref>.
In <ref> we raise a number of points in favor of the reinforcement approach. We review related work in
<ref>, and we conclude and present a number of interesting
follow-up questions in <ref>.
§ HIGH-LEVEL OVERVIEW: REINFORCING NETWORKS
Let us first give an informal overview of our blackbox transformation
for reinforcing networks (for formal specification see <ref>), as well as its guarantees and preconditions.
Assumptions on the Input Network
We have two main assumptions on the network at hand: (1) We consider synchronous routing networks, and (2) each node in the network (alongside its outgoing links) is a fault-containment region, i.e., it fails independently from other nodes.
We do not make any assumptions on the network topology, but will provide specific
optimizations for practically relevant topologies (such as sparse, minor-free networks
or hypercubes) in <ref>.
Valid Reinforcement Simulation Guarantees
Our reinforcements create a number of copies of each node. We have each non-faulty copy of a node run the routing algorithm as if it were the original node, guaranteeing that it has the same view of the system state as its original in the corresponding fault-free execution of the routing scheme on the original graph. Moreover, the simulation fully preserves all guarantees of the schedule, including its timing, and introduces no big computational overhead.
This assumption is simple to meet in stateless networks, while it requires synchronization primitives in case of stateful network functions.
Unaffected Complexity and Cost Measures
Routing schemes usually revolve around objective functions such as load minimization, maximizing the throughput, minimizing the latency, etc., while aiming to minimize complexity related to, e.g., the running time for centralized algorithms, the number of rounds for distributed algorithms, the message size, etc. Moreover, there is the degree of uncertainty that can be sustained, e.g., whether the input to the algorithm is fully available at the beginning of the computation (offline computation) or revealed over time (online computation). Our reinforcements preserve all of these properties, as they operate in a blackbox fashion. For example, our machinery readily yields various fault-tolerant packet routing algorithms in the Synchronous Store-and-Forward model by Aiello et. al <cit.>. More specifically, from <cit.> we obtain a centralized deterministic online algorithm on unidirectional grids of constant dimension that achieves a competitive ratio which is polylogarithmic in the number of nodes of the input network w.r.t. throughput maximization. Using <cit.> instead, we get a centralized randomized offline algorithm on the unidirectional line with constant approximation ratio w.r.t. throughput maximization. In the case that deadlines need to be met the approximation ratio is, roughly, O(log^* n) <cit.>. As a final example, one can obtain from <cit.> various online distributed algorithms with sublinear competitive ratios w.r.t. throughput maximization.
Cost and Gains of the Reinforcement
The price of adding fault-tolerance is given by the increase in the network size, i.e., the number of nodes and edges of the reinforced network in comparison to the original one. Due to the assumed independence of node failures, it is straightforward to see that the (uniform) probability of sustainable node faults increases roughly like n^-1/(f+1) in return for (i) a linear-in-f increase in the number of nodes and (ii) an increase in the number of edges that is quadratic in f. We then proceed to improve the construction for grids and minor-free constant-degree graphs to reduce the increase in the number of edges to being roughly linear in f. Based on this information, one can then assess the effort in terms of these additional resources that is beneficial, as less reliable nodes in turn are cheaper to build, maintain, and operate. We also note that, due to the ability of the reinforced network to ensure ongoing unrestricted operability in the presence of some faulty nodes, faulty nodes can be replaced or repaired before communication is impaired or breaks down.
Preprocessing
Preprocessing is used, e.g., in computing routing tables in Oblivious Routing <cit.>.
The reinforcement simply uses the output of such a preprocessing stage in the same manner as the original algorithm. In other words, the preprocessing is done on the input network and its output determines the input routing scheme. In particular, the preprocessing may be randomized and does not need to be modified in any way.
Randomization
Randomized routing algorithms can be simulated as well, provided that all copies of a node have access to a shared source of randomness. We remark that, as our scheme locally duplicates the network topology, it is natural to preserve the physical realization of the network topology in the sense that all (non-faulty) copies of a node are placed in physical proximity. This implies that this constraint is much easier to satisfy than globally shared randomness.
§ PRELIMINARIES
We consider synchronous routing networks.
Formally, the network is modeled as a directed graph G=(V,E), where V is the set of n≜ |V| vertices, and E is the set of m≜ |E| edges (or links).
Each node maintains a state, based on which it decides in each round for each of its outgoing links which message to transmit.
We are not concerned with the inner workings of the node, i.e., how the state is updated;
rather, we assume that we are given a scheduling algorithm performing the task of updating this state and use it in our blackbox transformations.
In particular, we allow for online, distributed, and randomized algorithms.
Probability-p Byzantine Faults (Byz(p))
The set of faulty nodes F⊆ V is determined by sampling each v∈ V into F with independent probability p. Nodes in F may deviate from the protocol in arbitrary ways, including delaying, dropping, or forging messages, etc.
Probability-p Omission Faults (Om(p))
The set of faulty nodes F⊆ V is determined by sampling each v∈ V into F with independent probability p. Nodes in F may deviate from the protocol by not sending a message over an outgoing link when they should. We note that it is sufficient for this fault model to be satisfied logically. That is, as long as a correct node can identify incorrect messages, it may simply drop them, resulting in the same behavior of the system at all correct nodes as if the message was never sent.
Simulations and Reinforcement
For a given network G=(V,E) and a scheduling algorithm A, we will seek to reinforce (G,A) by constructing G'=(V',E') and scheduling algorithm A' such that the original algorithm A is simulated by A' on G', where G' is subject to random node failures. We now formalize these notions. First, we require that there is a surjective mapping P:V'→ V; fix G' and P, and choose F'⊆ V' randomly as specified above.
Assume that in each round r∈ℕ, each v'∈ V'∖ F' is given the same input by the environment as P(v'). A' is a simulation of A under Byz(p), if for each v∈ V, a strict majority of the nodes v'∈ V' with P(v')=v computes in each round r∈ℕ the state of v in A in this round. The simulation is strong, if not only for each v∈ V there is a strict majority doing so, but all v'∈ V'∖ F' compute the state of P(v') in each round.
Assume that in each round r∈ℕ, each v'∈ V' is given the same input by the environment as P(v'). A' is a simulation of A under Om(p), if for each v∈ V, there is v'∈ V' with P(v')=v that computes in each round r∈ℕ the state of v in A in this round. The simulation is strong, if each v'∈ V' computes the state of P(v') in each round.
A (strong) reinforcement of a graph G=(V,E) is a graph G'=(V',E'), a surjective mapping P: V'→ V, and a way of determining a scheduling algorithm A' for G' out of the scheduling algorithm A for G. The reinforcement is valid under the given fault model (Byz(p) or Om(p)) if A' is a (strong) simulation of A a.a.s.
Resources and Performance Measures.
We use the following performance measures.
* The probability p of independent node failures that can be sustained a.a.s.
* The ratio ν≜ |V'|/|V|, i.e., the relative increase in the number of nodes.
* The ratio η≜|E'|/|E|, i.e., the relative increase in the number of edges.
We now briefly discuss, from a practical point of view, why we do not explicitly consider further metrics that are of interest.
§.§ Other Performance Measures
* Latency:
As our reinforcements require (time-preserving) simulation relations, in terms of rounds, there is no increase in latency whatsoever.
However, we note that (i) we require all copies of a node to have access to the input (i.e., routing requests) of the simulated node and (ii) our simulations require to map received messages in G' to received messages of the simulated node in G.
Regarding (i), recall that it is beneficial to place all copies of a node in physical vicinity, implying that the induced additional latency is small.
Moreover, our constructions naturally lend themselves to support redundancy in computations as well, by having each copy of a node perform the tasks of its original;
in this case, (i) comes for free.
Concerning (ii), we remark that the respective operations are extremely simple;
implementing them directly in hardware is straightforward and will have limited impact on latency in most systems.
* Bandwidth/link capacities.
We consider the uniform setting in this work.
Taking into account how our simulations operate, one may use the ratio η as a proxy for this value.
* Energy consumption.
Regarding the energy consumption of links, the same applies as for bandwidth.
The energy nodes use for routing computations is the same as in the original system, except for the overhead induced by Point (ii) we discussed for latency.
Neglecting the latter, the energy overhead is in the range [min{ν,η},max{ν,η}].
* Hardware cost.
Again, neglecting the computational overhead of the simulation, the relative overhead lies in the range [min{ν,η},max{ν,η}]
In light of these considerations, we focus on p, ν, and η as key metrics for evaluating the performance of our reinforcement strategies.
§ STRONG REINFORCEMENT UNDER BYZ(P)
We now present and analyze valid reinforcements under the fault model Byz(p) on general graphs.
Given are the input network G=(V,E) and scheduling algorithm A. Fix a parameter f∈ℕ and set ℓ = 2f+1.
Reinforced Network G'
We set V'≜ V× [ℓ], where [ℓ]≜{1,…,ℓ}, and denote v_i≜ (v,i). Accordingly, P(v_i)≜ v. We define E'≜{(v',w')∈ V'× V' | (P(v'),P(w'))∈ E}.
Strong Simulation A' of A
Consider node v'∈ V'∖ F'. We want to maintain the invariant that in each round, each such node has a copy of the state of v=P(v') in A. To this end, v'
* initializes local copies of all state variables of v as in A,
* sends on each link (v',w')∈ E' in each round the message v would send on (P(v'),P(w')) when executing A, and
* for each neighbor w of P(v') and each round r, updates the local copy of the state of A as if v received the message that has been sent to v' by at least f+1 of the nodes w' with P(w')=w (each one using edge (w',v')).
Naturally, the last step requires such a majority to exist; otherwise, the simulation fails. We show that A' can be executed and simulates A provided that for each v∈ V, no more than f of its copies are in F'.
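For concreteness, the reinforced graph and the majority rule of the last step can be sketched as follows (a schematic in Python using networkx; the function names and the message representation are illustrative and not part of the routing system itself):

import networkx as nx
from collections import Counter

def reinforce_byz(G, f):
    # ell = 2f + 1 copies per node; every copy pair of each original edge is connected.
    ell = 2 * f + 1
    Gp = nx.DiGraph()
    Gp.add_nodes_from((v, i) for v in G.nodes for i in range(ell))
    Gp.add_edges_from(((v, i), (w, j))
                      for v, w in G.edges for i in range(ell) for j in range(ell))
    return Gp

def decode(received, f):
    # received: messages obtained from the copies of one original neighbor w
    # (faulty copies may have sent anything or nothing).
    if not received:
        return None
    msg, count = Counter(received).most_common(1)[0]
    return msg if count >= f + 1 else None   # None: no message backed by f + 1 copies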
If for each v∈ V, |{v_i∈ F'}|≤ f, then A' strongly simulates A.
We show the claim by induction on the round number r∈ℕ, where we consider the initialization to anchor the induction at r=0. For the step from r to r+1, observe that because all v'∈ V'∖ F' have a copy of the state of P(v') at the end of round r by the induction hypothesis, each of them can correctly determine the message P(v') would send over link (v,w)∈ E in round r+1 and send it over each (v',w')∈ E' with P(w')=w. Accordingly, each v'∈ V'∖ F' receives
the message A would send over (w,v) ∈ E
from each w'∈ V'∖ F' with P(w')=w (via the link (w',v')). By the assumption of the lemma, we have at least ℓ-f=f+1 such nodes, implying that v' updates the local copy of the state of A as if it received the same messages as when executing A in round r+1. Thus, the induction step succeeds and the proof is complete.
Resilience of the Reinforcement
We now examine how large the probability p can be for the precondition of Lemma <ref> to be satisfied a.a.s.
If p ∈ o(n^-1/(f+1)), the above construction is a valid strong reinforcement for the fault model Byz(p). If G contains Ω(n) nodes with non-zero outdegree, p∈ω(n^-1/(f+1)) implies that the reinforcement is not valid.
By Lemma <ref>, A' strongly simulates A if for each v∈ V, |{v_i∈ F'}|≤ f. If p ∈ o(n^-1/(f+1)), using ℓ=2f+1 and a union bound we see that the probability of this event is at least
1-n∑_{j=f+1}^{2f+1}\binom{2f+1}{j}p^j(1-p)^{2f+1-j}
≥ 1-n∑_{j=f+1}^{2f+1}\binom{2f+1}{j}p^j
≥ 1-n\binom{2f+1}{f+1}p^{f+1}∑_{j=0}^{f}p^j
≥ 1-n(2e)^f·p^{f+1}/(1-p) = 1-o(1).
Here, the second to last step uses that \binom{a}{b}≤ (ae/b)^b and the final step exploits that p∈ o(n^-1/(f+1)).
For the second claim, assume w.l.o.g. p≤ 1/3, as increasing p further certainly increases the probability of the system to fail. For any v∈ V, the probability that |{v_i∈ F'}|> f is independent of the same event for other nodes and larger than
\binom{2f+1}{f+1}p^{f+1}(1-p)^f ≥ (3/2)^f p^{f+1}(1-p)^f ≥ p^{f+1},
since \binom{a}{b}≥ (a/b)^b and 1-p≥ 2/3. Hence, if G contains Ω(n) nodes v with non-zero outdegree, p∈ω(n^-1/(f+1)) implies that the probability that there is such a node v for which |{v_i∈ F'}|> f is at least
1-(1-p^f+1)^Ω(n)⊆ 1-(1-ω(1/n))^Ω(n)= 1-o(1).
If there is such a node v, there are algorithms A and inputs so that A sends a message across some edge (v,w) in some round. If faulty nodes do not send messages in this round, the nodes w_i∈ V'∖ F' do not receive the correct message from more than f nodes v_i and the simulation fails. Hence, the reinforcement cannot be valid.
For constant p, one can determine suitable values of f∈Θ(log n) using Chernoff's bound. However, as our focus is on small (constant) overhead factors, we refrain from presenting the calculation here.
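The threshold behavior can also be checked numerically; the sketch below estimates the probability that some node loses more than f of its 2f+1 copies (function name and parameter values are illustrative):

import numpy as np

def p_some_node_overwhelmed(n, f, p, trials=1000, seed=0):
    # Estimate Pr[ some node has more than f of its 2f+1 copies in F' ] under Byz(p).
    rng = np.random.default_rng(seed)
    faulty_copies = rng.binomial(2 * f + 1, p, size=(trials, n))
    return (faulty_copies > f).any(axis=1).mean()

# For n = 10_000 and f = 1: p = 0.1/sqrt(n) = 1e-3 keeps the failure probability small (about 3%),
# while p = 10/sqrt(n) = 0.1 makes failure all but certain.
# print(p_some_node_overwhelmed(10_000, 1, 1e-3), p_some_node_overwhelmed(10_000, 1, 0.1))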
Efficiency of the Reinforcement
For f∈ℕ, we have that ν = ℓ = 2f+1 and η = ℓ^2 = 4f^2 + 4f + 1, while we can sustain p∈ o(n^-1/(f+1)).
In the special case of f=1, we improve from p∈ o(1/n) for the original network to p∈ o(1/√(n)) by tripling the number of nodes.
However, η = 9, i.e., while the number of edges also increases only by a constant, it seems too large in systems where the limiting factor is the amount of links that can be afforded.
§ STRONG REINFORCEMENT UNDER OM(P)
The strong reinforcement from the previous section is, trivially, also a strong reinforcement under Om(p). However, we can reduce the number of copies per node for the weaker fault model. Given are the input network G=(V,E) and scheduling algorithm A. Fix a parameter f∈ℕ and, this time, set ℓ = f+1.
Reinforced Network G'
We set V'≜ V× [ℓ] and denote v_i≜ (v,i). Accordingly, P(v_i)≜ v. We define E'≜{(v',w')∈ V'× V' | (P(v'),P(w'))∈ E}.
Strong Simulation A' of A
Each node[Nodes suffering omission failures still can simulate A correctly.] v'∈ V'
* initializes local copies of all state variables of v as in A,
* sends on each link (v',w')∈ E' in each round the message v would send on (P(v'),P(w')) when executing A, and
* for each neighbor w of P(v') and each round r, updates the local copy of the state of A as if v received the (unique) message that has been sent to v' by some of the nodes w' with P(w')=w (each one using edge (w',v')).
Naturally, the last step assumes that some such copy sends a message and that all w' with P(w')=w send the same message; otherwise, the simulation fails. We show that A' can be executed and simulates A provided that for each v∈ V, no more than f of its copies are in F'.
If for each v∈ V, |{v_i∈ F'}|≤ f, A' strongly simulates A.
Analogous to the one of Lemma <ref>, with the difference that faulty nodes may only omit sending messages and thus a single correct copy per node is sufficient.
Resilience of the Reinforcement
We now examine how large the probability p can be for the precondition of Lemma <ref> to be satisfied a.a.s.
The above construction is a valid strong reinforcement for the fault model Om(p) if p ∈ o(n^-1/(f+1)). If G contains Ω(n) nodes with non-zero outdegree, p∈ω(n^-1/(f+1)) implies that the reinforcement is not valid.
By Lemma <ref>, A' strongly simulates A if for each v∈ V, |{v_i∈ F'}|≤ f = ℓ -1. For v∈ V,
Pr[|{v_i | i∈ [ℓ]}∩ F'| = ℓ] = p^{f+1}.
By a union bound, A' thus simulates A with probability 1-o(1) if p∈ o(n^-1/(f+1)).
Conversely, if there are Ω(n) nodes with non-zero outdegree and p∈ω(n^-1/(f+1)), with probability 1-o(1) all copies of at least one such node v are faulty. If v sends a message under A, but all corresponding messages of copies of v are not sent, the simulation fails. This shows that in this case the reinforcement is not valid.
Efficiency of the Reinforcement
For f∈ℕ, we have that ν = ℓ = f+1 and η = ℓ^2 = f^2 + 2f + 1, while we can sustain p∈ o(n^-1/(f+1)).
In the special case of f=1, we improve from p∈ o(1/n) for the original network to p∈ o(1/√(n)) by doubling the number of nodes and quadrupling the number of edges.
§ MORE EFFICIENT REINFORCEMENT
In this section, we reduce the overhead in terms of edges at the expense of obtaining reinforcements that are not strong. We stress that the obtained trade-off between redundancy (ν and η) and the sustainable probability of faults p is asymptotically optimal: as we require to preserve arbitrary routing schemes in a blackbox fashion, we need sufficient redundancy on the link level to directly simulate communication. From this observation, both for Byz(p) and Om(p) we can readily derive trivial lower bounds on redundancy that match the constructions below up to lower-order terms.
§.§ A Toy Example
Before we give the construction, we give some intuition on how we can reduce the number of required edges. Consider the following simple case. G is a single path of n vertices (v_1,…, v_n), and the schedule requires that in round i, a message is sent from v_i to v_i+1. We would like to use a “budget” of only n additional vertices and an additional (1+ε)m=(1+ε)(n-1) links, assuming the fault model Om(p). One approach is to duplicate the path and extend the routing scheme accordingly. We have already used our entire budget apart from ε m links! This reinforcement is valid as long as one of the paths succeeds in delivering the message all the way.
The probability that one of the paths “survives” is
1-(1-(1-p)^n)^2 ≤ 1-(1-e^-pn)^2 ≤ 2e^-pn,
where we used that 1-x≤ e^-x for any x∈ℝ.
Hence, for any p = ω(1/n), the survival probability is o(1). In contrast, the strong reinforcement with ℓ=2 (i.e., f=1) given in <ref> sustains any p∈ o(1/√(n)) with probability 1-o(1); however, while it adds n nodes only, it requires 3m additional edges.
We need to add some additional edges to avoid that the likelihood of the message reaching its destination drops too quickly. To this end, we use the remaining ε m edges to “cross” between the two paths every h≜ 2/ε hops (assume h is an integer), cf. Figure <ref>.
This splits the path into segments of h nodes each. As long as, for each such segment, in one of its copies all nodes survive, the message is delivered. For a given segment, this occurs with probability 1-(1-(1-p)^h)^2≥ 1-(ph)^2. Overall, the message is thus delivered with probability at least (1-(ph)^2)^n/h≥ 1-nhp^2.
As for any constant ε, h is a constant, this means that the message is delivered a.a.s. granted that p∈ o(1/√(n))!
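A small Monte Carlo experiment illustrates the effect of the crossing edges (a sketch under all-or-nothing node failures on the two path copies; the parameter values are illustrative):

import numpy as np

def delivery_prob(n, h, p, crossed, trials=20_000, seed=1):
    # Probability that a message traverses a duplicated n-node path when nodes fail
    # independently with probability p. With crossed=True, the copies are linked every
    # h hops, so each h-node segment only needs one fault-free copy.
    rng = np.random.default_rng(seed)
    ok = 0
    for _ in range(trials):
        f = rng.random((2, n)) < p                       # faults on both path copies
        if crossed:
            segs = f.reshape(2, n // h, h).any(axis=2)   # is a segment broken on copy 0/1?
            ok += not np.logical_and(segs[0], segs[1]).any()
        else:
            ok += (not f[0].any()) or (not f[1].any())   # need one fully intact copy
    return ok / trials

# For n = 1024, h = 8, p = 0.003: delivery_prob(..., crossed=True) is about 0.93
# (the bound 1 - n*h*p^2 gives 0.926), while the plain duplicate only achieves about 0.09.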
The reader is cautioned to not conclude from this example that random sampling of edges will be sufficient for our purposes in more involved graphs. Since we want to handle arbitrary routing schemes, we have no control over the number of utilized routing paths. As the latter is exponential in n, the probability that a fixed path is not “broken” by F would have to be exponentially small in n. Moreover, trying to leverage Lovász Local Lemma for a deterministic result runs into the problem that there is no (reasonable) bound on the number of routing paths that pass through a single node, i.e., the relevant random variables (i.e., whether a path “survives”) exhibit lots of dependencies.
§.§ Partitioning the Graph
To apply the above strategy to other graphs, we must take into account that there can be multiple intertwined routing paths. However, the key point in the above example was not that we had path segments, but rather that we partitioned the nodes into constant-size regions and added few edges inside these regions, while fully connecting the copies of nodes at the boundary of the regions.
In general, it is not possible to partition the nodes into constant-sized subsets such that only a very small fraction of the edges connects different subsets; any graph with good expansion is a counter-example. Fortunately, many network topologies used in practice are good candidates for our approach. In the following, we will discuss grid networks and minor free graphs, and show how to apply the above strategy in each of these families of graphs.
Grid Networks
We can generalize the above strategy to hypercubes of dimension d>1.
A q-ary d-dimensional hypercube has node set [q]^d and two nodes are adjacent if they agree on all but one index i∈ [d], for which |v_i-w_i|=1.
For any h,d∈ℕ, assume that h divides q∈ℕ and set ε=1/h. Then the q-ary d-dimensional hypercube can be partitioned into (q/h)^d regions of h^d nodes each such that at most an ε-fraction of the edges connects nodes from different regions.
We subdivide the node set into h-ary d-dimensional subcubes; for an example of the subdivision of the node set of a 6-ary 2-dimensional hypercube into 2-ary 2-dimensional subcubes see Figure <ref>. There are (q/h)^d such subcubes. The edges crossing the regions are those connecting the faces of adjacent subcubes. We attribute to each subcube one face per dimension (the opposite face being accounted for by the adjacent subcube in that direction). Thus, we have at most dh^{d-1} crossing edges per subcube. The total number of edges per subcube is these crossing edges plus the d(h-1)h^{d-1} edges within the subcube. Overall, the fraction of cross-edges is thus at most 1/(1+(h-1))=1/h, as claimed.
Note that the above result and proof extend to tori, which also include the “wrap-around” edges connecting the first and last nodes in any given dimension.
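The subdivision is easy to realize and verify directly; the sketch below partitions the q-ary d-dimensional hypercube (without wrap-around edges) into h-ary subcubes and measures the cross-edge fraction:

from itertools import product

def grid_partition(q, d, h):
    # Map each node of the q-ary d-dimensional hypercube to its h-ary subcube (region).
    assert q % h == 0
    return {v: tuple(x // h for x in v) for v in product(range(q), repeat=d)}

def cross_edge_fraction(q, d, h):
    region = grid_partition(q, d, h)
    total = cross = 0
    for v in product(range(q), repeat=d):
        for i in range(d):
            if v[i] + 1 < q:                              # grid edge along dimension i
                w = v[:i] + (v[i] + 1,) + v[i + 1:]
                total += 1
                cross += region[v] != region[w]
    return cross / total

# cross_edge_fraction(64, 2, 8) is about 0.111, below the bound 1/h = 0.125.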
Minor free Graphs
Another general class of graphs that can be partitioned in a similar fashion are minor free bounded-degree graphs.
For a fixed graph H, H is a minor of G if H is isomorphic to a graph that can be obtained by zero or more
edge contractions on a subgraph of G. We say that a graph G is H-minor free if H is not a minor of G.
For any such graph, we can apply a corollary from <cit.>, which is based on <cit.>, to construct a suitable partition.
Let H be a fixed graph. There is a constant c(H) > 1 such that for every ε∈ (0, 1] and
every H-minor free graph G = (V, E) with degree bounded by Δ, a partition R_1,…,R_k⊆ V with the following properties can be found in time O(|V|^3/2):
* ∀ i : |R_i|≤ c(H)Δ^2/ε^2,
* ∀ i : the subgraph induced by R_i in G is connected,
* |{(u,v) | u ∈ R_i, v ∈ R_j, i≠ j}|≤ε· |V|.
Grids and tori of dimension d>2 are not minor-free.
We note that this construction is not satisfactory, as it involves large constants. It demonstrates that a large class of graphs is amenable to the suggested approach, but it is advisable to search for optimized constructions for more specialized graph families before applying the scheme.
§.§ Reinforcement
Equipped with a suitable partition of the original graph G=(V,E) into disjoint regions R_1,…,R_k⊆ V, we reinforce as follows.
As before, we set V'≜ V× [ℓ], denote v_i≜ (v,i), define P(v_i)≜ v, and set ℓ≜ f+1. However, the edge set of G' differs. For e=(v,w)∈ E, we define
E_e'≜{(v_i,w_i) | i∈ [ℓ]} if v and w lie in the same region, and
E_e'≜{(v_i,w_j) | i,j∈ [ℓ]} if v and w lie in different regions,
and we set E'≜⋃_e∈ E E_e'.
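Given such a partition, the reduced edge set can be generated as sketched below (networkx; the mapping region from original nodes to region identifiers is assumed to be given):

import networkx as nx

def reinforce_with_partition(G, region, f):
    # ell = f + 1 copies per node (Om(p) variant); intra-region edges are copied in
    # parallel only, inter-region edges receive all ell^2 copy pairs.
    ell = f + 1
    Gp = nx.DiGraph()
    Gp.add_nodes_from((v, i) for v in G.nodes for i in range(ell))
    for v, w in G.edges:
        if region[v] == region[w]:
            Gp.add_edges_from(((v, i), (w, i)) for i in range(ell))
        else:
            Gp.add_edges_from(((v, i), (w, j)) for i in range(ell) for j in range(ell))
    return Gp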
Simulation under Om(p)
Consider v∈ V. We want to maintain the invariant that in each round, some v_i has a copy of the state of v in A. To this end, v'∈ V'
* initializes local copies of all state variables of v as in A and sets a local flag ok_{v'}≜ true;
* sends on each link (v',w')∈ E' in each round
* the message M, if P(v') would send M via (P(v'),P(w')) when executing A and ok_{v'}= true,
* a special symbol ⊥, if ok_{v'}= true but P(v') would not send a message via (P(v'),P(w')) according to A, or
* no message, if ok_{v'}= false;
* if, in a given round, ok_{v'}= true and v' receives for each neighbor w of P(v') a message from some w_j∈ V' with P(w_j)=w, it updates the local copy of the state of A as if P(v') received this message (interpreting ⊥ as no message); and
* if this is not the case, v' sets ok_{v'}= false.
We claim that as long as ok_{v'}= true at v', v' indeed has a copy of the state of P(v') in the corresponding execution of A; therefore, it can send the right messages and update its state variables correctly.
Suppose that for each k'∈ [k], there is some i∈ [ℓ] so that {v_i | v∈ R_k'}∩ F'=∅. Then A' simulates A.
Select for each R_k', k'∈ [k], some i such that {v_i | v∈ R_k'}∩ F'=∅ and denote by C the union of all these nodes. As P(C)=V, it suffices to show that each v'∈ C successfully maintains a copy of the state of P(v') under A. However, we also need to make sure that all messages, not only the ones sent by nodes in C, are “correct,” in the sense that a message sent over edge (v',w')∈ E' in round r would be sent by A over (P(v'),P(w')) (where ⊥ means no message is sent). Therefore, we will argue that the nodes in the set T_r≜{v'∈ V' | ok_{v'}= true in round r} know the state of their counterpart P(v') under A up to and including round r∈ℕ. As nodes v' with ok_{v'}= false do not send any messages, this invariant guarantees that all sent messages are correct in the above sense.
We now show by induction on the round number r∈ℕ that (i) each v'∈ T_r knows the state of P(v') under A and (ii) C⊆ T_r. Due to initialization, this is correct initially, i.e., in “round 0;” we use this to anchor the induction at r=0, setting T_0≜ V'.
For the step from r to r+1, note that because all v'∈ T_r have a copy of the state of P(v') at the end of round r by the induction hypothesis, each of them can correctly determine the message P(v') would send over link (v,w)∈ E in round r+1 and send it over each (v',w')∈ E' with P(w')=w. Recall that v'∈ T_r+1 if and only if v'∈ T_r and for each (w,P(v'))∈ E there is at least one w'∈ V' with P(w')=w from which v' receives a message. Since under Om(p) nodes in F' may only omit sending messages, it follows that v'∈ T_r+1 correctly updates the state variables of P(v'), just as P(v') would in round r+1 of A.
It remains to show that C⊆ T_r+1. Consider v_i∈ C and (w,v)∈ E. If v,w∈ R_k' for some k'∈ [k], then w_i∈ C by definition of C. Hence, by the induction hypothesis, w_i∈ T_r, and w_i will send the message w would send in round r+1 of A over (w,v)∈ E to v_i, using the edge (w_i,v_i)∈ E'. If this is not the case, then there is some j∈ [ℓ] such that w_j∈ C and we have that (w_j,v_i)∈ E'. Again, v_i will receive the message w would send in round r+1 of A from w_j. We conclude that v_i receives at least one copy of the message from w for each (w,v)∈ E, implying that v_i∈ T_r+1 as claimed. Thus, the induction step succeeds and the proof is complete.
Figure <ref> provides an example of a comparison between a network, a naive duplication of that network, and its reinforcement. The simulation process of sending a message in the same sample network is shown in Figure <ref>.
Resilience of the Reinforcement
We denote R≜max_k'∈ [k]{|R_k'|} and r≜min_k'∈ [k]{|R_k'|}.
The above construction is a valid reinforcement for Om(p) if p ∈ o((n/r)^-1/(f+1)/R). Moreover, if G contains Ω(n) nodes with non-zero outdegree and R∈ O(1), p∈ω(n^-1/(f+1)) implies that the reinforcement is not valid.
By Lemma <ref>, A' simulates A if for each k'∈ [k], there is some i∈ [ℓ] so that {v_i | v∈ R_k'}∩ F'=∅. For fixed k' and i∈ [ℓ],
Pr[{v_i | v∈ R_k'}∩ F'=∅]=(1-p)^{|R_k'|}≥ 1-Rp.
Accordingly, the probability that for a given k' the precondition of the lemma is violated is at most (Rp)^f+1. As k≤ n/r, taking a union bound over all k' yields that with probability at least 1-n/r· (Rp)^f+1, A' simulates A. Therefore, the reinforcement is valid if p ∈ o((n/r)^-1/(f+1)/R).
Now assume that r≤ R∈ O(1) and also that p∈ω(n^-1/(f+1))⊆ω((n/r)^-1/(f+1)/R). Thus, for each v∈ V, all v'∈ V' with P(v')=v simultaneously end up in F' with probability ω(1/n). Therefore, if Ω(n) nodes have non-zero outdegree, with a probability in 1-(1-ω(1/n))^Ω(n)=1-o(1) for at least one such node v all its copies end up in F'. In this case, the simulation fails if v sends a message under A, but all copies of v' suffer omission failures in the respective round.
Efficiency of the Reinforcement
For f∈, we have that ν = ℓ = f+1 and η = (1-ε)ℓ + εℓ^2 = 1+(1+ε)f+ε f^2, while we can sustain p∈ o(n^-1/(f+1)).
In the special case of f=1 and ε=1/5, we improve from p∈ o(1/n) for the original network to p∈ o(1/√(n)) by doubling the number of nodes and multiplying the number of edges by 2.4.
For hypercubes and tori, the asymptotic notation for p does not hide huge constants.
Lemma <ref> shows that h enters the threshold in Theorem <ref> as h^-d+1/2.
For the cases of d=2 and d=3, which are the most typical (for d>3 grids and tori suffer from large distortion when embedding them into 3-dimensional space), the threshold on p degrades by factors of 11.2 and 55.9, respectively.
§.§ Simulation under Byz(p)
The same strategy can be applied for the stronger fault model Byz(p), if we switch back to having ℓ=2f+1 copies and nodes accepting the majority message among all messages from copies of a neighbor in the original graph.
Consider node v∈ V. We want to maintain the invariant that in each round, a majority among the nodes v_i, i∈ [ℓ], has a copy of the state of v in A. For v'∈ V' and (w,P(v'))∈ E, set N_v'(w)≜{w'∈ V' | P(w')=w and (w',v')∈ E'}. With this notation, v' behaves as follows.
* It initializes local copies of all state variables of v as in A.
* It sends in each round on each link (v',w')∈ E' the message v would send on (P(v'),P(w')) when executing A (if v' cannot compute this correctly, it may send an arbitrary message).
* It updates its state in round r as if it received, for each (w,P(v'))∈ E, the message the majority of nodes in N_v'(w) sent.
Suppose for each k'∈ [k], there are at least f+1 indices i∈ [ℓ] so that {v_i | v∈ R_k'}∩ F'=∅. Then A' simulates A.
Select for each R_k', k'∈ [k], f+1 indices i such that {v_i | v∈ R_k'}∩ F'=∅ and denote by C the union of all these nodes. We claim that each v'∈ C successfully maintains a copy of the state of P(v') under A. We show this by induction on the round number r∈ℕ, anchored at r=0 due to initialization.
For the step from r to r+1, observe that because all v'∈ C have a copy of the state of P(v') at the end of round r by the induction hypothesis, each of them can correctly determine the message P(v') would send over link (v,w)∈ E in round r+1 and send it over each (v',w')∈ E' with P(w')=w. For each v'∈ C and each (w,P(v')), we distinguish two cases. If P(v') and w are in the same region, let i be such that v'=v_i. In this case, N_v'(w)={w_i} and, by definition of C, w_i∈ C. Thus, by the induction hypothesis, w_i sends the correct message in round r+1 over the link (w_i,v'). On the other hand, if P(v') and w are in different regions, N_v'(w)={w_i | i∈ [ℓ]}. By the definition of C and the induction hypothesis, the majority of these nodes (i.e., at least f+1 of them) sends the correct message w would send over (w,P(v')) in round r+1 when executing A. We conclude that v' correctly updates its state, completing the proof.
Resilience of the Reinforcement
As before, denote R≜max_k'∈ [k]{|R_k'|} and r≜min_k'∈ [k]{|R_k'|}.
The above construction is a valid reinforcement for the fault model Byz(p) if p ∈ o((n/r)^{-1/(f+1)}/R). Moreover, if G contains Ω(n) nodes with non-zero outdegree, p∈ω(n^{-1/(f+1)}) implies that the reinforcement is not valid.
By Lemma <ref>, A' simulates A if for each k'∈ [k], there are at least f+1 indices i∈ [ℓ] so that {v_i | v∈ R_k'}∩ F'=∅. For fixed k' and i∈ [ℓ],
ℙ[{v_i | v∈ R_k'}∩ F'=∅]=(1-p)^{|R_k'|}≥ 1-Rp.
Thus, analogous to the proof of Theorem <ref>, the probability that for a given k' the condition is violated is at most
∑_{j=f+1}^{2f+1}\binom{2f+1}{j}(Rp)^j(1-Rp)^{2f+1-j}
= (2e)^f(Rp)^{f+1}(1+o(1)).
By a union bound over the at most n/r regions, we conclude that the precondition p ∈ o((n/r)^{-1/(f+1)}/R) guarantees that the simulation succeeds a.a.s.
For the second statement, observe that for each node v∈ V of non-zero outdegree,
ℙ[|{v_i | i∈[ℓ]}∩ F'|≥ f+1]≥ p^{f+1}= ω(1/n).
Thus, a.a.s. there is such a node v. Let (v,w)∈ E and assume that A sends a message over (v,w) in some round. If v and w are in the same region, the faulty nodes sending an incorrect message will result in a majority of the 2f+1=|{w'∈ V' | P(w')=w}| copies of w attaining an incorrect state (of the simulation), i.e., the simulation fails. Similarly, if w is in a different region than v, for each copy of w the majority message received from N_w'(v) will be incorrect, resulting in an incorrect state.
Note that the probability bounds in Theorem <ref> are essentially tight in case R∈ O(1). A more careful analysis establishes similar results for r∈Θ(R)∩ω(1), by considering w.l.o.g. the case that all regions are connected and analyzing the probability that within a region, there is some path so that for at least f+1 copies of the path in G', some node on the path is faulty. However, as again we consider the case R∈ O(1) to be the most interesting one, we refrain from generalizing the analysis.
Efficiency of the Reinforcement
For f∈ℕ, we have that ν = ℓ = 2f+1 and η = (1-ε)ℓ + εℓ^2 = 1+(2+2ε)f+4ε f^2, while we can sustain p∈ o(n^{-1/(f+1)}).
In the special case of f=1 and ε=1/5, we improve from p∈ o(1/n) for the original network to p∈ o(1/√(n)) by tripling the number of nodes and multiplying the number of edges by 4.2.
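As a quick sanity check of these overhead figures, the following sketch (ours, purely illustrative) evaluates the node overhead ν and edge overhead η for both fault models, assuming an ε-fraction of the edges crosses between regions:

def overheads(f, eps, byzantine=False):
    # Number of copies per node: f+1 for omission faults, 2f+1 for Byzantine faults.
    l = 2 * f + 1 if byzantine else f + 1
    nu = l                                  # node overhead
    eta = (1 - eps) * l + eps * l ** 2      # edges crossing regions get l^2 copies
    return nu, eta

print(overheads(1, 0.2))                  # (2, 2.4)  -- omission faults
print(overheads(1, 0.2, byzantine=True))  # (3, 4.2)  -- Byzantine faults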
§ EMPIRICAL EVALUATION
We have shown that our approach from <ref> works particularly well
for graphs that admit a certain partitioning, such as
sparse graphs (e.g., minor-free graphs) or low-dimensional
hypercubes. To provide some empirical motivation for the relevance
of these examples, we note that the topologies collected
in the Rocketfuel <cit.> and Internet Topology Zoo <cit.> projects
are all sparse: almost a third (namely 32%) of the topologies even belong to the family of
cactus graphs, and roughly half of the graphs (49%) are outerplanar <cit.>.
To complement our analytical results and study the reinforcement cost
of our approach in realistic networks, we conducted simulations on
the around 250 networks from the Internet Topology Zoo.
While we have a fairly good understanding of the different network topologies
deployed in practice, unfortunately, little is known about the state-of-the-art protection mechanisms used by network operators today. Network operators are typically reluctant to share details about their infrastructure for security reasons, rendering a comparative evaluation difficult. That said, it seems relatively safe to assume that the most robust solutions rely on a one-by-one (“A/B”) replication strategy which allows traffic to be completely rerouted to a backup network; this baseline requires doubling resources and can hence be fairly costly.
In the following, we will report on our main insights.
Due to space constraints, we focus on the case of omission faults;
the results for Byzantine faults follow the same general trends.
Recall that we replace each node by f+1 copies, and each edge with endpoints in different regions of the partition by (f+1)^2 copies; every other edge is replaced by f+1 copies.
Our goal is to choose the partition so that it minimizes the edge overhead of the new network while maximizing the node-failure probability p that the network can sustain.
The fault probability of the network for given p, f and partitions with l_1, l_2, ..., l_k nodes is calculated as
1 - ∏_{i=1}^{k} [1-(1-(1-p)^{l_i})^{f+1}].
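For concreteness, this formula and its inversion are easy to script; the following Python sketch (function names are ours and not tied to any released artifact) computes the failure probability and finds, by bisection, the largest node-fault probability p that meets a target reliability:

def network_failure_prob(p, f, part_sizes):
    # 1 - prod_i [ 1 - (1 - (1-p)^{l_i})^{f+1} ] for region sizes l_1, ..., l_k.
    ok = 1.0
    for l in part_sizes:
        region_fails = (1 - (1 - p) ** l) ** (f + 1)
        ok *= 1 - region_fails
    return 1 - ok

def max_sustainable_p(f, part_sizes, target=0.01, iters=60):
    # The failure probability is monotone in p, so bisection suffices.
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if network_failure_prob(mid, f, part_sizes) <= target:
            lo = mid
        else:
            hi = mid
    return lo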
In the following, as a case study, we fix a target network failure probability of at most 0.01.
That is, the reinforced network is guaranteed to operate correctly with a probability of 99%, and we aim to maximize the probability p with which nodes independently fail subject to this constraint.
For this fixed target resilience of the network, we determine the value of p matching it using the above formula.
We remark that the qualitative behavior for smaller probabilities of network failure is the same, where the more stringent requirement means that our scheme outperforms naive approaches for even smaller network sizes.
For the examined topologies, it turned out that no specialized tools were needed to find good partitionings.
We considered a Spectral Graph Partitioning tool <cit.> and Metis <cit.>, a partitioning algorithm available through a Python library.
For small networks (less than 14 nodes), we further implemented a brute-force algorithm,
which provides an optimal baseline.
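To illustrate the spectral approach, the following is a minimal recursive Fiedler-vector bisection sketch, not the actual tool used in the evaluation; the stopping rule and function names are our choices:

import numpy as np

def fiedler_split(adj, nodes):
    # Split `nodes` by the median of the Fiedler vector (second-smallest
    # eigenvector of the Laplacian of the induced subgraph).
    A = adj[np.ix_(nodes, nodes)]
    L = np.diag(A.sum(axis=1)) - A
    _, eigvecs = np.linalg.eigh(L)
    fiedler = eigvecs[:, 1]
    median = np.median(fiedler)
    left = [n for n, v in zip(nodes, fiedler) if v <= median]
    right = [n for n in nodes if n not in left]
    return left, right

def spectral_partition(adj, nodes, max_size):
    # Recursively bisect until every region has at most `max_size` nodes.
    if len(nodes) <= max_size:
        return [nodes]
    left, right = fiedler_split(adj, nodes)
    if not left or not right:   # degenerate split: stop rather than recurse forever
        return [nodes]
    return spectral_partition(adj, left, max_size) + spectral_partition(adj, right, max_size)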
Figure <ref> shows the resulting edge overheads for the different partitioning algorithms
as a function of p and for f=3, at hand of a specific example.
For reference, we added the value of p for the original graph (f=0) to the plot, which has an overhead factor of 1 (no redundancy).
As to be expected, for each algorithm and the fixed value of f=3, as the number of components in partitionings increases, the edge overhead and p
increase as well.
The “Singleton partition” point for f=3 indicates the extreme case where the size of the components is equal to 1 and the approach becomes identical to strong reinforcement (see <ref>);
hence, it has an edge overhead of (f+1)^2=16.
The leftmost points of the f=3 curves correspond to the other extreme of “partitioning” the nodes into a single set, resulting in naive replication of the original graph, at an edge overhead of f+1=4.
We observed this general behavior for networks of all sizes under varying f, where the spectral partitioning consistently outperformed Metis, and both performed very close to the brute force algorithm on networks to which it was applicable.
We concluded that the spectral partitioning algorithm is sufficient to obtain results that are close to optimal for the considered graphs, most of which have fewer than 100 nodes, with only a handful of examples with size between 100 and 200.
Accordingly, in the following we confine the presentation to the results obtained using the spectral partitioning algorithm.
In Figure <ref>, we take a closer look on how the edge overhead
depends on f, at hand of a network of 33 nodes. Note that the partitionings do not depend on f, causing the 10 curves to have similar shape.
As f increases, the node overhead, edge overhead, and p for the reinforced networks increase.
We can see that it is advisable to use larger values of f only if the strong reinforcement approach for smaller f cannot push p to the desired value.
We also see that f=1 is sufficient to drive p up to more than 6%, improving by almost two orders of magnitude over the roughly 0.01/33≈ 0.03% the unmodified network can tolerate with probability 99%.
While increasing f further does increase resilience, the relative gains are much smaller, suggesting that f=1 is the most interesting case.
Following up on this, in Figure <ref> we plot p for all existing networks in the Topology Zoo using the spectral graph partitioning algorithm and f=1.
Specifically, for each network, we calculated the value of p on a set of reinforced networks with different node and edge overheads. Naturally, with increasing network size, the value of p that can be sustained at a given overhead becomes smaller. Note, however, that naive replication quickly loses ground as n becomes larger. In particular, already for about 20 nodes, an edge overhead of 3 with our approach is better than adding two redundant copies of the original network, resulting in more nodes, but the same number of edges. Beyond roughly 50 nodes, our approach outperforms two independent copies of the network using fewer edges, i.e., an edge overhead of 2.5.
To show more clearly when our approach outperforms naive network replication, Figure <ref> plots the relative gain in the probability p of node failure that can be sustained compared to the original network.
This plot is similar to the previous one. The y-axis now represents p divided by the value of p for the original graph. We now see that naive replication provides an almost constant improvement across the board. This is due to the fact that under this simple scheme, the reinforcement fails as soon as in each copy of the graph at least one node fails, as it is possible that a routing path in the original graph involves all nodes corresponding to failed copies.
Denote by p_k the probability of node failure that can be sustained with 99% reliability when simply using k copies of the original graph (in particular p_1≈ 0.01/n). For small k, the probability (1-p_k)^n that a single copy of the original graph is fault-free needs to be close to 1. Hence, we can approximate (1-p_k)^n≈ 1-p_k n. The probability that all copies contain a failing node is hence approximately (p_kn)^k. Thus, p_1 n ≈ 0.01≈ (p_k n)^k, yielding that
p_k/p_1 = (p_k n)/(p_1 n) ≈ 0.01^{1/k}/0.01 = 100^{1-1/k}.
In particular, we can expect ratios of roughly 10 for k=2 and 21.5 for k=3, respectively. The small discrepancy to the actual numbers is due to the approximation error, which would be smaller for higher target resilience.
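The arithmetic behind these estimates is easy to reproduce (an illustrative check; n = 33 matches the example network above):

n, target = 33, 0.01
for k in (1, 2, 3):
    p_k = target ** (1 / k) / n             # from (p_k * n)^k ~ target
    print(k, round(p_k, 5), round(100 ** (1 - 1 / k), 1))
# the printed ratios for k = 2, 3 are ~10.0 and ~21.5, as stated above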
As the plot clearly shows, our method achieves a relative improvement that increases with n, as predicted by Theorem <ref>.
In conclusion, we see that our approach promises substantial improvements over the naive replication strategy,
which is commonly employed in mission-critical networks
(e.g., using dual planes as in RFC 7855 <cit.>).
§ DISCUSSION
In the previous sections, we have established that constant-factor redundancy can significantly increase reliability of the communication network in a blackbox fashion. Our constructions in <ref> are close to optimal. Naturally, one might argue that the costs are still too high. However, apart from pointing out that the costs of using sufficiently reliable components may be even higher, we would like to raise a number of additional points in favor of the approach.
Node Redundancy
When building reliable large-scale systems, fault-tolerance needs to be considered on all system levels. Unless nodes are sufficiently reliable, node replication is mandatory, regardless of the communication network. In other words, the node redundancy required by our construction may not be an actual overhead to begin with. When taking this point of view, the salient question becomes whether the increase in links is acceptable. Here, the first observation is that any system employing node redundancy will need to handle the arising additional communication, incurring the respective burden on the communication network. Apart from still having to handle the additional traffic, however, the system designer now needs to make sure that the network is sufficiently reliable for the node redundancy to matter. Our simple schemes then provide a means to provide the necessary communication infrastructure without risking to introduce, e.g., a single point of failure during the design of the communication network; at the same time, the design process is simplified and modularized.
Dynamic Faults
Because of the introduced fault-tolerance, faulty components do not impede the system as a whole, so long as the simulation of the routing scheme can still be carried out. Hence, one may repair faulty nodes at runtime. If T is the time for detecting and fixing a fault, we can discretize time in units of T and denote by p_T the (assumed to be independent) probability that a node is faulty in a given time slot, which can be bounded by twice the probability to fail within T time. Then the failure probabilities we computed in our analysis directly translate to an upper bound on the expected fraction of time during which the system is not (fully) operational.
Adaptivity
The employed node- and link-level redundancy may be required for mission-critical applications only, or the system may run into capacity issues. In this case, we can exploit that the reinforced network has a very simple structure, making various adaptive strategies straightforward to implement.
* One might use a subnetwork only, deactivating the remaining nodes and links, such that a reinforced network for smaller f (or a copy of the original network, if f=0) remains. This saves energy.
* One might subdivide the network into several smaller reinforced networks, each of which can perform different tasks.
* One might leverage the redundant links to increase the overall bandwidth between (copies of) nodes, at the expense of reliability.
* The above operations can be applied locally; e.g., in a congested region of the network, the link redundancy could be used for additional bandwidth. Note that if only a small part of the network is congested, the overall system reliability will not deteriorate significantly.
Note that the above strategies can be refined and combined according to the profile of requirements of the system.
§ RELATED WORK
Robust routing is an essential feature of dependable
communication networks, and has been explored
intensively in the literature already.
*Resilient Routing on the Network Layer
In contrast to our approach,
existing resilient routing mechanisms on the network layer
are typically reactive.
They
can be categorized
according to whether they are supported in the
control plane, e.g.,
<cit.>,
or in the data plane, e.g., <cit.>,
see also the recent survey <cit.>.
These mechanisms are usually designed to cope with link failures.
Resilient routing algorithms in the control plane
typically rely on a global recomputation of paths
(either
centralized <cit.>,
distributed <cit.>
or both <cit.>),
or on techniques based on link reversal <cit.>, and can
hence re-establish policies relatively easily;
however, they come at the price of a relatively high restoration time
<cit.>.
Resilient routing algorithms in the dataplane can react to failures
significantly faster <cit.>; however,
due to the local nature of the failover, it is challenging to
maintain network policies or even a high degree of resilience <cit.>.
In this line of literature,
the network is usually given and the goal is to re-establish
routing paths quickly, ideally as long as the underlying physical
network is connected (known as perfect resilience <cit.>).
In contrast, in this paper we ask the question of how to proactively enhance the
network in order to tolerate failures, rather than reacting to them. In particular, we consider more general failures,
beyond link failures and benign faults.
We argue that such a re-enforced
network simplifies routing as it is not necessary to compute new paths.
The resulting problems are very different in nature, also in terms
of the required algorithmic techniques.
*Local Faults
In this paper, we consider more general failure models
than typically studied in the resilient routing literature above,
as our model is essentially a local fault model.
Byzantine faults were studied in <cit.> in the context of broadcast and consensus problems. Unlike its global classical counterpart, the f-local Byzantine adversary can control at most f neighbors of each vertex. This more restricted adversary gives rise to more scalable solutions, as the problems can be solved in networks of degree O(f); without this restriction, degrees need to be proportional to the total number of faults in the network.
We also limit our adversary in its selection of Byzantine nodes, by requiring that the faulty nodes are chosen independently at random. As illustrated, e.g., by Lemma <ref> and Theorem <ref>, there is a close connection between the two settings. Informally, we show that certain values of p correspond, asymptotically almost surely (a.a.s), to an f-local Byzantine adversary. However, we diverge from the approach in <cit.> in that we require a fully time-preserving simulation of a fault-free routing schedule, as opposed to solving the routing task in the reinforced network from scratch.
*Fault-Tolerant Logical Network Structures
Our work is reminiscent of literature on
the design of fault-tolerant network structures.
In this area (see <cit.> for a survey), the goal is to compute a sub-network that has a predefined property, e.g., containing a minimum spanning tree. More specifically, the sub-network should sustain adversarial omission faults without losing the property. Hence, the sub-network is usually augmented (with edges) from the input network in comparison to its corresponding non-fault-tolerant counterpart. Naturally, an additional goal is to compute a small such sub-network. In contrast, we design a network that is reinforced (or augmented) by additional edges and nodes so that a given routing scheme can be simulated while facing randomized Byzantine faults. As we ask for being able to “reproduce” an arbitrary routing scheme (in the sense of a simulation relation), we cannot rely on a sub-network.
The literature also considered random fault models.
In the network reliability problem, the goal is to compute the probability that the (connected) input network becomes disconnected under random independent edge failures. The reliability of a network is the probability that the network remains connected after this random process.
Karger <cit.> gave a fully polynomial randomized approximation scheme for the network reliability problem.
Chechik et al. <cit.> studied a variant of the task, in which the goal is to compute a sparse sub-network that approximates the reliability of the input network.
We, on the other hand, construct a reinforced network that increases the reliability of the input network;
note also that our requirements are much stricter than merely preserving connectivity.
*Self-healing systems
In the context of self-healing routing (e.g., Castañeda et al. <cit.>), researchers have studied a model where an adversary removes nodes in an online fashion, one node in each time step (at most n such steps). In turn, the distributed algorithm adds links and sends at most O(Δ) additional messages to overcome the inflicted omission fault.
Ideally, the algorithm is “compact”: each node's storage is limited to o(n) bits.
A nice property of the algorithm in <cit.> is that the degrees are increased by at most 3. For our purposes, an issue is that the diameter is increased by a logarithmic factor of the maximum initial degree, and hence the same holds for the latency of the routing scheme. Instead, we design a network that is “oblivious” to faults in the sense that the network is “ready” for independent random faults up to a certain probability, without the need to reroute messages or any other reconfiguration. Moreover, our reinforcements tolerate Byzantine faults and work for arbitrary routing schemes. We remark that compact self-healing routing schemes also deal with the update time of the local data structures following the deletion of a node; no such update is required in our approach.
*Robust Peer-to-Peer Systems
Peer-to-peer systems are often particularly dynamic and the development
of robust algorithms hence crucial.
Kuhn et al. <cit.> study faults in peer-to-peer systems in which an adversary adds and removes nodes from the network within a short period of time (this process is also called churn). In this setting, the goal is to maintain functionality of the network in spite of this adversarial process. Kuhn et al. <cit.> considered hypercube and pancake topologies, with a powerful adversary that cannot be “fooled” by randomness. However, it is limited to at most O(Δ) nodes, where Δ is the (maximum) node degree, which it can add or remove within any constant amount of time. The main idea in <cit.> is to maintain a balanced partition of the nodes, where each part plays the role of a supernode in the network topology. This is done by rebalancing the nodes after several adversarial acts, and increasing the dimensionality of the hypercube in case the parts become too big.
Hypercubes were also of particular interest in this paper. We employ two partitioning techniques to make sure that: (1) the size of each part is constant and (2) the number of links in the cut between the parts is at most ε· n, where n is the number of nodes. These partitioning techniques help us dial down the overheads within each part, and avoid a failure of each part due to its small size. However, we note that our motivation for considering these topologies is that they are used as communication topologies, for which we can provide good reinforcements, rather than choosing them to exploit their structure for constructing efficient and/or reliable routing schemes (which is of course one, but not the only reason for them being used in practice).
§ CONCLUSION
In this paper, we proposed simple replication strategies for improving network reliability. Despite being simple and general, both in terms of their application and analysis, our strategies can substantially reduce the required reliability on the component level to maintain network functionality compared to the baseline, without losing messages or increasing latencies.
The presented transformations allow us to directly reuse non-fault-tolerant routing schemes as a blackbox,
and hence avoid the need to refactor working solutions.
We consider this property highly useful in general and essential in real-time systems.
Hence, being prepared for non-benign faults can be simple, affordable, and practical, and therefore enables building larger reliable networks. Interestingly, while our basic schemes may hardly surprise, we are not aware of any work systematically exploring and analyzing this perspective.
We understand our work as a first step and believe that it opens
several interesting avenues for future research.
For example:
* Which network topologies allow for good partitions as utilized in <ref>? Small constants here result in highly efficient reinforcement schemes, which are key to practical solutions.
* Is it possible to guarantee strong simulations at smaller overheads?
* Can constructions akin to the one given in <ref> be applied to a larger class of graphs?
On the practical side, while
our simulations indicate that our approach
can be significantly more efficient than a naive one-by-one replication strategy
to provision
dependable ISP networks,
it will be interesting to extend these empirical studies and also consider
practical aspects such as the incremental deployment
in specific networks.
Acknowledgments.
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement 716562) and from the Vienna Science and Technology Fund (WWTF), under grant number ICT19-045 (project WHATIF).
This research was supported by the Israel Science Foundation under Grant 867/19.
Christoph Lenzen
received a diploma degree in mathematics from the University of Bonn in 2007 and a
Ph. D. degree from ETH Zurich in 2011. After postdoc positions at the Hebrew University of Jerusalem,
the Weizmann Institute of Science, and MIT, he became group leader at MPI for Informatics in 2014.
In 2021 he became faculty member at CISPA.
He received the best paper award at PODC 2009, the ETH medal for his dissertation, and in 2017 an ERC starting grant.
Moti Medina
is a faculty member at the Engineering Faculty at Bar-Ilan University since 2021. Previously, he was a faculty member at the Ben-Gurion University of the Negev and a post-doc
researcher in MPI for Informatics and in the Algorithms and Complexity group at
LIAFA (Paris 7). He graduated his Ph. D., M. Sc., and B. Sc. studies at the
School of Electrical Engineering at Tel-Aviv University, in 2014, 2009, and 2007
respectively. Moti is also a co-author of a text-book on logic design
“Digital Logic Design: A Rigorous Approach”, Cambridge Univ. Press, Oct.
2012.
Mehrdad Saberi
is an undergraduate student in Computer Engineering at Sharif University of Technology, Tehran, Iran. He achieved a silver medal in International Olympiad in Informatics (2018, Japan) during high school and is currently interested in studying and doing research in Theoretical Computer Science.
Stefan Schmid
is a Professor at TU Berlin, Germany.
He received his MSc (2004) and PhD
(2008) from ETH Zurich, Switzerland. Subsequently, Stefan Schmid
worked as postdoc at TU Munich and the University of Paderborn (2009).
From 2009 to 2015, he was a senior research scientist at the Telekom Innovations Laboratories (T-Labs) in Berlin, Germany, from 2015 to 2018 an Associate
Professor at Aalborg University, Denmark, and from 2018 to 2021 a Professor
at the University of Vienna, Austria.
His research interests revolve around algorithmic problems of networked and distributed systems,
currently with a focus on self-adjusting networks
(related to his ERC project AdjustNet) and resilient networks (related to his WWTF project
WhatIf).
|
http://arxiv.org/abs/2307.04786v1 | 20230710180001 | Combining contextuality and causality: a game semantics approach | [
"Samson Abramsky",
"Rui Soares Barbosa",
"Amy Searle"
] | quant-ph | [
"quant-ph",
"cs.LO"
] |
Combining contextuality and causality: a game semantics approach
Samson Abramsky, Department of Computer Science, University College London, 66–72 Gower Street, London WC1E 6EA, United Kingdom
[email protected]
http://www.cs.ucl.ac.uk/people/S.Abramsky/
Rui Soares Barbosa, INL – International Iberian Nanotechnology Laboratory, Av. Mestre José Veiga, 4715-330 Braga, Portugal
[email protected]
https://www.ruisoaresbarbosa.com/
Amy Searle, Department of Physics, University of Oxford, Clarendon Laboratory, Parks Road, Oxford OX1 3PU, United Kingdom
[email protected]
https://www.physics.ox.ac.uk/our-people/searle
We develop an approach to combining contextuality with causality, which is general enough to cover causal background structure, adaptive measurement-based quantum computation, and causal networks.
The key idea is to view contextuality as arising from a game played between Experimenter and Nature, allowing for causal dependencies in the actions of both the Experimenter (choice of measurements) and Nature (choice of outcomes).
Received 24 May 2023 / Accepted 30 June 2023
§ INTRODUCTION
Contextuality is a key non-classical feature of quantum theory.
Besides its importance in quantum foundations, it has been linked to quantum advantage in information-processing tasks.
It also arises beyond quantum mechanics, cf. <cit.>.
We wish to generalise contextuality to accommodate causality and adaptivity.
These features may arise from:
* fundamental aspects of the physical setting, in particular the causal structure of spacetime;
* the causal structure of an experiment, where measurements are performed in some causal order, and moreover, which measurements are performed may depend on the outcomes of previous measurements;
* feed forward in measurement-based quantum computation (MBQC) <cit.>, and more generally, adaptive computation.
Our objectives include:
* A more fine-grained analysis of contextuality.
Signalling should be allowed from the causal past, the backward light cone, and thus no-signalling/no-disturbance should be imposed only from outside it.
This in turn modifies the scope of classicality (non-contextuality), which now becomes relative to this weaker form of no-signalling constraints.
* A better connection with computational models such as circuits and MBQC. Explicitly representing causal flows of information, outputs of gates feeding into inputs of other gates, enables a deeper analysis of the relationships between contextuality and quantum advantage.
It turns out that capturing these different manifestations of causality and their interactions with contextuality is rather subtle.
The perspective we adopt here is to view contextuality as a two-person game played between Experimenter and Nature.
The Experimenter's moves are the measurements; the actions of the Experimenter are to choose the next measurement to be performed. Nature's moves are the outcomes.
We can capture the various forms of causal dependency which may arise in terms of strategies for Experimenter or for Nature.
The game format is already familiar in the form of non-local games.
There, the Verifier plays the role of the Experimenter, and Nature responds with outcomes according to the probability distributions
corresponding to Alice–Bob strategies.
Non-local games are one-shot games, with a single round of interaction. By considering more general games, causal structure can be incorporated.
Our treatment builds upon the sheaf-theoretic approach to contextuality. A pleasing feature is that once one modifies the basic sheaf of events to take causal structure into account, the further definitions and treatment of contextuality follow automatically.
This illustrates the advantages of a compositional and functorial approach.
§ PREVIOUS WORK
Pearl had already noted the connection with Bell inequalities in his seminal paper on testability of causal models with latent and instrumental variables <cit.>.
The extension of causal networks to allow for quantum resources, or more generally the operations offered by Generalised Probabilistic Theories, has been studied in <cit.>.
Our starting point is the sheaf-theoretic treatment of contextuality introduced in <cit.>, and extensively developed subsequently.
This is a general, mathematically robust approach, which provides a basis for:
* the contextual fraction as a measure of contextuality <cit.>;
* a general characterisation of noncontextuality inequalities in terms of consistency conditions (“logical Bell inequalities”, Boole's “conditions of possible experience”) <cit.>;
* resource theory of contextuality, and simulations between contextual systems <cit.>;
* cohomological criteria for contextuality, the topology of contextuality <cit.>;
* connections with logic and computation, database theory, constraint satisfaction <cit.>;
* generalisations <cit.> and applications <cit.> of Vorob'ev's theorem <cit.>.
The aim is to develop a refined version incorporating causality for which all these features will carry over.
There have been some prior works in this direction:
* Shane Mansfield in <cit.> introduced a refinement of the sheaf-theoretic approach with an order on the measurements,
and used it to study the two-slit experiment and the Leggett–Garg scenario.
* Stefano Gogioso and Nicola Pinzani in <cit.> developed a causal refinement of the sheaf-theoretic approach to non-locality, for the case of Bell-type scenarios.
They introduce an order on the sites or agents in the Bell scenario.
In both cases, the order is used to refine the no-signalling or no-disturbance condition which guarantees that joint distributions have consistent marginals.
In the presence of causality, signalling is allowed from within the backwards light cone or causal past of an event, and thus no-signalling is only required outside it.
One may contrast this with the Contextuality-by-Default (CbD) approach introduced by Ehtibar Dzhafarov and Janne Kujala <cit.>.
In CbD, every variable is regarded as contextual, differently labelled in each context.
Classicality is characterised by the existence of a joint distribution under which different occurrences of variables with the same “content” have the same value with the maximum probability consistent with their individual marginals.
This allows for the analysis of arbitrary signalling systems, which has applications e.g. in the behavioural sciences, where signalling is the norm. Moreover, this signalling may in general be impossible to characterise or control.
By contrast, both in the above work by Mansfield and Gogioso–Pinzani and in the present paper, the aim is to explicitly describe a given causal background – which might arise from the structure of an experiment, circuit, or physical system – and to characterise contextuality relative to such a background.
In this paper, we extend the scope of previous work in several directions.
First, we allow more general dependencies of events on their prior causal histories.
In particular, the choice of which measurement to perform can depend on previous outcomes as well as on which measurements have been performed. This is an important feature of MBQC (“feedforward”), and more generally of adaptive computation.
Secondly, we extend general contextuality scenarios with causality, not just the non-locality Bell scenarios as in the Gogioso–Pinzani (GP) approach.
Finally, and most subtly, we recognise the different roles played by Nature and Experimenter in their causal interactions, highlighting an important difference between causal background and adaptivity.
An interesting feature of our approach, in common with that of Gogioso–Pinzani, is that it proceeds essentially by modifying the sheaf of events from <cit.> to reflect the refined signalling constraints in the presence of causality.
Once this has been done, the remainder of the analysis of contextuality follows exactly the same script as in <cit.>.
In particular, the appropriate definition of empirical model, the relaxed no-signalling constraints, and the notion of classicality/non-contextuality follow automatically.
§ EXAMPLES
As we have already suggested, causality in relation to contextuality has dual aspects. It may be imposed by Nature, in the form of a causal background against which the contextual behaviour plays out; or it may be imposed by the Experimenter, to achieve computational effects (adaptive computation).
We illustrate these two sources of causality in two basic examples.
§.§ Example I: causal background à la GP
Consider a standard bipartite nonlocality scenario, the Bell–CHSH scenario:
two experimenters, Alice and Bob, with sets of local measurements I_A and I_B, and outcome sets O_A and O_B.
We may think of these as `ìnputs” and “outputs”.
We now introduce a variation, in which
we assume that Alice's events causally precede those of Bob.
Thus Bob's backward light cone includes the events where Alice chooses a measurement and observes an outcome.
Whereas in a standard, causally “flat” scenario, we would have deterministic outcomes given by functions
s_A : I_A → O_A, s_B : I_B → O_B,
with these causal constraints, we have functions
s_A : I_A → O_A, s_B : I_A × I_B → O_B .
That is, the responses by Nature to Bob's measurement may depend on the previous measurement made by Alice.[Note that, in a deterministic model, Nature “knows” what response it would have given for Alice's measurement, so there is no real dependency on this outcome.]
If we have measurements x_1, x_2 ∈ I_A, y ∈ I_B, then { (x_1,0), (y,0) } and { (x_2,0), (y,1) } are valid histories in a single deterministic model.
If we now go to distributions over such histories, say d_{x,y} as a distribution over outcomes for the Alice measurement x and the Bob measurement y, then
of the usual no-signalling/compatibility equations
d_{x,y} |_{x} = d_{x}
d_{x,y} |_{y} = d_{y}
only (<ref>) remains. In fact, d_{y} is not even defined, since { y} is not a “causally secured” context: the measurement y can never occur on its own without a preceding Alice measurement.
Thus no-signalling is relaxed in a controlled fashion.
§.§ Example II: Anders–Browne
The Anders–Browne construction <cit.> shows how we can use a form of Experimenter-imposed causality to promote two sub-universal computational models (Pauli measurements and mod-2 linear classical processing) to universal MBQC.
It uses the GHZ state as a resource state:
|GHZ⟩ = ( |↑↑↑⟩ + |↓↓↓⟩ ) / √(2) .
Performing local Pauli X and Y measurements, we obtain the following table of possible joint outcomes[The table shows only the possibilistic information, the supports of the probability distributions on joint outcomes, which are uniform on each row.]
        +++   ++-   +-+   +--   -++   -+-   --+   ---
 X Y Y   0     1     1     0     1     0     0     1
 Y X Y   0     1     1     0     1     0     0     1
 Y Y X   0     1     1     0     1     0     0     1
 X X X   1     0     0     1     0     1     1     0
In terms of parities (products of +1/-1 outputs), the support satisfies the following equations:
X_1 Y_2 Y_3 = -1 ,     Y_1 X_2 Y_3 = -1 ,     Y_1 Y_2 X_3 = -1 ,     X_1 X_2 X_3 = +1 .
The idea is to use an Experimenter causal flow to implement AND.
Taking X as 0, Y as 1, we consider the measurements for Alice and Bob as inputs to an AND gate.
We then use the following simple
mod-2 linear mapping (XOR on the bit representations) from the Alice–Bob measurements to determine Charlie's measurement:
0, 0 ↦ 0         X, X ↦ X
0, 1 ↦ 1         X, Y ↦ Y
1, 0 ↦ 1         Y, X ↦ Y
1, 1 ↦ 0         Y, Y ↦ X
The output of the AND function is read off from the XOR of the three outcome bits.
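The construction is easy to check in a few lines of Python (a sketch under our conventions: outcome bit 0 stands for +1 and bit 1 for -1, and only the four setting combinations arising here are modelled; note that with these conventions the XOR of the three outcome bits equals a ⊕ b ⊕ ab, so the sketch recovers the AND value with one further XOR against the input bits, a step that is itself part of the mod-2 linear processing):

import random

def ghz_outcomes(settings):
    # Sample outcome bits consistent with the GHZ support above: the parity of
    # the bits is 0 for settings XXX and 1 for XYY, YXY, YYX.
    parity = 0 if settings == ("X", "X", "X") else 1
    o1, o2 = random.randint(0, 1), random.randint(0, 1)
    return o1, o2, o1 ^ o2 ^ parity

def anders_browne_and(a, b):
    # Inputs pick Alice's and Bob's settings (0 -> X, 1 -> Y); Charlie's setting
    # is the XOR of the input bits; the AND value is read off by mod-2 linear
    # post-processing of outcomes and inputs.
    to_setting = lambda bit: "Y" if bit else "X"
    o1, o2, o3 = ghz_outcomes((to_setting(a), to_setting(b), to_setting(a ^ b)))
    return o1 ^ o2 ^ o3 ^ a ^ b

assert all(anders_browne_and(a, b) == (a & b) for a in (0, 1) for b in (0, 1))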
We draw attention to the following two remarks.
* This example illustrates causality that is purely employed by the Experimenter.
From Nature's point of view, it is just the standard (“causally flat”) GHZ construction.
* The above describes a simplified “one-shot” implementation of a single AND gate.
To represent general logical circuits with embedded AND gates, using this construction as a building block,
really requires (classically computed) feedforward of measurement settings.
This means that there is full adaptivity at work, dependence of measurement choices on prior measurement outcomes.
§ GAME SEMANTICS OF CAUSALITY
We conceptualise the dual nature of causality as a two-person game, played between Experimenter and Nature:
* Experimenter’s moves are measurements to be performed;
* Nature’s moves are the outcomes.
By formalising this, we develop a theory of causal contextuality that recovers:
* the usual theory of contextuality in the “flat” case,
* the Gogioso–Pinzani theory of non-locality in a causal background,
* MBQC with adaptive computation,
* classical causal networks,
as special cases, and more.
§.§ Measurement scenarios
We begin by briefly reviewing some basic ingredients of the sheaf-theoretic formulation of contextuality. For further details, see e.g. <cit.>.
A (flat) measurement scenario is a pair (X, O), where:
* X is a set of measurements.
* O = { O_x }_x ∈ X is the set of possible outcomes for each measurement.
An event has the form (x,o), where x ∈ X and o ∈ O_x. It corresponds to the measurement x being performed, with outcome o being observed.
Given a set of events s, its domain is the set of measurements performed:
(s) π_1 s = { x |∃ o. (x,o) ∈ s } .
We say that s is consistent if (x,y), (x, y') ∈ s implies y = y'.
In this case, s defines a function from the measurements in its domain to outcomes.
A consistent set of events is a section.
We define the event sheaf ℰ over sets of measurements: for each set U ⊆ X of measurements, ℰ(U) is the set of sections whose domain is U; when U ⊆ V, there is a restriction map ℰ(V) → ℰ(U).
The functoriality of these restriction maps formalises the no-disturbance condition, or “generalised no-signalling”, at the level of deterministic models. Generalised no-signalling of probabilistic (or possibilistic) models will then follow automatically when we compose with the appropriate distribution monad, cf. <cit.>.
The sheaf property of the event sheaf – that compatible families of local sections glue together to yield unique global sections – corresponds to the fact that deterministic models are non-contextual.[Note that if we drop no-signalling, as in the CbD approach, this no longer holds.]
When we pass to distributions over the event sheaf,
the sheaf property no longer holds, and this is exactly how contextuality arises. More precisely, we extend the measurement scenario to a contextuality scenario by specifying a cover of X; a failure of the sheaf property with respect to this cover constitutes a witness to contextuality.
Our general strategy to accommodate causality is to modify the definition of the event sheaf. After this, we essentially follow the same script as above to give an account of contextuality in the causal setting. A similar procedure is followed in <cit.>.
§.§ Causal measurement scenarios
A causal measurement scenario is a tuple M=(X, O, ⊢), where the additional ingredient is an enabling relation
that expresses causal constraints.
The intended interpretation of s ⊢ x, where s ∈⋃_U ⊆ X(U) is a consistent set of events and x ∈ X a measurement,
is that it is possible to perform x after the events in s have occurred.
Note that this constraint refers to the measurement outcomes as well as the measurements that have been performed.
This allows adaptive behaviours to be described.
Given such a causal measurement scenario M, we use it to generate a set of histories. A history is a set of events that can happen in a causally consistent fashion. We associate each measurement x with a unique event occurrence, so histories are required to be consistent.
To formalise this, we first define the accessibility relation between consistent sets of events s and measurements x: s ⇝ x if and only if x ∉ dom(s) and t ⊢ x for some t ⊆ s. The intuition is that x may be performed if the events in s have occurred.
Now, Hist(M), the set of histories over M, is defined inductively as the least family H of consistent sets of events
which contains the empty set and is closed under accessibility, meaning that if s ∈ H and s ⇝ x,
then for all o ∈ O_x, s ∪ { (x,o) } ∈ H. Note that if a measurement can be performed, then any of its outcomes may occur, forming a valid history.
We can give a more explicit description of Hist(M) as a least fixed point. We define an increasing family of sets of histories { H_k } inductively:
H_0 ≔ { ∅ }
H_{k+1} ≔ H_k ∪ { s ∪ { (x,o) } | s ∈ H_k, s ⇝ x, o ∈ O_x }.
If X is finite, then for some k we have H_k = H_{k+1}, and Hist(M) = H_k for the least such k.
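The fixed-point construction translates directly into code; the following Python sketch (the data representation is our choice: events are pairs, histories are frozensets, and the enabling relation is a list of (events, measurement) pairs) computes the set of histories of a small scenario:

def histories(X, O, enabling):
    # Least fixed point of the H_k iteration described above.
    def dom(s):
        return {x for (x, _) in s}

    def accessible(s, x):
        return x not in dom(s) and any(t <= s for (t, x2) in enabling if x2 == x)

    H = {frozenset()}
    while True:
        new = {s | {(x, o)}
               for s in H for x in X if accessible(s, x) for o in O[x]}
        if new <= H:
            return H
        H |= new

# A small example: z is enabled only once both x and y have returned outcome 0.
X = ["x", "y", "z"]
O = {m: [0, 1] for m in X}
enabling = [(frozenset(), "x"), (frozenset(), "y"),
            (frozenset({("x", 0), ("y", 0)}), "z")]
print(len(histories(X, O, enabling)))   # 11 histories, including the empty one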
§.§ Strategies
We regard a causal measurement scenario as specifying a game between Experimenter and Nature. Events (x,o) correspond to the Experimenter choosing a measurement x, and Nature responding with outcome o. The histories correspond to the plays or runs of the game.
Given this interpretation, we define a strategy for Nature over the game M as a set of histories σ ⊆ Hist(M) satisfying the following conditions:
* σ is downwards closed: if s, t ∈ Hist(M) and s ⊆ t ∈ σ, then s ∈ σ.
* σ is deterministic and total: if s ∈ σ and s ⇝ x, then there is a unique o ∈ O_x such that s ∪ { (x,o) } ∈ σ.
Thus at any position s reachable under the strategy σ, the strategy determines a unique response to any measurement that can be chosen by the Experimenter.
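These two conditions can be transcribed directly into a checker (a sketch reusing the representation of the previous listing; it enumerates naively and is only meant for small examples):

def is_nature_strategy(sigma, X, O, enabling, hists):
    # sigma and hists are sets of frozensets of (measurement, outcome) events.
    def dom(s):
        return {x for (x, _) in s}

    def accessible(s, x):
        return x not in dom(s) and any(t <= s for (t, x2) in enabling if x2 == x)

    down_closed = all(s in sigma for t in sigma for s in hists if s <= t)
    det_total = all(
        len({o for o in O[x] if (s | {(x, o)}) in sigma}) == 1
        for s in sigma for x in X if accessible(s, x))
    return down_closed and det_total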
We note an important property of strategies.
If s, t ∈ σ, s ⊆ t, and s ⇝ x, then
s ∪ { (x,o) } ∈ σ ⟺ t ∪ { (x,o) } ∈ σ .
Under the given assumptions, since t ⇝ x, we must have t ∪ { (x,o') } ∈ σ for some o' ∈ O_x. Since s ⇝ x, we have
that s ∪ { (x,o') } is a history (in Hist(M)),
and by down-closure, s ∪ { (x,o') } ∈ σ. Since σ is deterministic, we must have o = o'.
Monotonicity says that
the outcomes for a measurement x under a strategy σ are determined at the minimal histories at which x can occur. This still leaves open the possibility of assigning different outcomes to x relative to incomparable causal pasts.
We note another useful property, which follows immediately from totality and determinism.
If σ, τ are strategies with σ ⊆ τ, then σ = τ.
§.§ The presheaf of strategies
Given a causal measurement scenario M = (X,O,⊢) and a set of measurements U ⊆ X, we define M_U, the restriction of M to U, as the causal measurement scenario (U, { O_x }_{x ∈ U}, ⊢_U), where s ⊢_U x iff s ⊢ x and dom(s) ∪ { x } ⊆ U.
Note that M_X = M.
If U ⊆ V, then Hist(M_U) is a down-closed subset of Hist(M_V) under set inclusion.
Given a strategy σ over M_V, and U ⊆ V, we define σ|_U, the restriction of σ to U, as the intersection σ|_U ≔ σ ∩ Hist(M_U).
If σ is a strategy over M_V and U ⊆ V, then σ|_U is a strategy over M_U.
The restriction σ|_U inherits down-closure from σ.
For the second condition, if s ∈ σ|_U and s ⇝_U x, then s ∈ σ and s ⇝_V x. So, there is a unique o ∈ O_x such that s ∪ { (x,o) } ∈ σ.
But since x ∈ U, we have s ∪ { (x,o) } ∈ Hist(M_U), and so s ∪ { (x,o) } ∈ σ|_U.
Given a causal measurement scenario M = (X,O,⊢), we can now define a presheaf
Γ : (X)^→
of strategies over M.
For each U ⊆ X, Γ(U) is the set of strategies for M_U.
Given U ⊆ V, the restriction map Γ(U ⊆ V) : Γ(V) → Γ(U) is given by σ ↦ σ|_U.
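In the representation used in the sketches above, this restriction map is a one-liner (illustrative):

def restrict(sigma, U):
    # sigma|_U keeps exactly the histories whose measurements all lie in U.
    return {s for s in sigma if {x for (x, _) in s} <= set(U)}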
The following is immediate:
Γ is a presheaf.
§.§ Historical note
Causal measurement scenarios are a renaming and repurposing of Kahn–Plotkin information matrices <cit.>, which were introduced circa 1975 to represent concrete domains.[For a historical perspective, see <cit.>.]
We have changed the terminology to reflect the intuitions and applications motivating the present paper:
Kahn–Plotkin Here
information matrix causal measurement scenario
cell measurement
value outcome
decision event
configuration history
The interpretation of causal measurement scenarios as Experimenter–Nature games, the notion of strategy, and the presheaf of strategies, are all new to the present paper.
§ CAUSAL CONTEXTUALITY
Our plan now is to follow the script from <cit.>, replacing the event sheaf by the presheaf of strategies Γ.
Thus local sections are replaced by strategies, whose assignments of outcomes to measurements are sensitive to the previous history of the game.
A causal contextuality scenario is a structure (M, 𝒞), where M = (X, O, ⊢) is a causal measurement scenario and 𝒞 is a cover of X, i.e. a family 𝒞 = { C_i }_{i ∈ I} of subsets of measurements C_i ⊆ X satisfying ⋃ 𝒞 = ⋃_{i ∈ I} C_i = X.
We work with the presheaf Γ of strategies over M, as described in the previous section.
Recall the distribution monad 𝒟_R from <cit.>, where R is a semiring.
When R is the non-negative reals, it yields the usual discrete probability distributions.
We construct the presheaf 𝒟_R Γ, obtained by composing the endofunctor part of the monad with the presheaf of strategies Γ.
An empirical model on the scenario (M, 𝒞) is a compatible family for the presheaf 𝒟_R Γ over the cover 𝒞 = { C_i }_{i ∈ I}.
That is, it is a family { e_i }_{i ∈ I}, where e_i ∈ 𝒟_R Γ(C_i),
subject to the compatibility conditions: for all i, j ∈ I, e_i |_{C_i ∩ C_j} = e_j |_{C_i ∩ C_j}.
Each distribution e_i assigns probabilities to the strategies over M_C_i, to those strategies over M that only perform measurements drawn from the context C_i. As usual, the compatibility conditions require that the marginal distributions agree.
This follows the definition of empirical model in <cit.>, replacing the event sheaf by the presheaf of strategies.
The empirical model is causally non-contextual if this compatible family extends to a global section of the presheaf 𝒟_R Γ, i.e. if there is a distribution d ∈ 𝒟_R Γ(X) such that, for all i ∈ I, d |_{C_i} = e_i.
If a causal contextuality scenario is finite, then so is the set of histories and therefore that of strategies.
The causally non-contextual models thus form a convex polytope, the convex hull of the empirical models on (M, 𝒞) corresponding to deterministic strategies σ ∈ Γ(X).
This is in keeping with the usual setup of “flat” non-locality and contextuality (without causality), where such classical polytopes are studied.
The classicality of a given model, membership in this polytope, can be checked by linear programming;
and this also suggests a generalisation of the contextual fraction <cit.> to the causal setting.
Similarly, causal contextuality is witnessed by violations of the linear inequalities defining the facets of the polytope.
An open question is to find a logical characterisation of such inequalities in the spirit of “logical Bell inequalities” <cit.>.
§ SPECIAL CASES
To check that these notions make sense, we look at two special cases: flat scenarios and Gogioso–Pinzani scenarios.
§.§ Flat scenarios
A contextuality scenario from <cit.> is (X, O, 𝒞). We define the trivial enabling relation where all measurements are initially enabled: ∅ ⊢ x for all x ∈ X. This yields a causal contextuality scenario (M, 𝒞), where M = (X,O,⊢).
For any set of measurements U ⊆ X, the histories over M_U have support contained in U.
Using the monotonicity property and the fact that all measurements are enabled by ∅,
any strategy in Γ(U) assigns the same outcome to each measurement across all its histories.
Hence, it will correspond to a section in ℰ(U) = ∏_{x ∈ U} O_x. In fact, these will be in bijective correspondence.
Because of this bijective correspondence between Γ and ℰ, we see that the notions of empirical model, global section, and contextuality defined for the game-based scenario coincide with the usual notions in this case.
As this example illustrates, the restrictions on which measurements can be performed together are imposed by the cover, not by the causal structure.
§.§ GP scenarios
In recent work, Stefano Gogioso and Nicola Pinzani studied a causal refinement of the sheaf-theoretic approach to non-locality over Bell scenarios <cit.>.
A GP scenario is given by ((Ω, ≤), { I_ω }_{ω ∈ Ω}, { O_ω }_{ω ∈ Ω}), where:
* Ω is a set of sites or agents (Alice, Bob, etc.), with ≤ a causal ordering.
* I_ω is the set of inputs (or measurement settings) at ω.
* O_ω is the set of outputs (or measurement outcomes) at ω.
Given such a scenario, we define a causal measurement scenario M = (X,O,⊢).
This mirrors the usual encoding of Bell non-locality scenarios as contextuality scenarios.
First, we set:
* X ≔ ∑_{ω ∈ Ω} I_ω = { (ω, i) | ω ∈ Ω, i ∈ I_ω };
* O_{(ω,i)} ≔ O_ω.
Given a set of events
s = { ((ω_1,i_1), o_1), … , ((ω_n, i_n), o_n) }
and a measurement (ω, i) ∈ X, we define
s ⊢ (ω, i) if and only if
the support of s has a measurement for each site strictly preceding ω, i.e. {ω_1, …, ω_n } = { ω' ∈ Ω | ω' < ω }.
So, a measurement (ω, i) can only be played after a measurement from each site in the causal past of ω has been played.
Consequently, the support of any history consists of one measurement per site for the sites in some lower subset λ ⊆ Ω.
This corresponds to the usual notion of context for Bell scenarios, refined to ensure that such contexts are "causally secured".
This corresponds to the usual notion of context for Bell scenarios, refined to ensure that such contexts are “causally secured”.
We consider a simple example to illustrate the comparison between Γ defined over (X, O, ⊢), and the “sheaf of sections” from <cit.>.
We take Ω to be the 2-chain ω_1 < ω_2.
This is a variation on a standard bipartite Bell–CHSH type scenario, with Alice causally preceding Bob, and hence allowed to signal to Bob.
We take the standard Bell scenario cover, where the maximal contexts correspond to choosing one measurement per site, and focus our analysis on the contexts below the cover.[The equivalence between sections of Γ and those of the presheaf from <cit.> actually extends more generally to all subsets of measurements, but this is sufficient to illustrate our main point.]
Now consider a strategy σ ∈ Γ(X).
The non-empty histories in M which are compatible in the standard Bell cover have the form
{ ((ω_1,z_1), o_1) } or { ((ω_1,z_1), o_1), ((ω_2,z_2), o_2) } ,
where z_i ∈{ x,y }, o_i ∈{ 0,1}, i=1,2.
Using monotonicity, the strategy σ assigns a unique o_1 to each (ω_1,z_1), and a unique o_2 to each pair (ω_1,z_1) and (ω_2,z_2).
Thus σ determines a pair of functions of type
(I_{ω_1} → O_{ω_1}) × (I_{ω_1} × I_{ω_2} → O_{ω_2}).
This accords with the description given in <cit.>; see in particular the discussion in Section 5.
It extends to an equivalence between Γ and the sheaf of sections of <cit.>.
Thus, if we take the standard Bell cover we obtain the same empirical models and notion of contextuality as in <cit.>.
In an extended version of the present paper, we show that
this analysis carries over to general GP scenarios. Hence, we recover the Gogioso–Pinzani theory as a special case of our framework.
§ THE SHEAF PROPERTY FOR THE STRATEGY PRESHEAF
The strategy presheaf Γ plays the role in our causal theory of the event sheaf in <cit.>.
The sheaf property of ℰ has some conceptual significance since it shows that for deterministic models local consistency implies global consistency. It is only when we introduce distributions, whether probabilistic or possibilistic, that the sheaf property fails and contextuality arises.
This raises the question of whether Γ is also a sheaf.
We now show one half of the sheaf property, namely that gluing is always possible.
So, the fact that local consistency implies global consistency for deterministic models carries over to the causal theory.
Let { U_i }_i ∈ I be a family of subsets of X covering U = ⋃_i ∈ I U_i.
Suppose we are given a compatible family { σ_i }_{i ∈ I}, with σ_i ∈ Γ(U_i)
and σ_i |_{U_i ∩ U_j} = σ_j |_{U_i ∩ U_j} for all i, j ∈ I.
The sheaf property requires that there exist a unique strategy σ ∈ Γ(U) such that σ|_{U_i} = σ_i for all i ∈ I.
From the definition of restriction, if such a gluing exists, it must contain the union σ' ≔ ⋃_{i ∈ I} σ_i.
So, if this σ' happens to be a strategy, by maximality it must be the required unique gluing of the family { σ_i }_{i ∈ I}.
The union of down-closed sets is down-closed.
Thus σ' can only fail to be a strategy if determinacy or totality fails.
We show that the first of these can never arise.
If { σ_i }_{i ∈ I} is a compatible family for the presheaf Γ, then σ' ≔ ⋃_{i ∈ I} σ_i is deterministic.
Suppose that s ∪{ (x,o_k) }∈σ' for k = 1,2.
For some i,j ∈ I we have s ∪{ (x,o_1) }∈σ_i and s ∪{ (x,o_2) }∈σ_j.
This implies that dom(s) ∪ { x } ⊆ U_i ∩ U_j, and hence s ∪ { (x,o_1) } ∈ σ_i |_{U_i ∩ U_j} and s ∪ { (x,o_2) } ∈ σ_j |_{U_i ∩ U_j}. By compatibility and determinacy of σ_i and σ_j, this implies o_1 = o_2.
Finally, if totality fails, we can always complete the union σ' to a strategy over M_U by making arbitrary choices of outcomes for any remaining accessible measurements.
In general, this can be done in multiple ways, so the uniqueness part of the sheaf condition fails, i.e. Γ is not separated.
We give a simple example to show how this can happen.
Fix X = { x,y,z }, O_w = { 0,1} for all w ∈{ x,y,z}, and the following enabling relation:
∅ ⊢ x,     ∅ ⊢ y,     { (x,0), (y,0) } ⊢ z .
Consider the cover consisting of U_1 ≔ { x,z } and U_2 ≔ { y,z }, and take strategies
σ_1 ≔ { ∅, { (x,0) } } and σ_2 ≔ { ∅, { (y,0) } } .
Note that σ_1 and σ_2 are compatible since they both restrict to the empty strategy over U_1 ∩ U_2 = { z }, as the measurement z is not enabled.
Similarly, σ_1 and σ_2 are both total, since z is not accessible from any history over U_1 or U_2. However, σ_1 ∪ σ_2 is not total, since z is accessible but has no assigned outcome.
This example is rather pathological, as it hinges on the inaccessibility of z in the cover, leading to the following question.
Is there a notion of “good cover” which implies that gluings are unique?
§ EXPERIMENTER STRATEGIES AND ADAPTIVE COMPUTATION
The strategies considered so far have been strategies for Nature. These prescribe a response – an outcome – for each measurement that can be chosen by the Experimenter.
Using the duality inherent in game theory, there is also a notion of strategy for Experimenter.
To formulate this, we use the following observation.
For a history s ∈ Hist(M), the following are equivalent:
* s is maximal in (Hist(M), ⊆);
* no measurement is accessible from s, i.e. for all x ∈ X, ¬(s ⇝ x).
We now define a strategy for Experimenter over the game M to be a set of histories τ ⊆ Hist(M) satisfying the following conditions:
* τ is downwards closed: if s, t ∈ Hist(M) and s ⊆ t ∈ τ, then s ∈ τ.
* τ is co-total: if s ∈ τ and s is not maximal, then there is a measurement x with s ⇝ x such that s ∪ { (x,o) } ∈ τ for some o ∈ O_x. Moreover, for all such x, s ∪ { (x,o') } ∈ τ for all o' ∈ O_x.
Thus at each stage, the strategy determines which measurements may be performed.
Note that it may allow more than one measurement, so some nondeterminism remains.
For each such measurement, it must then accept any possible response from Nature. The future choices of the Experimenter can then depend on Nature's responses, allowing for adaptive protocols.
If we are given a strategy σ for Nature and a strategy τ for the Experimenter, we can play them off against each other, resulting in ⟨σ|τ⟩ ≔ σ ∩ τ.
This is the down-set of a set of maximal histories.
This operation can be extended to distributions on strategies, i.e. to mixed strategies, in a bilinear fashion.[The extension to mixed strategies hinges on the fact that the distribution monad is commutative.]
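In code, the pairing and its bilinear extension to mixed strategies are immediate (a sketch; distributions are represented as dictionaries from strategies, given as frozensets of histories, to weights):

def play(sigma, tau):
    # Play a Nature strategy against an Experimenter strategy.
    return sigma & tau

def play_mixed(d_sigma, d_tau):
    # Bilinear extension to distributions over strategies.
    result = {}
    for s, ps in d_sigma.items():
        for t, pt in d_tau.items():
            runs = frozenset(play(s, t))
            result[runs] = result.get(runs, 0.0) + ps * pt
    return result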
We refer to strategies for Nature as N-strategies, and to strategies for Experimenter as E-strategies.
§.§ Anders–Browne revisited
We now show how the Anders–Browne construction of an AND gate discussed in section <ref> can be formalised using an Experimenter strategy.
First, we have the description of the standard GHZ construction. This is given by a flat measurement scenario with X = { A_i, B_j, C_k | i,j,k ∈{ 0,1}}, and O_x = { 0,1 } for all x ∈ X.
The maximal compatible sets of measurements are all sets of the form { A_i, B_j, C_k } with i,j,k ∈{ 0,1}, a choice of one measurement per each site or agent.
We regard each measurement as initially enabled. The N-strategies for this scenario form the usual sections assigning an outcome to each choice of measurement for each site, and the GHZ model assigns distributions on these strategies as in the table shown in section <ref>.
To get the Anders–Browne construction, we consider the E-strategy which initially allows any A or B measurement to be performed, and after a history { (A_i, o_1), (B_j, o_2) } chooses the C-measurement C_i ⊕ j.
Playing this against the GHZ model results in a strategy that computes the AND function with probability 1.
The full power of adaptivity is required when using this as a building block to implement a more involved logical circuit. Suppose that the output of the AND gate above is to be fed as the first input of a second AND gate, built over a GHZ scenario with measurements labelled { A'_i, B'_j, C'_k | i,j,k ∈{ 0,1}}.
The E-strategy implements the first AND gate as above, with any B' measurement also enabled, being a free input.
After that, the A'-measurement can be determined: after a history containing { (A_i, o_1), (B_j, o_2), (C_i ⊕ j, o_3) }, the E-strategy chooses the A'-measurement A'_o_1 ⊕ o_2 ⊕ o_3. The second AND gate is then implemented like the first. Note that the choice of A'-measurement depends not only on previous measurement choices, but on outcomes provided by Nature.
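The classical control logic of this adaptive protocol can be sketched as follows (Python). Here `nature` is an abstract callback standing in for the GHZ empirical models, whose outcome tables are not reproduced in this sketch, and the parity read-out of the second gate follows by analogy with the first.

def chained_and_gates(i, j, b_prime, nature):
    # first GHZ block: measure A_i and B_j (free inputs), then adaptively C_{i XOR j}
    o1 = nature(("A", i))
    o2 = nature(("B", j))
    o3 = nature(("C", i ^ j))
    first_output = o1 ^ o2 ^ o3          # parity read-out of the first AND gate
    # second GHZ block: its A'-setting is chosen adaptively from the first output
    o1p = nature(("A'", first_output))
    o2p = nature(("B'", b_prime))
    o3p = nature(("C'", first_output ^ b_prime))
    return o1p ^ o2p ^ o3p               # parity read-out of the second AND gate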
§ OUTLOOK
In a forthcoming extended version of this paper, we show how a number of additional examples, including Leggett–Garg, can be handled in our approach.
We also show that our formalism faithfully represents a number of others, including Gogioso–Pinzani scenarios, adaptive MBQC, and causal networks.
In future work, we aim to employ our formalism to describe unconditional quantum advantage in shallow circuits, building on <cit.>.
We will also investigate other potential applications to quantum advantage.
We also aim to clarify how our approach can be related to the currently very active study of indefinite causal orders <cit.>.
§.§ Acknowledgements
This work was developed in part while AS was hosted on secondment at INL.
This work is supported by the Digital Horizon Europe project FoQaCiA, Foundations of quantum computational advantage, GA no. 101070558, funded by the European Union, NSERC (Canada), and UKRI (U.K.).
SA also acknowledges support from EPSRC – Engineering and Physical Sciences Research Council (U.K.) through
EPSRC fellowship EP/V040944/1, Resources in Computation.
RSB also acknowledges support from FCT – Fundação para a Ciência e a Tecnologia (Portugal) through CEECINST/00062/2018.
AS acknowledges support from EPSRC Standard Research Studentship (Doctoral Training Partnership), EP/T517811/1, and the Smith-Westlake Graduate Scholarship at St. Hugh's College.
amsplain
|
http://arxiv.org/abs/2307.07637v1 | 20230714214557 | The Haldane Model in a Magneto-optical Honeycomb Lattice | [
"M. J. Ablowitz",
"J. T. Cole"
] | physics.optics | [
"physics.optics",
"nlin.PS"
] |
Department of Applied Mathematics, University of Colorado, Boulder, Colorado, USA; Department of Mathematics, University of Colorado, Colorado Springs, Colorado, USA
A two-dimensional honeycomb lattice composed of gyrotropic rods is studied. Beginning with Maxwell’s equations, a perturbed Wannier method is introduced which yields a tight-binding model with nearest and next-nearest neighbors. The resulting discrete model leads to a Haldane model and as such, topologically protected modes, associated with nonzero Chern numbers are supported. Changing the radii of the rods allows for the breaking of inversion symmetry which can change the topology of the system. This model explains experimental results associated with topological waves in magneto-optical honeycomb lattices. This method can also be applied to more general Chern insulator lattices. When on-site Kerr type nonlinear effects are considered, coherent soliton-like modes are found to propagate robustly through boundary defects.
The Haldane Model in a Magneto-optical Honeycomb Lattice
M. J. Ablowitz and J. T. Cole
August 12, 2023
=========================================================
§ INTRODUCTION
The study of topological insulators is an area of research currently receiving significant interest. These types of systems can be experimentally realized in numerous fields including ultracold fermionic systems <cit.>, semiconductors <cit.>, magnetic media <cit.>, equatorial waves <cit.>, and electromagnetic systems <cit.>. Underlying these works are topologically protected states that are robust to defects.
This work focuses on topological insulators that are distinguished by bulk eigenmodes with a nontrivial Chern number.
In this case, the bulk-edge correspondence implies the existence of topologically protected modes.
Indeed, these systems can support edge states that propagate unidirectionally around the boundary with and without material defects.
A standard approach for describing topological insulator lattice systems is the use of a tight-binding model. Typically, tight-binding models consist of a set of discrete equations that reduce the complexity of the governing equations, yet still capture the essential behavior. Moreover, it is common in experiments for the dielectric contrast in photonic waveguides to naturally reside in the deep lattice regime which is central in the tight-binding approximation <cit.>.
One of the most well-known and heavily studied topological insulator systems is the Haldane model<cit.>, associated with honeycomb lattices. This relatively simple model, which includes nearest
and next-nearest neighbor interactions, is able to capture the essence of Chern insulator systems.
The model illustrates that breaking of time-reversal symmetry is necessary, but not sufficient, for realizing bulk modes with nontrivial Chern topological invariants. Moreover, when inversion symmetry is broken in an appropriate manner exceeding that of time-reversal symmetry, a topological transition to a trivial Chern system can take place.
While <cit.> offers no derivation for the
model, it effectively describes the behavior of the quantum Hall effect in honeycomb lattices. Indeed, many authors have applied the Haldane model to describe systems with nonzero Chern numbers. However, it is not clear whether this is the true tight-binding reduction of electromagnetic systems or just a convenient model.
This work provides a direct derivation of the Haldane model from Maxwell's equations in a magneto-optical (MO) system. The physical system considered here is that of transverse magnetic (TM) waves in a ferrimagnetic photonic crystal with an applied external magnetic field.
Systems of this type have been realized in both square <cit.> and honeycomb <cit.> lattices and found to support topologically protected edge modes. Topologically protected modes in EM systems were originally proposed by Haldane and Raghu <cit.> and their existence studied in <cit.>. Our work directly connects Haldane's model <cit.> to electro-magneto-optical systems.
Additionally, it turns out that periodically driven photonic honeycomb lattices can
also yield the Haldane model in a
high-frequency limit <cit.>.
The key to our approach is to use a suitable Wannier basis
in which to expand the EM field <cit.>. Unfortunately, a direct Wannier expansion is ineffective due to nontrivial topology which is the result of a discontinuity in the spectral phase of the associated Bloch function <cit.>. As a result, the corresponding Wannier-Fourier coefficients do not decay rapidly. Seeking a tight-binding model in a basis of these slowly decaying Wannier modes would require many interactions, well beyond nearest neighbor, to accurately describe the problem. Consequently, this would cease to be an effective reduction of the original problem.
By considering nearest and next-nearest neighbor interactions, a Haldane model is derived from the original MO honeycomb lattice.
With physically relevant parameters, an analytical study of the system topology is conducted. The topological transition points are identified and found to agree well with numerical approximations. Nontrivial Chern numbers are found to correspond to unidirectional chiral modes, and vice versa for trivial Chern cases.
The Wannier basis method we use was applied to a square MO lattice in <cit.>. The results in this paper show that the method is effective, again. This approach can be applied to other systems, e.g. different lattices, governed by the TM equation with gyrotropic lattices. We expect this method to model other Chern insulator systems, e.g. TE systems with gyrotropic permittivity, as well.
We also examine the effect of nonlinearity on edge mode propagation. Edge solitons, unidirectional nonlinear envelopes that balance nonlinearity and dispersion have been explored in Floquet Chern insulator systems <cit.>. The work <cit.> showed that significant amounts of radiation are emitted from the solitary wave for highly localized (nonlinear) envelopes.
The nonlinear system we consider is a Haldane model
that includes on-site Kerr nonlinearity. A similar system has also been derived from a nonlinear Floquet system in a high-frequency driving limit <cit.>. Different nonlinear Haldane models with saturable nonlinearity <cit.> and mass terms <cit.> have previously been explored. Using
balanced envelopes such as those described above as a guide, we observe that slowly-varying envelopes can propagate coherently and robustly around lattice boundaries. Due to their ability to balance nonlinear and dispersive effects, while localized along the boundary,
we call these edge solitons.
§ MAGNETO-OPTICAL SYSTEM
The setup we consider is a planar array of ferrimagnetic rods, e.g. YIG rods, at positions r = (x,y)^T, arranged in a honeycomb lattice pattern (see Fig. <ref>). Similar designs were implemented in <cit.> and <cit.>. The parallelogram unit cell contains an “a-site” and “b-site” with radii of R_a and R_b, respectively. All other cells are integer translations of the lattice vectors
v_1 = ℓ[ 3/2; √(3)/2 ] , v_2 = ℓ[ 3/2; -√(3)/2 ]
from the unit cell, where ℓ is the distance between nearest neighbor rods. The notation (m,n) indicates a rod that is displaced m v_1 + n v_2 away from the unit cell, where m,n ∈ℤ.
A constant external magnetic field H_0 z is applied in the perpendicular (out of the page) direction, saturating the magnetization.
For time-harmonic fields with angular frequency ω, the ferrite rods induce the gyrotropic permeability tensor <cit.>
[μ] =
[ μ i κ 0; - i κ μ 0; 0 0 μ_0 ] ,
where μ = μ_0 ( 1 + ω_0 ω_m/(ω_0^2 - ω^2)) and κ = μ_0 ωω_m/(ω_0^2 - ω^2). The coefficients are defined in terms of ω_0 = μ_0 γ H_0 and ω_m = μ_0 γ M_s, where μ_0 is the vacuum permeability, γ is the gyromagnetic ratio, and M_s is the magnetization saturation of the material.
For rods with permittivity ε( r), the governing TM wave equation for a time-harmonic field is
-∇^2 E + ℳ·∇ E = ω^2 εμ̃ E ,
ℳ( r) = ∇lnμ̃ - i μ̃ ( z×∇η) ,
where E, plus its complex conjugate, is the z-component of the electric field, μ̃ = (μ^2 - κ^2)/μ and η = - κ/(μ^2 - κ^2). Here we take a non-dispersive approximation and fix the values of μ and κ: ω is fixed to eventual band gap frequencies. For a typical YIG rod at frequency f = 7.7GHz (f=ω/2π) with saturation magnetization 4 π M_s = 1750 G and magnetizing field H_0 = 500 Oe, the constitutive relations are approximately μ = 0.88 μ_0, κ = -0.66 μ_0 and ε = 15 ε_0. The equation is nondimensionalized via: r→ℓ r, μ→μ_0 μ, κ→μ_0 κ, ε→ε_0 ε, ω→ cω/ℓ, where c is the speed of light and ε_0 = (c^2 μ_0)^-1 is the vacuum permittivity.
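As a quick numerical check of these quoted values, the short Python sketch below evaluates the constitutive relations; the gyromagnetic ratio γ ≈ 2.8 MHz/Oe used here is a standard value assumed for illustration, not taken from the text.

import numpy as np

gamma = 2.8e6            # Hz per Oersted (assumed standard value)
f0 = gamma * 500.0       # omega_0/(2 pi) from H_0 = 500 Oe  -> 1.4 GHz
fm = gamma * 1750.0      # omega_m/(2 pi) from 4 pi M_s = 1750 G -> 4.9 GHz
f  = 7.7e9               # operating frequency

mu_r    = 1.0 + f0 * fm / (f0**2 - f**2)   # mu / mu_0
kappa_r = f * fm / (f0**2 - f**2)          # kappa / mu_0
mu_eff  = (mu_r**2 - kappa_r**2) / mu_r    # tilde-mu / mu_0
eta     = -kappa_r / (mu_r**2 - kappa_r**2)  # eta, in units of 1/mu_0

print(mu_r, kappa_r)     # approximately 0.88 and -0.66, matching the values quoted above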
The coefficients in (<ref>) share the translation symmetry of the honeycomb lattice: ε( r + m v_1 + n v_2) = ε( r ), μ̃( r + m v_1 + n v_2) = μ̃( r ), and η( r + m v_1 + n v_2) = η( r ) where m,n ∈ℤ. Bloch theory motivates bulk wave solutions of
the form E( r ; k) = e^ i k· ru( r; k), where u( r+ m v_1 + n v_2 ; k)= u( r; k) for quasimomentum k where these
reciprocal lattice vectors are given by
k_1 = 2 π/ℓ[ 1/3; 1/√(3) ], k_2 = 2 π/ℓ[ 1/3; - 1/√(3) ] .
Solving the resulting equation for (u( r; k), ω( k)),
via spectral methods along the Γ M K Γ path in k-space (see Appendix <ref>), we obtain the two lowest spectral bands shown in Fig. <ref>.
The locations of the Dirac points are
K' = [ 0; 4 π/3 √(3)ℓ ] , K = [ 0; - 4 π/3 √(3)ℓ ].
In Fig. <ref>(a), no external magnetic field is applied and a conical Dirac point is observed at the K point. When a magnetic field is applied, then ℳ is nonzero, time-reversal symmetry is broken and a band gap opens [see Fig. <ref>(b)]. Moreover, there is an associated set of nonzero Chern numbers. Note that the first (lowest) band is denoted by `–' subscript, while the second band is denoted by `+' subscript.
In Fig. <ref> we also compare with the discrete tight-binding approximation discussed below.
§ A PERTURBED WANNIER APPROACH
A strong dielectric contrast between the rods and background motivates a tight-binding approximation, whereby a variable coefficient PDE with a periodic lattice potential, i.e. (<ref>), can be reduced to a constant coefficient system of ODEs <cit.>. Bloch wave solutions of (<ref>) are periodic with respect to the quasimomentum k: E( r; k + m k_1 + n k_2 ) = E( r; k),
where the reciprocal lattice vectors k_1,2 satisfy v_i · k_j = 2 πδ_ij. As such, the Bloch wave can be expanded in a Fourier series in k
E( r; k) = ∑_p∑_m,n W_mn^p( r) e^i k· (m v_1 + n v_2) ,
where W_mn^p denotes the Wannier function corresponding to the (m,n) spatial cell and p^ th spectral band.
Due to the properties of Fourier coefficients, the decay of W_mn^p( r) depends on the smoothness of E( r; k) in k. Chern insulators possess an essential phase discontinuity that can not be removed via gauge transformation <cit.>. As a result, a direct Wannier expansion is not useful. But, a closely related set of exponentially localized Wannier functions, which come from a problem with time-reversal symmetry <cit.> can be used perturbatively.
Consider (<ref>) with ℳ = 0, but μ̃≠μ_0; the so-called “perturbed problem”.
The maximally localized Wannier (MLWF) functions, found using well-known methods <cit.>, corresponding to the first two Wannier functions,
called W^a_mn( r) and W^b_mn( r), are shown in Fig. <ref> and centered at the a-sites (d + m v_1 + n v_2) and b-sites (2 d + m v_1 + n v_2), respectively. These Wannier functions are related to those in (<ref>) by a unitary transformation chosen to minimize the variance
(see Appendix <ref>).
Note that these functions are real, exponentially localized, and approximately possess
mirror symmetry about the x = 3 ℓ /2 axis; i.e. W^a_00(- ( r - 3ℓ x/2 )) = W^b_00( r - 3ℓ x/2 ).
The Bloch wave is expanded in terms of this new basis as
E( r; k) = ∑_m,n (a_mnW^a_mn( r) + b_mnW^b_mn( r)) ,
where the phases have been absorbed into the coefficients. Properly normalized Wannier modes exhibit the orthogonality property ⟨ W_mn^p , W_m'n'^p'⟩_ℝ^2 = δ_mm'δ_n n'δ_pp' for the weighted complex inner product ⟨ f , g ⟩_ℝ^2 = ∬_ℝ^2 f( r)^* g( r) ε( r) μ̃( r) d r.
Substituting (<ref>) into (<ref>) with ℳ≠ 0, multiplying by W^j_mn( r), j = a,b, and integrating over ℝ^2 yields a system of algebraic equations whose coefficients depend on
integrals over perturbed Wannier functions. Once the MLWFs are obtained, these integrals are numerically approximated.
Due to the deep lattice, in the simplest tight-binding approximation only nearby interactions are kept since the others are small. Below
we keep terms up to the next-nearest neighboring sites.
§ A HALDANE-TYPE MODEL
Inspection of the numerically computed tight-binding coefficients (see Appendix <ref>) reveals an
effective discrete approximation that is essentially
the well-known Haldane model <cit.>. Namely, replacing ω by id/dt
we obtain
d^2 a_mn/dt^2 + P a_mn + t_1 (δ_- b_mn )
+ t_2 e^i ϕ ( Δ_1 a_mn ) + t_2 e^-i ϕ ( Δ_2 a_mn ) = 0
d^2 b_mn/dt^2 + P̃ b_mn + t_1 (δ_+ a_mn )
+ t̃_2 e^-i ϕ ( Δ_1 b_mn )+ t̃_2 e^i ϕ ( Δ_2 b_mn) = 0
where (δ_± c_mn) ≡ c_mn + c_m ± 1, n + c_m, n± 1 are the nearest neighbor interactions and (Δ_1 c_mn) ≡ c_m,n+1 + c_m-1,n + c_m+1,n-1 and (Δ_2 c_mn) ≡ c_m+1,n + c_m,n-1 + c_m-1,n+1 are next-nearest neighbor contributions;
parameters P, P̃, t_1, t_2, t̃_2, ϕ are real numbers; these values depend on the values of μ, κ, ε and sizes of the radii of the rods.
Notice that this system reduces to the `classical' Haldane model given in <cit.> when t̃_2 = t_2 and ϕ→ - ϕ. The equations can be put in a more standard form by looking for solutions of the form a_mn→ a_mne^i ω t,
similarly for b_mn, and then shifting the spectrum
ω^2 →ω^2 + (P + P̃)/2. This yields an on-site inversion parameter
M ≡ (P - P̃)/2
which is important in <cit.> and below. The result of this latter spectral shift is to effectively replace P in (<ref>) by M and P̃ in (<ref>) by -M.
We find the `classical' Haldane model when inversion symmetry is not broken (P = P̃, t_2 = t̃_2 ) and a modified version when inversion symmetry is broken (P ≠P̃, t_2 ≠t̃_2).
When the a-site and b-site rods differ, the inversion symmetry of the lattice r→ - r is broken and this leads to different interactions among the Wannier modes (see Sec. <ref> and Appendix <ref>).
The physical derivation that leads to the model (<ref>)-(<ref>) should be pointed out. The external magnetic field induces the complex next-nearest coefficients in the system.
Furthermore, this method appears applicable for the derivation of other tight-binding models in Chern insulator systems.
We compare the bulk bands of the discrete model to those numerically computed from (<ref>); see Fig. <ref>. (All tight-binding parameters used
can be found in Appendix <ref>) Indeed, the discrete approximation shows good agreement with the numerical bands; the relative error throughout the Brillouin zone is 6.5% or less. Moreover, for ℓ = 5.8 mm spacing, the gap frequencies in Fig. <ref>(b) lie in the vicinity of the 8 GHz microwave regime
found in <cit.>.
§.§ Analytical Calculation of Bulk Modes
Consider bulk plane wave solutions of system (<ref>)-(<ref>) of the form
a_mn(t) = α( k) e^i [ k· (m v_1 + n v_2) - ω( k) t ] ,
b_mn(t) = β( k) e^i [ k· (m v_1 + n v_2 ) - ω( k) t ] ,
where k∈ℝ. Next, define the nearest neighbor and next-nearest neighbor vectors
a_1 = 0, a_2 = v_1 , a_3 = v_2,
and b_1 = v_1 , b_2 = - v_2 , b_3 = v_2 - v_1, respectively.
Then the bulk Haldane system can be expressed as
the following eigenvalue system
[ M + H_0 + H_3 H_1 - i H_2; H_1 + i H_2 - M + τ (H_0 - H_3) ][ α; β ]
= ω^2
[ α; β ]
where τ = t̃_2/t_2 > 0 and M = (P - P̃)/2 with the terms
H_0( k) = 2 t_2 ∑_j = 1^3 cosϕcos ( k· b_j )
H_1( k) = t_1 ∑_j = 1^3 cos( k· a_j )
H_2( k) = t_1 ∑_j = 1^3 sin ( k· a_j )
H_3( k) = 2 t_2 ∑_j = 1^3 sinϕsin ( k· b_j ) .
Note that we have utilized the frequency shift ω^2 →ω^2 + (P + P̃)/2 to follow the convention used in <cit.>.
When τ = 1 this is precisely Haldane's model <cit.> when ϕ→ - ϕ and ω^2 →ω^2 + H_0.
The dispersion surfaces of (<ref>) are given by
ω^2_±( k) = [H_0( k) (1 + τ) + H_3( k) (1 - τ)]/2 ±√( H_1( k)^2 + H_2( k)^2 + 1/4[2M+ H_0( k)(1 - τ) + H_3( k) (1 + τ) ]^2 ) .
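For illustration, the Python sketch below evaluates H_0,…,H_3 and the two dispersion surfaces on a given quasimomentum; the tight-binding parameter values are placeholders, not the fitted values listed in the appendix table.

import numpy as np

ell = 1.0
t1, t2, t2t, phi, M = -1.0, 0.05, 0.05, -1.0, 0.0    # placeholder parameters
tau = t2t / t2

v1 = ell*np.array([1.5,  np.sqrt(3)/2]); v2 = ell*np.array([1.5, -np.sqrt(3)/2])
a_vecs = np.array([[0.0, 0.0], v1, v2])               # a_1, a_2, a_3
b_vecs = np.array([v1, -v2, v2 - v1])                 # b_1, b_2, b_3

def bands(k):
    H0 = 2*t2*np.cos(phi)*np.sum(np.cos(b_vecs @ k))
    H3 = 2*t2*np.sin(phi)*np.sum(np.sin(b_vecs @ k))
    H1 = t1*np.sum(np.cos(a_vecs @ k))
    H2 = t1*np.sum(np.sin(a_vecs @ k))
    avg  = 0.5*(H0*(1 + tau) + H3*(1 - tau))
    disc = np.sqrt(H1**2 + H2**2 + 0.25*(2*M + H0*(1 - tau) + H3*(1 + tau))**2)
    return avg - disc, avg + disc        # shifted omega^2_- and omega^2_+

K = np.array([0.0, -4*np.pi/(3*np.sqrt(3)*ell)])
print(bands(K))                          # the two (shifted) band values at the K point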
Below, we begin by studying the behavior of the spectrum near the Dirac points. In the absence of magnetization (t_2 = t̃_2 = 0), the spectral gap closes and the bands ω_± touch at these points. Moreover, as will be explained below, the contributions that result in nonzero Chern numbers are acquired at these points.
Consider the behavior of the spectral bands in (<ref>) at the Dirac point K' = ( 0 , 4 π/(3√(3)ℓ) )^T, where the functions H_j, j=0,...,3 reduce to
H_0(K') = - 3 t_2 cosϕ, H_1(K') = 0,
H_2(K') = 0, H_3(K') = 3 √(3) t_2 sinϕ.
Hence, at this Dirac point, the spectral bands in (<ref>) are given by
ω^2_± = [- 3 t_2 cosϕ (1 + τ) + 3 √(3) t_2 sinϕ (1- τ)]/2
±1/2| 2M - 3 t_2 cosϕ (1 - τ) +3 √(3) t_2 sinϕ (1 + τ) | .
A gap closure (i.e. ω_+= ω_-) occurs when the equation
2 M - 3 (t_2 - t̃_2) cosϕ + 3 √(3) (t_2 + t̃_2) sinϕ = 0
is satisfied.
If on the other hand, the Dirac point K = - K' is considered, then
H_0(K) = - 3 t_2 cosϕ, H_1(K) = 0,
H_2(K) = 0, H_3(K) = - 3 √(3) t_2 sinϕ,
where the only difference is the sign of H_3. Here, the corresponding gap closure occurs when the equation
2 M - 3 (t_2 - t̃_2) cosϕ - 3 √(3) (t_2 + t̃_2) sinϕ = 0
is satisfied.
When τ = 1 (inversion symmetry present), the gap closure condition reduces to
that of the classical Haldane model: M ± 3 √(3) t_2 sinϕ = 0. The curves in (<ref>) and (<ref>) are shown in Fig. <ref> for different values of τ and correspond to topological transition points.
The eigenmodes associated with the ω_±^2 eigenvalues in (<ref>) are
c_±( k) = 1/D( k)[ H_1( k) - i H_2( k); ω_±^2( k) -M - H_0( k) - H_3( k) ] .
The term D( k) is a normalization factor chosen to ensure || c_± ||_2 =1.
Notice that these functions are periodic in k:
c_±( k + k_j) = c_±( k ) ,
for the reciprocal lattice vectors, j = 1,2.
The Chern number for the first two bands is given by
C_± = 1/2 π i∬_Ω( < ∂ c_±/∂ k_x ,∂ c_±/∂ k_y> - < ∂ c_±/∂ k_y ,∂ c_±/∂ k_x>) d k ,
where ⟨ f , g⟩ = f^† g and † indicates the complex conjugate transpose.
The region Ω is a reciprocal unit cell, given by the parallelogram region formed by the reciprocal lattice vectors k_1, k_2.
To compute (<ref>), Stokes' Theorem is applied over Ω.
This equates the double integral over Ω to a closed line integral along the boundary ∂Ω.
However, since the eigenfunctions (<ref>) are not differentiable at the Dirac points, a contour integral which excludes these points must be implemented (see <cit.>).
Due to the periodic boundary conditions in the eigenmodes, the boundary ∂Ω makes no contribution to the Chern number. The only nontrivial contributions come from the two Dirac points:
C_± = - 1/2 π i∮_∂ K A_±( k) · d k - 1/2 π i∮_∂ K' A_±( k) · d k
where A_±( k) = ⟨ c_± , ∇_ k c_±⟩ = ⟨ c_± , ∂_k_x c_±⟩k_x + ⟨ c_± , ∂_k_y c_±⟩k_y is the Berry connection.
The contours of integration in (<ref>) are taken to be small counter-clockwise oriented circles centered around the Dirac points, K and K', respectively.
Next, the eigenmodes are linearized about the Dirac point k = K'. A similar calculation follows for the other Dirac point. Doing so, we get
c_±( k) ≈ c_±(K') + ( k - K') ·∇_ k c_±(K') ,
where ∇_ k≡∂_k_xk_x + ∂_k_yk_y.
After renormalizing the linear approximation via ψ = c_±/|| c_± ||_2, the Berry connection and Chern number (<ref>) are computed in the neighborhood of the K' Dirac point.
The following are the results. The contribution to the total Chern number at the K' Dirac point is -1 for
2 M - 3 (t_2 - t̃_2) cosϕ + 3 √(3) (t_2 + t̃_2) sinϕ > 0
and 0 otherwise. Meanwhile, the contribution at the K Dirac point is +1 for
2 M - 3 (t_2 - t̃_2) cosϕ - 3 √(3) (t_2 + t̃_2) sinϕ > 0
and 0 otherwise. These different regions of topology are summarized in Fig. <ref>. This figure represents a generalization of the phase diagram in <cit.>.
The Chern number is found by combining the contributions in (<ref>) and (<ref>). Suppose we focus on the interval 0 < ϕ < π. Then for parameters that satisfy neither (<ref>) nor (<ref>), that is 2 M < 3 (t_2 - t̃_2) cosϕ - 3 √(3) (t_2 + t̃_2) sinϕ,
both Dirac points have zero contribution and C_+ =0 + 0 = 0. Next, for values that satisfy (<ref>), but not (<ref>),
the K' Dirac point contributes -1, and the K point has a null contribution, so C_+ = - 1 + 0 = -1. Lastly, when both (<ref>) and (<ref>) are satisfied, that is
2 M > 3 (t_2 - t̃_2) cosϕ + 3 √(3) (t_2 + t̃_2) sinϕ,
both Dirac points contribute and cancel each other out, so C_+ = -1 + 1 = 0. For all cases considered in this paper, the analytically computed Chern numbers agree with numerics <cit.>.
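These three cases can be packaged into a small routine (Python). As in the discussion above, it is stated for 0 < ϕ < π, and the parameter values fed to it are up to the user.

import numpy as np

def chern_upper(M, t2, t2_tilde, phi):
    lhs = 2*M - 3*(t2 - t2_tilde)*np.cos(phi)
    gap_Kp = lhs + 3*np.sqrt(3)*(t2 + t2_tilde)*np.sin(phi)   # left side of condition (25)
    gap_K  = lhs - 3*np.sqrt(3)*(t2 + t2_tilde)*np.sin(phi)   # left side of condition (26)
    c = 0
    if gap_Kp > 0:
        c += -1          # K' Dirac point contributes -1
    if gap_K > 0:
        c += +1          # K Dirac point contributes +1
    return c             # 0, -1, or 0 in the three cases described above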
Lastly,
it is observed that, similar to the classic Haldane model, bands only open for ϕ≠ n π, n ∈ℤ. Physically, values of ϕ = n π correspond to completely real next-nearest neighbor coefficients. Opening a spectral gap that supports topologically protected edge modes
requires complex next-nearest neighbor coefficients. In this model, the complex nature of the next-nearest neighbor coefficients comes from the external magnetic field.
§.§ Broken Inversion Symmetry
A notable feature of the Haldane model is a change in topology when the degree to which the inversion symmetry is broken is sufficiently large.
The generalized model (<ref>)-(<ref>) also exhibits this property. Physically, inversion symmetry of the system can be broken by choosing different radii for the a and b lattice sites, that is R_a ≠ R_b. Doing so leads to P ≠P̃, t_2 ≠t̃_2 and M ≠ 0 (as defined in (<ref>)).
Spectral band diagrams resulting from such a change are shown in Figs. <ref> and <ref>. As the radii differential changes, the system undergoes a topological transition that is captured by the model. We observe that when R_a is sufficiently smaller (or larger) than R_b, the system is a trivial Chern insulator. Only when R_a ≈ R_b do we observe a nontrivial Chern insulator state.
Specifically, for fixed radius R_b = 0.3 ℓ, the (numerical) transition points between a trivial and nontrivial Chern insulator occurs at approximately R_a = 0.27 ℓ (at k=K; see Fig. <ref>(b)) and R_a = 0.35 ℓ (at k=K'; see <ref>(b)).
The discrete model (<ref>)-(<ref>) is also found to exhibit these topological transitions when the inversion symmetry is broken, i.e. M ≠ 0 in (<ref>).
The different regions of topology were analytically studied in Sec. <ref>; this information is summarized in Fig. <ref>.
We note that for values of τ smaller than 1, like Fig. <ref>, typically the difference |R_a- R_b| for R_a < R_b needs to be smaller
to see a topological transition (numerical bands touch for R_a = 0.27ℓ , R_b = 0.3 ℓ, so |R_a - R_b| = 0.03 ℓ).
In contrast, when R_a > R_b and τ is larger than 1, like Fig. <ref>, a larger difference |R_a - R_b| is needed for a topological transition (numerical bands touch for R_a = 0.35ℓ , R_b = 0.3 ℓ, so |R_a - R_b| = 0.05 ℓ). In examining the locations of the parameters (see Table <ref> in Appendix <ref>) relative to the topological regions shown in Fig. <ref>, it appears this source of the asymmetry is the noticeable change in the value ϕ as R_a increases.
This differs from the behavior when R_a decreases, where ϕ does not change substantially. This asymmetry in the transition points also occurs if instead R_a is fixed and R_b is adjusted. The main difference is that the spectral touching points switch from what was observed above: K ↔ K'.
§ TOPOLOGICALLY PROTECTED EDGE MODES
The edge problem is now considered. An edge is placed along the zig-zag edge parallel to the v_1 lattice vector.
Outside a semi-infinite strip, the electric field is assumed to decay exponentially fast. We find edge states that decay exponentially in the v_2 direction.
Two topologically distinct edge band diagrams are shown in Fig. <ref>. Edge modes along the direction v_1 are found
by taking
a_mn(t) = a_n( k) e^i [ m k· v_1 + ω t ] , b_mn(t) = b_n( k) e^i [ m k· v_1 + ω t ] ,
which reduces the governing system (4)-(5) to
ω^2 a_n = P a_n + t_1 [ (1 + e^ - i k· v_1) b_n + b_n-1]
+ t_2 e^i ϕ[ a_n+1 + e^ - i k· v_1 a_n + e^ i k· v_1 a_n-1]
+ t_2 e^-i ϕ[ a_n-1 + e^ i k· v_1 a_n + e^ - i k· v_1 a_n+1] ,
ω^2 b_n = P̃ b_n + t_1 [ (1 + e^ i k· v_1) a_n + a_n+1]
+ t̃_2 e^-i ϕ[ b_n+1 + e^ - i k· v_1 b_n + e^ i k· v_1 b_n-1]
+ t̃_2 e^i ϕ[ b_n-1 + e^ i k· v_1 b_n + e^ - i k· v_1 b_n+1] .
Note that k· v_1 = ( r k_1 + s k_2) · v_1 = 2 π r for r,s ∈ℝ due to the relationship k_i · v_j = 2 πδ_ij. As a result, the coefficients cover one period over
0 ≤ r ≤ 1.
This system is solved numerically by implementing zero Dirichlet boundary conditions
a_n, b_n = 0 , n < 0, n > N
where N is taken to be large.
We took N = 64 to generate Fig. <ref>. The band gap eigenfunctions are found to be exponentially localized and decay rapidly away from the boundary wall, in the v_2 direction.
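A minimal Python sketch of this ribbon computation is given below. It assembles the 2N × 2N system above for a given Bloch phase k· v_1 = 2π r with zero Dirichlet closure; the tight-binding parameters are placeholders, not the fitted values of the appendix table.

import numpy as np

N = 64                                                        # strip width
P, Pt, t1, t2, t2t, phi = 1.0, 1.0, -1.0, 0.05, 0.05, -1.0    # placeholder parameters

def ribbon_matrix(r):
    # unknown ordering: [a_0, ..., a_{N-1}, b_0, ..., b_{N-1}], Bloch phase e = e^{i k.v1}
    e = np.exp(2j*np.pi*r)
    ep, em = np.exp(1j*phi), np.exp(-1j*phi)
    H = np.zeros((2*N, 2*N), dtype=complex)
    for n in range(N):
        H[n, n]       = P  + t2 *(ep*e.conjugate() + em*e)
        H[N+n, N+n]   = Pt + t2t*(em*e.conjugate() + ep*e)
        H[n, N+n]    += t1*(1 + e.conjugate())
        H[N+n, n]    += t1*(1 + e)
        if n > 0:
            H[n, N+n-1]   += t1
            H[n, n-1]     += t2 *(ep*e + em)
            H[N+n, N+n-1] += t2t*(em*e + ep)
        if n < N-1:
            H[N+n, n+1]   += t1
            H[n, n+1]     += t2 *(ep + em*e.conjugate())
            H[N+n, N+n+1] += t2t*(em + ep*e.conjugate())
    return H

omega_sq = np.linalg.eigvalsh(ribbon_matrix(0.5))   # omega^2 values at r = 0.5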
The band configuration in Fig. <ref>(a') corresponds to bulk eigenmodes with zero Chern number due strong inversion symmetry breaking. As a result, there are no edge modes spanning the entire frequency gap. On the other hand, the system with corresponding nonzero Chern numbers in Fig. <ref>(c') exhibits a nontrivial
band structure inside the gap. These topologically protected chiral states propagate unidirectionally.
Finally, we consider time evolutions of these topologically distinct states.
To do so, envelope approximations are evolved by taking the quasi-monochromatic initial data
a_mn(0) = sech(ν m) e^i m k· v_1 a_n( k)
b_mn(0) = sech(ν m) e^i m k· v_1 b_n( k) ,
where a_n,b_n are numerically computed edge states indicated by the red dots at k = 0.5 k_1 in Fig. <ref> and ν is a relatively small parameter; here we took ν = 0.1.
Edge eigenmodes localized along the bottom edge of the MO honeycomb lattice are taken.
The edge envelopes are then propagated into a defect barrier missing two lattice cells in the - v_2 direction, in which the electric field is negligibly small.
Using the initial condition above, the evolutions obtained by solving (<ref>)-(<ref>) are highlighted in Fig. <ref>.
Edge states with corresponding nontrivial Chern invariants (see Figs. <ref>(c') and <ref>(c”)) propagate chiraly around the defect barrier. There is virtually no loss in amplitude. On the other hand, edge modes associated with zero Chern number (see Figs. <ref>(a') and <ref>(a”)) experience significant losses and scattering upon collision with the barrier. A portion of the original envelope propagates around the barrier, but there is a nearly 67%
amplitude loss due to scattering into the bulk.
As a final note, we observe a small decay in the maximal amplitude of these topologically protected modes, a roughly 10% decline over 1500 time units. This is expected due to dispersion. It is well-known that self-focusing nonlinearity can balance these dispersive effects and form solitons <cit.>. This motivates the next section, which investigates a nonlinear Haldane model and edge solitons.
§ A NONLINEAR HALDANE MODEL
In this section, we consider the effects of nonlinearity in our Haldane model. Similar versions of this model were mentioned in the Introduction section. The physical motivation here is that of a (relatively) high power electric field with non-negligible third-order polarization effects. The result is an onsite Kerr-type term that is proportional to the field intensity.
We consider the following nonlinear Haldane model
d^2 a_mn/dt^2 + P a_mn + t_1 (δ_- b_mn ) + σ |a_mn|^2 a_mn
+ t_2 e^i ϕ ( Δ_1 a_mn ) + t_2 e^-i ϕ ( Δ_2 a_mn ) = 0
d^2 b_mn/dt^2 + P̃ b_mn + t_1 (δ_+ a_mn ) + σ |b_mn|^2 b_mn
+ t̃_2 e^-i ϕ ( Δ_1 b_mn )+ t̃_2 e^i ϕ ( Δ_2 b_mn) = 0
where the linear interaction coefficients, δ_±,Δ_j, j=1,2,
are defined below (<ref>)-(<ref>). Motivated by previous studies, we take an on-site focusing, Kerr-type nonlinearity, i.e. σ > 0. For the simulations below, we take σ = 0.1.
The initial conditions used to generate solitons below are of the form
a_mn(0) = A sech(ν m) e^i m k· v_1 a_n( k)
b_mn(0) = A sech(ν m) e^i m k· v_1 b_n( k) ,
with k = 0.65 k_1.
We choose a linear edge mode whose corresponding dispersion (second derivative) is nonzero, i.e. ω''( k) ≈ -0.287 < 0 (see red `x' marker in Fig. <ref>). Note that these derivatives are defined in the directional derivative sense
ω'( k) =
∇ω |_ k_1 = lim_h → 0 [ω ( k + h k_1) - ω ( k)]/h .
For reference, the group velocity is ω' ( k) ≈ -0.0584 in the v_1 direction.
Unfortunately, the third order dispersion is relatively large, ω'''( k) ≈ -1.401, which will impact the formation of solitons. For this relatively weak dispersion, we seek a comparably weak nonlinearity to balance it, i.e. A = 0.3. A corresponding slowly-varying profile (ν = 0.15) is chosen to ensure that as pure a single edge mode as possible is excited.
Using the parameters described above, a typical evolution through a one
lattice cell defect in the - v_2 direction is highlighted in Fig. <ref>.
The resulting nonlinear mode propagates over relatively long time scales (0 ≤ t ≤ 1200) with a nearly constant solitary form. We observe a small 3.7 % relative change in maximum magnitude between the initial and final states.
Hence, we refer to this as an edge soliton. We note that, eventually, on longer time scales higher-order dispersion terms will become non-negligible and the mode will degrade.
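For illustration, a minimal time-stepping sketch of the nonlinear system above is given below (Python, RK4). It evolves the lattice equations on a small doubly periodic (bulk) patch with placeholder parameters, rather than the finite edged lattice with a defect used in the simulations described in the text.

import numpy as np

Nm = Nn = 32
P, Pt, t1, t2, t2t, phi, sigma = 1.0, 1.0, -1.0, 0.05, 0.05, -1.0, 0.1  # placeholders

def sh(c, dm, dn):                       # result[m, n] = c[m + dm, n + dn] (periodic)
    return np.roll(c, (-dm, -dn), axis=(0, 1))

def accel(a, b):
    d_minus_b = b + sh(b, -1, 0) + sh(b, 0, -1)                 # (delta_- b)_{mn}
    d_plus_a  = a + sh(a, +1, 0) + sh(a, 0, +1)                 # (delta_+ a)_{mn}
    D1a = sh(a, 0, +1) + sh(a, -1, 0) + sh(a, +1, -1)           # (Delta_1 a)_{mn}
    D2a = sh(a, +1, 0) + sh(a, 0, -1) + sh(a, -1, +1)           # (Delta_2 a)_{mn}
    D1b = sh(b, 0, +1) + sh(b, -1, 0) + sh(b, +1, -1)
    D2b = sh(b, +1, 0) + sh(b, 0, -1) + sh(b, -1, +1)
    att = -(P*a  + t1*d_minus_b + sigma*np.abs(a)**2*a
            + t2 *np.exp(1j*phi)*D1a + t2 *np.exp(-1j*phi)*D2a)
    btt = -(Pt*b + t1*d_plus_a  + sigma*np.abs(b)**2*b
            + t2t*np.exp(-1j*phi)*D1b + t2t*np.exp(1j*phi)*D2b)
    return att, btt

def rk4_step(y, dt):
    def f(y):
        a, b, at, bt = y
        att, btt = accel(a, b)
        return np.array([at, bt, att, btt])
    k1 = f(y); k2 = f(y + 0.5*dt*k1); k3 = f(y + 0.5*dt*k2); k4 = f(y + dt*k3)
    return y + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0

a0 = 0.01*(np.random.randn(Nm, Nn) + 1j*np.random.randn(Nm, Nn))
y = np.array([a0, np.zeros((Nm, Nn), complex), np.zeros_like(a0), np.zeros_like(a0)])
for _ in range(200):
    y = rk4_step(y, 0.01)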
Now, some further remarks
about these topologically protected edge solitons. Choosing the appropriate
sign of dispersion is imperative for achieving a self-focusing effect and solitons. When ω''( k) > 0, we observe a gradual self-defocusing dissipation of the envelope. The ideal scenario for soliton formation in this MO lattice is when ω''( k) < 0 and ω'''( k) ≈ 0. Choosing a mode centered at the zero-dispersion (inflection) point, i.e. ω''( k) = 0, results in substantial dispersive break-up of initially localized solitary waves.
In the nonlinear case, when we send a slowly-modulated mode corresponding to topologically trivial (null Chern number) into a defect, we observe significant radiation into the bulk, similar to that observed in Fig. <ref>(c”) for
the linear evolution. In this weakly nonlinear regime, it is important to modulate a topologically nontrivial linear mode to obtain robust, unidirectional propagation. For a fully robust edge soliton, this nontrivial topology must be paired with a balanced soliton envelope.
§ CONCLUSION
A perturbed Wannier approach for obtaining tight-binding approximations containing nearest and next-nearest neighbors of a magneto-optical honeycomb lattice system is
studied.
Remarkably, this method leads to
the celebrated system studied by Haldane in 1988 <cit.>. This model agrees with experiments <cit.> and indicates topological transitions can occur when inversion symmetry
and time-reversal symmetry are broken. This data-driven Wannier approach has been previously employed in rectangular lattice geometries <cit.> and can be applicable for discovering and extrapolating discrete reductions in other Chern insulator systems in cases where a direct Wannier approach is ineffective.
Acknowledgements
This project was partially supported by AFOSR under grants No. FA9550-19-1-0084 and No. FA9550-23-1-0105.
§ NUMERICAL COMPUTATION OF SPECTRAL BANDS AND WANNIER FUNCTIONS
The numerical computation of the Bloch modes and maximally localized Wannier functions (MLWFs) is reviewed below. A more comprehensive discussion can be found in <cit.>.
To simplify the necessary calculations, first, a linear transformation is introduced to map the parallelogram unit cell to a square. The change of variables
u = 1/3 x - 1/√(3) y , v = 1/3 x + 1/√(3) y
transforms the parallelogram formed by the lattice vectors v_1, v_2 into a square with side length 1. As a result, the master equation (<ref>) transforms to
- 4/9( ∂^2 E/∂ u^2 - ∂^2 E/∂ u ∂ v + ∂^2 E/∂ v^2) + g(u,v) ∂ E/∂ u + h(u,v) ∂ E/∂ v
= ω^2 ε(u,v) μ̃(u,v) E
where
g(u,v) = 4/9μ̃∂μ̃/∂ u - 2/9μ̃∂μ̃/∂ v + 2 i μ̃/3 √(3)∂η/∂ v
h(u,v) =- 4/9μ̃∂μ̃/∂ u + 2/9μ̃∂μ̃/∂ v - 2 i μ̃/3 √(3)∂η/∂ v .
From here, a formulation similar to that used in <cit.> can be applied. All transformed coefficients now have the periodicity: f(u + m , v + n ) = f(u,v) for m,n ∈ℤ. That is, the functions are periodic with respect to the transformed lattice vectors e_1 = (1 , 0)^T and e_2 = (0,1)^T.
Hence, transformed master equation (<ref>) is solved by looking for Bloch wave solutions with the form E( w, κ) = e^i κ· w u( w; κ), where u( w + m e_1 + n e_2; κ) = u( w; κ) for w = (u,v)^T, κ = (k_u, k_v)^T. Note: this κ is unrelated to κ of the permeability tensor.
The numerical spectral bands shown throughout this paper are computed by solving (<ref>) for the eigenfunction/eigenvalue pair (u,ω) as functions of the transformed quasimomentum. Subsequently, the quasimomentum is transformed back to the original k_x,k_y variables via
k_x = (k_u + k_v)/3 , k_y = (k_v - k_u)/√(3) .
The continuous Chern numbers are defined by
C_p = 1/2 π i∬_ BZ (∇_ k× A_p) · z d k ,
where A_p( k) = ⟨ u_p( r, k) | ∂_k_x u_p( r, k) ⟩_ UC,ϵμ̃ x + ⟨ u_p( r, k) | ∂_k_y u_p( r, k) ⟩_ UC,ϵμ̃ y
are numerically computed using the algorithm <cit.> with respect to the weighted inner product
⟨ f, g ⟩_ UC,ϵμ̃ = ∬_ UC f( r)^* g( r) ε( r) μ̃( r) d r ,
where UC denotes the unit cell.
The transformed Bloch wave is periodic with respect to the transformed reciprocal lattice vectors, κ_1 = 2 π (1 , 0)^T, κ_2 = 2 π (0 ,1)^T. Notice: e_i ·κ_j = 2 πδ_ij. As such, it can be expressed as the Fourier series
E( w; κ) = ∑_p∑_m,n W_mn^p( w) e^i κ· (m e_1 + n e_2) ,
where W_mn^p( w) is a transformed Wannier function corresponding to the p^ th band, and centered at the (m,n) unit cell.
For the problem studied in this paper, we only consider the lowest two bands p = 1,2 and truncate the remaining terms.
Next, the MLWF algorithm <cit.> is applied to find localized Wannier functions for the g = h = 0 (corresponding to ℳ = 0) problem in equation (<ref>). This is done by finding a unitary transformation that minimizes the functional describing the variance of the Wannier function, given by equation (<ref>) below. Let E_1( w, κ) = e^i κ· w u_1( w; κ) and E_2( w, κ) = e^i κ· w u_2( w; κ) correspond to first and second spectral bands, respectively.
A spectral unitary transformation of the Bloch functions is taken at fixed values of w
[ u^c( w; κ); u^d( w; κ) ]
=
𝕌(κ)
[ u_1( w; κ); u_2( w; κ) ]
where 𝕌(κ) is a 2× 2 matrix. Only after computing these Wannier functions do we realize where they are physically located. Upon inspection, we replace the labels c,d with the labels a,b, where a modes are centered at the a-sites and b modes centered at the b-sites (see Fig. <ref>).
Upon obtaining these functions, the Bloch modes E^a( w; κ) = e^i κ· wu^a( w; κ) and E^b( w; κ) = e^i κ· wu^b( w; κ) are computed and then used to construct the Wannier functions
W^a_mn( w) = 1/4 π^2∬_ BZ e^ - i κ· (m e_1 + n e_2 )E^a( w; κ) dκ
W^b_mn( w) = 1/4 π^2∬_ BZ e^ - i κ· (m e_1 + n e_2 )E^b( w; κ) dκ
shown in Fig. <ref>.
§ TIGHT-BINDING PARAMETERS
The parameters used to produce the tight-binding approximations are given in Table <ref>. All cases correspond to the magnetization by an external field, or ℳ( r) ≠ 0, except (i) which is the unmagnetized case. Also included are the corresponding rod radii.
The Chern number corresponding to the upper spectral surface of the tight-binding model is included.
The value C_+ = -1
corresponds to phase points located inside the topological region of Fig. <ref>, while C_+ = 0
lies above or below this region. The values in the table were computed analytically as well as numerically using the algorithm in <cit.> on the discrete eigenvectors.
In all cases considered, the values agreed. The topological numbers for the discrete (tight-binding) model also match those for the continuum model, except possibly near the sensitive topological transition points.
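A minimal sketch of this numerical cross-check, using the lattice field-strength algorithm of <cit.> on the 2×2 bulk Hamiltonian, is given below (Python). Parameter values are placeholders, and the overall sign depends on the chosen Brillouin-zone orientation.

import numpy as np

def bloch_H(k, M, t1, t2, t2t, phi, ell=1.0):
    v1 = ell*np.array([1.5,  np.sqrt(3)/2]); v2 = ell*np.array([1.5, -np.sqrt(3)/2])
    a = np.array([[0.0, 0.0], v1, v2]); b = np.array([v1, -v2, v2 - v1])
    H0 = 2*t2*np.cos(phi)*np.sum(np.cos(b @ k)); H3 = 2*t2*np.sin(phi)*np.sum(np.sin(b @ k))
    H1 = t1*np.sum(np.cos(a @ k));               H2 = t1*np.sum(np.sin(a @ k))
    tau = t2t/t2
    return np.array([[M + H0 + H3, H1 - 1j*H2],
                     [H1 + 1j*H2, -M + tau*(H0 - H3)]])

def chern_fhs(band, M, t1, t2, t2t, phi, Nk=60, ell=1.0):
    # band = 0 (lower) or 1 (upper); eigenvectors sampled on an Nk x Nk grid of the reciprocal cell
    k1 = 2*np.pi/ell*np.array([1/3,  1/np.sqrt(3)])
    k2 = 2*np.pi/ell*np.array([1/3, -1/np.sqrt(3)])
    u = np.empty((Nk, Nk, 2), dtype=complex)
    for i in range(Nk):
        for j in range(Nk):
            k = (i/Nk)*k1 + (j/Nk)*k2
            _, v = np.linalg.eigh(bloch_H(k, M, t1, t2, t2t, phi, ell))
            u[i, j] = v[:, band]
    F = 0.0
    for i in range(Nk):
        for j in range(Nk):
            U1 = np.vdot(u[i, j], u[(i+1) % Nk, j])
            U2 = np.vdot(u[(i+1) % Nk, j], u[(i+1) % Nk, (j+1) % Nk])
            U3 = np.vdot(u[(i+1) % Nk, (j+1) % Nk], u[i, (j+1) % Nk])
            U4 = np.vdot(u[i, (j+1) % Nk], u[i, j])
            F += np.angle(U1*U2*U3*U4)       # lattice Berry curvature on each plaquette
    return F/(2*np.pi)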
§ BREAKING OF INVERSION SYMMETRY
As discussed in Sec. <ref>, breaking inversion symmetry can induce a topological transition from a nontrivial
to trivial Chern insulator. This symmetry breaking can be implemented by choosing different radii at a-sites and b-sites, that is R_a ≠ R_b in Fig. <ref>. The spectral bands induced by this change are shown in Figs. <ref> and <ref>. In particular, decreasing R_a relative to R_b induces a topological transition and a touching point at the K Dirac point. If, on the other hand, one considers R_a larger relative to R_b, a similar topological transition occurs, but instead the gap closes at the opposite Dirac point, K' = - K.
A depiction of this is shown in Fig. <ref>.
The computed parameters are summarized in Table <ref>. Examining the inversion parameter M = (P - P̃)/2, it is observed to be positive when R_a < R_b, and negative when R_a > R_b.
For applications which seek to use these chiral edge modes, inversion symmetry should be nearly satisfied, that is, R_a ≈ R_b. More precisely, the system supports topologically protected modes when the parameters M and ϕ are chosen to reside in the inner (topological) region of Fig. <ref>.
Some of the
Wannier functions corresponding to broken inversion symmetry are shown in Fig. <ref>. In each case the rod profiles and their corresponding Wannier modes are shown. These Wannier modes are constructed in a manner similar to that described in Sec. <ref> and <cit.>, i.e. ℳ = 0, μ̃≠μ_0. Also given is the variance
Ω = ⟨ | r|^2 ⟩ - | ⟨ r⟩ |^2 ,
⟨ x^n ⟩≡∬_ℝ^2 x^n |W( r)|^2 ε( r) μ̃( r) d r
where W( r) is the corresponding Wannier mode.
The lattice sites whose rods have a larger relative radius correlate to smaller variances, and vice versa for relatively smaller rods. This is the source of t̃_2 ≠ t_2 and P̃≠ P. These different widths indicate different decay rates and imply different tight-binding coefficients among sites of the same type.
9
Jotzu2014 G. Jotzu, M. Messer, Rémi Desbuquois, M. Lebrat, T. Uehlinger, D. Greif, and T. Esslinger, Nature 515 237 (2014).
vonKlitzing1980 K. Von Klitzing, G. Dorda, and M. Pepper, Phys. Rev. Lett. 45 494 (1980).
Bernevig2006 B. A. Bernevig and S.-C. Zhang, Phys. Rev. Lett. 96 106802 (2006).
Chang2013 C.-Z. Chang, et al., Science 340 167 (2013).
Delplace2017 P. Delplace, J. B. Marston, and A. Venaille, Science 358 1075 (2017).
Rechtsman2013 M. C. Rechtsman, et al., Nature 496 196 (2013).
Ozawa2019 T. Ozawa, H. M. Price, A. Amo, N. Goldman, M. Hafezi, L. Lu, M. C. Rechtsman, D. Schuster, J. Simon, O. Zilberberg, and I. Carusotto, Rev. Mod. Phys. 91 015006 (2019).
Lu2014 L. Lu, J. D. Joannopoulos, and M. Soljac̆ić, Nat. Photon. 8 821 (2014).
Ablowitz2022a M. J. Ablowitz and J. T. Cole, Physica D 440 133440 (2022).
Fefferman2018 C. L. Fefferman, J. P. Lee‐Thorp, M. I. Weinstein, Comm. Pure Appl. Math. 71 1178 (2018).
Haldane1988 F. D. M. Haldane, Phys. Rev. Lett. 61 2015 (1988).
Wang2008 Z. Wang, Y. D. Chong, J. D. Joannopoulos, and M. Soljac̆ić, Phys. Rev. Lett. 100 013905 (2008).
Wang2009 Z. Wang, Y. D. Chong, J. D. Joannopoulos, and M. Soljac̆ić, Nature 461 772 (2009).
Sun2019 X.-C. Sun, C. He, X.-P. Liu, Y. Zou, M.-H. Lu, X. Hu, and Y.-F. Chen, Crystals 9 137 (2019).
Yang2013 Y. Yang, Y. Poo, R.-X. Wu, Y. Gu, and P. Chen, Appl. Phys. Lett. 102 231113 (2013).
Ao2009 X. Ao, Z. Lin, and C. T. Chan, Phys. Rev. B 80 033105 (2009).
Poo2011 Y. Poo, R.-X. Wu, Z. Lin, Y. Yang, and C. T. Chan, Phys. Rev. Lett. 106 093903 (2011).
Zhao2020 R. Zhao, G.-D. Xie, M. L. N. Chen, Z. Lan, Z. Huang, and W. E. I. Sha, Opt. Express 28 4638 (2020).
Haldane2008 F. D. M. Haldane and S. Raghu, Phys. Rev. Lett. 100 013904 (2008).
Raghu2008 S. Raghu and F. D. M. Haldane, Phys. Rev. A 78 033834 (2008).
LeeThorp2019 J. P. Lee-Thorp, M. I. Weinstein, Y. Zhu, Arch. Ration. Mech. Anal. 232 1 (2019).
Ablowitz2022 M. J. Ablowitz, J. T. Cole, and S. D. Nixon, SIAM J. Appl. Math., In press, (2023)
Ablowitz2020 M. J. Ablowitz and J. T. Cole, Phys. Rev. A 101 023811 (2020).
Ablowitz2014 M. J. Ablowitz, C. W. Curtis, and Y.-P. Ma, Phys. Rev. A 90 023813 (2014).
Ablowitz2017 M. J. Ablowitz and J. T. Cole, Phys. Rev. A 96 043868 (2017).
Mukherjee2021 S. Mukherjee and M. C. Rechtsman, Phys. Rev. X 11 041057 (2021).
Ablowitz2021 M. J. Ablowitz, J. T. Cole, P. Hu, and P. Rosenthal, Phys. Rev. E 103 042214 (2021).
Harari2018 G. Harari, et al., Science 359 eaar4003 (2018).
Zhou2017 X. Zhou, Y. Wang, D. Leykam, and Y. D. Chong, New J. Phys. 19 095002 (2017).
Brouder2007 C. Brouder, G. Panati, M. Calandra, C. Mourougane, and N. Marzari, Phys. Rev. Lett. 98 046402 (2007).
Marzari1997 N. Marzari and D. Vanderbilt, Phys. Rev. B 56 12847 (1997).
Pozar D. M. Pozar, Microwave Engineering 4th ed. (John Wiley & Sons, 2012).
fukui T. Fukui, Y. Hatsugai, and H. Suzuki, Phys. Soc. of Japan, 74 1674 (2005).
Ablowitz_book M. J. Ablowitz, Nonlinear Dispersive Waves (Cambridge Press, 2011).
|
http://arxiv.org/abs/2307.03968v1 | 20230708125450 | Multi-Level Power Series Solution for Large Surface and Volume Electric Field Integral Equation | [
"Y. K. Negi",
"N. Balakrishnan",
"S. M. Rao"
] | cs.CE | [
"cs.CE",
"cs.NA",
"math.NA"
] |
Multi-Level Power Series Solution for Large Surface and Volume Electric Field Integral Equation
Y. K. Negi, N. Balakrishnan, and S. M. Rao
===================================================================
In this paper, we propose a new multi-level power series solution method for solving a large surface and volume electric field integral equation-based H-Matrix.
The proposed solution method converges in a fixed number of iterations and is solved at each level of the H-Matrix computation. The solution method
avoids the computation of a full matrix, as it can be solved independently at each level, starting from the leaf level. Solution at each level can be used as
the final solution, thus saving the matrix computation time for the full H-Matrix. The paper shows that the leaf-level matrix computation and power series solution give results as accurate as the full H-Matrix iterative solver. The method results in considerable time and memory savings compared to the H-Matrix iterative solver. Further, the proposed method retains the O(NlogN) solution complexity.
Method of Moments (MoM), H-Matrix, surface electric field integral equation, volume electric field integral equation.
§ INTRODUCTION
With the use of ever-increasing frequencies for various defence and civilian applications in the current world, the electrical
size of electromagnetic scattering/radiation problems has grown drastically <cit.>. Solving electrically large problems numerically to obtain fast and
accurate results is the biggest challenge in the Computational Electromagnetics (CEM) community. Also, with the increase in computing power and memory,
the need for large-scale solution algorithms has grown even more. Out of the various numerical methods in CEM, the most popular methods are:
a) the Finite Difference Time Domain (FDTD) <cit.> method in the time domain and b) the Method of Moments (MoM) <cit.> and Finite Element
Method (FEM) <cit.> in the frequency domain. Traditionally, the frequency domain methods have been more popular than the time domain methods
as most of the early experimental results were available in the frequency domain and validating the computational results was convenient and easy.
Out of the various frequency domain methods, MoM based methods are highly accurate and flexible for modeling irregular structures. The MoM matrix
can be computed with the Surface Electric Field Integral Equation (S-EFIE) for solving Perfect Electrical Conductor (PEC) problems with surface mesh, and the
Volume Electric Field Integral Equation (V-EFIE) <cit.> for solving inhomogeneous dielectric problems with volume mesh. Further, the MoM leads
to a smaller number of unknowns compared to FEM and is free from grid dispersion error. However, the MoM matrix is a full matrix compared to a
sparse matrix for the FEM method. Hence, the solution to large size problems with MoM in electromagnetics requires high matrix memory
and computation time due to the dense matrix. Note that the MoM dense matrix computation, matrix-vector product, and storage costs scale as O(N^2 ) for N unknowns. Solving the dense matrix with an iterative solver leads to N_itr O(N^2) operations for N_itr iterations, with O(N^2) for the matrix-vector multiplication cost. With a direct solver, the complexity grows as O(N^3). Various fast solver algorithms like Multi-Level Fast Multipole Algorithm (MLFMA) <cit.>, Adaptive Integral Method (AIM) <cit.>, FFT <cit.>, IE-QR <cit.>,
and Hierarchical Matrix (H-Matrix) <cit.> have been proposed to overcome the MoM limitations of high memory and computation cost.
Fast solvers reduce the matrix memory, matrix fill time, and matrix-vector product time to O(NlogN). The reduced matrix-vector product time
improves the solution time to N_itr O(NlogN) for N_itr iterations with various iterative solution methods like Bi-Conjugate Gradient
(BiCG) or Generalized Minimum Residual (GMRES).
Fast solvers are built on the compressibility property of the far-field interaction matrices. The compression of the far-field matrices can be done
using analytical matrix compression methods like MLFMA or AIM, and also with numerical matrix compression methods like H-Matrix. Compared to
analytical compression methods, numerical compression methods are easy to implement and are kernel independent. All the fast solvers depend on the
iteration count of the iterative solution methods. The convergence of the iterations depends on the condition number of the computed MoM matrix,
and further, for a large number of unknowns, the convergence iteration count also increases. The high iteration count can be mitigated by using various
preconditioners like ILUT, Null-Field, and Schur's complement method based preconditioners <cit.>. The matrix preconditioner improves
the condition number of the matrices and reduces the iteration count of the overall matrix solution. Despite the improvement in solution time, the use
of preconditioners comes with the overhead of preconditioner computation time and extra preconditioner solution time for each iteration. Also, for
the solving of a large number of unknowns, the iteration count may still be high.
Recently there has been a trend in the CEM community for the development of an iteration-free fast solver method for solving problems with a large
number of unknowns. Various fast direct solvers <cit.> have been proposed to overcome the iteration dependency of the solution process.
These direct solvers are based on LU decomposition and compression methods. The methods are complex to implement and give quadratic scaling
for complex real-world problems.
In this work, we propose a Multi-Level (ML) fast matrix solution method based on the power series <cit.>. The proposed method exploits the
property of ML matrix compression of the H-Matrix. The matrix is solved for each level using the matrix computation of the leaf level only, and the
matrix solution can be terminated at the desired level as per the required accuracy. Our experimental results show that we get good accuracy even for the
lowest level solution. The method relies on matrix-vector multiplication at each level and using the solution of the lowest level saves matrix computation
time and memory requirement for the overall matrix solution.
The rest of the paper is organized as follows. Section II gives a summary of MoM computation for S-EFIE and V-EFIE, section III covers H-Matrix
computation for S-EFIE and V-EFIE. The derivation of the proposed ML power series solver is given in section IV. The numerical results of the
proposed method, and conclusion are discussed in sections V, and VI.
§ METHOD OF MOMENTS
MoM is a popular and efficient integral equation based method for solving various electromagnetic radiation/scattering problems. MoM can be computed using Electric Field Integral Equation (EFIE) for both surface and volume modeling. Surface modeling can be done using Rao Wilton Glisson (RWG) <cit.> triangle basis function, whereas volume modeling can be done using Schaubert Wilton Glisson (SWG) <cit.> tetrahedral basis function. In the case of dielectric modeling compared to S-EFIE, V-EFIE is an integral equation of the second kind and is more well-conditioned and stable. V-EFIE can model inhomogeneous bodies more efficiently than surface EFIE. In this work, we use RWG basis function for PEC surface S-EFIE modeling and SWG basis function for volume V-EFIE modeling. The surface/volume EFIE governing equation for the conductor/dielectric scattering body illuminated with the incident plane wave is given as the total electric field (E^total) from a scattering surface/volume and is the sum of incident electric field (E^inc) and scattered electric fields (E^scatt).
E^total=E^inc+E^scatt.
The scattered electric field is due to the surface current on the PEC surface or the volume polarization current in the dielectric media and is given as:
E^scatt=-jωA(r)- ∇ϕ(r).
In the above equation, A(r) is the magnetic vector potential, which describes the radiation of the current, and ϕ(r) is the electric potential, which describes the associated bound charge. Applying the boundary condition for the PEC structure, the S-EFIE can be written as:
E^inc=jωA(r)+ ∇ϕ(r).
Similarly, the V-EFIE can be written for a dielectric inhomogeneous body as:
E^inc=D(r)/ϵ(r) + jωA(r) + ∇ϕ(r).
In the above equation, D(r) is the electric flux density and ϵ(r) is the dielectric constant of the scattering volume media. The surface current in equation (3) for the PEC structure is expanded with the RWG function, and similarly, in equation (4) for the dielectric volume structure, the polarization current and charge are modeled with the SWG basis function. Performing Galerkin testing over each term and integrating over the surface/volume, the final system boils down to the linear system of equations below:
[Z]x=b.
In the above equation, Z is a dense MoM matrix, b is a known incident plane wave, and x is an unknown coefficient to be computed. The dense matrix leads to high cost matrix computation and memory requirement as well as solution time complexity. In the next section, we discuss the implementation of the H-Matrix for the mitigation of high cost of the conventional MoM matrix
§ H-MATRIX
The high cost of MoM limits its application to a few λ problem sizes. This limitation of MoM can be overcome by incorporating fast solvers. Most of the fast solvers work on the principle of compressibility of the far-field matrices. For the implementation of a fast solver, the mesh of geometry is divided into blocks using an oct-tree or binary-tree division process and terminated at the desired level with a limiting edge or face count in each block. The non-far-field interaction blocks at the lowest level are considered near-field blocks and are in the dense matrix form. The compression of the far-field block matrix at each level can be done analytically or numerically. The system of equations in equation (5) can now be written as the sum of near-field and far-field matrix form as:
[Z_N+Z_F]x=b.
In the above equation Z_N is a near-field block matrix and Z_F is far-field compressed block matrices for the MoM fast solver matrix. Numerical compression of far-field matrices is easy to implement and is kernel-independent. A few of the popular fast solvers using numerical compression methods are IE-QR, H-Matrix. In this work, we have implemented H-Matrix for ML matrix compression. For the ML compression computation, the mesh is divided into ML binary tree division-based subgroups. H-Matrix works on the computation of a far-field matrix for the interaction blocks satisfying the admissibility condition given in equation (7). The admissibility condition states that η times the distance between the observation cluster (Ω_t) and source cluster (Ω_s) should be greater or equal to the minimum diameter of the observation cluster or source cluster for far-field computation, where η is the admissibility control parameter, and its value is taken as 1.0.
η dist(Ω_t,Ω_s) ≥ min(diam(Ω_t),diam(Ω_s)).
The far-field matrix block compression is done in such a way that its parent interaction matrix should not be computed at the top level. Matrix compression at each level is carried out using Adaptive Cross Approximation (ACA) <cit.> <cit.> method. The method exploits the rank deficiency property of the far-field matrix blocks. The low-rank sub-block of the far-field Z_sub with m rows and n columns is decomposed into approximate U_(m× k) and V_(k× n) matrices where k is the numerical rank of the low-rank sub-block far-field matrix such that k<<min(m,n). In this work, for memory savings, we only compute half of the H-Matrix <cit.> by making the computation process symmetric, and to maintain the accuracy of the H-Matrix, we use re-compressed ACA <cit.> for far-field block compression. The solution of the iterative solver is iteration count dependent, and further, the convergence iteration count depends on the condition number of the matrix. Also, as the number of unknowns increases, the iterating count for the convergence increases. In the next section, we discuss our proposed method, which is an iteration count and far-field level block independent solution process.
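For illustration, a minimal Python sketch of a partially pivoted ACA compression of a single low-rank far-field block is given below. The stopping test is a simplified Frobenius-norm estimate, and the re-compression step used in this work is omitted; `entry(i, j)` stands for on-demand evaluation of Z_sub[i, j].

import numpy as np

def aca(entry, m, n, tol=1e-3, max_rank=None):
    # returns U (m x k) and V (k x n) with Z_sub approximately equal to U @ V
    max_rank = max_rank or min(m, n)
    U, V = [], []
    used = {0}
    i = 0
    est2 = 0.0                                    # rough running estimate of ||Z_sub||_F^2
    for _ in range(max_rank):
        row = np.array([entry(i, j) for j in range(n)], complex)
        for u, v in zip(U, V):
            row -= u[i] * v                       # residual of pivot row i
        j = int(np.argmax(np.abs(row)))
        if np.abs(row[j]) < 1e-14:
            break
        v_new = row / row[j]
        col = np.array([entry(r, j) for r in range(m)], complex)
        for u, v in zip(U, V):
            col -= v[j] * u                       # residual of pivot column j
        U.append(col); V.append(v_new)
        est2 += (np.linalg.norm(col) * np.linalg.norm(v_new))**2
        if np.linalg.norm(col) * np.linalg.norm(v_new) <= tol * np.sqrt(est2):
            break
        rest = [r for r in range(m) if r not in used]
        if not rest:
            break
        i = rest[int(np.argmax(np.abs(col[rest])))]  # next pivot row
        used.add(i)
    return np.array(U).T, np.array(V)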
§ MULTI-LEVEL POWER SERIES SOLUTION
The full H-Matrix is a combination of near-field and far-field block matrices. The far-field compressed block matrices are computed for various levels, and in equation (6), the far-field matrix (Z_F) can be further decomposed into the different matrix levels as below:
[Z_F]=[Z_F1]+[Z_F2]+[Z_F3].
In the above equation, the far-field matrix Z_F1 is for level 1, Z_F2 is for level 2, and Z_F3 is for level 3. Level 3 forms the leaf level of the binary tree and level 1 the top level of the tree. Fig. 1 shows the H-Matrix layout for a two-dimensional strip. In Fig. 1, the light gray boxes represent the Z_F1 far-field matrix at level 1, the dark gray boxes Z_F2 at level 2, and the large white boxes Z_F3 at level 3; the black boxes are the near-field dense matrices. For illustrative purposes, the near-field matrix is in diagonal block form for a two-dimensional strip. Real-world problems are three-dimensional in structure, giving a non-diagonal block near-field matrix. To implement our ML power series solution method, we must diagonalize the near-field block matrix. The near-field matrix in equation (6) is diagonalized using the diagonal scaling coefficient [α], as computed in <cit.>, such that the scaled diagonal block near-field matrix can be given as:
[Z̃_N]=[α][Z_N].
Expanding equation (8) and scaling it with the scaling coefficients [α] gives:
[α][Z_N+Z_F1+Z_F2+Z_F3]x=[α]b.
[Z̃_N]x+[α][Z_F1]x+[α][Z_F2]x+[α][Z_F3]x=b̃.
In the above equation, b̃ is the [α]-scaled vector b, and the equation can be further simplified as:
x+ [Z̃_N]^-1[α][Z_F1]x+[Z̃_N]^-1[α][Z_F2]x
+[Z̃_N]^-1[α][Z_F3]x= [Z̃_N]^-1b̃.
Let [Z̃_N]^-1[α][Z_F1]=[U_1], [Z̃_N]^-1[α][Z_F2]=[U_2] and [Z̃_N]^-1[α][Z_F3]=[U_3] equation (12) can further be simplified as
x+ [U_1]x+[U_2]x +[U_3]x= [Z̃_N]^-1b̃.
[I+ U_1]x+[U_2]x +[U_3]x= [Z̃_N]^-1b̃.
x+[I+ U_1]^-1[U_2]x +[I+ U_1]^-1[U_3]x
=[I+ U_1]^-1 [Z̃_N]^-1b̃.
Let [I+ U_1]^-1[U_2]=[V_2] and [I+ U_1]^-1[U_3]=[V_3]; equation (15) can then be simplified as
x+ [V_2]x+[V_3]x = [I+ U_1]^-1 [Z̃_N]^-1b̃.
x+[I+ V_2]^-1[V_3]x=[I+ V_2]^-1[I+ U_1]^-1 [Z̃_N]^-1b̃.
Let [I+V_2 ]^-1 [V_3 ]=[W_3] and equation (17) can be written as
x+[W_3]x=[I+V_2 ]^-1 [I+U_1 ]^-1 [Z̃_N]^-1b̃.
x=[I+W_3 ]^-1 [I+V_2 ]^-1 [I+U_1 ]^-1 [Z̃_N]^-1b̃.
In the above equations [I+W_3 ]^-1,[I+ V_2 ]^-1 and [I+ U_1 ]^-1 can be solved independently at each level using a power series solution method with the expansion as below:
[I+ U_1 ]^-1=[I+ [Z̃_N]^-1[α][Z_F1]]^-1.
[I+V_2 ]^-1=[I+[I+U_1 ]^-1 [U_2 ]]^-1
=[I+[I+ [Z̃_N]^-1[α][Z_F1]]^-1 [Z̃_N]^-1[α][Z_F2]]^-1.
[I+W_3 ]^-1=[I+[I+V_2 ]^-1 [V_3 ]]^-1
=[I+[I+[I+U_1 ]^-1[U_2 ]]^-1[I+U_1 ]^-1[U_3 ]]^-1
=[I+[I+[I+ [Z̃_N]^-1[α][Z_F1]]^-1[Z̃_N]^-1 [α][Z_F2 ]]^-1
[I+ [Z̃_N]^-1[α][Z_F1]]^-1[Z̃_N]^-1[α][Z_F3 ]]^-1.
From equations (20), (21), and (22), it can be observed that the solution at each level depends only on the block interaction matrices of that level and of the lower levels of the binary tree. At each level, the inverse of the matrix system can be computed efficiently using the fast power series solution <cit.>, which converges in two fixed iterations. The solution process depends only on matrix-vector products with the H-Matrix, thus retaining the O(NlogN) complexity <cit.>. The ML solution can be computed at the desired level, according to the required accuracy; our results show that keeping only the leaf level already gives accurate results, leading to time and memory savings.
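To make the level-by-level solution concrete, the sketch below applies each inverse [I+·]^-1 through a truncated power (Neumann) series that uses only matrix-vector products. Dense NumPy arrays stand in for the compressed near- and far-field blocks, the three-level setting mirrors equations (20)-(22), and all names and the fixed number of series terms are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def neumann_apply(matvec, b, n_terms=3):
    """Apply [I + X]^{-1} to b via the truncated power series sum_k (-X)^k b,
    valid when ||X|| < 1 (the diagonal scaling keeps the norm near 0.1, so a
    few terms suffice)."""
    term, result = b.copy(), b.copy()
    for _ in range(n_terms):
        term = -matvec(term)
        result += term
    return result

def ml_power_series_solve(ZN_tilde_inv, alpha, ZF_levels, b, n_terms=3):
    """Three-level solve of [Z_N + sum_l Z_Fl] x = b following
    x = [I+W_3]^-1 [I+V_2]^-1 [I+U_1]^-1 ZN_tilde^-1 (alpha b).
    Only matrix-vector products with each level are required."""
    assert len(ZF_levels) == 3
    # U_l v = ZN_tilde^-1 alpha Z_Fl v
    U = [lambda v, Z=Z: ZN_tilde_inv @ (alpha @ (Z @ v)) for Z in ZF_levels]
    # V_2 v = [I+U_1]^-1 U_2 v ;  W_3 v = [I+V_2]^-1 [I+U_1]^-1 U_3 v
    V2 = lambda v: neumann_apply(U[0], U[1](v), n_terms)
    W3 = lambda v: neumann_apply(V2, neumann_apply(U[0], U[2](v), n_terms), n_terms)
    y = ZN_tilde_inv @ (alpha @ b)
    y = neumann_apply(U[0], y, n_terms)
    y = neumann_apply(V2, y, n_terms)
    return neumann_apply(W3, y, n_terms)
```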
§ NUMERICAL RESULTS
In this section, we show the accuracy and efficiency of the proposed method. The simulations are carried out in double precision on a system with 128 GB of memory and an Intel Xeon E5-2670 processor. The H-Matrix is computed with an ACA compression error tolerance of 1e-3 <cit.> and solved with the GMRES iterative solver with a convergence tolerance of 1e-6 <cit.>. For a compressed or dense matrix [Z], the power series expansion of [I+Z]^-1 converges when the norm of Z is less than 1; we keep it at 0.1 in our simulations <cit.>. Conductor and dielectric geometries with dielectric constant ϵ_r are meshed with element sizes smaller than λ/10 and λ/(10√(ϵ_r)), respectively. To show the accuracy of the proposed method, the RCS results are compared with the full H-Matrix iterative solver <cit.>. In the following subsections, we demonstrate the far-field memory and computation-time savings, along with the solution-time savings, of our proposed ML power series solution on different examples.
§.§ PEC square plate
To show the accuracy and efficiency for a PEC object, in this subsection we consider a square plate of size 15.0 λ along the x and y axes, meshed with 67,200 unknown edges. The plate mesh is divided by binary-tree division down to level 6. The PEC S-EFIE H-Matrix is solved with both the ML power series solution method and the H-Matrix iterative solver; the ML power series converges in 2 iterations, while the iterative solver converges in 686 iterations. Only the far-field matrix at leaf level 6 is computed for the ML power series solution, ignoring the far-field computation from levels 1 to 5 of the binary tree.
Fig. 2 shows the bi-static RCS of the PEC square plate; the ML power series solution matches that of the H-Matrix iterative solver. Table 1 shows the savings in memory, computation time, and solution time of the ML power series solution method compared with the conventional H-Matrix-based iterative solver.
§.§ Dielectric slab
To show the accuracy and efficiency for a dielectric problem of considerable size, in this subsection we consider a dielectric slab elongated along the y-axis, of 10.0 λ length, 1.0 λ width, and 0.1 λ thickness, with dielectric constant ϵ_r=2.0, meshed with 120,080 tetrahedral faces. The ML power series converges in 2 iterations, and the regular H-Matrix iterative solver converges in 33 iterations.
The slab mesh is divided by binary-tree division down to level 10, and only the far-field matrix at leaf level 10 is computed for the ML power series solution. The accuracy of the method for the bi-static RCS is shown in Fig. 3. Table 2 shows the significant matrix-memory, matrix-fill, and solution-time savings of the ML power series solution compared to the conventional H-Matrix-based iterative solver.
§.§ Dielectric hollow cylinder
In this subsection, we consider a hollow dielectric cylinder elongated along the y-axis, of 6.0 λ length, 0.4 λ outer radius, and 0.05 λ thickness, with dielectric constant ϵ_r=2.0, meshed with 158,830 tetrahedral faces. The ML power series converges in 2 iterations, and the H-Matrix iterative solver converges in 24 iterations.
The cylinder mesh is partitioned by binary-tree division down to level 8, and only the far-field matrix at leaf level 8 is computed for the ML power series solution. Fig. 4 shows the close match between the bi-static RCS computed using the ML power series method and that of the regular H-Matrix iterative solver. Table 3 shows the memory and time savings of the ML power series solution compared to the conventional H-Matrix iterative solver.
§ CONCLUSION
The illustrative examples in the previous sections show that our proposed ML power series solution method gives considerable savings in matrix memory, fill time, and solve time for problems of significant size, while being as accurate as the H-Matrix iterative solver. The savings may not be substantial for small mesh structures, but the method gives significant savings for the large problems taken up for illustration, and is expected to do so for complex and electrically large problems such as antenna arrays and composite structures. Moreover, the technique is entirely algebraic in nature and can be applied to analytical fast-solver methods such as AIM and MLFMA. The matrix blocks at each level can be computed independently, and the solution depends only on matrix-vector products with the system matrix; hence, the proposed method is amenable to efficient parallelization.
Yoginder Kumar Negi
obtained the B.Tech degree in Electronics and Communication Engineering from Guru Gobind Singh Indraprastha University, New Delhi, India, in 2005, the M.Tech degree in Microwave Electronics from Delhi University, New Delhi, India, in 2007, and the PhD degree in engineering from the Indian Institute of Science (IISc), Bangalore, India, in 2018.
Dr Negi joined Supercomputer Education Research Center (SERC), IISc Bangalore in 2008 as a Scientific Officer. He is currently working as a Senior Scientific Officer in SERC IISc Bangalore. His current research interests include numerical electromagnetics, fast techniques for electromagnetic application, bio-electromagnetics, high-performance computing, and antenna design and analysis.
B. Narayanaswamy
received the B.E. degree (Hons.) in Electronics and Communication from the University of Madras, Chennai, India, in 1972, and the Ph.D. degree from the Indian Institute of Science, Bengaluru, India, in 1979.
He joined the Department of Aerospace Engineering, Indian Institute of Science, as an Assistant Professor in 1981, where he became a Full Professor in 1991, served as the Associate Director from 2005 to 2014, and is currently an INSA Senior Scientist at the Supercomputer Education and Research Centre. He has authored over 200 publications in international journals and conferences. His current research interests include numerical electromagnetics, high-performance computing and networks, polarimetric radars and aerospace electronic systems, information security, and digital libraries.
Dr. Narayanaswamy is a fellow of the World Academy of Sciences (TWAS), the National Academy of Science, the Indian Academy of Sciences, the Indian National Academy of Engineering, the National Academy of Sciences, and the Institution of Electronics and Telecommunication Engineers.
Sadasiva M. Rao
obtained his Bachelor's, Master's, and Doctoral degrees in electrical engineering from Osmania University, Hyderabad, India, the Indian Institute of Science, Bangalore, India, and the University of Mississippi, USA, in 1974, 1976, and 1980, respectively. He is well known in the electromagnetic engineering community and is included in Thomson Scientific's Highly Cited Researchers List.
Dr. Rao has been teaching electromagnetic theory, communication systems, electrical circuits, and other related courses at the undergraduate and graduate level for the past 30 years at various institutions. At present, he is working at Naval Research Laboratories, USA. He published/presented over 200 papers in various journals/conferences. He is an elected Fellow of IEEE.
|
http://arxiv.org/abs/2307.05069v1 | 20230711071352 | Cognitive Bias and Belief Revision | [
"Panagiotis Papadamos",
"Nina Gierasimczuk"
] | cs.LO | [
"cs.LO",
"cs.AI"
] |
In this paper we formalise three types of cognitive bias within the framework of belief revision: confirmation bias, framing bias, and anchoring bias. We interpret them generally, as restrictions on the process of iterated revision, and we apply them to three well-known belief revision methods: conditioning, lexicographic revision, and minimal revision. We investigate the reliability of biased belief revision methods in truth-tracking. We also run computer simulations to assess the performance of biased belief revision in random scenarios.
§ INTRODUCTION
Cognitive bias is a systematic human thought pattern connected with the distortion of received information, which usually leads to deviations from rationality (for a recent analysis see <cit.>). Such biases are not specific to human intelligence only; they can also be ascribed to artificial agents, algorithms, and programs. For instance, confirmation bias can be seen as stubbornness against new information which contradicts the previously adopted view. In some cases, such confirmation bias can be implemented into a system purposefully. Take as an example an authentication algorithm and a malicious user who is trying to break into an email account. Say that the algorithm, before it locks access, allows only three attempts to enter the correct password. Hence, the algorithm (temporarily) insists that the user who tries to connect is the real holder of the credentials, despite the input being inconsistent with that hypothesis. The algorithm will not revise its `belief' about the user's identity until it receives evidence to the contrary a specific number of times. Another unorthodox example of a biased artificial agent concerns anchoring bias, where an agent makes a decision based on a recent, selected piece of information, possibly ignoring other data. In the context of artificial agents, such situations may occur justifiably when resources (like time or memory) are limited. As an example, consider two computers, A and B, connected within a network. Computer A attempts to communicate with computer B, but for some reason does not receive B's response within a specified time range and, as a result, erroneously considers B dead. This inability to communicate leads computer A to change its `belief' about B's liveness and, subsequently, to make decisions based on this distortion.
In this paper we study some dynamic aspects of three types of cognitive bias: confirmation bias, framing bias, and anchoring bias. We will apply them to three well-known belief revision methods: conditioning, lexicographic, and minimal revision <cit.>. We first recall the background of the model of truth-tracking by belief revision from <cit.> (related to earlier work in <cit.>, see also <cit.>), which borrows from computational learning theory, and identifiability in the limit in particular <cit.>. We proceed by investigating the effect of bias on truth-tracking properties of various belief revision policies. Finally, we present our computer simulation in which we empirically compare the performance of biased and regular belief revision in different scenarios. We close with several directions of further work.
§.§ Background: truth-tracking and belief revision
We will now introduce basic notions, following the framework of truth-tracking by belief revision proposed in <cit.>.
Our agents' uncertainty space will be represented by a so-called epistemic space, 𝕊=(S,𝒪), where S is a non-empty, at most countable set of worlds (or states), and 𝒪⊆𝒫(S) is a set of possible observations. We will call any subset p of S a proposition, and we will say that a proposition p is true in s∈ S if s∈ p.
Data streams and sequences describe the information an agent receives over time. A data stream is an infinite sequence of observations O⃗=(O_0, O_1,…), where O_i∈𝒪, for i∈ℕ. A data sequence is a finite initial segment of a data stream; we will write O⃗[n] for the initial segment of O⃗ of length n, i.e., O⃗_0,O⃗_1,…, O⃗_n-1. Given a (finite or infinite) data sequence σ, σ_n is the n-th element of σ; set(σ) is the set of elements enumerated in σ; #O(σ) is the frequency of observation O in σ; and, for a finite data sequence τ, τ·σ is the concatenation of τ and σ.
A special type of data streams are sound and complete streams. A data stream O⃗ is sound with respect to a state s∈ S if and only if every element in O⃗ is true in the world s, formally s∈O⃗_n, for all n∈ℕ. A data stream O⃗ is complete with respect to a state s∈ S if and only if every proposition true in s is in O⃗, formally if s∈ O then there is an n∈ℕ, such that O=O⃗_n. Sound and complete streams form the most accommodating conditions for learning.
Given an epistemic space 𝕊=(S,𝒪) and a data sequence σ, a learning method L (also referred to it as a learner), is a function that takes as an input the epistemic space 𝕊 and the sequence σ, and returns a subset of S, L(𝕊,σ)⊆ S, called a conjecture.
The goal of learning is to identify the actual world, which is a special designated element of the epistemic space. Given the epistemic space of an agent and the incoming information, which is (to some degree) trusted, the agent learns facts about the actual world step by step in order to achieve its goal, identifying the actual world.
Let 𝕊=(S,𝒪) be an epistemic space. A state s∈ S is identified in the limit by L on O⃗ iff there is a k such that for all n≥ k, L(𝕊,O⃗[n])={s}; s is identified in the limit by L iff s is identified in the limit by L on every sound and complete data stream for s; S is identified in the limit by L if all s∈ S are identified in the limit by L; finally, 𝕊 is identifiable in the limit iff there exists an L that identifies it in the limit.
To be able to talk about beliefs of our agents (and whether or not they align with the actual world), we add to the epistemic space a plausibility relation.
Given an epistemic space 𝕊=(S,𝒪), a prior plausibility assignment ≼⊆ S× S is a total preorder. Such 𝕊^≼ =(S, 𝒪, ≼) will be called a plausibility space (generated from 𝕊; for simplicity of notation we will often denote such a space by 𝔹). The prior plausibility assignment is not fixed: it may be different for different agents, and it serves as the starting point of their individual belief revision processes. Plausibility models allow defining the beliefs of agents. For any proposition p, we will say that the agent believes p in 𝕊^≼ if p is true in all worlds in min_≼ (S).
Plausibility spaces, and hence also beliefs, change during the belief revision process. We will focus on three popular belief revision methods that can drive such a learning: conditioning, lexicographic, and minimal belief revision.
A one-step revision method R_1 is a function such that for any plausibility space 𝔹=(S,𝒪,≼) and any observable proposition p∈𝒪 returns a new plausibility space R_1(𝔹,p). We define three one-step revision methods:
Conditioning, Cond_1, is a one-step revision method that takes as input a plausibility space 𝔹=(S,𝒪,≼) and a proposition p∈𝒪 and returns the restriction of 𝔹 to p. Formally, Cond(𝔹,p)=(S^p,𝒪,≼^p), where S^p=S∩ p and ≼^p=≼∩(S^p× S^p).
Lexicographic revision, Lex_1, is a one-step revision method that takes as input a plausibility space 𝔹=(S,𝒪,≼) and a proposition p∈𝒪 and returns a plausibility space Lex(𝔹,p)=(S,𝒪,≼^'), such that for all t, w∈ S, t≼^'w if and only if t≼_pw or t≼_p̅w or (t∈ p and w∉ p), where ≼_p=≼∩(p× p), ≼_p̅=≼∩(p̅×p̅), and p̅ is the complement of p in S.
Minimal revision, Mini_1, is a one-step revision method that takes as input a plausibility space 𝔹=(S,𝒪,≼) and a proposition p∈𝒪 and returns a new plausibility space Mini(𝔹,p)=(S,𝒪,≼^') where for all t,w∈ S, if t∈min_p and w∉min_p, then t≼^'w, otherwise t≼^'w if and only if t≼ w.
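For concreteness, the three one-step methods can be implemented over a finite space as follows. The sketch encodes the total preorder by a rank function (lower rank means more plausible), represents propositions as sets of states, and promotes the best p-worlds strictly to the top in the minimal-revision case; this encoding is an assumption of the illustration and not necessarily the data structure used in the project's code.

```python
def conditioning(states, rank, p):
    """Cond: restrict the space to the worlds where p is true."""
    new_states = states & p
    return new_states, {s: rank[s] for s in new_states}

def lexicographic(states, rank, p):
    """Lex: all p-worlds become strictly more plausible than all non-p-worlds;
    the relative order inside each group is preserved."""
    offset = max(rank.values()) + 1 if rank else 0
    return states, {s: rank[s] if s in p else rank[s] + offset
                    for s in states}

def minimal(states, rank, p):
    """Mini: the most plausible p-worlds are promoted to the top."""
    if not (states & p):
        return states, dict(rank)
    best = min(rank[s] for s in states & p)
    top = {s for s in states & p if rank[s] == best}
    return states, {s: (0 if s in top else rank[s] + 1) for s in states}
```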
An iterated belief revision method R is obtained by iterating the one-step revision method R_1: R(𝔹,λ)=𝔹 if λ is an empty data sequence, and R(𝔹,σ· p)=R_1(R(𝔹,σ),p).
Let R be an iterated belief revision method, S^≼ a plausibility space, and O⃗ a stream. A belief revision based learning method is defined in the following way: L_R^≼(𝕊,O⃗ [n])=min_≼ R(𝕊^≼,O⃗ [n]).
We will say that the revision method R identifies 𝕊 in the limit iff there is a ≼ such that L_R^≼ identifies 𝕊 in the limit. A revision method R is universal on a class ℂ of epistemic spaces if it can identify in the limit every epistemic space 𝕊∈ℂ that is identifiable in the limit.
The belief revision methods Cond and Lex are universal, while Mini is not.
Learning methods can be compared with respect to their power. We will say that a learner L' is at least as powerful as a learner L, written L⊑ L', if every epistemic space 𝕊 that is identified in the limit by L is also identified in the limit by L'. We will say that L' is strictly more powerful than L if L⊑ L' and it is not the case that L'⊑ L. Analogously, using Definition <ref>, we will apply the same terms to belief revision methods.
In the remainder of this paper we will discuss several ways of introducing cognitive bias into this picture of iterated belief revision and long-term truth-tracking, together with computer simulation results that paint a more quantitative picture of the analytical results.
§ SIMULATING BELIEF REVISION
Throughout this work we also present the results of computer simulations that we ran to see how the various (biased) methods compare with respect to their truth-tracking ability. To this end we implemented artificial belief revision agents (for both the biased and unbiased scenarios), which try to identify a selected actual world on the basis of sound and complete streams. We use the object-oriented programming language Python. The code can be found in the repository of the project <cit.>, and the structure of the code can be seen in Figure <ref>.
The simulation included both custom and random tests. Custom tests were created to check the correctness of the implemented functions, while random tests were created to investigate the reliability and the performance of the (biased) belief revision methods. In the implementation all plausibility spaces are finite. This choice is governed by the practicality of the implementation. We ran several series of tests. Each series of tests consisted of 200 tests, while the plausibility spaces consisted of ≈ 5 possible states and ≈ 12 observables, and the incoming data sequence was longer than the number of observables (≈ 2-4 more observables). These numbers were hard-coded to ensure computational feasibility of the experiment. The plausibility spaces we created for the automatic tests were completely random and so could turn out to be unidentifiable. This is the reason why there were identification failures for the universal revision methods, even for unbiased cases.
After we randomly generated an epistemic space, one of the states (let us call it s) was randomly designated to be the actual world, and a sound and complete data stream σ for s was generated. A plausibility preorder over the epistemic space was then randomly generated, yielding a plausibility space. We then called on each of the (biased) revision methods and made them attempt to identify s from σ. As we will also see in the later comparisons, the frequencies of successful identification by the unbiased (regular) belief revision methods were overall very high across experiments: between 94% and 98% for conditioning, between 97% and 99% for lexicographic revision, and between 77% and 82% for minimal revision.
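The core of one such random test can be sketched as follows, reusing the rank-based encoding and the one-step revision functions from the sketch above; the stream generator and the stopping criterion (checking the conjecture after the whole sequence) are simplifications of the actual test harness.

```python
import random

def iterate(one_step):
    """Lift a one-step revision method to an iterated one over data sequences."""
    def run(states, rank, stream):
        for p in stream:
            states, rank = one_step(states, rank, p)
        return states, rank
    return run

def random_sound_complete_stream(actual, observables, length):
    """Enumerate every observable true at `actual` at least once (complete),
    padded with further true observables chosen at random (sound)."""
    true_obs = [O for O in observables if actual in O]
    extra = [random.choice(true_obs)
             for _ in range(max(0, length - len(true_obs)))]
    stream = true_obs + extra
    random.shuffle(stream)
    return stream

def identifies(iterated, states, rank, stream, actual):
    """Check whether the conjecture (the set of most plausible worlds) after
    the whole sequence is exactly {actual}."""
    states, rank = iterated(states, rank, stream)
    if not states:
        return False
    best = min(rank[s] for s in states)
    return {s for s in states if rank[s] == best} == {actual}

# e.g. identifies(iterate(conditioning), states, rank, stream, actual)
```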
§ COGNITIVE BIAS AND BELIEF REVISION
We will propose abstract accounts of three types of cognitive bias: confirmation bias, framing bias, and anchoring bias. For each we will describe how an agent revises its belief. We will see how the bias affects truth-tracking, both theoretically, through a learning-theoretic analysis of (non-)universality, and practically, in computer simulations.
§.§ Confirmation Bias
Hahn and Harries <cit.> characterized confirmation bias as a list of four `cognitions', namely: hypothesis-determined information seeking, failure to pursue falsification strategy in the context of conditional reasoning, stubbornness to change of belief once formed, and overconfidence or illusion of validity of our belief. The first cognition will not concern us, as we don't focus on agents that actively seek information, but rather we focus on how passive agents perceive incoming information.
To analyse selective bias, given a space 𝕊=(S,𝒪), we could designate a subset of 𝒪 to be the set of propositions that are `important' to the agent. We would then allow that they are given a special, privileged treatment during the revision process. We choose to express this level of importance more generally with a numerical assignment, which we call the stubbornness function.
Given an epistemic space 𝕊=(S,𝒪), the stubbornness function is D:𝒫(S)→ℕ.
The stubbornness function describes the level of an agent's bias towards a proposition; intuitively, the propositions with stubbornness degree higher than 1 can be considered important to the agent. The higher the stubbornness degree, the more biased the agent is towards the proposition, and so the more difficult it is to change its belief in that proposition: there should be strong evidence against it. For an unbiased agent the value of the function D for every proposition is 1. An unbiased agent will revise its beliefs instantly after it receives information inconsistent with its beliefs. An agent that is biased towards a proposition p and believes p should receive the information `p' D(p)-many times in order to react by revising its belief with p. The agent struggles with falsifying its belief and maintains the illusion of its belief's validity by resisting change.
For each one-step revision method R_1 given in Definition <ref>, we will provide a confirmation-biased version or iterated revision R_CB. R_CB will take a plausibility space and a sequence of data and output a new plausibility space. Intuitively, it will attempt to execute the unbiased version of the revision method, but this will only succeed if the stubbornness degree allows it, i.e., if the data contradicting the proposition is repeated enough times.
Let 𝔹=(S,𝒪,≼) be a plausibility space and let D be a stubbornness function, σ∈𝒪^* be a data sequence[Let Σ be a set, then Σ^∗ is a set of all finite sequences of elements from Σ.], p∈𝒪 be an observable and R_1 is a one-step revision method. A confirmation-bias belief revision method R_CB is defined in the following way:
R_CB(𝔹,λ)=𝔹,
R_CB(𝔹,σ· p)=
R_1(R_CB(𝔹,σ),p) if #p(σ)≥ D(p),
R_CB(𝔹,σ) otherwise.
where λ is the empty sequence, #p(σ) stands for the number of occurrences of p in σ, and p̅ denotes the complement of p in S.
We obtain the confirmation-biased conditioning, lexicographic and minimal revision Cond_CB, Lex_CB, Mini_CB by substituting R_1 in the preceding definition by Cond_1, Lex_1, and Mini_1, respectively.
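A possible implementation of this scheme, in the rank-based encoding used above, wraps a one-step revision method into an iterated one that fires only once the incoming proposition has been observed sufficiently often. In this sketch the current observation is included in the count, so that D(p)=1 recovers the unbiased method; this reading of the trigger condition, and the stubbornness function in the usage line, are assumptions of the illustration.

```python
def confirmation_biased(one_step, D):
    """Wrap a one-step revision so that revising with p only fires once p has
    been observed at least D(p) times (current observation included)."""
    def biased(states, rank, stream):
        counts = {}
        for p in stream:
            key = frozenset(p)
            counts[key] = counts.get(key, 0) + 1
            if counts[key] >= D(key):
                states, rank = one_step(states, rank, p)
        return states, rank
    return biased

# usage: every proposition must be seen twice before the agent reacts to it
# cb_cond = confirmation_biased(conditioning, lambda p: 2)
```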
Truth-tracking under confirmation bias
An agent under confirmation bias updates its belief with respect to the stubbornness degree. Below we see that it is the crucial factor that breaks the universality of the belief revision methods.
Cond, Lex and, Mini are strictly more powerful than Cond_CB, Lex_CB, and Mini_CB, respectively.
We will give an example of an epistemic space 𝕊=(S,𝒪) that is identified by Cond, but is not identified by Cond_CB. Let 𝕊=(S,𝒪), where S={w,t,s,r}, 𝒪={p,q,p̅,q̅} and p={w,t},p̅={s,r}, q={w,s}, and q̅={t,r}. Clearly, this space is identifiable by the regular conditioning method Cond: take the plausibility order that takes all worlds to be equally plausible. Then, whichever world s∈ S is designated as the actual one, a sound and complete data stream for s will, in finite time, enumerate enough information for the Cond method to delete all the other worlds, and so the actual world remains as the only one, and therefore also the minimal (most plausible) one.
To see that Cond_CB will not be able to identify this space, let us assume that for all x∈𝒫(S), D(x)=2. We need to show that for any plausibility preorder on S there is a world s∈ S and a sound and complete stream O⃗ for s such that Cond_CB fails to identify s on O⃗. Take a preorder ≼ on S; there are two cases: either (a) there is a unique minimal element s, or (b) there is none. For (a), take a t∈ S with t≠ s, so that s≼ t. There is a sound and complete stream O⃗ for t that enumerates each observable true in t exactly once. While reading that sequence, Cond_CB will not apply a single update, and so on this sound and complete sequence for t it will converge to s, which means it fails to identify t. For (b), a similar argument holds: for each of the minimal, equiplausible worlds there is a sound and complete sequence that enumerates every piece of data exactly once. On such a stream the update of Cond_CB will not fire at all, so there will always be more than one candidate for the actual world, and Cond_CB will not converge to the singleton of the actual world.
It remains to be argued that Cond can identify in the limit everything that Cond_CB can. Take an epistemic space 𝕊=(S,𝒪), and assume that an s ∈ S is identified in the limit by Cond_CB on a stream O⃗ (that is sound and complete for s). That means that there is a k∈ℕ, such that for all n≥ k, L^≼_Cond_CB(𝕊,O⃗[n])={s}. So, for all t∈ S such that t≠ s, O⃗[n] includes O ∈𝒪, such that t∉ O. Hence, L^≼_Cond(𝕊,O⃗[k])={s}, and, since Cond only removes worlds, and O⃗ never enumerates anything false in s, L^≼_Cond(𝕊,O⃗[n])={s}, for all n≥ k.
A similar argument works for the Lex_CB and Mini_CB method.
Putting together Theorem <ref> and Proposition <ref> we get the following corollary.
Cond_CB and Lex_CB are not universal.
Clearly, confirmation bias can be detrimental to truth-tracking. The negative effect of stubbornness in revision can be uniformly overcome by the use of so-called fat streams, i.e., sound and complete streams that enumerate every information infinitely many times (which is possible as long as the set 𝒪 is at most countable). Fat streams were introduced and studied before in computational learning theory in the context of memory-limited learners (see, e.g., <cit.>).
Simulation results We ran a comparative simulation study of confirmation-biased revision and the regular unbiased revision, following the method described in Section <ref>. The stubbornness values were randomly generated for all observables in the epistemic space as integers from 1 to 5. Figure <ref> shows the respective frequencies of truth-tracking success.
§.§ Framing Bias
Framing bias, also known as framing effect <cit.> refers to the fact that the way information is perceived (framed) by an agent can affect decision-making. We will introduce the framing function, FR which, broadly speaking, gives a range of interpretation for an observation, i.e., the incoming information can be `re-framed' into another information, within the range allowed by FR.
Given an epistemic space 𝕊=(S,𝒪), the framing function is FR:𝒪→𝒫(S).
Note that the above definition is very general: we do not assume that the agent takes into account its observational apparatus, and so we allow the observation to be interpreted as any proposition. While confirmation bias pertained to the frequency of information in a stream, framing bias is related to its correctness and precision. We can pose a variety of constraints on framing; for instance, we could require that the framed information is in some way related to the original information. In particular, in this paper we impose that, given the actual information O, the agent perceives some X such that X⊆ O, i.e., FR(O)⊆𝒫(O). This particular kind of framing can be seen as overconfidence bias, since, given an observation with some uncertainty range, the learner sees it as one with a narrower range, i.e., one that is more certain.
As before, we will formally model the three belief revision methods, conditioning, lexicographic revision, and minimal revision under the conditions of the bias.
Let 𝔹=(S,𝒪,≼) be a plausibility space, σ∈𝒪^* a data sequence, p∈𝒪 an observable, FR a framing function, and and R_1 is a one-step revision method. We define a framing-biased method in the following way:
R_FR(𝔹,λ)=𝔹,
R_FR(𝔹,σ· p)=R_1(R_FR(𝔹,σ),x), such that x∈ FR(p).
We obtain the framing-biased conditioning, lexicographic and minimal revision Cond_FR, Lex_FR, Mini_FR by substituting R_1 in the preceding definition by Cond_1, Lex_1 and Mini_1, respectively.
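In the same spirit, a framing-biased method can be sketched as a wrapper that revises with a framed observation instead of the observation itself. The particular framing below, a uniformly random (possibly empty) subset of the observed proposition, is only one admissible choice under the constraint FR(O)⊆𝒫(O), and is an assumption of the illustration.

```python
import random

def overconfident_framing(p):
    """One admissible framing under FR(O) ⊆ P(O): a random subset of the
    observed proposition (possibly empty)."""
    return {s for s in p if random.random() < 0.5}

def framing_biased(one_step, frame=overconfident_framing):
    """Revise with the framed observation instead of the observation itself."""
    def biased(states, rank, stream):
        for p in stream:
            states, rank = one_step(states, rank, frame(p))
        return states, rank
    return biased
```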
Truth-tracking under framing bias As before, we will now investigate how framing bias affects truth-tracking capabilities of belief revision methods.
Given a stream O⃗ =(O_0, O_1, …) and a framing function FR, we define a framing of O⃗ as FR(O⃗ )=(P_0,P_1,…), where for each i∈ℕ, P_i∈ FR(O_i). We will call FR(O⃗ ) static iff for every i,j∈ℕ, with i≠ j, if O_i=O_j then P_i=P_j, otherwise FR(O⃗ ) is dynamic.
The first observation is that there are limit cases in which framing will not restrict the learning power of any of the revision methods, for instance when framing is a static identity function, or in more complicated, lucky cases when sound and complete streams are framed into (possibly different) sound and complete streams. In general however, framing will result in a certain kind of blindness, some worlds can get overlooked during the revision process. In particular, given an observable O that is true at s, it might be the case that O will get mapped to a set P, such that s∉ P, in other words, the agent will interpret a true observation as a proposition that is false in the actual world. This would be detrimental to any revision method. Hence, we get the following propositions.
Cond_FR and Lex_FR are not universal.
Mini is strictly more powerful than Mini_FR.
Dynamic framing allows for fair framing of streams, where the agent observes the input `erroneously' for finitely many steps, after which it is presented with a full sound and complete stream. This is a notion analogous to that of fair streams in <cit.>, and the following is a direct consequence of the result therein that Lex is universal on fair streams.
Lex_FR is universal on fairly framed streams.
Simulation results As before, we ran a comparative simulation study of framing-biased revision and the regular unbiased revision. We generate a sound and complete stream, which is then transformed into its framed version by applying the framing function to each observation independently. Under the restrictions we impose, the framing function always outputs a random subset of the original proposition, which can be the empty set. Figure <ref> shows the respective frequencies of truth-tracking success.
§.§ Anchoring Bias
Anchoring bias plays a role in decision-making influenced by the most recently received information, and it is strongly connected to lack of resources. We make everyday decisions under time pressure. These decisions are, often unconsciously, influenced by the piece of information received last before the decision point <cit.>. Moreover, anchoring bias in real-life scenarios can introduce a level of randomness in decision making. Consider, as an example, a student who takes part in an exam involving a multiple choice test. Due to lack of time they have to answer a question without being able to analyse it properly. While going through possible answers, the student might pick one that reminds them of something they have seen recently in their notes.
As in the previous cases, we will provide a general definition of anchoring-biased methods. The mechanism consists of two components: one is that the revision mechanism will always perform a minimal change; the other is that, in case the revision step results in multiple minimal possible worlds, one of them will be chosen at random and made most plausible overall. In order to phrase this formally, we need several new notions. Given a set S, a preorder ≼⊆ (S× S), and x∈ S, we define ≼↑ x:= (≼∩ (S∖{x}× S∖{x})) ∪{(x,s)| s∈ S∖{x}}. Intuitively, this operation takes an order and outputs a new, updated version of it, with x upgraded to be the most plausible world. Now we will define new versions of the one-step revision methods, which include in their first part the unbiased one-step revision methods and in their second part the upgrade operator.
Let 𝔹=(S,𝒪,≼), p∈𝒪 and Lex_1(𝔹, p)=(S,𝒪,≼'), we define
Lex^+_1(𝔹,p)= (S,𝒪,≼') if |min_≼'S|=1;
(S,𝒪,≼'↑ x), with x∈ min_≼' S otherwise.
The upgraded minimal revision, Mini^+_1, is defined analogously. It remains to discuss what happens when conditioning results in several minimal worlds. We propose the following interpretation.
Let 𝔹=(S,𝒪,≼), p∈𝒪 and Cond_1(𝔹, p)=(S',𝒪',≼'), we define
Cond^+_1(𝔹,p)= (S',𝒪',≼') if |min_≼'S'|=1;
({x},𝒪',∅), with x∈ min_≼' S' otherwise.
Cond^+_1 is a very `impatient' method, as long as a singular minimal world is available, it just follows the usual drill, but if at any stage several worlds are most plausible, it picks one of them and throws away the rest of the space. This is very radical, but this way we avoid upgrading the order, which would go against the spirit of conditioning.
Let 𝔹=(S,𝒪,≼), σ∈ O^* a data sequence, p∈ O an observable. We define the anchoring-biased methods R_AB as:
R_AB(𝔹,λ)=𝔹,
R_AB(𝔹,σ· p)=R^+_1(R_AB(𝔹,σ), min_≼_AB(S_AB∩ p)),
where R_AB(𝔹,σ)=(S_AB,𝒪_AB, ≼_AB).
We obtain the anchoring-biased conditioning, lexicographic and minimal revision Cond_AB, Lex_AB, Mini_AB by substituting R^+_1 above by Cond^+_1, Lex^+_1 and Mini^+_1, respectively.
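The anchoring mechanism can be sketched as follows: each observation is replaced by the set of currently most plausible worlds where it is true, the one-step method is applied, and if several worlds still tie for most plausible, one of them is chosen at random and upgraded (or, with drop_rest=True, kept as the only remaining world, mimicking the impatient Cond^+ variant). The rank-based encoding is the same as in the earlier sketches, and the wrapper is an illustrative reading of the definitions above, not the project's code.

```python
import random

def anchoring_biased(one_step, drop_rest=False):
    """Anchoring-biased iterated revision in the rank-based encoding."""
    def biased(states, rank, stream):
        for p in stream:
            p_worlds = states & p
            if not p_worlds:
                continue
            # revise with the most plausible worlds where p is true
            best = min(rank[s] for s in p_worlds)
            anchor = {s for s in p_worlds if rank[s] == best}
            states, rank = one_step(states, rank, anchor)
            top = min(rank[s] for s in states)
            minimal_set = {s for s in states if rank[s] == top}
            if len(minimal_set) > 1:
                x = random.choice(list(minimal_set))   # random anchor world
                if drop_rest:
                    states, rank = {x}, {x: 0}         # impatient Cond^+ variant
                else:
                    rank = {s: rank[s] + 1 for s in states}
                    rank[x] = 0                        # upgrade x to the top
        return states, rank
    return biased
```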
Unbiased minimal belief revision is in itself, interestingly, a form of anchoring bias. An agent using minimal belief revision actually uses the most plausible worlds where the incoming information is true to update its belief accordingly. When it comes to lexicographic revision, the definition is slightly different, but the behavior of anchoring-biased lexicographic belief revision is the same as that of unbiased minimal revision. By imposing the extra upgrade condition we make anchoring-biased methods more `actionable', reflecting the fact that anchoring bias often plays a role in quick decision-making. After each revision step anchoring ensures that there is a candidate for the best possible world, which is randomly selected among the minimal worlds at that stage. This is especially important if resources for performing revision are limited (in the simulation these cases will be labeled `-res'). We will see that this augmentation positively affects the biased methods, even though in general the anchoring biased belief revision methods are not universal.
Truth-tracking under anchoring bias
Anchoring bias is most prominently connected to lack of resources. For example, when someone needs to make a decision under the pressure of time, anchoring bias can be used as heuristic. In this section we will show that, even though anchoring bias breaks universality, it can facilitate faster identification of the actual world.
Consider the plausibility space 𝔹=(𝕊,≼), where 𝕊=(S,𝒪), S={w,r,s,t} and s is the actual world. The initial plausibility order is w≼ t≃ s ≼ r, so the agent is indifferent between the worlds t and s, and the observable propositions are p={w}, q={r,t,w}, p̅={r,s,t} and q̅={s}. Consider also a sound and complete data stream with respect to the actual world, O⃗=(p̅,…,q̅,…). An agent using anchoring-biased conditioning identifies the actual world from the first piece of information with probability 0.5. Of course, with probability 0.5 the actual world is excluded, and the agent will not identify it. Assuming that the biased agent identifies the actual world, anchoring-biased conditioning is faster than conditioning by k-1 steps, where O⃗_k is the first occurrence of q̅ in the data stream O⃗. Note that unbiased minimal revision will identify the world s only after receiving q̅.
The above example points at the following proposition.
Cond_AB is not universal.
Moreover, since Lex_AB is a version of Mini, based on Theorem <ref>, we can state the following.
Lex_AB is not universal.
Even though anchoring-biased lexicographic belief revision is not universal, it can facilitate faster truth tracking. The argument includes cases wherein the agent is indifferent between more than one most plausible worlds. Recall that an agent which uses anchoring-biased lexicographic revision revises similarly to one that uses unbiased minimal revision, but if the set of the worlds which considers most plausible is not a singleton, it selects one of the most plausible worlds with equal probability.
Unbiased minimal revision can be seen as a form of anchoring bias, as an agent that uses minimal belief revision, minimally updates its belief to be compatible with min_≼(p). The difference is in the way they select the most plausible worlds after each update. Anchoring bias minimal revision and unbiased minimal revision will be compared in simulations below, where we investigate if the randomness included in anchoring-biased minimal revision improves the performance with respect to unbiased minimal revision.
Simulation results We again ran a comparative simulation study of anchoring-biased revision and the regular unbiased revision, following the method described in Section <ref>. Whenever there was more than one minimal state at a certain stage of the belief revision process, the anchoring method selected one of the minimal states at random to be the conjecture of the learning method. Figure <ref> shows the respective frequencies of truth-tracking success.
As anchoring bias often shows up in the context of limited resources, we run another experiment, wherein we included a parameter (a real number between 0 and 100) which decreases each time a revision takes place, and the process terminates when the resource is depleted. In this particular implementation, each time a revision is executed the available resource is halved and the agent stops revising when its resources fall below 1. As we can see in Figure <ref> the anchoring ability to select a random world to be the candidate for the actual world improves the truth-tracking ability, especially in the case of minimal revision.
Finally, let us summarize some general observations about the simulation. Various components of a plausibility space affect the performance of the methods, both biased and unbiased ones. Specifically, an increase in the number of states negatively affects the performance of the belief revision methods (see Figure <ref>), while an increase in the number of observables decreases the number of non-identifiable worlds, which in effect can make unbiased methods fail. More plots with the results can be found on the project repository <cit.>.
We also saw that, as expected, cognitive-biased belief revision methods perform worse than the unbiased ones. An exception is the anchoring-biased minimal belief revision method. Additionally, when limited resources are implemented, anchoring-biased belief revision methods perform better than the unbiased ones. This is a significant result, as it provides a potential alternative tool for truth-tracking when the resources are limited, which is usually the case in real life scenarios.
§ CONCLUSIONS
Cognitive bias in artificial intelligence is an interesting topic with a bright future, and as such deserves to be investigated in the context of belief revision and knowledge representation. In this paper we provided ways to formalize bias in belief-revision and learning. The three kinds of bias we discussed had completely different character, and employed different components of our belief revision based learners. We have also shown that bias can be detrimental to learning understood as truth-tracking.
In general, biased methods are by far less reliable than the unbiased ones.
While cognitive bias is generally problematic for truth-tracking, when resources are scarce it can be considered a tool or a heuristic. Anchoring-biased methods are a good example here, as the tests we conducted showed. This point can also serve as a rehabilitation of minimal revision, which in general is not a universal learning method.
When it comes to the simulation, we have found, in line with our expectations, that Cond and Lex identify the actual world in almost every test. Moreover, in general, the larger the number of observables, the higher the chances for the agent to identify the actual world. The same holds for the length of the data sequence, see Figure <ref>.
Biased belief revision methods, are in general less successful than the unbiased ones—in particular, the information loss in framing can be fatal for truth-tracking by conditioning. On the other hand, anchoring bias can be used as a heuristic for faster identification.
In our work we model only some types of cognitive bias, the ones more applicable in artificial intelligence. Types mostly related to human emotional decision-making were intentionally excluded, but they would be a very interesting topic of future work.
Moreover, although we investigated how randomness on the states, observables, and data streams affects truth-tracking, randomness of the environment itself is not a factor in this model. Assigning some bias to the elements of the tests could potentially give better insights into truth-tracking. Finally, it would be very interesting to relate our results to the existing work on resource bounded belief revision in the AGM paradigm, in particular to <cit.>, to look for expressibility results in the context of dynamic logic of learning theory (DLLT, <cit.>), and, last but not least, make steps towards empirical predictions for cognitive science of bias.
|
http://arxiv.org/abs/2307.04232v1 | 20230709172357 | Multi-spin probes for thermometry in the strong-coupling regime | [
"Marlon Brenes",
"Dvira Segal"
] | quant-ph | [
"quant-ph",
"cond-mat.stat-mech"
] |
[email protected]
Department of Physics and Centre for Quantum Information and Quantum Control, University of Toronto, 60 Saint George St., Toronto, Ontario, M5S 1A7, Canada
Department of Physics and Centre for Quantum Information and Quantum Control, University of Toronto, 60 Saint George St., Toronto, Ontario, M5S 1A7, Canada
Department of Chemistry University of Toronto, 80 Saint George St., Toronto, Ontario, M5S 3H6, Canada
We study the sensitivity of thermometric probes that are composed of N spins coupled to a sample prepared at temperature T. Our analysis extends beyond the weak-coupling limit into the strong sample-probe coupling regime. In particular, sample-induced interactions between each of the spins are generated via strong coupling effects and are not fine-tuned amongst each body composing the probe. By employing the reaction-coordinate mapping to evaluate the non-canonical equilibrium state of the probe at finite coupling, we compute the thermometric sensitivity via the quantum Fisher information through the equilibrium state itself. We find that for single-spin probes (N = 1), temperature sensitivity decreases in the regime of weak-to-intermediate coupling strength, however, as the coupling increases we observe much higher sensitivity of the probe in the low-temperature regime. Furthermore, as long as N > 1, there exist optimal values of the sample-probe interaction energy that allow one to attain enhanced thermometric sensitivity when compared to the maximum achieved precision obtained from thermal Gibbs states at weak coupling, particularly in the regime of low temperature. Finally, we show that this enhanced sensitivity may be observed from suboptimal measurements.
Multi-spin probes for thermometry in the strong-coupling regime
Dvira Segal
August 12, 2023
===============================================================
§ INTRODUCTION
Temperature estimation in the quantum domain is a fervent research field, which has received theoretical and practical attention in recent years <cit.>. As a subset of the already growing field of quantum thermodynamics <cit.>, quantum thermometry has emerged to develop and understand precise protocols for temperature estimation at the nanoscale. Achieving high precision in the estimation of very low temperatures is a difficult task, with a number of applications ranging from cold-atomic systems for quantum simulation <cit.> to sensing with nitrogen-vacancy centers in diamond <cit.> and biological systems <cit.>.
Diverse directions have been pursued as recourse to achieve high-precision thermometry, most of which fall into two categories: local and global thermometry. While global thermometry <cit.> arose as a means to understand temperature estimation in situations where the temperature range is not well known a priori, local thermometry concerns the design of temperature probes and the optimal measurements to be carried out to achieve high temperature sensitivity <cit.>. Adaptive Bayesian strategies have also come into place, with promising precision enhancement in temperature estimation <cit.>. In non-integrable quantum systems, where thermalisation is ubiquitous, the eigenstate thermalisation hypothesis provides the means to estimate temperatures from local operations <cit.>. Quantum thermal machines have also been proposed as means for temperature estimation <cit.>.
In turn, local thermometry can also be sub-categorised into two different classes of protocols: those which achieve temperature estimation via the study of the resulting equilibrium state from coupling a probe to a sample <cit.> and those which do so via the out-of-equilibrium dynamical response signals <cit.>. We shall refer to the former as equilibrium thermometry, where temperature estimation may only follow from indirect measurements on the equilibrium state. The precision of the temperature estimation, in this case, will depend on both the equilibrium state itself and on the particular indirect measurement chosen.
Whenever the probe-sample interaction energy is the smallest energy scale in the configuration, quantum master equations <cit.> predict that the equilibrium state of the probe, i.e., the resulting state in the limit of long times starting from a product state between a probe and a thermalised sample, will be a thermal Gibbs state. We refer to the Gibbs state as the “canonical" state.
Certain microscopic conditions need to be met for the equilibrium state to be thermal <cit.>, although, thermalisation between a probe and a sample at weak interaction energy is a physical phenomenon that occurs with a high degree of universality. In the coupling regime where the equilibrium state of the probe is canonical, several aspects have been highlighted in order to employ these states for temperature estimation. The optimal measurement that provides the highest temperature sensitivity is the energy measurement of the probe <cit.>, while the design of the probe that provides the ultimate temperature sensitivity is one for which the M levels of the energy spectrum of the probe contains a single, non-degenerate ground state; together with a (M - 1)-degenerate excited state <cit.>. For practical purposes, achieving such a high degree of control and design is indeed very complicated.
In the regime where the probe-sample interaction energy cannot be neglected, the equilibrium state of the probe is non-canonical <cit.> and it has been discussed that energy measurements remain optimal even in this regime from perturbation theory <cit.>; while bath-induced correlations and strong coupling may lead to enhanced temperature sensitivity in integrable and harmonic models <cit.>. It has also been shown that non-Markovian effects, which may be prominent in the strong probe-sample interaction energy, could also lead to enhancements in temperature estimations from dynamical signals <cit.>.
In this work, we consider thermal probes that are composed of multiple spins and possibly strongly coupled to a sample as means for temperature estimation. In particular, we consider sample-induced spin interactions to determine whether an enhancement in the temperature estimation may be achieved. While the equilibrium state in the ultra-strong coupling regime may be accessed via the projection of the probe Hamiltonian onto the eigenbasis of the coupling operator between the sample and the probe <cit.>; in the intermediate (non-perturbative) coupling regime, the equilibrium state is most appropriately described via numerical approaches. The reaction-coordinate mapping <cit.> may be employed in certain operational regimes with a high degree of accuracy <cit.> for specific spectral functions of the sample <cit.> to compute the equilibrium state at strong coupling. The reaction-coordinate mapping provides the means to address strong-coupling effects via a Markovian embedding <cit.>, in which an enlarged system Hamiltonian evolves under Markovian dynamics. It can also be extended via polaron transformations that allow for analytical insight <cit.>. We consider multi-spin probes coupled to a reaction coordinate to model strong-coupling effects and bath-mediated interactions. With this method, we study the reduced state obtained when tracing out the reaction-coordinate degrees of freedom, leading to a non-canonical equilibrium state of the probe.
To address the temperature sensitivity, we consider the signal-to-noise ratio (SNR) as a figure of merit, which can be upper-bounded with the quantum Fisher information <cit.> through the quantum Cramer-Rao bound <cit.>. By computing maximal SNR via the non-canonical equilibrium state of the probe mediated by bath-induced interactions, we summarise our results as follows:
* For a single-spin probe, the effect of strong coupling is detrimental to the optimal temperature sensitivity of the probe at weak-to-intermediate coupling energy. This falls in line with the findings in Ref. <cit.> and extends the results therein to the non-perturbative regime of strong coupling. In the intermediate-to-strong coupling regime, much higher sensitivity may be observed in the low-temperature regime (Fig. <ref>).
* For multi-spin probes where the internal interactions amongst each of the N spins are mediated via bath-induced correlations, we find that, as long as N > 1, the temperature range over which the probe is sensitive increases considerably. Furthermore, there exists an optimal coupling λ between the sample and the probe for a given temperature range and probe size N to attain the optimal SNR
(Figs. <ref>-<ref>).
* The broad-range temperature sensitivity for multi-spin probes can be attained via dephasing operations (diagonal measurements) on the reduced state of the probe, at the cost of decreased sensitivity in the high-temperature regime, but not in the low-temperature regime. Local operations, such as polarisation measurements on the multi-spin probe, diminish temperature sensitivity in the low-temperature regime but the observed sensitivity is higher than the one obtained from energy measurements at weak coupling (Fig. <ref>).
These results contribute to the growing literature to establish ultimate thermometric precision bounds in strong-coupling thermodynamics. Two drawbacks of equilibrium thermometry that have been pointed out are the long timescales required for equilibration <cit.> and the highly-peaked sensitivity at the level of the SNR that it is often observed <cit.>, which leaves one to somehow obtain prior knowledge of the temperature range to be estimated. We argue that bath-mediated interactions may alleviate these constraints, by increasing the temperature range over which the probe provides high sensitivity and in many cases reducing the timescales of equilibration via strong-coupling dynamics.
In Sec. <ref> we introduce the common language of equilibrium thermometry and our reaction-coordinate mapping, as well as the model we employ for equilibrium thermometry. In Sec. <ref> we delve into the optimal SNR results for our probe configurations and the SNR obtained from suboptimal measurements. We provide some analysis and conclusions in Sec. <ref>, together with some proposals for future directions.
§ EQUILIBRIUM THERMOMETRY
§.§ Ultimate precision bounds
Focusing on equilibration processes, thermometry relies on the parameter estimation from the equilibrium state of a probe. A thermalised sample is coupled to a probe and the entire configuration is allowed to relax to equilibrium. The equilibrium state of the probe (k_B 1)
ρ̂_p(β) = e^-βĤ_p/Z_p,
depends on the parameter under investigation, in this case the temperature T = 1 / β. The temperature may only be estimated via a set of m indirect measurements on the equilibrium state of the probe. The equilibrium state is defined via the Hamiltonian of the probe Ĥ_p and Z_p = Tr[ exp(-βĤ_p)].
The ultimate precision that may be attained via this parameter estimation protocol is understood from the Cramer-Rao inequality <cit.>
δ T ≥ [m ℱ(T)]^-1/2,
where δ T stands for the temperature precision and ℱ is the quantum Fisher information (QFI) which, in this context, may be understood as the sensitivity of each optimal measurement <cit.>. The QFI is obtained when maximising the classical Fisher information (FI) over all possible measurements <cit.>.
It has been shown that the observable Ô with the largest (optimal) temperature sensitivity at thermal equilibrium is the Hamiltonian Ĥ_p of the probe itself, such that the minimum statistical uncertainty on the signal-to-noise ratio (SNR) is given by <cit.>
(T / δ T)^2 ≤ m C(T),
where C(T) = (δĤ_p / T)^2 is the heat capacity of the system and δ^2 Ĥ_p = Tr[ρ̂_p(T)Ĥ^2_p] - Tr[ρ̂_p(T)Ĥ_p]^2.
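As a simple illustration of this bound, the following NumPy sketch evaluates C(T) for canonical probes with given spectra; the two example spectra, a single spin with unit gap and an M = 8 level probe with an (M-1)-fold degenerate excited state, as well as the temperature grid, are illustrative choices and not parameters taken from this work.

```python
import numpy as np

def gibbs_populations(energies, T):
    """Populations of the canonical Gibbs state for the given spectrum (k_B = 1)."""
    w = np.exp(-(energies - energies.min()) / T)
    return w / w.sum()

def snr_bound(energies, T):
    """Single-measurement bound (T/dT)^2 <= C(T) = Var(H)/T^2 for a canonical probe."""
    p = gibbs_populations(energies, T)
    var = (p * energies**2).sum() - (p * energies).sum()**2
    return var / T**2

# single spin with gap 1 versus an M = 8 level probe with a single ground
# state and an (M-1)-fold degenerate excited state
temperatures = np.linspace(0.05, 2.0, 200)
spin = np.array([0.0, 1.0])
degenerate = np.array([0.0] + [1.0] * 7)
curves = {name: [snr_bound(E, t) for t in temperatures]
          for name, E in (("spin", spin), ("degenerate", degenerate))}
```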
In a more general sense, it will be more useful for our discussion to consider the QFI as <cit.>
ℱ(β) = Tr[L̂_β^2 ρ̂_p(β)],
where L̂_β is the symmetric-logarithmic derivative (SLD) defined implicitly from the Lyapunov equation
∂_βρ̂_p(β) = 1/2{L̂_β, ρ̂_p(β) },
with {·, ·} denoting the anti-commutator. Following our previous discussion, the most informative measurements can be shown to be the projections onto the eigenbasis of L̂_β <cit.>. For thermal equilibrium processes, where the equilibrium state is of the form ρ̂_p(β) = exp(-βĤ_p) / Z_p, the SLD can be shown to be L̂_β = ⟨Ĥ_p ⟩ - Ĥ_p <cit.>. In this case, the SLD is diagonal in the energy eigenbasis of the Hamiltonian of the probe.
If the equilibrium state of the probe is non-canonical, as may be the case when the sample-probe interaction energy is non-negligible, these conditions are not satisfied, in general <cit.>.
§.§ Thermalisation and strong-coupling thermal fixed point through the reaction coordinate mapping
At weak coupling, the state of the probe ρ̂_p(β) may be seen as the steady state of the resulting dynamics between the probe to a sample, modelled as a thermal reservoir, of which the temperature is to be estimated.
The total Hamiltonian of the configuration is given by
Ĥ_ tot = Ĥ_p + Ĥ_ B + γĤ_ int,
where Ĥ_p, Ĥ_ B and Ĥ_ int are the Hamiltonians of the probe, bath (sample) and their interaction; respectively. The coupling between the probe and the bath is controlled via the dimensionless parameter γ. In standard open-systems theory, a perturbative approximation to the second order of γ together with the Born-Markov approximations yield a quantum master equation in Lindblad form for the dynamics of the probe <cit.>
∂ρ̂_p / ∂ t = -[Ĥ_p, ρ̂_p] + ℒ{ρ̂_p },
where ℒ is the Lindblad superoperator and [·, ·] is the commutator. The above equation dictates the effective dynamics of the probe from environmental effects. For a given physical configuration, the form of ℒ will depend on the microscopic details of the probe-bath interaction Hamiltonian and a careful treatment has to be considered for Eq. (<ref>) to yield the correct steady-state at long-times, i.e., the (canonical) thermal state in Eq. (<ref>) <cit.>. Most importantly, the approximations that lead to Eq. (<ref>) require the probe and the bath to approximately be in a product state throughout the dynamics and that correlation functions of the bath decay over timescales much shorter than the characteristic timescales of the dynamics of the probe <cit.>.
At strong coupling, beyond second-order perturbative approximations in γ, these conditions cannot be guaranteed <cit.>. Instead, in this regime, the total equilibrium state is a Gibbs state of the entire configuration
ρ̂_tot = e^-βĤ_tot/Z_tot,
such that the reduced state of the probe is the partial trace over environmental degrees of freedom <cit.>
ρ̂_p = Tr_B[e^-βĤ_tot/Z_tot],
the complication being that this expression requires describing the, in principle, infinite number of degrees of freedom of the environment. In certain scenarios, however, one may instead repartition the Hamiltonian into an enlarged system that contains certain degrees of freedom of the bath, and a residual bath to which the enlarged system is coupled. The mapping becomes useful as long as the enlarged system remains weakly coupled to the residual bath. This approach is typically known as a Markovian embedding <cit.>, whereby strong-coupling effects are captured via the explicit evolution of the probe's state combined with some bath degrees of freedom. A specific type of Markovian embedding is the so-called reaction-coordinate mapping, depicted in Fig. <ref> <cit.>.
Consider a probe coupled to a bosonic bath modelled via an infinite set of harmonic oscillators with total Hamiltonian
Ĥ_ tot = Ĥ_p + Ŝ∑_k f_k (b̂^†_k + b̂_k) + ∑_k ν_k b̂^†_k b̂_k,
where {b̂_k } are canonical bosonic operators for the k-th mode with frequency ν_k, and f_k is the strength of the coupling between the probe and the k-th mode of the sample through the probe operator Ŝ. The reaction-coordinate mapping in its most basic form starts by extracting a collective mode (with canonical bosonic operators {â}) from the bath and including it as part of the system, such that the probe is turned into an enlarged system
Ĥ_p + Ωâ^†â + λŜ (â^† + â) ↦Ĥ_S,
where the extended system Ĥ_S is now weakly-coupled to the residual bath, i.e., the resulting bath description after the extraction of the strongly-coupled mode. In Eq. (<ref>), λ is the coupling strength and Ω the frequency of the extracted mode. Both λ and Ω follow from the spectral function of the original (previous to the mapping) sample J(ω), via <cit.>
λ^2 = 1/Ω∫_0^∞dω ω J(ω), Ω^2 = ∫_0^∞dω ω^3 J(ω)/∫_0^∞dω ω J(ω).
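As an illustration of these relations (not part of the original analysis), the sketch below evaluates λ and Ω by numerical quadrature for an assumed Ohmic spectral density with exponential cutoff, J(ω) = γ_0 ω e^{-ω/Λ}, for which the required moments are known in closed form; the Brownian form actually used in the main text is discussed in Appendix <ref>.

```python
import numpy as np
from scipy.integrate import quad

# Assumed Ohmic spectral density with exponential cutoff (illustrative choice)
gamma0, Lam = 0.05, 5.0          # dimensionless coupling and cutoff frequency
J = lambda w: gamma0 * w * np.exp(-w / Lam)

# Reaction-coordinate parameters from the mapping formulas
m1, _ = quad(lambda w: w * J(w), 0, np.inf)       # first moment  int w J(w) dw
m3, _ = quad(lambda w: w**3 * J(w), 0, np.inf)    # third moment  int w^3 J(w) dw
Omega = np.sqrt(m3 / m1)
lam = np.sqrt(m1 / Omega)

# Closed-form check for this J: m1 = 2 gamma0 Lam^3, m3 = 24 gamma0 Lam^5
print(Omega, np.sqrt(12) * Lam)                   # both ~17.32
print(lam, np.sqrt(2 * gamma0 * Lam**3 / Omega))  # identical up to quadrature error
```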
The mapping can be shown to lead to an extended system coupled weakly to a residual bath for certain spectral densities of the original model Eq. (<ref>) <cit.>. If that is the case, then one can justify a master equation of the form Eq. (<ref>) that leads to appropriate thermalisation of the extended system, such that in the limit of long times the steady state is thermal ρ̂_S(β) = exp(-βĤ_S) / Z_S, where Z_S = Tr[ exp(-βĤ_S)] and Ĥ_S is the Hamiltonian of the enlarged system.
Through this approach, one may investigate strong-coupling thermometric effects by studying the reduced state of the probe after tracing out the reaction-coordinate degrees of freedom
ρ̂_p(β) = Tr_RC[e^-βĤ_S]/Z_S,
and computing first the SLD through Eq. (<ref>) and then QFI for the reduced state through Eq. (<ref>).
§ SPIN PROBES
The total Hamiltonian of the model is of the form of Eq. (<ref>).
We consider a probe composed of N spins with the Hamiltonian
Ĥ_p = ∑_i=1^NΔσ̂^z_i.
The spins composing the probe do not interact with one another; however, they are coupled strongly to the bath through the system operator
Ŝ = ∑_i=1^Nσ̂^x_i.
Using the reaction-coordinate mapping, we define the system Hamiltonian Eq. (<ref>) including the original probe model,
the reaction coordinate, and their mutual interaction.
The reaction coordinate itself couples to the residual bath allowing thermalisation of the extended system. For details on the mapping see e.g., Ref. <cit.>.
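To make the construction concrete, the following sketch (a minimal, illustrative implementation assuming NumPy/SciPy; the parameter values and the truncation M are arbitrary choices, not those of the main text) builds the extended Hamiltonian Ĥ_S for N = 2 spins and a truncated reaction coordinate, forms the Gibbs state of the extended system, traces out the reaction coordinate, and quantifies how far the reduced probe state departs from the weak-coupling canonical state exp(-βĤ_p)/Z_p.

```python
import numpy as np
from scipy.linalg import expm

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

N, M = 2, 20                    # number of spins, RC truncation (illustrative)
Delta, Omega, lam, beta = 1.0, 15.0, 3.0, 1.0 / 0.2

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)
a = np.diag(np.sqrt(np.arange(1, M)), k=1)       # truncated annihilation operator
IM = np.eye(M)

# Probe Hamiltonian H_p = Delta * sum_i sz_i and collective coupling S = sum_i sx_i
Hp = Delta * sum(kron_all([sz if j == i else I2 for j in range(N)]) for i in range(N))
S  = sum(kron_all([sx if j == i else I2 for j in range(N)]) for i in range(N))

# Extended system H_S = H_p + Omega a^dag a + lam S (a^dag + a)
HS = (np.kron(Hp, IM) + Omega * np.kron(np.eye(2**N), a.conj().T @ a)
      + lam * np.kron(S, a.conj().T + a))

rhoS = expm(-beta * HS)
rhoS /= np.trace(rhoS)

# Partial trace over the reaction coordinate
rho_p = rhoS.reshape(2**N, M, 2**N, M).trace(axis1=1, axis2=3)

# Compare with the canonical (weak-coupling) state of the bare probe
rho_gibbs = expm(-beta * Hp)
rho_gibbs /= np.trace(rho_gibbs)
evals = np.linalg.eigvalsh(rho_p - rho_gibbs)
print("trace distance:", 0.5 * np.sum(np.abs(evals)))   # vanishes as lam -> 0
```

The printed trace distance goes to zero as λ → 0 and grows with λ, in line with the discussion that follows.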
The extraction of a reaction coordinate from the bath (sample) and its inclusion as part of the probe (thermometer) Hamiltonian, as written in Eq. (<ref>), makes explicit the generation of an effective coupling between all pairs of spins for non-vanishing coupling λ. This is the case since the spins in Ĥ_S, which are otherwise non-interacting, are coupled via the collective operator Ŝ to the same reaction-coordinate mode. This degree of freedom, which is included explicitly in the equilibrium state in Eq. (<ref>) before being traced out, mediates couplings between the spins of the probe.
As mentioned before, the system composed of the probe and the reaction coordinate is assumed to thermalise to a canonical Gibbs state. The reduced state of the probe can be shown to thermalise to a Gibbs state at weak λ <cit.>, however, this is not necessarily the case as λ increases. A natural question is thus whether strong coupling effects in our model could lead to enhanced or detrimental maximal signal-to-noise ratios, which we can compute via
T/δ T = √(β^2 ℱ(β))
in the single-shot scenario (m = 1).
§.§ Single-spin probe
The most basic form of spin probe, a single spin strongly coupled to the bath, serves as a benchmark. In this case, we consider N = 1 in Eq. (<ref>).
The extended system Hamiltonian includes a single spin coupled to a reaction-coordinate mode (the latter coupled to the sample). For the SNR, we consider the reduced state of the single spin induced by strong coupling.
We compute the SNR as a function of temperature for different values of the coupling parameter λ. The results are shown in Fig. <ref>. The calculation involves computing the SLD L̂_β through Eq. (<ref>) and then the QFI in Eq. (<ref>), with ρ̂_p = Tr_RC[e^-βĤ_S] / Z_S and Ĥ_S from Eq. (<ref>). The reaction coordinate, with a frequency of Ω = 15Δ, is truncated to M = 50 levels, which was sufficient to attain convergence of the results shown in Fig. <ref>.
The value of the reaction coordinate frequency emerges from the characteristics of the spectral function of the bath (sample), see
Eq. (<ref>) and Appendix <ref>.
At weak coupling, the maximal SNR for the single-spin probe can be shown to equal √(C(T)), with C(T) the heat capacity of the probe <cit.>, as depicted by the solid black line in Fig. <ref>. The heat capacity C(T) = ∂_T ⟨Ĥ_p ⟩ can be computed analytically, yielding the maximum SNR in the single-shot scenario
T/δ T = √(C(T)) = 2Δβ e^βΔ/1 + e^2βΔ.
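This closed form can be checked directly against a numerical evaluation of β√(δ²Ĥ_p) for Ĥ_p = Δσ̂^z; the short sketch below (illustrative, assuming NumPy) does exactly that.

```python
import numpy as np

Delta = 1.0
T = np.linspace(0.05, 5.0, 200)
beta = 1.0 / T

# Numerical sqrt(C(T)) = beta * sqrt(Var(H_p)) for H_p = Delta * sigma_z
E = np.array([Delta, -Delta])                       # eigenvalues of H_p
p = np.exp(-np.outer(beta, E))
p /= p.sum(axis=1, keepdims=True)
varH = (p * E**2).sum(axis=1) - ((p * E).sum(axis=1))**2
snr_numeric = beta * np.sqrt(varH)

# Closed form quoted in the text
snr_closed = 2 * Delta * beta * np.exp(beta * Delta) / (1 + np.exp(2 * beta * Delta))

print(np.allclose(snr_numeric, snr_closed))         # True
```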
We see in Fig. <ref> that the effect of strong coupling is detrimental to the sensitivity of single-spin thermometers in the weak-to-intermediate coupling regime (λ≲ 5Δ). This is in line with the results reported in Ref. <cit.>, in which a perturbative treatment led to the conclusion that energy measurements in the weakly-coupled case remain the most informative measurements.
However, as the coupling strength λ increases, we see that at low temperatures the single-spin probe attains a much higher sensitivity than its weakly-coupled counterpart. In fact, in the range T / Δ = [10^-2, 10^-1], strong coupling leads to an SNR several orders of magnitude higher than that set by the heat capacity of the spin probe at weak coupling. For all the curves, however, the SNR decays rapidly beyond a certain temperature, indicating that this protocol only achieves high precision within limited temperature ranges.
Most interestingly, the reduced state of the probe ρ̂_p does not acquire off-diagonal elements in this model <cit.>, which means that both ρ̂_p(β) and L̂_β are diagonal operators. This indicates that the precision shown in Fig. <ref> can be achieved by measuring the populations of the spin probe at strong coupling; there is no need to determine the optimal measurement basis, since it simply corresponds to the occupations of the reduced density matrix of the probe at equilibrium.
We can gather from these results that at intermediate-to-strong coupling, the populations of the spin levels acquire a different temperature dependence than the canonical ones, translating to differences in the SLD compared to weak coupling. This distinct dependence translates to an increased sensitivity of the probe at a lower temperature for sufficiently strong λ. Interestingly, for our choice of Ŝ = σ̂^x, no coherences are generated in the reduced state of the probe. Different choices for the coupling operator Ŝ do indeed lead to temperature-dependent coherences in the state of the probe. The choice of the coupling operator
can largely affect the sensitivity of the probe at strong coupling. See Appendix <ref> for further details.
§.§ Multi-spin probes
Having understood the single-spin probe at strong coupling, we now turn our attention to the N > 1 case. Recalling Eq. (<ref>) and Eq. (<ref>), we do not allow spins to directly interact with each other. However, they do develop an effective interaction via their strong coupling to the sample.
Fig. <ref> shows the SNR as a function of temperature for different N and different coupling parameters λ. It can be seen from Fig. <ref>(a) that at weak-to-intermediate coupling, the behaviour of the SNR is rather similar to the one observed for the single-probe case. The optimal measurements remain the energy measurements in the basis of the weakly-coupled probe, even in the multi-spin case. In fact, in this regime of weak coupling, the bath-mediated interactions are weak enough that the spins composing the probe barely interact with each other. The increased sensitivity follows the trivial √(N) scaling for uncorrelated spins <cit.> at λ→ 0, which can be confirmed from Fig. <ref>(a).
However, as shown in Fig. <ref>(b) and Fig. <ref>(c), the effect of strong coupling is non-trivial. Particularly, the maximal SNR can be larger at strong coupling than its weakly-coupled counterpart, albeit at higher temperature ranges. Furthermore, strong coupling reveals different SNR peaks at different temperature ranges for certain values of N. We thus come to the conclusion from these results that temperature sensitivity may be higher at strong coupling for multi-spin probes, unlike the single-probe configuration. Furthermore, this effect can only be observed at relatively strong coupling as a collective effect stemming from many-body interactions induced by the sample.
In Fig. <ref> we show the SNR as a function of the coupling strength λ for probes composed of a different number of spins at different temperature values. These results show that indeed, at low temperatures, strong coupling translates to a higher sensitivity in multi-probe configurations. In fact, strong coupling translates to temperature sensitivity in certain regimes where weakly-coupled probes provide negligible information through energy measurements. Furthermore, there exists an optimal coupling at a given temperature for which the sensitivity is maximal. This effect washes away as the temperature increases, where we recover that the most informative measurements are the ones related to weakly-coupled configurations. We then see that multi-spin probes are primed for low-temperature thermometry.
§.§ Suboptimal measurements
We have seen that for a single spin probe, strong coupling increases thermometric sensitivity at sufficiently strong λ in the low-temperature regime. On the other hand, multi-spin probes show interesting behaviour, whereby strong coupling and many-body effects may provide higher temperature sensitivity, particularly in the low-temperature regime. However, achieving such precision from the equilibrium states can be very complicated. Indeed, even at weak coupling, energy measurements can involve highly non-local operations which pose practical and technical complications. At strong coupling, to take advantage of the higher sensitivity that may exist in certain temperature ranges as we have seen in Fig. <ref> and Fig. <ref>, the situation is even more complicated. In this coupling regime, the SLD develops off-diagonal matrix elements which are temperature-dependent, the same as the equilibrium state ρ̂_p from Eq. (<ref>). Therefore, it is even more difficult to understand and choose the optimal basis which renders the SLD in diagonal form, which then fixes the basis for the measurements required to attain the fundamental bound for thermometry. It is therefore imperative to consider suboptimal measurements, ones that are more feasible from the experimental perspective.
In light of this, we now consider the temperature sensitivity of the spin probes from suboptimal measurements. The first operation we consider is the dephasing of the reduced state of the probe ρ̂_p onto its diagonal basis
ρ̃_p = ∑_k |k⟩⟨k|ρ̂_p |k⟩⟨k|,
where |k⟩ = (0,⋯,1_k,⋯,0)^T are basis vectors such that ρ̃_p is diagonal in the spin basis. From this state we can compute the Fisher information via
ℱ_D = Tr[L̃^2_βρ̃_p(β)],
where L̃_β is the SLD from Eq. (<ref>) computed through ρ̃_p(β). Naturally, L̃_β is also diagonal in the spin basis, which then implies that the optimal SNR T / δ T = √(β^2 ℱ_ D) follows from the estimation of the occupations (diagonal matrix elements) of ρ̂_p(β).
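Since both ρ̃_p and L̃_β are diagonal, ℱ_D reduces to the classical Fisher information of the populations, ℱ_D = ∑_k (∂_β p_k)²/p_k. The sketch below (illustrative; assumes NumPy, and uses the canonical single-spin populations purely as a benchmark with a known answer) evaluates this expression by finite differences and recovers δ²Ĥ_p = Δ² sech²(βΔ).

```python
import numpy as np

def populations(beta, Delta=1.0):
    # Canonical populations of H_p = Delta * sigma_z (used here only as a check)
    E = np.array([Delta, -Delta])
    p = np.exp(-beta * E)
    return p / p.sum()

def dephased_fisher(pop_fn, beta, eps=1e-6):
    # Classical Fisher information of the populations with respect to beta
    p = pop_fn(beta)
    dp = (pop_fn(beta + eps) - pop_fn(beta - eps)) / (2 * eps)
    return np.sum(dp**2 / p)

beta, Delta = 2.0, 1.0
F_D = dephased_fisher(populations, beta)
print(F_D, Delta**2 / np.cosh(beta * Delta)**2)   # both ~0.0707
```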
We may also consider the suboptimal measurements which follow from the estimation of a local observable Ô. Given an observable Ô, we may consider the SNR from the measurements following the expectation values of Ô, via
T/δ T = T |χ_T(Ô)|/δÔ≤√(β^2 ℱ(β))
in the single-shot scenario (m = 1). In Eq. (<ref>), δ^2 Ô = ⟨Ô^2 ⟩ - ⟨Ô⟩^2 is the variance of Ô in the reduced state of the probe ρ̂_p(β) and χ_T(Ô) = ∂_α Tr[ρ̂_p(α) Ô]|_α=T is the temperature susceptibility of Ô <cit.>. We choose an extensive, yet local operator, for the suboptimal measurement. Consider an extensive sum of the spin polarizations in the z component,
Ô = ∑_k=1^Nσ̂^z_k,
which amounts to estimating the total polarisation of the probe composed of N spins.
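In the special case of non-interacting spins at weak coupling, where ρ̂_p is canonical and Ô is proportional to Ĥ_p, this ratio saturates the bound and equals √N times the single-spin √(C(T)). The sketch below (illustrative; assumes NumPy/SciPy and arbitrary parameter values) evaluates T|χ_T(Ô)|/δÔ by finite differences and compares it with that benchmark.

```python
import numpy as np
from scipy.linalg import expm

def thermal(H, T):
    rho = expm(-H / T)
    return rho / np.trace(rho)

def kron_chain(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

N, Delta, T = 3, 1.0, 0.5
sz, I2 = np.diag([1.0, -1.0]), np.eye(2)
Hp = Delta * sum(kron_chain([sz if j == i else I2 for j in range(N)]) for i in range(N))
O  = sum(kron_chain([sz if j == i else I2 for j in range(N)]) for i in range(N))

rho = thermal(Hp, T)
mean = lambda A, r: np.real(np.trace(r @ A))

# Temperature susceptibility by central finite differences
eps = 1e-4
chi = (mean(O, thermal(Hp, T + eps)) - mean(O, thermal(Hp, T - eps))) / (2 * eps)
dO = np.sqrt(mean(O @ O, rho) - mean(O, rho)**2)
snr_O = T * abs(chi) / dO

# Weak-coupling benchmark: sqrt(N) * single-spin sqrt(C(T))
single = (Delta / T) / np.cosh(Delta / T)
print(snr_O, np.sqrt(N) * single)    # both ~0.92
```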
In Fig. <ref> we display the
maximal
signal-to-noise ratio T / δ T as a function of temperature for different system sizes N and couplings λ, using four different measurement schemes: energy measurements at weak coupling, the optimal measurement at strong coupling at the value of λ, the dephasing operation which amounts to estimating the occupations of the density matrix and a sum of local operations where one measures the polarisation of all the spins in the z direction. Panels (a), (b), and (c) in Fig. <ref> show different system sizes, N = 2, 4, and 8, respectively. We start by highlighting that energy measurements at weak coupling have a higher peak for larger system sizes while, as discussed before, strong coupling effects lead to a multi-peaked SNR as a function of T in the optimal basis. Remarkably, even considering the dephasing operation in the spin basis of ρ̂_p(β) leads to the same multi-peaked behaviour, at the cost of reduced sensitivity at higher temperatures. Furthermore, this operation retains the low-temperature sensitivity observed from the optimal measurements at the value of λ, so one need not consider the optimal bound on the SNR at strong coupling for low-temperature thermometry. Considering an extensive sum of local observables, however, indeed leads to decreased sensitivity in the low-temperature regime. We do note that even this operation from local measurements, leads to broader-temperature sensitivity, with higher values of the SNR than the optimal weakly-coupled counterpart.
Our results suggest that high-temperature sensitivity increases with the system size N. This shows that many-body effects and sample-induced correlations in this case lead to relatively high sensitivity even from conducting local operations. Finally, we note that the optimal coupling λ changes with both the system size and the temperature range over which the probe is sensitive. We have selected the values of λ in Fig. <ref> such that they lie close to the optimal value taken from Fig. <ref>.
§ CONCLUSIONS
We have studied the impact of strong-coupling effects on
equilibrium thermometry employing multi-spin probes.
Using the reaction-coordinate mapping,
we showed that a non-canonical equilibrium state of such probes stems from strong-coupling effects with the sample. While the reduced states of the probes are non-canonical, the equilibrium state of the extended Hamiltonian that contains the reaction coordinate is indeed a standard Gibbs state and therefore, we can take advantage of the Markovian embedding to analyse the reduced states of the probes. This approximation can be shown to yield the correct equilibrium states in certain regimes of the spectral function of the sample <cit.>. From this treatment, we can consider strong-coupling effects in the non-perturbative regime. We have shown that, along the lines of the findings in Ref. <cit.> in which a perturbative treatment was employed, weak-to-intermediate coupling leads to an equilibrium state for which the optimal SNR for temperature thermometry is lower than its weakly-coupled, thermal Gibbs state counterpart. At stronger values of the coupling parameter, however, thermometric sensitivity in the case N = 1 is higher in the low-temperature regime. We can conclude from these results that the most informative measurements for single-spin probes are the energy measurements of the states that undergo a thermalisation process, up until the value of the sample-probe coupling strength increases such that one may attain higher sensitivity at low T.
For multi-spin probes and, in particular, configurations where the probe is composed of spins that are not finely tuned to interact with each other but rather through the sample itself, we have found that strong coupling leads to enhanced precision in the low-temperature regime. This trend is in accord with another configuration, where each body comprising the probe is a harmonic mode and the interaction amongst each mode with the sample is quadratic in bosonic operators, such that the equilibrium state is a non-canonical Gaussian state. As shown in Ref. <cit.>, in such a configuration, low-temperature sensitivity is also enhanced via sample-induced correlations between each body comprising the probe.
In both harmonic and anharmonic (spin) cases, the enhanced temperature sensitivity may also be accessed via local measurements. These results suggest that, in a more general sense, bath-induced correlations between local probes enhance low-temperature thermometry.
Our study, however, demonstrates that in multi-spin setups the SNR depends on the number of spins in a highly non-monotonic manner once the interaction extends beyond weak coupling.
Furthermore, in our model, the coupling mechanism of the spins to the sample provides another route for tunability of thermometric sensitivity (see Appendix <ref>).
A less-explored direction concerns the effect of strong coupling on the temperature sensitivity extracted from dynamical signals. Non-Markovian effects may be prominent in this regime <cit.>. Furthermore, dynamical signals in the non-Markovian regime differ substantially from their weakly-coupled counterparts and may be studied in certain regimes with the reaction-coordinate mapping <cit.>. Given that non-Markovian effects may lead to enhanced temperature sensitivity <cit.>, a promising direction could be to consider strong-coupling effects on thermometric probes from the dynamical perspective using the reaction-coordinate mapping.
We gratefully acknowledge fruitful discussions with John Goold and Mark T. Mitchison. The work of M.B. has been supported by the Centre for Quantum
Information and Quantum Control (CQIQC) at the University of Toronto. D.S. acknowledges support from NSERC and from the Canada Research Chair program. Computations were performed on the Niagara supercomputer at the SciNet HPC Consortium. SciNet is funded by: the Canada Foundation for Innovation; the Government of Ontario; Ontario Research Fund - Research Excellence; and the University of Toronto.
§ MAXIMALLY-COHERENT Ŝ FOR THE SINGLE-PROBE CASE
In the main text, we have considered a specific type of probe-sample coupling interaction. From Eq. (<ref>) and Eq. (<ref>), we have Ŝ = σ̂^x for the single-spin probe. This choice is, in principle, arbitrary. One may instead consider different forms of probe coupling operators that lead to coherences in the spin basis of the reduced state of the single-spin probe ρ̂_p [Eq. (<ref>)]. For instance, if we consider
Ŝ = 1/√(2) ( σ̂^x + σ̂^z ),
then coherences develop in the spin basis of the reduced state of the probe ρ̂_p = Tr_RC[e^-βĤ_S] / Z_S (see Eq. (<ref>)). In turn, these coherences become temperature-dependent <cit.> in the non-canonical equilibrium state of the probe at finite sample-probe interaction energy.
In Fig. <ref> we display the SNR for the spin-boson model (single-spin probe) as a function of temperature for different values of the coupling parameter λ. These results are analogous to the ones displayed in Fig. <ref> and we have used the same parameters for the calculation, with the only difference being the coupling operator from Eq. (<ref>). It can be observed that for this type of coupling operator, the temperature sensitivity behaves quite differently at strong coupling than for the case Ŝ = σ̂^x examined in the main text. In particular, the observed higher SNR at low temperatures disappears in this case. The inset in Fig. <ref> displays the coherences that develop in the spin basis of the reduced state of the probe, which vanish in the limit T →∞. In analogy to the results shown in our main example in Fig. <ref>, at weak-to-intermediate coupling, it is the weakly-coupled Gibbs state SNR (solid black line in Fig. <ref>) that translates to the highest temperature sensitivity. However, as λ increases, higher sensitivity may be achieved from the non-canonical equilibrium states of the probe. Nevertheless, we see that in this case, the SNR is not higher in the low-temperature regime when compared to its weakly-coupled counterpart, despite the temperature-dependent coherences that develop in the reduced states of the probe ρ̂_p(β) at low temperature. We remark that the thermal Gibbs state at weak coupling is the equilibrium state of the probe irrespective of the sample-probe interaction operator, while the equilibrium state at strong coupling heavily depends on the microscopic details via Ŝ <cit.>. This implies that, at strong coupling, the microscopic details of the probe-sample interaction play an important role in the sensitivity of the probes.
§ EFFECT OF THE SPECTRAL FUNCTION OF THE SAMPLE
A free parameter in our simulations is the natural frequency of the reaction coordinate, which we have denoted with Ω and described in Eq. (<ref>). On physical grounds, Ω is the frequency of a collective-effective harmonic mode pertaining to the sample, to which the probe is most strongly coupled <cit.>. Tuning this parameter yields different equilibrium states of the probe in the finite-λ regime, as it is a property of the sample via its spectral function <cit.>. A spectral density of Brownian form
J(ω) = 4γΩ^2 λ^2 ω/(ω^2 - Ω^2)^2 + (2πγΩω)^2,
which is peaked around Ω with width γ, leads to an effective spectral density, after the reaction-coordinate mapping, of the Ohmic type J_RC(ω) = γω e^-|ω| / Λ, where Λ is a high-frequency cut-off <cit.>. The dimensionless width parameter γ is kept small, such that the enlarged system, comprising the probe and the reaction coordinate, is weakly coupled to the residual bath, i.e., to the sample after the reaction-coordinate mapping. In Fig. <ref> we display the SNR for the two-body spin probe (N = 2) as a function of temperature for different values of Ω. It can be observed that the effect of reducing Ω is to shift the temperature sensitivity to lower-temperature regimes, up to the point in which the low-temperature sensitivity vanishes for sufficiently low Ω. In our calculations we kept the dimension of the manifold of the reaction coordinate to a very high value, M = 2000, to ensure convergence in the entire temperature regime shown in Fig. <ref>. From these results we can conclude that in employing multi-spin probes for temperature estimation in the strong-coupling regime, two important parameters are required to be considered hand-in-hand: the effective probe-sample coupling parameter λ and, for this particular case, the frequency of the collective harmonic mode of the sample to which the probe is most-strongly coupled. In a more general sense, this is the result of the equilibrium state at finite coupling depending strongly on the microscopic details of both the sample and its interaction with the probe, unlike thermal Gibbs states at weak coupling which are, in general, independent on these details.
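As a consistency check (not part of the original appendix), the first mapping relation λ² = Ω^{-1}∫_0^∞ ω J(ω) dω can be verified numerically for this Brownian form, for which the frequency integral evaluates to λ²Ω; a minimal sketch assuming NumPy/SciPy is given below.

```python
import numpy as np
from scipy.integrate import quad

gamma, Omega, lam = 0.05, 15.0, 3.0      # illustrative values
J = lambda w: 4 * gamma * Omega**2 * lam**2 * w / (
        (w**2 - Omega**2)**2 + (2 * np.pi * gamma * Omega * w)**2)

# First mapping relation: int_0^inf w J(w) dw = lam^2 * Omega
m1, _ = quad(lambda w: w * J(w), 0, np.inf, limit=400)
print(m1, lam**2 * Omega)                # both ~135, up to quadrature error
```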
|
http://arxiv.org/abs/2307.06133v2 | 20230712123618 | Upgrade of the positron system of the ASACUSA-Cusp experiment | [
"A. Lanz",
"C. Amsler",
"H. Breuker",
"M. Bumbar",
"S. Chesnevskaya",
"G. Costantini",
"R. Ferragut",
"M. Giammarchi",
"A. Gligorova",
"G. Gosta",
"H. Higaki",
"E. D. Hunter",
"C. Killian",
"V. Kraxberger",
"N. Kuroda",
"M. Leali",
"G. Maero",
"C. Malbrunot",
"V. Mascagna",
"Y. Matsuda",
"V. Mäckel",
"S. Migliorati",
"D. J. Murtagh",
"A. Nanda",
"L. Nowak",
"F. Parnefjord Gustafsson",
"S. Rheinfrank",
"M. Romé",
"M. C. Simon",
"M. Tajima",
"V. Toso",
"U. Uggerhøj",
"S. Ulmer",
"L. Venturelli",
"A. Weiser",
"E. Widmann",
"Y. Yamazaki",
"J. Zmeskal"
] | physics.atom-ph | [
"physics.atom-ph",
"physics.plasm-ph"
] |
[
====================
The ASACUSA-Cusp collaboration has recently upgraded the positron system to improve the production of antihydrogen. Previously, the experiment suffered from contamination of the vacuum in the antihydrogen production trap due to the transfer of positrons from the high pressure region of a buffer gas trap. This contamination reduced the lifetime of antiprotons. By adding a new positron accumulator and therefore decreasing the number of transfer cycles, the contamination of the vacuum has been reduced. Further to this, a new rare gas moderator and buffer gas trap, previously used at Aarhus University, were installed. Measurements from Aarhus suggested that the number of positrons could be increased by a factor of four in comparison to the old system used at CERN. This would mean a reduction of the time needed for accumulating a sufficient number of positrons (of the order of a few million) for an antihydrogen production cycle. Initial tests have shown that the new system yields a comparable number of positrons to the old system.
§ INTRODUCTION
The ASACUSA-Cusp collaboration aims to measure the hyperfine splitting of a spin-polarised, ground-state antihydrogen beam in a magnetic field free region with a relative precision of parts per million <cit.>. Antihydrogen atoms are produced by mixing positrons and antiprotons, primarily via three-body recombination, in the so-called Cusp trap (due to its cusped magnetic field) <cit.>. The highly inhomogeneous magnetic field focuses the two low-field seeking states on-axis while defocusing the high-field seeking states, yielding a spin-polarised beam of antihydrogen at the spectroscopy line. The first antihydrogen was successfully produced by <cit.>. Antihydrogen was then observed [2.7]m from the production region by <cit.>. A subsequent measurement of the quantum state distribution at the position of the microwave cavity showed that the majority of antihydrogen atoms are in Rydberg states and the observed rate is too low for performing a spectroscopy measurement <cit.>. Since then, the focus of experiments is to increase the production of ground-state antihydrogen, which depends strongly on the density and temperature of the positron plasma <cit.>. The first step was the upgrade of the Cusp trap in 2021, leading to a significant decrease of the temperature of an electron plasma <cit.>. The second step was the upgrade of the positron system to increase the number available for antihydrogen production and to reduce contamination of the ultra-high vacuum (UHV) of the Cusp trap during transfer.
In this work, the apparatus of the new system is described as well as the initial tests after the installation into the ASACUSA experimental area. In Section <ref>, technical details of the moderator system, the buffer gas trap and the accumulator are given, followed by the results of the individual systems in Section <ref> and a comparison with the previous system in Section <ref>.
§ POSITRON SYSTEM
The new positron system consists of three parts: a commercial rare-gas moderator (RGM) and buffer gas trap (BGT) from First Point Scientific Inc. (FPS), which was previously used at Aarhus University, and an accumulator. The position of the traps, magnets, gate valves, and detectors (microchannel plate (MCP) and plastic scintillator) are indicated in the scale drawing of the experimental setup shown in Fig. <ref>.
§.§ Rare Gas Moderator & Buffer Gas Trap
The RGM and the BGT comprise a standard system for producing bunches of positrons. In this section only a short overview of the system is given. More details on the operation of the RGM and the positron traps can be found elsewhere <cit.>.
A schematic drawing of the RGM system is shown in Fig. <ref>. Positrons are produced by a commercial ^22Na source from iThemba labs, purchased in 2011 with an activity of [1.89]GBq. The source is housed in an elkonite shielded, cone-shaped electrode mounted on a cryocooler. The high-energy positrons from the source are moderated by a Ne-ice moderator <cit.> which produces a slow beam with an energy of tens of eV, depending on the bias of the electrode. The positrons are magnetically guided from the source ([120]G) through the beamtube solenoid ([250]G) and focused with the matching coil ([175]G) into the BGT ([750]G). To prevent unmoderated positrons from reaching the trap, a [30]cm long elkonite rod is inserted in the beamline which has an inner diameter of [0.8]cm and is offset by [1]cm. Two saddle coils producing a perpendicular magnetic field ([23]G) give the moderated positrons a vertical offset and then realign them on axis, while unmoderated positrons annihilate on the rod. Additionally, this setup serves as a biological shield. When no positrons are required for accumulation, no current is applied to the saddle coils in which case positrons annihilate on the rod.
After moderation, the positrons are trapped in the BGT. A scale drawing of the electrode structure is shown in Fig. <ref>. It consists of seven electrodes: the Inlet (length (l): [18.5]mm, inner diameter (ID): [10]mm), S1 (l: [400]mm, ID: [10]mm), S2 (l: [255]mm, ID: [18]mm), S3-RW (l: [25]mm, ID: [18]mm), S3 (l: [25]mm, ID: [25]mm), S4 (l: [50]mm, ID: [25]mm) and the Gate electrode (l: [50]mm, ID: [25]mm). N_2 gas is introduced into S1 producing a pressure gradient from S1 (order of [10^-3]mbar) to the trapping region (order of [10^-6]mbar). The given pressures in the trap were simulated using Molflow+ <cit.> and agree with the calculation of the pressure in the trapping region using the measured lifetime. After multiple collisions with the N_2 molecules, those positrons which have not formed positronium have lost enough energy that they can no longer escape from the potential well produced in the lower pressure region by S3-RW, S3, S4 and Gate electrodes. For cooling the positrons SF_6 is introduced into the vacuum chamber <cit.> and a rotating wall (RW) electric field is applied to the eight-fold split electrode S3-RW (opposing electrodes are connected) to counteract radial expansion <cit.>. After the fill time and a short cooling phase—typically after about one second—the positrons are pulsed out of the trap using a fast high voltage pulser connected to the gate electrode.
The setup used at CERN is a modified version of the system as used at Aarhus University <cit.>: The fast high voltage amplifiers from the original system have been changed to slow, low noise amplifiers, and RC filters have been added to reduce the noise reaching the electrodes. The waveform generator for the RW has been replaced by BK Precision 4054b arbitrary waveform generators. To have better control over the trap potentials, the S3 and the RW electrodes, which were initially connected, have been separated to apply individual voltages on those two electrodes. Additionally, the control of the trap voltages has been upgraded to allow the possibility of introducing additional potential manipulations for moving the positrons.
§.§ Accumulator
The accumulator is a Penning-Malmberg trap housed in a [1100]G field produced by a solenoid magnet, designed to accumulate several bunches (or "stacks") of positrons coming from the BGT. A scale drawing of the trap structure is shown in Fig. <ref>, which consists of eleven aluminium electrodes: E1 (l: [47.5]mm, ID: [12]mm), C1-C9 (l: [29]mm, ID: [45]mm) and E2 (l: [47]mm, ID: [12]mm) which are spray-painted with colloidal graphite and separated by [3]mm thick ceramic spacer rings. C3 and C7 are four-fold split electrodes to apply a RW, E1 and E2 have smaller diameters to act as pumping restrictions and are designed to be used as pulsed electrodes to catch and extract positrons. The positron bunches from the BGT are magnetically guided into the accumulator by a [30]cm long solenoid magnet ([470]G) and two coils mounted in a Helmholtz configuration around the MCP (central field [210]G). The pressure downstream of the accumulator is [2.0·10^-7]mbar during accumulation, due to the high pressure of the trapping and cooling gas from the FPS. This setup provides enough gas to cool the positrons in the accumulator, such that no additional cooling gas supply is needed. After accumulation is finished, the gate valve between the BGT and the accumulator, labelled as GV1 in Fig. <ref>, is closed, the gas is pumped out and the transfer of positrons into the Cusp trap is performed.
§ RESULTS
The results from commissioning of the RGM, the BGT and the accumulator are shown in Table <ref>. The measurements of the individual properties are described in more detail in the following subsections.
§.§ Rare Gas Moderator & Buffer Gas Trap
The activity of the source was [89]MBq at the time the data were taken at CERN.
The moderator efficiency is measured by counting the moderated positrons with a channel electron multiplier (CEM, model: KBL15RS/90-EDR from Dr. Sjuts Optotechnik GmbH) which can be inserted into the beamtube (see Fig. <ref>). The detection efficiency at a bias voltage of [-2.1]kV is estimated, using the log-normal fit function with the parameters given in <cit.>, to be [(40 ± 7)]%. Thus, the measured count rate, 55000 events/s, equates to (1.4 ± 0.3)· 10^5 slow positrons per second, or a moderator efficiency of [(0.2 ± 0.1)]%, which is a factor of two to three smaller than previously reported for this system <cit.>. The efficiency may be slightly higher because we have not accounted for possible annihilations on the 79% transparency, grounded mesh attached to the front of the CEM holder. The electric field produced by the [-2.1]kV bias would, however, tend to focus the positrons in between the mesh-tines, increasing the effective transparency.
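The quoted efficiency follows from simple arithmetic on the rounded numbers given above; the short sketch below (illustrative only) reproduces it.

```python
# Moderator efficiency from the rounded values quoted in the text
count_rate = 55_000          # CEM events per second
det_eff = 0.40               # estimated CEM detection efficiency at -2.1 kV
activity = 89e6              # source activity in Bq at the time of the measurement

slow_positrons = count_rate / det_eff          # ~1.4e5 per second
efficiency = slow_positrons / activity         # ~0.15%, consistent with (0.2 +/- 0.1)%
print(slow_positrons, efficiency * 100)
```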
To calculate the lifetime of positrons in the BGT, it is filled for a variable time and the charge deposited on the MCP is measured with a charge amplifier. The data is fitted using an exponential rise to maximum, N(t) = a·[1-exp(-t/τ)], where a is the maximal number of positrons and τ the lifetime in the trap. The positrons have a lifetime of [(3.9 ± 0.8)]s. Filling the BGT for [1]s gives (21300±2200) positrons, yielding a trapping efficiency of [(15± 4)]%. The left hand side plot in Fig. <ref> shows the measured number of positrons, the exponential rise to maximum fit, and the inset shows the deviation from the linear increase at low fill times.
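A minimal sketch of the lifetime extraction is given below (illustrative; it generates synthetic data with assumed values rather than using the measured points, and assumes SciPy is available).

```python
import numpy as np
from scipy.optimize import curve_fit

def rise_to_max(t, a, tau):
    # N(t) = a * (1 - exp(-t / tau))
    return a * (1.0 - np.exp(-t / tau))

# Synthetic example data (assumed values, for illustration only)
rng = np.random.default_rng(1)
t_fill = np.linspace(0.2, 12.0, 15)                  # fill times in seconds
true_a, true_tau = 8.0e4, 3.9                        # plateau and lifetime
counts = rise_to_max(t_fill, true_a, true_tau) * (1 + 0.03 * rng.normal(size=t_fill.size))

popt, pcov = curve_fit(rise_to_max, t_fill, counts, p0=(5e4, 2.0))
a_fit, tau_fit = popt
print(tau_fit, np.sqrt(np.diag(pcov)))               # lifetime estimate and 1-sigma errors
```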
The energy distribution of the positrons is measured by a retarding field analysis using the E1 electrode of the accumulator and looking at the annihilation signal with a plastic scintillator at the position indicated in Fig. <ref>. The resulting energy spread, i.e. the full width at half maximum of the energy distribution, is [(1.49± 0.01)]eV, which is slightly smaller than the [(1.655±0.063)]eV measured in Aarhus by <cit.>. The size of the positron cloud is measured with a chevron stack MCP and the light produced in the attached phosphor screen is acquired with an Apogee ALTA-U77 CCD camera. The positrons are compressed with a quadrupolar rotating field with a frequency of [6.5]MHz and an amplitude of [0.5]V. The image is fitted by a 2-dimensional Gaussian yielding a radius of [(2.12±0.03)]mm when the magnetic field ratios of the trap and the detector are taken into account. This result is about a factor of two larger than the results in <cit.>. The expansion rate of the positron cloud of [(2.24±0.08)]mm/s is measured by stopping the RW drive and holding the positrons for a variable amount of time prior to extraction.
§.§ Accumulator
The positron bunches are magnetically guided into the accumulator and are caught in a well formed between E1 and C9. Shortly after catching, they are moved to the downstream region and merged with positrons that were previously caught while applying a RW with frequency of [100]kHz and an amplitude of [10]V. The high pressure of [2.0·10^-7]mbar in the accumulator from the gases in the BGT limits the lifetime of the positrons to [(104±22)]s. When the gas flow into the trap is stopped by closing the upstream gate valve (GV1), then the positrons can be held longer: [95]% still remain after ten minutes.
The MCP is mounted on a rotatable feedthrough, such that it can be turned to face either the BGT or the accumulator. For imaging or counting the charge deposited on the MCP, the positrons must be extracted upstream. This is a slightly different scheme than typically used for the transfer into the Cusp trap, which is performed by dumping from the downstream side of the accumulator. To achieve the same conditions for the positrons, they are moved to the same potential as if they were dumped towards the Cusp trap, but on the upstream side of the accumulator. The move has to be performed slowly to avoid positron losses or heating up the particles. By measuring the charge deposited on the MCP the transfer efficiency can be calculated. Due to the finite lifetime of positrons in the accumulator, the linear region of positron number vs. the stack number has been taken to evaluate the transfer efficiency (see inset right hand side of Fig. <ref>). From the slope in the linear region the number of positrons per stack is determined. This divided by the number of positrons delivered by the BGT in [1]s yields the transfer efficiency. The resulting efficiency is [92 ^+8_-11]%, which corresponds to (19600 ^+2700_-3100) positrons per transfer. The right hand side plot in Fig. <ref> shows the number of positrons as a function of number of stacks, the exponential rise to maximum fit, and the inset shows the deviation from a linear increase of the number of positrons for a low number of stacks.
Imaging of the extracted positrons yields a radius of [(1.44±0.05)]mm to [(1.85±0.06)]mm in the trap, depending on the number of stacks ([5-160]stacks), and hence the number density of positrons. The expansion rate is measured using [30]stacks, starting with a positron radius of [(1.63±0.04)]mm, to be [(0.013±0.004)]mm/s if GV1 is closed and [(0.045±0.003)]mm/s if GV1 is open.
§ DISCUSSION
Table <ref> compares the efficiencies of the FPS system at CERN, the FPS system as described by <cit.>, the accumulator, and the system previously used by ASACUSA at CERN <cit.>. The moderator efficiency of the previous system is [(0.25±0.1)]%, which is comparable with the new system. Andersen reported a moderator efficiency of [0.5]%, which would be a factor of two to three better. The reason for the smaller moderation efficiency achieved at CERN is under investigation.
The trapping efficiency of the previous BGT is [(17.4±1.8)]%, which is very similar to the new system being [(15±4)]%. Andersen reported a trapping efficiency of [35]%. The smaller trapping efficiency at CERN may arise from a slight misalignment of the trap within the magnet, which was found by imaging the slow beam and comparing to images of the trapped positrons. It was also found that positrons with a larger radius were apertured only on one side, presumably by scraping the Gate electrode on their way to the detector. The misalignment may be addressed by using large correction coils around the trap magnet to produce a compensating transverse field.
The purpose of the accumulator is primarily to reduce the contamination of the vacuum of the Cusp trap during transfer. Prior to the transfer of the positrons, GV1 is closed, causing the pressure in the accumulator to fall from [2.0·10^-7]mbar to [<1·10^-8]mbar within three seconds. After that, the gate valves GV2 and GV3 between the positron system and the Cusp trap are opened. Previously, multiple bunches of positrons were accumulated in the Cusp trap, repeatedly opening GV2 and GV3 between the high pressure region from the positron trap (order of [10^-6]mbar) and the UHV region of the Cusp trap ( [<10^-12]mbar). Due to this repetitive transfer, and being unable to pump out the buffer gas in advance, the vacuum in the Cusp trap was contaminated, shortening the lifetime of the antiprotons and reducing the time available to produce antihydrogen. In the previous system the lifetime of the positrons in the BGT was [40]s, which could significantly be increased to [(104±22)]s (with cooling gas present) and [>600]s (without cooling gas present) with the new accumulator. This increase in lifetime additionally means that a higher number of positrons can be sent into the Cusp trap per accumulation cycle.
§ CONCLUSIONS
The RGM and the BGT have been installed in the ASACUSA experiment at CERN and recommissioned after the transport from Aarhus University. With a 89 MBq source, this system provides roughly 10^4 positrons per second—comparable to the system it replaced. The positron bunches from the BGT are extracted once per second and accumulated in the newly developed and installed accumulator. Up to [40]bunches can be linearly stacked in the accumulator and held for more than ten minutes. The pressure in the accumulator decreases to <[10^-8]mbar within three seconds after the flow is stopped by closing the upstream gate valve to the BGT. This low pressure prior to the transfer of positrons reduces the contamination of the vacuum in the Cusp trap, from which the ASACUSA experiment suffered previously.
With the current system it is possible to accumulate 1.5· 10^6 positrons in [110]s. To increase the number of positrons from the RGM a new ^22Na source with an activity of [1.89]GBq was ordered. With the presented moderator, trapping and transfer efficiencies, the number of positrons per second is expected to increase by a factor of 20.
The increased lifetime of positrons in the accumulator allows for a longer accumulation time. This increases the number of positrons which can be transferred into the Cusp trap in a single positron transfer cycle. The ASACUSA-Cusp collaboration optimised the antiproton preparation cycle last year, such that each antiproton extraction from CERN every [108]s can be used. The lifetime of [(104±22)]s in the accumulator means that (with the new source) only one transfer cycle should be necessary for antihydrogen production.
To further increase the lifetime in the accumulator during filling, the installation of a pumping restriction downstream of the BGT is under consideration. Future work will also focus on the moderator efficiency, which is currently a factor of two to three lower than previously achieved at Aarhus University. The reason for this reduced efficiency is not yet understood; however, investigations are ongoing. Once the causes of the reduced trapping and moderator efficiencies are found, the upgraded system will not only have been successful in minimising the contamination of the Cusp trap, but will also have increased the efficiency of positron preparation by a factor of four compared to the previous system.
§ ACKNOWLEDGMENTS
This work was supported by the Austrian Science Fund (FWF) Grant Nos. P 32468, W1252-N27, and P 34438; the JSPS KAKENHI Fostering Joint International Research Grant No. B 19KK0075; the Grant-in-Aid for Scientific Research Grant No. B 20H01930; Special Research Projects for Basic Science of RIKEN; Università di Brescia and Istituto Nazionale di Fisica Nucleare; and the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie Grant Agreement No. 721559.
§ DECLARATION OF INTERESTS
The authors report no conflict of interest.
jpp
|
http://arxiv.org/abs/2307.04440v1 | 20230710094116 | Time-Frequency-Space Transmit Design and Signal Processing with Dynamic Subarray for Terahertz Integrated Sensing and Communication | [
"Yongzhi Wu",
"Chong Han"
] | cs.IT | [
"cs.IT",
"eess.SP",
"math.IT"
] |
Time-Frequency-Space Transmit Design and Signal Processing with Dynamic Subarray for Terahertz Integrated Sensing and Communication
Yongzhi Wu, Graduate Student Member, IEEE, and
Chong Han, Member, IEEE
This paper will be presented in part at IEEE SPAWC, September 2023 <cit.>.
Yongzhi Wu is with the Terahertz Wireless Communications (TWC) Laboratory, Shanghai Jiao Tong University, Shanghai, China (Email: [email protected]).
Chong Han is with the Terahertz Wireless Communications (TWC) Laboratory, Department of Electronic Engineering and Cooperative Medianet Innovation Center (CMIC), Shanghai Jiao Tong University, Shanghai, China (Email: [email protected]).
================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Terahertz (THz) integrated sensing and communication (ISAC) enables simultaneous data transmission with Terabit-per-second (Tbps) rate and millimeter-level accurate sensing. To realize such a blueprint, ultra-massive antenna arrays with directional beamforming are used to compensate for severe path loss in the THz band.
In this paper, the time-frequency-space transmit design is investigated for THz ISAC to generate time-varying scanning sensing beams and stable communication beams. Specifically, with the dynamic array-of-subarray (DAoSA) hybrid beamforming architecture and multi-carrier modulation, two ISAC hybrid precoding algorithms are proposed, namely, a vectorization (VEC) based algorithm that outperforms existing ISAC hybrid precoding methods and a low-complexity sensing codebook assisted (SCA) approach. Meanwhile, coupled with the transmit design, parameter estimation algorithms are proposed to realize high-accuracy sensing, including a wideband DAoSA MUSIC (W-DAoSA-MUSIC) method for angle estimation and a sum-DFT-GSS (S-DFT-GSS) approach for range and velocity estimation. Numerical results indicate that the proposed algorithms can realize centi-degree-level angle estimation accuracy and millimeter-level range estimation accuracy, which are one or two orders of magnitudes better than the methods in the millimeter-wave band. In addition, to overcome the cyclic prefix limitation and Doppler effects in the THz band, an inter-symbol interference- and inter-carrier interference-tackled sensing algorithm is developed to refine sensing capabilities for THz ISAC.
Terahertz integrated sensing and communications, ultra-massive MIMO, Orthogonal frequency division multiplexing, hybrid beamforming
§ INTRODUCTION
§.§ Background and Motivations
To address the rapidly growing demand for wireless data rates and the emergence of new application scenarios, the communication community is seeking new spectrum opportunities as well as new functionalities for sixth-generation (6G) and beyond wireless networks <cit.>. Following the former trend of moving up to higher frequencies, the Terahertz (THz) band is viewed as one of the key technologies to enable enormous potential in 6G wireless systems <cit.>. Another promising exploration is to use integrated sensing and communication (ISAC) technology, which can endow wireless networks with sensing capabilities to realize the mapping of the physical world to the digital world <cit.>.
Leveraging the ultra-broad bandwidth and the ultra-massive antenna arrays in the THz band, the integration of these two technologies, i.e., Terahertz integrated sensing and communication (THz ISAC) <cit.>, can achieve ultra-accurate sensing and Terabit-per-second data rates simultaneously.
Despite the promising vision of THz ISAC, critical challenges arise when designing THz ISAC transmit signal. First, there exists severe path loss in the THz band, which includes free path loss, reflection, and scattering losses. These losses strictly limit the maximum sensing and communication distance, and degrade sensing accuracy and data rate.
Second, with the power constraints, to compensate for such severe path loss, ultra-massive multiple-input multiple-output (UM-MIMO) antenna arrays with beamforming are used to generate highly directional beams <cit.>. Thus, energy-efficient and low-complexity beamforming algorithms need to be developed.
Third, the generation of directional beams restricts the angular coverage of sensing. In general, communication prefers stable beams toward users to enable tractable data detection, while sensing requires sweeping beams to scan possible targets in the surrounding environment <cit.>. To realize omnidirectional sensing with directional beams, effective and efficient narrowbeam management schemes, including transmit design in the time-frequency domain and beamforming design in the spatial domain are demanded to realize simultaneous sensing and communication for THz ISAC systems.
Meanwhile, the receive processing encounters significant challenges, especially for sensing parameter estimation algorithms in THz UM-MIMO systems, which are affected by the beamforming architectures and peculiarities of THz channels. First, the sensing algorithm for range and velocity estimation needs to be redesigned, since an additional dimension (namely, spatial domain) is introduced in the received signal model when using the ultra-large dimensional antenna arrays in the THz band.
Second, with high channel sparsity due to strong power loss of non-line-of-sight (NLoS) paths, the delay spread of the THz communication channel is reduced <cit.>. In this case, to utilize broad bandwidth with a fixed subcarrier number, we can increase subcarrier spacing, which is inversely proportional to the symbol duration. Thus, the symbol duration and cyclic prefix (CP) length are reduced in classical multi-carrier communication systems, such as orthogonal frequency-division multiplexing (OFDM). Nevertheless, the round-trip delay of sensing targets should be smaller than the CP duration with classical OFDM sensing algorithms <cit.>. For communication waveforms with reduced CP, there might exist inter-symbol interference (ISI) effects on the received sensing signal, which cause existing sensing methods inapplicable.
Third, as the Doppler shifts are proportional to the carrier frequency, the Doppler effects become even stricter in the THz band. If maintaining current waveform numerology of 5G wireless systems, Doppler effects in the presence of high-mobility targets may cause inter-carrier interference (ICI) effects and severely degrade sensing capabilities. Thus, to tackle these challenges, signal processing design in terms of sensing algorithms is vital to realize high-accuracy sensing, while data recovery has been well investigated <cit.>.
§.§ Related Works
§.§.§ Waveform Design
By jointly designing the ISAC transmit signal, sensing and communication can share the hardware and signal processing modules. From the perspective of the time-frequency domain, various ISAC waveforms have been investigated in the literature. As adopted in 4G and 5G standards, CP-OFDM is a promising candidate for ISAC although being a communication-centric design <cit.>. Since an OFDM waveform suffers from a high PAPR issue, especially in uplink transmission, some single-carrier waveforms, such as discrete Fourier transform (DFT) spread OFDM (DFT-s-OFDM), are investigated for THz ISAC systems, due to their low PAPR compared to OFDM <cit.>. Recently, orthogonal time frequency space (OTFS) has been studied in ISAC applications <cit.>, thanks to its advantages under doubly-selective channels in high-mobility scenarios. Furthermore, a DFT spread OTFS (DFT-s-OTFS) waveform is proposed in <cit.> to reduce the PAPR of OTFS for THz ISAC. However, the high complexity of data detection for MIMO-OTFS constitutes a serious problem. Despite the PAPR issue, OFDM is still a potential waveform in the THz band, since it has good compatibility with UM-MIMO and enables flexible time-frequency domain resource allocation among multiple users <cit.>. Thus, wideband UM-MIMO systems with multi-carrier modulations are investigated for THz communications in many recent works, including beamforming design <cit.>, channel estimation <cit.>, multiple access <cit.>, carrier aggregation <cit.>. Nevertheless, there is a lack of research on THz ISAC in this regard, especially focusing on the transmit design and sensing algorithms in the time-frequency-space domain.
§.§.§ Beamforming Design
Pertaining to MIMO-OFDM systems, with conventional fully-digital and analog beamforming architectures, multi-target estimation can be realized by utilizing opportunistic sensing <cit.> and multibeam optimization <cit.>.
Nevertheless, the fully-digital structure exhibits high hardware complexity and power consumption for THz ISAC systems with large-dimensional antenna arrays, while the analog beamforming architecture can only support one data stream with limited spatial multiplexing gain <cit.>.
As a combined approach, hybrid beamforming can realize comparable data rates with the fully-digital structure and exhibits less hardware complexity. Based on the full-connected (FC) hybrid beamforming architecture, authors in <cit.> propose a consensus-ADMM approach to design the analog and digital beamformers by jointly optimizing the spectral efficiency (SE) and spatial spectrum matching error of sensing. With the array-of-subarray (AoSA) structure, which further reduces the number of phase shifters and power consumption at the cost of sacrificing data rate, the ISAC hybrid beamformers can be designed by optimizing the Cramér-Rao bound <cit.> or minimizing the weighted Euclidean distance between the hybrid precoding matrix and the fully digital beamforming matrix <cit.>. To balance SE and power consumption, a dynamic array-of-subarray (DAoSA) hybrid precoding architecture is proposed in <cit.>, while the ISAC hybrid precoding design with dynamic subarray has not been investigated yet.
In addition, most of the aforementioned works design beamformers with some prior knowledge of target angles <cit.>, which is acceptable in target tracking scenarios but not available in general target estimation, i.e., target discovery mode. Thus, beam scanning-based sensing to discover targets with narrow beams in the THz band is still a significant issue to be addressed.
§.§ Contributions and Paper Structure
The contributions of this work are summarized as follows:
* We present a time-frequency-space transmit design framework for THz ISAC systems by considering a dynamic subarray hybrid beamforming architecture and multi-carrier waveform. In this framework, we develop a vectorization (VEC) based and a sensing codebook-assisted (SCA) ISAC hybrid precoding algorithms for the DAoSA structure. Our proposed ISAC hybrid precoding algorithms can realize the entire angular directions of sensing and data transmission by generating scanning sensing beams at different time slots and stable communication beams toward the user. Meanwhile, the proposed VEC algorithm outperforms existing ISAC hybrid precoding methods, and the SCA approach reduces the computational complexity.
* Based on the time-frequency-space domain transmit signal design, we propose parameter estimation algorithms at the sensing receiver, including a wideband DAoSA MUSIC (W-DAoSA-MUSIC) algorithm for angle estimation and a sum-DFT and golden section search (S-DFT-GSS) method for range and velocity estimation. Simulation results indicate that the proposed algorithms achieve centi-degree-level accuracy for angle estimation, millimeter-level accuracy for range estimation, and decimeter-per-second-level accuracy for velocity estimation.
* We further propose an ISI- and ICI-tackled sensing algorithm to overcome the CP limitation on the maximum sensing distance and estimation error caused by high-mobility targets. While the ICI is studied in <cit.>, the ISI effects have not been considered in the literature. Compared to the ISI-unaware estimation, the ISI-tackled sensing algorithm can accurately estimate the target with a round-trip delay larger than the CP duration. In contrast with ICI-unaware estimation, the ICI-tackled algorithm can overcome the masking problem of weak targets caused by the side lobes of the strong target in the presence of ICI effects.
The structure of the remainder of this paper is organized as follows. The system framework with the time-frequency-space transmit design for THz ISAC is presented in Sec. <ref>. The ISAC hybrid precoding algorithms are elaborated in Sec. <ref>. The sensing estimation algorithm design with the DAoSA architecture and multi-carrier modulation is proposed in Sec. <ref>. The ISI- and ICI- tackled sensing algorithm for THz ISAC is developed in Sec. <ref>. Sec. <ref> illustrates extensive simulation results. Finally, the paper is concluded in Sec. <ref>.
Notations: ℂ denotes the set of complex numbers; 𝐀(i, j) is the entry on the ith row and jth column of 𝐀; 𝔼{·} defines the expectation operation; the superscripts (·)^T and (·)^H stand for the transpose and Hermitian transpose operations; the notations ⊗ and ⊙ refer to the Kronecker product and Hadamard product, respectively; det(·) and ‖·‖_F denote the determinant and Frobenius norm of a matrix; (·)^† indicates the Moore-Penrose pseudo inverse; vec(·) represents the vectorization operation.
§ SYSTEM FRAMEWORK
As shown in Fig. <ref>, we propose a THz ISAC system framework based on a wideband UM-MIMO architecture, in which the ISAC transceiver simultaneously senses potential targets in the surrounding spatial environment and sends information symbols to one communication receiver (without loss of generality) via the designed transmit signal in the time-frequency-space domain. Specifically, in the time-frequency domain, the data signal is modulated with orthogonal frequency-division multiplexing (OFDM) and spread across M subcarriers. In the spatial domain, the data streams at each subcarrier are precoded through a digital precoder 𝐅_BB∈ℂ^N_RF^t× N_s and an analog precoder 𝐅_RF∈ℂ^N_t × N_RF^t, where N_s denotes the number of data streams and N_RF^t refers to the number of transmit RF chains, with N_s ⩽ N_RF^t ≪ N_t.
As for the transceiver structure, the ISAC transceiver is equipped with an N_t-element transmit uniform planar array (UPA) to transmit the ISAC waveform and an N_r-element receive UPA to perform sensing echo processing. The communication receiver has an N_r-element UPA to accomplish signal reception and data detection. The transmit antenna arrays adopt a DAoSA hybrid beamforming structure <cit.>. With the DAoSA structure, the transmit antennas are divided into N_RF^t subarrays and each RF chain connects to each subarray with K_t = N_t / N_RF^t elements through a switch. Similarly, the received signal is combined through the analog combiner and the digital combiner with N_RF^r RF chains, and each receiver subarray contains K_r = N_r / N_RF^r elements.
§.§ Time-Frequency-Space Transmit Design
At the transmitter side, the ISAC system maps the transmitted bit streams to a large amount of data frames. A data frame is divided into Q time slots, each of which contains M × N data symbols, where M and N stand for the numbers of subcarriers and symbols during a time slot. In the multi-carrier hybrid beamforming architecture, at the qth time slot, the data symbols 𝐬_q[m, n] ∈ℂ^N_s× 1, q = 1, 2, ⋯, Q, m = 0, 1, ⋯, M - 1, n = 0, 1, ⋯, N - 1, which are generated from N_s data streams with 𝔼{𝐬_q[m, n] 𝐬^H_q[m, n]} = 1/N_s𝐈_N_s, are first precoded by a digital beamformer 𝐅_BB, q[m] and mapped to the mth subcarrier in the frequency domain, 𝐱_q[m, n] = 𝐅_BB, q[m] 𝐬_q[m, n]. Then, we perform the inverse discrete Fourier transform (IDFT) to transform the frequency-domain data blocks to the time-domain signal and add one cyclic prefix (CP) for each symbol before conducting up-conversion and analog beamforming 𝐅_RF, q∈ℂ^N_t× N_RF^t.
At the qth time slot, the proposed THz ISAC system with the time-frequency-space three-dimensional transmit design generates scanning beams toward the qth sensing direction and stable beams toward the communication user.
Note that all subcarriers share the same analog precoder while the digital precoder is performed for each subcarrier.
For the nth symbol during the qth time slot, the transmit time-domain signal can be expressed as,
𝐱̃_q, n (t) = ∑_m=0^M-1𝐅_RF, q𝐅_BB, q[m] 𝐬_q[m, n] e^j2π mΔ f t,
where t denotes the time instant and Δ f refers to the subcarrier spacing. The symbol duration is then T = 1/Δ f, and the total symbol duration is T_o = T + T_cp with the CP duration T_cp = (M_cp/M) T, where M_cp is the CP size. Thus, the duration of a time slot is T_s = N T_o and the frame duration is T_f = Q T_s. To generate stable beams toward the communication user and scanning beams for searching sensing targets, the transmit beamformers are fixed within a time slot and vary across time slots.
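As a quick numerical sanity check of these timing relations, the following sketch computes the symbol, slot, and frame durations; the parameter values are placeholders for illustration only, not the configuration used in the simulations later.

```python
# Sketch of the OFDM/ISAC frame timing relations (illustrative values only).
delta_f = 1.92e6        # subcarrier spacing in Hz (placeholder)
M, M_cp = 64, 16        # number of subcarriers and CP size (placeholders)
N, Q = 14, 64           # symbols per time slot, time slots per frame (placeholders)

T = 1.0 / delta_f       # symbol duration
T_cp = (M_cp / M) * T   # cyclic-prefix duration
T_o = T + T_cp          # total symbol duration
T_s = N * T_o           # time-slot duration
T_f = Q * T_s           # frame duration

print(f"T = {T*1e6:.3f} us, T_o = {T_o*1e6:.3f} us, "
      f"T_s = {T_s*1e6:.2f} us, T_f = {T_f*1e3:.3f} ms")
```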
In this work, we consider a DAoSA hybrid beamforming architecture <cit.>, in which the connections between RF chains and subarrays can be intelligently adjusted through a network of switches. The analog precoding matrix 𝐅_RF, q can be written as,
𝐅_RF, q = 𝐅_P, q⊙𝐏_S,
where 𝐅_P, q∈ℂ^N_t× N_RF^t denotes the phase shifter network matrix and 𝐏_S∈{0, 1}^N_t × N_RF^t describes the binary switch network matrix, which can be expressed as
𝐏_S=[[ 𝐩_1,1 𝐩_1,2 … 𝐩_1, N_RF^t; 𝐩_2,1 𝐩_2,2 … 𝐩_2, N_RF^t; ⋮ ⋮ ⋱ ⋮; 𝐩_N_RF^t, 1 𝐩_N_RF^t, 2 … 𝐩_N_RF^t, N_RF^t ]],
where 𝐩_i, j stands for the status of the switch between the ith subarray and the jth RF chain. If this switch is closed, 𝐩_i, j = 1_K_t is an all-one vector. Conversely, 𝐩_i, j = 0_K_t is a zero vector. The phase shifter network matrix 𝐅_P, q satisfies a
constant modulus constraint, i.e., the modulus of its elements is 1. Then, the analog precoding matrix 𝐅_RF, q is given by
𝐅_RF, q=[[ 𝐟_1,1 𝐟_1,2 … 𝐟_1, N_RF^t; 𝐟_2,1 𝐟_2,2 … 𝐟_2, N_RF^t; ⋮ ⋮ ⋱ ⋮; 𝐟_N_RF^t, 1 𝐟_N_RF^t, 2 … 𝐟_N_RF^t, N_RF^t ]],
where 𝐟_i, j∈ℂ^K_t × 1 represents the joint precoding vector of the switch and the phase shifters between the ith subarray and the jth RF chain. When this switch is closed, 𝐟_i, j should satisfy the unit modulus constraint. When the switch is open, 𝐟_i, j is a zero vector. We denote the feasible set of the analog precoder 𝐅_RF, q as ℱ. Moreover, the normalized transmit power constraint is expressed as ‖𝐅_RF, q𝐅_BB, q[m]‖_F^2 = N_s.
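To make the block structure of the DAoSA analog precoder concrete, the following NumPy sketch assembles 𝐅_RF = 𝐅_P ⊙ 𝐏_S from a phase-shifter matrix and a block-expanded switch matrix. The function name, random phase initialization, and example dimensions are illustrative assumptions rather than the configuration used in this paper.

```python
import numpy as np

def daosa_analog_precoder(N_t, N_rf, switch, phases=None, rng=None):
    """Assemble F_RF = F_P (Hadamard) P_S for a dynamic array-of-subarray.

    switch: (N_rf, N_rf) binary matrix; switch[i, j] = 1 closes the connection
            between subarray i and RF chain j.
    """
    rng = np.random.default_rng() if rng is None else rng
    K_t = N_t // N_rf                                   # elements per subarray
    if phases is None:
        phases = rng.uniform(0, 2 * np.pi, (N_t, N_rf)) # phase-shifter settings
    F_P = np.exp(1j * phases)                           # unit-modulus entries
    P_S = np.kron(switch, np.ones((K_t, 1)))            # expand switches to K_t blocks
    return F_P * P_S                                    # Hadamard product

# Example: 32 antennas, 4 RF chains, only the "diagonal" switches closed (AoSA case).
F_RF = daosa_analog_precoder(N_t=32, N_rf=4, switch=np.eye(4))
print(F_RF.shape)   # (32, 4)
```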
§.§ Communication Model
With multi-carrier transmission, the communication received signal of the mth subcarrier and the nth symbol at qth time slot after the decoding process is expressed as
𝐫_q[m, n] = √(ρ)𝐂_BB^H[m] 𝐂_RF^H 𝐇_c[m] 𝐅_RF, q𝐅_BB, q[m] 𝐬_q[m, n]
+ 𝐂_BB^H[m] 𝐂_RF^H 𝐧_q[m, n],
where ρ describes the average received power, 𝐂_BB[m]∈ℂ^N_RF^r× N_s is the digital combining matrix, 𝐂_RF∈ℂ^N_r × N_RF^r is the analog combining matrix, and 𝐧_q[m, n] refers to the additive white Gaussian noise with independent and identically distribution 𝒞𝒩(0, σ_n^2). In the THz band, the channel is sparse and dominated by the line-of-sight (LoS) path and several reflected rays. Thus, as a benchmark, the multi-path channel model based on ray-tracing methods of the channel matrix 𝐇_c[m] at the mth subcarrier can be given by <cit.>,
𝐇_c[m] = γα_L[m] 𝐚_r(θ_L^r, ϕ_L^r) 𝐚_t^H(θ_L^t, ϕ_L^t)
+ γ∑_l=1^L_Nα_N, l[m] 𝐚_r(θ_N, l^r, ϕ_N, l^r) 𝐚_t^H(θ_N, l^t, ϕ_N, l^t),
where γ = √(N_t N_r/L_N + 1) and L_N represents the number of non-line-of-sight (NLoS) paths. Moreover, α_L[m] and α_N, l[m] denote the channel gain of the LoS path and lth NLoS path at mth subcarrier, respectively. In addition, θ^r(θ^t) and ϕ^r(ϕ^t) refer to the azimuth and elevation angles of arrival/departure (AoAs/AoDs). In the case of a UPA in the yz-plane with W and L elements on the y and z axes respectively, the array response vector can be expressed by,
𝐚(θ, ϕ) = 𝐚_z(ϕ) ⊗𝐚_y(θ, ϕ),
where
𝐚_y(θ, ϕ) = 1/√(W) [1, ⋯, e^jπ (W - 1) sin(θ) sin(ϕ)]^T,
𝐚_z(ϕ) = 1/√(L) [1, ⋯, e^jπ (L - 1) cos(ϕ)]^T,
and θ stands for the azimuth angle, and ϕ refers to the elevation angle.
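A minimal sketch of this UPA response is given below; half-wavelength element spacing is assumed, and the function name is ours.

```python
import numpy as np

def upa_steering(W, L, theta, phi):
    """UPA response a(theta, phi) = a_z(phi) kron a_y(theta, phi) for a W x L
    yz-plane array with half-wavelength spacing (angles in radians)."""
    w = np.arange(W)
    l = np.arange(L)
    a_y = np.exp(1j * np.pi * w * np.sin(theta) * np.sin(phi)) / np.sqrt(W)
    a_z = np.exp(1j * np.pi * l * np.cos(phi)) / np.sqrt(L)
    return np.kron(a_z, a_y)          # length W*L, unit norm

a = upa_steering(W=8, L=4, theta=np.deg2rad(30), phi=np.deg2rad(90))
print(np.linalg.norm(a))              # ~1.0
```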
For THz communications, we need to design hybrid precoders to maximize spectral efficiency. The achievable spectral efficiency can be expressed as <cit.>
R_q = 1/M∑_m=0^M-1log(𝐈_N_s + ρ/N_s𝐑_n^-1𝐂_BB^H[m] 𝐂_RF^H 𝐇_c[m]
×𝐅_RF, q𝐅_BB, q[m] 𝐅_BB, q^H[m] 𝐅_RF, q^H 𝐇_c^H[m] 𝐂_RF𝐂_BB[m]),
where 𝐑_n = σ_n^2 𝐂_BB^H[m] 𝐂_RF^H 𝐂_RF𝐂_BB[m] is the noise covariance matrix. Maximizing R_q at the transmitter side is approximately equivalent to minimizing the Euclidean distance between the optimal fully digital precoder 𝐅_c[m] and the hybrid precoder, 1/M∑_m=0^M-1‖𝐅_c[m] - 𝐅_RF, q𝐅_BB, q[m]‖_F^2. Generally, the channel state information (CSI) can be acquired at both transmitter and receiver through channel estimation <cit.> and is assumed to be time-invariant over a frame duration. Then, from the singular value decomposition (SVD) of the channel 𝐇_c[m], the unconstrained optimal precoder 𝐅_c[m] and decoder 𝐂_c[m] consist of the first N_s columns of the right and left singular vector matrices, respectively.
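The following sketch extracts the unconstrained optimal precoder and decoder from the SVD of a single-subcarrier channel; the random channel here merely stands in for the ray-traced model above and is only meant to illustrate the dimensions involved.

```python
import numpy as np

def optimal_fully_digital(H_c, N_s):
    """Unconstrained precoder/combiner from the SVD of one subcarrier channel.

    H_c: (N_r, N_t) channel matrix, N_s: number of streams.
    Returns F_c (N_t, N_s) and C_c (N_r, N_s)."""
    U, S, Vh = np.linalg.svd(H_c)
    F_c = Vh.conj().T[:, :N_s]   # first N_s right singular vectors
    C_c = U[:, :N_s]             # first N_s left singular vectors
    return F_c, C_c

# Toy example with a random channel (placeholder for the multipath model above).
rng = np.random.default_rng(0)
H = (rng.standard_normal((16, 32)) + 1j * rng.standard_normal((16, 32))) / np.sqrt(2)
F_c, C_c = optimal_fully_digital(H, N_s=2)
print(F_c.shape, C_c.shape)      # (32, 2) (16, 2)
```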
§.§ Sensing Model
In the THz band, directional beams are used to compensate for severe path loss and improve received sensing signal power, which limits the angular range of sensing targets. To realize entire-space sensing, we design a codebook-based beam-scanning scheme for THz sensing.
For the azimuth angle, the whole sensing angular domain is divided into Q scanning directions, ω = [ω_1, ω_2, ⋯, ω_Q]^T, each of which corresponds to a time slot. We can set Q = W and design the sensing beamforming vector as the qth column from a discrete Fourier transform (DFT) codebook, by which the transmitter can generate W orthogonal beamforming vectors and steer signals towards W independent sensing directions. Thus, the sensing codebook can be written as,
𝐀 = 𝐚_z(ϕ) ⊗ [𝐚_y,1(ω_1, ϕ), ⋯, 𝐚_y, W(ω_Q, ϕ)]
where
𝐚_y, q(ω_q, ϕ) = 1/√(W) [1, ⋯, e^jπ (W - 1)sin(ω_q)sin(ϕ)]^T,
and sin(ω_q) = -1 + 1/W + (q -1) 2/W for q = 1, 2, ⋯, W. In this case, the sensing angular window Ω_q at the qth time slot contains angles from arcsin(-1+(q-1)2/W) to arcsin(-1+q2/W).
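The DFT sensing codebook described above can be generated as in the sketch below; the helper name and the example array size are assumptions, and the orthogonality check only holds exactly for the broadside elevation ϕ = 90°.

```python
import numpy as np

def dft_sensing_codebook(W, L, phi):
    """Columns q = 1..W of the sensing codebook A = a_z(phi) kron a_y(omega_q, phi),
    with sin(omega_q) = -1 + 1/W + (q - 1) * 2/W (elevation phi fixed)."""
    sin_omega = -1 + 1 / W + np.arange(W) * 2 / W          # W scan directions
    w = np.arange(W)[:, None]
    A_y = np.exp(1j * np.pi * w * sin_omega[None, :] * np.sin(phi)) / np.sqrt(W)
    a_z = np.exp(1j * np.pi * np.arange(L) * np.cos(phi)) / np.sqrt(L)
    return np.kron(a_z[:, None], A_y)                      # (W*L, W) codebook

A = dft_sensing_codebook(W=16, L=4, phi=np.deg2rad(90))
# The W beams are mutually orthogonal (up to numerical precision) at phi = 90 deg:
print(np.allclose(A.conj().T @ A, np.eye(16), atol=1e-10))
```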
At the sensing receiver, the frequency domain received signal of the mth subcarrier and the nth symbol at qth time slot is denoted as 𝐲_q[m, n]∈ℂ^N^r_RF× 1, which is given by
𝐲_q[m, n] = 𝐖_RF, q^H 𝐇_s[m, n] 𝐅_RF, q𝐅_BB, q[m] 𝐬_q[m, n]
+ 𝐖_RF, q^H 𝐞_q[m, n]
where 𝐖_RF, q∈ℂ^N_r× N_RF^r denotes the combing matrix at the sensing receiver and 𝐞_q[m, n] represents the AWGN vector.
At the ISAC transceiver side, the sensing receiver is collocated with the transmitter. Based on the OFDM radar sensing channel <cit.> and MIMO channel models <cit.>, the sensing channel matrix 𝐇_s[m, n] is expressed as,
𝐇_s[m, n] = √(N_t N_r/P)∑_p=1^P h_p e^-j2π m Δ f τ_p e^j2π ((q - 1) T_s + n T_o) ν_p
×𝐚_r(θ_p, ϕ_p) 𝐚_t^T(θ_p, ϕ_p),
where P stands for the number of sensing targets, each of which corresponds to one back-reflected path with complex channel coefficient h_p. For the pth target, the delay τ_p and the Doppler shift ν_p are calculated by τ_p = 2 r_p/c_0 (τ_p ⩽ T_cp) and ν_p = 2 f_c v_p/c_0 (ν_p ≪Δ f), where r_p and v_p refer to the range and relative velocity of the p targets, respectively. c_0 denotes the speed of light and f_c describes the carrier frequency. Moreover, θ_p and ϕ_p represent the azimuth and elevation angle-of-arrival of the pth target.
Beamforming design for sensing aims at achieving the highest beamforming gain toward the sensing direction. Thus, at the qth time slot, the optimal sensing precoder 𝐅_s, q∈ℂ^N_t × N_s can be generated from the qth column of the sensing codebook, namely, 𝐅_s, q = 1/√(N_t)𝐀(:, q) 1_N_s^T with a normalization factor of 1/√(N_t). Then, we need to minimize the Euclidean distance 1/M∑_m=0^M-1‖𝐅_s, q - 𝐅_RF, q𝐅_BB, q[m]‖_F^2. At the sensing receiver side, 𝐖_RF, q is fixed during a time slot and the receive sensing beams point to N_RF^r random directions within Ω_q at the qth time slot.
§.§ Problem Formulation
At the THz ISAC transmitter, we need to design the analog and digital beamformers to simultaneously realize a communication link with ultra-fast data rates and provide a desired beampattern for high-accuracy sensing of surrounding targets.
Different from the conventional hybrid precoding design problem for communication, the optimal ISAC hybrid precoders should be sufficiently “close" to the time-invariant and frequency-dependent optimal communication precoder and the time-varying and frequency-independent optimal sensing precoder at the same time.
Based on the above models and analysis, we can formulate the following multi-objective optimization problem,
min_𝐅_RF, q, 𝐅_BB, q[m] 1/M∑_m=0^M-1‖𝐅_c[m] - 𝐅_RF, q𝐅_BB, q[m]‖_F^2,
1/M∑_m=0^M-1‖𝐅_s, q - 𝐅_RF, q𝐅_BB, q[m]‖_F^2
s.t. 𝐅_RF, q∈ℱ,
‖𝐅_RF, q𝐅_BB, q[m]‖_F^2 = N_s,
m = 0, 1, ⋯, M - 1,
for q = 1, 2, ⋯, Q.
Since this problem has multiple objective functions and the constraints are non-convex, it is rather difficult to obtain the global optimal solution. In the next section, we propose two algorithms for the THz ISAC hybrid precoding optimization problem to yield near-optimal solutions.
§ HYBRID PRECODING DESIGN FOR THZ ISAC
For the multi-objective ISAC hybrid precoding problem, we can introduce a weighting factor η (0 ≤η≤ 1), which provides the tradeoff between sensing and communication. Then, the hybrid precoding problem (<ref>) can be formulated as,
min_𝐅_RF, q, 𝐅_BB, q[m] 1/M∑_m=0^M-1(η‖𝐅_c[m] - 𝐅_RF, q𝐅_BB, q[m]‖_F^2 +
(1 - η) ‖𝐅_s, q - 𝐅_RF, q𝐅_BB, q[m]‖_F^2 )
s.t. 𝐅_RF, q∈ℱ,
‖𝐅_RF, q𝐅_BB, q[m]‖_F^2 = N_s,
m = 0, 1, ⋯, M - 1.
where η = 0 or η = 1 stands for either sensing-only or communication-only hybrid beamforming design problem. Without loss of generality, we can consider solving the hybrid precoding problem at different time slots separately. Then, a common approach is to use alternating minimization techniques <cit.>, i.e., alternately solving for 𝐅_RF, q and 𝐅_BB, q[m]. Hereby, with the irregular structure of the DAoSA analog precoder, we propose an ISAC hybrid precoding algorithm by modifying the vectorization-based (VEC) algorithm that was used for THz communications in <cit.>.
§.§ VEC-based ISAC Hybrid Precoding Algorithm
§.§.§ Digital Precoding Design
When fixing the analog precoder, we can impose an orthogonal constraint that 𝐅_BB, q[m] is unitary to mitigate the interference among data streams. Then, the problem (<ref>) can be transferred to,
min_𝐅_BB, q[m] 1/M∑_m=0^M-1‖𝐆_q[m] - 𝐁_q 𝐅_BB, q[m]‖_F^2
s.t. 𝐅_RF, q∈ℱ,
𝐅_BB, q^H[m]𝐅_BB, q[m] = 𝐈_N_s,
m = 0, 1, ⋯, M - 1.
where
𝐆_q[m] = [√(η)𝐅_c^T[m], √(1 - η)𝐅_s, q^T ]^T,
𝐁_q = [√(η)𝐅_RF, q^T, √(1 - η)𝐅_RF, q^T ]^T.
Similar to the solution of the so-called Orthogonal Procrustes problem (OPP) <cit.>, the solution to (<ref>) is given by,
𝐅_BB, q[m] = 𝐕_1 𝐔^H,
where 𝐆_q^H[m] 𝐁_q = 𝐔Σ𝐕^H is the SVD of 𝐆_q^H[m] 𝐁_q, and 𝐕_1 is the first N_s columns of 𝐕.
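A compact sketch of this digital-precoder update is shown below; it assumes the stacked matrices 𝐆_q[m] and 𝐁_q are built exactly as defined above, and the variable names are ours.

```python
import numpy as np

def isac_digital_precoder(F_c_m, F_s_q, F_RF_q, eta):
    """Unitary digital precoder from the Procrustes-type solution F_BB = V_1 U^H.

    F_c_m: optimal communication precoder at subcarrier m (N_t x N_s)
    F_s_q: optimal sensing precoder at time slot q        (N_t x N_s)
    F_RF_q: current analog precoder                       (N_t x N_rf)
    """
    N_s = F_c_m.shape[1]
    G = np.vstack([np.sqrt(eta) * F_c_m, np.sqrt(1 - eta) * F_s_q])      # (2N_t, N_s)
    B = np.vstack([np.sqrt(eta) * F_RF_q, np.sqrt(1 - eta) * F_RF_q])    # (2N_t, N_rf)
    U, _, Vh = np.linalg.svd(G.conj().T @ B)     # SVD of G^H B  (N_s x N_rf)
    V1 = Vh.conj().T[:, :N_s]                    # first N_s columns of V
    return V1 @ U.conj().T                       # (N_rf, N_s), semi-unitary
```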
§.§.§ Analog Precoding Design
When fixing the digital precoder, we carry the vectorization process and the analog precoding design problem can be formulated as,
min_𝐅_RF, q 1/M∑_m=0^M-1(η‖vec(𝐅_c[m]) - vec(𝐅_RF, q𝐅_BB, q[m])‖_2^2 +
(1 - η) ‖vec(𝐅_s, q) - vec(𝐅_RF, q𝐅_BB, q[m])‖_2^2 ).
After removing the zero elements in vec(𝐅_RF, q), we need to solve its non-zero part 𝐟_eff∈ℂ^N_c K_t × 1, where N_c denotes the number of closed switches. This is a phase rotation problem, whose solution is given by
𝐟_eff = exp(j {∑_m=0^M-1𝐃^H vec(η𝐅_c[m] 𝐅_BB, q^H[m]
+ (1 - η) 𝐅_s, q𝐅_BB, q^H[m]) }),
where 𝐃 equals to 𝐈_N_t N_RF^t with d_1th, ⋯, d_N_t N_RF^t - N_c K_tth columns punctured, which correspond to the indices of zero elements in vec(𝐅_RF, q). Based on 𝐟_eff, the effective analog precoder 𝐅_RF, q can be recovered. With (<ref>) and (<ref>), we can alternatively calculate 𝐅_BB, q[m] and 𝐅_RF, q until convergence. After that, we finally update the digital precoders as
𝐅_BB, q[m] = √(N_s)/‖𝐅_RF, q𝐅_RF, q^†𝐆_q[m]‖_F 𝐅_RF, q^†𝐆_q[m].
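The analog update can be written compactly by taking the element-wise phase of the accumulated residual and masking it with the switch matrix, which is equivalent to the vectorized solution above after the zero entries are removed. The sketch below assumes the per-subcarrier matrices are available as Python lists; names are ours.

```python
import numpy as np

def vec_analog_update(F_c_list, F_s_q, F_BB_list, P_S, eta):
    """Phase-rotation update of the DAoSA analog precoder (one VEC iteration).

    F_c_list / F_BB_list: per-subcarrier F_c[m] and F_BB[m] matrices
    P_S: (N_t, N_rf) binary block switch matrix (1 where a switch is closed)."""
    acc = np.zeros_like(P_S, dtype=complex)
    for F_c_m, F_BB_m in zip(F_c_list, F_BB_list):
        # accumulate eta * F_c[m] F_BB^H[m] + (1 - eta) * F_s,q F_BB^H[m]
        acc += eta * F_c_m @ F_BB_m.conj().T + (1 - eta) * F_s_q @ F_BB_m.conj().T
    # element-wise phase on closed-switch entries, zeros elsewhere
    return np.exp(1j * np.angle(acc)) * P_S
```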
While the VEC algorithm provides a satisfactory solution, it requires a number of iterations in each time slot. Nevertheless, the optimal communication precoder 𝐅_c[m] remains the same at different time slots during a frame duration, while only the optimal sensing precoder 𝐅_s, q changes. Motivated by this, we can calculate the initial solutions of analog and digital precoders from 𝐅_c[m] and then update the analog precoders only once at each time slot based on the sensing codebook. Thus, we further propose the following low-complexity sensing codebook-assisted (SCA) ISAC hybrid precoding algorithm.
§.§ Low-Complexity SCA Algorithm
Instead of using the weighted objective function in (<ref>), we can define a weighted ISAC precoder 𝐅_q[m] = β (√(η)𝐅_c[m] + √(1 - η)𝐅_s, q) with the normalization factor β = √(N_s) / ‖√(η)𝐅_c[m] + √(1 - η)𝐅_s, q‖_F. Before designing the ISAC analog and digital precoders, we first obtain the analog precoder solution of the communication-only hybrid precoding design problem,
𝐅_RF = arg min_𝐅_RF, 𝐅_BB[m] 1/M∑_m=0^M-1‖𝐅_c[m] - 𝐅_RF𝐅_BB[m]‖_F^2
s.t. 𝐅_RF∈ℱ,
‖𝐅_RF𝐅_BB[m]‖_F^2 = N_s,
m = 0, 1, ⋯, M - 1,
which can be directly solved by the VEC algorithm.
Based on the initial analog precoder 𝐅_RF, we can update the analog precoder 𝐅_RF, q at the qth time slot with the desired sensing beamforming vector 𝐀(:, q). Specifically, we calculate the error between the analog precoding vectors of the phase shifters with closed switches and corresponding columns of 𝐀(:, q) as,
E_i, j = ‖𝐀((i-1)K_t+1:iK_t, q) - 𝐅_RF((i-1)K_t+1:iK_t, j)‖_2,
for all (i, j) satisfying 𝐩_i,j = 1_K_t. Then, we find the K_s smallest values of E_i, j, with indices {(i_1, j_1), ⋯, (i_K_s, j_K_s)}, where K_s = ⌈ N_c (1-η)⌉ denotes the number of subarray beamforming vectors that need to be updated. Next, we set the designed analog precoder to 𝐅_RF, q = 𝐅_RF and update it as,
𝐅_RF, q((i_k-1)K_t+1:i_k K_t, j_k) = 𝐀((i_k-1)K_t+1:i_k K_t, q)
for k = 1, ⋯, K_s. The digital precoders are calculated as
𝐅_BB, q[m] = √(N_s)/‖𝐅_RF, q𝐅_RF, q^†𝐅_q[m]‖_F 𝐅_RF, q^†𝐅_q[m].
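A sketch of the SCA subarray update is given below; it assumes 0-based indexing, that the initial analog precoder 𝐅_RF and the switch matrix are available, and that ties in the error ranking are broken arbitrarily.

```python
import numpy as np

def sca_analog_update(F_RF, A_col_q, P_S, K_t, eta):
    """Sensing-codebook-assisted update of the analog precoder at time slot q.

    Replaces the K_s = ceil(N_c * (1 - eta)) closed-switch subarray vectors that
    are closest (in Euclidean distance) to the corresponding codebook blocks.
    A_col_q: q-th column of the sensing codebook (length N_t)."""
    N_t, N_rf = F_RF.shape
    errors = []                                   # (distance, subarray i, chain j)
    for i in range(N_t // K_t):
        for j in range(N_rf):
            if P_S[i * K_t, j] == 1:              # switch (i, j) is closed
                block = slice(i * K_t, (i + 1) * K_t)
                e = np.linalg.norm(A_col_q[block] - F_RF[block, j])
                errors.append((e, i, j))
    N_c = len(errors)                             # number of closed switches
    K_s = int(np.ceil(N_c * (1 - eta)))
    F_RF_q = F_RF.copy()
    for e, i, j in sorted(errors)[:K_s]:          # K_s best-matching blocks
        F_RF_q[i * K_t:(i + 1) * K_t, j] = A_col_q[i * K_t:(i + 1) * K_t]
    return F_RF_q
```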
§ SENSING ESTIMATION ALGORITHM DESIGN WITH DAOSA HYBRID BEAMFORMING
In this section, we propose the sensing parameter estimation algorithms at the sensing receiver. The task of the sensing receiver is to estimate the angle, range, and velocity of targets, given the transmit signal and the received sensing signal. As the whole sensing angular window is divided into Q scanning directions, at the qth time slot, we only sense the targets whose azimuth angles of arrival are within -Ω_q, given the knowledge of the received signal 𝐲_q and the transmit signal 𝐬_q.
For angle estimation, multiple signal classification (MUSIC) is a subspace-based method with super-resolution accuracy. Hereby, we adopt the DAoSA-MUSIC algorithm in <cit.> to estimate the target angle and propose the wideband DAoSA-MUSIC algorithm by extending it to wideband transmission. We need to reconstruct the observation matrix by performing stacking operations on the received signals at different subcarriers. After estimating each angle parameter, we develop a two-stage range and velocity estimation algorithm, i.e., sum-DFT and golden section search (S-DFT-GSS).
§.§ W-DAoSA-MUSIC for Angle Estimation
At the qth time slot, we construct the observation vector of the sensing receiver 𝐲_q[m, n] ∈ℂ^N_RF^r × 1 as,
𝐲_q[m, n] = 𝐖_RF, q^H 𝐀_r 𝐒_q[m, n] + 𝐄_q[m, n],
where
𝐒_q[m, n] = Λ_q[m, n] 𝐀_t^T 𝐅_RF, q𝐅_BB, q[m] 𝐬_q[m, n],
𝐀_r = [𝐚_r(θ_1, ϕ_1), ⋯, 𝐚_r(θ_P, ϕ_P)],
𝐀_t = [𝐚_t(θ_1, ϕ_1), ⋯, 𝐚_t(θ_P, ϕ_P)],
Λ_q[m, n] = √(N_t N_r/P)diag{h_1^(q)[m, n], ⋯, h_P^(q)[m, n]},
𝐄_q[m, n] = 𝐖_RF, q^H 𝐞_q[m, n],
and h_p^(q)[m,n] = h_p e^-j2π m Δ f τ_p e^j2π ((q - 1) T_s + n T_o) ν_p. Then, we can stack all 𝐲_q[m, n] into one matrix as,
𝐘_θ, q = [[ 𝐲_q, 0 … 𝐲_q, N-1 ]]
with 𝐲_q, n = [𝐲_q[0, n],⋯, 𝐲_q[M-1, n]].
The precoders and the receive steering matrix 𝐀_r remain the same at different symbols during a time slot. Then (<ref>) can be written as,
𝐘_θ, q = 𝐖_RF, q^H 𝐀_r 𝐒_θ, q + 𝐄_q,
where 𝐒_θ, q = [𝐒_q[0, 0], ⋯, 𝐒_q[M-1, N-1]] is regarded as the P × M N-dimensional equivalent signal source matrix, and 𝐄_q ∈ℂ^N_RF^r × M N refers to the noise matrix. Based on (<ref>), we can perform the W-DAoSA-MUSIC algorithm to estimate the azimuth AoAs of targets.
Given the reconstructed observation matrix 𝐘_θ, q, the covariance matrix can be calculated as,
𝐑_θ, q = 1/M N𝐘_θ, q𝐘_θ, q^H.
Then we can conduct the eigenvalue decomposition (EVD) as,
𝐑_θ, q = 𝐔_s Σ_s 𝐔_s^H + 𝐔_n Σ_n 𝐔_n^H,
where Σ_s ∈ℂ^P_q × P_q consists of P_q leading eigenvalues, Σ_n ∈ℂ^(N_RF^r - P_q) × (N_RF^r - P_q) contains the remaining eigenvalues and P_q denotes the number of targets whose azimuth AoAs are within -Ω_q. With the signal subspace 𝐔_s ∈ℂ^N_RF^r × P_q and the noise subspace 𝐔_n ∈ℂ^N_RF^r × (N_RF^r - P_q), the pseudo spectrum of W-DAoSA-MUSIC can be formulated as,
𝐏_music(θ, ϕ) = 𝐚^H(θ, ϕ) 𝐖_RF, q𝐖_RF, q^H 𝐚(θ, ϕ)/𝐚^H(θ, ϕ) 𝐖_RF, q𝐔_n 𝐔_n^H 𝐖_RF, q^H 𝐚(θ, ϕ).
Finally, the AoA estimation (θ̂_p, ϕ̂_p) can be obtained by searching the peaks of the MUSIC spectrum within the angles of -Ω_q, expressed as
(θ̂_p, ϕ̂_p) = arg max_θ, ϕ𝐏_music(θ, ϕ).
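The core of the W-DAoSA-MUSIC spectrum computation can be sketched as follows; the candidate steering vectors are assumed to be precomputed on an angular grid, and the number of targets P_q is assumed known or estimated beforehand. Names are ours.

```python
import numpy as np

def wdaosa_music_spectrum(Y, W_RF, steering, n_targets):
    """Pseudo-spectrum values of W-DAoSA-MUSIC on a grid of candidate angles.

    Y:        (N_rf_r, M*N) stacked received observations of one time slot
    W_RF:     (N_r, N_rf_r) analog combiner of the sensing receiver
    steering: list of receive steering vectors a(theta, phi), each of length N_r
    """
    R = Y @ Y.conj().T / Y.shape[1]                  # sample covariance
    eigval, eigvec = np.linalg.eigh(R)               # eigenvalues in ascending order
    U_n = eigvec[:, :-n_targets]                     # noise subspace
    spectrum = []
    for a in steering:
        w = W_RF.conj().T @ a                        # effective steering vector W^H a
        num = w.conj().T @ w                         # a^H W W^H a
        den = w.conj().T @ (U_n @ U_n.conj().T) @ w  # a^H W U_n U_n^H W^H a
        spectrum.append(np.real(num / den))
    return np.array(spectrum)                        # peaks indicate target angles
```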
§.§ S-DFT-GSS for Range and Velocity Estimation
For range and velocity estimation, the received signal model can be expressed as,
𝐲_q[m, n] = ∑_p=1^P h_p^(q) e^j2π n T_o ν_p e^-j2π m Δ f τ_p𝐱_p, q[m, n] + 𝐞_q[m, n],
where
𝐱_p, q[m, n] = 𝐖_RF, q^H 𝐇_θ(θ_p, ϕ_p) 𝐅_RF, q𝐅_BB, q[m] 𝐬_q[m, n],
𝐇_θ(θ, ϕ) = 𝐚_r(θ, ϕ) 𝐚_t^T(θ, ϕ)
𝐞_q[m, n] = 𝐖_RF, q^H 𝐞_q[m, n]
and h_p^(q) = √(N_t N_r/P) h_p e^j2π (q - 1) T_s ν_p. For each estimated AoA parameter (θ̂_p, ϕ̂_p), we can construct a maximum likelihood (ML) estimator by minimizing the log-likelihood function, given by
(τ̂_p, ν̂_p) = arg min_τ, ν, h∑_u=1^N_RF^r‖𝐘_u, q - hΨ(τ, ν) ⊙𝐗̂_u, q‖_F^2,
where
𝐘_u, q = [[ 𝐲_q(u)[0, 0] … 𝐲_q(u)[0, N - 1]; ⋮ ⋱ ⋮; 𝐲_q(u)[M-1, 0] … 𝐲_q(u)[M-1, N-1] ]],
Ψ(τ, ν) = Ψ_τΨ_ν^T,
𝐗̂_u, q = [[ 𝐱̂_q(u)[0, 0] … 𝐱̂_q(u)[0, N - 1]; ⋮ ⋱ ⋮; 𝐱̂_q(u)[M-1, 0] … 𝐱̂_q(u)[M-1, N-1] ]],
with
Ψ_τ = [e^-j2π 0 Δ f τ, e^-j 2π 1 Δ f τ, ⋯, e^-j 2π (M - 1) Δ f τ]^T,
Ψ_ν = [e^j2π 0 T_o ν, e^j2π 1 T_o ν, ⋯, e^j2π (N - 1) T_o ν]^T,
𝐱̂_q[m, n] = 𝐖_RF, q^H 𝐇_θ(θ̂_p, ϕ̂_p) 𝐅_RF, q𝐅_BB, q[m] 𝐬_q[m, n],
for u = 1, 2, ⋯, N_RF^r. Next, this minimization problem can be transformed to the maximization problem,
(τ̂_p, ν̂_p) = arg max_τ, ν𝐏_ML(τ, ν),
where
𝐏_ML(τ, ν) = |∑_u=1^N_RF^rTr((Ψ(τ, ν) ⊙𝐗̂_u, q)^H 𝐘_u, q)|^2/∑_u=1^N_RF^r‖Ψ(τ, ν) ⊙𝐗̂_u, q‖_F^2
∝ |∑_u=1^N_RF^rTr((Ψ(τ, ν) ⊙𝐗̂_u, q)^H 𝐘_u, q)|^2,
since the entries of Ψ(τ, ν) have unit modulus, so that the denominator ∑_u=1^N_RF^r‖Ψ(τ, ν) ⊙𝐗̂_u, q‖_F^2 = ∑_u=1^N_RF^r‖𝐗̂_u, q‖_F^2 does not depend on (τ, ν).
The solution in (<ref>) is obtained by searching (τ, ν) at which 𝐏_ML(τ, ν) achieves a maximum value in the region [0, 1/Δ f)× [-1/2T_o, 1/2 T_o).
To reduce the computational complexity, we can design a two-phase estimation method. Specifically, in the first phase, we perform an on-grid search over a discretized set of delay and Doppler grid points with step sizes 1/(MΔ f) and 1/(N T_o), which can be implemented with the 2D DFT algorithm. In the second phase, based on the coarse estimation result, we conduct the off-grid estimation by introducing a 2D golden section search (GSS) method. We describe the proposed S-DFT-GSS estimation method in the following.
§.§.§ Phase I
To compute the ML estimator in (<ref>), we first perform an on-grid search on the discretized grid Γ = {(m_0/M Δ f, n_0/N T_o), m_0 = 0, ⋯, M - 1, n_0 = -N/2, ⋯, N/2-1}, as
(m̂_0, n̂_0) = arg max_(τ, ν)∈Γ𝐏_ML(m_0/M Δ f, n_0/N T_o).
Hereby, we need to calculate the M× N-dimensional ML estimator profiles on Γ, which can be computed from the sum of N_RF^r 2D DFT outputs, given by
𝐏_ML(m_0/M Δ f, n_0/N T_o) = |𝐠_d(m_0 + 1, [n_0]_N + 1)|^2
where
𝐠_d = ∑_u=1^N_RF^r𝐅_M^H (𝐗̂_u, q^* ⊙𝐘_u, q) 𝐅_N,
and 𝐅_M∈ℂ^M× M and 𝐅_N ∈ℂ^N × N refer to the normalized DFT matrices. Then we determine that the delay parameter lies between (m̂_0 - 1)/(M Δ f) and (m̂_0 + 1)/(M Δ f) and the Doppler parameter lies between (n̂_0 - 1)/(N T_o) and (n̂_0 + 1)/(N T_o). Thus, the search region Γ_g for off-grid estimation in the second phase becomes,
{(τ, ν): (m̂_0 - 1)/(M Δ f) ≤τ≤ (m̂_0 + 1)/(M Δ f), (n̂_0 - 1)/(N T_o) ≤ν≤ (n̂_0 + 1)/(N T_o)}.
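Phase I can be implemented with two normalized DFT matrices as sketched below; the mapping from the peak column index back to n̂_0 accounts for the modulo indexing [n_0]_N, and the function name is ours.

```python
import numpy as np

def phase1_on_grid(Y_list, Xhat_list):
    """Coarse delay-Doppler estimate via the sum of 2D-DFT profiles.

    Y_list / Xhat_list: per-RF-chain (M, N) matrices Y_{u,q} and Xhat_{u,q}."""
    M, N = Y_list[0].shape
    F_M = np.fft.fft(np.eye(M)) / np.sqrt(M)         # normalized DFT matrices
    F_N = np.fft.fft(np.eye(N)) / np.sqrt(N)
    g = sum(F_M.conj().T @ (Xh.conj() * Y) @ F_N     # sum of N_RF^r 2D DFT outputs
            for Y, Xh in zip(Y_list, Xhat_list))
    P = np.abs(g) ** 2                               # M x N ML profile on the grid
    m0, n0_idx = np.unravel_index(np.argmax(P), P.shape)
    n0 = n0_idx if n0_idx < N // 2 else n0_idx - N   # undo the modulo-N column index
    return m0, n0, P
```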
§.§.§ Phase II
In this phase, we perform an off-grid search over the continuous-valued region Γ_g, as
(τ̂_p, ν̂_p) = arg max_(τ, ν)∈Γ_g𝐏_ML(τ, ν).
Hereby, we can utilize the 2D golden section search technique, each step of which reduces the interval of uncertainty by
the golden ratio. Finally, the estimated velocity and range are given by r̂_p = τ̂_p c_0/2 and v̂_p = ν̂_p c_0/2 f_c, respectively.
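One simple way to realize the 2D search is to alternate 1D golden section searches over the delay and Doppler coordinates, as sketched below; the exact 2D variant used in the paper may differ, and for brevity the sketch re-evaluates the objective at both interior points in each iteration. The tolerance should be chosen relative to the grid-cell size.

```python
import numpy as np

def gss_1d(f, lo, hi, tol=1e-9):
    """Golden section search for the maximum of a unimodal function on [lo, hi]."""
    inv_phi = (np.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) > f(d):
            b, d = d, c                       # maximum lies in [a, d_old]
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d                       # maximum lies in [c_old, b]
            d = a + inv_phi * (b - a)
    return (a + b) / 2

def gss_2d(P_ML, tau_lo, tau_hi, nu_lo, nu_hi, n_rounds=5):
    """Alternating 1D golden section searches over the delay-Doppler rectangle."""
    tau, nu = (tau_lo + tau_hi) / 2, (nu_lo + nu_hi) / 2
    for _ in range(n_rounds):
        tau = gss_1d(lambda t: P_ML(t, nu), tau_lo, tau_hi)
        nu = gss_1d(lambda v: P_ML(tau, v), nu_lo, nu_hi)
    return tau, nu
```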
§ ISI- AND ICI-TACKLED SENSING ALGORITHM
In the previous section, the proposed estimation algorithm is based on the assumption that the round-trip delay of targets is not longer than the CP duration and the Doppler shifts are much smaller than the subcarrier spacing, i.e., the sensing channel is both ISI- and ICI-free. Nevertheless, when it comes to the THz band, this assumption might become invalid in some cases. First, as the carrier frequency increases, the Doppler shift in the THz band grows much larger than the microwave band, which may cause inter-carrier interference and degrade sensing accuracy, especially in high-mobility scenarios. Second, with the decrease of communication delay spread in the THz band, larger subcarrier spacing can be used and the symbol and CP durations are reduced. However, this limits the maximum sensing distance if still using the proposed ISI- and ICI-unaware sensing algorithm in Sec. <ref> even when the link budget is sufficient.
In this section, we first derive the received signal model with ISI and ICI caused by the sensing channel and then develop an ISI- and ICI-tackled sensing algorithm to overcome the estimation problem with ICI and ISI. Since we take into account the ISI and ICI effects, we focus on the time-frequency domain signal model and design, by simplifying the notations of the spatial domain in this section.
§.§ Received Signal Model with ICI and ISI
During a time slot, we denote the data signal at the mth subcarrier and the nth symbol as X_m, n. Then, the transmit baseband signal with the CP part is expressed as,
s(t) = ∑_m=0^M-1∑_n=0^N-1 X_m, nrect(t - n T_o) e^j 2π m Δ f (t - T_cp - n T_o),
where rect(t) refers to a rectangular pulse that is limited to [0, T_o]. At the sensing receiver, the baseband time-domain continuous signal r(t) is given by,
r(t) = ∑_p=1^Pα_p e^j2πν_p t s(t - τ_p) + w(t),
where α_p stands for the channel coefficient of the pth target and w(t) denotes the AWGN; the delay and Doppler parameters are defined as in Sec. <ref>, with the assumptions relaxed from τ_p ⩽ T_cp to τ_p ⩽ T_s and from ν_p ≪Δ f to ν_p < Δ f. By sampling the received signal and removing the CP part, we obtain the baseband time-domain discrete signal,
r_m, n = r(t)|_t = nT_o + T_cp + m/MT
= ∑_p=1^Pα_p e^j2 πν_p (n T_o + T_cp + m/M T) s(nT_o + T_cp + m/M T - τ_p)
+ w_m, n.
Hereby, the key step is to derive the sampling signal s_τ_p, m, n = s(nT_o + T_cp + m/M T - τ_p), given by
s_τ_p, m, n = ∑_m'=0^M-1∑_n'=0^N-1 X_m', n'rect((n - n')T_o + T_cp + m/MT - τ_p)
× e^j 2π m' Δ f ((n - n') T_o + m/MT - τ_p ).
When k_p T_o ⩽τ_p < k_p T_o + T_cp with k_p = ⌊τ_p/T_o⌋ (⌊·⌋ stands for the floor function), we can obtain
s_τ_p, m, n = ∑_m'=0^M-1 X_m', n-k_p e^j2πm' m/M e^-j2π m' Δ f τ_p e^j2π m' k_p M_cp/M.
When k_p T_o + T_cp⩽τ_p < (k_p + 1) T_o, for m ⩾ (τ_p/T)M - M_cp - k_p(M+M_cp), s_τ_p, m, n is the same as that in (<ref>). For m < (τ_p/T)M - M_cp - k_p (M + M_cp), we obtain
∑_m'=0^M-1 X_m', n-k_p-1 e^j2πm' m/M e^-j2π m' Δ f τ_p e^j2π m' k_p T_cp/T e^j2π m' M_cp/M.
Based on the above derivations, we can derive the time-domain input-output relation, i.e., the vector form of the received signal time-domain r_m, n at the q time slot, 𝐫_q ∈ℂ^MN× 1, is expressed as,
𝐫_q = ∑_p=1^Pα_p Δ^(ν_p)𝐃_NΠ_2MN^l_p + k_p Mvec( Π_M^-l_p (𝐃_l_pΠ_M^-M_cp + 𝐃̂_l_p)
·𝐅_M^H 𝐛_τ_p [𝐗_q-1, 𝐗_q ] ) + 𝐰_q,
where 𝐗_q ∈ℂ^M× N denotes the time-frequency domain transmit signal at the qth time slot, l_p = max{0, ⌈ (τ_p/T)M - M_cp - k_p(M + M_cp) ⌉} (⌈·⌉ describes the ceiling function), Δ^(ν_p) = diag(vec(𝐕_ν_p)) with 𝐕_ν_p(m, n) = e^j2πν_p (n T_o + T_cp + m/M T), the matrix Π_M∈ℂ^M× M refers to the forward cyclic-shift (permutation) matrix, 𝐃_N equals the identity matrix 𝐈_2MN with the first MN rows punctured, 𝐃_l_p equals the identity matrix 𝐈_M with the last M - l_p rows set to zero, 𝐃̂_l_p equals the identity matrix 𝐈_M with the first l_p rows set to zero, 𝐛_τ_p = diag{b_τ_p^0, ⋯, b_τ_p^M-1} with b_τ_p = e^j2π(k_pT_cp/T - τ_p/T), and 𝐰_q is the noise vector.
After performing DFT on the matrix form of 𝐫_q, 𝐑_q = vec^-1(𝐫_q) ∈ℂ^M× N, we obtain the frequency-domain received signal 𝐲_q ∈ℂ^MN× 1 at the qth time slot, given by
𝐲_q = vec(𝐅_M 𝐑_q)
= ∑_p=1^Pα_p 𝐇_p(τ_p, ν_p) [𝐱_q-1^T, 𝐱_q^T]^T + 𝐰_q,
where the matrix 𝐇_p(τ_p, ν_p) ∈ℂ^MN× 2MN is given by,
𝐇_p(τ_p, ν_p) = (𝐈_N ⊗𝐅_M) Δ^(ν_p)𝐃_NΠ_2MN^l_p + k_p M( 𝐈_2N⊗( Π_M^-l_p
·(𝐃_l_pΠ_M^-M_cp + 𝐃̂_l_p) 𝐅_M^H 𝐛_τ_p) ),
and 𝐱_q-1 = vec(𝐗_q - 1), 𝐱_q = vec(𝐗_q). If the ISI and ICI effects are ignored, the input-output relation in the time-frequency domain is approximated as the following matrix form,
𝐘_q ≈∑_p=1^P α_p 𝐗_q ⊙Ψ(τ_p, ν_p) + 𝐖_q.
The ISI- and ICI-unaware estimation is based on this approximated input-output relation, which is not accurate and causes estimation error in the presence of ISI and ICI effects.
§.§ ISI- and ICI-tackled Estimator
Based on the received sensing signal model with ISI and ICI in (<ref>), we can obtain the ISI- and ICI-tackled estimator, given by
(τ̂, ν̂) = arg max_τ, ν|(𝐇_p(τ, ν) [𝐱_q-1^T, 𝐱_q^T]^T )^H 𝐲_q|^2/‖𝐇_p(τ, ν) [𝐱_q-1^T, 𝐱_q^T]^T‖_2^2.
The complexity of the proposed ISI- and ICI-tackled estimation algorithm depends on the computation of 𝐇_p(τ, ν) [𝐱_q-1^T, 𝐱_q^T]^T. This can be implemented with computationally efficient operations, including FFT algorithms, cyclic shift, vectorization, and Hadamard product. Thus, the overall computational complexity of this estimator is 𝒪(MN log (MN)).
§ NUMERICAL RESULTS
In this section, we evaluate the sensing and communication performance of the proposed precoding algorithms and sensing parameter estimation methods. The key simulation parameters are listed in Table <ref>, which refer to the physical layer numerology for beyond 52.6 GHz communications in <cit.> and the THz link budget analysis in <cit.>. We consider a THz multipath channel with one LoS path and L_N = 4 NLoS paths.
In the simulations, we consider 2D beamforming, i.e., all elevation angles are set as ϕ_0 = 90^∘.
§.§ Performance of Hybrid Precoding Algorithms for THz ISAC
First, we evaluate the performance of the proposed VEC and SCA hybrid precoding algorithms for THz ISAC in terms of spectral efficiency and transmit beamforming gain towards the sensing direction. Specifically, we consider three hybrid precoding architectures, i.e., FC, AoSA, and DAoSA structures. In comparison, the PE-AltMin approach <cit.> and the TAltMin <cit.> algorithm are used for the FC and the AoSA structures, respectively. The proposed VEC and SCA algorithms are performed for the DAoSA architecture, which is equivalent to FC with N_c = (N_RF^t)^2 and AoSA with N_c = N_RF^t. Since we focus on the evaluations of the hybrid precoding design, the FC combining architecture is set at the communication receiver side. Moreover, the performance of fully digital precoding is evaluated as an upper bound. The subcarrier spacing is set as 1.92 MHz and the number of subcarriers equals 64. The signal-to-noise ratio (SNR) of the communication link is -20 dB.
As shown in Fig. <ref>, the performance tradeoff between spectral efficiency and transmit sensing beamforming gain using different hybrid precoding algorithms is plotted by setting the weighting factor within [0, 1]. We learn that the spectral efficiency decreases as the transmit sensing beamforming gain is improved as expected, since more energy is concentrated toward the sensing direction. In the FC structure, the proposed VEC algorithm performs slightly better than the PE-AltMin approach and achieves close performance to the fully digital precoding. In the AoSA architecture, the VEC algorithm realizes higher spectral efficiency than the TAltMin method when η > 0.5, i.e., communication dominates the precoding design. Moreover, while the proposed VEC algorithm outperforms the SCA method for all dynamic hybrid beamforming structures, the SCA algorithm is more computationally efficient.
Next, we investigate the spectral efficiency versus SNR with different numbers of closed switches. In Fig. <ref>, compared to the communication-only precoding design (η = 1), the spectral efficiency of the ISAC precoding design (η = 0.6) is reduced by approximately 2.5 bits/s/Hz at the SNR of -30 dB. When N_c = 16, the DAoSA structure becomes FC, and the proposed VEC ISAC hybrid precoding algorithm achieves near-optimal performance over the whole SNR range. With fewer closed switches, fewer phase shifters are used, which causes some performance loss while improving energy efficiency.
§.§ Transmit Beampattern
We illustrate the transmit beampattern of the designed hybrid precoders in Fig. <ref> and Fig. <ref> for different weights of ISAC precoding design and beam scanning over sequential time slots.
As shown in Fig. <ref>, η = 0 corresponds to the sensing-only precoder 𝐅_s, q. In this case, both the proposed VEC and SCA algorithms can realize the desired beampattern, which is generated from the DFT sensing codebook, in the FC (N_c = 16) and AoSA (N_c = 4) architectures. When η becomes 0.5, we learn that the beamforming gain toward the sensing direction is slightly reduced, while several communication sub-beams are formed and point to the angles of the communication paths. In the case of η = 1, the communication-only precoding design does not generate sensing beams toward the sensing direction and concentrates all beams toward the communication receiver. In addition, the transmit beam in the FC structure matches the pattern of the fully digital precoding more closely than that of the AoSA structure.
In Fig. <ref>, it is shown that during a frame duration, the designed THz ISAC transmit signal can generate sweeping beams to scan possible targets in the surrounding environment over different time slots and stable beams toward the communication user to enable ultra-fast data transmission. We observe that the transmit beamforming gains toward the sensing direction can achieve approximately 20 dBi as the beam angle varies, while the communication beams remain similar at different time slots.
Complexity Analysis: We denote N_iter as the number of iterations of the alternating minimization in the VEC algorithm for each time slot. The overall computational complexity of the VEC-based ISAC hybrid precoding algorithm is given by 𝒪(Q N_iter N_t^2 ).
Since the SCA ISAC hybrid precoding algorithm does not require the process of alternating minimization for each time slot, it can reduce the computational complexity to 𝒪(N_iterN_t^2) compared with the VEC algorithm.
§.§ Sensing Accuracy
We further investigate the effectiveness of the proposed sensing algorithm with the DAoSA hybrid beamforming architecture. In Fig. <ref>, a number of sensing targets are randomly distributed between -90^∘ and 90^∘. We conduct beam scanning by using the proposed hybrid precoding algorithms in Sec. <ref> and then plot the normalized range profile based on the back-reflected sensing received signal by using the proposed sensing estimation algorithms in Sec. <ref>. At the qth time slot, we estimate the parameters of the target within the sensing angular window Ω_q. With the time-frequency-space transmit design, we realize entire-space multi-target sensing, although the directional narrow beams are used in the THz band.
Moreover, we evaluate the sensing accuracy of angle, range, and velocity estimation with the proposed sensing algorithm. In Fig. <ref>, we set the target parameters including the azimuth angle of 70^∘, the distance of 15 m, and the velocity of 20 m/s. The waveform parameters are M = 64 and Δ f = 3.84 MHz. The number of closed switches is 4 at both transmitter and sensing receiver sides. As the sensing SNR increases, the sensing accuracy is improved. Specifically, we observe that the angle, range, and velocity estimation can achieve centi-degree-level, millimeter-level, and decimeter-per-second-level accuracy, respectively. In addition, by decreasing the weighting factor η from 0.6 to 0.4, the sensing accuracy is improved, since more power is allocated to the sensing beam.
Complexity Analysis: The computational complexity of EVD in (<ref>) is 𝒪((N_RF^r)^3). Since N_RF^r is much smaller than N_r, the overall computational complexity of W-DAoSA-MUSIC mainly depends on the matrix-vector multiplication in (<ref>), namely, 𝒪(N_RF^r N_r). The computational complexity of the S-DFT-GSS algorithm is 𝒪(N_RF^r M N log (MN)) in the first phase and 𝒪(N_gss N_RF^r M N) in the second phase, where N_gss denotes the iterations of golden section search.
§.§ ISI and ICI Effects on Sensing Parameter Estimation
Finally, we study the ISI and ICI effects on sensing parameter estimation for THz ISAC systems. The subcarrier number is set as 1024. The considered scenario contains 3 targets with the ranges (10, 20, 30) m and the effective SNRs (-10, -15, 20) dB considering the beamforming gain. In Fig. <ref>, we compare the ICI-unaware and ICI-tackled estimation algorithms under two cases, i.e., sensing channels with weak and strong ICI effects, respectively. As shown in Fig. <ref>(a), the velocity of targets is set as 5 m/s, which corresponds to the low-mobility scenario. In this case, we learn that both ICI-unaware and ICI-tackled sensing algorithms have similar estimation results and can accurately estimate the parameters of 3 targets. Nevertheless, when the target velocity increases to 50 m/s in Fig. <ref>(b), with ICI-unaware estimation, ICI effects increase side-lobe levels of the target with the strongest power, which may cause masking of weak targets or large errors on the parameters of the other two targets. The distance of the target at 30 m is estimated as 29.4 m and the target at 20 m cannot be detected successfully due to the ambiguity caused by side lobes. In contrast, the proposed ICI-tackled sensing algorithm can overcome this problem and still accurately estimate these three targets.
In Fig. <ref>, we consider the ISI effects on THz ISAC systems. We consider the scenario containing 2 targets with the ranges (10, 45) m, the same velocity v = 5 m/s, and the effective SNRs (-10, -10) dB considering the beamforming gain. As shown in Fig. <ref>(a), when the subcarrier spacing is 480 kHz, the CP-limited maximum sensing distance is 78 m, which is longer than the target ranges. In this case, there is no ISI effect and we can obtain accurate estimated values of target ranges by using the ISI-unaware sensing algorithm. When the delay spread of the THz communication channel decreases, we can increase the subcarrier spacing and the CP duration becomes shorter, which reduces the CP-limited sensing distance. In Fig. <ref>(b), the subcarrier spacing increases to 3.84 MHz, and the CP-limited sensing distance is 9.8 m, which is shorter than the target ranges. Thus, there exist ISI effects on the received sensing signal. According to the normalized range profile using the ISI-unaware sensing algorithm, the range of the second target is estimated as 49 m, while the ground truth is 45 m. By comparison, the ISI-tackled sensing algorithm still performs well and is robust against the ISI effect.
§ CONCLUSION
In this paper, we have proposed a THz ISAC system framework, including the time-frequency-space transmit design with the DAoSA hybrid beamforming architecture and OFDM waveform, and sensing algorithms for angle, range, and velocity estimation. We propose two ISAC hybrid precoding algorithms, i.e., the near-optimal VEC method and the low-complexity SCA approach. Meanwhile, in the ISI- and ICI-free case, we propose the W-DAoSA-MUSIC angle estimation algorithm and the S-DFT-GSS range and velocity estimation method. Furthermore, when there exist ISI and ICI effects on target estimation in the THz band, we develop the ISI- and ICI-tackled sensing algorithm to overcome the CP limitation and high-mobility target estimation problem.
With extensive simulations, the results indicate that the proposed VEC ISAC hybrid precoding algorithm can achieve close performance to fully digital precoding and outperforms other existing methods. The developed SCA algorithm can reduce computational complexity by removing the process of alternating minimization for each time slot. Meanwhile, with the proposed estimation algorithms, centi-degree-level angle estimation, millimeter-level range estimation, and decimeter-per-second-level velocity estimation can be realized in THz ISAC systems.
|
http://arxiv.org/abs/2307.05870v1 | 20230712015555 | Useful but Distracting: Keyword Highlights and Time-Synchronization in Captions for Language Learning | ["Fiona Draxler", "Henrike Weingärtner", "Maximiliane Windl", "Albrecht Schmidt", "Lewis L. Chuang"] | cs.HC | ["cs.HC", "H.5.2; K.3.1"] |
Useful but Distracting: Keyword Highlights and Time-Synchronization in Captions for Language Learning
Authors' ORCIDs and affiliations:
0000-0002-3112-6015: LMU Munich, Munich, Germany, 80539
0000-0003-1100-312X: LMU Munich, Munich, Germany, 80539
0000-0002-9743-3819: LMU Munich, Munich, Germany, 80539
0000-0003-3890-1990: LMU Munich, Munich, Germany, 80539
0000-0002-1975-5716: TU Chemnitz, Chemnitz, Germany, 09111
Captions provide language learners with a scaffold for comprehension and vocabulary acquisition. Past work has proposed several enhancements such as keyword highlights for increased learning gains. However, little is known about learners' experience with enhanced captions, although this is critical for adoption in everyday life.
We conducted a survey and focus group to elicit learner preferences and requirements and implemented a processing pipeline for enhanced captions with keyword highlights, time-synchronized keyword highlights, and keyword captions. A subsequent online study (n = 49) showed that time-synchronized keyword highlights were the preferred design for learning but were perceived as too distracting to replace standard captions in everyday viewing scenarios. We conclude that keyword highlights and time-synchronization are suitable for integrating learning into an entertaining everyday-life activity, but the design should be optimized to provide a more seamless experience.
[Teaser figure] The four selected caption designs: (A) standard captions (white text with black contour), (B) full captions with yellow keyword highlights, (C) timed keyword-only captions, where each keyword is shown at the moment it is pronounced, and (D) full captions with timed keyword highlights, where each keyword is highlighted when it is pronounced.
CCS Concepts: Applied computing → E-learning; Computing methodologies → Speech recognition; Human-centered computing → User studies
§ INTRODUCTION
With streaming services and online video platforms, language learners have gained access to potentially unlimited content. Thanks to foreign-language audio and captions, they can improve their skills while watching their favorite show. However, captions on streaming platforms and other media providers are primarily designed for comprehension, not for engaging learners. For example, they include potentially distracting elements such as textual sound descriptions (e.g., [footsteps approaching] or [Dancing Queen playing on the radio]). Thus, optimizing captions to match language learners’ needs could improve the motivation to watch foreign-language media with captions and, in turn, also increase learning success.
Past work has already explored modifications of captions such as keyword captions <cit.>, captions including keyword translations <cit.>, or interactive support based on eye tracking <cit.>. In fact, several studies show increased learning gains for such enhanced captions <cit.>. However, the learners' perspective on enhanced captions is unclear, although a positive user experience may motivate viewers to integrate learning into leisure activities.
In this paper, we applied a user-centered design process to implement and evaluate the user experience and perceived usefulness of enhanced closed captions for language learning, targeting medium- to high-proficiency learners. As a first step, we identified learner needs in a focus group and an initial survey. Based on related work and our insights from the survey and focus group, we implemented a processing system for three enhanced caption types: (1) captions consisting only of time-synchronized keywords, (2) captions with keyword highlights, and (3) captions with time-synchronized keyword highlights. Words were considered keywords if they were not included in an English-language CEFR[European Reference Scale; <https://www.coe.int/en/web/common-european-framework-reference-languages/level-descriptions>] A1-B1 corpus. As a baseline design, we added standard full captions. We compared the viewing experience and perceived understanding with these four caption types in an online survey using excerpts from the movie Marriage Story. We found that captions with (time-synchronized) keyword highlights outperformed the other caption designs with regard to hedonic qualities and scored almost as high as standard captions on pragmatic qualities and perceived comprehension. However, the distractions caused by the highlights meant that a majority of users still preferred standard captions, except when they explicitly aimed at learning.
In sum, we contribute (1) a choice of three enhanced caption types that are promising from a user perspective, (2) a comparative evaluation of these caption types with regard to user experience and perceived comprehension, and (3) a discussion of implications for embedding captioned viewing in everyday life to support language learning.
§ RELATED WORK
Foreign-language videos, be it movies or TV shows, are a great tool for language learning: they immerse learners in a foreign culture <cit.>, enable comprehension practice <cit.>, and promote vocabulary learning <cit.>.
Generally speaking, videos provide exposure to authentic language, which is beneficial for language acquisition according to Krashen's input hypothesis <cit.>.
This section summarizes how learning can be supported through captions and subtitles[We use the term captions to refer to intralingual or same-language subtitles and subtitles to refer to interlingual or foreign-language subtitles <cit.>].
We discuss advanced caption design concepts that utilize the flexibility of current-day media players to optimize the viewing and learning experience for different target groups and briefly address technological prerequisites.
§.§ Captions and Subtitles for Language Learning
Captions and subtitles foster language learning through improved content comprehension <cit.>, listening comprehension <cit.>, vocabulary acquisition <cit.>, and to some extent, also grammar learning <cit.>.
For example, a study on content and listening comprehension showed that students who watch videos with subtitles or captions write better summaries than students without captions <cit.>. Similarly, learners provided with captions achieved higher scores in comprehension questions than those without <cit.>.
In terms of vocabulary learning, studies have observed both recall and recognition improvements when watching videos with captions or subtitles <cit.>. How many words a viewer learns depends on factors such as the words' imagery potential and whether the words sound similar to first-language words <cit.>.
Interestingly, <cit.> found larger vocabulary gains for more proficient students.
Studies on grammar learning through subtitles are scarce overall. For example, <cit.> found positive effects of textual enhancements in captions, but only for some of the enhanced structures, while a study with children by <cit.> showed no effects on grammar learning.
One important aspect to consider for learning success is cognitive load. On the one hand, the combination of multiple modalities—the associations of images, written, and spoken words—supports dual coding <cit.> and can lead to a greater depth of processing <cit.>.
On the other hand, subtitles add an additional information channel that viewers need to process, and this can potentially cause a high cognitive load. Accordingly, a study by <cit.> showed that many first-year learners found captions distracting and that adding captions impacted their listening comprehension. However, this was not the case for third-year learners who already had more language exposure. Similarly, an eye tracking and EEG study by <cit.> showed that despite the verbal redundancy effect, the risk of cognitive overload caused by captions was low.
Therefore, our target group in this work is also medium- to high-proficiency learners.
An outlook on additional aspects, such as the suitability of different video genres and recent work on learner strategies, is provided in the literature reviews by <cit.> and <cit.>.
The cited literature above includes work on captions (i.e., subtitles in the same language as the video) and subtitles (i.e., subtitles in the users' language). In fact, research so far has not shown conclusive evidence in favor of one or the other <cit.>. Unsurprisingly, subtitles are particularly helpful for content comprehension of novice learners <cit.>. However, another study found that learners watching a video with Scottish or Australian accents and English captions were better at understanding and repeating words than a Dutch subtitle group <cit.>. Regarding vocabulary learning, a 7-week study by <cit.> indicated that both novice and advanced learners perform better when using captions.
Moreover, <cit.> suggest advancing from subtitles to captions to no captions on subsequent viewings as a beneficial strategy.
In sum, the decision to use captions or subtitles depends on the learner's goal and context.
In this work, we focus on intralingual captions because of their widespread availability, or as <cit.> put it:
We are [...] fortunate that those with a disability have provided us, who are merely `hard-of-listening' in a foreign language, with a wonderful resource not only for making films and TV programmes accessible to us but for helping us improve our reading, listening, and speaking skills.
§.§ Enhanced and Interactive Caption Design
Above, we discussed standard full-text subtitles and captions. However, with current-day media players, loading new subtitle files has become very easy. This opens up new possibilities for static, adaptive, or even interactive subtitle and caption variants.
For example, static subtitle adaptations include captions that only show keywords <cit.> or highlight target words <cit.>. Both of these approaches can benefit learning by increasing the focus on target words or reducing distractions. Other proposed methods add keyword translations, similar to text glosses <cit.>.
However, a major challenge with keyword or highlight captions is the selection of appropriate keywords, as it is difficult to assess what learners already know. A common approach is to select words based on their frequency in corpora, such as the BNC/COCA lists for English <cit.>. <cit.> had experts choose the words that were deemed most difficult.
As a further adaptation, <cit.> proposed speaker-following subtitles, which clearly mark the connection between speaker and dialog content and, thus, may reduce eye strain by reducing saccade length <cit.>. However, this approach requires advanced preprocessing. <cit.> investigated the effectiveness of bilingual subtitles compared to captions, subtitles, and no subtitles using an eye-tracking study. They found that while bilingual subtitles lead to higher meaning recognition, they can also be distracting, as users tend to spend more time reading the translations than the new words in the target language. <cit.> investigated synchronizing the speech signal with keyword captions and found short-term improvements on subsequent viewings of non-captioned videos in comparison to full captions and no captions.
Finally, several projects and studies have explored interactive subtitles. For example, <cit.> enhanced captions with features for interactive vocabulary lookup, line translation, video navigation, and transcription to an alphabet familiar to the learner. This increased vocabulary learning in comparison to dual-language subtitles. However, the information-dense subtitles led to viewing times between 10 and 12 minutes for 5-minute videos, thus substantially changing the experience from linear viewing. In addition, <cit.> designed a dictionary where entries are enriched with captioned video clips, including target word highlights and translations, resulting in higher vocabulary retention than with a traditional dictionary.
Commercial platforms such as FluentU[<https://www.fluentu.com>, last accessed 2023-01-20], LingoPie[<https://lingopie.com>, last accessed 2023-01-20], and Language Reactor[<https://www.languagereactor.com>] also provide interactive captions for language learning and promote this as an enjoyable way of learning.
Since our objective is to integrate learning using captions into everyday viewing experiences, we do not include interactive elements that may shift the focus toward learning and consequently impact entertainment and long-term motivation. Thus, we apply a static approach with preprocessed subtitle files.
§.§ Subtitle Files and Subtitle Processing
Srt files are well-suited for simple adaptations because they are human-readable, supported by common media players such as VLC, and can even be activated on top of browser-based Netflix and other video-on-demand players with extensions such as Substital[<https://chrome.google.com/webstore/detail/substital-add-subtitles-t/kkkbiiikppgjdiebcabomlbidfodipjg>, last accessed 2022-09-05].
However, they also come with several drawbacks. Notably, srt files are often unofficially distributed, are more easily available for blockbusters than arthouse movies, and frequently contain mistakes. In addition, the ideal timing can differ depending on the associated media type. For example, there may be additional opening credits in a BluRay version that are not shown by a video-on-demand provider, and this delays the timing of the BluRay subtitles, requiring manual synchronization or a tool such as <cit.>.
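As an illustration of how such a static preprocessing step might look, the sketch below wraps presumably unknown words of an srt file in color tags. The tiny known-word set is a stand-in for a proper learner vocabulary list (e.g., a CEFR-based corpus), the highlight color is arbitrary, and support for the <font> tag varies across players.

```python
import re

# Words assumed to be known by the learner (placeholder for a CEFR A1-B1 list).
KNOWN_WORDS = {"the", "a", "is", "are", "you", "i", "we", "it", "and", "to", "of"}

def highlight_keywords(srt_text, color="#ffff00"):
    """Wrap words outside the known-word list in <font> tags (basic tags are
    supported by many players, e.g. VLC); index and timing lines pass through."""
    timing = re.compile(r"^\d+$|^\d{2}:\d{2}:\d{2},\d{3} --> ")
    out_lines = []
    for line in srt_text.splitlines():
        if timing.match(line) or not line.strip():
            out_lines.append(line)
            continue
        words = []
        for token in line.split():
            bare = re.sub(r"[^\w']", "", token).lower()
            if bare and bare not in KNOWN_WORDS:
                token = f'<font color="{color}">{token}</font>'
            words.append(token)
        out_lines.append(" ".join(words))
    return "\n".join(out_lines)

example = "1\n00:00:01,000 --> 00:00:03,500\nYou are absolutely magnificent"
print(highlight_keywords(example))
```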
§ SURVEY ON CAPTION USAGE
As the first pointer towards favored caption designs for language learners, we surveyed 61 people on their current caption preferences and usage habits.
Specifically, we asked them how often they use captions or subtitles, what languages they set them to, and how much they like watching video material with captions.
§.§ Survey Participants
The 61 respondents were recruited via university mailing lists. They were between 17 and 65 years old (M = 27.0, SD = 8.9 years). Thirty-nine participants identified as female, 20 as male, one as diverse, and one did not disclose their gender. Fifty-nine participants were native German speakers, and two were native Russian speakers. Five participants listed a second native language (Italian, Russian, Farsi, or Spanish).
The survey was conducted in German.
We incentivized participation with a raffle of 20€ vouchers (one per ten participants).
§.§ Survey Results
The survey results revealed diverse subtitling and caption habits and preferences.
This was already apparent from the caption usage within the last thirty days: 16% of the participants reported never using captions, 26% used them a few times per month, and 57% used captions weekly or daily.
A majority of respondents stated that they used captions in the video language (74%) or subtitles in their native language (45%; multiple responses possible). 25% also set subtitles to a third language, for example, when the available options are limited or when they are watching with someone else.
The primary reasons for activating captions were insufficient language skills (74%), distractions caused by a noisy environment (67%), a low video volume (51%), other people needing subtitles (51%), a fast rate of speech (46%), dialects (43%), difficult words (38%), for language learning (5%), unintelligible pronunciation (3%), or when watching without sound (3%).
Responding to the phrase “I like subtitles”, 46% of participants agreed with the statement, 23% reported a neutral feeling, and 31% disagreed.
Overall, the survey highlights that subtitles and captions are frequently used. Most participants in our sample activate subtitles for better comprehension, whereas only a few intentionally do so for language learning.
This points to an opportunity to increase the motivation to learn by adapting the caption design.
§ FOCUS GROUP ON PREFERRED AND ENVISIONED CAPTION DESIGNS
We conducted an online focus group with six participants to discuss how captions can be adapted to cater to the specific needs of language learners. First, we presented and discussed current caption solutions beyond traditional closed captioning. Then, we asked our participants to develop their own ideas.
§.§ Procedure and Participants
The participants (three male, and three female) were between 20 and 30 years old. They were all native German speakers and had learned English in school.
After an introduction round, we showed the participants short video clips with caption designs from or inspired by prior work. We asked them to discuss the concepts in light of their usefulness for language learning.
The first five clips were shown in one go; the last three were presented one after the other whenever the conversation had come to a halt. Overall, we showed eight subtitle variants:
* Captions with translations and explanations for individual words as in <cit.>
* Captions with translations of words on hover as in <cit.>
* Captions with keyword highlighting and an additional text box with keywords and their translations as in <cit.>
* A modified version of the latter without highlights and translations
* Another modified version of <cit.> without the standard captions
* Captions with translations in parentheses as in <cit.>
* Displaying captions next to the person speaking as in <cit.>
* Rather than spoken words, the last variant presented in-place object labels and translations. This variant showcased caption use beyond dialogues.
Following the discussion, the participants engaged in an ideation activity using the 6-3-5 brainwriting method[<https://en.wikipedia.org/wiki/6-3-5_Brainwriting>] on a collaborative board with digital sticky notes.
In the end, they shared and discussed their ideas with the group.
The focus group was conducted in German.
§.§ Findings of the Focus Group
The discussion in the focus group highlighted the importance of avoiding disruptions and considering cognitive demands while catering to situation-dependent information needs. The ideation phase provided a starting point for further exploration of adaptations and novel caption designs.
Disruptions and Cognitive Load
The participants identified attention switches caused by the caption design as potential sources of disruption. They were also afraid that overloaded designs would make them miss parts of the movie. Our participants considered this particularly critical for caption variations that included translations and redundant or non-essential information.
Specifically, they emphasized that native-language translations immediately and automatically attract attention, limiting the resources available for the original captions and the scene content.
In addition, they found translations particularly distracting when the original caption and the translation used different alphabets.
When translations were to be displayed, participants preferred them to be positioned under the original word rather than in a separate keyword box to minimize lookup times.
Participants also said that only words that are actually pronounced should be displayed. Even for the use case of documentaries, they considered object label captions (variant 8) not helpful because of the factual learning focus inherent to documentaries.
In sum, our participants were afraid they could not focus on more than one thing at a time.
Situation- and User-Dependent Information
Participants noted that the requirements for captions depend on individual and situational factors such as the language level and the speakers' dialect or rate of speaking.
For example, they positively commented on the captions that moved along with speakers, in particular for speakers with strong accents or dialects. However, they felt that the display time might be too short for following fast speakers.
They also found the idea of keyword captions interesting. Keywords reduce the overall information load and can target words that are specifically helpful for learners of a given language level. For translated keyword captions, participants feared that they might not always be able to recognize them when they are pronounced.
Finally, they also discussed the timing of words so that they appear the moment they are pronounced. This way, viewers could immediately connect words with their pronunciation.
Extensions and Novel Ideas
Based on the discussed caption designs and their own experience, the participants came up with novel ideas and extensions of the presented caption variants.
These ideas can be grouped into concepts that focus either on comprehension or learning.
For better comprehension, suggestions include selective captioning of characters that speak dialects or are hard to follow. Similarly, captions could highlight technical terms or words that occur particularly infrequently and are, thus, more likely to be unknown.
For learning, participants felt that it might be helpful to add or highlight homonyms, typical idioms, dialectal differences, and/or words without direct translations. In an interactive system, translations could be shown on request.
Grammatical support could be provided, e.g., by coloring different tenses, endings, word boundaries, or functions of words.
Moreover, the level of detail should be adaptable to match the viewers' language level.
§ FINAL CAPTION DESIGNS AND HYPOTHESES
The user-centered design process including the initial survey and focus group motivated our final selection of caption designs as detailed below. Before comparing the enhanced caption designs in a user study, we derive hypotheses regarding the expected effect on user experience, perceived comprehension, perceived learning, and vocabulary recall.
§.§ Selected Caption Designs
Based on past literature, the focus group, and the survey, we finally selected the following four caption designs that vary between focusing on target words through keywords and providing context through full captions (cf. <ref>):
* Standard full captions. This variant represents state of the art and serves as a baseline.
* Full captions with keyword highlights. This variant is a modification of <cit.> that also proposed highlighting keywords. However, we do not show translations of the words because the participants in the focus group considered translations distracting.
* Timed keyword-only captions. This caption type shows keywords at the exact time they are spoken, while all other words are removed. The idea is based on <cit.>, who proposed timed keyword captions as a means to focus on vocabulary learning without the distraction caused by the full transcript. Our focus group also confirmed the potential of time synchronization.
* Full captions with timed keyword highlights. This variant is a hybrid of the timed keyword captions and the keyword highlights and was introduced to guide the viewers' attention while still providing context. With this variant, we aim to compensate for the potential mismatch between keyword selection and learner knowledge.
§.§ Hypotheses
We derive the following hypotheses concerning measures for user experience (UX), perceived comprehension and learning, and vocabulary recall. Assessing UX and perceived comprehension helps us understand what type of captions learners are potentially willing to use in everyday life. We also added vocabulary recall to position the effectiveness of our designs in relation to prior work, but this was not the primary focus for our study.
Hypotheses are based on related work, the focus group, and the survey.
H1a: The pragmatic quality is rated highest for ⊶. We expect this as viewers know this variant and feel most comfortable using it.
H1b: The hedonic quality is rated highest for . We expect this variant to be considered innovative and providing a good balance between context on the overall scene and focus on potentially challenging aspects.
H2: and achieve the best perceived comprehension. Conversely, achieve the lowest perceived comprehension. Again, we assume the focus on potentially challenging aspects to be crucial. Even though <cit.> stressed the advantage of reducing captions to keywords and reducing reading times, we expect that the lack of context hinders understanding, especially when the keyword selection is not perfectly matched to the viewers' language level.
H3: and fare best for perceived learning. These are followed by because viewers perceive a lack of context; ⊶ are perceived as least suitable for learning.
H4: Both highlighted variants and improve vocabulary recall scores in comparison to standard ⊶. As all three enhanced designs put additional focus on keywords, we expect them to attract the viewers' attention.
§ USER STUDY
To assess the hypotheses introduced in <ref>, we conducted a within-subject study with 49 participants.
Specifically, we compared the user experience, learning, and perceived comprehension with the four different caption types applied to four scenes from the movie Marriage Story, a 2019 movie that follows a couple's divorce.
As one of the proposed top 10 movies for “people at C1 level” <cit.>, Marriage Story is suitable for our target group of medium- to high-proficiency learners. The movie contains many dialogues, is non-violent overall, and it was easy to select non-explicit scenes with a diverse vocabulary.
§.§ Caption Generation and Video Preparation
We manipulate original srt files by removing non-keywords, adding highlights, or running forced alignment to adjust timestamps. <ref> gives an overview of the processing pipeline. We use a Python architecture with the pysrt package[<https://github.com/byroot/pysrt>] for working with subtitle files.
For all variants, the first step is the detection of keywords to determine what needs to be displayed or removed and what needs to be highlighted.
We follow a reverse approach, i.e., we mark a word as a keyword if it does not occur in non-keyword lists. For identifying words at a specific language level, we follow the approach proposed by <cit.>, who analyzed the vocabulary usage of a large number of movies. In particular, we merge the Oxford 5000 list[<https://www.oxfordlearnersdictionaries.com/about/wordlists/oxford3000-5000>] with the BNC/COCA corpus <cit.> to estimate the language level not only for the word stems but also the derived word forms and to remove proper names. When word levels are not uniquely identifiable (e.g., the stem “accept” is considered an A1 word, while “acceptance” is C1), we manually check for false positives. That is, we remove easy and frequent words that are not actually B2+ keywords.
We then mark keywords in the subtitles files with HTML font styling.
Finally, we run a Gentle[<https://github.com/lowerquality/gentle>] server for forced speech alignment. In case a keyword is highlighted for less than 500ms, we extend the display duration by 300ms or until the next caption line is shown.
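To make the preprocessing concrete, a minimal sketch of the srt manipulation with pysrt is shown below. The known-word set, file names, and highlight markup are placeholders rather than the actual word lists and styling used in the study, and the Gentle forced-alignment and display-time extension steps are only indicated in comments.

```python
import pysrt

# Placeholder stand-in for the merged Oxford 5000 + BNC/COCA lookup:
# any word found in this set is treated as known (i.e., not a B2+ keyword).
KNOWN_WORDS = {"the", "a", "and", "to", "of", "you", "want", "time"}

def is_keyword(word: str) -> bool:
    """Reverse approach: a word is a keyword if it is not on the known-word lists."""
    return word.lower().strip(".,!?;:\"'") not in KNOWN_WORDS

def highlight(word: str) -> str:
    """Mark a keyword with HTML font styling (yellow highlights in the study)."""
    return f'<font color="#ffff00">{word}</font>'

subs = pysrt.open("scene1.srt")                    # hypothetical input file
highlighted_subs = pysrt.SubRipFile()              # full captions with keyword highlights
keyword_subs = pysrt.SubRipFile()                  # keyword-only captions

for item in subs:
    words = item.text.split()
    marked = " ".join(highlight(w) if is_keyword(w) else w for w in words)
    keywords = " ".join(w for w in words if is_keyword(w))

    highlighted_subs.append(pysrt.SubRipItem(item.index, item.start, item.end, marked))
    if keywords:  # the keyword-only variant drops caption lines without any keyword
        keyword_subs.append(pysrt.SubRipItem(item.index, item.start, item.end, keywords))

highlighted_subs.save("scene1_highlighted.srt", encoding="utf-8")
keyword_subs.save("scene1_keywords_only.srt", encoding="utf-8")
# The keyword-only timestamps are subsequently replaced by Gentle forced-alignment
# output, and display times below 500 ms are extended by 300 ms (not shown here).
```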
We also used the script proposed by <cit.> to determine suitable scenes.
For this, we evenly partitioned the subtitle file into 30 parts and counted B2+ word (keyword) occurrences in each part. We manually extracted scenes from high-keyword partitions and verified that the scenes did not include explicit content.
Finally, we prepared all four caption types for the resulting four movie clips of 2-3 minutes, leading to 16 preprocessed caption + video combinations. The video clips contained 24, 30, 39, and 41 keywords, respectively. Because of the higher density of keywords and partially overlapping speech, clips three and four were slightly more difficult than the first two.
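The scene selection step can be approximated with a short script along the following lines; the partitioning by caption lines, the reuse of the is_keyword helper from the sketch above, and the file name are assumptions, since the original script from <cit.> is not reproduced here.

```python
import pysrt

def keyword_density(srt_path: str, is_keyword, n_parts: int = 30):
    """Split the subtitle file into n_parts chunks of caption lines and count
    how many keywords (B2+ words) occur in each chunk."""
    subs = pysrt.open(srt_path)
    part_size = max(1, len(subs) // n_parts)
    densities = []
    for p in range(n_parts):
        chunk = subs[p * part_size:(p + 1) * part_size]
        count = sum(1 for item in chunk for w in item.text.split() if is_keyword(w))
        start = chunk[0].start if chunk else None
        densities.append((p, count, start))
    return densities

# Rank partitions by keyword count; the densest ones are then inspected manually
# to extract 2-3 minute scenes without explicit content.
top_parts = sorted(keyword_density("marriage_story.srt", is_keyword),
                   key=lambda entry: entry[1], reverse=True)[:4]
```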
§.§ Procedure
The study was implemented as an online survey and could be taken in Spanish or German. Once participants had read the study information and given their consent, we asked them about their experience with subtitles and their prior knowledge of English. We also included a vocabulary pre-test modeled after Nation's Vocabulary Size Test[<https://www.wgtn.ac.nz/lals/resources/paul-nations-resources/vocabulary-tests>, last accessed 2022-09-10]. The pre-test included multiple-choice questions on five keywords from each scene and four distractor items that did not occur in the videos.
Participants then watched four movie clips, each with a different condition. Directly after each video, they responded to the UEQ-S <cit.> in the official Spanish or German version[Translations taken from <https://www.ueq-online.org>]. They rated their comprehension of the content and language and their overall impression of the caption variant.
The order of presentation and the pairing of the movie clip and caption variant were counterbalanced, using four of the 16 preprocessed videos for each participant.
After the four clips, we asked participants to what extent they had focused on learning, comprehension, and entertainment and asked them to rank the suitability of the caption variants for these goals. The last part of the survey was a vocabulary post-test.
Finally, two days later, participants took a second vocabulary post-test to accommodate for initial memory consolidation <cit.>.
We provide a full list of measures in <ref>.
We collected the demographics via Prolific.
§.§ Participants
We recruited native Spanish and German speakers who did not live in English-speaking countries via Prolific[<https://prolific.co>]. Forty-nine participants completed the study. Of these, 17 identified as female and 32 as male. They were between 19 and 49 years old (M = 30.3, SD = 8.1 years). The 18 German speakers were residents of Germany (12), Austria (5), and Switzerland (1). The 31 Spanish speakers were residents of Mexico (16), Spain (9), Chile (5), and Portugal (1).
They self-assessed their English level at A2 (3), B1 (6), B2 (16), C1 (22), or C2 (6) on the CEFR scale.
The study took approximately 45 minutes, and participation was compensated with £8.5.
§ RESULTS
This section presents the study results, with a focus on the participants' experiences and perceptions, following the hypotheses from <ref> and closing with a final ranking and an outlook on participants' envisioned designs.
§.§ Analysis
We validate the hypotheses for the four caption types with a repeated-measures ANOVA, with ⊶ serving as the baseline comparison. We apply a Greenhouse-Geisser correction when a Mauchly's test indicates a violation of the sphericity assumption. In case of a significant result, we follow up with pairwise post-hoc tests using a Holm correction and report Cohen's d for effect sizes.
We apply non-parametric Friedman tests with Holm-corrected Conover post-hoc tests for questions with a single ordinal scale.
All tests are performed with JASP <cit.>.
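The analyses were run in JASP; for readers who prefer a scripted workflow, a rough equivalent of the main tests in Python with the pingouin package (version 0.5 or later) might look like the sketch below. The column names and input file are hypothetical, and this is not the pipeline used to produce the reported results.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant x caption type,
# e.g., with the UEQ-S pragmatic-quality score as the dependent variable.
df = pd.read_csv("ueq_scores_long.csv")  # columns: participant, caption_type, score

# Repeated-measures ANOVA with a sphericity correction when needed.
anova = pg.rm_anova(data=df, dv="score", within="caption_type",
                    subject="participant", correction=True)

# Holm-corrected pairwise post-hoc tests with Cohen's d as the effect size.
posthoc = pg.pairwise_tests(data=df, dv="score", within="caption_type",
                            subject="participant", padjust="holm", effsize="cohen")

# Non-parametric Friedman test for single ordinal items (e.g., rankings);
# Conover post-hoc tests are available in the scikit-posthocs package.
friedman = pg.friedman(data=df, dv="score", within="caption_type",
                       subject="participant")

print(anova, posthoc, friedman, sep="\n\n")
```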
To illustrate potential explanations of identified trends, we augment the report with exemplary participant statements[Translated to English if necessary]. For the preferred caption designs, we cluster all available responses and inductively derive general themes.
§.§ User Experience (H1)
As seen in <ref>, and were rated best on the UEQ-S items representing the hedonic quality.
Pairwise post-hoc tests show significant differences between almost all conditions: ⊶ fare worse than (t = -5.17, p < 0.001, d = -0.97), (t = -4.86, p < 0.001, d = -0.91), and (t = -2.56, p = 0.041, d = -0.48). was rated better than (t = 2.30, p = 0.046, d = 0.43), and so was (t = 2.60, p = 0.041, d = 0.49).
With respect to the pragmatic quality, were clearly outperformed by the three other conditions.
Accordingly, pairwise comparisons show that performs significantly worse than ⊶ (t = 10.29, p < 0.001, d = 1.96), (t = 9.30, p < 0.001, d = 1.77), and (t = 8.10, p < 0.001, d = 1.54). The remaining comparisons showed no significant differences.
In H1a, we posited that the pragmatic quality would be rated highest for ⊶. However, and performed similarly well.
As expected in H1b, the hedonic quality was highest for , although came close. Thus, the benefit of time-synchronization was not as large as expected.
§.§ Perceived Comprehension of Language and Content (H2)
As shown in <ref>, ⊶ fared best for perceived language comprehension, closely followed by and . achieved a very low overall score at MD = 2 and was significantly worse than all other conditions (all p < 0.01). All caption types substantially contributed to content comprehension, with no median score below 5 (out of 6).
This means that, as predicted in H2, achieved the lowest perceived comprehension. However, contrary to our expectations, ⊶ was comparable for content comprehension and slightly better for language comprehension than and .
§.§ Perceived Learning (H3)
The high median of 5 or 6 for all caption types on the question “I feel that I can learn new words very well with this caption variant,” suggests that overall, participants considered all caption types helpful for learning (cf. <ref>). Conover post-hoc tests still indicated that captions were significantly less suitable for learning than the other three types (all p ≤ 0.01).
There were no significant differences between the other conditions, and H3 cannot be confirmed.
§.§ Vocabulary Recall (H4)
The participants' prior knowledge of the tested vocabulary was high overall. On average, they correctly answered 88.0% of the 24 questions in the vocabulary test before watching the videos, 87.5% in the test right after, and 89.0% in the 2-day delayed post-test.
There were no differences in the score changes from before watching the videos to the 2-day delayed post-tests when differentiated by caption type.
We observed clear ceiling effects: some participants already knew all the words tested for a condition and could, therefore, not improve their score.
In the survey, two people admitted that they looked up words, and several others may have done so.
All in all, we cannot confirm H4. We did not identify any differences in the keyword recognition scores.
§.§ Final Ranking
The assessments above also align with the final ranking of the suitability for comprehension, entertainment, and learning after watching all videos (cf. <ref>). ⊶ captions were top-ranked for comprehension and entertainment,
while was top-ranked for learning. obtained the lowest overall ranking for all three use cases.
This was also reflected in the absolute rating of the caption types: On a scale from 1 to 7, participants liked ⊶ best (MD = 7, SD = 1.01). (SD = 1.58) and (SD = 1.72) were both rated at a median score of 6, and at 3 (SD = 1.71).
The participants' statements on the caption types give insights into possible reasons for the individual rankings.
Notably, ⊶ captions were considered helpful for comprehension because they are “familiar” (P49), “straightforward” (P48), and “efficient and non-disruptive” (P18). P12 described this type as “very clear, I understood everything perfectly.”
According to P21, they are “excellent for understanding spoken English in specific contexts.”
Typical comments explaining the participants' assessment of the and captions show that they were considered helpful but also distracting.
For example, for , P27 noted that “as long as the video and audio are aligned, this type of viewing captions is agreeable to also learn sentence construction and figures of speech. Sometimes, it distracts from the video because it takes more time to read the full sentences.”
Similarly, P10 explained that “if you want to pay attention to [comprehension and learning], highlighted words distract a bit. I see their use when someone is trying to learn new vocabulary.”
P3 felt that “highlighting some words can make you loose time while reading because the brain will focus on this specific word.”
Time-synchronization tended to increase the perceived level of distraction:
P17 stated that they “started to think about which word will turn yellow next”
and P18 added that “The yellow words can be a bit distracting for people that already [know] pretty well the meaning.”
Similarly, P7 liked seeing the highlights before they were spoken, so “you can anticipate the focus on the moment where it is mentioned.”
On the other hand, P49 found that captions seemed to “support you in paying more attention to the plot than with `normal' subtitles.”
The comments also illustrate why some participants felt that captions were not ideal for content and word comprehension. For example, P1 noted that they felt “distracted” because this caption type was “more focused on drawing the attention towards certain words than on helping with the plot.”
Moreover, eleven participants explicitly mentioned that they lacked context when they only saw keywords or preferred types that provided full context.
For example, P42 said “The keywords alone do not contribute at all to the understanding of the context for me.”
Similarly, five participants found that showing all words was helpful for comprehension.
Another issue was the selection of keywords: P27 noted that “the selected words did not necessarily coincide with [their] interest”
and P42 found the highlighting of words in background conversations confusing.
§.§ Preferred Caption Designs
As an outlook, we asked participants how they would design their own captions. We clustered responses in <ref>. Sixteen participants said they would stick to standard captions with no or almost no modification, largely because this is what they and other viewers are already used to.
Fifteen participants described a design very close to (time-synchronized) keyword highlights, adding some suggestions such as different typesetting. Thirteen participants listed additional elements to be included or changed in the captions, for example, different colors to distinguish speakers or background information on certain words.
§ DISCUSSION AND LIMITATIONS
By providing insights into the user perspective on captioned videos, we support researchers and practitioners in motivating users to embed learning activities into their everyday viewing experiences.
In particular, the opportunities and challenges we identified—such as the need for context, habits, distractions, and the potential to focus attention—inform the design of captioning for learning, comprehension, and entertainment.
§.§ Distractions Outweigh the Potential of Enhanced Captions for Entertainment and Comprehension
Although and performed better than or similar to ⊶ on various measures, the overall ranking in <ref> clearly shows that standard captions were the go-to solution in terms of comprehension and entertainment; only in the learning dimension, overtook ⊶.
Specifically, and were similarly attractive alternatives on the pragmatic subscale of the User Experience Questionnaire and were rated higher on the hedonic subscale.
Similarly, the number of participants describing their preferred captions as a variant of ⊶ or (Timed) captions was almost the same.
Still, it seems that due to the increased potential for distractions, the two caption variants that used highlighted keywords were not perceived as sufficiently agreeable, innovative, or helpful to overrule the influence of habits and familiarity.
The ranking and participant statements further indicate that learners are only willing to accept divided foci of attention in a learning scenario.
Research on visual perception agrees that sudden and easily distinguishable stimuli attract a viewer's attention <cit.>. Thus, it is unsurprising that a colored and/or suddenly appearing keyword will achieve this.
So, while <cit.> recommended timed keyword captions as a good alternative to standard captions because of the high density of relevant words, our findings suggest that participants did not like the viewing experience with timed changes and bright colors.
§.§ Choosing Ideal Keywords is Hard – Optimize Designs for Heuristics and Curricula
We chose our keywords based on a word frequency corpus aligned with estimated language levels. This is a typical approach in language learning and was, for example, also used by <cit.>. In other projects, keywords were based on expert ratings <cit.> or a pre-test <cit.>.
However, especially in our interconnected world and for a ubiquitous language such as English, it is almost impossible to perfectly model a learner's prior knowledge to predict unknown vocabulary.
In fact, several participants in our study mentioned that the selected keywords did not match their expectations.
Moreover, watching movies is often a social experience including two or more people, and adding another person to the equation complicates the process even further.
This means that keyword highlights will, at most, be an educated guess. But how critical is this, really? We argue that a suitable caption design that balances distractions, context, and focus is more crucial.
In particular, we expect that highlighting a few words too many will not have a dramatic impact on the viewing experience, as long as they do not annoy or distract the viewer (as was the case in our study).
Consequently, we recommend a conservative selection of keywords. In addition, less obtrusive highlighting, such as bold or italic print, could be used (see also textual enhancement strategies <cit.>).
Furthermore, in the movie analyses performed by <cit.>, a substantial share of the vocabulary was estimated at B2 level or lower, indicating that the number of keywords in most movies will not surpass a certain threshold.
To preserve the context, the participants of our study demanded full captions. This is also beneficial with respect to imprecise keyword selection: full captions ensure that false negative keywords (unknown words that are not highlighted) will still be visible, albeit not highlighted.
Alternatively, captioned viewing could be aligned with classroom learning. We suggest a crowdsourced approach to collect target word lists. For example, <cit.> proposed a system for correcting auto-generated captions that could be extended with a feature for learners to highlight words relevant to their language class.
§.§ Limitations and Future Work
Our initial hope was that our caption enhancements would foster learning without causing a negative impact on the viewing experience. If this were the case, there would be no reason for viewers to stick with standard captions.
However, enhanced captions were only top-ranked for a learning scenario. This highlights the need for further adaptations to make the viewing experience with enhanced captions similarly enjoyable. Currently, we do not know to what extent this preference was caused by our design choices, such as using the yellow color for highlights. Consequently, future work should analyze the effect of design choices, factoring in findings from label design <cit.>.
We also encountered technical and methodological challenges during the implementation and evaluation of the caption types.
Notably, our processing pipeline is not yet fully automated and can, therefore, not be applied at scale.
For example, in two of the scenes we used, the lines of two characters partially overlapped. This required swapping some lines for the forced alignment, which our system is currently not capable of doing automatically.
In addition, although we aim to support implicit learning in everyday life, the constraints of our user study meant that we were not able to capture implicit learning directly. A long-term, in-situ study would be necessary to measure changes in the overall language level.
§ SUMMARY AND CONCLUSION
In this paper, we implement and evaluate three enhanced caption types that increase the focus on target words in language learning by highlighting and/or displaying words synchronized with the audio track. To gather viewers' opinions on these captions, we conducted an online survey evaluating the user experience, perceived comprehension, and vocabulary recognition with our enhanced caption types compared to standard captions. We discovered that participants preferred captions with highlights in a learning scenario but felt that they were too distracting for an everyday viewing experience.
These findings highlight challenges in the widespread adoption of captions optimized for learning in language learners' everyday lives.
§ SURVEY MEASURES
Questions on subtitles and captions included in the online survey (Question (translated to English) | Type of Question):
* I like to use subtitles/captions very much (any language). | 5-point Likert scale
* How often have you used subtitles/captions in the past 30 days? | Selection menu
* In what situations do you use subtitles/captions? (any language)? | Selection menu with option to specify own
* How do you set subtitles/captions when the video is in a foreign language (any language)? | Selection menu with option to specify own
* If you could design your own subtitles/captions, how would they look? | Text field
§ USER STUDY MEASURES
Measures and questions included in the user study (Measure | Question (translated to English) | Type of Question):
* Demographics | How old are you? | Text Field
* Demographics | How do you identify yourself? | Selection menu with option to specify own
* Demographics | In which country do you currently live? | Selection menu with option to specify own
* Demographics | What level of education do you have? | Selection menu with option to specify own
* Demographics | What is your current occupation? | Selection menu with option to specify own
* Demographics | What is your native language? | Selection menu with option to specify own
* English Experience | How often do you speak English? | Selection menu
* English Experience | How often do you need to understand English (for example, when reading or on the Internet)? | Selection menu
* English Experience | What is your English language level? | Selection menu
* Vocabulary Pre-Test | What synonym or definition can you use to meaningfully replace the words in angle brackets in the following sentences? | 4 Options per question
* Caption Habits & Preferences | I like to use subtitles very much (no matter in which language). | 7-Point Likert Scale
* Caption Habits & Preferences | How often have you used subtitles (in any language) in the last 30 days? | Selection menu
* Caption Habits & Preferences | How do you set the subtitles if the video is in a foreign language (any language)? | Multiple Choice Selection menu with option to specify own
* User Experience | UEQ-S | 7-Point Likert Scale
* Self-Assessment | I understood the language very well. | 6-Point Likert Scale
* Self-Assessment | I understood the plot very well. | 6-Point Likert Scale
* User Experience | Watching the video with this kind of subtitles was very pleasant. | 6-Point Likert Scale
* Self-Assessment | I have the impression that I can learn new words very well with this subtitle variant. | 6-Point Likert Scale
* User Experience | I can very well imagine using this kind of subtitles myself. | 6-Point Likert Scale
* User Experience | I really like this subtitle variant overall. | 7-Point Likert Scale
* Additional Feedback | Is there anything else you would like to say? | Text field
* Self-Assessment | How much did you pay attention to the following aspects while watching the videos? (Scene understanding, Learning new words, Entertainment) | 6-Point Likert Scale for each
* User Experience | Please sort all subtitle variants according to how well you like them if the focus is on learning new vocabulary. | Option to sort all 4 variants
* Additional Feedback | Why did you sort the variants in this way? | Text field
* User Experience | Please sort all subtitle variants according to how well you like them if the focus is on entertainment/pleasure. | Option to sort all 4 variants
* Additional Feedback | Why did you sort the variants in this way? | Text field
* User Experience | Please sort all subtitle variants according to how well you like them when the focus is on scene comprehension. | Option to sort all 4 variants
* Additional Feedback | Why did you sort the variants in this way? | Text field
* Desired Captions | If you could design your own subtitles, what would they look like? | Text field
* Vocabulary Retention | What synonym or definition can you use to meaningfully replace the words in angle brackets in the following sentences? | 4 Options per question
* Additional Feedback | Is there anything else you would like to say? | Text field
|
http://arxiv.org/abs/2307.05470v1 | 20230708213703 | A Robust and Efficient Optimization Model for Electric Vehicle Charging Stations in Developing Countries under Electricity Uncertainty | [
"Mansur Arief",
"Yan Akhra",
"Iwan Vanany"
] | math.OC | [
"math.OC",
"econ.GN",
"q-fin.EC",
"stat.AP"
] |
Mansur M. Arief (corresponding author, [email protected]), Department of Aeronautics and Astronautics Engineering, Stanford University, 450 Serra Mall, Stanford, CA 94305, USA
Yan Akhra and Iwan Vanany, Department of Industrial and Systems Engineering, Institut Teknologi Sepuluh Nopember, Sukolilo, Surabaya 60111, East Java, Indonesia
The rising demand for electric vehicles (EVs) worldwide necessitates the development of robust and accessible charging infrastructure, particularly in developing countries where electricity disruptions pose a significant challenge. Earlier charging infrastructure optimization studies do not rigorously address such service disruption characteristics, resulting in suboptimal infrastructure designs. To address this issue, we propose an efficient simulation-based optimization model that estimates candidate stations' service reliability and incorporates it into the objective function and constraints. We employ the control variates (CV) variance reduction technique to enhance simulation efficiency. Our model provides a highly robust solution that buffers against uncertain electricity disruptions, even when candidate station service reliability is subject to underestimation or overestimation. Using a dataset from Surabaya, Indonesia, our numerical experiment demonstrates that the proposed model achieves a 13% higher average objective value compared to the non-robust solution. Furthermore, the CV technique successfully reduces the simulation sample size up to 10 times compared to Monte Carlo, allowing the model to solve efficiently using a standard MIP solver. Our study provides a robust and efficient solution for designing EV charging infrastructure that can thrive even in developing countries with uncertain electricity disruptions.
* Proposed a simulation-based optimization model to design optimal EV charging station infrastructure that can withstand uncertain power supply in developing countries.
* Used control variates (CV) variance reduction technique to enhance simulation efficiency and provide a highly robust solution that buffers against uncertain electricity disruptions.
* Numerical experiment using data from Surabaya, Indonesia showed the proposed model achieved 13% higher average objective value compared to the non-robust solution.
* The enhanced simulation efficiency through CV reduces the required sample size by a factor of 10 compared to Monte Carlo simulations.
* The proposed model showcases a potential to provide a robust solution to the challenges associated with EV charging infrastructure under random electricity disruptions in developing countries.
Keywords: electric vehicle charging station; developing country; uncertainty; variance reduction
§ INTRODUCTION
The growing global demand for electric vehicles (EVs) has brought to the forefront the need for reliable and easily accessible EV charging infrastructure. According to a report by the International Energy Agency, as numerous governments set ambitious goals for electrifying their transportation systems, worldwide EV demand has grown exponentially in recent years. In 2010, there were only approximately 17,000 EVs on the world’s roads. In 2019, for instance, China led the global EV market, with more than 1 million EVs sold that year (more than 50% of global EV demand), followed by the whole of Europe with 561,000 cars sold and the USA with 327,000 cars sold. This trend is projected to persist in the upcoming years <cit.>.
Developing countries are also striving to promote EV adoption, coupled with greener electricity <cit.>, to expedite the achievement of their sustainability goals. For example, Indonesia has set an ambitious target of having 20% of all automobile sales be electric by 2025, with a long-term goal of achieving fully electrified transportation by 2050 <cit.>. However, developing countries like Indonesia face significant infrastructure constraints that must be addressed to achieve these goals. In particular, the availability of EV charging infrastructure is crucial to support the widespread adoption of EVs. In Indonesia, there were only 240 public EV charging points across the country as of 2021 <cit.>, whereas an estimated 31,000 EV charging stations are required to support the sustainable electrification of vehicles throughout the country <cit.>.
This infrastructure gap is not unique to Indonesia; many other developing countries face it as they seek to support the growth of EV adoption. Tackling this challenge by designing a convenient and reliable EV charging network is, however, a very complex task. To ensure convenient locations, it is essential to consider factors such as population density or the potential EV demand distribution <cit.>. However, in major cities in developing countries, finding suitable land for charging stations may be challenging due to limited space availability. Furthermore, service uncertainty, including the electricity supply, is one of the most significant issues in developing countries. Implementing smart charging strategies <cit.> becomes hardly feasible under electricity supply uncertainty. Outages and other electricity disruptions occur frequently, posing a significant problem for users who demand reliable service.
To address this challenge, our study proposes a robust solution for designing EV charging infrastructure that accounts for electricity disruptions in developing countries. We introduce a simulation-based optimization model that estimates the service reliability of candidate charging stations and incorporates this information into the objective function and constraints. Because it relies on simulation, this approach is more versatile than previous works that assume a disruption probability model is available. Additionally, we employ a variance reduction technique called control variates (CV) to enhance simulation efficiency, reducing the required sample size by up to 10 times compared to naive Monte Carlo (MC) simulation. This results in an efficient mixed-integer programming (MIP) model whose optimal solutions strike a balance between minimizing the total cost of operating and investing in the charging infrastructure and providing high-quality service to the public. Fig. <ref> compares the traditional modeling approach without variance reduction and the proposed framework, which uses variance reduction to achieve a tighter confidence interval (hence a much more precise output) with less computational burden.
Our work contributes in three key ways. Firstly, we propose a model that specifically addresses the critical issue of electricity disruption in EV charging station planning, particularly in developing countries. Secondly, we integrate the estimation of disruption probabilities into our model, providing a more data-driven approach compared to previous works that assumed available disruption probability models apriori. Finally, our study demonstrates the robustness of the proposed model in solving EV charging infrastructure problems by comparing its performance to a non-robust model, even when disruption probabilities are slightly under or over-estimated. Our numerical experiment, based on an EV dataset from Surabaya, Indonesia, shows that our model achieves a 13% higher average objective value compared to the non-robust solution, highlighting its superior performance to help build sustainable and thriving ecosystems for EVs, both in developed and developing countries in the years to come.
The rest of this paper is structured as follows. In Section <ref>, we provide a concise overview of the literature related to the optimization of EV charging infrastructure. We then present the proposed model formulation in Section <ref>, including our approach to incorporating the CV technique to estimate the service reliability (i.e., the complement of the disruption probability). In Section <ref>, we describe the experiment settings, and in Section <ref> we discuss the main findings. Finally, we conclude our work in Section <ref>.
§ LITERATURE REVIEW
In this section, we briefly review earlier works directly related to the planning of EV charging infrastructure and relevant case studies that motivate our approach. Examining these earlier works offers insight into the evolution of methodologies, leading to the proposed work, which uniquely introduces a combination of stochastic modeling and variance reduction techniques. The summary is provided in Table <ref>.
The planning of EV charging infrastructure can be viewed as a facility location problem, which aims to minimize an objective function subject to constraints related to the desired performance of the network facilities. Early studies, including those by <cit.> and <cit.>, adopted deterministic models focusing on minimizing charging stations and development costs, respectively. <cit.> sought to maximize service demand, whereas <cit.> aimed to minimize infrastructure and access costs. Similar objectives were pursued by <cit.>, <cit.>, and <cit.>, with deterministic models being the common methodology.
Several other studies, like those conducted by <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>, continued the trend of deterministic models, exploring various aspects of EV charging station optimization. Other researchers, including <cit.>, <cit.>, <cit.>,<cit.>, <cit.>, <cit.>, and <cit.>, focused on minimizing the number of charging stations or the operating cost, or maximizing the EV flow coverage.
Another line of work integrates charging infrastructure into the smart-grid design <cit.> or other renewable energy sources such as solar cells <cit.>. While this approach provides an integrated solution to renewable energy issues and amplifies the positive impact of EVs on the environment, it may not be practical for urban areas in developing countries. A comprehensive review of charging infrastructure designs is presented by <cit.>, emphasizing the need for increasingly detailed modeling that accounts for randomness and variability. However, there is a lack of rigorous real-world case studies that emphasize uncertainty quantification in the modeling framework.
Several case studies have been conducted in both developed and developing countries. For example, <cit.> studied the problem of slow-charging technology in Lisbon, where vehicles are often parked overnight. In contrast, <cit.> considered both fast- and slow-charging technologies, focusing on robustly covering all demands and avoiding partial fulfillment in the city of Toronto. Another case study was conducted by <cit.> using a GIS-based model in Ankara and adopting a fuzzy approach. A city-scale simulation was developed for Singapore by <cit.>, focusing on the trade-off between cost minimization and customer accessibility maximization. Lastly, <cit.> proposed a set covering model for EV charging stations in Surabaya but ignored electricity disruptions and only used redundant demand coverage as a buffer against uncertainty, resulting in an overly simplified model and sub-optimal solutions.
In light of these studies, it is clear that the EV facility location problem is a complex and multifaceted issue that requires a tailored approach for different regions and contexts. Developing countries, in particular, may face unique challenges, such as power electricity disruptions, that must be considered in the planning and design of EV facilities. Such disruptions and uncertainty are addressed only in a handful of studies. For instance, <cit.> uses a multi-criteria decision-making approach aiming to strike a balanced solution against flooding disruption that maximizes the charging convenience, minimizes the impact of flood hazards, and minimizes the impact of existing charging stations using TOPSIS. <cit.> integrates the electric bus charging stations with photovoltaic and energy storage systems using a two-stage stochastic programming model, enabling them to incorporate the uncertainty of PV power outputs. <cit.> optimizes the size of the energy storage system considering the annualized cost, penalty cost for buying power during peak hours, and penalty cost for resilience violations. Other works that consider stochastic modeling include <cit.>, which directly use either structure of the stochastic models or simulations to represent elements of uncertainty into their optimization models. The caveat is that the resulting model can be extremely hard to solve, especially when a solution with high confidence is desired.
The proposed work extends the use of stochastic modeling and introduces control variates <cit.>, a variance reduction technique that can speed up a simulation-based optimization model, to the field. We propose an approach that accounts for electricity disruptions via simulation and controls the resulting uncertainty in the objective value by adjusting the simulation sample size. Simulation modeling enables the modeler to adjust the degree of modeling fidelity, depending on the prior knowledge available, and can be easily verified by estimating the probability of electricity disruptions and comparing it with available historical data. The resulting simulation-based robust model can be accelerated using variance reduction techniques (i.e., control variates), and it offers a more accurate and practical approach for planning and designing EV charging infrastructure that considers uncertainty and disruptions. The integration of stochastic modeling and control variates sets this work apart from previous research, potentially paving the way for more efficient and effective EV charging station location optimization solutions.
§ MODEL FORMULATION
In this section, we describe our modeling components, including the decision variables, objective function, constraint set, model benchmarks (robust and non-robust model), and the CV method we employ to improve simulation efficiency.
§.§ Decision Variables
We consider a set of demand nodes I and supply nodes J, representing sub-district centers and charging station candidate locations in the region under study. We also consider K vehicle types, representing different vehicle modalities that the residents use for commuting (here, we consider two modalities: electric motorcycles and electric cars). The average time to travel from node i ∈ I to node j ∈ J is denoted by d_ij. A threshold parameter d_max is introduced as an upper bound for this travel time as a proxy to study the robustness of the solution w.r.t. consumer time-to-travel for charging.
The decision variables include binary variables
x_j indicating whether the charging station candidate j is selected or not and y_ij indicating whether demand node i is to be assigned to be served by charging station j. In addition, we also use integer decision variables v_ij^k and u_j, denoting the number of electric vehicles of type k from node i charged at node j and the number of units of charging connectors installed at node j, respectively.
x_j =
1, if station j ∈ J is selected
0, otherwise
y_ij =
1, if node i ∈ I is assigned to node j ∈ J
0, otherwise,
v_ij^k ∈{0, 1, ⋯}, ∀ i ∈ I, j ∈ J, k ∈ K
u_j ∈{0, 1, ⋯}, ∀ j ∈ J
Each opened station j incurs a daily cost h_j and can only accommodate q_j charging connectors due to limited space. Each charging connector incurs g daily operational cost and has a limited daily charging throughput of c_j kWh. A vehicle type k takes e_k kWh energy and t_k time to charge using fast-charging technology. We use the electricity price denoted by r to convert the energy used to monetary value.
§.§ Objective Function
The objective is to maximize daily profits under random disruption events at each station, i.e., the revenue from all undisrupted stations minus operational and investment costs. We add a penalty term for any unmet customer demand due to disruptions, which allows us to study, in the ablation study, proper incentive mechanisms for achieving more robust solutions.
To this end, we consider each charging station j ∈ J to have a reliability
p_j = ℙ(Z_j ≤ z_j) = 𝔼 [𝕀(Z_j ≤ z_j)].
The disruption events are simulated utilizing the random variable Z = [Z_j]_∀ j ∈ J∼ q. Z_j represents the underlying state triggering an electricity disruption at station j whenever it exceeds some threshold z_j. In practice, electricity disruption events may occur due to extreme weather, spiking demand, or fallen trees <cit.> (in which case Z_j might represent the wind speed, the cumulative region-wide demand, or the weight of a fallen tree branch hitting electrical equipment, respectively, and z_j is the equipment threshold for withstanding the corresponding realization of Z_j). <cit.> presents a review of how EV charging infrastructure strains the electricity grid, which, in turn, exacerbates the likelihood of electricity outages, especially in developing countries.
With this consideration, the objective function can be formulated as follows, assuming we have prior information about p_j, ∀ j ∈ J:
max ∑_i ∈ I∑_j ∈ J∑_k ∈ Kr e_k p_j v_ij^k_revenue - s d_ij (1-p_j) v_ij^k_penalty
- ∑_j ∈ J(g u_j + h_j x_j)_total cost.
On the other hand, if p_j is not available, then we can use simulation to estimate the following objective:
max ∑_i ∈ I∑_j ∈ J∑_k ∈ K r e_k v_ij^k 𝔼[𝕀(Z_j≤ z_j) ]_revenue - s d_ij v_ij^k 𝔼[𝕀(Z_j > z_j) ]_penalty
- ∑_j ∈ J(g u_j + h_j x_j)_total cost,
where 𝕀(Z_j ≤ z_j) is a binary indicator of whether station j is free of disruption.
𝕀 (Z_jl≤ z_j) = 1, if Z_jl≤ z_j
0, otherwise.
Monte Carlo (MC) simulation is one of the most practical methods to achieve this. MC uses n i.i.d. copies of the random variable to estimate the expectation. For each j ∈ J, we first generate Z_j1, Z_j2, ⋯ Z_jn. We then check if the disruption event is triggered or not at the l-th sample and output the binary indicators I_jl = 𝕀 (Z_jl≤ z_j). Then, we use the binary indicators in our final (robust) objective function:
max ∑_i ∈ I∑_j ∈ J∑_k ∈ K∑_l=1^n1/n( r e_k v_ij^k I_jl_revenue - s d_ij v_ij^k (1-I_jl)_penalty) - ∑_j ∈ J (g u_j + h_j x_j)_total cost.
We call our model the Robust Model in the experiment, to contrast with the original (Non-Robust) model proposed by <cit.>, which is attained when setting I_jl = 1 for all j ∈ J, l ∈{1, 2, ⋯ n} in (<ref>) during optimization. The solutions of both models are evaluated under random disruption events generated using a different random seed.
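For concreteness, the Monte Carlo indicators I_jl can be generated with a few lines of code. The sketch below assumes a Gaussian disruption-state model with placeholder parameters and thresholds, which is only one possible choice for q and z_j.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

n_stations = 12                 # |J|, illustrative
n_samples = 1_000               # n Monte Carlo replications
mu, sigma = 0.0, 1.0            # placeholder parameters of q = N(mu, sigma)
z = rng.uniform(1.5, 2.5, n_stations)    # placeholder thresholds z_j

# Z[l, j]: underlying disruption state of station j in replication l
Z = rng.normal(mu, sigma, size=(n_samples, n_stations))

# I[l, j] = 1 when station j is not disrupted in replication l (Z_jl <= z_j)
I = (Z <= z).astype(int)

# Monte Carlo estimate of each station's service reliability p_j
p_hat = I.mean(axis=0)
print(p_hat.round(3))
```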
§.§ Constraints
The maximization of the objective function in (<ref>) is subject to a set of constraints:
s.t. ∑_k ∈ K v_ij^k ≤ y_ij M, ∀ i ∈ I, j ∈ J,
d_ij y_ij≤ d_max , ∀ i ∈ I, j ∈ J,
∑_j ∈ J v_ij^k = w_i^k, ∀ i ∈ I, k ∈ K,
∑_i ∈ I∑_k ∈ K t_k v_ij^k ≤ c_j u_j, ∀ j ∈ J,
u_j ≤ x_j q_j, ∀ j ∈ J,
∑_i ∈ I y_ij≤ x_j M, ∀ j ∈ J,
∑_j ∈ J y_ij≥ 1, ∀ i ∈ I,
∑_j ∈ J x_j ≤ N
∑_j ∈ J∑_l=1^n 1/n y_ij I_jl≥p̅, ∀ i ∈ I
∑_j ∈ J∑_l=1^n 1/n v_ij^k I_jl≥∑_j ∈ J v_ij^kp̅, ∀ i ∈ I, k ∈ K
In the above formulation, constraint (<ref>) ensures that charging stations can only charge vehicles if assigned. Constraint (<ref>) ensures the maximum time-to-charge for consumers does not exceed the set threshold d_max. Constraint (<ref>) ensures all charging demands are fulfilled, where w_i^k denotes the number of vehicles of type k to charge at demand point i. Constraint (<ref>) ensures that the required charging capacity to fulfill each station's assigned demand does not exceed the installed capacity. Constraint (<ref>) restricts the number of charging connectors installed in each station. Constraint (<ref>) ensures that demands are assigned only to opened stations. Constraint (<ref>) guarantees that at least one stations cover each demand. Constraint (<ref>) limits the maximum number of stations to open.
Finally, constraints (<ref>)-(<ref>) ensure that the probability that at least one of the charging stations assigned to serve a given demand is not under an electricity outage is greater than or equal to p̅, assuming that outages at different stations are independent.
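A compact sketch of the resulting MIP in Python with PuLP is given below, using plugged-in reliability estimates p̂_j (for example, the p_hat values from the Monte Carlo sketch above). All data arrays are placeholders, the big-M values and capacity units are simplified, and PuLP/CBC merely stands in for any standard MIP solver.

```python
import random
import pulp

random.seed(0)
I_nodes, J_nodes, K_types = range(5), range(3), range(2)          # demand nodes, stations, vehicle types
d = [[random.uniform(5, 30) for _ in J_nodes] for _ in I_nodes]   # travel times d_ij [min]
w = [[random.randint(10, 40) for _ in K_types] for _ in I_nodes]  # demand w_i^k
e, t = [2.0, 40.0], [0.25, 1.0]        # kWh and charging hours per vehicle type
r, s, g = 0.1, 0.05, 50.0              # electricity price, penalty rate, connector cost
h, c, q = [400.0, 350.0, 500.0], [150.0, 150.0, 150.0], [6, 8, 6]
p_hat = [0.95, 0.90, 0.85]             # estimated reliability of each candidate station
d_max, p_bar, N_max, M = 30.0, 0.85, 3, 10_000

m = pulp.LpProblem("robust_ev_charging", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", J_nodes, cat="Binary")
y = pulp.LpVariable.dicts("y", (I_nodes, J_nodes), cat="Binary")
u = pulp.LpVariable.dicts("u", J_nodes, lowBound=0, cat="Integer")
v = pulp.LpVariable.dicts("v", (I_nodes, J_nodes, K_types), lowBound=0, cat="Integer")

# Expected revenue minus expected penalty minus operating and investment costs
m += pulp.lpSum(r * e[k] * p_hat[j] * v[i][j][k]
                - s * d[i][j] * (1 - p_hat[j]) * v[i][j][k]
                for i in I_nodes for j in J_nodes for k in K_types) \
     - pulp.lpSum(g * u[j] + h[j] * x[j] for j in J_nodes)

for i in I_nodes:
    for j in J_nodes:
        m += pulp.lpSum(v[i][j][k] for k in K_types) <= M * y[i][j]   # charge only if assigned
        m += d[i][j] * y[i][j] <= d_max                               # time-to-charge limit
    for k in K_types:
        m += pulp.lpSum(v[i][j][k] for j in J_nodes) == w[i][k]       # fulfill all demand
        m += pulp.lpSum(p_hat[j] * v[i][j][k] for j in J_nodes) >= \
             p_bar * pulp.lpSum(v[i][j][k] for j in J_nodes)          # reliable charging
    m += pulp.lpSum(y[i][j] for j in J_nodes) >= 1                    # covered by >= 1 station
    m += pulp.lpSum(p_hat[j] * y[i][j] for j in J_nodes) >= p_bar     # reliable coverage

for j in J_nodes:
    m += pulp.lpSum(t[k] * v[i][j][k] for i in I_nodes for k in K_types) <= c[j] * u[j]
    m += u[j] <= q[j] * x[j]                                          # connector space limit
    m += pulp.lpSum(y[i][j] for i in I_nodes) <= M * x[j]             # assign only open stations
m += pulp.lpSum(x[j] for j in J_nodes) <= N_max                       # max number of stations

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("status:", pulp.LpStatus[m.status], "objective:", round(pulp.value(m.objective), 2))
```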
§.§ Robust vs. Non-Robust Model
The consideration of p_j in our formulation is part of our attempt to boost the robustness of the original model and address the unique challenges and characteristics of urban areas in developing countries. The Non-Robust Model ignores disruption probability, resulting in a more simplified model. Our formulation is general, in the sense that we can attain the earlier model by setting I_jl = 1 for all j ∈ J, l ∈{1, 2, ⋯ n}. This earlier model ignores disruption uncertainty and often results in an overly cost-optimized solution that can have serious performance degradation when disruption occurs. Fig <ref> (left) shows a non-robust solution where only two stations are selected to cover 30+ demand nodes in the city of Surabaya. In this solution, many demand nodes are only covered by one station (no redundancy), and thus, when an electricity disruption hits the charging station, the charging demands will not be met and the residents are served very poorly. Our proposed robust model aims to incorporate the disruption uncertainty and optimizes the location and capacity of EV charging stations while balancing the trade-offs between consumer service level and economic profits. This incorporation maintains a linear objective function and linearized constraints, which still yields an MIP model that can solve efficiently using standard solvers.
§.§ Improving the Efficiency of Disruption Probability Estimation
While the proposed objective function in (<ref>) is still linear, the sample size n required to achieve high statistical confidence might blow up as the disruption probabilities 1 - p_j, ∀ j ∈ J become lower (e.g., as the utilities in developing countries mature). Note that our objective essentially estimates p_j by generating enough values Z_j1, Z_j2, ⋯, Z_jn and computing
p̂_j = 1/n∑_l=1^n 𝕀(Z_jl≤ z_j)
which can be shown to be unbiased and to converge to p_j.
Under the assumption that Z = [Z_j]_∀ j ∈ J∼ q are independently and identically distributed, and z_j, ∀ j ∈ J are fixed threshold values, estimator p̂_j is an unbiased and consistent estimator of p_j.
The proof is straightforward but is provided here for completeness.
Unbiasedness:
𝔼[p̂_j] = 𝔼[ 1/n∑_l=1^n 𝕀(Z_jl≤ z_j) ]
= 1/n∑_l=1^n 𝔼[ 𝕀(Z_jl≤ z_j) ]
= 1/n∑_l=1^n p_j
= p_j
where the first equality follows from the definition of p̂_j, the second follows from the linearity of expectation applied to the sum of indicator functions, and the third follows from the fact that the Z_jl are identically distributed, so each indicator has expectation p_j; the last equality then follows by summing n identical terms and dividing by n.
Consistency:
We know that by the law of large numbers, for any ϵ > 0,
lim_n →∞ℙ(|p̂_j - p_j| ≥ϵ) = 0.
Hence, p̂_j converges in probability to p_j, and thus it is a consistent estimator of p_j.
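To make the estimator concrete, the short Python sketch below draws n samples of a Gaussian electricity demand and computes p̂_j as the fraction of samples below the threshold z_j; the demand parameters used here are illustrative assumptions rather than the masked Surabaya data described later.

import numpy as np

def mc_reliability(mu, sigma, z_threshold, n, rng):
    """Plain Monte Carlo estimate of p_j = P(Z_j <= z_j) for a single station."""
    z_samples = rng.normal(mu, sigma, size=n)      # Z_j1, ..., Z_jn drawn from q_j
    return np.mean(z_samples <= z_threshold)       # fraction of samples below the threshold

rng = np.random.default_rng(0)
mu, sigma, z_j = 100.0, 20.0, 150.0                # illustrative values only
for n in (100, 1_000, 10_000, 100_000):
    print(n, mc_reliability(mu, sigma, z_j, n, rng))
# The estimates settle around the true value Phi((z_j - mu) / sigma) ~ 0.9938 as n grows.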
Suppose that we already have an estimate p̂_j for every j ∈ J. We can now plug the estimates into our optimization problem, giving
max ∑_i ∈ I∑_j ∈ J∑_k ∈ Kr e_k p̂_j v_ij^k_revenue - s d_ij (1-p̂_j) v_ij^k_penalty
- ∑_j ∈ J(g u_j + h_j x_j)_total cost
s.t. Constraint (<ref>)-(<ref>)
∑_j ∈ J y_ijp̂_j ≥p̅, ∀ i ∈ I
∑_j ∈ J v_ij^k p̂_j ≥∑_j ∈ J v_ij^kp̅, ∀ i ∈ I, k ∈ K .
Note that this formulation using p̂_j, ∀ j ∈ J is equivalent to the robust model using indicator variables I_jl, ∀ j ∈ J, l ∈{1, 2, ⋯, n} earlier that uses the objective function (<ref>).
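To illustrate how these plug-in estimates enter the optimization, the sketch below builds a heavily simplified variant of the model in PuLP. The toy data, the variable names, and the small subset of constraints retained here are assumptions made purely for the example; they do not reproduce the full formulation (<ref>)-(<ref>).

import numpy as np
import pulp

# Toy instance: sizes, costs, travel times and demands are illustrative assumptions.
I, J = range(4), range(3)                     # demand points, candidate stations
p_hat = [0.97, 0.90, 0.99]                    # estimated reliabilities p_hat_j
w = [5, 3, 4, 2]                              # charging demand at each point
d = np.array([[1., 2., 3.], [2., 1., 2.], [3., 2., 1.], [1., 3., 2.]])  # travel times
r, s, g, p_bar = 10.0, 2.0, 50.0, 0.95        # revenue, penalty, opening cost, threshold

m = pulp.LpProblem("robust_ev_sketch", pulp.LpMaximize)
x = pulp.LpVariable.dicts("open", J, cat="Binary")                       # open station j
v = pulp.LpVariable.dicts("assign", (I, J), lowBound=0, cat="Integer")   # vehicles i -> j
y = pulp.LpVariable.dicts("cover", (I, J), cat="Binary")                 # station j covers i

# Reliability-weighted revenue minus expected disruption penalty minus opening costs.
m += pulp.lpSum(r * p_hat[j] * v[i][j] - s * d[i, j] * (1 - p_hat[j]) * v[i][j]
                for i in I for j in J) - pulp.lpSum(g * x[j] for j in J)

for i in I:
    m += pulp.lpSum(v[i][j] for j in J) == w[i]                # all demand is served
    m += pulp.lpSum(y[i][j] * p_hat[j] for j in J) >= p_bar    # reliability coverage
    for j in J:
        m += v[i][j] <= w[i] * x[j]                            # assign only to open stations
        m += y[i][j] <= x[j]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("open stations:", [j for j in J if pulp.value(x[j]) > 0.5])
print("objective    :", pulp.value(m.objective))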
§.§.§ Estimating p̂_j to Sufficient Accuracy
While p̂_j is unbiased and consistent, the sample size required for a precise estimate can be arbitrarily large, especially when we want higher accuracy (e.g., when the disruption rate 1-p_j is tiny, such as in developed countries where utility service has high reliability). Suppose we want δ-accuracy and a 1-α confidence level when estimating p_j = 0.9999. Then, we can use Hoeffding's inequality to determine the sample size. According to Hoeffding's inequality, for any δ > 0, the probability that the estimate deviates from the true value by more than δ is bounded by
ℙ(|p̂_j - p_j| > δ) ≤ 2e^-2nδ^2,
where n is the sample size. Hence, if we want to ensure 1-α confidence level, we set 2e^-2nδ^2 = α, and solve for n
n = 1/2δ^2ln(2/α).
For instance, if we want an accuracy of δ = 0.0001 and a confidence level of 1-α = 0.95, then the required sample size is
n = 1/2(0.0001)^2ln(2/0.05) ≈ 1.84 × 10^8,
which is quite huge. Figure <ref> shows the sample size (in a log_10 scale) for various α and δ values. Note, however, that this is an upper bound and in practice, this sample size is not always necessary.
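The bound translates directly into a sample-size calculator; the sketch below reproduces the calculation for a few (δ, α) pairs.

import math

def hoeffding_sample_size(delta, alpha):
    """Smallest n such that 2 * exp(-2 * n * delta**2) <= alpha."""
    return math.ceil(math.log(2 / alpha) / (2 * delta ** 2))

for delta in (1e-2, 1e-3, 1e-4):
    for alpha in (0.10, 0.05, 0.01):
        print(f"delta={delta:g}, alpha={alpha:.2f} -> n = {hoeffding_sample_size(delta, alpha):,}")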
If we have N := |J| stations and each p_j has to be estimated using n ≈ 1.84 × 10^8 samples, then we will need roughly N × 1.84 × 10^8 samples to estimate the probabilities prior to solving the optimization problem, which can be overly burdensome if each simulation run involves a complex system. Thus, we seek ways to improve efficiency and reduce the variance of the estimator.
§.§.§ Improving Efficiency via Control Variates
One way to improve the estimation efficiency and thus reduce the sample size is through the use of control variates (CV) <cit.>. CV involves introducing a new variable that is correlated with the random variable of interest and can be easily estimated. The CV is then used to adjust the estimate of the random variable to improve its efficiency by reducing the variance of the estimator using the cheaper-to-compute random variable. In our case, we can use CV to estimate p_j = ℙ(Z_j ≤ z_j). Let g(Z_j) be a function of Z_j that is easy to compute. Specifically, if we consider Gaussian q = N(μ, σ) and Z_j ∼ q, we can use
g(z) = Φ(z)
the CDF of the standard normal distribution as the CV function, since its value g(z̅_j) at the scaled threshold is known exactly. The CV estimator for p_j is computed as
p̂_j = 1/n∑_l=1^n[ 𝕀(Z_jl≤ z_j) + π_j ( 𝕀 (X_jl≤z̅_j)-g(z̅_j) ) ]
where Z_jl is the l-th sample from the distribution q, the X_jl are standard normal random variables correlated with the Z_jl, and z̅_j is the scaled version of z_j chosen to threshold X_jl. Finally, π_j is chosen to minimize the variance
π_j = - Cov( ∑_l=1^n 𝕀(Z_jl≤ z_j), ∑_l=1^n 𝕀(X_jl≤z̅_j) )/Var(∑_l=1^n 𝕀(X_jl≤z̅_j)).
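One possible implementation of the CV estimator is sketched below. The Gaussian demand model, the jointly Gaussian coupling with correlation ρ between Z_jl and X_jl, and the use of common random numbers are assumptions made for the illustration; π_j is computed from sample moments, so (as noted at the end of this subsection) the theoretical variance reduction is only attained approximately.

import numpy as np
from scipy.stats import norm

def cv_reliability(mu, sigma, z_threshold, n, rho, rng):
    """Control-variate estimate of p_j = P(Z_j <= z_j) for a Gaussian demand Z_j.

    X_jl is a standard normal control variate correlated with Z_jl; its indicator
    has the exactly known mean g(z_bar) = Phi(z_bar)."""
    x = rng.normal(size=n)                                    # X_j1, ..., X_jn ~ N(0, 1)
    eps = rng.normal(size=n)
    z = mu + sigma * (rho * x + np.sqrt(1 - rho**2) * eps)    # Z_jl, correlated with X_jl
    z_bar = (z_threshold - mu) / sigma                        # scaled threshold for X_jl
    ind_z = (z <= z_threshold).astype(float)
    ind_x = (x <= z_bar).astype(float)
    cov = np.cov(ind_z, ind_x)
    pi_j = -cov[0, 1] / cov[1, 1]                             # sample version of the optimal pi_j
    mc = ind_z.mean()                                         # plain MC estimate
    cv = np.mean(ind_z + pi_j * (ind_x - norm.cdf(z_bar)))    # CV-adjusted estimate
    return mc, cv

rng = np.random.default_rng(1)
print(cv_reliability(mu=100.0, sigma=20.0, z_threshold=150.0, n=5_000, rho=0.95, rng=rng))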
We can show that the CV estimator is unbiased and achieves variance reductions in the following remarks. The reduction in variance, subsequently, allows us to reduce the sample size to achieve the same level of δ and α.
The CV estimator (<ref>) is unbiased for p_j.
The proof is straightforward, showing 𝔼[p̂_j] = p_j.
𝔼[p̂_j] = 1/n∑_l=1^n𝔼[𝕀(Z_jl≤ z_j)]
+π_j (1/n∑_l=1^n𝔼[ 𝕀(X_jl≤z̅_j)]-g(z̅_j) )
= 1/n∑_l=1^np_j + π_j (1/n∑_l=1^n g(z̅_j) ) - π_j g(z̅_j)
= p_j.
Assuming we can generate highly correlated random variables Z_jl and X_jl simultaneously and choose the optimal π_j (<ref>), the CV estimator (<ref>) attains a variance reduction.
Note that the variance without using CV is
Var(p̂_j) = 1/n^2Var(∑_l=1^n𝕀(Z_jl≤ z_j)).
With CV, the variance of the estimator is
Var(p̂_j) = 1/n^2( Var(∑_l=1^n𝕀(Z_jl≤ z_j))
+2π_j Cov(∑_l=1^n𝕀(Z_jl≤ z_j),∑_l=1^n𝕀(X_jl≤z̅_j) )
+π_j^2 Var(∑_l=1^n𝕀(X_jl≤z̅_j)) ) .
Plugging in the optimal π_j for our problem and simplifying, we have
Var(p̂_j) = 1/n^2Var(∑_l=1^n𝕀(Z_jl≤ z_j))
- Cov^2(∑_l=1^n 𝕀(Z_jl≤ z_j), ∑_l=1^n 𝕀(X_jl≤z̅_j) )/n^2 Var(∑_l=1^n 𝕀(X_jl≤z̅_j)).
We can see that the second term on the right-hand side is non-positive, which means that the variance is reduced the most if 𝕀(Z_jl≤ z_j) and 𝕀(X_jl≤z̅_j) are highly correlated (either positively or negatively), which intuitively means that X_jl provides some information about Z_jl. It is important to note, however, that in practice we often use sample covariances and sample variances to compute π_j, so the CV estimator might not achieve this theoretical variance reduction.
§ NUMERICAL EXPERIMENTS
In this study, we examine the EV and electricity data obtained from Surabaya, Indonesia. The EV dataset includes 11 candidate charging stations, 31 sub-regions of the city representing demand nodes, and two vehicle types, namely motorcycles (k=1) and cars (k=2). Figure <ref> illustrates the locations of the candidate charging stations (red nodes) and demand nodes (blue nodes), where the size of a blue node denotes the size of the demand at that location. This charging demand, i.e., the number of EVs of type k at each demand node i, is represented by w_i^k. The average travel time from demand node i to charging station j using vehicle type k, d_ij^k, is obtained from Google Maps. The full capacity of each charging connector is taken as c_j=1440 minutes/day for all j ∈ J with 24/7 operational hours, and the number of connectors installed in station j ∈ J is limited to q_j=8 for all j ∈ J due to land availability at the candidate locations.
We estimate the disruption probability by simulating random electricity demands Z = [Z_j]_∀ j ∈ J where Z_j ∼ q_j. We obtain this masked data from the local electricity company, which performed data masking and rescaling for privacy and security reasons. The masked mean and standard deviation of q_j along with demand threshold z_j are summarized in Table <ref>. The simulation uses this probability model to generate random demands and an electricity disruption event is triggered for the whole day at station j when Z_j ≥ z_j. Hence, we have station reliability p_j = ℙ(Z_j ≤ z_j), ∀ j ∈ J. The other experiment parameters are summarized in Table <ref>.
We then build our model by running n simulation replications and computing the mean of the objective function values. The results are summarized in Fig. <ref> and Fig. <ref> for n up to 10,000. The selected stations and demand assignments for each model solution are shown in Fig. <ref> (left: Non-Robust Model, right: Robust Model) and Fig. <ref> (left: Misspecified Model #1, right: Misspecified Model #2). Misspecified Model #1 is built assuming 0.95p_j, while Misspecified Model #2 assumes 1.05p_j for all j ∈ J, corresponding to underestimation and overestimation of service reliability, respectively.
The CV estimator is constructed using standard normal random variables X_jl with z̅_j properly scaled. This yields indicators 𝕀(X_jl≤z̅_j) that are highly correlated with 𝕀(Z_jl≤ z_j). We show the estimated station reliability p_j using MC and CV in Fig. <ref> and its standard error in Fig. <ref>, highlighting the superior estimation efficiency of the CV estimator.
§ DISCUSSION AND FINDINGS
In this section, we discuss our findings regarding the robustness of the optimal solutions against disruptions even when the probability is misspecified and the enhanced disruption simulation efficiency that allows robust decision-making for our problem against disruption uncertainties. We also highlight the limitation of the model and our outlook for future research.
§.§ Robustness of the Optimal Solutions
Figure <ref> summarizes the objective function values obtained by benchmarking the Robust Model, Non-Robust Model, Misspecified Model #1 (underestimated station reliability), and Misspecified Model #2 (overestimated station reliability). The optimal solution of the Robust Model (represented by orange and brown lines) outperforms the other models. Conversely, the solution of the Non-Robust Model (represented by blue and purple lines) yields the lowest objective value. The Non-Robust Model prioritizes minimizing operational and investment costs, resulting in only two charging stations being opened. This leads to lower revenue and higher penalties, particularly during disruptions. In contrast, the Robust Model balances operational and investment costs with potential revenue losses and penalties incurred during disruptions. As a result, the Robust Model opens three charging stations, distributing the large charging stations across the geography of the city, resulting in an 18% higher total cost than the Non-Robust Model solution. However, it provides better protection against revenue loss and penalties incurred during disruptions. We also suggest that these charging stations implement a smart energy management policy <cit.> for added robustness. This added robustness leads to a 10% higher revenue and 60% lower penalty when disruptions occur, yielding an approximately 13% higher overall objective. Figure <ref> shows that the Robust Model's balanced solution covers more demand points with two charging stations, resulting in a better revenue and penalty trade-off than the Non-Robust Model.
The Robust Model with misspecified station reliability still provides some level of robustness, as evidenced by the objective values of both the underestimation and overestimation scenarios. These models' solutions have objective values lower than the Robust Model solution but higher than the Non-Robust Model solution. Thus, while accurately estimating station reliability is beneficial, the model can still tolerate imperfections. When utilizing the Robust Model with underestimated station reliability, the solution tends to be more conservative and provides a higher level of buffer against disruptions. This results in a solution with four charging stations, with over 90% of demand points covered by two or more charging stations. On the other hand, overestimating station reliability leads to a solution with only three charging stations, resulting in a lower cost and an objective value very close to the Robust Model. Figure <ref> illustrates the charging station placement for both the underestimated and overestimated scenarios.
§.§ Improved Simulation Efficiency using CV Estimator
We now discuss how we incorporate the simulation into our robust model. The main challenge centers around incorporating the electricity station reliabilities p_j, ∀ j ∈ J (and thus the corresponding disruption probabilities 1-p_j, ∀ j ∈ J), which might require a huge sample size to achieve the desired precision level, thus increasing the computational burden of evaluating the objective function (either (<ref>) or (<ref>)) and the reliability constraints (either (<ref>)-(<ref>) or (<ref>)-(<ref>)).
While both the MC and CV estimators of the objective values are unbiased and converge to the same value for each model, the proposed CV estimation approach effectively reduces the estimation variance, yielding tighter confidence intervals in Fig. <ref> (brown, silver, pink, and purple lines vs. orange, red, green, and blue lines). Furthermore, Fig. <ref> highlights that all CV estimators attain roughly 10× smaller standard errors than their MC counterparts. This means that CV improves the simulation efficiency and reduces the sample size required to attain the same precision by up to a factor of 10 compared to the naive MC simulation approach, without loss of accuracy.
The dominant efficiency of the CV-based estimation technique, which reduces the sample size requirement while maintaining accuracy, allows us to incorporate the estimated station reliability into the objective function and reliability constraints. This results in the proposed Robust Model, which can be solved without significantly increasing the computational cost. The high efficiency of CV over MC in estimating the reliability probabilities (even for values close to 1.00) is emphasized in Fig. <ref>, in which all CV estimates attain much tighter confidence intervals regardless of the target probability. In this estimation, again, the CV estimators attain 10× smaller standard errors for the same sample size used by the MC estimators. This highlights the applicability of our robust modeling method to problems where electricity disruptions are extremely rare and need to be estimated to ultra-high precision.
§.§ Limitation of the Current Work
Although our CV-assisted robust model provides optimal solutions that strike a balance between minimal cost and buffering against electricity disruptions, we acknowledge that scaling it to larger problems, such as a larger charging station candidate set and more fine-grained demand points, heavily relies on the efficiency of the MIP solver. Moreover, we acknowledge that the electricity pricing rate used in this study is simplified, whereas more recent dynamic electricity pricing schemes are available and more realistic, though highly nonlinear. Incorporating such schemes could improve the accuracy of our revenue model, but it may not be feasible with our current solver. Additionally, the CV estimation approach used in this study is based on some prior knowledge about the probability model of the random variable triggering the disruption events. In practice, such knowledge may not be easy to obtain. However, we recognize that machine learning models can be leveraged to extract features from historical datasets and estimate disruption events. We can also leverage machine learning techniques to estimate the battery capacity of the EVs <cit.> to better predict the charging time for each arriving demand to extend our model to incorporate nonlinear dynamics and more realistic operations in our future work.
§ CONCLUSION
In this study, we propose a simulation-based optimization model to address the critical issue of designing robust planning for EV charging stations in developing countries, where electricity disruptions may frequently occur and impact customer satisfaction. Our model considers service reliability as a key factor and incorporates it into the objective function and constraints using the control variates (CV) variance reduction technique to improve simulation efficiency. Our numerical experiment, based on a dataset from Surabaya, Indonesia, demonstrates the superior performance of our robust model solution compared to its non-robust counterpart, even in cases of underestimated or overestimated service reliability. While our proposed model shows promise, we acknowledge its reliance on an efficient MIP solver and its use of a simplified electricity pricing rate. Furthermore, our CV estimator is based on prior knowledge of the probability model, which may not be available in practice. As such, we seek to extend our model to cover nonlinear MIP and learning-based disruption estimation in future work. Nonetheless, our model's ability to reduce the required sample size by up to 10× compared to Monte Carlo simulations highlights its potential to provide a robust solution to the challenges associated with EV charging infrastructure under random electricity disruptions.
|
http://arxiv.org/abs/2307.06287v1 | 20230712163541 | Rational Neural Network Controllers | [
"Matthew Newton",
"Antonis Papachristodoulou"
] | eess.SY | [
"eess.SY",
"cs.LG",
"cs.SY"
] |
Rational Neural Network Controllers
Matthew Newton, Antonis Papachristodoulou
August 12, 2023
==========================================================================
Neural networks have shown great success in many machine learning related tasks, due to their ability to act as general function approximators. Recent work has demonstrated the effectiveness of neural networks in control systems (known as neural feedback loops), most notably by using a neural network as a controller. However, one of the big challenges of this approach is that neural networks have been shown to be sensitive to adversarial attacks. This means that, unless they are designed properly, they are not an ideal candidate for controllers due to issues with robustness and uncertainty, which are pivotal aspects of control systems. There has been initial work on robustness to both analyse and design dynamical systems with neural network controllers. However, one prominent issue with these methods is that they use existing neural network
architectures tailored for traditional machine learning tasks. These structures may not be appropriate for neural network controllers and it is important to consider alternative architectures. This paper considers rational neural networks and presents novel rational activation functions, which can be used effectively in robustness problems for neural feedback loops. Rational activation functions are replaced by a general rational neural network structure, which is convex in the neural network's parameters. A method is proposed to recover a stabilising controller from a Sum of Squares feasibility test. This approach is then applied to a refined rational neural network which is more compatible with Sum of Squares programming. Numerical examples show that this method can successfully recover stabilising rational neural network controllers for neural feedback loops with non-linear plants with noise and parametric uncertainty.
§ INTRODUCTION
Neural networks (NNs) have shown to be highly effective in numerous machine learning tasks. Examples of these include but are not limited to: image recognition, weather prediction, natural language processing, autonomous vehicle technology, medical imaging and social media algorithms <cit.>. There have been numerous advancements that have contributed to their success such as the development of modern NN architectures <cit.>, the increase in computational power available <cit.> and the availability of big data.
More recently, there has been an increased interest in using NNs in control systems. One reason for this is the emergence of the parallel field of reinforcement learning. By harnessing the power of NNs, deep reinforcement learning has been used to create decision-making agents that greatly outperform humans in many complex tasks. Such examples include the board game Go <cit.> and the video game Dota 2 <cit.>. Despite their success, there are many significant issues with these methods. These learnt policies can perform poorly when the learnt environment is different from the real environment <cit.>. Additionally, bounds to quantify their safety do not sufficiently describe the performance of the algorithm and can be overly conservative <cit.>. However, with new advancements in robust control and the success of NNs in reinforcement learning, there is a strong motivation for work at their intersection.
We refer to control systems that contain NNs as controllers as neural feedback loops (NFLs). Most research completed in this area has addressed the robustness analysis of NFLs, where the NN's parameters are given and the task is to quantify the system's safety or robustness properties. However, the more challenging task is to obtain the NN controller's parameters, whilst enforcing robustness guarantees. NNs are useful since they can be used as general function approximators <cit.>, but contain a large number of parameters. This means that optimising over all of the parameters is often computationally expensive.
One method to design the NN controller is by learning an expert control law using input-output data and then checking the robustness guarantees using analysis methods <cit.>. However, this may lead to poor robustness certificates because no relevant objective is being optimised while training the NN. It is also possible to use reinforcement learning methods. This often involves training the controller by simulating the system trajectories and then updating the controller's parameters so as to maximise a reward function. However, there are significant challenges with this process; it is very computationally intensive, requires a large amount of hyperparameter tuning and can sometimes lead to undesirable behaviour <cit.>. The parameterised NN controller can then be analysed to obtain robustness certificates in a defined operating region; however, these guarantees can be poor. It can be difficult for these approaches to outperform traditional control laws. Furthermore, introducing additional non-linearities into the model through the NN's activation functions can increase the complexity of the closed-loop system.
Despite these drawbacks, recent methods have focused on addressing these issues by designing NN controllers whilst ensuring robustness guarantees in the process. Methods that focus on developing reinforcement learning algorithms such as <cit.> are able to create an NN policy which can be combined with robust control guarantees. However, these approaches still suffer from other reinforcement learning issues such as requiring significant computational time and hyperparameter tuning. These drawbacks can be mitigated by instead trying to obtain an NN controller by learning from an optimal model predictive control law. This allows the NN controller to be trained offline and when implemented it can be significantly less computationally expensive than the full model predictive control law. To achieve this an SDP framework that incorporates integral quadratic constraints is presented in <cit.>. An iterative algorithm is used to alternate between optimising the NN parameters to fit the control law and maximising the region of attraction. However, these approaches require a known model predictive control law to optimise over the controller's input-output data, which may not be easy to obtain. This approach relies on a loop transformation and Schur complement to ensure the optimisation problem is convex. Similar approaches have been used for different NN architectures such as recurrent NNs <cit.> and recurrent equilibrium network controllers <cit.>. These methods also include a projected policy gradient algorithm and reinforcement learning to synthesise the controller, instead of requiring an expert control law.
§.§ Our Contribution
One shortfall of many recent approaches such as <cit.> is that large convex approximations must be made, as the original problem is non-convex. For example, the ReLU and tanh activation functions must be sector bounded in the region [0,1] with no constraints obtained from pre-processing bounds such as Interval Bound Propagation <cit.>. These sector constraints are shown in Figure <ref>. This can lead to the acquired robustness certificates being conservative. Another issue with these methods is that they rely on iterative approaches, where the algorithm alternates between optimising the performance of the controller and the stability guarantees. This can make the problem computationally expensive and intractable for large scale systems. They also require that the plant model is linear subject to sector bounded non-linearities.
A root cause of the problems with recent approaches is that they use traditional NN architectures with ReLU, sigmoid and tanh activation functions. These structures have shown to be very effective in many machine learning tasks, however they may not be preferable for use in control systems. By considering a class of NNs that are better aligned with control system techniques, these large convex approximations could be mitigated. Since Sum of Squares (SOS) programming is effective for systems with polynomial and rational functions, we investigate what happens when the NN is built upon them.
* Motivated by the success of rational neural networks in machine learning tasks, we propose novel rational activation functions to approximate the traditional sigmoid and tanh activation functions. We then show their effectiveness in analysing neural feedback loops that contain these rational activation functions.
* We note that we could be falling short of the potential expressivity of the neural network by using fixed activation functions. Therefore we consider a general neural network structure which is built upon equations that are convex in the design parameters. We show that a neural network of this form can be expressed in a similar way to that of a feed forward neural network with rational activation functions.
* We consider the Lyapunov stability criteria for constrained dynamics systems. We then propose a novel convex Sum of Squares procedure to recover a stabilising controller for a non-linear system through solving a Sum of Squares feasibility test.
* This procedure is extended to the generalised rational neural network architecture, to recover a stabilising rational neural network in a convex way. We then adapt the rational neural network architecture to make it more compatible with Sum of Squares programming and allow the stabilising controller to be recovered.
* We show for numerous examples that our proposed procedure and neural network architecture are able to effectively recover stabilising controllers for unstable systems with non-linear plants, noise and parametric uncertainty.
§ PRELIMINARIES
§.§ Sum of Squares Programming and The Positivstellensatz
In this section we outline how to formulate and solve SOS optimisation problems by solving an equivalent Semidefinite Program (SDP). For more details on SOS programming the reader is referred to <cit.>.
The fundamental idea behind SOS is to replace a polynomial positivity condition with a condition that enforces that the polynomial is a Sum of Squares.
A polynomial p(x) is said to be a Sum of Squares (SOS) polynomial if it can be expressed as
p(x) = ∑_i=1^m r_i^2(x),
where r_i(x) ∈ℝ[x] for i = 1, …, m. We denote the set of polynomials that admit this decomposition by Σ[x] and say `p(x) is SOS'. Note that ℝ[x_1, … , x_n] is defined as the set of polynomials in x_1, …, x_n with real coefficients and we denote x = (x_1, …, x_n) for simplicity.
A monomial in x = (x_1, … x_n) is
x^β = x_1^β_1x_2^β_2… x_n^β_n,
where the exponent and degree are denoted as β = (β_1, … , β_n) ∈ℕ^n and |β| = β_1 + … + β_n respectively.
The column vector of monomials with only certain exponents is expressed as x^𝔹 = (x^β)_β∈𝔹, where 𝔹⊂ℕ^n is the set of exponents that are used in the monomials. The summation operation on 𝔹 is defined as
𝔹 + 𝔹 := {β + γ : β, γ∈𝔹}.
A polynomial p(x) can be written as a linear combination of a set of monomials x^𝔹 for a set of coefficients p_β∈ℝ such that
p = ∑_β∈ℕ_d^np_β x^β,
where ℕ_d^n = {β∈ℕ^n : |β| ≤ d } is the set of all n-variate exponents of degree d or less.
A polynomial p(x) is SOS if and only if it can be written in what is referred to as a Gram matrix representation such that
p(x) = (x^𝔹)^T Q x^𝔹,
where Q ∈𝕊_+^|𝔹| is a positive semidefinite matrix.
The Gram matrix representation can be rewritten as
(x^𝔹)^T Q x^𝔹 = ⟨ Q, x^𝔹 (x^𝔹)^T⟩ = ∑_α∈𝔹 + 𝔹⟨ Q, A_α⟩ x^α,
where the symmetric binary matrix A_α∈𝕊^|𝔹| for each exponent α∈𝔹 + 𝔹 is
[A_α]_β,γ :=
1, β + γ = α,
0, otherwise.
The following is therefore true
p(x) ∈Σ[x] ⇔∃ Q ∈𝕊_+^|𝔹| where ⟨ Q, A_α⟩ =p_α, ∀α∈𝔹 + 𝔹.
This means that checking the SOS condition is equivalent to solving an SDP, which can be achieved with parsers such as SOSTOOLS <cit.> in MATLAB.
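As a concrete illustration of the Gram matrix test (written here in Python with cvxpy rather than SOSTOOLS, purely for exposition), the sketch below checks that p(x) = x^4 + 2x^2 + 1 is SOS using the monomial vector x^𝔹 = (1, x, x^2).

import cvxpy as cp

# Find Q >= 0 with p(x) = z^T Q z for z = (1, x, x^2), matching the coefficients of p.
Q = cp.Variable((3, 3), PSD=True)
constraints = [
    Q[0, 0] == 1,                 # constant term
    2 * Q[0, 1] == 0,             # coefficient of x
    Q[1, 1] + 2 * Q[0, 2] == 2,   # coefficient of x^2
    2 * Q[1, 2] == 0,             # coefficient of x^3
    Q[2, 2] == 1,                 # coefficient of x^4
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)   # "optimal": a Gram matrix exists, so p is SOS (indeed p = (x^2 + 1)^2)
print(Q.value)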
A central theorem of real algebraic geometry is known as the Positivstellensatz (Psatz) <cit.>, which will now be briefly outlined. The Psatz provides an equivalent relation between an algebraic condition and the emptiness of a semi-algebraic set.
We express the semi-algebraic set with notation
S = { x ∈ℝ^n | g_i(x) ≥ 0, h_j(x) = 0, ∀ i = 1, …, q_1, j = 1, … , q_2 },
where g_i and h_j are polynomial functions.
Given f_1, … , f_r∈ℝ[x], the multiplicative monoid generated by the f_k's is the set of all finite products of f_k's, including 1 (i.e. the empty product). It is denoted as ℳ(f_1, … , f_r).
Given g_1, … , g_q_1∈ℝ[x], the cone generated by the g_i's is
cone{g_1, … , g_q_1} = { s_0 + ∑_i=1^q_1 s_iG_i | s_i∈Σ [x], G_i∈ℳ(g_1, … , g_q_1) }.
Given h_1, … , h_q_2∈ℝ[x], the ideal generated by the h_k's is
ideal{h_1, … , h_q_2} = {∑_j=1^q_2 t_jh_j | t_j∈ℝ[x] }.
(Positivstellensatz, <cit.>)
Given the semi-algebraic set S in (<ref>), the following are equivalent:
* The set S is empty.
* There exist s_i ∈Σ[x] in (<ref>) and t_j ∈ℝ[x] in (<ref>) such that
cone{g_1, … , g_q_1} + ideal{h_1, … , h_q_2} = 0.
Based on Theorem <ref> one can create a convex test for computational purposes by using a representation of the function p(x).
Consider the set S in (<ref>). If
p = 1 + ∑_j = 1^q_2t_jh_j + s_0 + ∑_i = 1^q_1s_ig_i,
where s_i∈Σ[x] and t_j∈ℝ[x], then p(x)>0, ∀ x ∈ S.
To test if the polynomial p(x) ≥ 0, ∀ x ∈ S using SOS programming we can rewrite (<ref>) as
p + ∑_j = 1^q_2t_jh_j - ∑_i = 1^q_1s_ig_i∈Σ[x],
where s_i∈Σ[x] and t_j∈ℝ[x]. By selecting higher degree multipliers s_i, t_j etc. we can obtain a series of set emptiness tests with increasing complexity and non-decreasing accuracy.
§.§ Stability of Neural Feedback Loops using Sum of Squares
In this section we outline the methods proposed in <cit.>, which presents a method to determine if the equilibrium a closed loop NFL is stable and then also use this to compute an inner approximation of the region of attraction.
Consider a continuous-time system
ż(t) = f(z(t),u(t)),
where f is the plant model z(t) ∈ℝ^n_z and u(t) ∈ℝ^n_u are the system states and inputs respectively. n_z and n_u are the number of system states and inputs respectively. This system is a continuous time NFL if the controller u is an NN. Consider a state feedback controller u(t) = π(z(t)): ℝ^n_z→ℝ^n_u as a feed-forward fully connected NN such that
x^0(t) = z(t),
v^k(t) = W^kx^k(t) + b^k, for k = 0,…, ℓ - 1,
x^k+1(t) = ϕ (v^k(t)), for k = 0,…, ℓ - 1,
π(z(t)) = W^ℓx^ℓ(t) + b^ℓ,
where W^k∈ℝ^n_k+1× n_k, b^k∈ℝ^n_k+1 are the weights matrix and biases of the (k+1)^th layer respectively and z(t) = x^0(t) ∈ℝ^n_z is the input into the NN. The activation function ϕ is applied element-wise to the v^k(t) terms. The number of neurons in the k^th layer is denoted by n_k.
As shown in <cit.>, an NFL can be viewed as a dynamical system with equality and inequality constraints that arise from the input-output description of the NN. These constraints can be described as the semi-algebraic set in (<ref>). Further constraints can be added to the semi-algebraic set when only local asymptotic stability is being verified. The region
D^z = { z ∈ℝ^n_z | d_k(z) ≥ 0, k=1,…, n_d},
is considered where the stability conditions will need to be satisfied.
(<cit.>)
Consider System (<ref>) in feedback with an NN controller given by (<ref>). Suppose the input-output properties of the NN are described by (<ref>), and consider the region given by (<ref>). Suppose there exists a polynomial function V(z) satisfying the following conditions
V(z) - ρ(z) ∈Σ[z],
ρ(z) > 0,
-∂ V/∂ z(z) f(z,π(z)) - ∑_k=1^n_dp_k(X)d_k(z) - ∑_j=1^q_2t_j(X)h_j(x) - ∑_i=1^q_1s_i(X)g_i(x) ∈Σ[X],
p_k(X) ∈Σ[X], ∀ k = 1, … , n_d,
s_i(X) ∈Σ[X], ∀ i = 1, …, q_1,
t_j(X) ∈ℝ[X], ∀ j = 1, …, q_2,
where X is a vector of all the system and NN states, i.e. X = (x,z). Then the equilibrium of the NFL is stable.
If a Lyapunov function is constructed using Proposition <ref>, then the region of attraction can be approximated. To achieve this, we find the largest level set of the Lyapunov function V(z) that is contained within the region in which the Lyapunov conditions are satisfied. This can be cast as an SOS program. Consider a Lyapunov function V(z); if the SOS optimisation problem
|z|^k(V(z) - γ) + p(z)d(z) ∈Σ[z],
p(z) ∈Σ[z],
is feasible, where γ is a variable to be maximised and k is a positive integer, then V(z) ≤γ is an estimate of the region of attraction.
§ RATIONAL NEURAL NETWORKS
There has been little prior work investigating NNs with rational expressions for control. <cit.> showed the expressive power of using rational activation functions and how they can be used to approximate commonly used activation functions such as ReLU. The effectiveness of rational NNs in machine learning tasks was shown in <cit.>. Polynomial activation functions have also been used in previous work <cit.>, however they have not seen much investigation within the machine learning research community due to vanishing and exploding gradients that can be exhibited when used with the backpropagation algorithm <cit.>. Another class of NNs that has seen recent interest are quadratic NNs. These can take many different forms, which are summarised in <cit.>. Quadratic NNs have been shown to be beneficial in many aspects <cit.>, such as being general universal function approximators. Recent work from <cit.> has explored the use of a two-layer NN controller that has quadratic activation functions and how a convex formulation can be achieved using this structure.
One of the benefits of using polynomial or rational activation functions in the NN is that they can contain trainable parameters, which can improve the performance of the NN and reduce the number of neurons in the network. For example, the rational activation function used in <cit.> is expressed as
ϕ(x) = P(x)/Q(x) = ∑_i=0^r_Pa_ix^i/∑_j=0^r_Qb_jx^j,
where r_P = deg(P(x)), r_Q = deg(Q(x)) and a_i and b_j are the trainable parameters within the activation function. Another proposed activation function based on rational functions is the Padé activation unit, which has shown to be useful when applied to image classification <cit.>. This activation function is expressed as
ϕ(x) = P(x)/Q(x) = ∑_i=0^r_Pa_ix^i/1+ | ∑_j=0^r_Qb_jx^j |
and can be used to learn and approximate commonly used activation functions, whilst training the NN. The resulting NNs can have compact representations and can perform similarly to state-of-the-art NNs.
§.§ Rational Approximation of Tanh Activation Function
Despite their success in some machine learning applications, rational activation functions have yet to be applied to NN controllers. Most methods to train NNs in control systems use traditional activation functions such as ReLU, sigmoid and tanh. In addition, NNs that are used in control systems are often significantly smaller than those used for machine learning tasks such as image classification. The reason behind this is that methods to train NNs for control often require the use of an SDP, meaning that having a large number of parameters in the NN makes the problem become intractable. However, the requirement for the NN to only contain a small number of neurons to learn a function that can sufficiently control a system has not been well explored. One justification for this is the low number of input and output dimensions that are required for small NFLs. However, if the number of system states increases, then larger networks may be required, which may be intractable to obtain with current methods.
For NNs to be effectively used as controllers, it would be of interest to investigate the use of alternative NN structures and activation functions that may give a sufficient level of expressivity in the network, whilst being easier to compute or ensure robustness guarantees. Motivated by this we propose a novel activation function defined by a simple rational expression. We name this function `Rtanh' as it is an approximation of the tanh activation function
tanh(x) = ϕ(x) = (e^x - e^-x)/(e^x + e^-x)
and is defined as
Rtanh(x) = ϕ(x) = 4x/(x^2 + 4).
This function is shown in Figure <ref> against the tanh function and the error between these functions is shown in Figure <ref>.
The tanh function can be over-approximated by sector constraints, as demonstrated in <cit.>. These constraints can then be used to analyse the NFL by creating an optimisation problem through an SDP or SOS programming. However, these bounds are very conservative as shown in Figure <ref>. A big advantage of the `Rtanh' function is that it can be represented by a single equality constraint such that
ϕ(x)(x^2 + 4) - 4x = 0.
If this activation function were to be used in an NN, then to test its robustness properties using the Psatz, the equality constraint (<ref>) can be used directly. This is beneficial as no conservatism is introduced when abstracting the input-output properties of the NN using a semi-algebraic set.
To demonstrate that the Rtanh activation function is useful when analysing NFLs, we take an existing NFL that uses tanh activation functions. We consider the inverted pendulum from <cit.>, which uses a five layer NN with five nodes in each layer and tanh activation functions. The dynamics of this system are expressed as
θ̈(t) = (mgl sinθ (t) - μθ̇ (t) + sat(u(t)))/(m l^2),
where sat(·) is the saturation function. The system is discretised with time step Δ t = 0.2 and is parameterised by m = 0.15 kg, l = 0.5 m, μ = 0.5 Nms/rad, g = 10 m/s^2 and u_max = 1, where u_max is the saturation limit in the saturation function. As shown in <cit.>, we can use `ReachSparsePsatz', which is an SOS optimisation technique, to approximate the reachable set at each time step. Using the tanh activation function as in the original NN controller, the reachable sets can be computed and are shown in Figure <ref>.
We then replace the tanh function with Rtanh and observe the system behaviour. We conduct the same reachability analysis as in <cit.> by computing the reachable sets for six time steps. In Figure <ref> we can see that this activation function behaves similarly to that of the tanh function and that using the Psatz with sparse polynomial optimisation (ReachSparsePsatz) gives very tight approximations of the reachable sets.
We also compute the region of attraction using the approach outlined in Section <ref>, referred to as `NNSOSStability'. The regions of attraction when using the tanh and Rtanh functions are shown in Figure <ref> and Figure <ref> respectively. We find that the region of attraction is increased significantly when using Rtanh over the tanh activation function. The areas of the regions of attraction in the phase plane are 58 and 0.94 square units for the Rtanh and tanh activation functions respectively.
These results show that the Rtanh function can be useful in NNs and NFLs. However, this function is not compatible with most recent methods to learn NN controllers due to it being represented by an equality that is a polynomial of degree three. Indeed, most methods require sector inequality constraints and the Schur complement, which is used to make the problem convex. Therefore, this function cannot be used in those formulations. This requires the use of an alternative method to obtain an NN controller with these activation functions, which we will investigate later in this paper.
§.§ Rational Approximation of Sigmoid Activation Function
It is also possible to approximate the sigmoid function as a rational function. We define the function
Rsig(x) = ϕ (x) = (x + 4)^2/(2(x^2 + 16)).
As shown in Figure <ref>, this is a good approximation to the sigmoid function. Figure <ref> shows the error with the sigmoid function. Rsig can be expressed as the equality constraint
2(x^2 + 16)ϕ (x) - (x + 4)^2 = 0.
§.§ Irrational Approximation of ReLU Activation Function
Approximating the ReLU function with a rational function is more difficult. Here we consider the class of irrational activation functions; to demonstrate how they could be used we propose the following function
IReLU(x) = ϕ(x) = √(x^2 + 1) + x - 1.
This function and the error with the ReLU function are shown in Figure <ref> and Figure <ref> respectively. By making a substitution, the IReLU function can be expressed as a semi-algebraic set with two equality constraints and one inequality constraint
ϕ(x) - y - x + 1 = 0,
y^2 - x^2 - 1 = 0,
y ≥ 0.
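The three surrogate functions and their defining semi-algebraic constraints can be sanity-checked numerically; the grid range in the sketch below is an arbitrary choice.

import numpy as np

def rtanh(x):   # 4x / (x^2 + 4)
    return 4 * x / (x ** 2 + 4)

def rsig(x):    # (x + 4)^2 / (2 (x^2 + 16))
    return (x + 4) ** 2 / (2 * (x ** 2 + 16))

def irelu(x):   # sqrt(x^2 + 1) + x - 1
    return np.sqrt(x ** 2 + 1) + x - 1

x = np.linspace(-3, 3, 601)
print("max |Rtanh - tanh|    on [-3,3]:", np.max(np.abs(rtanh(x) - np.tanh(x))))
print("max |Rsig  - sigmoid| on [-3,3]:", np.max(np.abs(rsig(x) - 1 / (1 + np.exp(-x)))))
print("max |IReLU - ReLU|    on [-3,3]:", np.max(np.abs(irelu(x) - np.maximum(x, 0.0))))

# The defining equality/inequality constraints hold identically.
assert np.allclose(rtanh(x) * (x ** 2 + 4) - 4 * x, 0)
assert np.allclose(2 * (x ** 2 + 16) * rsig(x) - (x + 4) ** 2, 0)
y = np.sqrt(x ** 2 + 1)
assert np.allclose(irelu(x) - y - x + 1, 0) and np.all(y >= 0)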
§.§ General Rational Neural Networks
The rational approximations of the tanh and sigmoid functions proposed in Section <ref> can be used as activation functions in an NN. However, if we were to use these structures we could be missing the potential expressivity that can be obtained from a general class of rational functions. As in (<ref>), rational activation functions can contain training parameters. Therefore, instead of considering a rigid structure for the activation function and fixing the parameters, we can consider a general rational expression that contains the NN parameters.
If we were to consider a feed-forward fully connected NN with predefined rational activation functions, then the NN can be written as
x^0 = u,
v^k = W^kx^k + b^k, for k = 0,…, ℓ - 1,
x^k+1 = p(v^k)/q(v^k), for k = 0,…, ℓ - 1,
π(u) = W^ℓx^ℓ + b^ℓ,
where p(v^k) and q(v^k) are polynomial functions with specified coefficients. However, if we substitute the preactivation value v_j^k into the polynomial expressions, we obtain a rational expression in x_j^k, where the coefficients are parameterised by the values of the weight matrices W^k and biases vector b^k. Any coefficients in the rational activation function will be multiplied by the weights and bias terms. Therefore, we can instead consider an NN with no affine transformation and parameters only in the rational activation function. We can then write the rational NN as
x^0 = u,
x_i^k+1 = p_i(x^k)/q_i(x^k), for i = 0,…, n_k, for k = 0,…, ℓ - 1,
π_i(u) = p_i(x^ℓ)/q_i(x^ℓ), for i = 0,…, n_ℓ,
where p_i(x^k) and q_i(x^k) are general polynomial functions which can be written as
p_i(x) = ∑_α∈ℕ_d_p_i^nλ_αx^α,
q_i(x) = ∑_β∈ℕ_d_q_i^nγ_βx^β,
where d_q_i = deg(q_i), d_p_i = deg(p_i), α and β are the exponents that are defined in Section <ref> and λ_α and γ_β are the coefficients of p_i(x) and q_i(x) respectively. To show the similarity between (<ref>) and (<ref>) we present the following simple example.
Consider an NN with structure defined by (<ref>) with two layers and two nodes in each layer. The rational activation functions with deg(p) = deg(q) = 2 can be written as
p(v) = c_1v^2 + c_2v + c_3,
q(v) = d_1v^2 + d_2v + d_3.
The first node in the second layer can be expressed as
v_1^1 = W_1^1x_1^1 + W_2^1x_2^1 + b_1^1,
x_1^2 = p(v_1^1)/q(v_1^1).
We can substitute in the preactivation terms into the activation functions to obtain the polynomials
p(v_1^1) = c_1(W_1^1)^2(x_1^1)^2 + 2c_1W_1^1W_2^1(x_1^1x_2^1) + c_1(W_2^1)^2(x_2^1)^2 +
(2c_1W_1^1b_1^1 + c_2W_1^1)(x_1^1) + (2c_1W_2^1b_1^1 + c_2W_2^1)(x_2^1) + (c_1(b_1^1)^2 + c_2b_1^1 + c_3),
q(v_1^1) = d_1(W_1^1)^2(x_1^1)^2 + 2d_1W_1^1W_2^1(x_1^1x_2^1) + d_1(W_2^1)^2(x_2^1)^2 +
(2d_1W_1^1b_1^1 + d_2W_1^1)(x_1^1) + (2d_1W_2^1b_1^1 + d_2W_2^1)(x_2^1) + (d_1(b_1^1)^2 + d_2b_1^1 + d_3),
which form the rational expression for x_1^2. However, we can instead write the rational expression as
x_1^2 = (λ_1(x_1^1)^2 + λ_2x_1^1x_2^1 + λ_3(x_2^1)^2 + λ_4x_1^1 + λ_5x_2^1 + λ_6)/(γ_1(x_1^1)^2 + γ_2x_1^1x_2^1 + γ_3(x_2^1)^2 + γ_4x_1^1 + γ_5x_2^1 + γ_6),
which is in the form of (<ref>). This can be generalised to larger NNs with higher degree polynomials in the rational functions. This approach can reduce the number of parameters in the NN and each polynomial is convex in the decision variables. This allows the parameters in the rational expression to be tuned directly instead of simultaneously tuning the weights, biases and rational activation parameters.
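For completeness, a prototype forward pass through a network of the form (<ref>) is sketched below. The even-powered, positive-coefficient denominator basis is an assumption made here only to keep each q_i strictly positive (the text does not prescribe how q_i ≠ 0 should be enforced), and the random coefficients are illustrative rather than trained.

import numpy as np
from itertools import combinations_with_replacement

def monomials(x, degree):
    """All monomials of x = (x_1, ..., x_n) up to the given total degree."""
    feats = [1.0]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(len(x)), d):
            feats.append(np.prod([x[i] for i in idx]))
    return np.array(feats)

class RationalNN:
    """Rational NN in the spirit of (<ref>): each node evaluates p_i(x^k) / q_i(x^k)."""
    def __init__(self, widths, degree, rng):
        self.degree = degree
        self.params = []
        for n_in, n_out in zip(widths[:-1], widths[1:]):
            n_feat = len(monomials(np.zeros(n_in), degree))
            lam = rng.normal(size=(n_out, n_feat))               # numerator coefficients
            gam = rng.uniform(0.1, 1.0, size=(n_out, n_feat))    # positive coefficients
            self.params.append((lam, gam))

    def __call__(self, x):
        for lam, gam in self.params:
            num = lam @ monomials(x, self.degree)        # general polynomial numerator
            den = gam @ monomials(x ** 2, self.degree)   # even-powered basis keeps q > 0
            x = num / den
        return x

rng = np.random.default_rng(0)
net = RationalNN(widths=[2, 2, 1], degree=2, rng=rng)
print(net(np.array([0.3, -0.7])))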
§ RECOVERING STABILISING CONTROLLERS USING SUM OF SQUARES
In this section, we propose a novel procedure to obtain a stabilising controller for a non-linear polynomial system using SOS programming. To do this we leverage the Psatz and exploit its structure to generate a feasibility test for a stabilising controller.
Consider the polynomial P(x) ∈ℝ[x] and partition x = [y, z] so that y ∈ℝ^n, z ∈ℝ. Consider the set
S = { y ∈ℝ^n, z ∈ℝ | p_i(y,z) - q_i(y)z = 0 ∀ i = 1, …, m },
where p_i(y,z) ∈ℝ[x], q_i(z) ∈ℝ[z]. If
P(x) - ∑_i=1^m (p_i(y,z) - q_i(y)z) ∈Σ[x], q_i(y) ≠ 0, ∀ i = 1, …, m,
then P(x) ≥ 0 on the set
T = { y ∈ℝ^n, z ∈ℝ | p_i(y,z)/q_i(y) - z = 0, q_i(y) ≠ 0 ∀ i = 1, …, m }.
Consider the set S, if
P(x) - ∑_i=1^m t_i(p_i(y,z) - q_i(y)z) ∈Σ[x],
where each t_i = 1 (which is admissible since t_i∈ℝ[x]), then P(x) ≥ 0 on S. If we include the condition q_i(y) ≠ 0, ∀ i = 1, …, m, then we can rewrite (<ref>) as
P(x) - ∑_i=1^m q_i(y) ( p_i(y,z)/q_i(y) - z ) ∈Σ[x], q_i(y) ≠ 0, ∀ i = 1, …, m,
since q_i(y) ∈ℝ[x], then P(x) ≥ 0 on T.
Proposition <ref> considers the Psatz in a particular form to obtain a positivity certificate of a function over a set of rational functions. In essence, this formulation fixes the multipliers in the Psatz to unity and then searches over the polynomials to find a constraint set that satisfies the feasibility test. This is useful if we want to find a constraint set for the nonnegativity of a function, instead of testing that function over a known constraint set. This can be leveraged to find a controller that satisfies the Lyapunov stability conditions.
Consider the non-linear system
ż = f(z) + g(z)u,
u = p(z)/q(z), q(z) ≠ 0,
where z ∈ℝ^n_z are the system states, u ∈ℝ^n_u is the controller input and f(z) and g(z) are polynomials. Suppose that there exists a Lyapunov function V(z) such that V(z) is positive definite in a neighbourhood of the origin and polynomials p(z), q(z) satisfying
-∂ V/∂ z(f(z) + g(z)u) - (p(z) - q(z)u) ≥ 0 ∀ z,u.
Then the origin of the state space is a stable equilibrium of the system.
We consider the stability of constrained dynamical systems as in <cit.>. Using the same argument as in Proposition <ref>, if
-∂ V/∂ z(f(z) + g(z)u),
is nonnegative on the set
{ z ∈ℝ^n_z, u ∈ℝ^n_u | p(z) - q(z)u = 0, q(z) ≠ 0 },
then it is also nonnegative on the set
{ z ∈ℝ^n_z, u ∈ℝ^n_u | p(z)/q(z) - u = 0, q(z) ≠ 0 },
which defines the controller in the closed-loop system.
To compute the Lyapunov function and polynomials that define the rational controller in Proposition <ref> we can formulate an SOS program.
Consider the dynamical system (<ref>) in Proposition <ref>. Suppose there exists polynomial functions V(z), p(z), q(z), a positive definite function ρ (z) such that
V(z) - ρ(z) ∈Σ[z],
-∂ V/∂ z(f(z) + g(z)u) - (p(z) - q(z)u) ∈Σ[X],
p(z) ∈ℝ[X],
q(z) ∈ℝ[X],
q(z) ≠ 0,
where X = (z,u) is a vector of all of the states. Then the origin of the system is a stable equilibrium.
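To see the mechanics of Theorem <ref> on the simplest possible plant, take ż = z + u, the candidate Lyapunov function V(z) = z^2, and the hand-picked polynomials p(z) = -3z^2, q(z) = 2z; these are chosen by hand for illustration, not produced by an SOS solver, and q vanishes only at the origin, where the recovered controller extends continuously. The sympy check below confirms that the certificate expression is a perfect square and that the recovered controller u = p(z)/q(z) = -3z/2 stabilises the plant.

import sympy as sp

z, u = sp.symbols("z u", real=True)

f, g = z, 1                    # plant: zdot = f(z) + g(z) u
V = z**2                       # candidate Lyapunov function
p, q = -3 * z**2, 2 * z        # hand-picked polynomials defining the controller

# Certificate expression: -dV/dz (f + g u) - (p - q u) must be SOS in (z, u).
expr = sp.expand(-sp.diff(V, z) * (f + g * u) - (p - q * u))
print(expr)                    # z**2, which is trivially a sum of squares

u_rec = sp.simplify(p / q)     # recovered controller: -3z/2
print(u_rec, sp.simplify(f + g * u_rec))   # closed loop: zdot = -z/2 (stable)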
Theorem <ref> allows us to reconstruct a stabilising controller from a feasibility test using SOS programming. However, the structure of the controller is limited as it is a simple rational function. We therefore extend this approach to a more expressive class of functions through rational NNs.
§.§ Extension to Rational Neural Network Controllers
The technique outlined in the previous section can be used to recover a stabilising controller that is a rational function of the system states. However, we can expand this approach to consider an NN architecture that contains rational functions similar to the one proposed in (<ref>). We consider a state feedback controller u(t) = π (z(t)) : ℝ^n_z→ℝ^n_u as a rational NN such that
x^0(t) = z(t),
x_i^k+1(t) = p_i^k(x^k(t))/q_i^k(x^k(t)) , for i = 1, …, n_k, k = 0, … , ℓ - 1,
u_i(t) = π_i (z(t)) = p_i^ℓ(x^ℓ(t))/q_i^ℓ(x^ℓ(t)), for i = 1, … , n_u,
where p_i^k(x^k(t)), q_i^k(x^k(t)) are the polynomials that form the rational expression associated with the i^th node in the (k+1)^th layer. The number of neurons in the k^th layer is denoted by n_k. We will drop the time dependence notation throughout the rest of this paper for simplicity.
We can apply Proposition <ref> and the theory of constrained dynamical systems as in <cit.> due to the well-defined structure of this controller. The following proposition shows how Lyapunov stability over constrained dynamical systems can be used to recover a controller of this form.
Consider the non-linear system
ż = f(z) + g(z)u,
u = π(z),
where z ∈ℝ^n_z are the system states, u ∈ℝ^n_u is the controller input and f(z) and g(z) are polynomials. Consider the controller structure π(z) defined in (<ref>) and the region given by (<ref>). Suppose there exist polynomial functions V(z), p_i^k(x^k) ∀ i = 1, …, n_k+1, k = 0, … , ℓ, and q_i^k(x^k) ∀ i = 1, …, n_k+1, k = 0, … , ℓ satisfying the following conditions
V(z) - ρ(z) ∈Σ[z],
ρ(z) > 0,
-∂ V/∂ z(z) (f(z) + g(z)u) - ∑_k=1^n_ds_k(X)d_k(z) …
- ∑_k=1^ℓ∑_i=1^n_k( p_i^k(x^k) - q_i^k(x^k) x_i^k+1) …
- ∑_i=1^n_u( p_i^ℓ(x^ℓ) - q_i^ℓ(x^ℓ) u_i) ∈Σ[X],
s_k(X) ∈Σ[X], ∀ k = 1, … , n_d,
q_i^k(x^k) ≠ 0, ∀ i = 1, …, n_k, k = 0, … , ℓ,
where X is a vector of all the system and NN states, i.e. X = (x,u,z). Then the equilibrium of the system is stable.
The above proposition can generate a feasible SOS program, however the rational NN controller may be difficult to recover. This is because the coefficients in the q_i^k(x^k) terms will be set to very small values by the SOS program, due to each term cancelling with the adjacent layers. We therefore propose an alternative rational NN controller structure in the following section to mitigate this issue.
§.§ Refined Rational Neural Network Controller
The Lyapunov condition in Proposition <ref> may result in numerical issues when solving the SOS program due to the structure of the constraints, making the rational NN controller difficult to recover. To overcome this issue, we enrich the NN structure by considering a state feedback controller u = π (z) : ℝ^n_z→ℝ^n_u as a rational NN such that
y^0 = z,
x_i^k+1 = ∑_j=1^n_kp_i,j^k(y^k)y_j^k/q_i^k(z) , for i = 1, …, n_k+1, k = 0, … , ℓ - 1,
y_i^k+1 = x_i^k+1 + 1 , for i = 1, …, n_k, k = 0, … , ℓ - 1,
u_i = π_i (z) = (∑_j=1^n_ℓp_i,j^ℓ(y^ℓ)y_j^ℓ)(∑_m = 1^n_z z_m^2)/q_i^ℓ(z), for i = 1, … , n_u,
where p_i,j^k(x^k), q_i^k(z) are the j^th polynomials that form the rational expression associated with the i^th node in the (k+1)^th layer.
Each x_j^k node in the NN contains a rational activation function and each y_j^k node is equal to the x_j^k term with the addition of a bias term which we set to unity. The y_j^k term that appears in the numerator of the rational activation function is to ensure that all terms in the x_i^k+1 node are a function of all of the nodes in the k^th layer and to impose more structure on the NN. The denominator q_i^k(z) is a function of the system states to ensure that all nodes in the network are tied to the system states and not just the nodes in the previous layer. This will ensure that the SOS program does not set the coefficients in the q_i^k(z) terms to very small values. The final layer contains a multiplier term which is a sum of all of the system states ∑_m = 1^n_z z_m^2 to ensure that the controller input goes to zero at the origin.
We can then adapt Proposition <ref> for this modified rational NN structure, to obtain a controller that can be recoverable from the feasibility test.
Consider the non-linear system
ż = f(z) + g(z)u,
u = π(z),
where z ∈ℝ^n_z are the system states, u ∈ℝ^n_u is the controller input and f(z) and g(z) are polynomials. Consider the controller structure π(z) defined in (<ref>) and the region given by (<ref>). Suppose there exist polynomial functions V(z), p_i,j^k(x^k) ∀ i = 1, …, n_k+1, j = 1, …, n_k, k = 0, … , ℓ and q_i^k(z) ∀ i = 1, …, n_k+1, k = 0, … , ℓ satisfying the following conditions
V(z) - ρ(z) ∈Σ[z],
ρ(z) > 0,
-∂ V/∂ z(z) (f(z) + g(z)u) - ∑_k=1^n_ds_k(X)d_k(z) …
- ∑_k=1^ℓ∑_i=1^n_k( ( ∑_j=1^n_k-1 p_i,j^k(y^k)y_j^k) - q_i^k(z) x_i^k+1) …
- ∑_k=1^ℓ∑_i=1^n_k t_i,k(X) (y_i^k - x_i^k - 1) …
- ∑_i=1^n_u( ∑_j=1^n_ℓ( p_i,j^ℓ(y^ℓ)y_j^ℓ( ∑_m=1^n_z z_m^2) ) - q_i^ℓ(z) u_i) ∈Σ[X],
s_k(X) ∈Σ[X], ∀ k = 1, … , n_d,
t_i,k(X) ∈ℝ[X], ∀ k = 1, …, ℓ, i = 1, …, n_k,
q_i^k(x^k) ≠ 0, ∀ i = 1, …, n_k, k = 0, … , ℓ,
where X is a vector of all the system and NN states, i.e. X = (u,x,y,z). Then the equilibrium of the system is stable.
The SOS program in the above proposition is convex in the rational NN parameters and can hence be solved using SOS programming. Note that saturation and any uncertainty and robustness conditions can be incorporated in the same way that is demonstrated in <cit.>. Proposition <ref> presents a method to obtain a stabilising NN controller for a non-linear polynomial system in a convex way by solving one SOS optimisation problem. As described in Section <ref>, other recent approaches such as <cit.> rely on iterative algorithms or reinforcement learning formulations that are significantly more expensive to compute.
§ NUMERICAL EXAMPLES
We demonstrate how the approach in Proposition <ref> can be used to recover stabilising rational NN controllers through numerical examples. These examples were run on a four-core Intel Xeon processor @3.50GHz with 16GB of RAM. The SOS programs were implemented using MATLAB and SOSTOOLS to parse the SOS constraints into an SDP, which is solved using MOSEK <cit.>.
§.§ One Dimensional System
To show that this method is able to obtain a stabilising rational NN controller, we consider a very simple one dimensional linear system of the form
ż = z + u.
This system is unstable without a feedback controller. To recover a stabilising controller we set the size of the rational NN to be a small two layer network with a single node in each layer. The equations of the controller can be written as
x_1 = (λ_1,1 z^4 + λ_1,2z^2)/(γ_1,1z^2 + γ_1,2),
y_1 = x_1 + 1,
u = ((λ_2,1y_1^2 + λ_2,2y_1) z)/(γ_2,1z^2 + γ_2,2),
where γ_1,1≥ 0, γ_1,2 > 0, γ_2,1≥ 0, γ_2,2 > 0. We also add saturation to the controller such that -10 ≤ u ≤ 10 and we enforce the local region of the state space to be -10 ≤ z ≤ 10. The SOS program is able to recover a feasible controller which can stabilise the system to the zero equilibrium. The state trajectory and controller input for this NFL over time are shown in Figure <ref> and <ref> respectively.
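The recovered coefficient values are not listed here, so the closed-loop simulation sketched below uses hand-picked placeholder coefficients that respect the sign conditions γ_1,2, γ_2,2 > 0 and happen to stabilise the plant; they are not the values returned by the SOS program.

import numpy as np

def controller(z, lam, gam):
    """Rational NN controller with the structure of (<ref>) for the scalar plant.
    The coefficients are hand-picked placeholders, NOT the SOS-recovered values."""
    x1 = (lam[0, 0] * z**4 + lam[0, 1] * z**2) / (gam[0, 0] * z**2 + gam[0, 1])
    y1 = x1 + 1
    u = (lam[1, 0] * y1**2 + lam[1, 1] * y1) * z / (gam[1, 0] * z**2 + gam[1, 1])
    return np.clip(u, -10, 10)          # controller saturation -10 <= u <= 10

lam = np.array([[0.1, 0.2], [0.0, -4.0]])
gam = np.array([[0.5, 1.0], [0.0, 1.0]])   # gamma_{1,2}, gamma_{2,2} > 0 as required

dt, T = 0.01, 10.0
for z0 in (8.0, -5.0):
    z = z0
    for _ in range(int(T / dt)):
        z += dt * (z + controller(z, lam, gam))   # forward-Euler step of zdot = z + u
    print(f"z0 = {z0:+.1f}  ->  z(T) = {z:+.2e}")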
§.§ Three Dimensional Non-linear System
We now consider a three dimensional non-linear system system given by
ż_1 = -z_1 + z_2 - z_3,
ż_2 = -z_1(z_3 + 1) - z_2,
ż_3 = -z_1 + u,
and attempt to find a stabilising rational NN controller for the system. We set the NN to have two layers and three nodes in each layer, with saturation -10 ≤ u ≤ 10. The system states are defined to operate in the region
1^2 - z_1^2 - z_2^2 - z_3^2≥ 0.
Each polynomial in the rational NN is assigned to contain zeroth to fourth order terms. We define a quartic Lyapunov function and second order polynomials for the s_k and t_i,k terms. The trajectories for this system are shown in Figure <ref>, which shows that the controller can successfully stabilise the system.
§.§ Non-linear Inverted Pendulum
We now consider the inverted pendulum proposed in <cit.> with dynamics given by
θ̈(t) = (mglsin(θ(t)) - μθ̇(t) + sat(u(t)))/(ml^2).
As shown in <cit.>, we can rewrite the dynamics as a four dimensional polynomial system. We let z_1 = θ, z_2 = θ̇ and by making the substitution z_3 = sin(z_1), z_4 = cos(z_1) the system can be written as
ż_1 = z_2,
ż_2 = g/lz_3 - μ/ml^2 z_2 + 1/ml^2 u,
ż_3 = z_2z_4,
ż_4 = -z_2z_3,
where m=0.15 kg, l=0.5 m, μ=0.5 Nmsrad^-1, g=9.81 ms^-2 and the controller input is saturated such that -1 ≤ u ≤ 1. The system also requires the equality constraint
z_3^2 + z_4^2 - 1 = 0,
to be enforced. We do not define any region of the state space and instead consider global stability. We include the additional robustness constraints on the length of the pendulum to be ± 0.1 its original length and additive white noise w on the angular velocity such that || w ||_∞≤ 0.1. By making the substitution δ = 1/l the full dynamical system can be written as
ż_1 = z_2,
ż_2 = g δ z_3 - μδ^2/mz_2 + δ^2/m u + w,
ż_3 = z_2z_4,
ż_4 = -z_2z_3,
0 = z_3^2 + z_4^2 - 1,
0 ≤ 1^2 - u^2,
0 ≤ 0.1^2 - w^2,
0 ≤ (1/0.4 - δ)(δ - 1/0.6).
The Lyapunov function must be carefully constructed due to the z_4 state being equal to one at the origin. We therefore define the Lyapunov function to be the sum of two Lyapunov functions, the first of which is defined as
V_1(z_3,z_4) = a_1z_3^2 + a_2z_4^2 + a_3z_4 + a_4
and the second is quadratic in z_1 and z_2. To ensure that the Lyapunov function is zero at the origin we must enforce
a_2 + a_3 + a_4 = 0.
To ensure that the Lyapunov function is positive definite, we define
ρ(z) = ϵ_1z_1^2 + ϵ_2z_2^2 + ϵ_3(1 - z_4),
where ϵ_1≥ 0.1, ϵ_2≥ 0.1, ϵ_3≥ 0.1.
The rational NN controller is a two-layer network with four nodes in each layer. By setting the size of the polynomials in the network to be between zeroth and fourth order, we are able to recover a controller. The trajectories for this NFL are shown in Figure <ref>. We can see that the controller initially drives the system states towards a manifold and then moves them towards the equilibrium. Since the control system is discontinuous at the manifold, further analysis is required to show stability of the system.
§ CONCLUSION
In this paper, we analyse the use of rational NNs in previous application areas. We present novel rational activation functions to approximate the traditional sigmoid and tanh functions and show how they can be used in robustness problems for NFLs. We argue that rational activation functions can be replaced with a general rational NN structure where each layer is convex in the NN's parameters. We then propose a method to recover a stabilising controller from a feasibility test and then extend this approach to rational NNs. This structure is refined to make it more compatible when used in conjunction with SOS programming. Through numerous numerical examples we show how this approach can be used to recover stabilising rational NN controllers for NFLs with non-linear plants with noise and parametric uncertainty.
§ ACKNOWLEDGEMENTS
This work was supported by EPSRC grants EP/L015897/1 (to M. Newton) and EP/M002454/1 (to A. Papachristodoulou) and the Tony Corner Research Fund.
|
http://arxiv.org/abs/2307.05150v1 | 20230711101325 | A Modal Logic for Explaining some Graph Neural Networks | [
"Pierre Nunn",
"François Schwarzentruber"
] | cs.AI | [
"cs.AI",
"cs.LO"
] |
In this paper, we propose a modal logic in which counting modalities appear in linear inequalities. We show that each formula can be transformed into an equivalent graph neural network (GNN). We also show that each GNN can be transformed into a formula. We show that the satisfiability problem is decidable. We also discuss some variants that are in PSPACE.
§ INTRODUCTION
Graph neural networks are used to learn a class of graphs or pointed graphs (a graph with a designated vertex). GNNs are used in many applications: social networks <cit.>, chemistry, knowledge graphs etc. (see <cit.> for an overview of the applications of GNNs).
The holy grail for explaining GNNs would be to provide an algorithm for the following problem:
Synthesis of an explanation. Input: a GNN A. Output: a logical formula ϕ such that [[A]] = [[ϕ]],
where [[A]] is the class of pointed graphs recognized by the GNN A, and [[ϕ]] is the class of pointed graphs in which ϕ holds. In other words, the goal is to compute a formula ϕ (in modal logic, or in graded modal logic for example) that completely explains the class of graphs recognized by A.
For instance, in a social network, a person is recommended by a GNN A iff that person has at least one friend who is a musician (the formula ϕ being for instance expressed in modal logic by ◇ musician, where ◇ is the existential modal operator).
The synthesis of an explanation in some logic - let say first-order logic, modal logic, or graded modal logic - is highly challenging. In this paper, we tackle a less challenging problem but that goes in the same direction. We provide an algorithmic solution for tackling the following problems:
P1 (Verification of an explanation). Input: a GNN A, a logical formula ϕ. Output: yes, if [[A]] = [[ϕ]].
P2 (Verification of an explanation). Input: a GNN A, a logical formula ϕ. Output: yes, if [[A]] ⊆ [[ϕ]].
P3 (Verification of an explanation). Input: a GNN A, a logical formula ϕ. Output: yes, if [[ϕ]] ⊆ [[A]].
P4 (Finding a counterexample). Input: a GNN A, a logical formula ϕ. Output: yes, if [[ϕ]] ∩ [[A]] ≠ ∅.
Here are kind of question instances that problems P1-4 are able to solve:
* P1: is a recommended person a person that has at least one musician friend?
* P2: does any recommended person have a musician friend?
* P3: is any person that a musician friend recommended?
* P4: is it possible to recommend a person that has at least one musician friend?
Our solution is a general methodology to solve the problems P1-4 which consists in representing everything (ϕ but also the GNN A) in logic.
Interestingly, there is a neat correspondence between graded modal logic and GNNs (<cit.>, <cit.>). Graded modal logic <cit.> is a modal logic offering the ability, via the construction ◇^≥k ϕ, to say that a vertex has at least k successors satisfying a formula ϕ. We know that a GNN that is expressible in first-order logic (FO) is also captured by a formula in graded modal logic <cit.>. However, the use of graded modal logic is problematic because we do not know how to represent an arbitrary GNN in it (in particular those that are not expressible in FO).
That is why we define a logic that is expressive enough to capture a reasonable class of GNNs while being able to express any formula of modal logic or graded modal logic. We then provide an algorithm for the satisfiability problem of this logic.
In this article, we capture AC-GNN (aggregation-combination graph neural networks) <cit.> that are defined by an aggregation function which is the sum of feature vectors, the combination functions being linear functions truncated with the activation function max(0, min(1, x)), and where the classification function is linear too. The max(0, min(1, x)) is called truncated reLU (see <cit.>) or clipped reLU (see <cit.>).
The logic we consider is a combination of counting modalities and linear programming. It extends graded modal logic. We provide a translation from any GNN A to a formula tr(A) in our logic, so that the problems P1-4 can be reformulated as follows:
* P1: is tr(A) ↔ϕ valid?
* P2: is tr(A) →ϕ valid?
* P3: is ϕ→ tr(A) valid?
* P4: is ϕ ∧ tr(A) satisfiable?
The formula ϕ can be, for instance, a formula of modal logic K or of graded modal logic. As our logic subsumes these logics, all the problems P1-4 in fact reduce to the satisfiability problem of our logic (recall that a formula is valid iff its negation is unsatisfiable). We prove that the satisfiability problem of our logic is decidable.
Interestingly, given a formula, we are able to construct an equivalent GNN. This can be used to tune an existing GNN.
Suppose you learnt a GNN A but you aim at constructing a new GNN A' that behaves like A but excludes the pointed graphs that do not satisfy ϕ. More precisely, for the following problem:
Tuning of a GNN. Input: a GNN A, a logical formula ϕ. Output: a new GNN A' such that [[A']] = [[A]] ∩ [[ϕ]],
we simply take a GNN A' that represents the formula tr(A) ∧ ϕ.
More precisely, the contributions of this paper are:
* the formal definition of the logic;
* the construction of a GNN equivalent to a formula ϕ of our logic (it generalizes the result of Prop 4.1 in <cit.>);
* the construction of a formula tr(A) of our logic equivalent to a given GNN A;
* the fact that the satisfiability problem of our logic is in EXPTIME^NP (i.e. EXPTIME with an NP oracle);
* restrictions of the satisfiability problem that are in PSPACE.
Outline.
In Section <ref> we recall the definition of AC-GNN. In Section <ref>, we define the logic . In Section <ref>, we study the correspondence between GNN and logic . In Section <ref>, we discuss the satisfiability problem of .
§ BACKGROUND ON AC-GNN
In this paper, we consider aggregate-combine GNN (AC-GNN) <cit.>, also sometimes called message passing neural
network (MPNN) <cit.>. In the rest of the paper, we call a AC-GNN simply a GNN.
A (labeled directed) graph G is a tuple (V, E, ℓ) such that V is a finite set of vertices, E ⊆ V × V is a set of directed edges, and ℓ is a mapping from V to a valuation over a set of atomic propositions. We write ℓ(u)(p) = 1 when atomic proposition p is true in u, and ℓ(u)(p) = 0 otherwise.
A state x is a mapping from V into ℝ^p for some p.
As in <cit.>, we use the term `state' for both denoting x and also the vector x(v) at a given vertex v.
Suppose that the relevant atomic propositions are p_1, …, p_k.
The initial state x_0 is defined by:
x_0(u) = (ℓ(u)(p_1), …, ℓ(u)(p_k), 0, …, 0)
for all u ∈ V.
For simplicity, we suppose that all states are of the same dimension p.
An aggregation function AGG is a function mapping finite multisets of vectors in ℝ^p to vectors in ℝ^p. A combination function COM is a function mapping a vector in ℝ^2p to a vector in ℝ^p.
A GNN layer of input/output dimension p is defined by an aggregation function AGG and a combination function COM.
A GNN is a tuple (ℒ^(1), ..., ℒ^(d), CLS) where ℒ^(1), ..., ℒ^(d) are d GNN layers and CLS: ℝ^p → {0, 1} is a classification function.
When applied to a graph G, the t-th GNN layer ℒ^(t) transforms the previous state x_t-1 into the next state x_t by:
x_t(u) = COM^(t)( x_t-1(u), AGG^(t)( {{ x_t-1(v) | (u,v) ∈ E }} ) )
where COM^(t) and AGG^(t) are respectively the combination and aggregation functions of the t-th layer.
In the above equation, note that the argument of AGG^(t) is the multiset of the state vectors of the successors of u. Thus, the same vector may occur several times in that multiset. Figure <ref> explains how a layer works at each vertex.
Figure <ref> explains how the overall GNN works: the state is updated at each layer; at the end the function CLS says whether each vertex is positive (1) or negative (0).
Let A be a GNN. We define [[A]] as the set of pointed graphs (G, u) such that CLS(x_d(u)) = 1, where x_d denotes the state after the last layer.
In the rest of the article, we suppose that the aggregation function is a sum:
AGG(X) = ∑_x ∈ X x.
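To make the definitions concrete, here is a minimal NumPy sketch of an AC-GNN forward pass with sum aggregation, a combination of the form COM(x, y) = σ(xC + yA + b) with σ the truncated ReLU, and a linear threshold classification; all parameter values below are random placeholders, not a trained network.

```python
# Minimal AC-GNN forward pass sketch: sum aggregation + truncated-ReLU combination.
import numpy as np

def truncated_relu(x):
    return np.clip(x, 0.0, 1.0)

def gnn_layer(X, adj, C, A, b):
    """X: (n, p) vertex states; adj: (n, n) 0/1 adjacency, adj[u, v] = 1 for edge u -> v."""
    agg = adj @ X                                  # sum of successor states per vertex
    return truncated_relu(X @ C + agg @ A + b)

def run_gnn(X0, adj, layers, cls_weights):
    X = X0
    for (C, A, b) in layers:
        X = gnn_layer(X, adj, C, A, b)
    return (X @ cls_weights >= 0).astype(int)      # linear classification per vertex

# Toy usage with random placeholder parameters.
rng = np.random.default_rng(0)
n, p = 4, 3
adj = np.array([[0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]])
X0 = rng.integers(0, 2, size=(n, p)).astype(float)
layers = [(rng.normal(size=(p, p)), rng.normal(size=(p, p)), rng.normal(size=p)) for _ in range(2)]
print(run_gnn(X0, adj, layers, cls_weights=rng.normal(size=p)))
```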
§ OUR PROPOSAL: LOGIC
In this section, we describe the syntax and semantics of . We finish the section by defining its satisfiability problem.
§.§ Syntax
Consider a countable set AP of propositions. We define the language of our logic as the set of formulas generated by the following BNF:
ϕ ::= p | ¬ϕ | ϕ ∧ ϕ | E ≥ 0
E ::= c | 1_ϕ | ♯ϕ | E + E | c × E
where p ranges over AP, and c ranges over ℤ. This logic is an extension of modal logic. Atomic formulas are propositions p, and inequalities of linear expressions. We consider linear expressions over 1_ϕ and ♯ϕ. The number 1_ϕ is equal to 1 if ϕ holds in the current world and to 0 otherwise. The number ♯ϕ is the number of successors in which ϕ holds. The language may seem restrictive, but we write E_1 ≤ E_2 for E_2 - E_1 ≥ 0, E = 0 for (E ≥ 0) ∧ (-E ≥ 0), etc.
Graded modal logic <cit.> extends classical modal logic by offering counting modality constructions of the form ◇^≥k ϕ, which means that there are at least k successors in which ϕ holds.
Our logic is more expressive than graded modal logic since ◇^≥k ϕ can be rewritten as k ≤ ♯ϕ.
Interestingly, the property `there are at least as many p-successors as q-successors' can be expressed in our logic by ♯p ≥ ♯q, but cannot be expressed in FO, and thus not in graded modal logic. This is proven via an Ehrenfeucht-Fraïssé game.
The set sub(ϕ) of subformulas of ϕ is defined by induction on ϕ:
sub(p) = {p}
sub(¬ϕ) = {¬ϕ} ∪ sub(ϕ)
sub(ϕ ∧ ψ) = {ϕ ∧ ψ} ∪ sub(ϕ) ∪ sub(ψ)
sub(E ≥ 0) = {E ≥ 0} ∪ ⋃ { sub(ψ) | 1_ψ or ♯ψ appears in E }
The modal depth md(ϕ) of a formula and the modal depth md(E) of an expression are defined by mutual induction on ϕ and E:
md(p) = 0
md(¬ϕ) = md(ϕ)
md(ϕ ∧ ψ) = max(md(ϕ), md(ψ))
md(E ≥ 0) = md(E)
md(c) = 0
md(1_ϕ) = md(ϕ)
md(♯ϕ) = md(ϕ) + 1
md(E_1 + E_2) = max(md(E_1), md(E_2))
md(c × E) = md(E)
As in modal logic, modalities are organized in levels.
md( (1_p + ♯q ≤ 4) ∧ (♯(♯p ≥ 2) ≤ 4) ) = 2.
The expressions ♯q and ♯(♯p ≥ 2) are at the root level (level 1), while the expression ♯p is at level 2.
A formula can be represented by a DAG (directed acyclic graph) instead of just a syntactic tree. This allows common subformulas to be shared. For instance, ♯(p ∧ q) ≥ 1_p∧q is represented by the following DAG, in which p ∧ q is used twice:
[DAG: a root comparison node with two children, ♯(·) and 1_(·), which share a single node p ∧ q; that node points to the leaves p and q.]
§.§ Semantics
As in modal logic, a formula ϕ is evaluated in a pointed graph (G, u) (also known as pointed Kripke model).
We define the truth conditions (G,u) ⊨ ϕ (ϕ is true in u) and the semantics ⟦E⟧_G,u (the value of E in u) of an expression E by mutual induction on ϕ and E as follows.
(G,u) ⊨ p if ℓ(u)(p) = 1
(G,u) ⊨ ¬ϕ if it is not the case that (G,u) ⊨ ϕ
(G,u) ⊨ ϕ ∧ ψ if (G,u) ⊨ ϕ and (G,u) ⊨ ψ
(G,u) ⊨ E ≥ 0 if ⟦E⟧_G,u ≥ 0
⟦c⟧_G,u = c
⟦E_1 + E_2⟧_G,u = ⟦E_1⟧_G,u + ⟦E_2⟧_G,u
⟦c × E⟧_G,u = c × ⟦E⟧_G,u
⟦1_ϕ⟧_G,u = 1 if (G,u) ⊨ ϕ, and 0 otherwise
⟦♯ϕ⟧_G,u = |{ v ∈ V | (u,v) ∈ E and (G,v) ⊨ ϕ }|
Consider the pointed graph (G, u) shown in Figure <ref>. We have G, u ⊨ p ∧ (♯¬p ≥ 2) ∧ (♯(♯p ≥ 1) ≤ 1). Indeed, p holds in u, u has (at least) two successors in which ¬p holds, and there is (at most) one successor which has at least one p-successor.
[[ϕ]] is the set of pointed graphs (G, u) such that (G, u) ⊨ ϕ.
We say that ϕ is satisfiable when there exists a pointed graph (G, u) such that (G, u) ⊨ ϕ.
The satisfiability problem is: given a formula ϕ of our logic, is ϕ satisfiable?
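A small model-checking sketch for the semantics above may help; formulas and expressions are encoded here as nested Python tuples, an assumed encoding chosen only for this illustration.

```python
# Minimal model checker: formulas/expressions as nested tuples, e.g. ('and', phi, psi).
def holds(G, u, phi):
    V, E, label = G                      # label[u] is the set of propositions true at u
    op = phi[0]
    if op == 'prop':  return phi[1] in label[u]
    if op == 'not':   return not holds(G, u, phi[1])
    if op == 'and':   return holds(G, u, phi[1]) and holds(G, u, phi[2])
    if op == 'geq0':  return value(G, u, phi[1]) >= 0
    raise ValueError(op)

def value(G, u, expr):
    V, E, label = G
    op = expr[0]
    if op == 'const': return expr[1]
    if op == 'ind':   return 1 if holds(G, u, expr[1]) else 0                           # 1_phi
    if op == 'count': return sum(1 for (a, b) in E if a == u and holds(G, b, expr[1]))  # #phi
    if op == 'plus':  return value(G, u, expr[1]) + value(G, u, expr[2])
    if op == 'scale': return expr[1] * value(G, u, expr[2])
    raise ValueError(op)

# Toy pointed graph: u satisfies p and has two successors where ¬p holds.
V = {'u', 'v1', 'v2'}
E = {('u', 'v1'), ('u', 'v2')}
label = {'u': {'p'}, 'v1': set(), 'v2': set()}
phi = ('and', ('prop', 'p'),
       ('geq0', ('plus', ('count', ('not', ('prop', 'p'))), ('const', -2))))  # p ∧ (#¬p - 2 >= 0)
print(holds((V, E, label), 'u', phi))   # True
```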
§ CORRESPONDENCE
We explain how to transform a formula of our logic into a GNN, and vice versa.
§.§ From logic to GNN
Let us show that each formula of our logic is captured by a GNN. The proof follows the same lines as the proof that each formula of graded modal logic is captured by a GNN (see Prop 4.1 in <cit.>). However, our first result (point 1 in the following theorem) is a generalisation of their result since our logic is more expressive than graded modal logic. Moreover, point 2 of the following theorem explicitly mentions a bound on the number of layers in the GNN.
For each formula ϕ of our logic, we can compute a GNN A such that [[ϕ]] = [[A]]. Furthermore, we have:
* Either the number of layers and the dimension of the states in A is |ϕ|;
* Or the number of layers of A is O(md(ϕ)).
Let ϕ be a formula. Let (ϕ_1,...ϕ_L) be an enumeration of the sub-formulas of ϕ such that ϕ_L = ϕ.
We will construct a GNN 𝒜_ϕ with L layers. The dimension of the states is L. The goal is that, for all k ≤ L, the k-th component of x_L(v) is equal to 1 if the formula ϕ_k is satisfied in node v, and 0 otherwise.
The aggregation and combination functions in each layer are set to:
AGG(X) = ∑_x ∈ X x
COM(x,y) = σ(xC + yA + b)
where σ applies componentwise the truncated ReLU σ(x) = min(max(0,x), 1),
and where A, C ∈ ℝ^L×L and b ∈ ℝ^L are defined as follows. All cells are zero, except the cells given in the following table:
ϕ_ℓ = p: C_ℓℓ = 1, b_ℓ = 0
ϕ_ℓ = ¬ϕ_i: C_iℓ = -1, b_ℓ = 1
ϕ_ℓ = ϕ_i ∨ ϕ_j: C_iℓ = C_jℓ = 1, b_ℓ = 0
ϕ_ℓ = ϕ_i ∧ ϕ_j: C_iℓ = C_jℓ = 1, b_ℓ = -1
ϕ_ℓ = (c ≤ ∑_i ∈ I k_i × 1_ϕ_i + ∑_i ∈ I' k_i × ♯ϕ_i): C_iℓ = k_i for i ∈ I, A_iℓ = k_i for i ∈ I', b_ℓ = -c + 1
For proving point 2., the idea is to transform each propositional level of ϕ into a CNF. We obtain a -formula ϕ' potentially exponentially larger than ϕ.
The advantage is now that each propositional level is a CNF and thus is of depth at most 2. We treat an arbitrary large disjunction or conjunction as a single step of computation. In other words, we now consider an enumeration (ϕ_1,...ϕ_L) of subformulas of ϕ' such that ϕ_L = ϕ' and with L = O(md(ϕ)). Here are the corresponding ℓ-columns of C and b_ℓ for the cases where ϕ_ℓ is an arbitrary large disjunction or conjunction:
ϕ_ℓ = ⋁_i ∈ I ϕ″_i ∨ ⋁_i ∈ I' ¬ϕ″_i: C_iℓ = 1 for i ∈ I, C_iℓ = -1 for i ∈ I', b_ℓ = |I'|
ϕ_ℓ = ⋀_i ∈ I ϕ″_i: C_iℓ = 1 for i ∈ I, b_ℓ = -|I| + 1
Consider the formula ϕ = p ∧ (8 ≤ 3 × ♯q).
We define the following GNN which is equivalent to ϕ as follows. The aggregation function at each layer is
AGG(X) = ∑_x ∈ X x.
The combination function for each layer is
COM(x,y) = σ(xC + yA + b)
where
C = [ 1 0 0 1; 0 1 0 0; 0 0 0 1; 0 0 0 0 ],
A = [ 0 0 0 0; 0 0 3 0; 0 0 0 0; 0 0 0 0; ], and
b = [ 0 0 -7 -1; ].
The columns of the matrices correspond (from left to right) to the subformulas p, q, 8 ≤ 3 × ♯q, and ϕ, in that order.
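As a sanity check, the following sketch applies the matrices above layer by layer (one layer per subformula) on a toy graph and verifies that the fourth state component flags exactly the vertices satisfying p ∧ (8 ≤ 3 × ♯q); the graph used is an arbitrary illustrative choice.

```python
# Check of the worked example phi = p ∧ (8 <= 3·#q) with the matrices C, A, b above.
import numpy as np

def trunc(x):
    return np.clip(x, 0.0, 1.0)

C = np.array([[1, 0, 0, 1], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0]], float)
A = np.array([[0, 0, 0, 0], [0, 0, 3, 0], [0, 0, 0, 0], [0, 0, 0, 0]], float)
b = np.array([0, 0, -7, -1], float)

# Toy graph: vertex 0 has p and three q-successors; vertex 4 has p and only two q-successors.
adj = np.zeros((7, 7))
adj[0, [1, 2, 3]] = 1
adj[4, [5, 6]] = 1
X = np.zeros((7, 4))
X[[0, 4], 0] = 1            # p true at vertices 0 and 4
X[[1, 2, 3, 5, 6], 1] = 1   # q true at the successors

for _ in range(4):          # one layer per subformula
    X = trunc(X @ C + (adj @ X) @ A + b)

print(X[:, 3])              # 1 at vertex 0 (3·3 >= 8), 0 at vertex 4 (3·2 < 8), 0 elsewhere
```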
§.§ From GNN to logic
In this subsection, we show how to compute a -formula that is equivalent to a GNN. Note that this direction was already tackled for graded modal logic for the subclass of GNNs that are FO-expressible, but their proof is not constructive <cit.>.
Let A be a GNN with all aggregation functions being AGG(X) = ∑_x ∈ X x, and with a linear classification function: CLS(x) = 1 iff ∑_i a_i x_i ≥ 0. Then we can compute in poly-time in |A| a formula tr(A) of our logic, represented as a DAG, such that [[A]] = [[tr(A)]].
Let us consider a GNN A of L layers where the aggregation function is always:
AGG(X) = ∑_x ∈ X x.
The idea is that we represent the state x_t(v) at all vertices v by the truth values of some formulas. Initially, the state is represented by the formulas (p_1, …, p_k, ⊥, …, ⊥).
Suppose that the states x_t(v) are represented by the formulas (ϕ_1, …, ϕ_d).
Then if the combination function is
COM(x,y) = σ(xC + yA + b)
then the states x_t+1(v) are represented by the formulas (ϕ'_1, …, ϕ'_d) where ϕ'_ℓ is
∑_i=1..d 1_ϕ_i C_iℓ + ∑_i=1..d ♯ϕ_i A_iℓ + b_ℓ ≥ 1.
Now, we have formulas (ϕ_1, …, ϕ_d) to represent x_L(v). As CLS is linear, the final formula tr(A) is ∑_i a_i 1_ϕ_i ≥ 0.
§ DECIDABILITY
Let us give an algorithm to solve the satisfiability problem of our logic, inspired by the classical tableau method for modal logic K <cit.>, but taking linear constraints into account.
§.§ Design of the algorithm
First, constructions 1_ψ are easy to treat. Either ψ holds and we say that 1_ψ = 1, or ψ does not hold and we say that 1_ψ = 0. However, naively treating the terms ♯ψ as variables in a linear program will unfortunately not work. Let us denote by ♯ψ_1, …, ♯ψ_n the terms of the form ♯ψ that appear in ϕ and that are not in the scope of a ♯-modality. First, some ψ_i may be unsatisfiable, thus ♯ψ_i = 0. But the issue is more subtle. For instance, we always have
♯p + ♯¬p = ♯q + ♯¬q (1).
The reader may imagine even more involved interactions between the ♯ψ_i than equation (1).
To take these interactions into account, we consider all possible conjunctions of the ψ_i and ¬ψ_i. We define for all words w ∈ {0,1}^n:
conj_w := ⋀_i = 1..n | w_i = 1 ψ_i ∧ ⋀_i = 1..n | w_i = 0 ¬ψ_i.
For example, conj_0100 = ¬ψ_1 ∧ ψ_2 ∧ ¬ψ_3 ∧ ¬ψ_4.
We then introduce a variable x_w in the linear program for each word w; it counts the number of successors in which conj_w holds. We have ♯ψ_i = ∑_w | w_i = 1 x_w.
How do we guarantee that ♯p + ♯¬p = ♯q + ♯¬q? Suppose that ψ_1 = p, ψ_2 = ¬p, ψ_3 = q, ψ_4 = ¬q.
As p ∧ ¬p is unsatisfiable, we have x_1100 = x_1101 = x_1110 = x_1111 = 0. We write x_11** = 0. In the same way, since p ∧ ¬p and q ∧ ¬q are unsatisfiable, x_00** = 0, x_**00 = 0 and x_**11 = 0. Finally:
♯p = x_1010 + x_1001, ♯¬p = x_0110 + x_0101,
♯q = x_1010 + x_0110, ♯¬q = x_1001 + x_0101.
We see that ♯p + ♯¬p = x_1010 + x_1001 + x_0110 + x_0101 = ♯q + ♯¬q.
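The bookkeeping for this example can be sketched as follows; plain propositional reasoning stands in for the recursive satisfiability calls of the algorithm, an assumption that is valid here only because the ψ_i are purely propositional.

```python
# Enumerate conj_w for psi = (p, ¬p, q, ¬q), zero out unsatisfiable conjunctions,
# and express each #psi_i as a sum of the remaining counting variables x_w.
from itertools import product

psis = [('p', True), ('p', False), ('q', True), ('q', False)]   # psi_1=p, psi_2=¬p, psi_3=q, psi_4=¬q
n = len(psis)

def literal(psi, bit):
    prop, pol = psi
    return (prop, pol if bit == 1 else (not pol))   # bit 1 keeps psi_i, bit 0 negates it

def conj_satisfiable(w):
    required = {}
    for psi, bit in zip(psis, w):
        prop, val = literal(psi, bit)
        if required.setdefault(prop, val) != val:   # same proposition forced both ways
            return False
    return True

words = [w for w in product([0, 1], repeat=n) if conj_satisfiable(w)]
print("non-zero x_w:", ["x_" + "".join(map(str, w)) for w in words])
for i in range(n):
    terms = ["x_" + "".join(map(str, w)) for w in words if w[i] == 1]
    print(f"#psi_{i+1} =", " + ".join(terms))
```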
Instead of providing tableau rules, we decided to present a more abstract version with Hintikka sets (see Def. 6.24 in <cit.>).
They can be thought as a possible way to completely apply Boolean rules while keeping consistent. We adapt the definition to our setting.
A Hintikka set Σ for a formula ϕ is a smallest (for inclusion) set of subformulas such that:
* ϕ ∈ Σ;
* if ψ_1 ∧ ψ_2 ∈ Σ then ψ_1 ∈ Σ and ψ_2 ∈ Σ;
* if ψ_1 ∨ ψ_2 ∈ Σ then ψ_1 ∈ Σ or ψ_2 ∈ Σ;
* for all ψ, ψ ∉ Σ or ¬ψ ∉ Σ;
* if 1_ψ appears in ϕ, then either ψ ∈ Σ and (1_ψ = 1) ∈ Σ, or ¬ψ ∈ Σ and (1_ψ = 0) ∈ Σ.
Point 1 says that ϕ should be true. In point 3, if ψ_1 ∨ ψ_2 is in Σ then one of the formulas, ψ_1 or ψ_2, should be true, without telling which one. Point 4 is consistency. Point 5 makes the link between the truth of ψ and the value of 1_ψ.
Consider the formula ϕ = p ∧ (♯r ≥ 1_q). There are two possible Hintikka sets for the formula ϕ:
{p, ♯r ≥ 1_q, 1_q = 1, q}
and {p, ♯r ≥ 1_q, 1_q = 0, ¬q}.
The algorithm (see Figure <ref>) consists in examining all possible Hintikka sets. For each of them, we extract the linear program (line 3). We then compute the integer linear program by considering the variables x_w discussed above (line 6). Line 7: we call recursively the function sat on conj_w and we add the constraint x_w = 0 in case conj_w is unsatisfiable.
§.§ Soundness and completeness
ϕ is satisfiable iff sat(ϕ) returns true.
We prove the proposition by induction on md(ϕ). The induction works because md(conj_w) < md(ϕ).
⇒
Suppose that ϕ is satisfiable: let (G, u) be such that (G, u) ⊨ ϕ. Let us prove that sat(ϕ) returns true. We consider the Hintikka set H made up of the formulas that are true in (G, u). The obtained system S is ILP-satisfiable because its equations and inequations are satisfied in (G, u). Indeed, here is a solution: we set x_w to be the number of successors of u in which conj_w holds. By induction, the calls sat(conj_w) are all correct: so if sat(conj_w) returns false, then conj_w is unsatisfiable. Thus there are no u-successors satisfying conj_w, and the constraints x_w = 0 (added line 8) hold. The number of u-successors satisfying ψ_i, i.e. ♯ψ_i, is ∑_w | w_i = 1 x_w. So S is ILP-satisfiable, and the algorithm returns true.
⇐
Conversely, suppose that sat(ϕ) returns true. First, consider S, the system corresponding to the Hintikka set for which the algorithm returned true. From S we extract a valuation ℓ(u) setting the propositions to true or false at a node u. By induction, for all w ∈ {0,1}^n, the calls sat(conj_w) are all correct. Thus, if conj_w is unsatisfiable, x_w = 0; otherwise x_w is not constrained. As sat(ϕ) returned true, we know that S is ILP-satisfiable. Consider a solution. If x_w > 0, we know that conj_w is satisfiable. We consider x_w copies of a pointed graph (G_w, u_w) satisfying conj_w. We construct a model (G, u) of ϕ as follows. We take u as the point with valuation ℓ(u). We then link u to each point of the copies of u_w. The inequations in S are satisfied at (G, u). Indeed, the u-successors satisfying ψ_i are exactly the copies of the u_w with w_i = 1, and ♯ψ_i equals ∑_w | w_i = 1 x_w. Thus, the obtained pointed graph (G, u) is a model of ϕ. Figure <ref> shows an example of the construction.
Note that our logic has the tree-model property, like modal logic K. It can be proven by induction on md(ϕ), relying on the construction in the ⇐-direction of the proof above.
§.§ Complexity
The recursive depth of our algorithm (Figure <ref>) is bounded by the modal depth md(ϕ) of the initial formula ϕ: the recursive tree is of depth md(ϕ). Its branching factor is exponential in |ϕ|. The number of nodes remains exponential in |ϕ|. At each node, there is an exponential number of steps, provided that the ILP-solver is considered as an NP-oracle since integer linear programming (ILP) is in NP <cit.> (note that the linear programs computed here are of exponential size in |ϕ|).
Our algorithm runs in exponential time in |ϕ|, calling an NP-oracle: deciding the satisfiability problem of our logic is in EXPTIME^NP. The class EXPTIME^NP is defined as the class of decision problems decided by an algorithm running in exponential time (i.e. 2^poly(n)) with an NP oracle, typically a SAT oracle or an ILP oracle (note that exponentially long ILP instances may be solved in one step) (see <cit.>, <cit.>). Note that EXPTIME^NP is included in the exponential hierarchy, which is included in EXPSPACE.
The satisfiability problem of our logic is decidable and is in EXPTIME^NP.
§.§ PSPACE subcases
Let us discuss three types of restrictions that yield PSPACE-membership.
Bounding the number of conj_w.
If we can limit the number of considered conjunctions conj_w, we may obtain a procedure running in polynomial space, placing the restricted version of the satisfiability problem of our logic in PSPACE. For instance, if we know in advance that at each level of modal depth the formulas in the scope of a ♯-modality are mutually unsatisfiable, then we do not need to consider all the conjunctions conj_w: all x_w are 0 except x_10...0, x_010...0, ..., x_0...01. We keep a linear program polynomial in |ϕ|.
Bounding the number of modalities.
If we make the syntactic restriction of bounding the number n of ♯ϕ terms at each level, the satisfiability problem is also in PSPACE. Indeed, n becomes a constant, thus 2^n is a constant too. The size of S is only polynomial in the size of ϕ.
Bounded branching. Many graphs have bounded branching: grid graphs (of degree 4), sparse networks, etc.
If we ask whether a formula is satisfiable in a graph whose degree is bounded by a polynomial in the size of the input, then only a polynomial number of variables x_w will be non-zero. The algorithm is then adapted by guessing the polynomial-size subset of variables x_w that are non-zero. Again, we can run the algorithm in polynomial space.
Let k > 0 be an integer.
The satisfiability problem of our logic restricted to graphs of degree at most k is PSPACE-complete.
PSPACE-membership comes from the discussion above. PSPACE-hardness holds because modal logic on graphs with at most 2 successors per world is PSPACE-hard. Write □ψ as ♯¬ψ ≤ 0.
Interestingly, there are fragments in which, if a formula ϕ is satisfiable, then ϕ is satisfiable in a model of polynomial degree in |ϕ|. Consider the fragment in which inequalities in formulas are of the form ♯ψ ≤ ♯ψ' (i.e. no addition, no multiplication by a scalar). Then if there is a solution, we can at each level have an ordering ♯ψ_1 ≤ … ≤ ♯ψ_n where some of the ≤ may be strict. But then we can suppose w.l.o.g. that 0 ≤ ♯ψ_i+1 - ♯ψ_i ≤ 1. It means that the number of successors satisfying each ψ_i is O(i); the total number of successors is O(n^2). We get:
The satisfiability problem of our logic is PSPACE-complete when inequalities in formulas are of the form ♯ψ ≤ ♯ψ'.
PSPACE-membership comes from the discussion above. PSPACE-hardness comes from the fact that modal logic K is reducible to it. Write □ψ as ♯⊤ ≤ ♯ψ.
§ RELATED WORK
Many works combine modal logic and quantitative aspects: counting (<cit.>, <cit.>), probabilities <cit.>.
Linear programming and modal logic have already been combined to solve the satisfiability problem of graded/probabilistic modal logic <cit.>. Our logic can be seen as a `recursification' of the logic used in
<cit.>. They allow for counting successors satisfying a given feature, and not any subformula. Interestingly, they allow for counting also among all nodes in the graph (sort of counting universal modality). Their logic is proven to be undecidable by reduction from the Post correspondence problem. Contrary to our setting, they use their logic only to characterize labelled graphs, but not to give a back and forth comparison with the GNN machinery itself.
Modal logic has also been combined with neural network in the so-called
Connectionist modal logic <cit.> but it has no direct connection with GNNs.
Another solution would be to use directly explainable GNN such as those in <cit.>. This is of course a deep debate: using models easy to use for learning, versus interpretable models <cit.>. The choice depends on the target application.
Yuan et al. <cit.> provide a survey on methods used to provide explanations for GNNs by using black-box techniques.
According to them, they are instance-level and model-level explanations. Instance-level explanations explain on why a graph has been recognized by an GNN; model-level ones how a given GNN works. For instance, they are also many methods based on Logic Explained Networks and variants to generate logical explanation candidates ϕ <cit.>. Once a candidate is generated we could imagine use our problem P1 (given in the introduction) to check whether A = ϕ, and thus being able to fully synthesize a trustworthy explanation.
Our paper is clearly close to model-level explanations.
§ PERSPECTIVES
We aim at considering a larger class of GNNs. This will need to augment the expressivity of the logic, for instance by adding reLU in the language. Fortunately SMT solvers have been extended to capture reLU <cit.>.
Another possible direction would be to consider other classes of graphs. For instance, reflexive, transitive graphs. Restricted types of graphs lead to different modal logics: KT (validities on reflexive Kripke models), KD (on serial models), S4 (reflexive and transitive models), KB (models where relations are symmetric), S5 (models where relations are equivalence relations), etc. <cit.> The logic defined in this paper is the counterpart of modal logic K with linear programs. In the same way, we could define KT^#, S4^#, S5^#, etc. For instance, KB^# would be the set of validities of -formulas over symmetric models; KB^# would be the logic used when GNNs are only used to recognize undirected pointed graphs (for instance persons in a social network where friendship is undirected). In the future work, some connections between GNNs and logics designed to express properties over persons in social network, such as <cit.> could be investigated.
A next direction of research would be to build a tool. The main difficulty is the complexity of the algorithm. However, we may rely on heuristics to guide the search (namely SAT solvers for computing only the relevant Hintikka sets, and relaxed linear programs). We could also directly use SMT solvers.
Of course, our holy grail is the synthesis of a formula that matches a specification. This problem is close to the formula synthesis problem presented in <cit.>. An idea would be to represent the set of possible suitable explanation formulas by a grammar G (for instance, the grammar restricted to graded modal logic) and to compute a formula generated by G which is equivalent to tr(A).
§ APPENDIX
§.§ Proof that our logic is more expressive than FO (Example 2)
We show that our logic is more expressive than FO by proving that the formula ♯p ≥ ♯q is not expressible by an FO formula ϕ(x). We observe that if the property `for all vertices of a graph, ♯p ≥ ♯q' is not expressible in FO, then the FO formula ϕ(x) does not exist, because if it existed the property would be expressible in FO by the formula ∀x ϕ(x).
For each integer n > 0, we consider the graphs A_n and B_n such that every vertex of A_n satisfies ♯p ≥ ♯q while this is not the case for B_n:
[Figure: the pointed graph A_n consists of a root w with n p-successors u_1, …, u_n and n q-successors v_1, …, v_n; the pointed graph B_n consists of a root w' with n p-successors u'_1, …, u'_n and n+1 q-successors v'_1, …, v'_n+1.]
An n-round Ehrenfeucht-Fraïssé game is a game between two players, the spoiler and the duplicator, played on two graphs A = (V_A,E_A,ℓ_A) and B = (V_B,E_B,ℓ_B). On each round the spoiler picks one graph and a vertex in this graph; the duplicator then chooses a vertex in the other graph. After n rounds we have n vertices (a_1,a_2,...,a_n) chosen in A and n vertices (b_1,b_2,...,b_n) chosen in B. The duplicator wins if and only if for all 1 ≤ i,j ≤ n:
a_i=a_j ⇔ b_i = b_j
(a_i,a_j) ∈ E_A ⇔ (b_i,b_j) ∈ E_B
ℓ(a_i)= ℓ(b_i)
On the graphs A_n and B_n, the duplicator wins the n-round Ehrenfeucht-Fraïssé game: if the spoiler chooses w (resp. w'), the duplicator chooses w' (resp. w); if the spoiler chooses some u_i or v_i (resp. u'_i or v'_i), the duplicator chooses some u'_j or v'_j (resp. u_j or v_j). (If the vertex chosen by the spoiler has not been chosen in a previous round, the duplicator picks a fresh index j; else the duplicator picks the j corresponding to the vertex chosen in the previous rounds.) Since there are only n distinct values that the indices can take, the duplicator wins the n-round game with this strategy. Thus the property `for every vertex of a graph, ♯p ≥ ♯q' is not expressible in FO. Therefore our logic is more expressive than FO.
|
http://arxiv.org/abs/2307.04460v1 | 20230710101312 | Exploiting an External Microphone for Binaural RTF-Vector-Based Direction of Arrival Estimation for Multiple Speakers | [
"Daniel Fejgin",
"Simon Doclo"
] | eess.AS | [
"eess.AS",
"cs.SD",
"eess.SP"
] |
This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy - EXC 2177/1 - Project ID 390895286 and Project ID 352015383 - SFB 1330 B2.
In hearing aid applications, an important objective is to accurately estimate the direction of arrival (DOA) of multiple speakers in noisy and reverberant environments. Recently, we proposed a binaural DOA estimation method, where the DOAs of the speakers are estimated by selecting the directions for which the so-called Hermitian angle spectrum between the estimated relative transfer function (RTF) vector and a database of prototype anechoic RTF vectors is maximized. The RTF vector is estimated using the covariance whitening (CW) method, which requires a computationally complex generalized eigenvalue decomposition. The spatial spectrum is obtained by only considering frequencies where it is likely that one speaker dominates over the other speakers, noise and reverberation. In this contribution, we exploit the availability of an external microphone that is spatially separated from the hearing aid microphones and consider a low-complexity RTF vector estimation method that assumes a low spatial coherence between the undesired components in the external microphone and the hearing aid microphones. Using recordings of two speakers and diffuse-like babble noise in acoustic environments with mild reverberation and low signal-to-noise ratio, simulation results show that the proposed method yields a comparable DOA estimation performance as the CW method at a lower computational complexity.
§ INTRODUCTION
In speech communication applications such as hearing aids, methods for estimating the direction of arrival (DOA) of multiple speakers are often required. To solve this estimation task, (deep) learning-based and model-based methods are continuously developed and advanced <cit.>. However, only few methods exploit the availability of external mobile devices equipped with microphones <cit.>, although wirelessly linking hearing aids to these devices has become increasingly popular <cit.>.
Recently, we proposed relative-transfer-function (RTF) vector-based DOA estimation methods for a single speaker in <cit.>, without relying on the external microphone to be close to the target speaker and capturing only little noise or reverberation as in <cit.>. We estimated the DOA as the direction that maximized the similarity between the estimated RTF vector and a database of prototype anechoic RTF vectors for different directions in terms of a frequency-averaged distance function.
However, the methods in <cit.> considered only a single speaker. To address DOA estimation for multiple speakers, we introduced the so-called frequency-averaged Hermitian angle spectrum from which the DOAs were estimated as the directions corresponding to the peaks of this spatial spectrum (throughout the paper, we refer to a direction-dependent similarity score as a spatial spectrum) <cit.>. Opposed to <cit.>, the spatial spectrum was constructed from time-frequency (TF) bins where one speaker was assumed to be dominant over all other speakers, noise, and reverberation, solely.
Estimation of the RTF vector of a speaker from noisy microphone signals can be accomplished using, e.g., the state-of-the-art covariance whitening (CW) method <cit.> or the spatial coherence (SC) method <cit.>. Despite the effectiveness of the CW method and the possibility to apply the method using only the head-mounted microphone signals or all available signals, such a computationally expensive method (due to the inherent generalized eigenvalue decomposition) is less desirable than methods with a lower computation complexity for resource-constrained applications like hearing aids. Opposed to the CW method, the SC method requires an external microphone but does not perform expensive matrix decompositions. The SC method relies on the assumption of a low spatial coherence between the undesired component in one of the microphone signals and the undesired components in the remaining microphone signals. As shown in <cit.>, this assumption holds quite well, for example, when the distance between the external microphone and the head-mounted microphones is large enough and the undesired component is spatially diffuse-like.
In this paper, we propose to construct the frequency-averaged Hermitian angle spectrum for DOA estimation for multiple speakers using the computationally inexpensive SC method. We compare the DOA estimation accuracy when estimating the RTF vector using the SC method or the CW method in a reverberant acoustic scenario with diffuse-like babble noise. Experimental results show for multiple positions of the external microphone that estimating the RTF vector with the SC method yields a DOA estimation accuracy that is comparable to the CW method at a lower computational complexity.
§ SIGNAL MODEL AND NOTATION
We consider a binaural hearing aid setup with M microphones, i.e., M/2 microphones on each hearing aid, and one external microphone that is spatially separated from the head-mounted microphones and can be located at an arbitrary position, i.e., M +1 microphones in total. We consider an acoustic scenario with J simultaneously active speakers with DOAs θ_1:J (in the azimuthal plane) in a noisy and reverberant environment, where J is assumed to be known. In the short-time Fourier transform (STFT) domain, the m-th microphone signal can be written as
Y_m(k,l) = ∑_j=1^JX_m,j(k,l) + N_m(k,l) ,
where m ∈{1,…,M+1} denotes the microphone index, k∈{1,…,K} and l∈{1,…,L} denote the frequency bin index and the frame index, respectively, and X_m,j(k,l) and N_m(k,l) denote the j-th speech component and the noise component in the m-th microphone signal, respectively. For conciseness, we will omit the frequency bin index k and the frame index l in the remainder of this paper wherever possible. Assuming sparsity in the STFT domain and one dominant speaker (indexed by j=d) per TF bin <cit.>, and stacking all microphone signals in an (M+1)-dimensional vector =[Y_1,…, Y_M+1]^T, where (·)^T denotes transposition, the vector is given by
= ∑_j=1^J + ≈ + ,
with , , and defined similarly as .
Choosing the first microphone as the reference microphone (without loss of generality) and assuming that the speech component for each (dominant) speaker can be decomposed into a direct-path component and a reverberant component , can be written as
= + = X_1,d^ DP + ,
where
= [1, G_2,…, G_M+1]^T
denotes the extended (M+1)-dimensional direct-path RTF vector and X_1,d^ DP denotes the direct-path speech component of the dominant speaker in the reference microphone. The M-dimensional head-mounted direct-path RTF vector corresponding to the head-mounted microphone signals can be extracted from as
= , = [𝐈_M× M,0_M] ,
where denotes the (M× M+1)-dimensional selection matrix for the head-mounted microphone signals with 𝐈_M× M denoting an (M× M)-dimensional identity matrix and 0_M denoting an M-dimensional vector of zeros. Both RTF vectors and encode the DOA of the dominant speaker. However, the extended RTF vector depends on the (unknown) position of the external microphone, whereas the head-mounted RTF vector with fixed relative positions of the head-mounted microphones (ignoring small movements of the hearing aids due to head movements) does not depend on the position of the external microphone. Hence, for DOA estimation, we will only consider the head-mounted RTF vector .
The noise and reverberation components are condensed into the undesired component 𝐮 = 𝐱_d^rev + 𝐧, such that 𝐲 ≈ 𝐠_d X_1,d^DP + 𝐮.
Assuming uncorrelated direct-path speech and undesired components, the covariance matrix of the noisy microphone signals can be written as
Φ_y = ℰ{𝐲𝐲^H} = Φ_x_d + Φ_u,
with
Φ_x_d = φ_x_d 𝐠_d 𝐠_d^H, Φ_u = ℰ{𝐮𝐮^H},
where (·)^H and ℰ{·} denote the complex transposition and expectation operator, respectively. Φ_x_d and Φ_u denote the covariance matrices of the direct-path dominant speech component and the undesired component, respectively, and φ_x_d = ℰ{|X_1,d^DP|^2} denotes the power spectral density of the direct-path dominant speech component in the reference microphone.
§ RTF-VECTOR-BASED DOA ESTIMATION
In this section, we review the RTF-vector-based DOA estimation method proposed in <cit.> that is based on finding the directions corresponding to the peaks of the spatial spectrum called frequency-averaged Hermitian angle spectrum.
To estimate the DOAs θ_1:J of the speakers from the estimated head-mounted[As previously stated, we only consider the estimated head-mounted RTF vector for DOA estimation and not the extended RTF vector that depends both on the speaker DOA and the (unknown) position of the external microphone.] RTF vector 𝐠̂_H_d(k,l), this vector is compared to a database of prototype anechoic RTF vectors for several directions θ_i, i=1,…,I, using the Hermitian angle <cit.> as a measure of dissimilarity, i.e.,
p(k,l,θ_i) = h(𝐠̂_H_d(k,l), 𝐠̅_H(θ_i)),
h(𝐠̂,𝐠̅) = arccos( |𝐠̅^H 𝐠̂| / (‖𝐠̅‖_2 ‖𝐠̂‖_2) ),
where 𝐠̅_H(θ_i) denotes the prototype anechoic head-mounted RTF vector for direction θ_i.
These prototype anechoic head-mounted RTF vectors can be obtained, e.g., via measurements using the same microphone array configuration as used during the actual source localization or using spherical diffraction models <cit.>.
Accounting for the disjoint activity of the speakers in the STFT domain and aiming at including only TF bins where the estimated head-mounted RTF vector (k,l) is a good estimate for the direct-path RTF vector in (<ref>) (of one of the speakers), the narrowband spatial spectrum (<ref>) is integrated over a set 𝒦(l) of selected frequency bins, where it is likely that one speaker dominates over all other speakers, noise, and reverberation <cit.>, i.e.,
P(l,θ_i)=-∑_k∈𝒦(l)p(k,l,θ_i) .
Based on the usage of the Hermitian angle for the construction of (<ref>), the spatial spectrum in (<ref>) is called the frequency-averaged Hermitian angle spectrum. The DOAs θ_1:J(l) are estimated by selecting the directions corresponding to the J peaks of this spatial spectrum (assuming J to be known).
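A compact sketch of this spatial spectrum and the subsequent DOA selection is given below; the prototype vectors and selected frequency bins are random placeholders, and the peak picking is simplified to taking the J largest spectrum values rather than true local maxima.

```python
# Sketch of the frequency-averaged Hermitian angle spectrum and DOA selection.
import numpy as np

def hermitian_angle(g_hat, g_proto):
    num = np.abs(np.vdot(g_proto, g_hat))          # |g_proto^H g_hat|
    den = np.linalg.norm(g_proto) * np.linalg.norm(g_hat)
    return np.arccos(np.clip(num / den, 0.0, 1.0))

def doa_spectrum(g_hat_per_bin, prototypes, selected_bins):
    """g_hat_per_bin: {k: estimated RTF vector}; prototypes: (I, M) array, one row per candidate DOA."""
    return np.array([-sum(hermitian_angle(g_hat_per_bin[k], g_proto) for k in selected_bins)
                     for g_proto in prototypes])

def pick_doas(P, doa_grid, J=2):
    return doa_grid[np.argsort(P)[::-1][:J]]       # naive: J largest spectrum values

# Toy usage with random placeholder prototype RTF vectors.
rng = np.random.default_rng(1)
M, I = 4, 72
doa_grid = np.arange(-180, 180, 5)
prototypes = rng.normal(size=(I, M)) + 1j * rng.normal(size=(I, M))
g_hat_per_bin = {k: prototypes[10] + 0.05 * rng.normal(size=M) for k in range(20)}
print(pick_doas(doa_spectrum(g_hat_per_bin, prototypes, range(20)), doa_grid))
```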
In the context of DOA estimation, coherence-based quantities such as the coherent-to-diffuse ratio (CDR) are a common criterion for frequency subset selection <cit.>. The usage of the CDR as a criterion for frequency subset selection can be motivated by the fact, that for higher values of the CDR at the respective TF bin it is more likely that a speaker dominates over all other speakers, noise, and reverberation at the respective TF bin. As in <cit.>, the subset 𝒦(l) is obtained using the coherent-to-diffuse ratio (CDR) criterion (<ref>), i.e.,
𝒦(l) = {k: CDR(k,l)≥CDR_thresh} ,
where the CDR is estimated as
CDR(k,l) = f(Γ_y,eff(k,l), Γ_u(k)) ,
with the CDR-functional f defined in (<ref>) for a single microphone pair comprising the microphones m=i and m=j <cit.>. The arguments of the function in (<ref>) are the estimated coherence Γ_y,i,j of the noisy signal
Γ_y_i,j(k,l)= Φ̂_y_i,j(k,l)/√(Φ̂_y_i,i(k,l) Φ̂_y_j,j(k,l))
with Φ̂_y_i,j denoting an estimate of the (i,j)-th element of the covariance matrix of the noisy microphone signals and a model Γ_u,i,j of the coherence of the undesired component. To consider more than just a single microphone pair for the estimation of the CDR, the coherence of the noisy signals between multiple microphone pairs (denoted as the microphone set ℳ) between the left and the right hearing aid is averaged prior to evaluating the CDR-functional in (<ref>), resulting in the binaural effective coherence <cit.>, i.e.,
Γ_y,eff(k,l) = 1/|ℳ| ∑_i,j ∈ ℳ Γ_y_i,j(k,l),
Thus, the binaural effective coherence represents the average coherence between the head-mounted microphone signals. Due to the arbitrary position of the external microphone, we consider only the head-mounted microphones (with fixed relative positions) for the estimation of the binaural effective coherence Γ_y,eff(k,l).
To model the coherence of the undesired component for the estimation of the CDR in (<ref>) between the head-mounted microphone signals, head shadow effects need to be included. Assuming a diffuse sound field for both the noise and reverberation component, a modified sinc-model <cit.> is employed, i.e.,
Γ_u(k) = sinc(αω_k r/c) 1/√(1 + (βω_k r/c)^4)
where ω_k denotes the discrete angular frequency, r denotes the distance between the microphones of left and right hearing aid which is approximated as the diameter of a head, c denotes the speed of sound, and α=0.5 and β=2.2 denote empirically determined parameters of the modified sinc-model.
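For illustration, a minimal implementation of this coherence model could look as follows; the value used for the microphone distance r (approximated as a head diameter) is an assumed placeholder.

```python
# Modified sinc coherence model for the diffuse undesired component (alpha = 0.5, beta = 2.2).
import numpy as np

def gamma_u(f_hz, r_m=0.17, c=343.0, alpha=0.5, beta=2.2):
    w = 2.0 * np.pi * f_hz                       # discrete angular frequency
    arg = alpha * w * r_m / c
    # np.sinc is normalized: np.sinc(x) = sin(pi x)/(pi x), so divide the argument by pi
    return np.sinc(arg / np.pi) / np.sqrt(1.0 + (beta * w * r_m / c) ** 4)

print(gamma_u(np.linspace(0.0, 8000.0, 5)))      # coherence decays towards high frequencies
```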
In this paper we compare the influence of different RTF vector estimation methods on constructing the frequency-averaged Hermitian angle spectrum in (<ref>). In <cit.> no external microphone was used and therefore the DOAs were estimated from the spatial spectrum as in (<ref>) constructed from head-mounted RTF vectors that were estimated using the CW method as in (<ref>), i.e.,
P^(CW)(l,θ_i) = -∑_k∈𝒦(l) h(𝐠̂_H_d^(CW)(k,l), 𝐠̅_H(θ_i)).
In this paper, we propose to exploit the availability of the external microphone and estimate the DOAs from the spatial spectrum as in (<ref>), constructed from head-mounted RTF vectors that are estimated using the SC method as in (<ref>), i.e.,
P^(SC)(l,θ_i) = -∑_k∈𝒦(l) h(𝐠̂_H_d^(SC)(k,l), 𝐠̅_H(θ_i))
A summary on the covariance whitening (CW) method <cit.> and the spatial coherence (SC) method <cit.> is provided in the next section.
§ RTF VECTOR ESTIMATION
In order to estimate the DOAs of multiple speakers, a frequency-averaged Hermitian angle spectrum is constructed, which assesses the similarity between the estimated M-dimensional head-mounted RTF vector and a database of prototype anechoic RTF vectors for different directions. In this section, we review two RTF vector estimation methods. The computationally expensive state-of-the-art covariance whitening (CW) method <cit.> is summarized in Section <ref>. The computationally inexpensive spatial coherence (SC) method <cit.> is discussed in Section <ref>.
§.§ Covariance whitening (CW)
To apply the CW method <cit.>, estimates Φ̂_y and Φ̂_u of the covariance matrices of the noisy signal and the undesired signal component are required. Based on these estimates, the head-mounted direct-path RTF vector can be estimated using only the head-mounted microphone signals as
𝐠̂_H_d^(CW) = f(𝐄_H Φ̂_y 𝐄_H^H, 𝐄_H Φ̂_u 𝐄_H^H),
f(Φ̌_y, Φ̌_u) = Φ̌_u^1/2 𝒫{Φ̌_u^-1/2 Φ̌_y Φ̌_u^-H/2} / (𝐞̌_1^T Φ̌_u^1/2 𝒫{Φ̌_u^-1/2 Φ̌_y Φ̌_u^-H/2}),
where 𝒫{·} denotes the principal eigenvector of a matrix, Φ̌_u^1/2 denotes a square-root decomposition (e.g., Cholesky decomposition) of the M̌-dimensional matrix Φ̌_u and 𝐞̌_1 = [1,0,…,0]^T denotes an M̌-dimensional selection vector.
§.§ Spatial coherence (SC)
The SC method <cit.> requires an external microphone and relies on the assumption of a low spatial coherence between the undesired component U_M+1 in the external microphone signal and the undesired components U_m, m∈{1,…,M}, in the head-mounted microphone signals, i.e.
ℰ{U_m U_M+1^∗} ≈ 0, m ∈ {1,…, M}.
As shown in <cit.>, this assumption holds quite well, for example, when the distance between the external microphone and the head-mounted microphones is large enough and the undesired component is spatially diffuse-like. Exploiting this assumption results in ℰ{Y_m Y_M+1^∗} = ℰ{X_m X_M+1^∗}, m ∈ {1,…, M}; thus the RTF vector can be efficiently estimated without expensive matrix decompositions as
𝐠̂_H_d^(SC) = 𝐄_H Φ̂_y 𝐞_M+1 / (𝐞_1^T Φ̂_y 𝐞_M+1),
with 𝐞_m denoting the (M+1)-dimensional selection vector selecting the m-th element.
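The computational contrast between the two estimators can be sketched as follows; the synthetic covariance matrices assume an idealized rank-one direct-path component and spatially uncorrelated undesired components, so both estimators recover the head-mounted RTF vector exactly in this toy setting.

```python
# Sketch of the CW and SC RTF-vector estimators from estimated covariance matrices.
import numpy as np
from scipy.linalg import cholesky, eigh

def rtf_cw(Phi_y, Phi_u):
    """Covariance whitening on the M head-mounted channels (M x M Hermitian matrices)."""
    L = cholesky(Phi_u, lower=True)                 # square-root decomposition of Phi_u
    Li = np.linalg.inv(L)
    w, V = eigh(Li @ Phi_y @ Li.conj().T)           # whitened noisy covariance
    g = L @ V[:, -1]                                # de-whitened principal eigenvector
    return g / g[0]                                 # normalize to the reference microphone

def rtf_sc(Phi_y_ext):
    """Spatial coherence method: (M+1) x (M+1) noisy covariance, last channel = external mic."""
    col = Phi_y_ext[:-1, -1]                        # cross-correlations with the external mic
    return col / Phi_y_ext[0, -1]

# Toy usage: rank-one direct path plus uncorrelated undesired components.
rng = np.random.default_rng(2)
M = 4
g_true = np.r_[1.0, rng.normal(size=M) + 1j * rng.normal(size=M)]   # extended RTF, g_true[0] = 1
Phi_u = 0.1 * np.eye(M + 1)
Phi_y = np.outer(g_true, g_true.conj()) + Phi_u
print(np.allclose(rtf_cw(Phi_y[:M, :M], Phi_u[:M, :M]), g_true[:M]),
      np.allclose(rtf_sc(Phi_y), g_true[:M]))
```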
§ EXPERIMENTAL RESULTS
Applying the CW and SC method for RTF vector estimation, in this section we compare the DOA estimation performance when using the SC-based frequency-averaged Hermitian angle spectrum as in (<ref>) against the DOA estimation performance when using the CW-based frequency-averaged Hermitian angle spectrum as in (<ref>). We evaluate the methods with recorded signals for an acoustic scenario with two static speakers in a reverberant room with diffuse-like babble noise. The experimental setup and implementation details of the algorithms are described in Section <ref>. The results in terms of localization accuracy are presented and discussed in Section <ref>.
§.§ Experimental setup and implementation details
For the experiments we used signals that were recorded in a laboratory at the University of Oldenburg with dimensions of about 7 m × 6 m × 2.7 m, where the reverberation time can be adjusted by means of absorber panels, which are mounted to the walls and the ceiling. The reverberation time was set to approximately T_60 ≈ 250 ms. Fig. <ref> depicts the experimental setup. A dummy head with a binaural hearing aid setup (M = 4) was placed approximately in the center of the laboratory. For this hearing aid setup a database of prototype anechoic RTF vectors is obtained from measured anechoic binaural room impulse responses <cit.> with an angular resolution of 5° (I = 72). A single external microphone was placed at four different positions (denoted as E1 - E4), which was not restricted to be close to a speaker. Two speakers from the EBU SQAM CD corpus <cit.> (male and female, English language) were played back via loudspeakers that were located at approximately 2 m distance from the dummy head. For the evaluation, all 72 pairs of DOAs of non-collocated speakers (each of the 9 DOAs in the range [-160°, -120°, …, 160°]) were considered. The speech signals were constantly active and had a duration of approximately 5 s. Diffuse-like noise was generated with four loudspeakers facing the corners of the laboratory, playing back different multi-talker recordings. The speech and noise components were recorded separately and were mixed at {-5, 0, 5} dB broadband signal-to-noise ratio (SNR) averaged over all head-mounted microphones of the hearing aid setup. All microphone signals were recorded simultaneously, hence neglecting synchronization and latency aspects.
The microphone signals were processed in the STFT domain using a 32 ms square-root Hann window with 50% overlap at a sampling frequency of 16 kHz. The covariance matrices Φ̂_y and Φ̂_u were estimated recursively during detected speech-and-noise and noise-only TF bins, respectively, using smoothing factors corresponding to time constants of 250 ms for Φ̂_y and 500 ms for Φ̂_u, respectively. The speech-and-noise TF bins were discriminated from noise-only TF bins based on the speech presence probability <cit.>, averaged and thresholded over all head-mounted microphone signals.
We assess the DOA estimation performance by averaging the localization accuracy over the considered DOA pairs and SNRs. For the localization accuracy we average the per-frame-accuracies over all frames, where we define the per-frame accuracy as
ACC(l) = j_correct(l)/J ,
with j_correct(l) denoting the number of speakers that are correctly localized within a range of ± 5^∘ in the l-th frame and J=2.
§.§ Results
Fig. <ref> depicts the average localization accuracies that are obtained from the spatial spectrum as in (<ref>), denoted by CW, and the accuracies obtained from the spatial spectrum as in (<ref>), denoted by SC-EX, where X stands for one of the four positions of the external microphone. To show the effectiveness of the subset selection, we considered two threshold values, CDR_thresh = -∞ (corresponding to selecting all frequencies) and CDR_thresh = 0, shown as blue bars and orange bars, respectively.
First, for every condition a large improvement in the localization accuracy of up to 11% due to the frequency subset selection can be observed. This result is in line with the results reported in <cit.>. Second, considering the spatial spectrum obtained from (<ref>), it can be observed that the position of the external microphone has a minor effect on the estimated DOA, resulting in localization accuracies in the range 62% - 66% using a threshold value of CDR_thresh = 0. For the external microphone placed at positions E3 or E4, i.e., close to the loudspeakers playing back the noise, a slightly lower DOA estimation accuracy can be observed when comparing to the external microphone placed at positions E1 or E2. Third, comparing the DOA estimation performance when using the CW method against the SC method for estimating the head-mounted RTF vector, a difference of up to around 5% - 7% can be observed. Thus, the low-complexity SC method yields a comparable DOA estimation performance for multiple speakers as the CW method, which is in line with the single-speaker DOA estimation results reported in <cit.>.
§ CONCLUSIONS
Based on two RTF vector estimation methods, in this paper we compared the DOA estimation performance for multiple speakers for a binaural hearing aid setup, either exploiting an external microphone or not. We did not restrict the position of the external microphone to be close to the target speaker. Estimating the RTF vector using either the CW method without exploiting the external microphone or using the SC method exploiting the external microphone, we constructed a frequency-averaged Hermitian angle spectrum from which the DOAs of the speakers were estimated as the directions that maximized the spatial spectrum. We evaluated the approach using simulations with recorded two-speaker scenarios in acoustic environments with mild reverberation and diffuse-like babble noise scaled to low SNRs for different positions of the external microphone. The results show that using the SC method for the construction of the frequency-averaged Hermitian angle spectrum yields a DOA estimation accuracy (62% - 66%) that is comparable to the CW method (≈70%) at a lower computational complexity.
|
http://arxiv.org/abs/2307.04084v1 | 20230709022832 | A Sustainability Roadmap for C$^3$ | [
"Martin Breidenbach",
"Brendon Bullard",
"Emilio Alessandro Nanni",
"Dimitrios Ntounis",
"Caterina Vernieri"
] | hep-ex | [
"hep-ex",
"physics.acc-ph"
] |
§ INTRODUCTION
An electron-positron collider gives a unique opportunity to study the Higgs boson's properties with unprecedented precision and also provide an exceptionally clean environment to search for subtle new physics effects <cit.>. A number of different "Higgs factory" proposals, based on linear and circular colliders, are now under consideration. All of these provide collisions at center of mass energies in the range of 240-370 GeV, and some also are capable of reaching higher energies.
A high-energy particle collider is a large energy-consuming research facility. As such, it is important to balance its scientific importance against its environmental cost. The environmental impact of large accelerators has been analyzed in the recent Snowmass 2021 study <cit.> of the future of particle physics in the US <cit.>. The papers <cit.> have examined the environmental cost of particular Higgs factory proposals, though often concentrating on particular elements of the total cost.
In this paper, we attempt a comprehensive evaluation of the carbon cost of the Cool Copper Collider (C^3) Higgs factory proposal <cit.> over its full lifetime, including costs from construction and from operation over the proposed timeline. The structure of this paper is as follows: in Section <ref>, we briefly review the design of C^3. In Section <ref>, we review the physics reach of C^3 and other Higgs factory proposals and introduce a metric for balancing carbon impact against the physics impact of each proposal. In Section <ref>, we analyze the power costs of operation of C^3 and describe methods for modifying the power design of the accelerator that would lead to substantial savings with little impact on the physics performance. In Section <ref>, we analyze the carbon impact of the construction of C^3 and emphasize that cut-and-cover construction, as opposed to construction in a deep tunnel, has significant advantages. In Section <ref>, we discuss options for the source of electrical power for the laboratory. In Section <ref>, we bring these analyses together to estimate the total carbon footprint of C^3. Using information from available studies and design reports, we estimate the carbon impact of other Higgs factory proposals and compare these to C^3 in the framework described in Section <ref>.
§ REVIEW OF THE ACCELERATOR DESIGN
C^3, recently proposed <cit.>, is a linear facility that will first operate at 250 GeV center-of-mass collisions. Immediately after, without further extension of the linac, it will run at 550 GeV with an RF power upgrade.
utilizes a radically different approach to linear accelerators to build a collider with high gradient and high RF efficiency, and thus lower capital and operating costs <cit.>. is based on a distributed coupling accelerator concept, running under liquid nitrogen (LN) <cit.>, that has led to an optimized accelerating gradient and minimized breakdown problems with respect to earlier designs based on normal conducting technologies. This has yielded an overall optimization of the gradient at 70 and 120 MeV/m for the 250 GeV and 550 GeV operating points, respectively <cit.>. Much higher energies are possible if length is not the major consideration. The fundamental parameters, assumed for the analysis in this paper, are shown in Table <ref>.
By far the major development to date is the actual distributed coupling accelerator structure. will use C-band (5.712 GHz) standing wave RF accelerating structures that are 1 m long. Each has an RF waveguide to bring power in, and in the more probable operating modes, splits RF power evenly between the beam and dissipation in the structure with 43% beam loading. Operating at 80 K brings the shunt impedance up to 300 MΩ/m, allowing for efficient operation at 120 MeV/m. These gradients have been demonstrated at C-band <cit.> and with an electron beam in an X-Band (11.424 GHz) structure on the SLAC XTA beamline <cit.>. The C-band structure has been tested at low power at SLAC and at high power without beam at Radiabeam <cit.>. The gradient results in a collider with a 550 GeV center-of-mass energy capability on an 8 km footprint.
A pre-conceptual design for the overall linac cryogenics has been developed that includes the design for the CryoModules. For the 250 GeV and 550 GeV design, each linac will have 3 re-liquification cryoplants. LN will flow out along the linac in both directions, so there are 6 flow runs. The LN will be above the raft structures, with an initial velocity of ∼0.03 m/s. The LN will cool the accelerator structures by nucleate boiling with a power density of 0.4 W/cm^2, producing saturated vapor which counter-flows back to the cryoplant. Each cryo-run is about 450 meters in length. The vapor velocity near the cryoplant is ∼3 m/s.
§ COMPARISON OF HIGGS FACTORY PHYSICS REACH
Among the colliders being evaluated by the community, the International Linear Collider (ILC) <cit.>, based on superconducting RF technology, has the most advanced design <cit.>, and the ILC is currently under consideration for construction in Japan.
CERN is pursuing as its main strategy a large circular collider, the FCC <cit.>, and China is planning a similar circular collider, the CEPC <cit.>. Each of these circular colliders would require a tunnel with circumference of the order of 100 km to limit synchrotron radiation. Still, though, the expected instantaneous luminosity drops off significantly above center-of-mass energies of 350–400 GeV.
A different alternative is to construct a compact linear collider based on high gradient acceleration. CERN is also pursuing such a proposal, CLIC <cit.>, that would operate at a collision energy of 380 GeV.
The carbon footprint of the proposed future Higgs factories should be assessed relative to the expected physics reach, which has been reviewed most recently in the context of the Snowmass Community process <cit.>. The primary physics goal of a future Higgs factory is the determination of the total Higgs width and Higgs couplings with per-cent or sub-per-cent precision. A reasonable figure of merit to gauge the physics reach of each machine is the expected level of precision for each of these measurements. We note that evaluating the projected measurement precision accounts for the fact that different beam configurations (center-of-mass energy and beam polarization) have a strong impact on the physics reach of each of those machines. These differences in precision are not accounted for when comparing the total number of Higgs bosons produced alone <cit.>.
The physics reach at colliders increases with the center-of-mass energy, since different Higgs boson production mechanisms become accessible. At 250 GeV center-of-mass energy operations the main Higgs boson production mechanism is associated production with a Z boson (→ ZH), enabling a model-independent determination of the Higgs boson total width. Higgs boson production via the W-boson fusion reaction e^+e^-→νν̅H is accessible at √(s)∼500 GeV, where the only visible signals in the final state come from Higgs boson decays. This allows Higgs boson measurements governed by different systematic effects, complementary to the 250 GeV data, as well as opportunities to study effects such as separation of H → gg/bb̅/cc̅ decays and CP violation in H →τ^+τ^- <cit.>. Importantly, at high center-of-mass energies, double Higgs boson production in the ZHH channel opens up, providing direct access to the Higgs boson self-coupling λ_3. At circular machines, given the energy limitations, double Higgs boson production mechanisms are not accessible, thus allowing only for indirect and model-dependent measurements of λ_3, through loop effects in single-Higgs production.
The use of longitudinal beam polarization offers unique advantages for effective precision measurements at a linear collider, since the interaction cross sections at an collider have strong dependencies on beam polarization.
It has been demonstrated that at 250 GeV center-of-mass energy, the ultimate precision reach in the determination of Higgs couplings, through a Standard Model Effective Field Theory (SMEFT) analysis, for an integrated luminosity of 2 ab^-1 with polarized beams, has comparable sensitivity to 5 ab^-1 with unpolarized beams, with most of the gain coming from e^- polarization alone <cit.>. The main effect of beam polarization is to discriminate the effect of different SMEFT operators that contribute to the Higgs boson coupling. There is a similar gain of about a factor of 2.5 from discrimination of the effects of the operators contributing to the WWγ and WWZ couplings, which also enter the SMEFT analysis.
The positron polarization becomes more relevant at higher center-of-mass energies. For instance, W-boson fusion reactions, such as e^+e^-→νν̅H, proceed only from e_L^-e_R^+ initial states, providing a cross-section (or, equivalently, effective luminosity) enhancement of ∼ 2.5 for typical polarizations foreseen at future linear machines <cit.>. Here positron polarization makes a significant contribution. This implies that the same number of Higgs bosons can be produced through this process with only ∼ 40 % of the integrated luminosity, compared to having unpolarized beams.
Moreover, beam polarization at high energy enables the suppression of relevant backgrounds, such as the dominant e^+e^-→ W^+W^- background for positive (negative) electron (positron) beam polarization, increasing the signal-over-background ratio and allowing the precise measurement of the rate of other backgrounds, as well as the reduction of detector-related systematic uncertainties, with combined measurements of datasets with four distinct initial-state polarization configurations. These effects collectively indicate the increased precision reach that beam polarization provides for linear machines <cit.>.
Additionally, electron (primarily) and positron (secondarily) polarization enhance the precision in the extraction of the Higgs couplings, compared to having unpolarized beams. For example, it has been shown that having a polarized initial state can yield an effective luminosity improvement factor for linear machines of up to ∼ 2.5, thus allowing the same precision for various Higgs couplings to be reached with ∼ 40 % of the integrated luminosity.
For these reasons, in this analysis we propose a comparison of the carbon footprint of collider concepts relative to their expected precision in Higgs coupling measurements. Table <ref> summarizes the projected relative precision for Higgs boson couplings measurements at each collider combined with projected results from the HL-LHC. As can be seen, the overall physics reach of all proposed Higgs factories is similar <cit.> for the 240-250 GeV operations, and additional measurements become accessible for the higher center-of-mass energy runs at linear colliders. We also compare the Higgs Factory proposals in terms of total energy consumption and carbon emissions, for both construction activities and operations, with the latter being the most relevant number when evaluating each project's impact on the global climate.
We then present an estimate of energy consumption and carbon footprint per unit of physics output. This is achieved by taking the average of the relative precision over all Higgs couplings, weighing them by the relative improvement in their measurement with respect to HL-LHC:
⟨δκ/κ⟩ = ∑_iw_i(δκ/κ)_i/∑_iw_i
where the sum runs over the columns of Table <ref> and the weight is defined as:
w = [ (δκ/κ)_HL-LHC - (δκ/κ)_HL-LHC+HF ] / (δκ/κ)_HL-LHC+HF
This definition weights measurements by their relative improvement over HL-LHC when combining the HL-LHC and future Higgs Factory (HF) results. Qualitatively, measurements that minimally improve those of HL-LHC are assigned weights near zero, while HF measurements with high precision or large improvement over HL-LHC are assigned larger weights. While other weighting schemes could be used, we argue that Equation <ref> is unbiased towards the type of physics measurement (e.g. Yukawa, self-coupling, vector coupling) and it emphasises the individual strengths of each collider facility.
For the estimation of the weighted average precision, the hcc̅ coupling was excluded, since there is no estimate for HL-LHC, whereas we assume that the hhh coupling for CEPC can be measured with the same precision as for FCC. The weighted average precision for each collider is given in the last row of Table <ref>.
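As a cross-check of the weighting procedure, the short Python sketch below implements the two expressions above; the input precisions are purely illustrative placeholders and not the entries of Table <ref>.

def weight(prec_hllhc, prec_combined):
    # relative improvement of the HL-LHC + Higgs-factory (HF) combination over HL-LHC alone
    return (prec_hllhc - prec_combined) / prec_combined

def weighted_average(precisions):
    # precisions: list of (delta kappa/kappa at HL-LHC, at HL-LHC + HF) per coupling
    weights = [weight(h, c) for h, c in precisions]
    return sum(w * c for w, (_, c) in zip(weights, precisions)) / sum(weights)

# hypothetical example with three couplings, precisions in percent
example = [(3.0, 0.5), (2.0, 1.0), (5.0, 4.0)]
print(round(weighted_average(example), 3))   # couplings that improve most over HL-LHC dominate the average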
§ POWER CONSUMPTION AND OPTIMIZATIONS
The most obvious way to reduce the carbon impact of a major facility is to minimize the amount of power that it consumes, thereby minimizing the associated emissions from energy production. This is firmly within the means of the facility designers and crucially does not rely on grid electrification. The nominal operating parameters for -250 are given in Table <ref>.
Several avenues can be pursued to optimize operational power requirements. Improvements in luminosity or reduction in power consumption are possible through the development of ancillary technology by increasing the RF source efficiency, increasing the efficiency of powering the accelerating structures or modification of beam parameters to increase luminosity. At present the main linac requires ∼100 MW of power with 40 MW for the RF sources and 60 MW for the cryogenics.
For the RF sources, the concept utilizes an overall RF system efficiency of 50% which is in line with present high power RF sources that are designed with efficiency in mind. However, significant advances in modern design techniques for klystrons are increasing the klystron amplifier's ultimate efficiency significantly with the inclusion of higher order mode cavities, multi-cell outputs and advanced multi-dimensional computational tools. For example, designs now exist for a 50 MW class RF source<cit.> approaching an amplifier efficiency of 70%. Multi-beam RF sources, reducing the beam perveance, have advanced design efforts exceeding 80% efficiency<cit.>. These results reinforce modern understanding on the limits of klystron efficiency <cit.> which indicate a klystron amplifier efficiency of 70-80% is possible, leading to an overall RF source efficiency of 65%.
RF pulse compression, presently not in the baseline, is also a well known technique for powering high gradient structures. For , pulse compression is particularly useful due to the impact of power loss at cryogenic temperatures and due to the relatively long fill time for a copper structure operating at cryogenic temperatures. In a previous study<cit.>, it was found that low factors of pulse compression, which preserves RF efficiency in the compressor<cit.>, improves the overall efficiency of the system by 30%. Recently, additional efforts have been made to realize the extremely high Q cavities required for pulse compression with cryogenically cooled RF structures <cit.>; these include concepts operating at room temperature and inside the cryostat at 80 K.
For the baseline design <cit.> we anticipate operation with 700 ns and 250 ns flat tops respectively for gradients of 70 and 120 MeV/m and a constant power dissipation of 2.5 kW/m at 120 Hz. Figure <ref> and Figure <ref> show the RF power, dissipated energy and gradient during the pulse. While these flat top lengths were selected to limit the challenges of breakdown, increasing the flat top length and reducing the repetition rate should be investigated in order to reduce the thermal load on the linac. At present, the thermal balance between the structure fill/dump time and the flat top is approximately 50% (equal thermal load). If we were to extend the flat top lengths by a factor of two and reduce the repetition rate by a factor of two, the thermal dissipation in the main linac would decrease by 25%. This improvement would have little effect on the overall design of the accelerator, and would be acceptable if the breakdown rates remain low enough. Proving that this is possible will require high gradient testing of structures with 1400 ns and 500 ns respectively.
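The 25% figure can be verified with a two-line estimate, assuming (as stated above) that the fill/dump and flat-top contributions to the thermal load are roughly equal at the baseline:

# relative main-linac thermal load when the flat top is doubled and the repetition rate halved
fill, flat, rate = 1.0, 1.0, 120.0              # arbitrary units; 120 Hz baseline
baseline = rate * (fill + flat)
modified = (rate / 2.0) * (fill + 2.0 * flat)   # fill/dump energy unchanged per pulse
print(1.0 - modified / baseline)                # -> 0.25, i.e. a 25% reduction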
The beam current of is relatively low thanks to the large bunch spacing and efficient accelerating structures. One could pursue the possibility of reducing the bunch spacing to increase the current. However, this will require compatibility studies with the detector design. Here we consider the scenario where the bunch spacing is reduced by a factor of two. This would keep a bunch spacing of >1 ns for both -250/550, resulting in a decrease of 25% for the cryogenics power. The RF power required would only decrease by 20% because the peak RF power required would be slightly higher during the RF pulse flat top to compensate for the additional current.
We note that these approaches can all be combined for mutual benefit as shown in the last row of Table <ref>. The demonstration R&D plan <cit.> will be able to investigate these approaches and lead to potential power savings.
§ CARBON IMPACT OF CONSTRUCTION
Under the assumption that the electric grid will be successfully de-carbonized by 2040, as it is the goal of many international climate plans, then construction, rather than operations, may well dominate the climate impact of a new particle physics facility <cit.>.
For FCC it is projected that the whole accelerator complex[The main tunnel plus the additional buildings on the site, the materials for the accelerator and detectors, assuming a main tunnel length of 97.7 km (the updated FCC design anticipates 91 km).] will have a carbon impact similar to that of the redevelopment of a neighbourhood of a major city <cit.>. This indicates that the environmental impact of any future collider facility is going to receive the same scrutiny as that of a major urban construction project.
The bottom-up analysis in <cit.> derives an estimate of global warming potential (GWP) for the main tunnel material (concrete) manufacture alone to be equivalent to the release of 237 ktons of CO_2 equivalent (CO_2e). An alternative top-down analysis is instead dependent on the character of the earth to be excavated, leading to estimates ranging from 5-10 kton CO_2e/km of tunnel construction and total emissions of 489-978 kton CO_2e[Contributions from many bypass tunnels, access shafts, large experimental caverns, and new surface sites are excluded.].
A life cycle assessment of the ILC and CLIC accelerator facilities is being performed by ARUP <cit.> to evaluate their holistic GWP, so far providing a detailed environmental impact analysis of construction. The components of construction are divided into classes: raw material supply, material transport, material manufacture, material transport to work site, and construction process. These are labelled A1 through A5, where A1-A3 are grouped as materials emissions and A4-A5 are grouped as transport and construction process emissions. The total GWP for ILC and CLIC is taken to be 266 and 127 kton <cit.>, respectively[We use the emissions figures associated to the CLIC drive-beam design, which is more efficient than the alternative design utilizing only klystrons for RF power.]. The approximate construction GWP for the main tunnels are 6.38 kton /km for CLIC (5.6m diameter) and 7.34 kton /km for ILC (9.5m diameter); the FCC tunnel design is similar to that of CLIC, so 6.38 kton /km is used for the calculation of emissions for both FCC and CEPC. While a comprehensive civil engineering report is unavailable for FCC and CEPC, we estimate the concrete required for klystron gallery, access shafts, alcoves, and caverns to contribute an additional 30% of emissions, similar to what is anticipated for CLIC. The analysis indicates that the A4-A5 components constitute 20% for CLIC and 15% for ILC. In the absence of equivalent life cycle assessment analysis for FCC and CEPC, we account for the A4-A5 contributions as an additional 25%. A summary of these parameters is given in Table <ref>.
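The top-down scaling described above can be summarized in a few lines of Python; the 91 km length and the multiplicative combination of the +30% civil-engineering and +25% A4-A5 surcharges are assumptions of this illustration rather than figures taken from the FCC or CEPC design reports.

def tunnel_gwp_kton(length_km, kton_per_km=6.38, extra_civil=0.30, transport_process=0.25):
    # kton_per_km: main-tunnel A1-A3 emission factor (CLIC-like cross section)
    # extra_civil: klystron gallery, access shafts, alcoves and caverns
    # transport_process: A4-A5 components
    a1_a3 = length_km * kton_per_km * (1.0 + extra_civil)
    return a1_a3 * (1.0 + transport_process)

print(f"~{tunnel_gwp_kton(91.0):.0f} kton CO2e for a 91 km FCC-like tunnel")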
The tunnel will be about 8 km long with a rectangular profile in each of its component systems. Assuming a cut and cover approach, all the excavated material will be replaced to yield a small berm. We estimate that for the whole accelerator complex only about 50 thousands cubic meters of spoil for the experimental hall will have to be relocated. Figure <ref> shows a schematic of the cross section, where the klystron gallery is situated directly above the accelerator hall with sufficient concrete shielding to allow constant access to the klystron gallery during operation. The application of a top-down estimate of 6-7 kton /km obtained from the ARUP report is not appropriate for the surface site due the differing cross section geometries of the accelerator housing. To allow for a fair comparison among facilities, we take the same basic assumptions of construction materials. In particular, that construction uses a mix of CEM1 C40 concrete and 80% recycled steel, the GWP of concrete is taken to be 0.18 kg /kg concrete with density 2400 kg/m^3<cit.>, and 85%/15% of emissions originate from concrete/steel production. Taking into account construction of the main linacs, injector linacs, damping rings, beam delivery system, and experimental hall, the total volume of construction material is estimated to be about 260,000 m^3 (consisting mostly of concrete by volume). This leads to a GWP of 133 kton for A1-A3 components and GWP per unit length of the main linac of around 17 kton /km. Notably, this is roughly a factor 2 larger than the GWP/km of main tunnel construction of ILC and CLIC; this suggests further tunnel geometry optimizations are achievable with a detailed engineering study. The surface site construction eliminates the need for additional infrastructure (e.g. access tunnels and turnarounds) and greatly reduces the complexity of the construction process, which we estimate to account for an additional 10%[This estimate is half the A4-A5 component associated to tunnelled facilities and is expected to overestimate the improvement associated to a cut and cover approach, due to significant reduction to spoil transport and operation of a boring machine] to the GWP. This yields a final estimate of 146 kton for civil engineering.
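The surface-site estimate can be retraced numerically with the assumptions listed above (treating the full material volume as concrete for the A1-A3 step); small differences with respect to the 133 and 146 kton quoted in the text are due to rounding.

volume_m3      = 260_000   # main linacs, injector linacs, damping rings, BDS, experimental hall
rho_concrete   = 2400      # kg/m^3
gwp_concrete   = 0.18      # kg CO2e per kg of CEM1 C40 concrete
concrete_share = 0.85      # concrete carries 85% of A1-A3 emissions, steel the remaining 15%
process_extra  = 0.10      # simplified cut-and-cover construction process surcharge

a1_a3_kton = volume_m3 * rho_concrete * gwp_concrete / concrete_share / 1e6
total_kton = a1_a3_kton * (1.0 + process_extra)
print(f"A1-A3: {a1_a3_kton:.0f} kton CO2e, total: {total_kton:.0f} kton CO2e")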
Unlike other Higgs factories under evaluation, the site has not been decided yet. A collider could in principle be sited anywhere in the world.
A community decision will be made regarding the actual site selection, although we note that the offers a unique opportunity to realize an affordable energy frontier facility in the US in the near term and the entire program could be sited within the existing US National Laboratories. The tunnel layout would be adapted to its location, and a cut and cover site, suitable for a horizontal layout, is extremely attractive also for both cost and schedule reasons.
The details of the siting options at FNAL are discussed in <cit.>. Sites such as the DOE Hanford site located in the Pacific Northwest have room to accommodate even bigger footprint machines within their site boundary.
§ POSSIBLE MITIGATION STRATEGY DURING OPERATIONS
The carbon footprint of the electricity production required to meet the total site power requirements of 150-175 MW can be substantial. The average carbon intensity of energy production since May 2022 is 194 and 381 g CO_2/kWh for the CAISO and PJM power grids, respectively <cit.>. This would result in emissions of 5.7 and 11.2 megatonnes of CO_2 equivalent for a 20 year run. The electrification of the grid will allow operations to be much more sustainable by the time data taking begins. The U.S. “has set a goal to reach 100 percent carbon pollution-free electricity by 2035” in its 2021 emissions target report <cit.>. The U.S. is making progress toward this goal, having been ranked #1 on the Renewable Energy Country Attractiveness Index in 2021, driven primarily by widespread adoption of solar energy. The outlook for renewable energy investments has been further buoyed by the recent passage of the Inflation Reduction Act <cit.>. While full electrification by 2035 is conceivable, it is helpful to consider the powering infrastructure required when operating only with renewable energy sources, to evaluate the associated costs and feasibility. The three technologies of interest to this study are photovoltaic cells (solar), onshore and offshore turbines (wind), and energy storage systems (batteries) to facilitate the diurnal cycle of power generation by solar and wind sources.
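The scaling behind these figures is simply site power x running time x grid carbon intensity; the sketch below assumes continuous operation at 175 MW for 20 years, which only roughly reproduces the quoted 5.7 and 11.2 Mt since the actual run plan includes lower-power periods.

site_power_mw = 175.0
years         = 20
energy_kwh    = site_power_mw * 1e3 * years * 8766.0      # 8766 h per average year

for grid, ci_g_per_kwh in [("CAISO", 194.0), ("PJM", 381.0)]:
    mt_co2e = energy_kwh * ci_g_per_kwh / 1e12             # grams -> megatonnes
    print(f"{grid}: ~{mt_co2e:.1f} Mt CO2e over 20 years")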
Solar is the most appealing renewable energy source. It has achieved the highest market penetration among renewable sources and is expected to achieve utility-scale parity with non-renewables within the next decade. The present cost of PV cells is between 0.82 - 1.01 $/W and the land area required to operate a 3 MW scale solar farm is 6-8 acres/MW <cit.>. Assuming PV cell efficiencies will be driven well beyond the present 30% limit by multi-junction fabrication techniques, the values $0.80/W and 4 acres/MW are assumed <cit.>.
While wind energy trails solar in terms of market penetration, providing over 120 GW domestically, it would offer a complementary daily load profile to that of solar energy, where approximately twice as much power is generated at night than during the day, by both onshore and offshore wind farms <cit.>. While onshore wind has greatest penetration in the Midwest, where average wind speeds at 100m elevation can exceed 10 m/s, smaller wind turbines with lower peak output capacity and lower cut-in wind speeds can be suitable for regions where wind patterns are less intense <cit.>. Typical peak power output for onshore and offshore wind turbines are 3 MW and 10 MW with typical capacity factors (efficiency) of 40% and 60%, respectively <cit.>. The significantly higher power production capacity for offshore wind turbines offers an advantage to candidate sites located on the coasts. Fixed-bottom and floating turbines are the preferred for offshore farms on the Atlantic and Pacific coasts, respectively. Floating turbines have the additional advantage of eliminating high-frequency vibrations resulting from mechanical coupling to the sea floor, which can significantly increase the turbine's functional lifetime, and installation of a floating turbine has a significantly reduced impact on local marine life <cit.>. The costs of onshore, fixed-bottom offshore and floating offshore turbines are around 1.3, 3.25 and 5.3 $/W <cit.>.
A major challenge to full electrification is the need to deliver power to end-users reliably when generation is dependent on natural processes which fluctuate on short timescales (local weather patterns, daily cycle) and long timescales (seasons, regional climate cycles). Energy storage systems are required to eliminate dependence on non-renewables during periods of low production by renewable sources, and can be realised using mechanical, thermal, and chemical energy storage techniques. For example, pumped storage hydro-power (PSH) stations represented 99% of utility-scale energy storage in 2019, each of which has GWh-scale capacity <cit.>. While PSH stations can be used to balance load profiles on the regional scale, they can only be situated where geological constraints allow. Battery energy storage systems (BESS) are not subject to such constraints and can further be built in a distributed network near end-users, rather than in large centralised plants. However, utility-scale battery technology is still nascent, with liquid lithium-ion as the most common battery chemistry. While other designs, like lithium-sulfur, lithium-metal, and sodium-ion, can offer higher energy densities and longer lifetimes, various technical challenges must be overcome. As alternative designs are developed for the future, lithium-ion batteries can support BESS operating on the scale required for today. The world's largest BESS is located in Moss Landing, CA, and has a capacity of 1.4 GWh and can deliver 350 MW to the CAISO grid. The Edwards and Sanburn Solar and Energy Storage site, to be completed in 2023, will use 2.5 million PV modules and 110,000 lithium-ion batteries situated on 6,000 acres to produce up to 1.1 GW and store 3.32 GWh.
We rely on projections of BESS costs and capacities in the years 2040 and 2050 to appraise those associated to . A reference case for the projected domestic storage capacity in batteries in the years 2040 and 2050 are 120 GWh and 210 GWh, respectively <cit.>. The maximum amount of storage capacity needed to power for a 12 hour period at 150 (175) MW is 1.2 (1.4) GWh, constituting less than 1% of expected total market capacity. By 2040, hydro-pumped energy storage will constitute 20% of total storage capacity and will be relegated to storage durations of more than 12 hours. Lithium-ion battery cell lifetimes are typically on the order of 1000 cycles, and other battery chemistries have rapidly increased in lifetime in recent years, topping 600 cycles for Lithium NMC <cit.>. If a 1000 cycle lifetime is assumed for future battery technologies, and batteries would experience 300 full cycles in a year, each battery module would need to be replaced 3 times in each 10 year run period. Costs could be mitigated through battery recycling, at minimum to be smelted and the valuable elements Nickel and Cobalt captured, 10% of the battery cost could feasibly be reclaimed. The cost of batteries designed for 10 hour storage in the years 2040 and 2050 are 125 and 100 $/kWh <cit.>. These parameters can be used to estimate the total cost of batteries for powering scenarios over the full 20 year run time.
Finally, cost mitigation strategies can be explored. The current compensation rate for surplus power sold back to Pacific Gas and Electric was around $525/kW/year on average from January 2022 to May 2023 <cit.>. An analysis by S&P indicates that in 2030, $55/kW/year could be generated through energy arbitrage, where energy purchased during the day can be stored and sold at night when energy prices are driven by the higher cost non-renewables <cit.>. This analysis also shows that the average cost of energy will not substantially decrease over time. Higher battery capacity would be required to capitalise on arbitrage opportunities and is therefore less appealing than selling excess energy production immediately during daytime production. An additional 150 MW of solar capacity in excess of requirements could generate $380 million. If government investment on the scale of the Production and Investment Tax Credits (PTC and ITC) outlined in the IRA were to be available during construction, the cost of batteries could be reduced by 30% and the cost of renewable power generation could be reduced by $0.0275/kWh <cit.>.
For the following analysis, a day/night cycle of 12 hours each is considered and the average power production over the course of a full day is 175 MW. The total energy storage capacity from batteries is set to provide the difference in nighttime power generation (and must be charged during the day with power generated in excess of 175 MW). Table <ref> summarises a possible design configuration using a mix of solar and wind energy.
While the composition of this energy portfolio can impact the total cost estimates, the total cost of energy infrastructure required to de-carbonize operations is approximately $1 billion over the course of 20 years of operation. It is important to note that this falls largely outside the scope of the project budget. Indeed, most of this cost will be covered by general investment by the US government in electrification of the grid. While FCC would not be able to access 550 GeV CoM energy, it is expected to require 350 MW in the 365 GeV tt̅ run configuration <cit.>. CERN receives significantly de-carbonized energy from France, where 56 nuclear reactors collectively deliver 63 GW to the grid (1.1 GW/plant on average) <cit.>. Assuming FCC operated with nuclear power alone, it would consume 30% of the power output of a single plant. A nuclear reactor today typically costs around 8 billion euros, implying that the energy infrastructure required to operate FCC sustainably is $2.5 billion.
The previous analysis leads to two conclusions about sustainable operation of :
* The required technological innovation of solar, wind, and energy storage systems is expected to meet the site power needs for by the beginning of operations
* Market availability of these technologies will be sufficiently scaled such that they can be deployed for , and the associated costs borne by government investment in renewable energy will be comparable to, if not less than, those of alternate e^+e^- Higgs factory options
We would like to estimate the cost within the budget scope required to operate sustainably in a realistic scenario. A $200 million budget for renewables would support a 250 MW solar farm, fully covering the needs of during the day with an average excess production of 87.5 MW that can be sold to the grid. Assuming increased capacity of domestic BESS results in negligible energy price differences between day and night through arbitrage, would incur energy costs only from the additional 75 MW needed at night on average. At $0.06/kWh, this would amount to $780 million over 20 years. To effectively erase this additional energy cost, the solar farm budget can be increased to $270 million to provide twice the average site power needs. It should be emphasised that can achieve effective energy independence with a modest investment in solar infrastructure. Given the carbon intensity of solar, wind, nuclear, and natural gas of 11, 11, 12, and 524 gCO_2/kWh in the CAISO grid, along with the least optimistic projection of domestic renewable energy production by the US Energy Information Administration, the carbon intensity of electricity produced by the CAISO grid can be expected to fall below 125 gCO_2/kWh by 2050 <cit.>. This is driven by a doubling of solar/wind and a 25% reduction in gas in terms of total energy portfolio composition. Since half of site power originates purely from solar, the average carbon intensity of energy consumption will be better than 68 gCO_2/kWh. This is further improved to 46 gCO_2/kWh in the high technology uptake scenario. These are comparable to the carbon intensity in France of 38 gCO_2/kWh, which is not expected to be further reduced.
§ MITIGATION STRATEGIES FOR OPERATIONS
There can be considerable emissions associated with the production of energy required to meet site operation power requirements. This is highly dependent on the region in which the project operates; regions with highly de-carbonized electricity grids (via solar, wind, hydroelectric, and nuclear power) offer significantly reduced carbon emissions related to energy production than those running on non-renewable energies (gas, oil, and coal). The total emissions of each collider project are then evaluated as the product of the total amount of energy consumed and the local carbon intensity for its production.
While total de-carbonization of the electric grid by 2040 is a nominal goal, it is not assured. The 2040 projection of carbon intensity based on the stated policies scenario for Japan, China, the European Union, and the United States are roughly 150, 300, 40, and 45 t/GWh, respectively <cit.>. However, local variations in renewable energy systems implementation is neglected in these estimates; for example, the CERN-based colliders could take advantage of a 50-50 mix of renewable and nuclear energy. Additional mitigation strategies, such as construction of dedicated renewable energy plants, would reduce the carbon impact of operations in other regions. This strategy has been thoroughly investigated by the Green ILC Project <cit.>. A more moderate strategy can be envisioned for . A 185 MW solar farm could be built with a $150 million budget <cit.>, double covering the average power requirement of [This estimate considers the power optimizations in Table <ref>], such that excess power could be stored for later use at night[The additional cost of selling and purchasing energy through utility companies can be reduced through special contracts and is neglected here], allowing to achieve green energy independence. The use of multi-junction photovoltaic cell fabrication techniques would improve power conversion efficiency well beyond 30% that is common in today's cells <cit.>, allowing such a solar farm to be situated on about 5 km^2 of land <cit.>.
This estimate relies on energy storage systems supported by regional electricity grids. To better understand the feasibility of scaling all parts of energy production (which may fall under the project budget) and energy storage infrastructure (which would be funded by the US government, but would nonetheless need investment), we perform a holistic cost estimate. We first note that the energy storage capacity required to supply 150 MW continuously for 12 hours is less than 1% the expected grid energy storage capacity in 2040 <cit.>, indicating that the US grid should be able to reasonable support operations at this scale using renewable energy. We assume lithium ion batteries[Lithium ion batteries are not considered to be viable long term energy storage solutions, instead technologies such as flow batteries and systems based on mechanical potential energy are favored] are the primary energy storage technology with a lifetime of 1000 cycles, experiencing 300 cycles per year with 10% of battery cost reclaimed through recycling at a base cost of 125 (100) $/kWh in 2040 and 2050 <cit.>. We take the cost of energy production of solar to be $0.80/W <cit.> while taking that of onshore, fixed-bottom offshore and floating offshore wind turbines to be around 1.3, 3.25 and 5.3 $/W <cit.>. An energy production portfolio that provides continuous power for over a 12 hour day/12 hour night period based on these technologies alone would cost approximately $1 billion. This estimate is primarily driven by requirements of battery energy storage systems and holds for a variety of energy source mixes. This indicates a similar cost would be associated to a site located near the Pacific or Atlantic coasts, which could leverage floating and fixed-bottom turbines respectively, in the Southern US where solar would be most efficient, or proximate to large wind farms in the Midwest. A more precise cost and feasibility analysis can be performed when a candidate site is defined, as has been done for experiments operating at the South pole, for example <cit.>. This cost analysis demonstrates that operations could be supported sustainably within the US within the next two decades given conservative projections of technological development.
As a point of comparison, the power requirement of FCC would be about 30% of the output of a large nuclear plant (generating 1.1 GW on average <cit.>). At about $8 billion per facility, the cost of renewable energy infrastructure for FCC would be about $2.5 billion.
To obtain an estimate of the carbon impact of operations at future collider facilities that takes mitigation strategies into account, we first note that the carbon intensities of solar, wind, hydro, and nuclear are around 30, 15, 25 and 5 t/GWh, respectively <cit.>. These estimates have some regional variation due to the differences in supply chains and local infrastructure. For instance, given the lifetime of existing nuclear plants of about 30 years, replacement or construction of entirely new facilities will be required, and this might affect the overall carbon intensity. While the ultimate energy production portfolio will be different for facilities constructed in different regions, we take a common estimate of 20 t/GWh for all collider facilities in this analysis. We find this to be a reasonable estimate given that any facility can propose mitigation strategies to decouple its carbon impact from the regional average. It also reflects the expectation that clean energy infrastructure supply chains will improve over the next 20 years.
§ ANALYSIS OF TOTAL CARBON FOOTPRINT
A straightforward calculation of total energy consumption is possible using the information summarized in Table <ref>, which includes estimates of the site power P during collision mode, the annual collision time T_collisions and the total running time in years T_run for each center-of-mass energy √(s) considered. We take into account the time spent with the beam operating at full RF and cooling power outside of data-taking mode, for example for machine development, as an additional week for every 6 weeks of data-taking (i.e. +17%), represented as T_development. We take the site power requirement for the remaining period in a calendar year to be 30% of the site power requirement during data-taking (denoted by κ_down). This value is a conservative upper estimate, since without RF power and associated heat load, any accelerator can be kept cold with a small fraction of power to the cryogenics system.
Using these values, the annual energy consumed is calculated as:
E_annual = P[κ_down· T_year+(1-κ_down)(T_collisions + T_development)]
and the total energy consumption, summing over all √(s) run configurations, is
E_total=∑_r ∈ runsE(r)_annual· T_run(r)
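The two expressions above translate directly into code; the run parameters in the example below are placeholders rather than the values of Table <ref>, and the common 20 t CO2e/GWh carbon intensity discussed in the previous section is applied to convert energy into an operations GWP.

HOURS_PER_YEAR = 8766.0

def annual_energy_twh(power_mw, t_collisions_h, kappa_down=0.30):
    # full power during collisions and machine development (+17%), kappa_down * P otherwise
    t_dev = 0.17 * t_collisions_h
    return power_mw * 1e-6 * (kappa_down * HOURS_PER_YEAR
                              + (1.0 - kappa_down) * (t_collisions_h + t_dev))

def total_energy_twh(runs):
    # runs: list of (site power [MW], annual collision time [h], run duration [yr])
    return sum(annual_energy_twh(p, tc) * years for p, tc, years in runs)

# hypothetical single-run example
e_tot = total_energy_twh([(150.0, 5000.0, 10)])
print(f"{e_tot:.2f} TWh, ~{20.0 * e_tot:.0f} kton CO2e at 20 t CO2e/GWh")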
For the circular collider projects, FCC and CEPC, we consider separately the cumulative energy consumption of the Higgs physics runs (i.e. √(s)>240 GeV) for a focused comparison on the basis of Higgs physics reach argued in Section <ref>, but additionally include the contribution of Z-pole and WW-threshold runs which impact the climate nevertheless.
Figure <ref> shows the energy consumption for the considered collider projects. The least energy is consumed by CLIC, driven by the lowest planned run time at low energies and its marginally lower power consumption compared to and ILC, which are comparable. The energy consumption of CEPC is large compared to FCC because CEPC plans to collect four times the integrated luminosity at 240 GeV with an associated tripling of the total run duration.
Figure <ref> shows the precision-weighted energy consumption for the considered collider projects, estimated by multiplying the energy consumption of Figure <ref> with the average relative precision in the last row of Table <ref>. The lowest run time for CLIC is now compensated by the reduced relative precision, in comparison to and ILC, leading to overall closer precision-weighted energy consumption. Similarly, the large proposed run time for CEPC is now taken into account in conjunction with the improved precision reach, yielding a total weighted energy consumption closer to FCC.
Figure <ref> shows the associated GWP of the total energy required for operations, obtained by multiplying the total energy consumption by the respective carbon intensity. The GWP of FCC operations benefits from the de-carbonized electricity expected in France and Switzerland, despite its high total energy requirements.
Figure <ref> shows the GWP due to construction of accelerator facilities. The carbon footprint is very similar among the linear and circular colliders, which is driven primarily by the total length of the accelerator. Figure <ref> shows the total GWP from construction and operations. CLIC is the most environmentally friendly option, owing to its lead performance in operations emissions as well as its small footprint. The total GWP of and ILC are driven by operations while that of CLIC, FCC, and CEPC are almost entirely driven by construction emissions. Possible reductions in the construction component could be achieved by using concrete with lower cement content than CEM1 C40 considered in this analysis. Such cases would still leave FCC GWP dominated by construction processes.
Finally, Figure <ref> shows the total precision-weighted GWP from construction and operations, estimated in the same way as the precision-weighted energy consumption in Figure <ref>. Given the overall similar GWP for CLIC and and the superior precision reach of at higher energies, compared to CLIC, appears to be the most environmentally friendly option, when accounting for the precision-weighted total carbon footprint.
The total energy consumption is given in table <ref> for three cases:
(a) when considering the complete running scenarios of Table <ref>, which include higher √(s) runs for ILC, and runs at the Z-pole and WW-threshold for CEPC and FCC;
(b) when only considering the "Higgs factory" modes of the proposed colliders, thus excluding the Z and WW runs for CEPC and FCC;
(c) and when only including the √(s)=250 GeV run for ILC/, since this run already provides comparable sensitivity to the Higgs couplings as the other proposed Higgs factories, as is shown in Table <ref>.
The 2045 estimates for the carbon intensity in the various locations where the collider projects could be hosted are given in the third row of Table <ref>, and the total carbon footprint is given in the same table for the two cases considered (sixth and last rows). The total energy consumption and carbon footprint are also shown in Figures <ref>,<ref>.
§ CONCLUSIONS
We present the first analysis of the environmental impact of the newly proposed collider and a comparison with the other proposed facilities in terms of physics reach, energy needs and carbon footprint for both construction and operations.
The physics reach of the proposed linear and circular e^+e^- colliders has been studied extensively in the context of the US Snowmass and European Strategy processes. We zero in on the Higgs boson coupling measurement precision achievable at , CLIC, ILC, FCC, and CEPC and point out that they are generally similar, though linear colliders can operate at higher collision energies allowing access to additional measurements of the Higgs boson's properties. Moreover, the use of polarization at linear facilities effectively compensates for the lower luminosity.
On this basis, the global warming potential of these facilities is compared in terms of absolute environmental impact and in terms of environmental impact per unit of physics output obtained by a weighted average of expected precision on Higgs coupling measurements. The operations emissions of could be improved through beam parameter optimization leading to 63 (79) MW power reduction compared to the nominal 150 (175) MW in the 250 (550) GeV running mode. Mitigation strategies using dedicated renewable energy facilities can reduce the carbon intensity of energy production to 20 ton /GWh. We find that global warming potential is driven by construction rather than by operations beyond 2040. The compact nature of linear collider facilities reduces the total volume of construction materials and opens up the option for a surface site to simplify the construction process. We conclude that linear colliders and in particular have great potential for an environmentally sustainable path forward for high energy collider facilities.
§ ADDITIONAL POINTS
When assessing the energy consumption and carbon footprint of a proposed Higgs factory, the following points should be kept in mind:
* The figure of merit when assessing the scientific output of a Higgs factory should not be the number of Higgs bosons produced per se, but rather the precision in the Physics observables of interest (particularly Higgs couplings) that can be reached for a given number of Higgs bosons produced.
* Electron (primarily) and positron (secondarily) polarization can yield an effective luminosity improvement factor for linear machines of ∼ 2.5, i.e. allowing the same precision for various Higgs couplings to be reached with ∼ 40 % of the integrated luminosity.
* Additionally, linear machines can probe higher center-of-mass energies, which offers various advantages compared to circular machines:
* At higher √(s), Higgs boson production cross section increases, enabling a more efficient production of Higgs bosons.
* At high √(s) (above ≃ 500 GeV), linear machines can probe double Higgs production via the ZHH channel, allowing for a direct measurement of the Higgs trilinear coupling λ_3.
For the electron Yukawa coupling, FCC can achieve a 𝒪(1) fractional uncertainty with the dedicated run at the Higgs mass pole, which was however not taken into account for the studies presented here.
§ ACKNOWLEDGEMENTS
The authors express their gratitude to Dan Akerib, Tom Shutt, Sridhara Dasu, Patrick Maede, and Jim Brau for their insightful discussions, which have significantly contributed to this work. The authors also extend their appreciation to Michael Peskin and Steinar Stapnes for providing feedback on the manuscript.
The work of the authors is supported by the US Department of Energy under contract DE–AC02–76SF00515.
|
http://arxiv.org/abs/2307.07249v1 | 20230714094929 | Search for dark matter annual modulation with DarkSide-50 | [
"The DarkSide-50 Collaboration",
":",
"P. Agnes",
"I. F. M. Albuquerque",
"T. Alexander",
"A. K. Alton",
"M. Ave",
"H. O. Back",
"G. Batignani",
"K. Biery",
"V. Bocci",
"W. M. Bonivento",
"B. Bottino",
"S. Bussino",
"M. Cadeddu",
"M. Cadoni",
"F. Calaprice",
"A. Caminata",
"M. D. Campos",
"N. Canci",
"M. Caravati",
"N. Cargioli",
"M. Cariello",
"M. Carlini",
"V. Cataudella",
"P. Cavalcante",
"S. Cavuoti",
"S. Chashin",
"A. Chepurnov",
"C. Cicalò",
"G. Covone",
"D. D'Angelo",
"S. Davini",
"A. De Candia",
"S. De Cecco",
"G. De Filippis",
"G. De Rosa",
"A. V. Derbin",
"A. Devoto",
"M. D'Incecco",
"C. Dionisi",
"F. Dordei",
"M. Downing",
"D. D'Urso",
"M. Fairbairn",
"G. Fiorillo",
"D. Franco",
"F. Gabriele",
"C. Galbiati",
"C. Ghiano",
"C. Giganti",
"G. K. Giovanetti",
"A. M. Goretti",
"G. Grilli di Cortona",
"A. Grobov",
"M. Gromov",
"M. Guan",
"M. Gulino",
"B. R. Hackett",
"K. Herner",
"T. Hessel",
"B. Hosseini",
"F. Hubaut",
"T. Hugues",
"E. V. Hungerford",
"An. Ianni",
"V. Ippolito",
"K. Keeter",
"C. L. Kendziora",
"M. Kimura",
"I. Kochanek",
"D. Korablev",
"G. Korga",
"A. Kubankin",
"M. Kuss",
"M. Kuźniak",
"M. La Commara",
"M. Lai",
"X. Li",
"M. Lissia",
"G. Longo",
"O. Lychagina",
"I. N. Machulin",
"L. P. Mapelli",
"S. M. Mari",
"J. Maricic",
"A. Messina",
"R. Milincic",
"J. Monroe",
"M. Morrocchi",
"X. Mougeot",
"V. N. Muratova",
"P. Musico",
"A. O. Nozdrina",
"A. Oleinik",
"F. Ortica",
"L. Pagani",
"M. Pallavicini",
"L. Pandola",
"E. Pantic",
"E. Paoloni",
"K. Pelczar",
"N. Pelliccia",
"S. Piacentini",
"A. Pocar",
"D. M. Poehlmann",
"S. Pordes",
"S. S. Poudel",
"P. Pralavorio",
"D. D. Price",
"F. Ragusa",
"M. Razeti",
"A. Razeto",
"A. L. Renshaw",
"M. Rescigno",
"J. Rode",
"A. Romani",
"D. Sablone",
"O. Samoylov",
"E. Sandford",
"W. Sands",
"S. Sanfilippo",
"C. Savarese",
"B. Schlitzer",
"D. A. Semenov",
"A. Shchagin",
"A. Sheshukov",
"M. D. Skorokhvatov",
"O. Smirnov",
"A. Sotnikov",
"S. Stracka",
"Y. Suvorov",
"R. Tartaglia",
"G. Testera",
"A. Tonazzo",
"E. V. Unzhakov",
"A. Vishneva",
"R. B. Vogelaar",
"M. Wada",
"H. Wang",
"Y. Wang",
"S. Westerdale",
"M. M. Wojcik",
"X. Xiao",
"C. Yang",
"G. Zuzel"
] | hep-ex | [
"hep-ex",
"astro-ph.CO"
] |
The DarkSide-50 Collaboration
The dark matter induced event rate in an Earth-based detector is predicted to show an annual modulation as a result of the Earth's orbital motion around the Sun.
We searched for this modulation signature using the ionization signal of the DarkSide-50 liquid argon time projection chamber.
No significant signature compatible with dark matter is observed in the electron recoil equivalent energy range above 40 eV_ee, the lowest threshold ever achieved in such a search.
Search for dark matter annual modulation with DarkSide-50
The combined effect of Earth's rotations around the Sun and the galactic center is expected to produce an annual modulation of the dark matter particle interaction rate in terrestrial detectors <cit.>, thereby offering a unique signature for directly probing dark matter particles and unveiling their true nature.
The DAMA/LIBRA experiment claimed the detection of such a signature in their NaI detectors in the range <cit.>.
The interpretation of this claim with the Weakly Interacting Massive Particle (WIMP) hypothesis is however currently facing challenges due to the null detection of WIMP-induced nuclear-recoil signals in other experiments <cit.>.
An independent approach to test this claim and possibly to reveal WIMP properties can be offered by searching for the modulation with other detectors which have different target materials, background sources, energy resolution, and experimental sites.
Dual-phase noble-liquid time projection chambers (TPCs) measure the scintillation and ionization signals from a particle interacting in the liquid.
Such detectors were originally designed to discover WIMPs and have led the search for WIMPs with masses above 10 GeV/c^2.
Moreover, in the last decade, they have also exhibited world-class sensitivity to light dark matter candidates exploiting only the ionization signal spectrum above a few detected ionization electrons (N_e) <cit.>.
Among them, the DarkSide-50 detector, a liquid argon (LAr) TPC located underground at the Laboratori Nazionali del Gran Sasso (LNGS) <cit.>, recently demonstrated an unprecedented sensitivity in this energy region <cit.>.
This achievement was accomplished by looking for an event excess in the energy spectrum with respect to the background model above 0.06 keV electron recoil equivalent (keV_ee).
In this work, we report for the first time on the search for the annual rate modulation of events down to 0.04 keV_ee, the lowest threshold ever achieved in a dark matter modulation search.
The analysis relies on two approaches: the maximum likelihood fit and the Lomb-Scargle periodogram.
The results are also compared to the claim by the DAMA/LIBRA experiment.
The TPC is housed in a stainless steel double-walled, vacuum-insulated cryostat, shielded by a 30 t boron-loaded liquid scintillator veto instrumented with 110 8-inch PMTs.
The purpose of this veto is to actively tag neutrons.
A 1 kt water Čerenkov veto, equipped with 80 PMTs, surrounds the neutron veto to actively tag cosmic muons and to passively shield the TPC against external backgrounds <cit.>.
Two arrays of 19 3-inch photomultiplier tubes (PMTs), located at the top and the bottom of the TPC, detect light pulses from scintillation (S1) induced by particle interactions in the liquid bulk.
The same interactions generate ionization electrons, which are drifted through the LAr volume by a 200 V/cm electric field up to the top of the TPC.
Then, they are extracted into the gas phase by a 2.8 kV/cm field and induce delayed photon pulses (S2) by electroluminescence under a 4.2 kV/cm field.
DarkSide-50 started taking data in April 2015 with a low-radioactivity LAr target, extracted from a deep underground source (UAr) <cit.>, and concluded operations in February 2018.
The first four months of data were contaminated by the cosmogenic 37Ar isotope, with a half-life of 35.0 d <cit.>, and were only used to calibrate the ionization response <cit.>.
About 25% of the rest of the data taking was devoted to calibration campaigns with dissolved and external radioactive sources.
The livetime used in this paper corresponds to 693.3 d.
Selected events are required to be single-scatter, i.e., with a single S2 pulse, after a veto of 20 for each event triggering the DAQ.
Additional cuts are used to remove pileup pulses, surface α events, and events reconstructed in the outer ∼7 thick cylindrical shell of the TPC.
In addition, the low energy threshold for this analysis is defined in order to reject spurious electrons (SEs) <cit.>, the object of a paper in preparation.
These originate from ionization electrons trapped on impurities along the drift in LAr, and released with a certain delay.
A full description of the selection criteria can be found in DarkSide-50:2022qzh.
A crucial aspect for this analysis is long-term stability of the detector performance, monitored by various sensors incorporated inside the cryogenic system, as well as by the recorded events from the TPC itself.
The two parameters whose fluctuations may potentially have a high impact on the modulation search are the electric drift field, F, and the average number of detected S2 photons per ionization electron extracted in the gas phase, g_2.
The stability of F is monitored via the stability of the supplied high voltages for the electric fields and via the stability of the drift time of events at the very bottom of the TPC.
The maximum fluctuation of F was estimated to be less than 0.01%, too small to affect the ionization response.
Based on the S2/S1 ratio for electronic recoil events above the region of interest (RoI) (0.04–21 keV_ee), g_2 varies at most by 0.5% over the whole data-taking period.
The impact from the instability is evaluated by pseudo experiments and found to be negligible compared to the statistical fluctuations.
The time evolution of background events can be described by the combination of a set of decaying exponentials and a constant term.
The latter component includes the radioactive backgrounds whose lifetime is much longer than the data-taking period of about three years, and is dominated by the β-decay of 39Ar (half-life of 268 yr <cit.>).
The exponential components arise from the decays of 37Ar (35.0 d <cit.>), 85Kr (10.8 yr <cit.>), 54Mn (312.1 d <cit.>), and 60Co (5.27 yr <cit.>).
The first two isotopes are intrinsically present in the LAr, while the latter two are contaminants of the PMTs, and 60Co is also present in the cryostat stainless steel.
The latter two emit γ- and x-rays, which deposit energy in the LAr target.
The background model is generated with the DarkSide-50 Geant4-based Monte Carlo code <cit.>.
The model is built on data from an extensive materials screening campaign to characterize the trace radioactivity content of every detector component.
It also uses measurements with <cit.> and incorporates the detector response model <cit.>.
fig:rate_bestfit shows the measured time-dependent event rates for events with N_e in the two ranges corresponding to 0.06–2.0 and 2.0–6.0 keV_ee, respectively.
The signal and backgrounds are modelled with
f(t) = A_χ cos( 2π (t-ϕ) / T ) + ∑_l A_l/τ_l e^-t/τ_l + C,
(l = ^37Ar, ^85Kr, ^54Mn, ^60Co)
where A_χ is the amplitude of the modulated term of the signal, ϕ the phase, and T the period fixed to 1 yr.
The constant term C is the sum of the time-averaged signal component and long-lived backgrounds.
The parameters τ_l and A_l correspond to the decay times and amplitudes, respectively, of the short-lived isotopes l.
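For illustration, the rate model above can be written out directly; the parameter values below are placeholders rather than the DarkSide-50 best-fit results, and the half-lives quoted in the text are converted to decay times via τ = T_1/2 / ln 2.

import numpy as np

T_MOD = 365.25   # days; period fixed to one year

def rate_model(t, a_chi, phi, c, decaying):
    # decaying: list of (A_l, tau_l) pairs for the short-lived components
    rate = a_chi * np.cos(2.0 * np.pi * (t - phi) / T_MOD) + c
    for a_l, tau_l in decaying:
        rate += (a_l / tau_l) * np.exp(-t / tau_l)
    return rate

t_days = np.arange(0.0, 693.3, 7.0)                      # 7-day bins over the exposure
r = rate_model(t_days, a_chi=0.5, phi=152.0, c=10.0,
               decaying=[(50.0, 35.0 / np.log(2.0))])    # e.g. a 37Ar-like term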
Examples of background-only fits to data, by fixing A_χ=0, are shown in fig:rate_bestfit for the two ranges.
The statistical significance of a possible modulated signal is assessed using the following binned likelihood with a bin width of 7 d:
ℒ = ∏_{i ∈ t-bins} 𝒫(n_i | m_i(A_χ, ϕ, C, Θ))
× ∏_{θ_k ∈ Θ} 𝒢(θ_k | θ^0_k, Δθ_k).
The first term represents the Poisson probability of observing n_i events in the i^th time bin with respect to the expected ones, m_i(A_χ, ϕ, C, Θ), evaluated with eq:pdf.
In the fit, A_χ, ϕ and C are left free to vary, while the other parameters are contained inside Θ, which represents the set of remaining nuisance parameters constrained by the Gaussian penalty terms in the last factor of eq:likelihood.
In the latter, θ^0_k and Δθ_k represent the nominal central values and uncertainties, respectively, of the nuisance parameters and are listed in tab:nuis_par.
The nuisance parameters account for uncertainties on the fiducial volume of the TPC (which induces a 1.1% uncertainty on the event rate from 54Mn and 60Co in the PMTs and cryostat; and a 1.5% uncertainty on the other event rates, acting in a correlated way <cit.>) and on the activities of short-lived decays in the energy range of interest.
These are obtained from the combination of the uncertainty on the measured rate (14%, 4.7%, 40%, 12% for ^37Ar, ^85Kr, ^54Mn, ^60Co, respectively <cit.>), with the uncertainty arising from the definition of the energy range due to the ionization response.
In addition, the uncertainty on the ^85Kr activity is combined with the spectral uncertainties from the β-decay Q-value and atomic exchange and screening effects <cit.>, as discussed in DarkSide-50:2022qzh.
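A schematic implementation of this penalized binned likelihood is sketched below; the optimizer choice and the way the nuisance parameters enter the expected counts are illustrative placeholders, not the actual analysis code:

```python
import numpy as np
from scipy.stats import poisson, norm
from scipy.optimize import minimize

def neg_log_likelihood(params, t_centers, n_obs, theta0, dtheta, expected_counts):
    """params = [A_chi, phi, C, theta_1, ..., theta_K]; expected_counts returns m_i per time bin."""
    A_chi, phi, C = params[:3]
    theta = np.asarray(params[3:])
    m = expected_counts(t_centers, A_chi, phi, C, theta)
    if np.any(m <= 0.0):
        return np.inf
    nll = -np.sum(poisson.logpmf(n_obs, m))                      # Poisson term over time bins
    nll -= np.sum(norm.logpdf(theta, loc=theta0, scale=dtheta))  # Gaussian penalties on nuisances
    return nll

# result = minimize(neg_log_likelihood, x0,
#                   args=(t_centers, n_obs, theta0, dtheta, expected_counts),
#                   method="Nelder-Mead")
```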
The fit to data with eq:likelihood does not show any evidence of modulation in either of the two analyzed ranges.
fig:bestfit_amp_phase shows the best fit values of (A_χ, ϕ), and the associated 68% and 95% confidence level (C.L.) contours, for the two energy ranges.
The same analysis has been repeated by varying the bin width from 1 to 10 d, and no significant variations have been found.
The result in the 2.0–6.0 keV range is used to test the modulation observed by DAMA/LIBRA in the same interval, compatible with a dark matter signal over 14 cycles with a significance of >13σ <cit.>.
The significance from this analysis is such that we can neither confirm nor reject the DAMA/LIBRA observation over the null hypothesis.
For completeness, the same conclusion is drawn for the 1.0–3.0 keV range, also analyzed by DAMA/LIBRA.
Additional constraints on the modulation amplitude are obtained by simultaneously fitting event timestamps and energies after fixing the period (1 yr) and the phase (maximum at June 2nd) to those expected from the Standard Halo Model <cit.>.
This approach does not require any assumption on the SE rate and thus allows the range to be extended down to 3 electrons, or 0.04 keV, which corresponds to the primary electron induced by the interaction plus, on average, two subsequent ionization electrons.
The likelihood,
ℒ = ∏_{i ∈ t-bins} ∏_{j ∈ E-bins} 𝒫(n_i^j | m_i^j(A_χ^j, C^j, Θ̃))
× ∏_{θ̃_k ∈ Θ̃} 𝒢(θ̃_k | θ̃^0_k, Δθ̃_k),
is the product of the Poisson probabilities in each of the ij-bins defined by the event time (i) and energy expressed in terms of number of electrons (j) given the signal amplitude, A^j_χ, and the constant background component, C^j.
The chosen bin width along the time axis corresponds to 7 d, and the bin widths along the energy axis are 0.02 keV below 0.06 keV, 0.25 keV below 1 keV, 1 keV up to 6 keV, and 2 keV elsewhere, starting from 0.04 keV (3 electrons).
The sample of events with 3 electrons is contaminated by SEs.
To account for this background, we anchored its time variation to that of events below 3 electrons, selected in coincidence with the previous event, which are largely dominated by SEs.
This approach is justified by the observation that the spectrum of events occurring in a 2 window from the previous event, which consists of more than 90% SEs, is stable over time.
The amplitude of the signal in each energy interval, A^j_χ, is assumed uncorrelated with the others.
Nuisance parameters Θ̃, in eq:likelihood2 are the same as in eq:likelihood, but account for energy spectral distortions of the background components as done in DarkSide-50:2022qzh.
fig:limit_amp shows the best-fitted amplitude as a function of the energy, together with the 1- and 2-σ significance coverages, as derived with background-only Monte Carlo datasets.
The results from DAMA/LIBRA <cit.>, COSINE-100 <cit.>, and XMASS <cit.> are also shown.
In contrast to our approach, DAMA/LIBRA analyzed each energy bin independently and measured the amplitude from the residuals of a yearly averaged event rate.
Finally, a Lomb-Scargle periodogram analysis <cit.> is performed on the temporal evolution of the event rate to look for sinusoidal signals with any period, including the one expected from dark matter.
The analysis is applied to the data residuals, after the subtraction of the best-fitted background model, shown in fig:rate_bestfit.
The uncertainty from the background fit is propagated to the data errors.
The false alarm probability is calculated with the Bootstrap method <cit.> and used to assess the significance of the sinusoidal signals.
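For reference, such a period scan can be reproduced with astropy's Lomb-Scargle implementation; the variable names and the choice of the frequency grid below are illustrative assumptions only:

```python
import numpy as np
from astropy.timeseries import LombScargle

def scan_for_periodicity(t, residual, sigma, max_period=800.0):
    """t: time bin centers (days); residual: rate after subtracting the best-fit background model."""
    ls = LombScargle(t, residual, sigma)
    frequency, power = ls.autopower(minimum_frequency=1.0 / max_period)
    # Bootstrap-based false alarm probability of the highest periodogram peak
    fap = ls.false_alarm_probability(power.max(), method="bootstrap")
    return 1.0 / frequency, power, fap
```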
The sensitivity of this analysis is evaluated by applying the Lomb-Scargle analysis over 1000 pseudo experiments where an annual modulation signal has been injected.
A median significance of 1σ on the false alarm probability is obtained for an injected modulation amplitude of 0.03.
The analysis of the data does not identify any significant modulation when scanning periods up to 800 d, as shown in fig:ls.
In conclusion, we searched for an event rate modulation in the data between 2.0 and 6.0 keV, where DAMA/LIBRA observed a yearly modulated signal compatible with dark matter.
Also, for the first time, we probed the energy range down to 0.04 keV, the lowest threshold ever used in an annual dark-matter modulation search.
No modulation signal was observed in any of the analyzed intervals.
The significance of this result is not sufficient to confirm or reject the DAMA/LIBRA observation.
The stability of the detector over nearly three years of operation, the accuracy of the background model, and the low-energy threshold achieved demonstrate the competitiveness of the dual-phase LAr-TPC technology in searching for modulation signals.
This result is therefore promising in view of future massive dual-phase liquid argon experiments <cit.>, expected to reach much larger exposures and even lower background levels.
The DarkSide Collaboration offers its profound gratitude to the LNGS and its staff for their invaluable technical and logistical support. We also thank the Fermilab Particle Physics, Scientific, and Core Computing Divisions. Construction and operation of the DarkSide-50 detector was supported by the U.S. National Science Foundation (NSF) (Grants No. PHY-0919363, No. PHY-1004072, No. PHY-1004054, No. PHY-1242585, No. PHY-1314483, No. PHY-1314501, No. PHY-1314507, No. PHY-1352795, No. PHY-1622415, and associated collaborative grants No. PHY-1211308 and No. PHY-1455351), the Italian Istituto Nazionale di Fisica Nucleare, the U.S. Department of Energy (Contracts No. DE-FG02-91ER40671, No. DEAC02-07CH11359, and No. DE-AC05-76RL01830), the Polish NCN (Grant No. UMO-2019/33/B/ST2/02884) and the Polish Ministry for Education and Science (Grant No. 6811/IA/SP/2018). We also acknowledge financial support from the French Institut National de Physique Nucléaire et de Physique des Particules (IN2P3), the IN2P3-COPIN consortium (Grant No. 20-152), and the UnivEarthS LabEx program (Grants No. ANR-10-LABX-0023 and No. ANR-18-IDEX-0001), from the São Paulo Research Foundation (FAPESP) (Grant No. 2016/09084-0), from the Interdisciplinary Scientific and Educational School of Moscow University “Fundamental and Applied Space Research”, from the Program of the Ministry of Education and Science of the Russian Federation for higher education establishments, project No. FZWG-2020-0032 (2019-1569), the International Research Agenda Programme AstroCeNT (MAB/2018/7) funded by the Foundation for Polish Science (FNP) from the European Regional Development Fund, and the European Union's Horizon 2020 research and innovation program under grant agreement No 952480 (DarkWave), and from the Science and Technology Facilities Council, United Kingdom. I. Albuquerque is partially supported by the Brazilian Research Council (CNPq). The theoretical calculation of beta decays was performed as part of the EMPIR Project 20FUN04 PrimA-LTD. This project has received funding from the EMPIR program co-financed by the Participating States and from the European Union’s Horizon 2020 research and innovation program. Isotopes used in this research were supplied by the United States Department of Energy Office of Science by the Isotope Program in the Office of Nuclear Physics.
|
http://arxiv.org/abs/2307.05827v1 | 20230711223647 | Relational Extraction on Wikipedia Tables using Convolutional and Memory Networks | [
"Arif Shahriar",
"Rohan Saha",
"Denilson Barbosa"
] | cs.CL | [
"cs.CL",
"cs.AI",
"cs.IR",
"cs.LG"
] |
Relational Extraction on Wikipedia Tables using Convolutional and Memory Networks
Arif Shahriar, Rohan Saha, Denilson Barbosa
===============================================================================================================================
*Equal Contribution
Relation extraction (RE) is the task of extracting relations between entities in text. Most RE methods extract relations from free-form running text and leave out other rich data sources, such as tables. We explore RE from the perspective of applying neural methods on tabularly organized data. We introduce a new model consisting of Convolutional Neural Network (CNN) and Bidirectional-Long Short Term Memory (BiLSTM) network to encode entities and learn dependencies among them, respectively. We evaluate our model on a large and recent dataset and compare results with previous neural methods. Experimental results show that our model consistently outperforms the previous model for the task of relation extraction on tabular data. We perform comprehensive error analyses and ablation study to show the contribution of various components of our model. Finally, we discuss the usefulness and trade-offs of our approach, and provide suggestions for fostering further research.
§ INTRODUCTION
Knowledge graphs (KG) are important lexical resources for various applications involving natural language, such as web searches, question answering, etc. However, KGs quickly become incomplete as the world changes. Therefore, adding new facts to a KG is crucial for maintaining its relevance. Relation extraction (RE) is the task of extracting relations between two entities in a piece of text. RE has been widely used as a way of KG completion.
Although there is a plethora of work in relation extraction, most methods process continuous free-form text (e.g., complete sentences) mentioning entities, leaving out other important data sources such as tables.
Unlike previous works that used neural networks on continuous text <cit.>, we focus on extracting relations from tabular data.
We use a neural model for our analysis as neural methods have been shown to outperform traditional RE approaches that require feature engineering; <cit.> give a recent review of neural methods in relation extraction.
The model extracts relations between a pair of entities in different columns inside a table and, for encyclopedic and biographical articles, between the subject of the article and an entity in a table inside that article.
The model uses a combination of convolutions and memory networks to automatically extract useful features and model dependencies among features, respectively. We show that our approach consistently outperforms and makes fewer errors than a previous model.
Our main contributions are as follows.
* We outperform a state-of-the-art neural model for extracting relations from table data.
* We perform a comprehensive error analysis to highlight the cost of model parameters for a comparable performance gain.
* We analyze the model performance for individual relations and investigate the strengths and limitations of the proposed method.
All of our code is provided in this repository: <https://github.com/simpleParadox/RE_656>
§ RELATED WORK
Most prior works have mainly focused on sentence-level RE where deep neural networks have been used to assign relations for a pair of entities <cit.>. Recent works have also moved the research direction from sentence level to document level RE to utilize richer information in documents and perform relation extraction across sentences. For document-level relation extraction, recent works have also used techniques such as constructing a document-level graph using dependency trees, coreference information, rule-based heuristics, and Graph Convolutional Networks (GCN) <cit.> for reasoning and predicting relations. As evident, RE from continuous text is explored widely, but only a few papers have addressed the task of RE from data that is non-free form, such as data organized into tables <cit.>.
We need features that accurately describe the input data for the relation classification task. These features can be manually created or automatically learned from the input. Earlier work used manual feature-engineering techniques and traditional machine-learning models to extract relations in the form of Resource Description Framework (RDF) triples from tabular data. Although that method achieved an F1-score of 79.40%, it required complicated manual feature engineering. On the contrary, most recent works avoid manual feature engineering by using end-to-end deep learning techniques, and we follow a similar motivation in using neural models to automate feature extraction for relation classification.
The most notable work related to ours looked at extracting relations between a given pair of entities in Wikipedia tables. The authors used embeddings from BERT <cit.> and a simple neural network with one LSTM unit to classify relations. Although highly effective, we found the method too simplistic to properly capture many relations.
We show that a more sophisticated model involving convolutions and bidirectional-LSTM may be a better approach for the task of classifying relations for entity pairs from tabular data.
The choice of convolution networks here is justified by the many previous works showing that CNNs perform significantly better than traditional feature-based methods for relation extraction. Each instance in our data is composed of multiple components such as table headers, table caption, section title containing the table etc. A CNN will automatically learn the useful features, and then finally, max-pooling merges them to perform predictions globally. Previous works such as <cit.> introduced the convolutional architecture with piecewise max pooling (PCNN) to capture structural information between entities and adopted multi-instance learning into PCNN for a dataset that was built using distant supervision <cit.>. They divided the input sentence into three segments and applied a max-pooling operation on each segment instead of the entire sentence. Secondly, <cit.> used a CNN model for an RE task with sentence-level attention for multi-instance learning, where the model used informative sentences and de-emphasized noisy samples. Finally, <cit.> proposed a novel framework that uses separate head-tail convolution and pooling to encode input sentences and classified relations from coarse to fine to filter out negative instances. Therefore, the papers mentioned above have shown the effectiveness of CNN for automatically learning features from sentences.
Hybrid neural models have also been shown to perform well in RE tasks. <cit.> introduced a hybrid neural network (NN) that consists of a bidirectional encoder-decoder LSTM module (BILSTM-ED) for named entity recognition and a CNN module for relation classification. Initially, they used BILSTM-ED to capture context and then fed obtained contextual information to the CNN module to improve relation classification. Furthermore, an encoder-decoder-based CNN+LSTM approach has been presented by <cit.> for distant supervised RE. Their CNN encoder captured sentence features from a bag of sentences and merged them into a bag representation, and the LSTM decoder predicted relations sequentially by modelling relations' dependencies. As hybrid networks have shown their utility for the RE task, we utilize a hybrid architecture for relation classification from tabular data.
The utility of BiLSTM is also evident in tackling the task of RE. <cit.> proposed an end-to-end recurrent neural model incorporating an entity-aware attention mechanism with latent entity typing. They applied BiLSTM to build recurrent neural architecture to encode the context of the sentence. We also include a BiLSTM as a component of our model since it has been shown to perform well on RE tasks by modelling contextual information and leveraging long-term dependencies.
§ METHODS
Here, we describe our task and our model in detail.
§.§ Task
The task is to extract relations between a pair of entities in which one or both appear inside a table. This task has been studied in the context of Wikipedia, so we use that encyclopedia in our discussion for clarity. Recall that each Wikipedia article is about a single entity, which is called the (entity) subject of that article. Our task is then to find relations either between a pair of entities appearing on the same row (but different columns) of a table inside an article, or between an entity appearing inside a table and the subject entity of the article.
For example, consider a table from the Wikipedia article “Nishan-e-Haider” shown in Figure <ref>. Each entity under the “name of the recipient” column (“Raja Muhammad Sarwar") is a recipient of the award “Nishan-e-Haider”[<https://en.wikipedia.org/wiki/Nishan-e-Haider>].
Therefore, the article subject has a relation (award-nominee) with the recipient entity in the table cell. Furthermore, elements of the article besides table cell values, like a column header (“Name of the recipient”), table section title, and caption (“Recipients") provide additional contextual information to identify the relation “award-nominee” between corresponding entity pairs.
§.§ Embeddings
Before training our model, we obtain vector representations of our input. For each table in the dataset, we tokenize the table cell values representing the subject and object entities. We also use contextual information from the table, including the title of the section containing the table and table headers and captions (if present).
In addition, we use the subject and object column indices to obtain related entity pairs for a table row. We do not use the table section paragraphs as <cit.> found no gain in performance by including them.
We concatenate the entity pairs and the contextual information to obtain a training sample for a given relation. We then preprocess the sample and remove all non-alphanumeric characters (e.g. <SEP> token, brackets []) using Python’s module. Then we use the pretrained BERT tokenizer[<https://github.com/google-research/bert/blob/master/tokenization.py>] based on the WordPiece to tokenize the inputs. To obtain a vector representation of the concatenated input, we use HuggingFace's implementation of BERT (base_uncased) <cit.> pretrained on Wikipedia and BookCorpus and trained in an uncased fashion. We set the max length of the input to consist of 80 tokens, compared to the previous work by <cit.>, which used 50 tokens. We retrieve a 768-dimensional word embedding for each token and then concatenate all the embeddings to represent the sample. We used BERT embeddings because they have been shown to perform well in various NLP tasks <cit.>.
Moreover, we use contextual clues for tables for relation extraction which justifies the use of contextual word embeddings.
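A minimal sketch of this embedding step with the HuggingFace transformers library is shown below; the example string is invented for illustration, and the padding/truncation choices are assumptions consistent with the 80-token limit described above:

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
bert.eval()

# Concatenated entity pair plus table context (illustrative string, not taken from the dataset)
sample = "Nishan-e-Haider Raja Muhammad Sarwar Recipients Name of the recipient"

encoded = tokenizer(sample, padding="max_length", truncation=True,
                    max_length=80, return_tensors="pt")
with torch.no_grad():
    output = bert(**encoded)
embeddings = output.last_hidden_state  # shape: (1, 80, 768), one 768-d vector per token
```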
§.§ Convolutional Neural Network
As customary <cit.>, we fed the instance embeddings to a convolutional layer, as it is capable of merging all the local features in input sentences. Since we are considering all surrounding information around the table, important information can appear anywhere in the input sequence. Therefore, it is necessary to leverage all local features and contextual clues in the input samples. Convolution involves a dot product of the weight matrix with every k-gram in the sequence S to obtain the latent feature C^(i), as shown in equation <ref>, where W_c^(i) ∈ ℝ^{k × d} denotes the i-th convolutional filter, k is the context window size of the learnable filter, and b^(i) is the bias term.
To ensure input dimensions are consistent, we padded with zeros evenly to the left and right of the input sequence. Moreover, we employed 8 filters in the convolution process to learn different features. We applied the ReLU non-linear activation to the output for incorporating non-linearity.
C^(i) = W_c^(i)× S_l:l+k-1 + b^(i)
Finally, we used max-pooling to preserve the most prominent features derived from each filter, which is defined in the following equation. The max-pooling operation combines all local features to obtain a fixed-size representation of each input sentence.
C^(i)_max = max{C^(i)}
§.§ Long-Short-Term-Memory Network
We have used bidirectional long short-term memory networks (BiLSTM) because both earlier and later information can be considered for sequentially modeling contextual information in forward and reverse order. Moreover, LSTM models were successfully applied for relation extraction tasks <cit.> as it uses memory blocks to capture long-term temporal dependencies. <cit.> also achieved high performance by using LSTMs to predict relations between pairs of entities in Wikipedia tables. Inspired by their work, we have experimented with BiLSTM to observe any performance increment.
We use BiLSTM to capture interactions among hidden representations obtained from the pooling layer. So, the input to the BiLSTM layer is a sequence obtained from the previous layer C_max = {c_1, c_2, …, c_n}. Here, n indicates half of the maximum token length preserved after downsampling the convolutional output representation using the max-pooling operation.
h_t^fwd = ForwardLSTM(c_t, h_{t-1}^fwd)
h_t^bwd = BackwardLSTM(c_t, h_{t-1}^bwd)
x_t = [h_t^fwd ; h_t^bwd]
The BiLSTM consists of two sub-LSTM networks: a forward LSTM and a backward LSTM for modeling dependencies in forward and backward order, respectively. h_t^fwd and h_t^bwd are the outputs computed at the t-th time step by the forward and backward LSTM. We then concatenate the hidden states h_t^fwd and h_t^bwd to obtain the final hidden representation x_t.
§.§ Dropout
We use dropout at the BiLSTM layer for regularization to prevent overfitting. Dropout randomly turns off a fraction of hidden units during the forward pass. It ensures that hidden units identify features independently of each other rather than co-adapting, enabling the model to learn a more general representation.
§.§ Classification Layer
We feed the output of the LSTM/BiLSTM layer into a fully connected layer. We then take the output of the fully connected layer and apply a softmax function to obtain the probability for each class.
z_k = W × X
ŷ = softmax(z_k)
where X is the output of the LSTM/BiLSTM layer. We show the architecture of our proposed model in Figure <ref>.
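Putting the convolution, pooling, BiLSTM, dropout, and classification layers described above together, a Keras sketch of the classifier could look as follows; the kernel size, dropout rate, and optimizer are our own assumptions, while the 8 filters, 8 BiLSTM units, the pooling that halves the token length, the 80x768 BERT inputs, the 29 relation classes, and the sparse categorical cross-entropy loss follow the text:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_cnn_bilstm(max_len=80, emb_dim=768, n_classes=29,
                     n_filters=8, kernel_size=3, lstm_units=8, dropout_rate=0.5):
    # Input: pre-computed BERT token embeddings for one sample
    inputs = tf.keras.Input(shape=(max_len, emb_dim))
    # Convolution over the token dimension, zero-padded so the sequence length is preserved
    x = layers.Conv1D(n_filters, kernel_size, padding="same", activation="relu")(inputs)
    # Max-pooling keeps half of the token positions
    x = layers.MaxPooling1D(pool_size=2)(x)
    # Bidirectional LSTM; forward and backward final states are concatenated
    x = layers.Bidirectional(layers.LSTM(lstm_units))(x)
    x = layers.Dropout(dropout_rate)(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```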
§ EXPERIMENTS
§.§ Dataset
We use the data from <cit.> in all of our experiments. The dataset contains individual JSON files for each relation. These JSON files were obtained from a Wikidata dump from March 2019. We used the subject and object column indexes present in the dataset to retrieve the subject and object entity pairs from Wikipedia articles. These subject and object entities indicate related entity pairs in the same row of a table, or the article subject and an associated table cell value. Moreover, the dataset also includes table information such as the title of the table section, the table caption and headers, and the table section paragraph. To the best of our knowledge, this is the most recent and the largest dataset created specifically for the task of RE on tabular data.
The dataset was annotated using distant supervision by aligning Freebase entities with mentions of pairs of entities appearing in the table row or article subject and table cell value. The dataset contains 217,834 tables and 29 relations (28 relation types and one none relation). The dataset is highly imbalanced, with some relation classes having less than 500 examples. This results in a long-tailed dataset. We do not remove these long-tailed relations.
§.§ Model Training and Evaluation
To train and evaluate our model, we split the dataset into train and test splits. We follow the configurations used by <cit.>, where 40% of the data was used for training the model, 40% for validation (for hyperparameter tuning), and 20% for testing. We use five seeds to obtain train, validation, and test splits and report our results which is the average over the five seeds. We use sparse categorical cross-entropy loss[<https://www.tensorflow.org/api_docs/python/tf/keras/losses/SparseCategoricalCrossentropy>] to train the model.
We used one Nvidia A100 GPU (40GB Memory) for model training.
§.§ Comparison with Baseline Model
We use the neural relation extraction model proposed by <cit.>, consisting of a single LSTM unit, as the baseline. In order to have a fair comparison with this baseline, we use F1 and accuracy to measure the performance of our model. We trained the model for forty epochs, as suggested by the baseline authors. We summarize the number of training parameters of our model and compare it to that of the baseline in Table <ref>. We also performed an ablation study in which we removed the convolutional layer and investigated the performance on the task for the BiLSTM model only. We show the differences between the hyperparameters of our model and the baseline model in Table <ref>.
§ RESULTS
We show the results in Table <ref>. For relation extraction on tabular data, the previous best model was proposed by <cit.>. Although the performance of the baseline model is significantly high, it may benefit from leveraging automated feature extraction methods, such as using a CNN to extract features. We also add more LSTM units to increase the learning capability of the model. We refer to the upgraded model as CNN+LSTM or CNN+BiLSTM (based on whether we use LSTM or BiLSTM). As we see in Table <ref>, both CNN+LSTM and CNN+BiLSTM outperform the baseline model and are the current state-of-the-art model for relation extraction on tabular data. The accuracy of the CNN+LSTM model is 5.57% points higher, and the accuracy of the CNN+BiLSTM model is 5.8% points higher than the baseline. A higher accuracy will result in more accurately assigning a relation class to an entity pair.
We believe that our model performed better because we used 8 BiLSTM units for capturing context and learning dependencies, and 8 CNN filters as a feature extractor. In contrast, <cit.> used only a single LSTM unit for modeling dependencies among input tokens. In comparison to the baseline method that used a maximum token length of 50, we used a maximum token length of 80 to capture more information for each instance. Furthermore, we use dropout that benefits the model, preventing overfitting and ensuring generalizability.
Interestingly, our model was not able to outperform the baseline in terms of F1 score but was still able to provide comparable performance of around 92.46%. Although a model with better performance will lead to improvements in downstream tasks, for applications such as building knowledge graphs, the performance achieved by our model is sufficient.
§.§ Ablation Study
To understand the effectiveness of the convolution layer, we perform an ablation study. We perform the relation extraction on the dataset without using the CNN module, which we refer to as the BiLSTM-only model (with 8 units). The number of training parameters is shown in Table <ref>. Interestingly, removing the CNN module improves the performance on the task by 6.19 percentage points over the baseline. This improvement is likely due to the increase in the number of trainable parameters to over twice that of the CNN+LSTM model. This increase in the number of trainable parameters also leads to a more complex model. Such a result reinforces the prevalent idea that increasing the number of parameters helps the model learn more information from the data. However, this comes at the cost of requiring more computing resources.
§.§ Performance vs. Parameters Tradeoff
For the dataset, a combination of convolution and memory networks performs better for the relation classification task. The number of trainable parameters for CNN+LSTM is almost ten times that of the baseline model. Although the cost of training increases, this increment in the number of parameters leads to more information being learned by the deep learning model, which results in better performance over the baseline. Moreover, the CNN+BiLSTM outperforms the CNN+LSTM model as it holds the capacity to learn more information from the data due to more trainable parameters in the BiLSTM (10,000 parameters more than CNN+LSTM model). In addition, BiLSTM equips the model with the capability of learning context in both forward and reverse order. In fact, when we train models by increasing the number of parameters, the classification accuracy increases. However, the F1 score does not follow a similar trend. Our model has a comparable F1 score which should be sufficient for relation extractions, although the baseline model performs better in terms of F1 score. As model complexity increases, so do the resources required for training the model. Compared to the baseline model, which has only 4,559 trainable parameters, our proposed model has a much higher number of parameters, significantly increasing training time.
Although we do not investigate avenues of model interpretability in this work, models with more parameters generally tend to be less interpretable than models with fewer parameters. These factors should be considered when designing models for any task. Keeping this in mind, we used a max pooling layer after the CNN model to reduce the number of trainable parameters compared to the BiLSTM model without significant loss in generalizable performance.
As the CNN+LSTM/BiLSTM model has a higher performance, this will directly translate into more relations being accurately added to an existing knowledge graph.
Our model also converges faster than the baseline model (outperforming the previous model in terms of accuracy in about five epochs). This performance increase is likely due to the complexity of the model and more trainable parameters.
From the ablation study in section <ref>, we observe that using just the BiLSTM model leads to performance gain over the CNN+BiLSTM model. However, the slight performance gain of 0.39% points in accuracy and 1.89% points in F1 score comes with the cost of a significant increase in the number of trainable parameters (36,472 more parameters than CNN+BiLSTM). This BiLSTM-only model leads to higher training time and a less interpretable architecture. Therefore, considering the computing cost and performance trade-off, we advocate for the CNN+BiLSTM for extracting relations from tabular data as a balance between the two extremes.
Fine-tuning BERT may also be beneficial for our task, as fine-tuning approaches for language models have been shown to benefit the task at hand <cit.>. However, fine-tuning can be extremely computationally expensive and may be impractical for scenarios where time is of importance. Moreover, fine-tuning BERT results in an increase in the number of trainable parameters, thus increasing the complexity of the model. Although fine-tuning may be beneficial for relation extraction, we used the embeddings from the pre-trained model in the interest of training and computation time.
§.§ Difficult Relations
We also wanted to investigate our model's ability to distinguish between difficult relations. We show a confusion matrix in Figure <ref> that depicts the accuracy of our proposed model for all the relation classes (we chose the model for the best-performing seed value). A few semantically similar relation types are among the most confusing examples for the model. This is likely because such relations are very similar to each other, making it difficult for the model to distinguish one from the other. One may choose to provide extra information from the Wikipedia article or the table to the model for a better understanding of the relations. More research is required to explore this idea. As model complexity increases, so does the performance, leading to a better ability to distinguish between relations. However, this may not directly translate to high classification accuracy for difficult relations. A worthwhile direction to explore would be to design intelligent model training strategies that focus specifically on difficult relations without compromising performance on the rest of the classes.
§ CONCLUSION AND FUTURE WORK
In this work, we proposed a neural method that uses a combination of convolution and memory networks to extract relations from Wikipedia tables, which we evaluate on a benchmark dataset. We also showed that combining convolution and max pooling helps to learn more about the data without a significant increase in the number of training parameters. We analyze our results and discuss the trade-off between the number of training parameters and model performance. Finally, we show how our model performs on relations that are deemed to be difficult to distinguish between and suggest some possible improvements for such cases.
We also conducted an ablation study to show the usefulness of the CNN layer. An extension of the ablation approach would be to remove certain input fields, like table cell values, headers, and captions, to evaluate model performance. An impactful idea in the space of relation extraction is the usage of the attention mechanism. Using the attention mechanism to identify tokens in the input that better represent a relation is a promising approach that may significantly improve tabular relation extraction. We also highlight the trade-offs between parameters and the performance of the model as a first step toward probing relation extraction models. As neural network models grow larger with more training parameters, interpretability becomes crucial, and it becomes even more important to provide explanations about the inner workings of the model. In the future, we want to use sophisticated tools such as LIME <cit.> and SHAP <cit.> to explain how complex relation extraction models understand the input and classify it into the correct categories.
|
http://arxiv.org/abs/2307.05592v1 | 20230710180717 | Functional PCA and Deep Neural Networks-based Bayesian Inverse Uncertainty Quantification with Transient Experimental Data | [
"Ziyu Xie",
"Mahmoud Yaseen",
"Xu Wu"
] | stat.ML | [
"stat.ML",
"cs.LG"
] |
Ziyu Xie, Mahmoud Yaseen, Xu Wu (corresponding author, [email protected])
Department of Nuclear Engineering, North Carolina State University
Burlington Engineering Laboratories, 2500 Stinson Drive, Raleigh, NC 27695
Inverse UQ is the process to inversely quantify the model input uncertainties based on experimental data. This work focuses on developing an inverse UQ process for time-dependent responses, using dimensionality reduction by functional principal component analysis (PCA) and deep neural network (DNN)-based surrogate models. The demonstration is based on the inverse UQ of TRACE physical model parameters using the FEBA transient experimental data. The measurement data is time-dependent peak cladding temperature (PCT). Since the quantity-of-interest (QoI) is time-dependent that corresponds to infinite-dimensional responses, PCA is used to reduce the QoI dimension while preserving the transient profile of the PCT, in order to make the inverse UQ process more efficient. However, conventional PCA applied directly to the PCT time series profiles can hardly represent the data precisely due to the sudden temperature drop at the time of quenching. As a result, a functional alignment method is used to separate the phase and amplitude information of the transient PCT profiles before dimensionality reduction. DNNs are then trained using PC scores from functional PCA to build surrogate models of TRACE in order to reduce the computational cost in Markov Chain Monte Carlo sampling. Bayesian neural networks are used to estimate the uncertainties of DNN surrogate model predictions. In this study, we compared four different inverse UQ processes with different dimensionality reduction methods and surrogate models. The proposed approach shows an improvement in reducing the dimension of the TRACE transient simulations, and the forward propagation of inverse UQ results has a better agreement with the experimental data.
Bayesian inverse UQ; Functional alignment; Functional PCA; Neural networks
§ INTRODUCTION
Uncertainty Quantification (UQ) is the process to quantify the uncertainties in Quantity-of-Interest (QoIs) by propagating the uncertainties in input parameters through a computer model. In the field of nuclear engineering, most UQ research has focused on forward UQ (FUQ), which involves propagating input uncertainties through computational models to quantify uncertainties in QoIs. However, in FUQ, the input parameter uncertainties are often user-defined or based on subjective expert opinion, which lacks mathematical rigor and can introduce inaccuracies. To address this issue, inverse UQ (IUQ) has been developed in order to quantify input uncertainties based on experimental data. IUQ research in nuclear engineering primarily relies on statistical analysis and the developed methods can be categorized into three groups <cit.>: frequentist (deterministic) <cit.> <cit.> <cit.> <cit.> <cit.> <cit.>, Bayesian (probabilistic) <cit.> <cit.> <cit.> <cit.> <cit.> <cit.> <cit.>, and empirical (design-of-experiments) <cit.> <cit.> <cit.> <cit.>. These methods compare computational simulations with experimental data to estimate the uncertainties of the model input parameters. Frequentist IUQ gives the most likely input parameters that can reproduce the experimental data. Bayesian IUQ quantifies the uncertainties of the input parameters by reducing the disagreement between simulation and experimental data. Empirical IUQ seeks a range of input values based on which the model predictions can envelop the measurement data. See <cit.> for a more detailed review and comparison of these approaches.
In addition, there has been a growing interest in IUQ research over the past decade in the nuclear engineering area. For example, multiple international activities have been undertaken to develop and evaluate the effectiveness of IUQ methods. In fact, many of the IUQ methods mentioned above are developed and/or improved within these international projects. Notable among these is the Post-BEMUSE Reflood Models Input Uncertainty Methods (PREMIUM) <cit.> benchmark, which focuses on core reflood problems and employs Flooding Experiments with Blocked Arrays (FEBA) tests to quantify and validate input uncertainties in system thermal-hydraulics (TH) models. The OECD/NEA has also performed two follow-up projects: the Systematic Approach for Input Uncertainty Quantification Methodology (SAPIUM) <cit.> and Application Tests for Realization of Inverse Uncertainty Quantification and Validation Methodologies in thermal-hydraulics (ATRIUM) <cit.> (ongoing). These projects aim to develop a systematic approach for quantifying and validating the uncertainty of physical models in system TH codes. In this paper, we will focus on the Bayesian IUQ method. Several improvements will be developed and implemented based on our previous work on the modular Bayesian approach <cit.> <cit.>.
In Bayesian IUQ, Markov Chain Monte Carlo (MCMC) methods <cit.> are usually utilized to explore the posterior distributions of input parameters. MCMC generates samples that follow a probability density proportional to the parameter posterior distribution. However, a typical MCMC algorithm often requires more than 10,000 samples to reach a converged solution. This can be computationally expensive, especially for nuclear Thermal-Hydraulic (TH) system codes. To address this challenge, surrogate models can be employed to significantly reduce the computational cost. Surrogate models give an approximation of the relation of the input and output of the original computer model (also called full model), and they require only a limited number of full model runs for the training process. Some machine learning (ML) methods such as Gaussian process (GP) and deep neural network (DNN) have been widely used as surrogate models.
The application of surrogate models to replace the original computational models introduces an additional source of uncertainty, referred to as the code or interpolation uncertainty <cit.> <cit.> in literature. Conventional DNN-based surrogate models give deterministic predictions of the QoIs for given inputs. Consequently, when used as surrogates, DNNs do not provide estimation of the code/interpolation uncertainty directly. To capture the approximation uncertainty introduced by using DNN-based surrogate models, in this study we will implement the Bayesian inference method for UQ of DNNs. Specifically, Bayesian neural networks (BNNs) are trained as surrogate models of the full model. A BNN is a neural network with distributions over parameters. In BNNs, a prior distribution is specified upon the parameters (weights, bias) of neural networks and then, given the training data, the posterior distributions over the parameters are computed, which are used to quantify predictive uncertainty. Our previous work <cit.> benchmarked three methods, namely, Monte Carlo dropout, deep ensembles, and BNN to estimate the prediction/approximation uncertainties of DNNs. In another study <cit.>, these methods were applied to time series data derived from TRACE simulations of the FEBA experiments. In this work, the quantified DNN prediction uncertainties, which are essentially the code/interpolation uncertainties when using DNN as surrogate models, will be incorporated into the Bayesian IUQ process.
When performing IUQ for transient problems, the responses typically exhibit time-dependence, resulting in high-dimensional and highly-correlated data. Such high dimensionality and correlation can lead to challenges for surrogate modeling techniques such as GP <cit.> and DNN <cit.>. To overcome this challenge, dimensionality reduction methods such as principal component analysis (PCA) are often employed, and they have shown successful applications in nuclear engineering. In a study by Wu et al. <cit.>, the dimensionality of a time-dependent fission gas release model was reduced using PCA. The experimental data was transferred into the Principal Component (PC) subspace within a Bayesian IUQ framework. Similarly, Roma et al. <cit.> also utilized PCA in an IUQ study. It's worth noting that PCA has been employed in other areas, such as global sensitivity analysis <cit.>.
However, conventional PCA may not accurately represent time series data when the transient profiles contain important phase and magnitude information that need to be preserved simultaneously. The standard PCA technique is applied to centered data, which may smooth out the phase and magnitude information of the dataset. To address this limitation, a functional PCA method has been developed, specifically designed to handle time series data and preserve both phase and amplitude information <cit.> <cit.>. By separating the phase and amplitude information of transient data, the functional PCA method overcomes the challenges posed by conventional PCA, enabling more accurate representation and preservation of the essential features in time series data. This functional PCA approach has shown successful applications in the field of TH system code predictions <cit.>, demonstrating significant improvements in dimensionality reduction performance.
This work focuses on developing a Bayesian IUQ process for time-dependent QoIs, with a demonstration example using the FEBA experimental data and the TRACE computer model. We implemented and compared four different Bayesian IUQ processes: (1) conventional PCA with a GP surrogate model as a reference solution, as it is one of the most widely used approaches in the literature for transient data <cit.> <cit.>, (2) conventional PCA with a DNN surrogate model without code uncertainty, (3) functional PCA with a DNN surrogate model without code uncertainty, and (4) functional PCA with a DNN surrogate model with code uncertainty through implementation of BNN. Previously, relevant work has been performed using FEBA experimental data in Bayesian IUQ <cit.>, but without considering the surrogate model uncertainty. The contribution and novelty of this work can be summarized as: (i) implementation of functional PCA for time-dependent QoIs that contain important phase and magnitude information to be preserved during dimensionality reduction, (ii) surrogate modeling with DNNs, while accounting for the code/interpolation uncertainties through BNNs solved with variational inference, and (iii) a comprehensive and systematic investigation of four Bayesian IUQ processes to study the influence of GP/DNN as surrogate models, conventional vs. functional PCA, as well as the code/interpolation uncertainties introduced by surrogate models.
A complete IUQ study usually includes sensitivity analysis to select the most influential calibrated parameters <cit.>, as well as FUQ and validation to test the IUQ results. In this study, we leveraged the sensitivity analysis study in our previous work <cit.>. It has been found that functional PCA improves the dimensionality reduction by a better reconstruction quality using only a few PCs. Using the functional PCA and DNN-based surrogate models while accounting for the code/interpolation uncertainty leads to the best Bayesian IUQ results. Forward propagation of the IUQ results and validation using experimental data not seen in IUQ have shown that the proposed approach has the best agreement with experimental data when compared with the other IUQ methods.
The rest of the paper is arranged as follows. Section <ref> gives an introduction to the FEBA experiment and the TRACE computer model. Section <ref> introduces the PCA method with functional alignment, the method used for UQ of DNN model and the Bayesian IUQ methods. Section <ref> presents the results for functional PCA, surrogate modeling, various IUQ methods, forward propagation of the IUQ results and validation. Section <ref> concludes the paper and discusses the future work.
§ PROBLEM DEFINITION
In the 1980s, the Karlsruhe Institute of Technology carried out a series of experiments known as FEBA <cit.> <cit.> to improve the understanding of heat transfer during reflooding. The experiment facility consisted of a full height 5 × 5 bundle of pressurized water reactor rod simulators, with a heater that provided a cosine power profile over the height of the rod, which is shown in Figure <ref> (a). The length of the rod was 4 m, with a heated length of 3.9 m, and the cladding temperature was measured at eight different elevations. This paper focuses on FEBA test series 1 test number 216, which is the baseline test with no flow blockage and undisturbed bundle geometry containing all grid spacers. The experimental conditions are: water flooding velocity of 3.8 cm/s, system pressure of 4.1 bars, feedwater temperatures of 48°C in the first 30s and 37°C at the end respectively, and power starting at 200 kW and decay heat transient corresponding 120% of ANS Standard about 40 seconds after reactor shutdown <cit.>. The reason for selecting this test is that it was well studied in the PREMIUM benchmark <cit.>. The TRACE (v5.0p5) <cit.> system TH code is used to simulate the experiments based on the given initial and boundary conditions in FEBA test 216. Figure <ref> shows the model built for TRACE simulation in this work and a typical peak cladding temperature (PCT) time series profile from TRACE simulation.
For FEBA test 216, the time-dependent PCTs were measured at 8 different axial positions over the bundle. In this specific problem, we choose the experimental data at axial position z = 2225 mm for IUQ and data at other 2 axial positions for validation (z = 1135 mm and z = 3315 mm). The QoI is the whole transient PCT profile, which contains major phase and magnitude information including the maximum PCT (T_max), time to reach the maximum PCT (t_max), and the time of quenching (t_quench).
In the TRACE model of the FEBA experiment, 36 uncertain physical model parameters in UQ section of TRACE system code <cit.> were initially considered. However, not all of these parameters are significant to the QoIs. To reduce the input dimension by identifying the significant physical model parameters, our previous research <cit.> performed a global sensitivity analysis study, resulting in the selection of four physical model parameters that are significant to the QoIs. These parameters are multiplicative factors that can be perturbed in the TRACE input deck, and their nominal values are 1.0, as shown in Table <ref>. These four physical model parameters will be considered as calibration parameters in the IUQ study. Uninformative uniform distributions were chosen for the priors in the range of [0,5]. The objective of IUQ is to determine the posterior distributions of these calibration parameters based on the chosen experimental data, such that the agreement between TRACE simulation and the FEBA experimental data can be improved. Furthermore, the quantified posterior uncertainties in these physical model parameters are expected to result in better TRACE prediction of experimental tests whose data is not used in the IUQ process.
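As a sketch of how such prior samples can be drawn before running TRACE, the snippet below uses Latin-hypercube sampling of the four multiplicative factors on their uniform [0, 5] priors; the use of scipy's quasi-Monte Carlo module and the fixed seed are our own assumptions, while the sample size of 500 matches the surrogate training set described later:

```python
import numpy as np
from scipy.stats import qmc

n_params, n_samples = 4, 500
lower, upper = np.zeros(n_params), 5.0 * np.ones(n_params)   # uniform priors on [0, 5]

sampler = qmc.LatinHypercube(d=n_params, seed=0)
unit_samples = sampler.random(n=n_samples)                    # LHS samples in [0, 1]^4
multipliers = qmc.scale(unit_samples, lower, upper)
# Each row of `multipliers` perturbs the four TRACE physical model parameters in the input deck.
```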
§ METHODOLOGIES
§.§ Principal Component Analysis
Since the QoIs in this problem are time-dependent, which corresponds to an infinite-dimension response, one may pick the PCT values at many time points to adequately represent the time series evolution. This, however, will result in high-dimensional outputs that are also highly correlated. Because a surrogate model for TRACE has to be used in order to reduce the computational cost in MCMC sampling, it is impractical and computationally expensive to create separate surrogate models for all the outputs. To address this challenge, PCA, an unsupervised ML method, is usually employed to reduce the dimensionality of high-dimensional correlated data. PCA is a statistical procedure that uses an orthogonal transformation to convert possibly correlated data into a set of linearly uncorrelated variables. The resulting PCs are orthogonal to each other and the corresponding PC scores are treated as values of new variables, whose number is much smaller than the number of original variables. Furthermore, the limited number of selected PCs can still preserve the majority of the original data variance after dimensionality reduction.
Once TRACE is used to simulate the time-dependent PCT profile, p = 1000 points are chosen evenly from the PCT profile. Note that a series of numerical tests has shown that such a number of points is sufficient, as it gives the same PCA results with cases when much larger values for p are used. Next, N = 500 samples are generated from prior distribution of input parameters by Latin-hypercube sampling (LHS), which is listed in Table <ref>. This results in a p× N data matrix 𝐀, in which the rows represent the high-dimensional correlated outputs and the columns represent different samples. To transform the original data matrix 𝐀 into an uncorrelated set of variables, we seek to find a p× p linear transform matrix 𝐏. The linear transformation 𝐏𝐀=𝐁 will result in a new p× N data matrix 𝐁, which contains the samples of the transformed uncorrelated variables. A typical 2-dimension PCA process is shown in Figure <ref>. To find the matrix 𝐏, we use the following steps:
* Center the original data matrix 𝐀 by defining the row means as a column vector 𝐮. The centered data matrix 𝐀_centered is obtained by subtracting 𝐮 form each column of 𝐀.
* Find the singular value decomposition (SVD) of 𝐀_centered:
𝐀_centered = 𝐔Λ𝐕^⊤
Where 𝐔 is a p× p orthogonal matrix , 𝐕 is a N× N orthogonal matrix, and Λ is a p× N diagonal matrix with non-negative real numbers on the diagonal. The diagonal entries of Λ are called the singular values of 𝐀_centered and are arranged in descending order.
* Choose 𝐏 = 𝐔^⊤, then we have:
𝐏𝐀_centered = 𝐔^⊤𝐀_centered = Λ𝐕^⊤ = 𝐁
In this case, it can be proven that the new variables in the new data matrix 𝐁 are uncorrelated because its covariance matrix is diagonal. The matrix 𝐏 provides a linear transformation from the original data basis to the PC basis. The rows of 𝐏 are the PCs. The columns of 𝐁 contain the samples of the transformed variables, also called PC scores.
* Determine the reduced dimension of the PC subspace p^* which could be much smaller than p based on the total variance explained by the PC subspace, using the diagonal entries in Λ. Usually, the variances explained by the PCs decrease rapidly, only a few PCs can explain 95% to 99% of the total variance. Using a small value of p^*, define a p^*× p transformation matrix 𝐏^*
𝐏^*𝐀_centered = 𝐁^*
where 𝐁^* is a new data matrix with low-dimensional uncorrelated variables.
* To reconstruct the original PCT time series profile based on a sample 𝐛^* in the PC subspace, we use the following relation:
𝐚_centered = (𝐏^*)^⊤𝐛^*
Then the mean vector 𝐮 computed in step 1 will be added to 𝐚_centered, to obtain the original data series profile.
Through this PCA process, we can reduce the dimension of the QoIs from p = 1000 (or even more, depending on how many points are picked from the transient curve) to less than 10. If the selected PCs are used as QoIs in IUQ process, the experimental data also need to be transferred into the PC subspace. The uncertainty of experimental data also needs to be transformed in a similar way using the following relation:
Σ^*_data = 𝐏^*Σ_data(𝐏^*)^⊤
where Σ_data is a p× p matrix that includes the uncertainty of experimental data. It can be a full matrix if the correlations between the high-dimensional correlated responses are known. However, such information is usually not available so one may assume Σ_data is a diagonal matrix. From equation (<ref>) we can find that the new variance Σ^*_data in the PC subspace is usually a p^*× p^* full matrix with non-zero off-diagonal entries. This new data uncertainty matrix needs to be considered in the Bayesian IUQ process.
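The steps above translate directly into a few lines of numpy; this is a generic sketch (the 99% explained-variance target is an assumed choice):

```python
import numpy as np

def pca_fit(A, var_target=0.99):
    """A: p x N data matrix (rows = time points of the PCT profile, columns = samples)."""
    u = A.mean(axis=1, keepdims=True)                            # step 1: row means
    A_centered = A - u
    U, s, Vt = np.linalg.svd(A_centered, full_matrices=False)    # step 2: SVD
    explained = np.cumsum(s**2) / np.sum(s**2)
    p_star = int(np.searchsorted(explained, var_target)) + 1     # step 4: choose p*
    P_star = U[:, :p_star].T                                     # p* x p transformation matrix
    B_star = P_star @ A_centered                                 # PC scores
    return u, P_star, B_star

def pca_reconstruct(u, P_star, b_star):
    """Step 5: map a PC-subspace sample back to the original time grid."""
    return P_star.T @ b_star + u

def transform_data_covariance(P_star, Sigma_data):
    """Propagate the experimental data covariance into the PC subspace."""
    return P_star @ Sigma_data @ P_star.T
```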
§.§ Functional PCA
In the conventional PCA method described above, the original data matrix is centered in the first step, and the mean vector has to be added back in order to reconstruct the data. As shown in Figure <ref>, each TRACE simulated PCT profile has its own phase and magnitude information, t_max, t_quench, and T_max. Using the mean vector will “smooth out” such important information. As a result, the conventional PCA method may not be able to recover such phase and magnitude information accurately using only first few PCs, even though they explain more than 99% of the total variance. There may be non-negligible fluctuations in the reconstructed PCT profiles near the quenching time, which is shown in Figure <ref>. To solve this problem, functional alignment <cit.> <cit.> is used to separate the phase and magnitude information of the original data matrix before dimensionality reduction. The combination of functional alignment and conventional PCA will be referred to as functional PCA (fPCA) in the following.
Functional alignment aims at aligning the “landmark” points, which are t_max and t_quench in this problem, of the whole dataset to the same locations. A composite function f̃(t) = f(γ(t)) is used to adjust the original function, where γ(t) is called the warping function and f̃(t) is the warped data. The set of all warping functions γ(t), denoted Γ, has the following property:
Γ = {γ : [0,t]→ [0,t] |γ(0) = 0, γ(t) = t, γ is a monotonically increasing function}
The main problem is to find the warping functions γ(t) that can align all the functions at the landmark points. Many methods have been developed for determining γ(t) through minimizing the cost function inf _γ∈Γ ||f_1(t) - f_2(γ(t))|| <cit.> <cit.>. Here, we will introduce the square root slope function (SRSF) method <cit.> <cit.>, which uses the square root slope to represent the original function. The SRSF of the original function f(t) is defined in the following form:
q(t) = sign(ḟ(t))√(|ḟ(t)|)
If f(t) is a continuous function, then the SRSF q(t) is square-integrable. The function f(t) can be calculated using the integral f(t) = f(0)+∫_0^t q(s)| q(s)| ds, since q(s)| q(s)| = ḟ(s). If we warp a function f by γ, the SRSF of f(γ(t)) is given by:
q̃(t) = q(γ(t))√(γ̇(t))
A new cost function is defined based on the norm of two SRSFs. The warping function γ(t) is determined by minimizing this cost function.
D_y(f_1, f_2)=inf _γ∈Γ‖ q_1-(q_2∘γ) √(γ̇)‖
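The SRSF machinery above can be sketched in a few NumPy functions; a candidate warping γ(t) is assumed to be given here, whereas in practice γ is found by dynamic-programming solvers such as those in the cited references.

import numpy as np

def srsf(f, t):
    # square-root slope function q(t) = sign(f'(t)) sqrt(|f'(t)|)
    df = np.gradient(f, t)
    return np.sign(df) * np.sqrt(np.abs(df))

def warped_srsf(q, gamma, t):
    # SRSF of f(gamma(t)): q(gamma(t)) sqrt(gamma'(t))
    dgamma = np.gradient(gamma, t)
    return np.interp(gamma, t, q) * np.sqrt(np.abs(dgamma))

def alignment_cost(f1, f2, gamma, t):
    # || q1 - q2(gamma) sqrt(gamma') || evaluated on a uniform grid
    dt = t[1] - t[0]
    diff = srsf(f1, t) - warped_srsf(srsf(f2, t), gamma, t)
    return np.sqrt(np.sum(diff**2) * dt)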
After the separation of amplitude and phase information for all of the samples, the original samples are transformed into a set of warped data f̃(t) with aligned landmark points and a set of warping functions γ(t) that carry the phase information. Figure <ref> shows an example of functional alignment of a series of functions. Afterwards, conventional PCA is applied to all warped data f̃(t) and warping functions γ(t) for dimensionality reduction, so that both f̃(t) and γ(t) can be represented by the first few PCs. To reconstruct the original function f(t) using the limited number of PCs, the warped data f̃(t) and the warping function γ(t) are first reconstructed from the related PCs through inverse PCA.
Finally, the phase- and amplitude-reconstructed functions are combined through f(t) = f̃(γ^-1(t)) to recover the original function before functional alignment. Since the warping function should be monotonically increasing, a smoothing function is applied to the PCA-reconstructed γ(t) to avoid non-monotonicity issues when calculating the inverse function γ^-1(t). Note that curve registration and alignment for the FEBA benchmark has been applied in an earlier work <cit.>. The major focus of this work is building DNN-based surrogate models for the PC scores after functional alignment and applying them in IUQ and FUQ.
Figure <ref> shows the procedure of fPCA application. In this framework, surrogate models like DNN are used to represent the PC scores from phase and amplitude information, respectively. When new samples are given, the predictions of DNN surrogate models go through an “inverse fPCA” process, as discussed above, to reconstruct the original time series data.
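A simplified sketch of this inverse-fPCA step is given below; pca_f and pca_g stand for the (P^*, mean) pairs of the two PCA models, t is the time grid, and the crude monotonicity fix stands in for the smoothing function mentioned above — all of these are placeholders rather than the actual implementation.

import numpy as np

def inverse_pca(scores, P_star, mean):
    return scores @ P_star + mean

def reconstruct_profile(scores_f, scores_g, pca_f, pca_g, t):
    f_warped = inverse_pca(scores_f, *pca_f)      # amplitude part (aligned landmarks)
    gamma = inverse_pca(scores_g, *pca_g)         # phase part (warping function)
    gamma = np.maximum.accumulate(gamma)          # crude monotonicity fix
    gamma_inv = np.interp(t, gamma, t)            # numerical inverse of the monotone warping
    return np.interp(gamma_inv, t, f_warped)      # f(t) = f_warped(gamma_inv(t))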
§.§ Bayesian Neural Networks
In a standard neural network, the learnable parameters (weights and biases) are randomly initialized and take deterministic values after training. During the prediction stage, one therefore obtains a deterministic output for a given input, since the weights and biases are fixed once training is finished. In contrast, a BNN <cit.> <cit.> is a neural network in which the learnable parameters follow probability distributions. To train a BNN, prior distributions are assigned to the neural network parameters, and posterior distributions of the parameters are then computed from the training data during the training process. Figure <ref> compares a standard neural network with a BNN. Following training, the BNN is evaluated at the same input several times, each time with its parameters sampled from the posterior distributions, resulting in different values for the prediction that can be used to obtain the predictive uncertainties.
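The following sketch illustrates how a trained BNN is queried: the weights are drawn from their (here assumed Gaussian) posteriors for every forward pass, and the spread of the repeated predictions provides the uncertainty. The tiny 4-10-1 architecture and the posterior values are illustrative only.

import numpy as np

rng = np.random.default_rng(1)

# posterior means/stds of the weights (in practice obtained from variational training)
W1_mu, W1_sig = rng.normal(size=(4, 10)), 0.1 * np.ones((4, 10))
W2_mu, W2_sig = rng.normal(size=(10, 1)), 0.1 * np.ones((10, 1))
b1, b2 = np.zeros(10), np.zeros(1)        # biases kept deterministic, as in the text

def bnn_sample_predict(x):
    W1 = rng.normal(W1_mu, W1_sig)        # one posterior sample of the weights
    W2 = rng.normal(W2_mu, W2_sig)
    h = np.tanh(x @ W1 + b1)
    return (h @ W2 + b2).ravel()

x = np.array([[0.1, -0.3, 0.5, 0.0]])     # one input (the four calibration parameters)
preds = np.array([bnn_sample_predict(x) for _ in range(200)])
mean, std = preds.mean(axis=0), preds.std(axis=0)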
The inference of the posterior distributions is challenging since most DNNs nowadays have large number of parameters. To address this issue, various methods have been developed for Bayesian inference of neural networks, including sampling-based methods such as MCMC <cit.> and optimization-based methods like variational inference <cit.> <cit.>. Variational methods are advantageous because they converge faster, making them more suitable for large neural networks. In this study, we used variational inference to train the BNN. Note that we have treated the bias parameters as deterministic, as neural network predictions are less sensitive to these parameters than to weights. This is because the bias term is added to the product of weights and the activation from the previous hidden layer, and thus the impact of weights on DNN predictions is more significant than that of bias.
A probabilistic model is assumed for the BNN, in which the weights are learned by using Maximum Likelihood Estimation (MLE). The posterior weights (𝐰) are computed during training based on Bayes' rule for a given training dataset (𝒟):
P (𝐰 | 𝒟) = P (𝒟 | 𝐰) · P (𝐰)/P (𝒟)
where P (𝐰) is the prior distribution for 𝐰, which is assumed to be a certain non-informative distribution, P (𝒟 | 𝐰) is the likelihood function, and P (𝐰 | 𝒟) is the posterior distribution for 𝐰. Prior and posterior represent our knowledge of 𝐰 before and after observing 𝒟, respectively. P (𝒟) does not contain 𝐰, so it is usually treated as a normalizing constant; it is sometimes referred to as the evidence term. When making predictions at a test point 𝐱^*, the predictive distribution of the output 𝐲^* is given by:
P (𝐲^* | 𝐱^*) = 𝔼_P (𝐰 | 𝒟)[ P (𝐲^* | 𝐱^*, 𝐰) ]
where the expectation operator 𝔼_P (𝐰 | 𝒟) means we need to integrate over P (𝐰 | 𝒟). The term P (𝐲^* | 𝐱^*, 𝐰) represents the probability of the prediction at a test point 𝐱^* given the posteriors of the weights. Each possible configuration of the weights, weighted according to the posterior distribution P (𝐰 | 𝒟), makes a prediction about 𝐲^* given 𝐱^*. This is why taking an expectation under the posterior distribution on weights is equivalent to using an ensemble of an infinite number of neural networks. Unfortunately, such an expectation operation is intractable for neural networks of any practical size, due to the large number of parameters as well as the difficulty of performing the exact integration. This is the main motivation to use a variational approximation for P (𝐰 | 𝒟). Variational inference methods are a family of techniques for approximating intractable integrals arising in Bayesian inference and ML. They are used to approximate complex posterior probabilities that are difficult to evaluate directly, as an alternative strategy to MCMC sampling. A variational distribution is proposed to approximate P (𝐰 | 𝒟): it consists of a family of distributions whose parameters are optimized by minimizing the Kullback-Leibler divergence. For more mathematical and implementation details, interested readers are referred to <cit.> <cit.> <cit.>.
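For illustration only, the loss minimized in variational inference with a fully factorized Gaussian posterior and a standard-normal prior (a Bayes-by-backprop-type setup, which is an assumption here rather than the exact formulation of the cited works) can be sketched as a KL term plus the expected negative log-likelihood:

import numpy as np

def kl_gaussians(mu_q, sig_q, mu_p=0.0, sig_p=1.0):
    # KL( N(mu_q, sig_q^2) || N(mu_p, sig_p^2) ), summed over all weights
    return np.sum(np.log(sig_p / sig_q) + (sig_q**2 + (mu_q - mu_p)**2) / (2.0 * sig_p**2) - 0.5)

def gaussian_nll(y_obs, y_pred, sigma_noise=1.0):
    # negative log-likelihood of the data under a Gaussian noise model
    return 0.5 * np.sum(((y_obs - y_pred) / sigma_noise)**2 + np.log(2.0 * np.pi * sigma_noise**2))

def neg_elbo(y_obs, y_pred_samples, mu_q, sig_q):
    # Monte Carlo estimate of E_q[-log p(D|w)] plus the KL regularizer
    exp_nll = np.mean([gaussian_nll(y_obs, yp) for yp in y_pred_samples])
    return kl_gaussians(mu_q, sig_q) + exp_nll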
§.§ Bayesian Framework for IUQ
FUQ requires knowledge of computer model input uncertainties to generate the uncertainty of simulation outputs. These inputs uncertainties are often determined by “expert opinion” or “user self-evaluation”. However, such determination lacks mathematical rigor and may be subjective, which may lead to misleading FUQ results. IUQ is a method to inversely quantify the input uncertainties based on the given experiment data while keeping the simulation results consistent with the experiment data. A modular Bayesian framework for IUQ has been developed <cit.> previously and it will be used in this work. In the following we will provide a brief introduction to Bayesian IUQ.
Consider a computer model 𝐲^M( 𝐱, θ), where 𝐲^M is the model response, 𝐱 is the vector of design variables, and θ is the vector of calibration parameters. The differences of design and calibration variables have been discussed in our previous work <cit.>. Given the design variable 𝐱, the reality 𝐲^R( 𝐱) can be learned by (1) running model simulation, which involves model uncertainty δ(𝐱), and (2) performing experiments, which involves measurement uncertainty ϵ. These terms can be combined in the so-called “model updating equation” <cit.>:
𝐲^E (𝐱) = 𝐲^M( 𝐱, θ^*) + δ(𝐱) + ϵ
where δ(𝐱) is the model uncertainty/discrepancy, due to missing/incomplete physics and numerical approximation errors during the modeling process, and θ^* is the “true” but unknown value of θ. ϵ∼𝒩( 0, Σ_exp) represents the measurement/experimental uncertainty, which is assumed to be normally distributed. The model discrepancy term δ(𝐱) depends on 𝐱, which stands for design variables such as initial conditions or boundary conditions. Since only one experimental test is considered in this transient problem, we do not have enough variation in 𝐱 to learn δ(𝐱). Therefore, the model discrepancy is not considered in this study.
Based on the assumption that the experimental uncertainty is Gaussian, ϵ = 𝐲^E (𝐱) - 𝐲^M( 𝐱, θ^*) follows a multi-dimensional normal distribution. As a result, the posterior distribution p ( θ^* | 𝐲^E, 𝐲^M) can be written as:
p ( θ^* | 𝐲^E, 𝐲^M) ∝ p ( θ^*) · p ( 𝐲^E, 𝐲^M | θ^*)
∝ p ( θ^*) ·1/√(|Σ|)·exp[ - 1/2[ 𝐲^E - 𝐲^M]^⊤Σ^-1[ 𝐲^E - 𝐲^M] ]
where p ( θ^*) is the prior distribution that can be provided by user evaluation or expert opinion. p ( 𝐲^E, 𝐲^M | θ^*) is the likelihood function. Prior and posterior distributions represent our knowledge of θ before and after observation of measurement data, respectively. Σ is the covariance of the likelihood which consists of two parts:
Σ = Σ_exp + Σ_code
where Σ_exp is the experimental uncertainty due to measurement error, and Σ_code is the code/interpolation uncertainty due to the use of surrogate models to reduce the computational cost. The term Σ_code = 0 if the computer model is used directly in the IUQ process instead of the surrogate models. Note that a component for model uncertainty/discrepancy should be included in Σ_exp if possible; as discussed above, it is not considered in this work due to the very limited amount of data. To calculate the posterior distribution, an adaptive MCMC algorithm <cit.> is used to generate samples following the probability densities of the posterior distributions. To reduce the computational cost of MCMC sampling, surrogate models built with DNNs are used to represent the computer model simulations. In this work, we will compare GP and DNN as surrogate models to determine which approach leads to better IUQ results. Note that when we say “DNN-based surrogate models” with consideration of the code uncertainty Σ_code, we essentially mean BNN; a BNN is a special implementation of a DNN that accounts for the prediction uncertainty.
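The sampler can be sketched as follows. The Gaussian log-likelihood uses Σ = Σ_exp + Σ_code, the prior is a uniform box, and a plain random-walk Metropolis algorithm stands in for the adaptive MCMC of the reference; `surrogate` is an assumed callable returning the predicted PC scores and their standard deviations for a parameter vector θ.

import numpy as np

rng = np.random.default_rng(2)

def log_posterior(theta, y_exp, Sigma_exp, surrogate, lo, hi):
    if np.any(theta < lo) or np.any(theta > hi):
        return -np.inf                               # outside the uniform prior support
    y_mod, y_std = surrogate(theta)
    Sigma = Sigma_exp + np.diag(y_std**2)            # measurement + code/interpolation uncertainty
    r = y_exp - y_mod
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (r @ np.linalg.solve(Sigma, r) + logdet)

def metropolis(theta0, logpost, n_steps=25000, step=0.02):
    theta0 = np.asarray(theta0, dtype=float)
    chain, lp = [theta0], logpost(theta0)
    for _ in range(n_steps):
        prop = chain[-1] + step * rng.normal(size=len(theta0))
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis acceptance test
            chain.append(prop); lp = lp_prop
        else:
            chain.append(chain[-1])
    return np.array(chain)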
§ RESULTS AND DISCUSSIONS
In this paper, we compare four different methods for IUQ with different dimensionality reduction and surrogate modeling approaches. Table <ref> lists the details of these methods. The conventional PCA process follows Section <ref>, while the fPCA process follows Section <ref>. In Method 1, the combination of conventional PCA and GP serves as a reference solution. GP has a unique feature compared to other ML methods, which is that the mean square error (MSE), also called the variance of the prediction is directly available. Therefore, Σ_code can be easily included in the IUQ process. Compared to DNN, GP is mainly used for problems with low-dimensional features and smooth responses. The combination of fPCA and GP has been explored in <cit.>. Since the main focus of this work is to demonstrate the applicability and benefits of fPCA with DNN while accounting for Σ_code, we will not repeat this approach. Methods 2 and 3 use conventional DNNs as surrogate models, while Method 4 uses BNNs.
This section is arranged as follows. Section <ref> introduces the functional alignment results of TRACE simulation samples and gives a comparison of conventional PCA with fPCA. Section <ref> presents the results for validation of surrogate modeling and the UQ process for BNN models. Section <ref> presents the IUQ results, including the posterior distributions of the calibration parameters and their mean values and standard deviations. Section <ref> presents the validation results for the IUQ results, including FUQ for experimental tests at 3 different axial positions and comparison with model simulations based on the prior distributions and the experimental data.
§.§ Results for fPCA
To train the fast-running and accurate surrogate models for TRACE, 500 random samples are generated based on the prior distributions of 4 calibration parameters using LHS. Next, the conventional and functional PCA methods are performed to obtain the PC scores for the samples, which will be used as the training data for the surrogate models. For conventional PCA, we apply PCA to the PCT profiles directly. For fPCA, functional alignment is applied first to the PCT profiles before the PCA process.
During fPCA, the original TRACE simulation data are separated into warped data, which include the amplitude information, and warping functions, which contain the phase information. The results are shown in Figure <ref>: the time to reach the maximum PCT (t_max) and the time of quenching (t_quench) of all 500 TRACE-simulated PCT profiles are aligned to the same positions, respectively.
After functional alignment, we have two series of datasets, f(t) for warped data and γ(t) for warping functions. Both datasets have 1000 time steps, which form two 1000 × 500 matrices, whereas the original dataset has only one 1000 × 500 data matrix. PCA is then applied to both datasets. Figure <ref> shows the total variance explained by the first 10 PC scores for PCA of the warped data, the warping function, and the original TRACE simulation data without functional alignment. The fPCA process shows a significant improvement compared to the conventional PCA because we need fewer PCs to account for the 99% of the total variance. The first 2 PCs for warped data and the first 4 PCs for warping functions are chosen as the new QoIs by fPCA, which can explain over 99% of the total variance. For conventional PCA, we choose the first 4 PCs as QoIs, which explain 95% of the total variance. The first 10 PCs would have to be chosen to account for 99% of the total variance.
Figure <ref> shows the comparisons of TRACE simulation samples and reconstructed PCT profiles from PC scores with and without functional alignment. The reconstruction process means obtaining the original PCT profiles from the PC scores, as explained in Section <ref>. For the PCA method without functional alignment, we used 10 PCs, which explains the same variance as the PCs used in our fPCA study. In Figure <ref>, the reconstructed PCT profiles based on 6 PCs from fPCA show a good agreement with the TRACE simulation results, while the reconstructed PCT profiles by conventional PCA show oscillations as expected, especially near the quenching time when the PCT has a sudden drop.
§.§ Results for surrogate modeling
The QoIs, surrogate models and validation results of the four IUQ methods in Table <ref> are summarized in Table <ref>. Before training the surrogate models, all the PC scores are standardized to zero mean and unit variance and separated into three groups, 70% for training, 15% for validation and 15% for testing. All the surrogate models take the four calibration parameters in Table <ref> as inputs. For Method 1, one multi-dimensional GP model was trained with the 4 PCs as outputs. For Methods 2-4, separate DNN/BNN models were used to represent each PC. Neural network models can certainly represent multi-dimensional responses; however, it was found that the accuracy was not as good as training separate DNNs/BNNs for each PC as the response. One possible reason is that the PC scores are essentially samples of transformed, uncorrelated variables. When they are used to train a single DNN/BNN, all the layers/neurons before the output layer are shared, while only the weights from the last hidden layer to the output layer are different. This can cause the DNN/BNN to have a less satisfactory performance. We have also performed hyperparameter tuning with grid search to find optimized neural network architectures, learning rates, etc. Because the amount of training data is small for this simple problem, the DNN models only have 3 hidden layers with 10, 20, and 10 hidden neurons, and 1 output layer with one neuron to represent the PC.
Figure <ref>, <ref>, <ref> show the validation results for methods 1-3, using the testing dataset (note that the validation dataset has been used for hyperparameter tuning). Most of the surrogate models show a good prediction accuracy, with a R^2 (the predictivity coefficient) value larger than 0.95. However, the GP/DNN model for the fourth PC of conventional PCA (Figures <ref> and <ref>) and the DNN model for the fourth PC of the warping function (Figure <ref>) do not perform as well. Nevertheless, these higher order PCs are not as important as the previous ones due to their much smaller contribution to the total variance, as shown in Figure <ref>.
Figure <ref> shows the results for BNNs. In the training process using variational inference, the weight parameters are assumed to follow Gaussian distributions, whose means and variances are learned. Once a BNN is trained, it can be evaluated multiple times at the same input, each time with different samples of the weight parameters. The resulting predictions can be collected as samples of the responses, from which the mean values and variances (uncertainties) can be computed.
To quantify the uncertainty of the BNN predictions, we perform 200 predictions for each sample with different network parameters drawn from the posterior distributions of the BNN parameters. Figure <ref> presents the mean values and one standard deviations (std in the figure) compared to the test samples. It can be seen that for the majority of the test samples, either the BNN mean values are close to the true PC scores, or the true PC scores are within one standard deviation of the BNN predictions. One exception is the fourth PC of the warping function, as was the case for Method 3 in Figure <ref>. As discussed above, this is not expected to cause an issue in further analysis.
The uncertainties of the BNN predictions will be included in the Bayesian IUQ process as Σ_code. In Method 1, the variance from the GP is directly available without further computation. However, this is not true for the BNN surrogate models in Method 4, because one needs to run the BNN many times in order to obtain the prediction uncertainties, as shown in Figure <ref>. This would significantly slow down the MCMC sampling process and contradict our intention to use surrogate models. Figure <ref> shows the relationship between the BNN predictions and the corresponding standard deviations for the testing samples, which indicates approximately linear relationships in most cases. Based on this observation, we have made a simplification by fitting the standard deviations as linear functions of the BNN predictions. During MCMC sampling, the BNNs are only evaluated once for every random walk, and the uncertainties of the surrogate models (in terms of the standard deviations) are evaluated by these linear relations. Note that the test samples for the second PC of the warped data appear to form two clusters instead of a linear relationship. In this case, we simply take the centroids of the two clusters and determine the standard deviation based on which cluster the BNN prediction is closer to.
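The linear simplification amounts to a one-line fit; bnn_mean and bnn_std are assumed arrays of BNN mean predictions and standard deviations collected on the test samples.

import numpy as np

slope, intercept = np.polyfit(bnn_mean, bnn_std, deg=1)

def fast_std(prediction):
    # cheap stand-in for the sampling-based BNN uncertainty during MCMC
    return slope * prediction + intercept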
§.§ Results for IUQ
For each IUQ method, 25,000 MCMC samples were generated to explore the posterior distributions of the calibration parameters. The MCMC sampling process takes about 1-2 hours using the surrogate models, which would otherwise take a few thousand hours using the TRACE system code. The first 5,000 samples were discarded as burn-in since the MCMC chains are not converged at the beginning, as shown in Figure <ref>. Afterwards, we kept every 20th remaining sample for the purpose of “thinning”, to reduce the auto-correlation among the MCMC samples. The remaining 1,000 posterior samples were used to investigate the posterior distributions of the calibration parameters.
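In code, this post-processing reduces to slicing the raw chain returned by the sampler sketched earlier (the numbers mirror the ones quoted above):

posterior_samples = chain[5000:][::20]    # discard burn-in, keep every 20th sample: ~1,000 left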
Table <ref> and Figures <ref>, <ref>, and <ref> present the posterior distributions of the four calibration parameters obtained by all four IUQ methods. For Method 1, the posterior distributions based on the GP model show larger differences from the other methods. A potential reason could be that the prediction accuracy of the DNN-based surrogate models in Methods 2-4 is better. For parameter , which is the most sensitive parameter among the four (see reference <cit.> for more details), the mean values of the different methods are similar, which gives similar results in the PCT profile (as shown in Section <ref>). The posterior results of all four methods demonstrate a significant reduction of uncertainty compared to the prior distributions. Compared with Methods 2 and 3, the results from Method 4 have a larger uncertainty, since the code uncertainty from the BNN models is considered.
One advantage of the Bayesian IUQ method is that it can identify the correlations between different calibration parameters through the random walk of the MCMC samples, even though the prior distributions are assumed to be independent. The parameters' marginal distributions and pair-wise joint distributions are shown in Figures <ref> and <ref>. For example, for all of the four IUQ methods, and have a strong negative correlation. As a result, when generating new samples from the posterior distributions, the correlations between the calibration parameters should be considered. There are some differences in the posterior marginal/joint distributions obtained by the different IUQ methods; this is expected since inverse problems are usually ill-posed with many different solutions. To validate the IUQ results, we will determine whether the posterior distributions make the TRACE simulations more consistent with the FEBA experimental data, not only for the test case that has been used in IUQ, but also for test cases unseen during IUQ.
§.§ Results for FUQ and validation
To determine which IUQ method produces the best IUQ results, we propagated the quantified posterior distributions of the calibration parameters through the TRACE model to obtain the updated prediction uncertainties in the PCT profiles. This step can take advantage of the existing GP/DNN surrogate models that were trained during the IUQ process to reduce the computational cost of the FUQ process. We generated 1000 random samples from the joint posterior distributions obtained by each IUQ method, then used the surrogate models of each IUQ method to generate the PC scores and subsequently the reconstructed PCT profiles, as sketched below.
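A schematic of this FUQ step, reusing the assumed building blocks introduced in the earlier sketches (surrogate_f, surrogate_g, reconstruct_profile, pca_f, pca_g, and the time grid t), is:

import numpy as np

rng = np.random.default_rng(3)
idx = rng.choice(len(posterior_samples), size=1000)

profiles = []
for theta in posterior_samples[idx]:
    scores_f, _ = surrogate_f(theta)          # PCs of the warped (amplitude) data
    scores_g, _ = surrogate_g(theta)          # PCs of the warping (phase) functions
    profiles.append(reconstruct_profile(scores_f, scores_g, pca_f, pca_g, t))
profiles = np.array(profiles)

mean_pct = profiles.mean(axis=0)
lo95, hi95 = np.percentile(profiles, [2.5, 97.5], axis=0)   # 95% confidence band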
The results of the FUQ process based on both the posterior and prior distributions at axial position z = 2225 mm are shown in Figure <ref>, together with the FEBA experimental data. Note that this is only a proof-of-concept of Bayesian IUQ, rather than a valid “validation” process, because the same data has already been used in IUQ. It can be observed that: (1) the 95% confidence intervals based on the posteriors are much smaller than those based on the prior, due to reduction of uncertainty by IUQ, (2) compared with Methods 1/2 that used conventional PCA, the FUQ results of Methods 3/4 with fPCA have a better agreement with the experimental data, especially around t_max. After considering the code uncertainty using BNN surrogate models, the FUQ results of Method 4 show a larger 95% confidence interval than Method 3, which covers most of the experimental data.
To perform a more rigorous validation of the IUQ results, new experimental data that are not seen during IUQ should be used. We therefore performed FUQ and validation at two other axial positions (z = 1135 mm and 3315 mm) of FEBA experiment test 216. Since the surrogate models are not applicable to these datasets, we used TRACE to run the samples used for FUQ. We generated 300 random samples from the joint posterior distributions of the four different IUQ methods and ran the TRACE model to obtain the PCT profiles. The FUQ and validation results at the two axial positions are shown in Figures <ref> and <ref>, respectively. For all of the FUQ results, the mean values based on the posteriors show a better agreement with the experimental data. The other observations are similar to those in Figure <ref>. Methods 1/2 produce results that have a larger disagreement with the data before and around t_max, while Methods 3/4 produce results that have a slightly larger disagreement with the data around t_quench. Overall, the FUQ results from Method 4 have the largest posterior uncertainty range and the best coverage of the experimental data. This demonstrates that the combination of fPCA and DNN-based surrogate models, while accounting for the code uncertainty, has improved the Bayesian IUQ process for this transient dataset.
§ CONCLUSIONS
This paper proposed a Bayesian inverse Uncertainty Quantification (IUQ) process for time-dependent responses, using four methods with different dimensionality reduction processes and surrogate models. We proposed a framework for Bayesian IUQ that combines functional principal component analysis (PCA) and deep neural network (DNN)-based surrogate models while accounting for the code/interpolation uncertainty. Functional PCA separates the phase and amplitude information of the time series data before dimensionality reduction, which shows an improved performance over the conventional method. The use of DNN-based surrogate models has also proven to be very effective in representing the PC scores, and it significantly reduces the computational cost of Markov Chain Monte Carlo (MCMC) sampling. We also considered the code uncertainty of the surrogate models in Bayesian IUQ by adopting Bayesian neural networks (BNNs). Since the sampling-based UQ process for a BNN would increase the computational cost of the IUQ process, we estimate the BNN uncertainty with a linear regression model, exploiting the clear linear relationship between the BNN prediction and its uncertainty. The proposed approach has been applied to the peak cladding temperature in the FEBA benchmark. Forward Uncertainty Quantification (FUQ) and validation of the proposed IUQ method have demonstrated that the code simulations based on the posterior distributions have an improved agreement with the experimental data, while the uncertainty ranges can envelop the majority of the experimental data.
The primary limitation of this framework is that the model uncertainty is not considered in this IUQ study, since only one experimental test is considered. In further study, we will include the model discrepancy term that comes from the missing and inaccurate physics in the system code, in order to design a more comprehensive IUQ process, and we will seek a mathematical representation for the FEBA transient data. In addition, an IUQ method based on hierarchical Bayesian modeling can be applied, since data at different axial positions can be considered through such a model.
http://arxiv.org/abs/2307.06192v1 | 20230712143327 | Failed supernova simulations beyond black hole formation | ["Takami Kuroda", "Masaru Shibata"] | astro-ph.HE | ["astro-ph.HE"] |
We present an axisymmetric failed supernova simulation beyond black hole formation, for the first time with numerical relativity and two-moment multi energy neutrino transport.
To ensure stable numerical evolution, we use an excision method for neutrino radiation-hydrodynamics within the inner part of black hole domain.
We demonstrate that our excision method is capable to stably evolve the radiation-hydrodynamics in dynamical black hole spacetime.
As a remarkable signature of the final moment of the proto-neutron star (PNS), we find the emergence of high energy neutrinos.
These high energy neutrinos are associated with the PNS shock surface being swallowed by the central black hole and could be a possible observable of failed supernovae.
(stars:) supernovae: general – stars: black holes – neutrinos – gravitational waves
§ INTRODUCTION
Massive stellar collapse is one of the main formation channels of stellar-mass black hole (BH), whose existence was observationally substantiated through numerous coalescence events <cit.>.
Massive stars heavier than ∼8 M_⊙ undergo a catastrophic gravitational core-collapse (CC) at the end stage of their evolution.
The subsequent evolutionary path is rich in variety and determines the remnant property.
Broadly speaking, less to moderately massive stars explode as core-collapse supernova (CCSN), whereas more massive stars are prone to fail the explosion, sometimes completely and sometimes exhibiting only a feeble explosion <cit.>.
At the same time some of more massive stars are known to be accompanied by a very energetic explosion termed as hypernova <cit.>, whose explosion energy is about one order of magnitude larger than those of canonical SNe.
The CCSN explosion scenario and the mass range determining the fate are yet to be fully understood <cit.>.
It is evident, however, that unless the explosion possesses sufficient energy to expel substantial amounts of stellar mantle, the central compact remnant will ultimately acquire a mass that surpasses the maximum mass limit, above which its internal pressure cannot counteract its own self-gravitational force, thereby leading to the formation of a black hole.
The remnant property is tightly connected with its progenitor mass <cit.>.
In general, the more massive the progenitor is, the higher the probability of BH formation.
Moreover, recent parametric studies, focusing on the explodability by the standard neutrino heating mechanism, have revealed that the compactness <cit.> could potentially be a good indicator of BH formation <cit.>.
Like these, the formation of a BH is predominantly determined by the compactness of the progenitor star, along with the detailed explosion scenario (but see <cit.> for counterexamples).
There are currently numerous multi-dimensional simulations reporting a successful SN explosion <cit.>.
These studies are primarily directed towards less massive, or more precisely less compact, progenitor stars, in which the canonical neutrino heating mechanism can trigger the explosion, leaving behind a neutron star (NS).
However, there are several pieces of observational evidence for a “failed” supernova <cit.>.
These observations report the sudden disappearance of a red supergiant, suggesting that the whole progenitor star collapses and becomes a BH without a noticeable explosion.
Furthermore exceptionally low energy SNe, e.g., SN 2008ha <cit.>, were detected, which could possibly be explained by “fallback” during SN explosion <cit.>.
Should these events be gravitational collapses of massive stars, the remnants are most likely BHs, given the inferred small ejecta masses.
These observations associated possibly with a BH formation strongly motivate us to explore the failed and fallback SN scenarios.
There were, however, severe numerical difficulties in performing SN simulations in BH spacetime.
First, multi-dimensional SN simulations in general relativity (GR), for instance with numerical relativity, are still scarce, e.g., <cit.> (and its subsequent works) using the so-called conformal flatness condition (CFC) or <cit.> with a Baumgarte-Shapiro-Shibata-Nakamura (BSSN) formalism <cit.>.
Since BHs are fundamentally general relativistic objects, the formation process, namely from the onset of gravitational collapse of massive progenitor to BH formation and beyond, can be precisely followed only by numerical relativity.
Second, sophisticated neutrino transport is essential for modern SN simulations.
However, numerical relativity simulation in BH spacetime combined with sophisticated neutrino transport is currently still challenging.
To date, simulations only up to BH formation <cit.> or switching to Newtonian gravity with a large excision region (several times of the Schwarzschild radius) immediately after BH formation <cit.> are reported.
Very recently <cit.> reported the first SN simulations solving the full spatial domain above the BH, i.e., without discarding too large computational domain in the vicinity of central BH, based on the CFC metric.
The main obstacle of neutrino transport in BH spacetime, or rather immediately after BH formation, stems from the rapid change of matter field.
At the moment of BH formation, the (rest mass) density just above the BH is generally high ≳10^14 g cm^-3.
The density, however, quickly decreases to ∼10^10 g cm^-3 within a few ms concomitantly with the proto-neutron star (PNS) being swallowed by the central BH.
This indicates that the region in the vicinity of the BH rapidly shifts from optically thick to thin condition and such extreme condition makes neutrino transport with full interactions a significantly challenging subject.
In addition, the matter (and probably also radiation) field inside the BH is typically required to be “excised” for stable numerical evolution.
As of now, however, there is no established prescription for how the radiation field should be treated inside the excised region and inside the BH to ensure stable numerical evolution.
In this study, we report our first SN simulation beyond BH formation with numerical relativity and multi-energy neutrino transport.
We use an excision method for both matter and neutrino radiation fields inside a part of BH domain.
Our excision method demonstrates stable evolution immediately after BH formation as well as in the subsequent BH accretion phase.
Furthermore, we find the emergence of high energy neutrinos associated with the PNS shock surface being swallowed by the central BH, which could potentially be a probe of the very final moment of PNS.
We also show that these high energy neutrinos could be detectable by the current and next-generation neutrino detectors if the BH formation happens in our Galaxy.
This paper is organized as follows.
Section <ref> starts with a concise summary of our GR radiation-hydrodynamics scheme including the excision method, and also describes the initial setup of the simulation.
The main results and detailed analysis of our new findings are presented in Section <ref>.
We summarize our results and conclude in Section <ref>.
Throughout the paper, Greek indices run from 0 to 3 and Latin indices from 1 to 3, except ν and ε which denote neutrino species and energy, respectively.
§ METHOD
In our full GR radiation-hydrodynamics simulations, we solve the evolution equations of metric, hydrodynamics, and energy-dependent neutrino radiation.
Each of the evolution equations is solved in an operator-splitting manner, while the system evolves self-consistently as a whole, satisfying the Hamiltonian and momentum constraints <cit.>.
In Sec. <ref>, we describe our numerical method focusing particularly on the excision method applied to the neutrino radiation-hydrodynamics variables.
Sec. <ref> is devoted to explaining the computed model and numerical setup.
§.§ Radiation hydrodynamics in BH spacetime
We solve full GR multi-energy neutrino transport equations in axisymmetric 2+1 dimensions (two spatial dimensions and one momentum-space dimension).
Details of the code are described in our previous studies <cit.>.
The black hole spacetime is evolved using the BSSN formalism <cit.> with a fourth order finite differencing for the spatial derivatives and a four-step Runge-Kutta method.
We choose `1+log' slicing condition for the lapse and gamma-driver condition for the shift vector <cit.>.
BH formation is identified by locating the apparent horizon (AH) with an AH finder, e.g., <cit.>.
After the AH formation, we enforce an excision method for radiation-hydrodynamics inside the AH, while we evolve the full black hole spacetime without excision for geometrical variables.
Here we will briefly explain our excision technique for radiation-hydrodynamics.
Once the AH is found, we divide the interior of AH into two: inner and outer regions.
The interface of these two regions is located at fr_ AH(θ), where f∈[0,1] and r_ AH(θ) denotes the radius of the AH in the θ-direction, with θ being the angle with respect to the z-axis.
In the outer region, we solve the full neutrino radiation-hydrodynamics in the same way as the outside of AH (i.e. r>r_ AH).
On the other hand, we excise the inner region and artificially set all primitive variables, i.e., the rest mass density ρ, entropy s, electron fraction Y_e, spatial components of the four-velocity u^i, and the zeroth and first order neutrino radiation moments (E_(ν,ε),F^i_(ν,ε)), as
(ρ, u^i, s, Y_e, E_(ν,ε), F_(ν,ε)_i) = (∼0.1ρ_ max, 0, ≈ 1.5 k_ B baryon^-1, ≈ 0.15, E_ thick_(ν,ε), F_ thick_(ν,ε)_i) for r(θ)≤ fr_ AH(θ).
Here ρ_ max represents the maximum rest mass density outside of the AH, which therefore changes its value with time due to the mass accretion onto BH.
Regarding the entropy and electron fraction, we use fixed values taken from typical NS structures.
The zeroth and first order radiation moments (E_ thick_(ν,ε),F_ thick_(ν,ε)_i) inside the inner region are enforced to be the moments in the optically thick limit <cit.> assuming the beta equilibrium with matter.
We briefly comment on the appropriate value of f.
Usually, source terms for neutrino-matter interactions including gravitational red-shift and Doppler terms are quite stiff.
Inside the inner region r(θ)≤ fr_ AH(θ), we do not evolve any radiation-matter fields, that is, these stiff source terms are suddenly switched off across the excision boundary.
Such an artificial treatment inevitably causes spurious behaviours, appearing especially in the radiation fields near the excision boundary.
If we choose the value of f close to unity, these spurious oscillations eventually propagate even to the outside of the AH and the simulation crashes.
Therefore in this study we set f=0.5 to avoid such pathological behavior.
With these treatments, we found numerically stable neutrino radiation-hydrodynamic evolution in BH spacetime.
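Schematically (and ignoring the GR bookkeeping of the actual code), the excision step amounts to overwriting every cell inside f r_AH(θ) with the floor state of the equation above; E_thick here is an assumed helper returning the optically thick moments in beta equilibrium, and the vanishing flux is a simplified stand-in for the thick-limit F_thick.

import numpy as np

f_exc = 0.5

def apply_excision(r, theta, r_AH, rho, u_r, s, Ye, E_nu, F_nu, rho_max_outside):
    mask = r <= f_exc * r_AH(theta)            # inner excised region
    rho[mask] = 0.1 * rho_max_outside
    u_r[mask] = 0.0
    s[mask]   = 1.5                            # k_B per baryon
    Ye[mask]  = 0.15
    E_nu[mask] = E_thick(rho[mask], s[mask], Ye[mask])   # assumed optically-thick closure
    F_nu[mask] = 0.0                           # simplified stand-in for the thick-limit flux
    return rho, u_r, s, Ye, E_nu, F_nu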
§.§ Model
We use a non-rotating massive star with zero metallicity, whose initial mass at its zero-age main sequence is 70 M_⊙ <cit.>.
This progenitor star was reported to form a BH within a few hundred milliseconds after the first bounce <cit.>.
We use the DD2 EOS of <cit.>.
The maximum NS mass of DD2 for cold and non-rotating case is 2.42 M_⊙, which is consistent with the existence of observationally confirmed massive NSs with ∼2 M_⊙ <cit.>.
The 2D axially symmetric computational domain extends to 1.5×10^4 km from the center.
In the cylindrical computational domain, 2:1 ratio nested boxes with 11 refinement levels are embedded, and each nested box contains 64× 64 cells so that the finest resolution at the center becomes ≈230 m. In this work, we assume the plane symmetry with respect to the equatorial plane. The neutrino energy space ε logarithmically covers from 3 to 400 MeV with 14 energy bins.
In this study, we use the up-to-date neutrino rates of <cit.>, which are used also in our recent studies <cit.>.
§ RESULTS
We first describe the picture of post-bounce evolution till the formation of BH.
Fig. <ref> shows: (a) the maximum rest-mass density ρ_ max,15 in units of 10^15 g cm^-3 (black), baryon mass of PNS M_ PNS (blue), and central lapse function α_ c (red); (b) neutrino luminosity L_ν,51 in units of 10^51 erg s^-1 for neutrino species; and (c) neutrino mean energy ⟨ε_ν⟩.
The PNS surface is defined by the location for which the rest mass density drops below 10^10 g cm^-3.
L_ν and ⟨ε_ν⟩ are evaluated from the emergent neutrino spectra measured at r=400 km.
In panel (a), we also plot the maximum mass of DD2 EOS for cold and non-rotating stars by the horizontal dash-dotted line of 2.42 M_⊙.
Panel (a) exhibits that the M_ PNS exceeds the maximum allowed mass of current EOS at t_ pb∼100 ms.
However, because of an additional contribution from thermal pressure, the PNS does not immediately collapse to a black hole.
From the maximum density evolution, we see a sharp increase at t_ pb∼177 ms; at the same time, α_ c decreases to ∼0, signaling BH formation.
Prior to the BH formation, at t_ pb≳ 160 ms, the electron and anti-electron type neutrino luminosities show a decreasing trend, while heavy-lepton neutrinos show a rapid increase in both luminosity and mean energy.
These features were previously identified in 1D full-GR simulations with Boltzmann neutrino transport <cit.> and are commonly observed in the literature, due to rapid contraction of the PNS to the forming BH (see also, <cit.> as well as 3D models by <cit.>).
The overall features before the BH formation are in a good agreement with our former model z70 reported in <cit.>, in which the DD2-based nuclear EOS taking into account a first-order quantum chromodynamics (QCD) phase transition was used.
Taking into account the fact that the QCD phase transition occurs after the PNS starts collapsing <cit.>, the agreement between the current and previous models is quite reasonable.
We then discuss the neutrino radiation-hydrodynamics evolution after the BH formation, focusing mainly on how effectively our excision method manage to prevent propagation of spurious behaviours often appeared at the excision boundary.
Fig. <ref> displays spherically averaged spatial profiles of the rest mass density (top-left), electron fraction (top-right), entropy (middle-left), radial component of the three velocity (middle-right), electron type neutrino luminosity (bottom-left), and anti-electron type (solid-line) and heavy-lepton type (dash-dotted line) neutrino luminosities (bottom-right), at several time slices.
In the middle-left panel, we supplementary plot a temperature profile, but only at t_ BH=0 ms (red dash-dotted line), which is used in the later discussion with Fig. <ref>.
Each color represents the post BH formation time t_ BH, as denoted in the top-left panel. Once the AH is formed, we plot structures only outside the AH.
Slightly before AH formation at t_ BH=-0.1 ms, the central density exceeds 10^15 g cm^-3 and the velocity profile inside the PNS shows the infalling structure.
For t_ BH≥0 ms, for which we apply an excision method described in the previous section, we see essentially no numerical instabilities at the interface of the AH.
All the neutrino radiation fields and hydrodynamical variables exhibit smooth structures across the AH and subsequently swallowed into its inside.
From the density structural evolution, the maximum density drops by four orders of magnitude, from ∼10^14 g cm^-3 to ∼10^10 g cm^-3, within a few ms, presenting a clear transition from optically thick to thin conditions.
This feature makes SN simulations in dynamical BH spacetime one of numerically challenging subjects.
We found that, if we suddenly switch off the neutrino-matter interactions inside the AH, it causes spurious behaviors, which eventually leak out to the outside and lead to a code crash.
Therefore we believe that it is essential to ensure a buffer zone between the AH and the excised region, especially when the neutrino radiation fields are taken into account.
During the first few ms after AH formation, low-Y_e and high entropy material, which represent typical PNS shocked material, are still present outside the AH.
They are, however, immediately swallowed by the BH and for t_ BH≳3 ms the BH accretion enters a nearly steady state, exhibiting high-Y_e (∼0.49) and relatively low entropy (∼5 k_ B baryon^-1) flows (see magenta lines).
Next we focus on how the neutrino signals in association with the BH formation are radiated away.
Bottom two panels indicate that all neutrino species have an outgoing flux for r≳30 km at the time of the BH formation.
In the vicinity of AH, on the other hand, neutrino radiation fields experience a strong drag by infalling high density component (≳10^12 g cm^-3) and have an inward flux.
After the mass accretion becomes a nearly steady state flow for t_ BH≳3 ms, the dominant neutrino-matter interaction is the electron capture due to continuous replenishment of high-Y_e materials (∼0.49, see top-right panel) from stellar mantle.
It results in a sustained neutrino emission even after the BH formation for electron type neutrinos (see blue and magenta lines in the bottom-left panel in Fig. <ref>), while the rest of neutrino species has essentially no production channel and their neutrino luminosities quickly subside.
<cit.> reported a BH excision scheme with neutrino transport.
According to their long-term failed CCSN simulation in 1D spherical symmetry, qualitatively similar spatial profiles of the neutrino luminosities, namely a relatively strong ν_e emission continuing even after BH formation, were also reported.
Fig. <ref> displays: (a) the irreducible mass M_ irr and 2-norm of Hamiltonian constraint violation ||H||_2, (b) neutrino luminosities, and (c) mean neutrino energies, as a function of t_ BH.
Here, M_ irr is defined by the area of apparent horizon A as M_ irr=√(A/16π) <cit.> and ||H||_2 measures the constraint violation only for numerical cells outside the AH.
From panel (a), the irreducible mass shows an increasing trend from M_ irr∼2.88 M_⊙ to ∼3.06 M_⊙ during the first 40 ms.
At the moment of the AH formation, the measured value of the protoneutron star mass, M_ PNS, is ∼2.76 M_⊙, which rapidly decreases to ≲0.001 M_⊙ (the total mass outside of the AH and where ρ≥10^10 g cm^-3 is met) within a few ms.
It means that the estimated M_ irr is slightly larger than M_ PNS at t_ BH=0 ms.
Furthermore, from panel (a), M_ irr initially shows a slightly odd behavior, a nearly constant evolution until t_ BH∼8 ms, and it increases afterward.
From these, we naively suspect that the current numerical resolution at the center Δ x∼230 m might not be high enough[The BH is resolved by ∼13-14 grid points at its formation.] to accurately resolve the location of apparent horizon and may tend to overestimate the initial BH mass approximately by ∼0.1 M_⊙, i.e., ∼3 % error in the evaluation for the total BH mass or the AH radius.
However, once the system relaxes to a quasi-steady state for t_ BH≳10 ms, M_ irr increases with a reasonable growth rate of Ṁ_ irr≈ 4.66 M_⊙ s^-1, which agrees approximately with that of the PNS mass, Ṁ_ PNS≈ 4.73 M_⊙ s^-1, before the BH formation (see panel (a) in Fig. <ref>).
The 2-norm of Hamiltonian constraint ||H||_2 stays around ∼10^-4 without any secular increase after BH formation.
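As a simple consistency check of the quoted numbers (not part of the simulation itself), the horizon radius corresponding to M_irr ≈ 2.88 M_⊙ and the mass accreted in 40 ms at the quoted rate can be evaluated as follows; the CGS constants are the only inputs.

import numpy as np

G, c, Msun = 6.674e-8, 2.998e10, 1.989e33     # CGS constants

M_irr = 2.88 * Msun                           # irreducible mass at AH formation
A_AH = 16.0 * np.pi * (G * M_irr / c**2)**2   # horizon area, A = 16 pi (G M / c^2)^2
r_AH_km = np.sqrt(A_AH / (4.0 * np.pi)) / 1e5 # areal radius ~ 8.5 km

dM_acc = 4.66 * 0.040                         # Msun/s x 40 ms ~ 0.19 Msun,
                                              # consistent with the growth from ~2.88 to ~3.06 Msun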
Regarding the neutrino signals, the neutrino luminosity for all species show a rapid distinction and eventually migrate to a quasi steady state for t_ BH≳5 ms.
From panel (b), L_ν_e stays around ∼2×10^49 erg s^-1 till the end of our calculation, which features a long term steady state mass accretion onto the BH.
Nearly constant L_ν_e of the order of 𝒪(10^49) erg s^-1 is also reported in <cit.>.
The neutrino mean energy ⟨ε_ν⟩ may reveal the final moment of devastating PNS collapse.
As can be clearly seen, ⟨ε_ν⟩ for all neutrino species show a drastic increase at t_ BH∼3 ms.
This is particularly the case for heavy lepton type neutrinos, which show a remarkably high mean energy of ⟨ε_ν_x⟩∼90 MeV.
These values are even higher than those from the QCD CCSN models <cit.>, which are also known to emit high energy neutrinos ⟨ε_ν_x⟩∼40 MeV due to strong shock heating in association with the quark core bounce.
We now briefly discuss their possible excitation mechanism.
First, since we measure the emergent neutrino signals at r=400 km, these high energy neutrinos are produced at t_ BH∼1-2 ms.
From Fig. <ref>, this time corresponds exactly to the time when huge amounts of hot PNS envelope together with a shock surface infall with a relativistic speed of ∼0.3c.
The highest temperature of collapsing PNS material (middle-left panel in Fig. <ref>) for the regions of r≳30 km, where F_ν_x has a positive sign (bottom-right panel) and can contribute to the emergent neutrino spectrum, is merely T∼10 MeV.
It indicates that heavy lepton type neutrinos with energies of ⟨ε_ν_x⟩∼30 MeV could barely be explained via, e.g., the pair production channel, whereas this is unlikely for the much higher neutrino energies of ∼90 MeV.
To further discuss their origin, we examine their spectral features.
Fig. <ref> depicts: (a) the distribution function f_ε[We reconstruct the distribution function f_ε simply by f_ε=J_ε/4πε^3, where J_ε denotes the zeroth order neutrino radiation moment measured in the comoving frame at the energy bin ε. With an appropriate closure relation, J_ε is determined from the zeroth and first order radiation momenta (E_ε,F_ε^μ), which are measured in the Eulerian frame and are the basic variables evolved in our M1 neutrino transport.] for ν̅_e (black lines) and ν_x (red lines) at three different time slices: t_ BH=0 ms, 3 ms (corresponding to the time when high energy neutrinos are observed), and 7 ms, (b) time evolution of distribution function f_ε for all energy bins higher than ε≥52 MeV (this time, 52, 78, 117, 176, and 265 MeV) (solid lines) and mean energy ⟨ε⟩ (dashed line) for ν̅_e, and (c) same as the panel (b) but for ν_x.
All these values are measured at r=400 km.
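The spectral post-processing described in the footnote can be sketched as below; the mean energy estimator (energy density over number density) is one common convention and is an assumption here, since the exact estimator used in the analysis may differ.

import numpy as np

eps = np.geomspace(3.0, 400.0, 14)            # neutrino energy bins [MeV], as in the numerical setup
d_eps = np.gradient(eps)

def distribution_function(J_eps):
    # f_eps = J_eps / (4 pi eps^3), with J_eps the comoving zeroth moment per energy bin
    return J_eps / (4.0 * np.pi * eps**3)

def mean_energy(J_eps):
    return np.sum(J_eps * d_eps) / np.sum(J_eps / eps * d_eps)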
From panel (a), the energy spectrum at t_ BH=3 ms for ν_x exhibits a flatter profile with relatively more populations for neutrinos with ≳50 MeV.
Such feature cannot be seen in other two time snapshots.
We attribute the flatter profile to a consequence of more effective isoenergy scatterings taking place in the upstream to the relativistically infalling shock surface.
Because of the rapid infall of the PNS shock surface (see v_r-profiles from t_ BH=-0.1 ms to 1 ms in Fig. <ref>), the outgoing comoving neutrino flux ahead of the shock becomes relatively larger.
Consequently the effect of isoenergy neutrino scatterings becomes more prominent compared to the case with a stationary shock surface.
Furthermore, that impact is more visible for high energy neutrinos as the cross section of the isoenergy scatterings is proportional to the square of the incoming neutrino energy.
Indeed, from panel (c), the distribution function for heavy lepton type neutrinos shows an increasing (decreasing) trend for ε≥117(≤78) MeV at t_ BH≲3 ms.
Particularly at the energy bin ε=117 MeV (f_ε=117: red line), its increase is noteworthy with its maximum appearing at t_ BH∼3 ms.
Neutrinos at higher energy bins (ε=176 and 265 MeV) also show a sudden increase with slight time delays of ∼0.5 ms relative to the peak time of f_ε=117.
These time delays arise mostly because higher energy neutrinos require a longer time to escape from the collapsing stellar mantle.
On the other hand, regarding ν̅_e (as well as ν_e), the less population of high energy neutrinos (ε≳50 MeV) prior to the BH formation than that of ν_x (compare two thin lines in panel (a)) leads simply to a less noticeable increase at t_ BH∼3-4 ms.
Additionally, the presence of charged current reactions tend to suppress their increase.
In fact, f_ε≥117 for ν̅_e shows approximately an order of magnitude smaller values than that for ν_x.
These features result in the observed high energy neutrinos pronounced for heavy lepton type ones (Fig.<ref>).
Although our moment formalism cannot capture the particle acceleration mechanisms at the shock front, non-thermal shock acceleration <cit.> is also reported to excite high energy neutrinos from CCSNe.
As a comparison with previous studies, <cit.> has performed GR Monte Carlo neutrino transport and reported high energy neutrinos with ⟨ε_ν_x⟩∼50 MeV in association with BH formation.
Since their calculations are performed on the fixed spacetime and matter fields after BH formation, quantitative differences in ⟨ε_ν⟩ from ours are inevitable.
We, however, believe that the emission of high energy neutrinos just after the BH formation seems to be a common feature and might be used as a smoking gun of the infall of the PNS surface.
<cit.> performed CCSN simulations with BH formation.
However, since they excise the innermost 400 km once they find the AH and also their models present a successful shock expansion, i.e., corresponding to the fallback SN model, the emergence of high energy neutrinos similar to ours was not reported.
Finally, we discuss observable multi messenger signals for a current failed CCSN model.
Fig. <ref> displays from top: (a) the neutrino detection rate Γ of Hyper-Kamiokande (HK) <cit.>; (b) Γ of IceCube (IC) <cit.>; (c) matter origin gravitational waves (GWs) Dh_+; and (d) spectrogram of h_+ obtained by a short-time Fourier transform.
We assume a source distance of D=10 kpc.
h_+ is the gravitational wave strain, which is calculated from a standard quadrupole formula, and we show only the non-vanishing component in axisymmetric profile observed along the equatorial plane.
The neutrino detection rate Γ is evaluated in the same way as <cit.> assuming a Fermi-Dirac distribution for the neutrino energy spectrum <cit.>.
Note that in the evaluation for Γ, we consider two extreme cases: all ν̅_e emitted from the source reach the detectors without neutrino flavor conversion and cause the signal at the detectors (black lines in the figure); all ν̅_x (identical to ν_x in this study) emitted from the source are completely swapped by ν̅_e and cause the signals (red lines).
In the insets of the upper two panels, we show a magnified view of Γ relative to the BH formation time t_ BH to highlight the detection of high energy neutrinos.
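For orientation only, a rough order-of-magnitude version of such a detection-rate estimate is sketched below; the pinching-free Fermi-Dirac spectrum, the simplified inverse-beta-decay cross section σ ≈ 9.4×10^-44 (ε/MeV)^2 cm^2, and the 220 kton fiducial water mass are illustrative assumptions, not the detector model used for Fig. <ref>.

import numpy as np

def detection_rate(L_nu_erg_s, eps_mean_MeV, D_kpc=10.0, fiducial_kton=220.0):
    MeV = 1.602e-6                                   # erg
    D = D_kpc * 3.086e21                             # source distance in cm
    T = eps_mean_MeV / 3.15                          # Fermi-Dirac temperature (zero chemical potential)
    eps = np.linspace(1.0, 100.0, 400)               # MeV grid
    d_eps = np.gradient(eps)
    spec = eps**2 / (1.0 + np.exp(eps / T))          # Fermi-Dirac spectral shape
    spec /= np.sum(eps * spec * d_eps)               # normalize total energy to 1 MeV
    number_flux = L_nu_erg_s / (4.0 * np.pi * D**2) / MeV * spec   # [1 / cm^2 / s / MeV]
    sigma = 9.4e-44 * eps**2                         # simplified IBD cross section [cm^2]
    N_p = 2.0 * fiducial_kton * 1e9 / 18.0 * 6.022e23   # free protons in the fiducial water mass
    return N_p * np.sum(sigma * number_flux * d_eps)    # events per second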
Regarding the neutrino detection rate Γ, both of the two extreme cases, i.e., with and without neutrino flavor conversion, essentially show a quantitatively similar monotonic increase until the BH formation.
This feature can be seen for both detectors.
This indicates that the possible range of neutrino oscillation effects <cit.>, i.e. the region bounded by two lines in panels (a,b), is quite small, compared to previous studies using less massive progenitor stars <cit.>.
For instance, Γ_ν̅_e→ν̅_e becomes ∼1.5 times higher than Γ_ν̅_x→ν̅_e for t_ pb≳100 ms for CCSN models with less massive progenitor stars, while the current one with a more massive progenitor star presents roughly comparable values.
Another remarkable feature is rapid increase of Γ_ν̅_x→ν̅_e (red lines) as the PNS approaches BH formation (t_ pb≳150 ms).
It is a clear signature of the increasing behavior of both L_ν_x and ⟨ε_ν_x⟩ shown in Fig. <ref>.
We also discuss whether the high energy heavy lepton type neutrinos, as a possible signature of the shock surface being swallowed by the BH, could be observed.
From insets, we can marginally observe a slight increase for Γ_ν̅_x→ν̅_e (red lines) at t_ BH∼3 ms, which is more visible for IC.
This time is consistent with the emission time of high energy neutrinos (see panel (c) in Fig. <ref>).
If we could observe such a tentative increase of neutrino detection during the exponential decay, it could be a possible signature of the aforementioned final moment of the PNS shock surface.
Bottom two panels show the emitted GWs.
We see essentially the same features as have been discussed for model z70 in <cit.>.
During the first ∼50 ms after bounce, relatively large and low frequency GWs originated from postbounce convective motions are observed, whose amplitudes and frequencies reach ∼50 cm and ∼100 Hz, respectively.
Afterward the gravitational waveform shows a considerable subsidence, which is then disrupted at t_ pb≳120 ms.
At the moment of BH formation, burst like GWs of the order of ∼100 cm are emitted presenting a broad band emission.
Once the BH is formed and BH accretion settles into a quasi steady state for t_ BH≳3 ms, we observe essentially no GWs for the current non-rotating model.
§ SUMMARY
We have presented the results of a 2D axisymmetric CCSN simulation for a massive star with 70 M_⊙.
Our core-collapse supernova model is based on numerical relativity, which solves the GR neutrino-radiation hydrodynamics equations together with the two-moment (M1) neutrino transport equations of <cit.>.
We used up-to-date neutrino opacities following <cit.> and employed the DD2 EOS of <cit.>.
In this framework, we follow for the first time “beyond BH formation”.
To ensure stable numerical evolution, we use an excision method for neutrino radiation-hydrodynamics, while we evolve the geometrical variables for entire computational domain.
Our results show that the PNS evolution and multi-messenger signals during the PNS contraction phase are consistent with previous studies in which the same progenitor model was used <cit.>.
The current non-rotating PNS model exceeds the maximum NS mass for DD2 EOS at ∼100 ms after bounce.
Subsequently, it initiates the second gravitational collapse, resulting in BH formation at t_ pb∼177 ms.
After we identify the AH, our excision technique demonstrates its capability to stably evolve the radiation-hydrodynamics in dynamical BH spacetime.
We solve the full neutrino-matter interactions taking into account the gravitational redshift and Doppler terms from the AH down to the excision domain, so that spurious oscillations often appearing around the excision surface do not leak outside the AH.
We also mention that our current numerical method satisfies the Hamiltonian constraint well and its violation after BH formation is free from secular growth.
After the BH formation, the PNS envelope is simply swallowed by the BH and the system transitions to a nearly steady BH-accretion phase within a few ms.
Afterward, the BH mass, i.e., the area of the AH, gradually increases because of the continuous mass inflow.
The accretion flow is composed of high-Y_e (∼0.5) material, reflecting the component of progenitor core (i.e. iron).
In contrast to the simple collapse dynamics of the PNS, its impact on the emergent neutrino signals was not so trivial.
Our findings are: (1) neutrinos with significantly high energies, especially for heavy lepton type neutrinos whose mean energy reaches ∼90 MeV, are observed during the infall phase of PNS envelope and (2) a steady state neutrino emission of electron type neutrinos in the BH accretion phase.
Possible observations of high energy neutrinos from BH formation are also reported in a previous similar (but spherically symmetric) study by <cit.>.
We attribute the first feature to more efficient isoenergy scatterings between neutrinos, which strive to emerge from the shock surface, and infalling stellar mantle ahead of the shock, which is mainly composed of heavy nuclei.
Using the time evolution of the neutrino spectral properties, we showed that the propagation of high energy neutrinos is indeed hindered when the PNS shock surface drastically collapses (i.e. 1 ms≲ t_ BH≲2 ms).
Once the shock surface is engulfed by the BH, those neutrinos are radiated away, with some time delays for higher energy neutrinos.
In the BH accretion phase, the main component of the accretion flow is the high-Y_e stellar mantle, whose temperature is at most a few MeV.
Therefore the main neutrino emission channel is the electron capture on heavy nuclei occurring in the vicinity of AH.
It results in a nearly constant electron type neutrino luminosity as also reported in <cit.>.
We would like to emphasize that these neutrino properties could be revealed only by full neutrino radiation-hydrodynamic simulations with numerical relativity without excising the relevant region outside the AH, i.e., by fully solving the region outside the BH.
In this study we employed only one non-rotating progenitor model.
In our future works, we are interested in exploring various CCSN models accompanied by BH formation.
For instance, a fallback scenario is one of the interesting topics.
The current progenitor model has a significantly high compactness ξ_2.5=1.0 at the precollapse stage (<cit.> and see also Table 1 in <cit.>), which leads to strong mass accretion during the PNS contraction phase.
Therefore it induces the PNS core-collapse without affording an opportunity for shock revival.
However, if one considers less compact stars <cit.> or rotating stars <cit.>, the shock revival aided by neutrino heating could happen before BH formation.
Such systems could be observed as a faint supernova <cit.> and should be distinguished from the current failed SN (or direct BH formation) model with no shock revival.
The progenitor model dependence should definitely be explored in future studies to explain various observations.
Another interesting topic to be explored is the collapsar scenario <cit.> as a possible route to long gamma-ray bursts and hypernovae.
In the collapsar scenario, a BH surrounded by a massive disk is formed, i.e., a highly non-spherical system is formed. Such systems can be followed only in numerical relativity, without approximations such as the CFC approximation.
For instance, after the formation of a massive disk, viscous effects significantly heat the disk, leading eventually to the launch of energetic outflows <cit.>.
As another intriguing and also challenging topic in the context of the collapsar scenario, the impact of magnetic fields threading the central BH is undoubtedly worth exploring as a possible origin of relativistic jets generated via, e.g., the Blandford-Znajek mechanism <cit.>.
It has been recently demonstrated by <cit.> that the Blandford-Znajek mechanism is a promising mechanism for launching a jet, but only in the framework of compact mergers.
We will explore this fascinating topic in our future CCSN studies.
§ ACKNOWLEDGEMENTS
We acknowledge K. Kiuchi, S. Fujibayashi, and A. Betranhandy for fruitful discussions.
This work was in part supported by Grant-in-Aid for Scientific Research (Nos. 20H00158 and 23H04900) of Japanese MEXT/JSPS.
Numerical computations were carried out on Sakura and Raven clusters at Max Planck Computing and Data Facility.
|
http://arxiv.org/abs/2307.04257v1 | 20230709195309 | Hyperon polarization and its correlation with directed flow in high-energy nuclear collisions | [
"Ze-Fang Jiang",
"Xiang-Yu Wu",
"Shanshan Cao",
"Ben-Wei Zhang"
] | nucl-th | [
"nucl-th",
"hep-ph"
] |
[email protected]
Department of Physics and Electronic-Information Engineering, Hubei Engineering University, Xiaogan, Hubei, 432000, China
Institute of Particle Physics and Key Laboratory of Quark and Lepton Physics (MOE), Central China Normal University, Wuhan, Hubei, 430079, China
[email protected]
Institute of Particle Physics and Key Laboratory of Quark and Lepton Physics (MOE), Central China Normal University, Wuhan, Hubei, 430079, China
[email protected]
Institute of Frontier and Interdisciplinary Science, Shandong University, Qingdao, Shandong, 266237, China
[email protected]
Institute of Particle Physics and Key Laboratory of Quark and Lepton Physics (MOE), Central China Normal University, Wuhan, Hubei, 430079, China
Guangdong Provincial Key Laboratory of Nuclear Science, Institute of Quantum Matter,
South China Normal University, Guangzhou, Guangdong, 510006, China
We investigate the hyperon polarization and its correlation with the directed flow of the quark-gluon plasma (QGP) in non-central Au+Au collisions at =27 GeV. A modified 3-dimensional (3D) Glauber model is developed and coupled to a (3+1)-D viscous hydrodynamic evolution of the QGP. Within this framework, we obtain a satisfactory simultaneous description of the directed flow of identified particles and Λ polarization, and show sensitivity of polarization to both the tilted geometry and the longitudinal flow profile of the QGP. A non-monotonic transverse momentum dependence of the Λ polarization is found in our calculation, which is absent from hydrodynamic simulation using other initialization methods and can be tested by future experimental data with higher precision. A strong correlation (or anti-correlation) is found between the global polarization and directed flow of Λ when the longitudinal flow field (or medium deformation) varies, indicating the common origin of these two quantities. Therefore, a combination of these observables may provide a more stringent constraint on the initial condition of the QGP.
Hyperon polarization and its correlation with directed flow in high-energy nuclear collisions
Ben-Wei Zhang
August 12, 2023
=============================================================================================
§ INTRODUCTION
A highly excited state of nuclear matter, known as the Quark-Gluon Plasma (QGP), is created in high-energy nucleus-nucleus collisions at the Relativistic Heavy-Ion Collider (RHIC) and the CERN Large Hadron Collider (LHC).
Quantifying the QGP properties has become one of the primary goals of the heavy-ion collision programs since its discovery at the beginning of this century <cit.>.
In non-central heavy-ion collisions, huge orbital angular momentum (OAM) or vorticity field can be deposited into the QGP, leading to the global polarization of hyperons through the spin-orbital coupling <cit.> or spin-vorticity coupling <cit.>. This initiates the exploration of spin physics in a strongly coupled system.
The chiral kinetic theory <cit.> and phenomenology, such as chiral vortical effect <cit.>, chiral vortical wave <cit.>, the change of the QCD phase diagram induced by the vorticity effect <cit.> and the spin-hydrodynamics <cit.> are under active investigation.
Recently, the STAR experiment has confirmed the global polarization of Λ(Λ̅) hyperons in semi-peripheral Au+Au collisions <cit.>, which implies an average fluid vorticity of ω≈ (9±1) × 10^21 s^-1. This is the most vortical fluid ever observed in nature. Further analyses of the global and local polarization have revealed new insights into the vortical properties of the QGP <cit.>.
Various theoretical approaches have been developed to study the influence of the fluid vorticity on spin polarization, including transport models (e.g. AMPT) with the assumption of local thermal equilibrium <cit.>, the Quark-Gluon-String Model (QGSM) <cit.> and (3+1)-dimensional viscous hydrodynamic models <cit.>.
These models consistently capture the features of the beam energy dependence of the global polarization along the out-of-plane direction (-P^y) as observed from the RHIC to the LHC energies. However, inconsistency still remains in the azimuthal angle dependence of the local polarization between theoretical calculations and the experimental data <cit.>. Considerable efforts have been devoted to resolving this local polarization puzzle <cit.>.
Within the hydrodynamic approach, it has been found that the hyperon polarization is sensitive to the initial condition of the QGP evolution <cit.>. Significant impacts on the polarization have been revealed from the initial velocity field of the medium <cit.>, and the initial geometry of the medium which affects the vorticity field inside the QGP <cit.>. Since these aspects are also the origin of other soft hadron observables like their collective flow coefficients, it would be of great interest to study polarization together with these observables in the same framework and utilize their combination to better constrain the initial condition of the QGP <cit.>. This is also the focus of our present work.
Following our previous exploration <cit.> on the interplay between the hydrodynamic initial condition and the directed flow of hadrons in non-central heavy-ion collisions, we will further investigate how the tilted geometry of the QGP fireball and its longitudinal flow velocity field affect the hyperon polarization, including its dependence on rapidity, centrality and transverse momentum. Detailed comparisons on the Λ polarization will be conducted between different contributions to polarization from the kinetic theory, and also between different initialization models of our hydrodynamic simulation. Since the asymmetric initial condition serves as the common origin of both the hyperon polarization and the directed flow of hadrons, we will explore the correlation between these two observables as the medium geometry and flow field vary. We will use Au+Au collisions at =27 GeV as an environment for our discussion, considering the abundance of experimental data on both polarization and directed flow coefficient of Λ hyperons in this collision system.
The rest of this paper will be structured as follows. In Sec. <ref>, we will first present the theoretical framework we develop for a simultaneous investigation on directed flow and polarization of the QGP, including a 3-dimensional (3D) Glauber model that involves a tilted medium geometry and an initial longitudinal flow field, a (3+1)-D hydrodynamic model for the QGP evolution and a modified Cooper-Frye formalism for evaluating the polarization pseudo-vector on the chemical freezeout hypersurface. Numerical results on the hadron directed flow and the hyperon polarization will then be presented in Sec. <ref>, with specific focus on the dependence of the Λ polarization on the medium geometry and longitudinal flow profile, and the correlation between polarization and directed flow. In the end, we will summarize in Sec. <ref>.
§ MODEL FRAMEWORK
§.§ Initial condition
We use a modified Glauber model to generate the initial condition of the hydrodynamic evolution of the QGP, which possesses a counterclockwise tilted geometry in the reaction plane with respect to the beam (longitudinal) direction <cit.>.
The Woods-Saxon (WS) distribution of nucleons is applied to calculate the nuclear thickness function of the Au nucleus as
T(x,y)=∫_-∞^∞dz ρ_0/{1+exp[(r-R_0)/d_0]},
where ρ_0=0.17fm^-3 is the average nucleon density, r=√(x^2+y^2+z^2) is the radial position with x, y, z being the space coordinates, R_0=6.38 fm is the radius of nucleus and d_0=0.535 fm is the surface diffusiveness parameter.
For two nuclei travelling along the longitudinal (±ẑ) direction and colliding with an impact parameter 𝐛, their thickness functions are then given by
T_+(𝐱_T)=T(𝐱_T-𝐛/2), T_-(𝐱_T)=T(𝐱_T+𝐛/2),
where 𝐱_T=(x,y) is the transverse plane coordinate. According to the Glauber model, their corresponding densities of participant nucleons of inelasitic scatterings are given by
T_1(𝐱_T) =T_+(𝐱_T){1-[1-σ_NN T_-(𝐱_T)/A]^A} ,
T_2(𝐱_T) =T_-(𝐱_T){1-[1-σ_NN T_+(𝐱_T)/A]^A} ,
with A being the mass number and σ_NN being the inelastic nucleon-nucleon scattering cross section <cit.>.
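As a rough numerical illustration of the thickness and participant densities defined above, the short Python sketch below evaluates T(x,y), T_±, and T_{1,2} by direct quadrature. The inelastic cross section and grid settings used here are illustrative choices of ours and are not taken from the simulation setup described in this work.

import numpy as np

# Woods-Saxon parameters for Au quoted in the text
rho0, R0, d0 = 0.17, 6.38, 0.535      # fm^-3, fm, fm
A = 197
sigma_NN = 3.3                         # fm^2 (~33 mb); illustrative value near sqrt(s_NN) = 27 GeV

def thickness(x, y, zmax=20.0, nz=2000):
    """Nuclear thickness T(x,y) = integral of the Woods-Saxon density over z."""
    z, dz = np.linspace(-zmax, zmax, nz, retstep=True)
    r = np.sqrt(x**2 + y**2 + z**2)
    rho = rho0 / (1.0 + np.exp((r - R0) / d0))
    return np.sum(rho) * dz

def participant_densities(x, y, b):
    """Participant densities T_1, T_2 for impact parameter b (nuclei shifted by +-b/2 in x)."""
    T_plus = thickness(x - b / 2.0, y)
    T_minus = thickness(x + b / 2.0, y)
    T1 = T_plus * (1.0 - (1.0 - sigma_NN * T_minus / A) ** A)
    T2 = T_minus * (1.0 - (1.0 - sigma_NN * T_plus / A) ** A)
    return T1, T2

b = 8.57   # fm, the impact parameter quoted below for the 20-50% class
for x in (-5.0, 0.0, 5.0):
    T1, T2 = participant_densities(x, 0.0, b)
    print(f"x = {x:+.1f} fm:  T_1 = {T1:.3f} fm^-2,  T_2 = {T2:.3f} fm^-2")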
Inspired by the anisotropy of hadrons emitted by the QGP, it has been proposed in Ref. <cit.> that non-central collisions deposit energy asymmetrically along the longitudinal direction, as illustrated in the upper panel of Fig. <ref>. This leads to a counterclockwise tilt of the QGP fireball in the reaction plane with respect to the beam direction. Different parameterization schemes of the initial condition have been proposed in literature <cit.> to introduce this deformation of the nuclear matter and have been shown consistent with each other. In this work, we follow our earlier studies <cit.> and parameterize the spacetime rapidity (η_s) dependence of wounded (or participant) nucleon distribution as
W_N(x,y,η_s)= T_1(x,y)+T_2(x,y)
+ H_t[T_1(x,y)-T_2(x,y)]tan(η_s/η_t),
where the parameter H_t reflects the overall imbalance strength of energy deposition between forward and backward η_s. It relies on the impact parameter of collisions, and is set as H_t = 2.07b/fm in the present study in order to consistently describe the centrality dependence of the soft hadron observables in Au+Au collisions at =27 GeV later. Additionally, the function tan (η_s/η_t) in Eq. (<ref>) determines how the imbalance varies with η_s.
We use a constant parameter η_t=8.0 in this study, which provides a good description of the directed flow (v_1) of charged particles in our previous work <cit.>.
After accounting for contributions from both wounded nucleons and binary (hard) collisions, the total weight function reads
W(x,y,η_s)=[(1-α)W_N(x,y,η_s)+α n_BC(x,y)]/{[(1-α)W_N(0,0,0)+α n_BC(0,0)]|_𝐛=0},
where n_BC(x,y)=σ_NNT_+(x,y)T_-(x,y) represents the number of binary collisions, and α=0.05 is called the collision hardness parameter determined by the centrality (or 𝐛) dependence of the soft hadron yield <cit.>.
Under the Bjorken flow assumption, the initial energy density ε_0 and the normalized local net baryon density n_0 are given by <cit.>
ε_0(x,y,η_s) =K · W(x,y,η_s) · H(η_s) ,
n_0(x,y,η_s) =1/N· W(x,y,η_s) · H(η_s) · H_B(η_s) ,
with the overall factor K set by the multiplicity distribution (dN_ch/dη or dN_ch/dy) of soft hadrons, and N being a normalization factor for n_0.
In Eqs. (<ref>) and (<ref>), a function
H(η_s)=exp[-(|η_s|-η_w)^2/2σ^2_ηθ(|η_s|-η_w) ]
is introduced to describe the plateau structure in the longitudinal distribution of emitted hadrons, in which η_w controls the width of the central rapidity plateau and σ_η determines the width (speed) of the Gaussian decay outside the plateau region <cit.>. In order to model the accumulation of baryons in the forward and backward rapidity regions, we also include the following distribution of baryon density in the longitudinal direction <cit.>
H_B(η_s)=exp[-(η_s-η_n)^2/2σ^2_n]+exp[-(η_s+η_n)^2/2σ^2_n],
where parameters η_n and σ_n are calibrated by the p_T spectra of protons and antiprotons <cit.>.
Since we aim at exploring the hyperon polarization in the same framework, which is sensitive to the gradient of the fluid velocity field <cit.>, we need to extend the initialization model beyond the Bjorken approximation for the fluid velocity. Following Refs. <cit.>, we construct the initial energy-momentum tensor components as
T^ττ =ε_0(x,y,η_s)cosh(y_L) ,
T^τη_s =1/τ_0ε_0(x,y,η_s)sinh(y_L) ,
where the rapidity variable is modeled as
y_L≡ f_v y_CM.
Here, the center of mass rapidity y_CM at a given transverse location (x,y) depends on both the beam energy y_beam≡arccosh[√(s_NN)/(2m_N)] and the imbalance between the participant thickness functions as
y_CM=arctanh[T_1-T_2/T_1+T_2tanh (y_beam)],
where m_N is the nucleon mass and f_v∈ [0, 1] parameterizes the fraction of y_CM deposited into the longitudinal flow velocity.
This f_v parameter allows one to vary the magnitude of the longitudinal flow velocity gradient, which influences both local and global polarization of Λ(Λ̅) hyperons. When f_v=0, one recovers the Bjorken flow scenario with y_L=0 <cit.>. With Eqs. (<ref>) and (<ref>), the initial fluid velocity in the η_s direction is given by
v_η_s=T^τη_s/(T^ττ+P), in which P is the pressure.
In the present work, the initial fluid velocity in the transverse plane is assumed to be zero by setting T^τ x = T^τ y = 0.
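To make the construction above concrete, the following sketch evaluates the tilted weight W_N, the center-of-mass rapidity y_CM, and the resulting T^ττ, T^τη_s, and v^η_s at y=0. Simple Gaussian profiles are used as stand-ins for T_1 and T_2, and apart from η_t=8 and f_v=0.23, which are quoted in the text, the numerical values (H_t, τ_0, and the toy conformal pressure P=ε/3 used only for the velocity estimate) are illustrative.

import numpy as np

# Stand-ins for the participant densities T_1(x), T_2(x) at y = 0 (Gaussians for illustration);
# in the full model these come from the Glauber construction described above.
def T1(x): return np.exp(-0.5 * ((x - 1.0) / 3.0) ** 2)
def T2(x): return np.exp(-0.5 * ((x + 1.0) / 3.0) ** 2)

H_t    = 14.8                                # tilt strength, H_t = 2.07 b/fm for b ~ 7.2 fm
eta_t  = 8.0                                 # rapidity scale of the tilt quoted in the text
f_v    = 0.23                                # fraction of y_CM given to the longitudinal flow
tau0   = 1.0                                 # initial proper time in fm/c (illustrative)
y_beam = np.arccosh(27.0 / (2.0 * 0.938))    # beam rapidity at sqrt(s_NN) = 27 GeV

def W_N(x, eta_s):
    """Tilted wounded-nucleon weight at y = 0."""
    return T1(x) + T2(x) + H_t * (T1(x) - T2(x)) * np.tan(eta_s / eta_t)

def y_CM(x):
    """Center-of-mass rapidity of the locally deposited matter."""
    asym = (T1(x) - T2(x)) / (T1(x) + T2(x))
    return np.arctanh(asym * np.tanh(y_beam))

def initial_fields(x, eta_s, eps0):
    """T^{tau tau}, T^{tau eta_s}, and v^{eta_s} for a local energy density eps0."""
    y_L = f_v * y_CM(x)
    T_tt = eps0 * np.cosh(y_L)
    T_te = eps0 * np.sinh(y_L) / tau0
    P = eps0 / 3.0                           # toy conformal EOS, only for the velocity estimate
    return T_tt, T_te, T_te / (T_tt + P)

for x in (-3.0, 0.0, 3.0):
    eps0 = W_N(x, 1.0)                       # up to the overall normalization K and H(eta_s)
    T_tt, T_te, v_eta = initial_fields(x, 1.0, eps0)
    print(f"x = {x:+.1f} fm:  T^tt = {T_tt:.3f},  tau0*T^teta = {tau0 * T_te:.3f},  v^eta = {v_eta:+.4f}")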
In Tab. <ref>, we summarize the parameters used to initialize the QGP medium in this study. The first four parameters (K, τ_0, σ_η, and η_w) are adjusted based on the rapidity dependence of the charged particle yields (dN_ch/dy) in the most central collisions at a given beam energy.
With these parameters, the combination of our initial condition and the CLVisc hydrodynamic simulation is able to provide a good description of the p_T spectra of different types of identified particles (π^+, K^+, p and p̅) in different centrality regions across the RHIC-BES energies <cit.>. This provides a reliable baseline for our subsequent investigation on the global and local polarization of hyperons in this work.
The last parameter (f_v) in Tab. <ref> is adjusted according to the directed flow coefficients of π^-, p and p̅, Λ and Λ̅. The value of f_v we use here is different from the one used in Ref. <cit.> due to our different assumptions on the initial geometry of the QGP profile. With the decrease of the beam energy, a larger fraction of the longitudinal momentum of the colliding nuclei can be deposited into the initial longitudinal velocity <cit.>.
With the model parameters listed above, we first present in Fig. <ref> the distributions of the initial energy density (middle panel) and net baryon number density (bottom panel) on the η_s-x plane for 20-50% (b=8.57 fm) Au+Au collisions at =27 GeV. Their values beyond the Bjorken approximation are solved from the modified energy-momentum tensor components in Eqs. (<ref>) and (<ref>). One may clearly observe a tilted geometry of the QGP fireball with respect to the beam direction within this initialization model. Apart from an asymmetrical shift along the forward and backward rapidity directions, a counterclockwise tilt in the η_s-x plane can be seen for both the energy and net baryon densities. Due to their different parameterizations in Eq. (<ref>) and Eq. (<ref>), the baryon density exhibits a stronger shift towards large rapidity as well as a stronger tilt compared to the energy density. As discussed in <cit.>, this could be understood with the string models of the initial state <cit.>: while the baryon density deposition is driven by the valence quarks in the participant nucleons, energy density deposition originates from the melting of strings that involves both valence and sea quarks. We expect a stronger tilt of these density profiles in more peripheral collisions due to the stronger drag experienced by participant nucleons from spectators. In phenomenology, the asymmetry in the energy density is responsible for the rapidity-odd directed flow of soft hadrons, while the asymmetry in the baryon density affects the abundance of baryons and anti-baryons produced from different locations of the QGP <cit.>.
§.§ Hydrodynamic evolution
Starting with the initial condition constructed in the previous subsection, we utilize a (3+1)-D viscous hydrodynamic model CLVisc <cit.> to describe the further evolution of the QGP medium. Under finite baryon chemical potential, the hydrodynamic equations read <cit.>
∇_μ T^μν =0 ,
∇_μ J^μ =0 ,
where the energy-momentum tensor T^μν and the net baryon current J^μ are defined as
T^μν = ε U^μU^ν - PΔ^μν + π^μν ,
J^μ = nU^μ+V^μ ,
with ε, P, n, u^μ, π^μν, V^μ being the local energy density, pressure, net baryon density, flow velocity field, shear stress tensor and baryon diffusion current respectively.
The projection tensor is given by Δ^μν = g^μν-u^μu^ν with the metric tensor g^μν = diag (1,-1,-1,-1). Effects of the bulk viscosity are not included in the present study yet.
The dissipative currents π^μν and V^μ are given by the following expressions based on the Israel-Stewart-like second order hydrodynamic expansion <cit.>:
Δ^μν_αβ (u·∂) π^αβ = -1/τ_π(π^μν - η_vσ^μν) - 4/3π^μνθ - 5/7π^α<μσ_α^ν> + (9/70)[4/(e+P)]π^<μ_απ^ν>α ,
Δ^μν (u·∂) V_ν = - 1/τ_V[V^μ-κ_B∇^μ(μ_B/T)] - V^μθ - 3/10V_νσ^μν ,
where θ = ∂· u is the expansion rate, σ^μν = ∂^<μ u^ν> is the shear tensor,
η_v and κ_B are the shear viscosity and baryon diffusion coefficient.
For an arbitrary tensor A^μν, its traceless symmetric part is given by A^<μν> = 1/2[(Δ^μαΔ^νβ+Δ^ναΔ^μβ)-2/3Δ^μνΔ^αβ]A_αβ <cit.>.
The specific shear viscosity C_η_v and the baryon diffusion coefficient κ_B are model parameters in hydrodynamic simulation, which are connected to η_v and parameter C_B via
C_η_v = η_v T/e+P,
κ_B = (C_B/T)n[(1/3)coth(μ_B/T)-nT/(e+P)] ,
where μ_B is the baryon chemical potential. In this work, we use C_η_v=0.08 and C_B=0.4 for all collision centrality classes <cit.>, and set the relaxation times as τ_π = 5C_η_v/T and τ_V = C_B/T.
We solve the hydrodynamic equations using the NEOS-BQS equation of state (EOS) <cit.>, which extends the lattice EOS at zero net baryon density to finite net baryon density via the Taylor expansion <cit.>. This EOS provides a smooth crossover between the QGP and the hadron phase under the conditions of strangeness neutrality (n_S=0) and electric charge density n_Q = 0.4n_B.
§.§ Particlization
We use the isoenergy-density freezeout condition <cit.> in our study and determine the freezeout hypersurface by a fixed energy density (e_frz= 0.4 GeV/fm^3) <cit.>. We apply the Cooper-Frye formalism on this hypersurface to obtain the hadron momentum distribution:
dN/p_T dp_T dϕ dy = g_i/(2π)^3∫_Σ p^μdΣ_μf_eq(1+δ f_π+δ f_V) .
In the above equation, g_i is the spin-color degeneracy factor for identified hadrons, and dΣ_μ is the hypersurface element determined by the projection method <cit.>. The thermal distribution (f_ eq) and the out-of-equilibrium corrections (δ f_π and δ f_V) satisfy
f_ eq = 1/exp[(p_μU^μ - Bμ_B )/T_f] ∓ 1 ,
δ f_π(x,p) = (1± f^eq(x,p)) p_μp_νπ^μν/[2T^2_f(e+P)],
δ f_V(x,p) = (1± f^eq(x,p))[n_B/(e+P)-B/(U^μp_μ)] p^μV_μ/(κ_B/τ_V) ,
where T_f is the chemical freezeout temperature, and B represents the baryon number of an identified hadron.
The out-of-equilibrium corrections above are obtained from the Boltzmann equation via the relaxation time approximation <cit.>. Contributions from resonance decay have been taken into account in this work based on Ref. <cit.>, although hadronic scatterings after the QGP phase has not been included yet.
§.§ Spin polarization
In non-central heavy-ion collisions, the quarks are polarized due to the massive initial orbital angular momentum of the QGP fireball <cit.>.
We assume the collision system to be in local thermal equilibrium on the freezeout hypersurface. Meanwhile, the conservation of spin is respected during hadronization and resonance decay processes <cit.>.
The polarization pseudo-vector for spin-1/2 fermions can be obtained using the modified Cooper-Frye formalism as <cit.>
𝒮^μ(𝐩)=∫ d Σ· p 𝒥_5^μ(p, X)/2 m ∫ d Σ·𝒩(p, X),
where 𝒥^μ_5 is the axial charge current density and 𝒩^μ(p, X) is the number density of fermions in the phase space.
Following the quantum kinetic theory <cit.>,
𝒮^μ(𝐩) can be decomposed into different sources as
𝒮^μ(𝐩) = 𝒮_thermal^μ(𝐩)
+𝒮_shear^μ(𝐩)+𝒮_accT^μ(𝐩)
+𝒮_chemical^μ(𝐩)+𝒮_EB^μ(𝐩),
where
𝒮_thermal^μ(𝐩) = ∫ dΣ^σF_σϵ^μναβp_ν∂_αu_β/T,
𝒮_shear^μ(𝐩) = ∫ dΣ^σF_σϵ^μναβp_ν u_β/(u· p)T
× p^ρ(∂_ρu_α+∂_αu_ρ-u_ρDu_α),
𝒮_accT^μ(𝐩) = -∫ dΣ^σF_σϵ^μναβp_νu_α/T(Du_β-∂_βT/T),
𝒮_chemical^μ(𝐩) = 2∫ dΣ^σF_σ1/(u· p)ϵ^μναβp_αu_β∂_νμ/T,
𝒮_EB^μ(𝐩) = 2∫ dΣ^σF_σ[ϵ^μναβp_αu_βE_ν/(u· p)T+B^μ/T],
with
F^μ = ħ/[8m_ΛΦ(𝐩)] p^μf_eq(1-f_eq),
Φ(𝐩) = ∫ dΣ^μp_μf_eq.
The five terms in Eq. (<ref>) represent polarization induced by the thermal vorticity (𝒮_thermal^μ), the shear tensor (𝒮_shear^μ),
the fluid acceleration minus temperature gradient (𝒮_accT^μ),
the gradient of chemical potential over temperature (𝒮_chemical^μ),
and the external electromagnetic field (𝒮_EB^μ), respectively.
Detailed expressions of these terms can be derived from the statistic model <cit.> and the Kubo formula <cit.>.
Here, S^μ_shear and S^μ_chemical are also named as the shear-induced polarization (SIP) and the baryonic spin Hall effect (SHE) in literature <cit.>.
Since the electromagnetic field decays rapidly and its evolution profile has not been well constrained in heavy-ion collisions yet, we only take into account the first four terms but neglect the 𝒮_EB^μ term in the current study <cit.>.
The polarization vector of Λ (or Λ̅) in its rest frame can then be constructed as
P⃗^*(𝐩) = P⃗(𝐩) - [P⃗(𝐩)·𝐩]/[p^0(p^0+m)] 𝐩,
where
P^μ(𝐩) ≡1/s𝒮^μ(𝐩),
with s=1/2 being the particle spin.
After averaging over the transverse momentum, one obtains the local polarization as
⟨P⃗(ϕ_p) ⟩ = ∫_y_min^y_maxdy ∫_p_Tmin^p_Tmaxp_Tdp_T [ Φ(𝐩)P⃗^*(𝐩)] / ∫_y_min^y_maxdy ∫_p_Tmin^p_Tmaxp_Tdp_T Φ(𝐩) ,
in which ϕ_p is the azimuthal angle, and Φ(𝐩) is an integration on the freezeout hypersurface defined in Eq. (<ref>).
The mass of Λ (or Λ̅) is set as m = 1.116 GeV. Finally, the global polarization of Λ and Λ̅ is obtained by further averaging P⃗^*(𝐩) over ϕ_p in Eq. (<ref>).
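The boost to the rest frame in the expression above is a simple algebraic operation on the momentum-dependent polarization vector; a minimal sketch (with purely illustrative input numbers) is

import numpy as np

m_Lambda = 1.116   # GeV

def rest_frame_polarization(P_lab, p):
    """P* = P - [P.p / (p0 (p0 + m))] p for a hyperon with three-momentum p (GeV)."""
    P_lab = np.asarray(P_lab, dtype=float)
    p = np.asarray(p, dtype=float)
    p0 = np.sqrt(m_Lambda**2 + np.dot(p, p))
    return P_lab - np.dot(P_lab, p) / (p0 * (p0 + m_Lambda)) * p

# Illustrative numbers: a small out-of-plane polarization and p_T ~ 1 GeV
print(rest_frame_polarization([0.0, -0.01, 0.002], [1.0, 0.3, 0.5]))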
§ NUMERICAL RESULTS
In this section, we present the directed flow coefficient and polarization of Λ(Λ̅) hyperons in Au+Au collisions at = 27 GeV from the CLVisc hydrodynamic calculation using the tilted initial geometry with non-zero initial longitudinal flow velocity field.
We first analyze the directed flow v_1 of pions, protons and antiprotons in various centrality classes to determine the H_t value for the tilted QGP fireballs at different centralities. Using the H_t value extracted from the directed flow, we then investigate the relation between the global polarization of Λ(Λ̅) hyperons and centrality, transverse momentum, and pseudo-rapidity in Sec. <ref>.
We further study the dependence of the global polarization of Λ hyperons on the tilted QGP geometry and the initial velocity field in Sec. <ref>.
The global polarization generated by different initial condition models – the tilted Glauber model, AMPT, and SMASH – are compared in Sec. <ref>.
The correlation between global polarization and the directed flow of Λ̅ hyperons is investigated in Sec. <ref>.
In the end, we present results for the local polarization of Λ hyperons in Sec. <ref>.
§.§ Directed flow of identified particles and global polarization of Λ hyperons
We start with validating our model setup by comparing the directed flow of identified hadrons and global polarization of Λ(Λ̅) hyperons between our calculation and the STAR data <cit.> in Figs. <ref>-<ref>.
The directed flow coefficient v_1 can be extracted as the first-order Fourier coefficient of the azimuthal distribution of particle momentum as
v_1(y)=⟨cos(ϕ-Ψ_1)⟩=∫cos(ϕ-Ψ_1)dN/dy dϕdϕ/∫dN/dy dϕdϕ,
where Ψ_1 is the first order event plane angle of a nucleus-nucleus collision.
Due to the use of a smooth initial condition of the energy density and baryon number density, effects of event-by-event fluctuations have not been taken into account.
As a result, the event plane coincides with the spectator plane, which can be identified using deflected neutrons measured at large rapidity.
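In practice, with Ψ_1 fixed to zero by the smooth initial condition, v_1 in a given rapidity bin reduces to a sample average of cos(ϕ-Ψ_1). The sketch below illustrates this with a synthetic azimuthal distribution; the input v_1 value and sample size are arbitrary and not taken from our simulation.

import numpy as np

def v1_estimate(phi, psi1=0.0):
    """Directed flow v_1 = <cos(phi - Psi_1)> from sampled azimuthal angles."""
    return np.mean(np.cos(phi - psi1))

rng = np.random.default_rng(1)
v1_true, n = -0.01, 200_000
# Sample dN/dphi ~ 1 + 2 v_1 cos(phi) by accept-reject
phi = rng.uniform(0.0, 2.0 * np.pi, 2 * n)
accept = rng.uniform(0.0, 1.0, phi.size) < (1.0 + 2.0 * v1_true * np.cos(phi)) / (1.0 + 2.0 * abs(v1_true))
phi = phi[accept][:n]
print("input v_1 =", v1_true, "  extracted v_1 =", round(v1_estimate(phi), 4))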
In Fig. <ref>, we first present the v_1 of different species of hadrons as a function of rapidity in Au+Au collisions at =27 GeV. The transverse momentum range 0<p_T<3.0 GeV of these hadrons is used for the analysis. In the upper panel, we show the v_1 of π^- in three different centrality regions. By using a linear dependence H_t=2.07 b/fm between the tilt parameter and the impact parameter in Eq. (<ref>), a reasonable centrality dependence of the pion v_1 can be obtained. Using the same model setup, we present the v_1 of protons and anti-protons in the middle panel for a given centrality bin. As discussed in Refs. <cit.>, introducing the tilted geometry for the net baryon density provides a satisfactory description of the splitting of v_1 between p and p̅. Similarly, our model results on the v_1 of Λ and Λ̅ are also consistent with the STAR observation <cit.>, as shown in the lower panel of Fig. <ref>.
In Fig. <ref>, we present the global polarization of hyperons along the out-of-plane direction, -P^y, analyzed within the kinematic region of p_T∈ [0.5 GeV, 3.0 GeV] and y∈ [-1, 1]. In the upper panels, we compare different contributions, i.e., different terms in Eq. (<ref>), to the polarization of Λ as functions of (from left to right) centrality, transverse momentum and rapidity, respectively. One observes that after integrating over p_T, the thermal vorticity is the dominant contributor to the global polarization of Λ across different centralities and rapidities (left and right). However, in the middle panel, it is interesting to note that opposite trends with respect to p_T can be seen between the thermal vorticity and shear tensor contributions: the former decreases while the latter increases as p_T becomes larger. The contribution from the shear term becomes non-negligible above p_T∼ 1 GeV and even becomes dominant above p_T∼ 1.5 GeV. Later, we will show that the p_T dependences of these two terms rely on the medium geometry and the longitudinal flow field of the QGP.
In the lower panels of Fig. <ref>, we combine contributions from the four terms (thermal, shear, accT, and chemical) and present the global polarization (-P^y) of both Λ and Λ̅ as functions of centrality, transverse momentum and rapidity. Our model calculation provides a satisfactory description of the hyperon polarization compared to the STAR data <cit.>. Only a minor difference is observed between Λ and Λ̅, which results from the chemical term contribution to -P^y. In addition, due to the opposite p_T dependences between thermal and shear contributions (middle panel in the upper row), their combination leads to a non-monotonic dependence of -P^y on p_T (middle panel in the lower row). This feature can be examined with more precise data in the future, and provide more stringent constraints on different components of hyperon polarization. With these validations of our model calculation, we will explore the dependence of hyperon polarization on the medium profiles and its correlation with the directed flow in the rest of this work.
§.§ Effects of the initial QGP geometry and longitudinal flow on global polarization
In this subsection, we implement a detailed analysis on how the initial geometry and longitudinal flow profiles of the QGP affect the global polarization of Λ hyperons.
In Fig. <ref>, we first fix the initial longitudinal flow velocity field with f_v=0.23 and study how the tilt of the QGP geometry influences different components of Λ polarization. The upper plot shows the global polarization as a function of p_T. In each panel, we study how the H_t parameter affects each contribution – thermal, shear, accT, and chemical – to the Λ polarization. As H_t increases from 0 to 15, one observes that the slope of -P^y(p_T) decreases from positive to negative values in the thermal vorticity term, while it increases from negative to positive values in the shear tensor term. This could be understood with the -u_β∂_α T/T^2 component in the S_thermal^μ term and the u_β/T component in the S_shear^μ term, which are both amplified with a more asymmetric medium and lower temperature at mid-rapidity when H_t increases.
Consequently, the non-monotonic dependence of their combination on p_T may provide additional constraint on the medium geometry if the experimental data becomes sufficiently precise.
Little impact from H_t has been found on the Λ polarization from the fluid acceleration (accT) term and the SHE (chemical) term. A similar investigation is conducted in the lower plot of Fig. <ref>, where the Λ polarization is studied as a function of rapidity. As the value of H_t increases from 0 to 15, the dip structure of the Λ polarization at mid-rapidity from the thermal vorticity term gradually transitions into a peak structure. The value of this global polarization near y=0 is enhanced from 0.40 to 0.73. For the other three terms of global polarization, the impact of this tilted deformation of the QGP appears small.
In Fig. <ref>, we combine contributions from the four terms above and present the total value of Λ polarization as functions of both p_T (upper panel) and y (lower panel). When the f_v parameter is fixed at 0.23, one observes an enhancement in the value of -P^y as one increases the tilt parameter H_t. Meanwhile, a clear non-monotonic behavior of polarization with respect to p_T appears when H_t is sufficiently large, which may serve as a signature of the tilted geometry of the QGP fireball.
Similarly, we study the relation between the longitudinal flow velocity field (or f_v) and the global polarization in Figs. <ref> and <ref>. Here, we fix H_t=2.07b/fm for the medium geometry, which is fitted from the centrality dependence of the hadron v_1 earlier. In Fig. <ref>, we present p_T (upper plot) and y (lower plot) dependences of -P^y for four different contributions separately. As one increases the value of f_v from 0 to 0.3, an enhanced global polarization is seen from the thermal vorticity term. This can be understood with the stronger longitudinal velocity gradient deposited into the QGP when f_v becomes larger, which directly increases the global vorticity of the medium and therefore the Λ polarization. On the other hand, little variation is observed in the other three terms when we change the f_v parameter. The total value of polarization is presented in Fig. <ref> after contributions from the four terms are combined. When the medium geometry is fixed via H_t=2.07b/fm, a non-monotonic p_T dependence of Λ polarization can be observed in the upper panel for different values of f_v applied here. Increasing the f_v value significantly enhances the magnitude of polarization. As shown in the lower panel, this enhancement appears more prominent at mid-rapidity than at large rapidity.
§.§ Comparison between different initialization models
Constraining the initial condition from the final state hadron observables is an ongoing effort of heavy-ion programs. It has been suggested in Ref. <cit.> that the Λ polarization can be affected by implementing different initialization models. Therefore, it is of great interest to investigate whether the initial condition we develop in this work introduces further impacts on polarization. In this subsection, we compare the Λ polarization between three different initialization methods: the tilted optical Glauber model described in Sec. <ref>, SMASH <cit.> and AMPT <cit.>. The parameters and settings of SMASH and AMPT are identical to those used in Ref. <cit.>. After the CLVisc hydrodynamic evolution, these three initial conditions are able to produce comparable p_T spectra of charged particles.
Shown in Fig. <ref> is the global polarization of Λ in 20-50% Au+Au collisions at =27 GeV as functions of p_T (upper panel) and y (lower panel), compared between CLVisc hydrodynamic calculations with three different initialization models. One can observe a larger value of polarization from using our current tilted optical Glauber model (labeled as “CCNU”) than from using SMASH and AMPT. This results from both the tilted geometry of the QGP fireball and the longitudinal flow gradient introduced in our current model. As discussed in the previous subsection, the tilted geometry also gives rise to the non-monotonic p_T dependence of the global polarization, which is absent in results from using the other two initialization models. When the tilt is strong, the magnitude of the shear-induced polarization increases rapidly with p_T. On the other hand, this shear term from the SMASH or AMPT initial condition only increases moderately in the given p_T region. Currently, it is hard to distinguish between the three initialization models based on the experimental data due to its large uncertainties. Future measurements with higher precision may help better constrain the initial condition in heavy-ion collisions.
§.§ Correlation between global polarization and directed flow
As seen in the previous two subsections, the value of hyperon polarization strongly depends on the initial condition of the QGP. Meanwhile, the initial geometry and flow field of the QGP also determine the collective flow coefficients of the final state hadrons. Therefore, one would naturally expect certain correlation between these two observables in heavy-ion collisions, as already suggested by both experimental data <cit.> and theoretical studies <cit.>. In this subsection, we will combine our analyses on the directed flow and global polarization of hyperons and explore how they are correlated with each other.
Similar to Figs. <ref>-<ref>, we first review the dependence of the hadron v_1 on the tilted geometry and the initial longitudinal flow profile in Fig. <ref> for 10-40% Au+Au collisions at √(s_NN)=27 GeV. Here we choose the Λ̅ hyperon since the anisotropy of the anti-baryons is mainly driven by the energy distribution of the QGP rather than the baryon number density deposited by the projectile and target nuclei <cit.>. In the upper panel, we fix the f_v=0.23 parameter for the initial longitudinal flow and vary the H_t parameter for the tilted deformation of the medium geometry. One observes that as H_t increases from 0 to 25, the slope of directed flow with respect to rapidity (dv_1/dy) around mid-rapidity decreases from positive to negative values. On the other hand, when we fix H_t=14.8 (using H_t=2.07b/fm) for the medium geometry and vary f_v for the longitudinal flow in the lower panel, one observes an increase in dv_1/dy from negative values towards 0. These observations are consistent with our findings for anti-baryons in a prior work <cit.> on the directed flow coefficients of different hadron species at the BES energies.
In Fig. <ref>, we combine results of dv_1/dy and -P^y of Λ̅ around mid-rapidity from our hydrodynamic calculation using different values of H_t and f_v. According to Figs. <ref>, <ref> and <ref>, when f_v=0.23 is fixed, increasing H_t increases -P^y but decreases dv_1/dy. This leads to an anti-correlation between the global polarization and the slope of directed flow of Λ̅, as shown by the red diamond symbols in Fig. <ref>. Contrarily, when H_t=14.8 is fixed, increasing f_v simultaneously increases dv_1/dy and -P^y of Λ̅, resulting in a positive correlation between these two observables as shown by the green star symbols. In both cases, good linear relations can be seen between the v_1 slope and the global polarization of Λ̅. Therefore, as suggested in Ref. <cit.>, between directed flow and global polarization, one may infer the value of one from the other.
§.§ Local polarization of Λ hyperons
In the end, we complete our study by presenting the local polarization of Λ hyperons.
Shown in Fig. <ref> is the local polarization in the -ŷ direction as a function of the azimuthal angle (ϕ_p) in 20-50% Au+Au collisions at =27 GeV, compared between CLVisc hydrodynamic calculations using different H_t (upper panel) and f_v (lower panel) parameters. Consistent with our previous conclusions on the global polarization, enhancing the tilted deformation of the QGP or its initial longitudinal flow gradient also increases the local value of -P^y at different ϕ_p between 0 and π. Similarly, increasing H_t and f_v also enhances the magnitude of local polarization in the z direction (|P^z|), as shown in the upper and lower panels of Fig. <ref> respectively. Note that the cosine-like feature of -P^y and the negative sine shape of P^z with respect to ϕ_p are both opposite to observations in the experimental data <cit.>. Although it has been proposed that contributions from the shear induced term and the spin Hall term help improve the theoretical description of local polarization towards the experimental observation <cit.>, after combining them with the dominating term of thermal vorticity, the discrepancies still exist in our current results.
§ CONCLUSIONS
We have studied the hyperon polarization and its correlation with the directed flow of hadrons in Au+Au collisions at =27 GeV. The CLVisc hydrodynamic simulation is coupled to a modified 3D Glauber initial condition that models a tilted QGP medium with an initial longitudinal velocity field. Using model parameters determined by the directed flow coefficient of different species of identified particles, our calculation provides a satisfactory description of the global polarization of Λ(Λ̅) hyperons observed at the STAR experiment, as functions of centrality, transverse momentum and rapidity. We find that the thermal vorticity dominates the p_T-integrated global and local polarization of hyperons, while the shear-induced polarization is important at high p_T. Increasing the counterclockwise tilt of the QGP fireball with respect to the beam direction enhances the thermal vorticity contribution to the Λ polarization at low p_T, while suppresses its contribution at high p_T. The opposite trend is found for the shear-induced contribution. Therefore, a non-monotonic dependence on p_T is found for the global polarization of Λ with the presence of a tilted QGP profile. Effects of this tilted geometry on the fluid acceleration term and the baryonic spin Hall term are found small in our calculation. Depositing stronger initial longitudinal flow velocity into the QGP gives rise to a larger orbital angular momentum and therefore a larger thermal vorticity contribution to the Λ polarization. However, effects of this initial velocity on the other three terms of polarization are found negligible. Compared to the same hydrodynamic simulation using SMASH or AMPT initial condition, our current calculation provides a larger value of Λ polarization, indicating the sensitivity of global polarization to the initial geometry and the longitudinal flow velocity of the QGP. Furthermore, a strong correlation is found between the Λ polarization and its directed flow coefficient. When the medium geometry is fixed, the Λ polarization is linearly correlated with the slope of v_1(y) near mid-rapidity as the initial longitudinal flow velocity is varied. To the contrary, these two quantities are linearly anti-correlated when the initial flow is fixed while the tilt of the medium is varied. These imply the medium geometry and the longitudinal flow velocity are the common origin of polarization and directed flow, and therefore the combination of these two observables may provide a tight constraint on the initial condition of the QGP produced in non-central heavy-ion collisions.
The framework presented in the present work can be extended to studying the hyperon polarization at other beam energies at RHIC and LHC. However, apart from the medium geometry and longitudinal flow profile, other effects might be crucial for understanding the polarization phenomenology at lower collision energies. For instance, the electromagnetic field produced in energetic nuclear collisions can cause directional drift of charged quarks and thus affect the splitting of global polarization between Λ and Λ̅ <cit.>.
The deformation of nuclear structure may also contribute to the polarization of hyperons <cit.>.
In addition, the correlation between the hyperon polarization and its directed flow found in this work can be further extended to correlation with hard probe observables for an even more stringent constraint on the QGP properties <cit.>.
These aspects will be explored in our upcoming efforts.
This work was supported by the National Natural Science Foundation of China (NSFC) under Grant Nos. 11935007, 12175122 and 2021-867, Guangdong Major Project of Basic and Applied Basic Research No. 2020B0301030008, the Natural Science Foundation of Hubei Province No. 2021CFB272, the Education Department of Hubei Province of China with Young Talents Project No. Q20212703, the Open Foundation of Key Laboratory of Quark and Lepton Physics (MOE) No. QLPL202104 and the Xiaogan Natural Science Foundation under Grant No. XGKJ2021010016.
|
http://arxiv.org/abs/2307.03889v1 | 20230708034331 | Quantum techniques for eigenvalue problems | [
"Dean Lee"
] | quant-ph | [
"quant-ph",
"cond-mat.quant-gas",
"nucl-th"
] |
Quantum techniques for eigenvalue problems
Dean Lee ([email protected])
Facility for Rare Isotope Beams and Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA
This article is a brief introduction to quantum algorithms for the eigenvalue problem in quantum many-body systems. Rather than a broad survey of topics, we focus on providing a conceptual understanding of several quantum algorithms that cover the essentials of adiabatic evolution, variational methods, phase detection algorithms, and several other approaches. For each method, we discuss the potential advantages and remaining challenges.
August 12, 2023
===================
§ INTRODUCTION
Quantum computing has the potential to address many of the unsolved problems of quantum many-body physics. By allowing for arbitrary linear combinations of tensor products of qubits, one can store
exponentially more information than classical bits. This opens the possibility of calculations of strongly-interacting systems with many degrees of freedom without the
need for Monte Carlo methods and their accompanying problems associated with
sign oscillations <cit.>. Furthermore, qubits
naturally evolve with unitary real-time dynamics, providing access to non-equilibrium processes, which are often well beyond the reach of first-principles
calculations using classical computers. But there are also great challenges to realizing the promise of quantum computing. One of the main problems is the fact that the quantum computing devices available today have significant limitations due to gate errors, qubit decoherence, faulty measurement readout, small numbers of qubits, and limited qubit connectivity. These problems severely limit the class of problems that one can address at present. Nevertheless, significant advances are being made in quantum hardware performance and scale <cit.>, and it is useful to consider the design and performance of quantum algorithms as quantum resources grow and become more reliable.
There is an excellent and comprehensive review on quantum computing and quantum many-body systems in Ref. <cit.>. Instead of writing another review with similarly broad scope, in this article we instead focus on several algorithms of interest for eigenvalue problems. The aim is to provide a readable introduction for novice readers with enough detail to demonstrate the concepts and execution of each method. We should note that there are many useful algorithms of relevance to eigenvalue problems that we do not cover here. These include cooling algorithms <cit.>, coupled heat bath approaches <cit.>, dissipative open system methods <cit.>, spectral combing
<cit.>, symmetry projection techniques <cit.>, linear combinations of unitaries <cit.>, and imaginary time evolution <cit.>.
In the following, we start with a review of the adiabatic theorem and the performance of adiabatic evolution for the preparation of eigenstates. After this, we cover the broad class of variational methods. We discuss gradient calculation techniques for optimization and several specific variational algorithms. Thereafter we present several phase detection algorithms. These include phase estimation, iterative phase estimation, and the rodeo algorithm. We then conclude with a summary and outlook for the future.
§ ADIABATIC EVOLUTION
The adiabatic theorem states that if a quantum state is an eigenstate of an initial Hamiltonian H(0) = H_0, then the quantum state will remain trapped in an exact eigenstate of the instantaneous Hamiltonian H(t) in the limit that the time dependence of H(t) is infinitely slow <cit.>. If this evolution has only finite duration, then the error will scale inversely with the total time evolution, T.
We can use quantum adiabatic evolution to prepare the eigenstates of any Hamiltonian H_1 by preparing an exact eigenstate of some simple initial Hamiltonian H_0. For the purpose of analysis, it is convenient to scale out the dependence on the total duration of time T and work with the rescaled variable s = t/T. We then make a smooth interpolation H(s) with s ranging from s= 0 to s=1, with H(0)=H_0 and H(1)=H_1 <cit.>. Let us define the adiabatic evolution operator
U(s) = Texp[-i T∫_0^s H(s') ds' ],
where T indicates time ordering where operators at later times are placed on the left. In the limit of large time T, the unitary transformation U(1) will map any eigenstate of H_0 to an eigenstate of H_1. In Ref. <cit.>, it is observed that the unitarily-transformed Hamiltonian,
H'(1) = U^†(1)H_1 U(1),
is a Hamiltonian whose eigenvalues are equal to H_1 but whose eigenvectors are equal to H_0. For this reason, the term “Hamiltonian translator” was used to describe the unitary transformation U(1). Suppose we start from the Hamiltonian H(0) and perform a perturbation theory expansion in the difference, H'(1)-H(0),
H'(1) = H(0) + [H'(1)-H(0)].
Since H(0) and H'(1) share the same eigenvectors, we find that first-order perturbation for the energy is exact and all other terms in perturbation theory for the energy or wave function vanish.
Let us now consider the one-parameter eigenvector |ψ(s)⟩, which is an instantaneous eigenvector of H(s) for s in the interval [0,1]. Let Δ(s) be the spectral gap between |ψ(s)⟩ and the rest of the energy spectrum of H(s). In computing the spectral gap, we can ignore sectors that are orthogonal to |ψ(s)⟩ due to symmetries that are respected by H(s). We note that |ψ(0)⟩ is an eigenstate of H_0. Let us define |ψ_U(s)⟩ as U(s)|ψ(0)⟩.
We use the symbol || · || to denote the operator norm. Building upon the work of Ref. <cit.>, Jansen et al. <cit.> derived the rigorous bound that
T ≥1/δ{∫^s_0 [ || ∂_s^2 H(s')||/Δ^2(s') + 7 || ∂_s H(s')||^2/Δ^3(s')] ds' + B }
is sufficient to satisfy the error bound
|⟨ψ(s)|ψ_U(s)⟩| ≥ 1-δ,
where B is a boundary term that vanishes when ∂_s H(0) and ∂_s H(1) both equal zero <cit.>. We see from Eq. (<ref>) that, for any fixed system, the required time T is scaling inversely with the error δ.
The challenge with adiabatic state preparation for eigenstates of quantum many-body systems is the fact that Δ(s) may be extremely small for large systems. This is especially true when H_0 and H_1 have very different eigenstates, and H(s) must pass through one or more quantum phase transitions. This motivates the search for initial Hamiltonians H_0 for which the starting eigenstate can be prepared on a quantum computer, but the eigenstate structure of H_0 is not completely trivial and has some resemblance to that of H_1 <cit.>. Even in cases where |ψ_U(1)⟩ is not a good approximation to the eigenstate |ψ(1)⟩ of H_1, the state |ψ_U(1)⟩ can still be a useful starting vector for other state preparation algorithms which converge more rapidly.
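As a minimal numerical illustration of these ideas, the sketch below adiabatically evolves the ground state of H_0 = -Σ_j X_j into the ground state of a small transverse-field Ising chain using the linear interpolation H(s) = (1-s)H_0 + sH_1, and shows how the ground-state fidelity improves with the total time T. The choice of model, system size, and step number is ours and purely illustrative.

import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_on(site_ops, n):
    """Tensor product over n qubits using the given {site: operator} dictionary."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, site_ops.get(k, I2))
    return out

n = 4
H0 = -sum(op_on({j: X}, n) for j in range(n))                       # ground state |+>^n
H1 = -sum(op_on({j: Z, j + 1: Z}, n) for j in range(n - 1)) \
     - sum(op_on({j: X}, n) for j in range(n))                      # small Ising chain

psi_target = np.linalg.eigh(H1)[1][:, 0]                            # exact ground state of H1
psi0 = np.full(2**n, 1.0 / np.sqrt(2**n), dtype=complex)            # ground state of H0

steps = 400
for T in (2.0, 8.0, 32.0, 128.0):
    psi = psi0.copy()
    for k in range(steps):
        s = (k + 0.5) / steps
        H = (1.0 - s) * H0 + s * H1
        psi = expm(-1j * H * (T / steps)) @ psi
    fidelity = abs(np.vdot(psi_target, psi))**2
    print(f"T = {T:6.1f}   ground-state fidelity = {fidelity:.6f}")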
In order to perform the time evolution in Eq. (<ref>), one usually uses some version of the Trotter approximation. The conceptual starting point for the Trotter approximation is the Baker-Campbell-Hausdorff formula, which states that when e^Ae^B = e^C, we have the formal series
C = A + B + 1/2[A,B] + 1/12[A,[A,B]] - 1/12[B,[A,B]] + ⋯.
Suppose our Hamiltonian has two non-commuting pieces,
H = H_A + H_B.
At first order in the Trotter-Suzuki expansion, we can use <cit.>
e^-iHΔ t = e^-iH_AΔ te^-iH_BΔ t + O[(Δ t)^2]
= e^-iH_BΔ te^-iH_AΔ t + O[(Δ t)^2].
At second order we have
e^-iHΔ t = e^-iH_BΔ t/2e^-iH_AΔ te^-iH_BΔ t/2 + O[(Δ t)^3]
= e^-iH_AΔ t/2e^-iH_BΔ t e^-iH_AΔ t/2+ O[(Δ t)^3].
The generalization to higher-order expressions can be found in Ref. <cit.>. The performance of the Trotter-Suzuki expansion can be improved in numerous ways, such as using random orderings <cit.>, sums of Trotter products at different orders <cit.>, extrapolation methods <cit.>, and renormalization <cit.>.
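The stated error scalings can be checked numerically with a pair of random Hermitian matrices standing in for H_A and H_B; the sketch below compares the first- and second-order product formulas against the exact propagator for several step sizes. The matrix dimension and step sizes are arbitrary choices.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_hermitian(d):
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2.0

d = 8
HA, HB = random_hermitian(d), random_hermitian(d)
H = HA + HB

for dt in (0.2, 0.1, 0.05, 0.025):
    exact = expm(-1j * H * dt)
    first = expm(-1j * HA * dt) @ expm(-1j * HB * dt)
    second = expm(-1j * HB * dt / 2) @ expm(-1j * HA * dt) @ expm(-1j * HB * dt / 2)
    err1 = np.linalg.norm(first - exact, 2)
    err2 = np.linalg.norm(second - exact, 2)
    print(f"dt = {dt:5.3f}   first-order error = {err1:.2e}   second-order error = {err2:.2e}")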
§ VARIATIONAL METHODS
Variational quantum algorithms encompass a broad class of methods that are among the most popular approaches to the preparation of eigenstates using current and near-term quantum hardware.
While the examples we consider here are optimizing a single vector, there are also many different variational methods that use subspaces <cit.>. The typical strategy is a hybrid approach where the quantum device is used to prepare a parameterized family of possible wave functions, and then a classical computation is performed to minimize the associated cost function. Let θ be an L-dimensional vector of parameters θ_j. The most common example is the search for the ground state of a quantum Hamiltonian H by minimizing a cost function C(θ) given by the energy expectation value C(θ) = ⟨θ|H|θ|$⟩<cit.>. We consider a general ansatz for the wave function|θ⟩that is a product of unitary operators acting upon some simple initial state|ψ_I⟩<cit.>,
|θ⟩ = V_L U_L(θ_L) ⋯
V_1 U_1(θ_1)|ψ_I⟩.
where each V_j is a fixed unitary operator. It is convenient to take each U_j(θ_j) as an exponential of a Hermitian operator H_j,
U_j(θ_j) = exp(-iH_jθ_j/2),
where we restrict H_j to be its own inverse so that H_j^2 = I. This involutory condition is satisfied by any product of Pauli matrices on any multi-qubit system. In such cases, we have the simple trigonometric relation,
U_j(θ_j) = cos(θ_j/2)I -i sin(θ_j/2)H_j.
For any operator O, we find that
U_j^†(θ_j) O U_j(θ_j) = O_1 + O_sinsin(θ_j) + O_coscos(θ_j),
for some operators O_1, O_sin, and O_cos independent of θ_j.
It follows that <cit.>
∂/∂θ_j U_j^†(θ_j) O U_j(θ_j) = O_sincos(θ_j) - O_cossin(θ_j)
= U_j^†(θ_j+α_j) O U_j(θ_j+α_j) - U_j^†(θ_j-α_j) O U_j(θ_j-α_j)/2 sin(α_j),
for any α_j such that sin(α_j) ≠ 0. We note that this parameter shift formula is exact and not simply a finite-difference approximation. This allows values for α_j of O(1), which is helpful for measuring gradient components in the presence of stochastic and systematic errors. One can now compute the components of the gradient using
∂/∂θ_jC(θ ) = C(θ+ α_j)-C(θ- α_j)/2 sin(α_j),
where the vector α_j has components [α_j]_k = α_j δ_jk. These gradients can be used to minimize the cost function C(θ).
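For a one-qubit example with generator H_j = X, observable O = Z, and initial state |0⟩ (an arbitrary choice made here for illustration), the parameter-shift formula can be checked directly against the analytic derivative of ⟨Z⟩ = cos θ:

import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def U(theta, G=X):
    """U(theta) = exp(-i G theta / 2) for an involutory generator G (G^2 = I)."""
    return np.cos(theta / 2.0) * I2 - 1j * np.sin(theta / 2.0) * G

def cost(theta):
    psi = U(theta) @ np.array([1.0, 0.0], dtype=complex)
    return np.real(np.vdot(psi, Z @ psi))            # equals cos(theta)

theta, alpha = 0.7, np.pi / 2.0
grad_shift = (cost(theta + alpha) - cost(theta - alpha)) / (2.0 * np.sin(alpha))
print("parameter-shift gradient:", grad_shift)
print("analytic  -sin(theta)  :", -np.sin(theta))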
Consider adiabatic evolution with initial Hamiltonian H_0, final Hamiltonian H_1, and interpolating Hamiltonian H(s) = sH_1 + (1-s)H_0. We then have a string of exponentials for our adiabatic evolution operator,
U(1) = e^-iH(1)ds⋯ e^-iH(s)ds⋯ e^-iH(0)ds,
which we apply to the ground state of H_0. Let N be the number of time steps and let ds = 1/(N+1). If we now use the Trotter approximation to write
e^-iH(s)ds = e^-i[sH_1 + (1-s)H_0]ds≈ e^-isH_1dse^-i(1-s)H_0ds,
then the adiabatic evolution operator has the form
U(1) ≈ e^-iγ_NH_1dse^-iβ_NH_0ds⋯ e^-iγ_jH_1dse^-iβ_jH_0ds⋯ e^-iγ_0H_1dse^-iβ_0H_0ds,
for β_j = 1 - j ds and γ_j = j ds. This structure provides the theoretical motivation for the quantum approximate optimization algorithm (QAOA) <cit.>. Instead of using the values for γ_j and β_j as prescribed by adiabatic evolution, they are treated as free variational parameters optimized to minimize the energy expectation of the Hamiltonian H_1.
For large quantum systems, the required number of variational parameters will grow with system size. The number of variational parameters needed as a function of the size of the system with fixed error tolerance remains an open question.
There are at least two major challenges that arise in quantum variational algorithms for large systems. The first challenge is the problem of barren plateaus. For parameterized random quantum circuits, the components of the cost function gradient will become exponentially small in the number of qubits of the quantum system <cit.>.
The second challenge is the appearance of many local minima, making gradient descent optimization difficult. Before discussing the problem of local minima, we first review some terminology from computational complexity theory. A decision problem is one where the two possible answers are yes or no. P refers to the set of decision problems that can be solved using a deterministic Turing machine in polynomial time. NP refers to the set of decision problems whose solution, once given, can be confirmed by a deterministic Turing machine in polynomial time. Equivalently, NP refers to decision problems that can be solved using a non-deterministic Turing machine in polynomial time, where a general non-deterministic Turing machine is endowed with the ability to branch over all possible outcomes in parallel. A problem p is NP-hard if all problems in NP can be obtained in polynomial time from the solution of p. If a decision problem in NP is NP-hard, then it is called NP-complete.
Consider a graph with d vertices and an adjacency matrix A_i,j marking the edges of the graph that equal 0 or 1 for each pair of vertices {i,j}. The MaxCut problem poses the task of finding the subset S of the vertices that maximizes the number of edges connecting S and its complement,
∑_i∈ S∑_j ∉ S A_i,j.
The MaxCut problem was shown to be NP-complete <cit.>. The continuous MaxCut problem consists of finding the d-dimensional vector ϕ ∈ [0,2π)^d that minimizes
μ(ϕ) = 1/4∑_i=1^d ∑_j=1^d A_i,j[cos(ϕ_i)cos(ϕ_j)-1] .
In Ref. <cit.>, the continuous MaxCut problem is shown to be equivalent to the MaxCut problem and therefore also NP-hard. Furthermore, the continuous MaxCut problem can also be recast as a variational quantum optimization problem for the Ising model Hamiltonian,
1/4∑_i=1^d ∑_j=1^d A_i,j (σ^z_i σ^z_j - 1),
with variational wave function
e^-i σ^y_dϕ_d/2⋯ e^-i σ^y_1ϕ_1/2|0⟩^⊗ d.
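The equivalence between μ(ϕ) and the energy expectation of this Ising Hamiltonian in the rotated product state can be checked directly on a small random graph, as in the following sketch (Python with NumPy; the 4-vertex adjacency matrix is an arbitrary illustration):

import numpy as np
from functools import reduce

d = 4                                              # number of vertices (illustrative)
rng = np.random.default_rng(1)
A = np.triu(rng.integers(0, 2, size=(d, d)), 1)    # random 0/1 edges
A = A + A.T                                        # symmetric adjacency matrix
phi = rng.uniform(0.0, 2 * np.pi, size=d)

# Classical continuous MaxCut cost
mu = 0.25 * sum(A[i, j] * (np.cos(phi[i]) * np.cos(phi[j]) - 1.0)
                for i in range(d) for j in range(d))

# Quantum expectation value of (1/4) sum_ij A_ij (Z_i Z_j - 1) in the variational state
I2, Z = np.eye(2), np.diag([1.0, -1.0])
def op(single, site):
    return reduce(np.kron, [single if k == site else I2 for k in range(d)])

# exp(-i sigma^y phi/2)|0> = cos(phi/2)|0> + sin(phi/2)|1>, so the state is a product state
ket = reduce(np.kron, [np.array([np.cos(p / 2), np.sin(p / 2)]) for p in phi])
H = 0.25 * sum(A[i, j] * (op(Z, i) @ op(Z, j) - np.eye(2**d))
               for i in range(d) for j in range(d))
print(np.isclose(mu, ket @ H @ ket))               # True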
Although there is no proof that NP contains problems outside of P, there is much speculation that this is true. NP-hard problems would then belong to the set of difficult problems outside of P, and this would include the problem of minimizing the variational cost function for an Ising Hamiltonian.
Although the general performance of variational methods for large quantum systems is challenging, there are many cases in which major simplifications arise, such as the emergence of a mean-field picture. There are many examples of such approaches for fermionic quantum many-body systems <cit.>. One popular example is the unitary coupled cluster (UCC) method. In the UCC method, one starts with an initial state |ψ_I⟩, which is a mean-field reference state. For the unitary transformation, U, we take the form
U = e^T(θ) - T^†(θ),
where
T(θ) = ∑_m T_m(θ),
and
T_m(θ) is an m-body operator that produces excitations. The singles excitation has the form,
T_1(θ) = ∑_i ∑_a θ^i_a a^†_a a_i,
where a^†_a and a_i are fermionic creation and annihilation operators for orbitals a and i, respectively. The doubles term has the structure,
T_2(θ) = 1/4∑_i<j∑_a<bθ^i,j_a,b a^†_a a^†_b a_j a_i.
For the general case, we have
T_m(θ) = 1/(m!)^2∑_i<j<⋯∑_a<b<⋯θ^i,j,⋯_a,b,⋯ a^†_a a^†_b ⋯ a_j a_i.
There are several ways to encode fermionic antisymmetrization properties on a quantum computer. Although often not the most efficient, the simplest approach is the Jordan-Wigner transformation <cit.>. We define
σ^+_j = (σ^x_j + i σ^y_j)/2,
σ^-_j = (σ^x_j - i σ^y_j)/2,
and use the convention that |0⟩ corresponds to occupation number 0, and |1⟩ corresponds to occupation number 1. We then have a faithful representation of the algebra of creation and annihilation operators with the mapping
a^†_j = σ^-_j ⊗σ^z_j-1⊗⋯⊗σ^z_1,
a_j = σ^+_j ⊗σ^z_j-1⊗⋯⊗σ^z_1.
This gives the required anticommutation relations,
{a_j,a^†_k} = δ_j,k, { a_j,a_k } = { a^†_j,a^†_k } = 0.
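These relations can be verified numerically by constructing the operators exactly as written; the following sketch (Python with NumPy, with M = 4 orbitals chosen for illustration) performs the check:

import numpy as np
from functools import reduce

M = 4                                              # number of orbitals (illustrative)
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
sigma_plus = np.array([[0.0, 1.0], [0.0, 0.0]])    # (sigma_x + i sigma_y)/2

def a(j):
    """Annihilation operator a_j = sigma^+_j (x) sigma^z_{j-1} (x) ... (x) sigma^z_1."""
    factors = [sigma_plus if k == j else (Z if k < j else I2) for k in range(M, 0, -1)]
    return reduce(np.kron, factors)                # site M is the leftmost tensor factor

ops = {j: a(j) for j in range(1, M + 1)}
anti = lambda A, B: A @ B + B @ A
ok = all(np.allclose(anti(ops[j], ops[k].conj().T), np.eye(2**M) * (j == k)) and
         np.allclose(anti(ops[j], ops[k]), 0)
         for j in range(1, M + 1) for k in range(1, M + 1))
print(ok)                                          # True: {a_j, a_k^dag} = delta_jk and {a_j, a_k} = 0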
Many other antisymmetrization techniques <cit.> have been designed that are computationally more efficient in cases where the products of creation and annihilation operators in the Hamiltonian appear in combinations with some locality restriction with respect to the orbital index.
A convenient choice for the mean-field reference state |ψ_I⟩ is a Hartree-Fock state, corresponding to a Slater determinant of single-particle orbitals achieving the lowest energy expectation value. The Thouless theorem <cit.> shows how to prepare any desired Slater determinant state starting from any other Slater determinant state. Let α_p( r) label the original orbitals and let β_p( r) label the new orbitals. We take a^†_p, a_p to be the creation and annihilation operators for α_p( r), and b^†_p, b_p to be the creation and annihilation operators for β_p( r).
Let N be the number of particles in our system of interest. The aim is to derive a simple relation between b^†_N ⋯ b^†_1 | vac⟩ and a^†_N ⋯ a^†_1 | vac⟩. Without loss of generality, we use a linear transformation to redefine the orbitals β_1( r), ⋯, β_N( r) so that for each p = 1, ⋯, N, we have
b^†_p = a^†_p + ∑_q=N+1^∞ a^†_q u_q,p
for some coefficient matrix u_q,p.
The linear transformation on β_1( r), ⋯, β_N( r) has no effect on b^†_N ⋯ b^†_1 | vac⟩ except for introducing an overall normalization factor. Our convention will ensure that b^†_N ⋯ b^†_1 | vac⟩ and a^†_N ⋯ a^†_1 | vac⟩ have the same normalization.
The Thouless theorem is based on the observation that for each p = 1, ⋯, N,
( a^†_p + ∑_q=N+1^∞ a^†_q u_q,p) F( no a^†_p) | vac⟩
= ( 1 + ∑_q=N+1^∞ a^†_q u_q,p a_p ) a^†_p F( no a^†_p) | vac⟩,
where F( no a^†_p) is an arbitrary function of the creation and annihilation operators where a^†_p does not appear. We then have
b^†_N ⋯ b^†_1 | vac⟩ = ( a^†_N + ∑_q=N+1^∞ a^†_q u_q,N) ⋯( a^†_1 + ∑_q=N+1^∞ a^†_q u_q,1) | vac⟩
= ( 1 + ∑_q=N+1^∞ a^†_q u_q,N a_N ) a^†_N ⋯( 1 + ∑_q=N+1^∞ a^†_q u_q,1 a_1 ) a^†_1 | vac⟩.
This leads to the simple relation,
b^†_N ⋯ b^†_1 | vac⟩ = ( 1 + ∑_q=N+1^∞ a^†_q u_q,N a_N ) ⋯( 1 + ∑_q=N+1^∞ a^†_q u_q,1 a_1 ) a^†_N ⋯ a^†_1 | vac⟩
= exp( ∑_p=1^N ∑_q=N+1^∞ a^†_q u_q,p a_p ) a^†_N ⋯ a^†_1 | vac⟩.
Once the Hartree-Fock orbitals are determined using classical computing, one can prepare a simple N-particle Slater determinant state with orbitals given by the computational basis of the quantum computer and then apply the transformation in Eq. (<ref>)<cit.>.
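The Thouless relation itself can also be checked numerically in a small Fock space; the sketch below (Python with NumPy and SciPy, reusing the Jordan-Wigner representation above, with M = 4 orbitals, N = 2 particles and a random coefficient matrix u chosen purely for illustration) confirms that the two sides agree:

import numpy as np
from functools import reduce
from scipy.linalg import expm

M, N = 4, 2                                        # orbitals and particles (illustrative)
I2, Z = np.eye(2), np.diag([1.0, -1.0])
sigma_plus = np.array([[0.0, 1.0], [0.0, 0.0]])

def a(j):                                          # Jordan-Wigner annihilation operator
    factors = [sigma_plus if k == j else (Z if k < j else I2) for k in range(M, 0, -1)]
    return reduce(np.kron, factors)

ann = {j: a(j) for j in range(1, M + 1)}
cre = {j: ann[j].conj().T for j in range(1, M + 1)}
vac = np.zeros(2**M); vac[0] = 1.0                 # all orbitals empty

rng = np.random.default_rng(0)
u = rng.normal(size=(M + 1, M + 1))                # only u[q, p] with q > N, p <= N is used

# Left side: b^dag_N ... b^dag_1 |vac>, with b^dag_p = a^dag_p + sum_{q>N} a^dag_q u[q, p]
lhs = vac.copy()
for p in range(1, N + 1):
    bdag_p = cre[p] + sum(u[q, p] * cre[q] for q in range(N + 1, M + 1))
    lhs = bdag_p @ lhs

# Right side: exp(sum_{p<=N, q>N} a^dag_q u[q, p] a_p) applied to a^dag_N ... a^dag_1 |vac>
slater = vac.copy()
for p in range(1, N + 1):
    slater = cre[p] @ slater
T = sum(u[q, p] * cre[q] @ ann[p] for p in range(1, N + 1) for q in range(N + 1, M + 1))
print(np.allclose(lhs, expm(T) @ slater))          # True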
§ PHASE DETECTION ALGORITHMS
Quantum phase estimation <cit.> is a well-known example of a phase detection algorithm that can be used to find energy eigenvalues and prepare energy eigenstates of the quantum many-body problem <cit.>. Suppose for the moment that |ψ⟩ is an eigenstate of the unitary operator U with eigenvalue e^2π iθ. Of particular interest is the case where the unitary operator U is the time evolution operator for some Hamiltonian H over some fixed time step Δ t. The goal is to efficiently determine the phase angle θ. Since U|ψ⟩=e^2π i θ|ψ⟩, we have U^2^j|ψ⟩ = e^2π i θ 2^j|ψ⟩ for any nonnegative integer j. Together with the state |ψ⟩, we take n ancilla qubits with each initialized as |0⟩. The resulting state is |0⟩^⊗ n⊗|ψ⟩. The Hadamard gate is a single qubit gate that maps |0⟩ to 1/√(2)(|0⟩ + |1⟩) and maps |1⟩ to 1/√(2)(|0⟩ - |1⟩). The action of the Hadamard gate for a general linear combination of |0⟩ and |1⟩ is determined by linearity. We apply Hadamard gates to each of the ancilla qubits so that we get
1/2^n/2( |0⟩ + |1⟩)^⊗ n⊗|ψ⟩.
For each of the ancilla qubits j = 0, ⋯, n-1, we use the ancilla qubit to control the unitary gate U^2^j. This means that U^2^j is applied when the ancilla qubit j is in state |1⟩, but no operation is performed if the ancilla qubit is in state |0⟩. The result we get is <cit.>
1/2^n/2( |0⟩ + e^2π i θ 2^n-1|1⟩) ⊗⋯⊗( |0⟩ + e^2π i θ 2^0|1⟩) ⊗|ψ⟩ = |f(θ)⟩⊗|ψ⟩,
where
|f(θ)⟩ = 1/2^n/2∑_m=0^2^n-1( e^2π i θ 2^n-1 m_n-1|m_n-1⟩) ⊗⋯⊗(e^2π i θ 2^0 m_0|m_0⟩)
= 1/2^n/2∑_m=0^2^n-1 e^2π i θ m|m_n-1⟩⊗⋯⊗|m_0⟩,
and m_n-1⋯ m_0 are the binary digits of the integer m.
Let k be an integer between 0 and 2^n-1 with binary representation k_n-1⋯ k_0. We note that when θ equals k divided by 2^n, then |f(k/2^n)⟩ is the quantum Fourier transform of the state |k_n-1⟩⊗⋯⊗|k_0⟩,
|f(k/2^n)⟩ = 1/2^n/2∑_m=0^2^n-1 e^2π i k m/2^n|m_n-1⟩⊗⋯⊗|m_0⟩.
We can therefore extract information about the value of θ by applying the inverse quantum Fourier transform to |f(θ)⟩,
QFT^-1|f(θ)⟩ = 1/2^n∑_k=0^2^n-1∑_m=0^2^n-1 e^-2π i (k/2^n-θ) m|k_n-1⟩⊗⋯⊗|k_0⟩.
We see that if θ equals k/2^n for some integer k in the summation, then QFT^-1|f(θ)⟩ equals |k_n-1⟩⊗⋯⊗|k_0⟩. In the general case, we get a superposition of such states |k_n-1⟩⊗⋯⊗|k_0⟩ that is highly peaked for integers k, where k/2^n is close to θ. We simply measure each ancilla qubit and determine k/2^n to obtain an estimate of θ. This is repeated over several trials to build a probability distribution and refine the estimate of θ.
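The resulting measurement distribution can be computed directly from this expression; the short sketch below (Python with NumPy, with n = 5 ancilla qubits and θ = 0.30 chosen for illustration) shows that the distribution is sharply peaked at the multiple of 1/2^n closest to θ:

import numpy as np

n = 5                                    # number of ancilla qubits (illustrative)
theta = 0.30                             # phase to estimate; not an exact multiple of 1/2^n
k = np.arange(2**n)
m = np.arange(2**n)

# amplitude of |k_{n-1}> ... |k_0>  =  (1/2^n) sum_m exp(-2 pi i (k/2^n - theta) m)
amps = np.exp(-2j * np.pi * np.outer(k / 2**n - theta, m)).sum(axis=1) / 2**n
probs = np.abs(amps)**2
print(np.isclose(probs.sum(), 1.0))      # normalized: the inverse QFT is unitary
print(k[np.argmax(probs)] / 2**n)        # 0.3125, the multiple of 1/2^5 closest to theta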
Suppose now that |ψ⟩ is not an eigenstate of U but rather a general superposition of eigenstates |ψ_a⟩ with eigenvalues e^2π iθ_a,
|ψ⟩ = ∑_a c_a |ψ_a⟩.
We can now apply phase estimation to the general state |ψ⟩ in exactly the same manner as before. Let us assume that the separation between each θ_a is large compared to 1/2^n. This ensures that the peaked distributions we get for each eigenvector have negligible overlap. The outcome after measuring the n ancilla qubits will be
|k_n-1⟩⊗⋯⊗|k_0⟩⊗|ψ_a⟩,
for some eigenstate |ψ_a⟩. The probability of |ψ_a⟩ being selected will equal |c_a|^2. The error of quantum phase estimation in determining eigenvalues will scale inversely with 2^n. This arises from the discretization of energy values k/2^n, where k is an integer from 0 to 2^n-1. If we relate U to the time evolution of a Hamiltonian H for time step Δ t, the error in the energy scales inversely with the total time evolution required. This scaling of the uncertainty matches the lower bound one expects from the Heisenberg uncertainty principle.
The error of phase estimation for eigenstate preparation arises from the admixture of terms from different eigenstates,
1/2^n∑_a ∑_m=0^2^n-1 c_a e^-2π i (k/2^n-θ_a) m|k_n-1⟩⊗⋯⊗|k_0⟩⊗|ψ_a⟩.
When the spacing between θ_a is much larger than 1/2^n, then the contamination of other eigenstates will be O(2^-n). For the case when U is the time evolution of a Hamiltonian H for time duration Δ t, then the error of eigenstate preparation scales inversely with the total amount of time evolution needed.
We have mentioned the quantum Fourier transform, but have not yet discussed how it is implemented. It suffices to describe its action on the state |k_n-1⟩⊗⋯⊗|k_0⟩. We again use the notation that k_n-1⋯ k_0 are the binary digits of the integer k. The desired action of the quantum Fourier transform upon |k_n-1⟩⊗⋯⊗|k_0⟩ is
1/2^n/2( |0⟩ + e^2π i k 2^n-1/ 2^n|1⟩) ⊗⋯⊗( |0⟩ + e^2π i k 2^0/ 2^n|1⟩)
= 1/2^n/2∑_m=0^2^n-1 e^2π i k m/2^n|m_n-1⟩⊗⋯⊗|m_0⟩.
The first few steps of the quantum Fourier transform algorithm will actually produce the desired result with the tensor product in the reverse order,
1/2^n/2( |0⟩ + e^2π i k 2^0/ 2^n|1⟩) ⊗⋯⊗( |0⟩ + e^2π i k 2^n-1/ 2^n|1⟩) .
But this can be fixed by pairwise swap gates between qubits 0 and n-1, 1 and n-2, etc.
The quantum Fourier transform begins with the state
|k_n-1⟩⊗|k_n-2⟩⊗⋯⊗|k_0⟩.
We first act upon qubit n-1 with a Hadamard gate and this gives
1/2^1/2( |0⟩ + e^2π i k_n-1 2^n-1/ 2^n|1⟩) ⊗|k_n-2⟩⊗⋯⊗|k_0⟩.
The coefficient in front of |1⟩ equals 1 if k_n-1=0 and equals -1 if k_n-1=1. We use qubit n-2 to apply a controlled phase rotation to qubit n-1 by a phase e^2π i k_n-22^n-2/2^n. The result is
1/2^1/2( |0⟩ + e^2π i (k_n-1 2^n-1 + k_n-2 2^n-2)/ 2^n|1⟩) ⊗|k_n-2⟩⊗⋯⊗|k_0⟩.
We continue in this manner with qubit j applying a controlled phase rotation on qubit n-1 by a phase e^2π i k_j2^j/2^n. After doing this for all of the remaining qubits, we get
1/2^1/2( |0⟩ + e^2π i k/ 2^n|1⟩) ⊗|k_n-2⟩⊗⋯⊗|k_0⟩.
We perform the analogous process for qubits n-2, ⋯, 1. For the qubit 0, we simply apply the Hadamard gate. In the end, we get the desired result,
1/2^n/2( |0⟩ + e^2π i k 2^0/ 2^n|1⟩) ⊗⋯⊗( |0⟩ + e^2π i k 2^n-1/ 2^n|1⟩).
As described above, we now apply swap gates between pairs of qubits 0 and n-1, 1 and n-2, etc. and then we obtain the desired quantum Fourier transform.
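As a check, the product form above can be compared against the defining sum; the following sketch (Python with NumPy, n = 3 for illustration) verifies that the two expressions agree for every computational basis state |k⟩:

import numpy as np
from functools import reduce

n = 3
m = np.arange(2**n)
for k in range(2**n):
    # product form: the factor for qubit j is (|0> + exp(2 pi i k 2^j / 2^n)|1>)/sqrt(2),
    # with j = n-1 as the leftmost tensor factor
    factors = [np.array([1.0, np.exp(2j * np.pi * k * 2**j / 2**n)]) / np.sqrt(2)
               for j in range(n - 1, -1, -1)]
    product_state = reduce(np.kron, factors)
    # direct definition: (1/2^{n/2}) sum_m exp(2 pi i k m / 2^n)|m>
    dft_state = np.exp(2j * np.pi * k * m / 2**n) / 2**(n / 2)
    assert np.allclose(product_state, dft_state)
print("product form matches the discrete Fourier transform of |k> for every k")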
Iterative phase estimation performs the determination of the binary digits of θ one at a time <cit.>. Let |ψ⟩ again be an eigenstate of U with eigenvalue e^2π i θ.
We first consider the case where θ is equal to k/2^n where k is an integer between 0 and 2^n-1. We start with |0⟩⊗|ψ⟩ and apply a Hadamard gate to obtain
1/2^1/2(|0⟩ + |1⟩)⊗|ψ⟩.
We now use the ancilla qubit to perform the controlled unitary operator U^2^n-1. The result is then
|f_0(θ)⟩⊗|ψ⟩,
where
|f_0(θ)⟩ = 1/2^1/2(|0⟩ + e^2π i θ 2^n-1|1⟩).
Applying a Hadamard gate to |f_0(θ)⟩ gives
1/2[( 1 + e^2π i θ 2^n-1) |0⟩ + ( 1 - e^2π i θ 2^n-1) |1⟩] = δ_k_0,0|0⟩ + δ_k_0,1|1⟩.
Therefore, we can determine the digit k_0. Let us assume that we have determined the digits from k_0, ⋯, k_j-1. We can determine k_j by taking
1/2^1/2(|0⟩ + |1⟩)⊗|ψ⟩
and using the ancilla qubit to perform the controlled unitary operator U^2^n-j-1 followed by the phase gate
|0⟩⟨0| + e^- 2 π i (k_j-12^-2 + ⋯ + k_02^-j-1)|1⟩⟨1|,
on the ancilla qubit. This phase gate removes the complex phase associated with the binary digits k_0, ⋯, k_j-1 that have already been determined. The net result is
|f_j(θ)⟩⊗|ψ⟩,
where
|f_j(θ)⟩ = 1/2^1/2(|0⟩ + e^2π i θ 2^n-j-1 - 2 π i (k_j-12^-2 + ⋯ + k_02^-j-1) |1⟩).
Applying a Hadamard gate to |f_j(θ)⟩ gives
δ_k_j,0|0⟩ + δ_k_j,1|1⟩.
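When θ = k/2^n, this digit-by-digit extraction can be simulated classically in a few lines; the sketch below (Python with NumPy, with n = 6 and an arbitrary k chosen for illustration) follows the steps above and recovers the binary digits in order:

import numpy as np

n, k = 6, 0b101101                     # theta = k / 2^n  (illustrative)
theta = k / 2**n

digits = []                            # recovered digits k_0, k_1, ...
correction = 0.0                       # k_{j-1} 2^{-2} + ... + k_0 2^{-j-1}
for j in range(n):
    phase = 2 * np.pi * (theta * 2**(n - j - 1) - correction)
    amp0 = (1 + np.exp(1j * phase)) / 2        # amplitude of |0> after the final Hadamard
    digit = 0 if abs(amp0)**2 > 0.5 else 1     # deterministic when theta = k/2^n
    digits.append(digit)
    correction = correction / 2 + digit / 4    # update to k_j 2^{-2} + ... + k_0 2^{-j-2}
print(digits)                                   # [1, 0, 1, 1, 0, 1] = k_0 ... k_5
print(sum(d * 2**j for j, d in enumerate(digits)) == k)   # True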
For the general case where θ is not equal to k/2^n for some integer k between 0 and 2^n-1, there will be some distribution of values associated with the measurements of the binary digits k_n-1, ⋯, k_0. As with regular phase estimation, the error in energy resolution scales inversely with 2^n and is therefore inversely proportional to the number of operations of U needed. If U is the time evolution of a Hamiltonian H over time step Δ t, then the error in the energy scales inversely with the total time evolution required. Iterative phase estimation is not designed to perform eigenstate preparation. If we start from a general linear combination of energy eigenstates, then the uncertainty in the sequential measurements of the binary digits k_n-1, ⋯, k_0 arising from the different eigenvalues e^2π i θ_a will prevent the algorithm from functioning as intended.
The rodeo algorithm is another phase detection algorithm <cit.> that shares some structural similarities with iterative phase estimation. In contrast to iterative phase estimation, however, the rodeo algorithm is efficient in preparing energy eigenstates starting from a general initial state. Let H be the Hamiltonian for which we want to prepare energy eigenstates. To explain the algorithm, we first consider the case where the initial state is an eigenstate of H. We call it |ψ_j⟩ with eigenvalue E_j. We use one ancilla qubit and start with the state
|0⟩⊗|ψ_j⟩,
and apply the Hadamard gate on the ancilla qubit,
1/2^1/2( |0⟩ + |1⟩) ⊗|ψ_j⟩.
We then use the ancilla to perform the controlled unitary for e^-i H t_1 and apply the phase gate
|0⟩⟨0| + e^iEt_1|1⟩⟨1|,
on the ancilla. This produces
1/2^1/2( |0⟩ + e^-i(E_j-E)t_1|1⟩) ⊗|ψ_j⟩.
We now apply a Hadamard gate to the ancilla qubit, which then gives
1/2[ ( 1 + e^-i(E_j-E)t_1) |0⟩+ ( 1 - e^-i(E_j-E)t_1) |1⟩] ⊗|ψ_j⟩.
If we measure the ancilla qubit, the probability of measuring |0⟩ is cos^2[(E_j-E)t_1/2] and the probability of measuring |1⟩ is sin^2[(E_j-E)t_1/2]. We call the measurement of |0⟩ a success and the measurement of |1⟩ a failure. We repeat this process for n cycles with times t_1, ⋯, t_n. The probability of success for all n cycles is
∏_k=1^n cos^2[(E_j-E)t_k/2].
If we take random times t_1, ⋯, t_n to be chosen from a Gaussian normal distribution with zero mean and σ standard deviation, then the success probability averaged over many trials will equal
P_n(E) = [1+e^-(E_j-E)^2σ^2/2]^n /2^n.
We see that the peak value is equal to 1 when E_j = E and the width of the peak is O(σ^-1n^-1/2).
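This averaged success probability is easy to verify by direct simulation; the sketch below (Python with NumPy, with illustrative values of E_j, σ and n) averages the product of cos^2 factors over random Gaussian times and compares the result with the closed-form expression:

import numpy as np

E_j, sigma, n = 1.0, 5.0, 8            # eigenvalue, time spread, number of cycles (illustrative)
rng = np.random.default_rng(0)

def p_success(E, trials=200_000):
    t = rng.normal(0.0, sigma, size=(trials, n))
    return np.mean(np.prod(np.cos((E_j - E) * t / 2)**2, axis=1))

def p_analytic(E):
    return (1 + np.exp(-(E_j - E)**2 * sigma**2 / 2))**n / 2**n

for E in [1.0, 1.1, 1.5]:
    print(E, p_success(E), p_analytic(E))   # the Monte Carlo average agrees with the closed form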
Let us now consider a general linear combination of energy eigenstates
|ψ⟩ = ∑_j c_j |ψ_j⟩.
For this case, the probability of success for n cycles is
P_n(E) = ∑_j [1+e^-(E_j-E)^2σ^2/2]^n | c_j |^2 /2^n.
When scanning over the input parameter E, peaks in P_n(E) will appear at places where there are eigenvalues E_j and the overlap with the initial state is not too small. For fixed n, the error of the energy determination scales inversely with σ. Similarly to phase estimation and iterative phase estimation, the rodeo algorithm saturates the Heisenberg bound, where the error in the energy scales inversely with the total duration of time evolution.
In contrast with both phase estimation and iterative phase estimation, the rodeo algorithm is exponentially fast for eigenstate preparation. There are several other energy projection and filtering methods with similar characteristics <cit.>. Once the peak of the eigenstate energy in P_n(E) is located approximately, we set E as the peak value. With E fixed and σ fixed, the error estimates for the eigenvector scale as 1/2^n for small n and accelerate to 1/4^n for asymptotically large values of n <cit.>. The 1/2^n comes from the fact that the arithmetic mean of cos^2(θ) equals 1/2, while the 1/4^n comes from the fact that the geometric mean of cos^2(θ) equals 1/4. In Ref. <cit.>, it was shown that the use of progressively smaller values for the time evolution parameters t_j accelerates the convergence of the rodeo algorithm towards 1/4^n. The main limitation of the rodeo algorithm for large quantum many-body systems is the requirement that the initial state have non-negligible overlap with the eigenstate of interest. This is a difficult problem that is common to nearly all eigenstate preparation algorithms that use measurement projection. Nevertheless, one can use techniques such as adiabatic evolution, variational methods, or some other approach as a preconditioner to significantly increase the overlap with the eigenstate of interest <cit.>.
§ SUMMARY AND OUTLOOK
In this article, we have presented several methods that show the essential features of adiabatic evolution, variational methods, and phase detection algorithms. All of the algorithms have their strengths and limitations, and one common theme is that the techniques can be combined with each other to produce something that is potentially greater than the sum of its parts. For example, adiabatic evolution provides a theoretical foundation for the QAOA variational method. In turn, the variational method can be used to find a good starting Hamiltonian for adiabatic evolution. Both adiabatic evolution and variation methods can be used as an initial-state preconditioner for phase detection algorithms.
There has been great interest by both scientists and the general public on the question of quantum advantage, if and when quantum computers are able to perform tasks exceeding the capabilities of classical computers. It is generally believed that calculations of real-time dynamics and spectral functions of quantum many-body systems are areas ripe for possible quantum advantage. However, the dynamics of some quantum many-body system starting from a trivial initial state is not something that connects directly with real-world phenomena. To make connections with real-world experiments and observations, one also needs the ability to prepare energy eigenstates. It is not clear whether quantum advantage will be achievable for the task of eigenstate preparation. However, this may not be necessary. It may be enough for quantum eigenstate preparation to be competitive with classical computing methods to achieve quantum advantage for calculating the real-time dynamics or spectral functions for real-world applications. The algorithms described in this article provide some of the tools needed, but much more work is needed to address the remaining challenges.
Acknowledgments
The author acknowledges support from the U.S. Department of Energy (DE-SC0021152, DE-SC0013365, DE-SC0023658) and the SciDAC-4 and SciDAC-5 NUCLEI Collaborations. This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. |
http://arxiv.org/abs/2307.04693v1 | 20230710164634 | COMEX: A Tool for Generating Customized Source Code Representations | ["Debeshee Das", "Noble Saji Mathews", "Alex Mathai", "Srikanth Tamilselvam", "Kranthi Sedamaki", "Sridhar Chimalakonda", "Atul Kumar"] | cs.SE | ["cs.SE", "cs.AI"] |
COMEX: A Tool for Generating Customized Source Code Representations
Debeshee Das1§,
Noble Saji Mathews1§,
Alex Mathai2,
Srikanth Tamilselvam2,
Kranthi Sedamaki1,
Sridhar Chimalakonda1 and
Atul Kumar2
1
Indian Institute of Technology Tirupati, India
2 IBM Research, India
{debesheedas, elbonleon, alexmathai98, srikanthtamilselvam, skranthi4444, sridhar.chimalakonda, atulkumar}@gmail.com
August 12, 2023
=====================================================================================================================================================================================================================================================================================================================================================================================
[1]Authors have contributed equally
Learning effective representations of source code is critical for any Machine Learning for Software Engineering (ML4SE) system. Inspired by natural language processing, large language models (LLMs) like Codex and CodeGen treat code as generic sequences of text and are trained on huge corpora of code data, achieving state of the art performance on several software engineering (SE) tasks. However, valid source code, unlike natural language, follows a strict structure and pattern governed by the underlying grammar of the programming language. Current LLMs do not exploit this property of the source code as they treat code like a sequence of tokens and overlook key structural and semantic properties of code that can be extracted from code-views like the Control Flow Graph (CFG), Data Flow Graph (DFG), Abstract Syntax Tree (AST), etc. Unfortunately, the process of generating and integrating code-views for every programming language is cumbersome and time consuming. To overcome this barrier, we propose our tool - a framework that allows researchers and developers to create and combine multiple code-views which can be used by machine learning (ML) models for various SE tasks. Some salient features of our tool are: (i) it works directly on source code (which need not be compilable), (ii) it currently supports Java and C#, (iii) it can analyze both method-level snippets and program-level snippets by using both intra-procedural and inter-procedural analysis, and (iv) it is easily extendable to other languages as it is built on tree-sitter - a widely used incremental parser that supports over 40 languages. We believe this easy-to-use code-view generation and customization tool will give impetus to research in source code representation learning methods and ML4SE. The demonstration of our tool can be found at <https://youtu.be/GER6U87FVbU>.
Representation Learning, Static Analysis
§ INTRODUCTION
Source code representation learning is the task of effectively capturing useful syntactic and semantic information
embedded in source code <cit.>. It forms the backbone of ML pipelines for various SE tasks such as code classification, bug prediction, code clone detection and code summarization. Therefore, representing source code for use in ML models, with minimal loss of important information is an active research area <cit.>. It is important to note that source code is different from natural language as it follows an unambiguous structure and pattern, usually adhering to a strict underlying grammar. Hence, while creating representations for source code, it is important to infuse information from this unique structural aspect. To address this, many works including GraphCodeBERT<cit.> and GREAT<cit.> have explored leveraging code-views as a means to learn source code representations. Unfortunately, the process of generating code-views for multiple programming languages and customizing them for various SE tasks is often a time consuming process.
Most available tools are (a) positioned for analysis on compiled or compilable code (and not incomplete or uncompilable source code), (b) are specific for a single language, and (c) are not able to support both intra-procedural and inter-procedural analysis.
To address these concerns, we propose COMEX - a framework that (a) works directly on source code to generate and combine multiple code-views, (b) supports Java and C# (with planned support for other languages) and (c) works for both method-level and program-level snippets using intra-procedural and inter-procedural analysis. Since it is based on a single parser package (tree-sitter[<https://tree-sitter.github.io/tree-sitter/>]), it can be extended to new languages without additional dependencies.
As of today, most state-of-the-art models like CodeGen <cit.> and Codex <cit.> treat source code like free flowing text. Though this assumption helps simplify the required data pre-processing, it loses out on many structural aspects of code. Recently, works like NSG <cit.> have shown the benefits of using code structure. NSG leverages weak supervision using a syntax tree to generate full-length syntactically valid method bodies. Their results showcase that using this technique, even a small model (63 million parameters) can outperform LLMs like Codex (12 billion parameters).
To fuel research on similar grounds, we hope that with this package, we have lowered the entry barrier for researchers to easily integrate and leverage code-views while learning source code representations.
§ RELATED WORK
Several ML4SE works leverage code-views such as the AST <cit.>, the CFG <cit.>, the DFG <cit.>, and their combinations (CDFG <cit.>), to learn better code representations and improve performance on downstream SE tasks <cit.>.
Unfortunately, most available tools that create such views are specific to a single language.
SOOT <cit.>, a popular static analysis tool for Java, requires the input Java code to be compilable and for all definitions to be available. But many existing research datasets are mostly method-level datasets with incomplete snippets and definitions <cit.>. Although python_graphs <cit.>, a framework for generating program graphs for Python, provides a composite “program graph" with combined information from various typical code-views, it does not provide users the flexibility to combine, reduce or customize the typical code-views as supported by COMEX. Joern is an open-source static analysis tool often used
as a source for intermediate graph representations of code <cit.> with support for Java, Python, C, C++, etc., providing code-views without a means to customize, combine, or easily extend to other languages. It has limited support for inter-procedural control-flow and data-flow analysis, and for interactive exploration and visualization[https://galois.com/blog/2022/08/mate-interactive-program-analysis-with-code-property-graphs/]. COMEX overcomes these limitations by providing support for generation of code-views through static code analysis even for non-compilable code both at function and program level, supporting out-of-the-box composition of views and easy extension to new languages without introducing further language-specific parser dependencies.
§ THE COMEX PACKAGE
COMEX is open-sourced[<https://github.com/IBM/tree-sitter-codeviews>] and also made available as a Python package[<https://pypi.org/project/comex/>]. Additionally, we have exposed a command-line-interface that allows users to conveniently specify the input code-snippet, output format types (dot,json,png) and any required customizations or combinations of different code-views.
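As a usage illustration, the command-line interface can also be driven from a script, as in the sketch below (Python). The option names shown are assumptions inferred from the description above (input snippet, requested code-views, output format) and may differ from the released package; the repository documentation should be treated as authoritative.

import subprocess

# Hypothetical invocation of the comex CLI; option names are assumed, not verified.
subprocess.run(
    ["comex",
     "--lang", "java",                  # language of the input snippet
     "--code-file", "Example.java",     # input source file; it need not be compilable
     "--graphs", "cfg,dfg",             # requested (combined) code-views
     "--output", "png"],                # output format: dot, json or png
    check=True,
)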
An overview of COMEX is depicted in Fig. <ref>. As can be seen, COMEX starts with a code snippet and user-defined configuration as input. The snippet is then passed through a tree-sitter parser to generate a concrete syntax tree (CST). An enhanced symbol table is created by processing the CST, and both of these together are used to create a CFG. Using the CFG, we implement reaching definition analysis (RDA) to generate the DFG. It is important to note that for CFG and DFG we implement both intra-procedural and inter-procedural analysis. In what follows, we elaborate on the details of the different code-views that we make available through COMEX.
§.§ Abstract Syntax Tree
We generate an AST by filtering some of the CST nodes provided by tree-sitter. Trivial nodes such as semicolons (;) and braces ({,}) are dropped, while non-trivial nodes
such as field_access or method_invocation are retained.
We also provide customizations for the AST like (i) a collapsed AST and (ii) a minimized AST. A ‘collapsed AST’ is one where all occurrences of the same variable are collapsed into one node. Whereas, in a ‘minimized AST’, certain node types can be ‘blacklisted’ based on the purpose of the code representation. The rationale behind these customizations is to provide smaller ASTs without losing out on critical information. This results in fewer AST nodes, thus reducing graph sizes which helps make Graph Neural Network (GNN) <cit.> approaches to source code representation learning computationally tractable.
§.§ Control-Flow Graph
Statement-level control-flow - Using the tree-sitter generated CST and the enhanced symbol table, we proceed to create our CFG code-view. A typical CFG consists of a network of basic blocks, where each block is a set of instructions that execute sequentially with no intermediate control jump. Hence, constructing a CFG is usually a two-step process, where we first identify the basic blocks and then determine the control-flow edges between them.
However, in COMEX, we choose to produce a statement-level CFG that maps the control-flow between statements (and not blocks). This is useful for certain ML-based approaches and for generating the DFG as elaborated in (§<ref>).
The CFG for both Java and C# is a statement-level approximation of control-flow.
Inter-procedural control-flow - We support inter-procedural control-flow by statically analyzing all class definitions, object reference declarations, abstraction and inheritance specifications, method and constructor signatures and overloading. Fig. <ref> shows a code snippet with two class definitions, ClassA (A) and ClassB (B), apart from the Main class (C). The CFG edges are highlighted in red. The diagram depicts the change of control-flow during object instantiation to the corresponding class definition via “constructor_call" edges D (29 → 1) and E (30 → 6). As an explicit constructor is available for ClassB, the control flows through the constructor before returning to the site of instantiation via the “class_return" edge F (8 → 30) . In case of method or constructor overloading, the function signatures are compared to determine the control-flow edges. When methods are called on object references, they are linked with the corresponding definition by matching the function signatures and available static references within the corresponding class. Nested function calls are also handled by tracking and mapping back all statically available signatures of function calls and their definitions.
§.§ Data-Flow Graph
Using the CFG generated in (§<ref>), we perform data-flow analysis to create our DFG code-view.
One of the fundamental techniques in data-flow analysis is Reaching Definition Analysis (RDA)
where we identify the set of definitions that may reach a program point, i.e., the definitions that may affect the value of a variable at that point. A statement-level DFG is then generated using this information. Using the RDA-based implementation addresses many of the significant drawbacks that we found in the data-flow extraction logic used by GraphCodeBERT <cit.> such as lack of inter-procedural analysis, incorrect handling of scope information as well as data-flow through loops. It should be noted that the RDA-based analysis is inherently more computationally expensive.
In addition to method level analysis, we also support an out-of-the-box program-level DFG via a two-phase RDA. The first phase is the typical RDA algorithm for each method, followed by another iteration of RDA that also takes into consideration the inter-procedural control-flow. This implementation helps track changes made to variables that are passed as parameters via method invocations. This is only performed for non-primitive data-types since primitive data-types are passed by value in Java and C#. A full-blown alias analysis, which precisely determines all possible aliasing relationships can be challenging and computationally expensive. We hence support a partial alias analysis technique that approximates the possible memory references in a program. We also provide two additional data-flow relations - “LastDef" and “LastUse". Enabling “LastDef" results in edges that link between re-definitions of variables as well as edges between declarations and definitions of variables. Similarly, “LastUse" links the current use of a variable to the last program point where it was read. These relationships help add more edges in those method-level snippets that mainly use global variables which are not defined in the method body.
§.§ Combinations and Customizations
In addition to generating code-views, COMEX can also combine and customize multiple code-views into a single graph.
For example, a combination of CFG and DFG would generate the two code-views separately and then combine them based on unique node identifiers as shown in Fig. <ref>. Additionally, as we used just one parser package, we are able to implement this feature using a single module (CombinedDriver) that works seamlessly across all languages. COMEX is currently capable of generating over 15 different customized representations[Please refer to List-Of-Views.pdf (https://github.com/IBM/tree-sitter-codeviews/blob/main/List_Of_Views.pdf) in the repository for a complete list].
§ DISCUSSION AND LIMITATIONS
COMEX was tested for robustness by generating and validating the code-views obtained on the large datasets popularly used for benchmarking ML-based SE tasks (CodeNet <cit.>, CodeSearchNet <cit.> and <cit.>). Many of these datapoints have missing definitions and are not compilable, but their code-views were successfully generated as long as they were free of syntax errors. However, we are unable to provide a very accurate alias analysis that usually works only for compilable code because we support non-compilable input code snippets. Instead we provide a partial alias analysis. Among the aforementioned datasets, only <cit.> has C# datapoints which is why we expect our implementation of Java code-views to be more robust than our C# implementation.
§ CONCLUSION AND FUTURE WORK
In source code representation learning research, there are many notable works that exploit code-specific properties like control-flow, data-flow, read-write dependencies, etc., in addition to treating code as regular natural language text. To this end, we believe that COMEX will enable researchers and developers in this domain to extract and customize structural information from code-views for new methods of representation learning. COMEX provides a framework which can be extended to support more code-views and their combinations and can be easily extended to many other popular languages like Python and C++ which can spur research in ML4SE and effective source code representation learning.
|
http://arxiv.org/abs/2307.04084v1 | 20230709022832 | A Sustainability Roadmap for C$^3$ | [
"Martin Breidenbach",
"Brendon Bullard",
"Emilio Alessandro Nanni",
"Dimitrios Ntounis",
"Caterina Vernieri"
] | hep-ex | [
"hep-ex",
"physics.acc-ph"
] |
A Sustainability Roadmap for C^3
Martin Breidenbach, Brendon Bullard, Emilio Alessandro Nanni, Dimitrios Ntounis, Caterina Vernieri
====================================================================================
§ INTRODUCTION
An electron-positron collider gives a unique opportunity to study the Higgs boson's properties with unprecedented precision and also provides an exceptionally clean environment to search for subtle new physics effects <cit.>. A number of different "Higgs factory" proposals, based on linear and circular colliders, are now under consideration. All of these provide collisions at center of mass energies in the range of 240-370 GeV, and some also are capable of reaching higher energies.
A high-energy particle collider is a large energy-consuming research facility. As such, it is important to balance its scientific importance against its environmental cost. The environmental impact of large accelerators has been analyzed in the recent Snowmass 2021 study <cit.> of the future of particle physics in the US <cit.>. The papers <cit.> have examined the environmental cost of particular Higgs factory proposals, though often concentrating on particular elements of the total cost.
In this paper, we attempt a comprehensive evaluation of the carbon cost of the Cool Copper Collider (C^3) Higgs factory proposal <cit.> over its full lifetime, including costs from construction and from operation over the proposed timeline. The structure of this paper is as follows: in Section <ref>, we briefly review the design of C^3. In Section <ref>, we review the physics reach for C^3 and other Higgs factory proposals and introduce a metric for balancing carbon impact against the physics impact of each proposal. In Section <ref>, we analyze the power costs of operation of C^3 and describe methods for modifying the power design of the accelerator that would lead to substantial savings with little impact on the physics performance. In Section <ref>, we analyze the carbon impact of the construction of C^3 and emphasize that cut-and-cover construction, as opposed to construction in a deep tunnel, has significant advantages. In Section <ref>, we discuss options for the source of electrical power for the laboratory. In Section <ref>, we bring these analyses together to estimate the total carbon footprint of C^3. Using information from available studies and design reports, we estimate the carbon impact of other Higgs factory proposals and compare these to C^3 in the framework described in Section <ref>.
§ REVIEW OF THE ACCELERATOR DESIGN
C^3, recently proposed <cit.>, is a linear facility that will first operate at 250 GeV center-of-mass collisions. Immediately after, without further extension of the linac, it will run at 550 GeV with an RF power upgrade. The high energy operations will enable the exploration of the Higgs-top coupling, and provide direct access to the Higgs self-coupling with double Higgs production <cit.>. Furthermore, the beam polarization, which exploits the strong dependence of electroweak processes on the chirality of the initial state particles, will offer unique insights into the underlying physics, acting as a new tool for discovery <cit.>. This offers a strong complementarity with proton and circular colliders, where beam polarization is not possible.
C^3 utilizes a radically different approach to linear accelerators to build a collider with high gradient and high RF efficiency, and thus lower capital and operating costs <cit.>. C^3 is based on a distributed coupling accelerator concept, running under liquid nitrogen (LN) <cit.>, that has led to an optimized accelerating gradient and minimized breakdown problems with respect to earlier designs based on normal conducting technologies. This has yielded an overall optimization of the gradient at 70 and 120 MeV/m for the 250 GeV and 550 GeV operating points, respectively <cit.>. Much higher energies are possible if length is not the major consideration. The fundamental parameters, assumed for the analysis in this paper, are shown in Table <ref>.
By far the major development to date is the actual distributed coupling accelerator structure. C^3 will use C-band (5.712 GHz) standing wave RF accelerating structures that are 1 m long. Each has an RF waveguide to bring power in, and in the more probable operating modes, splits RF power evenly between the beam and dissipation in the structure with 43% beam loading. Operating at 80 K brings the shunt impedance up to 300 MΩ/m, allowing for efficient operation at 120 MeV/m. These gradients have been demonstrated at C-band <cit.> and with an electron beam in an X-Band (11.424 GHz) structure on the SLAC XTA beamline <cit.>. The C-band structure has been tested at low power at SLAC and at high power without beam at Radiabeam <cit.>. The gradient results in a collider with a 550 GeV center-of-mass energy capability on an 8 km footprint.
A pre-conceptual design for the overall linac cryogenics has been developed that includes the design for the CryoModules. For the 250 GeV and 550 GeV design, each linac will have 3 re-liquification cryoplants. LN will flow out along the linac in both directions, so there are 6 flow runs. The LN will be above the raft structures, with an initial velocity of ∼0.03 m/s. The LN will cool the accelerator structures by nucleate boiling with a power density of 0.4 W/cm^2, producing saturated vapor which counter-flows back to the cryoplant. Each cryo-run is about 450 meters in length. The vapor velocity near the cryoplant is ∼3 m/s.
§ COMPARISON OF HIGGS FACTORY PHYSICS REACH
Among the colliders being evaluated by the community, the International Linear Collider (ILC) <cit.>, based on superconducting RF technology, has the most advanced design <cit.>, and the ILC is currently under consideration for construction in Japan.
CERN is pursuing as its main strategy a large circular collider, the FCC <cit.>, and China is planning a similar circular collider, the CEPC <cit.>. Each of these circular colliders would require a tunnel with circumference of the order of 100 km to limit synchrotron radiation. Still, though, the expected instantaneous luminosity drops off significantly above center-of-mass energies of 350–400 GeV.
A different alternative is to construct a compact linear collider based on high gradient acceleration. CERN is also pursuing such a proposal, CLIC <cit.>, that would operate at a collision energy of 380 GeV.
The carbon footprint of the proposed future Higgs factories should be assessed relative to the expected physics reach, which has been reviewed most recently in the context of the Snowmass Community process <cit.>. The primary physics goal of a future Higgs factory is the determination of the total Higgs width and Higgs couplings with per-cent or sub-per-cent precision. A reasonable figure of merit to gauge the physics reach of each machine is the expected level of precision for each of these measurements. We note that evaluating the projected measurement precision accounts for the fact that different beam configurations (center-of-mass energy and beam polarization) have a strong impact on the physics reach of each of those machines. These differences in precision are not accounted for when comparing the total number of Higgs bosons produced alone <cit.>.
The physics reach at colliders increases with the center-of-mass energy, since different Higgs boson production mechanisms become accessible. At 250 GeV center-of-mass energy operations the main Higgs boson production mechanism is associated production with a Z boson (→ ZH), enabling a model-independent determination of the Higgs boson total width. Higgs boson production via the W-boson fusion reaction e^+e^-→νν̅H is accessible at √(s)∼500 GeV, where the only visible signals in the final state come from Higgs boson decays. This allows Higgs boson measurements governed by different systematic effects, complementary to the 250 GeV data, as well as opportunities to study effects such as separation of H → gg/bb̅/cc̅ decays and CP violation in H →τ^+τ^- <cit.>. Importantly, at high center-of-mass energies, double Higgs boson production in the ZHH channel opens up, providing direct access to the Higgs boson self-coupling λ_3. At circular machines, given the energy limitations, double Higgs boson production mechanisms are not accessible, thus allowing only for indirect and model-dependent measurements of λ_3, through loop effects in single-Higgs production.
The use of longitudinal beam polarization offers unique advantages for effective precision measurements at a linear collider, since the interaction cross sections at an e^+e^- collider have strong dependencies on beam polarization.
It has been demonstrated that at 250 GeV center-of-mass energy, the ultimate precision reach in the determination of Higgs couplings, through a Standard Model Effective Field Theory (SMEFT) analysis, for an integrated luminosity of 2 ab^-1 with polarized beams, has comparable sensitivity to 5 ab^-1 with unpolarized beams, with most of the gain coming from e^- polarization alone <cit.>. The main effect of beam polarization is to discriminate the effect of different SMEFT operators that contribute to the Higgs boson coupling. There is a similar gain of about a factor of 2.5 from discrimination of the effects of the operators contributing to the WWγ and WWZ couplings, which also enter the SMEFT analysis.
The positron polarization becomes more relevant at higher center-of-mass energies. For instance, W-boson fusion reactions, such as e^+e^-→νν̅H, proceed only from e_L^-e_R^+ initial states, providing a cross-section (or, equivalently, effective luminosity) enhancement of ∼ 2.5 for typical polarizations foreseen at future linear machines <cit.>. Here positron polarization makes a significant contribution. This implies that the same number of Higgs bosons can be produced through this process with only ∼ 40 % of the integrated luminosity, compared to having unpolarized beams.
Moreover, beam polarization at high energy enables the suppression of relevant backgrounds, such as the dominant e^+e^-→ W^+W^- background for positive (negative) electron (positron) beam polarization, increasing the signal-over-background ratio and allowing the precise measurement of the rate of other backgrounds, as well as the reduction of detector-related systematic uncertainties, with combined measurements of datasets with four distinct initial-state polarization configurations. These effects collectively indicate the increased precision reach that beam polarization provides for linear machines <cit.>.
Additionally, electron (primarily) and positron (secondarily) polarization enhances the precision in the extraction of the Higgs couplings, compared to having unpolarized beams: as noted above, a polarized initial state can yield an effective luminosity improvement factor of up to ∼ 2.5 for linear machines, allowing the same precision on various Higgs couplings to be reached with ∼ 40 % of the integrated luminosity.
For these reasons, in this analysis we propose a comparison of the carbon footprint of collider concepts relative to their expected precision in Higgs coupling measurements. Table <ref> summarizes the projected relative precision for Higgs boson couplings measurements at each collider combined with projected results from the HL-LHC. As can be seen, the overall physics reach of all proposed Higgs factories is similar <cit.> for the 240-250 GeV operations, and additional measurements become accessible for the higher center-of-mass energy runs at linear colliders. We also compare the Higgs Factory proposals in terms of total energy consumption and carbon emissions, for both construction activities and operations, with the latter being the most relevant number when evaluating each project's impact on the global climate.
We then present an estimate of energy consumption and carbon footprint per unit of physics output. This is achieved by taking the average of the relative precision over all Higgs couplings, weighing them by the relative improvement in their measurement with respect to HL-LHC:
⟨δκ/κ⟩ = ∑_iw_i(δκ/κ)_i/∑_iw_i
where the sum runs over the columns of Table <ref> and the weight is defined as:
w = [(δκ/κ)_HL-LHC-(δκ/κ)_HL-LHC+HF]/(δκ/κ)_HL-LHC+HF
This definition weights measurements by their relative improvement over HL-LHC when combining the HL-LHC and future Higgs Factory (HF) results. Qualitatively, measurements that minimally improve those of HL-LHC are assigned weights near zero, while HF measurements with high precision or large improvement over HL-LHC are assigned larger weights. While other weighting schemes could be used, we argue that Equation <ref> is unbiased towards the type of physics measurement (e.g. Yukawa, self-coupling, vector coupling) and it emphasises the individual strengths of each collider facility.
For the estimation of the weighted average precision, the hcc̅ coupling was excluded, since there is no estimate for HL-LHC, whereas we assume that the hhh coupling for CEPC can be measured with the same precision as for FCC. The weighted average precision for each collider is given in the last row of Table <ref>.
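For reference, the weighting scheme defined by the two equations above can be implemented in a few lines, as in the sketch below (Python with NumPy); the input numbers shown are placeholders to illustrate the calculation and are not the values of the table referenced above:

import numpy as np

def weighted_average_precision(hl_lhc, combined):
    """Implements the two equations above: weights are the relative improvement
    of the combined HL-LHC + Higgs-factory precision over HL-LHC alone."""
    hl_lhc, combined = np.asarray(hl_lhc, float), np.asarray(combined, float)
    w = (hl_lhc - combined) / combined
    return np.sum(w * combined) / np.sum(w)

# Placeholder precisions (in percent) purely to illustrate the call; not values from the table.
hl_lhc_prec   = [1.5, 3.0, 2.5, 4.0]
combined_prec = [0.5, 1.0, 2.0, 1.0]
print(weighted_average_precision(hl_lhc_prec, combined_prec))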
§ POWER CONSUMPTION AND OPTIMIZATIONS
The most obvious way to reduce the carbon impact of a major facility is to minimize the amount of power that it consumes, thereby minimizing the associated emissions from energy production. This is firmly within the means of the facility designers and crucially does not rely on grid electrification. The nominal operating parameters for C^3-250 are given in Table <ref>.
Several avenues can be pursued to optimize operational power requirements. Improvements in luminosity or reduction in power consumption are possible through the development of ancillary technology by increasing the RF source efficiency, increasing the efficiency of powering the accelerating structures or modification of beam parameters to increase luminosity. At present the main linac requires ∼100 MW of power with 40 MW for the RF sources and 60 MW for the cryogenics.
For the RF sources, the concept utilizes an overall RF system efficiency of 50% which is in line with present high power RF sources that are designed with efficiency in mind. However, significant advances in modern design techniques for klystrons are increasing the klystron amplifier's ultimate efficiency significantly with the inclusion of higher order mode cavities, multi-cell outputs and advanced multi-dimensional computational tools. For example, designs now exist for a 50 MW class RF source<cit.> approaching an amplifier efficiency of 70%. Multi-beam RF sources, reducing the beam perveance, have advanced design efforts exceeding 80% efficiency<cit.>. These results reinforce modern understanding on the limits of klystron efficiency <cit.> which indicate a klystron amplifier efficiency of 70-80% is possible, leading to an overall RF source efficiency of 65%.
RF pulse compression, presently not in the baseline, is also a well known technique for powering high gradient structures. For C^3, pulse compression is particularly useful due to the impact of power loss at cryogenic temperatures and due to the relatively long fill time for a copper structure operating at cryogenic temperatures. In a previous study<cit.>, it was found that low factors of pulse compression, which preserve RF efficiency in the compressor<cit.>, improve the overall efficiency of the system by 30%. Recently, additional efforts have been made to realize the extremely high Q cavities required for pulse compression with cryogenically cooled RF structures <cit.>; these include concepts operating at room temperature and inside the cryostat at 80 K.
For the baseline design <cit.> we anticipate operation with 700 ns and 250 ns flat tops respectively for gradients of 70 and 120 MeV/m and a constant power dissipation of 2.5 kW/m at 120 Hz. Figure <ref> and Figure <ref> show the RF power, dissipated energy and gradient during the pulse. While these flat top lengths were selected to limit the challenges of breakdown, increasing the flat top length and reducing the repetition rate should be investigated in order to reduce the thermal load on the linac. At present, the thermal balance between the structure fill/dump time and the flat top is approximately 50% (equal thermal load). If we were to extend the flat top lengths by a factor of two and reduce the repetition rate by a factor of two, the thermal dissipation in the main linac would decrease by 25%. This improvement would have little effect on the overall design of the accelerator, and would be acceptable if the breakdown rates remain low enough. Proving that this is possible will require high gradient testing of structures with 1400 ns and 500 ns respectively.
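The 25% figure follows from simple bookkeeping, as the short sketch below (Python) makes explicit under the stated assumption of an initially equal split between fill/dump and flat-top losses:

# Baseline: fill/dump and flat-top each contribute half of the main-linac thermal load.
fill, flat = 0.5, 0.5
# Doubling the flat top doubles its per-pulse energy; halving the repetition rate
# halves the per-second load from both contributions.
new_load = 0.5 * (fill + 2.0 * flat)
print(1.0 - new_load)          # 0.25, i.e. the quoted 25% reduction in thermal dissipation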
The beam current of C^3 is relatively low thanks to the large bunch spacing and efficient accelerating structures. One could pursue the possibility of reducing the bunch spacing to increase the current. However, this will require compatibility studies with the detector design. Here we consider the scenario where the bunch spacing is reduced by a factor of two. This would keep a bunch spacing of >1 ns for both C^3-250/550, resulting in a decrease of 25% for the cryogenics power. The RF power required would only decrease by 20% because the peak RF power required would be slightly higher during the RF pulse flat top to compensate for the additional current.
We note that these approaches can all be combined for mutual benefit as shown in the last row of Table <ref>. The demonstration R&D plan <cit.> will be able to investigate these approaches and lead to potential power savings.
§ CARBON IMPACT OF CONSTRUCTION
Under the assumption that the electric grid will be successfully de-carbonized by 2040, as is the goal of many international climate plans, construction, rather than operations, may well dominate the climate impact of a new particle physics facility <cit.>.
For FCC it is projected that the whole accelerator complex[The main tunnel plus the additional buildings on the site, the materials for the accelerator and detectors, assuming a main tunnel length of 97.7 km (the updated FCC design anticipates 91 km).] will have a carbon impact similar to that of the redevelopment of a neighbourhood of a major city <cit.>. This indicates that the environmental impact of any future collider facility is going to receive the same scrutiny as that of a major urban construction project.
The bottom-up analysis in <cit.> derives an estimate of global warming potential (GWP) for the main tunnel material (concrete) manufacture alone to be equivalent to the release of 237 ktons of CO_2 equivalent (CO_2e). An alternative top-down analysis is instead dependent on the character of the earth to be excavated, leading to estimates ranging from 5-10 kton CO_2e/km of tunnel construction and total emissions of 489-978 kton CO_2e[Contributions from many bypass tunnels, access shafts, large experimental caverns, and new surface sites are excluded.].
A life cycle assessment of the ILC and CLIC accelerator facilities is being performed by ARUP <cit.> to evaluate their holistic GWP, so far providing a detailed environmental impact analysis of construction. The components of construction are divided into classes: raw material supply, material transport, material manufacture, material transport to work site, and construction process. These are labelled A1 through A5, where A1-A3 are grouped as materials emissions and A4-A5 are grouped as transport and construction process emissions. The total GWP for ILC and CLIC is taken to be 266 and 127 kton CO_2e <cit.>, respectively[We use the emissions figures associated to the CLIC drive-beam design, which is more efficient than the alternative design utilizing only klystrons for RF power.]. The approximate construction GWP for the main tunnels are 6.38 kton CO_2e/km for CLIC (5.6m diameter) and 7.34 kton CO_2e/km for ILC (9.5m diameter); the FCC tunnel design is similar to that of CLIC, so 6.38 kton CO_2e/km is used for the calculation of emissions for both FCC and CEPC. While a comprehensive civil engineering report is unavailable for FCC and CEPC, we estimate the concrete required for klystron gallery, access shafts, alcoves, and caverns to contribute an additional 30% of emissions, similar to what is anticipated for CLIC. The analysis indicates that the A4-A5 components constitute 20% for CLIC and 15% for ILC. In the absence of equivalent life cycle assessment analysis for FCC and CEPC, we account for the A4-A5 contributions as an additional 25%. A summary of these parameters is given in Table <ref>.
The C^3 tunnel will be about 8 km long with a rectangular profile in each of its component systems. Assuming a cut and cover approach, all the excavated material will be replaced to yield a small berm. We estimate that for the whole accelerator complex only about 50 thousand cubic meters of spoil for the experimental hall will have to be relocated. Figure <ref> shows a schematic of the cross section, where the klystron gallery is situated directly above the accelerator hall with sufficient concrete shielding to allow constant access to the klystron gallery during operation. The application of a top-down estimate of 6-7 kton CO_2e/km obtained from the ARUP report is not appropriate for the surface site due to the differing cross section geometries of the accelerator housing. To allow for a fair comparison among facilities, we take the same basic assumptions of construction materials. In particular, that construction uses a mix of CEM1 C40 concrete and 80% recycled steel, the GWP of concrete is taken to be 0.18 kg CO_2e/kg concrete with density 2400 kg/m^3 <cit.>, and 85%/15% of emissions originate from concrete/steel production. Taking into account construction of the main linacs, injector linacs, damping rings, beam delivery system, and experimental hall, the total volume of construction material is estimated to be about 260,000 m^3 (consisting mostly of concrete by volume). This leads to a GWP of 133 kton CO_2e for A1-A3 components and GWP per unit length of the main linac of around 17 kton CO_2e/km. Notably, this is roughly a factor 2 larger than the GWP/km of main tunnel construction of ILC and CLIC; this suggests further tunnel geometry optimizations are achievable with a detailed engineering study. The surface site construction eliminates the need for additional infrastructure (e.g. access tunnels and turnarounds) and greatly reduces the complexity of the construction process, which we estimate to account for an additional 10%[This estimate is half the A4-A5 component associated to tunnelled facilities and is expected to overestimate the improvement associated to a cut and cover approach, due to significant reduction to spoil transport and operation of a boring machine] to the GWP. This yields a final estimate of 146 kton CO_2e for civil engineering.
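These totals can be reproduced from the stated inputs; the short sketch below (Python) repeats the arithmetic, treating the material volume as effectively all concrete:

volume_m3    = 260_000     # total construction material, mostly concrete by volume
density      = 2400        # kg per m^3 of concrete
gwp_concrete = 0.18        # kg CO2e per kg of CEM1 C40 concrete

concrete_kt = volume_m3 * density * gwp_concrete / 1e6   # ~112 kton CO2e from concrete alone
a1_a3 = concrete_kt / 0.85                                # concrete taken as 85% of A1-A3 emissions
total = a1_a3 * 1.10                                      # +10% for the construction process
print(round(a1_a3), round(total), round(a1_a3 / 8.0, 1))
# ~132, ~145 and ~16.5: consistent with the quoted 133, 146 and ~17 kton CO2e figures up to rounding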
Unlike the other Higgs factories under evaluation, C^3 does not yet have a selected site; the collider could in principle be sited anywhere in the world.
A community decision will be made regarding the actual site selection, although we note that C^3 offers a unique opportunity to realize an affordable energy-frontier facility in the US in the near term, and the entire program could be sited within the existing US National Laboratories. The tunnel layout would be adapted to the chosen location, and a cut and cover site, suitable for a horizontal layout, is also extremely attractive for both cost and schedule reasons.
The details of the siting options at FNAL are discussed in <cit.>. Sites such as the DOE Hanford site located in the Pacific Northwest have room to accommodate even bigger footprint machines within their site boundary.
§ POSSIBLE MITIGATION STRATEGY DURING OPERATIONS
The carbon footprint of the electricity production required to meet the total site power requirement of 150-175 MW can be substantial. The average carbon intensity of energy production since May 2022 is 194 and 381 g CO_2/kWh for the CAISO and PJM power grids, respectively <cit.>. This would correspond to emissions of 5.7 and 11.2 megatonnes of CO_2 equivalent for a 20 year run. The electrification of the grid will allow C^3 to operate much more sustainably by the time data taking begins. The U.S. “has set a goal to reach 100 percent carbon pollution-free electricity by 2035” in its 2021 emissions target report <cit.>. The U.S. is making progress toward this goal, having been ranked #1 on the Renewable Energy Country Attractiveness Index in 2021, driven primarily by widespread adoption of solar energy. The outlook for renewable energy investments has been further buoyed by the recent passage of the Inflation Reduction Act <cit.>. While full electrification by 2035 is conceivable, it is helpful to consider the power infrastructure required to operate C^3 only with renewable energy sources, in order to evaluate the associated costs and feasibility. The three technologies of interest to this study are photovoltaic cells (solar), onshore and offshore turbines (wind), and energy storage systems (batteries) to bridge the diurnal cycle of power generation by solar and wind sources.
Solar is the most appealing renewable energy source. It has achieved the highest market penetration among renewable sources and is expected to achieve utility-scale parity with non-renewables within the next decade. The present cost of PV cells is between $0.82/W and $1.01/W, and the land area required to operate a 3 MW scale solar farm is 6-8 acres/MW <cit.>. Assuming that PV cell efficiencies will be driven well beyond the present 30% limit by multi-junction fabrication techniques, values of $0.80/W and 4 acres/MW are used in this study <cit.>.
While wind energy trails solar in terms of market penetration, providing over 120 GW domestically, it would offer a complementary daily load profile to that of solar energy, with approximately twice as much power generated at night as during the day by both onshore and offshore wind farms <cit.>. While onshore wind has the greatest penetration in the Midwest, where average wind speeds at 100 m elevation can exceed 10 m/s, smaller wind turbines with lower peak output capacity and lower cut-in wind speeds can be suitable for regions where wind patterns are less intense <cit.>. Typical peak power outputs for onshore and offshore wind turbines are 3 MW and 10 MW, with typical capacity factors (efficiency) of 40% and 60%, respectively <cit.>. The significantly higher power production capacity of offshore wind turbines offers an advantage to candidate sites located on the coasts. Fixed-bottom and floating turbines are preferred for offshore farms on the Atlantic and Pacific coasts, respectively. Floating turbines have the additional advantage of eliminating high-frequency vibrations resulting from mechanical coupling to the sea floor, which can significantly increase the turbine's functional lifetime, and installation of a floating turbine has a significantly reduced impact on local marine life <cit.>. The costs of onshore, fixed-bottom offshore, and floating offshore turbines are around 1.3, 3.25 and 5.3 $/W <cit.>.
A major challenge to full electrification is the need to deliver power to end-users reliably when generation is dependent on natural processes which fluctuate on short timescales (local weather patterns, daily cycle) and long timescales (seasons, regional climate cycles). Energy storage systems are required to eliminate dependence on non-renewables during periods of low production by renewable sources, and can be realised using mechanical, thermal, and chemical energy storage techniques. For example, pumped storage hydro-power (PSH) stations represented 99% of utility-scale energy storage in 2019, each of which has GWh-scale capacity <cit.>. While PSH stations can be used to balance load profiles on the regional scale, they can only be situated where geological constraints allow. Battery energy storage systems (BESS) are not subject to such constraints and can furthermore be built in a distributed network near end-users, rather than in large centralised plants. However, utility-scale battery technology is still nascent, with liquid lithium-ion as the most common battery chemistry. While other designs, like lithium-sulfur, lithium-metal, and sodium-ion, can offer higher energy densities and longer lifetimes, various technical challenges must be overcome. As alternative designs are developed, lithium-ion batteries can support BESS operating at the scale required today. The world's largest BESS is located in Moss Landing, CA; it has a capacity of 1.4 GWh and can deliver 350 MW to the CAISO grid. The Edwards and Sanborn Solar and Energy Storage site, to be completed in 2023, will use 2.5 million PV modules and 110,000 lithium-ion batteries situated on 6,000 acres to produce up to 1.1 GW and store 3.32 GWh.
We rely on projections of BESS costs and capacities in the years 2040 and 2050 to appraise those associated with C^3. A reference case for the projected domestic storage capacity in batteries in the years 2040 and 2050 is 120 GWh and 210 GWh, respectively <cit.>. The maximum amount of storage capacity needed to power C^3 for a 12 hour period at 150 (175) MW is 1.2 (1.4) GWh, constituting less than 1% of the expected total market capacity. By 2040, hydro-pumped energy storage will constitute 20% of total storage capacity and will be relegated to storage durations of more than 12 hours. Lithium-ion battery cell lifetimes are typically on the order of 1000 cycles, and other battery chemistries have rapidly increased in lifetime in recent years, topping 600 cycles for lithium NMC <cit.>. If a 1000 cycle lifetime is assumed for future battery technologies, and batteries experience 300 full cycles in a year, each battery module would need to be replaced 3 times in each 10 year run period. Costs could be mitigated through battery recycling: even if modules are only smelted and the valuable elements nickel and cobalt captured, 10% of the battery cost could feasibly be reclaimed. The costs of batteries designed for 10 hour storage in the years 2040 and 2050 are 125 and 100 $/kWh, respectively <cit.>. These parameters can be used to estimate the total cost of batteries for the powering scenarios over the full 20 year run time.
Finally, cost mitigation strategies can be explored. The compensation rate for surplus power sold back to Pacific Gas and Electric averaged around $525/kW/year from January 2022 to May 2023 <cit.>. An analysis by S&P indicates that in 2030, $55/kW/year could be generated through energy arbitrage, where energy purchased during the day can be stored and sold at night when energy prices are driven by the higher cost of non-renewables <cit.>. This analysis also shows that the average cost of energy will not decrease substantially over time. Higher battery capacity would be required to capitalise on arbitrage opportunities, which is therefore less appealing than selling excess energy production immediately during daytime production. An additional 150 MW of solar capacity in excess of requirements could generate $380 million. If government investment on the scale of the Production and Investment Tax Credits (PTC and ITC) outlined in the IRA were to be available during construction, the cost of batteries could be reduced by 30% and the cost of renewable power generation could be reduced by $0.0275/kWh <cit.>.
For the following analysis, a day/night cycle of 12 hours each is considered and the average power production over the course of a full day is 175 MW. The total energy storage capacity from batteries is set to provide the difference in nighttime power generation (and must be charged during the day with power generated in excess of 175 MW). Table <ref> summarises a possible design configuration using a mix of solar and wind energy.
While the composition of this energy portfolio can impact the total cost estimates, the total cost of the energy infrastructure required to de-carbonize C^3 operations is approximately $1 billion over the course of 20 years of operation. It is important to note that this falls largely outside the scope of the project budget. Indeed, most of this cost will be covered by general investment by the US government in electrification of the grid. While FCC would not be able to access 550 GeV CoM energy, it is expected to require 350 MW in the 365 GeV tt̅ run configuration <cit.>. CERN receives significantly de-carbonized energy from France, where 56 nuclear reactors collectively deliver 63 GW to the grid (1.1 GW/plant on average) <cit.>. Assuming FCC operated with nuclear power alone, it would consume 30% of the power output of a single plant. A nuclear reactor today typically costs around 8 billion euros, implying that the energy infrastructure required to operate FCC sustainably corresponds to roughly $2.5 billion.
The previous analysis leads to two conclusions about the sustainable operation of C^3:
* The required technological innovation in solar, wind, and energy storage systems is expected to meet the site power needs of C^3 by the beginning of operations
* Market availability of these technologies will be sufficiently scaled such that they can be deployed for C^3, and the associated costs borne by government investment in renewable energy will be comparable to, if not less than, those of alternate e^+e^- Higgs factory options
We would like to estimate the cost, within the project budget scope, required to operate C^3 sustainably in a realistic scenario. A $200 million budget for renewables would support a 250 MW solar farm, fully covering the needs of C^3 during the day with an average excess production of 87.5 MW that can be sold to the grid. Assuming increased capacity of domestic BESS results in negligible energy price differences between day and night through arbitrage, C^3 would incur energy costs only for the additional 75 MW needed at night on average. At $0.06/kWh, this would amount to $780 million over 20 years. To effectively erase this additional energy cost, the solar farm budget can be increased to $270 million to provide twice the average site power needs. It should be emphasised that C^3 can achieve effective energy independence with a modest investment in solar infrastructure. Given the carbon intensities of solar, wind, nuclear, and natural gas of 11, 11, 12, and 524 gCO_2/kWh in the CAISO grid, along with the least optimistic projection of domestic renewable energy production by the US Energy Information Administration, the carbon intensity of electricity produced by the CAISO grid can be expected to fall below 125 gCO_2/kWh by 2050 <cit.>. This is driven by a doubling of solar/wind and a 25% reduction in gas in terms of total energy portfolio composition. Since half of the site power originates purely from solar, the average carbon intensity of energy consumption will be better than 68 gCO_2/kWh. This is further improved to 46 gCO_2/kWh in the high technology uptake scenario. These values are comparable to the carbon intensity in France of 38 gCO_2/kWh, which is not expected to be further reduced.
§ MITIGATION STRATEGIES FOR OPERATIONS
There can be considerable emissions associated with the production of the energy required to meet site operation power requirements. This is highly dependent on the region in which the project operates; regions with highly de-carbonized electricity grids (via solar, wind, hydroelectric, and nuclear power) incur significantly lower carbon emissions from energy production than those running on non-renewable sources (gas, oil, and coal). The total emissions of each collider project are then evaluated as the product of the total amount of energy consumed and the local carbon intensity of its production.
While total de-carbonization of the electric grid by 2040 is a nominal goal, it is not assured. The 2040 projections of carbon intensity based on the stated policies scenario for Japan, China, the European Union, and the United States are roughly 150, 300, 40, and 45 t/GWh, respectively <cit.>. However, local variations in the implementation of renewable energy systems are neglected in these estimates; for example, the CERN-based colliders could take advantage of a 50-50 mix of renewable and nuclear energy. Additional mitigation strategies, such as construction of dedicated renewable energy plants, would reduce the carbon impact of operations in other regions. This strategy has been thoroughly investigated by the Green ILC Project <cit.>. A more moderate strategy can be envisioned for C^3. A 185 MW solar farm could be built with a $150 million budget <cit.>, double covering the average power requirement of C^3[This estimate considers the power optimizations in Table <ref>], such that excess power could be stored for later use at night[The additional cost of selling and purchasing energy through utility companies can be reduced through special contracts and is neglected here], allowing C^3 to achieve green energy independence. The use of multi-junction photovoltaic cell fabrication techniques would improve power conversion efficiency well beyond the 30% that is common in today's cells <cit.>, allowing such a solar farm to be situated on about 5 km^2 of land <cit.>.
This estimate relies on energy storage systems supported by regional electricity grids. To better understand the feasibility of scaling all parts of energy production (which may fall under the project budget) and energy storage infrastructure (which would be funded by the US government, but would nonetheless need investment), we perform a holistic cost estimate. We first note that the energy storage capacity required to supply 150 MW continuously for 12 hours is less than 1% of the expected grid energy storage capacity in 2040 <cit.>, indicating that the US grid should be able to reasonably support C^3 operations at this scale using renewable energy. We assume lithium-ion batteries[Lithium-ion batteries are not considered to be viable long term energy storage solutions; instead, technologies such as flow batteries and systems based on mechanical potential energy are favored] are the primary energy storage technology, with a lifetime of 1000 cycles, experiencing 300 cycles per year, with 10% of the battery cost reclaimed through recycling, at a base cost of 125 (100) $/kWh in 2040 (2050) <cit.>. We take the cost of solar energy production to be $0.80/W <cit.>, and that of onshore, fixed-bottom offshore, and floating offshore wind turbines to be around 1.3, 3.25 and 5.3 $/W, respectively <cit.>. An energy production portfolio that provides continuous power over a 12 hour day/12 hour night period based on these technologies alone would cost approximately $1 billion. This estimate is primarily driven by the requirements of battery energy storage systems and holds for a variety of energy source mixes. This indicates that a similar cost would be associated with a site located near the Pacific or Atlantic coasts, which could leverage floating and fixed-bottom turbines respectively, in the Southern US where solar would be most efficient, or proximate to large wind farms in the Midwest. A more precise cost and feasibility analysis can be performed once a candidate site is defined, as has been done for experiments operating at the South Pole, for example <cit.>. This cost analysis demonstrates that C^3 operations could be supported sustainably within the US within the next two decades, given conservative projections of technological development.
As a point of comparison, the power requirement of FCC would be about 30% of the output of a large nuclear plant (generating 1.1 GW on average <cit.>). At about $8 billion per facility, the corresponding cost of low-carbon energy infrastructure for FCC would be about $2.5 billion.
To obtain an estimate of the carbon impact of operations at future collider facilities that takes mitigation strategies into account, we first note that the carbon intensities of solar, wind, hydro, and nuclear are around 30, 15, 25 and 5 t/GWh, respectively <cit.>. These estimates have some regional variation due to differences in supply chains and local infrastructure. For instance, given the lifetime of existing nuclear plants of about 30 years, replacement or construction of entirely new facilities will be required, which might affect the overall carbon intensity. While the ultimate energy production portfolio will be different for facilities constructed in different regions, we take a common estimate of 20 t/GWh for all collider facilities in this analysis. We find this to be a reasonable estimate given that any facility can propose mitigation strategies to decouple its carbon impact from the regional average. It also reflects the expectation that clean energy infrastructure supply chains will improve over the next 20 years.
§ ANALYSIS OF TOTAL CARBON FOOTPRINT
A straightforward calculation of total energy consumption is possible using the information summarized in Table <ref>, which includes estimates of the site power P during collision mode, the annual collision time T_collisions and the total running time in years T_run for each center-of-mass energy √(s) considered. We take into account the time spent with the beam operating at full RF and cooling power outside of data-taking mode, for example for machine development, as an additional week for every 6 weeks of data-taking (i.e. +17%), represented as T_development. We take the site power requirement for the remaining period in a calendar year to be 30% of the site power requirement during data-taking (denoted by κ_down). This value is a conservative upper estimate, since without RF power and associated heat load, any accelerator can be kept cold with a small fraction of power to the cryogenics system.
Using these values, the annual energy consumed is calculated as:
E_annual = P[κ_down· T_year+(1-κ_down)(T_collisions + T_development)]
and the total energy consumption, summed over all √(s) run configurations, is
E_total = ∑_r ∈ runs E_annual(r) · T_run(r)
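As an illustration of how these expressions are evaluated in practice, the short sketch below implements E_annual and E_total; the run parameters shown are placeholders standing in for the per-collider values of Table <ref>, not additional assumptions about any specific facility.

```python
HOURS_PER_YEAR = 365.25 * 24

def annual_energy_twh(site_power_mw, t_collisions_h, kappa_down=0.30, dev_fraction=0.17):
    """E_annual = P * [kappa * T_year + (1 - kappa) * (T_collisions + T_development)]."""
    t_development = dev_fraction * t_collisions_h      # +17% for machine development
    equivalent_hours = (kappa_down * HOURS_PER_YEAR
                        + (1 - kappa_down) * (t_collisions_h + t_development))
    return site_power_mw * equivalent_hours / 1e6      # MW * h -> TWh

def total_energy_twh(runs):
    """Sum E_annual(r) * T_run(r) over all run configurations r."""
    return sum(annual_energy_twh(p, t_coll) * t_run for (p, t_coll, t_run) in runs)

# Placeholder run list: (site power [MW], annual collision time [h], run duration [years]).
example_runs = [(150, 5000, 10), (175, 5000, 10)]
print(f"Total energy consumption: {total_energy_twh(example_runs):.1f} TWh")
```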
For the circular collider projects, FCC and CEPC, we consider separately the cumulative energy consumption of the Higgs physics runs (i.e. √(s)>240 GeV) for a focused comparison on the basis of Higgs physics reach argued in Section <ref>, but additionally include the contribution of Z-pole and WW-threshold runs which impact the climate nevertheless.
Figure <ref> shows the energy consumption for the considered collider projects. The least energy is consumed by CLIC, driven by its lowest planned run time at low energies and its marginally lower power consumption compared to C^3 and ILC, which are comparable to each other. The energy consumption of CEPC is large compared to FCC because CEPC plans to collect four times the integrated luminosity at 240 GeV, with an associated tripling of the total run duration.
Figure <ref> shows the precision-weighted energy consumption for the considered collider projects, estimated by multiplying the energy consumption of Figure <ref> with the average relative precision in the last row of Table <ref>. The lowest run time, for CLIC, is now compensated by the reduced relative precision in comparison to C^3 and ILC, leading to overall closer precision-weighted energy consumption. Similarly, the large proposed run time for CEPC is now taken into account in conjunction with the improved precision reach, yielding a total weighted energy consumption closer to FCC.
Figure <ref> shows the associated GWP of the total energy required for operations, obtained by multiplying the total energy consumption by the respective carbon intensity. The GWP of FCC operations benefits from the de-carbonized electricity expected in France and Switzerland, despite its high total energy requirements.
Figure <ref> shows the GWP due to construction of the accelerator facilities. The carbon footprint is very similar among the linear and circular colliders and is driven primarily by the total length of the accelerator. Figure <ref> shows the total GWP from construction and operations. CLIC is the most environmentally friendly option, owing to its leading performance in operations emissions as well as its small footprint. The total GWP of C^3 and ILC is driven by operations, while that of CLIC, FCC, and CEPC is almost entirely driven by construction emissions. Possible reductions in the construction component could be achieved by using concrete with a lower cement content than the CEM1 C40 considered in this analysis. Even in such cases, the FCC GWP would remain dominated by construction processes.
Finally, Figure <ref> shows the total precision-weighted GWP from construction and operations, estimated in the same way as the precision-weighted energy consumption in Figure <ref>. Given the overall similar GWP of CLIC and C^3 and the superior precision reach of C^3 at higher energies compared to CLIC, C^3 appears to be the most environmentally friendly option when accounting for the precision-weighted total carbon footprint.
The total energy consumption is given in Table <ref> for three cases:
(a) when considering the complete running scenarios of Table <ref>, which include higher √(s) runs for ILC, and runs at the Z-pole and WW-threshold for CEPC and FCC;
(b) when only considering the "Higgs factory" modes of the proposed colliders, thus excluding the Z and WW runs for CEPC and FCC;
(c) and when only including the √(s)=250 GeV run for ILC/C^3, since this run already provides comparable sensitivity to the Higgs couplings as the other proposed Higgs factories, as shown in Table <ref>.
The 2045 estimates for the carbon intensity in the various locations where the collider projects could be hosted are given in the third row of Table <ref>, and the total carbon footprint is given in the same table for the cases considered (sixth and last rows). The total energy consumption and carbon footprint are also shown in Figures <ref> and <ref>.
§ CONCLUSIONS
We present the first analysis of the environmental impact of the newly proposed C^3 collider and a comparison with the other proposed facilities in terms of physics reach, energy needs, and carbon footprint for both construction and operations.
The physics reach of the proposed linear and circular e^+e^- colliders has been studied extensively in the context of the US Snowmass and European Strategy processes. We zero in on the Higgs boson coupling measurement precision achievable at C^3, CLIC, ILC, FCC, and CEPC and point out that they are generally similar, though linear colliders can operate at higher collision energies, allowing access to additional measurements of the Higgs boson's properties. Moreover, the use of polarization at linear facilities effectively compensates for the lower luminosity.
On this basis, the global warming potential of these facilities is compared in terms of absolute environmental impact and in terms of environmental impact per unit of physics output, obtained by a weighted average of the expected precision on Higgs coupling measurements. The operations emissions of C^3 could be improved through beam parameter optimization, leading to a 63 (79) MW power reduction compared to the nominal 150 (175) MW in the 250 (550) GeV running mode. Mitigation strategies using dedicated renewable energy facilities can reduce the carbon intensity of energy production to 20 t/GWh. We find that global warming potential is driven by construction rather than by operations beyond 2040. The compact nature of linear collider facilities reduces the total volume of construction materials and opens up the option of a surface site to simplify the construction process. We conclude that linear colliders, and C^3 in particular, have great potential for an environmentally sustainable path forward for high energy collider facilities.
§ ADDITIONAL POINTS
Two further considerations apply to linear facilities: detectors operating with a short duty cycle incur smaller systematic uncertainties and are therefore more effective per Higgs boson produced, and beam dump experiments offer an additional physics opportunity.
When assessing the energy consumption and carbon footprint of a proposed Higgs factory, one has to keep the following points in mind:
* The figure of merit when assessing the scientific output of a Higgs factory should not be the number of Higgs bosons produced per se, but rather the precision in the physics observables of interest (particularly Higgs couplings) that can be reached for a given number of Higgs bosons produced.
* Electron (primarily) and positron (secondarily) polarization can yield an effective luminosity improvement factor for linear machines of ∼ 2.5, i.e. allowing the same precision for various Higgs couplings to be reached with ∼ 40 % of the integrated luminosity.
* Additionally, linear machines can probe higher center-of-mass energies, which offers various advantages compared to circular machines:
* At higher √(s), Higgs boson production cross section increases, enabling a more efficient production of Higgs bosons.
* At high √(s) (above ≃ 500 GeV), linear machines can probe double Higgs production via the ZHH channel, allowing for a direct measurement of the Higgs trilinear coupling λ_3.
For the electron Yukawa coupling, FCC can achieve an 𝒪(1) fractional uncertainty with a dedicated run at the Higgs mass pole, which was, however, not taken into account in the studies presented here.
§ ACKNOWLEDGEMENTS
The authors express their gratitude to Dan Akerib, Tom Shutt, Sridhara Dasu, Patrick Maede, and Jim Brau for their insightful discussions, which have significantly contributed to this work. The authors also extend their appreciation to Michael Peskin and Steinar Stapnes for providing feedback on the manuscript.
The work of the authors is supported by the US Department of Energy under contract DE–AC02–76SF00515.
|
http://arxiv.org/abs/2307.04216v1 | 20230709161102 | Hierarchical Autoencoder-based Lossy Compression for Large-scale High-resolution Scientific Data | [
"Hieu Le",
"Hernan Santos",
"Jian Tao"
] | cs.LG | [
"cs.LG",
"cs.AI",
"eess.IV"
] |
Hieu Le, Hernan Santos, and Jian Tao
Texas A&M University, Administration Building, 400 Bizzell St, College Station, Texas, USA 77843
ORCIDs: 0000-0003-2510-073X, 0009-0005-0113-5067, 0000-0003-4228-6089
Lossy compression has become an important technique to reduce data size in many domains. This type of compression is especially valuable for large-scale scientific data, whose size ranges up to several petabytes. Although Autoencoder-based models have been successfully leveraged to compress images and videos, such neural networks have not widely gained attention in the scientific data domain. Our work presents a neural network that not only significantly compresses large-scale scientific data but also maintains high reconstruction quality. The proposed model is tested with scientific benchmark data available publicly and applied to a large-scale high-resolution climate modeling data set. Our model achieves a compression ratio of 140 on several benchmark data sets without compromising the reconstruction quality. Simulation data from the High-Resolution Community Earth System Model (CESM) Version 1.3 over 500 years are also being compressed with a compression ratio of 200 while the reconstruction error is negligible for scientific analysis.
[Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Hieu Le ([email protected])]
CCS Concepts: Information systems → Information storage systems (300); Computing methodologies → Machine learning (300)
Hierarchical Autoencoder-based Lossy Compression for Large-scale High-resolution Scientific Data
Hieu Le, Hernan Santos, and Jian Tao
August 12, 2023
§ INTRODUCTION
Over the past few decades, the amount of information available for analysis has increased significantly. Scientific instruments and related computation systems, such as the Linac Coherent Light Source <cit.>, the Very Large Array Radio Telescope <cit.>, and high-resolution climate modeling <cit.>, produce massive amounts of data and put a huge burden on existing storage systems. It is important to design efficient compression models that are able to reduce the data size for storage while maintaining the key information for analysis.
Data compression can be lossless or lossy. Lossless compression, whose reconstruction is exactly the same as the original data, suffers from a low compression ratio (around 2:1 <cit.>) on floating-point datasets <cit.>. Meanwhile, lossy compression sacrifices some information to achieve a much higher compression ratio. Despite the loss of information, the quality of data reconstructed by lossy compression schemes is generally acceptable and usable <cit.>. The nature of lossy compression has driven scientists and engineers to implement many compression algorithms and methods to substantially reduce the size of scientific data <cit.>, whose size is often enormous (might be up to 32 exabytes <cit.>). Furthermore, recent studies by <cit.>, <cit.>, and <cit.> showed that reconstructed data from lossy compression can be used for post-hoc analyses.
In recent years, both scientific and engineering communities have focused on developing neural network models for computer vision <cit.>, natural language processing <cit.>, and compression <cit.>. Among the numerous types of deep learning models, the Autoencoder (AE) has gained tremendous attention because of its capability to learn data representations. An AE is a type of neural network that can efficiently learn the representation of input data in an unsupervised manner for reconstruction. Internally, the network contains a bottleneck layer, whose representation is much smaller than its inputs in terms of size. Therefore, AEs are primarily used for dimension reduction and feature extraction. Many variations of the AE have been developed to improve the quality of reconstructed data <cit.>. Although AEs have been shown to be successful in lossy image and video compression <cit.>, only a few studies have leveraged this type of neural network for scientific data compression <cit.>.
In this work, we explore the possibility of leveraging a lossy AE-based compression model to compress scientific data. Specifically, this work aims to achieve high reconstruction quality at a very low bit rate, below 0.50. We propose a novel AE-based model that is capable of significantly reducing data size without compromising data quality. Moreover, the Higher-order Singular Value Decomposition (HOSVD) method is also implemented and applied to compress floating-point data. The outputs of HOSVD are compared against the outputs of the AE-based model. The key contributions of this work are as follows:
* We introduce data processing methods for both training and testing sets to overcome issues when dealing with large-scale scientific data. These techniques enable efficient compression on both high performance computing (HPC) nodes and regular commercial devices, such as personal computers.
* Targeting a very low bit rate region (below 0.5), a lossy AE-based compression model is proposed to significantly compress simulation data from high-resolution HR-CESM1.3 data sets.
The rest of this paper is organized as follows. In Section <ref>, related work is discussed. In Section <ref>, we describe important concepts and techniques, which are implemented in our proposed models. Section <ref> describes compression experiments on benchmark data and large-scale simulation data. We evaluate and analyze our results in Section <ref>. Section <ref> concludes our findings with directions for future work.
§ RELATED WORK
Traditional lossy compression for scientific data could be categorized into two types: prediction-based and transform-based. Transform compression (e.g. ZFP <cit.>) transformed the data before applying other techniques, e.g. embedded coding, to truncate the transformed data. Coefficients and bit-planes determined by the model were used to decompress data. Increasing the number of coefficients and bit-planes improved the quality of reconstructed data but decreased the compression ratio.
On the other hand, prediction-based models, such as SZ <cit.> and FPZIP <cit.>, predicted the target data using previously reconstructed data points <cit.>. Similar to transform-based models, the authors found that the fidelity of the reconstructed data degraded when a high compression ratio was required. Prediction-based models have been shown to have high reconstruction quality at a high compression ratio, which has led to more studies to improve the performance of this type of compression <cit.>.
Recently, deep learning models have been leveraged to compress many types of data. Many AE-based models showed remarkable results in image and volumetric compression tasks.
Balle et al. <cit.> introduced an effective end-to-end AE-based model to compress images. The authors trained their models to optimize rate-distortion performance. In order to balance the trade-off between the quality of reconstructed data and compression ratio, both losses for reconstruction and compression rate were minimized simultaneously. Since the quantization layer of their compression models prevented the gradients from flowing through the networks, independently and identically distributed uniform noise was used to replace the quantization layer during training. The added noise enabled the back-propagation without significantly deteriorating the performance of the quantization layer when compressing images.
Similarly, Theis et al. <cit.> implemented AE-based compression models using a different quantization method from <cit.>. Their soft quantization was continuous, thus allowing the gradient to flow smoothly through the entire network. As a result, additive noise was not required for training.
Models with two levels of quantization were also investigated in <cit.>. The second layer not only provided fine-grained quantization but also acted as a prior for the first quantization layer. Moreover, arithmetic encoding <cit.> <cit.> was implemented instead of variants of Huffman coding <cit.>. Integer quantization, proposed by <cit.>, was applied to quantization layers to eliminate the dependence on hardware-platform floating-point implementation, which varied from machine to machine, during compression.
Adopting the idea of two-level quantization, several studies have been conducted to improve the capability of neural networks in image compression. Minnen et al. <cit.> built an autoregressive model. The first quantization layer, which received inputs from the prior given by the second quantization and from the encoder, autoregressively processes data representations to produce high-quality images. Their neural networks were also among the first machine learning models that outperformed the state-of-the-art BPG compression <cit.>. However, autoregression by its nature prevents the networks from computing in parallel. Models created by <cit.> eliminated the autoregressive layer and replaced it with multiple splitting layers, which enabled the decoder to comprehensively learn different sets of channels in parallel. Additionally, optimization for compression speed using neural networks was addressed by <cit.>, which suggested several methods to improve compression performance.
Compression on audio signals using AE-based neural networks has also experienced much progress. The work of <cit.> outperformed MP3 in both compression ratio and audio quality. Their models adopted vector quantization techniques proposed by <cit.>. The authors not only optimized signal losses in the time domain but also minimized reconstruction losses in the frequency domain. Furthermore, the coupling of AE and Generative Adversarial Networks (GAN) <cit.> was leveraged to achieve a high-quality compression model.
Neural networks have also been implemented to compress volumetric scene data. Kim et al. <cit.> replaced fine-grain compression layers in their tree-based models with neural networks, which greatly enhanced the performance on volumetric representation. Coordinate networks by <cit.> not only focused on learning the scene representation but also provided great compression capability.
However, image and video compression models mainly reconstructed integer pixels (or voxels), which were only a subset of scientific data, where data types ranged from integer to floating-point. As a result, several studies using neural networks to enhance scientific data compression have been conducted. Glaws et al. <cit.> proposed an AE model, which was built upon 12 residual blocks of convolution layers. The authors also incorporated three compression layers to reduce the dimensions of the data in their AE's architecture. The model was trained to compress turbulence data with a fixed compression ratio of 64.
Liu et al. <cit.> introduced a seven-layer AE model to compress scientific data. The encoder was comprised of three layers of fully connected layers, each of which compressed input data by eight folds. Theoretically, the encoder could compress data by 512x (8^3). Similar to the encoder, the decoder had three fully connected layers to decompress the compressed data. Between the encoder and decoder, a bottleneck contained latent variables, whose size was much smaller than the inputs. However, this work mainly focused on small-scale 1D data, whereas our models learned data representation in higher dimensions, particularly in 2D and 3D. Another limitation of this model was that only CPUs were used for compression, which did not fully utilize the parallel computing power offered by GPUs <cit.>.
Recently, a compression method proposed by Liu et al. <cit.> achieved great results for 2D and 3D data. Their AESZ framework comprised a Lorenzo prediction model and an AE model, each of which compressed data independently. Compression outcomes from both models were then compared for the framework to select the better model for the data being compressed. The compression ratio of their proposed framework on many scientific data sets surpassed results from other hand-engineered compressors and other AE-based models. However, instead of optimizing one particular model for each input, the framework employed two distinct models to compress the same data.
Slightly different from traditional deep learning models, physics-informed neural networks (PINNs) <cit.> have been successfully developed to extrapolate data and solve many scientific problems. Choi et al. <cit.> combined PINN and variational autoencoder (VAE) <cit.> to compress plasma simulation data. Unlike other types of neural networks, this PINN model optimized several physics constraints, such as mass, moment, and energy, along with the reconstruction loss, i.e. L2 distance. Similar to our work, the authors used integer quantized latent variables, which could be reliably transmitted between different hardware and software platforms as studied by <cit.>.
§ METHODS
Our proposed model is built upon three main components: an encoder network (E), a quantizer (Q), and a decoder network (D). The encoder network encodes data inputs to output latent variables z_e. The quantizer then processes z_e to produce a quantized latent representation z_q. Finally, the decoder network reconstructs data from the compressed representation z_q to output x̂. The whole model is trained in an end-to-end fashion to minimize a reconstruction loss and constraint losses imposed by codebooks of the quantizer. The model architecture is depicted in Figure <ref>.
The detailed implementation of the model is presented in Table <ref>. Each stage (EncRes) of the encoder is connected to an intermediate convolution layer. The intermediate layer acts as a bridge that maps the number of channels to the desired vector dimension of the quantization layer. The output representation is then quantized using the corresponding codebook.
§.§ Encoder & Decoder Architecture
As mentioned above, the encoder is trained to extract the data representation into latent spaces, whereas the decoder decodes the latent variables to reconstruct the given data. The two most widely used reconstruction losses are the mean-squared error (MSE) and the multi-scale structural similarity (MS-SSIM). Depending on the targeted criteria, either measure can be used to achieve desirable outcomes. Both measures have been shown to be good metrics since they generally lead to generated images of high quality <cit.>.
The encoder network E is created hierarchically. The first level aggressively reduces the dimension of the inputs and learns their data representation. The second level also performs slight dimension reduction. Data representation from the second level is quantized by its corresponding vector quantizer. The quantized values are then fed into the first-level vector quantizer. The second-level quantization acts as a prior to the first-level quantization. The additional information from the second-level quantization improves the capability of the first-level quantization, which leads to a better reconstruction quality. Even though the second level creates slightly more bits during compression, reconstruction quality improvement significantly outweighs a slight decrease in compression ratio.
The network E comprises several 2D convolution layers and blocks of residual connections. The first two convolution layers map inputs to a higher number of channels using a kernel size of 4. They are followed by a couple of residual blocks, which consist of strided convolutions with a kernel size of 5. The components of a residual block are illustrated in Figure <ref>. We use non-linear GELU functions as our activation functions <cit.>. A generalized divisive normalization (GDN) layer is used to normalize the residual blocks' outputs and transform their distribution to be more Gaussian <cit.>. GDN is effective for both image compression <cit.> as well as scientific data compression <cit.>. The encoder network can be simply represented as a mapping function, as shown in equation <ref>.
z_e = 𝐄(x)
The decoder network D is a mirror of the encoder network E. Transposed convolution layers are used to replace strided convolutions. Transposed convolutions at the beginning of each hierarchy alter decoder inputs to acquire suitable representation with C channels for the following residual blocks. In general, all blocks of the decoder loosely reverse all operations performed by the encoder. Network D maps the latent representation back to the original dimension, outputting reconstructed data. The decoder can also be considered to be a mapping function as shown in equation <ref>.
x̂ = 𝐃(decoder_inputs)
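For readers who prefer code to prose, a minimal PyTorch-style sketch of one encoder stage is given below. It follows the description above (kernel sizes 4 and 5, GELU activations, strided residual blocks, and a bridge convolution to the codebook dimension), but the channel counts are illustrative and GroupNorm stands in for the GDN layer, which is not part of core PyTorch; this is not the authors' implementation (see their repository for the actual code).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Residual block loosely following the description above (kernel size 5, GELU).
    GroupNorm is used here as a stand-in for GDN, which is not part of core PyTorch."""
    def __init__(self, channels, stride=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=5, stride=stride, padding=2),
            nn.GELU(),
            nn.Conv2d(channels, channels, kernel_size=5, padding=2),
            nn.GroupNorm(1, channels),      # placeholder for the GDN normalization
        )
        self.skip = (nn.Identity() if stride == 1
                     else nn.Conv2d(channels, channels, kernel_size=1, stride=stride))

    def forward(self, x):
        return F.gelu(self.body(x) + self.skip(x))

class Encoder(nn.Module):
    """Two downsampling convolutions (kernel size 4) followed by residual blocks,
    then a 1x1 bridge convolution mapping to the codebook dimension: x -> z_e."""
    def __init__(self, in_channels=1, channels=64, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, channels, kernel_size=4, stride=2, padding=1),
            nn.GELU(),
            nn.Conv2d(channels, channels, kernel_size=4, stride=2, padding=1),
            ResidualBlock(channels),
            ResidualBlock(channels),
            nn.Conv2d(channels, latent_dim, kernel_size=1),
        )

    def forward(self, x):
        return self.net(x)
```

The decoder can be sketched analogously by mirroring this structure with transposed convolutions.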
§.§ Vector Quantizer
Although vanilla AE can perform dimension reduction, it cannot flexibly generate data given fixed inputs. Variational Autoencoder (VAE) <cit.> and its variants are implemented to improve reconstruction performance. VAEs not only minimize the reconstruction loss but also learn the distribution of the latent variable by optimizing the Kullback–Leibler (KL) divergence. As a result, a more diverse set of images can be generated with much higher quality <cit.>.
Based on the idea of the VAE, we impose slightly different criteria on the objective function. Following the approach implemented in the Vector Quantized Variational Autoencoder (VQ-VAE) <cit.>, our model is trained to minimize the reconstruction loss, i.e. the L2 distance, as well as to optimize discrete codebooks. The latent representation encoded by the encoder is projected onto codebook vectors. The codebook vector with the smallest Euclidean distance to the encoded latent variable is selected as the decoder's input, as shown in equation <ref>
z_q = 𝐐(z_e) = argmin_q_k ∈ Q (||z_e - q_k||)
The quantizer outputs a set of integer values, which are the indices of the quantized vectors. These indices are then further compressed using a lossless compression scheme, e.g. Huffman coding-based algorithms. The size of the compressed quantized data is significantly reduced because the quantized values are integers, which are efficiently compressed by any lossless compression algorithm.
Our training procedure for our codebooks is similar to the method described in <cit.>. Each codebook in the model is updated using an exponential moving average with a decay of 0.99. The update is measured based on changes in codebook entries after each training iteration. A straight-through estimator <cit.> is implemented to overcome the discontinuity of gradients created by discrete codebooks. The estimator acts as an identity function that identically maps the gradients of the decoder to the encoder in the backward propagation.
Overall, the end-to-end training is comprised of three mapping functions: encoding, quantization, and decoding. The model can be summarized using equation <ref>.
x̂ = Model(x) = 𝐃(𝐐(𝐄(x)))
where E is the encoder network, Q is the quantizer, and D is the decoder network.
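A compact sketch of the nearest-neighbour quantization and the straight-through estimator is shown below; it implements a single quantization level with a plain codebook (the hierarchical prior and the EMA codebook update described above are omitted) and is intended only to illustrate the mechanism, not to reproduce the authors' code.

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Nearest-neighbour vector quantizer with a straight-through gradient estimator."""
    def __init__(self, num_codes=512, dim=64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)

    def forward(self, z_e):
        b, c, h, w = z_e.shape
        flat = z_e.permute(0, 2, 3, 1).reshape(-1, c)          # (B*H*W, C)
        # Squared Euclidean distance to every codebook vector, then argmin.
        dist = (flat.pow(2).sum(1, keepdim=True)
                - 2.0 * flat @ self.codebook.weight.t()
                + self.codebook.weight.pow(2).sum(1))
        indices = dist.argmin(dim=1)                           # integer codes for entropy coding
        z_q = self.codebook(indices).view(b, h, w, c).permute(0, 3, 1, 2)
        # Straight-through estimator: gradients flow to the encoder as if Q were the identity.
        z_q_st = z_e + (z_q - z_e).detach()
        commitment = torch.mean((z_q.detach() - z_e) ** 2)     # commitment (l_q) term
        return z_q_st, indices.view(b, h, w), commitment
```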
§.§ Preprocessing Large-scale Data
§.§.§ Data Standardization
In this work, we focus on compressing large-scale high-resolution scientific data obtained from Earth system simulations. Since each data set has its own distribution, it is important to preprocess the raw data prior to training. Statistical measures of the data are usually available for most simulations. The availability of these statistics enables us to use Gaussian standardization for data whose distribution is Gaussian. The technique is also applicable to distributions that approach the Gaussian distribution. The standardization method is shown in equation <ref>.
x_st = x - μ/σ
where x is a data value, μ is the mean of the data, and σ is the data standard deviation.
The inverse of standardization is required for converting the reconstructed data back to the actual value range. The inverse is formulated in equation <ref>.
x = μ + x_st*σ
However, if the data distribution is not Gaussian, directly applying standardization does not improve compression performance. In this scenario, logarithm scaling is a technique to transform the original data to its corresponding logarithmic scale. The technique usually changes the data distribution to be close to Gaussian, which enables us to effectively use the standardization method on the data.
§.§.§ Missing Value Handling
Data masking is necessary for data compression in many cases. In many scientific simulations, there are regions that are not of interest to the researchers conducting the experiments. Those areas are generally assigned values that are extremely negative or otherwise easily distinguished from actual simulation values. Therefore, we use masking layers to indicate valid values and ignore unwanted regions in our model. Even though the masking increases the storage size, this redundancy is negligible since it is made up of integer values, which can be significantly compressed by any standard lossless compression algorithm such as Huffman coding-based compression schemes.
Missing values in the data are also replaced by a different value. The replacement value can be the mean or the median of the available data. For simplicity, we assign missing values the data mean, since the data statistics are readily available. After cleansing missing values and masking the data, the data and their corresponding masks are partitioned into small blocks.
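The standardization, mean-filling, and masking steps can be combined as in the following NumPy sketch; the sentinel threshold used to flag missing regions is an illustrative assumption, since the actual flag value depends on the simulation output.

```python
import numpy as np

def standardize_with_mask(data, missing_sentinel=-1.0e30, log_scale=False):
    """Standardize a field, fill missing values with the mean, and build a validity mask.

    `missing_sentinel` is an illustrative threshold for the extremely negative flag
    values used to mark uninteresting regions; the real flag depends on the simulation.
    """
    mask = (data > missing_sentinel).astype(np.uint8)          # 1 = valid, 0 = ignored
    values = data.astype(np.float64)
    if log_scale:                                              # assumes positive-valued fields
        values = np.log(values, where=mask.astype(bool), out=np.zeros_like(values))
    mean = values[mask == 1].mean()
    std = values[mask == 1].std()
    values[mask == 0] = mean                                   # replace missing values by the mean
    return (values - mean) / std, mask, mean, std

def destandardize(standardized, mean, std):
    """Invert the standardization to recover physical values."""
    return mean + standardized * std
```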
§.§.§ Data Partitioning
Machine learning models generally cannot handle raw scientific data directly, since each dimension of the data is large and the full data set cannot fit into the system's memory. To overcome this issue, the data are partitioned into small blocks prior to training or compression. Each dimension of a block is a power of two. In particular, we restrict the blocks to a height and width of 64 for the training process, as we observe that this setting achieves the best reconstruction quality. Moreover, a power of two in each block dimension makes the up-sampling and down-sampling efficient. No padding or trimming is required for the outputs, which saves additional computing power.
However, the shapes of the raw data are not always multiples of the block size. Data whose size is not a multiple of the block size are therefore padded. Padding is performed at the edges of each dimension. For Earth simulation data, we cyclically replicate data values at one edge and concatenate them at the other end. For example, to pad the left edge of 2D data, values on the right edge are copied and appended to the opposite side. This padding pattern is especially helpful for treating continuous simulation data with periodic boundary conditions, e.g., climate modeling data.
The partitioning technique mentioned above works well in general. However, as all partitioned blocks are discrete, the whole set of partitions does not include any transition from one block to its adjacent neighbors. To smooth out the boundary and make the transition from one block to another more accurate, an overlapping block partition technique is implemented <cit.>. Instead of making mutually exclusive blocks of data, adjacent blocks are partitioned in a way that they overlap with each other in a small area. In particular, assuming each block is of size 64 and there is an overlap of eight, the second block is created to contain the last eight values of the first block as well as the next 56 values. The data overlapping technique is only implemented for training data, whereas the discrete data partitioning technique without overlapping is used for testing and compression.
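A possible implementation of the cyclic padding and (overlapping) block partitioning is sketched below for the 2D case, using the block size of 64 and overlap of eight mentioned above; this is a simplified illustration rather than the exact routine used in our pipeline.

```python
import numpy as np

def pad_cyclic(data, block_size=64):
    """Cyclically pad each dimension so its length becomes a multiple of the block size."""
    pad = [(0, (-dim) % block_size) for dim in data.shape]
    return np.pad(data, pad, mode="wrap")      # periodic boundary: values wrap around the edges

def partition_2d(data, block_size=64, overlap=0):
    """Cut a 2D field into square blocks; overlap > 0 (e.g. 8) is used for training data only."""
    data = pad_cyclic(data, block_size)
    stride = block_size - overlap
    blocks = []
    for i in range(0, data.shape[0] - block_size + 1, stride):
        for j in range(0, data.shape[1] - block_size + 1, stride):
            blocks.append(data[i:i + block_size, j:j + block_size])
    return np.stack(blocks)
```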
§.§ Objective Function
§.§.§ Reconstruction Loss
The reconstruction loss is the discrepancy between the reconstructed and original data. We minimize the L2 distance between the original and reconstructed data, i.e. l_recon(x, x̂) = ||x-x̂||_2. The minimization simply matches the reconstructed data to the original data as closely as possible.
§.§.§ VQ commitment loss
The commitment loss accounts for the difference between the quantized codebook vectors and outputs of the encoder. Since quantization distorts the data, decreasing the distance between the quantized vectors and the original data reduces the distortion. We impose an L2 distance constraint on the codebook vectors and their corresponding inputs. The commitment loss, l_q, is defined as in equation <ref>.
l_q(z_e, z_q) = ||z_e-z_q||_2 = ||z_e-Q(z_e)||_2
Where z_e and z_q are outputs of the encoder and their corresponding quantization values, respectively.
Overall, the model is trained to optimize the following objective
L = λ_recon * mask * l_recon + λ_q * l_q
where mask is a masking layer, which indicates which data points should be taken into account in optimization; λ_recon and λ_q are constant coefficients of the reconstruction and commitment losses, respectively. The constant λ_q is set to be 0.25 following the suggestion by <cit.>. The objective function in equation <ref> is acquired based on the assumption that quantization values are uniformly distributed. Uniform distribution leads to a removal of an additional KL term in the objective because the term becomes a constant with respect to encoder parameters <cit.>.
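The combined objective can be written compactly as in the sketch below; λ_q = 0.25 follows the value quoted above, while λ_recon = 1 is an illustrative choice.

```python
import torch

def compression_loss(x, x_hat, mask, z_e, z_q, lambda_recon=1.0, lambda_q=0.25):
    """Masked reconstruction loss plus VQ commitment loss.

    lambda_q = 0.25 follows the value used above; lambda_recon = 1 is an illustrative choice.
    `mask` selects the data points that contribute to the reconstruction term.
    """
    mask = mask.to(x.dtype)
    l_recon = (((x - x_hat) ** 2) * mask).sum() / mask.sum().clamp(min=1.0)
    l_q = torch.mean((z_e - z_q.detach()) ** 2)        # commitment term
    return lambda_recon * l_recon + lambda_q * l_q
```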
§.§ Error-bounded Technique
Reconstructed data from neural networks sometimes have large distortions from the original data. To counteract the large distortion of some reconstructed values, a straight-through technique is introduced. The straight-though technique classifies reconstructed values into two groups, predictable and unpredictable. Reconstructed data that meet the tolerance constraints are called predictable values. In other words, predictable data have error values less than or equal to a predefined threshold. Otherwise, they are unpredictable values. Unlike predictable values, which can be used directly as final reconstructed values, unpredictable values have errors that exceed the threshold. Thus, corresponding true values and their locations are saved separately on a file to replace unpredictable values during reconstruction.
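The bookkeeping for unpredictable values can be as simple as the following sketch, which records the indices and true values of out-of-bound points and patches them back in after decompression; the absolute error bound is a user-supplied parameter.

```python
import numpy as np

def split_unpredictable(original, reconstructed, abs_error_bound):
    """Locate points whose reconstruction error exceeds the bound; their true values
    are stored losslessly alongside the compressed representation."""
    error = np.abs(original - reconstructed)
    indices = np.flatnonzero(error > abs_error_bound)
    return indices, original.ravel()[indices]

def patch_reconstruction(reconstructed, indices, true_values):
    """Overwrite the out-of-bound points so every value respects the error bound."""
    patched = reconstructed.copy().ravel()
    patched[indices] = true_values
    return patched.reshape(reconstructed.shape)
```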
§.§ High-Order Singular Value Decomposition
The Higher Order Singular Value Decomposition (HOSVD) is a generalization of the Singular Value Decomposition (SVD) to higher-order tensors. SVD is a factorization technique that decomposes a matrix into three separate matrices, each of which represents a different aspect of the data. HOSVD extends this concept to 3D and, more generally, n-dimensional data.
Just like SVD, HOSVD decomposes a high-dimensional tensor into a set of sub-tensors that capture a specific aspect of the original data. This decomposition is achieved by first re-organizing a tensor into a set of matrices and then applying SVD to each of those matrices. One of the advantages of HOSVD is that it preserves the mode-orthogonality property of the original tensor, resulting in a set of sub-tensors that are orthogonal along the mode axes. This property makes HOSVD particularly useful in applications such as image compression and feature extraction, where maintaining the structure of the original data is important.
A tensor can be approximated by a truncated HOSVD, with a predefined tolerance, through equation <ref>.
𝒳≈𝒳̂ = 𝒢×_1 U^(1)×_2 U^(2)⋯×_N U^(N), with the factors U^(1), …, U^(N) chosen to minimize ||𝒳 - 𝒳̂||_F^2,
where
𝒢 = 𝒳×_1 U^(1)T×_2 U^(2)T×_3 ⋯×_N U^(N)T,
and U^(n) is an orthonormal matrix for each mode-n of 𝒳.
In this formula, 𝒳 is the original higher-order tensor that we want to decompose, 𝒢 is the core tensor, and 𝒳̂ is the tensor reconstructed using HOSVD. The U^(n) matrices represent the orthogonal bases for each mode-n of 𝒳, and ×_n denotes the mode-n product of a tensor. ||·||_F is the Frobenius norm.
It is important to note that the HOSVD formula is a minimization problem, where we seek the set of orthogonal matrices that best approximates the original tensor 𝒳 within a predefined tolerance. This tolerance directly controls the compression ratio and bit rate. The goal is to find the set of matrices that minimizes the Frobenius norm of the difference between 𝒳 and 𝒳̂, which is a measure of the distance between the two tensors.
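A compact NumPy implementation of a truncated HOSVD along these lines is sketched below; the per-mode truncation rule based on a relative singular-value tolerance is one simple choice among several, and is shown for illustration only.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the remaining axes."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def mode_product(tensor, matrix, mode):
    """Mode-n product: contract `matrix` (rows_out x rows_in) with axis `mode` of `tensor`."""
    moved = np.moveaxis(tensor, mode, 0)
    return np.moveaxis(np.tensordot(matrix, moved, axes=1), 0, mode)

def truncated_hosvd(tensor, tol=1e-3):
    """Truncated HOSVD: per-mode factors keep singular values above tol * s_max."""
    factors = []
    for mode in range(tensor.ndim):
        u, s, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        rank = max(1, int(np.sum(s > tol * s[0])))
        factors.append(u[:, :rank])
    core = tensor
    for mode, u in enumerate(factors):
        core = mode_product(core, u.T, mode)       # core tensor G
    recon = core
    for mode, u in enumerate(factors):
        recon = mode_product(recon, u, mode)       # reconstruction X_hat
    return core, factors, recon
```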
§.§ Metrics
§.§.§ Peak signal-to-noise ratio (PSNR)
The metric measures the performance of compression schemes. PSNR is defined via the mean squared error (MSE). MSE is given in equation <ref>.
MSE(x,x̂) = 1/n||x-x̂||_2^2
Where x and x̂ are the original and reconstructed data, respectively.
PSNR is then defined as
PSNR = 10*log_10(MAX_I^2/MSE)
Where MAX_I is the maximum range of the input data.
PSNR is inversely related to MSE. When the error between the input and output data is small, the MSE is small, which leads to a large PSNR. Therefore, it is desirable to maximize PSNR for any compression model.
§.§.§ Compression Ratio
Compression ratio (CR) is the ratio between the sizes of the original data and their compressed latent representation. The compressed data of our model are outputs of the quantizer in an integer format. We define CR as
CR = original_size/compressed_latent_size
§.§.§ Bit rate
Bit rate is a convenient way to represent compression ratio. It is a measure of the average number of bits used per data point for the compressed data. It is inversely proportional to compression ratio and is defined in equation <ref>
bit_rate = data_type_size/CR
where data_type_size is the size of the data type in bits; for instance, it is 32 for single-precision data and 64 for double-precision data. Thus, a small bit rate, or equivalently a large compression ratio, is the objective of any compression algorithm.
§.§.§ Compression speed
Compression and decompression speeds measure how quickly the data are processed. They are both defined as
speed = original_size/computation_time
The speed in this work is expressed in units of MB/s.
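The metric definitions above can be collected into a few helper functions; the interpretation of MAX_I as the value range of the input data is an assumption.
import numpy as np
def psnr(x, x_hat):
    mse = np.mean((x - x_hat) ** 2)
    max_range = x.max() - x.min()        # MAX_I: maximum range of the input data
    return 10.0 * np.log10(max_range ** 2 / mse)
def compression_ratio(original_size, compressed_latent_size):
    return original_size / compressed_latent_size
def bit_rate(data_type_size, cr):
    # data_type_size in bits, e.g. 32 for single precision, 64 for double precision
    return data_type_size / cr
def throughput_mb_per_s(original_size_mb, computation_time_s):
    return original_size_mb / computation_time_s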
§ EXPERIMENTS
§.§ Resource Availability
This paper uses existing, publicly available data from SDRBench (<https://sdrbench.github.io/>) for benchmarking the performance of our model. As for compression of real-world application data, the model compresses the High-Resolution Earth System Prediction (iHESP) data. The iHESP data have been deposited at <https://ihesp.github.io/archive/> and are publicly available as of the date of publication. All original code has been deposited at <https://github.com/hieutrungle/data-slim> and is publicly available as of the date of publication. Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request.
§.§ Hardware
All models are trained on GPU A100 compute nodes of Texas A&M High Performance Research Computing (TAMU HPRC). Each node is equipped with two Intel Xeon 6248R (Cascade Lake) CPUs (3.0GHz, 24-core) and two NVIDIA A100 40GB GPUs. After training, our models reconstruct data on two different platforms: the first comprises the GPU A100 compute nodes, and the second is a personal computer with an 11th Gen Intel i5-11600K (12-core, 4.9GHz) and an NVIDIA GeForce RTX 3060 Ti. The compression performance on the latter platform, as discussed in section <ref>, shows that the model can also run efficiently on regular personal devices instead of powerful accelerators such as the NVIDIA A100.
§.§ Benchmark Data: SDRBench
Our proposed models are initially tested on the published scientific benchmark data SDRBench <cit.>. This benchmark provides numerous simulation data sets from many different fields, ranging from the electronic structure of atoms and molecules to weather and cosmology, and is publicly available for different scientific purposes.
Even though we focus on compressing 2D data, a couple of 3D data sets are also compressed to verify that our architecture can generalize to higher-dimensional data. Table <ref> summarizes the data sets and the fields we use for compression. A brief description of the data is as follows:
* NYX data: The data describe simulated astrophysical reacting flows created using an adaptive-mesh cosmological hydrodynamics simulation code <cit.>.
* CESM data: Both 2D and 3D CESM data are cloud properties acquired from a well-known climate simulation package <cit.>. As suggested by domain scientists, 3D CESM data should be treated as 2D for compression. This treatment improves both compression ratio and PSNR of reconstructed data <cit.>.
The 3D CESM data include comprehensive cloud properties at many different altitudes, which can be viewed as many 2D fields stacked on top of each other. Therefore, we use the 3D CESM data as the training set for CESM cloud data, whereas all snapshots of the 2D CESM data are our testing data.
§.§ High-Resolution Earth System Prediction (iHESP) Data
The International Laboratory for High‐Resolution Earth System Prediction (iHESP)<cit.> was a project aiming to develop more advanced modeling frameworks for high-resolution multiscale Earth system predictions to improve the simulation and prediction of future changes in extreme events. iHESP also provides numerous global and regional high-resolution simulation data spanning hundreds of years. The global climate was simulated using different high-resolution configurations of CESM version 1.3 for atmosphere, land, ocean, and sea-ice. Meanwhile, regional data were generated from the ocean model ROMS (Regional Ocean Modelling System) with the atmospheric model WRF (Weather Research and Forecast model) using the CESM/CIME coupling infrastructure. All data are also publicly accessible.
Among the large array of ocean properties provided by iHESP, sea surface temperature (SST) is one of the most important attributes of the ocean. The property is simulated over hundreds of years, which requires a substantial amount of storage. However, the large amount of available data also enables us to leverage machine learning for compression.
Basic information of SST data is presented in Table <ref>. The first dimension of the data represents the time evolution. The next two dimensions are the height and width of the data, respectively. General ocean information, such as simulation history and climate coefficients, are also included in the metadata of the data set. Latitudes and longitudes are also available to scale the data back to the global coordinate system when it is required.
Data preprocessing is crucial for SST in both training and compression. Temperature values are only available where sea water is present, whereas undefined values are assigned to continents. In order to deal with these missing values, a masking layer is created to differentiate between the two regions.
The data are split into two sets, a training set and a testing set. The training set contains approximately 100GB of SST data, while the testing set consists of temperature data of the last 120 consecutive months in the simulation. Data in the training set are partitioned using the overlapping technique, while we apply the discrete partitioning technique to the testing set. Both training and testing sets contain blocks of size 64. During compression, data are partitioned into blocks of size 256 for better resolution.
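A sketch of this preprocessing, assuming a known fill value for continents and leaving the overlap stride as a free (unspecified) parameter, is:
import numpy as np
def make_ocean_mask(sst, fill_value):
    # 1 where sea-surface temperature is defined, 0 over continents.
    return (sst != fill_value).astype(np.float32)
def partition_blocks(field, block=64, stride=None):
    # stride == block: discrete (non-overlapping) partitioning (testing set);
    # stride <  block: overlapping partitioning (training set).
    stride = stride or block
    h, w = field.shape
    blocks = [field[i:i + block, j:j + block]
              for i in range(0, h - block + 1, stride)
              for j in range(0, w - block + 1, stride)]
    return np.stack(blocks)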
§ RESULTS AND DISCUSSION
§.§ Compression of Benchmark Data
§.§.§ 2D Data
Visualization of the reconstruction of the CESM CLDHGH data is presented in Figure <ref>. At a bit-rate of 0.22, our proposed model produces visually convincing results. The model preserves most details of the data, especially in regions where the field varies gradually. However, at the boundaries between different regions, slight distortion can be observed; this distortion is caused by sharp changes in adjacent values.
The compression performance of our models on different data sets is compared with other compression models, namely HOSVD, SZ2.1 <cit.>, ZFP <cit.>, and AESZ <cit.>. Using HOSVD as our baseline, the PSNR of our proposed model is higher than that of the baseline model at all bit-rates (Figure <ref>). Moreover, Figure <ref> shows that our proposed model outperforms the other compression schemes at bit-rates below 0.40, which corresponds to compression ratios greater than 80. At a very low bit-rate of 0.22, the reconstructed data of our model have a PSNR of 46.35 dB; this is an improvement over the hybrid AESZ model, which requires a bit-rate of around 0.41 to obtain the same PSNR.
However, the PSNR of the proposed model does not follow the same trend as that of the other compression models. Our trained model has a fixed set of parameters; to increase PSNR without training a different model, we apply the straight-through method to restrict the error bound of the reconstructed data. It would be possible to train models with larger latent variables and codebooks to obtain much higher PSNR at any given bit-rate. However, we found it unnecessary to exhaustively explore all possible network configurations to achieve higher PSNR at bit-rates above 1, since we aim at compressing data at bit-rates below 0.50.
We also apply our compression model to other 2D data. Compression performance on several cloud data sets is illustrated in Figure <ref>. Since the CESM 3D CLOUD data should be treated as 2D data, as suggested by domain scientists <cit.>, its compression results are presented together with those of the other 2D data. It is worth mentioning that compression of all CESM cloud data uses the same model architecture with exactly the same weights. Even so, the model obtains high PSNR while maintaining a very low bit-rate, indicating that this particular model for CESM cloud data generalizes well.
A compression performance comparison between our proposed model and HOSVD is presented in Appendix <ref>; our proposed model outperforms HOSVD on all benchmark data. More compression performance comparisons and results are presented in Appendices <ref> and <ref>.
§.§.§ 3D Data
The proposed model achieves reasonable compression on 3D benchmark data. As can be seen from Figure <ref>, at low bit-rates our model surpasses SZ2.1 and ZFP in terms of performance. However, the reconstruction quality of the hybrid model, AESZ, is higher than that of our model. One possible reason for the weaker performance of our compressor is that our model is built primarily from 2D convolution layers and therefore has limited capability to learn 3D data representations. In contrast, when compressing 3D data, AESZ switches its machine learning architecture to 3D convolutional neural networks, which is one of the factors that boosts its compression performance for volumetric data.
§.§ Compression of iHESP Sea Surface Temperature (SST) Data
Compression results for the testing set of high-resolution SST data show that the model can reconstruct data with high quality while maintaining a high compression ratio, even for large-scale simulation data. As can be seen from Figure <ref>, after being compressed by a factor of 240, the reconstruction achieves a PSNR of 50.16. Moreover, in terms of visualization, differences between the original and reconstructed data are hardly noticeable. There are, however, some slightly noticeable distortions, especially along the coastal lines between oceans and continents. Since data are only available for sea water, data points on continents are set to a constant fill value. This assignment creates large jumps in values along the edges of continents, which hinders the reconstruction ability of the model in those particular regions.
Table <ref> presents the compression performance of the model on the whole testing set. The reconstruction quality (PSNR) of individual snapshots varies from 48.58 to 51.5. The reason for these differences is that the data distribution of each snapshot changes over time, which leads to variations in the quantization values drawn from the codebooks and hence in the reconstruction quality. Nevertheless, the PSNR of individual snapshots does not deviate far from the average of 50.04, which indicates that our model achieves stable performance over the whole data sequence.
Compression and decompression speeds are also acceptable. Compression speeds on HPC nodes are presented in Table <ref>. On average, it takes around 45 seconds to complete either compression or decompression of 4GB of data. On a personal computer with an NVIDIA 3060 Ti accelerator, compression and decompression each take around one and a half minutes on the same data. The small difference between the two platforms indicates that the compression pipeline is primarily bottlenecked by data transfer between CPUs and GPUs. Nonetheless, the compression speed on the personal computer shows that the model is also suitable for compression on small devices.
§ CONCLUSIONS
Our proposed model proves effective in compressing floating-point scientific data, both on 2D benchmark data and on large-scale high-resolution data. It achieves an extremely high compression ratio while preserving a high quality of reconstruction, and it outperforms other state-of-the-art models on several benchmark data sets, particularly 2D simulation data. However, there is room for further improvement. Other lossless compression schemes, such as arithmetic coding, which offers better compression performance, could replace Huffman coding. The model could also be improved by optimizing a rate loss term, which potentially leads to a better compression ratio. Furthermore, the compression pipeline could be optimized to improve compression speed. Since scientific data compression using neural networks is still in its early stages, there is considerable room for improvement in future research along this line.
§ AUTHOR CONTRIBUTIONS
Conceptualization, Jian Tao and Hieu Le; Methodology, Hieu Le, Jian Tao, and Hernan Santos; Investigation,
Hieu Le and Jian Tao; Writing – Original Draft, Hieu Le, Jian Tao, and Hernan Santos; Writing –
Review & Editing, Hieu Le and Jian Tao; Funding Acquisition, Jian Tao; Resources,
Jian Tao and Hieu Le; Supervision, Jian Tao.
§ DECLARATION OF INTERESTS
The authors declare no competing interests.
The authors would like to thank Dr. Chao Tian, Dr. Jaison Kurian,
and Dr. Ping Chang from Texas A&M University for their suggestions
and comments on this work. The authors gratefully acknowledge the
helpful support provided by the School of Performance, Visualization
and Fine Arts, Texas A&M High Performance Research Computing (HPRC)
and Texas A&M Institute of Data Science (TAMIDS). Portions of this
research were conducted with the advanced computing resources provided
by Texas A&M High Performance Research Computing. This work is
partially supported by the TAMIDS Career Initiation Fellow Program and
NSF grants OAC-2112356, OAC-2019129, and OAC-1925764.
§ ADDITIONAL EXPERIMENTS
§.§ Frequency Loss Term
We conduct experiments that add loss terms defined in the frequency domain of the data. Both inputs and reconstructions are transformed into the frequency domain using the Fast Fourier Transform (FFT). The L2 distance between the two transformed sets is then calculated using equation <ref>. This added term forces the model to directly minimize errors in both the high- and low-frequency components of the data.
l_fft(x, x̂) = ||FFT(x)-FFT(x̂)||_2
Where x and x̂ are inputs and reconstructed data, respectively.
The FFT loss, l_fft, is added with other losses to create an objective function of the model as shown in equation <ref>.
L = λ_recon * mask * l_recon + λ_q * l_q + λ_fft * l_fft
where λ_recon, λ_q, and λ_fft are constant coefficients of the reconstruction, commitment, and FFT losses, respectively.
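A sketch of the resulting objective, assuming PyTorch and a masked reconstruction term as before (the value of λ_fft is not reported, so the one below is a placeholder), is:
import torch
def fft_loss(x, x_hat):
    # L2 distance between the 2D Fourier transforms of input and reconstruction.
    return torch.linalg.norm(torch.fft.fft2(x) - torch.fft.fft2(x_hat))
def total_loss(x, x_hat, z_e, z_q, mask,
               lambda_recon=1.0, lambda_q=0.25, lambda_fft=0.1):
    recon = torch.sum(mask * (x - x_hat) ** 2) / mask.sum().clamp(min=1)
    commit = torch.mean((z_e - z_q.detach()) ** 2)
    return lambda_recon * recon + lambda_q * commit + lambda_fft * fft_loss(x, x_hat)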
§.§ Results
Our model trained with the added FFT loss performs reasonably well on the iHESP sea surface temperature (SST) data set. At a compression ratio of 221.63, the model achieves a PSNR of 47.04 for the reconstruction. Despite this good reconstruction quality, its performance is surpassed by the model trained without the added FFT loss, as discussed in section <ref>, which achieves an average PSNR of 50.04 at a compression ratio of 231.54. One possible explanation for the lower reconstruction quality is a trade-off between the MSE terms in the time domain and in the frequency domain during training: while the MSE loss term in the time domain learns the data representation in particular regions, the FFT loss term focuses on different regions. As a result, the quantitative result (PSNR) of the "FFT model" is outperformed by its counterpart.
§ COMPRESSION PERFORMANCE COMPARISON
Our proposed neural network model outperforms the purely mathematical HOSVD model on all benchmark data sets. Figure <ref> shows that at low bit-rates the proposed model achieves much better reconstruction quality than HOSVD.
§ ADDITIONAL VISUALIZATION RESULTS
Additional results of the compression using our proposed model are provided in this section. Figures <ref>-<ref> show visualization results for compression on different data sets.
|
http://arxiv.org/abs/2307.07540v1 | 20230714140909 | Flow-Guided Controllable Line Drawing Generation | [
"Chengyu Fang",
"Xianfeng Han"
] | cs.CV | [
"cs.CV",
"cs.MM"
] |
Flow-Guided Controllable Line Drawing Generation
Chengyu Fang, Xianfeng Han
August 12, 2023
================================================================================================================
In this paper, we investigate the problem of automatically generating controllable artistic character line drawings from photographs by proposing a Vector Flow Aware and Line Controllable Image-to-Image Translation architecture, which can be viewed as an appealing intersection between Artificial Intelligence and Art. Specifically, we first present an Image-to-Flow network (I2FNet) to efficiently and robustly create the vector flow field in a learning-based manner, which provides a direction guide for drawing lines. Then, we introduce our well-designed Double Flow Generator (DFG) framework to fuse features from the learned vector flow and the input image flow, guaranteeing the spatial coherence of lines. Meanwhile, in order to allow for controllable character line drawing generation, we integrate a Line Control Matrix (LCM) into the DFG and train a Line Control Regressor (LCR) to synthesize drawings with different styles by precisely controlling the level of detail of lines, such as thickness, smoothness, and continuity. Finally, we design a Fourier Transformation Loss to further constrain character line generation from the frequency-domain point of view. Quantitative and qualitative experiments demonstrate that our approach achieves superior performance in producing high-resolution character line-drawing images with perceptually realistic characteristics.
§ INTRODUCTION
Character line drawing or line art refers to a process that uses various kinds of lines (e.g., straight or curved lines) to create an abstract and stylistic illustration of the visual properties of characters, including cartoon, manga and real persons. It can be considered a concise yet effective art modality in the domain of non-photorealistic rendering (NPR) <cit.>, communicating geometric shape and semantic information to viewers <cit.>. Therefore, character line drawing spans a broad spectrum of applications, such as colorization <cit.>, 2D animation <cit.> and image editing <cit.>. However, free-hand character line drawing requires professional drawing skills, expensive labor and considerable time <cit.>. It is therefore highly desirable to develop automatic techniques that generate character line drawings from given photographs.
Early traditional approaches to photo-to-line-drawing translation mainly depend on low-level edge-based techniques (e.g., the Canny edge detector), which rely on gradients to capture accurate edges but offer little artistic stylization <cit.>. Yet what we want is a perceptually meaningful image. Meanwhile, for the persons in photos, not only the outlines but also implicit properties such as the material of the clothes <cit.> should be taken into consideration to form a high-quality illustration. Fortunately, in recent years, owing to the continuously increasing power of hierarchical representation learning, deep learning techniques, especially the emergence of generative adversarial networks (GANs) <cit.>, have revolutionized the area of image-to-image (I2I) translation. Many works have explored the production of line art using deep convolutional neural networks (CNNs), but they mainly center on drawings of face portraits <cit.> and objects <cit.>.
On the other hand, most previous methods for I2I translation tasks are not ideal choices for high-quality character line drawing generation. This is mainly due to the following challenges: (1) a line drawing is a fairly abstract representation, which uses a set of much sparser lines to depict the visual characteristics of characters, especially the body part. (2) Lacking direction guidance, the lines produced by these models are usually unsharp, incomplete, unnatural and noisy. (3) Few methods are able to provide line detail adjustment in a user-controllable way.
To address the above problems, in this work we propose a vector flow aware and line controllable network architecture based on the core idea of GANs to generate character line drawings of even higher visual quality. First, we use a learning-based model to efficiently produce a vector flow that better preserves edge direction. Then, under the guidance of the learned flow, together with the original input image flow, our Double Flow Generator (DFG) improves the coherence of lines and reduces noise by fusing features from both flows. Finally, control over different levels of line detail is achieved by introducing our Line Control Matrix and Line Control Regressor. With these well-designed modules and strategies, and supervision by the GAN loss, Line Control loss, Pixel-wise loss as well as our Fourier Transformation loss, our method can capture coherent, stylistic and clean lines from photographs of persons.
In summary, our main contributions are as follows:
* An Image-to-Flow network (I2FNet) is designed for efficient generation of edge flow field, which can maintain the direction of the edge lines.
* We devise a Double Flow Generator (DFG) model to enhance the spatial coherence of character lines by mutually fusing information flow from learned vector field above and original input image.
* We obtain the control over different levels of line detail by embedding a Control Matrix into DFG and training a Control Regressor to produce different character line drawing styles.
* We construct a GAN-based Vector Flow Aware Image-to-Image framework for automated generation of visually high-quality character line drawings from photographs, and introduce a Fourier Transformation Loss to supervise the learning process in terms of frequency domain.
* Both quantitative and qualitative comparisons against state-of-the-art methods show the effectiveness, competitiveness, or even superiority of our proposed architecture.
§ RELATED WORK
Character line drawing/art representation visualizes the appearance of person/cartoon/manga in images or photographs using a set of lines. Its automated generation can be considered as one of the most important applications of Image-to-Image (I2I) translation in arts. Here, we briefly review related work in these fields.
§.§ Image-to-Image Translation
The recent increase in the image generation power of Generative Adversarial Networks (GANs) has created unprecedented opportunities for the development of the Image-to-Image (I2I) translation field, whose objective is to learn a mapping from a source image domain to a target image domain. A broad spectrum of successful applications of I2I translation can be found in pencil drawing <cit.> and image colorization <cit.> <cit.>.
Pix2Pix <cit.> adopts conditional GANs to formulate the first general-purpose image-to-image translation structure suitable for graphics and vision tasks. However, it has difficulty producing high-resolution and realistic images. To address this problem, Wang et al. <cit.> used a coarse-to-fine generator, a multi-scale discriminator and an improved adversarial loss to form the high-resolution version of Pix2Pix, named Pix2PixHD. CycleGAN <cit.> introduces two adversarial discriminators and two cycle-consistency losses to achieve mappings from domain X to domain Y and from Y to X. Zhu et al. <cit.> proposed BicycleGAN to create a bijective connection between the latent space and the output by combining the conditional variational autoencoder GAN (cVAE-GAN) and the conditional latent regressor GAN (cLR-GAN), obtaining more realistic and diverse results. In order to avoid washing away information, a spatially-adaptive denormalization layer <cit.> was designed; with its help, semantic information can be propagated through the network via the learned transformation for photorealistic image generation.
§.§ Line Drawing Generation
Line drawing can be viewed as a special kind of art modality, which attempts to represent the visual properties of an object, character or scene using basic lines without shading. However, generating line drawings is challenging due to their abstraction, diversity, sparsity and invariance <cit.>.
Li et al. <cit.> developed an encoder-decoder network to achieve structural line extraction from screen-rich manga images. Xiang et al. <cit.> proposed a framework containing two generators and two discriminators that performs photo-to-sketch and sketch-to-photo learning jointly. For artistic portrait drawing generation, APDrawingGAN <cit.> makes use of a global network for global facial structure description, and six local networks for eyes, nose, mouth, hair and background generation; however, it only produces coarse drawings. To enhance fine details, APDrawingGAN++ <cit.> exploits an autoencoder as generator, and integrates a hair classifier and a lip classifier to avoid synthesizing undesirable styles. Yi et al. <cit.> devised an asymmetric cycle GAN framework whose learning process is guided by a newly designed quality metric for better-looking APDrawing generation. Chan et al. <cit.> decoupled line drawings into geometry, semantics and appearance, enforced by CLIP, appearance and geometry losses.
Different from these methods, we mainly pay attention to controllable artistic character line drawing synthesis from full-body images/photographs, involving cartoon and real persons, which contain a much sparser set of lines, especially in the body part. Our study leverages feature representations from the image and edge flows to synthesize coherent, stylistic and clear lines under the supervision of adversarial, pixel-wise and FFT losses. Control over line details is obtained by introducing the Line Control Matrix and Line Control Regressor.
§ DATA PREPARATION
In this study, we formulate both Image-to-Flow and Image/Photograph-to-Character line drawing transformations as supervised Image-to-Image translation tasks. Therefore, in order to find the mapping from character image/photograph to edge tangent flow field and that from image to character line drawings in learning based manner, building a high-quality dataset is desirable for training and evaluating our model.
We collect a large number of character images from online websites, mainly containing images/photographs of men, women, and manga/cartoon boys and girls to ensure the diversity and richness of the data. In order to make these images more suitable for processing by deep learning networks, we first scale them to a uniform size of 1024 × 1024 pixels.
Then, for each image, we use the traditional edge tangent flow acquisition strategy <cit.> to generate its corresponding ETF vector field. On the other hand, since it is fairly difficult to obtain character image/photograph-line drawing pairs, we use Flow-based Difference of Gaussians methods to produce several smoothed line drawings with different levels of detail for every image, and record the corresponding line control parameter. We then group the images with their corresponding ETF fields and the line drawings with different control parameters to construct the final dataset. In total, our dataset contains 1,634 images, of which 1,037 samples are selected for training and 327 images for testing. In the case of controllable generation, five line drawings are selected for each image, together with the corresponding ETF, to train our character image/photograph-to-line drawing translation network. In addition, it should be noted that, to make a fair comparison, in our comparison experiments and ablation study we use only one character line drawing per image to train all the models, generated using a fixed control parameter value.
§ PROPOSED APPROACH
§.§ Overview
Since automated character line drawing generation can be formulated as a task in the domain of image-to-image translation, our goal is to perform a photo-to-line drawing transformation. That is, given an arbitrary image from source character photograph domain 𝒫, our proposed approach outputs a corresponding image belonging to target character line drawing domain 𝒞 via learned mapping function f: 𝒫→𝒞.
The overall architecture is schematically illustrated in Figure <ref>. It can be clearly seen that our model consists of an Image-to-Flow Network (Section <ref>) for edge flow field generation, a Double Flow Generator (Section <ref>) and a drawing discriminator (Section <ref>) for edge flow guided translation from photo to line drawing, and a Control Regressor (Section <ref>) for adjustment of drawing details.
§.§ Image-to-Flow Network (I2FNet)
The direction of strokes plays an important role in determining the coherence and consistency of the lines. Kang et al. <cit.> proposed the edge tangent flow (ETF) technique, providing a much better solution for finding the directions of local image structure. However, the traditional method for computing the ETF is usually sensitive to user-specified parameters <cit.>, and requires considerable computation time to obtain a high-quality illustration. Therefore, it is necessary to define an efficient mapping function that performs the transformation from image 𝒫 to vector flow ℰ (i.e. f_ETF: 𝒫→ℰ) in a learning-based manner.
To this end, we propose a simple but efficient Image-to-Flow network based on the core idea of the GAN model. Specifically, the generator G_ETF adopts a U-Net structure with five encoding layers and five decoding blocks. It begins with a transformation module that converts the input character image/photograph p ∈𝒫 into a grayscale image, followed by hierarchical feature learning and four downsampling operations via CL (Convolution, LReLU) and CIL (Convolution, Instance Normalization, LReLU) layers. Then, we use DR (Deconvolution, ReLU), DIR (Deconvolution, Instance Normalization, ReLU) and UCT (Upsampling, Convolution, Tanh) layers as decoding blocks to gradually upsample the feature maps from the last encoding layer and produce the edge tangent flow vector field G_ETF(p). For the ETF discriminator D_ETF, we utilize the PatchGAN classifier <cit.> to predict a real/fake map over 94×94 patches. Figure <ref> displays a visualization of the directions of the learned ETF vector field.
§.§ Double Flow Generator (DFG)
Once the ETF field has been constructed, we can use the drawing directions it provides to guide the generation of character lines with enhanced quality and continuity. Here, we design a Double Flow Generator network G_CL that takes the original input image p and the corresponding ETF vector field e ∈ℰ as inputs and outputs a character line drawing.
In particular, the DFG G_CL also adopts an U-Net framework but with two encoder branches: an ETF encoder and an image encoder. Both of them use six encoding layers to extract useful information hierarchically from input ETF and image flow. Then, at the end of the encoders, we concatenate the output feature maps and transfer the fused features into our decoder. Simultaneously, for each decoding layer, we also integrate the feature maps from corresponding encoding layers in both encoders with the help of skip-connections. The final character line drawing illustration G_CL(p, e) is created through upsampling these hierarchically-fused features.
§.§ Line Drawing Discriminator
The objective of the line drawing discriminator D_CL is to distinguish the synthesized character line drawings from their ground-truth counterparts. We again adopt the PatchGAN structure as the discriminator to classify whether each 94×94 patch is real or fake.
§.§ Line Control Regressor
In order to achieve various character line drawing styles via flexible, user-controllable adjustment of line thickness and smoothness, we introduce the following strategies. In our DFG network, given an image-line drawing pair and its corresponding ETF field, we first obtain the corresponding control parameter α according to Section <ref>, and use it to define a Line Control Matrix a ∈𝒜. Then, we append this matrix to each decoding layer to constrain the generated drawing to have the expected style. Finally, we design a Line Control Regressor (LCR) ℛ based on a fully convolutional regression network, which takes the ETF, the ground-truth character line drawing and the control parameter as inputs during training. It should be noted that (1) the control parameter value is in the range [0, 1], and (2) our LCR is trained independently; the output of the trained model is involved in the calculation of the Control Loss that supervises the training of the DFG model, as described in Section <ref>.
§.§ Objective Function
In order to obtain high-quality character line drawings, we use the following four loss functions to supervise the training process.
Adversarial Loss.
The DFG G_CL attempts to generate indistinguishable character line drawing images, while the discriminator D_CL aims to differentiate the synthesized results from real ones. Therefore, we use an adversarial loss ℒ_adv^CL to encourage the output to be visually close to the ground truth, which is defined as follows:
ℒ_adv^CL = E_p∼𝒫, c∼𝒞 [log D_CL(p, c)] + E_p∼𝒫, c∼𝒞 [log(1 - D_CL(c, G_CL(p, G_ETF(p))))]
For ETF generations using I2FNet, similarly, we also take advantage of Adversarial loss to force the synthesized ETF to be similar to target ETF domain ℰ.
ℒ_adv^ETF = E_p∼𝒫, e∼ℰ [log D_ETF(p, e)] + E_p∼𝒫, e∼ℰ [log(1 - D_ETF(e, G_ETF(p)))]
Pixel-wise Loss. In order to make the translated drawing perfectly match the ground-truth character line drawing at the pixel level, we measure their similarity with the L_1 distance because (1) L_1 contributes to less blurry results <cit.><cit.>, and (2) it stabilizes the training procedure <cit.>.
ℒ_pixel-wise = E_p,c [ ‖G_CL(p, G_ETF(p)) - c‖_1 ]
Control Loss. One of our key contributions is to perform controllable character line drawing generation via adjustment of the line control parameter. To achieve this, we incorporate a Control Loss to encourage our DFG to produce character line drawings with the expected style. The control loss ℒ_lc can be written as,
ℒ_lc = E_p,a [ ‖ℛ(G_CL(p, G_ETF(p)), G_ETF(p)) - a‖_1 ]
FFT Loss. Analysing character line drawings from the frequency-domain perspective is also well suited to our translation task, mainly because similar images should have similar spectra, and edges are usually related to high frequencies. Figure <ref> visualizes the spectra of examples of real and synthesized character line drawings. We therefore design a Fast Fourier Transformation (FFT) Loss to drive the generated character line drawings to be similar to the ground-truth drawings in the frequency domain. The FFT loss is formulated as follows,
ℒ_fft = E_p,c [ ‖FFT(c) - FFT(G_CL(p, G_ETF(p)))‖_1 ]
Where FFT(·) represents Fast Fourier Transformation operation.
Total Loss. The final objective loss function of our model is formulated as,
ℒ_total = λ_adv ℒ_adv^CL + λ_pixel ℒ_pixel-wise + λ_lc ℒ_lc + λ_fft ℒ_fft
In experiments, we set λ_adv = 1, λ_pixel = 100, λ_lc = 1, λ_fft = 0.05.
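A sketch of the generator-side objective, assuming PyTorch; the discriminator is simplified to an unconditional probability output and the line-control regressor signature ℛ(drawing, ETF) is an assumption, not the authors' exact interface:
import torch
import torch.nn.functional as F
def generator_loss(p, c, a, G_CL, G_ETF, D_CL, R,
                   lam_adv=1.0, lam_pixel=100.0, lam_lc=1.0, lam_fft=0.05):
    e_hat = G_ETF(p)                       # learned ETF vector field
    c_hat = G_CL(p, e_hat)                 # generated character line drawing
    # Adversarial term (generator side); D_CL is assumed to output probabilities.
    adv = torch.mean(torch.log(1.0 - D_CL(c_hat) + 1e-8))
    pixel = F.l1_loss(c_hat, c)            # pixel-wise L1 against the ground truth
    lc = F.l1_loss(R(c_hat, e_hat), a)     # line-control regression term
    fft = torch.mean(torch.abs(torch.fft.fft2(c) - torch.fft.fft2(c_hat)))
    return lam_adv * adv + lam_pixel * pixel + lam_lc * lc + lam_fft * fft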
§ EXPERIMENTS
§.§ Implementation details
We implement our proposed character line drawing synthesis network in PyTorch. The Adam solver with β_1=0.5 and β_2=0.999 is used to optimize our model for 200 epochs. The initial learning rate is set to 0.0002. The mini-batch size is 2 (1 for I2FNet). For all the methods to be compared, we use the default hyperparameter settings from the original papers to train these models. All experiments are performed on a single 24GB NVIDIA GeForce RTX 3090 GPU, and the training images are resized to 1024 × 1024 pixels.
§.§ Evaluation Metrics
To quantitatively evaluate the quality of the generated character line drawings, we consider the following three metrics: (1) the Fréchet Inception Distance (FID) <cit.>, comparing the distribution of the generated character line drawing images with that of the ground truth (lower values mean better quality); (2) the Structural Similarity Metric (SSIM) <cit.>, measuring the similarity between the generated drawing images and the ground-truth images (higher scores indicate better results); (3) the Peak Signal-to-Noise Ratio (PSNR) <cit.>, evaluating the intensity difference between the prediction and the ground truth (larger values indicate smaller differences).
§.§ Comparison with the state-of-the-arts
We compare our proposed approach with several state-of-the-art models, including Pix2Pix <cit.>, BicycleGAN <cit.>, Pix2PixHD <cit.>, SPADE <cit.> and Anime2Sketch <cit.>. For a fair comparison, we use our prepared dataset and their default settings to train and evaluate these networks.
§.§.§ Quantitative evaluation
Table <ref> reports the quantitative performance of our proposed model compared with state-of-the-art methods. From these results, we make the following observations: (1) our character line drawing network obtains much better performance on all three measurement metrics, significantly outperforming the competing approaches. (2) The lowest FID value indicates that the distribution of our generated character line drawings is closest to that of the ground-truth drawings, and the highest SSIM and PSNR scores indicate the greatest similarity between synthesized drawings and real ones. (3) In summary, the quantitative evaluations verify the effectiveness of our proposed model in generating high-quality and high-fidelity character line drawings.
§.§.§ Qualitative comparison
Figure <ref> visualizes the qualitative comparison of the results in terms of the generated character line drawings themselves and their differences from the ground truth. We use red to represent information present in the real drawings but not in the generated ones, while blue means the opposite. From these visualization results, we draw the following conclusions.
Pix2Pix <cit.>, BicycleGAN <cit.>, Pix2PixHD <cit.> and SPADE <cit.> generate character line drawings with a somewhat acceptable perceptual appearance, but they also introduce many unwanted artifacts. Anime2Sketch <cit.> yields drawings in which a great deal of detail is lost. In contrast, the results of our method are visually much closer to the ground truth than those of these models. This demonstrates that our method can not only reduce noise/artifacts via the FFT loss, but also use continuous, clear and smooth lines to depict the details in the character image/photograph under the guidance of the ETF field.
In summary, our approach performs favorably against these state-of-the-art models in terms of dealing with fine details, line quality preservation and noise reduction.
§.§ User control
From the previous discussion, it can be seen that by introducing the Line Control Matrix and the Line Control Regressor network, our image/photo-to-line-drawing translation architecture can perform fine-grained control over character line drawing styles with user-specified values of the control parameter α. Figure <ref> illustrates examples of character line drawings with different styles generated using different α for cartoon, manga and real-person images/photographs. We draw the following conclusions: (1) by adjusting the control parameter α, users can indeed obtain character line drawings with their desired styles. (2) The net effect of increasing α is a character line drawing image with an increasing amount of detail, with lines becoming clearer, more complete, and more continuous. (3) As expected, the LCM and LCR have important effects on resolving line details for character line drawing synthesis.
§.§ Ablation study
We conduct ablation studies to illustrate the contribution of our proposed modules to character line drawing synthesis. Here, we train three variants of our model, namely without the FFT loss, without the ETF encoder, and without both (our baseline). Table <ref> reports the quantitative performance comparison with our DFG in terms of FID, SSIM and PSNR, and Figure <ref> visualizes a qualitative comparison of an example using these models.
From these experimental results, we make the following observations: (1) removing the FFT loss and the ETF encoder results in degraded performance and reduced similarity between generated and real character line drawings. (2) We also observe blurry, incoherent and incomplete lines due to the lack of guidance from the stroke direction and the frequency spectrum. (3) Therefore, the FFT loss and the ETF encoder are essential to our character line drawing network; they jointly enable our method to produce visually high-quality character line drawings with clear, coherent and less noisy lines.
§ CONCLUSION
In this paper, we present a framework for controllable artistic character line drawing generation. Three well-designed modules, namely the Image-to-Flow network, the Double Flow Generator and the Line Control Regressor, jointly contribute to synthesizing illustrations of higher visual quality. Experimental results demonstrate that our model can produce coherent, clear, stylistic and controllable lines.
|
http://arxiv.org/abs/2307.04348v2 | 20230710051628 | Full statistics of non-equilibrium heat and work for many-body quantum Otto engines and universal bounds: A non-equilibrium Green's function approach | [
"Sandipan Mohanta",
"Bijay Kumar Agarwalla"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech",
"cond-mat.mes-hall"
] |
|
http://arxiv.org/abs/2307.06103v1 | 20230712115904 | Experimental detectability of spin current shot noise | [
"Luise Siegl",
"Michaela Lammel",
"Akashdeep Kamra",
"Hans Huebl",
"Wolfgang Belzig",
"Sebastian T. B. Goennenwein"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall",
"cond-mat.mtrl-sci"
] |
^1Department of Physics, University of Konstanz, 78457 Konstanz, Germany
^2Condensed Matter Physics Center (IFIMAC) and Departamento de Física Teórica de la Materia Condensada, Universidad Autónoma de Madrid, E-28049 Madrid, Spain
^3Walther-Meißner-Institut, Bayerische Akademie der Wissenschaften, 85748 Garching, Germany
^4TUM School of Natural Sciences, Technische Universität München, 85748 Garching, Germany
^5Munich Center for Quantum Science and Technology (MCQST), 80799 München, Germany
A magnonic spin current crossing a ferromagnet-metal interface is accompanied by spin current shot noise arising from the discrete quanta of spin carried by magnons.
In thin films, e.g., the spin of so-called squeezed magnons has been shown to deviate from the common value ħ, with corresponding changes in the spin noise.
In experiments, spin currents are typically converted to charge currents via the inverse spin Hall effect.
We here analyze the magnitude of the spin current shot noise in the charge channel for a typical electrically detected spin pumping experiment, and find that the voltage noise originating from the spin current shot noise is much smaller than the inevitable Johnson-Nyquist noise.
Furthermore, due to the local nature of the spin-charge conversion, the ratio between spin current shot noise and Johnson-Nyquist noise does not scale with sample geometry and sensitively depends on material-specific transport properties.
Our analysis thus provides guidance for the experimental detection of squeezed magnons through spin pumping shot noise.
Experimental detectability of spin current shot noise
Sebastian T. B. Goennenwein^1
August 12, 2023
=====================================================
The power spectral density of charge current fluctuations contains fundamental information about the underlying transport and dynamics <cit.>.
For example, the discrete nature of electric charge results in shot noise <cit.>.
Vice versa, shot noise experiments allow one to quantify the quantum of charge relevant for transport.
In diodes and related structures, the existence of shot noise shows that the electrical current is carried by elementary charges, while in fractional quantum Hall systems, composite Fermions with fractional charge are the relevant electrical transport quanta <cit.>.
In superconducting contacts, multiple charge quanta have been predicted <cit.> and observed <cit.>.
On the other hand, thermal fluctuations of charge carriers inside an electrical conductor at equilibrium lead to the Johnson-Nyquist noise <cit.>.
In recent years, pure spin transport has attracted considerable interest.
In particular, spin pumping has emerged as a powerful method for the generation of pure spin currents in ferromagnetic/normal metal (FM/N) heterostructures <cit.>.
There, a magnon mode in the FM is populated using a coherent microwave drive, i.e., in ferromagnetic resonance (FMR).
The resulting nonequilibrium magnonic spin is partially absorbed by the electrons in N causing a pure spin current flow across the FM/N interface.
Taking advantage of the (inverse) spin Hall effect which interconverts pure spin and charge currents in a metal with strong spin-orbit coupling <cit.>, such spin currents can be detected in N as an electrical current or voltage signal.
Thermal fluctuations of a pure spin current have been detected experimentally in a yttrium iron garnet/platinum bilayer employing a magnetic field orientation-dependent measurement of the voltage noise power spectral density <cit.>.
The observed thermal spin current noise was theoretically shown to be related to the spin Hall magnetoresistance (SMR) effect via the fluctuation-dissipation theorem <cit.>.
The SMR effect is governed by spin current flow across a magnetic insulator/metal interface <cit.>.
More precisely, the resistance of the metal layer changes as a function of the magnetization orientation due to spin current flow across the interface.
However, while serving as a proof-of-concept for spin current noise measurements <cit.>, the observed thermal voltage noise in platinum did not provide deeper insights into the microscopic mechanisms of spin transport.
The situation is different for fluctuations of a non-equilibrium current, such as spin shot noise arising from a pure spin current I_s flowing across the FM/N interface <cit.>. In analogy to electrical shot noise, this spin current shot noise can be used to experimentally detect and quantify the spin transport quantum <cit.>.
For spin transport arising from a coherently driven magnon mode, as is the case in a typical ferromagnetic resonance scenario, the non-integer character of the magnon spin transport quantum, e.g. due to squeezing effects, could become experimentally accessible.
The same information is harder to infer from a thermally driven spin current shot noise, for example via the spin Seebeck effect <cit.>, as it involves multiple magnon modes with different effective spins.
However, previous theoretical works <cit.> on spin current shot noise have restricted themselves to examining spin currents, with only a preliminary discussion of spin-to-charge current noise conversion <cit.>.
Moreover, to assess the experimental detectability, the additional noise contributions arising from charge fluctuations in the normal metal must be taken into account.
Here, we calculate the magnitude of the spin current shot noise power spectral density upon conversion to the charge channel, and consider how the spin to charge conversion impacts the detectability of spin current shot noise.
A key prerequisite for this consideration is that the spatial correlations in the electronic spin current injected into N are short-ranged, on a scale comparable to the electronic wavelengths.
This implies that the conversion factor for spin to charge dc currents, widely used in spin-Hall effect based studies, does not provide a complete picture for the conversion of the interfacial spin current to the electrically measured voltage fluctuations.
In consequence, we find that the voltage noise resulting from the spin current shot noise is substantially smaller than the purely charge based Johnson-Nyquist (JN) noise in the normal metal.
We consider the shot noise associated with the spin pumping current in a system at a finite temperature T, driven by a coherent microwave magnetic field at driving frequency ω <cit.> which corresponds to the ferromagnetic resonance frequency.
The ensuing power spectral density in the spin channel for a total spin current I_s traversing the FM/N interface is given by S_I_sI_s = 2ħ^* I_s (2k_B T/ħω) <cit.>, where k_B is the Boltzmann constant, ħ is the reduced Planck constant and ħ^* the effective spin.
Interestingly, spin shot noise thus reflects magnon squeezing effects, since the quantum of angular momentum relevant for the noise is no longer ħ, but ħ^* = ħ (1+δ) <cit.>, where δ is a material and sample geometry dependent factor.
These results suggest that spin current shot noise can be used to experimentally detect and quantify magnon squeezing by the effective spin ħ^* via spin pumping experiments.
Now, we consider the experimental detection scheme to measure spin current shot noise electrically via the inverse spin Hall effect (ISHE) in the N layer.
Within the FM layer the magnetisation M is coherently driven out of its equilibrium position by FMR <cit.>.
Thus, a pure spin current I_s = j_swl propagating along z-direction is pumped over the interface of width w and length l into the metal layer, as depicted in Fig. <ref>.
Note that for this expression of the total spin pumping current, we have assumed a spatially uniform spin pumping current density j_s, which is valid for a dc current or the expectation value of an ac current.
Since a direct experimental detection of spin is difficult, the spin current density is converted into a charge current density j_c = (2e/ħ) θ_SH j_s × s via the inverse spin Hall effect in N <cit.>, as sketched in Fig. <ref>. Here, e is the elementary charge, and θ_SH the spin Hall angle.
In most experiments, open circuit electrical boundary conditions are implemented, such that an open-circuit dc voltage V is detected instead of the charge current.
Applying such an ISHE-based electrical detection scheme to spin current shot noise, one obtains a voltage noise power spectral density S_VV∝ S_I_sI_s in the charge channel inside the normal metal.
Generally, the spin shot noise can be detected as an electrical voltage noise power spectral density S_VV or current noise power spectral density S_II.
Since the current and voltage power spectral densities can be transformed into each other by (S_VV/V^2)_I=const = (S_II/I^2)_U=const <cit.>, we here focus on S_VV.
The typical theoretical analysis <cit.> exploits the spatial homogeneity of the spin pumping current density j_s over the FM/N interface to relate j_s with the experimentally measurable quantity, i.e. the total charge current I (or voltage V) through the normal metal.
This, in turn, is a consequence of the spatial invariance of the coherent microwave drive causing FMR and thus, the magnetization precession in the FM layer.
On the other hand, the spin pumping current noise or fluctuations are expected to have short ranged correlations determined by the wavelength of the electrons that absorb and carry the spin current in N.
Thus, we need to go beyond the typical relation for dc currents <cit.>, as discussed below, to relate the power spectral density of the total spin pumping current noise S_I_sI_s <cit.> to the total charge current I through the normal metal.
The spin current shot noise S_I_sI_s scales with the system temperature and is largest under the condition of ferromagnetic resonance ω=ω_0 <cit.>.
We thus exploit the result from Ref. <cit.> in the high temperature limit k_B T >> ħω for sufficiently low driving frequencies ω, as it is the experimentally relevant limit.
Assuming a y-polarized spin current density j_s, we obtain the spatially and temporally resolved spin pumping current density correlator
⟨ j_s(t,ρ) j_s(t',ρ') ⟩ = 2 ħ^* j_s (2k_B T/ħω_0) δ(t-t') δ(ρ-ρ')
local in time t and space, where ρ is the two-dimensional position vector in the interfacial plane.
Equation (<ref>) captures the low-frequency and frequency-independent part of the spin current noise power spectral density.
It has been derived starting from the correlator of the total spin current across the interface <cit.>, and considering a coherent region.
Assuming additionally that the coherence length is much smaller than the sample dimensions, a delta function in space is obtained.
Taking this spatio-temporal correlation for the interfacial spin current density and following an analysis similar to that in Ref. <cit.>, we evaluate the voltage noise power spectral density S_VV^shot of the spin pumping current shot noise in the charge channel
S_VV^shot = 16 θ_SH^2 λ_sd^2 (ρ_N^2 l)/(w t_N^2) j_s (e^2 ħ^*/ħ^2) (2k_B T/ħω_0) tanh^2(t_N/(2λ_sd)) .
Here, ω_0 is the FMR frequency, and λ_sd, ρ_N and t_N are the spin diffusion length, the resistivity and the thickness of the metal layer N, respectively.
The enhancement factor 2k_B T/ħω_0 in the spin current shot noise is particular to the spin pumping case <cit.> and contrasts with typical electronic transport <cit.>.
In addition to the voltage noise, Eq. (<ref>), arising from the spin pumping current traversing the FM/N interface, the normal metal harbors a thermal charge fluctuations-based Johnson-Nyquist (JN) noise with a power density
S_VV^JN = 4 k_B T R.
Here, R = ρ_N l/(t_N w) is the resistance of N.
Note, that the noise represented by Eq. (<ref>) is different from the contribution of the thermal magnonic spin current fluctuations <cit.>, which can be considered a magnonic spin transport analogue of the JN noise.
As the thermal magnonic spin current fluctuations has been theoretically shown to be smaller than the shot noise in a wide range of parameters <cit.> we disregard this magnonic contribution in our analysis here.
The voltage noise in N thus will have (at least) two contributions, S_VV = S_VV^shot + S_VV^JN.
Since both are frequency independent (white) at low frequencies, we consider and compare their absolute magnitudes.
The ratio
S_VV^shot/S_VV^JN = 8 θ_SH^2 λ_sd^2 (ρ_N/t_N) j_s (ħ^* e^2)/(ħ^3 ω_0) tanh^2(t_N/(2λ_sd))
should be maximized for the detection of spin current shot noise.
Since δ is on the order of 1 in thin ferromagnetic films, we use ħ^*=2ħ to estimate the ratio <cit.>.
Therefore, the adjustable parameters for maximizing the spin noise voltage signal are the magnitude of the spin current density j_s, the FMR frequency ω_0, and the thickness t_N of the metal layer together with its material-specific properties θ_SH, λ_sd and ρ_N.
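The ratio can be evaluated directly from Eq. (<ref>); the material numbers below are illustrative placeholders (roughly platinum-like), not values taken from Fig. <ref>:
import numpy as np
from scipy.constants import hbar, e
def noise_ratio(theta_SH, lambda_sd, rho_N, t_N, j_s, omega_0, hbar_star=2 * hbar):
    # Ratio S_VV^shot / S_VV^JN for a given material and geometry.
    p = (lambda_sd / t_N) * np.tanh(t_N / (2.0 * lambda_sd)) ** 2
    return 8.0 * theta_SH ** 2 * lambda_sd * rho_N * p * j_s * hbar_star * e ** 2 / (hbar ** 3 * omega_0)
# Illustrative, assumed parameter values: theta_SH = 0.1, lambda_sd = 1.5 nm,
# rho_N = 4e-7 Ohm m, t_N = 2.18 lambda_sd, j_s = 1e-9 J/m^2, omega_0/2pi = 10 GHz.
print(noise_ratio(0.1, 1.5e-9, 4.0e-7, 2.18 * 1.5e-9, 1.0e-9, 2 * np.pi * 10e9))
# gives a ratio of order 1e-6 for these inputs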
In finding the optimal sample design for electrical spin current shot noise experiments, we first note that only the ratio of spin diffusion length λ_sd and metal layer thickness t_N enters in Eq. (<ref>).
Since t_N can be straightforwardly chosen by appropriate sample design, we numerically optimize the expression
p = (λ_sd/t_N) tanh^2(t_N/(2λ_sd))
from Eq. (<ref>).
Using Newton's method we find numerically that Eq. (<ref>) has a global maximum at p≈ 0.29 for t_N≈ 2.18λ_sd.
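This optimum is easy to verify numerically, e.g. with a bounded scalar optimizer instead of Newton's method:
import numpy as np
from scipy.optimize import minimize_scalar
# p depends only on x = t_N / lambda_sd, so maximize in that single variable.
p = lambda x: np.tanh(x / 2.0) ** 2 / x
res = minimize_scalar(lambda x: -p(x), bounds=(0.1, 20.0), method='bounded')
print(res.x, p(res.x))   # approximately 2.18 and 0.29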
Next, we consider the material properties of the normal metal N in the combination θ^2_SHλ_sdρ_N in Eq. (<ref>).
Figure <ref> shows a compilation of values from literature for a set of different spin Hall active metals <cit.>.
The symbols hereby indicate different materials, the colors the references from which the values were taken.
As evident from Fig. <ref>, the values of θ^2_SHλ_sdρ_N span 5 orders of magnitude.
This large variation reflects the broad scatter in spin Hall angles and spin diffusion lengths reported even for the same material <cit.>.
Finally, we turn to the magnitude of the spin current density j_s.
In spin pumping experiments, values of 1.4 × 10^-11 J/m^2 < j_s < 8.8 × 10^-9 J/m^2 have been reported <cit.>.
Note that Weiler et al. found that the spin Seebeck effect allows generating larger j_s than those obtained from spin pumping <cit.>, presumably due to the contribution of magnons with a broad range of wave vectors and frequencies. However, the non-integer effective spin ħ^* of the squeezed magnon is largest for the k=0 eigenmode (Kittel mode) and ħ^* →ħ with increasing k <cit.>.
Therefore, experiments based on the spin Seebeck effect appear less suitable for investigating the basic mechanisms behind spin current shot noise and magnon squeezing effects. We therefore here focus on pure spin currents generated via spin pumping.
Taking together the previous results we can extract the ratio of the spin current shot noise and Johnson-Nyquist noise for different parameter combinations.
Figure <ref> shows this ratio in the voltage channel, S_VV^shot/S_VV^JN, as given by Eq. (<ref>).
We hereby assumed a FMR frequency of ω_0/2π=10GHz as typical for measurements with a microwave cavity, and ħ^*=2ħ as mentioned above.
Based on the parameter values discussed in the preceding paragraphs, the range of experimentally achievable noise power ratios is indicated as a semi-transparent rectangle in the figure.
Notably, even for the best possible combination of material parameters and spin current densities reported in the literature the ratio S_VV^shot/S_VV^JN is smaller than 10^-4.
This upper experimental boundary is marked by the black line in Fig. <ref>.
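For orientation, the following sketch evaluates Eq. (<ref>) for one assumed, Pt-like parameter set (the numbers below are illustrative choices, not values taken from Fig. <ref>); it lands at a ratio of order 10^-5, consistent with the upper bound quoted above:

import numpy as np

e, hbar = 1.602e-19, 1.055e-34          # C, J s
theta_SH, lambda_sd = 0.1, 1.5e-9       # assumed spin Hall angle and spin diffusion length (m)
rho_N = 4.0e-7                          # assumed resistivity (Ohm m)
j_s = 8.8e-9                            # J/m^2, largest spin pumping current density quoted in the text
omega_0 = 2 * np.pi * 10e9              # 10 GHz FMR frequency
t_N = 2.18 * lambda_sd                  # optimal thickness from above

ratio = (8 * theta_SH**2 * lambda_sd**2 * rho_N / t_N
         * j_s * (2 * hbar) * e**2 / (hbar**3 * omega_0)   # hbar* = 2*hbar
         * np.tanh(t_N / (2 * lambda_sd))**2)
print(f"S_shot/S_JN ~ {ratio:.1e}")     # ~1e-5 for this parameter set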
In previous experiments addressing the voltage noise due to thermal spin current fluctuations, changes in the noise magnitude of ≈1e-3 of the Johnson-Nyquist noise <cit.> could be resolved.
We furthermore include data from Weiler et al. (yellow symbols) <cit.> in Fig. <ref>, as a typical example for electrically detected spin pumping data recorded using yttrium iron garnet/Pt thin film bilayers and a microwave cavity.
In these experiments, the ratio of the spin current shot noise and Johnson-Nyquist noise is smaller than 10^-6.
Note that, while in cavity-based FMR a frequency of 10GHz is widely used, it would be beneficial for spin current shot noise experiments to reduce the microwave frequency as much as possible to increase the shot noise.
Our analysis thus indicates that for the detection of the spin current shot noise in an electrical experiment, both the sample properties and the spin current drive need to be carefully optimized.
Furthermore, to resolve the effective spin ħ^* of the magnon from spin pumping driven experiments <cit.>, even higher experimental precision will be required.
In summary, we derived the correlator of the spin pumping current density including its time and spatial dependence.
Considering the spin-to-charge conversion process typically used in electrically detected spin current experiments, we find that in the voltage channel, the spin current shot noise is small compared to the ubiquitous Johnson-Nyquist noise.
More precisely, the ratio of the spin current shot noise to Johnson-Nyquist noise is estimated to be at most 10^-3 using parameters from literature and assuming a driving frequency of 1GHz.
We thus conclude that a careful choice of materials is of key importance for the measurement of the spin pumping current shot noise and thus the effective spin of squeezed magnon.
Hence, our work offers important guidance regarding sample design and optimization for the experimental detection of spin current shot noise.
We acknowledge financial support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) via the SFB 1432 – Project-ID 425217212.
acknowledges financial support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) via Germany’s Excellence Strategy EXC-2111-390814868.
A. Kamra acknowledges financial support from the Spanish Ministry for Science and Innovation, AEI Grant CEX2018-000805-M (through the “Maria de Maeztu” Programme for Units of Excellence in R&D) and grant RYC2021-031063-I funded by MCIN/AEI/10.13039/501100011033 and the “European Union Next Generation EU/PRTR”.
|
http://arxiv.org/abs/2307.05722v1 | 20230710112941 | Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations | [
"Likang Wu",
"Zhaopeng Qiu",
"Zhi Zheng",
"Hengshu Zhu",
"Enhong Chen"
] | cs.AI | [
"cs.AI",
"cs.CL",
"cs.IR"
] |
[1]Corresponding Author.
Large Language Models (LLMs) have revolutionized natural language processing tasks, demonstrating exceptional capabilities in various domains. However, their potential for behavior graph understanding in job recommendations remains largely unexplored. This paper focuses on unveiling the capability of large language models to understand behavior graphs and on leveraging this understanding to enhance recommendations in online recruitment, including the promotion of out-of-distribution (OOD) applications. We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs and uncover underlying patterns and relationships. Specifically, we propose a meta-path prompt constructor that, for the first time, leverages an LLM recommender to understand behavior graphs, and we design a corresponding path augmentation module to alleviate the prompt bias introduced by path-based sequence input. By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users. We evaluate the effectiveness of our approach on a comprehensive dataset and demonstrate its ability to improve the relevance and quality of recommendations. This research not only sheds light on the untapped potential of large language models but also provides valuable insights for developing advanced recommendation systems in the recruitment market. The findings contribute to the growing field of natural language processing and offer practical implications for enhancing job search experiences.
§ INTRODUCTION
Recommendation in online recruitment aims at suggesting relevant job opportunities to job seekers based on their preferences and qualifications, improving the chances of matching the right employment. With the exponential growth of online recruitment platforms and the need for efficient and personalized job search experiences, the development of effective job recommendation systems has become crucial.
In online recruitment systems, job postings and resumes are written in natural language. Traditional approaches have treated job-resume matching as a supervised text-matching problem using paired data for training <cit.>. However, online recruitment platforms often suffer from sparse interaction data, with job postings attracting only a few candidates on average <cit.>. To address this, recent studies <cit.> have explored the use of behavior graphs to capture high-order interactions and alleviate the sparse interaction issue. These behavior graphs leverage message passing to enhance the understanding of user preferences.
Unlike many general recommendation tasks, job recommendation relies on textual understanding as its backbone, with behavior modeling contributing the personalized component. In our work, we aim to break through the accuracy bottleneck of job recommenders by enriching the semantics of textual representations. Inspired by several recent successful recommendation approaches based on text pre-training <cit.>, we are the first to introduce a large language model (LLM) as a job recommendation framework that directly generates targets to achieve this goal. Doing so is both beneficial and natural. For instance, out-of-distribution items frequently appear in recruitment markets because new job demands constantly emerge, such as prompt engineers for generative models. The powerful semantic mining ability and massive external knowledge of LLMs enhance the generation and associative power of the recommender, which is able to produce reasonable recommendation results even for such hard OOD items.
However, the existing learning schema of LLM recommenders cannot understand the non-textual behavior graph, which weakens the ability to personalize recommendations for different job seekers. To address this challenge, we propose a meta-path prompt constructor that encodes the interaction information of the graph into natural language prompts. Specifically, in such a heterogeneous behavior graph, each meta-path composed of various types of nodes and edges can be translated naturally into a description, since each type indicates a specific and meaningful interaction, e.g., interview, conversation, etc. Along this line, for each job seeker, the LLM captures high-order interaction features through the meta-path prompt to augment her personalized profile.
Based on the above analysis, we explore the inclusion of graph data understanding in large language model-based recommendation for the first time. An efficient large language model named GLRec (Graph-understanding LLM Recommender) is proposed to optimize the quality of job recommendation; it is fine-tuned with LoRA <cit.> on our constructed instruction dataset to bridge the gap between pre-trained knowledge and the actual recruitment domain. In particular, our exploration yields two valuable findings that strongly influence the graph understanding strategy of the LLM: (i) different paths carry different weights for the model's decision, and (ii) the position bias induced by the order of path prompts leads to unstable answers. To address these issues, we carefully design path shuffling, an adaptive path selector, and a hybrid path augmentation mechanism that combines them, in order to alleviate the negative impact brought by different path prompts. Through extensive experiments on real-world recruitment datasets, we observe a significant performance gain from the development of the LLM and its graph learning strategy. The main contributions can be summarized as follows:
* To the best of our knowledge, we are the first to implement a fine-tuned large language model as a job recommender, which improves matching accuracy via the semantic richness and massive knowledge of LLMs.
* We propose the meta-path prompt constructor, which for the first time enables an LLM recommender to understand behavior graphs, and design a corresponding path augmentation module to alleviate the prompt bias.
* We conduct extensive experiments on real-world recruitment datasets; the experimental results and visualization cases show the superiority of our model.
§ RELATED WORK
In terms of research focus, our work is mainly related to two areas: job recommendation and LLMs for recommendation. We introduce the mainstream work in these two directions in detail and point out the shortcomings of existing methods, from which the motivation for our proposed framework is drawn.
§.§ Job Recommendation
Job recommendation, and especially job-resume matching, is an essential task in recruitment data mining and has been extensively studied in the literature <cit.>. Early methods approached this problem as a recommendation task <cit.>, relying on collaborative filtering assumptions. However, recent research has focused more on text-matching technology, aiming to improve the representation of job and resume documents <cit.>.
Various techniques have been proposed to encode job and resume information. For example, <cit.> utilized CNN for encoding, while <cit.> leveraged RNN and BiLSTM to capture sequential information. <cit.> introduced a profiling memory module to learn latent preference representation by interacting with both job and resume sides. Additionally, <cit.> explored the effectiveness of adversarial training for job-resume matching. In addition to the aforementioned research, there are also works that consider multi-granularity interactions. The ranking-based loss function can be used to capture multi-level interactions as supervision signals <cit.>. <cit.> propose a bilateral multi-behavior sequence model to describe users' dynamic comprehensive preferences. These approaches highlight the importance of considering various interaction patterns and incorporating additional user information to improve the quality of job recommendations. However, online recruitment platforms frequently encounter challenges due to sparse interaction data, resulting in job postings attracting only a limited number of candidates on average <cit.>. Recent studies <cit.> have investigated the utilization of behavior graphs to capture high-order interactions and mitigate the problem of sparse interactions. These behavior graphs employ message-passing techniques to enrich the understanding of personalized user preferences.
§.§ Large Language Models for Recommendation
LLMs offer the potential to extract high-quality representations of textual features and leverage extensive external knowledge to enhance recommendation systems. <cit.> conducted a systematic review and analysis of existing LLM-based recommendation systems. Existing work can be divided into two categories: discriminative models and generative models. Most discriminative models align the representations of pre-trained models like BERT with domain-specific data through fine-tuning. For example, <cit.> proposed pre-training and fine-tuning-based approach to learn users' representation, which leveraged content-rich domains to complement those users' features with insufficient behavior data. Additionally, some research explores training strategies like prompt tuning. <cit.> leveraged BERT's Masked Language Modeling (MLM) head to uncover its understanding of item genres using cloze-style prompts. Prompt4NR <cit.> pioneered the application of the prompt learning paradigm for news recommendation. Generative models usually translate recommendation tasks as natural language tasks, and then apply techniques such as in-context learning <cit.>, prompt tuning <cit.>, and instruction tuning <cit.> to adapt LLMs to directly generate the recommendation results. Compared to discriminative models, generative models have
better natural language generation capabilities. In the job-resume matching area, there is a generative model that develops an LLM to generate potential JDs for more explainable and suitable recommendations <cit.>. Although LLM recommenders have been applied successfully thanks to their ability to associate knowledge, the lack of graph data understanding limits personalized adaptation. In our work, we aim to address this crucial challenge in the online recruitment scenario.
§ METHODOLOGY
In this section, we first illustrate our research problem formally and present related notations. Then the technical detail of GLRec would be introduced progressively. The overall framework is shown in Figure <ref>.
§.§ Preliminary
§.§.§ Problem Formulation
Consider a set of candidates C = {c_1, c_2, …, c_n_1} and a set of jobs 𝒥 = {j_1, j_2, …, j_n_2}, where n_1 and n_2 represent the total number of candidates and jobs, respectively. Each candidate and job is associated with a textual document that describes the resume or the job requirements. They are also linked to a collection of directed interaction records (such as interviewing and discussing) within the recruitment platform. These interactions are formally represented as 𝒜_c_i = {c_i → j' | c_i ∈ C, j' ∈𝒥} and 𝒜_j_k = {j_k → c' | j_k ∈𝒥, c' ∈ C}, indicating the directed interactions or links initiated by candidate c_i or employer j_k (referred to as a job). We use i and k as indices for candidates and jobs, respectively. Our objective is to predict the compatibility between a job posting and a candidate.
§.§.§ Generative Large Language Models
Generative LLMs are powerful language models that can generate coherent and contextually relevant text. These models, such as GPT-3/4, are trained on vast amounts of text data and can generate human-like text from a given prompt or input. Fine-tuning is a common adaptation strategy to align the objective of the pre-trained model with domain-specific applications; two popular paradigms are prompt tuning and instruction tuning. All of these tuning methods share the same final autoregressive training objective:
ℒ_f = max _Θ∑_(x, y) ∈𝒯∑_t=1^|y|log(𝒫_Θ(y_t| x, y_<t)),
Taking instruction tuning as an example, one designs and constructs instruction data to restrict the output scope and format. Here x and y represent the “Instruction Input” and “Instruction Output” in the self-instruct data, respectively, e.g., Instruction Input: “Do you like this item?”, Instruction Output: “Yes.”. Moreover, y_t is the t-th token of y, y_<t represents the tokens before y_t, Θ denotes the original parameters of the LLM, and 𝒯 is the training set.
§.§.§ Task-specific Instruction
In our work, we design two job recommendation tasks to test the LLM recommender, following existing related work <cit.>, i.e., point-wise and pair-wise job matching. Here we introduce the templates designed for the samples in our dataset, where information related to privacy and business has been filtered. Assume there is a job seeker, called the candidate, whose Candidate Profile Prompt and recommended JD Prompt are defined as:
Candidate Profile Prompt: Age: 25, Education: Bachelor's degree, Graduation School: XXX
University, Major: Computer Applied Science, Work Experience: 2 years.
JD Prompt: Position Title: Full Stack Engineer, Educational Requirement: Bachelor's degree, Work Experience: 1-3 years, Skill Requirements: HTML/JAVA/Spring Boot/SQL.
For the point-wise task, we let the LLM recommender learn to predict the satisfaction of a candidate with a recommended job. The instruction is designed as:
Point-wise Instruction: You are a recommender, determining whether a candidate would be satisfied with the recommended job position. Please answer with “Yes." or “No.".
For the pair-wise task, we let the LLM recommender learn to justify the preference of a candidate for a recommended job pair. Given two jobs' JD Prompt “A" and “B", the instruction is designed as:
Pair-wise Instruction: You are a recommender, determining which position will match the candidate. Please answer with “[A]." or “[B].".
With the above prompts and instruction text, the LLM is able to adapt to the domain recommendation setting. Note that, to ensure training stability, we append the JD prompt to the ground truth to increase the predicted length. To further fuse interaction knowledge, the next section illustrates the graph understanding component of the large language model: behavior meta-path prompt generation.
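For concreteness, a minimal sketch of how such inputs could be assembled is given below; the helper names and the exact field wording are illustrative and not the verbatim templates of our implementation.

def candidate_profile_prompt(profile):
    return ("Age: {age}, Education: {education}, Graduation School: {school}, "
            "Major: {major}, Work Experience: {experience}.").format(**profile)

def jd_prompt(jd):
    return ("Position Title: {title}, Educational Requirement: {edu_req}, "
            "Work Experience: {exp_req}, Skill Requirements: {skills}.").format(**jd)

POINT_WISE_INSTRUCTION = ("You are a recommender, determining whether a candidate would be "
                          'satisfied with the recommended job position. Please answer with "Yes." or "No.".')

def build_pointwise_input(profile, jd):
    # concatenate task instruction, candidate profile and JD into one LLM input
    return "\n".join([POINT_WISE_INSTRUCTION,
                      "Candidate Profile: " + candidate_profile_prompt(profile),
                      "JD: " + jd_prompt(jd)])

print(build_pointwise_input(
    {"age": 25, "education": "Bachelor's degree", "school": "XXX University",
     "major": "Computer Applied Science", "experience": "2 years"},
    {"title": "Full Stack Engineer", "edu_req": "Bachelor's degree",
     "exp_req": "1-3 years", "skills": "HTML/JAVA/Spring Boot/SQL"}))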
§.§ Behavior Meta-path Prompt Generation
To inject large language models with the ability to comprehend interactive relationships in graph data, we propose a meta-path-based prompt constructor to obtain prompt inputs that represent local subgraphs. Before delving into the details of our approach, it is necessary to provide a formal introduction to heterogeneous graph and meta-path.
Heterogeneous Graph.
A heterogeneous graph, denoted as 𝒢=(V, E), consists of an object set V and a link set E.
A heterogeneous graph is also associated with a node type mapping function ϕ: V →𝒱 and
a link type mapping function ψ: E →ℰ. 𝒱 and ℰ denote
the sets of predefined object types and link types, where |𝒱|+|ℰ|>2.
Meta-path.
A meta-path P is defined as a path of the form 𝒱_1 →^ℰ_1 𝒱_2 →^ℰ_2 ⋯ →^ℰ_l 𝒱_l+1 (abbreviated as 𝒱_1 𝒱_2 ⋯𝒱_l+1), which describes a composite relation ℰ_1 ∘ℰ_2 ∘⋯∘ℰ_l between objects 𝒱_1 and 𝒱_l+1, where ∘ denotes the composition operator on relations.
Heterogeneous graphs are more diverse and complex in terms of their semantics compared to homogeneous graphs. Meta-paths are commonly used techniques to mine and represent the interaction semantics within them. In the context of online recruitment, the interactions between job seekers and job positions, which involve different types of behaviors, form a behavior graph. This behavior graph is a typical heterogeneous graph, where different node types include Candidate, JD, and different edge types include messaging, interviewing, matching, and more.
Due to the unique and well-defined semantics of each edge type in the behavior graph, it is natural to convert a meta-path in the graph into a natural language description that is acceptable to the large language model. We only need to predefine the prompt template according to the edges appearing in a path and then fill the template with the resume or job description information. For instance, given a typical meta-path c_1 →^interview j_1 →^discuss c_2, the prompt template is constructed as:
Meta-path Prompt: c_1 interviewed for position j_1. This position was discussed with job seeker c_2.
The node information, i.e., the description of the candidates or the JD, will then be filled into the meta-path prompt template to generate the final prompt data in our dataset. A real case can be found in Figure <ref>. In addition, to avoid redundancy caused by overly similar meta-paths, we define a simple similarity metric as follows,
𝒮_i,j = |P_i ∩ P_j |/|P_i ∪ P_j|, P_i, P_j ∈Φ_P,
where Φ_P denotes the set of sampled meta-paths for a candidate, and P_i, P_j are two meta-paths in this set. |P_i ∩ P_j| is the number of tokens that appear in both paths, and |P_i ∪ P_j| is the number of tokens in their union. We ensure that 𝒮_i, j≤γ among the finally selected M meta-paths, where 0 ≤γ≤ 1 is a hyperparameter.
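A minimal sketch of this redundancy filter is shown below; it assumes whitespace tokenisation, which the paper does not prescribe, so the tokeniser choice is an illustrative assumption.

def path_similarity(p_i, p_j):
    # S_{i,j} = |P_i ∩ P_j| / |P_i ∪ P_j| over token sets
    t_i, t_j = set(p_i.split()), set(p_j.split())
    return len(t_i & t_j) / len(t_i | t_j)

def select_paths(candidate_paths, gamma=0.6, m=2):
    # keep at most m paths whose pairwise similarity stays at or below gamma
    selected = []
    for path in candidate_paths:            # assumed already ordered by the sampler
        if all(path_similarity(path, s) <= gamma for s in selected):
            selected.append(path)
        if len(selected) == m:
            break
    return selected

paths = [
    "c1 interviewed for position j1. This position was discussed with job seeker c2.",
    "c1 interviewed for position j1. This position was discussed with job seeker c3.",
    "c1 matched with position j2. This position interviewed job seeker c4.",
]
print(select_paths(paths))   # drops the near-duplicate second path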
§.§.§ Path Debiasing and Soft Selection
Unlike traditional network embedding, sequence-based meta-path prompts lead to two challenges for the LLM in understanding a candidate's behavior sub-graph.
Influence of Path Weight. Different meta-paths would present different weights for the model decision.
Position Bias of Path Prompt. The position bias of the order of path prompts brings unstable answers.
These two challenges arise when a pre-trained large language model is used as a recommender, and they hinder the effective modeling of semantic relationships in the graph by LLM recommendation models. To provide a more intuitive explanation, we extracted a real-world case from the log of a popular recruitment platform and visualized it in Figure <ref>. Specifically, for a job seeker in the IT industry, given his Candidate Profile Prompt, Meta-path Prompt 1, and Meta-path Prompt 2, we further feed the LLM with a Task-specific Instruction belonging to the point-wise recommendation task. The LLM recommender is expected to output the decision “Yes” or “No” to express the candidate's preference. Challenge 1 corresponds to Case 1 and Case 2 in this figure. We find that the same profile and task description with different behavior meta-paths forces the LLM to make different predictions. Obviously, the diversity of technology stacks in Path 1 reveals the candidate's preference for full-stack development, and compared to Path 2, the background of the job seeker related to Path 1 is closer to our candidate. Therefore, for this candidate, Path 1 is evidently more important for the final decision. For Challenge 2, if we construct the input sequence as in Case 3, i.e., in the order meta-path prompt 1 → meta-path prompt 2, the LLM outputs the wrong answer “No”, but with the reverse path prompt order the LLM provides an accurate prediction. Similar to the widely known
position bias of candidate items <cit.>, the position of the context prompt clearly misleads the model into generating unstable outputs.
To address the negative impact of these two challenges on the recommendation results, we carefully design an augmentation module specifically for the meta-path prompt, which consists of three concise but effective strategies. The first strategy is the Shuffle Mechanism. When preparing domain data for the model's supervised fine-tuning (SFT), for each sample that contains multiple paths we randomly shuffle the meta-path prompts m times. This data augmentation technique allows the model to learn semantically invariant patterns from different combinations of paths, leading to more stable results. It enhances the robustness of the model without introducing redundant information. The second strategy is the Path Soft Selector. In this work, we regard the path sampling process in Behavior Meta-path Prompt Generation as a hard selection that heuristically selects semantically rich paths. The Path Soft Selector then adaptively assigns a learned weight distribution to the constructed meta-path prompts. First, for a given meta-path prompt ℳ_i , i ∈{1, 2, ..., M} (M denotes the number of paths), we obtain the LLM word embedding e_t of each token t ∈ℳ_i. The meta-path embedding H_i of ℳ_i can then be obtained via mean pooling as follows,
H_i = 1/|ℳ_i|∑_t ∈ℳ_i e_t, i ∈{1, 2, ..., M}.
Then we propose a soft selector to calculate the weight for each meta-path embedding as:
α_i = softmax (W_a H_i) = exp(W_a H_i)/∑_j=1^Mexp(W_a H_j),
where W_a ∈ℛ^1 × d_e is a trainable parameter, and d_e denotes the dimension of H_i. To avoid training collapse caused by a change of value scale, we utilize a controller parameter λ∈ (0, 0.5] when updating the word embeddings in Eq. (<ref>).
ê_t = e_t + λ·α_i e_t, t ∈ℳ_i,
Compared with most existing tuned or non-tuned LLM models, our prompt augmentation mechanism employs phrase-level attention to distinguish different paths. This simple solution can also be transferred to other similar situations, such as weighted sentence embeddings.
Finally, the third strategy is the Hybrid Mechanism, which applies the Shuffle Mechanism and the Path Soft Selector simultaneously. This hybrid module is expected to address both challenges. We evaluate these three strategies in the experiment section.
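To make the soft selector concrete, the following PyTorch sketch implements Eqs. (<ref>)-(<ref>) in a simplified, batch-free form; tensor shapes, the absence of masking, and the toy dimensions are our own simplifications.

import torch
import torch.nn as nn

class PathSoftSelector(nn.Module):
    def __init__(self, d_e, lam=0.3):
        super().__init__()
        self.w_a = nn.Linear(d_e, 1, bias=False)   # W_a in the weight equation
        self.lam = lam                             # controller lambda in (0, 0.5]

    def forward(self, path_token_emb):
        # path_token_emb: list of M tensors, each (L_i, d_e) = token embeddings of one meta-path prompt
        h = torch.stack([e.mean(dim=0) for e in path_token_emb])   # (M, d_e), mean pooling
        alpha = torch.softmax(self.w_a(h).squeeze(-1), dim=0)      # (M,), learned path weights
        # e_hat_t = e_t + lambda * alpha_i * e_t for every token t of meta-path i
        return [e + self.lam * a * e for e, a in zip(path_token_emb, alpha)], alpha

selector = PathSoftSelector(d_e=8)
paths = [torch.randn(5, 8), torch.randn(7, 8)]     # two toy meta-path prompts
updated, alpha = selector(paths)
print(alpha)                                       # path weight distribution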
§.§ LLM Instruction Tuning and Recommendation
In this subsection, we introduce the instruction tuning and recommendation process, which aims to align the LLM with the recommendation task effectively and efficiently. For instruction tuning, we follow the general supervised fine-tuning procedure and minimize the autoregressive loss computed between the ground truth and the corresponding LLM output; the loss positions belonging to the prompt part are masked. The specific prompt format, task-specific instructions, and ground truth were introduced in the Methodology section. However, directly fine-tuning the entire model can be computationally intensive and time-consuming. To address this, we adopt a lightweight fine-tuning strategy using LoRA, which freezes the pre-trained model parameters and introduces trainable rank decomposition matrices into each layer of the Transformer architecture. This approach facilitates lightweight fine-tuning while reducing GPU memory consumption. The final learning objective can be computed as follows:
ℒ_f = max _Θ_L∑_(x, y) ∈𝒯∑_t = 1^|y|log(P_Θ+Θ_L(y_t| e_x, y_<t))
where Θ_L denotes the LoRA parameters, and we only update the LoRA parameters during training. Note that, different from existing fine-tuning frameworks for recommendation systems, we replace the token input x by the embedding e_x in Eq. (<ref>), since we update the prompt token embeddings in the soft selector.
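The prompt-part loss masking mentioned above can be sketched as follows; the token ids are toy integers, and the -100 ignore index follows the common cross-entropy convention, which we assume here rather than quote from a specific library configuration.

IGNORE_INDEX = -100

def build_sft_example(prompt_ids, answer_ids):
    # prompt tokens contribute no loss; only the ground-truth answer tokens are learned
    input_ids = list(prompt_ids) + list(answer_ids)
    labels = [IGNORE_INDEX] * len(prompt_ids) + list(answer_ids)
    return input_ids, labels

prompt_ids = [101, 2054, 2003, 1996]   # toy ids for instruction + profile + JD prompt
answer_ids = [3398, 1012]              # toy ids for the ground truth, e.g. "Yes."
input_ids, labels = build_sft_example(prompt_ids, answer_ids)
print(input_ids)
print(labels)                          # [-100, -100, -100, -100, 3398, 1012]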
As for the recommendation process, after several SFT alignment steps the trained model has learned the output format of our defined ground truth, so answer parsing is simple: we read off the softmax probability of the label token (the token used to denote the label, such as “Yes./No.” or “[A]/[B]” in our work) at the position of the model's output that corresponds to the label position in the ground truth. In this way, the final prediction probability is calculated.
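A sketch of this answer-parsing step is given below; the token ids and sequence position are illustrative placeholders rather than values from our tokenizer.

import torch

def label_probability(logits, label_pos, yes_id, no_id):
    # logits: (seq_len, vocab_size) from the fine-tuned LLM
    probs = torch.softmax(logits[label_pos], dim=-1)
    p_yes, p_no = probs[yes_id], probs[no_id]
    return (p_yes / (p_yes + p_no)).item()   # final prediction probability

logits = torch.randn(10, 32000)              # toy output for a 10-token sequence
score = label_probability(logits, label_pos=6, yes_id=3398, no_id=2053)
print(f"P(Yes) = {score:.3f}")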
§ EXPERIMENTS
To evaluate the motivation of our model, we conduct experiments to answer the following research questions:
* RQ1: How much improvement can be achieved in the field of job recommendation by using recommendation systems based on generative large language models?
* RQ2: How does the inclusion of behavior graph understanding affect the effectiveness of GLRec?
* RQ3: How well does the meta-path augmentation module optimize the influence of path selection on decision-making and the bias introduced by prompts?
§.§ Experimental Settings
§.§.§ Datasets.
We conduct experiments on the dataset Recr, which is collected from a large real-world online recruitment platform in China, to assess the recommendation methods.
The dataset was constructed from the online logs and contains two kinds of behavior, Match and Interaction, corresponding to the matching set and interaction set mentioned in the Problem Formulation. Besides, each candidate (and job) is associated with a descriptive text (i.e., a resume or job description). The overall statistics are shown in Table <ref>. The statistics show that job recommendation is a sparsely interactive scenario. The split ratio of the training set to the testing set is 5:1. Note that all sensitive or private information has been filtered out from the data.
§.§.§ Baseline.
To provide a comprehensive evaluation of our GLRec, we compare it against both LLM-based and traditional recommendation methods:
* RobertaRec <cit.>: Candidate resume and JD text are encoded into fixed-length vectors using RoBERTa encoder and then used to calculate similarity scores, enabling personalized recommendations.
* HGT <cit.>: Heterogeneous Graph Transformer is a powerful graph learning model which propagates the embeddings (initialized by RoBERTa) of nodes on behavior graph to capture high-order interactions.
* TALLrec <cit.>: An advanced fine-tuned LLM recommender that applies instruction tuning on self-instruct data built from users' historical interactions. Its original pre-trained backbone is LLaMA; we replace it with BELLE, as in our model, to handle the Chinese corpus.
§.§.§ Evaluation Metric.
We evaluate the two tasks using the conventional evaluation metric for explicit recommendation, the Area Under the Receiver Operating Characteristic curve (AUC), since both tasks can be cast as binary classification problems and the metric reflects how well the model predicts user interest in a target item. We calculate the AUC score using the Scikit-learn package.
§.§.§ Implementation Details.
In this paper, we utilize BELLE-LLaMA-7B <cit.> as the pre-trained LLM backbone due to its expanded Chinese vocabulary. The instruction-tuning and model inference, using LoRa, are conducted on 4 Tesla A100 80G GPUs.
Our approach incorporates the meta-path prompt and user-specific task instructions as model inputs for personalized recommendations. In our experiments, we investigate the impact of different numbers of paths, specifically [0, 1, 2, 3], for GLRec.
Further details regarding the path prompt and instructions can be found in the Methodology section.
Additionally, both RobertaRec and HGT have a token embedding dimension of 768, and HGT utilizes mean pooling to obtain the initial node embedding.
For all methods, we optimize model parameters using the Adam <cit.> optimizer with a default learning rate of 1e-4, minimizing the MSE loss as the optimization objective.
§.§ Performance Comparison
In this section, we conduct performance comparison experiments on Recr to answer RQ1. As mentioned in the task definition in Section Methodology, the point-wise and pair-wise settings are implemented for evaluation. We also explore the influence of the OOD situation on different models. The experimental split settings of Random, OOD_position, and OOD_JD are introduced below:
* Random: We randomly split the training and testing dataset based on the interaction record of each user.
* OOD_position: The intersection on JD's “job position” feature between the training set and the testing set is empty.
* OOD_JD: The intersection on JD items between the training set and the testing set is empty.
Our experimental results are reported in Table <ref>. Overall, our proposed GLRec model achieves the best performance among all baselines, with distinct score gaps between GLRec and every baseline according to the improvements in Table <ref>. This demonstrates the superiority and adaptability of a large-model framework that incorporates relationship understanding and extensive semantic knowledge in the job recommendation scenario. Even more encouragingly, GLRec demonstrates impressive performance on the OOD tasks. While its performance declines slightly compared to the random setting, our model achieves a significant improvement over the other models, which essentially degenerate to near-random guessing. This phenomenon illustrates the necessity of utilizing knowledge association for model generalization. Looking deeper into the baselines, the graph-based HGT outperforms the conventional dual-tower matching model (RobertaRec) in the context of job recommendation, which further confirms the importance of learning relationships. Moreover, we find that most models perform better on the pair-wise task than on the point-wise task; that is, directly determining whether an item is suitable is more challenging than comparing its priority with another item.
§.§ The Impact of Meta-path Number
In this experiment, we investigate the impact of meta-path number on the effectiveness of GLRec.
Here we evaluate the point-wise performance on Random setting using the AUC metric for different numbers of meta-paths, ranging from 0 to 3.
We also input the meta-path prompt (removing extra instruction text for feature conciseness) into RobertaRec for comparison. From the line graph of Figure <ref>, we can observe the following trends:
* For GLRec, the results consistently increase as the number of meta-paths increases. This indicates that the inclusion of behavior graph understanding significantly improves the recommendation effectiveness of GLRec.
* One notable observation is the significant improvement in GLRec's performance when moving from 0 meta-paths to 1 meta-path, with the peak reached with only 2 meta-paths. The score increases from 0.71 to 0.88, indicating a substantial boost in recommendation effectiveness. This improvement suggests that the chain-of-thought ability of the LLM, inspired by in-context learning, plays a crucial role in GLRec's performance.
* For RobertaRec, which does not incorporate behavior graph understanding, the values remain relatively stable across different meta-path numbers. The reason is that discriminative BERT-based models lack the ability to effectively understand prompts in the way generative LLMs do.
The results indicate that incorporating behavior graph understanding through meta-path prompt input has a significant positive impact on the effectiveness of GLRec. By leveraging the rich information in behavior graphs, GLRec gains a deeper understanding of user-item interactions, leading to improved recommendation performance, which provides sufficient evidence for RQ2.
§.§ The Impact of Bias of Meta-path Prompt
Due to the sequential nature of language model input, the construction of multi-path prompt sequences introduces a human-induced position bias, or order bias, which disrupts the final decision-making of the LLM. Additionally, this input pattern does not allow the model to learn the importance of the semantic information in different paths. Therefore, we design a path shuffle mechanism, a path soft selector, and a hybrid mechanism combining both to enhance the model's understanding of path information and mitigate the bias. The experimental results are reported in Figure <ref>, where the metric is AUC and the task is the point-wise setting.
According to Figure <ref>, all three strategies surpass the original input without path prompt augmentation in both sub-experiments, which confirms the necessity of path debiasing. Although the shuffle mechanism and the soft selector each have their own advantages and disadvantages in the two experiments with different path scales, both improve the quality of the results. The hybrid module combining them yields more stable results, indicating that the model indeed needs to account for the position of the input meta-paths and for the influence of different path prompts on decision-making in order to cope with practical recommendation scenarios. In principle, in other similar scenarios where the LLM input consists of multiple sentence prompts without a prior order, our proposed shuffle mechanism and soft selector can both help enhance the robustness of model training. We will continue to explore this property in future work.
§ CONCLUSION
In conclusion, this paper proposed GLRec, a job recommendation model that is the first to combine large language models (LLMs) with behavior graph understanding. By leveraging the semantic richness and massive knowledge of LLMs, GLRec improves the quality of job recommendations compared to traditional approaches. The meta-path prompt constructor encodes the interaction information of the behavior graph into natural language prompts, enhancing personalized recommendation. Experimental results validate the effectiveness of GLRec, showcasing its superiority on real-world recruitment datasets. This research contributes to the advancement of LLM-based job recommendation and opens up new possibilities of graph data understanding for LLMs in personalized recommendation. However, some aspects of our work still need further optimization, such as larger-scale experimental validation and finer-grained module testing.
aaai
|
http://arxiv.org/abs/2307.05250v2 | 20230711133129 | The simplicial complex of Brauer pairs of a finite reductive group | [
"Damiano Rossi"
] | math.RT | [
"math.RT",
"math.AT",
"math.GR",
"20J05, 20G40, 20C20, 55P91"
] |
|
http://arxiv.org/abs/2307.04987v1 | 20230711025249 | Inflationary magnetogenesis with a self-consistent coupling function | [
"Y. Li",
"L. Y. Zhang"
] | astro-ph.CO | [
"astro-ph.CO",
"gr-qc"
] |
School of Science, Dalian Maritime University, Dalian 116026, China
[email protected]
School of Science, Dalian Maritime University, Dalian 116026, China
[email protected]
Inflationary magnetogenesis with a self-consistent coupling function
Le-Yao Zhang
August 12, 2023
=====================================================================
In this paper, we discuss the inflationary magnetogenesis scenario, in which a coupling function is introduced to break the conformal invariance of the electromagnetic action. Unlike in conventional models, we
derive Maxwell's equations under the perturbed FRW metric.
We find that, once scalar-mode perturbations are taken into account, the self-consistency of the action depends on the form of the coupling function. Therefore, this self-consistency can be seen as a restriction on the coupling function. In this paper, we give the restriction equation for the coupling function and then obtain the specific form of the coupling function in a simple model. We find that the coupling function depends on the potential of the inflaton and is thus model dependent. We obtain the
power spectra of the electric and magnetic fields in the large-field inflation model.
We also find that the coupling function is an increasing function of time during the slow-roll era, as
in most inflationary magnetogenesis models, which leads to the strong coupling problem.
This issue is discussed qualitatively by examining the behavior of the coupling function during the preheating era.
PACS Nos.:98.80.Cq.
§ INTRODUCTION
Observations indicate that the universe is magnetized on a wide range of length scales<cit.>.
The sources of these magnetic fields are still unclear.
There are two types of models that can explain the origin of these magnetic fields: the astrophysical scenario
<cit.> and the primordial scenario (see Refs. <cit.> for reviews). The former attributes these magnetic fields to astrophysical processes.
The origin of the magnetic fields in galaxies and clusters can be explained by such models; however,
these models have difficulty explaining the origin of the magnetic fields in cosmic voids.
The magnetic fields in the cosmic voids appear more likely to be of early-universe origin<cit.>.
The latter, i.e. the primordial scenario, assumes that these large-scale magnetic fields originated in the early stages of the universe.
One class of possible sources of the primordial magnetic fields are phase transitions, such as the electroweak phase transition <cit.> or the QCD transition <cit.>.
However, in these scenarios only very tiny fields on galactic scales are obtained unless helicity is also generated, in which case one can have an inverse cascade of energy to large scales<cit.>.
The other class of possible sources of primordial magnetic fields is inflationary magnetogenesis<cit.>. Inflation provides an ideal setting for the generation of primordial large-scale fields<cit.>,
and therefore we focus on inflationary magnetogenesis in this paper.
Because the standard electromagnetic action is conformally invariant and the FRW metric is conformally flat, the electromagnetic field is not amplified during the inflationary era <cit.>. Therefore, in order to generate large-scale magnetic fields during inflation, it is necessary to break this conformal invariance<cit.>.
One way to do this is to introduce a time-dependent coupling function f^2(ϕ) into
the action <cit.>.
On the other hand, an effective way of linking theoretical models to observations is to consider the effects of the existence of large-scale magnetic fields on cosmological perturbations.
Meanwhile, cosmological perturbations will in turn affect the evolution of large-scale magnetic fields. A complete discussion should therefore solve the evolution of the electromagnetic field and of the cosmological perturbations together; in other words, one needs to consider magnetogenesis in an inflation model in which cosmological perturbations are included.
However, it is difficult to solve the equations that include all fields (perturbations and the electromagnetic field). There are two ways to treat this issue approximately. One way is to consider magnetogenesis in the unperturbed FRW metric and discuss its backreaction on the perturbations, e.g. on the CMB; most of the current work is done in this way (see <cit.> for example). The other way, which is used in this paper, is to consider magnetogenesis in the perturbed FRW metric and discuss the influence of the perturbations on the electromagnetic field.
As we will discuss in this paper, the existence of cosmological perturbations restricts the form of the coupling function.
Under the FRW background, the introduction of the coupling function does not spoil the self-consistency of the action, which means that the secondary constraint equation for the electromagnetic field, ∇⃗·E⃗=0, is satisfied automatically. However, if one considers the perturbed FRW background, this constraint equation is no longer trivial. In this situation, we can treat this equation as a restriction on the coupling function, i.e. the form of a self-consistent coupling function f(ϕ) should satisfy this equation. The purpose of this paper is to discuss inflationary magnetogenesis with this self-consistent coupling function.
This paper is organized as follows. We derive Maxwell's equations under the perturbed FRW metric and obtain the restriction equation for f(ϕ) in section <ref>. We apply this restriction equation to
slow-roll inflation at the end of section <ref>, and obtain the power spectrum of the electromagnetic field in the large-field inflation model in section <ref>. We also discuss the backreaction in section <ref> and the strong coupling problem in section <ref>, and summarize in section <ref>.
§ MAXWELL'S EQUATIONS UNDER THE PERTURBED FRW BACKGROUND
To obtain the restriction equation for f(ϕ), let us consider the FRW metric with scalar-mode inhomogeneous perturbations in the longitudinal gauge:
ds^2 = -(1+2Φ)dt^2+a^2(t)(1-2Φ)δ_ijdx^idx^j
= a^2(η)[-(1+2Φ)dη^2+(1-2Φ)δ_ijdx^idx^j]
where Φ is Bardeen potential, t is cosmic time and η is conformal time.
The action of matter during inflation can be written as <cit.>:
S=-1/16π∫ d^4x √(-g)[g^αβg^μνf^2(φ)F_μαF_νβ]
-∫ d^4x √(-g)[1/2g^μν∂_μφ∂_μφ+V(φ)]
where φ(t, x)=ϕ(t)+δϕ(t, x) is the inflaton and its perturbation.
F_αβ=A_β;α-A_α;β=A_β,α-A_α,β
is the electromagnetic field tensor, with A_α being the standard electromagnetic 4-potential. f(φ) is the coupling function which is introduced to break the conformally invariant of
the standard electromagnetic action <cit.>. For the convenience of discussion, we expand the coupling function as:
f^2(φ)=f^2(ϕ+δϕ)≈[f(ϕ)+df/dφ|_ϕδϕ]^2
≈ f^2(ϕ)[1+𝒢(ϕ)δϕ]
where
𝒢(ϕ)≡2/f(ϕ)df/dφ|_ϕ
It is worth noticing that 𝒢 depends only on time or, in other words, it is scale-independent.
In the model we discuss here, we treat the electromagnetic field as a “test" field, which means
that F_αβ does not affect the evolution of the background (a and ϕ) and perturbations (Φ and δϕ),
but the background and perturbations can affect the evolution of the electromagnetic field.
Maxwell's equations can be obtained from the action (<ref>):
∂_ρ[√(-g)f^2(φ)g^σμg^ρνF_μν]=0
In conformal time, the time component of Eq.(<ref>) (σ=0) leads to:
∂_i[f^2(ϕ)(1-2Φ+𝒢δϕ)δ^ijF_0j]=0
In Minkowski spacetime, Eq.(<ref>) is nothing but ∇⃗·E⃗=0.
This is the secondary constraint equation for the source-free electromagnetic field (one can refer to Appendix E in <cit.>). As we will see later, this equation is trivial in the FRW background and non-trivial in the perturbed background.
The space component of Eq.(<ref>) (σ=i) can be obtained similarly:
(1-2Φ+𝒢δϕ)A”_j-(1+2Φ+𝒢δϕ)∇^2A_j
+[𝒢ϕ'(1-2Φ+𝒢δϕ)-2Φ'+𝒢'δϕ+gδϕ']A'_j
+(2Φ_,k+𝒢δϕ_,k)
δ^kℓ(A_ℓ,j-A_j,ℓ)=0
where ∇^2≡δ^kℓ∂_k∂_ℓ is Laplace operator and ' denote the derivative with respect to conformal time.
For the convenience of discussion, we assume that A_i can be expressed as a perturbation expansion:
A_i=A_i^(0)+A_i^(1)+A^(2)_i+⋯
where O[A_i^(0)]∼ O[Φ].
In this paper we adopt the Coulomb gauge:A_0(η, x)=0, ∂_jA^j(η, x)=0.
Under this gauge, Maxwell's equations can be rewritten as:
∂_i{𝒢δϕ[A^(0)_j]'-4Φ[A^(0)_j]'-2A^(0)_jΦ'}δ^ij=0
and
{[A^(0)_j]”+𝒢ϕ'[A^(0)_j]'-∇^2A^(0)_j}
+ {[A^(1)_j]”+𝒢ϕ'[A^(1)_j]'-∇^2A^(1)_j}-Q_j^(1)=0
where:
Q_j^(1) ≡ (2Φ-𝒢δϕ) {[A^(0)_j]”+𝒢ϕ'[A^(0)_j]'-∇^2A^(0)_j}
+ (2Φ-𝒢δϕ)'[A^(0)_j]'
+4Φ∇^2A^(0)_j+δ^kℓ(𝒢δϕ+2Φ)_,k(A^(0)_j,ℓ-A^(0)_ℓ,j)
Notice that each order on the left-hand side of Eq.(<ref>) should vanish separately; therefore the space component of the Maxwell equations leads to
[A^(0)_j]”+𝒢ϕ'[A^(0)_j]'-∇^2A^(0)_j = 0
[A^(1)_j]”+𝒢ϕ'[A^(1)_j]'-∇^2A^(1)_j = Q_j^(1)
If scalar perturbations are not taken into account (Φ=δϕ=0), then the time component of Maxwell's equations, Eq.(<ref>), becomes trivial and the source term Q_j^(1) vanishes. One can
then choose the form of the coupling function f(ϕ) (and 𝒢) freely from the beginning and solve the space component Eq.(<ref>,<ref>) directly. However, once the scalar perturbations are considered, Eq.(<ref>) is no longer trivial.
This means that not every coupling function makes the theory self-consistent:
the form of the coupling function must ensure that this constraint equation holds.
Although Eq.(<ref>) is of order O[Φ^2], this does not mean that the restriction on the coupling function is merely a perturbative correction; whether Eq.(<ref>) is trivial or non-trivial is an essential difference.
This fact is not surprising.
It is rooted in the fact that the choice of Lagrangian density is not arbitrary and must satisfy self-consistency conditions<cit.>.
From Eq.(<ref>,<ref>,<ref>) one can see that, there are two aspects to the influence of cosmological perturbations on electromagnetic fields:
* The perturbations restrict the form of coupling function by Eq.(<ref>).
* The perturbations (and A^(0)_j) provide a source term Q_j^(1) for A^(1)_j.
A consistent discussion would solve Eq.(<ref>,<ref>,<ref>) together. However, one can notice that A^(1)_j does not appear in Eq.(<ref>) or Eq.(<ref>). This means that we can solve Eq.(<ref>,<ref>) to obtain 𝒢 and A^(0)_j first, and then insert 𝒢 and A^(0)_j into Eq.(<ref>) to get the solution for A^(1)_j.
Furthermore, the source term Q_j^(1) in Eq.(<ref>) is of order O[Φ^2], so A^(1)_j will be much smaller than A^(0)_j. Therefore, in this paper we only focus on the evolution of the main part of A_j, i.e. A_j^(0). In other words, we only focus on influence (i) of
the cosmological perturbations.
In fact, Eq.(<ref>) is the same as the evolution equation of the electromagnetic 4-potential in the conventional models <cit.>, except that the coupling function and A^(0)_j must also satisfy Eq.(<ref>).
We treat Eq.(<ref>) as a restriction on the form of the coupling function:
one chooses a coupling function which satisfies Eq.(<ref>) and then solves Eq.(<ref>).
From Eq.(<ref>), one can obtain that
𝒢δϕ[A^(0)_j]'-4Φ[A^(0)_j]'-2A^(0)_jΦ'=ℂ(η)
where ℂ(η) is the function of η. The choice of function ℂ(η) will affect the form of the coupling function.
In order to get the specific form of the coupling function, we consider the slow-roll era of inflation first.
During the slow-roll inflation, the super-Hubble scale Fourier mode of Bardeen potential Φ_k satisfy<cit.>
Φ'_k = 0, Φ_k≈ϵℋδϕ_k/ϕ'
where ϵ≡-Ḣ/H^2 is slow-roll parameter, and dot denote the derivative with respect to cosmic time. ℋ≡ a'/a
is conformal Hubble parameter and δϕ_k is Fourier mode of δϕ.
If one considers large scales only, then we have:
Φ' ≈ 0, Φ≈ϵℋδϕ/ϕ'
Inserting Eq.(<ref>) into Eq.(<ref>), one gets
𝒢=4ϵℋ/ϕ'+ℂ(η)/δϕ[A^(0)_j]'
From Eq.(<ref>), one can see that 𝒢 is scale-independent; therefore the second term on the right-hand side of Eq.(<ref>) should vanish, which means that we should consider models in which ℂ(η)=0. This consideration gives
𝒢=4ϵℋ/ϕ'
Although the above discussion only considers the behavior at large scales, the form of the 𝒢 is scale-independent, so Eq.(<ref>) holds at all scales.
During slow-roll inflation, the Klein-Gordon equation of ϕ reads 3Hϕ̇≈-V_,ϕ. Noting that the Friedmann equation during slow-roll
inflation is H^2≈ V/3, Eq.(<ref>) changes to
𝒢=-2V_,ϕ/V
Using Eq.(<ref>), one obtains the form of the coupling function f(ϕ) as:
f(ϕ)∝exp(-∫^ϕV_,ϕ/Vdϕ)∝ V^-1
Eq.(<ref>) shows that the form of the self-consistent coupling function depends on the
potential of the inflaton. In other words, the form of the coupling function is model dependent.
§ POWER SPECTRUM OF ELECTROMAGNETIC FIELD IN LARGE-FIELD MODEL
To obtain the power spectrum of the electromagnetic field, we consider
large-field inflation with a polynomial potential:
V(ϕ)=Λ^4(ϕ/μ)^p (p>0)
where Λ is the “height" of potential, corresponding to the vacuum energy density during inflation, and μ is the “width" of the potential, corresponding to the change in the field value Δϕ during inflation<cit.>.
According to Eq.(<ref>), the coupling function in this large-field model is
f(ϕ)∝ϕ^-p
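This can be verified symbolically; the short SymPy check below (an illustration, not part of the derivation in the text) confirms that 𝒢=-2V_,ϕ/V=-2p/ϕ for the potential of Eq. (<ref>) and that f∝ V^-1∝ϕ^-p indeed satisfies 2f'/f=𝒢:

import sympy as sp

phi, p, Lam, mu = sp.symbols('phi p Lambda mu', positive=True)
V = Lam**4 * (phi / mu)**p

G = -2 * sp.diff(V, phi) / V
print(sp.simplify(G))                                # -2*p/phi

f = V**-1                                            # candidate coupling function f ∝ V^{-1} ∝ phi^{-p}
print(sp.simplify(2 * sp.diff(f, phi) / f - G))      # 0, i.e. 2 f'/f = G is satisfied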
By using ϕ̇≈-V,_ϕ/3H, 3H^2≈ V during slow-roll inflation one can have<cit.>:
lna/a_i=-1/2p(ϕ^2-ϕ_i^2)
where a_i and ϕ_i are the scale factor and the value of the inflaton at the beginning of inflation. The coupling function can then be written as a function of the scale factor:
f(a)∝(-2plna/a_i+ϕ_i^2)^-p/2
This form of f(ϕ) is different from conventional models (see <cit.> for review). In these models, it is often assumed that the coupling function has a power law form of scale factor.
However, Eq.(<ref>) shows that the form of the self-consistent coupling function is not a power law form as in conventional models.
This means that the power-law-like coupling function does not
satisfy the self-consistent condition Eq.(<ref>). This is the main difference between the model we discuss here and the conventional models.
During the slow-roll era, the scale factor as a function of conformal time, a(η), can be assumed
as:
a(η)=a_i|η/η_i|^1+β
where η_i is the conformal time when inflation begins. The case β=-2 corresponds to de Sitter space-time. During inflation, η→0_-. Inserting Eq.(<ref>) into Eq.(<ref>), we have:
f∝[-2p(1+β)ln|η/η_i|+ϕ^2_i]^-p/2
It is worth noting that the coupling function diverges at the conformal time
|η_∞|=|η_i|exp[ϕ^2_i/2p(1+β)]
When η=η_∞, ϕ=0, which means that the slow-roll phase must have ended before η_∞.
Inserting Eq.(<ref>) into Eq.(<ref>), one gets:
f∝[2p(1+β)ln|η_∞/η|]^-p/2∝[ln|η_∞/η|]^-p/2
The next step is to solve Eq.(<ref>) using the coupling function Eq.(<ref>) or Eq.(<ref>). Before that, it is convenient to
set 𝒜≡ a(η)f(ϕ(η))A^(0)_j(η,k), where A^(0)_j(η,k)
is the Fourier mode of A^(0)_j. Eq.(<ref>) can then be written as<cit.>:
𝒜”(η,k)+(k^2-f”/f)𝒜(η,k)=0
At the beginning of inflation, η≈η_i.
This means f”/f≈ 0, and Eq.(<ref>) reduces to
𝒜”+k^2𝒜=0 ⇒ 𝒜∝exp(± i kη)
In the small-scale limit, the solution should be the “negative frequency" one<cit.>. In order to satisfy the Wronskian condition<cit.>, the solution for 𝒜 should be
𝒜=1/√(2k)exp(-ikη)
At the late time of inflation when η≈η_∞ we have
f”/f≈p/2(p/2+1)T^-2
where T≡|η_∞|-|η|=η-η_∞, therefore T<0 during the slow-roll inflation.
Noticing that dT=dη, Eq.(<ref>) changes to
d^2/dT^2𝒜+[k^2-p/2(p/2+1)T^-2]𝒜=0
The general solution of Eq.(<ref>) is
𝒜=(-kT)^1/2[C_1(k)J_ν(-kT)+C_2(k)J_-ν(-kT)]
where ν≡(1+p)/2.
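As a quick numerical sanity check (illustrative only), one can verify by finite differences that (-kT)^1/2 J_±ν(-kT) with ν=(1+p)/2 solves Eq. (<ref>):

import numpy as np
from scipy.special import jv

p, k = 4.0, 1.0
nu = (1.0 + p) / 2.0
c = (p / 2.0) * (p / 2.0 + 1.0)          # coefficient of T^{-2} in the equation above

def A(T, sign):
    return np.sqrt(-k * T) * jv(sign * nu, -k * T)

T0, h = -3.0, 1e-3                        # some T < 0 during slow roll
for sign in (+1, -1):
    A2 = (A(T0 + h, sign) - 2 * A(T0, sign) + A(T0 - h, sign)) / h**2   # finite-difference A''
    residual = A2 + (k**2 - c / T0**2) * A(T0, sign)
    print(f"J_{sign*nu:+.1f}: residual = {residual:.1e}")               # ~0 up to discretisation error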
When |η|≫|η_∞|, Eq.(<ref>) should reduce to Eq.(<ref>). Since for
|η|≫|η_∞| we have T≈η, Eq.(<ref>) only needs to approach
𝒜=1/√(2k)exp(-ikT)
Eq.(<ref>) can then be used to determine the coefficients C_1, C_2.
We focus on the behavior on large scales, so we take the large-scale limit: -kη→0⇒-kT→0. After ignoring the decaying term, we have:
𝒜≈ k^-1/2c(γ)(-kT)^1-γ
where
c(γ)=√(π/2^3-2γ)exp[iπ(1+γ)/2]/Γ(3/2-γ)cos(πγ)
and γ≡1+p/2>1. Therefore the power spectrum of magnetic field is
dρ_B/dk=1/kdρ_B/dln k=c^2(γ)/2π^2kH^4(kη)^6-2γ(1-η_∞/η)^2-2γ
The power spectrum of the electric field can be obtained in the same way:
dρ_E/dk=1/kdρ_E/dln k≈d^2(γ)/2π^2kH^4(kη)^4-2γ(1-η_∞/η)^-2γ
where
d(γ)=√(π)exp[iπ(1+γ)/2]/2^-γ+1/2Γ(-γ+1/2)cos(πγ)
From Eq.(<ref>,<ref>) we can see that the spectral index of magnetic power spectrum is n_B=6-2γ and the spectral index of electric power spectrum is n_E=4-2γ.
Therefore, a scale-invariant magnetic spectrum is obtained when γ=3, i.e. p=4,
while for γ=2, i.e. p=2, one obtains a scale-invariant electric spectrum. If the magnetic field spectrum is scale invariant (γ=3), the electric field spectrum is red.
Because γ>1, it can be seen from Eq.(<ref>) and Eq.(<ref>) that the magnetic and electric spectra increase rapidly as η→η_∞, which can cause a backreaction problem.
However, as discussed above, slow-roll inflation ends before η_∞. We denote the moment when slow roll ends by η_end and the value of the inflaton at this moment by ϕ_end.
Then we can get approximately that
|η_∞/η_end|-1≈ln|η_∞/η_end|=-ϕ_end^2/2p(1+β)≡𝒴
Insert Eq.(<ref>) into Eq.(<ref>,<ref>) one can estimate the power spectrum of magnetic and electric field at the end of slow-roll inflation:
dρ_B/dln k|_end ≈ c^2(γ)/2π^2H^4_end(k/a_endH_end)^6-2γ𝒴^2-2γ
dρ_E/dln k|_end ≈ d^2(γ)/2π^2H^4_end(k/a_endH_end)
^4-2γ𝒴^-2γ
To avoid the backreaction problem, the energy density of the electromagnetic field cannot exceed the energy density of the inflaton, which requires that
dρ_B/dln k|_end+dρ_E/dln k|_end<ρ_end
We focus on the scale-invariant magnetic spectrum, i.e. γ=3, p=4; then Eq.(<ref>) implies
H_end^4/2π^2𝒴^-6d^2(γ)(k/a_0H_0)^-2(a_0H_0/a_endH_end)^-2
<3/8πH^2_endM_pl^2
The ratio (a_0H_0)/(a_endH_end) can be estimated as<cit.>:
a_0H_0/a_endH_end≈1.51×10^-29h/R, (h≈0.72)
where R depends on the reheating phase; for a simple estimate one can choose R≈ρ_end^1/4 as in <cit.>. Inserting these into Eq.(<ref>), we obtain
ρ_end<(ϕ_endM_pl^1/3)^8×10^-42
If one require the ρ_end should be satify the requirements of nuleosynthesis (ρ_nuc≈10^-85M_pl^4), then
ϕ_end>10^-43M_pl^4/3
This means that as long as the slow-roll era ends before the inflaton has decayed to too small a value, the backreaction problem can be avoided.
We can also give a simple estimate of the present-day value of the magnetic field strength. From Eq.(<ref>) we know that
the power spectrum of the model in this paper is amplified by the factor 𝒴^2-2γ compared with the conventional model <cit.>.
This factor is independent of the scale factor. If one assumes instant reheating, then the present-day power spectrum of the magnetic field is also amplified by this factor.
However, this factor depends on the details of the inflation model, specifically on ϕ_end.
For example,
for a scale-invariant magnetic spectrum, i.e. V(ϕ)∝ϕ^4, this factor is (8/ϕ_end^2)^4.
In the ϕ^4 inflation model, the slow-roll era ends at ϕ_end≈ M_pl/2 <cit.>, and this factor becomes (4/π)^4. Therefore, the present-day value of the magnetic field strength is amplified by (4/π)^2≈1.6 compared with the conventional model <cit.>. This means that the magnetic field on the coherence scale 1 Mpc today is
B_0≈8×10^-10G(H/10^-5M_pl)
This result satisfies the lower bound from γ-ray observations, B∼10^-15G <cit.>.
Notice that the lower limit of ϕ_end is very small (see Eq.(<ref>)), so the magnitude of the factor 𝒴 can span a large range.
Therefore, we can use ϕ_end as a tunable parameter of the model and adjust its value to make the predicted magnetic field strength consistent with today's observations.
§ STRONG COUPLING
As the last part of this paper, we discuss the problem of strong coupling qualitatively.
From Eq.(<ref>) we know that the coupling function is a monotonically increasing function during the
slow-roll phase of inflation. This leads to the strong coupling problem, which was first pointed out in <cit.>. To avoid this problem, one can assume
a decreasing coupling function during the preheating era, as in <cit.>.
However, in this paper we found that the coupling function should satisfy Eq.(<ref>).
An attractive possibility is that this equation leads to a decreasing coupling function after slow-roll inflation.
Therefore it is interesting to discuss the
behavior of the coupling function during preheating.
It should be noted that Eq.(<ref>) is satisfied only in the slow-roll era. During preheating, the coupling function should be obtained by solving Eqs.(<ref>,<ref>) together.
One can eliminate A_j by combining Eqs.(<ref>,<ref>) and get
𝒟_1h'+𝒟_2h+𝒟_3h^2+𝒟_4=0
where
𝒟_1=ϕ'^2-2ℋq
𝒟_2=4qϕ'^2-8ℋq^2-2ϕ'ϕ”+2ℋ'q+2ℋq'
𝒟_3=-ϕ'^2+2ℋq
𝒟_4=-4ϕ'^2q'-ϕ'^4+4ℋqϕ'^2-4ℋ^2q^2
+8ϕ'ϕ”q-8ℋ'q^2
q≡Φ/δϕϕ', h≡𝒢ϕ'=2f'/f
Eq.(<ref>) is the equation that the coupling function needs to satisfy during preheating. On large scales, the Fourier modes of the Bardeen potential and of the inflaton perturbation can be written as <cit.>
Φ_k=𝒞(1-H/a∫ adt), δϕ_k≈𝒞ϕ̇(a^-1∫ adt)
Consider the case where the magnetic field is scale invariant (p=4), which means that we should consider
ϕ^4 preheating. In this preheating model, a∝√(t) and the evolution of ϕ can be approximated as <cit.>
ϕ≈ϕ̃/acos(0.8472√(λ)ϕ̃η)
For a qualitative discussion, we assume
ϕ=1/ηcosη
The evolution curve of the coupling function is given in Fig.<ref>. It can be seen that the coupling function increases during the preheating era. This leads to the strong coupling problem.
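A curve of this type can be reproduced by integrating Eq.(<ref>) numerically as a Riccati equation for h. The short Python sketch below is illustrative only: the coefficient functions D1,…,D4, the initial condition and the η-range are dummy placeholders, and must be replaced by the expressions 𝒟_1,…,𝒟_4 of Eq.(<ref>) evaluated on the chosen preheating background before any quantitative conclusion is drawn.
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder coefficients: replace by D_1(eta),...,D_4(eta) of Eq.(<ref>)
# evaluated on the phi^4 preheating background (phi ~ cos(eta)/eta, a ~ sqrt(t)).
def D1(eta): return 1.0
def D2(eta): return 1.0 / eta
def D3(eta): return 0.1
def D4(eta): return -1.0 / eta**2

def rhs(eta, y):
    h = y[0]
    # D1 h' + D2 h + D3 h^2 + D4 = 0  =>  h' = -(D2 h + D3 h^2 + D4) / D1
    return [-(D2(eta) * h + D3(eta) * h**2 + D4(eta)) / D1(eta)]

sol = solve_ivp(rhs, (1.0, 50.0), [1.0], rtol=1e-8, atol=1e-10)
print(sol.t[-1], sol.y[0, -1])   # h = 2 f'/f at the end of the integration range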
In conventional models, one strategy to solve the strong coupling problem is to directly postulate a decreasing coupling function during the preheating era, as in <cit.>. However, in the models considered here the coupling function should satisfy Eq.(<ref>), which means that the coupling function is model dependent. In other words, to obtain a decreasing coupling function we should modify the preheating model.
We consider a toy model in which there are some modifications at the beginning of ϕ^4 preheating. As the preheating process goes on, the modifications vanish and the preheating model approaches the ϕ^4 model.
These modifications affect both the dynamics of the background quantities ϕ, ℋ,… and of the perturbations Φ, δϕ,…, and thereby affect Eq.(<ref>). We assume that the effect of these modifications of the preheating model is to change Eq.(<ref>) to
𝒟_1ĥ'+𝒟_2ĥ+𝒟_3ĥ^2+𝒟_4=0
where
ĥ(η)≡ h(η)+r(η)
and r(η) is a decreasing function of η. In Eq.(<ref>) the dynamics of ϕ,ℋ,δϕ,Φ are the same as in the ϕ^4 model, which means that we assume the modifications of the preheating model to be equivalent to introducing a correction function r(η) in Eq.(<ref>).
As preheating proceeds, the correction function tends to zero
and ĥ→ h, so h satisfies Eq.(<ref>) at late times of preheating and the preheating model approaches the ϕ^4 model, as assumed.
One simple choice of r(η) is
r(η)=1/η^n
Notice that the form of r(η) depends on the modifications of the preheating model, so the parameter n can be seen as a parameter of the preheating model. This parameter can be chosen to avoid the strong coupling problem.
The evolution of the coupling function during preheating for different choices of r(η) is given in Fig.<ref>.
It can be seen that a decreasing coupling function can be obtained by selecting an appropriate parameter n (e.g. n=1 in Fig.<ref>), and then strong coupling can be avoided.
§ SUMMARY AND DISCUSSION
In this paper we discussed inflationary magnetogenesis with a coupling function that keeps the action self-consistent. This self-consistency comes from the time component of Maxwell's equations, which is the secondary constraint for the electromagnetic field. Under the FRW metric this is a trivial equation. However, once we consider the perturbed metric, this equation becomes non-trivial and can be seen as a
constraint equation for the coupling function f(ϕ), see Eq.(<ref>). Taking this as a starting point, we calculated the power spectra of the electric and magnetic fields on large scales in this inflation model.
We estimated the present-day value of the magnetic field, and the result satisfies the lower bound from γ-ray observations.
We found that the power spectrum obtained in this paper is multiplied by a factor related to ϕ_end compared to the conventional model<cit.>. This means that one can generate the required magnetic field by choosing a suitable inflation model, or conversely use the magnetic field observed today to constrain ϕ_end. This provides a possibility to use today's large-scale magnetic field strength to estimate the value of the inflaton field at the end of slow-roll.
On the other hand, to avoid the backreaction problem at the end of inflation, the value of the inflaton at the end of the slow-roll era, ϕ_end, should have a lower limit (∼10^-43) (see Eq.(<ref>)). This lower limit is so small that the upper limit of the factor 𝒴 can be large. This means that the model discussed in this paper can generate a sufficiently strong magnetic field without causing a backreaction problem.
The strong coupling problem can also appear in the situation discussed here. One way to solve it is to introduce a decreasing coupling function in the preheating era. In this paper the coupling function is determined by Eq.(<ref>) or Eq.(<ref>) in the preheating era. Unfortunately, these equations give an increasing coupling function. To turn the coupling function into a decreasing function, it is necessary to introduce a correction function in the early stage of preheating.
It is therefore an interesting open problem to find a suitable preheating model in which the coupling function is naturally determined to be a decreasing function.
§ ACKNOWLEDGMENTS
This work was supported by the Fundamental Research Funds for the Central Universities of
Ministry of Education of China under Grants No. 3132018242,
the Natural Science Foundation of Liaoning Province of China under Grant No.20170520161 and the National Natural Science Foundation of China under Grant No.11447198 (Fund of theoretical physics).
|
http://arxiv.org/abs/2307.03956v1 | 20230708112126 | The annealed parabolic Anderson model on a regular tree | [
"Frank den Hollander",
"Daoyi Wang"
] | math.PR | [
"math.PR"
] |
[1]
Mathematical Institute, Leiden University, P.O. Box 9512, 2300 RA Leiden, The Netherlands
[email protected]
[2]
Mathematical Institute, Leiden University, P.O. Box 9512, 2300 RA Leiden, The Netherlands
[email protected]
The annealed parabolic Anderson model
on a regular tree
F. den Hollander
[1]
D. Wang
[2]
August 12, 2023
=========================================================
We study the total mass of the solution to the parabolic Anderson model on a regular tree with an i.i.d. random potential whose marginal distribution is double-exponential. In earlier work we identified two terms in the asymptotic expansion for large time of the total mass under the quenched law, i.e., conditional on the realisation of the random potential. In the present paper we do the same for the annealed law, i.e., averaged over the random potential. It turns out that the annealed expansion differs from the quenched expansion. The derivation of the annealed expansion is based on a new approach to control the local times of the random walk appearing in the Feynman-Kac formula for the total mass. In particular, we condition on the backbone to infinity of the random walk, truncate and periodise the infinite tree relative to the backbone to obtain a random walk on a finite subtree with a specific boundary condition, employ the large deviation principle for the empirical distribution of Markov renewal processes on finite graphs, and afterwards let the truncation level tend to infinity to obtain an asymptotically sharp asymptotic expansion.
MSC2010: 60H25, 82B44, 05C80.
Keywords: Parabolic Anderson model, Feynman-Kac formula, regular tree, double-exponential random potential, backbone of random walk, annealed Lyapunov exponent, variational formula.
Acknowledgment:
The research in this paper was supported by the Netherlands Organisation for Scientific Research through NWO Gravitation Grant NETWORKS-024.002.003.
§ INTRODUCTION AND MAIN RESULTS
Section <ref> provides background and motivation, Section <ref> lists notations, definitions and assumptions, Section <ref> states the main theorems, while Section <ref> places these theorems in their proper context.
§.§ Background and motivation
The parabolic Anderson model (PAM) is the Cauchy problem
∂_t u(x,t) = Δ_ u(x,t) + ξ(x) u(x,t) , t>0, x ∈,
where t is time, is an ambient space, Δ_ is a Laplace operator acting on functions on , and ξ is a random potential on . Most of the literature considers the setting where is either ^d or ^d with d ≥ 1, starting with the foundational papers <cit.>, <cit.>, <cit.> and further developed through a long series of follow-up papers (see the monograph <cit.> and the survey paper <cit.> for an overview). More recently, other choices for have been considered as well:
(I)
Deterministic graphs (the complete graph <cit.>, the hypercube <cit.>).
(II)
Random graphs (the Galton-Watson tree <cit.>, <cit.>, the configuration model <cit.>).
Much remains open for the latter class.
The main target for the PAM is a description of intermittency: for large t the solution u(·,t) of (<ref>) concentrates on well-separated regions in , called intermittent islands. Much of the literature focusses on a detailed description of the size, shape and location of these islands, and on the profiles of the potential ξ(·) and the solution u(·,t) on them. A special role is played by the case where ξ is an i.i.d. random potential with a double-exponential marginal distribution
(ξ(0) > u) = ^-^u/ϱ, u ∈,
where ϱ∈ (0,∞) is a parameter. This distribution turns out to be critical, in the sense that the intermittent islands neither grow nor shrink with time, and represents a class of its own.
In the present paper we consider the case where 𝒳 is an unrooted regular tree . Our focus will be on the asymptotics as t→∞ of the total mass
U(t) = ∑_x ∈ u(x,t).
In earlier work <cit.>, <cit.> we were concerned with the case where 𝒳 is a rooted Galton-Watson tree in the quenched setting, i.e., almost surely with respect to the random tree and the random potential. This work was restricted to the case where the random potential is given by (<ref>) and the offspring distribution of the Galton-Watson tree has support in \{1} with a sufficiently thin tail. In the present paper our focus will be on the annealed setting, i.e., averaged over the random potential. We derive two terms in the asymptotic expansion as t→∞ of the average total mass
⟨ U(t) ⟩ = ∑_x ∈⟨ u(x,t) ⟩,
where ⟨·⟩ denotes expectation with respect to the law of the random potential. It turns out that the annealed expansion differs from the quenched expansion, even though the same variational formula plays a central role in the two second terms.
The derivation in the annealed setting forces us to follow a different route than in the quenched setting, based on various approximations of that are more delicate than the standard approximation of ^d (see <cit.>). This is the reason why we consider regular trees rather than Galton-Watson trees, to which we hope to return later. A key tool in the analysis is the large deviation principle for the empirical distribution of Markov renewal processes on finite graphs derived in <cit.>, which is recalled in Appendix <ref>.
§.§ The PAM on a graph
§.§.§ Notations and definitions
Let G = (V,E) be a simple connected undirected graph, either finite or countably infinite, with a designated vertex called the root. Let Δ_G be the Laplacian on G, i.e.,
(Δ_G f)(x) = ∑_y∈ V:{x,y}∈ E [f(y) - f(x)], x ∈ V, f V→,
which acts along the edges of G. Let ξ := (ξ(x))_x ∈ V be a random potential attached to the vertices of G, taking values in . Our object of interest is the non-negative solution of the Cauchy problem with localised initial condition,
∂_t u(x,t) = (Δ_G u)(x,t) + ξ(x) u(x,t), x ∈ V, t>0,
u(x,0) = δ_(x), x ∈ V.
The quantity u(x,t) can be interpreted as the amount of mass at time t at site x when initially there is unit mass at . The total mass at time t is U(t) = ∑_x ∈ V u(x,t). The total mass is given by the Feynman-Kac formula
U(t) = _(^∫_0^t ξ(X_s) s),
where X=(X_t)_t ≥ 0 is the continuous-time random walk on the vertices V with jump rate 1 along the edges E, and _ denotes the law of X given X_0=. Let ⟨·⟩ denote expectation with respect to ξ. The quantity of interest in this paper is the average total mass at time t:
⟨ U(t) ⟩ = ⟨_(^∫_0^t ξ(X_s) s)⟩.
§.§.§ Assumption on the potential
Throughout the paper we assume that the random potential ξ = (ξ(x))_x ∈ V consists of i.i.d. random variables with a marginal distribution whose cumulant generating function
H(u) = log⟨^uξ()⟩
satisfies the following:
[Asymptotic double-exponential potential]
There exists a ϱ∈ (0,∞) such that
lim_u→∞ u H”(u) = ϱ.
[Double-exponential potential]
A special case of (<ref>) is when ξ() has the double-exponential distribution in (<ref>), in which case
H(u) = logΓ(ϱ u + 1)
with Γ the gamma function.
By Stirling's approximation, (<ref>) implies
H(u) = ϱ u log(ϱ u) - ϱ u + o(u), u →∞.
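As a quick numerical illustration (not needed for the proofs; the value ϱ=1 below is an arbitrary test choice), the error term in (<ref>) can be checked with a few lines of Python:
import numpy as np
from scipy.special import gammaln

rho = 1.0                                          # arbitrary test value
for u in [1e1, 1e2, 1e3, 1e4]:
    H = gammaln(rho * u + 1.0)                     # H(u) = log Gamma(rho*u + 1)
    leading = rho * u * np.log(rho * u) - rho * u  # leading terms in (<ref>)
    print(u, (H - leading) / u)                    # remainder o(u)/u -> 0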
Assumption <ref> is more than enough to guarantee existence and uniqueness of the non-negative solution to (<ref>) on any discrete graph with at most exponential growth (as can be inferred from the proof in <cit.>, <cit.> for the case G=^d). Since ξ is assumed to be i.i.d., we have from (<ref>) that
⟨ U(t) ⟩ = 𝔼_𝒪(exp[∑_x∈ V H(ℓ_t(x))]),
where
ℓ_t(x) = ∫^t_0 1{X_s =x } s, x ∈ V, t≥ 0,
is the local time of X at vertex x up to time t.
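Indeed, since ∫_0^t ξ(X_s) s = ∑_x∈ Vξ(x) ℓ_t(x), Fubini–Tonelli (the integrand is non-negative) and the independence of (ξ(x))_x∈ V give
⟨ U(t) ⟩ = 𝔼_𝒪(⟨^∑_x∈ Vξ(x) ℓ_t(x)⟩) = 𝔼_𝒪(∏_x∈ V⟨^ξ(x) ℓ_t(x)⟩) = 𝔼_𝒪(^∑_x∈ V H(ℓ_t(x))),
by the definition of H in (<ref>).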
§.§.§ Variational formula
The following characteristic variational formula is important for the description of the asymptotics of ⟨ U(t)⟩. Denote by (V) the set of probability measures on V. For p ∈(V), define
I_E(p) = ∑_{x,y}∈ E( √(p(x)) - √(p(y)) )^2,
J_V(p) = - ∑_x ∈ V p(x) log p(x),
and set
χ_G(ϱ) = inf_p ∈(V) [I_E(p) + ϱ J_V(p)], ϱ∈ (0,∞).
The first term in (<ref>) is the quadratic form associated with the Laplacian, which is the large deviation rate function for
the empirical distribution
L_t = 1/t∫_0^t δ_X_s s = 1/t∑_x ∈ Vℓ_t(x) δ_x ∈(V)
(see e.g. <cit.>). The second term in (<ref>) captures the second order asymptotics of ∑_x ∈ V H(tp(x)) as t →∞ via (<ref>) (see e.g. <cit.>).
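As a numerical illustration (not used in the proofs), the infimum in (<ref>) with G= can be approximated by restricting p to the ball B_R() and minimising over the resulting finite-dimensional simplex; this produces the quantity χ^-_R(ϱ) of Section <ref>, which is an upper bound decreasing to χ_(ϱ) as R→∞. The Python sketch below is illustrative only: the degree, radius, number of restarts and the softmax parametrisation are choices of this sketch, not of the paper.
import numpy as np
from scipy.optimize import minimize

def truncated_tree_edges(d, R):
    # Ball B_R(O) in the unrooted (d+1)-regular tree: vertex 0 is the root O,
    # the root has d+1 children, every other vertex inside has d children.
    edges, frontier, next_id = [], [0], 1
    for _ in range(R):
        new_frontier = []
        for v in frontier:
            for _ in range(d + 1 if v == 0 else d):
                edges.append((v, next_id))
                new_frontier.append(next_id)
                next_id += 1
        frontier = new_frontier
    return next_id, edges, frontier          # frontier = boundary of B_R(O)

def chi_upper_bound(d=2, R=3, rho=1.0, restarts=5):
    n, edges, boundary = truncated_tree_edges(d, R)
    def objective(z):
        p = np.exp(z - z.max()); p /= p.sum()            # softmax: p on the simplex
        sp = np.sqrt(p)
        I = sum((sp[x] - sp[y]) ** 2 for x, y in edges)  # edges inside B_R(O)
        I += d * sum(p[x] for x in boundary)             # edges leaving B_R(O), where p = 0
        J = -np.sum(p * np.log(p))                       # J_V(p)
        return I + rho * J
    return min(minimize(objective, np.random.randn(n), method="L-BFGS-B").fun
               for _ in range(restarts))

print(chi_upper_bound())   # finite-R value chi_R^-(1); decreases towards chi_T(1) as R grows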
§.§.§ Reformulation
The following lemma pulls the leading order term out of the expansion and shows that the second order term is controlled by the large deviation principle for the empirical distribution.
[Key object for the expansion]
If G=(V,E) is finite, then
⟨ U(t) ⟩ = ^H(t) + o(t) _(^-ϱ t J_V(L_t)),
t →∞,
where J_V is the functional in (<ref>) and L_t is the empirical distribution in (<ref>).
Because ∑_x ∈ Vℓ_t(x) = t, we can rewrite (<ref>) as
⟨ U(t) ⟩ = _(exp[∑_x∈ V H(ℓ_t(x))])
= ^H(t) _(exp{t ∑_x∈ V1/t[H(ℓ_t(x)tt)-ℓ_t(x)tH(t)]}).
Assumption <ref> implies that H has the following scaling property (see <cit.>):
lim_t→∞1/t [H(ct) - cH(t)] = ϱ c log c uniformly in c ∈ [0,1].
Hence the claim follows.
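For instance, in the double-exponential case of Example <ref>, (<ref>) gives, for fixed c ∈ (0,1],
H(ct) - cH(t) = ϱ ct log(ϱ ct) - ϱ ct - c[ϱ t log(ϱ t) - ϱ t] + o(t) = ϱ ct log c + o(t), t→∞,
so that 1/t[H(ct) - cH(t)] →ϱ c log c.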
§.§ The PAM on an unrooted regular tree: annealed total mass for large times and key variational formula
In this section we specialise to the case where G= = (E,V), an unrooted regular tree of degree d +1 with d ≥ 2 (see Fig. <ref>). The main theorem of our paper is the following expansion.
[Growth rate of the total mass]
For any d ≥ 4, subject to Assumption <ref>,
1/tlog⟨ U(t) ⟩ = ϱlog(ϱ t) - ϱ - χ_(ϱ) + o(1), t →∞,
where χ_(ϱ) is the variational formula in (<ref>) with G=.
The proof of Theorem <ref> is given in Sections <ref>–<ref> and makes use of technical computations collected in Appendices <ref>–<ref>.
The main properties of the key quantity
χ_(ϱ) = inf_p ∈(V) [I_E(p) + ϱ J_V(p)], ϱ∈ (0,∞),
are collected in the following theorem (see Fig. <ref>).
[Properties of the variational formula]
For any d ≥ 2 the following hold:
(a) The infimum in (<ref>) may be restricted to the set
_^↓(V) = {p ∈(V) argmax p = ,
p is non-increasing in the distance to }.
(b) For every ϱ∈ (0,∞), the infimum in (<ref>) restricted to _^↓(V) is attained, every minimiser p is such that p>0 on V, and ∂ S_R = ∑_x∈∂ B_R()p(x), R∈_0, satisfies
∑_R ∈_0∂ S_R log(R+1) ≤d+1/ϱ,
where B_R() is the ball of radius R centred at .
(c) The function ϱ↦χ_(ϱ) is strictly increasing and globally Lipschitz continuous on (0,∞), with
lim_ϱ↓ 0χ_(ϱ) = d-1, lim_ϱ→∞χ_(ϱ) = d+1.
The proof of Theorem <ref> is given in Appendix <ref> (see Fig. <ref>).
§.§ Discussion
1.
Theorem <ref> identifies the scaling of the total mass up to and including terms that are exponential in t. The first two terms in the right-hand side of (<ref>) are the same as those of 1/t H(t) (recall (<ref>)). The third term is a correction that comes from the cost for X in the Feynman-Kac formula in (<ref>) to create an optimal local time profile somewhere in , which is captured by the minimiser(s) of the variational formula in (<ref>).
2.
For the quenched model on a rooted Galton-Watson tree we found in <cit.>, <cit.> that
1/tlog U(t) = ϱlog(ϱ t ϑ/loglog t)
- ϱ - χ(ϱ) +o(1), t →∞,
×-a.s.,
where is the law of the potential, is the law of , ϑ is the logarithm of the mean of the offspring distribution, and
χ_(ϱ) = inf_⊂χ_(ϱ)
with χ_(ϱ) given by (<ref>) and the infimum running over all subtrees of . This result was shown to be valid as soon as the offspring distribution has support in \{1} (i.e., all degrees are at least 3) and has a sufficiently thin tail. The extra terms in (<ref>) come from the cost for X in the Feynman-Kac formula in (<ref>) to travel in a time of order o(t) to an optimal finite subtree with an optimal profile of the potential, referred to as intermittent islands, located at a distance of order ϱ t/loglog t from , and to subsequently spend most of its time on that subtree. In this cost the parameter ϑ appears, which is absent in (<ref>). It was shown in <cit.> that if ϱ≥ 1/log (d_min+1), with d_min the minimum of the support of the offspring distribution, then the infimum in (<ref>) is attained at the unrooted regular tree with degree d_min+1, i.e., the minimal unrooted regular tree contained in , for which ϑ = log d_min. Possibly the bound on ϱ is redundant.
3. In view of Lemma <ref> and the fact that Assumption <ref> implies (<ref>), we see that the proof of Theorem <ref> amounts to showing that, on = (V,E),
lim_t→∞1/tlog_(^-ϱ t J_V(L_t)) = - χ_(ϱ).
We achieve this by deriving asymptotically matching upper and lower bounds. These bounds are obtained by truncating outside a ball of radius R, to obtain a finite tree _R, deriving the t→∞ asymptotics for finite R, and letting R→∞ afterwards. For the lower bound we can use the standard truncation technique based on killing X when it exits _R and applying the large deviation principle for the empirical distribution of Markov processes on finite graphs derived in <cit.>. For the upper bound, however, we cannot use the standard truncation technique based on periodisation of X beyond radius R, because is an expander graph (see <cit.> for a list of known techniques on ^d and ^d). Instead, we follow a route in which is approximated in successive stages by a version of _R with a specific boundary condition, based on monitoring X relative to its backbone to infinity. This route allows us to use the large deviation principle for the empirical distribution of Markov renewal processes on finite graphs derived in <cit.>, but we need the condition d ≥ 4 to control the specific boundary condition in the limit as R →∞ (see Remark <ref> for more details). The reason why the approximation of by finite subtrees is successful is precisely because in the parabolic Anderson model the total mass tends to concentrate on intermittent islands.
4. Theorem <ref> shows that, modulo translations, the optimal strategy for L_t as t→∞ is to be close to a minimiser of the variational formula in (<ref>) restricted to _^↓(V). Any minimiser is centred at , strictly positive everywhere, non-increasing in the distance to , and rapidly tending to zero. The following questions remain open:
(1)
Is the minimiser p unique modulo translation?
(2)
Does p(x) satisfy lim_|x| →∞ |x|^-1logp̅(x) = -∞, with |x| the distance between x and ?
(3)
Is p radially symmetric?
(4)
Is ϱ↦χ_(ϱ) analytic on (0,∞)?
We expect the answer to be yes for (1) and (2), and to be no for (3) and (4).
§ PROOF OF THE MAIN THEOREM: LOWER BOUND
In this section we prove the lower bound in Theorem <ref>, which is standard and straightforward. In Section <ref> we obtain a lower bound in terms of a variational formula by killing the random walk when it exits _R. In Section <ref> we derive the lower bound of the expansion by letting R→∞ in the variational formula.
§.§ Killing and lower variational formula
Fix R∈ℕ. Let _R be the subtree of =(V,E) consisting of all the vertices that are within distance R of the root and all the edges connecting them. Put V_R=V(_R) and E_R = E(_R). Let τ_R = inf{t ≥ 0 X_t ∉ V_R} denote the first time that X exits _R. It follows from (<ref>) that
⟨ U(t) ⟩≥_(exp[∑_x∈ V_R
H(ℓ_t(x))]1{τ_R>t}).
Since _R is finite, Lemma <ref> gives
⟨ U(t) ⟩≥^H(t) + o(t) _[^-ϱ t J_V(L_t)1{τ_R>t}]
with J_V the functional defined in (<ref>). As shown in <cit.> (see also <cit.>), the family of sub-probability distributions _(L_t ∈· , τ_R>t), t ≥ 0, satisfies the LDP on ^R(V) = {p ∈(V) supp(p) ⊂ V_R} with rate function I_E, with I_E the functional defined in (<ref>). This is the standard LDP for the empirical distribution of Markov processes. Therefore, by Varadhan's Lemma,
lim_t→∞1/tlog_[^-ϱ t J_V(L_t)1{τ_R>t}] = - χ^-_R(ϱ)
with
χ^-_R(ϱ) = inf_p ∈^R(V) [I_E(p) +ϱ J_V(p)],
where we use that p ↦ J_V(p) is bounded and continuous (in the discrete topology) on ^R(V). Note that
lim_t →∞1/tlog_(τ_R>t) = - inf_p∈^R(V) I_E(p) < 0,
which is non-zero because any p ∈^R(V) is non-constant on V. The expression in (<ref>) is the same as (<ref>) with G=, except that p is restricted to V_R.
§.§ Limit of the lower variational formula
Clearly, R ↦χ^-_R(ϱ) is non-increasing. To complete the proof of the lower bound in Theorem <ref>, it remains is to show the following.
lim sup_R→∞χ^-_R(ϱ) ≤χ_(ϱ).
Pick any p ∈(V) such that I_E(p)<∞ and J_V(p)<∞. Let p^ R be the projection of p onto V_R, i.e.,
p^ R(x) =
p(x), x ∈int(V_R),
∑_y ≥ x p(y), x ∈∂ V_R,
where y ≥ x means that y is an element of the progeny of x in . Since p^ R∈^R(V), we have from (<ref>) that χ^-_R(ϱ) ≤ I_E(p^ R) + ϱ J_V(p^ R). Trivially, lim_R→∞ I_E(p^ R) = I_E(p) and lim_R→∞ J_V(p^ R) = J_V(p), and so we have lim sup_R→∞χ^-_R(ϱ) ≤ I_E(p) + ϱ J_V(p). Since this bound holds for arbitrary p ∈(V), the claim follows from (<ref>).
§ PROOF OF THE MAIN THEOREM: UPPER BOUND
In this section we prove the upper bound in Theorem <ref>, which is more laborious and requires a more delicate approach than the standard periodisation argument used on ^d . In Section <ref> we obtain an upper bound in terms of a variational formula on a version of _R with a specific boundary condition. The argument comes in four steps, encapsulated in Lemmas <ref>–<ref> below:
(I)
Condition on the backbone of X (Section <ref>).
(II)
Project X onto a concatenation of finite subtrees attached to this backbone that are rooted versions of _R (Section <ref>).
(III)
Periodise the projected X to obtain a Markov renewal process on a single finite subtree and show that the periodisation can be chosen such that the local times at the vertices on the boundary of the finite subtree are negligible (Section <ref>).
(IV)
Use the large deviation principle for the empirical distribution of Markov renewal processes derived in <cit.> to obtain a variational formula on a single subtree (Section <ref>).
In Section <ref> we derive the upper bound of the expansion by letting R→∞ in the variational formula.
§.§ Backbone, projection, periodisation and upper variational formula
§.§.§ Backbone
For r ∈_0, let τ_r be the last time when X visits ∂ B_r(), the boundary of the ball of radius r around . Then the sequence = (X_τ_r)_r ∈_0 forms the backbone of X, running from to infinity.
[Condition on a backbone]
For every backbone and every t ≥ 0,
𝔼_𝒪(exp[∑_x∈ V() H(ℓ_t(x))])
= 𝔼_𝒪(exp[∑_x∈ V() H(ℓ_t(x))] | = ).
By symmetry, the conditional expectation in the right-hand side does not depend on the choice of . Indeed, permutations of the edges away from the root do not affect the law of ∑_x∈ V() H(ℓ_t(x)).
Turn the one-sided backbone into a two-sided backbone by adding a second backbone from to infinity. By symmetry, the choice of this second backbone is arbitrary, say '. Redraw by representing ' ∪ as and representing the rest of as a sequence of rooted trees ^∗ = (^∗_x)_x ∈ hanging off (see Fig. <ref>). In ^∗_x, the root sits at x and has d-1 downward edges, while all lower vertices have d downward edges.
Let X^=(X^_t)_t ≥ 0 be the random walk on ^ and (ℓ^_t(x))_x ∈^ the local times of X^ at time t.
[Representation of as a backbone with rooted trees]
For every and t ≥ 0,
𝔼_𝒪(exp[∑_x∈ V() H(ℓ_t(x))] | = )
= 𝔼_𝒪(exp[∑_x∈ V(^ ) H(ℓ^_t(x))]
| X^_∞ = + ∞).
Simply redraw as ^.
Note that X^ is a Markov process whose sojourn times have distribution EXP(d+1) and whose steps are drawn uniformly at random from the d+1 edges that are incident to each vertex.
§.§.§ Projection
For R ∈\{1}, cut into slices of length R, i.e.,
= ∪_k∈ (z + (kR+I)), I={0,1,…,R-1},
where z is to be chosen later. Apply the following two maps to ^ (in the order presented):
(i)
For each k ∈, fold ^∗_z+(kR+(R-1)) onto ^∗_z+(k+1)R by folding the d-1 edges downwards from the root on top of the edge in connecting z+(kR+(R-1)) and z+(k+1)R, and putting the d infinite rooted trees hanging off each of these d-1 edges on top of the rooted tree ^*_z+(k+1)R hanging off z+(k+1)R. Note that each of the d infinite rooted trees is a copy of ^*_z+(k+1)R.
(ii)
For each k ∈ and m ∈{0,1,…,R-2}, cut off all the infinite subtrees trees in ^∗_z+(kR+m) whose roots are at depth (R-1)-m. Note that the total number of leaves after the cutting equals
(d-1) ∑_m=0^R-2 d^(R-2)-m = (d-1)d^R-2 1-d^-(R-1)/1-d^-1 = d^R-1 - 1,
which is the same as the total number of leaves of the rooted tree ^*_R of depth R-1 (i.e., with R generations) minus 1 (a fact we will need below).
By doing so we obtain a concatenation of finite units
_R=(_R[k])_k ∈
that are rooted trees of depth R-1 (see Fig. <ref>). Together with the two maps that turn ^ into _R, we apply two maps to X^:
(i)
All excursions of X^ in the infinite subtrees that are folded to the right and on top are projected accordingly.
(ii)
All excursions of X^ in the infinite subtrees that are cut off are replaced by a sojourn of X^_R in the tadpoles that replace these subtrees (see Fig. <ref>)
The resulting path, which we call X^_R = (X^_R_t)_t ≥ 0, is a Markov renewal process with the following properties:
* The sojourn times in all the vertices that are not tadpoles have distribution EXP(d+1).
* The sojourn times in all the tadpoles have distribution ψ, defined as the conditional distribution of the return time τ of the random walk on the infinite rooted tree ^* given that τ<∞ (see <cit.> for a proper definition).
* The transitions into the tadpoles have probability d/d+1, the transitions out of the tadpoles have probability 1 (because of the condition X^_∞ = + ∞).
* The transitions from z + (kR+(R-1)) to z+(k+1)R have probability d/d+1, while the reverse transitions have probability 1/d+1.
Write (ℓ^ _R_t(x))_x ∈ V__R to denote the local times of X^_R at time t.
[Projection onto a concatenation of finite subtrees]
For every R ∈\{1} and t ≥ 0,
𝔼_𝒪(exp[∑_x∈ V(^ ) H(ℓ^_t(x))]
| X^_∞ = + ∞)
≤𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))]
| X^_R_∞ = + ∞).
The maps that are applied to turn X^ into X^_R are such that local times are stacked on top of each other. Since H defined in (<ref>) is convex and H(0)=0, we have H(ℓ) + H(ℓ') ≤ H(ℓ+ℓ') for all ℓ,ℓ' ∈_0, which implies the inequality.
§.§.§ Periodisation
Our next observation is that the condition {X^_R_∞ = + ∞} is redundant.
[Condition redundant]
For every R ∈\{1} and t ≥ 0,
𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))] | X^_R_∞ = + ∞)
= 𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))] ).
The event {X^_R_∞ = + ∞} has probability 1 because on the edges connecting the units of _R (see Fig. <ref>) there is a drift downwards. To see why, note that 1/(d+1) < 1/2 < d/(d+1) because d ≥ 2, and use that a one-dimensional random walk with drift is transient to the right <cit.>.
Since _R is periodic, we can fold X^_R onto a single unit _R, to obtain a Markov renewal process X^_R on _R (see Fig. <ref>) in which the transition from the top vertex to the right-most bottom vertex has probability 1/d+1, while the reverse transition has probability d/d+1. Clearly, the sojourn time distributions are not affected by the folding and therefore remain as above. Write (ℓ^ _R_t(x))_x ∈ V(_R) to denote the local times of X^_R at time t.
[Periodisation to a single finite subtree]
For every R ∈\{1} and t ≥ 0,
𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))])
≤𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))]).
The periodisation again stacks local time on top of each other.
Before we proceed we make a crucial observation, namely, we may still choose the shift z ∈{0,1,…,R-1} of the cuts of the two-sided backbone (recall Fig. <ref>). We will do so in such a way that the local time up to time t spent in the set ∂_ _R defined by
∂_ _R = all vertices at the top or at the bottom of a unit in _R
= all vertices marked by ∙ in Fig. <ref>
is at most t/R. After the periodisation these vertices are mapped to the set ∂_ _R defined by
∂_ _R = all vertices at the top or at the bottom of _R
= all vertices marked by ∙ in Fig. <ref>.
[Control on the time spent at the boundary]
For every R ∈\{1} and t ≥ 0,
𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))])
≤𝔼_𝒪(exp[∑_x∈ V(_R) H(ℓ^ _R_t(x))]
1_{1/t∑_x ∈∂_ _Rℓ^_R_t(x) ≤ 1/R}).
For different z the sets of vertices making up ∂_R correspond to disjoint sets of vertices in ^ (see Fig. <ref>). Since ∑_x ∈^ℓ^_t(x) = t for all t ≥ 0, it follows that there exists a z for which ∑_x ∈∂_Rℓ^_t(x) ≤ t/R. Therefore the upper bound in Lemma <ref> can be strengthened to the one that is claimed.
§.§.§ Upper variational formula
Lemmas <ref>–<ref> provide us with an upper bound for the average total mass (recall ((<ref>)) on the infinite tree in terms of the same quantity on the finite tree-like unit _R with a specific boundary condition. Along the way we have paid a price: the sojourn times in the tadpoles are no longer exponentially distributed, and the transition probabilities into and out of the tadpoles and between the top vertex and the right-most bottom vertex are biased. We therefore need the large deviation principle for the empirical distribution of Markov renewal processes derived in <cit.>, which we can now apply to the upper bound.
Since _R is finite, Lemma <ref> gives
⟨ U(t) ⟩≤^H(t) + o(t) 𝔼_𝒪(^-ϱ J_V(_R)(L^ _R_t)
1_{L^_R_t(∂_ _R) ≤ 1/R})
with J_V the functional defined in (<ref>). The following lemma controls the expectation in the right-hand side.
[Scaling of the key expectation]
For every R ∈\{1},
lim_t→∞1/tlog_(^-ϱ t J_V(_R)(L^_R_t) 1_{L^_R_t(∂_ _R) ≤ 1/R}) = - χ^+_R(ϱ),
where
χ^+_R(ϱ) = inf_p ∈(V(_R))p(∂__R) ≤ 1/R{I^†_E(_R)(p) + ϱ J_V(_R)(p)},
with
I^†_E(_R)(p) = inf_β∈ (0,∞)inf_q ∈(V(_R))[K(β q) + K(p |β q)],
where
K(β q) = sup_q∈(V(_R))∑_x ∈ V(_R)β q(x) log(q(x)∑_y ∈ V(_R)π_x,yq(y)),
K(p |β q) = ∑_x ∈ V(_R)β q(x) (λ_x)(p(x)β q(x)),
with
(λ_x)(α) = sup_θ∈ℝ [αθ - λ_x(θ)], α∈ [0,∞),
λ_x(θ) = log∫_0^∞^θτψ_x(τ), θ∈ℝ,
where ψ_x=ψ when x is a tadpole, ψ_x = EXP(d+1) when x is not a tadpole, and π_x,y is the transition kernel of the discrete-time Markov chain on V(_R) embedded in X^_R.
Apply the large deviation principle derived in <cit.>, which we recall in Proposition <ref> in Appendix <ref>.
The expression in (<ref>) is similar to (<ref>) with G=_R, expect that the rate function I_E(_R) in (<ref>) is more involved than the rate function I_E in (<ref>).
§.§ Limit of the upper variational formula
The prefactor ^H(t)+o(1) in Lemma <ref> accounts for the terms ϱlog(ϱ t)-ϱ in the right-hand side of (<ref>) (recall <ref>). In view of Lemma <ref>, in order to complete the proof of the upper bound in Theorem <ref> it suffices to prove the following lemma.
For any d ≥ 4, lim inf_R→∞χ^+_R(ϱ) ≥χ_(ϱ).
The proof is given in Appendix <ref> and relies on two steps:
* Show that, for d ≥ 4,
I^†_E(_R)(p) ≥ I^+_E(_R)(p) + O(1/R)
with I^+_E(_R) a rate function similar to the standard rate function I_E(_R) given by (<ref>).
* Show that, for d ≥ 2,
χ^ +_R(ϱ) = inf_p ∈(V(_R))p(∂_ _R) ≤ 1/R{I^+_E(_R)(p) + ϱ J_V(_R)(p)}
satisfies
lim inf_R→∞χ^ +_R(ϱ) ≥χ_(ϱ).
§ LARGE DEVIATION PRINCIPLE FOR THE LOCAL TIMES OF MARKOV RENEWAL PROCESSES
The following LDP, which was used in the proof of Lemma <ref>, was derived in <cit.>, and generalises the LDP for the empirical distribution of a Markov process on a finite state space derived in <cit.>. See <cit.> for the definition of the LDP.
Let Y=(Y_t)_t ≥ 0 be the Markov renewal process on the finite graph G=(V,E) with transition kernel (π_x,y)_{x,y}∈ E and with sojourn times whose distributions (ψ_x)_x ∈ V have support (0,∞). For t > 0, let L_t^Y denote the empirical distribution of Y at time t (see (<ref>)). Then the family (ℙ(L^Y_t ∈·))_t>0 satisfies the LDP on 𝒫(V) with rate t and with rate function I^†_E given by
I^†_E(p) = inf_β∈ (0,∞)inf_q ∈(V)[K(β q) + K(p |β q)]
with
K(β q) = sup_q∈(V)∑_x ∈ Vβ q(x) log(q(x)∑_y∈ Vπ_x,yq(y)),
K(p |β q ) = ∑_x ∈ Vβ q(x) (λ_x)(p(x)β q(x)),
where
(λ_x)(α) = sup_θ∈ℝ [αθ - λ_x(θ)], α∈ [0,∞),
λ_x(θ) = log∫_0^∞^θτψ_x(τ), θ∈ℝ.
The rate function I^†_E consists of two parts: K in (<ref>) is the rate function of the LDP on (V) for the empirical distribution of the discrete-time Markov chain on V with transition kernel (π_x,y)_{x,y}∈ E (see <cit.>), while K in (<ref>) is the rate function of the LDP on (0,∞) for the empirical mean of the sojourn times, given the empirical distribution of the discrete-time Markov chain. Moreover, λ_x is the cumulant generating function associated with ψ_x, and λ_x is the Legendre transform of λ_x, playing the role of the Cramér rate function for the empirical mean of the i.i.d. sojourn times at x. The parameter β plays the role of the ratio between the continuous time scale and the discrete time scale.
§ SOJOURN TIMES: CUMULANT GENERATING FUNCTIONS AND LEGENDRE TRANFORMS
In Appendix <ref> we recall general properties of cumulant generating functions and Legendre transforms, in Appendices <ref> and <ref> we identify both for the two sojourn time distributions arising in Lemma <ref>, respectively.
§.§ General observations
Let λ be the cumulant generating function of a non-degenerate sojourn time distribution ϕ, and λ be the Legendre transform of λ (recall (<ref>)). Both λ and λ are strictly convex, are analytic in the interior of their domain, and achieve a unique zero at θ = 0, respectively, α=α_c with α_c= ∫_0^∞τϕ(τ). Furthermore, λ diverges at some θ_c ∈ (0,∞] and has slope α_c at θ=0. Moreover, if the slope of λ diverges at θ_c, then λ is finite on (0,∞).
The supremum in the Legendre transform defining (λ)(α) is uniquely taken at θ=θ(α) solving the equation
λ'(θ(α)) = α.
The tangent of λ with slope α at θ(α) intersects the vertical axis at (-λ)(α), i.e., putting
μ(α) = λ(θ(α))
we have
μ(α) = α (λ)'(α)-(λ)(α).
(See Fig. <ref>.) Note that by differentiating (<ref>) we get
μ'(α) = α(λ)”(α),
which shows that α↦μ(α) is strictly increasing and hence invertible, with inverse function μ^-1.
Note that by differentiating the relation (λ)(α) = αθ(α)-λ(θ(α)) we get
(λ)'(α) = θ(α).
A further relation that is useful reads
(λ)' ∘μ^-1 = λ^-1,
which follows because μ = λ∘θ by (<ref>) and (λ)' = θ by (<ref>).
§.§ Exponential sojourn time
If ϕ=EXP(d+1), then the cumulant generating function λ(θ) = log∫_0^∞^θτϕ(τ) is given by
λ(θ) =
log((d+1)/(d+1-θ)), θ < d+1,
∞, θ≥ d+1.
To find (λ)(α), we compute
∂/∂θ[αθ - log((d+1)/(d+1 - θ))] = α - 1/(d+1-θ),
∂^2/∂θ^2[αθ - log((d+1)/(d+1-θ))] = - 1/(d+1-θ)^2 < 0.
Hence the supremum in (<ref>) is uniquely taken at
θ(α) = d+1 - 1/α, α > 0,
so that
(λ)(α) = α (d+1) -1 - log[α (d+1)], α>0.
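Indeed, substituting θ(α) into the Legendre transform gives
(λ)(α) = αθ(α) - λ(θ(α)) = α(d+1) - 1 - log((d+1)/(d+1-θ(α))) = α(d+1) - 1 - log[α(d+1)],
since d+1-θ(α) = 1/α.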
Thus, λ and λ have the shape in Fig. <ref>, with θ_c = d+1 and α_c = 1/d+1, and with lim_θ↑θ_cλ(θ) = ∞ and lim_θ↑θ_cλ'(θ) = ∞.
Note that μ has domain (0,∞) and range .
§.§ Non-exponential sojourn time
For ϕ=ψ the computations are more involved. Let ^*=(E,V) be the infinite rooted regular tree of degree d+1. Write for the root. Let X = (X_n)_n ∈_0 be the discrete-time simple random walk on ^*=(E,V) starting from . Write τ_ to denote the time of the first return of X to . Define r = ℙ_(τ_<∞). It is easy to compute r by projecting X on _0: r is the return probability to the origin of the random walk on _0 that jumps to the right with probability p = d/(d+1) and to the left with probability q = 1/(d+1), which equals q/p (see <cit.>). Thus, r= 1/d.
For y ∈^*, define h_y = ℙ_y(τ_ <∞). Then h_y can be explicitly calculated, namely,
h_y =
d^-|y|, y∈^*∖{},
1, y= .
Note that h is a harmonic function on ^* ∖, i.e., h_y = ∑_z∈^*π_y,z h_z, y∈^*∖. We can therefore consider the Doob-transform of X, which is the random walk with transition probabilities away from the root given by
σ̌_y,z =
d/d+1, z=y^↑,
1/d1/d+1, z≠ y^↑, {y,z}∈ E,
0, else,
y ∈^*∖{},
and transition probabilities from the root are given by
σ̌_,z =
1/d, {,z}∈ E,
0, else.
Thus, the Doob-transform reverses the upward and the downward drift of X.
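For instance, the harmonicity of h away from the root can be checked directly: for y with |y| = k ≥ 1,
∑_z∈^*π_y,z h_z = 1/(d+1) d^-(k-1) + d/(d+1) d^-(k+1) = d^-k(d/(d+1) + 1/(d+1)) = d^-k = h_y.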
Recall from Lemma <ref> that ψ is the distribution of τ_ conditional on {τ_<∞} and on X leaving at time 0.
Let λ(θ) = log∫_0^∞^θτψ(τ). Then
^λ(θ)
= (d+1-θ)/2 [1- √(1- 4d/(d+1-θ)^2) ], θ∈ (-∞,θ_c],
∞, else,
with θ_c = (√(d)-1)^2. The range of exp∘λ is (0,√(d) ], with the maximal value uniquely taken at θ=θ_c.
To compute the moment-generating function of τ_, we consider the Doob-transform of X and its projection onto ℕ_0. Let p_2k = P(τ_ = 2k). It is well-known that (see <cit.>)
G^p,q(s) = (s^τ_|τ_ <∞) = ∑_k ∈ s^2k p_2k = 1/2p[1- √(1-4pqs^2)], |s| ≤ 1.
Therefore we have
^λ(θ) = (^θτ_)
= ∑_k ∈ p_2k [(^θ EXP(d+1))]^2k-1
= ∑_k ∈ p_2k((d+1)/(d+1 - θ))^2k-1
= ((d+1 -θ)/(d+1)) G^p,q(s)
with
p = 1/(d+1), q = d/(d+1), s = (d+1)/(d+1-θ).
Inserting (<ref>) into (<ref>), we get the formula for λ(θ). From the term in the square root we see that λ(θ) is finite if and only if θ≤θ_c = d+1-2√(d) = (√(d)-1)^2.
There is no easy closed form expression for (λ)(α), but it is easily checked that λ and λ have the shape in Fig. <ref>, with θ_c = (√(d)-1)^2 and α_c = ∫_0^∞τψ(τ)<∞, and with λ(θ_c) = log√(d)<∞ and λ'(θ_c)=∞, i.e., there is a cusp at the threshold θ_c, implying that λ is finite on (0,∞). It follows from (<ref>) that
lim_α→∞1/α (λ)(α) = lim_α→∞θ(α) = θ_c.
The function λ^-1∘log = (exp∘λ)^-1 is given by
(exp∘λ)^-1(β) = d+1 - β -d/β, β∈ (0,√(d) ].
The range of (exp∘λ)^-1 is (-∞,θ_c], with the maximal value θ_c uniquely taken at β = √(d).
We need to invert exp∘λ in (<ref>). Abbreviate χ = d+1-θ/2. Then
β = χ[1-√(1-d/χ^2) ] ⟹ χ = (β^2+d)/(2β) ⟹ θ = d+1 - (β^2 + d)/β.
Note that (√(d),∞) is not part of the domain of (exp∘λ)^-1, even though the right-hand side of (<ref>) still makes sense (as a second branch). Note that μ has domain (0,∞) and range (-∞,√(d) ] (see Fig. <ref>).
§ ANALYSIS OF THE VARIATIONAL PROBLEM ON THE INFINITE REGULAR TREE
In this appendix we prove Theorem <ref>. Appendix <ref> formulates two theorems that imply Theorem <ref>, Appendix <ref> provides the proof of these theorems. Recall the definition of (V), I_E(p) and J_V(p) from (<ref>). Set
χ_(ϱ) = inf_p ∈_(V) [I_E(p) + ϱ J_V(p)], ϱ∈ (0,∞),
where _(V) = {p ∈(V) argmax p = }. Since (V), I_E and J_V are invariant under translations, the centering at is harmless.
§.§ Two properties
For every ϱ∈ (0,∞) the infimum in (<ref>) is attained, and every minimiser p is strictly positive, non-increasing in the distance to the root, and such that
∑_R∈_0∂ S_R log (R+1) ≤d+1/ϱ,
∂ S_R = ∑_x∈∂ B_R()p(x),
where B_R() is the ball of radius R around .
The function ϱ↦χ_(ϱ) is strictly increasing and globally Lipschitz continuous on (0,∞), with lim_ϱ↓ 0χ_(ϱ) = d-1 and lim_ϱ→∞χ_(ϱ) = d+1.
Theorems <ref>–<ref> settle Theorem <ref>. Their proof uses the following two lemmas.
For every ϱ∈ (0,∞), the infimum in (<ref>) may be restricted to p ∈_(V) such that J_V(p) ≤d+1/ϱ.
Let δ_∈_(V) denote the point measure at . Then, for all ϱ∈ (0,∞),
χ_(ϱ) ≤ I_E(δ_) + ϱ J_V(δ_) = (d+1) + ϱ× 0 = d+1.
Since I_V ≥ 0, we may restrict the infimum in (<ref>) to p with J_V(p) ≤d+1/ϱ.
For every ϱ∈ (0,∞), there exists a c(ϱ) >0 such that the infimum in (<ref>) may be restricted to p∈𝒫_(V) such that J_V(p) ≥ c(ϱ).
Since J_V(p) = 0 if and only if p = δ_ is a point measure, it suffices to show that δ_ is not a minimiser of χ_(ϱ). To that end, for y ∈ V compute
∂/∂ p(y)[I_E(p) + ϱ J_V(p)] = 1 - ∑_z∼ y√(p(z)/p(y)) - ϱlog p(y) -ϱ.
Because p()>0, it follows that the right-hand side tends to -∞ as p(y) ↓ 0 for every y ∼. Hence, no p ∈_(V) with p(y) = 0 for some y ∼ can be a minimiser of (<ref>), or be the weak limit point of a minimising sequence. In particular, δ_ cannot.
§.§ Proof of the two properties
First observe that (V) and J_V are invariant under permutations, i.e., for any p ∈(V) and any relabelling π of the vertices in V, we have π p ∈(V) and J_V(π p)=J_V(p). The same does not hold for I_E, but we can apply permutations such that I_E(π p) ≤ I_E(p).
1.
Pick any p ∈(V). Pick any backbone = {x_0, x_1,⋯} that runs from x_0 = to infinity. Consider a permutation π that reorders the vertices in such that {(π p)(x)}_x ∈ becomes non-increasing. Together with the reordering, transport all the trees that hang off as well. Since π p is non-increasing along , while all the edges that do not lie on have the same neighbouring values in p and in π p, we have
I_E(π p) ≤ I_E(p).
Indeed,
1/2 [I_E(p) - I_E(π p)] = ∑_k ∈_0√((π p)(x_k) (π p)(x_k+1)) - ∑_k ∈_0√(p(x_k)p(x_k+1)),
where we use that p(x_0) = (π p)(x_0) (because p(x_0) ≥ p(x_k) for all k∈) and ∑_k∈ p(x_k) = ∑_k∈ (π p)(x_k). The right-hand side of (<ref>) is ≥ 0 by the rearrangement inequality for sums of products of two sequences <cit.>. In fact, strict inequality in (<ref>) holds unless p is constant along . But this is impossible because it would imply that p() = 0 and hence p(x) = 0 for all x ∈ V.
χ_(ϱ) = inf_p ∈_^↓(V) [I_E(p) + ϱ J_V(p)], ϱ∈ (0,∞),
with _^↓(V) defined in (<ref>).
2.
Let p ∈_^↓(V). Estimate
J_V(p) = ∑_R ∈_0∑_x ∈∂ B_R() [-p(x)log p(x)]
≥∑_R ∈_0∑_x ∈∂ B_R()[-p(x)log(1R+1)],
where we use that p(x) ≤1R+1 for all x ∈∂ B_R(). Hence
J_V(p) ≥∑_R ∈_0∂ S_R log(R+1)
with ∂ S_R = ∑_x ∈∂ B_R() p(x). By Lemma <ref>, J_V(p) ≤d+1/ϱ, and so
∑_R ∈_0∂ S_R log(R+1) ≤d+1/ϱ.
The computation in (<ref>) shows that any p for which there exist z ∼ y with p(z)>0 and p(y)=0 cannot be a minimiser nor a weak limit point of a minimising sequence. Hence all minimisers or weak limit points of minimising sequences are strictly positive everywhere.
3.
Take any minimising sequence (p_n)_n∈ of (<ref>). By (<ref>), lim_R→∞∑_x ∉ B_R() p_n(x) = 0 uniformly in n∈, and so (p_n)_n∈ is tight. By Prokhorov's theorem, tightness is equivalent to (p_n)_n∈ being relatively compact, i.e., there is a subsequence (p_n_k)_k∈ that converges weakly to a limit p∈_^↓(V). By Fatou's lemma, we have lim inf_k→∞ I_E(p_n_k) ≥ I_E(p) and lim inf_k→∞ J_V(p_n_k) ≥ J_V(p). Hence
χ_(ϱ) = lim_k →∞ [I_E(p_n_k) + ϱ J_V(p_n_k)] ≥ I_E(p) + ϱ J_V(p).
Hence p is a minimiser of (<ref>).
The proof uses approximation arguments.
1.
We first show that ϱ↦χ_(ϱ) is strictly increasing and globally Lipschitz. Pick ϱ_1 < ϱ_2. Let p̅_ϱ_1 be any minimiser of (<ref>) at ϱ_1, i.e.,
χ_(ϱ_1) = I_E(p̅_ϱ_1) + ϱ_1 J_V(p̅_ϱ_1).
Estimate
[I_E(p̅_ϱ_1) + ϱ_1 J_V(p̅_ϱ_1)]
= [I_E(p̅_ϱ_1) + ϱ_2 J_V(p̅_ϱ_1)] - (ϱ_2 - ϱ_1)J_V(p̅_ϱ_1)
≥χ_(ϱ_2) - (ϱ_2 - ϱ_1) J_V(p̅_ϱ_1)
≥χ(ϱ_2) - (ϱ_2 - ϱ_1) d+1ϱ_1,
where we use Lemma <ref>. Therefore
χ_(ϱ_2) - χ_(ϱ_1) ≤ (ϱ_2-ϱ_1) d+1ϱ_1.
Similarly, let p̅_ϱ_2 be any minimiser of (<ref>) at ϱ_2, i.e.,
χ_(ϱ_2) = I_E(p̅_ϱ_2) + ϱ_2 J_V(p̅_ϱ_2).
Estimate
[I_E(p̅_ϱ_2) + ϱ_2 J_V(p̅_ϱ_2)]
= [I_E(p̅_ϱ_2) + ϱ_1 J_V(p̅_ϱ_2)] + (ϱ_2 - ϱ_1) J_V(p̅_ϱ_2)
≥χ_(ϱ_1) + (ϱ_2 - ϱ_1) J_V(p̅_ϱ_2)
≥χ_(ϱ_1) + (ϱ_2 - ϱ_1) c(ϱ_2),
where we use Lemma <ref>. Therefore
χ_(ϱ_2) - χ_(ϱ_1) ≥ c(ϱ_2)(ϱ_2 - ϱ_1).
2.
Because χ_(ϱ) ≤ d+1 for all ϱ∈ (0,∞), it follows that lim_ϱ→∞χ_(ϱ) ≤ d+1. To obtain the reverse inequality, let p_ϱ be any minimiser of (<ref>) at ϱ. By Lemma <ref>, we may assume that J_V(p_ϱ) ≤d+1/ϱ. Hence lim_ϱ→∞ J_V(p_ϱ) = 0, and consequently lim_ϱ→∞p_ϱ= δ_ weakly. Therefore, by Fatou's lemma, lim_ϱ→∞χ_(ϱ) = lim_ϱ→∞ [I_E(p) + ϱ J_V(p)] ≥lim inf_ϱ→∞ I_E(p_ϱ) ≥ I_E(δ_) = d+1.
3.
To prove that lim_ϱ↓ 0χ_(ϱ) ≤ d-1, estimate
χ_(ϱ) ≤inf_p ∈_^↓(V)(p) ⊆ B_R() [I_E(p)+ϱ J_V(p)],
R ∈_0.
Because
sup_p ∈_^↓(V)(p) ⊆ B_R() J_V(p) = J_V(p_R) = log |B_R()|,
R ∈_0,
with
p_R(x) =
|B_R()|^-1, x ∈ B_R(),
0, else,
it follows that
lim_ϱ↓ 0χ_(ϱ)
≤inf_p ∈_^↓(V)(p) ⊆ B_R() I_E(p)
≤ I_E(p_R), R ∈_0.
Compute (recall (<ref>)) ,
I_E(p_R) = |∂ B_R+1()|/|B_R()|, R ∈_0.
Inserting the relations
|∂ B_R()| =
1, R=0,
(d+1)d^R-1, R ∈,
|B_R()| = ∑_R'=0^R |∂ B_R'()| = 1 + (d+1)/(d-1) (d^R-1), R ∈_0,
we get
I_E(p_R) = (d-1) (d+1)d^R/(d+1)d^R-2.
Hence lim_R→∞ I_E(p_R) = d-1, and so lim_ϱ↓ 0χ_(ϱ) ≤ d-1.
4.
To prove that lim_ϱ↓ 0χ_(ϱ) ≥ d-1, note that because J_V ≥ 0 we can estimate
lim_ϱ↓ 0χ_(ϱ) ≥inf_p ∈_^↓(V) I_E(p).
It therefore suffices to show that
inf_p ∈_^↓(V) I_E(p) ≥ d-1,
i.e., (p_R)_R ∈_0 is a minimising sequence of the infimum in the left-hand side. The proof goes as follows. Write (recall (<ref>))
I_E(p) = 1/2 ∑_x,y ∈ Vx ∼ y(√(p(x)) - √(p(y)) )^2
= 1/2 ∑_x,y ∈ Vx ∼ y[p(x) + p(y) - 2 √(p(x)p(y)) ]
= (d+1) - ∑_x,y ∈ Vx ∼ y√(p(x)p(y)).
Since is a tree, each edge can be labelled by the end-vertex that is farthest from . Hence the sum in the right-hand side can be written as
∑_x ∈ V ∖ 2√(p(x)p(x^↓)),
where x^↓ is the unique neighbour of x that is closer to than x. Since 2√(p(x)p(x^↓))≤ p(x) + p(x^↓), it follows that
∑_x ∈ V ∖ 2√(p(x)p(x^↓))≤∑_x ∈ V ∖ p(x) + ∑_x ∈ V ∖ p(x^↓)
= [1-p()] + 1.
Therefore
I_E(p) ≥ d - 1 + p(),
which settles the claim.
§ LARGE DEVIATION ESTIMATE FOR THE LOCAL TIME AWAY FROM THE BACKBONE
In this appendix we derive a large deviation principle for the total local times at successive depths of the random walk on ^ (see Fig. <ref>). This large deviation principle is not actually needed, but serves as a warm up for the more elaborate computations in Appendix <ref>.
For k∈_0, let V_k be the set of vertices in ^ that are at distance k from the backbone (see Fig. <ref>). For R ∈, define
ℓ^R_t(k) = ∑_x ∈ V_kℓ^_t(x), k = 0,1,…,R,
ℓ_t^R = ∑_k > R∑_x∈ V_kℓ^_t(x), k= R+1,
and
L_t^R = 1/t ((ℓ_t(k))_k=0^R, ℓ^R_t).
Abbreviate V^*_R = {0,1,…,R,R+1},
For every R ∈, (L_t^R)_t ≥ 0 satisfies the large deviation principle on (V^*_R) with rate t and with rate function I^†_R given by
I^†_R(p) = [√((d-1)p(0))-√(dp(1)) ]^2 + ∑_k=1^R-1[√(p(k))-√(dp(k+1)) ]^2
+ [√(p(R)+p(R+1)) - √(dp(R+1)) ]^2.
By monitoring the random walk on the tree in Fig. <ref> and projecting its depth on the vertices 0,1,…,R, respectively, R+1, we can apply the LDP in Proposition <ref> (see Fig. <ref>).
1.
The sojourn times have distribution EXP(d+1) at vertices k=0,1,…,R and distribution ψ at vertex k=R+1. The transition probabilities are
π_0,0 = 2/(d+1), π_0,1 = (d-1)/(d+1),
π_k,k+1 = 1/(d+1), π_k,k-1 = d/(d+1), k = 1,…,R,
π_R+1,R = 1.
Proposition <ref> therefore yields that (L_t^R)_t ≥ 0 satisfies the LDP on on (V^*_R) with rate t and with rate function I^†_R given by
I^†_R(p) = (d+1) ∑_k=0^R p(k) + inf_v V^*_R → (0,∞)sup_u V^*_R → (0,∞) L(u,v)
with
L(u,v) = - A - B - C,
where
A = ∑_k=1^R v(x) {1+log(du(k-1)+u(k+1)/u(k) p(k)/v(k))},
B = v(0) {1+log(2u(0)+(d-1)u(1)/u(0) p(0)/v(0))},
C = v(R+1) {log(u(R)/u(R+1))-(λ)(p(R+1)/v(R+1))}.
Here we use (<ref>) to compute A and B, and for C we recall that λ is the Legendre transform of the cumulant generation function λ of ψ computed in Lemma <ref>.
2.
We compute the infimum of L(u,v) over v for fixed u.
∙ For k=1,…,R,
∂ A/∂ v(k) = log(du(k-1)+u(k+1)/u(k) p(k)/v(k)),
⟹v̅_u(k) = p(k) du(k-1)+u(k+1)/u(k).
The second derivative is 1/v(k)>0.
∙ For k=0,
∂ B/∂ v(0) = log(2u(0)+(d-1)u(1)/u(0) p(0)/v(0)),
⟹v̅_u(0) = p(0) 2u(0)+(d-1)u(1)/u(0).
The second derivative is 1/v(0)>0.
∙ For k=R+1, the computation is more delicate. Define (recall (<ref>) in Appendix <ref>)
μ(α) = α (λ)^'(α) - (λ)(α).
The function μ has range (-∞,log√(d) ], with the maximal value uniquely taken at α=∞. Therefore there are two cases.
▸ u(R+1)/u(R) ≤√(d). Compute
∂ C/∂ v(R+1) = μ(p(R+1)/v(R+1)) - log(u(R+1)/u(R)),
⟹v̅(R+1) = p(R+1)/α_u(R+1)
with α_u(R+1) solving the equation
log(u(R+1)/u(R)) = μ(α_u(R+1)).
Since μ'(α) = α(λ)”(α) and λ is strictly convex (see Fig. <ref> in Appendix <ref>), μ is strictly increasing and therefore invertible. Consequently,
α_u(R+1) = μ^-1(log(u(R+1)/u(R))).
Putting (<ref>)–(<ref>) together, we get
L(u) = inf_v V^*_R → (0,∞) L(u,v)
= - ∑_k=1^R A_u(k) - B_u + C_u
with
A_u(k) = du(k-1)+u(k+1)/u(k) p(k), k = 1,…,R,
B_u = 2u(0)+(d-1)u(1)/u(0) p(0),
and
C_u = p(R+1)/α_u(R+1)[(λ)(α_u(R+1)) - log(u(R+1)/u(R))]
= p(R+1)/α_u(R+1)[(λ)(α_u(R+1)) - μ(α_u(R+1))]
= p(R+1) (λ)^'(α_u(R+1))
= p(R+1) ((λ)^'∘μ^-1)(log(u(R+1)/u(R))).
In (<ref>) in Appendix <ref> we showed that (λ)' ∘μ^-1 = λ^-1. Moreover, in (<ref>) in Appendix <ref> we showed that (λ^-1∘log) = S with
S(β) = d+1 - β - d/β, β∈ (0,√(d) ].
Since S has domain (0,√(d) ], C_u(R+1) is only defined when u(R+1)/u(R) ≤√(d), in which case
C_u = p(R+1) S(u(R+1)/u(R)).
▸ u(R+1)/u(R) > √(d). In this case ∂ C/∂ v(R+1)>0, the infimum is taken at v̅(R+1)=0, and hence (recall (<ref>))
C_u = p(R+1) (√(d)-1)^2 = p(R+1) S(√(d)).
Note that the right-hand side does not depend on u. The expressions in (<ref>)–(<ref>) can be summarised as
C_u = p(R+1) S(√(d)∧u(R+1)/u(R)).
3.
Next we compute the supremum over u of
L(u) = L(u,v̅_u) = - A_u - B_u + C_u.
with A_u = ∑_k=1^R A_u(k). We only write down the derivatives that are non-zero.
∙ For k=2,…,R-1,
- ∂ A_u/∂ u(k) = - p(k+1) d/u(k+1) - p(k-1) 1/u(k-1) + p(k) du(k-1)+u(k+1)/u(k)^2.
∙ For k=1,
- ∂ A_u/∂ u(1) = - p(2) d/u(2) + p(1) du(0)+u(2)/u(1)^2,
- ∂ B_u/∂ u(1) = - p(0) d-1/u(0).
∙ For k=R,
- ∂ A_u/∂ u(R) = - p(R-1) 1/u(R-1) + p(R) du(R-1)+u(R+1)/u(R)^2,
∂ C_u/∂ u(R) = p(R+1) [u(R+1)/u(R)^2 - d/u(R+1)]
1_{u(R+1)/u(R)≤√(d)}.
∙ For k=0,
-∂ A_u/∂ u(0) = - p(1) d/u(1),
-∂ B_u/∂ u(0) = p(0) (d-1)u(1)/u(0)^2.
∙ For k=R+1,
-∂ A_u/∂ u(R+1) = - p(R) 1/u(R),
∂ C_u/∂ u(R+1) = p(R+1) [-1/u(R) + du(R)/u(R+1)^2]
1_{u(R+1)/u(R)≤√(d)}.
All the first derivatives of A_u+B_u+C_u are zero when we choose
u̅(0) = √((d-1)p(0)), u̅(k) = √(d^kp(k)), k = 1,…,R,
u̅(R+1) = √(d^R+1 p(R)p(R+1)/p(R)+p(R+1)).
All the second derivatives are strictly negative, and so u̅ is the unique maximiser.
4.
Inserting (<ref>) into (<ref>), we get
L(u̅) = L(u̅,v̅_u̅) = - ∑_k=2^R-1 A_u̅(k)
- [A_u̅(1) + B_u̅] - A_u̅(R) + C_u̅
= -∑_k=2^R-1√(dp(k)) [√(p(k-1)) + √(p(k+1)) ]
- [2√(d(d-1)p(0)p(1)) + 2p(0) + √(dp(1)p(2)) ]
- [√(dp(R-1)p(R)) + √(p(R)/p(R)+p(R+1)) √(dp(R)p(R+1)) ]
+ p(R+1) S(√(dp(R+1)/p(R)+p(R+1)) ).
Recalling (<ref>), (<ref>) and (<ref>), and rearranging terms, we find the expression in (<ref>).
Note that I^†_R has a unique zero at p given by
p(0) = 1/2, p(k) = (1/2) (d-1)d^-k, k = 1,…,R, p(R+1) = (1/2) d^-R.
This shows that the fraction of the local time typically spent a distance k away from the backbone decays exponentially fast in k.
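This profile indeed sums to 1 and annihilates each square in (<ref>). A minimal numerical check (in Python; the values of d and R below are arbitrary test choices):
import numpy as np

def I_dagger(p, d):
    # Rate function of (<ref>): p = (p(0),...,p(R),p(R+1)) as a numpy array.
    R = len(p) - 2
    val = (np.sqrt((d - 1) * p[0]) - np.sqrt(d * p[1])) ** 2
    val += sum((np.sqrt(p[k]) - np.sqrt(d * p[k + 1])) ** 2 for k in range(1, R))
    val += (np.sqrt(p[R] + p[R + 1]) - np.sqrt(d * p[R + 1])) ** 2
    return val

d, R = 3, 6
p = np.array([0.5] + [0.5 * (d - 1) * d ** (-k) for k in range(1, R + 1)]
             + [0.5 * d ** (-R)])
print(p.sum(), I_dagger(p, d))   # -> 1.0 and ~0.0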
§ ANALYSIS OF THE UPPER VARIATIONAL FORMULA
In this appendix we carry out the proof of the claims in Section <ref>, namely, we settle (<ref>) in Appendix <ref> and (<ref>) in Appendix <ref>. The computations carried out in Appendix <ref> guide us along the way.
§.§ Identification of the rate function for the local times on the truncated tree
To identify the rate function I^†_E(_R) in Lemma <ref>, we need to work out the two infima between braces in (<ref>). The computation follows the same line of argument as in Appendix <ref>, but is more delicate. We will only end up with a lower bound. However, this is sufficient for the upper variational formula.
To simplify the notation we write (recall Fig. <ref>):
(V_R,E_R) = vertex and edge set of _R without the tadpoles,
= top vertex of V_R,
⋆ = right-most bottom vertex of V_R,
∂ V_R = set of vertices at the bottom of V_R,
= set of tadpoles,
_x = tadpole attached to x ∈∂ V_R\⋆.
Note that ∂ V_R consists of ⋆ and the vertices to which the tadpoles are attached. Note that int(V_R) = V_R ∖∂ V_R includes .
1.
Inserting (<ref>) in Appendix <ref> into (<ref>)–(<ref>), we get
I^†_E(_R)(p) = (d+1) ∑_x∈ V_R p(x)
+ inf_β∈ (0,∞)inf_q ∈(V_R)sup_q∈(V_R) L(β,q,q| p)
with
L(β,q,q| p) = - A - B - C - D,
where
A = ∑_x ∈int(V_R)β q(x){1+log(∑_y ∼ xq(y)/q(x)p(x)/β q(x))},
B = ∑_x ∈∂ V_R\⋆β q(x){1+log(q(x^↑)
+ d q(_x)/q(x)p(x)/β q(x))},
C = β q(⋆) {1+log(q(⋆^↑) + d q()/q(⋆)p(⋆)/β q(⋆))},
D = ∑_x ∈β q(x){log(q(x^↑)/q(x))
- (λ)(p(x)/β q(x)) },
with λ the Legende transform of the cumulant generating function of ψ (recall (<ref>)) and x^↑ the unique vertex to which x is attached upwards. (Recall that y ∼ x means that x and y are connected by an edge in E_R.) Note that A,B,C each combine two terms, and that A,B,C,D depend on p. We suppress this dependence because p is fixed.
2.
Inserting the parametrisation q = u/u_1 and q = v/v_1 with u,v V_R → (0,∞) and putting β q = v, we may write
I^†_E(^R)(p) = (d+1) ∑_x∈ V_R p(x) + inf_v V_R → (0,∞)sup_u V_R → (0,∞) L(u,v)
with
L(u,v) = - A - B - C - D,
where
A = ∑_x ∈int(V_R) v(x){1+log(∑_y ∼ xu(y)/u(x)p(x)/v(x))},
B = ∑_x ∈∂ V_R \⋆ v(x){1+log(u(x^↑)
+ d u(_x)/u(x)p(x)/v(x))},
C = v(⋆) {1+log(u(⋆^↑) + d u()/u(⋆)p(⋆)/v(⋆))},
D = ∑_x ∈v(x){log(u(x^↑)/u(x)) - (λ)(p(x)/v(x)) }.
Our task is to carry out the supremum over u and the infimum over v in (<ref>).
3.
First, we compute the infimum over v for fixed u. (Later we will make a judicious choice for u to obtain a lower bound.) Abbreviate
A_u(x) = ∑_y ∼ xu(y)/u(x) p(x), x ∈int(V_R),
B_u(x) = u(x^↑) + d u(_x)/u(x) p(x), x∈∂ V_R\⋆,
C_u(⋆) = u(⋆^↑) + d u()/u(⋆) p(⋆).
∙
For z ∈ V_R, the first derivatives of L are
z ∈int(V_R) ∂ L(u,v)/∂ v(z) = -log(A_u(z)/v(z)),
z ∈∂ V_R\⋆ ∂ L(u,v)/∂ v(z) = -log(B_u(z)/v(z)),
z = ⋆ ∂ L(u,v)/∂ v(z) = -log(C_u(z)/v(z)),
while the second derivatives of L equal 1/v(z)>0. Hence the infimum is uniquely taken at
x ∈int(V_R) v̅(x) = A_u(x),
x ∈ V_R \⋆ v̅(x) = B_u(x),
x = ⋆ v̅(x) = C_u(x).
∙ For z ∈, the computation is more delicate. Define (see (<ref>) in Appendix <ref>)
μ(α) = α (λ)^'(α) - (λ)(α).
The function μ has range (-∞,log√(d) ], with the maximal value uniquely taken at α=∞. Therefore there are two cases.
▸ u(x)/u(x^↑) ≤√(d):
Abbreviate α_u(z) = p(z)/v(z). For z ∈,
∂ L(u,v)/∂ v(z) = log(u(z)/u(z^↑))
+ (λ)(p(z)/v(z)) - p(z)/v(z) (λ)^'(p(z)/v(z))
= log(u(z)/u(z^↑)) - μ(α_u(z)),
∂^2 L(u,v)/v(z)^2 =p^2(z)/v^3(z) (λ)^”(p(z)/v(z)) >0,
where we use that λ, being a Legendre transform, is strictly convex. Hence the infimum is uniquely taken at
v̅(x) = p(x)/α_u(x), x ∈,
with α_u(x) solving the equation
log(u(x)/u(x^↑))
= μ(α_u(x)), x ∈.
Since μ'(α) = α(λ)”(α) and λ is strictly convex (see Fig. <ref> in Appendix <ref>), μ is strictly increasing and therefore invertible. Consequently,
α_u(x) = μ^-1(log(u(x)/u(x^↑))), x ∈.
Putting the above formulas together, we arrive at (recall (<ref>))
L(u) = inf_v V_R → (0,∞) L(u,v)
= - ∑_x ∈int(V_R) A_u(x) - ∑_x∈∂ V_R\⋆ B_u(x) - C_u(⋆)
+ ∑_x ∈ D_u(x)
with (recall (<ref>))
D_u(x) = - p(x)/α_u(x)[log(u(x^↑)/u(x)) - (λ)(α_u(x))]
= p(x)/α_u(x)[(λ)(α_u(x)) + μ(α_u(x))]
= p(x) (λ)^'(α_u(x))
= p(x) ((λ)^'∘μ^-1)(log(u(x)/u(x^↑))).
In (<ref>) in Appendix <ref> we show that (λ)' ∘μ^-1 = λ^-1. Moreover, in (<ref>) in Appendix <ref> we show that (λ^-1∘log) = S with
S(β) = d+1 - β - d/β, β∈ (0,√(d) ].
Since S has domain (0,√(d) ], D_u(x) is only defined when u(x)/u(x^↑) ≤√(d), in which case
D_u(x) = p(x) S(u(x)/u(x^↑)), x ∈.
▸ u(x)/u(x^↑) > √(d): In this case ∂ L(u,v)/∂ v(z) > 0, the infimum is uniquely taken at v̅(x)=0, and
D_u(x) = p(x) (√(d)-1)^2 = p(x) S(√(d)), x ∈,
where we use (<ref>). Note that the right-hand side does not depend on u.
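For the reader's convenience, the identity invoked in the last display is immediate from the formula for S above:
S(√(d)) = d+1-√(d)-d/√(d) = d+1-2√(d) = (√(d)-1)^2.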
4.
Next, we compute the supremum over u. The first derivatives of L are
z ∈int(V_R) \ ∂ L(u)/∂ u(z)
= ∑_y ∼ z u(y)/u^2(z) p(z) - ∑_y ∼ z1/u(y) p(y),
z = ∂ L(u)/∂ u()
= ∑_y ∼ u(y)/u()^2 p() -∑_y: y^↑ = 1/u(y)p(y)
- d/u(⋆) p(⋆),
z = ⋆ ∂ L(u)/∂ u(⋆)
= -1/u() p() + u(⋆^↑) + du()/u(⋆)^2 p(⋆),
z ∈∂ V_R \⋆ ∂ L(u)/∂ u(z)
= -1/u(z^↑) p(z^↑) + u(z^↑)+du(_z)/u(z)^2 p(z)
+ [u(_z)/u(z)^2 - d/u(_z)]p(_z)
1_{u(z)/u(z^↑)≤√(d)},
z ∈ ∂ L(u)/∂ u(z)
= -d/u(z^↑) p(z^↑)
+ [-1/u(z^↑) +du(z^↑)/u(z)^2] p(z)
1_{u(z)/u(z^↑)≤√(d)}.
The second derivatives of L are all <0. The first line in (<ref>) can be rewritten as
∑_y ∼ z u(y) [p(z)/u^2(z) - p(y)/u^2(y)],
which is zero when
u̅(x) = √(p(x)), x ∈ V_R.
Given the choice in (<ref>), the fifth line in (<ref>) is zero when
u̅(x) = √(dp(x^↑)p(x)/dp(x^↑)+p(x)), x ∈.
Indeed, the derivative is strictly negative when the indicator is 0 and therefore the indicator must be 1. But the latter is guaranteed by (<ref>)–(<ref>), which imply that
u̅(x)/u̅(x^↑) = √(dp(x)/dp(x^↑)+p(x))≤√(d), x ∈.
Given the choice in (<ref>)–(<ref>), also the fourth line in (<ref>) is zero. Thus, only the second and third line in (<ref>) are non-zero, but this is harmless because ,⋆ carry a negligible weight in the limit as R →∞ because of the constraint p(∂ V_R ∪) ≤ 1/R in Lemma <ref> (recall (<ref>)).
Inserting (<ref>)–(<ref>) into (<ref>) and using (<ref>), (<ref>), we get the following lower bound:
sup_u V_R → (0,∞) L(u)
≥ - ∑_x ∈int(V_R) A_u̅(x)
- ∑_x∈∂ V_R\⋆ B_u̅(x)
- C_u̅(⋆) + ∑_x ∈ D_u̅(x)
= - ∑_x ∈int(V_R)∑_y ∼ x√(p(y)p(x))
- ∑_x∈∂ V_R \⋆√(p(x))(√(p(x^↑))
+ d√(dp(x)p(_x)/dp(x)+p(_x)))
-√(p(⋆))(√(p(⋆^↑))+ d√(p()))
+ ∑_x ∈ p(x) (d+1-√(d)[√(p(x)/d p(x^↑) + p(x))
+ √(d p(x^↑) + p(x)/p(x)) ]).
5.
Using the relation (d+1) p(x) = ∑_y∼ x p(x), x∈int(V_R), we get from (<ref>) that
I^†_E(^R)(p) ≥ K^1_R(p) + K^2_R(p)
with
K^1_R(p)
= ∑_x ∈int(V_R)∑_y ∼ x[p(x) - √(p(x)p(y)) ]
= ∑_{x,y}∈E_R(√(p(x)) - √(p(y)) )^2
+ [p()-√(p()p(⋆)) ] - ∑_x∈∂ V_R[ p(x) - √(p(x)p(x^↑)) ]
and
K^2_R(p)
= ∑_x∈∂ V_R \⋆[(d+1) p(x) - √(p(x))(√(p(x^↑))
+ d√(dp(x)p(_x)/dp(x)+p(_x)))]
+ (d+1) p(⋆)-√(p(⋆))(√(p(⋆^↑)) + d√(p()))
+ ∑_x ∈ p(x) [d+1-√(d) (√(p(x)/d p(x^↑) + p(x))
+ √(d p(x^↑) + p(x)/p(x)) )].
The first sum in the right-hand side of K^1_R(p) equals the standard rate function I_E_R(p) given by (<ref>), with
E_R = E_R ∖{,⋆}
the set of edges in the unit _R without the tadpoles and without the edge {,⋆} (i.e., E_R = E(^*_R); recall Fig. <ref>). Rearranging and simplifying terms, we arrive at
I^†_E(^R)(p) ≥ I_E_R(p)+ K^3_R(p)
with
K^3_R(p) = S_∂ V_R \⋆(p) + S_,⋆(p) + S_(∂ V_R \⋆) ∪(p),
where
S_∂ V_R \⋆(p)
= d ∑_x∈∂ V_R \⋆ p(x),
S_,⋆(p)
= (√(p()) - √(p(⋆)))^2 + (d-1)[p(⋆) - √(p()p(⋆)) ],
S_(∂ V_R \⋆) ∪(p)
= - ∑_x∈∂ V_R \⋆ p(x) d√(dp(_x)/dp(x)+p(_x))
+ ∑_x∈∂ V_R \⋆ p(_x) (d+1-√(d) [√(p(_x)/d p(x) + p(_x))
+ √(d p(x) + p(_x)/p(_x)) ]).
6.
Since √(p()p(⋆))≤ 1/2 [p()+p(⋆)], the boundary constraint ∑_x∈∂ V_R ∪ p(x) ≤ 1/R implies that S_∂ V_R \⋆(p) + S_,⋆(p) = O(1/R). The same constraint implies that the first sum in S_(∂ V_R \⋆) ∪(p) is O(1/R). Hence
K^3_R(p) = O(1/R) + ∑_x∈∂ V_R \⋆ p(x) F(p(_x)/p(x))
with
F(w) = w (d+1-√(d) [√(w/(d+w)) + √((d+w)/w) ]).
The map w ↦ F(w) is continuous on (0,∞) with
F(w) = (d+1)w - d√(w) + O(w^3/2) as w ↓ 0, and F(w) = [(d+1)-2√(d) ] w + O(w^-1) as w →∞.
From this we see that if d ≥ 4, then there exists a C ∈ (1,∞) such that
F(w)+C ≥(1-√(w) )^2, w ∈ [0,∞).
Hence we have the lower bound
K^3_R(p)
≥ O(1/R) + ∑_x∈∂ V_R \⋆
p(x) [-C + (1-√(p(_x)/p(x)) )^2]
= O(1/R) + ∑_x∈∂ V_R \⋆(√(p(x))-√(p(_x)) )^2.
Via (<ref>)–(<ref>), it follows that
I^†_E(^R)(p) ≥ O(1/R) + I_E_R(p), R ∈,
with I_E_R(p) the standard rate function given by (<ref>), with
E_R = E_R ∪[∪_x ∈∂ V_R ∖⋆{x,_x}]
the set of edges in the unit _R that is obtained from the unit _R by removing the edge {,⋆} (i.e., E_R = E(_R); recall Fig. <ref>). This completes the proof of (<ref>).
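The key inequality F(w)+C ≥ (1-√(w))^2 for d ≥ 4 used above is easy to probe numerically; the sketch below (illustrative only, with an arbitrary grid of w-values) evaluates the gap and shows that it is bounded below precisely when the slope d+1-2√(d) is at least 1:

```python
import numpy as np

def F(w, d):
    # F(w) = w ( d+1 - sqrt(d) [ sqrt(w/(d+w)) + sqrt((d+w)/w) ] )
    return w * (d + 1 - np.sqrt(d) * (np.sqrt(w / (d + w)) + np.sqrt((d + w) / w)))

w = np.geomspace(1e-8, 1e4, 400_000)
for d in (2, 3, 4, 5):
    gap = F(w, d) - (1.0 - np.sqrt(w)) ** 2
    # bounded below (so a finite C > 1 works) for d >= 4;
    # drifts to -infinity for d = 2, 3 because the slope d+1-2*sqrt(d) < 1
    print(d, d + 1 - 2 * np.sqrt(d), gap.min())
```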
The condition d ≥ 4 is needed only in (<ref>). For d=2,3 we have F(w)+C ≥θ_c(1-√(w) )^2 with θ_c = d+1-2√(d)∈ (0,1). Consequently, the edges {x,_x}, x ∈∂ V_R∖⋆, carry a weight that is smaller than that of the edges in , which may cause the optimal p to stick to the boundary as R→∞, in which case we do not have (<ref>).
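For orientation, a worked evaluation of this constant:
θ_c = d+1-2√(d): for d=2, θ_c = 3-2√(2) ≈ 0.172; for d=3, θ_c = 4-2√(3) ≈ 0.536; at d=4 it equals 5-2√(4) = 1.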
§.§ Limit of the upper variational formula
Note that
_R ⊆,
with the infinite tree. Consequently,
I_E_R(p) = I_E()(p) - (d-1) ∑_x ∈∂ V_R ∖⋆ p(x),
∀ p ∈(V()) (p) = V(_R),
where the sum compensates for the contribution coming from the edges in that link the vertices in ∂ V_R ∖⋆ to the vertices one layer deeper in that are not tadpoles. Since this sum is O(1/R), we obtain (recall (<ref>))
χ^+_R(ϱ) = inf_p ∈(V(_R))p(∂__R) ≤ 1/R{I^†_E(_R)(p) + ϱ J_V(_R)(p)}
≥ O(1/R) + inf_p ∈(V())(p) = V(_R),
p(∂__R) ≤ 1/R{I_E()(p) + ϱ J_V()(p)}
≥ O(1/R) + χ_(ρ),
where the last inequality follows after dropping the constraint under the infimum and recalling (<ref>). This completes the proof of (<ref>).
|
http://arxiv.org/abs/2307.10211v1 | 20230714132120 | Swimming by spinning: spinning-top type rotations regularize sperm swimming into persistently symmetric paths in 3D | [
"Xiaomeng Ren",
"Hermes Bloomfield-Gadêlha"
] | physics.bio-ph | [
"physics.bio-ph"
] |
Sperm modulate their flagellar symmetry to navigate through complex physico-chemical environments and achieve reproductive function. Yet it remains elusive how sperm swim forwards despite the inherent asymmetry of several components that constitutes the flagellar engine. Despite the critical importance of symmetry, or the lack of it, on sperm navigation and its physiological state, there is no methodology to date that can robustly detect the symmetry state of the beat in free-swimming sperm in 3D. How does symmetric progressive swimming emerge even for asymmetric beating, and how can beating (a)symmetry be inferred experimentally? Here, we numerically resolve the fluid mechanics of swimming around asymmetrically beating spermatozoa. This reveals that sperm spinning critically regularizes swimming into persistently symmetric paths in 3D, allowing sperm to swim forwards despite any imperfections on the beat. The sperm orientation in three-dimensions, and not the swimming path, can inform the symmetry state of the beat, eliminating the need of tracking the flagellum in 3D. We report a surprising correspondence between the movement of sperm and spinning-top experiments, indicating that the flagellum drives “spinning-top” type rotations during sperm swimming, and that this parallel is not a mere analogy. These results may prove essential in future studies on the role of (a)symmetry in spinning and swimming microorganisms and micro-robots, as body orientation detection has been vastly overlooked in favour of swimming path detection. Altogether, sperm rotation may provide a foolproof mechanism for forward propulsion and navigation in nature that would otherwise not be possible for flagella with broken symmetry.
§ INTRODUCTION
Sperm navigate through female reproductive tract to fertilize the egg, encountering numerous hostile environments of viscous mucus, complex wall geometry, acid vaginal fluid and immune system, with small probability of success <cit.>. During this process, sperm flagellum plays a vital role providing motility and driving the cell forwards, via the emergence of both symmetric or asymmetric beating patterns <cit.>. From centriole inner scaffold and its associated complex <cit.> to dynein arrangement <cit.>, and ion channel distributions <cit.>, asymmetry is present throughout the structure of sperm flagellum. Nevertheless, asymmetric modulation of the beat is also critical for sperm capacitation, hyperactivation, signalling and chemotaxis, and essential during fertilization <cit.>. As such, waveform asymmetry is a critical proxy to distinguish different physiological states of the sperm flagellum. Despite the importance of symmetry state detection of sperm flagella, there is no methodology to date that can robustly measure waveform asymmetry in free-swimming sperm in 3D (Fig. <ref>).
In the context of sperm swimming, it is generally accepted that symmetrical waveform leads to straight swimming trajectories, whilst asymmetrical beating patterns result in asymmetric swimming paths <cit.>. In an apparent contradiction, although asymmetry is intrinsic to the flagellar apparatus <cit.>, three-dimensional (3D) sperm tracking experiments (Fig. <ref>) show that the majority of sperm has a progressive swimming helical path with a globally straight forward direction <cit.>. Fig. <ref> C-F (top row) depicts four representative categories of the experimental sperm head trajectories taken from <cit.> that exhibit progressive swimming: helical ribbon (HR), twisted ribbon (TR), spinning star (SS) and helical loop (HL), representing 91.7% of a total of 2133 tracked bovine sperm in 3D. Indeed, it has been long hypothesised in the literature that, despite the presence of waveform asymmetry, global forward motion is enabled by the out-of-plane beating component that drives sperm rotations as it swims <cit.>— though the exact mechanisms by which this could take place remain unexplored <cit.>. Here, we test this hypothesis using mathematical modelling and simulation of free-swimming sperm driven by a symmetric and asymmetric beating flagellum in 3D. This reveals a novel spinning-top-like motion that regularizes sperm swimming into persistently symmetric swimming paths in 3D, with important consequences on the sperm swimming and empirical detection of flagellar waveform asymmetry, or symmetry, in sperm.
Numerical simulations in Fig. <ref> show that both symmetric and asymmetric waveforms recapitulate the diversity of experimental sperm trajectories in 3D <cit.>, with progressive swimming modes, such as HR, TR, SS and HL, also in agreement with earlier studies <cit.>. This is in contrast with the simpler case of planar asymmetric waveforms which always lead to biased, curved trajectories <cit.> (Fig. <ref>, CR). The persistent progressive swimming shown in Fig. <ref> is also consistent with recent research by Zaferani et al. <cit.> showing that progressive swimming is possible with asymmetric beating, which was attributed, once again, to the potential counteraction of the sperm rolling. Nevertheless, our numerical simulations in Fig. <ref> show that the symmetry state of the flagellum cannot be inferred from sperm trajectories alone, even if detected in 3D, as in <cit.>, or if 3D flagellar beating information is available for free-swimming sperm at the lab frame of reference <cit.>, as we further demonstrate in this study. In all, given the persistent symmetry of the swimming paths, how does the waveform asymmetry affect the sperm motion in 3D and how can this be detected experimentally? It is implausible that any source of asymmetry is simply `filtered out' from the system because of sperm rotations, according to the above hypothesis. In other words, waveform asymmetry must be manifested and detectable at some level during cell swimming, and this is indeed what we find here: the waveform asymmetry drives sperm in complex rotational orbits in 3D, thus far overlooked in the literature.
The complex interplay between body rotations and asymmetry is not exclusive to sperm swimming. Rolling disturbances may cause resonance and catastrophic yawing in missiles <cit.>, whilst rifle bullets obtain gyroscopic stability and improved accuracy with the appropriate spin <cit.>. The physics behind the above rotating systems and sperm swimming is very distinct, yet similarities of motion patterns may indicate deeper mathematical connections among such disparate systems. This is indeed the case for dynamics of spinning tops described by Euler equations and the statics of elastic rods governed by Kirchhoff equations <cit.>. In the same spirit, we report here a surprising correspondence between simulations of sperm swimming and experimental tracks of spinning-tops <cit.> (Fig. <ref>), revealing that 3D sperm rotation is qualitatively similar to spinning-tops, and that this parallel is not a mere analogy. The sperm flagellum cycles around the head-tail junction and drives spinning-top-like rotations on the entire cell during sperm swimming, suggesting that a mathematical equivalence between the dynamical systems of these seemly unrelated motion types may exist.
§ RESULTS
We have conducted non-local microhydrodynamic simulations of free sperm swimming to elucidate the role of intrinsic waveform asymmetries on the resulting 3D sperm motion. We focus on the coupling between the out-of-plane component of three-dimensional helicoid waveforms <cit.>, which regulates sperm rotations in 3D through the waveform parameters α or τ (as detailed in Materials and Methods), and two types of static beating asymmetries observed experimentally: (i) a one-sided waveform shift <cit.>, regulated by the B parameter in the xyz-waveform model (xyz-model), in Fig. <ref> A-F, and (ii) a static curvature waveform bias <cit.>, captured by the average curvature κ_0 in the κ-waveform model (κ-model), in Fig. <ref> I-N. A mathematical description of the models, estimations of parameter values, and the previous works that our numerical simulations are built on can be found in the Materials and Methods section.
§.§ Symmetric and asymmetric beating patterns recapitulate experimental sperm swimming trajectories in 3D
Fig. <ref> and Table <ref> show excellent agreements with representative experimental sperm trajectories and dynamics <cit.>, with both symmetric and asymmetric waveforms (Methods), highlighting the validity of the framework employed, and in further agreement with early studies <cit.>, for straight ribbon (SR), circular ribbon (CR), helical ribbon (HR), twisted ribbon (TR), spinning star (SS) and helical loop (HL). Simpler trajectory patterns, such as SR, CR and HR, can only be obtained by either symmetric (SR) or asymmetric (CR, HR) waveforms, whereas Fig. <ref> D-F show that TR, SS and HL modes are not exclusive to symmetric or asymmetric beating patterns. Asymmetric waveforms can generate almost indistinguishable swimming trajectories to the symmetric ones in 3D, thus both cases provide excellent agreement with experiments. That is, symmetric forward trajectories can be equally observed for both symmetric and asymmetric waveforms, even when the beat patterns are highly asymmetric, for both types of asymmetry (waveform shift or static curvature). Finally, in Table <ref> we compare our numerical results for the HL mode with experimental measurements of sperm motion from Ref. <cit.>, showing once again excellent accordance, further validating our simulations, waveform models and parameter choices quantitatively.
§.§ Waveform rotation amplitude and asymmetry regulate the diversity of swimming paths in 3D
Fig. <ref> A-F (xyz-model) and I-N (κ-model) show waveforms with different combinations of beating asymmetry (B, κ_0) and out-of-plane rotation amplitude (α, τ), together with the resultant diversity of swimming trajectories (insets): SR, CR, HR, TR, SS and HL. This highlights how small differences in waveform provoke large distinctions in swimming paths. Fine structures of the paths are recorded in Movies S1-S2, where intricate cusp formation and sharp turns are highlighted, as exemplified in Fig. <ref>N, and in particular, the trace patterns of HL mode exhibit local loops and global revolutions (Fig. <ref>H) with opposed chirality, similar to tracks previously reported in <cit.>. Note that the simulated swimming paths have been re-aligned to the x-axis above, for comparison purpose, as described in Fig. <ref>H and Fig. <ref> A and B. Our simulations show that the overall direction of the swimming path is governed by both the waveform asymmetry and the out-of-plane component of the beat, as shown in Movie S3. As such, it may be possible that sperm can dynamically adjust these controls alone (waveform asymmetry and out-of-plane amplitude of rotation) to navigate in 3D in response to changes in the environment.
Fig. <ref> G and O, and Fig. S1, show the diversity map of trajectory-type in the asymmetry-rotation parameter space (B-α and κ_0-τ). When the flagellar rotation amplitude is small (α, τ low), the waveform asymmetry dictates the variations of swimming patterns. On the other hand, when the out-of-plane component is high (α, τ high), the waveform asymmetry has negligible influence on the trajectory type; with the exception of HL for the κ-model, which is only possible for large values of κ_0 and τ in Fig. <ref>O. For example, TR switches to HR by increasing B, for α between 10^-3 to 10^-2, whilst for α larger than 10^-2, the trajectory-type is independent of B, as similarly observed for the κ-model.
Fig. <ref> H and P, as well as Fig. S2, show the progressive velocity along the trajectory central axis (red line in Fig. <ref>H) varying with asymmetry (B or κ_0) and rotation amplitude (α or τ) of the waveform.
As expected, the highest speeds appear for symmetric waveforms. However, even though flagellar asymmetry weakens the forward velocity, the out-of-plane component is able to offset the reduction in progressive speed caused by asymmetry as α or τ increases, compensating in this way the detrimental effect that asymmetry has on progressive motion.
The influence of α and τ for both κ- and xyz-models are non-monotonic, indicating that an optimal level of waveform rotation amplitude exists for a given asymmetric waveform, to maximize the progressive speed.
§.§ Waveform asymmetry is suppressed in the sperm swimming paths but manifested in the 3D orientation orbits
We measured the pitch and radius of the symmetric helical paths (P and r), as well as the tilt angles of head orientation (ψ_ξ_1,2,3) in the lab frame, as defined in Fig. <ref> A-E, and parameters displayed in Fig. <ref> F-O and Fig. S3. Both P and r are affected by the waveform asymmetry, and at the same time, regulated by the out-of-plane component of the beat. As α and τ increase and the waveform becomes more circular in the cross-section, the large pitch and radius in Fig. <ref> F, G, K and L drop, and ultimately decay to zero, regardless of the level of asymmetry, indicating a transition from large helical paths (small α, τ) to a linear forward movement (large α, τ), see Movie S4.
When α, τ =0, the characterised circular trajectory of the CR mode demonstrates a linear relationship between trajectory curvature κ_swim and waveform asymmetry (insets of Fig. <ref> G and L, Fig. S3 B and G), consistent with observations by <cit.>. Fig. <ref> F, G, K and L show that any distinction between the swimming paths due to the waveform asymmetry is lost as the out-of-plane component of the beat is increased (all curves collapse to zero), indicating that the flagellum rotation amplitude can indeed suppress the manifestation of the waveform asymmetry at the swimming path level whilst promoting global forward motion.
The waveform asymmetry instigates complex rotational orbits in 3D (Fig. <ref> C-E, H-J and M-O). When α and τ are varied, the tilt angles of the sperm head orientation shown in Fig. <ref> H-J and M-O remain fairly constant for symmetric waveforms, whilst those for asymmetric waveforms are affected noticeably— see comparisons provided in Movies S4 and S5, where the orbits of the head orientation vector ξ_2 are regularized, in contrast to the wobbling traces of ξ_1,3.
As α (τ) increases, ψ_ξ_2 and ψ_ξ_3 tend to approach 90^∘ and 0^∘, respectively, but asymptote to slightly different angles depending on the level of waveform asymmetry. The head orientation of asymmetric cases, for the basis vectors ξ_2,3, align more closely with the corresponding symmetric cases, as the flagellum rotation amplitude increases, indicating the suppression of rotation in two orientation directions as α and τ increases. Most importantly, the spherical orbits of ξ_1 direction differ dramatically depending on the level of waveform asymmetry, even when the out-of-plane component of the beat (α,τ) is large (Fig. <ref> C, H and M). In the case of static asymmetric curvature in Fig. <ref> M, this distinction becomes less prominent for very large values of τ, indicating an asymmetry-dependent effect on the dynamics of ξ_1, as well as ξ_3, when comparing Figs. <ref> J and O for large values of α and τ.
§.§ Sperm rotates like a spinning-top
The sperm head centre position (path trajectory) revolves around the central axis (Fig. <ref>H and Movie S4), whilst the sperm head orientation rotates around the central axis during its spherical orbit (Fig. <ref> E, J and O) with a precession motion, as depicted by the red nutating (wobbling) trajectory in Fig. <ref>E. In other words, the head long axis ξ_3 revolves around the progressive swimming direction as the head spins around itself, defining the sperm head precession. To further quantify the sperm head precession, we define the angle between central axis (red line in Fig. <ref>H) and head precession axis (black line in Fig. <ref>I) as γ. Fig. S5 shows the statistics of γ, where the medians and interquartile ranges of the angle vary closely around zero, with larger angles ranging between 10 and 30 degrees.
The tendency of ψ_ξ_3 declining towards 0^∘ (Fig. <ref> J and O, and Fig. S3 E and J) implies that a larger waveform rotation amplitude (α and τ) leads to a head orientation in which the head long axis is almost parallel to the head precession axis and its progressive swimming direction. This is similar to the movement of a spinning-top. Movie S4 presents typical precession and nutation movements of the sperm head long axis around the progressive direction (precession axis), while the head spins around its own longitudinal axis ξ_3, in addition to oscillatory nutations, characterized by the wobbling movement of head long axes as it rotates/precesses around the progressive direction.
Fig. <ref> shows the striking similarity between observations of spinning-top orbits <cit.>, obtained by tracking the spinning-top long axis, and the rotation patterns of the sperm head around the swimming directions (yz projections of helical paths in Fig. <ref>H). All trajectories are characterized by local loops (or cusps) following a global revolution around the centre (Fig. <ref>). The remarkable similarity among such diverse trace patterns suggests that such qualitative comparison is not a mere analogy, and that an underlying equivalence may exist between the dynamical systems that govern these movements. Fig. <ref> shows the experimental orbits of spinning-tops with mixed precession and nutation movements. The spinning-top configuration in Fig. <ref>I <cit.> is referred as a `bottom-heavy' type of spinning-top, as its centre of mass is located below the spinning tip <cit.>, resulting in the formation of outward-directed local loops. When the nutation movement is small, the local loops become less developed and ultimately degenerate into cusps, as shown in Fig. <ref> D-H. In direct correspondence, sperm beatings with a lower wavelength, such as k=π, exhibit sharper turns in their projected trajectories leading to cusp formations, while large values generate loops. Waveform characteristics thus instigate spinning-top-like effects on the sperm swimming: sperm head nutation defines the type of helical path that emerges whilst regulated by the wavelength of the beat and the out-of-plane component. It is noteworthy that the sperm trajectories presented in Fig. <ref> are for large flagellar rotation amplitudes (α and τ), and traces with higher values of (α, τ) display a much denser appearance of loops and cusps.
§.§ Waveform rotation inhibits asymmetry in the sperm orientation orbital cycle
The time sperm takes to complete one period along its helical path (Fig. <ref>H) is defined as Δ T_tra, and the time head orientation vectors complete one spherical orbit (Fig. <ref>I) is defined as Δ T_rot, in units of beat cycle. Fig. <ref> A and C, and Fig. S4 A and C, show the orbital period of the head rotation, Δ T_rot, as a function of α,τ for distinct waveform asymmetries (B,κ_0).
The smaller Δ T_rot period is, the faster the angular speed of head rotation will be. When α,τ are small, Δ T_rot can be as large as thousands of beat cycles, depending on the level of asymmetry.
However, as α,τ increase, Δ T_rot decreases, regardless of the magnitude of the waveform asymmetry, with all cases collapsing into a fast sperm rotating mode, dominated by the out-of-plane component of the waveform.
For large α,τ, the orbital revolution is much faster (details see Movies S6, S7 and S8), yet characterised by a smoother wobbling movement and a lower spatial frequency, as depicted by the insets showing the spherical orbit for one revolution. As such, large waveform rotation amplitude inhibits the effect of waveform asymmetry on the sperm orientation orbital cycle.
Fig. <ref> B, D, and Fig. S4 B, D, show the relative deviations between head trajectory helical period Δ T_tra and head rotation orbital period Δ T_rot. For most of the range of α,τ studied, the deviation is either zero or very small, indicating that head trajectory and rotation revolution follow similar behaviours.
For instance, the decreasing Δ T_rot caused by the increasing B (κ_0) at small α (τ) implies a shorter period of head rotation movement, as well as a faster revolution of the translation trajectory, for a quasi-planar waveform with more asymmetry.
An apparent discrepancy between the periods of translation and rotation occurs when the waveform out-of-plane component is very large. Movie S7 shows an example of sperm motion where the revolutions of the head translation and spherical orbit are in synchrony, with zero deviation and α=0.05, in contrast with Movie S8, in which head spinning is faster than the head trajectory revolution, with a deviation of 3.2 beat cycles and α=0.5.
§.§ Quantification of waveform asymmetry in free-swimming sperm requires detection of sperm head orientation in 3D
The waveform asymmetry is not detectable from translational motion of the sperm head and flagellum in 3D at the lab frame of reference. As shown in the above sections, sperm rotation `filters-out' waveform asymmetry from linear translations and subsequent swimming paths. This is further illustrated in Fig. <ref>, which shows flagellar beating relative to the laboratory frame of reference (A, E), comoving frame of reference (B, F), and body frame of reference (D, H), for both symmetric and asymmetric waveforms. The beating patterns in the comoving frame (Fig. <ref> B and F) are obtained from those in the lab frame such that one observes the flagellum translating with sperm head along the progressive direction (but not rotating with the sperm head), with projections on the yz-plane exhibiting symmetric distributions of the waveform tracers around the central swimming axis for both symmetric and asymmetric cases (Fig. <ref> B and F). The principal components of the mid-flagellar tracers in Fig. <ref> C and G show that the fitted envelopes by ellipses have semi-axes ratio (a/b) close to one, see SI for details on principal component analysis (PCA). All flagellar motions in Fig. <ref> C and G appear to be symmetric. The symmetry state of the waveform is indistinguishable at both lab and comoving frames of reference. Indeed, we show here that asymmetry, or indeed symmetry, of the waveform is only detectable with full knowledge of the 3D head orientation, as displayed in Fig. <ref> H-J, where black markers denote the simulations depicted in Fig. <ref> showing sizeable distinctions in angular differences between the asymmetric and symmetric cases. Detection of both head orientation and translations in 3D is thus ideal, as it allows reconstruction of the true beating of the flagellum, relative to the body frame of reference, from which the waveform symmetry state can be detected. The body frames of reference in Fig. <ref> D and H clearly distinguish asymmetric from symmetric beatings.
As shown in Fig. <ref> D-F, it is challenging to distinguish asymmetric and symmetric beating patterns by inspecting sperm head trajectories alone, or even waveform trajectories in 3D (Fig. <ref>). Fig. <ref> H, M, and Fig. S3 C, H, however, reveal that 3D angular movements of sperm head alone can distinguish asymmetric and symmetric waveforms without requiring waveform tracers nor head trajectories in 3D, even for large values of α,τ. When the waveform is symmetric (B=0,κ_0=0), the head orientation relative to the head precession axis, quantified by ψ_ξ_1,2,3, does not change with α or τ, and deviations from these angles are associated with the magnitude of the waveform asymmetry. As such, the 3D orientations in Fig. <ref> can be used alone as a lookup table to infer waveform asymmetry. For the symmetric cases, ψ_ξ_1=ψ_ξ_2=90^∘, and ψ_ξ_3 = 7^∘, 5.5^∘, 3.6^∘ and 6.6^∘, respectively, for the xyz-model with k=π, 2π, 3π, and the κ-model. Movie S4 further shows how the orientation orbits of asymmetric waveforms vary with α, in comparison with the symmetric cases in Movie S5.
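A minimal sketch of how such a lookup could be used in practice is given below. It assumes that the head basis vectors ξ_1,2,3 have already been tracked over time and that the precession axis has been estimated separately (for instance from the average head long axis over one revolution); the function name and the averaging choice are illustrative assumptions, not part of the original analysis pipeline.

```python
import numpy as np

def tilt_angles(basis_vectors, precession_axis):
    """Mean angle (degrees) between each head basis vector xi_i(t) and the precession axis.

    basis_vectors  : array of shape (T, 3, 3); column j of frame t is xi_j at time t.
    precession_axis: array of shape (3,), assumed already estimated from the track.
    """
    a = np.asarray(precession_axis, float)
    a /= np.linalg.norm(a)
    cosines = np.einsum('tij,i->tj', np.asarray(basis_vectors, float), a)
    psi = np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))
    return psi.mean(axis=0)  # (psi_xi1, psi_xi2, psi_xi3)
```

Comparing the returned triple against the symmetric-waveform reference values quoted above would then flag a departure of the beat from symmetry without requiring flagellar tracking.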
§.§ CASA parameters cannot detect waveform asymmetry of 3D beating spermatozoa
Computer-assisted sperm analysis (CASA) systems are used in clinical settings to assess sperm motility from 2D tracers <cit.>. Here we present evaluations of the so-called CASA parameters of our numerical 3D sperm trajectories, namely curvilinear velocity (VCL), straight-line velocity (VSL), average-path velocity (VAP), linearity (LIN, equal to VSL/VCL), wobble (WOB, equal to VAP/VCL) and straightness (STR, equal to VSL/VAP) <cit.>. Figs. S6 and Fig. <ref> display the generalised 3D CASA parameters and the deviation between 2D projections of the 3D movements and the 3D results, respectively, as a function of waveform asymmetry and out-of-plane component of the beat. The effect of waveform asymmetry is nearly indistinguishable in Figs. S6 and Fig. <ref>. CASA parameters are only weakly affected by the waveform asymmetry B, and thus they are unable to detect asymmetry, as also expected from the persistently symmetric swimming paths presented previously in Fig. <ref>-<ref>, <ref>.
Although the 2D CASA parameters deviate only weakly from their 3D counterparts for most of the parameter space in Fig. <ref>, the relative deviation can be as high as 33.21%, 38.68%, 15.69%, 41.26%, 37.94% and 48.40%, respectively, for VCL, VSL, VAP, LIN, WOB and STR. The absolute difference between 2D and 3D for VCL can reach up to 28.71μm/s, a value that surpasses, for example, the threshold to distinguish slow from rapid progressive motility (25 μm/s), according to the WHO guidance <cit.>, and constitutes approximately 1/5 of the speed used for hyperactivated motility (150μm/s) <cit.>. As such, 2D CASA measurements of 3D sperm trajectories may introduce unknown inaccuracies on sperm motility assessment using CASA parameters.
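For reference, the kinematic definitions used above can be computed from a head-centre track with a few lines of code. The sketch below is a generic implementation of these standard definitions; the moving-average window used to build the average path is an arbitrary choice and not the smoothing rule of any particular CASA system.

```python
import numpy as np

def casa_parameters(track, dt, window=5):
    """CASA-style kinematic parameters from a head-centre track (works in 2D or 3D).

    track : (T, n_dim) array of positions sampled every dt seconds.
    window: frames in the moving average defining the 'average path' (arbitrary choice).
    """
    T = len(track)
    vcl = np.linalg.norm(np.diff(track, axis=0), axis=1).sum() / ((T - 1) * dt)   # curvilinear velocity
    vsl = np.linalg.norm(track[-1] - track[0]) / ((T - 1) * dt)                    # straight-line velocity

    kernel = np.ones(window) / window
    avg = np.column_stack([np.convolve(track[:, i], kernel, mode='valid')
                           for i in range(track.shape[1])])
    vap = np.linalg.norm(np.diff(avg, axis=0), axis=1).sum() / ((len(avg) - 1) * dt)

    return dict(VCL=vcl, VSL=vsl, VAP=vap, LIN=vsl / vcl, WOB=vap / vcl, STR=vsl / vap)
```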
§ DISCUSSION
We have conducted numerical simulations of sperm swimming hydrodynamics to elucidate the role of intrinsic waveform asymmetry on the resulting 3D swimming movements. Numerical simulations were verified against experimental swimming observations and measurements <cit.>, with waveform model parameters estimated directly from experiments <cit.>. Our study revealed a complex interplay between flagellar beat asymmetry and the sperm motions in 3D. Counterintuitively, we showed that the waveform asymmetry is not manifested in swimming path patterns, rather it impacts the three-dimensional head rotations in a complex manner. The swimming trajectories of both symmetrical and asymmetrical flagellar beatings are persistently symmetric. As such, 3D sperm trajectories alone cannot inform the symmetry state of beating patterns. Most interestingly, the waveform asymmetry information is `stored' in the head orientation dynamics. Indeed, the flagellar beat is a 3D helicoid that continuously cycles, rotating around a centre point, so that sperm head rotation (driven by this flagellar motion) depends directly on the level of asymmetry of the beating helicoid. Waveform asymmetry deviates the head orientation during motion (Movies S4-S5), particularly altering the relative angles between the head basis vectors and the head rotational axis (Fig. <ref> C-E, H-J and M-O, Fig. S3 C-E and H-J). We have showed that 3D head orientation alone is sufficient to scrutinize whether a given flagellar beat is symmetric or not. This may prove critical in future empirical studies, as 3D body orientation detection of microorganisms has been vastly overlooked in favour of trajectory detection in the literature.
The sperm flagellum apparatus possess an umbrella of intrinsic asymmetric components spanning from the molecular to micron level, including molecular motors, radial spokes and elastic linkers, microtubules, outer-dense-fibres, centrioles, basal components and ion channels <cit.>, to name a few. But how does sperm achieve forward swimming with such intrinsically asymmetric beating flagella? We have shown that forward swimming motion is not hindered by waveform asymmetry, due to the regularising role of the rotational motion arising from the moment balance. This allows generic flagellar apparatus to propel cells forwards regardless of any `imperfection' that may drive the flagellar beat in an asymmetric manner. The rotational motion of the sperm flagellum thus provides a foolproof mechanism for forward propulsion in nature, which could be potentially critical during the evolution of these cell appendages while achieving biological function - it would be an impossible task to grow a `perfect' flagellar apparatus with exactly symmetric molecular components at every level. Our results suggest that imperfections of the flagellar beat would not dramatically influence their ability to swim forwards for 3D beating patterns.
The sperm's capacity to persistently swim in a straight helical path despite the waveform asymmetry, however, does not hinder their ability to steer and navigate in 3D. Different levels of waveform asymmetry, modulated by flagellar rotation, leads to helical paths in different directions in space (Movie S3). This may allow sperm to use asymmetric waveform controls to navigate in 3D. This can be achieved by simply tuning the waveform asymmetry and the out-of-plane component, without risking to `trap' itself into circular swimming paths, as observed for asymmetric planar waveforms <cit.>.
Asymmetric modulation of the beat is also an important proxy used to inform sperm capacitation and hyperactivation <cit.>, physiological states that are critical for fertilization. Our results indicate that it would be a challenging task to identify sperm hyperactivity, using the sperm's path asymmetry as a proxy, even if recorded in 3D, given the persistent symmetric characteristics of sperm trajectories for asymmetric beating we observe.
In all, the waveform asymmetry amplifies the diversity of helical swimming paths (Fig. <ref> G and O, Fig. S1), and due to the sperm rotations, detrimental effects on the swimming speed can be circumvented (Fig. <ref>H, Fig. S2).
Comparison between simulations and experimental sperm trajectories showed that both symmetric and asymmetric waveform models were able to reproduce the observed trajectory patterns (Fig. <ref> D-F). As a result, symmetry of the waveform cannot be uniquely inferred from observations of swimming trajectories alone, and likewise, waveform model comparison with experiments, at sperm trajectory level, provides insufficient information to scrutinise model closeness to experiments. This indicates that head centre trajectories should be considered together with 3D head orientation dynamics for quantitative comparisons between free-swimming sperm experiments and model predictions <cit.>. This is particularly relevant, as comparisons between experiments and theory have been largely limited to the head trajectory level, thus calling for a reevaluation of the generally expected symmetry of both flagellar beat and swimming paths in 3D.
We have demonstrated that even 3D detection of the flagellum waveform taken at the laboratory frame of reference may provide insufficient information to scrutinise the presence of asymmetry in the beat (Fig. <ref>). In this case, detection of head translations, in conjunction with head rotations, may be required for empirical inference of beat asymmetries, whilst also allowing the reconstruction of the “true” flagellar beat relative to the body frame of reference—the flagellar movement as viewed from a fixed point of reference located at the sperm head, which translates and rotates with the sperm (Fig. <ref>G).
This is due to the fact that although the shape of the flagellum is the same in both laboratory and body frames of reference, they differ dramatically in location and orientation from each other. Mathematically, the unknown translations and rotations of the body frame for a prescribed waveform can be obtained by solving a well-posed momentum balance system of equations (section <ref>), known as the mobility problem in the low Reynolds number hydrodynamics <cit.>. The inverse problem, on the other hand, of finding body frame movement from the lab frame flagellar beat, is not well-posed, as the flagellum centreline does not carry orientation information of its local basis vectors in relation to the body frame <cit.>. Hence, 3D flagellar tracking without direct body orientation detection may not fully inform how the flagellum beats at the body frame. Indirect inference of the body orientation from observed flagellar path in 3D has been employed as an alternative <cit.>, though more research is needed to determine
whether this approach can confidently resolve the complex rotational movement of the sperm body frame in 3D. We hope that these results will motivate further advances on high-precision and direct measurements of microorganisms' body orientation in 3D <cit.>.
The sperm head rotational dynamics share very similar characteristics with the kinematics of spinning-tops (Fig. <ref>E), with the emergence of both precession and nutation movements. It is not the first time that a parallel between seemly unrelated systems was found with spinning-top dynamics. The celebrated Kirchhoff equations for the statics of elastic rods share an intimate relation with the governing dynamics of spinning rigid bodies <cit.>, despite the very different physics involved. This may also be the case for sperm swimming in 3D. Fig. <ref> compares the trajectory patterns of sperm head center and observed orbits for bottom-heavy spinning-tops. The similarity between the trace patterns is striking, and reveals that indeed 3D sperm swimming is qualitatively similar to spinning-tops, and that this parallel is not a mere analogy. This remarkable similarity is despite the very distinct physics that govern the hydrodynamics of rotating sperm at the micro-scale and the inertial dynamics of spinning tops at the macro-scale <cit.>.
Fig. <ref> suggests that a mapping between these systems may exist, in which waveform characteristics could instigate spinning-top-like effects on the resulting sperm dynamics, and that a potential mathematical equivalence between these seemly unrelated motion types is possible, including subsequent connection with static configurations of Kirchhoff rods <cit.>.
In the context of experimental and clinical studies focusing on the assessment of sperm motility and hyperactivation, we observe that 2D measurements of 3D movements may oversimplify the true sperm swimming motion. Specifically, Fig. <ref> and Fig. S6 show that the so-called CASA parameters cannot distinguish symmetric from asymmetric waveforms for spermatozoa swimming in 3D. Large errors may arise from tracking 2D projections of 3D sperm movements. This is particularly important as current CASA measurements are restricted to 2D visualizations of the sperm swimming trajectories due to the limitations of imaging techniques <cit.>, and thus carry subsequent challenges on the potential misclassification of sperm motility <cit.>.
Our numerical study on the role of waveform asymmetry on the resulting 3D sperm swimming has several limitations: we focused on only two generic types of empirically observed waveform asymmetries <cit.>, one-sided waveform shifts and static waveform curvatures, but other types of waveform asymmetry may exist, in both static and dynamic forms <cit.>, such as second-harmonics used to steer sperm in 2D <cit.>, and planar beating inclined to the plane of flattening of the sperm head <cit.>. Our work only considers presence of the fundamental beating mode—other harmonics are equally observed in experiments, although they are manifested with much lower amplitudes <cit.>. We only accounted for beating patterns observed in low viscosity fluids, and neglected any sperm interaction with nearby walls and boundaries, which are well known to instigate boundary accumulation of sperm <cit.>. The mathematical framework and analysis developed here, however, can be easily generalised in future studies focusing on these elements. We also considered prescribed waveform models and, as such, the flagellar shape does not emerge spontaneously from the collective behaviour of molecular motors <cit.>. Despite the number of simplifications invoked, our results shed new light on the fundamental importance of waveform asymmetry, the complex 3D rotational motion of spermatozoa, and the diversity of persistently symmetric swimming patterns, despite any intrinsic asymmetry that may exist on the beat. We hope this work will instigate future research on the role of asymmetry on cell motility and rotational motion of microorganisms, waveform tracking in 3D, sperm motility, and artificial swimmers.
§ MATERIALS AND METHODS
§.§ Numerical simulations of sperm swimming in 3D
We exploit a meshfree approach using the Regularized Stokeslet method (RSM) by Cortez-Fauci-Medovikov<cit.> to solve the non-local low Reynolds number hydrodynamics of sperm swimming. The RSM has been extensively studied and validated in the literature <cit.>, and we use the novel nearest-neighbor discretization method developed by Gallagher-Smith <cit.> for efficient computations of the non-local flow fields. Gallagher-Smith method offers model simplicity and versatility, and has been optimized and validated for free-swimming problems, more details can be found in <cit.>, including a didactic Matlab implementation of the method. By invoking total momentum balance, this framework provides the free-swimming motion of a spermatozoon, relative to the laboratory fixed frame of reference (lab frame), by prescribing the beating pattern of the flagellum relative to the body fixed frame of reference (body frame), i.e. the reference frame that translates and rotates with the sperm head (Fig. <ref>). The microscale flow velocity at a spatial point x, driven by a regularized force ϕ ^ϵ (x-X) ·f at the location X, can be represented as u= G ^ϵ (x, X) ·f, where
ϕ ^ϵ (x-X)=(15ϵ ^4)/[8π(r^2+ϵ^2)^7/2] is the cutoff function, and
G ^ϵ=[(r^2+2ϵ^2)I+rr]/(8 π r_ϵ^3) is the regularized Stokeslet, with r=x-X, r=|r|, ϵ is the regularization parameter, and r_ϵ=√(r^2+ϵ^2) <cit.>. We describe the laboratory frame coordinates of the sperm as x=x_0+R·ξ, where x_0 is the origin of the body frame, i.e. head centre, R=[ξ_1, ξ_2, ξ_3] is director basis capturing the orientation of the body frame, and ξ the body frame coordinates of the flagellum shape (Fig. <ref>). The sperm velocity in the lab frame can be expressed as the boundary integral over the body surface ∂ D,
U+Ω×(x-x_0)+R·ξ̇ = ∬_ X ∈∂ DG ^ϵ ( x, X) ·f ( X) d X,
where U and Ω are the unknown lab frame linear and angular velocities of the body frame, respectively, and the overdot of ξ denotes a time derivative of the body frame coordinates. The above equation embodies the non-local, force-velocity relationship and non-slip boundary condition, which is augmented by the total balance forces and torques on the sperm,
∬_ X ∈∂ Df ( X) d X =0
∬_ X ∈∂ D X ×f ( X) d X =0.
The above system of equations governs the so-called mobility problem, in which the unknown rigid-body motion results from imposed force and moment. In other words, the unknown traction f, and the translational U and rotational Ω velocities of the body frame can be obtained numerically from a prescribed waveform model relative to the body frame of reference (described below). This allows us to resolve the translating and rotating 3D kinematics of the swimming sperm at the lab frame, examples of which can be seen in Fig. <ref>. The system is treated as an initial-value problem and solved via the built-in function in , and the algorithm is implemented in dimensionless form, where the flagellum length is normalized to 1 and time is quantified in terms of beat cycles. Here, we consider sperm swimming in an infinite fluid and neglect boundary effects for simplicity, though this could be easily incorporated <cit.> in future studies. The human sperm head has a marginal impact on sperm motility due to its typically small size (when compared against the flagellum length) <cit.>, and thus the head geometry adopted here is simplified to a scalene ellipsoid with axes of length 0.044, 0.036, and 0.022 <cit.>. A finer quadrature discretization with 700 points was used for the sperm head (though this could be lower), and a coarser force discretization with 136 points was used for the flagellum, with the regularization parameter ϵ = 0.25/45 chosen to approximately represent the ratio of flagellar radius to length of human sperm [ibid]. Simulations were conducted on the High Performance Computing system of University of Bristol: BlueCrystal Phase 4.
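The kernel above translates directly into code; the following sketch (dimensionless, unit viscosity, consistent with the formulation used here, but an illustrative transcription rather than the production implementation) evaluates the regularized Stokeslet for a single source point:

```python
import numpy as np

def regularized_stokeslet(x, X, eps):
    """G^eps(x, X) = [(r^2 + 2 eps^2) I + r r] / (8 pi r_eps^3), with r = x - X,
    r = |r| and r_eps = sqrt(r^2 + eps^2). The velocity at x induced by a
    regularized point force f at X is then u = G^eps(x, X) @ f."""
    r = np.asarray(x, float) - np.asarray(X, float)
    r2 = r @ r
    r_eps = np.sqrt(r2 + eps ** 2)
    return ((r2 + 2 * eps ** 2) * np.eye(3) + np.outer(r, r)) / (8 * np.pi * r_eps ** 3)
```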
§.§.§ Symmetric and asymmetric flagellar waveform models
The asymmetry in flagellar beating patterns has been widely discussed in the literature <cit.>; for example, the effect of beat plane inclination to the plane of flattening of the head in sperm boundary accumulation was investigated in <cit.>, though restricted to planar beatings. Transitions in swimming behaviors relevant to asymmetry and rotation were also examined recently <cit.>; however, the modelling framework was limited to 2D beatings within a local hydrodynamic theory, the so-called resistive-force theory (RFT) <cit.>. Here we investigate the role of waveform asymmetry in 3D for freely-swimming sperm, solving the non-local hydrodynamics around sperm swimming. This allows us to reveal the intricate manifestation of asymmetry in the complex rotations and translations of sperm swimming in 3D. Below, we introduce the waveform models dictating the 3D beating patterns at the body frame coordinates ξ, as required in Eq. <ref>, and provided below in Eqs. <ref> and <ref>. Direct experimental observations of flagellar beating relative to the sperm head so far are limited to tethered sperm <cit.> or to 2D swimming cells <cit.>. As such, the 3D beating is approximated by an elliptical helicoidal waveform, as inferred from experiments <cit.> and which has a long history of use in the literature <cit.>. Here we consider two static sources of waveform asymmetry that have been observed experimentally <cit.>: a newly observed one-sided bias of the waveform relative to the orientation of the sperm head long axis <cit.>, instigated by the internal asymmetric structure of the basal body and centriole, and the asymmetric mean curvature of the flagellum over a beat cycle <cit.>. Mathematically, we consider these two forms of asymmetry as follows: (a) a waveform side-shift of an otherwise symmetric waving motion, captured by the parameter B, and (b) a static curvature bias, κ_0, that deforms the flagellum into a static curved shape on top of which a symmetric waving component is overlaid, see details below. The two waveform models are denoted as `xyz-model' and `κ-model' for simplicity, with their out-of-plane motion regulated by the parameters α and τ, respectively, also referred to as the waveform rotation amplitude.
The xyz-model employs the one-sided bias beating asymmetry (Fig. <ref> A-F). In this case, the waveform ξ-coordinates, as required in Eq. <ref>, are prescribed directly at the body frame of reference,
ξ_1 = A[cos(k ξ_3- t)+B]
ξ_2 =-α A sin(k ξ_3- t),
where A=0.2 ξ_3 is the modulating amplitude growing linearly with ξ_3 <cit.>, k is the wave number, taken to be k=π, 2π, 3π according to the estimations from the observed waving patterns <cit.>, B introduces the static one-sided shifting asymmetry, and α captures the out-of-plane motion of the beat, responsible for the rotation amplitude of the sperm flagellum in the body frame. If B=0, the waveform is perfectly symmetric (Fig. <ref> A and F), otherwise it yields an average flagellum shifted sideways relative to the head long axis ξ_3 (Fig. <ref> B-E). If α=0, the waveform is planar (Fig. <ref> A-B), whilst when α increases, the flagellum follows an elliptical path in the cross section, with perfect circular trajectories when α=1, see the projected point clouds in Fig. <ref> C-F. The sign of α dictates the chirality of flagellar beat, with a positive sign inducing a left-handed helicoid (Movie S8) and a negative sign inducing a right-handed helicoid (Movie S9). The rotational direction of the head spinning around its longitudinal axis ξ_3 is opposite to that of the tail due to the total momentum balance <cit.>, see Movies S8 and S9. Numerical simulations using k=2π are provided in the main text, and k=π, 3π are supplied in Supplementary Information.
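A minimal sketch of this prescribed waveform, useful for visualising the roles of B and α, is given below (the treatment of t as a phase in radians over one beat cycle and the number of sample points are illustrative assumptions):

```python
import numpy as np

def xyz_waveform(t, B=0.3, alpha=0.5, k=2 * np.pi, n_points=136):
    """Body-frame flagellar centreline of the xyz-model at beat phase t.

    xi_3 runs along the head long axis (flagellum length normalised to 1);
    xi_1 = A [cos(k xi_3 - t) + B],  xi_2 = -alpha A sin(k xi_3 - t),  A = 0.2 xi_3.
    B is the one-sided shift asymmetry, alpha the out-of-plane rotation amplitude.
    """
    xi3 = np.linspace(0.0, 1.0, n_points)
    A = 0.2 * xi3
    xi1 = A * (np.cos(k * xi3 - t) + B)
    xi2 = -alpha * A * np.sin(k * xi3 - t)
    return np.column_stack([xi1, xi2, xi3])
```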
The second type of waveform asymmetry due to a curvature bias is introduced via the κ-model (Fig. <ref> I-N), in which waveform curvature κ and torsion τ are prescribed instead,
κ = κ_0 + A_κcos(k s- t).
κ_0 represents the static curvature over one beat cycle, and A_κ is the amplitude. According to the experimental measurements <cit.>, we take κ_0 to range from 0 to 0.6, and A_κ is chosen to be 1.
If κ_0=0, the waveform is symmetric (Fig. <ref> I, L and M), while a non-zero κ_0 gives rise to a curved average shape of the flagellum (Fig. <ref> J, K and N). The out-of-plane component is controlled by τ, with τ=0 generating a planar waveform (Fig. <ref> I-J), and a larger τ producing a larger rotation amplitude of the flagellum at the body frame, with rounder waveform cross section with increasing τ (Fig. <ref> K-N). For comparison purpose, the wave number k for the κ-model was set to 2π.
With specified curvature and torsion, the flagellum waveform is obtained by integrating the local Frenet-Serret system of equations,
d ξ/ds= T,
d T/ds=κ N,
d N/ds=-κ T+τ B,
d B/ds=-τ N.
where d/ds is the derivative with respect to arclength, and T, N, B represent the tangent, normal, and binormal unit vectors of the local Frenet–Serret frame, from which the body frame coordinates ξ of the prescribed flagellum, used in Eq. <ref>, are obtained.
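The reconstruction of ξ from (κ, τ) can be sketched as below; the initial Frenet frame (here aligned with the body axes, with T along ξ_3 at the base) and the solver tolerances are illustrative choices rather than the settings used in the actual simulations:

```python
import numpy as np
from scipy.integrate import solve_ivp

def kappa_waveform(t, kappa0=0.3, A_kappa=1.0, tau=0.5, k=2 * np.pi, n_points=136):
    """Body-frame flagellum of the kappa-model at beat phase t, obtained by integrating
    the Frenet-Serret equations with kappa(s) = kappa0 + A_kappa cos(k s - t) and
    constant torsion tau along the (unit-length) arclength s."""
    def rhs(s, y):
        T, N, B = y[3:6], y[6:9], y[9:12]
        kappa = kappa0 + A_kappa * np.cos(k * s - t)
        return np.concatenate([T, kappa * N, -kappa * T + tau * B, -tau * N])

    y0 = np.concatenate([np.zeros(3), [0, 0, 1], [1, 0, 0], [0, 1, 0]])  # xi(0), T, N, B
    s = np.linspace(0.0, 1.0, n_points)
    sol = solve_ivp(rhs, (0.0, 1.0), y0, t_eval=s, rtol=1e-8, atol=1e-10)
    return sol.y[0:3].T  # centreline coordinates xi(s) in the body frame
```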
§ ACKNOWLEDGEMENTS.
The authors thank Professor Jonathan Rossiter for his inspiring and fruitful discussions. We acknowledge the computational facilities and team of the Advanced Computing Research Centre, University of Bristol: http://www.bristol.ac.uk/acrc/http://www.bristol.ac.uk/acrc/. Xiaomeng Ren acknowledges financial support of China Scholarship Council through Grant 202006830002.
|
http://arxiv.org/abs/2307.05003v1 | 20230711035204 | Programmable Integrated Photonics for Topological Hamiltonians | [
"Mehmet Berkay On",
"Farshid Ashtiani",
"David Sanchez-Jacome",
"Daniel Perez-Lopez",
"S. J. Ben Yoo",
"Andrea Blanco-Redondo"
] | physics.optics | [
"physics.optics"
] |
A variety of topological Hamiltonians have been demonstrated in photonic platforms, leading to fundamental discoveries and enhanced robustness in applications such as lasing, sensing, and quantum technologies. To date, each topological photonic platform implements a specific type of Hamiltonian with inexistent or limited reconfigurability. Here, we propose and demonstrate different topological models by using the same reprogrammable integrated photonics platform, consisting of a hexagonal mesh of silicon Mach-Zehnder interferometers with phase-shifters. We specifically demonstrate a one-dimensional Su-Schrieffer-Heeger Hamiltonian supporting a localized topological edge mode and a higher-order topological insulator based on a two-dimensional breathing Kagome Hamiltonian with three corner states. These results highlight a nearly universal platform for topological models that may fast-track research progress toward applications of topological photonics and other coupled systems.
§ INTRODUCTION
The field of topological photonics <cit.> has gained tremendous traction in the last 15 years thanks to its unraveling of novel fundamental phenomena in topological physics as well as its potential to deliver robustness against certain types of defects and disorder for integrated photonic devices <cit.> such as lasers <cit.> and quantum information platforms <cit.>. The origins of topological photonics stem from the discovery of topological insulators in condensed matter physics <cit.>, where materials that are insulating in their bulk can conduct electricity without dissipation on their edges. These concepts were translated into photonics platforms <cit.>, where topology refers to a quantized property that describes the global behavior of the wavefunctions in a dispersion band.
A key feature of topological photonics is the existence of modes that live on the edge of photonic materials with different topologies and that show resilience to certain types of disorder. These edge modes have been demonstrated in a variety of platforms, from one-dimensional (1D) arrays of waveguides <cit.> or resonators <cit.> with chiral symmetries, to two dimensional (2D) lattices of helical waveguides <cit.> and ring resonators with asymmetric couplings <cit.>, all the way through bianisotropic metamaterials <cit.> and quasicrystals <cit.>.
While the majority of topological photonics platforms presented to date have a static character, a number of reconfigurable topological photonic insulators have been experimentally realized in the last few years <cit.>, as well as analogous concepts in acoustics <cit.> and plasmonics <cit.>. However, the reconfigurability in these platforms is limited to rerouting the pathways followed by the guided waves or switching these pathways on and off, while the type of Hamiltonian implemented in a given physical platform is fixed.
In parallel, programmable integrated photonic platforms have enabled fast development of a wide range of circuit architectures through real-time reconfiguration of a general-purpose photonic circuit via software programming <cit.>. Such systems typically consist of a 2D mesh of silicon photonics Mach-Zehnder interferometers (MZIs) whose transfer matrix can be programmed by adjusting the embedded phase shifters. This enables the reconfiguration of light paths through the mesh and the implementation of linear optical operations by interfering signals from different paths <cit.>, showing a ground-breaking potential for communications, machine learning <cit.> and quantum information processing <cit.> among other applications.
Here, we propose and experimentally demonstrate that topological physics can be observed in programmable integrated photonics platforms. Importantly, virtually any topological model can be implemented in programmable integrated photonic platforms that allow for exquisite reconfigurable control of the hopping strength and hopping phase between elements, as well as of the real and imaginary part of the onsite energies.
To illustrate this, we use a commercial programmable platform (iPronics' Smartlight Processor) to show robust localization of edge modes in a dimer chain of resonators resembling the Su-Schrieffer-Heeger (SSH) model <cit.> and of higher-order topological modes (corner modes) in a 2D breathing Kagome lattice <cit.> of resonators. Reprogrammable silicon photonic meshes represent a nearly universal test-bed for topological photonics, including non-Hermitian topological photonics <cit.>, that could greatly accelerate fundamental discoveries as well as the development of applications.
§ INTEGRATED PROGRAMMABLE MESH
A schematic view of the programmable silicon photonics chip used in our experiments is shown in Fig. <ref> (a). It consists of a hexagonal mesh of programmable unit cells (PUCs), where each PUC is formed by a 2x2 Mach Zehnder interferometer (MZI) with a thermo-optic phase shifter in each arm, as depicted in Fig. <ref>(c) <cit.>. The two optical inputs enter a 50/50 multimode interference (MMI) coupler followed by two thermo-optic phase shifters to adjust the optical phase shift of each arm. Another 50/50 MMI coupler combines the two phase adjusted signals and provides the PUC outputs. By controlling the phases imparted on each arm θ_1 and θ_2 one can realize any 2x2 complex unitary transfer matrix
T(θ_1,θ_2)=e^jϕ[ cos(Δ) -sin(Δ); sin(Δ) cos(Δ) ]
with
ϕ=(θ_1+θ_2)/2
representing a common phase to the two output signals and
Δ=(θ_1-θ_2)/2
determining the power splitting ratio between signals. Therefore, by programming the phase settings of the mesh PUCs, the optical signal can be routed into desired paths and arbitrary photonic circuit configurations can be realized.
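As a minimal numerical illustration of the transfer matrix above, the sketch below evaluates it for a given pair of arm phases; the function name and the example phase settings are ours and purely illustrative.

```python
import numpy as np

def mzi_transfer_matrix(theta1, theta2):
    """2x2 transfer matrix of a programmable unit cell for arm phases theta1, theta2 [rad]."""
    phi = 0.5 * (theta1 + theta2)      # common phase of the two outputs
    delta = 0.5 * (theta1 - theta2)    # sets the power splitting ratio
    return np.exp(1j * phi) * np.array([[np.cos(delta), -np.sin(delta)],
                                        [np.sin(delta),  np.cos(delta)]])

# Example (illustrative settings): a 50/50 splitter (delta = pi/4) acting on
# light injected into the upper input port.
T = mzi_transfer_matrix(np.pi / 2, 0.0)
out = T @ np.array([1.0, 0.0])
print(np.abs(out) ** 2)   # -> [0.5, 0.5]
```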
To approach the realization of topological physics in programmable meshes we reconfigure the programmable cells of the mesh to create lattices of ring resonators with carefully engineered resonant frequencies and coupling rate between them. Note that, thanks to its interconnection profile, the hexagonal mesh allows for the programming of optical cavities and better resolution when compared to alternative lattice mesh designs <cit.>, and it is, therefore, better suited to implement topological Hamiltonians. The smallest possible ring resonator in this hexagonal mesh consists of six PUCs, as schematically depicted by the blue circumferences in Fig. <ref>(a). The PUCs shared between adjacent rings are programmed to determine the coupling rate (and if desirable the coupling phase) between the two rings. The power in each ring can be monitored by tapping a small amount of the power out of the ring to a monitoring photodiode, as depicted by the blue arrows exiting the lattice.
In particular, we have chosen to implement two different models to demonstrate the potential of programmable photonics to explore topological physics: a 1D SSH model and a 2D breathing Kagome lattice. Due to the size of the currently available hardware mesh, a rectangular arrangement of 72 PUCs shown in Fig.<ref>(a), only the 1D SSH model could be experimentally tested on the hardware. Nonetheless, we have implemented the 2D Kagome lattice in a realistic simulator <cit.> of the mesh, and we highlight that the size and shape of the lattice are well within the scalability scope of current technology.
§ 1D TOPOLOGICAL PHOTONICS IN THE PROGRAMMABLE MESH
We start by implementing the simplest topological model, the dimer chain, also referred to as the SSH model <cit.>, which relies on an alternate pattern of weak and strong coupling between sites and was demonstrated in optical experiments in 2009 in an optically-induced superlattice <cit.>. Since then, many optical implementations of the SSH have been proposed: from femtosecond laser written waveguides in glass <cit.> to silicon photonics waveguides <cit.>, all the way to microwave resonators <cit.> and others. All of these demonstrations have shown little to no reconfigurability. Here, we implement the SSH model in a programmable mesh by arranging the mesh into a bipartite lattice of seven ring resonators, as schematically depicted in Fig. <ref>(b). The experimental realization of this model on the silicon photonics programmable platform is marked by blue circumferences in Fig. <ref>(a).
The Hamiltonian describing this system of seven rings is given by
H=[k_w∑_n∈{1,3,5} a_n^† a_{n+1} + k_s∑_n∈{2,4,6} a_n^† a_{n+1}]+H.c.
where k_w and k_s are the strong and weak coupling strengths between sites – accurately controlled in this experiment by programming the common PUC between rings – and a_n^† and a_n are the creation and annihilation operators on site n.
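For illustration, a minimal sketch that builds this tight-binding matrix for the seven-ring chain and diagonalises it is given below. The coupling values are the nominal strong-dimerization values quoted later in the text, the on-site terms are set to zero (supermode frequencies measured relative to f_0), and this is not the solver used to produce the figures.

```python
import numpy as np

def ssh_chain(n_sites=7, k_w=0.4, k_s=3.3):
    """Tight-binding SSH Hamiltonian of the seven coupled rings (couplings in GHz).

    Bonds 1-2, 3-4, 5-6 carry k_w and bonds 2-3, 4-5, 6-7 carry k_s, matching the
    dimerisation pattern of the equation above; on-site terms are zero, i.e.
    supermode frequencies are measured relative to f_0.
    """
    H = np.zeros((n_sites, n_sites))
    for n in range(n_sites - 1):
        H[n, n + 1] = H[n + 1, n] = k_w if n % 2 == 0 else k_s
    return H

H = ssh_chain()
freqs, modes = np.linalg.eigh(H)            # supermode frequencies f - f_0 and profiles
edge = modes[:, np.argmin(np.abs(freqs))]   # the mid-gap state is the topological edge mode
print(np.round(freqs, 3))
print(np.round(edge ** 2, 3))               # weight on rings 1..7: peaked on ring 1,
                                            # essentially zero on the even rings
```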
The calculated eigenvalues of this lattice, embodied here by the resonant frequencies of the supermodes, are shown in Fig. <ref> (a) for three different combinations of k_w and k_s. The supermode frequencies are given as offsets from the resonant frequency of the individual ring resonators, f_0=193.396 THz. Note that slight differences in f_0 between rings can be compensated by adjusting the phase shifters in the mesh (see Supplemental Document Section 1.). This lattice is expected to have a bandgap with a topological edge mode localized in ring 1. Stronger dimerization patterns, in other words stronger contrast between k_w and k_s, are expected to lead to larger bandgaps and consequently to stronger and more robust localization of the edge mode. Thus, the reprogrammability of the lattice lends us full control over the band gap, the degree of localization and the robustness of the edge mode.
To experimentally prove this, we connect a continuous wave tunable laser to the input port of the mesh and monitor the power in the rings under different conditions. First, we tune the laser frequency to f_0 and monitor the power in each ring using photodiodes. The resulting measurements, shown in Fig. <ref> (b), exhibit the characteristic modal distribution of the SSH edge modes with a maximum at the edge site and full localization in one of the sublattices, i.e. virtually zero power in the even rings. The measurements also confirm that stronger dimerization patterns lead to stronger localization at the edge. Subsequently, we tuned the input laser frequency within ± 6 GHz around f_0 and summed up the power in all the even and odd rings, as shown in Fig. <ref> (c) and (d), respectively. By looking at the width of the dip around f_0 in Fig. <ref> (c) one can appreciate how the band gap grows with increasing dimerization strength. This is because the only supermode supported around f_0 is the topological edge mode, which is fully localized in the odd rings. Consequently, the peak exhibited around f_0 in the odd rings, as shown in Fig. <ref> (d), correlates strongly with the power in the topological edge mode and it becomes higher with stronger dimerization.
Next, we evaluate the robustness of the topological edge state by intentionally introducing perturbations on the coupling strengths. Specifically, twenty random variations are drawn independently from a normal Gaussian distribution around the nominal coupling strength for each pair of rings, ∼𝒩(0,σ^2), where σ is the standard deviation. Figure <ref> shows the power in each ring at f_0 for two dimerization patterns – a strong dimerization case with k_s=3.3 GHz, k_w=0.4 GHz in Figs. <ref> (a) and (b); and a weak dimerization case with k_s=3.3 GHz and k_w=1.1 GHz in Figs. <ref> (c) and (d). For each case, we consider two levels of disorder – low disorder with σ=0.15 GHz in Figs. <ref> (a) and (c) and high disorder with σ=0.3 GHz in Figs. <ref> (b) and (d)). The red dots represent the power in each ring in the absence of deliberately introduced disorder and the blue dots represent the power in the rings when each of the twenty random iterations of disorder is implemented.
We can now quantify the robustness of the topological mode by measuring the standard deviation of the power in the rings under disorder in the coupling. For instance, under low disorder (high disorder) the standard deviation of the power in ring 1 is σ^ring-1_power=8.2 nW (13.2 nW) in the strong dimerization case and 10.5 nW (20.4 nW) in the weak dimerization case. Since a strong signature of topological protection in the SSH model is the localization of light in one of the sublattices, we can also quantify the variation of the power in the even rings in the presence of disorder, which remains very close to zero in the strong dimerization case (σ^even-rings_power= 1.3 nW and 2.4 nW for low and high disorder respectively) and it becomes slightly larger in the weak dimerization case (σ^even-rings_power= 2.1 nW and 4.7 nW for low and high disorder respectively).
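A sketch of this disorder study at the level of the tight-binding model is shown below; it reports dimensionless eigenmode weights rather than the measured photodiode powers (which also depend on input power and losses), and the random seed and the number of draws are the only assumptions beyond the values quoted in the text.

```python
import numpy as np

def edge_mode_weights(couplings):
    """Weight of the mid-gap supermode on each ring for a given set of bond couplings."""
    n = len(couplings) + 1
    H = np.zeros((n, n))
    for i, k in enumerate(couplings):
        H[i, i + 1] = H[i + 1, i] = k
    vals, vecs = np.linalg.eigh(H)
    return vecs[:, np.argmin(np.abs(vals))] ** 2

rng = np.random.default_rng(0)                          # illustrative seed
k_s, k_w, sigma, n_draws = 3.3, 0.4, 0.15, 20           # GHz, values quoted in the text
nominal = np.array([k_w, k_s, k_w, k_s, k_w, k_s])
ensemble = np.array([edge_mode_weights(nominal + rng.normal(0.0, sigma, size=6))
                     for _ in range(n_draws)])
print("std of the weight on ring 1:", ensemble[:, 0].std())
print("std of the weight on the even rings:", ensemble[:, 1::2].std())
```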
As opposed to conventional topological photonic platforms in which a proper robustness study would require the fabrication and measurement of a large number of devices, this platform allows for accurate quantification of the robustness against disorder in the coupling on the same chip by just software reprogramming.
§ 2D TOPOLOGICAL PHOTONICS IN THE PROGRAMMABLE MESH
To illustrate the versatility of programmable integrated platforms in the context of topological photonics, we now implement higher-order topological insulator (HOTI) based on a breathing kagome lattice. The kagome lattice is a 2D model consisting of corner sharing triangles with opposite orientations. While the tight-binding model of the kagome lattice exhibits graphene-like Dirac bands, a band gap opens when the coupling strengths between the sites in different triangles alternate. This is known as the breathing kagome lattice which has been shown to host higher-order topological corner states in a variety of settings <cit.>, including photonics <cit.>. Here, we implement a fully reprogrammable breathing
Kagome lattice by reconfiguring the silicon photonics mesh into a 2D array of coupled ring resonators arranged in corner sharing triangles with the upward pointing triangles and the downward pointing triangles having different coupling strengths, as depicted in Fig.<ref> (b). The implementation of such 2D lattice requires 72 PUCs, exactly the number of PUCs available in the silicon photonics chip of our experiments, see <ref>. However, the rectangular shape of this specific chip prevents the implementation of the model in Fig. <ref> (b) directly on the hardware, and thus we have implemented this model on a realistic simulator of the mesh <cit.>. Note that the scalability required for this demonstration is perfectly within the possibilities of the current technology.
The tight-binding Hamiltonian describing the breathing kagome lattice is
H=k_w∑_⟨ n,m⟩∈▵ a_n^† a_m + k_s∑_⟨ n,m⟩∈▿ a_n^† a_m + H.c.
where ▵ and ▿ represent the sites in the upward- and downward-pointing triangles. The theoretical energy spectra for three different dimerization patterns are shown in Fig. <ref> (a). We observe three quasi-degenerate energies at f-f_0≈0 that correspond to the energies of the corner states. The power distribution of one of those eigenmodes is shown in Figs. <ref> (b) and (c) for the stronger and weaker dimerization cases, respectively. It is evident that stronger dimerization leads to stronger light localization at the corners of the lattice.
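To illustrate the corner-state physics, the sketch below constructs a small triangular breathing-kagome flake of coupled resonators and diagonalises it, showing three quasi-degenerate modes localized at the corners. The flake size and geometry are chosen for illustration and are not an exact replica of the mesh layout of the figure; the coupling values are the ones quoted for the photonic lattice.

```python
import numpy as np
from itertools import product

def breathing_kagome_flake(rows=3, k_intra=0.4, k_inter=3.3):
    """Tight-binding Hamiltonian of a triangular breathing-kagome flake (GHz units).

    Up-pointing unit triangles (intra-cell bonds, coupling k_intra) are placed on a
    triangular region with `rows` cells per edge; nearest-neighbour bonds between
    cells (forming the down-pointing triangles) carry k_inter.
    """
    a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])
    basis = [np.array([0.0, 0.0]), np.array([0.5, 0.0]),
             np.array([0.25, np.sqrt(3) / 4])]
    sites, cell_of = [], []
    for n1, n2 in product(range(rows), repeat=2):
        if n1 + n2 < rows:
            for b in basis:
                sites.append(n1 * a1 + n2 * a2 + b)
                cell_of.append((n1, n2))
    sites = np.array(sites)
    H = np.zeros((len(sites), len(sites)))
    for i in range(len(sites)):
        for j in range(i + 1, len(sites)):
            if np.isclose(np.linalg.norm(sites[i] - sites[j]), 0.5):   # nearest neighbours
                H[i, j] = H[j, i] = k_intra if cell_of[i] == cell_of[j] else k_inter
    return H, sites

# Illustrative flake, not the exact experimental geometry.
H, sites = breathing_kagome_flake()
vals, vecs = np.linalg.eigh(H)
corner_idx = np.argsort(np.abs(vals))[:3]     # three quasi-degenerate in-gap modes
print(np.round(vals[corner_idx], 3))          # energies close to f - f_0 = 0
weight = (vecs[:, corner_idx] ** 2).sum(axis=1)
print(sites[np.argsort(weight)[-3:]])         # most-populated sites are the three corners
```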
Next, we simulate the insertion of light in ring 1 and monitor the power in each ring at the edge of the lattice. In the 2D Kagome lattice, light propagates clockwise and counter-clockwise directions inside the resonator, unlike the 1D SSH implemented on hardware mesh. Therefore, each resonator requires two monitoring ports and external detectors, as shown in Fig. <ref> (a). First, we vary the input frequency of the laser within a range of ±2 GHz around f_0 and sum the monitored power in all the edge rings for each frequency, as shown in Fig. <ref> (a). In the case with stronger dimerization (blue line) in Fig.<ref> (a) we observe a well defined peak at f_0, which indicates that most of the input light populates the corner states and that these states are strongly degenerate. This is confirmed by the power distribution over the edge rings shown in <ref> (b), in which the power is strongly localized in the three corner rings. Note that we do not have access to monitoring the bulk rings (rings 7,8 and 11) for the current programmable mesh architecture. However, it is possible to implement monitoring inside the mesh by non-invasive, contactless integrated light probes <cit.>.
Another interesting physical effect occurring in HOTIs under certain conditions is that of light fractionalization between the higher-order states <cit.>. Given the frequency degeneracy of the three corner states, inputting light in one of the corners is equivalent to exciting an equal superposition of the three corner eigenstates. While we can observe some fractionalization of light to all three corners in Fig. <ref> (b), the power in the three corners is not exactly equal. We have verified that this is due to the path-related phase differences experienced by the light reaching the bottom left and bottom right corners because of the slightly asymmetric implementation of the lattice in the silicon photonics mesh. This can be remedied by implementing a symmetric mesh (see Supplemental Document Section 2.).
Subsequently, we focus our attention on the cases with moderate (yellow line) and weak (green line) dimerization in Fig.<ref> (a). As the dimerization becomes weaker, so does the degeneracy of the corner states around f-f_0≈0, and this translates into the two sub-peaks observed on the power spectrum monitored on the edge rings. This becomes more pronounced for the weakest dimerization case (green line). Therefore, as illustrated in Figs.<ref>(c-g), when the input light has a frequency of f_0 the localization on the corner rings is not as strong as at the frequency of the subpeaks (f_0-0.15GHz and f_0-0.275GHz for the moderate and weak dimerization cases, respectively). For a quantifiable comparison, the percentage of light in the corner rings relative to the unit input power increases from 1.4% to 1.9% (a 36% relative increase) when moving from f_0 to f_0-0.275GHz in the weakest dimerization case.
§ CONCLUSION
We have proposed and demonstrated that programmable integrated photonics can be used to implement different topological photonics models and to fully reconfigure the behavior of topological modes. In the same platform we have implemented a 1D SSH chain and a 2D HOTI and we have shown full control over the localization and robustness of the edge and corner modes.
The possibility of engineering, not only the coupling rate between sites, but also the phase of such couplings renders this platform readily available for the implementation of a wide variety of topological models, including magnetic-like Hamiltonians that have shown potential in lasers <cit.> and quantum optics functionality <cit.>. Moreover, the loss of each ring can also be individually and accurately controlled, opening a plethora of possibilities for non-Hermitian topological photonics investigations and devices<cit.>.
Another enticing future research avenue on this kind of programmable integrated platform is the exploration of lattices with explicitly broken time reversal symmetry (T) by time-harmonic modulation of the coupling strength between resonators <cit.>. A crucial requirement here is that the strength of the modulation must be larger than the decay rate, which translates into the need for fast modulation and low loss technologies. While the current hardware uses heaters to control the coupling and the loss is relatively high, it is within the scalability scope of this technology to introduce high-speed electro-optics phase-shifters and significantly reduce the loss of each cell. This would open the door to the study of a variety of truly non-reciprocal systems at optical frequencies with important fundamental and practical implications.
By showing that a general purpose programmable integrated photonics platform can be used to implement nearly any topological photonics model we hope to accelerate progress in the field, bypassing lengthy design and fabrication cycles and offering a fully reconfigurable platform in which the topological modes are easily tailored and the effects of disorder can be accurately quantified.
|
http://arxiv.org/abs/2307.07403v1 | 20230714152830 | Robust bounds on ALP dark matter from dwarf spheroidal galaxies in the optical MUSE-Faint survey | [
"Elisa Todarello",
"Marco Regis",
"Javier Reynoso-Cordova",
"Marco Taoso",
"Daniel Vaz",
"Jarle Brinchmann",
"Matthias Steinmetz",
"Sebastiaan L. Zoutendijk"
] | astro-ph.CO | [
"astro-ph.CO",
"hep-ph"
] |
=1
=1
fpheadera,b]Elisa Todarello,[email protected],b]Marco Regis,a,b]Javier Reynoso-Cordova,
b]Marco Taoso,c,d]Daniel Vaz,c,e]Jarle Brinchmann,f]Matthias Steinmetz,e]Sebastiaan L. Zoutendijk[a]Dipartimento di Fisica, Università di Torino, via P. Giuria 1, I–10125 Torino, Italy[b]Istituto Nazionale di Fisica Nucleare, Sezione di Torino, via P. Giuria 1, I–10125 Torino, Italy[c]Instituto de Astrofísica e Ciências do Espaço, Universidade do Porto, CAUP, Rua das Estrelas, PT4150-762 Porto, Portugal[d]Departamento de Física e Astronomia, Faculdade de Ciências, Universidade do Porto,
Rua do Campo Alegre 687, PT4169-007 Porto, Portugal[e]Leiden Observatory, Leiden University, P.O. Box 9513, 2300 RA Leiden, The Netherlands[f]Leibniz-Institut für Astrophysik Potsdam (AIP), An der Sternwarte 16, 14482 Potsdam, Germany
Nearby dwarf spheroidal galaxies are ideal targets in the search for indirect dark matter (DM) signals.
In this work, we analyze MUSE spectroscopic observations of a sample of five galaxies, composed of both classical and ultra-faint dwarf spheroidals. The goal is to search for radiative decays of axion-like particles (ALPs) in the mass range of 2.7-5.3 eV. After taking into account the uncertainties associated with the DM spatial distribution in the galaxies, we derive robust bounds on the effective ALP-two-photon coupling.
They lie well below the QCD axion band and are significantly more constraining than limits from other probes, in the relevant mass range.
We also test the possible presence of a positive signal, concluding that none of the channels selected for this analysis, i.e., not affected by large background contamination, is exhibiting such evidence.
Robust bounds on ALP dark matter from dwarf spheroidal galaxies
in the optical MUSE-Faint survey
August 12, 2023
==================================================================================================
§ INTRODUCTION
Axion-like particles (ALPs) are pseudo Nambu-Goldstone bosons that arise in extensions of the Standard Model and that can act as cold dark matter (DM) candidates <cit.>.
A particularly well-motivated example is the QCD axion, which is associated with the Peccei-Quinn symmetry solution to the strong CP problem <cit.>.
Generically, ALPs couple to photons through the operator ℒ=-(1/4) g_aγ a F_μνF̃_μν, where a is the ALP field, F_μν is the electromagnetic field strength, F̃_μν its dual, and g_aγ the coupling constant. This interaction term leads to a variety of possibilities to detect ALPs in laboratory experiments or with astrophysical and cosmological probes, see e.g. <cit.> for reviews.
In astrophysical environments, an almost monochromatic photon emission is produced by the radiative decay of ALP DM, and,
for ALP masses in the eV range, this photon line falls in the optical and near-infrared bands.
In this frequency range, several upper bounds on this signal have been derived from observations <cit.>. In particular, ref. <cit.> derived the currently most stringent constraints on ALP radiative decays for masses between 2.7 and 5.3 eV, improving previous bounds by more than an order of magnitude. Interestingly, in recent years, ALPs masses in the eV mass range have been invoked to explain excesses in the measured cosmic near-infrared background and its angular anisotropies <cit.>.
The upper limits of <cit.> severely challenge some of these scenarios <cit.>.
The analysis in <cit.> is based on spectroscopic observations of the Leo T dwarf spheroidal galaxy obtained with the Multi Unit Spectroscopic Explorer (MUSE) at the Very Large Telescope (VLT) <cit.>.
Dwarf spheroidal galaxies are ideal targets for searching for DM decay signals because they contain large DM densities and are relatively close to us. Moreover, measurements of the line-of-sight velocity of the stars in these objects allow us to infer the underlying DM distribution along with its uncertainty. This information is instrumental in order to reliably predicting the ALP signal, and deriving robust bounds on the ALP decay lifetime.
With the present work, we extend and improve the analysis in <cit.> in two ways: we enlarge the dataset exploiting recent MUSE observations of other dwarf galaxies, and we implement a more detailed treatment of the DM distribution and its uncertainty,
taking advantage of recent analyses.
More specifically, in addition to Leo T, we consider MUSE observations of the dwarf spheroidal galaxies Sculptor, Eridanus 2, Grus 1, and Hydra II.
Then, for Eridanus 2, Grus 1, Hydra II, and Leo T, we make use of the
recent determination of the DM content in these objects performed by the MUSE collaboration <cit.>.
Concretely, we consider two parametrizations for the DM density, namely the Navarro–Frenk–White (NFW) model <cit.> and a cored profile.
We account for the uncertainty on these DM distributions including the corresponding likelihoods derived in <cit.> in our statistical analysis; see sec:res for details.
For Sculptor, we follow the same procedure but we derive the DM distribution and the relevant likelihood by ourselves. This is accomplished by means of a Jeans analysis, using data from <cit.> and employing the same method of <cit.>.
For each target, we perform a search of ALP decay signals in the MUSE data, and then we combine the individual bounds in a global analysis.
We find upper limits on the ALP lifetime similar to, but slightly weaker than, those in <cit.>.
Finally, excluding channels severely contaminated by background, we do not find significant evidence for an ALP signal.
The structure of this paper is as follows. The data from MUSE observations are presented in sec:data. The calculation of the ALP decay signal is discussed in sec:axion. In sec:res, we discuss the statistical analysis and results. We conclude in sec:conc. The Jeans analysis for Sculptor is discussed in Appendix <ref>.
§ OBSERVATIONS AND DATA REDUCTION
As part of MUSE-Faint, a GTO survey of faint dwarf galaxies (PI Brinchmann), Leo T, Sculptor, Eridanus 2, Grus 1, and Hydra II[Sculptor is commonly considered to be a “classical" dwarf, while Leo T, Eridanus 2, Grus 1, and Hydra II are classified as “ultra-faint" dwarf galaxies.] were observed with MUSE, a large-field medium-resolution Integral Field Spectrograph installed on the VLT. We use multiple exposures of 900 s of each galaxy, with a total exposure time of 3.75 hours on Leo T (one field), 3 hours on Sculptor (one field), 21.5 hours on Eridanus 2 (five fields), 4 hours on Grus 1 (one field), and 14.75 hours on Hydra II (four fields).
The data were taken in the Wide Field Mode with adaptive optics (WFM-AO), which provides a 1 × 1 arcmin^2 field of view with a spatial sampling of 0.2 arcsec pixel^-1.
The data cover a wavelength range of 4700 - 9350 Å, sampled at a resolution of 1.25 Å. A blocking filter was used to remove the light from the sodium laser of the adaptive optics system to avoid contamination. This filter blocked light in the 5820-5970 Å (2.13 - 2.08 eV) range, which appears as a gap in the constraints presented in the following.
For data reduction, we refer the reader to Ref. <cit.> for details, while here we provide a brief summary. We performed the standard data reduction procedure using
the MUSE Data Reduction Software (DRS; version 2.8 <cit.>). Flux calibration was carried out using flux standards observed during the night, while atmospheric emission lines were removed by accounting for Raman scattering caused by the laser light of the adaptive optics system. We subtracted emission lines from the night sky that have well-known wavelengths and result in increased noise at those wavelengths. We measured a spatial resolution (full-width half maximum) of 0.61, 0.50, 0.53, 0.67, and 0.40 arcsec for Leo T, Sculptor, Eridanus 2, Grus 1, and Hydra II, respectively, at a wavelength of 7000 Å in the reduced datacubes.
To ensure an accurate analysis of the data cubes, it is crucial to have a reliable estimate of the noise. Previous studies (e.g., Ref <cit.>) have shown that the MUSE Data Reduction Software (DRS) underestimates uncertainties in the final data cube. Therefore, we proceeded as in Ref. <cit.> and re-estimated the pixel-to-pixel variance directly from each individual exposure data cube using the method described in Ref. <cit.>, creating mask images using SExtractor <cit.>.
We then combined all single-exposure data cubes using MPDAF <cit.>, to create the final data cubes that were used in the subsequent analysis.
The data contain numerous stellar sources within the field of view, both from the dwarf galaxy and also from some likely foreground stars from the Milky Way, as well as some galaxies.
To minimize the impact of these sources on the final results, we have identified and masked the brightest ones.
This was achieved by following the same approach as in Ref. <cit.> and involved two steps. First, we generated a white-light image by summing over the wavelength axis in each datacube. Next, we ran SExtractor on this white-light image, with a detection threshold of 3σ, resulting in a segmentation map that was used to mask sources. Therefore, we consider only pixels where no sources are detected in the white-light image.
In Fig. <ref> we show, for all the dwarf galaxies under consideration, the flux density per beam solid angle averaged over all the unmasked pixels.
§ ALP SIGNAL
We model the DM halo in a dwarf galaxy as a spherical system.
The flux density at wavelength λ produced by decays of ALPs from a given direction identified by θ can be computed as:
S_λ (θ)=Γ_a/(4π) · 1/(√(2π)σ_λ) exp[-(λ-λ_obs)^2/(2σ_λ^2)] ∫ dΩ dℓ ρ_a[r(θ,Ω,ℓ)] B(Ω) .
The decay rate Γ_a depends on the ALP mass m_a and the effective ALP-two-photon coupling g_aγ. In natural units, it reads
Γ_a=g_aγ^2 m_a^3/(64π).
The wavelength of emission can be computed as λ_em=c/ν_em with ν_em=m_a/(4π). In Eq. <ref>, we neglect the velocity dispersion of ALPs in the dwarf halo, since it is typically ≲ 10^-4 c, namely smaller than the spectral resolution of MUSE, σ_λ/λ which varies between 1.2× 10^-4 and 2.7× 10^-4. However, we cannot neglect the heliocentric radial velocity of dwarfs, except for Leo T, and we correct for the Doppler shift when deriving the results below. The observed wavelength λ_obs is then given by λ_obs = λ_em (1 + v_radial).
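As a quick numerical illustration, the sketch below evaluates the decay rate and the observed line position for a given (m_a, g_aγ); the function names, the coupling and mass values, and the radial velocity in the example are placeholders, not results of this work.

```python
import numpy as np

HBAR_EV_S = 6.582e-16      # hbar in eV s
C_M_S = 2.998e8            # speed of light in m/s
HC_EV_NM = 1239.84         # h*c in eV nm

def alp_decay_rate(m_a_eV, g_agamma_GeV):
    """ALP -> 2 photons decay rate, Gamma_a = g^2 m^3 / (64 pi), returned in s^-1."""
    gamma_eV = (g_agamma_GeV * 1e-9) ** 2 * m_a_eV ** 3 / (64.0 * np.pi)
    return gamma_eV / HBAR_EV_S

def observed_wavelength(m_a_eV, v_radial_km_s=0.0):
    """Each photon carries E = m_a/2; the line is Doppler-shifted by the heliocentric velocity."""
    lam_emit_nm = HC_EV_NM / (m_a_eV / 2.0)
    return lam_emit_nm * (1.0 + v_radial_km_s * 1e3 / C_M_S)

# Example with illustrative values: m_a = 4 eV, g = 1e-12 GeV^-1, target receding at 110 km/s.
m_a, g = 4.0, 1e-12
print("ALP lifetime [s]:", 1.0 / alp_decay_rate(m_a, g))
print("observed wavelength [nm]:", observed_wavelength(m_a, v_radial_km_s=110.0))
```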
We assume a Gaussian behavior for both the energy and the angular responses of the detector, with FWHM as a function of the wavelength taken from Ref. <cit.> and normalized to the value at 7000 Å mentioned in Sec. <ref>. The angular beam is denoted by B(Ω).
Under the assumption of spherical symmetry, the DM density ρ_a(r) is a function only of the radial distance r from the center of the dwarf, which can be expressed in terms of the coordinate along the line of sight ℓ and the angle of observation.
Our reference scenario for the description of the DM density profile is given by a “cuspy” distribution given by the NFW functional form <cit.>:
ρ_NFW(r)=ρ_s/[(r/r_s)( 1 + r/r_s)^2] ,
where ρ_s and r_s are respectively the scale density and radius.
We also consider a second parameterization, dubbed “coreNFWtides”, which modifies the cusp by allowing for a central core (e.g., due to o star-formation feedback) and includes a decrease in density beyond a tidal radius <cit.>. This parametrization is not completely independent from the NFW one, since it builds on it, but it has the advantage to allow an easy assessment of the effect a core and a tidal radius have on our bounds. The coreNFWtides profile is described in more detail in the Appendix <ref>.
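For illustration, the sketch below implements the NFW profile and the line-of-sight integral entering Eq. <ref> (without the beam convolution B(Ω), the spectral profile, and the Γ_a/(4π) prefactor). The halo parameters and the distance are placeholder values of roughly the right order of magnitude, not the fitted ones used in the analysis.

```python
import numpy as np
from scipy.integrate import quad

def rho_nfw(r_kpc, rho_s=1e8, r_s=1.0):
    """NFW density [Msun/kpc^3]; rho_s and r_s here are placeholders, not fit results."""
    x = r_kpc / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def los_column(theta_rad, d_kpc=400.0, rho=rho_nfw, l_half_kpc=50.0):
    """DM column density along a line of sight at angle theta from the dwarf centre.

    The observer sits at distance d from the centre; r(theta, l) follows from the
    geometry of the line-of-sight integral above.
    """
    def integrand(l):
        r = np.sqrt(d_kpc ** 2 + l ** 2 - 2.0 * d_kpc * l * np.cos(theta_rad))
        return rho(r)
    result, _ = quad(integrand, d_kpc - l_half_kpc, d_kpc + l_half_kpc,
                     points=[d_kpc], limit=200)
    return result                                   # Msun / kpc^2

arcsec = np.pi / (180.0 * 3600.0)
# Column density 5 arcsec vs. 30 arcsec away from the centre (illustrative geometry).
print(los_column(5 * arcsec), los_column(30 * arcsec))
```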
We constrain the DM profiles of the dwarf galaxies through a Jeans analysis of the velocity dispersion of their stars. For Eridanus 2, Grus 1, Hydra II, and Leo T, we used the likelihood derived in <cit.> (GravSphere method). In the case of Sculptor, we derived the likelihood ourselves, using the same approach as in <cit.>, but with data from <cit.>.
The results for the Sculptor case are reported in Appendix <ref>.
In the rest of our analysis, ρ_s and r_s are treated as free parameters of the model. For the coreNFWtides profile, the additional parameters describing the DM distribution, see Appendix <ref>, are fixed to their global best-fit values from the just mentioned analyses of dispersion velocities, in order to reduce the number of free parameters.
The other parameters that enter Eq. <ref> and that will be sampled in our scans are g_aγ and m_a. In total, there are four free parameters describing the expected flux from ALPs.
§ METHODS AND RESULTS
In our statistical analysis, we consider two types of data. On one side, we have the dispersion velocities of the stellar component in the dwarf galaxy, which allow us to infer the DM spatial distribution via Jeans analysis. From the likelihood defined in <cit.>, we derive a profile likelihood ℒ_Jeans^j depending only on ρ_s and r_s, where all the other “nuisance” parameters are profiled out (and, in the case of the coreNFWtides profile, the additional parameters are set to their global best-fit values).
The index j stands for the dwarf considered.
The second type of dataset we consider is the diffuse emission probed by MUSE observations in the direction of the dwarf galaxies.
As done in <cit.>, we compare the expected ALP signal with the observed data in each dwarf by means of a Gaussian likelihood (omitting the index j for simplicity):
ℒ_diff=e^(-χ^2/2) with χ^2=(1/N_pix^FWHM)∑_i=1^N_pix[(S_th^i-S_obs^i)/σ_rms^i]^2 ,
where S_th^i is the theoretical estimate for the flux density in the pixel i, S_obs^i is the observed flux density and σ_rms^i is the r.m.s. error, both described in sec:data.
The theoretical estimate is given by Eq. <ref> along with an additional spatially flat term S_λ,flat that we incorporate in the fit to each individual map at every wavelength to account for incomplete sky subtraction. We consider this flat term a nuisance parameter.
N_pix is the total number of pixels in the area under investigation,
which we chose to be a circle of 60” of radius. The number of pixels within the MUSE angular beam is N_pix^FWHM, which has a size given by the aforementioned FWHM.
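A minimal sketch of this statistic is given below. For a Gaussian likelihood the flat sky-subtraction term can be profiled out analytically (it reduces to a weighted mean of the residuals), which is how it is handled in the sketch; the pixel values in the example are random toy numbers and the actual pipeline may treat the nuisance term differently.

```python
import numpy as np

def chi2_profiled(S_th, S_obs, sigma_rms, n_pix_fwhm):
    """Chi-squared of the equation above with the flat term S_flat profiled out analytically."""
    w = 1.0 / sigma_rms ** 2
    resid = S_obs - S_th
    s_flat = np.sum(w * resid) / np.sum(w)          # best-fit flat offset
    chi2 = np.sum(w * (resid - s_flat) ** 2)
    return chi2 / n_pix_fwhm, s_flat

# Toy example with purely illustrative numbers.
rng = np.random.default_rng(1)
n_pix = 5000
sigma = np.full(n_pix, 2e-20)                       # per-pixel flux-density error
obs = rng.normal(3e-20, 2e-20, n_pix)               # data with an unmodelled flat offset
model = np.zeros(n_pix)                             # ALP signal prediction per pixel
chi2, s_flat = chi2_profiled(model, obs, sigma, n_pix_fwhm=30.0)
print(chi2, s_flat)
```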
We define a likelihood ℒ_diff which depends only on ALP parameters by profiling out S_λ,flat^j from the likelihood in Eq. <ref>. Then, assuming the two types of datasets to be independent, we can define, at any given mass m_a, a global likelihood for each dwarf j:
ℒ^j(g_aγ,ρ_s^j,r_s^j)=ℒ_diff^j(g_aγ,ρ_s^j,r_s^j)×ℒ^j_Jeans(ρ_s^j,r_s^j) ,
and a combined likelihood considering all five targets simultaneously:
ℒ^all(g_aγ,ρ⃗_s,r⃗_s)=∏_j=1^5 ℒ^j(g_aγ,ρ_s^j,r_s^j)
Finally, we assume that λ_c(g_aγ)=-2ln[ℒ(g_aγ,ρ⃗_s^ lbf,r⃗_s^ lbf)/ℒ(g_aγ^b.f.,ρ⃗_s^ gbf,r⃗_s^ gbf)] follows a χ^2-distribution with one d.o.f. and with one-sided probability given by P=∫_√(λ_c)^∞ dχ e^(-χ^2/2)/√(2π), where g_aγ^b.f. denotes the best-fit value for the coupling at a specific ALP mass. The superscript gbf indicates the global best-fit, i.e. for g_aγ=g_aγ^b.f., whilst lbf denotes the best-fit of ρ⃗_s and r⃗_s for that given g_aγ. For the analysis of a single dwarf j, one just has to replace (ρ⃗_s,r⃗_s) with (ρ_s^j,r_s^j) in the expression of the estimator λ_c.
The 95% C.L. upper limit on g_aγ at mass m_a is obtained from λ_c=2.71.
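Schematically, the limit-setting step amounts to scanning the coupling and locating where the profiled test statistic crosses 2.71 above the best fit; a minimal sketch with a toy likelihood curve (illustrative numbers only, not an actual result) is shown below.

```python
import numpy as np

def upper_limit_95(g_grid, lam_c):
    """95% C.L. upper limit on the coupling from a profile-likelihood curve.

    g_grid : increasing trial values of g_agamma; lam_c : lambda_c(g) on that grid.
    The limit is where lambda_c first reaches 2.71 above the best-fit coupling
    (one-sided chi^2 with one degree of freedom).
    """
    i_bf = np.argmin(lam_c)
    above = np.where((np.arange(len(g_grid)) > i_bf) & (lam_c >= 2.71))[0]
    if len(above) == 0:
        return None
    j = above[0]
    g1, g2, l1, l2 = g_grid[j - 1], g_grid[j], lam_c[j - 1], lam_c[j]
    return g1 + (2.71 - l1) * (g2 - g1) / (l2 - l1)   # linear interpolation

# Toy parabolic likelihood curve, illustrative only.
g = np.linspace(0.0, 5e-12, 501)
lam = ((g - 1e-12) / 8e-13) ** 2
print(upper_limit_95(g, lam))   # ~2.3e-12 GeV^-1 for this toy curve
```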
Results are shown in Figs. <ref> (NFW profile) and <ref> (coreNFWtides). We see that the different targets provide similar bounds, which also further motivates us to perform the combined analysis. The coupling g_aγ is constrained at a level around 10^-12 GeV^-1 with significant fluctuations between adjacent masses, due to the noise from the process of subtracting the foreground emission lines.
Such rapid variation is more pronounced at lower masses/longer wavelengths reflecting the presence of strong OH emission lines from the night sky in this wavelength range.
The bounds improve slightly from low to high masses, which is due to the scaling of the decay rate with m_a^3, mitigated by an opposite energy dependence of the observational capabilities (angular and energy resolutions, foreground).
By comparing Figs. <ref> and <ref>, we see that the reduction of constraining power for the coreNFWtides profile is very limited. This is because we are probing a relatively large portion of the targets, and so the majority of the pixels entering the statistical analysis are not from the central region, where the two profiles differ, but from distances where the two profiles basically coincide.
The robustness of our results against different masking and error estimates is tested in the same way as discussed in <cit.>. We find negligible differences in the derived bounds from the alternative analyses.
In the left panel of Fig. <ref>, we summarize our findings for the NFW profile and include the bound derived in Ref. <cit.> from the observation of clusters, in Ref. <cit.> from the ratio of horizontal branch (HB) to Asymptotic Giant Branch stars in globular clusters and, for reference, the preferred region for the QCD axion <cit.>.
In the wavelength/mass range covered by our analysis, we can confidently exclude the QCD axion, which is also in tension with other astrophysical and laboratory probes associated with couplings different from g_aγ, see e.g. <cit.>, and the possible interpretation of near-infrared
background anisotropies in terms of ALP dark matter <cit.>.
In the right panel of Fig. <ref>, we compare the results of our combined analysis to Ref. <cit.>, obtained from the MUSE data of Leo T.
The current analysis is typically more conservative, even though bounds are at a comparable level, and this is mainly due to the treatment of the DM profile. Indeed, in Ref. <cit.>, the profile was derived by extrapolating results from a Jeans analysis at larger radii, while here it is derived directly from data. We found that, in the case of Leo T, the extrapolation slightly overshoots the real DM profile. On top of that, the uncertainty associated with the profile determination is now taken into account in a more rigorous statistical way, as described above.
Concerning possible evidence of an ALP signal, we define, again at any given mass, λ_d=-2ln[ℒ(g_aγ=0,ρ⃗_s^ lbf,r⃗_s^ lbf)/ℒ(g_aγ^b.f.,ρ⃗_s^ gbf,r⃗_s^ gbf)].
The ALP discovery would occur if √(λ_d)>5.
Due to the imperfect subtraction of emission lines from the night sky, many channels present large values of √(λ_d). In order to identify possible emission peaks due to the presence of an ALP, we need to remove this spurious evidence from our data, i.e. determine which channels are “unreliable".[Note that, in those channels, the bounds described above are weakened and not strengthened by the presence of such residual emission.] As a first step, we look at the sky spectrum of Leo T, obtained from a data cube without sky subtraction, by summing the flux over an area with no bright sources.
The sky spectrum presents large peaks above a non-zero frequency-dependent baseline level.
We search for peaks with heights above a threshold that we choose to be five times the standard deviation of fluctuations about the baseline in a region without large emission lines. With this criterion, we identify 225 peaks, all of which correspond to known atmospheric emission lines, except two, which we will not consider as sources of fake evidence in the following. Next, in order to determine how many channels are affected by a bright emission line, we construct an estimate of the reliability of the errors used in our analysis.
For this purpose, we use the individual Leo T exposures. We reduce the data for 11 individual exposures, following the data reduction process described in Sec. <ref>. By assuming that the data are perfectly aligned across exposures, we have:
f_i(λ) = f_true(λ) + N(0, σ_i(λ)^2),
where f_i(λ) is the flux measured at a given wavelength λ, f_true(λ) is the true flux at a given wavelength λ, and N(0, σ_i(λ)^2) represents the normally distributed random noise at each wavelength, with variance σ_i(λ)^2. We use the spectra of 220 stars in the Leo T field of view, and, for each star, we compute f_true(λ) as the average of the 11 observations (therefore i runs over 220 stars and 11 exposures).
It is important to note that this modeling approach is only suitable for sources, as the sky background may vary. On the other hand, the fluxes from stars are relatively stable, and f_true(λ) should be nearly constant.
If the uncertainty estimates σ_i(λ) are reliable, then the difference between the observed flux and the true flux, normalized by σ_i(λ), should be distributed approximately as a Gaussian with zero mean and standard deviation Σ(λ) = 1.
Next, we superimpose the sky spectrum of Leo T with Σ(λ). We observe that around the frequencies corresponding to atmospheric emission lines, Σ deviates significantly from 1. To determine the characteristic range of wavelengths affected, we look at three well-known isolated oxygen emission lines
(λ_P = 5577.3,6300.3,6363.8 Å). We find that Σ(λ) - 1 > 0.05 in a range [λ_P-δλ,λ_P+δλ], with δλ = 5.7 σ_λ and λ_P being the wavelength of the peak and σ_λ being the spectral resolution as in Eq. (<ref>).
We thus conduct the evidence search excluding channels that fall into the ±δλ region around all background emission peaks.
With this procedure, approximately 2000 channels out of 3719 are excluded.
Finally, we correct for the Look Elsewhere Effect (LEE) by dividing the p-value by a factor N_trials equal to the total number of used channels divided by the number of channels falling within the spectral resolution. We compute the p-value from the test statistics λ_d defined above, combining all targets.
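The trials correction can be written as a Bonferroni-style rescaling, in which the local one-sided p-value is multiplied by N_trials (equivalently, the p-value threshold for a fixed global significance is divided by the same factor). The sketch below returns the corrected significance √(λ̃_d); the channel counts in the example are illustrative and not the ones of this analysis.

```python
import numpy as np
from scipy.stats import norm

def global_significance(lambda_d, n_channels_used, channels_per_resolution):
    """Convert the local test statistic lambda_d into a trials-corrected significance."""
    n_trials = n_channels_used / channels_per_resolution
    p_local = norm.sf(np.sqrt(lambda_d))        # one-sided local p-value
    p_global = min(1.0, n_trials * p_local)     # Bonferroni-style trials correction
    return norm.isf(p_global)                   # corrected significance sqrt(lambda_d~)

# Example: a 5.5 sigma local fluctuation, ~1700 useful channels, ~3 channels per
# spectral-resolution element (illustrative numbers only).
print(global_significance(5.5 ** 2, 1700, 3.0))
```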
We find no evidence for ALP DM in our data, i.e., no case with √(λ̃_d)>5, where λ̃_d has the same meaning of λ_d but corrected for the LEE.
§ CONCLUSIONS
Most ALP models predict a coupling between photons and ALPs.
This implies that we expect a monochromatic photon flux generated by ALP decays inside astrophysical structures.
Nearby dwarf spheroidal galaxies are ideal targets for this search since they are DM dominated and are relatively close to us.
Assuming ALPs to constitute all the DM in galaxy halos, we analyzed MUSE spectroscopic observations of five dwarf spheroidal galaxies to search for ALP radiative decays in the mass range 2.7-5.3 eV.
The excellent spectral resolution and sensitivity of the spectroscopic observations obtained with the MUSE instrument at the VLT allowed us to probe quite faint and diffuse monochromatic line emissions.
We tested the possible presence of an ALP DM signal, concluding that none of the channels selected for this analysis, i.e., not affected by large background contamination, is exhibiting a detection.
After taking into account the uncertainties associated with the DM spatial distribution in each dwarf galaxy, we derived robust bounds on the effective ALP-two-photon coupling. They lie well below the QCD axion band, and are significantly more constraining than limits from other probes, in the relevant mass range.
§ ACKNOWLEDGEMENTS
MT acknowledges support from the research grant ‘The Dark Universe: A Synergic Multimessenger Approach’ No. 2017X7X85K funded by MIUR.
MR, JR, MT and ET acknowledge support from the project “Theoretical Astroparticle Physics (TAsP)” funded by the INFN.
MR, JR and ET acknowledge support from `Departments of Excellence 2018-2022' grant awarded by the Italian Ministry of Education, University and Research (miur) L. 232/2016 and Research grant `From Darklight to Dark Matter: understanding the galaxy/matter connection to measure the Universe' No. 20179P3PKJ funded by miur.
JB and DV acknowledge support by Fundação para a Ciência e a Tecnologia (FCT) through the research grants UIDB/04434/2020 and UIDP/04434/2020 and through grant PTDC/FIS-AST/4862/2020. JB acknowledges work contract 2020.03379.CEECIND and DV acknowledges support from the Fundação para a Ciência e a Tecnologia (FCT) through the Fellowship 2022.13277.BD.
Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme IDs 0100.D-0807, 0101.D-0300, 0102.D-0372 and 0103.D-0705.
§ JEANS ANALYSIS OF SCULPTOR
The data we have used in our Jeans analysis for Sculptor are the line-of-sight velocities reported in <cit.> alongside the photometric measurements presented in <cit.> for surface density data. We model the stellar dynamics assuming a spherically symmetric and non-collisional Jeans equation <cit.>:
(1/ν(r)) ∂/∂ r( ν(r)σ_r^2) + 2β(r)σ_r^2/r = GM(<r)/r^2,
where ν(r) is the stellar density, σ_r the radial velocity dispersion, β(r) the velocity anisotropy and M(<r) is the total enclosed mass within a radius r from the center of the target. Furthermore, we model the radial stellar density profile as a sum of three Plummer spheres <cit.>:
ν(r) = ∑_j=1^3 3 M_j/(4 π a_j^3)( 1 + r^2/a_j^2)^-5/2,
M_j and a_j are parameters which can be constrained through the data.
The expression <ref> can be seen as a density expansion, analogous to a Gaussian decomposition. The velocity anisotropy β(r) is parametrized as:
β(r)=β_0 + (β_∞-β_0)/[1+( r_a/r )^η],
where β_0, β_∞, r_a and η are also free parameters describing respectively the inner and outer orbital anisotropy, the radius and the sharpness of the transition.
The mass distribution is the sum of the DM component and the stellar contribution, which is modeled as in Eq. <ref> but with a free parameter fixing the overall normalization (in practice this corresponds to a free mass-to-light ratio).
As explained in sec:axion, we consider two options for the DM distribution, namely the NFW profile in Eq. <ref>, and the coreNFWtides model of <cit.>, which modifies the NFW distribution allowing for the presence of a central core, and a reduced density beyond a tidal radius.
More specifically, the central core is implemented by modifying the NFW mass distribution as follows:
M_cNFW(<r)= M_NFW(<r) f^n,
with 0≤ n ≤ 1 and
f=tanh( r/r_c),
being r_c the size of the core.
The associated density profile is:
ρ_cNFW(r)=f^n ρ_NFW(r)+ n f^(n-1)(1-f^2)/(4π r^2 r_c) M_NFW(<r).
Furthermore, since the galaxies that we are describing experience strong gravitational interactions with the host galaxy, tidal stripping is expected in their external regions.
Such an effect can be modeled by further modifying the enclosed mass beyond a tidal radius r_t:
M_cNFWt(<r) =
M_cNFW(<r) if r < r_t,
M_cNFW(r_t) + 4πρ_cNFW(r_t) r_t^3/(3-δ)[ (r/r_t)^(3-δ) -1 ], if r > r_t,
which in terms of the density profile reads
ρ_cNFWt(r) =
ρ_cNFW(r) if r<r_t,
ρ_cNFW(r_t)( r/r_t)^-δ, if r>r_t,
and the external slope δ is taken to be δ≥3.
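A direct transcription of these expressions into code is given below; the function names and the parameter values in the example are placeholders and not the fit results of this work.

```python
import numpy as np

def rho_nfw(r, rho_s, r_s):
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def mass_nfw(r, rho_s, r_s):
    """Enclosed NFW mass, M(<r) = 4 pi rho_s r_s^3 [ln(1+x) - x/(1+x)]."""
    x = r / r_s
    return 4.0 * np.pi * rho_s * r_s ** 3 * (np.log(1.0 + x) - x / (1.0 + x))

def rho_corenfw(r, rho_s, r_s, r_c, n):
    """Cored NFW density of the equation above, with core size r_c and shape parameter n."""
    f = np.tanh(r / r_c)
    return (f ** n * rho_nfw(r, rho_s, r_s)
            + n * f ** (n - 1) * (1.0 - f ** 2) * mass_nfw(r, rho_s, r_s)
            / (4.0 * np.pi * r ** 2 * r_c))

def rho_corenfwtides(r, rho_s, r_s, r_c, n, r_t, delta):
    """coreNFWtides density: cored NFW inside r_t, power-law fall-off of slope delta beyond."""
    r = np.asarray(r, dtype=float)
    inside = rho_corenfw(np.minimum(r, r_t), rho_s, r_s, r_c, n)
    outside = rho_corenfw(r_t, rho_s, r_s, r_c, n) * (r / r_t) ** (-delta)
    return np.where(r < r_t, inside, outside)

# Illustrative parameter values only (not the fitted values of the paper).
r = np.logspace(-2, 1, 5)   # radius in kpc
print(rho_corenfwtides(r, rho_s=1e8, r_s=1.0, r_c=0.3, n=0.5, r_t=5.0, delta=4.0))
```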
Given all these ingredients, we can use the radial velocity dispersion obtained by solving equation <ref> to compute the line-of-sight velocity dispersion:
σ_L.O.S^2(R) = 2/Σ_* (R)∫_R^∞( 1 - βR^2/r^2) ν(r)σ_r^2 r/√(r^2 - R^2)dr,
being Σ_*(R) the projected stellar surface density, which can be expressed as:
Σ_*(R) = ∑_j=1^3 M_j/(π a_j^2)(1 + R^2/a_j^2)^-2.
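As an illustration of the projection step, the sketch below evaluates the line-of-sight dispersion integral numerically for given ν(r), β(r), and σ_r(r). The toy profiles used in the example (a Plummer-like stellar density, isotropic orbits, and a flat 9 km/s radial dispersion) are ours and are not obtained by solving the Jeans equation; with these choices the result should reduce to the input dispersion, which provides a simple self-check.

```python
import numpy as np
from scipy.integrate import quad

def sigma_los(R, nu, beta, sigma_r, surface_density, u_max=50.0):
    """Line-of-sight velocity dispersion from the projection integral above.

    nu, beta, sigma_r : callables for the 3-D stellar density, the anisotropy
    profile and the radial dispersion; surface_density gives Sigma_*(R).
    The substitution r = sqrt(R^2 + u^2) removes the endpoint singularity.
    """
    def integrand(u):
        r = np.sqrt(R ** 2 + u ** 2)
        return (1.0 - beta(r) * R ** 2 / r ** 2) * nu(r) * sigma_r(r) ** 2
    integral, _ = quad(integrand, 0.0, u_max)
    return np.sqrt(2.0 * integral / surface_density(R))

# Toy profiles (illustrative only): Plummer-like density of scale a, isotropic
# orbits, flat radial dispersion of 9 km/s; Sigma is the analytic projection of nu,
# so the printed value should be close to 9 km/s.
a = 0.3   # kpc
nu = lambda r: (1.0 + (r / a) ** 2) ** -2.5
Sigma = lambda R: (4.0 * a / 3.0) * (1.0 + (R / a) ** 2) ** -2
print(sigma_los(0.1, nu, lambda r: 0.0, lambda r: 9.0, Sigma))
```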
Finally, we have compared the model to the data by considering the Gaussian likelihood -2 lnℒ= χ^2_L.O.S+χ_Σ_*^2+χ^2_VSP_1+χ^2_VSP_2.
The first two terms correspond to the chi-squared for the line-of-sight velocity dispersion and surface density data, respectively.
The last two contributions are constructed from the fourth moments of the velocity distribution, known as the virial shape parameters VSP_1 and VSP_2, which can be computed using equations [20-23] from Ref. <cit.>.
See Refs. <cit.> for more details.
The model is based on a total of 13 (17) free parameters: 2 (6) for NFW (coreNFWtides), 4 for the velocity anisotropy, and 7 for the stellar component.
We have explored this parameter space through a Markov Chain Monte Carlo (MCMC) simulation.
The numerical analysis is performed through the emcee<cit.> sampler which is implemented by using public python code pyGravSphere<cit.>.
Prior to the MCMC, the stellar model is fitted to the surface density data. Then, in the MCMC the corresponding parameters a_j and M_j are allowed to vary in a 50% range around their previously determined surface density best-fit values. The dark matter profile <ref> was added to the already existent profiles in pyGravSphere using the same priors as in <cit.>.
We present our results for the case of an NFW profile in Fig. <ref>, where we show the posterior probability distributions of log_10(ρ_s/(M_⊙/kpc^3)) and log_10(r_s/kpc), alongside their 0.16, 0.5 and 0.84 percentile values.
Analogous results are shown in Fig. <ref> for the coreNFWtides distribution.
In this case, in order to follow the numerical implementation of the public code GravSphere <cit.>, we parametrize the NFW profile through the concentration c_Δ and the mass M_Δ, which are
related to ρ_s and r_s by the equations:
c_Δ=r_Δ/r_s , ρ_s = (Δρ_c c_Δ^3/3) / [ ln(1 + c_Δ) - c_Δ/(1+c_Δ) ],
where ρ_c is the critical density of the Universe and Δ=200, which in turn defines r_200 as the radius within which the mean DM density of the halo is 200 times the critical density of the Universe
r_200 = ( 3 M_200/(800 πρ_c))^(1/3).
Taking into account the additional parameters described in Eqs. <ref>-<ref>, the coreNFWtides profile is defined by six parameters: log_10(M_200/[M_⊙]), c_200, log_10(r_c/[kpc]), n, log_10(r_t/[kpc]) and δ.
We show their posterior distribution from our analysis in Fig.<ref>. Given the large number of parameters, this case shows significant degeneracy among the model parameters. On the other hand, the two most relevant ones, M_200 and c_200 are suitably well constrained.
|
http://arxiv.org/abs/2307.05694v1 | 20230709105511 | A Survey on Figure Classification Techniques in Scientific Documents | [
"Anurag Dhote",
"Mohammed Javed",
"David S Doermann"
] | cs.IR | [
"cs.IR",
"cs.CV",
"cs.LG"
] |
A Survey on Figure Classification Techniques in Scientific Documents
Dhote Anurag Radhesham^1, Mohammed Javed^1, David S Doermann^2
1Department of IT, Indian Institute of Information Technology, Allahabad, India
2Department of CSE, University at Buffalo, Buffalo, NY, USA
Email:{[email protected], [email protected], [email protected]}
August 12, 2023
===============================================================================================================================================================================================================================================================================================
Figures visually represent an essential piece of information and provide an effective means to communicate scientific facts. Recently there have been many efforts toward extracting data directly from figures, specifically from tables, diagrams, and plots, using different Artificial Intelligence and Machine Learning techniques. This is because extracting information from figures could lead to deeper insights into the concepts highlighted in the scientific documents. In this survey paper, we systematically categorize figures into five classes - tables, photos, diagrams, maps, and plots, and subsequently present a critical review of the existing methodologies and data sets that address the problem of figure classification. Finally, we identify the current research gaps and provide possible directions for further research on figure classification.
Figure Classification;
Deep Learning;
Scientific documents;
Figure Mining;
Document Segmentation;
§ INTRODUCTION
Classification of images finds tremendous applications in various fields such as the automotive industry, healthcare, agriculture, surveillance, and document analysis <cit.>. In scientific documents, graphical visualizations such as tables, photos, diagrams, maps, and plots convey specific facts more effectively than simple text. This factual information improves comprehension. Hence, extracting the underlying information represented by figures is an important task, generally referred to as figure mining. Applications of figure mining include enhancing figure design, summarizing the data represented by figures, detecting plagiarized documents, etc. The figure mining pipeline consists of (i) figure extraction from academic documents, (ii) classification of figures, and (iii) data extraction from each figure type. This paper aims to survey figure classification techniques and their related datasets comprehensively.
To address the problem of figure classification, it is crucial to detect and extract the figures from the respective documents using document segmentation techniques, as illustrated in Fig-<ref>. Generally, a document image may be segmented into text and non-text components. The non-text components are then further processed and classified into an appropriate category. Much research has been done on the textual processing of documents, but as far as figures are concerned, there are few state-of-the-art methods that classify scientific figures into their appropriate categories. Chart image classification has recently interested many research groups <cit.>. This paper aims to highlight the work on chart image classification and also to cover results on other figure types. The techniques used for classification can be divided into hand-crafted feature-based methods and deep learning-based methods.
The hand-crafted methods manually extract features using traditional feature extraction techniques, then classify the figures using machine learning models. On the other hand, deep learning techniques automatically learn features and classify the figures. Various approaches employed in these two categories are discussed in detail in the upcoming sections. This is followed by a discussion of several data sets reported in the related literature.
The rest of the paper is organized as follows. Section 2 provides information on the existing literature on the figure classification problem, and a summary of significant contributions is shown in Table<ref>. Section 3 includes a discussion of datasets used in recent works, and details of a few publicly available datasets are summarised in Table-<ref>. Section 4 provides pointers for future research work and many interesting problems that still need to be addressed in figure classification.
§ OVERVIEW OF FIGURE CLASSIFICATION PROBLEM
Figures are visualizations used in scientific literature to convey information and enhance comprehension. Figures often represent data that would otherwise be difficult to process if conveyed by the text. Figures are commonly categorized into well-known classes, such as tables, plots, diagrams, photos, equations, geometric shapes, maps, etc. Classes considered under the classification of figures can vary widely depending on the research field<cit.>. Giannakopoulos et al. <cit.> identify charts, diagrams, geometric shapes, maps, and photographs as the classes for the figure classification problem. Lee et al. <cit.> also considered the table a separate figure class in addition to plots, diagrams, photos, and equations. Table–<ref> summarizes the different figure types present in the existing literature.
It can be observed from the table that the figure categories like tables, plots, diagrams, and photos are popular figure types as compared to equations, geometric shapes, and maps. Considering the previous taxonomies, this paper's figures are divided into Tables, Photos, Diagrams, Plots, and maps. These five categories cover all the existing categories explored so far.
§.§ Table
A table is a structure with cells containing text and numeric data. Tables are very efficient at summarizing and comparing information across methods that address similar problems. Tables in literature are used for tasks such as comparing existing methods, summarizing data sets, highlighting observations, etc. Tables are hence recognized as an essential figure type in literature. Table detection and recognition problems have been extensively studied in previous years, and Hashmi et al.<cit.> summarize the existing work in their review. However, distinguishing tables from other figure types remains an open research problem. Only a few studies have included tables while classifying figures, using traditional and deep learning approaches. Lee et al.<cit.> use bags of visual words to classify tables among other figures such as Photos, Diagrams, Plots, etc.
Similarly, Jobin et al.<cit.>, Morris et al.<cit.>, Siegel et al.<cit.> use deep learning techniques to classify tables among other types of figures. Tables are rarely further divided into subcategories as the information conveyed through different table structures does not add to any further comprehension. So there are no subcategories under the class table.
§.§ Photo
A photo is generated when light falls on a photosensitive surface like a photographic sensor. Natural and medical images (diagnostic and radiological photos) are considered under the class photos. Depending on the scientific field, the presence of photos varies drastically. Photos are used in literature to provide deep insights on a specific topic, which are difficult to provide using text or other figure types. Jobin et al.<cit.> identified natural and medical images as figure categories in the DocFigure data set. They used a combination of FC-CNN and FV-CNN to classify these figure types. Medical images, commonly used in medical journals, papers, and articles, are further sub-categorized into diagnostic and nondiagnostic images in ImageCLEF2013 and 2016 datasets. Lagopoulos et al.<cit.>, Almakky et al.<cit.>, Andrearczyk and Muller<cit.> consider the ImageCLEF2016 data sets to perform the figure classification task.
§.§ Diagram
A diagram represents the relationship between various parts of a concept. Figures like flowcharts, Gantt charts, schematics, conceptual diagrams, and tree diagrams are considered under the class diagrams. Diagrams improve perception by visualizing the structure and flow of a concept. Therefore, they are ubiquitous the scientific literature. Classification of diagrams into their subcategories has yet to be addressed in the literature. However, the existing literature has discussed the problem of the classification of diagrams among other figure types. Jobin et al.<cit.> considered flow charts and tree diagrams as figure types in the classification of figures. Lee et al.<cit.> identify diagrams as a crucial figure type and address its classification among other figure types. The bag-of-visual-words-based method is used to classify diagrams from different figure types.
§.§ Map
A map is a symbolic representation of the characteristics of a place or distribution. The map includes subcategories such as Geographical maps, Scientific maps, TreeMaps, and other geographical representations. Maps are used to describe various features localized in a particular area. Using scientific maps could lead to new insights into existing communities, concepts, and demographics based on map type. Hence it is essential to include maps as a figure type. Many researchers do not consider maps when addressing figure classification tasks. Giannakopoulos et al.<cit.>, Jobin et al.<cit.>, Morris et al.<cit.> include several types of maps in the dataset. Jobin et al. have incorporated Treemaps and Geographical maps into the DocFigure dataset. At the same time, Morris et al. include only geographical maps. As far as the author knows, scientific maps are not included in the existing literature.
§.§ Plot
A plot is a visual technique representing the relationships between two or more variables. Plots are widely used in the scientific literature to convey results with more clarity. There are various subcategories of plots, such as scatter, bar, pie, line, area, etc. Plots have strong representative power and simple rules and have been used in multiple research fields; hence they are considered significant figure types. As plots can be divided into various subcategories, which are also widely used in scientific literature, they have been addressed in existing works more than the other figure types. The following subsections discuss a few traditional and deep learning approaches for chart image classification.
§ RELATED WORK
The approaches implemented in the present work can be divided into traditional and deep learning categories. The figure classification problem has been addressed more in the bio-medical field than in other areas. This could be because ImageCLEF<cit.>, a state-of-the-art data set for automated figure analysis in the biomedical literature, is available for that domain. A detailed discussion regarding various approaches used for figure classification is provided in the following sub-sections. In addition, chart classification techniques in particular are summarized in detail.
§.§.§ Traditional Approaches
Traditional approaches rely on feature extraction methods used in computer vision. Features are manually extracted from the figures and then represented in mathematical form for further processing. These mathematical representations act as input to the classifiers. Following this traditional approach, Savva et al.<cit.> present a system that automatically remodels visualizations to increase visual comprehension. The authors use low-level image features for classification and further improve the classification with text-level features. The performance is tested by training a multiclass SVM classifier on a corpus containing 2601 chart images labeled with ten categories. Following a similar manual feature-extraction path, Gao et al.<cit.> propose VIEW, a system that automatically extracts information from raster-format charts. The authors separate the textual and graphical components and classify the given chart image based on the graphic elements extracted from the visual components using SVM.
Their corpus is limited to three chart categories of bar charts, pie charts, and line graphs, with 100 images for each category collected from various real-world digital resources. Instead of taking an image as input, Karthikeyani and Nagarajan<cit.> present a system to recognize chart images from PDF documents using eleven texture features derived from the Gray Level Co-Occurrence Matrix. A chart image is located in the PDF document database, and the features are extracted and fed to the learning model. SVM, KNN, and MLP are the classifiers used to obtain the classification results. Cheng et al.<cit.> employ a multimodal approach that uses text and image features. These features are provided as input to an MLP, and the output is characterized as fuzzy sets to obtain the final result. Their corpus contains 1707 figures in three categories, on which a 96.1% classification accuracy is reported. ReVision pioneered chart image classification and served as the state-of-the-art baseline for subsequent methods.
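As a rough illustration of this classical pipeline (hand-crafted descriptors fed to a multiclass SVM), the following sketch extracts HOG features and trains a linear SVM; the class list, image size, and HOG parameters are placeholders and are not taken from any of the cited systems.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

CHART_CLASSES = ["bar", "pie", "line"]  # placeholder categories


def hog_descriptor(image, size=(128, 128)):
    """Resize a grayscale chart image and extract a HOG feature vector."""
    image = resize(image, size, anti_aliasing=True)
    return hog(image, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=True)


def train_chart_classifier(images, labels):
    """Fit a linear multiclass SVM on manually extracted HOG descriptors.
    `labels` are integer indices into CHART_CLASSES."""
    features = np.stack([hog_descriptor(img) for img in images])
    clf = LinearSVC(C=1.0)
    clf.fit(features, labels)
    return clf


def predict_chart_type(clf, image):
    """Return the predicted chart class name for a single grayscale image."""
    idx = clf.predict(hog_descriptor(image)[np.newaxis, :])[0]
    return CHART_CLASSES[int(idx)]
```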
§.§.§ Deep Learning Approaches
Liu et al.<cit.> used a combination of Convolutional Neural Networks(CNN) and Deep Belief Networks (DBN) to capture high-level information present in deep hidden layers; fully Connected Layers of Deep CNN are used to extract deep hidden features. DBN is then used to predict the image class on the mentioned deep hidden features. Authors use the transfer learning concept and then perform fine-tuning to prevent overfitting. The data set included more than 5,000 images of charts in the categories of pie charts, scatter charts, line charts, bar charts, and flow charts. Deep features are useful over primitive features to provide better stability and scalability to the proposed framework.
Given the results of CNN in the classification of natural images, Siegel et al.<cit.> use two CNN-based architectures for figure classification. They evaluate AlexNet and ResNet-50, which are pre-trained on the ImageNet data set and then fine-tuned for figure classification. This transfer learning approach would be prevalent in subsequent works addressing this problem. The proposed frameworks outperformed the state-of-the-art model, ReVision, by a significant margin. ResNet-50 achieved the best classification accuracy of 86% performed on a dataset containing over 60000 images spread across seven categories.
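A minimal sketch of this transfer-learning recipe is given below, assuming PyTorch/torchvision with a ResNet-50 backbone pretrained on ImageNet; the number of figure classes and the training hyper-parameters are illustrative and not taken from the cited work.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_FIGURE_CLASSES = 7  # placeholder: e.g., table, photo, diagram, map, plot, ...


def build_figure_classifier(num_classes=NUM_FIGURE_CLASSES):
    # Start from ImageNet weights and replace the classification head.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model


def fine_tune(model, loader, epochs=5, lr=1e-4, device="cuda"):
    """Fine-tune all layers on a labeled figure data set."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```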
Amara et al.<cit.> proposed a CNN-based LeNet model to classify their corpus of 3377 images into 11 categories. The model comprises eight layers: an output layer, one fully connected layer, five hidden layers, and an input layer. The fully connected layer is used as a classifier, while the hidden layers are convolution and pooling layers designed to extract features automatically. A fully connected layer employs softmax activation to classify images into predefined classes. For evaluation of the model's performance, an 80-20 split is performed on the data set for training and assessment. The proposed model performs better than the LeNet and pretrained LeNet architectures with an accuracy of 89.5%.
Jung et al. <cit.> present a classification method using the Caffe deep learning framework and evaluate its efficacy by comparing it with ReVision (a state-of-the-art chart-type classification system). The authors use GoogLeNet for classification and compare its results with shallower networks like LeNet-1 and AlexNet. Fivefold cross-validation is used to calculate the accuracy on an image corpus with 737 - 901 images for each chart type. The authors conclude that ChartSense provides a higher classification accuracy for all chart types than ReVision.
Almakky et al.<cit.> developed a stack-auto encoder model for figure classification. They work with the ImageCLEF 2016<cit.> data set for biomedical subfigures having 30 classes and 10942 images. The data imbalance related to biomedical images has led the authors to use the proposed model. Five autoencoders were trained separately to extract the features in an unsupervised manner. This model is further fine-tuned to retain cohesion using the same binary cross-entropy criterion used to train SDAE layers. An overall accuracy of 64.3% was achieved using the proposed method. Poor overall accuracy compared to other works under the ImageCLEF challenge is attributed to low training samples and the nature of the data set.
With studies adapting the deep learning approach for chart image classification, a comparative study of traditional vs. CNN architectures was required. Chagas et al.<cit.> provide a comparative analysis of conventional vs. CNN techniques. The authors evaluate CNN architectures (VGG19, Resnet-50, and Inception-V3) for chart image classification for ten classes of charts. The performance is compared with conventional machine learning approaches such as classifiers Naive Bayes; HOG features combined with KNN, Support Vector Machine, and Random Forest. Pre-trained CNN models with fine-tuned last convolutional layers were used. The authors concluded that the CNN models surpass traditional methods with an accuracy of 77.76%(Resnet-50) and 76.77%(Inception-V3) compared to 45.03%(HOG+SVM).
Limitation in the figure data set was a significant problem in chart mining as both size and categories limited existing datasets. So, Jobin et al.<cit.> presented DocFigure, a figure classification data set with 33,000 figures for 28 different categories. To classify figures, the author proposes techniques that utilize the deep feature, deep texture feature, and a combination of both. Among these baseline classification techniques, the authors observed that combining deep feature and deep texture feature classifies images more efficiently than individual feature techniques. The average classification accuracy improved by 3.94% and 2.10% by concatenating FC-CNN and FV-CNN over individual use of FC-CNN and FV-CNN, respectively. The overall accuracy of the combined feature methods turned out to be 92.90%.
Due to the need for benchmarks in the chart mining process, Davila et al.<cit.> summarized the works of different participants in the first edition of the competition on Harvesting Raw Tables from Infographics, which provided data and tools to the chart recognition community. Two data sets were provided for the classification task. One was a synthetically generated AdobeSynth dataset, and the other UB-PMC data set was gathered from the PubMedCentral open-access library. The highest accuracy achieved for the synthetic data set was 99.81%, whereas for the PMC data set it was 88.29%. In the second edition of the competition, as the PMC set was improved and included in the training phase, the accuracy of models over the PMC set improved significantly to 92.8%.
Luo et al.<cit.> proposed a unified method to handle various chart styles, where they show that generalization ability can be obtained by combining deep learning frameworks with rule-based methods. The experiments were carried out on three datasets with more than 300,000 images covering three categories of graphs. In addition to the framework, an evaluation metric for bar, line, and pie charts is also introduced. The authors concluded that the proposed framework performs better than both purely rule-based and purely deep learning methods. Amara et al.<cit.> propose a deep learning-based framework that automates the feature extraction step, an improved version of the LeNet convolutional neural network architecture. Over 90,000 images of charts from 11 different categories were chosen for the experiments, and the proposed framework performs significantly better than model-based approaches.
§ DATASETS
Few datasets contain all the figure types discussed before. DocFigure<cit.> is one data set that includes tables, flowcharts, and other plots in a combined data set of 33,000 images. Morris et al.<cit.> propose SlideImages, which includes 9 different classes with 3,629 images of various figures. Given the popularity of table recognition problems, data sets dedicated to images of tables have been developed over the past decade. Current works employ augmentation methods to cope with the problem of small data sets<cit.>.
There has been a significant improvement in data set size for chart image classification. The ReVision<cit.> dataset, which later studies used for comparison, had only 2,601 images, whereas the data sets proposed in recent years have more than 20,000 images. However, the data sets used for classification purposes mainly contain synthetic images. All data sets include the actual chart image in JPG, PNG, or JPEG format and the corresponding annotations in JSON and XML format. These studies ignore 3D charts, hand sketches, and composite figures. Existing data sets also lack authentic figure images extracted from real documents, which do not follow the fixed constraints prevalent in the synthetic training samples. Table-<ref> below shows the types of figures and their corresponding sample sizes. The data sets mentioned in the table are publicly available and were considered in the works of literature mentioned above.
§ FUTURE DIRECTIONS
Although there has been a significant increase in published articles on this classification problem, severe problems still need to be addressed.
§.§ Lack of Benchmark Data set
The chart image classification problem has been extensively addressed in previous work. However, the high-level classification of charts against other types of figures still lacks a state-of-the-art benchmark. The ImageCLEF dataset includes a variety of figure types but is restricted to images in the medical domain. In addition, DocFigure and SlideImages cover several different figure types. Still, there is a lack of state-of-the-art data sets to address the figure classification problem. Hence, there is a need for a dataset that includes a significant number of images and figure categories that would cover as many different figure types as possible.
§.§ Lack of Robust Model
Recent work makes some hard assumptions while addressing this problem. Most existing data sets contain a small number of real-figure images extracted from documents. This leads non-robust systems to fail when image samples contain intra-class dissimilarity or inter-class similarity. Including authentic figure images in the training phase could improve model performance.
§.§ Inclusion of Noise
Most of the work in the existing literature ignores the effect of noise. The presence of different types of noise, such as background grids, low image quality, composite charts, and the presence of multiple components along with figures, leads to poor performance for models that perform exceptionally on noiseless data<cit.>.
So, there is a need for a robust deep-learning model to cover all the shortcomings mentioned above.
§ CONCLUSION
Figure classification is challenging due to the variety of figures present, the similarity between different figure types, and the noise in the figure images. Techniques used for figure classification have evolved remarkably. Earlier methods focused on manual feature extraction and providing the feature vectors to the different classifiers. Recent approaches, however, use more specific features corresponding to specific figure types more efficiently using deep learning models. Though the performance of these techniques is good, they are not robust enough to handle noisy and real figure image data. In this survey, various methods used for figure classification were discussed, along with the publicly available data sets. Also, some pointers are provided for the shortcomings in the current works.
|
http://arxiv.org/abs/2307.06215v1 | 20230712150436 | Entanglement from rotating black holes in thermal baths | [
"Ivan Agullo",
"Anthony J. Brady",
"Adrià Delhom",
"Dimitrios Kranas"
] | gr-qc | [
"gr-qc",
"hep-th",
"quant-ph"
] | |
http://arxiv.org/abs/2307.04460v1 | 20230710101312 | Exploiting an External Microphone for Binaural RTF-Vector-Based Direction of Arrival Estimation for Multiple Speakers | [
"Daniel Fejgin",
"Simon Doclo"
] | eess.AS | [
"eess.AS",
"cs.SD",
"eess.SP"
] |
Exploiting an External Microphone for Binaural RTF-Vector-Based Direction of Arrival Estimation for Multiple Speakers
Daniel Fejgin, Simon Doclo
==================================================================================================================================
This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy - EXC 2177/1 - Project ID 390895286 and Project ID 352015383 - SFB 1330 B2. In hearing aid applications, an important objective is to accurately estimate the direction of arrival (DOA) of multiple speakers in noisy and reverberant environments. Recently, we proposed a binaural DOA estimation method, where the DOAs of the speakers are estimated by selecting the directions for which the so-called Hermitian angle spectrum between the estimated relative transfer function (RTF) vector and a database of prototype anechoic RTF vectors is maximized. The RTF vector is estimated using the covariance whitening (CW) method, which requires a computationally complex generalized eigenvalue decomposition. The spatial spectrum is obtained by only considering frequencies where it is likely that one speaker dominates over the other speakers, noise and reverberation. In this contribution, we exploit the availability of an external microphone that is spatially separated from the hearing aid microphones and consider a low-complexity RTF vector estimation method that assumes a low spatial coherence between the undesired components in the external microphone and the hearing aid microphones. Using recordings of two speakers and diffuse-like babble noise in acoustic environments with mild reverberation and low signal-to-noise ratio, simulation results show that the proposed method yields a comparable DOA estimation performance as the CW method at a lower computational complexity.
§ INTRODUCTION
In speech communication applications such as hearing aids, methods for estimating the direction of arrival (DOA) of multiple speakers are often required. To solve this estimation task, (deep) learning-based and model-based methods are continuously developed and advanced <cit.>. However, only a few methods exploit the availability of external mobile devices equipped with microphones <cit.>, although wirelessly linking hearing aids to these devices has become increasingly popular <cit.>.
Recently, we proposed relative-transfer-function (RTF) vector-based DOA estimation methods for a single speaker in <cit.>, without relying on the external microphone to be close to the target speaker and capturing only little noise or reverberation as in <cit.>. We estimated the DOA as the direction that maximized the similarity between the estimated RTF vector and a database of prototype anechoic RTF vectors for different directions in terms of a frequency-averaged distance function.
However, the methods in <cit.> considered only a single speaker. To address DOA estimation for multiple speakers, we introduced the so-called frequency-averaged Hermitian angle spectrum from which the DOAs were estimated as the directions corresponding to the peaks of this spatial spectrum (throughout the paper, we refer to a direction-dependent similarity score as a spatial spectrum) <cit.>. Opposed to <cit.>, the spatial spectrum was constructed from time-frequency (TF) bins where one speaker was assumed to be dominant over all other speakers, noise, and reverberation, solely.
Estimation of the RTF vector of a speaker from noisy microphone signals can be accomplished using, e.g., the state-of-the-art covariance whitening (CW) method <cit.> or the spatial coherence (SC) method <cit.>. Despite the effectiveness of the CW method and the possibility to apply the method using only the head-mounted microphone signals or all available signals, such a computationally expensive method (due to the inherent generalized eigenvalue decomposition) is less desirable than methods with a lower computation complexity for resource-constrained applications like hearing aids. Opposed to the CW method, the SC method requires an external microphone but does not perform expensive matrix decompositions. The SC method relies on the assumption of a low spatial coherence between the undesired component in one of the microphone signals and the undesired components in the remaining microphone signals. As shown in <cit.>, this assumption holds quite well, for example, when the distance between the external microphone and the head-mounted microphones is large enough and the undesired component is spatially diffuse-like.
In this paper, we propose to construct the frequency-averaged Hermitian angle spectrum for DOA estimation for multiple speakers using the computationally inexpensive SC method. We compare the DOA estimation accuracy when estimating the RTF vector using the SC method or the CW method in a reverberant acoustic scenario with diffuse-like babble noise. Experimental results show for multiple positions of the external microphone that estimating the RTF vector with the SC method yields a DOA estimation accuracy that is comparable to the CW method at a lower computational complexity.
§ SIGNAL MODEL AND NOTATION
We consider a binaural hearing aid setup with M microphones, i.e., M/2 microphones on each hearing aid, and one external microphone that is spatially separated from the head-mounted microphones and can be located at an arbitrary position, i.e., M +1 microphones in total. We consider an acoustic scenario with J simultaneously active speakers with DOAs θ_1:J (in the azimuthal plane) in a noisy and reverberant environment, where J is assumed to be known. In the short-time Fourier transform (STFT) domain, the m-th microphone signal can be written as
Y_m(k,l) = ∑_j=1^JX_m,j(k,l) + N_m(k,l) ,
where m ∈{1,…,M+1} denotes the microphone index, k∈{1,…,K} and l∈{1,…,L} denote the frequency bin index and the frame index, respectively, and X_m,j(k,l) and N_m(k,l) denote the j-th speech component and the noise component in the m-th microphone signal, respectively. For conciseness, we will omit the frequency bin index k and the frame index l in the remainder of this paper wherever possible. Assuming sparsity in the STFT domain and one dominant speaker (indexed by j=d) per TF bin <cit.>, and stacking all microphone signals in an (M+1)-dimensional vector 𝐲=[Y_1,…, Y_M+1]^T, where (·)^T denotes transposition, the vector 𝐲 is given by
𝐲 = ∑_j=1^J𝐱_j + 𝐧≈𝐱_d + 𝐧 ,
with 𝐱_j, 𝐱_d, and 𝐧 defined similarly as 𝐲.
Choosing the first microphone as the reference microphone (without loss of generality) and assuming that the speech component 𝐱_d of the (dominant) speaker can be decomposed into a direct-path component 𝐱_d^ DP and a reverberant component 𝐱_d^ R, 𝐱_d can be written as
𝐱_d = 𝐱_d^ DP + 𝐱_d^ R = 𝐠_d X_1,d^ DP + 𝐱_d^ R ,
where
𝐠_d = [1, G_2,…, G_M+1]^T
denotes the extended (M+1)-dimensional direct-path RTF vector and X_1,d^ DP denotes the direct-path speech component of the dominant speaker in the reference microphone. The M-dimensional head-mounted direct-path RTF vector corresponding to the head-mounted microphone signals can be extracted from as
𝐠_H_d = 𝐄𝐠_d , 𝐄 = [𝐈_M× M,0_M] ,
where 𝐄 denotes the (M× M+1)-dimensional selection matrix for the head-mounted microphone signals with 𝐈_M× M denoting an (M× M)-dimensional identity matrix and 0_M denoting an M-dimensional vector of zeros. Both RTF vectors 𝐠_d and 𝐠_H_d encode the DOA of the dominant speaker. However, the extended RTF vector 𝐠_d depends on the (unknown) position of the external microphone, whereas the head-mounted RTF vector 𝐠_H_d with fixed relative positions of the head-mounted microphones (ignoring small movements of the hearing aids due to head movements) does not depend on the position of the external microphone. Hence, for DOA estimation, we will only consider the head-mounted RTF vector 𝐠_H_d.
The noise and reverberation components are condensed into the undesired component 𝐮 = 𝐱_d^ R + 𝐧 such that 𝐲≈𝐠_d X_1,d^ DP + 𝐮.
Assuming uncorrelated direct-path speech and undesired components, the covariance matrix of the noisy microphone signals can be written as
Φ_𝐲 = ℰ{𝐲𝐲^H} = Φ_𝐱_d + Φ_𝐮 ,
with
Φ_𝐱_d = φ_d 𝐠_d𝐠_d^H , Φ_𝐮 = ℰ{𝐮𝐮^H} ,
where (·)^H and ℰ{·} denote the complex transposition and expectation operator, respectively. Φ_𝐱_d and Φ_𝐮 denote the covariance matrices of the direct-path dominant speech component and undesired component, respectively, and φ_d=ℰ{| X_1,d^ DP|^2} denotes the power spectral density of the direct-path dominant speech component in the reference microphone.
§ RTF-VECTOR-BASED DOA ESTIMATION
In this section, we review the RTF-vector-based DOA estimation method proposed in <cit.> that is based on finding the directions corresponding to the peaks of the spatial spectrum called frequency-averaged Hermitian angle spectrum.
To estimate the DOAs θ_1:J of the speakers from the estimated head-mounted RTF vector 𝐠̂_H_d(k,l) [as previously stated, we only consider the estimated head-mounted RTF vector for DOA estimation and not the extended RTF vector that depends both on the speaker DOA and the (unknown) position of the external microphone], the estimated head-mounted RTF vector 𝐠̂_H_d(k,l) is compared to a database of prototype anechoic RTF vectors 𝐠̅_H(θ_i) for several directions θ_i , i=1,…, I using the Hermitian angle <cit.> as a measure of dissimilarity, i.e.,
p(k,l,θ_i) = h(𝐠̂_H_d(k,l),𝐠̅_H(θ_i)) ,
h(𝐠̂,𝐠̅) = arccos(|𝐠̅^H𝐠̂|/‖𝐠̅‖_2 ‖𝐠̂‖_2) .
These prototype anechoic head-mounted RTF vectors can be obtained, e.g., via measurements using the same microphone array configuration as used during the actual source localization or using spherical diffraction models <cit.>.
Accounting for the disjoint activity of the speakers in the STFT domain and aiming at including only TF bins where the estimated head-mounted RTF vector 𝐠̂_H_d(k,l) is a good estimate for the direct-path RTF vector 𝐠_H_d in (<ref>) (of one of the speakers), the narrowband spatial spectrum (<ref>) is integrated over a set 𝒦(l) of selected frequency bins, where it is likely that one speaker dominates over all other speakers, noise, and reverberation <cit.>, i.e.,
P(l,θ_i)=-∑_k∈𝒦(l)p(k,l,θ_i) .
Based on the usage of the Hermitian angle for the construction of (<ref>), the spatial spectrum in (<ref>) is called the frequency-averaged Hermitian angle spectrum. The DOAs θ_1:J(l) are estimated by selecting the directions corresponding to the J peaks of this spatial spectrum (assuming J to be known).
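As an illustration (not part of the original work), the following numpy sketch constructs the frequency-averaged Hermitian angle spectrum from a set of selected-bin RTF estimates and a prototype database, and picks the J largest local maxima; the variable names, the simple peak-picking rule, and the regularization constant are assumptions.

```python
import numpy as np


def hermitian_angle(g_hat, g_proto):
    """Hermitian angle h(g_hat, g_proto) between two complex RTF vectors."""
    num = np.abs(np.vdot(g_proto, g_hat))                      # |g_proto^H g_hat|
    den = np.linalg.norm(g_proto) * np.linalg.norm(g_hat) + 1e-12
    return np.arccos(np.clip(num / den, 0.0, 1.0))


def estimate_doas(g_hat_selected, prototypes, J):
    """Frequency-averaged Hermitian angle spectrum and peak picking.

    g_hat_selected : (K_sel, M) complex RTF estimates at the selected bins in K(l)
    prototypes     : (I, M) complex prototype anechoic RTF vectors for I directions
    Returns the indices of the J directions with the largest local maxima of P.
    """
    I = prototypes.shape[0]
    P = np.array([-sum(hermitian_angle(g, prototypes[i]) for g in g_hat_selected)
                  for i in range(I)])
    # local maxima on the circular grid of candidate directions
    peaks = [i for i in range(I)
             if P[i] >= P[(i - 1) % I] and P[i] >= P[(i + 1) % I]]
    peaks.sort(key=lambda i: P[i], reverse=True)
    return peaks[:J]
```

The circular wrap-around in the peak test reflects that the prototype directions cover the full azimuthal plane.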
In the context of DOA estimation, coherence-based quantities such as the coherent-to-diffuse ratio (CDR) are a common criterion for frequency subset selection <cit.>. The usage of the CDR as a selection criterion can be motivated by the fact that the higher the CDR at a TF bin, the more likely it is that a speaker dominates over all other speakers, noise, and reverberation at that bin. As in <cit.>, the subset 𝒦(l) is obtained using the CDR criterion (<ref>), i.e.,
𝒦(l) = {k: CDR(k,l)≥CDR_thresh} ,
where the CDR is estimated as
CDR(k,l) = f(Γ_y,eff(k,l), Γ_u(k)) ,
with the CDR-functional f defined in (<ref>) for a single microphone pair comprising the microphones m=i and m=j <cit.>. The arguments of the function in (<ref>) are the estimated coherence Γ_y,i,j of the noisy signal
Γ_y_i,j(k,l)= Φ̂_y_i,j(k,l)/√(Φ̂_y_i,i(k,l) Φ̂_y_j,j(k,l))
with Φ̂_y_i,j denoting an estimate of the (i,j)-th element of the covariance matrix of the noisy microphone signals and a model Γ_u,i,j of the coherence of the undesired component. To consider more than just a single microphone pair for the estimation of the CDR, the coherence of the noisy signals between multiple microphone pairs (denoted as the microphone set ℳ) between the left and the right hearing aid is averaged prior to evaluating the CDR-functional in (<ref>), resulting in the binaural effective coherence <cit.>, i.e.,
Γ_y,eff(k,l) = 1/|ℳ|∑_i,j ∈ℳΓ_y_i,j(k,l) ,
Thus, the binaural effective coherence represents the average coherence between the head-mounted microphone signals. Due to the arbitrary position of the external microphone, we consider only the head-mounted microphones (with fixed relative positions) for the estimation of the binaural effective coherence Γ_y,eff(k,l).
To model the coherence of the undesired component for the estimation of the CDR in (<ref>) between the head-mounted microphone signals, head shadow effects need to be included. Assuming a diffuse sound field for both the noise and reverberation component, a modified sinc-model <cit.> is employed, i.e.,
Γ_u(k) = sinc(αω_kr/c) 1/√(1 + (βω_kr/c)^4)
where ω_k denotes the discrete angular frequency, r denotes the distance between the microphones of left and right hearing aid which is approximated as the diameter of a head, c denotes the speed of sound, and α=0.5 and β=2.2 denote empirically determined parameters of the modified sinc-model.
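For illustration only, a numpy sketch of the subset selection is given below. The CDR functional itself is not reproduced here (it is only referenced above), so it is passed in as a callable `cdr_fn`; the covariance layout and variable names are assumptions.

```python
import numpy as np


def undesired_coherence(omega, r, c=343.0, alpha=0.5, beta=2.2):
    """Modified sinc model for the coherence of the diffuse undesired component."""
    x = omega * r / c
    # np.sinc is the normalized sinc, hence the division by pi
    return np.sinc(alpha * x / np.pi) / np.sqrt(1.0 + (beta * x) ** 4)


def binaural_effective_coherence(Phi_y, pairs):
    """Average the noisy-signal coherence over microphone pairs between the
    left and right hearing aid.  Phi_y: (K, M, M) per-bin covariance estimates."""
    gamma = np.zeros(Phi_y.shape[0], dtype=complex)
    for i, j in pairs:
        gamma += Phi_y[:, i, j] / np.sqrt(Phi_y[:, i, i].real * Phi_y[:, j, j].real)
    return gamma / len(pairs)


def select_frequencies(gamma_y_eff, gamma_u, cdr_fn, threshold=0.0):
    """Keep the frequency bins whose estimated CDR exceeds the threshold.
    cdr_fn(gamma_y, gamma_u) implements the CDR estimator referenced in the text."""
    cdr = cdr_fn(gamma_y_eff, gamma_u)
    return np.where(cdr >= threshold)[0]
```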
In this paper we compare the influence of different RTF vector estimation methods on constructing the frequency-averaged Hermitian angle spectrum in (<ref>). In <cit.> no external microphone was used and therefore the DOAs were estimated from the spatial spectrum as in (<ref>) constructed from head-mounted RTF vectors that were estimated using the CW method as in (<ref>), i.e.,
P^(CW)(l,θ_i)=-∑_k∈𝒦(l)h(𝐠̂_H_d^(CW)(k,l),𝐠̅_H(θ_i)) .
In this paper, we propose to exploit the availability of the external microphone and estimate the DOAs from the spatial spectrum constructed as in (<ref>) constructed from head-mounted RTF vectors that are estimated using the SC method as in (<ref>), i.e.,
P^(SC)(l,θ_i)=-∑_k∈𝒦(l)h(𝐠̂_H_d^(SC)(k,l),𝐠̅_H(θ_i)) .
A summary on the covariance whitening (CW) method <cit.> and the spatial coherence (SC) method <cit.> is provided in the next section.
§ RTF VECTOR ESTIMATION
In order to estimate DOAs of multiple speakers, a frequency-averaged Hermitian angle spectrum is constructed, which assesses the similarity between the estimated M-dimensional head-mounted RTF vector and a database of prototype anechoic RTF vectors for different directions. In this section, we review two RTF vector estimation methods. The computationally expensive state-of-the-art covariance whitening (CW) method <cit.> is summarized in Section <ref>. The computationally inexpensive spatial coherence (SC) method <cit.> is discussed in Section <ref>.
§.§ Covariance whitening (CW)
To apply the CW method <cit.>, estimates Φ̂_𝐲 and Φ̂_𝐮 of the covariance matrices of the noisy signal and the undesired signal component are required. Based on these estimates, the head-mounted direct-path RTF vector 𝐠_H_d can be estimated using only the head-mounted microphone signals as
𝐠̂_H_d^(CW) =f(𝐄Φ̂_𝐲𝐄^H,𝐄Φ̂_𝐮𝐄^H) ,
f(Φ̌_y,Φ̌_u) =Φ̌_u^1/2𝒫{Φ̌_u^-1/2Φ̌_yΦ̌_u^-H/2}/𝐞̌_1^TΦ̌_u^1/2𝒫{Φ̌_u^-1/2Φ̌_yΦ̌_u^-H/2} ,
where 𝒫{·} denotes the principal eigenvector of a matrix, Φ̌_u^1/2 denotes a square-root decomposition (e.g., Cholesky decomposition) of the M̌-dimensional matrix Φ̌_u and 𝐞̌_1=[1,0,…,0]^T denotes an M̌-dimensional selection vector. Note that 𝐠_H_d can be estimated likewise from the head-mounted microphone signals and the external microphone signal together, via 𝐄f(Φ̂_𝐲,Φ̂_𝐮), differing in general from the estimate 𝐠̂_H_d^(CW) as in (<ref>). However, based on the results of <cit.> and <cit.>, we will consider only the estimate as in (<ref>) obtained from the head-mounted microphone signals only as no significant benefit in DOA estimation performance was reported when all microphone signals were used.
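A compact numpy sketch of this estimator (per TF bin, head-mounted microphones only) is given below; it uses a Cholesky factor as the square-root decomposition and is not the authors' implementation.

```python
import numpy as np


def cw_rtf_estimate(Phi_y, Phi_u):
    """Covariance-whitening RTF estimate for one TF bin.

    Phi_y, Phi_u : (M, M) noisy and undesired covariance estimates of the
    head-mounted microphones.  Returns the RTF vector relative to microphone 0.
    """
    L = np.linalg.cholesky(Phi_u)              # square-root factor, Phi_u = L L^H
    L_inv = np.linalg.inv(L)
    whitened = L_inv @ Phi_y @ L_inv.conj().T  # Phi_u^{-1/2} Phi_y Phi_u^{-H/2}
    _, eigvecs = np.linalg.eigh(whitened)
    principal = eigvecs[:, -1]                 # eigenvector of the largest eigenvalue
    g = L @ principal                          # de-whiten
    return g / g[0]                            # normalize to the reference microphone
```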
§.§ Spatial coherence (SC)
The SC method <cit.> requires an external microphone and relies on the assumption of a low spatial coherence between the undesired component U_M+1 in the external microphone signal and the undesired components U_m, m∈{1,…,M}, in the head-mounted microphone signals, i.e.
ℰ{U_mU_M+1^∗}≈ 0 , m∈{1,…, M} .
As shown in <cit.>, this assumption holds quite well, for example, when the distance between the external microphone and the head-mounted microphones is large enough and the undesired component is spatially diffuse-like. Exploiting this assumption results in ℰ{Y_mY_M+1^∗}=ℰ{X_mX_M+1^∗}, m∈{1,…, M}, thus the RTF vector can be efficiently estimated without expensive matrix decompositions as
𝐠̂_H_d^(SC) = 𝐄Φ̂_𝐲𝐞_M+1/𝐞_1^TΦ̂_𝐲𝐞_M+1 ,
with 𝐞_m denoting an (M+1)-dimensional selection vector selecting the m-th element.
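The corresponding SC estimate reduces to normalizing one column of the noisy covariance matrix; a sketch (again per TF bin, with assumed variable names) is given below. Compared to the CW sketch above, no matrix decomposition is required, which is the computational advantage exploited in this paper.

```python
import numpy as np


def sc_rtf_estimate(Phi_y_ext):
    """Spatial-coherence RTF estimate for one TF bin.

    Phi_y_ext : (M+1, M+1) noisy covariance of the M head-mounted microphones
    plus the external microphone (last index).  Returns the M-dimensional
    head-mounted RTF vector relative to the reference (first) microphone.
    """
    col = Phi_y_ext[:-1, -1]   # cross-PSDs between head-mounted mics and external mic
    return col / col[0]
```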
§ EXPERIMENTAL RESULTS
Applying the CW and SC method for RTF vector estimation, in this section we compare the DOA estimation performance when using the SC-based frequency-averaged Hermitian angle spectrum as in (<ref>) against the DOA estimation performance when using the CW-based frequency-averaged Hermitian angle spectrum as in (<ref>). We evaluate the methods with recorded signals for an acoustic scenario with two static speakers in a reverberant room with diffuse-like babble noise. The experimental setup and implementation details of the algorithms are described in Section <ref>. The results in terms of localization accuracy are presented and discussed in Section <ref>.
§.§ Experimental setup and implementation details
For the experiments we used signals that were recorded in a laboratory at the University of Oldenburg with dimensions of about 7 m × 6 m × 2.7 m, where the reverberation time can be adjusted by means of absorber panels, which are mounted to the walls and the ceiling. The reverberation time was set to approximately T_60≈250 ms. Fig. <ref> depicts the experimental setup. A dummy head with a binaural hearing aid setup (M = 4) was placed approximately in the center of the laboratory. For this hearing aid setup a database of prototype anechoic RTF vectors is obtained from measured anechoic binaural room impulse responses <cit.> with an angular resolution of 5^∘ (I = 72). A single external microphone was placed at four different positions (denoted as E1 - E4), which was not restricted to be close to a speaker. Two speakers from the EBU SQAM CD corpus <cit.> (male and female, English language) were played back via loudspeakers that were located at approximately 2 m distance from the dummy head. For the evaluation, all 72 pairs of DOAs of non-collocated speakers (each of the 9 DOAs in the range [-160^∘,-120^∘,…,160^∘]) were considered. The speech signals were constantly active and had a duration of approximately 5 s. Diffuse-like noise was generated with four loudspeakers facing the corners of the laboratory, playing back different multi-talker recordings. The speech and noise components were recorded separately and were mixed at {-5,0,5} dB broadband signal-to-noise ratio (SNR) averaged over all head-mounted microphones of the hearing aid setup. All microphone signals were recorded simultaneously, hence neglecting synchronization and latency aspects.
The microphone signals were processed in the STFT-domain using a 32 ms square-root Hann window with 50 % overlap at a sampling frequency of 16 kHz. The covariance matrices Φ̂_𝐲 and Φ̂_𝐮 were estimated recursively during detected speech-and-noise and noise-only TF bins, respectively, using smoothing factors corresponding to time constants of 250 ms for Φ̂_𝐲 and 500 ms for Φ̂_𝐮, respectively. The speech-and-noise TF bins were discriminated from noise-only TF bins based on the speech presence probability <cit.>, averaged and thresholded over all head-mounted microphone signals.
We assess the DOA estimation performance by averaging the localization accuracy over the considered DOA pairs and SNRs. For the localization accuracy we average the per-frame-accuracies over all frames, where we define the per-frame accuracy as
ACC(l) = j_correct(l)/J ,
with j_correct(l) denoting the number of speakers that are correctly localized within a range of ± 5^∘ in the l-th frame and J=2.
§.§ Results
Fig. <ref> depicts the average localization accuracies that are obtained from the spatial spectrum as in (<ref>), denoted by CW, and the accuracies obtained from the spatial spectrum as in (<ref>), denoted by SC-EX, where X stands for one of the four positions of the external microphone. To show the effectiveness of the subset selection, we considered two threshold values, CDR_thresh = -∞ (corresponding to selecting all frequencies) and CDR_thresh = 0, shown as blue bars and orange bars, respectively.
First, for every condition a large improvement in the localization accuracy of up to 11 % due to the frequency subset selection can be observed. This result is in line with the results reported in <cit.>. Second, considering the spatial spectrum obtained from (<ref>), it can be observed that the position of the external microphone has a minor effect on the estimated DOA, resulting in localization accuracies in the range 62 - 66 % using a threshold value of CDR_thresh = 0. For the external microphone placed at positions E3 or E4, i.e., close to the loudspeakers playing back the noise, a slightly lower DOA estimation accuracy can be observed compared to the external microphone placed at positions E1 or E2. Third, comparing the DOA estimation performance when using the CW method against the SC method for estimating the head-mounted RTF vector, a difference of up to around 5 - 7 % can be observed. Thus, the low-complexity SC method yields a comparable DOA estimation performance for multiple speakers as the CW method, which is in line with the single speaker DOA estimation results reported in <cit.>.
§ CONCLUSIONS
Based on two RTF vector estimation methods, in this paper we compared the DOA estimation performance for multiple speakers for a binaural hearing aid setup exploiting an external microphone or not. We did not restrict the position of the external microphone to be close to the target speaker. Estimating the RTF vector using either the CW method without exploiting the external microphone or using the SC method exploiting the external microphone, we constructed a frequency-averaged Hermitian angle spectrum from which the DOAs of the speakers were estimated as the directions that maximized the spatial spectrum. We evaluated the approach using simulations with recorded two-speaker scenarios in acoustic environments with mild reverberation and diffuse-like babble noise scaled to low SNRs for different positions of the external microphone. The results show that using the SC method for the construction of the frequency-averaged Hermitian angle spectrum yields a DOA estimation accuracy (62 - 66 %) that is comparable to the CW method (≈70 %) at a lower computational complexity.
|
http://arxiv.org/abs/2307.05541v1 | 20230708192609 | High Fidelity 3D Hand Shape Reconstruction via Scalable Graph Frequency Decomposition | [
"Tianyu Luan",
"Yuanhao Zhai",
"Jingjing Meng",
"Zhong Li",
"Zhang Chen",
"Yi Xu",
"Junsong Yuan"
] | cs.CV | [
"cs.CV"
] |
High Fidelity 3D Hand Shape Reconstruction
via Scalable Graph Frequency Decomposition
Tianyu Luan^1 Yuanhao Zhai^1 Jingjing Meng^1 Zhong Li^2
Zhang Chen^2 Yi Xu^2 Junsong Yuan^1
^1State University of New York at Buffalo ^2OPPO US Research Center, InnoPeak Technology, Inc.
{tianyulu,yzhai6,jmeng2,jsyuan}@buffalo.edu
{zhong.li,zhang.chen,yi.xu}@oppo.com
===================================================================================================================================================================================================================================================================================================
Despite the impressive performance obtained by recent single-image hand modeling
techniques, they lack the capability to capture sufficient details of the 3D
hand mesh.
This deficiency greatly limits their applications when high-fidelity hand
modeling is required, , personalized hand modeling.
To address this problem, we design a frequency split network to generate 3D hand
mesh using different frequency bands in a coarse-to-fine manner.
To capture high-frequency personalized details, we transform the 3D mesh into
the frequency domain, and propose a novel frequency decomposition loss to
supervise each frequency component.
By leveraging such a coarse-to-fine scheme, hand details that correspond to the
higher frequency domain can be preserved.
In addition, the proposed network is scalable, and can stop the inference at any
resolution level to accommodate different hardware with varying computational powers.
To quantitatively evaluate the performance of our method in terms of recovering
personalized shape details, we introduce a new evaluation metric named Mean
Signal-to-Noise Ratio (MSNR) to measure the signal-to-noise ratio of each mesh
frequency component.
Extensive experiments demonstrate that our approach generates fine-grained
details for high-fidelity 3D hand reconstruction, and our evaluation metric is
more effective for measuring mesh details compared with traditional metrics. The code is available at <https://github.com/tyluann/FreqHand>.
§ INTRODUCTION
High-fidelity and personalized 3D hand modeling have seen great demand in 3D games, virtual reality, and the emerging Metaverse, as it brings better user experiences, , users can see their own realistic hands in the virtual space instead of the standard avatar hands. Therefore, it is of great importance to reconstruct high-fidelity hand meshes that can adapt to different users and application scenarios.
Despite previous successes in 3D hand reconstruction and modeling<cit.>, few existing solutions focus
on enriching the details of the reconstructed shape, and most current methods fail to generate consumer-friendly high-fidelity hands.
When we treat the hand mesh as graph signals, like most natural signals, the low-frequency components have larger amplitudes than those of the high-frequency parts, which we can observe in a hand mesh spectrum curve (<ref>). Consequently, if we generate the mesh purely in the spatial domain, the signals of different frequencies could be biased, thus the high-frequency information can be easily overwhelmed by its low-frequency counterpart.
Moreover, the wide usage of compact parametric models, such as MANO <cit.>, has limited the expressiveness of personalized details. Even though MANO can robustly estimate the hand pose and coarse shape, it sacrifices hand details for compactness and robustness in the parameterization process, so the detail expression ability of MANO is suppressed.
To better model detailed 3D shape information, we transform the hand mesh into the graph frequency domain, and design a frequency-based loss function to generate high-fidelity hand mesh in a scalable manner. Supervision in the frequency domain explicitly constrains the signal of a given frequency band from being influenced by other frequency bands. Therefore, the high-frequency signals of hand shape will not be suppressed by low-frequency signals despite the amplitude disadvantage.
To improve the expressiveness of hand models, we design a new hand model of 12,337 vertices that extends previous parametric models such as MANO with nonparametric representation for residual adjustments. While the nonparametric residual expresses personalized details, the parametric base ensures the overall structure of the hand mesh, , reliable estimation of hand pose and 3D shape. Instead of fixing the hand mesh resolution, we design our network architecture in a coarse-to-fine manner with three resolution levels U-net for scalability. Different levels of image features contribute to different levels of detail. Specifically, we use low-level features in high-frequency detail generation and high-level features in low-frequency detail generation. At each resolution level, our network outputs a hand mesh with the corresponding resolution. During inference, the network outputs an increasingly higher resolution mesh with more personalized details step-by-step, while the inference process can stop at any one of the three resolution levels.
In summary, our contributions include the following.
* We design a high-fidelity 3D hand model for reconstructing 3D hand shapes from single images. The hand representation provides detailed expression, and our frequency decomposition loss helps to capture the personalized shape information.
* To enable computational efficiency, we propose a frequency split network architecture to generate high-fidelity hand mesh in a scalable manner with multiple levels of detail.
During inference, our scalable framework supports budget-aware mesh reconstruction when the computational resources are limited.
* We propose a new metric to evaluate 3D mesh details. It better captures the signal-to-noise ratio of all frequency bands to evaluate high-fidelity hand meshes. The effectiveness of this metric has been validated by extensive experiments.
We evaluate our method on the InterHand2.6M dataset <cit.>. In addition to the proposed evaluation metrics, we also evaluate mean per joint position error (MPJPE) and mesh Chamfer distance (CD). Compared to MANO and other baselines, our proposed method achieves better results using all three metrics.
§ RELATED WORK
Parametric hand shape reconstruction. Parametric models are a popular approach in hand mesh reconstruction. Romero <cit.> proposed MANO, which uses a set of shape and pose parameters to control the movement and deformation of human hands. Many recent works <cit.> combined deep learning with MANO. They use features extracted from the RGB image as input, CNN to get the shape and pose parameters, and eventually these parameters to generate hand mesh. These methods make use of the strong prior knowledge provided by the hand parametric model, so that it is convenient to train the networks and the results are robust. However, the parametric method limits the mesh resolution and details of hands.
Non-parametric hand shape reconstruction. Non-parametric hand shape reconstruction typically estimates the vertex positions of a template with fixed topology. For example, Ge <cit.> proposed a method using a graph convolution network. It uses a predefined upsampling operation to build a multi-level spectrum GCN network. Kulon <cit.> used spatial GCN and spiral convolution operator for mesh generation. Moon <cit.> proposed a pixel-based approach. However, none of these works paid close attention to detailed shapes. Moon <cit.> provided an approach that outputs fine details, but since they need the 3D scanned meshes of the test cases for training, their model cannot do cross-identity reconstruction.
In our paper, we design a new hand model that combines the strength of both parametric and non-parametric approaches. We use this hand model as a basis to reconstruct high-fidelity hands.
Mesh frequency analysis. Previous works mainly focused on the spectrum analysis of the entire mesh graph. Chung. <cit.> defines the graph Fourier transformation and graph Laplacian operator, which builds the foundation of graph spectrum analysis. <cit.> extends commonly used signal processing operators to graph space. <cit.> proposes a spectrum graph convolution network based on graph spectrum characteristics. The spectral decomposition of the graph function is used to define graph-based convolution. Recent works such as <cit.> widely use spectrum GCN in different fields. However, these works mainly focus on the analysis of the overall graph spectrum. In this paper, we use spectrum analysis as a tool to design our provided loss function and metric.
§ PROPOSED METHOD
We propose a scalable network that reconstructs the detailed hand shape, and use frequency decomposition loss to acquire details. <ref> shows our network architecture. We design our network in the manner of a U-net. First, we generate a MANO mesh from image features from EfficientNet <cit.>.
Based on the MANO mesh, we use a
graph convolution network (green, yellow, and red modules in <ref>) to recover a high-fidelity hand mesh. In order to obtain high-frequency information, we use image features from different layers of the backbone network as a part of the GCN inputs. Specifically, at the low-resolution level, we take high-level image features as part of the input, and use a low-resolution graph topology to generate a low-resolution mesh. At medium and high-frequency levels, we use lower-level image feature through the skip connection to produce a high-resolution mesh.
Note that at every resolution level, the network will output the intermediate hand mesh, so it would naturally have the ability for scalable inference. During the training process, we supervise both intermediate meshes and the final high-resolution mesh. We discuss the details in the following.
§.§ High Fidelity 3D Hand Model
We design our hand representation based on MANO <cit.>. MANO factorizes human hands into a 10-dimensional shape representation β and a 35-dimensional pose representation θ.
MANO model can be represented as
M(θ, β) = W(T_P(θ, β), θ, w) ,
T_P(θ, β) = T + B_S(β) + B_P(θ) ,
where W is the linear blend skinning function.
Model parameter w is the blend weight. B_S and B_P are another two parameters of MANO named shape blend shape and pose blend shape, which are related to pose and shape parameters, respectively.
MANO can transfer complex hand surface estimation into a simple regression of a few pose and shape parameters.
However, MANO has limited capability in modeling shape detail.
It is not only limited by the number of pose and shape dimensions (45), but also by the number of vertices (778). In our work, we designed a new parametric-based model with 12,337 vertices generated from MANO via subdivision. The large vertex number greatly enhances the model's ability to represent details.
Subdivided MANO. To address this problem. We design an extended parametric model that can better represent details. First, we add detail residuals to MANO as
M^'(θ, β, d) = W(T_P^'(θ, β, d), θ, w^'),
T_P^'(θ, β, d) = T^' + B_S^'(β) + B_P^'(θ) + d,
where, w^', T^', B_S^'(β), and B_P^'(θ) are the parameters our model, and d is the learnable per-vertex location perturbation. The dimension of d is the same as the number of vertices.
Besides vertex residuals, we further increase the representation capability of our hand model by increasing the resolution of the mesh.
Motivated by the traditional Loop subdivision<cit.>, we propose to design our parametric hand model by subdividing the MANO template. Loop subdivision can be represented as
T^' = 𝐋_𝐬T,
where T is the original template mesh with n vertices and m edges. T^' is the subdivided template mesh with n+m vertices. 𝐋_𝐬∈ℝ^(n+m)× n is the linear transformation that defines the subdivision process. The position of each vertex on the new mesh is only determined by the neighbor vertices on the original mesh, so 𝐋_𝐬 is sparse. We use similar strategies to calculate B_S and B_P. The MANO parameters map the input shape and pose into vertex position adjustments. These mappings are linear matrices of dimension x × n.
Therefore, we can calculate the parameters as
w^' = (𝐋_𝐬w^⊤)^⊤,
B_S^' = (𝐋_𝐬B_S^⊤)^⊤,
B_P^' = (𝐋_𝐬B_P^⊤)^⊤.
We repeat the procedure twice to get sufficient resolution.
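As a rough sketch (not the authors' code), the parameter lifting in the equations above can be written as follows; the array layout follows the paper's x × n convention for parameter matrices, which differs from the layout of MANO's released model files.

```python
import numpy as np


def subdivide_parametric_model(L_s, template, blend_weights, shape_dirs, pose_dirs):
    """Lift MANO-style parameters to the subdivided topology.

    L_s           : (n+m, n) sparse or dense subdivision matrix
    template      : (n, 3) template vertices T
    blend_weights : (x_w, n) skinning weights w in the paper's x-by-n convention
    shape_dirs    : (x_s, n) shape blend-shape mapping B_S
    pose_dirs     : (x_p, n) pose blend-shape mapping B_P
    """
    template_hd = L_s @ template                 # T' = L_s T
    weights_hd = (L_s @ blend_weights.T).T       # w' = (L_s w^T)^T
    shape_dirs_hd = (L_s @ shape_dirs.T).T       # B_S' = (L_s B_S^T)^T
    pose_dirs_hd = (L_s @ pose_dirs.T).T         # B_P' = (L_s B_P^T)^T
    return template_hd, weights_hd, shape_dirs_hd, pose_dirs_hd


# Applying the function twice with the corresponding L_s at each level yields the
# two subdivision levels used in the paper (778 -> 3093 -> 12337 vertices).
```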
<ref> shows example meshes from the new model in different poses (d is set to 0). We can see that our representation inherits the advantages of the parametric hand model. It has a plausible structure with no visual artifacts when the hand poses change.
§.§ Hierachical Graph Convolution Network
Our GCN network utilizes a multiresolution graph architecture
that follows the subdivision process in Section <ref>.
Different from the single graph GCNs in previous works <cit.>, our GCN network uses different graphs in different layers. At each level, each vertex of the graph corresponds to a vertex on the mesh and the graph topology is defined by the mesh edges. Between two adjacent resolution levels, the network uses the 𝐋_𝐬 in <ref> for the upsampling operation.
This architecture is designed for scalable inference. When the computing resources are limited, only the low-resolution mesh needs to be calculated; when the computing resources are sufficient, then we can calculate all the way to the high-resolution mesh. Moreover, this architecture allows us to explicitly supervise the intermediate results, so the details would be added level-by-level.
§.§ Graph Frequency Decomposition
In order to supervise the output mesh in the frequency domain and design the frequency-based metric, we need to do frequency decomposition on mesh shapes. Here, we regard the mesh as an undirected graph, and 3D locations of mesh vertices as signals on the graph. Then, the frequency decomposition of the mesh is the spectrum analysis of this graph signal. Following <cit.>, given an undirected graph 𝒢 = {𝒱, ℰ} with a vertex set 𝒱= {1,2,...,N } and a set of edges ℰ= {(i, j) }_i,j ∈𝒱, the Laplacian matrix is defined as 𝐋:=𝐃 - 𝐀,
where 𝐀 is the N × N adjacency matrix with entries defined as edge weights a_ij and 𝐃 is the diagonal degree matrix. The i-th diagonal entry is d_i = ∑_ja_ij. In this paper, the edge weights are defined as
a_ij:=
1 , (i,j) ∈ℰ
0 , otherwise ,
which means all edges have the same weights. We decompose 𝐋 using spectral decomposition:
𝐋=𝐔Λ𝐔^⊤.
Here, Λ is a diagonal matrix, in which the diagonal entries are the eigenvalues of 𝐋. 𝐔 is the eigenvector set of 𝐋. Since the Laplacian matrix 𝐋 describes the fluctuation of the graph signal, its eigenvalues show how "frequent" the fluctuations are in each eigenvector direction. Thus, the eigenvectors of larger eigenvalues are defined as higher frequency bases, and the eigenvectors of smaller eigenvalues are defined as lower frequency bases. Since the column vectors of 𝐔 is a set of orthonormal basis of the graph space, following <cit.>, we define transform F(x) = 𝐔^⊤x to be the Fourier transform of graph signal, and F'(x) = 𝐔x to be reverse Fourier transform. This means, given any graph function x ∈ℝ^N× d, we can decompose x in N different frequency components:
x=∑_i=1^N𝐔_𝐢(𝐔_𝐢^⊤x),
where 𝐔_𝐢∈ℝ^N × 1 is the ith column vector of 𝐔. d is the dimension of the graph signal on each vertex. 𝐔_𝐢^⊤x is the frequency component of x on the ith frequency base.
Having <ref>, we can decompose a hand mesh into frequency components. <ref> shows an example of a groundtruth mesh and its frequency decomposition result. The x-axis is the frequencies from low to high. The y-axis is the amplitude of each component in the logarithm. It is easy to observe that the signal amplitude generally decreases as the frequency increases. <ref> shows the cumulative frequency components starting from frequency 0. We can see how the mesh shape changes when we gradually add higher frequency signals to the hand mesh. In general, the hand details increase as higher frequency signals are gradually included.
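A minimal numpy sketch of this decomposition is given below (not the authors' implementation); it builds the unweighted Laplacian from the mesh edges and uses a dense eigensolver, whereas a sparse solver (e.g., scipy.sparse.linalg.eigsh) would be preferable for the 12k-vertex mesh.

```python
import numpy as np


def graph_fourier_basis(num_vertices, edges):
    """Eigenbasis of the unweighted graph Laplacian of a mesh.

    edges: iterable of (i, j) vertex-index pairs.  Returns eigenvalues in
    ascending order (low to high frequency) and the matrix U whose columns
    are the frequency bases U_i.
    """
    A = np.zeros((num_vertices, num_vertices))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    L = np.diag(A.sum(axis=1)) - A
    eigvals, U = np.linalg.eigh(L)          # dense solver; fine for small meshes
    return eigvals, U


def frequency_components(U, vertices):
    """Graph Fourier transform of per-vertex 3D coordinates (N, 3).

    Row f of the result is U_f^T x; the f-th spatial component is
    U[:, f:f+1] @ coeffs[f:f+1], and summing all components recovers x.
    """
    return U.T @ vertices
```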
§.§ Frequency Decomposition Loss
Frequency decomposition loss. Conventional joint and vertex losses, such as the per-joint error loss <cit.> and the per-vertex mesh error loss <cit.> commonly used in human body reconstruction, and the Chamfer distance loss <cit.> commonly used in object reconstruction and 3D point cloud estimation, all measure the error in the spatial domain. In that case, the signals of different frequency components are aliased together. As shown in <ref>, the amplitudes of low-frequency signals of the hand shape are much larger than those of high-frequency signals, so when aliasing happens, the high-frequency signals get overwhelmed, which means direct supervision in the spatial domain mainly focuses on low-frequency signals. Thus, a spatial loss mostly does not drive the network to generate high-frequency details. Our experiments in <ref> also demonstrate this.
To generate detailed information without being overwhelmed by low-frequency signals, we designed a loss function in the frequency domain. Specifically, we use graph frequency decomposition (<ref>) to define our frequency decomposition loss as
L_F = 1/F∑_f=1^Flog(‖𝐔_f^⊤V̂-𝐔_f^⊤V_gt‖^2/‖𝐔_f^⊤V̂‖‖𝐔_f^⊤V_gt‖ + ϵ + 1),
where F=N is the number of total frequency components, 𝐔_f is the fth frequency base, ‖·‖ is the L2 norm, ϵ = 1 × 10^-8 is a small number to avoid division-by-zero, V̂∈ℝ^N × 3 and V_gt∈ℝ^N × 3 are the predicted and groundtruth vertex locations, respectively. During training, for every frequency component, our loss reduces the influence of the amplitude of each frequency component, so that information on different frequency components would have equivalent attention. In <ref>, we demonstrate the effectiveness of the frequency decomposition loss.
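A sketch of this loss in PyTorch is given below for illustration (variable names and tensor shapes are assumptions; the released code may organize it differently).

```python
import torch


def frequency_decomposition_loss(U, v_pred, v_gt, eps=1e-8):
    """L_F averaged over all graph-frequency components.

    U      : (N, N) graph Fourier basis (columns are the frequency bases U_f)
    v_pred : (N, 3) predicted vertex locations
    v_gt   : (N, 3) groundtruth vertex locations
    """
    c_pred = U.t() @ v_pred                       # (N, 3) spectrum of the prediction
    c_gt = U.t() @ v_gt                           # (N, 3) spectrum of the groundtruth
    num = torch.linalg.norm(c_pred - c_gt, dim=1) ** 2
    den = torch.linalg.norm(c_pred, dim=1) * torch.linalg.norm(c_gt, dim=1) + eps
    return torch.log(num / den + 1.0).mean()
```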
Total loss function. We define the total loss function as:
L = λ_JL_J + ∑_l=1^3[ λ_v^(l)L_v^(l) + λ_F^(l)L_F^(l)],
where l is the resolution level. l=1 is the lowest-resolution level and l=3 is the highest resolution level.
L_J^(l) is 3D joint location error, L_v^(l) is per vertex error, and L_F^(l) is the frequency decomposition loss. λ_J, λ_v^(l), and λ_F^(l) are hyper-parameters. For simplicity, we refer to L_J^(l), L_v^(l), and L_F^(l) as L_J, L_v, and L_F for the rest of the paper.
Following previous work <cit.>, we define 3D joint location error and per vertex loss as:
L_J = 1/N_J∑_j=1^N_J‖Ĵ_j-J_gt,j‖ ,
L_v = 1/N∑_i=1^N‖v̂_i-v_gt,i‖ ,
where Ĵ_j and J_gt,j are the output joint location and groundtruth joint location. N_J is the number of joints. v̂_i and v_gt,i are the estimated and groundtruth location of the ith vertex, and N is the number of vertices.
§ EXPERIMENTS
§.§ Datasets
Our task requires detailed hand meshes for supervision. Because of the difficulty of acquiring 3D scan data, this supervision is expensive and hard to obtain in a large scale. One alternative plan is to generate meshes from multiview RGB images using multiview stereo methods. Considering the easy access, we stick to this plan and use the generated mesh as groundtruth in our experiments. We do all our experiments on the InterHand2.6M dataset <cit.>, which is a dataset consisting of multiview images, rich poses, and human hand pose annotations. The dataset typically provides 40-100 views for every frame of a hand video. Such a large amount of multiview information would help with more accurate mesh annotation. Finally, we remesh the result hand mesh into the same topology with our 3-level hand mesh template, respectively, so that we can provide mesh supervision for all 3 levels of our network. We use the resulting mesh as groundtruth for training and testing. In this paper, we use the mesh results provided in <cit.>, which are generated using multiview methods of <cit.>, and only use a subset of InterHand2.6m, due to the large number of data in the original dataset. The remeshing method and more dataset details can be found in supplementary material Section 4. In <ref> (last column, “groundtruth"), we show a few examples of the generated groundtruth meshes. Although these meshes are not the exact same as real hands, it is vivid and provides rich and high-fidelity details of human hands. This 3D mesh annotation method is not only enough to support our solution and verify our methods, but is also budget-friendly.
§.§ Implementation Details.
We follow the network architecture in <cit.> to generate intermediate MANO results. We use EfficientNet <cit.> as a backbone. The low-level, mid-level, and high-level features are extracted after the 1st, 3rd, and 7th blocks of EfficientNet, respectively. For each image feature, we use 1 × 1 convolutions to reduce the feature dimensions. The channel numbers of the 1 × 1 convolutions are 32, 32, and 64 from low level to high level, respectively. After that, we project the initial human hand vertices onto the feature maps, and sample a feature vector for every vertex using bilinear interpolation. The GCN graph has 778, 3093, and 12337 vertices at the three resolution levels.
In the training process, we first train the network of <cit.>, and then use the pretrained result to train our scalable network. For training <cit.>, we use their default hyper-parameters, set the learning rate to 1 × 10^-4, and set the batch size to 48. When training the GCN network, we set λ_J to 1, set λ_v^(1) and λ_F^(1) to 1 and 60, set λ_v^(2) and λ_F^(2) to 1 and 60 as well, and set λ_v^(3) and λ_F^(3) to 1 and 100. The learning rate is set to 5 × 10^-4 for the GCN and 1 × 10^-4 for the rest of the network. The batch size is set to 28. The training process takes about 25 hours on 1 NVIDIA GTX3090Ti GPU for 150 epochs. In inference, we use a smoothing kernel to post-process the mesh to reduce sharp changes.
More details of post-processing will be found in Supplementary Materials Section 3.
§.§ Quantitative Evaluation
We use mean per joint position error (MPJPE) and Chamfer distance (CD) to evaluate the hand pose and coarse shape. Besides, to better evaluate personalized details, we also evaluate our mesh results using the proposed mean signal-to-noise ratio (MSNR) metric.
Mean Signal-to-Noise Ratio (MSNR).
Previous metrics for 3D hand meshes mostly calculate the Euclidean distance between the results and the groundtruth. Although in most cases the Euclidean distance can roughly indicate the accuracy of the reconstruction results, it is not consistent with human perception: it is more sensitive to low-frequency errors, but performs poorly at distinguishing personalized details or describing detailed shape similarity.
Thus, we propose a metric that calculates the signal-to-noise ratio in every frequency base of the graph. We define our Mean Signal-to-Noise Ratio (MSNR) metric as
MSNR = 1/F∑_f=1^F S_f,  S_f = log(‖𝐔_f^⊤V̂‖/(‖𝐔_f^⊤V̂ - 𝐔_f^⊤V_gt‖ + ϵ)),
where F=N is the total number of frequency components and S_f is the signal-to-noise ratio of the fth frequency component. 𝐔_f, V̂, and V_gt have the same meanings as in <ref>. ϵ=1 × 10^-8 is a small number to avoid division-by-zero; thus, the maximum of S_f is 8. By this design, the SNRs of different frequency components do not influence each other, so we can better evaluate the high-frequency information compared to the conventional Euclidean distance.
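A minimal sketch of this metric is given below; the base-10 logarithm is an assumption of this sketch, chosen so that a perfectly reconstructed unit-amplitude component saturates near 8 with ϵ = 1e-8, as stated above.

import torch

def mean_signal_to_noise_ratio(U, V_pred, V_gt, eps=1e-8):
    # Same conventions as the loss sketch: columns of U are frequency bases.
    spec_pred = U.t() @ V_pred
    spec_gt = U.t() @ V_gt
    signal = torch.norm(spec_pred, dim=1)               # per-frequency signal
    noise = torch.norm(spec_pred - spec_gt, dim=1)      # per-frequency error
    snr_per_freq = torch.log10(signal / (noise + eps))  # S_f for each base
    return snr_per_freq.mean()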
We designed an experiment on InterHand2.6m to validate the effectiveness of our metric in evaluating high-frequency details. We add errors of 8 different frequency bands to the hand mesh. For each frequency band, the error amplitude is set under 10 different uniform distributions. As shown in <ref>,
we measure the mean per-vertex error (MPVE) and MSNR for every noise distribution on every frequency band, to see how the two metrics change with the noise amplitude in each frequency band. The result shows that in the low-frequency bands, MPVE increases quickly as the noise amplitude increases (the upper lines), but in the high-frequency bands it changes very slowly. MSNR behaves completely differently from MPVE: it is more sensitive to noise in the high-frequency bands than in the low-frequency bands. Thus, compared to the Euclidean distance, MSNR better measures the error in high-frequency details. <ref> shows a few examples of noisy meshes.
Evaluation on InterHand2.6M dataset. We report mean per joint position error (MPJPE), Chamfer distance (CD), and mean signal-to-noise ratio (MSNR) to evaluate the overall accuracy of reconstructed hand meshes. <ref> shows the comparison among 3 levels of our proposed method and MANO. As shown in the table, the proposed method improves the accuracy of hand surface details by a large margin (as indicated by MSNR). We also observe that, while our method generates better shape details in a scalable manner, the joint locations and the overall shape of the output meshes also become slightly more accurate (as indicated by MPJPE and CD). Here, the MSNR of MANO, Ours-level 1, and Ours-level 2 are calculated after subdividing their meshes into the same resolution as Ours-level 3.
§.§ Ablation Study
We conduct several experiments to demonstrate the effectiveness of the feature skip connection design (in <ref>) and of the different loss functions. The results are shown in <ref>. From the results, we observe that our projection-to-feature-map skip connection design leads to performance improvements in all three metrics.
For the loss functions, we observe MSNR degrades when the frequency decomposition loss is removed, indicating inferior mesh details.
Removing the per-vertex error loss dramatically increases the Chamfer distance, indicating that the overall shape is not well constrained.
The visualization results of the latter two experiments are shown in <ref>. If we do not use the frequency decomposition loss, the resulting mesh tends to be smoother, with fewer personalized details. If we do not use the per-vertex error loss, the mesh's low-frequency information is not well learned, and the generated mesh shows an overall shape deformation.
Scalable design. We also demonstrate the scalable design of the proposed network by analyzing the resource needed at each resolution level (<ref>). In general, higher resolution levels require more computational resources in the network, and more resources to store and render the mesh. Still, our approach supports scalable reconstruction and can be applied to scenarios with limited computational resources.
Here, “baseline" means only generating the MANO mesh in our network.
Visualization Results.
The qualitative reconstruction results are shown in <ref>. We observe that even when MANO is upsampled to 200k
vertices, it still does not capture personalized details while our results provide better shape details.
More qualitative results can be found in the Supplementary Material Section 5.
§ CONCLUSION
We provided a solution to reconstruct high-fidelity hand mesh from monocular RGB inputs in a scalable manner. We represent the hand mesh as a graph and design a scalable frequency split network to generate hand mesh from different frequency bands. To train the network, we propose a frequency decomposition loss to supervise each frequency component. Finally, we introduce a new evaluation metric named Mean Signal-to-Noise Ratio (MSNR) to measure the signal-to-noise ratio of each mesh frequency component, which can better measure the details of 3D shapes. The evaluations on benchmark datasets validate the effectiveness of our proposed method and the evaluation metric in terms of recovering 3D hand shape details.
§ ACKNOWLEDGMENTS
This work is supported in part by a gift grant from OPPO.
§ DETAILED NETWORK ARCHITECTURE
We present the detailed network architecture of our approach in <ref>. The green boxes are the features, in which we note the feature dimensions. The blue boxes represent blocks of EfficientNet <cit.>. The red boxes represent GCN blocks. The GCN residual blocks in the network are designed following <cit.>; details of the residual blocks are shown on the right of the figure. The gray boxes are the feature skip-connection part. To get multi-level image features from the feature maps, we project the vertices onto the feature maps and use bilinear interpolation to sample features.
We will illustrate the process more in <ref>.
The purple boxes are the sub-network used to generate MANO mesh. The orange boxes indicate the annotation we used. The green arrows are feature streams and the red lines are skip connections.
We fetch skip-connected features from the output of EfficientNet Block 1, Block 3, and Block 7. The features are used as parts of the input of the GCN. The GCN has 3 levels. At each level, the input features go through a 10-layer GCN Residual Block, then output a feature vector and a 3D location at each vertex. The 3D locations are used as intermediate output and for supervision. The features are used as a part of the input for the next level. At the third level, we only output the 3D location of each vertex as the final mesh.
§ SKIP-CONNECTED FEATURE SAMPLING
In <ref>, the features fetched from EfficientNet are feature maps. We want to convert them into feature vectors attached to the vertices without losing spatial information. Thus, we design a feature sampling strategy to place a local image feature on each graph vertex. As shown in <ref>, we use orthographic projection to find the feature vector for each vertex on the feature map. For every vertex P, we calculate its projection point P^' on the feature map. Then, we extract the feature vector x ∈𝐑^c using bilinear interpolation at point P^', where c is the feature map channel number. The total output feature dimension is N × c, where N is the number of graph vertices.
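The sampling step can be sketched in PyTorch as follows; the use of grid_sample and the pixel-coordinate convention for the projected vertices are assumptions of this sketch, not details taken from the released code.

import torch
import torch.nn.functional as F

def sample_vertex_features(feature_map, vertices_2d, image_size):
    # feature_map: (1, C, H, W) image feature map.
    # vertices_2d: (N, 2) orthographic projections of the vertices,
    #              assumed to be in pixel coordinates of the input image.
    # image_size:  (height, width) of the input image.
    h, w = image_size
    grid = vertices_2d.clone()
    grid[:, 0] = 2.0 * grid[:, 0] / (w - 1) - 1.0   # x -> [-1, 1]
    grid[:, 1] = 2.0 * grid[:, 1] / (h - 1) - 1.0   # y -> [-1, 1]
    grid = grid.view(1, 1, -1, 2)                   # (1, 1, N, 2)
    sampled = F.grid_sample(feature_map, grid, mode='bilinear',
                            align_corners=True)     # (1, C, 1, N)
    return sampled.squeeze(0).squeeze(1).t()        # (N, C) per-vertex features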
§ MESH POST-PROCESSING
We post-process the third-level mesh. Due to the flaws of the groundtruth meshes (shown in <ref>), some of our output meshes also have similar structural flaws. To tackle this problem, we designed a smoothing mask to reduce the flaws. <ref> shows the output of the network, our smoothing mask, and our final mesh result. As we can see, the flaws are greatly reduced. Note that these flaws are caused by the noisy groundtruth, so they can also be reduced by a better remeshing of the training data in the future.
§ REMESHING PROCEDURE
We initially tried to use the multiview stereo (MVS) meshes provided in <cit.> directly. However, each MVS mesh has about 500k vertices. Such highly redundant, high-vertex-count meshes make our training process much slower. Moreover, without a fixed topology, the choices of shape supervision are limited; for example, we would not be able to use the per-vertex loss or the frequency decomposition loss for training.
Thus, we designed a remeshing technique to convert the meshes generated by the multiview stereo (MVS) method into a unified topology. The algorithm is shown in <ref>a. First, we align the MVS mesh with a parametric template mesh. Here, we use the template meshes designed in the main paper Section 3.2.
Second, we use an optimization approach to calculate a set of pose and shape parameters, so that the template mesh becomes a coarse approximation of the MVS mesh. Finally, we use the closest point on the MVS mesh as a substitute for each vertex on the parametric mesh. This procedure preserves the detailed shape and the topology of the parametric template at the same time. In our experiments, we generate 3 resolution levels of groundtruth meshes for supervision, and use the third level for testing.
However, despite the good attributes of the groundtruth meshes, some of them still have flaws. <ref>b shows an example of flaws inside a mesh (red rectangle). They happen because some of the vertices on the parametric mesh find the wrong corresponding vertices on the MVS mesh. These groundtruth mesh flaws eventually cause defects in the generated meshes (shown in <ref>). We have largely reduced these flaws using the mesh post-processing method mentioned in <ref>.
§ MORE VISUALIZATION RESULTS
We show more visualization results of our proposed method in <ref>.
§ FAILURE CASES
We show in <ref> a few failure cases where our method generates hand meshes with flaws. Most of these flaws are caused by groundtruth flaws in remeshing (shown in <ref>b).
§ FUTURE WORKS AND DISCUSSIONS
In future works, our backbone can be replaced with more recent work such as those in <cit.>. The object detection and segmentation-related network can be helpful for hand-related tasks. We would also improve the remeshing procedure to reduce the artifacts. Besides, we would also improve our method to tackle the in-the-wild hand reconstruction problem. Moreover, the frequency decomposition approach can be easily expanded to improve the details of human body reconstruction works such as <cit.>.
|
http://arxiv.org/abs/2307.04133v1 | 20230709091532 | Ultrasonic Image's Annotation Removal: A Self-supervised Noise2Noise Approach | [
"Yuanheng Zhang",
"Nan Jiang",
"Zhaoheng Xie",
"Junying Cao",
"Yueyang Teng"
] | eess.IV | [
"eess.IV",
"cs.CV"
] |
IEEE Transactions on Computational Imaging
Zhang, Jiang et al.: Ultrasonic Image's Body Marker Annotation Removal: A Noise2Noise Approach
Ultrasonic Image's Annotation Removal: A Self-supervised Noise2Noise Approach
Yuanheng Zhang,
Nan Jiang,
Zhaoheng Xie,
Junying Cao*,
Yueyang Teng*
Y. Zhang is with the College of Medicine and Biological Information Engineering, Northeastern University, China.
N. Jiang is with the Department of Ultrasound, General Hospital of Northern Theater Command, China.
Z. Xie is with the Institute of Medical Technology, Peking University, China.
J. Cao is with the Department of Ultrasound, General Hospital of Northern Theater Command, China.
Y. Teng is with the College of Medicine and Biological Information Engineering, Northeastern University, China.
J. Cao and Y. Teng contributed equally to this work.
This work is supported by the Natural Science Foundation of Liaoning Province (2022-MS-114).
This work is supported by the Key R&D Plan Projects of Liaoning Province in 2020 (Project No. 2020JH2/10300122).
August 12, 2023
Accurately annotated ultrasonic images are vital components of a high-quality medical report.
Hospitals often have strict guidelines on the types of annotations that should appear on imaging results.
However, manually inspecting these images can be a cumbersome task.
While a neural network could potentially automate the process, training such a model typically requires a dataset of paired input and target images, which in turn involves significant human labour.
This study introduces an automated approach for detecting annotations in images.
This is achieved by treating the annotations as noise, creating a self-supervised pretext task and using a model trained under the Noise2Noise scheme to restore the image to a clean state.
We tested a variety of model structures on the denoising task against different types of annotation, including body marker annotation, radial line annotation, etc.
Our results demonstrate that most models trained under the Noise2Noise scheme outperformed their counterparts trained with noisy-clean data pairs.
The customized U-Net yielded the best outcome on the body marker annotation dataset, with high scores in segmentation precision and reconstruction similarity.
We released our code at <https://github.com/GrandArth/UltrasonicImage-N2N-Approach>.
Image Restoration,
Noise2Noise,
Segmentation,
U-Net,
Ultrasonic.
§ INTRODUCTION
Annotations, typically comprised of various labels and marks, are commonly utilized to record critical information from an ultrasonic exam,
including the precise location of potential lesions or suspicious findings, on archived results.
Such annotations prove beneficial in aiding physicians in interpreting the exam results,
particularly when surrounding structures do not provide any indication of the anatomic location of the image.
Additionally, hospitals often mandate the inclusion of annotations, especially in cases involving inter-hospital patient transfers <cit.>.
If the report does not have comprehensive annotations, patients are usually required to undergo an equivalent radiography exam at the facility of transfer.
Commonly employed types of annotations include body marker annotation <cit.>, radial line annotation, and vascular flow annotation.
The presence of these annotations serves as evidence for the standardization of the diagnostic process. Annotations not only document the reasoning behind the diagnostic assessment but also facilitate comparison between pre- and post-treatment imaging findings to gain further insight into the patient's condition.
However, the utilization of annotations during ultrasound exams may vary depending on the proficiency of the sonographer performing the procedure.
Ultrasound being a live examination makes it hard to implement additional reviews, thereby relying solely on the expertise of the operator to determine the presence of annotations.
Furthermore, the need for repetitive manual verification increases the likelihood of forgetting the task, particularly during busy schedules at hospitals.
As such, it is possible for the absence of annotations to occur.
Given the strict regulations and obvious benefits surrounding the use of annotations in medical imaging,
sonographers need to manually validate that the stored data satisfies these requirements to ensure that diagnoses meet the standard continuously.
However, this is a cognitively demanding undertaking as it entails the fulfillment of diverse annotation obligations tailored to specific image outcomes.
In addition, dealing with archived files manually is a cumbersome task as most medical data management systems do not consider this necessary and have no relevant feature implemented.
The utilization of neural networks for the automatic assessment of whether the stored data meets particular criteria is a logical approach.
To address the current issue, there are several approaches that can be taken using different types of deep learning models.
The first approach would involve treating the task as a semantic segmentation problem, where the goal is to classify each pixel in the image into one of several predefined categories.
Alternatively, the task could be framed as an instance segmentation problem, where the aim is to identify and label individual objects within the scene.
In order to accomplish these goals, attention-based models such as the Pyramid Attention Network <cit.> or the Reverse Attention Network <cit.> could be employed. Alternatively, generative models like variants of Generative Adversarial Networks (GANs) <cit.> are also viable.
Once the segmentation has been completed, the resulting labels could then be used to determine whether the image meets regulatory requirements or not.
This task could also be viewed as an object recognition challenge, and for this purpose,
models such as Single Shot MultiBox Detector (SSD) <cit.> or You Only Look Once (YOLO) <cit.> could be utilized to obtain the four coordinates of the bonding box of a detected object, which will serve as demonstrative evidence of the necessary annotations.
In order to train a model using deep learning, it is important to have a suitable training dataset that includes paired input and output data, regardless of the specific task being performed.
However, building an appropriate training dataset is a challenging task due to the absence of high-quality data such as segmentation masks, object coordinates and clean targets.
Acquiring such data requires a considerable amount of manual effort.
In this study, we introduced a self-supervised Noise2Noise approach to recognise annotations without needing a pairwise dataset by manually superposing common annotations onto a small set of unannotated images randomly and repeatedly.
We trained multiple network structures such as FCN, U-Net++, MultiResUNet, etc., for Noise2Noise to select an ideal one.
We noted that the majority of Noise2Noise-based methods surpassed the corresponding
Noise2Clean (supervised learning) methods, with the former achieving a Sørensen-Dice coefficient (Dice) increase of up to 300%, an Intersection over Union (IoU) increase of up to 384%, and a Peak Signal to Noise Ratio Human Visual System Modified (PSNR_HVS_M) increase of up to 38% in some cases.
Among them, our customized U-Net achieved the best results, both quantitatively and qualitatively.
The remainder of the paper is organized as follows:
Section <ref> discusses related works.
Section <ref> outlines our methodology, data sources, dataset building pipeline and the model structures used in this work.
In Section <ref>,
quantitative metric scores and qualitative image results are provided to support our claim regarding the optimal model structure, loss function and observations on Noise2Noise's effect.
Finally, Section <ref> concludes the paper.
§ RELATED WORKS
§.§ Self-supervised Learning
Self-supervised learning is a way of training deep-learning models without human guidance or explicit instructions.
Unlike supervised learning which uses labeled examples, self-supervised models learn from unlabeled data by identifying patterns and relationships on their own.
It uses the structure of images (e.g., edges, shapes) to teach the deep-learning model how to identify important parts of an image automatically, rather than having to be explicitly told what to look for.
This is particularly helpful considering the abundance of unlabeled data that exists today and the amount of work required to create a properly constructed dataset.
To create a robust, large model, self-supervised learning is an essential tool.
The general process of self-supervised learning involves first creating a pretext task for the model to solve. By completing this task, the model can gain an understanding of the structural information embedded within the data. This understanding can then be transferred to downstream tasks using different forms of transfer learning.
Examples of pretext tasks include rotating an image for the model to predict the degree of rotation, reconstructing images from an altered view, or reconstructing images from a corrupted version of the original data.
In this work, we developed a pretext task where we asked the model to generate another noisy image from the noisy input while keeping the same original clean image beneath it.
Specifically, we manually extracted several common annotations from stored data and randomly superimposed them on a small set of unannotated images to create a large dataset.
The idea behind this approach was to train the model to recognize the crucial features of the original so that it could distinguish between noise and clean images.
§.§ Noise2Noise Training Scheme
Noise2Noise was originally proposed in <cit.> as a novel statistical approach to the task of image denoising.
It is shown that, under certain key constraints,
it is possible to train a denoising model using only corrupted images.
The constraints are: the distribution of the added noise must have a mean of zero and no correlation with the desired clean image,
and the correlation between the noise in the input image and the target image should be close to zero <cit.>.
By utilizing deep learning, a denoising task can be transformed into a regression problem,
where a neural network is used to learn the mapping between corrupted samples x̂_i and clean samples y_i by minimizing the empirical risk <cit.>
In <cit.>, inspecting the form of a typical training process shows that training a neural network is a generalization of a point estimating problem.
We can see that it is essentially solving the point estimating problem for each separate input.
This means that, by finding the optimal parameters, the trained neural network will output the expectation or median of all possible mappings for input x.
This property often leads to unwanted fuzziness in many deep-learning applications.
However, in a denoising scenario, when the noise satisfies the above constraints and exists in both the model input and training target, the task of empirical risk minimization, given infinite data,
argmin_θ∑_i L(f_θ(x̂_i), ŷ_i)
is equivalent to the original regression problem
argmin_θ∑_i L(f_θ(x̂_i), y_i)
where f_θ(x) is the model parameterized by θ, L is the loss function, x̂_i and ŷ_i are samples drawn from a noisy distribution, and y_i represents clean samples.
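In code, a single training step under this scheme differs from ordinary supervised denoising only in the target fed to the loss; the following PyTorch sketch (with an L_1 loss, as used later in this paper) illustrates the idea, with all names being illustrative.

import torch

def noise2noise_step(model, optimizer, noisy_input, noisy_target,
                     loss_fn=torch.nn.L1Loss()):
    # Both tensors are independently corrupted versions of the same clean image.
    optimizer.zero_grad()
    output = model(noisy_input)           # f_theta(x_hat_i)
    loss = loss_fn(output, noisy_target)  # L(f_theta(x_hat_i), y_hat_i)
    loss.backward()
    optimizer.step()
    return loss.item()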
The idea of using self-supervised learning in conjunction with the Noise2Noise training scheme aligns well with our goal of obtaining a clean image.
With a clean image, we can easily produce a segmentation map for various kinds of annotations, which makes it straightforward to recognize and categorize them accurately.
§ METHODOLOGY
Initially, our data includes collections of information that may or may not have specific annotations.
We manually examined and filtered the data to create a clean dataset for each annotation.
Next, we studied the individual components of different annotations and identified a general pattern for each one.
Using this pattern, we generated large datasets containing noisy data and trained a denoising model using the Noise2Noise approach.
Finally, we trained various model structures using both the Noise2Noise and conventional Noise2Clean techniques to obtain denoising models for the purpose of performance comparison.
§.§ Dataset
To manually synthesize a self-supervised Noise2Noise dataset, which our training requires,
it is essential to know the scheme of the different annotations and to construct a dataset according to it.
Our original data consists mainly of ultrasonic images provided by the General Hospital of Northern Theater Command.
These images were captured using external video capture cards and are in 8-bit sRGB format.
According to the type of noise, we divided these data into six categories:
* Images with body marker annotation
* Images without body marker annotation
* Images with radial line annotation
* Images without radial line annotation
* Images with vascular flow annotation
* Images without vascular flow annotation
Images with certain annotations are considered noisy images in the context of the noise removal task, and corresponding images without these annotations are considered clean.
Some typical images with various annotation are provided in Fig. <ref>.
To safeguard the confidentiality of the patient, any personal data displayed in the margin of the image is blurred using pixelization. This same technique is also used to obscure any similar information present in other images.
In essence, a body marker annotation is a marker selected from a fixed set of icons that indicates different regions of the human body and its current orientation.
It is typically located at the edge of the ultrasonic image area and is labeled by the sonographer.
On some ultrasound machines, the body marker annotation has a fixed position.
However, from a statistical and training perspective, each real instance can be viewed as an image sample from a conditional distribution where the condition is the body marker annotation's location.
By randomly placing body marker annotation at any position within the image, we draw samples from a distribution without the aforementioned condition.
By learning to denoise samples from the unconditioned distribution, the model can effectively denoise samples from the conditional distribution as well.
Other commonly used annotations that we introduced later comply with the same reasoning.
The radial line annotation consists of pairs of connected cross markers.
They are usually placed at the edge of the lesion area, with its placement determined by the size of the lesion.
One to three pairs of cross markers may be present in an image, corresponding to the three axes of 3D space, but typically there are only two pairs.
The vascular flow annotation is not an additional labeling feature meant to simplify identification.
Rather, it serves as a bounding box that identifies the specific area of the image being examined by the ultrasound flowmeter.
However, to keep things simple, we will continue to call it a form of annotation.
The presence of this annotation indicates that the relevant examination has been conducted.
To synthesize a Noise2Noise training dataset for above annotations,
we first manually extracted the necessary annotation icons from existing annotated data,
then we randomly overlay different annotations on the clean images we have.
The randomness of the noise overlay allows for the creation of a relatively large dataset.
By constructing training datasets in the above-mentioned process, each noisy image has three corresponding images for different tasks.
* A clean image which the noisy image originated from.
* A different noisy image created from the same clean image, using a different (in terms of position, form, etc.) noise sampled from the same distribution.
* A binary image recorded the position and form of the noise appended to the clean image.
An instance of the training dataset is presented in Fig. <ref>.
Using these images, the same dataset can be used for Noise2Noise training, conventional Noise2Clean training, and normal segmentation training.
Our approach to create this training dataset can minimize the amount of human labor required. Even with a limited amount of clean data, we are able to generate a large noisy dataset for training. The flow chart of the above process is also shown in Fig. <ref>.
§.§ Network Structures
In this research, we trained several structures to find the optimal solution and compare the two different training schemes: Noise2Noise and traditional Noise2Clean.
We adopted most of the structures from the traditional image segmentation model.
The models we adopted include FCN, DeepLabv3, LinkNet, MANet, U-Net++, MultiResUNet and a customized U-Net.
FCN is one of the models utilizing convolutional networks in semantic segmentation.
<cit.> uses fully-convolutional layers instead of fully-connected layers so that this model is compatible with non-fixed sized input and ouputs.
DeepLabv3 is a subsequent model of the DeepLab model family, developed by <cit.>.
The main feature of this model is the use of dilated convolution, also known as “atrous” convolution.
This method is advocated to combat the issue of feature resolution reduction in deep convolutional networks (due to pooling operations and strides in convolution operations) and the difficulties in multi-scale segmentation.
LinkNet is proposed by <cit.> to address the problem of the long processing time of most segmentation models.
By using a skip connection to pass spatial information directly to the corresponding decoder, LinkNet manages to preserve low-level information without additional parameters and re-learning operations.
MANet, or Multi-scale Attention Net, is developed to improve accuracy in semantic segmentation of remote sensing images.
By using a novel attention mechanism, treating attention as a kernel function, <cit.> reduces the complexity of the dot-product attention mechanism to O(N).
U-Net is a well-known encoder-decoder segmentation model.
It is originally proposed by <cit.> for segmenting biological microscopy images.
U-Net++ is a variant of U-Net proposed by <cit.>.
In their work, they proposed a novel skip connection block in which a dense convolution block is used to process the input from the encoder feature map so that the semantic level of the input is closer to the corresponding decoder feature map.
MultiResUNet is another modern variant of U-Net proposed by <cit.> as a potential successor.
They used an Inception-like layer to replace the consecutive convolution layers after each pooling and transpose-convolution layers, to percept objects at different scales.
They adopted a chain of convolution layers with residual connections instead of plain skip connection to process the feature map inputs before concatenating them to decoder feature maps.
In our work, since the vanilla U-Net does not match the spatial resolution of our dataset, we used a customized U-Net similar to <cit.> in all of our tests.
Convolution layers with different strides and paddings are used in this structure to ensure that the input and output dimensions are identical.
§ EXPERIMENTAL RESULTS
In this section,
we provide quantitative and qualitative results to support our claim in Section <ref>.
§.§ Evaluation
We evaluate the models performance based on segmentation precision and reconstruction similarity.
§.§.§ Segmentation Precision
In terms of noise reduction precision, for a typical segmentation model, we can use the output to compare it with a binary image known as the truth mask to compute a score based on the number of pixels that get classified into the right categories.
For a restoration model like ours, we subtract the model output from the model input to compute the binary segmentation result.
We compare the results with the segmentation truth mask to compute the Dice, IoU, and Pixel Accuracy (PA).
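The following NumPy sketch shows how such a segmentation map and its scores can be derived from the restoration result; the binarization threshold is an arbitrary illustrative choice.

import numpy as np

def segmentation_scores(noisy_input, restored_output, truth_mask, thresh=10):
    # Pixels that changed noticeably between input and restored output are
    # taken as the predicted annotation region (threshold in 8-bit units).
    diff = np.abs(noisy_input.astype(float) - restored_output.astype(float))
    if diff.ndim == 3:                      # collapse RGB channels
        diff = diff.max(axis=-1)
    pred = diff > thresh
    gt = truth_mask.astype(bool)

    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()

    dice = 2 * tp / (2 * tp + fp + fn + 1e-8)
    iou = tp / (tp + fp + fn + 1e-8)
    pa = (tp + tn) / (tp + tn + fp + fn)
    return dice, iou, pa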
§.§.§ Reconstruction Similarity
For assessing reconstruction similarity, we use two metrics: Structural Similarity Index Measure (SSIM) and PSNR_HVS_M.
SSIM is a commonly used measure of image similarity.
The PSNR metric known as PSNR_HVS_M <cit.> is considered to be a more accurate representation of image quality,
which takes into consideration the Contrast Sensitivity Function (CSF) and the between-coefficient contrast masking of Discrete Cosine Transform (DCT) basis functions.
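For reference, the SSIM part of this evaluation can be computed with scikit-image as sketched below (assuming a recent scikit-image version); PSNR_HVS_M requires a DCT-domain implementation and is not sketched here.

from skimage.metrics import structural_similarity

def reconstruction_ssim(restored, clean):
    # Both images are 8-bit arrays; channel_axis handles RGB inputs.
    return structural_similarity(restored, clean, channel_axis=-1, data_range=255)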
§.§ Training
The neural networks discussed in the previous section were trained using PyTorch 1.10.1.
RMSprop <cit.>, a variant of stochastic gradient descent that divides gradients by an average of their recent magnitude, was used as the optimizer with a learning rate of 0.00001, momentum of 0.9, weight decay of 1e-8, and default values <cit.> for other parameters.
Three datasets were created in aforementioned process to train various denoising models.
For body marker annotation, a dataset of 83,900 pairs of noisy images generated from 4,975 clean images was used.
For radial line annotation, 80,000 pairs of noisy images were generated from 3,936 clean images.
For vascular flow annotation, 80,000 pairs of noisy images were generated from 250 clean images.
§.§ Optimal Model Structure
To find the most effective combination of network structure and training scheme for the given task, we trained different network structures under the Noise2Noise and Noise2Clean schemes using the body marker annotation dataset.
Though utilizing only one type of annotation, this experiment's results could demonstrate the likely most suitable structure for other annotations as well.
The L_1 loss is used to train these models.
The results were compared using segmentation precision and reconstruction similarity, and are presented in Tables <ref> and <ref>.
We observed that Noise2Noise training scheme improves segmentation precision and reconstruction similarity in most cases.
The results presented in Tables <ref> and <ref> indicate that the models trained using the Noise2Noise scheme generally achieved higher Dice scores, IoU scores, PA scores, and PSNR_HVS_M scores.
Specifically, for the customized U-Net, we observed an increase in the Dice and IoU of 0.151 and 0.155, respectively, and an increase of 11.625 in the PSNR_HVS_M when using linearly normalized input.
According to our hypothesis, the Noise2Noise training process improves the model's ability to understand the features of annotations through solving an “impossible” task of relocating the annotation.
This task is essentially a self-supervised pretext training task that helps the model gain a better understanding of the annotations and the spatial structure of the ultrasonic images, thus gaining higher performance.
We also noted that the customized U-Net structure performed the best out of all the structures tested.
It achieved the highest Dice, IoU, SSIM, and PSNR_HVS_M scores under both training schemes.
The customized U-Net trained using the Noise2Noise scheme achieved the highest segmentation precision and reconstruction similarity of all models, with a Dice of 0.712, an IoU of 0.596, an SSIM of 0.967, and a PSNR_HVS_M of 41.628.
Given the above results, we chose the customized U-Net as the optimal model for later experiments.
§.§ Optimal Loss Function
To find the optimal loss function, we evaluate the convergence speed of different loss functions.
The loss functions we tested include L_1 loss, Huber loss, Smooth L_1 loss, MSE loss and several combinations of aforementioned loss functions.
The result is shown in Fig. <ref>.
In order to better visualize the differences in convergence speed between the losses, we present them in separated subplots.
As shown in Fig. <ref>, the L_1 loss and its variants (Huber loss and Smooth L_1 loss) are displayed on one subplot,
while the MSE loss-related losses are presented on another subplot in Fig. <ref>.
We observed that implementing MSE loss results in faster convergence, allowing the model to reach convergence in under 100 steps, as shown in Fig. <ref>.
Meanwhile, as depicted in Fig. <ref>, the loss functions based on L_1 loss achieve a much slower convergence after approximately 500 to 600 steps.
Although Huber loss and Smooth L_1 loss seem to have a quicker rate of convergence, closer examination in Fig. <ref> reveals that they both take around 500 steps to converge, which is similar to the standard L_1 loss.
We also noted from Fig. <ref> that using a combination of MSE loss and different L_1 based losses doesn't significantly affect the rate of convergence, likely because the difference in scale between the MSE loss and L1 loss and its variants causes MSE loss to remain the primary determinant of convergence speed.
We also evaluated the customized U-Net trained using various loss functions.
Our findings in Tables <ref> and <ref> revealed that there was minimal difference between the performances of these models, with the largest discrepancies in Dice, IoU, PA, SSIM and PSNR_HVS_M amounting to 0.023, 0.019, 0.003, 0.011 and 4.031 respectively.
These outcomes suggest that the selection of alternative loss functions has little influence on the overall performance of the model.
As such, we decided not to employ the MSE loss function in subsequent experiments and instead continued to utilize the L_1 loss.
§.§ Noise2Noise with Other Annotations
The improvement observed in the customized U-Net trained using the Noise2Noise scheme is also apparent on the other annotation datasets, as shown in <Ref>.
In the provided tables, the customized U-Net has been trained on the other two annotation datasets with the two different training schemes. The outcomes show a substantial enhancement in comparison to the Noise2Clean models: a gain of roughly 0.5 in both Dice and IoU, an increase of around 0.01 in SSIM, and a rise of about 5 units in PSNR_HVS_M for both types of annotations.
§.§ Qualitative Results
In this section, we present denoised images from models trained under different schemes to further support our claim.
As can be seen in Figs. <ref>, <ref> and <ref>,
the output from the Noise2Clean model contains obvious artifacts, whereas models trained using the Noise2Noise scheme do not suffer from this problem.
It is also worth noting that in the output images from Noise2Clean models, information in the edge area is compromised.
In contrast, the Noise2Noise models preserve this information well.
The evidence implies that models trained with the Noise2Noise scheme possess superior capabilities in identifying and distinguishing noise.
§ DISCUSSION
This study proposed a self-supervised data generation and training approach to build a large and diverse dataset starting from a small dataset with only a few clean images.
We find that the customized U-Net trained with the Noise2Noise scheme outperformed other models in terms of segmentation precision and reconstruction similarity on the annotation removal task.
The benefits of Noise2Noise training were observed across most model structures tested, and the models trained using this scheme produced fewer artifacts.
Our study has some limitations:
Firstly, we used separate parameter sets for the segmentation task of different annotations.
However, with the recent advancement of deep learning theories, it is now possible to use a single parameter set for the segmentation of all annotations present in the image.
Additionally, there is potential for further research in the area of language-guided segmentation models, which would provide a more precise and flexible interface for medical professionals.
We find building a model that incorporates these innovations intriguing.
We also noted that our model was trained in a self-supervised manner, meaning it has potentially gained a strong understanding of the structural features of ultrasonic images.
This understanding is beneficial for downstream models such as object detection model.
Different ways of fine-tuning, like Low-Rank Adaptation (LoRA), adapter layers, etc. should be explored to find the optimal method to effectively transfer this understanding.
We plan to address these issues in future studies.
|
http://arxiv.org/abs/2307.04297v1 | 20230710012706 | AT 2023clx: the Faintest and Closest Optical Tidal Disruption Event Discovered in Nearby Star-forming Galaxy NGC 3799 | [
"Jiazheng Zhu",
"Ning Jiang",
"Tinggui Wang",
"Shifeng Huang",
"Zheyu Lin",
"Yibo Wang",
"Jian-Guo Wang"
] | astro-ph.HE | [
"astro-ph.HE"
] |
0000-0003-3824-9496]Jiazheng Zhu
CAS Key laboratory for Research in Galaxies and Cosmology,
Department of Astronomy, University of Science and Technology of China,
Hefei, 230026, China; [email protected], [email protected]
School of Astronomy and Space Sciences,
University of Science and Technology of China, Hefei, 230026, China
0000-0002-7152-3621]Ning Jiang
CAS Key laboratory for Research in Galaxies and Cosmology,
Department of Astronomy, University of Science and Technology of China,
Hefei, 230026, China; [email protected], [email protected]
School of Astronomy and Space Sciences,
University of Science and Technology of China, Hefei, 230026, China
0000-0002-1517-6792]Tinggui Wang
CAS Key laboratory for Research in Galaxies and Cosmology,
Department of Astronomy, University of Science and Technology of China,
Hefei, 230026, China; [email protected], [email protected]
School of Astronomy and Space Sciences,
University of Science and Technology of China, Hefei, 230026, China
0000-0001-7689-6382]Shifeng Huang
CAS Key laboratory for Research in Galaxies and Cosmology,
Department of Astronomy, University of Science and Technology of China,
Hefei, 230026, China; [email protected], [email protected]
School of Astronomy and Space Sciences,
University of Science and Technology of China, Hefei, 230026, China
0000-0003-4959-1625]Zheyu Lin
CAS Key laboratory for Research in Galaxies and Cosmology,
Department of Astronomy, University of Science and Technology of China,
Hefei, 230026, China; [email protected], [email protected]
School of Astronomy and Space Sciences,
University of Science and Technology of China, Hefei, 230026, China
0000-0003-4225-5442]Yibo Wang
CAS Key laboratory for Research in Galaxies and Cosmology,
Department of Astronomy, University of Science and Technology of China,
Hefei, 230026, China; [email protected], [email protected]
School of Astronomy and Space Sciences,
University of Science and Technology of China, Hefei, 230026, China
0000-0003-4156-3793]Jian-Guo Wang
Yunnan Observatories, Chinese Academy of Sciences, Kunming 650011, PR China
Key Laboratory for the Structure and Evolution of Celestial Objects, Yunnan Observatories, Kunming 650011, China
We report the discovery of a faint optical tidal disruption event (TDE) in the nearby star-forming galaxy NGC 3799. Identification of the TDE is based on its position at the galaxy nucleus, a light curve declining as t^-5/3, a blue continuum with an almost constant blackbody temperature of ∼12,000 K, and broad (≈15,000 km s^-1) Balmer lines and characteristic He II 4686 Å emission. The light curve of AT 2023clx peaked at an absolute magnitude of -17.16 mag in the g band and a maximum blackbody bolometric luminosity of 4.56×10^42 erg s^-1, making it the faintest TDE discovered to date. With a redshift of 0.01107 and a corresponding luminosity distance of 47.8 Mpc, it is also the closest optical TDE ever discovered to our best knowledge. Furthermore, our analysis of Swift/XRT observations of AT 2023clx yields a very tight 3σ upper limit of 9.53×10^39 erg s^-1 in the range 0.3–10 keV. AT2023clx, together with very few other faint TDEs such as AT 2020wey, prove that there are probably a large number of faint TDEs yet to be discovered at higher redshifts, which is consistent with the prediction of luminosity functions (LFs). The upcoming deeper optical time-domain surveys, such as the Legacy Survey of Space and Time (LSST) and the Wide-Field Survey Telescope (WFST) will discover more TDEs at even lower luminosities, allowing for a more precise constraint of the low-end of the LF.
§ INTRODUCTION
A tidal disruption event (TDE) is the phenomenon observed
when a star comes too close to a supermassive black hole (SMBH). The star is tidally disrupted and produces a radiation flare peaking in the ultraviolet (UV) to soft X-ray band, which usually occurs in the core of the galaxy (). Although the first TDE was detected in X-rays (), optical surveys have gradually dominated the discovery
of TDEs recently, especially since the operation of the Zwicky transient facility (). Moreover, a growing number of TDEs discovered in the infrared bands suggest that a considerable fraction of TDEs could be obscured by dust, which is missed by optical and X-ray surveys but can be revealed by their dust-reprocessed emission ().
Recently, <cit.> conducted a systematic analysis of demographics of TDEs using a sample of 33 optically selected TDEs from the ZTF survey over three years. They found an average peak of <M_g,peak>=-19.91 mag for the g-band. However, there are a few nearby TDEs that are significantly fainter, such as iPTF16fnl at 70.8 Mpc () and AT 2019qiz at 65.6 Mpc (), which belong to the spectroscopic class of H+He TDEs with Bowen fluorescence lines. <cit.> subsequently reported the faintest TDE in the ZTF sample at 119.7 Mpc, AT 2020wey, and found that these three fast-decaying H+He TDEs lacked any other common properties. The diversity indicates that a large sample is needed to pin down the nature of these faint TDEs.
In reality, faint TDEs could constitute the largest population of all TDEs, and we may be biased toward finding bright nuclear flares due to flux-limited wide-field surveys ().
In this letter, we present the discovery of the faintest and closest optical TDE so far in the nearby galaxy NGC 3799 at a redshift of 0.01107, which corresponds to a distance of 47.8 Mpc. This event was initially detected by the All Sky Automated Survey for SuperNovae (ASAS-SN; ), and it was suspected to be a TDE that could still be rising by <cit.> on February 26, 2023. We describe our follow-up observations and data reduction in Section 2, followed by the analysis of the photometric properties of AT 2023clx and the identification of it as a robust TDE candidate combined with spectral characteristics in Section 3. Finally, we briefly discuss our results and draw conclusions in Sections 4 and 5. We assume a cosmology with H_0 =70 km s^-1 Mpc^-1, Ω_m = 0.3, and Ω_Λ = 0.7.
§ OBSERVATIONS AND DATA
§.§ Ground-based Optical Photometry
We initiated optical ugri band follow-up observations of AT 2023clx with the 1.0m Las Cumbres Observatory Global Telescope network (LCOGT; ) immediately after the event was reported on the Transient Name Server on 2023-02-26 UT (). We used PanSTARRS () gri band stack images as reference images and employed HOTPANTS <cit.> for image subtraction. Prior to subtraction, we removed cosmic rays and aligned the images using Astrometry.net. After image subtraction, we performed point spread function (PSF) photometry on the difference images using the Photutils package of Astropy <cit.> for the gri data. For the u band, we performed aperture photometry with a 5″ aperture and then subtracted the host galaxy magnitude (16.69±0.02 mag) measured with the same 5″ aperture in the rescaled Sloan Digital Sky Survey (SDSS; ) u-band image.
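The aperture measurement can be sketched with photutils as below; the file name, source position, pixel scale and zero point are placeholders, not the actual calibration of the LCO frames.

import numpy as np
from astropy.io import fits
from photutils.aperture import CircularAperture, aperture_photometry

data = fits.getdata('lco_u_band.fits')   # placeholder file name
xc, yc = 512.0, 512.0                    # pixel position of the nucleus (assumed)
pixel_scale = 0.39                       # arcsec per pixel (assumed)
zero_point = 25.0                        # photometric zero point (assumed)

aperture = CircularAperture([(xc, yc)], r=5.0 / pixel_scale)   # 5" radius
phot = aperture_photometry(data, aperture)
mag = zero_point - 2.5 * np.log10(phot['aperture_sum'][0])
# The host contribution (u = 16.69 mag in the same 5" aperture) is then
# subtracted in flux space to isolate the transient.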
We measured the position of AT 2023clx in the difference image of the LCO r-band image obtained on 2023-02-26 UT and the centroid of its host galaxy in the PanSTARRS r-band reference image by measuring the barycenter with SExtractor. The offset between them is 0.21±0.17 arcsec, taking into account the uncertainty introduced by image alignment. This corresponds to a physical offset of 49±40 pc at the distance of NGC 3799. Therefore, we conclude that AT 2023clx is consistent with the center of the galaxy at the resolution of the LCO images, making it a potential TDE candidate.
We were intrigued by this event because we believed that it was still in the rising stage, given that its luminosity was 2 magnitudes fainter than the average peak luminosity of optical TDEs <cit.>. However, our initial two observations indicated that it was already in decline, suggesting that it might be a rarely-discovered faint TDE. In fact, its peak absolute magnitude in the g band (M_g= -17.16) makes it one of the faintest TDEs discovered thus far, comparable to iPTF16fnl (; M_g= -17.20).
§.§ Swift/UVOT photometry
UV images were obtained with the Neil Gehrels Swift Observatory (hereafter Swift) with the Ultra-Violet Optical Telescope (UVOT). The Swift photometry (PIs: Gomez, Huang, Leloudas, and Wevers) was measured with the UVOTSOURCE task in the HEASoft package using 5″ apertures, after subtracting the galaxy background using Swift/UVOT images taken on July 18, 2010.
The photometry was placed in the AB magnitude system <cit.>, adopting the revised zero points and sensitivity of <cit.>.
§.§ Swift/XRT photometry
The X-ray Telescope (XRT) photometry was performed using XRTPIPELINE and XRTPRODUCTS. We used a circle with a radius of 47.1″ as the source region and an annulus with an inner radius of 100″ and an outer radius of 200″ as the background. The source was X-ray faint in the observations, and we assumed an absorbed power-law spectrum with an index of Γ=1.75 <cit.> and a Galactic hydrogen column density of 2.51× 10^20 cm^-2 <cit.>. We then derived the 3σ upper limit for the flux in the 0.3–10.0 keV range using WebPIMMS[<https://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3pimms/w3pimms.pl>]. In addition, we stacked the 26 event files ranging from MJD 60002.74 to 60089.23, with a total exposure time of 35.79 ks, deriving a 3σ upper limit on the mean luminosity of 9.53× 10^39 erg s^-1.
§.§ Archival photometry Data
We also collected host-subtracted light curves of AT 2023clx from public time domain surveys, including data from the Asteroid Terrestrial Impact Last Alert System (ATLAS; ), the Zwicky Transient Facility (ZTF; ) and the All Sky Automated Survey for SuperNovae (ASAS-SN, ).
The ATLAS c- and o-band light curves were obtained using the ATLAS Forced Photometry Service, which produces PSF photometry on the difference images. ATLAS has 3-4 single exposures within each epoch (typically within one day), so we binned the light curve every epoch to improve the signal-to-noise ratio (SNR). The ZTF light curves were obtained using the Lasair alert broker[Website link: https://lasair-ztf.lsst.ac.uk/] (). The ASAS-SN host-subtracted g-band light curves were obtained using the ASAS-SN Sky Patrol photometry pipeline. We also collected and binned the photometry data of ASAS-SN. Considering the depth of g band of ASAS-SN is roughly 18.5 mag (), we only use the photometry data brighter than 18 mag because the data fainter than 18 mag have larger error than our LCO and ZTF photometry.
All light curves, after correction for Galactic extinction, are shown in Figure <ref>. We assumed a <cit.> extinction law with R_V=3.1 and a Galactic extinction of E(B-V)=0.0268±0.0003 mag ().
For better comparison of the peak g-band luminosity between AT 2023clx and the ZTF TDE sample, we fitted the light-curve profile of AT 2023clx following the method of <cit.> (see Section 3.2 for details). The rise function was poorly constrained with either a Gaussian or a power-law function because only three points were detected by ASAS-SN; hence, we also used the upper limits to constrain the rise. As shown in Figure <ref>, AT 2023clx can roughly be described by a power-law rise (index n = 0.64) and a (t-t_0)^-5/3 power-law decline (t_0= MJD 59977±2 d) with a peak g-band luminosity of logL_g= 42.31, making it the faintest TDE discovered by the ZTF survey to date when compared with <cit.>.
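One simple way to implement such a fit is sketched below with SciPy; the piecewise parameterization (a power-law rise anchored to the same reference time t_0 as the decline) is a simplification of this sketch and not necessarily the exact functional form of the method we followed.

import numpy as np
from scipy.optimize import curve_fit

def tde_light_curve(t, t_peak, L_peak, n, t0):
    # Power-law rise up to t_peak, then a (t - t0)^(-5/3) decline,
    # both normalized so the model is continuous at the peak.
    lum = np.empty_like(t, dtype=float)
    rise = t <= t_peak
    lum[rise] = L_peak * ((t[rise] - t0) / (t_peak - t0)) ** n
    lum[~rise] = L_peak * ((t[~rise] - t0) / (t_peak - t0)) ** (-5.0 / 3.0)
    return lum

# Illustrative call; mjd and lum_g would hold the binned g-band data.
# popt, pcov = curve_fit(tde_light_curve, mjd, lum_g,
#                        p0=[59995.0, 2e42, 0.64, 59977.0])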
§.§ Optical Spectra Observation and Data Reduction
Five spectra were acquired for AT 2023clx, including one from SDSS DR7, one observed by <cit.> with the SEIMEI telescope (collected from the TNS website: https://www.wis-tns.org/object/2023clx) on 2023-02-26 UT, one observed by the ZTF group with the Keck telescope (the ZTF group uploaded this spectrum to another transient by mistake: https://www.wis-tns.org/object/2018meh) on 2023-03-20 UT, and two obtained by ourselves with YFOSC onboard the LiJiang 2.4m telescope () on 2023-03-03 UT and 2023-03-13 UT.
We reduced the LJT spectra according to the standard procedure for long-slit spectra with PyRAF, a Python-based package of IRAF(). All spectra are shown in Figure <ref>.
§ ANALYSIS AND RESULTS
§.§ Photometric and Spectral Analysis
First, we use the package SUPERBOL ()
to fit the spectral energy distribution (SED) of AT 2023clx with a blackbody model. The observing cadence after MJD 60040 in the r, i and u bands is quite sparse, so we only use third-order polynomials to interpolate the g-, c- and o-band light curves in the late phase to match the Swift epochs. The evolution of the blackbody luminosity, temperature, and effective radius is shown in Figure <ref>, compared with other faint TDEs.
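The core of such a blackbody fit can be sketched as below (in cgs units); this is only an illustration of what SUPERBOL does internally, with the distance and initial guesses taken from values quoted in this paper.

import numpy as np
from scipy.optimize import curve_fit

H, C, KB, SIGMA = 6.626e-27, 2.998e10, 1.381e-16, 5.670e-5  # cgs constants
D_CM = 47.8 * 3.086e24                                      # 47.8 Mpc in cm

def blackbody_fnu(nu, T, R):
    # Observed flux density (erg s^-1 cm^-2 Hz^-1) of a sphere of radius R (cm)
    # and temperature T (K) at distance D_CM.
    bnu = (2.0 * H * nu**3 / C**2) / np.expm1(H * nu / (KB * T))
    return np.pi * bnu * (R / D_CM) ** 2

# Illustrative call; nu_obs and fnu_obs would hold one epoch of photometry.
# (T, R), _ = curve_fit(blackbody_fnu, nu_obs, fnu_obs, p0=[1.2e4, 5e14])
# L_bb = 4.0 * np.pi * R**2 * SIGMA * T**4   # blackbody luminosity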
The optical/UV light curves of TDEs usually show a monthly rise followed by a decay with a timescale of months to years, sometimes following a t^-5/3 power-law decline (). In the case of AT 2023clx, it exhibited a similar decline behavior that was consistent with a t^-5/3 power-law decline in both the g band and the bolometric light curves (see Figures <ref> and <ref>). The blackbody temperature was approximately 11,000–13,000 K, which slowly decreased at the peak and was followed by a slow return to 13,000 K. The blackbody radius R_BB was approximately 4.61 × 10^14 cm at the peak, followed by a decline. It is worth noting that AT2023clx has the lowest temperature among these faint TDEs, albeit with a medium blackbody radius. As a result, the maximum blackbody luminosity of AT 2023clx is only (4.56±0.53) × 10^42 erg s^-1, making it even fainter than previous low-luminosity TDEs (e.g., iPTF16fnl; , AT 2019qiz; , AT 2020wey; ). Furthermore, these faint TDEs (e.g., AT 2020wey; ) all exhibit fast declining light curves that are much steeper than the canonical t^-5/3. In the case of AT 2023clx, the decline appears to be less extreme than that of this subclass of TDEs. However, the limited number of samples at present cannot reveal their true statistical characteristics.
We then compared the optical spectra of these low-luminosity TDEs in Figure <ref>. Clearly, all these spectra near the peak display a blue continuum with broad H_α and H_β emission, as well as the typical TDE emission line He II 4686 Å. The broad H_α is blended with He I 6678 Å, and broad He I 5876 Å line was also evident in the +26 d Keck spectrum. The full width at half maximum (FWHM) of the broad Balmer lines is ≈15,000 km s^-1. The H_α profile is asymmetric, with a blueshifted peak at +4.3 d and a redshifted peak at +26 d (see Figure <ref>). A sharp peak on the blue side coincides in wavelength with [O I] 6300 Å, which would indicate the presence of low ionization, low density, and slow-moving gas.
The red side may include some contribution from He I 6678 Å, but this is likely not a major contributor, as we see only a very weak He I 5876 Å, similar to AT 2019qiz (). <cit.> eventually attributed the evolution of such H_α profiles of AT 2019qiz to an outflow.
In addition, given the asymmetric broad H_α profile, N III 4640 Å could also be blended with He II and H_β. Furthermore, most of the spectra in Figure <ref> do not cover the range of N III 4100 Å, and it is hard to confirm whether the weak feature at +26 d is real. Thus, N III 4100 Å cannot be ruled out either.
However, the lack of low-ionization metal features (e.g., oxygen and calcium) in the spectra near the peak almost excludes the typical Type-II supernova scenario (). Type-IIn supernovae also show strong broad Balmer lines, sometimes accompanied by intermediate-width high-ionization lines that can be regarded as a result of CSM interaction (e.g., SN 2005ip, SN 2006jd and SN 2010jl; ). If the CSM interaction is relatively weak compared to the above three events, Fe II and Ca II P-Cygni lines appear again in their spectra (). Furthermore, the optical light curves of Type-IIn supernovae are relatively long-lived compared to that of AT 2023clx, and normal supernovae do not stay at temperatures as high as 13,000 K for months.
Therefore, AT 2023clx is consistent with a faint TDE scenario in both aspects of light curves and spectroscopic properties. Based on this, we classify AT 2023clx as a robust faint hydrogen and helium (H+He) optical TDE.
§.§ Host-galaxy Properties
NGC 3799, the host of AT 2023clx, is a well-resolved face-on spiral galaxy (see top panels of Figure <ref>). The pre-outburst SDSS spectra centered on the galaxy nucleus place it in the locus of low-ionization nuclear emission-line regions (LINERs) in the Baldwin-Phillips-Terlevich (BPT) diagram () based on its narrow-line ratios, indicating weak active galactic nucleus (AGN) activity, albeit without broad emission lines.
<cit.> performed a spectral energy distribution (SED) analysis of a sample of 189 nearby galaxies, including NGC 3799. Their fitting considered the AGN emission and obtained a stellar mass of log(M_*/M_⊙) = 9.87±0.02 and a star formation rate (SFR) of logSFR = -0.09±0.02 for NGC 3799. Consequently, it is located on the main sequence of star-forming (SF) galaxies (; see Figure <ref>). AT 2023clx is thus a TDE in a typical SF galaxy, whereas known optical TDEs show a preference for post-starburst (or green-valley) galaxies (). Additionally, the fractional AGN contribution of NGC 3799 is indeed small (f_AGN≲0.2) according to their fitting. We emphasize that the overall star formation activity does not conflict with its LINER classification in the BPT diagram, since the central regions of SF galaxies are commonly found to be quenched (). Using the empirical relation between M_BH and the total galaxy stellar mass in the local universe ():
log(M_BH/M_⊙)=α + β log(M_stellar/10^11M_⊙)
The central black hole mass M_BH in NGC 3799 is thus estimated to be 10^6.26±0.28M_⊙, using α=7.45±0.08 and β=1.05±0.11.
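For transparency, the central-value arithmetic behind this estimate is simply
log(M_BH/M_⊙) = 7.45 + 1.05 × (9.87 - 11) = 7.45 - 1.19 ≈ 6.26, i.e., M_BH ≈ 10^6.26 M_⊙.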
§ DISCUSSION
Although only a handful of TDEs have been discovered to be faint (M_g>-18), the luminosity functions (LFs) of optical TDEs suggest that their real number should be large, as all of these LFs show a rising trend toward the low end (). However, an accurate LF profile requires more precise constraints, particularly at the faint end, which calls for the discovery of more faint TDEs.
As one of the faintest TDEs discovered by the ZTF survey, AT 2023clx could provide the first data point at L_g ≲ 10^42.4 erg s^-1. We also used the "1/𝒱_max" method described in <cit.> to estimate the volumetric rate, considering the latest TDE selection criteria from the ZTF survey (). The 𝒱_max is defined as:
𝒱_max ≡ V(z_max) × A_survey × τ_survey
For the survey duration (τ_survey), we set a starting date of 2018-10-01 UT and an end date of 2023-05-01 UT. Consistent with <cit.>, the effective survey area is set to A ≈ 15000 deg^2. We binned the two faintest ZTF TDEs (AT 2020wey and AT 2023clx) to estimate the volumetric rate at the faint end (L_g ≲ 10^42.5 erg s^-1) and compared it with known optical TDE LFs (see Figure <ref>). We found that our result is about a factor of two higher than that measured by <cit.>, which is easily understood since they had only AT 2020wey when they constructed the LF. The data, although with a large uncertainty, are consistent with both the double power-law fit of <cit.> and the single power-law fit of <cit.>. However, the volumetric rate at the low end is much lower than that of <cit.>. The general overestimation in <cit.> is probably due to sources that only have post-peak light curves in their sample ().
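A schematic of this estimator is sketched below; it is only an illustration (the choice of the Planck18 cosmology and the per-event z_max values are assumptions on our part, and duty-cycle corrections are ignored).

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck18 as cosmo

FULL_SKY_DEG2 = 41252.96  # square degrees on the full sky

def one_over_vmax_rate(z_max_list, area_deg2=15000.0, tau_yr=4.6):
    """Schematic 1/V_max volumetric rate [Gpc^-3 yr^-1] for a bin of events."""
    sky_fraction = area_deg2 / FULL_SKY_DEG2
    rate = 0.0
    for z_max in z_max_list:  # z_max at which each event still passes the cuts (placeholders)
        V = cosmo.comoving_volume(z_max).to(u.Gpc ** 3).value  # full-sky comoving volume
        rate += 1.0 / (V * sky_fraction * tau_yr)
    return rate
```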
Furthermore, we used the double power-law TDE luminosity function measured by <cit.>, which is consistent with our result, to estimate the proportion of faint TDEs. Events that are fainter than or comparably faint to AT 2019qiz constitute ∼74% of the population over the currently observed g-band peak luminosity range (logL_g ∼ 42.3-44.7). This fraction is greater than the estimate obtained by <cit.> (roughly 50–60%). Again, we conclude that faint TDEs are not rare by nature but constitute a large portion of the entire population, as mentioned by <cit.>.
AT 2023clx might indicate that the contribution of the faint end is even greater than previously thought.
It is worth noting that the distance used in this work is taken as a simple cosmological deduction from the redshift. Precise distance measurements for nearby galaxies are challenging, and thus previous works on faint nearby TDEs all adopted the same approach as ours (e.g., 66.6 Mpc for iPTF16fnl; , 65.6 Mpc for AT 2019qiz; ).
However, at these short distances, peculiar velocities and some redshift-independent distances are not negligible, which might significantly affect the luminosity estimate of AT 2023clx. <cit.> performed a flow-field correction on the nearby secondary distance indicators, including three attractors: the Local Supercluster, the Great Attractor and the Shapley Supercluster. The recessional velocity of NGC 3799 corrected in this way is 3823±31 km/s and the corrected distance is 54.6±4.6 Mpc, resulting in a peak g-band magnitude of -17.45±0.12 mag and a peak blackbody luminosity of L_bb=(5.95±1.23) × 10^42 erg s^-1 for AT 2023clx. Although the luminosity increases by ∼30%, it remains lower than that of iPTF16fnl (1.0±0.15× 10^43 erg s^-1; ) and AT 2020wey (8.74±0.69× 10^42 erg s^-1; ).
We carefully examined other distance estimates for NGC3799 from the NASA/IPAC Extragalactic Database (NED), such as a 3K cosmic microwave background (CMB) correction distance of 52.3±3.8 Mpc and a Galactocentric (GSR) distance of 46.4±3.4 Mpc. They are all less than the above value of 54.6±4.6 Mpc and thus lead to a lower correction of luminosity. Based on this, we confidently conclude that AT 2023clx is the faintest optical TDE observed to date.
§ CONCLUSION
In this work, we report the discovery of a new faint TDE in NGC 3799, a main-sequence star-forming galaxy located at a distance of only ∼50 Mpc. It holds the lowest peak blackbody luminosity and the closest distance among all optical TDEs discovered up to now. The main properties of AT 2023clx discovered by us are summarized below:
∙ The peak blackbody luminosity L_bb=(4.56±0.53) × 10^42 erg s^-1 is lower than that of all other low-luminosity TDEs, although its absolute magnitude in the g band (M_g=-19.16) is comparable to iPTF16fnl. It is also the closest optical TDE with a redshift of 0.01107 or a luminosity distance of 47.8 Mpc.
∙ Both the optical/UV light curves and the bolometric light curve show a t^-5/3 power-law decay after the peak, which is not as fast as that of other faint TDEs discovered before.
∙ AT 2023clx was not detected in X-rays by Swift/XRT in any single observation or even in the stacked image. This yields a very tight 3σ upper limit of 9.53×10^39 erg s^-1 in the range of 0.3–10 keV.
∙ The spectra taken around the optical peak show a strong blue continuum and broad Balmer lines blended with helium features (FWHM ≈15,000 km s^-1), which is reminiscent of other faint TDEs.
AT 2023clx is the second optical TDE with a peak g band luminosity of L_g10^42.5 erg s^-1 in the ZTF survey. The addition of AT 2023clx increases the volumetric rate at the extreme faint end by a factor of two compared to that given by <cit.>. This finding further demonstrates that the luminosity function (LF) is continuously rising towards the low end, and there are likely many more faint TDEs waiting to be discovered. The rarity of reported faint TDEs is simply due to selection bias, as we are biased towards finding bright events due to the flux-limited surveys, as mentioned by <cit.>. The upcoming deeper surveys, such as the Legacy Survey of Space and Time (LSST; ) and the Wide-Field Survey Telescope (WFST; ), will undoubtedly find more faint TDEs, which will help us measure their rate and constrain the lower end of the TDE LF more precisely.
We thank the anonymous referee for an extremely quick response and for providing valuable comments, which help to improve the manuscript. This work is supported by the SKA Fast Radio Burst and High-Energy Transients Project (2022SKA0130102), the National Natural Science Foundation of China (grants 11833007, 12073025, 12192221), and the 111 Project for "Observational and Theoretical Research on Dark Matter and Dark Energy" (B23042). We acknowledge the support of the Cyrus Chun Ying Tang Foundations. This research uses data obtained
through the Telescope Access Program (TAP), which has been
funded by the TAP member institutes. The authors acknowledge the support of the Lijiang 2.4m telescope staff. Funding for the telescope has been provided by the Chinese Academy of Sciences and the People's Government of Yunnan Province. The ZTF forced photometry service was funded under the Heising-Simons Foundation grant #12540303 (PI: Graham).
[Arcavi et al.(2014)]Arcavi2014 Arcavi, I., Gal-Yam, A., Sullivan, M., et al. 2014, , 793, 38. doi:10.1088/0004-637X/793/1/38
[Astropy Collaboration et al.(2022)]Astropy Astropy Collaboration, Price-Whelan, A. M., Lim, P. L., et al. 2022, , 935, 167
[Bade et al.(1996)]Bade1996 Bade, N., Komossa, S., & Dahlem, M. 1996, , 309, L35
[Baldwin et al.(1981)]Baldwin1981 Baldwin, J. A., Phillips, M. M., & Terlevich, R. 1981, , 93, 5
[Bellm et al.(2019)]Bellm2019 Bellm, E. C., Kulkarni, S. R., Graham, M. J., et al. 2019, , 131, 018002
[Blagorodnova et al.(2017)]Blagorodnova2017 Blagorodnova, N., Gezari, S., Hung, T., et al. 2017, , 844, 46
[Becker(2015)]Becker2015 Becker, A. 2015, Astrophysics Source Code Library. ascl:1504.004
[Breeveld et al.(2011)]Breeveld2011 Breeveld, A. A., Landsman, W., Holland, S. T., et al. 2011, Gamma Ray Bursts 2010, 1358, 373
[Brown et al.(2013)]LCOGT Brown, T. M., Baliber, N., Bianco, F. B., et al. 2013, , 125, 1031
[Brown et al.(2018)]Brown2018 Brown, J. S., Kochanek, C. S., Holoien, T. W.-S., et al. 2018, , 473, 1130
[Cardelli et al.(1989)]Cardelli1989 Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, , 345, 245
[Chang et al.(2015)]Chang2015 Chang, Y.-Y., van der Wel, A., da Cunha, E., et al. 2015, , 219, 8
[Charalampopoulos et al.(2023)]Charalampopoulos2023 Charalampopoulos, P., Pursiainen, M., Leloudas, G., et al. 2023, , 673, A95
[de Jaeger et al.(2022)]deJ2022 de Jaeger, T., Shappee, B. J., Kochanek, C. S., et al. 2022, , 509, 3427. doi:10.1093/mnras/stab3141
[Ellison et al.(2018)]Ellison2018 Ellison, S. L., Sánchez, S. F., Ibarra-Medel, H., et al. 2018, , 474, 2039
[Fan et al.(2015)]Fan2015-2m4 Fan, Y.-F., Bai, J.-M., Zhang, J.-J., et al. 2015, Research in Astronomy and Astrophysics, 15, 918
[Filippenko (1997)]Filippenko97 Filippenko, A. V. 1997, ARA&A, 35, 309
[Flewelling et al.(2020)]PS1 Flewelling, H. A., Magnier, E. A., Chambers, K. C., et al. 2020, , 251, 7
[Fransson et al.(2014)]SN2010jl Fransson, C., Ergon, M., Challis, P. J., et al. 2014, , 797, 118
[French, Arcavi & Zabludoff (2016)]French2016 French, K. D., Arcavi, I., & Zabludoff, A. 2016, , 818, L21. doi:10.3847/2041-8205/818/1/L21
[French et al.(2020)]French2020 French, K. D., Wevers, T., Law-Smith, J., et al. 2020, , 216, 32. doi:10.1007/s11214-020-00657-y
[Gezari(2021)]Gezari2021 Gezari, S. 2021, , 59, 21
[Gunn et al.(2006)]Gunn2006 Gunn, J. E., Siegmund, W. A., Mannery, E. J., et al. 2006, , 131, 2332
[Hammerstein et al.(2021)]Hammerstein2021 Hammerstein, E., Gezari, S., van Velzen, S., et al. 2021, , 908, L20
[HI4PI Collaboration et al.(2016)]HI4PI2016 HI4PI Collaboration, Ben Bekhti, N., Flöer, L., et al. 2016, , 594, A116
[Ivezić et al.(2019)]Ivezic2019 Ivezić, Ž., Kahn, S. M., Tyson, J. A., et al. 2019, , 873, 111
[Jiang et al.(2021)]Jiang2021 Jiang, N., Wang, T., Dou, L., et al. 2021, , 252, 32
[Lin et al.(2022a)]Lin2022a Lin, Z., Jiang, N., & Kong, X. 2022a, , 513, 2422
[Lin et al.(2022b)]Lin2022b Lin, Z., Jiang, N., Kong, X., et al. 2022b, , 939, L33
[Kochanek et al.(2017)]Kochanek2017 Kochanek, C. S., Shappee, B. J., Stanek, K. Z., et al. 2017, , 129, 104502
[Masci et al.(2019)]Masci2019 Masci, F. J., Laher, R. R., Rusholme, B., et al. 2019, , 131, 018003
[Mattila et al.(2018)]Mattila2018 Mattila, S., Pérez-Torres, M., Efstathiou, A., et al. 2018, Science, 361, 482
[Mould et al.(2000)]Mould2000 Mould, J. R., Huchra, J. P., Freedman, W. L., et al. 2000, , 529, 786. doi:10.1086/308304
[Nicholl(2018)]Nicholl2018 Nicholl, M. 2018, Research Notes of the American Astronomical Society, 2, 230
[Nicholl et al.(2020)]Nicholl2020 Nicholl, M., Wevers, T., Oates, S. R., et al. 2020, , 499, 482
[Oke & Gunn (1983)]OkeGunn83 Oke, J. B., & Gunn, J. E. 1983, ApJ, 266, 713
[Onori et al.(2019)]Onori2019 Onori, F., Cannizzaro, G., Jonker, P. G., et al. 2019, , 489, 1463
[Ramos Padilla et al.(2020)]Ramos2020 Ramos Padilla, A. F., Ashby, M. L. N., Smith, H. A., et al. 2020, , 499, 4325
[Rees(1988)]Rees1988 Rees, M. J. 1988, , 333, 523
[Reines & Volonteri(2015)]Reines2015 Reines, A. E. & Volonteri, M. 2015, , 813, 82
[Ricci et al.(2017)]ricci2017 Ricci, C., Trakhtenbrot, B., Koss, M. J., et al. 2017, , 233, 17
[Schlafly & Finkbeiner(2011)]Schlafly2011 Schlafly, E. F. & Finkbeiner, D. P. 2011, , 737, 103
[Shappee et al.(2014)]Shappee2014 Shappee, B. J., Prieto, J. L., Grupe, D., et al. 2014, , 788, 48
[Smith et al.(2019)]Smith2019 Smith, K. W., Williams, R. D., Young, D. R., et al. 2019, Research Notes of the American Astronomical Society, 3, 26
[Smith et al.(2020)]Smith2020 Smith, K. W., Smartt, S. J., Young, D. R., et al. 2020, , 132, 085002
[Stritzinger et al.(2012)]SN2005ip SN2006jd Stritzinger, M., Taddia, F., Fransson, C., et al. 2012, , 756, 173
[Tacchella et al.(2015)]Tacchella2015 Tacchella, S., Carollo, C. M., Renzini, A., et al. 2015, Science, 348, 314
[Taguchi et al.(2023)]Taguchi2023 Taguchi, K., Uno, K., Nagao, T., et al. 2023, Transient Name Server Classification Report, 2023-438
[Taddia et al.(2013)]Taddia2013 Taddia, F., Stritzinger, M. D., Sollerman, J., et al. 2013, , 555, A10
[Tody(1986)]Tody1986 Tody, D. 1986, , 627, 733
[Tody(1993)]Tody1993 Tody, D. 1993, Astronomical Data Analysis Software and Systems II, 52, 173
[Tonry et al.(2018)]Tonry2018 Tonry, J. L., Denneau, L., Heinze, A. N., et al. 2018, , 130, 064505
[Veilleux & Osterbrock(1987)]Veilleux1987 Veilleux, S. & Osterbrock, D. E. 1987, , 63, 295
[van Velzen(2018)]Velzen2018 van Velzen, S. 2018, , 852, 72
[van Velzen et al.(2021)]Velzen2021 van Velzen, S., Gezari, S., Hammerstein, E., et al. 2021, , 908, 4
[Wang et al.(2019)]Wang2019-2m4 Wang, C.-J., Bai, J.-M., Fan, Y.-F., et al. 2019, Research in Astronomy and Astrophysics, 19, 149
[WFST Collaboration et al.(2023)]wfst2023 WFST Collaboration, Wang, T., Liu, G., et al. 2023, arXiv:2306.07590
[Yao et al.(2023)]Yao2023 Yao, Y., Ravi, V., Gezari, S., et al. 2023, arXiv:2303.06523
|
http://arxiv.org/abs/2307.04002v1 | 20230708160353 | Energy-Efficient Beamforming Design for Integrated Sensing and Communications Systems | [
"Jiaqi Zou",
"Songlin Sun",
"Christos Masouros",
"Yuanhao Cui",
"Yafeng Liu",
"Derrick Wing Kwan Ng"
] | eess.SP | [
"eess.SP"
] |
Energy-Efficient Beamforming Design for Integrated Sensing and Communications Systems
Jiaqi Zou, Graduate Student Member, IEEE, Songlin Sun, Senior Member, IEEE, Christos Masouros, Senior Member, IEEE, Yuanhao Cui, Member, IEEE,
Ya-Feng Liu, Senior Member, IEEE, and Derrick Wing Kwan Ng, Fellow, IEEE
Part of this work has been submitted to the IEEE Global Communications Conference (GLOBECOM 2023) for possible presentation <cit.>.
Jiaqi Zou is with the School of Information and Communication Engineering, Beijing University of Posts and Telecommunications (BUPT), Beijing 100876, China, and also with the Department of Electrical and Electronic Engineering, University College London, London WC1E 7JE, UK (e-mail: [email protected]).
Songlin Sun and Yuanhao Cui are with Beijing University of Posts and Telecommunications (BUPT), Beijing, China (e-mail: [email protected], [email protected]).
Christos Masouros is with the Department of Electrical and Electronic Engineering, University College London, WC1E 7JE, UK (e-mail: [email protected]).
Ya-Feng Liu is with the State Key Laboratory of Scientific and Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China (e-mail: [email protected])
Derrick Wing Kwan Ng is with the School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, NSW 2052, Australia (e-mail: [email protected]).
August 12, 2023
In this paper, we investigate the design of energy-efficient beamforming for an ISAC system, where the transmitted waveform is optimized for joint multi-user communication and target estimation simultaneously.
We aim to maximize the system energy efficiency (EE), taking into account the constraints of a maximum transmit power budget, a minimum required signal-to-interference-plus-noise ratio (SINR) for communication, and a maximum tolerable Cramér-Rao bound (CRB) for target estimation.
We first consider communication-centric EE maximization.
To handle the non-convex fractional objective function, we propose an iterative quadratic-transform-Dinkelbach method, where Schur complement and semi-definite relaxation (SDR) techniques are leveraged to solve the subproblem in each iteration.
For scenarios where sensing is critical, we propose a novel performance metric for characterizing the sensing-centric EE and optimize this metric in the scenarios of sensing a point-like target and an extended target, respectively.
To handle the nonconvexity, we employ the successive convex approximation (SCA) technique to develop an efficient algorithm for approximating the nonconvex problem as a sequence of convex ones.
Furthermore, we adopt a Pareto optimization mechanism to articulate the tradeoff between the communication-centric EE and sensing-centric EE. We formulate the search of the Pareto boundary as a constrained optimization problem and propose a computationally efficient algorithm to handle it.
Numerical results validate the effectiveness of the proposed algorithms compared with the baseline schemes, and the obtained approximate Pareto boundary shows that there is a non-trivial tradeoff between the communication-centric EE and the sensing-centric EE, where the number of communication users and the EE requirements strongly affect the achievable tradeoff.
Integrated sensing and communication (ISAC), energy efficiency, fractional programming.
§ INTRODUCTION
Integrated sensing and communications (ISAC) is anticipated to be a viable enabling technology for unlocking the potential of next-generation wireless networks, as the two kinds of systems tend to share common devices, signal processing techniques, and even hardware circuitry. Rather than the conventional parallel development of the two systems, joint designs advocating their coexistence and cooperation have attracted extensive research interest in recent years. For instance, the coexistence of communication and radar systems focuses on spectrum sharing or physical integration design, which mainly aims to mitigate the mutual interference and efficiently manage the limited wireless resources <cit.>. Indeed, since communication and radar systems may transmit independent signals superimposed in the time/frequency domains, the interference between each other should be minimized to facilitate their individual functionalities. In such cases, numerous approaches have been proposed, such as cooperative spectrum sharing <cit.> and beamforming design <cit.>. Nevertheless, the existence of inevitable mutual interference still causes certain limitations on spectral efficiency performance.
Meanwhile, compared with the coexistence design approaches that generate communication and sensing signals separately, ISAC employs a common transmitted signal for realizing communication and sensing simultaneously. In such a case, the crux of ISAC is how to design a specialized waveform for effectively transmitting data and sensing potential targets.
In particular, the waveform design can be categorized into the communication-centric, radar-centric, and joint design according to the design goals <cit.>. Specifically, the radar-centric design aims to modulate the communication data onto the radar pulses, where the radar probing signals can be regarded as an information carrier <cit.>. On the other hand, communication-centric approaches utilize existing communication signals to sense the environment, such as cellular signals <cit.> and Wi-Fi signals <cit.>. In particular, various environmental conditions can be extracted from the received echoes of the communication signals, as the target's existence or movement inevitably affects the signal's propagation. Nevertheless, the integration performance is limited in the above two approaches, as the communication/sensing functionality is often carried out as ancillary tasks. In contrast, the joint ISAC design studies the co-design of signaling methodologies enabling both communications and sensing, which is the research content of this work.
§.§ Related Works
Related works of joint waveform design focus on striking a balance between the tradeoff of communication and sensing. For example, <cit.> investigated the tradeoff between the multi-user interference minimization and the appropriate radar beampattern formulation. Besides, a recent work in <cit.> considered the Cramér-Rao bound (CRB) minimization with guaranteed signal-to-interference-plus-noise ratio (SINR) for each communication user. Furthermore, as widely-used performance metrics, the fundamental tradeoff between the CRB for target parameter estimation and the data rate for communication was also investigated in <cit.> under various system settings, to unveil the potential of ISAC.
Although the above approaches can achieve favorable performance tradeoffs between the estimation performance and spectral efficiency <cit.>, the energy efficiency (EE) optimization of the joint waveform has not been fully investigated. Currently, the energy consumption of state-of-the-art fifth-generation (5G) wireless networks is extremely high, resulting in expensive operational costs <cit.>.
It is anticipated that the upcoming ISAC will pave the way for developing a perceptive wireless network requiring a much higher energy consumption than the current one, since the wireless signals are expected to achieve the dual purposes of environment sensing and information transmission simultaneously.
This could hinder the long-term development of sustainable and environmentally friendly wireless communication technologies.
Hence, there is a pressing need to investigate the energy efficiency design of ISAC for establishing
a perceptive-efficient and spectrally-efficient cellular network.
Actually, energy-aware optimization has been a hot topic in the past decade for conventional cellular networks,
e.g., <cit.>.
Specifically, EE is defined as the ratio of the achieved data rate and the required power consumption, capturing the energy consumption per bit in communication, which has been widely studied for various communication networks <cit.>.
However, these approaches for maximizing the communication EE cannot be directly applied to ISAC, as they do not take into consideration of sensing functionalities.
Recently, the EE optimization for radar-communication spectrum sharing has been studied in <cit.>, and the results cannot be applied to ISAC systems either due to the separated signal waveform design.
On the other hand, a few works have studied ISAC beamforming for maximizing communication-centric EE. For instance, the work of <cit.> investigated the communication EE maximization under the required radar beampattern constraint. Yet, it does not consider the sensing EE and the performance of target parameter estimation. Besides, the work of <cit.> focused on energy minimization under the sensing and communication constraints. In particular, the algorithm designed in <cit.> cannot handle the EE optimization due to the intrinsic challenges brought by fractional programming in the resource allocation design.
More importantly, to the best of our knowledge, the sensing-centric EE that characterizes the EE of target sensing has been rarely studied in the literature.
In particular, to fulfill the increasing demand for sensing services, it is natural for the base station (BS) to transmit waveforms with high power for improving the detection and estimation performance. However, this operation will inevitably bring unaffordable energy costs, which contradicts the emerging requirements of carbon neutrality and environmental sustainability for future wireless networks <cit.>.
Therefore, there is an urgent need to design an energy-efficient sensing performance metric for ISAC.
§.§ Contributions
Against this background, this work considers the EE optimization for the waveform design of ISAC, where the communication-centric EE, sensing-centric EE, and their tradeoffs are investigated.
Specifically, for the ISAC systems wherein communication serves as the primary objective, we study the ISAC waveform design for maximizing the communication-centric EE, i.e., the ratio of the achievable rate and the corresponding power consumption, while guaranteeing both the target estimation and communication performance in terms of the CRB and SINR, respectively.
As for the sensing-centric ISAC systems, for the first time, we propose the performance metric to measure the sensing-centric EE for target parameter estimation.
Then, we optimize the ISAC waveform to maximize the sensing-centric EE, considering the constraints of SINR, CRB, and the maximum transmission power budget. Then, we study the Pareto boundary of communication-centric EE and sensing-centric EE for characterizing their tradeoffs. The main contributions of this paper are summarized as follows.
* We optimize the communication-centric EE considering two scenarios, involving the estimation of a point-like target and of an extended target, respectively, under the constraints of CRB, SINR, and transmission power limitations. For the case of a point-like target, the nonconvexity of the objective function and the CRB constraint hinder the communication-centric EE optimization. For handling these challenges, we first adopt the quadratic-transform-Dinkelbach method to reformulate the nonconvex fractional objective function as a tractable formulation. Then, we adopt semi-definite relaxation and linear matrix inequalities to convert the nonconvex optimization problem into a sequence of convex optimization problems. Finally, we generalize the proposed algorithm to the extended target case.
* We propose a performance metric for capturing the notion of sensing-centric EE for the first time, which adopts the ratio of the reciprocal of the CRB to the transmit energy for measuring “information-per-Joule’’. Then, based on the proposed metric, we consider the sensing-centric EE maximization for point-like/extended targets by optimizing the transmit beamforming. Although the considered problem is nonconvex, we adopt the Schur complement to reformulate the problem into a tractable formulation, facilitating the development of a successive convex approximation (SCA)-based algorithm to effectively acquire the solution to the design problem.
* We adopt the Pareto optimization technique to characterize the tradeoff between the communication-centric EE and the sensing-centric EE. In particular, we formulate a constrained optimization problem that maximizes the communication-centric EE under the constraint of sensing-centric EE. To handle the nonconvexity of the considered optimization problem, we propose an SCA-based iterative algorithm for addressing the nonconvexity. Then, by varying the threshold of the sensing-centric EE, the approximate Pareto boundary can be obtained by solving a sequence of constrained problems. Simulation results present the Pareto boundary to demonstrate the tradeoff between the two EE metrics.
The remainder of this paper is organized as follows. Section II introduces the system model, including the communication model and the sensing model. In Section III, we study the optimization of the communication-centric EE under the sensing and communication constraints. The sensing-centric EE is studied in Section IV. Section V investigates the tradeoff between the communication-centric and the sensing-centric EE. Simulation results are provided in Section VI. Finally, we conclude the paper in Section VII.
Notations: The normal plain text (i.e., t), bold lowercase letters (i.e., 𝐰) and uppercase letters (i.e., 𝐖) represent scalars, vectors, and matrices, respectively. tr(·), rank(·), (·)^H, and (·)^T denote the trace operator, the rank operator, the Hermitian transpose, and the transpose operator, respectively. ℂ^n × n stands for an n × n complex-valued matrix. · represents the L_2 norm of a matrix. The inequality 𝐀≽0 means that 𝐀 is Hermitian positive semi-definite. Re(·) denotes the real part of the argument. We adopt 𝔼(·) for the stochastic expectation. ḟ(x) denotes the first derivative of function f(x). The notation ≜ is used for definitions.
§ SYSTEM MODEL
As depicted in Fig. <ref>, we consider an ISAC multiple-input multiple-output (MIMO) system, where the BS equipped with M transmit antennas serves K single-antenna UEs for communication with K ≤ M. Let k ∈𝒦≜{1,2, ⋯,K} denote the communication user set. As for radar estimation, the environmental information is simultaneously extracted from the reflected echoes with N receiving antennas implemented at the BS.
Without loss of generality, the number of transmit antennas is less than that of receive antennas, i.e., M ≤ N. As for target sensing, both the point-like target and the extended target cases are considered separately covering various practical scenarios. In particular, the former case denotes the unstructured point that is far away from the BS, such as unmanned aerial vehicles (UAVs). On the other hand, for the extended target, it acts as a reflecting surface with a large number of distributed scatterers, such as a vehicle or a pedestrian <cit.>. The detailed model is given as follows.
§.§ Communication Model
We denote the beamforming vector and the channel from the BS to the k-th user as 𝐰_k∈ℂ^M× 1 and 𝐡_k∈ℂ^M× 1, respectively. Then, the data symbol intended for the k-th user at time slot l is denoted as s_k[l], with unit power 𝔼( |s_k[l]|^2) =1. Left multiplying 𝐬[l] = [s_1[l], s_2[l], ⋯, s_k[l]]^T ∈ℂ^K × 1 with the beamforming matrix 𝐖 = [𝐰_1, 𝐰_2, ⋯, 𝐰_k] ∈ℂ^M × K, the transmitted signal vector of the BS is given by 𝐱[l]= 𝐖𝐬[l].
Then, the transmitted ISAC waveform over L time slots can be denoted as 𝐗 = [ x[1], x[2], ⋯, x[L] ] ∈ℂ^M × L. Then, the received signal at the k-th user during the l-th time slot, l ∈{1, 2, ⋯, L}, is given as follows
y_k[l] = h_k^H 𝐰_k s_k[l] + ∑_j ∈𝒦, j ≠ k h_k^H 𝐰_j s_j[l] + z_c[l],
SINR_k( W) = | h_k^H w_k |^2/( σ_c^2 + ∑_j ∈𝒦, j ≠ k| h_k^H w_j |^2 ),
and the corresponding achievable rate is R_k( W) = log_2(1+SINR_k ( W)).
It is well known that communication-centric EE is defined as a ratio of the transmission sum rate ∑_k R_k( W) to the total power consumption P. Following <cit.>, the power consumption can be calculated as
P = 1/ϵP_d + P_0,
where the power amplifier efficiency ϵ∈ [0,1] and P_0 denotes the constant circuit power consumed by circuitries in RF chains, power supply, cooling system, etc. Besides, the total transmit power is given by P_d = ∑_k w_k_2^2. Hence, the communication-centric EE, measuring the required “bits-per-Joule" <cit.>, can be calculated as
EE_C = ∑_k R_k(𝐖)/ P = ∑_k log_2( 1+| h_k^H w_k |^2 / ( σ_c^2 + ∑_j ∈𝒦, j ≠ k| h_k^H w_j |^2) ) / ( 1/ϵ∑_k ‖ w_k‖_2^2 + P_0 ).
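To make the metric concrete, a small numerical helper is sketched below; it is only an illustration (the function name, arguments, and default parameter values are ours, not part of the system model), assuming perfect channel knowledge.

```python
import numpy as np

def communication_ee(W, H, sigma_c2, eps=0.35, P0=2.0):
    """Evaluate EE_C (bits per Joule per channel use) for beamformers W.

    W        : M x K complex beamforming matrix (column k serves user k)
    H        : M x K complex channel matrix (column k is h_k)
    sigma_c2 : noise power at each user
    eps, P0  : power-amplifier efficiency and circuit power [W]
    """
    _, K = W.shape
    sum_rate = 0.0
    for k in range(K):
        h_k = H[:, k]
        signal = np.abs(h_k.conj() @ W[:, k]) ** 2
        interference = sum(np.abs(h_k.conj() @ W[:, j]) ** 2 for j in range(K) if j != k)
        sum_rate += np.log2(1.0 + signal / (sigma_c2 + interference))
    total_power = np.sum(np.abs(W) ** 2) / eps + P0
    return sum_rate / total_power
```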
§.§ Sensing Model
For radar sensing, the BS exploits the echo signals collected in L time slots to estimate the target parameter.
This work considers the two cases with either a point-like target or an extended target, respectively.
For notational simplicity, we consider the same angle of departure (AOD) and angle of arrival (AOA) of the target, i.e., θ_t=θ_r=θ <cit.>. Then,
for the point-like target that locates in the far field, the target response matrix can be denoted as
𝐀 = α𝐚_r(θ)𝐚^H_t(θ),
where 𝐚_x(θ), x∈{t,r}, is the steering vector for the transmit signal at angle θ. Following the existing works on ISAC, e.g., <cit.>, we assume that the BS employs a uniform linear antenna with a half-wavelength spacing between the adjacent antennas. Then, the transmit and receive steering vectors are given by
𝐚_t(θ) = [ 1, e^-j π cosθ, ⋯, e^-j π (M -1) cosθ]^T,
𝐚_r(θ) = [ 1, e^-j π cosθ, ⋯, e^-j π (N -1) cosθ]^T.
For the extended target that locates in the near field, we follow <cit.> to model it as a reflecting surface with N_s point-like scatters. Then, the target response matrix can be represented as
𝐀 = ∑_i=1^N_sα_i 𝐚_r(θ_i)𝐚_t^H(θ_i),
where α_i is the reflection coefficient of the i-th scatterer.
Therefore, the received target echoes 𝐘_R from the point-like or the extended targets can both be denoted as
𝐘_R = 𝐀𝐗 + 𝐙_s,
where 𝐙_s is the zero-mean AWGN with variance σ_s^2 in each element.
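For illustration, the sensing model above can be simulated with a few lines of NumPy; the snippet below is a sketch under our own naming conventions (it is not part of the proposed design) and generates the noisy echoes of a point-like target.

```python
import numpy as np

def steering(theta, n):
    """Half-wavelength ULA steering vector of length n at angle theta (radians)."""
    return np.exp(-1j * np.pi * np.arange(n) * np.cos(theta))

def point_target_echo(X, theta, alpha, N, sigma_s=0.1, rng=None):
    """Echoes Y_R = A X + Z_s with A = alpha * a_r(theta) a_t(theta)^H."""
    M, L = X.shape
    rng = np.random.default_rng() if rng is None else rng
    A = alpha * np.outer(steering(theta, N), steering(theta, M).conj())
    Z = sigma_s * (rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))) / np.sqrt(2)
    return A @ X + Z
```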
Since the CRB is a lower bound on the variance of any unbiased estimator of an unknown parameter and can thus be used to guarantee the sensing performance <cit.>, we adopt the CRB as the sensing metric to design the energy-efficient ISAC waveform in the following.
§ COMMUNICATION-CENTRIC ENERGY-EFFICIENT DESIGN
§.§ Point-Like Target Case
Since the CRB of α has a form similar to that of θ, for conciseness,
this work only considers the CRB of θ for the design of the ISAC beamforming. For the point-like target, the CRB of θ is given as follows <cit.>
CRB(θ)=σ_s^2/( 2L|α|^2 ( M𝐚̇^H(θ)𝐑_𝐱^T𝐚̇(θ)+ 𝐚^H(θ)𝐑_𝐱^T𝐚(θ)‖𝐚̇(θ)‖^2-M|𝐚^H(θ)𝐑_𝐱^T𝐚̇(θ)|^2/𝐚^H(θ)𝐑_𝐱^T𝐚(θ) ) ),
where 𝐑_𝐱 is the sample covariance matrix of 𝐗. Since 𝔼( |s_k[l]|^2) =1, for a large L, we have the asymptotic result
R_𝐱 = 1/L X X^H ≈ W W^H = ∑_k=1^K w_k w_k^H <cit.>.
The communication-centric energy efficient design is to maximize the EE_C defined in (<ref>), under the constraints of multiple users’ required SINR and maximal CRB(θ), whose optimization problem can be formulated as follows
max_{𝐰_k}_k=1^K ∑_k=1^K log_2 ( 1+| h_k^H w_k |^2 / ( σ_c^2 + ∑_j ∈𝒦, j ≠ k| h_k^H w_j |^2) ) / ( 1/ϵ∑_k ‖ w_k‖_2^2 + P_0 )
 s.t. ∑_k=1^K ‖ w_k‖_2^2 ≤ P_max,
 CRB(θ) ≤ρ ,
 | h_k^H w_k |^2/( σ_c^2 + ∑_j ∈𝒦, j ≠ k| h_k^H w_j |^2 ) ≥γ_k, ∀ k,
where P_max denotes the power budget of the BS and (<ref>) is the transmit power constraint.
Besides, ρ and γ_k are the required CRB threshold for sensing and the required SINR for the k-th communication user, respectively.
In general, it is challenging to solve problem (<ref>) directly, due to the nonconvexity of the fractional objective function (<ref>) and nonconvex constraints (<ref>) and (<ref>).
For addressing the nonconvex optimization problem, we first adopt the Dinkelbach's method <cit.> to reformulate the problem (<ref>) as
max_{𝐰_k}_k=1^K f_1(𝐰_k) - λ f_2(𝐰_k)
s.t. (<ref>), (<ref>), (<ref>),
where f_1(𝐰_k) ≜∑_k=1^K log_2 ( 1+| h_k^H w_k |^2/( σ_c^2 + ∑_j=1,j ≠ k^K | h_k^H w_j |^2 ) ),
f_2(𝐰_k) ≜1/ϵ∑_k=1^K ‖ w_k‖_2^2 + P_0, and λ≥ 0 is the auxiliary variable to be iteratively updated by
λ = f_1(𝐰_k)/f_2(𝐰_k).
With (<ref>) and (<ref>), an efficient solution to problem (<ref>) can be obtained by updating 𝐰_k and λ alternately.
Nevertheless, problem (<ref>) is still difficult to handle due to the following issues: 1) the objective function (<ref>) is still non concave over {𝐰_k } due to the fractional function f_1(𝐰_k); 2) nonconvex constraints (<ref>) and (<ref>).
Since the function log_2(·) is concave and non-decreasing, the nonconvexity of (<ref>) can be addressed if the term inside log_2(·) can be reformulated as an equivalent concave formulation.
Bearing this in mind, since f_1(𝐰_k) belongs to the general multiple-ratio concave-convex fractional programming problem, we adopt the quadratic transform method <cit.> to reformulate f_1(𝐰_k) as
f_1(𝐰_k) = t_kmax∑_k=1^K log_2 ( 1+ 2 t_k Re(𝐰_k^H 𝐡_k)
- t_k^2 B_k(𝐰_k) ) ,
where B_k(𝐰_k) = σ_c^2 + ∑_j=1,j ≠ k^K | h_k^H w_j |^2 and t_k is an introduced auxiliary variable that is iteratively updated by
t_k = | h_k^H w_k |( σ_c^2 +∑_j=1,j ≠ k^K| h_k^H w_j |^2)^-1.
Based on the above reformulations, problem (<ref>) can be recast as
max_{𝐰_k, t_k}_k=1^K, λ ∑_k=1^K log_2( 1+ 2 t_k Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐰_k) ) - λ( 1/ϵ∑_k=1^K w_k_2^2 + P_0) s.t. (<ref>),
where {𝐰_k, t_k}_k=1^K and λ can be updated alternately.
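The two auxiliary updates admit the following short sketch (our own notation, not the full Algorithm 1): it evaluates the quadratic-transform variables t_k and the Dinkelbach parameter λ at the current beamformer, which are then held fixed while the convex subproblem in 𝐰_k is solved.

```python
import numpy as np

def auxiliary_updates(W, H, sigma_c2, eps=0.35, P0=2.0):
    """Closed-form updates of t_k and lambda at the current beamformer W."""
    _, K = W.shape
    t = np.zeros(K)
    f1 = 0.0
    for k in range(K):
        h_k = H[:, k]
        B_k = sigma_c2 + sum(np.abs(h_k.conj() @ W[:, j]) ** 2 for j in range(K) if j != k)
        t[k] = np.abs(h_k.conj() @ W[:, k]) / B_k                  # quadratic-transform variable
        f1 += np.log2(1.0 + np.abs(h_k.conj() @ W[:, k]) ** 2 / B_k)
    f2 = np.sum(np.abs(W) ** 2) / eps + P0
    return t, f1 / f2                                              # (t_k, lambda)
```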
In the following, we focus on handling the nonconvex constraints (<ref>) and (<ref>). Specifically, constraint (<ref>) can be reformulated as
Mȧ^H(θ) R_ x^Tȧ(θ)+ a^H(θ) R_ x^T a(θ)‖ȧ(θ)‖^2 - M| a^H(θ) R_ x^Tȧ(θ)|^2/ a^H(θ) R_ x^T a(θ) - σ_s^2/2Lρ|α|^2 ≥ 0.
Then, for notational conciseness, denoting ℱ( R_X) ≜ Mȧ^H(θ) R_ x^Tȧ(θ)+ a^H(θ) R_ x^T a(θ)‖ȧ(θ)‖^2, (<ref>) can be reformulated as the following linear matrix inequality by leveraging the Schur complement <cit.>.
[ ℱ( R_x) - σ_s^2/2Lρ|α|^2 √(M) a^H(θ) R_ x^Tȧ(θ); √(M)ȧ^H(θ) R_ x^T a(θ) a^H(θ) R_ x^T a(θ) ]≽0 .
Next, for handling the nonconvex constraint (<ref>), we introduce an auxiliary optimization variable matrix 𝐖_k and reformulate constraint (<ref>) into
tr(𝐐_k 𝐖_k) - γ_k ∑_j ∈𝒦, j ≠ ktr(𝐐_k 𝐖_j) ≥γ_k σ_c^2,
W_k =w_k w_k^H,
where 𝐐_k = h_k h_k^H. Then, problem (<ref>) can be equivalently reformulated as
max_{𝐰_k,𝐖_k, t_k}_k=1^K ∑_k=1^K log_2 ( 1+ 2 t_k ·Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) ) - λ( 1/ϵ∑_k=1^K tr( W_k)+ P_0)
s.t. [ [ ℱ(∑_k=1^K𝐖_k) - σ_s^2/2Lρ|α|^2 √(M) a^H(θ)∑_k=1^K W_k^Tȧ(θ); √(M)ȧ^H(θ)∑_k=1^K W_k^T a(θ) a^H(θ)∑_k=1^K W_k^T a(θ) ] ]≽0 ,
(<ref>), (<ref>), (<ref>),
where B_k(𝐖_k) ≜∑_j ∈𝒦, j ≠ ktr(𝐐_k 𝐖_j) + σ_c^2. However, constraint (<ref>) is a nonconvex equality constraint which is difficult to handle. Therefore, we introduce the following lemma to transform constraint (<ref>) into equivalent inequality constraints.
W_k =w_k w_k^H can be equivalently reformulated as
[ 𝐖_k 𝐰_k; 𝐰_k^H 1 ]≽0 , 𝐖_k ≽0, ∀ k,
tr(𝐖_k) - 𝐰^H_k 𝐰_k ≤ 0, ∀ k.
The proof is given in Appendix A.
Although the equality constraint in (<ref>) has been reformulated as the equivalent inequality constraints, constraint (<ref>) is still nonconvex.
For handling this, we adopt the SCA technique that establishes an inner convex approximation of constraint (<ref>) given as
tr(𝐖_k) + (𝐰_k^(i-1))^H 𝐰_k^(i-1) - 2Re((𝐰_k^(i-1))^H 𝐰_k ) ≤ 0, ∀ k,
where 𝐰^(i-1)_k is the solution obtained at the i-th iteration of the SCA.
Therefore, at the i-th iteration, the convex approximation of problem (<ref>) can be reformulated as
max_𝒲, t_k, λ ∑_k=1^K log_2 ( 1+ 2 t_k Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) ) - λ( 1/ϵ∑_k=1^K tr( W_k)+ P_0)
s.t. (<ref>), (<ref>),(<ref>),(<ref>),(<ref>).
Algorithm <ref> summarizes the iterative algorithm for handling problem (<ref>), where f̂_1(𝐰_k, 𝐖_k) = ∑_k=1^K log_2( 1+ 2 t_k Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) ) and f̂_2(𝐖_k) =1/ϵ∑_k=1^K tr( W_k)+ P_0. Although we cannot guarantee that the optimal solution of problem (<ref>) can be obtained, the proposed Algorithm <ref> follows the inexact Dinkelbach-type algorithm adopted in <cit.>, whose convergence can be guaranteed by the following lemma.
Let {𝐰_k^i,𝐖_k^i} be the solution sequence generated by solving problem (<ref>). The sequence {λ^(i)} generated by Algorithm 1 is non-decreasing and convergent.
Since
f̂_1(𝐰^(i),𝐖^(i))-λ^(i)f̂_2(𝐖^(i))
=(λ^(i+1)-λ^(i))f̂_2(𝐖^(i)),
we have λ^(i+1)≥λ^(i) if f̂_1(𝐰^(i),𝐖^(i))-λ^(i)f̂_2(𝐖^(i))≥ 0.
Obviously, f̂_1(𝐰^(i-1),𝐖^(i-1))-λ^(i)f̂_2(𝐖^(i-1))=0. At the i-th iteration, we approximate problem (<ref>) as
problem (<ref>) around 𝐰_k^(i-1). Since 𝐰_k^(i-1) is definitely a feasible solution of problem (<ref>), we have
f̂_1(𝐰^(i),𝐖^(i))-λ^(i)f̂_2(𝐖^(i))≥f̂_1(𝐰^(i-1),𝐖^(i-1))-λ^(i)f̂_2(𝐖^(i-1))= 0.
Therefore, we can conclude that the sequence {λ^(i)} is non-decreasing and Algorithm 1 converges due to the finite power budget.
Complexity Analysis:
The computational complexity of Algorithm <ref> is dominated by solving problem (<ref>). Problem (<ref>) involves linear matrix inequality (LMI) constraints that dominate the computation complexity. We notice that the problem contains one LMI constraint of size 2M, K LMI constraints of size M+1, and K LMI constraints of size M.
Given the required accuracy ϵ_0 > 0, the ϵ_0-optimal solution can be achieved after a sequence of iterations. Then, by retaining the highest-order term, the computational complexity can be given as 𝒪( √((2M +1)(K+1)) M^6 K^3 I_iterln(1/ϵ_0) ), where I_iter denotes the number of iterations <cit.>.
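For readers who wish to prototype the per-iteration subproblem, a CVXPY-based sketch is given below. It is only an illustration under simplifying assumptions: the function and argument names are ours, the SDR lifting is kept without any final rank-one extraction, and a generic conic solver is assumed to handle the resulting complex SDP after CVXPY's internal real reformulation.

```python
import numpy as np
import cvxpy as cp

def hermitian_psd(blocks, cons):
    """Constrain the Hermitian block matrix bmat(blocks) to be positive semidefinite."""
    B = cp.bmat(blocks)
    S = cp.Variable(B.shape, hermitian=True)
    cons += [S == B, S >> 0]

def ee_c_subproblem(H, t, lam, w_prev, a, a_dot, Pmax, rho, gamma,
                    sigma_c2, sigma_s2, alpha_abs2, L, eps=0.35, P0=2.0):
    """One convex subproblem of the inner step (point-like target, illustrative)."""
    M, K = H.shape
    w = [cp.Variable((M, 1), complex=True) for _ in range(K)]
    W = [cp.Variable((M, M), hermitian=True) for _ in range(K)]
    Rx = sum(W)                # lifted transmit covariance sum_k W_k
    RxT = cp.conj(Rx)          # R_x^T, since R_x is Hermitian
    aH, adH = a.conj()[None, :], a_dot.conj()[None, :]
    a_col, ad_col = a[:, None], a_dot[:, None]
    cons, rate = [], 0
    for k in range(K):
        h = H[:, k:k + 1]
        B_k = sigma_c2 + sum(cp.real(h.conj().T @ W[j] @ h) for j in range(K) if j != k)
        rate += cp.log(1 + 2 * t[k] * cp.real(h.conj().T @ w[k]) - t[k] ** 2 * B_k) / np.log(2)
        cons.append(cp.real(h.conj().T @ W[k] @ h) >= gamma[k] * B_k)       # SINR requirement
        hermitian_psd([[W[k], w[k]], [w[k].H, np.ones((1, 1))]], cons)      # Schur coupling
        wp = w_prev[k].reshape(M, 1)
        cons.append(cp.real(cp.trace(W[k])) + np.real(wp.conj().T @ wp).item()
                    - 2 * cp.real(wp.conj().T @ w[k]) <= 0)                 # SCA surrogate
    # CRB LMI for the angle estimate
    F = M * cp.real(adH @ RxT @ ad_col) + cp.real(aH @ RxT @ a_col) * np.linalg.norm(a_dot) ** 2
    off = np.sqrt(M) * (aH @ RxT @ ad_col)
    hermitian_psd([[F - sigma_s2 / (2 * L * rho * alpha_abs2), off],
                   [cp.conj(off), cp.real(aH @ RxT @ a_col)]], cons)
    cons.append(cp.real(cp.trace(Rx)) <= Pmax)                              # power budget
    prob = cp.Problem(cp.Maximize(rate - lam * (cp.real(cp.trace(Rx)) / eps + P0)), cons)
    prob.solve()
    return [wk.value for wk in w], [Wk.value for Wk in W]
```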
Due to the stringent requirement introduced by (<ref>), it is generally non-trivial to directly obtain a feasible solution as an initial point. Alternatively, we can adopt the penalty SCA <cit.> and introduce auxiliary variables ρ̅_k to transform problem (<ref>) into
max_𝒲, t_k, λ ∑_k=1^K log_2 ( 1+ 2 t_k Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) ) - λ( 1/ϵ∑_k=1^K tr( W_k)+ P_0) - p̅∑_k=1^K ρ̅_k
s.t. tr(𝐖_k) + (𝐰_k^(i-1))^H 𝐰_k^(i-1) - 2Re((𝐰_k^(i-1))^H 𝐰_k ) ≤ρ̅_k, ∀ k,
(<ref>), (<ref>), (<ref>), (<ref>),
where p̅ and ∑_k=1^K ρ̅_k denote the weight coefficient and the penalty term, respectively. To obtain the initial point of (<ref>), we can solve problem (<ref>) as an initial warm-up phase by gradually raising p̅ to induce a reduction in the penalty term to a smaller value. When the penalty term decreases to zero, problem (<ref>) reduces to problem (<ref>), whose solution serves as the feasible initial point of (<ref>).
§.§ Extended Target Case
For estimating the extended target, we follow <cit.> to consider the CRB of the target response matrix 𝐀 instead of the angle. Since K ≤ M, transmitting K signal streams is not always sufficient for recovering the rank-M matrix. To address this issue, the BS generates additional signals that are dedicated to target probing. As such, the augmented data vector at the l-th time slot is 𝐱̃[l]≜[𝐖, 𝐖̃][𝐬[l];𝐬̃[l]], where 𝐬̃[l] ∈ℂ^(M-K) × 1 is the dedicated probing signal and 𝔼( 𝐬[l] 𝐬̃^H[l] ) = 0.
Note that in the augmented signal, the beamforming 𝐖 = [𝐰_1, 𝐰_2, ⋯, 𝐰_K] ∈ℂ^M × K broadcasts the information data to the K users and the beamforming 𝐖̃ = [𝐰_K+1, ⋯, 𝐰_K+M] ∈ℂ^M × M is employed to generate probing signals for enabling the estimation of the target response matrix. However, the introduced probing signals 𝐬̃[l] inevitably generate undesired interference to the served multiple users that introduces non-trivial tradeoff between sensing and communication. In particular, the SINR received at the k-th user is given by
S̃ĨÑR̃_k = | 𝐡_k^H 𝐰_k|^2/( ∑ _i = 1, i ≠ k^K| 𝐡_k^H𝐰_i|^2 + ‖𝐡_k^H𝐖̃‖_2^2 + σ _C^2 ),
where ‖𝐡_k^H𝐖̃‖^2_2 is the additional interference due to the probing signals.
In such a case, the CRB for the extended target estimation can be derived as
CRB_extended= σ_s^2 M/Ntr(𝐑_𝐱^ - 1),
where 𝐑_𝐗 = 𝐖𝐖^H + 𝐖̃𝐖̃^H .
Based on the discussions above, the problem of communication-centric EE optimization for estimating an extended target can be formulated as
max_{𝐰_k}_k=1^K+M ∑_k=1^K log_2(1+S̃ĨÑR̃ _k) / ( 1/ϵ∑_k=1^K+M ‖ w_k‖_2^2 + P_0 )
 s.t. ∑_k=1^K+M ‖ w_k‖_2^2 ≤ P_max,
CRB_extended= σ_s^2 M/Ltr(𝐑_𝐱^ - 1) ≤τ ,
S̃ĨÑR̃_̃k̃≥γ_k.
Obviously, although constraints (<ref>) and (<ref>) are both convex, the fractional objective function (<ref>)
is still nonconvex.
Following Section <ref>, we first adopt Dinkelbach’s transformation to handle the nonconvex fractional programming and reformulate the problem as follows
max_{𝐰_k}_k=1^K+M ∑_k=1^K log_2 (1+S̃ĨÑR̃ _k) - λ( 1/ϵ∑_k=1^K+M ‖ w_k‖_2^2 + P_0)
s.t. (<ref>), (<ref>), (<ref>).
Then, by exploiting the equality -log a = bmax (log b - ab) <cit.>, problem (<ref>) can be reformulated as
max_{𝐰_k}_k=1^K+M, {b_k}_k=1^K, λ ∑_k=1^K log_2 ( | 𝐡_k^H 𝐰_k|^2 + ∑_i = 1, i ≠ k^K| 𝐡_k^H𝐰_i|^2 + ‖𝐡_k^H𝐖̃‖_2^2 + σ _C^2)
+ ∑_k=1^K( log_2 b_k - b_k ( ∑_i = 1, i ≠ k^K| 𝐡_k^H𝐰_i|^2 + ‖𝐡_k^H 𝐖̃‖_2^2 + σ _C^2 ) )
- λ( 1/ϵ∑_k=1^K+M ‖ w_k‖_2^2 + P_0)
s.t. (<ref>), (<ref>), (<ref>).
For obtaining a tractable formulation, by introducing auxiliary variables 𝐖_k ≜𝐰_k 𝐰_k^H, k ∈ [1, 2, ⋯, K] and 𝐑_𝐖̃ = 𝐖̃𝐖̃^H, problem (<ref>) can be reformulated as
max_{𝐖_k, b_k}_k=1^K, 𝐑_𝐖̃, λ ∑_k=1^K log_2 ( 𝐡_k^H ( 𝐖_k +∑_i = 1, i ≠ k^K𝐖_i + 𝐑_𝐖̃) 𝐡_k + σ _C^2 )
+ ∑_k=1^K( log_2 b_k - b_k ( ∑_i = 1, i ≠ k^K𝐡_k^H𝐖_i𝐡_k + 𝐡_k^H 𝐑_𝐖̃𝐡_k + σ _C^2 ) )
- λ( 1/ϵtr( ∑_k=1^K𝐖_k + 𝐑_𝐖̃) + P_0),
s.t. tr( ∑_k=1^K𝐖_k + 𝐑_𝐖̃) ≤ P_max,
σ_s^2 M/Ntr( ( ∑_k=1^K𝐖_k + 𝐑_𝐖̃) ^-1) ≤τ ,
𝐡_k^H 𝐖_k 𝐡_k - γ_k ( ∑_i = 1, i ≠ k^K𝐡_k^H 𝐖_i 𝐡_k + 𝐡_k^H 𝐑_𝐖̃𝐡_k ) ≥γ_k σ_c^2,
𝐖_k ≽0, ∀ k, 𝐑_𝐖̃≽0,
rank(𝐖_k) = 1, ∀ k.
After inspecting problem (<ref>), we can find that all constraints are convex, except for constraint (<ref>). Besides, the objective function in (<ref>) includes three sets of optimization variables: {λ}, {b_k}, and {{𝐖_k}_k=1^K, 𝐑_𝐖̃}. Moreover, when fixing the other two sets, the objective function is concave with respect to the remaining one. Therefore, we first adopt the rank relaxation to remove constraint (<ref>) and then employ an alternating optimization (AO) algorithm to optimize the three sets of optimization variables alternately.
The detailed algorithm is summarized in Algorithm 2, where we denote
f̃_1(𝐖_k, 𝐑_𝐖̃ ) = ∑_k=1^K log_2 ( 𝐡_k^H ( 𝐖_k +∑_i = 1, i ≠ k^K𝐖_i + 𝐑_𝐖̃) 𝐡_k + σ _C^2 )
+ ∑_k=1^K( log_2 b_k - b_k ( ∑_i = 1, i ≠ k^K𝐡_k^H𝐖_i𝐡_k + 𝐡_k^H 𝐑_𝐖̃𝐡_k + σ _C^2 ) )
f̃_2(𝐖_k, 𝐑_𝐖̃ ) = 1/ϵtr( ∑_k=1^K𝐖_k + 𝐑_𝐖̃) + P_0.
In the following theorem, we will show that the rank-1 solution of problem (<ref>) can be recovered from the solution generated by Algorithm 2.
Denote the optimal solution obtained by Algorithm <ref> by {𝐖_k^∗, 𝐑^∗_𝐖̃}. When K = 1,
𝐖̂^∗ = 𝐖^∗𝐡_k 𝐡_k^H 𝐖^∗/𝐡_k^H 𝐖^∗𝐡_k, 𝐑̂^∗_𝐖̃= 𝐑^∗_𝐖̃
is the optimal rank-1 solution that achieves identical performance as {𝐖_k^∗, 𝐑^∗_𝐖̃}.
When K > 1, one can always construct the optimal solution that satisfies the rank-1 constraint acquiring the same performance.
The proof is given in Appendix B.
Complexity Analysis:
We provide the computational complexity of Algorithm <ref> as follows. Similarly, the problem (<ref>) is a semidefinite program that can be solved by the standard interior-point algorithm. We note that the problem involves K+1 LMI constraints of size M. We consider the highest order term and express the computational complexity as 𝒪( √(MK+M+K+1) M^6 K^3 I_iterlog(1/ϵ_0) ) for an ϵ_0-optimal solution, where I_iter represents the number of iterations <cit.>.
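As an illustration of the construction used in the proof, the rank-one solution of Theorem 2 can be recovered numerically as follows (a sketch with our own function names; W_star denotes a relaxed solution returned by Algorithm 2 and h the corresponding user channel):

```python
import numpy as np

def rank_one_recover(W_star, h):
    """Build the rank-one W_hat = W* h h^H W* / (h^H W* h) of Theorem 2."""
    h = h.reshape(-1, 1)
    denom = (h.conj().T @ W_star @ h).real.item()
    W_hat = (W_star @ h @ h.conj().T @ W_star) / denom
    w_hat = (W_star @ h) / np.sqrt(denom)   # corresponding beamforming vector
    return W_hat, w_hat.ravel()
```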
§ SENSING-CENTRIC ENERGY-EFFICIENT DESIGN
§.§ Performance Metric for Sensing-Centric EE
It is well known that the CRB is the inverse of the Fisher information for an unbiased estimator <cit.>. The Fisher information quantifies the amount of information that an observable random variable carries about the unknown parameter. Considering this, we adopt the ratio of the reciprocal of the CRB to the transmit power, further normalized by the total number of time slots. In this context, we arrive at a novel sensing-centric EE metric that measures the average sensing information per Joule, defined as
EE_S≜CRB^-1/( L ( 1/ϵ∑_k=1^K ‖ w_k‖_2^2 + P_0 ) ) .
In this manner, both the sensing-centric EE and communication-centric EE measure the “information” per Joule, but the “information” has different meanings.
Based on the above metric, we study the waveform design to maximize the sensing-centric EE considering the point-like target and the extended target in Sections <ref> and <ref>, respectively.
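For concreteness, the proposed metric can be evaluated as in the following sketch (illustrative names only; the CRB value is assumed to be computed separately for the beamformer at hand):

```python
import numpy as np

def sensing_ee(crb, W, L, eps=0.35, P0=2.0):
    """Sensing-centric EE: Fisher information (1/CRB) per Joule over L slots."""
    transmit_power = np.sum(np.abs(W) ** 2)   # sum_k ||w_k||^2
    return (1.0 / crb) / (L * (transmit_power / eps + P0))
```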
§.§ Point-Like Target Case
Considering the point-like target, with the CRB of estimating θ given in (<ref>), the sensing-centric EE optimization problem can be formulated as
max_{𝐰_k}_k=1^K CRB^-1(θ)/( L ( 1/ϵ∑_k=1^K ‖ w_k‖_2^2 + P_0 ) )
 s.t. ∑_k=1^K ‖ w_k‖_2^2 ≤ P_max,
CRB(θ) ≤ρ ,
| h_k^H w_k |^2/( σ_c^2 + ∑^K_j = 1, j ≠ k| h_k^H w_j |^2 ) ≥γ_k, ∀ k.
Obviously, problem (<ref>) is also intractable due to the fractional objective function (<ref>) and nonconvex constraints (<ref>) and (<ref>).
For handling the fractional objective function (<ref>), with the introduced auxiliary optimization variables ω, t,ϕ, and ζ, problem (<ref>) can be reformulated as
max_{𝐰_k}_k=1^K, ω, ϕ, ζ ω
s.t. CRB(θ) ≤1/t,
1/ϵ∑_k=1^K ‖ w_k‖_2^2 + P_0 ≤ϕ, t ≥ζ^2,
ω≤ζ^2/ϕ,
(<ref>), (<ref>), (<ref>).
The equivalence between (<ref>) and (<ref>) is obvious, since constraints
(<ref>), (<ref>), and (<ref>) should be active at the optimal solution. We note that (<ref>) shares the same form as (<ref>). Therefore, with the Schur complement, constraint (<ref>) can be reformulated as
[ ℱ(∑_k=1^K𝐖_k) - t σ_s^2/2L |α|^2 √(M) a^H(θ)∑_k=1^K𝐖_k^Tȧ(θ); √(M)ȧ^H(θ) R_ x^T a(θ) a^H(θ) R_ x^T a(θ) ]≽0,
where ℱ(∑_k=1^K𝐖_k) ≜ Mȧ^H(θ)∑_k=1^K𝐖_k^Tȧ(θ)+ a^H(θ)∑_k=1^K𝐖_k^T a(θ)‖ȧ(θ)‖^2 and 𝐖_k = 𝐰_k 𝐰_k^H. Furthermore, Lemma <ref> presents an equivalent formulation of the equality 𝐖_k = 𝐰_k 𝐰_k^H whose convex approximation has been given in (<ref>) and (<ref>).
Then, for handling the fractional constraint (<ref>), we introduce auxiliary variables {τ_k, ψ_k, ∀ k} to reformulate (<ref>) as
τ^2_k / ψ_k ≥γ_k,
τ_k = 𝐡_k^H 𝐰_k,
ψ_k ≥σ_c^2 + ∑^K_j = 1, j ≠ k| h_k^H w_j |^2,
where (<ref>) and (<ref>) are convex constraints. Then, problem (<ref>) can be reformulated as
max_Θ ω
s.t. ω≤ζ^2/ϕ , γ_k ≤τ^2_k/ψ_k , ∀ k
(<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>),(<ref>), (<ref>),
where Θ≜{{𝐖_k, 𝐰_k}_k=1^K, ω, t,ϕ, ζ, τ_k, ψ_k } denotes the set of optimization variables. Obviously constraint (<ref>) is convex. Therefore, the challenge for handling problem (<ref>) lies in the nonconvexity of constraint (<ref>). To deal with this, we adopt the SCA techniques to establish a convex approximation of constraint (<ref>). Since function ζ^2/ϕ is jointly convex with respect to ζ and ϕ, its convex lower approximation can be established as
ζ^2/ϕ ≥(ζ^(n))^2/ϕ^(n) + 2 ζ^(n)/ϕ^(n) (ζ - ζ^(n) ) - ( ζ^(n)/ϕ^(n)) ^2 (ϕ - ϕ^(n) ) = 2 ζ^(n)/ϕ^(n)ζ - ( ζ^(n)/ϕ^(n)) ^2 ϕ ,
where ζ^(n) and ϕ^(n) are the feasible points obtained at the n-th iteration of the SCA. Consequently, the inner convex approximation of ω≤ζ^2/ϕ is
ω≤2 ζ^(n)/ϕ^(n)ζ - ( ζ^(n)/ϕ^(n)) ^2 ϕ.
Similarly, the inner convex approximation of γ_k ≤τ^2_k/ψ_k, ∀ k is
γ_k ≤2 τ_k^(n)/ψ_k^(n)τ_k - ( τ_k^(n)/ψ_k^(n)) ^2 ψ_k , ∀ k ,
where τ_k^(n) and ψ_k^(n) are the feasible points obtained at the n-th iteration.
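As a quick sanity check of the two surrogates, the following snippet (illustrative values) verifies numerically that the first-order expansion indeed lower-bounds the quadratic-over-linear function on the positive orthant and is tight at the expansion point:

```python
import numpy as np

def lower_bound(z, p, z0, p0):
    """First-order surrogate of z^2 / p around the expansion point (z0, p0)."""
    return 2 * z0 / p0 * z - (z0 / p0) ** 2 * p

rng = np.random.default_rng(0)
z0, p0 = 1.3, 0.7
for _ in range(1000):
    z, p = rng.uniform(0.1, 3.0, size=2)
    assert z ** 2 / p >= lower_bound(z, p, z0, p0) - 1e-12   # global lower bound for p > 0
assert np.isclose(z0 ** 2 / p0, lower_bound(z0, p0, z0, p0))  # tight at (z0, p0)
```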
Finally, a convex approximation of problem (<ref>) is formulated as
max_Θ ω
s.t. (<ref>), (<ref>), (<ref>).
In this way, problem (<ref>) can be solved with off-the-shelf numerical convex program solvers such as CVX Toolbox <cit.>. We summarize the proposed iterative method in Algorithm <ref>, where its initial feasible solution can be obtained by following the penalty SCA method given in Remark 1.
In the following, we analyze the convergence of Algorithm <ref>. We can note that in the iterative procedure of Algorithm <ref>, Θ^(n-1) is always feasible in problem (<ref>) at n-th iteration owing to the adopted first-order Taylor approximation. We note that (<ref>) can be optimally solved and the optimal value of its objective function serves as a lower bound on that of (<ref>).
Therefore, it can be guaranteed that the optimal value of (<ref>) at the n-th iteration, denoted as p_∗^(n), always satisfies p_∗^(n)≥ p_∗^(n-1). Therefore, Algorithm <ref> produces a non-decreasing sequence of objective values of problem (<ref>).
Similar to Algorithm <ref>, the computational complexity of Algorithm <ref> is 𝒪( √((2M +1)(K+1)) M^6 K^3 I_iterln(1/ϵ_0) ).
§.§ Extened Target Case
For the case of the extended target, following the discussion in Section <ref>, we choose 𝐀 as the parameter to be estimated and adopt the formulation of CRB in (<ref>).
Then, we have the sensing-centric EE for sensing an extended target as
EE_S = ( σ_s^2 M/Ltr(𝐑_𝐱^-1) )^-1/( L ( 1/ϵtr(𝐑_𝐱) + P_0 ) ) = ( tr(𝐑_𝐱^ - 1) )^-1/( σ_s^2 M ( 1/ϵtr(𝐑_𝐱) + P_0 ) ) ,
where 𝐑_𝐗 = 𝐖𝐖^H + 𝐖̃𝐖̃^H = ∑_k=1^K 𝐰_k 𝐰_k^H + 𝐑_𝐖̃. Then, we formulate the problem as
max_{𝐰_k}_k=1^K,𝐑_𝐖̃ ( tr(𝐑_𝐱^ - 1) )^-1/σ_s^2 M ( 1/ϵtr(𝐑_𝐱) + P_0 )
s.t. tr(𝐑_𝐱) ≤ P_max,
σ_s^2 M/Ntr(𝐑_𝐱^ - 1) ≤ϕ ,
S̃ĨÑR̃_̃k̃≥γ_k,
where S̃ĨÑR̃_̃k̃ is given in (<ref>) and can be recast as a convex form in (<ref>) by letting 𝐖_k = 𝐰_k 𝐰_k^H.
We notice that in (<ref>), the numerator is the reciprocal of a convex function and the denominator is strictly positive and convex. To handle its nonconvexity, we introduce auxiliary optimization variables p_e,q_e and equivalently transform the problem into
max_{𝐰_k}_k=1^K,𝐑_𝐖̃, q_e, p_e 1/p_e q_e
s.t. p_e ≥σ_s^2 M ( 1/ϵtr(𝐑_𝐱) + P_0 ), q_e ≥tr(𝐑_𝐱^ - 1),
(<ref>), (<ref>),(<ref>).
Then, the problem can be further transformed into its equivalent form as
min_{𝐖_k}_k=1^K,𝐑_𝐖̃, q_e, p_e ln(p_e) + ln(q_e) s.t. (<ref>), (<ref>),
where the objective function is still not convex, but can be approximated based on the first order Taylor series expansion given by
ln(p_e) + ln(q_e) ≤ln( p^(n)_e ) + ln( q_e^(n)) + 1/p_e^(n)( p_e-p_e^(n)) + 1/q^(n)_e( q_e-q^(n)_e) ,
where p_e^(n) and q_e^(n) are the feasible solutions obtained at the n-th iteration. Following the techniques detailed in Section <ref>, a convex approximation of problem (<ref>) at the n-th iteration can be established as
min_{𝐖_k}_k=1^K, 𝐑_𝐖̃, q_e, p_e ln(p^(n)_e) + ln(q_e^(n)) + 1/p_e^(n) (p_e-p_e^(n)) + 1/q^(n)_e (q_e-q^(n)_e)
s.t. (<ref>), (<ref>),(<ref>),(<ref>), (<ref>).
The computational complexity is 𝒪( √(MK+M+K+1) M^6 K^3 I_iterln(1/ϵ_0) ) for an ϵ_0-optimal solution.
Based on the optimal solution of (<ref>), denoted as {𝐖_k^∗, 𝐑^∗_𝐖̃}, the optimal rank-1 solutions can always be reconstructed.
The proof can be achieved by following the proof of Theorem 2 and the details are omitted for brevity.
§ APPROXIMATE PARETO BOUNDARY OF ENERGY-EFFICIENT ISAC SYSTEMS
In this section, we aim to investigate the Pareto boundary of the achievable EE performance region built on the communication-centric EE and the sensing-centric EE.
Considering the point-like target case, we follow <cit.> to formulate the search of the Pareto boundary as a constrained optimization problem that maximizes the communication-centric EE under the sensing-centric EE constraint. It is worth noting that the proposed algorithm can be adapted to the extended target case directly. Now, we aim to solve
max_{𝐰_k}_k=1^K ∑_k=1^K log_2 ( 1+| h_k^H w_k |^2 / ( σ_c^2 + ∑_j ∈𝒦, j ≠ k| h_k^H w_j |^2) ) / ( 1/ϵ∑_k ‖ w_k‖_2^2 + P_0 )
 s.t. CRB^-1(θ)/( L ( 1/ϵ∑_k=1^K ‖ w_k‖_2^2 + P_0 ) ) ≥ℰ,
 ∑_k ‖ w_k‖_2^2 ≤ P_max,
where ℰ denotes the required minimum sensing-centric EE threshold.
Obviously, problem (<ref>) is a nonconvex fractional program, which is challenging to solve directly.
To handle fractional objective function (<ref>) and nonconvex constraint (<ref>), we follow <cit.> to find the approximate optimal Pareto boundary for characterizing the tradeoff between the communication-centric EE and sensing-centric EE.
In particular, we first apply the Dinkelbach algorithm to reformulate fractional function (<ref>) as
max_λ ∑_k=1^Klog_2 ( 1+ | h_k^H w_k |^2 /B_k(𝐖_k)) - λ( 1/ϵ∑_k=1^Ktr( W_k)+ P_0 )
s.t. (<ref>), (<ref>),
where B_k(𝐖_k) = ∑^K_j=1, j ≠ ktr(𝐐_k 𝐖_j) + σ_c^2.
Furthermore, by introducing auxiliary variables b_k, k=1,…,K, the intractable fractional terms in (<ref>) can be equivalently formulated as
∑_k=1^Klog_2 ( 1+ | h_k^H w_k |^2 /B_k(𝐖_k)) = max_b_k ( ∑_k=1^Klog_2 (1+ b_k) - ∑_k=1^K b_k + ∑_k=1^K(1+b_k)| h_k^H w_k |^2 /B_k(𝐖_k)),
which has an analytical solution b_k = | h_k^H w_k |^2/B_k(𝐖_k).
Finally, by applying the quadratic transform <cit.>, problem (<ref>) can be reformulated as
max_{𝐰_k 𝐖_k, b_k, t_k}_k=1^K, λ ∑_k ( log_2 (1+ b_k) - b_k + 2t_k √((1+b_k))Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) )
- λ( 1/ϵ∑_k=1^Ktr( W_k)+ P_0)
s.t. (<ref>), (<ref>),(<ref>),(<ref>).
The convex approximation of nonconvex constraint (<ref>) is constraint (<ref>), as mentioned in Section <ref>. For handling nonconvex constraint (<ref>),
we introduce an auxiliary variable ℰ̃ and employ the Schur complement to obtain the convex approximation of problem (<ref>) given by
max_{𝐰_k 𝐖_k, b_k, t_k}_k=1^K, λ ∑_k ( log_2 (1+ b_k) - b_k + 2t_k √((1+b_k))Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) )
- λ( 1/ϵ∑_k=1^Ktr( W_k)+ P_0)
s.t. [ ℱ(∑_k=1^K𝐖_k) - ℰ̃σ_s^2/2L |α|^2 √(M) a^H(θ)∑_k=1^K𝐖_k^Tȧ(θ); √(M)ȧ^H(θ) R_ x^T a(θ) a^H(θ) R_ x^T a(θ) ]≽0 ,
ℰ̃≥ℰ N (1/ϵ∑_k=1^Ktr( W_k)+ P_0),
(<ref>), (<ref>), (<ref>).
(<ref>) is convex, and its optimum can be obtained by the interior point method. Therefore, an efficient solution of problem (<ref>) can be obtained by solving a sequence of problems of the form (<ref>). Algorithm <ref> summarizes the iterative algorithm, where f̆_1(𝐰_k, 𝐖_k) = β/ℛ∑_k=1^K( log_2 (1+ b_k) - b_k + 2t_k √((1+b_k))Re(𝐰_k^H 𝐡_k) - t_k^2 B_k(𝐖_k) ) + (1-β) ϕ̃/L 𝒞, and f̆_2(𝐖_k) = λ( 1/ϵ∑_k=1^Ktr( W_k)+ P_0).
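The sketch below illustrates, with randomly generated channels, the closed-form inner updates that drive this procedure: the auxiliary variable b_k, the standard quadratic-transform update for t_k, and the Dinkelbach ratio λ. It is a minimal illustration under assumed parameter values rather than the authors' code, and the outer beamformer update (an SDP in 𝐖_k) is omitted.

import numpy as np

rng = np.random.default_rng(1)
M, K = 4, 3
eps, P0, sigma_c2 = 0.35, 1.0, 1.0
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)   # h_k as rows
W = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)   # w_k as columns

def interference(H, W, k, sigma_c2):
    """B_k(W) = sum_{j != k} |h_k^H w_j|^2 + sigma_c^2."""
    powers = np.abs(H[k].conj() @ W) ** 2
    return powers.sum() - powers[k] + sigma_c2

# Closed-form auxiliary updates used inside the fractional-programming loop
b = np.array([np.abs(H[k].conj() @ W[:, k]) ** 2 / interference(H, W, k, sigma_c2) for k in range(K)])
t = np.array([np.sqrt(1 + b[k]) * np.real(W[:, k].conj() @ H[k]) / interference(H, W, k, sigma_c2)
              for k in range(K)])

# Dinkelbach ratio: achieved sum rate over total consumed power
sum_rate = np.sum(np.log2(1 + b))
total_power = np.sum(np.abs(W) ** 2) / eps + P0
lam = sum_rate / total_power
print(f"b = {b}, lambda = {lam:.4f}")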
§ NUMERICAL RESULTS
In this section, we provide simulation results of the proposed energy-efficient waveform design. Numerical analysis is presented to evaluate the performance of communication-centric EE (EE_C), sensing-centric EE (EE_S), and their approximate Pareto boundary.
Unless stated otherwise, we consider a dual-functional BS equipped with N = 20 receiving antennas, and the frame length is set to 30. The maximum transmission power P_max is set to 30 dBm with the power amplifier efficiency ϵ = 0.35. The circuit power consumption is set to P_0 = 33 dBm. For radar target estimation, the target angle is θ = 90^∘.
§.§ EE_C Optimization
We first examine the performance of Algorithm <ref> for maximizing EE_C considering the existence of a point-like target. The convergence behavior of Algorithm 1 is given in Fig. <ref>. Obviously, it enjoys a fast convergence rate: the objective function value converges within 12 iterations on average.
Furthermore, the convergence rate of Algorithm 1 is almost the same for
different system parameters, e.g., different M and CRB constraints, which confirms the scalability of Algorithm 1.
Fig. <ref> investigates the EE_C performance versus the root-CRB threshold for different M. The EE_C increases with the increasing Root-CRB threshold, indicating that EE_C can achieve a higher level when the sensing performance requirement is less stringent. Indeed, increasing the number of antennas can improve EE_C, since more spatial degrees-of-freedom can be utilized for designing an efficient ISAC waveform. On the other hand, the baseline scheme only maximizes the communication sum rate under the same constraints of problem (<ref>).
Obviously, the EE_C of the baseline scheme is unsatisfying, since it only considers the spectral efficiency maximization instead of the EE_C maximization. In such a case, the baseline scheme encourages the ISAC BS to adopt as much power as possible for increasing the communication sum rate.
Fig. <ref> and Fig. <ref> plot the EE_C of the point-like target and extended target with the increasing SINR constraint of multiple users, γ_k, respectively. With the increasing γ_k, EE_C first remains unchanged and then decreases due to the shrunken feasible region. Therefore, increasing the downlink communication rate does not necessarily improve EE_C. Furthermore, with the increasing root-CRB, the EE_C decreases, since more power is allocated to radar sensing due to the increasing sensing requirements. A similar trend can also be found in Fig. <ref> for the increasing CRB in the extended target case.
§.§ EE_S Optimization
In this subsection, we investigate the performance of EE_S optimization for both the point-like target and extended target cases. In Fig. <ref>, we first consider the point-like target and show EE_S versus the increasing power budget for different SINR levels. As expected, EE_S increases with increasing P_T, since a larger power budget improves the estimation accuracy. Besides, lowering the SINR requirement also improves EE_S, since relaxing the SINR constraint enlarges the feasible region.
To demonstrate the performance gain achieved by our proposed Algorithm 3, we compare it with two other baselines, namely BA_1 and BA_2. In particular, BA_1 aims to minimize the transmission power, while BA_2 aims to maximize the communication sum rate under the same constraints as our proposed method (γ_k = 5 dB, the root-CRB threshold is set to 0.15 deg, P_max = 30 dBm). The results indicate that the EE_S of BA_1 is significantly low due to insufficient power for improving the CRB performance. Additionally, the EE_S of BA_2 is also inferior to that of the proposed method and declines further as the transmission power increases, since most of the power is utilized for maximizing the sum rate instead of sensing the target.
Fig. <ref> further demonstrates EE_S versus the SINR requirement, where the root-CRB threshold is set to 0.15 deg. It can be observed that EE_S decreases as the SINR requirement and the number of communication users increase, since the growing communication requirements deteriorate the sensing performance.
As for the scenario of sensing an extended target, Fig. <ref> shows the EE_S versus communication SINR under different numbers of users and different CRB.
It is worth noting that the performance metric for the extended target sensing EE_S is different from the point-like target case.
Similar to the scenario of sensing a point-like target, EE_S decreases with the increasing requirements of communication SINR, especially when the number of users is larger. Besides, increasing CRB requirements improves EE_S, due to the improved estimation performance.
§.§ Approximate Pareto Boundary of Energy-Efficient ISAC.
Fig. <ref> plots the approximate Pareto boundary of energy-efficient ISAC, which demonstrates the tradeoff between EE_C and EE_S. With the more stringent EE_S constraint, the EE_C decreases.
In particular, when the required minimum sensing-centric EE threshold ℰ is small, strengthening the requirement of EE_S only affects EE_C mildly.
However, when the required EE_S exceeds a certain threshold, further tightening the EE_S constraint brings a sharp decline in EE_C.
This phenomenon shows that there is a non-trivial tradeoff between EE_S and EE_C, which should be given serious consideration.
Besides, we find that the area spanned by the Pareto boundary is sensitive to the number of communication users, K, since serving more communication users consumes the available spatial degrees of freedom, leaving fewer resources to compensate for the performance loss caused by an increasingly stringent EE_S constraint.
Therefore, it is more challenging to balance EE_S and EE_C for a large K.
On the other hand, after the required EE_S surpasses some threshold, EE_C decreases sharply. This is because most of the available resources are allocated to satisfying the stringent EE_S constraint, such that the remaining resources are insufficient for guaranteeing the EE_C performance.
§ CONCLUSION
In this paper, we addressed the problem of maximizing energy efficiency for MIMO ISAC systems. We first studied the communication-centric EE adopting the conventional definition of EE in both the point-like target and extended target cases. We reformulated the objective function using the quadratic-transform-Dinkelbach method and solved the sub-problem by leveraging the Schur complement and semi-relaxation techniques. In the second part, we introduced a novel performance metric for measuring sensing-centric EE. We iteratively approximated the objective function as a convex program exploiting SCA to address this problem. Finally, we investigated the tradeoff between the two EE metrics and provided an effective solution. Numerical results showed an improvement compared to the benchmark on both communication-centric EE and sensing-centric EE performance, and we also demonstrated the tradeoff between communication-centric and sensing-centric EE.
§ APPENDIX A
First, we provide the matrix inequality
𝐖_k ≽𝐰_k 𝐰_k^H,
which satisfies either of the following cases:
Case I: 𝐖_k ≻𝐰_k 𝐰_k^H. Then, we have tr(𝐖_k) > tr(𝐰_k 𝐰^H_k).
Case II: 𝐖_k = 𝐰_k 𝐰_k^H. In this case, we have tr(𝐖_k) = tr(𝐰_k 𝐰^H_k).
By combining 𝐖_k ≽𝐰_k 𝐰_k^H, with an additional LMI constraint, given as tr(𝐖_k) ≤tr(𝐰_k 𝐰^H_k), we can guarantee that Case II always holds.
We remark that tr(𝐰_k 𝐰_k^H) = tr(𝐰^H_k 𝐰_k) =𝐰^H_k 𝐰_k. Further applying the Schur complement, W_k =w_k w_k^H can be equivalently transformed into the following LMI, given as
[ 𝐖_k 𝐰_k; 𝐰_k^H 1 ]≽0 , ∀ k, tr(𝐖_k) - 𝐰^H_k 𝐰_k ≤0, ∀ k,
which completes the proof.
§ APPENDIX B
For K = 1, we can derive that 𝐡_k^H 𝐖̂^∗𝐡_k = 𝐡_k^H 𝐖^∗𝐡_k. Hence, the received SNR and the transmission rate at the user do not decrease. Besides, we have
𝐖^∗ - 𝐖̂^∗ = ( 𝐖^∗)^1/2( 𝐈 - (𝐖^∗)^1/2𝐡_k 𝐡_k^H (𝐖^∗)^1/2/𝐡_k^H 𝐖^∗𝐡_k) ( 𝐖^∗)^1/2≽0,
indicating that the power constraint is satisfied due to 𝐖^∗≽𝐖̂^∗. Additionally, replacing 𝐖^∗ by 𝐖̂^∗ would not decrease the transmission rate or increase the total power, showing that 𝐖̂^∗ is the optimum to the objective function.
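The identity above can be checked numerically; the short script below (ours) draws a random PSD 𝐖^∗ and channel 𝐡, forms Ŵ^∗ = 𝐖^∗𝐡𝐡^H𝐖^∗/(𝐡^H𝐖^∗𝐡) as implied by the displayed decomposition, and verifies that the received power is preserved, that Ŵ^∗ has rank one, and that 𝐖^∗ - Ŵ^∗ is PSD.

import numpy as np

rng = np.random.default_rng(2)
M = 5
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
W_star = A @ A.conj().T                 # random PSD matrix playing the role of W^*
h = rng.standard_normal(M) + 1j * rng.standard_normal(M)

# Rank-one extraction W_hat = W* h h^H W* / (h^H W* h)
Wh = W_star @ h
W_hat = np.outer(Wh, Wh.conj()) / np.real(h.conj() @ W_star @ h)

print("same received power :", np.isclose(h.conj() @ W_hat @ h, h.conj() @ W_star @ h))
print("rank(W_hat) == 1    :", np.linalg.matrix_rank(W_hat) == 1)
print("W* - W_hat is PSD   :", np.all(np.linalg.eigvalsh(W_star - W_hat) >= -1e-9))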
Then, we discuss the case of K > 1 . We introduce r = 𝐡_k^H ( 𝐖_k +∑_i = 1,i k^K𝐖_i + 𝐑_𝐖̃ + σ _C^2 ) 𝐡_k -1 and equivalently reformulate (<ref>) as
max_{𝐖_k, b_k}_k=1^K, 𝐑_𝐖̃, λ ∑_k=1^K log( 1+r ) - λ( 1/ϵtr( ∑_k=1^K𝐖_k + 𝐑_𝐖̃) + P_0)
+ ∑_k=1^K( log b_k - b_k ( ∑_i = 1,i k^K𝐡_k^H𝐖_i𝐡_k + 𝐡_k^H 𝐑_𝐖̃𝐡_k + σ _C^2 ) )
s.t. r = 𝐡_k^H ( 𝐖_k +∑_i = 1,i k^K𝐖_i + 𝐑_𝐖̃ + σ _C^2 ) 𝐡_k -1 ,
(<ref>),(<ref>), (<ref>), (<ref>), (<ref>) .
We note that with the fixed λ, problem (<ref>) is jointly convex of variables {𝐖_k, b_k}_k=1^K, 𝐑_𝐖̃. Thus, it can be proved that Slater's condition holds such that strong duality holds. By introducing the Lagrange multipliers ϖ_k,1≤ 0, ϖ_k,2≤ 0, μ≤ 0 and Ψ_k ≽0, we provide the Lagrangian function of 𝐖_k as
ℒ(𝐖_k) = - ϖ_k,1𝐡_k^H 𝐖_k 𝐡_k + ∑_i = 1,i k^Kϖ_i,1𝐡_i^H 𝐖_k 𝐡_i + ϖ_k,2𝐡_k^H 𝐖_k 𝐡_k - ∑_i = 1,i k^Kϖ_i,2γ_k 𝐡_i^H 𝐖_k 𝐡_i
- tr(𝐖_k Ψ_k)+ μtr(𝐖_k) + ξ ,
where ξ represents the terms that do not involve 𝐖_k. Then, the KKT conditions of (<ref>) are given as
ℒ̇(𝐖^∗_k) = 0 , 𝐖^∗_k Ψ_k = 0.
Then, we have Ψ^∗_k = 𝐀_k^∗ - ϖ_k,1𝐡_k^H 𝐡_k and
𝐀_k^∗ = ∑_i = 1,i k^Kϖ_i,1𝐡_i^H 𝐡_i + ϖ_k,2𝐡_k^H 𝐡_k - ∑_i = 1,i k^Kϖ_i,2γ_k 𝐡_i^H 𝐡_i + μ𝐈_M.
Next, we discuss the rank of 𝐀_k^∗ under the following cases.
1) Case I: rank( 𝐀_k^∗) = M.
In this case, we have rank( Ψ^∗_k) ≥ M-1 with the inequality rank( 𝐗 + 𝐘 ) ≥rank( 𝐗 ) - rank( 𝐘 ) <cit.>. For rank(Ψ^∗_k ) = M, the first condition in (<ref>) implies 𝐖^∗_k = 0.
For rank(Ψ^∗_k ) = M - 1, we have rank( 𝐖^∗_k )= 1.
2) Case II: rank( 𝐀_k^∗) = r_a < M.
In this case, we exploit <cit.> to construct a rank-1 solution 𝐖^∗_k. We use {𝐪_k,i^∗}_i=1^M-r_a to denote the columns of an orthonormal basis of Ω_k^∗, which represents the null space of 𝐀_k^∗. As Ψ^∗_k ≽0, we have (𝐪_k,i^∗)^H Ψ^∗_k 𝐪_k,i^∗ = - ϖ_k,1 |𝐡_k^H 𝐪_k,i^∗ |^2 ≥ 0. Since (<ref>) should be active at the optimum, indicating ϖ_k,1≥ 0, we have 𝐡_k^H 𝐪_k,i^∗ = 0 and Ψ^∗_k Ω_k^∗ = 0. Thus, M - r_a dimensions of the null space of Ψ^∗_k can be represented by Ω_k^∗. Denoting the null space of Ψ^∗_k by Ω̅_k^∗, we have rank(Ω̅_k^∗) ≥ M - r_a. Additionally, since rank( 𝐀_k^∗) = r_a, we have rank( Ψ^∗_k) ≥ r_a - 1, which shows that rank(Ω̅_k^∗) ≤ M - r_a + 1. Then, it can be readily noted that rank(Ω̅_k^∗) = M - r_a or rank(Ω̅_k^∗) = M - r_a + 1. When rank(Ω̅_k^∗) = M - r_a, we have 𝐖^∗_k = ∑_i=1^M-r_aλ_k,i^∗𝐪_k,i^∗ (𝐪_k,i^∗)^H with λ_k,i^∗≥ 0. In such a case, 𝐡_k^H 𝐖_k^∗𝐡_k = 0, which contradicts the optimality. Hence, we conclude that rank(Ω̅_k^∗) = M - r_a + 1. Writing Ω̅_k^∗ = [Ω_k^∗, 𝐩_k^∗], the optimal solution 𝐖^∗_k can be given as 𝐖^∗_k = ∑_i=1^M-r_aλ_k,i^∗𝐪_k,i^∗ (𝐪_k,i^∗)^H + λ̃^∗_k 𝐩_k^∗ (𝐩_k^∗)^H with λ̃^∗_k ≥ 0. Therefore, a rank-1 solution can be constructed as
𝐖̂_k^∗ = 𝐖^∗_k - ∑_i=1^M-r_aλ_k,i^∗𝐪_k,i^∗ (𝐪_k,i^∗)^H = λ̃^∗_k 𝐩_k^∗ (𝐩_k^∗)^H , 𝐑̂^∗_𝐖̃ = 𝐑^∗_𝐖̃ + ∑_i=1^M-r_aλ_k,i^∗𝐪_k,i^∗ (𝐪_k,i^∗)^H.
In the following, we show that the reconstructed solution, 𝐖̂_k^∗ and 𝐑̂^∗_𝐖̃ satisfy the constraints. Firstly, we have
𝐡_k^H 𝐖_k^∗𝐡_k = 𝐡_k^H 𝐖̂_k^∗𝐡_k, 𝐡_k^H (∑_i = 1,i k^K𝐖^∗_i + 𝐑^∗_𝐖̃) 𝐡_k = 𝐡_k^H (∑_i = 1,i k^K𝐖̂^∗_i + 𝐑̂^∗_𝐖̃) 𝐡_k.
Therefore, the right-hand side term in (<ref>) and the left-hand side term in (<ref>) remain unchanged.
Besides, it can be readily verified that constraints (<ref>) and (<ref>) hold, since 𝐖_k^∗ + 𝐑^∗_𝐖̃ = 𝐖̂^∗_k + 𝐑̂^∗_𝐖̃, which completes the proof.
|
http://arxiv.org/abs/2307.03959v1 | 20230708115509 | Understanding the power-law nature of participation in community sports organizations | [
"Jia Yu",
"Mengjun Ding",
"Weiqiang Sun",
"Weisheng Hu",
"Huiru Wang"
] | cs.SI | [
"cs.SI",
"physics.soc-ph"
] |
Understanding the power-law nature of participation in community sports organizations
Jia Yu, Mengjun Ding, Weiqiang Sun, Senior Member, IEEE,
Weisheng Hu, Member, IEEE, Huiru Wang
Manuscript received June, 2023. (Corresponding author: Weiqiang Sun.)
Jia Yu, Mengjun Ding, Weiqiang Sun, and Weisheng Hu are with the School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China. Huiru Wang is with the Department of Physical Education, Shanghai Jiaotong University, Shanghai 200240, China.
(e-mail: {yujia543, mengjun_ding, sunwq, wshu, wanghr}@sjtu.edu.cn).
August 12, 2023
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
The improvement of living standards and awareness of chronic diseases have increased the importance of community sports organizations in promoting the physical activity levels of the public. However, limited understanding of human behavior in this context often leads to suboptimal resource utilization. In this study, we analyzed the participation behavior of 2,956 members with a time span of 6 years in a community sports organization. Our study reveals that, at the population level, the participation frequency in activities adheres to a power-law distribution. To understand the underlying mechanisms driving crowd participation, we introduce a novel behavioral model called HFBI (Habit-Formation and Behavioral Inertia), demonstrating a robust fit to the observed power-law distribution. The habit formation mechanism indicates that individuals who are more engaged are more likely to maintain participation, while the behavioral inertia mechanism suggests that individuals' willingness to participate in activities diminishes with their absences from activities. At the individual level, our analysis reveals a burst-quiet participation pattern, with bursts often commencing with incentive activities. We also find a power-law distribution in the intervals between individual participations. Our research offers valuable insights into the complex dynamics of human participation in community sports activity and provides a theoretical foundation to inform intervention design. Furthermore, the flexibility of our model enables its application to other data exhibiting power-law properties, broadening its potential impact beyond the realm of community sports.
human behavior, power law, habit formation, behavioral inertia, burst timing, community sports activity.
§ INTRODUCTION
Globalization, urbanization, and increased wealth have led to significant lifestyle changes, causing a widespread decrease in physical activity. According to the World Health Organization (WHO), inactivity rates can climb as high as 70% in certain countries, primarily due to shifts in transportation habits, heightened reliance on technology, and urbanization <cit.>. Physical inactivity, which has been identified as a global pandemic, is responsible for up to 8% of non-communicable diseases and deaths globally <cit.>. Conservatively estimated, physical inactivity cost health-care systems INT$53.8 billion worldwide in 2013 <cit.>. Additionally, if the prevalence of physical inactivity remains unchanged, it is projected that by 2030, there will be around 499.2 million new cases of preventable major NCDs worldwide, resulting in direct health-care costs of INT$ 520 billion. The annual global cost of not taking action on physical inactivity is anticipated to reach approximately $47.6 billion <cit.>.
In an effort to improve physical activity participation, community sports organizations have achieved remarkable results in recent years. Many concur that community sport, as a low-threshold physical activity, is a powerful tool for targeting socially vulnerable groups <cit.>. Moreover, community sport has been recognized as a policy area and a social field that goes beyond “just" providing opportunities for groups to participate in sports. It also encompasses functions such as social care and crime reduction <cit.>. Today, being non-profit by nature, community sports organizations face greater challenges, such as competition for limited resources, volunteer availability, and capacity, and the impact of pandemics (such as COVID-19) <cit.>. Understanding the nature of the population participating in community sports is thus pivotal to making the best use of limited resources.
The interest in the data-driven exploration of human behavior has been persistent. Very early on, power-law distribution has been found in certain human behaviors, such as the intervals between emails <cit.>, the pattern of phone calls <cit.>, and complex social networks <cit.>. Efforts have been made to understand the principle behind the formation of this power-law distribution in these behaviors <cit.>. Classical models such as the decision-based queuing process <cit.> and preferential attachment <cit.> are proposed to explain the power law distribution observed in the waiting time for processing emails and the degree distribution in complex networks, respectively. Research on community sports organizations is usually conducted from an organizational management perspective, providing high-level guidance for organizational development by quantifying aspects such as resources, program design, diversity, life cycle, and resilience <cit.>. However, very few, if any, models are population-based and consider when, how, and who participates in community-level sports activities <cit.>.
In this study, with the data from 2,956 users collected over a span of six years, we discovered a power-law distribution of population participation in community sports activities. To explain this power-law distribution, we proposed the hypothesis of habit formation and behavioral inertia in community sports activity participation. Previous research has indicated that physical activity behavior can be developed through repeated experience of the activity in stable contexts <cit.>. Human behavior does exhibit inertia, as evidenced by the tendency for users to stick with default options <cit.> and purchase habits <cit.>. Our empirical data provides evidence of habit formation and behavioral inertia in community sports participation. It may help to address the question, “What is the typical `shape' of within-person real-world habit growth with repetition over the long-term" identified in the 2019 European Health Psychology Society Synergy Expert Meeting <cit.>. Based on these two mechanisms, we designed a behavioral model called HFBI that can robustly fit the power-law distribution of the empirical data. A power-law distribution is also observed in the intervals of participation at the individual level, signifying a burst-quiet pattern of activity participation. With the relevant activity information, we found that bursts tend to be initiated by activities with incentive rewards, suggesting that incentive activities can help call people back for sustained engagement. The main contributions of this article are described as follows.
* For the first time, we discovered that the frequency of population participation in community physical activities and the interval between individual's participations obey power-law distributions.
* We proposed an intuitive model to explain the power-law distribution of population participation in community physical activities, by taking into account habit formation and behavioral inertia. We demonstrated good fitting performance and statistical significance with real-world data. The model may as well be used in other domains where power-law distributions with low power-law exponents are observed.
* The intervals between individual's participation exhibit a power-law distribution, with a pattern of bursts followed by periods of inactivity (a burst-quiet pattern). We observed that bursts often start with incentive activities located in the head position. This implies that incentive activities not only attract more participants but also have the potential to call users back from a quiet state to an active state, thereby promoting sustained engagement.
The rest of this article is organized as follows. In Section II, we demonstrate the power-law phenomenon of participation frequency in activities at the population level. In Section III, we introduce the proposed HFBI model and present the evidence. In Section IV, we verify the participation patterns at the individual level and the role of incentive activities. In Section V, we present the related work. Finally, we summarize this paper in Section VI.
§ POWER-LAW DISTRIBUTION OF PARTICIPATION FREQUENCY AT THE POPULATION LEVEL
§.§ Data Description
The data used in our research was sourced from a university-based community sports platform that we develop and operate, which allows individuals to initiate or participate in sports activities. The initiator of the activity can choose whether or not to provide rewards as incentives for the activity. Over the course of 6 years, from May 2015 to May 2021, our dataset captured 28,714 records of activity participation in 770 activities (including 110 activities with incentives), involving a total of 2,956 individuals. Each record in the dataset contains the participant's ID, activity ID, team ID, and type of activity (whether to provide incentives or not). The activity IDs are consecutive natural numbers starting from 0 and arranged in the order of their occurrence (numbered from 0 to 769).
§.§ Fitting the Empirical Data
The frequency of user i participating in activities over the entire period is denoted as q_i. For the sequence of activity participation frequencies { q_i}, we assume that the frequencies larger than a truncation value q_min are described by the power-law distribution,
p(q) ∼ q^-γ, q ≥ q_min.
In the Kolmogorov-Smirnov (KS) test, p>0.1 (or p>0.05) suggests that the data can be considered to follow a power-law distribution. We select the smallest value of q that satisfies the KS test with p>0.1 as q_min, and the data above q_min can be plausibly modeled by a power-law distribution. The estimate of γ is obtained by maximum likelihood estimation (MLE) <cit.>.
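As an illustration of this fitting pipeline, the snippet below implements the discrete power-law MLE (using the standard Clauset-style approximation) and the KS distance on synthetic data. It is a sketch rather than the exact code used for the paper, and the p-values quoted in the text would additionally require the usual bootstrap over synthetic samples (or a package such as powerlaw), which is omitted here.

import numpy as np
from scipy.special import zeta

def fit_discrete_powerlaw(samples, q_min):
    """MLE exponent (Clauset et al. approximation) and KS distance for the tail q >= q_min."""
    tail = samples[samples >= q_min]
    alpha = 1.0 + len(tail) / np.sum(np.log(tail / (q_min - 0.5)))
    # Fitted tail CDF from the Hurwitz-zeta normalization, compared with the empirical CDF
    support = np.arange(q_min, tail.max() + 1)
    pmf = support ** (-alpha) / zeta(alpha, q_min)
    cdf = np.cumsum(pmf)
    ecdf = np.array([(tail <= q).mean() for q in support])
    return alpha, np.max(np.abs(ecdf - cdf))

# Toy data standing in for the participation frequencies {q_i} (not the real dataset)
rng = np.random.default_rng(3)
toy = np.round(rng.pareto(0.8, 3000) + 1).astype(int)
for q_min in (1, 2, 3):
    alpha, ks = fit_discrete_powerlaw(toy, q_min)
    print(f"q_min={q_min}: alpha_hat={alpha:.2f}, KS distance={ks:.3f}")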
§.§ Power-law Distribution of Participation Frequency
The participation frequency of the population follows a power-law distribution. Fig. <ref> shows the empirical distribution of user participation frequency in activities in a complementary cumulative way to enhance the statistical significance <cit.>. The complementary cumulative function can be represented as F(q)=∑_q^'=q^∞ p(q^'), where p(q) denotes the proportion of individuals who participated in activities q times. A clear straight-line trend can be observed on the double logarithmic axes, indicating a power-law distribution of the data. Kolmogorov–Smirnov (KS) tests and maximum likelihood estimation (MLE) fits are employed to check whether the empirical distribution obeys a power law and to estimate the related parameters. The result shows that the frequency of population participation in activities is in line with a power-law distribution (p=0.18, q_min=2) with power-law exponent γ=1.76. The cutoff of the tail indicates that fewer individuals participate in an exceptionally large number of activities than a power-law distribution would predict, a phenomenon commonly observed in real-world systems. Fig. <ref> shows the relationship between the fraction P of the participation and the most active fraction p of the population. The 80/20 rule is evident: the top 20% of the most active users contributed approximately 84% of the total activity participation. Theoretically, the imbalance would be even more extreme for power-law distributions with γ less than 2; however, the finite number of activities and the tail cutoff bring the ratio close to the classical Pareto law.
To demonstrate that the power-law distribution of the participation frequency is not a momentary coincidence, we analyzed the data at each activity node after the platform scale reached 1000. All samples (287 (88.9%) with q_min=1 and 36 (11.1%) with q_min=2) conformed to the power-law distribution by the KS test, with p-values all greater than 0.1. Fig. <ref> presents the γ for all samples of 323 activity nodes. The range of γ spans from 1.66 to 1.81 with a mean of 1.72. The exponent changes slowly as activities are held, first decreasing steadily and then fluctuating and rising. A γ less than 2 indicates a pronounced “heavy tail" phenomenon in the frequency of participation.
§ HFBI-A BEHAVIORAL MODEL BASED ON HABIT FORMATION AND BEHAVIORAL INERTIA
To explore the principle behind the power-law distribution of the participation frequency, we propose a behavioral model named HFBI, which is based on the assumptions of habit formation and behavioral inertia. Intuitively, people who have participated in activities frequently or have just participated in an activity are more likely to participate in subsequent activities. These assumptions are supported by convincing evidence from our data.
§.§ Evidence for Habit Formation and Behavioral Inertia
To provide evidence for the habit formation and behavioral inertia mechanisms, we performed a statistical analysis of all activities in the dataset. The proportion of people who have participated in q activities and would choose to participate in a new available activity can be represented as
prop .(q)=∑_j=0^N-1 m_q^j/∑_j=0^N-1 n_q^j.
Here, n_q^j represents the number of individuals who have participated in q activities before a new activity j, m_q^j represents the number of individuals among them who choose to participate in the activity j, and N is the total number of activities in the dataset. The denominator represents the total number of individuals who have participated in q activities for all activities, while the numerator represents the number of individuals who choose to continue to participate in an activity after participating in q activities.
Similarly, the proportion of people who have been away from activities for d sessions and would choose to participate in a new available activity can be represented as
prop .(d)=∑_j=0^N-1 m_d^j/∑_j=0^N-1 n_d^j.
n_d^j represents the number of individuals who have been away from activities for d sessions for activity j, m_d^j represents the number of individuals among them who choose to participate in the activity j. The denominator represents the total number of individuals who have been away from activities for d sessions for all activities, while the numerator represents the number of individuals who choose to continue to participate in an activity after being away from activities for d sessions.
Fig. <ref> shows the proportion of people who have participated in q activities and would choose to participate in a new available activity. As shown, the proportion of individuals opting to continue participation increases almost linearly with the number of activities participated in the early stage. Fig. <ref> illustrates the proportion of people who have been away from activities for d sessions and would choose to participate in a new available activity. As the number of sessions away from activities increases, the proportion of people choosing to back to participating in activities sharply decreases. These provide solid evidence for the existence of habit formation and behavioral inertia in community sports participation.
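A sketch of how the proportions prop(q) and prop(d) defined above can be computed from raw participation records is given below; the record format (user ID, activity ID) follows the data description in Section II, while the function and variable names are our own.

from collections import defaultdict

def habit_inertia_stats(records, n_activities):
    """records: iterable of (user_id, activity_id) pairs; activities are numbered 0..n_activities-1.
    Returns prop(q) and prop(d) as dicts mapping the value of q (or d) to the observed proportion."""
    attend = defaultdict(set)                       # activity -> set of attending users
    for u, a in records:
        attend[a].add(u)

    count = defaultdict(int)                        # user -> number of activities attended so far
    last = {}                                       # user -> index of the last attended activity
    num_q, hit_q = defaultdict(int), defaultdict(int)
    num_d, hit_d = defaultdict(int), defaultdict(int)

    for j in range(n_activities):
        for u in list(count):                       # existing users only
            q, d = count[u], j - last[u]
            num_q[q] += 1
            num_d[d] += 1
            if u in attend[j]:
                hit_q[q] += 1
                hit_d[d] += 1
        for u in attend[j]:                         # update histories after activity j
            count[u] += 1
            last[u] = j

    prop_q = {q: hit_q[q] / num_q[q] for q in num_q}
    prop_d = {d: hit_d[d] / num_d[d] for d in num_d}
    return prop_q, prop_d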
§.§ The HFBI Model
Based on the evidence presented, we propose the HFBI model, which incorporates habit formation and behavioral inertia, to simulate user participation in activities. The experimental results demonstrate that the model can accurately simulate user participation in activities with only four parameters.
§.§.§ Parameter Settings
The HFBI model only requires four parameters: n, c, m, and α. n represents the number of activities held, i.e., the model's iteration count. c and m refer to the quantities of new and existing users participating in an activity (added in a round of iterations), respectively. α is a parameter that adjusts the ratio of habit formation and behavioral inertia to achieve a better fit with the empirical data. The parameters of c and m can be derived from the mean values of the dataset. Note that since the parameters are natural integers, the values of c and m will be rounded. To ensure consistency in the scale of the population, n is calculated based on the number of population, c, and m. Additionally, we initiate the iteration process with m pre-existing users to enable the selection of existing users at the start of the iteration.
§.§.§ Model Description
The model is characterized by adding users in a sequential and batched manner, which aligns with many real-life situations. Initially, we make the assumption that for every activity, there will be c new users and m existing users participating. For a new available activity and an existing user i, q_i represents the total number of activities that user i has participated in before, and d_i represents the interval between the last activity they participated in and the current new activity. User i participating in the activity can be attributed to two mechanisms. (1) User i has a probability of α to participate in the activity due to habit formation, which means the probability of participating is proportional to q_i:
q_i/∑_i ∈ I q_i.
(2) Additionally, there is a probability of 1-α for user i to participate in the activity due to behavioral inertia, which means the probability is a decreasing function of d_i:
1 / d_i/∑_i ∈ I 1 / d_i.
Therefore, the total probability of user i participating in the activity is:
ϕ_i=αq_i/∑_i ∈ I q_i+(1-α) 1 / d_i/∑_i ∈ I 1 / d_i.
I is the set of all existing users. The model will perform n rounds of iterations, adding c new users and selecting m existing users based on Eq. <ref> in each round. The c new users will be added to the existing user pool in each round. The overall process of the model is shown in Algorithm <ref>. Note that the specific form of the decreasing function for d_i is not unique, as it can be adjusted by the parameter α.
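A minimal simulation of the model, written by us as a sketch of Algorithm <ref> rather than taken from the authors' code, might look as follows. It uses 1/d_i as the inertia weight, initializes the m seed users with q_i = d_i = 1 (an assumption, since the initialization values are not specified), and for illustration adopts the parameter values reported later in the paper (n=731, c=4, m=33, α=0.9).

import numpy as np

def simulate_hfbi(n, c, m, alpha, seed=0):
    """HFBI model: each of the n activities recruits c new users and m existing users,
    chosen with probability alpha*q_i/sum(q) + (1-alpha)*(1/d_i)/sum(1/d)."""
    rng = np.random.default_rng(seed)
    q = np.ones(m, dtype=float)       # m seed users, each with one participation (assumed)
    d = np.ones(m, dtype=float)       # sessions since last participation

    for _ in range(n):
        p = alpha * q / q.sum() + (1 - alpha) * (1.0 / d) / (1.0 / d).sum()
        chosen = rng.choice(len(q), size=min(m, len(q)), replace=False, p=p)
        q[chosen] += 1
        d += 1
        d[chosen] = 1
        q = np.append(q, np.ones(c))  # c newcomers join the pool with one participation each
        d = np.append(d, np.ones(c))
    return q.astype(int)

counts = simulate_hfbi(n=731, c=4, m=33, alpha=0.9)
print("users:", len(counts), " max participation:", counts.max())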
§.§.§ Proof of Power-Law Distribution and Exponent in Habit Formation
When only considering the habit formation, that is ϕ_i=q_i/∑_i ∈ I q_i, the model can generate power-law distribution data with a power exponent γ=2+c/m. The proof process is similar to the Price model <cit.>. In the HFBI, for every activity held, there will be c new users and m existing users participating, and the participation probability of existing users is proportional to the number of activities they have participated in before. Let p_q(n) be the fraction of users that have participated q times when the platform contains n users, which is also the probability distribution of participation frequency. q_i represents the number of activities participated by user i. When organizing an activity where only one user among all existing users will participate, the probability of existing user i participating in the activity is
q_i/∑_i q_i = q_i/(n⟨ q⟩) = c q_i/(n(m+c)).
where ⟨ q⟩ represents the average number of activities each person participates in, ⟨ q⟩=n^-1∑_i q_i. The number of people who have participated in q activities is np_q(n). When there is a new activity, the expected number of people who have participated in q activities and will join the new activity is
n p_q(n) × m × c q/(n(m+c)) = m c q/(m+c) p_q(n).
Then the master equation for the evolution of the participation frequency distribution is
(n+c) p_q(n+c) = n p_q(n) + (q-1)mc/(m+c) p_q-1(n) - qmc/(m+c) p_q(n).
The left side of the equation is the expected number of people participating in the activity q times after adding an activity. The first term on the right-hand side here represents the number of users with previous q participation. The second term refers to the expected number of users who have a participation frequency of q-1 and join the activity and become q times, while the third term refers to the expected number of users who have a participation frequency of q and participate in this activity and are no longer q times.
Eq. <ref> is applicable for all cases where q ≠ 1. When q = 1, the right side of the equation will increase by c new users whose participation frequency becomes 1, instead of the second term in Eq. <ref>, and the equation for q=1 is
(n+c) p_1(n+c) = n p_1(n) + c - mc/(m+c) p_1(n).
When considering the limit of large population size n →∞ and calculating the asymptotic form of the distribution participation frequency in this limit, we take the limit n →∞ and use the shorthand p_q=p_q(∞). Eqs. <ref> and <ref> become
p_q = (q-1)mc/(c(m+c)+mqc) p_q-1 for q>1,
p_1 = (m+c)/(2m+c) for q = 1.
Let k = c/m, then
p_1 = (1+k)/(2+k) for q = 1,
p_q = (q-1)/(q+k+1) p_q-1 for q>1.
With Eqs. <ref> and <ref>, we can iteratively determine p_q for all values of q, beginning with our initial solution for p_1. The results are as follows:
[ p_1 = (1+k)/(2+k); p_2 = 1/(2+k+1) × (1+k)/(2+k); p_3 = 2/(3+k+1) × 1/(2+k+1) × (1+k)/(2+k); p_4 = 3/(4+k+1) × 2/(3+k+1) × 1/(2+k+1) × (1+k)/(2+k); ...; ]
The expression for general q can be successively derived as:
p_q = [(q-1) ×(q-2) …× 1 ×(1+k)] / [(q+k+1) ×(q-1+k+1) …×(2+k+1) ×(2+k)].
It is known that the gamma function is
Γ(x)=∫_0^∞ t^(x-1) e^-t d t,
and it has the property that
Γ(x+1)=x Γ(x) for x > 0.
Applying this equation iteratively, we find that
Γ(x+n)/Γ(x)=(x+n-1)(x+n-2) … x.
Using this result, we can rewrite Eq. <ref> as
p_q = (1+k) Γ(q) Γ(2+k)/(Γ(1) Γ(2+k+q)).
By further employing Euler's formula
B(x, y)=Γ(x) Γ(y)/Γ(x+y),
Eq. <ref> can be simplified to
p_q=(1+k)/Γ(1) B(q, 2+k) .
Using Stirling’s approximation for the gamma
function, the beta function B(x, y) falls off as a power law for large values of x, with exponent y <cit.>,
B(x, y) ≃ x^-yΓ(y).
Applying this finding to Eq. <ref>, for large values of q, the distribution of participation frequency goes as
p_q∼ q^-γ=q^-(2+k)=q^-(2+c/m),
where the exponent γ is
γ=2+k=2+c/m.
Therefore, by only considering habit formation, represented by ϕ_i=q_i/∑_i ∈ I q_i, the model is able to generate data with a power-law distribution, where the power exponent is given by γ=2+c/m.
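The recursion, the closed form, and the asymptotic exponent can be cross-checked numerically; the short script below (ours) does so for k = c/m = 4/33, using log-gamma to avoid overflow. The values in the last line should approach the constant (1+k)Γ(2+k), confirming the q^-(2+k) tail.

import numpy as np
from scipy.special import gammaln

c, m = 4, 33
k = c / m
qs = np.arange(1, 301)

# Recursion p_q = (q-1)/(q+k+1) * p_{q-1}, seeded with p_1 = (1+k)/(2+k)
p = np.empty(len(qs))
p[0] = (1 + k) / (2 + k)
for i in range(1, len(qs)):
    q = qs[i]
    p[i] = (q - 1) / (q + k + 1) * p[i - 1]

# Closed form p_q = (1+k) Gamma(q) Gamma(2+k) / (Gamma(1) Gamma(2+k+q)), via log-gamma for stability
closed = (1 + k) * np.exp(gammaln(qs) + gammaln(2 + k) - gammaln(1.0) - gammaln(2 + k + qs))

print("max relative gap, recursion vs closed form:", np.max(np.abs(p / closed - 1)))
print("p_q * q^(2+k) for q = 100, 200, 300:", [round(p[q - 1] * q ** (2 + k), 3) for q in (100, 200, 300)])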
§.§.§ Experimental Results on the Real Dataset
We conducted experiments on real data, and the results show that HFBI is capable of generating data with only four parameters derived from the mean values of the empirical data and also exhibits good statistical significance.
The Kolmogorov-Smirnov (KS) test is used to assess whether the data generated by the model and empirical data are drawn from the same distribution. The KS statistic is a value that measures the maximum distance between two cumulative distribution functions (CDFs) of two samples, which is used to determine if two samples are drawn from the same underlying probability distribution or not. The null hypothesis is that the two distributions are identical. If p > 0.1, we cannot reject the null hypothesis, which suggests that the data generating process is plausible.
The experiment is first performed on the largest-scale data, that is, the data up to the last activity node. The parameter values for c, m, and n are derived from the mean values of the data and are determined as 4, 33, and 731, respectively. In Fig. <ref>, a comparison is shown between the generated data from HFBI and the real data. It can be seen that the distribution of the simulated data and the real data are very close. The model achieves the best fit when α is set to 0.9. The α values within the range of 0 to 1 suggest that the results of the empirical distribution are attributed to the combined effects of both habit formation and behavioral inertia mechanisms. The habit formation mechanism described by Eq. <ref> can be demonstrated to generate data with a power-law distribution for γ=2+c/m, which is strictly greater than 2 and differs from the empirical data. The participation frequency with γ less than 2 implies that the frequency of participation in activities is slightly more than what can be explained by the habit formation mechanism alone. The behavioral inertia mechanism precisely compensates for this deficiency, as it captures the situation of individuals who have just participated in an activity being highly likely to continue participating in one or two due to inertia. It effectively adjusts the exponent while preserving the power-law distribution. It is the joint effect of both mechanisms that generate data that closely fit the empirical data.
The data produced by the model cannot capture the extremely rare users who engage in activities excessively. One possible explanation is that these individuals usually have a strong self-motivation to participate in activities, which cannot be captured by habit formation, as evidenced by the non-steady growth in the later stage of Fig. <ref>; this is considered acceptable since the proportion of such individuals is extremely low. In addition, since the parameters must be integers and the number of users is kept consistent between the generated data and the empirical data, there can be a small difference between the model's n and the actual number of activities.
To demonstrate its robustness, the model was also employed to fit the participation frequency up to each activity node. As the generated data can be slightly different each time, we conducted 5 runs for each possible value of α and selected the optimal α value with the highest average p-value among the 5 runs. The average p-values and corresponding optimal α of the model fitting for 323 samples are shown in Fig. <ref>. In Fig. <ref> and Fig. <ref>, the behavioral inertia mechanism is represented by (1/d_i)/∑_i ∈ I 1/d_i and e^-d_i/∑_i ∈ I e^-d_i, respectively. This shows that different functional forms can achieve a good fit at different values of α. The model shows good fitting performance (p>0.1) for all empirical data samples, indicating its correctness and robustness. The range of α values from 0.69 to 1 suggests that the proportion of habit formation and behavioral inertia may vary in different situations. We can observe clear downward trends in α around 450 to 600, indicating that the proportion of behavioral inertia gradually increases during this stage. By combining with Fig. <ref>, it can be observed that there is also a decreasing trend of γ. This indicates that behavioral inertia can effectively help to capture situations with smaller γ.
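The two-sample KS test used above is available in SciPy; the snippet below shows the call on stand-in arrays (the real comparison would use the empirical frequencies and the HFBI-generated frequencies).

import numpy as np
from scipy.stats import ks_2samp

# Stand-ins for the empirical frequencies and the HFBI-generated frequencies
rng = np.random.default_rng(4)
real = np.round(rng.pareto(0.8, 2956) + 1).astype(int)
model = np.round(rng.pareto(0.8, 2957) + 1).astype(int)

stat, p_value = ks_2samp(model, real)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")   # p > 0.1: cannot reject a common distribution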
§ PARTICIPATION PATTERNS AT THE INDIVIDUAL LEVEL
At the population level, the frequency of participation in activities follows a power-law distribution. At the individual level, the pattern of activity participation, specifically the intervals between each user's participation, is also worth studying. Similarly, we investigated the distribution of intervals between each individual's activity participation and discovered that they also exhibit a power-law distribution. In terms of activity participation patterns, it is a burst-quiet mode where individuals alternate between periods of high activity and periods of low activity.
§.§ The Burst-Quiet Pattern
The interval between an individual's participations is defined as the difference between the IDs of two consecutive activities in which they participated, denoted by r. Since a sufficient amount of interval sequence data is required, we focused on 58 loyal users who participated in more than 100 activities for the individual-level analysis. Fig. <ref> shows an example of a real user's participation in activities. It is evident that intervals of individual participation in activities vary greatly in size, with a majority being small and some being large. The participation of individuals is characterized by alternating bursts of high activity and long periods of low activity, similar to the outgoing mobile phone call sequence of an individual <cit.>. This burst-quiet pattern is common among the group of loyal users. We studied the distribution of interval sequences for all 58 users and discovered that their interval sequences also follow a power-law distribution (p>0.1 for 54 users, p>0.05 for all 58 users, r_min=1 for 48 users, and r_min=2 for 10 users).
The power-law distribution also plays an important role in the intervals of individual participation in activities. Fig. <ref> shows examples of complementary cumulative probability distributions of the intervals for three users. The participation intervals of each of the three individuals obey a power-law distribution with different exponents. Fig. <ref> plots the probability distribution of the estimated power-law exponents γ for all loyal individuals, revealing a range from 1.6 to 3.25 and a mean of 2.35. Although their activity participation intervals all follow power-law distributions, the difference in the power-law exponent is quite significant. The range of γ is surprisingly consistent with the γ reported by Jiang et al. <cit.> for individuals whose intraday inter-call durations follow a power-law distribution. The probability distributions are also somewhat similar, which may suggest a potential connection between the intervals of different human behaviors.
§.§ The Role of Incentive Activities in Bursts
Burst, characterized by frequent participation in activities with short intervals within a specific period, has a significant impact on improving individuals' overall fitness level. Therefore, it is important to explore the factors associated with this pattern to promote physical activity among the population. In this study, a burst is defined as a period in which the interval between consecutive activities a user participates in is less than a threshold value Δ. The specific value of Δ is arbitrarily set in empirical analysis <cit.>.
Organizations often invest resources to provide incentives for activities to attract users to participate. Incentives are crucial in promoting physical activity. Typically, physical activity behavior is initially motivated by incentives, and as habits form, it shifts towards unconscious and automatic processes <cit.>. The effectiveness of incentives can be immediately reflected in the number of participants in the activity. However, the benefits in other aspects are yet to be discovered. Our study has made some findings by observing the position of incentive activities in bursts. At thresholds of Δ=8, 9, and 10, we identified a total of 433, 399, and 378 bursts for all individuals, respectively, and recorded the position of the first occurrence of an incentive activity within each burst. As shown in Fig. <ref>, the majority of bursts are observed to start with incentive activities. Table <ref> shows the number and percentage of bursts with the first incentive activity appearing at the head position of the burst. Over 50% of bursts have their first incentive activity in the first position, and over 65% in the first three positions, at different Δ. Note that only one in seven activities is incentivized. The proportion of incentive activities at the head of bursts is much higher than this, indicating a correlation between the occurrence of incentive activities and bursts. This phenomenon suggests that in addition to increasing the number of participants in an activity, incentive activities may also play a role in calling users back from a quiet state to a burst state for sustained engagement.
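A simple way to extract bursts and the position of the first incentive activity, under the definition above, is sketched below with a toy participation history; the variable names and example data are ours.

def bursts_and_first_incentive(activity_ids, incentive_ids, delta):
    """Split one user's sorted activity IDs into bursts (consecutive gaps < delta) and
    return, for each burst, the 1-based position of the first incentive activity (None if absent)."""
    activity_ids = sorted(activity_ids)
    bursts, current = [], [activity_ids[0]]
    for prev, nxt in zip(activity_ids, activity_ids[1:]):
        if nxt - prev < delta:
            current.append(nxt)
        else:
            bursts.append(current)
            current = [nxt]
    bursts.append(current)

    positions = []
    for burst in bursts:
        pos = next((i + 1 for i, a in enumerate(burst) if a in incentive_ids), None)
        positions.append(pos)
    return bursts, positions

# Toy example: one loyal user's participation history and a set of incentive activities
user_history = [3, 4, 6, 7, 30, 31, 33, 60]
incentives = {30, 61}
bursts, first_pos = bursts_and_first_incentive(user_history, incentives, delta=8)
print(bursts)        # [[3, 4, 6, 7], [30, 31, 33], [60]]
print(first_pos)     # [None, 1, None] -> the second burst starts with an incentive activity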
§ RELATED WORK
Power law distributions have been observed in various domains and contexts, such as biology <cit.>, general science <cit.>, economics <cit.> and the social sciences <cit.>. Many human behaviors, such as the intervals between sending emails <cit.> and the pattern of phone calls <cit.>, have also been identified as following power-law distributions. Our work has discovered that the participation frequency of the population and the intervals between individual participation in activities exhibit power-law distributions in the context of community sports organizations.
Over the years, there have been continuous efforts to propose diverse models aimed at replicating and explaining data characterized by power-law distributions. Barabási proposed the classic preferential attachment model, which can generate data exhibiting a power-law distribution with an exponent of 3 <cit.>. There are also derivative models that can generate data with power-law distributions with exponents between 2 and 3 <cit.>. They have been widely used to explain the power-law distribution of node degrees observed in social networks. The decision-based queuing process <cit.> simulates the power-law distribution of waiting times for emails by randomly assigning priorities to each incoming task and following a rule of processing tasks in priority order. This suggests that the power-law distribution of waiting times for emails may be attributed to human decision-making based on priorities. The preferential attachment model suggests that the power-law distribution of node degrees in networks may be due to the preferential connection of newly added nodes to high-degree nodes in the network <cit.>. In our HFBI model, the habit formation mechanism exhibits similarities to the preferential attachment model and can be proven to generate data conforming to a power-law distribution. In addition, the behavioral inertia component of the HFBI model introduces effective modifications, leading to a slight decrease in the exponent of the data while preserving its essential power-law characteristics.
Community sports organizations have been receiving increasing attention for their significant contributions to public health and social harmony. Klenk et al. <cit.> investigated the participation of people with disabilities in community sports activities from three aspects: (1) social contacts, interactions, and friendships, (2) self-perception and identity formation, and (3) social acceptance, support, and embeddedness. Hanlon et al. <cit.> conducted a questionnaire survey to investigate the needs and initiatives for women's participation in community sports activities. Zhou et al.'s survey <cit.> revealed a correlation between the provision of community-sport services (both core and peripheral services) and participants’ satisfaction levels. To the best of our knowledge, there is no research that explores and comprehensively understands individual participation in community sports organizations through a data-driven modeling approach.
§ CONCLUSION
Our study has identified new members of the power-law data family, a) the frequency of community sports participation among populations, and b) the interval of individual activity participation. The participation frequency exhibits a power-law distribution with a tail cutoff and an exponent less than 2. We have proposed HFBI - a model based on habit formation and behavioral inertia, to uncover the underlying causes for this power-law distribution. In the model, the behavioral inertia mechanism effectively complements the habit formation mechanism, with which alone one can only generate power-law distributions with an exponent greater than 2. The model provides a robust fit to the empirical data. Furthermore, Individual participation in community sports activities exhibits a burst-quiet pattern. Importantly, our study suggests that periods of high activity bursts are often driven by incentive activities, highlighting the importance of incentive activities to sustain long-term physical activity behavior.
Our results have important implications for the design of interventions aimed at promoting sustainable physical activity behavior. Interventions can be better tailored to align with individuals' behavioral tendencies by gaining insights into habit formation, behavior inertia, and incentive activities. Additionally, the classic preferential attachment process restricts the power law exponent to γ>2 <cit.>, while many real-world networks exhibit γ<2 <cit.>. Our HFBI model based on habit formation and behavior inertia can be valuable in other domains where power-law distributions with low power-law exponents are observed, such as the population of cities <cit.>, short-message communication <cit.>, and corporate innovative patent counts <cit.>.
Despite the strengths of our study, there are limitations that should be noted. First, our study only focused on a sports community in a university, whose members are mostly well-educated university faculty and staff and may differ from society at large in their perception of self-motivated exercise. Further research is needed to understand how our study may be generalized to other community sports organizations. Second, the model cannot capture the behavior of extremely rare individuals who engage in activities excessively. As reported in the study's 80/20 rule, active individuals make a significant contribution to community activity participation, and future research should pay more attention to this group.
In conclusion, our study provides novel insights into the principle underlying human participation in community sports activities and offers practical implications for the design of interventions to promote sustained physical activity behavior and human health. Our findings may also have broader implications for other fields where power-law distributions are commonly observed.
§ ACKNOWLEDGMENTS
We would like to thank every member of the SJTU Health Community for their selfless commitment in building a supportive community and providing help to those in need.
|
http://arxiv.org/abs/2307.04734v1 | 20230710175005 | Quandle coloring quivers and 2-bridge links | [
"Tirasan Khandhawit",
"Korn Kruaykitanon",
"Puttipong Pongtanapaisan"
] | math.QA | [
"math.QA"
] |
The quandle coloring quiver was introduced by Cho and Nelson as a categorification of the quandle coloring number. In some cases, it has been shown that the quiver invariant offers more information than other quandle enhancements. In this paper, we compute the quandle coloring quivers of 2-bridge links with respect to the dihedral quandles.
§ INTRODUCTION
A quandle is an algebraic structure whose axioms are inspired by the Reidemeister moves on link diagrams <cit.>. There is a natural quandle Q() associated to each link called the fundamental quandle, which gives rise to an invariant of the link. In fact, Q() is a complete invariant when the link has one component <cit.>. Studying presentations of Q() can be difficult, and therefore, it is common to extract some information by considering the set of homomorphisms from Q() to a different quandle X. The cardinality of such a set |Hom(Q(),X)| is often called the quandle coloring number, which has been investigated by many quandle theorists over the years.
Since a set contains more information in addition to its cardinality, the quandle coloring number can be enhanced to give a stronger link invariant. For more details on some examples of useful enhancements such as cocycle and module enhancements, the readers are encouraged to consult <cit.>. This paper concerns a particular enhancement introduced by Cho and Nelson called the quandle coloring quiver 𝒬() <cit.>. Roughly, elements of Hom(Q(),X) can be thought of as vertices scattered all over the place, where each vertex represents an assignment of a coloring to . The quiver-valued invariant 𝒬() gives a way to organize these vertices into a directed graph.
For some particular choices of target quandles X appearing in Hom(Q(),X), the quandle coloring quivers have been determined for various families of links <cit.>. It has also been shown that in some cases, the quiver gives more information than cocycle and module enhancements <cit.>. In this paper, we calculate the quandle quivers for all 2-bridge links with respect to any choice of dihedral quandle. This is particularly interesting when we use the dihedral quandle ℤ_n^dih of composite order n since the quandle coloring quiver is determined by the coloring number when n=p_1p_2⋯ p_k, where p_i is prime <cit.>. To demonstrate this, we give some more examples where our computations offer more information than the quandle counting invariants in the final section.
§.§ Organization
This paper is organized as follows. In Section <ref>, we discuss basic definitions from quandle theory and knot theory. In Section <ref>, we calculate the quandle coloring number of 2-bridge links. The coloring number is needed as it is the number of vertices of the quandle coloring quiver invariant. In Section <ref>, we prove our main result. Before stating the result in full generality, we discuss the case when n is a power of a prime for ease of reading. We end the paper with more examples where our quiver computations give proper quandle enhancements.
§ PRELIMINARIES
In this section, we review some relevant terminologies.
§.§ Quandles
A quandle is a nonempty set X equipped with a binary operation :X× X→ X such that the following properties hold:
Q1: x x=x for all x∈ X.
Q2: The map β_y : X→ X, given by β_y(x)= x y, is invertible for all y∈ X.
Q3: (x y ) z = (x z) (y z) for all x,y,z∈ X.
Since β_y is invertible, we have a bijection β_y^-1:X→ X. Define ^-1: X× X → X by x ^-1 y := β_y^-1(x). If β_y=β_y^-1 for all y∈ X, or equivalently =^-1, then the quandle is said to be involutory. In this paper, we primarily work with dihedral quandles, which are in fact involutory quandles.
For each n∈ℕ, on ℤ_n={0,1,2,…,n-1}, x y := 2y-x (mod n) defines the dihedral quandle of order n. Denote by the dihedral quandle of order n. For all x,y∈ℤ_n, we have β_y∘β_y(x)≡β_y(2y-x)≡ 2y-(2y-x) = x (mod n). From here we see that dihedral quandles are involutory.
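The dihedral quandle and the three axioms are easy to verify by brute force; the snippet below is our own illustration, written with * standing for the quandle operation, and checks Q1–Q3 together with the involutory property for a small n.

from itertools import product

def dihedral_op(n):
    """Quandle operation of the dihedral quandle on Z_n: x * y = 2y - x (mod n)."""
    return lambda x, y: (2 * y - x) % n

def is_quandle(X, op):
    idem = all(op(x, x) == x for x in X)                                   # Q1
    bij = all(len({op(x, y) for x in X}) == len(X) for y in X)             # Q2: each beta_y is a bijection
    dist = all(op(op(x, y), z) == op(op(x, z), op(y, z))
               for x, y, z in product(X, repeat=3))                        # Q3: right self-distributivity
    return idem and bij and dist

n = 6
X, op = range(n), dihedral_op(n)
print("quandle axioms hold:", is_quandle(X, op))
print("involutory:", all(op(op(x, y), y) == x for x, y in product(X, repeat=2)))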
Often, it is useful to study maps between quandles that behave well with the quandle axioms.
A quandle homomorphism from (X,▷_X) to (Y,▷_Y) is a map f:X→ Y such that f(x ▷_X y)=f(x)▷_Y f(y). Denote by Hom(X,Y) the set of all quandle homomorphisms from X to Y.
Let X be a quandle. A quandle endomorphism on X is a quandle homomorphism from X to itself. A quandle automorphism on X is a quandle endomorphism on X that is also a bijection. Denote by End(X) the set of all quandle endomorphisms on X and by Aut(X) the set of all quandle automorphisms on X.
Under the usual composition, End(X) has a monoid structure, whereas Aut(X) has a group structure.
There is a particularly natural quandle that can be defined from a link diagram.
Let L be an oriented link and D an oriented diagram of L with n strands x_1,x_2,…,x_n. The fundamental quandle of D is the quandle freely generated by x_1,x_2,…,x_n with the relations coming from each crossing as in Figure <ref>. The fundamental quandle of an oriented link L is defined to be the fundamental quandle of an oriented diagram of L.
A basic way to study homomorphisms between quandles is to count how many there are.
Let L be a link and X a finite quandle. An X-quandle coloring of L is a quandle homomorphism Q(L)→ X. The X-quandle counting invariant of the link L is the number of X-quandle colorings of L, i.e. the size of the set Hom(Q(L),X). This cardinality is also called the quandle coloring number.
For any link L and any quandle X, fix x∈ X. Then ψ_x : Q(L)→ X, given by ψ_x(x_i):=x, defines a quandle homomorphism, since at each crossing we have ψ_x(x_i ▷ x_j)=x=x ▷ x= ψ_x(x_i)▷ψ_x(x_j). Such quandle colorings are called trivial quandle colorings. In general, we have {ψ_x:x∈ X}⊆Hom(Q(L),X), so |Hom(Q(L),X)|≥ |X|.
Since the set of homomorphisms contains more information than its cardinality, various quandle enhancements have been defined. The following concept is particularly relevant to this paper.
Let X be a finite quandle and fix S⊆Hom(X,X). The X-quandle coloring quiver 𝒬_X^S(L) of a link L with respect to S is the directed graph with vertex set Hom(Q(L),X) and, for each f∈ S, a directed edge ψ_1→ψ_2 whenever ψ_2=f∘ψ_1. When S=Hom(X,X), we denote the corresponding quiver simply by 𝒬_X(L) and call it the full quandle coloring quiver.
Denote by (K_n,m) the directed graph with n vertices where every vertex has m directed edges from itself to each vertex. For each graph G and H, define G ∇_mH to be the disjoint union graph G⊔ H with additional m directed edges from every vertex of H to each vertex of G.
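As an illustration of how the quiver is assembled, the following Python sketch (illustrative names; feasible only for small quandles, since it enumerates Hom(X,X) by brute force) builds the full quandle coloring quiver from an operation table and a precomputed list of colorings, each stored as a tuple of colors assigned to the generators:

from itertools import product

def endomorphisms(op):
    # all set maps X -> X that respect the operation table op[x][y]
    n = len(op)
    for f in product(range(n), repeat=n):
        if all(f[op[x][y]] == op[f[x]][f[y]] for x in range(n) for y in range(n)):
            yield f

def coloring_quiver(op, colorings):
    index = {c: i for i, c in enumerate(colorings)}
    edges = []                      # one directed edge (i, j) per endomorphism f with f o c_i = c_j
    for f in endomorphisms(op):
        for c in colorings:
            edges.append((index[c], index[tuple(f[x] for x in c)]))
    return edges

# e.g. for the dihedral quandle: op = [[(2 * y - x) % n for y in range(n)] for x in range(n)]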
§.§ Rational Tangles and Links
An n-string tangle is a collection of n properly embedded disjoint arcs in the 3-ball. In this paper, we will work exclusively with 2-string tangles. Thus, we will simply refer to 2-string tangles as tangles for brevity. A tangle can also be defined diagrammatically.
A tangle diagram is a portion of a link diagram surrounded by a circle intersecting the link diagram in four points labelled NE,NW,SE,SW. Two tangle diagrams are equivalent if and only if one can be obtained from another by Reidemeister moves in finitely many steps inside the surrounding circle while the four points remain fixed.
We will now give a definition of rational tangles. We note that there are other ways to define the equivalent object in the literature.
Let [0] denote the horizontal tangle shown in Figure <ref> (left). For an integer p≠ 0, let [p] denote the tangle obtained from twisting the NE and SE endpoints p times, where the sign is positive (resp. negative) if the overstrand has positive (resp. negative) slope (see Figure <ref>).
Given two tangles T_1 and T_2, we can connect the two tangles into a new one. Let us denote by T_1T_2 the tangle obtained from reflecting T_1 along NW-SE line and connecting it to T_2 from the left. (see Figure <ref>) Note that in general, T_1T_2≠ T_2T_1.
Let N≥ 1, and p_1,p_2,…,p_N be integers. Let [p_1p_2… p_N] be the tangle T_N, where T_1=[p_1] and T_j=T_j-1[p_j] for 1≤ j≤ N. This kind of tangle is called a rational tangle.
To each rational tangle [p_1p_2… p_N], there is an associated rational number
p_1p_2… p_N := p_N + 1/(p_{N-1} + 1/(⋯ + 1/(p_2 + 1/p_1)))
that is a complete tangle invariant. That is, Conway showed that two rational tangles are equivalent if and only if their rational numbers are equal <cit.>.
The numerator closure of a rational tangle yields a rational link. It can be shown that rational links are precisely the two-bridge links. Let us denote by (p_1p_2… p_N) the numerator closure of the rational tangle [p_1p_2… p_N] (see Figure <ref>).
We note that any rational tangle can be put in a canonical form so that each p_i in (p_1p_2… p_N) has the same sign <cit.>. Since ((-p_1)(-p_2)… (-p_N)) is the mirror image of (p_1p_2… p_N), their involutorized fundamental quandles are isomorphic. Hence, their quandle enhancements, e.g. coloring number, quiver, are isomorphic. From now on, we shall assume that p_1,p_2,…,p_N>0.
§ THE QUANDLE COLORING NUMBERS
The main goal of this section is to determine the number of colorings of 2-bridge links by dihedral quandles. We begin by discussing a presentation for the fundamental quandle of 2-bridge links:
Q((p_1p_2… p_N)) =⟨ x_j,i for 1≤ j≤ N and 1≤ i≤ p_j+2|
x_j,i = x_j,i-2 ▷^±1 x_j,i-1 for 1≤ j≤ N and 3≤ i≤ p_j+2,
x_2,1=x_1,2, x_2,2=x_1,p_1+2,
x_j,1=x_j-2,p_j-2+1, x_j,2=x_j-1,p_j-1+2 for 3≤ j≤ N,
x_N,p_N+1=x_1,1 , x_N,p_N+2=x_N-1,p_N-1+1⟩,
Since dihedral quandles are involutory, i.e. ▷=▷^-1, for any quandle homomorphism ψ:Q((p_1p_2… p_N))→ℤ_n^dih we have the following relations
ψ(x_j,i )= ψ(x_j,i-2) ▷ψ(x_j,i-1) for 1≤ j≤ N and 3≤ i≤ p_j+2,
ψ(x_2,1)=ψ(x_1,2), ψ(x_2,2)=ψ(x_1,p_1+2),
ψ(x_j,1)=ψ(x_j-2,p_j-2+1), ψ(x_j,2)=ψ(x_j-1,p_j-1+2) for 3≤ j≤ N,
ψ(x_N,p_N+1)=ψ(x_1,1 ), ψ(x_N,p_N+2)=ψ(x_N-1,p_N-1+1).
Moreover, any map ψ : {x_j,i| 1≤ j≤ N and 1≤ i≤ p_j+2}→ℤ_n^dih satisfying the relations extends to a unique quandle homomorphism ψ̃: Q((p_1p_2… p_N))→ℤ_n^dih.
Next, we prove an important proposition relating the colorings of two generating strands. This generalizes Proposition 2.4 of <cit.>.
For a rational tangle [p_1p_2… p_N], let Δ_j be the numerator of the rational number p_1p_2… p_j and also denote by Δ := Δ_N. Note that Δ_j satisfies recurrence relation
Δ_0 := 1, Δ_1 = p_1, Δ_j = p_j Δ_j-1 + Δ_j-2 .
In fact, the number Δ is the determinant of (p_1p_2… p_N) (see <cit.>).
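A short Python sketch (names are ours) evaluates the continued fraction and the recurrence, confirming that Δ_N is indeed the numerator of p_1p_2… p_N:

from fractions import Fraction

def tangle_fraction(p):
    frac = Fraction(p[0])
    for pj in p[1:]:
        frac = pj + 1 / frac                 # p_1 p_2 ... p_j = p_j + 1/(p_1 p_2 ... p_{j-1})
    return frac

def delta(p):
    d_prev, d = 1, p[0]                      # Delta_0 = 1, Delta_1 = p_1
    for pj in p[1:]:
        d_prev, d = d, pj * d + d_prev       # Delta_j = p_j Delta_{j-1} + Delta_{j-2}
    return d

p = [2, 3, 4]
print(tangle_fraction(p), delta(p))          # 30/7 and 30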
For ψ∈(Q((p_1p_2… p_N)),), we have
Δψ(x_1,1)≡Δψ(x_1,2) n.
Since the term of the form aψ(x_1,2)-(a-1)ψ(x_1,1) appears frequently in the proofs, we let [a]:=aψ(x_1,2)-(a-1)ψ(x_1,1). We observe that for all 1≤ j≤ N, we have ψ(x_j,p_j+1)=p_jψ(x_j,2)-(p_j-1)ψ(x_j,1), and ψ(x_j,p_j+2)=(p_j+1)ψ(x_j,2)-p_jψ(x_j,1).
Claim: For all 1≤ j≤ N, we have ψ(x_j,p_j+1)=[Δ_j] and ψ(x_j,p_j+2)=[Δ_j+Δ_j-1].
We prove the claim by induction. For the base case j=1, we note that ψ(x_1,p_1+1)=[p_1]=[Δ_1], and ψ(x_1,p_1+2)=[p_1+1]=[Δ_1+Δ_0]. Let 1≤ j≤ N. Suppose that the claim holds true for all positive integers less than j.
Case 1 j=2. We have ψ(x_2,1)=ψ(x_1,2)=[1] and ψ(x_2,2)=ψ(x_1,p_1+2)=[p_1+1]. This gives ψ(x_2,p_j+1)=p_2 [p_1+1]-(p_2-1)[1]=[p_2p_1+1]=[Δ_2], and ψ(x_2,p_j+2)=(p_2+1)[p_1+1]-p_2[1]=[p_2p_1+p_1+1]=[Δ_2+Δ_1].
Case 2 j≥ 3. As j-1,j-2≥ 1, we apply inductive hypothesis and obtain
ψ(x_j,1) = ψ(x_j-2,p_j-2+1) = [Δ_j-2],
ψ(x_j,2) = ψ(x_j-1,p_j-1+2) = [Δ_j-1+Δ_j-2],
ψ(x_j,p_j+1) = p_j[Δ_j-1+Δ_j-2] - (p_j-1) [Δ_j-2] = [p_jΔ_j-1+Δ_j-2]=[Δ_j],
ψ(x_j,p_j+2) = (p_j+1)[Δ_j-1+Δ_j-2] - p_j [Δ_j-2]
= [p_jΔ_j-1+Δ_j-2+Δ_j-1] = [Δ_j+Δ_j-1].
Thus, the claim is verified.
With the claim proved, we have ψ(x_N,p_N+1)=[Δ_N] and ψ(x_N,p_N+2)=[Δ_N + Δ_N-1]. The relations from the closure of the tangle give a single equation
Δ_N ψ(x_1,1) ≡Δ_N ψ(x_1,2) n.
Hence, the assertion is proved.
Any map ψ: {x_1,1,x_1,2}→ such that Δψ(x_1,1)≡Δψ(x_1,2)n extends to a unique quandle homomorphism ψ̃: Q((p_1p_2… p_N)) →, i.e. the diagram
{x_1,1,x_1,2} →^ψ ℤ_n^dih, with the hooked arrow i: {x_1,1,x_1,2}↪ Q((p_1p_2… p_N)) being the inclusion and the dashed arrow being ψ̃: Q((p_1p_2… p_N))→ℤ_n^dih,
commutes.
We extend ψ to ψ̅: {x_j,i| 1≤ j≤ N and 1≤ i≤ p_j+2}→ uniquely to other generators recursively using the following relations
ψ̅(x_j,i ) = ψ̅(x_j,i-2) ▷ψ̅(x_j,i-1) for 1≤ j≤ N and 3≤ i≤ p_j+2,
ψ̅(x_2,1) =ψ̅(x_1,2), ψ̅(x_2,2)=ψ̅(x_1,p_1+2),
ψ̅(x_j,1) =ψ̅(x_j-2,p_j-2+1), ψ̅(x_j,2)=ψ̅(x_j-1,p_j-1+2) for 3≤ j≤ N.
From the proof of Proposition <ref>, we see that ψ̅(x_N,p_N+1)=[Δ_N] and ψ̅(x_N,p_N+2)=[Δ_N + Δ_N-1]. Hence, the relations
ψ̅(x_N,p_N+1)=ψ̅(x_1,1 ), ψ̅(x_N,p_N+2)=ψ̅(x_N-1,p_N-1+1)
hold and ψ̅ extends to a unique quandle homomorphism ψ̃: Q((p_1p_2… p_N))→.
The quandle coloring number of a 2-bridge link is given by the formula |Hom(Q((p_1p_2… p_N)),ℤ_n^dih)|=n·gcd(Δ,n), of which exactly n are trivial quandle colorings.
By Propositions <ref> and <ref>, |Hom(Q((p_1p_2… p_N)),ℤ_n^dih)| is equal to the number of choices of (ψ(x_1,1),ψ(x_1,2))∈ℤ_n×ℤ_n such that Δψ(x_1,1)≡Δψ(x_1,2) mod n, which is exactly n·gcd(Δ,n). Among all the colorings, there are n trivial colorings, corresponding to the choices ψ(x_1,1)=ψ(x_1,2)∈ℤ_n.
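The count can also be checked independently by propagating colors through the twist regions of the diagram, using the presentation given above. The following Python sketch (illustrative; it assumes N≥2) does this for the figure-eight knot (2 2) and compares the result with n·gcd(Δ,n):

from math import gcd

def region(c1, c2, pj, n):
    v = [None, c1, c2]                        # v[i] = color of x_{j,i}
    for _ in range(pj):
        v.append((2 * v[-1] - v[-2]) % n)     # relation x_{j,i} = x_{j,i-2} > x_{j,i-1} in Z_n^dih
    return v[pj + 1], v[pj + 2]

def count_colorings(p, n):
    count = 0
    for a in range(n):
        for b in range(n):
            tops, secs = [], []
            for j, pj in enumerate(p):
                if j == 0:
                    c1, c2 = a, b
                elif j == 1:
                    c1, c2 = b, secs[-1]      # x_{2,1} = x_{1,2}, x_{2,2} = x_{1,p_1+2}
                else:
                    c1, c2 = tops[j - 2], secs[-1]
                t, s = region(c1, c2, pj, n)
                tops.append(t)
                secs.append(s)
            if tops[-1] == a and secs[-1] == tops[-2]:   # closure relations
                count += 1
    return count

def delta(p):
    d_prev, d = 1, p[0]
    for pj in p[1:]:
        d_prev, d = d, pj * d + d_prev
    return d

p, n = [2, 2], 5
print(count_colorings(p, n), n * gcd(delta(p), n))       # both 25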
§ THE QUANDLE COLORING QUIVER OF 2-BRIDGE LINKS
It will turn out that the quiver invariant can be organized based on how the automorphism group Aut(ℤ_n^dih) acts on the colorings.
Throughout this section, we consider a 2-bridge link (N/M), where N and M are positive and relatively prime.
By Proposition <ref>, we denote by [a,b] the unique quandle homomorphism ψ∈(Q((N/M)), ) such that ψ(x_1,1)=a and ψ(x_1,2)=b.
Note that such a and b satisfy n | N(b-a).
Analogously, for x,y ∈ℤ_n, there is a unique endomorphism f ∈() such that f(0) = x and f(1) =y. We denote such an endomorphism by x,y
Observe that f(a) = ay - (a-1)x = (y-x)a + x n. Moreover, x,y is an automorphism precisely when y-x∈ℤ_n^×.
Aut(ℤ_n^dih) acts on Hom(Q((N/M)), ℤ_n^dih) by post-composition, i.e. f·[a,b] :=f∘ [a,b]=[f(a),f(b)].
For [a,b]∈Hom(Q((N/M)), ℤ_n^dih) and f= x,y∈Aut(ℤ_n^dih), we see that n| N(y-x)(b-a)=N(f(b)-f(a)), i.e. [f(a),f(b)]∈Hom(Q((N/M)), ℤ_n^dih). Since composition is associative and the identity of Aut(ℤ_n^dih) fixes any [a,b], all the group action axioms are satisfied.
Write ψ∼ϕ if ψ and ϕ lie in the same orbit under the action.
For ψ,ψ',ϕ,ϕ' ∈(Q((N/M)),) such that ψ∼ψ' and ϕ∼ϕ', we have
|{f∈(): ϕ=f∘ψ}|=|{f∈(): ϕ'=f∘ψ'}|.
Since ψ∼ψ' and ϕ∼ϕ', there exist g,h∈() such that ψ'=g∘ψ and ϕ'=h∘ϕ. Define two maps T: {f∈(): ϕ=f∘ψ}→{f∈(): ϕ'=f∘ψ' } by f↦ h∘ f∘ g^-1, and S: {f∈(): ϕ'=f∘ψ' }→{f∈(): ϕ=f∘ψ} by f↦ h^-1∘ f∘ g. We see that T and S are inverse to each other. Hence, two sets are of the same size.
By translation, any orbit contains an element of the form [0,a].
Consequently, it suffices to consider edges between them, i.e.
|{f∈(): [0,b]=f∘ [0,a]}|=|{f∈(): f(0)=0, f(a)=b}|.
For a,b∈, we have
|{f∈(): f(0)=0, f(a)=b}|=|{x∈ℤ_n: ax≡ b n}|.
Two maps f↦ f(1) and x↦ 0,x are inverses.
It is a basic number theory result that
|{x∈ℤ_n: ax≡ b mod n}| = gcd(a,n) if gcd(a,n) | b, and 0 otherwise.
We immediately have our result.
For a,b∈ℤ_n, we have
|{f∈End(ℤ_n^dih): [0,b]=f∘ [0,a]}| = gcd(a,n) if gcd(a,n) | b, and 0 otherwise.
§.§ The quiver when n is a power of a prime
Let us first consider the case when n=p^α, where p is a prime and α is a positive integer.
The p-adic valuation of an integer m, denoted by ν_p(m), is the highest power of p dividing m.
Given p, α, and N, we set β=min{α,ν_p(N)}. We now characterize orbits of
(Q((N/M)), ) and count endomorphisms between them.
Under the action of () on (Q((N/M)), ), for α-β≤ j,j'≤α , we have
* [0,p^j]∈(Q((N/M)), ).
* [0,p^j] and [0,p^j'] lie in same orbit if and only if j=j'.
* The size of the orbit of [0,p^j], denoted by n_j, is given by
n_j= p^2α-j-1(p-1) if j<α,
p^α if j=α.
* (Q((N/M)), ) is partitioned into orbits with {[0,p^j]: α-β≤ j≤α} being a complete set of representatives.
* The number of endomorphisms of ℤ_p^α^dih sending [0, p^j] to [0, p^j'], denoted by n_j,j', is given by
n_j,j'= 0 if j>j',
p^j if j≤ j'.
* Since j ≥α-β≥α-ν_p(N), we have p^α| Np^j.
* The converse is obvious. Without loss of generality, let us suppose that j>j'.
We see that (p^j,p^α) = p^j ∤ p^j', so there is no automorphism from [0,p^j] to [0,p^j'] by Proposition <ref>.
* We first determine the size of stabilizer of [0,p^j], which is equal to the number of x,y∈() such that x,y [0,p^j]=[0,p^j]. Note that the size of () is equal to |ℤ_p^α| · |ℤ_p^α^×| = p^αϕ(p^α) = p^2α-1(p-1).
Case 1 j=α. In this case, it is equivalent to count a number of x,y such that x=0 and y∈ℤ_p^α^×, so the stabilizer of [0,p^α] = [0,0] is of the size |ℤ_p^α^×|=ϕ(p^α). By orbit-stabilizer theorem, the size of the orbit of [0,p^α] is p^αϕ(p^α)/ϕ(p^α)=p^α.
Case 2 j<α. In this case, we count a number of x,y such that x=0 and y p^j = p^j p^α. The last condition is equivalent to y = 1+ kp^α-j for a nonnegative integer k<p^j. Hence, the stabilizer of [0,p^j] is of the size p^j. By orbit-stabilizer theorem, the size of the orbit of [0,p^j] is p^αϕ(p^α)/p^j=p^2α-j-1(p-1).
* Consider the total size of the orbit of [0,p^j] for all α-β≤ j≤α
∑_α-β≤ j ≤α n_j = p^α + ∑_α-β≤ j < αp^2α-j-1(p-1)
= p^α + p^2α-1(p-1)·1/p^α-β∑_0≤ j< β1/p^j
=p^α + p^2α-1(p-1)·1/p^α-β·1-1/p^β/1-1/p
= p^α+β.
On the other hand, we have |(Q((N/M)), )|=p^α·gcd(N,p^α)=p^α+β by Corollary <ref>. Hence the colorings [0,p^j] for α-β≤ j≤α form a complete set of orbit representatives.
* This follows from Proposition <ref>.
Combining all the results from Lemma <ref> and Lemma <ref>, we are able to determine the full quandle coloring quiver of the two-bridge link (N/M) with respect to the quandle .
Let p be a prime, α≥ 1 be an integer, and N,M∈ℕ with (N,M)=1. The full coloring quiver of the two-bridge link (N/M) with respect to the quandle is given by
_((N/M))≅ G_β,
where β=min{ν_p(N),α}, G_0:= (K_p^α,p^α) and G_j:=G_j-1∇_p^α-j(K_p^α+j-1(p-1),p^α-j) for 1≤ j (see Figure <ref>).
In short terms, the full coloring quiver 𝒬_((N/M)) has its vertex set partitioned into orbits of [0,p^j] for α-β≤ j≤α, each of which induces a regular complete subgraph, and has p^i directed edges from each vertex from the orbit of [0,p^i] to each vertex from the orbit of [0,p^j] whenever i≤ j. If the order of the dihedral quandle is fixed, then the number β determines the number of components of the quiver.
For instance, suppose that L is the 4-crossing torus link and our quandle is ℤ_4^dih. Then, {[0,0],[1,1],[2,2],[3,3]} constitutes an orbit,
{[0,1],[1,2],[2,3],[3,0],[0,3],[1,0],[2,1],[3,2]} constitutes an orbit, and
{[0,2],[1,3],[2,0],[3,1]} constitutes an orbit.
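These orbits can be reproduced in a few lines of Python (illustrative sketch: a coloring is stored as the pair [a,b], and an automorphism of ℤ_4^dih as t ↦ ut+x mod 4 with u a unit; every pair is a coloring here because the determinant of the link is 4):

from itertools import product
from math import gcd

n = 4
colorings = list(product(range(n), repeat=2))
autos = [(x, u) for x in range(n) for u in range(n) if gcd(u, n) == 1]   # f(t) = u*t + x (mod n)

def act(f, c):
    x, u = f
    return tuple((u * t + x) % n for t in c)

orbits, seen = [], set()
for c in colorings:
    if c not in seen:
        orbit = {act(f, c) for f in autos}
        seen |= orbit
        orbits.append(sorted(orbit))
print([len(o) for o in orbits])          # [4, 8, 4], matching the three orbits listed above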
Let p be a prime and N,M∈ℕ with (N,M)=1. Then, the quiver
_ℤ_p^dih((N/M))≅(K_p,p)∇_1(K_p(p-1),1) if p| N,
(K_p,p) if p∤ N.
Set α=1 in Theorem <ref>.
§.§ The general case
For convenience, we start using multi-index notation. For a fix positive integer n, we write the prime decomposition n=∏_i p_i^α_i as p^α, where p is regarded as the sequence of distinct prime factors and α is regarded as the sequence of corresponding exponents. For sequences of nonnegative integers j=(j_i) and j'=(j_i') with the same length as p, we write p^j:= ∏_i p_i^j_i, and define j≼j' iff j_i≤ j_i' for all i.
The next result generalizes Lemma <ref>. In a similar manner, we set the sequence β with β_i=min{α_i,ν_p_i(N)}.
Under the action of () on (Q((N/M)), ) , for α-β≼ j,j'≼α we have
* [0, p^j]∈(Q((N/M)), ).
* [0,p^j] and [0,p^j'] lie in same orbit if and only if j=j'.
* The size of the orbit of [0,p^j] is given by n_j :=∏_i n_j_i, where
n_j_i= p_i^2α_i-j_i-1(p_i-1) if j_i<α_i,
p_i^α_i if j_i =α_i.
* (Q((N/M)), ) is partitioned into orbits with {[0,p^j]: α-β≼ j≼α} being a complete set of representatives.
* The number n_j,j' of endomorphisms of sending [0,p^j] to [0,p^j'] is given by
n_j,j'= 0 if j⋠j',
p^j if j≼ j'.
The proof also closely follows the proof of Lemma <ref>
* For each i, we have α_i≤ν_p_i(N)+j_i since α_i-ν_p_i(N)≤α_i-β_i ≤ j_i. Thus, p^α| Np^j and [0,p^j]∈(Q((N/M)), ).
* The converse is obvious. Without loss of generality, suppose that j_i<j_i' for some index i. Suppose for contradiction that there is x,y∈() such that [x,p^j(y-x)+x]= x,y [0,p^j]=[0,p^j']. We see that x=0 and p^j y≡ p^j'p^α. This implies p_i | y and (y,n)≥ p_i>1, which contradicts with y ∈ℤ_n^×. Hence, [0,p^j] and [0,p^j'] lie in different orbits.
* As before, we determine the size of the stabilizer of [0,p^j], which is equal to the number of x,y∈() such that x,y [0,p^j]=[0,p^j]. We see that x=0 and that y∈ℤ_n^× must satisfy p^j y≡ p^j mod n. By looking at each prime separately, this condition is equivalent to solving the system p^j_i y≡ p^j_i mod p_i^α_i with (y,p_i^α_i)=1 for each i.
Case 1 j_i=α_i. The condition p^j_i y≡ p^j_ip_i^α_i is trivial, so there are ϕ(p_i^α_i) solutions.
Case 2 j_i<α_i. In this case, there are p_i^j_i solutions of the form y = 1+ kp_i^α_i-j_ip_i^α_i, where 0≤ k<p_i^j_i. Note that the solutions satisfy (y,p_i^α_i)=1.
Let us define
m_j_i= p_i^j_i if j_i<α_i,
ϕ(p_i^α_i) if j_i =α_i.
By Chinese remainder theorem, the size of the stabilizer of [0,p^j] is ∏_i m_j_i. Thus, by orbit-stabilizer theorem, the size of the orbit of [0,p^j] is
nϕ(n)/∏_i m_j_i=∏_i p_i^α_iϕ(p_i^α_i)/m_j_i=∏_i p_i^2α_i-1(p_i-1)/m_j_i=∏_i n_j_i.
* Consider the total size of the orbits
∑_α-β≼ j ≼α n_j = ∑_α_I-β_I ≤ j_I ≤α_I…∑_α_1-β_1 ≤ j_1 ≤α_1∏_i n_j_i
= ∏_i∑_α_i-β_i ≤ j_i ≤α_i n_j_i
= ∏_i[p_i^α_i + ∑_α_i-β_i ≤ j_i < α_ip_i^2α_i-j_i-1(p_i-1) ]
= ∏_i[ p_i^α_i + p_i^2α_i-1(p_i-1)·1/p_i^α_i-β_i∑_0≤ j_i< β_i1/p_i^j_i]
=∏_i[ p_i^α_i + p_i^2α_i-1(p_i-1)·1/p_i^α_i-β_i·1-1/p_i^β_i/1-1/p_i]
=∏_i p_i^α_i+β_i = p^α+β.
Since the total number of colorings is |(Q((N/M)), )|=n·gcd(N,n)=p^α+β, these orbits account for all of them.
* This also follows from Proposition <ref>.
Combining all the results from Lemma <ref> and Lemma <ref>, we are able to determine the full quandle coloring quiver of the two-bridge link (N/M) with respect to the quandle .
Let Λ be a set, G={G_λ}_λ∈Λ be a family of graphs indexed by Λ, and w:Λ×Λ→ℕ_0 be a map. Denote by ∇_w G the disjoint union graph _λ∈ΛG_λ with additional w(λ,μ) directed edges from each vertex of G_λ to each vertex of G_μ. With this notion, G_2∇_m̂ G_1 = ∇_w {G_1,G_2}, where w:{1,2}×{1,2}→ℕ_0 is given by w(1,2)=m and w(2,1)=w(1,1)=w(2,2)=0.
Let n be a positive integer and write n=∏_i p_i^α_i, where p_i are distinct primes and α_i > 0. Let N,M be positive integers with (N,M)=1 and set β_i=min{α_i,ν_p_i(N)}. Let Λ={j: α-β≼ j≼α}. The full quandle coloring quiver of the two-bridge link (N/M) with respect to the quandle is given by
_((N/M))≅∇_w {(K_n_j,p^j):j∈Λ},
where w:Λ×Λ→ℕ_0 is given by
w(j,j')= p^j if j≼ j' and j≠ j',
0 else.
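The combinatorial data of the theorem (the multi-indices j, the orbit sizes n_j and the edge weights p^j) can be tabulated directly. The following Python sketch (names are ours) does so; for instance, for n=12 and N=36 the orbit sizes add up to n·gcd(N,n)=144, as they must:

from itertools import product

def factor(n):
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return sorted(f.items())                          # [(p_i, alpha_i)]

def quiver_data(n, N):
    fac = factor(n)
    primes = [p for p, _ in fac]
    alpha = [a for _, a in fac]
    beta = [min(a, max(k for k in range(a + 1) if N % p**k == 0)) for p, a in fac]
    data = []
    for j in product(*[range(a - b, a + 1) for a, b in zip(alpha, beta)]):
        size, weight = 1, 1
        for p, a, ji in zip(primes, alpha, j):
            size *= p**a if ji == a else p**(2 * a - ji - 1) * (p - 1)   # orbit size n_j
            weight *= p**ji                                              # edge weight p^j
        data.append((j, size, weight))
    return data

for j, size, weight in quiver_data(12, 36):
    print(j, size, weight)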
The full quandle coloring quiver _((N/M)) is a higher dimensional generalization of that when n is a prime power. Its vertex set is partitioned into orbits that can be arranged into a higher dimensional grid with width in the i-th dimension depending only on β_i. We can see in the proof of Lemma <ref> that problems reduce to subproblems for each prime dividing the order of the dihedral quandle. Roughly speaking, the orbits and stabilizers split into "products". See section 4 of <cit.> for more rigorous discussion of this situation.
The torus link T(N,2)≅(N/1). The full quandle coloring quiver of T(36,2) with respect to ℤ_12^dih is shown in Figure <ref>.
§.§ Applications and remarks
The formulas of quandle cocycle invariants of 2-bridge links are given in <cit.> for dihedral quandles of prime orders. This information can be combined with our results to calculate the quandle cocycle quivers of 2-bridge links <cit.>. Similarly, the authors of <cit.> computed quandle module invariants using some dihedral quandles, which can be used to compute the quandle module quivers <cit.> when combined with our result.
By a result of Taniguchi <cit.>, the quandle coloring quiver is not a stronger invariant than the counting invariant if one uses the dihedral quandle of order n=p_1p_2⋯ p_k, where the p_i are prime numbers. To find an instance of a proper enhancement, we may have to consider a quandle whose order is a power of a prime.
Consider the dihedral quandle Q=ℤ_4^dih. Then, the quandle coloring number of T(9,3) and T(4,2) by Q are both 16. By the main result of this paper and a result in <cit.>, the associated quiver invariants are not equal. In particular, the quiver for T(4,2) contains three complete graphs K_4, K_4, and K_8. On the other hand, the quiver for T(9,3) contains four copies of complete graphs that are all K_4 as shown schematically in Figure 6 of <cit.> (merging parallel edges). More examples can be obtained by replacing 9 with 6k+3 where k=1,2,3,...
Of course, other invariants already distinguish the links in the examples above, but our computations offer additional tools for potential use in the future to distinguish unknown knotted objects.
§.§ Acknowledgments
The research conducted for this paper is supported by the Pacific Institute for the Mathematical Sciences (PIMS). The first author is supported by the Centre of Excellence
in Mathematics, the Commission on Higher Education, Thailand. The research and findings may not reflect those of the Institute. The third author thanks Nicholas Cazet for helpful conversations and for introducing him to Fielder's work. We are grateful to Chris Soteros for support.
|
http://arxiv.org/abs/2307.05656v1 | 20230711154951 | Universal stability towards decoherence in quantum diffusive 1D chains | [
"Fabricio S. Lozano-Negro",
"Emilio Alvarez Navarro",
"Nahum C. Chávez",
"Francesco Mattiotti",
"Fausto Borgonovi",
"Horacio M. Pastawski",
"G. Luca Celardo"
] | quant-ph | [
"quant-ph"
] |
Instituto de Física Enrique Gaviola (CONICET-UNC) and Facultad de Matemática, Astronomía, Física y Computación, Universidad Nacional de Córdoba, 5000, Córdoba, Argentina
Benemérita Universidad Autónoma de Puebla, Apartado Postal J-48, Instituto de Física, 72570, Mexico
Dipartimento di Matematica e Fisica and Interdisciplinary Laboratories for Advanced Materials Physics, Università Cattolica, via della Garzetta 48, 25133 Brescia, Italy
University of Strasbourg and CNRS, CESQ and ISIS (UMR 7006), aQCess, 67000 Strasbourg, France
Dipartimento di Matematica e Fisica and Interdisciplinary Laboratories for Advanced Materials Physics, Università Cattolica, via della Garzetta 48, 25133 Brescia, Italy
Istituto Nazionale di Fisica Nucleare, Sezione di Milano, via Celoria 16, I-20133, Milano, Italy
Instituto de Física Enrique Gaviola (CONICET-UNC) and Facultad de Matemática, Astronomía, Física y Computación, Universidad Nacional de Córdoba, 5000, Córdoba, Argentina
Department of Physics and Astronomy, CSDC and INFN, Florence Section, University of Florence, Italy
Coherent diffusion usually arises between the localized and the ballistic regime, where typically Metal-Insulator Transitions emerge. By studying three different paradigmatic systems,
the Harper-Hofstadter-Aubry-André, the Fibonacci and the Power-Banded Random Matrices model, we show that in presence of coherent diffusion, transport is exceptionally stable towards decoherence. This is completely at odds with what happens for ballistic and localized dynamics, where the diffusion coefficient strongly depends on the decoherent noise. A universal dependence of the diffusion coefficient with the decoherence strength is analytically derived: the diffusion coefficient remains almost decoherence-independent until the coherence time becomes comparable with the mean elastic scattering time.
Thus quantum diffusive systems could be used to design stable quantum wires and explain the functionality of many biological systems, which often operate at the border between the ballistic and localized regime.
*Introduction.
The control of quantum transport in presence of environmental noise is crucial in many areas: cold atoms <cit.>, mesoscopic systems <cit.>, and quantum biology <cit.>. Thus, its better understanding would allow us to build more precise quantum channels of communication <cit.>, more efficient sunlight harvesting systems <cit.>, design devices to transfer charge or energy with minimal dissipation <cit.>, bio-mimetic photon sensors <cit.> and understand the functionality of many biological aggregates <cit.>.
The foundational ideas were set by P. W. Anderson <cit.>, who realized that elastic scattering from random disorder could localize the eigenfunctions.
Being common for 1D and 2D systems, localization occurs in 3D only above a critical disorder, e.g. at the Metal-Insulator transition (MIT) <cit.>.
It took a decade to realize that correlated disorder and long-range hopping could allow a MIT even in 1D
<cit.>. R. Landauer <cit.>, N. Mott <cit.> and H. Haken <cit.> considered the different roles of an environment. Landauer noticed that an actual finite system is connected to external reservoirs by current and voltage probes, a notion that M. Büttiker used to describe environmental decoherence and thermalization <cit.>. Mott and Haken sought to address the role of a thermal phonon bath that eventually leads to the Mott's variable-range-hopping. Haken's views on the interplay of disorder and decoherent noise lead to the Haken-Strobl model, which describes uncorrelated fluctuations in the site energies and it is equivalence to an infinite temperature bath. Later on, it was realized that decoherence in the otherwise Anderson's localized states, would favor conductance. This reaches a maximum<cit.> when the energy uncertainty associated with elastic scattering and that resulting from the coupling with the environment (i.e. decoherence) <cit.> are comparable. Thus, in the localized regime, it should exist a decoherence rate that optimizes transport. In the ballistic regime, decoherence limits transport, inducing diffusion and thus, generally <cit.>, decreasing transport efficiency.
How decoherent noise affects transport in the presence of quantum diffusive-like dynamics, and its relation to a MIT, is a much less studied subject.
Recent works focused on excitonic transport in large bio-molecules such as photosynthetic antenna complexes seeking to explain the puzzling great efficiency of many natural <cit.> and synthetic systems.
In this context, an intriguing hypothesis is that of a poised realm, hinted at by S. Kauffman <cit.>, according to which excitation transport in biological systems occurs at the edge of chaos <cit.>. This led Vattay and collaborators <cit.> to propose that 1D systems near the MIT are optimal for transport, because environmental decoherence does not affect the system as strongly as it does in the extended regime, while it provides enough delocalization to allow for good transport.
This seems at odds with an early theoretical analysis <cit.> indicating that, much as it occurs with the residual resistivity of impure 3D metals at low temperatures, for those 1D systems that can sustain quantum diffusion it is this intrinsic dynamics that would provide a particular stability towards decoherence.
With the purpose of settling this conflict, we study different paradigmatic models of quantum transport. We first analyze
the Harper-Hofstadter-Aubry-André (HHAA) model <cit.>, see Fig. <ref>a, across its MIT.
The experimental implementations <cit.> of this paradigmatic model have kept it in the spotlight <cit.>.
We found, both numerically and analytically, that only at the MIT the second moment of an initially localized excitation can be described by a diffusion coefficient D, which is very weakly dependent on the decoherent noise, below a characteristic decoherence strength γ_ϕ^c, see Fig. <ref>b.
We also show that, at long times, D determines the current and the Loschmidt Echo (LE) decay. Thus at the MIT both magnitudes are almost independent of the noise strength.
However, these findings do not settle the question of whether it is the diffusive quantum dynamics that brings stability towards decoherence, or whether this stability is inherent to the critical regime.
For this reason we also studied the Fibonacci chain <cit.> and the Power-Banded Random Matrices (PBRM)<cit.>, where a diffusive-like regime exists in some parameter range independently of their critical features. Our results show that, whenever a system is in a quantum coherent diffusive regime, transport is extremely stable towards decoherence, even outside the critical point.
Moreover, all models follow a universal expression for D, depending only on a single parameter: the ratio between the scattering and the decoherence time.
*Model and Methods.
The HHAA model<cit.>, Fig. <ref>a, describes a linear chain with hopping amplitude J among sites |n⟩ at distance a modulated by a local potential ε_n, according to the Hamiltonian:
ℋ=∑_n[ - J(| n⟩⟨n+1|+|n+1⟩⟨n|)+ε_n|n⟩⟨n| ],
where ε_n=Wcos(2π q na+θ), q=q_g=(√(5)-1)/2a and 0<θ<2π is a random phase over which we average in numerical simulations.
Other values of q are discussed in the SM <cit.> Sec. <ref>. Contrary to the Anderson 1D model, the HHAA model presents a phase transition as the eigenstates are extended for 0≤ W<2J and localized for W>2J<cit.>. A notable trait is that the MIT occurs exactly for W=2J in the whole spectrum and that all eigenstates have the same localization length 2ξ=a/ln[W/2J] <cit.> for W>2J.
The presence of a local white-noise potential is described by the Haken-Strobl (HS) model <cit.>, widely used for excitonic transport. The environment is described by stochastic and uncorrelated fluctuations of the site energies V(t)=∑_n ε_n(t)|n⟩⟨n|, with ⟨ε_i(t)|=⟩0 and ⟨ε_n(t)ε_m(t')|=⟩ħγ_ϕδ_nmδ(t-t'). The dynamics can be described by the Lindblad master equation:
ρ̇= L[ρ] = -i/ħ[ ℋρ - ρℋ] + ℒ_ϕ[ρ],
with:
ℒ_ϕ[ρ] = γ_ϕ/ħ∑_n=1^N [ |n⟩⟨n|ρ|n⟩⟨n| - 1/2|n⟩⟨n|ρ - 1/2ρ|n⟩⟨n|],
where γ_ϕ/ħ is the dephasing rate.
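For pure dephasing the dissipator above simply damps the coherences ρ_n,m (n≠ m) at rate γ_ϕ/ħ, which allows a compact numerical integration by a Trotter split (a unitary step followed by an exponential damping of the off-diagonal elements). A minimal Python sketch with illustrative parameters (clean chain, W=0, for brevity; this is not the production code used for the figures):

import numpy as np
from scipy.linalg import expm

hbar, N, J, gamma_phi, dt, nsteps = 1.0, 101, 1.0, 0.5, 0.01, 2000
H = -J * (np.eye(N, k=1) + np.eye(N, k=-1))            # clean tight-binding chain (W = 0)
U = expm(-1j * H * dt / hbar)
g = np.exp(-gamma_phi * dt / hbar)
damp = g * np.ones((N, N)) + (1 - g) * np.eye(N)       # multiplies coherences by exp(-gamma_phi*dt/hbar)
rho = np.zeros((N, N), dtype=complex); rho[N // 2, N // 2] = 1.0
n = np.arange(N)
for _ in range(nsteps):
    rho = damp * (U @ rho @ U.conj().T)                # coherent step, then dephasing step
P = np.real(np.diag(rho))
print(P @ n**2 - (P @ n)**2)                           # sigma^2 at t = 20: roughly 2*v_0^2*tau_phi*t for W = 0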
It induces a diffusive spreading of the excitation in the infinite size limit of tight-binding models <cit.>. The HS master equation leads to a stationary equally probable population on all sites, corresponding to an infinite temperature bath <cit.> and it is a good approximation when the thermal energy is of the same order of the spectral width of the system (as it happens in many biological systems <cit.>).
Solving the master equation requires handling N^2× N^2 matrices. To overcome this limit we use the Quantum-Drift model <cit.> (QD), an approach conceived as a realization of Büttiker's local voltage probes <cit.> in a dynamical context <cit.>. Here, the system wave function follows a Trotter-Suzuki dynamics. Local collapse processes are represented as local energies fluctuating according to a Poisson process <cit.>. This yields local energies with a Lorentzian distribution of width γ_ϕ/2 (for details see SM <cit.> Sec. <ref>), allowing us to handle more than 10^4 sites.
The diffusion coefficient D = σ^2(t)/(2t) is computed numerically through the variance
σ^2(t)=a^2 [ ∑_n ρ_n,n(t) n^2 -(∑_n ρ_n,n(t) n)^2 ] starting from a local initial excitation in the middle of the chain.
Our results have been confirmed using the Green-Kubo approach <cit.>, see SM <cit.> Sec. <ref>.
*Results.
In absence of dephasing the short and long time behavior of the variance σ^2_0(t) can be computed analytically, see SM <cit.> Sec. <ref>. In all regimes, the initial spreading is always ballistic, σ_0^2(t)=v_0^2 t^2, with a velocity v_0^2=2a^2(J/ħ)^2. After this transient regime, we have : i) for W<2J and large times, the spreading is still ballistic, but with a different velocity: u^2=a^2|2J-W|^2/2ħ^2 (mean group velocity), see solid green curve in Fig. <ref>a; ii) for W>2J, localization occurs <cit.> and we have σ_0^2(∞)= 2ξ^2=2a^2(2ln(W/2J))^-2, see solid blue curve in Fig. <ref>a and iii) at the MIT for W=2J, the variance grows diffusely, σ_0^2(t)=2D_0t, see Fig. <ref>a. This is consistent with Ref. <cit.>, where deviations from a diffusive regime are shown not to affect the variance at criticality up to extremely large system sizes (N≈ 10^10), after which a weak super-diffusive dynamics will emerge.
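As a concrete illustration of the three coherent regimes, the following Python sketch (illustrative parameters and names; exact diagonalization of a finite chain with a=J=ħ=1) monitors σ_0^2(t) for W below, at and above 2J:

import numpy as np

def hhaa_hamiltonian(N, W, J=1.0, a=1.0, theta=0.0):
    q = (np.sqrt(5.0) - 1.0) / (2.0 * a)
    n = np.arange(N)
    return np.diag(W * np.cos(2 * np.pi * q * n * a + theta)) - J * (np.eye(N, k=1) + np.eye(N, k=-1))

def coherent_variance(N, W, times, hbar=1.0):
    E, V = np.linalg.eigh(hhaa_hamiltonian(N, W))
    psi0 = np.zeros(N); psi0[N // 2] = 1.0
    c0 = V.T @ psi0
    n = np.arange(N)
    out = []
    for t in times:
        P = np.abs(V @ (np.exp(-1j * E * t / hbar) * c0))**2
        out.append(P @ n**2 - (P @ n)**2)
    return np.array(out)

times = np.linspace(0.5, 50.0, 100)
for W in (1.0, 2.0, 3.0):                    # ballistic, critical (diffusive), localized
    print(W, coherent_variance(501, W, times)[-1])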
The diffusion coefficient D_0= (v_0^2 τ_W )/2 at the MIT depends on both the initial velocity v_0 and the time τ_W over which local inhomogeneities manifest themselves <cit.>:
τ_W=ħ/Δ E, (Δ E)^2 = ⟨(ℋ_n,n-ℋ_n+1,n+1)^2⟩/2
where ℋ_n,n= ⟨ n|ℋ|n⟩ and ⟨⋯⟩ represents the average over all Hamiltonian diagonal elements. When considering disordered models, ⟨...⟩ also includes average over disorder. For the HHAA model we have
(Δ E)^2 = W^2(1-cos(2π q))/2, see SM <cit.> Sec. <ref>, so that:
D_0= a^2J^2/(ħ W) · [ (1-cos(2π qa))/2 ]^-1/2 ,
which we checked to be in very good agreement with the numerical results at the MIT, see Fig. <ref>a and SM <cit.> Sec. <ref>.
When the system is in contact with an environment, the time-dependent fluctuations of the site energies affect the dynamics, inducing a diffusive behavior. In Fig. <ref>a we show (symbols), for W<2J and W>2J, how the dynamics becomes diffusive after τ_ϕ≈ħ/γ_ϕ (see vertical dotted line). In general, the diffusion coefficient depends on the decoherence strength, except at the MIT, where, interestingly, the dynamics remains diffusive with a diffusion coefficient very close to the D_0 obtained in absence of noise, see also Fig. <ref>b,c.
In Fig. <ref>b we show that in the extended regime D decreases with the decoherence strength, while in the localized regime D reaches a maximum. Remarkably, at the MIT, D is almost independent of decoherence up to γ_ϕ^c, defined by 2ħ/γ_ϕ^c=τ_W, see vertical line in Fig. <ref>b. Fig. <ref>c shows D vs the on-site potential strength for different decoherence strengths. As one can see, all curves intersect at W=2J, suggesting the independence from decoherence precisely at the MIT.
In order to understand the exact dependence of D on γ_ϕ we apply a quantum collapse model for the environmental noise. The latter can be assimilated to a sequence of measurements of the excitation position <cit.>, inducing local collapse that leads to a random walk <cit.>. Then D can be readily determined from σ^2_0(t) as:
D ≃∫_0^∞ d t_i p(t_i) σ_0^2(t_i)/2τ,
where p(t_i) is the probability density of measurement at time t_i and τ=∫_0^∞ d t_i t_i p(t_i), details are given in SM <cit.> Sec. <ref>. Since the HS model corresponds to a Poisson process for the measurement collapses <cit.>, p(t_i)=e^-t_i/τ_ϕ/τ_ϕ. Using Eq. (<ref>) we obtain results in excellent agreement with numerical data, see black curves in Fig. <ref>b.
The independence of D from γ_ϕ can be derived from Eq. (<ref>) only by assuming a diffusive dynamics in absence of dephasing σ^2_0(t)=2 D_0 t, see SM <cit.> Sec. <ref>.
The robustness of the wave packet spreading at the MIT, naturally leads to the question of how current and coherences are affected by decoherence. We found that the steady state current is fully determined by the diffusion coefficient, thus showing the same robustness to decoherence at the MIT, see SM <cit.> Sec. <ref>.
The rate at which the environment destroys coherences has been studied through the decay of the Loschmidt echo or purity <cit.>, M(t)= Tr[ρ(t)^2], which can be computed efficiently using the Quantum Drift method, see SM <cit.> Sec. <ref>.
Since we start from a pure state, M(0)=1, and M(t) reaches 1/N when ρ(t) becomes a full mixture of N equally probable states. We find that, for all values of W/J, the long-time decay of the purity is a power law determined only by the diffusion coefficient: M(t)∼ 1/√(8π D t), which at the MIT is extremely robust to decoherence, see Fig. <ref>b.
In order to understand whether the robustness found in the HHAA model at criticality is due to the presence of a critical point or to the presence of a diffusive dynamics, we also studied other two models: A) the Fibonacci model <cit.> where there is no MIT but transport changes smoothly from super-diffusive to sub-diffusive as the strength of the on-site potential is varied; B) The Power-Banded Random Matrix (PBRM) model <cit.> which presents a MIT and a diffusive second moment in absence of dephasing in a finite range of parameters around the MIT (see SM <cit.> Sec. <ref>). Since this model incorporates the interferences that characterize different Feynman pathways it is often considered a 1^+D system.
The Fibonacci model<cit.> is described by the Hamiltonian (<ref>) , with on-site energies alternating among two values as in binary alloy models: ε_n=W(⌊ (n+1) q_g^2⌋-⌊ n q_g^2⌋), where ⌊...⌋ is the integer part. This model has no phase transition and the variance of an initial localized excitation grows in time as σ^2_0(t)∝ t^α where 0<α<2 depends continuously on the on-site potential strength <cit.>. On the other side, the Hamiltonian matrix elements for the PBRM model are taken from a normal distribution with zero mean and variance,
⟨ |ℋ_ij|^2⟩ = 1/( 2+2(|i-j|/b)^2μ) with i≠ j,
while the on-site energies are sampled from a normal distribution with ⟨ℋ_ii⟩=0 and ⟨|ℋ_ii|^2⟩=1. The model has a critical interaction range (μ=1) for all values of b, where the system switches from extended (μ<1) to localized (μ>1) eigenstates<cit.> characterized by a multi-fractal nature <cit.>. In this model, we have found a diffusive excitation spreading in absence of decoherence not only at the critical point but in a much broader range of μ values, 1/2 < μ < 3/2. Note that even for 1≤μ≤ 3/2, the saturation value of the variance grows with the system size, thus allowing a diffusive-like spreading in the infinite size limit, see SM <cit.> Sec. <ref>. This sounds counter-intuitive, since for 1≤μ≤ 3/2 we are in the localized regime if the participation ratio of the eigenstates is used as a figure of merit for localization <cit.>. This peculiarity is due to the long-range hopping present in this model.
Assuming an initial ballistic dynamics σ^2_0(t)=v^2_0t^2 for t < τ_W, followed by a diffusive spreading σ^2_0(t)=2D_0t, Eq. (<ref>) yields (see details in SM <cit.> Sec. <ref>):
D(x)/D_0=[2/x-(1+2/x)e^-x],
where x=τ_W/τ_ϕ.
This expression captures the dependence of D for both small and large values of τ_W/τ_ϕ. For τ_W/τ_ϕ≪ 1, the diffusion coefficient is D≈ D_0(1-(1/6)(τ_W/τ_ϕ)^2), while for τ_W/τ_ϕ≫ 1 we enter the strong quantum Zeno regime and D/D_0 ≈ 2τ_ϕ/τ_W.
Eq. (<ref>), predicts a universal behavior for quantum systems showing diffusive-like coherent dynamics after its initial ballistic spreading.
Figure <ref> shows the normalized diffusion coefficient D/D_0 in the three models (HHAA, Fibonacci, and PBRM). Analysis has focused only on the diffusive-like coherent dynamics regime for all models, where D_0 is well defined.
The universal behavior predicted by Eq. (<ref>) is in excellent agreement with the numerical results for all models.
The fact that a coherent diffusive quantum dynamics is extremely robust to environmental noise is in striking contrast with what one would expect by considering scattering (with a time scale τ_W) and environmental noise (with a time scale τ_ϕ) as two independent Poisson processes. In this case, the two processes can be thought of as a single Poisson process with rate 1/τ=1/τ_W+1/τ_ϕ. Thus, for small values of τ_W/τ_ϕ≪ 1, we would have D≈ D_0(1-τ_W/τ_ϕ), in contrast with the quadratic correction present in Eq. (<ref>). Our findings are also in contrast with standard results in classical systems, where the diffusion coefficient for the dynamics in presence of external noise is the sum of the diffusion coefficients given by the two processes <cit.>. Note that Eq. (<ref>) also predicts a complete independence of the diffusion coefficient from the decoherence strength in presence of a fully diffusive coherent dynamics.
*Conclusions and Discussion.
By studying quantum transport in three paradigmatic models, all characterized by the presence of a quantum diffusive-like regime, we found a puzzling stability of transport of 1D systems towards decoherence which also shows up in the Purity decay. This stability originates in the diffusive nature of the coherent quantum dynamics and it holds as long as the decoherence time is larger than the mean elastic scattering time. Moreover, in the quantum diffusion regime, we analytically derived a universal law in which the diffusion coefficient depends on a single parameter, that is the ratio between the decoherence time and the mean elastic scattering time. We stress that this stability is very atypical for other transport regimes (i.e. ballistic and localized), where the diffusion coefficient is highly sensitive to decoherence.
Even if the HHAA and Fibonacci chains could be built through precise engineering <cit.>, their quantum diffusion regime is restricted to a narrow parametric range. Nevertheless, as occurs in the PBRM model, in many other quasi-1D systems the mean-free-path may become much larger than the localization length <cit.> and thus, the diffusion-like regime would occur in a wider range of parameters.
Thus, quantum coherent diffusion can be exploited to achieve an optimal compromise between efficient and stable transport in many realistic systems affected by environmental noise.
We think that our results might impact the study of several quasi-1D biological systems, where
charge/excitonic transport is functionally relevant. Charge propagation through the helical structure <cit.> is crucial for energy transfer and self-repair of DNA <cit.>. In photosynthetic antennas, efficient excitonic transport is essential to collect sun-light. In the latter systems there is a convergence of energy scales, i.e. the couplings, the disorder and the thermal fluctuations are roughly of the same order, which would naturally place these systems in the universally robust regime discussed in our paper. Moreover, by studying the spectral statistics of several biological molecules, it was suggested that biologically relevant systems are typically at the border between a ballistic and a localized regime <cit.>. In several biological molecules, such as proteins, photosynthetic antenna complexes, micro-tubules, RNA and DNA <cit.>, transport has been shown to be surprisingly robust against thermal variations.
There, solvent reorganization that perturbs the carriers motion and the bond angle fluctuations are just two obvious sources of local decoherence <cit.>.
Our results shed new light on the hypothesis, promoted for biological systems <cit.>, that being at the edge of chaos is favorable to charge/excitonic transport. Indeed, classical chaos is a road to diffusive dynamics <cit.> and, in turn, as we show here, quantum diffusion-like dynamics is extremely robust with respect to environmental noise.
In perspective, it would be interesting to investigate more realistic systems at finite temperature, mainly biological ones. We conjecture that quantum diffusion is a most relevant feature of Nature's poised realm.
HMP thanks D. Chialvo for introducing S. Kauffman to him. GLC thanks E. Sadurni and A. Mendez-Bermudez for useful discussions. The work of FSLN and HMP was possible by the support of CONICET, SeCyT-UNC and FonCyT.
FB and GLC acknowledge support by the Iniziativa Specifica INFN-DynSysMath. This work has been financially supported by the Catholic University of Sacred Heart and by MIUR within the Project No. PRIN 20172H2SC4.
Supplementary Material –
§ CURRENT.
In this section we study the steady-state current through the HHAA model in presence of pumping and draining of excitation from the opposite edges of the chain, in presence of dephasing. We also derive an approximate expression of the current as a function of the diffusion coefficient.
To generate a current, excitations are incoherently pumped and drained at the chain edges. This is modeled by including additional terms in the Lindblad master equation Eq. (<ref>) from the main text, which becomes
ρ̇= L[ρ] = -i/ħ[ H ρ - ρ H ] + ℒ_ϕ[ρ] + ℒ _p[ρ]+ ℒ _d[ρ],
where H is the HHAA Hamiltonian (<ref>) from the main text, L_ϕ is the dephasing dissipator Eq. (<ref>) from the main text, while the additional terms,
ℒ _p[ρ] =γ_p/ħ(|1⟩⟨0|ρ|0⟩⟨1|-1/2|0⟩⟨0|ρ-1/2ρ|0⟩⟨0|),
and
ℒ _d[ρ] =γ_d/ħ(|0⟩⟨N|ρ|N⟩⟨0|-1/2|N⟩⟨N|ρ-1/2ρ|N⟩⟨N|),
are two operators modeling the pumping on the first site (|1⟩) and draining from the last site (|N⟩). Here |0⟩ is the vacuum state, where no excitation is present in the system <cit.>. For simplicity, here the pumping and draining rates are set to be equal in magnitude (γ_p=γ_d). From solving Eq. (<ref>) at the steady-state (L[ρ_ss]=0) one can compute the stationary current,
I_ss=γ_d/ħ⟨N|ρ_ss|N⟩ .
with ρ_ss being the steady-state density operator <cit.>.
§.§ Steady-state current: Average transfer time method.
Since the master equation approach discussed above is numerically expensive, for large N we use the average transfer time method (ATT), as described in <cit.>. The average transfer time τ is defined as
τ= γ_d/ħ∫_0^∞ t ⟨N |exp(-ℒ_eff t)ρ(0)| N⟩ dt
= γ_d/ħ⟨N |ℒ_eff^-2ρ(0)| N⟩,
where L_ eff is the one from Eq. (<ref>) without pumping.
In <cit.> it has been proved that the steady-state current determined from the master equation (<ref>) in absence of dephasing depends only on the average transfer time, namely
I_ss=γ_p/γ_p τ + ħ .
We have numerically verified that Eq. (<ref>) is valid also in presence of dephasing, so in the following we use it due to its lower numerical complexity together with a heuristic construction, detailed here below.
§.§.§ Heuristic construction of the mean transfer time.
The ATT method gives us the possibility to heuristically construct the mean transfer time by considering the characteristic times of dephasing-induced diffusion and draining.
Since at equilibrium the probability of being at site N is 1/N and the drain rate is γ_d/ħ, we can estimate the drainage time as ħ N/γ_d. Then, in order to determine the diffusion time, we know that an excitation moves from one site to a neighbor with an average time a^2/(2D). Furthermore, the excitation moves as a random walk and the total number of steps required in 1D is N(N-1). Therefore, we estimate the diffusion time as N(N-1)a^2/(2D) <cit.>. Thus, adding the drainage time and the diffusion time we have
τ=ħ N/γ_d+N(N-1)a^2/(2D) .
Figure <ref>a shows a comparison of I_ss as a function of dephasing computed using the three different methods illustrated above here: the stationary solution of the master equation (<ref>) (ME), the ATT method (<ref>-<ref>), and the heuristic formula (<ref>). In the latter case, the diffusion coefficient D has been computed using the Green-Kubo approach [Eq. (<ref>), Sup. Mat. <ref>]. A general good agreement is observed between the three approaches. Deviations at small dephasing are due to the finite system size (N=100), for which the excitation reaches the chain edge ballistically within a time shorter than τ_ϕ=ħ/γ_ϕ.
Fig. <ref>b shows the normalized steady-state current N^2 I_ss as a function of γ_ϕ for different N in the three regimes for the HHAA model described in the main text. We observe that, as the length N of the chain is increased, the behavior of the current is determined by the diffusion coefficient Eq. (<ref>) (see yellow lines in Fig. <ref>b) where D has been computed analytically for W=0 (SM Eq. (<ref>)) and numerically via the quantum drift approach for W 0 and N=1000 (see Sec. <ref>). The current decreases with dephasing in the extended regime (W=0) and it is enhanced in the localized regime (W=3J, up to an optimal dephasing), while it remains almost unaffected at the critical point (W=2J) up to a characteristic dephasing.
Although this analysis is done for the HHAA model, it should also be valid for other models with nearest-neighbor couplings, such as the Fibonacci chain analyzed in this paper. In other words, we expect that the steady-state current is mostly determined by the diffusion coefficient in such systems.
§ QUANTUM DRIFT
In order to reduce the computational cost of the calculation of the dynamics in presence of decoherence, we use the Quantum Drift (QD) strategy, which only involves Trotter-Suzuki evolutions of the wave-vector <cit.>. Here, the dynamics is obtained by the sequential application of unitary evolution operators to the wave-function in small time steps (dt). The noise/decoherence (interaction with the environment) is introduced by adding stochastic energy fluctuations on every site, Γ̂_ϕ=∑_n β_n |n⟩⟨n|, uncorrelated in time. The probability distribution of these fluctuations is a Lorentzian function,
P(β_n)= (1/π) (γ_ϕ/2)/[β_n^2+(γ_ϕ/2)^2].
Thus, the unitary evolution in a small time step is:
Û(dt)≈ e^iΓ̂_ϕdt/ħe^-iℋ̂dt/ħ,
where ℋ̂ is the system's Hamiltonian.
Finally, the evolved wave function at time t=N_tdt is:
|ψ̂(t)⟩=∏_j=1^N_te^iΓ̂_ϕdt/ħe^-iℋ̂ dt/ħ|ψ(0)⟩.
The QD evolution described here is equivalent to the Haken-Strobl dephasing (Eq. (<ref>)), see Fig. <ref>. As one can see there is a very good agreement between the Lindbladian and the QD evolution of the second moment of an initially local excitation for different dephasing strengths and system parameters.
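A minimal Python sketch of a single QD trajectory (illustrative names; the standard Cauchy sampler, rescaled by γ_ϕ/2, provides the Lorentzian kicks):

import numpy as np
from scipy.linalg import expm

def qd_evolve(H, psi0, gamma_phi, dt, nsteps, hbar=1.0, seed=0):
    rng = np.random.default_rng(seed)
    U = expm(-1j * H * dt / hbar)                                      # coherent Trotter step
    psi = psi0.astype(complex)
    for _ in range(nsteps):
        beta = (gamma_phi / 2.0) * rng.standard_cauchy(len(psi))       # Lorentzian of width gamma_phi/2
        psi = np.exp(1j * beta * dt / hbar) * (U @ psi)                # stochastic phase kick
    return psi

σ^2(t) and the purity are then obtained by averaging |ψ_n|^2 (and, for the purity, the squared overlaps between independent trajectories) over many noise realizations.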
§ HHAA MODEL: DYNAMICS IN ABSENCE OF DEPHASING.
Here we study the spreading of an initially localized wave packet at the center of the HHAA chain in absence of dephasing. In particular, we focus on the time evolution of the second moment σ^2_0 of the probability distribution to find the particle along the chain in absence of decoherence. As shown in the main text, in absence of decoherence and for long enough times, the second moment grows ballistically for W<2J, diffusively for W=2J and saturates for W>2J<cit.>.
It is known that, in the HHAA, in the localized regime the localization length of all eigenfunctions is 2ξ=a/ln[W/2J] <cit.>. It follows that the wave packet probability distribution at the steady state is localized close to a site n_0, P(n)=|⟨n|ψ(t)||⟩^2 = 1/2ξ(e^-|n-n_0|/ξ). Therefore, the variance's saturation value will be lim_t→∞σ_0^2(t) = l^2=2ξ^2=2a^2(2ln(W/2J))^-2.
In the following we will characterize the dynamics in the different regimes.
§.§ Extended phase.
In the extended phase, the dynamics of the variance for very long times become ballistic, σ_0^2(t)=u^2 t^2. From the Hamiltonian [Eq. (<ref>) of the main text] in the cases q=0 (ordered chain) and q=1/2 (dimerized chain) we have proved analytically (not shown) that the velocity u is directly connected with the support B of the spectral bands, and we have u^2=a^2B^2/8ħ^2. For q=0 there is a single band, B=4J and for q=1/2 we have two bands, with B=2 √(W^2+4 J^2)-2 √(W^2).
We here conjecture that the same expression is valid for any value of q in the HHAA model. For q given by the golden mean, in Ref. <cit.> it was shown that B=2|2J-W|. Thus we have u^2=4a^2|2J-W|^2 and the behavior of the variance in for long times is given by:
σ_0^2(t)=a^2|2J-W|^2/2ħ^2t^2.
These results have been confirmed numerically in Fig. <ref>a in the main text.
§.§ Critical point.
Here we analytically estimate the diffusion coefficient in absence of decoherence. We calculate the spreading of the wave packet ψ(t) perturbatively for short times (before the scattering due to the site potential enters in the dynamics),
so that the probability to be at site n at time t is: P_n(t)=|⟨n|ψ(t)||⟩^2 ≃ |⟨n| (1-i ℋ t/ħ)|n_0⟩|^2, where n_0 is the site where the excitation is localized initially. Defining ℋ_n,n_0=⟨n|ℋ|n_0⟩, and considering without lost of generality, n_0=0, we can write:
σ_0^2(t) = a^2∑_n P_n(t) n^2 -a^2(∑_n P_n(t) n)^2
≈
(t/ħ)^2a^2 ∑_n ℋ_n,0^2 n^2 -a^2(t/ħ)^4∑_nℋ_n,0^4 n^2
≈ (t/ħ)^2a^2 ∑_nℋ_n,0^2 n^2=v_0^2t^2
from which we find:
v^2_0=2a^2(J/ħ)^2,
for the HHAA since there are only nearest neighbors interactions.
We may define a time scale at which the initial ballistic spreading ends due to the presence of the quasi-periodic site potential of magnitude W. To see this effect, the perturbation expansion needs to be carried out to 4th order: P_n(t)=|⟨n|ψ(t)⟩|^2 ≃ |⟨n| (1-i ℋ t/ħ-ℋ^2 t^2/(2ħ^2)+i ℋ^3 t^3/(6ħ^3)+ℋ^4 t^4/(24ħ^4))|n_0⟩|^2. Thus, to this level of approximation we have:
σ_0^2(t)/a^2 ≈ 2J^2(t/ħ)^2-1/12 ((ℋ_0,0 - ℋ_1,1)^2+(ℋ_0,0- ℋ_-1,-1)^2)J^2(t/ħ)^4,
σ_0^2(t)/a^2 ≈ 2J^2(t/ħ)^2-2/12⟨ (ℋ_n,n -ℋ_n+1,n+1)^2 ⟩ J^2(t/ħ)^4.
where the energy differences squared were replaced by the average value:
(Δ E)^2 =⟨ (ℋ_n,n -ℋ_n+1,n+1)^2 ⟩ =1/N-1∑_n=1^N-1(ℋ_n,n-ℋ_n+1,n+1)^2/2.
This definition takes into account the "correlation" between neighbors. For independent random disorder (Anderson disorder), this definition directly yields the variance of the disorder, (Δ E)^2 =1/(N-1)∑_n=1^N-1ℋ_n,n^2, which is the standard quantity used to define the disorder time scale.
The first effect of this quartic correction is to change the concavity of σ_0^2(t). This happens when the second derivative of σ_0^2(t)/a^2 vanishes, at a time τ_W, so that:
τ_W=√((⟨ (ε_n - ε_n+1)^2⟩/2ħ^2)^-1)=ħ/Δ E.
By replacing with the HHAA site energies, using trigonometric identities, and summing over the sites, it can be shown that Δ E= W √((1-cos(2π q))/2), and we have:
τ_W = √(2)ħ/W√((1-cos(2π q)))
The diffusion coefficient in absence of dephasing, D_0, can be computed as follows:
D_0=v_0^2τ_W/2=a^2J^2/ħ√(2)/W√((1-cos(2π q))),
It is interesting to note how the correlations of the model (given by the modulation wave vector q) influence the scattering times and therefore the diffusion σ_0^2(t) = 2 D_0 t = v_0^2τ_W t.
Notice that here, the potential strength enters with a different power law than in the mean-free-time between collisions that results from the application of the Fermi Golden Rule to a Bloch state of energy ε for the uncorrelated disorder of Anderson's model<cit.> 1/τ_FGR=(2π/ħ)(W^2/12)N_1(ε) with N_1(ε)∝ 1/4π J√(1-(ε/2J)^2) being the density of directly connected states.
§ DIFFUSION COEFFICIENT IN PRESENCE OF DECOHERENCE.
§.§ Green-Kubo formula.
The diffusion coefficient D in presence of decoherence for the Haken-Strobl model can be computed from the Green-Kubo expression, using only the eigenenergies and eigenstates of the Hamiltonian,
Hϕ^μ = ε_μϕ^μ
as it has been derived in Ref. <cit.>:
D(u)=ħ/N∑_μ,ν=1^N γ_ϕ/γ_ϕ^2+ω_μ,ν^2|ĵ_μ,ν(u)|^2 ,
where γ_ϕ is the dephasing strength, ω_μ,ν=ε_μ-ε_ν is the energy difference between eigenstates μ and ν, and ĵ_μ,ν is the flux operator in the eigenbasis:
ĵ_ν,μ(u)=i/ħ∑_n,m (u·r_n,m)ϕ^μ *_n ϕ^ν_m H_n,m .
In the expression above, u is a unit vector indicating the transport direction, r_n,m is the vector connecting the positions of sites n and m, ϕ^ν_n is the amplitude of the ν-th eigenstate at site n and ℋ_n,m=⟨ n| ℋ| m ⟩ is the coupling between sites n and m. In our 1D system with nearest-neighbor interactions, u·r_n,m=(m-n)a=± a and ℋ_n,m=J(δ_m,n+1+δ_m,n-1). Therefore,
ĵ_ν,μ=iJa/ħ∑_nϕ^μ *_n(ϕ^ν_n+1-ϕ^ν_n-1).
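The following Python sketch (illustrative, with a=J=ħ=1) evaluates the Green-Kubo expression from the exact diagonalization of a nearest-neighbor chain, here the HHAA model at W=2J:

import numpy as np

def green_kubo_D(H, gamma_phi, J=1.0, a=1.0, hbar=1.0):
    N = H.shape[0]
    E, V = np.linalg.eigh(H)                                  # columns of V are the eigenstates phi^mu
    up = np.vstack([V[1:, :], np.zeros((1, N))])              # phi^nu_{n+1}
    dn = np.vstack([np.zeros((1, N)), V[:-1, :]])             # phi^nu_{n-1}
    jmat = 1j * J * a / hbar * (V.T @ (up - dn))              # flux matrix elements
    omega = E[:, None] - E[None, :]
    return hbar / N * np.sum(gamma_phi / (gamma_phi**2 + omega**2) * np.abs(jmat)**2)

N = 500
n = np.arange(N)
q = (np.sqrt(5.0) - 1.0) / 2.0
H = np.diag(2.0 * np.cos(2 * np.pi * q * n)) - (np.eye(N, k=1) + np.eye(N, k=-1))   # W = 2J (MIT)
print(green_kubo_D(H, gamma_phi=0.1))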
Equation (<ref>) has been compared with numerical simulations using the QD approach in Figures <ref>, <ref>, and <ref>. It has also been used to study the dependence of the diffusion coefficient on N in various models. Figure <ref> shows the diffusion coefficient D of the HHAA model in the three regimes as a function of the dephasing strength for different chain lengths N. We observe for small dephasing a clear dependence of D on the system size. This is due to the fact that, when dephasing is small, the excitation reaches the boundaries before diffusion can set in. Defining the typical time scale for dephasing to affect the dynamics as τ_ϕ=ħ/γ_ϕ, we can estimate the dephasing strength below which finite-size effects are relevant by comparing τ_ϕ with the time needed to reach the boundaries ballistically in the clean case (W=0). In the ballistic regime (W<2J) the decoherence strength below which finite-size effects start to be relevant decreases proportionally to 1/N, while in the diffusive regime (W=2J) it decreases as 1/N^2 (see vertical dashed lines in Figures <ref>ab). In the localized regime finite-size effects are negligible if the system size is larger than the localization length.
§.§ Analytical expression of the Diffusion coefficient from the coherent dynamics.
The presence of the Haken-Strobl dephasing can be thought of as the system being measured by the environment <cit.>. These measurements occur at random times, where the times between subsequent measurements are distributed as p(t)=e^-t/τ_ϕ/τ_ϕ, with τ_ϕ=ħ/γ_ϕ. In this section, we employ this interpretation of the Haken-Strobl dephasing to obtain analytical expressions for the diffusion coefficient.
When a measurement occurs, the system has a probability distribution of being at position r, P_0(r,t,r_0,t_0), determined by the coherent Hamiltonian dynamics. The initial position r_0 at time t_0 only defines the center of the probability density, since the system is isotropic. This assumption is valid in the models treated in this work unless the excitation is close to the boundaries. Consequently, P_0(r, t, r_0, t_0)=P_0(r-r_0, t-t_0, 0,0). For simplicity we will consider r_0=0, t_0=0.
The probability density of measuring the system at site r at time t once the measurement process is included (P(r, t, 0,0)) will be determined by the integral equation:
P(r, t, 0,0)=P_0(r, t, 0,0)(1-∫_0^t p(t_i) d t_i) + ∫ d r_i∫_0^t d t_i p(t_i) P̃(r, t, r_i, t_i) P_0(r_i, t_i, 0,0),
where the first term accounts for the probability of no measurement up to time t and the second for a first measurement at (t_i,r_i); the equation thus recursively accounts for the probability of not being measured and of being measured one or more times.
To directly analyze the second moment of the distribution we multiply by r^2 and integrate over r on both sides:
σ^2(t)=σ_0^2(t)(1-∫_0^t p(t_i) d t_i)+∫ d r_i∫_0^t d t_i p(t_i) [ r_i^2+σ^2(t-t_i) ] P_0(r_i, t_i, 0,0), where we used ∫ d r P̃(r,t, r_i, t_i) r^2 = r_i^2+σ^2(t-t_i),
σ^2(t)=σ_0^2(t)(1-∫_0^t p(t_i) d t_i)+∫_0^t d t_i p(t_i) σ_0^2(t_i)+∫_0^t d t_i p(t_i) σ^2(t-t_i),
where we have used the fact that the probabilities are independent of the initial site and time.
It can be shown by Laplace transforming Eq. (<ref>) (SM <ref>) that, for well-behaved p(t) and σ_0^2(t) (a condition trivially fulfilled in the systems we consider), the dynamics of the variance σ^2(t) becomes diffusive at long enough times. Therefore, in the long time limit (t→∞) we have:
σ^2(t) ≃ 2 D t,
(1-∫_0^t p(t_i) d t_i) ≃ 0,
∫_0^t d t_i p(t_i) t_i ≃ τ_ϕ,
and,
D = ∫_0^∞ d t_i p(t_i) σ_0^2(t_i)/2τ_ϕ.
Then if σ_0^2(t)=2 D_0 t ∀ t the measurement process does not affect the diffusion coefficient:
D=2 D_0∫_0^∞ p(t_i) t_i d t_i/2 τ_ϕ=D_0.
Another physically relevant case is when the dynamics is initially ballistic up to some time τ_W and diffusive afterwards:
σ_0^2(t)={[ v_0^2 t^2 if t<τ_W; 2 D_0 t if t>τ_W ] with D_0=v_0^2τ_W/2.
D=1/(2 τ_ϕ)(∫_0^τ_W (2 D_0/τ_W) t^2 p(t) d t+∫_τ_W^∞ 2 D_0 t p(t) d t)
Considering a Poisson process for the measurements: p(t)=e^-t/τ_ϕ/τ_ϕ, we have:
D(τ_ϕ)=D_0(2τ_ϕ/τ_W-(1+2τ_ϕ/τ_W)e^-τ_W/τ_ϕ),
This expression captures the dependence of D for both large and small values of τ_ϕ, for which D≈ D_0(1-(1/6)(τ_W/τ_ϕ)^2) and D≈ v^2_0τ_ϕ, respectively. Note that considering a process p_δ(t)=δ(t-2τ_ϕ) would instead yield D=v^2_0τ_ϕ for τ_ϕ<τ_W/2 and D=D_0 for τ_ϕ>τ_W/2.
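As a sanity check, the closed form above can be compared with a direct numerical evaluation of D = ∫_0^∞ p(t)σ_0^2(t)dt/(2τ_ϕ). The short sketch below does this for the ballistic-then-diffusive σ_0^2(t); the values v_0=τ_W=1 are arbitrary choices of the sketch.

```python
import numpy as np
from scipy.integrate import quad

v0, tau_W = 1.0, 1.0                  # arbitrary units for the sketch
D0 = 0.5 * v0**2 * tau_W

def D_numeric(tau_phi):
    # D = int_0^inf p(t) sigma0^2(t) dt / (2 tau_phi), with Poissonian p(t)
    p = lambda t: np.exp(-t / tau_phi) / tau_phi
    ballistic = quad(lambda t: p(t) * v0**2 * t**2, 0.0, tau_W)[0]
    diffusive = quad(lambda t: p(t) * 2.0 * D0 * t, tau_W, np.inf)[0]
    return (ballistic + diffusive) / (2.0 * tau_phi)

def D_closed(tau_phi):
    x = tau_W / tau_phi
    return D0 * (2.0 / x - (1.0 + 2.0 / x) * np.exp(-x))

for tau_phi in (0.1, 1.0, 10.0):
    print(tau_phi, D_numeric(tau_phi), D_closed(tau_phi))   # the two values agree
```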
§.§.§ Analytical solution for the spreading.
In this section we show that Eq. (<ref>) for p(t)=e^-t/τ_ϕ/τ_ϕ generates a diffusive dynamics at long times and find analytical solutions in some paradigmatic cases. Eq. (<ref>) can be rearranged in the following form:
σ^2(t)=f(t)+∫_0^t d t_i p(t_i) σ^2(t-t_i),
by noting that (1-∫_0^t p(t) d t)=e^-t/τ_ϕ=τ_ϕ p(t) and defining f(t)=τ_ϕ g(t)+∫_0^t d t_ig(t_i) with g(t)=σ_0^2(t)p(t).
The usual strategy to solve this type of equation is to use the Laplace's transform on the equation,
σ^2_LT(s)=ℱ(s)+σ^2_LT(s)𝒫(s),
where σ^2_LT(s), ℱ(s) and 𝒫(s)=1/sτ_ϕ+1 are the Laplace's transform of σ^2(t), f(t) and p(t) respectively.
Identifying 𝒢(s) as the Laplace's transform of g(t) we have
ℱ(s)=𝒢(s)(sτ_ϕ+1/s).
σ^2_LT(s)=ℱ(s)/1-𝒫(s)=𝒢(s)τ_ϕ(sτ_ϕ+1)^2/(sτ_ϕ)^2=𝒢(s)τ_ϕ[1/(sτ_ϕ)^2+2/sτ_ϕ+1 ].
Since the Laplace transform of t^n u(t), where u(t) is the step function, is n!/s^(n+1), we observe that σ^2(t) will be diffusive in the long time limit if 𝒢(0) is finite and nonzero, a condition trivially fulfilled in the systems under consideration. In this case, D=𝒢(0)/(2τ_ϕ)=∫_0^∞σ_0^2(t)p(t)dt/(2τ_ϕ), as we found in Eq. (<ref>).
The inverse transform of σ_LT^2(s) can be carried out in several cases (for example σ_0^2(t)=A_α t^α), however, here we only discuss two paradigmatic cases σ_0^2(t)=2D_0t and σ_0^2(t)=v_0^2t^2. In the first case, the diffusive spreading, we find σ^2(t)=2D_0 t, i.e. the dynamic of σ_0^2 is not affected.
In the second case, the ballistic spreading, the solution is
σ^2(t)=2 τ_ϕ v_0^2 (τ_ϕ(e^-t/τ_ϕ-1)+t),
which for t≪τ_ϕ, σ^2(t)≈ v_0^2 t^2, maintains its ballistic behavior but becomes diffusive for t≫τ_ϕ, σ^2(t)≈ 2 v_0^2τ_ϕ t=2Dt. The same expression is found when the spreading in an ordered tight-binding chain with Haken-Strobl decoherence is addressed with the Lindblad formalism <cit.>.
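The renewal equation can also be integrated numerically by simple time stepping, which provides a useful consistency check of the Laplace-transform solution. The sketch below does this for ballistic coherent spreading; v_0=τ_ϕ=1, the step size, and the trapezoidal quadrature are choices of the sketch.

```python
import numpy as np

# Time-stepping of sigma^2(t) = f(t) + int_0^t p(t') sigma^2(t - t') dt' with
# f(t) = tau g(t) + int_0^t g and g(t) = sigma0^2(t) p(t), for sigma0^2(t) = v0^2 t^2.
v0, tau = 1.0, 1.0
dt, T = 2e-3, 10.0
t = np.arange(0.0, T, dt)

p = np.exp(-t / tau) / tau
g = (v0**2 * t**2) * p
f = tau * g + np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * dt)))

sigma2 = np.zeros_like(t)
for k in range(1, len(t)):
    known = dt * np.dot(p[1:k + 1], sigma2[k - 1::-1])    # contributions of already-known sigma^2
    sigma2[k] = (f[k] + known) / (1.0 - 0.5 * dt * p[0])  # i = 0 trapezoid term contains sigma^2(t_k)

exact = 2.0 * tau * v0**2 * (tau * (np.exp(-t / tau) - 1.0) + t)
print(np.max(np.abs(sigma2 - exact)) / exact[-1])         # small; shrinks further as dt -> 0
```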
It is important to note that if one considers two Poisson processes, p_1(t)=e^-t/τ_1/τ_1 and p_2(t)=e^-t/τ_2/τ_2, their combined effect is equivalent to a single process with p(t)=e^-t/τ/τ and τ=τ_1 τ_2/(τ_1 +τ_2), i.e. the inverse time scales add. This is the standard result in classical systems, where one considers a particle that moves with velocity v_0 to the left or right with equal probability after a scattering event from either of the two processes. The diffusion coefficient in this case is D= v_0^2τ=v^2_0τ_1 τ_2/(τ_1 +τ_2)=D_1/(1+τ_1/τ_2), which for τ_2≫τ_1 generates a linear correction to the diffusion coefficient associated with the process p_1.
§.§ Analytical expression of the Diffusion coefficient in the limit of strong and weak dephasing.
In this section, we use Eq. (<ref>) and the specific dynamics of σ^2_0(t) in the HHAA model (SM <ref>), to obtain the behavior of D in the limit of strong and weak dephasing.
We define the mean free path l from the expectation value of the coherent spreading, l^2=∫_0^∞σ_0^2(t) p(t) dt. We compare it with a random walk analysis of the diffusion coefficient <cit.>, which corresponds to a delta process where the system is measured by the environment at regular time intervals δ t=2ħ/γ_ϕ. The diffusion coefficient is then directly determined by the coherent spreading at the dephasing time:
D=l^2/(2δ t)= σ^2_0(t=2ħ/γ_ϕ)/(2(2ħ/γ_ϕ)),
This expression, although approximate, can be considered a first estimate of the diffusion coefficient.
Figures <ref> show the diffusion coefficient obtained from the time evolution (symbols), Eq. (<ref>) (dotted curves), Eq. (<ref>) (solid colored curves), and from the numerical integration of Eq. (<ref>) (solid black curves). The yellow curve corresponds to Eq. (<ref>). We observe that a Poisson process (Eq. (<ref>)) yields smoother results than a delta process (Eq. (<ref>)), since the fluctuations produced by particular interferences are washed out, at almost the same computational cost.
§.§.§ Large dephasing.
For sufficiently large dephasing, γ_ϕ≫ħ/τ_W, the noise interrupts the dynamics before the system notices whether it is in an extended, critical, or localized phase. This is known as the strong Zeno regime. In this case, the measurement happens during the initial ballistic dynamics, where the variance grows as σ_0^2(t)=(2a^2J^2/ħ^2)t^2. Therefore, the dynamics corresponds to a random walk with a mean free path l^2=(2a^2J^2/ħ^2)δ t^2 and a mean free time δ t=2ħ/γ_ϕ. Thus, the diffusion coefficient is:
D=(1/2)(2a^2J^2/ħ^2)(2ħ/γ_ϕ)^2(γ_ϕ/2ħ)=2a^2J^2/(ħγ_ϕ).
The same result is obtained with the Poisson process p(t)=e^-t/τ_ϕ/τ_ϕ. This result is valid for all γ_ϕ for an infinite clean chain (W=0) <cit.>, since in that case τ_W →∞.
§.§.§ Extended phase (W<2J).
For sufficiently small dephasing strength (depending on how close we are to the MIT), the system enters the long-time ballistic regime where σ^2_0(t)=a^2|2J-W|^2/2ħ^2t^2 from which we have:
D=a^2|2J-W|^2/2ħγ_ϕ.
Note that as we approach the MIT our estimate is valid for a smaller and smaller dephasing strength since the system enters the ballistic regime at larger times. Using the Poisson process p(t) and Eq. (<ref>) we obtain the same results. In Fig. <ref>a we compare the diffusion coefficient obtained from the numerical simulations (symbols) with the analytical approximation Eq. (<ref>).
§.§.§ MIT (W=2J).
At the critical point, for t>τ_W the dynamic is diffusive and the variance is linearly dependent on the measurement time σ^2_0(δ t)=2 D_0 δ t. Given that we have l^2=2D_0 δ t, provided that γ_ϕ<2ħ/τ_W, and D= l^2/(2δ t) we obtain:
D=2D_0δ t/2δ t=D_0,
i.e. a diffusion coefficient independent of the dephasing.
This was shown to be exact for an always-diffusive dynamics in SM <ref>. On the other hand, when we consider ballistic dynamics at short times and a Poisson measurement process, some corrections appear.
§.§.§ Localized phase (W>2J).
For sufficiently small dephasing strength (depending on how close we are to the MIT), the system gets localized with a localization length ξ=l/√(2) before dephasing sets in. So, considering σ^2_0=l^2 in Eq. <ref>:
σ^2(t)=(l^2/τ_ϕ)t=(l^2γ_ϕ/ħ)t=(2ξ^2γ_ϕ/ħ)t.
This limit is also found in Ref. <cit.> from Eq. <ref>. Since in the HHAA model 2ξ^2=2a^2(2ln(W/2J))^-2, the diffusion coefficient is:
D=ξ^2γ_ϕ/ħ=a^2γ_ϕ/[(2ln(W/2J))^2ħ].
The analytical result is shown in Fig. <ref>b compared with the numerical results. We observe a small discrepancy with the above formula, rooted in the fact that the numerically found l^2 is slightly smaller than the theoretical one.
Notice that, in contrast with the other regimes, the delta and Poisson processes do not yield the same expression (the use of a delta process would underestimate the diffusion coefficient by a factor of two).
§ HHAA MODEL WITH DIFFERENT VALUES OF q.
The diffusion coefficient derived for the critical point in the absence of dephasing (Eq. (<ref>)) shows a dependence on q. In order to check the validity of our analytical prediction and the generality of the dephasing-independent regime, we analyzed other irrational values of q, beyond the golden mean value used in the main text.
In particular, we study the dynamics of the system using fractions of the golden ratio as irrational numbers, q=q_g/m, where m is an integer power of two. The continued fractions of the irrationals used are presented in Table I. Trials with irrational numbers of the form [0,{m}] yielded similar results.
The spreading in time of the wave packet in absence and presence of dephasing together with our analytical estimations for the diffusion coefficient is shown in Fig. <ref>a,b. As one can see, the initial ballistic spreading (Eq. (<ref>)) lasts until a time τ_W (Eq. (<ref>), indicated as vertical lines in Fig. <ref>a,b). After that time the dynamics is diffusive with a diffusion coefficient given by Eq. (<ref>). We notice, see panel (a), the presence of oscillations in the second moment which increase as q decreases. These oscillations are partly erased in presence of dephasing at long times as shown in Fig. <ref>b for γ_ϕ=0.02.
Fig. <ref>c shows the fitted values of D (symbols) together with the D values obtained from Eq. (<ref>) (dashed curves) as a function of γ_ϕ for different q at the MIT. As vertical dashed lines, we plot γ^c_ϕ=2ħ/τ_W, which coincides with the beginning of the strong dephasing regime, where the diffusion coefficient decreases with dephasing. Notice that for large values of m the diffusion coefficient D presents wide oscillations as a function of γ_ϕ, probably due to a weaker irrationality of the q value. More investigations should be done in the future to understand the origin of these interesting oscillations. In Fig. <ref>d we plot the diffusion coefficient re-scaled by the theoretical value in the absence of dephasing (Eq. (<ref>)) and γ_ϕ rescaled by the elastic scattering rate γ^c_ϕ=2ħ/τ_W. Fig. <ref>d confirms the validity of our analytical expressions of D and τ_W as a function of q.
§ STUDY OF DIFFERENT PARADIGMATIC MODELS OF TRANSPORT.
In this section we test the validity of Eq. (<ref>) and Eq. (<ref>), using two models that present a coherent diffusion regime and/or criticality: the Fibonacci chain, and the Power-Banded Random Matrices (PBRM) model.
§.§ The Fibonacci chain.
The Fibonacci model is described by the Hamiltonian:
ℋ=∑_n[J(| n⟩⟨n+1|+|n+1⟩⟨n|) + ε_n|n⟩⟨n|],
where the on-site potential is determined by ε_n=W(⌊ (n+1) q_g^2⌋-⌊ n q_g^2⌋), here ⌊ x⌋ represents the integer part of x and q_g=√(5)-1/2 is the golden ratio. In this potential, ε_n corresponds to the n-th element of the Fibonacci word sequence. For example, with a chain of length 8 the sequence is 0W00W0W0.
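A short snippet reproducing this on-site sequence (with the site index starting at n=1, as assumed here) could look as follows.

```python
import numpy as np

qg = (np.sqrt(5.0) - 1.0) / 2.0      # golden ratio

def fibonacci_potential(N, W=1.0):
    # eps_n = W (floor((n+1) qg^2) - floor(n qg^2)), sites n = 1..N
    n = np.arange(1, N + 1)
    return W * (np.floor((n + 1) * qg**2) - np.floor(n * qg**2))

print(fibonacci_potential(8))        # -> [0. 1. 0. 0. 1. 0. 1. 0.], i.e. 0W00W0W0
```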
The dynamics in the Fibonacci chain were studied in absence and presence of dephasing <cit.>. In absence of dephasing it is known that the second moment grows, after the initial quadratic spreading, as a power law σ^2_0(t) ∝ t^α with an exponent that depends on the strength of the on-site potential. It grows subdiffusively (α<1) for W>3.15J, diffusively (α=1) for W=3.15J and superdiffusively (α>1) for W < 3.15J. These dynamics are shown in Fig. <ref>a.
This spreading can be written analytically in the approximated and simplified form:
σ_0^2(t)={[ v_0^2 t^2 if t<τ_W; 2 A t^α if t>τ_W ] with A=v_0^2τ_W^(2-α)/2.
From this expression and using Eq. (<ref>) with a Poisson process we obtain an analytical expression for the diffusion coefficient in presence of dephasing:
D=v^2_0 (τ_W^3 E_-α(τ_W/τ_ϕ)+2 τ_ϕ^3-τ_ϕ e^-τ_W/τ_ϕ(2 τ_ϕ ^2+2 τ_ϕτ_W+τ_W^2))/(2 τ_ϕ^2),
where E_-α(τ_W/τ_ϕ)=∫^∞_1e^-(τ_W/τ_ϕ)t t^α dt.
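The expression above can be checked against the defining integral D = ∫_0^∞ p(t)σ_0^2(t)dt/(2τ_ϕ). The sketch below does this numerically for an arbitrary (illustrative) choice of v_0, τ_W and α.

```python
import numpy as np
from scipy.integrate import quad

v0, tau_W, alpha = 1.0, 1.0, 1.3              # illustrative values only
A = 0.5 * v0**2 * tau_W**(2.0 - alpha)

def E_minus_alpha(x):
    # E_{-alpha}(x) = int_1^inf e^{-x t} t^alpha dt
    return quad(lambda t: np.exp(-x * t) * t**alpha, 1.0, np.inf)[0]

def D_closed(tau):
    x = tau_W / tau
    return v0**2 * (tau_W**3 * E_minus_alpha(x) + 2.0 * tau**3
                    - tau * np.exp(-x) * (2.0 * tau**2 + 2.0 * tau * tau_W + tau_W**2)) / (2.0 * tau**2)

def D_numeric(tau):
    p = lambda t: np.exp(-t / tau) / tau
    ballistic = quad(lambda t: p(t) * v0**2 * t**2, 0.0, tau_W)[0]
    algebraic = quad(lambda t: p(t) * 2.0 * A * t**alpha, tau_W, np.inf)[0]
    return (ballistic + algebraic) / (2.0 * tau)

for tau in (0.2, 1.0, 5.0):
    print(tau, D_closed(tau), D_numeric(tau))  # the two columns agree
```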
The vertical lines in Fig. <ref>a represent τ_W, calculated from the analytical equations Eq. (<ref>) and Eq. (<ref>), yielding 1/τ_W=q_g W/ħ. After this time the initial ballistic dynamics stops and the algebraic dynamics starts. In particular, for W=3.15J, where the subsequent dynamics is diffusive, we obtain D_0=v_0^2τ_W/2. This analytical prediction is shown as a black-dashed line on top of the red curve.
Once dephasing is added, the dynamics becomes diffusive for all values of W. The diffusion coefficient as a function of the dephasing strength was computed numerically through a quantum drift dynamics for different values of W. These results are shown as symbols in Fig. <ref>b. They are compared with the numerical integration of Eq. (<ref>) using a Poisson process (black curves) and the analytical expression (Eq. (<ref>), yellow curves). We conclude that the diffusion coefficient depends only on the coherent dynamics and the noise strength.
From equations (<ref>) and (<ref>), it is clear that the dependence of σ_0^2(t) determines the behavior of D(γ_ϕ). Particularly, if σ_0^2(t)∝ t^α then D(γ_ϕ) ∝ (γ_ϕ)^(1-α) for γ_ϕ≪ 2ħ/τ_W. This dependence is pointed out in Fig. <ref>b with dashed-black lines on top of the data. These results are consistent with recent findings reported in Ref. <cit.>.
§.§ The PBRM model.
The power-law banded random matrix (PBRM) model describes one-dimensional (1D) tight-binding chains of length N with long-range random hoppings. This model is represented by N× N real symmetric random matrices whose elements are statistically independent random variables characterized by a normal distribution with zero mean and variance given by,
⟨|ℋ_ii|^2⟩=J^2 and ⟨|ℋ_ij|^2⟩=(J^2/2)[1+(|i-j|/b)^(2μ)]^-1 with i≠ j.
The PBRM model, Eq. (<ref>), depends on two control parameters: μ and b, while J is an energy scale that can be considered equal to 1 for all practical purposes.
For μ > 1 (μ < 1) the PBRM model is in the insulating (metallic) phase, so its eigenstates are localized (delocalized). At the MIT, which occurs for all values of b at μ = 1, the eigenfunctions are known to be multi-fractal.
The statistical properties of the eigenfunctions and eigenvalues of this model have been widely studied<cit.>. Here we study the spreading dynamics of an initially localized excitation at the middle of the chain in absence and presence of a decoherent environment.
As in the previous systems, the initial spreading of the local excitation is ballistic, where the second moment is given by σ^2_0=v_0^2t^2. Generalizing Eq. (<ref>) to account for the randomness of the Hamiltonian, we found that the velocity v_0 is:
v^2_0= 2∑^N/2_n=1⟨ℋ_n,0^2⟩ n^2= ∑^N/2_n=1 J^2 n^2/[1+(n/b)^(2μ)],
where we summed over the sites to the right and left (factor 2) of the initial site (denoted as 0). This initial velocity (Eq. (<ref>)) diverges for μ < 3/2 at large N as N^(3-2μ). For large N, b≪ 1, and μ< 3/2, the sum can be approximated by an integral, yielding v_0^2≈ J^2 b^(2μ) N^(3-2μ)/[(3-2μ)2^(3-2μ)].
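A minimal sketch of how a PBRM realization and the ballistic velocity can be generated numerically is given below; the chosen N, b and μ are purely illustrative, and the integral approximation printed for comparison is the one quoted above.

```python
import numpy as np

rng = np.random.default_rng(1)

def pbrm_hamiltonian(N, b=0.01, mu=1.0, J=1.0):
    # real symmetric matrix with <H_ii^2> = J^2 and <H_ij^2> = (J^2/2)/(1 + (|i-j|/b)^(2 mu))
    i, j = np.indices((N, N))
    var = np.where(i == j, J**2, 0.5 * J**2 / (1.0 + (np.abs(i - j) / b) ** (2.0 * mu)))
    upper = np.triu(rng.normal(0.0, 1.0, (N, N)) * np.sqrt(var))
    return upper + np.triu(upper, 1).T

def v0_squared(N, b=0.01, mu=1.0, J=1.0):
    n = np.arange(1, N // 2 + 1)
    return np.sum(J**2 * n**2 / (1.0 + (n / b) ** (2.0 * mu)))

N, b, mu = 1000, 0.01, 1.0                       # illustrative parameters
print(v0_squared(N, b, mu))                      # exact sum
print(b**(2 * mu) * N**(3 - 2 * mu) / ((3 - 2 * mu) * 2.0**(3 - 2 * mu)))  # integral approximation
H = pbrm_hamiltonian(N, b, mu)                   # one disorder realization
```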
This initial ballistic spreading lasts up to t=τ_W, which should be addressed numerically, since Eq. (<ref>) is only valid for nearest-neighbor chains and a similar analysis for this model does not yield a simple expression. However, as a first approximation, if we use Eq. (<ref>) with uncorrelated and Gaussian distributed site energies with ⟨|ℋ_ii|^2⟩=J^2, we obtain τ_W=1.
For t>τ_W, we find numerically that for 0.5<μ<1.5 the second moment of the excitation spreads diffusively (see Fig. <ref>a for μ=1). Note that the parameter b modifies the initial velocity and the diffusion coefficient. Consequently, we choose a small b=0.01 to reduce both the magnitude of the initial spread and the diffusion coefficient, generating slower dynamics and a larger window for diffusive dynamics before the system reaches saturation (at fixed N). In the diffusive regime, we find σ^2_0≈ v_0^2 (√(2)τ_W) t. The factor √(2) is introduced based on the numerical results to correct the discrepancy in τ_W due to the long-range hopping.
It is important to note that, although the system is localized for 1.0<μ<1.5, its eigenfunctions have power-law tails with exponent 2μ, and therefore its second moment diverges as N→∞. The presence of these fat tails allows an unbounded growth in time of the second moment in the limit N→∞. For μ<1.5 the saturation value of the second moment σ^2_0,SV is σ^2_0,SV = (N^2/12) f(b,μ), where f(b,μ)≤ 1.
Thus, for μ<1.5 and assuming a spreading form σ^2_0(t>τ_W)= v_0^2τ_W^2+√(2)v_0^2τ_W(t-τ_W), we can calculate the time t_s where the spreading reaches its saturation value by imposing σ^2_0,SV=σ^2_0(t_s) obtaining:
t_s=σ^2_0,SV/(√(2)v_0^2τ_W)+τ_W(√(2)-1)/√(2)∝ N^(2μ-1).
Our analytical estimate of t_s agrees with the numerical findings (see Fig. <ref>a for μ=1). Eq. (<ref>) implies that, as N increases, for μ<1/2 the saturation value will be reached at shorter times and eventually the dynamics will always be ballistic (t_s becomes smaller than τ_W). In the opposite case, for 1/2<μ<3/2, t_s increases with N and we have a diffusive spreading until saturation.
As in the previous models, the presence of a coherent quantum diffusion (for 1/2< μ < 3/2), generates an almost decoherence-independent diffusive regime. Indeed, for 2ħ/t_s ≲γ_ϕ≲ 2ħ/τ_W, D is almost constant, as most of the environmental measurements fall in the diffusive regime (after τ_W and before the saturation time t_s).
When γ_ϕ≪ 2ħ/t_s the noise enters the dynamics after saturation, generating finite-size effects. From Eq. (<ref>) we can see that for 1/2< μ < 3/2, t_s increases with N, and finite-size effects start at smaller values of the decoherence strength, see Fig. <ref>b. For γ_ϕ > 2ħ/τ_W, decoherence affects the dynamics mainly during the initial ballistic spreading, leading to a decrease of the diffusion coefficient proportional to v_0^2.
For μ<1/2, the velocity of the initial ballistic spreading, see Eq. (<ref>), increases with N faster than the saturation value. Therefore, t_s decreases with N, becoming smaller than τ_W and leaving no room for diffusive dynamics. Hence, no decoherence-independent region can be found for the diffusion coefficient.
For μ>3/2, t_s converges to a constant value as N increases. Thus, for γ_ϕ<2ħ/t_s the diffusion coefficient will depend linearly on γ_ϕ and we cannot have a dephasing-independent regime. This situation is similar to the localized case of the Harper-Hofstadter-Aubry-André model.
§ PURITY.
The purity, defined as
M(t)= Tr[ρ(t)ρ(t)],
where ρ(t)=e^ℒtρ_0 is the evolved density matrix, is a measure of the level of coherence of ρ(t). M(t)=1 implies that ρ(t) is a pure state (fully coherent), while M(t)<1 indicates a mixed state (incoherent superposition).
In the following we show that the purity can be calculated using the quantum drift (QD) simulation by generating a Loschmidt echo in the dynamics.
The superoperator ℒ is defined by,
ℒ[ρ] = -i/ħ[ ℋρ - ρℋ] + ℒ_ϕ[ρ]= ℒ_0[ρ]+ℒ_ϕ[ρ],
where ℋ is the Hamiltonian and ℒ_ϕ the HS dephasing. We can see that ℒ^†=ℒ_0^†+ℒ_ϕ^†=-ℒ_0+ℒ_ϕ, and since the density matrix is a Hermitian operator we have ρ(t)=ρ^†(t), hence e^ℒtρ_0=ρ_0e^ℒ^†t. Using these properties we rewrite the definition of the purity in the following form,
M(t) = Tr[ρ(t)ρ(t)]= Tr[e^ℒtρ_0e^ℒtρ_0]
= Tr[ρ_0e^ℒ^† t e^ℒtρ_0]≡ Tr[ρ_0 ρ_LE(2t)],
where it is clear that the purity is a comparison between the initial density matrix and the density matrix ρ_LE(2t), which is the result of two evolutions. In detail, there is an initial forward evolution ρ(t)=e^(ℒ_0+ℒ_ϕ)tρ_0 and a second evolution with the sign of the Hamiltonian inverted (backward evolution), ρ_LE(2t)=e^(-ℒ_0+ℒ_ϕ)tρ(t), i.e. the purity corresponds to the echo observed on ρ_0 after reversing time. If the initial state is a pure state ρ_0=|0⟩⟨0|, we can directly obtain the purity numerically by a stochastic simulation of the forward and backward evolutions and by looking at the probability of returning to the initial state (in our case, the initial site).
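As an illustration of this echo construction, the sketch below estimates M(τ_R) from a crude Trotterized phase-kick unravelling of the Haken-Strobl dephasing. This is not the quantum-drift scheme used in the text, and the Aubry-André cosine potential, the time step, and the rate convention (phase variance γ_ϕdt per step, damping inter-site coherences at a rate ∼γ_ϕ/ħ) are assumptions of the sketch: the state is evolved forward with +ℋ and then backward with -ℋ, with independent noise in the two legs, and the return probability to the initial site is averaged over realizations.

```python
import numpy as np

rng = np.random.default_rng(0)

def echo_purity(eps, J=1.0, gamma_phi=0.05, tau_R=20.0, dt=0.05, n_real=100):
    # M(tau_R) ~ <|<0| V_noise(tau_R) U_noise(tau_R) |0>|^2>: forward evolution with +H,
    # backward evolution with -H, independent dephasing noise in both legs (hbar = 1).
    N = len(eps)
    H = np.diag(eps) + J * (np.eye(N, k=1) + np.eye(N, k=-1))
    E, V = np.linalg.eigh(H)
    U_fwd = V @ np.diag(np.exp(-1j * E * dt)) @ V.conj().T   # exp(-i H dt)
    U_bwd = U_fwd.conj().T                                   # exp(+i H dt), i.e. reversed Hamiltonian
    n_steps = int(round(tau_R / dt))
    site0 = N // 2
    M = 0.0
    for _ in range(n_real):
        psi = np.zeros(N, complex)
        psi[site0] = 1.0
        for U in (U_fwd, U_bwd):
            for _ in range(n_steps):
                psi = U @ psi
                # random on-site phases with variance gamma_phi*dt (rate convention assumed)
                psi = psi * np.exp(1j * rng.normal(0.0, np.sqrt(gamma_phi * dt), N))
        M += abs(psi[site0])**2
    return M / n_real

# Example: assumed Aubry-Andre potential at the critical point W = 2J
N, W, qg = 144, 2.0, (np.sqrt(5.0) - 1.0) / 2.0
eps = W * np.cos(2.0 * np.pi * qg * np.arange(N))
print(echo_purity(eps))
```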
We studied the purity as a function of time in the extended, critical, and localized regimes in the HHAA model changing the decoherence strength. These results are shown in Figures <ref>. We observe that for short times the decay of the purity is exponential and only depends on the decoherence strength and the initial state in all regimes. After t≈4ħ/γ_ϕ (numerically estimated), the decay of the purity becomes a power law, M(t)∝1/√(D(γ_ϕ,W)t), where D(γ_ϕ,W) is the diffusion coefficient of the forward dynamics (dashed curves in Fig. <ref>). From the results of the previous sections (for γ_ϕ<γ^c_ϕ) we infer that the rate of decay of the purity in this power-law regime decreases with γ_ϕ in the extended regime, increases in the localized regime, and remains constant at the critical point. This can be interpreted by considering that the localized states are more protected from decoherence, as decoherence affects fewer sites. In this case, as we increase the decoherence strength the decay of the purity is stronger in both the short and long time regimes as a consequence of the delocalization of the wave function. Secondly, in the extended regime, while a stronger decoherence causes a faster decay in the purity at short times, at large times, where the forward dynamics determine the decay rate, it becomes slower for stronger decoherence. This counter-intuitive result is understood as a consequence of the ballistic growth of the wave packet, which in the large time makes it more sensitive to fluctuations.
To clarify the behavior of M(t) at the MIT, we show in Fig. <ref>a the evolution of P_00 (probability of being at the initial site), where the Hamiltonian is reverted at time τ_R. At the LE-time, t=2τ_R, one has P_00(2τ_R)=M(τ_R). For γ_ϕ≪ħ/τ_R, we observe that the excitation returns to the initial site and an echo is formed in P_00(t). Note that in the absence of dephasing the return is complete and the purity is 1. However, if τ_R≫ 4ħ/γ_ϕ, P_00(2τ_R) is only determined by the forward diffusive dynamics, without significant echo formation. There are no coherences left to reconstruct the initial dynamics and therefore no echo (peak) is observed, i.e. P_00(t) keeps decaying even after the Hamiltonian is reverted. This means that the memory of the initial state is completely lost. Thus, the density matrix is the incoherent superposition of all possible histories. In this sense, after 4ħ/γ_ϕ the diffusive spreading observed at the MIT differs from the coherent quantum diffusion in that the dynamics is no longer reversible.
This purity behavior at the MIT is summarized in Fig. <ref>b, where the value of the echo (purity) for different τ_R is shown as a function of τ_ϕ=ħ/γ_ϕ. As one can see, we observe a constant plateau up to τ_ϕ≈τ_R/4, indicated by vertical dashed-black lines, followed by an exponential growth up to the value 1.
Similar results are found by looking at the width of the returned packet. This is shown in Fig. <ref>c, where the time at which the second moment reaches its minimum (counted from the reversal time τ_R) is plotted as a function of τ_ϕ. After the change in the sign of the Hamiltonian the wave function starts to shrink; however, this shrinking lasts until the echo time (2τ_R) only if τ_ϕ>2τ_R. This is shown in Fig. <ref>c as a plateau. When τ_ϕ<2τ_R, the width of the wave packet reaches its minimum at approximately t≈τ_ϕ/2 and starts to broaden again. It is interesting to note that for 2ħ/τ_R< γ_ϕ < 4ħ/τ_R, the wave function is widening again but we still observe an echo in the polarization.
We observed that the dependence of the diffusion coefficient on the dephasing strength is inherited by the purity (LE) dynamics, as for long times it decays with a power law depending only on D. As a consequence, the purity decay at the critical point enters an almost dephasing-independent regime. However, this regime differs substantially from the chaos-induced, perturbation-independent LE decay proposed by Jalabert & Pastawski<cit.>, as we might have hinted from Ref. <cit.>. Indeed, in our case the correlation length of the noise fluctuations is smaller than the mean free path, which does not satisfy the conditions needed for a perturbation-independent decay of the LE. For our local noise, the Feynman history that has suffered a collision with the noisy potential loses the memory of where it comes from, and is thus irreversible as in Büttiker's dephasing voltage probe. In that sense, the environment-independent decay of the LE/purity should not be interpreted in the perturbation-independent decoherence context, but rather as a sign of strong irreversibility.
|
http://arxiv.org/abs/2307.07551v1 | 20230714180004 | Resolved Kennicutt-Schmidt law in two strongly lensed star-forming galaxies at redshift 1 | [
"David Nagy",
"Miroslava Dessauges-Zavadsky",
"Matteo Messa",
"Johan Richard",
"Jiayi Sun",
"Françoise Combes",
"Yannick Eyholzer"
] | astro-ph.GA | [
"astro-ph.GA"
] |
Resolved KS law in two strongly lensed SFGs at z=1
D. Nagy et al.
Département d’Astronomie, Université de Genève, Chemin Pegasi 51, 1290 Versoix, Switzerland
Department of Astronomy, Stockholm University, AlbaNova University Centre, SE-106 91 Stockholm, Sweden
Université Lyon, Université Lyon1, ENS de Lyon, CNRS, Centre de Recherche Astrophysique de Lyon UMR5574, Saint-Genis-Laval, France
Department of Physics and Astronomy, McMaster University, 1280 Main Street West, Hamilton, ON L8S 4M1, Canada
Canadian Institute for Theoretical Astrophysics (CITA), University of Toronto, 60 St George Street, Toronto, ON M5S 3H8, Canada
LERMA, Observatoire de Paris, PSL Research Université, CNRS, Sorbonne Université, UPMC, Paris, France
We study the star formation rate (SFR) versus molecular gas mass (M_mol) scaling relation on scales from hundreds to thousands of parsecs in two strongly lensed galaxies at redshift z∼ 1, the Cosmic Snake and A521. We trace SFR using extinction-corrected rest-frame UV observations with the Hubble Space Telescope (HST), and M_mol using detections of the CO(4–3) line with the Atacama Large Millimetre/submillimetre Array (ALMA). The similar angular resolutions of our HST and ALMA observations of 0.15-0.2^'', combined with magnifications reaching μ>20, enable us to resolve structures in the galaxies with sizes below 100pc. These resolutions are close to those of nearby galaxy studies. This allows us to investigate for the first time the Kennicutt-Schmidt (KS) law (SFR-M_mol surface densities) at different spatial scales, from galactic scales to ∼ 100pc scales, in galaxies at z∼ 1. At integrated scales we find that both galaxies satisfy the KS law defined by galaxies at redshifts between 1 and 2.5. We test the resolved KS (rKS) law in cells of sizes down to 200pc in the two galaxies. We observe that this relationship generally holds in these z∼ 1 galaxies, although its scatter increases significantly with decreasing spatial scale. We check the scale dependence of the spatial correlation between the surface densities of SFR and M_mol by focussing on apertures centred on individual star-forming regions and molecular clouds. We conclude that star-forming regions and molecular clouds become spatially de-correlated at ≲1kpc in the Cosmic Snake, whereas they appear de-correlated at all spatial scales (from 400pc to 6kpc) in A521.
Resolved Kennicutt-Schmidt law in two strongly lensed star-forming galaxies at redshift 1
David Nagy1email: [email protected] Miroslava Dessauges-Zavadsky1 Matteo Messa1,2 Johan Richard3 Jiayi Sun4,50000-0003-0378-4667 Françoise Combes6 Yannick Eyholzer1
received: ** 2023, accepted: * 2023
=====================================================================================================================================================================================
§ INTRODUCTION
The star formation rate (SFR) and the total atomic (H I) and molecular (H_2) gas mass (M_gas) of galaxies are closely related. Hydrogen being the primary fuel for star formation, its mass content is expected to correlate with SFR. A study of the SFR-M_gas relation by <cit.> revealed a clear correlation between the volume densities of SFR and M_gas, and in <cit.> it was recast as a power law relationship between surface densities (Σ): ΣSFR = A (Σ M_gas)^n. <cit.> measured a power law index n of the relation of 1.4± 0.15 in local galaxies. H_2 is the gas phase in which the majority of star formation occurs, as it is the densest and coldest phase of the interstellar medium. A galaxy with a high H_2 mass (M_mol) content is thus expected to form stars more efficiently. Therefore, the SFR-M_mol relation, commonly called the molecular Kennicutt-Schmidt (KS) law, has been extensively studied. It has also the form of a power law: ΣSFR = A (Σ M_mol)^n. Recent studies of the KS law report an index n of 1.03± 0.08 (e.g. ).
The surface densities in the KS law are integrated quantities measured on the whole galaxy. With the increasing availability of high resolution multiwavelength data for nearby galaxies, recent studies have been focusing on the investigation of the KS law at sub-galactic scales (). A conclusion of these studies is that the resolved KS (rKS) law holds down to sub-kiloparsec spatial scales with a power law index around 1-1.1, depending on the resolution. However, the scatter of the relation is expected to increase as the spatial scale decreases due to the statistical undersampling of the stellar IMF as well as time evolution of individual star-forming regions (e.g. ).
The molecular gas-to-SFR ratio, also called the molecular depletion time (τ_dep = Σ M_mol/ΣSFR), is the quantity that traces the time it would take for the molecular gas reservoir to get consumed assuming a constant SFR. If stars are formed in giant molecular clouds (GMCs) for many dynamical times, or in other words if the star-forming process is in quasi-equilibrium at the scale of a single GMC, then the molecular gas and young stars are expected to correlate on small scales. On the contrary, if the star formation is a rapid cycle and GMCs are quickly destroyed by massive stars, then a decorrelation is expected at small scales between gas and young stars. In nearby galaxies, the latter is the phenomenon which has been clearly observed, i.e. the star-forming process is a rapid cycle at small scales (e.g. ).
Sub-kpc studies are challenging at higher redshifts (z) because of the fine resolution needed. One can take advantage of strong gravitational lensing to probe a target galaxy behind massive galaxies or galaxy clusters at increased spatial resolutions and magnified luminosities (e.g. ). Such background galaxies are often strongly stretched and sometimes show multiple images, so one needs to model the foreground mass distribution in order to reconstruct the shape of the target at a given redshift. This allows us to probe sub-kpc sizes, and in the most strongly lensed regions even scales < 100pc. Using this methodology, it is possible to resolve in galaxies at z>1 small-scale structures like star-forming clumps (e.g. ), giant molecular clouds (GMCs, e.g. ), or to make other measurements at sub-kpc scales, such as metallicity gradients (e.g. ), kinematics (e.g. ), or radial profiles (e.g. ).
In this paper, we investigate the rKS law in two strongly lensed galaxies at z∼ 1: the Cosmic Snake galaxy behind the galaxy cluster MACS J1206.2-0847, and A521-sys1, which we refer to as A521, behind the galaxy cluster Abell 0521. These two galaxies are typical main sequence (MS) star-forming galaxies at their redshifts, for which multi-wavelength observations are available from, in particular, the Hubble Space Telescope (HST) in several filters, and the Atacama Large Millimeter/submillimeter Array (ALMA).
The paper is structured as follows: in Sect. <ref> we present the HST and ALMA observations of the Cosmic Snake and A521 and their data reductions, as well as their gravitational lens modelling. In Sect. <ref> we present the measurements of ΣSFR and Σ M_mol in both galaxies. In Sect. <ref> we analyse and discuss the integrated and resolved KS laws in the Cosmic Snake and A521. Finally, we give our conclusions in Sect. <ref>.
Throughout this paper, we adopt the Λ-CDM cosmology with H_0 = 70km.s^-1.Mpc^-1, Ω_M = 0.3, and Ω_Λ = 0.7. We adopt the <cit.> initial mass function (IMF).
§ OBSERVATIONS AND DATA REDUCTION
§.§ Cosmic Snake and A521 galaxies
The Cosmic Snake and A521 are two strongly lensed galaxies located behind the galaxy clusters MACS J1206.2-0847 and Abell 0521, respectively. They have several multiple images that are magnified by factors of a few to hundreds. For both of these galaxies we can see an arc including several images of the source galaxy with significant stretching and amplification, as well as an isolated counter-image with almost no stretching and amplification of a few (see Figs. <ref>, <ref>, and <ref>). These galaxies are representative of MS star-forming galaxies at z∼ 1, with the Cosmic Snake having a stellar mass M_⋆=(4.0± 0.5)× 10^10 M_⊙ and SFR=30± 10 M_⊙.yr^-1, and A521 having M_⋆=(7.4± 1.2)× 10^10 M_⊙ and SFR=26± 5 M_⊙.yr^-1. More detailed descriptions of these galaxies can be found in <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>.
§.§ HST observations
We used the image of MACS J1206.2-0847 observed in F390W with WFC3/UVIS in the context of the Cluster Lensing And Supernova survey with Hubble (CLASH), as this filter corresponds to rest-frame ultraviolet (UV) wavelengths.[The data from CLASH are available at <https://archive.stsci.edu/prepds/clash/>.] The map we used has a point spread function (PSF) resolution of ∼ 0.1^'' and a pixel scale of 0.03^'' <cit.>, and the exposure time was ∼4959s. A full description of the CLASH dataset can be found in <cit.>.
We took the A521 images observed in F390W with WFC3/UVIS from the HST archive (ID: 15435, PI: Chisholm). The exposure time was 2470s. The software Multidrizzle <cit.> was used to align individual calibrated exposures and combine them into a single image. The final image has a PSF resolution of 0.097^'' and a pixel scale of 0.06^'' <cit.>.
§.§ ALMA observations
The CO(4–3) emission of the Cosmic Snake was detected with ALMA in band 6 at 226.44GHz, corresponding to a redshift of z=1.03620. The observations were acquired in Cycle 3 (project 2013.1.01330.S), in the extended C38-5 configuration with a maximum baseline of 1.6km and 38 antennas of the 12m array. The total on-source integration time was 52.3min <cit.>. The isolated counter-image of the Cosmic Snake, as well as A521, were observed in band 6 in Cycle 4 (project 2016.1.00643.S), in the C40-6 configuration with a maximum baseline of 3.1km and 41 antennas of the 12m array. For the isolated counter-image of the Cosmic Snake, the total on-source time was 51.8min. For A521, it was 89.0min. The CO(4–3) line in A521 was detected at 225.66GHz, which corresponds to a redshift of z=1.04356 <cit.>. The spectral resolution was set to 7.8125MHz for all three observations.
The data reduction was performed using the standard automated reduction procedure from the pipeline of the Common Astronomy Software Application (CASA) package <cit.>. Briggs weighting was used to image the CO(4–3) emission with a robust factor of 0.5. Using the clean routine in CASA interactively on all channels until convergence, the final synthesized beam size obtained for the Cosmic Snake galaxy was 0.22×0.18 with a position angle of 85^∘ for the arc, and 0.21×0.18 with an angle of 49^∘ for the isolated counter-image. For A521 the final synthesized beam size was 0.19×0.16 at -74^∘. The adopted pixel scale for the CO(4–3) data cube is 0.04 for the Cosmic Snake arc and 0.03 for the Cosmic Snake isolated counter-image and A521. The achieved root mean squares (RMS) are 0.29mJy.beam^-1, 0.42mJy.beam^-1, and 0.20mJy.beam^-1, per 7.8125MHz channel, for the Cosmic Snake arc, the Cosmic Snake isolated counter-image, and A521, respectively. The CO(4–3) moment-zero maps were obtained using the immoments routine from CASA by integrating the flux over the total velocity range where CO(4–3) emission was detected.
§ METHODOLOGY
§.§ Gravitational lens model
The gravitational lens models used for the Cosmic Snake and A521 galaxies are constrained by multiple images found in HST observations. Lenstool <cit.> was used to compute and optimise the models. The RMS accuracies of the lens models for the positions in the image plane of the Cosmic Snake and A521 galaxies are 0.15^'' and 0.08^'', respectively. More details on the gravitational lens models used for the Cosmic Snake and A521 can be found in <cit.> for the Cosmic Snake, and in <cit.> and <cit.> for A521.
§.§ Convolution
Since we compare quantities derived from ALMA and HST fluxes in small regions of the galaxies, we ensured that our HST and ALMA maps of a given galaxy are comparable by matching their resolutions. First we adjusted the pixel scale of the HST and ALMA images, then we convolved the HST images with the synthesised beam of the ALMA observations, and the ALMA images with the PSF of HST.
§.§ Determination of physical quantities
§.§.§ Molecular gas mass
We used the CO(4–3) line detected with ALMA as the tracer of M_mol. First we converted the velocity-integrated flux of the CO(4–3) line (S_COΔ V) into luminosity (L'_CO(4–3)) using this equation from <cit.>:
L'_CO(4–3) = 3.25 × 10^7 S_CO(4–3)Δ Vν_obs^-2D_L^2(1+z)^-3 (K.km.s^-1.pc^2),
with S_COΔ V in Jy.km.s^-1, and where ν_obs is the observed frequency in GHz, and D_L the luminosity distance of the source in Mpc. The luminosity is then converted into M_mol <cit.>:
M_mol = (α_CO/M_⊙(K.km.s^-1.pc^2)^-1)(L'_CO(4–3)/0.33/K.km.s^-1.pc^2) M_⊙,
where we used the CO luminosity correction factor r_4,1=L'_CO(4–3)/L'_CO(1–0)=0.33, which was extrapolated from r_4,2 and r_2,1 measured in the Cosmic Snake <cit.> and z∼1.5 BzK galaxies <cit.>, respectively. We assumed the Milky Way CO-to-H_2 conversion factor α_CO=4.36M_⊙(K.km.s^-1.pc^2)^-1, since both in the Cosmic Snake and A521 α_CO was found to be close to the Milky Way value from the virialised mass of detected GMCs <cit.>.
In both galaxies, the CO(2–1) line was also detected with the Plateau de Bure Interferometer (PdBI) for the Cosmic Snake <cit.>, and with the Institut de radioastronomie millimétrique (IRAM) 30m single dish antenna for A521 <cit.>. In both cases, the total molecular gas content traced by the CO(2–1) emission was identical to that traced by CO(4–3). We therefore conclude that using the CO(4–3) line to trace the molecular gas mass is reliable.
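For reference, the conversion from the velocity-integrated CO(4–3) flux to M_mol described by the two equations above can be written compactly as in the sketch below; the example numbers in the final call are purely illustrative and are not measurements from this work.

```python
def molecular_gas_mass(SCO_dV_Jy_kms, nu_obs_GHz, DL_Mpc, z, r41=0.33, alpha_CO=4.36):
    # CO(4-3) line luminosity in K km/s pc^2, then M_mol in Msun with r41 and alpha_CO
    Lprime_43 = 3.25e7 * SCO_dV_Jy_kms * nu_obs_GHz**-2 * DL_Mpc**2 * (1.0 + z)**-3
    return alpha_CO * Lprime_43 / r41

# purely illustrative call: S_CO dV = 1 Jy km/s at z ~ 1.04 (D_L ~ 6.9 Gpc for this cosmology)
print(molecular_gas_mass(1.0, 226.44, 6900.0, 1.0362))
```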
§.§.§ Star formation rate
We used the HST rest-frame UV observations with the F390W filter to compute SFR using Eq. (1) of <cit.>:
SFR = 1.4× 10^-28 L_ν (ergs.s^-1.Hz^-1),
where L_ν is the UV luminosity. Furthermore, we applied an extinction correction to the SFR as in <cit.>, as the UV continuum may be significantly affected by extinction:
f_i(λ) = f_o(λ) 10^0.4 E(B-V) k^e(λ),
with the obscuration curve for the stellar continuum k^e(λ) = 1.17(-2.156+1.509/λ - 0.198/λ^2 + 0.011/λ^3)+1.78 given by <cit.>, where λ is the rest-frame wavelength in μ m. The colour excess E(B-V) was computed in <cit.> both in radial bins and in the isolated counter-images of the galaxies by performing spectral energy distribution (SED) fitting on multiple HST bands. The values of E(B-V) obtained from SED fits are in agreement with the value estimated from the Balmer decrement by <cit.>.
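A compact sketch of this extinction-corrected SFR estimate is given below; the example luminosity and E(B-V) are hypothetical, and the rest-frame wavelength of ∼0.19 μm simply corresponds to the F390W filter at z∼1.

```python
def calzetti_k(lam_micron):
    # stellar obscuration curve k^e(lambda), as written above; lambda in micron
    lam = lam_micron
    return 1.17 * (-2.156 + 1.509 / lam - 0.198 / lam**2 + 0.011 / lam**3) + 1.78

def sfr_uv(L_nu_obs, lam_rest_micron, EBV):
    # de-redden the observed UV luminosity, then apply SFR = 1.4e-28 L_nu (L_nu in erg/s/Hz)
    L_nu_corr = L_nu_obs * 10.0**(0.4 * EBV * calzetti_k(lam_rest_micron))
    return 1.4e-28 * L_nu_corr          # Msun/yr

# hypothetical numbers: L_nu = 1e28 erg/s/Hz, rest-frame 0.19 micron, E(B-V) = 0.25
print(sfr_uv(1.0e28, 0.19, 0.25))
```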
§ ANALYSIS AND DISCUSSION
§.§ Integrated Kennicutt-Schmidt law
We measured the integrated Σ M_mol and ΣSFR on the isolated counter-image to the north-east of the arc for the Cosmic Snake, and on the counter-image to the east for A521. These counter-images show the entire galaxy for both galaxies, unlike the arcs where only a fraction of the galaxy is imaged. To compute Σ M_mol and ΣSFR we used the following method. We integrated both the CO(4–3) emission and the UV flux inside the half-light radius measured in the F160W band <cit.>, then we converted them into the corresponding physical quantities M_mol (using Eqs. (<ref>) and (<ref>)) and SFR (using Eq. (<ref>)), by correcting the SFR for extinction using E(B-V) computed in the same counter-images. To obtain Σ M_mol and ΣSFR we then divided by the respective half-light surfaces of the galaxies in the image plane. As both the fluxes and the surfaces were measured in the image plane, there is no need to correct for gravitational lensing if we assume a uniform amplification over the integration area. This is a fair assumption because the magnification varies only by ∼ 0.3 and ∼ 0.5 over the counter-images of the Cosmic Snake and A521, respectively.
The uncertainty on Σ M_mol (Δ(Σ M_mol)) was computed following
Δ(Σ M_mol) = Σ M_mol(σ_RMS)/√(N_pix,tot/N_pix,beam)
where Σ M_mol(σ_RMS) is the root mean square noise (σ_RMS) around the galaxy converted into Σ M_mol units, N_pix,tot is the total number of pixels inside the integration area, and N_pix,beam is the number of pixels in the beam. The uncertainty on ΣSFR (Δ(ΣSFR)) was computed following
Δ(ΣSFR) = √(( ΣSFR(σ_RMS)/√(N_pix,tot/N_pix,beam))^2 + ( ΣSFR(σ_phot) )^2 )
where ΣSFR(σ_RMS) is the root mean square noise (σ_RMS) around the galaxy converted into ΣSFR, and ΣSFR(σ_phot) is the photometric error converted into ΣSFR. We add the magnification uncertainties in quadrature, although the latter are negligible in comparison to other sources of uncertainty.
We find for the Cosmic Snake ΣSFR = 1.5±0.1M_⊙.yr^-1.kpc^-2 and Σ M_mol = 570±60M_⊙.pc^-2. For A521 we have ΣSFR = 1.8±0.1M_⊙.yr^-1.kpc^-2 and Σ M_mol = 430±50M_⊙.pc^-2. We show in Fig. <ref> the Cosmic Snake and A521 in the (integrated) KS diagram (ΣSFR-Σ M_mol), along with a compilation of 25 galaxies from <cit.> (z=1-2.5), 73 galaxies from <cit.> (z=1-2.4), and 4 galaxies from <cit.> (z∼ 1.2). These galaxies are all MS star-forming galaxies (SFGs). We also plot the slope from <cit.> for local spiral galaxies, as well as the slope obtained for stacks of MS SFGs by <cit.> at z=0.4-3.6. The Cosmic Snake and A521 are clearly within the distribution of z≳ 1 galaxies.
Furthermore, the compilation of z≳ 1 galaxies globally satisfies the KS relation with a slope of 1.13± 0.09 <cit.>, thus higher than for z=0 galaxies. Such a steeper slope implies, for a given Σ M_mol, a higher ΣSFR in distant galaxies than in nearby ones. This might indicate that z≳ 1 galaxies have higher star formation efficiencies. This is indeed the case, since the study of the integrated star formation efficiencies (SFE = SFR/M_mol) of MS galaxies shows a mild increase of SFE with redshift <cit.>.
§.§ Resolved Kennicutt-Schmidt law
We studied the rKS law in different bin sizes in the Cosmic Snake and A521. To do so we created 6 grids paving the reconstructed source plane images of each galaxy, with boxes of 200pc, 400pc, 800pc, 1600pc, 2800pc, and 3200pc for the Cosmic Snake, and 200pc, 400pc, 800pc, 1600pc, 3200pc, and 6400pc for A521. We consider an additional larger bin size in A521 as the galaxy is more extended than the Cosmic Snake, with star formation happening up to a galactocentric radius of 8kpc and molecular gas detected up to 6kpc, compared to, respectively, 7kpc and 1.7kpc in the Cosmic Snake <cit.>. We then lensed the grids in the corresponding image plane. Due to the differential lensing, the area of some of these boxes is smaller than the matched PSF (HST PSF convolved with ALMA beam) in the image plane, so we discarded those boxes. This is the case for about half of the boxes of 200pc, and 20% of the boxes of 400pc. We then measured Σ M_mol and ΣSFR inside each of the remaining boxes for both galaxies. In A521, one cluster member is present in front of a small part of the arc (corresponding to the upper green cross in Fig. <ref>). It has no significant diffuse emission so we simply masked it for our analysis.
To estimate the flux inside a given box from the ALMA maps, we applied the technique developed for the Cosmic Snake galaxy in <cit.> in the context of the search of molecular clouds. The method takes into account the three dimensions of the CO(4–3) datacube. To evaluate the detection threshold, the fidelity was computed as
fidelity(S/N) = 1- N_neg(S/N)/N_pos(S/N),
where N_pos and N_neg are the number of positive and negative emission detections with a given signal-to-noise (S/N) in the primary beam, respectively <cit.>. The fidelity of 100% was achieved at S/N=4.4 in individual channel maps in both the Cosmic Snake and A521, and when considering co-spatial emission in two adjacent channels, it was reached at S/N=4.0 in the Cosmic Snake and at S/N=3.6 in A521. Therefore, in the ALMA datacube of the Cosmic Snake, we extracted for a given box the emission from each individual channel where the flux inside the box was above a 4.4 σ RMS threshold, or above a 4.0 σ threshold if the flux inside the same box in an adjacent channel was also above 4.0 σ. We did the same for the ALMA maps of A521 with thresholds of 4.4 σ and 3.6 σ, respectively. Boxes below the ALMA RMS detection threshold are excluded; we do not consider upper limits. The HST flux is always detected where we detect CO. For each box we applied the extinction correction corresponding to the radial bin (computed in <cit.>) where the majority of the pixels of the box lies. The uncertainties on M_mol and SFR were computed inside each box following Eqs. <ref> and <ref>, respectively.
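For completeness, the fidelity criterion above amounts to counting positive and negative peaks above each S/N threshold, as in the short sketch below; the listed S/N values are hypothetical.

```python
import numpy as np

def fidelity(snr_pos, snr_neg, thresholds):
    # fidelity(S/N) = 1 - N_neg(S/N) / N_pos(S/N), counting peaks above each threshold
    snr_pos, snr_neg = np.asarray(snr_pos), np.asarray(snr_neg)
    out = []
    for t in thresholds:
        n_pos = np.sum(snr_pos >= t)
        out.append(1.0 - np.sum(snr_neg >= t) / n_pos if n_pos > 0 else np.nan)
    return np.array(out)

# hypothetical S/N values of positive and negative detections
print(fidelity([3.1, 3.6, 4.1, 4.5, 5.2, 6.0], [3.2, 3.7], thresholds=[3.0, 3.6, 4.4]))
```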
The results for the rKS relation in the Cosmic Snake and A521 are displayed in Figs. <ref> and <ref>, respectively, with each panel corresponding to a different bin size. We display with orange squares the Σ M_mol values corresponding to the means in 6 x-axis bins with equal number of datapoints. By performing a linear regression on all datapoints using the Levenberg-Marquardt algorithm[The algorithm takes into account the uncertainties on the x and y values of the datapoints] and least squares statistic in the Cosmic Snake for scales ≤ 1600pc, we obtain slopes of n^CS_200pc = 1.00± 0.08, n^CS_400pc = 0.9± 0.1, n^CS_800pc = 1.0± 0.3, and n^CS_1.6kpc = 1.1± 0.4. Uncertainties in the slope measurements increase with spatial scale due to the decrease of the number of datapoints. For the Cosmic Snake, the overall slope of the distribution for the bin sizes ≤1600pc is similar to the slope reported for local galaxies. For larger scales (> 1600pc) the number of boxes is too low for a reliable fit. For A521, no overall slope can be inferred at any bin size. The horizontal alignment of the binned means in Fig. <ref> can be due to a lack of correlation in the data, as the binned means of a random distribution of points has the same horizontal alignment. We investigate below the differences between the two galaxies, and in particular, in the context of nearby samples from literature.
In order to determine what is driving the difference in the distribution of datapoints between the two galaxies, we investigated the galactocentric effect, and used a colour coding depending on the galactocentric distance of each box. For the smaller scales of 200pc and 400pc we clearly see a segregation with the galactocentric distance in the Cosmic Snake galaxy. The boxes closer to the centre have much higher ΣSFR and Σ M_mol than the ones at large galactocentric radii. The Cosmic Snake has steep radial profiles of ΣSFR and Σ M_mol <cit.>, so seeing a correlation between the galactocentric distance and the positions of the datapoints in the rKS diagram is not surprising. In A521, no segregation with the galactocentric distance is seen. This is in line with the shallow radial profiles of ΣSFR and Σ M_mol in A521, hence no significant difference in the rKS diagram between regions closer to the centre and regions in the outskirts is seen.
<cit.> measured the rKS in 18 star-forming galaxies from the Physics at High Angular resolution in Nearby GalaxieS (PHANGS[<https://phangs.org/>]) survey, at scales of 100pc, 500pc, and 1kpc. They reported slopes[The slopes from <cit.> have been computed by binning the x-axis and averaging the y-axis values inside each ΣSFR bin, whereas our slopes were computed by taking into account every measurement.] of n_100pc = 1.06± 0.01, n_500pc = 1.06± 0.02, and n_1kpc = 1.03± 0.02, respectively, concluding that no evidence of systematic dependence on spatial scale is shown by the slopes. The slopes of local galaxies match those of the Cosmic Snake within error-bars, although our measurements have much bigger uncertainties due to sparser sampling.
Moreover, in both galaxies, and specifically in A521, we lack dynamical range in ΣSFR and Σ M_mol, especially at small values, to consistently constrain the rKS slope in z∼ 1 galaxies. ΣSFR spans ∼ 3.5 orders of magnitude in the Cosmic Snake and ∼ 2.5 in A521, compared to ∼ 5 in the sample of 18 galaxies from <cit.>, and Σ M_mol spans ∼ 2 orders of magnitude both in the Cosmic Snake and in A521, compared to ∼ 3 in <cit.>. Higher sensitivity observations could allow us to refine the estimate of the slope in the Cosmic Snake, or enable an estimate of the slope in A521. It is however important to note that the lack of dynamical range in A521 is not only due to a poor S/N, as the Cosmic Snake has a S/N comparable to A521 but a much better dynamical range.
We plot the combined rKS of the Cosmic Snake and A521 in Fig. <ref>, in order to increase the dynamical range of ΣSFR and Σ M_mol. The slopes of the stacks (n^Stack) are: n^Stack_200pc = 0.88± 0.04, n^Stack_400pc = 0.76± 0.05, n^Stack_800pc = 0.79± 0.07, and n^Stack_1.6kpc = 0.8± 0.1. These slopes are shallower than the slopes of the Cosmic Snake galaxy alone, and also than the slopes obtained by <cit.>. The reason is that A521 has a high density of points below the rKS line from <cit.>, as illustrated by the contours in Fig. <ref>. One possible reason why these points have such low SFR may be that the extinction is underestimated, specifically where the molecular gas density is high. It may also be due to the SFR tracer (rest-frame UV) we use, which traces star-forming regions with ages ∼100Myr. For a long continuous star formation history (SFH), the SFR estimated by Eq. <ref> would be accurate. However, in the case of a more bursty star formation with a constant SFH over a shorter time-frame of ∼10Myr, Eq. <ref> will underestimate the real SFR.
For each set of datapoints at a given bin size, we compute the scatter in dex (σ) as the standard deviation of the datapoints around the rKS power law fits from <cit.> at the closest reported spatial scale (100pc, 500pc, or 1kpc). We use this method instead of computing the scatter around the best-fitted power law like in <cit.> due to the uncertainty of the fit for the Cosmic Snake, and the meaningless fit if performed for A521. The values are reported in Table <ref>. Although the number of datapoints per grid binning size and the global shape of their distribution is notably different between the Cosmic Snake and A521, their respective scatters are similar at bin sizes up to 800pc. The scatter of both galaxies is also similar to the stack of the two, at those scales. At 1600pc, the scatter of the Cosmic Snake decreases significantly, whereas that of A521 stays constant up to 3200pc, then it decreases as well. The scatter decrease with increasing spatial scale is consistent with the results from <cit.>, <cit.>, and <cit.>. As a comparison, <cit.> reported scatters for the rKS law of σ_100pc = 0.41, σ_500pc = 0.33, and σ_1kpc = 0.27. They argued that the decrease of scatter at increasing spatial scales is due to the averaging out of small scale variations.
§.§ Scale dependence of the ΣSFR-Σ M_mol spatial correlation
We investigate the scale dependence of
the spatial correlation between ΣSFR and Σ M_mol in the Cosmic Snake and A521. As in <cit.>, we do this by considering τ_dep = Σ M_mol/ΣSFR. τ_dep is computed for apertures centred on CO and rest-frame UV peaks. The peaks were identified in the arcs of both galaxies, using the CO(4–3) emission from ALMA by <cit.> for the Cosmic Snake and by <cit.> for A521, and the rest-frame UV emission from HST by <cit.> for the Cosmic Snake and <cit.> for A521. The CO peaks trace the GMCs, and the rest-frame UV peaks trace the star-forming regions. We then project the locations of the peaks in the source plane, and we centre apertures of different sizes on those positions. We use circular apertures with diameters of 200, 400, 800, 1200, and 1400pc for the Cosmic Snake, and 400, 800, 1600, 3200, and 6400pc for A521. These apertures are then lensed into the image plane, and we measure fluxes within each of them. As in Section <ref>, we apply a 4.4 σ RMS detection threshold to each individual channel of the ALMA datacubes for both the Cosmic Snake and A521, and a 4.0 σ detection threshold in the case of co-spatial emission detections in two adjacent channels for the Cosmic Snake and 3.6 σ for A521. Again, as the HST rest-frame UV emission is always detected where we also detect CO, we do not apply any detection threshold to the HST maps. We only consider apertures that are larger than the matched PSF in the image plane. We compute an average τ_dep for each set of apertures of a given size and centred on a given type of peak (CO or UV). The uncertainty of a given average τ_dep measurement (Δτ_dep) is computed as
Δτ_dep = std(τ_dep)/√(N_peaks)
where std(τ_dep) is the standard deviation of all the τ_dep used to compute the average, and N_peaks is the number of peaks.
The molecular gas depletion times for apertures of different sizes are given in Fig. <ref>, showing separately the results for the apertures centred on CO peaks (blue points) and rest-frame UV peaks (red points). τ_dep is strongly varying with the spatial scale (aperture size) and the type of emission targeted (CO or UV). From apertures larger than ∼1kpc in the Cosmic Snake and ∼6kpc in A521, the depletion times around CO peaks and the ones around rest-frame UV peaks are converging towards a common value.
The overall behaviour of the molecular gas depletion time curves, commonly known as "tuning fork diagram", resembles that reported in the literature for local galaxies (e.g. ). However, the uncertainties are much larger for the z∼ 1 galaxies because of the lack of statistics, as the number of detected clumps is about ten times lower than in a typical local galaxy. The τ_dep convergence seems to happen at slightly larger scales in the Cosmic Snake (≳1kpc), and much larger scales in A521 (∼6kpc), than in local galaxies (500pc-1kpc). Some plausible explanations for these differences are:
* The difference might be due to the difference of tracer, as local studies used Hα as the tracer of star-forming regions, but we used rest-frame UV emission which traces, on average, older star cluster complexes. As a result, the UV clumps that we detect are on average older (∼100Myr) than the Hα clumps (10Myr) detected in nearby galaxies. This may imply that the dynamical drift is more significant because UV-bright star-forming regions have moved further away from their parent clouds for a given drift velocity.
* The drift of young stars from their parent molecular clouds might be faster in z∼ 1 galaxies than in local galaxies. This is expected from the larger gas fraction of high redshift galaxies, and from their higher compactness; cloud-cloud collisions are enhanced and the gas is more dissipative, while the newly-formed stars are collisionless, and decouple faster from the gas. However, as argued in <cit.> and <cit.>, the dynamical drift alone, at least in nearby galaxies, is not significant enough to be the cause of such large separations between GMCs and star-forming regions.
* Unless the stellar feedback is too weak, after 100Myr the GMC parents of the UV clumps should already be destroyed if their lifetimes are comparable to those of local GMCs (10-30Myr; ), explaining the lack of correspondence between the CO and UV peaks. Resolved Hα observations are needed to check how significantly the difference of tracers impacts the observed results.
* The star-forming regions detected in our high-redshift galaxies might not be born in the GMCs that we observe, but in other undetected clouds. In other words, there is no correspondence between the GMCs and the star-forming regions that we detect. In the Cosmic Snake and A521, small apertures only contain few peaks, and the majority of them is of the kind the aperture is centred on (CO or UV). Apertures of increasing sizes will include more and more peaks of both kinds, so the ratio of the CO peaks and UV peaks will converge towards 1. Therefore, the scale dependence of τ_dep would actually trace the number of CO and UV peaks inside each aperture.
An explanation of the increasing scatter at smaller spatial scales seen in the rKS plots (Figs. <ref>, <ref>, and <ref>) and discussed in Sect. <ref> may be found in the divergence at small scales of the molecular gas depletion time curves. At large scales (>1kpc in the Cosmic Snake and >6kpc in A521), any aperture chosen results in proportional fluxes of molecular gas and SFR tracers, even when focusing specifically on either star-forming regions or GMCs. This means that at these large scales, a randomly selected aperture will likely have a ΣSFR and a Σ M_mol which satisfy the rKS relation; consequently, the scatter of the relation for a sample of randomly selected apertures larger than 1kpc (Cosmic Snake) or 6kpc (A521) will be low, which is what we observe (Table <ref>). However, as apertures get smaller, there is much larger scatter because individual star-forming regions are at different stages of their time evolution and thus have different CO-to-UV ratios. Focusing, for example, on a GMC results in a large τ_dep because the flux from the tracer of SFR is missed, and τ_dep is dominated by the numerator Σ M_mol (and vice-versa). This is the reason for the divergence at small spatial scales seen in Fig. <ref>. However, when using a random gridding with a small bin size like in the ΣSFR-Σ M_mol plots, the boxes sometimes happen to lie between a CO peak and a rest-UV peak, resulting in a datapoint which satisfies the rKS relation, but sometimes they also fall right on a given peak, which yields a datapoint with either a high ΣSFR and a low Σ M_mol, or the opposite. This is the cause of the large scatter seen at small scales in the rKS diagrams. As a result, the majority of the datapoints do not satisfy the rKS law at small scales, but the entire cloud of datapoints is centred on it, and even the slope is close to the value obtained for scales >1kpc (in the case of the Cosmic Snake). If the rKS law were valid at small scales, any randomly selected aperture would fall on the slope of the relation, within the scatter observed for the largest scales.
§ CONCLUSIONS
We analysed the KS law in the Cosmic Snake and A521, two strongly lensed galaxies at z∼ 1, at galactic integrated scales down to sub-kpc scales. We used the rest-frame UV emission from HST to trace SFR and the CO(4–3) emission line detected with ALMA to trace M_mol. In addition to several multiple images with magnifications of μ>20 which are significantly stretched, and where only a fraction of the galaxy is visible, both galaxies show an isolated counter-image with overall uniform magnifications of 4.3 and 3 for the Cosmic Snake and A521, respectively. In those counter-images, the entirety of the galaxies is visible, thus we used them to compute integrated values of SFR and M_mol. We found ΣSFR = 1.5±0.1M_⊙.yr^-1.kpc^-2 and Σ M_mol = 570±60M_⊙.pc^-2 in the Cosmic Snake, and ΣSFR = 1.8±0.1M_⊙.yr^-1.kpc^-2 and Σ M_mol = 430±50M_⊙.pc^-2 in A521. The two galaxies satisfy the integrated KS relation derived at z=1-2.5 <cit.>.
To study the rKS law by taking advantage of the strong gravitational lensing in the Cosmic Snake and A521, we defined 6 different grids in the source plane of each galaxies. We then lensed those grids in the image plane, and computed Σ M_mol and ΣSFR inside each box. The grids that we used had sizes of 200pc, 400pc, 800pc, 1600pc, 2800pc, and 3200pc for the Cosmic Snake, and 200pc, 400pc, 800pc, 1600pc, 3200pc, and 6400pc for A521.
We derived the following results from the analysis of the rKS law in the Cosmic Snake and A521:
* We were able to perform a linear regression on the measurements in the Cosmic Snake for scales ≤ 1600pc, obtaining slopes of n^CS_200pc = 1.00± 0.08, n^CS_400pc = 0.9± 0.1, n^CS_800pc = 1.0± 0.3, and n^CS_1.6kpc = 1.1± 0.4. These slopes are similar to those typically found in local galaxies. For A521 no overall slope could be inferred at any scale. We measured slopes for the combined rKS of the Cosmic Snake and A521 of n^Stack_200pc = 0.88± 0.04, n^Stack_400pc = 0.76± 0.05, n^Stack_800pc = 0.79± 0.07, and n^Stack_1.6kpc = 0.8± 0.1.
* To consistently constrain the rKS slopes in the analysed z∼ 1 galaxies, we lack dynamical range in both Σ M_mol and ΣSFR. In the study of 18 PHANGS galaxies from <cit.>, both quantities span at least 1 more order of magnitude than our study.
* We see a clear spatial segregation in the distribution of the datapoints in the rKS diagram of the Cosmic Snake. Points close to the galactic centre tend to have higher Σ M_mol and ΣSFR, whereas measurements in the outskirts show lower values. No such segregation is observed in A521. These observations match with the results from <cit.> showing that the Cosmic Snake has much steeper radial profiles than A521, in Σ M_mol and ΣSFR in particular.
* The scatter of the datapoints in the Cosmic Snake and A521 is very similar at small scales up to 800pc. The scatter of both galaxies decreases at higher scales of 1600pc for the Cosmic Snake, and 6400pc for A521. The decrease of scatter at increasing bin sizes is similar to what is observed in z=0 galaxies and is due to the averaging out of small scale variations.
We measured the average τ_dep inside apertures of different diameters centred on either rest-frame UV or CO(4–3) emission peaks in the Cosmic Snake and A521. In both galaxies, we observe the same overall behaviour as in local galaxies, that is the τ_dep values measured using small apertures are clearly different whether the apertures are centred on rest-frame UV peaks or on CO(4–3) peaks, and they converge towards a common value at spatial scales large enough. In nearby galaxies, the convergence typically happens in apertures of diameters of 500pc-1kpc. In the Cosmic Snake the τ_dep measurements converge at higher but comparable apertures of size ≳1kpc, whereas in A521 it happens in much larger apertures of ∼6kpc.
We conclude that the increasing scatter in the rKS diagrams in small bin sizes is partly explained by the divergence observed between τ_dep measured when focusing on rest-frame UV peaks and CO(4–3) peaks at small scales. By taking values of ΣSFR and Σ M_mol from randomly selected boxes of small sizes, the corresponding datapoint may satisfy the rKS law, but may also be significantly off if the aperture happens to capture only one kind of peak. In boxes larger than the size for which the τ_dep values converge, any datapoint will tend to fall on the rKS, hence the smaller scatter. In the Cosmic Snake and A521, the scales at which τ_dep converges are the scales at which the scatter of those galaxies in the rKS diagram decreases.
This work was supported by the Swiss National Science Foundation.
Based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Data Archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program #15435.
This paper makes use of the following ALMA data: ADS/JAO.ALMA#2013.1.01330.S, and ADS/JAO.ALMA#2016.1.00643.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
M.M. acknowledges the support of the Swedish Research Council, Vetenskapsrådet (internationell postdok grant 2019-00502).
JS acknowledges support by the Natural Sciences and Engineering Research Council of Canada (NSERC) through a Canadian Institute for Theoretical Astrophysics (CITA) National Fellowship.
|
http://arxiv.org/abs/2307.04413v1 | 20230710083640 | Quantum Zeno effect: a qutrit controlled by a qubit | [
"Komal Kumari",
"Garima Rajpoot",
"Sudhir Ranjan Jain"
] | quant-ph | [
"quant-ph"
] |
Quantum Zeno effect: a qutrit controlled by a qubit
Komal Kumari, Garima Rajpoot, Sudhir Ranjan Jain
====================================================
For a three-level system monitored by an ancilla, we show that the quantum Zeno effect can be employed to control quantum jumps for error correction. Further, we show that we can realize a cNOT gate, and effect dense coding and teleportation. We believe that this work paves the way to generalize the control of a qudit.
§ INTRODUCTION
Quantum errors can be corrected only by developing methods to control quantum jumps. Recently, the quantum Zeno effect <cit.> has been employed to delay spontaneous emission, giving us time to detect possible erroneous jumps. Moreover, to observe and hence control quantum jumps, QZE has been shown to realize Dehmelt-like shelving <cit.>. This work was inspired by a very interesting and important experiment on “catching" and “reversing" a quantum jump by Minev et al. <cit.>. To take these thoughts further for realistic applications, we need to show this method of control for multi-level systems. Here we take the next step and consider a three-level system which has the possibility of three distinct frequencies ω_12, ω_23 and ω_13. One of these states is monitored by a detector: a two-level ancillary qubit <cit.>. In contrast to the control of two-level system where there is just one frequency, here there are three frequencies. Thus there are multiple time-scales under consideration. The aim of this article is to study the possibility of controlling spontaneous errors and shelving in the sense of Dehmelt and improvised in <cit.>.
The plan of the paper is as follows. In Section 2.1, we state the problem and present the principle of least action approach relevant to our physical situation. This is based on the mathematical treatment of n- level system, the details of which are reviewed in the Appendix. The solution of the evolution equation of the density matrix in terms of coordinates and conjugate momenta is shown. In Section 2.2, the construction of a cNOT gate using a three-level system is explained. It is interesting to see that the three-level system considered here can be related to dense coding and teleportation, explained in Sections 2.3 and 2.4.
§ QUTRIT DYNAMICS
We have a three-level system, i.e., a qutrit, with levels |1⟩, |2⟩ and |3⟩ and transition frequencies ω_12, ω_23 and ω_31.
For a three-level system, N=3, the density matrix is
ρ=1/3𝕀̂+1/2∑_i=1^8x_ix̂_i,
where 1≤ j<k≤ N, 1≤ l≤ N-1 <cit.>. For a detailed description, see Appendix. The operators are
x̂_1 = û_12 = |1⟩⟨2|+|2⟩⟨1|
x̂_2 = v̂_12 = -ι(|1⟩⟨2|-|2⟩⟨1|)
x̂_3 = ŵ_1 = |1⟩⟨1|-|2⟩⟨2|
x̂_4 = û_13 = |1⟩⟨3|+|3⟩⟨1|
x̂_5 = v̂_13 = -ι(|1⟩⟨3|-|3⟩⟨1|)
x̂_6 = û_23 = |2⟩⟨3|+|3⟩⟨2|
x̂_7 = v̂_23 = -ι(|2⟩⟨3|-|3⟩⟨2|)
x̂_8 = ŵ_2 = √(1/3)(|1⟩⟨1|+|2⟩⟨2|-2|3⟩⟨3|).
The density operator in the matrix form is
ρ̂ =[ 1/3+x_3/2+x_8/√(3) 1/2(x_1-ιx_2) 1/2(x_4-ιx_5); 1/2(x_1+ιx_2) 1/3-x_3/2+x_8/√(3) 1/2(x_6-ιx_7); 1/2(x_4+ιx_5) 1/2(x_6+ιx_7) 1/3-2x_8/√(3) ].
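For readers who wish to experiment with this parametrization, the following short numpy sketch (our own illustrative code, not taken from the original analysis) builds the eight operators listed above, assembles ρ̂ for a given coordinate vector x, and checks that Tr ρ̂ = 1, that ρ̂ is Hermitian, and that Tr[x̂_i x̂_j] = 2δ_ij:

import numpy as np

ket = [np.eye(3)[:, i].reshape(3, 1) for i in range(3)]   # |1>, |2>, |3>
def op(i, j):                                             # |i><j| with labels 1..3
    return ket[i - 1] @ ket[j - 1].conj().T

u12, u13, u23 = op(1, 2) + op(2, 1), op(1, 3) + op(3, 1), op(2, 3) + op(3, 2)
v12 = -1j * (op(1, 2) - op(2, 1))
v13 = -1j * (op(1, 3) - op(3, 1))
v23 = -1j * (op(2, 3) - op(3, 2))
w1 = op(1, 1) - op(2, 2)
w2 = np.sqrt(1 / 3) * (op(1, 1) + op(2, 2) - 2 * op(3, 3))
xs = [u12, v12, w1, u13, v13, u23, v23, w2]               # x_1 ... x_8

def rho(x):                                               # density matrix from the coordinates x_i
    return np.eye(3) / 3 + 0.5 * sum(xi * Xi for xi, Xi in zip(x, xs))

x = np.array([0.1, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.3])    # an arbitrary (illustrative) test vector
r = rho(x)
print(np.isclose(np.trace(r), 1.0), np.allclose(r, r.conj().T))
print(all(np.isclose(np.trace(xs[i] @ xs[j]), 2.0 * (i == j))
          for i in range(8) for j in range(8)))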
§.§ Monitoring a single level
Consider that the qutrit is interacting with an ancilla, a two-level system prepared initially in the state |0⟩ of σ_z, Fig. <ref>. The ancilla monitors the third level of the qutrit with a coupling strength J_3=√(α_3/δ t), where α_3 is a stochastic parameter related to the frequency of the detector. The qutrit+ancilla system evolves for a time δ t and then its σ_y operator is measured. If the outcome of measurement is 0, qutrit is in state |1⟩ or |2⟩. This evolution and measurement is performed n times for a total time of T=nδ t. The ancilla is reset after every measurement. The Hamiltonian of the qutrit+ancilla system is
H =H_s+H_s-d
=ω_12(|1⟩⟨2|+|2⟩⟨1|) + ω_23(|2⟩⟨3|+|3⟩⟨2|)+ ω_13(|1⟩⟨3|+|3⟩⟨1|)+J |3⟩⟨3|⊗σ_y^(3),
where H_s-d=J|3⟩⟨3|⊗σ_y^(3), denoting that the state |3⟩ is entangled with the ancilla and a measurement of the y observable of the ancilla. The Kraus operators for measurement are given by
ℳ_r =⟨r|exp[-ιH_s-dδt]|0⟩
= ⟨r|𝕀-ιH_s-d δt -1/2H_s-d^2 (δt)^2|0⟩
ℳ_0 = 𝕀-α_3/2|3⟩⟨3|δt
ℳ_1 =√(α_3δt)|3⟩⟨3|.
Upon unitary evolution of the system via the operator 𝒰=exp(-ι H_sδ t) and measurements post-selected on the outcome r=0, we obtain
ρ(t+δ t)=ℳ^0 𝒰ρ𝒰^†ℳ^0†/Tr[ℳ^0 𝒰ρ𝒰^†ℳ^0†].
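The repeated evolve-and-measure protocol encoded in this update rule is easy to simulate directly. The sketch below is our own illustrative Python script (the transition frequencies, coupling strengths and time step are arbitrary choices, not the values used to produce the figures); it iterates ρ → ℳ_0 𝒰 ρ 𝒰^† ℳ_0^†/Tr[·] and prints the maximum population reached by |3⟩, which stays small when α_3 is large compared with the transition frequencies, i.e. in the Zeno regime discussed below.

import numpy as np
from scipy.linalg import expm

def simulate(alpha3, w12=1.0, w23=1.0, w13=1.0, dt=0.01, steps=600):
    P3 = np.zeros((3, 3)); P3[2, 2] = 1.0                 # |3><3|
    Hs = np.array([[0, w12, w13],
                   [w12, 0, w23],
                   [w13, w23, 0]], dtype=complex)         # system Hamiltonian in the |1>,|2>,|3> basis
    U = expm(-1j * Hs * dt)
    M0 = np.eye(3) - 0.5 * alpha3 * P3 * dt               # Kraus operator for the outcome r = 0
    rho = np.zeros((3, 3), dtype=complex); rho[0, 0] = 1.0
    pops = []
    for _ in range(steps):
        rho = M0 @ U @ rho @ U.conj().T @ M0.conj().T
        rho /= np.trace(rho).real
        pops.append(rho[2, 2].real)
    return np.array(pops)

for a3 in (0.1, 10.0, 100.0):                             # weak, intermediate, strong monitoring
    print(f"alpha3 = {a3:6.1f}  ->  max population of |3> = {simulate(a3).max():.3f}")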
By extremising the action obtained for the Joint Probability Distribution Function (JPDF) for the system, we obtain eight coupled equations, their canonical conjugates, and a functional ℱ incorporating the back-action of measurement performed by the detector <cit.>
ẋ_1 =ω_23x_5+ω_13x_7+1/3α_3x_1(1-2√(3)x_8)
ẋ_2 =-2ω_12x_3-ω_23x_4+ω_13x_6+α_3/3x_2(1-2√(3)x_8)
ẋ_3 =2ω_12x_2+ω_13x_5-ω_23x_7+α_3/3x_3(1-2√(3)x_8)
ẋ_4 = ω_23x_2-ω_12x_7-α_3/6x_4(1+4√(3)x_8)
ẋ_5 = -ω_23x_1 +ω_12x_6 - ω_13(x_3+2√(3)x_8)-α_3/6x_5(1+4√(3)x_8)
ẋ_6 = -ω_13x_2-ω_12x_5-α_3/6x_6(1+4√(3)x_8)
ẋ_7 = -ω_13x_1+ω_12x_4+ω_23(x_3-2√(3)x_8)-α_3/6x_7(1+4√(3)x_8)
ẋ_8 =√(3)/2[ω_13x_5+ω_23x_7+2/9α_3(1-√(3)x_8(1+2√(3)x_8))]
The functional ℱ is given by ℱ=-α_3/3x_8(1-2√(3)x_8). The dynamical Hamiltonian is given by
ℋ =∑_i=1^8 p_iẋ_̇i̇+ℱ.
The canonically conjugate momenta can be derived by Hamilton's equations
p_i=-∂ℋ/∂ x_i.
Thus we obtain the coupled equations:
ṗ_1 = -α_3/3(1-2√(3)x_8)p_1+ω_23p_5+ω_13p_7
ṗ_2 = -α_3/3(1-2√(3)x_8)p_2-2ω_12p_3-ω_23p_4 +ω_13p_6
ṗ_3 = 2ω_12p_2-α_3/3(1-2√(3)x_8)p_3+ω_13p_5-ω_23p_7
ṗ_4 = ω_23p_2+α_3/6 (1+4√(3)x_8)p_4
ṗ_5 = ω_23 p_1 -ω_13p_3+α_3/6 (1+4√(3)x_8)p_5+ω_12p_6-√(3)/2ω_13p_8
ṗ_6 =-ω_13p_2-ω_12p_5+α_3/6(1+4√(3)x_8)p_6
ṗ_7 = -ω_13p_1+ω_23p_3+ω_12p_4+α_3/6(1+4√(3)x_8)p_7-√(3)/2ω_23p_7
ṗ_8 =2/√(3)α_3(x_1p_1+x_2p_2+x_3p_3+x_4p_4+x_5p_5+x_6p_6+x_7p_7+2x_8p_8)
+2√(3)(ω_13p_5+ω_23p_7)+α_3/3(p_8+1)-4/√(3)α_3x_8.
The dynamics of the position coordinates of the qutrit with time are shown in Fig. <ref>. When the detection frequency is less compared to all the transition frequencies of the system, the dynamics shows continuous oscillations, Fig. <ref> (a). In an intermediate frequency, the system shows oscillations for some time, after which, it gets arrested in a particular state, Fig. <ref> (b). When the detection frequency is higher compared to all the transition frequencies of the system, the Zeno regime sets in, Fig. <ref> (c). Each coordinate freezes at a particular value around a time t=6 and the system does not evolve any further.
The phase space dynamics of the qutrit are plotted in Figs. <ref> and <ref>, for a frequency lower and higher than the transition frequencies, respectively. In Fig. <ref>, for each coordinate, the qutrit shows evolution in the phase-space. However, in the Zeno regime, Fig. <ref>, it is evident that localization in x(p) is accompanied by delocalization of p(x). This shows that the system is shelved to a state. In terms of stability, localization in x or p corresponds to stability along that coordinate. It is clear that both x and p are not stable simultaneously, hence the points are saddle points, as in <cit.>.
§.§ Creating a cNOT gate
The three-level system can be used as a control and the ancilla as a target such that when the system is in |1⟩ or |2⟩, it does nothing to the ancilla (the ancilla stays in its initial state |0⟩_(n)), whereas it flips the ancilla to |1⟩_(n) when the qutrit is in |3⟩. Such a gate can be represented as
cNOT =(|1⟩⟨1|+|2⟩⟨2|)⊗𝕀̂ + |3⟩⟨3|⊗σ_x^(n).
The states on which the cNOT acts are |1,0⟩, |2,0⟩ or |3,0⟩, where the first state is the qutrit state which controls the target ancilla initially in the state |0⟩. When cNOT acts on |3,0⟩, it gives |3,1⟩ and leaves the others unchanged.
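As a quick numerical illustration (our own sketch, with the qutrit ⊗ ancilla ordering assumed as written above), the 6×6 matrix of this cNOT can be assembled with Kronecker products and applied to the basis states:

import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
P12 = np.diag([1.0, 1.0, 0.0])                  # |1><1| + |2><2|
P3 = np.diag([0.0, 0.0, 1.0])                   # |3><3|
cnot = np.kron(P12, I2) + np.kron(P3, sx)       # qutrit (control) (x) ancilla (target)

def basis(i, r):                                # |i, r> with i = 1,2,3 and r = 0,1
    v = np.zeros(6); v[2 * (i - 1) + r] = 1.0
    return v

for i in (1, 2, 3):
    flipped = 1 if i == 3 else 0
    print(f"|{i},0> -> |{i},{flipped}> :", np.allclose(cnot @ basis(i, 0), basis(i, flipped)))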
§.§ Dense coding and teleportation
Some of the applications of entangled pairs are dense coding and teleportation. Dense coding uses one quantum bit together with a shared EPR pair to encode and transmit two classical bits <cit.>. Without using entanglement, only one classical bit of information can be extracted. Teleportation is the opposite of dense coding, as it uses two classical bits to transmit the state of an unknown qubit. The initial setup for both includes two parties, Alice and Bob, who wish to communicate. Each is sent one of the entangled particles of an EPR pair
|ψ_0⟩=1/√(2) (|0⟩_A|0⟩_B+|1⟩_A|1⟩_B).
Each can perform transformations only on their particle unless they send over their particle.
Dense coding: Alice wants to transmit the state of two classical bits encoding one of the numbers {0,1,2,3}, depending on which, she performs one of the transformations {I,X,Y,Z} on her qubit of |ψ_0⟩. The resulting state is shown in table <ref>.
Bob decodes the information in two steps: cNOT to the entangled pair followed by Hadamard H on the first qubit:
Bob finally measures the two qubits to obtain the binary encoding sent by Alice.
Quantum teleportation: Due to the no-cloning theorem, the original state is destroyed and finally created at the target, hence the name teleportation. Alice has a qubit with unknown state |ϕ⟩=a|0⟩+b|1⟩. Both Alice and Bob share a part of the EPR pair just like in dense coding (<ref>). The initial state is then the three-qubit state:
|ϕ⟩⊗|ψ_0⟩ =1/√(2)(a|0⟩⊗(|00⟩+|11⟩)+b|1⟩⊗(|00⟩+|11⟩))
=1/√(2)(a|000⟩+a|011⟩+b|100⟩+b|111⟩).
Alice controls the first two qubits and Bob controls the third. Alice uses the decoding step used by Bob in dense coding to the first two qubits in (<ref>), i.e., cNOT on first two followed by Hadamard on first qubit
(H⊗I⊗I) (cNOT⊗I)(|ϕ⟩⊗|ψ_0⟩)
=(H⊗I⊗I)1/√(2)(a|000⟩+a|011⟩+b|110⟩+b|101⟩)
=1/2[a(|000⟩+|011⟩+|100⟩+|111⟩)+b(|010⟩+|001⟩-|110⟩-|101⟩)]
=1/2(|00⟩(a|0⟩+b|1⟩)+|01⟩(a|1⟩+b|0⟩)+|10⟩(a|0⟩-b|1⟩)+|11⟩(a|1⟩-b|0⟩)).
Upon measuring the first two qubits, Alice obtains one of the four states |00⟩, |01⟩, |10⟩ or |11⟩, depending upon which, Bob's qubit is projected to one of the four states a|0⟩+b|1⟩, a|1⟩+b|0⟩, a|0⟩-b|1⟩ or a|1⟩-b|0⟩. Alice sends her result as two classical bits to Bob. The original state |ϕ⟩ is contained in Bob's qubits. Upon receiving the two bits, Bob reconstructs the state by applying decoding transformation to his qubit:
Bob will finally have the qubit Alice wished to send.
§.§ Applications of entanglement using three-level system
We have considered a three-level system where the third level is being monitored by an ancilla. For communication and teleportation using the qutrit, we need two of the states to act as the lower (ground) manifold and the third, monitored state to act as the higher level. This will enable us to create a cNOT gate for the qutrit. Further, we need analogues of the Pauli operators corresponding to this setup, such that the bit-flip operator acts on the states as
X_13|1⟩=|3⟩, X_23|2⟩=|3⟩, X_13+23|3⟩= |1⟩+|2⟩/√(2).
Hence, the operators may be written as
X_13=[ 0 0 1; 0 0 0; 1 0 0; ] X_23=[ 0 0 0; 0 0 1; 0 1 0; ].
The resulting X operator reads as
X=X_13+X_23/√(2)=1/√(2)[ 0 0 1; 0 0 1; 1 1 0; ].
We have
X|1⟩=1/√(2)[ 0; 0; 1; ]=1/√(2)|3⟩, X|2⟩=1/√(2)[ 0; 0; 1; ]=1/√(2)|3⟩ and X|3⟩=|1⟩+|2⟩/√(2).
Similarly, the Y operator is
Y=1/√(2)[ 0 0 1; 0 0 1; -1 -1 0; ],
with
Y|1⟩=1/√(2)[ 0; 0; -1; ]=-1/√(2)|3⟩, Y|2⟩=1/√(2)[ 0; 0; -1; ]=-1/√(2)|3⟩ and Y|3⟩=|1⟩+|2⟩/√(2).
The phase operator should act as
Z(|1⟩+|2⟩/√(2))=(|1⟩+|2⟩/√(2)) and Z|3⟩=-|3⟩.
That is,
Z=1/√(2)[ 1 0 0; 0 1 0; 0 0 -√(2); ].
The cNOT gate is given by
cNOT=(|1⟩⟨1|+|2⟩⟨2|)⊗ I^(n)+|3⟩⟨3|⊗σ_x^(n),
where superscript (n) represents the ancilla. To find the Hadamard operator, note that
|ψ_0⟩ =1/√(2)((|1⟩+|2⟩)/√(2)+|3⟩)
H 1/√(2)((|1⟩+|2⟩)/√(2)+|3⟩)=|1⟩+|2⟩/√(2)
H 1/√(2)((|1⟩+|2⟩)/√(2)-|3⟩)=|3⟩.
These are effected by the Hadamard gate:
H=1/2√(2)[ 1 1 √(2); 1 1 √(2); √(2) √(2) -2; ].
Now we have a set of operators at our disposal, acting as gates on this three-level system for dense coding and teleportation.
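The stated actions of these operators are easy to verify numerically. The short numpy check below is our own sketch (u denotes the state (|1⟩+|2⟩)/√2); it confirms that X swaps u ↔ |3⟩ and that H maps (u±|3⟩)/√2 to u and |3⟩ respectively, i.e. that X and H act as a bit-flip and a Hadamard on the two-dimensional subspace spanned by {u, |3⟩}:

import numpy as np

s2 = np.sqrt(2.0)
X = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 0]]) / s2
H = np.array([[1, 1, s2], [1, 1, s2], [s2, s2, -2]]) / (2 * s2)
e1, e2, e3 = np.eye(3)
u = (e1 + e2) / s2                                             # (|1> + |2>)/sqrt(2)

print(np.allclose(X @ u, e3), np.allclose(X @ e3, u))          # X: u <-> |3>
print(np.allclose(H @ ((u + e3) / s2), u),                     # H (u+|3>)/sqrt(2) = u
      np.allclose(H @ ((u - e3) / s2), e3))                    # H (u-|3>)/sqrt(2) = |3>
print(np.allclose(H @ (H @ u), u), np.allclose(H @ (H @ e3), e3))  # H^2 = 1 on span{u, |3>}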
Dense coding: Alice encodes the digits {0,1,2,3} in state |ψ_0⟩ and performs transformations on her part of the state. Let the states of ancilla be {|g⟩,|e⟩}, the eigenstates of σ_z. These are entangled with the qutrit to parallel the EPR pair of qubits.
Then, Bob decodes using cNOT followed by Hadamard on the (first) qutrit. Here, the cNOT has control as the three-level system and target as a two level system. Hence, the flip operator will be the usual 2D Pauli σ_x. This is shown in table <ref>.
Teleportation: Alice has an unknown qubit |ϕ⟩=a|g⟩+b|e⟩ (ancilla). She wants to send this to Bob through a classical channel. They each share a part of the state
|ψ_0⟩=1/√(2)[|11⟩+|12⟩+|21⟩+|22⟩/2+|33⟩],
so that the combined state initially is
|ϕ⟩⊗|ψ_0⟩ =1/2√(2)[a(|g11⟩+|g12⟩+|g21⟩+|g22⟩+2|g33⟩)
+b(|e11⟩+|e12⟩+|e21⟩+|e22⟩+2|e33⟩)].
Alice controls the first two states in the tensor product in (<ref>) and Bob controls the third state. For the decoding step, Alice applies cNOT (|g⟩⟨g|⊗ I_3+|e⟩⟨e|⊗ X_3) on the first two states of the product followed by Hadamard on the first
(H_2⊗I⊗I) (cNOT⊗I)(|ϕ⟩⊗|ψ_0⟩)
= (H_2⊗I⊗I)1/2√(2)[a(|g11⟩+|g12⟩+|g21⟩+|g22⟩+2|g33⟩)
+√(2)b(|e31⟩+|e32⟩+|e13⟩+|e23⟩)]
= 1/4[a(|g11⟩+|e11⟩+|g12⟩+|e12⟩+|g21⟩+|e21⟩+|g22⟩+|e22⟩+2|g33⟩+2|e33⟩)
+√(2)b(|g31⟩-|e31⟩+|g32⟩-|e32⟩+|g13⟩-|e13⟩+|g23⟩-|e23⟩)]
= 1/2√(2)[|g1⟩(a(|1⟩+|2⟩)/√(2)+b|3⟩)+|e1⟩(a(|1⟩+|2⟩)/√(2)-b|3⟩)
+|g2⟩(a(|1⟩+|2⟩)/√(2)+b|3⟩)+|e2⟩(a(|1⟩+|2⟩)/√(2)-b|3⟩)
+|g3⟩(√(2)a|3⟩+ √(2)b(|1⟩+|2⟩)/√(2))+|e3⟩(√(2)a|3⟩- √(2)b(|1⟩+|2⟩)/√(2))].
Thus the final encoded state is
|ψ⟩_f =1/2[|g⟩(|1⟩+|2⟩/√(2)){a(|1⟩+|2⟩/√(2))+b|3⟩}+|e⟩(|1⟩+|2⟩/√(2)){a(|1⟩+|2⟩/√(2))-b|3⟩}
+|g⟩|3⟩{a|3⟩+b(|1⟩+|2⟩/√(2))}+|e⟩|3⟩{a|3⟩-b(|1⟩+|2⟩/√(2))}]
Upon measuring the first two states, Alice will obtain one of the four states mentioned in the first column of Tab. <ref>, which she sends as two classical bits to Bob. Upon receiving them, Bob reconstructs the state by applying a decoding transformation (<ref>) to his part of the product state which contains the unknown state |ϕ⟩. Thus Bob will finally have the qubit state Alice wanted to send.
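As a consistency check of the final encoded state above, the whole protocol can be run numerically. The sketch below is our own illustrative code: the ordering qubit ⊗ qutrit_A ⊗ qutrit_B is assumed, the qutrit flip in the cNOT is taken to be the X operator defined earlier (which swaps u=(|1⟩+|2⟩)/√2 and |3⟩), and the script verifies that Bob's conditional states are a u + b|3⟩, a u − b|3⟩, a|3⟩ + b u and a|3⟩ − b u for the four measurement outcomes, each branch carrying amplitude 1/2.

import numpy as np

s2 = np.sqrt(2.0)
e1, e2, e3 = np.eye(3)
g, ex = np.eye(2)                                        # |g>, |e>
u = (e1 + e2) / s2
X3 = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 0]]) / s2    # qutrit bit-flip, swaps u <-> |3>
H2 = np.array([[1, 1], [1, -1]]) / s2                    # qubit Hadamard

a, b = 0.6, 0.8                                          # unknown state |phi> = a|g> + b|e>
phi = a * g + b * ex
psi0 = (np.kron(u, u) + np.kron(e3, e3)) / s2            # shared qutrit pair of Eq. above
state = np.kron(phi, psi0)                               # qubit (x) qutrit_A (x) qutrit_B

cnot = np.kron(np.outer(g, g), np.eye(9)) + np.kron(np.outer(ex, ex), np.kron(X3, np.eye(3)))
state = np.kron(H2, np.eye(9)) @ (cnot @ state)

expected = {('g', 'u'): a * u + b * e3, ('e', 'u'): a * u - b * e3,
            ('g', '3'): a * e3 + b * u, ('e', '3'): a * e3 - b * u}
kets = {'g': g, 'e': ex, 'u': u, '3': e3}
for (q, s), target in expected.items():
    bra = np.kron(kets[q], kets[s])                      # project Alice's qubit and qutrit
    bob = bra @ state.reshape(6, 3)                      # Bob's (unnormalized) conditional state
    print((q, s), np.allclose(bob, target / 2))          # each branch carries amplitude 1/2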
§.§ Monitoring two levels
Consider a qutrit interacting with two ancillae. The ancillae are again two-level systems, one of which monitor the state |2⟩ whereas the other monitors the state |3⟩ as shown in Fig. <ref>. The interaction strength between qutrit and ancilla monitoring |2⟩ (|3⟩) is J_2=√(α_2/δ t) (J_3=√(α_3/δ t)).
The Hamiltonian for this system can be given as
H =ω_12(|1⟩⟨2|+|2⟩⟨1|)+ω_23(|2⟩⟨3|+|3⟩⟨2|)+ω_13(|1⟩⟨3|+|3⟩⟨1|)+ H_s-d,
where
H_s-d = J_2|2⟩⟨2|⊗σ_y^(2)⊗𝕀^(3) + J_3 |3⟩⟨3| ⊗𝕀^(2)⊗σ_y^(3) + (J_2 |2⟩⟨2|+ J_3 |3⟩⟨3|) ⊗σ_y^(2) ⊗σ_y^(3).
The Kraus operators are given by
ℳ_r =⟨r_1 r_2|exp[-ιH_s-d δt]|00⟩
ℳ_00 = 𝕀 -J_2^2|2⟩⟨2| (δt)^2 -J_3^2|3⟩⟨3| (δt)^2
ℳ_01 = -J_3|3⟩⟨3| δt -ιJ_2^2 |2⟩⟨2| (δt)^2
ℳ_10 = -J_2|2⟩⟨2| δt -ιJ_3^2 |3⟩⟨3| (δt)^2
ℳ_11 = ι(J_2 |2⟩⟨2| +J_3 |3⟩⟨3|) δt.
So we have a 2×2 array of Kraus operators. Upon unitary evolution of the qutrit under the system Hamiltonian H_s and measurement post-selected on r=00, we obtain 8 coupled dynamical equations from the density matrix
ρ(t+δ t)=ℳ_00𝒰ρ𝒰^†ℳ_00^†/Tr[ℳ_00𝒰ρ𝒰^†ℳ_00^†].
These equations are
ẋ_1 = -α_2 x_1x_3 +ω_23x_5 +ω_13x_7+1/3(α_2-2α_3)x_1(2√(3)x_8-1)
ẋ_2 = - [2ω_12x_3 + α_2 x_2x_3 + ω_23 x_4-ω_13 x_6 -1/3(α_2-2α_3)x_2(2√(3)x_8-1) ]
ẋ_3 = 1/3 [6ω_12x_2 + 2α_3 x_3 +3 ω_23 x_5 -3ω_23x_7-4√(3)α_3 x_3 x_8-α_2(1+x_3)(-2+3x_3-2√(3)x_8)]
ẋ_4 =1/3[3ω_23x_2-3ω_12x_7 +α_2 x_4 (2-3x_3+2√(3)x_8) -α_3 x_4 (1+4√(3) x_8) ]
ẋ_5 =1/3[-3ω_23x_1+(2α_2-α_3-3α_2 x_3)x_5 +3ω_12x_6+2√(3)(α_2-2α_3)x_5x_8-3ω_13(x_3+2√(3)x_8)]
ẋ_6 = [-ω_13 x_2 - ω_12x_5 - 1/3x_6 (α_2+α_3+3α_2 x_3 - 2√(3)α_2 x_8 +4√(3)α_3 x_8) ]
ẋ_7 = [-ω_13x_1 +ω_12x_4 -1/3 (α_2+α_3 +3 α_2 x_3)x_7 +2/√(3)(α_2-2α_3)x_7x_8+ω_23 (x_3-2√(3)x_8) ]
ẋ_8 = 1/6√(3) [4 α_3 + 9 ω_13 x_5+9 ω_23x_7-4α_3x_8(√(3)+6x_8)+α_3(-2+3x_3+2√(3)(1-3x_3)x_8+12x_8^2)]
The functional incorporating the backaction is ℱ=α_2x_3-2/3(α_2+α_3+√(3)α_2x_8-2√(3)α_3x_8). The corresponding equations of motion for the conjugate momenta, obtained from ṗ_i=-∂ℋ/∂ x_i, are
p_1 =α_2x_3p_1-1/3(α_2-2α_3)(2√(3)x_8-1)p_1+ω_23p_5+ω_13p_7
p_2 =α_2x_3 p_2-1/3(α_2-2α_3)(2√(3)x_8-1)p_2-2ω_12p_3-ω_23p_4+ω_13p_6
p_3 =α_2x_1p_1+2ω_12p_2+α_2x_2p_2-2/3α_3p_3+4√(3)/3α_3x_8p_3+α_2/3(1+6x_3-2√(3)x_8)p_3
+α_2x_4p_4+α_2x_5p_5+ω_13p_5+α_2x_6p_6+α_2x_7p_7-ω_23p_7-α_3/2√(3)p_8+x_8p_8-α_2
p_4 =ω_23p_2-α_2/3(2-3x_3+2√(3)x_8)p_4+α_3/3(1+4√(3)x_8)p_4-ω_12p_7
p_5 =-ω_23p_1-ω_23p_3-1/3(2α_2-α_3-3α_2x_3)p_5-2√(3)/3(α_2-2α_3)x_8p_5+ω_12p_6-√(3)/2ω_13p_8
p_6 =-ω_13p_2-ω_12p_5+1/3(α_2+α_3+3α_2x_3-2√(3)α_2x_8+4√(3)α_3x_8)p_6
p_7 =-ω_13p_1+ω_23p_3+ω_12p_4+1/3(α_2+α_3+3α_2x_3)p_7-2/√(3)(α_2-2α_3)x_8p_7-√(3)/2ω_23p_8
p_8 =-2/√(3)(α_2-2α_3)(x_1p_1+x_2p_2+x_3p_3+x_4p_4+x_5p_5+x_6p_6+x_7p_7-1?)-2/√(3)α_2p_3
+2√(3)(ω_13p_5+ω_23p_7)+2/3α_3(1+2√(3)x_8)p_8-α_3/3(1-3x_3)p_8-4/√(3)α_3x_8p_8.
The dynamics of the position coordinates of the qutrit with time are shown in Fig. <ref>. When the detection frequencies of the two detectors are less compared to all the transition frequencies of the system, the dynamics shows continuous oscillations, Fig. <ref> (a). In an intermediate frequency range, the system shows oscillations for some time, after which, it gets arrested in a particular state, Fig. <ref> (b). When the detection frequency is higher compared to all the transition frequencies of the system, the Zeno regime sets in, Fig. <ref> (c). Each coordinate freezes at a particular value around a time t=6, just as in the previous section where a single state was being monitored.
The phase space dynamics of the qutrit are plotted in Figs. <ref> and <ref>, for frequencies of both the detectors lower and higher than the transition frequencies, respectively. In Fig. <ref>, for each coordinate, the qutrit shows evolution in the phase-space. However, in the Zeno regime, Fig. <ref>, the system follows the uncertainty principle: as soon as the position coordinates are fixed at a particular value, the uncertainty in the momentum coordinates peaks. This also shows that there is a saddle point. The qutrit gets shelved in the position coordinates and is delocalised in the momentum coordinates.
§ CREATING A TOFFOLI GATE
The Kraus operators in (<ref>) indicate that the system may be in state 1 (M_00), state 2 (M_10), state 3 (M_01), or in a combination of 2 and 3, i.e., anywhere but not in state 1 (M_11). This can be interpreted as an operator
T = |1⟩⟨1|⊗|1⟩⟨1|⊗(𝕀⊗𝕀) + |2⟩⟨2|⊗|1⟩⟨1|⊗(X⊗𝕀)
+|1⟩⟨1|⊗|3⟩⟨3|⊗(𝕀⊗X) + |2⟩⟨2| ⊗|3⟩⟨3|⊗(X⊗X).
Consider 𝕀⊗𝕀, 𝕀⊗ X and X⊗𝕀 as giving an outcome of 0, and X⊗ X as producing an outcome of 1. The setup can then be interpreted as a Toffoli gate. For instance, if the control is (1,1) and the target is 0, the state is |1,1,(00)≡ 0⟩. If the control is (2,3) and the target is 1, the state is |2,3,(11)≡ 1⟩.
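A compact numerical sketch (our own illustrative code; the ordering qutrit ⊗ qutrit ⊗ qubit ⊗ qubit is assumed as written in the equation above) builds T and confirms the two examples just quoted:

import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
P = [np.diag([float(k == i) for k in range(3)]) for i in range(3)]   # |1><1|, |2><2|, |3><3|

T = (np.kron(np.kron(P[0], P[0]), np.kron(I2, I2)) +
     np.kron(np.kron(P[1], P[0]), np.kron(sx, I2)) +
     np.kron(np.kron(P[0], P[2]), np.kron(I2, sx)) +
     np.kron(np.kron(P[1], P[2]), np.kron(sx, sx)))

def ket(i, j, r1, r2):                         # |i, j, (r1 r2)> with i, j = 1..3 and r = 0/1
    v = np.zeros(36); v[((i - 1) * 3 + (j - 1)) * 4 + 2 * r1 + r2] = 1.0
    return v

print(np.allclose(T @ ket(1, 1, 0, 0), ket(1, 1, 0, 0)))   # control (1,1): target unchanged, outcome (00)
print(np.allclose(T @ ket(2, 3, 0, 0), ket(2, 3, 1, 1)))   # control (2,3): target flipped, outcome (11)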
§ CONCLUDING REMARKS
Control of qutrit is shown by monitoring one or two levels. Due to the Quantum Zeno Effect, the state of the system is shown to shelve to a state other than the states of the three-level system. Treatment to a three-level system takes us out of Pauli algebra, here we have Gell-Mann matrices. In addition, we write a new set of operators to realise the cNOT gate with the qutrit as the control and the two-level ancilla as the target. With these operators, the applications of entanglement have been realised in a three-level system in dense coding and teleportation for the purpose of quantum communication. Application of the system to universal gates allows us to manipulate the states. In general, for N-level system also, the conclusion will hold good.
Data Availability Statement: No Data associated in the manuscript
Conflict of interests: Authors declare no conflict of interest.
§ APPENDIX: DENSITY MATRIX OF AN N-LEVEL SYSTEM
An N-level system is defined by a Bloch vector whose components are expectation values of some observables <cit.>. The number of observables needed to identify the state are N^2-1. These correspond to N^2-1 independent parameters used to define a Hermitian density matrix operator ρ̂ with a constraint, Trρ̂=1. Choosing the generators of SU(N) for the observables x̂_i, the density matrix is determined from their expectation values ⟨x̂_i⟩'s as
ρ=1/N𝕀̂_N + 1/2∑_i=1^N^2-1⟨x̂_i⟩x̂_i.
The properties of the density matrix associated with a Hilbert space ℋ_N is given as
ρ∈ℒ(ℋ_N) : (i) Trρ=1 (ii) ρ = ρ^† (iii) ρ_i ≥ 0,
where ℒ is the space of linear operators on ℋ_N, i=1,2,… N and ρ_i's are the eigenvalues of ρ. The property (iv) Trρ^2≤ 1 follows from Eq. (<ref>). Equality holds when ρ is a pure state.
Following these properties, the operators x̂_i satisfy
( i) x̂_i = x̂_i^† ( ii) Tr [x̂_i] = 0 ( iii) Tr [x̂_ix̂_j] = 2δ_ij.
The x_i's are characterised with structure constants f_ijk, completely asymmetric tensor and g_ijk, completely symmetric tensor of Lie algebra
[x̂_i,x̂_j] = 2if_ijk x̂_k
{x̂_i,x̂_j} =2/Nδ_ijÎ_N+2g_ijkx̂_k.
By imposing (iv), the length of the operators x̂_i are restricted as
|x|≡√(x_ix_i)≤√(2(N-1)/N).
Systematic construction of the generators generalising the Pauli spin operators for an N-level system is given by <cit.>
{x̂_i}_i=1^N^2-1 = {û_jk,v̂_jk,ŵ_l}
where
û_jk = |j⟩⟨k| + |k⟩ ⟨j|,
v̂_jk = -ι(|j⟩⟨k| - |k⟩⟨j|),
ŵ_l = √(2/l(l+1))( ∑_j=1^l |j⟩⟨j|-l|l+1⟩⟨l+1|),
1≤j < k ≤N, 1≤l ≤N-1.
For N=2,
x̂_1 = û_12 = |1⟩⟨2| + |2⟩ ⟨1| ≡X̂,
x̂_2 = v̂_12 = -ι(|1⟩⟨2| - |2⟩⟨1|) ≡Ŷ,
x̂_3 = ŵ_l= |1⟩⟨1| - |2⟩⟨2| ≡Ẑ,
where |1⟩=[ 1 0; ]^ T and |2⟩=[ 0 1; ]^ T and the structure constants are f_ijk = ϵ_ijk (Levi-Civita), g_ijk=0.
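The construction above is straightforward to code. A small numpy sketch (our own, for illustration only) generates the N²−1 operators for any N and checks the normalization Tr[x̂_i x̂_j] = 2δ_ij, reproducing the Pauli matrices for N=2 and the eight operators used in the main text for N=3:

import numpy as np

def generators(N):
    ket = np.eye(N)
    ops = []
    for j in range(N):
        for k in range(j + 1, N):
            ops.append(np.outer(ket[j], ket[k]) + np.outer(ket[k], ket[j]))          # u_jk
            ops.append(-1j * (np.outer(ket[j], ket[k]) - np.outer(ket[k], ket[j])))  # v_jk
    for l in range(1, N):
        w = np.zeros((N, N), dtype=complex)
        for j in range(l):
            w += np.outer(ket[j], ket[j])
        w -= l * np.outer(ket[l], ket[l])
        ops.append(np.sqrt(2.0 / (l * (l + 1))) * w)                                 # w_l
    return ops

for N in (2, 3, 4):
    ops = generators(N)
    ok = all(np.isclose(np.trace(ops[i] @ ops[j]), 2.0 * (i == j))
             for i in range(len(ops)) for j in range(len(ops)))
    print(f"N = {N}: {len(ops)} generators (expected {N * N - 1}), Tr[x_i x_j] = 2 delta_ij: {ok}")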
ms B. Misra and E. C. G. Sudarshan, J. Math. Phys. 18, 756 (1977).
dehmelt H. G. Dehmelt, Bull. Am. Phys. Soc. 20, 60 (1975).
deh H. G. Dehmelt, IEEE Transactions on Instrumentation and Measurement, IM31, 83 (1982).
minev Z. Minev et al., Nature 570, 200 (2019).
parveen K. Snizhko, P. Kumar, A. Romito, Phys. Rev. Res. 2, 033512 (2020).
krjj Komal Kumari, Garima Rajpoot, Sandeep Joshi, and Sudhir R. Jain, Ann. Phys. 450, 169222 (2023).
kimura G. Kimura, The Bloch vector for N-level systems, Phys. Lett. A 314, 339 (2003).
jordan A. Chantasri, J. Dressel, A. Jordan, Phys. Rev. A 88, 042110 (2013).
jordan2 A. Chantasri, A. Jordan, Phys. Rev. A.92, 032125 (2015).
rieffel E. Rieffel, Wolfgang Polak, Quantum Computing: A gentle introduction, (The MIT Press, Cambridge) (2011).
hioe F. T. Hioe and J. H. Eberly, N-Level Coherence Vector and Higher Conservation Laws in Quantum Optics and Quantum Mechanics, Phys. Rev. Lett. 47, 838 (1981).
pottinger_lendi J. Pöttinger and K. Lendi, Generalized Bloch equations for decaying systems, Phys. Rev. A 31, 1299 (1985).
lendi K. Lendi, Entropy production in coherence-vector formulation for N-level systems, Phys. Rev. A 34, 662 (1986).
|
http://arxiv.org/abs/2307.04092v1 | 20230709042603 | Coupled-channel $D^\ast K^\ast -D_s^\ast ρ$ interactions and the origin of $T_{c\bar{s}0}(2900)$ | [
"Man-Yu Duan",
"Meng-Lin Du",
"Zhi-Hui Guo",
"En Wang",
"Dian-Yong Chen"
] | hep-ph | [
"hep-ph",
"hep-ex"
] |
|
http://arxiv.org/abs/2307.07585v1 | 20230714192419 | Higher-order Time-Delay Interferometry | [
"Massimo Tinto",
"Sanjeev Dhurandhar"
] | gr-qc | [
"gr-qc"
] |
Divisão de Astrofísica, Instituto Nacional de Pesquisas Espaciais, S. J. Campos, SP 12227-010, Brazil
Inter-University Centre for Astronomy and Astrophysics, Ganeshkhind, Pune, 411 007, India
Time-Delay Interferometry (TDI) is the data processing technique
that cancels the large laser phase fluctuations affecting the
one-way Doppler measurements made by unequal-arm space-based
gravitational wave interferometers. In a previous publication we
derived TDI combinations that exactly cancel the
laser phase fluctuations up to first order in the inter-spacecraft
velocities. This was done by interfering two digitally-synthesized
optical beams propagating a number of times clock- and
counter-clock-wise around the array. Here we extend that approach by
showing that the number of loops made by each beam before
interfering corresponds to a specific higher-order TDI space. In it
the cancellation of laser noise terms that depend on the
acceleration and higher-order time derivatives of the
inter-spacecraft light-travel-times is achieved exactly. Similarly to what we proved for the second-generation
TDI space, elements of a specific higher-order TDI space can be
obtained by first “lifting” the basis (, ,̱, X) of the
1^ st-generation TDI space to the higher-order space of
interest and then taking linear combinations of them with
coefficients that are polynomials of the six delays
operators. Higher-Order TDI might be required by future
interplanetary gravitational wave missions whose inter-spacecraft
distances vary appreciably with time, in particular, relative
velocities are much larger than those of currently planned arrays.
04.80.Nn, 95.55.Ym, 07.60.Ly
Higher-order Time-Delay Interferometry
Massimo Tinto
Sanjeev Dhurandhar
August 12, 2023
======================================
§ INTRODUCTION
Interferometric detectors of gravitational waves may be thought of as
optical configurations with one or more arms folding coherent trains
of light. At points where these intersect, relative fluctuations of
frequency or phase are monitored (homodyne detection). Interference of
two or more beams, produced and monitored by a nonlinear device such
as a photo detector, exhibits sidebands as a low frequency signal.
The observed low frequency signal is due to frequency variations of
the sources of the beams about the nominal frequency ν_0 of the
beams, to relative motions of the sources and any mirrors (or optical
transponders) that do any beam folding, to temporal variations of the
index of refraction along the beams, and, according to general
relativity, to any time-variable gravitational fields present, such as
the transverse traceless metric curvature of a passing plane
gravitational wave train. To observe gravitational waves in this way,
it is thus necessary to control, or monitor, the other sources of
relative frequency fluctuations, and, in the data analysis, to
optimally use algorithms based on the different characteristic
interferometer responses to gravitational waves (the signal) and on
the other sources (the noise).
By comparing phases of split beams propagated along equal but
non-parallel arms, frequency fluctuations from the source of the beams
are removed directly at the photo detector and gravitational wave
signals at levels many orders of magnitude lower can be detected.
Especially for interferometers that use light generated by presently
available lasers, which display frequency stability roughly a few
parts in 10^-13 in the millihertz band, it is essential to remove
these fluctuations when searching for gravitational waves of
dimensionless amplitude smaller than 10^-21.
Space-based, three-arm interferometers
<cit.> are prevented from
canceling the laser noise by directly interfering the beams from their
unequal arms at a single photo detector because laser phase
fluctuations experience different delays. As a result, the Doppler
data from the three arms are measured at different photo detectors on
board the three spacecraft and are then digitally processed to
compensate for the inequality of the arms. This data processing
technique, called Time-Delay Interferometry (TDI) <cit.>,
entails time-shifting and linearly combining the Doppler measurements
so as to achieve the required sensitivity to gravitational radiation.
In a recent article <cit.> we re-analyzed the space of the
Time-Delay Interferometric (TDI) measurements that exactly cancel the laser noise up to the inter-spacecraft linear
velocity terms, i.e. the so called 2^ nd-generation TDI
space. By first regarding the basis (, ,̱, X) of the
1^ st-generation TDI space as the result of the interference of
two synthesized light-beams propagating once, clock- and
counter-clock-wise around the array, we then showed that exact
cancellation of the laser noise terms containing the inter-spacecraft
velocities could be achieved by making these beams complete a larger
number of loops around the array before interfering. In the case of
the Sagnac combinations, (, ,̱), the minimum number of loops
made by each beam around the array to exactly cancel the laser noise
linear velocity terms was found to be three, while for the unequal-arm
Michelson combination, X, the minimum number of loops was equal to
two. In physical terms, by making the synthesized beams go around the
array in the clock- and counter-clock-wise sense a number of
times before interfering, one ends up averaging out the effects due to
the rotation of the array and the time-dependence of the
inter-spacecraft light-travel-times. In this paper we will prove that
there exist a correspondence between the number of clock- and
counter-clock-wise loops made by the beams around the array and the
order of cancellation of the laser noise in the kinematic terms of the
inter-spacecraft light-travel-times. In the case of the unequal-arm
Michelson combination this result had already been noticed through a
numerical analysis <cit.>. In this article we
actually prove it analytically.
The paper is organized as follows. In section <ref> we review
some of the results presented in <cit.> that are relevant
here. We first summarize the “lifting”<cit.> technique, in
which elements of a basis of the 1^ st-generation TDI space are
rewritten in terms of the six delay operators. Then their
corresponding 2^ nd-generation and higher-order TDI expressions
are obtained by acting on specific combinations of their data with
uniquely identified polynomials of the six delays. This operation is
key to our method as it allows us to generalize the main property of a
basis of the 1^ st-generation TDI space: elements of the
2^ nd-generation and higher-order TDI spaces are obtained by
taking linear combinations of properly delayed lifted basis
<cit.>. The higher-order TDI combinations cancel laser noise
terms depending on the second- and higher-order time derivatives of
the light-travel-times. In physical terms, the operation of lifting
corresponds to two light beams making clock- and counter-clock-wise
loops around the array before being recombined on board the
transmitting spacecraft. In so doing the time-variations of the
light-travel-times is averaged out more and more accurately. As an
exemplification, after applying an additional lifting procedure to the
2^ nd-generation TDI combinations
(α_2, β_2, γ_2, X_2) derived in <cit.>, we
obtain the corresponding combinations
(α_3, β_3, γ_3, X_3). In Section <ref>, after deriving
useful identities of the six delay operators, we mathematically prove
that (α_3, β_3, γ_3, X_3) exactly cancel the laser
noise up to terms quadratic in the inter-spacecraft velocities and
linear in accelerations, and that higher-order TDI combinations
cancel the laser noise up to higher-order
time-derivatives of the inter-spacecraft light travel times. In
Section <ref> we then present our comments on our findings and
our conclusions.
§ THE LIFTING PROCEDURE
Here we present a brief summary of the lifting procedure discussed in
<cit.>. There it was shown that the operation of lifting
provides a way for deriving elements of the 2^ nd-generation
TDI space by lifting combinations of the 1^ st-generation TDI
space. As it will become clearer below, the lifting procedure can be
generalized so as to provide TDI combinations that exactly cancel the
laser noise containing delays of any order arising from kinematics.
We start by writing the one-way Doppler data y_i, y_i' in terms of
the laser noises using the notation introduced in
<cit.>. We index the one-way Doppler data as follows: the
beam arriving at spacecraft i has subscript i and is primed or
unprimed depending on whether the beam is traveling clock- or
counter-clock-wise around the interferometer array, with the sense
defined by the orientation of the array shown in Fig. <ref>.
Because of the Sagnac effect due to the rotation of the array, the
light-travel-time from say spacecraft i to j is not the same as
the one from j to i. Therefore L_i ≠ L'_i and so we have six
unequal time-dependent time-delays (we choose units so that the
velocity of light c is unity and L_i, L'_i have dimensions of time
- they are actually L_i/c, L'_i/c.). The corresponding delay
operators are labeled as _i and _i' and are defined by their
action on an arbitrary time-series Ψ(t) as
_i Ψ(t) ≡Ψ(t - L_i) and
_i'Ψ(t) ≡Ψ(t - L'_i) respectively.
The one-way phase measurements are then given by the following
expressions <cit.>
y_1 = _3 C_2 - C_1 , y_1' = _2' C_3 - C_1 ,
y_2 = _1 C_3 - C_2 , y_2' = _3' C_1 - C_2 ,
y_3 = _2 C_1 - C_3 , y_3' = _1' C_2 - C_3 ,
Thus, as seen in the figure, y_1 for example is the phase
difference time series measured at reception at spacecraft 1 with
transmission from spacecraft 2 (along L_3). [Besides the
primary inter-spacecraft Doppler measurement y_i, y_i' that
contain the gravitational wave signal, other metrology measurements
are made on board an interferometer's spacecraft. This is because
each spacecraft is equipped with two lasers and two proof-masses of
the onboard drag-free subsystem. It has been shown <cit.>,
however, that these onboard measurements can be properly delayed and
linearly combined with the inter-spacecraft measurements to make the
realistic interferometry configuration equivalent to that of an
array with only three lasers and six one-way inter-spacecraft
measurements.]
As emphasized in <cit.>, to generate elements of the
2^ nd-generation TDI space one first needs to derive the
expressions of the four generators, , ,̱, X, of the
1^ st-generation TDI that include the six delays
i, i' i, i'= 1, 2, 3, 1', 2', 3'. Since these combinations
correspond to two beams propagating clock- and counter-clock-wise
once, the lifting procedure makes these beams propagate clock- and
counter-clock-wise a number of times before being made to interfere.
The resulting data combinations exactly cancel the laser noise terms
linear in the inter-spacecraft velocities. The lifting procedure is
unique and can be applied iteratively an arbitrary number of times. As
we will show below, each iteration suppresses the laser noise
significantly more than that achieved at the previous iterative
step. To be specific, a 2^ nd-generation TDI combination
cancels the laser noise up to linear velocity terms, while the
corresponding 3^ rd-generation cancels it up to the
acceleration and terms quadratic in velocities. It should be
noticed that some elements of the 2^ nd-generation TDI space,
like the Sagnac combinations , ,̱, require more than two
“lifting” iterations to exactly cancel the laser noise up to the
linear velocity terms <cit.>. Therefore we will refer to the
n^ th-generation TDI space as those TDI combinations that
exactly cancel the laser noise up to the
(n-1)^ th time-derivatives of the time-delays.
§.§ Time-varying arm-lengths and vanishing commutators
If the arm-lengths are time-dependent, then the operators do not
commute and the laser noise will not cancel. However, if the
arm-lengths are analytic functions of time, we can Taylor expand the
operators and keep terms to a specific order in the time-derivatives
of the light-travel times. Although in the case of the currently
envisioned missions <cit.> it
is sufficient to cancel terms that are only first order in
L̇_i and L̇'_i or linear in velocities
<cit.>. However, in future missions one may have to
account for higher-order time-derivative terms because of the stronger
time-dependence of inter-spacecraft distances. In those cases the
lifting procedure presented in this article provides a method for
obtaining TDI combinations that cancel the laser noise up to the order
required.
Let us first start by noting that the effect of n operators
_k_1, ..., _k_n applied on the laser noise C(t).
We also write the expressions in a neat form. For three operators we
obtain: [The operators could refer to either L_i or
L_i'. We do not write the primes explicitly in order to avoid
clutter but the identities that we derive hold in either case.
Instead of writing _k_p we have denoted the same by just
_p where p can take any of the values
1, 2, 3, 1', 2', 3'.]_1 _2 _3 C(t) = C [t - L_3 (t - L_2 (t - L_1) - L_1) - L_2 (t - L_1) - L_1]
= C [t - L_1 - L_2 - L_3 + (L_2 v_3 + L_1 v_2 + L_1 v_3) - L_1 v_2 v_3 - (L_1 + L_2)^2 a_1 + L_1^2 a_2)] .
= C (t - ∑_i = 1^3 L_i) - V_3 - Q_3 - A_3) ,
≈ C (t - ∑_i = 1^3 L_i) + (V_3 - Q_3 - A_3) +
V_3^2 ,
where,
V_3 = L_1 v_2 + (L_1 + L_2) v_3 ,
Q_3 = L_1 v_2 v_3 ,
A_3 = [L_1^2 a_2 + (L_1 + L_2)^2 a_3] , where
v_i = _i and a_i = _i. We have neglected higher order terms
of order o (v^3), o (v a) etc. while obtaining the above results. We
have kept terms up to the quadratic order in velocities and linear in
accelerations. We further denote by V, Q, A, the terms linear in
velocities, quadratic in velocities and linear in acceleration,
respectively. For four operators _1 _2 _3 _4 operating on
C(t), we obtain:
V_4 = L_1 v_2 + (L_1 + L_2) v_3 + (L_1 + L_2 + L_3) v_4 ,
Q_4 = L_1 [v_2 v_3 + v_2 v_4 + v_3 v_4 ] + L_2 v_3 v_4 ,
A_4 = [L_1^2 a_2 + (L_1 + L_2)^2 a_3 + (L_1 + L_2 + L_3)^2 a_4]
, with the expression of C being essentially the same as in
Eq. (<ref>) but V_3, Q_3, A_3 replaced by V_4, Q_4, A_4
etc. Also we find that there are recursion relations like
Q_4 = V_3 v_4 + Q_3 which makes it convenient to derive the general
expressions for n operators. Accordingly, the general expression for
n operators is obtained from the above considerations by induction:
_1 _2 _3 ... _n C(t) ≈
C (t - ∑_i = 1^n L_i) + (V_n - Q_n - A_n) + V_n^2 ,
V_n = ∑_i = 1^n - 1 L_i (∑_j = i + 1^n v_j ) ,
Q_n = ∑_i = 1^n - 2 L_i (∑_j = i + 1, k > j^n v_j v_k ) ,
A_n = ∑_j = 2^n a_j (∑_i = 1^j -1 L_i )^2
.
Let us interpret the r.h.s. of this equation. The first term is
just the laser noise at a delayed time that is equal to the sum of the
delays at time t. If the arm lengths were constant in time this
would be the only term that would be present and would be sufficient
to cancel the laser frequency noise. These are just the first
generation TDI and the operators commute. The second term, on the
other hand, involves the multiplication of Ċ evaluated
at the delayed time by an expression involving V, Q, A - it contains
terms up to the second order in velocities and linear in
accelerations. This term makes the operators non-commutative. The
third term instead includes the second derivative of the laser noise
and contains terms quadratic in velocities. As shown in
<cit.> certain commutators cancel the laser noise
up to linear velocity terms in the following general way:
[x_1 x_2...x_n, x_σ(1),
x_σ(2), ..., x_σ(n)] = 0 .
where the “zero” on the RHS means up to first order in the
linear velocity and σ is a permutation on the n
symbols. However, as it will be shown in the next section, the
expression on the left-hand-side of Eq. (<ref>) allows us
to prove that, for a given n and a specific permutation of the
indices, the cancellation of the laser noise achieved is up to the
time-derivatives of (n-1)^th-order in inter-spacecraft time
delays.
Since this general result will be proved by induction, we first
provide the expressions for the higher-order (3^ rd-generation
TDI) Michelson and Sagnac combinations, (_3, _̱3, _3, X_3) and
show they can iteratively be related to their corresponding
previous-order combinations.
§.§ The Unequal-arm Michelson X_3
To derive the expression for X_3 we recall how the second-generation
expression X_2 was derived <cit.>. The unequal-arm
Michelson combinations include only the four one-way Doppler
measurements, (y_1, y_1', y_2', y_3) from the two arms
centered on spacecraft 1. Let us consider the following synthesized
two-way Doppler measurements and their residual laser noise terms:
X_↑ ≡ y_1 + _3 y_2' = (_3_3' - I) C_1 ,
X_↓ ≡ y_1' + _2' y_3 = (_2'_2 - I) C_1 .
As we know, the residual laser noise in the 1^ st-generation TDI combination X, is
equal to the following expression <cit.>:
X ≡ (_3_3' - I) X_↓ - (_2'_2 - I) X_↑
= [_3_3' , _2'_2] C_1 ≡_1 C_1 .
Here we have defined the commutator
_1 = [_3_3' , _2'_2] as the first commutator
which is associated with the 1^ st-generation unequal-arm
Michelson combination. It is easy to see the above commutator is
different from zero when the delays are functions of time and, to
first order, is in fact proportional to the inter-spacecraft relative
velocities. To derive the 2^ nd-generation TDI combination
X_2, which cancels exactly the laser noise up to linear velocity
terms, we rewrite the above expression for X in terms of its two
synthesized beams. They are equal to:
X_↑↑ ≡_2'_2 X_↑ + X_↓ = (_2'_2_3_3' - I) C_1 ,
X_↓↓ ≡_3_3' X_↓ + X_↑ =
(_3_3'_2'_2 - I) C_1 ,
The X_2 expression can be derived by repeating the same
procedure used for deriving X. This results in the following
expression:
X_2 ≡ (_3_3'_2'_2 - I) X_↑↑ -
(_2'_2_3_3' - I) X_↓↓ =
[_3_3'_2'_2, _2'_2_3_3'] C_1 ≡_2 C_1 =
0 ,
where we have defined the second commutator
_2 = [_3_3'_2'_2, _2'_2_3_3']. Also the equality to zero means “up to terms linear in
velocity”, and is a consequence of the general property of the
commutators of the delay operators proved in the previous section. This
can be easily seen from the following argument. Since we need to
cancel terms only up to linear in velocities for X_2, we only need
to consider the quantities V_n of Eq. (<ref>) for the
commutator. Here n = 8 because we have a product of 8 delay
operators
_3_3'_2'_2_2'_2_3_3' in the
first term of the commutator. The explicit expression is:
V_8 = L_3 (3 v_3' + 2 v_2' + 2 v_2 + v_3) + L_3' (2 v_2' + 2 v_2 + v_3 + v_3')
+ L_2' (3 v_2 + 2 v_3 + 2 v_3' + v_2') + L_2 ( 2 v_3 + 2
v_3' + v_2' + v_2) .
A permutation of indices
3 ⟷ 2', 3' ⟷ 2 produces the
second term in the commutator. But under this permutation of indices
as seen from Eq. (<ref>) the quantity V_8 is
invariant. Since the second term of the commutator has the opposite
sign, the V terms cancel out to give zero.
Let us define A_1 ≡_3_3' and
B_1 ≡_2'_2. We have the following commutator's
identity:
[A_1B_1, B_1A_1] = [[A_1, B_1], A_1B_1] ,
from which it follows that,
_2 ≡ [_3_3'_2'_2, _2'_2_3_3'] = [_1, _3_3'_2'_2] .
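The permutation-invariance argument invoked above for V_8 can also be checked symbolically. The short sympy sketch below is our own illustrative code: it builds V_8 directly from the general rule V_n = Σ_i L_i (Σ_{j>i} v_j) for the ordered product of the eight delays appearing in each term of the commutator, and confirms that the two index orderings give the same linear-velocity contribution, so that it cancels in the difference.

import sympy as sp

L3, L3p, L2, L2p, v3, v3p, v2, v2p = sp.symbols("L3 L3p L2 L2p v3 v3p v2 v2p")
L = {"3": L3, "3p": L3p, "2": L2, "2p": L2p}
v = {"3": v3, "3p": v3p, "2": v2, "2p": v2p}

def V(order):                        # V_n = sum_i L_{k_i} * sum_{j > i} v_{k_j}
    return sp.expand(sum(L[k] * sum(v[m] for m in order[i + 1:]) for i, k in enumerate(order)))

first = ["3", "3p", "2p", "2", "2p", "2", "3", "3p"]      # first product of eight delays
second = ["2p", "2", "3", "3p", "3", "3p", "2p", "2"]     # same indices with 3 <-> 2', 3' <-> 2
print(sp.simplify(V(first) - V(second)) == 0)             # True: linear-velocity terms cancel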
Similar to what was done for both X and X_2, one can obtain
X_3. From the expression for X_2 above we can write the following
two combinations corresponding to two synthesized beams making three
zero-area closed-loops along the two arms of the array. We have,
X_↑↑↑ ≡_3_3'_2'_2 X_↑↑ + X_↓↓ = (_3_3'_2'_2_2'_2_3_3' - I) C_1 ,
X_↓↓↓ ≡_2'_2_3_3' X_↓↓ + X_↑↑ = (_2'_2_3_3'_3_3'_2'_2 - I) C_1 ,
which implies the following expression of the residual laser noise in X_3:
X_3 ≡ (_2'_2_3_3'_3_3'_2'_2 -
I) X_↑↑↑ - (_3_3'_2'_2_2'_2_3_3' - I) X_↓↓↓
≡_3 C_1 = [_2'_2_3_3'_3_3'_2'_2, _3_3'_2'_2_2'_2_3_3'] C_1 .
From the commutator identity derived earlier we see that _3 can be
written in the following way,
_3 ≡ [_2'_2_3_3'_3_3'_2'_2, _3_3'_2'_2_2'_2_3_3'] = [_2, _2'_2_3_3'_3_3'_2'_2] ,
where _2 is in fact given by Eq. (<ref>), the operator of the
2^ nd-generation unequal-arm Michelson combination. We then
conclude that the following identity is satisfied in general,
_n = [X_n-1, _2'_2_3_3'_3_3'_2'_2...] ,
where the total number of delay operators on the right-hand-side is
equal to 2^n, as one can easily infer.
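These nested commutator expressions are also easy to test numerically. The Python sketch below is our own illustration: the laser noise C_1 and the four one-way delays are arbitrary toy choices (units with c = 1), with rates of change v_i = L̇_i ~ 10^-5 and accelerations a_i = L̈_i ~ 10^-8 deliberately chosen larger than realistic values so that the hierarchy is visible. Each delay operator is represented as a function composition, and the residual laser noise left by the commutators defining X, X_2 and X_3 is evaluated, showing the progressive suppression discussed in this section and proved in the next.

import numpy as np

# toy laser phase noise and time-dependent one-way delays (c = 1)
C1 = lambda t: np.sin(0.17 * t) + 0.6 * np.cos(0.31 * t + 1.0)
L = {"3":  lambda t: 8.3 + 1.0e-5 * t + 0.5 * 1.0e-8 * t**2,
     "3p": lambda t: 8.4 - 0.7e-5 * t + 0.5 * 0.8e-8 * t**2,
     "2":  lambda t: 7.9 + 0.4e-5 * t - 0.5 * 1.2e-8 * t**2,
     "2p": lambda t: 8.1 - 1.1e-5 * t + 0.5 * 0.5e-8 * t**2}

def apply(ops, f):                   # D_{k1} D_{k2} ... D_{kn} f, with D_k g(t) = g(t - L_k(t))
    for k in reversed(ops):
        f = (lambda g, Lk: (lambda t: g(t - Lk(t))))(f, L[k])
    return f

def residual(A, B, t):               # laser noise left by the commutator [A, B] acting on C_1
    return apply(A + B, C1)(t) - apply(B + A, C1)(t)

A1, B1 = ["3", "3p"], ["2p", "2"]    # X   : [D3 D3', D2' D2]
A2, B2 = A1 + B1, B1 + A1            # X_2 : [D3 D3' D2' D2, D2' D2 D3 D3']
A3, B3 = B2 + A2, A2 + B2            # X_3 : commutator of the two 8-delay products above
ts = np.linspace(0.0, 1000.0, 7)
for name, A, B in (("X  ", A1, B1), ("X_2", A2, B2), ("X_3", A3, B3)):
    print(name, max(abs(residual(A, B, t)) for t in ts))

In this toy run the residual drops by a few orders of magnitude at each step, consistent with X cancelling only the static part of the delays, X_2 removing in addition the terms linear in the velocities, and X_3 removing the acceleration and velocity-squared terms as well.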
In the following section we will return to the expression of X_3 and
higher-order unequal-arm Michelson combinations. There we will show
that X_3 cancels laser noise terms that are quadratic in the
inter-spacecraft velocities and linear in the acceleration, and
prove a general theorem by which TDI combinations of order n (such
as X_n) cancel the laser noise up to (n-1)^th time-derivatives of the
time-delays.
§.§ The Sagnac combination α_3
A TDI Sagnac combination, _n, represents the result of the
interference of two synthesized light-beams on board spacecraft 1 after
making an equal number of clock- and counter-clock-wise loops around
the array. In <cit.> we obtained the expression of _2,
the 2^ nd-generation TDI Sagnac combination, that exactly
cancels laser noise up to terms linear in the inter-spacecraft
velocities. In what follows we derive _3 by first recalling the
expressions of , _1.5 and _2, and their residual
laser noises
≡α_↑ - α_↓ = (D_3 D_1 D_2 -
D_2' D_1' D_3') C_1 ,
where α_↑ and α_↓ are equal to the
following combinations of the one-way heterodyne measurements <cit.>,
α_↑ ≡ y_1 + D_3 y_2 + D_3 D_1 y_3 = (D_3 D_1 D_2 - I)
C_1 ,
α_↓ ≡ y_1' + D_2' y_3' + D_2' D_1' y_2' =
(D_2' D_1' D_3' - I) C_1 .
The Sagnac combination _1.5 is then obtained by making the
beams go around the array one additional time and results in the
following expression,
α_1.5≡ (D_2' D_1' D_3' - I) α_↑ - (D_3 D_1 D_2 - I) α_↓≡σ_1.5 C_1 = [D_2' D_1' D_3', D_3 D_1 D_2] C_1 .
From the properties of commutators derived in <cit.>, we
recognize that the right-hand-side of Eq. (<ref>) does
not cancel the laser noise containing terms linear in the
velocities. However, by making the beams going around the array one
more time, we obtain the following expression of the
second-generation Sagnac combination _2,
α_2 = (D_3 D_1 D_2 D_2' D_1' D_3' - I) α_↑↑ -
(D_2' D_1' D_3' D_3 D_1 D_2 - I) α_↓↓ ,
≡ å_2 C_1 =
[D_3 D_1 D_2 D_2' D_1' D_3', D_2' D_1' D_3' D_3 D_1 D_2] C_1 .
In Eq. (<ref>) α_↑↑,
α_↓↓ are equal to the following combinations of the six
delay operators _i, _j , i=1, 2, 3 , j = 1', 2', 3'<cit.>,
α_↑↑ = D_2' D_1' D_3' α_↑ + α_↓
= (D_2' D_1' D_3' D_3 D_1 D_2 - I) C_1 ,
α_↓↓ = α_↑
+ D_3 D_1 D_2 α_↓
= (D_3 D_1 D_2 D_2' D_1' D_3' - I) C_1 .
We may notice the operator that applies to C_1 in
Eq. (<ref>) is the commutator of two delay operators, each
being the product of the same number of primed and unprimed delay
operators and related by permutations of their indices. From the
commutator identities derived in the previous section, we conclude
that such a commutator results in the exact cancellation of the laser
noise up to linear velocity terms.
Let us now consider the following two combinations
entering in _2
α_↑↑↑ = D_3 D_1 D_2 D_2' D_1' D_3'α_↑↑ +
α_↓↓
=
(D_3 D_1 D_2 D_2' D_1' D_3' D_2' D_1' D_3' D_3 D_1 D_2 - I) C_1 ,
α_↓↓↓ = D_2' D_1' D_3' D_3 D_1 D_2 α_↓↓ +
α_↑↑
= (D_2' D_1' D_3' D_3 D_1 D_2 D_3 D_1 D_2 D_2' D_1' D_3' - I) C_1 .
From Eq. (<ref>) above we obtain the following expression for
_3 and its residual laser noise,
_3 = (D_2' D_1' D_3' D_3 D_1 D_2 D_3 D_1 D_2 D_2' D_1'
D_3' - I) α_↑↑↑ - (D_3 D_1 D_2
D_2' D_1' D_3' D_2' D_1' D_3' D_3 D_1 D_2 - I) α_↓↓↓
≡ å_3 C_1 = [D_2' D_1' D_3' D_3 D_1 D_2 D_3 D_1 D_2 D_2' D_1' D_3',
D_3 D_1 D_2 D_2' D_1' D_3' D_2' D_1' D_3' D_3 D_1 D_2] C_1 .
If we now define A_1 ≡ D_2' D_1' D_3', B_1 ≡ D_3 D_1 D_2, we see that
the right-hand-side of Eq. (<ref>) can be written as
[A_1B_1B_1A_1, B_1A_1A_1B_1], which is also equal to [[A_1B_1, B_1A_1], A_1B_1B_1A_1] from the
commutator's identity derived earlier. From these considerations we
finally have,
å_3 = [å_2, D_2' D_1' D_3' D_3 D_1 D_2 D_3 D_1 D_2 D_2'
D_1' D_3' D_3 D_1 D_2 D_2' D_1' D_3' D_2' D_1' D_3'
D_3 D_1 D_2] .
As in the case of the expression for the operator _n derived in
the previous section, here too we can relate the operator å_n to
the operator å_n-1 in the following way,
å_n = [å_n-1, D_2' D_1' D_3' D_3 D_1 D_2 D_3 D_1 D_2 D_2'
D_1' D_3' D_3 D_1 D_2 D_2' D_1' D_3' D_2' D_1' D_3'
D_3 D_1 D_2 ...] ,
where the total number of delay operators on the right-hand-side is
equal to 3 × 2^n, as one can easily infer.
§ HIGHER-ORDER TDI
In the previous section we showed that an order-n TDI combination
can be written in terms of its corresponding (n-1)-order one through a
commutator identity (see Eqs. (<ref>, <ref>)). In this section
we will take advantage of this property by first proving that the
third-order TDI combinations _3, _̱3, _3, X_3 cancel the laser
noise up to terms quadratic in the inter-spacecraft velocities and
linear in the accelerations. We will then generalize this result
and prove that combinations of order n cancel exactly the laser
noise up to the (n-1)^th-time-derivative terms of the
inter-spacecraft time delays. Since the proof proceeds similarly for
both the unequal-arm Michelson and the Sagnac combinations, in what
follows we will just focus on the Michelson combinations.
To take advantage of the dependence of X_3 on its lower-order
combinations X_2 and X, let us first focus on the expressions for
the residual laser noises in X and X_2. Using our previous
notation of section II B, namely, A_1 ≡_3_3' and
B_1 ≡_2'_2, to the first order we can write the
residual laser noise in X in the following form:
X = [A_1, B_1] C_1(t) = C_1(t - L_A_1(t) - L_B_1(t -
L_A_1(t))) - C_1(t - L_B_1(t) - L_A_1(t - L_B_1(t))) ,
≃ Ċ_1(t - L_B_1(t) - L_A_1(t)) [L̇_B_1 L_A_1 - L̇_A_1 L_B_1] ,
where L_B_1, L_A_1 are the two round-trip-light-times in the
two unequal arms and the symbol represents the usual
operation of time derivative. Eq. (<ref>) simply states that
the residual laser noise in X is linear in the inter-spacecraft
velocities through a “angular momentum-like” expression. We note
that A_1 and B_1 also represent time-delays and are time-delay
operators in their own right, and therefore follow the same
algebraic rules as the elementary delay operators _j. For
reasons that will become clearer later on, we will denote such an
expression as,
S^(1)≡ [L̇_B_1 L_A_1 - L̇_A_1 L_B_1] .
Since S^(1) contains terms linear in velocities, the laser noise
in X is not canceled at this order.
Let us now see how we can cancel the terms linear in
velocities. Let us consider the following two delay operators:
A_2 ≡_3_3'_2'_2 = A_1 B_1, B_2 ≡_2'_2 _3_3' = B_1 A_1. We can formally
write the expression of the first-order residual laser noise in X_2
in the following way:
X_2 = [A_2, B_2] C_1(t) = C_1(t - L_A_2(t) -
L_B_2 (t - L_A_2(t))) -
C_1(t - L_B_2(t) -
L_A_2 (t - L_B_2(t))) ,
≃ Ċ_1(t - L_A_2(t) - L_B_2(t)) [L̇_B_2 L_A_2 - L̇_A_2
L_B_2] ,
where we have denoted with (L_A_2 , L_B_2) the
two delays resulting from applying to the laser noise the two operators
(A_2 = _3_3'_2'_2 , B_2 = _2'_2 _3_3')
respectively.
In analogy with the expression of S^(1) in Eq. (<ref>), which
quantifies the first-order expression of the residual laser noise in
X, it is convenient to introduce the following combination that
defines the magnitude of the first-order residual laser noise in X_2:
S^(2)≡ [L̇_B_2 L_A_2 - L̇_A_2 L_B_2] .
To assess its magnitude we need to expand the two delays
(L_A_2 , L_B_2) in terms of the round-trip-light-times and
their time-derivatives through the following expressions,
L_A_2 = L_A_1(t) + L_B_1(t -
L_A_1(t)) ≃ L_A_1(t) +
L_B_1(t) - L̇_B_1(t ) L_A_1(t) ,
L̇_A_2 = L̇_A_1(t) + d/dt
L_B_1(t - L_A_1(t))
≃L̇_A_1(t) + L̇_B_1(t) -
d/dt (L̇_B_1(t) L_A_1(t) ) ,
L_B_2 = L_B_1(t) + L_A_1(t -
L_B_1(t)) ≃ L_B_1(t) +
L_A_1(t) - L̇_A_1(t ) L_B_1(t) ,
L̇_B_2 = L̇_B_1(t) + d/dt
L_A_1(t - L_B_1(t))
≃L̇_B_1(t) + L̇_A_1(t) -
d/dt (L̇_A_1(t) L_B_1(t) ) .
By substituting the expressions given by Eq. (<ref>) into
Eq. (<ref>), after some algebra we get,
S^(2) = [d/dt(L̇_A_1
L_B_1) - (L̇_A_1 + L̇_B_1)] S^(1) + [L_A_1 + L_B_1 - L̇_A_1
L_B_1] Ṡ^(1) .
Since S^(1) is linear in the inter-spacecraft velocities, from the
above expression we conclude that S^(2) (and therefore the
residual laser noise in X_2) only contains terms that are quadratic
in the relative velocities and linear in the
accelerations. Mathematically this is a consequence of the dependence of
X_2 on X as shown in Eq. (<ref>). Thus we find that the
terms linear in velocities are canceled in X_2.
Let us now move on to X_3. From the expression of its residual laser
noise given in Eq. (<ref>), after defining the following two delay
operators:
A_3 ≡ A_2B_2 = _3_3'_2'_2 _2'_2 _3_3',
B_3 ≡ B_2A_2 = _2'_2 _3_3'_3_3'_2'_2, we can write the expression of its first-order residual
laser noise in the following way,
X_3 ≃Ċ_1(t - L_B_3(t) - L_A_3(t)) [L̇_B_3 L_A_3 - L̇_A_3
L_B_3] .
By defining S^(3) to be equal to:
S^(3)≡ [L̇_B_3 L_A_3 - L̇_A_3 L_B_3] ,
we will now show that S^(3) can be written as a linear combination
of S^(2) and Ṡ^(2), similarly to S^(2) being a
linear combination of S^(1) and Ṡ^(1). To prove this
result, we expand the two delays (L_A_3 , L_B_3) and their
time-derivatives in terms of the delays (L_A_2 , L_B_2) and
their time derivatives (which define S^(2)). We obtain:
L_A_3 = L_A_2(t) + L_B_2(t -
L_A_2(t)) ≃ L_A_2(t) +
L_B_2(t) - L̇_B_2(t ) L_A_2(t) ,
L̇_A_3 = L̇_A_2(t) + d/dt
L_B_2(t - L_A_2(t))
≃L̇_A_2(t) + L̇_B_2(t) -
d/dt (L̇_B_2(t) L_A_2(t) ) ,
L_B_3 = L_B_2(t) + L_A_2(t -
L_B_2(t)) ≃ L_B_2(t) +
L_A_2(t) - L̇_A_2(t ) L_B_2(t) ,
L̇_B_3 = L̇_B_2(t) + d/dt
L_A_2(t - L_B_2(t))
≃L̇_B_2(t) + L̇_A_2(t) -
d/dt (L̇_A_2(t) L_B_2(t) ) .
After substituting Eqs. (<ref>) into Eq. (<ref>),
we finally obtain the following expression for S^(3) in terms of
S^(2) and Ṡ^(2):
S^(3) = [ d/dt(L̇_A_2
L_B_2) - (L̇_A_2 + L̇_B_2)] S^(2) + [L_A_2 + L_B_2 - L̇_A_2
L_B_2] Ṡ^(2) .
Since S^(2) only contains terms that are either proportional to
the square of the inter-spacecraft velocities or to their relative
accelerations, and Ṡ^(2) is further suppressed over
S^(2) by a time derivative of these terms, from the structure of
Eq. (<ref>) we conclude that S^(3) is of order V smaller
than S^(2), with V being a typical inter-spacecraft
velocity. Therefore in X_3 terms quadratic in velocities and
linear in acceleration are canceled out.
From the derivations of the expressions for S^(2) and S^(3)
above it is now clear that the combination S^(4), associated with
the residual laser noise in X_4, will cancel laser noise terms that
are cubic in the velocity or of order velocity times acceleration
or linear in time derivative of the acceleration, and that in
general the expression S^(n) associated with the residual laser
noise in X_n will depend on the order n-1 combinations S^(n-1)
and Ṡ^(n-1) through a linear relationship similar to those
shown by Eqs.(<ref>, <ref>). This is because of the
mathematical structure of S^(n) and because its defining delays
can be written in terms of the delays entering the expression of
S^(n-1). By induction we therefore conclude that the residual
laser noise in the n-order unequal-arms Michelson combination X_n
will cancel exactly the laser noise up to the (n-1)^th
time-derivatives of the inter-spacecraft time delays.
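The scaling behaviour established above is easy to check numerically. The following Python sketch is a toy model only: an arbitrary smooth function stands in for the laser noise C_1, and the round-trip-light-times are given constant, exaggerated rates of change that are not taken from any mission design. It builds the composite delays of X, X_2 and X_3 by iterating the lifting step and evaluates the corresponding residuals; halving the velocity scale should reduce the three residuals by factors of roughly 2, 4 and 8, in agreement with the theorem.

import numpy as np

def C(t):
    # arbitrary smooth stand-in for the laser noise C_1(t)
    return np.sin(1.3 * t) + 0.5 * np.cos(2.7 * t + 0.4)

def lift(LA, LB):
    # one lifting step: compose the two composite delays once more around the array
    return (lambda t: LA(t) + LB(t - LA(t)),
            lambda t: LB(t) + LA(t - LB(t)))

def residuals(eps, t):
    # round-trip light times with constant (toy, exaggerated) rates of change
    LA1 = lambda s: 8.3 + 0.0010 * eps * s
    LB1 = lambda s: 8.1 - 0.0013 * eps * s
    LA2, LB2 = lift(LA1, LB1)
    LA3, LB3 = lift(LA2, LB2)
    out = []
    for LA, LB in ((LA1, LB1), (LA2, LB2), (LA3, LB3)):
        r = C(t - LA(t) - LB(t - LA(t))) - C(t - LB(t) - LA(t - LB(t)))
        out.append(np.max(np.abs(r)))
    return out  # residual amplitudes of X, X_2, X_3

t = np.linspace(0.0, 50.0, 2001)
r_full = residuals(1.0, t)
r_half = residuals(0.5, t)
for name, a, b in zip(("X", "X_2", "X_3"), r_full, r_half):
    print(f"{name}: residual {a:.3e}, ratio after halving velocities {a/b:.2f}")
# expected ratios: about 2 for X, about 4 for X_2, about 8 for X_3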
§ CONCLUSIONS
We have presented a technique for constructing TDI combinations
that cancel the laser noise up to n^th-order time-derivative terms
of the inter-spacecraft light-travel-times. The lifting procedure,
which provides a way for constructing such TDI combinations, entails
making two synthesized laser beams going around the array along clock-
and counter-clock-wise paths a number of times before interfering back
at the transmitting spacecraft. In so doing the time-variations of
the light-travel-times are averaged out more and more accurately with
the number of loops performed by the beams. We derived the expressions
of the third-order TDI combinations (_3, _̱3, _3, X_3) as an
example application of the lifting procedure, and showed their
expressions cancel the laser noise up to terms quadratic in the
velocity and linear in the acceleration thanks to the theorem we
proved in Section <ref>. This result had previously been
noticed through a numerical analysis <cit.> and here
we have proved it analytically.
Although the higher-order TDI combinations have been derived using
analytic techniques, they could have also been formulated using
matrices. This would have resulted in the same higher-order TDI
observables derived here albeit numerically
<cit.>. This implies that
representations of operators using matrices lend themselves to easy
numerical manipulations.
It is important to note that currently planned GW missions do not need
to cancel laser noise terms quadratic in the velocities or linear in
the accelerations because of their benign inter-spacecraft relative
velocities (≈ 10 m/s)
<cit.>. However, future
interplanetary missions capable of measuring inter-spacecraft relative
Doppler of 10 km/s or larger will need to synthesize third-order
TDI combinations to suppress the laser noise to the required levels.
§ ACKNOWLEDGMENTS
M.T. thanks the National Institute for Space Research (INPE, Brazil)
for their kind hospitality while this work was
done. S.V.D. acknowledges the support of the Senior Scientist Platinum
Jubilee Fellowship from the National Academy of Science (NASI), India.
|
http://arxiv.org/abs/2307.05119v1 | 20230711085431 | Independent domination versus packing in subcubic graphs | [
"Eun-Kyung Cho",
"Minki Kim"
] | math.CO | [
"math.CO"
] |
In 2011, Henning, Löwenstein, and Rautenbach observed that the domination number of a graph is bounded from above by the product of the packing number and the maximum degree of the graph.
We prove a stronger statement in subcubic graphs: the independent domination number is bounded from above by three times the packing number.
§ INTRODUCTION
Throughout this paper, we consider only finite graphs.
We say a graph is dominated by a vertex subset A if every vertex of the graph is either in A or is adjacent to a vertex in A. In this situation, we say A is a dominating set of G. The domination number of G is the size of a minimum dominating set of G, and is denoted by γ(G). Understanding the domination number of graphs is one of the most fundamental topics in graph theory.
An independent dominating set of G is a dominating set of G whose vertices are not adjacent to each other, that is, it is an independent set that dominates G. Analogously, the independent domination number of G, denoted by i(G), is defined as the size of a minimum independent dominating set of G. Independent dominating sets can be understood as maximal independent sets, and the independent domination number is the size of a minimum maximal independent set. The independent domination number of graphs has been extensively studied since the 1960s <cit.>. See <cit.> for an overview of independent domination numbers of graphs.
Obviously, i(G) ≥γ(G) for every graph G, and there has been a number of results comparing i(G) to γ(G). See, for example, <cit.>.
Meanwhile, the domination number of a graph can be viewed as an analogue of a covering number, the number of spherical balls needed to cover a given space.
To understand this, regard a closed neighborhood of a vertex as a graph analogue of a unit ball in the plane.
The covering number has often been studied in comparison with the spherical packing number, the number of spherical balls that we can put in a given space without any overlap.
In a similar point of view, we can consider a graph analogue of the packing number.
A packing (sometimes called a 2-packing) of a graph G is a vertex set whose closed neighborhoods are pairwise disjoint, or equivalently, whose pairwise distance in G is at least 3.
A maximal packing of G is a packing in G that is not properly contained in any larger packing of G.
The packing number of G is the maximum size of a packing in G, and is denoted by ρ(G).
The study of packings in relation to dominating sets in graphs has a long history, dating back to the 1970s <cit.>. See also <cit.> for an overview of research in this direction.
Our focus is to reveal a relation between the independent domination number and the packing number in graphs.
As the first step toward this direction, we prove the following:
Let G be a graph where every vertex has degree at most 3.
Then for every maximal packing S of a graph G, there is an independent dominating set of size at most 3|S|.
Theorem <ref> was motivated by an observation by Henning, Löwenstein, and Rautenbach <cit.>
that says for every graph G with δ(G) ≥ 1, the neighborhood N_G(S) of every maximal packing S of G is a dominating set of G. This implies that γ(G) ≤Δ(G)ρ(G) when G is a graph with maximum degree Δ(G), and they also characterized when the equality holds for Δ(G) = 3: if G is a connected graph with Δ(G) ≤ 3, then γ(G) =3 ρ(G) if and only if G ∈{H_1, H_2, H_3}, where H_i's are the graphs depicted in Figure <ref>.
Note that H_1 and H_2 are the only cubic non-planar graphs of order 8.
The graph H_2 is the Wagner graph, and H_3 is the Petersen graph.
Here is an immediate corollary of the main theorem:
For every subcubic graph G, i(G) ≤ 3 ρ(G).
Observe that the inequality in Corollary <ref> is also tight by graphs H_1, H_2, H_3 in <cit.>. In addition, we observe that i(G) = 3ρ(G) when G is a complete bipartite graph K_3,3.
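Since the graphs involved are small, both tightness examples whose structure is fully specified in the text, namely the Petersen graph H_3 and K_{3,3}, can be verified by exhaustive search. The following Python sketch (plain brute force, no graph library; it is a quick sanity check only and plays no role in the proofs) computes ρ(G) and i(G) for these two graphs, and prints ρ = 1 and i = 3 for both.

from itertools import combinations
from collections import deque

def petersen():
    E = [(0,1),(1,2),(2,3),(3,4),(4,0),          # outer 5-cycle
         (0,5),(1,6),(2,7),(3,8),(4,9),          # spokes
         (5,7),(7,9),(9,6),(6,8),(8,5)]          # inner pentagram
    return 10, E

def k33():
    return 6, [(a, b) for a in range(3) for b in range(3, 6)]

def adjacency(n, E):
    adj = [set() for _ in range(n)]
    for u, v in E:
        adj[u].add(v); adj[v].add(u)
    return adj

def distances(adj, s):
    # BFS distances from s
    d = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in d:
                d[w] = d[u] + 1
                q.append(w)
    return d

def packing_number(n, adj):
    dist = [distances(adj, v) for v in range(n)]
    best = 1
    for k in range(1, n + 1):
        if any(all(dist[u][v] >= 3 for u, v in combinations(S, 2))
               for S in combinations(range(n), k)):
            best = k
    return best

def independent_domination_number(n, adj):
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            S = set(S)
            independent = all(adj[u].isdisjoint(S) for u in S)
            dominating = all(u in S or adj[u] & S for u in range(n))
            if independent and dominating:
                return k

for name, (n, E) in (("Petersen", petersen()), ("K_{3,3}", k33())):
    adj = adjacency(n, E)
    rho = packing_number(n, adj)
    i = independent_domination_number(n, adj)
    print(f"{name}: rho = {rho}, i = {i}, i == 3*rho: {i == 3 * rho}")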
Our paper is organized as follows.
In Section <ref>, we indicate notation and provide an observation and a lemma that are used in the proof of Theorem <ref>.
The proof of Theorem <ref> is presented in Section <ref>, and we list some remarks and questions in Section <ref>.
§ PRELIMINARIES
For a positive integer k, let [k] be the set of all positive integers that are at most k.
That is, [k] = {1, 2, …, k}.
For sets S and T such that S ∩ T = ∅, we use S ⊔ T to denote the disjoint union of S and T.
Let G be a simple and undirected graph. The vertex set and the edge set of G are denoted by V(G) and E(G), respectively.
For v ∈ V(G), we denote by N_G(v) the neighborhood of v in G and by N_G[v] the closed neighborhood of v in G, that is, N_G[v] = N_G(v) ∪{v}.
The degree of v in G is denoted by _G(v), and the maximum degree and the minimum degree of G are denoted by Δ(G) and δ(G), respectively, that is,
Δ(G) = max{_G(v): v∈ V(G)} and δ(G) = min{_G(v): v∈ V(G)}.
For S ⊆ V(G), let N_G(S) = ⋃_v ∈ SN_G(v), and N_G[S] = ⋃_v ∈ S N_G[v].
Also, we use G[S] to denote the subgraph of G induced by S.
The length of a path in G is the number of edges in the path.
The end vertex of a path in G is a vertex which is incident with exactly one edge in the path. So each path has exactly two end vertices.
For u, v ∈ V(G), a shortest path from u to v in G is a path having minimum length among all paths whose end vertices are u and v. The length of such a path is called the distance between u and v in G, and is denoted by d_G(u,v).
Here are simple facts that allow us to consider only connected graphs when proving Theorem <ref>.
Let G be a graph with two components G_1 and G_2.
(1) A set S is a maximal packing in G if and only if V(G_1) ∩ S is a maximal packing in G_1, and V(G_2) ∩ S is a maximal packing in G_2.
(2) A set I is an independent dominating set of G if and only if V(G_1) ∩ I is an independent dominating set of G_1, and V(G_2) ∩ I is an independent dominating set of G_2.
In a graph, edges are unordered pairs of vertices. If we replace the edges of a graph with ordered pairs of vertices, it gives us a directed graph, or a digraph. The edges of a digraph are called arcs.
Let D be a digraph with vertex set V(D) and arc set A(D).
For a, b ∈ V(D), we use (a,b) to denote an arc from a to b.
A digraph is antisymmetric if at most one among {(a,b), (b,a)} is in A(D) for each a,b ∈ V(D).
For v ∈ V(D), the in-neighbor of v is a vertex u in D such that (u,v) ∈ A(D), and the out-neighbor of v is a vertex w in D such that (v,w) ∈ A(D).
The in-degree of v is the number of in-neighbors of v, and is denoted by ^-_D(v).
The out-degree of v is the number of out-neighbors of v, and is denoted by ^+_D(v).
Note that ∑_v ∈ V(D)^-_D(v) = ∑_v ∈ V(D)^+_D(v).
A directed path in D is a path v_1v_2 … v_k in D such that (v_i,v_i+1) ∈ A(D) for all i ∈ [k-1].
An orientation D of a graph G is an antisymmetric digraph whose underlying graph is G.
We will use the following lemma in the proof of Theorem <ref>. This is already known, for example, by Theorem 2 in <cit.>, but we include a proof for the completeness of this manuscript.
Every connected multigraph with minimum degree at least 2 admits an orientation with no sources.
Let H be a multigraph with minimum degree at least 2.
For an orientation D of H, let s(D) be the number of sources of D.
It is sufficient to show that for every orientation D of H with s(D) ≥ 1, there is another orientation D' of H with s(D') < s(D).
Suppose there is a source, say v, of D.
Let Reach^+_D(v) be the set of all vertices that is an endpoint of a directed path starting from v.
Suppose all vertices in Reach^+_D(v) have in-degree at most 1.
This implies that each of them has out-degree at least 1.
Let K be the subgraph of D induced by X := {v}∪Reach^+_D(v).
Then,
|X|-1 = |Reach^+_D(v)| ≥∑_x ∈ Xd^-_K(x) = ∑_x ∈ Xd^+_K(x) ≥ |X|,
which is a contradiction.
Now we may assume that there is w ∈Reach^+_D(v) whose in-degree is at least 2. Let P be a directed path in D that starts from v and terminates at w. We define D' as the orientation of H obtained from D by reversing the orientation of all edges in P.
Then we have _D'^-(v) = 1, so v is not a source in D'.
So is w; since _D^-(w) ≥ 2 and _D^+(w) ≥ 0, the above process makes _D'^-(w) ≥ 1 and _D'^+(w) ≥ 1.
For all vertices other than v and w, including all intermediate vertices in P, the in-degree and out-degree are preserved.
This implies that s(D') < s(D) as desired.
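The source-removal step in the proof above is constructive and is easy to turn into code. The following Python sketch is an illustrative implementation on a small hand-made example (two triangles joined by an edge, a connected graph of minimum degree 2, which is our own choice and not taken from the paper): it repeatedly picks a source, walks along a shortest directed path to a vertex of in-degree at least 2, and reverses that path, until no source remains.

from collections import deque

def in_degree(arcs, v):
    return sum(1 for (a, b) in arcs if b == v)

def shortest_directed_path(arcs, start, targets):
    # BFS over directed arcs from start; return a path to the nearest target
    parent = {start: None}
    q = deque([start])
    while q:
        u = q.popleft()
        if u in targets and u != start:
            path = [u]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return list(reversed(path))
        for (a, b) in arcs:
            if a == u and b not in parent:
                parent[b] = u
                q.append(b)
    return None  # cannot happen for connected graphs of minimum degree >= 2

def remove_sources(arcs, vertices):
    arcs = list(arcs)
    while True:
        sources = [v for v in vertices if in_degree(arcs, v) == 0]
        if not sources:
            return arcs
        v = sources[0]
        targets = {w for w in vertices if in_degree(arcs, w) >= 2}
        path = shortest_directed_path(arcs, v, targets)
        # reverse every arc along the path v -> ... -> w
        for a, b in zip(path, path[1:]):
            arcs.remove((a, b))
            arcs.append((b, a))

# two triangles joined by an edge; vertex 0 is a source in the initial orientation
vertices = range(6)
orientation = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
fixed = remove_sources(orientation, vertices)
print(fixed)
print("sources left:", [v for v in vertices if in_degree(fixed, v) == 0])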
§ PROOF OF THEOREM <REF>
If G consists of a single vertex, then the statement is obvious.
Thus, we may assume that there are at least two vertices in G.
By Observation <ref>, it is sufficient to show that the statement holds for connected graphs.
Let G be a connected subcubic graph on a finite vertex set V,
and S be a maximal packing of G.
Let N=N_G(S) and R=V∖N_G[S].
Note that since S is a packing of G, every vertex in N has exactly one neighbor in S.
We first observe that N is a dominating set of G.
Since G is connected, every vertex in S must have a neighbor in N.
On the other hand, by the maximality of S, every vertex in R must be adjacent to a vertex in N.
Therefore, N dominates the whole graph G.
We will modify N to obtain an independent set in G of size at most 3|S| that still dominates G.
Let H = G[N].
Since every vertex in N has a neighbor in S and has degree at most 3 in G, the induced subgraph H has maximum degree at most 2.
This implies that H is the disjoint union of cycles C_1,…,C_p, paths P_1,…,P_q, and isolated vertices.
We may regard the set of P_i's of length 1 in H as an induced matching M.
Let W be the set of endpoints of the edges in M, that is, H[W] = M.
Given B ⊆ N, let X(B) be the set of all vertices s in S such that |N_G(s)| = 3 and N_G(s) ⊆ (N∖ B) ∩ W. We take a set A ⊆ N that satisfies the following:
* For each path P_i of length at least 2, A contains the two endpoints of P_i.
* A is a maximal independent set in H satisfying (i).
* |X(A)| is minimum among all choices satisfying (i) and (ii).
Note that, by (ii), A contains all isolated vertices of H. We claim that the minimality assumption of (iii) actually implies that X(A) is an empty set.
|X(A)| =0.
Suppose |X(A)| ≥ 1. Draw a subcubic multigraph Q on X(N∖ W) such that uv ∈ E(Q) if and only if there is an edge u'v' ∈ M such that uu', vv' ∈ E(G).
Let Q' be the union of all components of Q that contains a vertex in X(A).
Note that V(Q') is nonempty since |X(A)| ≥ 1.
See Figure <ref> for an illustration of the construction of Q and Q'.
We will show that Q' is a cubic graph to apply Lemma <ref>.
Suppose to the contrary that v ∈ V(Q') and _Q'(v) ≤ 2.
Let w be the vertex in X(A) that has minimum distance to v in Q.
Here, we assume w = v if v ∈ X(A).
Let w_1 w_2 … w_k be the shortest path from v to w in Q', where w_1 = v and w_k = w.
Clearly, by the minimality of k, each w_i should not be in X(A) for i∈[k-1].
For each i ∈ [k-1], let x_iy_i be an edge in M such that x_iw_i, y_iw_i+1∈ E(G).
Consider the path w_1x_1y_1w_2 … x_k-1y_k-1w_k in G.
Recall that, since v ∈ X(N∖ W), v has three neighbors in N, say N_G(v) = {u_1, u_2, x_1}, and there are v_1, v_2 ∈ N such that u_1v_1, u_2v_2∈ M.
Since _Q'(v) ≤ 2, without loss of generality, we may assume that v_2 is not adjacent to a vertex in X(N∖ W).
Let
A' = (A ∖ ({v_2}∪{x_i : i ∈ [k-1]})) ∪{u_2}∪{y_i : i ∈ [k-1]}.
See Figure <ref> for an illustration of A' obtained from A in Figure <ref>.
Clearly, A' satisfies (i) and (ii): note that A' is obtained from A by replacing some vertices in A ∩ W with the other endpoints in the corresponding edges in M.
However, now we have X(A') = X(A) ∖{w}, which is a contradiction to the minimality assumption of (iii) of A.
Thus we conclude that Q' is a cubic graph, and in particular, Q' is a multigraph having minimum degree at least 2.
Then by Lemma <ref>, there is an orientation D of Q' with no source.
For each (u,v) ∈ A(D), there is an edge u'v' in M such that uu', vv' ∈ E(G).
We let A” = (A ∖{u' : (u,v) ∈ A(D)}) ∪{v' : (u,v) ∈ A(D)}.
It is clear that A” satisfies (i) and (ii) because A” is obtained from A by replacing some vertices in A ∩ W with the other endpoints in the corresponding edges in M.
On the other hand, since D has no source, |X(A”)| =0 < 1 ≤ |X(A)|, which is again a contradiction to the minimality assumption of (iii) of A.
Therefore, it must be |X(A)| = 0.
Now, by the choice of A, N is dominated by A in G.
We finally modify A to dominate all vertices in S ∪ R.
Let T be the set of all vertices in R that are not dominated by A.
We construct a set  by adding to A
* the set, say S', of all vertices s ∈ S that are not dominated by A, that is, N(s) ⊆
N∖ A, and
* an independent dominating set, say Z, of G[T].
It is obvious that S' is an independent set.
Clearly, no vertices of A and Z are adjacent to a vertex in S'.
Also, by the definition of Z, no vertices of Z are adjacent to A.
Since S' ∪ A dominates V ∖ T and Z dominates T, Â is an independent dominating set of G.
We finally claim that  is the desired set, that is |Â| ≤ 3|S|.
Let S_i be the set of all vertices in S' of degree i.
We will show that there is a one-to-one function from  to N ⊔ S_1 ⊔ S_2, which implies |Â| ≤ |N|+|S_1|+|S_2|.
Then, since |N| ≤ |S_1| + 2|S_2| + 3|S_3|, we have
|Â| ≤ |N| + |S_1| + |S_2| ≤ 2|S_1| + 3|S_2| + 3|S_3| ≤ 3|S|.
Let s ∈ S_3.
By the assumption (i) and (ii) of A and the observation |X(A)| = 0, we note that at least one of the neighbors, say s^*, of s has two neighbors in N.
For each r ∈ Z, there must be a neighbor, say r^*, of r that is in N.
Now define a function f: Â→ N⊔ S_1 ⊔ S_2 by
f(v) = v for v ∈ A ⊔ S_1 ⊔ S_2, and
f(v) = v^* for v ∈ S_3 ⊔ Z.
Note that for every pair of r^* and s^*, they do not belong to A and that they cannot be the same since s^* does not have a neighbor in R.
Thus, it is clear that f is a well defined one-to-one function.
This completes the proof.
§ REMARK
We have investigated how small the minimum independent number can be when the packing number is given in subcubic graphs.
The first question we can ask is a generalization of Theorem <ref>, as an analogue of the observation by Henning, Löwenstein and Rautenbach that γ(G) ≤Δ(G)ρ(G) for every graph.
Is i(G) ≤Δ(G) ρ(G) for every graph G?
For Δ(G) ≤ 2, Question <ref> is obviously true, and for Δ(G) = 3, it is answered to be true by Theorem <ref>.
Thus, the first interesting case is when Δ(G) = 4.
Another interesting question is to characterize all graphs where Theorem <ref> is tight.
As we observed in the introduction, H_1, H_2, H_3 and K_3,3 in Figure <ref> has independent domination number exactly three times the packing number.
However, we do not know whether they are the only examples for the tightness of Theorem <ref>.
If G is a connected graph with Δ(G) ≤ 3, then for which graphs i(G) = 3 ρ(G) hold?
Once we know the answer for Question <ref>, it is worth asking whether the ratio i(G)/ρ(G) becomes strictly smaller than 3 for subcubic graphs if we exclude those satisfying i(G) = 3 ρ(G).
Especially, this is related to the following conjecture by Henning, Löwenstein, Rautenbach <cit.>:
Every connected subcubic graph G except the three graphs H_1, H_2, H_3 satisfies γ(G) ≤ 2 ρ(G).
It was shown in <cit.> that the conjecture is true for claw-free graphs, the graphs with no induced subgraph isomorphic to K_1,3.
Since the independent domination number and the domination number are the same in claw-free graphs, it immediately follows that i(G) ≤ 2ρ(G) for subcubic claw-free graphs.
As a new step to confirm Conjecture <ref>, we can consider subcubic graphs excluding all graphs that belongs to the answer for Question <ref>.
It is quite ambitious, but we can also try to figure out whether i(G) ≤ 2ρ(G) for such graphs, as a stronger analogue of Conjecture <ref>.
§ ACKNOWLEDGEMENT
Part of this research was conducted during the Winter Workshop in Combinatorics that was held in South Korea from January 30, 2023 to February 3, 2023, organized by Ilkyoo Choi, Minki Kim, and Boram Park.
|
http://arxiv.org/abs/2307.03890v1 | 20230708034628 | Ground-Challenge: A Multi-sensor SLAM Dataset Focusing on Corner Cases for Ground Robots | [
"Jie Yin",
"Hao Yin",
"Conghui Liang",
"Zhengyou Zhang"
] | cs.RO | [
"cs.RO"
] |
Ground-Challenge: A Multi-sensor SLAM Dataset Focusing on Corner Cases for Ground Robots
Jie Yin^†, Hao Yin^†, Conghui Liang^* and Zhengyou Zhang (IEEE Fellow & ACM Fellow)
Authors^† are independent researchers. The remaining authors are with Tencent Robotics X Lab, Shenzhen, China.
^* Corresponding Author: Conghui Liang ([email protected])
August 12, 2023
========================================================================================================================================================================================================================================================================
High-quality datasets can speed up breakthroughs and reveal potential developing directions in SLAM research.
To support the research on corner cases of visual SLAM systems,
this paper presents Ground-Challenge: a challenging dataset comprising 36 trajectories with diverse corner cases such as aggressive motion, severe occlusion, changing illumination, few textures, pure rotation, motion blur, wheel suspension, etc. The dataset was
collected by a ground robot with multiple sensors including an RGB-D camera, an inertial measurement unit (IMU), a wheel odometer and a 3D LiDAR. All of these sensors were well-calibrated and synchronized, and their data were recorded simultaneously.
To evaluate the performance of cutting-edge SLAM systems, we tested them on our dataset and demonstrated that these systems are prone to drift and fail on specific sequences.
We will release the full dataset and relevant materials upon paper publication to benefit the research community. For more information, visit our project website at https://github.com/sjtuyinjie/Ground-Challengehttps://github.com/sjtuyinjie/Ground-Challenge.
Data Sets for SLAM, Data Sets for Robotic Vision
§ INTRODUCTION
Intelligent ground robots have been widely used in industrial production and daily life, such as logistics, cleaning, warehouses, security, and food delivery. Navigation is the fundamental capability these robots need to execute such diverse tasks. To achieve reliable navigation, the visual SLAM (Simultaneous Localization and Mapping) problem has been researched for decades, with quite a few classical methods proposed <cit.>.
A recent developing trend in visual SLAM is low-cost multi-sensor fusion, which has been verified to be a practical approach <cit.>
to enhance the robustness to diverse scenarios. Different sensors can complement each other, maximizing the perceptual awareness of environments. One of the best examples is that visual-inertial odometry (VIO) algorithms can significantly improve the tracking stability and accuracy in aggressive motion and textureless scenarios.
While VIO systems have performed well in most cases, <cit.> has proven that this does not apply to ground vehicles.
For generic movement patterns, a VIO system has only four unobservable directions (three for global translation and one for global yaw). However, ground vehicles are restricted from moving in a 2D plane, mostly along a straight line or a circular arc, and thus the IMU is not sufficiently activated.
Therefore, the VIO system on the ground robot will suffer from additional DoF unobservability, such as the scale. To address this issue, <cit.> extends VINS-Mono <cit.> to
incorporate low-frequency wheel-encoder data and keep the scale observable. Similarly, <cit.> proposes a RGB-D Encoder SLAM system for differential-drive robots. Most recently, <cit.> proposes an optimization-based visual-inertial-wheel tightly coupled odometry, which claims to work robustly in dark or overexposed conditions. Nonetheless, its performance has not been tested on any public dataset with ground truth trajectories.
We believe that progress in SLAM, like in the AI field, is highly data-driven <cit.>.
Although there have been extensive public datasets available to evaluate different SLAM algorithms, most of these datasets are outdated and do not challenge cutting-edge SLAM algorithms. In our opinion, datasets focusing on challenging cases can more efficiently reveal the defects and limitations of existing algorithms. We notice that corner case detection in autonomous driving receives extensive attention from researchers <cit.> <cit.> because such cases could easily cause the navigation system to drift. Similarly, once the localization module of a robot fails, it might cause industrial accidents and even pose potential threats to human safety. Nonetheless, to our knowledge, there is currently not much literature discussing the corner cases of robot navigation, which is not conducive to the safety of real-world robot applications.
To fill this gap, we present a novel SLAM dataset for ground robots, which aims to challenge existing cutting-edge SLAM systems with corner cases and thus promotes the progress of the multi-sensor fusion SLAM algorithm.
The challenges of our datasets lie in two areas: specific movement patterns and sensor failures, which will be elaborated in subsequent sections. Some scenarios covered in our datasets are visualized in Figure <ref>. Our major contributions are summarized as follows:
* We collect a novel visual SLAM dataset for ground robots with a rich pool of sensors in diverse environments both indoors and outdoors. Particularly, the dataset covers a series of challenging sequences including sensor failures and specific movement patterns.
* State-of-the-art SLAM algorithms of different settings are tested on our benchmark. And the results indicate these systems are not robust enough for situations such as sensor failures.
* To facilitate the research on corner cases of robot navigation, we will release the full dataset with ground truth trajectories and the configuration file of each tested algorithm upon paper publication.
§ RELATED WORKS
§.§ SLAM Datasets for Ground Robots
Most existing SLAM datasets are collected by UAVs <cit.> or cars <cit.>, but only a few are targeted at ground robots. For instance, Rawseeds <cit.> and UTIAS<cit.> provide RGB images only, thus making them unsuitable for evaluating multi-sensor fusion systems. The Rosario dataset <cit.> is rich in sensor variety, yet is specifically designed for agricultural environments. M2DGR <cit.> captures diverse indoor and outdoor scenarios, including some challenging scenes like elevators and darkrooms, but doesn't contain wheel odometer information which is essential for multi-sensor fusion SLAM algorithms due to its low cost and high precision. OpenLORIS<cit.> offers rich sensor types in visual challenging scenarios such as highly dynamic markets and poorly exposed corridors, but wheel challenges or motion challenges are not included.
§.§ Corner Cases
Corner cases, i.e., extreme and non-predictable situations, are a popular research topic in autonomous driving <cit.>. Although infrequent, these cases can potentially threaten the security and reliability of autonomous navigation systems. Corner cases exist in robot navigation tasks as well. To address such challenging scenarios, researchers have proposed various methods, such as RGB-D SLAM <cit.> and DS-SLAM <cit.>, to handle dynamic environments, and GVINS <cit.> to deal with degenerate cases including low-speed movement, less than four visible satellites, and GNSS-denial environments. Additionally, <cit.> proves that their method is robust in aggressive motions and a visual texture-less white wall. Nonetheless, we note that there are still plenty of corner cases that tend to be overlooked, such as wheel slippage, motion blur, and complete visual occlusion. There is a lack of SLAM datasets specifically designed for studying these corner cases, which is a gap yet to be filled. To sum up, it is urgent and critical to collect a novel SLAM dataset with rich sensor types, precise calibration, and sufficient challenge to support studies on corner cases, particularly sensor failures.
§ THE GROUND-CHALLENGE DATASET
§.§ Sensor setup
We construct a ground robot for data collection and the sensor locations on the robot are shown in Figure <ref>. The chassis is equipped with a front-view VI-Sensor (Visual-Inertial Sensor) that captures RGB and depth images along with 6-axis IMU's measurements. Driven by two driving wheels providing odometer information and four assisting wheels, the robot also has a high-precision 9-axis Xsens IMU and a 16-beam 3D LiDAR.
The ground truth trajectories and point clouds are generated by the Velodyne LiDAR and the Xsens IMU using Fast-LIO2 <cit.>, a state-of-the-art LiDAR-based SLAM system. To evaluate its performance, we compared the high-precision trajectories generated by a motion capture system with 16 infrared cameras to those generated by Fast-Lio2. The experiment revealed that Fast-LIO2 can reach a positioning accuracy of 3cm in a small-scale (15m x 15m) indoor room. Additionally, as reported in <cit.>, Fast-LIO2 can achieve less than 0.1m end-to-end error in an outdoor trajectory spanning 1000 meters. Thus, considering that it is difficult for visually-based SLAM algorithms to achieve similar accuracy in challenging scenarios, we use the result of Fast-LIO2 as the pseudo-ground-truth trajectory.
§.§ Synchronization and Calibration
We capture all the data using the ROSbag tool in the Robot Operating System (ROS). The RGB camera and 6-axis IMU embedded in the Realsense D435I are hard-synchronized, while the depth images are pixel-by-pixel aligned to the RGB images. The 3D LiDAR and 9-axis IMU are software-synchronized by triggering data capture at the same instance. To calculate the camera intrinsics of pinhole cameras, we use the MATLAB Camera Calibration Toolbox. To calibrate the internal parameters of the IMU, we use the toolbox from <cit.>, which includes the white noise and random walk of both the gyroscopic and accelerometer measurements. We choose the IMU frame as the reference to calibrate the extrinsic parameters (relative poses) between sensors, and employ the toolbox from <cit.> for calibrating the extrinsic parameters between cameras and IMU.
§.§ Data collection
We provide an overview of our dataset in Table <ref>. All data was captured using the Rosbag tool within the Robot Operating System (ROS). The recording process is as follows: First, we recorded Office and Room sequences, where the robot moves slowly in a well-lit and textured office or room respectively, to test the performance of different algorithms in normal situations. Subsequently, we designed a series of corner case experiments from three aspects: visual challenge, wheel odometer challenge, and particular movement pattern, which are presented as follows:
§.§.§ Visual Challenge
In our experiments, we manipulate the robot to move in a room with poor illumination (Darkroom sequences), back and forth in front of walls lacking texture (Wall sequences), and through scenarios of varying degrees of occlusion (Occlusion sequences). Figure <ref> (a) shows sequences Occlusion1∼2, which involves a person walking in front of the robot and causing intermittent partial occlusion. Figure <ref> (b) displays sequence Occlusion3, in which the camera is covered with the palm repeatedly. In sequence Occlusion4 (Figure <ref> (c)), a piece of black tape is attached to the camera's lens to completely block its view, disabling feature extraction and matching for visual SLAM. Furthermore, Motionblur sequences are generated by rapidly translating and rotating the robot, creating motion blur for cameras (Figure <ref> (d)).
§.§.§ Wheel Odometer Challenge
The Hall and Loop sequences are collected in a hall with smooth ground and a heavily carpeted aisle loop, respectively, where the wheels slip significantly. Moreover, we record Roughroad sequences to test the performance of the localization algorithm on rough roads.
§.§.§ Particular Moving Patterns
In the Sequences Corridor1 and Corridor2, the robot moves forward in a zigzag shape and straight forward, respectively. In the zigzag route, motion blur and less overlapping between adjacent image frames will lead to errors in feature matching.
In the Rotation sequence, the robot only rotates and hardly translates, which makes it difficult for vision-based algorithms to estimate the depth of feature points by triangulation. In the Static sequences, the robot stands still on a bracket, and we control its wheels to move in different directions through the handle. This experiment aims to test whether SLAM systems coupled with the wheel odometer can work well when the robot wheel is suspended.
Finally, we operate the robot from a flat surface to another, passing through a slope. In this experiment, since the wheel odometer only provides two-dimensional speed observations, it could be misleading to estimate three-dimensional trajectories.
§ EVALUATION
The features of all the sequences are described on our project website. We evaluated some SLAM systems with different sensor configurations on twelve representative sequences from our dataset. The tested algorithms are ORB-SLAM3 <cit.>, an optimization-based SLAM system; VINS-Mono <cit.>, one of the state-of-the-art monocular visual-inertial systems; VINS-RGBD <cit.>, a fusion algorithm of RGB-D and IMU information based on the VINS-Mono <cit.> framework; and VIW-Fusion <cit.>, a tightly-coupled visual-inertial-wheel system featuring online extrinsic calibration and wheel-aided initialization. Also, we use an EKF algorithm <cit.> for fusion of IMU and wheel odometer.
The EVO tool <cit.> was used to align all the estimated trajectories with ground truth trajectories to obtain the ATE RMSE <cit.>.
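For readers who want to reproduce the evaluation protocol, the idea behind the ATE RMSE metric is simple: the estimated trajectory is rigidly aligned to the ground truth by a least-squares (Kabsch/Umeyama-style) fit, and the root-mean-square of the remaining translational errors is reported. The following Python sketch is a minimal re-implementation of that idea for synchronized position sequences; it is not the EVO code itself, and the trajectory data below are synthetic placeholders.

import numpy as np

def align_se3(est, gt):
    """Least-squares SE(3) alignment (Kabsch/Umeyama without scale).
    est, gt: (N, 3) arrays of associated positions."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T                      # rotation mapping est -> gt
    t = mu_g - R @ mu_e
    return R, t

def ate_rmse(est, gt):
    R, t = align_se3(est, gt)
    err = (R @ est.T).T + t - gt            # residual translations after alignment
    return np.sqrt(np.mean(np.sum(err**2, axis=1)))

# toy usage with a synthetic trajectory (placeholder data, not from the dataset)
rng = np.random.default_rng(0)
gt = np.cumsum(rng.normal(size=(500, 3)) * 0.05, axis=0)
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
est = (Rz @ gt.T).T + np.array([1.0, -2.0, 0.5]) + rng.normal(size=gt.shape) * 0.02
print("ATE RMSE:", ate_rmse(est, gt))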
The quantitative results are shown in Table <ref>, with the estimated trajectories in 2D plotted in Figure <ref>. Since most of the selected sequences are highly challenging (even with sharp turns), ORB-SLAM3 (both the monocular-inertial and RGBD-inertial versions) performed poorly on most of our test sequences, with frequent tracking failures (less than 50% of successfully tracked frames), initialization failures, or scale drift.
In contrast, SLAM algorithms with multi-sensor fusion (like VIW-Fusion <cit.>) achieved better localization results but failed in some specific scenarios as well. We discuss the experiment results in detail as follows:
Normal Situation
The ATE RMSE results on Sequence Office3 indicate that existing localization methods can perform well when the motion mode matches the assumptions of these algorithms and all the sensors work well.
Vision Challenge
In Sequence Darkroom2 and Motionblur3, VINS-Mono <cit.> and VINS-RGBD <cit.> drift a lot due to visual failures, while Wheel odometer based algorithms work more robustly in this case.
In Sequence Occlusion4, all the vision-based methods including VIW-Fusion <cit.> fail to initialize because of poor feature extraction. This finding indicates that VIW-Fusion <cit.> has not been adequately designed to handle adverse conditions. A more prudent strategy may be to combine the wheel odometer and IMU to output a trajectory when a visual sensor failure is detected.
Wheel Odometer Challenge
In the sequences Roughroad3 and Slope1, vision-based systems perform worse than wheel odometer-based algorithms due to inaccurate scale estimation in aggressive motion. In Sequence Hall1, VINS-Mono <cit.> and VINS-RGBD <cit.> drift significantly due to ground reflection and faraway feature points. Here, VIW-Fusion <cit.> maintains satisfactory positioning performance even with slight wheel slippage, demonstrating the advantages and necessity of multi-sensor fusion in complex scenarios. However, when the wheels slip more severely in Sequence Loop2, the significant deviation caused by the wheel odometer increases the localization error of estimated trajectories. This can be attributed to two main reasons: current algorithms lack the ability to detect wheel slippage, and the angular velocity provided by the wheel speedometer is not accurate, leading to the long-term divergence of the estimated trajectory. To reduce the accumulation of errors, it is suggested that IMU's angular velocity measurement be used instead of the wheel odometer's.
Particular Movement Patterns
In Sequence Corridor1, the zigzag movement of the robot not only causes feature extraction to fail but also leads to severe wheel slippage. Therefore, none of the tested algorithms can accurately estimate the trajectory. In Sequence Rotation1, pure rotation causes severe errors in depth estimation by VINS-Mono's triangulation, while the remaining tested systems perform well thanks to measurements from other sensors. Finally, in Sequence Static1, VIO systems cannot be initialized successfully due to the lack of IMU excitation. Since the wheels are still moving after suspension, the wheel odometer-based methods mistakenly conclude that the robot is in motion.
In summary, VINS-Mono <cit.> is most likely to generate catastrophic localization results in corner cases, and VINS-RGBD <cit.> can also inevitably fail when severe camera failures occur.
We have noticed that the wheel odometer alone can achieve good results in most situations, except for severe wheel slippage. Integrating the IMU and the wheel odometer through the EKF <cit.> can achieve higher accuracy than the raw odometer. Nonetheless, the trajectory of the EKF can shake violently in the initialization phase due to the inaccuracy in the initial covariance estimation (this part was manually eliminated in our experiment). VIW-Fusion <cit.> can achieve satisfying accuracy and robustness in most sequences, but its initialization in visual failure needs improvement. Furthermore, it lacks consideration for wheel slippage, and its adopted dead reckoning model will diverge in a long trajectory due to inaccurate angular velocity estimates.
The experiments conducted demonstrate the validity and value of our dataset as a benchmark for existing SLAM systems. The results further suggest that there is still much room for improvement in current cutting-edge multi-sensor fusion algorithms for real-world applications. Sensor failures, such as complete occlusion and wheel suspension, can be fatal for single-sensor-based methods; however, multi-sensor fusion systems should be designed to be more robust in these cases. For instance, we posit that a reliable visual-IMU-wheel system should be able to explicitly identify scenarios where visual observations are inaccurate and respond accordingly (e.g. disable visual information and rely only on wheel odometer and IMU). Nevertheless, to our knowledge, corner case identification and troubleshooting have been scarcely addressed in prior work. Therefore, we provide this dataset to support relevant researches.
§ CONCLUSION
We present Ground-Challenge, a novel ground robot dataset to encourage breakthroughs in multi-sensor fusion SLAM algorithms. Specifically, we have crafted a series of corner case experiments, including sensor failures in diverse environments, to challenge current cutting-edge SLAM systems. We have tested these systems on our dataset and analyzed their limitations in various scenarios, thus providing potential developing directions for SLAM. We are committed to continually updating our benchmark dataset. Specifically, we will mount 2D and 3D LiDAR on the robot, design experiments to invoke corner cases, and utilize higher-precision equipment such as motion capture systems to ensure accurate ground truth for LiDAR SLAM in our future work.
Acknowledgement
We thank Tencent Robotics X Lab for its support of this work.
|
http://arxiv.org/abs/2307.07232v1 | 20230714085310 | Envelopes of straight line families in the plane | [
"Takashi Nishimura"
] | math.DG | [
"math.DG",
"57R45, 58C25"
] |
Envelopes of straight line families in the plane
Takashi Nishimura
Research Institute of Environment and Information Sciences,
Yokohama National University,
Yokohama 240-8501, Japan
[email protected]
There is a widespread method to represent the envelope when a given
hyperplane family creates an envelope.
However, one sometimes encounters cases in which the widespread method
fails to represent the desired envelope precisely, and is left confused.
At the same time, one wants to find a correct method to draw the envelope
precisely.
In this article, focusing on straight line families in the plane,
an easy-to-understand explanation is given
of the recently discovered correct method to represent the envelope precisely.
Moreover, it is explained when and why the widespread method fails
to represent the precise shape of the envelope as well.
2020 Mathematics Subject Classification: 57R45, 58C25
=====
§ INTRODUCTION
We start from an elementary example.
Let f: ℝ→ℝ^2 be the mapping
defined by f(t)=(t, sin t).
The regular curve f
gives a parametrization of the non-singular curve
𝒞={(X, Y)∈ℝ^2 | Y=sin X}.
The affine tangent line L_t to 𝒞
at a point (t, sin t)
may be defined by
(X-t, Y-sin t)·(-cos t, 1)=0,
where the dot in the center stands for the standard scalar product of
two vectors (X-t, Y-sin t) and
(-cos t, 1) in the vector space ^2.
Since the straight line family {L_t}_t∈ℝ
is
the affine tangent line family to 𝒞, we believe
that the sine curve 𝒞 must be
an envelope of {L_t}_t∈ℝ.
Thus, by using the widespread method to represent
the envelope of {L_t}_t∈ℝ
(for the widespread method, for instance refer to <cit.>),
we try to confirm that 𝒞 is actually an envelope of
{L_t}_t∈ℝ.
Set
F(X, Y, t)=(X-t, Y-sin t)·(-cos t, 1)=
-cos t X+Y+tcos t - sin t.
We have the following.
𝒟 = {(X, Y)∈ℝ^2 | ∃ t∈ℝ such that
F(X, Y, t)=∂ F/∂ t(X, Y, t)=0}
= {(X, Y)∈ℝ^2 | ∃ t∈ℝ such that
-cos t X+Y+tcos t-sin t=sin t(X-t)=0}
= {(X, Y)∈ℝ^2 | Y=X-2kπ (k∈ℤ) or Y=-X+(2k+1)π (k∈ℤ)
or Y=sin X}
⫌ 𝒞.
Faced with the fact that 𝒞 is a proper subset of 𝒟,
we are confused.
In addition, we want to know a correct method
to represent 𝒞 precisely.
Let us review the correct method given in <cit.> in the case of
this example.
There are three steps. The first step is to normalize the defining equation
F=0. That is to say, replace
the defining equation F=0 with a new one G=0
having the form
G(X, Y, t)=
Xcosθ(t)+Ysinθ(t)-a(t).
Then, we have (for example)
G(X, Y, t)=
Xcosθ(t)+Ysinθ(t)-a(t)
=X·(-cos t)/√(cos^2 t +1)+Y·1/√(cos^2 t +1)
-(-tcos t+sin t)/√(cos^2 t+1).
The second step (the most important step)
is to find a C^∞ function b: ℝ→ℝ
satisfying
d a/d t(t)=b(t)dθ/d t(t).
Elementary calculations show
d a/d t(t)=sin t(t+cos tsin t)/(cos^2 t+1)^3/2,
dθ/dt(t)= -sin t/(cos^2 t+1).
Hence, we have b(t)=-(t+cos tsin t)/√(cos^2 t+1).
The final step is just to substitute
a(t)=(-tcos t+sin t)/√(cos^2t+1),
b(t)=-(t+cos t sin t)/√(cos^2 t+1), (cosθ(t), sinθ(t)) =
(-cos t/√(cos^2 t+1), 1/√(cos^2 t+1))
into
a(t)(cosθ(t), sinθ(t))+
b(t)(-sinθ(t), cosθ(t)).
Then, we have the desired parametrization of 𝒞 as follows.
a(t)(cosθ(t), sinθ(t))+
b(t)(-sinθ(t), cosθ(t))
= (-tcos t+sin t)/√(cos^2t+1) ·
(-cos t/√(cos^2 t+1), 1/√(cos^2 t+1))
+
(-(t+cos t sin t)/√(cos^2 t+1)) ·
(-1/√(cos^2 t+1), -cos t/√(cos^2 t+1))
= ((tcos^2 t-sin tcos t+t+cos tsin t)/(cos^2 t+1),
(-tcos t+sin t+tcos t+cos^2 tsin t)/(cos^2 t+1))
= (t(cos^2 t+1)/(cos^2 t+1),
sin t(1+cos^2 t)/(cos^2 t+1))
= (t, sin t)
= f(t).
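Before moving on, it is worth noting that the three steps above are easy to check numerically. The following Python sketch is a small sanity check (our own, not part of the argument): it evaluates θ(t) from the normalized coefficients, approximates d a/d t and dθ/d t by central differences on a parameter range that avoids the singular points t=kπ of the Gauss mapping, recovers b(t)=(d a/d t)/(dθ/d t), and confirms that a(t)ν(t)+b(t)ν(t)^⊥ returns (t, sin t) up to discretization error.

import numpy as np

t = np.linspace(0.2, 3.0, 400)          # avoid t = k*pi, where dθ/dt vanishes
D = np.sqrt(np.cos(t)**2 + 1.0)

cos_th, sin_th = -np.cos(t) / D, 1.0 / D          # Gauss mapping nu(t)
a = (np.sin(t) - t * np.cos(t)) / D               # normalized height function
theta = np.arctan2(sin_th, cos_th)

da = np.gradient(a, t)                            # central differences
dtheta = np.gradient(theta, t)
b_num = da / dtheta                               # creator b(t) = (da/dt)/(dθ/dt)
b_exact = -(t + np.cos(t) * np.sin(t)) / D

envelope = np.stack([a * cos_th + b_num * (-sin_th),
                     a * sin_th + b_num * cos_th], axis=1)
target = np.stack([t, np.sin(t)], axis=1)

print("max |b_num - b_exact| :", np.max(np.abs(b_num - b_exact)))
print("max |envelope - (t, sin t)| :", np.max(np.abs(envelope - target)))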
Let N, a: N→ℝ and ν: N→ S^n be
an n-dimensional C^∞ manifold without boundary,
a C^∞ function and a C^∞ mapping
respectively, where
S^n is the unit sphere in ℝ^n+1.
For any x∈ N, set
H_(ν(x), a(x))
={X∈ℝ^n+1 | X·ν(x)=a(x)}.
The problems on envelopes of hyperplanes
were classically studied (for instance see <cit.>).
Nevertheless, it is a surprizing fact that until very recently,
the basic problems on envelopes were
still wrapped in mystery (for instance, problems
in Problem <ref> below seemed to be unsolved).
In <cit.>, complete answers are given
to the following basic problems on envelopes created by
hyperplane families
ℋ={H_(ν(x), a(x))}_x∈ N.
(1) [Existence Problem]
Find a necessary and sufficient condition for
a given hyperplane family to create an envelope.
(2) [Uniqueness Problem]
Suppose that a given hyperplane family creates an envelope.
Then, find a necessary and sufficient condition for the envelope to be unique.
(3) [Representation Problem]
Suppose that a given hyperplane family creates an envelope.
Then, find a representing formula of the envelope.
This paper is an easy to understand expository article on the
solutions to Problem <ref> proved in <cit.>.
In order to concentrate on explaining
the core part of the solutions given in <cit.>,
n=1 is assumed hereafter in this article.
Namely, all answers to the following problems
are explained in this article.
(1) [Existence Problem]
Find a necessary and sufficient condition for
a given straight line family in the plane ℝ^2 to create an envelope.
(2) [Uniqueness Problem]
Suppose that a given straight line family in the plane ℝ^2
creates an envelope.
Then, find a necessary and sufficient condition for the envelope to be unique.
(3) [Representation Problem]
Suppose that a given straight line family in the plane ℝ^2
creates an envelope.
Then, find a representing formula of the envelope.
All answers (Theorem <ref>,
Theorem <ref> and Theorem <ref>)
to Problem <ref>
explained in this article are easily applicable to any concrete
straight line family.
For the proofs of Theorem <ref>, Theorem <ref>
and Theorem <ref>, see <cit.>.
This paper is organized as follows.
In Section 2, we review several definitions concerning
envelopes created by straight
line families. Section 3, Section 4 and Section 5 are devoted to
explaining Theorem 1 (Answer to Existence Problem of Problem <ref>),
Theorem <ref>
(Answer to Uniqueness Problem of Problem <ref>) and
Theorem <ref>
(Answer to Representation Problem of Problem <ref>)
respectively.
Finally, in Section 6, it is explained when and why the widespread method fails.
§ PRELIMINARIES
Any straight line L in the plane ℝ^2 may be defined
as follows, where θ, a are real numbers.
L={(X, Y)∈ℝ^2 | Xcosθ+Ysinθ=a}.
Any straight line family ℒ
in the plane ℝ^2 may be defined
as ℒ={L_(θ(t), a(t))}_t∈ℝ,
where θ, a: ℝ→ℝ are
C^∞ functions and L_(θ(t), a(t)) is
a straight line as follows.
L_(θ(t), a(t))=
{(X, Y)∈ℝ^2 | Xcosθ(t)+Ysinθ(t)=a(t)}.
Given a straight line family
ℒ={L_(θ(t), a(t))}_t∈ℝ,
the mapping ν: ℝ→ S^1 defined by
ν(t)= (cosθ (t), sinθ (t))
is called the Gauss mapping of
ℒ.
Let ℒ={L_(θ(t), a(t))}_t∈ℝ
be a straight line family
in the plane ℝ^2.
A C^∞ mapping f: ℝ→ℝ^2 is called
an envelope created by
ℒ
if the following two hold for any t∈ℝ.
(a) d f/d t(t)·ν(t) = 0,
(b) f(t)∈ L_(θ(t), a(t)).
Thus, an envelope is a C^∞ mapping giving a solution of the
first order linear differential equation (a) with one constraint condition
(b).
Let L_t and f: ℝ→ℝ^2 be as in Example <ref>.
Thus,
L_t={(X, Y)∈ℝ^2
| -cos t X+Y+tcos t-sin t=0},
f(t)=(t, sin t).
Then, it is easily seen that conditions (a), (b) in Definition <ref>
are satisfied. Hence, by definition,
f must be an envelope of the straight line family
{L_t}_t∈ℝ.
A mapping f: ℝ→ℝ^2 is called
a frontal curve if there exists a mapping ν: ℝ→ S^1
such that the following equality holds for any t∈ℝ.
d f/d t(t)·ν(t) =0.
The mapping ν: ℝ→ S^1 given above is called
the Gauss mapping of the frontal f.
By definition, any envelope created by a straight line family is a frontal curve.
Conversely, again by definition,
any frontal curve f: →ℝ^2
with Gauss mapping ν: → S^1
is an envelope of the straight line family
ℒ={L_(θ(t), a(t))}_t∈ℝ,
where ν(t)=(cosθ(t), sinθ(t)) and
a(t)=f(t)·ν(t).
Hence, these two notions are essentially the same
although the notion of frontal curve has only
recently been recognized and investigated.
As an excellent survey article on frontal curve,
<cit.> is recommended to readers.
§ ANSWER TO THE EXISTENCE PROBLEM ON ENVELOPES
The following is the key notion in this article.
A straight line family
ℒ={L_(θ(t), a(t))}_t∈ℝ
in the plane ℝ^2 is said to be creative if there exists
a C^∞ function b: ℝ→ℝ satisfying
d a/d t(t)=b(t)dθ/d t(t)   (*)
for any t∈ℝ.
The function b: ℝ→ℝ is called a creator.
A straight line family
ℒ={L_(θ(t), a(t))}_t∈ℝ
in the plane ℝ^2 creates an envelope if and only if it is creative.
(1) In Example <ref>,
both cosθ(t) and sinθ(t) are
even functions while b(t) is an odd function. Hence,
b(t) cannot be
obtained as the pullback
(by ν)
of a C^∞ section of the cotangent bundle
T^*S^1→ S^1.
(2) In Example <ref>, we already calculated
dθ/dt(t)=-sin t/(cos^2 t+1).
This implies that t=kπ (k∈ℤ) are singular points of
the Gauss mapping ν: ℝ→ S^1 in Example
<ref>.
Thus, as one can find in p.492 of <cit.>,
for each k∈ℤ the dimension of the quotient vector space
V_k(ν)/wν(V_k(1)) is greater than or equal to 1,
where V_k(ν) is the vector space consisting of cotangent vector
field germs ξ: (ℝ, kπ) → T^*S^1 along ν: (ℝ, kπ)→ S^1
and wν(V_k(1)) is
the vector space consisting of composition germs
η∘ν: (ℝ, kπ)→ T^*S^1 of the Gauss mapping germ
ν: (ℝ, kπ)→ S^1 with C^∞ section germs
η: (S^1, ν(kπ))→ T^*S^1.
Therefore, the above fact b(t) cannot be obtained as the pullbuck
of a C^∞ section of the cotangent bundle
T^*S^1→ S^1 by ν is not a surprizing fact.
(3) By the above fact b(t) cannot be obtained as the pullbuck
of a C^∞ section of the cotangent bundle T^*S^1→ S^1
by ν , it seems that Contact Geometry is useless for
the proof of Theorem <ref>. Theorem <ref> is proved by
the anti-orthotomic technique developed in <cit.>.
Moreover, not only Theorem <ref> but also Theorem <ref> in
Section <ref> and Theorem <ref> in Section <ref>
can be obtained by the anti-orthotomic technique at once.
(4) Existence of b: ℝ→ℝ satisfying
(*) in Definition <ref> may be regarded as differentiability of
the height function a: ℝ→ℝ by the Gauss mapping
ν: ℝ→ S^1. Thus, the creator b: ℝ→ℝ may be
called the derived function of the height function a
with respect to
differentiation by the Gauss mapping ν. Hence,
Theorem <ref> may be replaced with the assertion
A straight line family
ℒ={L_(θ(t), a(t))}_t∈ℝ
in the plane ℝ^2 creates an envelope
if and only if the height function
a: ℝ→ℝ is differentiable by the Gauss mapping
ν: ℝ→ S^1.
Set θ(t)=a(t)= 0.
In this case,
d θ/d t(t)=d a/dt(t)= 0.
Hence, an arbitrary C^∞ function b: ℝ→ℝ is a creator,
that is to say, it
satisfies the equality
d a/d t(t)=b(t)d θ/d t(t) (∀ t∈ℝ).
Therefore, the straight line family
{L_(θ(t), a(t))}_t∈ℝ creates an envelope.
Set θ(t)= 0, a(t)=t.
In this case,
d θ/d t(t)= 0 and
d a/d t(t)= 1.
Hence, there does not exist b: ℝ→ℝ such that
d a/d t(t)=b(t)d θ/d t(t) (∀ t∈ℝ).
Therefore, the straight line family
{L_(θ(t), a(t))}_t∈ℝ is not creative.
Set θ(t)= t, a(t)= 0.
In this case,
d θ/d t(t)= 1 and
d a/d t(t)= 0.
Hence,
0=d a/d t(t)=
(d a/d t(t)/dθ/d t(t))d θ/d t(t)
= 0/1× 1
holds.
Thus, by Theorem <ref>,
the straight line family
{L_(θ(t), a(t))}_t∈ℝ creates
an envelope.
More generally, for a given straight line family
ℒ={L_(θ(t), a(t))}_t∈ℝ,
suppose that the Gauss mapping
ν: ℝ→ S^1 of ℒ is non-singular.
Then, we have
d a/dt(t)=
(d a/d t(t)/dθ/d t(t))d θ/d t(t)
for any t∈ℝ. Thus, we have the following.
Let ℒ={L_(θ(t), a(t))}_t∈ℝ
be a straight line family in the plane ℝ^2.
Suppose that the Gauss mapping
ν: ℝ→ S^1 of ℒ is non-singular.
Then, the family ℒ is always creative.
Therefore, by Theorem <ref>, it always creates an envelope.
Set θ(t)= t^2, a(t)= 0.
In this case,
d θ/d t(t)=2t and
d a/d t(t)≡ 0.
Thus, t=0 is a singular point of the Gauss mapping ν: ℝ→ S^1.
Nevertheless, we have
0=d a/d t(t)=0×dθ/dt(t)
for any t∈ℝ.
Thus, by Theorem <ref>,
the straight line family
{L_(θ(t), a(t))}_t∈ℝ creates
an envelope.
[Evolute of the graph of Y=sin X]
Consider
the affine normal line to the graph of
Y=sin X at (t, sin t).
The defining equation of it may be
F(X, Y, t)=(X-t, Y-sin t)·(1, cos t)=X+Ycos t -t-cos tsin t=0.
Thus, the normalized defining equation G(X, Y, t)=0 may be
G(X, Y, t)=
X·1/√(1+cos^2 t)+Y·cos t/√(1+cos^2 t)
-(t+cos tsin t)/√(1+cos^2 t)=0.
Hence, we have
(cosθ(t), sinθ(t))=
(1/√(1+cos^2 t), cos t/√(1+cos^2 t)) and
a(t)=(t+cos tsin t)/√(1+cos^2t).
By calculations, we have
d a/dt(t)
=cos t(3cos t+cos^3 t +tsin t)/(1+cos^2t)^3/2,
d θ/dt(t)=-sin t/(1+cos^2 t).
For any k∈ℤ, we have da/dt(kπ)≠0
and dθ/dt(kπ)=0.
Thus, there is no creator b: ℝ→ℝ.
Therefore, by Theorem <ref>,
the straight line family
{L_(θ(t), a(t))}_t∈ℝ
does not create an envelope, namely,
the evolute of the graph of Y=sin X does not exist.
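The obstruction can also be seen numerically: away from t=kπ the only candidate for a creator is the quotient (d a/d t)/(dθ/d t), and this quotient blows up as t approaches kπ, so it cannot extend to a C^∞ (or even continuous) function on all of ℝ. The short Python check below (purely illustrative, using the closed-form derivatives computed above) shows this behaviour near t=π.

import numpy as np

def candidate_creator(t):
    # (da/dt)/(dθ/dt) for the affine normal line family of Y = sin X
    D = np.sqrt(1.0 + np.cos(t)**2)
    da = np.cos(t) * (3*np.cos(t) + np.cos(t)**3 + t*np.sin(t)) / D**3
    dtheta = -np.sin(t) / (1.0 + np.cos(t)**2)
    return da / dtheta

for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    t = np.pi + eps            # approach the singular point t = pi
    print(f"t = pi + {eps:g}: candidate b(t) = {candidate_creator(t):.3e}")
# the values grow without bound, so no smooth creator exists and,
# by the existence criterion, the normal line family creates no envelope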
§ ANSWER TO THE UNIQUENESS PROBLEM ON ENVELOPES
Let ℒ={L_(θ(t), a(t))}_t∈ℝ
be a straight line family in the plane ℝ^2.
Suppose that ℒ is creative.
Then, ℒ creates a unique envelope if and only if
the set consisting of regular points of the Gauss mapping
ν: ℝ→ S^1 of ℒ is dense in ℝ.
Let ℒ={L_(θ(t), a(t))}_t∈ℝ
be a straight line family in the plane ℝ^2.
Suppose that the Gauss mapping
ν: ℝ→ S^1 of ℒ is non-singular.
Then, the family ℒ creates a unique envelope.
As in Example <ref>, set
(cosθ(t), sinθ(t)) =
(-cos t/√(cos^2 t+1), 1/√(cos^2 t+1)),
a(t)=(sin t - tcos t)/√(cos^2 t+1).
Then, in Example <ref>, we confirmed
dθ/dt(t)= -sin t/(cos^2 t+1).
Thus, the set of regular points of the Gauss mapping is a dense set
in ℝ.
Therefore, by Theorem <ref>, the graph of Y=sin X must be the
unique envelope.
As in Example <ref>, set θ(t)=a(t)= 0.
In Example <ref>, we confirmed that
the straight line family
{L_(θ(t), a(t))}_t∈ℝ creates an envelope by
Theorem <ref>.
Since d θ/d t(t)= 0, the set of regular points of
the Gauss mapping is empty. Thus, by Theorem <ref>,
the created envelopes are not unique.
As in Example <ref>, set θ(t)=t^2, a(t)=0.
We already confirmed that
the straight line family
{L_(θ(t), a(t))}_t∈ℝ creates an envelope by
Theorem <ref>.
Since d θ/d t(t)=2t, the set of regular points of
the Gauss mapping is dense in ℝ. Thus, by Theorem <ref>,
the created envelope is unique.
§ ANSWER TO THE REPRESENTATION PROBLEM ON ENVELOPES
Let ℒ={L_(θ(t), a(t))}_t∈ℝ
be a straight line family in the plane ℝ^2.
Suppose that ℒ is creative.
Then, any envelope of ℒ is parametrized by the mapping
ℝ∋ t ↦ a(t)(cosθ(t), sinθ(t)) +
b(t)(-sinθ(t), cosθ(t))∈ℝ^2,
where b: ℝ→ℝ is the
creator defined by (*) of Definition <ref>.
Theorem <ref> is depicted in Figure <ref>.
Let ℒ={L_(θ(t), a(t))}_t∈ℝ
be a straight line family in the plane ℝ^2.
Suppose that the Gauss mapping
ν: ℝ→ S^1 of ℒ is non-singular.
Then, the unique envelope created by the family ℒ
is parametrized by the following mapping.
ℝ∋ t ↦ a(t)(cosθ(t), sinθ(t)) +
(d a/dt(t)/dθ/dt(t))
(-sinθ(t), cosθ(t))∈ℝ^2.
There are two typical cases where the Gauss mapping is non-singular
as follows.
[Hedgehogs]
Let ℒ=
{L_(t, a(t))}_t∈ℝ be a line family, where
a : ℝ→ℝ is an arbitrary C^∞ periodic
function with period 2π.
Then, the Gauss mapping is non-singular.
The unique envelope of ℒ is called a hedgehog
whose study was initiated by <cit.>.
In this case, since d θ/d t(t)=1, the function b: ℝ→ℝ
satisfying (*) of Definition <ref> is nothing but d a/d t.
Thus, the hedgehog is parametrized as follows.
ℝ∋ t ↦ a(t)(cos t, sin t) +
d a/d t(t)(-sin t, cos t)∈ℝ^2.
Therefore the celebrated Kahn-Hoffman vector formula (<cit.>)
can be naturally obtained by our method.
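For a concrete illustration (the support function below is an arbitrary choice of ours), take a(t)=2+cos 3t. The following NumPy sketch checks that every point of the parametrization lies on its defining line X cos t + Y sin t = a(t).

import numpy as np

a  = lambda t: 2 + np.cos(3*t)            # an arbitrary 2π-periodic C^∞ support function
da = lambda t: -3*np.sin(3*t)             # its derivative, which is the creator b

ts = np.linspace(0, 2*np.pi, 400)
X = a(ts)*np.cos(ts) - da(ts)*np.sin(ts)
Y = a(ts)*np.sin(ts) + da(ts)*np.cos(ts)
print(np.max(np.abs(X*np.cos(ts) + Y*np.sin(ts) - a(ts))))   # ≈ 0: each point lies on L_(t, a(t))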
[Clairaut differential equations]
Consider a Clairaut differential equation
Y=XdY/dX+g(dY/dX)
(**)
where g: ℝ→ℝ is an arbitrary C^∞ function.
For any t∈ℝ,
its general solution Y=tX+g(t) defines the straight line
L_(θ(t), a(t)),
where
ν(t)=(cosθ(t), sinθ(t))
=
(t/√(t^2+1),-1/√(t^2+1))
and a(t)=-g(t)/√(t^2+1).
It is easily seen that the Gauss mapping ν: ℝ → S^1
is non-singular.
Thus, by Theorem <ref> and Theorem <ref>,
there must exist the unique singular solution of
the Clairaut differential equation (**)
as
the unique envelope of the straight line family
{L_(θ(t), a(t))}_t∈ℝ.
Calculation shows that the unique creator b: ℝ → ℝ in this case has
the following form.
b(t)=(-dg/dt(t)(t^2+1)+t g(t))/√(t^2+1).
Therefore, by Theorem <ref>, the unique singular solution
is parametrized as follows.
a(t)(cosθ(t), sinθ(t))
+ b(t)(-sinθ(t), cosθ(t))
= (-g(t)/√(t^2+1))·(t/√(t^2+1), -1/√(t^2+1))
+ ((-dg/dt(t)(t^2+1)+t g(t))/√(t^2+1))·(1/√(t^2+1), t/√(t^2+1))
= (1/(t^2+1))·(-dg/dt(t)(t^2+1),
(g(t)-t dg/dt(t))(t^2+1))
= (-dg/dt(t), g(t)-t dg/dt(t)).
For details on singularities of the unique singular solution of
the Clairaut differential equation (**), see for instance
<cit.>.
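For a concrete instance (an illustrative sketch; the choice g(p)=p^2 is ours), the formula above gives the singular solution (X, Y)=(-2t, -t^2), i.e. Y=-X^2/4, and one can verify symbolically that it solves the Clairaut equation (**).

import sympy as sp

t = sp.symbols('t', real=True)
g = lambda p: p**2                        # an arbitrarily chosen C^∞ function
X = -sp.diff(g(t), t)                     # X(t) = -g'(t)
Y = g(t) - t*sp.diff(g(t), t)             # Y(t) = g(t) - t g'(t)
p = sp.simplify(sp.diff(Y, t)/sp.diff(X, t))          # the slope dY/dX along the curve; equals t
print(p, sp.simplify(Y - (X*p + g(p))))   # prints t and 0, so Y = X dY/dX + g(dY/dX) holds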
As in Example <ref> and Example <ref>,
set θ(t)=a(t)= 0.
We already confirmed that
the straight line family
{L_(θ(t), a(t))}_t∈ℝ creates an envelope by
Theorem <ref> and the created envelopes are not unique
by Theorem <ref>.
Let b: ℝ → ℝ be an arbitrary C^∞ function.
Since b is a creator for envelopes of
{L_(θ(t), a(t))}_t∈ℝ, by Theorem
<ref>,
a(t)(cosθ(t), sinθ(t)) +
b(t)(-sinθ(t), cosθ(t))
= b(t)(-sin 0, cos 0)
= (0, b(t))
is an envelope of {L_(θ(t), a(t))}_t∈ℝ.
As in Example <ref> and Example <ref>,
set θ(t)=t^2, a(t)=0.
We already confirmed that
the straight line family
{L_(θ(t), a(t))}_t∈ℝ creates an envelope by
Theorem <ref> and the created envelope is unique
by Theorem <ref>.
In this case, the unique creator is the constant function 0.
Therefore, by Theorem
<ref>, the unique envelope is parametrized as follows as desired.
a(t)(cosθ(t), sinθ(t)) +
b(t)(-sinθ(t), cosθ(t))
= (0, 0).
§ WHEN AND WHY THE WIDESPREAD METHOD FAILS
Let ℒ=
{L_(θ(t), a(t))}_t∈ℝ
be a straight line family in the plane ^2.
Then, the normalized equation
G(X, Y, t)=Xcosθ(t)+Ysinθ(t)-a(t)=0
is a defining equation of ℒ.
Suppose that ℒ is creative.
Then, by Theorem <ref>, we have
∂ G/∂ t(X, Y, t) = (-Xsinθ(t)+Ycosθ(t))
d θ/d t(t)-d a/d t(t)
= (-Xsinθ(t)+Ycosθ(t)
-b(t))d θ/d t(t).
Assume moreover that the Gauss mapping ν: → S^1
is non-singular.
Then, solving the system of equations G(X, Y, t)=
∂ G/∂ t(X, Y, t)=0 gives the precise shape of envelope of
ℒ.
Namely, the widespread method works well in this case
to represent the envelope precisely.
Next,
assume that the Gauss mapping ν is singular at
t=t_0.
Then, the second equation
∂ G/∂ t(X, Y, t_0)=
(-Xsinθ(t_0)+Ycosθ(t_0)
-b(t_0))d θ/d t(t_0)=0
is just the trivial identity
0=0,
which carries no information and is therefore useless.
Thus,
the solution set of the system of equations
G(X, Y, t_0)=∂ G/∂ t(X, Y, t_0)=0
is exactly the straight line L_(θ(t_0), a(t_0))
at the singular point t=t_0.
Hence, the widespread method fails to recover the precise shape of the envelope
in this case.
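The contrast can be seen concretely in the following sketch (ours), which solves the system G=∂ G/∂ t=0 for the tangent-line family of Y=sin X (non-singular Gauss mapping) and writes out the same system for the creative family with θ(t)=a(t)=0 (singular Gauss mapping).

import sympy as sp

X, Y, t = sp.symbols('X Y t', real=True)

# Non-singular case: normalized defining equation of the tangent lines of Y = sin X.
norm = sp.sqrt(sp.cos(t)**2 + 1)
G1 = (-sp.cos(t)*X + Y - (sp.sin(t) - t*sp.cos(t)))/norm
sol = sp.solve([G1.subs(t, 1), sp.diff(G1, t).subs(t, 1)], [X, Y])
print({v: sp.simplify(e) for v, e in sol.items()})     # {X: 1, Y: sin(1)}: a single envelope point

# Singular case: θ(t) = a(t) = 0, so G = X and ∂G/∂t vanishes identically.
G2 = X*sp.cos(0) + Y*sp.sin(0) - 0
print(G2, sp.diff(G2, t))                              # the system is {X = 0, 0 = 0}: the whole line X = 0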
From these observations, we have the following.
Let ℒ=
{L_(θ(t), a(t))}_t∈ℝ
be a straight line family in the plane ^2.
Suppose that the Gauss mapping of ℒ is non-singular.
Then, the unique envelope created by the straight line family can be
directly obtained by the widespread method.
Given a creative straight line family
ℒ={L_(θ(t), a(t))}_t∈ℝ
in the plane ^2, suppose that
the Gauss mapping of ℒ is singular.
Then, the widespread method fails to determine an envelope precisely
at, and only at, the singular points of the Gauss mapping.
Given a creative straight line family
ℒ={L_(θ(t), a(t))}_t∈ℝ
in the plane ^2,
in order to get an envelope created by
ℒ precisely,
the widespread method can be used when and only when
the Gauss mapping is non-singular.
§ ACKNOWLEDGEMENT
This work was partially supported
by the Research Institute for Mathematical Sciences,
a Joint Usage/Research Center located in Kyoto University.
The author is
supported by JSPS KAKENHI (Grant No. 23K03109).
99
brucegiblinJ. W. Bruce and P. J. Giblin,
Curves and Singularities (second edition),
Cambridge University Press, Cambridge, 1992.
https://doi.org/10.1017/CBO9781139172615
historyE. Hairer and G. Wanner,
Analysis by Its History, Undergraduate Texts in Mathematics,
Springer New York, NY, 2008.
https://doi.org/10.1007/978-0-387-77036-9
hoffmancahnD. W. Hoffman and J. W. Cahn,
A vector thermodynamics for anisotropic surfaces,
Surface Science, 31 (1972), 368–388.
https://doi.org/10.1016/0039-6028(72)90268-3
ishikawaG. Ishikawa,
Singularities of frontals,
Adv. Stud. Pure Math., 78,
55–106, Math. Soc. Japan, Tokyo, 2018.
https://doi.org/10.2969/aspm/07810055
janeczkonishimuraS. Janeczko and T. Nishimura,
Anti-orthotomics of frontals and their applications, J. Math. Anal. Appl.,
487 (2020), 124019.
https://doi.org/10.1016/j.jmaa.2020.124019
hedgehogR. Langevin, G. Levitt and H. Rosenberg,
Hérissons et multihérissons (enveloppes
parametrées par leur application de Gauss),
In Singularities (Warsaw, 1985), pp. 245–-253,
Banach Center Publ., 20, PWN, Warsaw, 1988.
nishimuraT. Nishimura,
Hyperplane families creating envelopes,
Nonlinearity, 35 (2022), 2588.
https://doi.org/10.1088/1361-6544/ac61a0
sajitakahashiK. Saji and M. Takahashi,
Singularities of singular solutions of first-order differential equations
of Clairaut type, J. Dyn. Control Syst., 28 (2022), 19–41.
https://doi.org/10.1007/s10883-020-09511-4
wallC.T.C. Wall,
Finite determinacy of smooth map germs,
Bull. London Math. Soc., 13 (1981),481–539.
https://doi.org/10.1112/blms/13.6.481
|
http://arxiv.org/abs/2307.04039v1 | 20230708195157 | A Strong Composition Theorem for Junta Complexity and the Boosting of Property Testers | [
"Guy Blanc",
"Caleb Koch",
"Carmen Strassle",
"Li-Yang Tan"
] | cs.CC | [
"cs.CC",
"cs.DS"
] |
A Strong Composition Theorem for Junta Complexity and the Boosting of Property Testers
Guy Blanc, Caleb Koch, Carmen Strassle, Li-Yang Tan
======================================================================================
We prove a strong composition theorem for junta complexity and show how such theorems can be used to generically boost the performance of property testers.
The ε-approximate junta complexity of a function f is the smallest integer r such that f is ε-close to a function that depends only on r variables. A strong composition theorem states that if f has large ε-approximate junta complexity, then g ∘ f has even larger ε’-approximate junta complexity, even for ε’ ≫ε. We develop a fairly complete understanding of this behavior, proving that the junta complexity of g ∘ f is characterized by that of f along with the multivariate noise sensitivity of g. For the important case of symmetric functions g, we relate their multivariate noise sensitivity to the simpler and well-studied case of univariate noise sensitivity.
We then show how strong composition theorems yield boosting algorithms for property testers: with a strong composition theorem for any class of functions, a large-distance tester for that class is immediately upgraded into one for small distances. Combining our contributions yields a booster for junta testers, and with it new implications for junta testing. This is the first boosting-type result in property testing, and we hope that the connection to composition theorems adds compelling motivation to the study of both topics.
§ INTRODUCTION
The growth in the sizes of modern datasets is both a blessing and a curse. These datasets, many of which now come with billions of features, contain a wealth of information that machine learning algorithms seek to tap into. On the other hand, their size stands in the way of the opportunities they present, as many of the algorithms that we would like to run on them simply cannot handle their dimensionality.
Thankfully, for many tasks of interest the vast majority of features are irrelevant. This motivates the design of algorithms that are able to quickly home in on the small number of relevant features, and whose efficiency scales gracefully with the number of such features. Already in the early 1990s Blum <cit.> (see also <cit.>) proposed the clean theoretical challenge of learning an unknown r-junta, a function that depends on r≪ n many of its n variables. Quoting <cit.>, “It is my belief that some of the most central open problems in computational learning theory are, at their core, questions about finding relevant variables.” This is now known simply as the junta problem and is the subject of intensive study <cit.>, having distinguished itself as “the single most important open question in uniform distribution learning" <cit.>.
The premise of the junta problem suggests an even more basic algorithmic problem, that of determining if an unknown function is even an r-junta to begin with. This is the problem of testing juntas, introduced by Fischer, Kindler, Ron, Safra, and Samorodnitsky <cit.> and subsequently studied in numerous works <cit.>. Junta testers are also at the heart of the best known testers for numerous other classes of functions, the key insight being that many functions are well-approximated by small juntas (see <cit.> and Chapter 5 of <cit.> for more on this connection). The surveys by Blais <cit.> give broad overviews of various junta testers and their applications throughout theoretical computer science.
This work. These algorithmic applications motivate the study of approximability by small juntas as a complexity measure. For a function f : ^n → and a distribution 𝒟 over ^n, the ε-approximate junta complexity of f with respect to 𝒟, denoted J_𝒟(f,ε), is the smallest integer r such that f is ε-close to an r-junta. Among the most basic questions one can ask about any complexity measure of functions is how it behaves under composition. In the first part of this paper we develop, from the ground up, a fairly complete understanding of this question for junta complexity. We prove a near-optimal composition theorem (<Ref>) that is built on notions of noise stability, both classical and new. In the second part we draw a general connection (<Ref>) between the type of composition theorem that we prove—a strong composition theorem, which we will soon define—and property testing, showing how they can be used to design the first generic boosters for property testers. Combining our two main contributions yields new implications for junta testing.
§ OUR RESULTS AND TECHNIQUES
§.§ First main result: A strong composition theorem for junta complexity
Composition theorems are statements about hardness amplification: the goal is to understand the extent to which the disjoint composition (g ∘ f)(x) g(f(x^(1)),…,f(x^(k))) is more complex than f itself, and how this depends on intrinsic properties of the combining function g. For approximate measures such has junta complexity, we are furthermore interested in strong composition theorems, statements of the form:
J_𝒟^k(g∘ f, ε_large)≫ J_𝒟(f, ε_small) even for ε_large≫ε_small.
In words, the composed function requires much more resources—in our case, much larger junta approximators—even if one only seeks a much coarser approximation. Strong composition theorems stand in contrast to weak ones that only amplify hardness with respect to one of the two parameters, either resources or approximation quality only. The canonical example in this context is Yao’s XOR lemma <cit.>, which says that if f is mildly hard to approximate with size-s circuits, then XOR∘ f is extremely hard to approximate with size-s’ circuits. A long-recognized downside of this important result, inherent to all known proofs of it <cit.> and its generalizations to arbitrary combining functions <cit.>, is the fact that it is only known to hold for s’ ≪ s, whereas intuitively it should hold even for s’ ≫ s.
Composition theorems, both weak and strong, have been studied for a variety of complexity measures
but appear to have been underexplored for junta complexity. One reason may be that the question appears deceptively simple. Indeed, things are completely straightforward in the zero-error setting, where we have the intuitive identity J(g ∘ f, 0) = J(g,0)· J(f,0). However, we show that the question becomes surprisingly intricate once error is allowed.
§.§.§ Context and motivation: Counterexamples to natural composition theorems
The question proves to be tricky even in the special case where the combining function g is symmetric. We now state a sequence of three seemingly intuitive conjectures for this special case. While false, these conjectures and their counterexamples will motivate and lead us to the statement of our actual composition theorem. (Details and proofs of the counterexamples discussed in this section are given in <Ref>.)
The following notation will be useful for us throughout this paper:
Notation. For a function f : ^n→, distribution 𝒟 over ^n, and integer r, we write f̃_𝒟,r to denote the best r-junta approximator of f with respect to 𝒟. When 𝒟 is clear from context, we simply write f̃_r.
Conjecture 1. It will be convenient for us to consider composition theorems in their contrapositive form. Suppose we would like to approximate g ∘ f with an R-junta, say with respect to the uniform distribution. If g is a k-variable symmetric function, how would we go about constructing an approximator that achieves the highest accuracy possible? Since g is symmetric, one may be inclined to divide the “junta budget” of R evenly among the k inner functions and conjecture that
g ∘f̃_R/k = g(f̃_R/k,…,f̃_R/k)
achieves the best, or close to the best, accuracy among all R-junta approximators.
However, this is badly false. Let g be the k-variable Majority function and f the n-variable Parity function. For any choice of R satisfying R/k < n (i.e. each inner Parity receiving a budget that falls short of its arity), we have Pr[g∘f̃_R/k g∘ f] = 1/2. This is because it is “all or nothing” when it comes to approximating Parity: no (n-1)-junta can achieve accuracy better than that of a constant approximator. The best strategy is therefore to allocate a full budget of n to as many of the inner Parities as possible (i.e. R/n many of them), and a budget of zero to the others. This shows a gap of 1/2 versus 1-o(1) in the accuracies of the “divide budget equally” strategy and the optimal one.
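The gap is already visible at small parameter settings. The following Monte Carlo sketch (added for illustration; the choices k=n=5 and R=15 are ours) compares the two allocation strategies.

import numpy as np

rng = np.random.default_rng(0)
k, n, N = 5, 5, 200_000                        # g = Maj_5, f = Parity_5, junta budget R = 15 < kn
x = rng.choice([-1, 1], size=(N, k, n))
parities = x.prod(axis=2)                      # f(x^(1)), ..., f(x^(k))
target = np.sign(parities.sum(axis=1))         # (g ∘ f)(x); k is odd, so there are no ties

# Even split: each inner Parity gets R/k = 3 < 5 variables, so its best approximator is a
# constant, and the composed approximator g(+1, ..., +1) is itself the constant +1.
even = np.ones(N)
# Concentrated split: all 5 variables to three of the Parities, 0 to the other two, which are
# approximated by the constants +1 and -1; then g(f_1, f_2, f_3, +1, -1) = Maj_3(f_1, f_2, f_3).
uneven = np.sign(parities[:, :3].sum(axis=1))

print((even == target).mean())                 # ≈ 0.5
print((uneven == target).mean())               # ≈ 0.81, and 1 - o(1) as k grows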
Conjecture 2. In light of this counterexample, one may then conjecture that the best strategy is to partition the junta budget optimally among the k inner functions and feed the respective approximators of f into g. That is, the conjecture is that the best approximator is of the form:
g(f̃_r_1,…,f̃_r_k) where ∑_i=1^k r_i = R.
While this is true for our example above, it is again badly false in general. In fact, the error of such an approximator can be close to 1, even worse than the trivial bound of ≤1/2 achievable with a constant approximator.
Our counterexample reveals another counterintuitive aspect of the overall problem. Consider an approximator for g∘ f of the form g(f̃_r_1,…,f̃_r_k). We show its approximation accuracy can increase if we replace one of the inner approximators for f with a worse one: e.g. if we replace f̃_r_1 with f̃_r_1’ where r_1’ < r_1. In more technical terms that we will soon define: while the noise stability of a function is, as one would expect, monotone in the noise rate, we show that the natural generalization of it where the corruption probabilities of 0’s and 1’s are decoupled (defined in <Ref>) is not monotone.
Conjecture 3. Finally, we consider a conjecture that is far laxer than either of the previous ones. It simply states that the optimal approximator for the composed function g∘ f is one of composed form:
h(q^(1),…,q^(k)) for some h : ^k → and q^(1),…,q^(k) : ^n →,
where the relevant variables of q^(i) fall within the ith block of variables.
We show (to our own surprise) that this conjecture is still false: there are composed functions for which the optimal approximator is not of composed form. However, unlike the first two conjectures, our work shows that this conjecture is morally true in a precise sense.
§.§.§ Our Strong Composition Theorem
Our strong composition theorem implies a close quantitative relationship between the error of the optimal approximator and that of the optimal composed form approximator, and indeed one with a specific structure that we call canonical:
We say that a composed form approximator for g∘ f is canonical if it is of the form:
h(f̃_r_1,…,f̃_r_k),
where h : ^k→ is the function:
h(y) = sign(E_x∼𝒟^k[(g∘ f)(x) | y_i = f̃_r_i(x^(i)) for all i∈ [k]]).
For intuition regarding the choice of h, we note that for the fixed k-tuple of functions f̃_r_1,…,f̃_r_k, it is the combining function that minimizes error with respect to g∘ f.
Canonical composed form approximators are therefore ones whose individual components are “locally" optimal: each f̃_r_i is the optimal r_i-junta approximator for f, and h the optimal way of combining the f_r_i's. Our strong composition theorem will say that we can get very close to the globally optimal approximator this way.
The notion of noise stability is central to our work:
For any μ∈ (-1,1) and vector ρ⃗∈ [0,1]^k, we define the multivariate noise stability of g as
_μ,ρ⃗(g) = E[g(y)g(z)]
where independently for each i ∈ [k], we draw (y_i, z_i) as follows: using π_μ to denote the unique distribution supported on {-1,1} with mean μ, we draw y_i ∼π_μ, and then set
z_i = y_i w.p. ρ⃗_i, and
z_i = an independent draw from π_μ w.p. 1 - ρ⃗_i.
When μ = 0 we simply write _ρ⃗(g).
This definition allows for a different noise rate for each coordinate, generalizing the more commonly studied definition where the noise rates are the same for every coordinate (see e.g. Chapter 2 of <cit.>). We use the terms multivariate noise stability and univariate noise stability to distinguish these definitions. Even in the case of symmetric combining functions g, our strong composition theorem will naturally involve its multivariate noise stability (necessarily so, as already suggested by the counterexample to Conjecture 1).
We present our strong composition theorem as a sequence of two parts that each carries a standalone message, the first of which formalizes the fact that the optimal canonical composed form approximator is a good proxy for the actual optimal approximator. It will be more convenient for us to state our results in terms of advantage instead of error, the two quantities being related via the identity advantage = 1-2·error. Also, for notational clarity we only state here the special case where f is balanced (i.e. _𝒟[f] = 0).
Let f : ^n→ and g:^k → be arbitrary functions and 𝒟 be any distribution over ^n. Assume that _𝒟[f]=0. For the task of approximating g ∘ f under 𝒟^k with an R-junta, there is a correlation vector ρ⃗∈ [0,1]^k such that
_ρ⃗(g)^2 ≤Advantage of optimal canonical composed form approximator
≤Advantage of optimal approximator≤√(_ρ⃗(g)).
For most applications of composition theorems, including those in this paper, the parameters of interest are such that the quartic gap between the upper and lower bounds above are inconsequential. (In particular, if the advantage of the optimal canonical composed form approximator diminishes to 0 as k grows, our bounds imply that the same is true for the actual optimal approximator. Indeed, the two rates of convergence are the same up to a polynomial factor.)
Part II of <Ref> elaborates on the correlation vector ρ⃗, showing how it is is determined by the junta complexity of f and the noise stability of g:
Theorem 1 (Part II: Explicit description of ρ⃗). The correlation vector ρ⃗∈ [0,1]^k in Part I is the vector that maximizes _ρ⃗(g), subject to the constraint:
ρ⃗_i = _𝒟[f·f̃_r_i] for all i∈ [k] where ∑_i=1^k r_i = R.
Taken together, the two parts of <Ref> show that the junta complexity of g∘ f is tightly characterized by the junta complexity of f and the multivariate noise stability of g. It furthermore gives a simple and explicit strategy for constructing a near-optimal approximator: first partition the junta budget optimally among the k inner functions; next approximate each inner function optimally with its allocated budget; and finally combine these approximators in the optimal way.
Naturally, it would be preferable to understand the strategy for constructing the actual optimal approximator, but our counterexamples suggest that it defies a clean and interpretable description even for symmetric g (indeed, even for g being the And function).
Corollary: Highly noise sensitive functions strongly amplify junta complexity. <Ref> yields a hardness amplification statement of the form <ref> in the following way. Suppose f is mildly hard for r-juntas, i.e. Pr[f̃_r ≠ f] ≥ε_small. Our goal is to show that g ∘ f is extremely hard for R-juntas, Pr[(g∘ f)_R ≠ g∘ f] ≥ε_large≫ε_small, even for R ≫ r. For any partition of R = ∑_i=1^k r_i, at most a 0.999-fraction of the r_i's exceed 1.01R/k, which is at most r whenever R ≤ 0.99kr. <Ref> therefore tells us that the advantage of the optimal R-junta is upper bounded by
√(_ρ⃗(g)) where at least a 0.001-fraction of ρ⃗'s coordinates are at most 1-2·ε_small.
(Equivalently, at least a 0.001-fraction of coordinates receive at least an ε_small amount of noise.)
This motivates the following definition:
The (δ,ε)-noise stability of a function g:^k→ is the quantity
max{_ρ⃗(g) : at least a δ-fraction of ρ⃗'s coordinates are at most 1-2ε}.
By the monotonicity of noise stability, this maximum is achieved by a ρ⃗ with exactly a δ-fraction of coordinates being exactly 1-2ε, and the remaining (1-δ)-fraction being 1.
We have sketched the following corollary of <Ref>:
Let g : ^k → be a function whose (1/2,_small)-noise stability is at most τ. Then for all functions f,
J_𝒟^k(g∘ f, (1/2)(1-√τ)) ≥ 0.99k·J_𝒟(f, ε_small).
In words, g ∘ f requires much larger junta approximators, an Ω(k) multiplicative factor more, even if we allow much larger error: ε_large = (1/2)(1-√τ) instead of ε_small. As two extreme examples of combining functions g,
∘ The (0.001, ε_small)-noise stability of the k-variable Parity function is (1-2·ε_small)^Ω(k), making it an excellent amplifier of junta complexity.
∘ The (0.001, ε_small)-noise stability of a dictator function g(x) = x_i is 1, making it a terrible amplifier of junta complexity, as one would expect: if g is a dictator function then g∘ f ≡ f is of course no more complex than f itself.
The partial-noise stability of these two specific examples is straightforward to compute, but the calculations quickly become unwieldy even for other basic functions. In addition to being a quantity of independent technical interest, the upcoming connections between strong composition theorems and the boosting of property testers will also motivate understanding the partial-noise stability of broad classes of functions beyond just parity and dictator. (Roughly speaking, to boost testers for a property 𝒫 we need to analyze a function g such that 𝒫 is closed under g.)
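For instance, the following Monte Carlo sketch (illustrative; the parameters k=15, ε=0.1 and a 1/3-fraction of noised coordinates are our choices, since a 0.001-fraction is only meaningful for large k) estimates the relevant stabilities of Parity and of a dictator.

import numpy as np

rng = np.random.default_rng(0)
k, eps, N = 15, 0.1, 400_000

def stab_mc(g, rho):
    # Monte Carlo estimate of the noise stability at mu = 0: z_i = y_i w.p. rho_i, else a fresh uniform bit.
    y = rng.choice([-1, 1], size=(N, k))
    z = np.where(rng.random((N, k)) < rho, y, rng.choice([-1, 1], size=(N, k)))
    return np.mean(g(y) * g(z))

parity = lambda y: y.prod(axis=1)
dictator = lambda y: y[:, 0]
noised = 5                                      # noise a 1/3-fraction of the k coordinates
rho_parity = np.array([1 - 2*eps]*noised + [1.0]*(k - noised))                  # placement is irrelevant by symmetry
rho_dictator = np.array([1.0] + [1 - 2*eps]*noised + [1.0]*(k - noised - 1))    # noise avoids the relevant coordinate
print(stab_mc(parity, rho_parity))              # ≈ (1 - 2*eps)**noised ≈ 0.33
print(stab_mc(dictator, rho_dictator))          # = 1.0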
Our next result is a general technique that yields sharp bounds on the partial-noise stability, and more generally the multivariate noise stability, of all symmetric functions.
The multivariate noise sensitivity of symmetric functions. For a symmetric function g : ^k → one intuits that its multivariate noise stability at a vector ρ⃗∈ [0,1]^k should be related to its univariate noise stability at a value ρ^⋆∈ [0,1] that is an “average" of the coordinates of ρ⃗. (This is certainly not true for general functions; consider for example the dictator function.) Using techniques from the study of negative association, we formalize this intuition and prove that indeed it is sandwiched by the arithmetic and geometric means of the coordinates of ρ⃗:
Let g : ^k→ be a symmetric function, μ∈ (-1,1), and ρ⃗∈ [0,1]^k. Define
ρ_gm ≔ (∏_i ∈ [k]ρ⃗_i)^{1/k} and ρ_am ≔ (1/k)∑_i ∈ [k]ρ⃗_i.
Then
_μ,ρ_gm(g) ≤_μ,ρ⃗(g) ≤_μ,ρ_am(g).
Furthermore, the lower bound holds under the weaker assumption that g is transitive.
The more “reasonable” ρ⃗ is, the closer the upper and lower bounds of <Ref> are. In particular, we get the following bound on the (δ,ε)-noise stability of symmetric functions:
For any symmetric function g:^k →, δ∈ (0,1), and ε∈ (0,1/2), the (δ, ε)-noise stability of g is equal to _μ, ρ^⋆(g) for some ρ^⋆∈ [0,1] satisfying
1 - 2εδ - O(ε^2) ≤ρ^⋆≤ 1 - 2εδ.
Recall that ε corresponds to the initial inapproximability factor ε_small in <Ref>, and so the additive gap of O(ε^2) between the upper and lower bounds is indeed small for our intended application.
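These bounds are easy to spot-check by brute force. The following sketch (illustrative; the choices g = Maj_5, μ = 0.2 and the random ρ⃗ are ours) computes the multivariate noise stability exactly from the definition and compares it with the univariate stability at the geometric and arithmetic means of ρ⃗'s coordinates.

import itertools
import numpy as np

def stab(g, k, mu, rho):
    # Exact E[g(y)g(z)]: y ~ pi_mu^k and, independently per coordinate, z_i = y_i w.p. rho[i],
    # else z_i is a fresh draw from pi_mu.
    p = {1: (1 + mu)/2, -1: (1 - mu)/2}
    total = 0.0
    for y in itertools.product([-1, 1], repeat=k):
        py = np.prod([p[v] for v in y])
        for z in itertools.product([-1, 1], repeat=k):
            pz = np.prod([rho[i]*(z[i] == y[i]) + (1 - rho[i])*p[z[i]] for i in range(k)])
            total += py * pz * g(y) * g(z)
    return total

k, mu = 5, 0.2
maj = lambda y: 1 if sum(y) > 0 else -1         # Majority on 5 bits, a symmetric function
rho = np.random.default_rng(0).uniform(0.2, 1.0, size=k)
gm, am = float(np.prod(rho)**(1/k)), float(np.mean(rho))
print(stab(maj, k, mu, [gm]*k), stab(maj, k, mu, list(rho)), stab(maj, k, mu, [am]*k))
# The three values come out in non-decreasing order, matching the sandwich in the theorem.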
§.§ Second main result: Composition theorems and boosting of property testers
Composition theorems are most naturally thought of as statements about hardness amplification, and indeed that is how they are most commonly used. As our second main contribution, we show how they can be used fruitfully in their contrapositive form as meta-algorithms. In more detail, we show how they can be used to generically boost the performance guarantees of property testers. While boosting is a story of success in both the theory and practice of machine learning, to our knowledge the analogous concept in property testing has not yet been considered. The connection that we draw can be instantiated with either strong or weak composition theorems, but as we now see, the parameters are qualitatively better in case of strong composition theorems.
Within property testing, a major strand of research, initiated by Parnas, Ron, and Samorodnitsky <cit.>, concerns testing whether an unknown function has a concise representation. Consider any parameterized property 𝒫 = {𝒫_s}_s ∈ℕ of boolean functions: size-s parities, size-s juntas, size-s decision trees, s-sparse polynomials over various fields, and so on. The task is as follows:
Given queries to an unknown function f : ^n →, access to i.i.d. draws from a distribution 𝒟, and parameters s,s'∈ and > 0, distinguish between:
∘ Yes: f ∈𝒫_s
∘ No: f is ε-far under 𝒟 from every function in 𝒫_s'.
Note that the task is more challenging as ε gets smaller, and as the gap between s and s' gets smaller. We show how a composition theorem for 𝒫 allows one to trade off these two parameters: a tester for large ε can be upgraded into one for small ε, at the price of larger gap between s and s'. The stronger the composition theorem, the more favorable this tradeoff is, and with an optimally strong composition theorem one is able to improve the ε-dependence without any associated price in the multiplicative gap between s and s':
Let 𝒫 = {𝒫_s }_s∈ be a property and g : ^k→ be such that 𝒫 behaves linearly w.r.t. g. Suppose that 𝒫 admits an (_small, _large,λ)-composition theorem w.r.t. g. Then any (_large,ks,λ ks')-tester for 𝒫 can be converted in to an (_small, s,s')-tester for 𝒫.
We defer the precise definitions of the terms “(_small,_large,λ)-composition theorem" and “behaves linearly" to the body of the paper, mentioning for now that λ∈ [0,1] measures the strength of the composition theorem: such a theorem says that the composed function requires λ k more resources to achieve _large error than original function to achieve _small error. Therefore λ = 1/k can be viewed as the threshold separating weak and strong composition theorems, with λ = 1 corresponding to an optimally strong one. (<Ref>, for example, achieves λ = 0.99.) Note that if λ = 1 in <Ref>, then an (_large,s,s)-tester for all s yields an (_small,s,s)-tester for all s.
The formal version of <Ref> will also show that it upgrades uniform-distribution testers to strong uniform-distribution testers, and distribution-free testers to strong distribution-free testers. This stands in contrast to standard boosting in learning which can only upgrade distribution-free learners.
§.§.§ Example applications of <Ref>: New implications for junta testing
As mentioned in the introduction, juntas are among the most basic and intensively-studied function classes in property testing. Owing to two decades of research, the complexity of testing juntas in the non-tolerant setting is now fairly well-understood: we have highly-efficient adaptive <cit.>, non-adaptive <cit.>, and distribution-free testers <cit.>, all of them achieving query complexities that are essentially optimal <cit.>.
The picture is much less clear in the more challenging tolerant setting. For the uniform distribution, the best known testers require exponentially many queries <cit.>, and there are no known distribution-free testers. By generalization <Ref> to the tolerant setting and instantiating it with our strong composition theorem for juntas, we obtain new implications, both positive and negative, that help clarify this picture.
Positive implication: boosting of tolerant junta testers. First, any tolerant junta tester for large distance parameter can now be converted into one for small distance parameters, at the price of a slight gap in the junta sizes of the Yes and No cases. For example, for both the uniform and distribution-free settings we get:
Suppose we have a poly(r)-query tester that distinguishes between
∘ Yes: f is 1/4-close to an r-junta
∘ No: f is 1/3-far from every r-junta.
Then for every ε > 0 we have a poly(r/ε)-query tester that distinguishes between
∘ Yes: f is ε-close to an r-junta
∘ No: f is Ω(ε)-far from every 1.001r-junta.
The resulting gap between the junta sizes of the Yes and No cases, while mild, is admittedly not ideal. As alluded to above, this stems from the fact that the “strength parameter" of <Ref> is λ = 0.99 and not λ = 1. Designing boosters that do not incur this gap, either via an optimally strong composition theorem or otherwise, is a natural avenue for future work.
On the other hand, we now show that even with this gap, <Ref> already carries with it an interesting consequence. This consequence crucially relies on our composition theorem for juntas being strong; the proof would not have gone through had the strength parameter of <Ref> only been λ = 1/k.
Negative implication: NP-hardness in the distribution-free setting. This implication concerns the time rather than query complexity of testers. The same proof of <Ref> also converts a (r,n)-time tester into a (r,1/,n)-time tester. Implicit in the work of Hancock, Jiang, Li, and Tromp <cit.> is an NP-hardness result for tolerantly testing juntas in the distribution-free setting. One downside of their result is that it only holds in the regime of = 1/(n). Applying the time-analogue of <Ref>, we lift this hardness up to the standard regime of constant :
The following task is NP-hard under randomized reductions. Given queries to a function f : ^n→, access to i.i.d. draws from a distribution 𝒟, and parameters r∈ and > 0, distinguish between:
∘ Yes: f is 1/4-close under 𝒟 to an r-junta;
∘ No: f is 1/3-far under 𝒟 from every r-junta.
This implies a fairly dramatic separation between the non-tolerant versus tolerant versions of the problem. The recent (r)-query non-tolerant testers <cit.> are also time efficient, running in (r,n) time. <Ref> shows that any tolerant tester, regardless of query efficiency, must have time complexity that is as bad as that of SAT: e.g. if SAT requires randomized exponential time, then so does any tolerant tester.
In fact, our actual result is stronger than as stated in <Ref>: we prove that the task is NP-hard even if the Yes case states that f is 0-close under 𝒟 to an r-junta. We therefore show that the testers of <cit.> are quite fragile in the sense that they break if the Yes case in the definition of non-tolerant testing is changed from “f is an r-junta" to “f is 0-close under 𝒟 to an r-junta".
§ OTHER RELATED WORK
O'Donnell's generalization of Yao's XOR lemma.
Yao's XOR lemma states that if f is -hard against circuits of size s, meaning every size-s circuit differs from f on at least an -fraction of inputs, then XOR_k∘ f is (1/2 + 1/2(1-2)^k + δ)-hard against circuits of size s' where
s'= Θ(δ^2/log(1/))· s.
The (1-2)^k term in the resulting inapproximability factor agrees precisely with the (univariate) noise stability of XOR_k at ρ = 1-2. In <cit.> O'Donnell showed that this is no coincidence. He proved a far-reaching generalization of Yao's XOR lemma that allows for an arbitrary combining function g : ^k → instead of XOR, and showed that the resulting inapproximability of g∘ f is given by the “expected bias" of g, a quantity that is closely related to the (univariate) noise stability of g.
Like Yao's XOR lemma, <cit.>'s composition theorem is weak in the sense that the hardness of g∘ f only holds against size s' circuits where s' ≪ s. (In fact, <cit.> incurs an additional multiplicative loss of k in the resulting circuit size.) Our composition theorem concerns a different resource, juntas instead of circuits, and as emphasized in the introduction, our main focus is on proving a composition theorem that is strong in the sense of amplifying both the amount of resource required and the inapproximability factor.
Both our work and <cit.> utilize Fourier analysis in our proofs, which is to be expected given the centrality of noise stability to both works. That aside, our overall approach and techniques are entirely different from <cit.>'s—necessarily so, as we elaborate next.
Hardness amplification via boosting.
In <cit.> Klivans and Servedio observed that most known hardness amplification results are proved via a boosting-type argument. For example, for Yao's XOR lemma and <cit.>'s generalization of it, one proceeds by contradiction: one assumes that XOR_k∘ f can be mildly approximated by a size-s' circuit C (in the language of boosting, C is a weak hypothesis for XOR_k ∘ f), and one constructs a larger circuit C^⋆ of size s that well-approximates f (i.e. C^⋆ is a strong hypothesis for f). In boosting, the strong hypothesis is built out of many weak hypotheses; likewise, in Yao's XOR lemma the size-s circuit C^⋆ is built out of many size-s' circuits that are like C. The work of <cit.> formalizes this connection.
From this perspective, it becomes clear why such approaches are fundamentally limited to weak composition theorems where s' ≪ s. Strong composition theorems therefore necessitate a different tack, and indeed our proof proceeds via the forward implication instead of the contrapositive: we reason directly about the inapproximability of g∘ f under the assumption about the inapproximability of f. Somewhat ironically, our second main contribution is then an application of strong composition theorems to the boosting of property testers, which goes in the opposite direction to <cit.>'s “Boosting ⇒ Hardness Amplification" observation above.
Independent work of Chen and Patel <cit.>. A recent work of Chen and Patel also gives new lower bounds for tolerant junta testing. For the problem of testing whether an unknown function is _1-close to or _2-far from a k-junta under the uniform distribution, they prove a query lower bound of k^Ω(log(1/(_2-_1))), which is superpolynomial when the gap _2-_1 is subconstant. This yields the first superpolynomial query complexity separation between tolerant and non-tolerant testing for a natural property of boolean functions.
Their result is incomparable to <Ref> in several respects. We give a time lower bound when the gap _2-_1 is a fixed constant in the distribution-free setting. Being an NP-hardness result, our lower bound is conditional whereas theirs is unconditional.
§ DISCUSSION AND FUTURE WORK
Complexity measures can behave in highly counterintuitive ways under composition, which makes composition theorems, and strong composition theorems in particular, tricky to prove.
A motivating goal of this work is to develop an understanding of strong composition theorems from first principles, and hence our focus on junta complexity, perhaps the most basic complexity measure of a function. We are optimistic that our techniques can apply to other measures, though we believe that as in this work, much of the challenge will lie in first figuring out the right statement to prove.
Consider for example decision tree complexity, a natural next step from junta complexity. There are existing strong XOR lemmas for decision tree complexity, but they come with limitations and do not appear to be the final word. (Briefly, the XOR lemma of <cit.> is only strong when the initial inapproximability factor _small is at least a constant, and the strong XOR lemma of <cit.> only holds for decision trees that are allowed to “abort".) Indeed, Shaltiel <cit.> has shown that certain hoped-for strong XOR lemmas for decision tree complexity are false, though as he remarked, his counterexample “seems to exploit defects in the formation of the problem rather than show that our general intuition for direct product assertions is false". We hope that our results, and specifically the new connections to various notions of noise stability, can serve as a guide to the right statement for decision tree complexity and other measures.
As for our second main result, the general connection between strong composition theorems and the boosting of property testers, we believe that it adds compelling algorithmic motivation to the study of composition theorems, a topic traditionally considered to be mostly of complexity-theoretic interest. Likewise, we hope that our work spurs future research on this new notion of boosting for property testers, a notion that we believe is of interest independent of the connections to composition theorems. For example, an ambitious goal for future work is to broadly understand when and how a tester for constant distance parameter can be automatically upgraded into one with the optimal -dependence, as well as the associated costs of such a transformation.
§ PRELIMINARIES
Distributions and random variables. We use bold font (e.g. 𝐱 ∼ 𝒟) to denote random variables.
For any set S, we use 𝐱 ∼ S as shorthand for 𝐱 ∼ Unif(S) where Unif(·) denotes the uniform distribution. Of particular importance to this work will be μ-biased distributions over the Boolean hypercube.
For any μ∈ (-1,1), we use π_μ to denote the unique distribution over {-1,1} with mean μ. Formally, for 𝐛 ∼π_μ,
𝐛 =
1 with probability (1 + μ)/2
-1 with probability (1 - μ)/2.
Similarly, for ν∈ [-1,1]^k, we use π_ν to denote the product distribution π_ν_1×⋯×π_ν_k.
Fix some bias μ∈ (-1,1). For any ∈ [0,1]^k and y ∈^k, we write y to denote that for each i ∈ [k], _i is independently drawn as
_i =
y_i with probability _i
Drawn from π_μ with probability 1 - _i.
Whenever we use the above notation, the choice of μ will be clear from context. This gives the following more succinct way to express <Ref>, defining multivariate noise stability,
_μ,(g) _∼ (π_μ)^k,[g()g()].
Some useful sets. For any integers a ≤ b, we use [a,b] as shorthand for the set {a, a+1, …, b}. Similarly, for b ≥ 1, we use [b] as shorthand for the set [1,b]. For any set S and ℓ≤ |S|, we use Sℓ to denote all subsets of S with cardinality ℓ.
Junta complexity. For any function f: ^n →, and S ⊆ [n], we say that f is an S-junta if for all x,y ∈^n for which x_i = y_i whenever i ∈ S it holds that f(x) = f(y). With a slight abuse of notation, when r ∈ [n] is an integer, we say that f is an r-junta if there is a set |S| ≤ r for which f is an r-junta.
Advantage.
For any functions f, g:^n → and distribution over ^n, we define
_(f,g) _∼[f() g()].
With a slight abuse of notation, we define for f:^n → and S ⊆ [n],
_(f,S) max_S-junta g:^n →_(f,g).
Similarly, for r ∈ [n],
_(f,r) max_r-junta g:^n →_(f,g).
When the base distribution is clear, we will drop it from our notation. Furthermore, for any function f:^n → and S ⊆ [n] or r ∈ [n], we use f̃_S and f̃_r to denote the S-junta and r-junta respectively maximizing the above two advantages.
Function composition.
For a function f: ^n →, its direct product f^⊗ k:*^n^k→^k is defined as
f^⊗ k(x^(1), …, x^(k)) = (f(x^(1)), …, f(x^(k))).
For any g:^k →, we use g ∘ f:*^n^k→ as shorthand for g∘ f^⊗ k, meaning,
(g∘ f)(x^(1), …, x^(k)) = g(f(x^(1)), …, f(x^(k))).
Vector powers. For any vector v ∈^k and set S ⊆ [k], we'll use the notation v^S as shorthand for
v^S ∏_i ∈ S v_i.
§.§ Fourier Analysis
Our proof of <Ref> will make heavy use of Fourier analysis over the μ-biased hypercube, (π_μ)^k. In this section, we will review relevant definitions and facts. A more complete exposition is given in <cit.>.
For any μ∈ (-1,1), we define ϕ_μ(x) ≔ (x-μ)/σ where σ ≔ √(1 - μ^2). Every g: ^k → can be uniquely decomposed as
g(y) = ∑_S ⊆ [k]ĝ_μ(S) ∏_i ∈ Sϕ_μ(y_i) where ĝ_μ(S) = E_y∼ (π_μ)^k[g(y) ∏_i ∈ Sϕ_μ(y_i)].
This decomposition has a number of useful properties stemming from the fact that transforming g from its representation as a truth table to its Fourier coefficients ĝ_μ(S) is an orthonormal transformation.
[Basic facts about the Fourier decomposition]
* Plancherel's theorem: For any g, h: ^k → and μ∈ (-1,1),
_∼ (π_μ)^k[g()h()] = ∑_S ⊆ [k]ĝ_μ(S)ĥ_μ(S).
* Parseval's theorem: For any g: ^k → and μ∈ (-1,1),
_∼ (π_μ)^k[g()^2] = ∑_S ⊆ [k]ĝ_μ(S)^2.
In particular, when g has a range of , Parseval's theorem guarantees that the sum of its squared Fourier coefficients is 1. As a result, the following distribution is well defined.
For any g: ^k → and bias μ∈ (-1,1), the spectral sample of g, denoted _μ(g), is the probably distribution over subsets of [k] in which the set S has probability ĝ_μ(S)^2.
The Fourier decomposition gives a concise way to represent important quantities, as in the following results.
For any μ∈ (-1,1) and ∈ [0,1]^k, _μ, can be related to g's μ-biased Fourier decomposition as,
_μ, (g) = ∑_S ⊆ [k]ĝ(S)^2 ^S = _∼_μ(g)[()^].
We define g^()(y) _ y[g()]. Then, by Plancherel's theorem,
_μ, (g) = _∼ (π_μ)^k[g() g^()()] = ∑_S ⊆ [k]g_μ(S) g^()_μ(S).
Next, we compute the Fourier decomposition of g^().
g^()_μ(S) = _∼ (π_μ)^k*g^()() ∏_i ∈ Sϕ_μ(_i)
= _∼ (π_μ)^k, *g() ∏_i ∈ Sϕ_μ(_i)
= _∼ (π_μ)^k, *g() ∏_i ∈ Sϕ_μ(_i)(,) distributed identically to (, )
= _∼ (π_μ)^k*g() ·_*∏_i ∈ Sϕ_μ(_i).
Applying the independence of _1, …, _k conditioned on and that [ϕ_μ(_i)] = _i ϕ_μ(_i),
g^()_μ(S) = _∼ (π_μ)^k*g() ·∏_i ∈ S_i ϕ_μ(_i)
= ()^S ·_∼ (π_μ)^k*g() ·∏_i ∈ Sϕ_μ(_i) = ()^S g_μ(S).
Putting the above together,
_μ, (g) = ∑_S ⊆ [k]ĝ_μ(S)^2 ()^S.
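The identity is straightforward to confirm numerically; the following sketch (illustrative, with a random g on k=3 bits and a random ρ⃗) computes both sides by direct enumeration.

import itertools
import numpy as np

rng = np.random.default_rng(1)
k, mu = 3, 0.3
rho = rng.uniform(0, 1, size=k)
sigma = np.sqrt(1 - mu**2)
phi = lambda v: (v - mu)/sigma                   # phi_mu
p = {1: (1 + mu)/2, -1: (1 - mu)/2}              # pi_mu
cube = list(itertools.product([-1, 1], repeat=k))
g = {y: int(rng.choice([-1, 1])) for y in cube}  # a random Boolean function

# Left-hand side: E[g(y)g(z)] with z_i = y_i w.p. rho_i, else a fresh draw from pi_mu.
lhs = sum(np.prod([p[v] for v in y])
          * np.prod([rho[i]*(z[i] == y[i]) + (1 - rho[i])*p[z[i]] for i in range(k)])
          * g[y] * g[z]
          for y in cube for z in cube)

# Right-hand side: sum over S of ghat_mu(S)^2 * prod_{i in S} rho_i.
ghat = lambda S: sum(np.prod([p[v] for v in y]) * g[y] * np.prod([phi(y[i]) for i in S]) for y in cube)
rhs = sum(ghat(S)**2 * np.prod([rho[i] for i in S])
          for r in range(k + 1) for S in itertools.combinations(range(k), r))
print(lhs, rhs)                                  # the two agree up to floating-point error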
One immediate corollary of the above is that multivariate noise stability is monotone.
For any μ∈ (-1,1), g:^k →, and , ρ⃗'⃗∈ [0,1]^k satisfying _i ≤ρ⃗'⃗_i for all i ∈ [k],
_μ, (g) ≤_μ, ρ⃗'⃗(g).
Recall that for any ν∈ [-1,1]^k, the distribution π_ν is the unique product distribution supported on ^k with mean ν. The Fourier decomposition of g also gives a useful way to compute _∼π_ν[g()].
For any g: ^k →, μ∈ (-1,1), and ν∈ [-1,1]^k,
_∼π_ν[g()] = ∑_S ⊆ [k]ĝ_μ(S) ∏_i ∈ Sϕ_μ(ν_i).
We expand g into it's Fourier decomposition
[g()] = ∑_S ⊆ [k]ĝ_μ(S) *∏_i ∈ Sϕ_μ(_i)Linearity of expectation
= ∑_S ⊆ [k]ĝ_μ(S)∏_i ∈ S*ϕ_μ(_i)_1, …, _k are independent
= ∑_S ⊆ [k]ĝ_μ(S)∏_i ∈ S*_i - μ/σDefinition of ϕ_μ
= ∑_S ⊆ [k]ĝ_μ(S)∏_i ∈ Sϕ_μ(ν_i). Linearity of expectation
§ A STRONG COMPOSITION THEOREM FOR JUNTAS
In this section, we characterize the junta size required to approximate g ∘ f in terms of the multivariate noise stability of g, and the junta size required to approximate f.
For any g: ^k →, f: ^n → and base distribution 𝒟 over ^n, let μ = E_x∼𝒟[f(x)].
* Lower bound on advantage: For any approximators q^(1), …, q^(k): ^n →, define the lower normalized correlations, for each i ∈ [k], as
α_i ≔ max{0, (_(f, q^(i))^2 - μ^2)/(1 - μ^2)}.
Then, there is an h:^k → for which
_^k(g∘ f, h (q^(1), …, q^(k))) ≥_μ, α(g).
* Upper bound on advantage: For any S_1,…, S_k ⊆ [n], define the upper normalized correlations as
β_i ≔ max{0, (_(f, S_i) - μ^2)/(1 - μ^2)},
and construct S ⊆ [n] × [k] by taking S_1 from the first block, S_2 from the second block, and so on (formally, S ≔∪_i ∈ [k], j ∈ S_i{(j,i)}). Then,
_^k(g∘ f, S) ≤√(_μ, β(g)).
Our goal is to understand the error of the best R-junta approximating g ∘ f. <Ref> says that for any way to partition R = r_1 + ⋯ + r_k, the approximator h (f̃_r_1, …, f̃_r_k) achieves nearly optimal advantage across all R-juntas that partition their budget this way. Of course, by maximizing both sides across all partitions, we can conclude that there is some partitioning and function h for which h (f̃_r_1, …, f̃_r_k) has nearly optimal advantage among all R-juntas. Indeed, as a simple corollary of <Ref>, we can show that the error of the optimal canonical composed form approximator is within a factor of 4 of the optimal approximator. Recall that the error _(q_1,q_2) = Pr_x∼𝒟[q_1(x) ≠ q_2(x)] is related to advantage via the equality advantage = 1 - 2·error.
For any g: ^k →, f:^n →, junta budget R, and base distribution 𝒟, there is an h:^k → and a partition of the budget r_1 + ⋯ + r_k = R for which
_^k(g∘ f, h (f̃_r_1, …, f̃_r_k)) ≤ 4 ·_^k(g∘ f, R).
When μ = 0, the guarantee of <Ref> can further be given in the concise form of <Ref>: For an appropriately chosen ∈ [0,1]^k,
_ρ⃗(g)^2 ≤Advantage of optimal canonical composed form approximator
≤Advantage of optimal approximator≤√(_ρ⃗(g)).
We include the proofs of <Ref> and <Ref> in <Ref>.
§.§ Proof of the lower bound on advantage
In this subsection, we show that (x_1, …, x_k) → h(f̃_r_1(x_1), …, f̃_r_k(x_k)) is close to the best R-junta approximator for g ∘ f. Here, the function h can be different than g, and this is necessary as shown in the counterexample to conjecture 2 in <Ref>.
For any g:^k →, f:^n →, and approximators q^(1), …, q^(k), there is some h:^k → for which
_^k(g∘ f, h ∘ (q^(1), …, q^(k))) ≥_μ, α(g),
where μ = _∼[f()] and for each i ∈ [k],
α_i max*0, (f, q^(i))^2 - μ^2/1 - μ^2.
Note α_i naturally interpolates between 0 and 1. Setting q^(i) to the better of the constant -1 or the constant +1 function will lead to α_i = 0, while setting q^(i) = f gives α_i = 1.
§.§.§ Characterizing the advantage of composed form approximators
To ease notation, we begin with a simpler setting. Suppose we use the same budget, r R/k, in each of the k pieces. Our goal is to understand
max_h:^k →(g∘ f, h∘f̃_r)
in terms of the noise sensitivity of g and (f, f̃_r). To do so, we will consider unbalanced noise stability.
For any x ∈^k, we use the notation x to denote that for each i ∈ [k], _i is independently drawn as
* If x_i = -1, with probability a, we set _i = x_i and otherwise set _i = -x_i
* If x_i = 1, with probability b, we set _i = x_i and otherwise set _i = -x_i.
For any g,h:^k →, μ∈ [-1,1] and a,b ∈ [0,1], we define the unbalanced noise stability as
_μ, (a,b)(g,h) = _∼ (π_μ)^k, [g()h()].
We refer to the above notion as unbalanced because when drawing x, the probability of the i^th coordinate flipping from -1 to 1 and from 1 to -1 may differ. Unbalanced noise stability is useful in our setting due to the following proposition.
For any f, f̃: ^n → and g,h:^k →,
_∼^k[(g ∘ f)() · (h ∘f̃)()] = _μ, (a,b)(g,h),
where
μ_∼[f()],
a _∼[f̃() = -1 | f() = -1],
b _∼[f̃() = 1 | f() = 1].
Draw ∼^k and then define f^⊗ k(), f̃^⊗ k(). Clearly,
_∼^k[(g ∘ f)() · (h ∘f̃)()] = [g() h()].
Furthermore, the distribution of , is equivalent to if we drew ∼ (π_μ)^k,. The above quantity therefore matches the definition of _μ, (a,b)(g,h).
§.§.§ Unbalanced noise stability behaves strangely
The most basic requirement of our approximation for g ∘ f is that it have advantage at least 0, as either the constant -1 or the constant +1 function is guaranteed to have such an advantage. Indeed, in the balanced case, it is well known that the approximation will satisfy this basic requirement even if we take h = g.
For any g:^k → and a ∈ [0,1/2],
_0, (a,a)(g,g) ≥ 0.
However, in the unbalanced case, this basic requirement no longer holds.
For any k ≥ 0, and a,b ∈ [0,1] for which |a-b| ≥ 0.01, there is a function g:^k → for which
_0, (a,b)(g,g) ≤ -(1-2^-Ω(k)).
Without loss of generality, we assume b ≥ a + 0.01. We define
g(x)
1 if ∑_i ∈ [k]x_i ≥ 0.005k,
-1 otherwise.
Draw ∼ (π_μ)^k,. Then,
*∑_i ∈ [k]_i = 0 , *∑_i ∈ [k]_i = k(b-a).
Furthermore, a standard application of Hoeffding's inequality implies that
[g() = 1] ≤ 2^-Ω(k) , [g() = -1]≤ 2^-Ω(k).
By a union bound, with probability at least 1 - 2^-Ω(k), we have that both g() = -1 and g() = 1. This implies the desired result.
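The effect is easy to observe numerically. Since g depends only on the coordinate sums, the joint law of (∑_i x_i, ∑_i y_i) can be sampled with binomial draws instead of materializing k-bit strings; the sketch below (illustrative; a=0.4, b=0.6 and k=10^6 are our choices, and the -(1-2^-Ω(k)) bound is asymptotic in k) does exactly this.

import numpy as np

rng = np.random.default_rng(0)
k, a, b, N = 10**6, 0.40, 0.60, 100_000
g = lambda s: np.where(s >= 0.005*k, 1, -1)              # the threshold function from the proof

n_plus = rng.binomial(k, 0.5, size=N)                    # #{i : x_i = +1}, with x uniform
# P[y_i = +1 | x_i = +1] = b and P[y_i = +1 | x_i = -1] = 1 - a
y_plus = rng.binomial(n_plus, b) + rng.binomial(k - n_plus, 1 - a)
s_x, s_y = 2*n_plus - k, 2*y_plus - k
print(np.mean(g(s_x) * g(s_y)))                          # ≈ -1, far below the 0 guaranteed in the balanced case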
§.§.§ Unbalanced noise stability behaves well if we use the best h
Surprisingly, we show that if we use the best h, our approximation does meet this most basic requirement. Furthermore, we can relate it to the classical notion of balanced noise stability. The below Lemma directly implies <Ref>.
For any g:^k → and distribution over , each in ^k satisfying,
* The pairs (_1, _1), …, (_k, _k) are independent of one another.
* The means satisfy [_1] = ⋯ = [_k] = μ.
Define the correlations α_1, …, α_k as
α_i max*0,[_i _i]^2 - μ^2/1 - μ^2.
Then, there is an h:^k → for which
[g()h()] ≥_μ, α(g).
Comparing to <Ref>, if μ = 0, then α_i = max(0,1-a-b) for all i ∈ [k]. Since _μ, α(g) ≥ 0 whenever α≥ 0, <Ref> shows that the phenomenon in <Ref> cannot occur if we use the best approximator h.
The following Lemma will be useful in the proof of <Ref>.
For any function g: ^k →, let _1, …, _k be independent random variables each with mean μ and supported on [-1,1]. Then,
_*_∼π_[g()]^2 = _μ, ([ϕ_μ(_1)^2],
…, [ϕ_μ(_k)^2])(g).
We'll use the μ-biased Fourier expansion of g. Applying <Ref>,
_*_∼π_[g()]^2 = _**∑_S ⊆ [k]ĝ(S) ∏_i ∈ Sϕ_μ(_i)^2
= ∑_S_1, S_2 ⊆ [k]ĝ(S_1)ĝ(S_2)*∏_i ∈ S_1ϕ_μ(_i)∏_i ∈ S_2ϕ_μ(_i).
We claim that, in the above sum, any term in which S_1 ≠ S_2 is equal to 0. Let S_1 S_2 denote the symmetric difference of S_1 and S_2. Then, due to the independence of _1, …, _k,
*∏_i ∈ S_1ϕ_μ(_i)∏_i ∈ S_2ϕ_μ(_i) = ∏_i ∈ S_1 ∩ S_2[ϕ_μ(_i)^2] ∏_i ∈ S_1 S_2[ϕ_μ(_i)].
Since the mean of _i is μ, [ϕ_μ(_i)] = ϕ_μ(μ) = 0. If S_1 ≠ S_2, there is at least one element in S_1 S_2, and so the term is 0. We are therefore left with,
_*_∼()[g()]^2 = ∑_S ⊆ [k]ĝ(S)^2∏_i ∈ S*ϕ_μ(_i)^2.
This is exactly the Fourier expansion for the claimed result.
We'll also use the following proposition.
For any random variable bounded on [-1,1] almost surely and with mean μ,
max*0,[]^2 - μ^2/1 - μ^2≤[ϕ_μ()^2] ≤[] - μ^2/1 - μ^2 .
We expand, using linearity of expectation,
[ϕ_μ()^2] = *( - μ)^2/1 - μ^2 = [ρ^2] - 2μ[] + μ^2/1 - μ^2.
Since [] = μ, we have that [ϕ_μ()^2] = [^2] - μ^2/1 - μ^2. Therefore, by Jensen's inequality,
[]^2 - μ^2/1 - μ^2≤[ϕ_μ()^2].
Furthermore, since ^2 ≤,
[ϕ_μ()^2] ≤[] - μ^2/1 - μ^2.
Lastly, [ϕ_μ()^2] ≥ 0 follows from non-negativity.
Finally, we are ready to prove <Ref>.
For any y ∈^n, we define
g_(y) = [g() | = y].
Then, setting h(y) (g_(y)),
[g()h()] = _**g_()≥_**g_()^2.
Note that, conditioning on = y, the distribution of is still product. Let ν(y) be the mean of this distribution, so that
g_(y) = _∼π_ν(y)*g().
By <Ref>,
_**_∼π_ν()*g()^2 = _μ, ([ϕ_μ(ν()_1)^2], …, [ϕ_μ(ν()_k)^2](g).
For each i ∈ [k],
[ϕ_μ(ν()_i)^2] ≥max*0,_[ν()_i]^2 - μ^2/1 - μ^2<Ref>
≥max*0,_[_iν()_i]^2 - μ^2/1 - μ^2x≥ cx when c ∈
= max*0,_,[_i_i]^2 - μ^2/1 - μ^2Definition of ν(y)
= α_i.
Putting all of the above together,
[g()h()] ≥_μ, ([ϕ_μ(ν()_1)^2], …, [ϕ_μ(ν()_k)^2](g)
≥_μ, ρ(g),
where the final inequality follows from the monotonicity of noise stability.
§.§ Proof of the upper bound on advantage
In this section, we prove the following.
For any g: ^k→, f:^n →, μ_∼[f()], and S_1,…, S_k, define the upper normalized correlation as
β_i _(f, S_i) - μ^2/1 - μ^2.
For S ⊆ [n] × [k] constructed by taking S_1 from the first block, S_2 from the second block, and so on (formally S ∪_i ∈ [k], j ∈ S_i{(j,i)}).. Then,
_^k(g∘ f, S) ≤√(_μ, β(g)).
To begin with, we rewrite advantage in the following form.
For any function q: ^m →, distribution over ^m, and S ⊆ [m], define
q_S, ^(x) _∼[q() |_S = x_S],
where y_S = x_S is shorthand for x_i = y_i for all i ∈ S. Then,
_(q, S) = _∼**q_S, ^().
Consider any S-junta h. Then,
_(q, h) = _∼[
q() h()] = _∼*_∼[q() h() |_S = _S].
Since h is an S-junta, it must classify x and y the same whenever x_S = y_S. Therefore,
(q, h) = _∼*h()_∼[q() |_S = _S]
= _∼*h()q^_S,().
to maximize the above advantage among all h, we set h(x) = (q^_S, (x)), in which case
(q, h) = _∼**q^_S, ().
Given <Ref>, to compute _^k(g∘ f, S), it suffices to understand the function (g ∘ f)^_S,. We proceed to transform that function into a form which is easier to understand.
In the setting of <Ref>, for any x ∈ (^n)^k, let ν(x) ∈ [-1,1]^k be the vector where
ν(x)_i _∼^k[f() | x^(i)_S_i = _S_i].
Then,
(g ∘ f)^_S,^k(x) = _∼π_ν(x)[g()].
Consider drawing ∼ (^n)^k conditioned on _S = x_S. Let = f^⊗ k(). By definition,
(g ∘ f)^_S, ^k(x) = [g()].
Therefore, we merely need to show that the distribution of is that of π_ν(x). For this it is sufficient that,
* Each _1, …, _k is independent. This follows from the fact _1, …, _k are independent, and that the restriction that _S = x_S is a disjoint restriction for each of the k components.
* For each i ∈ [k], that [_i] = ν(x)_i. This follows from the definition of ν(x)_i.
The desired result follows from the fact that π_ν(x) is the unique product distribution over ^k with mean ν(x).
We now prove the upper bound.
Let ν be as defined in <Ref>. Applying it and <Ref>,
_^k(g∘ f, S) = _∼^k**_∼π_ν()[g()]≤√(_∼^k**_∼π_ν()[g()]^2).
The inequality above is Jensen's. Consider the random variables ν()_1, …, ν()_k. The have the following two properties.
* They are independent. This is because the value of ν()_i depends on only the value of _i, which is independent of the other _j for j ≠ i.
* They each have mean μ. This is because,
[ν()_i] = *_∼[f() | (^(i))_S_i = y_S_i] = _∼[f()] = μ.
Therefore, we can use <Ref>:
_∼^k**_∼π_ν()[g()]^2 = _μ, ([ϕ_μ(ν()_1)^2],
…, [ϕ_μ(ν()_k)^2])(g).
We can further upper bound,
[ϕ_μ(ν()_i)^2] ≤[ν()_i] - μ^2/1 - μ^2<Ref>
= (f, S_i) - μ^2/1 - μ^2<Ref>
= β_i.
Putting the above together, we have that
_^k(g∘ f, S) ≤√(_μ, β(g)).
§.§ Proofs of the consequences of our strong composition theorem
In this section, we complete the proofs of <Ref> and <Ref>.
For any partition of the budget junta budget r_1 + ⋯ + r_k = R, let (r_1,…,r_k) be the vector,
(r_1,…,r_k)_i _D(f, r_i).
Then, applying the upper bound on advantage of <Ref> and maximizing over all possible partitions of the budget R, we have that
_^k(g∘ f, R) ≤max_r_1 + ⋯ + r_k = R√(_(r_1, …, r_k)(g)).
This completes the upper bound on the advantage of the optimal R-junta approximator of g ∘ f of <Ref>. For the lower bound on the advantage of the optimal composed form approximator, let r_1, …, r_k be the partition of budget maximizing _(r_1, …, r_k)(g). Using the lower bound of <Ref>, and using (·)^2 to refer to an elementwise squaring of a vector,
_^k(g∘ f, h (f̃_r_1, …, f̃_r_k)) ≥_(r_1,…,r_k)^2(g).
Using the Fourier expression for stability <Ref>,
_(r_1,…,r_k)^2(g) = _∼_μ(g)*(((r_1,…,r_k)^2)^
=_∼_μ(g)*(((r_1,…,r_k)^)^2
≥_∼_μ(g)*(((r_1,…,r_k)^)^2 Jensen's inequality
= _(r_1,…,r_k)(g)^2.
Therefore, there is a composed form approximator with advantage at least _(r_1, …, r_k)(g)^2.
Our proof of <Ref> uses the following.
For any α_1,…, α_m ∈ [0,1] and β_1, …, β_m ∈ [0,1], satisfying (1-α_i) ≤ 2(1-β_i) for each i ∈ [m],
1 - ∏_i ∈ [m]α_i ≤ 2* 1 - ∏_i ∈ [m]β_i .
We consider the vector β' ∈ [0,1]^m satisfying
1 - α_i = 2 · (1 - β'_i).
Note that β'_i ≥β_i, which means that
1 - ∏_i ∈ [m]β'_i ≤ 1 - ∏_i ∈ [m]β_i.
Now, consider the function q:[0,1] → [0,1] defined as
q(x) 1 - ∏_i ∈ [m]1 - x(1- α_i).
A quick calculation confirms that the second derivative of q is nonpositive, so q is concave. Furthermore, it satisfies,
q(0) = 0,
q(1) = 1 - ∏_i ∈ [m]α_i,
q(1/2) = 1 - ∏_i ∈ [m]β'_i.
We conclude,
1 - ∏_i ∈ [m]α_i concavity of q≤ 2*1 - ∏_i ∈ [m]β'_i≤*1 - ∏_i ∈ [m]β_i.
Let r_1 + ⋯ + r_k = R be the partition of R used in the junta achieving minimum error relative to g ∘ f and define, for each i ∈ [k],
α_i max*0, _(f, r_i)^2 - μ^2/1 - μ^2,
β_i max*0, _(f, r_i) - μ^2/1 - μ^2,
which satisfy the relation
1-α_i ≤ 2(1 - β_i).
Applying <Ref> and the relation = 1 - /2, we have that
_^k(g∘ f, R) ≥1 - √(_μ, β(g))/2, and _^k(g∘ f, h (f̃_r_1, …, f̃_r_k)) ≤1 - _μ, α(g)/2.
Our goal is to show the following series of inequalities, which would imply the desired result,
1 - _μ, α(g) (iq 1)≤ 2(1 - _μ, β(g)) (iq 2)≤ 4(1 - √(_μ, β(g))).
The second, (inequality 2), follows the fact that for any x ∈ [0,1], (1-x) ≤ 2(1-√(x)). For the first inequality, using <Ref>, we can express stability via the Fourier spectrum of g as
1 - _μ, α(g) = ∑_Sĝ(S)^2(1 - ∏_i ∈ Sα_i)
≤ 2∑_Sĝ(S)^2(1 - ∏_i ∈ Sβ_i) <Ref>, 1-α_i ≤ 2(1 - β_i)
= 2(1 - _μ, β(g)).
This proves inequality 1, giving the desired result.
§ MULTIVARIATE NOISE STABILITY OF SYMMETRIC FUNCTIONS
In this section, we prove <Ref> and <Ref>, connecting the multivariate noise stability of symmetric functions to their univariate noise stability.
For any function g:^k →, a permutation σ:[k]→ [k] is an automorphism of g if for all inputs x ∈^k,
g(x) = g(x_σ(1), …, x_σ(k)).
We say g is symmetric if every permutation of [k] is an automorphism of g. Similarly, g is transitive if for all i,j ∈ [k], there is an automorphism of g sending i to j.
§.§ The upper bound on the multivariate noise stability of symmetric functions
For any symmetric g:^k →, μ∈ (-1,1), and ∈ [0,1]^k, let 1/k ·∑_i ∈ [k]_i. Then,
_μ, (g)≤_μ, (g).
Our proof of <Ref> will make heavy use of the negative association of random variables.
A set of random variables _1, …, _m supported on are negatively associated if for all disjoint subsets S_1, S_2 ⊆ [m] and S_1-juntas f_1:^m →, S_2-juntas f_2:^m → both monotonically nondecreasing,
[f_1()f_2()] ≤[f_1()][f_2()].
For our purposes, we will only need a few useful facts about negatively associated random variables given in <cit.> (see also <cit.> for a useful overview).
[Permutation distributions are negatively associated, <cit.>]
For any z_1, …, z_m ∈, draw a uniformly random permutation :[m] → [m] and set _i = z_(i) for each i ∈ [k]. Then, _1, …, _m are negatively associated.
[Subsets of negatively associated random variables are negatively associated]
For any 2 ≤ m' ≤ m, if _1, …, _m are negatively associated, then _1, …, _m' are also negatively associated.
[Product consequence of negative association]
For any negatively associated _1, …, _m and nondecreasing f:→_≥ 0,
*∏_i ∈ [m]f(_i)≤∏_i ∈ [m]*f(_i).
Given the above facts about negatively associated random variables, we can now prove <Ref>.
We expand _μ, (g) using the Fourier spectrum of g (<Ref>),
_μ, (g) = _∼_μ(g)[()^].
Let be distributed the same as || for ∼_μ(g). Then,
_μ, (g) = _*_∼_μ(g)[()^| || = ℓ].
Since g is symmetric, for any |S_1| = |S_2|, ĝ(S_1) = ĝ(S_2). As a result the distribution of ∼_μ(g) conditioned on || = ℓ is simply a uniformly random size-ℓ subset of [k]. Formally,
_μ, (g) = _*_∼[k][()^].
Let _1, …, _k be a uniform random permutation of _1, …, _k. Then, the distribution of ()^ for ∼[k]ℓ is identical to that of ∏_i ∈ [ℓ]_i. By <Ref>, _1, …, _ℓ are negatively associated, and so,
_∼[k]ℓ[()^] = *∏_i ∈ [ℓ]_i(<Ref>)≤∏_i ∈ [ℓ][_i] = *^ℓ.
Therefore,
_μ, (g) ≤_**^ = _μ, (g).
§.§ The lower bound on the multivariate noise stability of symmetric functions
For any transitive g:^k →, μ∈ (-1,1), and ∈ [0,1]^k, let *∏_i ∈ [k]ρ⃗_i^1/k. Then,
_μ, (g)≥_μ, (g).
Note that every symmetric g is also transitive, but the reverse does not hold.
Similarly to the proof of <Ref>, let be the distribution of || when ∼_μ(g). Then,
_μ, (g) = _*_∼_μ(g)[()^| || = ].
For each S ⊆ [k], we'll use χ(S) ∈^k to denote the characteristic vector of S, meaning χ(S)_i [i ∈ S]. Then,
_μ, (g) = _*_∼_μ(g)*∏_i ∈ [k] (_i)^χ()_i | || =
= _*_∼_μ(g)*exp*∑_i ∈ [k]χ()_i log(_i) | || =
≥_*exp*_∼_μ(g)*∑_i ∈ [k]χ()_i log(_i) | || = Jensen's inequality
= _*exp*∑_i ∈ [k]log(_i) _∼_μ(g)*i ∈| || = . Linearity of expectation
Fix any i_1, i_2 ∈ [k] and level ℓ∈ [0,k]. Since g is transitive, there is an automorphism, σ, of g sending i_1 to i_2. Since σ is an automorphism of g, for any S ⊆ [k], for ∼_μ(g), [ = S] = [ = σ(S)]. As a result
_∼_μ(g)*i_1 ∈| || = ℓ = _∼_μ(g)*i_2 ∈| || = ℓ,
and so _∼_μ(g)*i ∈| || = ℓ must be the same for all i ∈ [k]. The sum of these probabilities is ℓ, meaning each is ℓ/k. This allows us to bound,
_μ, (g) ≥_*exp*∑_i ∈ [k]log(_i) ·/k
=_*∏_i ∈ [k]*_i^/k
=_*()^ = _μ, (g).
§.§ Bounding the (δ,)-noise stability of symmetric functions
Recall, from <Ref>, that the (δ,)-noise stability of a function g:^k→ is the quantity
max{_ρ⃗(g)at least δ-fraction of ρ⃗'s coordinates are at most 1-2}.
We prove <Ref>, restated below.
For any symmetric function g:^k →, δ∈ (0,1), and ∈ (0,1/2), let δ'kδ/k be δ rounded up to the nearest integer multiple of 1/k. Then, the (δ, )-noise stability of g is equal to _μ, ρ^⋆(g) for some ρ^⋆ satisfying
1 - 2δ' - 4^2 ≤ρ^⋆≤ 1 - 2δ'.
Since stability is monotone (<Ref>), the (δ, )-noise stability of g is its multivariate noise stability with a correlation vector in which a δ' fraction of the coordinates are 1 - 2 and the remainder are 1. The arithmetic mean of this vector is exactly 1 - 2δ', and its geometric mean is (1 - 2)^δ'. The desired result then follows from <Ref> and the inequality
(1 - x)^c ≥ 1-cx - (1-c)x^2 ≥ 1 - cx - x^2
which holds for all c,x ∈ [0,1]. To prove this inequality, it is sufficient that q_c(x) ≥ 0 for all x,c ∈ [0,1] where
q_c(x) (1-x)^c - 1 +cx + (1-c)x^2.
To see this, we note that for any c ∈ [0,1], the function q_c(x) has roots at x = 0 and x=1. It is furthermore increasing at x = 0, and decreasing at x = 1. If q_c(x) were to be negative for any x ∈ [0,1], then it would need to have at least 3 local extrema. However, the derivative q_c'(x) is concave, so it can only be zero at a maximum of 2 points. This proves the desired inequality. (If the reader prefers, <Ref> gives a "proof by picture".)
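For readers who prefer a numerical confirmation alongside the picture, the short script below (plain Python with numpy; the grid resolution is arbitrary) evaluates q_c(x) on a grid of (x,c) pairs and checks that it is nonnegative.

```python
# Numeric sanity check of (1-x)^c >= 1 - cx - (1-c)x^2 on [0,1]^2.
import numpy as np

xs = np.linspace(0.0, 1.0, 501)
cs = np.linspace(0.0, 1.0, 501)
X, C = np.meshgrid(xs, cs)
gap = (1 - X) ** C - (1 - C * X - (1 - C) * X ** 2)   # this is q_c(x)
print(gap.min() >= -1e-12)   # True, up to floating-point error
```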
§ COMPOSITION THEOREMS YIELD BOOSTERS FOR PROPERTY TESTING
§.§ A general boosting framework
Let 𝒫={𝒫_s}_s∈ be a parametrized property of Boolean functions. For a function f:^n→ and distribution 𝒟 over ^n, we write
_𝒟(f,𝒫_s)min_h∈𝒫_s_𝒟(f,h)
to denote f's distance to 𝒫_s over 𝒟. We are interested in the relaxed testing regime for size parameters s>s' where we want to decide whether an unknown target function f belongs to 𝒫_s or is -far from 𝒫_s' under 𝒟: _𝒟(f,𝒫_s')> (recall <Ref>). We say that 𝒫 is (,s,s')-testable if there exists an algorithm for (,s,s')-testing 𝒫 for every distribution 𝒟. As → 0, the gap between the Yes and No cases becomes smaller and (,s,s')-testing becomes more difficult. The main result of this section is that if 𝒫 “behaves well” under function composition, then testers for large can be boosted to testers for the more challenging regime of small . We will specialize our attention to properties which behave linearly with respect to function composition.
A parametrized property 𝒫={𝒫_s}_s∈ behaves linearly (with respect to function composition) if
f∈𝒫_s ⇒ g∘ f∈𝒫_k· s
for all g:^k→, f:^n→, and s∈.
Examples.
Being an s-junta, depth-s decision tree, depth-s formula, or degree-s polynomial are all properties of Boolean functions which behave linearly with respect to composition. As is often the case, it is straightforward to show from their definitions that these properties behave linearly. Many properties which do not a priori behave linearly can be converted into ones that do by applying an appropriate transformation to their size. For example, the property 𝒫_s={size-exp(s) decision trees} behaves linearly.
Strong composition theorems for properties.
A property 𝒫 which behaves linearly with respect to function composition is said to admit a strong composition theorem if the upper bound from <Ref> can be shown to be nearly tight. This definition generalizes the relation <ref>.
A parametrized property 𝒫={𝒫_s}_s∈ admits an (,,λ)-composition theorem with respect to g:^k→ for ,∈ (0,1) and a constant λ>0 if
_𝒟(f,𝒫_s)> ⇒ _𝒟^k(g∘ f,𝒫_λ ks)>
for all f:^n→ and distributions 𝒟 over ^n.
Strong composition theorems depend on the combining function g. For example, if g is a constant function then one would not expect the upper bound from <Ref> to be tight. For this reason, the dependence on g is made explicit in the definition of strong composition theorem.
Roughly speaking, the definition says that if a property 𝒫 behaves linearly and admits a strong composition theorem with respect to g, then composing with g turns a function in 𝒫_s into one in 𝒫_s k and turns a function slightly far from 𝒫_s into one very far from 𝒫_Θ(s k). For a fixed , having an (,,λ)-composition theorem with respect to g becomes stronger as approaches 0. In general, we are interested in (,,λ)-composition theorems when ≫. The parameter λ is built into the definition to tolerate a small amount of slack between the upper and lower bounds on g∘ f. For many applications, this constant factor is necessary. We are now equipped to state our main boosting theorem.
Let 𝒫={𝒫_s}_s∈ be a property which behaves linearly and admits an (,,λ)-composition theorem with respect to g:^k→. If 𝒫 is (,s,s')-testable in q(,s,s') queries, then it is (,s,λ^-1 s')-testable using k· q(,ks,ks') many queries.
Let be an algorithm for (,s,s')-testing 𝒫. Given queries to a function f:^n→ and random samples from a distribution 𝒟 over ^n, we (,s, λ^-1 s')-test 𝒫 using the procedure in <Ref> where is given an instance of (,ks,ks')-testing 𝒫.
Query complexity.
The target g∘ f:^nk→ is a (, ks,ks')-testing instance for . Therefore, makes q(,ks,ks') queries to the target g∘ f:^nk→ before terminating. Our tester makes k queries to f for each query to g∘ f. So our tester for f makes k· q(,ks,ks') queries in total.
Correctness. In the Yes case, f∈𝒫_s. We then have g∘ f∈𝒫_sk since 𝒫 behaves linearly. This ensures that outputs Yes. In the No case, _𝒟(f,𝒫_s'/λ)>. We then have _^k(g∘ f,𝒫_ks')>λ since 𝒫 admits an (,,λ)-composition theorem. This ensures that outputs No.
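To make the procedure concrete, here is a minimal Python sketch of the wrapper (the tester and sampling interfaces are hypothetical and chosen only to make the simulation explicit): every query the weak tester makes to g∘ f is answered with k queries to f, and every sample from 𝒟^k is assembled from k independent samples from 𝒟.

```python
def boosted_tester(weak_tester, g, k, f, sample_D):
    """Run the weak tester on g o f over D^k, simulated through f and D."""

    def query_composed(xs):                 # xs = (x^(1), ..., x^(k))
        return g([f(x) for x in xs])        # k queries to f per query to g o f

    def sample_Dk():
        return tuple(sample_D() for _ in range(k))

    # Accept exactly when the weak tester accepts g o f.
    return weak_tester(query_composed, sample_Dk)
```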
§.§ Implications for current landscape of junta testing
Our results have new implications for tolerantly testing juntas. In this regime, the Yes case of <Ref> is relaxed to only require that f is close to an r-junta over 𝒟.
Given parameters r≤ r' and ≤, queries to an unknown function f:^n→, and random samples from a distribution 𝒟 over ^n, distinguish between
* Yes: f is -close to being an r-junta under 𝒟, and
* No: f is -far from being an r'-junta under 𝒟.
In all of our applications, we will be using <Ref>, or a variant of it, with g set to _k. For this reason, we start with some useful properties about the noise stability of parity.
§.§.§ Noise stability of parity under general product distributions
For any f:^n →, distribution over ^n, junta budget R, and R-junta h,
_^k(_k ∘ f, h) ≥min_r_1+⋯+r_k=R1 - √(∏_i ∈ [k]*1 - 2·_(f, f̃_r_i))/2.
Our proof of <Ref> will use the multivariate noise stability of parity.
For any μ∈ (-1,1), ρ⃗∈ [0,1]^k,
_μ, (_k) = ∏_i ∈ [k]*_i + (1-_i)·μ^2=∏_i ∈ [k]*1 - (1-_i)(1-μ^2).
Note that _k(y_1, …, y_k) = ∏_i ∈ [k]y_i. Therefore,
_μ, (_k) = _∼ (π_μ)^k, *∏_i ∈ [k]_i _i.
Each pair (_i, _i) are independent of another, so
_μ, (_k) = ∏_i ∈ [k]*_i _i.
The distribution of (_i, _i) can be succinctly described: With probability _i, _i = _i. Otherwise, they are each independent draws from π_μ. Therefore,
*_i _i = _i + (1-_i)·μ^2.
The desired result follows from combining the above equations
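This closed form is easy to check empirically. The sketch below (plain Python; the sampling routine follows the description of correlated pairs given in the proof) estimates the left-hand side by Monte Carlo and compares it with the product of (ρ_i + (1-ρ_i)μ^2); the two agree up to sampling noise.

```python
import random

def correlated_pair(mu, rho):
    """One coordinate: y ~ pi_mu, and y' = y with prob rho, else a fresh draw."""
    p_plus = (1 + mu) / 2
    y = 1 if random.random() < p_plus else -1
    yp = y if random.random() < rho else (1 if random.random() < p_plus else -1)
    return y, yp

def empirical_parity_stability(mu, rhos, trials=200_000):
    total = 0
    for _ in range(trials):
        prod = 1
        for rho in rhos:
            y, yp = correlated_pair(mu, rho)
            prod *= y * yp
        total += prod
    return total / trials

mu, rhos = 0.3, [0.9, 0.5, 0.2]
closed_form = 1.0
for rho in rhos:
    closed_form *= rho + (1 - rho) * mu ** 2
print(empirical_parity_stability(mu, rhos), "vs", closed_form)
```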
We apply our strong composition theorem, <Ref>. It is stated in terms of advantage and gives
max_R-juntas h_^k(_k ∘ f, h) ≤max_r_1 + ⋯ + r_k = R√(_μ, β(r_1, …, r_k)(_k)),
where we define μ = _∼[f()], and β(r_1, …, r_k) ∈ [0,1]^k is the vector
β(r_1, …, r_k)_i = _(f, f̃_r_i) - μ^2/1 - μ^2 = 1 - 2·_(f, f̃_r_i) - μ^2/1 - μ^2.
Applying <Ref>,
max_R-juntas h_^k(_k ∘ f, h) ≤max_r_1 + ⋯ + r_k = R√(∏_i ∈ [k]*1 - *1 -1 - 2·_(f, f̃_r_i) - μ^2/1 - μ^2(1 - μ^2))
= max_r_1 + ⋯ + r_k = R√(∏_i ∈ [k]*1 - *2·_(f, f̃_r_i)/1 - μ^2(1 - μ^2))
= max_r_1 + ⋯ + r_k = R√(∏_i ∈ [k]*1 - 2·_(f, f̃_r_i)).
The desired result follows from = 1 - /2.
§.§.§ Warmup: weak testers suffice for (0,,r,r')-testing juntas
We first boost tolerant testers in the regime where is fixed to 0 in <Ref>. This version is slightly easier to state and is also the version we will use later in proving <Ref>.
If juntas can be (0,,r,r')-tested using q(,r,r') queries, then for all k∈ and λ∈ (0,1), they can be (0,,r,λ^-1 r')-tested in k· q(,kr,kr') queries where
=1-(1-2)^(1-λ)k/2/2.
We will need to following composition theorem for juntas. It is a more precise version of <Ref> stated in terms of <Ref>.
For any λ∈ (0,1), the property of being an r-junta admits an (, ,λ)-composition theorem with respect to _k for any ≤ where
= 1-(1-2)^(1-λ)k/2/2.
Assume that f:^n→ is -far from being an r-junta over 𝒟. We would like to show that _k∘ f is -far from being a λ r k-junta over 𝒟^k where is defined as in the lemma statement. Let r_1+⋯+r_k=λ rk be the partition of the junta budgets which minimizes the expression
1 - √(∏_i ∈ [k]*1 - 2·_(f, f̃_r_i))/2
from <Ref>. Let A_≤ r [k] denote the indices for which r_i≤ r and let A_>r=[k]∖ A_≤ r. By a counting argument, at least a (1-λ)-fraction of r_i satisfy r_i≤ r and so |A_≤ r|≥ (1-λ)k. By our assumption that f is far from being an r-junta, for these r_i, we get _𝒟(f,f_r_i)>. Therefore, we can conclude that for any λ rk-junta h:^nk→:
_𝒟^k(_k∘ f,h) ≥1 - √(∏_i ∈ [k]*1 - 2·_(f, f̃_r_i))/2<Ref>
=1 - √(∏_i ∈ A_≤ r*1 - 2·_(f, f̃_r_i)·∏_i∈ A_>r*1 - 2·_(f, f̃_r_i))/2
≥1 - √(∏_i ∈ A_≤ r*1 - 2·_(f, f̃_r_i))/2≤1/2
> 1 - *1 - 2^(1-λ)k/2/2_𝒟(f,f_r_i)> for i∈ A_≤ r.
Since h was arbitrary, this shows that _k∘ f is -far from being a λ rk-junta.
<Ref> is stated in the non-tolerant regime. However, we note that the same theorem holds in the (0,,r,r')-testing regime. That is, under the conditions of <Ref>, if 𝒫 is (0,,s,s')-testable, then it is also (0,,s,λ^-1s')-testable. This is because if f is a 0-approximator of f over 𝒟, then g∘f is a 0-approximator of g∘ f over 𝒟^k.
<Ref> shows that the property of being an r-junta admits an (, 1-(1-2)^(1-λ)k/2/2,λ)-composition theorem. Therefore, <Ref> shows that if juntas can be (0,,r,r')-tested in q(,r,r') queries then they can be (, r,r')-tested in k· q(,kr,kr') queries where
=1-(1-2)^(1-λ)k/2/2.
§.§.§ Weak testers suffice for tolerant junta testing
If there is a q(r)-query tester that, given queries to f:^n→ and random samples from a distribution 𝒟, distinguishes between
* Yes: f is 1/4-close to an r-junta, and
* No: f is 1/3-far from every r-junta,
then for every >0 and λ∈ (0,1), there is a q(r/(4))/4-query algorithm that distinguishes between
* Yes: f is -close to an r-junta, and
* No: f is Ω(/1-λ)-far from every λ^-1r-junta.
Let 𝒯 be a q(r)-query tester for juntas that satisfies the theorem statement. Given queries to a function f:^n→ and random samples from , we design an algorithm for (,5/1-λ, r,λ^-1r)-testing f over . The algorithm is straightforward. We choose k=1/4, and run the procedure in <Ref> with g=_k:^k→ and junta size kr.
Query complexity. 𝒯 makes q(kr)=q(r/4) queries to the target _k∘ f:^nk→ before it terminates. Our tester makes k queries to f for each query to _k∘ f. Therefore, our tester makes k· q(r/4)=q(r/4)/(4) queries in total.
Correctness. For correctness, we need to show:
Yes case: if f is -close to being an r-junta over , then _k∘ f is 1/4-close to being a kr-junta over ^k, and
No case: if f is 5/1-λ-far from being an λ^-1r-junta over , then _k∘ f is 1/3-far from being a kr-junta over ^k.
Yes case.
Let f be an r-junta which -approximates f over . By a union bound:
_∼𝒟^k[XOR_k∘ f()≠XOR_k∘f()] ≤_∼𝒟^k[some f(^(i))≠ f(^(i))]
≤ k·_𝒟(f,f)≤ k = 1/4.
Since _k∘f is a kr-junta, this shows that _k∘ f is 1/4-close to a kr-junta.
No case.
If f is 5/(1-λ)-far from being a λ^-1r-junta, then <Ref> implies that _k∘ f is
1-(1-2)^(1-λ)k/2/2
far from being a λλ^-1kr=kr-junta over ^k where 5/(1-λ). Therefore, it is sufficient to show that 1-(1-2)^(1-λ)k/2/2≥1/3. We observe 2/(1-λ)k≤log_1/3(e)· which implies 3^-2/((1-λ)k)≥ e^-2≥ 1-2. It follows:
1/3≥ (1-2)^(1-λ)k/2
which provides the desired bound.
§.§.§ Hardness of distribution-free tolerant junta testing
We prove the following which implies <Ref>.
Given queries to a function f:^n→ and random samples from a distribution 𝒟, and r≤ n, it is NP-hard under randomized reductions to distinguish between
* Yes: f is 0-close an r-junta over 𝒟, and
* No: f is 1/3-far from every Ω(rlog n)-junta over 𝒟.
We reduce from the SetCover problem.
A SetCover instance over a universe [m] is a collection of subsets 𝒮 = { S_1,…,S_n} where S_i [m]. The SetCover problem is to compute a minimal size subcollection {S_i_1,…, S_i_r} which covers the universe: [m]=S_i_1∪⋯∪ S_i_r.
SetCover is known to be hard to approximate.
Given a SetCover instance 𝒮 and a parameter r, it is NP-hard to distinguish between
* Yes: 𝒮 has a size-r set cover, and
* No: 𝒮 requires set covers of size Ω(rlog n).
Suppose we have an algorithm 𝒯_weak for testing juntas that can distinguish between the Yes and No cases in the theorem statement. In particular, there is a (0,1/3,r,Ω(rlog n))-tester for juntas. <Ref> implies that there is a (0,,r,Ω(rlog n))-tester, 𝒯_strong, for juntas as long as satisfies
1/3≤1-(1-2)^(1-λ)k/2/2⊛.
In the reduction, we will choose appropriately and use this boosted tester to solve SetCover.
The reduction. The reduction from SetCover to junta testing is standard <cit.>. We will restate it here for convenience. Let 𝒮 = { S_1,…,S_n} be a SetCover instance over the universe [m] and define u^(1),…,u^(m)∈^n where
(u^(j))_i =
1 if j ∈ S_i
-1 otherwise.
Let 𝒟 be the uniform distribution over { u^(1),…,u^(m), (-1)^n} and let f:^n→ be the function which is the disjunction of its inputs: f x_1⋯ x_n (where 1 is interpreted as true and -1 as false).
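A small sketch of this instance construction (plain Python, with +1/-1 encoding true/false as in the text; the helper names are ours):

```python
def build_reduction(sets, m):
    """sets: a list of n subsets of {0,...,m-1}, i.e. the SetCover instance."""
    n = len(sets)
    # u^(j) has a +1 in coordinate i exactly when element j is covered by S_i.
    u = [tuple(1 if j in sets[i] else -1 for i in range(n)) for j in range(m)]
    support = u + [tuple([-1] * n)]      # D is uniform over these m+1 points

    def f(x):                            # disjunction of the n inputs
        return 1 if any(xi == 1 for xi in x) else -1

    return f, support
```

A size-r cover {S_i_1,…,S_i_r} corresponds to the r-junta given by the disjunction of x_i_1,…,x_i_r, which agrees with f on every point of the support; this is exactly the Yes case analyzed below.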
We choose k=Θ(m) so that <ref> holds with Ω(1/m)< <1/(m+1). We then run the boosted tester 𝒯_strong on the function f and distribution 𝒟, to test if f is 0-close to an r-junta or -far from being an Ω(rlog n)-junta (where the parameters r and Ω(rlog n) correspond to the SetCover parameters). Our algorithm for SetCover outputs Yes if and only if the tester accepts f as being 0-close to an r-junta.
Runtime.
If the tester 𝒯_weak runs in polynomial time, then since k=Θ(m) and =Θ(1/m), the tester 𝒯_strong runs in polynomial time. Queries to the target function f and random samples from can also be simulated in randomized polynomial time.
Correctness.
For correctness, we need to show:
Yes case: if 𝒮 has a size-r set cover, then f is 0-close to an r-junta over , and
No case: if 𝒮 requires set covers of size Ω(rlog n), then f is -far from being a Ω(klog n)-junta over .
Yes case.
Let S_i_1,…, S_i_r be a size-r set cover. Consider the function f=x_i_1⋯ x_i_r. Since these indices form a set cover of 𝒮, f(u^(i))=1 for all i∈ [m] and f((-1)^n)=-1. This shows _(f,f)=0. It follows that f is 0-close to an r-junta over since f is an r-junta.
No case.
Suppose f is an r'-junta satisfying _𝒟(f,f)< 1/(m+1). The relevant variables of f must correspond to a set cover of 𝒮: if some element i∈ [m] is not covered, then f(u^(i))=f((-1)^n) and _𝒟(f,f)≥ 1/(m+1). This shows that if 𝒮 requires set covers of size Ω(rlog n) then f is 1/(m+1)-far from every Ω(rlog n)-junta. In particular, since <1/(m+1), every Ω(rlog n)-junta is -far from f.
§ ACKNOWLEDGMENTS
We thank the FOCS reviewers for their helpful comments and feedback. The authors are supported by NSF awards 1942123, 2211237, 2224246 and a Google Research Scholar award. Caleb is also supported by an NDSEG fellowship, and Carmen by a Stanford Computer Science Distinguished Fellowship.
§ COUNTEREXAMPLES TO NATURAL COMPOSITION THEOREMS
§.§ Counterexample to Conjecture 1
For any odd k and n ≥ k let R = (n-1)k and be the uniform distribution over ^n. There are symmetric functions g:^k → and f:^n → for which the following holds.
* There is an R-junta h achieving,
_^k(g∘ f, h) ≤ O(1/√(k)).
* The natural strategy of dividing the budget equally achieves,
_^k(g∘ f, g ∘f̃_R/k) = 1/2.
We set g = _k to be the majority function on k bits,
g(y_1, …, y_k) =
1 if ∑_i ∈ [k] y_i ≥ 0
-1 otherwise.
and f = _n to be the parity function,
f(x_1, …, x_n) = ∏_i ∈ [n] x_i.
The following fact will be useful in giving a strategy that achieves low error.
Let _1, …, _k-1 each be uniform and independent samples from . Then, for any choice of c,
*∑_i ∈ [k-1]_i = c≤ O*1/√(k).
We now give the junta achieving low error.
Let h = _k-1∘_n. Then,
* h is an ((k-1)n ≤ R)-junta.
* h achieves,
_^k(g∘ f, h) ≤ O(1/√(k)).
Clearly h depends on only the first (k-1)n bits of its inputs, so it is an R-junta as long as (k-1)n ≤ (n-1)k, which is guaranteed by the assumption n≥ k in <Ref>. We compute h's error,
_^k(g∘ f, h) = _∼^n[_k() ≠_k-1()].
In order for _k() ≠_k-1(), it must be the case that the ∑_i ∈ [k-1]_i is -1 or 0. The desired result follows from <Ref>.
We'll next show the natural strategy achieves advantage 0, equivalent to error 1/2.
Let f = _n and be the uniform distribution over ^n. Then,
_(f, f̃_n-1) = 0.
By <Ref>, it is sufficient to show that for any set |S| = n-1 and any x ∈^n,
_∼[f() |_S = x_S] = 0.
For any fixed x, there are two y ∈^n satisfying y_S = x_S: The first choice if y = x, and the second choice is x with a single bit flipped (the one bit not in S). One of these two choices will have a parity of +1 and one will have a parity of -1, so the average parity is 0, as desired.
For any odd k, μ = 0, and = [0,…, 0],
_μ, (_k) = 0.
For odd k, _k is an odd function, so _∼^n[_k()]=0. Then,
_μ, (_k) = __1 ∼^k, _2 ∼^k[_k(_1)_k(_2)]
= __1 ∼^k[_k(_1)]__2 ∼^k[_k(_2)] _1, _2 independent
= 0 · 0 =0._k is odd
The following completes the proof of <Ref>.
In the setting of <Ref>,
_^k(g∘ f, g ∘f̃_R/k) = 0.
This follows from <Ref> and <Ref>.
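The gap is easy to see numerically. Over the uniform distribution the k block parities are independent uniform ±1 bits, so the sketch below (plain Python) samples them directly: the junta h from the first claim errs only when the first k-1 parities tie, which happens with probability O(1/√(k)), whereas the natural equal-split strategy can only guess and errs with probability 1/2.

```python
import random

def maj(bits):
    return 1 if sum(bits) >= 0 else -1

k, trials, mistakes = 101, 100_000, 0
for _ in range(trials):
    parities = [random.choice((-1, 1)) for _ in range(k)]   # one parity per block
    mistakes += maj(parities) != maj(parities[:-1])
print("estimated error of h:", mistakes / trials, "(natural strategy: 0.5)")
```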
§.§ Counterexample to Conjecture 2
For any n ≥ 10, k ∈, and R ≤ n/2, let be uniform over ^n. There are g: ^k and f:^n → for which, for all partitions r_1 + ⋯ +r_k = R,
_^k(g∘ f, g(f̃_r_1, …, f̃_r_k)) ≥ 1 - 2^-Ω(k).
<Ref> is particularly surprising in light of the fact that one of the constant -1 or constant 1 functions, each of which is a 0-junta, will achieve error ≤ 1/2 with respect to g ∘ f. We begin with a probabilistic construction of f achieving the following.
For any n ≥ 10, there is an f: ^n → for which _∼^n[f()] ≤ 0.5 but, for all |S| ≤ n/2 and x ∈^n,
_∼^n[f() | = x] > 0.
Consider a random function where, for each x ∈^n, (x) ∼π_0.25. We'll show that meets the desired criteria with a strictly positive probability, proving the existence of at least one such f.
Let μ() _∼^n[()]. Then μ() is the average of 2^n independent samples of π_0.25. Applying Hoeffding's inequality,
[μ() > 0.5] ≤exp(-2 · (0.25)^2 · 2^n) = exp(-2^n/2).
Similarly, for any |S| ≤ n/2 and x ∈^n, let μ(, S, x) _∼^n[() | = x]. μ(,S,x) is the average of at least 2^n/2 independent samples of π_0.25. Once again, by Hoeffding's inequality,
[μ(,S,x) ≤ 0] ≤exp(-2 · (0.25)^2 · 2^n/2) = exp(-2^n/2/2).
Union bounding over all 2^n choices of S and 2^n choices for x, we have that meets the desired criteria with probability at least
1 - exp(-2^n/2) - 2^2nexp(-2^n/2/2).
When n ≥ 10, the above probability is strictly positive, so such an f must exist.
Let f be a function with the properties of <Ref>, and let g = And_k return +1 if and only if all k of its inputs are +1. By <Ref>, for any r ≤ n/2, f̃_r is the constant +1 function. Therefore, for any r_1 + ⋯ + r_k = R, g(f̃_r_1, …, f̃_r_k) is the constant +1 function. However,
_∼^k[(g ∘ f)() = +1] = (3/4)^k.
§.§ Counterexample to Conjecture 3
There is a g:^k →, an f:^n →, a distribution over ^n, and a budget R for which no R-junta of composed form achieves optimal error among all R-juntas for g∘ f with respect to ^k.
We'll set k = 2, g = And_2. Let p:^2 → [0,1] be defined as
p(x)
1 if x_1 = x_2 = 1,
3/4 if x_1 ≠ x_2,
3/5 if x_1 = x_2 = -1.
We begin by describing a probabilistic construction: Given the input x, the value of (x) will still be a random variable. In particular, we set n =2, and (x) is set to +1 with probability p(x) and -1 otherwise. This probabilistic construction will later be derandomized. We allow a junta budget of R = 4.
Next, we construct an optimal approximator for g ∘. Given an input x^(1), x^(2), let _1 = (x^(1)) and _2 = (x^(2)). For succinctness, we'll use p_i to refer to [_i = 1]. Then, since g = And_2, the optimal approximator will return 1 iff p_1p_2 ≥ 1/2. For our particular , the only choices for p_i are 3/5,3/4,1. As a result,
h^(opt)(p_1,p_2) =
1 if p_1 = 1 or p_2 = 1,
1 if p_1 = p_2 = 3/4,
0 otherwise.
However, no composed form can achieve the above optimal approximator. Recall that composed form approximators are of the form h(q_1, q_2), where each q_i has range . The fact that the size of this range is 2, but there are three possible choices (3/5, 3/4, 1) for p_i, is the crux of the issue.
In more detail, of the three choices (3/5,3/4,1) for p_i, q_1 must classify at least two of them the same way. This gives three cases.
* If q_1 classifies 3/4 and 1 the same way, h(q_1, q_2) cannot distinguish between p_1 = 3/4, p_2 = 3/5 and p_1 = 1, p_2 = 3/5, and so cannot be optimal.
* If q_1 classifies 3/5 and 3/4 the same way, h(q_1, q_2) cannot distinguish between p_1 = 3/4, p_2 = 3/4 and p_1 = 3/5, p_2 = 3/4, and so cannot be optimal.
* If q_1 classifies 3/5 and 1 the same way, h(q_1, q_2) cannot distinguish between p_1 = 3/5, p_2 = 3/4 and p_1 = 1, p_2 = 3/4, and so cannot be optimal.
In all three cases composed form cannot achieve optimal error. It will always be off by some constant.
To derandomize this construction, we set n ≫ 2 sufficiently large. For each x ∈^n, we sample the value f(x) to be +1 with probability p(x_1,x_2) and -1 otherwise. Note that after randomly selecting the value of f on each input x ∈^n, f is now a deterministic function. Following the same arguments as in <Ref>, with high probability over the random choices in defining f, the error of the optimal 4-junta and of the optimal composed form 4-junta for g∘ f are within ±(n) of what they are for g ∘, where (n) goes to 0 as n →∞. Therefore, for sufficiently large n, there exists an f meeting the desired criteria.
|
http://arxiv.org/abs/2307.03943v1 | 20230708093708 | Camouflaged Object Detection with Feature Grafting and Distractor Aware | [
"Yuxuan Song",
"Xinyue Li",
"Lin Qi"
] | cs.CV | [
"cs.CV"
] |
Camouflaged Object Detection
with Feature Grafting and Distractor Aware
*Corresponding author. This work is supported in part by the National Natural Science Foundation of China (Grant No. 41927805).
Yuxuan Song
College of Computer
Science and Technology
Ocean University of China
Qingdao, China
[email protected]
Xinyue Li
College of Computer
Science and Technology
Ocean University of China
Qingdao, China
[email protected]
Lin Qi*
College of Computer
Science and Technology
Ocean University of China
Qingdao, China
[email protected]
August 12, 2023
=========================================================================================================================================================================================================================================================================================================================================================================================
The task of Camouflaged Object Detection (COD) aims to accurately segment camouflaged objects that are integrated into their environment, which is more challenging than ordinary detection because the texture of the target is visually indistinguishable from the background. In this paper, we propose a novel Feature Grafting and Distractor Aware network (FDNet) to handle the COD task. Specifically, we use CNN and Transformer to encode multi-scale images in parallel. In order to better exploit the advantages of the two encoders, we design a cross-attention-based Feature Grafting Module to graft features extracted from the Transformer branch into the CNN branch, after which the features are aggregated in the Feature Fusion Module. A Distractor Aware Module is designed to explicitly model the two possible types of distractors in the COD task to refine the coarse camouflage map. We also propose ACOD2K, the largest artificial camouflaged object dataset, which contains 2000 images with annotations. We conducted extensive experiments on four widely used benchmark datasets and the ACOD2K dataset. The results show that our method significantly outperforms other state-of-the-art methods. The code and the ACOD2K will be available at https://github.com/syxvision/FDNet.
Camouflaged Object Detection, Transformer, Convolutional Neural Networks, Distractor
§ INTRODUCTION
Camouflage refers to the way creatures exploit similarity of color, texture, etc., to hide themselves in the background without being discovered by predators. Inspired by the natural camouflage of animals such as the chameleon, artificial camouflage was created to deceive human visual inspection. The computer vision task of Camouflaged Object Detection (COD) aims to accurately segment concealed objects from the background environment, and it has recently attracted the interest of researchers and facilitated many applications in different fields. However, by its very nature, locating and segmenting camouflaged objects is much more difficult than ordinary object detection, which makes the COD task extremely challenging.
Recently, many deep learning based methods have been proposed to solve the COD task and have achieved impressive progress. SegMaR <cit.> introduces a Magnification Module to iteratively upsample images to segment camouflaged objects with complex structures. ZoomNet <cit.> showed that multi-scale information is very effective for resolving the appearance and shape variation of objects at different scales. This model uses a shared encoder to encode images of three scales. However, a shared encoder cannot take full advantage of multi-scale images and may cause error propagation. Therefore, we propose to use two different encoders in parallel and design a Feature Grafting Module for better feature transfer.
Existing COD methods only consider the background as a distractor; for example, SINetv2 <cit.> uses reverse attention to erase the foreground and uses the background to mine potential camouflage areas. However, in the COD task, due to the similarity between the object and the surrounding environment, there are two different types of distractors, as shown in Figure <ref>: 1) in the first row, the stem of the branch is misclassified as a camouflaged object since its texture is very similar to the target; 2) in the second row, the lower half of the animal's body is blended into the black background, and the network misses it. This observation suggests that explicitly modeling the semantic features of these two types of distractors with supervision can improve detection performance.
In this paper, we propose a Feature Grafting and Distractor Aware network (FDNet) for camouflaged object detection. We employ Transformer and CNN to exploit information at different scales, where the Transformer models long-range dependencies for rich context information and the CNN mines local details for edge information. To aggregate the features from these two encoders, we develop a Feature Grafting Module based on cross-attention, which fuses features in a bottom-up manner to produce a coarse prediction map. A Distractor Aware Module is designed to guide the learning by modeling the two types of distractors and exploring potential camouflage regions under the supervision of the ground truth. Benefiting from the designed modules, our proposed network can better recognize distractors and achieve better detection performance.
In addition, we contribute a new COD dataset to the community, motivated by the fact that most existing COD datasets consist of natural camouflaged animals, whereas only a small portion is camouflage created by humans. To address this limitation, we collected and annotated 2000 images of artificial camouflage from the Internet, constituting the largest artificial camouflage dataset to date, named ACOD2K. Figure <ref> shows some example images of this dataset. We compare our proposed model with other state-of-the-art models on the public datasets and on this new dataset.
Our contributions.
1) Camouflaged objects can be segmented more accurately by our proposed FDNet, which features a multi-scale feature extractor and explicit modeling of distractors. 2) The parallel encoding and the Feature Grafting Module are able to extract and fuse multi-scale features, which are utilized by the Distractor Aware Module to incorporate two different types of distracting semantic cues for target segmentation. 3) A large artificial camouflage dataset, ACOD2K, is proposed and used to compare the performance of our proposed model and other existing models.
§ RELATED WORK
The release of large-scale camouflage datasets (such as COD10K <cit.>) has triggered the invention of many deep learning-based methods, which have shown impressive results for the COD task. A majority of the recent work is inspired by how human observers visually search for camouflaged targets, such as SINet <cit.>, ZoomNet <cit.>, and SegMaR <cit.>. SINet was designed with two stages for searching and recognition, respectively. ZoomNet <cit.> and the recently proposed SegMaR <cit.> enlarge the image in potential target regions to further mine distinguishing clues in a coarse-to-fine manner. Other work proposes auxiliary cues to improve performance, such as making better use of boundary clues <cit.> and frequency-domain perceptual cues <cit.>. Joint task learning was also found to be useful when SOD (Salient Object Detection) and COD are considered simultaneously to boost each other's performance <cit.>.
Unlike CNNs, Transformers have a global receptive field, which can capture richer contextual information. Their success in natural language processing has carried over to computer vision tasks. UGTR <cit.> combines Bayesian learning and Transformers to infer areas of uncertainty. To take advantage of both architectures, we employ CNN and Transformer together to enhance the performance of the model.
§ OUR METHOD
§.§ ACOD2K dataset
Camouflage images can be categorized as natural or artificial. Natural camouflage refers to the ability of animals to blend into their surroundings through changes in their physiological characteristics, making them difficult for predators to detect. Artificial camouflage refers to camouflage designed using human reasoning through methods such as painting and camouflage uniforms, specifically targeting the characteristics of human visual perception in order to more effectively deceive the human visual system. It has great practical value for tasks such as disaster search and rescue operations. Leveraging this, we have constructed ACOD2K, the largest artificial camouflage dataset. It is worth noting that current camouflaged object detection methods are trained almost exclusively on natural camouflage images. This is because existing datasets mainly feature natural camouflaged animals, making it difficult to train models that can accurately detect artificial camouflage. For instance, the two most commonly used training datasets in COD tasks, CAMO and COD10K, have an imbalanced distribution of natural and artificial camouflage images. Of the 2,500 images in CAMO, less than 10% are artificial camouflage images. Similarly, COD10K, a large-scale dataset with 10,000 images covering multiple camouflaged objects in natural scenes divided into 5 super classes, lacks artificial camouflage images. This highlights the need for datasets like ACOD2K, which has a significant number of artificial camouflage images, to enable the development of more robust camouflaged object detection methods. ACOD2K consists of 2000 images, of which 1500 contain camouflaged objects, 400 contain non-camouflaged objects, and 100 are background images. Most of the images (80%) were collected from the Internet, searched using keywords such as “military camouflage”, “body painting”, and “Ghillie suit”; the rest are from public COD and SOD datasets. High-quality, fine-grained pixel-level matting annotations were carried out for each image. To guarantee quality, an additional researcher further verified all annotations.
§.§ Overall Architecture
The overall structure of our proposed FDNet is shown in Figure <ref>. It is divided into two stages: the first stage generates a coarse feature map, and the second stage refines it with the Distractor Aware Module. FDNet uses multi-scale images as input. Unlike ZoomNet, which uses a shared encoder, we use PVT <cit.> for the main scale and Res2Net50 <cit.> for the sub-scale, which together constitute a parallel encoder. We design a Feature Grafting Module based on cross-attention to aggregate features of these two scales, which not only extracts valuable semantic clues but also suppresses redundant information and background noise. The multi-scale features are then sent to the Feature Fusion Module for decoding, which achieves more efficient transmission of the encoded information through bottom-up dense connections. Finally, the features are sent into the dual-branch Distractor Aware Module to refine the feature map, with ground truth used for supervision.
§.§ Feature Grafting Module
For the main scale image, we use PVT as the backbone to extract feature maps of 4 stages, denoted as g_i;i=1,2,3,4. Since features with too small a resolution lose most of the information, we do not use g_4. For the sub-scale image, we use Res2Net50 as the backbone to extract a set of feature maps, denoted as f_i;i=1,2,3,4. We graft features between feature groups with the same resolution. Since the resolution of the sub-scale is twice that of the main scale, g_i and f_i+1 for i=1,2,3 have the same resolution. For the first two groups, we use pooling for feature grafting to maintain and highlight useful information. In neural networks, deeper features carry richer semantic clues: g_3, extracted by the Transformer, has rich global context information, while f_4, extracted by the CNN, has edge detail information complementary to it. We believe that simple fusion methods such as pooling, concatenation, or addition are not effective enough for mutual learning between these two features and cannot suppress the background noise from the CNN well. Therefore, we use cross-attention to incorporate the global semantic cues learned from the main scale into each pixel of the sub-scale. The details are shown in Figure <ref>.
F_4 = Softmax(f_4^Q ·g_3^K^T/√(k))· f_4^V
f_4^Q,f_4^V=θ(f_4) g_3^K=ϕ(g_3)
θ(·) uses flatten and permute operations to transform f_4∈ R^C × H × W into f_4^'∈ R^HW × C. As in self-attention, we then pass it through Layer Normalization and a linear transformation to get f_4^Q and f_4^V; the process by which g_3 yields g_3^K through ϕ(·) is the same as for θ(·).
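For concreteness, a minimal PyTorch sketch of this cross-attention step is given below. The single-head formulation, the shared channel width, and the exact placement of the normalization and linear layers are our assumptions for brevity; the actual model may differ in these details.

```python
import torch
import torch.nn as nn

class FeatureGrafting(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.norm_f = nn.LayerNorm(channels)
        self.norm_g = nn.LayerNorm(channels)
        self.to_q = nn.Linear(channels, channels)
        self.to_k = nn.Linear(channels, channels)
        self.to_v = nn.Linear(channels, channels)
        self.scale = channels ** -0.5

    def forward(self, f4, g3):
        # f4 (CNN branch) and g3 (Transformer branch): (B, C, H, W), same resolution
        B, C, H, W = f4.shape
        f_seq = f4.flatten(2).permute(0, 2, 1)          # theta(): (B, HW, C)
        g_seq = g3.flatten(2).permute(0, 2, 1)          # phi():   (B, HW, C)
        q = self.to_q(self.norm_f(f_seq))
        v = self.to_v(self.norm_f(f_seq))
        k = self.to_k(self.norm_g(g_seq))
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        out = attn @ v                                   # (B, HW, C)
        return out.permute(0, 2, 1).reshape(B, C, H, W)  # back to (B, C, H, W)
```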
§.§ Feature Fusion Module
Unlike previous methods that directly perform convolution after channel concatenation of adjacent feature layers to output the prediction map, we fuse the deeper features into a semantic filter. We first element-wise multiply it with the current layer's features to suppress background interference that may cause anomalies, and then preserve the original information by residual addition. The details are shown in Figure <ref>.
The features produced by the Feature Grafting Module are denoted as F_i;i=1,2,3,4. Since F_4 is the last layer of features, we directly perform a 3x3 convolution on F_4 to form F̂_̂4̂. For F_3, we perform filtering with F_4 to form F_3^filter. Correspondingly, F_2^filter and F_1^filter are given by the formulas below. We take the top-level feature F̂_̂1̂ as the final result of the Feature Fusion Module, and the coarse prediction is F_c.
F̂_̂4̂ = Conv3(F_4)
F_3^filter = Conv3(Conv1(F_4↑_2)
F̂_̂3̂ = Conv3([F_3^filter * F_3+F_3;F̂_̂4̂])
F_2^filter = Conv3(Conv1([F_4↑_4;F_3↑_2]))
F̂_̂2̂ = Conv3([F_2^filter * F_2+F_2;F̂_̂3̂])
F_1^filter = Conv3(Conv1([F_4↑_8;F_3↑_4;F_2↑_2]))
F̂_̂1̂ = Conv3([F_1^filter * F_1+F_1;F̂_̂2̂])
F_c=Conv3(F̂_̂1̂)
Conv3 and Conv1 represent 3x3 and 1x1 convolutions respectively, ↑ refers to upsampling, [;] means channel concatenation, and * represents element-wise multiplication.
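The following PyTorch sketch implements one representative filter-and-fuse step of the equations above (computing F̂_i from F_i, the deeper features, and F̂_i+1). Treating Conv3/Conv1 as convolution-BN-ReLU blocks, upsampling F̂_i+1 to the current resolution, and the channel widths are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(cin, cout, k):
    return nn.Sequential(nn.Conv2d(cin, cout, k, padding=k // 2),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class FuseStep(nn.Module):
    """One step: semantic filter from deeper features, residual filtering, fusion."""
    def __init__(self, channels, num_deeper):
        super().__init__()
        self.reduce = conv_block(num_deeper * channels, channels, 1)   # Conv1
        self.filter = conv_block(channels, channels, 3)                # Conv3
        self.fuse = conv_block(2 * channels, channels, 3)              # Conv3

    def forward(self, feat_i, deeper_feats, prev_hat):
        size = feat_i.shape[-2:]
        up = [F.interpolate(d, size=size, mode='bilinear', align_corners=False)
              for d in deeper_feats]
        sem_filter = self.filter(self.reduce(torch.cat(up, dim=1)))
        filtered = sem_filter * feat_i + feat_i          # filter + residual
        prev_hat = F.interpolate(prev_hat, size=size, mode='bilinear',
                                 align_corners=False)
        return self.fuse(torch.cat([filtered, prev_hat], dim=1))
```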
§.§ Distractor Aware Module
We believe that there are two types of distractors present in the coarse prediction map generated in the first stage, namely: (i) objects that are camouflaged but not detected, referred to as false negatives, ξ_fn, and (ii) objects that are not camouflaged but are misdetected, referred to as false positives, ξ_fp. To address this, we propose a dual-branch Distractor Aware Module that explicitly models the potential interference and aims to improve the accuracy of the segmentation results. As illustrated in the lower part of Figure <ref>, we first use F̂_̂1̂∈ R^64 × H × W to extract ξ_fn features through a lightweight encoder; the encoder consists of two 3x3 convolutions, each followed by BN and ReLU. In order to make better use of ξ_fn, we generate a prediction map of ξ_fn. During training, the ground truth of ξ_fn is approximated by the difference between the ground truth of the segmentation map and the coarse prediction map F_c. Then we concatenate ξ_fn with F̂_̂1̂ and send the result into the attention mechanism to generate augmented weights ξ_fn^a. The attention mechanism aims to enhance the features of possible ξ_fn regions. We perform element-wise multiplication of ξ_fn^a and the original feature F̂_̂1̂, and then use a residual connection to generate the enhanced feature F_fn. Now the network can better segment those regions that were ignored as background.
ξ_fn = Small Encoder(F̂_̂1̂)
fn_GT = GT - φ(F_c)
Similarly, we use an encoder of the same design to extract ξ_fp features and its prediction map. The ground truth of ξ_fp is approximated by the difference between the coarse prediction map F_c and the ground truth of the segmentation map. We concatenate F_fn with ξ_fp along the channel dimension and send the result into a refine unit consisting of two 3x3 convolutional layers to capture richer context information, so as to better distinguish the misdetected areas. Finally, the result is subtracted from F_fn to obtain the prediction feature that suppresses the ξ_fp distractor. After a 3x3 convolution, we obtain the final prediction map F_p. φ() represents the binarization operation.
ξ_fp = Small Encoder(F̂_̂1̂)
fp_GT = φ(F_c) - GT
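A PyTorch sketch of the two branches is given below. The channel width of 64 follows the text; the concrete spatial-attention block and the 1x1 prediction heads used for the auxiliary supervision are our assumptions.

```python
import torch
import torch.nn as nn

def small_encoder(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout),
                         nn.ReLU(inplace=True),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout),
                         nn.ReLU(inplace=True))

class DistractorAware(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.fn_enc = small_encoder(channels, channels)
        self.fp_enc = small_encoder(channels, channels)
        self.fn_head = nn.Conv2d(channels, 1, 1)   # supervised by fn_GT
        self.fp_head = nn.Conv2d(channels, 1, 1)   # supervised by fp_GT
        self.attn = nn.Sequential(nn.Conv2d(2 * channels, 1, 7, padding=3),
                                  nn.Sigmoid())
        self.refine = small_encoder(2 * channels, channels)
        self.out = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, feat):                       # feat = F_hat_1
        xi_fn = self.fn_enc(feat)
        weights = self.attn(torch.cat([xi_fn, feat], dim=1))
        f_fn = weights * feat + feat               # enhance missed (fn) regions
        xi_fp = self.fp_enc(feat)
        refined = self.refine(torch.cat([f_fn, xi_fp], dim=1))
        pred = self.out(f_fn - refined)            # suppress false-positive regions
        return pred, self.fn_head(xi_fn), self.fp_head(xi_fp)
```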
§.§ Loss Functions
Our network has two types of supervision. For the loss L_F_p of the prediction map, as in most COD methods, we use the weighted BCE loss and the weighted IoU loss (Loss1). For the losses L_fn and L_fp of fn and fp, we use the weighted BCE loss (Loss2). The loss function is as follows.
Loss = L_F_p+ λ L_fn + β L_fp
Loss1 = L_BCE^ω+L_IOU^ω
Loss2 = ∑_i(-[N_p/N_p+N_n(y_i)log(p_i)+
N_n/N_p+N_n(1-y_i)log(1-p_i)])
In the experiment, λ and β are set to 10. N_p and N_n represent the number of positive pixels and negative pixels, respectively.
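For reference, the sketch below shows a commonly used implementation of a weighted BCE + weighted IoU loss in the COD/SOD literature (Loss1) and the class-weighted BCE of Loss2 written as in the formula above. The boundary-aware weighting (the average-pooling trick) is our assumption about the precise weights; the paper only states that the losses are weighted.

```python
import torch
import torch.nn.functional as F

def structure_loss(pred, mask):
    # pred: logits (B,1,H,W); mask: ground truth in {0,1}
    weit = 1 + 5 * torch.abs(F.avg_pool2d(mask, 31, stride=1, padding=15) - mask)
    wbce = F.binary_cross_entropy_with_logits(pred, mask, reduction='none')
    wbce = (weit * wbce).sum(dim=(2, 3)) / weit.sum(dim=(2, 3))
    prob = torch.sigmoid(pred)
    inter = (prob * mask * weit).sum(dim=(2, 3))
    union = ((prob + mask) * weit).sum(dim=(2, 3))
    wiou = 1 - (inter + 1) / (union - inter + 1)
    return (wbce + wiou).mean()                     # Loss1

def balanced_bce(pred, mask):
    n_p, n_n = mask.sum(), (1 - mask).sum()
    w_pos, w_neg = n_p / (n_p + n_n), n_n / (n_p + n_n)
    prob = torch.sigmoid(pred).clamp(1e-6, 1 - 1e-6)
    return -(w_pos * mask * torch.log(prob)
             + w_neg * (1 - mask) * torch.log(1 - prob)).mean()   # Loss2
```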
§ EXPERIMENTS
§.§ Experiment Setup
Datasets. We perform experiments on four COD benchmark datasets and our ACOD2K. The public datasets are CAMO <cit.>, CHAMELEON <cit.>, COD10K <cit.>, and NC4K <cit.>. Like previous methods, we use 3040 images from COD10K and 1000 images from CAMO for training and the other datasets for testing. ACOD2K is split into training and test sets with a ratio of 8:2.
Evaluation Criteria. We use four metrics that are commonly used in COD tasks to evaluate model performance: mean absolute error (MAE) <cit.>, F_β^w-measure <cit.>, E-measure <cit.>, and S-measure <cit.>.
Implementation Details. Our network uses PVT <cit.> and Res2Net50 <cit.> pretrained on ImageNet as backbones. We use a data augmentation strategy of random flips and rotations. During training, in order to balance efficiency and performance, the size of the main scale is set to 288x288 and the batch size is 32. We use SGD with momentum 0.9 and weight decay 0.0005 as the optimizer; the learning rate is initialized to 0.05 and follows a linear decay strategy, and the maximum number of training epochs is set to 50. The entire network is trained on an NVIDIA GeForce GTX 3090Ti.
§.§ Comparisons with State-of-the-arts
To show the effectiveness of our method, we compare it with 10 SOTA methods on the public datasets. On our ACOD2K, we compare with 3 COD methods. For a fair comparison, the results of these models are either provided by the authors or obtained by retraining from open-source code.
Quantitative Evaluation. As shown in Table <ref>, our method achieves superior performance on multiple evaluation metrics. Specifically, our method improves F_β^ω by 1.5%, 3.3%, 6%, and 1.9% over the second-best method on the four datasets. Table <ref> shows that FDNet outperforms the second-best method on ACOD2K, improving the four metrics by 1.4%, 2.4%, 1%, and 0.4%.
Qualitative Evaluation. We further show a qualitative comparison of FDNet with other methods in the form of visualization maps. As shown in Figure <ref>, our method not only recognizes the camouflaged objects well but also segments fine edges. In addition, as shown in the second row, our method also works well in the presence of distractors in the image.
§.§ Ablation Studies
As shown in Table <ref>, we conducted five ablation experiments. In A, we removed all key modules, used only single-scale images, and simply performed convolution after channel concatenation to get the final prediction map. In B, we replaced the simple fusion of A with the Feature Fusion Module. In C, we use multi-scale images but share the encoder, and the features of different scales are fused by pooling. In D, we use CNN and Transformer to encode the images of the two scales respectively and use the Feature Grafting Module to fuse features. In E, we added the Distractor Aware Module on top of D.
Effectiveness of multi-scale.
By fusing features of different scales, we can explore richer semantic representations. From the second and third rows in Table <ref>, it can be seen that the performance of C is significantly better than that of B; in particular, on COD10K, S_α, F_β^w, E_ϕ, and ℳ improved by 4.4%, 8.5%, 2.9%, and 0.9%, respectively.
Effectiveness of Feature Fusion.
From the first and second rows of Table <ref>, B's performance on the four metrics improved by 0.8%, 2.2%, 1.1%, and 0.4% on average; this is due to the positive impact of the Feature Fusion Module's bottom-up dense feature-guided structure.
Effectiveness of Feature Grafting.
Compared with C, all metrics of D on the two datasets improve to varying degrees; in particular, F_β^w on CAMO increases by 1%. This is largely because the Feature Grafting Module aggregates the advantages of the two different types of encoders well.
Effectiveness of Distractor Aware.
E outperforms D on all datasets, and the visual comparison results in Figure <ref> also clearly verify that the module can mine potential interference areas.
§ CONCLUSION
We propose a novel COD network, FDNet. First, we design the Feature Grafting Module to extract valuable semantic information and suppress background noise. Then, in the Distractor Aware Module, we obtain a more accurate prediction map by explicitly modeling the two types of distractors. Additionally, we construct a new artificial camouflage dataset, ACOD2K. Experiments on four public datasets and ACOD2K show that our method outperforms other methods significantly, both qualitatively and quantitatively. In the future, we will explore more effective supervision methods for the two types of distractors.
|
http://arxiv.org/abs/2307.04857v1 | 20230710190112 | The geproci property in positive characteristic | [
"Jake Kettinger"
] | math.AG | [
"math.AG",
"math.CO",
"14"
] |
The geproci property in positive characteristic
Jake Kettinger
===========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
The geproci property is a recent development in the world of geometry. We call a set of points Z_k^3 an (a,b)-geproci set (for GEneral PROjection is a Complete Intersection) if its projection from a general point P to a plane is a complete intersection of curves of degrees a≤ b. Nondegenerate examples known as grids have been known since 2011. Nondegenerate nongrids were found starting in 2018, working in characteristic 0. Almost all of these new examples are of a special kind called half grids.
Before the work in this paper– based partly on the author's thesis– only a few examples of geproci nontrivial non-grid non-half grids were known and there was no known way to generate more. Here, we use geometry in the positive characteristic setting to give new methods of producing geproci half grids and non-half grids.
§ INTRODUCTION
While complete intersections have been a topic of much study for many years in algebraic geometry, the study of the geproci property has emerged relatively recently. Much of the groundwork in this study has been laid in the works <cit.>, <cit.>, and <cit.>, which will be cited often in this paper. We will begin with the definition of geproci (from: general projection complete intersection).
Let K be an algebraically closed field. A finite set Z in ^n_K is geproci (pronounced "je-pro-chee") if the projection Z of Z from a general point P∈^n_K to a hyperplane H is a complete intersection in H≅^n-1_K.
An easy but degenerate example of a geproci set in ^n is a complete intersection in a hyperplane H≅^n-1⊂^n. In this paper, we are specifically interested in geproci sets in ^3_K. (No nondegenerate examples are known in ^n, n>3.) In the three-dimensional setting, we will specify that a configuration Z⊂^3_K is (a,b)-geproci (where a≤ b) if the image of Z under a general projection into ^2_K is the complete intersection of a degree a curve and a degree b curve. We will use the notation {a,b}-geproci in instances when we do not want to require a≤ b.
There are two easy-to-understand types of geproci sets. One type as noted above is any complete intersection in a plane: it will project from a general point isomorphically to another complete intersection in any other plane, and so is geproci. The other type is a grid, which we will now define.
Given a curve A⊂^3 comprising a finite set of a pairwise-disjoint lines and a curve B⊂^3 comprising a finite set of b pairwise-disjoint lines, such that every line in A intersects every line in B transversely, the ab points of intersection form an (a,b)-grid.
The set of points Z of an (a,b)-grid is (a,b)-geproci. The image Z of Z under a general projection is equal to the intersection of the images A and B of A and B, which are unions of a lines in the plane and b lines in the plane respectively, and thus A and B are curves of degrees a and b, respectively, meeting at ab points. Thus Z is a complete intersection.
These two types (sets of coplanar points and grids) are well understood, so are called trivial. What is not yet well understood is how nontrivial geproci sets can arise. The existing work on the geproci property has been done over fields of characteristic 0. What is new with this paper are the results in characteristic p>0, starting in the second section. For the rest of this section we will only discuss work which has been done in characteristic 0.
The first nontrivial examples of geproci sets came from the root systems D_4 and F_4 <cit.> and so themselves are called D_4 and F_4. These are configurations in ^3 containing 12 points and 24 points, respectively <cit.>. It was also shown that D_4 is the smallest nontrivial geproci set <cit.>, and the only nontrivial (3,b)-geproci set <cit.>. (See Figure <ref> for the 12 points of D_4 and its 16 sets of 3 collinear points.)
The configurations D_4 and F_4 are examples of half grids.
A set Z⊂^3 is a {μ,λ}-half grid if Z is a nontrivial {μ,λ}-geproci set contained in a set of μ mutually-skew lines, with each line containing λ points of Z.
For example, the D_4 configuration is a 4, 3-geproci half grid and can be covered by four mutually-skew lines, with each line containing three points, as Figure <ref> shows. The general projection of an {a,b} half grid is a complete intersection of a union of a lines and a degree b curve that is not a union of lines. It is known that there is an (a,b)-half grid for each 4≤ a≤ b <cit.>. No other infinite families of nontrivial geproci sets were known before the results in this paper, and only finitely many (indeed, three <cit.>) non-half grid nontrivial geproci sets were known before the results in the next section.
There seem to be strong links between geproci sets Z and sets Z admitting unexpected cones <cit.>.
A finite set Z⊂^n_k admits an unexpected cone of degree d when
dim[I(Z)∩ I(P)^d]_d>max(0,dim[I(Z)]_d-\binom{d+n-1}{n})
for a general point P∈^n_K, where I(Z) is the homogeneous ideal of Z in K[^n] and [I(Z)]_d is its homogeneous component of degree d <cit.>.
This is said to be unexpected because one expects by a naive dimension count that the vector subspace of homogeneous polynomials in [I(Z)]_d that are singular with multiplicity d at a general point P would have codimension \binom{n+d-1}{n} (since being singular at P to order d imposes \binom{n+d-1}{n} conditions on [I(Z)]_d). Therefore it is called unexpected when more such hypersurfaces exist than a naive dimension count would lead one to expect. Chiantini and Migliore showed that every (a,b)-grid with 3≤ a≤ b admits unexpected cones of degrees a and b <cit.>.
§ THE GEPROCI PROPERTY OVER FINITE FIELDS
§.§ Spreads
While examples of nontrivial geproci configurations (especially nontrivial non-half grids) have proven rather elusive in the characteristic 0 setting, we will see in this paper that they arise quite naturally over finite fields. In the finite field setting, we make generous use of the study of spreads over projective space, which we will define now.
Let ^2t-1_k be a projective space of odd dimension over a field k. Let S be a set of (t-1)-dimensional linear subspaces of ^2t-1_k, each of which is defined over k. We call S a spread if each point of ^2t-1_k is contained in one and only one member of S.
Over a finite field, spreads always exist for each t≥ 1 <cit.>. In our three-dimensional case, we have t=2. Therefore a spread in ^3_k will be a set of mutually-skew lines defined over k that cover ^3_k.
Here we show an example of a spread based on <cit.>. Given a field extension k⊂ L with dim_k L=t (as vector spaces), we get a map
_k^2t-1=_k(k^2t)=_k(L^2)⟶_L(L^2)=^1_L
with linear fibers _k(L)=(k^t)=_k^t-1, giving a spread. When we take k=ℝ, t=2, and L=ℂ, we get
^3_ℝ⟶^1_ℂ=S^2.
Composing with the antipodal map S^3→^3_ℝ gives the well-known Hopf fibration S^3→ S^2 with fibers S^1.
Here we give another construction of spreads for ^3 for fields of positive characteristic based on <cit.> and <cit.>. Let _q be a finite field of size q and characteristic p, first where p is an odd prime. Let r∈_q be such that the polynomial x^2-r∈_q[x] is irreducible; that is, r has no square root in _q. Denote by L_r(a,b) the line in ^3__q through the points (1,0,a,b) and (0,1,rb,a). Denote by L(∞) the line through the points (0,0,1,0) and (0,0,0,1). Then the set of lines
S_r={L_r(a,b),L(∞):a,b∈_q}
is a spread in ^3__q (since ^3__q has (q+1)(q^2+1)=q^3+q^2+q+1 points and one can check (using the fact that r is not a square in _q) that the lines are skew, but there are q^2+1 lines and each line has q+1 points).
In the case where _q has characteristic 2, we want to choose r∈_q such that the polynomial x^2+x+r is irreducible in _q[x]. Then define L_r(a,b) to be the line in ^3__q through the points (1,0,a,b) and (0,1,br,a+b). Then S_r={L_r(a,b),L(∞):a,b∈_q} is again a spread.
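Both the skewness and the covering property are easy to verify by machine for small q. The plain-Python sketch below does this for q=3 and r=2 (a non-square modulo 3): it lists the q^2+1 lines, checks that each has q+1 rational points, and checks that together they partition the (q+1)(q^2+1) points of ^3__q. (The inverse computation assumes q is prime.)

```python
import itertools

q, r = 3, 2   # x^2 - 2 is irreducible over F_3

def normalize(p):
    """Scale a nonzero vector over F_q so its first nonzero coordinate is 1."""
    lead = next(c for c in p if c != 0)
    inv = pow(lead, q - 2, q)            # inverse in F_q (q prime)
    return tuple((inv * c) % q for c in p)

def line_points(p1, p2):
    """All F_q-rational points of the line spanned by p1 and p2."""
    pts = set()
    for s, t in itertools.product(range(q), repeat=2):
        v = tuple((s * a + t * b) % q for a, b in zip(p1, p2))
        if any(v):
            pts.add(normalize(v))
    return pts

lines = [line_points((1, 0, a, b), (0, 1, (r * b) % q, a))
         for a, b in itertools.product(range(q), repeat=2)]
lines.append(line_points((0, 0, 1, 0), (0, 0, 0, 1)))    # L(infinity)

covered = [p for L in lines for p in L]
assert len(lines) == q**2 + 1 and all(len(L) == q + 1 for L in lines)
assert len(covered) == len(set(covered)) == (q + 1) * (q**2 + 1)   # disjoint cover
print("spread verified for q =", q)
```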
Let _q be the field of size q, where q is some power of a prime. Then Z=^3__q, viewed as a subset of ^3 over the algebraic closure of _q, is a (q+1,q^2+1)-geproci half grid.
First we will show that there is a degree (q+1) cone containing Z having a singularity of multiplicity q+1 at a general point P∈^3__q. Let P=(a,b,c,d)∈^3__q. Let
M= [ a b c d; a^q b^q c^q d^q; x y z w; x^q y^q z^q w^q; ].
Then we claim F=det M
is such a cone.
First note that F contains every point of Z, because x^q=x for each x∈_q. Furthermore, the terms of F can be combined into groups of 4 so that F is the sum of terms of the form
(x^qyc^qd-x^qwc^qb)-(z^qya^qd-z^qwa^qb)=x^qc^q(yd-wb)-z^qa^q(yd-wb)
=(x^qc^q-z^qa^q)(yd-wb)=(xc-za)^q(yd-wb)∈ I^q+1((a,b,c,d))
Thus F is a cone C_1 of degree q+1 with vertex (a,b,c,d) of multiplicity q+1.
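Both properties of F (vanishing at every _q-rational point, and multiplicity q+1 at the vertex) can be sanity-checked symbolically for a small prime q. The sympy sketch below does this for q=3, keeping (a,b,c,d) as indeterminates and reducing integer coefficients mod q; it is only a finite check, not a replacement for the argument above.

```python
import itertools
import sympy as sp

q = 3
a, b, c, d, x, y, z, w = sp.symbols('a b c d x y z w')
M = sp.Matrix([[a, b, c, d],
               [a**q, b**q, c**q, d**q],
               [x, y, z, w],
               [x**q, y**q, z**q, w**q]])
F = sp.expand(M.det())

# F vanishes mod q at every F_q-rational point (x,y,z,w) (q prime here).
for pt in itertools.product(range(q), repeat=4):
    if pt == (0, 0, 0, 0):
        continue
    val = sp.expand(F.subs(dict(zip((x, y, z, w), pt))))
    assert sp.Poly(val, a, b, c, d, modulus=q).is_zero

# Multiplicity at the vertex: after the shift (x,y,z,w) -> (a+u, b+v, c+s, d+t),
# every surviving monomial mod q has degree at least q+1 in the shift variables.
u, v, s, t = sp.symbols('u v s t')
G = sp.expand(F.subs({x: a + u, y: b + v, z: c + s, w: d + t}))
P = sp.Poly(G, u, v, s, t, a, b, c, d, modulus=q)
assert all(sum(mon[:4]) >= q + 1 for mon in P.monoms())
print("checks passed for q =", q)
```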
Now we will show there is a degree q^2+1 cone C_2 containing Z having a general point P of multiplicity q^2+1. By Example <ref>, the space ^3__q admits a spread of q^2+1 mutually-skew lines that covers all of ^3__q. Each line together with a fixed general point P determines a plane. The union of the planes gives C_2.
Projecting the q^2+1 lines from a general point P∈^3__q to a general plane Π=^2__q yields a set of q^2+1 lines in ^2__q containing the (q+1)(q^2+1) points of the image of Z.
Now we will show that C_1 and C_2 do not have components in common; to this end, we will show that C_1 contains no line of ^3 defined over _q. Note that C_1 vanishes identically on such a line if and only if F=0, where F=det M and
M=[ a b c d; a^q b^q c^q d^q; X Y Z W; X^q Y^q Z^q W^q ]
for X=η_0u+μ_0v, Y=η_1u+μ_1v, Z=η_2u+μ_2v, and W=η_3u+μ_3v for all (u,v)∈^1__q where (η_0,η_1,η_2,η_3) and (μ_0,μ_1,μ_2,μ_3) are points on the line. If r_1, r_2, r_3, and r_4 are the rows of a 4× 4 matrix, we will denote the determinant of that matrix by |r_1,r_2,r_3,r_4|. In particular, taking the r_i to be the rows of M, we have F=|r_1,r_2,r_3,r_4|=|r_1,r_2,η u+μ v,η u^q+μ v^q|=0 for all (u,v).
Since determinants are multilinear, we have
|r_1,r_2,η u+μ v,η u^q+μ v^q|
= |r_1,r_2,η u,η u^q|+|r_1,r_2,η u,μ v^q|+|r_1,r_2,μ v,η u^q|+|r_1,r_2,μ v,μ v^q|
= |r_1,r_2,η u,η u|u^q-1+|r_1,r_2,η u,μ v^q|+|r_1,r_2,μ v,η u^q|+|r_1,r_2,μ v,μ v|v^q-1
= |r_1,r_2,η u,μ v^q|+|r_1,r_2,μ v,η u^q|=|r_1,r_2,η,μ|uv^q+|r_1,r_2,μ,η|u^qv
= |r_1,r_2,η,μ|uv^q-|r_1,r_2,η,μ|u^qv=|r_1,r_2,η,μ|(v^q-1-u^q-1)uv.
But v^q-1-u^q-1≠ 0 unless u=v=0 or u/v∈_q. Therefore F is 0 for all (u,v) only if |r_1,r_2,η,μ|=0. By an appropriate choice of coordinates we get η=(1,0,0,0), μ=(0,1,0,0), r_1=(a',b',c',d'), and r_2=(a'^q,b'^q,c'^q,d'^q) for some point (a',b',c',d') which is general since (a,b,c,d) is general. Since |r_1,r_2,η,μ| is nonzero for a'=b'=0, c'=1, and d' not in _q, we see |r_1,r_2,η,μ|≠ 0 for general (a',b',c',d'). We conclude that C_1 does not contain a line of ^3 defined over _q, and so C_1 has no components in common with C_2. (In fact, since C_1 contains the q+1 points of each line of ^3 defined over _q but does not contain the line, C_1 meets each such line transversely.) Thus C_1∩ C_2 is a curve of degree (q+1)(q^2+1) and contains the (q+1)(q^2+1) lines through P and the points of Z, hence C_1∩ C_2 is exactly this set of lines.
So the image Z̄ of Z under projection from P to Π is a set of (q+1)(q^2+1) points, which is the intersection of the curves C_1∩Π (of degree q+1) and C_2∩Π (of degree q^2+1), so Z̄ is a (q+1,q^2+1)-complete intersection. Thus Z is (q+1,q^2+1)-geproci.
Furthermore, the degree q+1 and q^2+1 cones in the above proof are unexpected. We will show this with the help of the following lemma.
Let Z=^n__q, with homogeneous coordinates x_0,…,x_n. Then dim[I(Z)]_q+1=1+2+⋯+n=\binom{n+1}{2}.
We will induct on n, starting with n=1. The product
x_0(x_0-x_1)(x_0-2x_1)⋯(x_0-(q-1)x_1)x_1
is (up to scalar multiplication) the unique form of degree q+1 vanishing on all points of Z. So dim[I(Z)]_q+1=1.
Now let n>1 and let Z'=V(x_n)∩ Z, so we can regard Z' as a copy of ^n-1__q. We can regard each element f∈[I(Z')]_q+1 as a form in the variables x_0,…,x_n-1 and thus as a form on ^n. Using this, we can define a map ρ:[I(Z)]_q+1→[I(Z')]_q+1 by ρ(f(x_0,…,x_n-1,x_n))=f(x_0,…,x_n-1,0). We can see that ρ is surjective because each element g∈[I(Z')]_q+1 defines a cone over Z' with vertex v=(0,…,0,1)∈^n. Thus g vanishes on every line through v and a point of Z'. But every point of Z is on such a line, so g∈[I(Z)]_q+1, and thus ρ is surjective.
Now let Y be the complement of Z' in Z. Then we have x_n[I(Y)]_q⊆[I(Z)]_q+1. Furthermore, for all f∈ [I(Z)]_q+1, we see that ρ(f)=0 if and only if f=0 or f=x_n· h for some degree q polynomial h vanishing on Y. Hence x_n[I(Y)]_q=ker ρ. This gives us the short exact sequence
0⟶ x_n[I(Y)]_q⟶ [I(Z)]_q+1⟶ [I(Z')]_q+1⟶ 0
where dim[I(Z')]_q+1=1+⋯+(n-1) by the induction hypothesis. Now we must show that dim x_n[I(Y)]_q=n. But dim x_n[I(Y)]_q=dim[I(Y)]_q and Y is a complete intersection of n forms of degree q. For example, we can cut out Y by the n forms given by
x_i(x_i-x_n)(x_i-2x_n)⋯(x_i-(q-1)x_n)
for 0≤ i≤ n-1. Hence dim[I(Y)]_q=n, and so dim[I(Z)]_q+1=dim[I(Z')]_q+1+dim[I(Y)]_q=1+⋯+n.
The degree q+1 cone and degree q^2+1 cone in the proof of Theorem <ref> are unexpected.
From Lemma <ref>, we see that dim[I(Z)]_q+1=6. In particular, [I(Z)]_q+1 is spanned by the 2× 2 minors of the matrix
[ x y z w; x^q y^q z^q w^q; ].
Since 6-\binom{q+3}{3}<0 for q≥ 2, and dim[I(Z)∩ I(P)^q+1]_q+1≥ 1>0, we have that the above degree q+1 cone is indeed unexpected.
To show that the degree q^2+1 cone is unexpected, we will first show that the (q^2+1)(q+1) points of ^3__q impose independent conditions on forms of degree q^2+1. We will show that for each Q∈^3__q that there is a degree q^2+1 form vanishing at every point ^3__q except Q. Without loss of generality, we will take Q=(0,0,0,1).
We will start with the case q≠ 2. Then the union of planes given by the product
π_x=∏_i=0^q-1(w-ix)
contains every point of ^3__q except those on the affine plane {(0,*,*,1)}. Similarly, the products
π_y=∏_i=0^q-1(w-iy) and π_z=∏_i=0^q-1(w-iz)
vanish everywhere except on the affine planes {(*,0,*,1)} and {(*,*,0,1)}, respectively. Therefore, the product π_xπ_yπ_z vanishes everywhere on ^3__q except the point (0,0,0,1). Since deg(π_xπ_yπ_z)=3q, taking π=w^q^2-3q+1π_xπ_yπ_z gives us a degree q^2+1 form vanishing at every point of ^3__q except Q. Note that since q>2, q^2-3q+1>0, so π is well-defined.
Since the points of Z=^3__q impose independent conditions on forms of degree q^2+1, we have
dim[I(Z)]_q^2+1=\binom{q^2+4}{3}-(q^2+1)(q+1).
Using our degree q+1 cone from the proof of Theorem <ref> as F, we have
F·[I(P)^q^2-q]_q^2-q⊆[I(Z)∩ I(P)^q^2+1]_q^2+1,
giving us
dim[I(P)^q^2-q]_q^2-q≤dim[I(Z)∩ I(P)^q^2+1]_q^2+1.
We know that dim[I(P)^q^2-q]_q^2-q=\binom{q^2-q+3}{3}-\binom{q^2-q+2}{3}=\binom{q^2-q+2}{2}, so in order to show the degree q^2+1 cone is unexpected it is sufficient to see that the following inequality holds:
\binom{q^2-q+2}{2}>\binom{q^2+4}{3}-(q^2+1)(q+1)-\binom{q^2+3}{3}.
This inequality holds for q≥ 3. Thus for all prime powers q≥ 3, the degree q^2+1 cone in the proof of Theorem <ref> is unexpected.
Now for the case q=2: First we wish to show that the fifteen points of Z=^3__2 impose independent conditions on the quintic forms. Again taking Q=(0,0,0,1) without loss of generality, we can take π=w^2(w+x)(w+y)(w+z) as our degree 5 form vanishing at every point of ^3__2 except Q. Therefore the points indeed impose independent conditions. Thus
dim[I(Z)]_5=\binom{5+3}{3}-15=41
and so dim[I(Z)]_5-\binom{5+2}{3}=41-35=6. A computation in Macaulay2 reveals that
dim[I(Z)∩ I(P)^5]_5=7>6,
thus the degree q^2+1 cone from the proof of Theorem <ref> is unexpected for q=2 as well.
The computation showing that dim[I(Z)∩ I(P)^5]_5=7 can be carried out in Macaulay2 along the following lines.
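The original commands do not survive in this copy; the Macaulay2 sketch below is a possible reconstruction (our assumption, not the authors' session). It relies on the fact that the 2×2 minors of the matrix with rows (x,y,z,w) and (x^2,y^2,z^2,w^2) cut out the _2-points of ^3 (compare the description of [I(Z)]_q+1 above), and it works over the function field _2(a,b,c,d) so that P=(a:b:c:d) plays the role of a general point.

-- possible reconstruction, not the authors' original commands
kk = frac(ZZ/2[a,b,c,d]);      -- P = (a:b:c:d) behaves like a general point
R = kk[x,y,z,w];
IZ = minors(2, matrix{{x,y,z,w},{x^2,y^2,z^2,w^2}});   -- ideal of the 15 points of P^3(F_2)
IP = minors(2, matrix{{x,y,z,w},{a,b,c,d}});           -- ideal of the point P
J = intersect(IZ, IP^5);       -- quintics through Z with multiplicity at least 5 at P
hilbertFunction(5, module J)   -- dimension of the degree-5 graded piece; the reported value is 7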
§.§ Maximal Partial Spreads
Of particular interest to the hunt for geproci sets is the existence of maximal partial spreads.
A partial spread of ^3__q with deficiency d is a set of q^2+1-d mutually-skew lines of ^3__q. A maximal partial spread is a partial spread of positive deficiency that is not contained in any larger partial spread. We will denote the set of points of ^3__q contained in the lines in a spread S by (S).
Maximal partial spreads allow us to construct examples of many geproci sets as subsets of ^3__q, using the following corollary.
Let S be a partial spread of s lines in ^3__q. Then the set of points (S)⊆^3__q is {s,q+1}-geproci.
The same degree q+1 cone C_1 from the proof of Theorem <ref> works in this case. The degree s cone is the join of the s lines with the general point P. It follows from the proof of Theorem <ref> that C_1 meets every line of ^3__q transversely and thus that (S) is geproci.
Let Z be an {a,b}-geproci set and let Z'⊆ Z be a {c,b}-geproci subset, whose general projection shares with the general projection of Z a minimal generator of degree b. Then the residual set Z''=Z∖ Z' is {a-c,b}-geproci.
This is Lemma 4.5 of <cit.>, and the proof still works in positive characteristic.
The complement Z⊆^3__q of the set of points covered by a maximal partial spread of deficiency d is a nontrivial {q+1,d}-geproci set. Furthermore, when d>q+1, Z is also not a half grid.
The first sentence of the Theorem comes directly from Corollary 1 and Lemma 1, except for being nontrivial. To demonstrate that Z is nontrivial, suppose Z is contained in a plane H. Let Z' be the complement of Z. Then Z' consists of q+1 points on q^2+1-d lines. At most one of those lines can be in H, but each of the lines meet H. Thus Z' has at least q^2+1-d points in H, so Z consists of at most q^2+q+1-(q^2+1-d)=q+d points. This is impossible since |Z|=(q+1)d>q+d.
Now suppose that Z is a grid. Thus it consists of q+1 points on each of d lines. But Z' comes from a maximal partial spread, so Z contains no set of q+1 collinear points. Thus Z cannot be a grid, so Z is nontrivial.
Now we will prove that Z is a nontrivial non-half grid if d>q+1. Recall that every line in ^3__q consists of q+1 points. If Z were a half grid, then either it contains subsets of d collinear points or subsets of q+1 collinear points, but d>q+1, so the latter would be true. But we know from the above that Z contains no subset of q+1 collinear points.
§.§ Examples
By <cit.>, if q≥ 7 and q is odd, then ^3__q has a maximal partial spread of size n for each integer n in the interval q^2+1/2+6≤ n≤ q^2-q+2. In terms of deficiency d=q^2+1-n, we get the inequalities q-1≤ d≤q^2+1/2-6. Thus for every odd prime power q≥ 7 there is a maximal partial spread in ^3__q of deficiency d>q+1 and thus a nontrivial non-half grid (q+1,d)-geproci set.
In addition to Heden's bounds <cit.> showing the existence of maximal partial spreads, Mesner has provided a lower bound for the size of the deficiency d at √(q)+1≤ d <cit.>. Glynn has provided an upper bound for d at d≤ (q-1)^2 <cit.>.
By Lemma <ref>, for any line L⊆^3__2, the set Z=^3__2∖ L is a (3,4)-geproci half grid. In fact, Z has the same combinatorics as D_4, shown in Figure <ref> (that is, Z consists of 12 points, each of which is on 4 lines, with each line containing 3 of the points). Specifically, in Figure <ref> we see ^3__2∖ V(x+y+z,w).
There is (up to projective equivalence) a unique maximal partial spread in ^3__3 <cit.>. This spread contains seven lines (as opposed to a complete spread, which contains ten). The complement Z of the points of the maximal partial spread is a set of 12 points in ^3__3 that is (3,4)-geproci and nontrivial. Furthermore, Z has the same combinatorics as the D_4 configuration (that is, Z is a set of 12 points, each of which is on 4 lines, with each line containing 3 of the points). Note that Z is then a half grid, as shown in Figure <ref>. Specifically, Figure <ref> exhibits the points of ^3__3 in the complement of the maximal partial spread given by the seven lines V(x+y,y+z+w), V(x-y-z,y+w), V(x-y+w,y+z), V(x+y+z,w), V(x-y+z, z+w), V(x+y-z,x+w), and V(x+z,x+y+w).
There are (up to projective equivalence) fifteen maximal partial spreads in ^3_/7 of size 45 and invariant under a group of order 5 (as opposed to a complete spread, which contains 50 lines) <cit.>. Let Z be the complement of the set of points of any of these maximal partial spreads. Then Z is a set of 40 points that is a nontrivial (5,8)-geproci non-half grid. Furthermore, Z has the same combinatorics as the Penrose configuration of 40 points <cit.>.
Note that if we look at two non-isomorphic maximal partial spreads M and M', and consider their complements Z and Z', then Z and Z' are non-isomorphic nontrivial non-half grid (5,8)-geproci sets. In fact, some such sets have stabilizers of different sizes! Of the fifteen up to isomorphism, there are nine with stabilizers of size 10, there is one with a stabilizer of size 20, there is one with a stabilizer of size 60, and there are four with stabilizers of size 120.
An example of such a geproci set is
{(0,0,1,3),(0,1,3,3),(0,1,3,5),(0,1,4,6),
(0,1,6,5),(1,0,1,3),(1,0,2,6),(1,0,4,5),
(1,0,4,6),(1,1,0,1),(1,1,0,4),(1,1,1,4),
(1,1,5,2),(1,2,1,6),(1,2,3,3),(1,2,5,2),
(1,2,6,5),(1,3,2,1),(1,3,4,4),(1,3,5,2),
(1,3,6,0),(1,4,0,5),(1,4,2,4),(1,4,4,1),
(1,4,6,2),(1,5,0,4),(1,5,1,0),(1,5,2,0),
(1,5,3,0),(1,5,3,1),(1,5,3,3),(1,5,3,6),
(1,5,4,5),(1,5,5,0),(1,5,5,2),(1,5,6,3),
(1,6,0,3),(1,6,1,5),(1,6,2,1),(1,6,6,6)}.
This example is the complement of a maximal partial spread of size 45 with a stabilizer of size 60.
We also used Macaulay2 to check that at least one configuration of each size stabilizer is Gorenstein. This contrasts with the case in characteristic 0, where only one nontrivial Gorenstein geproci set is known, up to projective equivalence: the Penrose configuration. <cit.>
One can determine this using the following commands in Macaulay2 with the example set of points from above.
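(The commands themselves are not preserved in this copy; the Macaulay2 lines below are a possible reconstruction on our part, whose betti res output has the shape displayed next.)

-- possible reconstruction; the 40 points are the ones listed in the example above
R = (ZZ/7)[x,y,z,w];
pts = {{0,0,1,3},{0,1,3,3},{0,1,3,5},{0,1,4,6},{0,1,6,5},{1,0,1,3},{1,0,2,6},{1,0,4,5},
       {1,0,4,6},{1,1,0,1},{1,1,0,4},{1,1,1,4},{1,1,5,2},{1,2,1,6},{1,2,3,3},{1,2,5,2},
       {1,2,6,5},{1,3,2,1},{1,3,4,4},{1,3,5,2},{1,3,6,0},{1,4,0,5},{1,4,2,4},{1,4,4,1},
       {1,4,6,2},{1,5,0,4},{1,5,1,0},{1,5,2,0},{1,5,3,0},{1,5,3,1},{1,5,3,3},{1,5,3,6},
       {1,5,4,5},{1,5,5,0},{1,5,5,2},{1,5,6,3},{1,6,0,3},{1,6,1,5},{1,6,2,1},{1,6,6,6}};
ptIdeal = p -> minors(2, matrix{{x,y,z,w}, apply(p, e -> promote(e, R))});
I = intersect toSequence apply(pts, ptIdeal);
betti res I   -- a self-dual (symmetric) Betti table indicates the Gorenstein property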
0 1 2 3
total: 1 5 5 1
0: 1 · · ·
1: · · · ·
2: · · · ·
3: · 5 · ·
4: · · 5 ·
5: · · · ·
6: · · · ·
7: · · · 1
We can see from the Betti table that this set of points is Gorenstein. A similar calculation works to show the other geproci sets are Gorenstein.
This pattern leads us to the following question:
Given the complement of a maximal partial spread Z^3__q, when does Z correspond to a nontrivial geproci set that exists in ^3_? That is, when does there exist a nontrivial geproci set in _^3 that has the same combinatorics as Z?
§ THE GEPROCI PROPERTY WITH INFINITELY-NEAR POINTS
We can also consider configurations of points that include infinitely-near points.
Let A be a smooth point on an algebraic variety X. Let Bl_A(X) denote the blowup of X at A. Then a point B∈Bl_A(X) is infinitely-near A if π_A(B)=A, where π_A:Bl_A(X)→ X is the standard blowup map.
On the other hand, if π_A(B)≠ A, then B and A are distinct.
Intuitively, B corresponds to the direction of a line through A. In the plane, we can consider how a point A and a point B that is infinitely-near A can uniquely determine a line, the same way a line can be uniquely determined by two distinct points. This is akin to determining a line from a point and a slope. In ^3, we will consider how infinitely-near points impose conditions on forms the same way distinct points can.
We can extend the definition of geproci to include configurations with infinitely-near points by realizing Z as a non-reduced 0-dimensional subscheme of ^3. For example, let A∈^3 be a point and L a line through A. Let B be the point infinitely near A corresponding to L. Then I({A,B})=I(L)+I(A)^2, and the ideal of the image of {A,B} under projection from a point P∉ L is I(L̄)+I(Ā)^2, where L̄ and Ā are the images of L and A. A scheme Z including infinitely near points is geproci if the projection Z̄ of Z from a general point P to a plane is a complete intersection as a subscheme of ^2.
In the following sets of points in ^3__2, we will denote a point A together with a point infinitely-near A as A× 2. We will then specify what line the infinitely-near point corresponds to.
We will consider the set of nine (not distinct) points in ^3_K, where char K=2:
Z={(1,0,0,0)× 2, (0,1,0,0)× 2,(0,0,1,0)× 2, (0,0,0,1)× 2,(1,1,1,1)}
by choosing infinitely-near points for each of (1,0,0,0), (0,1,0,0), (0,0,1,0), and (0,0,0,1) to be the point that corresponds to the (respective) direction of the line through the given point and the point (1,1,1,1).
The projection Z of these 9 points to the plane w=0 from a general point takes (0,0,1), (0,1,0), (1,0,0) to themselves and (1,1,1,1) and (0,0,0,1) to general points. After a change of coordinates we can map the image of (1,1,1,1) to (1,1,1) and the image of (0,0,0,1) to (a,b,c). We will denote
Z'={(0,0,1)× 2,(0,1,0)× 2,(1,0,0)× 2,(a,b,c)× 2,(1,1,1)},
where the tangent directions of each point of multiplicity 2 correspond to the line connecting the point with (1,1,1). Then Z' is the base locus of a specific type of pencil of cubics called a quasi-elliptic fibration. Specifically, the quasi-elliptic pencil given by Z has Dynkin diagram A_1^8. One can read more about the connection between Dynkin diagrams and (quasi-)elliptic fibrations in e.g. Cossec and Dolgachev <cit.>.
We can see that the conic C_1=V(xy+xz+yz) contains the points (0,0,1), (0,1,0), and (1,0,0), and the tangent lines of the three points all meet (1,1,1). Additionally, the line L_1 connecting (a,b,c) and (1,1,1) has the appropriate slope to contain the remaining infinitely-near point. Therefore the cubic given by C_1∪ L_1 contains Z'.
Similarly, we can also construct a conic C_2=V(cxy+bxz+ayz+(a+b+c)y^2) that contains the points (0,0,1), (0,1,0), (a,b,c), and their respective infinitely-near points. Letting L_2 denote the line connecting (1,0,0) and (1,1,1), we get another cubic C_2∪ L_2 containing Z'. The two cubics share no components in common, and so Z' is a complete intersection of two cubics.
Since Z' is projectively equivalent to the projection Z̄ of Z, we get that Z̄ is a complete intersection. Therefore Z is (3,3)-geproci. Note that Z is a nontrivial non-half grid. What makes this work is the fact that the tangent lines of a conic in characteristic 2 are concurrent.
We can also see that Example <ref> provides examples of unexpected cones. Letting {α_0,α_1,α_2,α_3}={x,y,z,w}, we can construct a (non-minimal) generating set for I(Z) as
𝒜={α_iα_j(α_k+α_ℓ):i,j≠ k, i,j≠ℓ, k≠ℓ}.
(Note that this set includes both the polynomials where i=j and i≠ j.) A computation in Macaulay2 reveals that the ideal generated by 𝒜 can be minimally generated by 11 cubic polynomials. Therefore dim[I(Z)]_3=11. We also have \binom{3+2}{3}=10, so dim[I(Z)]_3-\binom{3+2}{3}=1.
But we also know that dim[I(Z)∩ I(P)^3]_3≥ 2, by for example taking the joins with the vertex P of the two planar cubics making up the complete intersection containing Z'. Therefore we have the inequality dim[I(Z)∩ I(P)^3]_3>dim[I(Z)]_3-\binom{3+2}{3}>0, and so the cubic cones are indeed unexpected.
Let char K=2. Now consider the 6 points
Z={(1,0,0,0)× 2, (0,1,0,0)× 2,(0,0,1,0)× 2},
where the infinitely near point for each is in the direction of (0,0,0,1). We will show that this is (2,3)-geproci.
First we will look at the following scheme of points in ^2:
Z'={(1,0,0)× 2, (0,1,0)× 2,(0,0,1)× 2}
where the infinitely-near point for each is in the direction of (1,1,1). We will show that this set of 6 points is a complete intersection of a conic and a cubic, and then show that a general projection of Z onto any plane is projectively equivalent to Z'. Note that Z' is contained in the conic A=V(xy+xz+yz) and the cubic B=V((x+y)(x+z)(y+z)). Also note that A and B, have no components in common, since A is an irreducible conic and B is the union of three lines. Therefore Z' is a complete intersection of a conic and a cubic.
Now let us return to Z^3. Let us project Z from a general point P∈^3 onto a general plane Π^3. Since the lines corresponding to each infinitely-near point meet at (0,0,0,1), and since projection from a point preserves lines (and therefore the intersection of lines), the images of the three infinitely-near points under the projection π_P,Π will also correspond to three concurrent lines. In other words, Z will map to the set
π_P,Π(Y)={π_P,Π(1,0,0,0)× 2,π_P,Π(0,1,0,0)× 2,π_P,Π(0,0,1,0)× 2}
where each infinitely-near point is in the direction of π_P,Π(0,0,0,1). For a general point P, the images of the three ordinary points in Z and the point π_P,Π(0,0,0,1) will not be collinear. Therefore we can map Π to ^2 and use an automorphism of the plane to map π_P,Π(1,0,0,0) to (1,0,0), π_P,Π(0,1,0,0) to (0,1,0), π_P,Π(0,0,1,0) to (0,0,1), and π_P,Π(0,0,0,1) to (1,1,1). Then we are in the same situation as Z', which is a complete intersection of a conic and a cubic.
Note that Z is a half grid, since the cubic containing Z is a union of three lines, but the conic is irreducible.
The unique quadric cone containing Z with a vertex at (a,b,c,d) is given by cdxy+bdxz+adyz+abw^2.
Let char K=2. Now consider the 9 points
Z={(1,0,0,0)× 2, (1,1,0,0)× 2, (0,1,0,0)× 2, (0,0,1,0)× 2, (0,0,0,1)},
by choosing as our infinitely-near points for (1,0,0,0), (1,1,0,0), (0,1,0,0), and (0,0,1,0) the points that correspond to the respective directions to the point (0,0,0,1). First we will look at the following set of points in ^2_K:
Z'={(1,0,0)× 2,(a,0,1)× 2,(0,0,1)× 2,(1,1,1)× 2,(0,1,0)}
where a≠ 0 and each infinitely-near point is in the direction of (0,1,0). These nine points are a complete intersection of (y^2+xz)(x+az) and y^2(x+z). Since every set of four points, no three of which are collinear, can be mapped to every other such set of four points by a linear automorphism, every projection of Z onto any plane Π will be isomorphic to the configuration Z' for some a∈ K∖{1,0}, and so Z is a nontrivial (3,3)-geproci set.
The preceding example is particularly interesting because the general projection of Z is not only a (3,3) complete intersection, but as in Example <ref> it is also the set of base points of a quasi-elliptic fibration (specifically one with Dynkin diagram A_1^4D_4).
|
http://arxiv.org/abs/2307.05318v1 | 20230711150148 | Predicting small molecules solubilities on endpoint devices using deep ensemble neural networks | [
"Mayk Caldas Ramos",
"Andrew D. White"
] | physics.chem-ph | [
"physics.chem-ph",
"cs.LG"
] |
Aqueous solubility is a valuable yet challenging property to predict.
Computing solubility using first-principles methods requires accounting for the competing effects of entropy and enthalpy, resulting in long computations for relatively poor accuracy.
Data-driven approaches, such as deep learning, offer improved accuracy and computational efficiency but typically lack uncertainty quantification.
Additionally, ease of use remains a concern for any computational technique, resulting in the sustained popularity of group-based contribution methods.
In this work, we addressed these problems with a deep learning model with predictive uncertainty that runs on a static website (without a server).
This approach moves computing needs onto the website visitor without requiring installation, removing the need to pay for and maintain servers.
Our model achieves satisfactory results in solubility prediction.
Furthermore, we demonstrate how to create molecular property prediction models that balance uncertainty and ease of use.
The code is available at <https://github.com/ur-whitelab/mol.dev>, and the model is usable at <https://mol.dev>.
§ INTRODUCTION
Aqueous solubility measures the maximum quantity of matter that can be dissolved in a given volume of water.
It depends on several conditions, such as temperature, pressure, pH, and the physicochemical properties of the compound being solvated.<cit.>
The solubility of molecules is essential in many chemistry-related fields, including drug development<cit.>, protein design<cit.>, chemical<cit.> and separation<cit.> processes.
In drug development, for instance, compounds with biological activity may not have enough bioavailability due to inadequate aqueous solubility.
Solubility prediction is critical and has driven the development of several methods, including first principles<cit.>, semi-empirical equations<cit.>, molecular dynamics (MD) methods<cit.>, quantum computations<cit.>, and quantitative structure-property relationship (QSPR)<cit.> methods.
Despite significant progress, the development of accurate and reliable models for solubility remains a major concern.<cit.>
To address the persistent issues of systematic bias and non-reproducibility in aqueous solubility datasets, Llinàs et al.<cit.> introduced two solubility challenges featuring consistent data.
The first challenge evaluated participants based on the root mean square error (RMSE) obtained and the percentage of correct values within a ±0.5 logS error range.
Unfortunately, the authors did not report the methods used by the participants.<cit.>
In contrast, the second challenge showed that, although participants were free to choose any approach, all submitted responses used an implementation of QSPR or machine learning (ML).<cit.>
Neural networks (NN), multiple linear regression (MLR), and decision trees were the most commonly applied methods in these challenges.
Tree-based and MLR models presented the best results.
Surprisingly, new state-of-art methods did not yield a significant improvement in predictions compared to the results of the first challenge.<cit.>
The challenges' findings showed that data quality is more critical for accurate predictions than model selection.<cit.>
The results from the solubility challenges will be discussed in detail in Section <ref>.
Ideally, solubility models should be accurate and accessible, having clear or minimal instructions on how to use a model.
Thus a common idea is to use web servers to provide easier public access.
However, maintaining a web server requires an ongoing investment of time and money.
There are examples of servers that eventually disappear, even with institutional or government support<cit.>.
For example, eight of the 89 web server tools from the 2020 Nucleic Acid Research special web server issue are already offline<cit.>[Tested December 30, 2022] after just a few years.
Additionally, some tasks may require a long computation time<cit.>.
For instance, tools like RoseTTAFold<cit.> and ATB<cit.> can take hours to days to complete a job, resulting in long queues and waiting times.
An alternative approach is to perform the computation directly on the user's device, removing the need for the server's maintenance and cost.
In this approach, the website is simply a static file that can be hosted on sites like GitHub and be completely archived in the Internet Archive[https://archive.org/].
We explored this approach in <cit.> for bioinformatics.
The main drawback is that the application runs directly from the browser on a user's device (a personal computer or even a cellphone).
This would be infeasible for first-principle methods, like those that rely on molecular dynamics.
Nevertheless, it is feasible for deep learning models, especially with the increasing integration of deep learning chips and compiler optimizations.
In this work, we developed a front-end application using a JavaScript (JS) implementation of TensorFlow framework<cit.>.
Our application can be used to predict the solubility of small molecules with uncertainty.
To calibrate the confidence of the prediction, our model implements a deep ensemble approach<cit.> which allows reporting model uncertainty when reporting the prediction.
Our model runs locally on the user's device and can be accessed at <https://mol.dev/>.
§ RELATED WORKS
Physics-based models have been developed in the past for aqueous solubility prediction.
Those models may become complex, limiting their use to advanced users only.<cit.>
Despite being derived from first principles, physics-based models are no more accurate than empirical methods.<cit.>
Data-driven models can outperform physics-based models with the benefit of being less time-consuming.
Historically, common approaches computed aqueous solubilities based on QSPR<cit.> and MLR<cit.> methods.<cit.>
<cit.> used a dataset consisting of 1297 organic molecules to develop two models based on MLR and NN.
The author reported a good correlation between predicted properties and labels for training (r^2=0.94) and test (r^2=0.92) data.
<cit.> used another approach based on MLR called Estimated SOLubility (ESOL), fitted on a dataset of 2874 small organic molecules.
The final model presented a r^2=0.55 and an average absolute error (AAE) of 0.83.
GPsol<cit.>, a Gaussian Process-based model, was trained to predict the aqueous solubilities of electrolytes in addition to non-electrolyte molecules.
It used 1664 descriptors computed by Dragon software as input features to train the model on a dataset of ∼4000 molecules.
Depending on the dataset, it presented an RMSE of 0.77 or 0.61.
Lusci et al.<cit.> trained several Undirected graph recurrent neural networks (UG-RNN) architectures using different sets of node feature vectors.
The authors report RMSE from 0.90 to 1.41 for different models on the first Solubility Challenge dataset<cit.>.
McDonagh et al.<cit.> calculated solubilization free energies using first principle theoretical calculations and cheminformatic methods.
Their results have shown that cheminformatic methods have better accuracy than theoretical methods.
The authors also point to the promising results of using Random Forest models (RMSE of 0.93 on Llinàs first dataset<cit.>).
Those models use descriptors to represent molecules.
Descriptors are a straightforward way to convey physical-chemical information in your input.
However, descriptor selection is not an easy task.
It requires a good understanding of the problem settings, usually only held by specialists.
Some automated methods have been proposed to select descriptors<cit.>.
Nevertheless, computing several descriptors can increase the time needed for inference.
Additionally, these descriptors can be valid only for a specific region of the chemical space<cit.>.
More recently, transformers models have been used to compute the solubility of small molecules.
Francoeur et al.<cit.> developed the SolTranNet, a transformers model trained on AqSolDB<cit.> solubility data.
Notably, this architecture results in an RMSE of only 0.278 when trained and evaluated on the original ESOL<cit.> dataset using random split.
Nevertheless, it shows an RMSE of 2.99 when trained using the AqSolDB<cit.> and evaluated using ESOL.
It suggests that the molecules present in ESOL may have low variability, meaning that samples in the test set are similar to samples in the training set.
Hence, models trained on the ESOL training set performed excellently when evaluated on ESOL test set.
Regression Transformer (RT)<cit.> is a multipurpose transformer model trained using an infilling mask approach<cit.>.
Their results are comparable to those achieved by the SMILES-BERT<cit.> and Mol-BERT<cit.> models.
The RMSE values for SMILES-BERT and Mol-BERT are 0.47 and 0.53, respectively, whereas RT presented an RMSE of 0.73.
All three models were fine-tuned using ESOL.
MolFormer<cit.> is an encoder-only transformers model with a modified embedding.
It was pre-trained in a large corpus and fine-tuned for numerous downstream tasks.
Specifically for the solubility regression fine-tuning, they reported an RMSE of 0.278 on the ESOL dataset.
Noticeably, the same value was reported by Francoeur et al.<cit.> when they trained their model on ESOL.
Comparing the performance of different models is a complex task, as performance metrics cannot be directly compared across models evaluated on distinct datasets.
To address this issue, <cit.> curated a large and diverse dataset to train models with various architectures and molecular representations.
They also compared the performance of these models on datasets from the literature<cit.>.
Although their models achieved an RMSE of ∼1.1 on their test set, using descriptors as molecular representations resulted in RMSE values ranging from 0.55 to ∼1.35 when applied to other datasets from the literature.
These findings suggest that some datasets used to train models in the literature may be inherently easier to predict, leading to smaller RMSE values.
According to their study, the Solubility Challenge datasets by Llinàs et al.<cit.> were found to be particularly challenging due to their more significant reproducibility error.
§ METHODS
§.§ Dataset
The data used for training the models were obtained from AqSolDB<cit.>.
This database combined and curated data from 9 different aqueous solubility datasets.
The main concern in using a large, curated database is to avoid problems with the generalizability of the model<cit.> and with the fidelity of the data<cit.>.
AqSolDB consists of aqueous solubility (LogS) values for 9982 unique molecules extended with 17 topological and physicochemical 2D descriptors calculated by RDKit<cit.>.
We augmented AqSolDB to 96,625 molecules.
Each entry of AqSolDB was used to generate at most ten new unique randomized SMILES strings.
Training the model on multiple representations of the same molecule improves its ability to learn the chemical space constraints of the training set, as demonstrated in previous studies <cit.>.
Duplicates were removed.
After shuffling, the augmented dataset was split into 80%/20% for the training and test datasets, respectively.
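The augmentation step described above can be sketched as follows; this is illustrative only (it assumes RDKit, the AqSolDB column handling is omitted, and the helper name is ours).

# sketch of the randomized-SMILES augmentation (assumes RDKit)
from rdkit import Chem

def randomized_smiles(smiles, n_max=10):
    """Return up to n_max unique non-canonical SMILES strings for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return []
    variants = set()
    for _ in range(10 * n_max):                     # extra draws; duplicates are discarded
        variants.add(Chem.MolToSmiles(mol, canonical=False, doRandom=True))
        if len(variants) >= n_max:
            break
    return sorted(variants)

print(randomized_smiles("O=C1OC(C(O)CO)C(O)=C1O"))  # ascorbic acid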
The curated datasets for the solubility challenges<cit.> were used as withheld validation data to evaluate the model's ability to predict solubility for unseen compounds.
To refer to the validation datasets, we labeled the first solubility challenge dataset as "solubility challenge 1" and the two sets from the second solubility challenge as "solubility challenge 2_1" and "solubility challenge 2_2", respectively.
Molecules present in these three datasets were not found in train and test datasets.
§.§ Model architecture
Our model uses a deep ensemble approach as described by <cit.>.
Given a model which outputs two values (mean μ̂_m and variance σ̂_m), a deep ensemble creates an ensemble of models that can estimate prediction uncertainty.
Those two numbers characterize a normal distribution 𝒩(μ̂_m,σ̂_m), where m indexes the model in the ensemble.
The uncertainty of a model can be divided into two sources: aleatoric uncertainty (AU) and epistemic uncertainty (EU).<cit.>
EU quantifies the uncertainty among the models of the ensemble.
It shows how much the elements of the ensemble disagree about a prediction.
EU is also known as “model uncertainty”.
AU, also called “data uncertainty”, quantifies intrinsic uncertainty inherent in data observations.<cit.>
For a given data point x⃗, the estimates for the ensemble predictions are computed as follows:
μ̂(x⃗) = 1/N∑_m μ̂_m(x⃗)
σ̂^2_ale(x⃗) = 1/N∑_m σ̂^2_m(x⃗) , σ̂^2_epi(x⃗) = 1/N∑_m (μ̂(x⃗) - μ̂_m(x⃗) )^2
where σ̂^2_ale is AU, σ̂^2_epi is EU, N is the ensemble size, and m indexes the models in the ensemble.
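The two estimates above translate directly into code; the following numpy sketch (function and variable names are ours) aggregates the per-member (mean, variance) outputs into the ensemble mean, the aleatoric variance, and the epistemic variance.

# sketch of the deep-ensemble aggregation of the equations above
import numpy as np

def ensemble_predict(members, x):
    # each member returns an array of shape (batch, 2) holding (mean, variance)
    preds = np.stack([np.asarray(m.predict(x, verbose=0)) for m in members])
    mus, variances = preds[..., 0], preds[..., 1]
    mu_hat = mus.mean(axis=0)                         # ensemble mean
    sigma2_ale = variances.mean(axis=0)               # aleatoric (data) uncertainty
    sigma2_epi = ((mus - mu_hat) ** 2).mean(axis=0)   # epistemic (model) uncertainty
    return mu_hat, sigma2_ale, sigma2_epi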
As the base model for deep ensembling, we built a deep neural network (DNN) using Keras<cit.> framework and TensorFlow<cit.> back-end.
Given its capabilities to capture long-range sequence correlations, we employed a bidirectional recurrent neural network (RNN) layers in our model.
Figure <ref> illustrates the model architecture.
The input Simplified molecular-input line-entry system (SMILES)<cit.> or Self-referencing embedded strings (SELFIES)<cit.> molecule representation is converted to a list of SELFIES tokens according to a pre-defined vocabulary.
The vocabulary was created based on the training data, generating 273 available tokens.
SELFIES tokens are the input for the DNN.
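The tokenization step can be sketched with the selfies package as below; the toy vocabulary built here is only for illustration (the deployed model uses the fixed 273-token vocabulary mentioned above, and reserving id 0 for padding is our assumption).

# sketch of SMILES -> SELFIES -> token ids (assumes the `selfies` package)
import selfies as sf

smiles = "O=C1OC(C(O)CO)C(O)=C1O"                  # ascorbic acid
selfies_str = sf.encoder(smiles)                   # SMILES -> SELFIES
tokens = list(sf.split_selfies(selfies_str))       # SELFIES tokens

alphabet = sorted(sf.get_alphabet_from_selfies([selfies_str]))
stoi = {tok: i + 1 for i, tok in enumerate(alphabet)}
ids = [stoi[t] for t in tokens]
print(tokens[:5], ids[:5])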
The network can be divided into three sections:
(i) Embedding,
(ii) bi-RNN, and
(iii) fully connected NN.
As the inset of Figure <ref> shows, the first step is to input the SELFIES tokens into an embedding layer.
The embedding layer allows us to convert a list of discrete tokens into a fixed-length vector space.
Working on a continuous vector space has two main advantages: it uses a more compact representation, and semantically similar symbols can be described close to each other in vector space.
Our embedding layer has an input dimension of 273 (vocabulary size) and an output dimension of 64.
After generating the embedding vector, the next step is to feed it into the bidirectional RNN layer.
The effects of using Gated Recurrent Unit (GRU) or Long Short-Term Memory (LSTM)<cit.> layers as the RNN layers were investigated.
It will be shown that LSTM performed better for this application (refer to Section <ref>).
The bi-LSTM layer consists of a double-stacked LSTM layer of 64 units each.
These layers use three gates (input, forget, and output gates) to filter information (See Ref for details).
Using bi-RNN was motivated based on our previous work<cit.> in which LSTM helped improve the model's performance for predicting peptide properties using its sequences.
The output from the bi-LSTM stack undergoes normalization via Layer Normalization<cit.>.
In DNN models, gradient values concerning weights from one layer heavily rely on outputs from the preceding layer, an issue referred to as "covariate shift.".
Some authors argue that normalization schemes improve the model by reducing covariate shifts during forward normalization.<cit.>
Conversely, others argue that improvements are based on the derivatives of the mean and variance by effectively re-scaling gradients.<cit.>
The absence of a comprehensive theoretical grasp of normalization effects hinders the evolution of novel regularization schemes.<cit.>
Furthermore, normalization serves to re-center layer values around 0, where the non-linearity of most activation functions is more intense.
Despite the limited understanding, Layer Normalization is employed due to its demonstrated effectiveness.<cit.>
In the final section of the DNN, normalized data is processed through three dense layers containing 32, 16, and 1 units, respectively.
However, the output of the dense layer with 16 units is fed separately into two separate dense layers with 1 unit each.
One layer employs a linear activation function, while the other utilizes a softplus activation function to ensure a positive value, producing μ̂_m and σ̂_m, respectively.
To avoid divergences caused by a division by zero or convergence to large uncertainties, the variance value is constrained between the minimum and the maximum values of 10^-6 and 10^4, respectively.
The outputs of these two last layers are stacked to generate the model output.
Therefore, the model produces a tuple with the mean and the variance.
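A minimal Keras sketch of a single ensemble member following this description is given below; the hidden-layer activations, the use of masking, and the exact way the two heads are packed are our assumptions, and the deployed mol.dev model may differ in such details.

# sketch of one ensemble member (hyperparameters follow the text; other details are assumed)
import tensorflow as tf

VOCAB_SIZE, EMB_DIM, RNN_UNITS, DROPOUT = 273, 64, 64, 0.35

def build_member():
    tokens = tf.keras.Input(shape=(None,), dtype="int32")
    x = tf.keras.layers.Embedding(VOCAB_SIZE, EMB_DIM, mask_zero=True)(tokens)
    x = tf.keras.layers.Dropout(DROPOUT)(x)
    x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(RNN_UNITS, return_sequences=True))(x)
    x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(RNN_UNITS))(x)
    x = tf.keras.layers.LayerNormalization()(x)
    x = tf.keras.layers.Dense(32, activation="relu")(x)
    x = tf.keras.layers.Dropout(DROPOUT)(x)
    x = tf.keras.layers.Dense(16, activation="relu")(x)
    x = tf.keras.layers.Dropout(DROPOUT)(x)
    mu = tf.keras.layers.Dense(1)(x)                            # linear mean head
    var = tf.keras.layers.Dense(1, activation="softplus")(x)    # positive variance head
    var = tf.keras.layers.Lambda(lambda s: tf.clip_by_value(s, 1e-6, 1e4))(var)
    out = tf.keras.layers.Concatenate()([mu, var])              # (mean, variance)
    return tf.keras.Model(tokens, out)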
Negative log-likelihood loss l was used to train the model.
It is defined as the probability of observing the label y given the input x⃗:
l(x⃗,y) = log(σ̂^2_m(x⃗))/2 + (y-μ̂_m(x⃗))^2/2σ̂^2_m(x⃗)
During the training phase, dropout layers with 0.35 dropout rate were incorporated after the embedding and each dense layer to mitigate over-fitting.<cit.>
Figure <ref> illustrates the model applied for inference, which omits the dropout layers.
Models were trained using the Adam<cit.> optimizer with a fixed learning rate of 0.0001 and default values for β_1 and β_2 (0.9 and 0.999, respectively).
Our model employs adversarial training, following the approach proposed by <cit.>.
However, our input is a tokenized version of the SELFIES representation.
Hence, it is a discrete sequence.
To apply adversarial training, we generate adversarial examples by modifying the embedded representation of the input data.
Each iteration in the training phase consists of first computing the loss using Equation <ref> and a second step with a new input x⃗' to smooth the model's prediction:
x⃗^' = x⃗ + ϵsign(∇_x l(x⃗,y))
where ϵ is the strength of the adversarial perturbation.
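The loss of the equation above and one adversarial update can be sketched as follows; the split of the network into an embedding part and the remaining layers, the value of ϵ, and the function names are assumptions made for illustration.

# sketch of the negative log-likelihood loss and one adversarial training step
import tensorflow as tf

def nll_loss(y_true, y_pred):
    mu, var = y_pred[:, 0], y_pred[:, 1]
    y = tf.reshape(y_true, tf.shape(mu))
    return tf.reduce_mean(0.5 * tf.math.log(var) + 0.5 * tf.square(y - mu) / var)

def adversarial_step(embed, rest, optimizer, tokens, y, eps=0.01):
    with tf.GradientTape() as tape_e:              # gradient of the loss w.r.t. the embeddings
        e = embed(tokens)
        tape_e.watch(e)
        clean_loss = nll_loss(y, rest(e))
    e_adv = e + eps * tf.sign(tape_e.gradient(clean_loss, e))   # the perturbed input x'
    with tf.GradientTape() as tape:
        loss = nll_loss(y, rest(embed(tokens))) + nll_loss(y, rest(e_adv))
    variables = embed.trainable_variables + rest.trainable_variables
    optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss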
Details of the model are available as model cards<cit.> at <http://mol.dev/>.
These cards provide information concerning performance, limitations, training data, ethical considerations, and caveats of the model.
§ RESULTS
In order to evaluate the performance of our model using deep ensembles, two baseline models were created: (i) an XGBoost Random Forest (RF) model using the 17 descriptors available on AqSolDB plus 1809 molecular descriptors calculated by PaDELPy, a python wrapper for the PaDEL-Descriptor<cit.> software, and (ii) a model with the same architecture used on our deep ensemble using RMSE as the loss function and no ensemble (referred to as DNN).
In addition, we evaluate the influence of (i) the bi-RNN layer (either GRU or LSTM), (ii) using an augmented dataset to train, (iii) the adversarial training, and (iv) the ensemble size.
§.§ Gated layer
The most common RNN layers are the GRU and the LSTM.
GRU layers use two gates, reset and update, to control the cell's internal state.
On the other hand, LSTM layers use three gates: forget, input, and output, with the same objective.
Available studies compare GRU and LSTM performances in RNNs for different applications, for instance: forecasting<cit.>, cryptocurrency<cit.>, wind speed<cit.>, condition of a paper press<cit.>, motive classification in thematic apperception tests<cit.> and music and raw speech<cit.>.
Nevertheless, it is not clear which of those layers would perform better at a given task.
We trained models using either GRU or LSTM bidirectional layers, in each case as a deep ensemble of four individual models.
Metrics can be found in Table <ref>; for an explanation of the naming syntax used in this work, refer to the caption of Table <ref>.
Using LSTM resulted in a decrease in RMSE and MAE and an increase in the correlation coefficient, indicating better performance.
For Solubility Challenges 1, 2_1, and 2_2, the kde4^GRU_Aug model yielded RMSE values of 1.329, 1.354, and 1.626, respectively, while the kde4^LSTM_Aug model achieved 1.049, 1.054, and 1.340, respectively.
This trend was also observed for the models trained without data augmentation, but in a smaller proportion (See Table <ref>).
Considering that LSTM performs better regarding this model and data, we will consider only bi-LSTM layers for further discussion.
§.§ Data augmentation
Our model is not intrinsically invariant with respect to the selfies representation input.
For instance, both "C(C(C1C(=C(C(=O)O1)O)O)O)O" and "O=C1OC(C(O)CO)C(O)=C1O" are valid SMILES representations of ascorbic acid (see Figure <ref>) that will be encoded as different SELFIES strings.
Hence, the model should learn to be invariant concerning changes in the string representation during training.
It can be achieved by training the model using different representations with the same label.
Therefore, the model can learn relations in the chemical space instead of correlating the label with a specific representation.
With this aim, we evaluated the effects of augmenting the dataset by generating new randomized SMILES representations for each sample.<cit.>
Among the performance tests, augmenting the dataset had the most significant impact on RMSE.
Improvements of ∼0.5 in the RMSE were seen when evaluating on challenge datasets 1 and 2_1, and of ∼0.2 on 2_2 (See Table <ref>).
Concerning the first two datasets, augmenting data improved every model used in this study.
However, surprisingly, data augmentation led to a deprecation of the DNN model on the solubility challenge 2_2 dataset.
This behavior
was not further investigated.
§.§ Adversarial training
Using adversarial training improved performance in Lakshminarayanan et al.<cit.> studies.
Hence, they suggested that it should be used in future applications of their deep learning algorithm.
Thus, we tested the effects of adversarial perturbation on training models with ensemble sizes of 4 and 10.
Comparing kde4^LSTM-NoAdv and kde4^LSTM, using adversarial training seems to decrease model performance.
It can be seen in Table <ref> that using adversarial perturbation increased the RMSE from 1.425 to 1.554 and 1.258 to 1.469 in solubility challenges dataset 1 and 2_1, respectively.
However, the RMSE decreased from 1.719 to 1.523 in dataset 2_2.
Using adversarial perturbation affected our kde4^LSTM's performance by a change in RMSE of ±0.2.
The inconsistent performance improvement observed by using adversarial training was further investigated with models in which the dataset was augmented.
Due to the lack of multiple string representations in the training dataset, it is known that kde4^LSTM may have some problems generalizing the learning.
A generalization issue could direct the adversarial perturbation in a non-physical direction because the model does not have complete knowledge about the chemical representation space.
This hypothesis is reinforced when we compare kde10^LSTM_Aug-NoAdv and kde10^LSTM_Aug.
When using adversarial training on a model trained with an augmented dataset, the performance improvement is more noticeable (∼0.5) and consistent for all the test datasets.
§.§ Deep ensemble size
To investigate the effects of increasing the ensemble size, we trained models with an ensemble of 4, 8, and 10 models.
Given the previous results, these models used LSTM as the bi-RNN layer and were trained on the augmented dataset.
Specifically for the solubility challenge 2_2, the most complex set to predict, these models presented an RMSE of 1.340, 1.418, and 1.263, respectively.
The same order of performance was observed in all test sets (See Table <ref>), showing that increasing the ensemble size consistently improved performance.
Besides the immediate improvement in RMSE, increasing the ensemble size also improves the uncertainty of the model.
Figure <ref> shows the density distribution of the aleatoric variance and the epistemic variance (respectively related to AU and EU) for kde4^LSTM_Aug (top 6 panels) and kde10^LSTM_Aug (bottom six panels).
The increase in ensemble size led to a decrease in both uncertainties.
AU distributions for the kde4^LSTM_Aug are centered around 4 logS^2 , displaying a long tail that extends to values as high as 20 logS^2 in the worst case (solubility challenge 2_2).
A similar trend is seen in EU distributions.
On the other hand, the kde10^LSTM_Aug model results in narrower distributions.
The mean of these distributions remains relatively unchanged, but a noticeable reduction in the extent of their tails can be observed.
AU distribution ends in values around 10 logS^2.
§ DISCUSSION
After extensively investigating the hyperparameter selection, we compared our model with available state-of-art models from the literature.
Performance metrics on withheld validation data, the solubility challenge datasets, can be found in Table <ref>.
Parity plots for our chosen models are presented in Figure <ref>.
Focusing on the solubility challenge 1 dataset<cit.>,
kde10^LSTM_Aug is only ∼0.2 RMSE units worse than the best model available in the literature<cit.>.
The RMSE of the participants of the challenge was not reported.<cit.>
The primary metric used to evaluate models was the percentage of predictions within an error of 0.5 LogS units (called %±0.5log).
Computing the same metric, kde10^LSTM_Aug has a percentage of correct prediction of 44.4%, a result better than 65% of the participants.
The participant with the best performance presented a %±0.5log of 60.7%.
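For concreteness, the two figures of merit used throughout this comparison can be written out in a few lines of numpy (function names are ours):

# RMSE and the fraction of predictions within 0.5 logS units
import numpy as np

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def pct_within_half_log(y_true, y_pred):
    err = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    return 100.0 * float(np.mean(err <= 0.5))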
The architecture of the models was not published in the findings of the first challenge.<cit.>
Nevertheless, the findings for the second challenge<cit.> investigated the participants more thoroughly.
Participants were asked to identify their models' methods and descriptors used.
The challenge is divided into two datasets.
Set-1 contains LogS values with an average interlaboratory reproducibility of 0.17 LogS.
Our kde10^LSTM_Aug achieves an RMSE of 0.983 and a %±0.5log of 40.0% on this dataset.
Its results are therefore better than 62% of the published RMSE values and 50% of the published %±0.5log values.
In addition, the model with the best performance is an artificial neural network (ANN) that correctly predicted 61% of the molecule's LogS using a combination of molecule descriptors and fingerprints.
The second dataset (set-2) contains molecules whose solubility measurements are more challenging, reporting an average error in reproducibility of 0.62 LogS.
The kde10^LSTM_Aug achieves an RMSE of 1.263 and a %±0.5log of 23.3%.
It performs better than 82% of the candidates when considering the RMSE.
Surprisingly, the %±0.5log does not follow this outstanding performance, being higher than the values of only 32% of the participants.
Regarding the literature, kde10^LSTM_Aug has an RMSE only ∼0.1 higher than a GNN that used an extensive set of numeric and one-hot descriptors in their feature vector.<cit.>
Our model performs better than a transformer model that uses SMILES-string and an adjacency matrix and inputs.<cit.>
The performance of those models is available in Table <ref>.
Notably, all participants in the solubility challenge 2 submitted a kind of QSPR or descriptor-based ML.
Using descriptors provides an easy way to ensure model invariance concerning molecule representation and is more informative since they can be physical quantities.
However, selecting appropriate descriptors is a crucial step for developing descriptor-based ML models.
It often requires specialists with a strong intuition about the relevant physical and chemical properties for predicting the target quantity.
Our approach, on the other hand, is based on extracting information directly from simple string representations, a simpler form of raw data.
Furthermore, we could achieve state-of-art performance while balancing the model size and complexity and using a raw input (a simple string) to simplify its usage.
Lastly, transformers models have been used to address the issue of accurately predicting the solubility of small compounds.
The typical workflow for transformers involves pre-training the model using a large dataset and subsequently fine-tuning it for a specific downstream task using a smaller dataset.
Most existing models were either pre-trained and fine-tuned on the ESOL<cit.> dataset or pre-trained on a larger dataset and fine-tuned using ESOL.
Hence, the generalizability of those models cannot be verified.
In a study by <cit.>, they considered two versions of their model, SolTranNet.
The first version of SolTranNet was trained with the ESOL dataset using random splits.
This approach achieved an RMSE of 0.289.
Subsequently, the deployed version of SolTranNet was trained with the AqSolDB<cit.>.
When ESOL was used to evaluate their deployed version, the model presented an RMSE of 2.99.
While our model achieved an RMSE of 1.316 on ESOL, outperforming SolTranNet deployed version, it cannot be compared with other models trained on ESOL.
§ CONCLUSIONS
Our model was able to predict LogS values directly from SMILES or SELFIES string representations.
Hence, there is no need for descriptors selection and construction.
Using only raw data, our model could match state-of-art performance in datasets that are challenging to predict accurately.
In addition, carefully compromising between performance and complexity, we implemented a web application using TensorFlow JS.
This application can satisfactorily run on any device with limited computational resources, such as laptops and smartphones.
This excludes the need to rely on a server to run the application, improving usability and flexibility and decreasing implementation costs.
§ DATA AND CODE AVAILABILITY
All code needed to reproduce those results are publicly available on the following GitHub repository: <https://github.com/ur-whitelab/mol.dev>.
The model is also publicly accessible at the following address: <https://mol.mol.dev/>.
|
http://arxiv.org/abs/2307.04180v1 | 20230709140850 | Lattice path matroidal subdivisions, Positive Tropical Grassmannian and Amplituhedron | [
"Ayush Kumar Tewari",
"Ahmed Umer Ashraf"
] | math.CO | [
"math.CO",
"math-ph",
"math.AG",
"math.MP",
"52B40, 14T15, 81U99"
] |
We introduce the notion of lattice path matroidal subdivisions, or LPM subdivisions for short, and show that these subdivisions are regular and hence the weight vectors for them lie in the Dressian. This leads us to explore the structure of the set of these weights inside the Dressian and owing to the fact that Lattice path matroids are positroids, we move to the positive Dressian which in turn is equal to the positive tropical Grassmannian, an object of immense interest currently in Physics. This is related to the amplituhedron and positive configuration space, which we describe here and wish to explore these connections further.
§ INTRODUCTION
Lattice path matroids (LPM) [We use this abbreviation for lattice path matroid and lattice path matroidal depending on the context] were introduced by Bonin et.al in <cit.> and matroidal properties including the Tutte polynomial were derived for them. Subsequently, it was proven that they are positroids <cit.> and also enjoy multiple connections with the positive Grassmannian. Lattice paths in themselves are ubiquitous in various topics within mathematics for example in combinatorics, representation theory, etc. In our work we see this feature helping us connect our study to various topics of not only mathematics but also to a recently defined concept in physics, the amplituhedron <cit.>, which is a geometric object encoding information concerning the scattering amplitudes of particles.
We begin with the introduction of lattice path matroidal subdivisions, which are matroidal subdivisions with each maximal cell corresponding to a lattice path matroid polytope. The idea for this class of subdivisions comes from the lattice path matroid polytope decompositions <cit.>, which is a subclass of matroid base polytope decompositions, studied in detail in <cit.>. Lattice path matroidal decompositions enjoy a unique property; they are obtained in an iterative way via simple decompositions into two LPMs, termed as a hyperplane split. We harness this property to relate them to the well-known class of split subdivisions. This relation eventually helps us in proving one of our first results.
Any subdivision of a lattice path matroid polytope _[P,Q] is regular.
Not only we are able to establish regularity for LPM subdivisions but we also show that they are obtained as common refinements of split subdivisions, which allows much more structure to these subdivisions. We introduce the notion of LPMfan
as the polyhedral fan that corresponds to LPM subdivisions. We discuss the relation of LPMfan to various well-known polyhedral fan structures which correspond to regular matroidal subdivisions, namely tropical Grassmannian and Dressian. Since LPM are positroids as well, this discussion can also be connected to the positive part of the tropical Grassmannian and Dressian. We furnish computational examples for both LPM subdivisions and LPMfans for the underlying hypersimplex Δ(k,n) which is an LPM polytope, where k=3,4 and n=6,8 respectively.
Postnikov <cit.> led the study on the stratification of the positive Grassmannian into cells that enjoy equivalences with various combinatorial objects, like decorated permutations, reduced plabic graphs, etc. We also put our results into perspective by discussing how our LPM subdivisions correspond to these combinatorial objects. This also helps us in bringing the connections to the geometric object amplituhedron, introduced first by Arkani-Hamed et al. <cit.> to study problems concerning scattering amplitudes in high energy physics. We point the reader to <cit.> for exploring the connections between scattering amplitudes in physics and the geometry of the Grassmannian in full detail. Our discussion mostly revolves around the connections between positive Grassmannian, positive tropical Grassmannian and the amplituhedron.
Firstly, for the m=2 amplituhedron, we provide a purely matroidal treatment to the definition of BCFW[the abbreviation is after the names of Physicists Britto, Cachazo, Feng, and Witten] style recurrence relations for positroid dissections of the hypersimplex in the form of Theorem <ref>. These positroidal dissections were introduced in <cit.> and it is shown in <cit.> that via T-duality they are also related to certain dissections of the m=2 amplituhedron 𝒜_n,k,2. Secondly, for the m=4 amplituhedron, in <cit.> it is shown that BCFW cells of the amplituhedron correspond to a noncrossing lattice paths of a certain lattice rectangle. Additionally, a recent work <cit.> shows that BCFW cells provide a triangulation of the amplituhedron 𝒜_n,k,4. In light of these results, we prove the following result which is the first result highlighting the relation between the BCFW triangulation of 𝒜_n,k,4 and positroidal dissection of a certain hypersimplex.
Each triangulation of the amplituhedron 𝒜_n,k,4 into (k, n)-BCFW cells provides a positroid dissection {Γ_i} of the hypersimplex Δ(k,n-4), where each BCFW cell corresponds to a lattice path matroid polytope Γ_i.
Lastly, <cit.> discusses the relation between positroidal cells of the positive Grassmannian and the positive configuration space, via the Chow quotient of the Grassmanian. We also encounter a special class of LPM's throughout our study, namely snakes, which are minimal, and we use this property, to provide examples of clusters for them, which implies intricate connections between LPM's and the underlying cluster algebra, which we wish to explore further in subsequent work. This minimality of snakes also helps us answer partially a Question asked in <cit.>. We would like to make a special mention of the various salient features which we encounter for snakes and would like to state them as follows,
Snakes are lattice path matroids, positroids, minimal, binary, indecomposable, series-parallel, graphical <cit.>, order, alcoved [We do acknowledge that order and alcoved are properties satisfied by matroid polytopes of snakes.] <cit.>
In Section <ref> we introduce all basic definitions which we will use in further discussions. Section <ref> introduces the notion of LPM subdivisions and Theorem <ref> is proven here. Section <ref> describes the relation between the positive tropical Grassmannian and LPM subdivisions. Section <ref> collects all our computational examples, which are mostly LPM subdivisions and LPMfan for LPM polytopes Δ(3,6) and Δ(4,8). Section <ref> introduces the notion of amplituhedron and relates in detail the findings pertaining to LPM's. Finally, we discuss probable future problems and open questions in Section <ref>.
§ PRELIMINARIES
We would like to guide the readers unfamiliar with the concepts in this section to <cit.> and <cit.> for further details. A matroid of rank k on the set [n] := {1,2, …, n} is a nonempty collection ⊆[n]k of k-element subsets of [ n ], called bases of , that satisfies the exchange axiom:
For any I , J ∈ and i ∈ I, there exists j ∈ J such that I ∖{ i }∪{ j }∈.
A matroid is called realizable if it can be represented by the columns of a matrix over some field 𝕂. A positroid of rank k is a matroid that can be represented by a k × n matrix with non-negative maximal minors.
The Grassmannian Gr(k,n) parameterizes the family of all k-dimensional subspaces of the n-dimensional vector space 𝕂^n. It also possesses a smooth projective variety structure, corresponding to the vanishing set of the Plücker ideal ℐ_k,n.
An element in the Grassmannian Gr(k,n) can be understood as a collection of n vectors v_1, …, v_n∈𝕂^k spanning the space 𝕂^k, modulo the simultaneous action of GL_k on the vectors, where the vectors v_i are the columns of a k × n matrix A. Then an element V ∈ Gr(k,n) represented by A gives the matroid ℳ_V whose bases are the k-subsets I ⊂ [n] such that det_I(A) ≠ 0. Here, det_I(A) denotes the determinant of A_I, the k × k submatrix of A with the column set I.
An element V ∈ Gr(k,n) is termed totally non-negative if det_I(V) ≥ 0 for all I ∈[n]k. The set of all totally non-negative V ∈ Gr(k,n) is the totally non-negative Grassmannian Gr^≥ 0(k,n); abusing notation, we refer to Gr^≥ 0(k,n) as the positive Grassmannian <cit.>.
Tropical geometry is the study of polynomials over the tropical semiring 𝕋 = (ℝ∪{∞}, max, +). Given e = (e_1, …, e_N) ∈ℤ^N_≥ 0, we let x^e denote x_1^e_1⋯ x_N^e_N. For a polynomial f = ∑_e ∈ E a_ex^e, we first associate a corresponding tropical polynomial in which the binary operations are replaced by tropical addition and multiplication, respectively, and we denote by Trop(f) the tropical hypersurface associated to f, which is the collection of all points where the maximum is achieved at least twice. Let E = E^+∪ E^-⊆ℤ^N_≥ 0, and let f be a nonzero polynomial with real coefficients such that f = ∑_e ∈ E^+a_ex^e - ∑_e ∈ E^-a_ex^e, where all of the coefficients a_e are non-negative real numbers. Then Trop^+(f) denotes the positive part of Trop(f): the set of all points (x_1, …, x_N) such that, if we form the collection of numbers ∑_i e_i x_i for e ranging over E, then the minimum of this collection is not unique and, furthermore, is achieved for some e ∈ E^+ and some e ∈ E^- <cit.>.
The tropical Grassmannian Trop Gr(k,n) is the intersection of the tropical hypersurfaces Trop(f), where f ranges over all elements of the Plücker ideal ℐ_k,n, which is generated by the quadratic Plücker relations <cit.>. The Dressian Dr(k,n) is the intersection of the tropical hypersurfaces Trop(f), where f ranges over all three-term Plücker relations. Similarly, the positive tropical Grassmannian Trop^+Gr(k,n) is the intersection of the positive tropical hypersurfaces Trop^+(f), where f ranges over all elements of the Plücker ideal. The positive Dressian Dr^+(k,n) is the intersection of the positive tropical hypersurfaces Trop^+(f), where f ranges over all three-term Plücker relations. The underlying matroid for the definitions of the tropical Grassmannian and the Dressian is the uniform matroid _k,n. However, the notion of Dressian can be extended to arbitrary matroids with the definition of a local Dressian. The local Dressian Dr(ℳ) is defined as the tropical prevariety given by the set of quadrics obtained from the three-term Plücker relations by setting the variables p_B to zero, where B is not a basis of ℳ <cit.>.
A subdivision Σ of a polytope P in ℝ^d is said to be regular if there exists a weight vector w such that, if the vertices of P are lifted to the heights provided by w in ℝ^d+1 and subsequently the lower convex hull is projected back to ℝ^d, then the subdivision Σ is retrieved. A tropical polynomial with Newton polytope P defines a tropical hypersurface that is dual to a regular subdivision of P. We point the reader to <cit.>, <cit.> for further details about this duality.
We recall details about a special class of subdivisions that appear in our work. A split subdivision is a subdivision with exactly two maximal cells <cit.>. Two splits S_1 and S_2 are said to be compatible if the hyperplane along the split edges do not intersect in the interior of the polytope.
We now introduce definitions dealing with lattice path matroids. Let E be a set (which is going to be the ground set of the matroid), and let 𝒜 = (A_j : j ∈ J) be a set system over E, that is, a multiset of subsets of the finite set E. A transversal of 𝒜 is a set { x_j : j ∈ J } of |J| distinct elements such that x_j∈ A_j for all j ∈ J. A partial transversal of 𝒜 is a transversal of a set system of the form (A_k : k ∈ K) with K a subset of J. A transversal matroid is a matroid whose independent sets are the partial transversals of some set system 𝒜 = (A_j : j ∈ J); 𝒜 is called a presentation of the transversal matroid. We denote this matroid by ℳ[𝒜]. The bases of a transversal matroid are the maximal partial transversals of 𝒜 <cit.>.
We now recall the definition of a lattice path matroid as a certain kind of transversal matroid <cit.>. Consider an r× (n-r) rectangular lattice grid _r,n. This consists of all the lattice points { (a, b) : 0 ≤ a ≤ n-r , 0 ≤ b ≤ r} and all the edges between neighboring lattice points. It can also be thought of as a Young diagram <cit.> consisting of r· (n-r) unit squares, namely that of the partition λ = (n-r, n-r, …, n-r) with r parts. An NE-path over _r, n is a path from the point (0,0) to the point (n-r, r) each of whose steps is either a step in the (1,0) direction (an E-step) or a step in the (0,1) direction (an N-step). Note that the position of each edge of _r,n within any NE-path through it is the same, so we can label the edge by this position. Using this observation, we can denote each NE-path by the sequence of its north steps.
Let P and Q be two NE-paths on _r, n denoted by
P = p_1p_2… p_r
Q = q_1 q_2 … q_r
then the set of all NE-paths between P and Q forms a matroid. That is,
[P,Q] = {{i_1, i_2, …, i_r}: p_j ≤ i_j ≤ q_j for j=1,…, r }
Sometimes, we denote the matroid [P,Q] by just [J] where J is the skew Young diagram bounded by P and Q.
An example of a lattice path matroid is depicted in Figure <ref>, where the edges in the North direction are marked with their respective indices.
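Since the bases of [P,Q] are cut out by the coordinatewise condition above, they are straightforward to enumerate by machine. The following Python sketch is ours (the function name and the encoding of the bounding paths by their north-step positions are our own choices, not notation from the text):

from itertools import combinations

def lpm_bases(P, Q, n):
    # Bases of the lattice path matroid M[P, Q] on the ground set [n].
    # P and Q are the increasing sequences of north-step positions of the two
    # bounding NE-paths, so a k-subset {i_1 < ... < i_k} is a basis exactly
    # when P[j] <= i_j <= Q[j] for every j.
    k = len(P)
    assert len(Q) == k and all(p <= q for p, q in zip(P, Q))
    return [I for I in combinations(range(1, n + 1), k)
            if all(P[j] <= I[j] <= Q[j] for j in range(k))]

# Example: the two extreme paths give the uniform matroid U_{3,6},
# whose bases are all 20 of the 3-subsets of [6].
print(len(lpm_bases([1, 2, 3], [4, 5, 6], 6)))  # 20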
§ LPM SUBDIVISIONS
We use the following definition from <cit.>,
We call a lattice path matroid [P,Q] a snake if it has at least two elements, it is connected and the strip contained between the paths P and Q does not contain any interior lattice point.
Snakes are also referred to as border strip matroids. Snakes have the minimal number of bases a rank r connected matroid over n elements has. That is why they are also called minimal matroids <cit.>. In contrast to this, uniform matroids are maximal with respect to this property.
We introduce a new class of subdivision as follows:
Let [P,Q] be a lattice path matroid and _[P,Q] be its matroid polytope.
A subdivision Σ of _[P,Q] is called a lattice path matroidal (LPM) subdivision if all maximal cells of Σ are lattice path matroid polytopes.
In <cit.>, matroid base polytope decompositions are studied in detail and it is shown that for a lattice path matroid [P,Q] which is not a snake, its matroid polytope _[P,Q] admits a decomposition into lattice path matroid polytopes such that
_[P,Q] = ⋃_i=1^t_[P_i,Q_i]
where each _[P_i,Q_i] is also a lattice path matroid base polytope for some lattice path matroid M[P_i,Q_i], and for each 1 ≤ i ≠ j ≤ t,
the intersection _[P_i,Q_i]∩_[P_j,Q_j] is a face of both _[P_i,Q_i] and _[P_j,Q_j]. A hyperplane LPM split decomposition is a decomposition in exactly two lattice path matroid polytopes, i.e., t=2 and as a consequence of <cit.> we also know that these two LPM polytopes are full-dimensional.
We feel that it is a good time to recall the notion of a polytopal subdivision <cit.>,
For a polytope P ∈ℝ^d, a (polyhedral) subdivision Σ is a polytopal complex whose vertices are the vertices of P, and that covers P. Σ can be understood as a collection of faces F, such that for any two faces F_i and F_j, F_i∩ F_j∈Σ.
It is clear from the definition above that the notions of LPM decomposition and LPM subdivision coincide, and we state this in the form of Corollary <ref>.
Let Σ' = (_[P_1,Q_1], …, _[P_t,Q_t]) be a decomposition of _[P,Q] into lattice path matroid polytopes. Then Σ' coincides with the subdivision Σ of _[P,Q] whose maximal cells are C_i = _[P_i,Q_i].
The LPM subdivision corresponding to a hyperplane LPM split decomposition is called a split subdivision. The subsequent subdivisions are obtained iteratively via split subdivisions which correspond to hyperplane LPM split decompositions. We take this opportunity to fix terminology so as to minimize confusion: 'split' with the prefix 'hyperplane' (a hyperplane split) always refers to the decomposition of a lattice path matroid polytope into two LPM polytopes, whereas 'split' with the suffix 'hyperplane' (a split hyperplane) refers to the hyperplane defining a split subdivision.
From now on our discussion would mostly focus on the LPM subdivisions, however, because of the equivalence in Corollary <ref> most of our results also extend to LPM decompositions, unless otherwise stated.
The property of being obtained via iterated hyperplane LPM split decompositions is particular to the LPM decompositions described in <cit.>, and in this respect they differ from the matroid decompositions defined in <cit.>, which are used to define a new quasisymmetric invariant for matroids acting as a valuation on decompositions of matroid polytopes. Kapranov <cit.>, however, showed that for rank 2 matroids such matroid decompositions can be obtained via hyperplane split decompositions.
We recall first this technical result regarding split subdivisions,
Split subdivisions are regular.
Let S be a split subdivision of a polytope P. We provide a canonical weight vector for this subdivision in the following way. Let a be the normal vector to the split hyperplane H_S. We define the weight vector for S as w_S: Vert(P) →ℝ such that
w_S(v) =
|a · v| if v ∈ S_+,
0 if v ∈ S_-.
It is clear that this weight function is well-defined and induces the split subdivision S.
We now state a technical result concerning split LPM subdivisions. We call an LPM polytope _[J]⊆_[P,Q] a truncated LPM polytope if _[J] = _[P,Q]∖ (_[P,Q]∩ H_-), where H_- is a halfspace defined by the split hyperplane H of a split subdivision (cf. Figure <ref>).
A split subdivision of a truncated LPM polytope _[J] into two LPM can be extended to a split subdivision of the LPM polytope _[P,Q] into two LPM.
We consider a split S of the LPM polytope _[P,Q]. By Lemma <ref> we know there exists a weight vector w_S of the form
w_S(v) =
|a · v| if v ∈ S_+,
0 if v ∈ S_-,
where a is the normal vector to the split hyperplane H_S.
Similarly, let us consider a split S' of the truncated LPM polytope
_[J]. Again by Lemma <ref> we know that restricted to _[J] there exists a weight vector w_S' of the form
w_S'(v) =
|b · v| if v ∈ S'_+,
0 if v ∈ S'_-,
where b is the normal vector to the split hyperplane H_S', and we choose S'_- such that S_-⊆ S'_-. Now we notice that there exists an extension of the weight vector w_S' to w'_S', defined as follows:
w'_S'(v) =
w_S'(v) if v ∈_[J],
0 if v ∈_[P,Q]∩ S_-.
For an LPM polytope _[P,Q], the split subdivisions induced from a hyperplane split decomposition are compatible.
We proceed by proving the claim for two arbitrarily chosen split subdivisions. Let S_1 and S_2 be two split subdivisions of _[P,Q]. Since split LPM subdivisions are defined iteratively, without loss of generality we may assume that S_2 restricted to the truncated LPM polytope _[J] = _[P,Q]∖ (_[P,Q]∩ S_1_-) defines a split subdivision for _[J]. But this implies that the split hyperplane H_S_2 lies in _[J]. Therefore, the split hyperplanes H_S_1 and H_S_2 cannot meet in the interior of _[P,Q]. Hence, the splits S_1 and S_2 are compatible.
As for the case of the hypersimplex Δ(k,n), which is also an LPM polytope, we already know that any two splits are always compatible <cit.>.
The compatibility of splits which provide the iterative description of LPM subdivisions also shows that LPM are split matroids, introduced by Joswig and Schroeter in <cit.>.
Any LPM subdivision Σ of a lattice path matroid polytope _[P,Q] is regular.
Let σ be the LPM decomposition corresponding to Σ. We know that σ can be obtained via iterative hyperplane LPM split decompositions. These hyperplane LPM split decompositions correspond to split subdivisions. Let {S_1, S_2, …, S_n} be the sequence of split subdivisions which correspond to Σ. We note that {S_2, …, S_n} are splits for the corresponding truncated LPM polytope _[J]. By Lemma <ref> we know that the splits {S_2, S_3, …, S_n} can be extended to split subdivisions for _[P,Q]; let {S'_2, S'_3, …, S'_n} be the corresponding split subdivisions on _[P,Q] for Σ. We see that Σ is the common refinement of the splits {S_1, S'_2, …, S'_n}, and since we know from Lemma <ref> that these splits are compatible, this common refinement is well defined. We now invoke the Split Decomposition Theorem <cit.> to conclude that there exists a canonical weight vector
w = ∑_S' α^w_S' w_S'
which induces Σ, where the sum runs over all splits S' and α^w_S' represents the coherency index <cit.>. Hence, Σ is a regular subdivision.
For the hypersimplex Δ(3,6) we describe an LPM subdivision Σ^LPM in Section <ref>, illustrated in Figure <ref>; the corresponding LPM polytope decomposition (M_1, …, M_6), shown in Figure <ref>, is obtained as the common refinement of four splits, namely S_1, S_2, S_3 and S_4. The weight which induces
Σ^LPM is
w_Σ^ = {0,0,0,0,0,0,0,1,1,2,0,0,0,1,1,2,2,2,3,5}
and
w_S_1 = {0,0,0,0,0,0,0,1,1,1,0,0,0,1,1,1,1,1,1,2}
w_S_2 = {0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,1,1}
w_S_3 = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1}
w_S_4 = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1}
are the weights which induce the splits S_1, S_2, S_3 and S_4. With this, we see an example of the result described in Theorem <ref>, with the split decomposition in the following form,
w_Σ^LPM = w_S_1 + w_S_2 + w_S_3 + w_S_4
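This decomposition can be verified coordinatewise; a minimal sketch (variable names are ours, and the coordinate order is the one used for the weight vectors listed above):

w_sigma = [0,0,0,0,0,0,0,1,1,2,0,0,0,1,1,2,2,2,3,5]
w_s1 = [0,0,0,0,0,0,0,1,1,1,0,0,0,1,1,1,1,1,1,2]
w_s2 = [0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,1,1]
w_s3 = [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1]
w_s4 = [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1]

# In this example every coherency index equals 1, so the split decomposition
# reduces to a plain coordinatewise sum.
assert w_sigma == [a + b + c + d for a, b, c, d in zip(w_s1, w_s2, w_s3, w_s4)]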
Let w_Σ be a weight vector for an LPM subdivision Σ of a lattice path matroid polytope _[P,Q]. Then w_Σ∈ Dr([P,Q]).
Since w_Σ induces a regular matroidal subdivision Σ, by <cit.> w_Σ lies in the Dressian Dr([P,Q]).
We know that the Dressian is endowed with two polyhedral fan structures: one coming from the tropical prevariety definition, with points satisfying the Plücker relations, termed the Plücker fan structure <cit.> on the Dressian; the other, termed the secondary fan structure <cit.>, comes from the Dressian being a subfan of the secondary fan. Moreover, we know that these two fan structures coincide <cit.>. We now have the required setup to describe a new polyhedral fan structure for LPM subdivisions. We begin this exploration with the following definition.
Let [P,Q] be a lattice path matroid. We define LPMfan([P,Q]) to be the polyhedral fan consisting of all weight vectors w that induce an LPM subdivision of _[P,Q]. Two weight vectors w_1 and w_2 lie in the same cone C if the induced LPM subdivisions Σ_1 and Σ_2 are the same.
Clearly,
LPMfan([P,Q]) ⊆Dr([P,Q]) ⊆Secfan(P_[P,Q])
where all inclusions are inclusions of subfans. Additionally, since LPM subdivisions are by definition obtained via refinements of split subdivisions, the LPMfan sits as a subfan inside the split complex Split(P_[P,Q]), which is an abstract simplicial complex defined on the set of compatible splits of _[P,Q] <cit.>.
Hence, we get this refined containment relation of subfans,
LPMfan([P,Q]) ⊆Split(P_[P,Q]) ⊆Dr([P,Q]) ⊆Secfan(P_[P,Q])
An important observation is that the hypersimplex Δ(k,n) is a lattice path matroid polytope and hence all our results for LPM polytopes follow in this case,
LPMfan(k,n) ⊆Split(Δ(k,n)) ⊆Dr(k,n) ⊆Secfan(Δ(k,n))
An important avenue of research has been to understand the structure of the Dressian Dr(k,n), particularly for certain low values of k and n, namely (3,6) and (3,7) <cit.>, and (3,8) <cit.>. We describe LPMfans for certain values of k, n and discuss the calculations in Section <ref>.
§ POSITIVE TROPICAL GRASSMANNIAN AND LPM SUBDIVISIONS
In this section, our aim is to highlight the consequences of the fact that LPMs are positroids; towards the end we are also able to provide an answer to a question asked in <cit.> concerning finest matroidal subdivisions of the hypersimplex. Since it is a major theme of this section, we recall the result from <cit.> showing that lattice path matroids are positroids, upon which we build further.
A lattice path matroid is a positroid.
Let [P,Q] be an LPM. For the result to be true, it is sufficient to construct a k × n matrix A such that
det(A_I) =
0 if I ∈[n]k∖[P,Q],
α if I ∈[P,Q],
where α > 0. Such a matrix can be constructed as follows. Let A = (a_i,j), with i ∈ [k] and j ∈ [n], be the k × n Vandermonde matrix, and set a_i,j = 0 for all j ∉ [P_i,Q_i], where P_i and Q_i represent the i-th north step in the lattice paths P and Q, respectively. So A has the following form:
a_i,j =
x_i^j-1 if P_i≤ j ≤ Q_i,
0 otherwise.
Assign values to the variables x_1, …, x_k such that x_1 > 1 and x_{i+1} = x_i^{k^2} for all i ∈ [k-1]. We denote by A_[1, …, k][c_1, …, c_k] the submatrix of A with rows indexed from 1 to k and columns c_1, …, c_k. We have det(A_I) > 0 if and only if A_[1, …, k]I has nonzero diagonal entries, which happens if and only if I ∈[P,Q].
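A hedged computational sketch of this construction: the helper below builds the matrix A for given north-step sequences P and Q and checks, with exact integer arithmetic, that the maximal minors are positive exactly on the bases. The function names and the small rank-2 example are ours; the choice x_1 = 2 and x_{i+1} = x_i^{k^2} follows the proof.

from itertools import combinations, permutations

def det_exact(M):
    # Exact determinant of a small square integer matrix via the Leibniz formula.
    k = len(M)
    total = 0
    for p in permutations(range(k)):
        sign = (-1) ** sum(p[i] > p[j] for i in range(k) for j in range(i + 1, k))
        term = sign
        for i in range(k):
            term *= M[i][p[i]]
        total += term
    return total

def positroid_matrix(P, Q, n):
    # Matrix A from the proof: a_{i,j} = x_i^(j-1) if P_i <= j <= Q_i, else 0.
    k = len(P)
    x = [2]
    for _ in range(k - 1):
        x.append(x[-1] ** (k ** 2))
    return [[x[i] ** (j - 1) if P[i] <= j <= Q[i] else 0 for j in range(1, n + 1)]
            for i in range(k)]

# Sanity check on the LPM of rank 2 on [4] with P = (1,2), Q = (2,4):
P, Q, n = [1, 2], [2, 4], 4
A = positroid_matrix(P, Q, n)
bases = {I for I in combinations(range(1, n + 1), 2)
         if all(P[j] <= I[j] <= Q[j] for j in range(2))}
for I in combinations(range(1, n + 1), 2):
    d = det_exact([[A[i][j - 1] for j in I] for i in range(2)])
    assert (d > 0) if I in bases else (d == 0)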
Lusztig <cit.> and Postnikov <cit.> introduced the notion of positivity for Grassmannians. This notion extends naturally to the tropical Grassmannian and the Dressian <cit.>. In <cit.>, and independently in <cit.>, the authors prove the following equality between Trop^+Gr(k,n) and Dr^+(k,n).
The positive tropical Grassmannian ^+(k,n) equals the positive Dressian ^+(k,n).
A generalization of this theorem to the case of the positive local Dressian with respect to a positroid is provided in <cit.>. An important characterization of points residing in the positive Dressian is given by the following result.
Let Σ be a regular subdivision of Δ(k,n) induced by a weight vector w_Σ. Then the following are equivalent:
1. w is a positive tropical Plücker vector.
2. Every face of Σ is a positroid.
The generalization of this to the local positive Dressian is again provided in <cit.>. With this characterization, we conclude that a point inducing an LPM subdivision resides in the positive Dressian.
Let Σ be an LPM subdivision of _[P,Q] and let w_Σ be the weight vector for Σ. Then w ∈ Dr^+([P,Q]) = Trop^+Gr([P,Q]).
We know that a point w lies in the positive Dressian if all the maximal cells of the subdivision it induces as a weight vector on _[P,Q] are matroid polytopes of positroids, i.e., w induces a positroidal subdivision <cit.>. Since LPM are positroids, an LPM subdivision is in particular a positroidal subdivision, and therefore w ∈ Dr^+([P,Q]) = Trop^+Gr([P,Q]).
Another important result proven in <cit.> is about the classification of the finest positroidal subdivision of the hypersimplex Δ(k,n).
Let Σ be a regular positroidal subdivision of Δ(k,n). Then the following are equivalent:
1. Σ is a finest subdivision.
2. Every facet of Σ is the matroid polytope of a series-parallel matroid.
3. Every octahedron in Σ is subdivided.
Along with the classification, <cit.> also provides the exact number of maximal cells in a finest positroidal subdivision of Δ(k,n),
Every finest positroidal subdivision of Δ(k,n) has exactly \binom{n-2}{k-1} facets.
We also recall the following classification of connected positroids which are series-parallel,
A connected positroid is series-parallel if and only if it has no uniform matroid 𝒰_2,4 as a minor.
In light of these results, we provide results about positroidal subdivisions of Δ(k,n) obtained from LPM. We begin with our first technical result concerning snakes,
Snakes are series-parallel matroids.
Note that the uniform matroid 𝒰_2,4 is also an LPM, as shown in Figure <ref>.
Clearly, 𝒰_2,4 has an interior lattice point and therefore cannot be a minor of a lattice path matroid which is a snake. Hence, by Lemma <ref>, snakes are series-parallel matroids.
We also acknowledge that another proof of this result is present in <cit.>. With Lemma <ref> and Theorem <ref>, we state the following result
Let Σ be an LPM subdivision of Δ(k,n) such that the underlying matroid of each maximal cell is a snake. Then Σ is a finest positroidal subdivision of Δ(k,n) and has exactly \binom{n-2}{k-1} facets.
With Theorem <ref> and Lemma <ref>, we also are able to provide a partial answer to Question 6.2 posed in <cit.>,
[Question 6.2 <cit.>]
Are all cells in the finest matroid subdivision of a hypersimplex, matroid polytopes of indecomposable matroids?
The authors show that the answer to this question is affirmative in the case when the hypersimplex is Δ(2,n) <cit.>. However, we know of explicit counterexamples provided in <cit.> which show that there exist finest matroidal subdivisions of certain hypersimplices, whose cells do not correspond to indecomposable matroids.
We state some technical definitions before giving the partial answer. Recall the following classification of binary matroids, which are the matroids representable over the field with two elements.
A matroid is said to be binary if and only if it has no minor isomorphic to the uniform matroid _2,4.
A matroid is said to be indecomposable if and only if its polytope does not allow a non-trivial matroid subdivision.
Therefore, we obtain Corollary <ref> as an answer to Question <ref>, when restricted to the case of positroidal subdivisions of the hypersimplex.
The cells of the finest positroidal subdivision of Δ(k,n) correspond to binary matroids. In particular, they are indecomposable.
We know from Theorem <ref> that the maximal cells of a finest positroidal subdivision of Δ(k,n) correspond to connected series-parallel positroids, and by Lemma <ref> these do not have 𝒰_2,4 as a minor and are therefore binary matroids; since binary matroids are known to be indecomposable, the claim follows.
With Lemma <ref> it is clear that the corresponding fan structure for LPM subdivisions also resides as a subfan inside the positive Dressian
LPMfan([P,Q]) ⊆Dr^+([P,Q]) = Trop^+Gr([P,Q])
LPMfan(Δ(k,n)) ⊆Dr^+(k,n) = Trop^+ Gr(k,n)
Also, in <cit.> a third fan structure on the positive Dressian Dr^+(ℳ), called the positive fan structure, is defined. This fan structure is based on the underlying cluster algebra, studied in detail in <cit.>. We refer the reader to <cit.> for basic details concerning cluster algebras. Our aim here is to highlight the third fan structure on the positive Dressian that is induced via these clusters, although they will emerge again later in our discussion concerning minimal positroids and the positive configuration space in Section <ref>. We define the notion of a cluster associated with a matroid <cit.>.
A cluster 𝒞 for a matroid ℳ is a subset of ℳ that indexes a seed in the cluster structure of the cluster algebra isomorphic to ℂ[π_ℳ], where ℂ[π_ℳ] is the coordinate ring associated to the positroid variety.
The positive fan structure on Dr^+(ℳ) is the fan whose cones are the images of the domains of linearity for a positive parameterization by a cluster 𝒞. Two points lie in the same cone of Dr^+(ℳ) if they determine the same common domains of linearity for all the functions p_J, J ∈ℳ.
The authors in <cit.> also prove that this new fan structure coincides with the previous two fan structures
The three fan structures on ^+() coincide.
With the subfan relation (<ref>) in place, we obtain the following.
The three fan structures on LPMfan(_[P,Q]) coincide.
We also want to highlight that matroid decompositions are invariant under matroid duality, which is also reflected in our description of the LPMfan: if a k-dimensional cone C in LPMfan(_[P,Q]) corresponds to an LPM decomposition {ℳ_t[P^t,Q^t]}, then there exists a k-dimensional cone C' representing the LPM decomposition {ℳ_t^*[P^t,Q^t]}, where * denotes the matroid dual. This fact can be verified in the case of Δ(3,6) from Figure <ref>.
§ COMPUTATIONS FOR LPM POLYTOPE Δ(k,n)
In this section we look at some computational examples, concentrating on the case of Δ(k,n) for k=3,4 and n=6,8 respectively. We use <cit.> for our computations.
§.§ Computations for LPM polytope Δ(3,6)
Figure <ref> illustrates an LPM subdivision Σ^LPM of Δ(3,6), with the lattice path matroids corresponding to the maximal cells also shown. We also calculate the weight vector w which induces this subdivision:
w = { 0,0,0,0,0,0,0,1,1,2,0,0,0,1,1,2,2,2,3,5}
We illustrate the LPM polytope decomposition which corresponds to the subdivision in Figure <ref> and we see the truncated LPM polytope after each iterative step of taking a hyperplane split decomposition in Figure <ref>.
We also see that Σ^LPM corresponds to a metric tree arrangement shown in Figure <ref>.
It is easy to see that under the permutation
1 → 1, 2 → 5, 3 → 3, 4 → 2, 5 → 4, 6 → 6
this tree arrangement permutes to the tree arrangement shown in Figure <ref> which corresponds to Cone_4 <cit.> in the classification of all maximal cones of Dr(3,6) <cit.>.
§.§ Decorated permutations and reduced plabic graphs
We now connect our computations to some other parameterization of the positive Grassmannian, namely decorated permutations and reduced plabic graphs, and we rely on <cit.> for most of our definitions in this subsection.
A decorated permutation of [n] is a bijection π : [n] → [n] whose fixed points are each colored either black or white. A black fixed point i is denoted by π(i) = \underline{i}, and a white fixed point i by π(i) = \overline{i}. An anti-excedance of the decorated permutation π is an element i ∈ [n] such that either π^-1(i) > i or π(i) = \overline{i}. A decorated permutation on [n] is of type (k, n) if it has k anti-excedances.
We now establish the connection between decorated permutations and positroid cells of the positive Grassmanians.
Given a k × n matrix C = (c_1, …, c_n), written as a list of its columns, a decorated permutation π := π_C is associated to C as follows. Set π(i) := j to be the label of the first column j such that c_i∈ span{c_{i+1}, c_{i+2}, …, c_j}. If c_i is the all-zero vector, it is called a loop, and if c_i is not in the span of the other column vectors, it is called a coloop. The associated positroid cell to this decorated permutation is defined as
S_π = {C ∈Gr(k,n)^≥ 0 | π_C = π}
Postnikov showed that S_π is a cell, and that the positive Grassmannian Gr(k,n)^≥ 0 is the union of cells S_π where π ranges over decorated permutations of type (k, n) <cit.>.
A plabic graph is an undirected planar graph G drawn inside a disk (considered modulo homotopy) with n boundary vertices on the boundary of the disk, labeled 1, , n in clockwise order, as well as some internal vertices. Each boundary vertex is incident to a single edge, and each internal vertex is colored either black or white. If a boundary vertex is incident to a leaf (a vertex of degree 1), it is called a lollipop.
A perfect orientation 𝒪 of a plabic graph G is a choice of orientation of each of its edges such that each black internal vertex u is incident to exactly one edge
directed away from u; and each white internal vertex v is incident to exactly one edge directed toward v. A plabic graph is called perfectly orientable if it admits a perfect
orientation. Let G_𝒪 denote the directed graph associated with a perfect orientation 𝒪 of
G. The source set I_𝒪⊆ [n] of a perfect orientation 𝒪 is the set of i which are sources of the directed graph G_𝒪. Similarly, if j ∈ Ī_𝒪 := [n] ∖ I_𝒪, then j is a sink of 𝒪.
The following result links positroids with plabic graphs <cit.>.
Let G be a plabic graph of type (k, n). Then we have a
positroid M_G on [n] defined by
M_G = { I_𝒪 | 𝒪 is a perfect orientation of G }
where I_𝒪 is the set of sources of 𝒪. Moreover, every positroid cell has the form S _M_G for some plabic graph G.
If a plabic graph G is reduced <cit.> we have that S_M_G = S_π_G , where π_G is the decorated permutation defined as follows.
Let G be a reduced plabic graph with boundary vertices 1, , n. For each boundary vertex i ∈ [n], we follow a path along the edges of G starting at i, turning
(maximally) right at every internal black vertex, and (maximally) left at every internal white vertex. This path ends at some boundary vertex π(i). The fact that G is reduced implies that each fixed point of π is attached to a lollipop; we color
each fixed point by the color of its lollipop. This defines a decorated permutation, called the decorated trip permutation π_G = π of G.
The following result from <cit.> describes how to compute the decorated permutation associated to an LPM.
Let I and J be two lattice paths starting at the origin and terminating at (k,n-k), such that I never crosses J. Let I = {i_1 < ⋯ < i_k} and J = { j_1 < ⋯ < j_k}∈[n]k. Denote [n] ∖ J = {d_1 < ⋯ < d_{n-k}} and [n] ∖ I = { c_1 < ⋯ < c_{n-k}}. Then ℳ[I,J] is a positroid and its decorated permutation π_ℳ[I,J] is given by:
π_ℳ[I,J](j_r) = i_r ∀ r ∈ [k]
π_ℳ[I,J](d_r) = c_r ∀ r ∈ [n-k]
If π_ℳ[I,J](t) = t, then
col(t) =
-1 if t ∈ J,
1 otherwise,
where col(·) represents the coloring map for loop and coloop elements of the permutation. Figure <ref> lists the decorated permutations and the reduced plabic graphs corresponding to the snakes in the snake decomposition of 𝒰_3,6 described in Figure <ref>.
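The recipe above is easy to automate; the following sketch (function name ours) returns the decorated permutation together with the coloring of its fixed points, and, for the two extreme paths of 𝒰_3,6, reproduces the permutation i ↦ i+3 (mod 6) with no fixed points:

def lpm_decorated_permutation(I, J, n):
    # Decorated permutation of M[I, J] following the recipe above:
    # pi(j_r) = i_r, pi(d_r) = c_r, and a fixed point t is colored -1 if t is in J
    # and +1 otherwise.
    I, J = sorted(I), sorted(J)
    c = sorted(set(range(1, n + 1)) - set(I))
    d = sorted(set(range(1, n + 1)) - set(J))
    pi = {}
    for r, j in enumerate(J):
        pi[j] = I[r]
    for r, elem in enumerate(d):
        pi[elem] = c[r]
    col = {t: (-1 if t in J else 1) for t in pi if pi[t] == t}
    return pi, col

pi, col = lpm_decorated_permutation([1, 2, 3], [4, 5, 6], 6)
print(pi)   # {4: 1, 5: 2, 6: 3, 1: 4, 2: 5, 3: 6}, i.e. i -> i + 3 (mod 6)
print(col)  # {}  (no loops or coloops for the uniform matroid U_{3,6})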
§.§ LPMfan(3,6)
We first inspect the f-vector of the fans associated to _3,6 <cit.>
f-vector ((3,6)) = (1,65,535,1350,1005)
f-vector (((3,6))) = (1,65,550,1395,1035)
Out of the 65 rays of the Dressian Dr(3,6), 35 correspond to splits and lie in the split complex, whereas the other 30 correspond to coarsest subdivisions of Δ(3,6) into three maximal cells.
Restricting to the positive tropical Grassmannian, we get the following vector <cit.>, <cit.>, <cit.> where F_3,6 is the fan associated to Trop^+(Gr(3,6)).
f-vector (F_3,6) = (1,16, 66, 98, 48)
Out of these 16 rays, five occur in the LPMfan, in the form of S_1, S_2, S_3, S_4 and S_5, which we see in Figure <ref>. The f-vector of the LPMfan for Δ(3,6) is listed below, where all cones are obtained as common refinements of the five splits S_1, S_2, S_3, S_4 and S_5, illustrated in Figure <ref>, in which the edges between labeled cones signify the combination of the corresponding splits.
f-vector (LPMfan(Δ(3,6)) = (1,5,7,3,1)
The LPMfan(3,6) sits inside the split subcomplex generated by the refinements of the splits S_1, S_2, S_3, S_4 and S_5. Also, to reiterate, the cones are secondary cones with rays given by the corresponding splits, i.e., all weight vectors which induce the same LPM subdivision lie in the same cone.
We refer to a lattice path matroidal subdivision which is a split having a snake as a maximal cell as a snake split subdivision, and we refer to the snakes appearing in a snake split subdivision as split snakes.
We point out the LPM decompositions for 𝒰_3,6 other than the ones shown in Figure <ref>; these are depicted in Figure <ref>, and one of the weight vectors inducing the split subdivision S_5 is the zero vector,
w_S' = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}
We also want to point out that there exists a natural action of the symmetric group S_n on the cones of the Dressian Dr(k,n), well documented in <cit.> and well described in their computations; with respect to this action there are only 7 maximal cones of Dr(3,6) <cit.>. Our description of the LPMfan implicitly incorporates this symmetry; for example, the weight vectors
w_1 = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1}
and
w_2 = {1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
both induce the split S_3, but we know that both of them are equivalent under the action of S_6.
§.§ Computations for LPM polytope Δ(4,8)
We also compute an LPM subdivision of Δ(4,8). The subdivision is described in Figure <ref> and Figure <ref> and is induced by the weight vector
w = { 0,0,0,0,0,0,0,0,0,1,1,1,2,2,3,0,0,0,0,1,1,1,2,2,3,2,
2,2,3,3,4,5,5,6,8,0,0,0,0,1,1,1,2,2,3,2,2,2,3,
3,4,5,5,6,8,3,3,3,4,4,5,6,6,7,9,8,8,9,11,14 }
A subsequent computation of LPMfan(Δ(4,8)) is more intricate than that of LPMfan(Δ(3,6)), and hence we leave it for future work; we believe it would be worthwhile to incorporate the symmetric group action into the computation in order to produce larger examples.
All the files containing the code used for all these computations can be found at the following link
<https://github.com/Ayush-Tewari13/LPM_SUBDIVISIONS>
§ AMPLITUHEDRON AND POSITIVE CONFIGURATION SPACES
We now describe an important implication of our results and connections to topics in physics which have gained immense interest in recent times. In <cit.>, Arkani-Hamed et al. introduced the notion of the amplituhedron, which is obtained from the positive Grassmannian via the amplituhedron map. It has been noted that the amplituhedron encodes information concerning scattering amplitudes in 𝒩=4 super Yang-Mills theory, which in turn explains the etymology of the term. In <cit.>, the authors introduce the notion of positroid dissections for the hypersimplex Δ(k+1,n) and the Grasstopes dissection for the amplituhedron, and explain the way in which these two dissections can be related via a duality map.
We begin with the definition of the amplituhedron <cit.>, <cit.>.
For a ≤ b, define Mat^>0_a,b as the set of real a × b matrices whose a × a minors are all positive. Let Z ∈Mat^>0_n,k+m. The amplituhedron map Z : Gr(k,n)^≥ 0→Gr(k,k+m) is defined by Z := CZ, where C is a k × n matrix representing an element of Gr(k,n)^≥ 0 and CZ is a k × (k + m) matrix representing an element of Gr(k,k+m) . The amplituhedron 𝒜^≥ 0_n,k,m(Z) ⊆Gr(k,k+m) is the image Z(Gr(k,n)^≥ 0).
We briefly state some of the results from <cit.> to sketch the outline of their discussion,
Let 𝒞 = {Γ_π} be a collection of positroid polytopes, and let S_π be the collection of corresponding positroid cells. 𝒞 is a positroid dissection of Δ(k,n) if
* dim(Γ_π) = n-1 for each Γ_π∈𝒞
* any two distinct open positroid polytopes Γ^o_π = μ(S_π) and Γ^o_π' = μ(S_π') are disjoint, and
* ∪_πΓ_π = Δ(k,n).
Let A be a k × n matrix representing a point in Gr(k,n)^≥ 0. The moment map μ: Gr(k,n)^≥0→ℝ^n is defined by
μ(A) = ∑_I ∈[n]k|p_I(A)|^2e_I/∑_I ∈[n]k|p_I(A)|^2
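For concreteness, the moment map can be evaluated numerically as follows; this is a small sketch with floating-point minors via numpy (the matrix and names are our own example, not taken from the text). Since the image lies in Δ(k,n), the coordinates sum to k.

import numpy as np
from itertools import combinations

def moment_map(A):
    # Evaluate mu on a k x n matrix A representing a point of Gr(k,n)^{>=0}.
    k, n = A.shape
    num, den = np.zeros(n), 0.0
    for I in combinations(range(n), k):
        p2 = np.linalg.det(A[:, list(I)]) ** 2   # |p_I(A)|^2
        e_I = np.zeros(n)
        e_I[list(I)] = 1.0
        num += p2 * e_I
        den += p2
    return num / den

A = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0, 1.0]])   # a totally non-negative 2 x 4 matrix
mu = moment_map(A)
print(mu, mu.sum())                    # a point of Delta(2,4); coordinates sum to 2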
A positroid dissection is called a positroid tiling if μ is injective on each S_π. As can be seen from the definition, dissections are a more generalized notion of a polytopal subdivision for a hypersimplex, with no restrictions on how individual pieces meet at the boundary, although the notion of good dissections <cit.> exactly agrees with the notion of a subdivision,
Let 𝒞 = {Γ_π^(1), …, Γ_π^(l)} be a dissection of Δ(k+1,n). We say that 𝒞 is a good dissection of Δ(k+1,n) if the following condition is satisfied: for i ≠ j, if Γ_π^(i)∩Γ_π^(j) has codimension one, then Γ_π^(i)∩Γ_π^(j) equals Γ_π, where Γ_π is a facet of both Γ_π^(i) and Γ_π^(j).
In <cit.> a dissection of the hypersimplex is provided, inspired by BCFW recurrence relations for tilings of the m=4 amplituhedron, which is referred to as the BCFW-style recurrence.
Let 𝒞_k+1,n-1 (respectively 𝒞_k,n-1) be a collection of positroid polytopes that dissects the hypersimplex Δ(k+1,n-1) (respectively Δ(k,n-1)). Then
𝒞_k+1,n = i_pre (𝒞_k+1,n-1) ∪ i_inc(𝒞_k,n-1)
dissects Δ(k+1,n),where i_pre and i_inc are maps defined on reduced plabic graphs in <cit.>.
§.§ Matroidal definition for BCFW dissections of hypersimplex
We now try to build a purely matroidal relation for BCFW-style recurrence dissection for hypersimplices.
We first fix some notation. For a positroid polytope 𝒫 with underlying positroid ℳ, we write 𝒫 = 𝒫(ℳ), where 𝒫(·) denotes taking the convex hull of the indicator vectors of the bases of ℳ. We now provide the matroidal definition for BCFW-style recurrence dissections of the hypersimplex.
Let 𝒞_k+1,n be a collection of positroid polytopes that dissects the hypersimplex Δ(k+1,n) = 𝒫(𝒰_k+1,n).
Then,
𝒞_k+1,n = 𝒫(ℳ(𝒞_k+1,n) / e_i) ∪𝒫(ℳ(𝒞_k+1,n) ∖ e_i),
and the set 𝒫(ℳ(𝒞_k+1,n) / e_i) provides a positroid dissection of Δ(k,n-1) while 𝒫(ℳ(𝒞_k+1,n) ∖ e_i) provides a positroid dissection of Δ(k+1,n-1), where '/' denotes contraction and '∖' denotes deletion of matroids, applied memberwise to the collection ℳ(𝒞_k+1,n) of underlying matroids.
Firstly, we note that the hypersimplex Δ(k+1,n) is a 0-1 polytope obtained by intersecting the unit cube [0,1]^n with the affine hyperplane ∑_i=1^n x_i = k+1. The facet corresponding to the hyperplane x_i=0 is termed the i-th deletion facet of Δ(k+1,n) and is isomorphic to Δ(k+1,n-1). Similarly, the facet corresponding to the hyperplane x_i=1 is termed the i-th contraction facet of Δ(k+1,n) and is isomorphic to Δ(k,n-1). Moreover, these facets can be obtained via deletion and contraction, respectively, on the uniform matroid 𝒰_k+1,n <cit.>.
With these definitions, the notions of contraction and deletion extend to the respective dissections and subdivisions, and this fact is used in <cit.>. We point out the natural division of the hypersimplex provided by contraction and deletion: for v ∈ Vert(Δ(k+1,n)), since the contraction and deletion facets are cut out by the hyperplanes x_i = 1 and x_i = 0 respectively, every vertex v lies in either the contraction facet or the deletion facet.
Given a positroid dissection 𝒞_k+1,n, we take minors with respect to an element i ∈ [n] and obtain the collections ℳ(𝒞_k+1,n) / e_i and ℳ(𝒞_k+1,n) ∖ e_i. These minors correspond to the dissections induced on the contraction and deletion facets of Δ(k+1,n), which are isomorphic to Δ(k,n-1) and Δ(k+1,n-1) respectively, giving the two required positroid dissections.
We point out that Theorem <ref> provides a matroidal formulation of BCFW-style relations for the hypersimplex and proves an almost converse statement to Theorem <ref>. We say almost converse since not all positroid dissections of Δ(k+1,n) arise from BCFW-style recursions <cit.>, whereas the statement of Theorem <ref> involves only matroidal operations, so for any positroid dissection of Δ(k+1,n) we can obtain dissections of Δ(k,n-1) and Δ(k+1,n-1) in this way. We also point out that it is not obvious that there exist matroidal operations equivalent to the operations i_pre and i_inc used in Theorem <ref>. We wish to explore a possible generalization of Theorem <ref> to matroid dissections, not necessarily positroid dissections. However, such a discussion would require an appropriate definition of a matroid dissection and a generalization of Theorem <ref> to that setting, as the non-trivial part of the proof of Theorem <ref> rests on a refined description of facets of positroid polytopes due to Postnikov, described in <cit.>.
We again consider the snake polytope decomposition of Δ(3,6) described in Figure <ref>. We know that this is also a regular positroidal subdivision, or equivalently a regular positroid good dissection. We now perform the contraction and deletion with respect to the element i = 1 on this subdivision and obtain two collections: { M_2∖{1}, …, M_6∖{1}} provides a positroidal subdivision (equivalently a positroid good dissection) of Δ(3,5) on the letters [6] ∖{1} = {2,3,4,5,6}, and { M_1 / {1}, M_2 / {1}, M_3 / {1}} provides a positroidal subdivision of Δ(2,5) (cf. Figure <ref>). A small computational sketch of this deletion and contraction step is given below.
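The deletion and contraction step can be sketched on the level of bases as follows (helper names ours): deletion keeps the bases avoiding i, while contraction removes i from the bases containing it. Applied to the ambient uniform matroid 𝒰_3,6 (the individual snakes M_1, …, M_6 depend on the paths in the figure), this reproduces the uniform matroids 𝒰_3,5 and 𝒰_2,5 underlying the two facet hypersimplices Δ(3,5) and Δ(2,5).

from itertools import combinations

def delete(bases, i):
    # Bases of M \ i (assuming i is not a coloop): the bases avoiding i.
    return {B for B in bases if i not in B}

def contract(bases, i):
    # Bases of M / i (assuming i is not a loop): remove i from the bases containing it.
    return {B - {i} for B in bases if i in B}

U36 = {frozenset(B) for B in combinations(range(1, 7), 3)}
print(len(delete(U36, 1)))    # 10 = number of bases of U_{3,5} on {2,...,6}
print(len(contract(U36, 1)))  # 10 = number of bases of U_{2,5} on {2,...,6}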
§.§ BCFW cells correspond to lattice path matroids
We want to point out that in the discussion in this section, we would only be focusing on the m=4 amplituhedron.
In a recent breakthrough work <cit.>, the authors prove the conjecture that BCFW cells provide a triangulation of the amplituhedron 𝒜_n,k,4. In <cit.> and <cit.> the authors establish the equivalence between BCFW cells and noncrossing lattice walks (paths). We use this observation to explore the connection between BCFW triangulations and lattice path matroids.
We mostly borrow our notation from <cit.>. Let ℒ_n,k,4 denote the set of all pairs (P_ℒ, Q_ℒ) of noncrossing lattice paths inside a k × (n - k - 4) rectangle, where noncrossing has the same meaning as P never going above Q, implicit in Definition <ref>. We can therefore state one of our first conclusions in the form of Corollary <ref>.
Let (P_ℒ, Q_ℒ) ∈ℒ_n,k,4 be a pair of noncrossing lattice paths. Then (P_ℒ, Q_ℒ) determine a lattice path matroid ℳ[P_ℒ, Q_ℒ] which lies inside the lattice path matroid 𝒰_k,n-4.
We now describe the connection between noncrossing lattice paths and BCFW cells of 𝒜_n,k,4. Firstly, in <cit.> the authors introduce the notion of a ⊕-diagram of type (k,n), defined as follows <cit.>.
Fix 0 ≤ k ≤ n. Given a partition λ, we let Y_λ denote the Young diagram of λ. A ⊕-diagram of type (k,n) is a filling D of a Young diagram Y_λ fitting inside a k × (n - k) rectangle with the symbols 0 and + (such that each box of Y_λ is filled with exactly one symbol); λ is called the shape of D (cf. Figure <ref>).
The rules according to which the filling in a ⊕-diagram is obtained are elaborated in <cit.>. Let 𝒟_n,k,4 be the space of ⊕-diagram of type (k,n). We infer the following result from <cit.>
There exists a bijection Ω_ℒ𝒟 such that
Ω_ℒ𝒟 : ℒ_n,k,4→𝒟_n,k,4
The ⊕-diagrams 𝒟_n,k,4 index the (k, n)-BCFW cells 𝒞_n,k,4 .
This theorem is proven by using another bijection, between the space of binary rooted trees 𝒯_n,k,4 and ℒ_n,k,4, and the authors use reduced plabic graphs to produce decorated permutations for the ⊕-diagrams. We point the reader to <cit.> for these concepts and proofs in full detail. Our interest stems from Corollary <ref>, which inspires us to enquire about the existence of a duality between cells of the amplituhedron and dissections of the hypersimplex, as established via T-duality in the case of the m=2 amplituhedron in <cit.>. In <cit.> the following result concerning BCFW cells is proven, which was stated as a conjecture in <cit.>.
For every k ≥ 1 and n ≥ k+4, the (k, n)-BCFW cells
form a triangulation of the amplituhedron 𝒜_n,k,4.
We now state our result based on this discussion,
Each triangulation of the amplituhedron 𝒜_n,k,4 into (k, n)-BCFW cells provides a positroid dissection {Γ_i} of the hypersimplex Δ(k,n-4), where each BCFW cell corresponds to a lattice path matroid polytope Γ_i.
By Corollary <ref> we already know that each (k,n)-BCFW cell corresponds to an LPM ℳ[P_ℒ,Q_ℒ] inside 𝒰_k,n-4, where (P_ℒ,Q_ℒ) ∈ℒ_n,k,4. Therefore, each (k,n)-BCFW cell corresponds to a lattice path matroid polytope 𝒫(ℳ[P_ℒ,Q_ℒ]) which lies inside Δ(k,n-4) = 𝒫(𝒰_k,n-4). Hence, a triangulation of 𝒜_n,k,4 into (k, n)-BCFW cells corresponds to a collection of lattice path matroid polytopes lying inside Δ(k,n-4) = 𝒫(𝒰_k,n-4), which is a positroid dissection in the sense of Definition <ref>.
With Theorem <ref> we establish a first notion in the direction of T-duality for the m=4 amplituhedron; in the case of the m=2 amplituhedron, <cit.> shows that subdivisions of the amplituhedron correspond to positroid dissections of the corresponding hypersimplex. We provide this in the m=4 case for the BCFW triangulation, which invites the exploration of other triangulations and subdivisions of 𝒜_n,k,4. Also, BCFW-style dissections enjoy a recursive description and can be understood as coming from splits, as discussed in the case of the m=2 amplituhedron in <cit.>, and we believe that a positroid dissection into LPM cells captures this in essence as well, owing to the recursive definition of LPM polytope decompositions.
§.§ Positive configuration spaces, weakly separated collections and connected minimal positroids
We highlight some of the connections between our study on LPM's and <cit.>. Firstly, in <cit.> the authors relate the positive Chow cells of the Chow quotient of the Grassmannian with positroidal subdivisions. Let Ch(k,n)_≥ 0 denote the nonnegative part of the Chow quotient of the Grassmannian.
There are canonical bijections between the following sets.
* The set {Θ_Δ > 0} of positive Chow cells of Ch(k,n)_≥ 0
* The set D(k,n) of regular positroidal subdivisions of Δ(k,n).
* The set of cones in the positive tropical Grassmannian Trop Gr^+(k, n), the space of valuations of positive Puiseux series points Gr(k, n)(ℛ > 0)
* The set of cones in the positive Dressian Dr(k,n), which satisfy the three term positive Plücker relations.
As LPMs are positroids too, all these equivalences remain true when restricted to the LPMfan.
We also delve into the connection between the cluster of a matroid, weakly separated collections <cit.> and snakes. We fix some notation relevant to our discussion. We define the cyclic ordering <cit.> (referred to as the t-th Gale order in <cit.>) ≤_t on [n] for some t ∈ [n] by the total order t ≤_t t+1 ≤_t⋯≤_t n ≤_t 1 ≤_t⋯≤_t t-1. For I, J ∈[n]k, where
I = { i_1, …, i_k}, i_1≤_t i_2≤_t⋯≤_t i_k
and
J = { j_1, …, j_k}, j_1≤_t j_2≤_t⋯≤_t j_k,
we set
I ≤_t J if and only if i_1≤_t j_1, …, i_k≤_t j_k.
For each I ∈[n]k and t ∈ [n] , we define the cyclically shifted Schubert matroid as
SM_I^t = { J ∈[n]k | I ≤_t J }
We recall the definition for weakly separated sets from <cit.>,
Let I and J be two subsets of [n]. I and J are said to be weakly separated if either
* |I| ≤ |J| and I ∖ J can be partitioned as I_1∪ I_2 such that I_1≺ J ∖ I ≺ I_2 or
* |J| ≤ |I| and J ∖ I can be partitioned as J_1∪ J_2 such that J_1≺ I ∖ J ≺ J_2
where A ≺ B indicates that every element of A is less than every element of B.
Equivalently, the sets I, J ∈[n]k are weakly separated if we cannot find cyclically ordered elements a, b, c, d such that a, c ∈ I ∖ J and b, d ∈ J ∖ I (together with the symmetric statement with the roles of I and J swapped).
We also recall the definition of Grassmann necklaces <cit.>.
A Grassmann necklace is a sequence I = (I_1 , , I_n) of subsets I_r⊆ [n] such that:
* if i ∈ I_i then I_{i+1} = (I_i∖{i}) ∪{j} for some j ∈ [n],
* if i ∉I_i then I_{i+1} = I_i.
The indices are taken modulo n. In particular, we have |I_1| = ⋯ = |I_n|.
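For a positroid given by its list of bases, the Grassmann necklace can be computed by taking, for each r, the minimal basis in the r-shifted Gale order ≤_r; we use this standard characterization as an assumption of the following sketch (function name ours).

from itertools import combinations

def grassmann_necklace(bases, n):
    # I_r is the minimal basis with respect to the r-shifted (Gale) order <=_r,
    # compared via the sorted shifted positions of the elements.
    def shifted_key(B, r):
        return sorted((x - r) % n for x in B)
    return [min(bases, key=lambda B: shifted_key(B, r)) for r in range(1, n + 1)]

U36 = [frozenset(B) for B in combinations(range(1, 7), 3)]
print([sorted(I) for I in grassmann_necklace(U36, 6)])
# each I_r equals {r, r+1, r+2} taken modulo 6, as expected for U_{3,6}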
There exists a canonical bijection between positroids and Grassmann necklaces. We state the characterization of the cluster of a matroid (Definition <ref>) in terms of weakly separated sets and Grassmann necklaces <cit.>.
A subset 𝒞⊆ℳ is a cluster if it is pairwise weakly separated, has size dim(ℳ) + 1, and contains the Grassmann necklace ℐ of M. Any pairwise
weakly-separated subset of [n]k can be extended to a cluster.
As one of the takeaways in <cit.>, the authors state this result concerning minimal connected positroids and clusters for them
A connected positroid ℳ is minimal if and only if the associated reduced plabic graph G(C) is a tree, for
some cluster 𝒞 of ℳ. In this case, ℳ has a unique cluster 𝒞⊆ℳ.
We already know that, among lattice path matroids, snakes are the minimal matroids. Hence, by Lemma <ref>, we obtain a unique cluster in this case. We explain this with one of our running examples: the snake decomposition of 𝒰_3,6 shown in Figure <ref>. We obtain the cluster 𝒞_1 for the snake M_1
𝒞_1 = {123,234,134,124,125,126}
It is easy to verify that 𝒞_1 is pairwise weakly separated, contains the Grassmann necklace of ℳ_1 and has cardinality dim(ℳ_1) + 1 = 5 + 1 = 6. Likewise, we obtain unique clusters for all the snakes. The corresponding graphs for these snakes are described in Figure <ref>.
We conclude with another interesting observation. Both of the split snake matroids (Definition <ref>) contain exactly k(n-k)+1 bases. This is exactly the cardinality of a maximal weakly separated collection, that is, a maximal collection of pairwise weakly separated elements inside the matroid ℳ; the bound on its cardinality was famously conjectured by Leclerc and Zelevinsky and proven in <cit.>. However, the bases of a split snake are not all pairwise weakly separated, so they are not examples of maximal weakly separated collections. For example, in the snake decomposition of 𝒰_3,6, the split snake M_1 has the bases 124 and 135, which are not weakly separated; this can be checked with the weak-separation test sketched below.
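These weak-separation checks can be automated with the cyclic criterion (the elements of I∖J and J∖I must not alternate around the cycle 1, …, n); a minimal sketch, with the cluster 𝒞_1 hard-coded from the example above:

from itertools import combinations

def weakly_separated(I, J, n):
    # I and J are weakly separated iff the labels of I\J (as +1) and J\I (as -1),
    # read cyclically around 1..n, change sign at most twice.
    A, B = set(I) - set(J), set(J) - set(I)
    labels = [1 if x in A else -1 for x in range(1, n + 1) if x in A or x in B]
    changes = sum(labels[i] != labels[i - 1] for i in range(len(labels)))
    return changes <= 2

C1 = [{1,2,3}, {2,3,4}, {1,3,4}, {1,2,4}, {1,2,5}, {1,2,6}]
assert all(weakly_separated(I, J, 6) for I, J in combinations(C1, 2))
assert not weakly_separated({1,2,4}, {1,3,5}, 6)   # the pair 124, 135 above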
§ FUTURE PERSPECTIVES
We utilize this section to condense our discussion and to highlight the takeaways from our results. We also point to subsequent questions which arise from our work.
Firstly, we want to mention a recent work concerning lattice path matroid decompositions into snakes and alcoved triangulations <cit.>, in which the authors prove results based on the snake decomposition of an LPM and also discuss the Ehrhart theory of LPM. They prove that the alcoved triangulation of an alcoved polytope is regular. We observe that our discussion of lattice path matroidal subdivisions being regular generalizes this result for LPM. We point the reader to Figure 1 in <cit.> for the context of where LPM lie with respect to other well-known families of matroids.
We also want to point the reader to <cit.>, where the authors show that there exist finest matroid subdivisions of matroid polytopes that do not contain matroid polytopes of indecomposable matroids as maximal cells. Hence, it might be worthwhile to ask for which other families of matroids, apart from positroids, a result like Corollary <ref> can be obtained; for example, the class of transversal matroids might be a good candidate. Also, a natural generalization of this question would be to consider the finest subdivisions of Dressians for arbitrary matroids, and not necessarily the hypersimplex, and see if we can recover some of these results.
With the introduction of the LPMfan, there are many questions pertaining to its structure that might interest readers for further research. Some interesting queries could be: whether there exists a bound on the number of LPM splits and how it behaves with respect to the Dressian, the computation of the dimension of the LPMfan, etc. We believe there is much more to analyze about the LPMfan, and we aim to pursue this in future work.
We also acknowledge, via <cit.>, recursive relations between LPMs defined by quotients and direct sums. These could be very useful for understanding LPM subdivisions of larger LPM polytopes, and we wish to employ such techniques for computing the LPMfan recursively. Additionally, it would be interesting to determine the specific Plücker relations satisfied by points in the Dressian corresponding to LPM subdivisions. We already know that they satisfy the positive Plücker relations, owing to the fact that LPM are positroids, but this might be refined further by analyzing the forbidden minors for a matroid to be an LPM, classified in <cit.>.
One of our future goals is also to find an equivalent of Theorem <ref> for LPM dissections. Also, in <cit.>, the authors provide a characterization of the positroid polytope in the form of the following statement
Let M be a matroid of rank k, and consider the
matroid polytope P_M. It is a positroid polytope if and only if all of its two-dimensional faces are positroid polytopes.
We are able to obtain a one-way implication similar to this in the case of LPM polytopes as follows,
The faces of a lattice path matroid polytope P_M[P,Q] are also lattice path matroid polytopes.
It is clear that it is sufficient to prove the claim for the facets of an LPM polytope P_M[P,Q]. We utilize the characterization of facets of matroid polytopes described in <cit.>, which says that facets of a matroid polytope are either induced by hypersimplex facets or by hypersimplex splits. If the facet is induced by a hypersimplex facet, we know that these correspond to matroidal deletions and contractions, and LPM are closed under these operations <cit.>. Hence, the facet of an LPM polytope is again an LPM polytope in this case. Alternatively, if the facet is induced by a hypersimplex split, we know that it is induced by an F-hyperplane <cit.>, where F is a flat of the LPM M[P,Q] such that 0 < rank(F) < #F, in which case the facet can be described as P_M[P,Q](F) = P_(M[P,Q] | F ⊕ M[P,Q]/ F). Since LPM are also closed under direct sums and restrictions <cit.>, this facet is again an LPM polytope.
We do highlight the fact that a characterization of snake polytopes does exist; it states that snake polytopes are unimodularly equivalent to order polytopes of zig-zag posets <cit.>. Additionally, the facial structure of LPM has also been classified in terms of certain sets of deletions, contractions and direct sums in <cit.>.
Lemma <ref> also appears as a result in <cit.>; however, the argument there appears incomplete, since only hypersimplex facets are considered in the proof and not the facets induced via hyperplane splits.
Another important connection to our results which we want to highlight is the work of Fink and Rincon on Stiefel tropical linear spaces <cit.>. For the uninitiated, the Stiefel map assigns to a k × n matrix over a field 𝕂 an element of the Grassmannian Gr(k,n). The authors study the tropicalization of this map and also study the properties of its image, called the Stiefel image, inside the tropical Grassmannian. The authors in <cit.> relate the points inside the Stiefel image to the class of regular transversal matroid subdivisions, which, as the name suggests, is the class of regular matroidal subdivisions where each maximal cell corresponds to a transversal matroid. Since LPM are transversal, LPM subdivisions are also transversal matroid subdivisions. Additionally, we obtain the following corollary as a direct consequence of <cit.>.
Let L be the tropical linear space dual to a LPM subdivision. Then L lies in the corresponding Stiefel image.
In <cit.> a facet description for transversal matroid polytopes is provided, and <cit.> provides a partial characterization of transversal matroids in terms of their facets. Based on these results, we propose the following question.
Let P_M be the matroid polytope of a matroid M such that all of its faces are LPM polytopes. Does this imply that M is also an LPM?
We observe that an affirmative answer, along with Lemma <ref>, would provide a full characterization of LPM polytopes in terms of their faces. Also, we already know from prior results that, under the assumptions of the question, M is both transversal <cit.> and a positroid <cit.>. Hence, it is also worthwhile to inquire about the ways in which the three classes of matroids, namely transversal matroids, lattice path matroids and positroids, interact. A subsequent study of the relations between Stiefel tropical linear spaces and LPM subdivisions will be explored elsewhere.
We recall that a matroidal subdivision is completely determined by its 3-skeleton <cit.>. In recent work <cit.>, the authors introduce the class of permutahedral subdivisions, i.e., polyhedral subdivisions of generalized permutahedra into cells that are generalized permutahedra. They also show that the 2-skeleton of a permutahedral subdivision does not completely determine the subdivision. Against the background of these results, we would like to understand how the class of LPM subdivisions introduced in this paper behaves, and possibly find a criterion which completely determines an LPM subdivision.
We also comment on the location of the positroid cells corresponding to LPM in the stratification of the positive Grassmannian. We consider two well-known families of cells in the positive Grassmannian <cit.>
A positroid cell Π is called a Schubert cell if a generic point U ∈Π gives rise to a representable matroid ℳ_I = ([n], ℬ) where B ∈ℬ if and only if I <_1 B, where <_1 is the usual total order on [n].
A positroid cell Π is called a Richardson cell if a generic point U ∈Π gives rise to a representable matroid ℳ_I^J = ([n], ℬ) where B ∈ℬ if and only if I <_1 B <_1 J, where <_1 is the usual total order on [n].
Schubert matroids correspond to Schubert cells and lattice path matroids correspond to Richardson cells. We wish to understand these Richardson cells in depth, given the context of lattice path matroids and in the light of questions from algebraic geometry concerning positroid and Richardson varieties, as mentioned in <cit.>. We are currently working on a sequel to this work in the context of the new definition of lattice path flag matroids <cit.>, looking at equivalent questions in the realm of flag matroids, along with the flag matroid analogue of the Dressian, i.e., the flag Dressian <cit.>, and the associated tropical flag variety <cit.>.
Our results about the amplituhedron have two facets. Firstly, we provide a matroidal treatment of the well-known BCFW-style recurrence relations for positroidal dissections of the hypersimplex. For the m=2 amplituhedron, via the T-duality described in <cit.>, these dissections correspond to dissections of the amplituhedron in terms of Grasstopes <cit.>. However, not much is known about the relations between triangulations of the amplituhedron and dissections of the hypersimplex in the case of the m=4 amplituhedron. We provide a first counterpart of positroid dissections of the hypersimplex for BCFW triangulations of 𝒜_n,k,4. We wish to explore the possibility of equivalent notions of T-duality for the m=4 amplituhedron as well. We also wish to examine connections between LPM's and combinatorial objects other than the ones discussed here, for example chord diagrams and domino bases described in <cit.>. We also point the reader to recent work on weakly separated collections and matroidal subdivisions <cit.>, which relates to some of our observations and is an interesting avenue for further exploration.
|
http://arxiv.org/abs/2307.04386v1 | 20230710074506 | Counterfactual Explanation for Fairness in Recommendation | [
"Xiangmeng Wang",
"Qian Li",
"Dianer Yu",
"Qing Li",
"Guandong Xu"
] | cs.IR | [
"cs.IR"
] |
Equal contribution.
[email protected]
0000-0003-3643-3353
Data Science and Machine Intelligence Lab, University of Technology Sydney
Sydney
Australia
[email protected]
0000-0002-8308-9551
School of Electrical Engineering Computing and Mathematical Sciences, Curtin University
Perth
Australia
[email protected]
0000-0001-6376-9667
Data Science and Machine Intelligence Lab, University of Technology Sydney
Sydney
Australia
[email protected]
0000-0003-3370-471X
Hong Kong Polytechnic University
Hong Kong
Corresponding author: [email protected]
[email protected]
0000-0003-4493-6663
Data Science and Machine Intelligence Lab, University of Technology Sydney
Sydney
Australia
Fairness-aware recommendation eliminates discrimination issues to build trustworthy recommendation systems.
Explaining the causes of unfair recommendations is critical, as it promotes fairness diagnostics, and thus secures users' trust in recommendation models.
Existing fairness explanation methods suffer high computation burdens due to the large-scale search space and the greedy nature of the explanation search process.
Besides, they perform score-based optimizations with continuous values, which are not applicable to discrete attributes such as gender and race.
In this work, we adopt the novel paradigm of counterfactual explanation from causal inference to explore how minimal alterations in explanations change model fairness, to abandon the greedy search for explanations.
We use real-world attributes from Heterogeneous Information Networks (HINs) to empower counterfactual reasoning on discrete attributes.
We propose a novel Counterfactual Explanation for Fairness (CFairER) that generates attribute-level counterfactual explanations from HINs for recommendation fairness.
Our CFairER conducts off-policy reinforcement learning to seek high-quality counterfactual explanations, with an attentive action pruning reducing the search space of candidate counterfactuals.
The counterfactual explanations help to provide rational and proximate explanations for model fairness, while the attentive action pruning narrows the search space of attributes.
Extensive experiments demonstrate our proposed model can generate faithful explanations while maintaining favorable recommendation performance.
We release our code at <https://anonymous.4open.science/r/CFairER-anony/>.
[500]Computing methodologies Causal reasoning and diagnostics
[500]Computing methodologies Reinforcement learning
[500]Information systems Personalization
Counterfactual Explanation for Fairness in Recommendation
Guandong Xu
August 12, 2023
=========================================================
§ INTRODUCTION
Recommendation systems (RS), as information filtering tools, have become a core component of online services, e.g., e-commerce <cit.>.
They help users discover their preferred items and enable content providers to profit from item exposure.
Despite these huge benefits, fairness issues, i.e., unfair allocations (exposures) of recommended items <cit.> caused by, e.g., gender discrimination, have attracted increasing attention in RS.
Fairness-aware recommendation <cit.> has emerged as a promising solution to prevent unintended discrimination and unfairness in RS.
It aims to find feasible algorithmic approaches that reduce the fairness disparity of recommendation results.
Explaining why fairness disparity appears, i.e., what causes unfair recommendation results, would enhance the design of fairness-aware recommendation approaches by promoting model transparency and tracking unfair factors.
There are a few fairness explanation studies in the literature, which are mainly categorized as feature-based and aspect-based methods.
Feature-based methods estimate the contribution scores of numerical features that impact model fairness.
For instance, Begley et al. <cit.> explore fairness explanations based on Shapley value estimation for the classification task.
They calculate the Shapley value of every input feature to reflect its significance and then generate explanations based on the calculated values.
However, this method is not applicable to deep recommendation models (e.g., neural networks <cit.>), as the high complexity of Shapley value estimation becomes a major burden when input features are high-dimensional and sparse.
Another branch of aspect-based methods mainly perturbs user/item aspect scores and optimizes an explanation model to find perturbed aspects that affect the model fairness as explanations.
For example, Ge et al. <cit.> perturb aspect scores within pre-defined user-aspect and item-aspect matrices and feed the perturbed matrices into a recommendation model.
Those perturbed aspects that alter the fairness disparity of the recommendation model are considered aspect-based explanations.
However, the perturbation space grows exponentially as the number of aspects increases, resulting in a large-scale search space to seek explanations.
The above fairness explanation methods suffer below issues:
1) These feature/aspect-based methods usually incur high computational costs due to the high dimensionality of search space and ultimately result in sub-optimal explanations.
Besides, these methods suffer from the greedy nature of their explanation search process.
They optimize explanation models using greedy feature/aspect scores as significance criteria and select the top features/aspects as explanations, which risks introducing pseudo-explanations.
2) These score-based optimizations can only deal with continuous attributes and thus are not well-suited for handling discrete attributes.
For example, assigning a continuous value, such as gender=0.19, to the discrete gender attribute is impractical in constructing explanations and provides no valuable clue to improve the explanation.
Worse still, discrete attributes are frequently used in real-world recommendation models, as user and item profiles for training models are often generated through data tagging <cit.> on discrete attributes.
For instance, movie recommendations <cit.> usually rely on movies tagged with discrete attributes such as genre, language, and release location.
Consequently, score-based optimizations have limited capability in handling discrete attributes that are frequently encountered in recommendation scenarios.
Unlike previous works, we resort to counterfactual explanations <cit.> derived from causal inference to tackle the above issues.
Counterfactual explanations address the fundamental question: what the model fairness would be if a minimal set of factors (e.g., user/item features) had been different <cit.>.
In other words, they provide “what-if” explanations to determine the most vital and essential (i.e., minimal) factors that change model fairness.
Unlike existing feature/aspect-based methods with greedy explanations, counterfactual explanations have the advantage of always being minimal w.r.t. the generated explanations and are faithful to model fairness changes.
Moreover, we leverage real-world attributes from Heterogeneous Information Networks (HINs) <cit.>, for counterfactual reasoning when dealing with discrete attributes.
In contrast to value-based features and aspects, real-world attributes residing in HINs are presented as discrete nodes, with edges representing their connections.
By utilizing attributes from HINs, we can overcome the limitation of score-based optimizations and directly measure whether the removal of specific attributes changes the model's fairness.
Following the above intuition, we propose to generate attribute-level counterfactual explanations for fairness from a given HIN.
We posit a novel definition of counterfactual explanation for fairness - a minimal set of attributes from the HIN that changes model fairness disparity.
We use a toy example in Figure <ref> to illustrate our idea.
Given a recommendation i_1 for the user u_1 and an external HIN carrying their attributes, we want to know why i_1 causes discrimination in recommendation results.
The counterfactual explanation performs “what-if” reasoning by altering the attributes of u_1 and i_1 and checking the fairness of the recommendation results.
Both E_1 and E_2 are valid candidate explanations since they alter fairness disparities of recommendations (i.e., i_2, i_3) from 0.90 to 0.19.
To determine which attributes are the primary reason for unfairness, the counterfactual explanation will uncover the minimal attribute changes, i.e., E_2, instead of utilizing attribute combinations in E_1.
Thus, we could infer E_2 is the most vital reason for model unfairness.
Besides, since a counterfactual explanation E_2 is minimal, it only reveals the essential attributes (i.e., “Female”) that effectively explain unfairness, while discarding the irrelevant (i.e., pseudo) explanations, i.e., “U.S” and “Discount” in E_1.
We therefore propose a novel Counterfactual Explanation for Fairness (CFairER) within an off-policy reinforcement learning environment to find optimal attribute-level counterfactual explanations.
Particularly, we focus on generating attribute-level counterfactual explanations for item exposure unfairness to promote the fair allocation of user-preferred but less exposed items.
Note that the proposed approach is general and can be utilized in different recommendation scenarios that involve different fairness definitions.
Specifically, we use a reinforcement learning agent in CFairER to optimize a fairness explanation policy by uniformly exploring candidate counterfactuals from a given HIN.
We also devise attentive action pruning over the HIN to reduce the search space of reinforcement learning.
Finally, our CFairER optimizes the explanation policy using an unbiased counterfactual risk minimization objective, resulting in accurate attribute-level counterfactual explanations for fairness.
The contributions of this work are:
* We make the first attempt to leverage rich attributes in a Heterogeneous Information Network to offer attribute-level counterfactual explanations for recommendation fairness.
* We propose an off-policy learning framework to identify optimal counterfactual explanations,
which is guided by an attentive action pruning to reduce the search space.
* We devise a counterfactual risk minimization for off-policy correction, so as to achieve unbiased policy optimization.
* Comprehensive experiments show the superiority of our method in generating trustworthy explanations for fairness while preserving satisfactory recommendation performance.
§ RELATED WORK
§.§ Fairness Explanation for Recommendation
Recommender systems have long dealt with major concerns of recommendation unfairness, which profoundly harm user satisfaction <cit.> and stakeholder benefits <cit.>.
Recent works on fairness-aware recommendation mainly discuss two primary topics, i.e., user-side fairness <cit.> and item-side fairness <cit.>.
User-side fairness concerns whether the recommendation is fair to different users/user groups, e.g., retaining equivalent accuracy or recommendation explainability.
Relevant approaches attribute the causes of user-side unfairness to discrimination factors, such as sensitive features (e.g., gender <cit.>, race <cit.>) and user inactiveness <cit.>, etc.
They mainly propose fairness metrics to constraint recommendation models (e.g., collaborative filtering <cit.>) to produce fair recommendations.
For example, Yao et al. <cit.> study the unfairness of collaborative filtering (CF)-based recommenders on gender-imbalanced data.
They propose four metrics to assess different types of fairness, then add these metrics as constraints to the CF model learning objective to produce fair recommendations.
Li et al. <cit.> investigate the unfair recommendation between active and inactive user groups, and provide a re-ranking approach to mitigate the activity unfairness by adding constraints over evaluation metrics of ranking.
As modern content providers are more concerned about user privacy, it is generally not easy to access sensitive user features for the recommendation <cit.>.
Meanwhile, users often prefer not to disclose personal information that raises discrimination <cit.>.
Thus, another topic of item-side fairness-aware recommendation <cit.> is interested in examining whether the recommendation treats items fairly, e.g., similar ranking prediction errors for different items, fair allocations of exposure to each item.
For instance,
Abdollahpouri et al. <cit.> address item exposure unfairness in learning-to-rank (LTR) recommenders.
They include a fairness regularization term in the LTR objective function, which controls the recommendations favored toward popular items.
Ge et al. <cit.> consider the dynamic fairness of item exposure due to changing group labels of items.
They calculate the item exposure unfairness with a fairness-related cost function.
The cost function is merged into a Markov Decision Process to capture the dynamic item exposure for recommendations.
Liu et al. <cit.> focus on item exposure unfairness in interactive recommender systems (IRS).
They propose a reinforcement learning method to maintain a long-term balance between accuracy and exposure fairness in IRS.
Despite the great efforts, fairness-aware recommendations mitigate user and item unfairness in a black-box manner but do not explain why the unfairness appears.
Understanding the “why” is desirable for both model transparency <cit.> and facilitates data curation to remove unfair factors <cit.>.
Limited pioneering studies are conducted to explain fairness.
Begley et al. <cit.> estimate Shapley values of input features to search which features contribute more to the model unfairness.
Ge et al. <cit.> develop an explainable fairness model for recommendation to explain which item aspects influence item exposure fairness.
They perform perturbations on item aspect scores, then apply perturbed aspect scores on two pre-defined matrices to observe fairness changes.
These prior efforts suffer from major limitations:
1) The high computational burden caused by the large-scale search space and the greedy nature of the explanation search process.
2) They generate explanations by feature <cit.> or aspect <cit.> scores, which do not apply to discrete attributes such as gender and race.
Our work conducts counterfactual reasoning to seek minimal sets of attributes as explanations.
We also reduce the large search space by attentive action pruning in the off-policy learning environment.
Meanwhile, we consider explaining recommendation unfairness based on attributes from a Heterogeneous Information Network, which is expected to be wildly applicable.
§.§ Heterogeneous Information Network in Recommendation
Heterogeneous Information Network (HIN) is a powerful structure that allows for the heterogeneity of its recorded data, i.e., various types of attributes, thus providing rich information to empower recommendations <cit.>.
HINs have been wildly adopted in recommendation models to boost performance;
representative works cover context-based filtering (e.g., SemRec <cit.>, HERec <cit.>) and knowledge-based systems (e.g., MCrec <cit.>, HAN <cit.>).
For instance, HERec <cit.> embeds meta-paths within a HIN as dense vectors, then fuses these HIN embeddings with user and item embeddings to augment the semantic information for recommendations.
MCrec <cit.> leverages a deep neural network to model meta-path-based contextual embeddings and propagates the context to user and item representations with a co-attention mechanism.
Those recommendation models observe promising improvements by augmenting contextual and semantic information given by HINs.
Despite the great efforts, prior works do not consider using the HIN to explain unfair factors in recommendations.
Novel to this work, we first attempt to leverage rich attributes in a HIN to provide counterfactual explanations for item exposure fairness.
§.§ Counterfactual Explanation
Counterfactual explanations have been considered as satisfactory explanations <cit.> and elicit causal reasoning in humans <cit.>.
Works on counterfactual explanations have been proposed very recently to improve the explainability of recommendations.
Xiong et al. <cit.> propose a constrained feature perturbation on item features and consider the perturbed item features as explanations for ranking results.
Ghazimatin et al. <cit.> perform random walks over a Heterogeneous Information Network to look for minimal sets of user action edges (e.g., click) that change the PageRank scores.
Tran et al. <cit.> identify minimal sets of user actions that update the parameters of neural models.
Our work differs from prior works on counterfactual explanations by two key points:
1) In terms of problem definition, they generate counterfactual explanations to explain user behaviors (e.g., click <cit.> ) or recommendation (e.g., ranking <cit.>) results.
Our method generates counterfactual explanations to explain which attributes affect recommendation fairness.
2) In terms of technique, our method formulates counterfactual reasoning as reinforcement learning, which can deal with ever-changing item exposure unfairness.
§ PRELIMINARY
We first introduce the Heterogeneous Information Network that offers real-world attributes for fairness explanation learning.
We then give the key terminologies, including fairness disparity evaluation and counterfactual explanation for fairness.
§.§ Heterogeneous Information Network
Creating fairness explanations requires auxiliary attributes containing possible factors (e.g., user gender) that affect recommendation fairness (cf. Figure <ref>).
Heterogeneous Information Network (HIN) has shown its power in modeling various types of attributes, e.g., user social relations, item brand.
In particular, suppose we have the logged data that records users’ historical behaviors (e.g., clicks) in the recommendation scenario.
Let 𝒰∈ℝ^M, ℐ∈ℝ^N denote the sets of users and items, respectively.
We can define a user-item interaction matrix Y={y_uv| u ∈𝒰, v ∈ℐ} according to the logged data.
We also have additional attributes from external resources that profile users and items, e.g., users' genders, items' genres.
The connections between all attributes and users/items are absorbed in the relation set ℰ.
Those attributes, with their connections with user-item interactions, are uniformly formulated as a HIN.
Formally, a HIN is defined as 𝒢=(𝒱^',ℰ^'), where 𝒱^'=𝒰∪ℐ∪𝒱_U ∪𝒱_I, and ℰ^'= {𝕀(y_uv)}∪ℰ.
𝕀(·) is an edge indicator that denotes the observed edge between user u and item v when y_uv∈Y=1.
𝒱_U and 𝒱_I are attribute sets for users and items, respectively.
Each node n ∈𝒱^' and each edge e ∈ℰ^' are mapped into specific types through node type mapping function: ϕ: 𝒱^'→𝒦 and edge type mapping function: ψ: ℰ^'→𝒥.
𝒢 maintains heterogeneity, i.e., |𝒦|+|𝒥| > 2.
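To make the HIN construction concrete, the following minimal Python sketch stores the typed nodes and edges as adjacency lists; the concrete node and edge type names used here are illustrative assumptions, not the schemas of any particular dataset.

# A tiny illustrative sketch of a HIN G = (V', E') as typed adjacency lists.
# The concrete node/edge type names below are assumptions for readability.
from collections import defaultdict

hin = defaultdict(list)                # node -> list of (edge_type, neighbour) pairs

def add_edge(u, v, edge_type):
    """Add an undirected typed edge between two typed nodes."""
    hin[u].append((edge_type, v))
    hin[v].append((edge_type, u))

add_edge(("user", 1), ("item", 7), "interacts")          # observed y_uv = 1
add_edge(("user", 1), ("attr", "female"), "has_gender")  # user attribute in V_U
add_edge(("item", 7), ("attr", "discount"), "has_promo") # item attribute in V_I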
§.§ Fairness Disparity
We consider explaining the item exposure (un)fairness in recommendations.
We first split items in historical user-item interactions into head-tailed (i.e., popular) group G_0 the long-tailed group G_1 [Following <cit.>, we consider the top 20% items with the most frequent interactions with users as G_0, while the remaining 80% belongs to G_1.].
Following previous works <cit.>, we use demographic parity (DP) and exact-K (EK) defined on item subgroups to measure whether a recommendation result is fair.
In particular, DP requires that each item has the same likelihood of being classified into G_0 and G_1.
EK regulates the item exposure across each subgroup to remain statistically indistinguishable from a given maximum α.
By evaluating the deviation of recommendation results from the two fairness criteria, we can calculate the fairness disparity, i.e., to what extent the recommendation model is unfair.
Formally, given a recommendation result H_u, K, the fairness disparity Δ(H_u, K) of H_u, K is:
Δ(H_u, K)=|Ψ_DP|+λ|Ψ_EK|,
Ψ_DP=|G_1| · Exposure(G_0| H_u, K)-|G_0| · Exposure(G_1| H_u, K),
Ψ_EK= α· Exposure(G_0| H_u, K)- Exposure(G_1| H_u, K)
where Δ(·) is the fairness disparity metric that quantifies model fairness status.
λ is the trade-off parameter between DP and EK.
Exposure(G_j| H_u, K) is the item exposure number of H_u, K within G_j w.r.t. j ∈{0,1}.
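As a reading aid, the snippet below sketches how this disparity could be computed for a single Top-K list; the 20% head/tail cut, the lam and alpha defaults, and the set-based exposure counting are illustrative assumptions rather than the paper's implementation.

# Hedged sketch of the fairness-disparity computation above; the 20% head/tail cut,
# lam, alpha, and set-based exposure counting are illustrative assumptions.
def exposure(rec_list, group):
    """Exposure(G_j | H_{u,K}): number of recommended items that fall in group G_j."""
    return sum(1 for v in rec_list if v in group)

def fairness_disparity(rec_list, head_group, tail_group, lam=0.5, alpha=0.2):
    exp_head = exposure(rec_list, head_group)      # Exposure(G_0 | H_{u,K})
    exp_tail = exposure(rec_list, tail_group)      # Exposure(G_1 | H_{u,K})
    psi_dp = len(tail_group) * exp_head - len(head_group) * exp_tail
    psi_ek = alpha * exp_head - exp_tail
    return abs(psi_dp) + lam * abs(psi_ek)

def split_groups(items_by_popularity):
    """Top 20% most-interacted items form the head group G_0, the rest G_1."""
    cut = max(1, int(0.2 * len(items_by_popularity)))
    return set(items_by_popularity[:cut]), set(items_by_popularity[cut:])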
§.§ Counterfactual Explanation for Fairness
This work aims to generate attribute-level counterfactual explanations for item exposure fairness.
In particular, we aim to find the “minimal” changes in attributes that reduce the fairness disparity (cf. Eq. (<ref>)) of item exposure.
Formally, given historical user-item interaction Y={y_uv| u ∈𝒰, v ∈ℐ}, and user attribute set 𝒱_U and item attribute set 𝒱_I extracted from an external Heterogeneous Information Network (HIN) 𝒢=(𝒱^',ℰ^').
Suppose there exists a recommendation model that produces the recommendation result H_u, K for user u.
Given all user-item pairs (u,v) in H_u, K,
our goal is to find a minimal attribute set 𝒱^*⊆{{e_u, e_v}| (u, e_u), (v, e_v) ∈ℰ^', e_u ∈𝒱_U, e_v ∈𝒱_I}.
Each attribute in 𝒱^* is an attribute entity from HIN 𝒢, e.g., user's gender, item's genre.
With a minimal set of 𝒱^*, the counterfactual reasoning pursues to answer: what the fairness disparity would be, if 𝒱^* is applied to the recommendation model.
𝒱^* is recognized as a valid counterfactual explanation for fairness, if after applied 𝒱^*, the fairness disparity of the intervened recommendation result Δ(H_u, K^cf) reduced compared with original Δ(H_u, K).
In addition, 𝒱^* is minimal such that there is no smaller set 𝒱^*^'∈𝒢 satisfying |𝒱^*^'| < |𝒱^*| when 𝒱^*^' is also valid.
§ THE CFAIRER FRAMEWORK
We now introduce the framework of our Counterfactual Explanation for Fairness (CFairER).
As shown in Figure <ref>, CFairER devises three major components:
1) graph representation module embeds users, items, and attributes among HIN as embedding vectors;
2) recommendation model learns user and item latent factors to produce recommendation results and
3) our proposed counterfactual fairness explanation (CFE) model assisted by the graph representation module and the recommendation model to conduct counterfactual reasoning.
This section discusses how the CFE model collaborates with the other two components, then introduces the graph representation module and the recommendation model.
We will elaborate on our proposed CFE model in the next section.
§.§ Counterfactual Fairness Explanation Model
As shown in Figure <ref>, our CFE model is crafted within an off-policy learning environment, in which an explanation policy π_E is optimized to produce attribute-level counterfactual explanations for fairness.
At each state s_t, π_E produces actions a_t absorbing user and item attributes as potential counterfactual explanations.
These actions are committed to the recommendation model and graph representation module to produce the reward r(s_t, a_t) for optimizing π_E.
Specifically, the graph representation module provides dense vectors 𝐡_u, 𝐡_v, 𝐞_u and 𝐞_v as user, item, user attribute and item attribute embeddings, respectively.
Those embeddings are used in the state representation learning (i.e., learn s_t) and attentive action pruning (i.e., select a_t) in our CFE model.
Moreover, the attribute embeddings are fused with user or item latent factors learned by the recommendation model to explore the model fairness change.
In particular, the fused embeddings of users and items are used to predict the intervened recommendation result H_u, K^cf.
By comparing the fairness disparity (cf. Eq. (<ref>)) difference between H_u, K^cf and the original recommendation H_u, K, we determine the reward r(s_t, a_t) to optimize π_E, accordingly.
The reward r(s_t, a_t) measures whether the current attribute (i.e., action) is a feasible fairness explanation responsible for the fairness change.
Finally, π_E is optimized with a counterfactual risk minimization (CRM) objective ∇_ΘR(π_E) to balance the distribution discrepancy from the logging policy π_0.
§.§ Graph Representation Module
Our graph representation module conducts heterogeneous graph representation learning to produce dense vectors of users, items, and attributes among the HIN.
Compared with homogeneous graph learning such as GraphSage <cit.>, our graph representation injects both node and edge heterogeneity to preserve the complex structure of the HIN.
In particular, we include two weight matrices to specify varying weights of different node and edge types.
In the following, we present the graph learning for user embedding 𝐡_u.
The embeddings of 𝐡_v, 𝐞_u and 𝐞_v can be obtained analogously by replacing nodes and node types while computations.
Specifically, we first use Multi-OneHot <cit.> to initialize node embeddings at the 0-th layer, in which u's embedding is denoted by 𝐡_u^0.
Then, at each layer l, user embedding 𝐡_u^l is given by aggregating node u's neighbor information w.r.t. different node and edge types:
𝐡_u^l=σ(concat [𝐖_ϕ(u)^lD_p[𝐡_u^l-1], 𝐖_ψ(e)^l/|𝒩_ψ(e)(u)|∑_u^'∈𝒩_ψ(e)(u)D_p[𝐡_u^'^l-1] ]+b^l)
where σ(·) is LeakyReLU <cit.> activation function and concat(·) is the concatenation operator.
D_p[·] is a random dropout with probability p applied to its argument vector.
𝐡_u^l-1 is u's embedding at layer l-1.
𝒩_ψ(e)(u)={u^'|(u, e, u^') ∈𝒢} is a set of nodes connected with user node u through edge type ψ(e).
The additionally dotted two weight matrices, i.e., node-type matrix 𝐖_ϕ(u)^l and edge-type matrix 𝐖_ψ(e)^l, are defined based on the importance of each type ϕ(u) and ψ(e).
b^l is an optional bias.
With Eq (<ref>), we obtain u's embedding 𝐡_u^l at each layer l ∈{1,⋯, L}.
We then adopt layer-aggregation <cit.> to combine u's embeddings from all layers into a single vector, i.e., 𝐡_u=𝐡_u^(1) + ⋯ + 𝐡_u^(L).
Finally, we have user node u's embedding 𝐡_u through aggregation.
The item embedding 𝐡_v, user attribute embedding 𝐞_u and item attribute embedding 𝐞_v can be calculated analogously.
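A hedged PyTorch sketch of one type-aware aggregation layer is given below; indexing the node-type and edge-type weight matrices by integer type ids, and mean-pooling the same-edge-type neighbours before the linear map, are simplifying assumptions about how the type-specific weights above are applied.

# Hedged sketch of one type-aware aggregation layer.  Indexing weight matrices by
# integer node/edge type ids and mean-pooling neighbours are simplifying assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HINLayer(nn.Module):
    def __init__(self, d_in, d_out, n_node_types, n_edge_types, p=0.1):
        super().__init__()
        self.W_node = nn.ModuleList(nn.Linear(d_in, d_out, bias=False)
                                    for _ in range(n_node_types))
        self.W_edge = nn.ModuleList(nn.Linear(d_in, d_out, bias=False)
                                    for _ in range(n_edge_types))
        self.drop = nn.Dropout(p)
        self.bias = nn.Parameter(torch.zeros(2 * d_out))

    def forward(self, h_u, node_type, neigh_h, edge_type):
        """h_u: (d_in,) node embedding; neigh_h: (n_neigh, d_in) same-edge-type neighbours."""
        self_part = self.W_node[node_type](self.drop(h_u))
        neigh_part = self.W_edge[edge_type](self.drop(neigh_h).mean(dim=0))
        return F.leaky_relu(torch.cat([self_part, neigh_part]) + self.bias)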
§.§ Recommendation Model
The recommendation model f_R is initialized using user-item interaction matrix Y to produce the Top-K recommendation result H_u, K for all users.
Here, we employ a linear and simple matrix factorization (MF) <cit.> as the recommendation model f_R.
Particularly, MF initializes IDs of users and items as latent factors, and uses the inner product of user and item latent factors as the predictive function:
f_R(u,v)=U_u^⊤V_v
where U_u and V_v denote d-dimensional latent factors for user u and item v, respectively.
We use the cross-entropy <cit.> loss to define the objective function of the recommendation model:
ℒ_R = -∑_u, v, y_uv∈Y y_uvlog f_R(u,v)+(1-y_uv) log(1-f_R(u,v))
After optimizing the loss function ℒ_R, we can use the trained user and item latent factors (i.e., U, V) to produce the original Top-K recommendation lists H_u, K for all users u ∈𝒰.
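The following sketch illustrates the matrix factorization recommender and its cross-entropy objective in PyTorch; the sigmoid on the inner product (so that the logarithm in the objective is well defined), the embedding size, the user/item counts, and the SGD settings are assumptions added for the example.

# Sketch of the matrix factorization recommender with a binary cross-entropy loss.
# The sigmoid, d=128, the placeholder sizes, and the SGD settings are assumptions.
import torch
import torch.nn as nn

class MF(nn.Module):
    def __init__(self, n_users, n_items, d=128):
        super().__init__()
        self.U = nn.Embedding(n_users, d)        # user latent factors
        self.V = nn.Embedding(n_items, d)        # item latent factors

    def forward(self, u, v):
        # inner product of user and item latent factors, squashed to (0, 1)
        return torch.sigmoid((self.U(u) * self.V(v)).sum(-1))

model = MF(n_users=1000, n_items=5000)           # sizes are placeholders
loss_fn = nn.BCELoss()                           # binary cross-entropy objective
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)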
§ REINFORCEMENT LEARNING FOR COUNTERFACTUAL FAIRNESS EXPLANATION
We put forward our counterfactual fairness explanation (CFE) model (cf. Figure <ref>), assisted by graph representation module and recommendation model, to generate explanation policy π_E for item exposure fairness.
The explanation policy π_E is optimized within off-policy learning to adaptively learn attributes responsible for fairness changes.
In the following, we first introduce off-policy learning for our CFE model.
Then we detail each key element in the off-policy learning and give unbiased policy optimization.
§.§ Explaining as Off-policy Learning
We cast our CFE model in an off-policy learning environment, which is formulated as Markov Decision Process (MDP).
The MDP is provided with a static logged dataset generated by a logging policy π_0 [We adopt the uniform-based logging policy as π_0. It samples attributes as actions from the attribute space with the probability of π_0(a_t | s_t)=1/|𝒱_U+𝒱_I|.].
The logging policy π_0 collects trajectories by uniformly sampling actions from the user and item attribute space.
We use the off-policy learning to optimize an explanation (i.e., target) policy π_E by approximating the counterfactual rewards of state-action pairs from all timestamps, wherein the logging policy π_0 is employed for exploration while the target policy π_E is utilized for decision-making.
In the off-policy setting,
the explanation policy π_E does not require following the original pace of the logging policy π_0.
As a result, π_E is able to explore the counterfactual region, i.e., those actions that haven't been taken by the previous agent using π_0.
Formally, at each timestamp t ∈{1,⋯,T} of MDP, the explanation policy π_E(a_t|s_t) selects an action (i.e., a candidate attribute) a_t ∈𝒜_t conditioning on the user state s_t ∈𝒮, and receives counterfactual reward r(s_t, a_t) ∈ℛ for this particular state-action pair.
Then the current state transits to the next state s_t+1 with transition probability of ℙ(s_t+1| s_t, a_t)∈𝒫.
The MDP has the following key elements:
* 𝒮 is a finite set of states {s_t | t∈ [1,⋯, T]}. Each state s_t is transformed into dense vectors (i.e., embeddings) by our state representation learning (cf. Section <ref>).
* 𝒜_t is a finite set of actions (i.e., attributes) available at s_t. 𝒜_t is select from attributes 𝒱_t ∈𝒢 by our attentive action pruning (cf. Section <ref>) to reduce the search space.
* 𝒫: 𝒮×𝒜→𝒮 is the state transition, which absorbs transition probabilities of the current states to the next states.
Given action a_t at state s_t, the transition to the next state s_t+1 is deterministic, i.e., ℙ(s_t+1| s_t, a_t) = 1.
* ℛ: 𝒮→ℛ is the counterfactual reward measures whether a deployed action (i.e., an attribute) is a valid counterfactual explanation for fairness. ℛ is used to guide the explanation policy learning and is defined in Section <ref>.
We now introduce the implementation of each key component.
§.§.§ State Representation Learning.
The state 𝒮 describes target users and their recommendation lists from the recommendation model.
Formally, at step t, the state s_t for a user u is defined as s_t=(u, H(u,K)), where u ∈𝒰 is a target user and H(u,K) is the recommendation produced by f_R.
The initial state s_0 is (u, v) and v is the interacted item of u, i.e., y_uv∈Y=1.
Our state representation learning maps user state s_t=(u, H(u,K)) into dense vectors for latter explanation policy learning.
Specifically, given s_t that absorbs current user u and its recommendation H(u,K)={v_1,v_2,...,v_K}.
We first acquire the embedding 𝐡_v_k of each item v_k ∈ H(u,K) from our graph representation module.
The state s_t then receives the concatenated item embeddings (i.e., concat[𝐡_v_k|∀ v_k ∈ H(u,K)]) to update its representation.
Considering states within 𝒮 have sequential patterns <cit.>,
we resort to Recurrent Neural Networks (RNN) with a gated recurrent unit (GRU) <cit.> to capture the sequential state trajectory.
We firstly initialize the state representation s_0 with an initial distribution s_0∼ρ_0
[In our experiment, we used a fixed initial state distribution, where s_0 = 0 ∈ℝ^d].
Then we learn state representation s_t through the recurrent cell:
𝐮_t =σ_g(𝐖_1concat[𝐡_v_k|∀ v_k ∈ H(u,K)]+𝐔_1 s_t-1+b_1)
𝐫_t =σ_g(𝐖_2 concat[𝐡_v_k|∀ v_k ∈ H(u,K)]+𝐔_2 s_t-1+b_2)
ŝ_t =σ_h(𝐖_3concat[𝐡_v_k|∀ v_k ∈ H(u,K)]+𝐔_3(𝐫_t· s_t-1)+b_3)
s_t =(1-𝐮_t) · s_t-1+𝐮_t⊙ŝ_t
where 𝐮_t and 𝐫_t denote the update gate and reset gate vector generated by GRU and ⊙ is the element-wise product operator.
𝐖_i, 𝐔_i are weight matrices and b_i is the bias vector.
Finally, s_t serves as the state representation at time step t.
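For concreteness, the recurrent state update can be sketched with a standard GRU cell as below; delegating the gating to nn.GRUCell, the embedding size d, and the list length K are assumptions of this sketch rather than details fixed by the text.

# Sketch of the recurrent state update using a standard GRU cell; d and K are
# assumed values, and delegating the gating to nn.GRUCell is a simplification.
import torch
import torch.nn as nn

d, K = 128, 10
cell = nn.GRUCell(input_size=K * d, hidden_size=d)

s_t = torch.zeros(1, d)                  # fixed initial state s_0 = 0
item_embs = torch.randn(1, K * d)        # concat of the K item embeddings in H(u, K)
s_t = cell(item_embs, s_t)               # updated state representation s_t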
§.§.§ Attentive Action Pruning.
Our attentive action pruning is designed to reduce the action search space by specifying the varying importance of actions for each state.
As a result, the sample efficiency can be largely increased by filtering out irrelevant actions to promote an efficient action search.
In our method, actions are defined as candidate attributes selected from a given HIN that potentially impact the model fairness.
In particular, given state s_t=(u, H(u,K)), we can distill a set of attributes 𝒱_t of the current user u and items v ∈ H(u,K) from the HIN.
Intuitively, we can directly use 𝒱_t as candidate actions for state s_t.
However, the user and item attribute amount of the HIN would be huge, resulting in a large search space that terribly degrades the learning efficiency <cit.>.
Thus, we propose an attentive action pruning based on attention mechanism <cit.> to select important candidate actions for each state.
Formally, given the embedding 𝐞_i for an attribute i ∈𝒱_t from Eq. (<ref>), and the state representation s_t from Eq. (<ref>), the attention score α_i of attribute i is:
α_i=ReLU(𝐖_s s_t+𝐖_h𝐞_i+b)
where 𝐖_s and 𝐖_h are two weight matrices and b is the bias vector.
We then normalize attentive scores of all attributes in 𝒱_t and select attributes with n-top attention scores into 𝒜_t:
𝒜_t={i | i ∈Top-n[exp(α_i)/∑_i^'∈𝒱_texp(α_i^')] and i ∈𝒱_t}
where n is the candidate size.
As a result, our candidate set 𝒜_t is highly sample-efficient, since it filters out irrelevant attributes while dynamically adapting to the user state shift.
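A minimal sketch of the attentive action pruning follows; projecting the state and each attribute embedding to scalar scores with single-output linear layers, and the default candidate size, are assumptions about the shapes of W_s and W_h.

# Minimal sketch of attentive action pruning.  Using single-output linear layers so
# that each attribute gets a scalar attention score is an assumption of this sketch.
import torch
import torch.nn as nn
import torch.nn.functional as F

d, n = 128, 30                          # embedding size and candidate size (assumed)
W_s = nn.Linear(d, 1, bias=False)       # scores the state s_t
W_h = nn.Linear(d, 1, bias=True)        # scores each attribute (its bias plays b)

def prune_actions(s_t, attr_embs):
    """s_t: (1, d) state; attr_embs: (|V_t|, d) attributes linked to the state."""
    scores = F.relu(W_s(s_t) + W_h(attr_embs)).squeeze(-1)   # attention score per attribute
    probs = F.softmax(scores, dim=0)                         # normalised attention
    k = min(n, probs.numel())
    return torch.topk(probs, k=k).indices                    # indices of Top-n candidates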
§.§.§ Counterfactual Reward Definition
The counterfactual reward r(s_t, a_t) ∈ℛ measures whether a deployed action a_t ∈𝒜_t is a valid counterfactual explanation for fairness at the current state s_t.
In particular, the reward is defined based on two criteria:
1) Rationality <cit.>: deploying action (i.e., attribute) a_t should cause the reduction of fairness disparity regarding the item exposure fairness.
The fairness disparity change is measured by the fairness disparity difference between the recommendation result before (i.e., Δ(H_u, K)) and after (i.e., Δ(H_u, K^cf)) fusing the action a_t to the recommendation model f_R, i.e., Δ(H_u, K)- Δ(H_u, K^cf).
2) Proximity <cit.>: a counterfactual explanation is a minimal set of attributes that changes the fairness disparity.
For the Rationality, we fuse the embedding of a_t with user or item latent factors from the recommendation model to learn updated user and item latent vectors, so as to get the Δ(H_u, K^cf).
Specifically, for a state s_t=(u, H(u,K)), the embedding 𝐞_t of action a_t is fused to user latent factor U_u for user u and item latent factors V_v_i for all items v_i ∈ H(u,K) by a element-wise product fusion.
As a result, we can get the updated latent factors U_u^cf and V_v^cf:
U_u^cf ←U_u⊙{𝐞_t|∀ t ∈ [1, ⋯, T]}, if a_t ∈𝒱_U
V_v_i^cf ←V_v_i⊙{𝐞_t|∀ t ∈ [1, ⋯, T]}, if a_t ∈𝒱_I
where ⊙ represents the element-wise product (a.k.a. Hadamard product).
T is the total training iteration.
At the initial state of t=0, user and item latent factors U_u and V_v are learned form Eq (<ref>).
Through Eq. (<ref>), the updated user and item latent vectors are then used to generate the intervened recommendation result H_u, K^cf.
For the Proximity, we compute whether a_t returns a minimal set of attributes that changes the recommendation model fairness.
This is equal to regulating user and item latent factors before (i.e., U_u, V_v) and after (i.e., U_u^cf, V_v^cf) fusing a_t be as similar as possible.
Based on the two criteria, the counterfactual reward can be defined as the following form:
r(s_t, a_t)=
1+dist(U_u, U_u^cf)+dist(V_v, V_v^cf),   if Δ(H_u, K)- Δ(H_u, K^cf) ≥ϵ
dist(U_u, U_u^cf)+dist(V_v, V_v^cf),   otherwise
where dist(·) is the distance metric defined as cosine similarity <cit.>, i.e., dist(a,b)=⟨ a, b⟩/(‖a‖ ‖b‖).
Δ(·) is the fairness disparity evaluation metric defined in Eq.(<ref>).
ϵ is the disparity change threshold that controls the model flexibility.
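The snippet below sketches the element-wise fusion and the two-part counterfactual reward; the disparity values are assumed to come from a function like the earlier disparity sketch, and the epsilon default is illustrative.

# Sketch of the element-wise attribute fusion and the two-part counterfactual reward.
# `disp_before`/`disp_after` are assumed to come from a disparity function such as
# the earlier sketch; the eps default is illustrative.
import torch.nn.functional as F

def fuse(U_u, V_v, e_t, is_user_attr):
    """Fuse the action embedding into the user OR the item latent factors."""
    return (U_u * e_t, V_v) if is_user_attr else (U_u, V_v * e_t)

def counterfactual_reward(U_u, U_u_cf, V_v, V_v_cf, disp_before, disp_after, eps=0.05):
    proximity = (F.cosine_similarity(U_u, U_u_cf, dim=-1).mean()
                 + F.cosine_similarity(V_v, V_v_cf, dim=-1).mean())
    rationality = 1.0 if (disp_before - disp_after) >= eps else 0.0
    return rationality + proximity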
§.§ Unbiased Policy Optimization
Using state s_t ∈𝒮 from Eq. (<ref>), candidate action a_t ∈𝒜_t from Eq. (<ref>), and counterfactual reward r(s_t, a_t) in Eq. (<ref>) for each timestamp t,
the policy optimization seeks the explanation policy π_E that maximizes the expected cumulative reward R(π_E) over total iteration T.
Intuitively, we can directly use the policy gradient calculated on R(π_E) to guide the optimization of π_E.
However, our policy optimization is conducted in the off-policy learning setting, in which π_E holds different distribution from the logging policy π_0.
Directly optimizing R(π_E) would result in a biased policy optimization <cit.> due to the policy distribution discrepancy.
To this end, we additionally apply Counterfactual Risk Minimization (CRM) <cit.> to correct the discrepancy between π_E and π_0.
In particular, CRM employs an Inverse Propensity Scoring (IPS) <cit.> to explicitly estimate the distribution shift between π_E and π_0.
After applying the CRM, we can alleviate the policy distribution bias by calculating the CRM-based expected cumulative reward R(π_E):
R(π_E) = 𝔼_π_E[∑_t=0^Tγ^tπ_E(a_t| s_t)/π_0(a_t| s_t) r(s_t, a_t)]
where π_E(a_t |s_t)/π_0(a_t |s_t) is called the propensity score for balancing the empirical risk estimated from the π_0.
Finally, the policy gradient of the explanation policy learning w.r.t. model parameter Θ is achieved by the REINFORCE <cit.>:
∇_ΘR(π_E)=1/T∑_t=0^Tγ^tπ_E(a_t| s_t)/π_0(a_t| s_t) r(s_t, a_t) ∇_Θlogπ_E(a_t | s_t)
where T is the total training iteration.
By optimizing the Eq. (<ref>), the learned explanation policy π_E generates minimal sets of attributes responsible for item exposure fairness changes, so as to find the true reasons leading to unfair recommendations.
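As a sketch of the unbiased policy optimization, the surrogate loss below applies the propensity-weighted REINFORCE update; treating the importance weight as a detached constant and parameterizing the uniform logging policy by a candidate count are assumptions of this example, not details fixed by the text.

# Sketch of the propensity-weighted REINFORCE update used for off-policy correction.
# Detaching the importance weight and a uniform logging policy 1/n_candidates are
# assumptions of this example.
import torch

def crm_policy_loss(log_probs, rewards, n_candidates, gamma=0.99):
    """log_probs: log pi_E(a_t|s_t) of taken actions; rewards: r(s_t, a_t)."""
    pi_0 = 1.0 / n_candidates                     # uniform logging policy
    loss = torch.zeros(())
    for t, (lp, r) in enumerate(zip(log_probs, rewards)):
        ips = torch.exp(lp).detach() / pi_0       # propensity score pi_E / pi_0
        loss = loss - (gamma ** t) * ips * r * lp # minimising this ascends R(pi_E)
    return loss / max(len(rewards), 1)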
§ EXPERIMENTS
We conduct extensive experiments to evaluate the proposed CFairER for explaining item exposure fairness in recommendations.
We aim to particularly answer the following research questions:
* RQ1. Whether CFairER produces attribute-level explanations that are faithful to explaining recommendation model fairness compared with existing approaches?
* RQ2. Whether explanations provided by CFairER
achieve better fairness-accuracy trade-off than other methods?
* RQ3. Do different components (i.e., attentive action pruning, counterfactual risk minimization-based optimization) help CFairER to achieve better sample efficiency and bias alleviation? How do hyper-parameters impact CFairER?
§.§ Experimental Setup
§.§.§ Datasets
We use logged user behavior data from three datasets [https://www.yelp.com/dataset/], [https://movie.douban.com/] and [https://github.com/librahu/HIN-Datasets-for-Recommendation-and-Network-Embedding] for evaluations.
Each dataset is considered as an independent benchmark for different tasks, i.e., business, movie and music recommendation tasks.
The dataset records user ratings on local businesses and business compliment, category and city profiles.
The is a movie recommendation dataset that contains user group information and movie actor, director and type details.
The contains music listening records of users and artist tags.
The details of the three datasets are given in Table <ref>, which depicts statistics of user-item interactions, user-attribute and item-attribute relations.
All datasets constitute complex user-item interactions and diverse attributes, thus providing rich contextual information for fairness explanation learning.
Following previous works <cit.>, we adopt a 10-core setting, i.e., retaining users and items with at least ten interactions, for all three datasets to ensure data quality.
Meanwhile, we binarize the explicit rating data by interpreting ratings of 4 or higher as positive feedback, otherwise negative.
Then, we sort the interacted items for each user based on the timestamp and split the chronological interaction list into train/test/valid sets with a proportion of 60%/20%/20%.
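A possible preprocessing sketch for the 10-core filtering, rating binarization, and chronological 60%/20%/20% split is shown below; the pandas column names (user, item, rating, timestamp) are assumptions about the raw logs rather than the datasets' actual fields.

# Possible preprocessing sketch: 10-core filtering, binarisation of ratings >= 4,
# and a per-user chronological 60/20/20 split.  Column names are assumptions.
import pandas as pd

def preprocess(df, core=10, pos_threshold=4):
    # expected columns: user, item, rating, timestamp
    while True:
        u_ok = df.groupby("user")["item"].transform("count") >= core
        i_ok = df.groupby("item")["user"].transform("count") >= core
        if (u_ok & i_ok).all():
            break
        df = df[u_ok & i_ok]
    df = df.assign(label=(df["rating"] >= pos_threshold).astype(int))
    df = df.sort_values(["user", "timestamp"])
    train, test, valid = [], [], []
    for _, g in df.groupby("user"):
        n = len(g)
        train.append(g.iloc[: int(0.6 * n)])
        test.append(g.iloc[int(0.6 * n): int(0.8 * n)])
        valid.append(g.iloc[int(0.8 * n):])
    return pd.concat(train), pd.concat(test), pd.concat(valid)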
We also study the long-tail distribution of user-item interactions in the three datasets.
We present the visualization results of the distribution of historical user-item interactions in the three datasets in Figure <ref>.
Analyzing Figure <ref>, we find that the user-item interactions of all three datasets exhibit a skewed distribution: the head-tailed distribution in the blue plot area and the long-tailed distribution in the yellow plot area.
Besides, a small fraction of popular items accounts for most of the user interactions in these datasets.
The skewed distribution would result in serious item exposure unfairness issues in recommendations, such as the well-known filter-bubble problem <cit.> and Matthew effect <cit.>.
§.§.§ Baselines
We adopt three heuristic approaches and two existing fairness-aware explainable recommendation methods as baselines.
In particular,
* RDExp: We randomly select attributes from the attribute space for each user-item interaction and generate explanations based on the selected attributes. Note that the selected attributes can be both user and item attributes.
* PopUser and PopItem: We separately calculate the exposure number of attributes for each user-item interaction, then sort each attribute chronologically based on the exposure number.
We devise a baseline PopUser, in which the top user attributes are selected as explanations. Analogously, we build PopItem that produces the top item attributes for the explanation.
* FairKGAT: uses FairKG4Rec <cit.> to mitigate the unfairness of explanations for a knowledge graph-enhanced recommender KGAT <cit.>.
FairKG4Rec <cit.> is a generalized fairness-aware algorithm that controls the unfairness of explanation diversity in the recommendation model.
KGAT <cit.> is a state-of-the-art knowledge graph-enhanced recommendation model that gives the best fairness performance in the original FairKG4Rec paper.
* CEF <cit.>: is the first work that explains fairness in recommendation.
It generates feature-based explanations for item exposure unfairness by perturbing user and item features and searches for features that change the fairness disparity.
Note that to the best of our knowledge, FairKGAT <cit.> and CEF <cit.> are the only two existing methods designed for explainable fairness recommendation tasks.
§.§.§ Explanation Faithfulness Evaluation
We adopt the widely used erasure-based evaluation criterion <cit.> in Explainable AI to evaluate the explanation faithfulness.
The erasure-based evaluation identifies the contributions of explanations by measuring model performance changes after these explanations are removed.
As a result, one can tell whether the model actually relied on these particular explanations to make a prediction, i.e., faithful to the model.
In our experiments, we use the erasure-based evaluation to test (I) the recommendation performance change and (II) the recommendation fairness change after a set of attributes from the generated explanation is removed.
By doing so, we can identify whether our explanations are faithful to recommendation performance and fairness disparity.
Following <cit.>, we remove certain attributes from the generated explanations and evaluate the resulting recommendation performance.
Therefore, in the starting evaluation point, we consider all attributes and add them to the user and item embeddings.
We then remove certain attributes from the generated explanations to observe recommendation and fairness changes at later evaluation points.
In particular,
we first use historical user-item interactions to train a recommendation model through Eq. (<ref>) to generate user and item embeddings.
Then, we fuse all attribute embeddings from Eq. (<ref>) with the trained user and item embeddings.
The user and item embeddings after fusion are used to generate recommendation results at the starting evaluation point.
Thereafter, we conduct counterfactual reasoning using our CFairER to generate attribute-level counterfactual explanations for model fairness.
Those generated explanations are defined as the erasure set of attributes for each user/item.
Finally, we exclude the erasure set from attribute space, and fuse the embeddings of attributes after erasure with the trained user and item embeddings to generate new recommendation results.
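The erasure-based evaluation can be summarized by the loop below; the fuse and recommend callables stand in for the embedding-fusion and Top-K generation steps described above and are placeholders introduced for this sketch, not functions from the paper.

# Illustrative sketch of the erasure-based evaluation loop.  `fuse` and `recommend`
# are placeholder callables for the embedding-fusion and Top-K steps (assumptions).
def erasure_evaluation(attr_embs, explanations, erase_len, fuse, recommend):
    """Cumulatively erase explained attributes and re-generate recommendations."""
    erased, eval_points = set(), []
    for expl in explanations:                      # one explanation list per user/item
        erased.update(expl[:erase_len])            # erase its top-E attributes
        kept = {a: e for a, e in attr_embs.items() if a not in erased}
        user_emb, item_emb = fuse(kept)            # fuse remaining attribute embeddings
        eval_points.append(recommend(user_emb, item_emb))   # new Top-K lists
    return eval_points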
Given the recommendation results at each evaluation point, we use Normalized Discounted Cumulative Gain (NDCG)@K and Hit Ratio (HR)@K to measure the recommendation performance.
As this work focuses on item exposure fairness in recommendations, we use two wildly-adopted item-side evaluation metrics, i.e., Head-tailed Rate (HT)@K and Gini@K, for fairness evaluation.
HT@K refers to the ratio of the head-tailed item number to the list length K.
Later HT@K indicates that the model suffers from a more severe item exposure disparity by favoring items from the head-tailed (i.e., popular) group.
Gini@K measures inequality within subgroups among the Top-K recommendation list.
Larger Gini@K indicates the recommendation results are of higher inequality between the head-tailed and the long-tailed group.
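For reference, the item-side fairness metrics can be sketched as follows; computing Gini over an exposure-count vector is one common reading, and whether the paper aggregates exposure per item or per subgroup is not specified here, so treat the helper as an assumption.

# Hedged sketch of the item-side fairness metrics.  Whether Gini@K is computed over
# per-item or per-subgroup exposure counts is an assumption of this example.
def head_tail_rate(rec_list, head_items):
    """HT@K: fraction of the Top-K list coming from the head-tailed (popular) group."""
    return sum(1 for v in rec_list if v in head_items) / len(rec_list)

def gini(exposures):
    """Gini index of an exposure-count vector; larger means more unequal exposure."""
    xs = sorted(exposures)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n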
§.§.§ Implementation Details
To demonstrate our CFairER, we employ a simple matrix factorization (MF) as our recommendation model.
We train the MF using train/test/validate sets split from user-item interactions in datasets with 60%/20%/20%.
We optimize the MF using stochastic gradient descent (SGD) <cit.>.
The same data splitting and gradient descent methods are applied in all baselines when required.
Our graph representation module employs two graph convolutional layers with {64, 128} output dimensions.
The FairKGAT baseline also keeps 2 layers.
The graph representation module outputs embeddings for all user and item attributes with the embedding size d=128.
The embedding size for FairKGAT and CEF is also fixed as d=128.
The number of latent factors (as in Eq. (<ref>)) of MF is set equal to the embedding size of our graph representation module.
To generate the starting evaluation point of erasure-based evaluation, we fuse attribute embeddings with the trained user and item latent factors based on element-wise product fusion.
The fused user and item embeddings are then used to produce Top-K recommendation lists.
We train our counterfactual fairness explanation model with SGD based on the REINFORCE <cit.> policy gradient.
For baseline model compatibility, as CEF <cit.> requires pre-defined user-feature attention matrix and item-feature quality matrix, we follow previous work <cit.> to regulate user/item attributes as user/item aspects and resort to analysis toolkit “Sentires” [https://github.com/evison/Sentires] to build the two matrices.
The hyper-parameters of our CFairER and all baselines are chosen by the grid search, including learning rate, L_2 norm regularization, discount factor γ, etc.
The disparity change threshold ϵ in Eq. (<ref>) of our CFairER is determined by performing a grid search on the validation set.
This enables us to choose the optimal value for a variety of recommendation tasks, including but not limited to business ( dataset), movie ( dataset), and music ( dataset) recommendations.
After all models have been trained, we freeze the model parameters and generate explanations accordingly.
We report the erasure-based evaluation results by recursively erasing top E attributes from the generated explanations.
The erasure length E is chosen from E=[5, 10, 15, 20].
The recommendation and fairness performance of our CFairER and baselines under different E is reported in Table <ref>.
§.§ Explanation Faithfulness (RQ1, RQ2)
We plot fairness and recommendation performance changes of our CFairER and baselines while erasing attributes from explanations in Figure <ref>.
Each data point in Figure <ref> is generated by cumulatively erasing a batch of attributes.
Those erased attributes are selected from the top 10 (i.e., E=10) attribute sets of the explanation lists provided by each method.[For example, given n explanation lists, the number of erasure attributes is n × 10. We cumulatively erase m attributes in one batch, over a total of (n × 10) / m iterations.]
As PopUser and PopItem baselines enjoy very similar data trends, we choose not to present them simultaneously in Figure <ref>.
Table <ref> presents recommendation and fairness performance after erasing E = [5, 10, 20] attributes in explanations.
Larger NDCG@K and Hit Ratio @K values indicate better recommendation performance while smaller Head-tailed Rate@K and Gini@K values represent better fairness.
Analyzing Figure <ref> and Table <ref>, we have the following findings.
Amongst all methods, our CFairER achieves the best recommendation and fairness performance after erasing attributes from our explanations on all datasets.
For instance, CFairER beats the strongest baseline CEF by 25.9%, 24.4%, 8.3% and 36.0% for NDCG@40, Hit Ratio@40, Head-tailed Rate@40 and Gini@40 with erasure length E=20 on .
This indicates that explanations generated by CFairER are faithful to explaining unfair factors while not harming recommendation accuracy.
Unlike CEF and FairKGAT, which generate explanations based on perturbing input features and adding fair-related constraints, CFairER generates counterfactual explanations by inferring minimal attributes contributing to fairness changes.
As a counterfactual explanation is minimal, it only discovers attributes that well-explain the model fairness while filtering out tedious ones that affect the recommendation accuracy.
Another interesting finding is that
PopUser and PopItem perform even worse than RDExp (i.e., randomly selecting attributes) on .
This is because recommending items with popular attributes would deprive less-noticeable items of exposure, causing serious model unfairness and degraded recommendation performance.
In general, the fairness of all models consistently improves while erasing attributes from explanations, shown by the decreasing trend of Head-tailed Rate@K values in Figure <ref>.
This is because erasing attributes will alleviate the discrimination against users and items from disadvantaged groups (e.g., gender group, brand group), making more under-represented items to be recommended.
Unfortunately,
we can also observe the downgraded recommendation performance of all models in both Figure <ref> and Table <ref>.
For example, in Figure <ref>, the NDCG@5 of CEF drops from approximately 1.17 to 0.60 on at erasure iteration 0 and 50.
This is due to the well-known fairness-accuracy trade-off issue, in which the fairness constraint could be achieved with a sacrifice of recommendation performance.
Facing this issue, both baselines suffer from huge declines in recommendation performance, as in Table <ref>.
On the contrary, our CFairER still enjoys favorable recommendation performance and outperforms all baselines.
Besides, the decline rates of our CFairER are much slower than baselines on both datasets in Figure <ref>.
We hence conclude that the attribute-level explanations provided by our CFairER can achieve a much better fairness-accuracy trade-off than other methods.
This is because our CFairER uses counterfactual reasoning to generate minimal but vital attributes as explanations for model fairness.
Those attributes produced by CFairER are true reasons for unfairness but not the ones that affect the recommendation accuracy.
§.§ Ablation and Parameter Analysis (RQ3)
We first conduct an in-depth ablation study on the ability of our CFairER to achieve sample efficiency and bias alleviation.
Our CFairER includes two contributing components,
namely, attentive action pruning (cf. Section <ref>) and counterfactual risk minimization-based optimization (cf. Section <ref>).
We evaluate our CFairER with different variant combinations and show our main findings below.
§.§.§ Sample Efficiency of Attentive Action Pruning
Our attentive action pruning reduces the action search space by specifying varying importance of attributes for each state.
As a result, the sample efficiency can be increased by filtering out irrelevant attributes to promote an efficient action search.
To demonstrate our attentive action pruning, we test a variant of CFairER with the attentive action pruning removed, in which the candidate action set absorbs all attributes connected with the current user and items.
Through Table <ref>, we observed that removing the attentive action pruning downgrades CFairER performance, which validates the superiority of our attentive action pruning in improving fair recommendations.
This is because attentive action pruning filters out irrelevant items based on their contributions to the current state, resulting in enhanced sample efficiency.
Moreover, the performance of CFairER after removing the attentive action pruning downgrades severely on .
This is because has the largest number of attributes compared with the other two datasets (cf. Table <ref>), which challenges our CFairER to find suitable attributes as fairness explanations.
These findings suggest the superiority of applying attentive action pruning in fairness explanation learning, especially when the attribute size is large.
§.§.§ Bias Alleviation with Counterfactual Risk Minimization
Our CFairER is optimized with a counterfactual risk minimization (CRM) loss to achieve unbiased policy optimization.
The CRM loss (cf. Eq. (<ref>)) corrects the discrepancy between the explanation policy and logging policy, thus alleviating the policy distribution bias in the off-policy learning setting.
To demonstrate the CRM loss,
we apply our CFairER with cross-entropy (CE) <cit.> loss (i.e., CRM loss → Cross-entropy loss) to show how it performs compared with CFairER on the CRM loss.
We observe our CFairER with CRM loss consistently outperforms the counterpart with CE loss on both fairness and recommendation performance.
The sub-optimal performance of our CFairER with CE loss indicates that the bias issue in the off-policy learning can lead to downgraded performance for the learning agent.
On the contrary, our CFairER takes advantage of CRM to learn a high-quality explanation policy.
We hence conclude that performing unbiased optimization with CRM is critical to achieving favorable fairness explanation learning.
§.§.§ Parameter Analysis
We also conduct a parameter analysis on how erasure length E (cf. Section <ref>) and candidate size n (as in Eq. (<ref>)) impact CFairER.
Figure <ref> (a) and Figure <ref> (b) report CFairER performance w.r.t. E=[5, 10, 15, 20].
Apparently, the performance of CFairER demonstrates decreasing trends from E=5, then becomes stable after E=10.
The decreased performance is due to the increasing erasure of attributes found by our generated explanations.
This indicates that our CFairER can find valid attribute-level explanations that impact fair recommendations.
The performance of CFairER degrades only slightly beyond that point, then becomes stable.
This is reasonable since the attributes number provided in datasets are limited, while increasing the erasure length would allow more overlapping attributes with previous erasures to be found.
By varying candidate size n from n=[10, 20, 30, 40, 50, 60] in Figure <ref> (c) (d),
we observe that CFairER performance first improves drastically as candidate size increases on both datasets.
The performance of our CFairER reaches peaks at n=40 and n=30 on and , respectively.
After the peaks, we can witness a downgraded model performance by increasing the candidate size further.
We attribute the poorer performance of CFairER before reaching the peaks to the limited candidate pool, i.e., insufficient attributes limit the exploration ability of CFairER to find appropriate candidates as fairness explanations.
Meanwhile, a too-large candidate pool (e.g., n=60) would offer more chances for the agent to select inadequate attributes as explanations.
Based on these two findings, we believe it is necessary for our CFairER to perform the attentive action search, which selects high-quality attributes as candidates based on their contributions to the current state.
§.§.§ Time Complexity and Computation Costs
For time complexity, our recommendation model (cf. Section <ref>) performs matrix factorization with a complexity of O(|𝒪|).
For the graph representation module (cf. Section <ref>), establishing node representations has complexity O(∑_l=1^L (|𝒢|+|𝒪^+|) d_l d_l-1).
For the off-policy learning process (cf. Section <ref>), the complexity is mainly determined by the attention score calculation, which has a time complexity of O(2T|𝒪^+| |𝒩̃_e| d^2).
The total time complexity is O(|𝒪|+ ∑_l=1^L(|𝒢|+|𝒪^+|) d_l d_l-1+2T|𝒪^+| |𝒩̃_e| d^2).
We evaluated the running time of FairKGAT and CEF baselines on the large-scale dataset.
The corresponding results are 232s and 379s per epoch, respectively.
CFairER has a comparable cost of 284s per epoch to these baselines. Considering that our CFairER achieves superior explainability improvements compared to the baselines, we believe that the increased cost of, at most, 52s per epoch is a reasonable trade-off.
§ CONCLUSION
We propose CFairER, a reinforcement learning-based fairness explanation learning framework over a HIN.
Our CFairER generates counterfactual explanations as minimal sets of real-world attributes to explain item exposure fairness.
We design a counterfactual fairness explanation model to discover high-quality counterfactual explanations, driven by an attentive action pruning to reduce the search space and a counterfactual reward to enable counterfactual reasoning.
Extensive evaluations on three benchmark datasets demonstrate CFairER’s ability to find faithful explanations for fairness and balance the fairness-accuracy trade-off.
This work is supported by the Australian Research Council (ARC) under Grant No. DP220103717, LE220100078, LP170100891 and DP200101374.
|
http://arxiv.org/abs/2308.01919v1 | 20230712055540 | Emotion recognition based on multi-modal electrophysiology multi-head attention Contrastive Learning | [
"Yunfei Guo",
"Tao Zhang",
"Wu Huang"
] | cs.MM | [
"cs.MM",
"cs.AI",
"cs.CV"
] |
Emotion recognition based on multi-modal electrophysiology multi-head attention Contrastive Learning
1th Yunfei Guo
Research institute
Chengdu Techman Software Co.,Ltd
Chengdu, China
[email protected]
2th Tao Zhang
Research institute
Chengdu Techman Software Co.,Ltd
Chengdu, China
[email protected]
3th Wu Huang*
School of Computer science
Sichuan University
Chengdu, China
[email protected]
August 12, 2023
===============================================================================================================================================================================================================================================================================================================================================================
Emotion recognition is an important research direction in artificial intelligence, helping machines understand and adapt to human emotional states. Multimodal electrophysiological(ME) signals, such as EEG, GSR, respiration(Resp), and temperature(Temp), are effective biomarkers for reflecting changes in human emotions. However, using electrophysiological signals for emotion recognition faces challenges such as data scarcity, inconsistent labeling, and difficulty in cross-individual generalization. To address these issues, we propose ME-MHACL, a self-supervised contrastive learning-based multimodal emotion recognition method that can learn meaningful feature representations from unlabeled electrophysiological signals and use multi-head attention mechanisms for feature fusion to improve recognition performance. Our method includes two stages: first, we use the Meiosis method to group sample and augment unlabeled electrophysiological signals and design a self-supervised contrastive learning task; second, we apply the trained feature extractor to labeled electrophysiological signals and use multi-head attention mechanisms for feature fusion. We conducted experiments on two public datasets, DEAP and MAHNOB-HCI, and our method outperformed existing benchmark methods in emotion recognition tasks and had good cross-individual generalization ability.
Emotion recognition, ME, Self-supervised contrast learning, multi-head attention mechanism,Meiosis
§ INTRODUCTION
Emotion recognition <cit.> refers to the use of computational techniques to identify and analyze human emotional states. This technology has applications in a variety of fields, including psychology, medicine, education, and social networking <cit.>. In the medical field, emotion recognition can assist physicians in better understanding their patients’ emotional states, thereby improving treatment outcomes. Emotion recognition in education can assist teachers in assessing the emotional states of their students, enabling more impactful instruction. In social networking, emotion recognition can help platforms better understand their users’ emotional states, leading to improved service provision.
Electrophysiological signals, such as electroencephalogram (EEG), galvanic skin response (GSR) <cit.>, respiration rate (Respiration), and body temperature (Temperature), have certain advantages as input data for emotion recognition <cit.>. These signals directly reflect physiological states and are closely related to emotional changes <cit.>. For instance, when an individual experiences tension or anxiety, their skin resistance decreases, their respiration rate increases, and their body temperature changes <cit.>. By measuring these signals, accurate inferences can be made about an individual’s emotional state.
However, there are also challenges associated with using electrophysiological signals for emotion recognition. Firstly, measuring these signals requires specialized equipment and expertise <cit.>, which may increase costs and affect portability. Secondly, electrophysiological signals may be subject to interference or noise from external sources, necessitating preprocessing and filtering to improve signal quality <cit.>. Furthermore, the accuracy of emotion recognition may be influenced by physiological variations among individuals.
In summary, while the use of electrophysiological signals for emotion recognition has certain advantages, it also presents challenges. Future research must explore ways to overcome these challenges in order to improve the accuracy and reliability of emotion recognition.
Self-supervised contrastive learning <cit.> is an unsupervised learning method that learns the intrinsic structure of data by comparing the similarities and differences between different data samples. This approach can effectively leverage large amounts of unlabeled data to train deep neural networks, thereby improving the generalization and accuracy of the model. In emotion recognition, self-supervised contrastive learning can be used to learn emotion-related feature representations <cit.>, providing strong support for subsequent emotion classification <cit.>.
Multi-head attention <cit.> is a mechanism for capturing long-range dependencies within sequential data. It computes attention weights between different positions in parallel using multiple “heads,” effectively capturing complex patterns in sequential data <cit.>. In emotion recognition, multi-head attention can be used to process time-series data such as speech signals, text data, or physiological signals to extract emotion-related features <cit.>.
In summary, self-supervised contrastive learning and multi-head attention have great potential for application in emotion recognition. Future research can further explore the use of these techniques in emotion recognition and combine them with other methods to improve the accuracy and reliability of emotion recognition.
The primary contributions and novelties of this paper encompass: 1) the application of self-supervised contrastive learning to cross-domain experiments with multimodal data fusion, exploring a novel approach to improve the generalization performance of cross-domain learning; 2) the effective extraction of task-based features from multimodal data through multi-head attention mechanisms, providing strong support for applications such as emotion recognition; and 3) the maintenance of good robustness on cross-data and cross-modal data with certain domain differences, illustrating the dependability and feasibility of the suggested approach in real-world scenarios. In summary, by applying self-supervised contrastive learning and multi-head attention mechanisms to multimodal data fusion and feature extraction, this paper provides new insights and methods for research in fields such as emotion recognition.
§ RELATED WORK
In previous research, emotion recognition using electrophysiology has been primarily divided into two categories: traditional feature engineering methods and deep learning approaches. Conventional feature engineering methods depend on manually designed features to represent electrophysiological signals. These features often comprise statistical measures (such as mean <cit.>, variance <cit.>, and skewness <cit.>), frequency domain features <cit.> (e.g., power spectral density), and time-frequency features (e.g., wavelet transform <cit.> <cit.>).
These methods often require domain knowledge and expertise to select appropriate features and may not fully exploit complex patterns in the data. Deep learning approaches automatically extract data representations through multi-layer neural networks <cit.>. These methods do not require hand-crafted features and instead train models on large amounts of data to automatically discover relevant patterns. Deep learning methods have achieved significant success in emotion recognition, demonstrating excellent accuracy and robustness <cit.>. In summary, emotion recognition methods based on electrophysiological signals include traditional feature engineering approaches and deep learning methods. Each approach has its strengths and weaknesses, and the choice of method depends on the specific application scenario and data characteristics.
In related work on electrophysiological data, researchers have often achieved performance improvements through the use of self-supervised contrastive learning algorithms, data augmentation, and targeted loss functions. Data augmentation is a technique used to expand the dataset by artificially generating new data samples through methods such as rotation, flipping, cropping, or adding noise <cit.>. In self-supervised contrastive learning, data augmentation can be used to generate positive and negative samples to help the model learn the intrinsic structure of the data. Meiosis <cit.> is a genetics-inspired data augmentation method for the proposed Self-supervised Group Meiosis Contrastive learning <cit.>(SGMC) framework for emotion recognition. It leverages the alignment of stimuli between a set of EEG samples to generate augmented groups through pairing, crossover exchange and separation. The role of meiosis in self-supervised learning is to increase the meaningful difficulty for the model to decode EEG signal samples and mix signals from different subjects while preserving original stimulus-related features for SGMC extraction. Meiosis facilitates diversity exploitation of group composition through random pairing for crossover and separation. Overall, meiosis plays a crucial role in improving the performance of SGMC emotion recognition models, especially in label-scarce scenarios.
An important approach is to design loss functions specific to a given task, which measure the distance between predicted and actual values based on particular data augmentation methods. Common loss functions include mean squared error <cit.>, cross-entropy <cit.>, and contrastive loss <cit.>. In self-supervised contrastive learning, the loss function is used to guide model optimization to minimize prediction errors <cit.>.
In summary, in the analysis of temporal electrophysiological data, self-supervised contrastive learning provides new ideas and methods for applications such as emotion recognition by combining techniques such as meiosis data augmentation and loss functions.
Since ME signals are inherently long sequential data, it is theoretically feasible to extract cross-modal effective features from ME data using multi-head attention mechanisms <cit.>. Multi-head attention computes attention weights between different positions in parallel through multiple "heads", effectively capturing complex patterns in sequence data. In ME data processing <cit.>, multi-head attention can be used to capture local dependencies between different modalities to extract task-relevant features.
In addition, multi-head attention has the advantages of enhanced expressiveness and improved computational efficiency. Since each "head" can learn different attention weights, multi-head attention can better express complex patterns in the data. At the same time, since multiple "heads" can be computed in parallel <cit.>, multi-head attention can also improve computational efficiency.
In summary, in multimodal data processing, multi-head attention provides new ideas and methods for applications such as emotion recognition by capturing local dependencies, enhancing expressiveness and improving computational efficiency.
§ METHOD
§.§ Overall Framework
This paper implements emotion recognition using a multi-head attention self-supervised group meiosis contrastive learning framework for ME data based on domain differences.
As shown in Fig. 1, the proposed framework consists of a contrastive learning pre-training stage and a model fine-tuning stage. The pre-training stage includes: a ME group sampler, meiosis data augmentation, a base encoder, a multi-head attention group projector, and a contrastive loss function. First, the ME group sampler generates mini-batch data by sampling from the ME data of the samples; secondly, meiosis is used to split and splice the ME signals of each group to generate two groups of ME signals to construct positive and negative ME signal pairs; thirdly, the base encoder extracts sample-level stimulus-related representations from each ME signal; then, the multi-head attention group projector aggregates the multimodal representations of each group to extract group-level video stimulus cross-modal related representations and maps them to the latent space; finally, the representations mapped to the latent space are optimized through the contrastive loss function for the parameters of the base encoder and group projector to achieve the purpose of minimizing contrastive loss. In the model fine-tuning stage, emotion recognition inference is performed using a pre-trained base encoder and an initialized classifier.
§.§ ME Group Sampler
Extracting features related to video stimuli from ME data and using contrastive learning algorithms for experimentation is challenging in terms of achieving data alignment. Therefore, this paper proposes the use of a ME data group sampler to obtain small batch data <cit.>, providing a good foundation for subsequent data representation learning.
For the processed data, the video sequence number and subject are used as two tensor dimensions of the ME data, where each ME sample is defined as ME^s_v∈ R^M*C, corresponding to the t-second ME signal recorded when subject s watches a t-second video clip v, where M represents the number of sampling points and C represents the number of channels used for ME data. To obtain a small batch of data, as shown in Fig. <ref>, the ME data sampler first randomly samples P video segments v_1, v_2,..., v_P that have not been sampled in the current epoch. In order to extract two equal sample groups and construct positive pairs for each clip stimulus, the sampler then randomly selects 2Q subjects s_1, s_2,..., s_2Q for grouping. Further, the sampler extracts the ME data corresponding to the selected subjects and video segments, i.e., 2PQ samples D = {ME^s_k_v_i |i = 1,2,...,P;k = 1,2,..., 2Q}, recorded by the 2Q subjects watching the P video segments. In addition, we note that a group of samples G_i = {ME^s_1_v_i, ME^s_2_v_i,..., ME^s_2Q_v_i} corresponds to video clip v_i. In G_i, each individual sample shares similar stimulus-related features. Thus, the sampler provides P groups of samples G_1, G_2,..., G_P corresponding to P different pre-training stimuli.
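A minimal sketch of this group sampling step is given below, assuming the ME data have been arranged into an array indexed by [video clip, subject, sampling point, channel]; all names (sample_group_minibatch, me_data) are illustrative rather than the paper's code.

```python
import numpy as np

def sample_group_minibatch(me_data, P, Q, rng=np.random.default_rng()):
    """Sketch of the ME group sampler.

    me_data: array of shape [n_videos, n_subjects, M, C] holding the
             per-(video clip, subject) ME segments.
    Returns P groups, each with the 2Q samples sharing one video stimulus.
    """
    n_videos, n_subjects = me_data.shape[:2]
    clips = rng.choice(n_videos, size=P, replace=False)           # v_1 .. v_P
    subjects = rng.choice(n_subjects, size=2 * Q, replace=False)  # s_1 .. s_2Q
    # G_i gathers the 2Q samples recorded for the same clip v_i.
    groups = [me_data[v][subjects] for v in clips]                # each [2Q, M, C]
    return clips, subjects, groups
```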
§.§ Meiosis Data Augmentation
Meiosis aims to build positive and negative sample pairs by utilizing the alignment of stimuli in ME groups, expanding a set of samples into two groups to maintain the same stimulus-related characteristics <cit.>.
In order to increase the difficulty of the model in decoding the meaning of ME signal samples, we hope to mix signals from different subjects. In addition, in order to retain the original stimulus-related features extracted by ME-MHACL, we choose to split and splice signals corresponding to the same stimulus. Therefore, we design the crossover transformation as follows: let {a_1, a_2,..., a_M} represent the ME signal A of any sample, where a_i is the data of the i-th sampling point (i=1,2,...,M); similarly, let {b_1, b_2,..., b_M} represent another ME signal B. Further, we exchange the data of the first c sampling points of samples A and B to obtain Ã = {b_1, b_2,..., b_c, a_c+1, a_c+2,..., a_M} and B̃ = {a_1, a_2,..., a_c, b_c+1, b_c+2,..., b_M}, where c is a given split position. This transformation of any two ME signals is encapsulated in the following function expression:
{Ã,B̃} = T( A,B,c)
In addition, to take full advantage of the diversity of group combinations, we can randomly pair for crossover and separation. As shown in Fig. 3, the overall design of meiosis data enhancement is as follows:
* Individual pairing: For one original ME signal group G_i={ME^s_k_v_i|k = 1, 2, ..., 2Q} (corresponding to a video clip v_i), individual ME signals are randomly paired to form Q pairs {ME^s_1_v_i, ME^s_1+Q_v_i}, {ME^s_2_v_i, ME^s_2+Q_v_i}, ..., {ME^s_Q_v_i, ME^s_2Q_v_i} for crossover.
* Crossover: Meiosis receives a randomly given split position c to perform transformation (1) for each pairs to obtain {{M̃Ẽ_s_k^v_i, M̃Ẽ_s_k+Q^v_i}|k = 1, 2, ..., Q}.
* Separation: The transformed signals are randomly divided into two groups, and paired transformed signals are required to enter different groups A and B. Two homologous groups of ME signals G̃^A_i = {M̃Ẽ^s_k_v_i|k = 1, 2, ..., Q} and G̃^B_i = {M̃Ẽ^s_k_v_i|k = Q + 1, Q + 2, ..., 2Q} can be obtained that share similar group-level stimuli-related features. For the data expansion of ME group samples, we use the following function expression:
{G̃_i^A ,G̃_i^B } = Meiosis(G_i)
Once Meiosis is established, for a minibatch of P group samples ς, 2P group samples ς̃ can be obtained as follows:
ς̃= {G̃_i^t|i = 1,2,...,P;t ∈{A,B}} = Meiosis(ς )
G̃^A_i forms a positive pair with G̃^B_i, and negative pairs with any of the other 2(P-1) group samples.
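The following is a minimal sketch of the Meiosis augmentation for one group, assuming the group is stored as a [2Q, M, C] array and that the split position c shared within the iteration is passed in; function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def meiosis(group, c, rng=np.random.default_rng()):
    """Sketch of Meiosis for one group G_i of shape [2Q, M, C].

    c is the split position shared by all groups of the iteration.
    """
    two_q = group.shape[0]
    Q = two_q // 2
    order = rng.permutation(two_q)          # random individual pairing
    group_a, group_b = [], []
    for k in range(Q):
        a = group[order[k]].copy()
        b = group[order[k + Q]].copy()
        # Crossover: exchange the first c sampling points of the pair.
        a[:c], b[:c] = b[:c].copy(), a[:c].copy()
        # Separation: the paired signals enter different homologous groups.
        group_a.append(a)
        group_b.append(b)
    return np.stack(group_a), np.stack(group_b)   # G_i^A, G_i^B
```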
§.§ Base Encoder
In order to extract group-level stimulus-related features for contrastive learning, a basic encoder was first designed to extract individual-level stimulus-related features from each individual ME sample. This paper introduces the basic encoder f: R^M × C→ R^D, which maps individual ME samples X to representations h in a 512-dimensional feature space. Based on the existing model ResNet18-1D <cit.>, the basic encoder is designed as follows:
As shown in Fig. 4, it mainly consists of 17 Conv layers with 1D kernels. The first Conv layer has a kernel parallel to the time axis of the ME signal tensor, with a length of 9. Each residual block contains two Conv layers with the same number of kernels and length. In each residual block, the first layer’s kernel is parallel to the input ME tensor’s time axis, while the second layer’s kernel is parallel to the channel axis. For the 8 residual blocks, the kernel lengths decrease from large to small in the order of 15, 15, 11, 11, 7, 3, 3, and 5. The positions of the max pooling (Maxpool) with a 1D kernel, average pooling (Avgpool) with a 1D kernel, batch normalization (BN), and rectified linear unit (RELU) layers are shown in the figure.
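As a rough illustration (not the exact architecture), a simplified 1D residual block in the spirit of the ResNet18-1D base encoder could look as follows; the paper's alternation of kernels parallel to the time and channel axes, the pooling layers, and the exact channel counts are omitted, and all names are illustrative.

```python
import torch
import torch.nn as nn

class ResBlock1D(nn.Module):
    """Simplified 1D residual block sketch (kernel sizes are illustrative;
    the paper uses block kernel lengths 15, 15, 11, 11, 7, 3, 3, 5)."""
    def __init__(self, in_ch, out_ch, kernel):
        super().__init__()
        pad = kernel // 2
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel, padding=pad)
        self.bn1 = nn.BatchNorm1d(out_ch)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel, padding=pad)
        self.bn2 = nn.BatchNorm1d(out_ch)
        self.relu = nn.ReLU()
        self.skip = (nn.Conv1d(in_ch, out_ch, 1)
                     if in_ch != out_ch else nn.Identity())

    def forward(self, x):            # x: [batch, channels, time]
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.skip(x))
```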
Through the basic encoder, for the augmented group sample G̃^t_i, the set of individual-level stimulus-related representations {h_1, h_2, …, h_Q} can be obtained as follows:
H_i_ME^t = f(G̃_i_EEG^t,G̃_i_GSR^t,G̃_i_Resp^t,G̃_i_Temp^t)
This collection is used to further extract group-level features. Individual representation can also be used to extract emotional features and classify emotions.
§.§ Multi-head attention group projector
The multi-head attention group projector aims to accurately project stimulus-related representations from ME signals into a latent space to compute the similarity of video clip stimuli. To alleviate the obstacles (fatigue, attention distraction, etc.) when extracting stimulus-related features from individual samples, a group projector was designed to extract group-level features from multiple samples.
A single ME sample set is a disordered matrix set, which lacks a specific extraction method. Most models focus on regular input representation, such as multi-channel images with a fixed order between different channels, and videos with a fixed order between different frames. In the unordered point cloud classification problem, Charles R. Qi <cit.> proposed PointNet, which uses a symmetric function to construct the network, achieving feature extraction of unordered point clouds.
To reduce the loss of individual features, extraction can be performed by increasing the dimension of individual representations. This paper proposes a basic projector l: R^D → R^H, which uses a multi-layer perceptron (MLP) to project each individual representation H onto a 4096-dimensional feature space. The basic projector consists of 3 fully connected layers, with hidden units decreasing from high to low in the order of 1024, 2048, and 4096. The activation functions for the first two layers use ReLU. The corresponding positions in the figure are Batch Normalization and Dropout set to 0.5.
The multi-head attention mechanism is a technique that can capture the local dependencies and global semantic information of input data. It divides the input data into multiple subspaces, calculates attention weights in each subspace, and then concatenates and linearly transforms the outputs of different subspaces to obtain the final output <cit.>. The formula for the multi-head attention mechanism is as follows:
F(Q, K, V)_i_ME^t = Concat(head_1,…,head_h)H_i_ME^t
where each head is computed as:
head_i = Attention(QW_i_ME^Q, KW_i_ME^K, VW_i_ME^V)
and Attention is a scaled dot-product attention function:
Attention (Q,K,V) = softmax(QK^T/√(d_head))V
where Q, K, and V represent the query, key, and value matrices, respectively, d_head represents the dimension of each subspace, h represents the number of heads (subspaces), and W^Q_i_ME, W^K_i_ME, and W^V_i_ME represent the learnable projection matrices that transform the group-level features H^t_i_ME acquired through the basic encoder into the multimodal query, key, and value representations.
We employ an 8-head multi-head attention layer for feature fusion to obtain a comprehensive feature representation for emotion prediction. The multi-head attention mechanism takes the ME data as queries, keys, and values, and concatenates the output. In the binary classification tasks based on Arousal and Valence, as well as the quad-classification task introduced in the experiment, the attention of each modality is distributed differently due to task differences. Therefore, the model needs to be trained separately based on different labels to obtain the attention weights of different channels for each modality.
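A minimal sketch of this fusion step using a standard 8-head attention layer is shown below; the token layout (one token per modality) and the dimensions are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

# Group-level ME representations attend to each other (self-attention).
embed_dim, num_heads = 512, 8
mha = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

h = torch.randn(16, 4, embed_dim)   # [batch, tokens (e.g. EEG/GSR/Resp/Temp), D]
fused, attn_weights = mha(h, h, h)  # Q = K = V = h
print(fused.shape, attn_weights.shape)  # [16, 4, 512], [16, 4, 4]
```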
To ensure that the output representing a group sample is invariant to any permutation of the inputs, one-dimensional maximum pooling (MaxPool1D) is used to aggregate information across the upgraded representations. As shown in Fig. 5, the 1D kernel of MaxPool1D is perpendicular to the upgraded representation vectors, its scan direction is parallel to them, the stride is 1, and the padding is 0. MaxPool1D extracts the maximum value of each of the 4096 feature dimensions over the Q upgraded representations to obtain the group-level feature representation in the latent space.
We denote the group projection as R^Q × D→ R^H. The group representation extracted in the latent space can be obtained through g:
me_i^t = g(F_i_ME^t)
= MaxPool1D(l(F_1_ME),l(F_2_ME), … ,l(F_Q_ME))
Inspired by the above idea, we designed a model suitable for feature extraction of group ME signals using a symmetric function. As shown in Fig. 5, we designed a multi-head attention group projector composed of a basic projector, a multi-head attention module, and a symmetric function MaxPool1D.
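A compact sketch of such a symmetric group projector is given below; the batch normalization and dropout of the basic projector are omitted, and names and layer sizes follow the description above only loosely.

```python
import torch
import torch.nn as nn

class GroupProjector(nn.Module):
    """Sketch of the symmetric group projector: an MLP lifts each
    individual representation to a higher dimension, and a max over the
    Q group members makes the output permutation-invariant."""
    def __init__(self, d_in=512, d_out=4096):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(d_in, 1024), nn.ReLU(),
            nn.Linear(1024, 2048), nn.ReLU(),
            nn.Linear(2048, d_out),
        )

    def forward(self, h):            # h: [batch, Q, d_in]
        z = self.mlp(h)              # [batch, Q, d_out]
        # Max over the group dimension acts as MaxPool1D with kernel Q.
        return z.max(dim=1).values   # [batch, d_out]
```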
§.§ Classifier
In the fine-tuning task of emotion classification based on arousal and valence, we use a classifier to extract emotional features from the representations extracted by the basic encoder and predict emotional labels. As shown in Table 1, the classifier mainly consists of three fully connected layers, with hidden units decreasing from high to low in the order of 512, 256, and 128. The corresponding positions in the figure are ReLU and Dropout set to 0.5.
§.§ Contrastive Loss
To measure the similarity of group-level stimulus-related features between two groups of samples, we can calculate the cosine similarity of their group representation vectors. Given the input group samples {G̃^t_i |i = 1,2,…, P;t∈{A, B}}, we obtain the group feature representations {m̃ẽ^t_i |i = 1,2,…, P;t∈{A, B}} through the basic encoder and multi-head attention group projector. Then, we can calculate the similarity between two augmented group samples G̃^A_i and G̃^B_j on m̃ẽ^A_i and m̃ẽ^B_j:
sim = s(me_i^A,me_j^B)
= me_i^A · me_j^B/(‖ me_i^A ‖‖ me_j^B ‖), s(me_i^A,me_j^B) ∈ [0,1]
The contrastive loss aims to maximize the similarity of group-level representations of two groups sharing the same stimulus label in a positive pair.
γ = 1_[j ≠ i]
where γ is an indicator function equal to 1 if j ≠ i and 0 otherwise.
ℓ _i^A = - logexp (s(me_i^A,me_i^B)/τ )/[ ∑_j = 1^P γexp (s(me_i^A,me_j^A)/τ ) + ∑_j = 1^P exp (s(me_i^A,me_j^B)/τ ) ]
τ is the temperature parameter of the softmax. The smaller the loss is, the larger the similarity between me_i^A and me_i^B, and the smaller the similarity between me_i^A and the other group representations from the same minibatch.
Finally, the total loss for an iteration is the average of all contrastive losses for backpropagation as follows:
L = 1/2P∑_i = 1^P ( ℓ _i^A + ℓ _i^B )
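For reference, the group-level contrastive loss described by these equations can be sketched as a generic SimCLR-style NT-Xent implementation, where me_a and me_b hold the P group representations of the two homologous augmented groups; this is not the authors' code, and the names are illustrative.

```python
import torch
import torch.nn.functional as F

def group_contrastive_loss(me_a, me_b, tau=0.1):
    """NT-Xent-style loss over P positive group pairs (sketch).

    me_a, me_b: [P, H] group representations; row i of me_a and row i
    of me_b form a positive pair, all other rows act as negatives.
    """
    za = F.normalize(me_a, dim=1)
    zb = F.normalize(me_b, dim=1)
    P = za.size(0)
    z = torch.cat([za, zb], dim=0)                  # [2P, H]
    sim = z @ z.t() / tau                           # cosine similarity / tau
    sim.fill_diagonal_(float('-inf'))               # exclude self-pairs
    # The positive of sample i (side A) is i+P (side B), and vice versa.
    targets = torch.cat([torch.arange(P) + P, torch.arange(P)])
    return F.cross_entropy(sim, targets)            # averages over 2P terms
```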
§.§ Pre-training Process
ME-MHACL pre-training can be performed based on the constructed ME group sampler, meiosis data enhancement, basic encoder, multi-head attention group projector and contrast loss function.
In pre-training, we first set a number of epochs T_1, and then iterate over the epochs. In each iteration, we continue to sample P video clips until all video clips are enumerated. In each iteration, the sampler extracts 2PQ ME samples D = {ME^s_k_v_i |i = 1,2,…,P;k = 1,2,…,2Q} and combines them into groups ς = {G_i |i = 1,2,…,P}.
Then, in the Meiosis data augmentation stage, to prevent the model from cheating by recognizing the splitting position, a fixed splitting position c is randomly generated and shared by every Meiosis operation of this iteration (1 < c < M-1). The 2P augmented group samples ς̃ = {G̃_i^t | i = 1,2,…,P; t ∈{A,B}} can be obtained through equation (3). Further, we extract group-level features by fusing multimodal representations through the multi-head attention mechanism and projecting them into the latent space, obtaining group representations through equations (4)-(8). We then calculate the loss L through equations (9)-(12). Finally, we minimize the loss L through backpropagation, computing gradients to update the parameters of f and g with an optimizer. The specific steps are summarized in Algorithm 1.
§.§ Fine-tuning Process
To achieve excellent emotion classification performance, based on the learned feature representations, we further fine-tune the model using labeled samples. As shown in Fig. 1, we perform supervised training for emotion classification on a model composed of an initialized classifier and a basic encoder pre-trained with ME-MHACL.
We represent the training data as ME and their labels as y. We represent the classifier as c(·). The label y is a categorical variable. For example, if there are 4 emotion categories, y can take 4 values: 0, 1, 2, or 3. We need to predict the emotion category y for each sample X ∈ R^M × C. The pre-trained basic encoder f extracts representations from the raw ME signals X, which are used by the classifier c(·) to extract prediction features to obtain the predicted category y^pre = c(f(X)). We apply the cross-entropy function to define the loss function for the emotion classification task and apply an optimizer to minimize the loss function to optimize the model parameters. Finally, when the loss function converges, we obtain a model for predicting emotion recognition based on ME signals.
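A minimal sketch of this fine-tuning loop is given below, assuming a pre-trained base encoder f and a labelled data loader labelled_loader are available; the classifier layer sizes, class count, and optimizer settings are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Fine-tuning sketch: the pre-trained encoder f is reused and a small
# classifier head c(.) is trained with cross-entropy on labeled data.
classifier = nn.Sequential(
    nn.Linear(512, 256), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(256, 128), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(128, 4),                  # e.g. 4 emotion classes
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(
    list(f.parameters()) + list(classifier.parameters()), lr=1e-4)

for X, y in labelled_loader:            # X: raw ME segments, y: labels
    logits = classifier(f(X))           # y_pre = c(f(X))
    loss = criterion(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```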
§ EXPERIMENTAL
In this section, we introduce the implementation details and experimental evaluation on the DEAP and HCI datasets. In the experiments, we compared ME-MHACL with other existing emotion recognition methods and evaluated its performance under limited labeled sample learning. By visualizing the feature representations learned by ME-MHACL, we explored the reasons for its effectiveness. By evaluating different combinations of hyperparameters, we explored meaningful patterns of the framework. In addition, we verified the rationality of the architecture design through control and ablation experiments.
§.§ Implementation Detail
In this section, we elaborate on the implementation details of the data set used in the experiment, data processing, and basic hyperparameters.
§.§.§ Dataset
The widely used DEAP dataset <cit.> includes ME signals recorded from 32 subjects while watching 40 one-minute music videos. The ME data of this dataset contains 32-channel EEG signals, 2-channel EOG signals, 2-channel EMG signals, 1-channel GSR signal, 1-channel respiration rate signal, 1-channel respiration belt pneumotachograph signal and 1-channel body temperature change signal, totaling 40 channels of valid data. Each trial data was recorded under rest for 3 seconds and stimulation for 60 seconds. The provider down-sampled the recorded 40-channel ME data to a sampling rate of 128hz and processed it with a band-pass filter in the frequency range of 4-45hz. After watching each video, the subjects were asked to rate each video on a scale of 1 to 9 for emotional arousal, valence, liking and dominance. We used arousal and valence scores for emotion recognition. We set the threshold values for arousal and valence ratings to 5. When the rating value is greater than 5.0, the corresponding ME signal is marked as high arousal or high valence. Otherwise, it is marked as low arousal or low valence. Each ME signal corresponds to two labels of valence and arousal, which can be used to construct 2 or 4 classification tasks. Table 2 shows the age and gender distribution of the samples in this dataset.
The MAHNOB-HCI dataset <cit.> is an emotional dataset generated by 30 subjects watching 28 movie clips and 28 pictures. This dataset simultaneously collects relevant data with 6 cameras, a head-mounted microphone, an eye tracker and ME sensors. The ME data includes 32-channel EEG data, 3-channel ECG data, 1-channel GSR data on the finger, 1-channel skin temperature data (Temp), 1-channel respiration belt pneumotachograph signal (Resp) and 1 channel for marking the state. Each subject rated the Arousal and Valence values of each movie clip on a scale of 1-9. We converted the ratings into continuous values and used them as emotional labels. The ME signals contain valid data from 39 channels and down-sample the data to a sampling rate of 256Hz. We obtained labels in the two dimensions of arousal and valence by binarizing the evaluation values of arousal and valence, and these labels will be used in emotion recognition tasks based on the arousal dimension and valence dimension. Table 2 also shows the age and gender distribution of the samples in this dataset.
§.§.§ Data Process
On the DEAP dataset, we used a 1-second sliding window to separate the 63s signal of each trial into 63 non-overlapping ME signal segments. To improve accuracy, based on existing work <cit.>, we subtracted the 3s resting-state baseline signal from the 60s emotional stimulation ME signal. In each trial, we averaged the 3s baseline ME signal segment to obtain a 1s average baseline ME signal segment <cit.>. The remaining 60 segments minus the average baseline segment become input samples. All samples correspond to a total of 2400 (40 60-second videos) repeated 1-second video clips. From the 2400 video clips, in a ratio of 70:15:15, 1680, 360 and 360 1-second video clips were randomly divided into three groups. The three groups of video clips watched by the 32 subjects correspond to 53760, 11520 and 11520 (70:15:15) ME data segments, respectively, as training set, test set and validation set.
On the MAHNOB-HCI dataset, we first scaled and removed baseline drift for each channel of the ME signals <cit.>. Similar to the DEAP dataset, we divided the movie videos into 1-second windows. Since the lengths of the test videos are different, we split adjacent windows along the time axis from front to back, removing the first three seconds of resting state signals from the 30-second emotional stimulation ME signals. In each trial, we averaged the 3s baseline ME signal segment to obtain a 1s average baseline ME signal segment. The remaining 24 segments minus the average baseline segment become input samples. From the 30 samples provided in the dataset, 5 samples without labels were removed. In a ratio of 70:15:15, each sample's 480 data segments with mean baseline removed were divided into 336, 72 and 72 (70:15:15) ME data segments, respectively as training set, test set and validation set.
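The shared baseline-removal step can be sketched as follows for a DEAP-style trial sampled at 128 Hz; the array shapes and names are illustrative assumptions.

```python
import numpy as np

def preprocess_trial(trial, fs=128, baseline_s=3):
    """Sketch of the mean-baseline subtraction used on DEAP-style trials.

    trial: [C, T] ME recording (baseline_s seconds of rest followed by
           the stimulation period, sampled at fs Hz).
    Returns 1-second segments with the mean 1-second baseline subtracted.
    """
    C = trial.shape[0]
    n_base = baseline_s * fs
    baseline = trial[:, :n_base].reshape(C, baseline_s, fs)
    mean_base = baseline.mean(axis=1)                   # [C, fs]
    stim = trial[:, n_base:]
    n_seg = stim.shape[1] // fs
    segments = stim[:, :n_seg * fs].reshape(C, n_seg, fs)
    return segments - mean_base[:, None, :]             # [C, n_seg, fs]
```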
§.§.§ Basic Configuration
In order to accurately evaluate the performance of the pre-training framework for emotion recognition, we used two steps to evaluate the results. First, save the pre-trained model with different epochs. Next, select the model with the highest average accuracy of emotion recognition after 5 fine-tunings. This average accuracy is used as the result for evaluation.
In order to speed up the sampling, during the pre-training process, we set the five dimensions of the dataset tensor to correspond to video clips, subjects, 1, channels, and sliding window width, respectively. In the fine-tuning process, the first two axes of the dataset, video clips and subjects, are reshaped into a sample axis. The reshaped dataset's axes correspond to samples, 1, channels, and sampling points in turn. In the pre-training task, each epoch traverses each video clip of the dataset. A good pre-training task usually requires training for more than 2000 epochs. In order to reduce workload, we use the validation dataset to adjust the hyperparameters of the ME-MHACL framework and use the test dataset to evaluate the model. Listed in Table 3, SHAPE_tr, SHAPE_te, SHAPE_val represent the tensor sizes of training test and validation datasets used for pre-training or fine-tuning. Epoch represents an appropriate number of pre-training or fine-tuning epochs to achieve good emotion recognition performance. Batchsize represents the number of samples in a small batch.
This paper implements experiments using PyTorch <cit.> on an NVIDIA RTX2080ti GPU. The Adam optimizer <cit.> is used to minimize the loss function during the pre-training and fine-tuning processes. We denote the learning rate of the optimizer as lr. During the pre-training and fine-tuning processes, different values are applied for the number of iterations, batch size, temperature parameter τ, learning rate lr, number of video clips per iteration P, number of samples per group Q, and tensor size of the dataset. As shown in Table II, we list all the hyperparameters used in the two processes on the DEAP and MAHNOB-HCI datasets.
§.§ Emotion Classification Performance
§.§.§ Performance on DEAP
As shown in Table 4, on the DEAP dataset, due to the pioneering use of ME data, we can only compare with algorithms that use DEAP EEG data alone and algorithms that use EEG and EOG bimodal data. First, ME-MHACL is compared with two state-of-the-art methods on the two emotional dimensions of valence and arousal: MMResLSTM <cit.>, which uses multimodal data with residual long short-term memory networks, and ACRNN <cit.>, a hybrid network combining recurrent networks with channel attention mechanisms. From Table II, it can be seen that the proposed ME-MHACL is 2.67% higher than the second highest in the valence dimension and 2.3% higher than the second highest in the arousal dimension. The experimental results demonstrate the effectiveness of ME-MHACL in emotion recognition.
In order to verify the effectiveness of the proposed framework in the multimodal aspect, we first compared ME-MHACL with MindLink-Eumpy <cit.>, which is based on EEG and subject facial images collected from videos. ME-MHACL is 26.14% higher than MindLink-Eumpy in the valence dimension and 34.44% higher than MindLink-Eumpy in the arousal dimension. Secondly, we compared ME-MHACL with DCCA <cit.>, which is based on EEG and EOG. ME-MHACL is 12.06% higher than DCCA in the valence dimension and 7.8% higher than DCCA in the arousal dimension. In addition to experiments with binary labels based on Valence and binary labels based on Arousal, we further compared them on a four-category classification problem: distinguishing four emotional labels: high valence and high arousal, high valence and low arousal, low valence and high arousal, and low valence and low arousal. In the four-category classification experiment, ME-MHACL is 0.84% higher than DCCA. Once again, we compared ME-MHACL with GA-MLP <cit.> based on EEG. ME-MHACL is 5.29% higher than GA-MLP in the valence dimension, 2.42% higher than GA-MLP in the arousal dimension, and 5.83% higher than GA-MLP in the four-category classification experiment. In the dimensions of valence, arousal, and four categories, the average accuracy of ME-MHACL's fine-tuning scheme exceeds that of the self-supervised baseline scheme by 0.67%, 1.33%, and 0.06%, respectively.
Comparing the above data, we can find that whether it is the experiment of a single dimension of Arousal or Valence, or the four-category classification experiment, ME-MHACL has better performance than DCCA, proving that ME-MHACL can more effectively extract emotional features that adapt to sample differences in ME data compared to other modal information.
Comparing the confusion matrices of the four-category classification experiments of DEAP shown in Fig. 6 for ME-MHACL's fine-tuning structure and self-supervised baseline, ME-MHACL's fine-tuning structure performs better in the case of high arousal and high valence and in the case of low arousal and high valence, while ME-MHACL's self-supervised scheme performs better in the case of high arousal and high valence and in the case of high arousal and low valence. In the case of changes in data channels, ME-MHACL's fine-tuning scheme can still maintain stable inference performance, proving that ME-MHACL has generalization in cross-dataset experiments.
§.§.§ Performance on MAHNOB-HCI
As shown in Table 5, on the MAHNOB-HCI dataset, our proposed ME-MHACL is first compared with TSception <cit.>, which is based on single-modal EEG data: ME-MHACL is 33.62% higher than TSception in the valence dimension and 35.28% higher than TSception in the arousal dimension. Then, ME-MHACL is compared with MindLink-Eumpy <cit.>, which is based on EEG and facial video screenshots: in the valence dimension, ME-MHACL is 16.33% higher than MindLink-Eumpy; in the arousal dimension, ME-MHACL is 18.89% higher than MindLink-Eumpy. Finally, ME-MHACL is compared with HCNNS-MFB <cit.>, which is also based on single-modal EEG data: ME-MHACL is 4.72% higher than HCNNS-MFB in the valence dimension and 5.94% higher than HCNNS-MFB in the arousal dimension. By observing Table 5, we can see that ME-MHACL's model fine-tuning scheme and self-supervised learning algorithm have a significant advantage over other multimodal algorithms and single-modal algorithms, and ME-MHACL can also achieve an average accuracy of over 93.5% in the four-category classification experiment. In the dimensions of valence, arousal, and four categories, the average accuracy of ME-MHACL's fine-tuning scheme exceeds that of the self-supervised baseline scheme by 0.06%, 0.22%, and 0.11%, respectively.
Thus, it can be seen that ME-MHACL can obtain effective representations based on the valence and arousal dimensions in the multimodal data of MAHNOB-HCI. The four-category classification experiment further proves the robustness of ME-MHACL in the challenging task of MAHNOB-HCI.
In this experiment, the data modalities used in MAHNOB-HCI and DEAP are the same: EEG, GSR, Resp, and Temp. However, the data of MAHNOB-HCI has one less channel of GSR data than that of DEAP. Based on the above cross-dataset differences, we conducted a four-category classification experiment based on the dual labels of Valence and Arousal.
Comparing the confusion matrices of the four-category classification experiments of MAHNOB-HCI shown in Fig. 7 for ME-MHACL's fine-tuning structure and self-supervised baseline, ME-MHACL's fine-tuning structure performs better in the case of high arousal and high valence and in the case of low arousal and low valence, while ME-MHACL's self-supervised scheme performs better in the case of high arousal and low valence and in the case of low arousal and low valence. In the case of changes in data channels, ME-MHACL's fine-tuning scheme can still maintain stable inference performance, proving that ME-MHACL has generalization in cross-dataset experiments.
§.§ The experiments based on data modes
In this section, we conduct ablation experiments by selecting data modalities. Firstly, we investigate the impact of the number of channels in ME data on the accuracy of emotion recognition. Secondly, we study the effect of the combination of ME data on the accuracy of emotion recognition. Finally, we use attention heatmaps to visualize the attention of each channel data for each modality.
§.§.§ Data mode ablation experiment
In this experiment, we fine-tuned the binary classification of Valence based on the DEAP dataset by adjusting the number of channels of ME data. In each experiment of this group, EEG signals were used as the main modality. As shown in Fig. <ref>, fine-tuning with 32-channel EEG unimodal data resulted in an average test accuracy of 91.38% and an average loss of 31.01%. Fine-tuning with 32-channel EEG and 1-channel GSR bimodal data resulted in an average test accuracy of 91.47% and an average loss of 31.57%. Fine-tuning with 32-channel EEG, 1-channel Resp, and 1-channel Temp trimodal data resulted in an average test accuracy of 90.32% and an average loss of 34.88%. Fine-tuning with 32-channel EEG, 1-channel GSR, 1-channel Resp, and 1-channel Temp quadmodal data resulted in an average test accuracy of 96.39% and an average loss of 13.64%. Fine-tuning with 32-channel EEG, 2-channel EOG, 1-channel GSR, 1-channel Resp, and 1-channel Temp pentamodal data resulted in an average test accuracy of 94.37% and an average loss of 20.77%, and the above results were obtained by averaging over five experiments.
Although the overall trend shows that as the number of data modalities increases, the accuracy of emotion recognition remains on an overall upward trend. However, due to the domain specificity of specific tasks, the effectiveness of data is implicitly related to the task, and it is not positively correlated with the number of channels of ME data.
§.§.§ Dual mode combination comparison
In this experiment, we tested the binary classification of Valence based on different combinations of ME data from the MAHNOB_HCI dataset, including combinations of EEG and temperature, EEG and skin resistance, and EEG and respiration rate.As shown in Fig. <ref>, we first compared the accuracy of emotion recognition for fine-tuning and self-supervised learning of three bimodal data combinations in the first row. In the fine-tuning stage, the accuracy of the EEG and temperature combination was 97.06%; the accuracy of the EEG and skin resistance combination was 97.56%; and the accuracy of the EEG and respiration rate combination was 94.5%. In self-supervised learning, the accuracy of the EEG and temperature combination was 97.89%; the accuracy of the EEG and skin resistance combination was 97.11%; and the accuracy of the EEG and respiration rate combination was 95.56%. Then, in the second row, we compared the loss rate of emotion recognition for fine-tuning and self-supervised learning of three bimodal data combinations. In the fine-tuning stage, the loss rate of the EEG and temperature combination was 10.02%; the loss rate of the EEG and skin resistance combination was 9.93%; and the loss rate of the EEG and respiration rate combination was 19.48%. In self-supervised learning, the loss rate of the EEG and temperature combination was 6.87%; the loss rate of the EEG and skin resistance combination was 9.16%; and the loss rate of the EEG and respiration rate combination was 14.73%.
In the binary classification experiment based on arousal using bimodal data, it also includes combinations of EEG and temperature, EEG and skin resistance, and EEG and respiration rate. As shown in Fig. 10, firstly, we compared the accuracy of emotion recognition for fine-tuning and self-supervised learning of three bimodal data combinations in the first row. In the fine-tuning stage, the accuracy of the EEG and temperature combination was 96.39%; the accuracy of the EEG and skin resistance combination was 97.62%; and the accuracy of the EEG and respiration rate combination was 94.39%. In self-supervised learning, the accuracy of the EEG and temperature combination was 97.33%; the accuracy of the EEG and skin resistance combination was 96.94%; and the accuracy of the EEG and respiration rate combination was 95.56%. Then, in the second row, we compared the loss rate of emotion recognition for fine-tuning and self-supervised learning of three bimodal data combinations. In the fine-tuning stage, the loss rate of the EEG and temperature combination was 13.48%; the loss rate of the EEG and skin resistance combination was 9.23%; and the loss rate of the EEG and respiration rate combination was 19.71%. In self-supervised learning, the loss rate of the EEG and temperature combination was 6.87%; the loss rate of the EEG and skin resistance combination was 9.48%; and the loss rate of the EEG and respiration rate combination was 14.7%.
Through the above experiments, we found that in emotion recognition tasks based on different labels, different combinations of bimodal data have different effects on the emotion recognition of arousal and valence. Lang et al. <cit.> found that the average value of GSR is related to the level of arousal, and slow breathing is related to relaxation, while irregular rhythm, rapid breathing, and cessation of breathing are related to stronger emotions such as anger or fear. When the subject is in a state of anger or fear, their skin temperature may rise and their breathing rate may increase; when the subject's emotions are relaxed, their breathing may slow down <cit.>. The above phenomenon is the origin of the difference in effective representation between different modal electrophysiological data and emotion recognition tasks.
§.§ Ablation experiments based on MHA
In this section, we will compare the representation performance of tensors using multi-head attention mechanism in group projectors in binary classification and four-class classification tasks.
§.§.§ Four-classification T-SNE visualization based on MHA
As shown in Fig. 11, based on the MAHNOB-HCI dataset, according to the self-supervised four-class task mentioned earlier, the data of four modalities is mapped to a two-dimensional plane through the 1800-dimensional features extracted by the multimodal basic encoder using t-SNE <cit.>. After 10 samples of self-supervised learning, we selected three sampling results to obtain the three images in the first row. After the 1800-dimensional feature representation processed by the multimodal multi-head attention mechanism, we also sampled three times to obtain the three images in the second row.
§.§.§ Two-classification T-SNE visualization based on MHA
As shown in Fig. 12, based on the MAHNOB-HCI dataset, according to the binary classification task of the Valence label, the data of four modalities is mapped to a two-dimensional plane through the 1800-dimensional features extracted by the multimodal basic encoder using t-SNE. After 10 samples of self-supervised learning, we selected three sampling results to obtain the three images in the first row. After the 1800-dimensional feature representation processed by the multimodal multi-head attention mechanism, we also sampled three times to obtain the three images in the second row.
Whether it is a binary classification task based on Arousal or Valence, or a four-class classification task based on two-dimensional labels, the multi-head attention module in the multimodal group projector of ME-MHACL not only learns stimulus-related feature representations, but also enables the model to distinguish whether different stimuli come from continuous videos. Without using the multi-head attention module, there are more indistinguishable representations where different emotion labels are mixed together. When using the multi-head attention module, there are fewer feature representations where different emotion labels are mixed together, showing better emotion discrimination ability. This reflects that the multi-head attention module in the multimodal group projector enables ME-MHACL to learn video-level stimulus-related representations, thereby improving emotion recognition performance.
§ CONCLUSION AND PROSPECT
This paper proposes a multimodal emotion recognition method based on multi-head attention mechanism, which can effectively use ME signals for emotion recognition and fully mine the complementary information between electroencephalogram (EEG), skin resistance (GSR), respiration rate (Respiration), and temperature (Temperature). Experiments were conducted on two publicly available multimodal emotion datasets, and the results show that the proposed method outperforms existing benchmark methods in terms of accuracy and stability of emotion prediction, and can better distinguish different emotional states.
There are several aspects of this work that can be further improved and expanded:
* This paper only considers four modalities of ME data, and in the future, other modalities of data, such as facial expressions, voice, body posture, etc., can be introduced to enhance the effectiveness and robustness of emotion recognition.
* This paper only uses two emotional dimensions, arousal and valence, and in the future, more emotional dimensions, such as dominance, value, expectation, etc., can be considered to more comprehensively describe emotional states.
* This paper only uses static emotion labels, and in the future, dynamic emotion labels, such as continuous emotion curves and changing emotion intensity, can be considered to more realistically reflect the process of emotion change.
|
http://arxiv.org/abs/2307.04440v1 | 20230710094116 | Time-Frequency-Space Transmit Design and Signal Processing with Dynamic Subarray for Terahertz Integrated Sensing and Communication | [
"Yongzhi Wu",
"Chong Han"
] | cs.IT | [
"cs.IT",
"eess.SP",
"math.IT"
] |
Time-Frequency-Space Transmit Design and Signal Processing with Dynamic Subarray for Terahertz Integrated Sensing and Communication
Yongzhi Wu, Graduate Student Member, IEEE, and
Chong Han, Member, IEEE
This paper will be presented in part at IEEE SPAWC, September 2023 <cit.>.
Yongzhi Wu is with the Terahertz Wireless Communications (TWC) Laboratory, Shanghai Jiao Tong University, Shanghai, China (Email: [email protected]).
Chong Han is with the Terahertz Wireless Communications (TWC) Laboratory, Department of Electronic Engineering and Cooperative Medianet Innovation Center (CMIC), Shanghai Jiao Tong University, Shanghai, China (Email: [email protected]).
================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Terahertz (THz) integrated sensing and communication (ISAC) enables simultaneous data transmission with Terabit-per-second (Tbps) rate and millimeter-level accurate sensing. To realize such a blueprint, ultra-massive antenna arrays with directional beamforming are used to compensate for severe path loss in the THz band.
In this paper, the time-frequency-space transmit design is investigated for THz ISAC to generate time-varying scanning sensing beams and stable communication beams. Specifically, with the dynamic array-of-subarray (DAoSA) hybrid beamforming architecture and multi-carrier modulation, two ISAC hybrid precoding algorithms are proposed, namely, a vectorization (VEC) based algorithm that outperforms existing ISAC hybrid precoding methods and a low-complexity sensing codebook assisted (SCA) approach. Meanwhile, coupled with the transmit design, parameter estimation algorithms are proposed to realize high-accuracy sensing, including a wideband DAoSA MUSIC (W-DAoSA-MUSIC) method for angle estimation and a sum-DFT-GSS (S-DFT-GSS) approach for range and velocity estimation. Numerical results indicate that the proposed algorithms can realize centi-degree-level angle estimation accuracy and millimeter-level range estimation accuracy, which are one or two orders of magnitudes better than the methods in the millimeter-wave band. In addition, to overcome the cyclic prefix limitation and Doppler effects in the THz band, an inter-symbol interference- and inter-carrier interference-tackled sensing algorithm is developed to refine sensing capabilities for THz ISAC.
Terahertz integrated sensing and communications, ultra-massive MIMO, Orthogonal frequency division multiplexing, hybrid beamforming
§ INTRODUCTION
§.§ Background and Motivations
To address the rapidly growing demand for wireless data rates and the emergence of new application scenarios, the communication community is seeking new spectrum opportunities as well as new functionalities for sixth-generation (6G) and beyond wireless networks <cit.>. Following the former trend of moving up to higher frequencies, the Terahertz (THz) band is viewed as one of the key technologies to enable enormous potential in 6G wireless systems <cit.>. Another promising exploration is to use integrated sensing and communication (ISAC) technology, which can endow wireless networks with sensing capabilities to realize the mapping of the physical world to the digital world <cit.>.
Leveraging the ultra-broad bandwidth and the ultra-massive antenna arrays in the THz band, the integration of these two technologies, i.e., Terahertz integrated sensing and communication (THz ISAC) <cit.>, can achieve ultra-accurate sensing and Terabit-per-second data rates simultaneously.
Despite the promising vision of THz ISAC, critical challenges arise when designing THz ISAC transmit signal. First, there exists severe path loss in the THz band, which includes free path loss, reflection, and scattering losses. These losses strictly limit the maximum sensing and communication distance, and degrade sensing accuracy and data rate.
Second, with the power constraints, to compensate for such severe path loss, ultra-massive multiple-input multiple-output (UM-MIMO) antenna arrays with beamforming are used to generate highly directional beams <cit.>. Thus, energy-efficient and low-complexity beamforming algorithms need to be developed.
Third, the generation of directional beams restricts the angular coverage of sensing. In general, communication prefers stable beams toward users to enable tractable data detection, while sensing requires sweeping beams to scan possible targets in the surrounding environment <cit.>. To realize omnidirectional sensing with directional beams, effective and efficient narrowbeam management schemes, including transmit design in the time-frequency domain and beamforming design in the spatial domain are demanded to realize simultaneous sensing and communication for THz ISAC systems.
Meanwhile, the receive processing encounters significant challenges, especially for sensing parameter estimation algorithms in THz UM-MIMO systems, which are affected by the beamforming architectures and peculiarities of THz channels. First, the sensing algorithm for range and velocity estimation needs to be redesigned, since an additional dimension (namely, spatial domain) is introduced in the received signal model when using the ultra-large dimensional antenna arrays in the THz band.
Second, with high channel sparsity due to strong power loss of non-line-of-sight (NLoS) paths, the delay spread of the THz communication channel is reduced <cit.>. In this case, to utilize broad bandwidth with a fixed subcarrier number, we can increase the subcarrier spacing, which is inversely proportional to the symbol duration. Thus, the symbol duration and cyclic prefix (CP) length are reduced in classical multi-carrier communication systems, such as orthogonal frequency-division multiplexing (OFDM). Nevertheless, the round-trip delay of sensing targets should be smaller than the CP duration with classical OFDM sensing algorithms <cit.>. For communication waveforms with reduced CP, there might exist inter-symbol interference (ISI) effects on the received sensing signal, which render existing sensing methods inapplicable.
Third, as the Doppler shifts are proportional to the carrier frequency, the Doppler effects become even more severe in the THz band. If the current waveform numerology of 5G wireless systems is maintained, Doppler effects in the presence of high-mobility targets may cause inter-carrier interference (ICI) and severely degrade sensing capabilities. Thus, to tackle these challenges, signal processing design in terms of sensing algorithms is vital to realize high-accuracy sensing, while data recovery has been well investigated <cit.>.
§.§ Related Works
§.§.§ Waveform Design
By jointly designing the ISAC transmit signal, sensing and communication can share the hardware and signal processing modules. From the perspective of the time-frequency domain, various ISAC waveforms have been investigated in the literature. As adopted in 4G and 5G standards, CP-OFDM is a promising candidate for ISAC although being a communication-centric design <cit.>. Since an OFDM waveform suffers from a high peak-to-average power ratio (PAPR) issue, especially in uplink transmission, some single-carrier waveforms, such as discrete Fourier transform (DFT) spread OFDM (DFT-s-OFDM), are investigated for THz ISAC systems, due to their low PAPR compared to OFDM <cit.>. Recently, orthogonal time frequency space (OTFS) has been studied in ISAC applications <cit.>, thanks to its advantages under doubly-selective channels in high-mobility scenarios. Furthermore, a DFT spread OTFS (DFT-s-OTFS) waveform is proposed in <cit.> to reduce the PAPR of OTFS for THz ISAC. However, the high complexity of data detection for MIMO-OTFS constitutes a serious problem. Despite the PAPR issue, OFDM is still a potential waveform in the THz band, since it has good compatibility with UM-MIMO and enables flexible time-frequency domain resource allocation among multiple users <cit.>. Thus, wideband UM-MIMO systems with multi-carrier modulations are investigated for THz communications in many recent works, including beamforming design <cit.>, channel estimation <cit.>, multiple access <cit.>, and carrier aggregation <cit.>. Nevertheless, there is a lack of research on THz ISAC in this regard, especially focusing on the transmit design and sensing algorithms in the time-frequency-space domain.
§.§.§ Beamforming Design
Pertaining to MIMO-OFDM systems, with conventional fully-digital and analog beamforming architectures, multi-target estimation can be realized by utilizing opportunistic sensing <cit.> and multibeam optimization <cit.>.
Nevertheless, the fully-digital structure exhibits high hardware complexity and power consumption for THz ISAC systems with large-dimensional antenna arrays, while the analog beamforming architecture can only support one data stream with limited spatial multiplexing gain <cit.>.
As a combined approach, hybrid beamforming can realize comparable data rates with the fully-digital structure and exhibits less hardware complexity. Based on the full-connected (FC) hybrid beamforming architecture, authors in <cit.> propose a consensus-ADMM approach to design the analog and digital beamformers by jointly optimizing the spectral efficiency (SE) and spatial spectrum matching error of sensing. With the array-of-subarray (AoSA) structure, which further reduces the number of phase shifters and power consumption at the cost of sacrificing data rate, the ISAC hybrid beamformers can be designed by optimizing the Cramér-Rao bound <cit.> or minimizing the weighted Euclidean distance between the hybrid precoding matrix and the fully digital beamforming matrix <cit.>. To balance SE and power consumption, a dynamic array-of-subarray (DAoSA) hybrid precoding architecture is proposed in <cit.>, while the ISAC hybrid precoding design with dynamic subarray has not been investigated yet.
In addition, most of the aforementioned works design beamformers with some prior knowledge of target angles <cit.>, which is acceptable in target tracking scenarios but not available in general target estimation, i.e., target discovery mode. Thus, beam scanning-based sensing to discover targets with narrow beams in the THz band is still a significant issue to be addressed.
§.§ Contributions and Paper Structure
The contributions of this work are summarized as follows:
* We present a time-frequency-space transmit design framework for THz ISAC systems by considering a dynamic subarray hybrid beamforming architecture and multi-carrier waveform. In this framework, we develop two ISAC hybrid precoding algorithms for the DAoSA structure, namely a vectorization (VEC) based algorithm and a sensing codebook-assisted (SCA) algorithm. Our proposed ISAC hybrid precoding algorithms can cover the entire angular domain for sensing while sustaining data transmission, by generating scanning sensing beams at different time slots and stable communication beams toward the user. Meanwhile, the proposed VEC algorithm outperforms existing ISAC hybrid precoding methods, and the SCA approach reduces the computational complexity.
* Based on the time-frequency-space domain transmit signal design, we propose parameter estimation algorithms at the sensing receiver, including a wideband DAoSA MUSIC (W-DAoSA-MUSIC) algorithm for angle estimation, and a sum-DFT and golden section search (S-DFT-GSS) method for range and velocity estimation. Simulation results indicate that the sensing accuracy with the proposed sensing algorithms can achieve centi-degree-level for angle estimation, millimeter-level for range estimation, and decimeter-per-second-level for velocity estimation.
* We further propose an ISI- and ICI-tackled sensing algorithm to overcome the CP limitation on the maximum sensing distance and estimation error caused by high-mobility targets. While the ICI is studied in <cit.>, the ISI effects have not been considered in the literature. Compared to the ISI-unaware estimation, the ISI-tackled sensing algorithm can accurately estimate the target with a round-trip delay larger than the CP duration. In contrast with ICI-unaware estimation, the ICI-tackled algorithm can overcome the masking problem of weak targets caused by the side lobes of the strong target in the presence of ICI effects.
The structure of the remainder of this paper is organized as follows. The system framework with the time-frequency-space transmit design for THz ISAC is presented in Sec. <ref>. The ISAC hybrid precoding algorithms are elaborated in Sec. <ref>. The sensing estimation algorithm design with the DAoSA architecture and multi-carrier modulation is proposed in Sec. <ref>. The ISI- and ICI- tackled sensing algorithm for THz ISAC is developed in Sec. <ref>. Sec. <ref> illustrates extensive simulation results. Finally, the paper is concluded in Sec. <ref>.
Notations: ℂ denotes the set of complex numbers; 𝐀(i, j) is the entry on the ith row and jth column of 𝐀; 𝔼{·} defines the expectation operation; The superscripts (·)^T and (·)^H stand for the transpose and Hermitian transpose operations; The notations ⊗ and ⊙ refer to the Kronecker product and Hadamard Product, respectively; det(·) and ·_F denote the determinant and Frobenius norm of a matrix; (·)^† indicates the Moore-Penrose pseudo inverse; vec(·) represents the vectorization operation.
§ SYSTEM FRAMEWORK
As shown in Fig. <ref>, we propose a THz ISAC system framework based on a wideband UM-MIMO architecture, in which the ISAC transceiver simultaneously senses potential targets in the surrounding spatial environment and sends information symbols to one communication receiver (without loss of generality) via the designed transmit signal in the time-frequency-space domain. Specifically, in the time-frequency domain, the data signal is modulated with orthogonal frequency-division multiplexing (OFDM) and spread across M subcarriers. In the spatial domain, the data streams at each subcarrier are precoded through a digital precoder 𝐅_BB∈ℂ^N_RF^t× N_s and an analog precoder 𝐅_RF∈ℂ^N_t × N_RF^t, where N_s denotes the number of data streams and N_RF^t refers to the number of transmit RF chains, with N_s ⩽ N_RF^t ≪ N_t.
As for the transceiver structure, the ISAC transceiver is equipped with an N_t-element transmit uniform planar array (UPA) to transmit the ISAC waveform and an N_r-element receive UPA to perform sensing echo processing. The communication receiver has an N_r-element UPA to accomplish signal reception and data detection. The transmit antenna arrays adopt a DAoSA hybrid beamforming structure <cit.>. With the DAoSA structure, the transmit antennas are divided into N_RF^t subarrays and each RF chain connects to each subarray with K_t = N_t / N_RF^t elements through a switch. Similarly, the received signal is combined through the analog combiner and the digital combiner with N_RF^r RF chains, and each receiver subarray contains K_r = N_r / N_RF^r elements.
§.§ Time-Frequency-Space Transmit Design
At the transmitter side, the ISAC system maps the transmitted bit streams to a large amount of data frames. A data frame is divided into Q time slots, each of which contains M × N data symbols, where M and N stand for the numbers of subcarriers and symbols during a time slot. In the multi-carrier hybrid beamforming architecture, at the qth time slot, the data symbols 𝐬_q[m, n] ∈ℂ^N_s× 1, q = 1, 2, ⋯, Q, m = 0, 1, ⋯, M - 1, n = 0, 1, ⋯, N - 1, which are generated from N_s data streams with 𝔼{𝐬_q[m, n] 𝐬^H_q[m, n]} = 1/N_s𝐈_N_s, are first precoded by a digital beamformer 𝐅_BB, q[m] and mapped to the mth subcarrier in the frequency domain, 𝐱_q[m, n] = 𝐅_BB, q[m] 𝐬_q[m, n]. Then, we perform the inverse discrete Fourier transform (IDFT) to transform the frequency-domain data blocks to the time-domain signal and add one cyclic prefix (CP) for each symbol before conducting up-conversion and analog beamforming 𝐅_RF, q∈ℂ^N_t× N_RF^t.
At the qth time slot, the proposed THz ISAC system with the time-frequency-space three-dimensional transmit design generates scanning beams toward the qth sensing direction and stable beams toward the communication user.
Note that all subcarriers share the same analog precoder while the digital precoder is performed for each subcarrier.
For the nth symbol during the qth time slot, the transmit time-domain signal can be expressed as,
𝐱̃_q, n (t) = ∑_m=0^M-1𝐅_RF, q𝐅_BB, q[m] 𝐬_q[m, n] e^j2π mΔ f t,
where t denotes the time instant and Δ f refers to the subcarrier spacing. Then, the symbol duration T equals to 1/Δ f and the total symbol duration is expressed as T_o = T + T_cp with the CP duration of T_cp = M_cp/M T, where M_cp is the CP size. Thus, the duration of a time slot is T_s = N T_o and the frame duration can be expressed as T_f = Q T_s. To generate stable beams towards the communication user and scanning beams for searching sensing targets, the transmit beamformers are fixed during a time slot and vary at different time slots.
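For concreteness, a minimal NumPy sketch of this transmit chain is given below; the array dimensions, the random precoders, and the sampling grid t = kT/M are illustrative assumptions rather than the simulation setup of this paper. It generates the discrete-time baseband samples of one OFDM symbol via per-subcarrier digital precoding, a shared analog precoder, an IDFT across subcarriers per antenna, and CP insertion.

```python
import numpy as np

# Illustrative dimensions (assumptions, not the simulation parameters of this paper)
M, M_cp = 64, 16            # subcarriers and CP length
N_t, N_rf, N_s = 32, 4, 2   # transmit antennas, RF chains, data streams

rng = np.random.default_rng(0)
F_RF = np.exp(1j * 2 * np.pi * rng.random((N_t, N_rf)))   # shared analog precoder (unit modulus)
F_BB = rng.standard_normal((M, N_rf, N_s)) + 1j * rng.standard_normal((M, N_rf, N_s))
s = (rng.standard_normal((M, N_s)) + 1j * rng.standard_normal((M, N_s))) / np.sqrt(2 * N_s)

# Frequency-domain antenna signal x_q[m, n] = F_RF F_BB,q[m] s_q[m, n] for one symbol n
X_freq = np.stack([F_RF @ F_BB[m] @ s[m] for m in range(M)], axis=1)   # (N_t, M)

# Time-domain samples per antenna at t = k T / M: IDFT across subcarriers, then prepend the CP
x_time = M * np.fft.ifft(X_freq, axis=1)                    # samples of sum_m (.) e^{j 2*pi*m*df*t}
x_cp = np.concatenate([x_time[:, -M_cp:], x_time], axis=1)  # (N_t, M + M_cp) transmitted block
# Note: the power constraint ||F_RF F_BB[m]||_F^2 = N_s is not enforced in this toy example.
```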
In this work, we consider a DAoSA hybrid beamforming architecture <cit.>, in which the connections between RF chains and subarrays can be intelligently adjusted through a network of switches. The analog precoding matrix 𝐅_RF, q can be written as,
𝐅_RF, q = 𝐅_P, q⊙𝐏_S,
where 𝐅_P, q∈ℂ^N_t× N_RF^t denotes the phase shifter network matrix and 𝐏_S∈{0, 1}^N_t × N_RF^t describes the binary switch network matrix, which can be expressed as
𝐏_S=[[ 𝐩_1,1 𝐩_1,2 … 𝐩_1, N_RF^t; 𝐩_2,1 𝐩_2,2 … 𝐩_2, N_RF^t; ⋮ ⋮ ⋱ ⋮; 𝐩_N_RF^t, 1 𝐩_N_RF^t, 2 … 𝐩_N_RF^t, N_RF^t ]],
where 𝐩_i, j stands for the status of the switch between the ith subarray and the jth RF chain. If this switch is closed, 𝐩_i, j = 1_K_t is an all-one vector. Conversely, 𝐩_i, j = 0_K_t is a zero vector. The phase shifter network matrix 𝐅_P, q satisfies a
constant modulus constraint, i.e., the modulus of its elements is 1. Then, the analog precoding matrix 𝐅_RF, q is given by
𝐅_RF, q=[[ 𝐟_1,1 𝐟_1,2 … 𝐟_1, N_RF^t; 𝐟_2,1 𝐟_2,2 … 𝐟_2, N_RF^t; ⋮ ⋮ ⋱ ⋮; 𝐟_N_RF^t, 1 𝐟_N_RF^t, 2 … 𝐟_N_RF^t, N_RF^t ]],
where 𝐟_i, j∈ℂ^K_t × 1 represents the joint precoding vector of the switch and the phase shifters between the ith subarray and the jth RF chain. When this switch is closed, 𝐟_i, j should satisfy the unit modulus constraint. When the switch is open, 𝐟_i, j is a zero vector. We denote the feasible set of the analog precoder 𝐅_RF, q as ℱ. Moreover, the normalized transmit power constraint is expressed as, 𝐅_RF, q𝐅_BB, q[m]_F^2 = N_s.
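The feasible set ℱ can be encoded compactly in code. The following sketch (with a hypothetical helper name and an AoSA-like example switch pattern) expands a binary N_RF^t × N_RF^t switch pattern over the K_t antennas of each subarray to obtain 𝐏_S and forms 𝐅_RF, q as the Hadamard product with a unit-modulus phase matrix:

```python
import numpy as np

def daosa_analog_precoder(switch_pattern, phases):
    """Build F_RF = F_P ⊙ P_S for the dynamic array-of-subarray structure.

    switch_pattern : (N_RF, N_RF) binary array, entry (i, j) = 1 if the switch
                     between subarray i and RF chain j is closed.
    phases         : (N_t, N_RF) array of phase-shifter angles in radians.
    """
    n_rf = switch_pattern.shape[0]
    n_t = phases.shape[0]
    k_t = n_t // n_rf                                   # antennas per subarray
    P_S = np.kron(switch_pattern, np.ones((k_t, 1)))    # (N_t, N_RF) switch network matrix
    F_P = np.exp(1j * phases)                           # unit-modulus phase shifter matrix
    return F_P * P_S                                    # Hadamard product F_P ⊙ P_S

# Example: 32 antennas, 4 RF chains, AoSA-like pattern (each RF chain drives one subarray)
rng = np.random.default_rng(0)
switches = np.eye(4)                                    # N_c = N_RF closed switches
F_RF = daosa_analog_precoder(switches, 2 * np.pi * rng.random((32, 4)))
```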
§.§ Communication Model
With multi-carrier transmission, the communication received signal of the mth subcarrier and the nth symbol at qth time slot after the decoding process is expressed as
𝐫_q[m, n] = √(ρ)𝐂_BB^H[m] 𝐂_RF^H 𝐇_c[m] 𝐅_RF, q𝐅_BB, q[m] 𝐬_q[m, n]
+ 𝐂_BB^H[m] 𝐂_RF^H 𝐧_q[m, n],
where ρ describes the average received power, 𝐂_BB[m]∈ℂ^N_RF^r× N_s is the digital combining matrix, 𝐂_RF∈ℂ^N_r × N_RF^r is the analog combining matrix, and 𝐧_q[m, n] refers to the additive white Gaussian noise with independent and identically distribution 𝒞𝒩(0, σ_n^2). In the THz band, the channel is sparse and dominated by the line-of-sight (LoS) path and several reflected rays. Thus, as a benchmark, the multi-path channel model based on ray-tracing methods of the channel matrix 𝐇_c[m] at the mth subcarrier can be given by <cit.>,
𝐇_c[m] = γα_L[m] 𝐚_r(θ_L^r, ϕ_L^r) 𝐚_t^H(θ_L^t, ϕ_L^t)
+ γ∑_l=1^L_Nα_N, l[m] 𝐚_r(θ_N, l^r, ϕ_N, l^r) 𝐚_t^H(θ_N, l^t, ϕ_N, l^t),
where γ = √(N_t N_r/L_N + 1) and L_N represents the number of non-line-of-sight (NLoS) paths. Moreover, α_L[m] and α_N, l[m] denote the channel gain of the LoS path and lth NLoS path at mth subcarrier, respectively. In addition, θ^r(θ^t) and ϕ^r(ϕ^t) refer to the azimuth and elevation angles of arrival/departure (AoAs/AoDs). In the case of a UPA in the yz-plane with W and L elements on the y and z axes respectively, the array response vector can be expressed by,
𝐚(θ, ϕ) = 𝐚_z(ϕ) ⊗𝐚_y(θ, ϕ),
where
𝐚_y(θ, ϕ) = 1/√(W) [1, ⋯, e^jπ (W - 1) sin(θ) sin(ϕ)]^T,
𝐚_z(ϕ) = 1/√(L) [1, ⋯, e^jπ (L - 1) cos(ϕ)]^T,
and θ stands for the azimuth angle, and ϕ refers to the elevation angle.
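As a small illustration (the helper name and half-wavelength element spacing are assumptions consistent with the steering model above), the UPA response can be evaluated as follows:

```python
import numpy as np

def upa_response(theta, phi, W, L):
    """Array response a(theta, phi) = a_z(phi) ⊗ a_y(theta, phi) of a W x L UPA
    in the yz-plane with half-wavelength spacing (azimuth theta, elevation phi)."""
    w = np.arange(W)
    l = np.arange(L)
    a_y = np.exp(1j * np.pi * w * np.sin(theta) * np.sin(phi)) / np.sqrt(W)
    a_z = np.exp(1j * np.pi * l * np.cos(phi)) / np.sqrt(L)
    return np.kron(a_z, a_y)                            # (W * L,) steering vector

# Example: response of an 8x4 UPA toward azimuth 30 deg, elevation 90 deg (2D beamforming case)
a = upa_response(np.deg2rad(30.0), np.deg2rad(90.0), W=8, L=4)
```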
For THz communications, we need to design hybrid precoders to maximize spectral efficiency. The achievable spectral efficiency can be expressed as <cit.>
R_q = 1/M∑_m=0^M-1log(𝐈_N_s + ρ/N_s𝐑_n^-1𝐂_BB^H[m] 𝐂_RF^H 𝐇_c[m]
×𝐅_RF, q𝐅_BB, q[m] 𝐅_BB, q^H[m] 𝐅_RF, q^H 𝐇_c^H[m] 𝐂_RF𝐂_BB[m]),
where 𝐑_n = σ_n^2 𝐂_BB^H[m] 𝐂_RF^H 𝐂_RF𝐂_BB[m] is a noise covariance matrix. The optimization problem of maximizing R_q at the transmitter side is equivalent to minimizing the Euclidean distance between the optimal fully digital precoder 𝐅_c[m] and the hybrid precoder as 1/M∑_m=0^M-1𝐅_c[m] - 𝐅_RF, q𝐅_BB, q[m]_F^2. Generally, the channel state information (CSI) can be known at both transmitter and receiver by utilizing channel estimation <cit.> and is assumed to be time-invariant during a frame duration. Then, from the singular value decomposition (SVD) of the channel 𝐇_c[m], the unconstrained optimal precoder 𝐅_c[m] and decoder 𝐂_c[m] consist of the first N_s columns of the right and the left singular vector matrices, respectively.
§.§ Sensing Model
In the THz band, directional beams are used to compensate for severe path loss and improve received sensing signal power, which limits the angular range of sensing targets. To realize entire-space sensing, we design a codebook-based beam-scanning scheme for THz sensing.
For the azimuth angle, the whole sensing angular domain is divided into Q scanning directions, ω = [ω_1, ω_2, ⋯, ω_Q]^T, each of which corresponds to a time slot. We can set Q = W and design the sensing beamforming vector as the qth column from a discrete Fourier transform (DFT) codebook, by which the transmitter can generate W orthogonal beamforming vectors and steer signals towards W independent sensing directions. Thus, the sensing codebook can be written as,
𝐀 = 𝐚_z(ϕ) ⊗ [𝐚_y,1(ω_1, ϕ), ⋯, 𝐚_y, W(ω_Q, ϕ)]
where
𝐚_y, q(ω_q, ϕ) = 1/√(W) [1, ⋯, e^jπ (W - 1)sin(ω_q)sin(ϕ)]^T,
and sin(ω_q) = -1 + 1/W + (q -1) 2/W for q = 1, 2, ⋯, W. In this case, the sensing angular window Ω_q at the qth time slot contains angles from arcsin(-1+(q-1)2/W) to arcsin(-1+q2/W).
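A possible construction of this DFT sensing codebook, together with the sensing precoder of one scanning direction, is sketched below; the chosen array size and the fixed elevation are illustrative assumptions:

```python
import numpy as np

def dft_sensing_codebook(W, L, phi):
    """Sensing codebook A = a_z(phi) ⊗ [a_y(w_1, phi), ..., a_y(w_Q, phi)], with the
    Q = W azimuth scanning directions satisfying sin(w_q) = -1 + 1/W + (q-1)*2/W."""
    q = np.arange(1, W + 1)
    sin_omega = -1.0 + 1.0 / W + (q - 1) * 2.0 / W                    # scanning grid
    w = np.arange(W)[:, None]
    A_y = np.exp(1j * np.pi * w * sin_omega[None, :] * np.sin(phi)) / np.sqrt(W)  # (W, Q)
    l = np.arange(L)
    a_z = np.exp(1j * np.pi * l * np.cos(phi)) / np.sqrt(L)
    return np.kron(a_z[:, None], A_y)                                 # (W*L, Q) codebook

A = dft_sensing_codebook(W=16, L=4, phi=np.deg2rad(90.0))
F_s_q = A[:, [2]] @ np.ones((1, 2)) / np.sqrt(A.shape[0])  # F_s,q = A(:,q) 1^T / sqrt(N_t), q = 3, N_s = 2
```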
At the sensing receiver, the frequency domain received signal of the mth subcarrier and the nth symbol at qth time slot is denoted as 𝐲_q[m, n]∈ℂ^N^r_RF× 1, which is given by
𝐲_q[m, n] = 𝐖_RF, q^H 𝐇_s[m, n] 𝐅_RF, q𝐅_BB, q[m] 𝐬_q[m, n]
+ 𝐖_RF, q^H 𝐞_q[m, n]
where 𝐖_RF, q∈ℂ^N_r× N_RF^r denotes the combing matrix at the sensing receiver and 𝐞_q[m, n] represents the AWGN vector.
At the ISAC transceiver side, the sensing receiver is collocated with the transmitter. Based on the OFDM radar sensing channel <cit.> and MIMO channel models <cit.>, the sensing channel matrix 𝐇_s[m, n] is expressed as,
𝐇_s[m, n] = √(N_t N_r/P)∑_p=1^P h_p e^-j2π m Δ f τ_p e^j2π ((q - 1) T_s + n T_o) ν_p
×𝐚_r(θ_p, ϕ_p) 𝐚_t^T(θ_p, ϕ_p),
where P stands for the number of sensing targets, each of which corresponds to one back-reflected path with complex channel coefficient h_p. For the pth target, the delay τ_p and the Doppler shift ν_p are calculated by τ_p = 2 r_p/c_0 (τ_p ⩽ T_cp) and ν_p = 2 f_c v_p/c_0 (ν_p ≪Δ f), where r_p and v_p refer to the range and relative velocity of the pth target, respectively. c_0 denotes the speed of light and f_c describes the carrier frequency. Moreover, θ_p and ϕ_p represent the azimuth and elevation angle-of-arrival of the pth target.
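For reference, a direct NumPy sketch of this sensing channel model is given below; the target-parameter interface and array sizes are illustrative assumptions, and the local steering helper repeats the UPA model sketched earlier:

```python
import numpy as np

def upa(theta, phi, W, L):  # same steering model as sketched earlier
    a_y = np.exp(1j * np.pi * np.arange(W) * np.sin(theta) * np.sin(phi)) / np.sqrt(W)
    a_z = np.exp(1j * np.pi * np.arange(L) * np.cos(phi)) / np.sqrt(L)
    return np.kron(a_z, a_y)

def sensing_channel(m, n, q, targets, f_c, delta_f, T_o, T_s, tx=(8, 4), rx=(8, 4)):
    """OFDM radar sensing channel H_s[m, n] at subcarrier m, symbol n (0-based) and time
    slot q (1-based); `targets` is a list of dicts with keys h, r, v, theta, phi
    (an illustrative interface, not the paper's)."""
    c0 = 3e8
    N_t, N_r = tx[0] * tx[1], rx[0] * rx[1]
    H = np.zeros((N_r, N_t), dtype=complex)
    for t in targets:
        tau, nu = 2 * t["r"] / c0, 2 * f_c * t["v"] / c0
        phase = np.exp(-2j * np.pi * m * delta_f * tau) * \
                np.exp(2j * np.pi * ((q - 1) * T_s + n * T_o) * nu)
        H += t["h"] * phase * np.outer(upa(t["theta"], t["phi"], *rx),
                                       upa(t["theta"], t["phi"], *tx))   # a_r a_t^T
    return np.sqrt(N_t * N_r / len(targets)) * H
```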
Beamforming design for sensing aims at achieving the highest beamforming gain towards the sensing direction. Thus, at the qth time slot, the optimal sensing precoder 𝐅_s, q∈ℂ^N_t × N_s can be generated from the qth column of the sensing codebook, namely, 𝐅_s, q = 1/√(N_t)𝐀(:, q) 1_N_s^T with a normalized factor of 1/√(N_t). Then, we need to minimize the Euclidean distance, 1/M∑_m=0^M-1𝐅_s, q - 𝐅_RF, q𝐅_BB, q[m]_F^2. At the sensing receiver side, 𝐖_RF, q is fixed during a time slot and the receive sensing beams point to N_RF^r random directions within Ω_q at the qth time slot.
§.§ Problem Formulation
At the THz ISAC transmitter, we need to design the analog and digital beamformers to simultaneously realize a communication link with ultra-fast data rates and provide a desired beampattern for high-accuracy sensing of surrounding targets.
Different from the conventional hybrid precoding design problem for communication, the optimal ISAC hybrid precoders should be sufficiently "close" to the time-invariant and frequency-dependent optimal communication precoder and the time-varying and frequency-independent optimal sensing precoder at the same time.
Based on the above models and analysis, we can formulate the following multi-objective optimization problem,
min_𝐅_RF, q, 𝐅_BB, q[m] 1/M∑_m=0^M-1𝐅_c[m] - 𝐅_RF, q𝐅_BB, q[m]_F^2,
1/M∑_m=0^M-1𝐅_s, q - 𝐅_RF, q𝐅_BB, q[m]_F^2
s.t. 𝐅_RF, q∈ℱ,
𝐅_RF, q𝐅_BB, q[m]_F^2 = N_s,
m = 0, 1, ⋯, M - 1,
for q = 1, 2, ⋯, Q.
Since this problem has multiple objective functions and the constraints are non-convex, it is rather difficult to obtain the global optimal solution. In the next section, we propose two algorithms for the THz ISAC hybrid precoding optimization problem to yield near-optimal solutions.
§ HYBRID PRECODING DESIGN FOR THZ ISAC
For the multi-objective ISAC hybrid precoding problem, we can introduce a weighting factor η (0 ≤η≤ 1), which provides the tradeoff between sensing and communication. Then, the hybrid precoding problem (<ref>) can be formulated as,
min_𝐅_RF, q, 𝐅_BB, q[m] 1/M∑_m=0^M-1(η𝐅_c[m] - 𝐅_RF, q𝐅_BB, q[m]_F^2 +
(1 - η) 𝐅_s, q - 𝐅_RF, q𝐅_BB, q[m]_F^2 )
s.t. 𝐅_RF, q∈ℱ,
𝐅_RF, q𝐅_BB, q[m]_F^2 = N_s,
m = 0, 1, ⋯, M - 1.
where η = 0 or η = 1 stands for either sensing-only or communication-only hybrid beamforming design problem. Without loss of generality, we can consider solving the hybrid precoding problem at different time slots separately. Then, a common approach is to use alternating minimization techniques <cit.>, i.e., alternately solving for 𝐅_RF, q and 𝐅_BB, q[m]. Hereby, with the irregular structure of the DAoSA analog precoder, we propose an ISAC hybrid precoding algorithm by modifying the vectorization-based (VEC) algorithm that was used for THz communications in <cit.>.
§.§ VEC-based ISAC Hybrid Precoding Algorithm
§.§.§ Digital Precoding Design
When fixing the analog precoder, we can impose an orthogonal constraint that 𝐅_BB, q[m] is unitary to mitigate the interference among data streams. Then, the problem (<ref>) can be transferred to,
min_𝐅_BB, q[m] 1/M∑_m=0^M-1𝐆_q[m] - 𝐁_q 𝐅_BB, q[m]_F^2
s.t. 𝐅_RF, q∈ℱ,
𝐅_BB, q^H[m]𝐅_BB, q[m] = 𝐈_N_s,
m = 0, 1, ⋯, M - 1.
where
𝐆_q[m] = [√(η)𝐅_c^T[m], √(1 - η)𝐅_s, q^T ]^T,
𝐁_q = [√(η)𝐅_RF, q^T, √(1 - η)𝐅_RF, q^T ]^T.
Similar to the solution of the so-called Orthogonal Procrustes problem (OPP) <cit.>, the solution to (<ref>) is given by,
𝐅_BB, q[m] = 𝐕_1 𝐔^H,
where 𝐆_q^H[m] 𝐁_q = 𝐔Σ𝐕^H is the SVD of 𝐆_q^H[m] 𝐁_q, and 𝐕_1 is the first N_s columns of 𝐕.
§.§.§ Analog Precoding Design
When fixing the digital precoder, we carry the vectorization process and the analog precoding design problem can be formulated as,
min_𝐅_RF, q 1/M∑_m=0^M-1(ηvec(𝐅_c[m]) - vec(𝐅_RF, q𝐅_BB, q[m])_2^2 +
(1 - η) vec(𝐅_s, q) - vec(𝐅_RF, q𝐅_BB, q[m])_2^2 ).
After removing the zero elements in vec(𝐅_RF, q), we need to solve its non-zero part 𝐟_eff∈ℂ^N_c K_t × 1, where N_c denotes the number of closed switches. This is a phase rotation problem, whose solution is given by
𝐟_eff = exp(j ∠{∑_m=0^M-1𝐃^H vec(η𝐅_c[m] 𝐅_BB, q^H[m]
+ (1 - η) 𝐅_s, q𝐅_BB, q^H[m]) }),
where 𝐃 equals to 𝐈_N_t N_RF^t with d_1th, ⋯, d_N_t N_RF^t - N_c K_tth columns punctured, which correspond to the indices of zero elements in vec(𝐅_RF, q). Based on 𝐟_eff, the effective analog precoder 𝐅_RF, q can be recovered. With (<ref>) and (<ref>), we can alternatively calculate 𝐅_BB, q[m] and 𝐅_RF, q until convergence. After that, we finally update the digital precoders as
𝐅_BB, q[m] = √(N_s)/𝐅_RF, q𝐅_RF, q^†𝐆_q[m] _F𝐅_RF, q^†𝐆_q[m].
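A compact NumPy sketch of these alternating updates is given below. The initialization, the iteration count, and the use of the weighted target √(η)𝐅_c[m] + √(1 - η)𝐅_s, q in the final power normalization are our assumptions for illustration, not the exact implementation used in the experiments:

```python
import numpy as np

def vec_isac_precoding(F_c, F_s, P_S, eta, n_iter=20, seed=0):
    """Alternating-minimization sketch of the VEC ISAC hybrid precoding at one time slot.

    F_c : (M, N_t, N_s) optimal communication precoders per subcarrier
    F_s : (N_t, N_s)    optimal sensing precoder of the current time slot
    P_S : (N_t, N_RF)   binary DAoSA switch network matrix
    """
    rng = np.random.default_rng(seed)
    M, N_t, N_s = F_c.shape
    N_rf = P_S.shape[1]
    F_RF = P_S * np.exp(1j * 2 * np.pi * rng.random((N_t, N_rf)))   # feasible initialization
    F_BB = np.zeros((M, N_rf, N_s), dtype=complex)

    for _ in range(n_iter):
        # Digital step: orthogonal-Procrustes-type solution F_BB[m] = V_1 U^H
        B = np.vstack([np.sqrt(eta) * F_RF, np.sqrt(1 - eta) * F_RF])
        for m in range(M):
            G = np.vstack([np.sqrt(eta) * F_c[m], np.sqrt(1 - eta) * F_s])
            U, _, Vh = np.linalg.svd(G.conj().T @ B, full_matrices=False)
            F_BB[m] = Vh.conj().T @ U.conj().T
        # Analog step: closed-form phase rotation on the closed-switch entries
        M_acc = sum((eta * F_c[m] + (1 - eta) * F_s) @ F_BB[m].conj().T for m in range(M))
        F_RF = P_S * np.exp(1j * np.angle(M_acc))

    # Final digital precoders with transmit power normalization (weighted target assumed)
    F_pinv = np.linalg.pinv(F_RF)
    for m in range(M):
        T = np.sqrt(eta) * F_c[m] + np.sqrt(1 - eta) * F_s
        F_BB[m] = F_pinv @ T
        F_BB[m] *= np.sqrt(N_s) / np.linalg.norm(F_RF @ F_BB[m])
    return F_RF, F_BB
```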
While the VEC algorithm provides a satisfactory solution, it requires a number of iterations in each time slot. Nevertheless, the optimal communication precoder 𝐅_c[m] remains the same at different time slots during a frame duration, while only the optimal sensing precoder 𝐅_s, q changes. Motivated by this, we can calculate the initial solutions of analog and digital precoders from 𝐅_c[m] and then update the analog precoders only once at each time slot based on the sensing codebook. Thus, we further propose the following low-complexity sensing codebook-assisted (SCA) ISAC hybrid precoding algorithm.
§.§ Low-Complexity SCA Algorithm
Instead of using the weighted objective function in (<ref>), we can define a weighted ISAC precoder as, 𝐅_q[m] = β (√(η)𝐅_c[m] + √(1 - η)𝐅_s, q) with a normalization factor of β = √(N_s) / √(η)𝐅_c[m] + √(1 - η)𝐅_s, q_F. Before designing the ISAC analog and digital precoders, we can first obtain the solution of the analog precoder for the communication-only hybrid precoding design problem,
𝐅_RF = min_𝐅_RF, 𝐅_BB[m] 1/M∑_m=0^M-1𝐅_c[m] - 𝐅_RF𝐅_BB[m]_F^2
s.t. 𝐅_RF∈ℱ,
𝐅_RF𝐅_BB[m]_F^2 = N_s,
m = 0, 1, ⋯, M - 1,
which can be directly solved by the VEC algorithm.
Based on the initial analog precoder 𝐅_RF, we can update the analog precoder 𝐅_RF, q at the qth time slot with the desired sensing beamforming vector 𝐀(:, q). Specifically, we calculate the error between the analog precoding vectors of the phase shifters with closed switches and corresponding columns of 𝐀(:, q) as,
E_i , j = 𝐀((i-1)K_t+1:iK_t, q) - 𝐅_RF((i-1)K_t+1:iK_t, j)_2,
for all (i, j) satisfying 𝐩_i,j = 1_K_t. Then, we find the K_s smallest values of E_i, j with the indices {(i_1, j_1), ⋯, (i_K_s, j_K_s)}, where K_s = ⌈ N_c (1-η)⌉ denotes the number of subarray beamforming vectors that need to be updated. Next, we can set the designed analog precoder 𝐅_RF, q = 𝐅_RF and update it as,
𝐅_RF, q((i_k-1)K_t+1:i_k K_t, j_k) = 𝐀((i_k-1)K_t+1:i_k K_t, q)
for k = 1, ⋯, K_s. The digital precoders are calculated as
𝐅_BB, q[m] = √(N_s)/𝐅_RF, q𝐅_RF, q^†𝐅_q[m] _F𝐅_RF, q^†𝐅_q[m].
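The subarray-wise update of the SCA algorithm can be sketched as follows (the function interface is hypothetical; the final digital precoders then follow from the pseudo-inverse step above):

```python
import numpy as np

def sca_analog_update(F_RF_comm, A, q, P_S, eta):
    """Sensing-codebook-assisted update of the analog precoder at time slot q.

    F_RF_comm : (N_t, N_RF) communication-only analog precoder (from the VEC solution)
    A         : (N_t, Q)    DFT sensing codebook
    """
    N_t, N_rf = F_RF_comm.shape
    K_t = N_t // N_rf
    closed = [(i, j) for i in range(N_rf) for j in range(N_rf) if P_S[i * K_t, j] == 1]
    # Distance between each closed-switch subarray vector and the sensing codeword
    errors = {(i, j): np.linalg.norm(A[i * K_t:(i + 1) * K_t, q]
                                     - F_RF_comm[i * K_t:(i + 1) * K_t, j])
              for (i, j) in closed}
    K_s = int(np.ceil(len(closed) * (1.0 - eta)))      # subarray vectors to overwrite
    F_RF_q = F_RF_comm.copy()
    for (i, j) in sorted(errors, key=errors.get)[:K_s]:
        F_RF_q[i * K_t:(i + 1) * K_t, j] = A[i * K_t:(i + 1) * K_t, q]
    # Digital precoders then follow from the weighted ISAC precoder F_q[m]:
    # F_BB_q[m] = sqrt(N_s) / ||F_RF_q pinv(F_RF_q) F_q[m]||_F * pinv(F_RF_q) F_q[m]
    return F_RF_q
```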
§ SENSING ESTIMATION ALGORITHM DESIGN WITH DAOSA HYBRID BEAMFORMING
In this section, we propose the sensing parameter estimation algorithms at the sensing receiver. The task of the sensing receiver is to estimate the angle, range, and velocity of targets, given the transmit signal and the received sensing signal. As the whole sensing angular window is divided into Q scanning directions, at the qth time slot, we only sense the targets whose azimuth angles of arrival are within -Ω_q, given the knowledge of the received signal 𝐲_q and the transmit signal 𝐬_q.
For angle estimation, multiple signal classification (MUSIC) is a subspace-based method with super-resolution accuracy. Hereby, we adopt the DAoSA-MUSIC algorithm in <cit.> to estimate the target angle and propose the wideband DAoSA-MUSIC algorithm by extending it to wideband transmission. We need to reconstruct the observation matrix by performing stacking operations on the received signals at different subcarriers. After estimating each angle parameter, we develop a range and velocity parameter estimation algorithm over two stages, i.e., sum-DFT and golden section search (S-DFT-GSS).
§.§ W-DAoSA-MUSIC for Angle Estimation
At the qth time slot, we construct the observation vector of the sensing receiver 𝐲_q[m, n] ∈ℂ^N_RF^r × 1 as,
𝐲_q[m, n] = 𝐖_RF, q^H 𝐀_r 𝐒_q[m, n] + 𝐄_q[m, n],
where
𝐒_q[m, n] = Λ_q[m, n] 𝐀_t^T 𝐅_RF, q𝐅_BB, q[m] 𝐬_q[m, n],
𝐀_r = [𝐚_r(θ_1, ϕ_1), ⋯, 𝐚_r(θ_P, ϕ_P)],
𝐀_t = [𝐚_t(θ_1, ϕ_1), ⋯, 𝐚_t(θ_P, ϕ_P)],
Λ_q[m, n] = √(N_t N_r/P)diag{h_1^(q)[m, n], ⋯, h_P^(q)[m, n]},
𝐄_q[m, n] = 𝐖_RF, q^H 𝐞_q[m, n],
and h_p^(q)[m,n] = h_p e^-j2π m Δ f τ_p e^j2π ((q - 1) T_s + n T_o) ν_p. Then, we can stack all 𝐲_q[m, n] into one matrix as,
𝐘_θ, q = [[ 𝐲_q, 0 … 𝐲_q, N-1 ]]
with 𝐲_q, n = [𝐲_q[0, n],⋯, 𝐲_q[M-1, n]].
The precoders and the receive steering matrix 𝐀_r remain the same at different symbols during a time slot. Then (<ref>) can be written as,
𝐘_θ, q = 𝐖_RF, q^H 𝐀_r 𝐒_θ, q + 𝐄_q,
where 𝐒_θ, q = [𝐒_q[0, 0], ⋯, 𝐒_q[M-1, N-1]] is regarded as the P × M N-dimensional equivalent signal source matrix, and 𝐄_q ∈ℂ^N_RF^r × M N refers to the noise matrix. Based on (<ref>), we can perform the W-DAoSA-MUSIC algorithm to estimate the azimuth AoAs of targets.
Given the reconstructed observation matrix 𝐘_θ, q, the covariance matrix can be calculated as,
𝐑_θ, q = 1/M N𝐘_θ, q𝐘_θ, q^H.
Then we can conduct the eigenvalue decomposition (EVD) as,
𝐑_θ, q = 𝐔_s Σ_s 𝐔_s^H + 𝐔_n Σ_n 𝐔_n^H,
where Σ_s ∈ℂ^P_q × P_q consists of P_q leading eigenvalues, Σ_n ∈ℂ^(N_RF^r - P_q) × (N_RF^r - P_q) contains the remaining eigenvalues and P_q denotes the number of targets whose azimuth AoAs are within -Ω_q. With the signal subspace 𝐔_s ∈ℂ^N_RF^r × P_q and the noise subspace 𝐔_n ∈ℂ^N_RF^r × (N_RF^r - P_q), the pseudo spectrum of W-DAoSA-MUSIC can be formulated as,
𝐏_music(θ, ϕ) = 𝐚^H(θ, ϕ) 𝐖_RF, q𝐖_RF, q^H 𝐚(θ, ϕ)/𝐚^H(θ, ϕ) 𝐖_RF, q𝐔_n 𝐔_n^H 𝐖_RF, q^H 𝐚(θ, ϕ).
Finally, the AoA estimation (θ̂_p, ϕ̂_p) can be obtained by searching the peaks of the MUSIC spectrum within the angles of -Ω_q, expressed as
(θ̂_p, ϕ̂_p) = max_θ, ϕ𝐏_music(θ, ϕ).
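The pseudo-spectrum computation can be summarized by the following NumPy sketch, which assumes the number of targets P_q within the current angular window is known and evaluates the spectrum on a grid of candidate steering vectors:

```python
import numpy as np

def w_daosa_music_spectrum(Y, W_RF, steering, n_targets):
    """Pseudo-spectrum of W-DAoSA-MUSIC over a grid of candidate angles.

    Y        : (N_RF_r, M*N) stacked received observations Y_theta_q
    W_RF     : (N_r, N_RF_r) analog combiner of the sensing receiver
    steering : (N_r, G)      receive steering vectors a(theta_g, phi_g) on the search grid
    """
    R = Y @ Y.conj().T / Y.shape[1]                       # sample covariance matrix
    eigval, eigvec = np.linalg.eigh(R)                    # eigenvalues in ascending order
    U_n = eigvec[:, :R.shape[0] - n_targets]              # noise subspace
    Wa = W_RF.conj().T @ steering                         # (N_RF_r, G) combined steering
    num = np.sum(np.abs(Wa) ** 2, axis=0)                 # a^H W W^H a
    den = np.sum(np.abs(U_n.conj().T @ Wa) ** 2, axis=0)  # a^H W U_n U_n^H W^H a
    return num / den                                      # peaks give the AoA estimates
```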
§.§ S-DFT-GSS for Range and Velocity Estimation
For range and velocity estimation, the received signal model can be expressed as,
𝐲_q[m, n] = ∑_p=1^P h_p^(q) e^j2π n T_o ν_p e^-j2π m Δ f τ_p𝐱_p, q[m, n] + 𝐞_q[m, n],
where
𝐱_p, q[m, n] = 𝐖_RF, q^H 𝐇_θ(θ_p, ϕ_p) 𝐅_RF, q𝐅_BB, q[m] 𝐬_q[m, n],
𝐇_θ(θ, ϕ) = 𝐚_r(θ, ϕ) 𝐚_t^T(θ, ϕ)
𝐞_q[m, n] = 𝐖_RF, q^H 𝐞_q[m, n]
and h_p^(q) = √(N_t N_r/P) h_p e^j2π (q - 1) T_s ν_p. For each estimated AoA parameter (θ̂_p, ϕ̂_p), we can construct a maximum likelihood (ML) estimator by minimizing the log-likelihood function, given by
(τ̂_p, ν̂_p) = min_τ, ν, h∑_u=1^N_RF^r𝐘_u, q - hΨ(τ, ν) ⊙𝐗̂_u, q_F^2,
where
𝐘_u, q = [[ 𝐲_q(u)[0, 0] … 𝐲_q(u)[0, N - 1]; ⋮ ⋱ ⋮; 𝐲_q(u)[M-1, 0] … 𝐲_q(u)[M-1, N-1] ]],
Ψ(τ, ν) = Ψ_τΨ_ν^T,
𝐗̂_u, q = [[ 𝐱̂_q(u)[0, 0] … 𝐱̂_q(u)[0, N - 1]; ⋮ ⋱ ⋮; 𝐱̂_q(u)[M-1, 0] … 𝐱̂_q(u)[M-1, N-1] ]],
with
Ψ_τ = [e^-j2π 0 Δ f τ, e^-j 2π 1 Δ f τ, ⋯, e^-j 2π (M - 1) Δ f τ]^T,
Ψ_ν = [e^j2π 0 T_o ν, e^j2π 1 T_o ν, ⋯, e^j2π (N - 1) T_o ν]^T,
𝐱̂_q[m, n] = 𝐖_RF, q^H 𝐇_θ(θ̂_p, ϕ̂_p) 𝐅_RF, q𝐅_BB, q[m] 𝐬_q[m, n],
for u = 1, 2, ⋯, N_RF^r. Next, this minimization problem can be transformed to the maximization problem,
(τ̂_p, ν̂_p) = max_τ, ν𝐏_ML(τ, ν),
where
𝐏_ML(τ, ν) = |∑_u=1^N_RF^rTr((Ψ(τ, ν) ⊙𝐗̂_u, q)^H 𝐘_u, q)|^2/∑_u=1^N_RF^rΨ(τ, ν) ⊙𝐗̂_u, q_F^2
∝ |∑_u=1^N_RF^rTr((Ψ(τ, ν) ⊙𝐗̂_u, q)^H 𝐘_u, q)|^2,
where the denominator can be discarded because the entries of Ψ(τ, ν) have unit modulus, so that ∑_u=1^N_RF^rΨ(τ, ν) ⊙𝐗̂_u, q_F^2 = ∑_u=1^N_RF^r𝐗̂_u, q_F^2 is independent of (τ, ν).
The solution in (<ref>) is obtained by searching (τ, ν) at which 𝐏_ML(τ, ν) achieves a maximum value in the region [0, 1/Δ f)× [-1/2T_o, 1/2 T_o).
To reduce the computational complexity, we can design a two-phase estimation method. Specifically, in the first phase, we perform the on-grid search within a discretized set of delay and Doppler axes with step sizes 1/(MΔ f) and 1/(N T_o), which can be implemented with the 2D DFT algorithm. In the second phase, based on the coarse estimation result, we conduct the off-grid estimation by introducing a 2D golden section search (GSS) method. We describe the proposed S-DFT-GSS estimation method in the following.
§.§.§ Phase I
To compute the ML estimator in (<ref>), we first perform an on-grid search on the discretized grid Γ = {(m_0/(M Δ f), n_0/(N T_o)), m_0 = 0, ⋯, M - 1, n_0 = -N/2, ⋯, N/2-1}, as
(m̂_0, n̂_0) = max_(τ, ν)∈Γ𝐏_ML(m_0/(M Δ f), n_0/(N T_o)).
Hereby, we need to calculate the M× N-dimensional ML estimator profiles on Γ, which can be computed from the sum of N_RF^r 2D DFT outputs, given by
𝐏_ML(m_0/(M Δ f), n_0/(N T_o)) = |𝐠_d(m_0 + 1, [n_0]_N + 1)|^2
where
𝐠_d = ∑_u=1^N_RF^r𝐅_M^H (𝐗̂_u, q^* ⊙𝐘_u, q) 𝐅_N,
and 𝐅_M∈ℂ^M× M and 𝐅_N ∈ℂ^N × N refer to the normalized DFT matrices. Then we determine that the delay parameter lies between (m̂_0 - 1)/(M Δ f) and (m̂_0 + 1)/(M Δ f) and the Doppler parameter lies between (n̂_0 - 1)/(N T_o) and (n̂_0 + 1)/(N T_o). Thus, the search region Γ_g for off-grid estimation in the second phase becomes,
{(τ, ν), (m̂_0 - 1)/(M Δ f) ≤τ≤ (m̂_0 + 1)/(M Δ f), (n̂_0 - 1)/(N T_o) ≤ν≤ (n̂_0 + 1)/(N T_o)}.
§.§.§ Phase II
In this phase, we perform an off-grid search over the continuous-valued region Γ_g, as
(τ̂_p, ν̂_p) = max_(τ, ν)∈Γ_g𝐏_ML(τ, ν).
Hereby, we can utilize the 2D golden section search technique, each step of which reduces the interval of uncertainty by
the golden ratio. Finally, the estimated range and velocity are given by r̂_p = τ̂_p c_0/2 and v̂_p = ν̂_p c_0/(2 f_c), respectively.
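Phase I admits a particularly simple implementation via FFTs. The sketch below returns the coarse delay and Doppler estimates; normalization constants are omitted since they do not affect the peak search, and the golden-section refinement of Phase II is only indicated in the comments:

```python
import numpy as np

def sdft_coarse_estimate(Y_list, X_list, M, N, delta_f, T_o):
    """Phase I of S-DFT-GSS: coarse (delay, Doppler) estimate via a sum of 2D DFTs.

    Y_list, X_list : per-RF-chain (M, N) received and reconstructed transmit matrices
    """
    # g_d = sum_u F_M^H (X_u^* ⊙ Y_u) F_N; normalization is irrelevant for the peak search
    g_d = sum(np.fft.fft(np.fft.ifft(np.conj(X) * Y, axis=0), axis=1)
              for X, Y in zip(X_list, Y_list))
    P_ml = np.abs(g_d) ** 2
    m0, n0 = np.unravel_index(np.argmax(P_ml), P_ml.shape)
    n0 = n0 if n0 < N // 2 else n0 - N        # map DFT bin to signed Doppler index
    tau_hat = m0 / (M * delta_f)              # coarse delay, to be refined by 2D GSS
    nu_hat = n0 / (N * T_o)                   # coarse Doppler, to be refined by 2D GSS
    return tau_hat, nu_hat                    # then r = tau*c0/2, v = nu*c0/(2*f_c)
```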
§ ISI- AND ICI-TACKLED SENSING ALGORITHM
In the previous section, the proposed estimation algorithm is based on the assumption that the round-trip delay of targets is not longer than the CP duration and the Doppler shifts are much smaller than the subcarrier spacing, i.e., the sensing channel is both ISI- and ICI-free. Nevertheless, when it comes to the THz band, this assumption might become invalid in some cases. First, as the carrier frequency increases, the Doppler shift in the THz band grows much larger than the microwave band, which may cause inter-carrier interference and degrade sensing accuracy, especially in high-mobility scenarios. Second, with the decrease of communication delay spread in the THz band, larger subcarrier spacing can be used and the symbol and CP durations are reduced. However, this limits the maximum sensing distance if still using the proposed ISI- and ICI-unaware sensing algorithm in Sec. <ref> even when the link budget is sufficient.
In this section, we first derive the received signal model with ISI and ICI caused by the sensing channel and then develop an ISI- and ICI-tackled sensing algorithm to overcome the estimation problem with ICI and ISI. Since we take into account the ISI and ICI effects, we focus on the time-frequency domain signal model and design, by simplifying the notations of the spatial domain in this section.
§.§ Received Signal Model with ICI and ISI
During a time slot, we denote the data signal at the mth subcarrier and the nth symbol as X_m, n. Then, the transmit baseband signal with the CP part is expressed as,
s(t) = ∑_m=0^M-1∑_n=0^N-1 X_m, n rect(t - n T_o) e^j 2π m Δ f (t - T_cp - n T_o),
where rect(t) refers to a rectangular pulse that is limited to [0, T_o]. At the sensing receiver, the baseband time-domain continuous signal r(t) is given by,
r(t) = ∑_p=1^Pα_p e^j2πν_p t s(t - τ_p) + w(t),
where α_p stands for the channel coefficient of the pth target, w(t) denotes the AWGN, delay and Doppler parameters are described in Sec. <ref> with relaxing the assumptions τ_p ⩽ T_cp into τ_p ⩽ T_s and ν_p ≪Δ f into ν_p < Δ f. By sampling the received signal and removing the CP part, we obtain the baseband time-domain discrete signal,
r_m, n = r(t)|_t = nT_o + T_cp + m/MT
= ∑_p=1^Pα_p e^j2 πν_p (n T_o + T_cp + m/M T) s(nT_o + T_cp + m/M T - τ_p)
+ w_m, n.
Hereby, the key step is to derive the sampling signal s_τ_p, m, n = s(nT_o + T_cp + m/M T - τ_p), given by
s_τ_p, m, n = ∑_m'=0^M-1∑_n'=0^N-1 X_m', n'rect((n - n')T_o + T_cp + m/MT - τ_p)
× e^j 2π m' Δ f ((n - n') T_o + m/MT - τ_p ).
When k_p T_o ⩽τ_p < k_p T_o + T_cp with k_p = ⌊τ_p/T_o⌋ (⌊·⌋ stands for the floor function), we can obtain
s_τ_p, m, n = ∑_m'=0^M-1 X_m', n-k_p e^j2πm' m/M e^-j2π m' Δ f τ_p e^j2π m' k_p M_cp/M.
When k_p T_o + T_cp⩽τ_p < (k_p + 1) T_o, for m ⩾τ_p/T M - M_cp - k_p(M+M_cp), s_τ_p, m, n is the same as that in (<ref>). For m < τ_p/T M - M_cp - k_p (M + M_cp), we obtain
∑_m'=0^M-1 X_m', n-k_p-1 e^j2πm' m/M e^-j2π m' Δ f τ_p e^j2π m' k_p T_cp/T e^j2π m' M_cp/M.
Based on the above derivations, we can derive the time-domain input-output relation, i.e., the vector form of the received signal time-domain r_m, n at the q time slot, 𝐫_q ∈ℂ^MN× 1, is expressed as,
𝐫_q = ∑_p=1^Pα_p Δ^(ν_p)𝐃_NΠ_2MN^l_p + k_p Mvec( Π_M^-l_p (𝐃_l_pΠ_M^-M_cp + 𝐃̂_l_p)
·𝐅_M^H 𝐛_τ_p [𝐗_q-1, 𝐗_q ] ) + 𝐰_q,
where 𝐗_q ∈ℂ^M× N denotes the time-frequency domain transmit signal at the qth time slot, l_p = max{0, ⌈τ_p/T M - M_cp - k_p(M + M_cp) ⌉} (⌈·⌉ describes the ceiling function), Δ^(ν_p) = diag(vec(𝐕_ν_p)) with 𝐕_ν_p(m, n) = e^j2πν_p (n T_o + T_cp + m/M T), the matrix Π_M∈ℂ^M× M refers to the forward cyclic-shift (permutation) matrix, 𝐃_N equals to the identity matrix 𝐈_2MN with the first MN rows punctured, 𝐃_l_p equals to the identity matrix 𝐈_M with the last M - l_p rows turning into zero elements, 𝐃̂_l_p equals to the identity matrix 𝐈_M with the first l_p rows becoming zero elements, 𝐛_τ_p = diag{b_τ_p^0, ⋯, b_τ_p^M-1} with b_τ_p = e^j2π(k_pT_cp/T - τ_p/T), and 𝐰_q is the noise vector.
After performing DFT on the matrix form of 𝐫_q, 𝐑_q = vec^-1(𝐫_q) ∈ℂ^M× N, we obtain the frequency-domain received signal 𝐲_q ∈ℂ^MN× 1 at the qth time slot, given by
𝐲_q = vec(𝐅_M 𝐑_q)
= ∑_p=1^Pα_p 𝐇_p(τ_p, ν_p) [𝐱_q-1^T, 𝐱_q^T]^T + 𝐰_q,
where the matrix 𝐇_p(τ_p, ν_p) ∈ℂ^MN× 2MN is given by,
𝐇_p(τ_p, ν_p) = (𝐈_N ⊗𝐅_M) Δ^(ν_p)𝐃_NΠ_2MN^l_p + k_p M( 𝐈_2N⊗( Π_M^-l_p
·(𝐃_l_pΠ_M^-M_cp + 𝐃̂_l_p) 𝐅_M^H 𝐛_τ_p) ),
and 𝐱_q-1 = vec(𝐗_q - 1), 𝐱_q = vec(𝐗_q). If the ISI and ICI effects are ignored, the input-output relation in the time-frequency domain is approximated as the following matrix form,
𝐘_q ≈∑_p=1^P α_p 𝐗_q ⊙Ψ(τ_p, ν_p) + 𝐖_q.
The ISI- and ICI-unaware estimation is based on this approximated input-output relation, which is not accurate and causes estimation error in the presence of ISI and ICI effects.
§.§ ISI- and ICI-tackled Estimator
Based on the received sensing signal model with ISI and ICI in (<ref>), we can obtain the ISI- and ICI-tackled estimator, given by
(τ̂, ν̂) = max_τ, ν(𝐇_p(τ, ν) [𝐱_q-1^T, 𝐱_q^T]^T )^H 𝐲_q/𝐇_p(τ, ν) [𝐱_q-1^T, 𝐱_q^T]^T _2^2.
The complexity of the proposed ISI- and ICI-tackled estimation algorithm depends on the computation of 𝐇_p(τ, ν) [𝐱_q-1^T, 𝐱_q^T]^T. This can be implemented with computationally efficient operations, including FFT algorithms, cyclic shift, vectorization, and Hadamard product. Thus, the overall computational complexity of this estimator is 𝒪(MN log (MN)).
§ NUMERICAL RESULTS
In this section, we evaluate the sensing and communication performance of the proposed precoding algorithms and sensing parameter estimation methods. The key simulation parameters are listed in Table <ref>, which refer to the physical layer numerology for beyond 52.6 GHz communications in <cit.> and the THz link budget analysis in <cit.>. We consider a THz multipath channel with one LoS path and L_N = 4 NLoS paths.
In the simulations, we consider 2D beamforming, i.e., all elevation angles are set as ϕ_0 = 90^∘.
§.§ Performance of Hybrid Precoding Algorithms for THz ISAC
First, we evaluate the performance of the proposed VEC and SCA hybrid precoding algorithms for THz ISAC in terms of spectral efficiency and transmit beamforming gain towards the sensing direction. Specifically, we consider three hybrid precoding architectures, i.e., FC, AoSA, and DAoSA structures. In comparison, the PE-AltMin approach <cit.> and the TAltMin <cit.> algorithm are used for the FC and the AoSA structures, respectively. The proposed VEC and SCA algorithms are performed for the DAoSA architecture, which is equivalent to FC with N_c = (N_RF^t)^2 and AoSA with N_c = N_RF^t. Since we focus on the evaluations of the hybrid precoding design, the FC combining architecture is set at the communication receiver side. Moreover, the performance of fully digital precoding is evaluated as an upper bound. The subcarrier spacing is set as 1.92 MHz and the number of subcarriers equals 64. The signal-to-noise ratio (SNR) of the communication link is -20 dB.
As shown in Fig. <ref>, the performance tradeoff between spectral efficiency and transmit sensing beamforming gain using different hybrid precoding algorithms is plotted by setting the weighting factor within [0, 1]. We learn that the spectral efficiency decreases as the transmit sensing beamforming gain is improved as expected, since more energy is concentrated toward the sensing direction. In the FC structure, the proposed VEC algorithm performs slightly better than the PE-AltMin approach and achieves close performance to the fully digital precoding. In the AoSA architecture, the VEC algorithm realizes higher spectral efficiency than the TAltMin method when η > 0.5, i.e., communication dominates the precoding design. Moreover, while the proposed VEC algorithm outperforms the SCA method for all dynamic hybrid beamforming structures, the SCA algorithm is more computationally efficient.
Next, we investigate the spectral efficiency versus SNR with different numbers of closed switches. In Fig. <ref>, compared to the communication-only precoding design (η = 1), the spectral efficiency of the ISAC precoding design (η = 0.6) is reduced by approximately 2.5 bits/s/Hz at the SNR of -30 dB. When N_c = 16, the DAoSA structure becomes FC, and the proposed VEC ISAC hybrid precoding algorithm achieves near-optimal performance over the whole SNR range. With fewer closed switches, fewer phase shifters are used, which causes some performance loss while improving energy efficiency.
§.§ Transmit Beampattern
We illustrate the transmit beampattern of the designed hybrid precoders in Fig. <ref> and Fig. <ref> for different weights of ISAC precoding design and beam scanning over sequential time slots.
As shown in Fig. <ref>, η = 0 corresponds to the sensing-only precoder 𝐅_s, q. In this case, both the proposed VEC and SCA can realize the desired beampattern in the FC (N_c = 16) and AoSA (N_c = 4) architectures, which is generated from the DFT sensing codebook. When η becomes 0.5, we learn that the beamforming gain toward the sensing direction is slightly reduced while several communication sub-beams are formed and point to the angles of communication paths. In the case of η = 1, the communication-only precoding design does not generate sensing beams toward the sensing direction and concentrates all beams toward the communication receiver. In addition, it is demonstrated that the transmit beam in the FC structure realizes a beampattern more similar to that of the fully digital precoding than the AoSA structure does.
In Fig. <ref>, it is shown that during a frame duration, the designed THz ISAC transmit signal can generate sweeping beams to scan possible targets in the surrounding environment over different time slots and stable beams toward the communication user to enable ultra-fast data transmission. We observe that the transmit beamforming gains toward the sensing direction can achieve approximately 20 dBi as the beam angle varies, while the communication beams remain similar at different time slots.
Complexity Analysis: We denote N_iter as the number of iterations of the alternating minimization in the VEC algorithm for each time slot. The overall computational complexity of the VEC-based ISAC hybrid precoding algorithm is given by 𝒪(Q N_iter N_t^2 ).
Since the SCA ISAC hybrid precoding algorithm does not require the process of alternating minimization for each time slot, it can reduce the computational complexity to 𝒪(N_iter N_t^2) compared with the VEC algorithm.
§.§ Sensing Accuracy
We further investigate the effectiveness of the proposed sensing algorithm with the DAoSA hybrid beamforming architecture. In Fig. <ref>, a number of sensing targets are randomly distributed between -90^∘ and 90^∘. We conduct beam scanning by using the proposed hybrid precoding algorithms in Sec. <ref> and then plot the normalized range profile based on the back-reflected sensing received signal by using the proposed sensing estimation algorithms in Sec. <ref>. At the qth time slot, we estimate the parameters of the target within the sensing angular window Ω_q. With the time-frequency-space transmit design, we realize entire-space multi-target sensing, although the directional narrow beams are used in the THz band.
Moreover, we evaluate the sensing accuracy of angle, range, and velocity estimation with the proposed sensing algorithm. In Fig. <ref>, we set the target parameters including the azimuth angle of 70^∘, the distance of 15 m, and the velocity of 20 m/s. The waveform parameters are M = 64 and Δ f = 3.84 MHz. The number of closed switches is 4 at both transmitter and sensing receiver sides. As the sensing SNR increases, the sensing accuracy is improved. Specifically, we observe that the angle, range, and velocity estimation can achieve centi-degree-level, millimeter-level, and decimeter-per-second-level accuracy, respectively. In addition, by decreasing the weighting factor η from 0.6 to 0.4, the sensing accuracy is improved, since more power is allocated to the sensing beam.
Complexity Analysis: The computational complexity of EVD in (<ref>) is 𝒪((N_RF^r)^3). Since N_RF^r is much smaller than N_r, the overall computational complexity of W-DAoSA-MUSIC mainly depends on the matrix-vector multiplication in (<ref>), namely, 𝒪(N_RF^r N_r). The computational complexity of the S-DFT-GSS algorithm is 𝒪(N_RF^r M N log (MN)) in the first phase and 𝒪(N_gss N_RF^r M N) in the second phase, where N_gss denotes the iterations of golden section search.
§.§ ISI and ICI Effects on Sensing Parameter Estimation
Finally, we study the ISI and ICI effects on sensing parameter estimation for THz ISAC systems. The subcarrier number is set as 1024. The considered scenario contains 3 targets with the ranges (10, 20, 30) m and the effective SNRs (-10, -15, -20) dB considering the beamforming gain. In Fig. <ref>, we compare the ICI-unaware and ICI-tackled estimation algorithms under two cases, i.e., sensing channels with weak and strong ICI effects, respectively. As shown in Fig. <ref>(a), the velocity of targets is set as 5 m/s, which corresponds to the low-mobility scenario. In this case, we learn that both ICI-unaware and ICI-tackled sensing algorithms have similar estimation results and can accurately estimate the parameters of 3 targets. Nevertheless, when the target velocity increases to 50 m/s in Fig. <ref>(b), with ICI-unaware estimation, ICI effects increase side-lobe levels of the target with the strongest power, which may cause masking of weak targets or large errors on the parameters of the other two targets. The distance of the target at 30 m is estimated as 29.4 m and the target at 20 m cannot be detected successfully due to the ambiguity caused by side lobes. In contrast, the proposed ICI-tackled sensing algorithm can overcome this problem and still accurately estimate these three targets.
In Fig. <ref>, we consider the ISI effects on THz ISAC systems. We consider the scenario containing 2 targets with the ranges (10, 45) m, the same velocity v = 5 m/s, and the effective SNRs (-10, -10) dB considering the beamforming gain. As shown in Fig. <ref>(a), when the subcarrier spacing is 480 kHz, the CP-limited maximum sensing distance is 78 m, which is longer than the target ranges. In this case, there is no ISI effect and we can obtain accurate estimated values of target ranges by using the ISI-unaware sensing algorithm. When the delay spread of the THz communication channel decreases, we can increase the subcarrier spacing and the CP duration becomes shorter, which reduces the CP-limited sensing distance. In Fig. <ref>(b), the subcarrier spacing increases to 3.84 MHz, and the CP-limited sensing distance is 9.8 m, which is shorter than the target ranges. Thus, there exist ISI effects on the received sensing signal. According to the normalized range profile using the ISI-unaware sensing algorithm, the range of the second target is estimated as 49 m, while the ground truth is 45 m. By comparison, the ISI-tackled sensing algorithm still performs well and is robust against the ISI effect.
§ CONCLUSION
In this paper, we have proposed a THz ISAC system framework, including the time-frequency-space transmit design with the DAoSA hybrid beamforming architecture and OFDM waveform, and sensing algorithms for angle, range, and velocity estimation. We propose two ISAC hybrid precoding algorithms, i.e., the near-optimal VEC method and the low-complexity SCA approach. Meanwhile, in the ISI- and ICI-free case, we propose the W-DAoSA-MUSIC angle estimation algorithm and the S-DFT-GSS range and velocity estimation method. Furthermore, when there exist ISI and ICI effects on target estimation in the THz band, we develop the ISI- and ICI-tackled sensing algorithm to overcome the CP limitation and high-mobility target estimation problem.
With extensive simulations, the results indicate that the proposed VEC ISAC hybrid precoding algorithm can achieve close performance to fully digital precoding and outperforms other existing methods. The developed SCA algorithm can reduce computational complexity by removing the process of alternating minimization for each time slot. Meanwhile, with the proposed estimation algorithms, centi-degree-level angle estimation, millimeter-level range estimation, and decimeter-per-second-level velocity estimation can be realized in THz ISAC systems.
|
http://arxiv.org/abs/2307.04129v1 | 20230709085847 | Cross-modal Orthogonal High-rank Augmentation for RGB-Event Transformer-trackers | ["Zhiyu Zhu", "Junhui Hou", "Dapeng Oliver Wu"] | cs.CV | ["cs.CV"] |
Cross-modal Orthogonal High-rank Augmentation
for RGB-Event Transformer-trackers
Zhiyu Zhu, Junhui Hou, and Dapeng Oliver Wu
Department of Computer Science, City University of Hong Kong
[email protected]; [email protected]; [email protected]
August 12, 2023
This paper addresses the problem of cross-modal object tracking from RGB videos and event data. Rather than constructing a complex cross-modal
fusion network, we explore the great potential of a pre-trained vision Transformer (ViT). Particularly, we delicately investigate plug-and-play training augmentations that encourage the ViT to bridge the vast distribution gap between the two modalities, enabling comprehensive cross-modal information interaction and thus enhancing its ability.
Specifically, we propose a mask modeling strategy that randomly masks a specific modality of some tokens to enforce proactive interaction between tokens from different modalities.
To mitigate network oscillations resulting from the masking strategy and further amplify its positive effect, we then theoretically propose an orthogonal high-rank loss to regularize the attention matrix.
Extensive experiments demonstrate that our plug-and-play training augmentation techniques can significantly boost state-of-the-art one-stream and two-stream trackers to a large extent in terms of both tracking precision and success rate. Our new perspective and findings will potentially bring insights to the field of leveraging powerful pre-trained ViTs to model cross-modal data. The code will be publicly available.
§ INTRODUCTION
Event cameras asynchronously capture pixel intensity fluctuations with an ultra-high temporal resolution, low latency, and high dynamic range, making it gain increasing attention recently <cit.>. Owing to such admirable advantages, event cameras have been widely adopted in various applications, such as object detection <cit.> and depth/optical flow estimation <cit.>. Particularly, the distinctive sensing mechanism makes event cameras to be a promising choice for object tracking <cit.>.
Despite many advantages of event-based object tracking under special environments, e.g., low-light, high-speed motion, and over-exposed scenes, event data lack
sufficient visual cues, such as color, texture, and complete contextual appearance, that can be easily captured by RGB data,
resulting in purely event-based vision still suffering from relatively inferior performance in practice. Thus, a more promising direction is to investigate cross-modal object tracking from both RGB and event data, where the merits of the two modalities can be well leveraged for pursuing higher performance.
However, the vast distribution gap between RGB and event data poses significant challenges in designing algorithms for modeling cross-modal information.
Most existing pioneering cross-modal trackers rely heavily on elaborate cross-modal fusion modules, which makes it cumbersome to adopt advanced embedding backbones for boosting performance.
In view of the success of Transformer-based tracking algorithms <cit.>, where the multi-head attention naturally models the indispensable correlation relationship between template and search regions, we plan to investigate the potential of pre-trained powerful vision Transformers (ViTs) in cross-modal object tracking from both RGB and event data.
However, those pre-trained Transformers with RGB data may not be able to fully model the essential feature interaction across RGB and event data, due to the distribution gap between the two modalities.
To this end, we study plug-and-play training techniques for augmenting the pre-trained Transformer used as the embedding backbone of our RGB-event object tracking framework.
To be specific, to promote the learning of the attention layer across two modalities, we propose a cross-modal mask modeling strategy, which randomly masks/pops out the multi-modal tokens. We anticipate that, in reaction to the absence of a particular modality at certain locations, the network would proactively enhance interactions on the remaining cross-modal tokens. Nevertheless, randomly masking tokens will inevitably alter data distributions and introduce disruptions, impeding network training. To mitigate the induced negative effect, we further propose a regularization term to guide the training of each attention layer. Based on the observation that the values of internal attention matrices of a Transformer indicate the degree of cross-modal feature interaction,
we propose to orthogonalize the attention matrix to promote its rank obligatorily. Beyond, we anticipate that such regularization could encourage the cross-modal correlation to be evenly and concisely established using the multi-domain signatures, rather than unduly reliant on a specific domain. Finally, we apply the proposed techniques to state-of-the-art one-stream and two-stream Transformer-based tracking frameworks and experimentally demonstrate that their tracking performance is further boosted significantly.
In summary, the contributions of this paper are:
* a mask modeling strategy for encouraging the interaction between the cross-modal tokens in a proactive manner;
* theoretical orthogonal high-rank regularization
for suppressing network fluctuations induced by cross-modal masking while amplifying its positive effect;
and
* new state-of-the-art baselines for RGB-event object tracking.
Last but not least, our novel perspectives will potentially bring insights to the field of leveraging pre-trained powerful ViTs to process and analyze cross-modal data.
§ RELATED WORK
§.§ Object Tracking
Recent years have seen remarkable progress in the study of object tracking, which is primarily due to the widespread success of deep learning <cit.>. Based on the distribution of computational burdens, current methods could be generally divided into two-stream <cit.> and one-stream methods <cit.>. As the earlier invented and relatively mature ones, most offline Siamese-based tracking methods <cit.> fall into the first category. They utilize a delicate embedding backbone to extract semantic-rich embeddings and then model the target location via either a direct proposal head <cit.> or an online optimization process <cit.>, which are also called deep Siamese trackers or discriminative correlation filters, respectively <cit.>. SiamFC <cit.> first developed a fully-convolutional architecture to fuse template and search embeddings for object tracking. Through introducing a single-stage RPN <cit.> detector, SiamRPN <cit.> achieved target object tracking by comparing the current-frame features to those from a template. To remove the disturbance factors, e.g., padding, SiamRPN++ <cit.> introduced a spatial-aware sampling strategy and further utilized ResNet <cit.> to embed representative features for Siamese-based tracking.
DiMP <cit.> proposed to exploit both target and background appearances to achieve object tracking. KYS <cit.> represented the scene information as dense state vectors and utilizes such state vectors to maximize the tracking performance. Besides, some spatio-temporal-based methods also exploit temporal information to achieve robust and effective tracking <cit.>. MDNet <cit.> separated domain-independent from domain-specific information via a CNN-based framework. RT-MDNet <cit.> further improved it via an RoI-Align strategy, which extracts more precise embeddings from feature maps of targets and candidates. Swin-tracker <cit.> introduced the Swin-Transformer <cit.> to effectively encode the semantic information from input images for high-performance visual tracking.
Due to the extraordinary correlation modeling ability of Transformers, an emerging branch of one-stream methods shows strong potential in correlation modeling. OS-track <cit.> unified the embedding and relation modeling processes with a single vanilla ViT <cit.>, achieving admirable performance with reduced computational resources. Meanwhile, SimViT-Track <cit.> proposed a similar approach, which feeds search and template image tokens straight into a ViT backbone and performs regression and classification on the resulting tokens.
In summary, with the success of existing embedding backbones, such as ViT <cit.> and Swin-Transformer <cit.>, more intriguing and effective methods have been proposed recently. While these methods achieve admirable performance, most of them are driven by matching semantically identical segments of the search and template regions viewed as RGB images. As a result, their performance is inextricably tied to imaging characteristics, which can be compromised in specific scenarios such as high-speed and low-light scenes. Hence, it is highly desirable to incorporate multi-modal inputs so that each modality can remedy the deficiencies of the other. Moreover, such cross-modal data necessitates additional effort to generalize these methods to the event-based setting.
§.§ Event-based Tracking
Owing to the innate characteristics of event cameras and their advantages for object tracking, event-based tracking has become an increasingly prevalent research subject in recent years. Existing approaches may be broadly classified into two categories: model-based and data-driven. By describing the surrounding environment with a photometric 3D map, Bryner et al. <cit.> proposed to track the 6-DOF pose of a camera. To capture the spatio-temporal geometry of event data, Mitrokhin et al. <cit.> utilized a parametric model to compensate for camera motion. Based on a tracking-learning-detection pipeline, Ramesh et al. <cit.> proposed an object tracking algorithm for event cameras, which is the first learning-based long-term event tracker. Then, Li et al. <cit.> introduced VGG-Net-16 to encode the appearance of the event-stream object. Inspired by the classic Siamese-matching paradigm, Chae et al. <cit.> proposed to track objects by learning an edge-aware similarity in the event domain. Recently, Zhang et al. <cit.> introduced a spiking transformer for encoding spatio-temporal information for object tracking. Moreover, Zhu et al. <cit.> proposed to utilize the inherent motion information of event data to achieve effective object tracking. To summarize, although some promising studies provide useful insights for event-based tracking, only a limited number of works have sought complementary information, e.g., semantic information, from RGB data.
§.§ Cross-modal Learning
Fusing embeddings from multiple modalities is a sensible solution for perceiving and recognizing objects robustly and accurately <cit.>. However, for current machine learning algorithms, learning representative patterns from multiple modalities is still a challenging issue <cit.>. Wang et al. <cit.> proposed to apply data augmentation techniques to boost cross-modal 3D object detection. Liu et al. <cit.> utilized cross-modal feature rectification and fusion models for image segmentation with input from multiple modalities. Jaritz et al. <cit.> solved the multi-modal segmentation issue from the perspective of unsupervised domain adaptation. Moreover, Wang et al. <cit.> designed an RGB-T tracking framework by propagating the intermodal pattern and long-term context. Ye et al. <cit.> proposed a cross-modal self-attention module to achieve natural language-based image segmentation via adaptively capturing informative words and important regions in images. Zeng et al. <cit.> proposed to project the camera features onto the point set of a LiDAR. In summary, recent works clearly center on network architecture design, as is evident from their prevalence. Moreover, the current advanced Transformer paradigm can adaptively process different modalities, yet investigations and analyses of its internal mechanism are still lacking.
§ PROPOSED METHOD
§.§ Motivation
Learning the correlation between the template and search regions robustly and precisely is one of the most essential aspects of object tracking. Fortunately, with current advancements in the multi-head attention mechanism, such correlation
could be naturally achieved via Transformer-based frameworks <cit.>.
However, current powerful ViTs are usually pre-trained only on RGB data, e.g., ImageNet <cit.>. Owing to the vast distribution gap between RGB and event data, they may not be adequately adapted to cross-modal learning, i.e., the full feature interaction between RGB and event data, which is essential for cross-modal object tracking, cannot be well achieved. Accordingly, the tracking performance may be limited.
Instead of following existing cross-modal research paradigms mainly focused on designing sophisticated cross-modal information fusion networks, we aim to explore plug-and-play training augmentation techniques to mitigate the above-mentioned potential limitation of a pre-trained ViT used as the embedding backbone of an RGB-Event object tracking scheme.
Generally, based on the fundamental premise that each modality possesses its own unique benefits for a cross-modal tracker, token embedding information should be adequately transmitted across modalities, especially in regions containing target objects, so that the tokens of one modality can be enhanced by the specific merits of the other. Thus, we propose a mask modeling strategy to enable the network to proactively exploit the cross-modal information in Sec. <ref>. Furthermore, we propose a high-rank orthogonalization mechanism in Sec. <ref>, which can not only alleviate network fluctuations induced by the mask modeling strategy but also further boost cross-modal information interaction.
In what follows, we will detail the proposed techniques adapted to both one-stream and two-stream trackers, as illustrated in Fig. <ref> (b) and Fig. <ref> (c), respectively.
We always use I and E in the subscripts to indicate the RGB and event modalities, and T and S are the tokens of template and search regions, respectively.
§.§ Mask-driven Cross-modal Interaction
Grouping tokens via similarity is one of the most representative steps for the self-attention mechanism of a Transformer <cit.>. However, due to the distribution gap between
tokens corresponding to different modalities, similarity-driven attention tends to aggregate information from the same modality, hence impeding cross-modal learning.
Thus, how to effectively and efficiently promote the cross-modal interactions is critical for maximizing the potential of a pre-trained ViT for RGB-event object tracking.
We propose a cross-modal mask modeling strategy to address this issue in a proactive manner, as shown in Fig. <ref> (a).
As illustrated in Fig. <ref>, the underlying intuition of this strategy
is that, by removing patches of different modalities at different locations, the task loss is expected to enforce the network to spontaneously build or enhance cross-modal correlations from the remaining tokens of the other modality. Once the interaction is established, the RGB and event tokens may learn to shrink the distribution gap, carrying this correlation over to the inference phase.
Specifically, we apply random masks to RGB and event data to remove distinct patches.
To begin, for the one-stream methods, masking can be readily accomplished by simply popping out the corresponding tokens, which concurrently lessens the network training burden.
For the two-stream methods, due to the large computational resource consumption of the embedding backbone, we directly average the masked features of RGB and event data at the primary stage, which are
further fed into the high-level embedding backbone and relation modeling modules for the object proposal.
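As an illustration, the masking step could be implemented roughly as follows. This is a minimal PyTorch-style sketch written for this report, not code from any released implementation; the tensor shapes, helper names, and default ratios (mirroring the settings reported in the experiments) are assumptions.

import torch

def random_keep_indices(num_tokens, mask_ratio, device):
    # Indices of tokens that survive after randomly dropping a fraction mask_ratio.
    num_keep = max(1, int(num_tokens * (1.0 - mask_ratio)))
    perm = torch.randperm(num_tokens, device=device)
    return perm[:num_keep]

def mask_one_stream(rgb_tokens, event_tokens, ratio_rgb=0.1, ratio_event=0.1):
    # One-stream case: masked tokens are simply popped out before the ViT, so the
    # surviving RGB and event tokens are concatenated along the token dimension.
    # rgb_tokens, event_tokens: (batch, num_tokens, channels)
    device = rgb_tokens.device
    keep_rgb = random_keep_indices(rgb_tokens.shape[1], ratio_rgb, device)
    keep_event = random_keep_indices(event_tokens.shape[1], ratio_event, device)
    return torch.cat([rgb_tokens[:, keep_rgb], event_tokens[:, keep_event]], dim=1)

def mask_two_stream(rgb_feat, event_feat, ratio_rgb=0.4, ratio_event=0.3):
    # Two-stream case: masked positions are zeroed and the two modalities are averaged
    # at the early stage, so the spatial layout expected by the backbone is preserved.
    def zero_mask(x, ratio):
        keep = torch.rand(x.shape[:2], device=x.device) >= ratio
        return x * keep.unsqueeze(-1)
    return 0.5 * (zero_mask(rgb_feat, ratio_rgb) + zero_mask(event_feat, ratio_event))
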
Remark. It is worth noting that the motivation and objective of the proposed masking strategy are considerably different from those of the well-known masked image modeling <cit.>. We start from the pursuit of promoting the network to actively utilize cross-modal information. Thus, patches at distinct positions across the RGB and event modalities are randomly removed, so that each location can still be perceived by the network, but through different modalities. In contrast, masked image modeling pre-trains network weights to comprehend image semantics by feeding only a subset of image patches and reconstructing the unseen areas.
Although such a masking strategy used in the training phase is expected to strengthen the ability of the network to perceive cross-modal information to some extent, the randomly dropped information would potentially result in an unstable training process. Moreover, such disruptions are especially devastating for one-stream algorithms, which must concurrently learn representative embeddings and establish the relationship between the cross-modal template and search tokens (see the experimental demonstration in Sec. <ref>). Thus,
to pull the network out of this predicament, we further propose a theoretically motivated orthogonal high-rank regularization in the next section.
§.§ Orthogonal High-rank Regularization
To analyze the multi-head attention mechanism, we take a one-stream tracker <cit.> with the vanilla ViT <cit.> as an example. As illustrated in Fig. <ref> (b), its internal self-attention layers concurrently perceive the RGB and event tokens from both the template and search areas. Depending on which of the k token groups the query and key belong to, we can partition the resulting attention matrix into k^2 blocks (here k=4). Note that the attention values within a given block reflect the degree of interaction between the corresponding tokens.
To mitigate the network disturbances induced by the cross-modal mask modeling strategy and to further amplify its positive effect (i.e., boosting cross-modal learning), we concentrate on the cross-modal zones of the attention matrix, such as M_S_I,S_E and M_S_E,S_I. If tokens are well embedded and carry highly discriminative features, each token will form a unique correlation with its identical counterpart, resulting in each row or column being orthogonal to the others. Moreover, as attention elements are non-negative, the corresponding matrix should be full rank[We refer readers to the Supplementary Material for more details]. Therefore, we propose the following regularization to encourage the desired blocks of the attention matrix to be high-rank:
L(M,τ) = ‖ diag(Σ)-dup(τ)‖_1, M = U Σ V,
where τ∈ℝ is a pre-defined threshold value, U∈ℝ^n× n, Σ∈ℝ^n× m, and V∈ℝ^m× m are the outputs of the singular value decomposition (SVD) of block M∈ℝ^n× m, diag(·) returns a vector consisting of the main diagonal elements of the input matrix, and dup(·) converts an input scalar into a vector by duplicating the scalar. We impose the regularization term onto a set of blocks of the attention matrix {M^(i)}_i=1^N standing for the interaction of cross-modal tokens.
Due to its strong regularization effect, we empirically select the blocks corresponding to image-to-event attention (i.e., M_S_I,T_E and M_S_I,S_E) and the blocks corresponding to event-to-image attention (i.e., M_S_E, T_I and M_S_E, S_I).
Moreover, as computing the SVD of a matrix is time-consuming, we randomly choose a layer to implement this regularization at each optimization step, instead of operating it in each layer.
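For concreteness, a minimal sketch of this regularization term is given below (written for this report, not taken from any released implementation); it assumes a single cross-modal block M of shape (n, m) and omits batch and head dimensions.

import torch

def high_rank_loss(attn_block, tau):
    # attn_block: one cross-modal block of the attention matrix, e.g., M_{S_I,S_E}.
    # Push its singular values towards the threshold tau, i.e.,
    # L(M, tau) = || diag(Sigma) - dup(tau) ||_1, which discourages low rank.
    sigma = torch.linalg.svdvals(attn_block)   # singular values, length min(n, m)
    return torch.sum(torch.abs(sigma - torch.full_like(sigma, tau)))

def regularize_random_layer(blocks_per_layer, tau):
    # SVD is costly, so at each optimization step only one randomly chosen layer
    # is regularized; blocks_per_layer[l] is the list of selected blocks of layer l.
    idx = torch.randint(len(blocks_per_layer), (1,)).item()
    return sum(high_rank_loss(m, tau) for m in blocks_per_layer[idx])
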
For the two-stream methods, since the input data from different modalities are mixed in a preceding embedding backbone as shown in Fig. <ref> (c), e.g., the Swin-Transformer <cit.>, the resulting attention matrix only consists of two parts, i.e., the search-to-template and template-to-search regions, as illustrated in Fig. <ref> (c).
Under this scenario, we anticipate that the discriminative cross-modal tokens will be able to form a unique correlation with the identical object parts across template and search areas. As shown in the right part of Fig. <ref> (a) and Fig. <ref> (c), such a relationship would also produce that each row is orthogonal to the others.
Thus, we also regularize the regions belonging to the target objects in M_S,T. Specifically, guided by bounding box information, we first mask the attention weights in non-target regions of M_S,T, then apply Eq. (<ref>) to increase the rank of the masked matrix.
§.§ Training
To train a Transformer-based tracker with the proposed plug-and-play augmentation techniques, at each optimization step, we first randomly mask/pop out event and image patches with a ratio of δ_e and δ_i (0<δ<1), respectively. Then, we train the whole network with the following loss function:
L_all = L_task + α L(M,τ),
where L_task denotes the original task loss function, composed of regression and classification branches, and α is a balancing weight for the proposed regularization term.
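A rough sketch of one optimization step combining the two pieces is shown below; tracker, mask_inputs, and task_loss are hypothetical placeholders for the baseline's own components, and the hyperparameter values simply mirror the one-stream settings reported in the experiments (the value of τ is a placeholder).

def training_step(tracker, batch, optimizer, alpha=1.2, tau=1.0,
                  ratio_rgb=0.1, ratio_event=0.1):
    # 1) Cross-modal mask modeling: randomly mask/pop out RGB and event patches.
    rgb, event = mask_inputs(batch, ratio_rgb, ratio_event)     # hypothetical helper
    # 2) Forward pass; assume the tracker can also return the cross-modal attention
    #    blocks of one randomly chosen layer (hypothetical hook).
    outputs, attn_blocks = tracker(rgb, event, return_attention=True)
    # 3) Total loss L_all = L_task + alpha * L(M, tau).
    loss = task_loss(outputs, batch) + alpha * sum(
        high_rank_loss(m, tau) for m in attn_blocks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
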
§ EXPERIMENT
Implementation details. We evaluated the proposed plug-and-play training augmentation techniques on both one-stream and two-stream trackers. We set the template and search sizes as 128 and 256, respectively, which cover regions 2× and 4× the size of the annotated boxes. Moreover, the location and scale jitter factors of the search region are set as 3 and 0.25, respectively (no jitter is applied to the template region). For one-stream, we directly adopted the SOTA method named color-event unified tracking (CEUTrack) <cit.> as our baseline model (ViT-B). During training, we used the same optimizer (AdamW), learning rate scheduler, and task loss function as the original paper. We set the batch size as 24 and the augmentation weight α in Eq. (<ref>) empirically as 1.2. The masking ratios of both modalities δ_i and δ_e were set to 0.1.
For two-stream, to the best of our knowledge, there is no Transformer-based RGB-event tracker available, so we chose the most recent event cloud-based motion-aware tracker (MonTrack) <cit.> and modified it with the proposal head of a Transformer-tracker <cit.> and the backbone of a pre-trained Swin-V2 <cit.> to construct two-stream RGB-event trackers (see the Supplementary Material for the detailed architecture). Moreover, we tested lightweight and heavy backbones, i.e., Swin-V2-Tiny <cit.> and Swin-V2-Base <cit.>, to achieve a comprehensive evaluation, and the resulting baselines are named MonTrack-T and MonTrack-B, respectively. To train the whole framework, we utilized the AdamW optimizer <cit.> with a learning rate of 1e^-4 for the proposal head and 1e^-5 for the backbone. We set the weight decay as 1e^-4. MonTrack-T and MonTrack-B were trained with 57K and 81K steps, respectively. We empirically set the value of α as 1.0, and the masking ratios of RGB and event data δ_i and δ_e as 0.4 and 0.3, respectively.
We refer readers to the Supplementary Material for the detailed network architectures and settings.
Datasets. We employed two large-scale cross-modal RGB-event single object tracking datasets: FE108<cit.> and COESOT<cit.>. Both datasets were collected by DAVIS346 with a spatial resolution of 346 × 260, dynamic range of 120 dB, and minimum latency of 20 μ s. FE108 consists of 108 RGB-event sequences collected indoors with a total length of 1.5 hours, which captures 21 different types of objects. The training split of FE108 consists of 140K RGB-Event pairs and 59K for testing. The ground-truth bounding boxes were annotated by a Vicon motion capture system. Moreover, the COESOT dataset consists of 578,721 RGB-Event pairs, which could be split into 827 and 527 sequences for training and testing, respectively. Those sequences are collected from both indoor and outdoor scenarios and cover a range of 90 classes and 17 attributes. The ground truth bounding boxes of the COESOT dataset were manually annotated. Note that we adopted the quantitative metrics suggested by each dataset to evaluate different methods.
§.§ Experimental Results
Results on FE108. As listed in Table <ref>, after being augmented by the proposed techniques during training, both MonTrack-T and MonTrack-B substantially improve both RSR and RPR by more than 3%. Moreover, the larger model “MonTrack-B" yields a greater performance gain. We reason that such an effect may be the consequence of promoting more thorough cross-modal interaction. Besides, the superior performance of the proposed techniques is also demonstrated in the precision and success plots in Fig. <ref>, where they exceed SOTA methods by a large margin, i.e., 5.1% in RSR, 8.1% in OP_0.50, 12.1% in OP_0.75, and 3.8% in RPR. Additionally, the higher performance of cross-modal methods compared with event-only and RGB-only methods demonstrates the significance and necessity of using the information of both RGB and event data for object tracking.
Results on COESOT. As shown in Table <ref>, the original Transformer-based cross-modal tracker, i.e., CEUTrack, improves the SR value of the previous SOTA SiamR-CNN by 1.1%. After being augmented with our techniques, i.e., CEUTrack+Ours, the values of SR and PR are further improved by 1.2% and 1.4%, respectively, and its NPR reaches higher than 70%,
convincingly validating the effectiveness of the proposed techniques. In addition, we also provide the success and precision plots of different attributes in Fig. <ref>, where it can be seen that the proposed augmentations yield general improvements instead of only strengthening performance under certain circumstances. For example, the proposed augmentations achieve 3.4% precision and 2.8% success improvements under the blurring attribute. Notably, CEUTrack+Ours maintains the best performance under the camera motion attribute, while the baseline CEUTrack drops to 7^th place.
We also refer readers to the Supplementary Material for the comparisons of the network size and inference time.
§.§ Ablation Study
Visualizations. Fig. <ref> visualizes the internal attention matrix of CEUTrack. The values in each row of the matrix are used to compute a weighted sum of the tokens in that row, which is projected onto the corresponding output token. Due to the absence of values in the blocks M_S_I,S_E, M_S_I,T_E, M_T_I,T_E, M_T_I,S_E in Figs. <ref> (a) and (d), scarcely any information is projected from the event domain to the RGB domain.
The reason may be that the ViT was pre-trained on ImageNet, which is composed of RGB data, predisposing it to processing RGB information. When used as the backbone of an RGB-event tracker, the pre-trained filters attempt to project event information onto the RGB tokens, where the labor-intensive information fusion and processing takes place, rather than performing the inverse projection. After being augmented with our techniques during training, the cross-modal interaction is
noticeably enhanced, i.e., the matrix blocks, which are zeros in Figs. <ref> (a) and (d), exhibit attention values, as demonstrated in Figs. <ref> (b) and (e).
Besides, we also visualized the singular values of the matrix blocks related to the cross-modal interaction in Figs. <ref> (c) and (f), which validates that these blocks have been pushed far away from being low-rank after applying the proposed techniques. We refer readers to the Supplementary Material for more results.
Finally, Fig. <ref> shows the queries of the 2^nd, 4^th, and 7^th self-attention layers where it can be seen that the proposed augmentations narrow the distribution gaps between event and RGB tokens, especially for the 4^th layer.
Masking vs. High-rank. We conducted thorough experiments to better understand the relationship and function of the proposed two augmentation techniques. From Table <ref>, it can be seen that
when the two techniques were simultaneously applied, the improvement is much more significant than that of only applying the masking scheme. The improvement is slight when only the high-rank regularization was applied. These observations validate our claim that the two techniques are complementary.
Effect of the mask size. We experimentally validated the effect of different mask sizes on performance. As shown in Table <ref>, the benefits may be nullified under extremely large or extremely small masks. A possible reason is that the network treats small masks as noise, whereas, if the mask is too broad, the object may appear in only one modality, which is detrimental to cross-modal learning.
§.§ Discussion
In view of the impressive performance of the proposed plug-and-play training augmentations, it is worth further exploring their potential in other cross-modal scenarios, such as RGB-3D point clouds, or even vision-natural language. In addition, as demonstrated in Fig. <ref>, the proposed orthogonal high-rank regularization indeed facilitates the interactions between cross-modal tokens, and thus, it would be promising to further develop task-specific regularization terms for other visual Transformers-based works.
§ CONCLUSION
In this paper, we introduced plug-and-play training augmentations for Transformer-based RGB-event object tracking. Our augmentations consist of two complementary techniques, cross-modal mask modeling and orthogonal high-rank regularization, which share the objective of enhancing the cross-modal interaction of a ViT pre-trained only with RGB data.
Our extensive experiments demonstrate the effectiveness of our training augmentations, as state-of-the-art methods achieve significant improvement in tracking performance after augmentation.
While current Transformers can be scaled up to enormous sizes, relying solely on final objectives to guide the model learning process may be insufficient. We hope our perspectives, findings and analysis
will inspire further research into the internal mechanisms of Transformer-based cross-modal fusion tasks.
|
http://arxiv.org/abs/2307.05756v1 | 20230711192221 | Multisolitons in a gauged Skyrme-Maxwell model | [
"Leandro Roza Livramento",
"Yakov Shnir"
] | hep-th | [
"hep-th"
] |
|
http://arxiv.org/abs/2307.04879v2 | 20230710195954 | Modeling evidential cooperation in large worlds | [
"Johannes Treutlein"
] | econ.GN | [
"econ.GN",
"q-fin.EC"
] |
Modeling evidential cooperation in large worlds
Johannes Treutlein
First written in 2018; major edits in 2023
===============================================
Evidential cooperation in large worlds (ECL) refers to the idea that humans and other agents can benefit by cooperating with similar agents with differing values in causally disconnected parts of a large universe. Cooperating provides agents with evidence that other similar agents are likely to cooperate too, resulting in gains from trade for all. This could be a crucial consideration for altruists.
I develop a game-theoretic model of ECL as an incomplete information bargaining problem. The model incorporates uncertainty about others' value systems and empirical situations, and addresses the problem of selecting a compromise outcome. Using the model, I investigate issues with ECL and outline open technical and philosophical questions.
I show that all cooperators must maximize the same weighted sum of utility functions to reach a Pareto optimal outcome. However, I argue against selecting a compromise outcome implicitly by normalizing utility functions. I review bargaining theory and argue that the Nash bargaining solution could be a relevant Schelling point. I introduce dependency equilibria Spohn2007-fp, an equilibrium concept suitable for ECL, and generalize a folk theorem showing that the Nash bargaining solution is a dependency equilibrium. I discuss gains from trade given uncertain beliefs about other agents and analyze how these gains decrease in several toy examples as the belief in another agent decreases.
Finally, I discuss open issues in my model. First, the Nash bargaining solution is sometimes not coalitionally stable, meaning that a subset of cooperators can unilaterally improve payoffs by deviating from the compromise. I investigate conditions under which stable payoff vectors exist. Second, I discuss how to model agents' default actions without ECL.
§ INTRODUCTION
Evidential cooperation in large worlds (ECL)[In previous work, this concept has been referred to as “multiverse-wide
cooperation via superrationality” (MSR).] Oesterheld2017-qg, Gloor2017, Oesterheld2018 is a crucial consideration that could have important implications for the prioritization of altruists.
To illustrate the idea, consider a prisoner's dilemma between two artificial
agents with identical source code. Even if both agents cannot causally
interact, one agent's action provides them with strong
evidence about the other agent's action. Evidential
decision theory (EDT), as well as functional decision theory Yudkowsky2017-vb and some variants of causal decision theory (CDT) Spohn2012-fo,Poellinger2013-we,
say that agents should take such evidence into account when making
decisions. In situations like the prisoner's dilemma with two identical
agents, they prescribe cooperation for this reason, an idea that is also called superrationality hofstadter1983dilemmas. ECL is based on
the idea that humans on Earth are in a similar situation as such agents.
First, there probably is a large or infinite universe,
containing vast numbers of civilizations, inhabiting different, causally disconnected parts of the universe tegmark2003parallel,tegmark2015our. I refer to such a large universe as a multiverse, and to causally disconnected parts of it as universes, regardless of the specific structure of the universe (e.g., these parts could just be far-apart regions of space). Given their vast number, there are likely universes containing agents that are very similar to humans, such that humans'
actions are evidence about these agents' actions macaskill2021evidentialist.
Second, these
agents may pursue different goals, leading to possible gains from trade. For instance, pursuing a given goal in one universe may have diminishing returns, and agents may care about other universes as well. In that case, it may be beneficial for agents to trade by pursuing a mixture of everyone's goals in all universes. Since agents in different universes cannot
communicate and there is no way to enforce an agreement, this puts them in a collective prisoner's dilemma. Under the right conditions,
the abovementioned decision theories recommend that humans take the
preferences of other, similar agents in the multiverse into account,
in order to produce the evidence that these agents do in turn take humans'
preferences into account, leaving everyone better off.
According to [sec. 4]Oesterheld2017-qg, this idea could
have far-reaching implications for the prioritization of altruists. For instance, given
ECL, some forms of moral advocacy could become ineffective: agents
advocating their particular values provides them with evidence that
others will do the same, potentially neutralizing each other's
efforts [sec. 4.2]Oesterheld2017-qg. Moreover, ECL could play a role in deciding which strategies
to pursue in AI alignment. If potential gains from cooperation
are vast, then it becomes more important to ensure that AI systems are aligned with humans' idealized philosophical views on decision theory and ECL.[Note that interventions to promote ECL could also backfire by exacerbating other risks from advanced AI [see][]xu2021open.]
In this report, I develop a game-theoretic model of ECL as an incomplete information bargaining problem, incorporating uncertainty about the values and empirical situations of potential superrational cooperators, and addressing the problem of selecting a compromise outcome. I clarify the conditions that make ECL feasible and analyze gains from trade given empirical uncertainty. Moreover, I discuss several technical and philosophical problems that arise.
Basic knowledge of game theory, such as normal form games, Nash equilibria, and the prisoner's dilemma [see][]osborne1994course, as well as decision theory and ECL (see Gloor2017 for an introduction), will be helpful for understanding this report.
§.§ Summary
Here, I provide a short summary of the report, highlighting key contributions. Afterwards, I outline the organization of the remaining report, and briefly discuss related work.
§.§.§ Game-theoretic models
I introduce three models: a bargaining model, a Bayesian game model, and a Bayesian bargaining model, combining the two previous models. The first two models are useful since many issues can more easily be discussed in the less general setting, and this structure may make the report easier to follow. However, it is possible to skip directly to the final model in <Ref>.
In a bargaining game, players have to agree on some compromise outcome, from a feasible set of achievable payoff vectors. A disagreement point specifies the outcome that is realized if no compromise is reached. I argue for modeling ECL as a bargaining problem, since (i) there is an inherent bargaining problem in determining a compromise between superrational cooperators that needs to be addressed, (ii) bargaining solutions that are supported by plausible axioms serve as Schelling points, and (iii) there are important parallels between acausal trade[<https://www.lesswrong.com/tag/acausal-trade>] (where agents use mutual simulations to reach an agreement) and ECL, meaning that bargaining could be a relevant model for agents forming conditional beliefs over other agents' actions. I also address [Sec. 2.8]Oesterheld2017-qg's suggested approach of pursuing a sum of normalized utility functions as a compromise utility function. I show that to achieve a Pareto optimal outcome, i.e., an outcome that cannot be improved upon without making any player worse off, everyone has to maximize the same compromise utility function. However, I argue for choosing a compromise based on a bargaining solution rather than a normalization method such as variance normalization, on the grounds that the latter can leave agents worse off than without the compromise. I review two popular bargaining solutions, the Nash bargaining solution (NBS) and the Kalai-Smorodinsky bargaining solution (KSBS), and conclude that the NBS could be a relevant Schelling point for ECL.
The Bayesian game formalism serves to incorporate incomplete information—that is, information about the values and available options of other players. Specifically, I use a modified version of Harsanyi1967's type space formalism. In my model, there is a large number of players, living in different universes. Each player is assigned a type, representing their values and empirical situation, according to some prior distribution p. Players' posterior beliefs over types, after updating on their own type, represent their beliefs over other universes. Players' utility functions depend on the actions and types of all players. Relaxing the assumption of a common prior p is an important area for future work.
Finally, the Bayesian bargaining model implements a bargaining game on top of a Bayesian game, incorporating bargaining with incomplete information. The feasible set here is the set of expected utilities that can be produced by players for all the types, given the types' beliefs about other players.
§.§.§ Gains from trade under uncertainty
Two important assumptions in this report are additive separability and anonymity. Additive separability means each player's utility functions can be expressed as a sum of contributions from other players. This would be true for total utilitarians but false for average utilitarians valuing average well-being across the multiverse, for instance.
Anonymity means beliefs, utilities and strategies depend only on types, not on specific players. This means we do not distinguish between different universes.
Given these two assumptions, we can regard strategies as vectors α∈ A^T, where T is the set of types and A the set of strategies for any type. The expected utility of a strategy for a type t∈ T can be simplified to the expression
EU_t(α)=u_t,t(α_t) + (n-1)∑_t'∈ Tp(t'| t)u_t',t(α_t')
where n is the number of players, p(t'| t) is the belief of any player of type t that any other player has type t', and u_t',t(α_t') is the utility provided by a player of type t' to a player of type t.
The first term is the utility produced by a player for themself, and the second term stands for the expected utility produced by all the other players.
Note that if n is large, the expected utility is dominated by the second term, meaning that the utilities produced by a player for themself in their own universe can be ignored. It follows that a potential compromise option β produces gains from trade for a type t if
∑_t'∈ T∖{t}p(t'| t)(u_t',t(β_t')-u_t',t(α_t'))≥ p(t| t)(u_t,t(α_t)-u_t,t(β_t)).
That is, both potential gains from other types' cooperation as well as potential losses due to players of type t compromising are weighted by type t's posterior beliefs over the types of other players. If the former outweigh the latter, then β leads to gains for players of type t.
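As a rough numerical illustration of these expressions (the payoff numbers, the common posterior, and the two-action setup are purely illustrative assumptions made for this sketch), the following Python snippet evaluates EU_t and compares a default and a compromise strategy for two types:

# u[s][t][a] is the utility a player of type s produces for type t when playing a.
# "d" is the own-optimal default, "c" a compromise benefiting both types.
u = {
    1: {1: {"d": 1.0, "c": 0.6}, 2: {"d": 0.0, "c": 0.6}},
    2: {1: {"d": 0.0, "c": 0.6}, 2: {"d": 1.0, "c": 0.6}},
}

def expected_utility(t, actions, posterior, n):
    # actions[s] is the action played by every player of type s;
    # posterior[s] = p(s | t) is t's belief that any other player has type s.
    own = u[t][t][actions[t]]
    others = sum(posterior[s] * u[s][t][actions[s]] for s in u)
    return own + (n - 1) * others

n, posterior = 1000, {1: 0.5, 2: 0.5}   # assume both types hold the same posterior
for t in (1, 2):
    print(t, expected_utility(t, {1: "d", 2: "d"}, posterior, n),
          expected_utility(t, {1: "c", 2: "c"}, posterior, n))
# With p(t'|t) = 0.5 the compromise yields roughly 0.6*(n-1) versus 0.5*(n-1), so both
# types gain; as p(t|t) approaches 1, the gains from the compromise disappear.
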
This shows that when it comes to gains from trade, what matters are players' posterior beliefs over other players' types. For instance, if types are certain that all players have the same type, i.e., p(t| t)=1, then no trade is possible. If p(t'| t)=0 for a specific type t', then that type cannot benefit players of type t. If all types have the same posterior beliefs, then trade may in principle be possible, depending on the different types' options. In general, different beliefs can put a tax on trade.
I also consider a model that includes uncertainty over whether other players are superrationalists or sufficiently similar to enable ECL. However, I argue that such considerations can also be incorporated into the posterior beliefs p(t'| t), so this extension does not increase the generality of the model.
§.§.§ Double decrease and Paretotopia
Using <Ref>, we can analyze gains from trade in different toy models. I consider an example with a trade between two types, T={1,2}. Both types start out with an equal number of resources and can invest resources into either type's utility function. Resource investments have diminishing returns, leading to potential gains from trade. As a compromise outcome, I use the NBS. I consider square root as well as logarithmic returns to resources. <Ref> shows individual feasible sets in each case, which are the sets of expected utilities a player of each type can produce for both types. Gains from trade are larger given logarithmic utilities, since utilities diminish faster in this case.
Using this model, I analyze resource investments in the respective other type and gains from trade under the NBS, for different posterior beliefs p(t'| t) in the other type (assuming both types have the same prior weight p(1)=p(2)=1/2) (<Ref>). As this belief goes down, gains from trade go down approximately quadratically in the square root utility case, leading to a “double decrease” as observed by armstrong2017double. However, in drexler2019pareto's “Paretotopia” model with logarithmic returns to resource investments, gains from trade diminish more slowly with the belief in the other player.
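The following sketch is my own reconstruction of this toy setup rather than the exact model behind the figures; the logarithmic form log(1+100r) and the restriction to a symmetric compromise (in which case maximizing the Nash product reduces to maximizing the common gain) are simplifying assumptions. It illustrates how the NBS share invested in the other type changes with the belief q = p(t'|t):

import numpy as np

def nbs_share(q, f):
    # Each type invests a fraction x of its resources in the other type's utility
    # function; for large n a type's expected utility is proportional to
    # (1-q)*f(1-x) + q*f(x), and the disagreement point corresponds to x = 0.
    xs = np.linspace(0.0, 1.0, 2001)
    gains = (1 - q) * f(1 - xs) + q * f(xs) - ((1 - q) * f(1.0) + q * f(0.0))
    i = np.argmax(np.maximum(gains, 0.0))   # symmetric NBS: maximize the common gain
    return xs[i], gains[i]

sqrt_returns = np.sqrt
log_returns = lambda r: np.log(1.0 + 100.0 * r)   # assumed functional form

for q in (0.5, 0.25, 0.1):
    print(q, nbs_share(q, sqrt_returns), nbs_share(q, log_returns))
# With square-root returns the share given to the other type shrinks roughly like q^2
# (the "double decrease"); with the steeper logarithmic returns it shrinks much more
# slowly, as in the "Paretotopia" model.
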
§.§.§ Equilibrium concepts
I introduce two equilibrium concepts for the Bayesian game and Bayesian bargaining game models. First, I introduce Bayesian Nash equilibria. In the additively separable case, these equilibria are trivial as each player is simply optimizing for their own values in their own universe, ignoring other players. Second, I introduce a generalization of Spohn2007-fp's dependency equilibria for Bayesian games and for continuous action spaces. A dependency equilibrium is a joint belief over the actions of all players, where every player's actions have optimal conditional expected utility. Since it evaluates conditional probabilities and allows for dependencies between players' actions, dependency equilibria are suitable to model the superrational reasoning required for ECL. For instance, in a prisoner's dilemma, there is a dependency equilibrium in which both players cooperate.[There are several other equilibrium concepts in the literature with similar properties al2015evidential,daley2017magical,halpern2018game, which I have not looked at in this report.] My technical contributions are generalizing dependency equilibria to Bayesian games and to continuous action spaces. The latter is necessary for my bargaining model since players bargain over a continuous space of, e.g., independent randomizations over actions, or continuous resource investments.
I prove several results about dependency equilibria in my model, including a generalization of Spohn2007-fp's folk theorem for dependency equilibria, showing that any Pareto improvement over a Bayesian Nash equilibrium is a dependency equilibrium. As a corollary, it follows that the NBS with the Nash equilibrium disagreement point is a dependency equilibrium. I also show that a dependency equilibrium with independent action distributions is a Bayesian Nash equilibrium.
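As a toy illustration of the conditional reasoning behind these concepts (my own minimal example with standard prisoner's dilemma payoffs, not taken from the report), the sketch below evaluates conditional expected utilities under a perfectly dependent joint belief:

# Prisoner's dilemma payoffs (row player, column player).
payoff = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
          ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

# Joint belief with perfectly dependent actions: both cooperate or both defect.
joint = {("C", "C"): 0.5, ("D", "D"): 0.5}

def conditional_eu(player, action):
    # Expected utility of the action under the belief conditioned on playing it.
    mass = {a: p for a, p in joint.items() if a[player] == action}
    total = sum(mass.values())
    return sum(p * payoff[a][player] for a, p in mass.items()) / total

print({a: conditional_eu(0, a) for a in ("C", "D")})   # {'C': 2.0, 'D': 1.0}
# Under this dependent belief, cooperating has the higher conditional expected
# utility, so the distribution concentrated on (C, C) can be sustained as a
# dependency equilibrium, whereas with independent action distributions defection
# dominates and only (D, D) is a Nash equilibrium.
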
§.§.§ Disagreement points
I discuss the problem of choosing a disagreement point in ECL. Since ECL only involves choosing some compromise action based on some joint belief, without any actual bargaining, it is unclear what the relevant notion of non-compromise outcome should be. However, how to model agents' default options without ECL is an important question in general, not only in my bargaining model.
A natural option is the Bayesian Nash equilibrium, but there is also the threat point, which is the equilibrium of a game in which players choose disagreement actions to improve their bargaining position. I review a plausible axiomatization of the threat point by Nash1953 and show that the NBS with the threat disagreement point can sometimes lead to bargaining outcomes that are worse than a Nash equilibrium. I also show that the NBS with the threat disagreement point is still a dependency equilibrium. Coercion to join a compromise should not be relevant to ECL, since there are no explicit threats. However, threats might be relevant for the same reason bargaining in general is relevant to ECL. The question of disagreement points is an important area for future work.[It may be valuable to review recent work by diffractor2022rose on threat-resistant bargaining in this context.]
§.§.§ Coalitional stability
Finally, I discuss the issue of coalitional stability. A bargaining solution is coalitionally stable if it is in the core, which is the set of payoff vectors in the feasible set such that no subgroup of players (coalition) can strictly increase payoffs for all of its members by unilaterally deviating from the compromise. Coalitional stability is an important criterion for a compromise solution for ECL since it seems plausible that players would choose to pursue a compromise with a subgroup of players if this leads to higher payoffs. Hence, if the grand coalition of all players is not stable, this would lead to a difficult coalition finding problem, making ECL even more complicated to implement.
I show that the NBS with either the Nash or the threat disagreement point can sometimes be unstable. I then analyze the existence of core allocations. The core is known to be empty in general games [][ch. 13.2]osborne1994course. However, using a result by scarf1967core, I show that assuming additively separable utilities, the core is always nonempty. In this analysis, I assume worst-case responses by players outside the coalition, from among the possible Pareto optimal strategies they could pursue for themselves. (Specifically, I do not assume other players respond with threats against coalitions.)
Additionally, I show that if players outside the coalition respond with a Nash equilibrium, the core can be empty even given additive separability. This demonstrates that sometimes no stable bargaining solution exists that improves upon the Nash equilibrium disagreement point, a strong argument against this disagreement point. The intuition is that sometimes two players cooperating leads to negative externalities for a third player, leaving the third player worse off than with no cooperation. Motivated by my result on the existence of core allocations, I suggest an alternative disagreement point that guarantees stability.
§.§ Outline
* In <Ref>, I discuss several assumptions and simplifications I make in the report.
* In <Ref>, I introduce a standard bargaining formalism. I argue that a bargaining problem is an appropriate model for ECL. After providing an example bargaining problem (<Ref>), I introduce the formal bargaining model and relevant notation (<Ref>). I then discuss maximizing a sum of normalized utility functions as a compromise utility function (<Ref>). Next, I briefly review bargaining theory, introducing the Nash and Kalai-Smorodinsky bargaining solutions (<Ref>). Lastly, in <Ref>, I make some initial observations about the model.
* In <Ref>, I introduce a Bayesian game model. In Sections <ref>–<ref>, I introduce the formalism and notation. I then introduce Bayesian Nash equilibria and dependency equilibria (<Ref>) and discuss extending the model to include uncertainty about decision procedures and similarity to other agents (<Ref>). Finally, I prove several equilibrium results (<Ref>).
* In <Ref>, I introduce Bayesian bargaining game, combining the previous models. I introduce the formal setup and notation in Sections <ref>–<ref>. In <Ref>, I introduce a version of the Nash bargaining solution adapted to my model. I then define equilibria in the model (<Ref>). Lastly, in <Ref>, I discuss several takeaways: I provide equilibrium results, discuss how to think about gains from trade given uncertainty, and work through several toy examples, including armstrong2017double's “double decrease” and drexler2019pareto's “Paretotopia” model.
* In <Ref>, I discuss two important issues: disagreement points (<Ref>) and coalitional stability (<Ref>).
* Finally, in <Ref>, I conclude and outline possible future work.
§.§ Related work
A list of prior work on ECL can be found at <https://longtermrisk.org/msr>. No prior work introduces a formal game-theoretic model and discusses equilibria or bargaining theory. [Sec. 2.7]Oesterheld2017-qg includes a simple calculation establishing the plausibility of ECL but without modeling different players, beliefs, or utilities. [][Sec. 2.9.4]Oesterheld2017-qg introduces a variable for other players' decision theories, an idea I discuss in <Ref>. treutlein2018three introduces a simple model with variables for correlations, gains from trade, and number of cooperators, to establish a wager for ECL.
The most important related work is armstrong2017acausal's sequence on acausal trade. He introduces a toy model where players have different utility functions and uncertainty about the existence of other players. Among other issues, he discusses how gains from trade change under different beliefs. I reproduce some of Armstrong's findings in <Ref>. Armstrong focuses on acausal trade and does not discuss relevance to ECL.
§ PRELIMINARIES
I make several assumptions and simplifications in this report:
* I focus on EDT as a decision theory. In particular, I introduce a game-theoretic solution concept based on conditional expected utilities (see <Ref>). While I believe my analysis also applies to other decision theories that take dependencies between similar
agents into account, I will not discuss this. My
analysis may also be relevant to readers with decision-theoretic
uncertainty, since there may be a wager to take ECL into account given any nonzero credence in EDT macaskill2021evidentialist,treutlein2018three.
I do not model decision-theoretic uncertainty, but it could be added similarly to uncertainty about decision-theoretic similarity (see <Ref>).
* I do not address questions about the nature of dependencies between the decisions of different agents, how one could evaluate whether different agents' decisions are dependent, etc. However, I discuss modeling partial correlations or uncertain beliefs about dependencies in <Ref>.
* I assume that there is only a finite set of
agents and the utilities of the options involved are all finite. This is a problem, since the most likely case in which the universe
is large enough to give rise to ECL is an infinite universe. It seems plausible that solutions to infinite
ethics will not change conclusions from my model [cf.][]macaskill2021evidentialist[][sec. 6.10]Oesterheld2017-qg. The assumption
of a finite set of agents is more problematic, since there likely exists a continuum of agents with a continuum of value systems. One may be able to discretize such a set and approximately recover the model discussed here, but it also seems possible that the general case leads to qualitatively new problems.
* I assume that agents are Bayesian (conditional)
expected utility maximizers.
* For simplicity, I model ECL as a one-off decision. For instance, this
could be a commitment to a policy or a decision to maximize some compromise
utility function in the future. I assume that it is possible to
commit oneself to this compromise, and that there won't be changes to the compromise based on new information about one's empirical situation in the universe. This is plausible if either the agents
can actually commit themselves in this way, or if they just
never learn enough such that their assessment of the situation would
relevantly change. Note that this does not affect how agents arrive at this compromise (whether by first-principles reasoning, by simulating agents in the multiverse, etc.; see the discussion in the next section).
§ COMPLETE INFORMATION BARGAINING MODEL
In this section, I develop a model of ECL as a complete information
bargaining problem. A bargaining problem is a game between players
in which the players have some method of negotiating a binding agreement.
If everyone accepts the agreement, the actions specified by the agreement
are carried out. Otherwise, players carry out some disagreement action.
Complete information, as opposed to incomplete information, means
that everyone knows who the other players are, as well as their options and utility functions.
In ECL, players are uncertain about their superrational cooperators, so an incomplete information model would be more appropriate. Nevertheless, it is useful to start with complete information for simplicity, since many ideas from the complete information setup will transfer. I will relax the complete information assumption in the following sections.
A more critical assumption is that of using a bargaining model for ECL. ECL is based on the idea that an agent has some belief about other agents' actions,
conditional on their own action. The agent takes some
cooperative action, to produce the evidence that other agents also take more cooperative actions. This does not involve any explicit bargaining between the agents. Nevertheless, I believe using a bargaining model is useful for thinking about ECL.
First, the problem of choosing a compromise outcome in ECL has to be addressed in some way. In <Ref>, I discuss [Sec. 2.8]Oesterheld2017-qg's suggested approach of maximizing a compromise utility function, consisting of a sum of normalized utility functions of all superrational cooperators. This is a valid approach, since every compromise that is Pareto optimal, i.e., that cannot be improved upon without making anyone worse off, is the result of maximizing some common weighted sum of utility functions (see <Ref>). However, I argue against choosing a compromise outcome implicitly by normalizing all agents' utility functions, e.g., according to variance, since that approach might leave some cooperators worse off than without a compromise. Formulating ECL as a bargaining problem and reviewing the relevant literature is a natural starting point for addressing the problem explicitly.
Second, a solution to a bargaining problem may serve as a Schelling point[<https://en.wikipedia.org/wiki/Focal_point_(game_theory)>] for superrational cooperators. Solutions can be supported by plausible axioms that could be universally agreed upon. Hence, bargaining theory can be one relevant reference point for determining which evidence one's actions provide about other agents' actions. It seems plausible that, if humans adopt some parsimonious solution, then other, similar agents will do the same.
Third, bargaining may be important because of a parallel between ECL and acausal trade[<https://www.lesswrong.com/tag/acausal-trade>]. Acausal trade refers to the more general
idea that agents could be able to negotiate and enforce a cooperative outcome via mutual simulations, in the absence of any causal interaction. ECL is the special case in which, instead of mutual
simulation, similarity in decision algorithms or psychological processes
ensures a joint cooperative action. While humans might be able to engage in ECL, acausal trade is likely only feasible for superhuman AI systems.
I think there is no principled
distinction between acausal trade and ECL. Determining the conditional
beliefs about the actions of other agents involves, at least in principle,
similar questions as those concerning acausal trade. Conditioning on one's own decision process having some output, one needs to determine which
actions a similar but non-identical decision process in a similar
or symmetrical, but non-identical decision situation would output. At the same time, the other decision process is trying to make the same determination.
Due to such mutual dependencies between the actions of agents, one cannot divide the decision process clearly
into given conditional beliefs that specify which inferences to make
based on different actions, and the subsequent choice of the action
with the highest expected utility. Instead, one has to already make choices while
inferring the (logical) conditional credences. For instance, the inferred conditional distribution over opponent actions may be influenced by one's own commitment
to respond to opponent actions in a certain way [see][]kokotajlo2019commitment,mennen2018wishful.[In a comment on an earlier draft, Max Daniel writes: “If I understand this correctly, this seems important to me, and quite connected to some of the reasons why I feel skeptical about ECL having practical implications for humans. I also feel like it has been underemphasized in texts on ECL so far.”]
It is prudent for humans to have conditional beliefs about the world,
including other agents, even without being able to entirely solve
this issue (which involves various open problems, for instance,
in logical uncertainty[<https://www.alignmentforum.org/tag/logical-uncertainty>]). In this situation, it makes sense to try
to improve one's guesses about ECL both based on reasoning
that is purely based on agents having beliefs about other agents,
and reasoning that involves hypothetical (acausal) bargaining.
Lastly, another way in which a bargaining problem is an inadequate model is that ECL is really a coalitional game. In a bargaining game,
there are only two possibilities: either all players agree to a proposed compromise action, or bargaining completely fails. If one player
disagrees, everyone plays their disagreement action. In a coalitional
game, any group of players can split off and negotiate an agreement, if this is beneficial to that group. I discuss this issue in <Ref>.
Next, I give an example bargaining problem (<Ref>) and introduce the formal bargaining framework (<Ref>). Afterwards, I discuss approaches to compromise that work via maximizing a sum of normalized utility functions (<Ref>). I argue against normalization and for explicitly picking out compromise outcomes. I then review some bargaining theory and discuss the Nash bargaining and the Kalai-Smorodinsky bargaining solutions, concluding that the former may serve as a good Schelling point (I review another solution by Armstrong2013 in <Ref>). Finally, in <Ref>, I make some initial observations and discuss issues that arise in the bargaining model. I address the uniqueness of the actions corresponding to bargaining solutions (<Ref>) and discuss how to think about gains from trade in the bargaining framework (<Ref>).
§.§ Example
Here, I give an example bargaining problem to motivate the following formal definitions, derived from a case by armstrong2017double.
There are two players, Alice and Bob. Alice and
Bob care in an additive way about the things that both do. Say Alice
has 10 and Bob has 5 units of some resource, and A and B are the amounts spent on Alice's utility function by Alice and Bob, respectively. 10-A and 5-B are the respective amounts spent on Bob. Resources invested in
Alice's utility function produce linear utility for her, so her utility
is A+B. Bob's utility function,
on the other hand, has diminishing returns; the marginal cost of one
additional utilon equals the utilons that have already been produced. So to create
x utilons for Bob, both Alice and Bob need to invest ∫_0^xydy=1/2x^2
resources. Hence, Bob's utility function is √(2(10-A))+√(2(5-B)).
It is possible to plot both Alice's and Bob's actions (i.e., ways
to split up their resources between Alice and Bob) in a two-dimensional
plane in which the axes are the utility functions of Alice (x-Axis)
and Bob (y-Axis). Additionally, one can plot all combinations of actions
of Alice and Bob in one joint graph for both utilities (<Ref>). The upper right boundary of the set of feasible utilities is the Pareto frontier—the set of utility vectors such that
no one can improve on their utility without making someone else worse
off.
In this example, both agents maximizing their own utility function leads to a Pareto inferior outcome: the
point (10,√(10)), which does not lie on the Pareto frontier.
If, on the other hand, Bob and Alice are able to coordinate on a cooperative
combination of actions, this leaves both better off. There is the
question, though, which point on the Pareto frontier they should
choose. In this section, I am considering this question in the case of ECL.
An interesting property of the Pareto frontier is that if Alice and Bob choose actions such that the slopes
of their individual Pareto frontiers—in this example, the slope
of the lines in <Ref> (a)—at the point of their actions
are not the same, then the actions are not Pareto optimal. Regarding
the slope of the Pareto frontiers as marginal rates of substitution,
this is a well-known concept in economics. If the marginal rates of substitution for Alice and Bob are not the same, then both players can move in opposite directions on
their Pareto frontier to become jointly better off. One person can give
up some amount x of utility for Alice to gain some amount y
of utility for Bob, and at the same time, the other person can give
up less than y utility for Bob and gain more than x for Alice,
such that jointly, the effect on both of their utilities is positive.
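To make the example concrete, the following sketch (my own code; the grid resolution and the use of a plain grid search are illustrative choices) locates the allocation selected by the Nash bargaining solution relative to the disagreement point (10, √10):

import numpy as np

A = np.linspace(0.0, 10.0, 401)[:, None]   # Alice's resources spent on Alice
B = np.linspace(0.0, 5.0, 201)[None, :]    # Bob's resources spent on Alice
u_alice = A + B
u_bob = np.sqrt(2 * (10 - A)) + np.sqrt(2 * (5 - B))

# Disagreement point: each maximizes their own utility function (A = 10, B = 0).
d_alice, d_bob = 10.0, np.sqrt(10.0)

# Nash bargaining solution: maximize the product of gains over the disagreement point.
nash_product = np.maximum(u_alice - d_alice, 0.0) * np.maximum(u_bob - d_bob, 0.0)
i, j = np.unravel_index(np.argmax(nash_product), nash_product.shape)
print("A =", float(A[i, 0]), "B =", float(B[0, j]),
      "utilities:", float(u_alice[i, j]), float(u_bob[i, j]))
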
§.§ Formal setup
A (complete information) bargaining game is a 4-tuple
B=(N,(A_i)_i∈ N,(u_i)_i∈ N,d),
where
* N={1,…,n} is the set of players;
* (A_i)_i∈ N is the tuple of finite sets of actions for
each player;
* (u_i)_i∈ N, u_i: A→ℝ is the tuple
of utility functions for all players, where A=∏_i∈ NA_i
is the set of outcomes, or the set of pure strategy profiles;
* d∈ℝ^n is the disagreement point.
A bargaining game as defined above is a standard normal form game, with the addition of a disagreement point, which is needed to specify a default outcome that is realized when bargaining fails.
In the following, I introduce some initial notation and definitions. As is standard, I write a_-i∈ A_-i:=∏_j∈ N, j≠ iA_j and (a_-i,a_i) for the vector in which the i-th entry is a_i∈ A_i and the remaining entries are given by a_-i∈ A_-i.
Given a bargaining game B, players
are able to randomize between actions. Let Σ_i:=Δ(A_i) be the set of
probability distributions (identified with probability mass functions) over the actions in A_i. Then σ_i∈Σ_i is called a mixed strategy. Moreover, σ∈Σ:=∏_i∈ NΣ_i is called a mixed strategy profile, and I write
u_i(σ):=∑_a∈ A(∏_j∈ Nσ_j(a_j))u_i(a)
for player i's expected utility given mixed strategy profile
σ∈Σ.
I regard the mixed strategies as the options of the players. Note
that at this stage, strategies are always independent. Later I introduce
a different concept which involves possibly dependent joint distributions
over players' actions, which I call “joint strategy distributions”.
Given a bargaining game B, one can define
F(B)={x∈ℝ^n|∃σ∈Σ∀ i∈ N x_i=u_i(σ)}
as the feasible set. This set is by construction a simplex, and hence
convex and compact. The feasible set contains, for all the possible
mixed strategy profiles that the players can choose, vectors that
specify the expected utilities for each player given that profile—i.e.,
the utility vectors that are feasible given the bargaining game B. I also
define
H(B)={x∈ F(B)|∀ y∈ F(B):(∀ i∈ N:y_i≥ x_i)⇒ y=x}
as the (strict) Pareto frontier of F(B).
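As a small illustration of this definition (here on a hypothetical finite set of utility vectors, whereas the definition is stated for the convex set F(B)), the following sketch keeps exactly those points that no other point weakly dominates.

def pareto_frontier(points):
    # x is kept iff every y that is >= x in all coordinates equals x.
    return [x for x in points
            if all(y == x or any(y[i] < x[i] for i in range(len(x)))
                   for y in points)]

print(pareto_frontier([(1, 3), (2, 2), (3, 1), (1, 1), (0, 2)]))
# [(1, 3), (2, 2), (3, 1)]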
As an addendum to <Ref>, I will assume in the following
that the disagreement point simply corresponds to one of the possible mixed strategies, and hence lies in the feasible set, i.e.,
d∈ F(B). Moreover, I assume that it is possible to achieve gains from trade for all players. I.e., there exists x∈ F(B) such that x_i>d_i for all i∈ N. This simplifies matters and is not really a substantial restriction. To see this, note that if players cannot receive gains from trade, then it does not make sense for them to participate in ECL. Moreover, consider the set of players that can receive gains from trade without making other players worse off than their disagreement point. Then by convexity of F(B), there also exist outcomes that make all of these players better off simultaneously.
Utility functions are only specified up to positive affine transformation,
i.e., if there are utility functions u,u' and there is a∈ℝ_>0,
b∈ℝ such that u=au'+b, then these utility functions
imply exactly the same preference relation over Σ. I write
u∼ u' to denote that two utility functions are equivalent in
this sense.
If one subtracts the disagreement point from a utility function,
then the resulting function is at least unique with respect
to the addition of a constant b∈ℝ. One can of course
still multiply the function by an arbitrary positive number.
The disagreement point may be just some outcome that would obtain
if everyone were to maximize their own utility function, but it could
also be a “threat point” where everyone takes the action which
produces the best subsequent bargaining solution. More on this in <Ref>, but for now I assume such a point is given.
In the ECL context,
we can assume that players live in completely causally disconnected universes. Hence, if all players have
value systems that are additive across the universes, the utility one player gets from another player does not depend on what the remaining players
do (this would not hold in general, e.g., if players were able to causally interact).
So it makes sense to give the following definition:
Utility functions are called additively
separable if there are functions u_i,j for i,j∈ N such that for all
i∈ N and a∈ A, we have
u_i(a)=∑_j∈ Nu_i,j(a_j). A bargaining game B is called additively separable if the corresponding utility functions are additively separable.
Unfortunately, this excludes some notable value systems. For instance, value systems which have diminishing
returns for some good across the multiverse, or value systems that care about the average happiness of all beings in the multiverse. However, it makes things
much easier. I will assume additive separability in many results.
In case utility functions are additively separable, it is possible
to write
F(B)=∑_i∈ NF_i(B):={∑_i∈ Nx_i|∀ j∈ N x_j∈ F_j(B)},
where
F_i(B):={x∈ℝ^n|∃σ_i∈Σ_i∀ j∈ N x_j=u_j,i(σ_i)}
is the feasible set for player i∈ N and u_j,i(σ_i)=∑_a_k∈ A_iσ_i(a_k)u_j,i(a_k).
That is, for each player, there is an individual feasible set of utility vectors that this player can generate for all players, and the joint feasible
set consists of all the points x that are sums of points in the
individual feasible sets. I define
H_i(B)={x∈ F_i(B)|∀ y∈ F_i:(∀ j∈ N:y_j≥ x_j)⇒ y=x}
as the strict Pareto frontier of F_i(B).
This is the upper right boundary of
the utilities that the individual player i∈ N can contribute in
their part of the universe. Note that not every sum ∑_i x_i of points x_i∈ H_i(B) on the individual Pareto frontiers will be Pareto optimal.
In <Ref>, utilities are additively
separable, and <Ref> (a) depicts the two individual feasible sets, while (b) depicts the one combined feasible set.
These feasible sets are not valid in the sense of the above definition,
since they are not convex and they are not simplices. However, we can relax the assumption that F(B) is a simplex by allowing for a bargaining problem to be directly defined as a tuple B=(N,F,d), where N is the set of players, F the feasible set, and d∈ F the disagreement point. In this case, F still has to be compact and convex, but it need not be a simplex (e.g., if the underlying set of actions is continuous, as in <Ref>). Convexity and compactness are required so that we can apply bargaining theory (e.g., to ensure that bargaining solutions such as the maximizer of the Nash product introduced below exist and are unique).
Assuming additive separability, it is practical to just identify
the space of actions of player i∈ N with their feasible set
F_i(B). In that case, we can define a bargaining problem as a tuple B=(N,(F_i)_i∈ N,d) of a set of players N, individual feasible sets F_i for each player, and a disagreement point d. Then we have F(B)=∑_i∈ NF_i. Here, the F_i have to be compact, convex sets, but need not be simplices. In particular, there are feasible sets F_i such that the
H_i are smooth, (n-1)-dimensional manifolds, which we will assume in some results below.
Lastly, it is useful to define
ℱ^N={F(B)| B is a bargaining game with set of players N}
as the set of all possible sets of feasible utilities for the set
of players N and ℱ=⋃_N∈𝒫ℱ^N
as the set of all possible feasible sets, where 𝒫={{1,…,n}| n∈ℕ}
is the set of all finite sets of agents. Moreover,
Υ={(F,d)| N∈𝒫,F∈ℱ^N,d∈ F}.
With these definitions, a bargaining solution is a function μ
on Υ such that μ(F,d)∈ F. That is, it takes a feasible
set and a disagreement point, and outputs a unique point in the feasible
set as solution.
§.§ Normalizing utility functions
One possible approach to determining the actions of individual players
in the bargaining problem posed by ECL is maximizing some compromise utility function [Sec. 2.8]Oesterheld2017-qg. In particular, one may start by normalizing individual utility functions via shifting and scaling, and then maximize a weighted sum of them. Maximizing a sum picks out a specific point or affine subset of the Pareto frontier. Note that this correspondence also works the other way around—for every point on the Pareto frontier, we can derive weights such that the point maximizes the corresponding weighted sum. In this section, we will first argue why all players have to maximize the same sum to reach a Pareto optimal agreement. Second, we motivate the use of bargaining solutions that directly pick out points on the Pareto frontier, by arguing against an approach that starts by normalizing utility functions.
One motivation behind the idea of maximizing a weighted sum of utility functions is Harsanyi's utilitarian theorem hammond1992harsanyi. Assume that a player wants to maximize a compromise
utility function u^* that also incorporates other players' preferences.
A very plausible axiom in this case is the following:
Let α,β∈Σ and u_i(α)≥ u_i(β)
for all i∈ N. Then u^*(α)≥ u^*(β).
This is a kind of Pareto optimality condition. If one mixed strategy
profile is at least as good for everyone as another mixed strategy
profile, then it should also be at least as good for the new utility
function u^*. According to a version of Harsanyi's utilitarian
theorem, it follows from this axiom that u^* is just a weighted
sum of the utility functions of individual players:
Resnik1983,Fishburn1984-FISOHU Let u^* satisfy Axiom <ref>.
Then there are weights λ_1,…,λ_n∈ℝ_≥0
such that
u^*∼∑λ_iu_i.
This result says that a player that wants to pursue a compromise and respect the Pareto axiom has to maximize some sum of utility functions. But it leaves open how
to choose the weights in this sum of utility functions.
Assuming additive
separability, we can also show that, to get a Pareto optimal outcome, different players have to maximize the same weighted sum of utility functions. This
follows from the fact that maximizing a weighted sum picks out the
point on the Pareto frontier where the slope of the frontier corresponds
exactly to the weights in the sum. But if two players choose points
on their frontiers with different slopes, there are gains from trade
left on the table. As mentioned in <Ref>, in a Pareto-optimal outcome,
the slopes of the frontiers, i.e., marginal rates of substitution,
have to be identical. Otherwise, both players could jointly move in
opposite directions on the frontier such that both gain more than
they lose.
Let B=(N,(F_i)_i∈ N,d) be an additively separable bargaining game. Assume that there are weight
vectors μ_i∈ℝ_≥0^n for
i∈ N such that player i∈ N takes an action x_i∈ F_i
that maximizes ∑_j∈ Nμ_i,jx_i,j. Then
(i) If μ_1,i=…=μ_n,i>0
for all i∈ N, then ∑_i∈ Nx_i
is Pareto optimal.
(ii) If the boundaries ∂ F_i are smooth (n-1)-dimensional manifolds, all μ_i are nonzero, and there exist i,j such that μ_i and μ_j are not positive multiples of each other, then ∑_ix_i is not Pareto optimal.
To begin, note that w.l.o.g. we can assume that for all i∈ N, we have ‖μ_i‖_2=1. This is because we assume μ_i≠ 0 for both (i) and (ii), and we can rescale μ_i to have norm 1 without changing the optimum x_i.
Now, to prove (i), assume that μ_1=…=μ_n. We have
∑_i∈ N∑_j∈ Nμ_i,jx_i,j=∑_i∈ Nmax_y_i∈ F_i(B)∑_j∈ Nμ_i,jy_i,j=max_y∈∏_i∈ NF_i(B)∑_j∈ Nμ_1,j∑_i∈ Ny_i,j=max_y∈ F(B)∑_j∈ Nμ_1,jy_j.
If a point is a solution to a maximization problem max_y∈ F(B)∑_j∈ Nμ_1,jy_j
such that μ_1,i>0 for all i, then we cannot improve the utilities for one of the players without making anyone else worse off. Hence, the point is Pareto optimal.
Next, we show (ii) via contraposition. Assume that ∑_i∈ Nx_i
is Pareto optimal.
For any Pareto
optimal point, there is a weight vector ν∈ℝ_≥0^n,
‖ν‖=1 such that ∑_i∈ Nx_i∈argmax_y∈ Fν^⊤y.
Moreover, since the boundaries ∂ F_i are smooth, we can define smooth functions h_iℝ^n→ℝ such that ∂ F_i={x| h_i(x)=0}, i.e., the boundaries ∂ F_i are the level sets h_i=0, and such that for any x∈∂ F_i, ∇ h_i(x) with ‖∇ h_i(x)‖ =1 is a normal vector to the boundary ∂ F_i at x.
Then we have h_i(x_i)=0 for i∈ N. Hence,
x:=(x_1,…,x_n) is a solution to the problem of maximizing
f∏_i∈ NF_i→ℝ,y↦ν^⊤∑_i∈ Ny_i
under the side-constraint that ℋ(y)=0 where ℋ∏_i∈ NF_i→ℝ^n
such that ℋ_i∏_j∈ NF_j→ℝ,y↦ h_i(y_i).
According to the method of Lagrange multipliers, there hence are λ_j∈ℝ
for j∈ N such that
∂_if(x)=∑_j∈ Nλ_j∂_iℋ_j(x),
for all i∈ N. Since ∂_iℋ_j(y)=δ_i,j∇ h_i(y_i) (where δ_i,j is the Kronecker delta),
it follows that
ν=∂_if(x)=∑_j∈ Nλ_j∂_iℋ_j(x)=λ_i∇ h_i(x_i).
In particular, λ_i≠ 0.
Moreover, by assumption, for all i∈ N, x_i maximizes g_i F_i→ℝ,y_i↦∑_j∈ Nμ_i,jy_i,j
under the side-constraint that h_i(y_i)=0. Hence, it follows that
there is λ'_i∈ℝ such that
μ_i=∇ g_i(x_i)=λ'_i∇ h_i(x_i).
Putting everything together, it follows that
μ_i=λ'_i∇ h_i(x_i)=(λ'_i/λ_i)ν
for all i∈ N. Since ‖μ_i‖=1=‖μ_j‖,
we have μ_i=ν/‖ν‖=μ_j. This shows the contrapositive.
I believe the result carries over to some degree to a game with non-smooth feasible sets. If there are kinks
in the Pareto frontiers, then at these points, it will be possible to maximize
slightly different weighted sums and still achieve a Pareto optimal outcome,
since several different maximized weighted sums or normal vectors of the
frontier will correspond to the same point.
Since there exist bargaining problems for which the boundaries ∂ F_i are smooth n-1-dimensional manifolds (e.g., in the trivial case in which the F_i are n-dimensional balls), this result shows that there exist problems for which maximizing different weighted sums would result in Pareto suboptimal outcomes.
Assume that there are weight vectors λ_i∈ℝ_≥0^n,
‖λ_i‖_1=1 for all i∈ N, such that λ_i,k≠λ_j,k
for some i,j,k∈ N. Then there is an ECL bargaining game B
such that if all players i∈ N choose to play a mixed strategy
that corresponds to a point x_i∈ F_i(B) such that
x_i∈argmax_y_i∈ F_i(B)λ_i^⊤y_i,
it follows that x=∑_i∈ Nx_i is not Pareto optimal.
Follows directly from Theorem 6.
Together with the utilitarian theorem, we can conclude that all superrationalists should maximize some common sum of utility
functions. This leaves open the question of which weighted
sum to maximize.
One suggestion by [][sec. 2.8.5]Oesterheld2017-qg is to choose weights
that normalize utility functions according to their variance. Variance
normalization is also supported by MacAskill2020-MACSNM-2, who
set up a scenario in which players submit utility functions to cast
their vote on a social utility function. Using
relatively strong ignorance assumptions, they show that
normalizing the variance of utility functions leads all players to
have equal voting power; that is, they are all equally likely to change
the option that is best under the social utility function.
For my setting, I think this approach does not work well. This is because
under some circumstances, variance normalization can lead one player
to expect negative gains from trade, and I think that one important
requirement for a compromise is that everyone gets positive gains
from trade. This is true even for players that implement an updateless decision
theory dai2009updateless or have only very little prior knowledge about ECL. Players will have some
(prior) beliefs to determine whether a trade will be positive. Given
these beliefs, the trade has to be positive. Otherwise, rational players
will decide not to engage in the compromise.
As an example where variance normalization does not work, take a game
with players 1,2 and action sets A_1={a_1,b_1} and A_2={a_2,b_2}, with utilities
as depicted in Tables <ref>–<ref>. Note that utility functions are additively separable.
Here, the
dominant option for both players is (a_1,a_2). To normalize according to variance, we have to determine a distribution over actions. Here, I assume a uniform distribution. Then the mean
for player 1 is μ_1=-2, and for player 2 it is μ_2=1.
We subtract this mean from the utilities
of all the players, then divide the utilities by their variance.
The variance is σ_i^2=∑_x∈ A_1× A_2(u_i(x)-μ_i)^2
for player i, which is 10 for both players. The normalized
utilities are as depicted in <Ref>.
Here, (b_1,a_2) maximizes the sum of normalized utility
functions. But this leaves player 1 worse off than without a compromise.
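The payoff tables are not reproduced here, but the failure mode can be reproduced on a hypothetical additively separable 2x2 game of my own construction (the numbers below are not those of Tables <ref>–<ref>). Following the procedure described above, i.e., subtracting each player's mean over the four outcomes, dividing by the summed squared deviations, and maximizing the sum, the compromise asks player 1 to switch to b_1 while player 2 keeps a_2, which gives player 1 negative gains from trade.

from itertools import product

A1, A2 = ["a1", "b1"], ["a2", "b2"]
# Hypothetical additively separable payoffs, indexed by (player 1's action, player 2's action).
u1 = {("a1", "a2"): 0, ("a1", "b2"): 0, ("b1", "a2"): -10, ("b1", "b2"): -10}
u2 = {("a1", "a2"): 0, ("a1", "b2"): -1, ("b1", "a2"): 3, ("b1", "b2"): 2}

def normalize(u):
    mean = sum(u.values()) / len(u)
    var = sum((v - mean) ** 2 for v in u.values())  # summed squared deviations, as defined in the text
    return {o: (v - mean) / var for o, v in u.items()}

n1, n2 = normalize(u1), normalize(u2)
best = max(product(A1, A2), key=lambda o: n1[o] + n2[o])
print(best)                          # ('b1', 'a2')
print(u1[best] - u1[("a1", "a2")])   # -10: player 1 is worse off than at (a1, a2)

In this construction, (a_1,a_2) is still the dominant option for both players, so the normalized compromise indeed leaves player 1 below the no-compromise outcome.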
Though I do not investigate this in more detail, I believe problems may arise with all methods that do not directly pick a point on the Pareto frontier as a compromise solution. It would still be interesting to investigate under which conditions variance normalization or other normalization methods give all agents positive gains from trade, but I will not pursue this approach here further.
In the following, I will consider solutions that directly pick out
a point in the feasible set. Once such a point is given, it is possible to derive weights for utility functions such that the point maximizes the corresponding
weighted sum. If the Pareto frontier is differentiable at this point,
it follows from the proof of <Ref> that these weights are unique. I will not delve into the issue of translations between maximizing
weighted sums and points on the Pareto frontier further in this report. (Though I will address the related issue of uniqueness of the individual mixed strategies maximizing a particular weighted sum in <Ref>.)
§.§ Bargaining theory
Here, I briefly review the existing literature on bargaining theory. Since there exists a large literature on bargaining, it seems likely to me that the most plausible and easy to find solutions to
bargaining problems have already been discovered. There are
two main approaches to bargaining problems:
* The axiomatic or normative approach, which involves specifying plausible axioms for bargaining solutions and proving that these
axioms are equivalent to some choice of a bargaining solution.
* The noncooperative or positive approach, which involves specifying a bargaining
game and analyzing the equilibria of the game.
Both approaches are interesting from an ECL perspective. First, the axiomatic
approach is interesting because a solution that has any chance of giving an agent evidence
that others are pursuing the same solution must be parsimonious. This seems more likely if the bargaining solution depends on plausible axioms. Moreover, it is an argument for relying on the existing
literature, because solutions that have already been found by economists are ceteris
paribus also solutions that are more likely to be found by other superrationalists.[Note that this argument is informal, assuming dependencies of the sort “if I look for plausible axioms
and find them, the other agent will do the same and find the same
axioms”. It is not backed up by some equilibrium or game-theoretic
analysis but a judgement of psychological plausibility.]
Second, modeling the situation using noncooperative game theory can provide
one with evidence in favor of a particular solution being more likely
to result from real-world bargaining situations. This has only been
done for causal bargaining, but hopefully acausal
bargaining theory would give similar results to the causal setting.
Some work points in the direction that such transfer may be possible oesterheld2019robust.
In the following, I will turn to the axiomatic approach and review some of the desiderata from the literature. There are several
plausible axioms for a bargaining solution:
Let μ be some bargaining solution, B a
bargaining game, F(B) its feasible set with Pareto frontier H,
and d∈ F(B) its disagreement point.
(1) (Weak) Individual rationality. The solution should give everyone non-negative
gains from trade. So μ_i(F(B),d)≥ d_i for all i∈ N.
(2) (Strong) Pareto optimality: μ(F(B),d)∈ H(B).
(3) Invariance to affine transformations of utility functions. Let ϕℝ^n→ℝ^n such that ϕ(x)=[λ_1x_1,…,λ_nx_n]^⊤+y
for some λ_i∈ℝ_>0,y∈ℝ^n. Then
μ(ϕ(F(B)),ϕ(d))=ϕ(μ(F(B),d)).
(4) Anonymity. For any permutation π on N, define π(x)=(x_π(1),…,x_π(n))
for x∈ℝ^n. Then π(μ(F(B),d))=μ(π(F(B)),π(d)).
I argued for (1) in the preceding section. (2) seems fairly plausible
on the grounds that ECL should not leave any possible gains from trade on the
table. (3) is plausible since the solution should not depend on which
representative we pick out of the equivalence class of utility functions
which give rise to the same cardinal ranking over mixed strategy profiles.
(4) tells us that the bargaining solution should be equivariant: the payoffs assigned to players should stay the same, even if we change their indices. While anonymity is plausible, this definition unfortunately ignores the individual feasible
sets F_i(B) for each player i that exist in the additively separable case. This means that players may have to be treated equally, even if their contributions F_i(B) to the overall payoffs differ. However, it seems that the relative size of the contributions
should make a difference for fairness. We will turn to this fairness point again
in <Ref>.
The axioms outlined above do not yet uniquely specify a bargaining solution. However, they do so after adding a fifth axiom. In the following sections, I will turn to two popular suggestions for fifth axioms, which correspond to two different bargaining solutions, the Nash bargaining solution (NBS) and the Kalai Smorodinsky bargaining solution (KSBS). In <Ref>, I discuss an additional solution proposed by Armstrong2013. I do not focus on it here since it violates individual rationality. Since the main parts of this report were written in 2018, I do not consider more recent work such as diffractor2022rose.
§.§.§ The Nash bargaining solution
The Nash bargaining solution (NBS) Nash1950-vg,Harsanyi1972,Lensberg1988-lf,Okada2010-ql,Anbarci2013-yd,Roth1979a
is the point in F(B) which maximizes the product of the players'
gains from trade, also called the Nash welfare.
μ(F(B),d):=argmax_x∈ F(B)^≥ d∏_i∈ N(x_i-d_i),
where F(B)^≥ d:={x∈ F(B)|∀ i∈ N x_i≥ d_i}.
Since F(B)^≥ d is compact and convex and there exists x∈ F(B)
such that x_i>d_i for all i∈ N by assumption, this point exists and is unique. It is also called the symmetric NBS.
Applying the NBS to <Ref>, using the point (10,√(10)) as a disagreement point,
we get the optimization problem
max_A∈[0,10],B∈[0,5](A+B-10)(√(2(10-A))+√(2(5-B))-√(10)).
This has a maximum at
A≈ 8.15, B≈ 3.15, which I have plotted as a green dot in <Ref>.
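This can be checked numerically by maximizing the product of gains over the parameterization given above (a sketch using scipy; the disagreement point is (10,√(10))).

import numpy as np
from scipy.optimize import minimize

d = np.array([10.0, np.sqrt(10.0)])  # disagreement point (Alice, Bob)

def utilities(x):
    A, B = x
    return np.array([A + B, np.sqrt(2 * (10 - A)) + np.sqrt(2 * (5 - B))])

def neg_nash_product(x):
    gains = utilities(x) - d
    return -gains[0] * gains[1]

res = minimize(neg_nash_product, x0=[8.0, 3.0], bounds=[(0, 10), (0, 5)])
print(res.x)             # approximately [8.15, 3.15]
print(utilities(res.x))  # the corresponding utility vector, approximately [11.31, 3.84]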
There are different axiomatizations of the NBS (i.e., choices for
the fifth axiom) which are equivalent. In the many-player case, an
axiom which I find plausible is due to Lensberg1988-lf.
It has an intuitive geometric interpretation but its mathematical formulation is quite technical.
Let P,Q⊆ N, P⊆ Q. Let x_P denote the projection
of x∈ℝ^Q onto ℝ^P. Let H_P^x={y∈ℝ^Q| y_Q∖ P=x_Q∖ P}.
Given C⊆ℝ^Q and x∈ C, write t_P^x(C) for
the projection of H_P^x∩ C onto ℝ^P.
Multilateral stability. If P⊆ N, μ(F(B),d)=x,
and D=t_P^x(F(B)), then
μ(D,d_P)=x_P.
<Ref> shows an illustration from Lensberg1988-lf. The idea is that one fixes the payoffs of the players in N∖ P at the solution x, projects the resulting slice of the feasible set onto ℝ^P to obtain D=t_P^x(F(B)), and lets the remaining players P renegotiate on D; the result should then be the same as the solution of the entire problem, projected onto ℝ^P. I find this axiom very appealing.
<Ref> (Pareto optimality, Invariance to affine transformations, Anonymity) and <Ref> (Multilateral stability) together are necessary and sufficient to specify the Nash bargaining solution.
Assuming <Ref>, multilateral stability is interchangeable with the Independence of irrelevant alternatives axiom:
[Independence of irrelevant
alternatives] Let B,B' be two bargaining games such that F(B)⊆ F(B'),
μ(F(B'),d)∈ F(B). Then μ(F(B'),d)=μ(F(B),d).
This also seems like an appealing desideratum. There are several further axiomatizations
of the NBS (in the 2 or n-player case).
There also exists an asymmetric version of the NBS Kalai1977,Roth1979a. Either Pareto optimality or strong Individual rationality
(i.e., μ_i(F(B),d)>d_i for all i∈ N) in combination
with Invariance to affine transformations and Independence of irrelevant alternatives
are necessary and sufficient to characterize all functions
argmax_x∈ F(B)^≥ d∏_i∈ N(x_i-d_i)^α_i,
where α_i>0 and ∑_i∈ Nα_i=1. (I am not sure
whether this would also work with Multilateral stability instead of
Independence of irrelevant alternatives.)
The NBS is also supported by several noncooperative bargaining models Nash1953,Binmore1986,Anbarci2013-yd,BRANGEWITZ2013224,Okada2010-ql.
The most common one is a version of
the alternating offers model by Rubinstein1982-vw. Here, players take turns in making offers, and the other
party (or, in the multilateral case, all other players) can reject
or accept the offer. Players are impatient: either there is a chance at each step that bargaining
breaks down, or the players discount their utilities over time. In this game, there is a unique subgame perfect equilibrium. The limit of this equilibrium as the probability of breakdown approaches zero is the NBS Binmore1986.
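As a sanity check of this convergence claim in the simplest transferable-utility case (splitting a unit pie, which is not the general setup of this report), the discounting variant has a well-known closed form: with per-round discount factors δ_i=exp(-r_iΔ), the first proposer's equilibrium share is (1-δ_2)/(1-δ_1δ_2), which approaches the (asymmetric) NBS split as the time Δ between offers shrinks.

import math

def rubinstein_share(r1, r2, dt):
    # Equilibrium share of a unit pie for the first proposer in the
    # alternating-offers game with discount factors exp(-r_i * dt).
    d1, d2 = math.exp(-r1 * dt), math.exp(-r2 * dt)
    return (1 - d2) / (1 - d1 * d2)

for dt in (1.0, 0.1, 0.01, 0.001):
    print(dt, rubinstein_share(1.0, 1.0, dt))  # -> 0.5, the symmetric NBS split
print(rubinstein_share(2.0, 1.0, 1e-4))        # -> about 1/3, an asymmetric NBS split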
§.§.§ The Kalai-Smorodinsky bargaining solution
The KSBS Kalai1975-yv is a more recent alternative to
the NBS. I think it is less suitable for ECL. In the KSBS, the utility
functions are normalized such that 0 is the disagreement point and
1 is the ideal point—the best possible payoff for the agent (among
all payoffs in F(B)). Then the bargaining solution is the point
where the line from the zero point to the point where everyone has
1 utility intersects with the Pareto frontier. This means that the
solution is the point on the Pareto frontier at which ratios between players' utilities are equal to the ratios between their ideal utilities. Formally, if U_i(B) is the best
possible attainable point for i, then the solution is the point
x∈ H(B) such that
(U_i(B)-d_i)/(U_j(B)-d_j)=(x_i-d_i)/(x_j-d_j)
for all i,j∈ N.
Apparently, there are some problems with generalizing the KSBS to
n-player games Roth1979b. One needs several axioms. One
possible way to axiomatize it in this case is via the following axioms
(in addition to <Ref>) Karos2018generalization.
To make the definitions easier, assume for now that d=0
(if this is not the case, just subtract d from all points in the
feasible set). Then a bargaining solution is just a function of the
feasible set, assuming d=0.
[Individual monotonicity] μ_i(F(B))≤μ_i(F(B')) for all
i∈ N and all problems B, B' with F(B)⊆ F(B'),
U_i(B)≤ U_i(B') and U_j(B)=U_j(B') for all j≠ i.
That is, if someone's ideal point is greater in B than in B', then,
all else equal, their bargaining solution should also be greater in
B than in B'.
[Homogeneous ideal independence of irrelevant alternatives] μ(F(B))=μ(F(B'))
for all bargaining problems B,B' with F(B)⊆ F(B'), μ(F(B))∈ F(B'),
and U(B)=rU(B') for some r≤1.
This is a weakened version of independence of irrelevant alternatives
which requires the ratios of the ideal points to be equal for the
axiom to apply.
[Midpoint domination] For all bargaining problems B and any player i, we have μ_i(F(B))≥1/nU_i(B).
This axiom is also known as Proportional fairness.[<https://en.wikipedia.org/wiki/Proportional_division>]
The KSBS is supposed to be fairer than the NBS, in the sense that if someone has better options (their ideal point
is better), then they should be left better off in bargaining. This
is not the case in the NBS but is the case for the KSBS. However, I disagree with this notion of fairness. To me, fairness in the two-player case is concerned with splitting
the gains from trade equally or according to differences in power.[In the Rubinstein bargaining model, one can derive bargaining power
from players' discount rates. If a player
has a higher time discount, they have a weaker bargaining position.]
But splitting gains from trade equally only makes sense in a transferable utility game, i.e., a game in which there is a common currency of money or resources which has equal utility for both players. Since ECL deals with arbitrary utility functions, we cannot in general assess fairness in the same way here.
An important aspect of fairness is the idea
that there should not be one player or a group of players that only contribute very little to the ECL-compromise, while gaining a lot from it. This type of fairness can be ensured in a coalitional game by requiring that the solution is coalitionally stable. If a player contributes little, then a coalition
of players can split off such that all players in the coalition are
better off, making the solution unstable. I will discuss this in <Ref> and conclude that the KSBS does not fare better than the NBS in this respect.
I think there is a problem with the KSBS that arises
if several agents have the same utility function. Consider a case
with players 1,2,3 and utility functions u_1,u_2,u_3,
where players 1,2 have the same utility function. Initially, each player produces 1 utility on their own; since players 1 and 2 benefit each other, they each start with 2, so d=(2,2,1). Now they are trying to decide how to split a
surplus of 1 utility that arises from cooperating. The best achievable utilities are b_1,2=3
and b_3=2. Hence,
(b_1-d_1)/(b_2-d_2)=(b_2-d_2)/(b_3-d_3)=(b_1-d_1)/(b_3-d_3)=1,
so the ratios of utilities minus the default points in the chosen
outcome have to be equal. Hence, the KSBS chooses (2.5,2.5,1.5).
But this seems wrong: Players 1 and 2 have only received half of the utility, even though there are two of them. If they had been two players with distinct goals, then they would have each gotten one third of the utility, giving 1 and 2 a total of 2/3.
The NBS, since it is maximizing a product, is instead skewed towards
1 and 2 and chooses the point ≈(2.7,2.7,1.3), effectively spending one third of the surplus on behalf of each player.
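Both points can be verified with a small grid search, parameterizing the Pareto frontier by the share s∈[0,1] of the unit surplus that is spent on the utility function shared by players 1 and 2 (the remaining 1-s goes to player 3), so that payoffs are (2+s,2+s,2-s).

import numpy as np

d = np.array([2.0, 2.0, 1.0])

def payoff(s):
    return np.array([2 + s, 2 + s, 2 - s])

s_grid = np.linspace(0.0, 1.0, 100001)
s_nbs = max(s_grid, key=lambda s: np.prod(payoff(s) - d))  # maximize the product of gains
print(payoff(s_nbs))   # about [2.67, 2.67, 1.33], i.e. s = 2/3

# KSBS: the ideal gains are all 1, so the solution equalizes raw gains: s = 1 - s.
print(payoff(0.5))     # [2.5, 2.5, 1.5]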
Lastly, another reason to prefer the NBS over the KSBS is that the KSBS seems
to lack the widespread support via noncooperative models and plausible
axiomatizations that the NBS has. The axioms for the KSBS in the multilateral
case seem much more contrived than those for the NBS, which makes
it less plausible as a multiverse-wide Schelling point.
Although there apparently are some noncooperative bargaining models
supporting the KSBS Anbarci1997, the support for the
NBS seems greater to me. Relatedly, Google scholar searches for
word combinations such as “Nash bargaining noncooperative” and
“Kalai smorodinsky bargaining noncooperative”, or for the names of the solutions, consistently turn up more than
ten times as many papers for the NBS. Admittedly, there may be
some path dependency or founder's effect here. Nash is a more prominent
name and the NBS was the first published solution. Still,
it seems reasonable that a solution like the NBS—simply maximizing
the product of gains from trade—will be discovered first and thus considered a Schelling point in many parts of the multiverse.
§.§ Observations
Here, I make some initial observations about the bargaining model and discuss potential issues. First, I discuss the question of whether the actions corresponding to a point on the Pareto frontier are unique (<Ref>). If not, this could lead to a coordination problem. I give an example where the decomposition is not unique, show that it is unique if utilities are additively separable and the feasible set is strictly convex, and argue that this should not be an issue in practice. Second, I make some basic observations about gains from trade given additively separable utilities, based on marginal rates of substitutions between different utility functions on individual Pareto frontiers (<Ref>). I conclude with remarks on trade between more than two players and continuity of the NBS (<Ref>).
§.§.§ Uniqueness of the actions corresponding to bargaining solutions
The NBS provides players with a unique compromise point x∈ H(B).
The question arises whether this leaves all players with
a clear instruction on which action to take. There may be
several mixed strategy profiles in Σ which correspond to x.
Then, if players cannot coordinate, the actually chosen outcome may
differ from the NBS. In principle, this outcome could even be worse than the disagreement point
for some.
As an example, take the game with individual Pareto frontiers H_1=H_2={x∈ℝ^2| x_1+x_2=1}
and d=0 (Figure <ref>). Clearly, the overall Pareto frontier is H={x∈ℝ^2| x_1+x_2=2},
and the point (1,1)∈ H is the NBS. Given any action combination a∈ H_1, b∈ H_2 such that a_1=b_2
and a_2=b_1, we have a_1+b_1=a_2+b_2=1. Hence, any
such combination sums to (1,1) and maximizes the product
of gains.
Now, if player 1 chooses (2,-1), and the other player chooses
(0.5,0.5), which are both individually possible choices if they
were combined with a suitable choice by the respective other player,
then one of the players is worse off than their disagreement point.
Hence, choosing a compromise outcome leads to a coordination problem.
The problem would be even worse without separability of utility
functions. In this case, the coordination problem may be
severe and wrong combinations of action may even be Pareto suboptimal.
Given additive separability, the problem is not as severe. It is not a problem
if, for a player i∈ N, there are several σ_i∈Σ_i
with the same utilities for all players. Hence, it suffices to analyze
a game directly on the basis of the individual feasible sets (F_i)_i∈ N.
Here, as the next result shows, the outcomes will at least always be Pareto optimal.
Let B=(N,(F_i)_i∈ N,d) be an additively separable bargaining
game. Let x∈ H. Let
X_i={y_i∈ H_i|∃ y_-i∈ H_-i:=∑_j∈ N∖{i}H_j:y_i+y_-i=x}
be the set of points y_i that player i can choose from to realize x. Then ∑_i∈ NX_i⊆ H. That is, any combination of such points chosen independently by different players is Pareto optimal.
Let μ∈ℝ^n such that μ^⊤x=max_y∈ Fμ^⊤y
(since x is Pareto optimal, such a weight vector exists). Note that
for any x_i∈ X_i, we have μ^⊤x_i=max_y∈ F_iμ^⊤y, since otherwise the sum could not attain max_y∈ Fμ^⊤y.
Hence, μ^⊤y=max_z∈ Fμ^⊤z for any y∈∑_i∈ NX_i.
But this means that y∈ H.
Under which conditions could ∑_i∈ NX_i contain more
than one vector? At least if F is strictly convex, this cannot
happen.
Same assumptions as <Ref>. Moreover, assume that F(B)
is strictly convex.
Then ∑_i∈ NX_i={x}.
Assume that for i∈ N there are two points x≠ x'∈ H_i
and points y,y'∈ H_-i:=∑_j∈ N∖{i}H_j such
that x+y=x'+y'=h∈ H. Let λ=1/2. We have λ x+(1-λ)x'+λ y+(1-λ)y'=λ(x+y)+(1-λ)(x'+y')=h∈ H.
Moreover, from <Ref>, it follows that h̃:=x'+y∈ H
and λ h+(1-λ)h̃=λ x+(1-λ)x'+y∈ H.
Since h≠h̃∈∂ F(B) and λ h+(1-λ)h̃∈∂ F(B),
F(B) is not strictly convex, which is a contradiction. Hence, x=x'.
Overall, I believe that the kind of non-uniqueness discussed here is unlikely to be a big problem. First, even if the decomposition is not unique in principle, there may still be unique points that are somehow more parsimonious and can thus serve as a Schelling points. E.g., in <Ref>, this could be the symmetric point (0.5,0.5). Second, I think it is very unlikely that a situation in which
∑_i∈ NX_i contains more than one point occurs in practice. I have not formalized this, but intuitively, the reason is that Pareto optimal points are points at which the normal vector to the individual Pareto frontiers for all players are colinear. It is unlikely that two players have Pareto frontiers that have a part that is affine and thus not strictly convex, for which their normal vectors are also exactly colinear. This is because there can only be countably many such affine parts.
§.§.§ Possible gains from trade
We can assess possible gains from trade by looking at the individual Pareto frontiers. Assume that
the whole surface of F_i is a smooth manifold, for each player i (recall that F_i is the set of expected utility vectors that player i can choose from, assuming additive separability, i.e., that the total expected utility for each player is a sum of the individual contributions from each player). For instance, one could justify this with the fact that there exists a continuum of possible actions in the real world.
Then there exists a unique normal vector to this surface at each point on the Pareto frontier H_i⊆∂ F_i. As mentioned in <Ref>, in the 2-D case, the slope of the Pareto frontier at a point corresponds to the marginal rate of substitution between the two utility functions. Pareto optimal points are points at which those slopes are equal for both players, and the normal vectors colinear. Gains from trade
are possible whenever the marginal rates of substitution between the
different utility functions on the Pareto frontier are not equal for
all players. In particular,
if a player was
previously optimizing for their own goals, then giving utility
to other players costs them nothing on the margin (see <Ref>). This idea was introduced as “marginal charity” by hanson2012marginal.
The amount of trade that can happen depends on the specific shape of the Pareto frontiers. If the Pareto frontiers are curved strongly at the disagreement point, such that Pareto optimal trades are very close to this point, then barely any trade is possible
(see <Ref>). I will return to this analysis of possible gains from trade using different toy models for the Pareto frontiers in <Ref>.
§.§.§ Further observations
Another observation concerns trades between more than
two players, which can exhibit a more complicated graph structure. For instance, if there are three players 1,2,3,
it is possible that 1 benefits 2, 2 benefits 3, and 3
benefits 1. Not everyone has to receive gains from trade from everyone
else. This property allows for higher gains from trades, but it also means that there can be players involved that don't benefit anyone else, which can be problematic (see <Ref>).
Lastly, it is worth noting that the NBS is continuous in the
feasible set and the disagreement point Lensberg1988-lf.
This means that the NBS is in some sense robust to slight changes or uncertainties about the right specification of the bargaining game.[I believe this is also true of the KSBS, though I have not investigated this.]
§ BAYESIAN GAME MODEL
In this section, I introduce a Bayesian game formalism to model uncertainty about the values and empirical situations of agents in the multiverse, using the type space formalism by Harsanyi1967,
adapted to ECL. In a Bayesian game, players have incomplete information, meaning they are uncertain about the utility function of other players. I will build on the formalism introduced in Sections <ref> and <ref> in <Ref> to define incomplete information bargaining games.
In <Ref>, I introduce the basic formalism and notation, and in <Ref>, I define pure and mixed strategies and their expected utilities. Next, I introduce joint distributions over strategies in <Ref>, which I use in <Ref> to define dependency equilibria, alongside standard Bayesian Nash equilibria. Dependency equilibria assume optimal conditional expectations of actions under joint distributions over strategies and can thus incorporate the evidential reasoning required by ECL.
I then show several equilibrium results, including a generalization of Spohn2007-fp's folk theorem for dependency equilibria, which says that all strategy profiles that Pareto dominate a Nash equilibrium are dependency equilibria (<Ref>). This result shows that dependency equilibria alone won't be useful in constraining beliefs over the players' strategies in ECL further. I also derive simple conditions for when a strategy profile leads to positive gains from trade and is thus a dependency equilibrium.
Finally, in <Ref>, I discuss a possible extension of the formalism to include uncertainty about decision procedures and similarity to other agents.
§.§ Formal setup
An ECL Bayesian game is a tuple G=(N,A,T,p,(u_i)_i∈ N), where
* N={1,…,n} is the set of players;
* A is a generic, finite set of actions;
* T={1,…,m} is a generic set of types, specifying the private information available to each player, i.e., the values and empirical situation in each universe;
* p T^n→[0,1] is a prior probability distribution
over the players' types such that all types have positive probability for all players, i.e., ∑_t_-i∈ T_-ip(t_-i,t_i)>0 for all players i∈ N and types t_i∈ T;
* (u_i)_i∈ N is the tuple of utility functions for each player,
where u_i A^n× T^n→ℝ.
Each player's type gets randomly chosen according to the joint distribution p. A type specifies a player's private information, i.e., whatever information about the player
is not common knowledge. In an ECL Bayesian game, I understand each player as a causally separate universe, inhabited by some intelligent civilization that is able to engage in ECL. The player's type then specifies this civilization's values, as well as their options in furthering any of the other types' values. Players know how many universes and thus causally disconnected civilizations there are, but they are uncertain about everyone's types.
I assume that this formal framework is common knowledge. In particular, everyone knows the common prior
over types. I believe this is a good starting point to analyze the situation, but I am unsure to what degree ECL breaks down as we relax the assumption. One possible generalization for future work would be to allow for individual
probability distributions over types that don't stem from a common
prior over types [see][]Harsanyi1967, or analyze relaxations to common knowledge such as common
p-belief Monderer1989-pj.
My formalization is different from a standard Bayesian game [e.g.][p. 347f.]maschler2020game since the set of types T is the same for each player, and there is only one set of actions, independent of the player and type. Both of these are simply notational simplifications without loss of generality. First, all the information about the actions is encoded in the utility functions, which can depend on players and types (if there are too many actions for some types, we can simply map several actions onto the same utilities). Second, we can still distinguish the different players' type distributions by choosing an appropriate prior distribution p. The only restricting condition here is the assumption that each type for each player has strictly positive probability. However, this assumption could easily be relaxed without changing anything substantial; it merely serves to avoid cumbersome case distinctions based on whether a type has zero probability.
My simplification makes particular sense in ECL, where players are causally disconnected universes. Here, we can regard players simply as vessels that can be inhabited by any of the types, such that really only the types matter. I do not think it would be useful at this point to try to distinguish the different universes.
I formalize the idea that we cannot distinguish between the players as anonymity below, alongside the additive separability assumption from the previous section. I will use this assumption a lot in the following.
Assume that there
are utility functions u_t,t' for all t,t'∈ T such that for all
a∈ A^n and t∈ T^n, we have
u_i(a,t)=∑_j=1^nu_t_j,t_i(a_j)
for each player i∈ N. Then the utility functions are called additively separable and anonymous. A bargaining game B is called additively separable if the corresponding utility functions are additively separable.
This definition says that we can express the utility function of any player as a sum of contributions from every player (additive separability), where the received utility only depends on their type, as well as the type of the other player (anonymity). The term u_t,t'(a) thus expresses the utility that any player of type t' gets from any player of type t, when that player chooses action a∈ A.
The prior distribution
p is called anonymous if, for all permutations on players
π and type vectors t∈ T^n, we have p(t_1,…,t_n)=p(t_π(1),…,t_π(n)).
This says that also the distribution over types is anonymous, i.e., symmetric in the players. Note that this does not mean that players' types have to be independent. One could still incorporate a belief under which, for instance, players always believe that other players are likely of the same type as they are.
Lastly, I define the same properties for Bayesian games.
I say that an ECL Bayesian game G=(N,A,T,p,(u_i)_i∈ N) is additively separable and anonymous if u is additively separable and anonymous and if p is anonymous.
§.§ Pure and mixed strategies
Now I turn to the strategies of players in an ECL Bayesian game, as well as associated expected utilities. I start with pure, i.e., deterministic strategies. I then turn to mixed strategies.
A pure strategy α_i∈ A^m is a mapping from the possible
types of player i to their actions. We denote a pure strategy profile
as α∈ A^n,m.
To introduce expected utilities, we need some additional notation for the distribution over types. In a slight abuse of notation, I denote the prior probability of player i
having type t_i as
p(t_i):=∑_t_-i∈ T_-ip(t_-i,t_i).
Note that, if p is anonymous,
p(t_i) does not depend on the player.
Player i of type t_i has a conditional belief over t_-i∈ T_-i, which is given by
p(t_-i| t_i):=p(t_-i,t_i)/p(t_i).
Now the expected utility of α for player i of type t_i
is
EU_i(α; t_i):=∑_t_-i∈ T_-ip(t_-i| t_i)u_i(α_1,t_1,t_1,…,α_n,t_n,t_n).
This is an ex interim expected utility, i.e., after updating on the player's own type, but before having seen anyone else's type. I will focus on ex interim expected utilities in this report since they allow for modeling players with different beliefs, which is an important aspect of ECL in my view.[For more discussion on the question of whether players should update on their own type in principle, see benya2014sin,treutlein2018udt.]
Given two players i≠ j∈ N, the joint prior p and types
t_i,t_j∈ T, we can define
p(t_i| t_j):=∑_t'_-j∈ T_-j s.t. t'_i=t_ip(t_-j'| t_j).
If p is anonymous, then p(t_i| t_j)
depends only on the two types and not the players. We can thus write p(t'| t) for the probability that any player of type t assigns to any other player having type t'.
Given additive separability and anonymity of u, one can use this to simplify
the expected utility of α as
EU_i(α; t_i)=u_t_i,t_i(α_i,t_i)+∑_j∈ N∖{i}∑_t_j∈ Tp(t_j| t_i)u_t_j,t_i(α_j,t_j).
If α_1t=…=α_nt for all t∈ T, I say that
α is anonymous and we can write α∈ T^m. If, in
addition to additive separability/anonymity of u, α
and p are anonymous, we can write
EU_t(α)=u_t,t(α_t)+(n-1)∑_t'∈ Tp(t'| t)u_t',t(α_t')
for the expected utility for any player of type t, if the anonymous strategy α∈ T^m is played. Here, the first term stands for the utility that the player produces for themself, while the term with the factor (n-1) stands for the expected utility provided by all other n-1 players, where the expectation is over possible types for any of the other players (and this term is the same for every player due to anonymity).
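As a direct transcription of this formula (a sketch in Python; the two-type inputs below are hypothetical placeholders, not the example discussed later):

def eu_type(t, alpha, u, p_cond, types, n):
    # EU_t(alpha) = u_{t,t}(alpha_t) + (n-1) * sum_{t'} p(t'|t) * u_{t',t}(alpha_{t'})
    own = u[(t, t)](alpha[t])
    others = sum(p_cond[(t, s)] * u[(s, t)](alpha[s]) for s in types)
    return own + (n - 1) * others

# Hypothetical inputs: u[(s, t)] is the utility a player of type s provides to
# type t as a function of s's action; p_cond[(t, s)] is p(s | t).
types = [0, 1]
u = {(0, 0): lambda a: 2.0 * a, (0, 1): lambda a: 0.0,
     (1, 0): lambda a: 1.0 * a, (1, 1): lambda a: 1.0 * a}
p_cond = {(t, s): 0.5 for t in types for s in types}
alpha = {0: 1, 1: 1}  # anonymous pure strategy: both types take action 1
print(eu_type(0, alpha, u, p_cond, types, n=2))  # 2 + 1 * (0.5*2 + 0.5*1) = 3.5
print(eu_type(1, alpha, u, p_cond, types, n=2))  # 1 + 1 * (0.5*0 + 0.5*1) = 1.5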
Next, I consider mixed strategy profiles. In a Bayesian game, we have to specify a distribution over actions for each tuple (i,t_i)∈ N× T of a player and associated type.
A mixed strategy σ_i∈Σ_i:=Δ(A)^T for player i specifies for each possible type t_i, a probability distribution over actions, denoted via σ_i(·| t_i). A mixed strategy profile is a vector σ∈∏_i∈ NΣ_i of mixed strategies for each player.
As with actions, we can denote a mixed strategy profile specifying only distributions over actions for players N∖{i} as σ_-i∈Σ_-i.
The expected utility for player i of action a_i, given mixed strategy profile σ_-i∈Σ_-i, is defined as
EU_i(σ_-i,a_i;t_i):=∑_t_-i∈ T_-ip(t_-i| t_i)∑_a_-i(∏_j∈ N∖{i}σ_j(a_j| t_j))u_i(a_1,t_1,…,a_n,t_n).
Similarly, we can define
EU_i(σ;t_i)
:=∑_a_i∈ Aσ_i(a_i| t_i) EU_i(σ_-i,a_i;t_i).
Similarly to the above for pure strategy profiles, we could simplify this expression for additively separable and anonymous games. I will skip this here since it won't be needed in the following.
§.§ Joint distributions over strategies
Mixed strategy profiles specify independent distributions over actions for each player. I will use them below to define Bayesian Nash equilibria [][p. 354]maschler2020game. However, in the case of ECL, it is important to consider dependencies between the actions of different players. In this section, I will thus define joint strategy distributions, which allow for different players' actions to be dependent. I will use them to introduce a Bayesian game generalization of dependency equilibria Spohn2007-fp,Spohn2010Depen-13626,Spohn2003-gi,Spohn2005-hi, which explicitly take such dependencies into account.
Let S={s T^n→Δ(A^n),t↦ s(·| t)}
be the set of conditional joint probability distributions over the
actions of all players given their types. Then s∈ S is called a joint strategy
distribution.
Unlike the mixed strategy profiles in bargaining problems, I interpret the distributions over strategies here as subjective credences, rather than as options that could be implemented by the players, e.g., via a randomization device. If players were able to randomize, then this would naturally lead to independent distributions (absent a randomization device that is correlated across the multiverse). Instead, ECL is based on beliefs over actions that imply that agents' actions are dependent, due to the similarity of their decision procedures. I use joint strategy distributions to formalize such beliefs.[The idea that distributions over actions describe beliefs rather than randomization is also common in traditional game theory. E.g.,
Aumann1987 writes:
“An important feature of our approach is that it does not require
explicit randomization on the part of the players. Each player always
chooses a definite pure strategy, with no attempt to randomize; the
probabilistic nature of the strategies reflects the uncertainties
of other players about his choice.”]
Joint strategy distributions can also be anonymous, i.e., symmetric in the player number.
A joint strategy profile s∈ S is called anonymous if for any player permutation
π N→ N, action vector a∈ A^n, and type vector t∈ T^n, we have
s(a| t)=s(a_π(1),…,a_π(n)| t_π(1),…,t_π(n)).
Joint strategy distributions are equivalent to standard mixed strategy profiles in the special case in which the marginals over the different players' actions are independent. To define this formally, we denote the probability for player i∈ N of playing a_i given
type vector t∈ T^n by
s(a_i| t):=∑_a_-i∈ A_-is(a_-i,a_i| t).
Moreover, the prior probability for player i∈ N of type t_i of playing
a_i is
s(a_i| t_i):=∑_t_-i∈ T_-is(a_i| t_-i,t_i)p(t_-i,t_i)/p(t_i).
If s is anonymous, these probabilities don't depend on i. This
justifies defining s(a| t) for any a∈ A and t∈ T in
this case.
A joint strategy distribution s is said to be uncorrelated, if
* s(a_i| t_i)=s(a_i| t) for any player i∈ N, type t∈ T^n and action a_i∈ A;
* s factorizes into a product of its marginals, i.e., if for any t∈ T^n and a∈ A^n, we have
s(a| t)=∏_i∈ Ns(a_i| t_i).
Note that the term “uncorrelated” is imprecise, since the definition actually requires independence. However, I am using the term for simplicity.
Now I turn to conditional expected utilities of actions. For player i∈ N, the conditional probability of other players' actions a_-i∈ A_-i
given player i's action a_i, type vector t∈ T^n, and joint strategy profile s∈ S is
s(a_-i| a_i,t):=s(a_-i,a_i| t)/s(a_i| t).
If the players' action distributions under s are dependent, then this probability might differ between different actions a_i. It takes dependencies into account, instead of simply marginalizing over all possible actions for player i to arrive at an unconditioned probability.
Next, for i,j∈ N, the probability of a∈ A^n given t_i,t_j∈ T
is
s(a| t_i,t_j):=∑_t'∈ T^n s.t. t'_i=t_i,t'_j=t_js(a| t')p(t')/∑_t'∈ T^n s.t. t'_i=t_i,t'_j=t_jp(t').
In another slight abuse of notation, I regard α
as an A^n,m-valued random variable and denote the probability
that player i of type t_i plays a_i given that player
j of type t_j plays a_j via
s(α_i,t_i=a_i|α_j,t_j=a_j,t_i,t_j):=∑_a'∈ A^n s.t. a'_i=a_i,a'_j=a_js(a'| t_i,t_j)/∑_a'∈ A^n s.t. a'_j=a_js(a'| t_i,t_j).
Given anonymous s and p, if i≠ j∈ N, this does not depend on
the players. Lastly, I define
s(a_i,t_i| a_j,t_j):=s(α_i,t_i=a_i|α_j,t_j=a_j,t_i,t_j)p(t_i| t_j).
Apparently, given anonymity, s(a_i,t_i| a_j,t_j) only
depends on the types and actions, but not on either i or j (as
long as i≠ j).
With these notations at hand, we can proceed and define conditional expected
utilities.
The conditional expected
utility of strategy s∈ S, given action a_i∈ A and type t_i for player i is defined as
EU_i(s; a_i,t_i):=∑_t_-i∈ T_-ip(t_-i| t_i)∑_a_-i∈ A_-is(a_-i| a_i,t_-i,t_i)u_i(a_-i,a_i,t_-i,t_i).
Moreover, assuming anonymity of s and p and additive separability and anonymity of u, we define
EU_t(s; a):=u_t,t(a)+(n-1)∑_t'∈ T∑_a'∈ As(a',t'| a,t)u_t',t(a')
as the conditional expected utility of s∈ S for any player of type t given action
a∈ A.
Note that here, we condition the distribution over the other players' actions on player i's action. The conditional expected utility of different actions hence differs not only due to the different causal effects of the actions, but also due to potential dependencies between different players' actions under the distribution s. For instance, in a prisoner's dilemma, one could define a distribution s under which either all players cooperate or all players defect. Then the conditional expected utility of cooperating would be higher, since it would take into account the correlations between the players' actions.
The following lemma justifies the above definition of EU_t(s;a) in the case of anonymity and additive separability.
Assume s∈ S and p are anonymous and u is additively separable and anonymous. Then we have
EU_i(s;a_i,t_i)=EU_t_i( s;a_i)
for any player i∈ N, action a_i∈ A, joint strategy profile s∈ S, and type t_i∈ T.
We have
EU_i(s;a_i,t_i)
=∑_a_-i∈ A_-i∑_t_-i∈ T_-is(a_-i| a_i,t_-i,t_i)p(t_-i| t_i)u_i(a_-i,a_i,t_-i,t_i)
=u_t_i,t_i(a_i)+∑_a'∈ A_-i∑_t'∈ T_-is(a'| a_i,t',t_i)p(t'| t_i)∑_k∈ N∖{i}u_t'_k,t_i(a'_k)
=
u_t_i,t_i(a_i)+∑_k∈ N∖{i}∑_t”∈ T∑_a”∈ A∑_a'∈ A_-i s.t. a'_k=a”∑_t'∈ T_-i s.t. t'_k=t”s(a'| a_i,t',t_i)p(t'| t_i)u_t'_k,t_i(a'_k)
=u_t_i,t_i(a_i)+(n-1)∑_t'∈ T∑_a'∈ As(a',t'| a_i,t_i)u_t',t_i(a')
=EU_t_i( s;a_i).
Before turning to equilibrium concepts, we briefly consider the case in which strategies are uncorrelated. In this case, conditional expected utilities correspond to standard expected utilities given a mixed strategy profile and an action.
Assume s∈ S is uncorrelated and define σ via σ_i(a_i| t_i):=s(a_i| t_i) for any player i∈ N, action a_i∈ A, and type t_i∈ T. Then we have
EU_i(s; a_i,t_i)=EU_i(σ_-i,a_i;t_i)
for all players i∈ N, actions a_i∈ A, and types t_i∈ T.
We have
EU_i(s;a_i,t_i) =∑_t_-i∈ T_-ip(t_-i| t_i)∑_a_-i∈ A_-is(a_-i| a_i,t_-i,t_i)u_i(a_-i,a_i,t_-i,t_i)
=
∑_t_-i∈ T_-ip(t_-i| t_i)∑_a_-i∈ A_-is(a_-i,a_i| t_-i,t_i)/s(a_i| t_-i,t_i)u_i(a_-i,a_i,t_-i,t_i)
=
∑_t_-i∈ T_-ip(t_-i| t_i)∑_a_-i∈ A_-i∏_j∈ Ns(a_j| t_j)/s(a_i| t_i)u_i(a_-i,a_i,t_-i,t_i)
=
∑_t_-i∈ T_-ip(t_-i| t_i)∑_a_-i∈ A_-i∏_j∈ N∖{i}s(a_j| t_j)u_i(a_-i,a_i,t_-i,t_i)
=
∑_t_-i∈ T_-ip(t_-i| t_i)∑_a_-i∈ A_-i∏_j∈ N∖{i}σ_j(a_j| t_j)u_i(a_-i,a_i,t_-i,t_i)
=EU_i(σ_-i,a_i;t_i).
§.§ Equilibrium concepts
To analyze the equilibria of ECL Bayesian games, I first define a Bayesian Nash equilibrium, which is a standard solution concept for Bayesian games and which assumes mixed strategy profiles, or equivalently, uncorrelated joint strategy distributions. Afterwards, I will introduce dependency equilibria Spohn2007-fp,Spohn2010Depen-13626,Spohn2003-gi,Spohn2005-hi, which are based on conditional expected utilities of potentially dependent strategy distributions and are thus more suitable for ECL. Both equilibrium concepts can be motivated descriptively, to analyze how agents in the multiverse might behave, as well as normatively, to ask how rational agents should behave. In addition to the assumption of common knowledge of rationality, both equilibrium concepts are based on the assumption that all players share the same belief over the actions of the other players, conditional on their types. This assumption is too restrictive, absent a mechanism that could force such a common belief, such as repeated interactions or mutual simulation. However, as with other modeling assumptions, we will use this as a starting point for our analysis. In the case of dependency equilibria, our assumptions don't constrain the space of equilibria much: there exists a result similar to the folk theorems for iterated games [see][]fudenberg1986folk, saying that any Pareto improvement over a Bayesian Nash equilibrium is a dependency equilibrium (see <Ref>).
A mixed strategy profile σ is a Bayesian Nash equilibrium
if for all players i∈ N, types t_i∈ T, actions a_i∈ A such that σ_i(a_i| t_i)>0, we have
EU_i(σ_-i,a_i;t_i)≥ EU_i(σ_-i,a'_i;t_i) ∀ a'_i∈ A.
An uncorrelated joint strategy distribution s∈ S is a Bayesian Nash equilibrium if
EU_i(s;a_i,t_i)≥ EU_i(s;a'_i,t_i) ∀ a'_i∈ A
for all players i∈ N, types t_i∈ T, and actions a_i∈ A such that s(a_i| t_i)>0.
Note that for a Bayesian Nash equilibrium σ, we have
EU_i(σ;t_i)
=∑_a_i∈ Aσ_i(a_i| t_i)EU_i(σ_-i,a_i;t_i)
≥∑_a_i∈ Aσ_i(a_i| t_i)EU_i(σ_-i,a'_i;t_i)
=EU_i(σ_-i,a'_i;t_i)
for any player i∈ N, action a'_i∈ A, and type t_i∈ T. Similarly, one can show that if EU_i(σ;t_i)≥ EU_i(σ_-i,a'_i;t_i) holds for all actions a'_i, then σ is a Bayesian Nash equilibrium.
A Bayesian Nash equilibrium is a generalization of a Nash equilibrium for Bayesian games, where the expected utility of a strategy is replaced with the ex interim expected utility. The condition for Nash equilibria is simply EU_i(σ_-i,a_i)≥ EU_i(σ_-i,a'_i) for all players i∈ N and actions a_i,a_i'∈ A where σ_i(a_i)>0.
In a Bayesian Nash equilibrium, we assume that players respond optimally to the distributions over other players' actions. We assume that these distributions are independent, and taking an action does not provide any evidence about the actions of other players. Hence, the notion of best response here takes into account only causal effects of an action, by influencing u directly, rather than by influencing the distribution over actions. As a result, Bayesian Nash equilibria cannot capture the type of reasoning that is required for ECL.
There is another standard solution concept that does assume potential correlations between players' actions, the correlated equilibrium [][ch. 8]maschler2020game. However, this equilibrium concept also fails to capture ECL-type reasoning. Even though players' actions can be correlated, the notion of best response still requires that a player cannot improve their payoff by unilaterally deviating from the joint distribution, without taking into account the evidence such deviations would provide about other players' actions. Hence, I will not delve further into correlated equilibria here.
Instead, I will turn to dependency equilibria, which incorporate evidential reasoning by considering potentially correlated joint distributions and evaluating only conditional expected utilities of actions. There are several other concepts achieving a similar purpose that one could look at in future work al2015evidential,daley2017magical,halpern2018game, but I will focus on dependency equilibria in the following. The following definition of a dependency equilibrium is a Bayesian game generalization of the definition in Spohn2007-fp.[For more discussions on dependencies between agents in games, see Spohn2007-fp,Spohn2010Depen-13626. Spohn sees prior
causal interactions as a common cause between agents' actions, leading to a dependency [cf.][]sep-physics-Rpcc. ECL involves dependencies despite no prior causal interaction. Instead, the dependency is caused by the similarity of decision
algorithms and decision situations of agents in ECL. It could be considered a logical dependency, for which there does not need to exist a common cause. Alternatively, the decision situation and decision algorithm similarity could be considered as an abstract common cause [cf.][]Yudkowsky2010-ur.]
A joint strategy distribution s∈ S is a dependency equilibrium if
there exists a sequence of distributions (s_r)_r∈ℕ
such that lim_r→∞s_r=s, and s_r(a_i| t_i)>0
for all players i∈ N, actions a_i∈ A, types t_i∈ T
and r∈ℕ, and if for all i∈ N, t_i∈ T and
a_i∈ A with s(a_i| t_i)>0, it is
lim_r→∞EU_i(s_r;a_i,t_i)≥lim_r→∞EU_i(s_r;a'_i,t_i) ∀ a'_i∈ A.
The requirement of rationality here is that any action with nonzero probability (in the limit) has to have
greater or equal conditional expected utility for the player performing that
action than any other action. This is similar to a Bayesian Nash equilibrium, only that players' actions are potentially dependent, and we take such dependencies into account when calculating conditional expected utilities.
The construction with limits is required since conditional credences s(a_-i| a_i,t) can only be computed for actions a_i that have positive probability. Hence, to be able to compute all possible conditional credences, we represent a dependency equilibrium s as a limit of distributions s_r for which this is the case.
As an example, consider a Bayesian
version of a prisoners' dilemma with additively separable and anonymous
utilities. There are two players, 1,2, and two types 1,2. Assume that there is a simple ignorance prior p which gives each
combination of types equal probability. In particular, p is anonymous.
Table <ref> shows the utilities that players of the two types produce
with either of two actions 1,2.
For either of the two players, given their type t, an anonymous strategy profile s, and an action a, using <Ref>, we get
EU_t(s;a) =u_t,t(a)+ ∑_t'∈{1,2}∑_a'∈{1,2}u_t',t(a')· s(a',t'| a,t).
Given an uncorrelated strategy profile we have s(a',t'| a,t)=s(a',t'|â,t) for any two actions a,â. Hence, the only term differing between different actions is the term u_t,t(a). It follows that the only possible optimal choice for either type is a=2, leading to an expected utility of EU_t(s;a)=3 + 1/2· 3=4.5, consisting of 3 utility produced by a player for themself and 3 utility provided by the other player in the 50% of cases in which the other player has the same type, and 0 utility provided otherwise. This is the only Bayesian
Nash equilibrium.
What about
dependency equilibria? For simplicity, I restrict myself to joint
strategies that have players of the same type always performing
the same action. This leaves us with 4 probabilities to be determined
(Table <ref>) and the following payoffs for the two types:
EU_1(s;1)= 2+1/2· 2+1/2· 2· a/(a+b) = 3 + a/(a+b)
EU_1(s;2)= 3+1/2· 3 + 1/2· 2· c/(c+d) = 4.5 + c/(c+d)
EU_2(s;1)= 2+1/2· 2+1/2· 2· a/(a+c) = 3 + a/(a+c)
EU_2(s;2)= 3+1/2· 3 + 1/2· 2· b/(b+d) = 4.5 + b/(b+d).
Here, the first term is the utility a player produces for themself, the second term is the utility produced by the other player given that they have the same type (which happens with probability 1/2), and the third term is the utility produced by the other player, assuming they have the opposite type. The term a/(a+b), for instance, stands for the probability that the player of type 2 plays action 1, given that the player of type 1 plays action 1.
In this case, there can be no dependency equilibrium in which action 1
gets any probability, since 1 is worse than 2, regardless of the chosen probabilities. In the best case, we have a=d=1/2, in which case EU_t(s;1)=4 and EU_t(s;2)=4.5.
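To make this computation concrete, the following is a minimal Python sketch of the conditional expected utilities in the two-player example, under the payoff reading implied by the expressions above (action 1 produces 2 for both types, action 2 produces 3 for one's own type and 0 for the other type); the encoding and all names are purely illustrative.

```python
# Minimal sketch: conditional expected utilities in the two-player Bayesian prisoners'
# dilemma above. Assumed payoff reading: action 1 gives 2 to both types, action 2 gives
# 3 to one's own type and 0 to the other type.
def cond_eu(t, a, joint):
    """Conditional expected utility of type t playing action a, for two players with
    independent uniform types, given joint[(a1, a2)] = P(type 1 plays a1, type 2 plays a2)
    (players of the same type are assumed to play the same action)."""
    u_own = {1: 2.0, 2: 3.0}     # utility an action produces for the acting type
    u_other = {1: 2.0, 2: 0.0}   # utility an action produces for the other type
    idx, odx = (0, 1) if t == 1 else (1, 0)
    # conditional distribution of the other type's action, given that type t plays a
    mass = {b: sum(p for acts, p in joint.items() if acts[idx] == a and acts[odx] == b)
            for b in (1, 2)}
    z = sum(mass.values())
    cond = {b: (m / z if z > 0 else 0.0) for b, m in mass.items()}
    # own contribution + same-type other player (prob 1/2) + other-type player (prob 1/2)
    return u_own[a] + 0.5 * u_own[a] + 0.5 * sum(cond[b] * u_other[b] for b in (1, 2))

# Best case for action 1: perfectly dependent same-action play, a = d = 1/2.
joint = {(1, 1): 0.5, (1, 2): 0.0, (2, 1): 0.0, (2, 2): 0.5}
print(cond_eu(1, 1, joint), cond_eu(1, 2, joint))   # 4.0 vs 4.5: action 2 still wins
```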
This changes if there are many players. Suppose there
are 10 players, and the other properties of the game stay
the same. Then we have the following payoffs:
EU_1(s;1)= 12+10· a/(a+b)
EU_1(s;2)= 18+10· c/(c+d)
EU_2(s;1)= 12+10· a/(a+c)
EU_2(s;2)= 18+10· b/(b+d).
Here, everyone playing action 1
can be a dependency equilibrium. Given any distribution that puts
only weight on a and d, action 1 is always better for either type. Hence, we can define s_r via a=(r-1)/r and d=1/r. Then, for any r∈ℕ, action 1 is optimal for both types under the distribution s_r, and s:=lim_r→∞s_r is the distribution in which all players play action 1. To find all the mixed joint strategy
dependency equilibria, we would have to solve for s such that EU_t( s;1)=EU_t(s;2). I leave this as an exercise.
For further examples and to become more familiar with the concept,
see Spohn2007-fp.
As in the above example, both equilibrium concepts can again be adapted to an anonymous and additively separable setting. For instance, for Bayesian Nash equilibria, given an anonymous and additively separable game G and an anonymous and uncorrelated s∈ S, we get the requirement
EU_t(s;a)≥ EU_t(s;a') ∀ a'∈ A
for all types t∈ T and actions a∈ A such that s(a| t)>0.
§.§ Observations
In this section, I show basic results about equilibria in ECL Bayesian games. First, I show that in an additively separable and anonymous game, there is essentially only one unique Bayesian Nash equilibrium—the strategy profile in which each type simply optimizes for their own values in their own universe, disregarding what everyone else is doing.
Let s∈ S be a Bayesian Nash equilibrium of an additively separable and anonymous ECL Bayesian game G. Then for any player i∈ N, action a_i∈ A and type t_i∈ T, we have s(a_i| t_i)>0 if and only if u_t_i,t_i(a_i)=max_a'∈ Au_t_i,t_i(a'). In particular, if the maximizer of u_t,t is unique for any type t, then s is anonymous and it corresponds to a unique anonymous pure strategy profile α∈ A^m. Moreover, an anonymous pure strategy Bayesian Nash equilibrium always exists in an anonymous and additively separable game.
First, since s is a Bayesian Nash equilibrium, it is uncorrelated, so by <Ref>, there exists σ∈Σ such that EU_i(s;a_i,t_i)=EU_i(σ_-i,a_i;t_i) for all players i, actions a_i, and types t_i. Now let a_i∈ A,t_i∈ T arbitrary. Then we have
EU_i(s; a_i,t_i) =EU_i(σ_-i,a_i;t_i)
=∑_t_-i∈ T_-ip(t_-i| t_i)∑_a_-i∈ A_-i∏_j∈ N∖{i}σ_j(a_j| t_j)u_i(a_-i,a_i,t_-i,t_i)
=∑_t_-i∈ T_-ip(t_-i| t_i)∑_a_-i∈ A_-i∏_j∈ N∖{i}σ_j(a_j| t_j)(u_t_i,t_i(a_i)+∑_j∈ N∖{i}u_t_j,t_i(a_j))
=∑_t_-i∈ T_-ip(t_-i| t_i)∑_a_-i∈ A_-i∏_j∈ N∖{i}σ_j(a_j| t_j)u_t_i,t_i(a_i)
+∑_t_-i∈ T_-ip(t_-i| t_i)∑_a_-i∈ A_-i∏_j∈ N∖{i}σ_j(a_j| t_j)∑_j∈ N∖{i}u_t_j,t_i(a_j)
=u_t_i,t_i(a_i)+∑_t_-i∈ T_-ip(t_-i| t_i)∑_a_-i∈ A_-i∏_j∈ N∖{i}σ_j(a_j| t_j)∑_j∈ N∖{i}u_t_j,t_i(a_j).
Note that the second term does not depend on a_i. Hence, by the definition of a Bayesian Nash equilibrium, for any action a_i∈ A such that s(a_i| t_i)>0, and any alternative action a'_i∈ A, we have
0≤ EU_i(s;a_i,t_i)-EU_i(s;a'_i,t_i)
=u_t_i,t_i(a_i)-u_t_i,t_i(a'_i).
This shows that
u_t_i,t_i(a_i)≥ u_t_i,t_i(a'_i) for all a'_i∈ A, so
u_t_i,t_i(a_i)=max_a'∈ Au_t_i,t_i(a').
For the “in particular” part, note that if the maximizer is unique, it follows for any i∈ N and t_i∈ T that s(a_i| t_i)=1 for a_i=argmax_a'∈ Au_t_i,t_i(a'). Hence, s corresponds to the unique anonymous pure strategy profile α∈ A^m, defined via α_t:=argmax_a'∈ Au_t,t(a') for any t∈ T.
Since this does not depend on the player, we have
s(a| t)
=1_a=(α_t_i)_i∈ N
=1_(a_π(i))_i∈ N=(α_t_π(i))_i∈ N
=s(a_π(1),…,a_π(n)| t_π(1),…,t_π(n))
for any permutation π: N→ N.
Lastly, it is clear from the above that we can always define an anonymous pure strategy Bayesian Nash equilibrium α∈ A^m given an additively separable and anonymous game by choosing α_t∈argmax_a'∈ Au_t,t(a') arbitrarily for each t.
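As a concrete illustration of the proposition, here is a tiny Python sketch (the dictionary format u[t][t2][a] is purely illustrative) that computes an anonymous pure strategy Bayesian Nash equilibrium by letting each type maximize the utility it produces for itself; with the prisoners' dilemma payoffs from the example above, both types pick action 2.

```python
# Tiny sketch of the proposition above: in an additively separable and anonymous game,
# an anonymous pure strategy Bayesian Nash equilibrium has each type maximize the
# utility it produces for itself. u[t][t2][a] is a hypothetical dictionary format for
# the utility a player of type t produces for type t2 with action a.
def anonymous_pure_bne(types, actions, u):
    return {t: max(actions, key=lambda a: u[t][t][a]) for t in types}

# Payoffs of the Bayesian prisoners' dilemma example above: both types pick action 2.
u = {1: {1: {1: 2, 2: 3}, 2: {1: 2, 2: 0}},
     2: {1: {1: 2, 2: 0}, 2: {1: 2, 2: 3}}}
print(anonymous_pure_bne((1, 2), (1, 2), u))  # {1: 2, 2: 2}
```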
Next, I turn to dependency equilibria. First, as mentioned by Spohn2007-fp, every Bayesian Nash equilibrium is also a dependency equilibrium.
Every Bayesian Nash equilibrium is a dependency equilibrium.
Let s∈ S be a Bayesian Nash equilibrium. Define s_r=s for any r∈ℕ. Then, by the definition, we have
lim_r→∞EU_i(s_r;a_i,t_i)
=EU_i(s;a_i,t_i)
≥ EU_i(s;a'_i,t_i)
=lim_r→∞EU_i(s_r;a'_i,t_i)
for any player i∈ N, type t_i∈ T, and actions a_i,a'_i∈ A such that s(a_i| t_i)>0. This shows that s is also a dependency equilibrium.
Second, any pure strategy profile that, for every player and type, is at least as good as playing any action against some fixed mixed strategy profile of the other players is a dependency equilibrium. It follows as a corollary that a profile is a dependency equilibrium if it is a (weak) Pareto improvement over a Bayesian Nash equilibrium. The latter was proven in [][Observation 5]Spohn2007-fp for two-player normal form games.
Let σ∈Σ be any mixed strategy profile. Let α∈ A^n,m be a pure strategy profile such
that EU_i(α| t_i)≥ EU_i(σ_-i,a_i;t_i) for all players
i∈ N, t_i∈ T, and actions a_i∈ A.
Define
q∈ S such that for all t∈ T^n, q(α_1,t_1,…,α_n,t_n| t)=1
and q(a| t)=0 for a∈ A^n such that a≠(α_1,t_1,…,α_n,t_n).
Then q is a dependency equilibrium.
We construct distributions q_r that converge to q and such that, conditional on any player taking an action other than the one specified by α, the remaining players play σ. It then follows from the assumption that the actions in α have highest conditional expected utility.
To begin, note that for any player i∈ N and type t_i∈ T, we have
EU_i(q;α_i,t_i,t_i)=∑_t_-i∈ T_-ip(t_-i|t_i)∑_a_-i∈ A_-iq(a_-i|α_i,t_i,t_-i,t_i)u_i(a_-i,α_i,t_i,t_-i,t_i)
=∑_t_-i∈ T_-ip(t_-i| t_i)u_i((α_j,t_j)_j∈ N,t_-i,t_i)=EU_i(α;t_i),
so conditional on taking actions specified by α, q is equivalent to α.
Now let s be the (uncorrelated) joint strategy distribution corresponding to σ, i.e., such that EU_i(s;a_i,t_i)=EU_i(σ_-i,a_i;t_i) for all i∈ N, t_i∈ T, and a_i∈ A such that σ_i(a_i| t_i)=s(a_i| t_i)>0. We distinguish two cases.
First, we assume s is strictly positive, i.e., s(a_i| t_i)>0 for any i∈ N, a_i∈ A and t_i∈ T.
Define q_r:=((r-1)/r)q+(1/r)s
(for r>0). Then for any i∈ N, t∈ T^n, a_-i∈ A_-i, and
a_i∈ A such that a_i≠α_i,t_i,
we have
q_r(a_-i| a_i, t)
=(((r-1)/r)q(a_-i,a_i| t)+(1/r)s(a_-i,a_i| t))/(∑_a'_-i∈ A_-i(((r-1)/r)q(a'_-i,a_i| t)+(1/r)s(a'_-i,a_i| t)))
=((1/r)s(a_-i,a_i| t))/(∑_a'_-i∈ A_-i(1/r)s(a'_-i,a_i| t))
=s(a_-i| a_i,t).
That is, conditional on taking action a_i, a_-i is distributed according to s. Hence, using the assumption on α and σ, it follows
EU_i( q_r; a_i,t_i)=EU_i(s; a_i,t_i)=EU_i(σ_-i,a_i;t_i)≤ EU_i(α; t_i)
for any r∈ℕ_>0. Since the expected utility is continuous in the joint strategy distribution, it follows that
lim_r→∞EU_i(q_r;a_i,t_i)≤ EU_i(α;t_i)=EU_i(q;α_i,t_i,t_i)=EU_i(lim_r→∞q_r;α_i,t_i,t_i)=lim_r→∞EU_i(q_r;α_i,t_i,t_i),
where the first equality holds by <ref>.
Hence, since α_i,t_i is the only action that player i of type t_i plays under q, it follows that q is a dependency equilibrium.
Now consider the case where s is not strictly positive. Then we need to modify q_r to put weight on all actions, to satisfy the definition of a dependency equilibrium. Spohn2007-fp does not specify exactly how this
would be done in their setup,
so I am providing a more detailed proof here.
To this end, I define
a joint distribution s'. Let s'(a| t)=0 for a∈ A^n,t∈ T^n,
unless specified otherwise. Take any i∈ N, a_i∈ A, t∈ T^n
such that s(a_i| t)=0. We want to define s' in a way such that s'(a_i| t)>0. To do so, for some yet to be determined constant c>0, we let s'(a_-i,a_i| t):=c·σ_-i(a_-i| t)
for all a_-i∈ A_-i. Now s'(a_i| t)>0 and
s'(a_-i| a_i,t)=cσ_-i(a_-i| t)/(∑_a'_-i∈ A_-icσ_-i(a'_-i| t))=cσ_-i(a_-i| t)/c=σ_-i(a_-i| t)
for all a_-i∈ A_-i. Moreover, assume that for some j≠ i and some a_j∈ A, we have s(a_j| t)=0. Then s'(a_j| t)=0 still holds after these definitions, and by defining s'(a_-j,a_j| t) for player j, I do not change the previously defined s'(a_-i,a_i| t) for player i. Apply the same procedure
to all actions and players.
Now choose c such that ∑_a∈ A^ns'(a| t)=1.
Proceed in the same manner with all t∈ T^n. Define q_r(a| t):=((r-1)/r)q(a| t)+((r-1)/r^2)s(a| t)+(1/r^2)s'(a| t)
for a∈ A^n,t∈ T^n. Then for any player i∈ N, action
a_i∈ A, and t∈ T^n such that q(a_i| t)=s(a_i| t)=0, we now have q_r(a_-i| a_i,t)=σ_-i(a_-i| t) and hence
lim_r→∞EU_i(q_r;a_i,t_i)=EU_i(σ_-i,a_i;t_i).
Moreover, for any of the actions a_i that receive positive probability
by s under some t∈ T^n (but not by q), we have
lim_r→∞q_r(a_-i| a_i,t)=lim_r→∞(((r-1)/r^2)s(a_-i,a_i| t)+(1/r^2)s'(a_-i,a_i| t))/(((r-1)/r^2)s(a_i| t)+(1/r^2)s'(a_i| t))
=lim_r→∞(s(a_-i,a_i| t)-(1/r)s(a_-i,a_i| t)+(1/r)s'(a_-i,a_i| t))/(s(a_i| t)-(1/r)s(a_i| t)+(1/r)s'(a_i| t))
=s(a_-i,a_i| t)/s(a_i| t)=s(a_-i| a_i,t)
and hence also lim_r→∞EU_i(q_r;a_i,t_i)=EU_i(s;a_i,t_i)=EU_i(σ_-i,a_i;t_i).
It follows that lim_r→∞EU_i(q_r;a_i,t_i)=EU_i(σ_-i,a_i;t_i)
for any player i∈ N, type t_i, and action a_i∈ A. From here on we can proceed as above, thus concluding the proof.
Let σ be a Bayesian Nash equilibrium and α a pure strategy profile such that EU_i(α| t_i)≥ EU_i(σ| t_i) for any player i∈ N and type t_i∈ T. Then q, defined as in <Ref>, is a dependency equilibrium.
Using the assumption and <Ref>, we have
EU_i(α| t_i)≥ EU_i(σ; t_i)≥ EU_i(σ_-i,a_i; t_i)
for any i∈ N, a_i∈ A, and t_i∈ T. Hence, the result follows from <Ref>.
<Ref> provides a sufficient criterion for checking whether an anonymous pure strategy profile α∈ A^m is a dependency equilibrium.
Let G be an additively separable and anonymous ECL Bayesian game, let α∈ A^m be an anonymous pure strategy profile, and let β∈ A^m be an anonymous pure strategy Bayesian Nash equilibrium of the game. Then α is a dependency equilibrium if
u_t,t(α_t)-u_t,t(β_t)+(n-1)∑_t'∈ Tp(t'| t)(u_t',t(α_t')-u_t',t(β_t'))≥0
for all t∈ T.
Since G is additively separable and anonymous, an anonymous pure strategy Bayesian Nash equilibrium β∈ A^m exists by <Ref>. Hence, by <Ref> and <Ref>, α is a dependency equilibrium if
EU_t(α)≥ EU_t(β)
for all t∈ T. Using <Ref>, this is equivalent to
0≤ EU_t(α)-EU_t(β)
=u_t,t(α_t)+(n-1)∑_t'∈ Tp(t'| t)u_t',t(α_t')-(u_t,t(β_t)+(n-1)∑_t'∈ Tp(t'| t)u_t',t(β_t'))
=u_t,t(α_t)-u_t,t(β_t)+(n-1)∑_t'∈ Tp(t'| t)(u_t',t(α_t')-u_t',t(β_t')),
which is the condition in the statement.
Intuitively, α is a dependency equilibrium if type t's gains from the other players choosing α are larger than their losses from adopting α themselves. If n is sufficiently large, the losses from a type's own switch to α become negligible; only the gains and losses caused by the other players are relevant, including the losses incurred because other players of the same type also play α.
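The condition in the corollary is easy to check numerically. The following is a small Python sketch, using a hypothetical dictionary format for the utilities u[t1][t2][a] and the posteriors post[t][t2]=p(t2|t); with the prisoners' dilemma payoffs from above, the all-play-1 profile passes the check for ten players but not for two, matching the earlier discussion.

```python
# Small helper checking the sufficient condition of the corollary above: an anonymous
# candidate profile alpha is a dependency equilibrium if, for every type t, the gains
# from all players switching from the Bayesian Nash equilibrium beta to alpha are
# nonnegative. The input format (u, post) is hypothetical.
def pareto_improves_on_bne(alpha, beta, u, post, n, types):
    for t in types:
        gain = u[t][t][alpha[t]] - u[t][t][beta[t]]
        gain += (n - 1) * sum(post[t][t2] * (u[t2][t][alpha[t2]] - u[t2][t][beta[t2]])
                              for t2 in types)
        if gain < 0:
            return False
    return True

u = {1: {1: {1: 2, 2: 3}, 2: {1: 2, 2: 0}},
     2: {1: {1: 2, 2: 0}, 2: {1: 2, 2: 3}}}
post = {1: {1: 0.5, 2: 0.5}, 2: {1: 0.5, 2: 0.5}}
print(pareto_improves_on_bne({1: 1, 2: 1}, {1: 2, 2: 2}, u, post, 10, (1, 2)))  # True
print(pareto_improves_on_bne({1: 1, 2: 1}, {1: 2, 2: 2}, u, post, 2, (1, 2)))   # False
```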
Lastly, if the distributions s_r are all uncorrelated, then the dependency equilibrium must be a Bayesian Nash equilibrium. This tells us that if players' actions are independent, then only Bayesian Nash equilibria are relevant even for superrational players.
Assume s=lim_r→∞s_r is a dependency equilibrium, with (s_r)_r as in <Ref>. Then if for every r∈ℕ, s_r is uncorrelated, s is a Bayesian Nash equilibrium.
Let σ be the mixed strategy profile corresponding to s, and σ^r the one corresponding to s_r. Let i∈ N be arbitrary. It is easy to see that since lim_r→∞s_r=s, also lim_r→∞σ_-i^r=σ_-i. Moreover, the expected utility of player i is a continuous function of σ_-i. Now let t_i∈ T, a_i∈ A such that σ_i(a_i| t_i)>0, and a'_i∈ A. Then also s(a_i| t_i)=σ_i(a_i| t_i)>0 and thus (i) lim_r→∞EU_i(s_r;a_i,t_i)≥lim_r→∞EU_i(s_r;a'_i,t_i) by the definition of a dependency equilibrium. It follows that
EU_i(σ_-i,a_i;t_i)
=EU_i(lim_r→∞σ_-i^r,a_i;t_i)
=lim_r→∞EU_i(σ_-i^r,a_i;t_i)
=lim_r→∞EU_i(s_r;a_i,t_i)
(i)≥lim_r→∞EU_i(s_r;a'_i,t_i)
=lim_r→∞EU_i(σ^r_-i,a'_i;t_i)
=EU_i(lim_r→∞σ^r_-i,a'_i;t_i)
=EU_i(σ_-i,a'_i;t_i).
This shows that σ is a Bayesian Nash equilibrium and thus concludes the proof.
§.§ Uncertainty about decision procedures and similarity
The Bayesian game model introduced here does not explicitly incorporate players with different decision procedures, or with different degrees of similarity. While these aspects could still be modeled implicitly, by defining a suitable joint distribution over actions s∈ S, it might be valuable to introduce explicit controllable parameters. Moreover, dependency equilibria are based on conditional expected utilities and thus effectively assume that all players act optimally under evidentialist or superrational reasoning. In this section, I will relax this assumption by extending the model to incorporate different decision procedures. My analysis is a generalization of the discussion in [][Sec. 2.9.4]Oesterheld2017-qg. As I will show below, this does not substantially increase the generality of my model. For this reason, I will not use the concepts introduced here in the rest of the report, so this subsection can be skipped.
One approach would be splitting types into different subtypes. Assume that there is finite index set Ω for the different subtypes (e.g., specifying the types' decision procedures). Then we can define new subsets of types T_ω={(1,ω),…,(m,ω)} for each ω∈Ω,
and let the new set of types be T=⋃_ω∈ΩT_ω. We also need to specify a new prior p over this bigger set of types T. I assume that utility functions do not depend on the subtype ω. Having defined these types, we can then restrict the space of possible joint distributions
in S in some way based on types.
I consider a simple binary approach, with two indices: C
for cooperators and D for anyone else.
The cooperators can be thought of as implementing the same or an equivalent decision procedure. Moreover, I assume that these agents maximize conditional expected utility in an ECL Bayesian game, such that we can apply
dependency equilibria to joint distributions over their actions. I assume that the actions of the players with subtype D are independent from those of the C players.[This makes the situation easier to analyze, though I think it is not entirely realistic. Even though the players in T_D are not thought of as superrational cooperators, the players in T_C may still have some conditional beliefs
about their actions. This could include the possibility of these agents
being seen as irrational in some way. For instance, players in T_C
may believe that it is more likely for a player of type T_D to
choose an uncooperative action given that they choose a cooperative
action.] In this framework, one could
model gradual beliefs about similarity by being uncertain about
whether another player belongs to T_C or not.[Compare the comment discussion on treutlein2018request, in particular <https://forum.effectivealtruism.org/posts/92wCvqF73Gzg5Jnrr/request-for-input-on-multiverse-wide-superrationality-msr?commentId=iXXvEremjJtedccwh>]
For simplicity, I assume an additively separable and anonymous setting. I write p(t',ω'| t,ω) for the probability that any player j≠ i has type (t',ω'), given that player i has type (t, ω). Moreover, I define a joint strategy distribution
s_C∈Δ(A^T_C) for all the types in T_C, and a similar distribution s_D for the types in T_D. Note that here, I take a distribution over actions given types as fundamental, rather than deriving such a distribution from an anonymous joint strategy profile as in <Ref>. Given this distribution, one can derive all the relevant probabilities, though I will not explicate this here. I denote with s_C(α_t'=a'| a,t) the belief of a player of type (t,C) that any other player of type (t',C) would play action a', given that the first player plays action a. We also need the marginal probability s_D(α_t'=a') that a player of type (t',D) plays action a' (since that type's action is independent from the actions of a player in T_C, we do not condition it on anything).
Given joint distributions s_C,s_D and a type (t,C), we can then use <Ref> to define expected utilities:
EU_t,C(s_C,s_D; a)
:=u_t,t(a)
+(n-1)∑_t'∈ Tp(t',C| t,C)∑_a'∈ As_C(α_t'=a'| a,t)u_t',t(a')
+(n-1)∑_t'∈ Tp(t',D| t,C)∑_a'∈ As_D(α_t'=a')u_t',t(a').
Since the actions of players in T_C and T_D are independent, the term for the utility from the D types does not depend on the action of a player of type C. Hence, we get
EU_t,C(s_C,s_D;a)-EU_t,C(s_C,s_D;â)
≥ 0
if and only if
u_t,t(a)-u_t,t(â)+(n-1)∑_t'∈ Tp(t',C| t,C)∑_a'∈ A(s_C(α_t'=a'| a,t)-s_C(α_t'=a'|â,t))u_t',t(a')≥ 0.
We can use this to determine when an anonymous pure strategy profile α is a dependency equilibrium, as in <Ref>. To that end, let β be the unique anonymous pure strategy Bayesian Nash equilibrium (this does not depend on the subtypes, since utilities do not depend on subtypes). I will not work this out formally here, but analogously to <Ref>, we get the condition
u_t,t(α_t)-u_t,t(β_t)+(n-1)∑_t'∈ Tp(t',C| t,C)(u_t',t(α_t')-u_t',t(β_t'))≥ 0.
To see what this means, in another abuse of notation, I write p(C| t',t,C) to denote the belief of a player of type (t,C) that another player has subtype C, given that they are of type t'. Then p(t',C| t,C)=p(C| t',t,C)p(t'| t,C). Hence, we get the condition
u_t,t(α_t)-u_t,t(β_t)+(n-1)∑_t'∈ Tp(t'| t,C)p(C| t', t,C)(u_t',t(α_t')-u_t',t(β_t'))≥ 0
⇔ u_t,t(α_t)-u_t,t(β_t)+(n-1)∑_t'∈ Tp(t'| t,C)(1-p(D| t', t,C))(u_t',t(α_t')-u_t',t(β_t'))≥ 0.
We can see that it differs from <Ref> in that the weight of each type t is reduced based
on how likely such a player is of subtype D instead of C. Given sufficiently large n, this
is not a problem per se—if n goes to infinity, it does not matter
how much weight there is on T_D in total—but it may shift
the relative conditional credences about types of the players. For
instance, a type (t,C) may deem other players with type t more
likely to have subtype C, but may be sceptical whether players of types
t'≠ t are of the C subcategory.
Such shifts in the relative weight of the types, due to different coefficients p(C| t',t,C), can be equivalently modeled by an anonymous prior p' over types, without any subtypes. To sketch an argument for this, note first that we can regard such a prior p' as a symmetric joint distribution p'∈Δ(T× T). Here, p'(t,t')=p'(t',t) is the probability that any two distinct players will have types t and t'. Now we can let
p'(t,t'):=δ^-1p((t,C),(t',C)),
where p is the original prior over types and subtypes, and δ:=∑_t,t'∈ Tp((t,C),(t',C)) is a normalization constant. Then this is indeed a symmetric probability distribution, and we have
p(t',C| t,C)
=p((t',C),(t,C))/p(t,C)
=δ p'(t',t)/p(t,C)
= p'(t'| t) p'(t) δ/p(t,C)
=p'(t'| t)δ'
for some constant δ':=p'(t) δ/p(t,C) that only depends on t, but not on t'. This shows that the relative weights of the different types t', after conditioning on being of type (t,C), are preserved under p'. So, in particular, we have
u_t,t(α_t)-u_t,t(β_t)+(n-1)∑_t'∈ Tp(t',C| t,C)(u_t',t(α_t')-u_t',t(β_t'))
=u_t,t(α_t)-u_t,t(β_t)+δ' (n-1)∑_t'∈ Tp'(t'| t)(u_t',t(α_t')-u_t',t(β_t')).
This shows that at least the simplified model of subtypes considered here is not more general than the already introduced type space model. Assuming large n, believing that another player is of subtype D is equivalent to just giving them less relative weight in the conditional distribution p'(t'| t). For this reason, I will continue without the model introduced in this section in the following.
While I will not pursue this in this report, there may be other, more interesting ways to extend the model introduced here in future work. For instance, one could specify a specific bargaining solution or point on the Pareto frontier for each subtype. The same could be done for other contingent parameters such as disagreement points. One could then analyze how possible gains from trade change with different assumptions about these parameters.
§ ECL AS A BAYESIAN BARGAINING PROBLEM
In this section, I combine the models from the two previous sections, by defining a bargaining game on top of a Bayesian game (Sections <ref> and <ref>). To simplify the formal setup and analysis, I will assume from the start that the underlying ECL Bayesian game is additively separable and anonymous. In <Ref>, I introduce a version of the Nash bargaining solution for incomplete information bargaining games by Harsanyi1972. In <Ref>, I adapt Bayesian Nash equilibria and dependency equilibria to the Bayesian bargaining setup. To be able to apply dependency equilibria to bargaining problems, I generalize dependency equilibria to joint beliefs over continuous strategy spaces.
Finally, I conclude with several takeaways from the model (<Ref>). First, I adapt the results about dependency equilibria from <Ref>, including Spohn2007-fp's folk theorem. I then discuss gains from trade given different beliefs in general (<Ref>) and analyze several toy examples (Sections <ref>–<ref>).
§.§ Formal setup
An ECL Bayesian bargaining game is a tuple G=(N,T,(A_t)_t∈ T,p,d),
where
* N={1,…,n} is the set of players;
* T={1,…,m} is a generic set of types;
* 𝒜_t⊆ℝ^m is the convex and compact set of actions
for type t;
* p∈Δ(T^n) is an anonymous prior probability
distribution over type vectors, such that each type has positive prior probability (i.e., for any i∈ N, p(t_i)>0 for all types t_i∈ T).
* d∈ℝ^m is the disagreement point.
The set of players and types are the same as in an ECL Bayesian game, but the actions are now different. In the previous model, there
was a finite set of actions A, and in the additively separable
and anonymous case, there were utility functions u_t,t' for each tuple of types
t,t'∈ T, specifying the utility that type t produces for type t' with their actions. Since we assume additive separability and anonymity from the start, we now directly define sets of actions 𝒜_t for each type t∈ T, such that each vector x_t∈𝒜_t specifies the utilities x_t,t' that that player can produce for any other player of type t'∈ T. One could regard 𝒜_t as the convex hull of the
image of A under the function u_t:=[u_t,1,…,u_t,m]^⊤.
That is, if Σ_t:=Δ(A)
is the set of t's mixed strategies, then
𝒜_t={∑_a∈ Aσ_t(a)u_t(a)|σ_t∈Σ_t}.
This corresponds to what was in <Ref> the feasible set F_i for an individual
player (though note that we will define separate feasible sets for the setting here later). In the anonymous incomplete information setup, utilities depend only on
the types, so it suffices to have one such set for each type, with
as many dimensions as there are types. As in <Ref>, this set could
also be something other than a simplex—it only needs to be convex
and compact.
§.§ Strategies and feasible sets
Turning to pure strategies and expected utilities, I directly introduce strategies that are anonymous and only depend on the types. First, players of the same type have the same information, so it seems plausible that they would all choose the same action. Second, since they have exactly the same set of actions with the same utilities, they likely have to choose the same option to produce Pareto optimal outcomes (as discussed in <Ref>).
Let G=(N,T,(𝒜_t)_t∈ T,p,d) be an ECL Bayesian bargaining
game. Let α∈𝒜:=∏_t∈ T𝒜_t. Then α is called a pure
strategy profile.
Using <Ref>, we can define the expected utility of α∈𝒜 for type t, after updating on observing their own type, as
EU_t(α):=α_t,t+(n-1)∑_t'∈ Tp(t'| t)α_t',t,
where the first term is the utility provided by the player to themself, and the second term is the utility provided by the n-1 other players in expectation.
Next, a feasible set is the set of vectors of expected utilities for all types that can be produced by pure strategy profiles.
Let G=(N,T,(𝒜_t)_t∈ T,p,d) be an ECL Bayesian bargaining
game. Then
F(G):={ x∈ℝ^m|∃α∈𝒜 ∀ t∈ T: EU_t(α)=x_t}
is the feasible set of G.
As in <Ref>, I assume that d∈ F(G) and that at least one payoff x∈ F(G) exists that is a strict Pareto improvement, i.e., x_t>d_t for all t∈ T.
Next, we turn to the individual feasible sets. These are sets of vectors of expected utilities for all types t' that can be produced by type t with their pure strategies. Here, we have to carefully scale the utilities in 𝒜_t to satisfy <Ref>.
Let t∈ T. Define f^(t): ℝ^T→ℝ^T
via its component functions f_t'^(t)(y):=(n-1)p(t| t')y_t'
for t'∈ T∖{t} and f_t^(t)(y)=y_t+(n-1)p(t| t)y_t.
Then t's individual feasible set is
F_t(G):=f^(t)(𝒜_t).
Given this definition, it follows that F(G)=∑_t∈ TF_t(G). That is,
similarly to complete information bargaining, the feasible set of G is the set of sums of vectors from the individual feasible sets. Since the sets 𝒜_t for
each t∈ T are convex and compact, F_t(G) is also convex
and compact, since it is just the image of 𝒜_t under the linear
mapping f^(t). Hence, the sum F(G) is also convex and compact.
Assume there are two types,
1,2. First, we have to specify the sets of actions. Suppose that there are
diminishing returns for both types' utility functions, such that the sets of actions are 𝒜_1=𝒜_2={x∈ℝ^2_≥ 0| x_1^2+x_2^2≤1}
(<Ref>). This could be motivated, for instance, by assuming that resources invested are quadratic in the utilities, and both types can allocate at most one unit of resources to both utility functions.
Second, we compute the feasible sets F_t(G) for both types.
Say there are 3 players in total with independent and uniform type distributions, such that p(1|1)=p(1|2)=p(2|1)=p(2|2)=0.5 (where p(t'| t) is the probability that any player of type t assigns to any other player having type t', as defined in <Ref>).
In the feasible set for type 2, the expected utilities
for type 1 are lower than for type 2, because a player of type 1 is
certain that they themselves have type 1 and hence they believe
that there are in expectation two players of type 1 and only one
type 2 player. The same applies vice versa. The resulting feasible
sets are depicted in <Ref> (a), where F_1=f^(1)(𝒜_1),F_2=f^(2)(𝒜_2) with
f_1^(1)(y) =y_1+2·1/2· y_1=2y_1
f_2^(2)(y) =y_2+2·1/2· y_2=2y_2
f_2^(1)(y) =2·1/2· y_2=y_2
f_1^(2)(y) =2·1/2· y_1=y_1
for y∈𝒜_1 or y∈𝒜_2. The feasible set F(G) is then just the set with the points x_t+x_t' for all
possible x_t∈ F_t(G),x_t'∈ F_t'(G) (<Ref> (b)).
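To illustrate the construction, here is a short Python sketch of the maps f^(t) (using a hypothetical dictionary format p[t][t2]=p(t2|t)); applied to this example, it reproduces the scalings computed above.

```python
# Small sketch of the scaling maps f^(t) from the definition of the individual feasible
# sets, evaluated for the two-type, three-player example above. p[t][t2] = p(t2 | t) is
# the posterior belief; all names are illustrative.
def f(t, y, p, n, types):
    # component t2 of f^(t)(y): what players of type t contribute, in expectation, to the
    # expected utility of type t2 (plus the own contribution if t2 == t)
    return {t2: y[t2] * ((n - 1) * p[t2][t] + (1 if t2 == t else 0)) for t2 in types}

p = {1: {1: 0.5, 2: 0.5}, 2: {1: 0.5, 2: 0.5}}
print(f(1, {1: 1.0, 2: 0.0}, p, 3, (1, 2)))  # {1: 2.0, 2: 0.0}: the scaling 2*y_1
print(f(2, {1: 0.0, 2: 1.0}, p, 3, (1, 2)))  # {1: 0.0, 2: 2.0}: the scaling 2*y_2
```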
The above example has a type prior p that assigns equal probability to all combinations of types, but players end up with different individual feasible sets because they update on their own type and because there are only a few players in the game. I will give examples with different priors, where n is very large, in the next section.
§.§ The Nash bargaining solution for incomplete information games
Here, I define the NBS for an ECL Bayesian bargaining game G. I will use this bargaining solution below in my examples to compute cooperative outcomes. Harsanyi1972 introduce an axiomatization of the NBS for a two-player incomplete information game, which I discuss in <Ref>. This axiomatization includes versions of all the axioms of the complete information NBS discussed in <Ref>, and adds two additional axioms to deal with weights for the different types. The definition below is a generalization of Harsanyi1972's definition to more than two players, adapted to my formal setup.
Let G be an ECL Bayesian bargaining game, and let (ν_t)_t∈ T be a set of weights for each type, such that ν_t≥0 for all t∈ T and ∑_t∈ Tν_t=1. Then the Nash bargaining solution (NBS) for these weights is defined via the optimization problem
argmax_x∈ F(G)^≥ d∏_t∈ T(x_t-d_t)^ν_t.
Harsanyi1972 derive the specific weighting ν_t=p(t) for all t∈ T, suggesting that the utility of a type should be weighed by the prior probability of that type. This weighting ensures the desirable behavior that the bargaining solution does not change if a type is split into two types with identical actions and utilities, provided their combined probability equates to the probability of the original type. These weights also appear reasonable from an ex ante fairness perspective, given that a type with a higher prior likelihood would, in expectation, occupy a larger number of universes. However, when it comes to fairness, there are other criteria that could be important for determining weights (see <Ref>).
§.§ Joint distributions and equilibria
Finally, I define joint distributions and equilibria in ECL Bayesian bargaining games. We do not need to introduce distributions to define Bayesian Nash equilibria. Action spaces are already convex, and Bayesian Nash equilibria are in any case trivial in the additively separable case.
Let G be an ECL Bayesian bargaining game. Then a strategy profile α∈𝒜 is a Bayesian Nash equilibrium if for all types t∈ T and actions α'_t∈𝒜_t, we have
EU_t(α)≥ EU_t(α_-t,α'_t).
Note that the notion of best response here assumes that all players of the same type change their action simultaneously, rather than only a single player deviating. This assumes perfect correlations between players of the same type, which seems inappropriate for the uncorrelated notion of Bayesian Nash equilibria. However, I do not consider this an issue, since, as the next proposition shows, Bayesian Nash equilibria are trivial in the additively separable case.
Let G be an ECL Bayesian bargaining game. Then the set of Bayesian Nash equilibria is given via A^*=∏_t∈ TA_t^*, where
A^*_t:=argmax_α_t∈𝒜_tα_t,t
for t∈ T.
This follows as a simple exercise from the definition of a Bayesian Nash equilibrium and <Ref>.
Next, I introduce joint distributions to define dependency equilibria. In the present model, the sets 𝒜_t of strategies are continuous, potentially containing all possible (independent) randomizations over a set of actions that a type could implement. This is necessary to enable bargaining. Joint strategy distributions are then separately defined as joint distributions over the space 𝒜. This allows us to express beliefs such as “if I choose the NBS, other players do the same”, where the NBS is an arbitrary point in the continuous set 𝒜_t.
I define the set S of joint strategy distributions as the set of probability measures over 𝒜⊆ℝ^m× m, endowed with the Borel σ-algebra. For a set A_t⊆𝒜_t, I write s(A_t):=s({α∈𝒜|α_t∈ A_t}), and similarly I define conditionals
s(A| A_t):=s({α∈ A|α_t∈ A_t})/s(A_t)
for A⊆𝒜 and A_t⊆𝒜_t such that s(A_t)>0.
For any set A_t⊆𝒜_t with s(A_t)>0, the expected utility given A_t is defined as
EU_t(s;A_t):=𝔼_α∼ s[EU_t(α)|α_t∈ A_t]
=∫_𝒜EU_t(α) ds(α| A_t).
Now we can define dependency equilibria as a generalization of <Ref>.
Let s∈ S be a joint strategy distribution, and assume there exists a sequence of distributions (s_r)_r∈ℕ that converges weakly to s such that for each r∈ℕ and each t∈ T, s_r has full support on 𝒜_t, i.e., such that s_r(A_t)>0 for all nonempty open sets A_t⊆𝒜_t. Then s is a dependency equilibrium if for all t∈ T, all A_t⊆𝒜_t with s(A_t)>0, and every nonempty open set A'_t⊆𝒜_t, we have
lim_r→∞EU_t(s_r;A_t)≥lim_r→∞EU_t(s_r;A'_t).
We say that α∈𝒜 is a dependency equilibrium if δ_α, the Dirac measure with δ_α(A)=1 if and only if α∈ A, is a dependency equilibrium.
Here, weak convergence means that for any continuous function f𝒜→ℝ, we have
lim_r→∞𝔼_α∼ s_r[f(α)]=𝔼_α∼ s[f(α)].
I choose weak convergence as a generalization of the pointwise convergence we assumed in the case where s is a discrete distribution over a finite set of joint actions. Note that weak convergence does not require that the probability of each set converges; for instance, assume that s is the Dirac measure for some point α. Then we can define s_r via densities f_r: α'↦ c_r·exp(-r‖α'-α‖), where c_r is some normalization constant. s_r becomes more and more concentrated on α as r→∞, and thus the integral with respect to s_r converges to the one with respect to δ_α for continuous functions. But s_r({α})=0 for all r∈ℕ and s({α})=δ_α({α})=1.
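As a quick numerical illustration of this point (not needed for the theory), one can sample from such densities and watch the expectations converge; the test function, the point alpha, and all constants in the sketch below are arbitrary choices.

```python
# Numerical illustration of the weak-convergence remark above: densities proportional to
# exp(-r*||alpha' - alpha||) concentrate on alpha, so expectations of continuous functions
# converge to the value at alpha, even though s_r({alpha}) = 0 for every r.
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([0.3, 0.7])                         # the point the Dirac limit sits on
f = lambda x: np.sin(x[..., 0]) + x[..., 1] ** 2     # arbitrary continuous test function

def expectation_under_s_r(r, n_samples=200_000):
    # rejection sampling from the density proportional to exp(-r*||x - alpha||) on [0,1]^2
    x = rng.random((n_samples, 2))
    accept = rng.random(n_samples) < np.exp(-r * np.linalg.norm(x - alpha, axis=1))
    return f(x[accept]).mean()

for r in (1, 10, 100):
    print(r, expectation_under_s_r(r), f(alpha))     # approaches f(alpha) as r grows
```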
§.§ Observations
In this section, I discuss several takeaways from the model introduced above. I begin by adapting results from <Ref>. I prove a version of [][Observation 5]Spohn2007-fp, saying that any strategy profile that is a Pareto improvement over a Bayesian Nash equilibrium is a dependency equilibrium. In particular, the NBS with the Bayesian Nash equilibrium disagreement point is a dependency equilibrium.
In <Ref>, I make some general remarks about gains from trade in this model. I show that, given large n, only the beliefs over the types of other players matter.
I then work through several toy examples with two types. I apply the NBS as a compromise solution and analyze how gains from trade are affected by different assumptions about beliefs and utility functions. I start with a model in which all types have the same posterior beliefs, but different prior weights (<Ref>). Next, I analyze the situation in which players' types are correlated, such that players have higher posterior weight for their own type. In this case, gains from trade diminish when players become more confident that other players have the same type. This happens roughly quadratically in a model where utilities are square roots of resource investments (<Ref>), reproducing the “double decrease” observed by armstrong2017double. However, given logarithmic returns, as in drexler2019pareto's “Paretotopia” model, gains from trade go down more slowly (<Ref>).
§.§.§ Dependency equilibria
To begin, I show that if a strategy profile is at least as good for each type as some other strategy profile, for any possible action they could take, then it is a dependency equilibrium. The proof idea is the same as for <Ref>. As a corollary, it follows that Bayesian Nash equilibria and Pareto improvements over Bayesian Nash equilibria are dependency equilibria.
Let α,β be two strategy profiles such that for every t∈ T and β'_t∈𝒜_t, we have
EU_t(α)≥ EU_t(β_-t,β'_t).
Then α is a dependency equilibrium.
In <Ref>.
Let β be a Bayesian Nash equilibrium and α such that
EU_t(α)≥ EU_t(β)
for all t∈ T. Then α is a dependency equilibrium. In particular, any Bayesian Nash equilibrium β is a dependency equilibrium.
Since β is a Bayesian Nash equilibrium, we have
EU_t(α)≥ EU_t(β)≥ EU_t(β_-t,β'_t)
for all β'_t∈𝒜_t. Hence, the result follows by <Ref>.
Lastly, I conclude that the NBS with the Bayesian Nash equilibrium disagreement point is a dependency equilibrium.
Let α be the strategy profile corresponding to the NBS with Bayesian Nash equilibrium disagreement point d. Then α is a dependency equilibrium.
The NBS as defined in <Ref> always chooses a point x such that x_t>d_t for all t∈ T. Hence, if β is the profile corresponding to the disagreement point d, then EU_t(α)≥ d_t= EU_t(β) for all t∈ T. By <Ref>, it follows that α is a dependency equilibrium.
§.§.§ General takeaways about gains from trade
Here, I give some general takeaways from the incomplete information bargaining model outlined above. First, assuming additively separable utilities and anonymous strategy profiles allows us to greatly simplify expected utilities received by each type. Recalling <Ref>, we have
EU_t(α)=α_t,t+(n-1)∑_t'∈ Tp(t'| t)α_t',t.
Assuming large n, this becomes
EU_t(α)≈ (n-1)∑_t'∈ Tp(t'| t)α_t',t.
That is, only the expected utility provided by the other players matters. In the following, I will assume large n such that this is the case (unlike in <Ref>, where I assumed n=3).
Second, the contributions α_t',t by other players of different types are weighted by type t's posterior weight for that type, p(t'| t). The higher the posterior weight p(t| t), the lower the weight of all other types, reducing the gains from trade from other types cooperating. If players of type t believe that type t' does not exist, then that type t' cannot benefit players of type t. As observed in <Ref>, uncertainty about the decision procedures of other players or similarities between players similarly factor into expected utilities. It does not matter, for instance, whether a type just cannot benefit other types much, or whether other types believe that the type has low posterior weight.
Positive correlations between players' types reduce gains from trade. In the extreme case in which all players are always of the same type, for instance, no trade is possible. However, if there are no or only small correlations, then trade is possible even given uncertainty about types of other players.
Note that this analysis still depends on the common prior assumption. Relaxing it could lead to further reductions in gains from trade, or it could completely break the analysis.
Moreover, I have not addressed uncertainty about actions, such as which bargaining solution other players will choose. Lastly, it is unclear what happens if the number of types m is as large as the number of players n. In that case, we cannot simply assume that only the other players matter, as the product (n-1)p(t'| t) may stay roughly constant for any given types t,t'. If the different types all have different values, then this could imply a situation more similar to the one with only few players and types.
§.§.§ Different prior probabilities but equal posterior beliefs
Here, I analyze a toy model in which two types have different prior weight, but players have the same beliefs over types, since players' type distributions are independent. I continue as in <Ref> with
two different types of players.
Recall Example <ref>. Assume large n, such that approximately only the expected utilities from the other players matter, and assume
the utility functions are rescaled such that if p(t| t)=0.5
for t=1,2, we have F_1(G)=F_2(G)={x∈ℝ_+^2|(2x_1)^2+(2x_2)^2≤1}. I assume the disagreement point is the Bayesian Nash equilibrium, which is the point (1/2,1/2).
Since the sets and the disagreement point are symmetric, the NBS would pick the symmetric point on the Pareto frontier,
2·(√(2)/4,√(2)/4)
=(√(2)/2,√(2)/2)∈ F(G).
Now consider a situation in which everyone has the same conditional
beliefs about other players (i.e., types of different players are independent), but one type has lower prior probability, p(1|2)=p(1|1)=3/4
and p(2|2)=p(2|1)=1/4. In this case, the individual feasible
sets get rescaled and we get
F_1(G) ={x∈ℝ_+^2 | (4/3x_1)^2+(4/3x_2 )^2≤1}
F_2(G) ={x∈ℝ_+^2 | (4x_1)^2+(4x_2)^2≤1},
as displayed in <Ref>.
Fewer players have type 2 in expectation, so their actions produce less expected utility, both for themselves and for players of the other type. Since the shapes of both Pareto frontiers are the same, the NBS will still pick the same point on F_2(G) as F_1(G), only scaled down. However, the Bayesian Nash equilibrium is asymmetric, given by d_1=3/4 and d_2=1/4, and the prior weights of types are also asymmetric. Using the Bayesian Nash equilibrium as disagreement point and the prior probabilities as weights in the NBS, we hence get
argmax_x∈ F_1(G)+F_2(G)(x_1-d_1)^3/4(x_2-d_2)^1/4≈ (0.92,0.39).
The points in the individual feasible sets corresponding to the NBS and the disagreement point, as well as corresponding points in the overall feasible set, are plotted as green and red dots in <Ref>.
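The following is a minimal scipy sketch (illustrative only, not part of the formal analysis) that reproduces this computation: it optimizes over pairs (y,z) with y∈F_1(G) and z∈F_2(G) and maximizes the weighted log Nash product of x=y+z, returning approximately (0.92,0.39).

```python
# Minimal sketch reproducing the weighted NBS computation above, assuming the rescaled
# individual feasible sets F_1(G), F_2(G) (quarter disks of radii 3/4 and 1/4) and the
# Bayesian Nash disagreement point d = (3/4, 1/4). All names are illustrative.
import numpy as np
from scipy.optimize import minimize

d = np.array([0.75, 0.25])      # Bayesian Nash equilibrium payoffs
w = np.array([0.75, 0.25])      # prior weights of the two types
r = np.array([0.75, 0.25])      # radii of F_1(G) and F_2(G)

def neg_weighted_nash_product(v):
    # v = (y_1, y_2, z_1, z_2) with y in F_1(G) and z in F_2(G); the payoff is x = y + z
    x = v[:2] + v[2:]
    return -np.sum(w * np.log(np.maximum(x - d, 1e-12)))

constraints = [
    {"type": "ineq", "fun": lambda v: r[0] ** 2 - v[0] ** 2 - v[1] ** 2},  # y in F_1(G)
    {"type": "ineq", "fun": lambda v: r[1] ** 2 - v[2] ** 2 - v[3] ** 2},  # z in F_2(G)
]
res = minimize(neg_weighted_nash_product, x0=[0.7, 0.2, 0.2, 0.1],
               bounds=[(0, None)] * 4, constraints=constraints)
print(res.x[:2] + res.x[2:])    # approximately (0.92, 0.39)
```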
§.§.§ “Double decrease” given different beliefs and square root utilities
Next, I consider a case in which types have different conditional beliefs.
In this situation, if a type deems another type more likely, then
this increases the gains they can receive from trade. Conversely, if a type deems another
less likely, this decreases their potential gains. Since it is more plausible
that players consider their own type more likely than others, I only consider the latter case.
First, I investigate to what degree lower beliefs in the other type decrease gains from trade, given that utilities are square roots of invested resources. armstrong2017double has observed a “double decrease” in this case, which is the effect that gains from trade
quadratically decrease with the probability assigned to the other type.
Assume that types have equal prior weights, but beliefs p(1|1)=p=p(2|2)
and p(1|2)=1-p=p(2|1). That is, conditional on observing their own type, players of either type believe other players have the same type with probability p, and the other type with probability 1-p. For p=3/4, we get the feasible
sets F_1(G)={x∈ℝ_+^2|(4/3x_1)^2+(4x_2)^2≤1},
F_2(G)={x∈ℝ_+^2|(4x_1)^2+(4/3x_2)^2≤1}
(Figure <ref>).
Due to the symmetry of the situation, it is easy to see that the NBS always picks points on the individual Pareto frontiers where the Pareto frontier has slope
-1 (i.e., the symmetric point on the overall Pareto frontier). Using this, we can compute the point
(p^2/√(1-2(1-p)p),(1-p)^2/√(1-2(1-p)p))∈ F_1(G)
for the first type, and the same point with swapped coordinates for the
second type.
Now we compute the share of expected utility received by the other type, as well as the percent gains from trade, at the NBS outcome. The expected utility received by the other player is (1-p)^2/√(1-2(1-p)p), while the sum of expected utilities received by both types is
p^2/√(1-2(1-p)p)+(1-p)^2/√(1-2(1-p)p)
=√(1 - 2 (1-p) p).
Overall, we get a share of (1-p)^2/(1 - 2 (1-p) p). This is approximately quadratic as 1-p→ 0. Next, turning to gains from trade, the total expected utility from compromise for either type is
p^2/√(1-2(1-p)p)+(1-p)^2/√(1-2(1-p)p)=√(1-2(1-p)p).
The individually achievable expected utility is p. The gain from trade
in percentage of disagreement expected utility is hence √(1-2(1-p)p)/p-1.
We plot share of expected utilities received by the other type as well as percent gains from trade for 1/2≤ p≤ 1 in <Ref> (where p=1-p(t'| t) for t'≠ t). Interestingly, gains from
trade as a percentage of individually attainable utility decline even faster than share of expected utility received by the other type. Overall, this confirms armstrong2017double's observation of a “double decrease” in the square root utility model.
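For reference, the two quantities derived above are easy to tabulate; the following short Python sketch (illustrative only) evaluates them for a few values of p.

```python
# Short sketch of the "double decrease" quantities derived above, as functions of the
# probability p that another player has one's own type (square root utility model).
import numpy as np

def square_root_model(p):
    norm = np.sqrt(1 - 2 * (1 - p) * p)
    share = (1 - p) ** 2 / (1 - 2 * (1 - p) * p)   # share received from the other type
    gains = norm / p - 1                            # gains relative to acting alone
    return share, gains

for p in (0.5, 0.75, 0.9):
    share, gains = square_root_model(p)
    print(f"p={p}: share={share:.3f}, percent gains from trade={gains:.3%}")
```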
§.§.§ “Paretotopia” given logarithmic utilities
While the previous example demonstrates a “double decrease”, this relies on the particular shape of Pareto frontiers in that example. In this section, I show a different result in the case of logarithmic utilities. Given logarithmic utilities, gains from trade can be very large, and Pareto frontiers are shaped in a way that makes compromise expected utilities and gains from trade change less as the belief in the other type goes down. This relates to drexler2019pareto's idea of a “Paretotopia” in the case of logarithmic returns to resources, where reaping gains from trade at all is vastly more important to players than increasing their share of the compromise outcome.
As before, let p(t| t)=p and p(t'| t)=1-p for t≠ t'∈{1,2}. Assume feasible sets
are given by
F_1(G) ={x∈ℝ_+^2 | exp(x_1/p)+exp(x_2/(1-p))≤ r}
F_2(G) ={x∈ℝ_+^2 | exp(x_1/(1-p))+exp(x_2/p)≤ r}
as illustrated in <Ref>. We could interpret this as a case in which
resources produce logarithmic utility for either value system and where r is the amount
of available resources. For symmetry reasons, the NBS is again
the point on the Pareto frontiers where the frontier has slope -1, as long as that point is in the feasible set. This is the point (plog(pr),(1-p)log((1-p)r))
for type 1, with swapped coordinates for type 2, for p≤ 1-1/r. For p> 1-1/r, no trade is possible, and players just optimize for their own values. plog(r) is the amount of utility either type can produce for themself.
Performing the same calculations as in Example <ref>,
we get
(1-p)log(max{(1-p)r,1})/(plog(pr)+(1-p)log(max{(1-p)r,1}))
as the share of expected utility received by the other type, and
(plog(pr)+(1-p)log(max{(1-p)r,1}))/(plog(r))-1
percent gains from trade.
I plot both functions for p∈ [1/2,1], for the cases r=100 and r=10^9, in <Ref>.
Both share of expected utility received by the other player and percent gains from trade decrease much more slowly than in <Ref>, particularly for the case in which the amount of resources is large and thus gains from trade are vast. Given r=10^9, both percent gains from trade and share of expected utility received by the other type appear to go down approximately linearly as p→ 1.
This shows that the shape of the Pareto frontier determines how gains from trade are affected by differing posterior beliefs. In future work, it would be interesting to extend this analysis, for instance, by investigating situations in which Pareto frontiers are asymmetric.
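For reference, the following short sketch (illustrative only) evaluates the share and gains-from-trade formulas derived above for the logarithmic case.

```python
# Short sketch of the logarithmic-returns ("Paretotopia") case above, using the share and
# gains-from-trade formulas derived for the symmetric NBS point.
import numpy as np

def log_model(p, r):
    own = p * np.log(p * r)                          # expected utility from own-type players
    other = (1 - p) * np.log(max((1 - p) * r, 1.0))  # expected utility from the other type
    share = other / (own + other)
    gains = (own + other) / (p * np.log(r)) - 1
    return share, gains

for r in (100, 1e9):
    for p in (0.5, 0.75, 0.9):
        share, gains = log_model(p, r)
        print(f"r={r:g}, p={p}: share={share:.3f}, percent gains from trade={gains:.3%}")
```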
§ DISCUSSION
In this section, I discuss two issues that arise in my model.
First, I discuss the problem of choosing a disagreement point (<Ref>). I define the threat game disagreement point, which is an equilibrium of a game in which players choose disagreement actions to improve their bargaining position. I discuss an axiomatization that supports the threat point, and show that the NBS with this disagreement point is a dependency equilibrium, even though it can be worse for some players than the Nash equilibrium. I also discuss reasons against its relevance to ECL.
Second, I discuss coalitional stability (<Ref>). A compromise outcome is coalitionally stable and thus in the core of a game if no subset of players can unilaterally guarantee its members higher payoffs. I argue that stability is a desirable property in the ECL context. Unfortunately, the NBS with the Nash equilibrium or threat game disagreement point is sometimes not stable. I investigate the existence of stable allocations and show that in an additively separable game, the core is always nonempty. However, I also show that sometimes all core allocations make some players worse off than a Nash equilibrium, providing a strong argument against the Nash equilibrium disagreement point. I conclude by suggesting an alternative disagreement point that guarantees stability.
§.§ Disagreement points
The bargaining model introduced in <Ref> requires a disagreement
point, an outcome that is obtained if players do not reach an agreement.
For ECL, a plausible choice for a disagreement point is a Bayesian Nash equilibrium, which is unique in an anonymous and additively separable game, up to each type's choice of an action that optimizes their own values (Propositions <ref> and <ref>). This is the outcome that players would plausibly choose absent any dependencies
between players. In particular, in the Bayesian game model from <Ref>, I showed that this is the only dependency equilibrium in this case (<Ref>). I also showed in the model from <Ref> that the NBS with the Bayesian Nash equilibrium disagreement point is a dependency equilibrium (<Ref>).
Unfortunately, I will show in <Ref> in <Ref> that sometimes no point that is a weak Pareto improvement over the Bayesian Nash equilibrium is coalitionally stable (even if a stable point exists in principle, i.e., if the core of the game is nonempty). This strongly suggests that the Nash equilibrium may not be the right disagreement point.
Similar to bargaining solutions, one can also find a disagreement point by positing axioms that constrain the possible choices for disagreement points, or by setting up a noncooperative game and analyzing its equilibria. As argued in <Ref>, both approaches can provide relevant insights for ECL, even if ECL does not involve any actual bargaining.
Nash1953 provides both an axiomatization and a noncooperative game that implies the “threat game” disagreement point. This point represents the equilibrium of a game where players choose disagreement actions and receive as payoffs the Nash bargaining solution computed with these disagreement actions. I define this point here for the setup from <Ref>.
For the following definition, I assume that μ^ν(F(G),d) is defined as the NBS with weights ν, computed for all the types t for which there exists x∈ F(G)^≥ d such that x_t>d_t. Note that since F(G)^≥ d is convex, if such a point x exists for all types t∈ P⊆ T, then there also exists a point x'∈ F(G)^≥ d such that x'_t>d_t for all t∈ P simultaneously. Hence, we can define
μ^ν(F(G),d):=argmax_x∈ F(G)^≥ d∏_t∈ P(x_t-d_t)^ν_t.
Let G be an ECL Bayesian bargaining game. Then the threat game disagreement point or threat point is a point d∈ F(G) such that there exists a strategy profile α∈𝒜 with d_t=EU_t(α) for all types t∈ T, and for any t∈ T and α'_t∈𝒜_t, letting d':=(EU_t'(α_-t,α'_t))_t'∈ T, we have
μ_t^ν(F(G),d)≥μ^ν_t(F(G),d').
This definition says that the threat point is a point d, corresponding to a strategy profile α∈𝒜, such that no type can improve their bargaining outcome by changing their action in α. Nash1953 shows that the threat point exists and is unique in his two-player bargaining game. I believe Nash's proof translates to my setup at least with respect to existence, though uniqueness could be violated if there are more than two players.
In Nash1953's axiomatization of the NBS with the threat point, there exists a feasible set
F together with two sets S_1,S_2 that contain the possible
disagreement strategies for players 1,2. In addition to versions
of Axioms <ref> and <ref>,
[][p. 137]Nash1953 requires the following axioms:
A restriction of the set of strategies available to a player cannot
increase the value to him of the game. That is, if S_1'⊆ S_1,
then μ_1(S_1',S_2,F)≤μ_1(S_1,S_2,F). The same
applies for the second player.
There is some way of restricting both players to single strategies
without increasing the value to player one of the game. That is,
there exist s_1∈ S_1,s_2∈ S_2 such that μ_1({s_1},{s_2},F)≤μ_1(S_1,S_2,F).
The same applies for the second player.
It follows from those axioms that the bargaining solution μ will be the NBS with the threat game disagreement point. Note that the
axioms and Nash's proof require a separate set for disagreement strategies,
so this does not directly translate to my setting. However, it seems plausible that one may be able to extend the result.
Note that, even in the two-player case, the NBS with the threat point can be worse for a player than a Nash equilibrium.
Take the game with two players 1,2 and actions a_1,a_2,a_3 and b_1,b_2, respectively, given by <Ref> (a).
Here, the threat game disagreement point would be (-3,2), since given actions (a_3,b_2) as disagreement point, none of the players can change their action to improve their bargaining outcome. Normalizing by this point leads to the
payoffs in <Ref> (b). The feasible set, alongside the relevant points, is illustrated in <Ref>.
One can calculate the NBS as the point (5.5,2.75), which is worse for player
1 than the Nash equilibrium (3,3).
An important question when it comes to ECL is whether there is a dependency equilibrium supporting the NBS with the threat point. This gives at least some basic plausibility to joint beliefs that imply this compromise outcome. Despite it potentially being worse than a Nash equilibrium, this is the case.
Let α∈𝒜 be a strategy profile corresponding to the NBS with the threat game disagreement point. Then α is a dependency equilibrium.
Let β be the strategy corresponding to the threat point d. We show that EU_t(α)≥ EU_t(β_-t,β_t') for all t∈ T and β_t'∈𝒜_t. Then the result follows using <Ref>.
Towards a contradiction, assume that there exists t∈ T and β'_t with EU_t(α)<EU_t(β_-t,β'_t). Then, defining d':=(EU_t'(β_-t,β'_t))_t'∈ T, we have μ^ν_t(F(G),d')≥ d'_t>EU_t(α)=μ^ν_t(F(G),d). But this is a contradiction to the definition of a threat point d.
One problem with the threat point in conventional bargaining is that it supposes an ability to commit to a non-equilibrium action in case no agreement is reached. Insofar as humans cannot credibly commit to certain actions, this suggests that it may not be an appropriate solution concept for bargaining problems between humans. Another concern with the threat point is that it potentially leads to an agreement reached through coercion. It seems reasonable to assume that rational agents should refuse to give in to such coercion. Therefore, if the opponent commits to pursuing a threat in case no agreement is reached, one should not take this as a disagreement point for evaluating gains from trade.[Note that, in general, the distinction between extortion and a fair trade depends on some assignment
of a default outcome [][]armstrong2016extortion. In an additively separable game, the Nash equilibrium is a plausible non-threat default outcome, but it leads to coalitional instability. I will turn to defining an alternative non-threat default outcome in the next section.] It is unclear how these considerations apply to ECL, though it seems plausible that threats should be even less relevant to ECL than to conventional bargaining.
Overall, disagreement points are an important area of further study for ECL. Some recent work on threat-resistant bargaining may be particularly relevant diffractor2022rose. Moreover, it would be interesting to investigate acausal bargaining models to gain insights into the question [e.g.][]diffractor2018cooperative,kosoy2015superrationality.
§.§ Coalitional stability
Another issue with ECL is coalitional
stability. In a coalitional game [][Pt. 4]osborne1994course, players can choose to cooperate with a smaller coalition (subset of players), ignoring the remaining players. This is different from a bargaining game, where all players have to agree to a compromise. A bargaining solution is coalitionally stable if no coalition can unilaterally ensure higher payoffs for their members. In the ECL case, it seems possible for superrationalists
to choose to cooperate with a subset of players rather than with all players (the “grand coalition”). Hence, coalitional stability is an important desideratum for a bargaining solution in the ECL case.[Issues with coalitional stability in ECL were also informally discussed by gloor2018commenting2.]
In the following, I will focus on a complete information bargaining model for simplicity. To formalize coalitional stability, let P⊆ N be an arbitrary coalition. Then we define ν(P)⊆ℝ^n as the set of payoffs x∈ℝ^n for all players
such that the players in P can achieve at least as much for themselves via a collective action. Depending on assumptions about the responses by the remaining players, this can be formalized in different ways, leading to different functions ν. In any case, we have
ν(N)={x∈ℝ^n|∃ y∈ F(B)∀ i∈ N x_i≤ y_i}.
Given a function ν, the core C^ν(B) is defined as the set of all payoffs x∈ν(N) such that no coalition can guarantee their members strictly higher payoffs.
The core of the bargaining game B with respect to ν is the set
C^ν(B):={x∈ν(N)|∀ P⊆ N∀ y∈ν(P)∃ i∈ P x_i≥ y_i}.
A standard definition for ν is the set of α-effective vectors, which assumes worst-case responses by the remaining players. Formally, x∈ν^α(P)
if and only if there is σ_P∈∏_i∈ PΣ_i such
that for all σ_-P∈∏_j∈ N∖ PΣ_j,
u_i(σ_P,σ_-P)≥ x_i for all i∈ P. The corresponding core C^α(B) is called the α-core.
This definition allows for threats to discourage formation of a coalition. In general, it is unclear whether unfriendly actions like
these should play a role (see <Ref>).
I also consider another way to define ν that does not involve outright threats. For simplicity, I assume additively separable utility functions. Assume that there is some worst case payoff matrix A∈ℝ^n,n, specifying for each player i∈ N payoffs A_i,j∈{x_i,j| x_i∈ F_i(B)} they may produce for a player j∈ N, if they are left out of the coalition. Then I define ν^A as the set of A-effective vectors, via x∈ν^A(P) if there exists y_j∈ F_j(B) for all j∈ P such that for each coalition member i∈ P, we have
x_i≤∑_j∈ Py_j,i+∑_j'∈ N∖ PA_j',i.
That is, we assume coalition members contribute payoffs y_j,i and the remaining players payoffs A_j',i to player i. For instance, A may represent Nash equilibrium payoffs, i.e., A_i∈argmax_x_i∈ F_i(B) x_i,i for each i∈ N. The A-core C^A(B) is defined analogously to the α-core, but using the A-effective vectors ν^A. In the case where A represents the Nash equilibrium, I also write C^NE(B) for the Nash equilibrium core.
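To make these definitions concrete, the following Python sketch tests whether a candidate payoff vector is blocked by some coalition in a small additively separable game, using a matrix A of outsider contributions. The toy payoff arrays are made up for illustration (they do not come from any example in this report), and only pure strategy profiles of the coalition are enumerated, so this is a conservative blocking check rather than an exact A-core membership test (which would require a linear program over mixed strategies).

import itertools
import numpy as np

# Hypothetical additively separable toy game: u[j][a] is the payoff vector
# that player j produces for all n players when j plays pure action a.
u = {
    0: np.array([[5.0, 0.0, 5.0], [4.0, 4.0, 0.0]]),
    1: np.array([[0.0, 5.0, 5.0], [4.0, 4.0, 0.0]]),
    2: np.array([[0.0, 0.0, 5.0]]),
}
n = len(u)

# A[j]: payoffs player j contributes when left outside a coalition; here we
# take j's best-for-itself (Nash-equilibrium-style) pure action.
A = np.array([u[j][int(np.argmax(u[j][:, j]))] for j in range(n)])

def blocked_by(x, coalition):
    # True if the coalition has a pure profile giving every member strictly
    # more than x, with outsiders contributing their rows of A.
    outside = [j for j in range(n) if j not in coalition]
    base = A[outside].sum(axis=0) if outside else np.zeros(n)
    for profile in itertools.product(*[range(len(u[j])) for j in coalition]):
        total = base + sum(u[j][a] for j, a in zip(coalition, profile))
        if all(total[i] > x[i] for i in coalition):
            return True
    return False

def maybe_in_A_core(x):
    coalitions = itertools.chain.from_iterable(
        itertools.combinations(range(n), k) for k in range(1, n + 1))
    return not any(blocked_by(x, c) for c in coalitions)

print(maybe_in_A_core(np.array([5.0, 5.0, 15.0])))  # False: blocked by {0, 1}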
Note that by definition, the α-core is the largest possible core.
Let B be a bargaining problem with additively separable utilities. Then for any A∈ℝ^n,n with A_i,j∈{x_i,j| x_i∈ F_i(B)} for i,j∈ N, we have C^A(B)⊆ C^α(B).
Let x∈ C^A(B) be arbitrary. Let P⊆ N and y∈ν^α(P). To show x∈ C^α(B), we have to show that there exists at least one i∈ P such that x_i≥ y_i. We know by definition of ν^α that there exists σ_P∈Σ_P such that y_i≤ u_i(σ_P,σ_-P) for all i∈ P and all σ_-P∈Σ_-P. In particular, let σ_j be the strategy corresponding to A_j for j∉ P. Then we know that
y_i≤ u_i(σ_P,σ_-P) for all i∈ P. Letting x̂_j∈ F_j(B) correspond to σ_j for j∈ P, it follows that
y_i≤ u_i(σ)=∑_j∈ Px̂_j,i + ∑_j'∈ N∖ PA_j',i
for all i∈ P. Hence, we have y∈ν^A(P).
By the definition of C^A(B), there thus exists i∈ P such that x_i≥ y_i. This concludes the proof.
Now we turn to analyzing the existence of core allocations. First, we show that the NBS is not necessarily in the α-core, even if the core is nonempty.
Consider the game of three players 1,2,3, where each player has
three options 1,2,3. The (additively separable)
utilities generated by each player taking any of the options are specified
in <Ref>.
The unique Nash equilibrium disagreement point is d=(3,3,3). This is also the threat game disagreement point—no matter the actions of the other players, a player can always increase their bargaining position by moving to action 1 and thus raising their own disagreement payoff and reducing that of the other players.
Now, if 1 and 2 form a coalition, they can both guarantee each other a payoff of 5 each, so
ν^α({1,2})={x∈ℝ^3| x_1≤ 5, x_2≤ 5}.
However, the NBS payoffs can be computed as x=(4.33,4.33,5.67). While both 1 and 2 can benefit 3 well, 3 cannot benefit 1 or 2 well, so 1 and 2 are left worse off by joining the grand coalition. Hence, x∉ν^α({1,2}), and the NBS is unstable.
The same goes for the KSBS: since 3's ideal point is better than 1's and 2's, the KSBS would grant 3 the highest surplus of the players. In particular, the KSBS does not seem fairer than the NBS in this example, as it gives the highest surplus to a player that is not contributing much.
However, the α-core (and thus also the A-core for any payoff matrix A) is nonempty. For instance, consider the payoff vector (5,5,3). This is feasible via the strategy profile (2,2,1), and one can show that no coalition could guarantee strictly higher payoffs for all of their members.
To address the instability issue, one could try to find a bargaining solution that always picks elements from the core. For instance, if one chooses the disagreement payoff in the core, then the NBS will always be in the core as well. While the α-core can be empty in general games [][ch. 13.2]osborne1994course, if we assume additive separability, the α-core is always nonempty. This follows as a corollary from a theorem by scarf1967core. Similar results have been shown in the literature scarf1967core, but I have not found this exact result upon a cursory search, so I am providing it here.
Since the α-core includes possible threats, which I regard as undesirable, I show the result for a somewhat more strict notion of core. For a coalition P⊆ N, define Σ^H_P⊆∏_i∈ PΣ_i as the set of Pareto optimal strategies for the players in P. That is, σ_P∈Σ_P^H if and only if for
x:=∑_i∈ Pu_i(σ_i) and y:=∑_i∈ Pu_i(σ'_i) for any σ'_P∈Σ_P, if y_j≥ x_j for all j∈ P, then y_j=x_j for all j∈ P.
We then define A as the worst-case Pareto optimal payoffs in any coalition. That is,
A_i,j:=min_P⊆ N s.t. i∈ P min_σ_P∈Σ_P^H u_i,j(σ_i)
for i,j∈ N. If we assume that players outside of a coalition are allowed to form their own arbitrary coalitions and Pareto optimal compromises, but we do not allow any threats, this is the relevant notion of core.
The A-core as defined above, and thus by <Ref> also the α-core, is nonempty in additively separable games.
Let B be a bargaining game with additively separable utility functions, as defined in <Ref>. Let A∈ℝ^n,n be defined as in <Ref>. Then the A-core C^A(B) is nonempty.
In <Ref>.
One may ask whether the same would hold for the Nash equilibrium core. Unfortunately, the next example shows that the Nash equilibrium core can be empty, even given additive separability. The intuition behind this is that sometimes, if two players cooperate, this can lead to negative externalities for a third party. However, the Nash equilibrium point does not take this possibility of cooperation between two players into account. Hence, the two players are better off ignoring any agreement that gives everyone at least their Nash equilibrium payoffs.
Consider a bargaining game with three players, N={1,2,3}, with payoffs as in <Ref>. There is a unique Nash equilibrium in which all players play action 1 and receive utilities (5,5,15). However, players 1 and 2 can also coordinate on action 2, which serves as a compromise between the two and produces 8 utility for both. Intuitively, we can imagine that 1 and 2 share some common goal that they can choose to maximize instead of their own idiosyncratic goals. However, player 3 benefits from the players optimizing their idiosyncratic goals, and if 1 and 2 cooperate, player 3 loses out.
I added a third strategy for the first two players to make sure an option x∈ F(B) exists that strictly dominates the Nash equilibrium disagreement point, but this is inessential to the example. (Similarly, the fact that player 3 only has one option that is not Pareto dominated is inessential and can easily be relaxed.)
Now let A correspond to the Nash equilibrium strategies. Then
A=[ 5 0 5; 0 5 5; 0 0 5 ].
The coalition P={1,2} can guarantee its members a payoff of 8 each, so
ν^NE({1,2})={x∈ℝ^3| x_1,x_2≤ 8}.
Moreover, we have
ν^NE({3})={x∈ℝ^3| x_3≤ 15},
since A_1,3+A_2,3+A_3,3=15, and 3 cannot improve upon this payoff by changing their action.
It follows from the above that any payoff vector x∈ C^NE(B) has to satisfy x_3≥ 15 and x_1,x_2≥ 8. However, such a payoff vector (8,8,15) is not in the feasible set and thus impossible to obtain.
The only way to produce 8 utility for both 1 and 2 is for both players to play 2. But then player 3 can have at most 5<15 utility. Hence, C^NE(B)=∅.
There exists a bargaining game B with additively separable utilities such that the Nash equilibrium core C^NE(B) is empty.
See <Ref>.
Based on the above results, one possible way to define a disagreement point that leads to a stable bargaining solution and that does not involve threats would be via
d_j =max{x_j| x∈ν^A({j})}=max_σ_j∈Σ_ju_j,j(σ_j) + ∑_i∈ N∖{j}A_i,j
for any j∈ N and where A is defined as in <Ref>. That is, we let d_j correspond to the best possible payoff that j can attain given worst-case Pareto optimal responses by the other players.
It would be valuable to investigate coalitional stability and stable solution concepts in future work, including a more thorough review of the relevant literature on nontransferable utility coalitional games [e.g.][]shapley1967utility,Maschler1989,Maschler1992,hart1996,Harsanyi1963. As in the case of disagreement points, the work by diffractor2022rose may also be relevant.
§ CONCLUSION AND FUTURE WORK
In this report, I developed a game-theoretic model of ECL, making it possible to formalize many important aspects and issues with ECL. This includes agents' uncertainty about other agents in the multiverse, the problem of selecting a multiverse-wide compromise outcome, and the question of which joint beliefs to adopt over the actions of agents. There are many interesting open problems and avenues for future work:
* How to model agent's default options without ECL? The choice of a disagreement point (<Ref>) is a fundamental issue in ECL. In particular, there is the question whether threats should play a role in selecting a multiverse-wide compromise. It may be valuable to review the ROSE value diffractor2022rose or consider acausal bargaining models [e.g.][]diffractor2018cooperative,kosoy2015superrationality in future work.
* Another fundamental issue is that of coalitional stability (<Ref>), which is related to the problem that compromise between some parties can make other parties worse off, potentially preventing the formation of a grand multiverse-wide coalition. While there always exist stable payoff allocations given additive separability, it is unclear what happens if some value systems violate this assumption. Additionally, it is an open question how to choose a stable bargaining solution. Here, it may be useful to review the literature on nontransferable utility coalitional games [e.g.][]shapley1967utility,Maschler1989,Maschler1992,hart1996,Harsanyi1963, as well as diffractor2022rose.
* How to assess possible dependencies between different agents, especially in the human case where no source code is available? What is the nature of these dependencies? What is the relevant reference class of agents for superrationality in ECL? Can one rigorously justify inferences such as “if I choose the NBS, other players are likely to do so, too”?
* How can acausal bargaining models inform ECL? Can we model the process of arriving at conditional beliefs about other agents' actions as some kind of bargaining procedure? If so, what is a plausible model, and how can it inform the problems discussed above?
* I make standard common knowledge and common prior assumptions (see <Ref>), which are unrealistic in the ECL context, at least when it comes to ECL among humans. How to relax these assumptions? Assigning posterior beliefs to other players is important to assess their gains from trade. How to do this without a common prior? See Harsanyi1967,Monderer1989-pj.
* How do gains from trade diminish when agents have different models or choose different bargaining solutions? This would lead to wasted gains from trade, but it is unclear how much would get lost, and how much different value systems would be affected.[Thanks to Lukas Gloor for a comment on an earlier draft.] How robust are bargaining solutions in practice to different empirical assumptions and model parameters?
* An alternate approach to the one employed in this report would be to take joint distributions over actions as given, analyzing and classifying them based on the dependencies they imply. For example, a specific joint distribution could imply positive or negative correlations between more or less cooperative actions of players. One could then investigate which joint distributions enable ECL.[This approach was suggested to me by Philip Trammell.]
* How to deal with the infinities involved in ECL in an infinite universe, as well as the potential continuum of players and values, rather than the discrete set assumed in this report? Is there a relatively small number of discrete clusters of similar value systems, or are there as many different types as players?
* What is the distribution over values of superrational cooperators, and what are their beliefs? Can humans usefully make progress on this question, and if not, would superintelligent AI systems be able to do so?
The main purpose of this report is to contribute towards the development of a theory of ECL and to outline open technical and philosophical problems, rather than to introduce an applicable model. However, the Bayesian bargaining model from <Ref> could still be useful for preliminary simulations to investigate possible gains from trade. This might help estimate the potential value of ECL and inform prioritization decisions.
§ ACKNOWLEDGEMENTS
Part of the work on this report was carried out during a Summer Research Fellowship at the Centre for Effective Altruism (CEA). Special thanks go to Max Dalton, who was my supervisor at the CEA. I am grateful for support by the Center on Long-Term Risk, the Center on Long-Term Risk Fund, an Open Phil AI Fellowship, and an FLI PhD Fellowship. Moreover, I am indebted to Lennart Stern, Philip Trammell, Owen Cotton-Barratt, Caspar Oesterheld, Max Daniel, Sam Clarke, Daniel Kokotajlo, Lukas Gloor, Leon Lang, Abram Demski, and Stuart Armstrong for their invaluable discussions and feedback, as well as for their help with the mathematics and game theory in this report. Finally, I want to express my gratitude to commenters on an earlier post, where I requested input on this report treutlein2018request.
§ ARMSTRONG2013'S BARGAINING SOLUTION
Armstrong2013 has published a series of blog
posts on bargaining in which he develops a bargaining solution. In this appendix, I will discuss the solution and argue against using it to model ECL.
In Armstrong's solution, utility functions are normalized such that their
zero point is the disagreement point and 1 is their
ideal point, just as with the KSBS. But instead of then taking the
point on the Pareto frontier where everyone has the same utility given
this normalization (as the KSBS would), Armstrong suggests maximizing the
sum of the thus normalized utility functions.
Armstrong discusses two ideas to support his proposed solution. The
first one is the normalization according to the KSBS, which is supposed
to give credit to the fact that if a player can benefit another player
a lot, the other's ideal point will also be higher, and their utility
function will thus be scaled down in the normalization in comparison
to the utility function of the player. The second idea is that of maximizing a sum
instead of maximizing a product or just taking some point with a fixed
ratio of utilities, which is to give agents higher ex ante
expectations of utility.
I think Armstrong's solution is unsuitable for my
setting. First, his solution does not solve the issue with fairness
in a multilateral setting that I discuss in <Ref>. Second, as argued in <Ref>,
solutions should guarantee positive gains from trade for all participants.
Maximizing a sum of normalized utility functions does not generally
guarantee that, as I have shown in the case of variance normalization.
As has been pointed out in the comments to Armstrong2013, normalizing according
to disagreement and ideal point may also not guarantee positive gains
from trade.
Lastly, the fact that a bargaining solution maximizes the sum of utilities is not a reason to choose it over other Pareto optimal solutions. Even the KSBS or NBS will maximize some
weighted sum of utility functions, since every point on the Pareto frontier corresponds to the maximizer of some weighted sum of
coordinates. I currently don't see a reason why
choosing the weighting based on knowledge of the entire Pareto frontier is
at an (a priori) disadvantage over weightings which are chosen based
on other information.
Finally, note that the NBS maximizing a product does not mean that an agent's
uncertainty cannot be taken into account well by the NBS. As outlined in <Ref>, the expectations
of agents over different possible games can be incorporated
into feasible sets and Pareto frontiers, so the NBS need not only be applied to games
with certainty. Hence, when it comes to expectations over different
games, the NBS chooses a point that is Pareto optimal as judged by agents' beliefs—as
opposed to, for instance, choosing a point which leads to certain gains from
trade but to a lower expectation
across games.
§ HARSANYI1972'S AXIOMATIZATION OF THE NASH BARGAINING SOLUTION IN INCOMPLETE INFORMATION GAMES
In this section, I outline Harsanyi1972's axiomatization of the NBS in two-player incomplete information games. It is not directly applicable to my setup in <Ref>, and I did not find a more relevant result in the literature. I believe one should be able to translate the analysis to my setup, but I will not investigate this here.
Harsanyi1972's axiomatization
includes versions of the axioms from <Ref>, namely Individual rationality, Pareto optimality,
Invariance to affine transformations, a version of Anonymity for both
players and all types, and the Independence of irrelevant alternatives
axiom. In addition, there are two new axioms which specifically address the types.
To define these new axioms as in Harsanyi1972, we first have to specify a slightly different version of a Bayesian
bargaining game.
A two-player Bayesian bargaining game is a tuple G=(T_1,T_2,F,p)
where
* T_1={1,…,m} and T_2={m+1,…,l} are the two sets of types
for either player;
* F⊆ℝ^l is the feasible set, which specifies the
ex interim expected utilities for each type;
* p is a joint distribution over types for both players.
In this game, there are only two players, 1 and 2, and each
player has their own set of types. The feasible set F is just what
would have been the set F(G) in my case, only that the payoffs
depend on both types and players instead of just depending on types. If x∈ F, then there
exists a mixed strategy profile such that x_i specifies the utility
that type i would expect given this mixed strategy profile and
their beliefs about which types the other player could have.
The set F is assumed to be chosen such that the minimal element
in F is the disagreement point. That is, there exists d∈ F
such that d_i≤ x_i for all x∈ F,i∈ T_1∪ T_2.
Moreover, it is assumed that there are positive gains from trade to
be had for everyone—i.e., there is an element x∈ F such that
x_i>d_i for all i∈ T_1∪ T_2.
To define one of the new axioms, we need to define the operation of “splitting a type”.
We can define splitting a type for feasible payoff vectors as well as for games:
* Let j∈{1,…,m}. j is the type of player 1 we want
to split (the definition is analogous for player 2). We have two
new sets of types T'_1={1,…,m+1} and T'_2={m+2,…,l+1}.
Define F' such that it contains all x' such that there is x∈ F
such that x'_i=x_i for i∈{1,…,j}, x'_j+1=x_j,
and x'_i=x_{i-1} for i∈{j+2,…,l+1}. This is called
deriving x' from x by splitting type j of player 1 into
two types.
* Let 0<ν<1. Let t∈ T'_2. We then define p' such that
p'(k,t)=p(k,t-1) for all k=1,…,j-1, p'(j,t)=ν p(j,t-1),
p'(j+1,t)=(1-ν)p(j,t-1), and p'(k,t)=p(k-1,t-1) for k∈{j+2,…,m+1}.
The new game G'=(T'_1,T'_2,F',p') with F' as feasible set,
types T'_1,T'_2, and p' as distribution over types is derived
from splitting type j of player 1 into two types with probabilities
ν and 1-ν.
With these definitions, the two new axioms are as follows:
Splitting types. If G'=(T'_1,T'_2,F',p') is derived from G
by splitting type j of player 1 into two types with probabilites
ν and ν-1, then x'=μ(G') is derived from x=μ(G)
by splitting type j of player 1 into two types.
Mixing basic probability matrices. If G=(T_1,T_2,F,p) and G'=(T_1,T_2,F,p')
have the same solution vector, then for every G”=(T_1,T_2,F,p”)
with p”=ν p+(1-ν)p' where ν∈[0,1], it is μ(G”)=μ(G')=μ(G).
Given these two additional axioms, Harsanyi1972 show that the solution
function must be
μ(G)=argmax_x∈ F ∏_t∈ T_1∪ T_2(x_t-d_t)^p(t).
That is, an asymmetric version of the NBS where the weights are the
prior probabilities of the types.
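As a rough numerical illustration (not part of Harsanyi1972's construction), the Python sketch below approximates the maximizer of such a weighted Nash product over a polytope of ex interim payoffs by sampling convex combinations of its vertices; the vertex payoffs, type weights, and disagreement point are made-up numbers.

import numpy as np

rng = np.random.default_rng(0)

# Made-up example: four types, vertices of the feasible set F, prior type
# probabilities p, and disagreement point d.
vertices = np.array([[6.0, 1.0, 2.0, 3.0],
                     [1.0, 6.0, 3.0, 2.0],
                     [3.0, 3.0, 5.0, 1.0],
                     [2.0, 2.0, 2.0, 6.0]])
p = np.array([0.3, 0.2, 0.3, 0.2])
d = np.array([0.5, 0.5, 0.5, 0.5])

# Crude Monte-Carlo search: sample convex combinations of the vertices and keep
# the one maximizing sum_t p(t) * log(x_t - d_t), which is equivalent to
# maximizing the weighted Nash product prod_t (x_t - d_t)^p(t).
best_x, best_val = None, -np.inf
for _ in range(50_000):
    w = rng.dirichlet(np.ones(len(vertices)))
    x = w @ vertices
    if np.all(x > d):
        val = float(p @ np.log(x - d))
        if val > best_val:
            best_x, best_val = x, val

print(best_x)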
§ PROOF OF THEOREM <REF>
Let q:=δ_α be the Dirac measure, defined via δ_α(A)=1 if and only if α∈ A. As in <Ref>, we now want to define a joint distribution q_r for each r∈ℕ that converges weakly to q. To that end, define s as follows. For every t∈ T, let μ_t be some probability measure on 𝒜_t with full support such that μ_t({α'_t})=0 for any α'_t∈𝒜_t (assuming 𝒜_t contains more than one point, and thus by convexity a continuum of points). For any set A⊆𝒜, define
s(A):=m^-1∑_t∈ Tμ_t({α'_t| (α'_t,β_-t)∈ A}).
To show that this is a probability measure, note that s(∅)=0, s is always non-negative, and
s(𝒜)
=m^-1∑_t∈ Tμ_t({α'_t| (α'_t,β_-t)∈𝒜})
=m^-1∑_t∈ Tμ_t(𝒜_t)
=1.
Moreover, for any countable collection of pairwise disjoint sets A^1,A^2,…, we have
s(⋃_l∈ℕA^l)
=m^-1∑_t∈ Tμ_t(α'_t| (β_-t,α'_t)∈⋃_l∈ℕA^l)
=
m^-1∑_t∈ T∑_l∈ℕμ_t({α'_t| (β_-t,α'_t)∈ A^l})
=∑_l∈ℕm^-1∑_t∈ Tμ_t({α'_t| (β_-t,α'_t)∈ A^l})
=∑_l∈ℕs(A^l).
This shows that s is a probability measure.
Moreover, for any open, nonempty A_t⊆𝒜_t, we have
s(A_t)
=
m^-1∑_t'∈ Tμ_t'({α'_t'| (α'_t',β_-t')∈𝒜_-t× A_t})
≥ m^-1μ_t(A_t)>0,
so this measure satisfies the full support condition that is required to define q_r.
Now we define q_r:=(r-1)/r · q+(r-1)/r^2 ·δ_β+1/r^2 · s. Since this is a convex combination of probability measures, it is still a probability measure. It remains to show that this measure satisfies our requirements. First, clearly, this weakly converges to q as r→∞. Second, since 1/r^2>0 for all r∈ℕ, it is q_r(A_t)≥ (1/r^2) s(A_t)>0 for any t∈ T and nonempty open set A_t⊆𝒜_t.
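Note that the three mixture weights sum to one, since (r-1)/r + (r-1)/r^2 + 1/r^2 = (r^2-r)/r^2 + (r-1)/r^2 + 1/r^2 = r^2/r^2 = 1, so q_r is indeed a convex combination of q, δ_β, and s for every r∈ℕ.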
Now we turn to the condition on expected utilities. Let t∈ T and A_t⊆𝒜_t with q(A_t)>0 arbitrary but fixed in the following. Then it follows that α_t∈ A_t, and thus
q({α}| A_t)=q({α})/q(A_t)=1. Hence, for measurable A⊆𝒜, it follows
lim_r→∞ q_r(A| A_t)
= lim_r→∞ [ (r-1)/r ·δ_α(A∩(𝒜_-t× A_t)) + (r-1)/r^2 ·δ_β(A∩(𝒜_-t× A_t)) + 1/r^2 · s(A∩(𝒜_-t× A_t)) ] / [ (r-1)/r · q(A_t) + (r-1)/r^2 ·δ_β_t(A_t) + 1/r^2 · s(A_t) ]
=δ_α(A)=q(A| A_t)
and thus
lim_r→∞EU_t(q_r;A_t)=EU_t(q;A_t)=EU_t(α).
Next, let B_t⊆𝒜_t be an arbitrary nonempty open set, representing any other set of actions type t could condition on. We have to show that lim_r→∞EU_t(q_r;A_t)≥lim_r→∞EU_t(q_r;B_t). To that end, note that if q(B_t)>0, it follows from the above that
lim_r→∞EU_t(q_r;B_t)=EU_t(α)=lim_r→∞EU_t(q_r;A_t),
and we are done.
Now consider the case q(B_t)=0. First, assume β_t∈ B_t. In this case, for measurable A⊆𝒜, we have
lim_r→∞ q_r(A| B_t)
= lim_r→∞ [ (r-1)/r ·δ_α(A∩(𝒜_-t× B_t)) + (r-1)/r^2 ·δ_β(A∩(𝒜_-t× B_t)) + 1/r^2 · s(A∩(𝒜_-t× B_t)) ] / [ (r-1)/r · q(B_t) + (r-1)/r^2 ·δ_β_t(B_t) + 1/r^2 · s(B_t) ]
= lim_r→∞ [ (r-1)/r^2 ·δ_β(A∩(𝒜_-t× B_t)) + 1/r^2 · s(A∩(𝒜_-t× B_t)) ] / [ (r-1)/r^2 ·δ_β_t(B_t) + 1/r^2 · s(B_t) ]
= lim_r→∞ [ (r-1)/r ·δ_β(A∩(𝒜_-t× B_t)) + 1/r · s(A∩(𝒜_-t× B_t)) ] / [ (r-1)/r ·δ_β_t(B_t) + 1/r · s(B_t) ]
= δ_β(A∩(𝒜_-t× B_t)) / δ_β_t(B_t)
= δ_β(A).
Hence, it follows that
lim_r→∞ EU_t(q_r;B_t) = lim_r→∞𝔼_α'∼ q_r[EU_t(α')|α'_t∈ B_t] = 𝔼_α'∼δ_β[EU_t(α')] = EU_t(β).
Using the assumption on α and β, we can conclude that
lim_r→∞ EU_t(q_r;B_t)=EU_t(β)≤ EU_t(α)=lim_r→∞ EU_t(q_r;A_t),
and we are done.
Second, consider the case β_t∉ B_t. Then for any r∈ℕ, we have
q_r(A| B_t)
= [ (r-1)/r ·δ_α(A∩(𝒜_-t× B_t)) + (r-1)/r^2 ·δ_β(A∩(𝒜_-t× B_t)) + 1/r^2 · s(A∩(𝒜_-t× B_t)) ] / [ (r-1)/r · q(B_t) + (r-1)/r^2 ·δ_β_t(B_t) + 1/r^2 · s(B_t) ]
= [ 1/r^2 · s(A∩(𝒜_-t× B_t)) ] / [ 1/r^2 · s(B_t) ]
= s(A∩(𝒜_-t× B_t)) / s(B_t)
= s(A| B_t).
It follows that EU_t(q_r;B_t)=EU_t(s;B_t).
Now define A_t^β_-t:={α'_t| α'∈ A, α'_-t=β_-t}. Then
s(A∩ (𝒜_-t× B_t))=
m^-1∑_t'∈ Tμ_t'({α'_t'| (α'_t',β_-t')∈ A∩ (𝒜_-t× B_t)})
=
m^-1μ_t({α'_t| (α'_t,β_-t)∈ A, α'_t∈ B_t})
=m^-1μ_t(B_t∩ A_t^β_-t).
It follows that s(A| B_t)=0 if β_-t∉ A_-t, so
for a random variable α'∼ s, we have
s(α'_-t=β_-t| B_t)=1.
It follows for any r∈ℕ that
EU_t(q_r;B_t)(<ref>)=EU_t(s;B_t)=𝔼_α'∼ s[EU_t(α')|α'_t∈ B_t]=𝔼_α'∼ s[EU_t(β_-t,α'_t)|α_t'∈ B_t]
(i)≤𝔼_α'∼ s[EU_t(α)|α_t'∈ B_t]=EU_t(α),
where we have used the assumption on α,β in (i).
Hence, also
lim_r→∞EU_t(q_r;B_t)≤ EU_t(α)=lim_r→∞EU_t(q_r;A_t),
which concludes the proof.
§ PROOF OF THEOREM <REF>
I begin by introducing some additional notation, in order to be able to state the result used to prove <Ref>.
The following definitions and conditions are adapted from kannai1992core. I assume a set N of players is given.
A function ν𝒫(N)→𝒫(ℝ^N) is called characteristic function if it satisfies the following criteria:
(i) ν(∅)=∅;
(ii) for all S⊆ N, S≠∅, ν(S) is a nonempty closed subset of ℝ^N;
(iii) if x∈ν(S) and y_i≤ x_i for all i∈ S, then y∈ν(S);
(iv) there exists a closed set F⊆ℝ^N such that
ν(N)={x∈ℝ^N|∃ y∈ F∀ i∈ N x_i≤ y_i};
(v)The set
F∩{x∈ℝ^N|∀ i∈ N x_i≥max{y_i| y∈ν({i})}}
is nonempty and compact.
Now let B be a bargaining game and A∈ℝ^n,n such that A_i,j∈{x_i,j| x_i∈ F_i(B)} for all i,j∈ N.
Then the function ν^A of A-dominated vectors as introduced in <Ref>, defined via
ν^A(P):={x∈ℝ^n|∃ y∈∏_i∈ PF_i∀ i∈ P x_i ≤∑_j∈ Py_j,i+∑_j'∈ N∖ PA_j',i},
satisfies these criteria.
ν^A is a characteristic function.
Left as an exercise. It follows from the assumption that the F_i(B) are compact, convex sets, together with the definition of ν^A. For (iv) and (v), we can take F=F(B).
Next, we need two technical definitions to be able to state the result.
Let T⊆𝒫(N) be a collection of coalitions. Then T is said to be a balanced collection if there exist nonnegative weights (δ_S)_S∈ T such that
∑_S∈ T s.t. i∈ Sδ_S=1.
This means that there exist weights for each set in T such that, for each player i∈ N, the weights of all the sets containing that player add up to 1.
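For example, with N={1,2,3}, the collection T={{1,2},{1,3},{2,3}} is balanced with weights δ_S=1/2 for every S∈ T, since each player appears in exactly two of the three coalitions; likewise, any partition of N is balanced with all weights equal to 1.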
A characteristic function ν is called balanced if for every balanced collection T, we have
⋂_S∈ Tν(S)⊆ν(N).
This means that if a payoff vector x can be guaranteed for their members by every single coalition in a balanced set of coalitions, then it must also be achievable by the grand coalition. This is in general not true, but we will show that it is true in the case of an additively separable bargaining game.
Now we can state the main result used to prove <Ref>. Recall the definition of the core as the set of vectors x∈ν(N) such that for all coalitions P⊆ N and y∈ν(P), there exists at least one player i∈ P such that x_i≥ y_i. Note that every characteristic function ν defines a core C^ν.
Every balanced characteristic function has a nonempty core.
See kannai1992core.
Now we can prove <Ref>.
Consider a bargaining game B with additively separable utility functions. Recall
A_i,j:=min_P⊆ N s.t. i∈ P min_σ_P∈Σ_P^H u_i,j(σ_i)
for i,j∈ N, where Σ^H_P is the set of Pareto optimal strategies for the players in P. By <Ref>, ν:=ν^A is a characteristic function. It remains to show that ν is balanced. Then it follows from <Ref> that C^A(B)=C^ν is nonempty.
To that end, assume T is a balanced collection with weights (δ_S)_S∈ T, and assume x∈ν(S) for all S∈ T.
Then by definition, there exists x_i^S∈ F_i(B) for each i∈ N that corresponds to the utilities produced by player i in coalition S, such that
x_j≤∑_i∈ S x_i,j^S + ∑_i ∈ N∖ SA_i,j
for all j∈ S. Note that w.l.o.g., we can assume that for some σ_S∈Σ_S^H, we have x_i^S=u_i(σ_i) for all i∈ S. That is, we can choose vectors x_i^S that result in Pareto optimal payoffs for the members of S.
Then, by definition of A, we have
x_i,j^S≥ A_i,j
for any player i∈ S and j∈ N.
Now we want to find a matrix of vectors x̂∈ℝ^n,n such that
x̂_i∈ F_i(B) for each i∈ N, and such that
∑_i∈ Nx̂_i,j≥ x_j
for all j∈ N. If we can find such a matrix, then it follows that x∈ F(B) and thus x∈ν(N), and we are done.
To define this matrix, let i∈ N arbitrary and set
x̂_i:=∑_S∈ T s.t. i∈ Sδ_Sx_i^S.
Note that this is a convex combination of vectors x_i^S∈ F_i(B), and thus also x̂_i∈ F_i(B) since the feasible sets are convex. It follows that
∑_i∈ Nx̂_i,j = ∑_i∈ N∑_S∈ T s.t. i∈ Sδ_S x_i,j^S
= ∑_S∈ T∑_i∈ Sδ_S x_i,j^S
= ∑_S∈ T( 1_S(j)∑_i∈ Sδ_S x_i,j^S + (1-1_S(j))∑_i∈ Sδ_S x_i,j^S )
≥∑_S∈ T( 1_S(j)δ_S(x_j-∑_i∈ N∖ SA_i,j) + (1-1_S(j))∑_i∈ Sδ_S A_i,j )   (by <ref>)
= x_j-∑_S∈ Tδ_S( 1_S(j)∑_i∈ NA_i,j-∑_i∈ SA_i,j )
= x_j-∑_i∈ NA_i,j+∑_S∈ Tδ_S∑_i∈ SA_i,j
= x_j-∑_i∈ NA_i,j+∑_i∈ N∑_S∈ T s.t. i∈ Sδ_S A_i,j
= x_j-∑_i∈ NA_i,j+∑_i∈ NA_i,j
= x_j.
This shows that x∈ν(N) and thus concludes the proof.
|
http://arxiv.org/abs/2307.04513v1 | 20230710122005 | CoactSeg: Learning from Heterogeneous Data for New Multiple Sclerosis Lesion Segmentation | [
"Yicheng Wu",
"Zhonghua Wu",
"Hengcan Shi",
"Bjoern Picker",
"Winston Chong",
"Jianfei Cai"
] | eess.IV | [
"eess.IV",
"cs.CV"
] |
1 Department of Data Science & AI, Faculty of Information Technology, Monash University, Melbourne, VIC 3168, Australia
[email protected]
2 SenseTime Research, Singapore, 069547, Singapore
3 Alfred Health Radiology, Alfred Health, Melbourne, VIC 3004, Australia
4 Central Clinical School, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, VIC 3800
CoactSeg: Learning from Heterogeneous Data for New Multiple Sclerosis Lesion Segmentation
Yicheng Wu1() Zhonghua Wu 2 Hengcan Shi 1 Bjoern Picker 3,4 Winston Chong 3,4 Jianfei Cai1
August 12, 2023
New lesion segmentation is essential to estimate the disease progression and therapeutic effects during multiple sclerosis (MS) clinical treatments. However, the expensive data acquisition and expert annotation restrict the feasibility of applying large-scale deep learning models. Since single-time-point samples with all-lesion labels are relatively easy to collect, exploiting them to train deep models is highly desirable to improve new lesion segmentation.
Therefore, we proposed a coaction segmentation (CoactSeg) framework to exploit the heterogeneous data (i.e., new-lesion annotated two-time-point data and all-lesion annotated single-time-point data) for new MS lesion segmentation.
The CoactSeg model is designed as a unified model, with the same three inputs (the baseline, follow-up, and their longitudinal brain differences) and the same three outputs (the corresponding all-lesion and new-lesion predictions), no matter which type of heterogeneous data is being used.
Moreover, a simple and effective relation regularization is proposed to ensure the longitudinal relations among the three outputs to improve the model learning.
Extensive experiments demonstrate that utilizing the heterogeneous data and the proposed longitudinal relation constraint can significantly improve the performance for both new-lesion and all-lesion segmentation tasks.
Meanwhile, we also introduce an in-house MS-23v1 dataset, including 38 Oceania single-time-point samples with all-lesion labels. Codes and the dataset are released at <https://github.com/ycwu1997/CoactSeg>.
§ INTRODUCTION
Multiple sclerosis (MS) is a common inflammatory disease in the central nervous system (CNS), affecting millions of people worldwide <cit.> and even leading to the disability of young population <cit.>. During the clinical treatment of MS, lesion changes, especially the emergence of new lesions, are crucial criteria for estimating the effects of given anti-inflammatory disease-modifying drugs <cit.>. However, MS lesions are usually small, numerous, and appear similar to Gliosis or other types of brain lesions, e.g., ischemic vasculopathy <cit.>. Identifying MS lesion changes from multi-time-point data is still a heavy burden for clinicians. Therefore, automatically quantifying MS lesion changes is essential in constructing a computer-aided diagnosis (CAD) system for clinical applications.
Deep learning has been widely used for MS lesion segmentation from brain MRI sequences <cit.>. For example, the icobrain 5.1 framework <cit.> combined supervised and unsupervised approaches and designed manual rules to fuse the final segmentation results. Some works <cit.> further studied the complementary features from other MRI modalities for MS lesion segmentation. Meanwhile, to train a better deep model, class-imbalance issues <cit.> and prior brain structures <cit.> have been respectively investigated to improve the performance.
With the impressive performance achieved by existing pure MS lesion segmentation methods <cit.>, recent attention has been shifted to analyze the longitudinal MS changes <cit.>, such as stable, new, shrinking, and enlarging lesions, with the focus on new MS lesion segmentation <cit.>.
However, collecting adequate well-labeled longitudinal MS lesion data for model learning is highly challenging since it needs multi-time-point data from the same set of patients, and requires costly and time-consuming expert annotations.
Fig. <ref> shows the three types of heterogeneous MS lesion data: new-lesion annotated two-time-point data, all-lesion annotated two-time-point data, and all-lesion annotated single-time-point data, each of which is associated with different costs. New-lesion annotated two-time-point data is the ideal one for learning new lesion segmentation, but with the highest data acquisition and annotation costs. Annotating all lesions in two-time-point data can reduce the annotation cost, but it requires accurate brain registration and rule-based post-processing to identify lesion changes, which cannot avoid noise accumulation and often leads to sub-optimal performance. All-lesion annotated single-time-point data is with the cheapest data acquisition and annotation costs. This motivates us to raise the question: “Can we leverage all-lesion annotated single-time-point data to promote the new MS lesion segmentation?”
Therefore, in this paper, we proposed a deep Coaction Segmentation (CoactSeg) model that can unify heterogeneous data and annotations for the new MS lesion segmentation task. Specifically, CoactSeg takes three-channel inputs, including the baseline, follow-up, and corresponding differential brains, and produces all-lesion and new-lesion segmentation results at the same time.
Moreover, a longitudinal relation constraint (e.g., new lesions should only appear at the follow-up scans) is proposed to regularize the model learning in order to integrate the two tasks (new and all lesion segmentation) and boost each other. Extensive experiments on two MS datasets demonstrate that our proposed CoactSeg model is able to achieve superior performance for both new and all MS lesion segmentation, e.g., obtaining 63.82% Dice on the public MICCAI-21 dataset <cit.> and 72.32% Dice on our in-house MS-23v1 dataset, respectively. It even outperforms two neuro-radiologists on MICCAI-21.
Overall, the contributions of this work are three-fold:
* We propose a simple unified model CoactSeg that can be trained on both new-lesion annotated two-time-point data and all-lesion annotated single-time-point data in the same way, with the same input and output format;
* We design a relation regularizer to ensure the longitudinal relations among all and new lesion predictions of the baseline, follow-up, and corresponding differential brains;
* We construct an in-house MS-23v1 dataset, which includes 38 Oceania single-time-point 3D FLAIR scans with manual all-lesion annotations by experienced human experts. We will release this dataset publicly.
§ DATASETS
We trained and evaluated our CoactSeg model on two MS segmentation datasets, as shown in Table <ref>. On the public MICCAI-21 dataset[<https://portal.fli-iam.irisa.fr/msseg-2/>], we only use its training set since it does not provide official labels of testing samples. Specifically, 40 two-time-point 3D FLAIR scans are captured by 15 MRI scanners at different locations. Among them, 11 scans do not contain any new MS lesions. The follow-up data were obtained around 1-3 years after the first examination. Four neuro-radiologists from different centers manually annotated new MS lesions, and a majority voting strategy was used to obtain the final ground truth. For pre-processing, the organizers only performed a rigid brain registration, and we further normalized all MRI scans to a fixed resolution of [0.5, 0.75, 0.75] mm.
Since the public MS lesion data is not adequate <cit.>, we further collected 38 single-time-point 3D FLAIR sequences as a new MS dataset (MS-23v1). Specifically, all samples were anonymized and captured by a 3T Siemens scanner in Alfred Health, Australia. To the best of our knowledge, this will be the first open-source dataset from Oceania for MS lesion segmentation, contributing to the diversity of existing public MS data. Two neuro-radiologists and one senior neuro-scientist segmented all MS lesions individually and in consensus using the MRIcron segmentation tool[<https://www.nitrc.org/projects/mricron/>]. The voxel spacing of all samples is then normalized to an isotropic resolution of [0.8, 0.8, 0.8] mm.
Finally, when conducting the mixed training, we used a fixed data split in this paper (i.e., 62 samples for training and 16 for validation in total). Note that we followed the setting of the public challenge <cit.>, which selects the new validation set from MICCAI-21 that does not include samples without any new MS lesions.
§ METHOD
§.§ Overview
Fig. <ref> illustrates the overall pipeline of our proposed CoactSeg model F_θ. We construct a quadruple set (X_b, X_fu, X_d, Y) for the model training. Here, the longitudinal difference map x_d ∈ X_d is obtained by a subtraction operation between the baseline brain x_b ∈ X_b and its follow-up x_fu∈ X_fu (i.e., x_d = x_fu-x_b). Therefore, given heterogeneous annotations, i.e., all-lesion labels y_al^s ∈ Y_al^s in single-time-point data and new-lesion labels y_nl^t ∈ Y_nl^t in two-time-point data, the CoactSeg model F_θ is designed to exploit both for the model training.
§.§ Multi-head Architecture
Fig. <ref> shows that new-lesion regions are highlighted in the brain difference map x_d. Hence, besides x_b and x_fu, CoactSeg also receives x_d as inputs. It generates all-lesion and new-lesion predictions as
p_al^s1, p_al^s2, p_nl^s = F_θ(x_b^s, x_fu^s, x_d^0)
p_al^t1, p_al^t2, p_nl^t = F_θ(x_b^t, x_fu^t, x_d^t).
For single-time-point samples x^s ∈ X^s, x_b^s and x_fu^s are identical as x^s, and the difference map becomes an all-zero matrix x_d^0, with p_al^s1, p_al^s2 and p_nl^s being the corresponding all-lesion and new-lesion predictions of x^s. For two-time-point data x^t ∈ X^t,
x_b^t and x_fu^t respectively denote the first and second time-point data samples, with p_al^t1, p_al^t2 and p_nl^t being the all-lesion segmentation results at the first and second time-point and the new-lesion results of x^t, respectively.
In this way, we unify the learning of both single and two-time-point data with heterogeneous annotations by using the same model F_θ, with the same input and output formats.
Note that, inspired by semi-supervised learning <cit.>, we mix x^s and x^t samples into each batch for training. Given the heterogeneous annotations, i.e., all-lesion labels for single-time-point data and new-lesion labels for two-time-point data, we apply the following corresponding supervisions:
L_al = Dice(p_al^s1, y_al^s) + Dice(p_al^s2, y_al^s)
L_nl = Dice(p_nl^t, y_nl^t)
where Dice refers to the common Dice loss for medical segmentation tasks. We use a 3D VNet <cit.> as the backbone of F_θ, and the three prediction heads are designed as individual convolutional blocks. Note that the last prediction head also receives the features from the first two in order to capture the all-lesion information. Compared to the recent work <cit.> for exploiting heterogeneous data, our architecture avoids the complicated design of dynamic prediction heads.
§.§ Longitudinal Relation Regularization
Human experts usually identify new MS lesions by comparing the brain MRI scans at different time points. Inspired by this, we further propose a longitudinal relation constraint to compare samples from different time points:
L_rr = ||p_al^s1, p_al^s2||_2 + ||p_al^t1⊗ y_nl^t, 0||_2 + ||p_al^t2⊗ y_nl^t, 1||_2
where ⊗ is a masking operation. The first term in (<ref>) is to encourage the all-lesion predictions p_al^s1 and p_al^s2 to be the same since there is no brain difference for single-time-point data. The second and third terms in (<ref>) are to ensure that the new-lesion region can be correctly segmented as the foreground in p_al^t2 and as the background in p_al^t1 in two-time-point data with only new lesion labels y_nl^t.
Finally, the overall loss function to train our CoactSeg model becomes a weighted sum of L_al, L_nl, and the regularization L_rr:
L = L_al + λ_1 × L_nl +λ_2 × L_rr
where λ_1 and λ_2 are constants to balance different tasks.
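A minimal PyTorch-style sketch of this training objective is given below. It is an illustration of the losses above rather than the authors' released implementation (see the repository linked in the abstract); the soft Dice formulation and the use of mean-squared distances in place of the L2 norms in L_rr are simplifying assumptions.

import torch

def soft_dice_loss(pred, target, eps=1e-6):
    # pred, target: (B, 1, D, H, W) predicted probabilities and binary masks.
    inter = (pred * target).sum(dim=(1, 2, 3, 4))
    denom = pred.sum(dim=(1, 2, 3, 4)) + target.sum(dim=(1, 2, 3, 4))
    return (1.0 - (2.0 * inter + eps) / (denom + eps)).mean()

def coactseg_loss(p_al1, p_al2, p_nl, y_al, y_nl, is_single, lam1=1.0, lam2=1.0):
    # p_al1, p_al2: all-lesion predictions at the two time points; p_nl: new-lesion
    # prediction. is_single selects the single-time-point branch (all-lesion label
    # y_al) or the two-time-point branch (new-lesion label y_nl).
    zero = torch.zeros((), device=p_al1.device)
    if is_single:
        l_al = soft_dice_loss(p_al1, y_al) + soft_dice_loss(p_al2, y_al)
        l_nl = zero
        # relation term: the two all-lesion outputs should coincide.
        l_rr = torch.mean((p_al1 - p_al2) ** 2)
    else:
        l_al = zero
        l_nl = soft_dice_loss(p_nl, y_nl)
        # relation term: inside the new-lesion mask, the baseline all-lesion map
        # should be close to 0 and the follow-up all-lesion map close to 1.
        l_rr = torch.mean((p_al1 * y_nl) ** 2) + torch.mean((p_al2 * y_nl - y_nl) ** 2)
    return l_al + lam1 * l_nl + lam2 * l_rr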
§ RESULTS
§.§.§ Implementation Details.
For training, we normalize all inputs as zero mean and unit variance. Then, among common augmentation operations, we use the random flip or rotation to perturb inputs. Since MS lesions are always small, we apply a weighted cropping strategy to extract 3D patches of size 80×80×80 to relieve the class imbalance problem <cit.>. Specifically, if the input sample contains the foreground, we randomly select one of the foreground voxels as the patch center and shift the patch via a maximum margin of [-10, 10] voxels. Otherwise, we randomly crop 3D patches. The batch size is set as eight (i.e., four new-lesion two-time-point samples and four all-lesion single-time-point samples). We apply Adam optimizer with a learning rate of 1e-2. The overall training iterations are 20k. In the first 10k iterations, λ_1 and λ_2 are set to 1 and 0, respectively, in order to train the model for segmenting MS lesions at the early training stage. After that, we set λ_2 as 1 to apply the relation regularization. During testing, we extract the overlapped patches by a stride of 20×20×20 and then re-compose them into the entire results.
Note that we follow <cit.> to mask the non-brain regions and all experiments are only conducted in the brain regions with the same environment (Hardware: Single NVIDIA Tesla V100 GPU; Software: PyTorch 1.8.0, Python 3.8.10; Random Seed: 1337). The computational complexity of our model is 42.34 GMACs, and the number of parameters is 9.48 M.
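The lesion-aware patch extraction described above can be sketched as follows. This is a simplified NumPy version using the 80×80×80 patch size and ±10 voxel shift from the text; boundary handling and other details of the authors' sampling code may differ.

import numpy as np

def weighted_random_crop(volume, label, patch=(80, 80, 80), max_shift=10, rng=None):
    # volume, label: 3D arrays of equal shape; label > 0 marks lesion voxels.
    if rng is None:
        rng = np.random.default_rng()
    shape = np.array(volume.shape)
    half = np.array(patch) // 2
    fg = np.argwhere(label > 0)
    if len(fg) > 0:
        # center the patch on a random foreground voxel, then jitter it.
        center = fg[rng.integers(len(fg))] + rng.integers(-max_shift, max_shift + 1, size=3)
    else:
        center = rng.integers(half, shape - half + 1, size=3)
    # clip so the patch stays inside the volume.
    center = np.clip(center, half, shape - half)
    lo = center - half
    sl = tuple(slice(int(a), int(a + p)) for a, p in zip(lo, patch))
    return volume[sl], label[sl]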
§.§.§ Performance for MS Lesion Segmentation.
Two MS tasks (i.e., new-lesion segmentation on MICCAI-21 and all-lesion segmentation on our MS-23v1 dataset) are used to evaluate the proposed CoactSeg. Besides common segmentation metrics <cit.> including Dice, Jaccard, 95% Hausdorff Distance (95HD), and Average Surface Distance (ASD), we further follow <cit.> to use the instance-level F1 score (F1) to denote the lesion-wise segmentation performance. Here, tiny lesions (i.e., fewer than 11 voxels) are not included in the F1 calculation as <cit.>.
Fig. <ref> illustrates that our proposed CoactSeg accurately segments the tiny new lesions on MICCAI-21. Compared to the recent work <cit.>, our model can even predict new lesions with low contrast (indicated by the enlarged yellow rectangles in Fig. <ref>). Table <ref> gives the quantitative results on MICCAI-21. We can see that: 1) Our model achieves good segmentation performance for new MS lesion segmentation and outperforms the second-best method <cit.> by 7.01% in Dice; 2) Compared with human experts, our proposed model also outperforms two of them (i.e., #3 and #4) in terms of the segmentation and the shape-related metrics; 3) For the lesion-wise F1 score, our method
significantly reduces the performance gap between deep models and human experts, achieving a comparable F1 with expert #3 (i.e., 61.96% vs. 62.88%).
Fig. <ref> shows the all-lesion segmentation results of our CoactSeg model on our in-house MS-23v1 dataset. It can be seen that CoactSeg is able to segment most MS lesions, even for very tiny ones (highlighted by red arrows). Moreover, we can see that the segmentation results of the first two prediction heads are relatively consistent (i.e., the 2nd and 3rd columns of Fig. <ref>), demonstrating the effectiveness of our proposed relation regularization.
§.§.§ Ablation Study.
Table <ref> further shows the ablation study for both new and all MS lesion segmentation tasks. It reveals that: 1) Introducing the heterogeneous data significantly improves the performance of new-lesion segmentation on MICCAI-21 with an average Dice gain of 2.64%; 2) Exploiting the relation regularization for mixed training can further improve the performance on the two datasets; 3) The simple stage-by-stage training strategy (See the Implementation Details <ref>) can better balance two tasks and achieve the overall best segmentation performance for both tasks.
§ CONCLUSION
In this paper, we have presented a unified model CoactSeg for new MS lesion segmentation, which can predict new MS lesions according to the two-time-point inputs and their differences while at the same time segmenting all MS lesions. Our model effectively exploits heterogeneous data for training via a multi-head architecture and a relation regularization. Experimental results demonstrated that introducing all-lesion single-time-point data can significantly improve the new-lesion segmentation performance. Moreover, the relation constraint also facilitates the model to capture the longitudinal MS changes, leading to a further performance gain. Our in-house MS-23v1 dataset will be made public to help the MS lesion research.
Future works will explore more longitudinal relations to study the fine-grained MS changes as well as consider more powerful constraints to address the domain gap <cit.> and fairness <cit.> problems. Moreover, we plan to collect and annotate more MS lesion data to improve the possibility of training large-scale deep models for clinical applications <cit.>.
§.§.§ Acknowledgement.
This work was supported in part by the Monash FIT Start-up Grant, in part by the Novartis (ID: 76765455), and in part by the Monash Institute of Medical Engineering (MIME) Project: 2022-13. We here appreciate the public repositories of SNAC <cit.> and Neuropoly <cit.>, and also thanks for the efforts to collect and share the MS dataset <cit.> and the MS-23v1 dataset from Alfred Health, Australia.
splncs04
|
http://arxiv.org/abs/2307.03965v1 | 20230708123352 | Seismic Signatures of the $^{12}$C($α$, $γ$)$^{16}$O Reaction Rate in White Dwarf Models with Overshooting | [
"Morgan T. Chidester",
"F. X. Timmes",
"Ebraheem Farag"
] | astro-ph.SR | [
"astro-ph.SR"
] |
Morgan T. Chidester (ORCID: 0000-0002-5107-8639)
School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287, USA
F.X. Timmes (ORCID: 0000-0002-0474-159X)
School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287, USA
Ebraheem Farag (ORCID: 0000-0002-5794-4286)
School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287, USA
Morgan T. Chidester
[email protected]
We consider the combined effects that overshooting and the ^12C(α, γ)^16O reaction rate have on variable white dwarf stellar models. We find that carbon-oxygen white dwarf models continue to yield pulsation signatures of the current experimental reaction rate probability distribution function when overshooting is included in the evolution. These signatures hold because the resonating mantle region, encompassing ≃ 0.2 M_⊙ in a typical ≃ 0.6 M_⊙ white dwarf model, still undergoes radiative helium burning during the evolution to a white dwarf. Our specific models show two potential low-order adiabatic g-modes, g_2 and g_6, that signal the reaction rate probability distribution function. The g_2 and g_6 signatures induce average relative period shifts of Δ P/P = 0.44% and Δ P/P = 1.33%, respectively. We find that g_6 is a trapped mode, and the g_2 period signature is inversely proportional to the reaction rate. The g_6 period signature generally separates the slower and faster reaction rates, and has a maximum relative period shift of Δ P/P = 3.45%. We conclude that low-order g-mode periods from carbon-oxygen white dwarfs may still serve as viable probes for the reaction rate probability distribution function when overshooting is included in the evolution.
§ INTRODUCTION
Helium burning is primarily the fusion of helium into carbon by the triple-alpha (3α) process.
All stars born with more than ≃ 0.5 go through this stage of energy production as they evolve beyond the main-sequence <cit.>.
Helium burning also plays a key role in transients such as
Type I X-ray bursts <cit.>,
Type Ia supernovae <cit.>, and
He-rich subdwarf O stars <cit.>.
Helium burning also impacts several classes of distribution functions,
such as the black hole mass distribution function <cit.>
including any mass gaps based on the pair-instability mechanism in the evolution of
massive stars <cit.>.
He burning is triggered by the 3α process releasing 7.5 MeV in fusion energy and producing ^12C <cit.>.
This is a unique process, setting stringent conditions for helium ignition.
The 3α process is followed by the α capture reaction ^12C(α, γ)^16O,
converting the ^12C into ^16O <cit.>.
These two isotopes are the principal products of He burning.
In addition, nearly all of a star's initial CNO abundances in the stellar interior are converted to ^22Ne at the onset of He burning <cit.>.
This marks the first time in a star's life where the core becomes neutron rich. We follow the convention that ^22Ne is the “metallicity” of a carbon-oxygen (CO) white dwarf (WD).
The interiors of CO WDs are, in principle, the best probe of the ashes of He burning.
A goal of WD seismology is to characterize the chemical profiles of principal products of He burning
<cit.>
and the chemical profile of the trace ^22Ne metallicity <cit.>.
Furthermore, regions within a CO WD model that burn helium radiatively during its prior evolution can offer potential constraints on the He burning nuclear reaction rates.
For example, <cit.> found that certain trapped adiabatic g-modes in WD models
may provide a pulsation signature that constrains the experimental reaction rate probability distribution function.
These signature g-modes were shown to resonate
with the region of the CO WD model that underwent radiative He burning during its previous evolution. The innermost boundary of this resonant cavity
corresponds to the molecular weight gradient at O→C chemical transition, and the outermost boundary to the molecular weight C→He chemical transition.
The resonating region encompasses ≃ 0.2 of a typical ≃ 0.6 WD model.
C22 cautioned that the chemical structure and resulting pulsation spectrum
is sensitive to
the width of the O→C transition <cit.>,
the experimental 3α reaction rate probability distribution functions <cit.>,
convective boundary mixing processes during core He depletion <cit.>, and
the number of thermal pulses during the Asymptotic Giant Branch (AGB) phase of evolution <cit.>.
Modeling convective boundary mixing processes at the convective-radiative interface during core He burning in low- and intermediate-mass stellar models is currently uncertain
<cit.>.
Convective overshoot occurs because the convective boundary is not the location where convective velocities are zero,
but the location where the buoyant acceleration of the fluid is zero.
An order–of-magnitude expression Δ x = u Δ t provides an estimate for how far convective motions overshoot <cit.>.
Here Δ x is the overshoot distance, u is the convective velocity, and
Δ t ≃ 1/N, where N is the buoyancy (Brunt-Väisälä) frequency
in the stable region. There is disagreement on how to calculate Δ x, but this estimate
broadly shows Δ x ≪ H_P in stellar environments, where H_P is the pressure scale height.
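For purely illustrative values (not taken from our models), a convective velocity of u ∼ 10^5 cm s^-1 and a buoyancy frequency of N ∼ 10^-2 rad s^-1 give Δ x ∼ 10^7 cm, of order one percent of a core pressure scale height of order 10^9 cm.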
The exponential overshoot parameterization <cit.> is frequently implemented in 1D models to describe this convective boundary mixing process, treating Δ x as a free parameter.
The values of Δ x
needed to match the gravity modes found in Slowly Pulsating B-type stars <cit.> suggest Δ x / H_P ≃ 0.1, which is larger than 3D hydrodynamical simulations of low Mach number flows at stable interfaces indicate <cit.>.
The injection of fresh He into the convective core enhances the rate of energy production by the ^12C(α,γ)^16O reaction, increases the central ^16O mass fraction <cit.>, and modifies the lifetime through this phase of evolution.
The resulting increase in the radiative gradient can also lead to rapid growth in the convective He core boundary (a “breathing pulse”).
A consensus on breathing pulses being physical or numerical has not yet been reached <cit.>.
C22 found a pulsation signature of the reaction rate probability distribution function using evolutionary models that purposely excluded overshooting.
This article is novel in analyzing whether or not pulsation signals of the reaction rate probability distribution function
still exist when overshooting at the inner convective-radiative interface during core He burning (CHeB) is included in the models' evolution history. Here, the inner convective-radiative interface is the transition from the convective core to the exterior radiative layer.
Section <ref> describes our models,
<ref> analyzes our models,
<ref> discusses our results,
and we summarize our findings in <ref>.
Appendix A lists the microphysics used, and
Appendix B discusses variations with the number of isotopes in the reaction network and with the temporal resolution of our models.
§ STELLAR EVOLUTIONARY MODELS
We define the term “model” to mean an evolutionary sequence that begins at the pre-main sequence, progresses through CHeB, and terminates as a cold WD. We define the term “snapshot” to mean a specific instance in time or phase of evolution within a model, and the term “set” to mean a suite of models or snapshots that have identical input physics except for the value of the reaction rate.
We use MESA version r15140 <cit.> to build 2.1 M_⊙, Z = 0.0151 metallicity, Y = 0.266 He mass fraction, nonrotating models at the pre-main sequence.
We adopt the AGSS09 <cit.> abundances and use a 23 isotope nuclear reaction network with ^22Ne being the heaviest isotope[A comparison to a 30 isotope network is given in Appendix B.].
Our models employ MESA's Henyey mixing-length theory (MLT) of convection option, with an MLT parameter of α = 1.5. This is consistent with the value used in C22.
We use the Ledoux criterion, and the predictive mixing scheme.
Additional details of the microphysics are listed in Appendix A.
As in C22, we span the current experimental ^12C(α, γ)^16O reaction rate probability distribution function <cit.> from σ=-3.0 to σ=+3.0 in 0.5σ steps, giving 13 σ_i reaction rates; each model is prescribed one such σ_i reaction rate value for its evolution.
We calculate one set of models without overshooting (NOV), and a second set with overshooting (OV) at the inner radiative-convective interface during the CHeB phase.
Hence, each evolutionary model differs only in its σ_i reaction rate, and NOV or OV mixing prescription. This yields 26 individual stellar evolutionary models; 13 for the NOV set and 13 for the OV set. For i=(-3.0, -2.5,...,+2.5, +3.0), we use σ_i and σ=i interchangeably to reference a given σ from the reaction rate probability distribution function.
After CHeB, the models evolve until log(L/L_⊙)=3.0, prior to the first thermal pulse on the AGB. At this snapshot, we interrupt the evolution of each model, so all models have a C→He transition at nearly the same mass location. We use this snapshot to construct H-dominated atmosphere (DA) WDs by removing the H envelopes until log(M_H/M_*)<-3.5.
The resulting composition profile structures are used to build 0.56 M_⊙ ab-initio WD models with wd_builder, as done in C22. These WD models evolve until T_eff = 10,000 K. We discuss the reasoning for constructing the WDs from the post-CHeB log(L/L_⊙)=3.0 snapshot in the following section.
We utilized version 6.0.1 of the GYRE code <cit.> to compute the adiabatic pulsations of our WD models throughout their respective cooling tracks (from ∼ 50,000 K to 10,000 K). We tracked the pulsations over the entire WD cooling track to observe the evolution of the adiabatic modes. Further, this was the most convenient way to automate the pulsation calculations for multiple models (i.e., we did not have to post-process the pulsation calculations over a specified range for each of the 26 models). We emphasize that the computed pulsations are adiabatic, and that the observed instability strip for DAV WDs spans only from ∼ 13,000 K to ∼ 10,000 K. The inlist parameters were set to search for modes of harmonic degrees ℓ=1,2 and radial orders n≤25; our models were assumed to be non-rotating, hence only m=0 azimuthal orders were present. For the adiabatic mode analysis, we employed the fourth-order Gauss-Legendre collocation difference equation scheme <cit.>.
Details of the models and oscillation parameters are in the files to reproduce our results at doi:10.5281/zenodo.8126450 (<https://doi.org/10.5281/zenodo.8126450>).
§.§ Core Overshooting prescription during the CHeB
During the CHeB phase, we use the following core overshooting parameters in the inlist for the OV set:
= 1d-3
= `exponential'
= `any'
= `core'
= 0.016
= 0.008
= 0.01
= 0.4
Details of the specific parameters are described in the documentation[<https://docs.mesastar.org/en/latest/>].
We choose the conventional <cit.> value of f_0.
This parameter sets the fractional distance of H_p to overshoot at the ∇_ad=∇_rad interface, for the order of magnitude estimate given in the introduction, Δ x = f_0· H_p.
The trapped mode seismic signatures found in C22 were resonating most with the region that underwent radiative He burning, defined as R2. Their inner boundary of R2 is near the molecular weight gradient at the
O→C transition (the “O drop") and their outer boundary is near the C→He transition. Mode trapping is sensitive to the location of both of these boundaries because they define the width of the resonant cavity.
One approach to analyzing the sensitivity
of the R2 trapped mode signatures is to fix one boundary and vary the other boundary. We fix the R2 outer boundary by excluding variations imposed from the thermal pulse history, hence the interruption at the post-CHeB log(L/L_⊙)=3.0 snapshot for all models. The phenomena that occur during the AGB phase are another source of model uncertainty. <cit.> found that early post-AGB pulsations can cause rapid growth of an instability that drives a super-wind which can shed much of the outer layers in a few years. Further, their 2.0 M_⊙, Z=0.02 model shows a dynamic evolutionary track, especially during the AGB, that is similar to the models in this article. <cit.> summarizes that while the preliminary results show promise on future AGB and post-AGB phenomena, there are currently more questions than answers. We therefore leave the thermal pulse history and the particular envelope ejection phenomena on the AGB to future studies, and freeze the outermost R2 boundary before the first thermal pulse occurs. In this vein, we isolate the sensitivity of the R2 region to its inner boundary, and specifically address how core overshooting influences the pulsation signatures for the reaction rate probability distribution function.
We end this section by stating we are not advocating for a specific evolutionary model or overshooting scheme.
Rather, we are exploring one approach to quantifying the coupled uncertainty between the reaction rate probability distribution function and a common overshooting model.
§ RESULTS
§.§ Evolution of Composition Profiles
Figure <ref> shows the mass fraction profiles for both sets at three evolutionary snapshots. The top row shows the mass fraction profiles for the NOV set and the bottom row shows the mass fraction profiles for the OV set. The leftmost column
shows the mass fraction profiles at the post-CHeB log(L/L_⊙)>3.0 snapshot. At this point, our models have not lost much mass and are all ∼2.1 M_⊙. The middle column shows the mass fraction profiles after removing the H envelopes until log(M_H/M_*)<-3.5. This snapshot shows the initial hot WD profiles, after completing one model step in wd_builder. The profiles shift slightly in mass location, but the overall composition structure only differs from the left panel in the thickness of the H envelope. The right column is the final snapshot of the mass fraction profiles, when the models reach T_eff=10,000 K. Diffusion was included on the WD cooling track and leads to the smoothness of the profiles in this column.
Figure <ref> accentuates the differences between the NOV (top) and OV (bottom) mass fraction profiles for the final WD structures (right column of Figure <ref>). Here, we show the abundance in mass fraction with respect to fractional radius r/R. We partition the WDs' composition profiles into four regions: R1, R2, R3, and R4. This is similar to that done in C22. The regions are defined to estimate trapping (resonant) zones. Boundaries for mode trapping are typically near composition transitions because they generally have large mean molecular weight gradients. This may lead to partial reflections for a resonant mode(s), “trapping" it within the local cavity <cit.>. The Ledoux B profile (henceforth B) captures composition gradients and can estimate trapping regions. We use B as our primary guide to define the region boundaries for a given model. The R1-R2 boundary is set at the first local maximum in B that occurs after reaching the peak ^16O abundance in a given model's chemical profile. The R2-R3 boundary is set at the second local maximum in B. The R3-R4 boundary is set at the location where X(^1H)>X(^4He).
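As a rough illustration of these boundary definitions, the short Python sketch below locates the three boundaries from a model's radius, Ledoux B, and mass fraction profiles. The array and function names are our own, and the profiles are assumed to be ordered from the center outward; this is not the code used for the paper.

import numpy as np

def region_boundaries(r, ledoux_B, x_o16, x_h1, x_he4):
    """Estimate the R1-R2, R2-R3, and R3-R4 boundary radii as defined above.
    All arrays are assumed to share one grid running from center to surface."""
    B = np.asarray(ledoux_B)
    i_peak_o = int(np.argmax(x_o16))                # peak 16O abundance
    # local maxima of B exterior to the 16O peak
    is_max = (B[1:-1] > B[:-2]) & (B[1:-1] > B[2:])
    maxima = np.where(is_max)[0] + 1
    maxima = maxima[maxima > i_peak_o]
    r1_r2 = r[maxima[0]]                            # first local maximum in B
    r2_r3 = r[maxima[1]]                            # second local maximum in B
    r3_r4 = r[int(np.argmax(x_h1 > x_he4))]         # first point with X(1H) > X(4He)
    return r1_r2, r2_r3, r3_r4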
In both NOV and OV sets, σ_i impacts the magnitude of the ^16O and ^12C profiles in R1. Core overshooting changes the structure of these profiles, especially at r/R ∼ 0.37 where the flatness of the profiles becomes disrupted. This is due to additional He fuel ingested during CHeB, from overshooting and/or convection. The fuel ingestion from overshooting and convection is a coupled effect and specific to each σ_i model. After r/R ∼ 0.37, there is some overlap in the profiles that perturbs the proportional trend with σ_i.
For both sets, the first group of vertical blue lines marks the R1-R2 boundary, with each line representing a given σ_i. The NOV set shows a steep composition gradient at the R1-R2 boundary, and the R1-R2 location is nearly the same for all σ_i. There is greater variance in the R1-R2 location for the OV set. Further, core overshooting has softened the ^16O and ^12C gradients, and the disruption of the profiles' regularity with σ_i continues into the start of the R2 region. At r/R∼0.6, the proportionality of σ_i to the ^16O and ^12C profiles is restored.
By design from stopping at the first thermal pulse, the R3 and R4 regions are almost identical between the NOV and OV sets. These regions are least affected from mixing processes in the core (e.g. overshooting).
In Figures <ref> and <ref>, the OV chemical profiles show a non-constant structure from overshooting during CHeB in the O dominated central core (below ≃0.4 M_⊙). While element diffusion is included during the white dwarf cooling phase, these chemical profiles may be further flattened by mixing processes not considered in this study such as time-dependent convection <cit.>, rotationally induced mixing, semiconvection, thermohaline mixing, or first-order phase separation of the CO mixture <cit.>.
§.§ Evolutionary differences after the main-sequence
How do the NOV and OV differences in the R1 and R2 regions of Figure <ref> relate to their respective CHeB evolution histories? Figure <ref> shows the Kippenhahn diagrams for the σ = 0.0 models for NOV (left) and OV (right). This figure shows the CHeB phase until the log(L/L_⊙)>3.0 termination point, spanning ≃ 0.93–1.10 Gyr. During this period the total mass of our models is ≃ 2.1 M_⊙, but we show only the innermost ≃ 0.65 M_⊙ to capture the evolution history that ultimately defines the CO WDs.
There are immediate differences between the NOV and OV CHeB evolution histories for the σ=0.0 models. These differences are similar for any given σ_i models, and a link to an interactive figure is provided in the online journal to see each rate's OV vs. NOV comparison in greater detail.
For the NOV set, we see gradual growth of the convective core throughout the CHeB phase; the noted central mass fraction isotopes smoothly deplete/grow to reach their final mass fractions; the convective cores have no apparent splitting during the CHeB phase. Further, there is a pure radiative zone throughout the CHeB history. In comparison, the OV set shows convective cores that ebb and flow in their extent, in a saw-tooth like manner; overshooting extends past the inner convective core in a fairly consistent mass length; the OV central mass fraction isotopes ebb and flow symmetrically with the mixing phenomena at any given time.
We also see splittings of the convective core in the OV set. These splittings were not observed in any of the NOV models during the CHeB phase. We presume they are a result of overshoot inclusion. This introduces “pollution" to the purity of the radiative burning zone, which becomes the R2 region of the WD. The pollution is seen by observing that some of the split-convection zone surpasses the log(L/L_⊙)>3.0 R2 inner edge boundary. This boundary becomes the inner edge of R2 in the cool WDs. The amount of convective pollution within the OV set is minor for σ_0.0, but varies with σ_i.
Figure <ref> qualifies R2 as “Mostly Radiative" for the NOV set due to localized, short-lived, subtle convective occurrences between ≃ 0.30–0.35 M_⊙ near core He depletion. Composition profiles are less sensitive to mixing after CHeB is complete. Any convective pollution from these brief convective episodes in the NOV set is insignificant compared to the convective pollution introduced in the OV set.
For both sets, nuclear burning primarily takes place within the convective core. Both sets also show similar burning regions in the mantle outside the core, in the radiative zone. Near the end of core He depletion, nuclear burning in the core extends past the convective and overshooting core regions in the OV set, and burns into the radiative zone. This is not seen in the NOV set.
§.§ WD Adiabatic Pulsation Analysis
How do these evolutionary and WD structural differences impact the WD reaction rate pulsation signatures? We first stress the importance of the NOV models' R2 pure radiative zone during the CHeB. The trapped mode σ_i signature found in C22 resonates the most with this region.
We want to determine if this signature, or any other σ_i pulsation signature, exists when overshooting is considered at the inner R2 boundary during CHeB. First we compare the NOV WD pulsation signatures in this work to those in C22.
§.§ NOV set vs. C22
In this section we briefly describe the main differences between the NOV and C22 models. The models in C22 used a 30 isotope chemical network compared to the 23 isotope network used here. See Appendix B for a comparison. Also, the temporal resolution was greater in C22, especially through CHeB. The most important difference in the NOV models is that we terminated the evolution prior to the first thermal pulse; the models in C22 continued the evolution through the thermal pulse phase. The overall composition structure of the R1 and R2 regions in our NOV models is quite similar to that in C22.
The NOV set of models in this work found two WD g-mode signals for σ_i rather than one. This is shown in the top two panels of Figure <ref>. Both panels show snapshots of the percent period differences as a function of σ_i, at T_eff=11,500 K (bright green) and T_eff=10,000 K (blue), respectively. The y-axis label defines the period differences as (P_σ_0-P_σ_i)/P_σ_0. That is, they are normalized to the pulsation periods of the σ=0 NOV model. The first panel is the signal from g_2 and the second is the signal from g_6. In C22,
the g-mode signature was a trapped mode. Trapped modes are identified from local minima in the kinetic energy diagram <cit.>. The NOV kinetic energy diagrams for all σ_i at these snapshots are shown in the bottom left and right panels of Figure <ref>, following Equation 2 in C22
<cit.>. The figure caption explains the coloring for σ_i. At T_eff=11,500 K (bottom left panel), the first apparent trapped mode occurs at g_6 for all σ_i, with the exception of σ=0.5, which has its first local minimum of E_kin at g_5. By T_eff=10,000 K (bottom right panel), all σ_i have the first local minimum in E_kin at g_6, including σ=0.5. This is important as g_6 is one of our signature modes for σ_i. These findings are in overall agreement with C22.
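For concreteness, the two diagnostics used here, the percent period differences relative to the σ=0 model and the identification of trapped modes as local minima of the kinetic energy, can be computed with a short Python sketch like the following (all names are illustrative assumptions, not taken from the actual analysis):

import numpy as np

def percent_period_differences(periods_by_sigma, sigma_ref=0.0):
    """Percent period differences 100*(P_sigma0 - P_sigma_i)/P_sigma0 for one
    g-mode, given a dict mapping sigma_i to its period in seconds."""
    p0 = periods_by_sigma[sigma_ref]
    return {s: 100.0 * (p0 - p) / p0 for s, p in periods_by_sigma.items()}

def trapped_mode_orders(radial_orders, log_ekin):
    """Radial orders n at local minima of the kinetic-energy diagram, the
    usual identification of trapped modes."""
    n = np.asarray(radial_orders)
    e = np.asarray(log_ekin)
    is_min = (e[1:-1] < e[:-2]) & (e[1:-1] < e[2:])
    return n[1:-1][is_min]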
The trapped g_6 mode signature is not linear with σ_i, but overall shows σ_i<0 to have longer periods than σ=0.0, and σ_i>0 to have shorter periods than σ=0.0.
The R2 contribution to the g_6 period in our NOV models was ∼ 25%. The other regions each contributed between ∼ 20-30%, meaning that the trapped mode from our NOV set is more equitably trapped among the four regions. Thus, its diagnostic value from R2 is not as strong as in C22.
Nonetheless, it is not a negligible contribution and can still serve as a viable probe for σ_i.
Our other g-mode signal, g_2, does not appear to be trapped by definition (see other highlighted mode in bottom of Figure <ref>). However, the g_2 period differences are directly proportional to σ_i (first panel of Figure <ref>). This suggests that g_2 is likely distinguishing CO features in the inner regions better than other g-modes. The additional g_2 signal
was either recovered or contrived as a consequence of excluding the thermal pulse history in the evolution. This was the only procedural difference between our models and those in C22.
The direct impact of this procedural difference is expressed by the nearly uniform composition profiles after the C→He transition (see Figure <ref>).
C22 showed variations in these profiles that stemmed from variations in the thermal pulse histories. Eliminating such chemical variations near the R2-R3 interface can reduce the g-modes' sensitivity to the R3 and R4 regions, especially for low-order g-modes such as g_2. Figure 9 in
C22 shows g_2 distinguishes σ_i in their thinner atmosphere sequence of models. Thinner atmospheres may also lessen sensitivities to outer regions, allowing lower-order g-modes like g_2 to probe deeper into the CO interior. We therefore suspect g_2 is a viable probe for σ_i if there are uniform composition profiles at the R2-R3 boundary, and/or thinner WD atmosphere models.
We conclude that our NOV pulsation signature results are overall consistent with C22;
we find certain low-order adiabatic WD g-modes which probe the reaction rate probability distribution function. With our two signature modes established, we now discuss the impact that overshoot inclusion has on these pulsation signatures.
§.§ Detailed Analysis of Differences
We first show the pulsation periods as a function of surface temperature for all σ_i models in Figure <ref>. Black dots mark the NOV periods and grey dots mark the OV periods. g-modes with radial orders n=1-10 are annotated, all for ℓ = 1. Figure <ref> shows that there are differences in the periods between the NOV and OV sets, but there is no global systematic offset; the differences between the OV and NOV periods for any given g-mode are random. This is the case even when σ_i is constant. We find that g_6 shows the largest spread in the periods of the models. Further, the kinetic energy diagrams for all models show that g_6 was a trapped mode by T_eff=10,000 K for every model, regardless of the σ_i or NOV/OV prescription. Since g_6 is one of the signals for σ_i in the NOV models, we point out this feature in Figure <ref>. We will touch on the cause of the larger spread later, but now focus our attention on the detailed pulsation properties of the signature g_2 and g_6 modes.
Figure <ref> shows, from top to bottom, the mass fraction profiles, B, and the g_6 and g_2 mode weight functions ζ for the final WDs at T_eff=10,000 K. The left and right columns are the NOV and OV results, respectively. Here, we show the comparison for σ=0.0, but an interactive figure link is provided in the online article to compare these properties for any σ_i. For all σ_i NOV/OV comparisons, the dotted vertical lines mark the region boundary locations in each panel. This is useful for comparing where the boundary locations are across multiple profile properties. For instance, the R1-R2 boundary marks the O→C transition region, the first most prominent peak in B, and the first peak-like features in g_6 ζ and g_2 ζ in the NOV case. Comparing the OV column to the NOV column, we see the global impacts from overshooting. Overall, prominent features in the NOV set are lessened in magnitude in the OV set. The O→C transition is more gradual, lessening the composition gradient at the defined boundary. This markedly alters the shape of B. The first prominent peak after max(O) is much smaller in magnitude for all σ_i, and is no longer the only prominent peak near the boundary. There are now multiple, smaller peaks in B and the g_6 ζ near the R1-R2 boundary as opposed to one.
There are slight deviations between NOV and OV in these profiles for the R3 and R4 regions of the WD, but the R1 and R2 regions in these profiles were affected most.
The g_6 ζ and g_2 ζ panels in Figure <ref> note the weight percentages per region in the WD. This gives each region's contribution to the overall mode period (frequency). An interesting result for all σ_i is that both the g_2 and g_6 modes decrease the amount of weight in R1 when overshoot is included, and increase the amount of weight in R2. There is also a slight decrease in the weight of R3 for g_2 for all σ_i when overshoot is included. These results are important. The R2 region is the most reliable region in terms of extracting the σ_i rate signature. When overshoot is included, the R2 contribution to the overall pulsation modes in g_2 and g_6 is accentuated, implying that these modes distinguish σ_i more reliably than in the NOV set. A quantitative analysis of each region's weight percentage contribution per σ_i is given for both sets in Table <ref> and Table <ref> for g_2 and g_6 respectively. Overall, Table <ref> shows that R2 and R3 are the most heavily weighted regions for g_2's period. g_6 has more equitable weight dispersed across regions, but the combined weight of R1 and R2 accounts for ∼ 50 % of the g_6 period for any given model. As identified in Figure <ref> and Figure <ref>, R1 and R2 are the most impacted regions in this study. A g-mode with about half its weight from those regions may pick up the detailed differences more so than modes weighted more in outer regions. This may explain why Figure <ref> shows a larger spread in the g_6 periods, as this g-mode is likely picking up the R1 and R2 contributions to its period better than other g-modes.
g_2 Weight Function Percentages Per WD Region

           R1            R2            R3            R4
 σ_i    NOV    OV     NOV    OV     NOV    OV     NOV    OV
-3.0 0.91 0.75 40.6 41.3 57.0 56.4 1.47 1.47
-2.5 1.14 0.99 40.2 44.2 57.2 52.9 1.43 1.94
-2.0 1.05 0.52 40.2 41.1 57.2 56.9 1.54 1.53
-1.5 1.18 0.53 39.5 41.7 57.9 56.2 1.50 1.50
-1.0 1.16 0.27 40.4 41.5 56.9 56.8 1.48 1.46
-0.5 1.15 0.18 38.8 42.1 58.6 56.3 1.43 1.49
0.0 1.25 0.38 40.6 42.0 56.6 56.1 1.52 1.47
0.5 1.44 0.49 40.8 41.9 56.2 56.2 1.52 1.47
1.0 1.28 0.31 40.4 41.4 56.9 56.7 1.49 1.58
1.5 1.32 0.28 39.9 41.4 57.2 56.8 1.50 1.51
2.0 1.35 0.19 39.4 40.8 57.8 57.5 1.50 1.49
2.5 1.25 0.42 38.3 41.6 58.9 56.6 1.47 1.45
3.0 1.39 2.06 40.2 39.6 56.9 56.8 1.59 1.52
g_6 Weight Function Percentages Per WD Region

           R1            R2            R3            R4
 σ_i    NOV    OV     NOV    OV     NOV    OV     NOV    OV
-3.0 25.5 20.1 25.6 32.4 21.1 19.8 27.8 27.8
-2.5 33.1 19.1 29.5 33.5 13.1 20.2 24.2 24.2
-2.0 32.3 16.6 30.8 36.3 13.9 19.7 23.0 23.0
-1.5 33.5 17.3 29.6 39.1 12.6 17.3 24.4 24.4
-1.0 33.8 13.4 30.0 43.1 12.9 17.4 23.3 23.3
-0.5 33.5 11.7 29.8 47.5 12.8 14.9 23.9 23.9
0.0 33.2 15.4 28.9 42.8 12.0 15.5 25.9 25.9
0.5 26.6 16.4 22.5 41.0 13.8 14.0 37.1 37.1
1.0 31.2 14.1 27.1 43.8 12.4 16.1 29.3 29.3
1.5 32.2 13.7 27.4 46.7 12.2 14.7 28.3 28.3
2.0 25.5 11.7 23.0 48.1 14.1 14.3 37.3 37.3
2.5 30.9 14.2 28.0 42.5 12.5 13.8 28.6 28.6
3.0 30.1 32.0 25.5 26.2 12.4 13.8 32.0 32.0
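The per-region percentages tabulated above can be obtained by integrating a mode's weight function ζ(r) between the region boundaries. A minimal Python sketch is given below; the array names and the simple trapezoidal quadrature are our own illustrative choices.

import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal integration over a possibly irregular grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def region_weight_percentages(r, zeta, r1_r2, r2_r3, r3_r4):
    """Percent of a mode's weight function zeta(r) contributed by each of the
    four regions R1..R4 defined by the three boundary radii."""
    edges = [r[0], r1_r2, r2_r3, r3_r4, r[-1]]
    total = trapezoid(zeta, r)
    percents = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (r >= lo) & (r <= hi)
        percents.append(100.0 * trapezoid(zeta[mask], r[mask]) / total)
    return percents  # [R1, R2, R3, R4]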
When an integer multiple q of the local radial wavelength λ_r for a given g-mode nearly matches the width of a certain region(s) in a star, the g-mode resonates with that region(s). Figure <ref> shows q·λ_r (R_⊙) as a function of radius R (R_⊙) for the g_2 and g_6 modes. The NOV set does not show any particularly close matches for any region. However, the closest matches to the R2 width were the λ_r curves of g_2, q=1, and g_6, q=2. Further, the g_2, q=2 and g_6, q=3 modes were best at resonating with R3. Larger q values may show stronger resonance with R4. The resonance with R2 is enhanced in the OV set. The g_2, q=1 and g_6, q=2 λ_r curves match much more closely the R2 width in the OV set. This implies that overshoot has enhanced the g-mode resonance for our signature modes in the region that was constructed mainly from radiative burning (Figure <ref>). We also see stronger resonance within the R1 region with the g_2, q=1 λ_r curve.
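One simple way to quantify this visual comparison (an illustrative metric of our own, not one used in the paper) is the ratio of q·λ_r averaged over a region to that region's width, with values near unity indicating near-resonance:

import numpy as np

def resonance_ratio(r, lambda_r, q, r_lo, r_hi):
    """Ratio of q * <lambda_r> inside [r_lo, r_hi] to the region width;
    values close to 1 mean q radial wavelengths roughly fit the region."""
    mask = (r >= r_lo) & (r <= r_hi)
    return q * float(np.mean(lambda_r[mask])) / (r_hi - r_lo)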
Will the differences between the NOV and OV sets in Figure <ref> impact the WD σ_i pulsation signatures shown in Figure <ref>? Figure <ref> shows the resulting relative period percent differences, as a function of σ_i at T_eff=11,500 K (bright green) and T_eff=10,000 K (blue). The period differences are negative for σ_i with longer periods than the σ=0 model, and are positive for σ_i with shorter periods than the σ=0 model for the given NOV or OV set. The left panel of this figure shows the period differences for g_2, and the right panel shows the period differences for g_6. The NOV set is indicated by the dotted lines and the OV set by the solid lines.
Looking at g_2, the period differences between NOV and OV at T_eff=11,500 K are minimal; both sets show a trend of decreasing period with increasing σ_i. At T_eff=10,000 K, the OV set shows an overall decrease in the percent differences, and a slightly greater variation in the overall σ_i vs. g_2 period difference shape. However, at both temperatures, the same pattern of the g_2 period decreasing with increasing σ_i is sustained with overshoot inclusion.
Further, the magnitude of the percent differences, ranging from ≃ -1.5% to +1.0%, is within the detectable threshold <cit.>.
The OV set shows greater deviation from the NOV period percent differences in g_6 than in g_2. This is most likely because g_6 is more sensitive to changes in R1 than g_2. Nonetheless, despite the σ=-0.5 and σ=+1.0 outliers, the overall trend remains: σ_i<0 models generally have longer periods than σ_0 and σ_i>0 models generally have shorter periods than σ_0. Once again, the magnitudes of the relative period percent differences surpass the observable threshold.
An interesting note is that for both the g_2 and g_6 signals, the percent differences change more in the NOV set than in the OV set as the models cool from T_eff=11,500 K to T_eff=10,000 K. The OV set showed nearly the same period differences at both temperatures.
§ DISCUSSION
C22 found pulsation signature(s) for the experimental reaction rate probability distribution function. They describe four sensitivities that may impact this result: width of the O→C transition, mixing during CHeB, thermal pulse history on the AGB, and the 3α reaction rate.
This work investigated the impact that overshoot inclusion had on the reaction rate pulsation signature(s). In doing so, we address the width of the O→C transition and mixing during CHeB. Further, by ignoring the thermal pulse history in our models, we also address the sensitivity to the number of thermal pulses, albeit in the trivial case where the number of thermal pulses is zero. In the following paragraphs, we discuss how these three sensitivities impacted our results. We also caution how our results could be affected by further sensitivity investigations.
Including overshooting overall increased the width of the O→C transition for all σ_i cool WDs. This lessened the sharp peak in B at the O→C transition, and decreased the peak in g_6 ζ at the O→C transition. While the transition peak was lessened and dispersed into R2, widening the O→C transition shows an enhancement of both the weight contribution to the R2 region for g_2 and g_6, and the R2 resonance with λ_r for g_2 and g_6. The widening of the O→C transition was from the combined effects of overshoot inclusion and the σ_i prescription. We conclude that widening the O→C transition imposes differences in B, ζ, and the pulsation periods. Despite these changes, we still find the g_2 and g_6 relative period differences in the NOV and OV sets to distinguish the reaction rate probability distribution function. Namely, the pattern of decreasing period with increasing σ_i persisted in both NOV and OV sets. By itself, the inclusion of overshooting does not destroy the seismic signatures of the reaction rate in our WD models – which was the primary question of this study.
We caution that increasing (decreasing) the width of the O→C transition in CO WD models could potentially yield different results. Our CO WD models were informed from their evolution history, with the stated model parameters. Thus, an increase (decrease) of the width of the O→C transition may come from choosing different mixing processes, prescriptions and parameters, such as for convection and overshooting. A change in the width of the O→C transition may also come from mixing processes not considered in this study such as
time-dependent convection <cit.>, rotationally induced mixing, semiconvection, thermohaline mixing,
or first-order phase separations of the CO mixture <cit.>.
Ignoring the thermal pulse history gave an additional low-order adiabatic g-mode signature for σ_i, namely the g_2 signal. This signal was not found in C22, where the thermal pulse history was included. Future studies on the thermal pulse phase of evolution with different temporal and spatial resolutions are needed to determine the sustainability of the g_2 signal as a probe for σ_i.
Concurrently, future studies could also explore the interaction, if any, between the thermal pulses and overshooting during CHeB on the chemical profiles.
The CO cores of WDs are the result of the competition between 3α and ^12C(α,γ)^16O during CHeB. An experimental 3α reaction rate probability distribution function, similar to the existing one for
^12C(α,γ)^16O <cit.>, does not yet exist to our knowledge, although a probability distribution function could be constructed using the STARLIB reaction rate library <cit.>.
Future studies involving both reaction rate probability distribution functions could probe properties of DAV WD models in the 3α rate - ^12C(α,γ)^16O rate plane. For example, the 3α reaction rate is likely to slowly modulate the central ^16O mass fraction at any ^12C(α,γ)^16O reaction rate because 3α controls the production of ^12C. The ^12C(α,γ)^16O reaction rate will likely modulate the central ^16O mass fraction more strongly at any 3α reaction rate. We speculate that the radiative region R2 will exist in all such models. We also suspect that all such models, whether terminated at the first thermal pulse or evolved through the thermal pulse phase, will show a trapped mode, with substantial trapping from R2, that best probes the ^12C(α, γ)^16O burning reaction rate (i.e. g_6 in this work, and see Figure 9 in C22). We caution that the relative period shifts we find in this work from considering the ^12C(α,γ)^16O probability distribution function and overshooting may change when a 3α reaction rate probability distribution function is also considered.
<cit.> found that including overshooting impacted ensuing WD pulsations by ∼ 2-5 s.
Their results were independent of their ^12C(α,γ)^16O reaction rate uncertainty evaluation. We combined the effects of overshooting and the reaction rate sensitivities in our pulsation analysis, and likewise find period differences of similar magnitudes. Our reaction rate analysis spanned the current experimental probability distribution function, which covers different rate values than those explored in <cit.>. They concluded that the rate uncertainty was less relevant than overshooting. In this study, we find that the combined effects from overshooting and the reaction rate probability distribution function yield marked differences in the structure of the CO WDs, as well as pulsation differences. Despite these differences, we still find pulsation signatures for σ_i.
We conclude this section by discussing the physical meaning of our results. Overall, both the g_2 and g_6 signatures indicate that the periods decrease with increasing σ_i. Put another way, increasing the amount of ^16O in the WDs shortens the periods of these signature modes. This trend was also seen in <cit.>, namely, as the amount of ^22Ne was increased in the WDs, the periods, for all g-modes analyzed, were shorter. The reasoning behind that result came from analyzing the components of the frequency equation. One of the largest drivers of the period differences was an increase in pressure scale height with increasing ^22Ne abundance. If one likens pressure scale height to tension in a string, increasing the tension in a string will shorten its period. WDs are not strings, but the line of reasoning is analogous.
One might wonder why not all g-modes display this trend. Why is it only g_2 and g_6? In <cit.>, the presence or absence of ^22Ne extended throughout ∼99% of the WD's composition structure. Thus, a uniform increase (decrease) in ^22Ne impacts all regions of the WD equally, which is likely the reason for the global offsetting of periods for all g-modes. In comparison, increasing or decreasing the reaction rate imposes a coupled effect on both ^12C and ^16O, which is not uniform across all regions of a WD's structure. The R1 and R2 regions are most affected by the reaction rate, with some impact on the inner part of the R3 region. Our above analysis found that the R1 and R2 regions gave larger contributions to the periods of the g_2 and g_6 signature modes than to those of other g-modes. This is the most probable reason why only certain modes are capable of distinguishing the reaction rate, within the conditions of the present analysis.
§ SUMMARY
We conducted a search for signatures of the current
experimental ^12C(α,γ)^16O reaction rate probability distribution function in the pulsation periods of CO WD models with the inclusion of overshooting. We found two signature adiabatic g-modes whose period differences track the reaction rate probability distribution function σ_i regardless of whether or not overshoot is included. We find that the g_2 period difference signature is inversely proportional to σ_i. Without overshoot, the g_2 relative period differences span ± 0.9%. With overshoot, the g_2 relative period differences range from -1.33% to 0.47%. The average magnitudes of the relative period differences for g_2 were 0.46% and 0.44%, respectively. The g_6 period differences were larger in magnitude, spanning from -3.44% to 1.78% for NOV and -2.02% to 1.58% for OV. The average magnitudes of the g_6 period differences were 1.21% and 0.95%, respectively. The average magnitudes of the g_2 and g_6 period differences in the OV set were thus slightly smaller than in the NOV set.
We found that the R2 weight contribution to these g-modes was enhanced with overshoot inclusion. The R2 region remains the best identifying region for tracing the reaction rate probability distribution function. This is because even with overshoot inclusion, it is predominantly constructed by radiative burning during CHeB.
Regardless of whether or not overshooting is considered, we find:
* two signature g-modes, g_2 and g_6 probe σ_i
* g_2 is inversely proportional to σ_i and g_6 is a trapped mode
* the g_2 and g_6 periods are generally shorter for positive σ_i and longer for negative σ_i
* both signatures have period deviations within the detectable regime
These findings suggest that an astrophysical constraint on the reaction rate probability distribution function remains, in principle,
extractable from the period spectrum of observed variable WDs.
§ ACKNOWLEDGEMENTS
We thank James Deboer for sharing the ^12C(α,γ)^16O probability
distribution function, Josiah Schwab for sharing wd_builder,
and Pablo Marchant for sharing mkipp.
We acknowledge using ChatGPT <cit.> to polish the language of one paragraph <cit.>.
This research is supported by NASA under the Astrophysics Theory Program grant NNH21ZDA001N-ATP, and in part by the National Science Foundation under Grant No. NSF PHY-1748958.
This research made extensive use of the SAO/NASA Astrophysics Data System (ADS).
MESA <cit.>,
MESA SDK 20190830 <cit.>,
wd_builder <https://github.com/jschwab/wd_builder>,
<cit.>,
mkipp <https://github.com/orlox/mkipp>,
<cit.>,
<cit.>, and
<cit.>.
§ MICROPHYSICS IN MESA
The MESA EOS is a blend of the OPAL <cit.>, SCVH
<cit.>, FreeEOS <cit.>, HELM <cit.>,
PC <cit.>, and Skye <cit.> EOSes.
Radiative opacities are primarily from OPAL <cit.>, with low-temperature data from <cit.>
and the high-temperature, Compton-scattering dominated regime by
<cit.>. Electron conduction opacities are from
<cit.> and <cit.>.
Nuclear reaction rates are from JINA REACLIB <cit.>, NACRE <cit.> and
additional tabulated weak reaction rates <cit.>. Screening is included via the prescription of <cit.>.
Thermal neutrino loss rates are from <cit.>.
§ MODEL OPTIMIZATION AND RESOLUTION
§.§ Reduced Chemical Network
Our evolutionary models are computationally expensive. This paper is concerned with overshooting and the ^12C(α,γ)^16O reaction rate probability distribution function, which primarily dictate the evolutionary processes and consequences of the CHeB phase. The isotopes most impacted during CHeB are ^4He, ^12C, and ^16O. ^14N and ^22Ne are the next two most impacted isotopes during CHeB. We thus optimize the efficiency of our models by reducing the number of isotopes in the chemical network from 30 to 23. The eliminated isotopes are ^21Ne, ^21,22,23Na, ^23,24Mg, and ^56Fe. A comparison of the resulting inner mass fraction profiles for the 5 most abundant isotopes is shown in Figure <ref> for each chemical network. This figure shows the profiles at the completion of CHeB. Both network models used the same temporal and spatial resolution during CHeB. The run-time was reduced from a few days to a few hours on 12 cores. All resolution studies were conducted with σ=0.0 without overshoot (NOV).
Reducing the network impacted ^22Ne most, with an offset of ∼ 22% more ^22Ne in the 23 isotope network. We note that C22 used a 30 isotope network, and our overall signature results persist through variations in the heavier isotopes.
§.§ Temporal Resolution
Several timestep limiters in MESA help optimize convergence studies. In this paper, we want to limit the timestep to achieve the temporal resolution that yields a smooth evolution of the central ^4He, ^12C, and ^16O abundances during CHeB. We first utilize the delta_XC_cntr_limit limiter. This limits the amount the central ^12C abundance can change in a given timestep. To help optimize computational run-time, we begin limiting the change in central ^12C during CHeB once the central helium abundance X(^4He_c)<0.6. This is done by setting the delta_XC_cntr_limit value in the run_star_extras.f90 file.
This temporal resolution was used for the 30 and 23 isotope network models. We refer to it as resolution A. The remaining temporal resolution studies were performed using the 23 isotope chemical network.
The next iteration of increased temporal resolution further modified the limiter settings in the run_star_extras.f90 file.
This resolution is employed slightly earlier during CHeB, when X(^4He_c)<0.5. In addition to the limiters of resolution A, we added limits to the change in central temperature and density. This is resolution B.
Our third resolution iteration further tightened the limiter controls in the run_star_extras.f90 file.
This is resolution C. We have set the limiters at the start of CHeB, and have decreased the limiter values from those in resolution B.
A comparison of resolutions A, B, and C is shown in Figure <ref>. In each column, the solid light curves represent resolution A, the dotted curves resolution B, and the dark solid curves resolution C.
The left panel shows the evolution of the central abundances of ^4He, ^12C, and ^16O during CHeB, starting when X(^4He_c)≲0.6 until the completion of CHeB. The central abundances for resolutions A and B are nearly identical. Resolution C varies slightly, with the final central abundance reaching a slightly larger value than in resolutions A and B. Further, all three resolutions show a smooth evolution of these central abundances throughout CHeB.
The middle plot in Figure <ref> shows the mass fraction profiles at the completion of CHeB. We show the 5 most abundant isotope profiles for each resolution. The ^12C and ^16O profiles for A are noticeably different from those for B and C, especially after the O→C transition. This is more apparent in the right plot of Figure <ref>, which zooms in on the ^12C and ^16O profiles of the three resolutions. Resolution B follows A in the core, but then more closely aligns with C after the O→C transition. Since resolutions B and C agree well, with only a slight difference in the central ^12C and ^16O abundances, we set resolution C as the standard temporal resolution for our 13 models.
|
http://arxiv.org/abs/2307.05267v1 | 20230710152139 | Kibble-Zurek Mechanism for Nonequilibrium Generation of Magnetic Monopoles in Spin Ices | [
"Zhijie Fan",
"Adolfo del Campo",
"Gia-Wei Chern"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech",
"astro-ph.CO",
"cond-mat.str-el"
] |
Department of Physics, University of Virginia, Charlottesville, VA 22904, USA
Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China
Hefei National Laboratory for Physical Sciences at the Microscale, University of Science and Technology of China, Hefei 230026, China
Department of Physics and Materials Science, University of Luxembourg, L-1511 Luxembourg, Luxembourg
Donostia International Physics Center, E-20018 San Sebastián, Spain
Department of Physics, University of Virginia, Charlottesville, VA 22904, USA
The proliferation of topological defects is a common out-of-equilibrium phenomenon when a system is driven into a phase of broken symmetry. The Kibble-Zurek mechanism (KZM) provides a theoretical framework for the critical dynamics and generation of topological defects in such scenarios. One of the early applications of KZM is the estimation of heavy magnetic monopoles left behind by the cosmological phase transitions in the early universe. The scarcity of such relic monopoles, which contradicts the prediction of KZM, is one of the main motivations for cosmological inflationary theories. On the other hand, magnetic monopoles as emergent quasi-particles have been observed in spin ices, a peculiar class of frustrated magnets that remain disordered at temperatures well below the energy scale of exchange interaction. Here we study the annihilation dynamics of magnetic monopoles when spin ice is cooled to zero temperature in a finite time. Through extensive Glauber dynamics simulations, we find that the density of residual monopole follows a power law dependence on the annealing rate. A kinetic reaction theory that precisely captures the annihilation process from Monte Carlo simulations is developed. We further show that the KZM can be generalized to describe the critical dynamics of spin ice, where the exponent of the power-law behavior is determined by the dynamic critical exponent z and the cooling protocol.
Kibble-Zurek Mechanism for Nonequilibrium Generation of Magnetic Monopoles in Spin Ices

Zhijie Fan, Adolfo del Campo, and Gia-Wei Chern

August 12, 2023
The existence of a critical point has profound implications on the properties of a system, both in and out of equilibrium. In particular, crossing a continuous phase transition in a finite time leads to breaking adiabatic dynamics. As a result, topological defects proliferate in the driven system. In this context, the Kibble-Zurek mechanism (KZM) provides a reference theoretical framework for critical dynamics <cit.>. It unveils that the latter behavior is universal and characterized by scaling laws that govern the density of defects and the response time of the driven system. In particular, KZM has been employed to understand the formation of 't Hooft-Polyakov magnetic monopoles, a topological defect of non-abelian gauge theories, in the early universe <cit.>. The experimental absence of such fundamental magnetic monopoles led to the ideas of cosmological inflation <cit.>. On the other hand, condensed matter systems support various emergent topological defects and offer a fruitful arena for examining various aspects of KZM.
Universality away from equilibrium can be brought out by considering a system in which different phases of matter are accessible by varying an external control parameter λ (temperature, density, etc.) across a critical value λ_c. A continuous phase transition is characterized by a universal equilibrium scaling law of the correlation length ξ=ξ_0/|ϵ|^ν, where ϵ=(λ-λ_c)/λ_c and ν is the correlation-length critical exponent. Similarly, the equilibrium relaxation time diverges in the neighborhood of the critical point λ_c as τ=τ_0/|ϵ|^zν∼ξ^z, where z is the dynamic critical exponent. This divergence is known as critical slowing down and is responsible for breaking adiabaticity in any finite-time driven protocol λ(t). To appreciate this, it suffices to linearize λ(t) in the neighborhood of λ_c so that ϵ=t/τ_Q, assuming that the critical point is reached at t=0. The KZM predicts that the density of point-like defects in D spatial dimensions scales as n∼ξ̂^-D, where ξ̂ is the non-equilibrium correlation length ξ̂=ξ_0(τ_Q/τ_0)^{ν/(1+zν)}, which exhibits a power-law scaling with the quench time τ_Q that is fixed by the equilibrium critical exponents z and ν. An additional prediction of the KZM is that the characteristic response time, known as the freeze-out time t̂, also scales universally with the quench time τ_Q as t̂=(τ_0τ_Q^{zν})^{1/(1+zν)}. These predictions can alternatively be derived using finite-time scaling <cit.>.
The nonequilibrium critical behavior predicted by the KZM has been explored in depth in one-dimensional systems <cit.>. The spatial distribution of topological defects is then highly constrained, and exact analytical descriptions are often possible. Experimental evidence is convincing in the quantum domain <cit.> but remains limited in systems admitting a classical description <cit.>.
Results in higher spatial dimensions show a rich behavior. In theoretical and experimental studies, some settings are consistent with the scaling predictions dictated by the KZM <cit.>, while others display deviations <cit.>. The critical dynamics in systems with a complex vacuum manifold supporting different kinds of topological defects remains poorly understood, as coarsening and multiple channels for defect creation and annihilation can coexist <cit.>.
Spin-ice systems <cit.> are an unusual class of ferromagnet where the magnetic atoms reside on a pyrochlore lattice, a three-dimensional network of corner-sharing tetrahedra as shown in FIG. <ref>(a). For spin ice with interactions restricted to nearest neighbors, the magnet remains in a disordered state down to zero temperature. At first sight, the KZM is not expected to describe the annealing dynamics of such idealized spin ice, which shows no symmetry breaking. However, at temperatures below the energy scale of exchange interaction, spin ice exhibits novel fractionalized quasi-particles which carry a net magnetic charge, essentially behaving as magnetic monopoles <cit.>. Conservation of magnetic charges means that these quasi-particles have to be created and annihilated in pairs. Magnetic monopoles are thus topological defects in an otherwise disordered spin state, in contrast to topological defects due to broken symmetry as in standard KZ scenario. An intriguing question is whether these emergent magnetic monopoles in a quenched spin ice exhibit scaling behaviors and if the KZM can be generalized to describe their nonequilibrium dynamics.
The emergence of magnetic monopoles in spin ice is closely related to the ice rule, a local constraint for ground states. Dominant easy-axis anisotropy forces the magnetic moments to point in the local ⟨ 111 ⟩ directions, allowing us to express spins in terms of Ising variables: 𝐒_i = σ_i μ𝐞̂_i, where μ is the magnitude of the magnetic moment, 𝐞̂_i is the local crystal-field axis, and σ_i = ± 1 indicates the direction of the magnetic moment, which points either from the center of a tetrahedron to the corresponding corner or vice versa. Both the short-range ferromagnetic exchange J_F < 0 and the long-range dipolar interaction contribute to an effective nearest-neighbor antiferromagnetic interaction between the Ising spins ℋ = J ∑_⟨ ij ⟩σ_i σ_j, where J = 1/3 (|J_F| μ^2 + 5μ_0 μ^2 /4π a^3) is the effective antiferromagnetic interaction and a is the nearest-neighbor distance in pyrochlore lattice. We first focus on the annealing dynamics with interactions restricted to nearest neighbors and discuss effects of long-range dipolar interaction later.
It is convenient to express the spin-ice energy in terms of magnetic charges for understanding the ground-state properties and elementary excitations. To this end, we use the dumbbell approximation <cit.> to replace a magnetic moment 𝐒_i (a dipole) by two opposite magnetic charges ±μ/ ℓ at the two ends of a bar of length ℓ, which is set to be the distance between centers of two nearest-neighbor tetrahedra. The effective magnetic charge of a tetrahedron-α is then Q_α = ± (μ/ℓ) ∑_i ∈ασ_i, where the ± sign is used for tetrahedra of opposite orientations, and the sum is over the four spins of the tetrahedron. In terms of magnetic charges, the system energy becomes ℋ = v/2∑_α Q_α^2 up to an irrelevant constant, where the self-energy coefficient v = J ℓ^2/μ^2.
The total energy of a spin ice is thus minimized by any spin configurations with zero magnetic charges Q_α = 0 for all tetrahedra, which form a diamond lattice that is dual to the pyrochlore lattice. The charge neutral condition corresponds to a tetrahedron with two σ=+1 and two σ=-1 Ising spins, known as the 2-in-2-out ice rules <cit.>. While these constraints introduce strong short-range correlations between spins, no long-range order is induced even at zero temperature. The number of ground states satisfying the ice rules grows exponentially with the system size, giving rise to a zero-point entropy, which is well approximated by the Pauling estimate S_ Pauling = (1/2) log(3/2) and verified experimentally in canonical spin ice compounds <cit.>.
Elementary excitations above the hugely degenerate ground-state manifold are represented by tetrahedra that violate the ice rules <cit.>. These correspond to 3-in-1-out/1-in-3-out tetrahedra with a magnetic charge Q = ± q_m, or 4-in/4-out tetrahedra with charge Q = ± 2 q_m, where q_m = 2 μ/ℓ is the elementary unit of magnetic charges in spin ice. These defect tetrahedra, particle-like objects carrying net magnetic charges, are essentially magnetic monopoles. It is also worth noting that the monopoles in spin ice are topological defects as they have to be created and annihilated in pairs. For example, a single spin-flip, or an inverted dumbbell, results in two monopoles of charge Q = ± q_m on adjacent diamond-lattice sites. Crucially, the monopoles can be separated from one another without further violations of local neutrality by flipping a chain of adjacent dumbbells.
The vacuum of these emergent magnetic monopoles corresponds to the highly constrained ground-state manifold. It has been shown that an effective magnetostatic theory can describe this manifold. Indeed, monopole excitations are the source and sink of the emergent magnetic field 𝐁(𝐫). The ice rules, i.e., the absence of monopoles, translate to the divergence-free condition ∇·𝐁 = 0, which in turn gives rise to dipolar-like power-law spin correlations in the degenerate ground-state manifold <cit.>. The monopole density determines the correlation length ξ of this emergent critical state at T → 0: ξ∼ 1/n_m^1/3. As an activation energy Δ E_m = v/2 q_m^2 = 2J is required to create fundamental monopoles of charge ± q_m, the density of such topological defects n_m ∼ e^-2J/T is exponentially suppressed at low temperatures. This results in an equilibrium correlation length ξ∼ e^2J/3T, which diverges exponentially as T → 0, in contrast to the familiar power-law divergence when approaching a conventional critical point.
A similar exponential divergence of correlation length also occurs in the paradigmatic ferromagnetic 1D Ising model. Similar to spin ices, Ising spins remain disordered at any finite temperature. An unconventional critical point at T_c = 0 can be associated with the system, at which spins become fully polarized. The average distance between kink and anti-kink pairs, which are topological defects of an Ising chain, determines the correlation length. The fact that the number of kinks is suppressed at low-T similarly gives rise to a correlation length that grows exponentially as T → 0.
Moreover, the dynamical behavior of the 1D Ising model under the Glauber dynamics can be described by a solvable master equation <cit.>. Notably, the KZ scaling hypothesis has also been verified in the 1D Ising model when the system is slowly annealed to zero temperature <cit.>.
From the viewpoint of an unconventional critical point at T_c = 0, spin ices can be viewed as a different high-dimensional generalization of the 1D Ising chain, to be contrasted with the standard square or cubic Ising models. We note that a 2D analog of the pyrochlore spin ice is given by the antiferromagnetic Ising model on a checkerboard lattice, as shown in FIG. <ref>(b). An artificial version of such 2D spin ice has been realized in arrays of nanomagnets <cit.> and optical traps of soft-matter particles <cit.>. Despite the similarity, we note that while the Ising chain becomes long-range ordered at T=0, both spin ices remain disordered down to zero temperature when interactions are restricted to nearest neighbors. Here we show that KZM, with proper modification, can also be applied to the critical dynamics of spin ice and the annihilation of magnetic monopoles.
§.§ Annealing of spin ice with Glauber dynamics
To describe the nonequilibrium dynamics associated with a temperature quench, we perform Glauber dynamics <cit.> simulations of pyrochlore spin ice with time-dependent temperature T(t). To take into account the stochastic and local nature of the spin dynamics, at each fundamental step, a spin σ_i that is randomly chosen from the system is updated according to the transition probability w(σ_i → -σ_i) = 1/2 [1 - tanh(1/2βΔ E_i) ], where β = 1/T is inverse temperature and Δ E_i is the energy change due to the flipped spin. At low temperatures, a single-spin flip results in mostly either the creation/annihilation of monopole pairs, of which Δ E= ± 4J, or the movement of monopoles for which Δ E = 0. It is thus convenient to introduce a dimensionless parameter γ(t) = tanh[2β(t) J] which controls the transition rate. For example, ignoring the updates that involve double monopoles, the transition rate at low temperatures simplifies to w(σ_i → -σ_i; t) = 1/2 [1-γ(t) σ_i sign(h_i)], where h_i = ∑_j ∈ nn(i)σ_j is the sum of nearest-neighbor Ising spins, and sign(x) is the sign function.
In terms of this control parameter, we first consider the so-called linear cooling schedule: γ(t) = t / τ_Q, where τ_Q denotes the total annealing time <cit.>. With this cooling protocol, the system evolves from T = ∞ at t=0 to zero temperature when t = τ_Q.
The time is incremented by δ t = 1/N_s after each spin update attempt, where N_s = 16 L^3 is the total number of spins in the system. All simulations below were performed on a lattice of L = 10, with N_s =16,000 spins. After one Monte Carlo sweep of the entire system, the time increases by one unit of time Δ t = 1, and the charge statistics is measured. The cooling time varies in the range τ_Q = 10× 2^n with n=0, 1, 2, 3, …, 10. The final results are obtained by averaging the data from 10,000 randomly generated initial states.
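A minimal Python sketch of this annealing protocol is shown below. The pyrochlore neighbor and tetrahedron index arrays are assumed to be precomputed, and all names are illustrative rather than taken from the code actually used for the simulations.

import numpy as np

def glauber_sweep(sigma, neighbors, J, T, rng):
    """One Monte Carlo sweep of single-spin-flip Glauber updates with
    acceptance w = 1/2 [1 - tanh(beta*dE/2)] for H = J sum_<ij> s_i s_j."""
    beta = 1.0 / T
    n_s = sigma.size
    for i in rng.integers(0, n_s, size=n_s):
        h_i = sigma[neighbors[i]].sum()        # sum over the 6 nearest neighbors
        dE = -2.0 * J * sigma[i] * h_i         # energy change of flipping spin i
        if rng.random() < 0.5 * (1.0 - np.tanh(0.5 * beta * dE)):
            sigma[i] = -sigma[i]

def monopole_densities(sigma, tetrahedra):
    """Densities of single (|Q| = q_m) and double (|Q| = 2 q_m) monopoles;
    `tetrahedra` is an (N_t, 4) array of the spin indices of each tetrahedron."""
    q = sigma[tetrahedra].sum(axis=1)          # values in {0, +-2, +-4}
    return np.mean(np.abs(q) == 2), np.mean(np.abs(q) == 4)

def anneal(sigma, neighbors, tetrahedra, J, tau_Q, rng):
    """Linear cooling gamma(t) = t/tau_Q, i.e. T(t) = 2J / artanh(t/tau_Q)."""
    history = []
    for t in range(1, tau_Q):                  # stops just short of T = 0
        T = 2.0 * J / np.arctanh(t / tau_Q)
        glauber_sweep(sigma, neighbors, J, T, rng)
        history.append((t, *monopole_densities(sigma, tetrahedra)))
    return history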
The time dependence of the elementary-monopole density n_m(t) is shown in FIG. <ref> for algebraic cooling with α = 1 and 2, and varying cooling time τ_Q. As discussed above, these are 3-in/1-out or 1-in/3-out tetrahedra carrying a net charge Q=± q_m. Another type of defect tetrahedra with all spins pointing in or out can be viewed as a quasi-bound state of two fundamental monopoles of equal charges. The density n_2m(t) of such double monopoles as a function of time is shown in FIG. <ref>. As these quasi-bound states carry a doubled charge Q = ± 2 q_m, they are energetically more costly, giving rise to a density that is orders of magnitude smaller than that of monopoles. The critical dynamics of both types of quasi-particles exhibit a similar overall pattern: an initial slow decay that lasts a long time, followed by a very steep decline at the end of cooling.
To shed light on the annealing dynamics of spin ices, rate equations based on reaction kinetics theory are developed to describe the dynamical evolution of single and double monopoles. For example, the rate equation for magnetic monopoles of charges ± q_m at the late stage of the cooling is
d n_m/dt = 𝒜_0 + 𝒜_1 n_2m + 𝒜_2 n_2m^2 - ℬ n_m^2,
The first three 𝒜 terms denote the various mechanisms for producing ± q_m monopoles: pair-creation from vacuum, decay of a double monopole, and conversion of two double monopoles into fundamental monopoles. The last term accounts for the pair annihilation of ± q_m monopoles. It is worth noting that the fact that the leading decay term is quadratic in n_m (with no linear term) is a manifestation of their topological nature.
Through reaction kinetic theory, the coefficient ℬ is uniquely related to the three 𝒜 coefficients, which will be treated as fitting parameters. In practice, these parameters are determined from Glauber dynamics simulations with a small τ_Q =160. The rate equation for the higher-energy double monopoles n_2m can be similarly obtained; see Appendix B for details. Using random spins to set initial conditions, the rate equations are integrated numerically. The results are shown in FIG. <ref> as solid lines. Remarkably, the reaction kinetics based on exactly the same set of parameters gives an excellent overall agreement with the Glauber dynamics simulations for both linear and algebraic α = 2 cooling schedules.
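As an illustration of how the rate equation above can be integrated once the coefficients and the double-monopole density are specified, consider the following sketch; the coefficient functions, their time dependence, and all names are assumptions for illustration (the actual coefficients are fixed by the fitting procedure described in the text and Appendix B).

import numpy as np
from scipy.integrate import solve_ivp

def integrate_monopole_rate_eq(A0, A1, A2, B, n2m, t_span, n_m0):
    """Integrate dn_m/dt = A0(t) + A1(t) n2m(t) + A2(t) n2m(t)^2 - B(t) n_m^2,
    with user-supplied coefficient functions and double-monopole density."""
    def rhs(t, y):
        nm = y[0]
        return [A0(t) + A1(t) * n2m(t) + A2(t) * n2m(t) ** 2 - B(t) * nm ** 2]
    return solve_ivp(rhs, t_span, [n_m0], dense_output=True, rtol=1e-8)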
§.§ Kibble-Zurek mechanism for monopoles
Both Monte Carlo simulations and calculations using the rate equations yield a power-law dependence on τ_Q for the residual monopole density at the end of cooling n_m(τ_Q) ∼τ_Q^-μ, where the KZ exponent is μ≈ 0.33 and 0.5 for algebraic cooling with α = 1 and 2, respectively.
Here we show that these scaling behaviors can be explained by a generalized KZM similar to that adopted for the 1D Ising chain <cit.>.
As discussed above, although spin-ice systems exhibit a critical point at T_c =0, the correlation length diverges exponentially ξ∼ n_m^-1/D∼ e^Δ E_m/DT, instead of algebraically as in a conventional critical point. Here the spatial dimension D = 2 and 3 for the checkerboard and pyrochlore spin ice, respectively, and the activation energy Δ E_m = 2J for both cases. On the other hand, the relaxation time τ, which is closely related to the annihilation rate of monopoles, also diverges exponentially as T → 0 <cit.>. Consequently, one can still define a dynamical exponent that relates these two exponentially divergent quantities: τ∼ξ^z.
The relaxation time is shown to follow the Arrhenius law: τ∼ e^Δ E_m/T∼ e^2J/T for spin ice with nearest-neighbor interaction <cit.>. The exponential divergence of the relaxation time has also been explicitly verified from the decay of monopoles in instant-quench simulations; see Appendix A for details. This gives rise to a dynamical exponent z = D for the relaxation of spin ice, which is also explicitly confirmed in our quench simulations. Finally, we note in passing that, despite the similarity between the Ising chain and spin ices, the dynamical exponent for kinks is z = 2 in the 1D Ising model <cit.>.
Central to the KZM is the freeze-out time t̂, measured from the critical point, which signifies the breaking of adiabaticity. Before the freeze-out time during the cooling, the system can reach the quasi-equilibrium state of the instantaneous temperature T(t) due to a short relaxation time τ(T) at the corresponding temperature. Freezing of the system occurs when the exponentially increasing relaxation time is comparable to the time left before reaching the critical point at T_c = 0, i.e.
τ̂ = τ( T(τ_Q - t̂) ) = t̂.
For time t ≳τ_Q - t̂, breaking adiabaticity means that the pair-annihilation of topological defects is suppressed. The number of monopoles at the end of annealing can thus be well approximated by that at the freeze-out time n_m(τ_Q) ∼ n_m(τ_Q - t̂).
Here we demonstrate the determination of t̂ for the general algebraic cooling schedule
1-γ(t) = A (1 - t/τ_Q)^α,
in the limit t →τ_Q. Here A is a positive constant. The linear cooling schedule corresponds to α = 1. Substituting the resultant time-dependent temperature T(t) into Eq. (<ref>), we have t̂ = exp{tanh^-1[1-A(t̂/τ_Q)^α]}. Assuming slow cooling such that τ_Q ≫t̂, we expand the right-hand side of this equation to leading order in t̂/τ_Q and obtain the scaling relation
t̂∼τ_Q^α / (2 + α).
The residual density of monopoles can then be estimated from the correlation length at the freeze-out time, i.e., n_m(τ_Q) ∼ξ̂^-D∼τ̂^-D/z. Remarkably, the fact that the dynamical exponent is given by the dimension of spin ice z = D means that n_m(τ_Q) ∼τ̂^-1, independent of the dimension. Combining the KZ condition (<ref>) and the scaling of freeze-out time in Eq. (<ref>), we obtain a power-law dependence
n_m(τ_Q) ∼τ_Q^-α / (2 + α),
which is independent of spatial dimensions. For α = 1 and 2, the above formula gives a KZ exponent μ = 1/3 and 1/2, consistent with our numerical results shown in FIG. <ref>. Notably, the monopole densities computed at the freeze-out time and at the end of the cooling exhibit the same power-law behavior. Moreover, we have explicitly verified numerically that the same exponents also apply to the 2D checkerboard spin ice subject to algebraic cooling schedules.
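As a concrete check, the freeze-out condition for the algebraic schedule can be solved numerically. The minimal Python sketch below (with the schedule amplitude arbitrarily set to A = 1, an assumption made only for illustration) solves the transcendental equation t̂ = exp{tanh^-1[1-A(t̂/τ_Q)^α]} quoted above and recovers the predicted exponent α/(2+α):

import numpy as np
from scipy.optimize import brentq

def freeze_out_time(tau_Q, alpha=1.0, A=1.0):
    # solve t = exp( arctanh(1 - A*(t/tau_Q)**alpha) ) for the freeze-out time t_hat
    f = lambda t: t - np.exp(np.arctanh(1.0 - A * (t / tau_Q)**alpha))
    t_hi = tau_Q * (1.0 / A)**(1.0 / alpha)      # at t_hi the right-hand side equals 1
    return brentq(f, 1e-6 * t_hi, 0.999 * t_hi)

tau_Qs = np.logspace(3, 7, 9)
for alpha in (1.0, 2.0):
    t_hats = [freeze_out_time(tq, alpha) for tq in tau_Qs]
    slope = np.polyfit(np.log(tau_Qs), np.log(t_hats), 1)[0]
    print(f"alpha={alpha}: fitted exponent {slope:.3f} vs KZ value {alpha/(2+alpha):.3f}")

Since n_m(τ_Q) ∼ t̂^-1, the same fit reproduces the exponent μ = α/(2+α) of Eq. (<ref>).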
§.§ Residual density of double monopoles
The double-monopole density n_2m(t) obtained from Glauber dynamics simulations is shown in FIG. <ref> as a function of time. Again, the simulation results are well captured by the rate equations. While the double monopoles also seem to exhibit power-law behavior both at the freeze-out time and at the end of cooling, the two exponents differ, in contrast to the case of single monopoles. It is worth noting that the double monopoles are not topological defects, as they can spontaneously decay into two fundamental monopoles. As a result, there is no freezing for the annihilation of double monopoles. However, we can still estimate the density of double monopoles at the freeze-out time t = τ_Q - t̂. As the activation energy of such defects is Δ E_2m = v/2 (2 q_m)^2 = 8J, their equilibrium density scales as n_2m∼ e^- 8 J/T. Since the relaxation time τ∼ e^2 J/T in the adiabatic regime, we have n_2m∼τ^-4. Using the KZ condition (<ref>) that the relaxation time at the freeze-out instant is τ̂ = t̂, the density of double monopoles at the freeze-out time is
n_2m(τ_Q - t̂) ∼τ_Q^-4α/(2+α).
This power law agrees very well with the numerical results for both linear cooling and algebraic cooling with α = 2; see FIG. <ref>.
However, as discussed above, since double monopoles are non-topological, they will continue to decay even after the freeze-out instant. Their relaxation in this regime is governed by a rate equation
dn_2m/dt = 3 n_m^2/16 τ_2m e^-4β(t) J - n_2m/τ_2m,
where the temperature-independent τ_2m is the intrinsic lifetime of the double monopole. The first term above describes the combination of two monopoles of the same charge into a double monopole. The reverse process, corresponding to the second term above, is the dominant contribution to the decay of double monopoles. In this freeze-out regime, the density of fundamental monopoles can be approximated by its value at the freeze-out instant. The depletion of n_m due to the recombination is negligible owing to the small exponential factor e^-4β J at very low temperatures. Assuming a short decay time of double monopoles, τ_2m≪t̂, the rate equation for the case of algebraic cooling can be integrated to give a residual density
n_2m(τ_Q) ∼τ_Q^-(4α + α^2)/(2 + α).
Details of the derivation are presented in Appendix C. This power-law dependence is confirmed by both the Glauber dynamics simulations and the rate equations, as shown in FIG. <ref>.
§.§ Dynamical scaling
The freeze-out time t̂ and the associated correlation length of the KZM also provide a basis for a dynamical scaling of the nonequilibrium behavior during cooling <cit.>. In particular, here we consider the time-dependent excess monopole density defined as δ n_m(t) = n_m(t) - n_m^ (eq)(t), which represents the genuinely nonequilibrium part of the defect density. Here the quasi-equilibrium monopole density is given by the Boltzmann distribution at the instantaneous temperature, n_m^ (eq)(t) ∼exp[-β(t) Δ E_m], with the degeneracy factor adequately taken into account. The excess monopole density as a function of time is shown in the inset of FIG. <ref>(a) for various cooling rates. The density of excess monopoles becomes non-zero immediately after the cooling starts, yet remains rather small, of the order of δ n_m ∼ 10^-3, in the initial quasi-adiabatic regime. During this period, the number of excess monopoles increases gradually until the freeze-out time, which is marked by the abrupt, rapid growth of δ n_m. At the end of cooling, when the system reaches zero temperature and n^ (eq)_m = 0, the density of excess defects exhibits the same scaling δ n_m ∼τ_Q^-1/3 as shown by the dashed line.
It is worth noting that the relevant time scale that determines the evolution of the quenched system is the freeze-out time instead of the annealing time τ_Q. The dynamical scaling posits that the density of excess monopoles, normalized by the density of residual defects at the end of cooling, is a universal function of the time left before reaching the critical point, rescaled by the freeze-out time
δ n_m(t) = δ n_m(τ_Q) ℱ(t-τ_Q/t̂).
The critical point t = τ_Q corresponds to ℱ(0) = 1. As shown in FIG. <ref>(b), the rescaled data points from our Glauber dynamics simulations collapse on a universal curve, underscoring a universal nonequilibrium dynamical behavior.
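In practice, this collapse is a simple data transformation. A minimal helper (assuming the simulation output for one cooling run is stored as NumPy arrays; the variable names in the usage comment are hypothetical) is:

import numpy as np

def kz_rescale(t, n_m, n_eq, tau_Q, t_hat):
    # excess (nonequilibrium) density and its value at the end of cooling
    delta_n = n_m - n_eq
    delta_n_end = delta_n[np.argmin(np.abs(t - tau_Q))]
    # rescaled coordinates of Eq. (<ref>): x = (t - tau_Q)/t_hat, y = delta_n/delta_n(tau_Q)
    return (t - tau_Q) / t_hat, delta_n / delta_n_end

# usage (hypothetical arrays): curves rescaled this way should fall on one function F
# x1, y1 = kz_rescale(t_run1, nm_run1, neq_run1, tau_Q1, t_hat1)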
§.§ Exponential cooling protocol
Since either the Glauber or Metropolis dynamics for Ising spins is controlled by the Arrhenius factor e^-4β J, it is natural to define cooling schedules in terms of the dimensionless parameter γ(t). The algebraic cooling protocol (<ref>) corresponds to a physical temperature which vanishes in such a way that its inverse diverges logarithmically, T(t) ≈ 4J/(α |log(τ_Q - t)|), near t = τ_Q. To investigate the annealing dynamics with a linearly decreasing temperature T(t) = T_0 (1-t/τ_Q), we consider cooling procedures where the γ parameter is described by an exponential function <cit.>
1 - γ(t) = B exp{ - b/(1-t/τ_Q)^α},
where b, α > 0 are positive parameters and B = exp(b) is a normalization factor ensuring γ(0) = 0 and γ(τ_Q) = 1. The case of a linearly decreasing temperature corresponds to α = 1.
The monopole density as a function of time is shown in FIG. <ref> for exponential cooling schedule with α = 1 and 3. The dynamical evolution is again well captured by the rate equations. Similar to the case of algebraic cooling, the relaxation of magnetic monopoles is characterized by a slow decay for most of the cooling schedule, followed by an abrupt drop at the late stage. Yet, the relaxation shows a slight deceleration roughly after the freeze-out time scale, to be discussed below. This late-stage slowdown is particularly prominent in the case of α = 3.
The residual monopole density at t = τ_Q again exhibits a power-law dependence on the cooling rate. Here we apply the KZM to understand this scaling relation. Substituting γ(t) of the exponential cooling procedure into the KZ condition (<ref>) gives the transcendental equation t̂ = exp{tanh^-1[1 -B exp(-b/(t̂/τ_Q)^α) ] }, which in the slow-cooling limit can be simplified to give a freeze-out time t̂∼τ_Q (lnτ_Q)^-1/α. Using the scaling relation n_m(τ_Q) ∼τ̂^-1∼t̂^-1 discussed previously, we obtain a universal 1/τ_Q power-law relation for the residual monopole density with a logarithmic correction that depends on the parameter α:
n_m(τ_Q) ∼τ_Q^-1 (lnτ_Q)^1/α.
As shown in FIG. <ref>, the numerical results agree reasonably well with this KZM prediction.
§.§ Effect of long-range dipolar interaction
In pyrochlore spin-ice compounds, such as Dy_2Ti_2O_7 and Ho_2Ti_2O_7, the rare-earth ions carry a moment of 10 Bohr magnetons, μ≈ 10 μ_B. Long-range dipolar interaction plays a role of equal significance to the nearest-neighbor exchange. As discussed above, the dipolar interaction contributes to the effective nearest-neighbor coupling J between the Ising spins. The dipolar term is expected to slightly modify the activation energy of monopoles Δ E_m. Yet, the long-range Coulomb interaction between magnetic monopoles enhances the critical slowing down <cit.>. This enhancement can be attributed to the formation of locally bound pairs of monopoles, which hinders their diffusive motion <cit.>. As a result, the Arrhenius law cannot account for the entire low temperature relaxation time, including the intermediate quasi-plateau region (below 12 K) and the sharp upturn below 2 K. Nonetheless, the rapid increase of the relaxation time at very low temperatures, which is most relevant for the freezing in the KZ scenario, can be approximated by a single exponential τ(T) = τ_0 exp(Δε / T) with an effective barrier energy Δε.
For convenience, we introduce a dimensionless parameter λ = Δ E_m / Δε. The enhanced critical slowdown indicates λ < 1. The equilibrium monopole density is then n_m ∼ξ^-D∼τ^- λ, which implies an effective dynamical exponent z = D/ λ. Here we consider the effects of the dipolar interaction in the case of the algebraic cooling schedule Eq. (<ref>). Using the KZ condition (<ref>) to determine the freeze-out time t̂ and the corresponding relaxation time τ̂, the residual monopole density is found to follow a modified scaling relation
n_m(τ_Q) ∼τ_Q^-αλ/(α + 2 λ).
Although the correction caused by the dipolar interaction can in principle be verified using the Glauber dynamics of Ising spins, large-scale simulations would be rather difficult due to the long-range dipolar term. A more feasible approach is to perform quench dynamics of a Coulomb gas of magnetic monopoles moving in a network of Dirac strings on the diamond lattice <cit.>; this is left for future study.
§ DISCUSSION AND OUTLOOK
A closely related system is the 2D kagome spin ice where the Ising spins reside on a network of corner-sharing triangles <cit.>. Since there are three spins in a basic triangle simplex, the ground-state manifold is governed by the 3-in-1-out or 1-in-3-out pseudo-ice rules, giving rise to a non-zero magnetic charge at every triangle. Elementary excitations, corresponding to 3-in or 3-out triangles, are not topological since they can decay into the minimum charge state by shedding the extra charge to its neighbor. The charge defects in kagome are similar to the double monopoles in pyrochlore spin ice. Moreover, while spins in the low-temperature ice phase are characterized by strong correlation, there is no emergent critical point at T = 0. As a result, for general cooling schedules, the residual charge defects exhibit a non-power-law dependence on the cooling rate <cit.>.
The KZ mechanism has previously been investigated in an artificial colloidal version of 2D spin ice with optical traps arranged in a square lattice <cit.>. Contrary to the ideal 2D checkerboard spin ice, the planar geometry breaks the degeneracy of the six ice-rule-obeying vertices, leading to a long-range order with a staggered arrangement of the two lower-energy symmetric 2-in-2-out vertices. Although a power-law behavior of defect vertices was observed in the Langevin dynamics simulation, the obtained exponent is inconsistent with the prediction of the KZM for the expected 2D Ising universality class. We believe the discrepancy could be attributed to the fact that charge defects, such as magnetic monopoles, are not necessarily associated with the Ising ordering, as demonstrated in our work. On the other hand, it has been shown that the field-induced liquid-gas transition of magnetic monopoles in pyrochlore spin ice exhibits a dynamical KZ scaling of the 3D Ising universality class <cit.>.
Our results have firmly established the universal nonequilibrium generation of magnetic monopoles in spin ices under slow cooling. Despite the absence of broken symmetries at low temperatures, pyrochlore spin ice and its 2D counterpart exhibit an unconventional critical point at T_c = 0. The correlation length of the highly correlated ice phase at low temperatures is controlled by emergent magnetic monopoles, which are topological defects that violate the two-in-two-out ice rules. Universal scaling relations of residual monopoles predicted by the Kibble-Zurek mechanism are confirmed by Glauber dynamics simulations as well as by reaction kinetics theory. Our work opens a new avenue to the study of universal annealing dynamics of topological defects in other highly constrained systems.
§ APPENDIX A: RELAXATION TIME OF SPIN ICE
We employ the Glauber dynamics method to simulate instant thermal quench of nearest-neighbor pyrochlore spin ice. The relaxation time of the system can be obtained from the decay of magnetic monopoles after the quench. The simulated system consists of 10^3 cubic unit cells with N = 16 × 10^3 Ising spins. All data points are averaged over 8000 randomly generated initial configurations. We quench the system from infinite temperature to a low temperature T < J at time t = 0.
The averaged monopole density n_m(t) as a function of time is shown in FIG. <ref>(a) for various final temperatures. At large t, the time evolution can be well approximated by an exponential decay
δ n_m(t) = δ n_m(0) exp[-t/τ(T)],
where δ n_m = n_m - n^ eq_m(T) is the density of excess monopoles, and τ(T) is a temperature-dependent relaxation time. The extracted relaxation time is shown in FIG. <ref>(b) as a function of the inverse temperature. The agreement with the straight line, corresponding to 0.338exp(2.00(3) J /T), in the semi-log plot shows that the relaxation time can be well approximated by an Arrhenius law with a barrier energy of 2 J. As discussed in the main text, this result implies that the dynamical exponent z = D for nearest-neighbor spin ices.
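Schematically, this extraction amounts to a log-linear fit of the excess-monopole decay followed by an Arrhenius fit of the resulting relaxation times; a short Python sketch (with synthetic data standing in for the simulated curves) reads:

import numpy as np

def relaxation_time(t, n_m, n_eq):
    # fit delta_n(t) = delta_n(0) exp(-t/tau) by a log-linear fit of the excess density
    delta_n = n_m - n_eq
    keep = delta_n > 0
    slope, _ = np.polyfit(t[keep], np.log(delta_n[keep]), 1)
    return -1.0 / slope

def arrhenius_fit(T, tau):
    # fit tau(T) = tau_0 exp(Delta_E / T); expect Delta_E close to 2J
    slope, intercept = np.polyfit(1.0 / np.asarray(T), np.log(tau), 1)
    return np.exp(intercept), slope              # (tau_0, Delta_E)

# synthetic check: data generated with tau = 25 is recovered by the fit
t = np.linspace(0.0, 200.0, 400)
n_eq = 0.02 * np.ones_like(t)
n_m = n_eq + 0.3 * np.exp(-t / 25.0)
print(relaxation_time(t, n_m, n_eq))             # ~25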
§ APPENDIX B: REACTION KINETICS & RATE EQUATIONS
The reaction kinetics theory in chemistry is adopted to describe the dynamical evolution of the monopoles and double monopoles in spin ices. The basic idea is to describe the time evolution in terms of the number densities of different tetrahedra in a mean-field sense. For convenience, we also borrow terms from chemical reaction theory and use species to refer to tetrahedra of different charges. In pyrochlore spin ice, there are six different species which can be classified into three types. (i) ice-rule obeying tetrahedra with zero net charge; their density is denoted as n_0. (ii) 3-in-1-out and 1-in-3-out tetrahedra corresponding to magnetic monopoles with charge ± q_m. Their density is denoted as n_± 1, respectively. (iii) all-in and all-out tetrahedra with magnetic charges Q = ± 2 q_m; these are double monopoles with density n_±2.
Assuming that the magnet remains spatially homogeneous during relaxation, rate equations are employed to describe the "chemical reactions" of the different tetrahedron species. The four different reactions caused by a single spin-flip are summarized in FIG. <ref>. The first type, shown in FIG. <ref>(a), describes the pair-annihilation and creation of magnetic monopoles Q=± q_m. The second reaction shows the annihilation of a double monopole with a single monopole of opposite charge. The third one corresponds to the conversion between a pair of monopoles and a pair of double monopoles. Finally, FIG. <ref>(d) depicts the decay of a double monopole into a pair of opposite-charge fundamental monopoles. It is worth noting that single spin-flips with Δ E = 0 are not listed here, as such updates correspond to the diffusive motion of fundamental monopoles.
Next we consider the transition kinetics of a general reaction
Q_A + Q_B ⇌ Q_C + Q_D,
where Q_A, Q_B are the initial reactants, and Q_C, Q_D are the final products. The two-way harpoon indicates that the reaction can occur in both forward and reversed directions. We note that, as these reactions are due to a flipping of magnetic dipole, the total charge is conserved.
It is convenient to choose the forward direction as the one that lowers the total energy, i.e., Δ E < 0.
In other words, the forward reaction is the decay or the annihilation of magnetic charges, while the reversed reaction is the excitation of magnetic charges.
The rate of a reaction is proportional to the densities of the reactants. For example, the transition rate of forward reaction for Eq. (<ref>) is v_+ ∝ n_Q_A n_Q_B. The net rate of reaction in the forward direction is then
v = k_+ n_Q_A n_Q_B - k_- n_Q_C n_Q_D,
where n_Q is the density of tetrahedron with charge Q, and k_± denote the reaction coefficients of forward/reversed reactions, respectively.
These reaction coefficients, however, are not independent. When the system reaches equilibrium, the net change is zero v = 0, which in turn means k_+/k_- = n^ eq_Q_C n^ eq_Q_D / n^ eq_Q_A n^ eq_Q_B.
The equilibrium densities of the various species are given by the Boltzmann distribution, n^ eq_Q = g_Q e^-β E_Q / Z, where Z is the partition function, E_Q is the energy of charge species Q, and g_Q is its degeneracy. We thus have
k_+/k_- = g_Q_C g_Q_D/g_Q_A g_Q_Be^-βΔ E,
where Δ E is the energy difference between products and reactants. In general, the reaction coefficients k_± can be expressed as
k_± = A_± e^-βε_±,
where ε_± are the activation energies for the forward/backward reactions, respectively. In chemical reactions, which often involve an intermediate state, these energy barriers are the energy differences between the intermediate state and the initial/final state, respectively. The coefficients A_± are now nearly temperature independent. Letting E^* be the energy of the intermediate state, we have ε_+ = E^* - (E_Q_A + E_Q_B) and ε_- = E^* - (E_Q_C + E_Q_D). Substituting Eq. (<ref>) into the ratio in Eq. (<ref>) and using the fact that ε_+ - ε_- = Δ E, we obtain the ratio between the two pre-factors
A_+/A_- = g_Q_C g_Q_D/g_Q_A g_Q_B.
The overall reaction rate, and in particular its temperature dependence, naturally also depends on the energy level E^* of the intermediate state. However, for Ising spins with Glauber dynamics, the transition rate only depends on the energy difference Δ E, which does not involve any intermediate state. Or equivalently, the initial state with higher energy serves as such intermediate, hence ε_+=0 and ε_- = |Δ E|.
With these simplifications, there is only one independent parameter, e.g., A_-, for the determination of the net reaction rate
v = A_-( g_Q_C g_Q_D/g_Q_A g_Q_B n_Q_A n_Q_B - e^-β|Δ E| n_Q_C n_Q_D).
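For reference, the net rate of a single reaction translates directly into code; the function below simply evaluates Eq. (<ref>) for given densities, degeneracies, and energy difference, with A_- the one fitted prefactor:

import numpy as np

def net_rate(A_minus, beta, dE_abs, n_A, n_B, n_C, n_D, g_A, g_B, g_C, g_D):
    # v = A_- [ (g_C g_D / g_A g_B) n_A n_B - exp(-beta |dE|) n_C n_D ]
    return A_minus * ((g_C * g_D) / (g_A * g_B) * n_A * n_B
                      - np.exp(-beta * dE_abs) * n_C * n_D)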
When a charged species is involved in multiple reactions simultaneously, the rate equation for its density should include the contributions of every reaction
dn_Q/dt = ∑_m (r_m,Q - s_m,Q) v_m,
where v_m is the rate of the m-th reaction, and r_m,Q and s_m,Q are the stoichiometric coefficients of species Q in the reactants and products, respectively, of the m-th reaction.
To further simplify the rate equations, we utilize the charge symmetry of the spin-ice system, assume that the charge densities of species with opposite signs are equal, and define the defect densities of the system, n_m = (n_+1 + n_-1) and n_2m = (n_+2 + n_-2); the density of background tetrahedra satisfying the ice rules is then n_0 = 1 - n_m - n_2m.
Based on the possible reactions and the properties of charges in pyrochlore spin ice, we can see that the densities of charge defects satisfy the following ordinary differential equations,
dn_m/dt = A_1(2 e^-4β J n_0^2 - 9/8 n_m^2 )
- A_3(1/2 e^-12β J n_m^2 - 8 n_2m^2 )
- A_4 (e^-4β J n_m^2 - 16/3 n_0 n_2m)
dn_2m/dt = A_2 (e^-8β J n_0 n_m - 3 n_m n_2m)
+ A_3(1/2 e^-12β J n_m^2 - 8 n_2m^2 )
+ A_4(1/2e^-4β J n_m^2 - 8/3 n_0 n_2m)
where β(t) = 1/T(t) is the time-dependent inverse temperature. The four coefficients A_1, ⋯, A_4 describe the overall reaction rates of the four reaction processes in FIG. <ref>(a)–(d), and are obtained by fitting to the Glauber dynamics simulations. For the quench simulations, the initially random spins at T = ∞ correspond to the initial conditions n_m(0)=1/2 and n_2m(0)=1/8.
The rate equation (<ref>) for magnetic monopoles at low temperatures is obtained from Eq. (<ref>) using the approximation n_0 = 1-n_m - n_2m≈ 1. The various coefficients there are 𝒜_0 = 2 A_1 e^-4β J, 𝒜_1 = 16 A_4/3, 𝒜_2 = 8 A_3, and ℬ = 9A_1/8 + A_3 e^-12β J/2 + A_4 e^-4β J.
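The numerical integration of the coupled system (<ref>) can be sketched in a few lines of Python; the coefficients A_1,…,A_4 below are placeholders for the fitted values, the schedule is the algebraic one of Eq. (<ref>) with A = 1, and the Boltzmann factor is expressed through γ via the identity e^-4β J = (1-γ)/(1+γ) used in Appendix C:

import numpy as np
from scipy.integrate import solve_ivp

A1, A2, A3, A4 = 1.0, 1.0, 1.0, 1.0        # placeholders for the fitted coefficients

def boltzmann_factor(t, tau_Q, alpha):
    # w = e^{-4 beta J} along the algebraic schedule 1 - gamma = (1 - t/tau_Q)**alpha
    one_minus_gamma = max(1.0 - t / tau_Q, 0.0)**alpha
    return one_minus_gamma / (2.0 - one_minus_gamma)

def rhs(t, y, tau_Q, alpha):
    n_m, n_2m = y
    n_0 = 1.0 - n_m - n_2m
    w = boltzmann_factor(t, tau_Q, alpha)   # e^{-8 beta J} = w**2, e^{-12 beta J} = w**3
    dn_m = (A1 * (2 * w * n_0**2 - 9/8 * n_m**2)
            - A3 * (0.5 * w**3 * n_m**2 - 8 * n_2m**2)
            - A4 * (w * n_m**2 - 16/3 * n_0 * n_2m))
    dn_2m = (A2 * (w**2 * n_0 * n_m - 3 * n_m * n_2m)
             + A3 * (0.5 * w**3 * n_m**2 - 8 * n_2m**2)
             + A4 * (0.5 * w * n_m**2 - 8/3 * n_0 * n_2m))
    return [dn_m, dn_2m]

tau_Q, alpha = 1.0e4, 1.0
sol = solve_ivp(rhs, (0.0, tau_Q), [0.5, 0.125], args=(tau_Q, alpha),
                method="LSODA", rtol=1e-8, atol=1e-12)
print("residual densities n_m, n_2m:", sol.y[:, -1])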
§ APPENDIX C: ASYMPTOTIC SOLUTION FOR RESIDUAL DOUBLE MONOPOLES
At low temperatures after the freeze-out time, the rate equation for double monopoles is dominated by the A_4 term, corresponding to the reaction shown in FIG. <ref>(d). This is justified mathematically by the fact that e^-4β J≫ e^-8 β J≫ e^-12β J for the source term, and n_0 ≫ n_m ≫ n_2m. The resultant rate equation is shown in Eq. (<ref>) in the main text, with the intrinsic lifetime of the double monopole given by τ_2m = 3/(8A_4). For convenience, let t_* = τ_Q - t̂ be the freeze-out moment during the cooling. The monopole density remains roughly constant, n̂_m = n_m(t_*) ≈ n_m(τ_Q), for the time interval t_* < t < τ_Q. The rate equation then becomes
dn_2m/dt = 3 n̂_m^2/16 τ_2m e^-4β(t) J - n_2m/τ_2m,
Integrating this equation from t_* to t gives
n_2m(t) = n_2m(t_*) e^-(t-t_*)/τ_2m
+ 3 n̂_m^2/16 τ_2m∫_t_*^t e^-4β(s) J e^-(t-s)/τ_2m ds.
Here the density of double monopole n_2m(t_*) at the freeze-out instant is given in Eq. (<ref>). Assuming a fast decay of double monopoles τ_2m≪t̂, the exponential factor of the first term at t = τ_Q is then negligible e^-t̂/τ_2m≪ 1. To evaluate the remaining integral, we introduce a change of variable η = (τ_Q - s)/τ_2m and express the Arrhenius factor e^-4β J in terms of the dimensionless γ. The residual density at the end of cooling becomes
n_2m(τ_Q) = 3 n̂_m^2/16∫_0^t̂ / τ_2m1-γ(η)/1+γ(η) e^-η dη.
In the low temperature regime at the end of cooling, γ≈ 1, we can approximate the denominator by 1+γ≈ 2. Substituting the algebraic cooling protocol (<ref>) for 1-γ, we obtain
n_2m(τ_Q) = 3 A n̂_m^2 /16(τ_2m/τ_Q)^α∫_0^t̂/τ_2mη^α e^-η dη.
The remaining integral can be readily evaluated in the τ_2m≪t̂ limit:
n_2m(τ_Q) = 3 A Γ(1+α) n̂_m^2 /16(τ_2m/τ_Q)^α.
where Γ(x) is the Gamma function. Using scaling relation (<ref>) for the residual monopole density n̂_m, the above equation leads to the power-law behavior in Eq. (<ref>).
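The τ_2m ≪ t̂ limit taken in the last step can be checked directly, for instance:

import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, upper = 2.0, 50.0                     # upper plays the role of t_hat/tau_2m >> 1
value, _ = quad(lambda eta: eta**alpha * np.exp(-eta), 0.0, upper)
print(value, gamma(1 + alpha))               # both are ~ Gamma(1 + alpha) = 2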
Acknowledgments. GWC is partially supported by the US Department of Energy Basic Energy Sciences under Contract No. DE-SC0020330. The authors also acknowledge the support of Research Computing at the University of Virginia.
|
http://arxiv.org/abs/2307.04015v1 | 20230708164731 | Emotion-Guided Music Accompaniment Generation Based on Variational Autoencoder | [
"Qi Wang",
"Shubing Zhang",
"Li Zhou"
] | cs.SD | [
"cs.SD",
"cs.MM",
"eess.AS"
] |
Emotion-Guided Music Accompaniment Generation Based on Variational Autoencoder
Qi Wang, Shubing Zhang , Li Zhou 1
China University of Geosciences(Wuhan)
{wangqi233,zhouli}@cug.edu.cn
* Corresponding author
1 This research was funded by the Chinese Regular Projects of the Humanities and Social Sciences Fund of the Ministry of Education of Grant No.16YJAZH080.
August 12, 2023
===================================================================================================================================================================================================================================================================================================
Music accompaniment generation is a crucial aspect of the composition process. Deep neural networks have made significant strides in this field, but it remains a challenge for AI to effectively incorporate human emotions into beautiful accompaniments, and existing models struggle to characterize human emotions within the network while composing music. To address this issue, we propose the use of an easy-to-represent emotion-flow model, the Valence/Arousal Curve, which makes emotional information compatible with the model through data transformation, and we enhance the interpretability of the emotional factors by adopting a Variational Autoencoder as the model structure. Further, we use relative self-attention to maintain the structure of the music at the music-phrase level and, combined with rules from music theory, to generate a richer accompaniment.
Our experimental results indicate that the emotional flow of the music generated by our model has a strong correlation with the input emotion, demonstrating the model's strong interpretability and control of emotional flow. The generated music is also well-structured, diverse, and dynamic, outperforming the baseline models.
Music Accompaniment Generation, Emotional Flow, Variational Autoencoder, Rule constraints
§ INTRODUCTION
Music evokes emotions in listeners, making it a powerful and intuitive medium for understanding. It also serves as a driving force for musicians to create. One important aspect of composing is incorporating emotional expression into the music. Composers use their emotions along with their technical skills and knowledge to craft their compositions.
Current AI methods fall short of replicating a composer's approach. Neural networks primarily focus on combining and utilizing pre-existing knowledge of compositions, rather than incorporating emotions as high-level information. Our research aims to overcome this limitation by developing a model for generating accompaniment that takes emotions into account.
The way emotions are processed impacts every aspect of music composition and, as a result, every aspect of deep neural networks <cit.>. This puts a significant emphasis on the need for network control. While autoregressive models can effectively capture key elements of music, they lack transparency and do not guarantee internal control and interpretability of musical information. Adversarial networks <cit.> can separate elements like pitch, rhythm, and texture, but they struggle with capturing emotional information and prioritize interpretability over musicality and structure.
Additionally, many music generation models <cit.> primarily focus on identifying and evaluating the emotional aspects of music, rather than using them as a controllable variable. Therefore,
instead of using subjective and limited emotional labels<cit.>, such as "relaxed" or "nervous," we have adopted Thayer's continuous emotion model<cit.>. This model takes into account two quantitative and controllable factors: valence, which measures the level of positivity or negativity, and arousal, which measures the level of excitement or calmness. This approach provides a controlled understanding of human emotions.
Thus, we designed a system based on Variational Autoencoder, a controllable deep learning model, which incorporates emotional factors into the neural network's learning process. The user inputs valence and arousal trends, which are then encoded using our Valence Encoder and Arousal Encoder. The model then decodes and reconstructs this information to generate 2-bar piano accompaniments that match the emotional flow of the user's input.
To compose a dynamic piece of music, we take into account two key elements: tonality<cit.>, which enhances the beat and rhythm of the music by incorporating rule-based constraints in the model's decoder, and structural organization<cit.>, which improves the storytelling aspect of the music and preserves the internal structure of the piece through a self-attention mechanism.
Our data, code, and samples have been made publicly available [<https://github.com/Duoluoluos/Emotion-Guided-Music-Accompaniment-Generation>]online.
Our main contributions include:
* Emotion-Guided Composition, where the user inputs an Emotion-Flow Curve and the model generates music
that closely matches the input emotions.
* Enhanced accompaniment generation, incorporating global tonality, music phrases, and local texture for a more realistic and dynamic improvised accompaniment.
* Integration of rules and deep learning, combining the creative capabilities of deep networks with the constraints of music theory to improve the transparency of the music creation process.
§ RELATED WORKS
§.§ Accompaniment Generation
Generating musical accompaniment is essentially a specific type of music generation problem<cit.>, where the melody is used as a constraint, and the accompaniment is the generated music. In the past, accompaniment generation was approached in the same way as music generation, treating pitch and temporal values as simple data. Algorithms such as Hidden Markov Chain (HMC)<cit.>, Random Forest (RF), Support Vector Machine (SVM)<cit.> <cit.>, etc. were used to approach the problem from a regression perspective. However, with the advancement of deep learning, more accurate prediction models have been developed.
DeepBach<cit.>, a well-known music generation network based on RNN/LSTM<cit.> networks, represents Bach chorales as voice lists with metadata lists and feeds the embedded representation to an RNN for prediction. However, RNN/LSTM networks alone may not be sufficient for achieving the required level of long-range coherence in accompaniment. Hybrid models, such as the RNN-LSTM model in paper <cit.> and the RNN-RBM model in paper <cit.>, have been proposed to address this issue. The RNN-LSTM model learns different sub-models in stages, while the RNN-RBM model uses several Restricted Boltzmann Machines (RBMs) and samples their output as input for the RNN, so that local information is learned first and then modeled autoregressively.
In 2018, the Music Transformer <cit.> was introduced, which shifted the focus from regression problems and note prediction to natural language processing (NLP) techniques for recognizing relationships between different segments of music and evaluating the logicality of musical phrases, similar to how NLP tasks analyze relationships and coherence in language. The Transformer model uses attention mechanisms, positional coding, and other techniques to ensure long-range coherence, making it useful for various accompaniment generation tasks such as drum and piano accompaniment. The model is similar to text completion in NLP, using a priori melodic data and key information such as drum beats to "fill in" missing features. Papers <cit.> have expanded upon this data representation and the MuMidi proposed in paper <cit.> can solve harmonic problems in a long-term context by integrating pitch, time value, and tempo. However, the generation process is not always interpretable or controllable and the randomness of notes can increase over time, resulting in non-sequential music.
To improve control over the music generation process, various methods have been employed. MuseBert <cit.> uses data corruption and fine-tuning during the inference learning process, while Music VAE <cit.> <cit.> uses decoupled feature representations such as pitch, chord, and texture, and employs interpolation, back-and-forth sampling, and temperature factors to increase accompaniment diversity. MuseGAN <cit.> treats music data as images and can generate multi-track accompaniments, but the structure of each track is not well-constrained by composition rules and the resulting music may not be as listenable. It is worth noting that the "hidden space" of the Variational Autoencoder(VAE) is better suited to the music generation problem than the image representation method used in the generative adversarial network. Unlike pass-through data, notes are affected by pitch, time, and velocity and have a high dimensionality of information. The VAE <cit.> normalizes this information to the hidden space for posterior estimation and reconstruction using an Encoder-Decoder architecture, which can be combined with a "learning from scratch" strategy and improve the model's ability to migrate and transfer. Therefore, we chose to use VAE as a controllable accompaniment generation model. Our model can generate well-structured accompaniments that conform to certain composition rules and follow an Emotion Flow.
§.§ Emotional Flow Guided Composition
Valence and Arousal are commonly used as quantitative measures of musical emotion in research. Studies<cit.> have shown that the rhythmic density of music, determined by the duration of notes in each measure, can affect a person's arousal levels independently of note velocity. Additionally, the melodic and harmonic direction of a song can affect the overall emotional direction <cit.>, referred to as valence. These factors can have a significant impact on the emotional response to a piece of music.
The objective of our research is to extract features from Emotion Flow, specifically the Valence Curve and Arousal Curve <cit.>, and then systematically associate those features with the generated accompaniment. Previous research, as shown in the paper <cit.>, used dynamic programming and template-matching methods for Emotion-Flow Guided Accompaniment Generation. However, these methods can ensure the audibility of the music but do not guarantee the diversity of the accompaniment. In contrast, deep neural networks can achieve accompaniment diversity through large-scale learning, but they struggle to maintain the structure of the music compared to methods such as template matching <cit.>. Although self-similarity <cit.> can maintain some of the structure, neural network methods have difficulty ensuring the structure of the music because musical structure is strongly regulated through music phrases. Therefore, decoding music segments into "phrase" units is the key to maintaining musical structure. In this paper, we propose using a VAE which makes full use of structured features of the music to improve the overall structure and diversity of the accompaniment.
§ METHODS
§.§ Data Preparation
The POP909 Dataset <cit.> comprises 909 popular music tracks, which are piano-based and have a total running time of 60 hours. Each track is stored in MIDI file format and includes three separate components - melody, bridge, and piano. The bridge and piano tracks serve as an accompaniment. Additionally, the dataset includes chord and bar annotations for each song.
The POP909 dataset includes melodies that are broken down into 2-bar, 4-bar, and 6-bar fragments. The bar annotations in the dataset provide information about the structure of these fragments. The chord annotations, on the other hand, provide information about the harmony of each bar in the melodies.
To address the issue of music structure in a consistent manner, we discovered that the majority of music is composed of 2-bar segments. As a result, we carried out data cleaning, filtering out 2/4-bar segments and 2/4-bar segments with 6-bar introductory fragments. The training and testing sets were then split in an 8:2 ratio.
As sample data, we selected a subset from the Nottingham Dataset <cit.>. This dataset comprises over 1000 European and American folk songs, all of which have chord annotations. For validation purposes, we chose 2-bar and 4-bar segments from the dataset. The collated data information is presented in Table <ref>. (It is worth noting that if the user-supplied music does not have chord annotations like the sample data, we used Bi-LSTM Harmonizer <cit.> to implement the chord annotations)
To showcase the capabilities of our model, we chose two representative songs, one with high valence and the other with low valence, from the 20 songs we used. These songs were made available on a web page for users to evaluate and [<https://soundcloud.com/ko9isjyplxrb/sets/demos-of-emotion-guided-generated-accompaniment>]enjoy.
§.§ Models
§.§.§ The Conversion of Valence and Arousal
The overall architecture is illustrated in Figure <ref>.
The initial music data is represented by piano rolls. Each row of the piano roll matrix corresponds to one of the 128 pitch values and each column corresponds to a unit of time, with the duration of a 16th note used as the unit of time. The accompaniment tracks were merged and transformed to produce the accompaniment piano roll p_T^ACC, where T represents the duration of the altered accompaniment fragment. Similarly, the rhythm piano roll is represented as p_T^RHY, and the labeled chord progression is represented as c_T. Following twelve-tone equal temperament <cit.>, c_T is a matrix of 12 × T, where 12 is the number of pitch classes in an octave.
Valence_T=V(c̅_̅T̅)
where V(·) is the valence mapping and c̅_T is the chord data obtained after normalizing the root note of c_T to C3. This ensures that the valence is computed in a common key; we set T = 8 here.
Similarly, denoting the arousal mapping by A(·), we have
Arousal_T= A(p_T^ACC+p_T^RHY)
The mapping A transforms the multitrack music data into a tree structure <cit.>, whose nodes characterize the density distribution of notes more clearly. Arousal is a four-dimensional tensor of size 128× T × 16 × 8, whose axes correspond to the pitch-duration-density grouping.
Denoting the quantization operation on Arousal and Valence by | · |, we have
|Arousal|_T=1/5 · T∑_T∑_pitch A(p_T^ACC+p_T^RHY)
|Valence|_T = ∑_T W_V(c̅_̅T̅)
The W value in this context refers to the chroma weights of each chord and serves as a measure of the valence, or emotional assessment, of each chord. By performing a quantization-transformation operation, the emotional content of the music can be translated into a format that the composition model can understand, allowing for the user's desired Emotion Flow to be incorporated into the final output.
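For illustration, a much-simplified Python version of this quantization step is given below. The tree-structured arousal mapping A(·) of Eq. (<ref>) is replaced here by a plain note-density count, and the chroma weights W are a hypothetical 12-vector, so the snippet mirrors only the general data flow rather than the exact implementation.

import numpy as np

def quantized_arousal(piano_roll, T=32):
    # piano_roll: (128, T) binary matrix of the merged accompaniment and rhythm tracks;
    # simplified stand-in for |Arousal|_T: average note density per Eq. (<ref>)
    return piano_roll[:, :T].sum() / (5.0 * T)

def quantized_valence(chroma, W):
    # chroma: (12, T) root-normalized chord matrix; W: hypothetical (12,) chroma weights
    return float(W @ chroma.sum(axis=1))

# toy usage with random data
roll = (np.random.rand(128, 32) > 0.97).astype(float)
chroma = np.zeros((12, 8)); chroma[[0, 4, 7], :] = 1.0    # C-major triads over 8 steps
W = np.linspace(1.0, 0.0, 12)                             # hypothetical weight profile
print(quantized_arousal(roll), quantized_valence(chroma, W))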
§.§.§ Valence/Arousal Encoder
The Arousal and Valence Encoders both use an LSTM as the backbone network. The Arousal Encoder first extracts pitch-duration features through a CNN with a (4,12) kernel in the convolutional layer and a (1,4) kernel in the max-pooling layer.
After feature extraction by the convolutional network, the arousal information is more concise and refined <cit.>, so that the decoder can learn better emotional features.
Both LSTM networks have a single layer and are bidirectional. The input dimension of the Arousal Encoder is 256 and its output dimension is 1024; the input dimension of the Valence Encoder is 32 and its output dimension is 1024. Each encoder outputs the mean and variance of a probability distribution, which is sampled to obtain a 256-dimensional latent-space variable z_Arousal or z_Valence.
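A condensed PyTorch sketch of the two encoders is given below. The layer sizes follow the numbers quoted above, while the convolution channel count and the projection onto the 256-dimensional LSTM input are simplifying assumptions of this sketch.

import torch
import torch.nn as nn

class ArousalEncoder(nn.Module):
    def __init__(self, z_dim=256):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(1, 10, kernel_size=(4, 12)), nn.ReLU(),
                                  nn.MaxPool2d(kernel_size=(1, 4)))
        self.proj = nn.LazyLinear(256)              # map CNN features to the LSTM input size
        self.lstm = nn.LSTM(256, 1024, num_layers=1, batch_first=True, bidirectional=True)
        self.mu = nn.Linear(2 * 1024, z_dim)
        self.logvar = nn.Linear(2 * 1024, z_dim)

    def forward(self, x):                            # x: (batch, 1, time, 128 pitches)
        h = self.conv(x)                             # pitch-duration feature maps
        h = h.permute(0, 2, 1, 3).flatten(2)         # (batch, steps, channels*pitch)
        _, (hn, _) = self.lstm(self.proj(h))
        hn = torch.cat([hn[0], hn[1]], dim=-1)       # concatenate the two directions
        mu, logvar = self.mu(hn), self.logvar(hn)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        return z, mu, logvar

class ValenceEncoder(nn.Module):
    def __init__(self, z_dim=256):
        super().__init__()
        self.lstm = nn.LSTM(32, 1024, num_layers=1, batch_first=True, bidirectional=True)
        self.mu = nn.Linear(2 * 1024, z_dim)
        self.logvar = nn.Linear(2 * 1024, z_dim)

    def forward(self, c):                            # c: (batch, steps, 32) chord features
        _, (hn, _) = self.lstm(c)
        hn = torch.cat([hn[0], hn[1]], dim=-1)
        mu, logvar = self.mu(hn), self.logvar(hn)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar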
§.§.§ Decoder
We introduce the Valence Decoder first. Its LSTM is roughly the same as the encoder's, except that the input is fused with z_Valence and the input dimension is modified to 292. The reconstructed valence is estimated from the predicted mean and variance and fed back to the LSTM as a token, which completes the decoding part of the model. The probability distribution of the valence is a 12-dimensional Bernoulli distribution.
The PianoTree Decoder, on the other hand, follows the design of <cit.>, whose model we also use as a baseline. The original model consists of two main stages: temporal decoding and the decoding of notes for each pitch. Since different notes may be concatenated into fragments with some autocorrelation in the musical structure, forming music phrases, we add a note-summary operation after the temporal decoding and introduce a self-attention mechanism, which we explain in detail in the next subsection.
The first PianoTree LSTM in Figure <ref> decodes the 512-dimensional latent-space vectors, which are the hidden-space representation of the notes; the LSTM (hidden size = 1024) summarizes how they evolve along the temporal dimension, and we call the summarized result the note summary, of size (1, 512). After relative self-attention is applied, the result is decoded along the pitch dimension by LSTM(2) and mapped to 128 pitches through a fully connected layer. For each note (or note class), the respective durations are then decoded by an LSTM (hidden size = 16) to obtain the reconstructed emotional flow/music sequence.
§.§.§ Relative Self-Attention
In order to maintain the structural organization in the music sequences, we introduce a self-attentive mechanism. This inspiration comes from the paper <cit.>, which does this by comparing a template music sequence fragment with a training music fragment and obtaining the correlation of the relative positions in the two sequences by one-dimensional/two-dimensional convolution, and the resulting correlation data is called self-similarity.
In this paper, self-similarity is not computed by a convolution operation, because we have no template fragments; instead it is computed from the note summary, a tensor that stacks pitch and emotion information along the time axis. Since self-attention obtains the internal autocorrelation of its input by soft addressing, it can capture the autocorrelation of the note summary in the time domain and thus maintain the structured organization of the music fragments as estimated "music phrases".
Since there is some time invariance in the relative positions of the sequences <cit.>, we also introduce offsets. Each fragment is not very informative, and to optimize the efficiency of the algorithm, we use a single-head attention mechanism. The query, key, and value tensor of relative attention are written as Q, K, and V, respectively. S^rel represents the offset matrix and the matrix element r=NS_k-NS_q, where NS_k and NS_q are the note summary query and key's position code, then the formula for relative self-attention(abbreviated as Att) is
Att = Softmax((QK^T+S^rel)/√(D))V.
As for the parameter settings, we set the weight dimension of Q to 1024 and the weight dimension of K, V to D=128.
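A single-head sketch of Eq. (<ref>) in PyTorch is shown below; the offset matrix S^rel is realized as a learned bias per relative position r = NS_k - NS_q, and the projection sizes are simplified relative to the weight dimensions quoted above.

import torch
import torch.nn as nn

class RelativeSelfAttention(nn.Module):
    def __init__(self, in_dim=512, d=128, max_len=64):
        super().__init__()
        self.Wq = nn.Linear(in_dim, d)
        self.Wk = nn.Linear(in_dim, d)
        self.Wv = nn.Linear(in_dim, d)
        self.rel_bias = nn.Embedding(2 * max_len - 1, 1)     # one scalar bias per offset r
        self.max_len, self.d = max_len, d

    def forward(self, ns):                    # ns: (batch, steps, in_dim) note summary
        q, k, v = self.Wq(ns), self.Wk(ns), self.Wv(ns)
        L = ns.size(1)                        # assumes L <= max_len
        pos = torch.arange(L, device=ns.device)
        r = pos[None, :] - pos[:, None] + self.max_len - 1   # offsets NS_k - NS_q, made non-negative
        S_rel = self.rel_bias(r).squeeze(-1)                 # (L, L) offset matrix
        att = torch.softmax((q @ k.transpose(1, 2) + S_rel) / self.d ** 0.5, dim=-1)
        return att @ v                        # attention-weighted note summary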
§.§.§ Rules-based Constraint
Two rules are very common in the realm of improvised accompaniment, enriching the player's accompaniment performance by changing tonality. The first principle is to add variety to the chords by making small adjustments to the chord tuning.
The second technique is to add a sense of layering between the different voices by shifting the tonality of the chords significantly at the same time.
Either way, chord arrangement is the most important thing.
If we want to use the rules in our accompaniment generator, we need to grasp the key information and build the model. Whether it's chord transposition or pitch shifting, it's essentially shifting pitch. So instead of inferring from the model, we can use the chord arrangement and transposition information directly to shift the pitch and change the generated accompaniment.
To obtain the chord transposition information, a mathematical evaluation is required. We denote the originally labeled chords of the input melody by C^pre and the chords generated by PianoTree decoding by C^gene; each chord is represented in twelve-tone equal temperament, so it is a 12-dimensional vector. The two are compared, and the maximum difference is used as the criterion for transposition. Denoting the current bar number by i, the pitch shift Δ C is:
Δ C = argmax(C^pre_i C^gene(T)_i /|| C^pre_i || · || C^gene(T)_i ||)
Here T denotes matrix transposition.
Each bar thus has a best chord-transposition choice; the bars with large Δ C are selected for pitch shifting, so that tonality adjustment is achieved through rules and mathematical modeling.
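One possible reading of Eq. (<ref>) is to pick, for each bar, the chroma shift that maximizes the cosine similarity between the annotated and the generated chord; the sketch below implements that reading, with an illustrative shift range and selection threshold.

import numpy as np

def best_shift(c_pre, c_gen, shifts=range(-6, 7)):
    # c_pre, c_gen: 12-dimensional chroma vectors of the annotated and generated chords
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    scores = {s: cos(c_pre, np.roll(c_gen, s)) for s in shifts}
    return max(scores, key=scores.get)             # Delta_C for this bar

def apply_rule(chroma_pre, chroma_gen, bar_pitches, threshold=3):
    # shift the accompaniment pitches of bars whose best transposition is large
    shifted = []
    for c_pre, c_gen, notes in zip(chroma_pre, chroma_gen, bar_pitches):
        dC = best_shift(c_pre, c_gen)
        shifted.append([p + dC for p in notes] if abs(dC) >= threshold else list(notes))
    return shifted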
§.§ Training Objective
The training objectives of VAEs <cit.> are much the same across applications: the loss function mainly consists of a regularization loss and a reconstruction loss. To shorten the formulation, we abbreviate Valence and Arousal as V and A.
For the regularization loss, we set the prior Gaussian distributions of Valence and Arousal as p(z_V) and p(z_A), and denote the posterior distributions after the encoders by p(z_V|V) and p(z_A|A), respectively. To measure the regularization loss between these pairs of probability distributions, we use the KL divergence <cit.>, denoted here as KL(·).
For the reconstruction loss, we set the probability distribution of the Valence Decoder output as p(V|z_V) and that of the PianoTree Decoder as p(A|z_A, z_V); the reconstruction loss is the negative expectation of the corresponding log-probabilities. In summary, the loss function Loss(V, A) of the model is
Loss(V, A) = - E_p[log p(V|z_V) + log p(A|z_V,z_A)]
+ KL(p(z_V|V) || p(z_V)) + KL(p(z_A|A) || p(z_A))
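In code, this objective reduces to the usual combination of reconstruction and KL terms. The sketch below assumes Gaussian posteriors with a closed-form KL against a standard-normal prior and summarizes the PianoTree reconstruction as a single categorical term, so it is a simplified stand-in for the actual implementation.

import torch
import torch.nn.functional as F

def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions, averaged over the batch
    return 0.5 * torch.sum(logvar.exp() + mu.pow(2) - 1.0 - logvar, dim=-1).mean()

def vae_loss(valence_logits, valence_target, arousal_logits, arousal_target,
             mu_v, logvar_v, mu_a, logvar_a):
    # reconstruction: negative expected log-likelihood of valence (Bernoulli) and arousal
    rec_v = F.binary_cross_entropy_with_logits(valence_logits, valence_target)
    rec_a = F.cross_entropy(arousal_logits, arousal_target)
    # regularization: KL terms for both latent variables
    return (rec_v + rec_a
            + kl_to_standard_normal(mu_v, logvar_v)
            + kl_to_standard_normal(mu_a, logvar_a))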
§ EXPERIMENTS
§.§ Training Details of Our Proposed Model
The experiment was run on a host with a 12th Gen Intel(R) Core(TM) i7-12700H and a single NVIDIA GeForce RTX3060 6GB.
In Section <ref>, we describe the dataset and convert its MIDI files into a piano-roll representation and a 12-dimensional chord representation, respectively. We set the batch size to 128; each arousal fragment used for training has a time length of 32 and each valence fragment a length of 8.
When training our VAE model, we set the number of epochs to 6 and the learning rate to 10^-3, with an exponential decay of 0.999 and a minimum value of 10^-5. To speed up training and reduce the possibility of divergence, we use a teacher-forcing strategy: the teacher-forcing ratio is set to 0.6 for the PianoTree Decoder and 0.5 for the Valence Decoder.
§.§ Baseline Models
Our baseline models are Poly-dis and M-GPT, taken from the papers <cit.> <cit.>. Poly-dis, a state-of-the-art disentanglement-learning-based model, decouples the characterization of harmony and texture. Unlike our rule-based constraint and modeling, this model adjusts the generated accompaniment by learning prior and posterior sampling. M-GPT is a state-of-the-art piano music generation model that can harmonize the melody using auto-regressive principles.
§.§ Emotional Flow Comparison Test
The experiment aims to compare the correlation between the Emotional Flow entered by the user, used as a guide, and the Emotional Flow finally generated by the system. This is an important indicator of the effectiveness of the system's control over the input Emotional Factors.
We evaluate the correlation by comparing the Pearson coefficients between the two sequences, referring to the evaluation metrics in the paper <cit.>, so as to avoid misevaluation due to misalignment of the Emotional Flow.
There are two constraints on the user-input Emotional Flow used as a guide. The first is that there cannot be more than five extreme points per flow curve, excluding the start and end points. This is because the melodies in the sample data do not exceed 90 s in length, and too many extreme points mean too many melodic ups and downs, which is not in accordance with the rules of music composition. The second is that each flow curve must have a certain amount of ebb and flow, because an almost flat curve makes the correlation measure uninformative. Specifically, with V̅ and A̅ the mean values of the valence and arousal curves and T the duration of the melody, we require
1/T∫_0^T (V-V̅)^2 dt > 0.15
1/T∫_0^T (A-A̅)^2 dt > 0.15
The data for the experiment were obtained from the "Samples" mentioned in the section <ref>, with 20 pieces of music to be validated. Four typical cases were selected to visualize the results. The criteria we chose are similar to the idea of control variables, which are the correlation of Arousal Flow in the low arousal and high arousal cases, and the correlation of Valence Flow in the Low Valence and High Valence cases, respectively. We calculated the average valence and arousal correlation values for 20 samples of music. For statistical convenience, high arousal/valence is denoted as High Input Basis (HIB) and low arousal/valence is denoted as Low Input Basis (LIB).
The visualization in Figure <ref>, a combination of a heat map and box plot, presents a comparison of the input and output Emotional Flow. The heat map illustrates the specifics of the Emotional Flow, while the box plot offers a broader statistical comparison. The results reveal that the mean values and quartiles of the Emotional Flow are similar for both the user input and the system output. This suggests that the system-generated Emotional Flow aligns with the user input statistically, regardless of the Emotional Flow's baseline.
We also compared the correlation values of the baseline model and our VAE model, as shown in Table <ref>, where the baseline model is abbreviated as Poly-Dis and our model is called VA-VAE.
It can be seen that the average correlation of our model outperforms the baseline models for both valence flow and arousal flow. The correlation of our VA-VAE also outperforms the baseline model under both HIB and LIB.
§.§ Subjective Musicality test
The subjective musicality assessment was mainly a professional assessment by music experts. A total of 44 junior and senior music majors and graduate students were invited. Each expert was randomly assigned two of the eight sample groups, and each group contained two pieces of music: one with the accompaniment generated by the baseline Transformer model and the other with the accompaniment generated by the VA-VAE model. The two pieces of music were not distinguished by name; in other words, the evaluation was carried out in a completely blind manner.
The music experts evaluated the level of the accompaniment from four angles: 1) whether the overall layout of the composition was appropriate; 2) whether the chords were harmoniously chosen and connected; 3) whether the rhythmic density (articulation points) was tailored to the melody; and 4) whether there was a sub-melody or passing phrase that accentuated the melody. Each evaluation angle is rated quantitatively on a scale of 1 to 5. The four perspectives above are abbreviated as Q1, Q2, Q3, and Q4.
The experimental results are shown below, and the final score for each assessment perspective is based on the weighted average score.
From the experimental results shown in Fig. <ref>, we can see that the weighted average score of our VA-VAE model is higher than that of the baseline models in terms of the overall layout of the texture (Q1), chord selection and connection (Q2), melodic counterpoint (Q3), and melodic underscoring (Q4). The overall arrangement of the accompaniment generated by our model is more reasonable, the chord selection and connection are more fully considered, and the rhythm between the accompaniment and the melody is more organized and regular, which also better supports the melody. The musical accompaniment generated by our model thus has a more artistic character.
Refer to Figure <ref> for a visual representation of the music's attention structure.
The darker the color of the music phrases, the greater the weight of attention. The different "music phrases" gathered by the attention mechanism are separated by dotted lines, so that the music as a whole is well organized.
§.§ Ablation Study
For the ablation study, we abbreviate the control group without relative self-attention and Rule Constraint (RC) as CG, the model after adding relative self-attention as CG+NS, and the model after further adding the Rule Constraint as CG+NSR. The quality of the accompaniment in the ablation experiment is assessed quantitatively. Generic metrics such as pass/fail ratios, null ratios, etc. are less applicable to our piano improvisation accompaniment generation task; the key criteria for evaluating accompaniment are the texture of the accompaniment, the harmony of the accompaniment with the melody, the contribution to the melody, and so on. This kind of evaluation is very similar to that of a translation task, where the harmony of the accompaniment is analogous to the adequacy of the translated utterance, the texture arrangement to its wording, and the contribution to the melody to the synthesis and comparison of information in the translation task. Therefore, we chose the MUTE evaluation index from the paper <cit.>, which is analogous to the F-score in translation, to accurately and quantitatively assess the level of the accompaniment arrangement.
In MUTE, F1 Score(FS) evaluates the "translation accuracy" of the accompaniment from the perspective of 128 pitches and is suitable for evaluating texture, while the F1 Score Pitch Class(FSPC) normalizes the pitches to 12 basic pitches and is therefore suitable for evaluating harmony.
As seen in Table <ref>, the model incorporating relative self-attention and RC outperformed the CG and CG+NS control groups in both the FS and FSPC metrics. In terms of both harmony and texture, the newly incorporated relative self-attention mechanism and rule constraint lead to better-designed, better-orchestrated, and higher-quality accompaniment. Further, we visualized the comparison test of the rule constraints, as shown in Figure <ref>, and found that the rule constraints did indeed shift the range of the accompaniment to better harmonize with the melody.
§ CONCLUSION
In this study, we investigate the generation of musical accompaniment that is guided by emotional flow. We focus on two key aspects of the problem. First, we establish a mechanism for converting emotional streams into music information data and a VAE network architecture that is tailored to emotional quantization data, allowing us to control the network model with emotional factors. Second, we optimize the structural planning of accompaniment generation by introducing the self-similarity and relative self-attention mechanisms. By using rule constraints, we further improve the local and global tonality of the music. This approach of progressing from the whole to the local, layer by layer, allows us to create an automatic accompaniment system that has excellent emotional flow control and high-quality music generation.
In the future, we plan to further improve our research. Currently, the accompaniment is generated by a single instrument and we intend to extend it to include multiple instruments to create an automated orchestra. Additionally, the representation of emotional flow is not yet clear, and we will research on better visualization methods to make the AI technology more user-friendly.
§ ACKNOWLEDGMENT
This research was funded by the Regular Projects of the Humanities and Social Sciences Fund of the Ministry of Education of Grant No.16YJAZH080.
|
http://arxiv.org/abs/2307.04468v1 | 20230710103412 | Badgers: generating data quality deficits with Python | [
"Julien Siebert",
"Daniel Seifert",
"Patricia Kelbert",
"Michael Kläs",
"Adam Trendowicz"
] | cs.LG | [
"cs.LG",
"68",
"D.m"
] |
Generating context-specific data quality deficits is necessary to experimentally assess how data quality affects data-driven (artificial intelligence (AI) or machine learning (ML)) applications.
In this paper we present badgers, an extensible open-source Python library to generate data quality deficits (outliers, imbalanced data, drift, etc.) for different modalities (tabular data, time series, text, etc.). The documentation is accessible at <https://fraunhofer-iese.github.io/badgers/> and the source code at <https://github.com/Fraunhofer-IESE/badgers>.
§ INTRODUCTION
§.§ Context
Applications and systems based on artificial intelligence (AI), machine learning (ML), data mining or statistics (hereafter referred to as data-driven software components) are pieces of software where the decision function is not programmed in a classical way, but is based on one or more models that are either designed automatically (e.g., through learning or mining) or based on domain expertise hypotheses (e.g., business rules or statistical tests).
Assessing the quality of such software components is not trivial, as it depends on several factors, such as the quality and quantity of the data, the type of model and how it is built, the application context, and domain expertise <cit.>.
§.§ Motivation
Data quality deficits (e.g., outliers, imbalanced data, missing values, etc.) can have a variety of effects on the performance of a data-driven model. A theoretical understanding of the robustness of data-driven models against specific data quality deficits is available for only a small number of models. Many can only be empirically tested against specific data quality deficits. To make matters worse, data quality deficits are context and application dependent.
Assessing the robustness of data-driven software components to changes in data quality requires a systematic approach. It also requires the ability to generate specific data quality deficits in order to run tests.
Currently, there are many Python libraries to detect and handle data quality deficits, such as pyod[<https://pyod.readthedocs.io/en/latest/>] <cit.> for detecting outliers, imbalanced-learn[<https://imbalanced-learn.org>] <cit.> for dealing with imbalanced data, autoimpute[<https://autoimpute.readthedocs.io/en/latest/>] for imputing missing values, or great-expectations[<http://docs.greatexpectations.io>] for data validation. In addition, the field of deep learning has provided us with libraries for augmenting training data (see for instance albumentations[<https://albumentations.ai/docs/>] <cit.>). However, there are very few, if any, libraries for generating context-specific data quality deficits.
§.§ Contribution
This paper presents badgers, a Python package dedicated to generating data quality deficits. The aim is to propose a set of standardized and extensible objects (called generators) that can take data as input, infer context information from it, and generate data quality deficits. This package relies on a few design decisions. First, it follows a simple API: each generator provides a generate function (where X is the input features and y is either a vector of class labels, regression targets, or an empty one). Second, badgers aims to support as many data types as possible (e.g., tabular data, images, text, graphs, etc.). This means relying on mainstream and long-established libraries (such as numpy[<https://numpy.org/>], pandas[<https://pandas.pydata.org/>], or scikit-learn[<https://scikit-learn.org/stable/index.html>] for tabular data) whenever possible, or otherwise following reasonable design decisions. Finally, badgers should be structured and implemented so that it can be easily extended.
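To illustrate the generate(X, y) convention and the extensibility goal, the following is a minimal sketch of a custom generator. It is deliberately self-contained and does not subclass an actual badgers base class, whose exact name and import path are not given here; the class name and parameters are illustrative assumptions only.

# Illustrative sketch of a custom generator following the generate(X, y)
# convention; the class is hypothetical and not part of the badgers API.
import numpy as np

class ConstantColumnDropper:
    """Hypothetical generator: blanks out one random column to mimic a
    broken sensor channel. X is a 2D numpy array, y is passed through."""

    def __init__(self, random_state=0):
        self.rng = np.random.default_rng(random_state)

    def generate(self, X, y=None):
        Xt = np.array(X, dtype=float, copy=True)
        col = self.rng.integers(Xt.shape[1])   # pick a column at random
        Xt[:, col] = Xt[:, col].mean()         # replace it with a constant
        return Xt, y

# usage example
X = np.random.default_rng(42).normal(size=(100, 4))
Xt, _ = ConstantColumnDropper().generate(X)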
§.§ Structure of the paper
The paper is organized as follows. Section <ref> presents a short overview of related work. Section <ref> presents the structure and implementation of badgers. Section <ref> shows a couple of application examples. Section <ref> discusses limitations and future work, concludes the paper, and provides links to the project.
§ RELATED WORK
Assessing the quality of ML applications is a broad area of research. In their paper <cit.>, Zhang and co-authors provide a relatively comprehensive overview of testing activities that apply to machine learning. According to their categorization, we can argue that generating data quality deficits falls into the spectrum of test input generation, that is, the generation of specific data with the purpose of evaluating specific aspects of the system under test. The techniques listed range from rule-based to generative AI techniques. Most of the methods presented there are either part of specific test frameworks or have been described in scientific papers. To the best of our knowledge, they are not part of a library dedicated to the generation of data quality deficits.
Data augmentation techniques are typically used in machine learning to enrich the training data set and help train models to achieve a better goodness of fit, generalize better, and become robust to some data quality issues (e.g., noise). They usually consist of specific transformations (like rotations or scaling for images) that, in principle, should not change the semantics of the data. Recent surveys, like <cit.> for images and <cit.> for text, provide an overview of the different techniques used in data augmentation. In Section <ref>, we mentioned existing libraries for data augmentation. Although their main goal is not to specifically generate data quality deficits, data augmentation methods provide interesting algorithms that can be reused for our purpose.
When it comes to generating data quality deficits from existing data, very few papers provide overviews of existing methods and implementations. For instance, <cit.> discusses how to generate outliers from existing data. While the authors seem to have implemented a number of these methods to test them empirically, no implementation is actually available.
<cit.> discusses how to generate missing values. Note that the methods discussed in <cit.> have been implemented in R[ <https://cran.r-project.org/web/packages/missMethods/>] but not in Python.
In summary, a variety of methods for generating data quality deficits exists, but very few of them are available in a dedicated Python library.
§ PROPOSED SOLUTION: BADGERS
§.§ Overview
Badgers is a Python library for generating data quality deficits from existing data. As a basic principle, badgers provides a set of objects called generators that follow a simple API: each generator provides a generate function that takes as arguments X (the input features) and y (the class labels, the regression target, or None) and returns the correspondingly transformed X and y. As an example, Figure <ref> shows the generate function implemented in the Gaussian noise generator, which adds Gaussian white noise to existing data.
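Since the figure itself is not reproduced here, the following sketch shows what such a generate function could look like; the parameter names (e.g., noise_std) are assumptions for illustration and not the library's exact signature.

# Sketch of a Gaussian-noise generate function in the spirit of the figure;
# parameter names are illustrative assumptions, not the exact badgers API.
import numpy as np

def generate(X, y=None, noise_std=0.1, random_state=None):
    rng = np.random.default_rng(random_state)
    X = np.asarray(X, dtype=float)
    Xt = X + rng.normal(loc=0.0, scale=noise_std, size=X.shape)
    return Xt, y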
The code is divided into two main modules: one for the generic core functionality and one for the generators. The core module handles the utilities and elements that are generic to all generators, such as base classes, decorators, and utility functions. The generators themselves are stored under the generators module, which in turn is divided into submodules, each representing a data type (e.g., tabular data, time series, text, etc.). Each submodule hosts the generator implementations dedicated to one specific data quality deficit (such as outliers, drift, missingness, etc.) for a specific data type. Figure <ref> shows the details of the current structure.
§.§ Available features
Badgers is currently under development and the list of features will most probably evolve in the near future. For the moment, the focus has been on tabular data. As shown in Figure <ref>, the tabular data module contains five submodules; the four described below are drift, imbalanced, noise, and outliers. For time series data, the noise and outliers submodules are available. For text data, only one submodule (typos) is available at the moment.
§.§.§ Tabular Data
badgers.generators.tabular_data.drift
Drift happens when some statistical properties of the data change over time <cit.>. Two generators are currently available in this module: a random-shift generator and a class-wise random-shift generator. Figures <ref> and <ref> illustrate how these two generators work. Simply put, the random-shift generator shifts the values of each column independently of one another, which amounts to translating the data (see Figure <ref>). The input features are first standardized (mean = 0, var = 1) and a random number is added to each column. The class-wise random-shift generator applies a similar transformation, but per class: all instances of a given class are translated together, and the translation differs between classes (see Figure <ref>).
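The column-wise shift can be summarized with a short sketch under the assumption that the standardized (and then shifted) data is returned directly; the function name and the shift distribution are illustrative, not the library's exact implementation.

# Minimal sketch of the column-wise random-shift idea: standardize each
# feature, then add a different random offset per column (drift).
import numpy as np

def random_shift(X, shift_std=1.0, random_state=0):
    rng = np.random.default_rng(random_state)
    X = np.asarray(X, dtype=float)
    mean, std = X.mean(axis=0), X.std(axis=0) + 1e-12
    Z = (X - mean) / std                                  # mean 0, var 1
    offsets = rng.normal(scale=shift_std, size=X.shape[1])
    return Z + offsets                                    # translate each column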
badgers.generators.tabular_data.imbalanced
Whereas imbalanced data is usually understood in the context of classification <cit.>, where some classes are over- or under-represented, we use a broader definition: for us, a data set is imbalanced when some statistical properties of the data are over- or under-represented in comparison to a ground truth. Currently, three generators have been implemented: a class-sampling generator, a target-sampling generator, and a feature-sampling generator. Simply put, all of these generators sample the original data set with replacement. The class-sampling generator samples data points belonging to each class to obtain a specified class distribution (e.g., 10% of class 1, 20% of class 2, and 70% of class 3, see Figure <ref>). The target-sampling generator samples data points according to the regression target and expects a function that maps the values of y to a sampling probability (see Figure <ref>). Finally, the feature-sampling generator performs a similar transformation, but the sampling probability now depends upon the input feature values (see Figure <ref>).
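The class-based variant can be illustrated with a sketch of resampling with replacement towards a requested class distribution; the function signature and rounding behavior are assumptions for illustration only.

# Sketch of class-based resampling with replacement: draw indices so that the
# returned data follows a requested class distribution (e.g., 10/20/70 %).
import numpy as np

def resample_classes(X, y, proportions, n_samples=None, random_state=0):
    rng = np.random.default_rng(random_state)
    X, y = np.asarray(X), np.asarray(y)
    n_samples = n_samples or len(y)
    # how many samples to draw per class
    counts = {c: int(round(p * n_samples)) for c, p in proportions.items()}
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=counts[c], replace=True)
        for c in proportions
    ])
    rng.shuffle(idx)
    return X[idx], y[idx]

# usage example: resample_classes(X, y, proportions={0: 0.1, 1: 0.2, 2: 0.7})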
badgers.generators.tabular_data.noise
Currently only one generator has been implemented: a Gaussian noise generator, which adds Gaussian white noise to the input features (see Figure <ref>).
badgers.generators.tabular_data.outliers
Two types of generators are currently available. Generators that directly generate outliers from the input features and generators that first reduce the dimensionality of the input features and then apply an outlier generator from the previous category.
The first category contains four generators, based on z-scores, hyperspheres, histograms, and kernel density estimates, respectively. Figures <ref>, <ref>, <ref>, and <ref> illustrate how these four generators create outliers.
The z-score-based generator creates data points where each feature i gets a value outside the range ]μ_i-3σ_i, μ_i+3σ_i[, where μ_i and σ_i are the mean and the standard deviation of feature i (see Figure <ref>). The hypersphere-based generator creates data points on a hypersphere of center μ and of radius larger than 3σ (see Figure <ref>). The two remaining generators both create data points that belong to regions of low density; the difference between them lies in their density estimation methods: the histogram-based generator approximates regions of low density by computing a histogram of the data (see Figure <ref>), while the kernel-density-based generator uses a kernel density estimator (see Figure <ref>).
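The z-score idea can be sketched as follows, assuming for illustration that the magnitude of the synthetic deviation is drawn uniformly between 3σ and 5σ; the upper bound and the function name are assumptions, not the library's choices.

# Sketch of the z-score idea: synthesize points whose features all lie
# outside the +/- 3 sigma band of the original data.
import numpy as np

def zscore_outliers(X, n_outliers=10, random_state=0):
    rng = np.random.default_rng(random_state)
    X = np.asarray(X, dtype=float)
    mean, std = X.mean(axis=0), X.std(axis=0)
    # draw |z| uniformly in [3, 5] and a random sign per feature
    magnitude = rng.uniform(3.0, 5.0, size=(n_outliers, X.shape[1]))
    sign = rng.choice([-1.0, 1.0], size=(n_outliers, X.shape[1]))
    return mean + sign * magnitude * std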
The decomposition-based generator belongs to the second category. It first standardizes the data and applies a dimensionality reduction technique (so far badgers supports scikit-learn transformers that provide an inverse transform, such as PCA). The outliers are then generated using one of the generators mentioned above. Finally, the standardization and the dimensionality reduction are inverted.
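A minimal sketch of this second category follows, using scikit-learn's StandardScaler and PCA and reusing the z-score idea in the reduced space; the specific outlier draw in the low-dimensional space is an illustrative assumption.

# Sketch of the decomposition-based approach: reduce dimensionality, generate
# outliers in the reduced space, then map them back to the original space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def decomposition_outliers(X, n_outliers=10, n_components=2, random_state=0):
    rng = np.random.default_rng(random_state)
    scaler = StandardScaler().fit(X)
    pca = PCA(n_components=n_components, random_state=random_state)
    Z = pca.fit_transform(scaler.transform(X))
    # generate z-score style outliers in the reduced space
    mean, std = Z.mean(axis=0), Z.std(axis=0)
    sign = rng.choice([-1.0, 1.0], size=(n_outliers, n_components))
    Z_out = mean + sign * rng.uniform(3.0, 5.0, (n_outliers, n_components)) * std
    # invert the dimensionality reduction and the standardization
    return scaler.inverse_transform(pca.inverse_transform(Z_out))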
§.§.§ Time series data
Time series data is currently supported in badgers in the form of numpy arrays and pandas dataframes.
badgers.generators.time_series.noise
Currently only one generator has been implemented: a Gaussian noise generator, which adds Gaussian white noise to the input features X. The implementation is the same as for tabular data. Figure <ref> illustrates this generator.
badgers.generators.time_series.outliers
Here, some existing instances are replaced with outliers. Currently only one generator is implemented: a local z-score generator. It creates locally extreme values by changing the values of some randomly selected data points x(t_i) ∈ X (see Figure <ref>). The new values are sampled outside the range ]μ_j,Δ-3σ_j,Δ, μ_j,Δ+3σ_j,Δ[, where μ_j,Δ and σ_j,Δ are the mean and the standard deviation of the j^th feature computed over the local time interval Δ = [t_i - n, t_i + n].
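For illustration, the following sketch injects locally extreme values into a univariate series (the multivariate case works feature by feature); the window size, the 3σ to 5σ magnitude range, and the function name are assumptions, not the library's exact implementation.

# Sketch of locally extreme values: pick random time indices and replace the
# value with one outside +/- 3 sigma of a local window around that index.
import numpy as np

def local_extreme_values(x, n_outliers=5, window=10, random_state=0):
    rng = np.random.default_rng(random_state)
    x = np.array(x, dtype=float, copy=True)
    for t in rng.choice(len(x), size=n_outliers, replace=False):
        lo, hi = max(0, t - window), min(len(x), t + window + 1)
        mu, sigma = x[lo:hi].mean(), x[lo:hi].std()
        sign = rng.choice([-1.0, 1.0])
        x[t] = mu + sign * rng.uniform(3.0, 5.0) * (sigma + 1e-9)
    return x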
§.§.§ Text
Text data is currently supported in badgers in the form of lists of strings.
badgers.generators.text.typos
For now, only one generator is implemented: a typo generator, which randomly swaps adjacent letters in words longer than three letters, leaving the first and last letters untouched. As an illustration, the sentence "the quick brown fox jumps over the lazy dog" becomes "the qucik brwon fox jupms oevr the lzay dog" after applying this generator.
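The swap can be sketched in a few lines; the swap probability parameter and the seeding are assumptions added for illustration.

# Sketch of the swap-typo idea: in words longer than three letters, swap one
# random pair of adjacent inner letters (first and last letters untouched).
import random

def swap_typos(sentence, p=1.0, seed=0):
    rng = random.Random(seed)
    words = []
    for w in sentence.split():
        if len(w) > 3 and rng.random() < p:
            i = rng.randrange(1, len(w) - 2)           # inner position
            w = w[:i] + w[i + 1] + w[i] + w[i + 2:]    # swap w[i] and w[i+1]
        words.append(w)
    return " ".join(words)

print(swap_typos("the quick brown fox jumps over the lazy dog"))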
§ EXAMPLES
We implemented several examples in the form of notebooks (accessible at <https://fraunhofer-iese.github.io/badgers/> under the tutorials section). The next two figures illustrate the use of a single generator (Figure <ref>) as well as the pipelining of several generators (Figure <ref>).
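Because every generator exposes the same generate(X, y) interface, pipelining can be sketched as a simple loop; an actual badgers pipeline utility may exist, but its API is not assumed here.

# Sketch of chaining generators through the shared generate(X, y) interface.
def run_pipeline(generators, X, y=None):
    for gen in generators:
        X, y = gen.generate(X, y)
    return X, y

# usage example: Xt, yt = run_pipeline([noise_generator, drift_generator], X, y)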
§ CONCLUSION
This paper gave an overview of badgers, a Python package dedicated to generating data quality deficits.
Badgers is at a relatively early development stage. Until now, our focus has been on developing the library structure, the API, and some relatively simple generators. The goal was first and foremost to show the potential of such a library.
This library has been used in internal projects, primarily to conduct robustness tests and to augment data. By open-sourcing this library, we hope not only to provide a tool that eases robustness testing of data-driven applications but also to foster discussion on the topic of generating data quality deficits.
Future work will focus both on developing new generators and on testing the applicability of this library in the context of data science projects. Discussions and design decisions will be needed to prioritize the work and to decide how to improve the support of other types of data (for instance images, graphs, or geolocated data).
Finally, badgers can be installed with the Python package installer pip[<https://pip.pypa.io/en/stable/>]: pip install badgers.
The full documentation is accessible at <https://fraunhofer-iese.github.io/badgers/>.
The source code for is available under the BSD-3 license at <https://github.com/Fraunhofer-IESE/badgers>.
|
http://arxiv.org/abs/2307.06258v1 | 20230712155548 | Connected Dependability Cage Approach for Safe Automated Driving | [
"Adina Aniculaesei",
"Iqra Aslam",
"Daniel Bamal",
"Felix Helsch",
"Andreas Vorwald",
"Meng Zhang",
"Andreas Rausch"
] | cs.RO | [
"cs.RO",
"cs.SE",
"D.2.1; D.2.4"
] |
Connected Dependability Cage Approach for Safe Automated Driving
Adina Aniculaesei, Iqra Aslam, Daniel Bamal, Felix Helsch, Andreas Vorwald, Meng Zhang, and Andreas Rausch
Institute for Software and Systems Engineering, TU Clausthal, Clausthal-Zellerfeld, 38678, Germany
{adina.aniculaesei, iqra.aslam, daniel.bamal, felix.helsch, meng.zhang, andreas.rausch}@tu-clausthal.de, [email protected]
Automated driving systems can be helpful in a wide range of societal challenges, e.g., mobility-on-demand and transportation logistics for last-mile delivery, by aiding the vehicle driver or by taking over the responsibility for the dynamic driving task partially or completely. Ensuring the safety of automated driving systems is no trivial task, even more so for systems of SAE Level 3 or above. To achieve this, mechanisms are needed that can continuously monitor the system's operating conditions, also denoted as the system's operational design domain. This paper presents a safety concept for automated driving systems which uses a combination of onboard runtime monitoring via a connected dependability cage and off-board runtime monitoring via a remote command control center to continuously monitor the system's ODD. On the one hand, the connected dependability cage fulfills a double functionality: (1) to continuously monitor the operational design domain of the automated driving system, and (2) to transfer the responsibility in a smooth and safe manner between the automated driving system and the off-board remote safety driver, who is present in the remote command control center. On the other hand, the remote command control center enables the remote safety driver to monitor and take over control of the vehicle. We evaluate our safety concept for automated driving systems in a lab environment and on a test field track and report on results and lessons learned.
§ INTRODUCTION
Automated driving systems (ADSs) have become more present in a variety of applications that address current societal challenges, e.g., mobility-on-demand and last-mile delivery logistics, by assisting the driver to carry out the dynamic driving task (DDT) or taking over the responsibility for the DDT partially or completely. Ensuring that automated driving systems operate safely both for the system and its environment is not a trivial task.
The standard SAE J3016 <cit.> defines six levels of automation for automotive systems, from SAE Level 0 (SAE L0) to SAE Level 5 (SAE L5). The first three levels of automation refer to driver support features, with the driver being in charge of supervising and partially carrying out the DDT as well as supervising the vehicle's environment. For ADSs of SAE L1 and L2, it is important that, if the system reacts, its reaction is correct. The safety requirements of the ADS are the main focus, and the system behavior is designed to be conservative in order to make the system fail-safe.
Starting with SAE L3, the ADS is in charge of executing the DDT and of supervising the vehicle's environment. Automated driving systems of SAE L3 and L4 are activated and can execute the DDT only when certain operating conditions are satisfied. If the operating conditions are no longer satisfied, the driving system requires the intervention of the driver. While at SAE L3 the driver is still required to be ready to intervene and take over control of the vehicle, starting with SAE L4 the ADS must be ready to trigger the necessary measures that can bring the vehicle to a safe state, e.g., pulling over on the side of the road. For systems of SAE L3 and above, it is important that the system reacts in all situations and that its reaction is correct. In this case, both safety and liveness requirements are in focus and the goal is to make the system fail-operational.
Various methods for verification and validation are needed in order to ensure that ADSs of SAE L3 or above can operate safely in a realistic road environment. Automated driving systems undergo extensive assessment to demonstrate compliance with functional safety (FuSa) standards, such as ISO 26262 <cit.>. However, conventional safety standards are no longer sufficient for the next generation of ADSs and for fully automated driving. Complementary to ISO 26262, the standard ISO 21448 <cit.> addresses the safety of the intended functionality (SOTIF) of the ADS, which is equivalent to the absence of unreasonable risk due to hazards resulting from functional insufficiencies. These insufficiencies result from the ADS operating in an environment which does not comply with its operational design domain (ODD) specification. Thus, in addition to methods that ensure compliance with FuSa, innovative approaches are needed to demonstrate SOTIF for ADSs of SAE L3 and above. One approach that contributes to ensuring and demonstrating SOTIF is runtime monitoring of the ODD.
This paper proposes an integrated safety concept for ADSs centered around the notion of a connected dependability cage, which is able to monitor the safety requirements of the ADS during the system's operation in its environment. This safety concept extends the concept of dependability cage, first introduced in <cit.> and then refined in <cit.>. In its initial concept, a dependability cage consists of two main components: a qualitative monitor and a quantitative monitor (cf. <cit.>, <cit.>). The qualitative monitor checks during system operation the correctness of the system behavior with respect to the defined safety requirement specification (cf. <cit.>). If the qualitative monitor detects a violation of the safety requirement specification, then this result is recorded in a knowledge database (cf. <cit.>). In turn, the quantitative monitor evaluates during system operation the current driving situation of the ADS and checks whether the system is still in a context that was verified through various methods, e.g., system testing, at design time (cf. <cit.>). If the situation has not been tested at design time, then the current driving situation is logged in a knowledge database as a novel situation that occurred during system operation (cf. <cit.>). The results of the qualitative monitor and the quantitative monitor are used in a two-fold manner. In case of warnings from the two monitors, these results are used to compute possible reactions of the system that can bring the system back into a safe state, e.g., emergency braking. In addition, these results are used in further development iterations to improve the system development artifacts during system design, e.g., better test cases to improve the test coverage for testing the qualitative monitor or better training data for the training of the quantitative monitor.
The safety concept proposed in this paper consists of a connected dependability cage and a remote command control center (remote CCC). The runtime monitoring of the system's ODD occurs both onboard the ego-vehicle and off-board. The connected dependability cage monitors the ODD onboard the ego-vehicle using input data from its sensors. The off-board monitoring is done by a remote safety driver who supervises the ego-vehicle through the remote CCC. This safety concept is realized through a modular software architecture which allows reconfiguration of the ADS based on the monitoring results of the connected dependability cage and the instructions given by the remote safety driver from the remote CCC. The safety concept is evaluated in a lab environment using a model car and on a test field track with a full-size vehicle. The use case scenario used for the concept evaluation pertains to the application domain of parcel delivery logistics and was defined together with academic and industry partners in the project VanAssist.
The rest of the paper is structured as follows. Section <ref> reviews relevant related work. In Section <ref>, the integrated safety concept for ADSs is presented in detail. Section <ref> introduces the case study and the project VanAssist. The evaluation in the lab environment and on the test field track is presented in detail in Section <ref>. Section <ref> concludes this paper and points out interesting future research directions.
§ RELATED WORK
Our brief literature review focuses on methods for runtime monitoring of properties of autonomous safety-critical systems, safety architectures for safety-critical applications, and approaches that use the concept of a safety cage to ensure system safety.
Schirmer et al. <cit.> discuss the challenges of monitoring safety properties of autonomous aircraft systems, including those that involve temporal and spatial aspects. The authors recognize the need for runtime safety monitors to be integrated with the system under analysis and thus to have access to the overall system. Furthermore, they propose that the monitoring properties follow the hierarchy of the system under analysis. Thus, different monitoring properties can be formulated at different system hierarchy levels. They focus on the hierarchy levels introduced in the SAE standard ARP4761, i.e., item, system, and aircraft (cf. <cit.>), and extend these to include mission and operation levels for autonomy. The monitoring properties are classified in different categories, i.e., temporal, statistical, spatial and parameterized, and different formal specification languages are used to formalize properties situated at different levels in the system hierarchy (cf. <cit.>).
The integration of the runtime safety monitors with the system under analysis must be supported by the system's safety architecture. The access of the runtime monitors to the overall system can be ensured only through appropriate interfaces between the monitors and the system under analysis. Various safety architectures have been proposed over the years for automated safety-critical systems. A well-known safety architecture is the Simplex architecture, introduced by Sha in <cit.>. The system has a high-assurance controller and a high-performance controller, which can fulfill the task of the system independently of each other, as well as a decision module that monitors the system state. The decision module switches from the high-performance controller to the safety controller whenever the system approaches an unsafe state (cf. <cit.>).
Jackson et al. <cit.> introduce Certified Control, a variation of the Simplex architecture. A monitor checks the actions of the main controller before forwarding them to the actuators and blocks any action that is considered unsafe or replaces it with a safer action (cf. <cit.>). The decision to block an action of the main controller is taken based on a certificate generated by the latter. This certificate contains evidence that the proposed action is safe. Once the certificate is approved by the monitor, the action of the main controller is forwarded to the actuators (cf. <cit.>). The concept of Certified Control is illustrated with a certificate for LiDAR data and its formal verification through a Hoare-style proof carried out by hand (cf. <cit.>). In <cit.>, Bansal et al. propose Synergistic Redundancy as a safety architecture for complex cyber-physical systems (CPS), e.g., autonomous vehicles (AV). The Synergistic Redundancy architecture decouples the mission layer from the safety assurance layer of the system. The mission layer executes all tasks necessary to fulfill the system mission, e.g., perception, planning, and control. The safety layer runs in parallel to the mission layer and communicates with it over predefined interfaces. The safety layer provides algorithms for deterministic guarantees as well as fault handlers that identify faults and take corrective actions (cf. <cit.>). The Synergistic Redundancy concept is demonstrated for the safety-critical function of obstacle detection and collision avoidance (cf. <cit.>). Phan et al. <cit.> present a component-based variant of the Simplex architecture to ensure the runtime safety of component-based CPSs. The proposed approach combines the principles of the Simplex architecture with assume-guarantee reasoning in order to formally prove system guarantees with respect to energy safety, collision freedom, and mission completion for a ground rover (cf. <cit.>).
Considerations about the safety architecture of an automated safety-critical system become even more important when part of the system functionality is realized with artificial intelligence (AI) or machine learning (ML) components. Fenn et al. <cit.> take a closer look at common architectural patterns used in traditional aviation systems and discuss the implications for the safety assurance of the whole system when AI/ML components are integrated in the system architecture.
In <cit.>, Costello and Xu propose a new approach to certifying the safety of autonomous systems in the naval aviation domain. The proposed safety architecture consists of a runtime assurance (RTA) input monitor and a controller/safety monitor. The current aircraft state and a projection of the aircraft state into the future are passed as inputs to the RTA input monitor, which processes these further for the safety monitor. In turn, the safety monitor determines if the aircraft will violate the clearance envelope for autonomous behavior. If the aircraft violates the clearance envelope, then the safety monitor switches the air vehicle guidance to deterministic behavior.
Borg et al. <cit.> use a safety cage to carry out validity and safety checks for an ML-based pedestrian automatic emergency braking system, called SMIRK, whose task is to detect pedestrians and avoid any collisions with them. The safety cage receives radar/LiDAR and camera input data and produces an assessment of whether a collision with a pedestrian is imminent or not. On one side, the safety cage uses an ML-trained anomaly detector to analyze the input camera images with potentially detected pedestrians in order to find any anomalies with respect to its training data (cf. <cit.>). On the other side, the safety cage uses a rule engine to perform heuristics-based sanity checks, e.g., in order to determine whether the perceived situation is consistent with the laws of physics (cf. <cit.>). The authors use SMIRK as an example system to demonstrate the systematic construction of a safety case, including the system architecture, the safety requirements, and the test scenarios used to ensure the safety of the system (cf. <cit.>).
Our paper builds on a foundation of research developed in several previous publications. The concept of dependability cage was first proposed in <cit.> together with the challenges of engineering hybrid AI-based ADSs that emerge with respect to the dependability and safety assurance of these systems. This concept has already been applied on a lane change assistance system (LCAS) (cf. <cit.>, <cit.>, <cit.>, <cit.>).
Recently, the concept of the connected dependability cage has been introduced as an extension of the initial notion of dependability cage (cf. <cit.>). Its application in a scenario of parcel delivery logistics in the project VanAssist has been described in our previous work in <cit.>. In <cit.>, the focus was placed on improving the algorithm for the computation of the safe zone around the ego-vehicle, in comparison to the one used in <cit.>, in order to address the challenges in the project VanAssist. For additional details on how the safe zone of an AV is defined, the reader is referred to Section <ref> of this paper.
Compared to our previous work in <cit.>, in this paper we describe in more detail the purpose and functionality of the remote CCC, also developed in the VanAssist project, as well as the mechanism which enables the seamless share and transfer of responsibility over the DDT between the ADS and the remote safety driver in the remote CCC.
§ SAFETY CONCEPT FOR AUTOMATED DRIVING SYSTEMS VIA CONNECTED DEPENDABILITY CAGE
The approach of connected dependability cage is depicted in Figure <ref> and brings together two main systems: (1) an onboard runtime monitoring system of the ADS through the connected dependability cage and (2) an off-board runtime monitoring system through the remote CCC and a human remote safety driver.
§.§ Onboard Runtime Monitoring of ADSs with the Connected Dependability Cage
The connected dependability cage has two major components: (1) a qualitative monitor, which detects the violation of the ADS's safety requirements and (2) a mode control component in charge of the fail-operational reaction of the automated driving system in case the qualitative monitor detects a safety requirements violation. Notice that, in comparison to the initial dependability cage concept, the connected dependability cage presented in this paper does not include the quantitative monitor, as this component has not been implemented in the VanAssist project.
§.§.§ Qualitative Monitor.
There are two safety requirements formulated for the ADS, which the qualitative monitor must continuously check during the system operation:
SR1: The ADS shall not cause a collision of the ego-vehicle with static obstacles in the vehicle's environment.
SR2: The ADS shall operate only if the image data provided by the ego-vehicle's camera sensor is valid.
In order to check the safety requirements SR1 and SR2 during system operation, three components are implemented in the qualitative monitor: (1) a component which computes a safe zone around the ego-vehicle, (2) a LiDAR detector, and (3) a camera validator. The LiDAR detector is used to monitor SR1, using as input the computed safe zone and the data provided by the LiDAR sensors of the ego-vehicle. The safe zone is computed based on the current velocity and steering angle of the ego-vehicle. It consists of two separate areas, denoted as clear zone and focus zone, with the focus zone being computed on top of the clear zone as a constant positive overhead and therefore always larger than the clear zone. These areas mark danger zones around the ego-vehicle based on its braking path. The LiDAR detector monitors SR1 by checking whether there are obstacles in the clear zone or in the focus zone. If the focus zone is free of obstacles, then the clear zone is also free of obstacles. In turn, the camera validator is used to monitor the safety requirement SR2. This component validates the camera sensor data by quantifying the sharpness of a camera image. If the sharpness falls below a given threshold value, the input image is classified as invalid.
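The three checks can be sketched as follows. This is a simplified illustration, not the project's exact algorithms: it assumes a straight-ahead rectangular zone (steering ignored), a simple braking-distance heuristic with illustrative deceleration and reaction-time values, a fixed focus-zone overhead, and a Laplacian-variance sharpness measure computed with OpenCV.

# Illustrative sketch of the safe zone, LiDAR check, and camera validity check.
import numpy as np
import cv2

def zone_lengths(v, decel=3.0, reaction_time=0.3, focus_overhead=1.0):
    """Clear/focus zone lengths [m] from the current velocity v [m/s];
    deceleration and reaction time are assumed values."""
    clear = v * reaction_time + v ** 2 / (2.0 * decel)   # braking path
    return clear, clear + focus_overhead                 # focus = clear + const

def obstacle_in_zone(points, zone_length, half_width=1.0):
    """points: Nx2 array of LiDAR hits (x forward, y left) in the vehicle frame."""
    ahead = (points[:, 0] > 0.0) & (points[:, 0] < zone_length)
    inside = ahead & (np.abs(points[:, 1]) < half_width)
    return bool(inside.any())

def camera_image_valid(image_bgr, sharpness_threshold=100.0):
    """Classify an image as valid if its Laplacian variance (sharpness)
    exceeds a threshold; the threshold value is an assumption."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return sharpness >= sharpness_threshold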
§.§.§ Mode Control.
The mode control component triggers a fail-operational reaction, in case the qualitative monitor detects the violation of at least one of the two safety requirements formulated in the previous section. To compute the appropriate fail-operational reaction, the mode control component takes as inputs the results of the LiDAR detector and of the camera validator as well as the requests for change of the cage mode and of the driving mode received from the CCC. The computed fail-operational reaction consists of a new cage mode and a new driving mode. The dependability cage has two modes: on and off. In turn, the automated driving system has five driving modes:
* Fully Autonomous Driving represents an autonomous driving function without restrictions, but with stricter safety criteria, e.g., wider safe zone around the ego-vehicle.
* Limited Autonomous Driving triggers an autonomous driving function that is restricted in its freedom, e.g., driving with reduced velocity, but is safeguarded by weakened safety criteria, e.g., smaller safe zone around the ego-vehicle.
* Remote Manual Driving represents driving by a human remote safety driver.
* In-Place Manual Driving is driving by a safety driver present in the car.
* Emergency Stop implements a driving function that triggers emergency braking on the ego-vehicle.
The responsibility for the dynamic driving task during the operation of the ego-vehicle is shared between the human safety driver and the ADS. Depending on the driving mode computed by the mode control component, the responsibility for the DDT is carried either by the safety driver or by the ADS individually, or the safety driver shares the responsibility for the DDT cooperatively with the ADS. Thus, the ADS is responsible for carrying out the DDT on its own when the driving mode is Fully Autonomous Driving.
The safety driver is in charge of the DDT when the driving mode is set to Remote Manual Driving, In-Place Manual Driving, or Emergency Stop. The driving mode Emergency Stop can be requested by the remote safety driver via the control panel of the remote CCC. It can also be triggered when the cage mode is on and the qualitative monitor has detected a violation of at least one of the two system safety requirements. The release of the emergency brake can be performed only by the safety driver, via a request for one of the other four possible driving modes, i.e., Remote Manual Driving, In-Place Manual Driving, Limited Autonomous Driving, or Fully Autonomous Driving.
The safety driver shares the responsibility for the DDT with the ADS when the driving mode is set to Limited Autonomous Driving. This is because adjusting the parameters of the autonomous driving system in order to restrict its freedom, as well as weakening its safety criteria, requires the careful oversight of the remote safety driver. The safety driver can request the driving mode Limited Autonomous Driving from the control panel of the remote CCC.
The mode control component is designed as a SCADE state machine using the ANSYS SCADE tool chain. This way, we ensure a verifiably safe transfer of responsibility for the DDT and a smooth cooperation between the ADS and the remote safety driver.
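To convey the switching logic described above, the following Python sketch condenses the mode-control rules into a small state machine; the actual component is a SCADE state machine, and the rules below are only an approximation, assuming that a violation reported by the qualitative monitor while the cage is on always forces an emergency stop and that the emergency stop is released only by an explicit driver request.

# Simplified sketch of the mode-control logic (not the SCADE implementation).
DRIVING_MODES = {"FULLY_AUTONOMOUS", "LIMITED_AUTONOMOUS",
                 "REMOTE_MANUAL", "IN_PLACE_MANUAL", "EMERGENCY_STOP"}

class ModeControl:
    def __init__(self):
        self.cage_on = True
        self.driving_mode = "EMERGENCY_STOP"

    def step(self, monitor_violation, requested_cage, requested_mode):
        # The cage mode follows the remote CCC request directly.
        if requested_cage is not None:
            self.cage_on = requested_cage
        # An emergency stop may only be released by an explicit driver request.
        if self.driving_mode == "EMERGENCY_STOP":
            if requested_mode in DRIVING_MODES - {"EMERGENCY_STOP"}:
                self.driving_mode = requested_mode
        elif requested_mode in DRIVING_MODES:
            self.driving_mode = requested_mode
        # A detected safety violation triggers the fail-operational reaction.
        if self.cage_on and monitor_violation:
            self.driving_mode = "EMERGENCY_STOP"
        return self.cage_on, self.driving_mode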
§.§ Off-board Runtime Monitoring of ADSs through the Remote Command Control Center
The remote CCC allows the remote safety driver to visualize the state of the autonomous ego-vehicle based on the sensor data received from its LiDAR and camera sensors as well as the inputs received from the connected dependability cage. Figure <ref> shows an overview of the graphical user interface (GUI) of the remote CCC.
On the left side of the display in the command control center there is a summary containing the following attributes: (1) the sensor validity, (2) the mission state, (3) the driving mode, and (4) the cage state. The sensor validity is a Boolean flag which represents the assessment made by the qualitative monitor with respect to the validity of the camera input images. Regarding the mission state, a distinction is made between the states inactive, active, blocked, and completed. The inactive state means that the ego-vehicle is not currently performing any driving task. The state active means that the vehicle is currently carrying out a driving task, which is not yet completed. If a problem occurs during the current driving task ("Fail-Operational Mode"), which prevents the ego-vehicle from completing it, the state blocked is inferred. After the vehicle has finished its driving task, the mission state is considered to be completed. The driving mode refers to the current driving mode of the ADS, while the cage state indicates whether there are any objects detected inside the vehicle's safe zone or not. All these attributes describe together the state of the ego-vehicle. The possible values of each attribute are listed in Table <ref>.
In the center of the remote CCC display, there is an integrated representation of the LiDAR sensors and the safe zone, which helps the remote safety driver to quickly and intuitively assess the current driving situation of the ego-vehicle. The blue rectangle in the center of the integrated display shows an over-approximated representation of the vehicle's circumference, which is intended to help the safety driver with orientation. Surrounding the representation of the vehicle's circumference is the visualization of the safe zone, which is computed as a function of the vehicle's current speed and steering angle. Therefore, the safe zone increases in size with the vehicle's speed and changes its shape, i.e., rectangle or circle segment, depending on the current steering angle of the ego-vehicle. The green area represents the clear zone and the orange area the focus zone. The black dots surrounding the vehicle represent the point cloud measured by the vehicle's LiDAR sensors. The camera sensor data is visualized to the left of the LiDAR visualization panel.
Different controls are shown in the upper right of the remote CCC display: car controls, cage mode, and driving mode. On the center right of the display, a mini-map of the ego-vehicle's environment is shown. The list of destinations/missions is displayed in the bottom right of the remote CCC display.
§ A CASE STUDY IN PARCEL DELIVERY LOGISTICS
The distribution of goods in urban areas is often carried out by large vehicles, i.e., "Sprinter class" vehicles that are used during the last mile of delivery. The classic parcel delivery process involves the postman going door-to-door and stopping often to reach the different customers' delivery addresses. Before making a delivery to an end customer, the postman needs to find an appropriate parking spot, which is not always easy in crowded urban areas. After parking his vehicle, the postman removes the parcel from the vehicle and delivers it to the end customer. The postman also needs to bring back on foot any parcels that he could not deliver to the respective end customers. Besides being highly inefficient, the classic parcel delivery process is also prone to causing traffic congestion in urban areas, environmental pollution, as well as wear and tear of the delivery vehicle.
In order to address the issues mentioned above, the collaborative project VanAssist[https://www.vanassist.de/] aimed to develop an integrated vehicle and the corresponding system technology that enables largely emission-free and automated delivery of goods in urban centers. The VanAssist project brought together research institutes from four German universities, i.e., Institute for Reliable Embedded Systems and Communication Electronics at HS Offenburg (HSO), Institute for Vehicle Technology (IfF) at TU Braunschweig, Institute for Software and Systems Engineering (ISSE) at TU Clausthal, and Institute for Enterprise Systems (InES) at University of Mannheim, as well as four industrial partners, i.e., BridgingIT GmbH (BIT), DPD Germany GmbH, IAV GmbH, and Ibeo Automotive Systems GmbH. The overall objective of the project was to develop an automated driving system in an electric vehicle, equipped with an intelligent delivery system that is monitored by onboard and off-board monitoring systems. This intelligent delivery vehicle assists the postman, automatically moving to the next delivery point, reducing the postman's effort and enabling continuous movement along the planned route.
This paper presents the contribution of ISSE at TU Clausthal in the VanAssist project. This is the development of a safety concept for automated driving systems, which can handle critical situations or errors and can ensure the safe operation of the automated vehicle. The safety concept consists of two monitoring systems that interact continuously with each other and enable a seamless sharing of responsibility over the dynamic driving task between the automated driving system and the safety driver. These two systems are: (1) an onboard monitoring system (connected dependability cage) that monitors the vehicle and (2) an off-board monitoring system (command control center) that remotely supervises the entire fleet of vehicles as well as the transfer of responsibility over the dynamic driving task between the automated driving system and the safety driver. A detailed presentation of this safety concept is given in Section <ref> of this paper.
§ EVALUATION AND DISCUSSION OF RESULTS
This section discusses the evaluation of the concept of connected dependability cage presented in Section <ref>. In order to evaluate this concept, we defined an overall use case scenario (cf. Section <ref>). Different sub-scenarios are then extracted from it and used to test the connected dependability cage. We carried out a qualitative evaluation in our lab environment with a model car (cf. Section <ref>) and on a test field track with a full-size car (cf. Section <ref>).
§.§ Overall Use Case Scenario
The use case scenario used for the evaluation of our concept is in the application domain of parcel delivery logistics. A visual overview of the scenario is shown in <Ref>.
The scenario consists of several steps, each of which constitutes a sub-scenario of the overall use case scenario. In total, the overall use case scenario consists of eight sub-scenarios, which are denoted by the unique identifiers 1 to 8. To begin with, the AV drives autonomously from the parking lot (1) to the depot (2), where it picks up packages. From there it drives to the postman's house (3). The postman enters the AV and drives to the home of the first parcel receiver (4). Arriving at the receiver's home, the postman leaves the car for his first delivery round through a pedestrian zone, while the AV drives around the pedestrian zone to meet up with the postman at the first meeting point (6).
On its way to the first meeting point (6), the AV encounters a narrowing in the road and the dependability cage triggers an emergency stop (5). After analyzing the situation, the remote safety driver switches the AV to limited autonomous driving, which limits the speed of the AV and thus uses a smaller safe zone. The AV passes the narrowing using the limited autonomous driving mode and drives to the first meeting point (6). After coming back from his first delivery round, the postman meets the AV at the first meeting point and retrieves the second batch of parcels out of the AV for his second delivery round. The AV then continues its autonomous drive to the second meeting point (8).
On the way to the second meeting point, children playing ball run onto the street and the dependability cage triggers an emergency stop (7). Supervising the situation through the CCC, the remote safety driver waits until the children have left the road before switching back to the fully autonomous driving mode (7). Once switched back to fully autonomous driving mode, the AV continues its trip to the second meeting point (8).
While the AV is waiting for the postman at the second meeting point, another emergency stop is triggered. Analyzing the situation through the sensor visualization panels in the CCC, the remote safety driver recognizes that the front camera is blocked by leaves and informs the postman about this issue (8). Arriving at the second meeting point from his second delivery round, the postman removes the leaves from the camera and gets in the AV (8). The remote safety driver switches the AV back to fully autonomous driving and the AV, together with the postman, drives back to the parking lot (1). This concludes the overview of our overall use case scenario.
§.§ Evaluation in a Lab Environment
For the evaluation in our lab environment, we used a model vehicle with a scale of 1:8 (cf. <cit.>) equipped with several sensors, which are used to analyze the ego-vehicle state and that of its environment, i.e., LiDAR, camera, ultrasonic sensors, GPS, and IMU. The track was built out of modular black mats of size 1 m × 1 m, with street markings and track walls (cf. <Ref>).
In the rest of this section, we extract three sub-scenarios from the overall use case scenario and show on an exemplary basis how we used these to evaluate the connected dependability cage, i.e., test the different components of the qualitative monitor and the human-machine interaction with the help of the remote CCC.
§.§.§ Sub-scenario 1: Testing the Human-Machine Interaction.
The remote safety driver uses the different panels of the remote CCC to interact with the AV. In order to start supervising an AV, the remote safety driver uses the car selection panel to select an AV out of the list of AVs displayed on the panel (cf. <Ref>). Once he has selected an AV, the remote safety driver is provided with a very condensed overview of the selected AV's current state through the attributes defined in Table <ref>.
A remote CCC can supervise several AVs at a time. However, for the supervision of larger AV fleets it may be necessary to deploy several remote CCCs spanning a wider area, each having its own jurisdiction. An AV can be controlled by a remote safety driver from a remote CCC only when the control rights over the respective AV are transferred to that remote CCC. The remote safety driver can transfer the control rights to his CCC by using the controls in the car control panel (cf. <Ref>). With the control rights over an AV transferred to the remote CCC, the selection of the AV driving modes is also enabled. The remote safety driver then has access to the driving mode selection and can choose an appropriate driving mode, e.g., fully autonomous driving. This concept enables passing AV control between different remote CCCs, which are in charge of supervising a large fleet of vehicles.
Before the AV starts driving on the first leg of its trip, the remote safety driver switches the cage on and requests the switch to fully autonomous driving mode (cf. <Ref>). The remote safety driver then selects the first destination of the AV out of the destination list panel and activates it (cf. <ref>). In order to track the progress of the AV, the remote safety driver uses the mini map panel to see the current position of the AV (red) and the positions of the destinations (blue) on the track (cf. <Ref>).
§.§.§ Sub-scenario 7: Testing the Safe Zone and the LiDAR Detector.
The safe zone and the LiDAR detector are components of the qualitative monitor, which are used to monitor the safety requirement SR1 by detecting any obstacles in the driving path of the ego-vehicle and triggering an emergency stop to prevent a collision. The remote safety driver is able to visualize the safe zone calculated around the ego-vehicle and the LiDAR points in the sensor visualization panel (cf. <Ref>). In the situation depicted in the sensor visualization panel, a significant number of LiDAR points is visible inside the safe zone (cf. <Ref>), which leads to the triggering of the emergency stop. Since the AV never switches automatically from emergency stop to fully autonomous driving, it is the responsibility of the remote safety driver to request the switch to fully autonomous driving once the situation is safe again (cf. <Ref>). In addition to the previously described panels, the remote safety driver also uses the camera visualization panel of the CCC to assess the current situation in the AV's environment (cf. <Ref>).
§.§.§ Sub-scenario 8: Testing the Camera Validator.
The camera validator is a component of the qualitative monitor, which is used to monitor the safety requirement SR2 by checking the validity of the input camera images. The remote safety driver is able to visualize the status of the camera sensors through the front camera and back camera panels. In the situation depicted in the front camera panel, the front camera is visibly blocked by leaves (cf. <Ref>), which leads to the triggering of the emergency stop (cf. <Ref>). Since this situation cannot be resolved remotely, the remote safety driver notifies the postman of the issue and tasks him with resolving it.
We refer the reader to <cit.> for a more complete description of the lab evaluation that we carried out in the VanAssist project. Additionally, a video which demonstrates and explains the complete lab test scenario can be viewed at <cit.>.
§.§ Evaluation on the Test Field Track
We also evaluated our connected dependability cage concept with a full-size vehicle named PLUTO on a test track located in Braunschweig, Germany.
PLUTO is an electrically-powered full-size vehicle equipped with several sensors, i.e., LiDAR, camera, GPS, and IMU, which was custom-built for the VanAssist project.
The implementation of our connected dependability cage concept for PLUTO presented several new challenges, especially related to the safe zone component and the LiDAR detector component. One of these challenges was the 360° environment perception around the ego-vehicle provided by eight LiDAR sensors and four cameras. The increase in the number of LiDAR sensors, as well as the fact that these were 3D LiDAR sensors, led to a significant increase in the volume of LiDAR sensor data and the noise present in these data. Furthermore, PLUTO exhibited significantly different vehicle dynamics in comparison to the model vehicle used for the lab evaluation. Last but not least, although the lab track emulated the test field track, the two environments were significantly different from each other, since the lab track is an indoor environment while the test field track is situated outdoors.
To address these challenges we generalized the safe zone to a circle segment for driving forward and driving backwards. Furthermore, we implemented a z-cutoff to handle the ghost points in the LiDAR data and a clustering algorithm for the detection of objects based on LiDAR data points. We refer the reader to <cit.> for a more detailed description of these challenges and the implemented solutions.
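As an illustration of the kind of z-cutoff filtering and clustering mentioned above, the following sketch drops LiDAR returns below an assumed ground-level threshold and clusters the remaining points into obstacle candidates; the threshold value, the point format, and the use of DBSCAN are illustrative assumptions and not the parameters or algorithm actually deployed on PLUTO.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def detect_obstacles(points, z_cutoff=0.2, eps=0.5, min_samples=10):
    """Filter ground/ghost points with a z-cutoff, then cluster the rest.

    points: (N, 3) array of LiDAR returns in the vehicle frame (x, y, z).
    Returns a list of (centroid, num_points) pairs, one per detected cluster.
    """
    # z-cutoff: drop returns at or below the assumed ground level
    above_ground = points[points[:, 2] > z_cutoff]
    if len(above_ground) == 0:
        return []

    # density-based clustering in the ground plane; sparse noise points get
    # label -1 and are ignored, which also filters out isolated returns that
    # would otherwise trigger unnecessary emergency stops
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(above_ground[:, :2])

    obstacles = []
    for label in set(labels) - {-1}:
        cluster = above_ground[labels == label]
        obstacles.append((cluster.mean(axis=0), len(cluster)))
    return obstacles
```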
In the VanAssist project we did not perform a full demonstration of the described overall use case scenario on the test field track, but we were able to carry out field tests for individual sub-scenarios in order to test the qualitative monitor with its components, i.e., safe zone, LiDAR detector, and camera validator, as well as the human-machine interaction between the remote safety driver and the AV via the remote CCC.
The tests for the safe zone and the LiDAR detector components were carried out by driving the vehicle PLUTO towards a static obstacle at speeds in the range of ca. 5-20 km/h. We adjusted the calculation of the safe zone as well as the parametrization of the safe zone and the LiDAR detector, so that the emergency stop triggered by the dependability cage brought PLUTO to a full stop at least 1 m before the obstacle for the speed range used during the field tests. In addition to this parametrization, we used the noise filtering of the clustering algorithm in order to filter out LiDAR points that would trigger unnecessary emergency stops of the ego-vehicle. Thus, PLUTO was able to drive multiple rounds around the test track without triggering unnecessary emergency stops.
The camera validator was tested by placing a piece of cloth in front of the camera and adjusting the parameters of the camera validator for the different cameras and the outdoor lighting conditions.
We used the GUI of the remote CCC described in the previous section in order to carry out the field tests of the human-machine interaction between the remote safety driver and the AV. For a more complete description of the test track evaluation of the VanAssist project we refer to <cit.>.
§ SUMMARY AND FUTURE WORK
This paper presented an integrated safety concept for safeguarding the safety of ADSs based on the connected dependability cage approach. This approach consists of two runtime monitoring systems: (1) the connected dependability cage, which monitors the ADS onboard the ego-vehicle, and (2) the remote CCC, which is able to supervise an entire fleet of AVs off-board with the cooperation of a remote safety driver. The two runtime monitoring systems are part of an integrated safety architecture for ADSs, which enables the reconfiguration of the ADS and the smooth sharing and transfer of responsibility over the DDT between the ADS and the remote safety driver. We have carried out a qualitative evaluation of the connected dependability cage approach both in a lab environment using a 1:10-scale model car as well as on a test track in Braunschweig using a full-size test vehicle. The results of the qualitative evaluation demonstrated the feasibility of the proposed safety concept for ADSs through its application in scenarios from the domain of parcel delivery logistics.
Several directions are of interest for future work. Firstly, the z-cutoff algorithm used in the LiDAR detector component does not always ensure a reliable separation of the LiDAR points pertaining to the ground surface from the rest of the LiDAR data that is relevant for runtime monitoring of the system's ODD. When the dependability cage detects a static obstacle on the road, it immediately triggers an emergency stop, since the safe zone extends into the obstacle. This is a fail-safe reaction of the ego-vehicle. In future work, we plan to extend the connected dependability cage so that fail-operational reactions are also possible. Here we envision that the fail-operational reaction could be similar to the reaction of a human driver, who could easily steer the ego-vehicle around the obstacle and continue the drive. Furthermore, in the future we plan to carry out a quantitative evaluation on a larger set of driving scenarios, which also involve dynamic obstacles in the ego-vehicle's environment. In addition, we plan to extend the connected dependability cage approach with a quantitative monitor, which is able to assess the novelty of the current driving situation of the ego-vehicle.
On an application level, we plan to extend the functionality of the remote CCC so that in addition to the cooperation between the AV and the postman, it also enables the cooperation of the AV with a delivery robot, tasked with receiving the parcels from the AV and delivering them to the end customer.
§.§.§ Acknowledgements
This research work was made possible through the collaborative project VanAssist, which was funded by the German Federal Ministry of Transportation and Digital Infrastructure (BMVI) under the funding number 16AVF2139E. The project was carried out between October 2018 and June 2021 under the project lead of ZENTEC Center for Technology, Business Start-ups, and Cooperation GmbH. The authors of this paper would like to acknowledge BMVI for the financial support and the valuable collaboration of all project partners involved.
|
http://arxiv.org/abs/2307.04305v1 | 20230710020443 | Automatic Piano Transcription with Hierarchical Frequency-Time Transformer | [
"Keisuke Toyama",
"Taketo Akama",
"Yukara Ikemiya",
"Yuhta Takida",
"Wei-Hsiang Liao",
"Yuki Mitsufuji"
] | cs.SD | [
"cs.SD",
"cs.LG",
"eess.AS"
] |
Automatic Piano Transcription with Hierarchical Frequency-Time Transformer
Keisuke Toyama, Taketo Akama, Yukara Ikemiya, Yuhta Takida, Wei-Hsiang Liao and Yuki Mitsufuji
August 12, 2023
==========================================================================
Taking long-term spectral and temporal dependencies into account is essential for automatic piano transcription.
This is especially helpful when determining the precise onset and offset for each note in the polyphonic piano content.
In this case, we may rely on the capability of the self-attention mechanism in Transformers to capture these long-term dependencies in the frequency and time axes.
In this work, we propose hFT-Transformer, which is an automatic music transcription method that uses a two-level hierarchical frequency-time Transformer architecture.
The first hierarchy includes a convolutional block in the time axis, a Transformer encoder in the frequency axis, and a Transformer decoder that converts the dimension in the frequency axis.
The output is then fed into the second hierarchy which consists of another Transformer encoder in the time axis.
We evaluated our method with the widely used MAPS and MAESTRO v3.0.0 datasets, and it demonstrated state-of-the-art performance on all the F1-scores of the metrics among Frame, Note, Note with Offset, and Note with Offset and Velocity estimations.
§ INTRODUCTION
Automatic music transcription (AMT) is to convert music signals into symbolic representations such as piano rolls, Musical Instrument Digital Interface (MIDI), and musical scores <cit.>.
AMT is important for music information retrieval (MIR), its result is useful for symbolic music composition, chord progression recognition, score alignment, etc.
Following the conventional methods <cit.>, we estimate the frame-level metric and note-level metrics as follows: (1) Frame: the activation of quantized pitches in each time-processing frame, (2) Note: the onset time of each note, (3) Note with Offset: the onset and offset time of each note, and (4) Note with Offset and Velocity: the onset, offset time, and the loudness of each note.
For automatic piano transcription, it is important to analyze several harmonic structures that spread in a wide range of frequencies, since piano excerpts are usually polyphonic.
Convolutional neural network (CNN)-based methods have been used to aggregate harmonic structures as acoustic features.
Most conventional methods apply multi-layer convolutional blocks to extend the receptive field in the frequency axis.
However, the blocks often include pooling or striding to downsample the features in the frequency axis.
Such a downsampling process may reduce the frequency resolution <cit.>.
It is worth mentioning that many of these methods use 2-D convolutions, which means the convolution is applied simultaneously in the frequency and time axes. The convolution in the time axis works as a pre-emphasis filter to model the temporal changes of the input signals.
Up to now, recurrent neural networks (RNNs), such as gated recurrent unit (GRU) <cit.> and long short-term memory (LSTM) <cit.>, are popular for analyzing the temporal sequences of acoustic features.
However, recently some of the works start to use Transformer <cit.>, which is a powerful tool for analyzing sequences, in AMT tasks.
Ou et al. <cit.> applied a Transformer encoder along the time axis and suggested that using Transformer improves velocity estimation.
Hawthorne et al. <cit.> used a Transformer encoder-decoder as a sequence-to-sequence model for estimating a sequence of note events from another sequence of input audio spectrograms.
Their method outperformed other methods using GRUs or LSTMs.
Lu et al. <cit.> proposed a method called SpecTNT to apply Transformer encoders in both frequency and time axes and reached state-of-the-art performance for various MIR tasks such as music tagging, vocal melody extraction, and chord recognition.
This suggests that such a combination of encoders helps in characterizing the broad-scale dependency in the frequency and time axes.
However, SpecTNT aggregates spectral features into one token, and the process in its temporal Transformer encoder is not independent in the frequency axis.
This inspires us to incorporate Transformer encoders in the frequency and time axes and make the spectral information available for the temporal Transformer encoder.
In addition, we usually divide the input signal into chunks since the entire sequence is often too long to be dealt at once.
However, this raises a problem that the estimated onset and offset accuracy fluctuates depending on the relative position in the processing chunk.
In our observation, the accuracy tends to be worse at both ends of the processing chunk.
This motivates us to incorporate extra techniques during the inference time to boost the performance.
In summary, we propose hFT-Transformer, an automatic piano transcription method that uses a two-level hierarchical frequency-time Transformer architecture.
Its workflow is shown in Figure <ref>.
The first hierarchy consists of a one-dimensional (1-D) convolutional block in the time axis, a Transformer encoder in the frequency axis, and a Transformer decoder in the frequency axis.
The second hierarchy consists of another Transformer encoder in the time axis.
In particular, the Transformer decoder at the end of the first hierarchy converts the dimension in the frequency axis from the number of frequency bins to the number of pitches (88 for piano).
Regarding the issue of the location dependent accuracy fluctuation in the processing chunks, we propose a technique which halves the stride length at inference time.
It uses only the result of the central part of processing chunks, which will improve overall accuracy.
Finally, in Section <ref>, we show that our method outperforms other piano transcription methods in terms of F1 scores for all the four metrics.
An implementation of our method is available here[].
§ RELATED WORK
Neural networks, such as CNNs, RNNs, generative adversarial networks (GANs) <cit.>, and Transformers have been dominant for AMT.
Since Sigtia et al. <cit.> proposed the first method to use a CNN to tackle AMT, CNNs have been widely used for the methods of analyzing the spectral dependency of the input spectrogram <cit.>.
However, it is difficult for CNNs to directly capture the harmonic structure of the input sound in a wide range of frequencies, as convolutions are used to capture features in a local area.
Wei et al. <cit.> proposed a method of using harmonic constant-Q transform (CQT) for capturing the harmonic structure of piano sounds.
They first applied a 3-Dimensional CQT,
then applied multiple dilated convolutions with different dilation rates to the output of CQT.
Because the dilation rates are designed to capture the harmonics, the performance of Frame and Note accuracy reached state-of-the-art.
However, the dilation rates are designed specifically for piano.
Thus, the method is not easy to adapt to other instruments.
For analysis of time dependency, Kong et al. <cit.> proposed a method that uses GRUs.
Hawthorne et al. <cit.>, Kwon et al. <cit.>, Cheuk et al. <cit.>, and Wei et al. <cit.> proposed methods that use bi-directional LSTMs for analysis.
Ou et al. <cit.> used a Transformer encoder to replace the GRUs in Kong et al.'s method <cit.>, and showed the effectiveness of the Transformer.
Usually, the note onset and offset are estimated in each frequency and time-processing frame grid, then paired as a note for note-level transcription by post-processing algorithms such as <cit.>.
However, compared to heuristically designed algorithms, end-to-end data-driven methods are often preferred.
For example, Keltz et al. <cit.> applied a seven-state hidden Markov model (HMM) for the sequence of attack, decay, sustain, and release to achieve note-level transcription.
Kwon et al. <cit.> proposed a method of characterizing the output of the LSTM as five states (onset, offset, re-onset, activate, and inactivate).
Hawthorne et al. <cit.> proposed a method of estimating a sequence of note events, such as note pitch, velocity, and time, from another sequence of input audio spectrograms using a Transformer encoder-decoder.
This method performs well in multiple instruments with the same model <cit.>.
Yan et al. <cit.> proposed a note-wise transcription method for estimating the interval between onset and offset.
This method shows state-of-the-art performance in estimating Note with Offset and Note with Offset and Velocity.
However, the performance in estimating Frame and Note is worse than that of Wei et al.'s method <cit.>.
§ METHOD
§.§ Configuration
Our proposed method aims to transcribe N frames of the input spectrogram into N frames of the output piano rolls (frame, onset, offset, and velocity) as shown in Figure <ref>, where N is the number of frames in each processing chunk.
Each input frame is composed of a log-mel spectrogram having size (F, M+1+M), where F is the number of frequency bins, and M is the size of the forward margin and that of the backward margin.
To obtain the log-mel spectrogram, we first downmix the input waveform into one channel and resample them to 16 kHz.
Then, the resampled waveform is transformed into a mel spectrogram using the corresponding class in the library of <cit.>.
For the transformation, we use a Hann window, setting the window size as 2048, the fast-Fourier-transform size as 2048, F as 256, the padding mode as constant, and the hop-size as 16 ms.
The magnitude of the mel spectrogram is then compressed with a log function.
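For illustration, this preprocessing can be sketched as follows, using librosa as a stand-in (the specific class and library used in our implementation are cited above but not named in this text); a 16 ms hop at 16 kHz corresponds to a hop length of 256 samples, and the power=1.0 setting is an assumption made so that the magnitude (rather than power) mel spectrogram is compressed.

```python
import numpy as np
import librosa

def log_mel(waveform, sr):
    """Compute a log-mel spectrogram of shape (F, num_frames) with F = 256."""
    # downmix to one channel and resample to 16 kHz
    y = librosa.to_mono(waveform)
    y = librosa.resample(y, orig_sr=sr, target_sr=16000)

    # Hann window, window/FFT size 2048, 256 mel bins, constant padding,
    # 16 ms hop (256 samples at 16 kHz)
    mel = librosa.feature.melspectrogram(
        y=y, sr=16000, n_fft=2048, win_length=2048, hop_length=256,
        n_mels=256, window="hann", pad_mode="constant", power=1.0)

    # log compression of the magnitude (small offset avoids log(0))
    return np.log(mel + 1e-8)
```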
§.§ Model Architecture and Loss Functions
The model architecture of our proposed method is shown in Figure <ref>.
We first apply a convolutional block to the input log-mel spectrogram, the size of which is (B, N, F, M+1+M) where B is the batch size.
In the convolutional block, we apply a 1-D convolution in the M+1+M dimension.
After this process, the data are embedded with a linear module.
The embedded vector is then processed with the first Transformer encoder in the frequency axis.
The self-attention is processed to analyze the dependency between spectral features.
The positional information is designated as [0, 1, ..., F-1].
These positional values are then embedded with a trainable embedding.
These are processed in the frequency axis only, thus completely independent to the time axis (N dimension).
Next, we convert the frequency dimension from F to the number of pitches (P).
A Transformer decoder with cross-attention is used as the converter.
The Transformer decoder calculates the cross-attention between the output vectors of the first Transformer encoder and another trainable positional embedding made from [0, 1, ..., P-1].
The decoded vectors are then converted to the outputs of the first hierarchy with a linear module and a sigmoid function (hereafter, we call these outputs output_1st).
Regarding the loss calculation for the outputs, frame, onset, and offset are calculated with binary cross-entropy, and velocity is calculated with 128-category cross-entropy.
The losses can be summarized as the following equations:
L_bce^<m> =∑_n=0^N-1∑_p=0^P-1l_bce(y_n,p^<m>,ŷ_n,p^<m>),
L_cce^velocity =∑_n=0^N-1∑_p=0^P-1l_cce(y_n,p^velocity,ŷ_n,p^velocity),
L =L_bce^frame+L_bce^onset+L_bce^offset+L_cce^velocity,
where <m> is the placeholder for each output (frame, onset, and offset), l_bce and l_cce denote the loss function for binary cross-entropy and categorical cross-entropy, respectively, and y and ŷ denote the ground truth and predicted values of each output (frame, onset, offset, and velocity), respectively.
Although it is intuitive to apply the mean squared error (MSE) for velocity, we found in a preliminary experiment that using the categorical cross-entropy yields much better performance than the MSE.
Finally, the output of the converter is processed with another Transformer encoder in the time axis.
The self-attention is used to analyze the temporal dependency of features in each time-processing frame.
A third positional embedding made from [0, 1, ..., N-1] is used here.
Then, similar to the first hierarchy, the outputs of the second hierarchy are obtained through a linear module and a sigmoid function.
We call these outputs of the second hierarchy as output_2nd hereafter.
The losses for the output_2nd are evaluated in the same way as those for output_1st.
These losses are summed with the coefficients α_1st and α_2nd as follows:
L_all=α_1stL_1st+α_2ndL_2nd.
Although both outputs are used for computing losses during training, only output_2nd is used in inference.
As Chen et al. <cit.> showed that calculating multiple losses outperformed using a single loss only, this hints that utilizing both output_1st and output_2nd during training has the potential to achieve better performance.
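For illustration, the data flow of the two-level hierarchy and the combined loss can be sketched in PyTorch as below. The tensor layout, the single shared output head per hierarchy, and the use of logits with binary_cross_entropy_with_logits in place of an explicit sigmoid are simplifying assumptions of this sketch, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as TF

class HFTSketch(nn.Module):
    """Illustrative data flow of the two-level hierarchy (N frames, F mel bins,
    margin M on each side, P = 88 pitches, Z = 256 embedding size)."""

    def __init__(self, N=128, F=256, M=32, P=88, Z=256, C=4, K=5,
                 heads=4, layers=3, ff=512):
        super().__init__()
        self.N, self.F, self.P = N, F, P
        # first hierarchy: 1-D convolution over the (M+1+M) axis, then embedding
        self.conv = nn.Conv1d(1, C, kernel_size=K, padding=K // 2)
        self.embed = nn.Linear(C * (2 * M + 1), Z)
        self.pos_f = nn.Embedding(F, Z)   # frequency positions
        self.pos_p = nn.Embedding(P, Z)   # pitch queries for the converter
        self.pos_t = nn.Embedding(N, Z)   # time positions
        self.freq_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(Z, heads, ff, batch_first=True), layers)
        self.converter = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(Z, heads, ff, batch_first=True), layers)
        self.time_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(Z, heads, ff, batch_first=True), layers)
        # heads: frame/onset/offset logits plus 128 velocity classes
        self.head1 = nn.Linear(Z, 3 + 128)
        self.head2 = nn.Linear(Z, 3 + 128)

    def forward(self, x):                                    # x: (B, N, F, 2M+1)
        B = x.size(0)
        h = self.conv(x.reshape(-1, 1, x.size(-1)))           # (B*N*F, C, 2M+1)
        h = self.embed(h.reshape(B * self.N, self.F, -1))     # (B*N, F, Z)
        h = self.freq_encoder(h + self.pos_f.weight)          # attention over F
        q = self.pos_p.weight.expand(B * self.N, -1, -1)      # pitch queries
        h = self.converter(q, h)                              # (B*N, P, Z): F -> P
        out1 = self.head1(h).reshape(B, self.N, self.P, -1)   # first-hierarchy output
        t = h.reshape(B, self.N, self.P, -1).permute(0, 2, 1, 3)   # (B, P, N, Z)
        t = self.time_encoder(t.reshape(B * self.P, self.N, -1) + self.pos_t.weight)
        out2 = self.head2(t).reshape(B, self.P, self.N, -1).permute(0, 2, 1, 3)
        return out1, out2                                     # each (B, N, P, 3+128)

def hft_loss(out, frame, onset, offset, velocity):
    """BCE for frame/onset/offset and 128-way CE for velocity.
    velocity is a LongTensor of class indices (0 where no onset is present)."""
    bce = TF.binary_cross_entropy_with_logits
    loss = bce(out[..., 0], frame) + bce(out[..., 1], onset) + bce(out[..., 2], offset)
    return loss + TF.cross_entropy(out[..., 3:].reshape(-1, 128), velocity.reshape(-1))

# L_all = alpha_1st * hft_loss(out1, ...) + alpha_2nd * hft_loss(out2, ...)
```

In this reading, the pitch-indexed positional embeddings serve as the decoder queries, so the cross-attention in the converter maps the F frequency bins onto the P pitches, as described above.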
§.§ Inference Stride
As mentioned in Section <ref>, chunk-based processing is required because the input length is limited due to system limitations, such as memory size and acceptable processing delay.
We found that the estimation error tends to increase at certain part within each processing chunk.
This can be demonstrated by evaluating the error for each instance of time n within the chunks:
𝑒𝑟𝑟𝑜𝑟_n^<m>=1/IP∑_i=0^I-1∑_p=0^P-1(y_i,n,p^<m>-ŷ_i,n,p^<m>)^2,
where <m> is the placeholder for each output (frame, onset, offset, and velocity), and I is the number of processing chunks over the test set.
The result using our proposed model trained using the MAESTRO training set (described in Section <ref>) is shown in Figure <ref>.
Here, the error 𝑒𝑟𝑟𝑜𝑟_n^<m> is calculated using the MAESTRO test set.
In the figure, we observe a monotonic decrease for frame and a similar but much weaker trend for onset and offset. However, for velocity, no such trend can be observed.
This motivates us to use only the middle portion of a processing chunk as the output to reduce the error rate. We call this the half-stride strategy, since a 50% overlap is required for processing chunks, as shown in Figure <ref> (B).
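A sketch of the half-stride strategy is given below; it assumes that the input frame sequence has been padded to a multiple of N/2 and that the model maps a chunk of N frames to frame-wise posteriors of length N, so that only the central N/2 frames of each chunk are kept (the first and last chunks also keep their outer quarters so that every frame is covered).

```python
import numpy as np

def infer_half_stride(run_model, frames, N=128):
    """Chunked inference with 50% overlap (stride N/2), keeping only the
    central part of each chunk. `run_model` maps an (N, ...) chunk of input
    frames to an (N, P) posterior array."""
    T = len(frames)
    half, quarter = N // 2, N // 4
    outputs = None
    for start in range(0, T - N + 1, half):
        chunk = run_model(frames[start:start + N])
        if outputs is None:
            outputs = np.zeros((T,) + chunk.shape[1:], dtype=chunk.dtype)
        lo = 0 if start == 0 else quarter            # keep full head of first chunk
        hi = N if start + N >= T else N - quarter    # and full tail of last chunk
        outputs[start + lo:start + hi] = chunk[lo:hi]
    return outputs
```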
§ EXPERIMENTS
§.§ Datasets
We use two well-known piano datasets for the evaluation.
The MAPS dataset <cit.> consists of CD-quality recordings and corresponding annotations of isolated notes, chords, and complete piano pieces.
We use the full musical pieces and the train/validation/test split as stated in <cit.>.
The number of recordings and the total duration in hours in each split are 139/71/60 and 8.3/4.4/5.5, respectively.
The MAESTRO v3.0.0 dataset <cit.> includes about 200 hours of paired audio and MIDI recordings from ten years of the International Piano-e-Competition.
We used the train/validation/test split configuration as provided.
In each split, the number of recordings and total duration in hours are 962/137/177 and 159.2/19.4/20.0, respectively.
For both datasets, the MIDI data have been collected by Yamaha Disklaviers concert-quality acoustic grand pianos integrated with a high-precision MIDI capture and playback system.
§.§ Model Configuration
Regarding our model architecture depicted in Figure <ref>, we set N as 128, M as 32, F as 256, P as 88, the CNN channels (C) as 4, size of the CNN kernel (K) as 5, and embedding vector size (Z) as 256.
For the Transformers, we set the feed-forward network vector size as 512, the number of heads as 4, and the number of layers as 3.
For training, we used the following settings: a batch size of 8, learning rate of 0.0001 with Adam optimizer<cit.>, dropout rate of 0.1, and clip norm of 1.0.
A learning rate scheduler with default parameters is used.
We set α_1st and α_2nd as 1.0, which were derived from a preliminary experiment (see Section <ref>).
We trained our models for 50 epochs on MAPS dataset and 20 epochs for MAESTRO dataset using one NVIDIA A100 GPU.
It took roughly 140 minutes and 43.5 hours to train one epoch with our model for MAPS and MAESTRO, respectively.
The best model is determined by choosing the one with the highest F1 score in the validation stage.
In order to obtain high-resolution ground truth for onset and offset, we followed the method in Kong et al. <cit.>.
We set J, the hyper-parameter to control the sharpness of the targets, to 3.
Also, the label of velocity is set only when an onset is present.
We set the threshold as 0.5, which means if the onset is smaller than 0.5, the velocity is set as 0.
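One plausible reading of the high-resolution targets of Kong et al. <cit.> is a triangular ramp around each annotated event; the sketch below renders that idea with J = 3 and a 16 ms hop, and is only an illustration, not the exact formulation of <cit.> or of our implementation.

```python
import numpy as np

def soft_onset_targets(onset_times, num_frames, hop=0.016, J=3):
    """Triangular high-resolution targets: a frame whose centre lies within
    J hops of an annotated onset gets value 1 - |dt| / (J * hop); J controls
    the sharpness of the targets."""
    targets = np.zeros(num_frames)
    frame_times = np.arange(num_frames) * hop
    for t in onset_times:
        lo = max(0, int((t - J * hop) / hop))
        hi = min(num_frames, int((t + J * hop) / hop) + 2)
        for n in range(lo, hi):
            value = 1.0 - abs(frame_times[n] - t) / (J * hop)
            targets[n] = max(targets[n], max(0.0, value))
    return targets
```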
§.§ Inference
At inference time, we use output_2nd as the final output.
We set the threshold for frame as 0.5.
For note-wise events (onset, offset, and velocity), the outputs in each pitch-frame grid are converted to a set containing note-wise onset, offset, and velocity following Kong et al.'s Algorithm 1 <cit.> in five steps shown below:
Step 1. onset detection: find a local maximum in onset with a value at least 0.5. Then calculate the precise onset time using the values of the adjacent three frames <cit.>.
Step 2. velocity: If an onset is detected in Step 1, extract the velocity value at the frame. If the value is zero, then discard both onset and velocity at this frame.
Step 3. offset detection with offset: find a local maximum in offset with a value at least 0.5. Then calculate the precise offset time using the values of the adjacent three frames <cit.>.
Step 4. offset detection with frame: choose the frame that is nearest to the detected onset which has a frame value below 0.5.
Step 5. offset decision: choose the smaller value between the results of Step 3 and 4.
An example is shown in Figure <ref>.
The onset is 4.003, and the velocity is 61.
For offset, the direct estimation from offset is 4.043, and that estimated via frame is 4.064.
Thus, we choose 4.043 as offset.
Finally, we obtain a note with {onset: 4.003, offset: 4.043, velocity: 61} in the output.
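The five steps can be sketched for a single pitch as follows; the parabolic peak refinement is a generic stand-in for the three-frame formula of Kong et al. <cit.>, and the helper logic is illustrative rather than our exact implementation.

```python
def _refine(prev, cur, nxt):
    # generic parabolic peak interpolation over three adjacent frames
    denom = prev - 2.0 * cur + nxt
    return 0.0 if denom == 0 else 0.5 * (prev - nxt) / denom

def decode_notes(frame, onset, offset, velocity, hop=0.016, thr=0.5):
    """Note decoding for one pitch. frame/onset/offset: (T,) posteriors;
    velocity: (T,) predicted MIDI velocities. Returns (onset, offset, velocity) tuples."""
    notes, T = [], len(onset)
    for n in range(1, T - 1):
        # Step 1: onset = local maximum above the threshold, refined in time
        if not (onset[n] >= thr and onset[n] >= onset[n - 1] and onset[n] >= onset[n + 1]):
            continue
        on_t = (n + _refine(onset[n - 1], onset[n], onset[n + 1])) * hop
        # Step 2: velocity at the onset frame; discard the note if it is zero
        vel = int(round(velocity[n]))
        if vel == 0:
            continue
        # Step 3: offset from the offset posterior (next refined local maximum >= thr)
        off_direct = None
        for m in range(n + 1, T - 1):
            if offset[m] >= thr and offset[m] >= offset[m - 1] and offset[m] >= offset[m + 1]:
                off_direct = (m + _refine(offset[m - 1], offset[m], offset[m + 1])) * hop
                break
        # Step 4: offset from the frame posterior (first frame dropping below thr)
        off_frame = next((m * hop for m in range(n + 1, T) if frame[m] < thr), (T - 1) * hop)
        # Step 5: choose the earlier of the two offset estimates
        off_t = off_frame if off_direct is None else min(off_direct, off_frame)
        notes.append((on_t, off_t, vel))
    return notes
```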
§.§ Metrics
We evaluate the performance of our proposed method with frame-level metrics (Frame) and note-level metrics (Note, Note with Offset, and Note with Offset & Velocity) with the standard precision, recall, and F1 scores.
We calculated these scores using the library of <cit.> with its default settings.
The scores were calculated per recording, and the mean of these per-recording scores was presented as the final metric for a given collection of pieces, as explained in Hawthorne et al. <cit.>.
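For instance, with the mir_eval library (whether this is the exact library cited above is not asserted here), the note-level scores for one recording can be computed as below; the tolerances shown are the library defaults.

```python
import numpy as np
import mir_eval

# toy reference and estimate: one note each, [onset, offset] in seconds and pitch in Hz
ref_intervals = np.array([[0.50, 1.00]]);  ref_pitches = np.array([440.0])
est_intervals = np.array([[0.52, 0.98]]);  est_pitches = np.array([440.0])

# Note (onset only): offsets are ignored by passing offset_ratio=None
p, r, f, _ = mir_eval.transcription.precision_recall_f1_overlap(
    ref_intervals, ref_pitches, est_intervals, est_pitches, offset_ratio=None)

# Note with Offset: default 50 ms onset tolerance and 20% offset ratio
p2, r2, f2, _ = mir_eval.transcription.precision_recall_f1_overlap(
    ref_intervals, ref_pitches, est_intervals, est_pitches)

# (velocity-aware note scores live in mir_eval.transcription_velocity,
#  frame-level scores in mir_eval.multipitch)
print(f, f2)
```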
§.§ Results
Tables <ref> and <ref> show the scores on the test sets of MAPS and MAESTRO datasets.
The numbers of parameters in these tables are taken from <cit.>.
For the MAPS dataset, our proposed method outperformed the other methods in F1 score for all metrics.
For the MAESTRO dataset, our proposed method outperformed the other methods in F1 score for Note, Note with Offset, and Note with Offset & Velocity.
Furthermore, our method with the half-stride strategy which is mentioned in <ref> outperformed other methods in all metrics.
In contrast, the two state-of-the-art methods for MAESTRO, which are Semi-CRFs <cit.> and HPPNet-sp <cit.>, performed well only on a subset of the metrics.
The results suggest that the proposed two-level hierarchical frequency-time Transformer structure is promising for AMT.
§.§ Ablation Study
To investigate the effectiveness of each module in our proposed method, we trained various combinations of those modules using the MAPS training set and evaluated them using the MAPS validation set.
The variations are shown in Table <ref>.
In this study, we call our proposed method 1-F-D-T, which means it consists of the 1-D convolution block, the first Transformer encoder in the Frequency axis, the Transformer Decoder, and the second Transformer encoder in the Time axis.
Table <ref> shows evaluation results for each variation.
Second Transformer encoder in time axis.
To verify the effectiveness of the second Transformer encoder, we compared the 1-F-D-T and the model without the second Transformer encoder (1-F-D-N).
For the 1-F-D-N model, we use output_1st in both training and inference stages as the final output.
The result indicates that the second Transformer encoder improved Note with Offset performance, in which the F1 score is 84.42 for 1-F-D-T and 80.23 for 1-F-D-N.
This shows the effectiveness of the second Transformer encoder as it provides an extra pass to model the temporal dependency of acoustic features, which is presumably helpful in offset estimation.
Complexity of the convolutional block.
To investigate how the complexity of the convolutional block affects the AMT performance, we compared the 1-F-D-T model and the model that replaces the 1-D convolutional block with a 2-D convolutional block (2-F-D-T).
Surprisingly, the result shows that the performance of the 2-F-D-T model is significantly worse than that of the 1-F-D-T model.
This is probably because the two modules working on the spectral dependency do not cohere with each other.
The 2-D convolutional block may over-aggregate the spectral information, resulting in an effectively lower frequency resolution. The Transformer encoder can then only evaluate the spectral dependency over an over-simplified feature space, causing the performance degradation.
Converter.
We used a Transformer decoder to convert the dimension in the frequency axis from F to P.
In contrast, almost all of the existing methods used a linear module to achieve this.
We compared the performance of the 1-F-D-T model to a model with the Transfomer decoder replaced with a linear converter (1-F-L-T).
The result indicates that the 1-F-D-T model outperformed the 1-F-L-T model in F1 score for all four metrics.
Especially, the difference in Note with Offset and Velocity is large (75.95 for the 1-F-D-T model and 69.34 for the 1-F-L-T model in F1 score).
This suggests that using a Transformer decoder as converter is an effective way of improving the performance, although the side effect is the increase of model size.
We also investigated how the coefficients of the loss functions, α_1st and α_2nd in Eqn (<ref>), affect the performance. Specifically, we evaluated six pairs of coefficients, i.e., (1.8, 0.2), (1.4, 0.6), (1.0, 1.0), (0.6, 1.4), (0.2, 1.8), and (0.0, 2.0), for the 1-F-D-T model.
Figure <ref> shows the F1 scores of frame, onset, offset, and velocity evaluated on the MAPS validation set in each epoch.
These results indicate that the (1.0, 1.0) pair yields the best score.
It also shows that the training converges faster when α_1st is larger than α_2nd.
Importantly, if we omit the output_1st, which is the case when training with the pair (0.0, 2.0), the training loss did not decrease much.
Therefore, the F1 score stays around 0% and thus cannot be seen in Figure <ref>.
This suggests that it is crucial to use both losses, output_1st and output_2nd in our proposed method.
§ CONCLUSION
In this work, we proposed hFT-Transformer, an automatic piano transcription method that uses a two-level hierarchical frequency-time Transformer architecture.
The first hierarchy consists of a 1-D convolutional block in the time axis, a Transformer encoder and a Transformer decoder in the frequency axis, and the second hierarchy consists of a Transformer encoder in the time axis.
The experiment result based on two well-known piano datasets, MAPS and MAESTRO, revealed that our two-level hierarchical architecture works effectively and outperformed other state-of-the-art methods in F1 score for frame-level and note-level transcription metrics.
For future work, we would like to extend our method to other instruments and multi-instrument settings.
§ ACKNOWLEDGMENTS
We would like to thank Giorgio Fabbro and Stefan Uhlich for their valuable comments while preparing this manuscript.
We are grateful to Kin Wai Cheuk for his dedicated support in preparing our github repository.
|
http://arxiv.org/abs/2307.04386v1 | 20230710074506 | Counterfactual Explanation for Fairness in Recommendation | [
"Xiangmeng Wang",
"Qian Li",
"Dianer Yu",
"Qing Li",
"Guandong Xu"
] | cs.IR | [
"cs.IR"
] |
Equal contribution.
[email protected]
0000-0003-3643-3353
Data Science and Machine Intelligence Lab, University of Technology Sydney
Sydney
Australia
[1]
Corresponding author: [email protected]
[email protected]
0000-0002-8308-9551
School of Electrical Engineering Computing and Mathematical Sciences, Curtin University
Perth
Australia
[email protected]
0000-0001-6376-9667
Data Science and Machine Intelligence Lab, University of Technology Sydney
Sydney
Australia
[email protected]
0000-0003-3370-471X
Hong Kong Polytechnic University
Hong Kong
Corresponding author: [email protected]
[email protected]
0000-0003-4493-6663
Data Science and Machine Intelligence Lab, University of Technology Sydney
Sydney
Australia
Fairness-aware recommendation eliminates discrimination issues to build trustworthy recommendation systems.
Explaining the causes of unfair recommendations is critical, as it promotes fairness diagnostics, and thus secures users' trust in recommendation models.
Existing fairness explanation methods suffer from high computation burdens due to the large-scale search space and the greedy nature of the explanation search process.
Besides, they perform score-based optimizations with continuous values, which are not applicable to discrete attributes such as gender and race.
In this work, we adopt the novel paradigm of counterfactual explanation from causal inference to explore how minimal alterations in explanations change model fairness, thereby abandoning the greedy search for explanations.
We use real-world attributes from Heterogeneous Information Networks (HINs) to empower counterfactual reasoning on discrete attributes.
We propose a novel Counterfactual Explanation for Fairness (CFairER) that generates attribute-level counterfactual explanations from HINs for recommendation fairness.
Our CFairER conducts off-policy reinforcement learning to seek high-quality counterfactual explanations, with an attentive action pruning reducing the search space of candidate counterfactuals.
The counterfactual explanations help to provide rational and proximate explanations for model fairness, while the attentive action pruning narrows the search space of attributes.
Extensive experiments demonstrate our proposed model can generate faithful explanations while maintaining favorable recommendation performance.
We release our code at <https://anonymous.4open.science/r/CFairER-anony/>.
[500]Computing methodologies Causal reasoning and diagnostics
[500]Computing methodologies Reinforcement learning
[500]Information systems Personalization
Counterfactual Explanation for Fairness in Recommendation
Guandong Xu
August 12, 2023
=========================================================
§ INTRODUCTION
Recommendation systems (RS), as information filtering tools, have become a core component of online services, e.g., e-commerce <cit.>.
They help users discover their preferred items and enable content providers to profit from item exposure.
Despite the huge benefits, fairness issues, which refer to unfair allocations (i.e., exposures) of recommended items <cit.> caused by, e.g., gender discrimination, have attracted increasing attention in RS.
Fairness-aware recommendation <cit.> has emerged as a promising solution to prevent unintended discrimination and unfairness in RS.
It aims to find feasible algorithmic approaches that reduce the fairness disparity of recommendation results.
Explaining why fairness disparity appears, i.e., what causes unfair recommendation results, would enhance the design of fairness-aware recommendation approaches by promoting model transparency and tracking unfair factors.
There are a few fairness explanation studies in the literature, which are mainly categorized as feature-based and aspect-based methods.
Feature-based methods estimate the contribution scores of numerical features that impact model fairness.
For instance, Begley et al. <cit.> explore fairness explanations based on Shapley value estimation for the classification task.
They calculate Shapley values of every input features to reflect their significance and then generate explanations based on calculated values.
However, this method is not applicable to deep recommendation models (e.g., neural networks <cit.>), as the high complexity of Shapley value estimation becomes a major burden when input features are high-dimensional and sparse.
Another branch of aspect-based methods mainly perturbs user/item aspect scores and optimizes an explanation model to find perturbed aspects that affect the model fairness as explanations.
For example, Ge et al. <cit.> perturb aspect scores within pre-defined user-aspect and item-aspect matrices and feed the perturbed matrices into a recommendation model.
Those perturbed aspects that alter the fairness disparity of the recommendation model are considered aspect-based explanations.
However, the perturbation space grows exponentially as the number of aspects increases, resulting in a large-scale search space to seek explanations.
The above fairness explanation methods suffer below issues:
1) These feature/aspect-based methods usually incur high computational costs due to the high dimensionality of search space and ultimately result in sub-optimal explanations.
Besides, these methods suffer from the greedy nature of the explanation search process.
They optimize explanation models using greedy feature/aspect scores as significance criteria and select the top features/aspects as explanations, which risks introducing pseudo-explanations.
2) These score-based optimizations can only deal with continuous attributes and thus are not well-suited for handling discrete attributes.
For example, assigning a continuous value, such as gender=0.19, to the discrete gender attribute is impractical in constructing explanations and provides no valuable clue to improve the explanation.
Worse still, discrete attributes are frequently used in real-world recommendation models, as user and item profiles for training models are often generated through data tagging <cit.> on discrete attributes.
For instance, movie recommendations <cit.> usually rely on movies tagged with discrete attributes such as genre, language, and release location.
Consequently, score-based optimizations have limited capability in handling discrete attributes that are frequently encountered in recommendation scenarios.
Unlike previous works, we resort to counterfactual explanations <cit.> derived from causal inference to tackle the above issues.
Counterfactual explanations address the fundamental question: what the model fairness would be if a minimal set of factors (e.g., user/item features) had been different <cit.>.
In other words, they provide “what-if” explanations to determine the most vital and essential (i.e., minimal) factors that change model fairness.
Unlike existing feature/aspect-based methods with greedy explanations, counterfactual explanations have the advantage of always being minimal w.r.t. the generated explanations and are faithful to model fairness changes.
Moreover, we leverage real-world attributes from Heterogeneous Information Networks (HINs) <cit.>, for counterfactual reasoning when dealing with discrete attributes.
In contrast to value-based features and aspects, real-world attributes residing in HINs are presented as discrete nodes, with edges representing their connections.
By utilizing attributes from HINs, we can overcome the limitation of score-based optimizations to directly measure whether the removal of specific attributes changes the model's fairness.
Following the above intuition, we propose to generate attribute-level counterfactual explanations for fairness from a given HIN.
We posit a novel definition of counterfactual explanation for fairness - a minimal set of attributes from the HIN that changes model fairness disparity.
We use a toy example in Figure <ref> to illustrate our idea.
Given a recommendation i_1 for the user u_1 and an external HIN carrying their attributes, we want to know why i_1 causes discrimination in recommendation results.
The counterfactual explanation performs “what-if” reasoning by altering the attributes of u_1 and i_1 and checking the fairness of the recommendation results.
Both E_1 and E_2 are valid candidate explanations since they alter fairness disparities of recommendations (i.e., i_2, i_3) from 0.90 to 0.19.
To determine which attributes are the primary reason for unfairness, the counterfactual explanation will uncover the minimal attribute changes, i.e., E_2, instead of utilizing attribute combinations in E_1.
Thus, we could infer E_2 is the most vital reason for model unfairness.
Besides, since a counterfactual explanation E_2 is minimal, it only reveals the essential attributes (i.e., “Female”) that effectively explain unfairness, while discarding the irrelevant (i.e., pseudo) explanations, i.e., “U.S” and “Discount” in E_1.
We therefore propose a novel Counterfactual Explanation for Fairness (CFairER) within an off-policy reinforcement learning environment to find optimal attribute-level counterfactual explanations.
Particularly, we focus on generating attribute-level counterfactual explanations for item exposure unfairness to promote the fair allocation of user-preferred but less exposed items.
Note that the proposed approach is general and can be utilized in different recommendation scenarios that involve different fairness definitions.
Specifically, we use a reinforcement learning agent in CFairER to optimize a fairness explanation policy by uniformly exploring candidate counterfactuals from a given HIN.
We also devise attentive action pruning over the HIN to reduce the search space of reinforcement learning.
Finally, our CFairER optimizes the explanation policy using an unbiased counterfactual risk minimization objective, resulting in accurate attribute-level counterfactual explanations for fairness.
The contributions of this work are:
* We make the first attempt to leverage rich attributes in a Heterogeneous Information Network to offer attribute-level counterfactual explanations for recommendation fairness.
* We propose an off-policy learning framework to identify optimal counterfactual explanations,
which is guided by an attentive action pruning to reduce the search space.
* We devise a counterfactual risk minimization for off-policy correction, so as to achieve unbiased policy optimization.
* Comprehensive experiments show the superiority of our method in generating trustworthy explanations for fairness while preserving satisfactory recommendation performance.
§ RELATED WORK
§.§ Fairness Explanation for Recommendation
Recommender systems have long dealt with major concerns of recommendation unfairness, which profoundly harm user satisfaction <cit.> and stakeholder benefits <cit.>.
Recent works on fairness-aware recommendation mainly discuss two primary topics, i.e., user-side fairness <cit.> and item-side fairness <cit.>.
User-side fairness concerns whether the recommendation is fair to different users/user groups, e.g., retaining equivalent accuracy or recommendation explainability.
Relevant approaches attribute the causes of user-side unfairness to discrimination factors, such as sensitive features (e.g., gender <cit.>, race <cit.>) and user inactiveness <cit.>, etc.
They mainly propose fairness metrics to constraint recommendation models (e.g., collaborative filtering <cit.>) to produce fair recommendations.
For example, Yao et al. <cit.> study the unfairness of collaborative filtering (CF)-based recommenders on gender-imbalanced data.
They propose four metrics to assess different types of fairness, then add these metrics as constraints to the CF model learning objective to produce fair recommendations.
Li et al. <cit.> investigate the unfair recommendation between active and inactive user groups, and provide a re-ranking approach to mitigate the activity unfairness by adding constraints over evaluation metrics of ranking.
As modern content providers are more concerned about user privacy, it is generally not easy to access sensitive user features for the recommendation <cit.>.
Meanwhile, users often prefer not to disclose personal information that raises discrimination <cit.>.
Thus, another topic of item-side fairness-aware recommendation <cit.> is interested in examining whether the recommendation treats items fairly, e.g., similar ranking prediction errors for different items, fair allocations of exposure to each item.
For instance,
Abdollahpouri et al. <cit.> address item exposure unfairness in learning-to-rank (LTR) recommenders.
They include a fairness regularization term in the LTR objective function, which controls the recommendations favored toward popular items.
Ge et al. <cit.> consider the dynamic fairness of item exposure due to changing group labels of items.
They calculate the item exposure unfairness with a fairness-related cost function.
The cost function is merged into a Markov Decision Process to capture the dynamic item exposure for recommendations.
Liu et al. <cit.> focus on item exposure unfairness in interactive recommender systems (IRS).
They propose a reinforcement learning method to maintain a long-term balance between accuracy and exposure fairness in IRS.
Despite the great efforts, fairness-aware recommendations mitigate user and item unfairness in a black-box manner but do not explain why the unfairness appears.
Understanding the “why” is desirable for both model transparency <cit.> and facilitates data curation to remove unfair factors <cit.>.
Limited pioneering studies are conducted to explain fairness.
Begley et al. <cit.> estimate Shapley values of input features to search which features contribute more to the model unfairness.
Ge et al. <cit.> develop an explainable fairness model for recommendation to explain which item aspects influence item exposure fairness.
They perform perturbations on item aspect scores, then apply perturbed aspect scores on two pre-defined matrices to observe fairness changes.
These prior efforts suffer from major limitations:
1) The high computational burden caused by the large-scale search space and the greedy nature of the explanation search process.
2) They generate explanations by feature <cit.> or aspect <cit.> scores, which do not apply to discrete attributes such as gender and race.
Our work conducts counterfactual reasoning to seek minimal sets of attributes as explanations.
We also reduce the large search space by attentive action pruning in the off-policy learning environment.
Meanwhile, we consider explaining recommendation unfairness based on attributes from a Heterogeneous Information Network, which is expected to be wildly applicable.
§.§ Heterogeneous Information Network in Recommendation
Heterogeneous Information Network (HIN) is a powerful structure that allows for the heterogeneity of its recorded data, i.e., various types of attributes, thus providing rich information to empower recommendations <cit.>.
HINs have been wildly adopted in recommendation models to boost performance;
representative works cover context-based filtering (e.g., SemRec <cit.>, HERec <cit.>) and knowledge-based systems (e.g., MCrec <cit.>, HAN <cit.>).
For instance, HERec <cit.> embeds meta-paths within a HIN as dense vectors, then fuses these HIN embeddings with user and item embeddings to augment the semantic information for recommendations.
MCrec <cit.> leverages a deep neural network to model meta-path-based contextual embeddings and propagates the context to user and item representations with a co-attention mechanism.
Those recommendation models observe promising improvements by augmenting contextual and semantic information given by HINs.
Despite the great efforts, prior works do not consider using the HIN to explain unfair factors in recommendations.
Novel to this work, we first attempt to leverage rich attributes in a HIN to provide counterfactual explanations for item exposure fairness.
§.§ Counterfactual Explanation
Counterfactual explanations have been considered as satisfactory explanations <cit.> and elicit causal reasoning in humans <cit.>.
Works on counterfactual explanations have been proposed very recently to improve the explainability of recommendations.
Xiong et al. <cit.> propose a constrained feature perturbation on item features and consider the perturbed item features as explanations for ranking results.
Ghazimatin et al. <cit.> perform random walks over a Heterogeneous Information Network to look for minimal sets of user action edges (e.g., click) that change the PageRank scores.
Tran et al. <cit.> identify minimal sets of user actions that update the parameters of neural models.
Our work differs from prior works on counterfactual explanations by two key points:
1) In terms of problem definition, they generate counterfactual explanations to explain user behaviors (e.g., click <cit.> ) or recommendation (e.g., ranking <cit.>) results.
Our method generates counterfactual explanations to explain which attributes affect recommendation fairness.
2) In terms of technique, our method formulates counterfactual reasoning as reinforcement learning, which can deal with ever-changing item exposure unfairness.
§ PRELIMINARY
We first introduce the Heterogeneous Information Network that offers real-world attributes for fairness explanation learning.
We then give the key terminologies, including fairness disparity evaluation and counterfactual explanation for fairness.
§.§ Heterogeneous Information Network
Creating fairness explanations requires auxiliary attributes containing possible factors (e.g., user gender) that affect recommendation fairness (cf. Figure <ref>).
Heterogeneous Information Network (HIN) has shown its power in modeling various types of attributes, e.g., user social relations, item brand.
In particular, suppose we have the logged data that records users’ historical behaviors (e.g., clicks) in the recommendation scenario.
Let 𝒰∈ℝ^M, ℐ∈ℝ^N denote the sets of users and items, respectively.
We can define a user-item interaction matrix Y={y_uv| u ∈𝒰, v ∈ℐ} according to the logged data.
We also have additional attributes from external resources that profile users and items, e.g., users' genders, items' genres.
The connections between all attributes and users/items are absorbed in the relation set ℰ.
Those attributes, with their connections with user-item interactions, are uniformly formulated as a HIN.
Formally, a HIN is defined as 𝒢=(𝒱^',ℰ^'), where 𝒱^'=𝒰∪ℐ∪𝒱_U ∪𝒱_I, and ℰ^'= {𝕀(y_uv)}∪ℰ.
𝕀(·) is an edge indicator that denotes the observed edge between user u and item v when y_uv∈Y=1.
𝒱_U and 𝒱_I are attribute sets for users and items, respectively.
Each node n ∈𝒱^' and each edge e ∈ℰ^' are mapped into specific types through node type mapping function: ϕ: 𝒱^'→𝒦 and edge type mapping function: ψ: ℰ^'→𝒥.
𝒢 maintains heterogeneity, i.e., |𝒦|+|𝒥| > 2.
§.§ Fairness Disparity
We consider explaining the item exposure (un)fairness in recommendations.
We first split the items in historical user-item interactions into a head (i.e., popular) group G_0 and a long-tailed group G_1 [Following <cit.>, we consider the top 20% of items with the most frequent interactions with users as G_0, while the remaining 80% belong to G_1.].
Following previous works <cit.>, we use demographic parity (DP) and exact-K (EK) defined on item subgroups to measure whether a recommendation result is fair.
In particular, DP requires that each item has the same likelihood of being classified into G_0 and G_1.
EK regulates the item exposure across each subgroup to remain statistically indistinguishable from a given maximum α.
By evaluating the deviation of recommendation results from the two fairness criteria, we can calculate the fairness disparity, i.e., to what extent the recommendation model is unfair.
Formally, giving a recommendation result H_u, K, the fairness disparity Δ(H_u, K) of H_u, K is:
Δ(H_u, K) = |Ψ_DP| + λ|Ψ_EK|,
Ψ_DP = |G_1| · Exposure(G_0 | H_u, K) - |G_0| · Exposure(G_1 | H_u, K),
Ψ_EK = α · Exposure(G_0 | H_u, K) - Exposure(G_1 | H_u, K)
where Δ(·) is the fairness disparity metric that quantifies model fairness status.
λ is the trade-off parameter between DP and EK.
Exposure(G_j| H_u, K) is the item exposure number of H_u, K within G_j w.r.t. j ∈{0,1}.
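For illustration, the group split from the footnote above and the disparity of Eq. (<ref>) for a single top-K list can be computed as in the sketch below; the values of α and λ are placeholders, not the ones used in our experiments.

```python
import numpy as np

def split_groups(interaction_counts):
    """Top-20% most-interacted items form the head group G0, the rest G1.
    interaction_counts: dict mapping item id -> number of interactions."""
    ranked = sorted(interaction_counts, key=interaction_counts.get, reverse=True)
    cut = int(0.2 * len(ranked))
    return set(ranked[:cut]), set(ranked[cut:])

def fairness_disparity(rec_list, G0, G1, alpha=0.2, lam=1.0):
    """Delta(H_{u,K}) = |Psi_DP| + lambda * |Psi_EK| for one top-K list."""
    exp0 = sum(1 for v in rec_list if v in G0)   # exposure of the head group
    exp1 = sum(1 for v in rec_list if v in G1)   # exposure of the long-tail group
    psi_dp = len(G1) * exp0 - len(G0) * exp1
    psi_ek = alpha * exp0 - exp1
    return abs(psi_dp) + lam * abs(psi_ek)
```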
§.§ Counterfactual Explanation for Fairness
This work aims to generate attribute-level counterfactual explanations for item exposure fairness.
In particular, we aim to find the “minimal” changes in attributes that reduce the fairness disparity (cf. Eq. (<ref>)) of item exposure.
Formally, given historical user-item interaction Y={y_uv| u ∈𝒰, v ∈ℐ}, and user attribute set 𝒱_U and item attribute set 𝒱_I extracted from an external Heterogeneous Information Network (HIN) 𝒢=(𝒱^',ℰ^').
Suppose there exists a recommendation model that produces the recommendation result H_u, K for user u.
Given all user-item pairs (u,v) in H_u, K,
our goal is to find a minimal attribute set 𝒱^*⊆{{e_u, e_v}| (u, e_u), (v, e_v) ∈ℰ^', e_u ∈𝒱_U, e_v ∈𝒱_I}.
Each attribute in 𝒱^* is an attribute entity from HIN 𝒢, e.g., user's gender, item's genre.
With a minimal set 𝒱^*, the counterfactual reasoning seeks to answer: what would the fairness disparity be if 𝒱^* were applied to the recommendation model?
𝒱^* is recognized as a valid counterfactual explanation for fairness, if after applied 𝒱^*, the fairness disparity of the intervened recommendation result Δ(H_u, K^cf) reduced compared with original Δ(H_u, K).
In addition, 𝒱^* is minimal such that there is no smaller set 𝒱^*^'∈𝒢 satisfying |𝒱^*^'| < |𝒱^*| when 𝒱^*^' is also valid.
§ THE CFAIRER FRAMEWORK
We now introduce the framework of our Counterfactual Explanation for Fairness (CFairER).
As shown in Figure <ref>, CFairER devises three major components:
1) graph representation module embeds users, items, and attributes among HIN as embedding vectors;
2) recommendation model learns user and item latent factors to produce recommendation results and
3) our proposed counterfactual fairness explanation (CFE) model assisted by the graph representation module and the recommendation model to conduct counterfactual reasoning.
This section discusses how the CFE model collaborates with the other two components, then introduces the graph representation module and the recommendation model.
We will elaborate on our proposed CFE model in the next section.
§.§ Counterfactual Fairness Explanation Model
As shown in Figure <ref>, our CFE model is crafted within an off-policy learning environment, in which an explanation policy π_E is optimized to produce attribute-level counterfactual explanations for fairness.
At each state s_t, π_E produces actions a_t absorbing user and item attributes as potential counterfactual explanations.
These actions are committed to the recommendation model and graph representation module to produce the reward r(s_t, a_t) for optimizing π_E.
Specifically, the graph representation module provides dense vectors 𝐡_u, 𝐡_v, 𝐞_u and 𝐞_v as user, item, user attribute and item attribute embeddings, respectively.
Those embeddings are used in the state representation learning (i.e., learn s_t) and attentive action pruning (i.e., select a_t) in our CFE model.
Moreover, the attribute embeddings are fused with user or item latent factors learned by the recommendation model to explore the model fairness change.
In particular, the fused embeddings of users and items are used to predict the intervened recommendation result H_u, K^cf.
By comparing the fairness disparity (cf. Eq. (<ref>)) difference between H_u, K^cf and the original recommendation H_u, K, we determine the reward r(s_t, a_t) to optimize π_E, accordingly.
The reward r(s_t, a_t) measures whether the current attribute (i.e., action) is a feasible fairness explanation responsible for the fairness change.
Finally, π_E is optimized with a counterfactual risk minimization (CRM) objective ∇_ΘR(π_E) to balance the distribution discrepancy from the logging policy π_0.
§.§ Graph Representation Module
Our graph representation module conducts heterogeneous graph representation learning to produce dense vectors of users, items, and attributes among the HIN.
Compared with homogeneous graph learning such as GraphSage <cit.>, our graph representation injects both node and edge heterogeneity to preserve the complex structure of the HIN.
In particular, we include two weight matrices to specify varying weights of different node and edge types.
In the following, we present the graph learning for user embedding 𝐡_u.
The embeddings of 𝐡_v, 𝐞_u and 𝐞_v can be obtained analogously by replacing nodes and node types while computations.
Specifically, we first use Multi-OneHot <cit.> to initialize node embeddings at the 0-th layer, in which u's embedding is denoted by 𝐡_u^0.
Then, at each layer l, user embedding 𝐡_u^l is given by aggregating node u's neighbor information w.r.t. different node and edge types:
𝐡_u^l=σ(concat [𝐖_ϕ(u)^lD_p[𝐡_u^l-1], 𝐖_ψ(e)^l/|𝒩_ψ(e)(u)|∑_u^'∈𝒩_ψ(e)(u)D_p[𝐡_u^'^l-1] ]+b^l)
where σ(·) is LeakyReLU <cit.> activation function and concat(·) is the concatenation operator.
D_p[·] is a random dropout with probability p applied to its argument vector.
𝐡_u^l-1 is u's embedding at layer l-1.
𝒩_ψ(e)(u)={u^'|(u, e, u^') ∈𝒢} is a set of nodes connected with user node u through edge type ψ(e).
The two additional weight matrices, i.e., the node-type matrix 𝐖_ϕ(u)^l and the edge-type matrix 𝐖_ψ(e)^l, are defined based on the importance of each type ϕ(u) and ψ(e).
b^l is an optional bias.
With Eq (<ref>), we obtain u's embedding 𝐡_u^l at each layer l ∈{1,⋯, L}.
We then adopt layer aggregation <cit.> to combine u's embeddings from all layers into a single vector, i.e., 𝐡_u=𝐡_u^(1) + ⋯ + 𝐡_u^(L).
Finally, we have user node u's embedding 𝐡_u through aggregation.
The item embedding 𝐡_v, user attribute embedding 𝐞_u and item attribute embedding 𝐞_v can be calculated analogously.
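As a concrete reference, the following is a minimal PyTorch-style sketch of one such heterogeneous aggregation layer; the class name, the dictionaries of type-specific weight matrices, and the neighbor-grouping interface are illustrative assumptions rather than the authors' implementation, and neighbor messages from different edge types are summed here, which is one possible reading of the aggregation above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HeteroAggLayer(nn.Module):
    # One layer of the aggregation: type-specific transforms, dropout, mean over
    # neighbors of each edge type, concatenation with the self message, LeakyReLU.
    def __init__(self, in_dim, out_dim, node_types, edge_types, p_drop=0.1):
        super().__init__()
        self.W_node = nn.ModuleDict({t: nn.Linear(in_dim, out_dim, bias=False) for t in node_types})
        self.W_edge = nn.ModuleDict({t: nn.Linear(in_dim, out_dim, bias=False) for t in edge_types})
        self.bias = nn.Parameter(torch.zeros(2 * out_dim))
        self.drop = nn.Dropout(p_drop)

    def forward(self, h_self, node_type, neigh_by_edge):
        # h_self: (in_dim,) embedding of the target node at layer l-1
        # neigh_by_edge: dict edge_type -> (num_neighbors, in_dim) neighbor embeddings
        self_msg = self.W_node[node_type](self.drop(h_self))
        neigh_msgs = []
        for e_type, h_neigh in neigh_by_edge.items():
            mean_neigh = self.drop(h_neigh).mean(dim=0)      # 1/|N_psi(e)(u)| * sum over neighbors
            neigh_msgs.append(self.W_edge[e_type](mean_neigh))
        neigh_msg = torch.stack(neigh_msgs).sum(dim=0)
        return F.leaky_relu(torch.cat([self_msg, neigh_msg]) + self.bias)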
§.§ Recommendation Model
The recommendation model f_R is initialized using user-item interaction matrix Y to produce the Top-K recommendation result H_u, K for all users.
Here, we employ a simple linear matrix factorization (MF) <cit.> model as the recommendation model f_R.
Particularly, MF initializes IDs of users and items as latent factors, and uses the inner product of user and item latent factors as the predictive function:
f_R(u,v)=U_u^⊤V_v
where U_u and V_v denote d-dimensional latent factors for user u and item v, respectively.
We use the cross-entropy <cit.> loss to define the objective function of the recommendation model:
ℒ_R = -∑_u, v, y_uv∈Y y_uvlog f_R(u,v)+(1-y_uv) log(1-f_R(u,v))
After optimizing the loss function ℒ_R, we can use the trained user and item latent factors (i.e., U, V) to produce the original Top-K recommendation lists H_u, K for all users u ∈𝒰.
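A minimal sketch of this MF recommender and its cross-entropy objective is given below; a sigmoid is added so that the inner product can be read as a probability inside the logarithms, which the equations above leave implicit, and the batched tensor interface is an assumption.

import torch
import torch.nn as nn

class MF(nn.Module):
    def __init__(self, n_users, n_items, d=128):
        super().__init__()
        self.U = nn.Embedding(n_users, d)   # user latent factors
        self.V = nn.Embedding(n_items, d)   # item latent factors

    def forward(self, u, v):
        # f_R(u, v): inner product of the latent factors, squashed to (0, 1)
        return torch.sigmoid((self.U(u) * self.V(v)).sum(dim=-1))

def mf_loss(model, u, v, y):
    # Binary cross-entropy over observed interactions y_uv in {0, 1}
    p = model(u, v).clamp(1e-7, 1 - 1e-7)
    return -(y * torch.log(p) + (1 - y) * torch.log(1 - p)).mean()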
§ REINFORCEMENT LEARNING FOR COUNTERFACTUAL FAIRNESS EXPLANATION
We put forward our counterfactual fairness explanation (CFE) model (cf. Figure <ref>), assisted by graph representation module and recommendation model, to generate explanation policy π_E for item exposure fairness.
The explanation policy π_E is optimized within off-policy learning to adaptively learn attributes responsible for fairness changes.
In the following, we first introduce off-policy learning for our CFE model.
Then we detail each key element in the off-policy learning and give unbiased policy optimization.
§.§ Explaining as Off-policy Learning
We cast our CFE model in an off-policy learning environment, which is formulated as Markov Decision Process (MDP).
The MDP is provided with a static logged dataset generated by a logging policy π_0 [We adopt a uniform logging policy as π_0. It samples attributes as actions from the attribute space with probability π_0(a_t | s_t)=1/(|𝒱_U|+|𝒱_I|).].
The logging policy π_0 collects trajectories by uniformly sampling actions from the user and item attribute space.
We use the off-policy learning to optimize an explanation (i.e., target) policy π_E by approximating the counterfactual rewards of state-action pairs from all timestamps, wherein the logging policy π_0 is employed for exploration while the target policy π_E is utilized for decision-making.
In the off-policy setting,
the explanation policy π_E does not require following the original pace of the logging policy π_0.
As a result, π_E is able to explore the counterfactual region, i.e., those actions that haven't been taken by the previous agent using π_0.
Formally, at each timestamp t ∈{1,⋯,T} of MDP, the explanation policy π_E(a_t|s_t) selects an action (i.e., a candidate attribute) a_t ∈𝒜_t conditioning on the user state s_t ∈𝒮, and receives counterfactual reward r(s_t, a_t) ∈ℛ for this particular state-action pair.
Then the current state transits to the next state s_t+1 with transition probability of ℙ(s_t+1| s_t, a_t)∈𝒫.
The whole MDP has the key elements:
* 𝒮 is a finite set of states {s_t | t∈ [1,⋯, T]}. Each state s_t is transformed into dense vectors (i.e., embeddings) by our state representation learning (cf. Section <ref>).
* 𝒜_t is a finite set of actions (i.e., attributes) available at s_t. 𝒜_t is selected from the attributes 𝒱_t in 𝒢 by our attentive action pruning (cf. Section <ref>) to reduce the search space.
* 𝒫: 𝒮×𝒜→𝒮 is the state transition, which absorbs transition probabilities of the current states to the next states.
Given action a_t at state s_t, the transition to the next state s_t+1 is deterministic, i.e., ℙ(s_t+1| s_t, a_t) = 1.
* ℛ: 𝒮×𝒜→ℝ is the counterfactual reward, which measures whether a deployed action (i.e., an attribute) is a valid counterfactual explanation for fairness. ℛ is used to guide the explanation policy learning and is defined in Section <ref>.
We now introduce the implementation of each key component.
§.§.§ State Representation Learning.
The state 𝒮 describes target users and their recommendation lists from the recommendation model.
Formally, at step t, the state s_t for a user u is defined as s_t=(u, H(u,K)), where u ∈𝒰 is a target user and H(u,K) is the recommendation produced by f_R.
The initial state s_0 is (u, v), where v is an item that u has interacted with, i.e., y_uv=1 in Y.
Our state representation learning maps user state s_t=(u, H(u,K)) into dense vectors for latter explanation policy learning.
Specifically, given s_t that contains the current user u and its recommendation H(u,K)={v_1,v_2,...,v_K}, we first acquire the embedding 𝐡_v_k of each item v_k ∈ H(u,K) from our graph representation module.
The state s_t then receives the concatenated item embeddings (i.e., concat[𝐡_v_k|∀ v_k ∈ H(u,K)]) to update its representation.
Considering states within 𝒮 have sequential patterns <cit.>,
we resort to Recurrent Neural Networks (RNN) with a gated recurrent unit (GRU) <cit.> to capture the sequential state trajectory.
We firstly initialize the state representation s_0 with an initial distribution s_0∼ρ_0
[In our experiment, we used a fixed initial state distribution, where s_0 = 0 ∈ℝ^d].
Then we learn state representation s_t through the recurrent cell:
𝐮_t =σ_g(𝐖_1concat[𝐡_v_k|∀ v_k ∈ H(u,K)]+𝐔_1 s_t-1+b_1)
𝐫_t =σ_g(𝐖_2 concat[𝐡_v_k|∀ v_k ∈ H(u,K)]+𝐔_2 s_t-1+b_2)
ŝ_t =σ_h(𝐖_3concat[𝐡_v_k|∀ v_k ∈ H(u,K)]+𝐔_3(𝐫_t⊙ s_t-1)+b_3)
s_t =(1-𝐮_t) ⊙ s_t-1+𝐮_t⊙ŝ_t
where 𝐮_t and 𝐫_t denote the update gate and reset gate vector generated by GRU and ⊙ is the element-wise product operator.
𝐖_i, 𝐔_i are weight matrices and b_i is the bias vector.
Finally, s_t serves as the state representation at time step t.
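Since the recurrence above is exactly a GRU update, it can be implemented directly with a standard GRU cell; the sketch below is illustrative (class and variable names are assumptions), with the concatenated Top-K item embeddings serving as the input at each step.

import torch
import torch.nn as nn

class StateEncoder(nn.Module):
    # Encodes s_t = (u, H(u, K)) by feeding the concatenated Top-K item embeddings
    # into a GRU cell, mirroring the update/reset-gate equations above.
    def __init__(self, item_dim, K, state_dim):
        super().__init__()
        self.cell = nn.GRUCell(input_size=item_dim * K, hidden_size=state_dim)

    def forward(self, item_embs, s_prev):
        # item_embs: (K, item_dim) embeddings of the current recommendation list
        # s_prev:    (state_dim,) previous state; s_0 is initialised to zeros
        x = item_embs.reshape(1, -1)                  # concat[h_{v_k}]
        return self.cell(x, s_prev.unsqueeze(0)).squeeze(0)

# usage: s0 = torch.zeros(state_dim); s1 = encoder(item_embs, s0)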
§.§.§ Attentive Action Pruning.
Our attentive action pruning is designed to reduce the action search space by specifying the varying importance of actions for each state.
As a result, the sample efficiency can be largely increased by filtering out irrelevant actions to promote an efficient action search.
In our method, actions are defined as candidate attributes selected from a given HIN that potentially impact the model fairness.
In particular, given state s_t=(u, H(u,K)), we can distill a set of attributes 𝒱_t of the current user u and items v ∈ H(u,K) from the HIN.
Intuitively, we can directly use 𝒱_t as candidate actions for state s_t.
However, the user and item attribute amount of the HIN would be huge, resulting in a large search space that terribly degrades the learning efficiency <cit.>.
Thus, we propose an attentive action pruning based on attention mechanism <cit.> to select important candidate actions for each state.
Formally, given the embedding 𝐞_i for an attribute i ∈𝒱_t from Eq. (<ref>), and the state representation s_t from Eq. (<ref>), the attention score α_i of attribute i is:
α_i=ReLU(𝐖_s s_t+𝐖_h𝐞_i+b)
where 𝐖_s and 𝐖_h are two weight matrices and b is the bias vector.
We then normalize attentive scores of all attributes in 𝒱_t and select attributes with n-top attention scores into 𝒜_t:
𝒜_t={i | i ∈Top-n[exp(α_i)/∑_i^'∈𝒱_texp(α_i^')] and i ∈𝒱_t}
where n is the candidate size.
As a result, our candidate set 𝒜_t offers high sample efficiency since it filters out irrelevant attributes while dynamically adapting to the user state shift.
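The following sketch illustrates the attentive action pruning; because the ReLU in the scoring equation yields a vector, a sum over its dimensions is assumed here to obtain the scalar score α_i, and all class and variable names are illustrative.

import torch
import torch.nn as nn

class ActionPruner(nn.Module):
    # Scores every candidate attribute against the current state and keeps the top-n.
    def __init__(self, state_dim, attr_dim, n):
        super().__init__()
        self.W_s = nn.Linear(state_dim, attr_dim, bias=False)
        self.W_h = nn.Linear(attr_dim, attr_dim, bias=False)
        self.b = nn.Parameter(torch.zeros(attr_dim))
        self.n = n

    def forward(self, s_t, attr_embs):
        # attr_embs: (|V_t|, attr_dim) embeddings of the distilled attributes
        feats = torch.relu(self.W_s(s_t).unsqueeze(0) + self.W_h(attr_embs) + self.b)
        alpha = feats.sum(dim=-1)                       # scalar score per attribute (assumption)
        weights = torch.softmax(alpha, dim=0)
        topn = torch.topk(weights, k=min(self.n, weights.numel())).indices
        return topn                                     # indices forming the pruned set A_t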
§.§.§ Counterfactual Reward Definition
The counterfactual reward r(s_t, a_t) ∈ℛ measures whether a deployed action a_t ∈𝒜_t is a valid counterfactual explanation for fairness at the current state s_t.
In particular, the reward is defined based on two criteria:
1) Rationality <cit.>: deploying action (i.e., attribute) a_t should cause the reduction of fairness disparity regarding the item exposure fairness.
The fairness disparity change is measured by the fairness disparity difference between the recommendation result before (i.e., Δ(H_u, K)) and after (i.e., Δ(H_u, K^cf)) fusing the action a_t to the recommendation model f_R, i.e., Δ(H_u, K)- Δ(H_u, K^cf).
2) Proximity <cit.>: a counterfactual explanation is a minimal set of attributes that changes the fairness disparity.
For the Rationality, we fuse the embedding of a_t with user or item latent factors from the recommendation model to learn updated user and item latent vectors, so as to get the Δ(H_u, K^cf).
Specifically, for a state s_t=(u, H(u,K)), the embedding 𝐞_t of action a_t is fused with the user latent factor U_u of user u and the item latent factors V_v_i of all items v_i ∈ H(u,K) by an element-wise product fusion.
As a result, we can get the updated latent factors U_u^cf and V_v^cf:
U_u^cf ←U_u⊙{𝐞_t|∀ t ∈ [1, ⋯, T]}, if a_t ∈𝒱_U
V_v_i^cf ←V_v_i⊙{𝐞_t|∀ t ∈ [1, ⋯, T]}, if a_t ∈𝒱_I
where ⊙ represents the element-wise product (a.k.a. Hadamard product).
T is the total training iteration.
At the initial state of t=0, the user and item latent factors U_u and V_v are learned from Eq. (<ref>).
Through Eq. (<ref>), the updated user and item latent vectors are then used to generate the intervened recommendation result H_u, K^cf.
For the Proximity, we compute whether a_t returns a minimal set of attributes that changes the recommendation model fairness.
This is equal to regulating user and item latent factors before (i.e., U_u, V_v) and after (i.e., U_u^cf, V_v^cf) fusing a_t be as similar as possible.
Based on the two criteria, the counterfactual reward can be defined as the following form:
r(s_t, a_t)=
 1+dist(U_u, U_u^cf)+ dist(V_v, V_v^cf),   if Δ(H_u, K)- Δ(H_u, K^cf) ≥ϵ
 dist(U_u, U_u^cf)+ dist(V_v, V_v^cf),   otherwise
where dist(·) is the distance metric defined as cosine similarity <cit.>, i.e., dist(a,b)=⟨ a, b⟩/(‖a‖ ‖b‖).
Δ(·) is the fairness disparity evaluation metric defined in Eq.(<ref>).
ϵ is the disparity change threshold that controls the model flexibility.
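A compact sketch of the fusion and reward computation is given below. The caller is assumed to regenerate the intervened Top-K list with the fused factors and to supply the two disparity values; tensor shapes and function names are illustrative assumptions.

import torch.nn.functional as F

def fuse(U_u, V_items, e_t, attr_is_user):
    # Element-wise product fusion of the action embedding into the latent factors.
    # U_u: (d,) user factor; V_items: (K, d) factors of items in H(u, K); e_t: (d,).
    U_cf = U_u * e_t if attr_is_user else U_u
    V_cf = V_items * e_t if not attr_is_user else V_items
    return U_cf, V_cf

def counterfactual_reward(U_u, V_items, U_cf, V_cf, disp_before, disp_after, eps):
    # Proximity: keep fused factors close to the originals (cosine similarity).
    # Rationality: +1 only if the fairness disparity dropped by at least eps.
    proximity = (F.cosine_similarity(U_u, U_cf, dim=0)
                 + F.cosine_similarity(V_items, V_cf, dim=1).mean())
    bonus = 1.0 if (disp_before - disp_after) >= eps else 0.0
    return bonus + proximity.item()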
§.§ Unbiased Policy Optimization
Using state s_t ∈𝒮 from Eq. (<ref>), candidate action a_t ∈𝒜_t from Eq. (<ref>), and counterfactual reward r(s_t, a_t) in Eq. (<ref>) for each timestamp t,
the policy optimization seeks the explanation policy π_E that maximizes the expected cumulative reward R(π_E) over total iteration T.
Intuitively, we can directly use the policy gradient calculated on R(π_E) to guide the optimization of π_E.
However, our policy optimization is conducted in the off-policy learning setting, in which π_E holds different distribution from the logging policy π_0.
Directly optimizing R(π_E) would result in a biased policy optimization <cit.> due to the policy distribution discrepancy.
To this end, we additionally apply Counterfactual Risk Minimization (CRM) <cit.> to correct the discrepancy between π_E and π_0.
In particular, CRM employs an Inverse Propensity Scoring (IPS) <cit.> to explicitly estimate the distribution shift between π_E and π_0.
After applying the CRM, we can alleviate the policy distribution bias by calculating the CRM-based expected cumulative reward R(π_E):
R(π_E) = 𝔼_π_E[∑_t=0^Tγ^tπ_E(a_t| s_t)/π_0(a_t| s_t) r(s_t, a_t)]
where π_E(a_t |s_t)/π_0(a_t |s_t) is called the propensity score for balancing the empirical risk estimated from the π_0.
Finally, the policy gradient of the explanation policy learning w.r.t. model parameter Θ is achieved by the REINFORCE <cit.>:
∇_ΘR(π_E)=1/T∑_t=0^Tγ^tπ_E(a_t| s_t)/π_0(a_t| s_t) r(s_t, a_t) ∇_Θlogπ_E(a_t | s_t)
where T is the total training iteration.
By optimizing the Eq. (<ref>), the learned explanation policy π_E generates minimal sets of attributes responsible for item exposure fairness changes, so as to find the true reasons leading to unfair recommendations.
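The CRM-weighted policy-gradient objective can be sketched as follows for a single trajectory; the propensity ratio is detached so that the gradient of the loss matches the REINFORCE form above, and the averaging convention is an assumption.

import torch

def crm_policy_gradient_loss(logp_E, logp_0, rewards, gamma=0.99):
    # logp_E, logp_0: (T,) log-probabilities of the taken actions under pi_E and pi_0.
    T = rewards.shape[0]
    discounts = gamma ** torch.arange(T, dtype=rewards.dtype)
    ips = torch.exp(logp_E - logp_0).detach()        # propensity scores, treated as constants
    # Minimising this loss follows  (pi_E/pi_0) * gamma^t * r * d/dTheta log pi_E(a_t | s_t)
    return -(discounts * ips * rewards * logp_E).mean()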
§ EXPERIMENTS
We conduct extensive experiments to evaluate the proposed CFairER for explaining item exposure fairness in recommendations.
We aim to particularly answer the following research questions:
* RQ1. Whether CFairER produces attribute-level explanations that are faithful to explaining recommendation model fairness compared with existing approaches?
* RQ2. Whether explanations provided by CFairER achieve a better fairness-accuracy trade-off than other methods?
* RQ3. Do different components (i.e., attentive action pruning, counterfactual risk minimization-based optimization) help CFairER to achieve better sample efficiency and bias alleviation? How do hyper-parameters impact CFairER?
§.§ Experimental Setup
§.§.§ Datasets
We use logged user behavior data from three datasets, i.e., Yelp [https://www.yelp.com/dataset/], Douban Movie [https://movie.douban.com/] and LastFM [https://github.com/librahu/HIN-Datasets-for-Recommendation-and-Network-Embedding], for evaluations.
Each dataset is considered as an independent benchmark for different tasks, i.e., business, movie and music recommendation tasks.
The Yelp dataset records user ratings on local businesses and business compliment, category and city profiles.
Douban Movie is a movie recommendation dataset that contains user group information and movie actor, director and type details.
LastFM contains music listening records of users and artist tags.
The details of the three datasets are given in Table <ref>, which depicts statistics of user-item interactions, user-attribute and item-attribute relations.
All datasets constitute complex user-item interactions and diverse attributes, thus providing rich contextual information for fairness explanation learning.
Following previous works <cit.>, we adopt a 10-core setting, i.e., retaining users and items with at least ten interactions for all datasets to ensure the dataset quality.
Meanwhile, we binarize the explicit rating data by interpreting ratings of 4 or higher as positive feedback, otherwise negative.
Then, we sort the interacted items for each user based on the timestamp and split the chronological interaction list into train/test/valid sets with a proportion of 60%/20%/20%.
We also study the long-tail distribution of user-item interactions in the three datasets.
We present the visualization results of the distribution of historical user-item interactions in the three datasets in Figure <ref>.
Analyzing Figure <ref>, we find that the user-item interactions in all three datasets follow a skewed distribution: the head-tailed distribution in the blue plot area and the long-tailed distribution in the yellow plot area.
Besides, a small fraction of popular items accounts for most of the user interactions in all datasets.
The skewed distribution would result in serious item exposure unfairness issues in recommendations, such as the well-known filter-bubble problem <cit.> and Matthew effect <cit.>.
§.§.§ Baselines
We adopt three heuristic approaches and two existing fairness-aware explainable recommendation methods as baselines.
In particular,
* RDExp: We randomly select attributes from the attribute space for each user-item interaction and generate explanations based on the selected attributes. Note that the selected attributes can be both user and item attributes.
* PopUser and PopItem: We separately calculate the exposure number of attributes for each user-item interaction, then sort each attribute chronologically based on the exposure number.
We devise a baseline PopUser, in which the top user attributes are selected as explanations. Analogously, we build PopItem that produces the top item attributes for the explanation.
* FairKGAT: uses FairKG4Rec <cit.> to mitigate the unfairness of explanations for a knowledge graph-enhanced recommender KGAT <cit.>.
FairKG4Rec <cit.> is a generalized fairness-aware algorithm that controls the unfairness of explanation diversity in the recommendation model.
KGAT <cit.> is a state-of-the-art knowledge graph-enhanced recommendation model that gives the best fairness performance in the original FairKG4Rec paper.
* CEF <cit.>: is the first work that explains fairness in recommendation.
It generates feature-based explanations for item exposure unfairness by perturbing user and item features and searches for features that change the fairness disparity.
Note that to the best of our knowledge, FairKGAT <cit.> and CEF <cit.> are the only two existing methods designed for explainable fairness recommendation tasks.
§.§.§ Explanation Faithfulness Evaluation
We adopt the widely used erasure-based evaluation criterion <cit.> in Explainable AI to evaluate the explanation faithfulness.
The erasure-based evaluation identifies the contributions of explanations by measuring model performance changes after these explanations are removed.
As a result, one can tell whether the model actually relied on these particular explanations to make a prediction, i.e., faithful to the model.
In our experiments, we use the erasure-based evaluation to test (I) the recommendation performance change and (II) the recommendation fairness change after a set of attributes from the generated explanation is removed.
By doing so, we can identify whether our explanations are faithful to recommendation performance and fairness disparity.
Following <cit.>, we remove certain attributes from the generated explanations and evaluate the resulting recommendation performance.
Therefore, in the starting evaluation point, we consider all attributes and add them to the user and item embeddings.
We then remove certain attributes from the generated explanations to observe recommendation and fairness changes at later evaluation points.
In particular,
we first use historical user-item interactions to train a recommendation model through Eq. (<ref>) to generate user and item embeddings.
Then, we fuse all attribute embeddings from Eq. (<ref>) with the trained user and item embeddings.
The user and item embeddings after fusion are used to generate recommendation results at the starting evaluation point.
Thereafter, we conduct counterfactual reasoning using our CFairER to generate attribute-level counterfactual explanations for model fairness.
Those generated explanations are defined as the erasure set of attributes for each user/item.
Finally, we exclude the erasure set from attribute space, and fuse the embeddings of attributes after erasure with the trained user and item embeddings to generate new recommendation results.
Given the recommendation results at each evaluation point, we use Normalized Discounted Cumulative Gain (NDCG)@K and Hit Ratio (HR)@K to measure the recommendation performance.
As this work focuses on item exposure fairness in recommendations, we use two widely-adopted item-side evaluation metrics, i.e., Head-tailed Rate (HT)@K and Gini@K, for fairness evaluation.
HT@K refers to the ratio of the head-tailed item number to the list length K.
A larger HT@K indicates that the model suffers from a more severe item exposure disparity by favoring items from the head-tailed (i.e., popular) group.
Gini@K measures inequality within subgroups among the Top-K recommendation list.
Larger Gini@K indicates the recommendation results are of higher inequality between the head-tailed and the long-tailed group.
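For reference, the two fairness metrics can be sketched as below; the definition of the head-item group and the use of exposure counts accumulated over all Top-K lists for Gini@K are assumptions consistent with the descriptions above.

import numpy as np

def head_tail_rate_at_k(rec_list, head_items):
    # Fraction of the Top-K list that comes from the head (popular) item group.
    return sum(i in head_items for i in rec_list) / len(rec_list)

def gini_at_k(exposure_counts):
    # Gini coefficient over per-item exposure counts accumulated from all Top-K lists.
    x = np.sort(np.asarray(exposure_counts, dtype=float))
    n = len(x)
    if n == 0 or x.sum() == 0:
        return 0.0
    cum = np.cumsum(x)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n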
§.§.§ Implementation Details
To demonstrate our CFairER, we employ a simple matrix factorization (MF) as our recommendation model.
We train the MF using train/test/validate sets split from user-item interactions in datasets with 60%/20%/20%.
We optimize the MF using stochastic gradient descent (SGD) <cit.>.
The same data splitting and gradient descent methods are applied in all baselines when required.
Our graph representation module employs two graph convolutional layers with {64, 128} output dimensions.
The FairKGAT baseline also keeps 2 layers.
The graph representation module outputs embeddings for all user and item attributes with the embedding size d=128.
The embedding size for FairKGAT and CEF is also fixed as d=128.
The number of latent factors (as in Eq. (<ref>)) of MF is set equal to the embedding size of our graph representation module.
To generate the starting evaluation point of erasure-based evaluation, we fuse attribute embeddings with the trained user and item latent factors based on element-wise product fusion.
The fused user and item embeddings are then used to produce Top-K recommendation lists.
We train our counterfactual fairness explanation model with SGD based on the REINFORCE <cit.> policy gradient.
For baseline model compatibility, as CEF <cit.> requires pre-defined user-feature attention matrix and item-feature quality matrix, we follow previous work <cit.> to regulate user/item attributes as user/item aspects and resort to analysis toolkit “Sentires” [https://github.com/evison/Sentires] to build the two matrices.
The hyper-parameters of our CFairER and all baselines are chosen by the grid search, including learning rate, L_2 norm regularization, discount factor γ, etc.
The disparity change threshold ϵ in Eq. (<ref>) of our CFairER is determined by performing a grid search on the validation set.
This enables us to choose the optimal value for a variety of recommendation tasks, including but not limited to business ( dataset), movie ( dataset), and music ( dataset) recommendations.
After all models have been trained, we freeze the model parameters and generate explanations accordingly.
We report the erasure-based evaluation results by recursively erasing top E attributes from the generated explanations.
The erasure length E is chosen from E=[5, 10, 15, 20].
The recommendation and fairness performance of our CFairER and baselines under different E is reported in Table <ref>.
§.§ Explanation Faithfulness (RQ1, RQ2)
We plot fairness and recommendation performance changes of our CFairER and baselines while erasing attributes from explanations in Figure <ref>.
Each data point in Figure <ref> is generated by cumulatively erasing a batch of attributes.
Those erased attributes are selected from the top 10 (i.e., E=10) attribute sets of the explanation lists provided by each method.[For example, given n explanation lists, the number of erasure attributes is n × 10. We cumulatively erase m attributes in one batch within in total (n × 10) / m iterations.]
As PopUser and PopItem baselines enjoy very similar data trends, we choose not to present them simultaneously in Figure <ref>.
Table <ref> presents recommendation and fairness performance after erasing E = [5, 10, 20] attributes in explanations.
Larger NDCG@K and Hit Ratio @K values indicate better recommendation performance while smaller Head-tailed Rate@K and Gini@K values represent better fairness.
Analyzing Figure <ref> and Table <ref>, we have the following findings.
Amongst all methods, our CFairER achieves the best recommendation and fairness performance after erasing attributes from our explanations on all datasets.
For instance, CFairER beats the strongest baseline CEF by 25.9%, 24.4%, 8.3% and 36.0% for NDCG@40, Hit Ratio@40, Head-tailed Rate@40 and Gini@40 with erasure length E=20 on .
This indicates that explanations generated by CFairER are faithful to explaining unfair factors while not harming recommendation accuracy.
Unlike CEF and FairKGAT, which generate explanations based on perturbing input features and adding fair-related constraints, CFairER generates counterfactual explanations by inferring minimal attributes contributing to fairness changes.
As a counterfactual explanation is minimal, it only discovers attributes that well-explain the model fairness while filtering out tedious ones that affect the recommendation accuracy.
Another interesting finding is that
PopUser and PopItem perform even worse than RDExp (i.e., randomly selecting attributes) on .
This is because recommending items with popular attributes would deprive the exposure of less-noticeable items, causing serious model unfairness and degraded recommendation performance.
In general, the fairness of all models consistently improves while erasing attributes from explanations, shown by the decreasing trend of Head-tailed Rate@K values in Figure <ref>.
This is because erasing attributes will alleviate the discrimination against users and items from disadvantaged groups (e.g., gender group, brand group), making more under-represented items to be recommended.
Unfortunately,
we can also observe the downgraded recommendation performance of all models in both Figure <ref> and Table <ref>.
For example, in Figure <ref>, the NDCG@5 of CEF drops from approximately 1.17 to 0.60 on at erasure iteration 0 and 50.
This is due to the well-known fairness-accuracy trade-off issue, in which the fairness constraint could be achieved with a sacrifice of recommendation performance.
Facing this issue, both baselines suffer from huge declines in recommendation performance, as in Table <ref>.
On the contrary, our CFairER still enjoys favorable recommendation performance and outperforms all baselines.
Besides, the decline rates of our CFairER are much slower than baselines on both datasets in Figure <ref>.
We hence conclude that the attribute-level explanations provided by our CFairER can achieve a much better fairness-accuracy trade-off than other methods.
This is because our CFairER uses counterfactual reasoning to generate minimal but vital attributes as explanations for model fairness.
Those attributes produced by CFairER are true reasons for unfairness but not the ones that affect the recommendation accuracy.
§.§ Ablation and Parameter Analysis (RQ3)
We first conduct an in-depth ablation study on the ability of our CFairER to achieve sample efficiency and bias alleviation.
Our CFairER includes two contributing components,
namely, attentive action pruning (cf. Section <ref>) and counterfactual risk minimization-based optimization (cf. Section <ref>).
We evaluate our CFairER with different variant combinations and show our main findings below.
§.§.§ Sample Efficiency of Attentive Action Pruning
Our attentive action pruning reduces the action search space by specifying varying importance of attributes for each state.
As a result, the sample efficiency can be increased by filtering out irrelevant attributes to promote an efficient action search.
To demonstrate our attentive action pruning, we test CFairER without (w/o) the attentive action pruning (i.e., CFairER w/o Attentive Action Pruning), in which the candidate action set contains all attributes connected with the current user and items.
Through Table <ref>, we observed that removing the attentive action pruning downgrades CFairER performance, which validates the superiority of our attentive action pruning in improving fair recommendations.
This is because attentive action pruning filters out irrelevant items based on their contributions to the current state, resulting in enhanced sample efficiency.
Moreover, the performance of CFairER after removing the attentive action pruning downgrades severely on .
This is because has the largest number of attributes compared with the other two datasets (cf. Table <ref>), which challenges our CFairER to find suitable attributes as fairness explanations.
These findings suggest the superiority of applying attentive action pruning in fairness explanation learning, especially when the attribute size is large.
§.§.§ Bias Alleviation with Counterfactual Risk Minimization
Our CFairER is optimized with a counterfactual risk minimization (CRM) loss to achieve unbiased policy optimization.
The CRM loss (cf. Eq. (<ref>)) corrects the discrepancy between the explanation policy and logging policy, thus alleviating the policy distribution bias in the off-policy learning setting.
To demonstrate the CRM loss,
we apply our CFairER with cross-entropy (CE) <cit.> loss (i.e., CRM loss → Cross-entropy loss) to show how it performs compared with CFairER on the CRM loss.
We observe our CFairER with CRM loss consistently outperforms the counterpart with CE loss on both fairness and recommendation performance.
The sub-optimal performance of our CFairER with CE loss indicates that the bias issue in the off-policy learning can lead to downgraded performance for the learning agent.
On the contrary, our CFairER takes advantage of CRM to learn a high-quality explanation policy.
We hence conclude that performing unbiased optimization with CRM is critical to achieving favorable fairness explanation learning.
§.§.§ Parameter Analysis
We also conduct a parameter analysis on how erasure length E (cf. Section <ref>) and candidate size n (as in Eq. (<ref>)) impact CFairER.
Figure <ref> (a) and Figure <ref> (b) report CFairER performance w.r.t. E=[5, 10, 15, 20].
Apparently, the performance of CFairER demonstrates decreasing trends from E=5, then becomes stable after E=10.
The decreased performance is due to the increasing erasure of attributes found by our generated explanations.
This indicates that our CFairER can find valid attribute-level explanations that impact fair recommendations.
The performance of CFairER degrades slightly after the bottom, then becomes stable.
This is reasonable since the number of attributes provided in the datasets is limited, while increasing the erasure length allows more attributes overlapping with previous erasures to be found.
By varying candidate size n from n=[10, 20, 30, 40, 50, 60] in Figure <ref> (c) (d),
we observe that CFairER performance first improves drastically as candidate size increases on both datasets.
The performance of our CFairER reaches peaks at n=40 and n=30 on and , respectively.
After the peaks, we can witness a downgraded model performance by increasing the candidate size further.
We consider the poorer performance of CFairER before reaching peaks is due to the limited candidate pool, i.e., insufficient attributes limit the exploration ability of CFairER to find appropriate candidates as fairness explanations.
Meanwhile, a too-large candidate pool (e.g., n=60) would offer more chances for the agent to select inadequate attributes as explanations.
Based on the two findings, we believe it is necessary for our CFairER to carry out the attentive action pruning, i.e., to select high-quality attributes as candidates based on their contributions to the current state.
§.§.§ Time Complexity and Computation Costs
For time complexity, our recommendation model (cf. Section <ref>) performs matrix factorization with a complexity of O(|𝒪|).
For the graph representation module (cf. Section <ref>), establishing node representations has complexity O(∑_l=1^L (|𝒢|+|𝒪^+|) d_l d_l-1).
For the off-policy learning process (cf. Section <ref>), the complexity is mainly determined by the attention score calculation, which has a time complexity of O(2T|𝒪^+| |𝒩̃_e| d^2).
The total time complexity is O(|𝒪|+ ∑_l=1^L(|𝒢|+|𝒪^+|) d_l d_l-1+2T|𝒪^+| n_2 d^2).
We evaluated the running time of FairKGAT and CEF baselines on the large-scale dataset.
The corresponding results are 232s and 379s per epoch, respectively.
CFairER has a comparable cost of 284s per epoch to these baselines. Considering that our CFairER achieves superior explainability improvements compared to the baselines, we believe that the increased cost of, at most, 52s per epoch is a reasonable trade-off.
§ CONCLUSION
We propose CFairER, a reinforcement learning-based fairness explanation learning framework over a HIN.
Our CFairER generates counterfactual explanations as minimal sets of real-world attributes to explain item exposure fairness.
We design a counterfactual fairness explanation model to discover high-quality counterfactual explanations, driven by an attentive action pruning to reduce the search space and a counterfactual reward to enable counterfactual reasoning.
Extensive evaluations on three benchmark datasets demonstrate CFairER’s ability to find faithful explanations for fairness and balance the fairness-accuracy trade-off.
This work is supported by the Australian Research Council (ARC) under Grant No. DP220103717, LE220100078, LP170100891 and DP200101374.
|
http://arxiv.org/abs/2307.04113v1 | 20230709080545 | Mitosis Detection from Partial Annotation by Dataset Generation via Frame-Order Flipping | [
"Kazuya Nishimura",
"Ami Katanaya",
"Shinichiro Chuma",
"Ryoma Bise"
] | cs.CV | [
"cs.CV"
] |
Mitosis Detection from Partial Annotation
K. Nishimura et al.
Kyushu University, Fukuoka, Japan [email protected]
Kyoto University, Kyoto, Japan
Mitosis Detection from Partial Annotation
by Dataset Generation via Frame-Order Flipping
Kazuya Nishimura1 Ami Katanaya2 Shinichiro Chuma2 Ryoma Bise1
Accepted 08-Jul-2023. Received 18-Jun-2023; in original form 23-May-2023
==========================================================================================
Detection of mitosis events plays an important role in biomedical research.
Deep-learning-based mitosis detection methods have achieved outstanding performance with a certain amount of labeled data. However, these methods require annotations for each imaging condition. Collecting labeled data involves time-consuming human labor.
In this paper, we propose a mitosis detection method that can be trained with partially annotated sequences.
The base idea is to generate a fully labeled dataset from the partial labels and train a mitosis detection model with the generated dataset.
First, we generate an image pair not containing mitosis events by frame-order flipping.
Then, we paste mitosis events to the image pair by alpha-blending pasting and generate a fully labeled dataset.
We demonstrate the performance of our method on four datasets, and we confirm that our method outperforms other comparisons which use partially labeled sequences.
Code is available at <https://github.com/naivete5656/MDPAFOF>.
§ INTRODUCTION
Fluorescent microscopy is widely used to capture cell nuclei behavior. Mitosis detection is the task of detecting the moment of cell division from time-lapse images (the dotted circles in Fig. <ref>).
Mitosis detection from fluorescent sequences is important in biological research, medical diagnosis, and drug development.
Conventionally, tracking-based methods <cit.> and tracking-free methods <cit.> have been proposed for mitosis detection.
Recently, deep-learning-based mitosis-detection methods have achieved outstanding performance <cit.>.
However, training deep-learning methods requires a certain amount of annotation for each imaging condition, such as the types of cells and microscopy and the density of cells.
Collecting a sufficient number of labeled data covering the variability of cell type and cell density is time-consuming and labor-intensive.
Unlike cell detection and segmentation, which aim to recognize objects from a single image, mitosis detection aims to identify events from a time series of images. Thus, it is necessary to observe differences between multiple frames to annotate mitosis events. Comprehensively annotating mitosis events is time-consuming, and annotators may miss mitosis events. Thus, we must carefully review the annotations to ensure that they are comprehensive.
Partial annotation has been used as a way to reduce the annotation costs of cell and object detection <cit.>. Fig. <ref> shows an example of partially annotated frames. Some mitosis events are annotated (a red-dotted circle), and others are not (light-blue-dotted circles). The annotation costs are low because the annotator only needs to plot a few mitotic positions.
In addition, this style of annotation allows for missing annotations. Therefore, it would be effective for mitosis detection.
Unlike supervised annotation, partial annotation can not treat unannotated areas as regions not containing mitosis events since the regions may contain mitosis events (Fig. <ref>). The regions naturally affect the training in the partial annotation setting. To avoid the effect of unlabeled objects in unlabeled regions, Qu et al. <cit.> proposed to use a Gaussian masked mean squared loss, which calculates the loss around the annotated regions.
The loss function works in tasks in which foreground and background features have clearly different appearances, such as in cell detection.
However, it does not work on mitosis detection since the appearance of several non-mitotic cells appears similar to mitosis cells; it produces many false positives.
In this paper, we propose a cell-mitosis detection method for fluorescent time-lapse images by generating a fully labeled dataset from partially annotated sequences. We achieve mitosis detection training in a mitosis detection model with the generated dataset.
To generate the fully labeled dataset, we should consider two problems: (1) no label indicating regions not containing mitosis cells and (2) few mitosis annotations.
We can easily generate the regions not containing mitotic cells by using one image twice.
However, such regions do not contribute to identifying mitotic cells and non-mitotic cells since the data do not show natural cell motions.
For the training to be effective, the regions not containing mitotic cells should show the natural movements of cells.
To generate such regions, we propose frame-order flipping which simply flips the frame order of a consecutive frame pair. As shown in the white rectangles in Fig. <ref>, we can convert a mitosis event to a cell fusion by flipping operation. Hence, the flipped pair is the region not containing mitosis cells. Even though we flipped the frame order, the non-mitotic cells still have natural time-series motion, as shown in the yellow rectangles in Fig. <ref>.
In addition, we can make the most of a few partial annotations by using copy-and-paste-based techniques. Unlike regular copy-and-paste augmentation <cit.> for supervised augmentation of instance segmentations which have object mask annotations, we only have point-level annotations. Thus, we propose to use alpha-blending pasting techniques which naturally blend two images.
Experiments conducted on four types of fluorescent sequences demonstrate that the proposed method outperforms other methods which use partial labels.
Related work
Some methods have used partially labeled data to train models <cit.>.
Qu <cit.> proposed a Gaussian masked mean squared loss, which calculates the loss around the annotated areas. To more accurately identify negative and positive samples, positive unlabeled learning has been used for object detection <cit.>.
These methods have used positive unlabeled learning on candidates detected by using partial annotation to identify whether the candidates are labeled objects or backgrounds.
However, since the candidates detected by partial annotation include many false positives, the positive unlabeled learning does not work on mitosis detection.
Moreover, positive unlabeled learning requires a positive prior, which is difficult to estimate in the mitosis detection task because mitosis events and backgrounds have similar appearances; hence, these methods could not work on mitosis detection.
§ METHOD: MITOSIS DETECTION WITH PARTIAL LABELS
Our method aims to detect coordinates and timing (t, x, y) of mitosis events from fluorescent sequences.
For training, we use time-lapse images ℐ = {I_t}_t=1^T and partial labels (a set of annotated mitosis cells). Here, I_t denotes an image at frame t, and T is the total number of frames.
Our method generates a fully labeled dataset 𝒟_p= { (I'_t-1, I'_t), 𝒫_t' }^T-1_t=1 from time-lapse images ℐ and partial labels and then trains a mitosis detection model f_θ with the generated dataset. Here, I'_t is a generated image, and 𝒫_t' is a set of mitotic coordinates contained in (I'_t-1, I'_t). Since our method trains the network with partial labels, it can eliminate the costs of checking for missed annotations.
§.§ Labeled dataset generation
Fig. <ref> shows an overview of our dataset generation. We randomly pick a pair of consecutive frames (I_t-1, I_t) from time-lapse images ℐ. Since the pair may contain unannotated mitosis events, we forcibly convert the pair into a negative pair (i.e., a pair which does not contain mitosis events) by using frame-order flipping. Next, we paste mitosis events to a generated pair using alpha-blending pasting and obtain a generated pair (I'_t-1, I'_t). Since we know the pasted location, we can obtain the mitosis locations 𝒫'_t of the generated pair.
Negative pair generation with frame-order flipping:
In this step, we generate a pair not containing mitotic cells by using a simple augmentation-based frame-order flipping. Fig. <ref> shows an example of the pair images (I_t-1, I_t). The pair may contain mitosis events. If we wrongly assume that the pair does not contain mitotic cells, the incorrect label affects the training of the mitosis detection model f_θ. To prevent the pair from containing mitosis events, we flip the frame order and treat the flipped pair (I_t, I_t-1) as a negative pair.
Since mitosis is the event that a cell divides into two daughter cells, the mitosis event is transformed into an event in which two cells fuse into one by flipping the order (Fig. <ref>).
The flipped event can treat as a non-mitotic event.
Note that the motivation behind using frame-order flipping is to be able to utilize pixels showing the motions of non-mitotic cells as negatives by transforming mitosis events into other events.
Even if the order is flipped, the movements of non-mitotic cells remain non-mitotic cell features, and we consider these cells effective for training on the negative label.
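A minimal sketch of the negative-pair generation is given below; the frame container and the random sampling of t are illustrative assumptions.

import numpy as np

def sample_negative_pair(frames, rng=np.random):
    # Pick a random consecutive pair (I_{t-1}, I_t) and flip its order so that any
    # (possibly unannotated) mitosis event becomes a fusion-like, non-mitotic event.
    t = rng.randint(1, len(frames))
    I_prev, I_curr = frames[t - 1], frames[t]
    return I_curr.copy(), I_prev.copy(), t          # (I'_{t-1}, I'_t) = (I_t, I_{t-1})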
Mitosis label utilization with alpha-blending pasting:
Next, we paste mitosis events to the flipped pair by using copy-and-paste techniques in order to utilize the positive labels effectively.
Copy and paste augmentation has been used for supervised augmentation of instance segmentation <cit.>.
Unlike instance segmentation with object masks, we only have locations (t, x, y).
A simple solution is to crop images around the mitosis position and copy and paste them to the target image, like in CutMix <cit.>. However, the cropped image naturally contains surrounding objects, and the generated image appears unnatural. Unnatural images cause the detection network to make biased predictions and reduce generalization performance.
To avoid this problem, we propose alpha-blending pasting with a Gaussian blending mask.
We blend two images by leaving the pixel value in the center and blurring the vicinity of the edge of the image.
First, we crop the image around the positive annotations and obtain a set of cropped pair 𝒞 = {(C_t-1^i, C_t^i )}^N_i=0 and initialize (I'_t-1, I'_t)=(I_t, I_t-1) and 𝒫_t'= {}. Here, N is the total number of partial annotations, while C_t-1^i and C_t^i are images before and after the mitosis of the i-th annotation (Fig. <ref>). Define I_t'(l⃗^j), I_t-1'(l⃗^j) as a cropped pair image at the j-th random spatial location l⃗^j.
We crop each image centered at l⃗^j to a size that is the same as that of C_t^i. We update the randomly selected patch I_t'(l⃗^j), I_t-1'(l⃗^j) by blending a randomly selected cropped pair (C_t-1^i, C_t^i) with the following formula: I_t'(l⃗^j) = (1-α) ⊙I_t'(l⃗^j) + α⊙C_t^i (and analogously for I_t-1'(l⃗^j) with C_t-1^i), where α is a Gaussian blending mask (Fig. <ref>).
We generate the blending mask by blurring a binary mask around the annotation with a Gaussian filter. We use a random sigma value for the Gaussian filter. Then, we add the paste location l⃗^j to the set 𝒫_t'. We repeat this process random k times.
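The alpha-blending pasting can be sketched as follows. The exact extent of the binary core before Gaussian blurring, the normalization of the mask, and the assumption that the pasted patch lies fully inside the image are illustrative choices, not the authors' exact implementation.

import numpy as np
from scipy.ndimage import gaussian_filter

def alpha_blend_paste(I_pair, crop_pair, center, sigma):
    # Paste a cropped mitosis pair (C_{t-1}^i, C_t^i) onto the flipped pair at `center`
    # using a Gaussian blending mask alpha (close to 1 at the centre, soft near the borders).
    h, w = crop_pair[0].shape
    core = np.zeros((h, w))
    core[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = 1.0   # binary mask around the annotation (assumed extent)
    alpha = gaussian_filter(core, sigma)
    alpha /= alpha.max() + 1e-8
    y, x = center                                        # top-left corner; crop assumed inside the image
    out = [img.astype(float) for img in I_pair]
    for k in range(2):
        patch = out[k][y:y + h, x:x + w]
        out[k][y:y + h, x:x + w] = (1 - alpha) * patch + alpha * crop_pair[k]
    return out[0], out[1]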
§.§ Mitosis detection with generated dataset
We modified a heatmap-based cell detection method <cit.> to work as a mitosis detection method.
Fig. <ref> is an illustration of our mitosis detection model.
Given two consecutive frames (I'_t-1, I'_t), the network outputs a heatmap Ĥ_t.
We treat the channel axis as the time axis for the input.
The first channel is I'_t-1, and the second is I'_t.
First, we generate individual heatmaps H_t^j for each pasted coordinate l⃗^j = (l^j_x, l^j_y). H_t^j is defined as H_t^j(p_x, p_y) = exp( -((l_x^j - p_x)^2 + (l_y^j - p_y)^2)/σ^2 ), where p_x and p_y are the coordinates of H_t^j and σ is a hyperparameter that controls the spread of the peak.
The ground truth of the heatmap at t is generated by taking the maximum over the individual heatmaps, H_t = max_j (H^j_t) (H_t in Fig. <ref>). The network is trained with the mean squared error loss between the ground truth H_t and the output of the network Ĥ_t. We can find the mitosis positions by finding the local maxima of the heatmap.
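A hedged sketch of the ground-truth heatmap construction and the training loss follows; it assumes coordinates in pixel units and a recent PyTorch version for torch.meshgrid with indexing="ij".

import torch

def make_heatmap(coords, H, W, sigma=6.0):
    # Ground-truth heatmap H_t: maximum over per-event Gaussians centred at the pasted locations.
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    heat = torch.zeros(H, W)
    for (ly, lx) in coords:
        g = torch.exp(-((xs - lx) ** 2 + (ys - ly) ** 2) / sigma ** 2)
        heat = torch.maximum(heat, g)
    return heat

def detection_loss(pred, target):
    # Mean squared error between the network output and the ground-truth heatmap.
    return ((pred - target) ** 2).mean()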
§ EXPERIMENTS
Dataset:
We evaluated our method on four datasets.
The first set is HeLa <cit.>, in which live cell images of HeLa cells expressing H2B-GFP were captured with 1100 × 700 resolution <cit.> [We used the publicly available
CTC data-set <http://celltrackingchallenge.net/>. We only use HeLa since the number of mitosis events in other cells is small.].
Each sequence contains 92 fluorescent images with 141 mitosis events on average.
The second set is ES, in which live cell images of mouse embryonic stem cells expressing H2B-mCherry were captured with 1024 × 1024 resolution.
Each sequence contains 41 fluorescent images with 33 mitosis events on average.
The third set is ES-D in which mouse embryonic stem cells expressing H2B-mCherry were induced to differentiate and used to capture live cell images.
Each sequence contains 61 fluorescent images with 18 mitosis events on average.
The fourth set is Fib, in which live cell images of mouse fibroblast cells expressing H2B-mCherry were captured with 1024 × 1024 resolution.
Each sequence contains 42 fluorescent images with 11 mitosis events on average.
Each dataset consists of four sequences of images.
We performed four-fold cross-validation in which two sequences were used as training data, one as validation data, and one as test data.
As shown in Fig. <ref>, the appearance and density are different depending on the dataset.
Implementation details:
We implemented our method within the Pytorch framework <cit.> and used a UNet-based architecture <cit.> for the mitosis-detection network. The model was trained with the Adam optimizer with a learning rate of 1e-3. σ, which controls the spread of the heatmap, was 6.
The cropping size of the positive annotations was 40 pixels.
We randomly change the number of pasting operations k between 1 and 10.
We used random flipping, random cropping, and brightness change for the augmentation.
Evaluation metrics:
We evaluated our method using the F1 score <cit.>, which is widely used in mitosis detection. Given ground-truth coordinates and detected coordinates, we performed one-by-one matching. If the distance of a pair was within 15 pixels spatially and 6 frames temporally, we associated the closest coordinate pairs. We treated the matched pairs as true positives (TP), unassociated detected coordinates as false positives (FP), and unassociated ground-truth coordinates as false negatives (FN).
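The matching used for the F1 score can be sketched as below; a Hungarian assignment over the allowed (thresholded) pairs is used here as one way to realize the one-by-one closest matching, which is an implementation assumption.

import numpy as np
from scipy.optimize import linear_sum_assignment

def mitosis_f1(gt, det, r_space=15, r_time=6):
    # gt, det: arrays of (t, x, y) rows. Pairs beyond the spatial/temporal thresholds are
    # forbidden; the rest are matched one-to-one by minimizing total spatial distance.
    gt, det = np.asarray(gt, float), np.asarray(det, float)
    if len(gt) == 0 or len(det) == 0:
        return 0.0
    d_sp = np.linalg.norm(gt[:, None, 1:] - det[None, :, 1:], axis=-1)
    d_t = np.abs(gt[:, None, 0] - det[None, :, 0])
    cost = np.where((d_sp <= r_space) & (d_t <= r_time), d_sp, 1e9)
    rows, cols = linear_sum_assignment(cost)
    tp = int(np.sum(cost[rows, cols] < 1e9))
    fp, fn = len(det) - tp, len(gt) - tp
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0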
Comparisons:
We conducted four comparisons that involved training the model with partially labeled data. For the first method, we trained the model by treating unlabeled pixels as non-mitosis ones (Baseline <cit.>). The second method used the Gaussian masked loss (GM <cit.>). The masked loss was calculated on the masked pixels around the positive-label pixels.
Thus, the method ignored unlabeled pixels. The third method used positive unlabeled learning to identify mitosis from candidates obtained by the detection model trained with the masked loss (PU <cit.>).
The fourth method generated pseudo-labels from the results of positive unlabeled learning and retrained the detection model with the pseudo-label (PU-I <cit.>).
In Table <ref>, we compared our method with previous methods in one and five-shot settings.
We used N samples per sequence in the N-shot settings. For a robust comparison, we sampled one or five mitosis annotations under five seed conditions and took the average.
Overall, our method outperformed all compared methods in F1 metric. GM <cit.>, PU <cit.>, and PU-I <cit.> are designed for detecting objects against simple backgrounds. Therefore, these methods are not suited to a mitosis detection task and are inferior to the baseline.
The baseline <cit.> treats unlabeled pixels as non-mitosis cell pixels.
In the partially labeled setting, unlabeled pixels contain unannotated mitosis events, and unannotated mitosis affects performance.
Unlike cell detection, mitosis detection requires identifying mitosis events from various non-mitotic cell motions, including motions that appear mitotic appearances. Although GM <cit.> can ignore unlabeled mitosis pixels with the masked loss, it is difficult to identify such non-mitosis motions.
Therefore, GM estimates produce many false positives. PU <cit.> uses positive unlabeled learning to eliminate false positives from candidates obtained from the detection results with partial labels. However, positive unlabeled learning requires a positive prior in the candidates and a certain amount of randomly sampled positive samples. Since the candidates contain many false positives, the positive prior is difficult to estimate. In addition, there is no guarantee that positive unlabeled learning can work correctly with the selected N-shot annotations.
Moreover, since positive unlabeled learning does not work in the mitosis detection task, PU-I <cit.> can not select accurate pseudo labels.
Unlike these methods, our method can estimate mitosis events accurately. Since our method generates a fully labeled dataset from a partial label, it effectively uses a few partial annotations.
Effectiveness of each module:
We performed an ablation study on the HeLa dataset to investigate the effectiveness of the proposed module.
We used random augmentation (i.e., random elastic transformation <cit.>, brightness change, and gaussian noise) instead of using frame-order flipping (FOF).
We generated I_t^aug by augmenting I_t and input the pair (I_t, I_t^aug) to the network.
In the w/o ABP setting, we directly pasted cropped images on the target image as in CutMix <cit.>.
Table <ref> demonstrates that the proposed modules improve mitosis detection performance.
Fig. <ref> shows examples of the estimation results for each condition.
Without the FOF setting, the detection model estimates a high value for all moving cells, leading to over-detection.
Without the ABP setting, the detection model overfits the directly pasted image.
The directly pasted image tends to include unnatural boundaries on the edge, leading to missed detections in real images.
Robustness against missing annotations:
We confirmed the robustness of the proposed method against missing annotations on the ES dataset. We changed the missing annotation rate from 0% to 30%.
A comparison with the supervised method in terms of F1-score is shown in Fig. <ref>.
The performance of the supervised method deteriorates as the percentage of missing labels increases, whereas the performance of the proposed method remains steady. Since our method flips the frame order, we can avoid the effects of missing annotations.
Appearance of generated dataset:
Fig. <ref> shows an example of the generated image pair. The cropped mitosis image pairs were pasted on the red-dotted circle.
It can be seen that the borders of the original image and the pasted image have been synthesized very naturally.
§ CONCLUSION
We proposed a mitosis detection method using partially labeled sequences with frame-order flipping and alpha-blending pasting. Our frame-order flipping transforms unlabeled data into non-mitosis labeled data through a simple flipping operation. Moreover, we generate various positive labels with a few positive labels by using alpha-blending pasting. Unlike directly using copy-and-paste, our method generates a natural image. Experiments demonstrated that our method outperforms other methods that use partially annotated sequences on four fluorescent microscopy images.
Acknowledgements: This work was supported by JSPS KAKENHI Grant Number JP21J21810 and JST ACT-X Grant Number JPMJAX21AK, Japan.
|
http://arxiv.org/abs/2307.03884v1 | 20230708031428 | Noisy Tensor Ring approximation for computing gradients of Variational Quantum Eigensolver for Combinatorial Optimization | [
"Dheeraj Peddireddy",
"Utkarsh Priyam",
"Vaneet Aggarwal"
] | quant-ph | [
"quant-ph",
"cs.LG"
] |
Purdue University, West Lafayette IN 47906
{dpeddire, upriyam, vaneet}@purdue.edu
Variational Quantum algorithms, especially Quantum Approximate Optimization and the Variational Quantum Eigensolver (VQE), have established their potential to provide computational advantage in the realm of combinatorial optimization. However, these algorithms suffer from classically intractable gradients limiting the scalability. This work addresses the scalability challenge for VQE by proposing a classical gradient computation method which utilizes the parameter shift rule but computes the expected values from the circuits using a tensor ring approximation. The parametrized gates from the circuit transform the tensor ring by contracting the gate matrix along the free edges of the tensor ring. While the single qubit gates do not alter the ring structure, the state transformations from the two qubit rotations are evaluated by truncating the singular values, thereby preserving the structure of the tensor ring and reducing the computational complexity. This variation of the matrix product state approximation grows linearly in the number of qubits and the number of two qubit gates, as opposed to the exponential growth in the classical simulations, allowing for a faster evaluation of the gradients on classical simulators.
Noisy Tensor Ring approximation for computing gradients of Variational Quantum Eigensolver for Combinatorial Optimization
Dheeraj Pedireddy, Utkarsh Priyam, and Vaneet Aggarwal
==========================================================================================================================
§ INTRODUCTION
Quantum computing has been widely touted for its potential to solve some complex problems much more efficiently than classical computers <cit.>. Although the fruition of the idea is further into the future, researchers have been exploring the real-time applicability of the current generation of quantum computers. Most of the quantum processors in their current state are severely limited by the small number of qubits, high noise levels and inefficient error mitigation techniques, calling for a class of algorithms robust to the noise and error. Variational Quantum Algorithms (VQA) have been studied widely for their resilience to the noise from decoherence, making them an ideal choice of algorithms for various applications on gate-based Noisy Intermediate Scale Quantum (NISQ) devices. Two such algorithms of prominence, Variational Quantum Eigensolver (VQE) and Quantum Approximate Optimization Algorithm (QAOA), evaluate the expected energy of a state resulting from a short parameterized circuit (frequently referred to as an ansatz) with respect to an observable defined by a given problem. A classical outer-loop optimizer tries to find the optimal circuit parameters that minimize the expected energy. While QAOA implements a fixed ansatz inspired by adiabatic quantum computing, VQE utilizes a variable ansatz, offering flexibility to engineer the ansatz based on the hardware constraints and the problem at hand. This work chooses to focus on VQE, inspired by the recent advances of variable ansatz in quantum machine learning <cit.>. VQE, initially developed by Peruzzo et al. <cit.>, has seen a number of applications in condensed matter physics <cit.>, quantum chemistry <cit.> and quantum mechanics <cit.>.
Optimization is one of the frontrunners among the applications being studied for potential quantum advantage from VQE and adjacent algorithms <cit.>. Combinatorial optimization is a class of problems of practical relevance with applications spanning across transportation, logistics, manufacturing etc. Studies have indicated that the exponentially growing state space and quantum entanglement can improve the chances of finding the right solution with a potential speedup <cit.>. Even minor improvements to optimization problems from quantum algorithms can potentially have a large impact on the society. In the context of VQE, a multi-qubit Hamiltonian is prepared with its ground state encoding the solution of the optimization problem and the algorithm optimizes their parameters to minimize the energy of the Hamiltonian. The algorithm has been extended to use filtering operators <cit.> and iterative approaches <cit.>, to improve the performance with combinatorial optimization. The approach has also been validated on several practical applications using optimization (e.g., Job Shop Scheduling <cit.>, Vehicle Routing <cit.>)
Despite promising prospects, VQAs, and more broadly quantum circuits, are hindered by a plethora of problems in the current era of quantum computing, with the primary forces of impedance being the limited number of qubits, the physical cost of implementing quantum circuits and decoherence noise. Hybrid algorithms also suffer from the asymmetric scaling of quantum and classical resources, with the circuit execution scaling linearly in the number of qubits and circuit depth and the classical gradient evaluation scaling exponentially. Note that the gradients of the variational parameters in VQAs were evaluated using either automatic or numeric differentiation until Schuld et al. <cit.> formalized the notion of gradients computed on quantum hardware, popularized as the parameter shift rule. This method estimates the gradients by computing the energy of the wave functions generated by identical circuits with the parameter for which the gradient is to be estimated, shifted by certain values. The parameter shift rule alleviates the imbalance in the scalability, albeit at the cost of executing a much larger number of quantum circuits than the other methods. Given the inconsistency in evaluating the expected values from circuits due to decoherence and inefficient error mitigation techniques, on top of the statistical noise from measurement, a larger number of circuits can lead to inaccurate results.
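For concreteness, a minimal sketch of the parameter shift rule for standard Pauli-rotation gates is shown below; the expectation function, which would execute or simulate the shifted circuits, is assumed to be provided.

import numpy as np

def parameter_shift_gradient(expectation, params, j, shift=np.pi / 2):
    # dE/dtheta_j = [E(theta_j + pi/2) - E(theta_j - pi/2)] / 2 for gates generated
    # by Pauli operators; `expectation` evaluates the circuit energy for given parameters.
    plus, minus = params.copy(), params.copy()
    plus[j] += shift
    minus[j] -= shift
    return 0.5 * (expectation(plus) - expectation(minus))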
In order to address the issues of scalability, accuracy and cost of execution, this manuscript proposes a classically simulated quantum circuit execution method that approximates the initial and intermediate quantum states using a low-rank tensor ring (TR) to compute the expected energy, which in turn is used to approximate the gradients of a VQE. Built upon the Matrix Product State (MPS) approximation of many-body quantum states <cit.>, the tensor ring VQE (TR-VQE) formulates a combinatorial optimization in the same way as a naive VQE, using the parameter shift rule to compute the gradients. However, the expected values of the shifted circuits used to compute the gradients are evaluated by approximating the initial quantum state with a TR as opposed to an MPS, where the single-qubit and two-qubit gates corresponding to the circuit ansatz are evaluated using tensor contractions. It must be noted that while a single-qubit gate does not change the structure of the tensor network, a two-qubit gate contracted with the two corresponding tensors can alter the network by increasing the tensor size or its rank. The proposed method retains the tensor ring structure and rank by truncated singular value decomposition of the higher-order tensor resulting from the application of a two-qubit gate. The consistent low-rank structure allows for an exponential speedup with respect to the number of qubits and circuit depth, compared to the MPS approximation and the brute-force approach with the full state vector. This truncation, however, induces noise in the circuit executions similar to the decoherence in actual quantum computers. Therefore, classically simulating a noisy quantum computer, instead of a perfect one, scales only linearly in the number of qubits and circuit depth <cit.>. The MPS representation tries to simulate ideal quantum computation without noise, but the literature suggests that the noise in current-generation quantum computers limits the amount of entanglement that can be built into a quantum state. Given the computational cost of simulating ideal quantum computers, this may not be an ideal prospect, since such simulations are not representative of noisy quantum computations. Moreover, given the robustness of VQAs to noise, this kind of noisy simulation, with its benefits of scalability, can be specifically useful for machine learning and optimization. Furthermore, Liu et al. <cit.> highlight that the presence of noise in VQAs can naturally help the optimizer avoid saddle points. We posit that this advantage extends to TR-VQE as well due to the induced noise. The proposed method is validated on multiple instances of the max-cut problem and compared against F-VQE <cit.> and a naive VQE using the parameter shift rule. The expected values of the circuits for the benchmarks are computed using simulations implementing a non-noisy MPS approximation, highlighting the improved performance of the noisy TR approximation over the MPS approximation.
The rest of the manuscript is organized as follows: Section <ref> recounts the existing literature related to the use of tensor networks in approximating quantum circuits and their applications in QML. Section <ref> formulates the notion of VQE to solve the maximum cut problem introduced in Section <ref>. Section <ref> discusses the proposed method used to compute the gradients of a variational quantum circuit using the TR approximation of a quantum state, and Section <ref> addresses the complexity analysis of the proposed method. The numerical simulations are explained in Section <ref>, followed by a discussion of limitations and future directions in Section <ref>.
§.§ Related Work
Since its inception, the tensor network approach has been much more widely explored in the context of classical simulation of quantum computations, compared to the brute-force statevector simulation
or other graphical and distributed methods <cit.>. Matrix Product states especially were widely regarded for their ability to efficiently represent moderately entangled quantum many body states <cit.>. The idea has been further extended to techniques that efficiently simulate quantum circuits <cit.> by contracting tensor networks at a fraction of cost of the statevector simulation which holds the full 2^N sized vector. Building upon the literature several variations have emerged for specific cases like Projected Entangled Pair States (PEPS) for two-dimensional circuits <cit.> and Tree Tensor networks (TTN) for circuits with tree-like connectivity <cit.> and Multi-scale Entanglement Renormalization Ansatz (MERA) <cit.> etc.
Note that the naive MPS-based circuit simulation (referred to as the non-noisy MPS approximation in this manuscript), as formulated in <cit.> and widely implemented across quantum computing platforms like Qiskit, does not efficiently encode circular entanglement from the first to the last qubit. Further, each application of a two-qubit gate contraction results in an increased tensor size, which in turn increases the computational complexity as the number of two-qubit gates in the circuit grows. To circumvent this shortcoming, Zhou et al. <cit.> proposed a truncated MPS approximation to simulate noisy quantum computers, which demonstrates a linear complexity in the number of qubits and circuit depth.
The noisy simulation addresses the issue of increasing tensor size by approximating the larger tensor after the application of a two qubit gate with tensors of smaller size. The higher order tensor is decomposed into two lower order tensors by truncated singular value decomposition. This approximation preserves the tensor sizes after the application of each gate unlike in the previous iterations of MPS-based simulation.
A number of quantum-inspired tensor network methods have been explored in the machine learning literature for supervised learning. Huggins et al. <cit.> implement MPS and Tree Tensor Network models to solve binary classification. Other tensor network based methods using PEPS and MPS were demonstrated to be effective in image classification tasks <cit.>. The aforementioned literature mostly explores quantum-inspired classical machine learning techniques, but very few works have probed the utility of tensor networks in augmenting quantum machine learning techniques. Peddireddy et al. <cit.> extend the singular value thresholding method from Zhou et al. <cit.> to tensor rings implemented with variational quantum classifiers, demonstrating scalability and improved performance over the non-noisy MPS approximation. Tensor rings also encode circular entanglement more efficiently than MPS due to their ring structure. While Zhou et al. <cit.> evaluate the approximated expectations using a noisy MPS representation, they do not explore the notion of extending it to computing gradients of variational circuits. Therefore, the application of noisy circuit simulation to scale the classical optimization loop of VQE is still an open problem. Furthermore, extending this approximation method from MPS to tensor rings can also improve representability. This work builds upon <cit.> and <cit.> by adapting the noisy tensor ring representation to compute the approximate gradients of the parameters of a variational quantum eigensolver using the parameter-shift rule. Although the proposed TR-based representation computes less accurate gradients than non-noisy MPS-based representations, owing to the additional information that is removed in the form of truncated singular values, the TR-based approach scales much more efficiently.
§ PROBLEM SETUP
§.§ Max-Cut Optimization Problem
This section briefly introduces the maximum cut (max-cut) problem and its mathematical formulation in the context of quantum computers. Max-Cut is an NP-hard binary optimization problem with a history of applications in statistical physics, VLSI design, clustering, and related areas. Given an undirected graph G = (V,E), with V and E representing the nodes and edges of the graph, the problem aims to maximize the summed weight of the edges that are cut when the nodes of the graph are partitioned into two subsets, by choosing the optimal grouping.
The mathematical definition follows the QUBO formulation <cit.>: a graph of n nodes with the weights of the edges given by w_ij for (i,j) ∈ E. The nodes of the graph are cut into two subgroups labelled +1 and -1. The problem attempts to maximize the objective function C(x) given by the sum of the weights of the edges connecting the nodes in +1 to the nodes in -1 which assumes the form:
C(x) = ∑_i,j w_ij x_i (1 - x_j)
where x ∈{0, 1}^n and (i,j) ∈ E. The bitstring x corresponds to an instance of the grouping schema, where x_i = 0 or 1 represents the i-th node being assigned to the subgroup +1 or -1, respectively. In order to find the solution to the given objective function with a quantum computer, we construct an Ising Hamiltonian <cit.> corresponding to the function by substituting x_i with its matrix transformation (I - Z_i)/2, where Z_i are the Pauli Z operators acting on qubit i and I is the identity matrix:
C(x) = ∑_i,j1/4 w_i,j (I - Z_i) (I + Z_j)
C(x) = 1/2∑_i<j w_ij - 1/2∑_i<j w_ij Z_i Z_j
Essentially, maximizing the objective of the given optimization problem is equivalent to minimizing the energy of the Ising Hamiltonian given by:
ℋ = ∑_i,j w_i,j Z_i Z_j
whose ground state corresponds to the solution of the optimization. The full Hamiltonian ℋ∈ℂ^2^n× 2^n is never constructed explicitly but is represented as a combination of the Pauli Z operators.
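As an illustration of this mapping, the following minimal Python sketch (our own, not part of the original formulation; the toy graph and variable names are hypothetical) checks that maximizing C(x) is equivalent to minimizing the Ising energy up to the constant offset:

```python
import numpy as np

# Hypothetical 4-node weighted graph given as undirected edges (i, j, w_ij)
edges = [(0, 1, 3.0), (1, 2, 2.0), (2, 3, 5.0), (0, 3, 1.0), (0, 2, 4.0)]

def maxcut_objective(x, edges):
    # C(x): total weight of edges whose endpoints fall in different subgroups
    return sum(w * (x[i] * (1 - x[j]) + x[j] * (1 - x[i])) for i, j, w in edges)

def ising_energy(x, edges):
    # <x| sum_ij w_ij Z_i Z_j |x>, with Z_i|x> = +|x> if x_i = 0 and -|x> if x_i = 1
    z = 1 - 2 * np.asarray(x)
    return sum(w * z[i] * z[j] for i, j, w in edges)

x = [0, 1, 1, 0]                      # candidate cut: nodes {0, 3} vs {1, 2}
offset = sum(w for _, _, w in edges)  # twice the constant term (1/2) sum_{i<j} w_ij
# C(x) = (offset - <x|H|x>) / 2, so maximizing C is minimizing the Ising energy
assert np.isclose(maxcut_objective(x, edges), 0.5 * (offset - ising_energy(x, edges)))
print(maxcut_objective(x, edges), ising_energy(x, edges))   # 12.0  -9.0
```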
§.§ Variational Quantum Eigensolver
VQE is one of the algorithms that utilizes parameterized quantum circuits to solve for an approximate solution of combinatorial optimization problems. Unlike QAOA, VQE does not enforce any constraints on the circuit ansatz and therefore can be altered to suit the hardware that it's being implemented on. The optimization problem is first translated to a qubit Hamiltonian ℋ whose eigenvalues correspond to the costs of various solutions with the ground state being associated with the optimal solution of the problem. A quantum circuit with parameterized unitary rotations denoted by U(θ) is applied to an initial state |ψ_0⟩ (generally chosen to be the basis state |0⟩^⊗ n) resulting in a trial wavefunction.
|ψ(θ)⟩ = U(θ)|ψ_0⟩
Here, U(θ) represents a chosen ansatz U with variational parameters given by θ. The energy landscape of the Hamiltonian can be traversed using this wavefunction to estimate the expected energy. We choose the notation H(θ) to represent the expectation value of |ψ(θ)⟩ with respect to the observable Hamiltonian ℋ.
H(θ) = ⟨ψ(θ)|ℋ|ψ(θ)⟩
The algorithm then updates the variational parameters of the circuit employing an outer loop optimizer using gradient descent or other adjacent methods. The process is repeated until we arrive at a sufficiently low energy. The quality of the solution at the t-th iteration is evaluated using the approximation ratio which is defined as follows:
α = M-H(θ_t)/M-m
where M represents the maximum possible Hamiltonian value and m the minimum. In other words, α=1 represents the optimal solution, and α=0 represents making no cuts.
Most of the variational quantum algorithms including VQE are implemented as hybrid models that compute the expected value of the observable on a quantum computer while calculating gradients and updating the weights on a classical computer. The fundamental mechanics of the VQE algorithm is illustrated in Figure <ref>. Following the parameter shift rule <cit.>, when the variational parameters are components of a single qubit rotation gate, the gradient takes the following form:
∂ H(θ)/∂θ^i = 1/2[H(θ + (π/2) 1_i) - H(θ - (π/2) 1_i)]
Given the choice of ansatz, we choose a circuit that only comprises CX (CNOT) gates and single qubit rotation gates which form a universal gate set, thus simplifying the gradients to the closed form given in Equation <ref> where θ^i is the i-th element of θ, H(θ) corresponds to the energy of the Hamiltonian ℋ with respect to the wavefunction generated by the circuit U(θ) and 1_i is a one-hot vector with the i-th value as 1.
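A minimal sketch of the resulting outer loop is given below; `expected_energy` stands for any routine returning H(θ) (e.g., a circuit execution or the tensor-ring evaluation introduced later), and the toy cosine landscape is only a hypothetical stand-in used to check the update rule:

```python
import numpy as np

def parameter_shift_gradient(expected_energy, theta):
    # dH/dtheta_i = 0.5 * [H(theta + (pi/2) e_i) - H(theta - (pi/2) e_i)]
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        shift = np.zeros_like(theta)
        shift[i] = np.pi / 2
        grad[i] = 0.5 * (expected_energy(theta + shift) - expected_energy(theta - shift))
    return grad

def minimize(expected_energy, theta0, lr=0.2, iters=200):
    theta = np.array(theta0, dtype=float)
    for _ in range(iters):
        theta -= lr * parameter_shift_gradient(expected_energy, theta)
    return theta

# Toy single-parameter landscape H(theta) = cos(theta_0); the minimum is at theta_0 = pi
print(minimize(lambda t: np.cos(t[0]), [0.3]))
```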
§ METHODOLOGY
§.§ Computing gradients using Tensor Rings
Since the gradients of VQE can be computed by implementing quantum circuits, it is crucial to be able to carry out the circuits efficiently. Although the parameter-shift method is faster than automatic differentiation, it requires a quantum processor to run three identical copies of the ansatz with different parameters numerous times to arrive at the gradients (more discussion on this is provided in Section <ref>). This could present an impediment given the limited availability of quantum computers and the cost of each implementation. Therefore, it is essential to study the utility of classical simulation of quantum circuits in assisting the optimization procedure.
Tensor networks have been shown to be effective in approximating quantum many-body systems and are thus a strong contender among the methods for efficiently simulating quantum circuits. A tensor network can be easily understood via Penrose diagrams or tensor network diagrams, where each diagram corresponds to a graph of multiple nodes with each node representing a tensor. A tensor is a multidimensional array, with its order denoting the number of its dimensions or edges. A popular approximation strategy for quantum systems involves Matrix Product States (MPS) or Tensor Trains (TT), a class of tensor networks that aim to represent a higher-order tensor as a chain of order-3 tensors (see Figure <ref>). This representation has the advantage of topological similarity with a multi-qubit system, where each tensor corresponds to a single qubit and the contraction between the tensors encodes the entanglement between the qubits. However, TTs are limited in their flexibility and representation ability due to the constraint on their border rank. Since the border ranks are much lower than the inner ranks, this representation may not be optimal for some specific quantum systems. Also, an optimal TT representation greatly depends on the order of the products, restricting the choice of ansatz. Note that the border rank constraints present the same hindrances in the application of TTs to classical datasets as well. In order to ameliorate these issues, researchers in the area of classical machine learning have adopted Tensor Rings (TR) to represent the data <cit.>. TR structures relax the rank constraints on the border tensors, increasing the expressibility of the tensors. TR decomposition multiplies the tensors circularly, therefore removing the variance with respect to permutations of the multiplicative order. A notable advantage of the TR representation with respect to quantum states is flexibility in the choice of the ansatz. To explain this further, let us assume a circuit similar to the one shown in Figure <ref>, where entanglement is introduced between the first and the last qubits using a CX gate between them. TR representations are a better fit to encode this kind of cyclic entanglement, thereby enlarging the set of ansatz choices for the problem.
A quantum state |ψ⟩∈ℂ^2^N can be approximated by a tensor ring with N tensors (corresponding to N qubits) circularly multiplied with each tensor denoted by τ(n).
|ψ⟩ = ∑_i_1 … i_N∑_r_1 … r_Nτ(1)_r_N r_1^i_1τ(2)_r_1 r_2^i_2…τ(N)_r_N-1 r_N^i_N|i_1 i_2 … i_N⟩
Here, the free indices i_n ∈{0, 1} span the 2^N-dimensional Hilbert space corresponding to the quantum state, whereas r_n represent the bond indices (indices connecting the tensors) with rank χ_n, which determines the quality of the approximation for entangled states, i.e., higher values of χ_n are better able to represent strongly entangled states. The rank of the given tensor representation for |ψ⟩ is denoted by (χ_1, χ_2, … , χ_N). Throughout the manuscript we choose χ_n = χ for all n, reducing the number of hyperparameters. The choice of χ, hereafter referred to as the tensor ring bond, significantly determines the representation ability, and therefore the performance of the algorithm, for a specific problem. Each tensor in the proposed TR representation is a third-order tensor of dimension χ×χ× 2. The exponential reduction in storage complexity is evident: whereas a generic quantum state requires 2^N parameters, its TR approximation requires only 2Nχ^2 parameters. The approximation of a typical initialization for VQAs, i.e., |0⟩^⊗N, can be easily computed to be a tensor ring with each tensor of dimension χ×χ× 2 whose value is 1 at the index (1,1,1) and 0 elsewhere, represented by 1_(1,1,1). However, if a different initialization is to be chosen, constructing an approximation may not be as straightforward, but efficient algorithms for TR decomposition have been studied at length in <cit.>.
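A minimal NumPy sketch of this initialization (our own naming; zero-based indexing, so the manuscript's 1_(1,1,1) becomes index (0,0,0)) reads:

```python
import numpy as np

def init_tensor_ring(num_qubits, chi):
    # TR approximation of |0...0>: N tensors of shape (chi, chi, 2),
    # each equal to 1 at index (0, 0, 0) and 0 elsewhere
    ring = []
    for _ in range(num_qubits):
        t = np.zeros((chi, chi, 2), dtype=complex)
        t[0, 0, 0] = 1.0
        ring.append(t)
    return ring

ring = init_tensor_ring(num_qubits=6, chi=10)
print(len(ring), ring[0].shape)   # 6 tensors of shape (10, 10, 2): 2*N*chi^2 parameters
```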
While a TR can represent a quantum state, it would also need to be transformed by parameterized rotations in order to function as specified in VQAs. Given the assumption of utilizing only single qubit gates and CX gates in order to simplify the parameter shift rule, it would be sufficient to study the transformations of the TR corresponding to the aforementioned gate set. Unitary transformations of single qubits are represented by a (2 × 2) matrix which is a 2nd order tensor. The matrix multiplication associated can be implemented by contracting the unitary tensor along the free edge of the tensor corresponding to a qubit as specified in the following equation:
τ'(n)_r_n-1 r_n^i'_n = ∑_i_nU_i'_n i_nτ(n)_r_n-1 r_n^i_n
U_i'_n i_n is the 2nd order tensor with indices i'_n and i_n corresponding to the unitary matrix acting on n-th qubit which is contracted along the edge i_n with the n-th tensor denoted by τ(n) spanning the indices r_n-1, r_n and i_n, resulting in the new tensor τ'(n)_r_n-1 r_n. Note that the transformation associated with a single qubit rotation (visually illustrated in Fig <ref>) does not alter the structure of the tensor ring preserving the storage complexity.
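A possible NumPy realization of this contraction, assuming the (left bond, right bond, physical) index convention used here, is:

```python
import numpy as np

def apply_single_qubit_gate(ring, gate, n):
    # tau'(n)_{ab}^{i'} = sum_i U_{i' i} tau(n)_{ab}^{i}; the TR structure is unchanged
    ring[n] = np.einsum('ij,abj->abi', gate, ring[n])

def ry(theta):
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]], dtype=complex)

# |0...0> tensor ring (as in the previous sketch), then an R_y layer on every qubit
chi, N = 10, 6
ring = [np.zeros((chi, chi, 2), dtype=complex) for _ in range(N)]
for t in ring:
    t[0, 0, 0] = 1.0
for q in range(N):
    apply_single_qubit_gate(ring, ry(0.4), q)
```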
Two qubit rotations like CX however, can change the tensor ring structure increasing the storage complexity. In order to alleviate this, we use truncated singular value decomposition with the enlarged tensor to break it down to two tensors of the original smaller size. Say a two qubit gate U ∈ℝ^4×4 is to be applied to the adjacent qubits m and n (including the circular entanglement). We begin by contracting the two tensors τ(m)_r_m-1 r_m^i_m and τ(n)_r_n-1 r_n^i_n along their shared index r_m = r_n-1 to compute a new tensor:
M_r_m-1 r_n^i_m i_n = ∑_r_mτ(m)_r_m-1 r_m^i_mτ(n)_r_n-1 r_n^i_n
The two qubit gate U is then reshaped into the tensor U_i'_m i'_n i_m i_n and multiplied with the tensor M_r_m-1 r_n^i_m i_n along the shared edges:
(τ')_r_m-1 r_n^i'_m i'_n = ∑_i_m i_n U_i'_m i'_n i_m i_n M_r_m-1 r_n^i_m i_n
The resultant tensor is reshaped into a matrix of shape (i'_m × r_m-1) × (i'_n× r_n) whose singular value decomposition is performed as follows:
(τ')_i'_m × r_m-1^i'_n× r_n = ∑_r_m X_r_m-1 r_m^i'_m S_r_m Y_r_n-1 r_n^i'_n
where the orthogonal vectors of τ' populate the matrices X and Y, whereas S_r_m is a diagonal matrix with the singular values. Since we assume a constant TR bond r_m = χ and we know the dimensionality of i to be 2 (the free indices span the quantum state), in this case τ' has 2χ singular values. S_r_m is truncated, resulting in a new diagonal matrix S'_r_m with only the largest χ values remaining. We also truncate X and Y accordingly to keep only the orthogonal vectors corresponding to the remaining singular values. We compute products of the matrices X, Y and S as follows to make up the new tensors at the sites m and n of the tensor ring. Note that while this method can only work with two-qubit gates acting on adjacent qubits, it can be extended to a generic circuit using SWAP gates.
τ'(m)_r_m-1 r_m^i'_m = X_r_m-1 r_m^i'_m S'_r_m
τ'(n)_r_n-1 r_n^i'_n = Y_r_n-1 r_n^i'_n
Following the procedure specified, the resulting tensor ring retains the same structure and dimensionality as before the procedure, preserving the storage complexity after each application of a two-qubit rotation. It is to be noted that the specified operations at worst scale as O(χ^3), whereas without this approximation the dimensionality of the tensor network grows exponentially in the number of two-qubit rotations, or the depth of the circuit, thereby increasing the computational complexity. The different stages of the two-qubit rotation procedure with a TR are demonstrated in Figure <ref>.
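A compact NumPy sketch of the full two-qubit update, with our own reshaping conventions and without re-normalizing the truncated state (the truncation is precisely the induced noise discussed above), could read:

```python
import numpy as np

def apply_two_qubit_gate(ring, gate4x4, m, chi):
    # Apply a two-qubit gate to adjacent sites (m, m+1 mod N) of a tensor ring with
    # tensors of shape (chi, chi, 2), then truncate back to bond dimension chi by SVD.
    n = (m + 1) % len(ring)
    U = gate4x4.reshape(2, 2, 2, 2)                        # U_{i'_m i'_n, i_m i_n}
    M = np.einsum('abi,bcj->acij', ring[m], ring[n])       # contract the shared bond
    T = np.einsum('pqij,acij->acpq', U, M)                 # apply the gate
    T = T.transpose(2, 0, 3, 1).reshape(2 * chi, 2 * chi)  # rows (i'_m, a), cols (i'_n, c)
    X, S, Y = np.linalg.svd(T, full_matrices=False)
    X, S, Y = X[:, :chi], S[:chi], Y[:chi, :]              # keep the chi largest values
    ring[m] = (X * S).reshape(2, chi, chi).transpose(1, 2, 0)
    ring[n] = Y.reshape(chi, 2, chi).transpose(0, 2, 1)
    # note: the truncated state is not re-normalized here

CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]], dtype=complex)

# Example: a circular CX layer, including the wrap-around pair, on a |0...0> ring
chi, N = 10, 6
ring = [np.zeros((chi, chi, 2), dtype=complex) for _ in range(N)]
for t in ring:
    t[0, 0, 0] = 1.0
for q in range(N):
    apply_two_qubit_gate(ring, CX, q, chi)
```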
Given that an ansatz has been chosen for a variational algorithm (assuming the conditions of only constructing a circuit with parameterized single-qubit gates and CX gates), it can be represented as a set of gates denoted by U, ordered by their position in the circuit, i.e., a gate that is applied first to the quantum state is placed at the beginning of the set, with the single-qubit gates parameterized by θ_t. The final quantum state produced by the circuit can be approximated by a tensor ring that is initialized as 1_(1,1,1) and transformed with each gate in U as specified in the procedure in the preceding paragraphs. In order to compute the expected energy with respect to the final quantum state, the Hamiltonian is decomposed into a linear sum, and the expected energy of each of its unitary components composed of Pauli matrices is evaluated.
⟨ψ(θ)|ℋ|ψ(θ)⟩ = ∑_i,j w_i,j⟨ψ(θ)|Z_iZ_j|ψ(θ)⟩
We propose to compute the expected energy with respect to a component Z_pZ_q using the TR representation by the application of single qubit Pauli Z gate at sites p and q and contracting it with the ring before the Z transformations along the edges that span the quantum Hilbert space (See Fig <ref>).
τ'(θ)_i_1…,i'_p,…,i'_q,… i_N = ∑_i_p, i_q Z_i_p^i'_p Z_i_q^i'_qτ(θ)_i_1…,i_p,…,i_q,… i_N
⟨ψ(θ)|Z_p Z_q|ψ(θ)⟩ = ∑_i_1,i_2,… i_Nτ'(θ)_i_1, i_2 … i_Nτ(θ)_i_1, i_2 … i_N
In the equations above, τ(θ) represents the final state produced by the ansatz U parameterized by θ approximated by a TR and τ'(θ) is produced after the Pauli Z transformations on the final state. Note that the indices i'_p and i'_q in τ'(θ) have been renamed to i_p and i_q for a simplified representation. When computing the expected value, the order of the contractions becomes crucial to the computational complexity but it has been established <cit.> that it can be computed effectively in O(Nχ^3) steps. The total procedure to compute the expected value has been presented in a more compact form in Algorithm <ref>. We utilize this algorithm to evaluate the gradients of the variational quantum eigensolver by computing the expected energy of the two circuits with shifted parameters as shown in Algorithm <ref>. The gradients are then used to update the weights of the variational parameters in the same manner as the naive VQE.
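For completeness, a simple (not optimally ordered) way to evaluate ⟨ψ|Z_pZ_q|ψ⟩ from the tensor ring is to contract site-by-site transfer matrices; the sketch below is our own and scales as O(Nχ^6) rather than the O(Nχ^3) quoted above, which requires a more careful contraction order:

```python
import numpy as np

def zz_expectation(ring, p, q):
    # <psi| Z_p Z_q |psi> for a tensor ring with site tensors of shape (chi, chi, 2)
    Z = np.diag([1.0, -1.0]).astype(complex)
    E = None
    for k, t in enumerate(ring):
        tz = np.einsum('ij,abj->abi', Z, t) if k in (p, q) else t
        Ek = np.einsum('abi,cdi->acbd', t.conj(), tz)        # doubled-layer transfer tensor
        Ek = Ek.reshape(t.shape[0] ** 2, t.shape[1] ** 2)
        E = Ek if E is None else E @ Ek
    return float(np.real(np.trace(E)))

def norm_squared(ring):
    # <psi|psi>, needed because the SVD truncation does not preserve the norm
    return zz_expectation(ring, -1, -1)                      # no Z inserted anywhere

def expected_energy(ring, edges):
    # H = sum_ij w_ij Z_i Z_j evaluated term by term on the normalized TR state
    nrm = norm_squared(ring)
    return sum(w * zz_expectation(ring, i, j) for i, j, w in edges) / nrm
```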
§.§ Complexity
In terms of memory, we note that we construct and manipulate only a tensor ring with N tensors corresponding to N qubits, which grows at the scale of O(Nχ^2) as opposed to O(2^N) for the full quantum state. Zhou et al. <cit.> establish that the tensor network bond χ can be chosen to be sufficiently low to simulate a noisy quantum computer at a linear computational complexity in the number of qubits N and circuit depth D (defined as the number of repeating parametrized blocks). The parameter shift rule, popularized for its ability to compute the gradients on a quantum computer, evaluates the gradients by computing the expectations with shifted weights. However, computing the expected values with an additive error ϵ requires a many-fold implementation of the same circuit, generally of the order of O(1/ϵ^2), which adds to the statistical noise. The proposed method can compute each gradient classically with a single iteration of two circuits, each of which scales as O(NDχ^3), with an error rate controlled by χ. The error rate introduced by the truncation decreases with an increasing bond dimension χ and generally saturates at a finite value of the order of 10^-2 per two-qubit gate for circuits with large N and D. This is in contrast to the error rate on a quantum computer, characterized by the fidelity per two-qubit gate, which decays exponentially in the overall number of gates in the circuit <cit.>. The finite fidelity per gate allows us to scale the proposed algorithm in circuit depth and qubits for larger applications. Automatic differentiation (AD), a tool prevalent in classical machine learning literature and applications, grows at least as fast as the forward pass of the network in terms of computational complexity. This indicates that classically computing the gradients of VQE by AD scales exponentially, as it would for classically computing the energy expectation of a circuit. It must be noted that the proposed method of tensor ring transformations can be used with AD as well, which again provides an exponential speedup in N and D.
§ EXPERIMENTS
To demonstrate the runtime performance and accuracy of the TR-VQE presented in Algorithm <ref>, we compare several instances of training TR-VQE for MaxCut problem with Filtering VQE (F-VQE) <cit.> and naive VQE implemented on the Qiskit framework (MPS-VQE). Both the benchmarks use a non-noisy MPS representation to simulate the quantum computations from the circuit as formulated in <cit.> and the F-VQE is additionally implemented with an identity filter to equate the number of parameters in all the experiments. A sampling noise is introduced in the implementation of MPS-VQE and F-VQE to compute the expected values from the circuit. As discussed before, MPS-VQE is expected to compute more accurate gradients than TR-VQE owing to the induced noise in the proposed TR representation. Therefore MPS-VQE converges faster, however takes longer runtimes per iteration because the tensor sizes in MPS-VQE increase with circuit depth. F-VQE additionally implements filtering operators to change the optimization landscape thereby improving the training convergence. Amaro et al. <cit.> claims that the inclusion of filtering operators leads to a faster and more reliable convergence to the optimal solution. This improvement, however, is dwarfed with larger circuits with more number of qubits (Readers can refer to <cit.> for additional details on the implementation of F-VQE). We further collected data on TR-VQE to analyze how internal configurations, namely bond rank, and graph size, i.e., number of qubits affect the performance relative to filtering and naive VQE. All of the graphs used were randomly generated with two to three edges per node, and uniformly distributed weights (between 1 to 10) and edge pairs. We use the same circuit ansatz for all experiments, with an initial parameterized layer of R_y gates on all qubits and a variational block repeated D times, where D represents the circuit depth. Each variational block contains a set of circular CX or CNOT gates followed by parameterized R_y gates on all qubits followed by another set of CX and R_y gates. The circuit depth and the tensor ring rank is set to 1 and 10 respectively for all experiments, unless otherwise specified.
Figure <ref> indicates how each of the three algorithms performs in terms of iteration runtime across randomly generated graphs of varying sizes and different circuit ansatz. The results for each algorithm were averaged across 10 initializations each with multiple unique MaxCut graphs of fixed size. For MPS-VQE and F-VQE, the number of shots used in the Hamiltonian evaluation was increased quadratically in graph size. Across varying graph sizes, TR-VQE’s per-iteration runtime, computed as the time taken for computing the expected value of the Hamiltonian and updating the parameters from the evaluated gradients, is faster than both filtering and non-filtering VQE with smaller graphs and by extension, smaller number of qubits. As illustrated in Figure <ref>, the iteration runtimes of TR-VQE consistently improve by a large margin over the benchmarks when the number of qubits are increased. Figure <ref> demonstrates the iteration runtime of each algorithm with increasing circuit depths for a graph with 10 nodes. TR-VQE again shows a significant improvement in runtime compared to MPS-VQE and F-VQE with increasing number of layers. The results from both the experiments are compatible with the theoretical claims of improved runtime complexity as discussed in Section <ref>. The runtime speedup can be attributed to the consistent rank and tensor sizes irrespective of the circuit depth whereas in the naive MPS based approach, the tensor sizes increase with the circuit depth.
On the other hand, TR-VQE performs with near-equivalent accuracy to the other algorithms, despite the runtime speedup. Figure <ref> displays per-iteration accuracy for the algorithms, averaging data from 10 runs on various randomly generated graphs with a fixed size of 10 nodes. The accuracy was compared using the approximation ratio at each iteration, computed as defined in Equation <ref>.
The resulting data from Figure <ref> indicate that TR-VQE performs similar to F-VQE in terms of accuracy, diverging on average by no more than 3% at any point during training. When extended to variable graph sizes, TR-VQE once again performs on par or better than the alternative algorithms. The data in Table <ref> was collected using a TR-VQE bond rank of 10 and 1000 shots per circuit evaluation for MPS-VQE and F-VQE. Excluding an outlier at small graph sizes due to instability, MPS-VQE performed the most accurately due to the availability of more information, albeit at the cost of larger runtime. However, TR-VQE followed closely behind, with a large but inconsistent gap in accuracy between it and the least accurate F-VQE algorithm.
We also plot the approximation ratio of TR-VQE with varying TR bond rank, and it is to be noted that TR-VQE performs almost as well as MPS-VQE at ranks as low as 12, indicating that an exponential speedup can be achieved at smaller ranks, improving the storage complexity. All experiments, including the benchmarks, see a wide variance in terms of accuracy with larger graph sizes due to a phenomenon called the barren plateau effect <cit.>, which is informally defined as the impaired performance due to the exponential flattening of the loss landscape in the number of qubits. Martin et al. <cit.> demonstrate that the barren plateau effect persists in quantum MPS circuits, and therefore we can surmise that tensor ring circuits, as an extension of MPS, will face a similar challenge in training.
To assess the accuracy of approximate gradients, we employ the l^2-norm to compare gradients obtained from state vector simulations and those generated using the TR-VQE method. The mean gradient distance, computed as the average norm difference across 500 randomly selected points on the optimization landscape, is used as a metric. We compare this metric with values obtained from noisy simulations that emulate the gradients on an actual quantum computer using noise models from the ibm montreal machine. We examine the mean gradient distance for various circuit depths and graph sizes.
Figure <ref>(Left) illustrates that the gradients produced by the TR-VQE method closely resemble those obtained from exact state vector simulations, with almost negligible differences. In contrast, gradients derived from quantum simulation deviate significantly from the exact gradients, a trend that becomes more pronounced as the number of qubits increases, as expected. As shown in Figure <ref>(Middle), TR-VQE's effectiveness diminishes with higher circuit depths due to the cumulative impact of two-qubit gates. However, this performance decline can be mitigated by increasing the tensor rank, as demonstrated in Figure <ref>(Right). In conclusion, gradients computed from approximate classical simulations can achieve accuracy comparable to those obtained from quantum computers. Consequently, they can be a valuable addition to the optimization process in hybrid algorithms.
§ CONCLUSION
This work proposes a novel technique for combinatorial optimization problems with Variational Quantum Eigensolvers by approximating the circuit computations with noisy tensor ring contractions. The proposed algorithm uses parameter shift rule to evaluate the gradients used to update the variational parameters, but computes the expected values of the shifted circuits using tensor ring approximation. The computational complexity of circuit evaluation grows linearly in the number of qubits and the circuit depth which offers a quadratic speedup over the perfect classical simulation. Evaluating gradients using TR-VQE can also eliminate the additive error present in circuit computations on quantum computers. We validate the algorithm by implementations on several instances of Max-Cut problem and compare with algorithms that use the full state information. The results demonstrate the vast improvement in runtime with respect to the number of qubits and circuit depth validating the complexity analysis at a minor cost of accuracy.
§ COMMONLY USED GATES
The matrix representation of some of the commonly used gates in the manuscript are listed below:
R_x(θ) =
[ cos(θ/2) -isin(θ/2); -isin(θ/2) cos(θ/2) ],
R_y(θ) =
[ cos(θ/2) -sin(θ/2); sin(θ/2) cos(θ/2) ],
R_z(θ) =
[ e^-iθ/2 0; 0 e^iθ/2 ]
H =1/√(2)[ 1 1; 1 -1 ]
CNOT =
[ 1 0 0 0; 0 1 0 0; 0 0 0 1; 0 0 1 0 ]
R(α, β, γ) =
[ cos(α/2) -e^iγsin(α/2); e^iβsin(α/2) e^i(β+γ)cos(α/2) ]
|
http://arxiv.org/abs/2307.05602v1 | 20230710203508 | Auxiliary Physics-Informed Neural Networks for Forward, Inverse, and Coupled Radiative Transfer Problems | ["Roberto Riganti", "Luca Dal Negro"] | cond-mat.dis-nn | ["cond-mat.dis-nn"] |
Department of Physics, Boston University, 590 Commonwealth Avenue, Boston, Massachusetts 02215, USA
[email protected]
Department of Physics, Boston University, 590 Commonwealth Avenue, Boston, Massachusetts 02215, USA
Department of Electrical Computer Engineering, and Photonics Center, Boston University, 8 Saint Mary’s Street, Boston, Massachusetts 02215, USA
Division of Materials Science Engineering,
Boston University, 15 St. Mary’s street, Brookline, MA 02446,USA
In this paper, we develop and employ auxiliary physics-informed neural networks (APINNs) to solve forward, inverse, and coupled integro-differential problems of radiative transfer theory (RTE). Specifically, by focusing on the relevant slab geometry and scattering media described by different types of phase functions, we show how the proposed APINN framework enables the efficient solution of Boltzmann-type transport equations through multi-output neural networks with multiple auxiliary variables associated to the Legendre expansion terms of the considered phase functions. Furthermore, we demonstrate the successful application of APINN to the coupled radiation-conduction problem of a participating medium and find distinctive temperature profiles beyond the Fourier thermal conduction limit. Finally, we solve the inverse problem for the Schwarzschild-Milne integral equation and retrieve the single scattering albedo based solely on the knowledge of boundary data, similar to what is often available in experimental settings. The present work significantly expands the current capabilities of physics-informed neural networks for radiative transfer problems that are relevant to the design and understanding of complex scattering media and photonic structures with applications to metamaterials, biomedical imaging, thermal transport, and semiconductor device modeling.
Auxiliary Physics-Informed Neural Networks for Forward, Inverse, and Coupled Radiative Transfer Problems
L. Dal Negro
August 12, 2023
========================================================================================================
§ INTRODUCTION
Over the past few years, there has been a growing interest in developing deep learning (DL) and artificial intelligence (AI) algorithms for electromagnetic wave engineering, metamaterials design, and radiative transport problems<cit.>. Rapidly emerging approaches include training artificial neural networks (ANNs) to solve complex inverse problems and parameter estimation tasks in structured photonic environments and in strongly scattering media<cit.>. Although successfully demonstrated with respect to several inverse design problems, traditional methods remain essentially data-driven techniques and require time-consuming training steps and massive datasets<cit.>. In order to improve on purely data-driven methods, it is essential to constrain and regularize them by leveraging the underlying physics of the investigated problems, thus relaxing the burden on training and data acquisition. Building on the firm foundation of the universal approximation theorem for multi-layer ANNs<cit.>, physics-informed neural networks (PINNs) have recently emerged as a powerful framework for the efficient solution of both forward and inverse problems mathematically described by partial differential equations (PDEs) of integer or fractional orders<cit.>. The approach of PINNs has been successfully applied to a number of differential problems in engineering ranging from Navier-Stokes
fluid dynamics, solid mechanics, and thermal transport<cit.>. Moreover, PINNs have shown remarkable results and noise robustness in the solution of electromagnetic inverse problems for metamaterials design, radiative transfer, imaging, and in the parameter retrieval of resonant photonic nanostructures<cit.>. However, the solution of Boltzmann-type, integro-differential transport equations using PINNs still poses significant challenges due to the need to resort to numerical quadrature methods such as Gauss-Legendre or Gauss-Chebyshev for the approximation of the integral terms<cit.>. Such methods add computational complexity and inevitably introduce quadrature errors in the numerical solutions<cit.>.
In order to eliminate such problems, a new PINN framework called auxiliary physics-informed neural networks (APINNs) was recently introduced by Yuan et al.<cit.>. This approach allows one to recast integro-differential equations into equivalent differential ones through the introduction of a network architecture containing additional auxiliary variables at its output, each corresponding to an integral term in the original, constrained by suitable relations. Therefore, the APINN formulation avoids the numerical approximation of integrals that are instead directly "guessed" by the network at a minimal cost, significantly improving both the numerical accuracy and computational efficiency compared to traditional PINNs.
In this paper, we develop a general APINN framework for solving relevant forward and inverse integro-differential transport equations of radiative transfer theory, which is a domain of vital importance in science and engineering with applications to complex photonic devices, medical imaging, metamaterials, thermal transport, as well as astrophysics, climate dynamics, and nuclear engineering<cit.>.
In particular, we address and demonstrate APINN formulations for the accurate solution of forward, inverse, and coupled radiation-conduction problems of radiative transport in the relevant slab geometry for different choices of scattering phase functions.
Our paper is organized as follows: in Section <ref>, we provide a brief introduction to the radiative transfer equation (RTE), along with a description of the general APINN employed throughout this paper. In Section <ref>, we discuss forward problems for different phase functions governing the scattering processes. Specifically, we present benchmarked solutions for the isotropic, Rayleigh, and Henyey-Greenstein scattering phase functions that are often utilized in engineering applications <cit.>. In Section <ref>, we discuss the APINN solution of a coupled radiation-conduction problem, enabling the accurate description of radiation transfer in a participating medium. Lastly, in Section <ref>, we present the solution of a canonical inverse problem described by the Schwarzschild-Milne integral equation and show that the radiative intensity solution and the single scattering parameters are accurately retrieved based solely on intensity data at the boundaries of the slab.
Our work shows that APINNs possess the flexibility, accuracy, and robustness required to become a powerful tool for inverse scattering and thermal transport modeling beyond the limitations of Fourier theory. Therefore, this work expands significantly upon the current capabilities and range of applications of PINNs methods and paves the way to the study of higher-dimensional transport problems in strongly scattering media with applications to nanophotonics, metamaterials, biomedical imaging, and optoelectronic device modeling.
§ APINNS FOR RADIATIVE TRANSFER PROBLEMS
The framework of radiative transfer theory for the study of complex scattering media was originally developed in astrophysics as a way to quantitatively describe the radiative equilibrium in interstellar clouds, planetary and stellar atmospheres<cit.>. Radiative transfer theory has found a very wide range of applications beyond astrophysics, including biomedical optics<cit.>, atmospheric science<cit.>, radiation hydrodynamics<cit.> and remote sensing<cit.>. For example, the propagation of light through fogs and clouds, white paints or paper, milky and turbid liquids, human tissue, and the brain can be adequately described by the classical theory of radiation transfer that we discuss in this paper using APINNs.
The radiation transfer theory is founded upon the RTE, which is a Boltzmann-type integro-differential equation expressing the detailed energy balance for the propagation of directed energy flow, or radiance, through a multiply scattering discrete random medium. For scalar waves in three spatial dimensions the RTE can be written as follows:
1/c∂ I(r,ŝ,t)/∂ t =-ŝ·∇ I(r,ŝ,t)-(κ+σ) I(r,ŝ,t)+
σ∫_4πI(r,ŝ',t)p(ŝ',ŝ)dΩ'+S(r,ŝ,t)
where κ and σ are the absorption and scattering coefficients, respectively.
Here S(r,ŝ,t) denotes a generic source term and p(ŝ',ŝ) is the phase function describing the angular distribution of the scattering process.
Alternatively, after introducing the optical thickness τ and the single scattering albedo ω as:
τ(S) = ∫_S'=0^Sβ(S')dS' =∫_S'=0^S[κ(S')+σ(S')]dS'
ω = σ/β =σ/κ+σ
one can rewrite Eq. <ref> in the alternative form:
1/β c∂ I(τ,ŝ,t)/∂ t =-ŝ·∇_τ I(τ,ŝ,t)-I(τ,ŝ,t)+
ω∫_4πI(τ,ŝ',t)p(ŝ',ŝ)dΩ'+S(τ,ŝ,t)
which is the RTE in its standard form. For a detailed discussion and derivation of the RTE, we refer the reader to references chandrasekhar_radiative_2016,howell_thermal_2020,modest_radiative_2021. In essence, the RTE states that a directed beam of light in a uniform random medium loses energy through divergence and extinction, including both absorption and scattering away from the beam (i.e., out-scattering contributions), and it gains energy from radiation sources, fluorescence or scattering events that redirect it towards the beam (i.e., in-scattering contributions). In the standard formulation, wave interference effects, polarization and non-linearity in the medium are neglected. Radiative transfer theories for vector waves have also been developed but are outside the scope of this work and more details on these subjects can be found in references mishchenko_multiple_2017, ishimaru_wave_1978. Even for the relevant slab geometry, the RTE introduced above is generally very difficult to solve<cit.>. Analytic solutions only exist for very simple cases while in many realistic situations, numerical methods such as Monte Carlo transport simulations are usually employed<cit.>. For this reason, the RTE is often approximated, under suitable conditions, by the simpler but less accurate diffusion equation<cit.>.
In our paper, we developed APINNs to obtain the forward and inverse solution of the scalar RTE in the steady-state and for different choices of phase functions. However, the developed framework can be naturally extended to time-dependent and vector RTE problems, anisotropic phase functions, and arbitrary nonlinear responses. All the implementations of the APINN algorithms developed in this paper are obtained in the powerful TensorFlow environment<cit.>.
The general APINN network utilized to solve forward and inverse RTE problems in the slab geometry is illustrated in Fig. <ref>. We considered a fully connected neural network (FCNN) with input vector x=(τ, μ), with randomly distributed values of the optical thickness τ and μ=cosθ over a two-dimensional spatial-angular domain Ω, and with output the predicted surrogate Î(μ,τ;θ̃) of the RTE solution I(μ,τ;θ). Here, θ denotes the angle of the directed energy flow with respect to the z axis perpendicular to the slab's surface, and θ̃ is the vector of weights and biases of our FCNN. In addition, the FCNN outputs n auxiliary variables v_i(μ,τ;θ̃), each corresponding to an integral expansion term in the RTE.
The outputs of the APINN are then used to compute, by means of automatic differentiation (AD), the derivatives of Î(μ,τ;θ̃) and v_i(μ,τ;θ̃), along with the PDE, initial conditions, and boundary conditions, depending on the nature of the problem. Each calculated value is then combined into a term of the loss function ℒ(θ̃) defined as:
ℒ(θ̃) = ℒ_int(θ̃;𝒩_int)+ℒ_b(θ̃;𝒩_b)
+ ℒ_aux(θ̃;𝒩_aux)+λ∑_iθ̃_i^2
In the expression above,
ℒ_int(θ̃;𝒩_int) =1/|𝒩_int|∑_x∈𝒩_int|| f( x;Î,∂Î/∂τ,v_0,…, v_n ) ||^2
denotes the loss term calculated in the interior domain Ω and
ℒ_b(θ̃;𝒩_b) = 1/|𝒩_b|∑_x∈𝒩_b|| ℬ(Î,x) ||^2
is the loss term for the boundary conditions of the RTE where x∈∂Ω. Moreover,
ℒ_aux(θ̃;𝒩_aux) = 1/|𝒩_aux|∑_x∈𝒩_aux|| f( x;∂ v_0/∂μ,…, ∂ v_n/∂μ) ||^2
denotes the loss term associated to the auxiliary conditions that define the APINN model. 𝒩_int, 𝒩_b, 𝒩_aux denote the number of residual points for each loss term, and the last term in Eq. <ref> is an L2 regularization included in our simulations to avoid overfitting during training<cit.>.
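A schematic TensorFlow implementation of this architecture and composite loss is sketched below; the width, depth, tanh activations, and regularization strength are placeholders and do not necessarily coincide with the values listed in Table <ref>:

```python
import tensorflow as tf

def build_apinn(n_aux, width=40, depth=8):
    # Fully connected APINN: inputs (tau, mu), outputs (I_hat, v_1, ..., v_{n_aux})
    x_in = tf.keras.Input(shape=(2,))
    h = x_in
    for _ in range(depth):
        h = tf.keras.layers.Dense(width, activation='tanh')(h)
    return tf.keras.Model(x_in, tf.keras.layers.Dense(1 + n_aux)(h))

def composite_loss(r_int, r_aux, r_b, weights, lam=1e-6):
    # L = L_int + L_b + L_aux + lambda * sum_i theta_i^2
    l2 = tf.add_n([tf.reduce_sum(tf.square(w)) for w in weights])
    return (tf.reduce_mean(tf.square(r_int)) + tf.reduce_mean(tf.square(r_b))
            + tf.reduce_mean(tf.square(r_aux)) + lam * l2)

model = build_apinn(n_aux=1)   # one auxiliary variable, as in the Milne problem below
```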
Table <ref> summarizes the training and APINN network parameters for the simulations studied throughout this paper. In the forward simulations of Section <ref>, we decided to analyze RTE problems in the slab geometry with scattering phase functions requiring an increasing number of terms in their Legendre series expansions, resulting in an increasing number of integrals in the RTE, while keeping the general network and training parameters the same. We thus start from the Schwarzschild-Milne equation, whose RTE has only one integral, and its corresponding APINN requires only one auxiliary variable. Then, we study the RTE with the Rayleigh phase function, whose Legendre expansion has two non-zero terms, resulting in two auxiliary outputs in the network. Finally, we study the Henyey-Greenstein (HG) phase function, whose series expansion was truncated at the tenth term, introducing ten auxiliary variables in the APINN. This approach allowed us to present a reliable scaling analysis when APINN is employed to solve integro-differential problems with kernels whose series expansions converge at different speeds. In the next section, we begin presenting our APINN results by addressing the Schwarzschild-Milne equation in a slab.
§ RESULTS AND DISCUSSION
§.§ Solutions of forward problems in a slab
§.§.§ The Schwarzschild-Milne equation
We first consider the time-independent radiative transfer problem in a slab governed by the RTE. As discussed by Howell<cit.>, this steady-state condition of the RTE is valid under the assumption that the radiation intensity is unaffected by photon time-of-flight effects, reducing Eq. <ref> to the form investigated here:
μdI(τ,μ)/dτ + I(τ,μ)=ω/2∫_-1^1 I(τ,μ')Φ(μ,μ')dμ'
When Φ(μ,μ')=1, the equation above becomes the well-known Schwarzschild-Milne integral equation describing isotropic scattering processes.
The corresponding boundary conditions are<cit.>:
I(0, μ) = I_0, 0<μ<1
I(τ_0,μ) = 0, -1<μ<0
In order to solve the Schwarzschild-Milne integral equation using the APINN framework, we recast it into an equivalent differential problem introducing the auxiliary variable v(μ;τ), which is constrained by the following system:
μdI/dτ+I-ω/2v(1)=0
v(μ;τ)=∫_-1^μI(μ';τ)dμ'
v(-1;τ)=0, dv/dμ(μ;τ)=I(τ,μ)
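A possible TensorFlow sketch of the corresponding residuals, with automatic differentiation providing dÎ/dτ and dv/dμ (the two-output toy model below is only illustrative), reads:

```python
import tensorflow as tf

def milne_residuals(model, tau, mu, omega=0.9):
    # r_pde = mu dI/dtau + I - (omega/2) v(1; tau),  r_aux = dv/dmu - I,  v(-1; tau) = 0
    with tf.GradientTape(persistent=True) as tape:
        tape.watch([tau, mu])
        out = model(tf.concat([tau, mu], axis=1))
        I, v = out[:, 0:1], out[:, 1:2]
    dI_dtau = tape.gradient(I, tau)
    dv_dmu = tape.gradient(v, mu)
    v_at_p1 = model(tf.concat([tau, tf.ones_like(tau)], axis=1))[:, 1:2]
    v_at_m1 = model(tf.concat([tau, -tf.ones_like(tau)], axis=1))[:, 1:2]
    r_pde = mu * dI_dtau + I - 0.5 * omega * v_at_p1
    r_aux = dv_dmu - I
    return r_pde, r_aux, v_at_m1            # the last term is penalized toward zero

# Illustrative two-output stand-in for the trained APINN and a batch of collocation points
model = tf.keras.Sequential([tf.keras.layers.Dense(40, activation='tanh'),
                             tf.keras.layers.Dense(40, activation='tanh'),
                             tf.keras.layers.Dense(2)])
tau = tf.random.uniform((256, 1), 0.0, 1.0)
mu = tf.random.uniform((256, 1), -1.0, 1.0)
r_pde, r_aux, r_bc = milne_residuals(model, tau, mu)
```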
We then train the APINN to solve the problem for different values of the albedo ω varying from 0.2 to 1.0. Table <ref> shows the speed and accuracy of our APINN implementation in solving the Milne problem. In the large scattering limit of ω≥ 0.9, APINN minimizes the loss function to values that are two orders of magnitude lower, and in a fraction of the time, compared to the equivalent geometry studied in Ref. mishra_physics_2021, where a quadrature method was employed. Two representative APINN solutions for the spatial-angular distributions of the radiation intensity for τ_max=1.0 are displayed in Fig. <ref> (a) and (b). To benchmark our solutions against the tables calculated by Van de Hulst in Ref. hulst_multiple_1656, we computed the zeroth moment, or point-direction gain G(τ), of the radiative intensity, which is defined as<cit.>:
G(τ) = ∫_-1^1 I(τ,μ) dμ
Fig. <ref> (c) displays the validation data of G(τ) calculated by Van de Hulst and the solution from our network, showing an excellent agreement achieved by the APINN framework. This is further confirmed by the average relative error between the two solutions displayed in the last column of Table <ref>. Fig. <ref> (d) shows a comparison between the APINN and the standard PINN quadrature loss function to solve the same problem, as implemented in Ref. mishra_physics_2021. In this figure, we display the loss function versus the number of epochs for the three largest scattering values of ω. We can immediately notice that the quadrature solution is heavily affected in its performance by the scattering strength, and the L-BFGS-B solver terminates the training early because the loss function has already saturated to its minimum value and is not decreasing further. In contrast, the APINN's loss function monotonically decreases independently of ω. This result confirms the robustness, flexibility, and accuracy of the APINN framework in solving transport problems for strongly scattering media. In a variety of engineering applications, however, the material's response is not isotropic. Therefore, in Section <ref>, we employ the APINN framework to solve the RTE in a slab with an anisotropic Rayleigh scattering phase function.
§.§.§ The Rayleigh scattering phase function
The Rayleigh phase function is employed to study anisotropic light scattering processes in various fields, from optics to astronomy<cit.>. The phase function reads:
p(cosθ) = 3/4(1+cos^2θ)
and because the scattering from spherically symmetric particles is cylindrically symmetric with respect to the incoming direction, this symmetry holds after averaging over all possible orientations. Therefore, in these situations, the phase function depends on ϕ-ϕ' and one can compute this average resulting in the projected phase function<cit.>:
p_0(μ,μ')=∫dϕ/2πdϕ'/2πp(μ,ϕ;μ',ϕ')
Using the equality μ=cosΘ=n·n'=sinθsinθ'cos(ϕ-ϕ')+cosθcosθ' one obtains:
p_0(μ,μ')=3/8(3-μ^2-μ'^2+3μ^2μ'^2)
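As a sanity check of this azimuthal average, a short numerical quadrature over φ-φ' reproduces the closed form (the script below is ours and not part of the APINN workflow):

```python
import numpy as np

def p_rayleigh(cos_big_theta):
    return 0.75 * (1.0 + cos_big_theta ** 2)

def p0_numeric(mu, mup, n=4096):
    # average of p(cos Theta) over the azimuth, with
    # cos Theta = sqrt(1 - mu^2) sqrt(1 - mup^2) cos(phi) + mu * mup
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    cos_big_theta = np.sqrt(1 - mu ** 2) * np.sqrt(1 - mup ** 2) * np.cos(phi) + mu * mup
    return p_rayleigh(cos_big_theta).mean()

mu, mup = 0.3, -0.7
p0_closed = 3.0 / 8.0 * (3 - mu ** 2 - mup ** 2 + 3 * mu ** 2 * mup ** 2)
print(p0_numeric(mu, mup), p0_closed)   # agree to numerical precision
```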
To facilitate the calculations and the auxiliary variable formulation of the APINN framework, one typically considers the expansion of the scattering phase function in Legendre polynomials:
Φ(μ,μ') = ∑_ℓ=0^∞w_ℓP_ℓ(μ)P_ℓ(μ')
Note that, for the Rayleigh phase function, the only nonzero w_ℓ terms are w_0=1.0 and w_2=0.5. Therefore, Eq. <ref> in a slab with Rayleigh scattering becomes
μdI(τ,μ)/dτ + I(τ,μ)=ω/2∫_-1^1I(τ,μ')∑_ℓ=0^∞w_ℓP_ℓ(μ)P_ℓ(μ')dμ'
and after rearranging terms and truncating the series expansion at ℓ=2 we get:
μdI(τ,μ)/dτ + I(τ,μ)=ω/2 [w_0P_0(μ)∫_-1^1I(τ,μ')P_0(μ')dμ'
+w_2P_2(μ)∫_-1^1I(τ,μ')P_2(μ')dμ']
Finally, we recast the problem by adding two auxiliary variables to the network with their respective constraints as follows:
μdI(τ,μ)/dτ + I(τ,μ)=ω/2[w_0P_0(μ)v_0(1)+w_2P_2(μ)v_2(1)]
v_0(μ;τ)=∫_-1^μI(τ,μ')P_0(μ')dμ'
v_0(-1;τ)=0, dv_0/dμ(μ;τ)=I(τ,μ)P_0(μ)
v_2(μ;τ)=∫_-1^μI(τ,μ')P_2(μ')dμ'
v_2(-1;τ)=0, dv_2/dμ(μ;τ)=I(τ,μ)P_2(μ)
Due to the lack of benchmark solutions for Rayleigh scattering in a slab, we decided to consider a physical system similar to the one studied by Mishra and Molinaro in Ref. mishra_physics_2021, namely the case where the single scattering albedo depends on the optical thickness τ of the material. In this case, Eq. <ref> becomes
μdI(τ,μ)/dτ + I(τ,μ)=ω(τ)/2[ w_0P_0(μ)v_0(1;τ)
+w_2P_2(μ)v_2(1;τ)]
To solve this problem, we train APINN with the parameters specified in Table <ref>, using 40 neurons per layer. The training for this solution took 12 minutes, and the final value of the loss function ℒ was 10^-6, demonstrating the adaptivity and flexibility of APINN in solving anisotropic scattering problems. Fig. <ref>(a) displays the APINN radiative intensity solution as a function of μ and the optical thickness. This result highlights the flexibility of APINN in finding the solution to an analytically intractable problem<cit.>. In turn, this motivates us to study the RTE with strongly anisotropic scattering properties modeled by the Henyey-Greenstein (HG) phase function.
§.§.§ The Henyey-Greenstein phase function
Here we consider the forward RTE problem in the slab with the Henyey-Greenstein (HG) phase function governing the scattering processes. The HG phase function finds applications in astrophysics, atmospheric optics, and biomedical imaging, and it depends on both the cosine of the incident angle and the asymmetry factor g∈[0,1] that appears in the equation below<cit.>:
p(μ,g) = 1-g^2/(1-2gμ+g^2)^3/2
where μ = cosθ. In the limit of g→0, the HG phase function reduces to isotropic scattering, while in the limit of g→1, HG describes strongly anisotropic scattering events.
As for the Rayleigh phase function, the HG phase function can be rewritten using the Legendre polynomials expansion in Eq. (<ref>). However, unlike the Rayleigh case, the Legendre expansion converges more slowly, and additional terms need to be included to achieve accurate numerical results:
μdI(τ,μ)/dτ + I(τ,μ)=ω/2[ w_0P_0(μ)∫_-1^1I(τ,μ')P_0(μ')dμ'
+w_1P_1(μ)∫_-1^1I(τ,μ')P_1(μ')dμ'
+…
+w_nP_n(μ)∫_-1^1I(τ,μ')P_n(μ')dμ']
where:
w_ℓ = (2ℓ+1)g^ℓ
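The quality of this truncation can be checked directly; for instance, the following sketch (ours) compares the truncated Legendre sum with the closed-form HG phase function for g=0.5:

```python
import numpy as np
from numpy.polynomial.legendre import legval

g, n_terms = 0.5, 11                          # keep l = 0, ..., 10
w = np.array([(2 * l + 1) * g ** l for l in range(n_terms)])

def hg_exact(mu, g):
    return (1 - g ** 2) / (1 - 2 * g * mu + g ** 2) ** 1.5

mu = np.linspace(-1.0, 1.0, 201)
hg_series = legval(mu, w)                     # sum_l w_l P_l(mu)
print(np.max(np.abs(hg_series - hg_exact(mu, g))))   # truncation error, largest near mu = 1
```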
In our numerical studies, we chose to benchmark the RTE with HG phase function, g=0.5, which allowed us to utilize Van de Hulst's tables as validation data<cit.>. The polynomial expansion of the phase function was truncated after ten terms, introducing ten auxiliary variables and their corresponding constraint conditions in the simulation:
μdI(τ,μ)/dτ + I(τ,μ)=ω/2[w_0P_0(μ)v_0(1;τ)
+…+w_10P_10(μ)v_10(1;τ)]
v_0(μ;τ)=∫_-1^μI(τ,μ')P_0(μ')dμ'
v_0(-1;τ)=0, dv_0/dμ(μ;τ)=I(τ,μ)P_0(μ)
…
v_10(μ;τ)=∫_-1^μI(τ,μ')P_10(μ')dμ'
v_10(-1;τ)=0, dv_10/dμ(μ;τ)=I(τ,μ)P_10(μ)
Table <ref> provides a summary of the APINN training for this problem. Considering the larger number of auxiliary variables, we trained using 80 neurons per layer instead of 40. Similarly to the isotropic and Rayleigh cases, the loss function is minimized to extremely low values with a minor trade-off in speed due to the larger number of auxiliary variables in the system, as the second and third columns of Table <ref> demonstrate. The accuracy of these results, displayed in the last column of Table <ref>, confirms the versatility of the APINN framework, which excels in solving even strong anisotropic scattering problems. Fig. <ref> (c) shows a representative solution of the radiation intensity when ω=1.0, and Fig. <ref> (b) displays the benchmarked solutions for this problem by comparing the integrated radiative intensity G(τ) calculated from the APINN network with the Van de Hulst's data. These results open the doors for multiple biomedical, metamaterials, and nano-optics applications where the HG phase function is often utilized to model realistic scattering processes<cit.>.
§.§ The coupled radiation-conduction problem of a participating medium
We now apply our APINN method to the solution of a coupled problem in radiative transfer theory. Specifically, we consider a conducting and participating slab that couples to the radiation hitting the boundary in the steady-state. Such problems have been analyzed extensively in the literature<cit.>, but, to our knowledge, have never been solved using physics-informed neural networks. Here, we use the APINN framework to analyze this problem, where the slab's temperature profile is governed by a Poisson-like equation with a coupling term to the RTE<cit.>. We will further analyze how the conduction-radiation parameter N_CR affects the traditional Fourier temperature solution in the steady-state when significant temperature differences are imposed at the boundaries of the slab. The conduction-radiation parameter N_CR measures the ratio of conductive to radiative heat contributions in a given medium, and it is defined as<cit.>:
N_CR=k β/4 k_B T^3 = k (κ + σ)/4 k_B T^3
For the simulations that follow, we chose to study coupled systems where N_CR varies from 10 (as N_CR→∞ we recover the Fourier limit) to 0.001 (as N_CR→ 0, radiative processes dominate).
We consider the heat transfer problem due to conduction and radiation in a participating medium presented in Ref. <cit.>, governed by the following two coupled integro-differential equations:
d^2Θ/dτ^2-(1-ω)/N_CR[Θ^4(τ)-1/2G(τ)]=0
μdI(τ,μ)/dτ + I(τ,μ)=H[Θ(τ)]+ω/2∫_-1^1 I(τ,μ')Φ(μ,μ')dμ'
for 0<τ<1, -1≤μ≤1, Φ(μ,μ')=1, ω=0.9
where the temperature is modeled by the dimensionless quantity Θ=T/T_1. The coupling terms are:
G(τ)=∫^1_-1I(τ,μ)dμ, H[Θ(τ)]=(1-ω)Θ^4
and G(τ) is the zeroth moment of the intensity
I(τ,μ). The boundary conditions are:
I(0,μ)=1, μ∈(0,1], and I(1,μ)=0, μ∈[-1,0)
Θ(0)=1 and Θ(1)=T_2/T_1
Since the problem involves two undetermined coupled functions, we modified the architecture of the APINN framework. The changes are illustrated in Fig. <ref>: the input parameters are passed to the radiative intensity network Î(τ,μ) with auxiliary variables as for the uncoupled cases discussed so far, while the spatial variable τ is also used to simultaneously train the dimensionless temperature network Θ̂(τ). The coupled problem recast in the APINN formalism reads:
d^2Θ/dτ^2-(1-ω)/N_CR[Θ^4(τ)-1/2v(1;τ)]=0
μdI(τ,μ)/dτ + I(τ,μ)=H[Θ(τ)]+ω/2v(1;τ)
where we introduced the auxiliary variable v(μ;τ) and its corresponding conditions like in Eq. (<ref>):
v(μ;τ)=∫_-1^μI(τ,μ')dμ',
v(-1;τ)=0, dv/dμ(μ;τ)=I(τ,μ)
By means of automatic differentiation, the outputs of the two networks are then used to compute the required PDE conditions, initial conditions, and boundary conditions, which are then incorporated into the coupled loss function:
ℒ= ℒ_Î(τ,μ) + ℒ_Θ̂(τ)
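A compact sketch of how the two networks and the coupled residuals enter this loss (our own reconstruction; layer sizes, parameter values and sampling are assumptions, τ and μ are plain collocation tensors, and boundary terms are omitted) is:

import torch

I_net = torch.nn.Sequential(torch.nn.Linear(2, 40), torch.nn.Tanh(),
                            torch.nn.Linear(40, 40), torch.nn.Tanh(),
                            torch.nn.Linear(40, 2))       # (tau, mu) -> (I, v)
T_net = torch.nn.Sequential(torch.nn.Linear(1, 40), torch.nn.Tanh(),
                            torch.nn.Linear(40, 40), torch.nn.Tanh(),
                            torch.nn.Linear(40, 1))       # tau -> Theta

omega, N_cr = 0.9, 0.1

def grad(y, x):
    return torch.autograd.grad(y.sum(), x, create_graph=True)[0]

def interior_loss(tau, mu):
    x = torch.stack([tau, mu], dim=-1).requires_grad_(True)
    t = tau.unsqueeze(-1).clone().requires_grad_(True)

    out = I_net(x)
    I, v = out[..., 0], out[..., 1]
    dI_dtau = grad(I, x)[..., 0]
    dv_dmu = grad(v, x)[..., 1]

    theta = T_net(t).squeeze(-1)
    d2theta = grad(grad(theta, t), t).squeeze(-1)

    # G(tau) is approximated by the auxiliary variable evaluated at mu = +1
    G = I_net(torch.stack([tau, torch.ones_like(tau)], dim=-1))[..., 1]

    r_rte = x[..., 1] * dI_dtau + I - (1 - omega) * theta**4 - 0.5 * omega * G
    r_aux = dv_dmu - I
    r_heat = d2theta - (1 - omega) / N_cr * (theta**4 - 0.5 * G)
    # boundary terms for I and Theta would be added analogously
    return (r_rte**2).mean() + (r_aux**2).mean() + (r_heat**2).mean()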
To solve this problem, we coupled the APINN network for the radiative intensity with a PINN estimating the dimensionless temperature Θ(τ), with parameters according to Table <ref>. Fig. <ref> shows the solutions for the coupled problem when two different temperature jumps are imposed at the rightmost boundary: Fig. <ref>(a) displays a jump of ΔT=150 K, whereas Fig. <ref>(b) one of ΔT=270 K. Moreover, we analyze the dimensionless temperature behavior when the conduction-radiation parameter N_CR decreases, as previously investigated in Refs. <cit.>. It is important to realize that both panels in Fig. <ref> display a beyond-Fourier behavior as N_CR decreases, demonstrating that the temperature profile is significantly affected by radiative scattering phenomena. Lastly, Table <ref> presents some relevant information regarding the APINN training. We note that, even for the coupled case, the APINN successfully minimizes both the temperature loss function ℒ_Θ̂(τ) and the radiative intensity loss function ℒ_Î(τ,μ) independently of the parameter N_CR.
§.§ Inverse problem: retrieval of the albedo from the boundary data
Finally, we present here the solution of an inverse problem of radiative transfer theory where we employ APINN to simultaneously retrieve the forward solution of the intensity I(τ,μ) and the single scattering albedo ω. We do not, however, introduce synthetic data everywhere in the domain, as has been done previously in the literature<cit.>; instead, we limit ourselves to introducing two data points representing the integrated intensity G(τ) at the edges of the slab, simulating a lab environment with two detectors capturing the integrated radiation entering and exiting the slab, respectively. The reason to present an inverse problem in such a fashion is to demonstrate the full potential and capabilities of physics-informed neural networks, which, with no additional overhead or computing power, can solve a forward and a parameter-retrieval problem simultaneously. We thus modify the Schwarzschild-Milne equation for a slab discussed in an earlier section. In particular, Eq. (<ref>) is changed to include the unknown albedo parameter ω_θ:
μdI/dτ+I-ω_θ/2v(1)=0
and the loss function in Eq. (<ref>) is modified to include the two synthetic detector data points at the boundaries of the slab:
ℒ(θ,ω_θ) = ℒ_int(θ,ω_θ;𝒩_int)+ℒ_b(θ,ω_θ;𝒩_b)
+ℒ_aux(θ,ω_θ;𝒩_aux) +ℒ_inv(θ,ω_θ;𝒩_inv)
where
ℒ_inv(θ,ω_θ;𝒩_inv)=
1/|𝒩_inv|∑_(τ,μ)∈𝒩_inv|| ∫^1_-1Î(τ,μ)dμ - G(τ)||^2=
1/2(|| ∫^1_-1Î(0,μ)dμ - G(0)||^2 + || ∫^1_-1Î(1,μ)dμ - G(1)||^2)
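The retrieval mechanism itself is standard: ω_θ is declared as a trainable parameter and the two boundary integrals are matched by quadrature. A minimal sketch (our own illustration; the numerical values of G(0) and G(1) below are placeholders rather than Van de Hulst's data, and net is assumed to be the APINN whose first output is Î) is:

import numpy as np
import torch

omega_theta = torch.nn.Parameter(torch.tensor(0.5))   # trainable albedo guess
G_data = {0.0: 1.4, 1.0: 0.3}   # the two boundary "detector" readings (placeholder values)

def inverse_loss(net, n_quad=64):
    # Gauss-Legendre quadrature approximates G(tau) = int_{-1}^{1} I(tau, mu) dmu
    nodes, weights = np.polynomial.legendre.leggauss(n_quad)
    nodes = torch.tensor(nodes, dtype=torch.float32)
    weights = torch.tensor(weights, dtype=torch.float32)
    loss = torch.tensor(0.0)
    for tau_b, G_b in G_data.items():
        x = torch.stack([torch.full((n_quad,), tau_b), nodes], dim=-1)
        G_pred = (weights * net(x)[..., 0]).sum()
        loss = loss + (G_pred - G_b) ** 2
    return loss / len(G_data)

# omega_theta replaces omega in the RTE residual and is optimized jointly with
# the network weights, e.g.:
# opt = torch.optim.Adam(list(net.parameters()) + [omega_theta], lr=1e-3)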
Fig. <ref> displays the fast convergence of the retrieved APINN parameter ω_θ to the actual value ω. Each line corresponds to a different APINN training procedure during which the only data points added were G(0) and G(1), obtained from Van de Hulst's tables<cit.> and used to minimize ℒ_inv during training. Although the loss term built from these two data points was not weighted differently from the interior or boundary ones, APINN achieved a precise inversion of the parameter of interest. In fact, as displayed in Table <ref>, the loss function converges independently of the albedo ω and with great precision, as shown by the relative error between the known albedo and the predicted APINN albedo ω_θ in the last column of Table <ref>. Therefore, APINN retrieved the correct parameter of interest ω_θ with only two data points added during the training process.
§ CONCLUSIONS
Throughout this paper, we have described different applications of APINN for solving the radiative transfer equation, which is a Boltzmann-type transport equation. We successfully solved forward problems in a slab with both isotropic and anisotropic scattering phase functions and irrespective of the albedo. The results presented improved upon previous attempts to use physics-informed neural networks for solving the RTE in both accuracy and speed<cit.>. Furthermore, we presented the solution of the first coupled radiation-conduction problem in a participating medium using the APINN framework and showed that the loss functions of the coupled neural networks quickly converged to a low minimum value below 10^-5. Our findings open the possibility of utilizing APINN to analyze higher-dimensional systems and discover more interesting physics with applications to metamaterials and semiconductor device modeling. Finally, we solved an inverse problem following a setup that replicates an experimental setting with data points at the boundary of the system. It will be interesting in future studies to build on the APINN platform to address higher-dimensional coupled, inverse coupled, and strongly scattering forward systems with applications to biomedical imaging, nanophotonics, metamaterials, and thermal modeling of semiconductor devices.
We acknowledge the support from the U.S. Army Research Office, RF-Center managed by Dr. J. Qiu (Grant #W911NF-22-2-0158). We thank professors Mike Kirby, Akil Narayan, and Shandian Zhe for useful discussions on this topic.
|
http://arxiv.org/abs/2307.05702v1 | 20230711181106 | Qubit Recycling in Entanglement Distillation | [
"Stuart Pelletier",
"Ruozhou Yu",
"George Rouskas",
"Jianqing Liu"
] | quant-ph | [
"quant-ph"
] |
Qubit Recycling in Entanglement Distillation
This work was supported in part by the National Science Foundation under grant OMA-2304118.
Stuart Pelletier, Ruozhou Yu, George Rouskas, Jianqing Liu
Department of Computer Science, North Carolina State University, Raleigh, NC 27606, USA
E-mail: {sopellet, ryu5, rouskas, jliu96}@ncsu.edu
August 12, 2023
==============================================================================================================================================================================================================
Quantum entanglement distillation is a process to extract a small number of high-fidelity entanglement from a large number of low-fidelity ones, which in essence is to trade yield (or survival rate) for fidelity. Among existing distillation approaches, Gisin's local filtering protocol is commonly adopted in photonic quantum systems for distilling entangled photons in polarization basis. Yet, the performance of Gisin's filter is cursed by the same fundamental trade-off between fidelity and yield. To address this challenge, in this work, we propose a protocol to recycle the disposed photons and improve their fidelity by a designed (and optimized) local operator. The key parameters of the proposed protocol are calculated by solving a constrained optimization problem. In so doing, we achieve significantly higher yield of high-fidelity entanglement pairs. We further evaluate the performance of our designed protocol under two common configurations of Gisin's filter, namely full filter and partial filter. Compared with existing distillation protocols, the results demonstrate that our design achieves as much as 31.2% gain in yield under the same fidelity, while only incurring moderate system complexity in terms of invested hardware and extra signaling for synchronization.
Entanglement distillation, Gisin's local filter, POVM, Optimization, Protocol design
§ INTRODUCTION
Quantum entanglement as a physical phenomenon in the microscopic world once troubled Einstein who called it “spooky action at a distance,” but it was later validated by the well-known Bell inequality test. Nowadays, despite many unanswered scientific questions around quantum entanglement, quantum networks have been widely engineered and deployed around the globe. The common goal of all these quantum networks is to distribute entanglement in large volume and high quality <cit.>, as entanglement is central to numerous applications in future quantum internet such as quantum teleportation, quantum computation, and quantum cryptography <cit.>.
When interacting with the environment like quantum memory and fibre channels, quantum entanglement unavoidably experiences coherence degradation that may lead to entanglement sudden death <cit.>. The common way to cope with decoherence is entanglement distillation, by which a smaller number of highly entangled states are extracted from a large number of weakly entangled states <cit.>. Among existing entanglement distillation protocols, Bennett's controlled-NOT (CNOT) operation <cit.> and Gisin's local filtering operation <cit.> are featured as mainstream approaches. Compared with Bennett's approach, Gisin's local filter has two appealing merits: (1) only local operations are needed (i.e., no classical communications); (2) only a single copy of the entangled state is needed (i.e., no ancilla entanglements are sacrificed).
Since its inception in 1996, Gisin's local filter has been extensively researched in both theory and experiments for entanglement distillation. In principle, a pair of weakly entangled qubits (and likewise for multipartite (>2 qubits) entanglement, such as the GHZ state) can become strongly entangled when passing through Gisin's filters. Any qubits reflected by the filter, however, will have their entanglement weakened, or in some cases, destroyed. Such qubits can either be measured or discarded as they are deemed useless at that point. While this uselessness holds true in many (ideal) cases, for some input states and/or under certain (practical) filter configurations, these reflected qubits are shown to have non-zero concurrence, i.e., they are still entangled despite weak strength. A natural question to ask is whether such reflected qubits can be recycled and turned into strongly entangled states. One can obviously anticipate a much higher yield of usable entanglement if the answer to this question is affirmative.
To this end, we present in this paper a novel protocol — consisting of a non-unitary transformation and multi-party agreement on coincidence count — to harvest and improve the weakly entangled qubits that are reflected by Gisin's filters. To search for the optimal non-unitary operator, we formulate a constrained optimization problem that maximizes the high-fidelity survival rate, i.e., the total entanglement yield with the minimum requirement on their fidelity. The protocol is integrated into and examined under two common filter-based entanglement distillation setups, namely the full filtering and partial filtering schemes. Based on numerical simulations, we demonstrate the superior performance of our qubit-recycling protocol in terms of high-fidelity survival rate compared to existing filter schemes.
The paper is organized as follows. To begin with, we survey the recent advances in entanglement distillation in Section <ref>. Next, we introduce the basic concepts that are relevant to our research problem in Section <ref>. We then describe the principle and design details of our proposed protocol in Section <ref>. To evaluate the performance of the protocol, we present the simulation results in Section <ref>. Lastly, we conclude the paper in Section <ref> with an outlook for the future work.
§ RELATED WORKS
In this section, we review recent advancements in entanglement distillation that have contributed to the ongoing development of the field. We organize our discussion into three subtopics: (1) distillation of multipartite states, which extends the scope of entanglement distillation beyond simple bipartite systems; (2) distillation using hyperentanglement, an emerging approach that utilizes multiple modes of entanglement to enhance distillation; and (3) distillation using reset-and-reuse operations in a quantum computer, a novel methodology that employs the inherent capabilities of quantum computing hardware to facilitate the distillation process by recycling and re-entangling ancilla qubits. By examining these recent developments, we aim to provide an overview of the current state of entanglement distillation research and highlight the significance and novelty of our proposed qubit recycling protocol.
§.§.§ Distillation of Multipartite Entanglement States
The distillation of multipartite entangled states, such as GHZ states, has garnered attention due to the advantages of entanglement being shared between more than two parties. Huang et al. <cit.> proposed a single-copy-based distillation scheme for amplitude-damped W states and amplitude-damped GHZ states. De Bone et al. <cit.> investigated the creation and distillation of GHZ states out of nonperfect Bell pairs. They introduced a heuristic dynamic programming algorithm to optimize protocols for creating and purifying GHZ states.
§.§.§ Distillation Utilizing Hyperentanglement
Utilizing hyperentanglement has been explored as a promising technique for enhancing entanglement distillation schemes. Zhou and Sheng <cit.> proposed an efficient two-step entanglement purification protocol for polarization entanglement using a single copy of states by utilizing hyperentanglement in the time bin and spatial modes. Ecker et al. <cit.> experimentally demonstrated single-copy entanglement distillation using pairs of single photons entangled in both the polarization and energy-time domains.
§.§.§ Reset-and-Reuse
In recent work by Germain et al. <cit.>, the authors explore the potential of a reset-and-reuse operation in quantum computers to substantially reduce yield loss in entanglement distillation protocols. They implement multi-pass distillation schemes, specifically BBPSSW and DEJMPS, and test them on the IBM-Q environment. This reset-and-reuse feature shows a significant minimization in the number of qubits required for distillation, bringing the number of qubits required per pass down from exponential to constant — a notably large improvement. It should be noted that such a reset-and-reuse operation, while available in quantum computers, is not currently available in a quantum network setting, as there are many challenges associated with re-entangling distance-separated ancillary qubits after measurement. Our work proposes a novel single-copy qubit recycling protocol which does not require any such re-entangling and can thus be used by a quantum network with currently available hardware.
§ PRELIMINARIES
§.§ Gisin's Local Filter
In the demonstrative experiment by Kwiat <cit.>, Gisin's local filter was realized by a series of coated glass slabs, tilted against the vertical axis by the Brewster's angle, as shown by an example in Fig. <ref>. By adjusting the configuration of these slabs (e.g., angles and coated materials), the transmission probability T_H (resp., T_V) for horizontally (resp. vertically) polarized incident photons can be tuned, owing to the well-known polarization-dependent reflectivity <cit.>. As a result, undesired states (i.e., noises) can be selectively blocked (and reflected in another direction), thus leaving the surviving photons to be more concentrated in the desired entangled states.
In theory, Gisin's local filter can be modeled as a positive operator-valued measurement (POVM), namely {M_0, M_1}, where M_0 = ([ α 0; 0 β ]) and M_1 = I - M_0 are positive semi-definite Hermitian operators. M_0 (and likewise M_1) is realized by the operator m_0 = √(α)|0⟩⟨0|+√(β)|1⟩⟨1| with M_0 = m_0 m_0^†. When implementing the POVM (or Gisin's filter) in photonic systems, α and β respectively denote the transmission probabilities T_H and T_V of the glass slabs. That is to say, the design of Gisin's local filter boils down to the construction of the α's and β's.
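A small numerical sketch of this POVM construction (our own illustration; the α, β values are arbitrary examples) is:

import numpy as np

alpha, beta = 0.3, 0.7                     # example transmission probabilities T_H, T_V
M0 = np.diag([alpha, beta])
M1 = np.eye(2) - M0
m0 = np.sqrt(alpha) * np.outer([1, 0], [1, 0]) + np.sqrt(beta) * np.outer([0, 1], [0, 1])
print(np.allclose(m0 @ m0.conj().T, M0))   # True: M0 = m0 m0^dagger
print(np.allclose(M0 + M1, np.eye(2)))     # True: completeness of the POVM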
§.§ Channel Decoherence Model
In this work, we consider a (photonic) quantum network that distributes EPR pairs between any two arbitrary nodes. An entanglement source (ES) generates EPR pairs by directing a laser beam at a BBO (beta-barium borate) crystal. Without loss of generality, the EPR pair in the state of |Φ^+⟩ = 1/√(2)(|00⟩+|11⟩) or ρ = |Φ^+⟩⟨Φ^+| is assumed.
Then, each qubit of the EPR pair is distributed to Alice and Bob through independent decoherence channels. We consider the amplitude damping model in which state |1⟩ may decay into |0⟩. Mathematically, an amplitude damping channel ℰ is described by the following super-operators, a.k.a, Kraus operators:
E^i_0 = [ 1 0; 0 √(γ̅_̅i̅) ], E^i_1 = [ 0 √(γ_i); 0 0 ],
where i ∈{A,B}, γ_i = 1 - e^-t_i/T_1 is a time-dependent damping factor, in which T_1 is the characteristic relaxation time over which the |1⟩ state decays into |0⟩. Denote γ̅_̅i̅ = 1 - γ_i. After channel decoherence, the received state at Alice and Bob is
ρ^' = ℰ(ρ)=∑^1_j=0∑^1_k=0(E_j^A ⊗ E_k^B) ρ(E_j^A⊗ E_k^B)^†.
For the sake of notational simplicity, in the remainder of this paper we consider the same damping channel for ES-A and ES-B, i.e., γ = γ_A = γ_B.
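For concreteness, the damped two-qubit state ρ' of Eq. (<ref>) can be generated numerically as follows (a minimal sketch; the value of γ is an arbitrary example):

import numpy as np

def kraus_ad(gamma):
    # amplitude-damping Kraus operators E_0, E_1
    E0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
    E1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
    return [E0, E1]

def damp_pair(rho, gamma):
    ops = kraus_ad(gamma)
    return sum(np.kron(Ej, Ek) @ rho @ np.kron(Ej, Ek).conj().T
               for Ej in ops for Ek in ops)

phi_plus = np.zeros(4); phi_plus[[0, 3]] = 1 / np.sqrt(2)
rho = np.outer(phi_plus, phi_plus)
rho_p = damp_pair(rho, gamma=0.3)
print(np.trace(rho_p).real)                # 1.0: the channel is trace preserving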
§ DESIGN PRINCIPLES OF QUBIT RECYCLING
In this section, we consider two common entanglement distillation setups in the literature, with one being that both Alice and Bob implement Gisin's local filters (coined as “full filtering”) while the other being that either Alice or Bob implements a Gisin's local filter (coined as “partial filtering”). While both setups have their merits, we will investigate the best use case of our proposed qubit-recycling idea and how much gain it can offer.
§.§ Qubit Recycling under Full Filtering
§.§.§ Typical full filtering design
To offset the decoherence incurred by the amplitude damping channel and restore the received state ρ^' closer to its original entanglement state ρ, Alice and Bob implement Gisin's local filters, which are mathematically defined as the POVMs {M_A,0, M_A,1} and {M_B,0, M_B,1} respectively, for entanglement distillation. We consider the local filters performed by Alice and Bob described by the operation:
M_i,0 = [ α_i 0; 0 β_i ],M_i,1 = [ β_i 0; 0 α_i ],
where α_i,β_i ∈ (0,1) and α_i + β_i = 1, in compliance with the POVM property. In existing work, full filtering schemes have been widely explored, wherein Alice and Bob each distills her/his respective qubit independently. This process is mathematically described by applying POVMs on both qubits. We refer to the state after undergoing both filters, i.e., the state Alice and Bob want to keep, as
ρ_11 = 1/S_11(√(M_A,1)⊗√(M_B,1)) ρ^' (√(M_A,1)⊗√(M_B,1))^†.
where S_11 is the normalization factor, S_11 = Tr{(√(M_A,1)⊗√(M_B,1)) ρ^' (√(M_A,1)⊗√(M_B,1))^†}. The value of S_11 represents the likelihood that both Alice's and Bob's qubits pass through Gisin's local filters, and can thus be considered the success probability, or survival rate, of the distillation process. Note that since we consider identical channels for ES-A and ES-B, that is γ_A = γ_B, Alice's and Bob's filters will have the same configuration. Therefore, we can drop the subscripts A and B and simply let α = α_A = α_B (likewise for β).
The calculation of the POVM parameters {α, β} is usually performed by solving a constrained optimization problem that seeks to maximize the high-fidelity yield, i.e. the success probability while meeting a minimum requirement on the entanglement fidelity. The reason for posing a hard constraint on fidelity is because some quantum applications (e.g., QKD) have a stringent requirement on the minimum fidelity to be considered usable (e.g., satisfying a minimum secret key rate) <cit.>. Mathematically,
{α^*, β^*} = arg max_{α, β} S_11; s.t. Tr[√(√(ρ)ρ_11√(ρ))]^2 ≥ F_th.
This problem is a typical multivariate quadratic optimization problem, which can be easily proven to be convex by checking the second order derivatives of the objective and constraint functions. By Slater’s condition, the necessary and sufficient conditions for a solution {α^*, β^* } to be the optimal solution are the KKT conditions.
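As an illustration, this benchmark optimization can be solved numerically along the following lines (a self-contained SciPy sketch of our own; the γ and F_th values are example choices within a feasible range, and the authors' actual implementation may differ):

import numpy as np
from scipy.optimize import minimize

gamma, F_th = 0.1, 0.9
E = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]), np.array([[0, np.sqrt(gamma)], [0, 0]])]
phi = np.zeros(4); phi[[0, 3]] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)
rho_p = sum(np.kron(Ej, Ek) @ rho @ np.kron(Ej, Ek).T for Ej in E for Ek in E)

def filtered(alpha):
    # sqrt(M_1) with M_1 = diag(beta, alpha), beta = 1 - alpha, on both qubits
    m1 = np.sqrt(np.diag([1 - alpha, alpha]))
    K = np.kron(m1, m1)
    sigma = K @ rho_p @ K.T
    S = np.trace(sigma).real
    return sigma / S, S

# rho is pure, so the fidelity reduces to <phi+| rho_11 |phi+>
res = minimize(lambda x: -filtered(x[0])[1], x0=[0.5], method="SLSQP",
               bounds=[(1e-3, 1 - 1e-3)],
               constraints=[{"type": "ineq",
                             "fun": lambda x: (phi @ filtered(x[0])[0] @ phi).real - F_th}])
print("alpha* =", res.x[0], " survival rate =", -res.fun)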
§.§.§ Residue entanglement in reflected qubits
When Alice's and Bob's local filters are configured using the parameters {α^*, β^* }, a photon pair passing through both filters is guaranteed to have the desired fidelity level. Yet, with such an optimal filter configuration, the reflected qubit(s) could still be usable in the sense that a certain degree of entanglement remains.
Suppose an EPR pair passes through an amplitude damping channel with parameter γ and is filtered using Gisin's local filter with a POVM with parameters {α,β}. The resulting state of the reflected photons, ρ̃_00, is entangled when α,β≠ 0 and γ≠ 1.
Note that ρ̃_00 =
[ α^2(1/2+γ^2/2) 0 0 1/2αβ(1-γ); 0 1/2αβ(1-γ)γ 0 0; 0 0 1/2αβ(1-γ)γ 0; 1/2αβ(1-γ) 0 0 1/2β^2(1-γ)^2 ].
This density matrix is separable if and only if its partial transpose is positive <cit.>. This is called the PPT condition, which is equivalent to the condition that its partial transpose has exclusively non-negative eigenvalues. In other words, if at least one of its eigenvalues is negative, then the state ρ̃_00 is entangled. Note that its partial transpose[The partial transpose generally is taken with respect to one qubit, corresponding to either Alice's or Bob's qubit. However, the eigenvalues of the partial transpose are invariant under which qubit the partial transpose is taken on, because the partial transpose with respect to Alice's qubit is equal to the transpose of the partial transpose taken with respect to Bob's qubit. In this case, then, since the partial transpose is symmetric, it is the same partial transpose matrix for both Alice's and Bob's qubits.] is the density matrix
[ α^2 (1/2 + γ^2/2) 0 0 0; 0 1/2αβ(1-γ)γ 1/2αβ(1-γ) 0; 0 1/2αβ(1-γ) 1/2αβ(1-γ)γ 0; 0 0 0 1/2β^2(1-γ)^2 ]
which has eigenvalues
λ_1 = -1/2αβ(-1+γ)^2, λ_2 = 1/2β^2(-1+γ)^2,
λ_3 = 1/2α^2(1+γ^2), λ_4 = 1/2αβ(1 - γ^2).
Note that λ_2,λ_3 and λ_4 all take on non-negative values for all α,β,γ∈ [0,1]. The eigenvalue λ_1, however, takes on a negative value except when α=0, β=0 or γ=1. Therefore, ρ̃_00, is entangled when α,β≠ 0 and γ≠ 1.
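This PPT argument is straightforward to verify numerically; the following sketch (our own check, with arbitrary α and γ) builds the normalized reflected state and confirms the negative eigenvalue of its partial transpose:

import numpy as np

def reflected_state(alpha, gamma):
    a, b, g = alpha, 1 - alpha, gamma
    rho = np.zeros((4, 4))
    rho[0, 0] = a**2 * (0.5 + g**2 / 2)
    rho[1, 1] = rho[2, 2] = 0.5 * a * b * (1 - g) * g
    rho[3, 3] = 0.5 * b**2 * (1 - g) ** 2
    rho[0, 3] = rho[3, 0] = 0.5 * a * b * (1 - g)
    return rho / np.trace(rho)

def partial_transpose(rho):
    # transpose on Bob's qubit
    return rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

rho00 = reflected_state(alpha=0.3, gamma=0.4)
print(np.linalg.eigvalsh(partial_transpose(rho00)))   # one negative eigenvalue -> entangled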
§.§.§ Recycling reflected qubits
In light of the remaining usable entanglement in the reflected qubits, we propose a second Gisin's local filter, denoted F_2 (the first-tier filter is denoted F_1), to harvest them. The basic idea is shown in Fig. <ref>, in which the reflected qubits are distilled by another filter. Then, the two light paths are integrated and analyzed by a single-photon avalanche detector (SPAD). Note that a small portion of the qubits reflected by F_1 will be reflected again by F_2. While they can be looped back for further recycling, we choose to measure them, as their entanglement strength becomes much weaker than that observed when they are only reflected once. Technically, by calculating the concurrence following Proposition 1, we can show that the entanglement strength progressively deteriorates as qubits are reflected by each subsequent filter.
To determine the optimal configuration of F_2, let us first define an outcome space for the first-tier filters F_1 as Ω_1 = {T_A,1T_B,1, T_A,1R_B,1, R_A,1T_B,1, R_A,1R_B,1}. For example, the outcome ω = R_A,1T_B,1 implies that Alice's qubit is reflected while Bob's is transmitted. In the traditional full filtering scheme, this outcome would be considered a failure because no coincidence click is observed. In addition, we can define the outcome space for the second-tier local filters Ω_2 = {∅_A,2∅_B,2, T_A,2∅_B,2, R_A,2∅_B,2, ∅_A,2T_B,2, ∅_A,2R_B,2, T_A,2T_B,2, T_A,2R_B,2, R_A,2T_B,2, R_A,2R_B,2}, in which ∅ is a null event indicating that no qubit arrives at the corresponding filter. Among these possible outcomes, we collect the outcomes which result in the final distilled entanglement into a set Ω_S = {T_A,1T_B,1∅_A,2∅_B,2, T_A,1R_B,1∅_A,2T_B,2, R_A,1T_B,1 T_A,2∅_B,2, R_A,1R_B,1 T_A,2T_B,2}, which gives us the survival rate P_S = ∑_i=1^4Pr(ω_i ∈Ω_S).
Specifically, the survival rates for the four cases in Ω_ are respectively calculated as follows
Pr(T_A,1T_B,1∅_A,2∅_B,2) = S_11
Pr(T_A,1R_B,1 ∅_A,2T_B,2) =
Tr{(√(M_A,1)⊗√(M_B,0)) ρ^' (√(M_A,1)⊗√(M_B,0))^†}
×Tr{(I ⊗√(M_B,1^')) ρ_10 (I ⊗√(M_B,1^'))^†}
Pr(R_A,1T_B,1 T_A,2∅_B,2) =
Tr{(√(M_A,0)⊗√(M_B,1)) ρ^' (√(M_A,0)⊗√(M_B,1))^†}
×Tr{(√(M_A,1^')⊗ I) ρ_01 (√(M_A,1^')⊗ I)^†}
Pr(R_A,1R_B,1 T_A,2T_B,2) =
Tr{(√(M_A,0)⊗√(M_B,0)) ρ^' (√(M_A,0)⊗√(M_B,0))^†}
×Tr{(√(M_A,1^')⊗√(M_B,1^')) ρ_00 (√(M_A,1^')⊗√(M_B,1^'))^†}
where the second-tier filter's POVM operator is captured by {M_A/B,0^', M_A/B,1^'}. Moreover, for any outcome ω_i ∈Ω_S, we denote the output quantum state as ρ̂_11, ω_i, which can be calculated similarly to Eq. (<ref>).
Then, the search for the optimal {α', β'} of the POVM operator {M_A/B,0^', M_A/B,1^'} of F_2 is formulated as the following optimization problem.
{α'^*, β'^*} =
arg max_{α', β'}∑_i=1^4 Pr(ω_i ∈Ω_S) · 1(Tr[√(√(ρ)ρ̂_11, ω_i√(ρ))]^2 ≥ F_th),
in which 1(·) is the indicator function that equals 1 if its argument is true and 0 otherwise.
§.§ Qubit Recycling Under Partial Filtering
Partial filtering is another widely adopted configuration in entanglement distillation, owing to its higher survival rate. In its setup, depending on which channel has stronger decoherence, only one of Alice or Bob implements a local filter. This setup naturally gives rise to a higher survival rate without losing too much fidelity. Since this paper considers identical channel decoherence on ES-A and ES-B, there is no difference between placing a filter on Alice's or Bob's end. Therefore, without loss of generality, we consider the setup in which Alice filters her qubit, while Bob does not.
First of all, examining the single-filter case, we call the state transmitted by F_1, i.e., the state Alice and Bob want to keep in a traditional partial filtering design without qubit recycling, as
ρ_1 = 1/S_1(√(M_A,1)⊗ I) ρ^' (√(M_A,1)⊗ I)^†,
where S_1 is the normalization factor. The goal is to find the optimized parameters for F_1 by solving a fidelity-constrained yield-maximization problem similar to (<ref>). Mathematically,
{α^*, β^*} = arg max_{α, β} S_1; s.t. Tr[√(√(ρ)ρ_1√(ρ))]^2 ≥ F_th.
Moreover, we define the outcome space of F_1 as Ω'_1 = {T_A,1, R_A,1}, and that of F_2 as Ω'_2 = {∅_A,2, T_A,2, R_A,2}. Analogously to the full-filter case, we collect the outcomes which result in a distilled entanglement pair, giving us the set Ω'_S = {T_A,1∅_A,2, R_A,1 T_A,2} and the survival rate P'_S = ∑_i=1^2Pr(ω'_i ∈Ω'_S). This similarly leads us to the following analogous constrained optimization problem.
{α'^*, β'^*} =
arg max_{α', β'}∑_i=1^2 Pr(ω'_i ∈Ω'_S) · 1(Tr[√(√(ρ)ρ̂_1, ω'_i√(ρ))]^2 ≥ F_th).
§ PERFORMANCE EVALUATION
§.§ Simulation Methodology
In order to evaluate the performance of our proposed qubit recycling protocol, we developed a simulation model which solves the constrained optimization problems (<ref>), (<ref>), (<ref>), and (<ref>). The simulation is implemented in Python, and consists of the following steps:
* Initialization: At the beginning of the simulation, the initial parameters and constraints of the problem are defined. The quantum system ρ is prepared, we define a range of γ values to evaluate, and we fix our F_th value. Specifically, F_th values of 0.7 and 0.9 were selected.
* First filter parameter optimization: The simulation first assumes a single filter model as a benchmark, and refines the parameters of the local POVM operator of the first filter F_1 through an iterative optimization algorithm. The optimization process iterates through the given γ value range for our given F_th value and finds the {α, β} values which respectively maximize (<ref>) and (<ref>).
* Second filter parameter optimization: Given the optimized {α, β} value corresponding to a given γ and F_th for F_1, the second filter F_2 is optimized using similar iterative methods to solve (<ref>) and (<ref>).
* Evaluation: The optimized local operators are then applied to the prepared quantum system, and the survival rate and fidelity of the resulting entanglement pairs are calculated, both for the normal filtering case (i.e., the benchmark) and for the filtering-with-recycling case, for comparison. Specifically, the normal filtering case is separately instantiated with the full filtering and partial filtering schemes.
By following the aforementioned simulation methodology, we are able to determine the optimal design of our local operator for recycling the disposed photons, achieving a significant increase in high-fidelity survival rate over the optimized benchmark scheme. In the following subsections, we will discuss the specific results obtained for the full filtering and partial filtering schemes.
§.§ Full Filter Results
Our simulation results demonstrate that the full filtering scheme with qubit recycling shows a significant improvement in survival rate compared to the benchmark single filter protocol, shown in Fig. <ref> and Fig. <ref>. For the F_th = 0.7 case, our design adds 20.8% to 31.2% additional survival rate compared to the benchmark, for γ∈ (0.3676, 0.4059). Similarly, for the F_th = 0.9 case we observe a survival rate addition between 30.6% and 31.2%, for γ∈ (0.1056, 0.1085).
The limited range of γ values is easily interpreted: values below this range produce states with fidelity above the threshold with no filtering necessary, so the optimal choice is not to use Gisin's local filter. In other words, the channel introduces such an insignificant amount of noise that the entanglement can simply pass through the channel without any filtering and still maintain high fidelity. For γ values above this range, the amplitude damping effect is so strong that there do not exist {α, β} values for which filtering achieves a fidelity greater than F_th. To achieve high-fidelity entanglement in the high-γ range, one could cascade additional filters in series with F_1, which constitutes an orthogonal research topic.
An analysis of the contributions of different filtering events to the final survival rate reveals that the improvement comes mostly from the ρ̃_10 and ρ̃_01 cases — implying that one photon is reflected in one arm while its entangled counterpart is transmitted in the other arm, as shown in the histogram in Fig. <ref>. This indicates that our proposed qubit recycling protocol effectively recycles the disposed photons in these cases, leading to a higher overall survival rate without compromising the fidelity of the entangled photon pairs.
§.§ Partial Filter Results
Our simulation results for the partial filtering scheme show an improvement in the survival rate of similar degree to the full filtering scheme, which can be seen in Fig. <ref> and Fig. <ref>. Specifically, for the F_th = 0.7 case, the partial filtering scheme adds between 20.5% and 25.0% to the benchmark survival rate, for γ∈ (0.3676, 0.3824). For the F_th = 0.9 case, we similarly observe an additional 24.3%-25.0% increase in survival rate, for γ∈ (0.1056, 0.1079).
We note an observed tradeoff between the full and partial filtering schemes. The partial filtering scheme has γ ranges for which it is a viable design that are proper subsets of the corresponding full filter's γ ranges; however, the overall survival rates are significantly higher for the partial filtering scheme within those ranges. For the F_th = 0.7 case, the highest survival rate using the full filtering scheme is 56.1%, while the corresponding partial filtering scheme has a 74.9% survival rate. A similar difference is observed in the F_th = 0.9 case, where the full filtering scheme has a maximum survival rate of 56.2%, and the corresponding partial filtering scheme has a survival rate of 75.0%.
This tradeoff can be explained by the increase in the probability of photons being initially transmitted through the first filter. Given that the filter is on only one side of the entanglement pair in the partial filtering scheme, compared to both sides in the full filtering scheme, the probability of being transmitted is much greater. This, however, permits less filtering, which explains the smaller γ ranges for which we see a gain in survival rate. These differences in the contribution of the transmitted photons can be seen by comparing the histograms in Fig. <ref> and Fig. <ref>.
§.§ Synchronization and Multi-party Agreement
The results confirm the effectiveness of our qubit recycling protocol in enhancing the performance of entanglement distillation in both the full filtering and partial filtering schemes. However, both the full filtering scheme as well as the partial filtering scheme suffer from a potential synchronization challenge, which occurs in the ρ̃_10 and ρ̃_01 cases in the full filtering scheme, or the ρ_0 case in the partial filtering scheme, where one photon in an entangled pair passes through its first filter (or is not filtered in the case of the partial filtering scheme), while the corresponding photon reflects off of its respective first filter, and subsequently passes through its second filter. As a result, the arrival times of the photons at Alice's and Bob's detectors will be different, leading to a discrepancy in their timesheets. When Alice and Bob compare their timesheets to identify photon coincidences, this discrepancy may cause difficulties in recognizing these events as coincidences, potentially leading them to be incorrectly discarded.
This time discrepancy can be avoided if Alice and Bob each measure the distance of their respective recycled light paths and share this information with each other, as well as the entanglement source. Alice and Bob can then compensate for the time difference for the recycled photons. In addition, the entanglement source can also use this information to emit photons only at intervals which are not equal to the interval between the arrivals of the entangled photons in these cases. This allows Alice and Bob to be certain that any photons arriving with such an interval between them can in fact be labeled a coincidence pair.
Furthermore, it is important to note that in the full filtering scheme, even if the ρ̃_10 and ρ̃_01 cases are excluded, the inclusion of the ρ̃_00 case alone still results in a benefit in survival rate, albeit at a lower amount. Specifically, we see a 6.06% increase in survival rate for F_th = 0.7, and a 6.24% increase for F_th = 0.9. This is illustrated by Fig. <ref>.
§ CONCLUSION AND FUTURE WORK
In this paper, we have presented a novel qubit recycling protocol for improving the yield of high-fidelity entangled qubits in photonic quantum systems. By employing a second local filter, our approach effectively reclaims discarded entangled qubits, resulting in a substantial increase in the yield of high-fidelity entanglement pairs. Our proposed protocol achieves up to a 31.2% gain in high-fidelity survival rate while incurring only moderate system complexity in terms of invested hardware and extra signaling for synchronization. Our work demonstrates the potential of qubit recycling in quantum entanglement distillation, which could have implications for the development of scalable and robust quantum communication networks.
An avenue for future work is to examine the applications of qubit recycling in different network models (e.g., multipartite entanglement, non-symmetric noise channels). Another avenue is examining the local filter with a zero-valued parameter, which breaks entanglement in the reflected photons. In some network models, using such a filter can be optimal, so finding a use for these photons could lead to improvement over our proposed protocol.
|
http://arxiv.org/abs/2307.05323v1 | 20230711150956 | Existence of quantum states for Klein-Gordon particles based on exact and approximate scenarios with pseudo-dot spherical confinement | [
"Sami Ortakaya"
] | quant-ph | [
"quant-ph",
"math-ph",
"math.MP"
] |
Exact and approximate solutions of KG equation
Shipito Address 444 Alaska Avenue Suite #BKF475, Torrance 90503, CA, USA
[email protected]
Present address: Ercis Central Post Office, 65400, Van, Turkey
Existence of quantum states for Klein-Gordon particles based on exact and approximate scenarios with pseudo-dot spherical confinement
Sami Ortakaya
August 12, 2023
=====================================================================================================================================
In the present study, the Kummer eigenvalue spectrum of a charged spinless particle confined in a spherical pseudo-dot potential of the form r^2+1/r^2 is reported. It is shown how confluent hypergeometric functions acquire principal quantum numbers for the considered spatial confinement. To study systematically both the constant rest mass m_0c^2 and the spatially varying mass distribution m_0c^2+S(r), the Klein-Gordon equation is solved exactly for the variable-mass case and approximately for the constant-mass case. The relativistic eigenvalues of the Klein-Gordon particle moving in this spherical confinement depend on the mass distribution: in the exact scenario the energy spectrum takes values larger than m_0=1 fm^-1, while the subsequent analysis shows that the eigenvalues satisfy E<m_0 in the approximate scenario.
§ INTRODUCTION
Quantum mechanical wave functions are represented by probability distributions localized near certain spatial points in the interaction field. Based on the spatial motion of quantum mechanical particles, represented by relativistic and nonrelativistic eigenstates, it is important to analyze the discrete energies of quantum systems in electronic, nuclear and particle physics. In numerous studies, such quantum physical processes have been applied to external fields acting on electronic interactions in plasmas <cit.> and condensed matter <cit.>. Concerning the Klein-Gordon equation, which describes relativistic spin-zero energy levels, it has been shown that the eigenvalue equation leads to spatial confluent hypergeometric functions, not only for the harmonic oscillator <cit.>, but also in the fractional regime <cit.>. Furthermore, Mie-type potentials <cit.>, exponential variables <cit.> and the non-central oscillator <cit.> have been solved under equality of the potential and the radial distribution of the rest-mass energy. Within the framework of the Klein-Gordon oscillator, commutative and non-commutative cases <cit.>, as well as Lorentz-violating scenarios <cit.>, have also been analysed. Regarding the 1D quantum well, tunnelling <cit.> and deep-well <cit.> configurations have been studied in the spin-0 regime.
Besides typically eigenvalue equations for nonrelativistic context, the spin-zero relativistic minimal form is given in the following Klein-Gordon equation <cit.>
[-∇^2+M^2]ψ_n(r⃗)=[E_n-V(r)]^2ψ_n(r⃗)
where E_n is the energy eigenvalue, V(r) denotes the spatially dependent potential energy, and M is the rest-mass energy of the particle in natural units (ħ=c=1). The potential energy of quantum mechanical particles subject to interaction forces plays a key role in the resulting variable-coefficient differential equations. In particular, for the solution of quantum mechanical wave equations in a defined space, the Frobenius method for spin-0 scalar particles <cit.>, the asymptotic iteration method within the scope of the molecular oscillator <cit.>, and the Nikiforov-Uvarov method applied to thermodynamic concepts <cit.> have been used. These methods have been pioneering approaches for expressing the explicit form of the energy spectrum and the corresponding polynomial wave functions. Additionally, supersymmetric quantum mechanics has also been employed in relativistic calculations <cit.>.
Another analytical approach, based on defining the spatial domain, is the Laplace integral transformation, which demonstrates the dependence of the energy spectra on the quantum numbers. This approach has been used not only for the time-dependent problem <cit.>, but also for the spatial part of the Schrödinger equation <cit.>, where exponential variables are considered when passing to the s-domain of the Laplace transform. There, the binomial form of the transformed space has been introduced in a multi-valued context. The studies which follow the Laplace transformation involve applications to spin-0 particles <cit.> and Dirac spinor systems <cit.> with the Morse oscillator. Additionally, the N-sphere system has been examined in reduced form for spin-0 particles with the pseudoharmonic oscillator <cit.>. These approaches mainly assume a spatially varying mass; however, a comparative and reasonable analysis, including approximate results near a spatial point, is needed for the effective potential energies. I therefore solve the spin-0 regime with the familiar "equality" between scalar and vector potentials (see Ref. <cit.>), and I also focus on the representation of the quantum states of spin-0 particles with constant rest mass in the effective potential energies.
The purpose of this study is to model the relationship between the hypergeometric functions and the Laplace transform method, as seen in previous studies <cit.>, and to review the relativistic spin-0 eigenvalue spectra. Specifically, I aim to demonstrate how the key properties of the real function in the Laplace s-domain lead to principal quantum numbers, so Kummer's differential equation is revisited through an algebraic equation. In order to show the reduction to a solvable regime, two considerations of the Klein-Gordon equation with pseudo-dot confinement are followed: the eigenvalues provide an exact scenario when the mass distribution is M=m_0+S(r) under V(r)=S(r), and I will show that the eigenvalues lead to approximate cases, due to the r^4 and r^-4 terms, when the mass is constant, M=m_0, under S(r)=0. The approximate solutions for a constant rest mass M=m_0 can be illustrated on the Klein-Gordon equation, which can be reduced to Schrödinger-type equations. For this purpose, I show the "existence of quantum numbers" in Kummer's differential equation. Within the transformed Klein-Gordon equation, I deal with the constant-mass case, which gives rise to high-order powers of the spatial variable <cit.>. Once the spatial variables are known in the Laplace s-domain, the Kummer-type equations readily define the eigenvalue spectra for a central potential <cit.>. The radial part of the original function can be transformed into the Laplace s-domain, where the terminal-value theorem applies to a real statement <cit.>.
In this way, a revised model within the analytical framework of the spinless relativistic energy spectra given in Eq. (<ref>) is presented. As will be seen, the physical wave function and the existence of the eigenvalue spectra are given in the following section.
In the rest of the paper, the relativistic spin-zero scheme is illustrated through the numerical results.
§ MATHEMATICAL STATEMENT ON THE EXISTENCE OF QUANTUM NUMBERS
Considering the eigenvalues of the levels E_n and the corresponding distribution |ψ_n(r⃗)|^2, the radial differential equation of this eigenvalue problem is given as
a(x)u”(x)+b_nu'(x)+c_n(x)u(x)=0,
where u(x) is an unknown real function related to the eigenvalue E_n, and a(x), b_n and c_n(x) are the leading variable coefficient, a constant term and a spatial function, respectively. The eigenvalues also appear in both b_n and c_n(x); moreover, I take the spherical space in the separated form ψ_n(r⃗)=u(r)Y(θ, φ).
The solution of Eq. (<ref>) may show a distribution over a certain "small" range within a Kummer-type differential equation, which also involves multivalued functions.
In this way, we need to analyze the existence of the corresponding quantum numbers, i.e., the eigenvalue spectrum.
Definition 1. Considering a particle with wave function ψ(r⃗), the Schrödinger-type n-eigenvalue equation is given by the following N-spherical equation
ψ_rr+N-1/rψ_r-L̂/r^2ψ+λ_n(E_n, r)ψ=0, ψ(r⃗)∈ (0, ∞)
where L̂ denotes the hyperangular-momentum operator providing the hyperspherical harmonics of the function Y(θ_1, θ_2, θ_3, …θ_N-2, φ) in N spheres. λ_n(E_n, r) represents a central function combining the eigenvalue and the spatially dependent potential, through ψ(r⃗)=1 and N≥ 3.
Definition 2. In the range r∈ (0, ∞), the separated radial part of ψ_n(r⃗) is a distribution u(x) which satisfies the dimensionless regime of the variable r→ x. The eigenvalues provide that
xu”+β_0u'+(β_1-β_2^2 x-β_3^2/x)u=0, u(x)∈ (0, ∞).
Here, the β_i denote constants that include the eigenvalues and other parameters, via the condition u(r)=1 for ψ(r⃗)=u(r)Y(θ, φ) in N=3.
§.§ Rearranging of Parameters and Variables
The key feature is to use the following ansatz transformation
u(x)=x^-|σ|f(x), σ∈
so Equation (<ref>) reduces to a kind of Kummer's equation which is given by
xf”(x)+β f'(x)+(β_1-β_2^2 x)f(x)=0, β=β_0-2|σ|.
Then we obtain that
|σ|=-1-β_0/2+√((1-β_0/2)^2+β_3^2).
As will be seen in the variable covering of the transformation r→ x, we should have β_0=1, through β=1-2β_3, for |σ|=β_3. Note that the given values of β, β_0 and σ are valid for the variable x∝ r. These values can be combined to provide special cases under x∝ r^α, (α=1, 2, 3, …). As can be seen in previous solutions <cit.>, it has been obtained that β_0 =N/2 for α=2, i.e., x∝ r^2.
We also have to consider that f(x) is a well-behaved function yielding physically acceptable solutions:
u(x)=x^σ_0-|σ|g(x), (σ_0-|σ|>0)
which is provided by the radial boundary values
u(0)=0 and u(x→∞)→ 0. Since the acceptable results depend on eigenvalue equations whose components behave properly asymptotically, Equation (<ref>) can be transformed into Kummer's differential equation in the dimensionless form
xh”+(b_n-x)h'-a_nh=0, u(x)=x^σ_0-|σ| e^-β_2 xh(x).
Here, b_n and a_n include eigenvalue of the operator for given values of β_i in Eq. (<ref>). One of the solutions of Eq. (<ref>) is confluent hypergeometric function including rising factorial. Then, the polynomial solution is
h(x)=M(a_n, b_n,x)=∑_j=0^∞a_n^(j)x^j/b_n^(j)j!.
As I will prove, the eigenvalue spectrum leads to
a_n=-n, n=0, 1, 2, 3, …
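As a quick numerical illustration of why a_n=-n matters (a minimal sketch, not part of the analytical derivation), the confluent hypergeometric series truncates to a polynomial of degree n whenever its first argument is a non-positive integer:

import numpy as np
from scipy.special import hyp1f1

b = 1.5
x = np.linspace(0.0, 6.0, 7)
# M(-1, b, x) = 1 - x/b and M(-2, b, x) = 1 - 2x/b + x^2/(b(b+1)) exactly
print(np.allclose(hyp1f1(-1, b, x), 1 - x / b))
print(np.allclose(hyp1f1(-2, b, x), 1 - 2 * x / b + x**2 / (b * (b + 1))))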
The proposed wave functions have to satisfy physical boundary conditions, so the spherical regime provides an effective way of maintaining the ansatz solution.
Lemma 1.
Let u(x), f(x), g(x), h(x) be real functions and let σ, σ_0 be real eigenvalue parameters. Then the following assertions hold:
* If f(x) is a well-behaved function, then u(x) yields a non-zero distribution. Furthermore, the boundary values of the spherical regime permit our interpretation of the radial distributions. Then, the conditions u(0)=0 and u(x→∞)→ 0 denote physically acceptable solutions satisfying
u(x)=x^-|σ|f(x), f(x)=x^σ_0g(x) for σ_0> |σ|.
* Due to the asymptotic behaviour of u(x), the behaviours of f(x) and g(x) are also constrained at long distances:
lim_x→∞f(x)→0 and lim_x→∞g(x)→0.
§.§ Solutions of Kummer's Eigenvalue Spectra
In the presence of the spherical wave functions in the Klein-Gordon Equation (<ref>), we can conclude that the radial form of Equation (<ref>) yields an eigenvalue-dependent set of variables. In addition to the closed-form ansatz in Equation (<ref>), the terminal-value theorem and the existence of eigenvalue numbering allow a physical solution to be obtained. There are two cases defined by the physical wave function with the n^ th eigenvalue.
Case 1.
A kind of Kummer eigenvalue equation is obtained in the form of Equation (<ref>). Alternatively, multi-valued functions can be analyzed by considering real functions in the s-domain. We should have a first solution of the ordinary equation in the following form:
xf”(x)+β f'(x)+(β_1-β_2^2 x)f(x)=0, u(x)=x^-|σ|f(x), σ∈ℝ,
then Laplace's s-function reads <cit.>
ℒ{f(x)} =F(s)
=A_n(s-β_2)^a(s+β_2)^b
with
a=-2-β/2+β_1/2β_2, b=-2-β/2-β_1/2β_2
where A_n is a constant determined upon inverse transformation, which yields a new constant C_n in the eigenfunctions consisting of confluent hypergeometric functions:
f(x)=C_n e^-β_2xx^1-β_0+2|σ|M(-a, 2-β, 2β_2x), a=n, (n=0, 1, 2, 3,…).
Proof. Eq. (<ref>) yields an ordinary differential equation in the Laplace variable s, which is obtained in the following form:
(s^2-β_2^2)F'(s)+[(2-β)s-β_1]F(s)=0.
On the other hand, the s-domain function must satisfy F(s)∈ℝ at s=0.
We then have a real function of the form
F(0)=(-1)^aβ_2^a+b/2
and then we should get the eigenvalue spectra
a=n, n=0, 1, 2, 3, …
Note that the terminal-value theorem is valid for real values of spatial wave function f(x).
Inverse transform also yields convolution integral through solutions <cit.>
ℒ^-1{F(s)} =f(x)
=B_n∫_0^x (x-τ)^-a-1τ^-b-1 e^2β_2 τ dτ
=C_n e^-β_2 xx^1-βM(-a, 2-β, 2β_2 x), β=β_0-2|σ|
where B_n denotes an eigenvalue-dependent constant including a Gamma function <cit.>. One can see that the acceptable wave function is ensured by f(0)=0 and the convergence limit given in Equation (<ref>). Note that the well-behaved distribution of the function f(x) is consistent with sF(s) in Equation (<ref>).
Case 2.
Equation (<ref>) denotes another kind of the Kummer's eigenvalue equation
xh”(x)+(ε_1-ε_2 x)h'(x)+ε_3h(x)=0,
where
ε_1=2|σ|+β_0, ε_2=2β_2, ε_3=β_1 - β_2(2|σ|+β_0),
then h(x) provides that the confluent hypergeometric functions related to the n'th eigenvalues:
h(x)=C_n M(-n, ε_1, ε_2x), ε_3/ε_2=n, n=0, 1, 2, 3, …
Proof. Applying the Laplace's transform in s-domain, one can obtain an eigenfunction of the following form:
ℒ{h(x)} =F(s)
=A_n(s-ε_2)^ε_1-2(s-ε_2/s)^1+ϵ_3/ϵ_2,
where A_n is a determined constant. In the case of the terminal-value, we should consider that
lim_x→∞h(x)=lim_s → 0sF(s)→∞, s∈ℝ,
which satisfies that
lim_x→∞u(x)→ 0,
and then we conclude that the exponent must take zero or positive integer values at s=0. This condition is provided by
ε_3/ε_2=n, n=0, 1, 2, 3, …
Here, one can see that real function is obtained via the integers related to s<ε_2. We also obtain that the convolution integral yields <cit.>
ℒ^-1{F(s)} =h(x)
=C_n M(-n, ε_1, ε_2x).
Here, we expect the transformed radial function f(x) to be consistent with the terminal-value theorem in the Laplace s-domain. The requirement that the limiting value be real then forces integer exponents, in combination with the following theorem:
Theorem. (<cit.>) Suppose that f(x) satisfies
the conditions of the derivative theorem and furthermore that
lim_x→∞ f (x) exists. Then this limiting value is given by
lim_x→∞f(x)=lim_s → 0sF(s), s∈ℝ,
where F(s)=ℒ{f(x)}.
§ NUMERICAL RESULTS
The obtained values result in the pseudo-dot structure when the mass parameter is taken as m_0c^2+V(r) in the exact scenario and as m_0c^2 in the approximate scenario.
There is no magnetic field, and the pseudo-dot potential energy is given by the following function
V(r)=D_ e(r/r_0-r_0/r)^2 ,
where D_e and r_0 are the well-width energy parameter and the turning-point separation, respectively.
The exact solutions are valid for the Klein-Gordon equation in Eq. (<ref>), which reduces to a Schrödinger-type equation of the following form
[∇^2+E^2-m_0^2-2(E+m_0)V(r)]ψ_n(r⃗)=0
Here, the mass distribution is taken as M=m_0+V(r). Within the framework of the approximate scenario, it is also obtained that the Klein-Gordon equation reads
[∇^2+E^2-m_0^2-2EV(r)+V^2(r)]ψ_n(r⃗)=0,
here the rest-mass energy is taken constant, M=m_0. Putting the pseudo-dot potential into Eq. (<ref>), fourth-order terms are obtained via r^4 + 1/r^4, and I then propose an approximation with a Taylor expansion near r_0. Note that Eqs. (<ref>) and (<ref>) also lead to Schrödinger's form
{∇^2+(ϵ-Φ)}Ψ(r⃗)=0,
where the energy parameter for the variable-mass and constant-mass cases reads:
ϵ_n=
E_n^2-m_0^2, variable mass
E_n^2-m_0^2+4E_nD_ e+6D_ e^2, constant case
Also, the effective potential can be obtained in the following form
Φ=
(E_n+m_0)V(r), variable mass
D_ e(2E_n+4D_ e)(r^2/r_0^2+r_0^2/r^2)-D_ e^2(r^4/r_0^4+r_0^4/r^4), constant case
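As a consistency check of the constant-mass expressions above (a small symbolic sketch; here E stands for E_n and D_e, r_0 are kept symbolic):

import sympy as sp

E, m0, D, r, r0 = sp.symbols("E m_0 D_e r r_0", positive=True)
V = D * (r / r0 - r0 / r) ** 2
expr = sp.expand(E**2 - m0**2 - 2 * E * V + V**2)
eps = E**2 - m0**2 + 4 * E * D + 6 * D**2
Phi = D * (2 * E + 4 * D) * (r**2 / r0**2 + r0**2 / r**2) - D**2 * (r**4 / r0**4 + r0**4 / r**4)
print(sp.expand(expr - (eps - Phi)) == 0)   # True: eps_n - Phi reproduces E^2 - m0^2 - 2EV + V^2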
From Eqs. (<ref>) and (<ref>), we should have a function β_3(E_n, ℓ) for ℓ=0, 1, 2, 3, …. Due to the centrifugal term ℓ(ℓ+1)/r^2, the obtained eigenvalues are labeled by (n, ℓ). Assuming the rest-mass energy (ħ=c=1) and the separation distance take the values m_0 =1.0 fm^-1 and r_0=1.0 fm, Figure <ref> shows the exact energy eigenvalues of the relativistic spin-zero particle for the well-width parameters D_e=1, 2 and 3 fm^-1. With increasing D_e, we can see that the eigenvalues rise to about 5.84 fm^-1. One can also see that the energy eigenvalues of the excited states increase with increasing quantum numbers n and ℓ. Because of the effective Schrödinger equation given in Eq. (<ref>), the energy spectrum shifts to larger values with increasing D_e, which represents a narrower quantum well in the effective potential energy. It can be seen that the increasing energies exhibit positive values in Figure <ref>, so the Schrödinger formalism shows this behavior for the given energy levels. Moreover, the radial probability distribution via |u_nℓ|^2, involving the confluent hypergeometric functions at D_e=1.0 fm^-1, can be plotted near r_0=1.0 fm, so the ground state has its maximum near 1.0 fm. The obtained densities show that the spherical well exhibits the expected distributions in the presence of spinless relativistic energies.
For a given constant rest mass in Equation (<ref>), approximate solutions are also needed near a spatial point. Still, we can choose the same values as in the exact procedure, so the expected eigenvalues lie in the range E<m_0, where m_0c^2/(ħ c)=m_0 in units of fm^-1. Within Eq. (<ref>), fourth-order terms occur because the variable V^2(r) gives rise to r^4+1/r^4. Here, we can consider that
U(r)=r^4+r^-4, U_a(r)=A_0+A_1r^2+A_2r^-2.
These functions can be compared for D_e=5.0 fm^-1 at r_0=1.0 fm. We can see from Figure <ref> that, besides U(r), the form U_a(r)=A_0+A_1x+A_2x^-1 is also valid near r=r_0, i.e., x=1 with x=r^2/r_0^2, and then we can easily obtain that
x^2+1/x^2≃ A_0+A_1x+A_2/x,
where A_0=-6 and A_1=A_2=4 are the coefficients obtained from the Taylor expansion near x=1.
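A brief numerical check of this three-term approximation (a minimal sketch; the sampling window around x=1 is an arbitrary choice):

import numpy as np

x = np.linspace(0.8, 1.2, 5)
exact = x**2 + 1 / x**2
approx = -6 + 4 * x + 4 / x
print(np.max(np.abs(exact - approx)))   # small near x = 1; the error is of third order in (x - 1)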
In this way, the energy eigenvalues can be taken to depend on the approximation constants A_i, i=0, 1, 2, so a similar differential equation is obtained in this case. The eigenvalues obtained from Eq. (<ref>) are smaller than the constant mass energy m_0=1.0 fm^-1. As can be seen in Figure <ref>, these values are in good agreement with E<m_0. Furthermore, the numerical values for the well-width parameters D_e=10, 20 and 30 fm^-1 decrease with increasing D_e, since the considered effective potential in the approximate scenario behaves as a "quantum barrier".
In Figure <ref>, we consider wide and narrow barriers, so we expect that the quasi-nonrelativistic eigenvalues denoted by ϵ_n increase with increasing barrier height within a narrow quantum well. Here, the obtained quasi-eigenvalues are 613.894, 2414.731 and 5414.188 fm^-2 for the width parameters 10, 20 and 30 fm^-1, respectively. These values correspond to the relativistic eigenvalues 0.323, 0.172 and 0.113.
§ CONCLUSION
In this work, Schrödinger-type equations in exactly and approximately solvable forms, represented by the Klein-Gordon equation in the relativistic spinless regime, have been studied through Kummer's eigenvalues. As an analytical approach with acceptable solutions satisfying the radial range (0, ∞), it is analysed how Kummer-type solutions exhibit n-dependent solutions for n=0, 1, 2, 3, …. A key property is to write the proposed solution in the form
u(x)=x^-|σ|f(x); f(x)=x^σ_0g(x) for σ_0>|σ|,
where x denotes the radial variable expressed in terms of the radius r. To the best of our knowledge, Mie-type variables lead to confluent hypergeometric functions even when the approximation is valid only near the equilibrium point, where V(r=r_0)=0. Thus, Kummer's differential equation yields the eigenvalue spectra through the real-valued Laplace transform in the s-domain. Therefore, Kummer's orthogonality and the eigenvalue spectra describe the probability distribution in a certain spatial region.
In the presence of the Klein-Gordon equation with wave functions built from confluent hypergeometric polynomials, two solutions providing the corresponding energy levels can be distinguished. First, the exact solution for pseudo-dot quantum confinement yields E>m_0 for the spinless Klein-Gordon equation; these solutions arise from the radial mass distribution m(r)c^2=m_0c^2+S(r) with V(r)=S(r). Second, the approximate scenario leads to E<m_0, because the rest mass does not change (i.e., S(r)=0 while V(r)≠0) on the interval (0, 2 fm). In this case the considered region also admits a close approximation to the effective variable r^4+r^-4. The probability distribution remains valid over the considered range; moreover, the Schrödinger transformation with the effective potential explains why the eigenvalues increase (exact form) or decrease (approximate scenario) with increasing well width, so that Kummer-type solvable models can be used in analytical form.
§ DATA AVAILABILITY STATEMENT
No Data associated in the manuscript.
§ CONFLICT OF INTEREST
The author declares that he has no conflict of interest.
|
http://arxiv.org/abs/2307.04547v1 | 20230710132556 | Spectral Observables and Gauge Field Couplings in Causal Dynamical Triangulations | ["Giuseppe Clemente", "Massimo D'Elia"] | hep-th | ["hep-th"] | |
http://arxiv.org/abs/2307.05716v1 | 20230708074507 | Hierarchical defect-induced condensation in active nematics | ["Timo Krüger", "Ivan Maryshev", "Erwin Frey"] | cond-mat.soft | ["cond-mat.soft"] |
Timo Krüger^a,*, Ivan Maryshev^a,*, Erwin Frey^a,b,1
[a] Arnold Sommerfeld Center for Theoretical Physics (ASC) and Center for NanoScience (CeNS), Department of Physics, Ludwig-Maximilians-Universität München, Theresienstrasse 37, 80333 Munich, Germany
[b] Max Planck School Matter to Life, Hofgartenstraße 8, 80539 Munich, Germany
[*] T.K. and I.M. contributed equally to this work.
[1] Corresponding author: [email protected]
Hierarchical defect-induced condensation in active nematics
August 12, 2023
===========================================================
Topological defects play a central role in the formation and organization of various biological systems.
Historically, such nonequilibrium defects have been mainly studied in the context of homogeneous active nematics.
Phase-separated systems, in turn, are known to form dense and dynamic nematic bands, but typically lack topological defects.
In this paper, we use agent-based simulations of weakly aligning, self-propelled polymers and demonstrate that, contrary to the existing paradigm, phase-separated active nematics form -1/2 defects. Moreover, these defects, emerging due to interactions among dense nematic bands, constitute a novel second-order collective state. We investigate the morphology of defects in detail and find that their cores correspond to a strong increase in density, associated with a condensation of nematic fluxes. Unlike their analogs in homogeneous systems, such condensed defects form and decay in a different way and do not involve positively charged partners.
We additionally observe and characterize lateral arc-like structures that separate from a band's bulk and move in the transverse direction.
We show that the key control parameters defining the route from stable bands to the coexistence of dynamic lanes and defects are the total density of particles and their path persistence length.
We introduce a hydrodynamic theory that qualitatively recapitulates all the main features of the agent-based model, and use it to show that the emergence of both defects and arcs can be attributed to the same anisotropic active fluxes.
Finally, we present a way to artificially engineer and position defects, and speculate about experimental verification of the provided model.
§ INTRODUCTION
The characteristic features of a nematic liquid crystal are the emergence of long-range orientational order and the occurrence of half-integer topological defects, which, however, are annealed at thermodynamic equilibrium <cit.>.
The dynamics of its nonequilibrium counterpart, an active nematic <cit.>, is in contrast governed by the persistent creation and annihilation of pairs of topological defects with opposite charges, leading to a dynamic steady state commonly referred to as active turbulence <cit.>.
Dense gel-like mixtures of microtubules (cytoskeletal filaments) and kinesins (molecular motors) that cause relative sliding between microtubules have become experimental platforms for studying the formation, dynamics, and annihilation of these toplogical defects <cit.>.
The observed complex defect dynamics have been investigated using hydrodynamic theories <cit.>.
The basic insight derived from such studies is that topological defects constantly generate active flow in momentum-conserving systems <cit.> or active flux in momentum non-conserving systems <cit.>.
Another experimental model system for active nematics is the actomyosin motility assay, in which actin filaments actively glide over a lawn of myosin motor proteins, performing a persistent random walk with constant speed <cit.>.
These systems exhibit phase separation into dense polar-ordered regions and dilute disordered regions, which is further corroborated by numerical analyses of corresponding theoretical models <cit.>.
Tuning the interaction between actin filaments by the addition of polyethylene glycol led to the emergence of a dynamic coexistence of ordered states with fluctuating nematic and polar symmetry <cit.>, which has been explained by pattern-induced symmetry breaking <cit.>. Systems exhibiting dense, purely nematic lanes have been thoroughly investigated by both simulations and hydrodynamic theories <cit.>.
As for half-integer topological defects, the common paradigm states that they are absent in dilute self-propelled active nematics <cit.>, but fundamental exclusion criteria for their existence have not been given.
In fact, no steady-state topological defects have yet been found in this subclass of strongly phase-separated active matter.
So far, it has only been observed that transient defects can occur in models with weak density inhomogeneity during the coarsening process <cit.>.
Moreover, toy models inspired by dilute nematic systems without self-propulsion can exhibit defect formation <cit.>.
However, the authors attest that the connection of their phenomenological theory to existing experimental systems is tenuous.
Here we investigate dilute active nematics for the presence of defects using an agent-based model of “weakly-aligning self-propelled polymers” (WASP) which has been shown to faithfully reproduce the behavior of real actomyosin motility assays on all relevant length and timescales including pattern formation processes and the topology of the phase diagram <cit.>.
This allows us to leverage these agent-based simulations as an in-silico experimental system with which to discover new phenomena.
We show that the two hitherto seemingly incompatible phenomena — phase separation and topological defects — are actually closely linked in weakly interacting active nematics.
In particular, we characterize a subclass of topological defects associated with the compression of nematic fluxes, which are similar to phenomena predicted in conceptual models <cit.>, albeit in a different context.
These defects appear as characteristic collective excitations in a novel nonequilibrium steady state. They are in dynamic equilibrium with nematic lanes from which they emerge and into which they disassemble.
Additionally, we find another type of topologically charged structure, filamentous arc ejections (FAEs) — elongated arc-shaped polymer bundles that detach from nematic bands — remotely resembling +1/2 defects.
To elucidate the mechanisms underlying these phenomena, we also introduce a hydrodynamic theory, building on previously published models <cit.>.
Exploiting the respective strengths of these two complementary theoretical approaches, we uncover a close relationship between the dynamics of phase-separated nematic bands, formation of topologically charged structures, and the associated condensation phenomena.
§ RESULTS
§.§ Simulation setup
We use agent-based simulations that emulate the dynamics of weakly interacting self-propelled polymers (WASP) of fixed length L on two-dimensional surfaces building on earlier work <cit.>; refer to the SI for further details on the algorithm.
Each polymer consists of a tail pulled by a tip that follows a trajectory corresponding to a persistent random walk with persistence length L_p.
Upon collision of a polymer tip with the contour of another polymer, a weak alignment torque is assumed to act that changes its direction of motion [Fig. <ref>(a)].
Here we use a purely nematic alignment interaction [Fig. <ref>(b)] whose strength is set by the parameter α_n.
Additionally, a small repulsion force F acts on polymer tips that overlap with other polymers.
Here we are interested in systems that have a collision statistics with purely nematic symmetry [Fig. <ref>(b)].
Figure <ref>(c) shows the phase diagram of such a weak nematic as a function of the average polymer density ⟨ρ⟩ L^2
and path persistence length L_p; hereafter ⟨ ... ⟩ denotes spatial averaging.
It exhibits an isotropic-nematic transition from a disordered homogeneous phase to a nematically ordered phase.
The phase boundary ρ_n (L_p) approximately scales as L_p^-1; refer to
the SI for details.
Thus, when the phase diagram is redrawn as a function of L_p and the spatially averaged normalized density ⟨ϕ⟩ = ⟨ρ⟩ / ρ_n, the phase boundary essentially becomes a horizontal line [inset of Fig. <ref>(c)].
§.§ Dense topologically charged structures
As expected for nematically interacting systems, our simulations show isolated nematic lanes that exhibit strong bending fluctuations on large length and time scales (cf. Movie S1 SI) caused by lateral instabilities <cit.>.
In our simulations, in addition to these typical nematic lanes, we also discover distinct types of topologically charged structures.
One class of these consists of three-armed filamentous structures containing a topological defect with charge -1/2 at their center [Fig. <ref>(a)].
They are typically formed when three curved nematic lanes — with their convex sides facing each other — meet and condense into a topological defect with a high-density core region [Fig. <ref>(b)]; we do not observe “collisions” of four lanes.
Unlike defects in non phase-separated active nematics, these condensed topological defects (CTDs) do not have a directly corresponding positively charged partner.
Instead, they are surrounded by an extended topologically charged region with a dispersed positive charge, as can be seen in Fig. <ref>(a) (lower right panel), which depicts the topological charge density as defined in Refs. <cit.>.
Moreover, our simulations show that the active nematic flux is gradually compressed as the triple junction of the nematic lanes (defect core) is approached
[Fig. <ref>(a), top right panel].
This leads to a reduction in lane width and a corresponding increase in density, which reaches a maximum in proximity of the core.
These three-armed topological defects are dynamic structures that are constantly being dissolved and reassembled.
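The topological charge density shown in the lower right panel can be measured directly from the director field. As an illustration, the following minimal sketch uses one standard plaquette-based discretization (the precise definition employed in the cited references may differ); angle jumps are wrapped modulo π because the director is head–tail symmetric, and a CTD core appears as a plaquette carrying charge -1/2:

```python
# Sketch: nematic topological charge per grid plaquette from the director
# angle theta, e.g. theta = 0.5 * arctan2(Q_xy, Q_xx) on a periodic grid.
import numpy as np

def nematic_charge_density(theta):
    def wrap(d):                                  # director defined modulo pi
        return (d + np.pi / 2) % np.pi - np.pi / 2
    t00 = theta
    t10 = np.roll(theta, -1, axis=0)
    t01 = np.roll(theta, -1, axis=1)
    t11 = np.roll(t10, -1, axis=1)
    winding = (wrap(t10 - t00) + wrap(t11 - t10) +
               wrap(t01 - t11) + wrap(t00 - t01))
    return winding / (2.0 * np.pi)                # +-1/2 at defect cores
```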
A second class of structures we observe consists of lateral filamentous arcs that separate from the bulk of a straight nematic band and eventually move in the transverse direction.
A time trace of such a filamentous arc ejection (FAE) is shown in Fig. <ref>(c).
These structures have similarities to +1/2 defects: they are “curved” and they always emanate in the direction of their convex side.
Somewhat similar observations have been made in continuum models constructed for nematic particles with velocity reversals <cit.>. However, the authors did not address the properties of these structures or the reasons underlying their formation.
While there are certainly similarities on a superficial phenomenological level between FAEs and these structures, the underlying mechanisms and nature of these structures may be quite different.
Having discovered these collective topological structures in our in-silico experiments, we sought to explore how their emergence is affected by a change of parameters.
However, since the lateral instabilities of nematic bands required for the formation of CTDs (cf. section “From CTDs to FAEs and bands” below) occur only on very long time scales, a systematic investigation of a phase diagram in agent-based simulation is numerically prohibitively demanding.
Therefore, we sought an alternative way to explore the spatiotemporal dynamics of the systems that would enable us to dissect the processes underlying the formation of CTDs and FAEs.
As explained next, we achieved this through constructing a hydrodynamic approach that captures all the main features of our agent-based simulation setup.
§.§ Hydrodynamic model provides access to the phase diagram
To this end we used the standard Boltzmann-like approach
(see SI).
However, as discussed below, this model was insufficient to explain the emergence of half-integer defects and was therefore generalized to include density-dependent corrections.
By analogy with passive model C in the Hohenberg-Halperin classification scheme <cit.> we formulate a hydrodynamic model in terms of a density and an order parameter field.
For an active nematic, these are the (normalized) polymer density
ϕ = ∫dθ P(θ)/ ρ_n,
and the traceless and symmetric tensor
Q_ij = ∫dθ P(θ)(2n_in_j- δ_ij) (nematic order parameter), where the unit vector 𝐧= (n_x,n_y)=(cos θ, sin θ) defines the local polymer orientation and P(θ) denotes the probability density for the polymer orientation θ.
The eigenvector associated with the larger of the two eigenvalues of the Q-tensor can be viewed as depicting the average orientation of the polymers.
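For concreteness, such coarse-grained quantities can be extracted from agent configurations as in the following minimal sketch (only the definition of Q_ij above is taken from the text; normalizing by the number of polymers in the coarse-graining cell is our own illustrative choice):

```python
# Sketch: local nematic tensor, scalar order and director from the polymer
# orientations theta_n found inside one coarse-graining cell.
import numpy as np

def local_Q(thetas):
    nx, ny = np.cos(thetas), np.sin(thetas)
    Qxx = np.mean(2.0 * nx * nx - 1.0)        # = <cos 2 theta>
    Qxy = np.mean(2.0 * nx * ny)              # = <sin 2 theta>
    S = np.hypot(Qxx, Qxy)                    # larger eigenvalue (scalar order)
    director = 0.5 * np.arctan2(Qxy, Qxx)     # orientation of its eigenvector
    return Qxx, Qxy, S, director
```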
Unlike classical model C, however, a hydrodynamic model for active nematics must be intrinsically nonequilibrium in character and its dynamics can not be determined by the gradient descent in a single free-energy landscape.
Nevertheless, using the analogy to the dynamics near thermal equilibrium, some intuition can be gained for the design of the model.
As we discuss in more detail below, part of the system's dynamics can be understood in terms of two separate effective free-energy functionals for the non-conservative Q-tensor (F_Q) and the conservative density field (F_ϕ), similar to related nonequilibrium models discussed recently <cit.>.
Mass-conservation requires that the density obeys a continuity equation ∂_t ϕ = - ∂_i J_i.
In general, for symmetry reasons, the current must be the gradient of a scalar quantity and a tensorial quantity containing the Q-tensor.
Similar to model B, the scalar component is of the form
J_i^iso = -∂_i μ (ϕ)
with chemical potential
μ (ϕ) = ν (ϕ) ϕ.
Here, the first and second terms of ν (ϕ) = λ^2+ν_ϕϕ account for motility-induced effective diffusion with the diffusion constant λ^2 ∝ L_p^2 <cit.>, and for steric repulsion due to excluded-volume interactions <cit.>, respectively. The latter contribution represents the density-dependent correction.
For the tensorial part, we write J_i^aniso = -∂_j [χ(ϕ) Q_ij], which again is assumed to contain motility- and interaction-induced parts: χ (ϕ) =λ^2+χ_ϕϕ. Similar as above, the latter term represents the density-dependent correction motivated by theories for active nematics <cit.>, and it is controlled by the phenomenological parameter χ_ϕ.
It will turn out that this anisotropic term leads to phase separation, since it causes compression in the direction perpendicular to the axis of the local orientational order.
Taken together, one gets
∂_tϕ = ∂_i∂_j[ ν(ϕ)ϕ δ_ij + χ(ϕ) Q_ij ] .
The isotropic flux (first term) can be written in terms of an effective free-energy functional F_ϕ= ∫d^2 x (1/2λ^2 ϕ^2+1/3ν_ϕϕ^3).
In contrast, however, the anisotropic flux (second term in (<ref>)) violates time-reversal symmetry <cit.>.
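(As a brief consistency check of the former statement, note that δF_ϕ/δϕ = λ^2ϕ + ν_ϕϕ^2 = (λ^2+ν_ϕϕ)ϕ = ν(ϕ)ϕ = μ(ϕ), so that J_i^iso = -∂_i δF_ϕ/δϕ indeed has the model-B form.)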
We assume the time evolution of the nematic tensor to be of the form
∂_t Q_ij = -[ δ F_Q/δ Q_ij ]^st = -[ δ F_Q/δ Q_ij - (1/2) δ_ij Tr(δ F_Q/δ Q_ij) ] ,
which corresponds to a gradient dynamics (model A) determined by the effective free-energy functional F_Q; here and in the following [...]^st denotes the traceless and symmetric part of a tensor.
We have chosen the timescale such that the friction coefficient in the gradient dynamics is set to 1.
The effective free-energy functional has a standard Landau-deGennes (LdG) part <cit.> responsible for an isotropic to nematic transition, but also includes a coupling between density gradients and the orientation of polymers as in inhomogeneous active nematics <cit.>,
F_Q = ∫ d^2 x ( (1/2)[ (1-ϕ)Q^2 + (1/2)β (Q^2)^2 + κ (∂_jQ_ij)^2 ]
- Q_ij[ ω ∂_i∂_jϕ + ω^a(∂_iϕ)(∂_jϕ) ] ) .
The LdG free-energy density in terms of the order parameter Q^2= Q_klQ_kl describes a nematic ordering transition at the critical density ϕ_c = 1 with the gradient term playing the role of a generalised elasticity.
The stiffness coefficient (or Frank constant) κ also contains two contributions, one from the motility of the polymers <cit.>, and the other due to interactions <cit.>: κ(ϕ) = (1/2)λ^2 + κ_ϕ⟨ϕ⟩.
Note that the last term — the density-dependent correction to elasticity — is linearised around the mean value of density ⟨ϕ⟩
(see SI).
The second line in (<ref>) takes into account the coupling between density gradients and nematic order, and can be derived solely on the basis of symmetry considerations.
The functional derivatives of F_Q with respect to the nematic tensor correspond to “interfacial torques” <cit.> in the equation of motion for the nematic tensor.
They rotate the director at the interface between high- and low-density domains, where the gradients of ϕ are the strongest.
The lowest-order coupling — and the associated “aligning torque” <cit.> ω [∂_i∂_jϕ]^st —
is iconic for active nematics <cit.>.
It is responsible for the destabilization of straight nematic lanes, eventually resulting in lane undulations (or other types of chaotic behavior associated with “dry active turbulence” <cit.>).
In our case, this term is due to self-advection (ω=λ^2, see
SI)
but it can be considered as “diffusive” since anisotropic diffusion of particles leads to an analogous contribution.
Interaction between the polymers yields the next-order couplings in (<ref>).
On symmetry grounds there are two different terms quadratic in ϕ:
[ϕ ∂_i∂_jϕ]^st and [(∂_iϕ) (∂_jϕ)]^st; both can also be obtained by explicitly coarse-graining microscopic models for interacting active polymers <cit.>.
The former recalls the diffusive ω-term (especially after the linearization around ⟨ϕ⟩) and therefore is ignored here.
The latter is associated with torque, which is bilinear in the density gradients ω^a [(∂_iϕ) (∂_jϕ)]^st, providing an effective liquid-crystalline “anchoring” <cit.> (or preferred orientation) of the nematic director field with respect to the density gradients.
The parameter ω^a is taken to be negative to ensure tangential anchoring, implying that polymers tend to orient perpendicular to the density gradients (or parallel to the boundary of dense lanes).
For simplicity, we ignore additional non-linearities in the equation of motion for the Q-tensor. Such contributions are considered elsewhere <cit.> where they are typically regarded as a modification to the elasticity terms.
Taken together, Eqs. (<ref>, <ref>) are a generalization of the active model C <cit.>, which was originally introduced for non-self-propelled biofilaments in the presence of molecular motors. The major difference is that the model now explicitly includes self-propulsion. Moreover, by including density-dependent terms, it reproduces the results of the agent-based simulations (see discussion below) and is therefore quantitatively linked to the actomyosin motility assay. Finally, it possesses fewer degrees of freedom, since most of the terms are rigorously derived and are controlled by the same parameter (λ).
We consider
ν_ϕ, χ_ϕ, κ_ϕ, ω and ω^a as phenomenological parameters and solve the equations of motion numerically.
This model robustly reproduces the results obtained in the agent-based simulation to a very high degree of fidelity and for a large range of parameters.
It exhibits CTDs and FAEs whose structure, topological charge, and formation process are very similar to the ones observed in WASP; cf. Fig. <ref>(d)-(f). Therefore, in the following we use this hydrodynamic approach to analyse and underpin the main mechanisms of formation of CTDs and FAEs.
In summary, our model (and the active model C <cit.>) differs significantly from the standard theory of active nematics <cit.>, since it contains density-dependent corrections and higher order terms. Without such modifications the standard active nematic model is unable to reproduce CTDs.
§.§ From CTDs to FAEs and bands
Encouraged by the promising initial results shown by our hydrodynamic theory, we took advantage of the relative ease with which it can be used to determine the long-term behavior, and generated a (λ, ⟨ϕ⟩) phase diagram [Fig. <ref>(a)].
As can be seen, at low values of λ and ⟨ϕ⟩, CTD formation dominates, while in areas of large λ and ⟨ϕ⟩ stable nematic lanes emerge.
Between these regions lies a band of parameters where the system mainly exhibits FAEs.
To test whether these findings obtained with the hydrodynamic model also hold for our agent-based simulations, we determined the average number of CTDs present at a given time in the agent-based simulation along one-dimensional lines of the (L_p, ⟨ϕ⟩) phase space — one along a constant value of ⟨ϕ⟩ and one along a constant value of L_p.
Reassuringly, the results for the agent-based simulations and hydrodynamic model are in good agreement [Figs. <ref>(c) and (d)].
We further checked the mean number of FAEs present in the agent-based simulations as a function of L_p [Fig. <ref>(e)];
see SI for details.
The observed decline in FAE frequency with increasing L_p is consistent with the observations in the hydrodynamic model, where at high λ no FAEs occur [cf. Fig. <ref>(a)].
Taken together, these results demonstrate that not only do the agent-based and hydrodynamic models share the same collective states, the frequency of these states also shows the same dependence on parameter changes.
The above relationships between model parameters and the occurrence of CTDs or FAEs can be related to the overall dynamic behavior (in short, “activity”) of the system.
For both hydrodynamic and agent-based approaches, three distinct, qualitatively different dynamic states can be distinguished [Fig. <ref>(b)].
The first of these is associated with very strong bending undulations of nematic lanes.
It occurs at low values of L_p/λ or ⟨ϕ⟩ and is characterized by constant rearrangement of lanes [Movies S2, S3, S7, SI, Figs. <ref>(a), (b), (d) and (e)]:
Lanes frequently collide leading to the formation of CTDs. In addition, system-spanning configurations of straight (or only slightly curved) lanes [cf. Figs. <ref>(c) and (f)], which may form randomly, are disrupted by undulations within a fairly short time.
This is consistent with the observation that CTDs are the predominant phenomenon at low values of L_p/λ and ⟨ϕ⟩, respectively [Figs. <ref>(c), (d)].
Notably, FAEs can also be formed in this parameter regime following the emergence of short-lived system-spanning nematic lanes.
The second dynamic state can be found at intermediate values of L_p/λ or ⟨ϕ⟩.
In this regime, bending undulations are fewer and less pronounced, resulting in straight (or only slightly curved) and system-wide lanes that are stable over long periods of time:
Elongated openings often appear in the lateral areas of the lanes, which develop into filamentous arcs
[Movies S4, S8, SI, and Figs. <ref>(c),(f) and middle panel of Fig. <ref>(b)].
This is in accordance with the observation that FAEs are the predominant phenomenon observed at intermediate values of L_p/λ or ⟨ϕ⟩ [Figs. <ref>(a) and (c)-(e)].
The third dynamic state is associated with vanishing bending undulations at high values of L_p/λ or ⟨ϕ⟩. Here, straight and system-spanning configurations are stable and no openings develop in their lateral regions [Movies S5, S9, SI and right panel of Fig. <ref>(b)]. Consequently, neither FAEs nor CTDs are observed [Figs. <ref>(a) and (c)-(e)].
The tendency just discussed for the bending undulations to become weaker as the values of L_p/λ or ⟨ϕ⟩ are increased from low to high values can be rationalized by the following heuristic arguments.
With increasing L_p/λ the Frank constant <cit.> grows, and the effective elasticity (or collective stiffness of the polymers) yields stronger penalties for orientational distortions.
As a result, the bending instability weakens, as described above.
The hydrodynamic model has allowed us to verify this hypothesis: upon varying the elastic constant κ (independently from other parameters), we observe that weak elasticity favors the formation of CTDs, while a strong one yields stable bands.
As the density ⟨ϕ⟩ is increased (for a given and constant system size), a further effect contributing to higher stability of lanes is that a system-spanning nematic band occupies a growing fraction of space, i.e., the bands become wider while the bulk density remains largely the same [cf. SI].
Since broader bands are less susceptible to a bending instability, an increase of ⟨ϕ⟩, as discussed above, leads to the decay of defect formation.
An interesting aside can be mentioned here in the context of varying values of ⟨ϕ⟩: for very small densities, close to the onset of order, both models show a drop in the observed CTD number [Fig. <ref>(d)], which is likely due to the fact that there is less mass within the ordered phase, and therefore not enough mass to form multiple curved bands necessary for lanes to collide and CTDs to be created.
Overall, the formation of condensed defects and filamentous arc ejections are both strongly linked to the stability of the nematic lanes, i.e., to their propensity to exhibit a bending instability <cit.>, which, in turn, can be externally controlled by tuning either L_p/λ or ⟨ϕ⟩.
§.§ Detailed structure of CTDs and FAEs
To better understand the structure of the CTDs forming in agent-based simulations, we studied the polymer flows through them in detail.
To this end, we tracked the motion of each polymer as it passed through a condensed defect.
This enables us to distinguish the polymer flows from one arm of a defect to another and to investigate whether there is a relationship between the lateral position of individual polymers and their eventual direction of turning.
Fig. <ref>(a) illustrates the flux from one arm of a defect (arm 1) into the two other arms (arms 2 and 3) [see Movie S6 SI for a representative flux recorded in an agent-based simulation].
The flux in each defect arm gets strongly compressed laterally in the vicinity of a defect core and then splits almost exactly at the centerline of the lane, while undergoing a sharp change in direction [Fig. <ref>(a)].
Symmetrically the same flux enters the defect from arms 2 and 3, resulting in the nematic flow structure depicted in Fig. <ref>(a) and (c).
This also shows that the flows begin to mix again only at a greater distance from the center of the defect [cf. color mixing in Fig. <ref>(b) and (c)]. Hence, the overall topology often present at the birth of the defect [Fig. <ref>(b) and (e)] is preserved in the flow structure of the fully formed CTD as three barely intermingling nematic flows.
In addition, we investigated whether the velocity of the polymers is affected as they move through a CTD. As can be seen from Fig. <ref>(e), their speed remains almost unchanged and only a slowdown in the per mil range is observed. One can see two insignificant velocity drops corresponding to regions with the maximal density of polymers. Interestingly, in the immediate vicinity of the core of the defect, the particle velocity briefly returns to the average value, corresponding to particles inside the nematic band.
We also studied the temporal evolution of FAEs and their occurrence over time. To this end, we periodically projected the density of a system in a configuration that allows the formation of FAEs onto one-dimensional slides and stacked them to obtain kymographs (see SFig. 5 SI).
These reveal that the detachment of arcs accelerate over time.
Further, they show that in the hydrodynamic model, due to no noise being present, FAE events occur at regular intervals, whereas in the agent-based simulations they form stochastically.
Having established the existence of CTDs and FAEs, and characterized them in our agent-based in-silico experimental system, and having successfully introduced a hydrodynamic theory that faithfully reproduces the results of the simulations as well as providing access to the phase space of the observed pattern, we asked: why are these phenomena observed? What are the underlying mechanisms responsible for their formation?
To answer these questions, we leveraged the ability of the hydrodynamic model to provide access to single terms of its defining equations [Eqs. (<ref>,<ref>)].
This analysis reveals that both the formation of dense defects and the movement of arcs have the same root cause, namely the anisotropic (“curvature-induced”) density flux <cit.>, described by -∂_j(χ Q_ij) in Eq. (<ref>) in the hydrodynamic model.
This can be understood by plotting -∂_j(χ Q_ij) in the region of an FAE or a CTD; see the left and right panels of Fig. <ref>(d), respectively.
As can be seen, on opposite sides of the arcs the amplitudes of the fluxes are distinct. An effective “active force” acting on the concave side is greater than that on the opposite side, which leads to the movement of the bent band (or arc) in the corresponding direction [Fig. <ref>(d), left panel].
When three lanes meet, the same curvature-dependent fluxes concentrate polymers in the core of the resulting defect [Fig. <ref>(d), right panel]. This condensation is eventually balanced by the isotropic part of (<ref>) and particularly by steric repulsion of polymers.
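For readers who wish to reproduce this diagnostic, a minimal sketch of the anisotropic flux computed on a snapshot of the fields is given below (assumptions: ϕ, Q_xx and Q_xy are stored on a periodic grid, χ(ϕ) is the form defined above, and the grid spacing and parameter values are placeholders taken from the parameter list):

```python
# Sketch: anisotropic ("curvature-induced") flux J_i = -d_j [ chi(phi) Q_ij ].
import numpy as np

def anisotropic_flux(phi, Qxx, Qxy, lam2=1.0, chi_phi=0.4, dx=0.5):
    def dxc(f): return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * dx)
    def dyc(f): return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * dx)
    chi = lam2 + chi_phi * phi
    Jx = -(dxc(chi * Qxx) + dyc(chi * Qxy))        # -d_j (chi Q_xj)
    Jy = -(dxc(chi * Qxy) + dyc(-chi * Qxx))       # Q_yy = -Q_xx (traceless Q)
    return Jx, Jy
```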
To test this hypothesis, we set the excluded volume force F (see SI)
to zero in our agent-based simulations.
Observations in this case indicate that the formation of CTDs is reduced and that, when they form, they decay faster.
Thus, we conclude that formation of the dense defects is predominantly determined by the interplay between two counteracting processes: isotropic and anisotropic density fluxes.
In addition to the “emergent” way of obtaining CTDs just studied, in which spontaneously formed bands interact randomly and spontaneously condense into defects at stochastically distributed positions, we have sought a way to overcome this limitation by artificially generating and positioning CTDs.
In contrast to non-phase-separated systems — where such an endeavor would involve the forced separation of a defect pair — the way CTDs form spontaneously [Figs. <ref>(b),(e)] suggests that finding a way to position and form nematic lanes in suitable configurations could trigger the creation of a CTD.
In combination with the observation of polymer fluxes near a defect [Fig. <ref>(h)], we hypothesized that placing active polymer sources in a three-strand configuration should trigger the formation of three lanes that immediately condensate into CTDs.
To test this prediction, we implemented the possibility to add such “active particle throwers” into our agent-based simulations and positioned them as described.
Indeed, we found that this way a CTD can be formed at a predetermined location where it persists for an arbitrary amount of time, cf. Fig. <ref>(h) and movie S10 SI.
This may be of potential application in cases where topological defects and/or high-density regions (in a low density background) need to be created and controlled with high accuracy.
§ DISCUSSION
In summary, we have used a combination of agent-based simulations and hydrodynamic theory to study pattern formation in phase-separated nematic active matter.
Our analysis shows that topological defects and nematic lanes, previously considered as two distinct and separate collective states, coexist and are tightly coupled.
We investigated the structure, formation and decomposition of CTDs in phase-separated systems.
We observed that CTDs appear as characteristic collective excitations in a novel nonequilibrium steady state.
Moreover, the formation process of CTDs constitutes a new hierarchical condensation phenomenon.
Given the previously demonstrated and close connection of our agent-based algorithm to the actin motility-assay, a paradigmatic experimental model system, it is plausible to expect that CTDs will be observed in experimental active matter systems.
Below we discuss these observations step by step.
First of all, we characterized topologically charged structures, such as CTDs and FAEs, for the first time observed in a phase-separated nematic system with self-propulsion.
It is apparent that CTDs differ markedly from defects observed in homogeneous active matter, particularly in the dynamics of their formation and decay and in their spatial structure as well.
To begin with, CTDs concentrate density near their cores and condense nematic fluxes.
This condensation phenomenon is interesting in itself, since the majority of experimental active matter systems show a depletion of particles in -1/2 disclinations, e.g., bacteria embedded in liquid crystals <cit.> and cultures of neural progenitors <cit.>.
Weak density accumulation around the defects has been discussed for slightly inhomogeneous nematics <cit.>;
however, in such systems, the -1/2 defects occur only during the transient and eventually disappear via annihilation with their +1/2 counterparts.
Similar CTDs, among other structures, were observed in parameter sweeps of the phenomenological toy model for mixtures of non-self-propelled microtubules and kinesin motors <cit.>.
However, they were either transient or formed only under very special conditions (elasticity almost zero).
In the latter case, the shape and the mechanism of formation of the defects were clearly different from the CTDs observed here.
In our case CTDs are typically formed by the collision of three curved nematic lanes that condense into a high-density three-armed structure, trapping the previously spatially distributed negative charge [Figs. <ref>(a),(d)].
One might be tempted to compare condensation into CTDs with the process of motility-induced phase separation (MIPS) <cit.>.
However, the fundamental difference between the two is that CTDs are not associated with particle slowdown or prolonged residence of agents in high-density regions.
In addition, the formation of condensed defects provides a condensation mechanism for anisotropically shaped particles, which is not possible with MIPS <cit.>.
We may also argue that in MIPS the agents themselves condense into high-density clusters, while we observe the condensation of dynamical collective states (nematic lanes) into topological defects.
The mutual orientation of defects is also non-typical: we observe that two CTDs can be connected by a single nematic streamline (a filamentous bundle of polymers) [Figs. <ref>(a), <ref>(f)], whereas in non-phase-separated active matter negative half-integer disclinations usually point towards a corresponding defect with the opposite charge +1/2 [Fig. <ref>(g)] <cit.>.
The dynamic processes of defect decay in phase-separated and homogeneous active nematics are also clearly distinct.
In homogeneous systems, pairs of defects with opposite charges annihilate each other <cit.>. In contrast, we find that CTDs do not annihilate with other defects, but disintegrate due to the undulating dynamics of the lanes that connect to the defect arms (Fig. <ref>(g) and Movie S3 SI).
This means that the destruction of a negatively charged defect does not depend on the mobility or dynamics of a positively charged pair, rendering this process potentially easier to control.
In cases where all three lanes that connect to the respective arms have the same bending orientation (curvature of all either clockwise or anti-clockwise with respect to center), this decay takes place via an interesting process in which defects rotate before they dissolve [Fig. <ref>(g)].
Thus, CTDs not only emerge from “collisions” of nematic lanes, but also are connected by, and disassemble into them.
Taken together, this leads to one of the main conclusions of our work, namely that the presence of CTDs constitutes
a novel nonequilibrium steady state which corresponds to a dynamic equilibrium between dense nematic lanes and condensed topological defects coexisting in a diluted background of disordered filaments.
This is reminiscent of other recent findings in active matter, in which a dynamical coexistence between patterns of different symmetry (nematic and polar) was observed <cit.>. During the persistent formation and subsequent decay of CTDs, those defects act as temporal capacitors of negative topological charge (i.e., the curvature on the boundaries of lanes gets temporarily trapped in a very small region of space) which eventually gets released again.
It is well worth reiterating that this is a continuous cyclic phenomenon, not a transient one (unlike the defect formation observed in Ref. <cit.>).
The most important factors that allow this nonequilibrium steady state to occur are probably the following.
First, since CTDs emerge from interaction of curved nematic lanes, a lateral undulation instability of nematic lanes — as exhibited by our agent-based model — is a basic prerequisite for their formation.
Another factor that is likely to favor the formation of CTDs is the nature of the interaction between the polymers (agents), which exhibit only weak mutual alignment and weak steric exclusion.
The latter, in particular, is likely to be a critical factor necessary for the high compression of polymer density during CTD formation.
Starting from a rigorously derived hydrodynamic model for self-propelled particles, we have generalized it to include higher-order phenomenological corrections.
The resulting equations are reminiscent of a conceptual active model C <cit.>, but they include all terms arising from particle self-propulsion, which is an important additional feature here.
In particular, the hydrodynamic model presented here has many fewer degrees of freedom than the toy model presented in Ref. <cit.>, since the coefficients in front of all “standard” terms have a fixed relation among them.
This hydrodynamic theory provides additional insight into the physics of CTDs.
For example, it shows that density gradients play a crucial role through their coupling with the orientation field.
In particular, we consider density-dependent corrections of these coupling terms (controlled by the parameters χ_ϕ and ω^a), which typically disappear due to the linearization of terms around the mean value of density in the majority of hydrodynamic theories.
We want to stress again that these additional terms, which are missing in standard theories of active nematics, are crucial for a proper description of the system, because without them CTDs are no longer observed.
We argue that strong phase separation (and the resulting large density gradients) inevitably amplifies the effect of higher-order coupling terms between the density and the orientation field on the dynamics.
For example, the bilinear anchoring ω^a(∂_iϕ)(∂_jϕ) causes the nematic lines to closely follow the contour of the density field constituting a defect (SFig. 7 SI) and therefore can stabilize defects.
This is in line with the observation that a decrease in ω^a leads to a decrease in the number of defects (a similar conclusion can be inferred from <cit.>). However, in our model CTDs can still be formed even if ω^a=0, χ_ϕ≠0, κ_ϕ≠0.
We firmly believe that the phenomena we found can also be observed in experiments, even though our study is purely theoretical.
The weakly aligning, self-propelled polymer simulation approach on which we base our study has previously shown not only excellent agreement with experiments, but has also predicted previously unknown states that were later found in experiments <cit.>; thus it can be viewed, as elaborated in the introduction, as a computational version of an experimental system.
In light of this, we expect that the most promising experimental model system that could allow observation of the new topological defects we predict is most likely the actomyosin motility assay <cit.>.
This paradigmatic system not only satisfies the requirement of weakly interacting agents <cit.>, but also offers the advantage of high particle numbers.
Previously, not only polar waves <cit.> but also nematic lanes <cit.> have been observed.
This has been achieved by adding depletion agents that enable one to tune the strength as well as the symmetry of the interaction between the actin filaments.
It is conceivable that similar and other changes in the design of the actin motility assay could be used to produce a weak and purely nematic interaction as used in our agent-based simulations.
For example, other depletion agents could be used and/or the properties of the surface to which the driving molecular motors are attached could be changed.
Recently, the latter was indeed shown to have a direct impact on polymer interactions <cit.>.
Alternatively, CTDs could potentially be observed in other types of motility assays <cit.>.
Another intriguing possibility for observing the predicted CTDs is to directly produce a configuration of nematic lanes favoring the formation of CTDs by suitably structuring the surface used in the motility assay <cit.>.
The deep understanding we gained about the formation of CTDs owing to the combination of agent-based simulation and hydrodynamic approach allowed us to find a way to generate them artificially (Fig. <ref>(h) and movie S10 SI). Given the availability of directed particle sources in an experimental system, the position of defects (and therefore the location of a domain of extremely high density) could be controlled with pin-point accuracy.
This provides a new tool for cases where -1/2 defects and/or small regions of high particle density (in an overall dilute system) are needed at specific positions, e.g., to trigger specific processes such as cell death <cit.> at definable points.
Given the strong and controlled nature of the focusing of the fluxes in nematic lanes, this method could be termed “active matter optics”.
Another important insight from the broader perspective of the active matter field is that
phase-separated active matter exhibits a hierarchy of emergent collective states.
Interaction between dense nematic lanes, considered as “first-order” collective states in active nematics, can lead to the formation of “second-order” collective states, here half-integer topological defects with an even higher density.
A phenomenon which one can call “hierarchical, alignment-induced phase-separation”.
It is reasonable to assume that similar effects may lead to new phenomena in other active systems with different symmetry, e.g., polar symmetry with polar waves as first-order collective states <cit.>.
Another class of systems in which higher-order collective states might emerge are active systems that are subject to external gradients <cit.>
or signalling interactions between the agents <cit.>.
A promising extension of our present investigations are active foams.
In this state of active matter, which has recently received increasing attention <cit.>, dense ordered bands assemble into actively reforming cellular networks.
Indeed, in preliminary simulations of the hydrodynamic theory, we have identified parameter regimes in our model where we observe active foams: CTDs are more frequent, interconnected, and persist for longer times.
Thus, the formation of active foams in active nematics seems very plausible, but a thorough investigation of the entire phase space in the agent-based model is computationally demanding and will be reserved for a future study.
§ AUTHOR CONTRIBUTIONS
T.K., I.M., and E.F. designed the research, performed research, analyzed data, and wrote the paper.
§ CONFLICTS OF INTEREST
There are no conflicts to declare.
§ ACKNOWLEDGEMENTS
We acknowledge financial support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the Excellence Cluster ORIGINS under
Germany's
Excellence Strategy (EXC-2094-390783311) and through Project-ID 201269156 -
Collaborative Research Center (SFB) 1032 - Project B2.
IM acknowledges European Union's Framework Programme for Research and Innovation Horizon 2020 (2014-2020) under the Marie Skłodowska-Curie Grant Agreement No. 754388 (LMU Research Fellows) and from LMUexcellent, funded by the Federal Ministry of Education and Research (BMBF) and the Free State of Bavaria under the Excellence Strategy of the German Federal Government and the Länder.
§ APPENDIX
§.§ Agent-based simulation method
We now describe our agent-based simulation model.
Please also refer to the SI and the Supplemental Materials of Refs. <cit.> for more details.
In our systems we simulate M polymers, each of length L.
Orientational diffusion causes the tip of each polymer to perform a persistent random walk. Upon collision with another polymer, local interaction causes the tip to gradually align with its direction.
Attached to each polymer tip is a tail that simply follows the path outlined by the tip.
This dynamics mimics the behavior of actin filaments in actomyosin motility assays <cit.>, in which polymers move in a snake-like fashion over a lawn of motor proteins and motion orthogonal to the contour is suppressed <cit.>.
Here we use purely nematic interactions between polymers which are primarily tuned by the nematic alignment amplitude α_n that allows for a continuous variation of the rate of alignment.
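As an illustration of the update rule described above, the following sketch caricatures a single time step of a polymer tip. This is not the actual WASP implementation (which is documented in the SI and the cited references); the relation D_r = v/L_p and the alignment step capped at α_n per encounter are simplifying assumptions made here for illustration only.

```python
# Caricature of one tip update: persistent random walk plus weak nematic alignment.
import numpy as np

def tip_step(x, y, theta, v, dt, L_p, theta_c=None, alpha_n=0.126,
             rng=np.random.default_rng()):
    theta += np.sqrt(2.0 * v * dt / L_p) * rng.normal()   # assumes D_r = v / L_p
    if theta_c is not None:
        # rotate towards the contacted contour, modulo pi (nematic symmetry)
        dtheta = (theta_c - theta + np.pi / 2) % np.pi - np.pi / 2
        theta += np.clip(dtheta, -alpha_n, alpha_n)
    x += v * dt * np.cos(theta)
    y += v * dt * np.sin(theta)
    return x, y, theta
```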
§.§ Parameters
If not stated otherwise, we used the following model parameters: discretization N = 5, polymer aspect ratio L/d = 21, nematic alignment strength α_n = 0.126≈7.2^∘ and a periodic simulation box of length L_box = 162.5L.
The velocity v^(n) of each polymer is randomly drawn from the interval [0.75,1.]v_0.
We started simulations with random initial conditions, i.e. randomly oriented polymers were placed at random positions in the simulation box.
Time is measured in units of L/v_0, where v_0 is the maximal velocity of a free polymer.
Density in Figs. <ref>(a)-(c) and <ref>(g)-(h) is time-averaged for better visibility, with averaging times of 159 for Fig. <ref>(a) and 16 for Figs. <ref>(b)-(c) and <ref>(g)-(h).
Note that the system shown in Fig. <ref>(h) does not have the usual periodic boundary conditions. Rather, the particles crossing the boundaries are moved either to a random position along a boundary with random orientation or to one of the particle sources. The ratio of these two possibilities is chosen so that the particle flux from the sources is kept constant.
§.§ Continuous theory
We numerically investigate Eqs. (<ref>,<ref>) under periodic boundary conditions by using finite differences of second order <cit.> on a 300×300 grid with the spatial resolution δ x = 0.5.
The time integration was performed via a second-order predictor-corrector scheme with time step dt = 10^-2.
We use the parameter values β=0.05, κ_ϕ=0.2, ω^a=-0.5, χ_ϕ=0.4, ν_ϕ=1.
Unless explicitly stated, we initialize simulations from an isotropic uniform state
with a small amount of noise. To make time and space dimensionless we rescale them by setting the rotational diffusion coefficient and μ_ρ equal to unity.
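A minimal sketch of this scheme for the density equation is given below (assumptions: periodic boundaries, nested second-order central differences, the Q-tensor held fixed during the step for brevity, and λ^2 set to unity as in the rescaling above; in the full scheme the Q-tensor is advanced analogously):

```python
# Sketch: second-order predictor-corrector (Heun) step for the density field.
import numpy as np

N, dx, dt = 300, 0.5, 1e-2
lam2, nu_phi, chi_phi = 1.0, 1.0, 0.4

def dxc(f): return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * dx)
def dyc(f): return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * dx)

def rhs_phi(phi, Qxx, Qxy):
    """d(phi)/dt = d_i d_j [ nu(phi) phi delta_ij + chi(phi) Q_ij ]."""
    iso = (lam2 + nu_phi * phi) * phi                  # nu(phi) * phi
    chi = lam2 + chi_phi * phi
    Mxx, Mxy, Myy = iso + chi * Qxx, chi * Qxy, iso - chi * Qxx   # Q_yy = -Q_xx
    return dxc(dxc(Mxx)) + 2.0 * dxc(dyc(Mxy)) + dyc(dyc(Myy))

def heun_step(phi, Qxx, Qxy):
    k1 = rhs_phi(phi, Qxx, Qxy)
    k2 = rhs_phi(phi + dt * k1, Qxx, Qxy)              # slope at predicted state
    return phi + 0.5 * dt * (k1 + k2)
```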
[De Gennes and Prost(1993)]de1993physics Pierre-Gilles De Gennes and Jacques Prost. The physics of liquid crystals. Number 83. Oxford university press, 1993.
[Marchetti et al.(2013)Marchetti, Joanny, Ramaswamy, Liverpool, Prost, Rao, and Simha]Marchetti2013 M Cristina Marchetti, Jean-François Joanny, Sriram Ramaswamy, Tanniemola B Liverpool, Jacques Prost, Madan Rao, and R Aditi Simha. Hydrodynamics of soft active matter. Rev. Mod. Phys., 85 (3): 1143, 10.1103/RevModPhys.85.1143.
[Doostmohammadi et al.(2018)Doostmohammadi, Ignés-Mullol, Yeomans, and Sagués]Doostmohammadi2018 Amin Doostmohammadi, Jordi Ignés-Mullol, Julia M Yeomans, and Francesc Sagués. Active nematics. Nat. Commun., 9 (1): 3246, 10.1038/s41467-018-05666-8.
[Alert et al.(2022)Alert, Casademunt, and Joanny]alert2021active Ricard Alert, Jaume Casademunt, and Jean-François Joanny. Active turbulence. Annu. Rev. Condens. Matter Phys., 13 (1): 143–170, 10.1146/annurev-conmatphys-082321-035957.
[Sanchez et al.(2012)Sanchez, Chen, DeCamp, Heymann, and Dogic]sanchez_spontaneous_2012 Tim Sanchez, Daniel T. N. Chen, Stephen J. DeCamp, Michael Heymann, and Zvonimir Dogic. Spontaneous motion in hierarchically assembled active matter. Nature, 491 (7424): 431–434, 10.1038/nature11591.
[DeCamp et al.(2015)DeCamp, Redner, Baskaran, Hagan, and Dogic]Decamp2015 Stephen J DeCamp, Gabriel S Redner, Aparna Baskaran, Michael F Hagan, and Zvonimir Dogic. Orientational order of motile defects in active nematics. Nat. Mater., 14 (11): 1110–1115, https://doi.org/10.1038/nmat4387.
[Giomi et al.(2013)Giomi, Bowick, Ma, and Marchetti]giomi_defect_2013 Luca Giomi, Mark J. Bowick, Xu Ma, and M. Cristina Marchetti. Defect Annihilation and Proliferation in Active Nematics. Phys. Rev. Lett., 110 (22): 228101, 10.1103/PhysRevLett.110.228101.
[Shankar et al.(2018)Shankar, Ramaswamy, Marchetti, and Bowick]shankar_defect_2018 Suraj Shankar, Sriram Ramaswamy, M. Cristina Marchetti, and Mark J. Bowick. Defect Unbinding in Active Nematics. Phys. Rev. Lett., 121 (10): 108002, 10.1103/PhysRevLett.121.108002.
[Thampi et al.(2014)Thampi, Golestanian, and Yeomans]thampi_instabilities_2014 Sumesh P. Thampi, Ramin Golestanian, and Julia M. Yeomans. Instabilities and topological defects in active nematics. Europhys Lett., 105 (1): 18001, 10.1209/0295-5075/105/18001.
[Giomi et al.(2014)Giomi, Bowick, Mishra, Sknepnek, and Cristina Marchetti]Giomi2014 Luca Giomi, Mark J Bowick, Prashant Mishra, Rastko Sknepnek, and M Cristina Marchetti. Defect dynamics in active nematics. Philos. Trans. R. Soc. A, 372 (2029): 20130365, https://doi.org/10.1098/rsta.2013.0365.
[Putzig et al.(2016)Putzig, Redner, Baskaran, and Baskaran]putzig_instabilities_2016 Elias Putzig, Gabriel S. Redner, Arvind Baskaran, and Aparna Baskaran. Instabilities, defects, and defect ordering in an overdamped active nematic. Soft Matter, 12 (17): 3854–3859, 10.1039/C6SM00268D.
[Maryshev et al.(2019)Maryshev, Goryachev, Marenduzzo, and Morozov]Maryshev2019Dry Ivan Maryshev, Andrew B Goryachev, Davide Marenduzzo, and Alexander Morozov. Dry active turbulence in a model for microtubule–motor mixtures. Soft Matter, 15 (30): 6038–6043, 10.1039/c9sm00558g.
[Schaller et al.(2010)Schaller, Weber, Semmrich, Frey, and Bausch]schaller_polar_2010 Volker Schaller, Christoph Weber, Christine Semmrich, Erwin Frey, and Andreas R. Bausch. Polar patterns of driven filaments. Nature, 467 (7311): 73–77, 10.1038/nature09312.
[Butt et al.(2010)Butt, Mufti, Humayun, Rosenthal, Khan, Khan, and Molloy]butt_myosin_2010 Tariq Butt, Tabish Mufti, Ahmad Humayun, Peter B. Rosenthal, Sohaib Khan, Shahid Khan, and Justin E. Molloy. Myosin Motors Drive Long Range Alignment of Actin Filaments. J. Biol. Chem., 285 (7): 4964–4974, 10.1074/jbc.M109.044792.
[Grégoire and Chaté(2004)]gregoire_onset_2004
Guillaume Grégoire and Hugues Chaté.
Onset of Collective and Cohesive Motion.
Phys. Rev. Lett., 920 (2):0 025702,
10.1103/PhysRevLett.92.025702.
[Solon et al.(2015)Solon, Chaté, and Tailleur]solon_phase_2015
Alexandre P. Solon, Hugues Chaté, and Julien Tailleur.
From Phase to Microphase Separation in Flocking Models:
The Essential Role of Nonequilibrium Fluctuations.
Phys. Rev. Lett., 114:0 068101,
10.1103/PhysRevLett.114.068101.
[Huber et al.(2021)Huber, Krüger, and Frey]huber_microphase_2021
Lorenz Huber, Timo Krüger, and Erwin Frey.
Microphase separation in active filament systems maintained by cyclic
dynamics of cluster size and order.
Phys. Rev. Res., 30 (1):0 013280,
10.1103/PhysRevResearch.3.013280.
[Huber et al.(2018)Huber, Suzuki, Krüger, Frey, and
Bausch]Huber2018
L Huber, R Suzuki, T Krüger, E Frey, and AR Bausch.
Emergence of coexisting ordered states in active matter systems.
Science, 3610 (6399):0 255–258,
DOI: 10.1126/science.aao5434.
[Denk and Frey(2020)]denk_pattern-induced_2020-1
Jonas Denk and Erwin Frey.
Pattern-induced local symmetry breaking in active-matter systems.
Proc. Natl. Acad. Sci. U.S.A., 1170 (50):0
31623–31630,
10.1073/pnas.2010302117.
[Ginelli et al.(2010)Ginelli, Peruani, Bär, and
Chaté]ginelli_large-scale_2010
Francesco Ginelli, Fernando Peruani, Markus Bär, and Hugues Chaté.
Large-scale collective properties of self-propelled rods.
Phys. Rev. Lett., 1040 (18):0 184502,
10.1103/PhysRevLett.104.184502.
[Peshkov et al.(2012)Peshkov, Aranson, Bertin, Chaté, and
Ginelli]Peshkov2012
Anton Peshkov, Igor S Aranson, Eric Bertin, Hugues Chaté, and Francesco
Ginelli.
Nonlinear field equations for aligning self-propelled rods.
Phys. Rev. Lett., 1090 (26):0 268701,
10.1103/PhysRevLett.109.268701.
[Ngo et al.(2014)Ngo, Peshkov, Aranson, Bertin, Ginelli, and
Chaté]ngo_large-scale_2014
Sandrine Ngo, Anton Peshkov, Igor S. Aranson, Eric Bertin, Francesco Ginelli,
and Hugues Chaté.
Large-Scale Chaos and Fluctuations in Active Nematics.
Phys. Rev. Lett., 113:0 038302,
10.1103/PhysRevLett.113.038302.
[Großmann et al.(2016)Großmann, Peruani, and
Bär]grosmann_mesoscale_2016
Robert Großmann, Fernando Peruani, and Markus Bär.
Mesoscale pattern formation of self-propelled rods with velocity
reversal.
Phys. Rev. E, 940 (5):0 050602,
10.1103/PhysRevE.94.050602.
[Maryshev et al.(2020)Maryshev, Morozov, Goryachev, and
Marenduzzo]Maryshev2020
Ivan Maryshev, Alexander Morozov, Andrew B Goryachev, and Davide Marenduzzo.
Pattern formation in active model c with anchoring: bands, aster
networks, and foams.
Soft Matter, 160 (38):0 8775–8781,
10.1039/d0sm00927j.
[Cai et al.(2019)Cai, Chaté, Ma, and Shi]Cai2019
Li-Bing Cai, Hugues Chaté, Yu-Qiang Ma, and Xia-Qing Shi.
Dynamical subclasses of dry active nematics.
Phys. Rev. E, 99:0 010601,
10.1103/PhysRevE.99.010601.
[Großmann et al.(2020)Großmann, Aranson, and
Peruani]grosmann_particle-field_2020
Robert Großmann, Igor S. Aranson, and Fernando Peruani.
A particle-field approach bridges phase separation and collective
motion in active matter.
Nat. Commun., 110 (1):0 5365,
10.1038/s41467-020-18978-5.
[Chaté(2020)]chate_dry_2020
Hugues Chaté.
Dry aligning dilute active matter.
Annu. Rev. Condens. Matter Phys., 110 (1),
10.1146/annurev-conmatphys-031119-050752.
[Mishra et al.(2014)Mishra, Puri, and Ramaswamy]mishra2014aspects
Shradha Mishra, Sanjay Puri, and Sriram Ramaswamy.
Aspects of the density field in an active nematic.
Philos. Trans. R. Soc. A, 3720 (2029):0
20130364,
10.1098/rsta.2013.0364.
[Bertin et al.(2013)Bertin, Chaté, Ginelli, Mishra, Peshkov, and
Ramaswamy]bertin_mesoscopic_2013
Eric Bertin, Hugues Chaté, Francesco Ginelli, Shradha Mishra, Anton Peshkov,
and Sriram Ramaswamy.
Mesoscopic theory for fluctuating active nematics.
New J. Phys., 150 (8):0 085032,
10.1088/1367-2630/15/8/085032.
[Blow et al.(2014)Blow, Thampi, and Yeomans]Blow2014
Matthew L Blow, Sumesh P Thampi, and Julia M Yeomans.
Biphasic, lyotropic, active nematics.
Phys. Rev. Lett., 1130 (24):0 248303,
0.1103/PhysRevLett.113.248303.
[Hohenberg and Halperin(1977)]HohenbergHalperin
Pierre C Hohenberg and Bertrand I Halperin.
Theory of dynamic critical phenomena.
Rev. Mod. Phys., 490 (3):0 435,
https://doi.org/10.1103/RevModPhys.49.435.
[Li and Cates(2021)]li_hierarchical_2021
Yuting I. Li and Michael E. Cates.
Hierarchical microphase separation in non-conserved active mixtures.
Eur. Phys. J. E, 440 (9):0 119,
10.1140/epje/s10189-021-00113-x.
[Baskaran and Marchetti(2012)]baskaran_self-regulation_2012
A. Baskaran and M. C. Marchetti.
Self-regulation in self-propelled nematic fluids.
Eur. Phys. J. E, 350 (9),
10.1140/epje/i2012-12095-8.
[Ahmadi et al.(2006)Ahmadi, Marchetti, and
Liverpool]ahmadi2006hydrodynamics
Aphrodite Ahmadi, M Cristina Marchetti, and Tanniemola B Liverpool.
Hydrodynamics of isotropic and liquid crystalline active polymer
solutions.
Phys. Rev. E, 740 (6):0 061913,
10.1103/PhysRevE.74.061913.
[Baskaran and Marchetti(2010)]baskaran2010nonequilibrium
Aparna Baskaran and M Cristina Marchetti.
Nonequilibrium statistical mechanics of self-propelled hard rods.
J. Stat. Mech. Theory Exp., 20100 (04):0
P04019,
10.1088/1742-5468/2010/04/P04019.
[Maryshev et al.(2018)Maryshev, Marenduzzo, Goryachev, and
Morozov]Maryshev2018
Ivan Maryshev, Davide Marenduzzo, Andrew B Goryachev, and Alexander Morozov.
Kinetic theory of pattern formation in mixtures of microtubules and
molecular motors.
Phys. Rev. E, 970 (2):0 22412,
10.1103/PhysRevE.97.022412.
[Cates(2019)]cates2019active
Michael E Cates.
Active field theories.
arXiv preprint,
10.48550/arXiv.1904.01330.
[Shaebani et al.(2020)Shaebani, Wysocki, Winkler, Gompper, and
Rieger]shaebani2020computational
M Reza Shaebani, Adam Wysocki, Roland G Winkler, Gerhard Gompper, and Heiko
Rieger.
Computational models for active matter.
Nature Reviews Physics, 20 (4):0 181–199,
https://doi.org/10.1038/s42254-020-0152-1.
[Sulaiman et al.(2006)Sulaiman, Marenduzzo, and
Yeomans]sulaiman2006lattice
N Sulaiman, D Marenduzzo, and JM Yeomans.
Lattice boltzmann algorithm to simulate isotropic-nematic emulsions.
Phys. Rev. E, 740 (4):0 041708,
https://doi.org/10.1103/PhysRevE.74.041708.
[Araki and Tanaka(2004)]araki2004nematohydrodynamic
Takeaki Araki and Hajime Tanaka.
Nematohydrodynamic effects on the phase separation of a symmetric
mixture of an isotropic liquid and a liquid crystal.
Phys. Rev. Lett., 930 (1):0 015702,
https://doi.org/10.1103/PhysRevLett.93.015702.
[Mishra et al.(2010)Mishra, Simha, and Ramaswamy]mishra2010dynamic
Shradha Mishra, R Aditi Simha, and Sriram Ramaswamy.
A dynamic renormalization group study of active nematics.
J. Stat. Mech. Theory Exp., 20100 (02):0
P02003,
10.1088/1742-5468/2010/02/P02003.
[Putzig and Baskaran(2014)]Putzig2014
Elias Putzig and Aparna Baskaran.
Phase separation and emergent structures in an active nematic fluid.
Phys. Rev. E, 900 (4):0 042304,
https://doi.org/10.1103/PhysRevE.90.042304.
[Sato and Teramoto(1996)]sato1996frank
Takahiro Sato and Akio Teramoto.
On the frank elastic constants of lyotropic polymer liquid crystals.
Macromolecules, 290 (11):0 4107–4114,
https://doi.org/10.1021/ma950986a.
[Ramaswamy et al.(2003)Ramaswamy, Simha, and
Toner]ramaswamy2003active
S Ramaswamy, R. Aditi Simha, and J Toner.
Active nematics on a substrate: Giant number fluctuations and
long-time tails.
Europhys Lett., 620 (2):0 196–202,
10.1209/epl/i2003-00346-7.
[Simha and Ramaswamy(2002)]simha2002hydrodynamic
R Aditi Simha and Sriram Ramaswamy.
Hydrodynamic fluctuations and instabilities in ordered suspensions of
self-propelled particles.
Phys. Rev. Lett., 890 (5):0 058101,
https://doi.org/10.1103/PhysRevLett.89.058101.
[Narayan et al.(2007)Narayan, Ramaswamy, and
Menon]narayan_long-lived_2007
V. Narayan, S. Ramaswamy, and N. Menon.
Long-Lived Giant Number Fluctuations in a Swarming
Granular Nematic.
Science, 3170 (5834):0 105–108,
10.1126/science.1140414.
[Genkin et al.(2017)Genkin, Sokolov, Lavrentovich, and
Aranson]genkin2017topological
Mikhail M Genkin, Andrey Sokolov, Oleg D Lavrentovich, and Igor S Aranson.
Topological defects in a living nematic ensnare swimming bacteria.
Phys. Rev. X, 70 (1):0 011029,
https://doi.org/10.1103/PhysRevX.7.011029.
[Kawaguchi et al.(2017)Kawaguchi, Kageyama, and
Sano]kawaguchi_topological_2017-1
Kyogo Kawaguchi, Ryoichiro Kageyama, and Masaki Sano.
Topological defects control collective dynamics in neural progenitor
cell cultures.
Nature, 5450 (7654):0 327–331,
10.1038/nature22321.
[Cates and Tailleur(2015)]cates2015motility
Michael E Cates and Julien Tailleur.
Motility-induced phase separation.
Annu. Rev. Condens. Matter Phys., 60 (1):0
219–244,
https://doi.org/10.1146/annurev-conmatphys-031214-014710.
[Van Der Linden et al.(2019)Van Der Linden, Alexander, Aarts, and
Dauchot]van2019interrupted
Marjolein N Van Der Linden, Lachlan C Alexander, Dirk GAL Aarts, and Olivier
Dauchot.
Interrupted motility induced phase separation in aligning active
colloids.
Phys. Rev. Lett., 1230 (9):0 098001,
https://doi.org/10.1103/PhysRevLett.123.098001.
[Shankar and Marchetti(2019)]shankar2019hydrodynamics
Suraj Shankar and M Cristina Marchetti.
Hydrodynamics of active defects: From order to chaos to defect
ordering.
Phys. Rev. X, 90 (4):0 041047,
https://doi.org/10.1103/PhysRevX.9.041047.
[Cortese et al.(2018)Cortese, Eggers, and Liverpool]cortese2018pair
Dario Cortese, Jens Eggers, and Tanniemola B Liverpool.
Pair creation, motion, and annihilation of topological defects in
two-dimensional nematic liquid crystals.
Phys. Rev. E, 970 (2):0 022704,
https://doi.org/10.1103/PhysRevE.97.022704.
[Hussain et al.(2013)Hussain, Molloy, and
Khan]hussain_spatiotemporal_2013
Saman Hussain, Justin E. Molloy, and Shahid M. Khan.
Spatiotemporal Dynamics of Actomyosin Networks.
Biophys. J., 1050 (6):0 1456–1465,
10.1016/j.bpj.2013.08.001.
[Suzuki and Bausch(2017)]suzuki_emergence_2017
Ryo Suzuki and Andreas R. Bausch.
The emergence and transient behaviour of collective motion in active
filament systems.
Nat. Commun., 80 (1):0 41,
10.1038/s41467-017-00035-3.
[Suzuki et al.(2015)Suzuki, Weber, Frey, and Bausch]suzuki_polar_2015
Ryo Suzuki, Christoph A. Weber, Erwin Frey, and Andreas R. Bausch.
Polar pattern formation in driven filament systems requires
non-binary particle collisions.
Nat. Phys., 110 (10):0 839–843,
10.1038/nphys3423.
[Sciortino and Bausch(2021)]sciortino_pattern_2021
Alfredo Sciortino and Andreas R. Bausch.
Pattern formation and polarity sorting of driven actin filaments on
lipid membranes.
Proc. Natl. Acad. Sci. U.S.A., 1180 (6):0
e2017047118,
10.1073/pnas.2017047118.
[Sumino et al.(2012)Sumino, Nagai, Shitaka, Tanaka, Yoshikawa, Chaté,
and Oiwa]sumino_large-scale_2012-1
Yutaka Sumino, Ken H. Nagai, Yuji Shitaka, Dan Tanaka, Kenichi Yoshikawa,
Hugues Chaté, and Kazuhiro Oiwa.
Large-scale vortex lattice emerging from collectively moving
microtubules.
Nature, 4830 (7390):0 448–452,
10.1038/nature10874.
[Memarian et al.(2021)Memarian, Lopes, Schwarzendahl, Athani,
Sarpangala, Gopinathan, Beller, Dasbiswas, and Hirst]memarian_active_2021
Fereshteh L. Memarian, Joseph D. Lopes, Fabian Jan Schwarzendahl,
Madhuvanthi Guruprasad Athani, Niranjan Sarpangala, Ajay Gopinathan,
Daniel A. Beller, Kinjal Dasbiswas, and Linda S. Hirst.
Active nematic order and dynamic lane formation of microtubules
driven by membrane-bound diffusing motors.
Proc. Natl. Acad. Sci. U.S.A., 1180 (52):0
e2117107118,
10.1073/pnas.2117107118.
[Turiv et al.(2020)Turiv, Koizumi, Thijssen, Genkin, Yu, Peng, Wei,
Yeomans, Aranson, Doostmohammadi, and Lavrentovich]turiv_polar_2020
Taras Turiv, Runa Koizumi, Kristian Thijssen, Mikhail M. Genkin, Hao Yu,
Chenhui Peng, Qi-Huo Wei, Julia M. Yeomans, Igor S. Aranson, Amin
Doostmohammadi, and Oleg D. Lavrentovich.
Polar jets of swimming bacteria condensed by a patterned liquid
crystal.
Nat. Phys., 160 (4):0 481–487,
10.1038/s41567-020-0793-0.
[Sciortino et al.(2022)Sciortino, Neumann, Krüger, Maryshev,
Teshima, Wolfrum, Frey, and Bausch]sciortino_defects_2022
Alfredo Sciortino, Lukas J Neumann, Timo Krüger, Ivan Maryshev, Tetsuhiko F
Teshima, Bernhard Wolfrum, Erwin Frey, and Andreas R Bausch.
Polarity and chirality control of an active fluid by passive nematic
defects.
Nat. Mater.,
10.1038/s41563-022-01432-w.
[Saw et al.(2017)Saw, Doostmohammadi, Nier, Kocgozlu, Thampi, Toyama,
Marcq, Lim, Yeomans, and Ladoux]saw_topological_2017-1
Thuan Beng Saw, Amin Doostmohammadi, Vincent Nier, Leyla Kocgozlu, Sumesh
Thampi, Yusuke Toyama, Philippe Marcq, Chwee Teck Lim, Julia M. Yeomans, and
Benoit Ladoux.
Topological defects in epithelia govern cell death and extrusion.
Nature, 5440 (7649):0 212–216,
10.1038/nature21718.
[Popescu et al.(2018)Popescu, Uspal, Bechinger, and
Fischer]popescu_chemotaxis_2018
Mihail N. Popescu, William E. Uspal, Clemens Bechinger, and Peer Fischer.
Chemotaxis of Active Janus Nanoparticles.
Nano Lett., 180 (9):0 5345–5349,
10.1021/acs.nanolett.8b02572.
[Lavergne et al.(2019)Lavergne, Wendehenne, Bäuerle, and
Bechinger]lavergne_group_2019
François A Lavergne, Hugo Wendehenne, Tobias Bäuerle, and Clemens
Bechinger.
Group formation and cohesion of active particles with visual
perception-dependent motility.
Science, 3640 (6435):0 70–74,
10.1126/science.aau5347.
[Ziepke et al.(2022)Ziepke, Maryshev, Aranson, and
Frey]alex_preprint_2022
Alexander Ziepke, Ivan Maryshev, Igor S Aranson, and Erwin Frey.
Multi-scale organization in communicating active matter.
Nat. Commun., 13,
10.1038/s41467-022-34484-2.
[Nagai et al.(2015)Nagai, Sumino, Montagne, Aranson, and
Chaté]nagai_collective_2015-1
Ken H. Nagai, Yutaka Sumino, Raul Montagne, Igor S. Aranson, and Hugues
Chaté.
Collective Motion of Self-Propelled Particles with
Memory.
Phys. Rev. Lett., 1140 (16):0 168001,
10.1103/PhysRevLett.114.168001.
[Ventejou et al.(2021)Ventejou, Chaté, Montagne, and
Shi]ventejou2021susceptibility
Bruno Ventejou, Hugues Chaté, Raul Montagne, and Xia-qing Shi.
Susceptibility of orientationally ordered active matter to chirality
disorder.
Phys. Rev. Lett., 1270 (23):0 238001,
https://doi.org/10.1103/PhysRevLett.127.238001.
[Lemma et al.(2022)Lemma, Mitchell, Subramanian, Needleman, and
Dogic]lemma2022active
Bezia Lemma, Noah P Mitchell, Radhika Subramanian, Daniel J Needleman, and
Zvonimir Dogic.
Active microphase separation in mixtures of microtubules and
tip-accumulating molecular motors.
Phys. Rev. X, 120 (3):0 031006,
https://doi.org/10.1103/PhysRevX.12.031006.
[Abramowitz and Stegun(1964)]AbramowitzStegun
Milton Abramowitz and Irene A. Stegun.
Handbook of Mathematical Functions with Formulas, Graphs, and
Mathematical Tables.
Dover, New York, 1964.
[Bertin et al.(2006)Bertin, Droz, and
Grégoire]bertin_boltzmann_2006
Eric Bertin, Michel Droz, and Guillaume Grégoire.
Boltzmann and hydrodynamic description for self-propelled particles.
Phys. Rev. E, 74:0 022101,
10.1103/PhysRevE.74.022101.
[Bertin et al.(2009)Bertin, Droz, and Grégoire]bertin_2009
Eric Bertin, Michel Droz, and Guillaume Grégoire.
Hydrodynamic equations for self-propelled particles: microscopic
derivation and stability analysis.
J. Phys. A Math. Theor., 420 (44):0 445001,
10.1088/1751-8113/42/44/445001.
[Peshkov et al.(2014)Peshkov, Bertin, Ginelli, and
Chaté]peshkov_boltzmann-ginzburg-landau_2014
A. Peshkov, E. Bertin, F. Ginelli, and H. Chaté.
Boltzmann-Ginzburg-Landau approach for continuous
descriptions of generic Vicsek-like models.
Eur. Phys. J.: Spec. Top., 2230 (7):0
1315–1344,
10.1140/epjst/e2014-02193-y.
[Ngo et al.(2012)Ngo, Ginelli, and Chaté]ngo_competing_2012-2
Sandrine Ngo, Francesco Ginelli, and Hugues Chaté.
Competing ferromagnetic and nematic alignment in self-propelled polar
particles.
Phys. Rev. E, 860 (5):0 050101,
10.1103/PhysRevE.86.050101.
§ SUPPLEMENTARY INFORMATION
§ WASP SIMULATION METHOD
In this section we provide a brief summary of the agent-based simulations.
The focus will be on the aspects most relevant for the current study.
For a detailed description of the WASP simulation setup, please refer to the supplemental materials of Refs. <cit.>.
In the agent-based simulations, we consider M polymers moving on a flat substrate (in two spatial dimensions).
Each polymer n consists of N spherical joints j located at positions 𝐫_j^(n) (with j ∈ { 0, 1, …, N - 1 }, where the polymer tip is denoted by j = 0).
The direction of a polymer's tip is denoted by 𝐮_0^(n) and its motion is described by:
∂_t 𝐫_0^(n) = v^(n) 𝐮_0^(n) -𝐅_𝐫𝐞𝐩 = v^(n)(cosθ_0^(n), sinθ_0^(n))^⊤ -𝐅_𝐫𝐞𝐩 .
Here 𝐅_𝐫𝐞𝐩 describes a weak repulsion force (see (<ref>)) acting on a polymer head while in contact with the contour of another polymer.
θ_0^(n) denotes the orientation of a polymer and v^(n) its free speed.
For this study, the speed of each polymer was chosen at random from a continuous uniform distribution in the interval [0.75, 1] v_0, where v_0 denotes the maximal velocity of a free polymer (see section S<ref> for further details on this velocity dispersion).
The orientation of a polymer's head evolves in time according to
∂_t θ_0^(n) =
- δH̃_0^(n)/δθ_0^(n)
+ √(2v^(n)/L_p) ξ ,
where ξ is random white noise with zero mean and unit variance with the magnitude of the noise given by the prefactor.
This implies that individual polymers perform a persistent random walk with a path persistence length of L_p.
H̃_0^(n) sets the—in this study purely nematic—torque caused by interactions with other polymers.
Before we come to a description of H̃_0^(n), it will prove useful to introduce several other quantities.
The first is the distance vector
Δ𝐫_nm
=
(
𝐫_0^(n) -
𝐫^(m))_shDist .
This vector connects the tip of polymer n to the point on the contour of an adjacent polymer m that has the shortest possible distance to it.
The local orientation of the contour of the adjacent polymer m is given by θ_j^(m), which corresponds to the orientation of the segment j of polymer m to which Δ𝐫_nm connects.
Second, if a polymer is interacting with several polymers at a time, we define a weighted average direction of the connecting vectors:
Δ𝐞_n
:=
∑_m
C(
|Δ𝐫_nm|
)
Δ𝐫_nm/|Δ𝐫_nm| .
Here C(
|Δ𝐫_nm|
) is a weighting factor accounting for the assumption that a more distant polymer contributes less to an interaction.
It is given by
C(|Δ𝐫_nm|) =
0 , if |Δ𝐫_nm|>d ,
(d- |Δ𝐫_nm|)/d , otherwise,
where d defines the interaction radius.
Using the orientation of the averaged connecting vector θ̃_n, we define an averaged nematic impact angle as Δθ̃^(n)_n = θ_0^(n) - θ̃_n.
Equipped with these definitions we are now in a position to write down the alignment potential as
H̃_0^(n)
:=
α_n v_0/dcos (2Δθ̃^(n)_n) |Δ𝐞_n|
,
where the overall amplitude of the alignment is set by the absolute value of the weighted connecting vector, combined with the nematic alignment strength α_n.
The repulsion force 𝐅_𝐫𝐞𝐩 in (<ref>) is given by
𝐅_𝐫𝐞𝐩 =
-s ∑_m
C
(
|Δ𝐫_nm|
)
Δ𝐫_nm/|Δ𝐫_nm| ,
which is used to prevent unphysical aggregation of polymers. It is assumed to be weak with s = 0.05.
Filaments in actomyosin motility assays are observed to conduct a trailing motion, where the tail of a polymer follows the movement of the tip <cit.>.
To emulate this behaviour, tail joints move according to
∂_t 𝐫_j^(n)
=
K_s (
| 𝐫_j^(n)-𝐫_j-1^(n)| - b
) 1/2(
𝐮_j+1^(n)+𝐮_j^(n))
.
Here, the second part of the equation, 1/2(𝐮_j+1^(n)+𝐮_j^(n)), ensures that the movement is directed along the average orientation of the two segments adjacent to joint j.
The remainder of (<ref>) corresponds to a linear (Hookean) restoring force with spring coefficient K_s = 200 that ensures an average length b of the cylindrical segments between bonds.
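For concreteness, the following is a minimal Python sketch (not the WASP code itself) of a single Euler-Maruyama update for one polymer, under the simplifying assumptions that the alignment torque has already been evaluated from the neighbours, the weak repulsion force is omitted, and the last tail joint is not treated separately.

```python
import numpy as np

def polymer_step(r, theta0, v, L_p, K_s, b, dt, rng, torque=0.0):
    """One Euler-Maruyama step for a single polymer (hypothetical helper, not WASP itself).
    r      : (N, 2) array of joint positions, r[0] is the tip
    theta0 : tip orientation, v : assigned free speed
    torque : -dH/dtheta from the nematic alignment potential (0.0 without neighbours)"""
    # tip advection along its orientation (weak repulsion force omitted here)
    u0 = np.array([np.cos(theta0), np.sin(theta0)])
    r[0] = r[0] + dt * v * u0
    # orientation: alignment torque plus rotational noise of strength sqrt(2 v / L_p)
    theta0 = theta0 + dt * torque + np.sqrt(2.0 * v / L_p * dt) * rng.normal()
    # tail joints: Hookean restoring force along the mean orientation of adjacent segments
    for j in range(1, len(r) - 1):
        u_prev = (r[j - 1] - r[j]) / np.linalg.norm(r[j - 1] - r[j])
        u_next = (r[j] - r[j + 1]) / np.linalg.norm(r[j] - r[j + 1])
        stretch = np.linalg.norm(r[j] - r[j - 1]) - b
        r[j] = r[j] + dt * K_s * stretch * 0.5 * (u_prev + u_next)
    return r, theta0
```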
§ ONSET OF NEMATIC PATTERNS
In this section we provide further information on how the phase diagram shown in Fig. 1(c) of the main text was obtained.
To determine the density ρ_n as a function of L_p above which nematic patterns are formed, we performed exploratory simulations in the phase space spanned by the (reduced) global polymer density ⟨ρ⟩ L^2
and the persistence length L_p.
To guarantee that the dynamics has reached a steady state, we ran these simulations for a time of 15 873, which is much longer than the initial timescale t_0 ≈ 100 it takes the system to reach the quasi-stationary, disordered state <cit.>.
Figure <ref> shows the results of the in silico parameter scans in density at a set of fixed values for L_p: The blue triangles and red squares correspond to steady states where we visually observed nematic patterns or a disordered state, respectively.
To determine the phase boundary ρ_n (L_p) we fitted a function f_ρ(L_p) = a/L_p (with a as free fitting parameter) to the data points with the lowest density that still exhibited nematic order [solid line in Fig. <ref>].
The shape of the boundary line is dictated by the interplay between two counteracting effects: density-dependent, interaction-induced ordering and rotational diffusion.
The former increases linearly with density, and above a critical density spontaneous ordering begins to dominate over diffusion.
Thus, the critical density is proportional to the rotational diffusion coefficient and therefore ∝ L_p^-1 in our case.
We take f_ρ(L_p) as an approximation to the density corresponding to the onset of nematic patterns, ρ_n (L_p).
To further test whether this is a satisfactory approximation for the phase boundary, we ran ten independent simulations at a density corresponding to ρ_n [cf. dots in Fig. 1 (c) of the main text] and further ten at 0.9 ρ_n for several different L_p for a twice as large simulation time of 31 746.
All simulations at ρ_n formed ordered patterns, while none at 0.9 ρ_n did, affirming that f_ρ(L_p) adequately approximates the position of the isotropic-nematic transition.
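As an illustration, the one-parameter fit described above can be reproduced with a standard least-squares routine; the data arrays below are hypothetical placeholders for the lowest ordered densities found in the exploratory scans, not the values underlying Fig. 1(c).

```python
import numpy as np
from scipy.optimize import curve_fit

# hypothetical placeholders: persistence lengths and the lowest reduced density
# at which nematic patterns were still observed in the exploratory scans
L_p = np.array([8.0, 11.1, 14.3, 20.6, 28.6])
rho_lowest_ordered = np.array([4.4, 3.1, 2.5, 1.7, 1.2])

def f_rho(Lp, a):
    # ansatz rho_n(L_p) = a / L_p: critical density ~ rotational diffusion ~ 1 / L_p
    return a / Lp

popt, _ = curve_fit(f_rho, L_p, rho_lowest_ordered)
a_fit = popt[0]
print(f"fitted a = {a_fit:.2f}, boundary rho_n(L_p) = {a_fit:.2f} / L_p")
```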
§ DEFECT DETECTION
In this section, we explain the algorithms we used to identify topological defects in simulations of both the hydrodynamic theory and the agent-based model.
To algorithmically detect -1/2 defects in
both approaches, we took advantage of the fact that inside a defect core the topological charge density q, defined as <cit.>
q = 1/4 π( ∂_xQ̂_x a∂_yQ̂_y a - ∂_xQ̂_y a∂_yQ̂_x a),
has a very large negative value (with Q̂=Q/ρ and Q defined as in (<ref>)), whereas in other regions of space its absolute value is much smaller (cf. lower right pane of Fig. 2(a) and (d) of the main text). We exploit this fact and define any contiguous region of space in which q falls below a certain threshold value q_thrs as one -1/2 defect.
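A minimal sketch of this detection step, assuming the components Q̂_xx and Q̂_xy are already available on a regular grid (directly in the hydrodynamic model, or after the rasterization and time averaging described below for the agent-based data) and using simple finite differences:

```python
import numpy as np

def charge_density(Qxx, Qxy, dx, rho=None):
    """Topological charge density q = (1/4 pi)(d_x Qh_xa d_y Qh_ya - d_x Qh_ya d_y Qh_xa),
    summed over a, for a traceless symmetric 2D Q-tensor sampled on a regular grid.
    Arrays are indexed [ix, iy]; if a density field rho is given, Qhat = Q / rho is used."""
    if rho is not None:
        Qxx, Qxy = Qxx / rho, Qxy / rho
    Q = {("x", "x"): Qxx, ("x", "y"): Qxy,
         ("y", "x"): Qxy, ("y", "y"): -Qxx}      # symmetry and tracelessness
    grad = {key: np.gradient(val, dx) for key, val in Q.items()}  # [d/dx, d/dy]
    q = np.zeros_like(Qxx)
    for a in ("x", "y"):
        q += grad[("x", a)][0] * grad[("y", a)][1] \
           - grad[("y", a)][0] * grad[("x", a)][1]
    return q / (4.0 * np.pi)

# contiguous regions with q < q_thrs (e.g. q_thrs = -0.032) can then be labelled
# as individual -1/2 defects, e.g. with scipy.ndimage.label(q < q_thrs)
```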
The position of -1 / 2 defects in the agent-based model is obtained in the following way.
Note first that the main purpose of the data from the agent-based simulations in Fig. 3(c)-(e) is to qualitatively confirm the trend observed in the hydrodynamic model. Quantifying the data with a high degree of precision would require averaging over large ensembles, which would be numerically prohibitive given the very long time scales on which the observed phenomena occur.
The total runtime of each simulation was 142 857 (which is much longer than the dynamics of undulations; cf. Movies S1 and S2), from which we cut an initial transient (cf. section S<ref>) before starting the measurement.
For each value of L_p/⟨ϕ⟩ we averaged over ten independent simulations.
To obtain q in agent-based simulations, we rasterized space into a grid with a grid spacing of Δ x = 0.3, which is small enough to resolve the structure of a defect (note that the qualitative agreement between the agent-based simulations and hydrodynamic model, shown in Fig. 3 of the main text, does not depend on the exact choice of this and the following numerical parameters).
We used the orientations θ_0^(n) of polymer tips residing inside each grid point at a given time to calculate a local value of Q̂ using (<ref>). To suppress noise due to stochastic particle fluctuations, we further averaged over a time span of 15.9, which is much shorter than density rearrangements due to bending undulations.
With this we obtained q(𝐫, t) using (<ref>).
We chose q_thrs = - 0.032, which is much lower than typical values of q outside defects.
Additionally, to avoid classifying small and short-lived density peaks that occur sporadically in the simulations as CTDs, we heuristically filtered them out by requiring the charge density to be below q_thrs for a time of at least 159 for a CTD to be detected.
By construction, the hydrodynamic model gives direct access to the Q-tensor, which allows a direct calculation of the function q given by Eq. <ref>. The positions of -1/2 defects are defined as local minima of the function q and, for consistency, the same value of q_thrs is used as for the agent-based simulations.
For the measurements in the hydrodynamic model, we discarded the data collected in the first half of the simulation runs in order to avoid any influence of initial transients.
To generate the data shown in Fig. 3 (a), we classified all runs in which CTDs were detected to be CTD-dominated (blue dots in Fig. 3 (a)). Distinction between FAEs and stable bands was made via visual inspection.
§ FLUX MEASUREMENT THROUGH DEFECTS
In the main text, we studied the mass flow through a defect as well as the speed of particles during a CTD passage; see Figs. 4(b) and 4(e), respectively.
To this end, we needed detailed information about the position and velocity of particles as they transitioned from one arm of a defect to another.
To determine these quantities, we leveraged the possibility offered by the agent-based simulations to access the position of each individual polymer at any given point in time.
In order to deduce that a given polymer has transitioned from one arm of a defect to another, several pieces of information are needed.
First, one has to find a criterion which allows to algorithmically determine if a polymer is pertinent to a given arm at a given time.
For this we used the following heuristics:
Over each arm of a defect we placed a round “classification area”, which is large enough to cover the full width of the nematic lane (blue regions in Fig. <ref>, diameter 22 L).
The positions of the classification areas were chosen such that they roughly coincided with the area where the nematic lanes recovered their full width (midpoint distance of classification areas to defect: 26 L in Fig. <ref>).
Every polymer being inside one of these regions is classified as pertinent to the given defect arm.
Second, one has to find a criterion that determines the origin of particles that have been classified as belonging to a particular arm.
For this we introduced an additional classification area which encompasses all parts of the simulation box that are further away from the defect core than a specific distance, cf. orange region in Fig. <ref> (distance to defect: 40 L).
(Note that the black colored area does not pertain to any classification area.)
After this partitioning, we measured the currents from one region to another with the heuristics described below.
We did this for a time span long enough that many particles can travel from one blue region to another (cf. Fig. <ref>), but short enough that bending undulations do not change the positions of the individual lanes significantly.
The data in Fig. 4(b) are averaged over a time of 159; those in Fig. 4(e) are averaged over 4 019 trajectories within a time of 317.
For the flux measurement heuristics, we assigned a unique identifier id to every classification area.
We then checked in short intervals of 0.16 for every polymer i if its position coincided with one of the classification areas.
If this was the case, polymer i was assigned the identifier of the region and the time of assignment t_assign was saved.
If polymer i already had a different identifier id' assigned (and hence also a different t_assign'), this meant that it had traveled from another classification area into the current region (without crossing a third region in the meantime).
In such a case, we stored the pairs of tuples (id', t_assign') and (id, t_assign), which, combined with the also-saved positions and speeds of every polymer at each interval, allow us to reconstruct the path polymer i has taken propagating from region id' to id. Subsequently, we replaced the assigned identifier and assignment time of polymer i with those of the current region and the current time and continued the simulation.
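The bookkeeping just described can be summarized by the following sketch; the function and variable names are ours, and `regions` is assumed to be a list of (identifier, membership test) pairs covering the disjoint classification areas.

```python
def update_transitions(positions, assigned, t, regions, transitions):
    """One bookkeeping pass of the flux-measurement heuristic (names are ours).
    positions   : dict polymer_id -> tip position (x, y)
    assigned    : dict polymer_id -> (region_id, t_assign)
    regions     : list of (region_id, contains(x, y) -> bool) for disjoint areas
    transitions : list collecting ((id_from, t_from), (id_to, t_to), polymer_id)"""
    for pid, (x, y) in positions.items():
        for rid, contains in regions:
            if contains(x, y):
                prev = assigned.get(pid)
                if prev is not None and prev[0] != rid:
                    # polymer pid travelled from region prev[0] into region rid
                    transitions.append((prev, (rid, t), pid))
                assigned[pid] = (rid, t)
                break  # classification areas are disjoint
    return assigned, transitions
```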
§ DISPERSION IN THE POLYMER VELOCITY
Most studies of active matter assume the speed of agents to be constant and uniform <cit.>. Yet, experiments of the actin motility assay show actin filaments to have a broad distribution of velocities <cit.>.
To take into account the effects of such a velocity dispersion, we drew the assigned speed of polymers from a distribution (cf. Section S<ref> of this Supplemental Material).
We have found that the introduction of such a velocity dispersion does not hinder the formation of nematic lanes.
To additionally check whether particles with different free velocities behave differently at the level of macroscopic structures (for example, by an effective sorting of particles into spatially separate populations, where only relatively fast or slow particles form part of patterns), we subdivided the system into a grid with a grid spacing of Δ x = 0.3 and determined for each grid cell the locally averaged ⟨ v^(n)⟩ of the particles it contains, in a simulation exhibiting nematic lanes and CTDs.
Any local accumulation of fast/slow particles would lead to a different value of ⟨ v^(n)⟩ when compared to the global average ⟨ v^(n)⟩_glob.
As can be inferred from Fig. <ref>, the system is well mixed (up to random fluctuations) with respect to polymer velocities.
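A short sketch of this mixing check, assuming the tip coordinates and the assigned free speeds are available as flat arrays:

```python
import numpy as np

def local_mean_speed(x, y, v, box, dx=0.3):
    """Grid-cell average of the assigned free speeds, to be compared with v.mean().
    x, y: tip coordinates, v: assigned free speeds, box: linear system size."""
    nbins = int(box / dx)
    rng = [[0.0, box], [0.0, box]]
    counts, _, _ = np.histogram2d(x, y, bins=nbins, range=rng)
    vsum, _, _ = np.histogram2d(x, y, bins=nbins, range=rng, weights=v)
    with np.errstate(invalid="ignore"):
        return vsum / counts      # NaN marks empty cells
```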
We further found that the introduction of a velocity dispersion prevented the decay of purely nematic patterns into oppositely propagating polar waves (cf. Ref <cit.>), which hence seems to be an artefact of the assumption of equal and uniform velocities.
§ WIDTH OF NEMATIC LANES
As discussed in the main text, we measured the width of nematic lanes as a function of density ⟨ϕ⟩ in both the agent-based simulations and the hydrodynamic model (at a constant system size).
To this end, we performed several simulations at different polymer densities but at a fixed persistence length (resp. several realizations of the hydrodynamic model at different ⟨ϕ⟩ and fixed λ).
After these systems had reached a configuration in which they exhibited a single straight lane, we measured the width of the band and the average density ⟨ϕ⟩_bg in the disordered background.
(The width is determined by averaging the density of the system along the axis of the straight lane, which results in a one-dimensional density profile.
The width of the lanes in the hydrodynamic model is then defined as the distance between the two points with the maximal gradient of this curve, which can easily be obtained due to the absence of noise.
In the agent-based simulations the lane width is heuristically defined as the width of the region where this profile exceeds the threshold of three times ⟨ϕ⟩_bg.)
As shown in Fig. <ref>, the thickness of the lanes grows linearly with density in both the agent-based simulations and hydrodynamic model, while the density of the disordered background remains constant.
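A sketch of the width measurement from such a density profile, assuming the lane is aligned with the second array axis; both the maximal-gradient criterion (hydrodynamic model) and the threshold criterion (agent-based simulations) are included:

```python
import numpy as np

def lane_width(density, dx, phi_bg=None):
    """Width of a single straight lane from a 2D density field indexed [ix, iy],
    with the lane parallel to the y axis.
    phi_bg = None  : maximal-gradient criterion (hydrodynamic model)
    phi_bg = value : threshold criterion, profile > 3 * phi_bg (agent-based)"""
    profile = density.mean(axis=1)            # average along the lane axis
    if phi_bg is None:
        grad = np.gradient(profile, dx)
        left, right = np.argmax(grad), np.argmin(grad)
    else:
        above = np.where(profile > 3.0 * phi_bg)[0]   # assumes a lane is present
        left, right = above[0], above[-1]
    return abs(right - left) * dx
```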
§ FAE DETECTION
In this section we describe the procedure we used to measure the mean number of FAEs present at different parameter regimes in the agent-based simulation (Fig. 3(e) of the main text).
For this we logged the formation of every FAE in the investigated systems; the most reliable method for detecting FAEs turned out to be manual inspection of simulation videos.
To obtain the mean number of FAEs present, we divided the total lifetime of all detected FAEs in the system by the total observation time.
For every investigated L_p in the agent-based simulations, we averaged over ten independent simulations, which each ran for a time of 142 857.
It is worth noting that agent-based simulations started in a parameter regime in which systems predominantly exhibit FAEs or stable lanes (i.e., high L_p; see also section “From CTDs to FAEs and bands” in the main text)
do not immediately form straight lanes at the onset of pattern formation, but frequently at first dwell in a state of high activity (cf. left panel of Fig. 3(b) in the main text) in which no FAE can develop.
We measured the duration of this initial transient (“dwell-time”) and found that it is shorter than a time of 70 000 in more than ninety percent of the cases.
We discarded this initial time span in the measurements of the mean numbers of CTDs (cf. section S<ref>) and FAEs present to rule out any influence of the initial transient on the results.
Further, we studied the temporal evolution of filamentous arc ejections.
The motion of a separating arc in the agent-based and the hydrodynamic model can be visualized using a kymograph of the density projection shown in Fig. <ref>.
As can be inferred from the bending of the lateral extrusions, the separation process of the arcs starts slowly and continues to accelerate until complete ejection and eventual dissolution of the arc.
§ HYDRODYNAMIC MODEL
To provide the motivation for our hydrodynamic model, we start from the general form of the evolution equation for the probability distribution function P(𝐫,θ,t):
∂_t P(𝐫,θ,t)
=
- L_p ∂_i [ n_i P(𝐫,θ,t) ]
+ ∂_θ^2 P(𝐫,θ,t) +interactions ,
where 𝐧=(cosθ,sinθ) is the director vector, and L_p is the path persistence length of the polymers.
Time is measured in units of the diffusion coefficient.
Note that we only consider rotational diffusion and neglect translational diffusion.
In the following the space and time dependencies of the probability density are suppressed for brevity.
Contributions from the interactions between the polymers can be introduced in the form of collision integrals in the Boltzmann ansatz <cit.>, or by using the gradient of the interaction-induced current in a Smoluchowski approach <cit.>.
We define the particle density ρ, the polarity vector 𝐩, and the nematic Q-tensor as the first three moments of the probability distribution function:
ρ
:=
∫_0^2 πdθ
P (θ)
,
p_i
:=
∫_0^2 πdθ
n_i P(θ)
,
Q_ij
:=
∫_0^2 πdθ (
2n_i n_j-δ_i j) P (θ)
,
where the subscripts i and j denote the Cartesian components and δ_ij represents the Kronecker delta.
It is convenient to consider Fourier harmonics of the probability distribution function:
P(𝐫, θ)=∑_k=-∞^∞ P_k(𝐫) e^i k θ.
According to their definitions, ρ , p_i, and Q_ij can be expressed via Fourier harmonics as follows:
ρ =
2 π P_0 ,
p_i
=
π(
(P_1 +P_-1 ), i(P_1 -P_-1 )
)
,
Q_ij =
π(
(P_2 +P_-2 ), i (P_2 -P_-2 )
)
,
where the symbol i denotes the imaginary unit.
By introducing the projection onto the m^th harmonics of P:
(…)^ m
:=
1/2 π∫_0^2 πdθ e^-i m θ(…)
,
one obtains the following contributions from the advective and diffusive parts of (<ref>) to the evolution equations of the m_th Fourier harmonics (P_m):
∂_t P_m =
-m^2 P_m
-L_p∂_i(n_iP(𝐫,θ))^ m
=
-m^2 P_m
- L_p/2[
∂_x∑_kP_k (δ_k,m-1+δ_k,m+1)
+∂_y∑_k P_k (δ_k,m-1-δ_k,m+1)/i]
.
In terms of the collective variables this can be rewritten as:
∂_tρ =
- L_p∂_ip_i
,
∂_t p_i
=
-p_i- L_p/2∂_iρ +L_p/2∂_jQ_ij ,
∂_t Q_ij =
-4Q_ij
-L_p/2[
∂_ip_j+∂_jp_i-δ_ij∂_kp_k
]
.
Note that summation over repeated indices is implied, following the Einstein convention.
Since we consider a system with purely nematic interactions, the polar order decays on short time scales for all strengths of self-propulsion.
Thus, the polarity field 𝐩 equilibrates fast and can be eliminated adiabatically to arrive at dynamic equations for the density ρ and Q-tensor alone.
We find after rescaling time by a factor of 4:
∂_tρ =
λ^2Δρ
+ λ^2∂_i∂_jQ_ij ,
∂_t Q_ij =
-Q_ij
+λ^2/2Δ Q_ij
+λ^2
[
∂_i∂_jρ]^st ,
where we have introduced the parameter λ:=L_p/(2√(2)), Δ=∂_i∂_i denotes the Laplace operator, and [...]^st indicates the symmetric and traceless part of the expression.
We now discuss the physical meaning of each term on the RHS of
Eqs. (<ref>).
The first term in the density equation Eq. (<ref>) acts like an effective translational diffusion, despite the fact that it actually comes from single-particle advection (note that the real translational diffusion is neglected in our model).
The second term in Eq. (<ref>) represents an anisotropic flux of material along the nematic order. This term enhances diffusion along the direction of the eigenvector of Q_ij
corresponding to its positive eigenvalue, and suppresses it along the perpendicular direction. It can also be treated as a curvature-induced flux, since it disappears in a uniformly ordered state.
The first term in the evolution equation of the nematic tensor Eq. (<ref>) is due to the thermal rotational diffusion. If there were no interaction between polymers, the action of this term would lead to disordering.
The second term in Eq. (<ref>) penalizes the distortion of Q_ij and represents the elasticity in terms of liquid crystal theory.
The last term of Eq. (<ref>) provides the coupling between the equations. It can be treated simply as an anisotropic diffusive contribution. But it also introduces “aligning torque” by changing the orientation of nematic order in the presence of the density gradients.
Finally, besides the diffusion- and advection-related terms we need to add interaction-induced contributions.
Inspired by Refs. <cit.> we also introduce the following terms to describe the nematic interactions of the polymers:
∂_tρ =
⋯
+
ν̃_ρΔρ^2
+
χ̃_ρ∂_i∂_j(ρ Q_ij)
,
∂_t Q_ij =
⋯
+
α̃ρ Q_ij
-
β̃Q^2Q_ij
+
κ̃_ρ⟨ρ⟩Δ Q_ij
+ω̃^a
[
2∂_iρ∂_jρ]^st .
The ν̃_ρ-related term in Eq. (<ref>) comes from the excluded volume interactions between the polymers (an analogous term also occurs due to the “collision” of polymers, e.g., see Ref. <cit.>).
The last term in Eq. (<ref>) is an interaction-induced flux representing a density-dependent correction <cit.> to the last term of Eq. (<ref>).
The first term of Eq. (<ref>) promotes density-dependent ordering, which competes with the motility-induced disordering coming from the first term of Eq. (<ref>); β is a non-equilibrium Landau coefficient setting the magnitude of order in the bulk.
κ̃_ρ⟨ρ⟩ contributes to the restoring elastic constant. As can be seen, this is the only term in our theory that is linearized around the mean density value, whereas in most hydrodynamic models almost all terms in Eq. (<ref>) are subjected to this procedure. We linearize this particular term for two reasons. Firstly, for the sake of simplicity: we want this term to represent one particular effect, namely elasticity (or “rigidity” in terms of the material). Secondly, with this linearization it is simpler to interpret the term κ̃_ρ⟨ρ⟩Δ Q_ij as stemming from a free energy, while the contribution κ̃Δ(ρ Q_ij) could not be obtained from a free energy.
Finally, the last term of Eq. (<ref>) describes the non-equilibrium anchoring to the density interface <cit.>.
We emphasize again that we are not linearizing the ν̃_ρ-, χ̃_ρ-, and ω̃^a-related terms around the mean density (the latter of which would simply disappear completely in that case).
Such higher-order terms are typically linearized (or ignored) in well-controlled closures in the vicinity of the isotropic/nematic transition (e.g., within the Boltzmann–Ginzburg–Landau approach <cit.>).
However, our observations hint that this linearization procedure, widely used in the field of active nematics, may result in some physical processes not being accounted for by the resulting models, which in turn can lead to some phenomena (such as CTDs) escaping the researchers' gaze as well.
To obtain the equations of motion presented in the main text we simply combine (<ref>) and (<ref>) and normalize the density by the critical one, ϕ=ρ/ρ_n. The coefficients are also renamed accordingly: κ̃_ρ→κ_ϕ, etc.
As discussed in the main text, the hydrodynamic model allows to directly access the direction and magnitude of the anisotropic active flux -∂_j(χ Q_ij). To complement the illustration of this flux in Fig. 4(d) of the main text, we show in Fig. <ref> a direct plot of this observable as recorded in the hydrodynamic model.
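For orientation, the following is a minimal explicit-Euler, periodic finite-difference sketch of how equations of this type can be integrated numerically. The grouping of the coefficients (in particular the density-dependent ordering coefficient `alpha` and the way λ² enters the fluxes and the elastic constant) is our assumption and should be matched to the main-text equations; this is not the production code used for the figures or movies.

```python
import numpy as np

def lap(f, dx):
    """Five-point Laplacian on a periodic grid."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

def d(f, dx, axis):
    """Central first derivative on a periodic grid."""
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2.0 * dx)

def euler_step(phi, Qxx, Qxy, prm, dx, dt):
    """One explicit Euler step for the density phi and the Q-tensor components.
    prm: dict with keys lam, beta, kappa, chi, nu, omega_a, alpha, phi_mean."""
    lam2 = prm["lam"] ** 2
    Qyy = -Qxx
    # d_i d_j Q_ij and d_i d_j (phi Q_ij): anisotropic (curvature-induced) fluxes
    divdiv_Q = d(d(Qxx, dx, 0), dx, 0) + 2.0 * d(d(Qxy, dx, 0), dx, 1) + d(d(Qyy, dx, 1), dx, 1)
    divdiv_pQ = (d(d(phi * Qxx, dx, 0), dx, 0) + 2.0 * d(d(phi * Qxy, dx, 0), dx, 1)
                 + d(d(phi * Qyy, dx, 1), dx, 1))
    dphi = lam2 * lap(phi, dx) + lam2 * divdiv_Q \
         + prm["nu"] * lap(phi ** 2, dx) + prm["chi"] * divdiv_pQ

    px, py = d(phi, dx, 0), d(phi, dx, 1)
    # symmetric traceless parts of d_i d_j phi and of 2 d_i phi d_j phi
    st_xx, st_xy = 0.5 * (d(px, dx, 0) - d(py, dx, 1)), d(px, dx, 1)
    an_xx, an_xy = px ** 2 - py ** 2, 2.0 * px * py
    trQ2 = Qxx ** 2 + Qxy ** 2                 # convention-dependent measure of |Q|^2
    relax = -1.0 + prm["alpha"] * phi - prm["beta"] * trQ2
    elastic = 0.5 * lam2 + prm["kappa"] * prm["phi_mean"]
    dQxx = relax * Qxx + elastic * lap(Qxx, dx) + lam2 * st_xx + prm["omega_a"] * an_xx
    dQxy = relax * Qxy + elastic * lap(Qxy, dx) + lam2 * st_xy + prm["omega_a"] * an_xy
    return phi + dt * dphi, Qxx + dt * dQxx, Qxy + dt * dQxy
```

The anisotropic active flux -∂_j(χ Q_ij) discussed above can be reconstructed from the same finite-difference helpers by applying `d` to χ times the Q-tensor components.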
§
Movie S1
Constantly undulating nematic lanes in an agent-based simulation.
(Parameters are: ρ L^2=3.15, L_p=11.1. Scale-bar: 15L. Density averaged over a time of 15.9 for better visibility.)
Movie S2
Emergence of a multitude of condensed topological defects in agent-based simulations. Note that the lateral movement of lanes happens on long timescales. A single frame roughly corresponds to the time of 162 that a particle moving straight with velocity v_0 needs to cross the whole system. (Parameters are: ρ L^2=3.2, L_p=11.9. Scale-bar: 15L. Density averaged over a time of 15.9 for better visibility.)
Movie S3
Two condensed topological defects are formed simultaneously in an agent-based simulation. Due to continued undulation of the connecting nematic lanes the defects eventually disintegrate.
(Parameters are: ρ L^2=3.47, L_p=11.1. Scale-bar: 15L. Density averaged over a time of 3 for better visibility.)
Movie S4
Several filamentous arc ejections develop in succession along a nematic lane in an agent-based simulation.
(Parameters are: ρ L^2=2.7, L_p=14.3. Scale-bar: 15L. Density averaged over a time of 15.9 for better visibility.)
Movie S5
Straight and stable nematic lane in an agent-based simulation.
(Parameters are: ρ L^2=1.9, L_p=20.6. Scale-bar: 15L. Density averaged over a time of 15.9 for better visibility.)
Movie S6
Details of a flux in an agent-based simulation from one arm of a condensed topological defect to the two others. The path that is taken by the polymer heads is traced out. Only trajectories that start in the upper left arm and eventually will go to either the lower or upper right arm are visible.
(Parameters are: ρ L^2=3.5, L_p=11.1.)
Movie S7
Emergence of a multitude of condensed topological defects in a simulation of the hydrodynamic model. (Parameters are: β=0.05, κ_ϕ=0.2, ω^a=-0.5, χ_ϕ=0.4, ν_ϕ=1, λ=1,⟨ϕ⟩=1.1 )
Movie S8
Several filamentous arc ejections develop in succession along a nematic lane in a simulation of the hydrodynamic model. (Parameters are: β=0.05, κ_ϕ=0.2, ω^a=-0.5, χ_ϕ=0.4, ν_ϕ=1, λ=1.2,⟨ϕ⟩=1.1)
Movie S9
Straight and stable nematic lane in a simulation of the hydrodynamic model.
(Parameters are: β=0.05, κ_ϕ=0.2, ω^a=-0.5, χ_ϕ=0.4, ν_ϕ=1, λ=1.4,⟨ϕ⟩=1.1)
Movie S10
Three-beam symmetrical arrangement of sources of polar particles. The ensuing nematic currents eventually form a condensed topological defect.
(Parameters are: ρ L^2=3.6, L_p=14.3. Scale-bar: 15L. Density averaged over a time of 15.9 for better visibility.)
|
http://arxiv.org/abs/2307.04293v1 | 20230710010226 | Inverse of the Gaussian multiplicative chaos: an integration by parts formula | [
"Tomas Kojar"
] | math.PR | [
"math.PR"
] |
[for feedback please contact [email protected]]
In this article, we study the analogue of the integration by parts formula from <cit.> in the context of GMC and its inverse.
PART:
Introduction
§ INTRODUCTION
This article is an offshoot application that came up in <cit.> while doing the preliminary work for extending the work in <cit.>. In particular, in their work they start with the Gaussian random field H on the circle with covariance
E[H(z)H(z')]=-ln|z-z'|,
where z, z'∈ℂ have modulus 1. The exponential γ H gives rise to a random measure τ on the unit circle 𝕋, given by
τ(I):=μ_H(I):=∫_I e^γ H_ε(x)-γ^2/2 E[H_ε(x)^2] dx,
for Borel subsets I⊂𝕋=ℝ/ℤ=[0,1) and H_ε is a suitable regularization.
h(x):=τ[0,x]/τ[0,1], x∈ [0,1),
and prove that it gives rise to a Beltrami solution and conformal welding map. The goal is to extend this result to its inverse h^-1 and in turn to the composition h_1^-1∘ h_2 where h_1,h_2 are two independent copies. The motivation for that is of obtaining a parallel point of view of the beautiful work by Sheffield <cit.> of gluing two quantum disks to obtain an SLE loop.
We let Q_τ(x):[0,τ([0,1])]→ [0,1] denote the inverse of the measure τ:[0,1]→ [0,τ([0,1])] i.e.
Q_τ(τ[0,x])=x and τ[0,Q_τ(y)]=y,
for x∈ [0,1] and y∈ [0,τ([0,1])]. The existence of the inverse Q follows from the strict monotonicity of the Liouville measure η, which in turn follows from being non-atomic <cit.>. We use the notation Q because the measure τ can be thought of as the "CDF function" for the "density" γ H and thus its inverse τ^-1=Q is the quantile (also using the notation τ^-1 would make the equations less legible later when we start including powers and truncations). We will also view this inverse as a hitting time for the measure τ
Q_τ(x)=Q_τ(0,x)=T_x:=inf{t≥ 0: τ[0,t]≥ x}.
The inverse homeomorphism map h^-1:[0,1]→ [0,1] is defined as
h^-1(x):=Q_τ(x τ([0,1])) , x∈ [0,1].
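To illustrate these definitions numerically, here is a small sketch (not from the source) that approximates τ on a grid from samples of a regularized field, computes τ[0,x] by cumulative sums, and evaluates the inverse Q_τ and h^{-1} as hitting times via a sorted search; the normalization by the empirical variance is a stand-in for the exact E[H_ε(x)^2].

```python
import numpy as np

def gmc_inverse(H, gamma, dt):
    """Discrete approximation of tau[0, x] and its inverse Q_tau on a grid of [0, 1).
    H     : samples of a centered Gaussian field regularized at the grid scale;
            np.var(H) is used as a stand-in for E[H_eps(x)^2] (an assumption).
    gamma : GMC parameter (subcritical regime assumed)."""
    weights = np.exp(gamma * H - 0.5 * gamma ** 2 * np.var(H)) * dt
    tau = np.concatenate([[0.0], np.cumsum(weights)])     # tau[0, k * dt]
    grid = np.arange(len(tau)) * dt

    def Q(x):
        # hitting time: smallest grid point t with tau[0, t] >= x
        k = np.searchsorted(tau, x, side="left")
        return grid[min(k, len(grid) - 1)]

    def h_inv(u):
        # normalized inverse homeomorphism h^{-1}(u) = Q_tau(u * tau[0, 1])
        return Q(u * tau[-1])

    return tau, Q, h_inv
```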
Since the inverse of GMC didn't seem to appear in other problems, it was studied very little and so we had to find and build many of its properties. In the article <cit.>, we go over various basic properties of the inverse Q. Our guide for much of this work was trying to transfer the known properties of the GMC measure to its inverse, the Markovian structure for the hitting times of Brownian motion (such as Wald's equation and the independence of the increments of hitting times), and then trying to get whatever property was required for the framework set up by <cit.> to go through successfully. This was a situation where a good problem became the roadmap for finding many interesting properties for the inverse of GMC and thus GMC itself.
When studying the expected value E[Q(a)], we had trouble getting an exact formula. So in the spirit of <cit.>, where they used Malliavin calculus to study the hitting times of processes, we tested using Malliavin calculus to gain a better understanding of E[Q(a)]. Our guide for applying Malliavin calculus is also the article <cit.> where they applied Malliavin calculus to imaginary GMC.
§.§ Acknowledgements
We thank I.Binder, Eero Saksman and Antti Kupiainen. We had numerous useful discussions over many years.
§ MAIN RESULT
In <ref>, we study the shifted field X_ζ=U_ε^r(τ_a+ζ). We will obtain an integration by parts formula for that field using the techniques from <cit.>. Then we will integrate over ζ to obtain relations for the shifted GMC and the inverse in <ref>.
For fixed ψ∈ C_c(ℝ), where we normalize ∫_ℝψ(a) da=1, and a,L≥ 0, we have the relation
E∫_0^∞ψ(a) η[τ_a,τ_a+L] da = L+λ E∫_0^r∧L∫_ζ^∞ψ(η(θ-ζ))∫_(θ-r)∨0^θ(1/(θ-t)-1/r) dη(t) dη(θ) dζ,
and
E∫_0^∞ψ(a) τ_a da = ∫_0^∞ψ(a) a da+λ E∫_0^r∫_0^∞ψ(η(θ))∫_(θ+ζ-r)∨0^θ+ζ(1/(θ+ζ-t)-1/r) dη(t) dη_ζ(θ) dζ,
where dη_ζ(θ):=e^U(θ+ζ) dθ.
PART:
Integration by parts formula
§ SETUP FOR MALLIAVIN CALCULUS FOR THE INVERSE
In this part we will use the setup from <cit.> in order to apply the integration by parts formula. In particular, for the Gaussian process X_t:=U_ϵ^δ(t) with covariance
R(t,s):= { ln(r/ε)-(1/ε-1/r)|t-s| , |t-s|≤ε
ln(r/|t-s|) +|t-s|/r-1 , δ>|t-s|≥ε,
we will use the Malliavin calculus setup for Gaussian processes as developed in <cit.>. Then, once we obtain the various integration by parts formulas, we will take the limit ϵ→ 0 using the convergence results for GMC (e.g. <cit.>). For shorthand we will write
U̅(t)=:γ U_ϵ(t):=γ U_ϵ(t)-γ^2/2ln1/ϵ.
Let ℋ be the Hilbert space defined as the closure of the space of step functions on [0,∞) with respect to the scalar product
⟨1_[0,s] ,1_[0,t]⟩_ℋ:=R(t,s).
The mapping 1_[0,t]↦ X_t can be extended to an isometry between ℋ and the Gaussian space H_1(X) associated with X. We will denote this isometry by ϕ↦ X(ϕ). Let 𝒮 be the set of smooth and cylindrical random variables of the form
F=f(X(ϕ_1),...,X(ϕ_n))
for some n≥ 1 and f∈ C^∞_b(ℝ^n) (smooth with bounded partial derivatives) and ϕ_i∈ℋ. The derivative operator D of a smooth and cylindrical random variable F∈𝒮 is defined as the ℋ-valued random variable
DF=∑_i=1^n∂ f/∂ x_i (X(ϕ_1),...,X(ϕ_n)) ϕ_i.
The derivative operator D is then a closable operator from L^2(Ω) into L^2(Ω;ℋ). The Sobolev space 𝔻^1,2 is the closure of 𝒮 with respect to the norm
‖F‖_1,2^2=E(F^2)+E(‖DF‖_ℋ^2).
The divergence operator δ is the adjoint of the derivative operator. We say that a random variable u∈ L^2(Ω;ℋ) belongs to the domain of the divergence operator, denoted by Dom (δ), if
|E⟨DF,u⟩_ℋ|≤ c_u‖F‖_L^2(Ω)
for any F∈𝒮. In this case δ(u) is defined by the duality relationship
E[Fδ (u)]= E⟨DF,u⟩_ℋ,
for any F∈𝔻^1,2.
§.§ Regularity of the covariance
The following are some of the hypotheses used in the development of Malliavin calculus for Gaussian processes <cit.>. The variance of an increment is
E[(U_ε^δ(t)-U_ε^δ(s))^2] =(2|t-s|/ε)(1-ε/δ),
which is strictly positive for t≠ s. The covariance
R(τ,t):= { ln(r/ε)-(1/ε-1/r)|τ-t| , |τ-t|≤ε
ln(r/|τ-t|) +|τ-t|/r-1 , r>|τ-t|≥ε
is in fact an absolutely continuous function as a map t↦ R(τ,t) for each τ: when |τ-t|≤ε, we have the absolutely continuous function g(t)=|τ-t|, and when |τ-t|> ε, we use that ln(1/x) is a differentiable function for x>0. We compute the partial derivative to be
∂R(τ,t)/∂t= {
-(1/ε-1/r) (t-τ)/|t-τ| , |τ-t|≤ε
-(1/|t-τ|) (t-τ)/|t-τ| +(1/r) (t-τ)/|t-τ| , r>|τ-t|≥ε .
Therefore, for t>τ the derivative is negative, ∂R(τ,t)/∂t<0, and for t<τ it is positive, ∂R(τ,t)/∂t>0. So it is not continuous on the diagonal, which was one of the constraints in <cit.>. However, in the work <cit.>, they manage to weaken this to the following hypotheses, which are satisfied in this setting, see <ref>.
For all T>0 the supremum of the integral of the partial derivative is finite for any α≥ 1
sup_s∈ [0,T]∫_0^T|∂R(s,t)/∂t|^α dt<∞
and in fact for any continuous function f we have that
s↦ F(s):=∫_0^T f(t) ∂R(s,t)/∂t dt
is continuous on [0,∞).
Finally, because of the stationarity, the process U_ε(t) does not necessarily diverge to +∞ as t→ +∞. This means that if we apply the results from <cit.>, we have to maintain the upper truncation τ_a∧ T.
§.§ Regularity of U_ϵ(τ_a) and the inverse
In this section we discuss the Malliavin differentiability situation for U_ϵ(Q_ε(a)) and for the inverse Q(x), in the limit ϵ=0. For the stopped process there is generally a lack of Malliavin differentiability. For example, for Brownian motion consider any stopping time T, e.g. the hitting time T=T_a of the integrated geometric Brownian motion of level a>0,
∫_0^T_a e^B_s-s/2 ds=a.
Then the stopped Brownian motion W_T is not Malliavin differentiable (<cit.>). If it were differentiable, we would have that W_T=∫_0^∞1_s≤ T dW_s∈𝔻^1,2 and 1_s≤ T∈𝔻^1,2. However, by <cit.> we would get that for any s≥ 0 either P[s≤ T]=0 or 1, which is a contradiction.
On the other hand, for the inverse for ϵ>0, there are some results. The Malliavin derivative for increasing integral processes has been studied in <cit.>.
<cit.>
Let (A_t)_t∈ [0,1] be a continuous process such that:
* Strictly positive: A_t>0 for all t∈ [0,1].
* There exists a version of A such that for all h∈ H, the map (λ, t)↦ A_t(ω+λ h) is continuous.
* Finite negative moments: sup_t∈ [0,1]A_t^-1∈ L^p for p≥ 2.
* Finite Malliavin derivative moments: A∈ L^p([0,1];𝔻^1,p) for p≥ 2.
For a fixed constant c>0, consider the hitting time of the integrated process T_c:=inf{t>0: ∫_0^tA_s ds≥ c}. Then we have T_c∈𝔻^1,p for p≥ 2 with Malliavin derivative
DT_c=-(1/A_T_c)∫_0^T_c DA_r dr· 1_{T_c<1}.
In our case A_t:= :e^γ U_ϵ(t): =e^γ U_ϵ(t)-γ^2/2ln(1/ε) satisfies all the above assumptions. However, the fraction -1/A_T_c=-e^-γ U_ϵ(T_c)+γ^2/2ln(1/ε) is likely diverging
because for c≈ 0 we have T_c≈ 0, yet the expectation at zero diverges:
E e^-γ U_ϵ(0)+γ^2/2ln(1/ε)=e^γ^2ln(1/ε)=ε^-γ^2→ +∞.
So likely the above formula will not make sense in the limit ε→ 0. This lack of differentiability also appears in the works <cit.>; nevertheless, through mollification they manage to extract some interesting formulas that we will try to mimic for the setting of GMC. We apply this first step to the inverse and, to match notation, write τ_a:=Q_ε(a) and also suppress the ε in η(θ):=η_ε(θ).
We use the same regularization. Suppose that ϕ is a nonnegative smooth function with compact support in (0,+∞) and define for any T > 0
Y:=∫_0^∞ϕ(a) (τ_a∧ T) da .
The next result states the differentiability of the random variable Y in the sense of Malliavin calculus and provides an explicit formula for its derivative.
The derivative for the mollified inverse Y is
D_rY= -γ∫_0^Tϕ(η(θ))∫_0^θ1_[0,s](r) dη(s) dθ=-γ∫_η(r)^η(T)ϕ(y) (y-η(r)) dτ_y.
As we can see, in the above formula we get dτ_y, which by the inverse function theorem is equal to e^-γ U_ε(τ_y)+γ^2/2ln(1/ε) dy, in agreement with the formula <ref>.
Due to ϕ's compact support the Y is bounded, and so we can apply Fubini's theorem
Y=∫_0^∞ϕ(a)∫_0^τ_a∧ T dθ da =∫_0^T∫_η(θ)^∞ϕ(a) da dθ.
So here we need to compute the Malliavin derivative of η(θ). By linearity and chain rule for the derivative operator D we obtain
D_t∫_0^x e^γU_ε(s)-1/2E[(γU_ε(s))^2] ds= ∫_0^x e^γU_ε(s)-1/2E[(γU_ε(s))^2]γ D_tU_ε(s) ds
= ∫_0^x e^γU_ε(s)-1/2E[(γU_ε(s))^2]γ 1_[0,s](t) ds
= γη(t,x∨t ).
Since ε>0, we have that E[η(t,x∨ t )^2]<∞ and so η(θ)∈𝔻^1,2 (this can also work in the limit ε=0 by taking 2/γ^2>2⇔γ<1). Therefore, by the chain rule we get Y ∈𝔻^1,2 with
D_rY=-∫_0^Tϕ(η(θ)) D_r(η(θ)) dθ =-∫_0^Tϕ(η(θ)) γη(r,θ∨ r ) dθ.
Finally, making the change of variable η(θ )= y yields
D_rY=-γ∫_η(r)^η(T)ϕ(y) (y-η(r)) dτ_y.
§ INTEGRATION BY PARTS FORMULA
In this section we will obtain an integration by parts formula for η(τ_a,τ_a+L) using the techniques from <cit.>. We apply the Malliavin calculus framework to the Gaussian field U_ε_n for each fixed ε_n and then at the very end we will take limits ε_n→ 0 in the integration by parts formulas for η_ε_n(τ_ε_n,a,τ_ε_n,a+L). For simplicity we will temporarily write η=η_ε_n and τ_a=τ_ε_n,a.
§.§ Nonlinear expected value
For the usual GMC we know that its expected value is linear: E[η(a,b)]=b-a. Using the Markovian-like δ-(SMP) property from before, we obtain a nonlinear relation for the expected value of the inverse.
We have for a>0 and r≥δ
E[η^δ(Q^δ(a),Q^δ(a)+r)]-r=E[Q^δ(a)]-a =∫_0^∞ P[Q_R(t)^δ(a)≤ t ≤ Q^δ(a)] dt
=∫_0^∞ P[η^δ(t)≤ a ≤η_R(t)^δ(t)] dt >0.
In particular, for any a>0 we have E[Q^δ(a)]>a.
This proposition shows that the GMC η does not satisfy a "strong" translation invariance, i.e. 𝔼[η(Q(a),Q(a)+r)]≠ r. So the same is likely true for Q(a,a+t):
𝔼[Q(a,a+t)]=∫_0^∞ℙ[t>η^δ(Q^δ(a),Q^δ(a)+r)]\,dr≠∫_0^∞ℙ[t>η^δ(0,r)]\,dr=𝔼[Q(t)].
It also shows that a↦𝔼[Q^δ(a)] is a nonlinear function of a.
Ideally we would like to check whether the RHS of <ref> is uniformly bounded in a>0, i.e. whether
sup_{a>0}∫_0^∞ℙ[η(t) ≤ a≤η_{R(t)}(t)]\,dt <∞  or  =∞,
but it is unclear how the window [η(t),η_{R(t)}(t)] grows as t→ +∞.
§.§ Assumptions
In the work <cit.>, they make some assumptions about the covariance R(s,t) of the field that are worth comparing with even though we have to do a new proof for η.
(H1) For all t∈ [0, T ], the map s↦ R(s, t) is absolutely continuous on [0, T ] and for some α>1 we have
sup_{s∈ [0,T]}∫_0^T|∂R(s,t)/∂t|^α\,dt<∞.
(H3) The function R_t := R(t, t) has bounded variation on [0, T ].
(H5) lim sup_t→+∞ X_t = +∞ almost surely.
(H6) For any 0 ≤ s < t, we have
𝔼[(X_t - X_s)^2] > 0.
(H7) For any continuous function f , we have that
s↦ F(s):=∫_0^T f(t)\,∂R(s,t)/∂t\,dt
is continuous on [0,∞).
Even though our setting is different since we study hitting times of η(t) and not of X_t, these assumptions have analogues. In the <ref> we compute the derivative of
R(τ,t):= { ln(r/ε)-(1/ε-1/r)|τ-t| ,  |τ-t|≤ε
ln(r/|τ-t|)+|τ-t|/r-1 ,  r>|τ-t|≥ε.
to be
∂R(τ,t)/∂t= {
-(1/ε-1/r)\,(t-τ)/|t-τ| ,  |τ-t|≤ε
-(1/|t-τ|)\,(t-τ)/|t-τ|+(1/r)\,(t-τ)/|t-τ| ,  r>|τ-t|≥ε.
and show the assumptions (H1), (H3) and (H7). The assumption (H6) is immediate from the covariance computation. Finally, the analogue of assumption (H5) for η is immediate since η is in fact a strictly increasing function.
§.§ Integration by parts formula for truncated hitting time
As in those works, here too we study the exponential evaluated at the stopping time:
M_{t+ζ}:=e^{λ U_{ε_n}^δ(t+ζ)-(λ^2/2)ln(1/ε_n)},
for t,ζ≥ 0 and some λ∈ [0,√(2)). The ζ is important here because we will then integrate over ζ to obtain a formula for η(τ_a,τ_a+L) with a,L≥ 0.
The following proposition follows from <cit.> and it asserts that δ_tM:=(1/λ)(M_{t+ζ}-1) satisfies an integration by parts formula, and in this sense it coincides with an extension of the Skorokhod divergence of M\,1_{[0,t]}.
<cit.>
For any smooth and cylindrical random variable of the form F=f(X_t_1,...,X_t_n) for t_i∈ [0,t], we have
𝔼[F\,δ_tM]=𝔼[∑_{i=1}^n (∂ f/∂ x_i)(X_{t_1},...,X_{t_n})∫_0^{t+ζ}M_s\,∂R(s,t_i)/∂s\,ds].
By writing
Y=∫_0^∞ϕ(a)\,(τ_a∧ T)\,da=∫_0^T∫_{η(θ)}^{∞}ϕ(a)\,da\,dθ,
where ϕ∈ C_c^∞(ℝ), we will apply <ref> to F:=p(Y-t), where p∈ C_c^∞(ℝ), and to M_{t+ζ}. In particular, due to the discontinuity of ∂R/∂s along the diagonal, we choose p_δ(x-y)=0 when x>y, as they do in <cit.>. The following lemma uses the proof structure of <cit.>.
We have the integration by parts relation
𝔼[p(Y)δ_tM]= - 𝔼[p'(Y)∫_0^Tϕ(η(θ))∫_0^{t+ζ}M_s∫_0^θ ∂R(b,s)/∂s\,η(db)\,ds\,dθ]
= - 𝔼[p'(Y)∫_0^{η(T)}ϕ(y)∫_0^{t+ζ}M_s∫_0^y ∂R(τ_b,s)/∂s\,db\,ds\,dτ_y].
The inverse τ_y is a strictly increasing continuous function (even in the limit ε=0) and so we can define its Riemann–Stieltjes integral. This is because of (a) the non-atomic nature of GMC <cit.> and (b) GMC's continuity and strict monotonicity, which in turn follow from bi-Hölder-type bounds over dyadic intervals <cit.>.
The strategy is to discretize the domain [0,T] and thus bring us to the setting of proposition <ref>. Consider an increasing sequence D_N:=σ_i: 0=:σ_0<σ_1<...<σ_N:=T of finite subsets of [0,T] such that their union ⋃_N≥ 1D_N is dense in [0,T]. Set D_N^θ:=D_N∩ [0,θ] with σ(θ):=max(D_N^θ), to let
η_N(θ):=η_N(σ(θ)):=∑_k=1^σ(θ)U̅_(σ_k)σ_k-σ_k-1
and
Y_N:= ∫_0^Tψ(η_N(θ) ) =∑_m=1^Nψ(η_N(σ_k-1) ) σ_k-σ_k-1.
Then, Y_N and p(Y_N) are Lipschitz functions of U_(t) :t ∈ D_N. The partial σ_i-derivative is
∂ (p(Y_N))/∂σ_i=-p'(Y_N)∑_k=i+1^Nϕ(η_N(σ_k-1) )·U̅_(σ_i)σ_i-σ_i-1·σ_k-σ_k-1
and so the formula <ref> implies that
p(Y_N)δ_tM= - ∑_i=2^Np'(Y_N)∑_k=i+1^Nϕ(η_N(σ_k-1) )· U̅_(σ_i) σ_i-σ_i-1 ·σ_k-σ_k-1 ∫_0^t+ζM_sRs(σ_i,s)
= - p'(Y_N)∑_k=2^Nϕ(η_N(σ_k-1) ) ∫_0^t+ζM_s∑_i=1^k-1U̅_(σ_i) Rs(σ_i,s)σ_i-σ_i-1 σ_k-σ_k-1 .
The function r ↦∫_0^t+ζM_sRs(s,r) is continuous and bounded by condition (H1). As a consequence, we can take the N-limit of the above Riemann sum to get the integral formula
p(Y)δ_tM=- p'(Y)∫_0^Tϕ(η(θ) ) ∫_0^t+ζM_s∫_0^θU̅_(b)Rs(b,s) .
Finally, making the change of variable η(θ )= y yields
p(Y)δ_tM= - p'(Y)∫_0^η(T)ϕ(y) ∫_0^t+ζM_s∫_0^τ_y U̅_(b) Rs(b,s) _y
= - p'(Y)∫_0^η(T)ϕ(y) ∫_0^t+ζM_s∫_0^y Rs(τ_b,s) _y ,
where in the last equality we used that η and τ are inverses of each other.
§.§ Limits in the Integration by parts relation
In this section we set a specific regularization ϕ_ε(x)=(1/ε)1_{[-1,0]}(x/ε) in <ref>:
Y_{ε,a}:=∫_0^∞ϕ_ε(x-a)\,(τ_x∧ T)\,dx=(1/ε)∫_{a-ε}^a(τ_x∧ T)\,dx=∫_0^1(τ_{a-εξ}∧ T)\,dξ,
where we let τ_x=0 when x<0, and we take limits of ϕ=ϕ_ and p=p_δ as ,δ→ 0. Before that step, since the derivative of the mollification p' will diverge in the limit δ→ 0, we first integrate both sides in <ref> as done in <cit.>.
Fix ψ∈ C_c^∞(ℝ) and set c:=∫_ℝψ(a)\,da. We have the following integration by parts relation
∫_0^∞ψ(a)∫_0^∞p_δ(Y_,a-t)M_t+ζ
= c-λ ∫_0^∞∫_0^η(T)∫_0^1ψ(y+w)p_δ(Y_,y-w-t) M_t+ζ∫_0^y Rt(τ_b,t+ζ).
By further taking the limits ε,δ→ 0 we obtain the following relation for each T>0:
∫_0^∞ψ(a)\,𝔼[M_{τ_a∧T+ζ}]\,da =c-λ\,𝔼[∫_0^{η(T)}ψ(y)M_{τ_y+ζ}∫_0^y ∂R(τ_b,τ_y+ζ)/∂t\,db\,dτ_y].
By integrating over ζ∈ [0,L] we obtain an IBP for the shifted GMC:
∫_0^∞ψ(a)\,𝔼[η(τ_a∧T,τ_a∧T+L)]\,da
= cL-λ\,𝔼[∫_0^L∫_0^{η(T)}ψ(y)M_{τ_y+ζ}∫_0^y ∂R(τ_b,τ_y+ζ)/∂t\,db\,dτ_y\,dζ].
Continuing from <ref> we rewrite it as
∫_0^∞p_δ(Y_,a-t)M_t+ζ = 1+λ∫_0^∞p_δ(Y_,a-t)δ(M_[0,t+ζ]
= 1-λ∫_0^∞p_δ'(Y_,a-t)∫_0^η(T)ϕ_(y-a) ∫_0^t+ζM_s∫_0^y Rs(τ_b,s) _y .
Now to remove the p' issue, we do an integration by parts for the integral to obtain
1-λ∫_0^∞p_δ(Y_,a-t)∫_0^η(T)ϕ_(y-a) M_t+ζ∫_0^y Rt(τ_b,t+ζ) _y .
We multiply both sides by ψ(a) and integrate over the variable a
∫_ψ(a)∫_0^∞p_δ(Y_,a-t)M_t+ζ
= c-λ∫_ψ(a) ∫_0^∞p_δ(Y_,a-t)∫_0^η(T)ϕ_(y-a) M_t+ζ∫_0^y Rt(τ_b,t+ζ) _y.
Here for the -integral we use that ϕ_(y-a)=1/_[-1,0](y-a/) to write
c-λ ∫_0^∞∫_0^η(T)1/∫_y^y+ψ(a)p_δ(Y_,a-t) M_t+ζ∫_0^y Rt(τ_b,t+ζ) _y.
Finally, we do a change of variable a = y + w
c-λ ∫_0^∞∫_0^η(T)∫_0^1ψ(y+w)p_δ(Y_,y-w-t) M_t+ζ∫_0^y Rt(τ_b,t+ζ) _y
=: c-λ ∫_0^∞∫_0^η(T)F_,δ(y,t) G(t,y)_y
for
F_,δ(y,t) := ∫_0^1ψ(y+w)p_δ(Y_,y-w-t),
G(t,y):= M_t+ζ∫_0^y Rt(τ_b,t+ζ).
We next take limits and justify their swapping with the integrals.
Limit ε→ 0
We use that the inverse τ_y is a continuous function to take limit
Y_,y- w=∫_0^1(τ_y- w-ξ∧ T) = τ_y∧ T
and so the limiting w-integral is
∫_0^1ψ(y+w)p_δ(Y_,y-w-t)
= ∫_0^1ψ(y)p_δ(τ_yw+τ_y (1-w)-t)
= ψ(y)p_δ(τ_y-t).
We next justify that we can swap limit and integrals in <ref>. By the compact support and smoothness of ϕ and p we have a uniform constant K:
|F_{ε,δ}(y,t)|=|∫_0^1ψ(y+εw)\,p_δ(Y_{ε,y-εw}-t)\,dw|≤ K.
Moreover, we can assume that the compact support is contained in supp(p_δ)⊆ [0,T+δ], and so the infinite integral in <ref> gets restricted to [0,T+δ]. We also use the uniform constant to bound as follows:
(<ref>) ≤K ∫_0^T+δ∫_0^η(T) G(t,y)_y.
Finally, we will need to revert to the previous formula in terms of GMC
∫_0^y ∂R(τ_b,s)/∂s\,db=∫_0^{τ_y} ∂R(b,s)/∂s\,η(db).
We put all these together
∫_0^∞∫_0^η(T)F_,δ(y,t) G(t,y)_y
≤ K ∫_0^η(T) _y∫_0^T+δ∫_0^T+δ Rt(b,t+ζ)(b) M_t+ζ
= KT∫_ζ^ζ+T+δ∫_0^T+δ Rt(b,t)(b)(t)
= KT∫_ζ^ζ+T+δ∫_0^T+δ Rt(b,t),
where we also used that τ_y≤ T+δ and applied Fubini-Tonelli to integrate-out the GMCs. This final quantity is indeed finite due to the continuity of the integral as explained in <ref>. Therefore, all together we can use dominated convergence theorem to swap limits and integral
(<ref>)= c-λ ∫_0^T+δ∫_0^η(T)ψ(y)p_δ(τ_y-t) M_t+ζ∫_0^y Rt(τ_b,t+ζ) _y.
Limit δ→ 0
Here we follow parts of <cit.>. We just use from <ref> that the integral
∫_0^y ∂R(τ_b,t+ζ)/∂t\,db=∫_0^{τ_y}\overline{U}_ε^r(b)\,∂R(b,t+ζ)/∂t\,db
is continuous in t, even if ζ=0, as long as ε_n>0. Therefore, we can take the limit in δ→ 0. In terms of using the dominated convergence theorem, we use the same dominating factor as above.
In summary we get the following limit
δ(<ref>)= c-λ ∫_0^η(T)ψ(y)M_τ_y+ζ∫_0^y Rt(τ_b,τ_y+ζ) _y.
§ FORMULA FOR THE SHIFTED GMC
In this section we use the IBP formula in <ref> to obtain a formula for the shifted GMC and the expected value of the hitting time. We will work with field U_ε^r for r>>0 and ζ>0. As mentioned in <ref> we already have one formula. By integrating over ζ∈ [0,L] we obtain an IBP for shifted-GMC
∫_0^∞ψ(a)\,𝔼[η(τ_a∧T,τ_a∧T+L)]\,da
= cL-λ\,𝔼[∫_0^L∫_0^{η(T)}ψ(y)M_{τ_y+ζ}∫_0^y ∂R(τ_b,τ_y+ζ)/∂t\,db\,dτ_y\,dζ].
In the rest of the section we try to simplify this formula.
§.§ Limit in ε→ 0 for fixed ψ
In <ref>, ideally one would like to investigate taking ε→ 0 and letting the support of ψ=ψ_n shrink to a point a_0. Assuming one can swap limits with integrals, one would get the following formula:
𝔼[η(τ_{a_0}∧T,τ_{a_0}∧T+L)]
= cL-λ\,𝔼[∫_0^L M_{τ_{a_0}+ζ}∫_0^{a_0} ∂R(τ_b,τ_{a_0}+ζ)/∂t\,db\;(1/M_{τ_{a_0}})\,dζ],
where the factor 1/M_{τ_{a_0}} originates from the formal limit of dτ_y/dy=e^{-\overline{U}_ε^r(τ_y)}. The issue here is that this latter limit does not exist because the normalization is reversed (the same is true even for the field e^{-\overline{U}_ε^r(s)} at deterministic s, since its mean diverges like ε^{-γ^2}).
Therefore, we will study the IBP formula for fixed ψ and ε→ 0.
For fixed ψ∈ C_c(ℝ), normalized so that ∫_ℝψ(a)\,da=1, we have the relation
∫_0^∞ψ(a)\,𝔼[η(τ_a∧T,τ_a∧T+L)]\,da = L+λ\,𝔼[∫_0^r∫_ζ^{ζ+T}ψ(η(θ-ζ))∫_{(θ-r)∨0}^{θ}(1/(θ-t)-1/r)\,η(dt)\,η(dθ)\,dζ],
where the GMCs are taken with the field at ε=0. For simplicity we take T≥ 1>r>0.
One corollary is the inequality
∫_0^∞ψ(a)\,𝔼[η(τ_a∧T,τ_a∧T+L)]\,da ≥ L.
Here we can actually take a limit of ψ=ψ_n whose support converges to a fixed value a_0, to get the inequality
𝔼[η(τ_{a_0},τ_{a_0}+L)] ≥ L,
which agrees with the result in <ref>.
§.§.§ Proof of <ref>
We start by writing the IBP formula explicitly using the covariance function.
Using the explicit formula of the covariance we have the expression
∫_0^y Rt(τ_b,τ_y+ζ)=-∫_a^b1/τ_y+ζ-t-1/r(t) -1/-1/r b, τ_y,
for
a:= τ_y∧(τ_y+ζ-r)∨0b:= τ_y∧(τ_y+ζ-)∨0.
For ease of notation in the proof we let s:=τ_y+ζ and
a:= τ_y∧ (s-r)∨ 0, b:= τ_y∧ (s-)∨ 0, c:= τ_y∧ (s+) d:= τ_y∧ (s+r).
Using the explicit formula for the partial derivative in <ref> we have the following
∫_0^τ_y U̅_^r(t) Rs(t,s)
= ∫_a^b-1/s-t(t)+ 1/r a,b + ∫_c^d1/t-s(t)+ -1/r c,d+1/-1/r s∧τ_y,c-b,s∧τ_y.
For s=τ_y+ζ we have
a= τ_y∧ (τ_y+ζ-r)∨ 0, b= τ_y∧ (τ_y+ζ-)∨ 0, c:= τ_y d:= τ_y.
Therefore, the above simplifies
∫_0^τ_y U̅_(t) Rs(t,s)|_s=τ_y+ζ
= ∫_a^b-1/s-t(t)+ 1/r a,b +0+ -1/r ·0+1/-1/r 0-b, τ_y
= -∫_a^b1/s-t-1/r(t) -1/-1/r b, τ_y.
Returning to <ref> we write
(<ref>)= ∫_0^∞ψ(a)ητ_a∧T,τ_a∧T+L
= c L-λ ∫_0^L∫_0^η(T)ψ(y)M_τ_y+ζ-∫_a^b1/τ_y+ζ-t-1/r(t) -1/-1/r b, τ_y _y
= c L-λ ∫_0^L∫_0^Tψ(η(θ))M_θ+ζ-∫_a^b1/θ+ζ-t-1/r(t) -1/-1/r b, θ ,
where we also undid the change of variables τ_y=θ y=η(θ), and let
a:= θ∧(θ+ζ-r)∨0b:= θ∧(θ+ζ-)∨0.
Taking → 0 on the LHS is clear since ψ is compactly supported and bounded. The question is what happens in the RHS. We study each term.
We have the limit
∫_0^L∫_0^Tψ(η(θ))M_θ+ζ -1/-1/r b, θ =0.
In the term b, θ, since b:= θ∧ (θ+ζ-)∨ 0, we have that as soon as ζ≥, we get identically zero b, θ =0 for every > 0. So we just study the integrals
∫_0^∫_0^Tψ(η(θ))M_θ+ζ -1/-1/r (θ+ζ-)∨0, θ
= -1/-1/r ∫_0^ ∫_ζ^ζ+Tψ(η(θ-ζ))(θ-)∨0, θ-ζ (θ).
Here we can apply Lebesgue differentiation theorem. We study the difference of functions
f(ζ)-g_(ζ):= ∫_ζ^ζ+Tψ(η(θ-ζ))0, θ-ζ (θ)- ∫_^ζ+Tψ(η(θ-ζ))0,θ- (θ).
In the first function by taking limit → 0 we get
_0^ f(ζ)→f(0)= ∫_0^Tψ(η(θ)) 0,θ (θ).
In the second function, we separate the two limits
_0^ ∫_ζ^ζ+Tψ(η(θ-ζ))0,θ (θ)+_0^ ∫_^ζ+Tψ(η(θ-ζ))0,θ--0,θ (θ).
The first term converges to the same limit as in <ref> and so they cancel out. Therefore, it suffices to show that the second term in <ref> goes to zero. We pull out the supremum
_0^ ∫_^ζ+Tψ(η(θ-ζ))0,θ--0,θ (θ)
≤ _0^sup_≤z≤+Tz-,z ·∫_ζ^ζ+Tψ(η(θ-ζ)) (θ).
The quantity inside the expectation is uniformly bounded in because we can use to separate them
sup_≤z≤+Tz-,z^2^1/2· ∫_ζ^ζ+Tψ(η(θ-ζ)) (θ)^2^1/2,
where due to <ref> the first factor goes to zero as → 0.
We return to take the limit → 0 in <ref>
(<ref>)= ∫_0^∞ψ(a)ητ_a∧T,τ_a∧T+L
= c L-λ ∫_0^L∫_ζ^ζ+Tψ(η(θ-ζ))-∫_a^b1/θ-t-1/r(t) (θ),
for
a:= θ-ζ∧(θ-r)∨0b:= θ-ζ∧(θ-)∨0.
We note here that if ζ≥ r, then we get a=θ-ζ=b and so the inner integral becomes zero. So we are left with
∫_0^r∫_ζ^ζ+Tψ(η(θ-ζ))-∫_ (θ-r)∨0^ b1/θ-t-1/r(t) (θ).
The following lemma concludes the proof of <ref>.
We have the limit
∫_0^r∫_ζ^ζ+Tψ(η(θ-ζ))∫_ (θ-r)∨0^θ-ζ∧(θ-)∨01/θ-t-1/r(t) (θ)
= ∫_0^r∫_ζ^ζ+Tψ(η(θ-ζ))∫_ (θ-r)∨0^θ1/θ-t-1/r(t) (θ).
As a heuristic, we study the integrals without any GMCs:
∫_0^r∫_ζ^ζ+T∫_ (θ-r)∨0^θ-ζ1/θ-t-1/r = ∫_0^r∫_0^T∫_ (θ+ζ-r)∨0^θ1/θ+ζ-t -rT-r/6
= ∫_0^r∫_0^T ln1/ζ-ln1/r∧θ+ζ -rT-r/6
= -rln1/r1-3r/2-rT-r/6.
So we see that even for r→ 0 we still have finiteness in the limit → 0.
[proof of <ref>]
We will apply dominated convergence theorem. In terms of limits we study the inner integrals
f(ζ):=∫_ζ^ζ+Tψ(η(θ-ζ))∫_ (θ-r)∨0^θ-ζ∧(θ-)∨01/θ-t-1/r(t) (θ)
Since we have fixed ψ and it has compact support, we get that it is bounded and so we upper bound
f(ζ)≤ K ∫_ζ^ζ+T∫_ (θ-r)∨0^θ-ζ∧(θ-)∨01/θ-t(t) (θ)
= K∫_ζ^ζ+T ∫_ (θ-r)∨0^θ-ζ∧(θ-)∨01/θ-t^1+γ^2
⪅ T/ζ^γ^2,
where we evaluate the correlation for the two GMCs. This factor is still integrable as long as γ^2<1. Therefore, we can indeed apply the dominated convergence theorem.
§.§.§ IBP Formula for inverse
We justify taking the infinite limit T→ +∞.
The infinite-T limit of <ref> is
∫_0^∞ψ(a)\,𝔼[η(τ_a,τ_a+L)]\,da = L+λ\,𝔼[∫_0^r∫_0^∞ψ(η(θ))∫_{(θ+ζ-r)∨0}^{θ+ζ}(1/(θ+ζ-t)-1/r)\,η(dt)\,dη_ζ(θ)\,dζ],
where we used the notation dη_ζ(θ):=e^{U^r_0(ζ+θ)}\,dθ.
Therefore, for L≥ r we use <ref> to obtain the following formula for the expected value of the inverse.
The inverse satisfies the following integration by parts formula
∫_0^∞ψ(a)\,𝔼[τ_a]\,da = ∫_0^∞ψ(a)\,a\,da+λ\,𝔼[∫_0^r∫_0^∞ψ(η(θ))∫_{(θ+ζ-r)∨0}^{θ+ζ}(1/(θ+ζ-t)-1/r)\,η(dt)\,dη_ζ(θ)\,dζ].
[proof of <ref>]
Since ψ is compactly supported supp(ψ)⊂ [0,S] for some S>0 we get that the integral is zero as soon as
η(θ)>S.
So for the LHS in <ref> we have
∫_0^Sψ(a)ητ_a∧T,τ_a∧T+L .
Since the shifted GMC ητ_a∧ T,τ_a∧ T+L is continuous and uniformly bounded in T
η(τ_a∧T,τ_a∧T+L)≤η(0,τ_a+L),
we can apply dominated convergence theorem. For the RHS we start by undoing the change of variables θ↔τ_y to write
(<ref>)=L+λ ∫_0^r∫_0^η(T)∧Sψ(y)∫_ (τ_y+ζ-r)∨0^τ_y+ζ1/τ_y+ζ-t-1/r(t) e^U_τ_y+ζ.
Here we use the following limiting ergodic statements for GMC <cit.>.
Let M be a stationary random measure on admitting a moment of order 1+δ for δ>0. There is a nonnegative integrable random variable Y∈ L^1+δ such that, for every bounded interval I⊂,
lim_T →∞1/T M(T I) = Y |I| almost surely and in L^1+δ,
where |·| stands for the Lebesgue measure on . As a consequence, almost surely the random measure
A∈ℬ(ℝ)↦(1/T)M(TA)
weakly converges towards Y|·| and 𝔼_Y[M(A)]=Y|A| (𝔼_Y[·] denotes the conditional expectation with respect to Y).
For GMC the Y variable is equal to one, Y=1. One way to see this is using the independence of distant GMCs: by splitting η^1(0,n)/n into alternating even and odd intervals [k,k+1] we obtain two independent sequences, and then the strong law of large numbers gives the convergence η^1(0,n)/n → (1/2)𝔼[η^1(0,1)]+(1/2)𝔼[η^1(1,2)]=1 almost surely.
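To spell out the splitting just mentioned (a routine verification, under the assumption that the truncation range of η^1 is 1, so that unit intervals of the same parity are independent): for even n=2m,
η^1(0,2m)/2m = (1/2)·(1/m)∑_{j=0}^{m-1}η^1(2j,2j+1) + (1/2)·(1/m)∑_{j=0}^{m-1}η^1(2j+1,2j+2),
and each of the two averages is an average of i.i.d. random variables with mean 𝔼[η^1(0,1)]=𝔼[η^1(1,2)]=1, so by the strong law of large numbers the whole expression converges almost surely to 1/2+1/2=1.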
Therefore, since the quantity is uniformly bounded in T by bounding by the integral over ∫_0^S, we can apply dominated convergence theorem.
PART:
Further directions and Appendix
§ FURTHER RESEARCH DIRECTIONS
* Joint law for the Liouville measure
The density of the inverse is given in terms of the two-point joint law of GMC:
ℙ[b≥ Q(x)≥ a]=ℙ[η(b)≥ x≥η(a)].
(Of course, if we have differentiability, we can just study ℙ[η(b)≥ x].) The same issue showed up when studying the decomposition of the inverse. For example, we could turn the conditional moment bounds into joint-law statements by rewriting the event Q(a)-Q(b)=ℓ in terms of η. Some approaches include conformal field theory in <cit.> and possibly Malliavin calculus <cit.>. See here for work on GMC and Malliavin calculus <cit.>. It would also be interesting to get bounds on the single and joint density of GMC using the Malliavin calculus techniques in <cit.>. In the same spirit as in <cit.>, one can also try to use the Goldie renewal approach: see <cit.> for recent work extending the Goldie renewal result used in <cit.> to the case of joint laws.
* Regularity for GMC's Malliavin derivative
It would be interesting to explore the regularity of the Malliavin derivative D^kη for k=k(γ) as γ→ 0. This can give different upper bounds for the density:
Let q, α, β be three positive real numbers such that 1/q+1/α+1/β=1. Let F be a random variable in the space 𝔻^{2,α}, such that 𝔼[‖DF‖_H^{-2β}] < ∞. Then the density p(x) of F can be estimated as follows
p(x)≤ c_{q,α,β}\,ℙ[|F|>x]^{1/q}(𝔼[‖DF‖_H^{-1}]+‖D^2F‖_{L^α(Ω;H⊗ H)}\,𝔼[‖DF‖_H^{-2β}]^{1/β}),
where ‖u‖_{L^α(Ω;H⊗ H)}:= 𝔼[‖u‖_{H⊗ H}^α]^{1/α}.
* Derivatives in the IBP-formula In the spirit of the derivative computations done in <cit.>, one could try to extract some pdes/odes. We included some some heuristics computations for
M_τ_a_0 :=e^λU_τ_a_0-λ^2/2ln1/.
In <ref>, we can concentrate ψ around the point a_0 and use <ref> to get the identity
Ψ(a,λ):=M_τ_a_0 = 1+λ M_τ_a_0 ∫_ (τ_a_0 -r)∨0^ (τ_a_0 -)∨01/τ_a_0 -t-1/r(t)
+λ1/-1/r M_τ_a_0 (τ_a_0 -)∨0,τ_a_0 .
So the λ derivative of the LHS is:
Ψ(a,λ)λ= M_τ_a U_(τ_a) -λ/2M_τ_a ln1/
= 1/λM_τ_a lnM_τ_a
and of the RHS is
ψ(a,λ)λ=Ψ(a,λ)λ= M_τ_aF(a)+λM_τ_a U_(τ_a)F(a)
-λ^2/2M_τ_a F(a) ln1/
= M_τ_a1+lnM_τ_a F(a)
= ψ(λ,a) F(a)+ψ(λ,a)lnψ(λ,a) F(a),
where ψ(λ,a):=M_τ_a. So one ODE from here is
y'=c\,y(1+ln(y)) ,  y(0)=1,
which has the unique solution
y(λ)=e^{e^{cλ}-1}.
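For completeness, here is a short derivation of this solution under the assumption, implicit above, that c does not depend on λ: substituting u:=1+ln y gives
u'=y'/y=c(1+ln y)=c\,u ,  u(0)=1+ln y(0)=1,
so u(λ)=e^{cλ}, hence ln y(λ)=e^{cλ}-1 and y(λ)=e^{e^{cλ}-1}.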
The Ψ itself satisfies
Ψ(a,λ)λ= M_τ_aF(a)+λM_τ_a U_(τ_a)F(a)
-λ^2/2M_τ_a F(a) ln1/
= 1/λΨ(a,λ)-1+M_τ_a lnM_τ_aF(a).
The identity is
M_τ_a = 1+λ M_τ_a F(a).
The derivative of the LHS is
M_τ_a a = λ M_τ_a U_(x)x|_x=τ_aτ_aa.
The derivative of the RHS is
M_τ_a a = λ M_τ_a U_(x)x|_x=τ_aτ_aa F(a)
+λ M_τ_a F(a)a ,
where
F(a)a= a∫_0^a Rt(τ_b,τ_a)= Rt(τ_b,τ_a)|_b=a+ ∫_0^a ^2Rt_1∂t_2(τ_b,τ_a)τ_aa.
§ MOMENTS OF THE MAXIMUM AND MINIMUM OF MODULUS OF GMC
In this section we study tail estimates and small ball estimates of the maximum/minimum of shifted GMC from <cit.>. One frequent theme is utilizing the 1d-correlation structure of GMC, namely that neighboring evaluations η[0,1],η[1,2],η[2,3],η[3,4] are correlated, but the pairs η[0,1],η[2,3] and η[1,2],η[3,4] are separately independent. First we study the tail and moments of the maximum of the modulus of GMC.
On the face of it, in studying sup_{0≤ T≤ L}η^δ(T,T+x), we see that it could diverge as δ,x→ 0, because we might be able to lower bound it by an increasing sequence of iid random variables such as η(kx,(k+1)x) for k∈ [1,L/x]. We will see that, at least for fixed δ>0, we actually do have decay as x→ 0. This is in the spirit of chaining techniques, where a supremum over a continuum index set is dominated in terms of a maximum over a finite index set.
We will also need an extension for a different field: for λ<1, the field U_ε^{δ,λ} with covariance
𝔼[U_ε^{δ,λ}(x_1)U_ε^{δ,λ}(x_2)] ={ ln(δ/ε)-(1/ε-1/δ)|x_2-x_1|+(1-λ)(1-|x_2-x_1|/δ) ,  |x_2-x_1|≤ε
ln(δ/|x_2-x_1|)-1+|x_2-x_1|/δ+(1-λ)(1-|x_2-x_1|/δ) ,  ε≤|x_2-x_1|≤δ/λ
0 ,  δ/λ≤|x_2-x_1|.
Moments p∈ [1,2/γ^2)
For L,δ,x≥ 0 and δ≤ 1 we have
𝔼[sup_{T∈[0,L]}η^δ(T,T+x)^p]≤ c\,x^{α(p)}(L/x+1)^{p/r_p}≤ c\,(1+L+x)^{p/r_p}\,x^{α(p)-p/r_p},
where α(p)=ζ(p) when x≤ 1 and α(p)=p when x≥ 1, and r_p>0 is an arbitrary number with p<r_p<2/γ^2. For simplification, we will also write p/r_p=p(γ^2/2+ε_p) for small enough ε_p>0. The same estimate follows for the measure η^{δ,λ} when x≤δ.
Moments p∈ (0,1). Here we have
𝔼[sup_{T∈[0,L]}η^δ(T,T+x)^p] ⪅((1+L+x)^{1/r_1}\,x^{1-1/r_1})^p,
where as above 1<r_1<2/γ^2, and we let c_1:=(r_1-1)/r_1=1-β-ε for arbitrarily small ε>0.
In <ref>, we see that when α(p)-p/r_p>0, it decays to zero as x→ 0. By taking r_p≈2/γ^2, that means we require ζ(p)-p/r_p≈ pγ^2/2(2/γ^2-p)>0. Also, one can check that this exponent is a bit better than that given in <cit.> for general stochastic processes.
Next we study the negative moments for the minimum of the modulus of GMC.
We have for p>0
𝔼[min_{T∈[0,L]}η^δ(T,T+x)^{-p}]⪅ x^{a_δ(-p)}(L/x+2)^{p/r}\,2^{-ζ(-r)p/r},
where a_δ(-p):=ζ(-p) when x≤δ and a_δ(-p):=-p when x≥δ, and r>0 satisfies p/r<1; for simplicity we take arbitrarily small ε_p:=p/r>0. The same follows for the measure η^{δ,λ} and x≤δ.
Here we note that as r→ +∞, the constant 2^{-ζ(-r)p/r} diverges. So the smaller ε_p:=p/r>0, the larger the comparison constant.
§ PROPERTIES OF THE COVARIANCE OF THE TRUNCATED FIELD
§.§ Regularity of the covariance
The following are some of the hypotheses used in the development of Malliavin calculus for Gaussian processes <cit.>. The increments satisfy
𝔼[(U_ε^r(t)-U_ε^r(s))^2] =(2|t-s|/ε)(1-ε/r),
which is strictly positive for t≠ s. The covariance
R(τ,t):= { ln(r/ε)-(1/ε-1/r)|τ-t| ,  |τ-t|≤ε
ln(r/|τ-t|)+|τ-t|/r-1 ,  r>|τ-t|≥ε
is in fact an absolutely continuous function as a map t↦ R(τ,t) for each τ: when |τ-t|≤ε, we have the absolutely continuous function g(t)=|τ-t|, and when |τ-t|>ε, we use that ln(1/x) is a differentiable function for x>0. We compute the partial derivative to be
∂R(τ,t)/∂t= {
-(1/ε-1/r)\,(t-τ)/|t-τ| ,  |τ-t|≤ε
-(1/|t-τ|)\,(t-τ)/|t-τ|+(1/r)\,(t-τ)/|t-τ| ,  r>|τ-t|≥ε.
Therefore, for t>τ the derivative is negative, ∂R(τ,t)/∂t<0, and for t<τ it is positive, ∂R(τ,t)/∂t>0. So it is not continuous on the diagonal, which was one of the constraints in <cit.>. However, in the work <cit.>, they manage to weaken this to the following hypotheses, which are satisfied here.
For all T>0 the supremum of the integral of the partial derivative is finite for any α≥ 1:
sup_{s∈ [0,T]}∫_0^T|∂R(s,t)/∂t|^α\,dt<∞,
with a bound that diverges as T→ +∞ or ε→ 0. In fact, for any continuous function f we have that
s↦ F(s):=∫_0^T f(t)\,∂R(s,t)/∂t\,dt
is continuous on [0,∞) as long as ε>0.
Finite integral: proof of <ref>
Case α=1
Because for s-t≥ r, we have zero covariance, we restrict the integral to the domains
[(s-r)∨ 0,(s-ε)∨ 0] ∪ [(s-ε)∨ 0,s] ∪ [s,(s+ε)∧ T] ∪ [(s+ε)∧ T,(s+r)∧ T].
In the domain [(s-r)∨ 0,(s-ε)∨ 0], we have t<s and s-t>ε, and so ∂R(s,t)/∂t=1/(s-t)-1/r and the integral will be
∫_{(s-r)∨ 0}^{(s-ε)∨ 0}(1/(s-t)-1/r)\,dt=ln((r∧ s)/(ε∧ s))-(1/r)(s∧ r-s∧ε).
Similarly, in the domain [(s+ε)∧ T,(s+r)∧ T], we have |∂R(s,t)/∂t|=|-1/(t-s)+1/r|=1/(t-s)-1/r and the integral will be
ln((r∧ (T-s))/(ε∧ (T-s)))-(1/r)((T-s)∧ r-(T-s)∧ε).
In the domain [(s-ε)∨ 0,s], we have |∂R(s,t)/∂t|=1/ε-1/r=:c_{ε,r} and similarly, in [s,(s+ε)∧ T] we again have |∂R(s,t)/∂t|=1/ε-1/r=c_{ε,r}. Therefore, the total integral will be
ln((r∧ s)/(ε∧ s))-(1/r)(s∧ r-s∧ε)+ln((r∧ (T-s))/(ε∧ (T-s)))-(1/r)((T-s)∧ r-(T-s)∧ε)+c_{ε,r}((s+ε)∧ T-(s-ε)∨ 0).
So we see from here that as ε→ 0, this integral diverges. The log-terms are the only source of potential singularity. When s is close to zero, i.e. r>s>ε or ε≥ s, we get ln(s/ε) and ln(s/s)=0 respectively. When s is close to T, i.e. r>T-s>ε or ε≥ T-s, we similarly get
ln((T-s)/ε) and ln((T-s)/(T-s))=0 respectively. Therefore, we indeed have a finite supremum for each T>0.
Case α>1
Here instead of logarithms we get singular terms of the form 1/x^{α-1}. In particular, following the same integration steps on the split domains, we get singular terms of the following form:
1/(r∧ s)^{α-1}-1/(ε∧ s)^{α-1}  and  1/(r∧ (T-s))^{α-1}-1/(ε∧ (T-s))^{α-1}.
When s is close to zero, i.e. r>s>ε or ε≥ s, we get 1/r^{α-1}-1/ε^{α-1} and 1/s^{α-1}-1/s^{α-1}=0 respectively. For s close to T, we conversely get 1/r^{α-1}-1/ε^{α-1} and 1/(T-s)^{α-1}-1/(T-s)^{α-1}=0. We always get a singular power in ε>0. In summary, we again have a finite supremum for each T>0 and ε>0.
The continuous weighted derivative: proof of <ref>
We split over the same domains. We end up with the following total integral
∫_{(s-r)∨0}^{(s-ε)∨0}(f(t)/(s-t))\,dt+ (-1/r)∫_{(s-r)∨0}^{(s-ε)∨0}f(t)\,dt+ ∫_{(s+ε)∧T}^{(s+r)∧T}(f(t)/(t-s))\,dt+ (-1/r)∫_{(s+ε)∧T}^{(s+r)∧T}f(t)\,dt
+c_{ε,r}∫_{(s-ε)∨0}^{(s+ε)∧T}f(t)\,dt.
The integrals containing only the continuous function f(t) are differentiable in s due to the fundamental theorem of calculus. In particular, the function g(t)=1/(s-t) is continuously differentiable in the above domains because they do not contain an ε-neighbourhood of the singularity t=s. Therefore, the integrals with integrands f(t)/(s-t) are differentiable due to the Leibniz rule.
Case of ε→ 0 and large T
Here we get
∫_{(s-r)∨0}^{s}(f(t)/(s-t))\,dt+ (-1/r)∫_{(s-r)∨0}^{s}f(t)\,dt+ ∫_{s}^{s+r}(f(t)/(t-s))\,dt+ (-1/r)∫_{s}^{s+r}f(t)\,dt
+(1/ε-1/r)∫_{(s-ε)∨0}^{(s+ε)∧T}f(t)\,dt.
|
http://arxiv.org/abs/2307.04601v1 | 20230710143943 | InPars Toolkit: A Unified and Reproducible Synthetic Data Generation Pipeline for Neural Information Retrieval | [
"Hugo Abonizio",
"Luiz Bonifacio",
"Vitor Jeronymo",
"Roberto Lotufo",
"Jakub Zavrel",
"Rodrigo Nogueira"
] | cs.IR | [
"cs.IR"
] |
NeuralMind
University of Campinas
Brazil
NeuralMind
University of Campinas
Brazil
NeuralMind
University of Campinas
Brazil
NeuralMind
University of Campinas
Brazil
Zeta Alpha
Netherlands
Zeta Alpha
NeuralMind
University of Campinas
Brazil
Recent work has explored Large Language Models (LLMs) to overcome the lack of training data for Information Retrieval (IR) tasks. The generalization abilities of these models have enabled the creation of synthetic in-domain data by providing instructions and a few examples on a prompt.
InPars <cit.> and Promptagator <cit.> have pioneered this approach and both methods have demonstrated the potential of using LLMs as synthetic data generators for IR tasks.
This makes them an attractive solution for IR tasks that suffer from a lack of annotated data.
However, the reproducibility of these methods was limited, because InPars' training scripts are based on TPUs – which are not widely accessible – and because the code for Promptagator was not released and its proprietary LLM is not publicly accessible.
To fully realize the potential of these methods and make their impact more widespread in the research community, the resources need to be accessible and easy to reproduce by researchers and practitioners.
Our main contribution is a unified toolkit for end-to-end reproducible synthetic data generation research, which includes generation, filtering, training and evaluation. Additionally, we provide an interface to IR libraries widely used by the community and support for GPU.
Our toolkit not only reproduces the InPars method and partially reproduces Promptagator, but also provides a plug-and-play functionality allowing the use of different LLMs, exploring filtering methods and finetuning various reranker models on the generated data. We also made available all the synthetic data generated in this work for the 18 different datasets in the BEIR benchmark which took more than 2,000 GPU hours to be generated as well as the reranker models finetuned on the synthetic data. Code and data are available at <https://github.com/zetaalphavector/InPars>
InPars Toolkit: A Unified and Reproducible Synthetic Data Generation Pipeline for Neural Information Retrieval
Rodrigo Nogueira
February 2023
==============================================================================================================
§ INTRODUCTION
Effective neural Information Retrieval (IR) models often require a large amount of labeled training data. However, obtaining human labeled data is costly and many publicly available benchmarks contain few or no training examples <cit.>. In these cases, the common approach is to train a model on a large dataset, such as
MS MARCO <cit.> and Natural Questions <cit.>, and use it in a zero-shot transfer learning scenario <cit.>.
Nonetheless, models trained on these datasets face challenges to generalize to the variety of tasks and specific domains available in the real world.
Thus, the recently proposed InPars <cit.> and Promptagator <cit.> methods, along with their extensions InPars-v2 <cit.> and InPars-Light <cit.>, have explored Large Language Models (LLMs) to generate synthetic data and have demonstrated their effectiveness. These methods not only outperform models that are finetuned on extensively labeled datasets but have also shown to be more adaptable to different tasks.
These methods propose the generation of synthetic in-domain training data by exploring the few-shot learning abilities of LLMs, prompting them with a brief description of the task and a small number of in-domain examples. InPars uses a static prompt that includes examples collected from the MS MARCO dataset, whereas Promptagator uses dynamic prompts that include domain and task-specific examples sampled from the target dataset. Another key difference of these methods lies in the filtering of generated data. While InPars uses the sequence probability given by the LLM at generation time, Promptagator uses a consistency filtering with a model trained on the generated data. Similarly, InPars-v2 extends the pipeline by using a pre-trained reranker model to filter the examples. InPars-Light goes further in the efficiency direction by using lightweight models and showing that they are competitive with larger models.
These methods have proven to be effective, representing the state of the art in the BEIR benchmark <cit.>. However, reproducing such pipelines can still be a challenging task; researchers need to handle different codebases in addition to having access to a specific computational infrastructure. Most of the time, such components are not well integrated, making it difficult for researchers and practitioners to use them effectively. In this work, we bring all these components together, making it possible to experiment with InPars, Promptagator, and their variants, as well as to try new approaches using different LLMs, prompting approaches and datasets. We believe that making these resources available to allow reproducible work in the field of IR is crucial for several reasons. First, reproducibility is a key component of scientific research, as it allows other researchers to confirm and build upon the findings of a study.
Second, the reproduction of LLM related studies is often costly, and making the models and generated data available provides a valuable resource for the community.
We summarize our contributions as follows:
* We provide an extensive guideline for reproducing InPars and InPars-v2 for datasets on the BEIR benchmark on GPU. For Promptagator, we provide an implementation for reproducing the synthetic queries generation step with the dynamic prompt construction originally proposed by the authors.
* We also provide support for using different data sources: Pyserini's <cit.> pre-built indexes for the BEIR datasets, and <cit.> library, which contains multiple IR datasets.
* Lastly, we make available all the synthetic data generated in this reproduction study, along with the prompts and the finetuned reranker models.
§ METHODS
In this section, we describe the main methods reproduced in this paper and highlight the differences in their data generation pipelines.
§.§ InPars
The InPars method, currently available in two different versions, explores the few-shot learning abilities of LLMs to generate synthetic training data for IR tasks, by using a prompt template that instructs the LLM on how to generate the synthetic data. The prompt t||d is the concatenation of a prefix t and a document d, where the prefix t consists of N pairs of documents and their relevant queries, i.e., t={(q_1^*,d_1^*), ..., (q_N^*,d_N^*)}. The prompt t||d is fed to a language model G that generates a question q that is likely to be relevant to d. The resulting pair (q, d) forms a positive training example that is later used to finetune a retrieval model. The original InPars work uses a GPT-3 LLM as the synthetic data generator, while InPars-v2 replaced the LLM with GPT-J <cit.>. These models, trained on massive amounts of text data, have shown impressive abilities in generating human-like text, answering questions, translating languages, and even creating original content. GPT-J is an open-source 6B parameters transformer model trained using 402 billion tokens from the Pile <cit.>, an 800 GB English dataset. When generating the synthetic queries, a greedy decoding strategy was used.
InPars proposes two different prompts. The first one, named “Vanilla” prompt, uses three fixed pairs of examples of document and relevant query, that were randomly collected from the MS MARCO training dataset. The second prompt template, referred to as “Guided by Bad Questions” (GBQ) uses the same examples from the first prompt, but it labels the original questions from the MS MARCO dataset as “bad” questions. The “good” questions were manually created and are more elaborated. The intention is to encourage the LLM to produce more informative questions, where the full context of the document contributes to the answers.
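As an illustration of what this generation step does under the hood, the following sketch (our own code, not the toolkit's; the prompt text and prefixes are placeholders) produces one synthetic query with GPT-J and greedy decoding through the Hugging Face transformers API:

[language=python]
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder few-shot prompt: example document-query pairs followed by the target document.
prompt = (
    "Example 1:\nDocument: ...\nRelevant Query: ...\n\n"
    "Example 2:\nDocument: ...\nRelevant Query: ...\n\n"
    "Example 3:\nDocument: <target document text>\nRelevant Query:"
)

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

inputs = tokenizer(prompt, return_tensors="pt")
# Greedy decoding (do_sample=False); generated queries are capped at 64 new tokens, as in InPars.
outputs = model.generate(**inputs, do_sample=False, max_new_tokens=64)
generated = outputs[0][inputs["input_ids"].shape[1]:]
query = tokenizer.decode(generated, skip_special_tokens=True)
print(query.split("\n")[0].strip())  # keep only the first generated line as the synthetic query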
InPars generates 100K pairs of positive training examples using documents randomly sampled from a collection D. The prefix t is always the same regardless of the input document d. After generating the synthetic data, a filtering step is proposed, to select the top K pairs with respect to the following (log) probability:
p_q = 1/|q|∑_i=1^|q|log p(q_i|t,d,q_<i),
where p(q_i|t,d,q_<i) is the probability assigned by G when autoregressively generating the i-th token of q, and q_<i are the tokens generated in the previous decoding steps.
This score is used to filter the top K=10,000 pairs of document and synthetic queries to be used as finetuning data.
This filtering improves the quality of the training data. Without it, i.e., using the full set of 100K synthetic queries to finetune a reranker model, performance dropped by 4 MRR@10 points on MS MARCO.
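A minimal sketch of this scores filter (our illustration: the "log_probs" field name is hypothetical, although the toolkit does store the per-token log-probabilities alongside each generated query, as noted later in the description of the generation output):

[language=python]
import json

def mean_log_prob(token_log_probs):
    # The score p_q from the equation above: average token log-probability of the query.
    return sum(token_log_probs) / len(token_log_probs)

def filter_top_k(path, k=10_000):
    with open(path) as f:
        rows = [json.loads(line) for line in f]
    # 'log_probs' is a hypothetical field holding the per-token log-probabilities.
    rows.sort(key=lambda r: mean_log_prob(r["log_probs"]), reverse=True)
    return rows[:k]

# Usage (sketch): keep the 10,000 highest-scoring query-document pairs.
# top_pairs = filter_top_k("trec-covid-queries.jsonl")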
The filtering approach was improved on InPars-v2, where a pretrained reranker model is used to filter the synthetic queries for the training step. A monoT5-3B reranker model finetuned for one epoch on the MS MARCO dataset is used to estimate a relevance score for each synthetic query generated by the LLM and the document that was used to generate it. After computing the score for each one of the 100,000 pairs of synthetic queries and documents, only the K=10,000 highest scores are kept as finetuning data.
These filtered queries are used to train a monoT5 <cit.> reranker, an adapted version of T5 <cit.> model for text ranking tasks. The filtered queries are used as positive examples, while negative examples are mined from BM25 candidates. Two models with 220M and 3B parameters were trained for one epoch over the 20,000 query-document pairs. The trained model is subsequently used to rerank the initial BM25 retrievals. This approach employs a two-stage retrieval pipeline. Firstly, BM25 retrieves the top 1,000 documents per query. Secondly, the trained model reranks the list by assigning a relevance score for each pair of query and document.
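As an illustration of the second stage, the sketch below scores query-document pairs with a monoT5-style model. This is our own code, and it assumes the common monoT5 convention of an input of the form "Query: ... Document: ... Relevant:" scored by the probability of the token "true"; details may differ from the toolkit's implementation:

[language=python]
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("castorini/monot5-3b-msmarco-10k")
model = T5ForConditionalGeneration.from_pretrained("castorini/monot5-3b-msmarco-10k")
model.eval()
TRUE_ID = tokenizer.encode("true")[0]
FALSE_ID = tokenizer.encode("false")[0]

def score(query: str, document: str) -> float:
    # Relevance score = probability assigned to "true" as the first decoded token.
    enc = tokenizer(f"Query: {query} Document: {document} Relevant:",
                    return_tensors="pt", truncation=True)
    start = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():
        logits = model(**enc, decoder_input_ids=start).logits[0, 0]
    return torch.softmax(logits[[FALSE_ID, TRUE_ID]], dim=0)[1].item()

# Rerank the BM25 candidates of a query by descending score:
# reranked = sorted(candidates, key=lambda doc: score(query, doc), reverse=True)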
§.§ Promptagator
The Promptagator method also creates synthetic training data for IR tasks by exploiting the few-shot abilities of a 137B-parameter LLM. Differently from InPars, a specific prompt is created for each dataset using in-domain examples.
By creating a specific prompt template for each dataset, the prefixes used were selected according to the dataset description. Using the ArguAna dataset prompt as an example, the model is prompted with a prefix “” which indicates the document from the dataset, followed by a prefix “”, marking the question related to the document. This way, the prefixes resemble a better description of the datasets while instructing the LLM to generate a query for that specific task and document. Moreover, in the few-shot scenario, they use from 2 to 8 relevant query-document examples to create the prompt, sampled from the development set when it is available or, if not, from the test set.
Promptagator generates synthetic queries using a sampling decoding algorithm with a temperature of 0.7. For each dataset, they generate 8 synthetic queries for each document from a randomly sampled set of 1 million documents. FLAN <cit.> is used as the generator, which is a proprietary LLM that was pretrained on a multiple tasks using instructions.
To ensure that only high-quality synthetic questions are generated, the authors propose a filtering step based on consistency filtering. They train a retriever model using the same synthetic data that needs to be filtered to predict the most relevant passage for a given query. The retriever model keeps only queries that, when fed to the model, return the document that originated it among its top K results. The authors observed that setting K to 1 leads to better results when using the MS MARCO dataset as a validation set.
The authors suggest that this filtering strategy removes low-quality synthetic questions and improves performance on 8 out of the 11 datasets that were evaluated.
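A compact sketch of this round-trip consistency check (ours; "retrieve" stands for any retriever trained on the same synthetic data, and k=1 matches the setting reported above):

[language=python]
def consistency_filter(pairs, retrieve, k=1):
    """Keep (query, source_doc_id) pairs whose source document appears among
    the top-k documents retrieved for the synthetic query."""
    kept = []
    for query, source_doc_id in pairs:
        top_ids = retrieve(query, k)  # returns the k highest-ranked doc ids
        if source_doc_id in top_ids:
            kept.append((query, source_doc_id))
    return kept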
In the final step, the Promptagator method finetunes two different models using the synthetic data. The first is a bi-encoder based on the GTR <cit.> architecture with 110M parameters. The second is a cross-encoder with the same number of parameters, which reranks the top 200 candidates retrieved by the bi-encoder model.
§ EXPERIMENTAL SETUP
In this section, we describe the process of using the toolkit provided in this work. Firstly, we outline the steps for generating synthetic data. Next, we discuss the process of filtering the generated data to remove possibly irrelevant instances. Then, we describe how to build the training set using the filtered positive examples and mining the negatives. After that, we provide details on how to use the synthetic data to train a reranker. Finally, we describe the process of reranking and evaluating the trained model. By following the guidelines in this section, researchers and practitioners are able to leverage the provided resources effectively and reproduce the InPars method, as well as partially reproduce the Promptagator method and extend to new pipelines.
§.§ Commands
To begin, the synthetic data generation step is done using the command-line as follows:
[language=bash]
python -m inpars.generate –prompt="inpars" –dataset="trec-covid" –dataset_source="ir_datasets" –base_model="EleutherAI/gpt-j-6B" –output="trec-covid-queries.jsonl"
Diving into the required arguments, we first need to define the , which supports four different options for the prompt template to be selected: , , and . This argument defines which prompt template will be used during the generation step. We provide both "Vanilla" and "GBQ" prompt templates used by InPars, with "Vanilla" as the default.
The prompt template uses a specific template for each dataset and dynamically selects random pairs of query and relevant document to be used as prompt examples. The argument specifies the number of examples that will be used in the prompt in this case – with a default of 3 examples.
We randomly select labeled examples from the training set of each dataset when training data is available. If there is no training set, we use the development set as our source and, as a last resort, when there is no training or development set, we use the test set for creating the prompt examples. This approach is slightly different from the one proposed by Promptagator, which collects examples only from the development or test set. Once the examples are collected, for each document that requires a synthetic question, the prompt is built dynamically. This means that the prompt examples are randomly ordered for each document.
To ensure a fair evaluation, the queries and documents used as few-shot examples that were extracted from the development or test sets are discarded from the evaluation metrics.
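The dynamic prompt construction just described can be sketched as follows (our illustration; the prefix strings are placeholders rather than the exact per-dataset templates):

[language=python]
import random

def build_prompt(fewshot_examples, target_document,
                 doc_prefix="Document:", query_prefix="Relevant Query:"):
    """fewshot_examples: (document, query) pairs drawn from the dataset's
    training/dev/test split; they are re-shuffled for every target document."""
    shots = list(fewshot_examples)
    random.shuffle(shots)
    blocks = [f"{doc_prefix} {doc}\n{query_prefix} {query}" for doc, query in shots]
    blocks.append(f"{doc_prefix} {target_document}\n{query_prefix}")
    return "\n\n".join(blocks)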
The next arguments are the and , which specify dataset to generate synthetic queries for and the source from which to load it. In line with InPars and Promptagator, we support the datasets from the BEIR <cit.> benchmark. BEIR is a widely used evaluation framework in the IR domain. It aims to provide a comprehensive evaluation benchmark on a variety of IR tasks, with a particular emphasis on zero-shot evaluation.
The argument is designed to integrate with two widely used dataset interfaces in the IR community: Pyserini <cit.>, a toolkit for conducting reproducible IR research with sparse and dense representations, and <cit.>, a commonplace for several IR ad-hoc ranking benchmarks. Furthermore, it is also possible to indicate a local file as the document and query collection. By default, we use the as the source, but both sources include all publicly available BEIR datasets.
The argument determines the LLM that will be used to generate the synthetic queries. By default, our toolkit uses the GPT-J <cit.> model available in the Hugging Face Hub [<https://huggingface.co/EleutherAI/gpt-j-6B>], but it can support any generative model available from Hugging Face. Lastly, the argument specifies the name of an output file to save the synthetic data. The output file will be a JSON format file, which will contain one query per line. This file will also include additional information related to the synthetic generation step, such as the log probabilities assigned to each token by the LLM during query generation, the prompt text fed to the LLM, and the document for which the query was generated.
Additional arguments related to the LLM, such as the maximum length of input and output or batch size, can also be set through the command-line arguments.
Once the synthetic data has been generated, we move on to the filtering stage. We provide two different filtering strategies, and the command to filter the synthetic queries is:
[language=bash]
python -m inpars.filter –input="trec-covid-queries.jsonl" –dataset="trec-covid" –filter_strategy="scores" –keep_top_k="10_000" –output="trec-covid-queries-filtered.jsonl"
Initially, before applying the filtering strategy, we keep only synthetic queries that meet some conditions. These conditions require the token count to fall within a specified range of minimum and maximum amount, defined by the arguments and . This is done to remove possible noisy synthetic queries. The optional argument removes synthetic queries in which a part of the document used for generation was copied to the query.
The first argument required by the filtering command-line is the , which refers to the file containing the synthetic queries generated in the previous step to be filtered. The argument indicates which dataset the queries belong to. The specifies the filtering strategy to be used. The default filtering strategy, introduced by InPars-v1, is called and is based on a mean value computed from the LLM's token probabilities. The synthetic queries list is then sorted in descending order, and only the top-K values are retained. The argument defines the value of k, with a default value of 10,000.
The second filtering strategy, which was introduced by InPars-v2, is called reranker. This strategy employs a pretrained reranker model to filter the synthetic queries by computing a relevancy score for each synthetic query-document pair. The scores list is sorted in descending order and only the top-k pairs with the highest scores are kept as positives query-document pairs to be used during training. Finally, the output file must be specified in to indicate where the filtered synthetic queries will be saved.
The filtering strategy proposed by Promptagator is not currently supported because it is more elaborate and seems to require more computational resources: a bi-encoder is initially finetuned on 1 million synthetic examples and then used in the filtering step by retaining only the examples that correctly retrieve the source document. This is a costly procedure that has been postponed for future work.
The third stage of the pipeline involves mining negative examples for model training. In this stage, negative examples are mined by using the filtered synthetic queries to search for candidate documents. We followed the approach outlined in InPars, using BM25 to retrieve 1,000 candidate documents from the target collection. From this set, a random document is selected as the negative example. If the candidate document is the same one used during the synthetic generation step, the example is discarded, and a new one is sampled. The following command-line is used to execute this step:
[language=bash]
python -m inpars.generate_triples –input="trec-covid-queries-filtered.jsonl" –dataset="trec-covid" –output="trec-covid-triples.tsv"
The argument expects a file containing the previously filtered synthetic queries, as well as the dataset identification. The result is a tuple (q, d^+, d^-), where q and d^+ are fixed (the synthetic query and the source document) and d^- represents the negative example sampled from BM25 candidates. The document collection is indexed using Pyserini, and all BEIR benchmark datasets are already available as pre-built indexes.
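A sketch of this negative-mining step (ours; the prebuilt index name is a placeholder, and the calls follow Pyserini's LuceneSearcher interface):

[language=python]
import random
from pyserini.search.lucene import LuceneSearcher

# Placeholder prebuilt index name; any BEIR index available in Pyserini can be used.
searcher = LuceneSearcher.from_prebuilt_index("beir-v1.0.0-trec-covid.flat")

def mine_negative(query: str, positive_doc_id: str, k: int = 1000) -> str:
    """Randomly pick one of the top-k BM25 candidates as the negative example,
    resampling if it happens to be the document that generated the query."""
    hits = searcher.search(query, k=k)
    while True:
        candidate = random.choice(hits).docid
        if candidate != positive_doc_id:
            return candidate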
Once the synthetic training data is obtained, we proceed to the training step. We support the finetuning of a monoT5 reranker model, which is the same model used in InPars, as the final stage of the multi-stage retrieval pipeline.
To finetune the reranker using the synthetic data, the command-line is:
[language=bash]
python -m inpars.train –triples="trec-covid-triples.tsv" –base_model="castorini/monot5-3b-msmarco-10k" –output_dir="./reranker/" –max_steps="156"
The argument specifies the file containing the training tuples obtained in the previous step, where every line consists of a triple comprising a query, a positive document, and a negative document. The argument indicates the model to be finetuned – e.g., an original T5 model or a pre-trained monoT5. In all our experiments, we used the [<https://huggingface.co/castorini/monot5-3b-msmarco-10k>] as our initial base model. The argument specifies the path where the finetuned model should be saved. Our reranker models were trained for 156 steps, equivalent to one epoch over the query-relevant document pairs. In contrast to the InPars script that relies on TPUs, our training script supports GPUs. We conducted all experiments using a NVIDIA A100 80 GB GPU, and training the model for 156 steps took approximately 30 minutes.
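To illustrate how the mined triples feed the reranker finetuning, here is a sketch of the conversion into text-to-text examples (ours, again assuming the "Query: ... Document: ... Relevant:" convention with "true"/"false" targets):

[language=python]
import csv

def triples_to_examples(path):
    """Each (query, positive_doc, negative_doc) triple yields one positive and
    one negative text-to-text training example for a monoT5-style reranker."""
    examples = []
    with open(path, newline="") as f:
        for query, pos_doc, neg_doc in csv.reader(f, delimiter="\t"):
            examples.append((f"Query: {query} Document: {pos_doc} Relevant:", "true"))
            examples.append((f"Query: {query} Document: {neg_doc} Relevant:", "false"))
    return examples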
Once the model has been trained, the next stage uses it to rerank a dataset. In this stage we support all BEIR datasets, as well as any custom local datasets. The command-line to rerank is:
[language=bash]
python -m inpars.rerank –model="./reranker/" –dataset="trec-covid" –output_run="trec-covid-run.txt"
The first argument is the , which specifies the trained model to be used for reranking. The argument indicates one of the BEIR datasets to load the documents and queries, as well as the initial run to be reranked. We are using the BEIR runs, created using BM25, as initial run for all the datasets. However, it is possible to provide an initial run from a local file in the TREC format using the argument. The reranker model will compute a relevancy score for each query and the candidates documents from the initial run. The output will consist of a reranked run, which will be saved in the location indicated by the path.
Finally, to evaluate the reranked run, the following command-line is used:
[language=bash]
python -m inpars.evaluate –dataset="trec-covid" –run="trec-covid-run.txt"
By providing the and to be evaluated, our script computes the metrics like recall and nDCG, in addition to other metrics computed by the TREC evaluation script.
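For reference, a bare-bones nDCG@10 computation in the spirit of the reported metric (our sketch; the official TREC evaluation script handles graded-gain conventions, ties, and edge cases more carefully):

[language=python]
import math

def dcg(gains):
    return sum(g / math.log2(rank + 2) for rank, g in enumerate(gains))

def ndcg_at_10(ranked_doc_ids, qrels):
    """ranked_doc_ids: doc ids sorted by the run's scores; qrels: doc id -> relevance grade."""
    gains = [qrels.get(doc_id, 0) for doc_id in ranked_doc_ids[:10]]
    ideal = sorted(qrels.values(), reverse=True)[:10]
    ideal_dcg = dcg(ideal)
    return dcg(gains) / ideal_dcg if ideal_dcg > 0 else 0.0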
§ RESULTS
This section presents and discusses the results obtained by reproducing the methods using our toolkit. Table <ref> presents a comparison between the baselines, the results reported by original methods and our reproductions.
The first two rows (1a and 1b) represent the BM25 baselines. In BM25-flat, document titles and contents are concatenated and stored as a single field while BM25-multi stores them as separate fields. The top 1000 documents retrieved by BM25-flat are reranked by the models in rows (2), (3a), (3b), (4b), and (4c). Row (2) presents the result of monoT5-3B, which was finetuned on MS MARCO for one epoch, used in a zero-shot setup.
Rows (3a) and (3b) presents the reproductions of InPars-v1 and InPars-v2 pipelines, respectively. Row (4a) presents the reported result for Promptagator.
The results in rows (4b) and (4c) illustrate the impact of using the Promptagator prompt with InPars pipelines. Comparing these results to those of InPars v1 and v2 (rows (3a) and (3b)), the results produced by the Promptagator prompt are either equal or slightly lower than those obtained through the InPars prompt, except for the ArguAna, Touché-2020 and SciFact datasets. Notably, for the ArguAna dataset, finetuning the reranker on the synthetic data generated by the Promptagator prompt resulted in an almost 14 nDCG@10 improvement compared to InPars' best result. These findings suggest that the Promptagator prompts are particularly effective in generating synthetic queries for the ArguAna and Touché-2020 datasets. Such datasets concentrate on argument retrieval, which is slightly different from other datasets in the BEIR benchmark. As a result, they gain advantage from using dataset-specific prompts.
Also, a factor that probably limited InPars prompt performance on the ArguAna dataset reported in rows (3a) and (3b) is related to the query length. When generating the synthetic queries, InPars sets a maximum number of 64 tokens. As shown in Table <ref>, the average number of words and tokens for queries across all datasets in the BEIR benchmark is below this value with the exception of the ArguAna dataset.
When examining the filtering strategy, the results from the InPars prompt indicate an average difference of 1 nDCG@10 point between the results displayed in rows (3a) and (3b). The improvements observed in the reranker strategy results are primarily driven by Touché-2020, FEVER and Climate-FEVER datasets. When using the Promptagator prompt, the filtering strategy appears to make a difference for certain datasets, as shown in rows (4b) and (4c). The reranking strategy appears to perform better for TREC-COVID, Touché-2020, and HotpotQA datasets. In particular, the HotpotQA dataset showed an improvement of more than 18 nDCG@10 points when compared to the scores strategy. On the other hand, the scores filtering strategy resulted in an improvement for the DBPedia and Climate-FEVER datasets, with gains of 9.6 and 16.8 nDCG@10 points, respectively. Despite the individual differences, the average results are very similar.
Table <ref> presents results comparing the performance on GPU (PyTorch <cit.> and Transformers <cit.>) versus TPU (Mesh-TensorFlow <cit.>). As part of our work, we added GPU support to reproduce InPars results. This support covers the synthetic data generation, filtering, training, reranking and evaluating. We conducted an experiment to verify that running it on a GPU setup would produce the same results as running it on the TPU setup. For this, we trained monoT5-3B models following the InPars-v2 approach. Our analysis revealed that while there were minor variations in datasets such as TREC-COVID, BioASQ, Robust04 and ArguAna, the results remained exactly the same for NFCorpus, NQ, and FiQA-2018, regardless of the device used. For the majority of datasets, the variance in results between running on TPU and GPU is minimal when considering individual performance, as demonstrated in the "Diff" column on Table <ref>. Furthermore, the average nDCG@10 remains consistent in both evaluation scenarios.
All experiments were conducted using an NVIDIA A100 80 GB GPU. Training monoT5-3B for 156 steps took about 30 minutes. Filtering 100K queries using a monoT5-3B model takes approximately 45 minutes. The duration of the evaluation step is determined by the number of queries that need to be reranked for each dataset, which can range from 50 queries for TREC-COVID to 13,145 queries for CQADupstack. The reranking of 1,000 candidate documents for a given query took a maximum of 30 seconds using the monoT5-3B reranker model.
Additionally, Table <ref> shows statistics regarding the token count for each set of documents and queries in all datasets on the BEIR benchmark.
The ArguAna dataset is noteworthy for having significantly different query length compared to the other datasets. TREC-NEWS and Robust04 have the largest document lengths. This information is crucial to keep in mind when choosing documents to use as prompt examples. For instance, if we consider the GPT-J model, with a maximum sequence length of 2048 tokens, at most two average TREC-NEWS documents can fit into a prompt, without even accounting for the length of the queries.
§ CONCLUSIONS
We have introduced the InPars Toolkit, a codebase designed to generate synthetic data using LLMs in a reproducible manner for neural IR tasks. The toolkit comprises an end-to-end pipeline that encompasses data generation, training, reranking, and evaluating the trained models. Additionally, the codebase is integrated with two major libraries for commonly used datasets from the BEIR benchmark, and it supports both GPU and TPU training and inference. Our goal is to make research on these methods more accessible and to pave the way for this emerging research trend in the IR community.
Our experiments have demonstrated that training reranker models using synthetic data and evaluating them on GPU infrastructure yielded results comparable to those obtained when training on the TPU setup. Additionally, we have also made available all synthetic data generated for all BEIR datasets and the models finetuned on this data.
§ FUTURE WORK
Future work will focus on integrating a wider range of open-source LLMs, including instruction finetuned LLMs, with the aim of enhancing the generation process. Another area of further exploration is to experiment with different prompting techniques, such as chain-of-thought prompting, and prompting for retrieval explanations. Moreover, there are plans to incorporate consistency filtering and expand the filtering methods to completely reproduce Promptagator and lay the foundations for new research approaches in the field of synthetic data generation for IR.
http://arxiv.org/abs/2307.05562v1 | 20230709220637 | Decentralized Decision-Making in Retail Chains: Evidence from Inventory Management | [
"Victor Aguirregabiria",
"Francis Guiton"
] | econ.EM | [
"econ.EM"
] |
Decentralized Decision-Making in Retail Chains:
Evidence from Inventory Management
We are grateful for the valuable comments received from Heski Bar-Isaac, Loren Brandt, Avi Goldfarb, Jiaying Gu, George Hall, Jordi Mondria, Bob Miller, John Rust, Steven Stern, Junichi Suzuki, and Kosuke Uetake, as well as from seminar participants at Cambridge, Stony Brook, Toronto, UBC-Sauder, EARIE conference, IIOC conference, CEA conference, and the Georgetown conference honoring John Rust. The data used in this paper was obtained through a request made under the Canadian Access to Information Act. We sincerely appreciate the generous assistance provided by LCBO personnel.
Victor Aguirregabiria[Department of Economics, University of Toronto. 150 St. George Street, Toronto, ON, M5S 3G7, Canada, mailto: [email protected]@utoronto.ca.]
University of Toronto, CEPR
Francis Guiton [Ph.D. Candidate, Department of Economics, University of Toronto. 150 St. George Street, Toronto, ON, M5S 3G7, Canada, mailto: [email protected] [email protected].]
University of Toronto
==========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
This paper investigates the impact of decentralizing inventory decision-making in multi-establishment firms using data from a large retail chain. Analyzing two years of daily data, we find significant heterogeneity among the inventory decisions made by 634 store managers. By estimating a dynamic structural model, we reveal substantial heterogeneity in managers’ perceived costs. Moreover, we observe a correlation between the variance of these perceptions and managers’ education and experience. Counterfactual experiments show that centralized inventory management reduces costs by eliminating the impact of managers’ skill heterogeneity. However, these benefits are offset by the negative impact of delayed demand information.
Keywords: Inventory management; Dynamic structural model; Decentralization; Information processing in organizations; Retail chains; Managerial skills; Store managers.
JEL codes: D22, D25, D84, L22, L81.
§ INTRODUCTION
Multi-establishment firms can adopt various decision-making structures ranging from centralized decisions at the headquarters to a more decentralized approach where decision authority is delegated to individual establishments. Determining the optimal decision-making process for a firm involves weighing different trade-offs. A decentralized decision-making structure empowers local managers to leverage valuable information specific to their respective stores. This information, which may be difficult or time-consuming to communicate to headquarters, can be utilized effectively at the local level. However, decentralization also entails granting autonomy to heterogeneous managers who possess varied skills. This heterogeneity can lead to suboptimal outcomes for the overall firm. When determining the degree of decentralization in their decision-making structure, multi-establishment firms must evaluate this trade-off. They need to consider the benefits of local knowledge and timely decision-making against the potential challenges posed by managerial heterogeneity.
In this study, we investigate the inventory management decisions made by store managers at the Liquor Control Board of Ontario (LCBO) and examine the impact on the firm's performance of delegating these decisions to the store level. The LCBO is a provincial government enterprise responsible for alcohol sales throughout Ontario. As a decentralized retail chain, each store has a degree of autonomy in its decision-making process. Specifically, store managers have discretion in two key areas of inventory management: assortment decisions (i.e., determining which products to offer) and replenishment decisions (i.e., when and how much to order for each product). Replenishment decisions involve forming expectations about future demand to determine the optimal order quantity and timing. To conduct our analysis, we utilize a comprehensive dataset obtained from the LCBO, encompassing daily information on inventories, orders, sales, stockouts, and prices for every store and product (SKU) from October 2011 to October 2013 (677 working days).[These data were obtained under the Access to Information Act, with assistance from the LCBO personnel.] Additionally, we supplement our main dataset with information from LCBO reports and gather data on store managers' education and experience from professional networking platforms such as LinkedIn.
The LCBO data and framework provide a unique setting to study inventory management due to the simple pricing mechanism employed, where prices are set as a fixed markup over wholesale cost. This feature of the market allows us to focus specifically on the inventory setting problem without the need to incorporate a model where equilibrium prices are explicitly determined by inventory decisions. By abstracting from the complex relationship between prices and inventory decisions, we can concentrate our analysis on understanding the factors influencing inventory management within the LCBO retail chain.
By employing descriptive evidence and the estimation of reduced-form models of inventory decision rules, we first show substantial heterogeneity across store managers’ replenishment decisions. Observable store characteristics – such as demand level, size, category, and geographic location – explain less than half of the differences across store managers in their inventory decision rules.
To gain a deeper understanding of the factors contributing to this heterogeneity, we propose and estimate a dynamic structural model of inventory management. The model allows for differences across stores in demand, storage costs, stockout costs, and ordering costs. Leveraging the high-frequency nature of the daily data, we obtain precise estimates of holding cost, stockout cost, and ordering costs at both the individual product (SKU) and store levels. Our findings reveal significant heterogeneity across stores in all the revealed-preference cost parameters. Remarkably, observable store characteristics only account for less than 50% of this heterogeneity. Furthermore, we uncover a correlation between the remaining heterogeneity and managers' education and experience, suggesting that manager characteristics contribute to this residual variation. We interpret this unexplained heterogeneity as the result of local managers' idiosyncratic perceptions regarding store-level costs.
Using the estimated structural model, we quantify the impact of store manager heterogeneity on inventory outcomes. Overall, eliminating the idiosyncratic heterogeneity in cost parameters produces significant effects on inventory management. Specifically, we observe a 6-day increase in the waiting time between orders, a decrease in the average order amount equivalent to 1.5 days of average sales, and a 21% reduction in the inventory-to-sales ratio. However, the frequency of stockouts remains largely unaffected. These findings indicate that if the idiosyncratic component of costs represents a biased perception by store managers, it has a substantial negative impact on the firm's profitability. This is due to increased storage and ordering costs while having little effect on stockouts and revenue generation.
Finally, we conduct an evaluation of the effects associated with centralizing the decision-making of inventory management at the LCBO headquarters. To simulate this counterfactual experiment, we take into account information provided by company reports, which indicate that store-level sales information is processed by the headquarters with a one-week delay. The main trade-off examined in this experiment revolves around the fact that a centralized inventory management system eliminates the influence of store managers' heterogeneous skills and biased perceptions of costs. However, it also relinquishes the advantages derived from store managers' just-in-time information about demand and inventories. Our findings reveal that implementing a centralized inventory management system would result in a substantial reduction in ordering and storage costs, with an average cost decrease of 23% and a 3.7% reduction for the median store. Despite the significant cost reduction, this benefit is nearly completely offset by the negative impact on profits due to the delayed information about demand. Consequently, the net effect on profits is modest, with a mere 2% increase in annual profit for LCBO, equivalent to $34 million. We further explore the implications of this trade-off for designing a more efficient inventory system that incorporates elements of both centralized and decentralized approaches.
This paper contributes to the growing empirical literature exploring the trade-offs between centralization and decentralization of decision-making in multi-division firms. Notably, () observe that most retail chains in the US employ uniform pricing across their stores, despite substantial differences in demand elasticities and potential gains from third-degree price discrimination. The authors discuss possible explanations for this phenomenon. In a study of a major international airline company, () analyze a pricing system that combines decision rights across different organizational teams. They find that despite employing advanced techniques, the pricing system fails to internalize consumer substitution effects, exhibits persistent biases in demand forecasting, and does not adapt to changes in opportunity costs. These inefficiencies are primarily attributed to limited coordination between teams. Examining decentralization decisions from a diverse set of firms across 11 OECD countries, () show that firms that delegate decision power from central headquarters to plant managers exhibited better performance during the Great Recession compared to similar firms with more centralized structures. The empirical evidence presented in their study supports the interpretation that the value of local information increases during turbulent economic times. Our paper contributes to this literature by being, to the best of our knowledge, the first empirical study to examine the trade-offs related to the (de)centralization of inventory management within retail chains, and specifically the role of store managers' heterogeneous skills.
This paper also contributes to the empirical literature on dynamic structural models of inventory behavior. Previous contributions in this area include works by (), (), (), and () in the context of firm inventories, as well as (), (), and () in the domain of household purchases of durable products. We contribute to this literature by using high-frequency (daily) data at the granular store and product level to estimate cost parameters using a dynamic structural model.
Finally, our paper contributes to the literature on structural models with boundedly rational firms. Most of this literature studies firms' entry/exit decisions (, ; , ), pricing decisions (, ; , ), and bidding behavior in auctions (, ; , ; , ). To the best of our knowledge, our paper represents the first investigation into bounded rationality in firms' inventory decisions. This research also contributes to the existing literature by combining revealed-preference estimates of managers' perceived costs with a decomposition of these costs into the objective component explained by store characteristics and the subjective component associated with managers' education and experience.
The rest of the paper is organized as follows. Section <ref> describes the institutional background of the LCBO and presents the dataset and descriptive evidence. Section <ref> presents evidence of managers following (S,s) decision rules and illustrates the heterogeneity in these (S,s) thresholds across store managers. Section <ref>
presents the structural model and its estimation. The counterfactual experiments to evaluate the effects of decentralization are described in section <ref>. We summarize and conclude in Section <ref>.
§ FIRM AND DATA
§.§ LCBO retail chain
History. LCBO was founded in 1927 as part of the passage of the Ontario Liquor Licence Act.[The information in this section originates from various archived documents from the LCBO. General information about the company and its organization is based on the company's annual reports () and (), and the collective agreement between the LCBO and OPSEU. Information regarding headquarters' order recommendations (Suggested Order Quantities, SOQs) originates from the report (). Additional information regarding the role of store managers originates from an interview we conducted with an LCBO store manager from a downtown Toronto store.] This act established that LCBO was a crown corporation of the provincial government of Ontario. Today, the wine retail industry in Ontario is a triopoly - consisting of 634 LCBO stores, 164 Wine Rack stores, and 100 Wine Shop stores. Despite its government ownership, LCBO is a profit maximizing company. As described in its governing act, part of its mandate is "generating maximum profits to fund government programs and priorities".[See
https://www.lcbo.com/content/lcbo/en/corporate-pages/about/aboutourbusiness.html.]
Store managers. According to the LCBO, store managers are responsible for managing their "store, sales and employees to reflect [their] customers’ needs and business goals", with a particular focus on the inventory management of their store. Managers must oversee their store's overall inventory level to ensure that daily demand is met. For incentive purposes, part of the store managers' remuneration depends on the overall sales performance of their store. The managers' pay is therefore closely tied to their stores' profits. In order to satisfy daily demand, managers periodically restock their shelves by ordering products from the nearest distribution center. The order is then delivered by trucks according to a pre-determined route and schedule. In addition to the ordering decisions, managers are also responsible for their store's product assortment, as they must decide which products to offer at their store in order to cater to local demand. Inventory management at LCBO, therefore, entails a dual responsibility for store managers: providing products that are in high demand and keeping these products stocked on the shelves.
Classification of stores.
The LCBO classifies its stores into six categories, AAA, AA, A, B, C, and D, ranging from the highest to the lowest. These classifications primarily reflect variations in store size and product assortment. However, there are also differences in the consumer shopping experience across these classifications, with the AAA and AA stores being considered flagship stores.
Headquarters. Headquarters are in charge of the assignment of store managers across the different stores. Assignments are occasionally shuffled due to managers being promoted (demoted) to higher- (lower-) classified stores, with seniority being a main factor in the promotion decision. Another responsibility of headquarters is to assist store managers in their inventory decisions. Headquarters use forecasting techniques to provide recommendations to managers regarding how much to order for each product at their store. In the company's internal jargon, these recommendations are referred to as Suggested Order Quantities (SOQs). For each store and product, headquarters generate order recommendations based on the previous week's sales and inventory information.[More specifically, order recommendations are a function of the Average Rate of Sale (ARS) of the product from the previous week, and of seasonal brand factors.] Importantly, this entails that headquarters process store-level information with a weekly delay. Headquarters do not use just-in-time daily information that store managers may be using in their replenishment decisions. This informational friction may play a role in the optimal allocation of decision rights.
Pricing. LCBO and its competitors are subject to substantial pricing restrictions. Prices must be the same across all stores in all markets for a given store-keeping unit (SKU). There is no price variation across the LCBO and its competitors. Retail prices are determined on a fixed markup over the wholesale price set by wine distributors. Furthermore, the percentage markup applies to all the SKUs within broadly defined categories.[See () for further details about markups at LCBO.]
No franchising system. The LCBO operates its stores without adopting a franchising system. Instead, all store managers are employees of the LCBO. As a result, store managers are not required to pay any franchise fees, fees per order, or any other types of fees to the firm. The absence of a franchising system ensures that the store managers operate within the organizational structure of the LCBO as employees, without the additional financial obligations associated with a franchising arrangement.
Union. Most employees at the LCBO are unionized under the Ontario Public Service Employees Union (OPSEU). As of 2022, the OPSEU "represents more than 8,000 workers at the Liquor Control Board of Ontario", with their main goal being to "establish and continue harmonious relations between the [LCBO] and the employees". Members of the union include store managers, retail workers, warehouse workers, and corporate workers.
§.§ Data from LCBO
Our analysis combines three data sources: the main dataset provided by the LCBO; data on store managers' experience and education collected from the social media platform LinkedIn; and consumers' socioeconomic characteristics from the 2011 Census of Population.
We use a comprehensive and rich dataset obtained from the LCBO, encompassing daily information on inventories, sales, deliveries, and prices of every product sold at each LCBO store. The dataset covers a period of two years, specifically from October 2011 to October 2013, spanning a total of 677 days. With a total of 634 stores operating across Ontario and an extensive product range consisting of over 20,000 different items, our dataset comprises approximately 720 million observations. Moreover, the dataset includes additional valuable information, such as product characteristics, store characteristics (including location, size, and store category), and the store manager's name.
Table <ref> presents summary statistics. The average store has an assortment of 2,029 items. Weekly sales per store amount to 12,909 units, translating to an average of 6.36 units sold per item. The average weekly revenue per store is $162,250, resulting in $80 in revenue per item per week and $12.6 in revenue per unit sold. Regarding deliveries, stores receive shipments at approximately 5.37 days per week, with total weekly deliveries containing an average of 12,258 units. Stockout events occur, on average, 415 times per week across all stores.
Notably, these figures exhibit significant variation across different store types. Larger stores, as expected, generate higher weekly revenues. For instance, AAA and AA stores generate average weekly revenues of $874,367 and $518,067, respectively, while C and D stores generate average weekly revenues of $59,328 and $23,913, respectively. Stockout events appear to be more prevalent in larger stores compared to smaller ones. The average AAA store experiences 1,203 stockout events per week, whereas the average D store encounters 204 stockout events per week. Additionally, larger stores tend to place orders more frequently and in larger quantities. On average, AAA and AA stores receive orders 6.30 and 6.27 days per week, totaling 51,340 and 35,711 units, respectively. In contrast, C and D stores receive orders 5.09 and 3.83 days per week, amounting to 4,395 and 1,660 units, respectively.
The bottom panel in Table <ref> presents inventory-to-(daily)sales ratios and ordering frequencies. These statistics are closely related to the (S,s) decision rules that we analyze in Section <ref>.
At the store-product level, the inventory-to-sales ratio before and after an order corresponds to the thresholds s and S, respectively. On average, stores maintain enough inventory to meet product demand for approximately 23 days, initiate an order when there is sufficient inventory for about 9 days, and the ordered quantity covers sales for around 18 days. In terms of ordering frequency, the average store and product place an order approximately once every two weeks, equivalent to a frequency of 0.07 ≃ 1/14.
Average inventory-to-sales ratios tend to decrease with store size/type, although this difference is influenced by the composition effect arising from varying product assortments across store types. For the remainder of the paper, to account for this composition effect and reduce the computational burden of estimating our model across numerous products, we focus on a working sample consisting of a few products carried by all stores. This approach allows us to manage the complexity associated with different product assortments while maintaining the robustness of our analysis.
§.§ Working sample
In our econometric models, estimated in Sections <ref> and <ref>, the parameters are unrestricted at the store-product level. Considering that our dataset comprises nearly 2 million store-product pairs, estimating these models for every store-product combination would be exceedingly time-consuming. To save time while maintaining the integrity of our analysis, we have employed a different approach. Specifically, we estimate the econometric models for every store in our dataset but limited the analysis to a selected subset of 5 products.
We employ two criteria to determine the product basket for our analysis. Firstly, we select products that exhibit high sales across all LCBO stores, ensuring that their inventory decisions significantly impact the firm's overall profitability. Secondly, we include products from each broad category to account for product-level heterogeneity. These categories encompass white wine, red wine, vodka, whisky, and rum. Table <ref> provides an overview of the five selected products that satisfy these criteria. By focusing on this subset, we capture a diverse range of products that are representative of the different categories while also being impactful in terms of their sales performance.
Table <ref> provides summary statistics for our working sample. On average, the stores in our working sample sell 91 units per week, equivalent to 18 units per SKU. The average weekly revenue per store is $1,687, resulting in $347 in revenue per SKU per week and $18 in revenue per unit sold. Regarding deliveries, stores in our working sample receive shipments approximately 2 days per week, with each delivery containing an average of 88 units. The average number of stockout events per week per store is 0.37.
Similar to Table <ref>, these numbers exhibit significant variation across different store types. Larger stores generate higher average weekly revenues compared to smaller ones. For instance, the average AAA and AA stores generate weekly revenues of $5,197 and $4,511, respectively, while the smaller C and D stores generate average weekly revenues of $821 and $302, respectively. Contrary to the full sample, stockout events appear to occur more frequently in smaller stores than in larger ones within our working sample. The average D store experiences 0.56 stockout events per week, while the average AAA store encounters 0.14 stockout events per week. Delivery patterns in our working sample follow a similar trend to Table <ref>, with larger stores placing larger and more frequent orders. The average AAA store receives deliveries 4 days per week, totaling 272 units per week, whereas the average D store only receives deliveries 0.6 days per week, amounting to 15 units per week.
Figure <ref> shows strong heterogeneity across stores in several measures related to inventory management of the five products in our working sample. The figures in panels (a) to (f) are inverse cumulative distributions over stores, together with their 95% confidence bands.[For every store, the 95% confidence interval is based on the construction of store-product-specific rates. The 95% confidence interval is determined by percentiles 2.5% and 97.5% in this distribution.]
Panel (a) presents the distribution of the stockout rate. For store i, we have:
\text{Stockout rate}_i = \frac{\#\,(\text{product, day}) \text{ observations with stockout in store } i}{\#\,(\text{product, day}) \text{ observations for store } i}
The figure shows a large spread of stockout rates: the 10^th and 90^th percentiles are 0.20% and 2.82%, respectively. Panel (b) shows substantial heterogeneity across stores in the revenue-loss per-product-day generated by stockouts. Indexing products by j, and using J=5 to represent the number of products, the revenue-loss for store i is:

\text{Revenue loss}_i = \frac{1}{J}\sum_{j=1}^{J} \text{Stockout rate}_{i,j} \times \text{Average daily revenue without stockouts}_{i,j}
The 10^th and 90^th percentiles are $0.06 and $1.03 per product-day, respectively. Aggregated at the annual level and over all the products offered in a store, they imply an average annual revenue-loss of approximately $44,000 at the 10^th percentile and $760,000 at the 90^th percentile. Panel (c) presents the ordering frequency of stores in our sample calculated as:
\text{Ordering frequency}_i = \frac{\#\,(\text{product, day}) \text{ observations with an order in store } i}{\#\,(\text{product, day}) \text{ observations for store } i}
This ordering rate varies significantly across stores, with the 10^th percentile being 3.59% and the 90^th being 28.41%. Panels (d) to (f) present the empirical distributions for the inventory-to-sales ratio, for this ratio just before an order (a measure of the lower threshold s), and for this ratio just after an order (a measure of the upper threshold S). Indexing days by t:
\text{Inventory-to-(daily)sales ratio}_i = \frac{\sum_{j=1}^{J}\sum_{t=1}^{T} \text{Inventory}_{i,j,t}}{\sum_{j=1}^{J}\sum_{t=1}^{T} \text{Units sold}_{i,j,t}}
The distribution of the inventory-to-sales ratio shows that stores at the 10^th and 90^th percentiles hold inventory for 11 days and 33 days of average sales, respectively. For the upper threshold S (in Panel (e)), the values of these percentiles are 9 and 37 days, and for the lower threshold s (in Panel (f)) they are 5 and 19 days.
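For concreteness, these store-level measures can be computed from the daily store-product panel along the lines of the following minimal sketch; the column names are illustrative assumptions and this is not the code used in the paper.

```python
import pandas as pd

# Minimal sketch: store-level inventory measures from a daily store-product panel.
# Column names (store, inventory, units_sold, order_units) are illustrative
# placeholders, not the LCBO variable names.
def store_level_measures(panel: pd.DataFrame) -> pd.DataFrame:
    panel = panel.copy()
    panel["stockout"] = (panel["inventory"] == 0).astype(int)
    panel["order_placed"] = (panel["order_units"] > 0).astype(int)

    by_store = panel.groupby("store")
    return pd.DataFrame({
        # Share of (product, day) observations with a stockout / with an order
        "stockout_rate": by_store["stockout"].mean(),
        "ordering_frequency": by_store["order_placed"].mean(),
        # Total inventory held relative to total units sold (days of sales)
        "inventory_to_sales": by_store["inventory"].sum() / by_store["units_sold"].sum(),
    })

# Example: the inverse CDF across stores is just the sorted series, e.g.
# measures = store_level_measures(panel)
# inv_cdf = measures["stockout_rate"].sort_values().reset_index(drop=True)
```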
Given these substantial differences in inventory outcomes across stores, it is interesting to explore how they vary together. We present these correlations in Appendix <ref> (Figure <ref>). The strongest correlation appears for the positive relationship between our measures of the thresholds S and s. This correlation can be explained by store heterogeneity in storage costs: a higher storage cost implies lower values of both s and S. We confirm this conjecture in the estimation of the structural model in Section <ref>.
To investigate the possibility of stockouts at the warehouse level and their potential impact on store-level stockouts, we also analyze aggregate daily deliveries from the warehouse to all 634 stores. Given that the products in our working sample are popular items, we interpret a zero value in aggregate daily deliveries as a stockout event at the warehouse. The results of our analysis, presented in Section <ref> in the Appendix, reveal that warehouse stockouts are negligible for our working sample. For each product within the working sample, warehouse stockout events occur on no more than 3 out of the 677 days in the sample, which accounts for less than 0.5% of the observed period. These findings suggest that stockouts observed at the store level are primarily driven by factors other than warehouse-level stockouts.
§.§ Data on store managers' education and experience
In addition to the main dataset, we enhance our analysis by incorporating information on store managers' human capital. Leveraging the professional social networking platform LinkedIn, we gather data on the education and experience of store managers from their public profiles. Out of the 634 store managers in our dataset, 600 are identifiable by name in the LCBO's records.[During our sample period, some stores are overseen by interim managers who are not identified by name in our main dataset.] Within this subset, we were able to locate public LinkedIn profiles for 143 managers, allowing us to retrieve valuable information about their educational background and work experience.
Table <ref> presents summary statistics for the variables related to store managers' educational background and work experience. Notably, we observe a pattern in which managers with greater experience and higher educational attainment tend to be assigned to higher-classified stores. This finding suggests a positive correlation between manager characteristics and store classification, indicating that the LCBO may allocate more experienced and highly educated managers to stores of higher importance or larger scale.
In Appendix <ref>, we provide more detailed information on the positive relationship between manager characteristics and the classification of stores within the LCBO retail chain.
§ (S,S) DECISION RULES
§.§ Model
In this section, we study store managers' inventory behavior through the eyes of (S,s) decision rules. In its simplest form, this decision rule involves time-invariant threshold values. When assuming lump-sum (fixed) ordering costs, quasi-K-concavity of the profit function with respect to orders, and time-invariant expected demand, the profit-maximizing inventory decision rule follows a (S,s) structure (, ; , ; , ). The (S,s) rule is characterized by two threshold values: a lower threshold denoted as s, which represents the stock level that triggers a new order (known as the "safety stock level"), and an upper threshold denoted as S, which indicates the stock level to be achieved when an order is placed. Thus, if k_t represents the stock level at the beginning of day t, and y_t represents the orders placed on day t, the (S,s) rule can be defined as follows:
y_t = \begin{cases} S - k_t & \text{if } k_t \le s \\ 0 & \text{otherwise} \end{cases}
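As an illustration of the mechanics of this rule, the sketch below simulates daily inventories and orders under a time-invariant (S,s) policy with a one-day delivery lag; the threshold values and demand process are illustrative assumptions only.

```python
import numpy as np

# Minimal sketch of a time-invariant (S, s) replenishment rule with integer
# demand and a one-day delivery lag. Parameter values are illustrative.
def simulate_Ss(S: int, s: int, demand: np.ndarray, k0: int = 0) -> np.ndarray:
    """Return rows (stock_on_hand, order, sales, stockout_flag) for each day."""
    k, pending, path = k0, 0, []
    for d in demand:
        k += pending                       # yesterday's order is delivered
        y = S - k if k <= s else 0         # (S, s) rule: order up to S when k <= s
        sales = min(d, k)                  # sales limited by the stock on hand
        path.append((k, y, sales, int(d > k)))
        k -= sales                         # k_{t+1} = k_t + y_t - q_t (y arrives next day)
        pending = y
    return np.array(path)

# Example with Poisson demand (illustrative):
# rng = np.random.default_rng(0)
# path = simulate_Ss(S=40, s=10, demand=rng.poisson(3, size=677), k0=20)
```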
() and () provide comparative statics for the thresholds (S,s) as functions of the structural parameters in the firm's profit function. They provide the following results:
\begin{array}{ccc}
S = f_S\Big( \underset{(+)}{d^e},\ \underset{(-)}{\gamma^h},\ \underset{(?)}{\gamma^f/\gamma^c},\ \underset{(+)}{\gamma^z/\gamma^c} \Big), &
s = f_s\Big( \underset{(+)}{d^e},\ \underset{(-)}{\gamma^h},\ \underset{(-)}{\gamma^f/\gamma^c},\ \underset{(+)}{\gamma^z/\gamma^c} \Big), &
S - s = f_{S-s}\Big( \underset{(+)}{d^e},\ \underset{(-)}{\gamma^h},\ \underset{(+)}{\gamma^f/\gamma^c},\ \underset{(?)}{\gamma^z/\gamma^c} \Big)
\end{array}

The sign below each argument indicates the sign of the corresponding comparative static, with "?" denoting an ambiguous sign.
where d^e represents expected demand, γ^h is the inventory holding cost per period and per unit,
γ^f is the fixed (lump-sum) ordering cost, γ^c is the unit ordering cost, and γ^z is the stockout cost per period. These γ's are the parameters in the structural model that we estimate in Section <ref>. We investigate the predictions of equation (<ref>) in Section <ref>.
The optimality of the (S,s) decision rule extends to models with state variables that evolve over time according to exogenous Markov processes. Let 𝐳_t denote the vector of these exogenous state variables, which can include factors influencing demand, unit ordering costs, wholesale prices, and the product's retail price (when taken as given by the store manager), as is the case in our problem. The optimal decision rule in this context follows a (S_t,s_t) structure, where the thresholds S_t and s_t are time-invariant functions of these state variables: S_t = S(𝐳_t) and s_t = s(𝐳_t).
In this section, our empirical approach is inspired by the work of (), (), and (). These studies utilize household-level data on durable product purchases, specifically automobiles, to estimate (S_t,s_t) decision rules. In these decision rules, the thresholds are functions of household characteristics, prices, and aggregate economic conditions. This approach can be seen as a "semi-structural approach," where the use of (S_t,s_t) rules is motivated by a dynamic programming model of optimal behavior. However, the specification of the thresholds as functions of state variables does not explicitly incorporate the structural parameters of the model.
In Section <ref>, we present a full structural approach that explicitly incorporates the structural parameters of the model. Furthermore, in Section <ref>, we utilize the estimated structural model to conduct counterfactual policy experiments, which address the questions that motivated this paper. However, before delving into the full structural analysis, we find it valuable to explore the data using a more flexible empirical framework that remains consistent with the underlying structural model. We investigate heterogeneity between store managers' inventory decisions by estimating (S_t,s_t) rules at the store-product level. These decision rules are consistent with our structural model but they are more flexible. This allows us to gain insights and assess the suitability of the (S_t,s_t) decision rules in capturing the inventory behavior of store managers within the LCBO retail chain.
Given that our dataset contains 677 daily observations for every store and product, and that the ordering frequency in the data is high enough to include many orders per store-product, we can estimate the parameters in the (S_t,s_t) decision rules at the store-product level. In this section, we omit store and product sub-indexes in variables and parameters, but it should be understood that these sub-indexes are implicitly present.
We consider the following specification for the (S_t,s_t) thresholds:[We attempted to incorporate demand volatility, represented by lnσ^2_t, as an explanatory variable in the decision rule. However, we encountered high collinearity between the time series of ln d^e_t (expected demand) and lnσ^2_t, making it challenging to estimate their separate effects on the thresholds. It is worth noting that according to the Negative Binomial distribution, lnσ^2_t = ln d^e_t + ln(1+α d^e_t). Consequently, we can interpret the effect of ln d^e_t on the (S_t,s_t) thresholds as the combined impact of both expected demand and volatility.]
S_t = \exp\{\beta_0^S + \beta_d^S \ln d_t^e + \beta_p^S \ln p_t + u_t^S\}, \qquad
s_t = \exp\{\beta_0^s + \beta_d^s \ln d_t^e + \beta_p^s \ln p_t + u_t^s\}
where p_t is the product's retail price, d_t^e is the expected demand, and u_t^s and u_t^S represent state variables which are known to the store manager but are unobservable to us as researchers.[For instance, u_t^s and u_t^S may include shocks in fixed and variable ordering costs, or measurement error in our estimate of expected demand.] The vector of exogenous state variables is 𝐳_t = (d_t^e, p_t, u_t^s, u_t^S). The β's are reduced form parameters which are constant over time but vary freely across stores and products and are functions of the structural parameters that we present in our structural model in Section <ref>.
Our measure of expected demand is based on an LCBO report regarding the information that headquarters use to construct order recommendations for each store (, ). Relying on this report, we assume that store managers obtain predictions of demand for each product at their store using information on the product's retail price (p_t), the average daily sales of the product over the last seven days (which we represent as Q^[-7,-1]_t), and seasonal dummies.
Since the observed quantity sold q_t has discrete support {0, 1, 2, ...}, we consider that demand has a Negative Binomial distribution where the logarithm of expected demand at period t has the following form:
\ln d^e_t = \ln \mathbb{E}\left( q_t \mid p_t, Q^{[-7,-1]}_t \right) = \alpha' \, h\!\left( \ln p_t, \ln Q^{[-7,-1]}_t \right)
where q_t is the quantity sold of the store-product at day t, α is a vector of parameters that are constant over time but vary freely across store-products, and h( ln p_t, ln Q^[-7,-1]_t) is a vector of monomial basis in variables ln p_t and ln Q^[-7,-1]_t.
We denote equation (<ref>) as the sales forecasting function. It deserves some explanation. First, it is important to note that this is not a demand function. For this inventory decision problem, managers do not need to know the demand function but only the best possible predictor of future sales given the information they have. Second, this specification ignores substitution effects between products within the same category or across categories. Ignoring substitution effects in demand is fully consistent with LCBO's report and with the firm's price setting, which completely ignores these substitution effects (see , ).[Recent papers show that the pricing decisions of important multi-product firms do not internalize substitution or cannibalization effects between the firm's own products. See, for instance, ()'s study of the pricing system of a large international airline company, () and () on uniform pricing at US retail chains, () on pricing of car rentals, or () for liquor stores in Pennsylvania.] In the Appendix (Section <ref>), we present a summary of the estimation results of the sales forecasting function for every store and every product in our working sample.
The (S_t,s_t) model in equations (<ref>) and (<ref>) implies that the decision of placing an order (y_t > 0) or not (y_t = 0) has the structure of a linear-in-parameters binary choice model.
1\{ y_t > 0 \} = 1\left\{ b_0^s + b_k^s \ln k_t + b_d^s \ln d_t^e + b_p^s \ln p_t + \tilde{u}_t^s \ge 0 \right\},
where 1{.} is the indicator function; \tilde{u}_t^s ≡ u_t^s/σ_u^s is the standardized version of u_t^s, with σ_u^s the standard deviation of u_t^s; and there is the following relationship between the β^s and b^s parameters: b_k^s = -1/σ_u^s; b_0^s = β_0^s/σ_u^s; b_d^s = β_d^s/σ_u^s; and b_p^s = β_p^s/σ_u^s. These expressions show that, given the parameters b^s, we can identify the parameters β^s and σ_u^s. We assume that u_t^s has a Normal distribution, such that equation (<ref>) is a Probit model, and we estimate the parameters b^s by maximum likelihood.
Our (S_t,s_t) model also implies that in days with positive orders (y_t > 0)
the logarithm of the total quantity offered, ln(k_t + y_t), is equal to the logarithm of the upper-threshold, ln(S_t), and this implies the following linear-in-parameters (censored) regression model:
\ln(k_t + y_t) = \beta_0^S + \beta_d^S \ln d_t^e + \beta_p^S \ln p_t + u_t^S \quad \text{if } y_t > 0.
Equation (<ref>) includes the selection condition y_t > 0. That is, the upper-threshold S_t is observed only when an order is placed. This selection issue implies that OLS estimation of equation (<ref>) yields inconsistent estimates of the parameters and the threshold itself. However, the (S_t,s_t) model implies an exclusion restriction that provides identification of the parameters in equation (<ref>). The inventory level k_t affects the binary decision of placing an order or not (as shown in equation (<ref>)), but conditional on placing an order, it does not affect the value of the upper-threshold in the right-hand-side of regression equation (<ref>). Therefore, using (<ref>) as the selection equation, we can identify the parameters β^S in (<ref>) using a Heckman two-step approach.[This exclusion restriction in (S,s) models has been pointed out by ().]
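A minimal sketch of this two-step procedure for one store-product series is given below, with a Probit first stage and an inverse-Mills-ratio correction in the second stage; variable names are illustrative assumptions and this is not the estimation code used in the paper.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

# Minimal sketch of the Heckman two-step estimation of the upper-threshold
# equation for one store-product series. Inputs are numpy arrays; names are
# illustrative. "order" is 1{y_t > 0}; "ln_total_offered" is ln(k_t + y_t).
def heckman_two_step(ln_k, ln_de, ln_p, order, ln_total_offered):
    # Step 1: Probit for the ordering decision (selection equation);
    # ln_k enters here but is excluded from the outcome equation.
    Z = sm.add_constant(np.column_stack([ln_k, ln_de, ln_p]))
    probit = sm.Probit(order, Z).fit(disp=False)
    zb = Z @ probit.params
    mills = norm.pdf(zb) / norm.cdf(zb)          # inverse Mills ratio

    # Step 2: OLS on days with an order, adding the inverse Mills ratio
    # to correct for selection.
    sel = order == 1
    X = sm.add_constant(np.column_stack([ln_de[sel], ln_p[sel], mills[sel]]))
    outcome = sm.OLS(ln_total_offered[sel], X).fit()
    return probit, outcome
```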
Part of the variation in parameter estimates across stores and products is attributable to estimation error rather than genuine heterogeneity. For any given parameter b_i,j, where i and j represent store and product indices respectively, let \hat{b}_i,j denote its consistent and asymptotically normal estimate, with an asymptotic variance of σ^2_i,j. Using this asymptotic distribution, we can establish a relationship between the variances of b_i,j and \hat{b}_i,j across stores and products: Var(\hat{b}_i,j) = Var(b_i,j) + 𝔼(σ^2_i,j), where 𝔼(σ^2_i,j) represents the mean of the asymptotic variances σ^2_i,j across stores and products. Since 𝔼(σ^2_i,j)>0, this equation demonstrates that Var(\hat{b}_i,j) overestimates the true dispersion Var(b_i,j). To mitigate this excess dispersion or spurious heterogeneity arising from estimation error, we employ a shrinkage estimator. The details of this estimator are described in section <ref> in the Appendix.
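The sketch below illustrates one simple empirical-Bayes implementation of such a shrinkage correction, based on the variance decomposition above; the paper's own estimator is described in its Appendix, so this is only an illustration.

```python
import numpy as np

# Minimal sketch of an empirical-Bayes shrinkage of store-product estimates
# toward their cross-sectional mean, assuming Var(b_hat) = Var(b) + E(sigma^2)
# as in the text. Illustrative only; not the paper's estimator.
def shrink_estimates(b_hat: np.ndarray, se: np.ndarray) -> np.ndarray:
    """b_hat: point estimates across store-products; se: their standard errors."""
    mu = b_hat.mean()
    # Signal variance: total dispersion minus the average estimation noise
    tau2 = max(b_hat.var() - np.mean(se ** 2), 0.0)
    weight = tau2 / (tau2 + se ** 2)   # noisier estimates are shrunk more
    return mu + weight * (b_hat - mu)
```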
§.§ Estimation of (S,s) thresholds
Figure <ref> presents the average estimates of parameters b^s_0, b^s_k, b^s_d, and b^s_p in the lower threshold for each store, where the average is obtained over the five products in the working sample. We sort stores from the lowest to the largest average estimate such that this curve is the inverse CDF of the average estimate. The red-dashed band around the median of this distribution is the 95% confidence band under the null hypothesis of homogeneity across stores.[The reported 95% confidence interval incorporates the Bonferroni correction for multiple testing. In this context, the implicit null hypothesis is that every store does not differ significantly from the average store. By applying the Bonferroni correction, we account for the increased probability of observing a significant difference by chance when conducting multiple tests.] The signs of the parameter estimates are for the most part consistent with the predictions of the model. These distributions show that the parameter estimates vary significantly across stores. For b^s_0, b^s_k, b^s_d, and b^s_p, we have that 95%, 95%, 98%, and 95% of stores, respectively, lie outside of the Bonferroni confidence interval.
Figure <ref> presents the inverse CDF of the store-specific average of the parameters β^S_0, β^S_d, and β^S_p in the upper threshold, as well as the Bonferroni 95% confidence interval under the null hypothesis of homogeneity. As expected, we have strong evidence of heterogeneity in our estimates. For β^S_0, β^S_d, and β^S_p, approximately 96%, 97%, and 97% of stores lie outside of the confidence bands, respectively.
Given that we have estimates at the store-product level, we can also explore the heterogeneity in these estimates within stores. Table <ref> presents a decomposition of the variance of parameter estimates into within-store (between products) and between-store variance. The parameters associated with expected demand and the lower threshold inventory parameter show a between-store variance that is at least as large as the within-store variance. For the constant parameters and the price parameters, the variance is larger across products.
The parameter estimates imply values for the (S_t,s_t) thresholds.
In section <ref> in the Appendix, we investigate heterogeneity across stores in the estimated thresholds. We find strong between-store heterogeneity, especially in the lower threshold s.
§ STRUCTURAL MODEL
We propose and estimate a dynamic structural model of inventory management. A price-taking store sells a product and faces uncertain demand. The store manager orders the product from the retail chain's warehouse, and any unsold product rolls over to the next period's inventory. The store-level profit function incorporates four store-specific costs associated with inventory management: per-unit inventory holding cost (γ^h_i,j), stockout cost (γ^z_i,j), fixed ordering cost (γ^f_i,j), and per-unit ordering cost (γ^c_i,j).
§.§ Non-separability of inventory decisions across products
At LCBO, store managers are responsible for making inventory decisions for thousands of products. These decisions are not independent of each other due to various factors. Firstly, when a product experiences a stockout, consumers may opt to substitute it with a similar available product. As a result, the cost of a stockout for the store is influenced by the availability of substitute products. Secondly, storage costs are affected by the total volume of items and units in the store, which means the inventory management of one product can impact the storage costs of other products. Lastly, the store's ordering cost is dependent on the cost of filling a truck with units of multiple products and transporting them from the warehouse to the store. The decision to order a truck and the cost associated with it are not separable across products.
We can think of a model that fully accounts for the inter-dependence of inventory decisions across products. In this model, a store manager maximizes the aggregate profit from all the products taking into account the substitutability of similar products under stockouts and subject to two overall store-level constraints: the total storage capacity of the store and the capacity of the delivery truck. Our store-product level model aligns with this multiple-product inventory management framework.
By using duality theory, we can show that the marginal conditions for optimality in the multiple-product model are equivalent to those in our single-product model, under appropriate interpretations of our store-product level structural parameters. The inventory holding cost parameter γ^h_i,j represents the shadow price or Lagrange multiplier associated with the storage capacity constraint at store i. Similarly, the fixed and unit ordering costs, γ^f_i,j and γ^c_i,j respectively, reflect the shadow prices of the truck's capacity constraint at the extensive and intensive margins. Lastly, the stockout cost parameter γ^z_i,j accounts for the impact of consumer substitution within the store when a stockout occurs for product j at store i.
In estimating our model, our approach is valid as long as these Lagrange multipliers do not exhibit significant variation over time. For our counterfactual experiments, we assume that these Lagrange multipliers remain constant in the counterfactual scenario.
To summarize, our approach does not assume the separability of inventory management across products, as this would be an unrealistic assumption. Instead, we employ certain assumptions and shortcuts to address the complexities of the joint inventory management problem, while still maintaining a realistic framework that is consistent with the interdependencies among products.
§.§ Sequence of events and profit
For notational simplicity, we omit store and product indexes. Time is indexed by t. One period is one day. Every day, the sequence of events is the following.
Step (i). The day begins with the store manager observing current stock (k_t), the retail price set by headquarters (p_t), and her expectation about the mean and variance of the distribution of log-demand: ln d^e_t and σ^2_t, respectively. Given this information, the store manager orders y_t units of inventory from the distribution center. There is time-to-build in this ordering decision. More specifically, it takes one day for an order to be delivered to the store and become available to consumers.[Based on the interviews we conducted with store managers, the most common delivery lag reported is one day. Delivery lags exceeding three days were described as extremely rare.] The ordered amount y_t is a discrete variable with support set 𝒴≡{0, 1, ..., J}.
Step (ii). Demand d_t is realized. Demand has a Negative Binomial distribution with log-expected demand and variance:
\ln d^e_t = \eta_0' \, \mathbf{seas}_t + \eta_p \ln p_t + \eta_Q \ln Q^{[-7,-1]}_t, \qquad
\sigma^2_t = d^e_t \left( 1 + \alpha \, d^e_t \right)
where η_0, η_p, η_Q, and α are parameters; 𝐬𝐞𝐚𝐬_t
is a vector of seasonal dummies (i.e., weekend dummy and main holidays dummy); Q^[-7,-1]_t is the average daily sales of the product in the store during the last seven days; and α denotes the over-dispersion parameter in the Negative Binomial. We use F_d_t to represent the distribution of d_t conditional on (p_t,Q^[-7,-1]_t,𝐬𝐞𝐚𝐬_t). Importantly, the stochastic demand shock u_t^d≡ln d_t - ln d^e_t is unknown to the store manager at the beginning of the day when she makes her ordering decision.
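As an illustration, the sales-forecasting regression for a single store-product can be estimated along the following lines, holding the over-dispersion parameter fixed in a GLM (the discrete-choice Negative Binomial model in statsmodels estimates it jointly); the column names and the simple censoring adjustment (dropping stockout days) are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Minimal sketch of the daily demand regression for one store-product, assuming
# a Negative Binomial (NB2) distribution so that Var(d_t) = d_t^e (1 + alpha d_t^e).
def fit_demand(df: pd.DataFrame, alpha: float = 1.0):
    X = sm.add_constant(pd.DataFrame({
        "weekend": df["weekend"],                  # seasonal dummies
        "holiday": df["holiday"],
        "ln_price": np.log(df["price"]),
        "ln_Q7": np.log(df["avg_sales_last_7"]),   # last week's average daily sales
    }))
    # Illustrative simplification: drop days where sales are censored by the
    # stock on hand (units_sold == inventory), i.e. possible stockout days.
    keep = df["inventory"] > df["units_sold"]
    model = sm.GLM(df.loc[keep, "units_sold"], X[keep],
                   family=sm.families.NegativeBinomial(alpha=alpha))
    res = model.fit()
    return res   # res.predict(X) gives the expected demand d_t^e
```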
Step (iii). The store sells q_t units of inventory, which is the minimum of supply and demand:
q_t = \min\{ d_t, k_t \}
The store generates flow profits Π_t. The profit function has the following form:
\Pi_t = (p_t - c_t) \min\{ d_t, k_t \} + \gamma^z \, 1\{d_t > k_t\} - \gamma^h k_t - \gamma^c y_t - \gamma^f \, 1\{y_t > 0\} + \sigma_\varepsilon \, \varepsilon_t(y_t)
where c_t is the wholesale price, and γ^z, γ^h, γ^c, γ^f, and σ_ε are store-product-specific structural parameters.
When γ^z>0, the term γ^z·1{d_t > k_t} captures the situation where the cost of a stockout can be smaller than the revenue loss from excess demand because some consumers substitute the product within the store. On the other hand, if γ^z<0, this term can represent an additional reputational cost of stockouts that goes beyond the lost revenue (see , ). The term γ^h· k_t represents the storage cost associated with holding k_t units of inventory at the store. Parameter γ^c denotes the per-unit cost incurred by the store manager when placing an order, and γ^f represents the fixed ordering cost, including the transportation cost from the warehouse to the store. The variable ε_t(y_t) corresponds to a stochastic shock with a mean of zero that affects ordering costs. More specifically, the variables ε_t(0), ε_t(1), ..., ε_t(J) are i.i.d. with an Extreme Value type 1 distribution. Parameter σ_ε represents the standard deviation of the shocks in ordering costs.
These γ parameters are the store manager's perceived costs. For instance, the fixed ordering cost γ^f and the per-unit holding cost γ^h can be interpreted as the manager's perception of the shadow prices (or Lagrange multipliers) associated to the capacity constraints of a delivery truck and of the store, respectively.
Price-cost margins. LCBO's retail prices are a constant markup over their respective wholesale prices. There are different markups for Ontario products (65.5% markup) and non-Ontario products (71.5% markup) (see , ). A constant markup, say τ, implies that the price-cost margin is proportional to the retail price: p_t - c_t = ℒℐ p_t, where ℒℐ represents the Lerner index, which by definition equals τ/(1+τ). The Lerner index is equal to 0.655/1.655 = 0.40 for Ontario products and 0.715/1.715 = 0.42 for non-Ontario products.
Step (iv). Orders placed at the beginning of day t, y_t, arrive to the store at the end of the same day or at the beginning of t+1. Inventory is updated according to the following transition rule:
k_{t+1} = k_t + y_t - q_t
Finally, next period price p_t+1 is realized according to a first order Markov process with transition distribution function F_p(p_t+1|p_t).
§.§ Dynamic decision problem
A store manager chooses the order quantity y_t to maximize her store's expected and discounted stream of current and future profits. This is a dynamic programming problem with state variables 𝐱_t≡ (k_t, p_t, ln Q^[-7,-1]_t, 𝐬𝐞𝐚𝐬_t) and
ε_t≡ (ε_t(0), ε_t(1), ..., ε_t(J)) and value function V(𝐱_t,ε_t). This value function is the unique solution of the following Bellman equation:
V(\mathbf{x}_t, \varepsilon_t) = \max_{y_t \in \mathcal{Y}} \left\{ \pi(y_t, \mathbf{x}_t) + \sigma_\varepsilon \, \varepsilon_t(y_t) + \beta \, \mathbb{E}\left[ V(\mathbf{x}_{t+1}, \varepsilon_{t+1}) \mid y_t, \mathbf{x}_t \right] \right\},
where β∈ (0,1) is the store's one-day discount factor; π(y_t,𝐱_t) is the expected profit function up to the ε_t shock; and 𝔼[.| y_t, 𝐱_t] is the expectation over the i.i.d. distribution of ε_t+1, and over the distribution of 𝐱_t+1 conditional on 𝐱_t. The latter distribution consists of the transition probability F_p(p_t+1|p_t) and the distribution of demand d_t conditional on 𝐱_t which together with equations (<ref>) and (<ref>)
determines the distribution of (k_t+1, p_t+1, ln Q^[-7,-1]_t+1, 𝐬𝐞𝐚𝐬_t+1). The solution of this dynamic programming problem implies a time-invariant optimal decision rule: y_t = y^∗(𝐱_t, ε_t). This optimal decision rule is defined as the arg max of the expression within brackets {} in the right-hand-side of equation (<ref>).
For the solution and estimation of this model, we follow (, ) and use the integrated value function
V_σ(𝐱_t) ≡1/σ_ε∫ V(𝐱_t, ε_t) dε_t and the corresponding integrated Bellman equation. Given the Extreme Value distribution of the ε_t variables, the integrated Bellman equation has the following form:
V_\sigma(\mathbf{x}_t) = \ln\left[ \sum_{y \in \mathcal{Y}} \exp\left( \frac{\pi(y, \mathbf{x}_t)}{\sigma_\varepsilon} + \beta \, \mathbb{E}\left[ V_\sigma(\mathbf{x}_{t+1}) \mid y, \mathbf{x}_t \right] \right) \right].
The expected profit function π(y_t,𝐱_t) is linear in the parameters. That is,
\frac{\pi(y_t, \mathbf{x}_t)}{\sigma_\varepsilon} = \mathbf{h}(y_t, \mathbf{x}_t)' \, \gamma,
where γ is the vector of structural parameters γ≡ (1/σ_ε, γ^h/σ_ε, γ^z/σ_ε, γ^f/σ_ε, γ^c/σ_ε)', and 𝐡(y_t,𝐱_t) is the following vector of functions of the state variables:
\mathbf{h}(y_t, \mathbf{x}_t)' = \left( \mathcal{LI} \, p_t \, \mathbb{E}\left[\min\{d_t, k_t\} \mid \mathbf{x}_t\right], \; -k_t, \; \mathbb{E}\left[1\{d_t > k_t\} \mid \mathbf{x}_t\right], \; -1\{y_t > 0\}, \; -y_t \right),
where the expectation is taken over the distribution of demand conditional on 𝐱_t.
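For a single state point, the components 𝔼[min{d_t,k_t}|𝐱_t] and 𝔼[1{d_t > k_t}|𝐱_t] can be computed directly from the Negative Binomial demand distribution, as in the following illustrative sketch (function and parameter names are assumptions, not the paper's code):

```python
import numpy as np
from scipy.stats import nbinom

# Minimal sketch of the vector h(y, x) for one state point, assuming Negative
# Binomial demand with mean d_e and over-dispersion alpha (Var = d_e(1+alpha*d_e)),
# and Lerner index LI (0.40 for Ontario products in the text). Illustrative only.
def h_vector(y: int, k: int, price: float, d_e: float, alpha: float, LI: float = 0.40):
    # Map (mean, over-dispersion) to scipy's (n, p) parameterization
    n = 1.0 / alpha
    p = n / (n + d_e)
    d = np.arange(0, k + 1)
    pmf = nbinom.pmf(d, n, p)
    prob_stockout = 1.0 - nbinom.cdf(k, n, p)              # P(d_t > k_t | x_t)
    expected_sales = np.sum(d * pmf) + k * prob_stockout   # E[min{d_t, k_t} | x_t]
    return np.array([
        LI * price * expected_sales,   # multiplies 1/sigma_eps
        -k,                            # multiplies gamma^h / sigma_eps
        prob_stockout,                 # multiplies gamma^z / sigma_eps
        -float(y > 0),                 # multiplies gamma^f / sigma_eps
        -y,                            # multiplies gamma^c / sigma_eps
    ])
```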
We consider a discrete space for the state variables 𝐱_t.[In the estimation, we discretize the state space using a k-means algorithm.] Let 𝒳≡{𝐱^1, 𝐱^2, ..., 𝐱^L} be the support set of 𝐱_t. We can represent the value function V_σ(.) as a vector 𝐕_σ in the Euclidean space ℝ^L, and the transition probability functions of 𝐱_t for a given value of y as an L × L matrix 𝐅_𝐱(y). Taking this into account, as well as the linear-in-parameters structure of the expected profit π(y_t,𝐱_t), the integrated Bellman equation in vector form is:
\mathbf{V}_\sigma = \ln\left[ \sum_{y \in \mathcal{Y}} \exp\left( \mathbf{H}(y)\,\gamma + \beta\,\mathbf{F}_{\mathbf{x}}(y)\,\mathbf{V}_\sigma \right) \right].
where 𝐇(y) is an L × 5 matrix that in row r contains the vector 𝐡(y,𝐱^r)' for 𝐱^r∈𝒳.
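Given the matrices 𝐇(y) and 𝐅_𝐱(y), the integrated Bellman equation in vector form can be solved by successive approximations, as in the minimal sketch below (illustrative only, not the paper's code):

```python
import numpy as np

# Minimal sketch of solving V = log( sum_y exp( H(y) @ gamma + beta * F(y) @ V ) )
# by successive approximations. H is a list of L x 5 matrices and F a list of
# L x L transition matrices, one per order size y; both are assumed given.
def solve_integrated_bellman(H, F, gamma, beta, tol=1e-10, max_iter=10_000):
    L = H[0].shape[0]
    V = np.zeros(L)
    for _ in range(max_iter):
        # Choice-specific values v(y, x) = h(y, x)' gamma + beta * E[V | y, x]
        v = np.stack([H[y] @ gamma + beta * F[y] @ V for y in range(len(H))], axis=1)
        # Log-sum-exp over order sizes, computed stably
        vmax = v.max(axis=1, keepdims=True)
        V_new = (vmax + np.log(np.exp(v - vmax).sum(axis=1, keepdims=True))).ravel()
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V
```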
The Conditional Choice Probability (CCP) function, P(y|𝐱_t), is an integrated version of the decision rule y^∗(𝐱_t, ε_t). For any y ∈𝒴 and 𝐱_t∈𝒳, the CCP P(y|𝐱_t) is defined as ∫ 1{ y^∗(𝐱_t, ε_t) = y} dG(ε_t), where G is the CDF of ε_t. For the Extreme Value type 1 distribution, the CCP function has the Logit form:
P(y \mid \mathbf{x}_t) = \frac{\exp\left\{ \mathbf{h}(y, \mathbf{x}_t)'\,\gamma + \beta\,\mathbb{E}\left[ V_\sigma(\mathbf{x}_{t+1}) \mid y, \mathbf{x}_t \right] \right\}}{\sum_{j=0}^{J} \exp\left\{ \mathbf{h}(j, \mathbf{x}_t)'\,\gamma + \beta\,\mathbb{E}\left[ V_\sigma(\mathbf{x}_{t+1}) \mid j, \mathbf{x}_t \right] \right\}},
Following (), we can represent the vector of CCPs, 𝐏≡{P(y|𝐱): (y,𝐱) ∈𝒴×𝒳}, as the solution of a fixed-point mapping in the probability space: 𝐏 = ψ(𝐏). Mapping ψ is denoted the policy iteration mapping, and it is the composition of two mappings: ψ(𝐏) ≡λ(υ(𝐏)). Mapping λ(𝐕) is the policy improvement. It takes as given a vector of values 𝐕 and obtains the optimal CCPs as "best responses" to these values. Mapping υ(𝐏) is the valuation mapping. It takes as given a vector of CCPs 𝐏 and obtains the corresponding vector of values if the agent behaves according to these CCPs.[See () for a description of these three mappings in the context of a general dynamic programming problem.] In our Logit model, the policy improvement mapping has the following vector form, for any y ∈𝒴:
\mathbf{P}(y) = \lambda(y, \mathbf{V}) = \frac{\exp\left\{ \mathbf{H}(y)\,\gamma + \beta\,\mathbf{F}_{\mathbf{x}}(y)\,\mathbf{V} \right\}}{\sum_{j=0}^{J} \exp\left\{ \mathbf{H}(j)\,\gamma + \beta\,\mathbf{F}_{\mathbf{x}}(j)\,\mathbf{V} \right\}}.
The valuation mapping has the following form:
\mathbf{V} = \upsilon(\mathbf{P}) = \left[ \mathbf{I} - \beta \sum_{y=0}^{J} \mathbf{P}(y) \ast \mathbf{F}_{\mathbf{x}}(y) \right]^{-1} \left[ \sum_{y=0}^{J} \mathbf{P}(y) \ast \left( \mathbf{H}(y)\,\gamma + \text{euler} - \ln \mathbf{P}(y) \right) \right],
where euler is Euler's constant, and ∗ is the element-by-element vector product.
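The two mappings can be coded directly from these expressions; the sketch below is an illustrative implementation for given 𝐇(y), 𝐅_𝐱(y), and γ, assuming all CCPs are strictly positive. It is not the paper's code.

```python
import numpy as np

# Minimal sketch of the two mappings composing the policy iteration operator
# psi(P) = lambda(upsilon(P)) for the Logit model in the text. H[y] is L x 5,
# F[y] is L x L, gamma is the parameter vector; EULER is Euler's constant.
EULER = 0.5772156649015329

def policy_improvement(V, H, F, gamma, beta):
    """lambda(V): optimal CCPs given a vector of values V (Logit best response)."""
    v = np.stack([H[y] @ gamma + beta * F[y] @ V for y in range(len(H))], axis=1)
    v -= v.max(axis=1, keepdims=True)               # numerical stability
    expv = np.exp(v)
    return expv / expv.sum(axis=1, keepdims=True)   # L x (J+1) matrix of P(y|x)

def valuation(P, H, F, gamma, beta):
    """upsilon(P): values implied by behaving according to the CCPs P (P > 0)."""
    L, J1 = P.shape
    A = np.eye(L) - beta * sum(P[:, [y]] * F[y] for y in range(J1))
    b = sum(P[:, y] * (H[y] @ gamma + EULER - np.log(P[:, y])) for y in range(J1))
    return np.linalg.solve(A, b)

# One policy iteration step: P_new = policy_improvement(valuation(P, H, F, gamma, beta),
#                                                       H, F, gamma, beta)
```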
§.§ Parameter estimates
For every LCBO store and product in our working sample, we estimate the store-product specific parameters in vector γ using a Two-Step Pseudo Likelihood (2PML) estimator (). Given a dataset {y_t, 𝐱_t: t=1,2, ..., T} and arbitrary vectors of CCPs and structural parameters (𝐏,γ), define the pseudo (log) likelihood function:
Q(\mathbf{P}, \gamma) = \sum_{t=1}^{T} \ln \psi\left( y_t, \mathbf{x}_t \,;\, \mathbf{P}, \gamma \right),
where ψ(.) is the policy iteration mapping defined by the composition of equations (<ref>) and (<ref>). Note that the likelihood function Q(𝐏,γ) is a function of the store's one-day discount factor β. In the estimation, we fix the value[We could eventually relax this assumption and treat β as a parameter to be estimated, and allow it to vary across store managers in order to potentially capture different degrees of myopia or impatience.] of this discount factor equal to 0.95^{1/365}. In the first step of the 2PML method, we obtain a reduced-form estimate \hat{𝐏} of the vector of CCPs using a Kernel method. In the second step, the 2PML estimator is the vector \hat{γ} that maximizes the pseudo-likelihood function when 𝐏 = \hat{𝐏}. That is:

\hat{\gamma} = \arg\max_{\gamma} \; Q(\hat{\mathbf{P}}, \gamma)
() show that this estimator is consistent and asymptotically normal with the same asymptotic variance as the full maximum likelihood estimator. In section <ref> in the Appendix, we provide further details on the implementation of this estimator.
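Combining the mappings sketched above, a minimal illustration of the 2PML estimator is as follows; y_obs and x_obs denote the observed order sizes and state indices and are assumed inputs, and this is not the implementation used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch of the two-step pseudo maximum likelihood (2PML) estimator.
# Step 1 plugs in nonparametric CCP estimates P_hat; step 2 maximizes
# sum_t log psi(y_t, x_t; P_hat, gamma), where psi applies one policy iteration
# to P_hat. Reuses policy_improvement and valuation from the sketch above.
def pml_objective(gamma, P_hat, H, F, beta, y_obs, x_obs):
    V = valuation(P_hat, H, F, gamma, beta)        # upsilon(P_hat) at candidate gamma
    P = policy_improvement(V, H, F, gamma, beta)   # lambda(upsilon(P_hat))
    return -np.sum(np.log(P[x_obs, y_obs] + 1e-300))

def estimate_2pml(gamma0, P_hat, H, F, beta, y_obs, x_obs):
    res = minimize(pml_objective, gamma0,
                   args=(P_hat, H, F, beta, y_obs, x_obs),
                   method="Nelder-Mead")
    return res.x
```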
In a similar vein to the estimation of (S,s) thresholds in section <ref>, a portion of the variation in parameter estimates γ_i,j can be attributed to estimation error rather than genuine heterogeneity. To address this issue and mitigate the excessive dispersion or spurious heterogeneity resulting from estimation error, we employ a shrinkage estimator. The details of this estimator can be found in section <ref> in the Appendix.
Table <ref> presents the medians from the empirical
distributions (across stores and products)
of our estimates of the four structural parameters, measured in dollar amounts.[More specifically, we first obtain the two-step PML estimate of the vector γ≡ (1/σ_ε, γ^h/σ_ε, γ^z/σ_ε, γ^f/σ_ε, γ^c/σ_ε)', and then we divide elements 2 to 5 of this vector by the first element to obtain estimates of costs in dollar amount. We use the delta method to obtain standard errors.] The median values of the estimates are $0.0036 for the per-unit inventory holding cost, $0.0219 for the stockout cost, $2.9658 for the fixed ordering cost, and $0.0341 for the per-unit ordering cost. To have an idea of the importance of these dollar amounts, in Section <ref> below we provide measures of the implied magnitude of each cost relative to revenue. These magnitudes are consistent with other cost estimates in the inventory management literature (see <cit.>, <cit.>). Median standard errors and t-statistics in Table <ref> show that the inventory holding cost and the fixed ordering cost are very precisely estimated (median t-ratios of 5.32 and 12.65, respectively), while a substantial fraction of the estimates of the stockout cost are quite imprecise (median t-ratio of 0.27).
§.§ Relative contribution of the different costs
In this section we assess the magnitude of the different inventory management costs relative to store revenues. The purpose of this exercise is twofold. First, we want to evaluate whether our parameter estimates imply realistic magnitudes for the realization of these costs. And second, it is relevant to measure to what extent the heterogeneity in cost parameters that we have presented above generates heterogeneity in profits across stores. Conditional on their perception of cost parameters, store managers' optimal behavior should compensate – at least partly – for the differences in cost parameters such that heterogeneity in realized costs should be smaller. We want to measure the extent of this compensating effect.
For each component of the inventory management cost, and for every store-product, we calculate the ratio between the realized value of the cost during our sample period and the realized value of revenue during the same period. More specifically, we calculate the following ratios for every store-product: inventory holding cost to revenue; stockout cost to revenue; fixed ordering cost to revenue; variable ordering cost to revenue; and total inventory management cost to revenue. We have an empirical distribution over store-products for each of these ratios.
Table <ref> presents the median and the standard deviation in these distributions. To evaluate the magnitude of these ratios, it is useful taking into account that – according to the LCBO's annual reports – the total expenses to sales ratio of the retail chain is consistently around 16% each year.[Of course, these expenses do not include the cost of merchandise.] According to our estimate, the total inventory cost-to-revenue ratio for the median store is approximately 1.37%. This would imply that the retail chain's cost of managing the inventories of their stores would represent around 10% of total costs, which entails that non-inventory related costs would account for approximately 90% of total costs (e.g. labor costs, fixed capital costs, delivery costs). This seems to be of the right order of magnitude. Table <ref> shows that the fixed ordering cost is the largest realized cost for store managers at LCBO, followed by storage costs. Realized stockout costs are negligible. This is due to a combination of a small parameter that captures the stockout cost, and infrequent stockouts in our working sample.
In section <ref> in the Appendix, we present the empirical distribution across stores and products of each of the four cost-to-revenue ratios. We also show the extent to which managers' inventory decisions compensate for the heterogeneity in the structural parameters.
§.§ Heterogeneity in cost parameters
Below, we investigate two potential sources for the large heterogeneity in our cost parameters: (i) differences across stores, such as store type according to LCBO's classification of stores, physical area, total product assortment, distance to the warehouse, and consumer socioeconomic characteristics; and (ii) differences across local managers. Our goal in this section is to separate the heterogeneity attributable to store characteristics, and the heterogeneity stemming from the managers themselves. We proceed using a sequential approach. First, we regress our parameter estimates on a set of store characteristics. Then, we take the residual components from the first step and regress them on manager characteristics.
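The following sketch illustrates this sequential decomposition with ordinary least squares regressions; the variable names are placeholders rather than the paper's data fields.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Minimal sketch of the sequential decomposition: regress an estimated cost
# parameter on store characteristics, then regress the residuals on manager
# characteristics. Column names are illustrative placeholders.
def two_step_decomposition(df: pd.DataFrame, cost: str = "gamma_f"):
    # Step 1: store component (store type, region, assortment, market size, product FE)
    step1 = smf.ols(
        f"{cost} ~ C(store_type) + C(region) + ln_num_products"
        " + ln_population + ln_median_income + C(product)", data=df).fit()
    df = df.assign(residual=step1.resid)

    # Step 2: manager component (education and experience)
    step2 = smf.ols(
        "residual ~ education_years + lcbo_experience + other_experience",
        data=df).fit()
    return step1, step2
```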
First step: store characteristics. Table <ref> presents estimation results from the first-step regressions of each estimated cost parameter against store and location characteristics: LCBO's store type dummies (6 types); LCBO's regional market dummies (25 regions); logarithm of the number of unique products offered by the store; logarithm of population in the store's city; and logarithm of median income level in the store's city. As we have cost estimates at the store-product level, we also include product fixed effects.[Note that the store location dummies capture various factors, including the effect of the distance between the store and the warehouse.]
These store and location characteristics can explain an important part of the variation across stores in inventory holding costs and fixed ordering costs: the R-squared coefficients for these regressions are 0.39 and 0.53, respectively. Fixed ordering costs decline significantly with the number of products in the store, which is consistent with economies of scope in ordering multiple products. Inventory holding costs increase with assortment size and are significantly higher for AAA stores relative to D stores. In contrast, only 6% of the variation in unit ordering costs and 13% of the variation in stockout costs can be explained by these store and location characteristics. These results are robust to other specifications of the regression equation based on transformations of the explanatory and/or dependent variables.
Second step: manager characteristics. Table <ref> presents the estimation results from the second-step regressions of cost parameters on manager characteristics (educational attainment, years of experience at the LCBO, and other industry experience), after controlling for the variation explained by store characteristics. The overall finding is that managers' education and experience have non-significant effects in these regressions. There are two main reasons that can explain these negligible effects.
First, there is a substantial correlation between store characteristics and managers' skills. More skilled managers tend to be allocated to higher-class stores (positive assortative matching). Therefore, in the first-step regression, where store characteristics are included, these characteristics are also capturing the effect of managers' skills. As a result, the direct effect of managers' skills in the second-step regressions becomes less apparent.
Second, the insignificant effect of managers' skills on the estimated cost parameters aligns with the interpretation that the residual component of these parameters is associated with biased perceptions. More skilled managers may have a better measure of these costs, while less skilled managers may have noisier estimates. However, this does not imply a larger or smaller effect of managers' skills on the mean value of cost parameters. Instead, the effect would appear in the variance of the cost parameters, indicating differences in the precision of their estimates. Indeed, when we regress the variance of the cost parameters on managers' skills, we find evidence supporting this interpretation.
In Section <ref> of the Appendix, we also examine how the variance of the cost parameters depends on store characteristics. We find that the dispersion of the (second-step) manager component of costs is larger on average for lower-class stores. Since managers in these stores generally have lower levels of human capital (i.e. education and experience), we interpret the second-step manager component as a biased perception of the true cost from the point of view of store managers. That is, the (first-step) store component of the costs will be interpreted as the true cost, and the manager component will be interpreted as deviations from this true cost. In order to illustrate the interpretation of the residual component as manager bias, we present in Section <ref> of the Appendix two granular examples in which pairs of stores – located in close proximity to each other – are similar in size, sales, store classification, but have very different levels of manager experience and estimates of the cost parameters.
We explore the interpretations of our cost parameters, and their impact on store-level inventory outcomes, in the subsequent counterfactual experiments of Section <ref>.
§ COUNTERFACTUAL EXPERIMENTS
This section presents two sets of counterfactual experiments based on the model that we have estimated in the previous section. First, we study the contribution to inventory management outcomes from the heterogeneity in store managers' perceptions of costs. Second, we evaluate the effects of a counterfactual centralization of inventory management decisions at LCBO headquarters. We present this counterfactual experiment under different scenarios on the information that headquarters has about demand and costs at the store level.
§.§ Removing store managers' idiosyncratic effects
Let γ_i,j be the vector of estimates of cost parameters for product j and store i. Based on the regressions in Tables <ref> and <ref>, we decompose this vector into two additive and orthogonal components: the part explained by store and location characteristics, that we represent as γ^sto_i,j; and the part explained by local managers, γ^man_i,j. Below, we construct a counterfactual scenario that removes the idiosyncratic component γ^man_i,j from the inventory decision problem for store i and product j. For every store-product (i,j) in our working sample, we implement a separate counterfactual experiment for each of the four cost parameters, and one experiment that shuts down together the manager component of the four cost parameters. This implies a total number of 15,850 experiments.
We implement each of these experiments by solving the dynamic programming problem and obtaining the corresponding CCPs under the counterfactual values of the structural parameters. We use this vector of CCPs to calculate the corresponding ergodic distribution of the state variables for the store-product.[Note that this ergodic distribution incorporates the seasonal effects in the demand part of the model, as seasonal dummies are a component of the vector of state variables of the model.] Finally, we use the vector of CCPs and the ergodic distribution to calculate mean values of relevant outcome variables related to inventory management. We compare these average outcomes with their corresponding values under the factual values of structural parameters. In terms of outcome variables, we look at the same descriptive statistics as those reported in Table <ref> and Figure <ref>: stockout frequency, ordering frequency, inventory to sales ratio, inventory to sales ratio after an order (i.e. S threshold), and inventory to sales ratio before an order (i.e. s threshold).
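The computation of the ergodic distribution implied by a vector of CCPs can be sketched as follows, assuming the choice-specific transition matrices of the state are available. This is a simplified illustration, not our actual code.

```python
import numpy as np

def ergodic_distribution(P_ccp, F_x):
    """Ergodic (stationary) distribution of the state implied by a CCP vector.

    P_ccp : (n_states, n_choices) conditional choice probabilities P(y | x)
    F_x   : (n_choices, n_states, n_states) transition matrices of x given y
    """
    n_choices = P_ccp.shape[1]
    # Unconditional state transition matrix: F[x, x'] = sum_y P(y|x) F_x(y)[x, x']
    F = sum(P_ccp[:, [y]] * F_x[y] for y in range(n_choices))

    # Stationary distribution solves pi' F = pi', i.e. the left eigenvector
    # of F associated with the unit eigenvalue
    vals, vecs = np.linalg.eig(F.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()
```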
Figures <ref> (for stockout frequency), <ref> (for ordering frequency), and <ref> (for inventory-to-sales ratio) summarize the results from these experiments. In each figure, the horizontal axis measures the value of the corresponding parameter γ_i,j^man, and the vertical axis measures the difference in the mean value of the outcome variable between the factual and the counterfactual scenario. For instance, in Figure <ref>(a), the horizontal axis represents γ_i,j^h,man, and the vertical axis measures Δ SOF_i,j = SOF_i,j^factual - SOF_i,j^counter, where SOF is stockout frequency.
Note that the counterfactual experiment of shutting down γ_i,j^h,man to zero is equivalent to a change in parameter γ_i,j^h from the counterfactual value γ_i,j^h,sto to the factual value γ_i,j^h,sto + γ_i,j^h,man. Therefore, we can see the cloud of points in, say, Figure <ref>(a) as the results of many comparative statics exercises, all of them consisting in changes in the value of parameter γ^h. In these figures, there are multiple curves relating a change in γ^h with a change in the outcome variable because store-products have different values of the other structural parameters. However, each of these figures shows a monotonic relationship between a change in a cost parameter and the corresponding change in an outcome variable.
We compare the relationship between parameters and outcomes implied by these figures with the theoretical predictions from the model as depicted in equation (<ref>). More specifically, note that S-s is negatively related to the ordering frequency (the larger the S-s, the smaller the ordering frequency); s is negatively related to the stockout rate (the larger the s, the smaller the stockout rate); and the levels of both S and s are positively related to the inventory to sales ratio (the larger the S and s, the larger the ratio). The pattern in our figures is fully consistent with Blinder's theoretical predictions for this class of models.
Figure <ref> depicts the relationship between cost parameters and the stockout frequency. According to Blinder's formula, the lower threshold s depends negatively on γ^h and γ^f, and positively on γ^z, while the effect of γ^c is ambiguous. Panels (a) to (d) in Figure <ref> confirm the signs of these effects on the stockout frequency.
In Figure <ref>, we present the relationship between cost parameters and the ordering frequency. Blinder's formula says that S-s depends negatively on γ^h and positively on γ^f. Panels (a) and (c) confirm the sign of these effects on the ordering frequency. According to Blinder, the sign of the effects of γ^z and γ^c on ordering frequency is ambiguous because they affect the two thresholds S and s in the same direction. In Panel (b), we find a positive relationship between the stockout cost γ^z and ordering frequency. Panel (d) shows that the frequency of placing an order falls when the unit ordering cost increases.
Figure <ref> illustrates the relationship between cost parameters and the inventory-to-sales ratio. Blinder's formula establishes that the two thresholds S and s depend negatively on γ^h and positively γ^z. Panels (a) and (b) confirm these signs for the inventory-to-sales ratio. However, according to Blinder's formula, the sign of the effects of γ^f and γ^c on the inventory-to-sales ratio is ambiguous. Panels (c) and (d) show negative effects of γ^f and γ^c on the inventory-to-sales ratio.
It is of interest to measure the average effect across stores and products of shutting down the store manager-specific component in costs. Table <ref> presents these average effects for each of the four cost parameters and for the combination of the four.[Note that, by construction, the manager component γ^man_i,j has mean zero and is orthogonal to the store component γ^sto_i,j. Therefore, if the model implied a linear relationship between outcome variables and structural parameters, then the average effect of shutting down the residual component would be zero. For the same reason, a first-order linear approximation to this average effect is zero. However, the model implies a nonlinear relationship between outcomes and structural parameters such that it is a relevant empirical question to look at these average effects. In fact, we find that the effect is not negligible at all.] Removing the manager component in all four inventory costs generates a decrease in the mean ordering frequency of 1.6 percentage points, from 17.8% to 16.2%; a decrease in the inventory-to-sales ratio of 4.7 days of average sales, from 26.8 to 22.1 days; a decrease in the lower s threshold of 6 days, from 22.6 to 16.6 days; and a decrease in the S-s gap of 1.5 days, from 9.1 to 7.6 days.
Store managers' idiosyncratic perception of costs has a substantial effect on inventory management at the aggregate firm level. It entails a 6-day decrease in waiting time between two orders, an increase in the average order amount of 1.5 days of average sales, and a 21% increase in the inventory-to-sales ratio, but a negligible effect on the frequency of stockouts. Accordingly, if this idiosyncratic component is a biased perception, then it has a substantial negative impact on the firm’s profit as it increases storage and ordering costs with almost no effect on stockouts and revenue. The bottom row in Table <ref> presents the effect of removing γ^man_i,j on total inventory management cost calculated using γ^sto_i,j but not γ^man_i,j. We find that, on average, this cost declines by 12.1%. This substantial effect plays an important role in the counterfactual experiment on centralization that we present in the next section.
§.§ Centralizing inventory decision-making
We now address the main question that motivates this paper: would the LCBO retail chain benefit from managing the stores' inventories at the headquarter level, as opposed to allowing heterogeneous store managers to have autonomy in their inventory decisions? To answer this question, we need to establish some conditions on the headquarters' information about store-level demand, inventories, and cost parameters. The experiments that we present below are based on the following conditions.
First, based on the institutional details we describe in Section <ref>, we consider that headquarters process store-product level transactions data with a one-week delay. Transmission of information from stores to headquarters occurs in real time, without any substantial delay. However, it takes time to process that information to generate headquarters demand predictions and ordering recommendations. Though a fully automated inventory management system is possible, human supervision can add value by accounting for soft information (, ). Accordingly, in the counterfactual centralized system, we replace state variable Q^[-7,-1]_t with the one-week lag of this variable, i.e., Q^[-14,-8]_t.
Second, to compare profits between the centralized and decentralized structures, we must take a stance on what are the "true" cost parameters. We assume that the true cost parameters are γ^sto_i,j which are determined by store and location characteristics. Under the centralized system, the headquarters know these costs and take inventory decisions for every store based on these costs. In contrast, we interpret γ^man_i,j as store managers' behavioral biases and not as "true" costs. Under the decentralized system, store managers make decisions as if the cost parameters were γ^sto_i,j+γ^man_i,j, but our measure of their profits is based only on γ^sto_i,j. The evaluation of profits under this assumption provides an upper bound for the (profit) gains from centralization. Alternatively, we could assume that γ^man_i,j is a true component of profit that is known to the store manager but unknown to the headquarters. This alternative assumption would provide a lower bound for the gains from centralization.
Based on these assumptions, this counterfactual experiment measures the following trade-off in the choice between centralized and decentralized inventory management. A negative aspect of decentralization is that store managers have different skills and behavioral biases as captured by the idiosyncratic components γ^man_i,j. These biases should have a negative effect on LCBO profits. The positive aspect of decentralization is that store managers have just-in-time information about demand, sales, and inventories, while the firm's headquarters process this information with one week delay. This just-in-time information should have a positive effect on LCBO profits.
We depict the results of this experiment in Table <ref>, and Figures <ref> and <ref>. Similarly as for the counterfactuals in section <ref>, we evaluate the effects using the ergodic distributions of the state variables under the factual and counterfactual scenarios. Table <ref> presents means, medians, and several percentiles for the profit per store-product under the centralized and decentralized systems and for the gains from decentralization. To have a better perspective of the implications of these effects, it is useful to take into account that a 1% change in profit per store-product represents approximately $17 million in total annual profit for LCBO in year 2012.[According to the 2012-2013 LCBO annual report, the annual profit (net income) of the company was $ 1.7 billion. Therefore, 1% of this profit is $17 million.] At the aggregate level, decentralization has a negative impact on LCBO profits. It implies a 2% decline in profits, which represents $34 million in annual profits for the retail chain. The effect on the median store is also negative: -2.1%. This relatively modest effect is the result of combining two large effects with opposite signs. The one-week delay in the processing of information in the centralized system has a non-negligible negative impact on profits at every store. However, this negative effect of the centralized system is more than compensated by the large increase in profits due to reducing ordering and storage costs when removing store managers' biased perceptions of costs. This is illustrated in the bottom row of Table <ref>: on average, decentralization increases total inventory costs by 23%.
The evidence on the mean and median effects in Table <ref> is not necessarily sufficient for a retail chain to adopt a centralized inventory management system. A retail chain may need to assess the distributional effects of the gains/losses across its stores before adopting a substantial organizational change, and not simply rely on the average effect. Table <ref> and Figure <ref> show significant heterogeneity in the impact of decentralization, with a considerable amount of stores benefiting from the decentralized structure: the 90^th percentile store has a gain in profit of 1.8%, which is not negligible. Therefore, although centralization would generate positive gains in total profits for the retail chain relative to the existing decentralized structure, the distributional effects of these gains are important to assess.
Figure <ref> provides a closer look at the heterogeneous effect of (de)centralization. It presents the empirical distribution of the percentage change in average daily inventory costs. The median of this distribution is a positive increase in costs (i.e., 3.7%), but most striking is its long right tail, which drives the average increase in inventory costs up to 23%.
§ CONCLUSION
Retail chains are complex organizations with various divisions and teams, each having its own decision-making authority. Store managers, in particular, play a crucial role within certain retail chains. These managers have the advantage of collecting and processing timely information specific to their individual stores. This store-level information is more consistent and manageable compared to information at the chain-wide level. However, the transfer and processing of this information from stores to headquarters can introduce delays ranging from days to weeks, which can negatively impact decision-making and overall profitability.
On the other hand, store managers exhibit heterogeneous skills, motivations, and levels of effort. A centralized decision-making system that selectively utilizes the most competent managers within the organization can help mitigate the negative effects stemming from the variation in managers' skills.
In this paper, we examine the trade-off between centralized and decentralized decision-making in the context of inventory management within a large retail chain. Leveraging a unique dataset containing daily information on inventories, sales, prices, and stockouts at the individual store and product level, we estimate a dynamic structural model to capture store managers' inventory decisions. Using revealed preference as a guiding principle, we obtain separate estimates for each store and product regarding four cost parameters: per unit inventory holding cost, stockout cost, fixed ordering costs, and per unit ordering costs. Our analysis reveals significant heterogeneity across stores in these cost parameters. While observable store and location characteristics account for part of this heterogeneity, a substantial portion can be attributed to idiosyncratic information and perceptions of store managers.
We utilize the estimated model to conduct a counterfactual experiment, evaluating the impact of centralizing inventory management at LCBO. In this experiment, we assume that the idiosyncratic cost component associated with store managers represents a behavioral bias rather than true costs. This assumption allows us to provide an upper-bound estimate for the gains from centralization. Our findings indicate that a centralized inventory management system would lead to a modest 2% increase in LCBO's annual profit. This outcome arises from the combination of two opposing effects. The negative effect on profits due to the loss of just-in-time information from store managers is outweighed by the significant reduction in ordering and storage costs resulting from the elimination of behavioral biases and skill heterogeneity among store managers (an average reduction of 23% overall and 3.7% for the median store).
Furthermore, the effects of centralization are highly heterogeneous across stores within the retail chain, with a substantial number of stores experiencing significant losses from adopting centralization. This distributional effect has important implications for the decision-making process when considering organizational changes aimed at maximizing overall company profit (, ).
Our empirical findings highlight the advantages of a hybrid inventory management system that combines decentralized decision-making with centralized control. By assigning decision rights to high-skilled store managers and utilizing a centralized system for stores where skill levels are lower, we can eliminate subjective biases while retaining the benefits of just-in-time local information for some of the stores. The structural model presented in this paper provides a useful tool for determining the specific allocation of decision rights across stores.
18pt
§.§ Correlations between inventory outcomes
Figure <ref> below presents five scatter plots at the store level: (a) stockout frequency against ordering frequency; (b) stockout frequency against inventory-to-sales ratio; (c) stockout frequency against inventory-to-sales ratio after an order is received; (d) stockout frequency against inventory-to-sales ratio before an order is placed; and (e) inventory-to-sales ratio after an order is received against inventory-to-sales ratio before an order is placed. The simple correlations in these figures provide preliminary descriptive evidence on the possible sources of structural heterogeneity, such as heterogeneity across stores in storage cost, stockout cost, ordering cost, or demand uncertainty, which are structural parameters in our model in Section <ref>.
The strongest correlation appears in Panel (e), for the relationship between our measures of the thresholds S and s.[For the interpretation of this empirical evidence, it is useful to take into account the comparative statics of the (S,s) thresholds as functions of the structural parameters in the profit function. We present these comparative statics in equation (<ref>) in Section <ref>, based on results in () and ().] This positive correlation can be explained by store heterogeneity in stockout costs and/or storage costs: a higher stockout cost (storage cost) implies higher (lower) values of both s and S. In contrast, a higher lump-sum ordering cost implies a lower s but a negligible effect on S. Therefore, the positive correlation between the lower and upper thresholds we observe seems more compatible with store heterogeneity in stockout and/or storage costs than with heterogeneity in ordering costs. We confirm this conjecture in the estimation of the structural model in Section <ref>.
Panel (a) shows a negative relationship between the stockout frequency and the ordering frequency. As one would expect, stores placing orders more frequently tend to have lower stockout rates. Panels (b) and (c) show a small negative relationship between stockout rates and the inventory to sales ratio overall and after an order is received, respectively. These findings are what we would expect: stores that have lower inventory-on-hand on average experience higher stockout rates, and stores that order up to a smaller threshold S also experience higher stockout rates. Relatedly, panel (d) shows a small negative relationship between stockout rates and our measure of the threshold s. Again, this is what we would expect, as stores with a lower safety stock level are more likely to experience higher stockout rates.
§.§ Correlations between manager and store characteristics
Figure <ref> presents correlations between education and experience of managers and store classification. Across all three panels, there seems to be a small positive relationship between store classification and manager characteristics. However, the strongest relationship is in Panel (c), where higher educational attainment is associated with higher store classification.
§.§ Estimates of Sales Forecasting Equation
Table <ref> summarizes our estimation results for the sales forecasting function. For each store and product, we estimate a Negative Binomial regression function using Maximum Likelihood. The set of explanatory variables includes the logarithm of retail price, the logarithm of the store-product sales in the last week, and two seasonal dummies: a weekend dummy and a major-holiday dummy. For each product, Table <ref> reports the three quartiles in the distribution across stores of parameter estimates and their respective standard errors for the coefficient of log-price, the coefficient of log-lagged-weekly-sales, and the over-dispersion parameter in the Negative Binomial model. Given our interest in the forecasting power of this equation, we also report the three quartiles of McFadden's Pseudo R-squared coefficient (i.e., one minus the ratio between the log-likelihoods of the estimated model and a model with only a constant term).
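A minimal sketch of this store-product level regression, using the Negative Binomial model available in the statsmodels package; the variable names and data layout are assumptions made for illustration only.

```python
import numpy as np
import statsmodels.api as sm

def fit_sales_forecast(sales, log_price, log_lag_weekly_sales, weekend, holiday):
    """Negative Binomial sales forecasting regression for one store-product.

    sales                : (T,) daily units sold
    log_price            : (T,) log of retail price
    log_lag_weekly_sales : (T,) log of sales over the previous week
    weekend, holiday     : (T,) 0/1 seasonal dummies
    """
    X = sm.add_constant(
        np.column_stack([log_price, log_lag_weekly_sales, weekend, holiday])
    )
    model = sm.NegativeBinomial(sales, X)      # over-dispersion parameter estimated jointly
    res = model.fit(disp=0)
    pseudo_r2 = 1.0 - res.llf / res.llnull     # McFadden's pseudo R-squared
    return res, pseudo_r2
```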
The estimates for the lagged-sales coefficient show very substantial time persistence for all products and most stores. Standard errors show that this parameter is estimated with good precision. The estimates for the log-price coefficient are mostly negative and large in absolute value, though they are not precisely estimated as LCBO changes prices quite infrequently. The estimate of the over-dispersion parameter is substantially smaller than one for almost all the stores and products, which implies evidence of over-dispersion and the rejection of the Poisson regression model. The magnitude of the Pseudo R-squared coefficient is on average around 6% for the median store, which seems small. However, it is important to note that the uncertainty about daily sales of a single product and store can be substantially larger than the uncertainty about aggregate sales at the monthly level or aggregated over products and/or stores.
§.§ Stockouts at the Warehouse Level
§.§ Store heterogeneity in estimated (S,s) thresholds
We investigate the heterogeneity across stores in the estimated thresholds. For each store-product pair, we begin by obtaining the log-lower-threshold and the log-upper-threshold evaluated at the mean value of log-price and retail-specific mean value of log-expected-demand. We denote these log-thresholds as log-s_0 and log-S_0, respectively. Figure <ref> presents the inverse CDF of the store-specific average estimates of log-s_0 and log-S_0 and the Bonferroni 95% confidence band under the null hypothesis of store homogeneity. These distributions show very significant differences in store level estimates. For the log-lower-threshold, 98% of stores lie outside the confidence bands and therefore have different values of store level log-thresholds. For the upper-log-threshold, only 3% of stores lie within the confidence bands, which entails that 97% of stores have different values of store level log-thresholds.
In addition to this between-store heterogeneity, we also observe significant positive correlation between the two log-thresholds. This confirms our previous conjecture from Figure <ref>, in which we observed a positive correlation between the inventory to sales ratio before and after an order is placed. Again, this correlation can be explained by differences across stores in stockout costs or/and inventory holding costs.
Given that we have estimates at the store-product level, we can also explore within-store heterogeneity. Table <ref> below presents a variance decomposition of the log-thresholds s_0 and S_0. More specifically, we are interested in disentangling how much of the differences we observe in Figure <ref> is attributable to variation across stores, and how much is because of differences across products. Table <ref> presents an interesting finding: for the lower threshold, between-store variance is significantly larger than within-store variance, while the opposite is true for the upper threshold. That is, the order-up-to quantity seems to be relatively homogeneous across stores, while the safety stock level seems to vary significantly.
§.§ Details on estimation method of structural parameters
(i) Nonparametric estimation of CCP function. In the first step of the 2PML method, we use the following Kernel method for the estimation of the CCP function. For every (y,𝐱) ∈𝒴×𝒳:
P(y|𝐱) = ∑_t=1^T 1{y_t = y} K_T(𝐱_t - 𝐱) / ∑_t=1^T K_T(𝐱_t - 𝐱)
where K_T(𝐮) is the Kernel function 1/(1 + √(T)||𝐮||) with || . || being the Euclidean distance.
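A direct implementation of this estimator could look as follows (a minimal sketch; the names are illustrative):

```python
import numpy as np

def ccp_kernel(y_obs, x_obs, y, x):
    """Kernel estimator of P(y | x) described above.

    y_obs : (T,) observed choices
    x_obs : (T, d) observed state vectors
    y, x  : point of evaluation
    """
    d = np.linalg.norm(x_obs - x, axis=1)          # Euclidean distance ||x_t - x||
    K = 1.0 / (1.0 + np.sqrt(len(y_obs)) * d)      # K_T(u) = 1 / (1 + sqrt(T) ||u||)
    return np.sum((y_obs == y) * K) / np.sum(K)
```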
(ii) Discretization of state variables. This estimation method applies to models where the vector of state variables x has discrete support. In principle, our state variables have continuous support. We have applied a K-means clustering method for the discretization of the exogenous state variables. We apply this method separately for each store-product. More specifically, we apply K-means to discretize the variables p_t and ln Q^[-7,-1]_t. For every store and product pair, we cluster these variables using a k-means algorithm with a squared Euclidean distance metric and the k-means++ cluster initialization. For both variables, we impose the number of clusters to be 2. For the endogenous state variable k_t, along with the choice variable y_t, we choose a set of fixed grid points. Specifically, we allow k_t to take values between 0 and 100 with an interval of 2, and y_t to take values between 0 and 48 with an interval of 6. The latter preserves an important aspect of the nature of orders placed by store managers at LCBO: most orders are placed in multiples of 6, and most order sizes are smaller than or equal to 48.[Note that ln d^e and σ^2 are indirectly clustered through ln Q and p, as the sales forecasting equation determines the space of the variables ln d^e and σ^2.] Table <ref> below presents the frequency of orders in the choice set. Overall, the grid points in the choice space represent approximately 98% of orders that we observe in the data.
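A sketch of this discretization step, assuming the scikit-learn implementation of k-means with k-means++ initialization is an acceptable stand-in for the clustering routine described above, and mapping each observation to its cluster centroid (an implementation choice made here for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

def discretize_states(p, logQ):
    """Discretize price and lagged weekly log-sales into 2 clusters each."""
    km_p = KMeans(n_clusters=2, init="k-means++", n_init=10).fit(p.reshape(-1, 1))
    km_q = KMeans(n_clusters=2, init="k-means++", n_init=10).fit(logQ.reshape(-1, 1))
    # Replace each observation by the centroid of its cluster
    p_disc = km_p.cluster_centers_[km_p.labels_].ravel()
    q_disc = km_q.cluster_centers_[km_q.labels_].ravel()
    # Fixed grids for the endogenous state and the choice variable
    k_grid = np.arange(0, 101, 2)   # inventory: 0 to 100 in steps of 2
    y_grid = np.arange(0, 49, 6)    # orders: 0 to 48 in steps of 6
    return p_disc, q_disc, k_grid, y_grid
```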
Finally, in Table <ref> below, we present a variance decomposition of the state space. Our goal is to assess whether the discretization of the state space is such that it captures most of the variation we observe in the data. The discretization of variables p and lnQ preserves most of their sample variation, with discretized variance representing approximately 99% and 89% of overall variance, respectively. However, for variable k, the discretization is significantly restrictive. The variance of the discretized variable represents only approximately 20% of the overall variance.
(iii) Computing time.
Most of the computing time in the implementation of this two-step estimator comes from the calculation of present values, and more specifically from the inversion of matrix 𝐈 - β∑_y=0^J𝐏(y) ∗𝐅_x(y) that has dimension |𝒳| × |𝒳|. Nevertheless, the computing time to obtain the 2PML for one store-product – using standard computer equipment – was around 20 seconds, and the total computing time for the approximately 634 × 5 = 3,160 store-products in our working sample was less than 18 hours.
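In practice, the present values can be obtained by solving the corresponding linear system rather than forming the inverse explicitly. A minimal sketch, with the expected flow payoff under the CCPs treated as a generic, model-specific input:

```python
import numpy as np

def present_values(P_ccp, F_x, flow, beta):
    """Solve (I - beta * sum_y P(y) * F_x(y)) W = flow for the present values W.

    P_ccp : (n_states, n_choices) conditional choice probabilities
    F_x   : (n_choices, n_states, n_states) state transition matrices
    flow  : (n_states,) expected flow payoff under the CCPs (model-specific)
    beta  : discount factor
    """
    n_states, n_choices = P_ccp.shape
    A = np.eye(n_states) - beta * sum(
        P_ccp[:, [y]] * F_x[y] for y in range(n_choices)
    )
    # Solving the linear system is cheaper and more stable than inverting A
    return np.linalg.solve(A, flow)
```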
§.§ Empirical distribution of parameter estimates
Figure <ref> plots the empirical density across stores and products of our raw estimates of the four structural parameters, measured in dollar amounts. These empirical densities show substantial heterogeneity across stores and products in the four parameter estimates.
§.§ Shrinkage estimator
To correct for excess dispersion, we consider the following shrinkage estimator (see <cit.>):
γ^∗_i,j = γ̅ + ( 1 - σ^2_i,j/Var(γ) )^1/2 ( γ_i,j - γ̅ )
where γ_i,j is the original parameter estimate; σ_i,j is its standard error; and γ̅ and Var(γ) represent the mean and variance, respectively, in the empirical distribution of γ_i,j across stores and products. This estimator generates a distribution of estimates across store-products that corrects for the spurious heterogeneity due to estimation error. By construction, we have that Var(γ̂^*_ij) = Var(γ̂_ij) - E(σ^2_ij).
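A minimal sketch of this correction; clipping the shrinkage factor at zero when the noise variance exceeds the cross-sectional variance is an implementation choice not spelled out above.

```python
import numpy as np

def shrink(gamma_hat, se):
    """Shrinkage correction for excess dispersion in the raw estimates.

    gamma_hat : (n,) raw parameter estimates across store-products
    se        : (n,) corresponding standard errors
    """
    gbar = gamma_hat.mean()
    var_g = gamma_hat.var()
    factor = np.sqrt(np.clip(1.0 - se**2 / var_g, 0.0, None))
    return gbar + factor * (gamma_hat - gbar)
```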
§.§ Heterogeneity in realized inventory management costs
In Figure <ref>, the blue curves represent the CDFs across stores and products of each of the four cost-to-revenue ratios. These distributions show the following ranges between percentiles 5% and 95%: [0.1%, 0.7%] for the inventory holding cost; [0.0%, 0.03%] for the stockout cost; [0.4%, 2.75%] for the fixed ordering cost; and [0.05%, 0.6%] for the variable ordering cost. We can see that the realized fixed ordering costs are not only the costs with larger contribution to the firms' profit, but also with larger heterogeneity across stores.
The dispersion across stores in these cost-to-revenue ratios is the combination of dispersion in structural parameters and dispersion in decision and state variables affecting these costs. In particular, managers' optimal inventory decisions can partly compensate for the heterogeneity in the structural parameters. For instance, the inventory holding cost to revenue ratio for store i and product j is γ^h_i,j k_i,j / r_i,j. A store-product with large per-unit inventory holding cost, γ^h_i,j, will tend to keep smaller levels of inventory than a store-product with a small value of this parameter such that the difference between these stores in the ratio γ^h_i,j k_i,j / r_i,j will be smaller than the difference between their per unit inventory holding cost. To measure the magnitude of this behavioural response by store managers, the red curves in Figure <ref> present the CDFs of the cost ratios when we replace the store-product specific structural parameters by their means across products, but we keep the values of decisions and state variables. That is, for the inventory holding cost ratio, the red curve is the CDF of variable γ^h_i k_i,j / r_i,j. For each of the four inventory ratios, the counterfactual CDFs in the red curves are steeper than the factual CDFs in the blue curves. Store managers with a perception of higher inventory costs make decisions that entail lower costs of managing their inventories relative to revenue.
§.§ Dispersion of cost parameters
In Table <ref>, we explore how the dispersion of the manager component of costs depends on store and location characteristics. Consistent with the interpretation of managers' biased perception of true costs, we find that managers in high-type stores have a smaller dispersion in this component of costs.
§.§ Manager bias: granular examples
The first pair of LCBO stores we explore in Table <ref> is store #452 (1138 Avenue Road) and store #572 (1245 Dupont Street), both located in the Toronto-North area. Although both stores are of type "A" and have similar average weekly sales, their managers have very different years of experience at LCBO and significantly different estimates of cost parameters. Specifically, the manager of store #572 has an additional 16 years of experience at LCBO, a 29% higher average holding cost, a 65% lower average stockout cost, a 13% higher average fixed ordering cost, and a 19% lower average unit ordering cost. The second pair of stores we examine is store #538 (122 Rideau Street) and store #547 (111 Albert Street), both located in the Ottawa-Central area. Again, although the two stores are of type "B" and have similar average weekly sales, the managers have very different experience levels and significantly different cost estimates. Store #547 has 34 fewer years of experience, a 34% lower average holding cost, a 27% higher average stockout cost, a 39% higher fixed ordering cost, and a (mere) 1% lower average unit ordering cost.
|
http://arxiv.org/abs/2307.07616v1 | 20230714202418 | Strain engineering of Zeeman and Rashba effects in transition metal dichalcogenide nanotubes and their Janus variants: An ab initio study | [
"Arpit Bhardwaj",
"Phanish Suryanarayana"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci",
"physics.chem-ph"
] |
College of Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
[email protected]
College of Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
We study the influence of mechanical deformations on the Zeeman and Rashba effects in synthesized transition metal dichalcogenide (TMD) nanotubes and their Janus variants from first principles. In particular, we perform symmetry-adapted density functional theory simulations with spin-orbit coupling to determine the variation in the Zeeman and Rashba splittings with axial and torsional deformations. We find significant splitting in molybdenum and tungsten nanotubes, for which the Zeeman splitting decreases with increase in strain, going to zero for large enough tensile/shear strains, whereas the Rashba splitting coefficient increases linearly with shear strain and is zero for all tensile strains, a consequence of the inversion symmetry remaining unbroken. In addition, the Zeeman splitting is relatively unaffected by nanotube diameter, whereas the Rashba coefficient decreases with increase in diameter. Overall, mechanical deformations represent a powerful tool for spintronics in TMD nanotubes as well as their Janus variants.
Strain engineering of Zeeman and Rashba effects in transition metal dichalcogenide nanotubes and their Janus variants: An ab initio study
Phanish Suryanarayana
August 12, 2023
===========================================================================================================================================
§ INTRODUCTION
Transition metal dichalcogenide (TMD) nanotubes are 1D materials of the form MX2, where M and X represent a transition metal and chalcogen, respectively <cit.>. They represent the most diverse group of nanotubes, there being 38 transition metals and 3 chalcogens, resulting in a total of 114 possible combinations. Of these, around 12 have already been synthesized, which represents a significant fraction of the total number of experimentally realized nanotubes, and the most in any group <cit.>. The number of such nanotubes doubles when considering their Janus variants <cit.> — nanotubes of the form MXY, where Y represents a chalcogen that is distinct from X — of which WSSe has recently been synthesized <cit.>.
TMD nanotubes and their Janus variants demonstrate varying electronic properties, ranging from semiconducting <cit.> to metallic <cit.> to superconducting <cit.>. Notably, these properties can be tuned/engineered by a number of mechanisms, including mechanical deformation <cit.>, electric field <cit.>, temperature <cit.>, chirality/radius <cit.>, and defects <cit.>. This makes the nanotubes ideally suited for various technological applications, including mechanical sensors <cit.>, nanoelectromechanical (NEMS) devices <cit.>, biosensors <cit.>, photodetectors <cit.>, and superconductive materials <cit.>. However, the potential for TMD and Janus TMD nanotubes (and nanotubes in general) to be used in spintronic applications has not been studied heretofore, particularly in the context of first principles calculations.
Spintronics or spin electronics refers to the exploitation of both spin and the electronic charge in solid state devices <cit.>. In this context, the Zeeman and Rashba effects are of particular interest, both being relativistic effects arising from spin-orbit coupling (SOC). In particular, the Zeeman and Rashba effects result in splitting of the electronic bands along the energy and wavevector axes, respectively, of particular importance being those at the valence band maximum (VBM) and the conduction band minimum (CBM). These effects have been studied in TMD monolayers and their Janus variants not only experimentally <cit.>, but also theoretically using ab initio Kohn-Sham density functional theory (DFT) calculations <cit.>. In addition, the effect of strain on the Zeeman and Rashba splittings has been studied in Janus TMD bilayers <cit.> and their heterostructures <cit.> using DFT. However, there have been no such studies for TMD and Janus TMD nanotubes (and nanotubes in general), which provides the motivation for the current investigation.
In this work, we study the influence of mechanical deformations on the Zeeman and Rashba effects in the synthesized TMD nanotubes and their Janus variants using Kohn-Sham DFT calculations. In particular, we perform symmetry-adapted DFT simulations with SOC to determine the variation in the Zeeman and Rashba splittings with axial and torsional deformations. We find significant splitting for the nanotubes having the transition metal as either molybdenum or tungsten. In particular, axial and torsional deformations can be used to vary the Zeeman splitting, while torsional deformations can be used to introduce and vary the Rashba splitting, making the nanotubes particularly well-suited for spintronics applications.
The remainder of this manuscript is organized as follows. In Section <ref>, we list the standard and Janus TMD nanotubes studied and describe the symmetry-adapted Kohn-Sham DFT simulations for the calculation of the Zeeman and Rashba splittings. Next, we present and discuss the results of the simulations in Section <ref>. Finally, we provide concluding remarks in Section <ref>.
§ SYSTEMS AND METHODS
We start by considering the TMD nanotubes that have been synthesized <cit.>: {MoS2, MoSe2, MoTe2, WS2, WSe2, WTe2, NbS2, NbSe2, TaS2, TiS2, TiSe2, HfS2, and ZrS2}. Since we have found that spin-orbit coupling (SOC) does not cause any splitting in the nanotubes: {NbS2, NbSe2, TaS2, TiS2, TiSe2, HfS2, and ZrS2}, we henceforth consider the remaining TMD nanotubes: {MoS2, MoSe2, MoTe2, WS2, WSe2, WTe2}, as well as their Janus variants with the heavier chalcogen on the outside: {MoSSe, MoSTe, MoSeTe, WSSe, WSTe, WSeTe}, all with 2H-t symmetry. We consider their armchair configurations, since the results remain unchanged for the zigzag configuration, in agreement with previous observations for SOC in the MoS2 nanotube <cit.>. The diameters of the TMD nanotubes are chosen to be commensurate with those synthesized, and the diameters of the Janus TMD nanotubes are set to DFT-calculated equilibrium values (Table <ref>). The axial and torsional deformations considered are also commensurate with those in experiments <cit.>. Indeed, through phonon calculations using ABINIT <cit.>, we have verified that the monolayer counterparts are stable at the largest tensile/shear strains (Supplementary Material), which suggests the stability of the nanotubes at the chosen strains, as curvature effects on the phonon spectrum are expected to be minor at the relatively large diameters of the nanotubes.
We perform Kohn-Sham DFT simulations using the Cyclix-DFT <cit.> feature in the state-of-the-art real-space code SPARC <cit.>. In particular, we perform symmetry-adapted calculations that exploit the cyclic and/or helical symmetry in the system to reduce the Kohn-Sham problem to the unit cell/fundamental domain with a minimal number of atoms <cit.>, e.g., the fundamental domain for the chosen nanotubes contains only 3 atoms, i.e., 1 metal and 2 chalcogen atoms (Fig. <ref>). This reduction due to symmetry can be exploited even on the application of axial and/or torsional deformations, tremendously lowering the computational expense, given that DFT calculations scale cubically with system size, making otherwise impractical calculations routine, e.g., an 8.5 nm diameter MoSSe nanotube with an external twist of 6×10^-4 rad/bohr has 219,888 atoms in the simulation domain when employing periodic boundary conditions, a system size that is impractical even with state-of-the-art approaches <cit.>. Cyclix-DFT is now a mature open source feature in SPARC, verified both by comparisons with established DFT codes <cit.> and by its ability to make accurate predictions in diverse physical applications <cit.>.
In all simulations, we employ the Perdew–Burke–Ernzerhof (PBE) <cit.> exchange-correlation functional, and optimized norm-conserving Vanderbilt (ONCV) <cit.> pseudopotentials with nonlinear core correction (NLCC) and SOC from the PseudoDojo collection <cit.>. The equilibrium geometry of the nanotubes (Supplementary Material) is in very good agreement with previous DFT calculations <cit.>, and the equilibrium geometry of the corresponding monolayers (Supplementary Material) is in very good agreement with experiments <cit.> as well as DFT calculations <cit.>, verifying the accuracy of the chosen pseudopotential and exchange-correlation functional. Though more advanced and expensive exchange-correlation functionals such as hybrid generally provide better spectral properties, this is not always the case, e.g., Janus TMD monolayers <cit.>, motivating the choice of PBE exchange-correlation here, as done in previous works for such systems <cit.>. The numerical parameters in the Cyclix-DFT simulations, including grid spacing for real-space discretization, grid spacing for Brillouin zone integration, vacuum in the radial direction, and structural relaxation tolerances, are chosen such that the Zeeman splitting values and Rashba coefficients are converged to within 0.01 eV and 0.01 eV Å, respectively. This translates to an accuracy of 10^-4 Ha/atom in the ground state energy.
§ RESULTS AND DISCUSSION
We now use the aforedescribed framework to study the effect of axial and torsional deformations on the Zeeman and Rashba splittings in the molybdenum and tungsten TMD nanotubes and their Janus variants. In the results presented here, the axial strain (ε) is defined as the change in nanotube length divided by its original length; the shear strain (γ) is defined as the product of the nanotube radius with the applied twist per unit length; the Zeeman splitting (λ_ VBM) corresponds to the valence band maximum (VBM), where the effect is significantly more pronounced (∼5x larger) than the conduction band minimum (CBM) (Supplementary Material); and the Rashba splitting coefficient (α) corresponds to the VBM, calculated at the zero wavevector in the axial direction. All the data can be found in the Supplementary Material.
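For reference, the two strain measures are simple functions of the nanotube geometry and the applied twist; a minimal sketch of their evaluation is given below (names are illustrative):

```python
def axial_strain(length, length0):
    """Axial strain: change in nanotube length divided by the original length."""
    return (length - length0) / length0

def shear_strain(radius, twist_per_length):
    """Shear strain: nanotube radius times the applied twist per unit length
    (e.g. radius in bohr and twist in rad/bohr give a dimensionless strain)."""
    return radius * twist_per_length
```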
In Fig. <ref>, we present the variation in the Zeeman splitting with tensile and shear strains. We observe that the Zeeman splitting in the undeformed state is significant, being comparable to the monolayer counterparts <cit.>, with the WTe2 and MoS2/MoSTe nanotubes having the largest and smallest values of λ_ VBM =489 and ∼146 meV, respectively. In addition, the splitting decreases with increase in tensile strains, going to zero for large enough strains. A similar behavior is also observed for shear strains, other than for the MoSe2, WS2, WSSe, and WSTe nanotubes, where the splitting remains unaffected by the torsional deformations. Such a decrease in the Zeeman splitting values upon the application of biaxial strains has been observed for Janus TMD bilayers <cit.>. Note that the values for the Janus TMD nanotubes are generally in between their parent TMDs. Note also that the sudden jumps in the Zeeman splitting values are a consequence of the VBM location shifting to a different wavevector.
In Fig. <ref>, we present the variation in the Rashba coefficient with shear strain. Unlike torsional deformations, axial deformations do not break the inversion symmetry of the nanotube, and therefore the Rashba effect remains absent <cit.>. We observe that the Rashba coefficient increases linearly with shear strain (the average coefficient of determination of the linear regression over all the materials is 0.97), reaching values comparable to those for Janus TMD monolayers <cit.>, systems where we have found the Rashba effect to be insensitive to shear strains. Indeed, the Rashba effect is not observed in TMD monolayers due to the presence of inversion symmetry. At the largest shear strain of γ = 0.15, the largest and smallest Rashba coefficient values of α = 0.78 and 0.20 eV Å occur for the WTe2 and MoS2 nanotubes, respectively, whose undeformed configurations also have the largest and smallest Zeeman splitting, respectively. However, this correlation does not hold in general, e.g., MoTe2 has one of the smallest Zeeman splittings of λ_ VBM = 213 meV for the undeformed nanotube, whereas it has one of the largest Rashba coefficients of α = 0.65 eV Å for the maximally shear-strained tube (γ = 0.15). Note that the values for the Janus TMD nanotubes are generally in between those of their parent TMDs.
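The linear fit quoted above can be reproduced along the following lines; only the two end points (zero Rashba coefficient at zero shear strain, and α = 0.78 eV Å at γ = 0.15 for WTe2) are taken from the text, while the intermediate values are invented purely for illustration.

```python
import numpy as np
from scipy.stats import linregress

# Shear strains and Rashba coefficients (eV Angstrom) for one nanotube;
# in practice these would be read from the Supplementary Material data.
gamma = np.array([0.00, 0.03, 0.06, 0.09, 0.12, 0.15])
alpha = np.array([0.00, 0.16, 0.31, 0.47, 0.62, 0.78])   # hypothetical intermediate values

fit = linregress(gamma, alpha)
print(f"slope = {fit.slope:.2f} eV A per unit shear strain, "
      f"R^2 = {fit.rvalue**2:.2f}")
```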
To understand the effect of the nanotube diameter on the results obtained, we now consider the nanotubes that demonstrate the largest Zeeman and Rashba effects, i.e., WSe2, WSeTe, and WTe2, with diameters spanning the range ∼ 2 - 10 nm. In Fig. <ref>, we present the variation in the Zeeman splitting and Rashba coefficient with the diameter, while considering the unstrained and largest shear strain (γ = 0.15) configurations, respectively. We observe that the Zeeman splitting values remain relatively unchanged, increasing ever so slightly with diameter — around 1% over the entire diameter range — approaching the flat sheet values of λ_ VBM = 463, 473, and 493 meV for WSe2, WSeTe, and WTe2, respectively <cit.>. In addition, the Rashba coefficient decreases significantly with increase in diameter, e.g., the value for WTe2 reduces from α = 0.83 eV Å at a diameter of 2 nm to α = 0.73 eV Å at a diameter of 9 nm, expectedly heading towards the zero value for the flat sheet configuration.
The results presented here clearly demonstrate that mechanical deformations can be used to engineer the Zeeman and Rashba splittings in molybdenum and tungsten TMD nanotubes as well as their Janus variants, making them a powerful tool for spintronics applications. In particular, the Zeeman effect is especially significant for the undeformed nanotubes, becoming progressively smaller and even disappearing with increase in axial/shear strains, and the Rashba effect can be introduced through torsional deformations — break the inversion symmetry of the system — becoming especially significant as the shear strain increases.
§ CONCLUDING REMARKS
In this work, we have studied the strain engineering of Zeeman and Rashba effects in synthesized TMD nanotubes and their Janus variants using first principles DFT simulations. In particular, we have performed symmetry-adapted Kohn-Sham calculations with spin-orbit coupling to determine the effect of axial and torsional deformations on the Zeeman and Rashba splittings in the electronic band structure. We have found that there is significant splitting in the molybdenum and tungsten nanotubes, for which the Zeeman splitting decreases with increase in tensile/shear strain, reaching zero for large enough strains, whereas the Rashba splitting coefficient increases linearly with shear strain and is zero for all axial deformations, a consequence of the inversion symmetry remaining unbroken. In addition, the Zeeman splitting is relatively unaffected by the nanotube diameter, whereas the Rashba coefficient decreases with increase in diameter. Though the current study has been restricted to TMD nanotubes and their Janus variants, other nanotubes are expected to demonstrate similar behavior, particularly those with heavy chemical elements. Overall, mechanical deformations represent a powerful tool for spintronics applications using nanotubes.
§ ACKNOWLEDGEMENTS
The authors gratefully acknowledge the support of the Clifford and William Greene, Jr. Professorship. This research was also supported by the supercomputing infrastructure provided by Partnership for an Advanced Computing Environment (PACE) through its Hive (U.S. National Science Foundation (NSF) through grant MRI-1828187) and Phoenix clusters at Georgia Institute of Technology, Atlanta, Georgia.
10
rao2003inorganic
Rao C N R and Nath M 2003 Inorganic nanotubes Advances In Chemistry: A
Selection of CNR Rao's Publications (1994–2003) (World Scientific) pp
310–333
tenne2003advances
Tenne R 2003 Angewandte Chemie International Edition 42
5124–5132
serra2019overview
Serra M, Arenal R and Tenne R 2019 Nanoscale 11 8073–8090
yagmurcukardes2020quantum
Yagmurcukardes M, Qin Y, Ozen S, Sayyad M, Peeters F M, Tongay S and Sahin H
2020 Applied Physics Reviews 7 011311
sreedhara2022nanotubes
Sreedhara M B, Miroshnikov Y, Zheng K, Houben L, Hettler S, Arenal R, Pinkas I,
Sinha S S, Castelli I E and Tenne R 2022 Journal of the American
Chemical Society
seifert2000structure
Seifert G, Terrones H, Terrones M, Jungnickel G and Frauenheim T 2000 Physical Review Letters 85 146
seifert2000electronic
Seifert G, Terrones H, Terrones M, Jungnickel G and Frauenheim T 2000 Solid State Communications 114 245–248
mikkelsen2021band
Mikkelsen A E G, Bölle F T, Thygesen K S, Vegge T and Castelli I E 2021
Physical Review Materials 5 014002
tao2018band
Tao L, Zhang Y Y, Sun J, Du S and Gao H J 2018 Chinese Physics B 27 076104
bhardwaj2022strain
Bhardwaj A and Suryanarayana P 2022 The European Physical Journal B
95 1–9
seifert2000novel
Seifert G, Terrones H, Terrones M and Frauenheim T 2000 Solid State
Communications 115 635–638
enyashin2005computational
Enyashin A N, Shein I R, Medvedeva N I and Ivanovskii A L 2005 Internet
Electronic Journal of Molecular Design 4 316–328
nath2003superconducting
Nath M, Kar S, Raychaudhuri A K and Rao C N R 2003 Chemical Physics
Letters 368 690–695
tsuneta2003formation
Tsuneta T, Toshima T, Inagaki K, Shibayama T, Tanda S, Uji S, Ahlskog M,
Hakonen P and Paalanen M 2003 Current Applied Physics 3
473–476
zibouche2014electromechanical
Zibouche N, Ghorbani-Asl M, Heine T and Kuc A 2014 Inorganics 2
155–167
li2014strain
Li W, Zhang G, Guo M and Zhang Y W 2014 Nano Research 7 518–527
wang2016strain
Wang Y Z, Huang R, Wang X Q, Zhang Q F, Gao B L, Zhou L and Hua G 2016 Chalcogenide Letters 13 301–307
lu2012strain
Lu P, Wu X, Guo W and Zeng X C 2012 Physical Chemistry Chemical Physics
14 13035–13040
levi2015nanotube
Levi R, Garel J, Teich D, Seifert G, Tenne R and Joselevich E 2015 ACS
Nano 9 12224–12232
oshima2020geometrical
Oshima S, Toyoda M and Saito S 2020 Physical Review Materials 4
026004
wang2014tuning
Wang Y Z, Wang B L, Zhang Q F, Huang R, Gao B L, Kong F J and Wang X Q 2014
Chalcogenide Letters 11 493–502
zibouche2019strong
Zibouche N, Philipsen P and Kuc A 2019 The Journal of Physical Chemistry
C 123 3892–3899
ivanovskaya2003computational
Ivanovskaya V V, Enyashin A N, Medvedeva N I, Makurin Y N and Ivanovskii A L
2003 Internet Electronic Journal of Molecular Design 2 499–510
gao2017structural
Gao B L, Ke S H, Song G, Zhang J, Zhou L, Li G N, Liang F, Wang Y and Dang C
2017 Journal of Alloys and Compounds 695 2751–2756
yin2016chiral
Yin D, Wu M, Yang Y, Cen W and Fang H 2016 Physica E: Low-dimensional
Systems and Nanostructures 84 196–201
zibouche2012layers
Zibouche N, Kuc A and Heine T 2012 The European Physical Journal B 85 49
ansari2015ab
Ansari R, Malakpour S, Faghihnasiri M and Sahmani S 2015 Superlattices and
Microstructures 82 188–200
tal2001effect
Tal O, Remskar M, Tenne R and Haase G 2001 Chemical Physics Letters
344 434–440
li2015tailoring
Li N, Lee G, Jeong Y H and Kim K S 2015 The Journal of Physical Chemistry
C 119 6405–6413
li2016low
Li B L, Wang J, Zou H L, Garaj S, Lim C T, Xie J, Li N B and Leong D T 2016
Advanced Functional Materials 26 7034–7056
sorkin2014nanoscale
Sorkin V, Pan H, Shi H, Quek S Y and Zhang Y W 2014 Critical Reviews in
Solid State and Materials Sciences 39 319–367
yudilevichself
Yudilevich D, Levi R, Nevo I, Tenne R, Ya’akobovitz A and Joselevich E 2018
ICME 1–4
divon2017torsional
Divon Y, Levi R, Garel J, Golberg D, Tenne R, Ya’akobovitz A and Joselevich E
2017 Nano Letters 17 28–35
barua2017nanostructured
Barua S, Dutta H S, Gogoi S, Devi R and Khan R 2017 ACS Applied Nano
Materials 1 2–25
unalan2008zno
Unalan H E, Yang Y, Zhang Y, Hiralal P, Kuo D, Dalal S, Butler T, Cha S N, Jang
J E, Chremmou K et al. 2008 IEEE Transactions on Electron
Devices 55 2988–3000
zhang2012high
Zhang C, Wang S, Yang L, Liu Y, Xu T, Ning Z, Zak A, Zhang Z, Tenne R and Chen
Q 2012 Applied Physics Letters 100 243101
zhang2019enhanced
Zhang Y J, Ideue T, Onga M, Qin F, Suzuki R, Zak A, Tenne R, Smet J H and Iwasa
Y 2019 Nature 570 349–353
tang2018janus
Tang Z K, Wen B, Chen M and Liu L M 2018 Advanced Theory and
Simulations 1 1800082
xie2021theoretical
Xie S, Jin H, Wei Y and Wei S 2021 Optik 227 166105
ju2021tuning
Ju L, Liu P, Yang Y, Shi L, Yang G and Sun L 2021 Journal of Energy
Chemistry 61 228–235
ju2021rolling
Ju L, Qin J, Shi Land Yang G, Zhang J and Sun L 2021 Nanomaterials 11 705
zhang2019mosse
Zhang S, Jin H, Long C, Wang T, Peng R, Huang B and Dai Y 2019 Journal of
Materials Chemistry A 7 7885–7890
cheng2013spin
Cheng Y C, Zhu Z Y, Tahir M and Schwingenschlögl U 2013 Europhysics
Letters 102 57001
larentis2018large
Larentis S, Movva H C P, Fallahazad B, Kim K, Behroozi A, Taniguchi T, Watanabe
K, Banerjee S K and Tutuc E 2018 Physical Review B 97 201407
li2020enhanced
Li Q, Zhao X, Deng L, Shi Z, Liu S, Wei Q, Zhang L, Cheng Y, Zhang L, Lu H et al. 2020 ACS Nano 14 4636–4645
jiang2017zeeman
Jiang C, Liu F, Cuadra J, Huang Z, Li K, Rasmita A, Srivastava A, Liu Z and Gao
W B 2017 Nature Communications 8 802
rezavand2021stacking
Rezavand A and Ghobadi N 2021 Physica E: Low-dimensional Systems and
Nanostructures 132 114768
chen2020tunable
Chen J, Wu K, Ma H, Hu W and Yang J 2020 RSC Advances 10
6388–6394
rezavand2022tuning
Rezavand A and Ghobadi N 2022 Journal of Magnetism and Magnetic
Materials 544 168721
milivojevic2020spin
Milivojevic M, Dmitrovic S, Damnjanovic M and Vukovic T 2020 The Journal
of Physical Chemistry C 124 11141–11149
nagapriya2008torsional
Nagapriya K S, Goldbart O, Kaplan-Ashiri I, Seifert G, Tenne R and Joselevich E
2008 Physical Review Letters 101 195501
kaplan2007mechanical
Kaplan-Ashiri I and Tenne R 2007 Journal of Cluster Science 18
549–563
kaplan2006mechanical
Kaplan-Ashiri I, Cohen S R, Gartsman K, Ivanovskaya V, Heine T, Seifert G,
Wiesel I, Wagner H D and Tenne R 2006 Proceedings of the National
Academy of Sciences 103 523–528
gonze2002first
Gonze X, Beuken J M, Caracas R, Detraux F, Fuchs M, Rignanese G M, Sindic L,
Verstraete M, Zerah G, Jollet F et al. 2002 Computational
Materials Science 25 478–492
bhardwaj2021torsional
Bhardwaj A, Sharma A and Suryanarayana P 2021 Nanotechnology 32
28LT02
bhardwaj2021strain
Bhardwaj A, Sharma A and Suryanarayana P 2021 Nanotechnology 32
47LT01
bhardwaj2021elastic
Bhardwaj A and Suryanarayana P 2022 The European Physical Journal B
95 1–8
sharma2021real
Sharma A and Suryanarayana P 2021 Physical Review B 103 035101
xu2021sparc
Xu Q, Sharma A, Comer B, Huang H, Chow E, Medford A J, Pask J E and
Suryanarayana P 2021 SoftwareX 15 100709
zhang2023versionS
Zhang B, Jing X, Xu Q, Kumar S, Sharma A, Erlandson L, Sahoo S J, Chow E,
Medford A J, Pask J E et al. 2023 arXiv preprint
arXiv:2305.07679
ghosh2017sparc1
Ghosh S and Suryanarayana P 2017 Computer Physics Communications 212 189–204
ghosh2019symmetry
Ghosh S, Banerjee A S and Suryanarayana P 2019 Physical Review B 100 125143
gavini2022roadmap
Gavini V, Baroni S, Blum V, Bowler D R, Buccheri A, Chelikowsky J R, Das S,
Dawson W, Delugas P, Dogan M et al. 2022 arXiv preprint
arXiv:2209.12747
codony2021transversal
Codony D, Arias I and Suryanarayana P 2021 Physical Review Materials
5 L030801
kumar2021flexoelectricity
Kumar S, Codony D, Arias I and Suryanarayana P 2021 Nanoscale 13
1600–1607
kumar2020bending
Kumar S and Suryanarayana P 2020 Nanotechnology 31 43LT01
bhardwaj2023ab
Bhardwaj A and Suryanarayana P 2023 The European Physical Journal B
96 36
momma2008vesta
Momma K and Izumi F 2008 Journal of Applied Crystallography 41
653–658
perdew1996generalized
Perdew J P, Burke K and Ernzerhof M 1996 Physical Review Letters 77 3865
hamann2013optimized
Hamann D R 2013 Physical Review B 88 085117
van2018pseudodojo
van Setten M J, Giantomassi M, Bousquet E, Verstraete M J, Hamann D R, Gonze X
and Rignanese G M 2018 Computer Physics Communications 226
39–54
chang2013orbital
Chang C H, Fan X, Lin S H and Kuo J L 2013 Physical Review B 88
195420
luo2019electronic
Luo Y F, Pang Y, Tang M, Song Q and Wang M 2019 Computational Materials
Science 156 315–320
wang2018mechanical
Wang Y Z, Huang R, Gao B L, Hu G, Liang F and Ma Y L 2018 Chalcogenide
Letters 15 535–543
bolle2021structural
Bölle F T, Mikkelsen A E G, Thygesen K S, Vegge T and Castelli I E 2021
npj Computational Materials 7 1–8
klots2014probing
Klots A R, Newaz A K M, Wang B, Prasai D, Krzyzanowska H, Lin J, Caudel D,
Ghimire N J, Yan J, Ivanov B L et al. 2014 Scientific Reports
4 1–7
ugeda2014giant
Ugeda M M, Bradley A J, Shi S F, Felipe H, Zhang Y, Qiu D Y, Ruan W, Mo S K,
Hussain Z, Shen Z X et al. 2014 Nature Materials 13
1091–1095
hill2016band
Hill H M, Rigosi A F, Rim K T, Flynn G W and Heinz T F 2016 Nano
Letters 16 4831–4837
lu2017janus
Lu A Y, Zhu H, Xiao J, Chuu C P, Han Y, Chiu M H, Cheng C C, Yang C W, Wei K H,
Yang Y et al. 2017 Nature Nanotechnology 12 744–749
haastrup2018computational
Haastrup S, Strange M, Pandey M, Deilmann T, Schmidt P S, Hinsche N F, Gjerding
M N, Torelli D, Larsen P M, Riis-Jensen A C et al. 2018 2D
Materials 5 042002
shi2018mechanical
Shi W and Wang Z 2018 Journal of Physics: Condensed Matter 30
215301
zhao2015ultra
Zhao W, Li Y, Duan W and Ding F 2015 Nanoscale 7 13586–13590
naaman2012chiral
Naaman R and Waldeck D H 2012 The Journal of Physical Chemistry Letters
3 2178–2187
|
http://arxiv.org/abs/2307.05097v1 | 20230711081122 | Local limit theorem for directed polymers beyond the $L^2$-phase | [
"Stefan Junk"
] | math.PR | [
"math.PR",
"60K37"
] |
Local limit theorem for directed polymers beyond the L^2-phase
Stefan Junk
==============================================================
We consider the directed polymer model in the weak disorder phase under the assumption that the partition function is L^p-bounded for some p>1+2/d. We prove a local limit theorem for the polymer measure, i.e., that the point-to-point partition function can be approximated by two point-to-plane partition functions at the start- and endpoint. We furthermore show that for environments with finite support the required L^p-boundedness holds in the whole weak disorder phase, except possibly for the critical value β_cr. Some consequences of the local limit theorem are also discussed.
§ INTRODUCTION
The directed polymer model describes random paths in a disordered medium. The model has recently attracted much interest because it is conjectured to be in the KPZ (Kardar-Parisi-Zhang) universality class. In the so-called strong disorder regime, in particular in spatial dimension d=1, it is expected that the polymer has a super-diffusive scaling exponent and that its behavior is thus completely different from its infinite-temperature version (the usual simple random walk). At present, this has only been verified in a small number of exactly solvable models.
In contrast to the one-dimensional case, in spatial dimensions d≥ 3 it is known that the diffusive scaling of the simple random walk persists up to some inverse temperature β_cr>0. This parameter regime is known as the weak disorder phase and it is the focus of the current article. It is characterized as the set of β such that the (normalized) partition function W_n^β converges to a positive limit W_∞^β. The long-term behavior in the weak disorder phase is much better understood than in the strong disorder phase, see for example <cit.>, but many important questions still remain.
We are particularly interested in the case where β is close to β_cr, which is a very intriguing regime because strong disorder is expected to hold at β_cr and beyond. However, this case is technically difficult because a very successful approach to the weak disorder phase, going back to <cit.> and based on L^2-martingale techniques, is not applicable in the full weak disorder regime but only up to some β_cr^L^2, which is known to be strictly smaller than β_cr. The contribution of this paper is to introduce an approach based on L^p-estimates that is valid up to β_cr, at least for a certain class of environments.
Our main result is a local limit theorem for the polymer measure in weak disorder. Informally, this result says that the density of the polymer measure μ_ω,n^β (the quenched law of the polymer up to time n) is comparable to the density of the simple random walk with a random multiplicative constant that is well-behaved. More precisely, for an appropriate range of x,
μ_ω,n^β(X_n=x)≈ W_∞^n,x P(X_n=x),
where W^n,x_∞ denotes the “backward” partition function, started from space-time point (n,x). See Corollary <ref> for a precise statement. In particular, by the definition of the weak disorder phase, it holds that W_∞^n,x∈(0,∞) almost surely.
Similar results are known in the L^2-weak disorder phase β<β_cr^L^2, see <cit.>, and the extension to the full weak disorder phase has been an important open problem in the field. For example, in <cit.> it is noted that “the validity of local limit theorem is a natural definition for the polymer to be in the weak disorder regime (better than the central limit theorem itself)”. We also note that, in spatial dimension d=2, strong disorder holds at all β>0 but one can define an intermediate weak disorder regime by choosing a time-dependent inverse temperature β_n that goes to zero at a suitable rate. In that case, a local limit theorem is known in the full weak disorder regime, see <cit.>. This is due to the fact that, in contrast to the higher-dimensional regime considered here, L^2-techniques are applicable in the whole intermediate weak disorder regime.
Limit theorems of similar forms have been obtained in the literature on random media many times in different contexts, see for example <cit.>, and have proven a valuable tool in situations where, in some sense, the effect of disorder is weak and the model homogenizes. In our setup, (<ref>) allows a more precise comparison between the simple random walk and the polymer measure than what is provided by, for example, the invariance principle. The reason is that to compute finer properties of μ_ω,n^β(X_n∈·) it is enough to understand the quality of the approximation in (<ref>) and the statistics of W^n,x_∞ over an appropriate range of x. We demonstrate this approach in Section <ref> by deriving a number of new results for the weak disorder phase.
We prove the main result under the assumption that the partition function (W_n^β)_n∈ is L^p-bounded, for some p>1+2/d. We expect that this assumption is optimal, see the discussion in Section <ref>, and we show that it is indeed satisfied in the interior of the whole weak disorder phase if the environment is finitely supported, see Corollary <ref>.
§.§ Definition of the model
We now introduce the model in detail. The random medium (also called disorder or random environment) is given by an independent and identically distributed (i.i.d.) collection of real-valued weights (ω=(ω_t,x)_(t,x)∈ℕ×ℤ^d,ℱ,ℙ) with finite exponential moments,
[e^β|ω_0,0|]<∞ for all β≥ 0.
Assumption (<ref>) will be in place throughout the paper. The energy of a path π in ω is given by
H_I(ω,π) := ∑_i∈ I∩ℕω_i,π(i),
with H_n := H_[1,n], and the polymer measure μ_ω,n^β is defined to be the associated Gibbs measure,
μ_ω,n^β(dX)=1/W_n^βe^β H_n(ω,X)-nλ(β)P(dX),
where λ(β) := log[e^βω_0,0] is the logarithmic moment generating function, (X=(X_n)_n≥0,P) is the simple random walk on ℤ^d and W_n^β is the normalizing constant,
W_n^β=E[e^β H_n(ω,X)-nλ(β)].
We will refer to this quantity as the partition function. It is not difficult to check that (W_n^β)_n∈ℕ is a non-negative martingale and hence the almost sure limit W_∞^β := lim_n→∞W_n^β exists. One can further show that a zero-one law holds, i.e., (W_∞^β>0)∈{0,1}. The formal definition of the weak disorder regime mentioned above is as the set of β≥ 0 where
(W_∞^β>0)=1. (WD)
The strong disorder phase is the complementary set of β, satisfying
(W_∞^β=0)=1. (SD)
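To make the preceding definitions concrete, the following short Python sketch (not part of the paper's argument) computes W_n^β exactly by the usual transfer-matrix recursion over the point-to-plane weights Z_n(x)=E[e^β H_n-nλ(β)1_X_n=x]; the restriction to d=1, the Gaussian weights (so that λ(β)=β^2/2) and all function names are illustrative assumptions only.

import numpy as np

def partition_function(beta, omega):
    """Exact W_n^beta for the nearest-neighbour walk on Z (d=1, for size only),
    with standard Gaussian weights so that lambda(beta) = beta^2/2."""
    n, width = omega.shape            # omega[t-1, :] holds omega_{t,x}, t = 1..n
    lam = beta ** 2 / 2.0
    Z = np.zeros(width); Z[width // 2] = 1.0      # Z_0(x) = 1_{x = 0}
    for t in range(n):
        # Z_t(x) = E[e^{beta H_t - t lambda} 1_{X_t = x}]: free step, then reweight
        Z = 0.5 * (np.roll(Z, 1) + np.roll(Z, -1)) * np.exp(beta * omega[t] - lam)
    return Z.sum()                    # W_n^beta = sum_x Z_n(x); its mean over omega is 1

rng = np.random.default_rng(0)
n = 50
omega = rng.standard_normal((n, 2 * n + 3))       # window wide enough for n steps
print(partition_function(beta=0.3, omega=omega))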
We now highlight some important results and refer to the survey articles <cit.> for further details. The next theorem shows that there is a phase transition between weak and strong disorder.
(i) There exists β_cr=β_cr(d)∈[0,∞) such that (<ref>) holds for β<β_cr and (<ref>) holds for β>β_cr.
(ii) It holds that β_cr(d)>0 if and only if d≥ 3. Moreover, for d≥ 3 there exists β_cr^L^2=β_cr^L^2(d)>0 such that (W_n^β)_n∈ is L^2-bounded if and only if β<β_cr^L^2.
(iii) For d≥ 3, it holds that β_cr^L^2<β_cr.
The existence of the phase transition is shown in <cit.>. The martingale approach and the L^2-phase was introduced in <cit.>. The fact that β_cr^L^2 is different from β_cr has been shown over a number of works, starting with <cit.>, see <cit.> for the precise references.
Next, we recall some information about the behavior of μ_ω,n^β within each phase. An important observable is the so-called replica overlap
I_n^β,2 := ∑_x∈ℤ^dμ_ω,n^β(X_n+1=x)^2=μ^⊗ 2, β_ω,n(X_n+1=X_n+1'),
where μ^⊗ 2,β_ω,n denotes the law of two independent polymers (X_n)_n∈ℕ and (X_n')_n∈ℕ in the same environment ω.
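The replica overlap can be evaluated exactly along the same lines as the sketch above; the following is again an illustrative d=1 computation with Gaussian weights and is not taken from the paper.

import numpy as np

def overlap(beta, omega):
    """I_n^{beta,2} = sum_x mu_{omega,n}^beta(X_{n+1} = x)^2 in d=1."""
    n, width = omega.shape
    lam = beta ** 2 / 2.0
    Z = np.zeros(width); Z[width // 2] = 1.0
    for t in range(n):
        Z = 0.5 * (np.roll(Z, 1) + np.roll(Z, -1)) * np.exp(beta * omega[t] - lam)
    mu = 0.5 * (np.roll(Z, 1) + np.roll(Z, -1))   # one extra unweighted step to X_{n+1}
    mu /= mu.sum()                                # = mu_{omega,n}^beta(X_{n+1} = .)
    return np.sum(mu ** 2)

rng = np.random.default_rng(1)
n = 300
print(overlap(0.3, rng.standard_normal((n, 2 * n + 5))))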
(i) If (<ref>) holds, then ∑_nI_n^β,2 is almost surely finite and, in particular, lim_n→∞max_xμ_ω,n(X_n+1=x)=0 almost surely. Moreover, for every bounded and continuous f:ℝ^d→ℝ,
∑_x∈ℤ^df(x/√(n))μ_ω,n^β(X_n=x) → ∫ f(x)k(x) dx in probability,
where k denotes the standard normal density.
(ii) If (<ref>) holds, then there exists c>0 such that lim inf_n→∞max_x∈^dμ_ω,n^β(X_n+1=x)≥ c almost surely.
The central limit theorem (<ref>) is first proved in the L^2-phase <cit.> and later extended to the whole weak disorder phase <cit.>. The characterization of the phase transition in terms of the finiteness of ∑_n I_n^β,2 is proved in <cit.> and part (ii) is given in <cit.> for a setting that generalizes our model.
As mentioned earlier, our result relies on L^p-boundedness instead of L^2-boundedness, and we thus introduce the critical exponent at a given inverse temperature β,
p̂(β) := sup{p:(W_n^β)_n∈ℕ is L^p-bounded}.
A priori, it is not clear that (<ref>) guarantees p̂(β)>1 and extra assumptions are needed to ensure this.
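As a purely numerical illustration of this definition (not part of the paper), one can probe L^p-boundedness by averaging exact samples of (W_n^β)^p over independent environments; d=1 and Gaussian weights are again illustrative assumptions, and in d=1 the moments in fact grow for every β>0, consistent with the absence of a weak disorder phase there.

import numpy as np

def W_exact(beta, omega):
    # exact W_n^beta in d=1 (Gaussian weights, lambda(beta) = beta^2/2)
    n, width = omega.shape
    Z = np.zeros(width); Z[width // 2] = 1.0
    for t in range(n):
        Z = 0.5 * (np.roll(Z, 1) + np.roll(Z, -1)) * np.exp(beta * omega[t] - beta ** 2 / 2)
    return Z.sum()

rng = np.random.default_rng(0)
beta, p, reps = 0.4, 1.5, 400
for n in (10, 20, 40, 80):
    samples = [W_exact(beta, rng.standard_normal((n, 2 * n + 3))) for _ in range(reps)]
    print(n, np.mean(np.array(samples) ** p))     # growth in n signals lack of L^p-boundedness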
Assume (<ref>).
(i)It holds that [sup_n W_n^β]<∞ and, in particular, (W_n^β)_n∈ is uniformly integrable.
(ii) If the environment is either upper-bounded,
∃ K>0 s.t. (ω_0,0≤ K)=1, (u-bd.)
or the support of ω is unbounded and the following tail condition is satisfied,
∃ A_0, C>1 s.t. [e^2βω|ω>A]≤ Ce^2β A for all A≥ A_0,
then p̂(β)>1.
(iii) If the environment is upper-bounded (<ref>), then p̂(β)≥ 1+2/d.
The uniform integrability was first shown in <cit.>. The integrability of sup_nW_m^β was shown in <cit.>, together with the L^p-boundedness under the assumption (<ref>). The extension to unbounded environments satisfying (<ref>) is proved in <cit.> and the lower bound on can be found in <cit.>.
Note that (<ref>) is strictly stronger than (<ref>), see the discussion in <cit.>.
§.§ Statement of the local limit theorem
To state the main result, we introduce some notation. First, for m≤ n and x,y∈^d, we write (m,x)↔(n,y) if P(X_n-m=x-y)>0. If (m,x)↔(n,y), then the random walk bridge between (m,x) and (n,y) is denoted by
P^x,y_m,n(·)=P(X∈·|X_m=x,X_n=y)
and the corresponding pinned partition function is defined as
W^β,x,y_m,n := E^x,y_m,n[e^β H_(m,n)(ω,X)-(n-m-1)λ(β)].
Note that the environment at (m,x) and (n,y) is ignored. In the calculations, we have to be careful about whether the environment at the initial and terminal times is included, but we stress that due to (<ref>) this difference essentially does not matter. If ν is a probability measure on ℤ^2d such that
ν({(x,y):(m,x)↔(n,y)})=1,
then we define the averaged pinned partition function by
W^β,ν_m,n := ∑_x,yν(x,y) W^β,x,y_m,n.
We also introduce the random walk P^x,⋆_m,n (resp. P^⋆,y_m,n) with starting point (m,x) (resp. endpoint (n,y)) and free endpoint (resp. starting point). Namely, P^x,⋆_m,n is the law of (X_k-X_m+x)_k=m,…,n and P^⋆,y_m,n is the law of (X_k-X_n+y)_k=m,…,n under P. The corresponding partition functions are denoted by
W^β,x,⋆_m,n := E^x,⋆_m,n[e^β H_(m,n](ω,X)-(n-m)λ(β)],
W^β,⋆,y_m,n := E^⋆,y_m,n[e^β H_[m,n)(ω,X)-(n-m)λ(β)].
In (<ref>), the environment at time n is ignored, which ensures that (<ref>) and (<ref>) have the same law.
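For orientation, the pinned partition functions can also be computed exactly from the same forward recursion; the following d=1 sketch with Gaussian weights (an illustrative assumption, not the paper's setting) ignores the environment at the initial and terminal times, matching the definition above, and each W^β,0,y_0,n has mean one over the disorder.

import math
import numpy as np

def pinned_partition(beta, omega):
    """W^{beta,0,y}_{0,n} for every y with (0,0) <-> (n,y), in d=1."""
    n, width = omega.shape
    L = width // 2
    lam = beta ** 2 / 2.0
    Z = np.zeros(width); Z[L] = 1.0
    for t in range(1, n):                      # weights at times 1,...,n-1 only
        Z = 0.5 * (np.roll(Z, 1) + np.roll(Z, -1)) * np.exp(beta * omega[t - 1] - lam)
    num = 0.5 * (np.roll(Z, 1) + np.roll(Z, -1))   # free, unweighted final step to time n
    W = {}
    for y in range(-n, n + 1):
        if (n + y) % 2 == 0:                   # parity constraint (0,0) <-> (n,y)
            W[y] = num[L + y] / (math.comb(n, (n + y) // 2) / 2.0 ** n)
    return W

rng = np.random.default_rng(2)
n = 12
W = pinned_partition(0.3, rng.standard_normal((n, 2 * n + 5)))
print(W[0], W[2])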
Finally, we introduce a set of admissible distributions for the two endpoints that appear in the local limit theorem: the endpoints are separated by a distance of order O(n) in space, and we allow fluctuations of order O(n^1/2) around them. Let
_n(α,M) :=
{ν∈_1(ℤ^2d):∃ a,b∈ℤ^d:|a-b|≤α n, ν((a,b)+[-Mn^1/2,Mn^1/2]^2d)=1,
ν({(x,y):(0,x)↔(n,y)})=1}.
The following is the main result of this paper.
Assume d≥ 3 and p̂(β)>1+2/d and let p∈(1+2/d,p̂∧ 2). There exists α>0 such that the following hold:
(i) It holds that
sup_n∈ℕ sup_|x-y|≤α n[(W^β,x,y_0,n)^p] <∞.
(ii) Let
ξ=ξ(p̂,p)= p/2(1-1/p̂)(1-(1+2/d)/p̂) if p∈(1+(p̂-1)/(p̂+1),p̂),
p/2·(p̂-p)/(p̂-p/2)·(1-(1+2/d)/p̂) if p∈(1+2/d,1+(p̂-1)/(p̂+1)).
For every ϵ∈(0,ξ) and M>0 there exists C>0 such that
sup_n∈ℕ sup_ν∈_n(α,M)[|W^β,ν_0,n-1|^p] ≤ C(max_x ν(x,⋆)^ξ-ϵ+max_yν(⋆,y)^ξ-ϵ),
where ν(x,⋆)=∑_yν(x,y) and ν(⋆,y)=∑_xν(x,y) denote the marginals of ν.
Our statement of the local limit theorem looks different than the informal statement (<ref>) in the beginning, so in Section <ref> we provide an alternative version that resembles (<ref>) more closely.
§.§ Strategy of the proof of Theorem <ref>
The proof can be considered a variation of the so-called “chaos decomposition” that has been very successful in the analysis of the L^2-phase, in particular in the proof of a local limit theorem, see <cit.>.
To understand the strategy, it is helpful to first recall the strategy for the proof of the local limit theorem in <cit.>. The idea is to decompose W^β,x,y_0,n as ∑_k A_k, where A_k is itself a sum of many pairwise orthogonal terms. Thus the second moment of A_k consists only of the on-diagonal terms, and the contribution from these terms can be interpreted as a joint partition function W^⊗ 2,β,x,y_0,n[_X_s=X_s' at least k times] of two independent polymers in the same environment that are forced to meet at least k times. Here, we have introduced the notation
W^⊗ 2,β,x,y_0,n[_B] := E^⊗ 2,x,y_0,n[e^β H_(0,n)(ω,X)+β H_(0,n)(ω,X')-2(n-1)λ(β)_B],
where P^⊗ 2,x,y_0,n is the law of two independent random walk bridges (X_k)_k=0,…,n and (X_k')_k=0,…,n and B is an event adapted to their common sigma-field.
To prove a local limit theorem in L^2-weak disorder, one can verify by explicit calculations that the second moment of A_k decays exponentially fast, hence it suffices to consider the first O(log n) terms. One can further verify that for k=O(log n), A_k is dominated by the contribution from paths where all collisions between X and X' occur in [0,n^o(1)]∪[n-n^o(1),n]. One can then integrate out the environment “in the middle” and show that the random walk bridges in the “initial” and “terminal” parts behave like simple random walks.
This procedure does not work outside of the L^2-region because the second moment of A_k grows exponentially and we instead work with the p^th moment. By writing
(W^β,x,y_0,n)^p=(W^⊗ 2,β,x,y_0,n)^p/2
we obtain a joint partition function, which we similarly decompose as W^⊗ 2,β,x,y_0,n=∑_k A_k. The difference is that we group successive collisions between the two polymers together if they occur in a short time interval, so that the subscript k of A_k counts the number of meetings that are “well-separated” in time. Intuitively, this gives the polymers time to move away from each other after a collision, which we know is their typical behavior due to Theorem <ref>(i). With this modification, we can again argue that it suffices to consider the first O(log n) terms.
A second difference to L^2-weak disorder is that not all collisions occur in the initial and terminal segments. Instead, we show that there exists a deterministic L such that there are at most L large gaps between the collision times, where “large” means “of order n”. We can then integrate out the environment in a large gap and follow the argument from above.
Our decomposition of the p^th-moment of the partition function is inspired by the approach in <cit.>, and similar to that work we will often use the following sub-additive estimate,
(∑_i∈ Ix_i)^θ≤∑_i∈ Ix_i^θ for all θ∈[0,1] and all non-negative (x_i)_i∈ I. (sub-add.)
§.§ Discussion of the integrability assumption
Theorem <ref> requires L^1+2/d+ϵ-boundedness of the partition function, so in this section we discuss whether this assumption is actually satisfied for any β>β_cr^L^2.
First, we note that due to Theorem <ref>, we have (β)≥ 1+2/d in the case of a bounded environment, (<ref>), and as discussed after <cit.> there is some evidence to believe that this is true for general environments as well. Nonetheless, it could be the case that (β)=1+2/d for all β≥β_cr^L^2 in the weak disorder phase.
To exclude that possibility, we prove the following properties of p̂ outside of the L^2-phase.
The function β↦p̂(β) satisfies the following:
(i) If p̂(β)∈(1+2/d,2], then p̂ is right-continuous at β.
(ii) If β>β_cr^L^2, p̂(β)>1 and if the environment is upper bounded, (<ref>), then p̂ is left-continuous at β.
(iii) If ω has finite support and p̂(β)>1, then p̂(β')>p̂(β) for all β'<β.
By utilizing a connection between our model and the so-called inhomogeneous pinning model, one can further show that p̂(β_cr^L^2)=2, see the discussion in <cit.> and <cit.> for the case d=3. Namely, the proof implies that
sup_n[(W_n^β_cr^L^2)^p]<∞
for some p∈(1,2), and by inspecting the proofs one sees that p can be taken arbitrarily close to 2, hence p̂(β_cr^L^2)≥ 2. The converse inequality follows by direct computation with the help of an explicit characterization of β_cr^L^2, see for example <cit.>.
For finitely supported environments, we thus conclude that p̂ is a “nice” function, see Figure <ref>.
Assume that ω has finite support. Then β↦p̂(β) is either left-continuous or right-continuous at β_cr and it is continuous and strictly decreasing in [β_cr^L^2,β_cr). In particular, we have p̂(β)>1+2/d for all β<β_cr.
It is natural to expect that the same holds for general environments.
It is widely believed that (<ref>) holds at β_cr, see <cit.>. If it is true, then we indeed have p̂>1+2/d in the whole weak disorder phase, at least if ω has finite support. The above discussion moreover suggests that <cit.> can be rephrased as follows:
There exists no β∈ℝ_+ such that p̂(β)=1+2/d.
In this context, it is relevant to discuss the result from <cit.> about a closely related model, called the directed polymer in γ-stable environment. It is very similar to our model, except the assumption (<ref>) is dropped and we only require exponential moments up to some β_0∈(0,∞). It is then convenient to re-parameterize the model and consider an environment η=(η_t,x)_t∈,x∈^d whose marginals are independent, centered, supported on [-1,∞) and have a Pareto-type tail,
(η_t,x≥ t)∼ Ct^-γ for t→∞.
The partition function is defined to be
W^β_n E[∏_t=1^n (1+βη_t,X_t)],
which is, for β∈[0,1], a non-negative martingale. The main result in <cit.> is that a non-trivial phase transition occurs, i.e., lim_n→∞W^β_n>0 for some β>0, if and only if γ>1+2/d. It is interesting that the critical exponent is the same as the one that appeared in our model, and the fact that there is no weak disorder phase at γ=1+2/d can be considered evidence for Conjecture <ref>.
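As an aside, the γ-stable partition function can be simulated in the same way; the concrete choice of η below (a shifted and rescaled Pareto variable) is only one admissible example of a centred environment with support [-1,∞) and tail index γ, and the d=1 restriction is again purely illustrative.

import numpy as np

def gamma_stable_W(beta, gamma, n, seed=0):
    """W_n^beta = E[prod_t (1 + beta*eta_{t,X_t})] for an illustrative environment:
    eta = (gamma-1)*zeta - gamma with zeta standard Pareto(gamma), so that eta is
    centred, bounded below by -1 and has tail index gamma."""
    rng = np.random.default_rng(seed)
    width = 2 * n + 5
    zeta = rng.pareto(gamma, size=(n, width)) + 1.0      # P(zeta > t) = t^-gamma, t >= 1
    eta = (gamma - 1.0) * zeta - gamma
    Z = np.zeros(width); Z[width // 2] = 1.0             # d = 1 to keep the sketch small
    for t in range(n):
        Z = 0.5 * (np.roll(Z, 1) + np.roll(Z, -1)) * (1.0 + beta * eta[t])
    return Z.sum()

print(gamma_stable_W(beta=0.5, gamma=3.0, n=200))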
We stress that the critical exponent p^*(γ,β) for (W^β_n)_n∈, defined analogously to (<ref>), might be different from γ. Indeed, the inequality p^*(γ,β)≤γ is trivial and the proof of <cit.> reveals that p^*(γ,β)∈(1+2/d,γ) for γ>1+2/d and β>0 small enough. However, we believe that p^*(γ,β)<γ for β∈(0,1] and thus it is still possible that p^*(γ,β)=1+2/d for some γ>1+2/d and β∈(0,1].
§.§ Consequences of the local limit theorem
In this section, we collect a number of results that can be proved quickly with the help of Theorem <ref>.
§.§.§ Alternative statement of the local limit theorem
We first provide a statement of Theorem <ref> that resembles (<ref>) from the introduction.
Assume p̂(β)>1+2/d and let p∈(1+2/d,2∧p̂(β)). For any ϵ∈(0,3/4), it holds that
lim_r→∞sup_n≥ r^4(1+ϵ)sup_x,y∈ℤ^d
(0,x)↔(n,y)
|x-y|≤ n^3/4-ϵ[|W^β,x,y_0,n/(W^β,x,⋆_rW^β,⋆,y_n-r,n)-1|^p]=0.
In particular, for 1≪ r≪ n and |y|=o(n^3/4),
μ_ω,n-1^β(X_n=y)/P(X_n=y) =W^β,0,y_0,n/W_n-1^β,0,⋆=(W^β,0,⋆_r/W^β,0,⋆_n-1)·(W^β,0,y_0,n/(W_r^β,0,⋆W^β,⋆,y_n-r,n))· W^β,⋆,y_n-r,n≈ W^β,⋆,y_-∞,n,
which recovers the density that appeared in (<ref>).
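The approximation in the above display can be checked numerically for a single environment; the following sketch compares the left-hand side with the backward partition function W^β,⋆,y_n-r,n. It uses d=1 and Gaussian weights purely to keep the code small (the theorem of course requires d≥3, so the agreement here is only heuristic), and all names are illustrative assumptions.

import math
import numpy as np

BETA = 0.25
LAM = BETA ** 2 / 2.0                                  # Gaussian weights

def srw_step(V):
    return 0.5 * (np.roll(V, 1) + np.roll(V, -1))

def density_ratio(omega, y):
    """mu_{omega,n-1}^beta(X_n = y) / P(X_n = y) in d = 1."""
    n, width = omega.shape
    Z = np.zeros(width); Z[width // 2] = 1.0
    for t in range(1, n):                              # weights at times 1,...,n-1
        Z = srw_step(Z) * np.exp(BETA * omega[t - 1] - LAM)
    num = srw_step(Z)[width // 2 + y]
    p = math.comb(n, (n + y) // 2) / 2.0 ** n
    return (num / Z.sum()) / p

def backward_W(omega, y, r):
    """W^{beta,*,y}_{n-r,n}: backward partition function ending at (n, y)."""
    n, width = omega.shape
    V = np.zeros(width); V[width // 2 + y] = 1.0
    for s in range(n - 1, n - r - 1, -1):              # weights at times n-1,...,n-r
        V = srw_step(V) * np.exp(BETA * omega[s - 1] - LAM)
    return V.sum()

rng = np.random.default_rng(3)
n, r, y = 400, 40, 6                                   # y has the parity of n
omega = rng.standard_normal((n, 2 * n + 5))
print(density_ratio(omega, y), backward_W(omega, y, r))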
§.§.§ Critical exponent of the pinned partition function
Analogously to p̂, we define
p̂_pin(β) := sup{p:(W^β,0,0_0,n)_n∈2ℕ is L^p-bounded}.
Theorem <ref>(i) immediately implies the following:
Assume p̂(β)∈(1+2/d,2]. Then p̂_pin(β)≥p̂(β).
We expect that the converse inequality also holds and that the two exponents agree. Note that in a model with translation invariance, such as the Brownian directed polymer model considered in <cit.>, the law of the pinned partition function does not depend on the endpoint, which implies p̂_pin≤p̂.
§.§.§ Stability under perturbation by small drifts
Next, we discuss the effect of perturbing the model by adding a small drift to the underlying random walk. For λ∈ℝ^d, we let P^λ be the random walk with increment distribution
P^λ(X_k+1=x+y|X_k=x)=_|y|_1=1 1/2d e^λ· y-φ(λ),
where φ(λ)=log(1/2d∑_z:|z|_1=1e^λ· z). The drift is denoted by
m(λ)=E^λ[X_1]=1/2d∑_y:|y|_1=1ye^λ· y-φ(λ).
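For concreteness, the tilted increment distribution and the drift m(λ) can be tabulated directly; the following small Python sketch implements the two displays above and is only an illustration (function names are assumptions).

import numpy as np

def increment_law(lam):
    """Tilted step probabilities of P^lambda on the 2d unit vectors of Z^d
    and the resulting drift m(lambda); lam is a length-d array."""
    lam = np.asarray(lam, dtype=float)
    d = len(lam)
    steps = []
    for i in range(d):
        for s in (+1, -1):
            e = np.zeros(d); e[i] = s
            steps.append(e)
    steps = np.array(steps)                    # the 2d nearest-neighbour steps
    phi = np.log(np.exp(steps @ lam).sum() / (2 * d))    # phi(lambda)
    p = np.exp(steps @ lam - phi) / (2 * d)    # P^lambda(X_1 = y)
    drift = (p[:, None] * steps).sum(axis=0)   # m(lambda) = E^lambda[X_1]
    return p, phi, drift

p, phi, m = increment_law([0.2, 0.0, -0.1])
print(p.sum(), m)                              # p sums to 1; m vanishes at lambda = 0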
We write P^λ,x,⋆_m,n and P^λ,⋆,y_m,n for the laws of (X_k-X_m+x)_m≤ k≤ n and (X_k-X_n+y)_m≤ k≤ n under P^λ, i.e., the random walks going forward and backward from a given space-time point. Note that E^λ,x,⋆_m,n[X_n]=x+(n-m)m(λ) and E^λ,⋆,y_m,n[X_m]=y-(n-m)m(λ). We furthermore write W^β,λ,x,⋆_m,n and W^β,λ,⋆,y_m,n for the corresponding partition function and reverse partition function, with W_n^β,λ:=W^β,λ,0,⋆_0,n. The critical exponent is generalized as follows:
p̂(β,λ) := sup{p:(W_n^β,λ)_n∈ℕ is L^p-bounded}.
Assume p̂(β)∈(1+2/d,2]. Then λ↦p̂(β,λ) is lower-semicontinuous at λ=0,
lim_ϵ↓ 0 inf_λ∈[-ϵ,ϵ]^d p̂(β,λ)≥ p̂(β,0).
In particular, there exists λ_0 such that weak disorder holds in [-λ_0,λ_0]^d, i.e.,
lim_n→∞W^β,λ_n>0 for all λ∈[-λ_0,λ_0]^d.
This in turn yields the following consequence for the large deviation rate function. For background on the theory of large deviations we refer to, for example, <cit.>.
The polymer measure (μ_ω,n^β(X_n/n∈·))_n∈ satisfies a quenched large deviation principle with deterministic, convex, good, rate function I^β. Moreover, in weak disorder it holds that I^β≥ I^0, where I^0 is the rate function of the simple random walk. If (β)>1+2/d, then there exists >0 such that I^β|_[-,]^d≡ I^0|_[-,]^d.
The equality of I^β and I^0 in a neighborhood of the origin was first noted in <cit.> in the L^2-phase. It was extended to the interior of the whole weak disorder phase in two closely related models in <cit.>.
§.§.§ Replica overlap and delocalization
We close this section by discussing some consequences for the replica overlap I_n^β,2 introduced before Theorem <ref>. This quantity appears naturally in the Doob decomposition of (log W_n)_n∈, see <cit.>, and this connection is the basis for the proof of Theorem <ref>. Note that the family (μ_ω,k^β)_k=1,…,n is not consistent in the sense of Kolmogorov’s extension theorem, see for example <cit.>, so it is not possible to interpret the expression ∑_k=1^nI_k^β,2 as the expected overlap between two polymers up to time n. For that reason, it is more natural to consider a fixed time horizon, i.e., ∑_k=1^nI_k,n^β,2 with
I_k,n^β,2∑_x∈^dμ_ω,n^β(X_k=x)^2.
This object cannot be analyzed with the approach based on the Doob decomposition, although we note that some estimates have been obtained in <cit.> in a related model with Gaussian disorder with the help of Malliavin calculus.
In another direction, it is natural to wonder about the summability of I_n^β,p, defined by
I_n^β,p∑_x∈^dμ_ω,n^β(X_n+1=x)^p.
There is no interpretation of I_n^β,p in terms of replicas as in the case p=2, but the rate of decay of I_n^β,p is closely related to the localization or delocalization of μ_ω,n^β. With the help of Theorem <ref>, we obtain the following:
Assume (β)>1+2/d.
(i) For any p>1+2/d, it holds that ∑_nI_n^β,p<∞ almost surely.
(ii) It holds that sup_n∈∑_k=1^n I_k,n^β,2<∞ almost surely.
Corollary <ref>(i) places some restrictions on the rate of decay of max_x∈^dμ_ω,n^β(X_n+1=x), simultaneously for all n large enough. As a final consequence of Theorem <ref>, we also provide an upper bound on max_x∈^dμ_ω,n^β(X_n+1=x) for typical n.
Assume (β)>1+2/d. For any >0, it holds that
lim_n→∞(max_x∈^dμ_ω,n^β(X_n+1=x)≥ n^-d/2(1-1/∧ 2)+)=lim_n→∞((I_n^β,2)^1/2≥ n^-d/2(1-1/∧ 2)+)=0.
In an upcoming work <cit.>, we will prove with different techniques that, without any assumptions on ,
lim_n→∞(max_x∈^dμ_ω,n^β(X_n+1=x)≤ n^-d/2(1-1/)-)=lim_n→∞((I_n^β,2)^1/2≤ n^-d/2(1-1/)-)=0.
By comparing with Corollary <ref>, we obtain, for (β)∈(1+2/d,2] and in probability,
lim_n→∞ -1/log n max_x∈ℤ^d logμ_ω,n^β(X_n=x)
=lim_n→∞ -1/log n log(I_n^β,2)^1/2
=d/2(1-1/p̂).
We expect that the situation is different in L^2-weak disorder, where it should be the case that, in probability,
lim_n→∞ -1/log n max_x∈ℤ^d logμ_ω,n^β(X_n=x)=d/2(1-1/p̂)>d/4=lim_n→∞ -1/log n log(I_n^β,2)^1/2.
§.§ Outline and conventions
We start by proving some auxiliary results in Section <ref> before introducing (Section <ref>) the chaos decomposition discussed in Section <ref>. Section <ref> contains the proof of the main result, Theorem <ref>, and Section <ref> contains the proof of Theorem <ref>. Finally, the corollaries from Section <ref> are proved in Section <ref>.
We write c,C,C',… for constants whose precise value is not important and which may change from line to line. Unless indicated otherwise, the norm |x| of x∈^d refers to the ℓ^2-norm.
§ AUXILIARY RESULTS
Let P^⊗ 2, denote the law of two independent random walks (X_n)_n∈ and (X_n')_n∈ with marginals P^. We write P^,x and P^⊗ 2,,(x,x') to indicate the starting points and if μ is a probability measure on ^d, we write
P^,μ=∑_xμ(x)P^,x and P^⊗ 2,,μ=∑_x,x'μ(x)μ(x')P^⊗ 2,,(x,x').
Let p∈(1+2/d,2]. There exists C>0 such that, for all μ∈_1(^d) and ∈[-1,1]^d,
∑_t∈,z∈^dP^⊗ 2,,μ(X_s≠ X_s' for s=0,…,t-1,X_t=X_t'=z)^p/2≤ Cmax_x μ(x)^p-(1+2/d).
Moreover, for all T∈ it holds that
∑_t≥ T,z∈^dP^⊗ 2,,μ(X_s≠ X_s' for s=0,…,t-1,X_t=X_t'=z)^p/2≤ CT^-d/2(p-1)+1.
We bound
P^⊗ 2,,μ(X_s≠ X_s' for all s=0,…,s-1,X_t=X_t'=z)^p/2
≤ P^⊗2,,μ(X_t=X_t'=z)^p/2
=((μ * P_t^)(z))^p,
where P_t^(x)=P^(X_t=x) and “*” denotes the convolution of two functions ^d→. Writing f_r=(∑_x∈^d |f(x)|^r)^1/r, Young's convolution inequality now gives
μ *P_t^_p^p ≤min{μ_1^pP_t^_p^p,μ_p^pP_t^_1^p} =min{P_t^_p^p,μ_p^p},
where we used that both μ and P_t^ are probability measures. By Theorem <ref>, there exists C>0 such that, for all ∈[-1,1]^d,
P_t^_p^p=∑_x P^_t(x)^p≤max_x P_t^(x)^p-1∑_x P_t^(x)=max_x P_t^(x)^p-1≤ Ct^-d/2(p-1).
By assumption, d/2(p-1)>1, so this bound is summable and (<ref>) follows. For (<ref>), we fix T∈ and use the first bound from (<ref>) for t≤ T and the second bound for t>T, which gives
∑_t∈,z∈^d P^⊗ 2,,μ(X_s≠ X_s' for all s=0,…,s-1,X_t=X_t'=z)^p/2
≤ Tmax_xμ(x)^p-1+CT^-d/2(p-1)+1.
We have used that μ_p≤max_x μ(x)^p-1. Inserting T=(max_x μ(x))^-2/d now gives (<ref>).
Next, we apply obtain a bound for a certain joint partition function. Similar to (<ref>), let
W^⊗ 2,β,,(x,x')_n[_B]=E^⊗ 2,,(x,x')[e^β H_(0,n](ω,X)+β H_(0,n](ω,X')-2nλ(β)_B].
Assume (β)>1+2/d and let p∈(1+2/d,(β)∧ 2). For any >0 there exist β_0>β, λ_0>0 and T∈ such that, for all β'∈[β,β_0] and ∈[-λ_0,λ_0]^d,
∑_t≥ T,z∈^d[W_t^⊗ 2,β',[_X_s≠ X_s' for all s=T,…,t-1,X_t=X_t'=z]^p/2]≤.
Furthermore, there exists C>0 such that, for all n≥ T, β'∈[β,β_0] and ∈[-λ_0,λ_0]^d,
∑_t≥ n,z∈^d[W_t^⊗ 2,β',[_X_s≠ X_s' for all s=T,…,t-1,X_t=X_t'=z]^p/2]≤ Cn^-(p-(1+2/d)).
Using the conditional Jensen inequality, we get for any β'∈_+ and ∈^d,
[W_t^⊗ 2,β',[_X_s≠ X_s' for all s=T,…,t-1,X_t=X_t'=z]^p/2]
≤[[W_t^⊗ 2,β',[_X_s≠ X_s' for all s=T,…,t-1,X_t=X_t'=z]|_T-1]^p/2]
=e^λ(pβ')-pλ(β')[W_T-1^⊗ 2,β',[_X_s≠ X_s' for all s=T,…,t-1,X_t=X_t'=z]^p/2]
=e^λ(pβ')-pλ(β')[(W_T-1^β',)^pP^⊗ 2,,μ_ω,T-1^β',(X_s≠ X_s' for all s=0,…,t-1,X_t=X_t'=z)^p/2],
where μ_ω,T-1^β',(·)=μ_ω,T-1^β',(X_T∈·). Summing over t and z and applying Lemma <ref>, we get
∑_t≥ T,z∈^d[W^⊗2,β',_t[_X_s≠ X_s' for all s=T,…,t-1,X_t=X_t'=z]^p/2]
≤ Ce^λ(pβ')-pλ(β')[(W^β',_T-1)^pmax_x μ^β',_ω,T-1(x)^p-(1-2/d)].
We first consider this equation for β'=β and =. Let δ>0 be small enough that p(1+δ)<(β) and let q be the Hölder dual of 1+δ, so that
[(W^β,_T-1)^p sup_x μ^β,_ω,T-1(X_T=x)^p-(1-2/d)]
≤[(W^β,_T-1)^p(1+δ)]^1/(1+δ)[ sup_x μ^β,_ω,T-1(X_T=x)^q(p-(1-2/d))]^1/q.
By assumption, the left-hand term is bounded in T. Since sup_x μ_ω,T-1^β,(X_T=x) is bounded and converges to zero in probability, we find T∈ such that (<ref>) is bounded by /2 for β'=β and =. Note that the integrand is a continuous function of β' and (since the time-horizon T-1 is fixed). Thus, by the dominated convergence theorem, we can choose β_0 and λ_0 small enough that the right-hand side of (<ref>) is bounded by for all β'∈[β_0,β] and ∈[-λ_0,λ_0]^d, as desired.
Finally, with the same values of T, β_0 and λ_0 but applying (<ref>) instead of (<ref>) in (<ref>), we get
∑_t≥ n,z∈^d[W^⊗2,β',_t[_X_s≠ X_s' for all s=T,…,t-1,X_t=X_t'=z]^p/2]
≤ Ce^λ(pβ')-pλ(β')n^-(p-(1+2/d))[(W^β',_T-1)^p].
The claim follows since the quantities in the final line are bounded in β'∈[β,β_0] and ∈[-λ_0,λ_0]^d, for fixed T.
In the next lemma, we consider an expectation with respect to the random walk bridge between times 0 and n where the integrand does not depend on what happens in the “middle”. We show that, in this case, the expectation can be factorized into two expectations with respect to the simple random walk. To make this precise, we write _I:=σ(X_t:i∈ I∩) and _I^⊗ 2:=σ(X_t,X_t':t∈ I∩) for the filtration of the walks.
(i)For all ∈(0,1/2) and M>1, there exists C>0 such that
sup_n∈sup_∈[-1,1]^dsup_x,x',y,y'∈^d
(0,x)↔(n,y),(0,x')↔(n,y')
|x-y-n()|≤ Mn^1/2
|x'-y'-n()|≤ Mn^1/2sup_0≤ s≤ t≤ n
t-s≥ nsup_f∈^⊗ 2_[0,s],g∈^⊗ 2_[t,n]E^⊗ 2,(x,x'),(y,y')_0,n[fg]/E^⊗ 2,,(x,x'),⋆_0,n[f]E^⊗ 2,,⋆,(y,y')_0,n[g]≤ C,
(ii) For every ∈(0,1/4), there exists a non-negative sequence (a_n)_n∈ with lim_n→∞a_n=0 such that, for all n∈,
sup_x,y∈^d
(0,x)↔(n,y)
|x-y|≤ n^3/4-sup_s≤ n^1/4-
t≥ n-n^1/4-sup_f∈_[1,s],g∈_[t,n]|E^x,y_0,n[fg]/E^x,⋆_0,n[f]E^⋆,y_0,n[g]-1|≤ a_n.
We start with part (i): for any z_1,z_2,z_1',z_2'∈^d, we have
E^⊗ 2,(x,x'),(y,y')_0,n[fg_(X_s,X_s')=(z_1,z_1')_(X_t,X_t')=(z_2,z_2')]
=E^⊗ 2,(x,x'),⋆_0,n[fg_(X_s,X_s')=(z_1,z_1')_(X_t,X_t')=(z_2,z_2')_(X_n,X_n')=(y,y')]/P^⊗ 2((X_n,X_n')=(y-x,y'-y'))
=E^⊗ 2,,(x,x'),⋆_0,n[fg_(X_s,X_s')=(z_1,z_1')_(X_t,X_t')=(z_2,z_2')_(X_n,X_n')=(y,y')]/P^⊗ 2,((X_n,X_n')=(y-x,y'-x'))
=E^⊗ 2,,(x,x'),⋆_0,n[f_(X_s,X_s')=(z_1,z_1')]E^⊗ 2,,⋆,(y,y')_0,n[g_(X_t,X_t')=(z_2,z_2')]
×P^⊗ 2,((X_t-s,X_t-s')=(z_2-z_1,z_2'-z_1'))/P^⊗ 2,((X_n,X_n')=(y-x,y'-y))
≤ C E^⊗ 2,,(x,x'),⋆_0,n[f_(X_s,X_s')=(z_1,z_1')]E^⊗ 2,,⋆,(y,y')_0,n[g_(X_t,X_t')=(z_2,z_2')],
where the inequality is due to Theorem <ref>. The claim follows by summing up over z_1,z_1',z_2,z_2'.
For part (ii), by repeating the above calculation with a single random walk and =, we see that it is enough to show that, uniformly over all x,y,s,t as in (<ref>) and all z_1∈ x+[-s^1/4-,s^1/4-]^d,z_2∈ y+[-(n-t)^1/4-,(n-t)^1/4-]^d,
P(X_t-s=z_2-z_1)/P(X_n=y-x)=1+o(1) as n→∞.
We will prove this with the help of a strong version of the local limit theorem for the simple random walk, namely <cit.>. Note that the result is not directly applicable to P since the assumption “p∈𝒫_d” requires the random walk to be aperiodic. This can be rectified as follows: for even n, we can combine two steps into one and consider Y_n:=X_2n/2, which defines an irreducible and aperiodic random walk on ^d. Furthermore, by changing s to s+1 and t to t-1 if necessary, we can also assume s and t to be even. For odd n, we can decompose
E^x,y_0,n[fg] =∑_|y-y'|_1=1P^x,y_0,n(X_n-1=y') E^x,y'_0,n-1[fg^y'],
E^⋆,y_0,n[f] =∑_|y-y'|_1=1P^⋆,y_0,n(X_n-1=y')E^⋆,y'_0,n-1[g^y'].
with appropriately defined g^y'∈_[t,n-1], |y'-y|_1=1.
Let p_t(x):=1/(2π t)^d/2e^-d|x|^2/2t and note that, by <cit.>,
P(X_t-s=z_2-z_1) =p_t-s(z_2-z_1)e^O(1/t-s+|z_2-z_1|^4/(t-s)^3),
P(X_n=y-x) =p_n(y-x)e^O(1/n+|y-x|^4/n^3)
Due to our assumptions on x,y,z_1 and z_2, the exponential terms converge to one uniformly. Moreover, since
|y-x|^2|1/n-1/t-s| ≤
n^-1/4-3,
1/t-s||y-x|^2-|z_2-z_1|^2|
≤ 4n^-2,
we see that
p_t-s(z_2-z_1)/p_n(y-x)=(n/t-s)^d/2e^d|y-x|^2(1/2n-1/2(t-s))e^d/2(t-s)(|y-x|^2-|z_2-z_1|^2)
also converges to one uniformly.
§ THE DECOMPOSITION OF JOINT PARTITION FUNCTIONS
To implement the idea outlined in Section <ref>, we introduce a sequence of stopping times for two paths (X_n,X_n')_n∈. See also Figure <ref> for an illustration. For fixed T∈, let
τ_0 := inf{k≥ 0:X_k=X_k'},
τ_k+1 := inf{m≥τ_k+T:X_m=X_m'} for k≥ 0.
Note that τ_0 is not required to be larger than T, so for example τ_0=0 if X_0=X_0'. Next, let
K_n := inf{k:τ_k> n},
L_n := #{0< i< K_n:τ_i-τ_i-1≥ n^1/2}.
That is, K_n is the number of collisions (separated in time by at least T) and L_n is the number of “large” gaps between collisions. The value n^1/2 in the definition of L_n is arbitrary, we could have used any exponent in (0,1). Note that in the definition of L_n, the first interval [0,τ_0] and last interval [τ_K_n-1,n] are not counted as “large” even if they are longer than n^1/2.
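As a quick illustration of these quantities (not used in the proofs), one can simulate two independent walks and read off K_n and L_n; in d=3 the walks meet only rarely, so K_n is typically small. All names below are illustrative assumptions.

import numpy as np

def collision_stats(n, T, d=3, seed=0):
    """Sample two independent simple random walks on Z^d up to time n and
    return (K_n, L_n) for the collision times tau_k separated by at least T."""
    rng = np.random.default_rng(seed)
    def walk():
        steps = np.zeros((n + 1, d), dtype=int)
        coords = rng.integers(0, d, size=n)
        signs = rng.choice([-1, 1], size=n)
        for t in range(n):
            steps[t + 1] = steps[t]
            steps[t + 1, coords[t]] += signs[t]
        return steps
    X, Y = walk(), walk()
    equal = np.all(X == Y, axis=1)                   # times t with X_t = X_t'
    taus, t = [], 0
    while True:
        hits = np.nonzero(equal[t:])[0]
        if len(hits) == 0:
            break
        taus.append(t + hits[0])
        t = taus[-1] + T                             # next collision at least T later
    K = len(taus)                                    # number of tau_k up to time n
    L = int(np.sum(np.diff(taus) >= np.sqrt(n)))     # "large" gaps between them
    return K, L

print(collision_stats(n=10_000, T=10))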
Note further that τ_0,τ_1,… depend on T, which is chosen as follows:
Assume p̂(β)>1+2/d and let p∈(1+2/d,p̂(β)∧ 2). Let λ_0, β_0 and T be as in Lemma <ref> with ϵ=1/4. There exist C,C'>0 such that, for all n∈ℕ, k,l≥ 1, β'∈[β,β_0], λ∈[-λ_0,λ_0]^d and μ∈_1(ℤ^d),
[W^⊗ 2,β',,μ,⋆_n[_K_n=k]^p/2] ≤ C4^-kmax_xμ(x)^p-(1+2/d),
[W^⊗ 2,β',,μ,⋆_n[_L_n≥ l]^p/2] ≤ C (C'n^-(p-(1+2/d))/2)^l.
In particular, it holds that
sup_n∈sup_∈[-λ_0,λ_0]^dsup_β∈[β,β_0][(W^β',_n)^p]<∞.
We decompose
[W^⊗ 2,β',,μ,⋆_0,n[_K_n=k]^p/2]
=[(∑_t_0,…,t_k-1
z_0,…,z_k-1 W^⊗ 2,β',,μ,⋆_0,n[_K_n=k,τ_i=t_i, X_τ_i=z_i for all i=0,…,k-1])^p/2]
≤∑_t_0,…,t_k-1
z_0,…,z_k-1[( W^⊗ 2,β',,μ,⋆_0,n[_K_n=k,τ_i=t_i, X_τ_i=z_i for all i=0,…,k-1])^p/2],
where the summation is over z_0,…,z_k-1∈^d and 0≤ t_0≤ t_1≤… t_k-1≤ n with t_i≥ t_i-1+T, i=1,…,k-1. We have used (<ref>) in the inequality. Next, we observe
W^⊗ 2,β',,μ,⋆_0,n[_K_n=k,τ_i=t_i, X_τ_i=X_τ_i'=z_i for all i=0,…,k-1]
=W^⊗ 2,β',,μ,⋆_0,t_0[_τ_0=t_0,X_τ_0=z_0]W^⊗ 2,β',,(z_k-1,z_k-1),⋆_t_k-1,n[_X_s≠ X_s' for all s=t_k-1+T,…,n]
×∏_i=1^k-1W^⊗ 2,β',,(z_i-1,z_i-1),⋆_t_i-1,t_i[_X_s≠ X_s' for all s=t_i-1+T,…,t_i-1,X_t_i=X_t_i'=z_i].
By Jensen's inequality, the first term can be estimated as
[W^⊗ 2,β',,μ,⋆_0,t_0[_τ_0=t_0,X_τ_0=z_0]^p/2] ≤[W^⊗ 2,β',,μ,⋆_0,t_0[_τ_0=t_0,X_τ_0=z_0]]^p/2
=e^λ(pβ')-pλ(β')P^⊗ 2,,μ,⋆(τ_0=t_0,X_τ_0=z_0)^p/2,
where we have used that, by definition, X_s≠ X_s for s=0,…,τ_0-1. For the last term we similarly get
[W^⊗ 2,β',,(z_k-1,z_k-1),⋆_t_k-1,n[_X_s≠ X_s' for all s=t_k-1+T,…,n]^p/2]
≤[[W^⊗ 2,β',,(z_k-1,z_k-1),⋆_t_k-1,n[_X_s≠ X_s' for all s=t_k-1+T,…,n]|_t_k-1+T-1]^p/2]
≤[W^⊗ 2,β',,(z_k-1,z_k-1),⋆_t_k-1,t_k-1+T-1[_X_s≠ X_s' for all s=t_k-1+T,…,n]^p/2]
≤[(W^β',_T-1)^p].
By dropping the restriction “t_k-1≤ n” in (<ref>), we obtain
[W^⊗ 2,β',,μ,⋆_n[_K_n=k]^p] ≤ e^λ(pβ')-pλ(β')[(W_T-1^β',)^p]
×(∑_t_0∈,z_0∈^d P^⊗ 2,,μ,⋆(X_s≠ X_s' for all s=0,…,t_0-1,X_t_0=X_t_0'=z_0)^p/2)
×(∑_t≥ T,z∈^d[W^⊗ 2,β',,⋆_t[_X_s≠ X_s' for all s=T,…,t-1,X_t=X_t'=z]^p/2])^k-1.
We have again used Jensen's inequality to integrate out the environment in ⋃_i=1^k(τ_i-1,τ_i+T). The first two factors are clearly bounded in β' and . The third factor can be bounded by Cmax_x μ(x)^p-(1-2/d) by Lemma <ref> and the final factor is bounded by 4^-(k-1) due to the choice of β_0 and λ_0 and Lemma <ref>. This proves (<ref>).
The argument for (<ref>) follows similarly. We start with (<ref>), where the summation is instead over k≥ l+1 and 0≤ t_0≤ t_1≤… t_k-1≤ n such that t_i≥ t_i-1+T (i=1,…,k), and t_i-t_i-1≥ n^1/2 for at least l indices i∈{1,…,k-1}. By repeating the argument, we see that the left-hand side of (<ref>) is bounded by the same quantity as in (<ref>), except that the final line is replaced by
∑_k≥ l+1k-1l(∑_t≥ T,z∈^d[W^⊗ 2,β',_t[_X_s≠ X_s' for all s=T,…,t-1,X_t=X_t'=z]^p/2])^k-1-l
×(∑_t≥ n^1/2,z∈^d[W^⊗2,β',_t[_X_s≠ X_s' for all s=T,…,t-1,X_t=X_t'=z]^p/2])^l.
The binomial coefficient is bounded by 2^k-1, the term inside (…)^k-1-l is bounded by 1/4 and the term inside (…)^l by Cn^-(p-(1+2/d))/2, where the last bound follows from (<ref>). Thus each summand in the display above is bounded by 2^-(k-1-l)(2Cn^-(p-(1+2/d))/2)^l and the claim follows by taking the sum over k.
Finally, to obtain (<ref>) we use (<ref>) to get
[(W^β',)^p]≤∑_k=0^∞[W^⊗ 2,β',[_K_n=k]^p/2].
Applying (<ref>), we see that the contribution from the sum over k≥ 1 is indeed bounded in n, β' and , and by Jensen's inequality [W_n^⊗ 2,β',[_K_n=0]^p/2]≤[W_n^⊗ 2,β',[_K_n=0]]^p/2≤ 1.
Up to this point, we have only considered partition functions with free endpoint. We now bound the joint pinned partition function with given starting point and endpoint and with the restriction that there is at least one collision.
Assume p̂(β)>1+2/d and let p∈(1+2/d,p̂(β)∧ 2). Recall (<ref>). Let p, T and λ_0 be as in Lemma <ref> and set α:=|m(λ_0,…,λ_0)|_∞/2. For all M>1, there exists C>0 such that
sup_n∈sup_ν∈_n(α ,M)[W_0,n^⊗ 2,β, ν[_K_n-1>0]^p/2]≤ C(sup_x ν(x,⋆)^p-(1+2/d)+sup_yν(⋆,y)^p-(1+2/d)),
Given ν∈_n(α ,M), let a,b be as in (<ref>) and note that there exists ∈[-λ_0,λ_0]^d such that ()=(b-a)/n. We decompose
[W^⊗2,β,ν_0,n[_K_n-1>0]^p/2] ≤[W^⊗2,β,ν_0,n[_K_n-1>Klog n]^p/2] +[W^⊗2,β,ν_0,n[_L_n-1> L]^p/2]
+[W^⊗2,β,ν_0,n[_0<K_n-1≤ Klog n,L_n-1≤ L]^p/2],
We first show that K and L can be chosen in such a way that the first two terms in the above display can be disregarded, namely
[W^⊗2,β,ν_0,n[_K_n>Klog n]^p/2] ≤ Cn^-d/2(p-(1+2/d))-1,
[W^⊗2,β,ν_0,n[_L_n> L]^p/2] ≤ Cn^-d/2(p-(1+2/d))-1.
Note that ν(·,⋆) is supported on sets of cardinality at most (2M+1)^dn^d/2, hence sup_xν(x,⋆)≥ (2M+1)^-dn^-d/2 and therefore the right-hand side of (<ref>) is much smaller than the right-hand sides of (<ref>) and (<ref>). In order to prove (<ref>) and (<ref>), note that for any event A,
W^⊗ 2,β,ν_0,n[_A]=∑_x,x',y,'yν(x,y)ν(x',y') W^⊗ 2,β,(x,x'),(y,y')_0,n[_A]
Each term can be estimated as follows:
W^⊗ 2,β,(x,x'),(y,y')_0,n[_A] = W^⊗ 2,β,(x,x'),⋆_n-1[_A_X_n=y,X_n'=y']/P(X_n=y-x)P(X_n=y'-x')
=W^⊗ 2,β,(x,x'),⋆_n-1[_A_X_n=y,X_n'=y'e^· X_n +· X_n'-2nφ() ]/E[e^· X_n -nφ()_X_n=y-x]E[e^· X_n -nφ()_X_n=y'-x']
≤W^⊗ 2,β,,(x,x'),⋆_n-1[_A ]/P^(X_n=y-x)P^(X_n=y'-x')
≤ CW^⊗ 2,β,,(x,x'),⋆_n-1[_A ]n^d.
In the final line, we have used that y-x,y'-x'∈ n()+[-Mn^1/2,Mn^1/2]^d and applied Theorem <ref>. Together with (<ref>), we get
[W^⊗ 2,β,ν_n-1[_A]^p/2] ≤ Cn^dp/2∑_x,x',y,y'ν(x,y)^p/2ν(x',y')^p/2[W^⊗ 2,β,,(x,x'),⋆_n-1[_A]^p/2]
≤ Cn^pd/2(sup_x,x'[W^⊗ 2,β,,(x,x'),⋆_n-1[_A]^p/2])∑_x,x',y,y'ν(x,y)^p/2ν(x',y')^p/2
≤ Cn^pd/2+2dsup_x,x'[W^⊗ 2,β,,(x,x'),⋆_n-1[_A]^p/2] .
In the final line, we have bounded ν(x,y) and ν(x',y') by 1 and used that the support of ν has cardinality at most (2M+1)^2dn^d. Now, to prove (<ref>) and (<ref>) we set A equal to {K_n-1≥ Klog n} and to {L_n-1≥ L}. Due to Lemma <ref>, we can choose K and L such that the expectation in (<ref>) decays at an arbitrarily fast polynomial rate.
It remains to bound the final term in (<ref>). To do so, we partition the interval {n/4,…,3n/4} into 4L disjoint intervals I_1={s_1,…,t_1},…,I_4L={s_4L,…,t_4L} of equal size. We claim that on {0<K_n-1≤ Klog n,L_n-1≤ L} there is at least one interval I_i such that X_t≠ X_t' for all t∈ I_i. To see this, let A_0:=0 and let A_i+1 be the index j such that τ_j is the endpoint of the next large interval after time τ_A_i,
A_i+1:=inf{j>A_i:τ_j≥τ_j-1+ n^1/2}.
On {K_n-1≤ Klog n}, it now holds that
{t∈{1,…,n-1}:X_t=X_t'}⊆⋃_i=0^K_n-1-1[τ_i,τ_i+T)⊆⋃_i=0^L_n-1[τ_A_i,τ_A_i+Kn^1/2log n+T).
The length of the intervals in the last union is much smaller than |I_i|=n/8L, so (for n large enough) each of them can intersect at most 2 of the intervals I_1,…,I_4L. Hence on {L_n-1≤ L} there must be at least one i_0 such that ⋃_i=0^K_n-1-1[τ_i,τ_i+T)∩ I_i_0=∅.
In addition, on K_n-1>0 we must have X_r=X_r' for some r in either [0,s_i_0)∩ or (t_i_0,n)∩. Together with (<ref>), we see that the last term in (<ref>) is bounded by
∑_i=1^4L[W^⊗2,β,ν_0,n[_X_r=X_r for some r<s_i,X_t≠ X_t for all t∈ I_i]^p/2]
+∑_i=1^4L[W^⊗2,β,ν_0,n[_X_r=X_r' for some r>t_i,X_t≠ X_t for all t∈ I_i]^p/2].
We will bound the first sum and note that the contribution from the second sum can be treated similarly. As explained in Section <ref>, the idea is that since there are no intersections in I_i, we can integrate out the environment in this strip, and the resulting partition function is then essentially the product of two free partition functions. Indeed, by Jensen's inequality, each summand can be bounded by
[[W^⊗2,β,ν_0,n[_X_r=X_r' for some r<s_i,X_t≠ X_t for all t∈ I_i]|_[0,n]∖ I_i]^p/2]
= [W^⊗2,β,ν_[0,n]∖ I_i[_X_r=X_r' for some r<s_i,X_t≠ X_t for all t∈ I_i]^p/2]
≤[W^⊗2,β,ν_[0,n]∖ I_i[_X_r=X_r' for some r<s_i]^p/2]
where, for I⊆_+, _I:=σ(ω_t,x:t∈ I∩) and where (recall (<ref>))
W^⊗2,β,ν_I[_A]=∑_x,x',y,y'ν(x,y)ν(x',y')E^⊗ 2,(x,x'),(y,y')_0,n[e^β H_I(ω,X)+β H_I(ω,X')-2|I∩|λ(β)_A].
Now, applying Lemma <ref> with
f =e^β H_[1,s_i)(ω,X)+β H_[1,s_i)(ω,X')-2s_iλ(β)_X_r=X_r' for some r<s_i,
g =e^β H_(t_i,n)(ω,X)+β H_(t_i,n)(ω,X')-2(n-t_i)λ(β)
shows that, almost surely,
W^⊗2,β,ν_[0,n]∖ I_i[_X_r=X_r' for some r<s_i] ≤ CW^⊗ 2,β,,ν(·,⋆),⋆_0,s_i-1[_X_r=X_r' for some r<s_i]W^⊗ 2,β,,⋆,ν(⋆,·)_t_i+1,n
=CW^⊗ 2,β,,ν(·,⋆),⋆_0,s_i-1[_K_s_i-1>0]W^⊗ 2,β,,⋆,ν(⋆,·)_t_i+1,n.
Using again (<ref>), we have
[W^⊗2,β,ν_0,n[_X_r=X_r for some r<s_i,X_t≠ X_t for all t∈ I_i]^p/2]
≤[(W^β,,⋆,ν(⋆,·)_t_i+1,n)^p]∑_k≥ 1[W^⊗ 2,β,,ν(·,⋆),⋆_0,s_i-1[_K_s_i-1=k]^p/2]
The conclusion thus follows from Lemma <ref> by using (<ref>) for the left factor and summing (<ref>) over k≥ 1 for the right factor.
§ PROOF OF THE LOCAL LIMIT THEOREM
Let T and α be as in Lemma <ref>. Using (<ref>) and Jensen's inequality, we decompose
[(W^β,x,y_0,n)^p] =[(W^⊗2,β,(x,x),(y,y)_0,n)^p/2]
≤ 1 +[W^⊗2,β,(x,x),(y,y)_0,n[_K_n-1>0]^p/2],
The second term is bounded due to (<ref>). It remains to prove (<ref>), for which we first observe
[|W^β,ν_0,n-1|^p]≤[|W^β,ν_0,n-1|^p_A^c]+[|W^β,ν_0,n-1|^p_A],
where, for a∈(0, 1) and rmax_xν(x,⋆)+max_yν(⋆,y),
A {W^⊗ 2,β,ν_0,n[_K_n-1>0]≤ r^a}.
Note that, by choosing p' sufficiently close to we obtain, using again Lemma <ref>,
(A^c) ≤ r^-ap'[W_0,n^β,ν[_K_n-1>0]^p']
≤ Cr^-ap'+p'-(1+2/d)
≤ Cr^-a+-(1+2/d)-/2.
Then, by choosing p' sufficiently close to /p, we obtain
[|1-W_0,n^β,ν|^p_A^c] ≤[(1+W_0,n^β,ν)^pp']^1/p'(A^c)^1-1/p'
≤sup_x,y:ν({x,y})>0[(1+W_0,n^β,x,y)^pp']^1/p'(A^c)^1-1/p'
≤ Cr^(-p)(1-1+2/d/-a)-.
The second inequality is due to Jensen's inequality and the third inequality uses (<ref>). We have obtained a bound for the first term in (<ref>).
Similarly, by choosing p' sufficiently close to , we get
[W_0,n^β,ν_A^c] ≤sup_x,y:ν({x,y})>0[(W^β,x,y_0,n)^p']^1/p'(A^c)^1-1/p'
≤ Cr^(-1)(1-1+2/d/-a)-2/p.
To bound the second term in (<ref>), we use
[|W^β,ν_0,n-1|^p_A]
≤[|W^β,ν_0,n-1|^2_A]^p/2
=([W^⊗ 2,β,ν_0,n_A]-2[W^β,ν_0,n_A]+(A))^p/2
=([W^⊗ 2,β,ν_0,n[_K_n-1>0]_A]+[W^⊗ 2,β,ν_0,n[_K_n-1=0]_A]+2[W^β,ν_0,n_A^c]-(A^c)-1)^p/2
≤([W^⊗ 2,β,ν_0,n[_K_n-1>0]_A]+[W^β,ν_0,n_A^c]+P(A^c))^p/2
≤(r^a+2Cr^(-1)(1-1+2/d/-a)+2/p)^p/2
≤ r^a p/2+C'r^p/2(-1)(1-1+2/d/-a)+
In the second equality, we have used that [W^β,ν_0,n_A]=1-[W^β,ν_0,n_A^c] and in the second inequality we have used [W^⊗ 2,β,ν_0,n[_K_n-1=0]]≤ 1. The third equality uses the definition of A and (<ref>). Together with (<ref>), (<ref>) is now bounded by
r^a p/2 + r^p/2 (-1)(1-1+2/d/-a)+ + r^(-p)(1-1+2/d/-a)+.
We obtain (<ref>) by optimizing this expression over a∈(0,1-1+2/d/). The first exponent is increasing and the remaining two are decreasing, and moreover they are multiples of each other. In particular, they equal zero for the same value of a. Depending on whether p>1+-1/+1 or not, the minimizing value a can be computed by setting the first and the third exponent, resp. the first and the second exponent to be equal, which gives
a =(1-1/)(1-1+2/d/) if p∈(1+-1/+1,),
-p/-p/2(1-1+2/d/) if p∈(1+2/d,1+-1/+1).
We obtain ξ as a p/2, which concludes the proof.
§ PROOF OF THE PROPERTIES OF
It is well-known that β↦[f(W_n^β)] is increasing for every f:_+→ convex, see for example <cit.>, so β↦(β) is decreasing.
For part (i) we assume (β)∈(1+2/d,2]. By Lemma <ref>, for every p∈(1+2/d,) there exists β_0>β such that sup_n[(W_n^β')^p]<∞, hence (β')≥ p. By taking p↑, we obtain lim_β'↓(β')≥ and the claim follows from the monotonicity of .
For part (ii), assume (<ref>), β>β_cr^L^2 and (β)>1 and let >0. We will show that there exists β'<β such that [(W_n^β')^(β)+] diverges as n→∞, hence (β')≤(β)+. To this end, we use a variation of the renewal construction that appeared in the proof of <cit.>. We refer to the discussion at the beginning of <cit.> for intuition.
From <cit.>, we find c>0 such that for all u large enough,
(∃ t∈,x∈^d s.t. W_t^β[_X_t=x]≥ u)≥ cu^--/2.
Let u be large enough that, additionally, u^/2≥ 8/c and then T large enough that
(∃ t≤ T,x∈^d s.t. W_t^β[_X_t=x]≥ u)≥ cu^--/2/2.
The above probability is a continuous function of β, hence there exists β'<β such that
(∃ t≤ T,x∈^d s.t. W_t^β'[_X_t=x]≥ u)≥ cu^--/2/4.
Let now (σ,Z)inf{(t,z) W_t^β'[_X_t=z]≥ u]}, where the infimum is with respect to the lexicographic order on ×^d. We define (σ_0,Z_0)=(0,0) and, given σ_i<∞, (σ_i+1,Z_i+1) (σ,Z)∘θ_σ_i,Z_i, where θ_t,x denotes the space-time shift of the environment defined by (θ_t,xω)_t',x':=ω_t+t',x+x'. On the event
{σ_i-σ_i-1≤ T for i=1,…,⌊ n/T⌋}∩{inf_m W_m'∘θ_σ_⌊ n/T⌋,Z_⌊ n/T⌋>a},
it holds that
W_n^β'≥ W_n^β'[_X_σ_i=Z_i for all i=1,…,⌊ n/T⌋]≥ au^⌊ n/T⌋.
Hence
[(W^β'_n)^+] ≥ a^- u^+(σ_1≤ T)^⌊ n/T⌋(inf_m W_m^β'>a)
≥ a^+(inf_mW_m^β'>a)(cu^/2/4)^⌊ n/T⌋
≥ a^+(inf_mW_m^β'>a)2^⌊ n/T⌋,
where we used (<ref>) in the second inequality and the definition of u in the final inequality. Since (<ref>) holds at β', the probability in the final line is positive for any a>0. We thus see that [(W_n^β')^+] diverges (even exponentially fast), hence (β')≤(β)+. Since is arbitrary, we obtain lim_β'↑β(β')≤(β).
For part (iii), we use hypercontractivity and the result from <cit.>, which are introduced in Appendix <ref> and <ref>. We assume that has finite support and that (β)>1. Let β'∈(0,β). By Theorem <ref>, there exists ρ∈(0,1) such that, for any p≥ 1 and n∈,
[(W_n^β')^p]≤[(T_ρ W_n^β)^p].
Next, recall the value p(q) from Theorem <ref>. We claim that there exists >0 such that
p((β)-)>(β).
Indeed, for all ∈(0,(β)-1) it holds that r():=p((β)-)/(β)->1 and ↦ r() is increasing. Let _0:=1+(β)/2 and note that (<ref>) holds if we choose >0 small enough for ((β)-)r(_0)>(β). Finally, we apply (<ref>) with p equal to p((β)-) and obtain
W_n^β'_p((β)-)≤T_ρ W_n^β_p((β)-)≤W_n^β_(β)-,
where the last inequality is due to Theorem <ref>. By definition of , the right-hand side is bounded in n, hence by (<ref>) we have (β')≥ p((β)-)>(β).
§ PROOFS FOR THE COROLLARIES
We start with the alternative statement for the local limit theorem.
We introduce
W^β,x,y_(0,r]∪[n-r,n) E^x,y_0,n[e^β H_(0,r]∪ [n-r,n)(ω,X)-2rλ(β)],
ν_ω,r(x',y') W^β,x,y_(0,r]∪[n-r,n)[_X_r=x',X_n-r=y']/W^β,x,y_(0,r]∪[n-r,n).
By applying Lemma <ref>(ii) with f=e^β H_(0,r](ω,X)-rλ(β) and g=e^β H_[n-r,n)(ω,X)-(n-r)λ(β), we obtain, almost surely,
|W^β,x,y_(0,r]∪[n-r,n)/W^β,x,⋆_rW^β,⋆,y_n-r,n-1|≤ a_n.
Furthermore, by adding the indicators _X_r=x' and _X_n-r=y' to f and g for |x-x'|≤ r and |y-y'|≤ r, we obtain, almost surely,
sup_x',y'|ν_ω,r(x',y')/μ_ω,r(X_r=x')μ_ω,[n-r,n)^⋆,y(X_n-r=y')-1|≤ 3a_n,
where μ_ω,[n-r,n)^⋆,y(X_n-r=y')=W^β,⋆,y_n-r,n[_X_n-r=y']/W^β,⋆,y_n-r,n is the backwards version of the polymer measure and the supremum is over those x',y' with ν_ω,r(x',y')>0.
Now, using (<ref>) and the inequality |AB-1|≤ |A||B-1|+|A-1|, we obtain
|W^β,x,y_0,n/W^β,x,⋆_rW^β,⋆,y_n-r,n-1|≤(1+a_n) |W^β,ν_ω,r_r,n-r-1|+ a_n.
Furthermore, by applying Theorem <ref>(ii) and (<ref>), we get, for some A>0,
[|W^β,ν_ω,r_r,n-r-1|^p] ≤[max_x'ν_ω,r(x',⋆)^A]+[max_y'ν_ω,r(⋆,y')^A]
≤ 2(1+3a_n)^A [max_x'μ_ω,r(X_r=x')^A].
Note that the bounded in the last line no longer depends on x and y. The claim now follows from Theorem <ref>(i).
The proof for ≥ is straightforward:
Assume (β)∈(1+2/d,2]. By Theorem <ref>(i), for any p∈(1+2/d,) we have sup_n[(W_0,n^0,0)^p]<∞, hence (β)≥ p. By taking p↑ we obtain ≥.
Next, we prove the stability of W^β,_n in the drift.
Assume again (β)∈(1+2/d,2]. For any p∈(1+2/d,), by Lemma <ref> there exists λ_0 such that sup_n[(W_n^β,)^p]<∞ holds for all ∈[-λ_0,λ_0]^d, hence (β,)≥ p. The lower-semicontinuity follows by taking p↑ and the second claim is now obvious.
Next, we prove the claim regarding the large deviation principle.
The existence of the LDP is well-known, even in the strong disorder phase, see for example <cit.>. We compute the logarithmic moment generating function of μ_ω,n^β(X_n∈·),
logμ_ω_n^β[e^· X_n]=log E[e^β H_n(ω,X)-nλ(β)+λ X_n]-log W^β_n=log W^β,_n+nφ()-log W^β_n.
By standard arguments based on concentration inequalities, for any ∈^d, almost surely,
lim_n→∞1/nlog W_n^β,=lim_n→∞1/n[log W_n^β,]≤ 0,
where the last inequality is Jensen's inequality. If (<ref>) holds then W_n^β converges almost surely to a positive limit, hence
lim_n→∞1/nlogμ_ω,n^β[e^· X_n]≤φ().
The inequality I^β(x)≥ I^0(x) follows from the Gärtner-Ellis Theorem, see for example <cit.>. Moreover, if (<ref>) holds with bias , then lim_n→∞1/nlog W_n^β,=0 almost surely and the inequality in (<ref>) becomes an equality. Thus the equality of I^β and I^0 in a neighborhood of the origin again follows from the Gärtner-Ellis Theorem.
If (β)>2 then (<ref>) has been observed in <cit.>, and for (β)∈(1+2/d,2] we can apply Corollary <ref> to conclude.
Next, we prove the statements regarding the decay of the replica overlap.
Part (i): Note that since I_n^β,p is decreasing in p, it is enough to check the claim for all p close to 1+2/d. Assume (β)>1+2/d and let p∈(1+2/d,∧ 2). Let α be as in Theorem <ref>. By standard large deviation estimates for the simple random walk, see for example <cit.>, there exists c>0 such that, for all n∈ and |x|≥α n,
(W^β_n[_X_n+1=x]≥ e^-cn)≤ e^cnP(X_n+1=x)≤ e^-cn,
and hence, almost surely,
W^β_n[_X_n+1=x]≥ e^-cn for at most finitely many n∈,|x|≥α n.
We decompose
∑_n I_n^β,p =(W^β_n)^-p∑_n∈,x∈^dW^β_n[_X_n+1=x]
=(W^β_n)^-p∑_n∈,|x|>α nW^β_n[_X_n+1=x]^p+(W^β_n)^-p∑_n∈,|x|≤α n(W_0,n+1^β,0,x)^pP(X_n+1=x)^p.
Due to (<ref>), the first sum is almost surely finite. For the second sum, we take expectation and apply (<ref>),
∑_n∈,|x|≤α n[(W_0,n+1^β,0,x)^p]P(X_n+1=x)^p ≤ C∑_n∈,x∈^dP(X_n+1=x)^p
≤ C∑_n∈max_x∈^dP(X_n+1=x)^p-1,
which is finite due to Theorem <ref>. For part (ii), we first establish a bound for I_k,n^β,2 that does not depend on n. Indeed,
I_k,n^β,2 =(W_n^β)^-2∑_x W^β_n[_X_k=x]^2
≤(sup_m (W_m^β)^-2) ∑_x W^β_k[_X_k=x]^2(sup_m W^β_m∘θ_k,x)^2
≤(sup_m (W_m^β)^-2) ( e^-2ck∑_|x|>α k (sup_m W^β_m∘θ_k,x)^2+∑_|x|≤α k W^β_k[_X_k=x]^2(sup_m W^β_m∘θ_k,x)^2),
where the last equality holds due to (<ref>) and is valid for all k≥ k_0(ω). It is enough to show that the last line is almost surely summable over k. To deal with the first sum, we estimate
[(∑_ke^-2ck∑_|x|≥α k (sup_m W^β_m∘θ_k,x)^2)^1/2] ≤[∑_ke^-ck∑_|x|≥α k (sup_m W^β_m∘θ_k,x)]
= C∑_ke^-ckk^d,
where we used (<ref>) and Theorem <ref>(i). For the second sum in (<ref>), we take p∈(1+2/d,∧ 2) and estimate
[(∑_k∑_|x|≤α k W^β_k[_X_k=x]^2(sup_m W^β_m∘θ_k,x)^2)^p/2] ≤[∑_k∑_|x|≤α k W^β_k[_X_k=x]^p(sup_m W^β_m∘θ_k,x)^p]
=[(sup_m W^β_m)^p][∑_k∈,|x|≤α kW^β_k[_X_k=x]^p].
The first factor is finite due to Doob's inequality and the second factor is almost the same as in (<ref>), hence finite.
Finally, we prove the claim regarding the typical behavior of I_n^β,2.
Since max_xμ_ω,n^β(X_n+1=x)≤ (I_n^β,2)^1/2, it is enough to bound (I_n^β,2)^1/2. Moreover, since sup_n (W^β_n)^-2 is almost surely finite, it is enough to show that
lim_n→∞(∑_xW_n^β[_X_n+1=x]^2≥ n^-d(1-1/)+)=0.
Using (<ref>) and a union bound, we obtain
lim_n→∞(∑_|x|≥α nW^β_n[_X_n+1=x]^2≥ e^-cn)=0.
To estimate the remaining sum, we fix p∈(1+2/d,(β)) and apply the Markov inequality,
(∑_|x|≤α nW^β_n[_X_n+1=x]^2≥ n^-d(1-1/)+) ≤ n^d/2(p-p/)- p/2[(∑_|x|≤α nW^β_n[_X_n+1=x]^2)^p/2].
To bound the expectation, we apply (<ref>) and argue as in (<ref>) to obtain
[(∑_|x|≤α nW^β_n[_X_n+1=x]^2)^p/2] ≤∑_|x|≤α n[(W_0,n+1^β,0,x)^p]P(X_n+1=x)^p
≤ Cmax_x∈^dP(X_n+1=x)^p-1
≤ Cn^-d/2(p-1).
By comparing with (<ref>), we see that the exponent of n is negative if we choose p sufficiently close to .
§ APPENDIX
§.§ Discussion of hypercontractivity
We assume that the environment distribution has finite support. Let us first make the setup more precise: let Ω={0,…,L}^Λ denote the enlarged hypercube, where Λ is some finite index set. In our application we will take Λ={1,…,n}×{-n,…,n}^d. Let π be a probability measure with support {0,…,L}, and equip Ω with the product measure ⊗_i∈Λπ.
For g:Ω→ℝ and ρ∈[0,1], the noise operator T_ρ acts on g by
(T_ρg)(ω) := 𝔼[g(ω_ρ)|σ(ω)], where
(ω_ρ)_t,x := ω_t,x with probability ρ,
ω'_t,x with probability 1-ρ,
and where ω' is an independent copy of ω. The coordinates of ω_ρ are independent, so that ω_ρ has the same law as ω. Thus T_ρ smoothes out the effect of individual coordinates of ω and we expect that ω↦ T_ρ g(ω) is, in a sense, smoother than ω↦ g(ω). This effect of “noise stability” has been investigated quite actively recently, see <cit.> for an overview.
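For concreteness, the action of T_ρ can be approximated by resampling coordinates; below is a minimal Python Monte Carlo sketch (ours, purely illustrative), where pi_probs is the probability vector of π on {0,…,L}:

```python
import numpy as np

def noise_operator_mc(g, omega, pi_probs, rho, n_mc=10_000, seed=0):
    """Monte Carlo estimate of (T_rho g)(omega): each coordinate of omega is
    kept with probability rho and resampled from pi with probability 1 - rho."""
    rng = np.random.default_rng(seed)
    omega = np.asarray(omega)
    total = 0.0
    for _ in range(n_mc):
        keep = rng.random(omega.shape) < rho
        fresh = rng.choice(len(pi_probs), size=omega.shape, p=pi_probs)
        total += g(np.where(keep, omega, fresh))   # evaluate g at omega_rho
    return total / n_mc
```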
Given 1<q<p<∞ and ρ∈[0,1], we say that a hypercontractive inequality holds if, for all g:Ω→ℝ,
T_ρ g_p≤g_q.
Note that this inequality is always satisfied for ρ=0 and never for ρ=1 (unless g is constant). One natural problem is to show that, for a given p and q, there exists ρ>0 satisfying (<ref>) and to determine the largest possible value such that the inequality still holds. For our purposes, we need a slightly weaker result:
Fix ρ∈(0,1). For every q>1 there exists p=p(q)>q such that (<ref>) holds for all g:Ω→ℝ and for all finite Λ. Moreover, q↦p(q)/q is increasing.
The hypercontractive inequality has been studied most intensively in the case L=1 and π=1/2δ_0+1/2δ_1, i.e., the marginals have a symmetric Bernoulli distribution. In that case, it is known that, for given 1<q<p<∞, (<ref>) holds for all ρ≤ (q-1/p-1)^1/2, see <cit.> or the discussion in <cit.>. Since (q-1/p-1)^1/2 converges to one as p↓ q, we can prove Theorem <ref> in the symmetric Bernoulli case in this way.
However, it seems that a generalization to biased Bernoulli distributions is not available. Namely, in that case the optimal ρ has been established if either p or q equals 2, see <cit.> and the references therein, but we did not find suitable references for the case q<p<2. In that direction, the most relevant work is <cit.>: they obtain certain bounds on the optimal ρ in (<ref>), but those bounds are of asymptotic nature and do not give sufficient information in the regime p↓ q. We note that they also show <cit.> that the general case L>1 can be reduced to the case |Λ|=1 and L=1, i.e., (biased) Bernoulli distributions.
We now explain how Theorem <ref> can be derived from the result in <cit.>, which comes from a different context. The downside of this approach is that it does not give an explicit expression for p=p(q,ρ).
Fix ρ∈(0,1). In <cit.>, they regard ω and ω_ρ as two steps of a Markov chain on Ω with transition semigroup T_ρ and study the quantity
s_p := min{r∈[0,1] : 𝔼[((T_ρ g)(ω))^p]^1/p≤𝔼[g(ω_ρ)^rp]^1/rp for all g:Ω→ℝ}∈[p^-1,1].
In our case, as noted above, ω and ω_ρ have the same law, and thus the definition of s_p implies that (<ref>) holds with q equal to ps_p. Note also that (ω,ω_ρ) is an indecomposable Markov chain in the sense given before <cit.>. Thus, by <cit.>, we obtain that s_p is independent of |Λ| and that p↦ s_p is strictly decreasing with s_1=1.
We now claim that we can choose p := q/s_q to conclude. Indeed, due to the strict monotonicity, s_q<1 and thus p>q. Moreover, since s_p<s_q, we can take r equal to s_q in (<ref>) to get
𝔼[(T_ρ g)^p]^1/p≤𝔼[g^ps_q]^1/(ps_q)=𝔼[g^q]^1/q.
The final claim is clear from p(q)/q=1/s_q.
§.§ Hypercontractivity for partition functions
To apply the hypercontractive inequality to the partition function, we recall the following result from <cit.>, which relates T_ρW_n^β to W_n^β' for some β'<β.
[<cit.>]
If the environment is both upper and lower bounded, i.e., there exists K>0 such that
ℙ(ω_0,0∈[-K,K])=1,
then for any 0<β_0<β there exists C>0 such that, for all β'∈[β_0,β], n∈ℕ and p≥ 1,
𝔼[(W^β'_n)^p]≤𝔼[(T_1-C(1-β'/β) W^β_n)^p].
§.§ Local limit theorem for biased random walk
In this section, we consider the local limit theorem for the biased random walk P^. We refer to <cit.> for a discussion of this well-known result. For our purposes, we need the following statement:
For every M>1, there exists C>0 such that, for all ∈ [-1,1]^d and n∈,
sup_x∈^dP^(X_n=x) ≤ Cn^-d/2,
inf_x∈ n()+[-Mn^1/2,Mn^1/2]^d
(n,x)↔ (0,0)P^(X_n=x) ≥ C^-1n^-d/2.
For a fixed value of , it is not difficult to find references for Theorem <ref>. Most works require (X,P^) to be centered, aperiodic and ^d-valued, but a version that applies to our setup can be found in <cit.>. Most references furthermore do not provide the uniformity in that we desire, although a comment in that direction is given in <cit.>. However, upon inspection it becomes clear that the proofs can be made uniform in , and we will now explain how that can be checked with the help of estimates provided in <cit.>.
We write ψ_Z^() E^[e^iZ·] for the characteristic function of an ^d-valued random variable (Z,P^). To deal with the aperiodicity of (X_n)_n∈, we first consider the theorem for even times. Let _i X_2i/2 and note that the increments (_i-_i-1)_i≥ 0 are ^d-valued random variable with a lattice distribution of span 1. By <cit.>, for any x∈^d,
P^(_1=x) =1/(2π)^d∫_[-π,π]^dψ^__1()e^-ix.
For any δ∈(0,π), we can now write
P^(_n=n()+x) =1/(2π)^d∫_[-π,π]^dψ^__n()e^-ix·-in()·
=I_1(x)+1/(2π)^d∫_[-δ,δ]^dψ^__n()e^-ix·-in()·
=I_1(x)+1/(2π)^d∫_[-δ,δ]^dψ^__1-()()^ne^-ix·
=I_1(x)+1/(2π n^1/2)^d∫_[-δ n^1/2,δ n^1/2]^dψ^__1-()(/n^1/2)^ne^-ix·/n^1/2
=I_1(x)+I_2(x)+I_3(x),
where I_2(x) and I_3(x) are defined by changing the domain of integration to [-δ n^1/2,δ n^1/2]^d∖[-n^1/8,n^1/8]^d and [-n^1/8,n^1/8]^d.
By <cit.>, it holds that ψ^__1()<1 for all ∈[-π,π]^d∖{0} (note that the lemma assumes that _1 is centered, but this is not used in the proof). Since (,)↦φ^__1() is jointly continuous, there exists ∈(0,1) such that, for all ∈[-1,1]^d and ∈[-π,π]^d∖[-δ,δ]^d
|ψ^__1()|≤ 1-
and therefore |I_1(x)|≤ C(1-)^n.
Next, we show that |I_2(x)| decays at a stretched exponential rate. To this end, let Σ_[(_1-())(_1-())^T] be the covariance matrix. Note that since Σ_ is positive definite, all eigenvalues are positive and they depend continuously on . Hence there exists r>1 such that, for all ∈^d and ∈[-1,1]^d,
r^-1||≤ t^TΣ_ t≤ r||^2.
Furthermore, by applying <cit.> with m=3, there exists R>0 such that, for all ∈^d and ∈[-1,1]^d,
|ψ^__1-()()-1+1/2^TΣ__ e^()|≤ R||^3.
Thus, if we choose δ≤1/4rR, then for all ||≤δ n^1/2 and ∈[-1,1]^d,
|ψ__1-()(/√(n))| =|1-1/2n^TΣ_+e^(/√(n))|
≤ 1-1/2rn||^2+R/n^3/2||^3
≤ 1-r/4rn||^2.
Using the inequality 1-z≤ e^-z, we therefore get
|I_2(x)| ≤1/(2π n^1/2)^d∫_[-δ n^1/2,δ n^1/2]^d∖ [-n^1/8,n^1/8]^d|ψ^__1-()(/√(n))|^n
≤1/(2π n^1/2)^d∫_[-δ n^1/2,δ n^1/2]^d∖ [-n^1/8,n^1/8]^de^-1/4r||
≤ Ce^-1/4r n^1/8.
To conclude, we now introduce
Ĩ_3(x):=1/(2π n^1/2)^d∫_[-n^1/8,n^1/8]^de^-1/2^TΣ_-ix·/n^1/2
and show that there exists C>0 such that, for all ∈[-1,1]^d and x∈[-Mn^1/2,Mn^1/2]^d∩^d,
|I_3(x)-Ĩ_3(x)| ≤ Cn^-d/2-1/2,
C^-1n^-d/2≤Ĩ_3(x) ≤ Cn^-d/2.
Since we have proved that |I_1(x)| and |I_2(x)| are much smaller than n^-d/2 and since the error in (<ref>) is also smaller than n^-d/2, we thus conclude that P^(_n=x) is of order n^-d/2, for all ∈[-1,1]^d and x∈[-Mn^1/2,Mn^1/2]^d∩^d.
To prove (<ref>), we note that, using (<ref>), we can change the domain of integration in the definition of Ĩ_3(x) from [-n^1/8,n^1/8]^d to ^d with an error term that decays at a stretched exponential rate. On the other hand, by <cit.>, the resulting expression is equal to
1/(2π n^1/2)^d√(Σ_)e^-1/2nx^TΣ_ x.
Using again (<ref>), it is not difficult to see that this expression is of order n^-d/2, uniformly in [-Mn^1/2,Mn^1/2]^d and ∈[-1,1]^d.
To prove (<ref>), we use <cit.>, (<ref>) and (<ref>) to see that, for all ||≤ n^1/8 and ∈[-1,1]^d,
ψ^__1-()(/√(n))^n =(1-1/2n^TΣ_+e^(/√(n)))^n
=e^-1/2^TΣ_+ne^(/√(n))+O(||^2/n+ne^(/√(n))^2)
=e^-1/2^TΣ_(1+O(||^3/n^1/2+||^2/n+||^6/n^2)),
Thus
|I_3(x)-Ĩ_3(x)| ≤C/(2π n^1/2)^d∫_[-n^1/8,n^1/8]^de^-1/2^TΣ_(||^2/n+||^3/n^1/2+||^6/n^2)
≤ Cn^-d/2-1/2∫_^de^-1/2^TΣ_(||^2+||^3+||^6).
Using again (<ref>), the last integral can be bounded independently of ∈[-1,1]^d.
We have shown that P^(_n=x) is of order n^-d/2, uniformly in ∈[-1,1]^d and x∈[-Mn^1/2,Mn^1/2]^d∩^d, and it only remains to derive the conclusion in terms of the original random walk X. For n even, (<ref>) and (<ref>) immediately follow. For odd n=2k+1, we observe
P^(X_2k=x+e_1)^(X_1=e_1)≤ P^(X_2k+1=x)≤max_|y|_1=1 P^(X_2k=x+y),
so the conclusion follows since P^(X_1=y) is bounded away from zero for ∈[-1,1]^d and |y|_1=1.
§ ACKNOWLEDGEMENT
We are grateful to Ryoki Fukushima for many interesting discussions in the course of this research.
|
http://arxiv.org/abs/2307.05251v2 | 20230711133347 | A stochastic optimization approach to minimize robust density power-based divergences for general parametric density models | [
"Akifumi Okuno"
] | stat.ME | [
"stat.ME",
"stat.ML"
] |
Density power divergence (DPD) [Basu et al. (1998), Biometrika], which is designed to estimate the underlying distribution of the observations robustly against outliers, comprises an integral term of the power of the parametric density models to be estimated.
While the explicit form of the integral term can be obtained for some specific densities (such as normal density and exponential density), its computational intractability has prohibited the application of DPD-based estimation to more general parametric densities, over a quarter of a century since the proposal of DPD.
This study proposes a simple stochastic optimization approach to minimize DPD for general parametric density models and explains its adequacy by referring to conventional theories on stochastic optimization. The proposed approach also can be applied to the minimization of another density power-based γ-divergence with the aid of unnormalized models.
Keywords: robust density power divergence, general parametric densities, stochastic optimization
§ INTRODUCTION
As the presence of outliers within observations may adversely affect the statistical inference, robust statistics has been developed for several decades <cit.>.
Amongst many possible directions, the divergence-based approach, which estimates some parameters in probabilistic models by minimizing the divergence to underlying distributions, has drawn considerable attention owing to its compatibility with the probabilistically formulated problems.
Particularly, the density power divergence <cit.> simply extends the Kullback-Leibler divergence to be robust against outliers, and DPD is one of the most widely recognized divergences across disciplines.
To name a few, DPD is applied to blind source separation <cit.>, matrix factorization <cit.>, more general signal processing <cit.>, Bayesian inference <cit.>,
variational inference <cit.>, and so forth to enhance their robustness.
DPD comprises an integral term of the power of the parametric density models to be estimated.
Unfortunately, however, an explicit form of this integral term can be obtained only for specific density functions including normal density <cit.>, exponential density <cit.>, generalized Pareto density <cit.>, and Weibull density <cit.>; most of the existing literature considers only the normal density function. The computational difficulty of this integral term has prohibited the application of DPD-based estimation to general parametric density models, over a quarter of a century since the proposal of DPD.
To overcome this computational limitation, for instance, <cit.> considers matching mean functions instead of matching probability densities,
<cit.> minimizes an upper-bound of DPD for Gaussian mixture, and
<cit.> computes a finite approximation of the intractable term for Poisson distribution.
While these challenging attempts may provide individual solutions for each parametric density estimation, they cannot be generalized to the remaining countlessly many types of parametric density models, such as inverse-Gaussian and Gompertz densities (see, e.g., <cit.> for a list of statistical distributions).
Such a strong restriction on the parametric density models forces the statistical inference to suffer from unintended model misspecification, which is incongruous with estimation that is robust against “outliers”.
Therefore, a general approach to minimize the DPD for general parametric density models is greatly appreciated.
One straightforward approach to compute the integral term is to conduct a numerical integration, and we may compute gradient descent to minimize DPD. However, for parameter estimation problems, the gradient descent requires computing the numerical integration for each iteration, and its computational complexity is non-negligibly high.
Interestingly, the conventional stochastic optimization framework suggests that the gradient is not needed to be exactly computed <cit.>; for a familiar example, the gradient for each iteration is enough to be computed with only a small number of (sub-sampled) observations called minibatch, even for training deep neural networks <cit.>.
In the context of statistics and machine learning, it is known that even a log-likelihood sometimes falls into a computationally intractable form, due to the normalization constant of the probabilistic model (typically in the integral form).
Intractable maximum likelihood estimation <cit.>, contrastive divergence learning <cit.>, and their subsequent studies have been developed to optimize such an intractable likelihood by approximating the gradient.
Following a similar idea, this study proposes computing stochastic gradient descent, whose gradient of the DPD is unbiasedly estimated by a stochastic term.
See Figure <ref> for illustration, including the estimation of Gompertz and Gaussian mixture density models; the proposed approach can be applied to general parametric density models.
This study also discusses that another well-known density power-based γ-divergence <cit.> can be minimized in the same way with the aid of unnormalized models <cit.>.
§ DENSITY POWER ESTIMATOR
In this section, the background and the problem setting of this study are shown in Section <ref> and <ref>.
Our proposal, a stochastic gradient descent algorithm using an unbiased stochastic gradient is shown in Section <ref>.
§.§ Background: density power divergence and estimator
Let d ∈ℕ and let 𝒳⊂ℝ^d.
Suppose that given vectors x_1,x_2,…,x_n ∈𝒳 are independently and identically drawn from a distribution Q whose support is 𝒳⊂ℝ^d.
This study considers estimating the underlying distribution Q by a parametric distribution P_θ, using the observed vectors x_1,x_2,…,x_n.
p_θ denotes a probability density function of the distribution P_θ.
Let β>0 be a user-specified hyperparameter, typically specified as β=0.1,0.5 or β=1.
β is also called a power-parameter.
The density power divergence (DPD, also known as β-divergence; <cit.>, <cit.>, <cit.>) between the underlying distribution Q and the parametric distribution P_θ (equipped with the parameter θ∈Θ⊂ℝ^s, s ∈ℕ) is defined by
D_β(Q,P_θ):=d_β(Q,P_θ)-d_β(Q,Q),
where
d_β(Q,P_θ) = -1/β∫_𝒳 p_θ(x)^β dQ(x) + r_θ^(β), ( r_θ^(β) = 1/1+β∫_𝒳 p_θ(x)^1+β dx )
is referred to as density power cross entropy (DPCE).
While the above DPD and DPCE are defined for the continuous distribution Q, the integral should be replaced with the discrete summation if Q is a discrete distribution.
DPD can be regarded as a discrepancy measure between two distributions Q,P_θ; we can estimate the parameter θ in the model P_θ by minimizing the DPD. The DPD reduces to the well-known Kullback-Leibler divergence
D(Q,P_θ)=d(Q,P_θ)-d(Q,Q), where d(Q,P_θ)=-∫log p_θ(x) dQ(x), by taking the limit β↘ 0.
DPD has a robustness property.
Consider the case that Q is composed of the true distribution P_θ_* and the outlier distribution R, i.e.,
Q = (1-ξ) P_θ_* + ξ R
with the contamination ratio ξ > 0.
As R represents the outlier distribution, we may assume that ν^(β)(θ) := 1/β∫ p_θ(x)^β dR(x) ≥ 0 is small enough for a positive power-parameter β>0 and θ≈θ_*.
Then, it holds for θ≈θ_* that
d_β(Q,P_θ) = d_β((1-ξ)P_θ_*+ξ R,P_θ) = d_β((1-ξ)P_θ_*,P_θ) - ξν^(β)(θ) ≈ d_β((1-ξ)P_θ_*,P_θ);
that is, d_β(Q,P_θ) automatically relieves the adverse effect of the outlier distribution R, and the DPD-based estimator is expected to be closer to the underlying true parameter θ_* than the maximum likelihood estimator (MLE), which corresponds to β=0.
While the explicit form of the underlying distribution Q is not obtained in practice, the DPCE (<ref>) is empirically approximated by substituting the empirical distribution Q̂(x)=1/n∑_i=1^n1(x_i ≤ x) into the distribution Q.
The empirical DPCE is defined by
d_β(Q̂,P_θ) = -1/β1/n∑_i=1^n p_θ(x_i)^β + r_θ^(β),
and the empirical density power (DP) estimator θ̂_β is defined as
θ̂_β = argmin_θ∈Θ D_β(Q̂,P_θ) = argmin_θ∈Θ d_β(Q̂,P_θ).
The empirical DP estimator reduces to the MLE by taking the limit β↘ 0.
The aforementioned definitions and properties of DPD are explained in the vast amount of existing literature. See, e.g., <cit.>, <cit.>, <cit.>, <cit.>, and <cit.> for more details.
§.§ Problem: computational difficulty
Unfortunately, computation of the DP estimator (<ref>) is intractable for general parametric density model p_θ, as an explicit form of the integral term r_θ^(β) cannot be obtained.
Explicit form of the integral term r_θ^(β) can be obtained only for several specific parametric density models, such as normal density and exponential density (see, e.g., <cit.>); we obtain r_θ^(β)=(2πσ^2)^-β/2(1+β)^-3/2 if p_θ represents the density function of the univariate normal distribution N(μ,σ^2).
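For instance, for the univariate normal model the empirical DPCE can be evaluated exactly; a minimal Python sketch (ours, using SciPy only for the normal density) is:

```python
import numpy as np
from scipy.stats import norm

def empirical_dpce_normal(x, mu, sigma, beta):
    """Empirical density power cross entropy for N(mu, sigma^2):
    -(1/beta) * mean(p(x_i)^beta) + (2*pi*sigma^2)^(-beta/2) * (1+beta)^(-3/2)."""
    dens = norm.pdf(x, loc=mu, scale=sigma)
    r_beta = (2 * np.pi * sigma**2) ** (-beta / 2) * (1 + beta) ** (-1.5)
    return -np.mean(dens**beta) / beta + r_beta
```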
The problem here is to what extent the explicit form of the integral term r_θ^(β) can be obtained.
While the extent has not been clarified to the best of the author's knowledge, at least, the above calculation can be extended to (slightly restricted) exponential family distributions whose density function is in the form
p_θ(x) = h exp( ⟨θ, u(x)⟩ - v(θ) ).
h ≥ 0, u:𝒳→ℝ^s and v:Θ→ℝ are user-specified parameter and functions, respectively. The family of these densities includes normal, exponential, and Gamma distributions, and we obtain the explicit form of r_θ^(β) as shown in Proposition <ref>.
The proof is straightforward. See Appendix <ref>.
r_θ^(β)=1/1+βh^βexp( v((1+β) θ)-(1+β) v(θ) ) for the parametric density (<ref>).
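As a quick sanity check of the proposition (a worked verification, using one standard exponential-family parametrization of the normal density), take h=1, u(x)=(x,x^2), θ=(μ/σ^2,-1/(2σ^2)) and v(θ)=μ^2/(2σ^2)+(1/2)log(2πσ^2) in (<ref>); then (1+β)θ corresponds to the normal parameters (μ,σ^2/(1+β)), so that
v((1+β)θ)-(1+β)v(θ) = -(β/2)log(2πσ^2) - (1/2)log(1+β),
and hence
r_θ^(β) = 1/(1+β) exp( v((1+β)θ)-(1+β)v(θ) ) = (2πσ^2)^-β/2(1+β)^-3/2,
recovering the expression for the normal distribution quoted above.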
While there exist several remaining distributions whose explicit form of r_θ^(β) can be obtained (see, e.g., <cit.> for generalized Pareto distribution, and <cit.> for Weibull distribution), r_θ^(β) is in general still computationally intractable.
Following the proof shown in Appendix <ref>, we can easily observe that Proposition <ref> cannot be extended to the exponential family with non-constant base measure h(x), i.e.,
p_θ(x)=h(x) exp(⟨θ,u(x)⟩-v(θ)).
This family includes inverse-Gaussian distribution, Poisson distribution, and so forth. One possible approach to compute r_θ^(β) for such density functions is leveraging the numerical integration.
For instance, if the support of the general parametric density model p_θ(x) is ℝ, we may compute a numerical integration
1/1+β2M/N∑_i=1^N p_θ(z_i)^1+β,
with z_i=-M+2M(i-1)/(N-1). By taking the limit N,M →∞, it is expected that (<ref>) converges to r_θ^(β).
Then, we may run gradient-based optimization algorithms using the gradient of the numerical integration (<ref>).
For the discrete Poisson case, <cit.> employs a similar idea for the parameter estimation using a slightly different (but essentially the same) divergence called γ-divergence <cit.>.
However, approximating r_θ^(β) accurately requires specifying large N,M ∈ℕ.
To make matters worse, minimizing the DPD in this way requires computing the gradient of the numerical integration (<ref>) at each iteration, resulting in non-negligibly high computational complexity.
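For reference, a minimal Python sketch (ours) of this grid approximation reads:

```python
import numpy as np

def r_beta_grid(pdf, theta, beta, M=50.0, N=20_001):
    """Approximate (1/(1+beta)) * int p_theta(z)^(1+beta) dz by a Riemann sum
    on the grid z_i = -M + 2M(i-1)/(N-1); large N and M are required for an
    accurate value, which makes this costly inside a gradient loop."""
    z = np.linspace(-M, M, N)
    return (2 * M / N) * np.sum(pdf(z, theta) ** (1 + beta)) / (1 + beta)
```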
§.§ Proposal: stochastic gradient descent using unbiased stochastic gradient
To dodge the high computational complexity described in Section <ref>, this study employs a stochastic gradient descent using an unbiased stochastic gradient.
We first describe the idea in an intuitive manner.
While the approach described at the end of Section <ref> is intended to compute the “exact” integral for optimization, this study considers employing a “rough” estimate of the gradient, i.e., a stochastic (unbiased) estimator of the exact gradient.
With the stochastic gradient g_t(θ^(t)) satisfying
𝔼(g_t(θ^(t)))=∂/∂θd_β(Q̂,P_θ^(t))
(and 𝕍(g_t(θ^(t)))<∞), conventional theories prove that the parameter θ^(t) iteratively updated by stochastic gradient descent
θ^(t) = θ^(t-1) - η_t g_t-1(θ^(t-1))
(t=1,2,…,T)
with the decreasing learning rate η_t ↘ 0 yields the convergence of ∂/∂θd_β(Q̂,P_θ^(t)) to 0 (as t →∞). See Proposition <ref> for more rigorous description.
Traditionally speaking, this type of optimization algorithm has its roots in <cit.>, and a similar idea can be found in maximum likelihood estimation of computationally intractable probability models <cit.>.
A significant benefit to employing such a stochastic optimization algorithm is that we can dodge the exact computation of the gradient, which is computationally intensive due to the numerical integration (<ref>).
In DPD-based estimation, we can define an unbiased stochastic gradient g_t(θ^(t)) as follows.
Let P̃_t be a user-specified distribution, such that random numbers can be generated from P̃_t. p̃_t denotes its probability density function.
With functions w_t(y)=p_θ^(t)(y)/p̃_t(y) and t_θ(x)=∂log p_θ(x)/∂θ, we randomly generate independent m ∈ℕ samples Y^(t)_m=(y_1^(t),y_2^(t),…,y_m^(t)) from P̃_t, and define
g_t(θ^(t))
=
-1/n∑_i=1^np_θ^(t)(x_i)^βt_θ^(t)(x_i)
+
1/m∑_j=1^m
w_t(y_j^(t))
p_θ^(t)(y_j^(t))^βt_θ^(t)(y_j^(t)).
While we employ P̃_t=P_θ^(t) in our experiments (so as to obtain constant weight function w_t(y)=1) for simplicity, we may employ normal distribution or some other computationally-tractable distributions to generate Y^(t)_m.
Interestingly, (<ref>) obviously satisfies the unbiasedness assumption (<ref>) (by taking the expectation with respect to Y^(t)_m), regardless of the sample size m ∈ℕ.
Even if we employ m=1, the stochastic gradient descent equipped with (<ref>) is proved to minimize the DPD (though the optimization procedure can be slightly unstable when m ∈ℕ is excessively small); this approach greatly reduces the computational complexity, as it does not need to take the limit m →∞, unlike the numerical integration (<ref>).
Our numerical experiments in Section <ref> demonstrate that m=10 is enough for plausible computation.
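To make this concrete, the following is a minimal Python sketch (ours; the function names are illustrative, and we take P̃_t = P_θ^(t) so that the weights w_t are identically one) of the unbiased stochastic gradient and of one SGD update:

```python
import numpy as np

def stochastic_gradient(theta, x, beta, pdf, score, sampler, m=10, rng=None):
    """Unbiased estimate of the gradient of the empirical DPCE.
    pdf(x, theta): densities p_theta(x); score(x, theta): d/dtheta log p_theta(x),
    shape (len(x), dim(theta)); sampler(m, theta, rng): m draws from P_theta
    (the proposal is the current model, so the importance weights are 1)."""
    rng = np.random.default_rng() if rng is None else rng
    # data term: -(1/n) sum_i p_theta(x_i)^beta * t_theta(x_i)
    g_data = -np.mean(pdf(x, theta)[:, None] ** beta * score(x, theta), axis=0)
    # model term: (1/m) sum_j p_theta(y_j)^beta * t_theta(y_j), with y_j ~ P_theta
    y = sampler(m, theta, rng)
    g_model = np.mean(pdf(y, theta)[:, None] ** beta * score(y, theta), axis=0)
    return g_data + g_model

def sgd_step(theta, x, beta, pdf, score, sampler, lr, m=10, rng=None):
    """One iteration theta^(t) = theta^(t-1) - eta_t * g_{t-1}(theta^(t-1))."""
    return theta - lr * stochastic_gradient(theta, x, beta, pdf, score, sampler, m, rng)
```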
A slightly simpler version of the convergence theorem shown in <cit.> is summarized in Proposition <ref>.
While this study considers only the simple stochastic algorithm, several options such as variance reduction <cit.> and their theories also can be incorporated to our approach.
Let m ∈ℕ be arbitrarily fixed.
Assume that
(i) f(θ)=d_β(Q̂,P_θ) is smooth,
(ii) its gradient satisfies the Lipschitz property, i.e., ∂ f(θ)/∂θ-∂ f(θ')/∂θ≤ Lθ-θ',
(iii) 𝔼_Y^(t)_m(g_t(θ^(t)))=∂ f(θ^(t))/∂θ and
(iv) 𝔼_Y^(t)_m(g_t(θ^(t))-∂ f(θ^(t))/∂θ^2) ≤σ^2 for some σ≥ 0, for any t ∈ 1,2,…,T.
Assume that the learning rate {η_t}_t=1^T satisfies ∑_t=1^Tη_t →∞ and {∑_t=1^Tη_t}^-1∑_t=1^Tη_t^2 → 0 as T →∞.
Then, the sequence {θ^(t)} obtained by the stochastic gradient descent (<ref>) satisfies
𝔼_τ( ∂/∂θd_β(Q̂,P_θ^(τ))^2 ) → 0 (T →∞),
where 𝔼_τ represents the expectation with respect to the step τ∈{1,2,…,T} randomly chosen with the probability ℙ(τ=k | T)={2η_k-Lη_k^2}/∑_j=1^T{2η_j-Lη_j^2} (k=1,2,…,T).
While the assumptions (i)–(iv) are slightly restrictive for a general distribution P_θ, particularly in the outer region of the parameter space Θ, we may customize the functions (only in the outer region, which is not of interest in parameter estimation) so as to satisfy (i)–(iv).
In any case, conditions (i)–(iv) are needed for mathematical completeness, and the convergence is examined in our numerical experiments shown in Section <ref>. The application range and the compatibility with robust statistics of our approach are also discussed in Notes <ref> and <ref>.
[Application range of the proposed approach]
Our approach can be applied even to mixtures of inverse Gaussian, Gompertz, and other distributions, although our experiments consider only the Gaussian mixture setting (which is also considered in <cit.>).
[Compatibility of stochastic optimization with robust estimation]
It is known that the robust density power-based divergences are in general non-convex functions with respect to the model parameter θ∈Θ.
As also noted in <cit.>, the stochastic approach is compatible with such non-convex optimization; in particular, Proposition <ref> does not require convexity of the divergence.
See, e.g., <cit.> for the recent theoretical analyses of the stochastic optimization applied to non-convex functions.
In addition to the above simple optimization problem, we note that a challenging attempt to conduct a Bayesian inference (i.e., computing the posterior distribution, but not the single-point estimator θ̂_β) with robust divergences using general parametric models can be found in a very recent work <cit.>.
Its main purpose is to select the power-parameter β by applying <cit.> to robust divergence settings. Therein, the power-parameter β (but not θ) is updated by gradient descent, with the gradient stochastically approximated by sequential Monte Carlo samplers.
While it has the potential to be applied to general parametric models, it addresses a slightly different Bayesian problem.
Their numerical experiments employ only normal densities, and m=2000 Monte Carlo samples are generated to approximate the gradient in each iteration, while ours considers a much smaller sample size m (even m=3 yields plausible results in our experiments).
§ APPLICATION TO THE MINIMIZATION OF Γ-DIVERGENCE
More recently, the γ-divergence <cit.> D_γ(Q,P_θ)=d_γ(Q,P_θ)-d_γ(Q,Q) defined with the γ-cross entropy (GCE)
d_γ(Q,P_θ) = -1/γlog∫_𝒳 p_θ(x)^γ dQ(x) + 1/1+γlog∫_𝒳 p_θ(x)^1+γ dx
has attracted considerable attention <cit.>.
γ-divergence is equivalent to a pseudo-spherical score <cit.>.
γ-divergence has similar robust properties as the DPD, and it also comprises the integral of the powered density, which is in general computationally intractable.
While several optimization approaches for γ-divergence including the Majorize-Minimization algorithm <cit.> have been developed, the parametric density models are still limited to normal density or several specific ones discussed so far.
The proposed approach cannot be directly applied to the optimization of the GCE (<ref>), as the log function applied to the integral term makes the naive stochastic gradient biased.
However, our approach also can be used to minimize the GCE with the aid of unnormalized models <cit.>.
An unnormalized model is defined as a general nonnegative function f:𝒳→ℝ_≥ 0, while a probability density function must satisfy the integral constraint ∫ f(x) dx=1. <cit.> provides an important identity between minimizers of the GCE and the DPCE:
θ̂_γ = argmin_θ∈Θ d_γ(Q̂,P_θ) = argmin_θ∈Θ{min_c > 0 d_β(Q̂, cP_θ) }|_β = γ.
Therefore, stochastic gradient descent for the augmented parameter
θ̃=(θ,c), with a slightly modified version of the stochastic gradient (<ref>) for θ:
-(c^(t))^γ1/n∑_i=1^np_θ^(t)(x_i)^γt_θ^(t)(x_i)
+
(c^(t))^1+γ1/m∑_j=1^m
w_t(y_j^(t))
p_θ^(t)(y_j^(t))^γt_θ^(t)(y_j^(t))
and that for the scale parameter c>0:
-(c^(t))^γ-11/n∑_i=1^np_θ^(t)(x_i)^γ
+
(c^(t))^γ1/m∑_j=1^mw_t(y_j^(t))p_θ^(t)(y_j^(t))^γ
is expected to produce the γ-estimator θ̂_γ, which minimizes the γ-divergence.
Note that Y_m^(t)=(y_1^(t),…,y_m^(t)) are i.i.d. generated from the distribution P̃_t (i.e., no need to consider the scale parameter c>0 for the generation of Y_m^(t)).
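A minimal Python sketch (ours) of these two stochastic gradients, again with P̃_t = P_θ^(t) so that the weights w_t are identically one, is:

```python
import numpy as np

def gamma_stochastic_gradients(theta, c, x, gamma, pdf, score, sampler, m=10, rng=None):
    """Stochastic gradients of d_beta(Q_hat, c * P_theta) at beta = gamma,
    with respect to theta and to the scale c of the unnormalized model."""
    rng = np.random.default_rng() if rng is None else rng
    y = sampler(m, theta, rng)
    px, py = pdf(x, theta), pdf(y, theta)
    sx, sy = score(x, theta), score(y, theta)
    g_theta = (-c**gamma * np.mean(px[:, None]**gamma * sx, axis=0)
               + c**(1 + gamma) * np.mean(py[:, None]**gamma * sy, axis=0))
    g_c = (-c**(gamma - 1) * np.mean(px**gamma)
           + c**gamma * np.mean(py**gamma))
    return g_theta, g_c
```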
§ NUMERICAL EXPERIMENTS
This section describes the numerical experiments. To demonstrate the proposed approach, we synthetically generate n=1000 observations x_1,x_2,…,x_n from the contaminated distribution (<ref>):
Q=(1-ξ)P_θ_*+ξ R.
In particular, for the outlier distribution R we employ the normal distribution with mean μ=10 and standard deviation σ=1.
For the parametric distribution P_θ, we employ four types of parametric densities:
* Normal distribution, with the true parameter θ_*=(μ_*,σ_*)=(0,1).
* Inverse Gaussian distribution, with the true parameter θ_*=(μ_*,λ_*)=(1,3).
* Gompertz distribution, with the true parameter θ_*=(ω_*,λ_*)=(1,0.1).
* Gaussian mixture distribution, with the true parameter θ_*=(μ_1*,σ_1*,μ_2*,σ_2*,α_*)=(-5,0,1,1,0.6).
See Appendix <ref> for the definitions of the distributions (ii)–(iv).
We compute the density power estimators by conducting the stochastic gradient descent (SGD) with the stochastic gradient (<ref>). The parameters to be estimated are initialized by the maximum likelihood estimation.
The learning rate η_t is decreased by multiplying by the decay rate r=0.7 every 25 iterations; the remaining settings (T,η_0) of the SGD are (i) (500,1), (ii) (1000,1), (iii) (1000,0.5), (iv) (1000,1).
Contamination ratios are ξ=0.1 for (i) and (ii), and ξ=0.01 for (iii) and (iv).
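For reference, this step-decay schedule corresponds to the following small Python helper (ours):

```python
def learning_rate(t, eta0, decay=0.7, every=25):
    """eta_t = eta0 * decay**(t // every): multiplied by 0.7 every 25 iterations."""
    return eta0 * decay ** (t // every)
```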
Estimation results of (i)–(iv) are shown in Figures <ref>, <ref>, <ref> (Gompertz), and <ref> (Gaussian mixture), respectively.
Firstly, for the setting (i) normal distribution, we can compute the exact empirical DPCE by virtue of Proposition <ref> (while the integral terms cannot be computed for the remaining settings (ii)–(iv)).
We monitor the empirical DPCE via the SGD iterations. See Figure <ref>. We can observe that the SGD significantly reduces the DPCE.
All the DPCE values approach almost the same value regardless of the sample size m ∈ℕ, while the DPCE sequence has a larger variance if a smaller m is employed.
Also see Figure <ref>: the DPD-based estimators fit the true normal distribution robustly against the outlier distribution R=N(10,1). The estimated results do not differ much across m ∈ℕ, and larger β>0 provides more robust estimators.
For the settings (ii)–(iv), see Figures <ref>, <ref> (Gompertz), and <ref> (Gaussian mixture).
Almost the same tendency can be observed.
With the same setting as in (i), the normal distribution, we also minimize the γ-cross entropy (GCE). The settings are all the same, and the initial value of the scale parameter c>0 is specified as c=1. We computed the exact values of the empirical GCE as shown in Figure <ref>. We can observe that the GCE decreases in the same way as the empirical DPCE. The obtained distributions are almost the same as those for the DPD; the corresponding figures are omitted due to page limitations.
During the iteration, the scale parameter ĉ>0 for unnormalized models discussed in Section <ref> is also monitored. See Figure <ref>. We can observe that the scale parameter converges to the scale of the true density 1-ξ=0.9.
We last note that source codes to reproduce the experimental results are provided in <https://github.com/oknakfm/DPD>.
§ CONCLUSION
This study provided a stochastic optimization approach to minimize the robust density power divergence for general parametric density models.
This study explained its adequacy with the aid of conventional theories on stochastic gradient descent.
Our stochastic approach was also used to minimize γ-divergence with the aid of unnormalized models.
Numerical experiments demonstrated the proposed approach.
§ ACKNOWLEDGEMENT
A. Okuno was supported by JSPS KAKENHI (21K17718, 22H05106).
We would like to thank Shintaro Hashimoto, Takayuki Kawashima, Kazuharu Harada, Keisuke Yano, and Takahiro Kawashima for the helpful discussions.
§ PROOF OF PROPOSITION <REF>
As we have
p_θ(x)^1+β = h^1+βexp( (1+β) ⟨θ, u(x)⟩ - (1+β) v(θ) )
= h^1+βexp( ⟨ (1+β)θ, u(x)⟩ - v((1+β)θ) + v((1+β)θ) - (1+β) v(θ) )
= h^βexp( v((1+β)θ) - (1+β) v(θ) ) · h exp( ⟨ (1+β)θ, u(x)⟩ - v((1+β)θ) ),
where the last factor is p_(1+β)θ(x), and since p_(1+β)θ integrates to one over 𝒳, we obtain
r_θ^(β) = 1/1+β∫_𝒳 p_θ(x)^1+β dx = 1/1+β h^βexp( v((1+β)θ) - (1+β) v(θ) ).
§ DISTRIBUTIONS
* Inverse Gaussian distribution:
the probability density function is
p^_θ(x)
=
√(λ/2π x^3)exp(
-λ (x-μ)^2/2μ^2 x),
(x>0)
where μ>0 is the mean parameter and λ>0 is the shape parameter.
As we have
t_θ^(x)
=
(
λ (x-μ)/μ^3 , 1/2λ - (x-μ)^2/2μ^2 x),
θ=(μ,λ),
the maximum likelihood estimator for θ is
μ̂=1/n∑_i=1^n x_i,
λ̂=1/n^-1∑_i=1^n{x_i^-1 - μ̂^-1}.
We use an off-the-shelf random number generator to draw random numbers from the inverse Gaussian distribution.
* Gompertz distribution:
the probability density function is
p^_θ(x)
=
λexp(ω x + λ/ω{1-exp(ω x)}), (x ≥ 0)
where ω>0 is the scale parameter and λ>0 is the shape parameter.
As we have
t_θ^(x)
=
(
x -
λ( 1-exp(ω x)/ω^2 + x exp(ω x)/ω)
, 1/λ + 1-exp(ω x)/ω),
θ=(ω,λ),
the maximum likelihood estimator satisfies
λ̂ =
-ω̂/n^-1∑_j=1^n{1-exp(ω̂ x_j)},
∑_i=1^nx_i
+
1/n^-1∑_j=1^n{1-exp(ω̂x_j)}∑_i=1^n{1-exp(ω̂ x_i)/ω̂
+
x_i exp(ω̂ x_i) }
=
0.
We can numerically find ω̂ by the Newton-Raphson algorithm (see the sketch after this list), whereby we obtain λ̂.
We use an off-the-shelf random number generator to draw random numbers from the Gompertz distribution.
* Gaussian mixture distribution:
the probability density function is
p_θ^(x)
=
αϕ(x;μ_1,σ_1^2)
+
(1-α) ϕ(x;μ_2,σ_2^2),
ϕ(x;μ,σ)=1/√(2πσ^2)exp(-(x-μ)^2/2σ^2),
where α∈ [0,1] denotes the mixing coefficient.
We have
t_θ^(x)
=
(
c_1 x-μ_1/σ_1^2,
c_1 {(x-μ_1)^2/σ_1^3 - 1/σ_1},
c_2 x-μ_2/σ_2^2,
c_2 {(x-μ_2)^2/σ_2^3 - 1/σ_2},
c_3
), θ=(μ_1,σ_1,μ_2,σ_2,α)
where
c_1=αϕ(x;μ_1,σ_1^2)/p_θ^(x),
c_2=(1-α) ϕ(x;μ_2,σ_2^2)/p_θ^(x),
c_3=ϕ(x;μ_1,σ_1^2)-ϕ(x;μ_2,σ_2^2)/p_θ^(x).
We computed the maximum likelihood estimator by leveraging an existing off-the-shelf implementation.
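Referring to item (ii) above, the score equation for ω̂ can be solved numerically; the following is a minimal Python sketch (ours), which uses a bracketing root finder in place of a hand-coded Newton-Raphson iteration and assumes the bracket [omega_lo, omega_hi] contains a sign change of the score:

```python
import numpy as np
from scipy.optimize import brentq

def gompertz_mle(x, omega_lo=1e-6, omega_hi=10.0):
    """Solve the score equation for omega_hat, then plug it into lambda_hat."""
    x = np.asarray(x, dtype=float)

    def score(omega):
        s = np.mean(1.0 - np.exp(omega * x))           # n^{-1} sum_j (1 - e^{omega x_j})
        inner = np.sum((1.0 - np.exp(omega * x)) / omega + x * np.exp(omega * x))
        return np.sum(x) + inner / s

    omega_hat = brentq(score, omega_lo, omega_hi)      # assumes score changes sign on the bracket
    lam_hat = -omega_hat / np.mean(1.0 - np.exp(omega_hat * x))
    return omega_hat, lam_hat
```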
|
http://arxiv.org/abs/2307.04064v1 | 20230709000056 | Local null controllability of a class of non-Newtonian incompressible viscous fluids | [
"Pitágoras de Carvalho",
"Juan Límaco",
"Denilson Menezes",
"Yuri Thamsten"
] | math.AP | [
"math.AP",
"math.OC",
"35K55, 76D55, 93B05, 93C10"
] |
P. Carvalho, Departamento de Matemática, Universidade Estadual do Piauí, Teresina, PI, Brasil ([email protected])
J. Límaco, Instituto de Matemática e Estatística, Universidade Federal Fluminense, Niterói, RJ, Brasil ([email protected])
D. Menezes, Instituto de Matemática e Estatística, Universidade Federal Fluminense, Niterói, RJ, Brasil ([email protected])
Y. Thamsten, Instituto de Matemática e Estatística, Universidade Federal Fluminense, Niterói, RJ, Brasil ([email protected])
We investigate the null controllability property of systems that mathematically describe the dynamics of some non-Newtonian incompressible viscous flows. The principal model we study was proposed by O. A. Ladyzhenskaya, although the techniques we develop here apply to other fluids having a shear-dependent viscosity. Taking advantage of the Pontryagin Minimum Principle, we utilize a bootstrapping argument to prove that sufficiently smooth controls to the forced linearized Stokes problem exist, as long as the initial data in turn has enough regularity. From there, we extend the result to the nonlinear problem. As a byproduct, we devise a quasi-Newton algorithm to compute the states and a control, which we prove to converge in an appropriate sense. We finish the work with some numerical experiments.
Null controllability, shear dependent viscosity, nonlinear partial differential equations, non-Newtonian fluids.
[2010] 35K55, 76D55, 93B05, 93C10.
§ INTRODUCTION
Let us fix an integer N ∈{ 2,3 }, and let us take a non-empty, open, connected, and bounded subset Ω of ℝ^N with a smooth boundary ∂Ω, and a real number T>0. Henceforth, we write Q:= ]0,T[×Ω, and Σ := [0,T]×∂Ω. In general, we understand all of the derivatives figuring in this work in the distributional sense.
We interpret the set Ω as a region occupied by the particles of a fluid with a velocity field y. We represent its pressure by p, whereas v stands for a distributed control which acts as a forcing term through a given open set ω⋐Ω. We assume ω≠∅. The model comprising the subject of the current investigation is the following:
[ D y/Dt - ∇·𝒯(y,p) = χ_ω v, in Q,
∇· y = 0, in Q,
y = 0, on Σ,
y(0) = y_0, in Ω. ]
Above, the function χ_ω denotes the indicator function of ω, we define the material derivative as
Dy/Dt := y_t + ( y·∇) y,
the stress tensor, 𝒯, is given by
𝒯(y,p) := -p I + ν(∇ y) ∇ y, ν(∇ y) := ν_0 + ν_1 |∇ y|^r ,
in such a way that the constitutive law for the deviatoric stress tensor reads as
ν(∇ y)∇ y := ( ν_0 + ν_1 |∇ y|^r) ∇ y,
where
|∇ y| := [ ∑_i,j=1^N ( ∂_j y_i)^2 ]^1/2.
We remark that the three constants ν_0, ν_1, and r appearing above are strictly positive, typically with ν_0 ≫ν_1, although this assumption is not necessary in this work.
Therefore, we are focusing on the class of power-law shear-dependent fluids. Pioneers in the study of the system (<ref>)-(<ref>) were O. A. Ladyzhenskaya and J.-L. Lions, see <cit.>. Particularly, let us introduce the usual spaces we use in the mathematical analysis of fluid dynamics, i.e.,
H := { y ∈ L^2(Ω)^N : ∇· y = 0 in Ω, y· n = 0 on ∂Ω}
and
V := {y ∈ H^1_0(Ω)^N : ∇· y = 0 in Ω},
where n denotes the outward unit normal on ∂Ω. Then, the results <cit.> (cf. <cit.>) imply the following:
Let us suppose that
r > N/2 - 1.
as well as
y_0 ∈ H and χ_ω v ∈ L^q^'(0,T; V^'),
where
1/q + 1/q^' = 1, for q := r+2.
Then, the problem (<ref>)-(<ref>) admits a unique solution (y,p) such that
y ∈ L^r+2(0,T;V) ∩ L^∞(0,T;H) and p ∈ L^2(Q).
For r=1 and N=3, the system (<ref>)-(<ref>) is the simple turbulence model of Smagorinsky, see <cit.>. Since then, gradient-dependent (or shear-dependent) viscosity models of incompressible viscous fluids have attracted considerable attention from the mathematical, physical, and engineering communities. Some other works investigating the well-posedness for the model (<ref>)-(<ref>) under consideration are <cit.>. The paper <cit.> studies the energy dissipation for the Smagorinsky model. For the investigation of some regularity properties of solutions of (<ref>)-(<ref>), see <cit.> and the references therein.
On the one hand, the Navier-Stokes (NS) system of equations (corresponding to formally replacing ν_1 = 0 in (<ref>)) is deeply relevant, not only in mathematics, but for physics, engineering, and biology, see <cit.>. For standard well-posedness results, which are now classic, see <cit.>. However, even with a great effort of researchers, among the main longstanding open problems are the questions about global existence or finite-time blow-up of smooth solutions in dimension three of the incompressible Navier-Stokes (or else the Euler) equations. The system (<ref>)-(<ref>) is a generalization of the Navier-Stokes equations. From a practical perspective, as <cit.> points out, every fluid which solutions of NS decently models is at least as accurately described by those of (<ref>)-(<ref>).
On the other hand, for real-world problems, the advantage of considering the more general fluids of power-law type is not slight. In effect, as <cit.> describes, practitioners employed them to investigate problems in chemical engineering of colloids, suspensions, and polymeric fluids, see <cit.>, in ice mechanics and glaciology, see <cit.>, in blood-rheology, see <cit.>, and also in geology, see <cit.>, to name a few instances.
We briefly describe the physical meanings of the constants ν_0, ν_1, and r. Firstly, ν_0 stands for the kinematic viscosity of the fluid. If the physical variables are nondimensionalized, then ν_0^-1 is the Reynolds number of the fluid. Secondly, we can conceive the constants ν_1 and r in light of the kinetic theory of gases and the definition of a Stokesian fluid, see <cit.>. For instance, from the point of view of turbulence modeling, we have ν_1 = C_0ℓ^2, where C_0 is a model parameter and ℓ≪ 1 is a mixing length, see <cit.>. In the latter perspective, a possible derivation of the model stands on the Boussinesq assumption for the Reynolds stress, further stipulating that the eddy viscosity ν_t takes the particular form
ν_t = ν_1 |∇ y|^r,
see <cit.>. The term ν_t given by (<ref>) leads to a stabilizing effect by increasing the viscosity for a corresponding increase in the velocity field gradient, see the discussion in <cit.>; hence, we call these fluids shear-thickening.
From the viewpoint of control theory, <cit.> establishes the local null controllability for the Navier-Stokes equations under no-slip boundary conditions; later developments worth mentioning are, e.g, <cit.>. For the study of the Ladyzhenskaya-Smagorinsky model, see <cit.>. The paper <cit.> deals with a similar one-dimensional problem. Regarding local exact controllability properties for scalar equations having a locally nonlinear diffusion, some advances are <cit.>. However, although the diffusion coefficients can be functions of the state (in the case of <cit.> in a simplified form), the methods used in these works seem not enough to tackle the situation in which these coefficients depend on the gradient of the controlled solution. Furthermore, the assumptions they make rule out more general diffusions with power-law type nonlinearities. In the present work, we can circumvent all of these difficulties.
The notion of controllability we consider in this paper is defined as follows.
We say that (<ref>)-(<ref>) is locally null-controllable at time T>0 if there exists η>0 such that, for each y_0 ∈[H^5(Ω)∩ V]^N satisfying the compatibility conditions Ay_0,A^2y_0 ∈[H^1_0(Ω)]^N, as well as
y_0_H^5(Ω)^N < η,
we can find v ∈ L^2(]0,T[×ω)^N for which the corresponding velocity field y of (<ref>)-(<ref>) satisfies
y(T,x) = 0 for almost every x ∈Ω.
We now state the main theoretical result we establish in this paper.
Let us suppose r ∈{1,2} or r ⩾ 3. For each T>0, the system (<ref>)-(<ref>) is locally null-controllable at time T.
Although we stated Theorem <ref> in terms of weak solutions, our methodology yields smooth controls and transient trajectories for the nonlinear system (<ref>)-(<ref>). Namely, we will be able to prove that there is a control parameter v such that
ρ_4 v, (ζ v)_t, ζΔ v, ( ζ̃ v_t )_t, ζ̃Δ v_t, ζ̃ D^4 v ∈ L^2(Q)^N,
with a corresponding trajectory y satisfying
ρ_6∇ y, ρ_7 y_t, ρ_7 Δ y, ρ_8∇ y_t, ρ_9y_tt, ρ_9Δ y_t, ρ_10∇ y_tt, ρ_10D^3 y_t, ρ_9 D^4 y, ρ_11y_ttt, ρ_11Δ y_tt∈ L^2(Q)^N,
ρ_6 y, ρ_7 ∇ y, ρ_8y_t, ρ_9Δ y, ρ_9 ∇ y_t, ρ_9 D^3 y, ρ_10y_tt, ρ_10Δ y_t, ρ_11∇ y_tt∈ L^∞(0,T; L^2(Ω)^N),
for appropriate time-dependent positive weights ρ_4, ρ_6, ρ_7, ρ_8, ρ_9, ρ_10, ρ_11, ζ, ζ which blow up exponentially as t↑ T. For more details and the proofs, we refer to Sections <ref> and <ref>. Of course, there is a trade-off between such regularity and our requirements on the initial datum. We will comment upon questions that are related to this relation on Section <ref>.
We will prove Theorem <ref> with the aid of a local inversion-to-the-right theorem. Namely, we will introduce Banach spaces Y and Z (we provide the details in the second subsection of Section <ref>) as well as a mapping H: Y → Z, such that a solution (y,p,v) of the equation
H(y,p,v) = (0,y_0),
for a given initial data y_0 meeting the assumptions of Theorem <ref>, is a solution of the control problem, i.e., a tuple subject to (<ref>)-(<ref>) and (<ref>). We will use the inversion theorem to guarantee the existence of a local right inverse of H. For proving that H is well-defined, as well as that it enjoys suitable regularity properties, the key steps are novel high-order weighted energy estimates for a control and the solution of the linearization of the system (<ref>)-(<ref>) around the zero trajectory.
Taking advantage of the invertibility properties of DH(0,0,0), we construct the following algorithm allowing the computation of a tuple (y,p,v) solving (<ref>)-(<ref>) and (<ref>).
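In schematic terms, and under the assumption that the derivative is frozen at the origin (below, B denotes a fixed bounded right inverse of DH(0,0,0)), the iteration is of the form
(y^n+1,p^n+1,v^n+1) = (y^n,p^n,v^n) - B[ H(y^n,p^n,v^n) - (0,y_0) ], n ⩾ 0,
so that each step only requires the solution of one null controllability problem for the linearized (Stokes-type) system.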
The following local convergence result for Algorithm 1 holds.
There exists a small enough constant η > 0, as well as appropriate Banach spaces Y and Z,[We provide, in the second subsection of Section <ref>, the explicit definitions of both Y and Z.] such that, if y_0_H^5(Ω)^N < η, with y_0 satisfying the compatibility conditions of Definition <ref>, then it is possible to find κ∈]0,1[ with the following property: the relations (y^0,p^0,v^0) ∈ Y and
(y^0,p^0,v^0)-(y,p,v)_Y < κ,
imply the existence of θ∈]0,1[ for which
(y^n+1,p^n+1,v^n+1) - (y,p,v)_Y ⩽θ(y^n,p^n,v^n)-(y,p,v)_Y,
for all n⩾ 0. In particular, (y^n,p^n,v^n) → (y,p,v) in Y.
Here, we fix some notations that we will use throughout the whole paper. Firstly, C denotes a generic positive constant that may change from line to line within a sequence of estimates. In general, C depends on Ω, ω, T, ν_0, ν_1, and r. In case C begins to depend on some additional quantity a (or we want to emphasize some dependence), we write C=C(a). We will also write, for every integer k⩾ 0,
|D^k y| := [∑_i=1^N ∑_|α|=k(∂^α y_i)^2 ]^1/2,
where we used the standard multi-index notation above. We denote the standard norm of L^2(Ω)^N by ·. Finally, we set D^k y := | D^k y|.
We finish this introductory section outlining the structure of the remainder of the work.
* In Section 2, we study the linearization of (<ref>)-(<ref>) around the zero trajectory, which is a forced Stokes system. With the aid of a global Carleman estimate, we show that this system is null controllable. Assuming sufficiently regular initial data, we employ a bootstrapping argument to deduce higher regularity for the control, taking advantage of its characterization via Pontryagin's minimum principle. The higher control regularity naturally leads to higher regularity of the velocity field.
* In Section <ref>, we use a local inversion-to-the-right theorem for mappings between Banach spaces to show that the model (<ref>)-(<ref>) is locally null controllable.
* It is in Section 4 that we prove Theorem <ref>. Then, we conduct some numerical experiments to illustrate our theoretical findings.
* Finally, we conclude the work in Section <ref> with some comments and perspectives.
§ STUDY OF THE LINEARIZED PROBLEM
§.§ Some previous results
Our aim in the present Section is to establish the null controllability of the linear system:
[ Ly + ∇ p = χ_ω v + f, in Q,
∇· y = 0, in Q,
y = 0, on Σ,
y(0) = y_0, in Ω, ]
In (<ref>), we have written Ly := y_t - ν_0 Δ y. We achieve this result via a suitable Carleman inequality for the adjoint system of (<ref>); upon writing L^*φ := -φ_t - ν_0 Δφ, it reads
[ L^*φ + ∇π = g, in Q,
∇·φ = 0, in Q,
φ = 0, on Σ,
φ(T) = φ^T, in Ω. ]
In the present subsection, we fix notations that we will employ henceforth. Let us consider ω_1 ⋐ω, with ω_1 ≠∅. For the proof of the following lemma, see <cit.>.
There is a function η^0 ∈ C^2(Ω) satisfying
η^0 >0 in Ω, η^0 = 0 on ∂Ω, |∇η^0| > 0 on Ω\ω_1.
We take l ∈ C^∞([0,T]) with
l(t) ⩾ T^2/4 on [0,T/2], l(t) = t(T-t), on [T/2,T].
We define
γ(x) := e^λ(η^0(x) +mη^0_∞),
α(x) := e^5/4λ mη^0_∞ - e^λ(η^0(x) + mη^0_∞),
γ_1 := min_Ωγ, γ_2 := max_Ωγ,
α_1 := min_Ωα, α_2 := max_Ωα,
and
γ := γ/l^4, α := α/l^4.
Given C>1, m>4, there exists λ_0=λ_0(m,C)>0 such that α_2 ⩽ Cα_1, for all λ⩾λ_0.
For s,λ>0, we write
I(s,λ,φ) := s^3λ^4 ∫_Q e^-2sαγ^3|φ|^2d(t,x) + sλ^2∫_Q e^-2sαγ |∇φ|^2 d(t,x)
+ s^-1∫_Q e^-2sαγ^-1(|φ_t|^2 + |Δφ|^2 ) d(t,x).
We are ready to recall the Carleman inequality that is the key to study the null controllability of the linear system (<ref>).
There exist positive constants ŝ, λ̂ and C depending solely on Ω and ω for which the relations g ∈ L^2(Q)^N, φ^T ∈ H, λ⩾λ̂ and s ⩾ŝ(T^4 + T^8) imply
[ I(s,λ,φ) ⩽ C(1+T^2)(s^15/2λ^20∫_Q e^-4sα_1 +2sα_2(γ_2/l^4)^15/2|g|^2d(t,x); + s^16λ^40∫_0^T ∫_ω_1 e^-8sα_1 + 6sα_2(γ_2/l^4)^16|φ|^2dx dt ), ]
where φ is the solution of (<ref>) corresponding to g and φ^T.
As a consequence, we get the following Observability Inequality.
With the notations of Proposition <ref> (possibly enlarging ŝ, λ̂ and C, the latter now depending on T), we have
φ(0)^2 ⩽ C(s^15/2λ^20∫_Q e^-4sα_1 +2sα_2γ_2^15/2|g|^2d(t,x) + s^16λ^40∫_0^T ∫_ω_1 e^-8sα_1 + 6sα_2γ_2^16|φ|^2dx dt ).
From now on, we fix λ = λ̂ and s=ŝ. Moreover, in view of Remark <ref>, given γ > 0, we can take λ̂ = λ̂(γ) large enough in such a way that
α_2 < (1+γ)α_1.
Whenever we need (<ref>) in subsequent estimates, for a suitable positive real number γ, we will assume it holds in all that follows.
For p,q,r ∈ℝ, we introduce the weights
μ_p,q,r(t):= exp{psα_1 l^-4(t) }exp{qsα_2 l^-4(t) } l^r(t).
Regarding these weights, it is valuable to note:
Let p,p_1,p_2,q,q_1,q_2,r,r_1,r_2 be nine real numbers.
(a) One has the equality μ_p_1,q_1,r_1μ_p_2,q_2,r_2 = μ_p_1+p_2,q_1+q_2,r_1+r_2. In particular, for integral k, μ_p,q,r^k = μ_kp,kq,kr.
(b) There exists a constant C>0 such that
|d/dtμ_p,q,r|⩽ Cμ_p,q,r-5,
|d/dt(μ_p,q,r^2) |⩽ C μ_p,q,r-5/2^2.
(c) There exists a constant C>0 such that μ_p_1,q_1,r_1⩽ Cμ_p_2,q_2,r_2 if, and only if,
p_1α_1 + q_1α_2 = p_2α_1 + q_2α_2 and r_1 ⩾ r_2,
or
p_1α_1 + q_1α_2 < p_2α_1 + q_2α_2.
We define the weights
ρ_0 := μ_0,1,6,
ρ_1 := μ_0,1,2,
ρ_2 := μ_0,1,-2,
ρ_3 := μ_2,-1,15,
and
ρ_4 := μ_4,-3,32.
With these notations, we can gather Proposition <ref> and Corollary <ref> together, resulting in the following statement.
There is a constant C=C(Ω,ω,s,λ,m,T)>0 such that the solution φ of (<ref>) corresponding to g ∈ L^2(Q)^N and φ^T ∈ H satisfies
[ φ(0)^2 + ∫_Q[ ρ_0^-2|φ|^2 + ρ_1^-2|∇φ|^2 + ρ_2^-2(|φ_t|^2 + |Δφ|^2 )]d(t,x); ⩽ C(∫_Qρ_3^-2|g|^2d(t,x) + ∫_0^T ∫_ω_1ρ_4^-2|φ|^2 dx dt ). ]
§.§ Null controllability of the linear system
We suppose y_0 ∈ H, ρ_0 f ∈ L^2(Q)^N. Then there exist controls v ∈ L^2(]0,T[×ω)^N such that the state y of (<ref>) corresponding to v, f and y_0 satisfies
∫_Q ρ_3^2|y|^2d(t,x) + ∫_0^T∫_ωρ_4^2|v|^2dx dt ⩽ Cκ_0(y_0,f),
where
κ_0(y_0,f) := y_0_H^2 + ∫_Q ρ_0^2|f|^2d(t,x).
In particular, y(T) = 0 almost everywhere in Ω.
We define P_0 := { (w,σ) ∈ C^2(Q)^N+1 : ∇· w ≡ 0, w|_Σ≡ 0, ∫_Ωσ dx = 0 }, we take χ∈ C^∞_c(ω), with 0 ⩽χ⩽ 1, χ|_ω_1≡ 1, and we consider on P_0 the continuous bilinear form
b((w,σ),(w,σ)) := ∫_Q {ρ_3^-2(L^*w +∇σ)·(L^* w + ∇σ) + χρ_4^-2w·w} d(t,x).
By Corollary <ref>, b is an inner product on P_0. Let us denote by P the completion of P_0 with respect to the norm induced by b(·,·). We also deduce, from the corollary we just mentioned, that the linear form
Λ : (w,σ) ∈ P ⟼∫_Ω y_0· w(0) dx + ∫_Q f· w d(t,x) ∈ℝ
is continuous, with
|Λ(w,σ)| ⩽ Cκ_0(y_0,f)^1/2b((w,σ),(w,σ))^1/2.
The Riesz representation theorem guarantees the existence of a unique (φ,π) ∈ P for which
Λ(w,σ) = b((w,σ),(φ,π)) (for all (w,σ) ∈ P).
Upon taking (w,σ) = (φ,π) above, we get
b((φ,π),(φ,π)) = Λ(φ,π) ⩽ Cκ_0(y_0,f)^1/2b((φ,π),(φ,π))^1/2,
whence
b((φ,π),(φ,π)) ⩽ Cκ_0(y_0,f).
Let us set
y:= ρ_3^-2(L^*φ + ∇π), z:= ρ_4^-2φ, v:= -χ z.
We observe that (v) ⊆ω, that (y,v) is a solution of (<ref>) corresponding to the datum y_0 and f, and applying Corollary <ref> once more,
∫_Q ρ_3^2|y|^2d(t,x) + ∫_0^T ∫_ωρ_4^2|v|^2 dx dt ⩽ Cb((φ,π),(φ,π)) ⩽ Cκ_0(y_0,f).
This proves the theorem.
§.§ Weighted energy estimates
Throughout this subsection, we let y_0 ∈ H, ρ_0 f ∈ L^2(Q)^N, and we denote by (v,y) the control-state pair constructed in the proof of Theorem <ref>.
Let us define ρ_6 := μ_1,-1/2,35/2 and ρ_7 := μ_1,-1/2,20. We have
sup_[0,T]( ∫_Ωρ_6^2 |y|^2 dx) + ∫_Q ρ_6^2 |∇ y|^2 d(t,x) ⩽ Cκ_0(y_0,f),
and, if y_0 ∈ H^1_0(Ω)^N, then
∫_Q ρ_7^2(|y_t|^2+|Δ y|^2 )d(t,x) + sup_[0,T](∫_Ωρ_7^2|∇ y|^2 dx ) ⩽ Cκ_1(y_0,f),
where
κ_1(y_0,f) := y_0_H^1_0(Ω)^N^2 + ∫_Qρ_0^2 |f|^2d(t,x).
For each n ⩾ 1, let v_n(t, ·), f_n(t, ·) and y_0,n(·) be the projections of v(t, ·), f(t, ·) and y_0(·) onto the span of the first n eigenfunctions of the Stokes operator A: D(A) → H, respectively. Let us denote by y_n the corresponding solution of the finite-dimensional approximate forced Stokes system. For simplicity, unless we state otherwise, we omit the subscript n throughout the current proof. Moreover, we emphasize that we can take all of the constants C appearing below to be independent of n.
Using ρ_6^2y as a test function in system (<ref>), and doing some integrations by parts, we derive the identity
1/2d/dt(∫_Ωρ_6^2 |y|^2dx ) + ν_0 ∫_Ωρ_6^2 |∇ y|^2 dx = ∫_ωρ_6^2 v· y dx + ∫_Ωρ_6^2 f· y dx
+ 1/2∫_Ωd/dt(ρ_6^2 ) |y|^2dx.
From (<ref>) and Remark <ref>, item (c), we have ρ_6 ⩽ Cρ_4 ⩽ Cρ_3 ⩽ Cρ_0, whence
∫_ωρ_6^2 v· y dx ⩽ C(∫_ωρ_4^2 |v|^2 dx + ∫_Ωρ_3^2 |y|^2 dx ),
and
∫_Ωρ_6^2 f· y dx ⩽ C(∫_Ωρ_0^2 |f|^2 dx + ∫_Ωρ_3^2 |y|^2 dx ).
From Remark <ref>, item (b), we have |d/dt(ρ_6^2)| ⩽ Cρ_3^2, from where it follows that
∫_Ωd/dt( ρ_6^2 )|y|^2dx ⩽ C∫_Ωρ_3^2|y|^2 dx.
Using (<ref>), (<ref>) and (<ref>) in (<ref>), and applying Gronwall's inequality together with (<ref>), we infer (<ref>).
Henceforth, we will tacitly apply (<ref>) and Remark <ref>.
Now, we use ρ_7^2(y_t-ν_0 A y) as a test function in (<ref>), from where we easily derive that
[ ∫_Ωρ_7^2(|y_t|^2 + ν_0^2|Δ y|^2 )dx + 1/2d/dt(∫_Ωρ_7^2|∇ y|^2 dx ) = ∫_ωρ_7^2 v·(y_t-ν_0A y)dx; + ∫_Ωρ_7^2 f·(y_t-ν_0A y)dx + 1/2∫_Ωd/dt( ρ_7^2 )|∇ y|^2 dx. ]
We observe that, for any ϵ>0,
∫_ωρ_7^2 v·(y_t-ν_0A y)dx ⩽C/ϵ∫_ωρ_4^2|v|^2dx + Cϵ[∫_Ωρ_7^2(|y_t|^2 + |Δ y|^2)dx ],
∫_Ωρ_7^2 f·(y_t-ν_0A y)dx ⩽C/ϵ∫_Ωρ_0^2|f|^2dx + Cϵ[∫_Ωρ_7^2(|y_t|^2 + |Δ y|^2)dx ],
∫_Ωd/dt( ρ_7^2)|∇ y|^2 dx ⩽ C∫_Ωρ_6^2 |∇ y|^2 dx.
We take ϵ sufficiently small, in such a way that that the terms involving y in (<ref>) and (<ref>) are absorbed by the left-hand side of (<ref>). Also, from (<ref>) and (<ref>), the time integral of the third term in the right-hand side of (<ref>) is bounded by Cκ_0(y_0,f). Thus, it suffices to apply Gronwall's Lemma to conclude (<ref>) for the Galerkin approximates y_n instead of the actual solution y. Employing standard limiting arguments, as n →∞, we conclude that (<ref>) does hold for the actual solution y.
(a) If ζ := μ_-1,1,0, then
ζ v ∈ L^2(0,T;H^2(ω)∩ H^1_0(ω))∩ C([0,T];V), (ζ v)_t ∈ L^2(]0,T[×ω)^N,
with the estimate
∫_0^T ∫_ω[|(ζ v)_t|^2 + |ζΔ v|^2 ]dx dt + sup_[0,T]ζ v_V^2 ⩽ Cκ_0(y_0,f).
(b) Let us also assume that y_0 ∈ H^1_0(Ω)^N. For ζ̃ := μ_-1,1,5, we have the memberships
(ζ̃ v_t)_t ∈ L^2(]0,T[×ω)^N, ζ̃ v_t ∈ L^2(0,T;[H^2(ω)∩ H^1_0(ω)]^N),
ζ̃ v ∈ L^2(0,T;[H^4(ω)∩ H^1_0(ω)]^N),
and the following inequality holds
∫_0^T ∫_ω[|(ζ̃ v_t)_t|^2 + |ζ̃Δ v_t|^2 + |ζ̃Δ^2v|^2 ] dx dt ⩽ Cκ_1(y_0,f).
(a) For p,q,r ∈ℝ, we notice that
L^*(μ_p,q,rz) = -d/dt(μ_p-8,q+6,r-64)φ + μ_p-4,q+4,r-34y - μ_p-8,q+6,r-64∇π.
Choosing p=-1, q=1, and r=0, it follows that
| d/dt(μ_p-8,q+6,r-64)| ⩽ Cρ_0^-1, μ_p-4,q+4,r-34⩽ Cρ_3, μ_p-8,q+6,r-64⩽ C.
Thus, u:= ζ z and π := μ_-9,7,-64π solve the Stokes equation
Lu + ∇π = h, in Q,
∇· u = 0, in Q,
u = 0, on Σ,
u(T) = 0, in Ω,
where
h := -d/dt(μ_-9,7,-64)φ + μ_-5,5,-34y ∈ L^2(Q)^N.
By standard regularity results for solutions of the Stokes system, we can infer the stated regularity for ζ v = -χ u.
(b) As in the previous item, for p,q,r ∈ℝ, we derive
[ L^*(μ_p,q,rz_t) = -φd/dt[μ_p,q,rd/dt(μ_-8,6,-64) ] + yμ_p+4,q-2,r+64d/dt(μ_-8,6,-64); - d/dt(μ_p-8,q+6.r-64)φ_t + μ_p-4,q+4,ry_t + μ_p-8,q+6,r-64d/dt(μ_4,-2,64) y; - μ_p-8,q+6,r-64∇π_t. ]
For the choice p=-1, q=1, r=5, it is straightforward to check the inequalities
|d/dt[μ_p,q,rd/dt(μ_-8,6,-64) ] | ⩽ Cμ_p-8,q+7,r-58ρ_0^-1 = Cμ_-9,8,-53ρ_0^-1⩽ Cρ_0^-1,
|μ_p+4,q-2,r+64d/dt(μ_-8,6,-64) | ⩽ Cμ_p-14,q+6,r-152ρ_3 = Cμ_-15,7,-147ρ_3 ⩽ Cρ_3,
|d/dt(μ_p-8,q+7,r-64) | ⩽ Cμ_p-8,q+7,r-71ρ_2 = Cμ_-9,8,-66ρ_2 ⩽ Cρ_2,
μ_p-4,q+4,r = μ_p-6,q+5,r-20ρ_7 = μ_-7,6,-15ρ_7 ⩽ Cρ_7,
|μ_p-8,q+6,r-64d/dt(μ_4,-2,64) | ⩽ Cμ_p-6,q+5,r-37ρ_3 = Cμ_-7,6,-32ρ_3 ⩽ Cρ_3,
and
μ_p-8,q+6,r-64 = μ_-9,7,-59⩽ C.
We can conclude by arguing similarly as for the first two memberships and the corresponding estimates. The third ones are obtained by carrying out the same analysis for the term L^*(ζ̃Δ z).
Let us set ρ_8 := ζ = μ_-1,1,0 and ρ_9 := μ_-1,1,5/2. Supposing y_0 ∈ H^2(Ω)^N∩ V, Ay_0 ∈[H^1_0(Ω)]^N, ρ_8f_t ∈ L^2(Q)^N, we have the following estimates:
sup_[0,T](∫_Ωρ_8^2|y_t|^2dx ) + ∫_Q ρ_8^2|∇ y_t|^2 d(t,x) ⩽ Cκ_2(y_0,f).
If furthermore y_0 ∈ H^3(Ω)^N, f(0) ∈ H^1_0(Ω)^N,
∫_Q ρ_9^2(|y_tt|^2 + |Δ y_t|^2 )d(t,x) + sup_[0,T][∫_Q ρ_9^2(|∇ y_t|^2 + |Δ y|^2 )dx ] ⩽ Cκ_3(y_0,f),
where
κ_2(y_0,f):= y_0_H^2(Ω)^N^2 + ∫_Qρ_0^2 |f|^2d(t,x) + ∫_Q ρ_8^2|f_t|^2 d(t,x)
and
κ_3(y_0,f) := y_0_H^3(Ω)^N^2 + ∫_Qρ_0^2 |f|^2d(t,x) + ∫_Q ρ_8^2|f_t|^2 d(t,x) + f(0)_H^1_0(Ω)^N^2.
We establish the current estimates by following the same approach as in the proof of Lemma <ref>. Here, we begin by differentiating the system (<ref>) with respect to time, and we use ρ_8^2 y_t as a test function:
1/2d/dt(∫_Ωρ_8^2|y_t|^2 dx ) + ν_0 ∫_Ωρ_8^2 |∇ y_t|^2 dx = ∫_ωρ_8^2 v_t· y_t dx + ∫_Ωρ_8^2 f_t· y_t dx
+ 1/2∫_Ωd/dt(ρ_8^2)|y_t|^2 dx.
We note that
| d/dt(ρ_8^2) | ⩽ Cρ_7^2, ρ_8 ⩽ Cρ_7 ⩽ Cρ_4;
hence,
∫_ωρ_8^2 v_t· y_t dx ⩽ C[∫_ω(|ρ_4 v|^2 + |(ζ v)_t|^2 )dx + ∫_Ωρ_7^2 |y_t|^2 dx ],
∫_Ωρ_8^2 f_t · y_t dx ⩽ C(∫_Ωρ_8^2 |f_t|^2 dx + ∫_Ωρ_7^2|y_t|^2 dx ).
By Lemmas <ref> and <ref>,
∫_Qρ_7^2 |y_t|^2 d(t,x) + ∫_0^T∫_ω(|ρ_4 v|^2 + |(ζ v)_t|^2 )dx dt ⩽κ_1(y_0,f),
so that by using (<ref>) and (<ref>) in (<ref>), then integrating in time and applying of Gronwall's lemma, it follows that
sup_[0,T](∫_Ωρ_8^2|y_t|^2 dx) + ∫_Q ρ_8^2|∇ y_t|^2 d(t,x) ⩽ C (y_t(0)_L^2(Q)^N^2 +∫_Q ρ_8^2|f_t|^2d(t,x) .
+ κ_1(y_0,f) ).
It is simple to infer the subsequent estimate:
y_t(0)_L^2(Q)^N^2 ⩽y_0_H^2(Ω)^N^2 + f(0)_L^2(Q)^N^2 + v(0)_L^2(]0,T[×ω)^N^2 ⩽κ_2(y_0,f).
Relations (<ref>) and (<ref>) imply (<ref>).
Next, we use ρ_9^2(y_tt-ν_0A y_t) as a test function in the system (<ref>) differentiated with respect to time to deduce
[ ∫_Ωρ_9^2 (|y_tt|^2 + ν_0^2|Δ y_t|^2 )dx + ν_0d/dt( ∫_Ωρ_9^2 |∇ y_t|^2 dx ) = ∫_ωρ_9^2 v_t· (y_tt-ν_0A y_t) dx; + ∫_Ωρ_9^2 f_t· (y_tt-ν_0A y_t)dx + ν_0∫_Ωd/dt(ρ_9^2 )|∇ y_t|^2 dx ]
We observe that
∫_Ωd/dt(ρ_9^2) |∇ y_t|^2dx ⩽ C∫_Ωρ_8^2|∇ y_t|^2 dx,
and for each ϵ > 0,
∫_ωρ_9^2 v_t· (y_tt-ν_0A y_t)dx ⩽ C[ 1/ϵ∫_ωζ^2 |v_t|^2 dx + ϵ∫_Ωρ_9^2( |y_tt|^2 + |Δ y_t|^2 ) dx ],
as well as
∫_Ωρ_9^2f_t · (y_tt-ν_0A y_t)dx ⩽ C[1/ϵ∫_Ωρ_8^2|f_t|^2 dx + ϵ∫_Ωρ_9^2(|y_tt|^2 + |Δ y_t|^2)dx ].
We fix a sufficiently small ϵ, whence the second terms within the brackets in the right-hand sides of (<ref>) and (<ref>) are absorbed by the left-hand side of (<ref>). Then, using (<ref>) in (<ref>), we infer
[ ∫_Ωρ_9^2 (|y_tt|^2 + |Δ y_t|^2 )dx + d/dt( ∫_Ωρ_9^2 |∇ y_t|^2 dx ) ⩽ C(∫_ωζ^2 |v_t|^2 dx + ∫_Ωρ_8^2 |f_t|^2dx; + ∫_Ωρ_8^2 |∇ y_t|^2 dx ). ]
Employing Gronwall's lemma in (<ref>), we obtain
∫_Q ρ_9^2 (|y_tt|^2 + |Δ y_t|^2)d(t,x) + sup_[0,T](∫_Ωρ_9^2|∇ y_t|^2 dx ) ⩽ C(∇ y_t(0)_L^2(Ω)^N^2 + κ_2(y_0,f) ).
We easily establish, with the aid of item (a) of Lemma <ref>, that
∇ y_t(0)_L^2(Ω)^N^2 ⩽ C(y(0)_H^3(Ω)^N^2 + ∇ v(0)_L^2(ω)^N^2 + ∇ f(0)_L^2(Ω)^N^2 ) ⩽ Cκ_3(y_0,f),
whence
∫_Q ρ_9^2 (|y_tt|^2 + |Δ y_t|^2)d(t,x) + sup_[0,T](∫_Ωρ_9^2|∇ y_t|^2 dx ) ⩽ Cκ_3(y_0,f).
Finally, we use ρ_9^2Δ y_t in the undifferentiated partial differential equation of system (<ref>) as a test function to get
[ ∫_Ωρ_9^2 |∇ y_t|^2 dx + ν_0/2d/dt(∫_Ωρ_9^2 |Δ y|^2 dx ); = ∫_ωρ_9^2 v·Δ y_t dx + ∫_Ωρ_9^2 f·Δ y_t dx + ν_0/2∫_Ωd/dt(ρ_9^2)|Δ y|^2 dx; ⩽ C(∫_ωρ_4^2 |v|^2 dx + ∫_Ωρ_0^2 |f|^2 dx + ∫_Ωρ_9^2 |Δ y_t|^2 dx + ∫_Ωρ_7^2|Δ y|^2 dx). ]
We use Gronwall's lemma in (<ref>), in such a way that
sup_[0,T](∫_Ωρ_9^2 |Δ y|^2 dx ) ⩽ Cκ_3(y_0,f).
From the estimates (<ref>) and (<ref>), together with the compatibility condition Ay_0 ∈[H^1_0(Ω)]^N, we derive (<ref>).
We write ρ_10:= ζ = μ_-1,1,5 and ρ_11 := μ_-1,1,15/2. Let us assume that y_0 ∈ H^4(Ω)^N∩ V, Ay_0, A^2y_0 ∈[H^1_0(Ω)]^N, ρ_9 Δ f ∈ L^2(Q)^N, ρ_10 f_tt∈ L^2(Q)^N, ρ_10f_t ∈ L^2(0,T; H^1_0(Ω)^N), f(0) ∈[H^2(Ω)∩ H^1_0(Ω)]^N and f_t(0)∈ L^2(Ω). Then, the following estimate holds
[ sup_[0,T][∫_Ωρ_10^2 (|y_tt|^2 + |Δ y_t|^2 )dx + ρ_9 y_H^3(Ω)^N^2 ]; + ∫_Q (ρ_10^2 |∇ y_tt|^2 +ρ_10^2|D^3 y_t|^2 + ρ_9^2|D^4 y|^2 ) d(t,x) ⩽ Cκ_4(y_0,f). ]
If, furthermore, y_0 ∈ H^5(Ω)^N, f(0) ∈ H^3(Ω)^N, Af(0) ∈ V, and f_t(0) ∈ H^1_0(Ω)^N, then
sup_[0,T](∫_Ωρ_11^2 |∇ y_tt|^2 dx ) + ∫_Qρ_11^2(|y_ttt|^2 + |Δ y_tt|^2 )d(t,x) ⩽ C κ_5(y_0,f),
where we have written
κ_4(y_0,f) := ∫_Q(ρ_9^2 |Δ f|^2 +ρ_10^2|∇ f_t|^2 + ρ_10^2|f_tt|^2) d(t,x)
+ y_0_H^4(Ω)^N^2 + f(0)_H^2(Ω)^N^2 + f_t(0)_L^2(Ω)^N^2 + κ_3(y_0,f),
κ_5(y_0,f) := y_0_H^5(Ω)^N^2 + f(0)_H^3(Ω)^N^2 +f_t(0)_H^1_0(Ω)^N^2 + κ_4(y_0,f).
Again, we proceed in the same framework as in the proof of Lemma <ref>. We begin by applying the Stokes operator A to the equation of system (<ref>), and then use -ρ_9^2A^2 y as a test function:
[ ν_0∫_Ωρ_9^2|Δ^2 y|^2 dx = -∫_ωρ_9^2 Δ v · A^2 y dx - ∫_ΩΔ f · A^2 y dx + ∫_Ωρ_9^2 Δ y_t · A^2 y dx; ⩽ C(∫_ωζ^2|Δ v|^2 dx + ∫_Ωρ_9^2 |Δ f|^2 dx + ∫_Ωρ_9^2 |Δ y_t|^2 dx ) + 1/2∫_Ωρ_9^2 |Δ^2 y|^2 dx, ]
We integrate (<ref>) with respect to time, whence
∫_Q ρ_9^2 |Δ^2 y|^2 d(t,x) ⩽ Cκ_4(y_0,f).
We can now easily argue that, under suitable limiting arguments (having in view the compatibility conditions we required in the statement of the present lemma), Eq. (<ref>) yields the corresponding estimate for the solution of (<ref>). We observe that the relations ρ_9A y_t ∈ L^2(Q)^N and ρ_9A^2 y ∈ L^2(Q)^N imply ρ_9 y ∈ L^∞(0,T; H^3(Ω)^N), with
sup_[0,T]ρ_9 y_H^3(Ω)^N^2 ⩽ C∫_Q ρ_9^2 ( |Δ y_t|^2 + |Δ^2 y|^2 )d(t,x) ⩽ Cκ_4(y_0,f).
In the differential equation of system (<ref>) differentiated once with respect to time, we use the test function ρ_10^2 A^2 y_t:
1/2d/dt(∫_Ωρ_10^2 |Δ y_t|^2 dx ) + ν_0∫_Ωρ_10^2 |∇Δ y_t|^2 dx
= ∫_ωρ_10^2 ∇ v_t : ∇Δ y_t dx + ∫_Ωρ_10^2 ∇ f_t : ∇Δ y_tdx
+ 1/2∫_Ωd/dt( ρ_10^2 ) |Δ y_t|^2 dx.
For ϵ > 0,
[ ∫_ωρ_10^2 ∇ v_t : ∇Δ y_tdx ⩽ C_ϵ∫_ωζ^2 |∇ v_t|^2 dx + ϵ∫_Ωρ_10^2|∇Δ y_t|^2 dx; ⩽ C_ϵ∫_ω( |(ζv_t)_t|^2 + |(ζ v)_t|^2 + ρ_4^2|v|^2)dx + ϵ∫_Ωρ_10^2 |∇Δ y_t|^2 dx, ]
[ ∫_Ωρ_10^2 ∇ f_t: ∇Δ y_t dx ⩽ C_ϵ∫_Ωρ_10^2 |f_tt|^2 dx + ϵ∫_Ωρ_10^2 |∇Δ y_t|^2 dx , ]
∫_Ωd/dt( ρ_10^2)|Δ y_t|^2 dx ⩽ C ∫_Ωρ_9^2 |Δ y_t|^2,
and
[ Δ y_t(0)^2 ⩽ C(y_0_H^4(Ω)^N^2 + Δ v(0)_L^2(ω)^N^2 + Δ f(0)^2 ); ⩽ Cκ_4(y_0,f). ]
Therefore, by taking ϵ sufficiently small, and using (<ref>)-(<ref>) in (<ref>), we deduce
sup_[0,T](∫_Ωρ_10^2 |Δ y_t|^2dx ) + ∫_Q ρ_10^2 |∇Δ y_t|^2 d(t,x) ⩽ Cκ_4(y_0,f).
Next, we differentiate the equation of system (<ref>) twice with respect to time and we use the test function ρ_10^2 y_tt:
1/2d/dt(∫_Ωρ_10^2 |y_tt|^2 dx ) + ν_0∫_Ωρ_10^2 |∇ y_tt|^2 dx = ∫_ωρ_10^2 v_tt· y_tt dx + ∫_Ωρ_10^2 f_tt· y_ttdx
+ 1/2∫_Ωd/dt( ρ_10^2 ) |y_tt|^2 dx.
We have
[ ∫_ωρ_10^2 v_tt· y_ttdx ⩽ C(∫_ωζ^2 |v_tt|^2 dx + ∫_Ωρ_9^2|y_tt|^2 dx ); ⩽ C[ ∫_ω( |(ζv_t)_t|^2 + |(ζ v)_t|^2 + ρ_4^2|v|^2)dx + ∫_Ωρ_9^2 |y_tt|^2 dx], ]
[ ∫_Ωρ_10^2 f_tt· y_tt dx ⩽ C(∫_Ωρ_10^2 |f_tt|^2 dx + ∫_Ωρ_9^2 |y_tt|^2 dx ), ]
∫_Ωd/dt( ρ_10^2)|y_tt|^2 dx ⩽ C ∫_Ωρ_9^2 |y_tt|^2,
and
[ y_tt(0)^2 ⩽ C(y_0_H^4(Ω)^N^2 + Δ v(0)_L^2(ω)^N^2 + Δ f(0)^2 + v_t(0)_L^2(ω)^N^2 + f_t(0)^2 ); ⩽ Cκ_4(y_0,f). ]
Using (<ref>), (<ref>) and (<ref>) in (<ref>), then integrating in time and using (<ref>), we infer
sup_[0,T](∫_Ωρ_10^2|y_tt|^2 dx ) + ∫_Q ρ_10^2 |∇ y_tt|^2 d(t,x) ⩽ Cκ_4(y_0,f).
Estimates (<ref>), (<ref>), (<ref>) and (<ref>) are enough to conclude (<ref>).
Now, we use ρ_11^2(y_ttt - ν_0 A y_tt) as a test function in the equation of system (<ref>) twice differentiated in time, reaching
[ ∫_Ωρ_11^2|y_ttt|^2 dx + ν_0^2 ∫_Ωρ_11^2|Δ y_tt|^2 dx + ν_0 d/dt(∫_Ωρ_11^2 |∇ y_tt|^2 dx ); = ∫_ωρ_11^2 v_tt· (y_ttt-ν_0A y_tt) dx + ∫_Ωρ_11^2 f_tt· (y_ttt-ν_0 A y_tt)dx; + ν_0 ∫_Ωd/dt(ρ_11^2)|∇ y_tt|^2 dx. ]
For ϵ > 0,
[ ∫_ωρ_11^2 v_tt· (y_ttt-ν_0A y_tt) dx ⩽ C[1/ϵ∫_ω(|(ζv_t)_t|^2 + |(ζ v)_t|^2 + ρ_4^2|v|^2)dx; + ϵ∫_Ωρ_11^2( |y_ttt|^2 + |Δ y_tt|^2 )dx ], ]
∫_Ωρ_11^2 f_tt· (y_ttt-ν_0 A y_tt)dx ⩽ C[1/ϵ∫_Ωρ_10^2 |f_tt|^2 dx + ϵ∫_Ωρ_11^2( |y_ttt|^2 + |Δ y_tt|^2 )dx],
and we also notice that
∫_Ωd/dt(ρ_11^2)|∇ y_tt|^2 dx ⩽ C ∫_Ωρ_10^2 |∇ y_tt|^2 dx.
We easily check that
∇ y_tt(0)^2 ⩽ Cκ_5(y_0,f).
Arguing as in the proof of (<ref>) (cf. inequalities (<ref>)-(<ref>)), choosing a sufficiently small positive ϵ, and using the previous estimates, we can infer from (<ref>) the subsequent inequality
∫_Q ρ_11^2 ( |y_ttt|^2 +|Δ y_tt|^2) d(t,x) + sup_[0,T](∫_Ωρ_11^2 |∇ y_tt|^2 dx ) ⩽ Cκ_5(y_0,f).
Estimate (<ref>) is precisely (<ref>); hence, we have finished the proof of the present result.
§ NULL CONTROLLABILITY OF THE MODEL (<REF>)
§.§ Local right inversion theorem
It is possible to find a proof of the subsequent result in <cit.>. This is the inversion theorem that we will use to obtain our local null controllability result.
Let Y and Z be two Banach spaces, and H : Y → Z be a continuous function, with H(0) = 0. We assume that there are three constants δ,η^',M > 0 and a continuous linear mapping Λ from Y onto Z with the following properties:
(i) For all e ∈ Y, we have
e_Y ⩽ MΛ(e)_Z;
(ii) The constants δ and M satisfy δ < M^-1;
(iii) Whenever e_1,e_2 ∈ B_Y(0;η^'), the inequality
H(e_1) - H(e_2) - Λ(e_1-e_2)_Z ⩽δe_1-e_2_Y
holds.
Then, whenever k ∈ B_Z(0;η), the equation H(e) = k has a solution e ∈ B_Y(0;η^'), where η:= (M^-1 - δ)η^'.
A typical way of verifying condition (iii) is through the remark presented below.
Let Y and Z be two Banach spaces, and let us consider a continuous mapping H:Y→ Z, with H(0)=0, of class C^1(Y,Z). Then it has property (iii), for any positive δ subject to (ii), as long as we take η^' as the continuity constant of DH ∈ℒ(Y,ℒ(Y,Z)) at the origin of Y.
§.§ The setup
Let us set
X_1 := L^2(Q;ρ_3^2)^N, X_2 := L^2(]0,T[×ω; ρ_4^2)^N.
We define
[ Y := { (y,p,v) ∈ X_1 × L^2(Q) × X_2 : y_t ∈ L^2(Q)^N, ∇ y ∈ L^2(Q)^N× N,; (ζ v)_t, ζΔ v, (ζ v_t)_t, ζΔ v_t, ζD^4 v ∈ L^2(]0,T[×ω)^N,; for f:= Ly + ∇ p - χ_ω v, ρ_0 f, ρ_8 f_t, ρ_9 Δ f, ρ_10f_tt∈ L^2(Q)^N,; ρ_10f_t ∈ L^2(0,T; H^1_0(Ω)^N), f(0) ∈[H^3(Ω)∩ H^1_0(Ω)]^N,; Af(0) ∈ H^1_0(Ω)^N, f_t(0) ∈ H^1_0(Ω)^N, y|_Σ≡ 0, ∇· y ≡ 0,; y(0) ∈ H^5(Ω)^N∩ V, Ay(0), A^2y(0) ∈ H^1_0(Ω)^N, ∫_Ω p dx = 0 }. ]
We consider on Y the norm
[ (y,p,v)_Y^2 := ∫_Q(ρ_3^2|y|^2 + ρ_0^2 |f|^2+ ρ_8^2 |f_t|^2 + ρ_9^2 |Δ f|^2 + ρ_10^2|∇ f_t|^2 + ρ_10^2|f_tt|^2 )d(t,x); + ∫_0^T ∫_ω(ρ_4^2|v|^2 + |(ζ v)_t|^2 + |ζΔ v|^2 + |(ζ v_t)_t|^2 + |ζΔ v_t|^2 + |D^4( ζ v )|^2 ) dx dt; + f(0)_[H^3(Ω)∩ H^1_0(Ω)]^N^2 + f_t(0)_H^1_0(Ω)^N^2 + y(0)_H^5(Ω)^N^2, ]
where in (<ref>) we have written f:= Ly + ∇ p - χ_ω v. Then, endowing the space Y with ·_Y renders it a Banach space.
Now, we put
[ F := { f ∈ L^2(Q)^N : ρ_0 f, ρ_8 f_t, ρ_9 Δ f, ρ_10f_tt∈ L^2(Q)^N, ρ_10f_t ∈ L^2(0,T; H^1_0(Ω)^N),; f(0) ∈[H^3(Ω)∩ H^1_0(Ω)]^N, f_t(0) ∈ H^1_0(Ω)^N }, ]
f_F^2 := ∫_Q(ρ_0^2 |f|^2+ ρ_8^2 |f_t|^2 + ρ_9^2 |Δ f|^2 + ρ_10^2|∇ f_t|^2 + ρ_10^2|f_tt|^2 )d(t,x)
+ f(0)_[H^3(Ω)∩ H^1_0(Ω)]^N^2 + f_t(0)_H^1_0(Ω)^N^2,
and also consider the space of initial conditions
G := { y_0 ∈ H^5(Ω)^N ∩ V : Ay_0, A^2y_0 ∈ H^1_0(Ω)^N },
with the same topology as H^5(Ω)^N∩ V. Then, we define
Z := F × G .
The space Z with the natural product topology is also a Banach space.
Finally, we define the mapping H : Y → Z by
H(y,p,v) := (Dy/Dt - ∇·𝒯(y,p) - χ_ω v , y(0)).
§.§ Three lemmas and the conclusion
The mapping H : Y → Z is well-defined, and it is continuous.
We write H(y,p,v) = (H_1(y,p,v),H_2(y,p,v)), where
H_1(y,p,v) := Dy/Dt - ∇·𝒯(y,p) - χ_ω v;
H_2(y,p,v) := y(0).
There is nothing to prove about H_2, since it is clearly linear and continuous. We will consider only the mapping H_1 in what follows.
We decompose H_1(y,p,v) = h_1(y,p,v) + h_2(y,p,v) + h_3(y,p,v), where
h_1(y,p,v):= y_t -ν(0)Δ y + ∇ p - χ_ω v,
h_2(y,p,v):= - ∇·[(ν(∇ y)-ν(0))∇ y ],
h_3(y,p,v)=(y·∇) y.
By the definition of the norm of F, it follows promptly that
h_1(y,p,v)_F < ∞.
Next, we will prove that the quantity h_2(y,p,v)_F is finite.
CLAIM 1: Δ h_2(y,p,v)_9 < ∞.
We notice that
[ |Δ h_2(y,p,v)| ⩽ C[ (r+1)r|r-1||∇ y|^r-2|D^2 y|^3 + (r+1)r|∇ y|^r-1|D^2 y||D^3 y|+(r+1)|∇ y|^r|D^4 y|]; = C(D_1,1 + D_1,2 + D_1,3). ]
In the case r=1, the term D_1,1 vanishes and thus |Δ h_2| is bounded by C(D_1,2 + D_1,3). Otherwise, assuming r ⩾ 2, we have
[ ∫_Q ρ_9^2 D_1,1^2 d(t,x) ⩽ C(r)∫_0^T ρ_9^2 ∫_Ω |∇ y|^2(r-2) |D^2 y|^6 dx dt; ⩽ C(r)∫_0^T ρ_9^2D^3 y^2(r-2)D^2 y_L^6(Ω)^6 dt; ⩽ C(r) ∫_0^T ρ_9^2D^3 y^2(r-2)y_H^3(Ω)^6 dt; ⩽ C(r) ∫_0^T ρ_9^2D^3 y^2(r+2) dt; ⩽ C(r)( sup_[0,T]ρ_9D^3 y)^2(r+2)∫_Q ρ_9^-2(r+1) d(t,x) < ∞. ]
In the above estimates, we used the continuous embeddings H^2(Ω) ↪ L^∞(Ω) and H^1(Ω) ↪ L^6(Ω). These are valid for N ⩽ 3, see <cit.>, and we will use them tacitly henceforth.
Now, we obtain the estimate for D_1,2:
[ ∫_Q ρ_9^2 D_1,2^2 d(t,x) ⩽ C(r) ∫_0^T ρ_9^2 ∫_Ω |∇ y|^2(r-1)|D^2 y|^2|D^3 y|^2 dx dt; ⩽ C(r) ∫_0^T ρ_9^2D^3 y^2rD^4 y^2 dt; ⩽ C(r) (sup_[0,T]ρ_9D^3 y)^2r∫_Q ρ_9^2|D^4 y|^2 d(t,x) < ∞. ]
Likewise, we show D_1,3 to be finite, since
[ ∫_Q ρ_9^2 D_1,3^2 d(t,x) ⩽ C(r) ∫_0^T ρ_9^2 ∫_Ω |∇ y|^2r |D^4 y|^2dx dt; ⩽ C(r) ∫_0^T ρ_9^-2r(ρ_9D^3 y)^2r(ρ_9D^4 y)^2 dt; ⩽ C(r) sup_[0,T]ρ_9D^3 y^2r∫_Q ρ_9^2 |D^4 y|^2 d(t,x) < ∞. ]
CLAIM 2: ∂_t^2 h_2(y,p,v)_10 < ∞.
We begin with the pointwise estimate,
[ |∂_t^2 h_2(y,p,v)| ⩽ C[ (r+1)r|r-1||∇ y|^r-2|∇ y_t|^2|Δ y| + (r+1)r|∇ y|^r-1|∇ y_tt||Δ y|; + (r+1)r|∇ y|^r-1|∇ y_t||Δ y_t| +(r+1)|∇ y|^r|Δ y_tt|]; = C(D_2,1+D_2,2+D_2,3+D_2,4). ]
As in the previous claim, if r=1, then D_2,1≡ 0. For r ⩾ 2, the next estimate is valid:
[ ∫_Q ρ_10^2 D_2,1^2 d(t,x) ⩽ C(r) ∫_0^T ρ_10^2 ∫_Ω |∇ y|^2(r-2)|∇ y_t|^4|Δ y|^2 dx dt; ⩽ C(r) ∫_0^T ρ_9^-2rρ_9 D^3 y^2(r-2)ρ_9 D^4 y^2 ρ_10Δ y_t^4 dt; ⩽ C(r) (sup_[0,T]ρ_9D^3 y)^2(r-2)(sup_[0,T]ρ_10Δ y_t)^4 ∫_Q ρ_9^2|D^4 y|^2 d(t,x); < ∞. ]
Proceeding similarly, we prove the remaining inequalities:
[ ∫_Q ρ_10^2 D_2,2^2 d(t,x) ⩽ C(r) ∫_0^T ρ_10^2 ∫_Ω |∇ y|^2(r-1)|∇ y_tt|^2|Δ y|^2 dx dt; ⩽ C(r) ∫_0^T ρ_10^2ρ_9^-2rρ_11^-2ρ_9 D^3 y^2(r-1)ρ_9 D^4 y^2 ρ_11∇ y_tt^2 dt; ⩽ C(r) (sup_[0,T]ρ_9D^3 y)^2(r-1)(sup_[0,T]ρ_11∇ y_tt)^2 ∫_Q ρ_9^2|D^4 y|^2 d(t,x); < ∞; ]
[ ∫_Q ρ_10^2 D_2,3^2 d(t,x) ⩽ C(r) ∫_0^T ρ_10^2 ∫_Ω |∇ y|^2(r-1) |∇ y_t|^2|Δ y_t|^2 dx dt; ⩽ C(r) ∫_0^T ρ_9^-2(r-1)ρ_10^-2ρ_9 D^3 y^2(r-1)ρ_10 D^3 y_t^2 ρ_10Δ y_t^2 dt; ⩽ C(r) (sup_[0,T]ρ_9D^3 y)^2(r-1)(sup_[0,T]ρ_10Δ y_t)^2 ∫_Q ρ_10^2|D^3 y_t|^2 d(t,x); < ∞; ]
[ ∫_Q ρ_10^2 D_2,4^2 d(t,x) ⩽ C(r) ∫_0^T ρ_10^2 ∫_Ω |∇ y|^2r |Δ y_tt|^2 dx dt; ⩽ C(r) ∫_0^T ρ_10^2ρ_9^-2rρ_11^-2ρ_9 D^3 y^2rρ_11Δ y_tt^2 dt; ⩽ C(r) (sup_[0,T]ρ_9D^3 y)^2r∫_Q ρ_11^2|Δ y_tt|^2 d(t,x); < ∞. ]
This finishes the proof of the second claim.
CLAIM 3: |∂_t ∇ h_2(y,p,v)| _10 < ∞.
As before, we begin by considering the pointwise estimate:
[ |∂_t ∇ h_2(y,p,v)| ⩽ C[(r+1)r|r-1||∇ y|^r-2|∇ y_t||Δ y| + (r+1)r|∇ y|^r-1|Δ y_t| .; .+ (r+1)|∇ y|^r|D^3 y_t| ]; = C(D_3,1 + D_3,2 + D_3,3). ]
Again, if r=1, then we need not consider D_3,1, since it vanishes. For r⩾ 2,
[ ∫_Q ρ_10^2 D_3,1^2 d(t,x) ⩽ C(r)∫_0^T ρ_10^2 ∫_Ω |∇ y|^2(r-2)|∇ y_t|^2 |Δ y|^2 dx dt; ⩽ C(r) ∫_0^T ρ_10^2 ρ_9^-2rρ_9D^3 y^2(r-2)ρ_9D^4 y^2ρ_9∇ y_t^2 dt; ⩽ C(r) (sup_[0,T]ρ_9D^3 y)^2(r-2)(sup_[0,T]ρ_9D^4 y)^2∫_Q ρ_9^2 |∇ y_t|^2 d(t,x); < ∞, ]
[ ∫_Q ρ_10^2 D_3,2^2 d(t,x) ⩽ C(r)∫_0^T ρ_10^2∫_Ω |∇ y|^2(r-1)|Δ y_t|^2 dx dt; ⩽ C(r) ∫_0^T ρ_9^-2(r-1)ρ_9D^3 y^2(r-1)ρ_10Δ y_t^2 dt; ⩽ C(r)(sup_[0,T]ρ_9D^3 y^2 )^2(r-1)(sup_[0,T]ρ_10Δ y_t)^2 < ∞, ]
and
[ ∫_Q ρ_10^2 D_3,3^2 d(t,x) ⩽ C(r)∫_0^Tρ_10^2 ∫_Ω |∇ y|^2r|D^3 y_t|^2dx dt; ⩽ C(r)∫_0^T ρ_9^-2rρ_9D^3 y^2rρ_10D^3 y_t^2dt; ⩽ C(r)(sup_[0,T]ρ_9D^3 y)^2r∫_Q ρ_10^2|D^3 y_t|^2 d(t,x) <∞. ]
These inequalities confirm the third claim.
The remaining terms composing the F-norm of h_2(y,p,v), h_2(y,p,v)_F, are norms of lower order derivatives of it, compared to the ones considered above, in adequate weighted L^2 spaces. Therefore, these terms are even easier to handle. A similar remark is also true for h_3(y,p,v)_F. In addition, we can show the continuity of H via estimates which are very similar to the ones that we carried out in the claims above; hence, we omit these computations. This ends the proof of the Lemma.
The mapping H is strictly differentiable at the origin of Y, with derivative DH(0,0,0) = Λ∈ℒ(Y,Z) given by
Λ· (y,p,v) = ( y_t - ν_0 Δ y + ∇ p - χ_ω v, y(0)) = (Λ_1· (y,p,v),Λ_2· (y,p,v)).
In fact, H is of class C^1(Y,Z) and, for each (y,p,v) ∈ Y, its derivative DH(y,p,v) ∈ℒ(Y,Z) is given by
DH(y,p,v)· (y,p,v) = (Λ_1(y,p,v)· (y,p,v) , Λ_2 · (y,p,v) ),
where we have written
Λ_1(y,p,v)· (y,p,v) := Λ_1· (y,p,v)
- rν_1∇·[ χ_y |∇y|^r-2∇y : ∇ y ∇y + |∇y|^r∇ y ]
+ (y ·∇) y + (y·∇) y ,
∇y : ∇ y := ( ∇y^⊺∇ y ),
χ_y is the indicator function of the set {∇y≠ 0}.
We will only prove the first claim, i.e., that H is strictly differentiable at the origin (0,0,0) ∈ Y, with DH(0,0,0) being onto Z. There is no additional difficulty in proving the lemma in full generality.
We write H = (H_1,H_2) as in (<ref>) of Lemma <ref>. Again, it is only necessary to investigate H_1, since H_2 is linear and continuous, and therefore C^∞. Given (y,p,v),(y,p,v) ∈ Y, we note that
H_1(y,p,v) - H_1(y,p,v) - Λ_1 · (y-y, p - p,v-v) = -ν_1 D_1 + D_2,
where
D_1 := ∇·(|∇y|^r ∇y - |∇ y|^r∇ y ),
D_2 := (y·∇)y - (y ·∇) y.
Let us take two positive real numbers, ϵ and δ, and we suppose (y,p,v)_Y ⩽δ, (y,p,v)_Y ⩽δ. We must show that we can take δ = δ(ϵ) such that
H_1(y,p,v) - H_1(y,p,v) - Λ_1 · (y-y, p - p,v-v)_F ⩽ϵ(y-y, p - p, v-v)_Y.
We assume, without loss of generality, that δ < 1. It is enough to show that
ν_1D_1_F + D_2_F ⩽ϵ(y-y, p-p, v- v)_Y,
for a suitable δ = δ(ϵ). To begin with, we observe that
[ |Δ D_1| ⩽ C (r+1)r[|r-1|||∇y|^r-2 - |∇ y|^r-2||D^2y|^3; + |r-1||∇ y|^r-2(|D^2y|^2 + |D^2 y|^2 )|∇y -∇ y|; + |r-1|(|∇y|^r-2 + |∇ y|^r-2)|∇y -∇ y||D^2y||D^3y|; + |∇ y|^r-1|D^2y-D^2 y||D^3y| +|∇ y|^r-1|D^2 y||D^3(y-y)|; + |∇y|^r-1|∇(y-y)||D^4 y| + |∇ y|^r|D^4(y-y)| ]; = C(r+1)r(D_1,1 + ⋯ + D_1,7). ]
If r=1, then D_1,1≡ D_1,2≡ D_1,3≡ 0, whereas for r=2 we also have D_1,1≡ 0. If r ⩾ 3, we follow estimates similar to the ones we developed in Lemma <ref>, and make use of the embeddings we described there, in such a way that
[ ∫_Q ρ_9^2 D_1,1^2 d(t,x); ⩽ C(r)∫_0^T ρ_9^2D^3 (y-y)^2(D^3 y^2(r-3) + D^3 y^2(r-3))D^2 y_L^6(Ω)^6 dt; ⩽ C(r) ∫_0^T ρ_9^2D^3 (y-y)^2(D^3 y^2(r-3) + D^3 y^2(r-3))D^3y^6 dt; = C(r) ∫_0^T ρ_9^-2rρ_9D^3 (y-y)^2(ρ_9D^3 y^2(r-3) + ρ_9D^3 y^2(r-3))ρ_9D^3y^6 dt; ⩽ C(r)δ^2r(y-y,p - p, v-v)_Y^2. ]
Next, for r⩾ 2,
[ ∫_Q ρ_9^2 D_1,2^2 d(t,x); ⩽ C(r)∫_0^T ρ_9^2D^3 y^2(r-2)D^3(y-y)^2(D^2y_L^4(Ω)^4 + D^2 y_L^4(Ω)^4 )dt; ⩽ C(r)∫_0^Tρ_9^2D^3 y^2(r-2)D^3(y-y)^2(D^3 y^4 + D^3 y^4 )dt; ⩽ C(r)δ^2r(y-y, p-p, v-v)_Y^2, ]
[ ∫_Q ρ_9^2D_1,3^2d(t,x); ⩽ C(r)∫_0^Tρ_9^2(D^3 y^2(r-2) + D^3 y^2(r-2))D^3(y-y)^2D^4 y^2D^3 y^2 dt; ⩽ C(r)δ^2r(y-y,p-p,v-v)_Y^2. ]
Now, for every r ⩾ 1,
[ ∫_Q ρ_9^2D_1,4^2d(t,x) ⩽ C(r)∫_0^T ρ_9^2D^3 y^2(r-1)D^4(y-y)^2D^3y^2 dt; ⩽ C(r)δ^2r(y-y,p-p,v-v)_Y^2, ]
[ ∫_Q ρ_9^2D_1,5^2d(t,x) ⩽ C(r)∫_0^T ρ_9^2 D^3 y^2(r-1)D^4 y^2D^3 (y-y)^2 dt; ⩽ C(r) δ^2r(y-y,p-p, v-v)_Y^2, ]
[ ∫_Q ρ_9^2D_1,6^2d(t,x) ⩽ C(r)∫_0^T ρ_9^2D^3 y^2(r-1)D^3(y-y)^2D^4y^2 dt; ⩽ C(r)δ^2r(y-y,p-p, v-v)_Y^2, ]
[ ∫_Q ρ_9^2D_1,7^2d(t,x) ⩽ C(r)∫_0^T ρ_9^2D^3 y^2rD^4(y-y)^2 dt; ⩽ C(r)δ^2r(y-y,p-p, v-v)_Y^2. ]
Summing up, the computations we carried out above yield
Δ D_1_9 ⩽ C(r)δ^r(y-y,p-p,v-v)_Y.
We can treat the remaining terms composing the F-norm of D_1 likewise, as we argued in Lemma <ref>. Dealing with D_2 is even simpler, since it involves lower order derivatives of y. In this way, we deduce that
ν_1D_1_F + D_2_F ⩽ C(r)δ(y-y,p-p,v-v)_Y.
Thus, it suffices to take any positive δ < min(1,ϵ/C(r)) in order to finish the proof.
The linear operator DH(0,0,0) : Y → Z is continuous and onto. Furthermore, there exists a constant M>0 such that
(y,p,v)_Y ⩽ MDH(0,0,0)· (y,p,v)_Z
The continuity of DH(0,0,0) follows promptly from the definition of the norms of Y and Z. As for the surjectivity of this mapping, let us consider (f,y_0) ∈ Z. We take (y,p,v) as the state-pressure-control tuple given by Theorem <ref>. By the estimates we proved in subsection <ref>, namely (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>), together with Lemma <ref>, we have (y,p,v) ∈ Y. Moreover,
DH(0,0,0)· (y,p,v) = (y_t - ν_0Δ y +∇ p - χ_ω v, y(0)) = (f,y_0),
where the last equality holds by the choice of (y,p,v); hence, DH(0,0,0) is onto Z. By the aforementioned estimates, (<ref>) follows easily. This establishes the lemma.
§.§ Proof of Theorem <ref>
According to Lemmas <ref>, <ref> and <ref>, we may apply Theorem <ref>. This result allows us to deduce the existence of η > 0 such that, for each (f,y_0) ∈ Z subject to
(f,y_0)_Z < η,
the equation
H(y,p,v) = (f,y_0)
has a solution (y,p,v) ∈ Y which satisfies
(y,p,v)_Y < B η,
for a suitable constant B > 0 which is independent of η. Explicitly, we can take B := (M^-1 - δ)^-1, where M>0 is given by Lemma <ref> (cf. (<ref>)), and where we select the positive constant δ < M^-1 such that H satisfies condition (iii) of Theorem <ref>. Such a constant δ does in fact exist by Lemma <ref>.
In particular, taking f≡ 0, inequality (<ref>) reads
y_0_H^5(Ω)^N < η.
Since (y,p,v) ∈ Y, we have (<ref>), and alongside (<ref>), we see that (y,p,v) does solve (<ref>).
§ NUMERICAL ANALYSIS
§.§ Proof of the convergence of the algorithm
The proof of this result is straightforward once we have established Lemmas <ref> and <ref>. We present it here for completeness.
Firstly, we observe that Lemma <ref> ensures that (y^n+1,p^n+1,v^n+1) is well-defined in terms of (y^n,p^n,v^n), since in this lemma we showed that DH(0,0,0) is bijective. Furthermore, we have DH(0,0,0)^-1_ℒ(Z,Y)⩽ M, according to the notations of this lemma.
Next, we take y_0 ∈ G, with y_0_H^5(Ω)^N < η, and we let (y,p,v) ∈ Y be the solution of H(y,p,v) = (0,y_0). We also consider 0<ϵ < (2M)^-1. By Lemma <ref>, there exists δ >0 such that the relations
(y,p,v)∈ Y and (y,p,v) ∈ Y, (y-y, p-p, v-v)_Y ⩽δ
imply
DH(y,p,v) - DH(y,p,v)_ℒ(Y,Z)⩽ϵ.
Shrinking η, if necessary, we can assume η⩽δ. Employing Lemma <ref> once more, we find κ = κ(y,p,v) ∈]0,1[ such that (y,p,v) ∈ Y and (y-y,p-p,v-v)_Y ⩽κ together imply
H(y,p,v) - H(y,p,v) - DH(y,p,v)· (y-y,p-p,v-v)_Z ⩽ϵ(y-y,p-p,v-v)_Y.
We write e^n := (y^n, p^n, v^n) - (y,p,v), and let us assume e^0_Y ⩽κ. By the algorithm,
e^n+1 = - DH(0,0,0)^-1[H(y^n,p^n,v^n)-H(y,p,v) - DH(y,p,v) · e^n ]
- DH(0,0,0)^-1[DH(y,p,v) - DH(0,0,0) ]· e^n,
whence
e^n+1_Y ⩽ M {H(y^n,p^n,v^n)-H(y,p,v) - DH(y,p,v)e^n.
.+ [DH(y,p,v) - DH(0,0,0) ]· e^n}.
Assuming inductively that e^n_Y ⩽κ, which holds true for n=0, it follows that
e^n+1_Y ⩽ 2Mϵe^n_Y.
Thus, we also have e^n+1_Y ⩽κ. By induction, it follows that e^n_Y⩽κ, for every n; hence, it is always possible to pass from (<ref>) to (<ref>). Let us take θ := 2Mϵ, which satisfies θ < 1 by our choice of ϵ. Applying inequality (<ref>) iteratively in n, we conclude that
e^n_Y ⩽θ^ne^0_Y.
This proves Theorem <ref>.
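Although the scheme itself is introduced earlier in the paper, the error recursion (<ref>) already determines its structure: every step solves a linear problem with the operator DH(0,0,0), frozen once and for all. The following toy finite-dimensional sketch (an illustration only; the map H, the matrix standing in for DH(0,0,0), and all numbers are placeholders, not objects from the paper) exhibits the resulting linear convergence.

```python
import numpy as np

# Quasi-Newton iteration with the derivative frozen at the origin:
#   x^{n+1} = x^n - DH(0)^{-1} (H(x^n) - k),
# applied to a toy map H(x) = A x + 0.1 x^3 (componentwise), so that DH(0) = A.
rng = np.random.default_rng(0)
n = 5
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stands in for DH(0,0,0)
H = lambda x: A @ x + 0.1 * x**3
k = 1e-2 * rng.standard_normal(n)                   # small datum, as required by the local theorem

A_inv = np.linalg.inv(A)                            # "factorize once, reuse at every step"
x = np.zeros(n)                                     # initial guess, as in (y^0, p^0, v^0) = 0
for it in range(30):
    x_new = x - A_inv @ (H(x) - k)
    if np.linalg.norm(x_new - x) < 1e-14:
        x = x_new
        break
    x = x_new
print(it, np.linalg.norm(H(x) - k))                 # residual at round-off level
```

The successive errors contract by a fixed factor, in agreement with the bound e^n_Y ⩽θ^n e^0_Y obtained above.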
§.§ Implementation of the algorithm
To implement the fixed-point numerical algorithm, we proceed in two steps. Firstly, it is necessary to implement a solver for the control problem of the forced Stokes system. We begin with the variational problem (<ref>) and adequately reformulate it to achieve a mixed formulation, as in <cit.>. Below, we recall the main ideas for N=2. After treating the linear problem, we iterate it by updating the source term according to our algorithm.
Under the notations of the proof of Theorem <ref> (see (<ref>)), we define u := ρ_3^-1(L^*φ + ∇π), m := ρ_4^-1φ, and k := ρ_4^-1π. Let us introduce the spaces
Z := { (m^',k^') : m^'∈ L^2(0,T; H^1_0(Ω)^2), m^'_t ∈ L^2(Q)^2, k^'∈ L^2(0,T; H^1(Ω)), ∫_Ω k^' dx = 0 a.e. },
and
W:= L^2(Q)^2 × Z, M := L^2(0,T;H^1_0(Ω)^2)× L^2(Q),
as well as the bilinear forms b_1 : W × W →ℝ, B,B_1 : W × M →ℝ by
b_1((u,m,k),(u^', m^', k^')) := ∫_Q {u · u^' + χ m · m^'}d(t,x),
B((u,m,k),(λ,μ)) := ∫_Q {λ·[u+ρ_3^-1(ρ_4 m)_t + ∇(ρ_4 k ) ] - ∇( ρ_3^-1λ): ∇(ρ_4 m ) } d(t,x)
and
B_1((u,m,k),(λ,μ)) = B((u,m,k),(λ,μ)) - ∫_Q ρ_3^-1μ∇·(ρ_4 m ) d(t,x).
The last element we introduce is the linear form Λ : W →ℝ, which is given by
⟨Λ, (u,m,k) ⟩ := ∫_Q ρ_4 f· m d(t,x) + ∫_Ω (ρ_4 m)(0)· y_0 dx.
We reformulate problem (<ref>) as: find (u,m,k) ∈ W and multipliers (λ,μ) ∈ M such that
b_1((u,m,k),(u^',m^',k^')) + B_1((u^',m^',k^'),(λ,μ)) = ⟨Λ, (u^',m^',k^') ⟩, for all (u^',m^',k^') ∈ W,
B_1((u,m,k),(λ^',μ^')) = 0, for all (λ^',μ^') ∈ M.
After we solve it, we recover the control and corresponding velocity field of the linear control problem (<ref>) via
v = - χρ_4^-1 m and y = ρ_3^-1 u.
If we assume that Ω is polygonal, it is simple to find finite dimensional approximations W_h and M_h of the spaces W and M.
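At the discrete level, the mixed problem reduces to a saddle-point linear system. The sketch below is only schematic: the matrices are random placeholders standing in for the FreeFem++ assembly of b_1, B_1 and the linear form over bases of W_h and M_h (in particular, taking the leading block symmetric positive definite is a simplification of the actual b_1); only the block structure, the solve, and the back-substitution are the point.

```python
import numpy as np

# Discrete counterpart of the mixed problem: once bases of W_h and M_h are fixed,
# it becomes the saddle-point system
#     [ K   B^T ] [ w ]   [ L ]
#     [ B   0   ] [ l ] = [ 0 ],
# where K, B, L discretize b_1, B_1 and the linear form, respectively.
rng = np.random.default_rng(1)
nW, nM = 40, 15
K = rng.standard_normal((nW, nW)); K = K @ K.T + nW * np.eye(nW)   # SPD stand-in for b_1
B = rng.standard_normal((nM, nW))                                  # full-rank stand-in for B_1
L = rng.standard_normal(nW)                                        # stand-in for the linear form

S = np.block([[K, B.T], [B, np.zeros((nM, nM))]])
rhs = np.concatenate([L, np.zeros(nM)])
sol = np.linalg.solve(S, rhs)
w, lam = sol[:nW], sol[nW:]          # coefficients of (u, m, k) and of the multipliers
# The control and state of the linear problem are then recovered from (u, m)
# through v = -chi rho_4^{-1} m and y = rho_3^{-1} u, as indicated above.
print(np.linalg.norm(S @ sol - rhs))
```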
§.§ A numerical experiment
In the sequel, we will employ the FreeFem++ library, written in C++; see <http://www.freefem.org/ff++> for more information. In Table <ref>, we describe the data we used to apply the quasi-Newton method to (<ref>).
We illustrate in Figure <ref> the 2D mesh of Ω, and the 3D mesh of the cylinder Q. In Figure <ref>, we show both components of the initial state y(0) = y_0.
Our stopping criterion is
y^n+1-y^n_L^2(Q)/y^n_L^2(Q)⩽ϵ,
with ϵ = 10^-8. We took as the initial guess (y^0,p^0,v^0) = (0,0,0). We attained convergence after six iterations, with a rate of 4.68.
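We do not spell out here how the rate is estimated; a common estimator, sketched below with placeholder numbers (not the increments of the actual run), extracts an empirical order p from three successive increments of the quantity entering the stopping criterion.

```python
import numpy as np

# Empirical order of convergence from successive increments d_n = ||y^{n+1} - y^n||:
#   p ~ log(d_{n+1}/d_n) / log(d_n/d_{n-1}).
d = np.array([1.2e-1, 3.0e-3, 1.9e-6, 7.5e-13])   # placeholder increments
p = np.log(d[2:] / d[1:-1]) / np.log(d[1:-1] / d[:-2])
print(p)
```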
We begin by illustrating the overall behavior of the computed control and state through plots of some of their cross-sections in space. On the one hand, for the control, we plot the x_1 = 0.9 and x_1 = 2.1 cuts in Figures <ref> and <ref>, respectively. On the other hand, we provide the surfaces comprising the values of the state components, relative to these cuts, in Figures <ref> and <ref>.
The time evolution of the norms of the control and of the corresponding state is what we illustrate in Figure <ref>. It corroborates our theoretical findings as these norms decay exponentially. To further illustrate the control, we provide a surface of its values at initial time in Figure <ref>. Then, we give some insight into the dynamics of the problem by showcasing some heat maps of the control and of its corresponding state. Namely, in Figure <ref>, we illustrate the control at time t=0.15 — it is already considerably small, as we would expect from Figure <ref>. For several times, viz., for each t∈{0.15, 0.25, 0.35, 0.45}, we give a heat map of the first (respectively, second) component of the velocity field in Figure <ref> (respectively, Figure <ref>).
§ COMMENTS AND PERSPECTIVES
§.§ On the constitutive law for the shear stress
Upon scrutinizing the proof of Lemmas <ref> and <ref>, we conclude that they still hold for any function ν : ℝ^N× N→ℝ in (<ref>) having the following properties:
* ν⩾ν_0, for some constant ν_0>0;
* ν is of class C^3( ℝ^N× N\{ 0 });
* There exists r>0 such that
|D^k ν(A)|⩽ C(1 + |A|^(r-k)^+),
for k=0,1,2,3, and for every A ∈ℝ^N× N\{ 0}.
With Lemmas <ref> and <ref> at hand, we can follow the remaining steps towards the main result, i.e., Theorem <ref>, in the same manner as we proceeded in Section <ref>. This more general class of constitutive laws includes the one determining the reference model of this paper, namely, ν(A) := ν_0 + ν_1|A|^r, when r∈{ 1, 2 } or r⩾ 3. Another class of functions ν for which the properties stated above hold is
ν(A) := ν_0 (1 + ν_1 |A|^2 )^r/2, r ∈{1,2}∪[3,∞[.
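As a quick illustrative check (a sketch restricted to the radial profile n(s) with s = |A|, for the sample values r = 3 and ν_0 = ν_1 = 1, so it does not replace the verification of the full matrix growth condition), one can confirm symbolically that the derivatives of this law obey the required growth at infinity.

```python
import sympy as sp

# Check of the growth condition |n^(k)(s)| <= C (1 + s^{(r-k)^+}), k = 0,...,3,
# for the radial profile n(s) = (1 + s^2)^{r/2} of the law above, with r = 3.
s = sp.symbols('s', positive=True)
r = 3
n = (1 + s**2) ** sp.Rational(r, 2)

for k in range(4):
    dk = sp.diff(n, s, k)                       # nonnegative for s > 0 in this example
    growth = 1 + s ** max(r - k, 0)
    print(k, sp.limit(dk / growth, s, sp.oo))   # finite limit: the bound holds at infinity
```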
§.§ On the use of the gradient instead of the deformation tensor
We can replace the gradient of the velocity field in (<ref>) with the deformation tensor, Dy = ( ∇ y + ∇ y^T)/2, without losing any of the results we established. From a practical viewpoint, this form of the model is more realistic. Analyzing the estimates we carried out throughout the present work, it is easy to see that the techniques we employed work just as well under this substitution. In particular, we notice the new framework shares the linearization around the zero trajectory with the one we studied in Section <ref>. Using the estimates developed there, alongside Korn-type inequalities, we can prove all of the corresponding results in Sections <ref> and <ref> for this alternate version of the model (<ref>)-(<ref>).
§.§ On extensions of Theorem <ref> and some related open questions
Boundary controllability. We remark that a corresponding boundary local null controllability result follows from Theorem <ref>. In effect, let us assume that the initial data y_0 belongs to H^5_0(Ω)∩ V, being sufficiently small in the (strong) topology of this space, and that we act with a control on a smooth portion γ of the boundary ∂Ω (with γ≠∂Ω and γ≠∅). We can employ standard geometrical arguments to extend Ω to an open region Ω, with a smooth boundary ∂Ω, and in a way that ∂Ω\γ⊂∂Ω. Acting distributively over ω:= Ω\Ω, with y_0 extended to zero outside of Ω, we obtain a control v∈ L^2(]0,T[ ×ω) driving the corresponding state y to zero at time T. A boundary control for the original problem is y|_[0,T]×γ.
Local controllability to trajectories. Regarding the local exact controllability to trajectories, there are two key aspects to investigate. Firstly, we must prove a global Carleman inequality, analogous to Proposition <ref>, but for the adjoint system of the linearization around the given trajectory, cf. Lemma <ref>. Secondly, we have to extend the estimates of Section <ref> for this linearized problem. These endeavors are not straightforward, whence we leave this question open for future investigations.
On the restrictions on the exponent r. We notice that the estimates of Section <ref> do not immediately extend to values of r > 0 outside of {1,2}∪[3,∞[. However, we conjecture that our main result (viz., Theorem <ref>) is still in force for these values of r. A possible way to establish this is to parametrically regularize the function ν around zero, and attentively keep track of the regularization parameters throughout the estimates. We leave this question open here.
Requirements on the initial datum. Through another regularization argument, we could possibly require a less restrictive topology for the initial datum in the main theorem. Namely, if we assume y_0 ∈ H only, we ought to carry out estimates for the uncontrolled problem (corresponding to (<ref>) with v≡ 0) to show that there exists t_0 ∈]0,T[ for which y(t_0,·)_H^5(Ω)^N∩ V⩽η, as long as y_0_H is sufficiently small. We choose not to delve into the technicalities of these estimates here (see <cit.> for the application of such an argument in the case of the Navier-Stokes equations with the Navier boundary condition). However, we emphasize that this is a non-trivial task. Thus, assuming this is valid, Theorem <ref> asserts that there exists a control v ∈ L^2(]t_0,T[×ω) driving y(t_0,·) to zero at time T. From the exponential decay of solutions, see <cit.>, this argument immediately provides a large-time global null controllability result.
Remarks on other boundary conditions. We observe that, if instead of no-slip boundary conditions, we assume Navier boundary conditions, the method of <cit.>, used for the Navier-Stokes equations, may apply to the current model. If we manage to deal with the additional terms figuring in the expansions we must make after an appropriate time rescaling, especially the boundary layers, we should obtain a small-time global exact controllability to trajectories result (under Navier boundary conditions). Alternatively, if we consider the model (<ref>)-(<ref>) with Ω = 𝕋 (the N-dimensional torus) and periodic boundary conditions, then we can easily conduct the regularizing argument for the initial datum we outlined above, whence we can prove large-time global null controllability for this model — we omit the details here.
Stabilization results. It might be that, for ν_1 > 0, an appropriate use of the stabilizing effect of the power-law model makes it easier to establish stabilization results for this class of non-Newtonian fluids. In this way, we propose that our current contributions could bridge such results with global null controllability ones. We remark that, even for the Navier-Stokes equations (corresponding to ν_1 = 0) under no-slip boundary conditions, whether global null controllability holds is an open problem. We suggest that such results for (<ref>)-(<ref>) (with ν_1 > 0) could provide insight on this important open question.
|
http://arxiv.org/abs/2307.07215v1 | 20230714081545 | Scaling law for a buckled elastic filament in a shear flow | [
"Pawel Sznajder",
"Lujia Liu",
"Piotr Zdybel",
"Maria L. Ekiel-Jezewska"
] | physics.flu-dyn | [
"physics.flu-dyn",
"cond-mat.soft"
] |
|
http://arxiv.org/abs/2307.04873v2 | 20230710195036 | Gauge fixing in cosmological perturbations of Unimodular Gravity | [
"Francisco X. Linares Cedeño",
"Ulises Nucamendi"
] | gr-qc | [
"gr-qc",
"astro-ph.CO"
] |
Gauge fixing in cosmological perturbations of Unimodular Gravity
Francisco X. Linares Cedeño, Ulises Nucamendi
==============================================================================
§ INTRODUCTION
We are living in the era of modern cosmology, and accurate predictions of the dynamics of the universe are required. With the advent of more data, the information acquired from different sources in the cosmos challenges the broad variety of cosmological models in the literature. The standard Lambda Cold Dark Matter (ΛCDM) model is based on the theory of General Relativity (GR), and it shows very good agreement with several observations <cit.>. However, there are still some unsolved problems within the ΛCDM model <cit.>. This has motivated a large number of proposals considering both new particles and new theories of gravity.
The current accelerated expansion of the universe is one of the riddles that cosmologists are trying to solve, and within the framework of GR it is the Cosmological Constant Λ that plays the role of the component responsible for such accelerated expansion, the so–called Dark Energy. The origin of this dark component is still unknown, and there are many models in the literature based on different physics: some examples are dark energy fluids <cit.>, quintessence/phantom scalar fields <cit.>, and modified gravity <cit.>, among others.
In early formulations of GR, a choice of coordinates such that the determinant of the metric tensor is fixed was considered <cit.>, that is, the metric tensor g_μν obeys the unimodular condition √(-g) = 1. Later, the relation between a fixed metric determinant and the cosmological constant was established <cit.>, where unimodular coordinate mappings were imposed: x^μ→ x^'μ such that |∂ x^'μ/∂ x^ν| = 1. Such a consideration leads to the traceless part of the Einstein field equations, and this gravitational theory has been dubbed Unimodular Gravity (UG) <cit.>. One of the main consequences of having a four–volume preserving theory is that the energy–momentum tensor is no longer conserved (∇_μ T^μ_ ν≠ 0), and new non–gravitational interactions are allowed in the matter–energy sector. This feature is expected in theories of gravity that at a fundamental level could be more compatible with quantum mechanics <cit.>.
Cosmological models based on UG have been studied in recent years <cit.>, although most of them have focused on the background dynamics. Particularly in <cit.>, the authors of the present work analyzed four phenomenological diffusion models describing interactions between the dark sector components, and it is shown that such interactions alleviate the H_0 tension. It is then natural to go further and study whether it is possible to describe an inhomogeneous universe by considering linear perturbations in UG, with the aim of reproducing observables such as the Cosmic Microwave Background (CMB) and the Large Scale Structure (LSS). This implies properly solving the Einstein–Boltzmann system describing the cosmological evolution of the initial fluctuations of both all the matter components and the metric tensor.
Some contributions in this direction have been made: in <cit.> the linear perturbations are obtained, although the authors impose by hand the conservation of the energy–momentum tensor, which is not the most general form of the UG field equations. Nonetheless, it is shown that the Sachs–Wolfe effect <cit.> in UG has a new term given by a scalar metric perturbation <cit.>. On the other hand, the second order perturbations were obtained and no major distinctions from GR were found <cit.>. A recent analysis including the non–conservation of the energy–momentum tensor has been carried out in <cit.>, where the presence of an energy–momentum current violation has been considered.
As mentioned above, whereas a first work by the authors of the present paper focused on the Hubble tension at the background level <cit.>, here we pay full attention to the details of the cosmological perturbations in UG, a necessary step prior to the implementation of this theory in Boltzmann solvers, as well as to the statistical analyses needed both to constrain parameters and to perform model comparison. A particular aspect that has not been addressed in the previous literature is that of gauge fixing. This is of crucial importance because one has to ensure that no spurious degrees of freedom propagate in the theory. We show that it is possible to fix both the Newtonian and Synchronous gauges: the former is fixed in the same way as in GR, whereas to fix the latter the unimodular constraint at first order must be implemented. Thus, with the aim of exploring possible imprints of UG dynamics at first order, we obtain the CDM density contrast in both gauges mentioned above for a matter–dominated universe, and the Sachs–Wolfe effect is obtained as well.
This paper is organized as follows: Section <ref> is dedicated to reviewing in more detail the contributions of previous works on the analysis of linear perturbations in UG. In Section <ref> we obtain the equations of motion for UG by the variational method. Once the field equations are obtained, the background for a Friedmann–Robertson–Walker (FRW) line element is implemented and the linear perturbations are obtained in Section <ref>. Besides, the perturbations of the energy–momentum current violation are presented for the first time. Later, in Section <ref>, we present the main analysis of this work: we fix the most commonly used gauges in cosmology, the Newtonian and Synchronous gauges. Additionally, we review the gauge choice implemented in <cit.>. In Section <ref> we present the physical implications of the UG linear perturbation dynamics, paying particular attention to the evolution of the CDM density contrast, as well as to the Sachs–Wolfe effect. Final remarks are given in Section <ref>.
We will follow the signature convention (- , + , + , +). Overdots will denote derivatives with respect to cosmic time t, and primes ( ^' ) will be used to label changes of coordinates when choosing gauges.
§ UNIMODULAR GRAVITY PERTURBATION THEORY: STATE OF ART
In this Section we are going to briefly review previous works that have addressed the cosmological perturbations in UG. We will highlight the main results, as well as several crucial aspects that have not been deeply analyzed yet.
The first analysis of linear perturbations within the framework of UG was done in <cit.>. In that work the unimodular constraint at the level of linear perturbations is shown for the first time: what at the background level is a fixed four–volume now becomes a new relation between the scalar modes of the metric fluctuations that is not present in GR at the level of linear perturbations. Notwithstanding, the unimodular constraint leads to a gauge issue that seems to be unavoidable when obtaining the dynamics of the scalar modes of the metric fluctuations. Specifically, the authors of <cit.> explain:
"This scalar type metric perturbation cannot be removed through a gauge choice in unimodular gravity and thus leads to the possibility of observationally distinguishing unimodular gravity from GR".
In fact, such scalar mode appears as a new term when obtaining the relation between temperature and gravitational potential in the Sachs–Wolfe effect within the framework of UG. However, the differences between GR and UG when analyzing the Sachs–Wolfe effect are suppressed on large angular scales, making both theories practically indistinguishable. It is important to mention that even when the unimodular constraint gives an additional relation between the gravitational potentials for the metric perturbations, the gauge choice made by the authors of <cit.> leaves two gravitational degrees of freedom, as in GR. This is done so in order to compare with the longitudinal gauge of GR. However, this choice does not fix the gauge, as it will be shown in the present work.
Later, going a step further, the authors of <cit.> obtained the second order perturbations of the theory. The gauge choice they consider is the same as that in <cit.>, and then the gauge fixing issue persists. Nonetheless, the authors of <cit.> claim that the appearance of the new term in the Sachs–Wolfe effect can be made compatible with GR with the proper gauge choice. On the other hand, the second order Mukhanov–Sasaki equation in UG is obtained, and it is shown that it depends only on the first order unimodular constraint. It is concluded that there is no significant difference between GR and UG at either first or second order in perturbations.
Thus, both works mentioned above conclude that there are no major distinctions between GR and UG. However, in both studies it is assumed that the energy–momentum tensor is conserved, which neglects one of the main features of the UG theory. Progress in this direction has been made: in <cit.> the cosmological linear perturbations in UG under the Newtonian conformal gauge are studied for scalar and tensor perturbations, and the Boltzmann equation for photons is obtained as well. In particular, they obtain the 00 component for scalar perturbations, and it is shown that it presents an extra contribution due to Λ. On the other hand, the Boltzmann equation for photons contains an additional term with third order derivatives. In this respect, the authors say:
"...the Boltzmann equation for photons is exposed because it contains the energy momentum violations that characterize the UG. Notoriously, the extra term carries higher order derivatives in the conformal time component for the scale factor and for the scalar curvature."
This is the case when considering that radiation will be coupled with non–standard terms due to the energy–momentum current violation. In our case, we will consider that the new non–gravitational interactions occur only between the dark sector components.
Another work assuming the non–conservation of the energy–momentum tensor, as well as considering linear perturbations in the longitudinal gauge in UG, is <cit.>, where the authors do not use the action principle to obtain the UG field equations, but take the trace-free Einstein equations as their starting point. Thus, the authors do not assume the unimodular condition. This is different from our approach, where the UG field equations are derived from an action principle (in fact, we will consider unimodular variations), and the unimodular constraint will be considered for both background and linear perturbations. The study of instabilities in UG with non–gravitational interactions in the dark sector is carried out in detail, and in particular the authors report:
"...the usual instability is driven by the nonadiabatic pressure perturbation of the dark energy fluid, but for the trace–free Einstein equations and a transfer potential that depends only on the dark matter energy density there is no nonadiabatic pressure perturbation to dark energy – this is ultimately why there is no instability here."
Later, in <cit.>, the contribution of an energy–momentum current violation is considered as well, referred to by the authors as nonconservative unimodular gravity. Besides, the unimodular constraint is fully considered when obtaining the dynamics of the linear perturbations, which are written in terms of only one gravitational degree of freedom. In relation to the Newtonian and Synchronous gauges, the authors of <cit.> report:
"The newtonian gauge can not be used in the unimodular context unless any anisotropic contribution to the stress–tensor are considered," and
"Scalar perturbations in the nonconservative unimodular gravity are permitted in the synchronous gauge only and have a growing mode."
The former aspect remains without analysis, whereas the latter was analyzed by considering a specific background solution. Even though these results constitute a significant breakthrough in the study of cosmological perturbations in UG, a proper analysis of gauge fixing is still missing, together with the search for choices that leave the theory without spurious degrees of freedom. Moreover, it is mandatory to have the correct dynamics of linear perturbations if one is interested in implementing UG cosmological models in Boltzmann solvers such as <cit.> and <cit.>. For instance, one of them integrates transfer functions for quantities defined in the synchronous gauge, whereas the other uses the synchronous gauge by default, although Newtonian gauge equations are implemented on top of the synchronous ones. Therefore, the gauge fixing issue in UG must be deeply understood[Since the Boltzmann solvers mentioned above are still being used by cosmologists, we are interested in the study of the Newtonian/Synchronous gauges. The gauge invariant formalism of UG is presented in detail in <cit.>, and it is not our focus to deal with such a treatment in the present work.].
From all of the above, there are some important technical details still to be addressed in order to properly study the cosmological perturbations in UG, with the aim of analyzing whether cosmological models based on UG are viable candidates to describe the universe, not only its background dynamics but the CMB and LSS as well.
Summarizing, we have that
* Even though UG naturally leads to non–gravitational interactions, that is, ∇_μ T^μ_ ν≠ 0, the first works on UG linear perturbations assume the opposite, and energy–momentum conservation is imposed by hand <cit.>. Within these analyses, GR and UG are basically the same theory.
* Focusing on scalar modes, once the non–conservation of the energy–momentum tensor is considered, it has been possible to obtain the 00 component of the field equations for linear perturbations in the Newtonian gauge <cit.>, the perturbed field equations in the longitudinal gauge but without considering the unimodular constraint <cit.>, and solutions for linear perturbations only in the synchronous gauge <cit.>. The Newtonian gauge requires an anisotropic stress term in the matter sector, and solutions in this gauge are lacking.
* None of the works mentioned above analyzes the gauge fixing in UG. This is crucial in order to avoid the propagation of spurious degrees of freedom that can be mistaken for physical effects on cosmological observables.
In the present work we address the missing points mentioned above, with special emphasis on the gauge fixing problem, the dynamics of the linear perturbations considering both the unimodular constraint and the energy–momentum current violation, and the physical repercussions on the growth of the CDM density contrast. In this respect, we show that it is possible to fix both the Newtonian and Synchronous gauges, although they have different consequences for the dynamics of the linear perturbations. We also review the gauge choice studied in <cit.>, and we show that it is not completely fixed due to a remaining undetermined function. We find analytical solutions in terms of the only gravitational degree of freedom in the Newtonian and Synchronous gauges. Therefore, a possibility to track signatures of UG at cosmological scales is to analyze the growth of structure, which implies obtaining the dynamics not only of CDM, but of all the matter components, and setting the proper dynamical equations for the Einstein–Boltzmann system within the framework of UG. On the other hand, we obtain the Sachs–Wolfe effect in UG and, differently from what has been previously reported in the literature, the result is exactly the same as in GR, without new contributions from metric perturbations.
§ EQUATIONS OF MOTION IN UNIMODULAR GRAVITY
Different from <cit.>, where the unimodular constraint was introduced in the Einstein–Hilbert action through a Lagrange multiplier, this time we will obtain the UG equations of motion considering the following unimodular variation δ_u,
δ_ug^μν≡δ g^μν - 1/4g^μνg_αβδ g^αβ ,
and then, it follows that
g_μνδ_ug^μν = g_μν( δ g^μν - 1/4g^μνg_αβδ g^αβ) = g_μνδ g^μν - 1/4δ_λ^λg_μνδ g^μν= 0 .
The volume–preserving diffeomorphisms are satisfied under the unimodular variation δ_u, since when considering the variation of the determinant of the metric, we have
δ_u√(-g) = -1/2√(-g)g_μνδ_u g^μν = 0 ⇒ √(-g) = f ,
where f=f(x) is a nondynamical scalar density which depends on the coordinates, and it can always be set to unity.
The total action is,
S = S_EH + S_M = 1/2κ^2∫ d^4x√(-g)R + S_M ,
where S_EH is the Einstein–Hilbert action, and S_M is the action for the matter fields. The Ricci scalar is defined as R=g^μνR_μν, and then, the unimodular variation of the Einstein–Hilbert action is
δ_u S_EH = 1/2κ^2[∫ d^4x(δ_u √(-g))R + ∫ d^4x √(-g)(δ_u g^μν)R_μν + ∫ d^4x √(-g)g^μν(δ_u R_μν)] .
The first term is zero due to (<ref>), and the last term also vanishes because, as in GR, after some algebra it becomes a boundary contribution at infinity which can be set to zero <cit.>. Then, the variation of the Einstein–Hilbert action reduces to the second term only, which using Eq. (<ref>) is written as
δ_u S_EH = 1/2κ^2∫ d^4x √(-g)(δ_u g^μν)R_μν = 1/2κ^2∫ d^4x √(-g)(δ g^μν - 1/4g^μνg_αβδ g^αβ)R_μν ,
= ∫ d^4x √(-g)[1/2κ^2(R_μν - 1/4R g_μν)]δ g^μν .
For the matter content, we have the standard energy–momentum tensor definition but considering the unimodular variation, this is,
T_μν≡ -2/√(-g)δ_u S_M/δ_u g^μν
and then, we have
δ_u S_M = -1/2√(-g)T_μνδ_u g^μν = -1/2√(-g)T_μν(δ g^μν - 1/4g^μνg_αβδ g^αβ)
= -1/2√(-g)( T_μν - 1/4T g_μν)δ g^μν .
Therefore, the previous results from Eq. (<ref>) and (<ref>) give the following variation for the total action (<ref>),
δ_u S = ∫ d^4x √(-g)/2[1/κ^2(R_μν - 1/4R g_μν) -( T_μν - 1/4T g_μν)]δ g^μν = 0 ,
and thus, we obtain the UG field equations,
R_μν - 1/4R g_μν = κ^2( T_μν - 1/4T g_μν) ,
which are the trace–free version of the Einstein field equations. We can rewrite Eq. (<ref>) as follows,
R^μ_ ν - 1/2Rδ^μ_ν + 1/4(R + κ^2T)δ^μ_ν = κ^2 T^μ_ ν ,
and applying the Bianchi identities,
∇_μ( R^μ_ ν - 1/2Rδ^μ_ ν) + 1/4∇_ν(R + κ^2T) = κ^2 ∇_μT^μ_ ν ,
we notice that whereas the first term on the l.h.s. is identically zero, the covariant derivative of the energy–momentum tensor is no longer locally conserved,
κ^2∇_μT^μ_ ν = 1/4∂_ν(R + κ^2T) ≡ J_ν ,
where J_ν is the energy–momentum current violation. Integrating the expression from above, and replacing this result into Eq. (<ref>), we have
R_μν - 1/2Rg_μν + [ Λ + ∫ dx^α J_α(x) ] g_μν = κ^2 T_μν ,
where Λ is a constant of integration. Notice that, in the particular case when the energy–momentum tensor is conserved (J_ν = 0), Eq. (<ref>) coincides with the Einstein field equations of GR, and then, Λ is identified as the cosmological constant. Thus, within the framework of UG, the cosmological constant Λ arises naturally in the equation of motion as an integration constant when considering volume–preserving diffeomorphisms. Notwithstanding, in general we will have J_ν≠ 0, and non–gravitational interactions are allowed between different matter and energy components.
In summary, the UG field equations are given by
R_μν - 1/2Rg_μν + Λ(x) g_μν = κ^2 T_μν , with Λ(x) ≡Λ + ∫ dx^α J_α(x) ,
∇_μ T^μ_ ν = 1/κ^2J_ν , with J_ν≡1/4∂_ν(R + κ^2T) ,
where Λ(x) in Eq. (<ref>) is an effective cosmological constant which in general depends on the spacetime coordinates. We will focus on non–gravitational interactions only between dark matter and the effective cosmological constant through the energy–momentum current violation J_ν according to Eq. (<ref>).
§ LINEAR COSMOLOGICAL PERTURBATIONS IN UNIMODULAR GRAVITY
Let us write the metric, the energy–momentum tensor, the effective cosmological constant, and the energy–momentum current violation in the following way
g_μν = g̅_μν + h_μν , T_μν = T̅_μν + δ T_μν , Λ = Λ̅ + δΛ , J_μ = J̅_μ + δ J_μ ,
where the bar denotes quantities from the background, and h_μν , δ T_μν , δΛ , and δ J_μ are small fluctuations with respect to their corresponding background values. In the case of the background metric, we consider the flat FRW spacetime, whose components are
g̅_00 = -1 , g̅_0i = 0 , g̅_ij = a^2(t)δ_ij ,
g̅^00 = -1 , g̅^0i = 0 , g̅^ij = a^-2(t)δ_ij ,
while for the inverse of the metric perturbation we have
h^μν = -g̅^μαg̅^νβh_αβ ,
whose components are given by
h^00 = -h_00 , h^i0=a^-2h_i0 , h^ij=-a^-4h_ij
Notice that the determinant of the metric given by Eq. (<ref>) can be written at first order as
√(-g)≃√(-g̅)[ 1 + 1/2g̅^μνh_μν + 𝒪(h^2) ] = √(-g̅)( 1 - h_00/2 + a^-2h_ii/2) ,
and, whereas at zero order we recover the unimodular constraint (<ref>), at first order we have
-h_00 + a^-2h_ii = 0 .
The last expression will be important to be considered in the following analysis of the dynamics of small fluctuations, since it constitutes a new relation between the components of the perturbed metric that is not present in GR.
The Christoffel symbols are defined as
Γ^α_μν = 1/2g^αβ( ∂_νg_βμ + ∂_μg_βν - ∂_βg_μν) ,
and then, the non–null components are
Γ^0_00 = -ḣ_00/2 ,
Γ^0_i0 = ȧ/ah_i0-1/2∂_i h_00 ,
Γ^0_ij = aȧδ_ij + 1/2( 2aȧδ_ijh_00 - ∂_j h_i0 - ∂_i h_j0 + ḣ_ij) ,
Γ^i_00 = 1/2a^2( 2ḣ_i0 - ∂_i h_00) ,
Γ^i_j0 = ȧ/aδ_ij + 1/2a^2( -2ȧ/ah_ij + ḣ_ij + ∂_j h_i0 - ∂_i h_j0) ,
Γ^i_jk = 1/2a^2( -2aȧh_i0δ_jk + ∂_k h_ij + ∂_j h_ik - ∂_i h_jk) .
The Ricci tensor is,
R_μν = ∂_αΓ^α_μν - ∂_νΓ^α_αμ + Γ^α_αβΓ^β_μν - Γ^α_νβΓ^β_αμ ,
with components given by
R_00 = -3ä/a - ∇^2h_00/2a^2 - 3/2ȧ/aḣ_00 + ∂_iḣ_i0/a^2 -
1/2a^2{ḧ_ii - 2ȧ/aḣ_ii + 2[ ( ȧ/a)^2 - ä/a]h_ii} ,
R_0i = -ȧ/a∂_i h_00 - 1/2a^2( ∇^2 h_i0 - ∂_i ∂_j h_j0) + [ ä/a + 2( ȧ/a)^2 ]h_i0 - 1/2∂_t[ 1/a^2( ∂_i h_jj - ∂_j h_ji) ] ,
R_ij = ( aä + 2ȧ^2 )δ_ij + 1/2∂_i∂_j h_00 + ( 2ȧ^2 + aä)δ_ij h_00 + 1/2aȧδ_ijḣ_00 + 1/2ḧ_ij - ȧ/aδ_ij∂_k h_k0
-1/2a^2( ∇^2h_ij - ∂_k∂_j h_ki - ∂_k∂_i h_kj + ∂_i∂_j h_kk) - 1/2ȧ/a( ḣ_ij - δ_ijḣ_kk)
+( ȧ/a)^2( 2h_ij - δ_ijh_kk) - 1/2( ∂_i ḣ_j0 + ∂_j ḣ_i0) - 1/2ȧ/a( ∂_i h_j0 + ∂_j h_i0) ,
The Ricci scalar R = R̅ + δ R = g̅^μαR̅_αμ + g̅^μαδ R_αμ + h^μαR̅_αμ , is given by
R = 6[ (ȧ/a)^2 + ä/a] + 6[(ȧ/a)^2 + ä/a]h_00 + 3ȧ/aḣ_00 + ∇^2h_00/a^2
-2/a^2( 2ȧ/a∂_ih_i0 + ∂_iḣ_i0) -2/a^2[ (ȧ/a)^2 + ä/a]h_ii + ḧ_ii/a^2
-1/a^4( ∇^2h_ii - ∂_i∂_jh_ij) .
For the matter content we are going to be interested in the energy–momentum tensor of a perfect fluid, this is
T_μν = ( ρ + p )U_μU_ν + pg_μν ,
with
ρ = ρ + δρ , p = p + δ p , U^μ = ( 1+δ U^0, v^i ) , U_μ = (-1+δ U_0,v_i) .
where ρ is the energy density, p the pressure, and U^μ the four–velocity of the fluid, for which at the background level we have chosen the frame of comoving observers. The term v^i=δ U^i is the peculiar velocity, which is a small quantity of the same order as δρ and δ p. Notice that, due to the condition g_μνU^μ U^ν=-1, the time component of the four–velocity perturbation is δ U^0=δ U_0 = h_00/2. Thus, the components of the energy–momentum tensor in terms of the zeroth and first order perturbations for a perfect fluid are given by
T_00 = ρ̅ -ρ̅ h_00 + δρ ,
T_i0 = p̅ h_i0 - (ρ̅ + p̅)v_i ,
T_ij = a^2p̅δ_ij + p̅h_ij + a^2δ pδ_ij ,
whereas the components with mixed indices are
T^0_ 0 = -ρ̅ - δρ ,
T^0_ i = -(ρ̅ + p̅)v_i = -T^i_ 0 ,
T^i_ j = p̅δ^i_j + δ p δ^i_j .
Once the above expressions are inserted into the UG field equations (<ref>) for a spatially flat FRW universe, we obtain at zeroth order the background equations, i.e.,
H^2 - Λ̅(t)/3 = κ^2/3ρ̅ , Ḣ = -κ^2/2ρ̅( 1 + ω) ,
ρ̇̅̇ + 3Hρ̅( 1 + ω) = -J̅_0(t)/κ^2 ,
where H=ȧ/a is the Hubble parameter. Both Λ̅ and J̅_0 depend only on the cosmic time t due to homogeneity and isotropy. The energy density and the pressure for the matter fields are related by a constant equation of state ω≡p̅/ρ̅ (ω=0 for non–relativistic matter such as baryons and cold dark matter, and ω=1/3 for radiation).
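For later reference, the background system above is straightforward to integrate numerically once a form for J̄_0 is specified. The sketch below (in units κ^2 = 1, for dust, and with a toy current J̄_0 = -α H Λ̄ chosen purely for illustration, not one of the diffusion models discussed in the text) uses e-folds N = ln a as the evolution variable; setting α = 0 recovers the standard ΛCDM background.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch of the UG background equations, in units kappa^2 = 1 and for
# dust (omega = 0), written in e-folds N = ln a:
#   d(rho)/dN = -3 rho - J0/H ,   d(Lambda)/dN = J0/H ,   H^2 = (rho + Lambda)/3 .
# Toy current J0 = -alpha * H * Lambda (illustrative assumption only).
alpha = 0.05

def rhs(N, y):
    rho, Lam = y
    J0_over_H = -alpha * Lam              # J0 / H for the toy current
    return [-3.0 * rho - J0_over_H, J0_over_H]

N_span = (np.log(1e-2), 0.0)              # from a = 0.01 to a = 1
y0 = [0.9e6, 2.1]                         # dust scaling rho ~ a^-3 and Lambda, in H_0^2 ~ 1 units
sol = solve_ivp(rhs, N_span, y0, rtol=1e-10, atol=1e-12)
rho, Lam = sol.y
H = np.sqrt((rho + Lam) / 3.0)
print(Lam[-1] / Lam[0])                   # the effective Lambda drifts when J0 != 0
```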
On the other hand, following <cit.> the linear perturbations for Eq. (<ref>) are
δ R_μν - Λ̅h_μν - g̅_μνδΛ = κ^2( δ T_μν - 1/2g̅_μνδ T - 1/2h_μνT̅) ,
where T̅ is the trace of the background energy–momentum tensor, and δ T its perturbation,
T̅ = 3p̅ - ρ̅ = -6/κ^2[ ä/a + (ȧ/a)^2 -2/3Λ̅] , δ T = 3δ p - δρ .
The components of Eq. (<ref>) are given by
κ^2/2( δρ + 3δ p ) = -∇^2h_00/2a^2 - 3/2Hḣ_00 + ∂_iḣ_i0/a^2 - 3( H^2 + Ḣ)h_00
- 1/2a^2( ḧ_ii - 2Hḣ_ii - 2 Ḣ h_ii) + δΛ ,
-κ^2( ρ̅ + p̅)v_i = -H∂_i h_00 - 1/2a^2(∇^2h_i0 - ∂_i∂_jh_j0) - 1/2∂/∂ t[ 1/a^2(∂_i h_jj - ∂_j h_ji) ] ,
a^2/2κ^2( δρ - δ p )δ_ij = 1/2∂_i∂_j h_00 + (2ȧ^2+aä)δ_ijh_00 + 1/2aȧδ_ijḣ_00 - H/2(∂_ih_j0 + ∂_jh_i0)
- 1/2a^2( ∇^2h_ij - ∂_k∂_j h_ki - ∂_k∂_i h_kj + ∂_i∂_jh_kk ) + 1/2ḧ_ij
- H/2(ḣ_ij - δ_ijḣ_kk) -( H^2 + Ḣ)h_ij - H^2δ_ijh_kk - Hδ_ij∂_k h_k0
- 1/2(∂_iḣ_j0 + ∂_jḣ_i0) - a^2 δ_ijδΛ ,
and at first order, the (non) conservation of the energy–momentum tensor (<ref>) is
∂_μδ T^μ_ ν + Γ̅^μ_μαδ T^α_ ν + δΓ^μ_μαT̅^α_ ν - Γ̅^α_μνδ T^μ_ α - δΓ^α_μνT̅^μ_ α = δ J_ν/κ^2 .
We can simplify these expressions by decomposing the perturbations into scalars, divergenceless vectors, and divergenceless traceless symmetric tensors. The perturbation of the metric h_μν can be written as
h_00 = -E ,
h_i0 = a( ∂_i F + G_i ) ,
h_ij = a^2( Aδ_ij + ∂_i∂_j B + ∂_j C_i + ∂_i C_j + D_ij) ,
where A , B , E , F are scalar perturbations, C_i and G_i are vector perturbations, and D_ij are tensor perturbations. Particularly, C_i , G_i and D_ij satisfy
∂_i C_i = ∂_i G_i = 0 , ∂_i D_ij = 0 , D_ii = 0 .
The energy–momentum tensor can be decomposed in an analogous way, that is, we can rewrite Eq. (<ref>) as
δ T_00 = -ρ̅ h_00 + δρ ,
δ T_i0 = p̅h_i0 - (ρ̅ + p̅)(∂_i v + δ v_i^V) ,
δ T_ij = p̅h_ij + a^2( δ_ijδ p + ∂_i∂_j π^S + ∂_i π_j^V + ∂_jπ_i^V + π_ij^T ) ,
where we have decomposed the spatial part of the four–velocity perturbation as v_i ≡∂_i v + δ v_i^V, with ∂_i v the gradient of a scalar velocity potential, and δ v_i^V a divergenceless vector. The terms π^S , π^V , and π^T represent dissipative corrections to the perturbation of the inertia tensor δ T_ij. These quantities satisfy conditions similar to those of Eq. (<ref>)
∂_iπ_i^V = ∂_iδ v_i^V = 0 , ∂_iπ_ij^T = 0 , π_ii^T=0 .
Besides, the mixed components of the energy–momentum tensor (<ref>) are given by
δ T^0_ 0 = -δρ ,
δ T^i_ 0 = a^-2(ρ̅ + p̅)(a∂_i F + aG_i - ∂_i v - δ v_i^V) ,
δ T^0_ i = (ρ̅ + p̅)(∂_i v + δ v_i^V) ,
δ T^i_ j = δ_ijδ p + ∂_i∂_jπ^S + ∂_iπ_j^V + ∂_jπ_i^V + π_ij^T ,
δ T = 3δ p - δρ + ∇^2π^S .
As is the case in GR, in the linear regime of small fluctuations it is possible to separate the perturbations into three classes: scalar modes, vector modes, and tensor modes, which at linear order are completely independent of each other. We will focus on the scalar modes of the perturbations, for which Eq. (<ref>) is given by
κ^2(δρ + 3δ p + ∇^2π^S) = ∇^2E/a^2 + 3HĖ + 2/a∇^2Ḟ + 2H/a∇^2F - 3Ä - 6HȦ + 6(H^2+Ḣ)E
- 2H∇^2Ḃ - ∇^2B̈+ 2δΛ ,
while Eq. (<ref>) gives
-κ^2(ρ̅ + p̅)∂_i v = H∂_i E - ∂_i Ȧ ,
which is exactly the same as that of GR. Eq. (<ref>) can be separated in two parts: that proportional to δ_jk, and that proportional to ∂_j∂_k, which gives
κ^2(δρ - δ p - ∇^2π^S) = -HĖ - 2(3H^2 + Ḣ)E - ∇^2A/a^2 + Ä + 6HȦ + H∇^2Ḃ - 2H/a∇^2F -δΛ ,
0 = ∂_i∂_j(2κ^2a^2π^S + E + A - a^2B̈ - 3aȧḂ + 2aḞ + 4ȧF) .
On the other hand, the energy–momentum (non) conservation given by Eq. (<ref>) will be now written as
-δ J_0/κ^2 = δρ̇ + 3H(δρ + δ p) + ∇^2[ (ρ̅ + p̅)/a( v/a - F ) + Hπ^S ] + (ρ̅ + p̅)/2(3Ȧ + ∇^2Ḃ) ,
∂_iδ J^S/κ^2 = ∂_i{δ p + ∇^2π^S + ∂_t[ (ρ̅ + p̅)v ] + 3H(ρ̅ + p̅)v + (ρ̅ + p̅)/2E } ,
where we have decomposed the energy–momentum current violation perturbation in the same way as the other perturbed quantities, i.e., δ J_μ = (δ J_0 , δ J_i), and δ J_i = ∂_iδ J^S + δ J_i^V with ∂_iδ J_i^V=0. We consider only the scalar modes δ J_0 and δ J^S.
At the linear level in perturbations, the unimodular constraint on the determinant of the metric, Eq. (<ref>), amounts to the vanishing of the trace g̅^μνh_μν = -h_00 + h_ii/a^2, and can be written as
3A + ∇^2B + E = 0 ,
which coincides with that reported by <cit.> in their respective notations.
Notice that the Ricci scalar (<ref>) is then written as
R = 6[ ( ȧ/a)^2 + ä/a] - 6[ ( ȧ/a)^2 + ä/a]E - 3ȧ/aĖ - ∇^2E/a^2 - 6/aȧ/a∇^2F - 2/a∇^2Ḟ - 2/a^2∇^2A
+4ȧ/a( 3Ȧ + ∇^2Ḃ) + 3Ä + ∇^2B̈ ,
and the perturbed energy–momentum current violation δ J_μ = (1/4)∂_μ(δ R + κ^2δ T) is given by
δ J_μ = 1/4∂_μ{ - 6[ ( ȧ/a)^2 + ä/a]E - 3ȧ/aĖ - ∇^2E/a^2 - 6/aȧ/a∇^2F - 2/a∇^2Ḟ - 2/a^2∇^2A .
. + 4ȧ/a( 3Ȧ + ∇^2Ḃ)
+ 3Ä + ∇^2B̈ + κ^2( 3δ p - δρ)} .
From the above expression we notice that, besides being a function of the scalar metric perturbations A , B , E , F , as well as of the perturbed matter quantities δρ and δ p, the perturbed energy–momentum current violation satisfies
δ J_0 = ∂_0δ J^S .
The set of equations (<ref>)–(<ref>) constitute the relativistic linear perturbations equations to describe the evolution of small fluctuations of a perfect fluid in an expanding universe within the framework of UG.
§ FIXING THE GAUGE
The theory of General Relativity is invariant under diffeomorphisms, which means that the equations remain the same under general coordinate transformations. On the other hand, we established in the previous Section that the geometry is described by the sum of two metric tensors: one describing the background spacetime, g̅_μν, which we have fixed to be the FRW metric (<ref>), and another, h_μν, representing the small perturbations of the spacetime. Then, since the theory is invariant under diffeomorphisms and the background metric g̅_μν is fixed, the components of the metric tensor for the perturbations h_μν are not unique. In other words, we can choose how to fix the perturbations of the metric.
Consider the following coordinate transformation
x^μ→ x^'μ = x^μ + ϵ^μ(x) ,
with ϵ^μ(x) a small quantity of the same order as the other perturbations h_μν , δρ , etc., and primes ( ^' ) labeling the change of coordinates. Whereas in GR the 4–vector ϵ^μ=(ϵ^0 , ϵ^i) is arbitrary, in the case of UG it satisfies
∇_μϵ^μ = 0 ,
which reflects the rigidity of the spacetime volume under the unimodular condition. Notice that ϵ^0=-ϵ_0 and ϵ^i = a^-2ϵ_i. Developing the above expression we have
∇_μϵ^μ = ∂_μϵ^μ + Γ̅^μ_μνϵ^ν= ϵ̇_0 - ∂_iϵ_i/a^2 + 3Hϵ_0 = 0 .
Additionally, with the coordinate transformation (<ref>) the metric will transform as
g^'_μν(x^') = g_λκ∂ x^λ/∂ x^'μ∂ x^κ/∂ x^'ν .
Since we are in a scenario in which only the perturbed metric is affected by a coordinate transformation (the unperturbed metric is given by the FRW line element), we implement gauge transformations and attribute the whole change in g_μν to a change in h_μν. Therefore, any change Δ h_μν(x) of the metric perturbation, of the form h_μν(x) → h_μν(x) + Δ h_μν(x), must leave the field equations invariant[This is the gravitational analogue of the electromagnetic potentials φ and A⃗: under gauge transformations both the electric field E⃗ and the magnetic field B⃗ remain the same, leaving the Maxwell equations invariant.].
The change on the perturbation is defined as follows
Δ h_μν(x) ≡ g^'_μν(x) - g_μν(x) ,
whose components, once Eq. (<ref>) is inserted and the result is expanded up to first order in the perturbations, read
Δ h_00 = -2ϵ̇_0 ,
Δ h_i0 = -ϵ̇_i - ∂_i ϵ_0 + 2Hϵ_i ,
Δ h_ij = - ∂_i ϵ_j -∂_j ϵ_i + 2aȧδ_ijϵ_0 .
Analogous to Δ h_μν(x), the change on the perturbation of the energy–momentum tensor will be
Δδ T_00 = 2ρ̅ϵ̇_0 + ρ̇̅̇ϵ_0 ,
Δδ T_i0 = -p̅ϵ̇_i + ρ̅∂_i ϵ_0 + 2p̅Hϵ_i ,
Δδ T_ij = -p̅( ∂_i ϵ_j + ∂_j ϵ_i ) + ∂/∂ t(a^2p̅)δ_ijϵ_0 .
Following the same procedure, but this time applied to the energy–momentum current violation, we have
Δδ J_μ(x) = -J̅_λ(x)∂ϵ^λ/∂ x^μ - ∂J̅_μ/∂ x^λϵ^λ ,
whose components are given by
Δδ J_0 = 2J̅_0ϵ̇_0 + J̅_i/a^2( 2Hϵ_i - ϵ̇_i ) .
Δδ J_i = J̅_0∂_iϵ_0 + J̇̅̇_iϵ_0 - a^-2( J̅_j∂_iϵ_j + ϵ_j∂_j J̅_i ) .
Since we have chosen comoving observers for the background (see the four–velocity in Eq. (<ref>)), it can be shown that the energy–momentum current violation is given by
J̅_μ = 1/4∇_μ( R̅ + κ^2T̅) = 1/4∇_μ[ 4Λ̅(t) ] ⇒ J̅_μ = [ Λ̇̅̇(t) , 0 , 0 , 0 ] ,
and then Eq. (<ref>) reduces to the simpler form
Δδ J_0 = 2J̅_0ϵ̇_0 , Δδ J_i = J̅_0∂_iϵ_0 .
To be able to classify these gauge transformation into scalar, vector and tensor components, let us decompose the spatial part of ϵ^μ into the gradient of a scalar ϵ^S and a divergenceless vector ϵ_i^V as follows
ϵ_i = ∂_i ϵ^S + ϵ_i^V , with ∂_iϵ_i^V = 0 ,
and then, from Eq. (<ref>) we obtain
ϵ̇_0 - ∇^2ϵ^S/a^2 + 3Hϵ_0 = 0 .
Therefore, Eq. (<ref>) and (<ref>) give the gauge transformations of the metric components (<ref>) and energy–momentum tensor (<ref>) respectively, and the scalar modes of the coordinate transformation obey (<ref>). For the metric perturbation we have
Δ A = 2ȧ/aϵ_0 , Δ B = -2/a^2ϵ^S , Δ C_i = -1/a^2ϵ_i^V , Δ D_ij = 0 , Δ E = 2ϵ̇_0 ,
Δ F = 1/a( -ϵ_0 - ϵ̇^S + 2ȧ/aϵ^S ) , Δ G_i = 1/a( -ϵ̇_i^V + 2ȧ/aϵ_i^V ) ,
while for the energy–momentum tensor, the gauge transformations are given by
Δδρ = ρ̇̅̇ϵ_0 , Δδ p = ṗ̅̇ϵ_0 , Δ v = -ϵ_0 ,
and the other terms are gauge invariant, that is
Δπ^S = Δπ_i^V = Δ_ij^T = Δδ u_i^V = 0 .
With expressions (<ref>) and (<ref>) we can fix the gauge, that is, we can choose particular values of the components of ϵ^μ(x) to close the system of equations unambiguously. As said before, our interest lies in the scalar perturbations, in which case the most general line element is written as
ds^2 = -(1+E)dt^2 + 2a∂_iFdtdx^i + a^2[ (1+A)δ_ij + ∂_i∂_j B ]dx^idx^j ,
and there are several choices we can consider to fix them. We want to analyze two gauges that are broadly used in the literature for cosmological perturbations: Newtonian gauge and Synchronous gauge. For a detailed study of these gauges in GR see <cit.>.
§.§ “Newtonian” gauge: B^'=0 and F^'=0
The gravitational potentials E , F , A , B in (<ref>) are general non–null solutions of the perturbed cosmological equations (<ref>),(<ref>),(<ref>), and (<ref>). In this gauge, also known as the conformal or longitudinal gauge, we choose ϵ^S such that B^'=0 and then ask for ϵ_0 such that F^'=0, where primed quantities label gravitational potentials in the new coordinates. First, let us show that this gauge can be fixed unambiguously, i.e., that once the conditions above are satisfied, any further coordinate transformation must have vanishing scalar components ϵ_0 and ϵ^S. From (<ref>) we have
Δ B = B^'-B = -2/a^2ϵ^S ,
Δ F = F^'-F = 1/a( -ϵ_0 - ϵ̇^S + 2ȧ/aϵ^S ) .
Solving for ϵ_0 and ϵ^S from (<ref>) we obtain that the conditions B^'=0 and F^'=0 are satisfied when
ϵ^S(t,x⃗) = 1/2a^2(t)B(t,x⃗) , ϵ_0(t,x⃗) = a(t)F(t,x⃗) + 1/2a^2(t)Ḃ(t,x⃗) .
Now, performing a new coordinate transformation ϵ̃^μ(x) and requiring that we remain in the Newtonian gauge, that is, choosing ϵ̃^S such that B^''=0 and ϵ̃_0 such that F^''=0, we have
Δ B = B^''-B^' = -2/a^2ϵ̃^S ,
Δ F = F^''-F^' = 1/a( -ϵ̃_0 - ϵ̇̃̇^S + 2ȧ/aϵ̃^S ) ,
where, since we already have B^'=F^'=0, the only possible choice of coordinate transformation is ϵ̃_0 = 0 and ϵ̃^S = 0, so that the remaining variables are fully determined. Therefore, from (<ref>) the scalar gravitational potentials satisfy:
Δ A = A^''-A^' = 2ȧ/aϵ̃_0 = 0 ⇒ A^'' = A^'≠ 0 ,
Δ B = B^''-B^' =-2/a^2ϵ̃^S = 0 ⇒ B^'' = B^' = 0 ,
Δ E = E^''-E^' = 2ϵ̇̃̇_0 = 0 ⇒ E^'' = E^'≠ 0 ,
Δ F = F^''-F^' = 1/a( -ϵ̃_0 - ϵ̇̃̇^S + 2ȧ/aϵ̃^S )= 0 ⇒ F^'' = F^' = 0 ,
and then the only non–null gravitational potentials in this gauge are A and E. Thus, this gauge is completely fixed, and there is no remaining freedom to make any additional transformation. The unimodular condition (<ref>) did not need to be imposed in order to fix this gauge; however, it will be taken into account in the equations of motion.
§.§ “Synchronous” gauge: E^'=0 and F^'=0
The choice for this gauge consists in fixing ϵ_0 such that E^'=0, and then choosing ϵ^S such that F^'=0. Using (<ref>) we have
Δ E = E^' - E = 2ϵ̇_0 ,
Δ F = F^'-F = 1/a( -ϵ_0 - ϵ̇^S + 2ȧ/aϵ^S ) ,
from where it is obtained
ϵ_0(t,x⃗) = f_1(x⃗) - 1/2∫ E(t,x⃗)dt , ϵ^S(t,x⃗) = a^2(t)[f_2(x⃗) - ∫a(t)F(t,x⃗)+ϵ_0(t,x⃗)/a^2(t)dt] .
Now, considering a new coordinate transformation ϵ̃^μ(x), and again requiring to remain in the synchronous gauge choosing ϵ̃_0 such that E^''=0 and ϵ̃^S such that F^''=0, we have
Δ E = E^''-E^' = 2ϵ̇̃̇_0 ,
Δ F = F^''-F^' = 1/a( -ϵ̃_0 - ϵ̇̃̇^S + 2ȧ/aϵ̃^S ) .
This time, it is found that
ϵ̃_0(x⃗) = f_3(x⃗) , ϵ̃^S(t,x⃗) = a^2(t)[f_4(x⃗) - ϵ̃_0(x⃗)∫dt/a^2(t)] .
In order to completely fix the synchronous gauge, we have to determine in some way the arbitrary scalar functions f_3 and f_4. We can perform a new coordinate transformation, but it can be proved that successive gauge transformations lead to the same mathematical structure for (ϵ̃_0, ϵ̃^S), with a new pair of spatial functions. For instance, it can be shown that a third gauge transformation (ϵ̃̃̃_0,ϵ̃̃̃^S) yields
ϵ̃̃̃_0(x⃗) = f_5(x⃗) , ϵ̃̃̃^S(t,x⃗) = a^2(t)[f_6(x⃗) - ϵ̃̃̃_0(x⃗)∫dt/a^2(t)] .
Therefore, we are always left with two arbitrary spatial functions, and the synchronous gauge remains ambiguous. Nevertheless, the spatial functions in ϵ̃^S, ϵ̃̃̃^S, and so on affect only the initial coordinate labelling; it is only the spatial function in the time components ϵ̃_0, ϵ̃̃̃_0, etc., which remains as a spurious degree of freedom and will have repercussions on physical quantities if it is not properly determined <cit.>. We can then safely keep the coordinate transformations (<ref>) and (<ref>) as
ϵ̃_0(x⃗) = f_3(x⃗) , ϵ̃^S(t,x⃗) = -a^2(t) ϵ̃_0(x⃗)∫dt/a^2(t) ,
ϵ̃̃̃_0(x⃗) = f_5(x⃗) , ϵ̃̃̃^S(t,x⃗) = -a^2(t) ϵ̃̃̃_0(x⃗)∫dt/a^2(t) ,
respectively, and so on for successive coordinate transformations. Then, we have to deal with only one arbitrary function. Notice that if f_5=0, then both ϵ̃̃̃_0=0 and ϵ̃̃̃^S=0, and the gauge is completely fixed. The standard approach in GR to handle this coordinate ambiguity is to move to the CDM frame of reference, i.e., to choose a coordinate transformation comoving with the CDM fluid. From Eq. (<ref>), and for the GR case where δ J^S=0, we have
δ p^' + ∇^2π^S ' + ∂_t[ (ρ̅ + p̅)v^'] + 3H(ρ̅ + p̅)v^' + (ρ̅ + p̅)/2E^' = 0 ,
which, in the synchronous gauge, where the new coordinates are chosen such that E^'=0, becomes
δ p^' + ∇^2π^S ' + ∂_t[ (ρ̅ + p̅)v^'] + 3H(ρ̅ + p̅)v^' = 0 ,
For the CDM fluid there is neither pressure nor anisotropic stress, so the previous equation applied to CDM is
ρ̇̅̇_CDMv^'_CDM + ρ̅_CDMv̇^̇'̇_CDM +3Hρ̅_CDMv^'_CDM = 0 .
In the case of GR, the energy–momentum conservation at background level for CDM is (see Eq. (<ref>) with J̅_0 = 0),
ρ̇̅̇_CDM = -3Hρ̅_CDM ,
so that Eq. (<ref>) reduces to
ρ̅_CDMv̇^̇'̇_CDM = 0 ⇒ v^'_CDM = f(x⃗) ,
and then, in GR, the CDM peculiar velocity is a function of the spatial coordinates only. This is a crucial result for completely fixing the synchronous gauge in GR, because now we can consider the coordinate transformation of v_CDM as follows: from (<ref>) we have
Δ v_CDM = v_CDM^' - v_CDM = -ϵ_0 ⇒ v_CDM^' = v_CDM -ϵ_0 ,
where ϵ_0 is given by (<ref>). After a new change of coordinates, we have
Δ v_CDM = v_CDM^'' - v_CDM^' = -ϵ̃_0 ⇒ v_CDM^'' = v_CDM^' -ϵ̃_0 ,
but, as we already saw from Eqs. (<ref>) and (<ref>), both v_CDM^' and ϵ̃_0 are functions of the spatial coordinates only. Thus, we can choose ϵ̃_0 = v_CDM^' in order to have v_CDM^'' = 0, and then we will be in a reference frame comoving with the CDM fluid. Now, under a new change of coordinates, and requiring that we remain in the CDM fluid reference frame, which is given by the condition v_CDM^'''=0, we obtain
Δ v_CDM = v_CDM^''' - v_CDM^'' = -ϵ̃̃̃_0 ⇒ v_CDM^''' = v_CDM^'' -ϵ̃̃̃_0 ,
but since v_CDM^''' = v_CDM^'' = 0, the only possible choice is ϵ̃̃̃_0=0. This completely fixes the synchronous gauge in GR, as can be seen from Eq. (<ref>).
Whereas in GR this gauge is fixed as shown above, in UG the scenario is completely different: the energy–momentum current violation J_μ makes it impossible to move to the CDM reference frame. From Eq. (<ref>), we have
δ p^' + ∇^2π^S ' + ∂_t[ (ρ̅ + p̅)v^'] + 3H(ρ̅ + p̅)v^' + (ρ̅ + p̅)/2E^' = δ J^S '/κ^2 .
In the synchronous gauge, and for the CDM fluid, the previous equation is written as
ρ̇̅̇_CDMv^'_CDM + ρ̅_CDMv̇^̇'̇_CDM +3Hρ̅_CDMv^'_CDM = δ J^S '/κ^2 ,
but now, in UG the energy–momentum conservation at background level for CDM is (see Eq. (<ref>)),
ρ̇̅̇_CDM = -3Hρ̅_CDM -J̅_0(t)/κ^2 ,
which leads to
v̇^̇'̇_CDM = J̅_0(t)v^'_CDM + δ J^S '/κ^2ρ̅_CDM ,
and then, in UG it is no longer true in general that v^'_CDM is a function of spatial coordinates only. In fact, by solving Eq. (<ref>) we obtain
v^'_CDM = g(x⃗)e^1/κ^2∫J̅_0(t)/ρ̅_CDM(t)dt[1 + 1/κ^2g(x⃗)∫e^-1/κ^2∫J̅_0(t)/ρ̅_CDM(t)dtδ J^S '/ρ̅_CDM(t)dt ] .
Besides, notice that it is not possible to choose a coordinate system without perturbations of the energy–momentum current violation, as we can see from Eq. (<ref>) and the fact that none of ϵ_0, ϵ̃_0, ϵ̃̃̃_0, etc., can be set to zero. Moreover, we are left with an arbitrary function g(x⃗) that must be determined. Therefore, within the framework of UG, the cosmological perturbations in the synchronous gauge do not allow one to choose the CDM comoving frame, due to the presence of the energy–momentum current violation J̅_0 and its scalar perturbation δJ^S. Thus, this gauge cannot be fixed in the same way as in GR.
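Before moving on, the solution quoted above can be verified symbolically. The following sympy sketch checks that it satisfies v̇^'_CDM = (J̅_0 v^'_CDM + δ J^S ')/(κ^2ρ̅_CDM); the functions ρ̅_CDM(t), J̅_0(t) and δ J^S '(t) are kept generic (unevaluated), so only the structure of the solution is assumed here.

import sympy as sp

t = sp.symbols('t', positive=True)
kappa, g = sp.symbols('kappa g', positive=True)
rho = sp.Function('rho')(t)       # background CDM density (generic)
Jbar0 = sp.Function('J0')(t)      # background current violation (generic)
dJS = sp.Function('deltaJS')(t)   # scalar perturbation of the current (generic)

F = sp.Integral(Jbar0 / rho, t) / kappa**2
v = g * sp.exp(F) * (1 + sp.Integral(sp.exp(-F) * dJS / rho, t) / (kappa**2 * g))

residual = sp.diff(v, t) - (Jbar0 * v + dJS) / (kappa**2 * rho)
print(sp.simplify(residual))      # expected output: 0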
We can instead think of using the unimodular constraint for perturbations (<ref>): in the synchronous gauge it reads
3A^''' = -∇^2B^''' .
For the gauge transformations (<ref>), it is written as
∇^2ϵ̃̃̃^S(t,x⃗) = 3a(t)ȧ(t)ϵ̃̃̃_0(x⃗) ,
which, using Eq. (<ref>), gives
[∫dt/a^2(t) ]∇^2f_5(x⃗) = -[3ȧ(t)/a(t)]f_5(x⃗) .
The above differential equation can be solved by separation of variables, and we have
∇^2f_5(x⃗)/f_5(x⃗) = 𝒞 , -[3ȧ(t)/a(t)∫ dt/a^2(t)]=𝒞 ,
with 𝒞 a constant. The second expression is an integro–differential equation for the scale factor a(t), which can be written as
da(t)/dt=-𝒞a(t)/3∫ dt/a^2(t) ⇒ Ḣ = -𝒞/3a^2(t) ,
which demands a very particular solution of the background equations (<ref>) that need not be a physical solution for the expansion of a universe filled with a given matter content. Moreover, since the scale factor is a solution of the background dynamics, it will not necessarily satisfy (<ref>). In fact, such an equation is far too restrictive to govern the expansion of the universe. For instance, we can consider a general power-law solution a(t) = (t/t_0)^p, where t_0 is the present day. It can be shown that this solution is valid only when p=2 and 𝒞=18/t_0^2. However, this constitutes a very particular background evolution for the dynamics of any matter component, as we mentioned above. Doing the same for a late-time solution within the UG framework (see Appendix B in <cit.>), where the scale factor is given by a(t) = [Asinh^2(Bt)]^C with A, B, C constants, we find that it is not possible to satisfy (<ref>).
On the other hand, the first expression in Eq. (<ref>), is the Poisson equation with source term given by the function itself,
∇^2f_5(x⃗) = 𝒞f_5(x⃗) .
Thus, the only way to have a scale factor driven by the background dynamics, and simultaneously the unimodular condition being satisfied at the level of perturbations in this gauge (<ref>), is through the trivial solution f_5(x⃗) = 0. But this is precisely what we need to fix the synchronous gauge, as can be seen from Eq. (<ref>). Therefore, when considering non–gravitational interactions it is not possible to consistently fix the synchronous gauge in UG by choosing the comoving frame of CDM. Instead, the unimodular condition appears to be useful to completely fix this gauge. This has serious repercussions on the possibility of implementing cosmological models based on UG in Boltzmann solvers such as <cit.> and <cit.>, as we will discuss later.
§.§ Alternative gauge choice: B^'=0 and unimodular constraint
This is the approach implemented by <cit.> with the aim of keeping two geometric degrees of freedom, just as for the perturbations in GR. Moreover, in that work B^'=0 is chosen in order to compare with the Newtonian gauge of GR. Then, the line element under this choice can be written as
ds^2 = -(1-3A^')dt^2 + 2a∂_iF^'dtdx^i + a^2(1+A^')δ_ijdx^idx^j ,
where the only degrees of freedom are A^' and F^'. However, we will show that this choice does not determine these gravitational potentials unambiguously, and spurious effects due to this improperly fixed gauge will affect physical quantities such as the energy density and pressure perturbations.
Similarly to the previous procedures, and as in the Newtonian gauge, we ask for ϵ^S such that B^'=0 (see Eqs. (<ref>) and (<ref>)), that is
Δ B = B^'-B = -2/a^2ϵ^S ⇒ ϵ^S(t,x⃗) = a^2/2B(t,x⃗) .
Now, instead of asking for ϵ_0 such that F^'=0, we follow the approach of <cit.> by imposing the unimodular condition (<ref>), which in terms of the scalar components of the coordinate transformation ϵ^μ is written as
ϵ̇_̇0̇ + 3ȧ/aϵ_0 = 0 ⇒ ϵ_0(t,x⃗) = f_1(x⃗)/a^3(t) .
It can be seen that we are left with an arbitrary spatial function f_1(x⃗). A new coordinate transformation ϵ̃^μ with B^''=0 leads to
Δ B = B^''-B^' = -2/a^2ϵ̃^S ⇒ ϵ̃^S(t,x⃗) = 0 ,
and the remaining scalar component ϵ̃_0 must be zero to completely fix the gauge. However, once the unimodular condition is imposed one more time, we have
ϵ̇̃̇_0 + 3ȧ/aϵ̃_0 = 0 ⇒ ϵ̃_0(t,x⃗) = f_2(x⃗)/a^3(t) ,
and then we still have an arbitrary spatial function f_2(x⃗). It can be shown that successive coordinate transformations lead to the same result, i.e., ϵ̃̃̃^S = 0, but ϵ̃̃̃_0 is always given in terms of an arbitrary spatial function. This gauge freedom affects not only the remaining gravitational potentials, which after such transformations are
F^''(t,x⃗) = F(t,x⃗) - a/2Ḃ(t,x⃗) - f_3(x⃗)/a^4 ,
A^''(t,x⃗) = A(t,x⃗) + 2ȧ/a^4f_3(x⃗) ,
with f_3 = f_1+f_2, but also the energy density perturbation, which from Eq. (<ref>) is
δρ^'' = δρ + ρ̇̅̇/a^3(t)f_3(x⃗) ,
and similarly for both the pressure perturbation Δδ p and the peculiar velocity Δ v transformations. Moreover, the Sachs–Wolfe effect <cit.> has been derived in UG in <cit.> in order to spot differences between GR and UG through possible signatures in the anisotropies of the CMB. In our notation, they obtain
( -3A^''/2 + δ T/T̅ + aḞ^'') = ctte ,
but, as shown in Eq. (<ref>), neither gravitational potential is completely determined, due to the arbitrary spatial function f_3(x⃗). Therefore, even if the differences between GR and UG obtained in <cit.> are negligible under the assumptions considered there, it is important to study physical observables such as the CMB radiation with a proper gauge choice, free of spurious degrees of freedom. This will be discussed in detail in the next Section.
§ PHYSICAL IMPLICATIONS OF COSMOLOGICAL PERTURBATIONS IN UG
Now that we have fixed both the Newtonian and the synchronous gauge in UG, we are able to write down the dynamical equations for the linear perturbations in each of these gauges. As we will see below, the unimodular condition (<ref>) reduces the degrees of freedom from two gravitational potentials to only one. It is possible to find solutions for the density contrasts in each of these gauges in terms of the corresponding metric scalar perturbation. Besides, we obtain a proper derivation of the Sachs–Wolfe effect within the UG framework in the Newtonian gauge, and we show that there is only a modification in the coefficient of the time derivative of the gravitational potential, while the physical result is exactly the same as that obtained in GR.
§.§ Linear perturbations: Newtonian gauge
The line element (<ref>) is written as[In order to keep the notation simple, we drop the primes (^'), since we already know that it is possible to find a consistent coordinate transformation in which E and A are the only physical degrees of freedom of the gravitational perturbations, prior to imposing the unimodular constraint.],
ds^2 = -(1+E)dt^2 + a^2 (1+A)δ_ijdx^idx^j ,
but we will use the standard notation for E and A in this gauge, which is given by E≡ 2Φ, and A≡ -2Ψ, and then, the perturbed line element (<ref>) takes the form
ds^2 = -(1+2Φ)dt^2 + a^2 (1-2Ψ)δ_ijdx^idx^j .
The evolution of perturbations given by Eq. (<ref>)–(<ref>) are then written as
κ^2/2(δρ + 3δ p + ∇^2π^S) = ∇^2Φ/a^2 + 3HΦ̇ + 3Ψ̈ + 6HΨ̇ + 6(H^2+Ḣ)Φ + δΛ ,
-κ^2/2(ρ̅ + p̅)∂_i v = H∂_iΦ + ∂_iΨ̇ ,
-κ^2/2(δρ - δ p - ∇^2π^S) = HΦ̇ + 2(3H^2 + Ḣ)Φ - ∇^2Ψ/a^2 + Ψ̈ + 6HΨ̇ + δΛ/2 ,
κ^2a^2∂_i∂_j π^S = ∂_i∂_j(Ψ - Φ) ,
and Eq. (<ref>) for the energy–momentum tensor becomes
δ̇ρ̇ + 3H(δρ + δ p) + ∇^2[ (ρ̅ + p̅)/a^2v + Hπ^S ] - 3(ρ̅ + p̅)Ψ̇ = -δ J_0/κ^2 ,
δ p + ∇^2π^S + ∂_t[ (ρ̅ + p̅)v ] + 3H(ρ̅ + p̅)v + (ρ̅ + p̅)Φ = δ J^S/κ^2 .
We have 6 equations, (<ref>)–(<ref>), and 6 variables to determine: δρ, δ p, π^S, v, Φ, and Ψ. In particular, the potentials Φ and Ψ differ by the scalar anisotropic term π^S, as can be seen from Eq. (<ref>). In the particular case of a perfect fluid without dissipative corrections, π^S=0 and we obtain Φ = Ψ. However, in UG the choice of coordinates such that B=F=0 leads to the following unimodular constraint for the perturbations (<ref>)
3Ψ = Φ ,
and the previous equations acquire the form
κ^2/6(δρ + 3δ p + ∇^2π^S) = ∇^2Ψ/a^2 + 4HΨ̇ + Ψ̈ + 6(H^2+Ḣ)Ψ + δΛ/3 ,
-κ^2/2(ρ + p)∂_i v = 3H∂_iΨ + ∂_iΨ̇ ,
-κ^2/2(δρ - δ p - ∇^2π^S) = 6(3H^2 + Ḣ)Ψ - ∇^2Ψ/a^2 + Ψ̈ + 9HΨ̇ + δΛ/2 ,
-κ^2/2a^2∂_i∂_j π^S = ∂_i∂_j Ψ ,
-δ J_0/κ^2 = δ̇ρ̇ + 3H(δρ + δ p) + ∇^2[ (ρ̅ + p̅)/a^2v + Hπ^S ] - 3(ρ̅ + p̅)Ψ̇ ,
δ J^S/κ^2 = δ p + ∇^2π^S + ∂_t[ (ρ̅ + p̅)v ] + 3H(ρ̅ + p̅)v + 3(ρ̅ + p̅)Ψ ,
and the perturbation for the energy–momentum current violation in this gauge is
δ J_μ = 1/4∂_μ{ - 36[ ( ȧ/a)^2 + ä/a]Ψ - 44ȧ/aΨ̇ - 2∇^2Ψ/a^2 - 6Ψ̈ + κ^2( 3δ p - δρ)} .
Notice that we need the presence of the scalar anisotropic stress π^S in order to have a non–null gravitational potential Ψ, as can be seen from Eq. (<ref>). In this sense, strictly speaking, the Newtonian gauge is not recovered in UG. This was already reported in the previous literature, and recently by <cit.>. However, differently from the approach of that work, we keep the anisotropic stress term in order to find solutions for physical quantities, such as the CDM density contrast δ_CDM≡δρ_CDM/ρ̅_CDM. We then have p̅_CDM = δ p_CDM = 0 but, differently from the standard ΛCDM model, the dark matter component has π^S_CDM≠ 0. Combining the previous equations, it can be shown that in a matter–dominated era the CDM density contrast in UG in the Newtonian gauge, δ_CDM(new)^UG, is given by
δ_cdm(new)^UG = (4/3)(-2/HΨ̇) - 2[ 1 + (1/6)(k^2/3a^2H^2) ](3)Ψ ,
and it can be seen that it differs only by numerical factors from the GR result (see Eq.(12) in <cit.>),
δ_cdm^GR = -2/HΨ̇ - 2(1 + k^2/3a^2H^2)Ψ ,
where we have used ∇^2→ -k^2 for solutions in Fourier space. Thus, once the gravitational potential Ψ is known, it is possible to follow the cosmological evolution of the CDM fluctuations. Moreover, if a particular model for the non–gravitational interaction is considered at the background level, such information enters the Hubble parameter H through the Friedmann equation (<ref>), and constraints could be put on cosmological models of UG by studying the large-scale structure (LSS) of the universe through the Matter Power Spectrum (MPS). Also notice that, even when the energy–momentum current violation is neglected and GR is recovered at the background level, the unimodular constraint at the level of linear perturbations changes the evolution of the CDM fluctuations, as can be seen by comparing the coefficients of Eqs. (<ref>) and (<ref>).
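To get a feel for the different numerical coefficients in the two expressions above, the following sketch simply evaluates both of them on a matter–dominated background, assuming a constant potential Ψ as a GR-inspired toy input (in UG, Ψ(t) must of course be obtained from the perturbation equations themselves); the values of Ψ, k and the time range are arbitrary choices of this example.

import numpy as np

Psi = 1e-5                        # assumed constant potential (GR-inspired toy input)
Psi_dot = 0.0
k = 0.1                           # comoving wavenumber, arbitrary units
t = np.linspace(1.0, 100.0, 200)
a = t**(2.0 / 3.0)                # matter-dominated background
H = 2.0 / (3.0 * t)
x = k**2 / (3.0 * a**2 * H**2)

delta_GR = -2.0 / H * Psi_dot - 2.0 * (1.0 + x) * Psi
delta_UG = (4.0 / 3.0) * (-2.0 / H * Psi_dot) - 2.0 * (1.0 + x / 6.0) * (3.0 * Psi)
print("delta_UG / delta_GR at the final time:", delta_UG[-1] / delta_GR[-1])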
§.§ Linear perturbations: Synchronous gauge
In this case the perturbed line element is written as
ds^2 = -dt^2 + a^2[ (1+A)δ_ij + ∂_i∂_j B ]dx^idx^j .
The field equations (<ref>)–(<ref>) in this gauge are given by
κ^2(δρ + 3δ p + ∇^2π^S) = -3Ä - 6HȦ - ∇^2B̈ - 2H∇^2Ḃ + 2δΛ ,
κ^2(ρ̅ + p̅)v = Ȧ ,
κ^2(δρ - δ p - ∇^2π^S) = -∇^2A/a^2 + Ä + 6HȦ + H∇^2Ḃ - δΛ ,
2κ^2a^2π^S = -A + a^2B̈ + 3aȧḂ ,
and Eq. (<ref>) for the energy–momentum tensor now takes the form
δ̇ρ̇ + 3H(δρ + δ p) + ∇^2[ (ρ̅ + p̅)/a^2v + Hπ^S ] + (ρ̅ + p̅)/2(3Ȧ + ∇^2Ḃ) = -δ J_0/κ^2 .
δ p + ∇^2π^S + ∂_t[ (ρ̅ + p̅)v ] + 3H(ρ̅ + p̅)v = δ J^S/κ^2 ,
and the energy–momentum current violation perturbation (<ref>) is given by
δ J_μ = 1/4∂_μ[ -2/a^2∇^2A + 4ȧ/a( 3Ȧ + ∇^2Ḃ) + 3Ä + ∇^2B̈ + κ^2( 3δ p - δρ) ] .
However, in UG we have that the choice of coordinates of Section <ref> leads to the following unimodular constraint (<ref>) for perturbations
3A = -∇^2B ,
and the previous equations are written as follows
κ^2(δρ + 3δ p + ∇^2π^S) = 2δΛ ,
κ^2(ρ̅ + p̅)v = Ȧ ,
κ^2(δρ - δ p - ∇^2π^S) = -∇^2A/a^2 + Ä + 3HȦ - δΛ ,
2κ^2π^S = ∇^2B/3a^2 + B̈ + 3HḂ ,
-δ J_0/κ^2 = δ̇ρ̇ + 3H(δρ + δ p) + ∇^2[ (ρ̅ + p̅)/a^2v + Hπ^S ] ,
δ J^S/κ^2 = δ p + ∇^2π^S + ∂_t[ (ρ̅ + p̅)v ] + 3H(ρ̅ + p̅)v ,
with Eq. (<ref>) now written as
δ J_μ = 1/4∂_μ[ -2/a^2∇^2A + κ^2( 3δ p - δρ) ] .
As we have done in the previous case, it is possible to show that in a matter–dominated era the CDM density contrast in UG for the Synchronous gauge δ_CDM(syn)^UG is given by
δ_cdm(syn)^UG = 2k^2/7a^2H^2A ,
where again we have used ∇^2→ -k^2 for solutions in Fourier space. Then, once the solution for the gravitational potential A is known, we have the cosmological evolution of the density contrast.
§.§ Sachs–Wolfe effect: a proper derivation in UG
The approach followed by <cit.> leads to an expression that modifies the GR result by a new term (see Eq. (<ref>)). However, we have shown in Section <ref> that such a new term is not unambiguously determined, because the gauge is not properly fixed. In what follows, we present the derivation of the Sachs–Wolfe effect for UG in the Newtonian gauge.
We start by setting the line element for the Newtonian gauge (<ref>):
ds^2 = -(1+2Φ)dt^2 + a^2 (1-2Ψ)δ_ijdx^idx^j ,
where we keep both gravitational potentials in order to compare the final expression with GR, and only at the end of the procedure do we use the unimodular constraint (<ref>). Following <cit.>, it can be shown that the Boltzmann equation for photons at linear order in UG is given by
∂/∂ t(δ T/T̅) + p̂^i/a∂/∂ x^i(δ T/T̅) - ∂Ψ/∂ t + p̂^i/a∂Φ/∂ x^i = 0 ,
where the right-hand side of the previous equation neglects the collision term, since we are interested in the epoch when photons are already decoupled. The mean temperature and its fluctuations are denoted by T̅ and δ T, respectively, whereas p̂^i is the unit 3–momentum. In order to apply the same differential operator to both gravitational potentials, we add new partial derivatives as follows
∂/∂ t(δ T/T̅) + p̂^i/a∂/∂ x^i(δ T/T̅) - ∂Ψ/∂ t + ∂Φ/∂ t + p̂^i/a∂Φ/∂ x^i = ∂Φ/∂ t
( ∂/∂ t + p̂^i/a∂/∂ x^i)( δ T/T̅ + Φ) = ∂Φ/∂ t + ∂Ψ/∂ t .
At this point, notice that once the gravitational potentials are equal the standard result is obtained, and the right-hand side of the previous equation is 2∂Φ/∂ t (see Eq. (9.20) of <cit.>). However, the latter holds in GR, where no anisotropic stress is present and the condition Φ = Ψ is satisfied. In our case, the anisotropic stress cannot be set to zero in the Newtonian gauge, as we have discussed in the previous Sections. Instead, we have to impose the unimodular constraint of Eq. (<ref>) in the Newtonian gauge, which reads Ψ = Φ/3. Thus, the relation between the temperature fluctuations δ T/T̅ and the gravitational potential Φ in UG is
( ∂/∂ t + p̂^i/a∂/∂ x^i)( δ T/T̅ + Φ) = 4/3∂Φ/∂ t ,
and the difference with respect to the GR result is only a factor of 2/3 in the source term. After recombination the universe is matter–dominated, and we can then approximate ∂Φ/∂ t ≃ 0. This leads to the standard expression of the Sachs–Wolfe effect:
( δ T/T̅ + Φ) = const .
Therefore, whereas the previous literature found modifications due to the presence of a new gravitational potential in Eq. (<ref>) (see Eq. (<ref>)), we have shown that such a modification propagates spurious degrees of freedom originating from the gauge choice. Our result shows that there is effectively no distinction between GR and UG when looking at the Sachs–Wolfe effect: UG does not induce new terms in the relation between the temperature fluctuations δ T/T̅ and the gravitational potential Φ.
§ FINAL REMARKS
The theory of Unimodular Gravity in its original formulation brings interesting new features due to the constraint on the spacetime four–volume, which reduces general coordinate transformations to volume–preserving diffeomorphisms. The natural appearance of the non–conservation of the energy–momentum tensor allows one to generate new non–gravitational interactions, which can be used to elucidate the behavior of the dark sector in cosmological models.
We have analyzed whether the most common gauges used in cosmology are properly fixed, since previous works on linear perturbations within the framework of UG have not discussed this crucial aspect of the study of cosmological perturbations. We have demonstrated that it is possible to fix both the Newtonian and the synchronous gauge in UG, although the consequences for the matter fields are different from those in GR: in particular, CDM must have a non–null anisotropic stress when working in the Newtonian gauge, whereas it is not possible to choose an observer comoving with the CDM fluid in the synchronous one.
Even though the dynamics of the perturbations changes with respect to GR due to the unimodular constraint (we are left with only one gravitational potential instead of two), we have shown that it is possible to obtain the fluctuations of the CDM energy density as a function of the only gravitational degree of freedom in both the Newtonian and the synchronous gauge. In fact, along the same line of ideas developed by Ma & Bertschinger <cit.>, we can obtain the equations in terms of the fluid variables, the density contrast δ and the velocity divergence θ, given by
δ≡δρ/ρ̅ , θ≡∂_i v_i/a = 1/a∂_i (∂_iv + v_i^V) = ∇^2v/a ,
where in the last expression, we only consider the scalar mode of peculiar velocity. From Eqs. (<ref>) and (<ref>), where the unimodular constraint has not been imposed yet, δ and θ are given in both gauges respectively by,
Newtonian gauge:
δ^' = -(1+ω)( θ -3Ψ^') - 3a^'/a( δ p/δρ - ω)δ + aJ̅_0 δ - ρ̅δ J_0/κ^2 ρ̅ ,
θ^' = -a^'/a(1-3ω)θ - ω^'/1+ωθ + δ p/δρ/1+ωk^2δ -k^2σ + k^2Φ +aJ̅_0/κ^2ρ̅θ - k^2δ J^S/κ^2ρ̅(1+ω) .
Synchronous gauge:
δ^' = -(1+ω)( θ + h^'/2) - 3a^'/a( δ p/δρ - ω)δ + aJ̅_0 δ - δ J_0/κ^2 ρ̅ ,
θ^' = -a^'/a(1-3ω)θ - ω^'/1+ωθ + δ p/δρ/1+ωk^2δ -k^2σ +aJ̅_0/κ^2ρ̅θ - k^2δ J^S/κ^2ρ̅(1+ω) ,
where, for the sake of comparison with <cit.>, this time the prime indicates a derivative with respect to conformal time τ, related to cosmic time t through dτ = dt/a. We also identify our anisotropic term π^S with σ through the relation[In order to match the notation of <cit.>, we have redefined the traceless anisotropic stress by adding the term -δ_ij∇^2π^S/3 in Eq. (<ref>).] σ≡ -2∇^2 π^S/3ρ̅(1+ω), and the trace part of the spatial metric perturbation in conformal time is related to our gravitational potentials in the synchronous gauge by h = h_ii≡ 3A + ∇^2B. The previous equations are the UG version of Eqs. (29) and (30) of <cit.>, where new terms due to the energy–momentum current violation J_μ are present. From what we have learned in the previous Sections, we have to impose the unimodular constraint while taking into account the new features arising in UG from the analysis of gauge fixing: for instance, in the Newtonian gauge we have to keep the anisotropic term in order to have gravitational perturbations (see the discussion in Section <ref>). On the other hand, once the synchronous gauge is fixed, it is not possible to have an observer comoving with the CDM fluid, and thus the velocity divergence cannot be set equal to zero (see the discussion in Section <ref>). Thus, considering the corresponding unimodular constraint in each gauge (3Ψ = Φ and 3A = -∇^2B for the Newtonian and synchronous gauge, respectively) for a CDM–dominated universe, the previous equations for the evolution of the density contrast δ and the velocity divergence θ become:
Newtonian gauge:
δ^' = -θ + Φ^' + aJ̅_0 δ - ρ̅δ J_0/κ^2 ρ̅ ,
θ^' = -a^'/aθ - k^2σ + k^2Φ +aJ̅_0/κ^2ρ̅θ - k^2δ J^S/κ^2ρ̅(1+ω) .
Synchronous gauge:
δ^' = -θ + aJ̅_0 δ - δ J_0/κ^2 ρ̅ ,
θ^' = -a^'/aθ +aJ̅_0θ - k^2δ J^S/κ^2ρ̅ .
In the case of Eqs. (<ref>), the differences with respect to GR and the standard ΛCDM model are the presence of the energy–momentum current violation J_μ, the anisotropic term σ, and the fact that we have only one gravitational potential Φ. Notice that even when assuming ∇_μ T^μν = 0, and then J̅_0 = δ J_0 = δ J^S = 0, the perturbation equations do not recover the GR case. This is precisely due to both the unimodular constraint and the anisotropic term. Similarly, the dynamics of the perturbed equations (<ref>) for the synchronous gauge in UG is very different from that of GR. Even when the energy–momentum current violation is neglected, we can observe that the source of the density contrast evolution is not the trace h (as is the case in ΛCDM, see Eq. (42) in <cit.>), but the velocity divergence θ. This is another way to understand why it is not possible to choose an observer comoving with CDM: there would be no growth of structures if θ = 0. Of course, in the general case we are studying, the presence of the energy–momentum current violation and its perturbations will also affect the dynamics of structure formation in both gauges.
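To make the role of θ concrete, the sketch below integrates the synchronous-gauge CDM system in the J_μ = 0 limit on a matter–dominated background in conformal time (a∝τ^2, so a^'/a = 2/τ); the initial conditions are purely illustrative assumptions. With θ(τ_i)=0 the contrast stays exactly constant, while a nonzero initial θ produces only a finite change in δ before the decaying mode dies away, in line with the discussion above.

import numpy as np
from scipy.integrate import solve_ivp

def rhs(tau, y):
    delta, theta = y
    aH_conf = 2.0 / tau                 # a'/a for matter domination in conformal time
    ddelta = -theta                     # delta' = -theta   (J terms switched off)
    dtheta = -aH_conf * theta           # theta' = -(a'/a) theta
    return [ddelta, dtheta]

tau = np.linspace(1.0, 50.0, 500)
for theta_i in (0.0, 1e-3):
    sol = solve_ivp(rhs, (tau[0], tau[-1]), [1e-3, theta_i], t_eval=tau, rtol=1e-8)
    print(f"theta_i = {theta_i:7.1e}  ->  delta(tau_f) = {sol.y[0, -1]:.6e}")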
Therefore, UG leaves imprints on the properties of the dark sector, and the implications for the linear perturbations are different from those in GR. Specifically, the physical consequences of linear perturbations in UG, due to the geometric restriction imposed by the unimodular condition, translate into non–standard features of the cold dark matter component: when working in the Newtonian gauge, CDM must be anisotropic, as this term is directly proportional to the only gravitational degree of freedom (see Eqs. (<ref>) and (<ref>)); when working in the synchronous gauge, it is not possible to choose an observer comoving with the CDM fluid, since its velocity divergence drives structure formation (see Eqs. (<ref>)). Besides, we have new contributions from the energy–momentum current violation, in the form of the background term J̅_0 and the scalar modes δ J_0 and δ J^S. The background dynamics has been solved and studied through numerical and statistical analyses, by considering phenomenological diffusion models to describe the new non–gravitational interactions in the dark sector <cit.>. However, the linear perturbations in UG introduce a new level of complexity: we have focused on a matter–dominated universe in order to extract some information about the process of structure formation through the CDM density contrast, but this is not enough if one wants to reproduce observables such as the CMB or the MPS. One of the new issues to handle is the information about the scalar perturbations of the energy–momentum current violation, δ J_0 and δ J^S. Perhaps a naive way to proceed is to directly consider fluctuations of the diffusion models or, as was done for the background in <cit.>, to propose a phenomenological model for such perturbations.
Thus, more work has to be done in order to properly implement a cosmological model based on UG in a Boltzmann solver such as CAMB or CLASS. Even when the conservative approach of energy–momentum conservation for the ordinary matter content (photons, neutrinos, baryons) is assumed, the unimodular constraint changes the dynamics of the linear perturbations for all species. In other words, even though J_μ = 0 for ordinary matter, the curvature produced by the only gravitational potential in UG will change the dynamics of the matter fields. Moreover, Boltzmann solvers are written for GR in the synchronous gauge[See <https://cosmologist.info/notes/CAMB.pdf> and <http://www.class-code.net>. In particular, the latter allows one to work in both the synchronous and the Newtonian gauge. In any case, the numerical implementation must be applied consistently, by considering the gravitational effects of the unimodular constraint at the level of linear perturbations for all matter components.], and strictly speaking such a gauge does not exist in UG, since it is not possible to consistently choose a CDM comoving observer by setting its velocity divergence θ_CDM = 0. With this in mind, we consider that any attempt to reproduce CMB and MPS observations with cosmological models within the framework of UG must be accompanied by an analysis of cosmological perturbations such as the one presented in <cit.>, in order to consistently solve the dynamics of the linear perturbations for all the matter and energy content gravitating as UG dictates. This work constitutes a first step in this direction, by considering only the evolution of the dark matter component at linear order in perturbations.
F.X.L.C. acknowledges Beca CONACYT. U.N. and F.X.L.C. acknowledge PROYECTO CIENCIA DE FRONTERA CF 2019/2558591 for financial support.
Deep learning for dynamic graphs: models and benchmarks
Alessio Gravina^* and Davide Bacciu, Senior Member, IEEE
^* Corresponding Author
D. Bacciu and A. Gravina are with the Department
of Computer Science, University of Pisa, Italy. (e-mail: [email protected] and [email protected])
Preprint. Under review
August 12, 2023
Recent progress in research on Deep Graph Networks (DGNs) has led to a maturation of the domain of learning on graphs. Despite the growth of this research field, there are still important challenges that are yet unsolved. Specifically, there is an urgent need to make DGNs suitable for predictive tasks on real-world systems of interconnected entities, which evolve over time.
With the aim of fostering research in the domain of dynamic graphs, we first survey recent advances in learning both temporal and spatial information, providing a comprehensive overview of the current state-of-the-art in the domain of representation learning for dynamic graphs. Secondly, we conduct a fair performance comparison among the most popular proposed approaches, leveraging rigorous model selection and assessment for all the methods, thus establishing a sound baseline for evaluating new architectures and approaches.
deep graph networks, graph neural networks, temporal graphs, dynamic graphs, survey, benchmark
§ INTRODUCTION
Graph representation learning has been gaining increasing attention over the recent years promoted by the ubiquitousness and expressiveness of structured relational information. Graphs are powerful tools to represent systems of relations and interactions, across several application fields where deep learning for graphs has found successful application, such as biology, social science and human mobility <cit.>.
The key challenge when learning from graph data is how to numerically represent the combinatorial structures for effective processing and prediction by the model. A classical predictive task of molecule solubility prediction, for instance, requires the model to encode both topological information and chemical properties of atoms and bonds. Graph representation learning
solves the problem in a data-driven fashion, by learning a mapping function that compresses the complex relational information of a graph into an information-rich feature vector that reflects both structural and label information in the original graph.
Despite the progress made in recent years in the field, which is mainly related to the family of Deep Graph Networks (DGNs) <cit.>, the majority of the literature focuses on networks that are static snapshots of a phenomenon. This is often a limitation when considering real-world processes, both natural and synthetic, where interactions evolve over time and are thus dynamic in nature. Examples of such time-evolving systems can be found in social networks, where users can develop new friendships, in citation networks, which are constantly updated with new publications, and in e-commerce, where user behaviors, as well as their interactions with items, change over time.
Representing a time-varying process through a static graph can be a reasonable choice in those scenarios in which the temporal dynamics are extremely slow, such as in a protein-protein interaction network. In general, however, ignoring the temporal information can harm the final performance of the predictor <cit.>. Therefore, the community has begun to look into dynamic graphs and into models that can process the temporal dimension of a graph, as well as its spatial aspects. As a result, the last few years have witnessed a surge of works on dynamic graphs, leading to a fragmented and scattered literature with respect to model formalization, empirical setups and performance benchmarks. This aspect very much motivated us to work towards a systematization of the literature which not only surveys the existing works, but also actively promotes the identification of shared benchmarks and empirical protocols for the fair evaluation of dynamic graph models.
The present survey has a three-fold contribution. First, we propose a coherent formalization of the domain of representation learning for dynamic graphs, unifying different definitions and formalisms gathered from the literature. Secondly, we provide a survey on representation learning for dynamic graphs under our unified formalism. Finally, we provide the graph learning community with a fair performance comparison among the most popular DGNs for dynamic graphs, using a standardized and reproducible experimental environment[We release openly the code at <https://github.com/gravins/dynamic_graph_benchmark>.]. Specifically, we performed experiments with a rigorous model selection and assessment framework, in which all models were compared using the same features, the same datasets, and the same data splits. As a by-product of our work, we also provide the community with a selection of datasets which we put forward as good candidates for benchmarking future works produced by the community.
Existing surveys on the topic of deep graph learning are <cit.>. Our novel contributions with respect to such works are: (i) a broader and more up-to-date coverage of literature; (ii) a benchmark and an empirical comparison between a broad range of methods; and (iii) a statistical overview of a selected list of datasets in both discrete and continuous time setting.
The remainder of the paper is organized as follows: Section <ref> briefly surveys representation learning for static graphs, providing general definitions and methods useful to define dynamic graph problems[The interested reader is referred to <cit.> for an in-depth analysis of the methods developed within the scope of representation learning for static graphs.]. Section <ref> formalizes representation learning for dynamic graphs
, while Sections <ref> and <ref> survey the related literature. Section <ref> describes our empirical evaluation setting and provides an experimental comparison of the most popular DGNs for dynamic graphs. Section <ref> concludes the paper.
§ REPRESENTATION LEARNING FOR STATIC GRAPHS
§.§ Definitions and notations
A (static) graph is a tuple 𝒢=(𝒱, ℰ, 𝐗, 𝐄) defined by the nonempty set 𝒱 of nodes (also referred to as vertices), and by the set ℰ of edges (also called links or arcs) <cit.>. Nodes represent interacting entities, whereas edges denote connections between pairs of nodes. Depending on edge type, a graph is undirected, when node pairs are unordered, ℰ⊆{{u,v} | u,v ∈𝒱}, or directed, when the pairs are ordered, ℰ⊆{(u,v) | u,v ∈𝒱}. The structural information expressed by ℰ can also be encoded into an adjacency matrix 𝐀, which is a square |𝒱| × |𝒱| matrix where each element 𝐀_uv∈{0,1} is 1 if an edge connects the nodes u and v, and it is 0 otherwise.
In many practical scenarios, nodes and edges are often enriched with additional attributes. We represent node features (also known as node representations or node embeddings) as a matrix 𝐗∈ℝ^|𝒱|× d_n, where d_n is the number of available features. The u-th row of 𝐗 is denoted as 𝐱_u and represents a single node's features. Similarly, we represent edge features as a matrix 𝐄∈ℝ^|ℰ| × d_e, where d_e is the number of edge features, and we indicate edge feature vectors as 𝐞_uv. Finally, we denote the neighborhood (or adjacency set) of a node u ∈𝒱 as the set 𝒩_u = {v∈𝒱|{u,v}∈ℰ}. A visual representation of a directed and undirected graph and the neighborhood of a node is shown in Figure <ref>.
§.§ Overview of representation learning for static graphs
Representation learning for graphs has been pioneered by the Graph Neural Network (GNN) <cit.> and the Neural Network for Graphs (NN4G) <cit.>, which were the first to provide learning models amenable also to cyclic and undirected graphs. The GNN leverages a recursive approach, in which the state transition function updates the node representation through a diffusion mechanism that takes into consideration the current node and its neighborhood defined by the input graph. This procedure continues until it reaches a stable equilibrium. On the other hand, the NN4G leverages a feed-forward approach where node representations are updated by composing representations from previous layers in the architecture.
The original approaches by NN4G and GNN have been later extended and improved throughout a variety of approaches which can be cast under the umbrella term of (static) Deep Graph Networks (DGNs), for which there exist dedicated surveys <cit.>. Briefly, DGNs denote a family of approaches capable of learning the functional dependencies in a graph through a layered approach, where the single layers are often referred to as Graph Convolutional Layers (GCLs). Each of these computes a transformation of node representations by combining the previous node representations and their neighborhoods. We visually represent this procedure in Figure <ref>. The transformations are often referred to as graph convolutions, and they are realized either in the spectral or spatial domain.
§.§.§ Spectral convolution
In this setting, graphs are processed and learned through a parameterization in the spectral domain of their Laplacian matrices. Specifically, given a filter 𝐠_θ = diag(θ) parametrized by θ∈ℝ^|𝒱| and the graph signal 𝐱∈ℝ^|𝒱| for a graph 𝒢, we can define the spectral graph convolution as a multiplication in the Fourier domain:
𝐠_θ * 𝐱 = 𝐔𝐠_θ𝐔^T 𝐱
where 𝐔^T 𝐱 is the graph Fourier transform, and 𝐔 is the matrix of eigenvectors of the normalized graph Laplacian 𝐋 = 𝐈 - 𝐃^-1/2𝐀𝐃^-1/2 = 𝐔Λ𝐔^T, with Λ the diagonal matrix of the eigenvalues of 𝐋. In the graph Laplacian, 𝐈 indicates the identity matrix, 𝐃 is the diagonal node degree matrix, and 𝐀 is the adjacency matrix of 𝒢. The approach in Equation <ref> is severely limited by the computational requirements associated with the Laplacian decomposition and by the spectral parameterization costs, which have motivated a whole body of follow-up works <cit.>. Among these, the Graph Convolutional Network (GCN) <cit.> is certainly the most successful one.
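As a minimal numerical illustration of Equation <ref>, the following numpy sketch filters a graph signal through the eigenbasis of the normalized Laplacian of a toy 4-node path graph; the graph, the signal, and the exponential filter g_θ(λ) are assumptions of the example, not prescriptions from the literature.

import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(deg ** -0.5)
L = np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt      # normalized Laplacian

lam, U = np.linalg.eigh(L)                        # L = U diag(lam) U^T
x = np.array([1.0, 0.0, 0.0, 0.0])                # toy graph signal
g_theta = np.exp(-2.0 * lam)                      # an assumed spectral filter g(lambda)

x_filtered = U @ (g_theta * (U.T @ x))            # spectral convolution g_theta * x
print(x_filtered)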
GCN leverages the degree-normalized Laplacian introduced in <cit.>, hence,
the output of the GCN's (ℓ+1)-th layer for a node u is computed as
𝐱^ℓ+1_u = σ(Θ_0 𝐱_u^ℓ + Θ_1 ∑_v ∈𝒩_u1/√(𝐝𝐞𝐠(v) 𝐝𝐞𝐠(u))𝐱^ℓ_v),
where σ is the activation function, while 𝐝𝐞𝐠(v) and 𝐝𝐞𝐠(u) are, respectively, the degrees of nodes v and u. With such a formulation, GCN requires 𝒪(|ℰ|) time.
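A possible dense-adjacency sketch of this GCN layer in plain PyTorch is reported below; small graphs are assumed here, and a practical implementation would rely on sparse operations to retain the 𝒪(|ℰ|) cost mentioned above. The hidden sizes and the toy path graph are illustrative choices.

import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """Dense sketch: x_u' = relu(Theta_0 x_u + Theta_1 sum_v x_v / sqrt(deg(u) deg(v)))."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.theta0 = nn.Linear(d_in, d_out, bias=False)
        self.theta1 = nn.Linear(d_in, d_out, bias=False)

    def forward(self, X, A):
        deg = A.sum(dim=1).clamp(min=1)            # node degrees (guard against isolated nodes)
        norm = deg.rsqrt().unsqueeze(1)            # 1/sqrt(deg)
        agg = norm * (A @ (norm * X))              # sum_v x_v / sqrt(deg(u) deg(v))
        return torch.relu(self.theta0(X) + self.theta1(agg))

# toy usage: 4-node path graph, 3 input features, 8 output features
A = torch.tensor([[0., 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
X = torch.randn(4, 3)
print(GCNLayer(3, 8)(X, A).shape)    # torch.Size([4, 8])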
§.§.§ Spatial convolution
Spatial convolutions are typically framed in the Message Passing Neural Network (MPNN) <cit.> framework, where the representation for a node u at a layer ℓ+1 is computed as
𝐱_u^ℓ+1 = ϕ_U(𝐱_u^ℓ, ⊕_v∈𝒩_uϕ_M(𝐱_u^ℓ, 𝐱_v^ℓ, 𝐞_uv))
where ⊕ is a permutation-invariant aggregation function, and ϕ_U and ϕ_M are the update and message functions, respectively. The message function computes the message for each node and then dispatches it among the neighbors. The update function collects incoming messages and updates the node state. A typical implementation of the MPNN uses sum as the ⊕ and ϕ_U functions, and ϕ_M(𝐱_u^ℓ, 𝐱_v^ℓ, 𝐞_uv) = MLP(𝐞_uv)𝐱_v^ℓ.
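The snippet below is one possible plain-PyTorch instantiation of this message-passing scheme, with sum aggregation over an edge list and small MLPs standing in for ϕ_M and ϕ_U; the edge-list format, the hidden sizes, and the toy data are assumptions of the example.

import torch
import torch.nn as nn

class MPNNLayer(nn.Module):
    """Sketch of a message-passing layer with sum aggregation over an edge list (src, dst)."""
    def __init__(self, d_node, d_edge, d_hidden):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * d_node + d_edge, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_hidden))
        self.upd = nn.Sequential(nn.Linear(d_node + d_hidden, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_node))

    def forward(self, X, edge_index, E):
        src, dst = edge_index                                   # messages flow src -> dst
        m = self.msg(torch.cat([X[dst], X[src], E], dim=1))     # phi_M(x_u, x_v, e_uv)
        agg = torch.zeros(X.size(0), m.size(1), device=X.device)
        agg.index_add_(0, dst, m)                               # sum over the neighborhood
        return self.upd(torch.cat([X, agg], dim=1))             # phi_U(x_u, aggregated messages)

# toy usage: 4 nodes, 3 node features, 2 edge features, directed edges in both directions
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3], [1, 0, 2, 1, 3, 2]])
X, E = torch.randn(4, 3), torch.randn(6, 2)
print(MPNNLayer(3, 2, 16)(X, edge_index, E).shape)   # torch.Size([4, 3])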
Depending on the definition of the update and message functions, it is possible to derive a variety of DGNs. The Graph Attention Network (GAT) <cit.> introduces an attention mechanism to learn the influence of each neighbor, computing node representations as
𝐱_u^ℓ+1 = σ( ∑_v ∈𝒩_uα_uvΘ𝐱_v^ℓ)
where α_uv is the classical softmax attention score between node u and its neighbor v.
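A compact single-head sketch of this attention layer is given below; it uses a dense adjacency mask with self-loops (an assumption made to keep the per-node softmax readable), whereas practical implementations operate on sparse edge lists and multiple attention heads.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseGATLayer(nn.Module):
    """Single-head attention sketch on a dense adjacency with self-loops."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out, bias=False)
        self.att_dst = nn.Parameter(torch.randn(d_out))
        self.att_src = nn.Parameter(torch.randn(d_out))

    def forward(self, X, A):
        H = self.lin(X)                                     # Theta x_v
        scores = F.leaky_relu((H @ self.att_dst).unsqueeze(1)
                              + (H @ self.att_src).unsqueeze(0), 0.2)   # e_uv
        scores = scores.masked_fill(A == 0, float('-inf'))  # attend only to neighbors
        alpha = torch.softmax(scores, dim=1)                # alpha_uv over v in N(u)
        return torch.relu(alpha @ H)

# toy usage: path graph with self-loops so every row of A has at least one neighbor
A = torch.tensor([[1., 1, 0, 0], [1, 1, 1, 0], [0, 1, 1, 1], [0, 0, 1, 1]])
print(DenseGATLayer(3, 8)(torch.randn(4, 3), A).shape)      # torch.Size([4, 8])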
When graphs are large and dense, i.e., |ℰ| = O(|𝒱|^2), it can be impractical to perform the convolution over the full neighborhood of a node. Neighborhood sampling has been proposed as a possible strategy to overcome this limitation, i.e., using only a random subset of neighbors to update the node representation. GraphSAGE <cit.> exploits this strategy to improve efficiency and scale to large graphs.
GraphSAGE updates the representation of a node u by fixing the subset of nodes treated as neighbors, and by leveraging aggregation and concatenation operations:
𝐱_u^ℓ+1 = σ (Θ· [𝐱_u^ℓ || □({𝐱_v^ℓ}_v ∈𝒩_S(u))])
where 𝒩_S: 𝒱→ 2^𝒱 is the function that computes the fixed subset of neighbors of a node u, and □ is an aggregation function. Differently, ClusterGCN <cit.> samples a block of nodes identified by a graph clustering algorithm to restrict the neighborhood dimension.
The way models aggregate neighbors' representations to compute node embeddings affects the discriminative power of DGNs. <cit.> showed that most DGNs are at most as powerful as the 1-Weisfeiler-Lehman test <cit.>. In particular, the Graph Isomorphism Network (GIN) <cit.> has been proven to be as powerful as the 1-Weisfeiler-Lehman test by computing node representations as
𝐱_u^ℓ+1 = MLP((1+γ)𝐱_u^ℓ + ∑_v ∈𝒩_u𝐱_v^ℓ)
with γ as a learnable parameter or a fixed scalar.
More recently, advancements in the field of representation learning for graphs have introduced new architectures that establish a connection between the domains of DGNs and Ordinary Differential Equations (ODEs), with the primary objective of optimizing various aspects of message passing. These new methods exploit the intrinsic properties of ODEs to enhance the efficiency and effectiveness of message passing within DGNs. By formulating the propagation of information in graphs as an ODE system, these architectures effectively tackle multiple challenges, such as preserving long-range dependencies <cit.>, reducing the computational complexity of message passing <cit.>, and mitigating the over-smoothing phenomena <cit.>.
§.§.§ Random walks
A different strategy to learn node embeddings that include local and global properties of the graph relies on random walks. A random walk is a random sequence of edges joining a sequence of nodes. <cit.> proposed DeepWalk, a method that learns continuous node embeddings by modeling random walks as the equivalent of sentences.
Specifically, the approach samples multiple walks of a specified length for each node in the graph, and then it leverages the SkipGram model <cit.> to update node representations based on the walks, treating the walks as sentences and the node representations as words within them.
Node2Vec <cit.> improves DeepWalk by exploiting biased random walks: we can control the likelihood of revisiting a node in the walk (allowing the walk to be more or less explorative) and bias the exploration of new nodes towards a breadth-first or a depth-first strategy.
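A small sketch of the walk-sampling step is reported below, with Node2Vec-style return and in–out parameters p and q; the toy adjacency list and the parameter values are assumptions of the example, and the sampled walks would then be fed to a SkipGram-style model as described above.

import random

def biased_random_walk(adj, start, length, p=1.0, q=1.0):
    """Node2Vec-style walk: 1/p weights a return step, 1/q an outward (DFS-like) step."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        neighbors = adj[cur]
        if not neighbors:
            break
        if len(walk) == 1:
            walk.append(random.choice(neighbors))
            continue
        prev = walk[-2]
        weights = []
        for nxt in neighbors:
            if nxt == prev:                 # going back to the previous node
                weights.append(1.0 / p)
            elif nxt in adj[prev]:          # staying close (BFS-like)
                weights.append(1.0)
            else:                           # moving away (DFS-like)
                weights.append(1.0 / q)
        walk.append(random.choices(neighbors, weights=weights, k=1)[0])
    return walk

# toy usage: undirected 5-node graph given as an adjacency list (assumed example)
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1], 4: [2]}
walks = [biased_random_walk(adj, v, length=6, p=0.5, q=2.0) for v in adj for _ in range(2)]
print(walks[:3])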
§ REPRESENTATION LEARNING FOR DYNAMIC GRAPHS: NOTATION AND TAXONOMY
A dynamic graph is a tuple 𝒢(t)=(𝒱(t), ℰ(t), 𝐗(t), 𝐄(t)), defined for t≥0. Differently from static graphs, all elements in the tuple are functions of time t. Thus, 𝒱(t) provides the set of nodes which are present in the graph at time t, and ℰ(t) ⊆{{u,v} | u,v ∈𝒱(t)} defines the links between them. Analogously, 𝐗(t) and 𝐄(t) define node states and edge attributes at time t. Although 𝒱(t) can theoretically change over time, in practice it is often considered fixed for ease of computation, which means that all the nodes that will appear in the dynamic graph are known in advance. Hence, 𝒱(t) = 𝒱 for t≥ 0.
The way we observe a system of interacting entities plays a crucial role in the definition of the corresponding dynamic graph. We can distinguish between two distinct types: discrete-time dynamic graphs and continuous-time dynamic graphs. Each of these representations gives rise to diverse architectures and learning approaches.
A discrete-time dynamic graph (D-TDG) models an evolving system that is fully observed at different timestamps. For this reason, a D-TDG, 𝒢 = {𝒢_t | t∈[t_0, t_n]}, consists of a sequence of static graphs (known as snapshots). Each snapshot, 𝒢_t = (𝒱_t, ℰ_t, 𝐗_t, 𝐄_t), provides a picture of the graph's state at a particular time t. Each snapshot maintains the notations and definitions outlined for static graphs (see Section <ref>). We note that if the sets of nodes and edges are fixed over time (𝒢_t = (𝒱, ℰ, 𝐗_t, 𝐄_t)), then the dynamic graph is often referred to as a spatio-temporal graph.
Commonly, D-TDGs are captured at periodic intervals (hours, days, etc.). Hence, denoting by Δ t > 0 the interval between observations and by t_i the current timestamp, the next observation is captured at t_i+1 = t_i + Δ t. We present in Figure <ref> a visual exemplification of a D-TDG.
A continuous-time dynamic graph (C-TDG) is a more general formulation of a dynamic graph. Indeed, it models systems that are not fully observed over time: only new events in the system are observed. Therefore, a C-TDG is a stream of events (also known as observations) 𝒢 = {o_t | t∈[t_0, t_n]}. An event, o_t = (t, EventType, {u}_u ∈𝒱(t)), is a tuple containing the timestamp, the event type, and the nodes involved. Since C-TDGs are captured with fine-grained observations, they typically have irregular timestamps.
Without loss of generality, we can identify three types of event:
* node-wise, a node is created or its features are updated;
* interaction, a temporal edge is created;
* deletion, a node or an edge is deleted.
Generally, the temporal neighborhood of a node u at time t consists of all the historical neighbors of u prior to the current time t, 𝒩^t_u = {v ∈𝒱(t) | (t_i, interaction, u, v) ∈𝒢∧ t_i < t }.
We observe that at any time point t, we can obtain a snapshot of the C-TDG by sequentially aggregating the events up to time t.
Figure <ref> shows visually the temporal evolution of a C-TDG.
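As a small illustration of these definitions, the sketch below stores a C-TDG as a stream of event tuples and extracts both the temporal neighborhood of a node at a query time and a snapshot aggregated up to that time; the event stream itself is an assumed toy example.

from collections import namedtuple

Event = namedtuple("Event", ["t", "type", "nodes"])   # (timestamp, event type, nodes involved)

# assumed toy stream of interaction events (u, v) with irregular timestamps
stream = [Event(0.5, "interaction", (0, 1)),
          Event(1.2, "interaction", (1, 2)),
          Event(1.9, "interaction", (0, 2)),
          Event(3.4, "interaction", (2, 3))]

def temporal_neighborhood(stream, u, t):
    """Historical neighbors of u strictly before time t."""
    return {w for e in stream if e.type == "interaction" and e.t < t and u in e.nodes
              for w in e.nodes if w != u}

def snapshot(stream, t):
    """Aggregate all events up to time t into a static edge set."""
    return {tuple(sorted(e.nodes)) for e in stream if e.type == "interaction" and e.t <= t}

print(temporal_neighborhood(stream, 2, t=3.0))   # {0, 1}
print(snapshot(stream, t=2.0))                   # {(0, 1), (0, 2), (1, 2)}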
We now proceed by providing a survey of state-of-the-art approaches in the domain of representation learning for dynamic graphs by means of the taxonomy and definition introduced in this section.
§ LEARNING WITH DISCRETE-TIME DYNAMIC GRAPHS
Given the sequential structure of D-TDGs, a natural choice for many methods has been to extend Recurrent Neural Networks <cit.> to graph data. Indeed, most of the models presented in the literature can be summarized as a combination of static DGNs and RNNs. In particular, some approaches adopt a stacked architecture, where DGNs and RNNs are used sequentially, enabling to separately model spatial and temporal dynamics. Other approaches integrate the DGN inside the RNN, allowing to jointly capture the temporal evolution and the spatial dependencies in the graph. In the following, we review state-of-the-art approaches for both spatio-temporal graphs and more general D-TDGs.
§.§ Spatio-temporal graphs
When dealing with spatio-temporal graphs, new methods are designed to solve the problem of predicting the node states at the next step, 𝐗_t+1, given the history of states, 𝐗_t.
To do so, different types of architectures have been proposed to effectively solve this task.
§.§.§ Stacked architectures
<cit.> proposed Graph Convolutional Recurrent Network (GCRN), one of the earliest deep learning models able to learn sequences of spatio-temporal graphs.
The authors proposed
to stack a Chebyshev spectral convolution <cit.> (Equation <ref> shows the first-order approximation of this convolution) for graph embedding computation and a Peephole-LSTM <cit.> for sequence learning:
𝐗'_t = Cheb(𝐗_t, ℰ, k, Θ)
𝐇_t = peephole-lstm(𝐗'_t)
where Cheb(𝐗_t, ℰ, k, Θ) represents Chebyshev spectral convolution (leveraging a polynomial of order k) computed on the snapshot 𝒢_t parametrized by Θ∈ℝ^k× d_h× d_n. Here, d_h is the new latent dimension of node states, and 𝐇_t is the hidden state vector, which is equivalent to the node states at time t+1 (𝐗_t+1). To ease readability, in the following, we
drop from the equation the edge set, ℰ, and the polynomial degree, k, since they are fixed for the whole snapshot sequence.
Equation <ref> can be reformulated to define a more abstract definition of a stacked architecture between a DGN and an RNN,
𝐗'_t = dgn(𝐗_t, Θ)
𝐇_t = rnn(𝐗'_t)
<cit.> implement Equation <ref> by leveraging the same spectral convolution as GCRN and a Gated Recurrent Unit (GRU) <cit.> as RNN. Differently, <cit.> employed the first-order approximation of the Chebyshev polynomials, which leads to the usage of a GCN to learn spatial features, and a GRU <cit.> to extract temporal features. This results in:
𝐗'_t = gcn(𝐗_t)
𝐇_t = gru(𝐗'_t).
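A minimal PyTorch sketch of this stacked design is reported below; it assumes PyTorch Geometric is available and is not the authors' implementation: a GCN layer encodes each snapshot, and a GRU cell shared across nodes carries the hidden state over time.

import torch
from torch_geometric.nn import GCNConv

class StackedGCNGRU(torch.nn.Module):
    """Stacked architecture: a GCN extracts spatial features from each snapshot,
    a shared GRU cell tracks their temporal evolution node-wise."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.gcn = GCNConv(in_dim, hidden_dim)
        self.gru = torch.nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, snapshots, edge_indices):
        # snapshots: list of node feature matrices X_t; edge_indices: list of edge_index tensors
        h = torch.zeros(snapshots[0].size(0), self.gru.hidden_size)
        for x_t, ei_t in zip(snapshots, edge_indices):
            x_t = torch.relu(self.gcn(x_t, ei_t))   # spatial encoding of the snapshot
            h = self.gru(x_t, h)                    # temporal update of node states
        return h                                    # can feed a readout predicting X_{t+1}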
A3TGCN <cit.> extends the implementation of <cit.> with an attention mechanism to re-weight the influence of historical node states, with the aim of capturing more global information about the system.
§.§.§ Integrated architectures
In contrast to the aforementioned approaches, an alternative design is the integrated architecture, where the DGN is incorporated into the RNN to simultaneously capture and integrate temporal evolution and spatial dependencies within the graph.
<cit.> proposed a second version of GCRN that exploits this type of architecture, by embedding the Chebyshev spectral convolution in the Peephole-LSTM. In this case, input, forget, and output gates can be reformulated as
ĥ = σ(Cheb(𝐗_t, Θ_x) + Cheb(𝐇_t-1, Θ_h) + θ_c⊙ c_t-1),
the rest of the LSTM is defined as usual. We note that ⊙ denotes the Hadamard product, σ is the activation function, and ĥ is the output of a generic gate. The weights Θ_h∈ℝ^k× d_h× d_h, Θ_x∈ℝ^k× d_h× d_n, θ_c∈ℝ^d_h
are the parameters of the model. We observe that (here and in the following) the bias term is omitted for ease of readability.
The Spatio-Temporal Graph Convolutional Network <cit.> composes several spatio-temporal blocks to learn topological and dynamical features. Each block consists of two sequential convolution layers and one graph convolution in between. The temporal convolution layer contains a 1-D causal convolution followed by a Gated Linear Unit <cit.>, while the graph convolution is in the spectral domain. Let Conv^𝒢 denote the spectral graph convolution, and Conv^𝒯_1 and Conv^𝒯_2 the first and second temporal convolutions, respectively. Thus, each spatio-temporal block can be formulated as
𝐇_t = Conv^𝒯_2( ReLU( Conv^𝒢( Conv^𝒯_1(𝐗_t) ))).
§.§ General D-TDGs
Differently from spatio-temporal graphs, the topology of a general D-TDG can evolve over time. In this case, relying solely on node states can lead to poor performance, since the new topology leads to different dynamics in the graph. In fact, the evolving topology is responsible for different information flows in the graph over time. Thus, excluding the evolution of the graph structure becomes a major limitation, leading to inaccurate predictions.
Even in this case, we can categorize approaches for general D-TDGs depending on the architectural design.
§.§.§ Integrated architectures
<cit.> proposed GC-LSTM, an encoder-decoder model for link prediction, assuming a fixed node set 𝒱(t). The encoder consists of a GCN embedded in a standard LSTM. The GCN learns topological features of the cell state c and of the hidden state h, which are used to save long-term relations and extract input information, respectively. The encoder takes as input the sequence of adjacency matrices and returns an embedding that encodes both temporal and spatial information. Thus, a generic gate in the LSTM can be expressed as:
ĥ_t = σ(Θ_h𝐀_t + gcn_h(𝐇_t-1, ℰ_t-1))
where Θ_h ∈ℝ^|𝒱| × d_h is the weight matrix, and 𝐀_t is the adjacency matrix at time t, as usual. The decoder part of the model is an MLP that leverages the embedding generated by the encoder to predict the probability of each edge in the future adjacency matrix 𝐀_t+1.
A similar strategy, which integrates topological changes into the computation, has also been employed by <cit.>.
Indeed, the authors proposed the so-called LRGCN that embeds a Relational-GCN <cit.> into an LSTM model.
Differently from GC-LSTM, LRGCN exploits the directionality of the edges together with node features, rather than only the stream of adjacency matrices, with the aim of effectively modeling the temporal dynamics.
In LRGCN the input, forget, and output gates are computed as the result of the R-GCN model over the input node representations and the node embeddings computed at the previous step. The authors distinguish between
four edge types to produce more informed latent representations: intra-incoming, intra-outgoing, inter-incoming, inter-outgoing. An inter-time relation corresponds to an arc {u,v} present at time t-1, while an intra-time relation is an arc {u,v} present at time t. To employ LRGCN in a path classification task, the authors extend their model with a self-attentive path embedding (SAPE). Given a path P ∈ℝ^m × d_o, where m is the path length and d_o is the output dimension, SAPE first applies an LSTM to capture node dependency along the path sequence, Γ = LSTM(P) ∈ℝ^m × d_new. Then, SAPE uses the self-attentive mechanism to learn node importance and generate a size-invariant representation of the path,
S=softmax(MLP(tanh(MLP(Γ)))) ∈ℝ^r × m
with r a hyper-parameter. Lastly, the final path representation is obtained by multiplying S with Γ, e=SΓ ∈ℝ^r × d_new.
§.§.§ Stacked architectures
Instead of integrating the DGN into the RNN, <cit.> stack an LSTM on top of an MPNN, as previously proposed for spatio-temporal graphs. Differently from those approaches, the authors leverage as input the new node features as well as the new topology. Thus, the MPNN updates node representations by exploiting the temporal neighborhoods in each snapshot.
<cit.> extend the MPNN-LSTM method by proposing the exploitation of hierarchical node states. The authors propose to stack multiple DGN layers and interleave them with the sequence encoder, the RNN, to better exploit the temporal dynamic at each degree of computation. Thus, the node state at each layer depends on both the node state from the previous layer and the historical node state. More formally, the ℓ-th layer of this framework is
𝐇̃^ℓ_t = dgn^ℓ(𝐇̃_t^ℓ-1)
𝐇_t^ℓ = update(𝐇̃^ℓ_t, 𝐇_t-1^ℓ).
where update is the sequence encoder.
Contrarily to previous works, <cit.> propose to first embed the history of the node time series into latent representations that encode the temporal dynamic of the system. Such representations are then processed leveraging multiple powers of a graph shift operator (e.g., the graph Laplacian or the adjacency matrix) to encode the spatial dynamic of the system. Specifically, the authors propose to encode the temporal dynamic by means of Echo State Networks (ESNs) <cit.>, randomized recurrent neural networks, to efficiently compute node embeddings and improve the scalability of DGNs for D-TDGs.
With the same aim of speeding up the dynamic graph processing, <cit.> propose DynGESN, an extension of the Graph Echo State Network <cit.> to the temporal domain. Specifically, DynGESN updates the embedding for a node u at time t as
𝐡_t^u = (1-γ)𝐡_t-1^u + γtanh( Θ_i𝐱^u_t + ∑_v ∈𝒩^t_uΘ_r𝐡^v_t-1),
with 0<γ≤ 1 being a leakage constant, Θ_i the input weights, and Θ_r the recurrent weights. Both input and recurrent weights are randomly initialized.
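A numpy sketch of this randomized update is given below; dictionaries index node states and inputs, the uniform weight initialization is a generic choice, and the spectral rescaling typically applied to reservoir weights is omitted, so this is only an illustration of the leaky update above.

import numpy as np

def dyngesn_step(h_prev, x_t, neighbors, W_in, W_rec, gamma=0.5):
    """One leaky reservoir update for every node; weights are random and never trained."""
    h_new = {}
    for u, h_u in h_prev.items():
        agg = sum((W_rec @ h_prev[v] for v in neighbors.get(u, [])), np.zeros_like(h_u))
        h_new[u] = (1 - gamma) * h_u + gamma * np.tanh(W_in @ x_t[u] + agg)
    return h_new

# fixed random input/recurrent weights (spectral rescaling omitted for brevity)
d_in, d_h = 4, 16
rng = np.random.default_rng(0)
W_in = rng.uniform(-0.1, 0.1, size=(d_h, d_in))
W_rec = rng.uniform(-0.1, 0.1, size=(d_h, d_h))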
§.§.§ Meta architectures
We refer to meta architectures as those methods that learn a function that maps the evolution of the graph into the evolution of the parameters of the employed DGN. This kind of architecture has been proposed by <cit.> to deal with those scenarios where nodes may frequently appear and disappear. As observed by the authors,
such dynamics can be challenging to model with RNN-based models, since they have difficulties in learning these irregular behaviors.
In such a situation, the authors proposed Evolving GCN (E-GCN) to capture
the dynamism of such graphs by using an RNN to evolve the parameters of a GCN. Thus, only the RNN parameters are trained. The authors considered two versions of their model, depending on whether graph structure or node features play the more important role. The first treats the GCN weights as the hidden state of a GRU to assign more significance to node representations.
The second computes the weights as the output of the LSTM model, and it is more effective when the graph structure is important for the task. Let us consider GRU(𝐗_t, Θ_t-1) as an extended version of a standard GRU model that exploits both the weight matrix at time t-1, Θ_t-1, and the current node representations, 𝐗_t. The first E-GCN architecture can be formulated as
Θ_t = GRU(𝐗_t, Θ_t-1)
𝐇_t = GCN(𝐗_t, ℰ_t, Θ_t)
while the second substitutes the GRU with an LSTM that takes as input only the weight matrix at time t-1.
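The sketch below illustrates the second ("-O") variant in PyTorch: the GCN weight matrix is flattened so that a standard LSTM cell can evolve it over time. The flattening and the dense graph convolution are simplifications of ours, not the original implementation.

import torch

class EvolveGCNO(torch.nn.Module):
    """Meta architecture: an LSTM evolves a (flattened) GCN weight matrix over time."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        flat = in_dim * out_dim
        self.lstm = torch.nn.LSTMCell(flat, flat)
        self.theta0 = torch.nn.Parameter(0.01 * torch.randn(1, flat))  # initial weights

    def forward(self, snapshots):
        # snapshots: list of (A_hat, X) pairs, with A_hat a normalized dense adjacency matrix
        theta, cell = self.theta0, torch.zeros_like(self.theta0)
        out = []
        for a_hat, x in snapshots:
            theta, cell = self.lstm(theta, (theta, cell))   # evolve the GCN weights
            w = theta.view(self.in_dim, self.out_dim)
            out.append(torch.relu(a_hat @ x @ w))           # convolution with evolved weights
        return out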
§.§.§ Autoencoder architectures
<cit.> introduced DyGrAE, an autoencoder for D-TDGs. Specifically, DyGrAE leverages the Gated Graph Neural Network (GGNN) <cit.> to capture spatial information, and an LSTM encoder-decoder architecture to capture the dynamics of the network. GGNN is a DGN similar to the original GNN model, but with a fixed number of iterations. DyGrAE consists of four components: a GGNN to learn the spatial dynamic; an RNN to propagate temporal information; an encoder to project the graph evolution into a fixed-size representation; and a decoder to reconstruct the structure of the dynamic graph. At each time step,
DyGrAE first computes the snapshot embedding
as the result of the average pooling on node embeddings at time t, emb(𝒢_t) = pool_avg(GGNN(𝐗_t)).
Then, the LSTM encoder-decoder uses the graph embeddings to encode and reconstruct the input graph sequence:
encoder: 𝐡^enc_t = LSTM_enc (emb(𝒢_t), 𝐡^enc_t-1)
decoder: 𝐡^dec_t = LSTM_dec (𝐀_t-1, 𝐡^dec_t-1)
where 𝐀_t-1 = sigmoid(MLP(𝐡^dec_t-1)) is the reconstructed adjacency matrix at time t-1. The decoder uses h^enc_w to initialize its first hidden state, where w is the window size. To improve the performance, the authors introduced a temporal attention mechanism, which forces the model to focus on the time steps with significant impact. This mechanism reformulates the decoder as
𝐡^dec_t = LSTM_dec ([𝐡_t^* ||𝐀_t-1], 𝐡^dec_t-1)
where 𝐡_t^* = ∑_i=t-w^t-1α_t^i 𝐡_i^enc is the attention-weighted context, α_t^i = softmax(f(𝐡^dec_t-1, 𝐡^enc_i)) are the attention weights, and f is a function such as a dot product or an MLP.
A different strategy has been proposed by <cit.>, who developed DynGEM. This method handles D-TDGs by varying the size of the autoencoder network depending on a heuristic, which determines the number of hidden units required for each snapshot. This heuristic, named PropSize, ensures that each pair of consecutive layers, ℓ and ℓ+1, satisfies the condition:
size(ℓ+1) ≥ρ· size(ℓ)
where 0<ρ<1 is a hyper-parameter. This heuristic is applied to both the encoder and the decoder separately. If the condition in Equation <ref> is not satisfied for some pair of layers, then the number of (ℓ+1)'s hidden units is increased. If PropSize is still unsatisfied between the penultimate and ultimate layers, a new layer is added in between. At each time step t and before any application of PropSize, DynGEM initializes model parameters with those of the previous step, Θ_t = Θ_t-1. This results in a direct transfer of knowledge between adjacent time steps, which guarantees a higher affinity between consecutive embeddings.
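The following sketch reproduces the PropSize check and a possible widening/deepening policy; the step size and the decision to keep the embedding dimension fixed are our own assumptions, and the weight transfer used by DynGEM when layers are widened is only hinted at in a comment.

def propsize_ok(layer_sizes, rho=0.3):
    """Check size(l+1) >= rho * size(l) for every pair of consecutive layers."""
    return all(nxt >= rho * cur for cur, nxt in zip(layer_sizes, layer_sizes[1:]))

def adjust_encoder(layer_sizes, rho=0.3, widen_step=32):
    """Widen hidden layers until PropSize holds; the last size (the embedding) is kept
    fixed, and a new layer is inserted before it if the final pair still violates PropSize."""
    sizes = list(layer_sizes)
    for i in range(len(sizes) - 2):              # never touch the embedding layer
        while sizes[i + 1] < rho * sizes[i]:
            sizes[i + 1] += widen_step           # DynGEM also transfers the old weights here
    if sizes[-1] < rho * sizes[-2]:
        sizes.insert(-1, max(sizes[-1], int(rho * sizes[-2])))
    return sizes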
§.§.§ Random walk based architectures
Inspired by DeepWalk and Node2Vec, <cit.> propose a random walk approach for D-TDGs named Evolve2Vec. Given a sequence of graph snapshots, the authors consider old interactions to contribute only to the propagation of topological information, while they use more recent interactions to encode the temporal dynamic. Thus, they proceed by aggregating old snapshots into a single static graph.
Evolve2Vec starts random walks from all nodes with at least one outgoing edge in the static graph, as discussed in Section <ref>. Then, in the temporal part, each walker moves to a new neighbor if there is at least one outgoing edge in the current snapshot; otherwise, it remains at the current node until an outgoing edge is added. Depending on how the threshold between old and new is set, we can interpolate between a fully static and a fully dynamic approach. After the computation of the random walks, node embeddings are computed by feeding the walks into a skip-gram model, as usual.
§ LEARNING WITH CONTINUOUS-TIME DYNAMIC GRAPHS
In a scenario where the dynamic graph is observed only as new incoming events in the system, the methods defined in Section <ref> are unsuitable. In fact, approximating a C-TDG through a sequence of graph snapshots can introduce noise and loss of temporal information, since snapshots are captured at a coarser level, with a consequent performance deterioration. Moreover, the previously discussed methods usually do not allow including the time elapsed since the previous event: the majority of such methods update the embeddings only when new events occur. However, when a long time has passed since the last event involving a node, its embedding may become stale.
Intuitively, the embedding may change depending on the time elapsed since the previous event. For such reasons, new techniques have been introduced to handle C-TDGs. We classify literature approaches into four categories depending on the architectural choices.
§.§.§ Integrated architectures
<cit.> proposed JODIE, a method that learns embedding trajectories to overcome the staleness problem. JODIE computes the projection of a node u at a future timestamp t as an element-wise Hadamard product of the temporal attention vector with the previous node embedding,
𝐱_u(t) = (1+𝐰) ⊙𝐱_u(t^-_u)
where (1+𝐰) is the temporal attention vector, 𝐰 = Θ_pΔ t is the context vector, and Δ t = t - t^-_u is the time since the last event involving u. Thanks to the projection, JODIE can more accurately predict future embeddings, and thus new events. Similarly to other models, when an interaction event occurs between nodes u and v, JODIE computes the embeddings 𝐱_u and 𝐱_v by leveraging two RNNs.
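A minimal PyTorch sketch of the projection operator is shown below; the linear layer implements 𝐰 = Θ_pΔ t, while the class and argument names are our own.

import torch

class JodieProjection(torch.nn.Module):
    """Project a node embedding to a future time: x_u(t) = (1 + w) * x_u(t_u^-)."""
    def __init__(self, emb_dim):
        super().__init__()
        self.theta_p = torch.nn.Linear(1, emb_dim, bias=False)   # w = Theta_p * delta_t

    def forward(self, x_last, t_now, t_last):
        delta = (t_now - t_last).float().unsqueeze(-1)   # (batch, 1) time gaps
        w = self.theta_p(delta)                          # temporal context vector
        return (1.0 + w) * x_last                        # element-wise modulation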
<cit.> proposed DyRep, a framework that
updates the representation of a node as it appears in an event of the C-TDG. DyRep captures the continuous-time dynamics leveraging a temporal point process approach. A temporal point process is characterized by the conditional intensity function that models the likelihood of an event happening given the previous events. DyRep considers only two event types: topological evolution (node/edge creation/deletion), and node interaction (related to activity between two nodes in the graph). DyRep's conditional intensity function, computed for an event between nodes u and v at time t, is:
λ_uv^k(t) = f_k(g^k_uv(t^-))
where k is the event type, t^- is the previous timestamp in which an event occurred, and
f_k(z) = ψ_k log(1 + exp(z/ψ_k))
with ψ_k a parameter to be learned. The inner function
g^k_uv(t^-) = ω_k^T · [𝐱_u(t^-) || 𝐱_v(t^-)]
is a function of node representations learned through a DGN, with ω_k ∈ℝ^2|F| the model parameters that learn time-scale specific compatibility. Node embeddings computed by the DGN are updated as
𝐡_u(t) = σ(Θ_i𝐡_u^loc(t^-) + Θ_r𝐡_u(t^-_u) + Θ_e(t - t^-_u))
where h_u^loc(t^-) ∈ℝ^d_h is the representation of the aggregation of u's direct neighbors, t^-_u is the time of the previous event involving node u, and Θ_i, Θ_r∈ℝ^d_h× d_h and Θ_e∈ℝ^d_h are learnable parameters. In Equation <ref> the first addend propagates neighborhood information, the second self-information, while the third considers the exogenous force that may smoothly update node features during the time interval. To learn 𝐡_u^loc(t^-), DyRep uses an attention mechanism similar to the one proposed in the GAT model by <cit.>. In this case, the attention coefficient is parametrized by 𝒮∈ℝ^|𝒱| × |𝒱|, which is a stochastic matrix denoting the likelihood of communication between each pair of nodes. 𝒮 is updated according to the conditional intensity function. The aggregated neighborhood representation is
𝐡_u^loc(t^-) = max({σ(α_uv(t) ·𝐡_v(t^-)) | v ∈𝒩_u^t }),
with σ the activation function and α_uv(t) the attention factor, as usual.
§.§.§ Stacked architectures
In the case of sequential encoding of spatial and temporal information, <cit.> introduce
TGAT <cit.>, a model that learns the parameters of a continuous function that characterizes the continuous-time stream. Similarly to the GraphSAGE and GAT models, TGAT employs a local aggregator that takes as input the temporal neighborhood and the timestamp and computes a time-aware embedding of the target node by exploiting an attention mechanism.
The ℓ-th layer of TGAT computes the temporal embedding of node u at time t as
𝐡^ℓ_u(t) = mlp^ℓ_2(ReLU(mlp^ℓ_1([𝐡̂(t) || 𝐱_u])))
where 𝐡̂(t) is the hidden neighborhood representation obtained as
𝐪(t) = [𝐙(t)]_0 Θ_q
𝐊(t) = [𝐙(t)]_1:nΘ_K
𝐕(t) = [𝐙(t)]_1:nΘ_V
𝐡̂(t) = attn(𝐪(t), 𝐊(t), 𝐕(t))
where 𝐙(t)= [𝐱^ℓ-1_u(t) || Φ_d(0), ..., 𝐱^ℓ-1_v(t) || Φ_d(t-t_v), ...] is the temporal feature matrix, with v∈𝒩_u^t and n the size of u's neighborhood;
𝐪(t), 𝐊(t), and 𝐕(t) are the query, key and value projections of the matrix; and attn is an attention mechanism similar to GAT. The dimensional functional mapping Φ_d: t →ℝ^d_h is defined as
Φ_d(t) = [cos(ω_1t), sin(ω_1t), ..., cos(ω_dt), sin(ω_dt)]
where ω_i are learnable parameters.
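A possible PyTorch sketch of this functional time encoding is reported below; the normalization constants used in the original formulation are omitted, the learnable frequencies are initialized arbitrarily, and cosines and sines are simply concatenated rather than interleaved.

import torch

class TimeEncoding(torch.nn.Module):
    """Phi_d(t) built from [cos(w_i t), sin(w_i t)] pairs with learnable frequencies w_i."""
    def __init__(self, d):
        super().__init__()
        self.omega = torch.nn.Parameter(torch.rand(d))

    def forward(self, t):
        # t: tensor of time gaps, shape (batch,)
        phase = t.float().unsqueeze(-1) * self.omega                      # (batch, d)
        return torch.cat([torch.cos(phase), torch.sin(phase)], dim=-1)    # (batch, 2d)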
Differently, <cit.> proposed an approach, named StreamGNN, to learn the node embedding evolution as new edges appear in the dynamic graph. Thus, it is designed to deal only with interaction events. StreamGNN is composed of two main components: the update component, which is responsible for updating the node representations of the source and destination nodes of the new link; and the propagation component, which propagates the new event across the direct neighborhood of the involved nodes. When a new event is observed, the update component computes the representation of the event as the result of an MLP on the node representations of both source and destination. Then, such representation is updated by an LSTM to include historical information from previous interactions. The amount of past node history used by the LSTM is inversely proportional to the time difference with the previous node interaction. Then, the most recently computed node embeddings of source and target nodes are merged with the output of the LSTM model.
After these first steps, the propagation component diffuses the computed representations across the 1-hop neighborhood by leveraging an attention mechanism and by filtering out those neighbors which appear in an interaction before a predefined threshold.
<cit.> extend previous concepts by proposing a general framework composed of five core modules: memory, message function, message aggregator, memory updater, and the embedding module. The memory at time t is a matrix 𝐬(t) that has the objective of representing each node's history in a vectorial format. For this purpose, it is updated after every event. The message function has the role of encoding the event to update the memory module. Given an interaction event involving nodes u and v at time t, the message function computes two messages
m_u(t) = msg_src(𝐬_u(t^-), 𝐬_v(t^-), Δ t, 𝐞_uv(t))
m_v(t) = msg_dst(𝐬_v(t^-), 𝐬_u(t^-), Δ t, 𝐞_uv(t)),
where msg can be any learnable function, e.g., an MLP. In the case of a node event, a single message is sent. The message aggregator is a mechanism to aggregate messages computed at different timestamps. It can be a learnable function (e.g., an RNN) or a non-learnable one (e.g., the message average or the most recent message). After every event involving a node u, the memory of the node is updated by the memory updater as
𝐬_u(t) = mem(m_u(t), 𝐬_u(t^-))
where m_u(t) represents the aggregated messages, and mem is an RNN. Lastly, the embedding module generates the representation for a node u at time t by exploiting the information stored in the memory module of the node itself and its neighborhood up to time t
𝐡_u(t) = ∑_v ∈𝒩_u^t f(𝐬_u(t), 𝐬_v(t), 𝐱_u(t), 𝐱_v(t), 𝐞_uv)
with f a learnable function and 𝐱_u(t), 𝐱_v(t) the input node representations of nodes u and v.
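The sketch below wires together an MLP message function and a GRU memory updater for interaction events, keeping only the most recent message per node; dimensions, names, and the omission of the embedding module are our own simplifications of the framework described above.

import torch

class SimpleTGNMemory(torch.nn.Module):
    """Minimal memory module: MLP message function + GRU memory updater."""
    def __init__(self, mem_dim, edge_dim):
        super().__init__()
        self.msg = torch.nn.Sequential(
            torch.nn.Linear(2 * mem_dim + edge_dim + 1, mem_dim), torch.nn.ReLU())
        self.mem_upd = torch.nn.GRUCell(mem_dim, mem_dim)

    def interaction(self, s, u, v, delta_t, edge_feat):
        # s: (num_nodes, mem_dim) memory matrix; u, v: indices of the interacting nodes
        # delta_t: tensor of shape (1,); edge_feat: tensor of shape (edge_dim,)
        m_u = self.msg(torch.cat([s[u], s[v], edge_feat, delta_t]))
        m_v = self.msg(torch.cat([s[v], s[u], edge_feat, delta_t]))
        s = s.clone()                                   # gradient bookkeeping is simplified
        s[u] = self.mem_upd(m_u.unsqueeze(0), s[u].unsqueeze(0)).squeeze(0)
        s[v] = self.mem_upd(m_v.unsqueeze(0), s[v].unsqueeze(0)).squeeze(0)
        return s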
§.§.§ Random walk based architectures
Even in the scenario of C-TDGs, it is possible to compute node embeddings relying on random walks. Differently from a standard random walk, in the continuous-time domain
a valid walk is a sequence of interaction events with a non-decreasing timestamp. <cit.> extended the Node2Vec framework to exploit temporal random walks. Once the starting timestamp t_0, which is used to temporally bias the walk, has been decided, the framework samples new nodes for the walk by considering the temporal neighborhood. Differently from the general formulation of the temporal neighborhood, the authors apply a threshold to discriminate and filter out old neighbors.
The distribution to sample nodes in the walk can be either uniform, ℙ(v) = 1/|𝒩^t_u|, or biased. Specifically, the authors proposed two ways to obtain a temporally weighted distribution. Let us consider that the random walk is currently at node u. In the first case, a node v is sampled with the probability
ℙ(v) = exp(𝒯(v) - 𝒯(u))/∑_v' ∈𝒩^t_uexp(𝒯(v') - 𝒯(u)),
where 𝒯: 𝒱→ℝ^+ is the function that, given a node, returns the corresponding timestamp of the event in which the node was involved; while in the second
ℙ(v) = δ(v, 𝒯(v))/∑_v' ∈𝒩^t_uδ(v', 𝒯(v')),
where δ : 𝒱×ℝ^+ →ℤ^+ is a function that sorts temporal neighbors in descending order.
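The sketch below samples the next node of a temporal walk using the first (exponential) distribution above; representing the temporal neighborhood as (node, timestamp) pairs and subtracting the maximum gap for numerical stability are our own choices.

import math, random

def next_temporal_node(neighbors, t_curr):
    """neighbors: list of (node, timestamp) pairs; only events not earlier than the
    current time are valid, so the walk has non-decreasing timestamps."""
    valid = [(v, tv) for v, tv in neighbors if tv >= t_curr]
    if not valid:
        return None
    gaps = [tv - t_curr for _, tv in valid]
    m = max(gaps)                                   # subtract the max before exponentiating
    weights = [math.exp(g - m) for g in gaps]       # proportional to exp(T(v) - T(u))
    return random.choices(valid, weights=weights, k=1)[0]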
Instead of temporal random walks, <cit.> exploited Causal Anonymous Walks (CAW) to model C-TDGs. A CAW encodes the causality of network dynamics by starting from an edge of interest and backtracking adjacent edges over time. Moreover, a CAW is anonymous because it replaces node identities in a walk with relative identities based on the appearance order. The causality extraction helps the identification of temporal network motifs, while node anonymization guarantees inductive learning. Given an edge {u,v}, the model extracts M walks of length m starting from both u and v, and then performs the anonymization step. Afterward, an RNN encodes each walk leveraging two functions. The first consists of two MLPs fed with the encoding of the correlation between the node w and the sampled walks
f_1(w) = MLP(g(w, S_u)) + MLP(g(w, S_v))
where S_u is the set of sampled walks started from u, and g is the function that counts the number of times a node w appears at certain positions in S_u. The second function encodes time as in Equation <ref>. All the encoded walks are aggregated through mean-pooling, or the combination of self-attention and mean-pooling, to obtain the final edge representation.
§.§.§ Hybrid architectures
<cit.> propose to improve the expressive power of methods designed for C-TDGs by leveraging the strengths of both CAW and TGN-based architectures, thus providing a hybrid architecture.
Specifically, the authors observe that, for TGN-based architectures, the most expressive power is achieved by employing injective embedding module, message aggregator, and memory updater functions. On the other hand, the main advantage of CAW is its ability to leverage node identities to compute representative embeddings and capture correlations between walks. However, such an approach imposes that walks have timestamps in decreasing order, which can limit its ability to distinguish events. Under such circumstances, the authors propose PINT, an architecture that leverages injective temporal message passing and relative positional features to improve the expressive power of the method. Specifically, the embedding module computes the representation of node u at time t and layer ℓ as
𝐡̂^ℓ_u(t) = ∑_v ∈𝒩_u^tmlp^ℓ_agg(𝐡^ℓ-1_v(t) || 𝐞_uv) α^-β(t-t^-)
𝐡^ℓ_u(t) =mlp^ℓ_upd(𝐡^ℓ-1_u(t) || 𝐡̂^ℓ_u(t))
where α and β are scalar hyper-parameters, and the node state is initialized with its memory representation, 𝐡^0_u(t)=𝐬_u(t). To boost the power of PINT, the authors augment memory states with relative positional features, which include information about the number of existing temporal walks of a given length between two nodes.
§ THE BENCHMARK PROBLEM
In this section, we provide the graph learning community with a performance comparison among the most popular DGNs for dynamic graphs. The aim is to support the tracking of the progress of the state-of-the-art and to provide robust baseline performance for future works. To the best of our knowledge, in fact, there are no widely agreed standard benchmarks in the domain of dynamic graphs. For such a reason, nowadays, it is not easy to fairly compare models presented in different works, because they typically use different data and empirical settings. The latter play a crucial role in the definition of a fair and rigorous comparison, which includes multiple random weight initializations, hyper-parameter searches, and comparable data splits.
With this in mind, we designed three benchmarks to assess models that deal with spatio-temporal graphs, general D-TDGs, and C-TDGs. To do so, we extended the library PyDGN[<https://github.com/diningphil/PyDGN>] to the D-TDG learning setting to foster reproducibility and robustness of results. With the same aim, we developed a PyTorch Geometric <cit.> based framework to allow reproducible results in the continuous scenario. Lastly,
in Table <ref> we provide the community with a selection of datasets useful for benchmarking future works. An interested reader is referred to SNAP <cit.>, TSL <cit.>, and Network Repository <cit.> for broader data collections.
§.§ Spatio-temporal graph benchmark
In the spatio-temporal setting, we consider three graph datasets for traffic forecasting, Metr-LA <cit.>, Montevideo <cit.>, and PeMSBay <cit.>. Specifically,
* Metr-LA consists of four months of traffic readings collected from 207 loop detectors in the highway of Los Angeles County every five minutes;
* Montevideo comprises one month of hourly passenger inflow at stop level for eleven bus lines from the city of Montevideo;
* PeMSBay contains six months of traffic readings collected by California Transportation Agencies (CalTrans) Performance Measurement System (PeMS) every five minutes by 325 traffic sensors in San Francisco Bay Area.
For all the three datasets, the objective is to perform temporal node regression, thus, to predict the future node values, 𝐗_t+1, given the past graph history, [𝒢_i]_i=1^t.
The baseline performance for this type of predictive problem on graphs is based on five spatio-temporal DGNs (A3TGCN <cit.>, DCRNN <cit.>, GCRN-GRU <cit.>, GCRN-LSTM <cit.>, TGCN <cit.>), with the aim of assessing both stacked and integrated architectures, and the influence of an attention mechanism.
We designed each model as a combination of three main components. The first is the encoder which maps the node input features into a latent hidden space; the second is the DGN which computes the spatio-temporal convolution; and the third is a readout that maps the output of the convolution into the output space. The encoder and the readout are MLPs that share the same architecture among all models in the experiments.
We performed hyper-parameter tuning via grid search, optimizing the Mean Absolute Error (MAE). We performed a time-based split of the dataset, which reserves the first 70% of the data as training set, the following 15% as validation set, and the last 15% as test set. We trained the models using the Adam optimizer for a maximum of 1000 epochs and early stopping with patience of 50 epochs on the validation error. For each model configuration, we performed 5 training runs with different weight initializations and report the average of the results. We report in Table <ref> the grid of hyper-parameters exploited for this experiment.
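For reference, the time-based split can be sketched as follows on an ordered sequence of snapshots; the container is a plain Python list and the fractions match the 70/15/15 protocol described above.

def temporal_split(snapshots, train_frac=0.70, val_frac=0.15):
    """Split an ordered sequence of snapshots into train/validation/test sets by time."""
    n = len(snapshots)
    n_train, n_val = int(n * train_frac), int(n * val_frac)
    train = snapshots[:n_train]
    val = snapshots[n_train:n_train + n_val]
    test = snapshots[n_train + n_val:]
    return train, val, test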
*Results
In Table <ref> we report the results on the spatio-temporal-based experiments.
Overall, DCRNN and GCRN-GRU achieve the best performance on the selected tasks. Interestingly, they both rely on the Chebyshev spectral convolution and a GRU, but with different architectural structures. Indeed, DCRNN employs a stacked architecture, while GCRN-GRU embeds the DGN into the RNN, enabling a combined modeling of the temporal and spatial information. This result shows that there is not a superior architectural design in these tasks. However, it seems relevant to include a bigger neighborhood in the computation (by exploiting a larger Chebyshev polynomial filter size). Indeed, even though A3TGCN employs an attention mechanism to capture more global information, it is not enough to achieve comparable performance to DCRNN or GCRN-based approaches.
§.§ D-TDG benchmark
In the setting of general D-TDGs (where both node states and topology may evolve over time), we consider the following datasets:
* Twitter Tennis <cit.>: a mention graph in which nodes are Twitter accounts and their labels encode the number of mentions between them;
* Elliptic <cit.>: a network of bitcoin transactions, wherein a node represents a transaction and an edge indicates the payment flow. Nodes are also mapped to real entities belonging to licit categories (exchanges, wallet providers, miners, licit services) versus illicit ones (scams, malware, terrorist organizations, ransomware, Ponzi schemes);
* AS-773 <cit.>: the communication network of who-talks-to-whom defined in a timespan of almost 26 months from the BGP (Border Gateway Protocol) logs;
* Bitcoin-α <cit.>: a who-trusts-whom network of bitcoin users trading on the platform http://www.bitcoin-alpha.com.
We use the first two datasets to run node-level tasks. Specifically, similarly to the spatio-temporal setting, on Twitter Tennis we perform temporal node regression, while on the Elliptic dataset we perform temporal node classification. Therefore, we predict the class associated with the nodes of the snapshot at time t given the past graph history, [𝒢_i]_i=1^t.
We employ the last two datasets for link prediction task, to predict the future topology of the graph given its past history.
In this benchmark we evaluate three different classes of architectures (stacked, integrated and meta) and we show the potential of randomized networks in the tradeoff between performance and complexity. Thus, we consider five DGNs for our experiments: DynGESN <cit.>, EvolveGCN-H <cit.>, EvolveGCN-O <cit.>, GCLSTM <cit.>, LRGCN <cit.>.
We performed hyper-parameter tuning via grid search, optimizing the Mean Absolute Error (MAE) in the case of node regression and Area Under the ROC curve (AUROC) in the case of node classification and link prediction. We considered the same experimental setting, split strategy, and architectural choice as for the spatio-temporal graphs. In the case of link prediction, we perform negative sampling by randomly sampling non-occurring links from the next future snapshots. We note that in the case of DynGESN, the model employs fixed and randomized weights and only the final readout is trained.
We report in Table <ref> the grid of hyper-parameters exploited for this experiment.
*Results
Table <ref> shows the results on general D-TDGs.
Differently from the spatio-temporal setting, different tasks benefit from different architectures. Indeed, integrating topology changes (such as in GCLSTM and LRGCN) is more effective in link prediction tasks, while evolving the parameters of the DGN is more beneficial for node-level tasks, since it is more difficult to change the parameters of a static DGN to predict the topological evolution of the system. Notably, DynGESN achieves comparable results by exploiting only a few trainable parameters, showing an advantageous tradeoff between performance and complexity. This makes it an ideal choice when computational resources are limited.
§.§ C-TDG benchmark
In the continuous scenario, we perform our experiments leveraging three datasets:
* Wikipedia <cit.>: one month of interactions (157,474 interactions) between users and Wikipedia pages. Specifically, it corresponds to the edits made by 8,227 users on the 1,000 most edited Wikipedia pages;
* Reddit <cit.>: one month of posts (interactions) made by 10,000 most active users on 1,000 most active subreddits, resulting in a total of 672,447 interactions;
* LastFM <cit.>: one month of who-listens-to-which song information. The dataset consists of 1000 users and the 1000 most listened songs, resulting in 1,293,103 interactions.
For all the datasets we considered the task of future link prediction, thus, predicting if a link between two nodes u and v exists at a future time t given the history of past events.
For our experimental purposes, we consider the following DGNs: DyRep <cit.>, JODIE <cit.>, TGAT <cit.>, and TGN <cit.>. These methods allow us to evaluate the sequential encoding of spatial and temporal information as well as integrated architectures. Moreover, they allow assessing the contribution of attention mechanisms, embedding trajectories, and memory components. We consider as an additional baseline EdgeBank <cit.>, with the aim of showing the performance of a simple heuristic. EdgeBank is a method that merely stores previously observed interactions (without any learning), and then predicts stored links as positive.
We performed hyper-parameter tuning via grid search, optimizing the AUROC score. We considered the same experimental setting and split strategy as in previous experiments. We perform negative sampling by randomly sampling non-occurring links in the graph, as follows: (1) during training we sample negative destinations only from nodes that appear in the training set, (2) during validation we sample them from nodes that appear in the training or validation set, and (3) during testing we sample them from the entire node set.
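A minimal sketch of this split-dependent negative sampling policy is given below; node identifiers are assumed to be sortable, and the function name is our own.

import random

def sample_negative_dst(split, train_nodes, val_nodes, all_nodes, k=1):
    """Sample k negative destinations: train -> training nodes only;
    val -> training or validation nodes; test -> the entire node set."""
    if split == "train":
        pool = train_nodes
    elif split == "val":
        pool = train_nodes | val_nodes
    else:
        pool = all_nodes
    return random.sample(sorted(pool), k)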
We report in Table <ref> the grid of hyper-parameters exploited for this experiment.
*Results
We report the results of the C-TDG experiments in Table <ref>.
Overall, TGN generally outperforms all the other methods, showing consistent improvements over DyRep and JODIE. This result shows how the spatial information is fundamental for the effective resolution of the tasks. Indeed, an advantage of TGAT and TGN is that they can exploit bigger neighborhoods with respect to DyRep, which uses the information coming from one-hop distance, and JODIE, which only encodes the source and destination information. Despite these results, we observe that the temporal information is still extremely relevant to achieve good performance. In fact, the EdgeBank baseline is able to exceed a 91% AUROC score by only looking at the graph's history.
This is even more evident in the LastFM task, which, as observed in <cit.>, contains more reoccurring edges with respect to Wikipedia and Reddit. Consequently, such a task is comparatively easier to solve by solely exploiting these temporal patterns. Considering that EdgeBank's performance is directly correlated with the number of memorized edges, in this task it is able to outperform all the other methods.
§ CONCLUSIONS
Although the field of representation learning for (static) graphs is now a consolidated and vibrant research area, there is still a strong demand for work in the domain of dynamic graphs.
In light of this, in this paper we first proposed a survey that focuses on recent representation learning techniques for dynamic graphs under a uniform formalism consolidated from the existing literature. Second, we provided the research community with a fair performance comparison among the most popular methods of the three families of dynamic graph problems, by leveraging a reproducible experimental environment. We believe that this work will help foster research in the domain of dynamic graphs by providing a clear picture of the current development status and a good baseline to test new architectures and approaches.
In order to further improve the maturity of representation learning for dynamic graphs, we believe that certain aspects should be deepened.
An interesting future direction, in this sense, is to extend the work that has been done for heterophilic (static) graphs <cit.> to the temporal domain. This will require addressing the problem of generating information-rich node representations when neighboring nodes tend to belong to different classes. A similar challenge is the one of heterogeneous dynamic graphs, which contain different types of nodes and links. In this scenario, new architectures should learn the semantic-level information coming from node and edge types, in addition to topological and label information. Although these are interesting future directions, we observe that there are more challenges to be addressed in the future, such as the study in the temporal domain of over-smoothing <cit.>, a phenomenon where all node features become almost indistinguishable after a few embedding updates; over-squashing <cit.>, which prevents DGNs from propagating and preserving long-term dependencies between nodes; and the expressive power of DGNs <cit.>.
§ ACKNOWLEDGEMENTS
This work has been partially supported by
EU NextGenerationEU programme under the funding schemes PNRR-PE-AI FAIR (Future Artificial Intelligence Research) and by the EU H2020 TAILOR project, GA n. 952215. The authors would like to thank Federico Errica,
NEC Laboratories Europe GmbH, for the insightful discussions throughout the development of this work.
§ REFERENCES
[Gilmer et al.(2017)Gilmer, Schoenholz, Riley, Vinyals, and Dahl]MPNN
J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl, “Neural
Message Passing for Quantum Chemistry,” in Proceedings of the 34th
ICML, vol. 70. JMLR, 2017, p.
1263–1272.
[Zitnik et al.(2018)Zitnik, Agrawal, and Leskovec]bioinformatics
M. Zitnik, M. Agrawal, and J. Leskovec, “Modeling polypharmacy side effects
with graph convolutional networks,” Bioinformatics, vol. 34, no. 13,
pp. i457–i466, 06 2018.
[Gravina et al.(2022)Gravina, Wilson, Bacciu, Grimes, and
Priami]gravina_schizophrenia
A. Gravina, J. L. Wilson, D. Bacciu, K. J. Grimes, and C. Priami,
“Controlling astrocyte-mediated synaptic pruning signals for schizophrenia
drug repurposing with deep graph networks,” PLoS Comput. Biol.,
vol. 18, no. 5, pp. 1–19, 05 2022.
[Bacciu et al.(2023)Bacciu, Errica, Gravina, Madeddu, Podda, and
Stilo]gravina2023Covid
D. Bacciu, F. Errica, A. Gravina, L. Madeddu, M. Podda, and G. Stilo, “Deep
Graph Networks for Drug Repurposing with Multi-Protein Targets,” IEEE
TETC, pp. 1–14, 2023.
[Monti et al.(2019)Monti, Frasca, Eynard, Mannion, and
Bronstein]social_network
F. Monti, F. Frasca, D. Eynard, D. Mannion, and M. M. Bronstein, “Fake News
Detection on Social Media using Geometric Deep Learning,” arXiv
preprint arXiv:1902.06673, 2019.
[Derrow-Pinion et al.(2021)Derrow-Pinion, She, Wong, Lange, Hester,
Perez, Nunkesser, Lee, Guo, Wiltshire, Battaglia, Gupta, Li, Xu,
Sanchez-Gonzalez, Li, and Velickovic]google_maps
A. Derrow-Pinion, J. She, D. Wong, O. Lange, T. Hester, L. Perez, M. Nunkesser,
S. Lee, X. Guo, B. Wiltshire, P. W. Battaglia, V. Gupta, A. Li, Z. Xu,
A. Sanchez-Gonzalez, Y. Li, and P. Velickovic, “ETA Prediction with Graph
Neural Networks in Google Maps,” in Proceedings of the 30th ACM
CIKM. Association for Computing
Machinery, 2021, p. 3767–3776.
[Bacciu et al.(2020)Bacciu, Errica, Micheli, and Podda]BACCIU2020203
D. Bacciu, F. Errica, A. Micheli, and M. Podda, “A gentle introduction to
deep learning for graphs,” Neural Networks, vol. 129, pp. 203–221,
2020.
[Wu et al.(2021)Wu, Pan, Chen, Long, Zhang, and Yu]GNNsurvey
Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu, “A Comprehensive
Survey on Graph Neural Networks,” IEEE TNNLS, vol. 32, no. 1, pp.
4–24, 2021.
[Zhao et al.(2020)Zhao, Song, Zhang, Liu, Wang, Lin, Deng, and
Li]T-GCN
L. Zhao, Y. Song, C. Zhang, Y. Liu, P. Wang, T. Lin, M. Deng, and H. Li,
“T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction,”
IEEE T-ITS, vol. 21, no. 9, pp. 3848–3858, 2020.
[Rossi et al.(2020)Rossi, Chamberlain, Frasca, Eynard, Monti, and
Bronstein]tgn_rossi2020
E. Rossi, B. Chamberlain, F. Frasca, D. Eynard, F. Monti, and M. Bronstein,
“Temporal Graph Networks for Deep Learning on Dynamic Graphs,” in
ICML 2020 Workshop on Graph Representation Learning, 2020.
[Trivedi et al.(2019)Trivedi, Farajtabar, Biswal, and Zha]dyrep
R. Trivedi, M. Farajtabar, P. Biswal, and H. Zha, “DyRep: Learning
Representations over Dynamic Graphs,” in ICLR, 2019.
[Xu et al.(2020)Xu, Ruan, Korpeoglu, Kumar, and Achan]TGAT
D. Xu, C. Ruan, E. Korpeoglu, S. Kumar, and K. Achan, “Inductive
representation learning on temporal graphs,” in ICLR, 2020.
[Kazemi et al.(2020)Kazemi, Goel, Jain, Kobyzev, Sethi, Forsyth, and
Poupart]dynamicgraph_survey
S. M. Kazemi, R. Goel, K. Jain, I. Kobyzev, A. Sethi, P. Forsyth, and
P. Poupart, “Representation Learning for Dynamic Graphs: A Survey,”
J. Mach. Learn. Res., vol. 21, no. 1, jan 2020.
[Jiang and Luo(2022)]traffic_forecasting_survey
W. Jiang and J. Luo, “Graph neural network for traffic forecasting: A
survey,” Expert Systems with Applications, vol. 207, p. 117921,
2022.
[Bondy(1976)]graph_theory
J. A. Bondy, Graph Theory With Applications. Elsevier Science Ltd., 1976.
[Scarselli et al.(2009)Scarselli, Gori, Tsoi, Hagenbuchner, and
Monfardini]GNN
F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini, “The
Graph Neural Network Model,” IEEE Transactions on Neural Networks,
vol. 20, no. 1, pp. 61–80, 2009.
[Micheli(2009)]NN4G
A. Micheli, “Neural Network for Graphs: A Contextual Constructive
Approach,” IEEE Transactions on Neural Networks, vol. 20, no. 3, pp.
498–511, 2009.
[Defferrard et al.(2016)Defferrard, Bresson, and
Vandergheynst]chebnet
M. Defferrard, X. Bresson, and P. Vandergheynst, “Convolutional Neural
Networks on Graphs with Fast Localized Spectral Filtering,” in
Proceedings of the 29th NeurIPS. Curran Associates Inc., 2016, p. 3844–3852.
[Kipf and Welling(2017)]GCN
T. N. Kipf and M. Welling, “Semi-Supervised Classification with Graph
Convolutional Networks,” in ICLR, 2017.
[Veličković et al.(2018)Veličković, Cucurull,
Casanova, Romero, Liò, and Bengio]GAT
P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò,
and Y. Bengio, “Graph Attention Networks,” ICLR, 2018.
[Hamilton et al.(2017)Hamilton, Ying, and Leskovec]SAGE
W. L. Hamilton, R. Ying, and J. Leskovec, “Inductive Representation Learning
on Large Graphs,” in NeurIPS, 2017.
[Chiang et al.(2019)Chiang, Liu, Si, Li, Bengio, and Hsieh]clusterGCN
W.-L. Chiang, X. Liu, S. Si, Y. Li, S. Bengio, and C.-J. Hsieh, “Cluster-GCN:
An Efficient Algorithm for Training Deep and Large Graph Convolutional
Networks,” in Proceedings of the 25th ACM SIGKDD KDD. Association for Computing Machinery, 2019, p.
257–266.
[Xu et al.(2019)Xu, Hu, Leskovec, and Jegelka]GIN
K. Xu, W. Hu, J. Leskovec, and S. Jegelka, “How Powerful are Graph Neural
Networks?” in ICLR, 2019.
[Weisfeiler and Lehman(1968)]WL
B. Weisfeiler and A. Lehman, “A Reduction of a Graph to a Canonical Form and
an Algebra Arising during This Reduction,” Nauchno-Technicheskaya
Informatsia, vol. 2, no. 9, 1968.
[Gravina et al.(2023)Gravina, Bacciu, and Gallicchio]gravina2023adgn
A. Gravina, D. Bacciu, and C. Gallicchio, “Anti-Symmetric DGN: a stable
architecture for Deep Graph Networks,” in ICLR, 2023.
[Wang et al.(2021a)Wang, Wang, Yang, and Lin]dgc
Y. Wang, Y. Wang, J. Yang, and Z. Lin, “Dissecting the Diffusion Process in
Linear Graph Convolutional Networks,” in NeurIPS, 2021.
[Wu et al.(2019)Wu, Souza, Zhang, Fifty, Yu, and Weinberger]sgc
F. Wu, A. Souza, T. Zhang, C. Fifty, T. Yu, and K. Weinberger, “Simplifying
Graph Convolutional Networks,” in Proceedings of the 36th ICML,
vol. 97. PMLR, 09-15 Jun 2019, pp.
6861–6871.
[Eliasof et al.(2021)Eliasof, Haber, and Treister]pde-gcn
M. Eliasof, E. Haber, and E. Treister, “PDE-GCN: Novel Architectures for
Graph Neural Networks Motivated by Partial Differential Equations,” in
NeurIPS, 2021.
[Rusch et al.(2022)Rusch, Chamberlain, Rowbottom, Mishra, and
Bronstein]graphcon
T. K. Rusch, B. P. Chamberlain, J. Rowbottom, S. Mishra, and M. M. Bronstein,
“Graph-Coupled Oscillator Networks,” arXiv preprint
arXiv:2202.02296, 2022.
[Perozzi et al.(2014)Perozzi, Al-Rfou, and Skiena]DeepWalk
B. Perozzi, R. Al-Rfou, and S. Skiena, “DeepWalk: Online Learning of Social
Representations,” in Proceedings of the 20th ACM SIGKDD KDD. ACM, 2014, pp. 701–710.
[Mikolov et al.(2013)Mikolov, Chen, Corrado, and Dean]skipgram
T. Mikolov, K. Chen, G. Corrado, and J. Dean, “Efficient Estimation of Word
Representations in Vector Space,” in ICLR, 2013.
[Grover and Leskovec(2016)]node2vec
A. Grover and J. Leskovec, “node2vec: Scalable Feature Learning for
Networks,” in Proceedings of the 22nd ACM SIGKDD KDD, 2016.
[Rumelhart et al.(1986)Rumelhart, Hinton, and Williams]RNN
D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations
by back-propagating errors,” Nature, vol. 323, no. 6088, pp.
533–536, Oct 1986.
[Seo et al.(2018)Seo, Defferrard, Vandergheynst, and Bresson]GCRN
Y. Seo, M. Defferrard, P. Vandergheynst, and X. Bresson, “Structured Sequence
Modeling with Graph Convolutional Recurrent Networks,” in
NeurIPS. Springer International
Publishing, 2018, pp. 362–373.
[Gers et al.(2002)Gers, Schraudolph, and Schmidhuber]peephole-LSTM1
F. Gers, N. Schraudolph, and J. Schmidhuber, “Learning Precise Timing with
LSTM Recurrent Networks,” Journal of Machine Learning Research,
vol. 3, pp. 115–143, 01 2002.
[Graves(2013)]peephole-LSTM2
A. Graves, “Generating sequences with recurrent neural networks,”
arXiv preprint arXiv:1308.0850, 2013.
[Li et al.(2018)Li, Yu, Shahabi, and Liu]DCRNN
Y. Li, R. Yu, C. Shahabi, and Y. Liu, “Diffusion Convolutional Recurrent
Neural Network: Data-Driven Traffic Forecasting,” in ICLR, 2018.
[Cho et al.(2014)Cho, van Merrienboer, Gulcehre, Bahdanau, Bougares,
Schwenk, and Bengio]GRU
K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk,
and Y. Bengio, “Learning Phrase Representations using RNN Encoder-Decoder
for Statistical Machine Translation,” arXiv preprint
arXiv:1406.1078, 2014.
[Bai et al.(2021)Bai, Zhu, Song, Zhao, Hou, Du, and Li]a3tgcn
J. Bai, J. Zhu, Y. Song, L. Zhao, Z. Hou, R. Du, and H. Li, “A3T-GCN:
Attention Temporal Graph Convolutional Network for Traffic
Forecasting,” ISPRS International Journal of Geo-Information,
vol. 10, no. 7, 2021.
[Yu et al.(2018)Yu, Yin, and Zhu]STGCN
B. Yu, H. Yin, and Z. Zhu, “Spatio-temporal Graph Convolutional Networks: A
Deep Learning Framework for Traffic Forecasting,” in Proceedings of
the 27th IJCAI, 2018.
[Dauphin et al.(2017)Dauphin, Fan, Auli, and Grangier]GLU
Y. N. Dauphin, A. Fan, M. Auli, and D. Grangier, “Language Modeling with
Gated Convolutional Networks,” in Proceedings of the 34th ICML,
vol. 70. JMLR, 2017, p. 933–941.
[Chen et al.(2018)Chen, Xu, Wu, and Zheng]GC-LSTM
J. Chen, X. Xu, Y. Wu, and H. Zheng, “GC-LSTM: Graph convolution embedded
lstm for dynamic link prediction,” arXiv preprint arXiv:1812.04206,
2018.
[Li et al.(2019)Li, Han, Cheng, Su, Wang, Zhang, and Pan]LRGCN
J. Li, Z. Han, H. Cheng, J. Su, P. Wang, J. Zhang, and L. Pan, “Predicting
Path Failure In Time-Evolving Graphs,” in Proceedings of the 25th ACM
SIGKDD KDD. Association for Computing
Machinery, 2019, p. 1279–1289.
[Schlichtkrull et al.(2018)Schlichtkrull, Kipf, Bloem, van den Berg,
Titov, and Welling]RGCN
M. Schlichtkrull, T. N. Kipf, P. Bloem, R. van den Berg, I. Titov, and
M. Welling, “Modeling Relational Data with Graph Convolutional Networks,”
in The Semantic Web. Springer
International Publishing, 2018, pp. 593–607.
[Panagopoulos et al.(2021)Panagopoulos, Nikolentzos, and
Vazirgiannis]mpnn_lsltm
G. Panagopoulos, G. Nikolentzos, and M. Vazirgiannis, “Transfer Graph Neural
Networks for Pandemic Forecasting,” in Proceedings of the 35th AAAI
Conference on Artificial Intelligence, 2021.
[You et al.(2022)You, Du, and Leskovec]roland
J. You, T. Du, and J. Leskovec, “ROLAND: graph learning framework for dynamic
graphs,” in Proceedings of the 28th ACM SIGKDD KDD, 2022, pp.
2358–2366.
[Cini et al.(2023)Cini, Marisca, Bianchi, and Alippi]cini2023scalable
A. Cini, I. Marisca, F. Bianchi, and C. Alippi, “Scalable Spatiotemporal
Graph Neural Networks,” in Proceedings of the AAAI Conference on
Artificial Intelligence, 2023.
[Jaeger(2010)]esn1
H. Jaeger, “The “echo state” approach to analysing and training recurrent
neural networks–with an erratum note,” German National Research
Center for Information Technology GMD Technical Report, vol. 148, no. 34,
2010.
[Jaeger and Haas(2004)]esn2
H. Jaeger and H. Haas, “Harnessing Nonlinearity: Predicting Chaotic Systems
and Saving Energy in Wireless Communication,” Science, vol. 304, no.
5667, pp. 78–80, 2004.
[Micheli and Tortorella(2022)]dyngesn
A. Micheli and D. Tortorella, “Discrete-time dynamic graph echo state
networks,” Neurocomputing, vol. 496, pp. 85–95, 2022.
[Gallicchio and Micheli(2010)]gesn
C. Gallicchio and A. Micheli, “Graph echo state networks,” in
IJCNN. IEEE, 2010, pp. 1–8.
[Pareja et al.(2020)Pareja, Domeniconi, Chen, Ma, Suzumura, Kanezashi,
Kaler, Schardl, and Leiserson]egcn
A. Pareja, G. Domeniconi, J. Chen, T. Ma, T. Suzumura, H. Kanezashi, T. Kaler,
T. B. Schardl, and C. E. Leiserson, “EvolveGCN: Evolving Graph
Convolutional Networks for Dynamic Graphs,” in Proceedings of the
34th AAAI Conference on Artificial Intelligence, 2020.
[Taheri and Berger-Wolf(2019)]dygrae
A. Taheri and T. Berger-Wolf, “Predictive Temporal Embedding of Dynamic
Graphs,” in Proceedings of the 2019 IEEE/ACM ASONAM. Association for Computing Machinery, 2019, p.
57–64.
[Li et al.(2016)Li, Zemel, Brockschmidt, and Tarlow]gatedGNN
Y. Li, R. Zemel, M. Brockschmidt, and D. Tarlow, “Gated Graph Sequence Neural
Networks,” in Proceedings of ICLR'16, April 2016.
[Goyal et al.(2018)Goyal, Kamra, He, and Liu]dyngem
P. Goyal, N. Kamra, X. He, and Y. Liu, “DynGEM: Deep Embedding Method for
Dynamic Graphs,” arXiv preprint arXiv:1805.11273, 2018.
[Bastas et al.(2019)Bastas, Semertzidis, Axenopoulos, and
Daras]evolve2vec
N. Bastas, T. Semertzidis, A. Axenopoulos, and P. Daras, “evolve2vec:
Learning Network Representations Using Temporal Unfolding,” in
MultiMedia Modeling. Springer
International Publishing, 2019, pp. 447–458.
[Kumar et al.(2019)Kumar, Zhang, and Leskovec]jodie
S. Kumar, X. Zhang, and J. Leskovec, “Predicting Dynamic Embedding Trajectory
in Temporal Interaction Networks,” in Proceedings of the 25th ACM
SIGKDD KDD. ACM, 2019.
[Ma et al.(2020)Ma, Guo, Ren, Tang, and Yin]streamgnn
Y. Ma, Z. Guo, Z. Ren, J. Tang, and D. Yin, “Streaming Graph Neural
Networks,” in Proceedings of the 43rd International ACM SIGIR. Association for Computing Machinery, 2020,
p. 719–728.
[Nguyen et al.(2018)Nguyen, Lee, Rossi, Ahmed, Koh, and
Kim]temporal_node2vec
G. Nguyen, J. B. Lee, R. A. Rossi, N. Ahmed, E. Koh, and S. Kim,
“Continuous-Time Dynamic Network Embeddings,” Companion Proceedings
of The Web Conference 2018, 2018.
[Wang et al.(2021b)Wang, Chang, Liu, Leskovec, and
Li]CAW
Y. Wang, Y.-Y. Chang, Y. Liu, J. Leskovec, and P. Li, “Inductive
Representation Learning in Temporal Networks via Causal Anonymous Walks,”
in ICLR, 2021.
[Souza et al.(2022)Souza, Mesquita, Kaski, and Garg]pint
A. H. Souza, D. Mesquita, S. Kaski, and V. K. Garg, “Provably expressive
temporal graph networks,” in NeurIPS, 2022.
[Fey and Lenssen(2019)]Fey/Lenssen/2019
M. Fey and J. E. Lenssen, “Fast graph representation learning with PyTorch
Geometric,” in ICLR Workshop on Representation Learning on Graphs and
Manifolds, 2019.
[Leskovec and Krevl(2014)]snapnets
J. Leskovec and A. Krevl, “SNAP Datasets: Stanford large network dataset
collection,” <http://snap.stanford.edu/data>, Jun. 2014.
[Cini and Marisca()]tsl
A. Cini and I. Marisca, “Torch Spatiotemporal,” <https://github.com/TorchSpatiotemporal/tsl>, Mar. 2022.
[Rossi and Ahmed(2015)]nr
R. A. Rossi and N. K. Ahmed, “The network data repository with interactive
graph analytics and visualization,” in AAAI, 2015. [Online].
Available: <https://networkrepository.com>
[Rozemberczki et al.(2021)Rozemberczki, Scherer, He, Panagopoulos,
Riedel, Astefanoaei, Kiss, Beres, Lopez, Collignon, and
Sarkar]rozemberczki2021pytorch
B. Rozemberczki, P. Scherer, Y. He, G. Panagopoulos, A. Riedel, M. Astefanoaei,
O. Kiss, F. Beres, G. Lopez, N. Collignon, and R. Sarkar, “PyTorch
Geometric Temporal: Spatiotemporal Signal Processing with Neural Machine
Learning Models,” in Proceedings of the 30th ACM CIKM, 2021, p.
4564–4573.
[Béres et al.(2018)Béres, Pálovics, Oláh, and
Benczúr]twitter_tennis
F. Béres, R. Pálovics, A. Oláh, and A. A. Benczúr, “Temporal
walk based centrality metric for graph streams,” Applied Network
Science, vol. 3, no. 1, p. 32, 2018.
[Weber et al.(2019)Weber, Domeniconi, Chen, Weidele, Bellei, Robinson,
and Leiserson]elliptic
M. Weber, G. Domeniconi, J. Chen, D. Weidele, C. Bellei, T. Robinson, and
C. Leiserson, “Anti-Money Laundering in Bitcoin: Experimenting with Graph
Convolutional Networks for Financial Forensics,” KDD ’19 Workshop
on Anomaly Detection in Finance, 08 2019.
[Leskovec et al.(2005)Leskovec, Kleinberg, and Faloutsos]as733
J. Leskovec, J. Kleinberg, and C. Faloutsos, “Graphs over Time: Densification
Laws, Shrinking Diameters and Possible Explanations,” in Proceedings
of the 11th ACM SIGKDD KDD. Association for Computing Machinery, 2005, p. 177–187.
[Kumar et al.(2016)Kumar, Spezzano, Subrahmanian, and
Faloutsos]bc-otc
S. Kumar, F. Spezzano, V. Subrahmanian, and C. Faloutsos, “Edge weight
prediction in weighted signed networks,” in IEEE ICDM. IEEE, 2016, pp. 221–230.
[Kumar et al.(2018)Kumar, Hooi, Makhija, Kumar, Faloutsos, and
Subrahmanian]bc-otc2
S. Kumar, B. Hooi, D. Makhija, M. Kumar, C. Faloutsos, and V. Subrahmanian,
“Rev2: Fraudulent user prediction in rating platforms,” in
Proceedings of the 11th ACM WSDM. ACM, 2018, pp. 333–341.
[Poursafaei et al.(2022)Poursafaei, Huang, Pelrine, , and
Rabbany]edgebank
F. Poursafaei, S. Huang, K. Pelrine, , and R. Rabbany, “Towards better
evaluation for dynamic link prediction,” in NeurIPS Datasets and
Benchmarks, 2022.
[Pei et al.(2020)Pei, Wei, Chang, Lei, and Yang]geom-gcn
H. Pei, B. Wei, K. C.-C. Chang, Y. Lei, and B. Yang, “Geom-GCN: Geometric
Graph Convolutional Networks,” in ICLR, 2020.
[Yan et al.(2021)Yan, Hashemi, Swersky, Yang, and
Koutra]heterophily_results2
Y. Yan, M. Hashemi, K. Swersky, Y. Yang, and D. Koutra, “Two sides of the
same coin: Heterophily and oversmoothing in graph convolutional neural
networks,” arXiv preprint arXiv:2102.06462, 2021.
[Cavallo et al.(2023)Cavallo, Grohnfeldt, Russo, Lovisotto, and
Vassio]cavallo2023gcnh
A. Cavallo, C. Grohnfeldt, M. Russo, G. Lovisotto, and L. Vassio, “GCNH: A
Simple Method For Representation Learning On Heterophilous Graphs,”
arXiv preprint arXiv:2304.10896, 2023.
[Cai and Wang(2020)]over-smoothing
C. Cai and Y. Wang, “A note on over-smoothing for graph neural networks,”
arXiv preprint arXiv:2006.13318, 2020.
[Alon and Yahav(2021)]bottleneck
U. Alon and E. Yahav, “On the Bottleneck of Graph Neural Networks and its
Practical Implications,” in ICLR, 2021.
[Giovanni et al.(2023)Giovanni, Giusti, Barbero, Luise, Lio', and
Bronstein]digiovanni2023oversquashing
F. D. Giovanni, L. Giusti, F. Barbero, G. Luise, P. Lio', and M. Bronstein,
“On Over-Squashing in Message Passing Neural Networks: The Impact of Width,
Depth, and Topology,” arXiv preprint arXiv:2302.02941, 2023.
[Li and Leskovec(2022)]GNNBook-ch5-li
P. Li and J. Leskovec, “The expressive power of graph neural networks,” in
Graph Neural Networks: Foundations, Frontiers, and Applications,
L. Wu, P. Cui, J. Pei, and L. Zhao, Eds. Springer Singapore, 2022, pp. 63–98.
Alessio Gravina is a Ph.D. Student in Computer Science at University of Pisa. He received his B.Sc./M.Sc. in Computer Science from University of Pisa in 2018 and 2020, respectively. He was a visiting researcher at Huawei Research Center, Munich in 2023 and at Stanford University in 2019, while, in 2018, he won the Fujitsu AI-NLP Challenge. He is a member of the Computational Intelligence and Machine Learning group and Pervasive AI Lab. His interests are related to the area of machine learning for graphs and deep learning.
Davide Bacciu (S'06–M'09–SM'18) has a Ph.D. in Computer Science and Engineering from IMT Lucca. He is Associate Professor at the Computer Science Department, University of Pisa, where he heads the Pervasive AI Lab. His research interests include machine learning for structured data, Bayesian learning, deep learning, reservoir computing, distributed and embedded learning systems. Dr. Bacciu received the 2009 E.R. Caianiello Award for the best Italian Ph.D. thesis on neural networks. He is the Vice President of the Italian Association for Artificial Intelligence, Chair of the IEEE Technical Committee on Neural Networks and the founder of the IEEE CIS Task Force on Learning for Graphs. He is a Senior Editor of the IEEE TNNLS.