arXiv: <http://arxiv.org/abs/2307.01345v1> | Published: 2023-07-03 20:39:34
Title: Linear multistep methods with repeated global Richardson extrapolation
Authors: Imre Fekete, Lajos Lóczi
Primary category: math.NA | Categories: math.NA, cs.NA | MSC: 65L05, 65L06
Linear multistep methods with repeated global Richardson extrapolation
I. Fekete (corresponding author), Department of Applied Analysis and Computational Mathematics, ELTE Eötvös Loránd University, Pázmány P. s. 1/c, H-1117 Budapest, Hungary; L. Lóczi, Department of Numerical Analysis, ELTE Eötvös Loránd University, Pázmány P. s. 1/c, H-1117 Budapest, Hungary, and Department of Differential Equations, BME Budapest University of Technology and Economics
In this work, we further investigate the application of the well-known Richardson extrapolation (RE) technique to accelerate the convergence of sequences resulting from linear multistep methods (LMMs) for numerically solving initial-value problems of systems of ordinary differential equations. By extending the ideas of our previous paper,
we now utilize some advanced versions of RE in the form of repeated RE (RRE).
Assume that the underlying LMM—the base method—has order p and RE is applied ℓ times. Then we prove that the accelerated sequence has convergence order p+ℓ.
The version we present here is global RE (GRE, also known as passive RE), since the terms of the linear combinations are calculated independently.
Thus, the resulting higher-order LMM-RGRE methods can be implemented in a parallel fashion and existing LMM codes can directly be used without any modification. We also investigate how the linear stability properties of the base method (e.g. A- or A(α)-stability) are preserved by the LMM-RGRE methods.
Keywords: linear multistep methods; Richardson extrapolation; Adams methods;
BDF methods; convergence; region of absolute stability
Mathematics Subject Classification (2020): 65L05, 65L06
§ INTRODUCTION
Let us consider the initial-value problem
y'(t)=f(t,y(t)), y(t_0)=y_0,
where m∈ℕ^+ and f:ℝ×ℝ^m→ℝ^m is a given sufficiently smooth function. Suppose we approximate the unique solution y of (<ref>) on an interval [t_0,t_final] by applying a k-step linear multistep method (LMM)
∑_j=0^k α_j y_n+j=∑_j=0^k h β_j f_n+j
of order p≥ 1 on a uniform grid {t_n}, where, as usual, h:=t_n+1-t_n>0 is the step size (or grid length), f_m:=f(t_m, y_m), and the numbers α_j∈ℝ and β_j∈ℝ (j=0, …, k) are the given method coefficients with α_k≠0.
The LMM (<ref>) generates a sequence y_n(h) which is supposed to approximate the exact solution at t_n, that is, y_n(h)≈ y(t_n). In this work, this LMM will also be referred to as the base method or the underlying method.
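For concreteness, the following minimal Python sketch implements one explicit instance of (<ref>), the two-step Adams–Bashforth method AB2 (k=2, p=2). The driver and its interface are illustrative only (they are not the authors' code) and are reused in the later sketches.

```python
import numpy as np

def ab2_solve(f, t0, y0, h, n_steps, y1):
    """Two-step Adams-Bashforth method (k = 2, p = 2):
    y_{n+2} = y_{n+1} + h*(3/2 f_{n+1} - 1/2 f_n).
    The extra starting value y1 ~ y(t0 + h) must be supplied (cf. assumption a3)."""
    y0 = np.atleast_1d(np.asarray(y0, dtype=float))
    y1 = np.atleast_1d(np.asarray(y1, dtype=float))
    y = np.zeros((n_steps + 1, y0.size))
    y[0], y[1] = y0, y1
    t = t0 + h * np.arange(n_steps + 1)
    for n in range(n_steps - 1):
        y[n + 2] = y[n + 1] + h * (1.5 * f(t[n + 1], y[n + 1]) - 0.5 * f(t[n], y[n]))
    return t, y
```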
Classical Richardson extrapolation (RE) <cit.> is a technique to accelerate the convergence of numerical sequences depending on a small parameter, by eliminating the lowest order error term(s) from the corresponding asymptotic expansion. When solving (<ref>) numerically, the
parameter in RE can be chosen as the discretization step size h. The application of RE to sequences generated by one-step (e.g., Runge–Kutta) methods is described, for example, in <cit.>. In <cit.>, global (also known as passive) and local (or active) versions of RE are implemented with Runge–Kutta sequences. These combined methods find application in many areas (see, e.g., <cit.>).
When carrying out global RE (GRE), one considers a suitable linear combination of two approximations, one generated on a coarser grid and one on a finer grid, to obtain a better approximation of the solution y of (<ref>). This extrapolation is called global, because the sequences on the two grids are computed independently and their linear combination is formed only in the last step. Taking this idea further, one can consider several approximations on finer and finer grids to improve convergence even further. Such a procedure is called repeated global Richardson extrapolation (RGRE). To describe RGRE, let us consider some nested grids with grid lengths, say, h, h/2, h/4, and h/8, and the corresponding sequences y_n(h), y_2n(h/2), y_4n(h/4), and y_8n(h/8).
Suppose that RE is applied ℓ∈ℕ^+ times, and let us denote the corresponding linear combination of the approximations by r_n^[ℓ](h) (thus, the ℓ=1 case corresponds to classical RE). In <cit.>, the authors show the formulae r_n^[ℓ](h) for several values of ℓ; here we reproduce only the first three cases:
r_n^[1](h) := (2^p · y_2n(h/2) - y_n(h)) / (2^p - 1),
r_n^[2](h) := (2^(2p+1) · y_4n(h/4) - 3·2^p · y_2n(h/2) + y_n(h)) / ((2^p-1)(2^(p+1)-1)),
r_n^[3](h) := (2^(3p+3) · y_8n(h/8) - 7·2^(2p+1) · y_4n(h/4) + 7·2^p · y_2n(h/2) - y_n(h)) / ((2^p-1)(2^(p+1)-1)(2^(p+2)-1)).
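For illustration, the three linear combinations above translate directly into code. The following sketch (ours, not the authors') assumes the component approximations at the common grid point t^* have already been computed on the nested grids.

```python
def gre_combinations(p, yh, yh2, yh4, yh8):
    """Evaluate r^[1], r^[2], r^[3] at a common grid point t*, given base-method
    approximations y_n(h), y_2n(h/2), y_4n(h/4), y_8n(h/8) of a method of order p."""
    d1 = 2**p - 1
    d2 = d1 * (2**(p + 1) - 1)
    d3 = d2 * (2**(p + 2) - 1)
    r1 = (2**p * yh2 - yh) / d1
    r2 = (2**(2*p + 1) * yh4 - 3 * 2**p * yh2 + yh) / d2
    r3 = (2**(3*p + 3) * yh8 - 7 * 2**(2*p + 1) * yh4 + 7 * 2**p * yh2 - yh) / d3
    return r1, r2, r3
```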
In <cit.>, all the sequences y_n are assumed to be generated by Runge–Kutta methods applied to (<ref>).
Note the slight difference in the terminology: in <cit.>, q denotes the number of RE repetitions, while here ℓ denotes the number of RE applications. In other words, q=ℓ-1, and classical RE corresponds to ℓ=1 or q=0.
In <cit.>, the authors describe multiple RE, which is
another advanced version of Richardson extrapolation besides repeated RE. Based on (<ref>), multiple RE is defined as
r_n^multiple(h) := (2^(p+1) · r_2n^[1](h/2) - r_n^[1](h)) / (2^(p+1) - 1).
This formula is identical to r_n^[2](h) in (<ref>). (The context in <cit.> is, however, different, since they deal with local RE applied to Runge–Kutta methods as underlying methods.)
In our recent work <cit.>, we have shown that the sequence r_n^[1](h) also converges to the solution of (<ref>) if its component sequences y_2n and y_n are generated by a LMM (<ref>).
In the present work, we extend these ideas and prove that if the base method (<ref>) (satisfying some assumptions a1–a4 listed in Section <ref>) is of order p, and global Richardson extrapolation is applied ℓ times (for any ℓ≥ 1), then the sequences r_n^[ℓ](h) converge to the solution of (<ref>) with order of convergence p+ℓ. We refer to such a procedure as LMM-RGRE, or, when the number of RE applications is made explicit, LMM-ℓGRE.
The structure of our paper is as follows. Section <ref> summarizes some notation. In Section <ref>, the improved convergence of LMM-RGREs is proved if the underlying LMM is an Adams–Bashforth, Adams–Moulton or BDF method (see Corollary <ref>). Linear stability of LMM-RGREs is investigated in Section <ref>. Finally, several numerical tests are presented in Section <ref> that demonstrate the expected convergence order.
§.§ Conclusions
* To implement a LMM-RGRE method, existing LMM codes can directly be used without any modification.
* The proof of Theorem <ref> does not depend on the number of RE applications (although, for simplicity, we have listed only the cases ℓ=1,2,3); one only needs to assume more smoothness on the function f for larger values of ℓ.
* Although our recent work <cit.> can be considered as the ℓ=1 special case of Theorem <ref> of this paper, the assumptions of Theorem <ref> on the closeness of the initial values are relaxed here (see Remark <ref> below): instead of O(h^p+1)-closeness, it is sufficient to assume O(h^p)-closeness for the starting values of the LMM.
* The computational cost of a LMM-RGRE increases with the number of RE applications. In general, due to the computations on finer grids, applying GRE ℓ times requires approximately 2^(ℓ+1)-1 times as much computation as the underlying LMM. However, this is compensated by the higher convergence order p+ℓ.
* Regarding linear stability, when the underlying LMM is a BDF method, for example, LMM-RGREs preserve the A(α)-stability angles. In particular, when ℓ=2 and the base method is a BDF2 method, we obtain a 4^th-order A-stable method. Another example is the BDF5 method with ℓ=2, resulting in a 7^th-order method with A(α)-stability angle ≈ 51.839^∘ (see also <cit.> for the ℓ=1 case).
* The proof of Theorem <ref> is based on <cit.>. Since they establish the existence of
asymptotic expansions of the global error for (strictly stable) general linear methods (GLMs), one can clearly apply RGRE in that context as well, and accelerate the convergence of numerical sequences generated by such GLMs.
* The application of local RE to LMMs is the subject of an ongoing study.
§.§ Notation
We assume throughout this work that 0∈ℕ. The Kronecker product of two matrices is denoted by ⊗ (for its definition and properties, see, e.g., <cit.>). For a (complex) square matrix A, let 𝔐_A denote its minimal polynomial (that is, the unique univariate polynomial with leading coefficient equal to 1 and having the least degree such that 𝔐_A(A) is the zero matrix).
For a set S⊂ℂ and for a>0, we define a S:={a z:z∈ S}.
For 2≤ k∈ℕ, ABk, AMk and BDFk denote, respectively, the k-step Adams–Bashforth, Adams–Moulton, and BDF methods.
§ CONVERGENCE ANALYSIS
Given the α-coefficients of the LMM (<ref>), we define—according to <cit.>—the matrix
A :=
⎛ -α_k-1/α_k   -α_k-2/α_k   ⋯   -α_1/α_k   -α_0/α_k ⎞
⎜      1            0       ⋯       0          0    ⎟
⎜      0            1       ⋯       0          0    ⎟
⎜      ⋮            ⋮       ⋱       ⋮          ⋮    ⎟
⎝      0            0       ⋯       1          0    ⎠
so that A∈ℝ^k× k and its lower left block is the (k-1)×(k-1) identity matrix.
We assume that
* a1: the right-hand side f of the initial-value problem (<ref>) is sufficiently smooth;
* a2: the LMM (<ref>) has order p≥ 1;
* a3: for any h∈[0,h_0], the k starting values {y_0(h), y_1(h), …, y_k-1(h)} of the sequence used to initiate the LMM are each O(h^p)-close to the corresponding exact solution values {y(0), y(h), …, y((k-1)h)} of the initial-value problem (<ref>);
* a4: the eigenvalues of A lie in the closed unit disk of the complex plane, the only eigenvalue with modulus 1 is 1, and its algebraic multiplicity is also 1.
It is known that, due to consistency of the LMM (<ref>), 1 is always an eigenvalue of A.
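As an illustration of assumption a4, the sketch below builds the matrix A from the α-coefficients and checks the stated eigenvalue conditions numerically. The helper names are ours; the BDF2 coefficients used in the example follow the normalization (3/2)y_n+2 - 2y_n+1 + (1/2)y_n = h f_n+2.

```python
import numpy as np

def companion_matrix(alpha):
    """A built from alpha = (alpha_0, ..., alpha_k): first row
    -alpha_{k-1}/alpha_k, ..., -alpha_0/alpha_k; identity block below-left."""
    alpha = np.asarray(alpha, dtype=float)
    k = alpha.size - 1
    A = np.zeros((k, k))
    A[0, :] = -alpha[k - 1::-1] / alpha[k]
    A[1:, :-1] = np.eye(k - 1)
    return A

def satisfies_a4(alpha, tol=1e-12):
    """Assumption a4: eigenvalues in the closed unit disk, and 1 is the only
    eigenvalue of modulus 1, with algebraic multiplicity 1."""
    lam = np.linalg.eigvals(companion_matrix(alpha))
    on_circle = lam[np.abs(np.abs(lam) - 1.0) < tol]
    return (np.all(np.abs(lam) <= 1.0 + tol)
            and on_circle.size == 1
            and np.isclose(on_circle[0], 1.0))

print(satisfies_a4([0.5, -2.0, 1.5]))   # BDF2: eigenvalues 1 and 1/3 -> True
```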
The following theorem establishes the convergence of sequences (<ref>)–(<ref>). The grids have grid lengths h, h/2, h/4, and h/8, respectively, so the fixed grid point t^*:=t_0+n h is part of all of these grids. (As always in this context, any implied constant in an O symbol is independent of n and h.)
Under the above assumptions a1–a4 and for any fixed grid point t^*∈ [t_0,t_final], the sequences (<ref>)–(<ref>) satisfy
r_n^[1](h)-y(t^*)= O(h^p+1),
r_n^[2](h)-y(t^*)= O(h^p+2),
r_n^[3](h)-y(t^*)= O(h^p+3).
Extrapolation techniques are based on the existence of an asymptotic expansion of the global error with respect to the small parameter h. We will show in the second half of the proof that <cit.> is applicable in the present situation. This theorem then guarantees the existence of some functions 𝐞_p, 𝐞_p+1,𝐞_p+2 and 𝐄 such that
y_n(h)-y(t^*) = 𝐞_p(t^*)· h^p + 𝐞_p+1(t^*)· h^p+1 + 𝐞_p+2(t^*)· h^p+2 + 𝐄(t^*,h)· h^p+3,
where, given any δ>0 small enough, the function 𝐄 is uniformly bounded for any h∈[0,h_0] and any t^*∈ [t_0+δ,t_final]. (Clearly, for fixed t^*, we can choose δ such that t^*∈[t_0+δ,t_final].) The linear combinations in the definition of the sequences (<ref>)–(<ref>) have been set up such that the first, first two, and the first three terms, respectively, on the right-hand side of (<ref>) are eliminated. More precisely, to construct the coefficients of (<ref>), for example, one considers the expression
γ_1·(y_n(h)-y(t^*))+γ_2·(y_2n(h/2)-y(t^*))+γ_3·(y_4n(h/4)-y(t^*)),
then applies (<ref>) with h, h/2 and h/4, and
solves the linear system for γ_1,2,3 obtained by setting γ_1+γ_2+γ_3 equal to 1, the coefficient of h^p equal to 0, and the coefficient of h^p+1 equal to 0.
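A small symbolic sketch of this construction for general ℓ (our own helper, written with SymPy): the unknown weights multiply the approximations y_2^j n(h/2^j), must sum to 1, and must cancel the coefficients of h^p, …, h^(p+ℓ-1) in the expansion above.

```python
import sympy as sp

def rre_weights(p, ell):
    """Weights gamma_j (j = 0..ell) multiplying y_{2^j n}(h/2^j) in r_n^[ell](h):
    they sum to 1 and cancel the h^p, ..., h^(p+ell-1) terms of the expansion."""
    gammas = sp.symbols(f'g0:{ell + 1}')
    eqs = [sp.Eq(sum(gammas), 1)]
    for q in range(ell):   # make the coefficient of h^(p+q) vanish
        eqs.append(sp.Eq(sum(g * sp.Rational(1, 2 ** (j * (p + q)))
                             for j, g in enumerate(gammas)), 0))
    sol = sp.solve(eqs, gammas)
    return [sp.simplify(sol[g]) for g in gammas]

# p = 2, ell = 1 gives [-1/3, 4/3], i.e. r^[1](h) = (2^p y_2n(h/2) - y_n(h))/(2^p - 1).
print(rre_weights(2, 1))
```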
To finish the proof, what remains is to show that our assumptions a1–a4 on the LMM imply the assumptions of <cit.>. Theorem 9.1 is a general theorem about the asymptotic expansion of the global error of strictly stable general linear methods (GLMs). It is known <cit.> that any LMM (<ref>) can be interpreted as a GLM written as a one-step method in a higher dimensional space as follows:
Y_n+1=(A⊗ I)Y_n+hΦ(t_n,Y_n,h) (n∈ℕ),
where Y_n:=(y_n+k-1,y_n+k-2,…, y_n)^⊤ (n∈ℕ), Φ is the increment function of the numerical method, A∈ℝ^k× k is the matrix (<ref>), I∈ℝ^m × m is the identity matrix (recall that m is the dimension of (<ref>), hence A⊗ I∈ℝ^m· k× m· k). Then the matrix S appearing in <cit.> is S:=A⊗ I.
Step 1. Our assumption a1 implies the smoothness assumption A3 of Theorem 9.1.
Step 2. Assumption A2 of Theorem 9.1 holds automatically since in our case the increment function Φ does not depend on n.
Step 3. Assumptions a2–a3 imply the consistency of order p of the GLM. Indeed, assumption a2 implies that the local error of the LMM is O(h^p+1) in the sense of <cit.>. But this quantity is just the first component of the vector Y(x_n+1)-Ŷ_n+1 in <cit.> describing the local error of the one-step reformulation (<ref>) of the LMM; the remaining components of Y(x_n+1)-Ŷ_n+1 are 0. This means that the local error d_n+1 (n∈ℕ) of the GLM in the sense of <cit.> is also O(h^p+1). Since d_0=O(h^p) due to assumption a3, we get, by using <cit.>, that the consistency order of the GLM is p. (In (8.15) of <cit.>, the relation with the spectral projector Eδ_p(x)=0 now clearly holds, since δ_p(x)=0 due to d_n+1=O(h^p+1).)
Step 4. We finally show that assumption A1 of Theorem 9.1 follows from a4. We need to show that the GLM is strictly stable, that is,
(i) the matrix S is power bounded, in other words, sup_n∈ℕ ‖S^n‖ < +∞;
(ii) 1 is the only eigenvalue of S with modulus one.
It is known <cit.> that the eigenvalues of a Kronecker product S=A⊗ I are the product of the eigenvalues of its components, hence (ii) is verified.
Finally, to check (i) we recall (see, e.g., <cit.>) that a square matrix S is power bounded if and only if
(ia) each zero of its minimal polynomial lies in the closed unit disk, and
(ib) (potential) zeros of its minimal polynomial on the unit circle have multiplicity 1.
The minimal polynomial divides the characteristic polynomial, so a4 implies (ia). We also know from a4 that 1 is a simple zero of the characteristic polynomial of A, but from the above properties of the Kronecker product we see that its multiplicity in the characteristic polynomial of S is m. However, 1 is also a simple zero of the minimal polynomial of A. Hence Lemma <ref> below with M:=A implies property (ib).
The following auxiliary result, which we state separately, has been used in the above proof.
Let M∈ℂ^k× k be any square matrix, and let I∈ℝ^m× m be the identity matrix (k,m∈ℕ^+). Then the minimal polynomials of M and M⊗ I coincide.
In the proof, we denote identity matrices of appropriate size by the same symbol I. Consider a Jordan canonical form of M=TJT^-1. Then one easily checks that M⊗ I=(T⊗ I)(J⊗ I)(T⊗ I)^-1. We know that if two matrices are similar, then their minimal polynomials coincide. So to prove the lemma, it is sufficient to show that 𝔐_J=𝔐_J⊗ I.
Let J_1,…,J_n denote the blocks of the block diagonal matrix J. Then the blocks of the block diagonal matrix J⊗ I are J_1⊗ I,…,J_n⊗ I. It is easily seen that
𝔐_J=lcm(𝔐_J_1,𝔐_J_2,…,𝔐_J_n),
where lcm denotes the least common multiple of the given polynomials with leading coefficient set to 1. Similarly, we have
𝔐_J⊗ I=lcm(𝔐_J_1⊗ I,…,𝔐_J_n⊗ I).
Finally, we claim that 𝔐_J_j=𝔐_J_j⊗ I for any block J_j∈ℝ^s_j× s_j (j=1,2,…,n). Indeed, let λ denote the independent variable of the minimal polynomial, λ_j the eigenvalue corresponding to J_j, and N the appropriately sized (nilpotent) square matrix with 1s on its (first) superdiagonal and 0s everywhere else. Then, for both possible types of Jordan blocks, we have
* J_j=λ_j I⟹𝔐_J_j=λ-λ_j=𝔐_J_j⊗ I,
* J_j=λ_j I+N ⟹𝔐_J_j=(λ-λ_j)^s_j=𝔐_J_j⊗ I,
completing the proof.
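The lemma can also be checked directly on a small example. The following SymPy sketch (ours) builds M ⊗ I block by block and verifies that M and M ⊗ I are annihilated by exactly the same candidate polynomial.

```python
import sympy as sp

def kron_with_identity(M, m):
    """Build M ⊗ I_m block by block."""
    K = sp.zeros(M.rows * m, M.cols * m)
    for i in range(M.rows):
        for j in range(M.cols):
            K[i * m:(i + 1) * m, j * m:(j + 1) * m] = M[i, j] * sp.eye(m)
    return K

def annihilates(factors, A):
    """True if the product of (A - lam*I)^mult over the given (lam, mult) pairs is zero."""
    P = sp.eye(A.rows)
    for lam, mult in factors:
        P *= (A - lam * sp.eye(A.rows)) ** mult
    return P == sp.zeros(A.rows, A.cols)

# Eigenvalue 2 in a 2x2 Jordan block plus a simple eigenvalue 1:
# the minimal polynomial of M is (x-2)^2 (x-1).
M = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 1]])
K = kron_with_identity(M, 2)

for A in (M, K):
    print(annihilates([(2, 2), (1, 1)], A),   # (x-2)^2 (x-1): expected True
          annihilates([(2, 1), (1, 1)], A))   # (x-2)(x-1):    expected False
```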
Power boundedness of the matrix S=A⊗ I in the above proof of Theorem <ref> also follows from <cit.>, since now A is power bounded. Hence our Lemma <ref> can be considered as an alternative (more algebraic) proof of this fact.
<cit.> contains the convergence proof of the sequence (<ref>) based on <cit.>. This is the reason we have assumed in <cit.> that the k starting values of the sequence {y_0(h),y_1(h),…,y_k-1(h)} to initiate the LMM are each O(h^p+1)-close to the corresponding exact solution values. The present discussion shows that it is in fact enough to assume the weaker closeness assumption a3.
In <cit.>, the authors did not explicitly reference a result that guarantees the existence of an asymptotic expansion of the global error for Runge–Kutta methods, on which expansion their proof is based. One such classical theorem for one-step methods is due to Gragg (1964); for a modern treatment, see, e.g., <cit.>.
Many LMMs satisfy assumptions a2 and a4. In particular, all AB and AM methods satisfy a4 (since, for any of these methods, the only non-zero eigenvalue of the corresponding matrix (<ref>) is 1 with algebraic multiplicity 1), and one directly checks that each BDFk method also satisfies a4. Therefore, we have the following convergence result.
According to Theorem <ref>, the convergence of the sequences obtained by applying any of the LMMs
* ABk (with k≥ 2 steps),
* AMk (with k ≥ 2 steps), or
* BDFk (with 2≤ k≤ 6 steps)
to problem (<ref>) can be accelerated by using formulae (<ref>)–(<ref>).
These combined methods will be referred to as ABk-RGRE, AMk-RGRE, and BDFk-RGRE in general, and ABk-ℓGRE, AMk-ℓGRE, and BDFk-ℓGRE in particular (where ℓ is the number of RE applications appearing in r_n^[ℓ](h) in (<ref>)–(<ref>)).
§ LINEAR STABILITY ANALYSIS
In this section, let LMM denote any of the base methods ABk (k≥ 2), AMk (k≥ 2), or BDFk (2≤ k≤ 6), and let ℓ∈ℕ^+ denote the number of RE applications.
The region of absolute stability of the base method is denoted by 𝒮_LMM, while that of the combined LMM-RGRE method will be denoted by 𝒮_RGRE^[ℓ]. Let us apply the LMM-RGRE method to the scalar linear test equation
y'(t)=λ y(t), y(t_0)=y_0
with some h>0 and λ∈ℂ. Similarly to <cit.>, we define 𝒮_RGRE^[ℓ]⊂ℂ as
the set of numbers μ:=hλ for which the sequence n↦ r_n^[ℓ](h) is bounded for any choice of the starting values of any of its component sequences
n↦ y_n(h/2^j) (j=1,…,ℓ), but excluding the values of μ for which any of the leading coefficients α_k-(μ/2^j)β_k (j=1,…,ℓ) vanishes.
The proofs of the lemmas of <cit.> can be extended in a straightforward way to the present situation, so we only state the results.
We have the inclusions
⋂_j=0^ℓ(2^j𝒮_LMM)⊆𝒮_RGRE^[ℓ]⊆𝒮_LMM.
For any 2≤ k≤ 6 and ℓ∈ℕ^+, the BDFk-ℓGRE method has the same A(α)-stability angle as that of the underlying BDFk-method.
Assume that 𝒮_LMM is convex. Then 𝒮_RGRE^[ℓ]=𝒮_LMM.
In <cit.>, we have shown that 𝒮_LMM is convex, for example, for the AB2 and AM2 methods, but not convex for the AB3 method.
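The inclusion above can be explored numerically with the standard boundary-locus technique: the boundary of 𝒮_LMM is traced by μ(θ) = ρ(e^iθ)/σ(e^iθ), where ρ and σ are the characteristic polynomials of the LMM. The sketch below (ours, for the AB2 method) samples this curve and its scaled copies 2^j·𝒮_LMM for visual inspection; it is not a substitute for the proofs referenced above.

```python
import numpy as np

# Boundary locus of AB2: rho(z) = z^2 - z, sigma(z) = (3z - 1)/2.
theta = np.linspace(0.0, 2.0 * np.pi, 2000)
z = np.exp(1j * theta)
mu = (z**2 - z) / ((3.0 * z - 1.0) / 2.0)          # boundary of S_LMM

ell = 2
scaled = [2**j * mu for j in range(ell + 1)]       # boundaries of 2^j * S_LMM

# To inspect the intersection visually, e.g.:
# import matplotlib.pyplot as plt
# for j, b in enumerate(scaled):
#     plt.plot(b.real, b.imag, label=f"2^{j} S_LMM")
# plt.legend(); plt.axis("equal"); plt.show()
```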
§ NUMERICAL TESTS
In this section, we verify the expected order of convergence of LMM-RGREs on three benchmark problems: we chose
a Dahlquist test problem
y'(t)=-5y(t), y(0)=1,
a Lotka–Volterra system
y'_1(t)=0.1y_1(t)-0.3y_1(t)y_2(t), y'_2(t)=0.5(y_1(t)-1)y_2(t)
for t∈[0,62] with initial condition y(0)=(1,1)^⊤, and a mildly stiff van der Pol equation
y'_1(t)=y_2(t), y'_2(t)=2(1-y_1^2(t))y_2(t)-y_1(t)
for t∈[0,20] with initial condition y(0)=(2,0)^⊤.
As base LMMs, we considered the 2^nd- and 3^rd-order AB, AM, and BDF methods. As starting methods, we chose the 2^nd- and 3^rd-order Ralston methods, which have minimum error bounds <cit.>. As is usual, the AM methods were implemented in predictor–corrector style. For the nonlinear algebraic equations arising in connection with implicit LMMs, we use MATLAB's command. When the goal is to achieve high convergence order, or when RE is applied several times, one should switch to a multi-precision environment due to the double-precision limitation of MATLAB and to the limitations of . The fine-grid solutions obtained by a 6^th-order Runge–Kutta method with 2^16 grid points are used as reference solutions to measure the global error in the maximum norm and to estimate the order of convergence of the LMM-RGREs <cit.>.
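As a sketch of such a convergence test (using the illustrative ab2_solve helper from the Introduction rather than the authors' MATLAB code), one can apply GRE twice to AB2 on the Dahlquist problem above and estimate the observed order from the errors on successively refined grids; the expected value is p+ℓ = 4.

```python
import numpy as np

lam, t_final, p = -5.0, 1.0, 2
f = lambda t, y: lam * y
exact = lambda t: np.exp(lam * t)

def r2_at_tfinal(h):
    """AB2 combined with ell = 2 applications of global RE, evaluated at t_final."""
    vals = []
    for j in range(3):                                    # grids h, h/2, h/4
        hj = h / 2**j
        nj = int(round(t_final / hj))
        _, y = ab2_solve(f, 0.0, 1.0, hj, nj, exact(hj))  # exact y1 as starter (cf. a3)
        vals.append(y[-1, 0])
    yh, yh2, yh4 = vals
    return (2**(2*p + 1) * yh4 - 3 * 2**p * yh2 + yh) / ((2**p - 1) * (2**(p + 1) - 1))

errs = [abs(r2_at_tfinal(0.1 / 2**i) - exact(t_final)) for i in range(4)]
print([np.log2(errs[i] / errs[i + 1]) for i in range(3)])  # observed orders, ~4 expected
```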
Table <ref> and Figure <ref> illustrate the expected 4^th-order of convergence for all tested LMM-RGREs.
Tables <ref> and <ref>, and Figure <ref> illustrate the expected 5^th-order of convergence for all tested LMM-RGREs.
§ ACKNOWLEDGEMENT
I. Fekete was supported by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences. Supported by the ÚNKP-22-5 New National Excellence Program of the Ministry for Culture and Innovation from the source of the National Research, Development and Innovation Fund. The research was supported by the Ministry of Innovation and Technology NRDI Office within the framework of the Artificial Intelligence National Laboratory Program RRF-2.3.1-21-2022-00004.
§ REFERENCES
[BH] T. Bayleyegn, Á. Havasi, Multiple Richardson Extrapolation Applied to Explicit Runge–Kutta Methods, Advances in High Performance Computing, Springer, 902, 262–270 (2020), <http://doi.org/10.1007/978-3-030-55347-0_22>
[matrixmathematics] D. S. Bernstein, Matrix Mathematics, Princeton Univ. Press (2009), <https://doi.org/10.1515/9781400833344>
[butcher] J. C. Butcher, Numerical Methods for Ordinary Differential Equations, 3rd edition, Wiley (2016), <https://doi.org/10.1002/9781119121534>
[falgout2021] R. D. Falgout, T. A. Manteuffel, B. O'Neill, J. B. Schroder, Multigrid reduction in time with Richardson extrapolation, Electron. Trans. Numer. Anal., 54, 210–233 (2021), <https://doi.org/10.1553/etna_vol54s210>
[feketeloczi] I. Fekete, L. Lóczi, Linear multistep methods and global Richardson extrapolation, Appl. Math. Letters, 133 (2022), <https://doi.org/10.1016/j.aml.2022.108267>
[gautschi] W. Gautschi, Numerical Analysis, 2nd edition, Birkhäuser (2012), <https://doi.org/10.1007/978-0-8176-8259-0>
[hairernorsettwanner] E. Hairer, S. P. Nørsett, G. Wanner, Solving Ordinary Differential Equations I: Nonstiff Problems, 2nd revised edition, Springer-Verlag (1993), <https://doi.org/10.1007/978-3-540-78862-1>
[zdzislav] Z. Jackiewicz, General Linear Methods for Ordinary Differential Equations, Wiley (2009), <https://doi.org/10.1002/9780470522165>
[leveque] R. J. LeVeque, Finite Difference Methods for Ordinary and Partial Differential Equations: Steady-State and Time-Dependent Problems, SIAM, Philadelphia (2007), <https://doi.org/10.1137/1.9780898717839>
[ralston1962] A. Ralston, Runge–Kutta methods with minimum error bounds, Math. Comp., 16, 431–437 (1962), <https://doi.org/10.1090/S0025-5718-1962-0150954-0>
[richardson1911] L. F. Richardson, The approximate arithmetical solution by finite differences of physical problems including differential equations, with an application to the stresses in a masonry dam, Philos. Trans. Roy. Soc. London Ser. A, 210, 307–357 (1911), <https://doi.org/10.1098/rsta.1911.0009>
[richardson1927] L. F. Richardson, The deferred approach to the limit, Philos. Trans. Roy. Soc. London Ser. A, 226, 299–361 (1927), <https://doi.org/10.1098/rsta.1927.0008>
[zlatev] Z. Zlatev, I. Dimov, I. Faragó, Á. Havasi, Richardson Extrapolation: Practical Aspects and Applications, De Gruyter, Berlin/Boston (2017), <https://doi.org/10.1515/9783110533002>
[ZDFGH] Z. Zlatev, I. Dimov, I. Faragó, K. Georgiev, Á. Havasi, Explicit Runge–Kutta Methods Combined with Advanced Versions of the Richardson Extrapolation, Comput. Methods Appl. Math., 20(4), 739–762 (2020), <https://doi.org/10.1515/cmam-2019-0016>
[zlatev2022] Z. Zlatev, I. Dimov, I. Faragó, Á. Havasi, Efficient implementation of advanced Richardson extrapolation in an atmospheric chemical scheme, J. Math. Chem., 60(1), 219–238 (2022), <https://doi.org/10.1007/s10910-021-01300-z>
arXiv: <http://arxiv.org/abs/2307.02484v2> | Published: 2023-07-05 17:58:21
Title: Elastic Decision Transformer
Authors: Yueh-Hua Wu, Xiaolong Wang, Masashi Hamaya
Primary category: cs.LG | Categories: cs.LG, cs.AI
This paper introduces Elastic Decision Transformer (EDT), a significant advancement over the existing Decision Transformer (DT) and its variants. Although DT purports to generate an optimal trajectory, empirical evidence suggests it struggles with trajectory stitching, a process involving the generation of an optimal or near-optimal trajectory from the best parts of a set of sub-optimal trajectories.
The proposed EDT differentiates itself by facilitating trajectory stitching during action inference at test time, achieved by adjusting the history length maintained in DT.
Further, the EDT optimizes the trajectory by retaining a longer history when the previous trajectory is optimal and a shorter one when it is sub-optimal, enabling it to "stitch" with a more optimal trajectory.
Extensive experimentation demonstrates EDT's ability to bridge the performance gap between DT-based and Q Learning-based approaches. In particular, the EDT outperforms Q Learning-based methods in a multi-task regime on the D4RL locomotion benchmark and Atari games. Videos are available at: <https://kristery.github.io/edt/>.
§ INTRODUCTION
Figure: Normalized return with medium-replay datasets. The dotted gray lines indicate the normalized return with medium datasets. By achieving trajectory stitching, our method benefits from worse trajectories and learns a better policy.
Reinforcement Learning (RL) trains agents to interact with an environment and learn from rewards. It has demonstrated impressive results across diverse applications such as game playing <cit.>, robotics <cit.>, and recommendation systems <cit.>. A notable area of RL is Offline RL <cit.>, which employs pre-collected data for agent training and proves more efficient when real-time interactions are costly or limited. Recently, the conditional policy approach has shown great potential in Offline RL, where the agent learns a policy based on the observed state and a goal. This approach enhances performance and circumvents stability issues related to long-term credit assignment. Moreover, the successful Transformer architecture <cit.>, widely used in applications like natural language processing <cit.> and computer vision <cit.>, has been adapted for RL as the Decision Transformer (DT) <cit.>.
DT utilizes a Transformer architecture to model and reproduce sequences from demonstrations, integrating a goal-conditioned policy to convert Offline RL into a supervised learning task. Despite its competitive performance in Offline RL tasks, the DT falls short in achieving trajectory stitching, a desirable property in Offline RL that refers to creating an optimal trajectory by combining parts of sub-optimal trajectories <cit.>. This limitation stems from the DT's inability to generate superior sequences, thus curbing its potential to learn optimal policies from sub-optimal trajectories (Figure <ref>).
We introduce the Elastic Decision Transformer (EDT), which takes a variable length of the traversed trajectory as input.
Stitching trajectories, or integrating the current path with a more advantageous future path, poses a challenge for sequence generation-based approaches in offline RL. Stitching a better trajectory appears to contradict one of the core objectives of sequence generation, namely that the model should reliably reproduce the trajectories found in the training dataset.
We suggest that in order to `refresh' the prediction model, it should disregard `negative' or `unsuccessful' past experiences. This involves dismissing past failures and instead considering a shorter history for input. This allows the sequence generation model to select an action that yields a more favorable outcome. This strategy might initially seem contradictory to the general principle that decisions should be based on as much information as possible. However, our proposed approach aligns with this concept. With a shorter history, the prediction model tends to output with a higher variance, typically considered a weakness in prediction scenarios. Yet, this increased variance offers the sequence prediction model an opportunity to explore and identify improved trajectories. Conversely, when the current trajectory is already optimal, the model should consider the longest possible history for input to enhance stability and consistency. Consequently, a relationship emerges between the quality of the path taken and the length of history used for prediction. This correlation serves as the motivation behind our proposal to employ a variable length of historical data as input.
In practice, we train an approximate value maximizer using expectile regression to estimate the highest achievable value given a certain history length. We then search for the history length associated with the highest value and use it for action inference.
Evidence from our studies indicates that EDT's variable-length input sequence facilitates more effective decision-making and, in turn, superior sequence generation compared to DT and its variants. Furthermore, it is computationally efficient, adding minimal overhead during training. Notably, EDT surpasses state-of-the-art methods, as demonstrated in the D4RL benchmark <cit.> and Atari games <cit.>. Our analysis also suggests that EDT can significantly enhance the performance of DT, establishing it as a promising avenue for future exploration.
Our Contributions:
* We introduce the Elastic Decision Transformer, a novel approach to Offline Reinforcement Learning that effectively addresses the challenge of trajectory stitching, a known limitation in Decision Transformer.
* By estimating the optimal history length based on changes in the maximal value function, the EDT enhances decision-making and sequence generation over traditional DT and other Offline RL algorithms.
* Our experimental evaluation highlights EDT's superior performance in a multi-task learning regime, positioning it as a promising approach for future Offline Reinforcement Learning research and applications.
§ PRELIMINARIES
In this study, we consider a decision-making agent that operates within the framework of Markov Decision Processes (MDPs) <cit.>. At every time step t, the agent receives an observation of the world o_t, chooses an action a_t, and receives a scalar reward r_t. Our goal is to learn a single optimal policy distribution P^*_θ(a^t | o^≤ t, a^<t, r^<t) with parameters θ that maximizes the agent's total future return R_t = ∑_k>t r^k on all the environments we consider.
§.§ Offline Reinforcement Learning
Offline RL, also known as batch RL, is a type of RL where an agent learns to make decisions by analyzing a fixed dataset of previously collected experiences, rather than interacting with an environment in real-time. In other words, the agent learns from a batch of offline data rather than actively exploring and collecting new data online.
Offline RL has gained significant attention in recent years due to its potential to leverage large amounts of pre-existing data and to solve RL problems in scenarios where online exploration is impractical or costly. Examples of such scenarios include medical treatment optimization <cit.>, finance <cit.>, and recommendation systems <cit.>.
Despite its potential benefits, offline RL faces several challenges, such as distributional shift, which occurs when the offline data distribution differs significantly from the online data distribution, and the risk of overfitting to the fixed dataset. A number of recent research efforts have addressed these challenges, including methods for importance weighting <cit.>, regularization <cit.>, and model-based learning <cit.>, among others.
§.§ Decision Transformer
The Decision Transformer architecture, introduced by <cit.>, approaches the offline RL problem as a type of sequence modeling. Unlike many traditional RL methods that estimate value functions or compute policy gradients, DT predicts future actions based on a sequence of past states, actions, and rewards. The input to DT includes a sequence of past states, actions, and rewards, and the output is the next action to be taken. DT uses a Transformer architecture <cit.>, which is composed of stacked self-attention layers with residual connections. The Transformer architecture has been shown to effectively process long input sequences and produce accurate outputs.
Despite the success of being applied to offline RL tasks, it has a limitation in its ability to perform "stitching." Stitching refers to the ability to combine parts of sub-optimal trajectories to produce an optimal trajectory.
This approach can lead to a situation where the agent follows a sub-optimal trajectory that provides an immediate reward, even if a different trajectory leads to a higher cumulative reward over time. This limitation of DT is a significant challenge in many offline RL applications, and addressing it would greatly enhance the effectiveness of DT in solving real-world problems.
§ ELASTIC DECISION TRANSFORMER
In this section, we present Elastic Decision Transformer (EDT), a model that automatically utilizes a shorter history to predict the next action when the traversed trajectory underperforms compared to those in the training dataset. The mechanism allows the model to switch to a better trajectory by forgetting `unsuccessful' past experiences, thus opening up more possibilities for future trajectories. We further propose a method to estimate the maximum achievable return using the truncated history, allowing EDT to determine the optimal history length and corresponding actions.
We first provide essential background knowledge (Sec. <ref>) and discuss the motivation behind our approach (Sec. <ref>). Subsequently, we present our novel objective designed for training the EDT, which integrates expectile regression (Sec. <ref>). We also detail the action inference process employed during testing (Sec. <ref>).
§.§ Reinforcement Learning as Sequence Modeling
In this paper, we adopt an approach to offline reinforcement learning that is based on a sequence modeling problem. Specifically, we model the probability of the next token in the sequence (denoted as τ) based on all the tokens that come before it.
The sequences we model can be represented as:
τ = ⟨ ..., o^t, R̂^t, a^t, ... ⟩,
where t is a time step and R̂ is the return for the remaining sequence. The sequence we consider here is similar to the one used in <cit.>, except that we do not include the reward as part of the sequence, and we predict an additional quantity R that enables us to estimate an optimal input length, which we will cover in the following paragraphs. Figure <ref> presents an overview of our model architecture. It should be noted that we also change the way future observations are predicted compared with standard DT <cit.>, where the next observation is usually predicted directly from a^t through the causal transformer decoder.
§.§ Motivation
Figure: A toy example illustrating the motivation of EDT. The figure shows an offline RL dataset that contains only two trajectories (s_t-1^a, s_t, s_t+1^a) and (s_t-1^b, s_t, s_t+1^b).
We propose a shift in the traditional approach to trajectory stitching. Instead of focusing on training phases, we aim to achieve this stitching during the action inference stage. This concept is illustrated in Figure <ref> using a simplified example. In this scenario, we consider a dataset, D, comprising only two trajectories: D = {(s_t-1^a, s_t, s_t+1^a), (s_t-1^b, s_t, s_t+1^b)}. A sequence model trained with this dataset is likely to predict the next states in a manner consistent with their original trajectories.
To overcome this, we propose a method that enables trajectory stitching, where the model starts from s_t-1^b and concludes at s_t+1^a. This is achieved by adaptively adjusting the history length. We introduce a maximal value estimator, R, which calculates the maximum value among all potential outcomes within the dataset. This allows us to determine the optimal history length that maximizes R.
In the given example, if the model starts at state s_t-1^b, it will choose to retain the history (s_t) upon reaching state s_t, as R(s_t)>R(s_t-1, s_t). Conversely, if the model initiates from state s_t-1^a, it will preserve the history (s_t-1^a, s_t) when decision-making at s_t, as R(s_t-1^a,s_t)≥R(s_t). From the above example, we understand that the optimal history length depends on the quality of the current trajectory we've traversed, and it can be a specific length anywhere between a preset maximal length and a single unit.
To estimate the optimal history length in a general scenario, we propose solving the following optimization problem:
argmax_T max_τ_T∈ D R̂^t(τ_T),
where τ_T denotes the traversed trajectory truncated to history length T. More precisely, τ_T takes the form:
τ_T =⟨ o^t-T+1,R̂^t-T+1, a^t-T+1,...,o^t-1,R̂^t-1,a^t-1,o^t,R̂^t,a^t⟩.
§.§ Training objective for Maximum In-Support Return
In the EDT, we adhere to the same training procedure as used in the DT. The key distinction lies in the training objective - we aim to estimate the maximum achievable return for a given history length in EDT. To approximate the maximum operator in max_τ_T∈ DR̂^t(τ_T), we employ expectile regression <cit.>, a technique often used in applied statistics and econometrics. This method has previously been incorporated into offline reinforcement learning; for instance, IQL <cit.> used expectile regression to estimate the Q-learning objective implicitly. Here, we leverage it to enhance our estimation of the maximum expected return for a trajectory, even within limited data contexts.
The α∈(0,1) expectile of a random variable X is the solution to an asymmetric least squares problem, as follows:
argmin_m_α 𝔼_x∈ X[L^α_2(x-m_α)],
where L^α_2(u)=|α-1(u<0)| u^2.
Through expectile regression, we can approximate max_τ_T∈ DR̂^t(τ_T):
R^t_T = max_τ_T∈ D R̂^t(τ_T) ≈ argmin_R^t(τ_T) 𝔼_τ_T∈ D[L^α_2(R^t(τ_T)-R̂^t)].
We estimate R^t by applying an empirical loss of Equation <ref> with a sufficiently large α (we use α=0.99 in all experiments).
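A minimal PyTorch-style rendering of this loss (our sketch; tensor names are illustrative, and the sign convention follows the expectile definition above, with u taken as sample minus estimate):

```python
import torch

def expectile_loss(pred, target, alpha=0.99):
    """Asymmetric least squares L^alpha_2(u) = |alpha - 1(u < 0)| * u^2,
    applied with u = target - pred. For alpha close to 1, the minimizer of
    its expectation approaches the maximum of the target distribution, which
    is how the maximum achievable return is approximated."""
    u = target - pred
    weight = torch.abs(alpha - (u < 0).float())   # alpha if u >= 0, 1 - alpha if u < 0
    return (weight * u ** 2).mean()

# e.g. L_max = expectile_loss(R_pred, observed_return_to_go, alpha=0.99)
```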
The only difference in training EDT compared to other DT variants is the use of Equation <ref>, making the training time comparably shorter. We summarize our objective as:
ℒ_EDT=c_rℒ_return+ℒ_observation+ℒ_action+ℒ_max,
where ℒ_observation and ℒ_action are computed with a mean square error, ℒ_return is a cross-entropy loss, and ℒ_max is an empirical estimate of Equation <ref>. We set c_r=0.001 to balance the scale differences between the mean square error and cross-entropy losses. In tasks with discrete action spaces, such as Atari, the action loss is also optimized with cross-entropy, like the return objective ℒ_return, with weight 10c_r.
Our training method extends the work of <cit.> by estimating the maximum expected return value for a trajectory using Equation <ref>. This estimation aids in comparing expected returns of different trajectories over various history lengths. Our proposed method is not only easy to optimize, but can also be conveniently integrated with other DT variants. As such, it marks a significant advance in developing efficient offline reinforcement learning approaches for complex decision-making tasks.
§.§ Action Inference During Test time
During the action inference phase at test time, we first (1) estimate the maximum achievable return R_i for each history length i. Subsequently, (2) we predict the action by using the truncated traversed trajectory as input. The trajectory is truncated to the history length that corresponds to the highest value of R_i. These steps are elaborated in Figure <ref>.
To identify the history length i that corresponds to the highest R_i^t, we employ a search strategy as detailed in Algorithm <ref>. As exhaustively searching through all possible lengths from 1 to T may result in slow action inference, we introduce a step size δ to accelerate the process. This step size not only enhances inference speed by a factor of δ, but also empirically improves the quality of the learned policy. An ablation study on the impact of the step size δ is provided in Appendix <ref>. For all experiments, we set δ = 2 to eliminate the need for parameter tuning.
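A schematic version of this search (our paraphrase of the algorithm; `predict_max_return` and `predict_action` are hypothetical hooks standing in for the model's return-maximizer and action heads):

```python
def edt_act(edt_model, history, T_max=20, delta=2):
    """Pick the history length with the largest estimated maximal return, then
    predict the action from the trajectory truncated to that length.
    `history` is the traversed trajectory <..., o, R_hat, a, ...>."""
    best_len, best_ret = T_max, float("-inf")
    for length in range(T_max, 0, -delta):            # coarse search with step delta
        tail = history[-length:]                      # keep only the last `length` steps
        ret = edt_model.predict_max_return(tail)      # estimated maximum achievable return
        if ret > best_ret:
            best_len, best_ret = length, ret
    return edt_model.predict_action(history[-best_len:])
```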
To sample from the expert return distribution P(R^t,...|expert^t), we adopt an approach similar to <cit.> by applying Bayes' rule P(R^t, ...|expert^t)∝ P(expert^t| R^t, ...) P(R^t, ...) and approximate the distribution of expert-level return with inverse temperature κ (we set κ to 10 in all our experiments) <cit.>:
P(R^t|expert^t,...)∝exp(κ R^t)P(R^t).
While it may initially appear feasible to directly use the predicted R as the expert return, it's important to note that this remains a conservative maximum operation. Empirically, we have found that Eq. <ref> encourages the pursuit of higher returns, which consequently enhances the quality of the actions taken.
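A sketch of this re-weighting for a discretized return head (κ and the return buckets are as described in the text; the function itself is illustrative):

```python
import torch

def sample_expert_return(return_logits, bucket_values, kappa=10.0):
    """Tilt the predicted discrete return distribution P(R) by exp(kappa * R),
    i.e. P(R | expert) ∝ exp(kappa * R) P(R), then sample a target return."""
    log_p = torch.log_softmax(return_logits, dim=-1)              # log P(R) over buckets
    tilted = torch.softmax(log_p + kappa * bucket_values, dim=-1)
    idx = torch.multinomial(tilted, num_samples=1)
    return bucket_values[idx]
```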
§ EXPERIMENTS
Our experiments are designed to address several key questions, each corresponding to a specific section of our study:
* Does EDT significantly outperform DT and its variants? (Sec. <ref>, <ref>)
* Is the EDT effective in a multi-task learning regime, such as Locomotion and Atari games? (Sec. <ref>)
* Does a dynamic history length approach surpass a fixed length one? (Sec. <ref>)
* How does the expectile level α impact the model's performance? (Sec. <ref>)
* How does the quality of datasets affect the predicted history lengths? (Sec. <ref>)
We also provide an additional ablation study in Appendix <ref> due to space constraints.
§.§ Baseline Methods
In the subsequent section, we draw comparisons with two methods based on the Decision Transformer: the original Decision Transformer (DT) <cit.> and the Q-learning Decision Transformer (QDT) <cit.>. Additionally, we include a behavior cloning-based method (TS+BC) <cit.>, as well as two offline Q-learning methods, namely S4RL <cit.> and IQL <cit.>, in our comparisons.
It is important to note that QDT and TS+BC are specifically designed to achieve trajectory stitching. QDT accomplishes this by substituting collected return values with estimates derived from Conservative Q-Learning <cit.>. Conversely, TS+BC employs a model-based data augmentation strategy to bring about the stitching of trajectories.
§.§ Single-Task Offline Reinforcement Learning
For locomotion tasks, we train offline RL models on D4RL's “medium” and “medium-replay” datasets. The “medium” dataset comes from a policy reaching about a third of expert performance. The “medium-replay” dataset, sourced from this policy's replay buffer, poses a greater challenge for sequence modeling approaches such as DT.
We summarize our locomotion results in Table <ref>.
Since the proposed model estimates the return of the current sequence, reward information is not required during test time.
Our observations indicate that the proposed EDT consistently outperforms the baseline DT
and its variants on the majority of the datasets, with a notable performance advantage on the “medium-replay” datasets. These findings provide strong evidence that our approach is highly effective at stitching together the high-return portions of sub-optimal trajectories, a task that DT and its variants cannot accomplish. Although EDT does not fully outperform IQL in the single-task setting, it does bridge the gap between Q-learning-based methods and DT by performing trajectory stitching with the estimated maximum return.
§.§ Multi-Task Offline Reinforcement Learning
This section aims to evaluate the multi-task learning ability of our model across diverse tasks, focusing on locomotion and Atari tasks. Locomotion tasks utilize vectorized observations, while Atari tasks depend on image observations. To emphasize the role of trajectory stitching, we restrict our datasets to medium-replay datasets for the four locomotion tasks and datasets derived from DQN Replay <cit.> for the Atari tasks. Our evaluations span 20 different Atari tasks, with further environment setup details available in the Appendix.
Locomotion. In the locomotion multi-task experiment, we maintain the same model architecture as in the single-task setting. By confining the dataset to medium-replay datasets from four tasks, we increase task complexity and necessitate the offline RL approach to learn and execute these tasks concurrently, while effectively utilizing trajectories generated by random policies. As depicted in Figure <ref>, our proposed EDT successfully accomplishes all four tasks simultaneously without much performance compromise.
Atari. For Atari, we adopt a CNN image encoder used in DrQ-v2 <cit.> to process stacks of four 84x84 image observations. To ensure fair comparisons, all methods employ the same architecture for the image encoder. Following <cit.>, we incorporate random cropping and rotation for image augmentation. Additional experiment details are delegated to the Appendix for brevity. Performance on each Atari game is measured by human normalized scores (HNS) <cit.>, defined as (score-score_random)/(score_human-score_random), to ensure a consistent scale across each game.
Our experimental results align with those of <cit.>, highlighting that Q-learning-based offline RL approaches encounter difficulties in learning a multi-task policy on Atari games. Despite IQL achieving the highest score in Table <ref>, it demonstrates relative inadequacy in simultaneous multi-task learning, as indicated in Table <ref> and Figure <ref>. We leave the raw scores of the 20 Atari games in Appendix <ref>.
§.§ Dynamic History Length vs. Fixed History Length
In Sec. <ref>, we proposed the concept of EDT, which adjusts history length based on the quality of the current trajectory. We illustrated this idea with a toy example.
To validate the benefits of this dynamic history length approach, we tested the EDT using both fixed and variable history lengths. The results, summarized in Table <ref>, show that the variable history length outperforms the fixed ones, particularly on the "medium-replay" datasets.
These findings suggest that the EDT effectively chooses a history length that yields a higher estimated return. While a shorter history length aids in trajectory stitching, it's also crucial to retain a longer history length to ensure the continuity of optimal trajectories. Therefore, the dynamic adjustment of history length in the EDT is key to its superior performance.
§.§ Ablation Study on Expectile Level α
A key component of EDT is the approximation of the maximal value using expectile learning, as our method depends on accurately estimating these maximal values to choose the optimal history length. Consequently, examining the change in performance relative to the expectile level, α, provides insight into the necessity of correct history length selection for performance enhancement.
The results, as displayed in Figure <ref>, suggest that when expectile regression is able to accurately approximate the maximizer, specifically at higher expectile levels, we observe both a higher average performance and lower standard deviation. This suggests that accurate selection of history length not only stabilizes performance but also enhances scores. Conversely, as the expectile level approaches 0.5, the expectile regression's objective shifts towards a mean square error, leading to an estimated value that is more of a mean value than a maximal one. This change makes it a less effective indicator for optimal history length. As a result, we can see a deterioration in EDT's score as the expectile level drops too low, and an increase in standard deviation, indicating inconsistency in the selection of an effective history length.
§.§ Analysis of Optimal History Length Distribution
In our analysis, we examine the history length distributions across various datasets, as depicted in Figure <ref>. Our findings reveal that the medium-replay dataset, which amalgamates trajectories from multiple policies, yields a distribution closely approximating a uniform distribution. Conversely, the medium dataset, acquired through a singular stochastic policy, exhibits a history length distribution characterized by an increased density at lower history lengths. This observation can be attributed to the prevalence of analogous trajectories within the medium dataset, leading to more frequent occurrences of trajectory stitching than the “medium-replay” dataset. However, it is important to acknowledge that the gains derived from this type of trajectory stitching remain limited, as the trajectories stem from identical distributions. Although performance improvement is observed, as presented in Table <ref>, it is significantly less pronounced in comparison to the medium-replay dataset.
Contrary to initial expectations, trajectory stitching does not occur as frequently within the medium-replay dataset as within the medium dataset. In fact, the distinct policies behind the medium-replay dataset contribute to the reduced instances of trajectory stitching, as their respective state distributions differ from one another.
The diversity within the dataset results in a limited number of mutual s_t instances, as illustrated in Figure <ref>.
Nevertheless, the proposed EDT method derives substantial benefits from trajectory stitching in this context. The EDT effectively avoids being misled by sub-optimal trajectories within the dataset, demonstrating its capacity to make better decisions regarding history lengths and actions that optimize the current return.
§ RELATED WORK
Offline Reinforcement Learning.
Offline RL has been a promising topic for researchers, since sampling from environments during training is usually costly or dangerous in real-world applications, and offline reinforcement learning is able to learn a better policy without directly collecting new state-action pairs. Several previous works have utilized constrained or regularized dynamic programming to mitigate deviations from the behavior policy <cit.>.
Decision Transformer and its variants <cit.> have been a promising direction for solving offline RL from the perspective of sequence modeling.
Trajectory Transformer (TT) <cit.> models distributions over trajectories using transformer architecture. The approach also incorporates beam search as a planning algorithm and demonstrates exceptional flexibility across various applications, such as long-horizon dynamics prediction, imitation learning, goal-conditioned reinforcement learning, and offline reinforcement learning.
Recently, there has been a growing interest in incorporating diffusion models into offline RL methods. This alternative approach to decision-making stems from the success of generative modeling, which offers the potential to address offline RL problems more effectively. For instance, <cit.> reinterprets Implicit Q-learning as an actor-critic method, using samples from a diffusion parameterized behavior policy to improve performance. Similarly, other diffusion-based methods <cit.> utilize diffusion-based generative models to represent policies or model dynamics, achieving competitive or superior performance across various tasks.
Trajectory Stitching.
A variety of methods have been proposed to tackle the trajectory stitching problem in offline RL. The Q-learning Decision Transformer (QDT) <cit.> stands out as it relabels the ground-truth return-to-go with estimated values, a technique expected to foster trajectory recombination. Taking a different approach, <cit.> utilizes a model-based data augmentation strategy, stitching together parts of historical demonstrations to create superior trajectories. Similarly, the Best Action Trajectory Stitching (BATS) <cit.> algorithm forms a tabular Markov Decision Process over logged data, adding new transitions using short planned trajectories. BATS not only aids in identifying advantageous trajectories but also provides theoretical bounds on the value function. These efforts highlight the breadth of strategies employed to improve offline RL through innovative trajectory stitching techniques.
§ DISCUSSION
Conclusion.
In this paper, we introduced the Elastic Decision Transformer, a significant enhancement to the Decision Transformer that addresses its limitations in offline reinforcement learning. EDT's innovation lies in its ability to determine the optimal history length, promoting trajectory stitching. We proposed a method for estimating this optimal history length by learning an approximate value optimizer through expectile regression.
Our experiments affirmed EDT's superior performance compared to DT and other leading offline RL algorithms, notably in multi-task scenarios. EDT's implementation is computationally efficient and straightforward to incorporate with other DT variants. It outshines existing methods on the D4RL benchmark and Atari games, underscoring its potential to propel offline RL forward.
In summary, EDT offers a promising solution for trajectory stitching, enabling the creation of better sequences from sub-optimal trajectories. This capability can considerably enhance DT variants, leading to improved performance across diverse applications. We are committed to releasing our code.
Limitations.
A potential direction for future improvement involves enhancing the speed at which EDT estimates the optimal history. This could make the method suitable for real-time applications that have strict time constraints. While this adaptation is an exciting avenue for future research, it falls outside the primary scope of this paper.
§ ADDITIONAL ABLATION STUDY
§.§ Step Size Ablation
Figure: Ablation study on the step size δ. The greater the step size, the smaller the length search space.
In Algorithm <ref>, we introduced the step size parameter δ, acting as a balance between search granularity and inference speed. The effects of varying δ are depicted in Figure <ref>. To compute the history length search space, we commence with the maximum history length. For instance, if the maximum history length is T=20 and the step size δ=8, the history length search space becomes {20, 12, 4}.
Figure <ref> shows that a narrowed search space leads to a decline in return performance and an increase in standard deviation, corroborating results from Table <ref>. EDT with a restricted history search space behaves more like EDT with a fixed history length. We also found the return to be optimized when the history length is set to 4, possibly due to challenges in estimating the return in offline settings <cit.>, where direct environment interactions are prohibited. By increasing the step size appropriately, the return inference becomes more stable and resilient to noise and disruptions. Lastly, the inference speed when δ=4 is substantially faster (by a factor of four) than when δ=1.
§.§ History Length Distribution
We further show more history length distributions for the locomotion tasks here.
Our initial hypothesis regarding the Ant task was that the medium-replay dataset would primarily consist of shorter history lengths, while the medium dataset would be more focused on longer history lengths. The distribution of history lengths in the medium and medium-replay datasets mostly supports this hypothesis.
In the medium-replay dataset, we consistently see a concentration of shorter history lengths. This was expected, given that the nature of the medium-replay dataset is likely to produce shorter history lengths. The distribution often converges towards higher densities for shorter lengths, which aligns with our expectations.
The pattern within the medium dataset, however, is less consistent. This could be attributed to several factors that can either elongate or truncate the history length. Despite these fluctuations, we still observe a slight inclination towards longer history lengths. This meets our initial assumption but also demonstrates the complexity within the medium dataset.
§ EXPERIMENT DETAILS
§.§ Learning on Locomotion Tasks
In Section <ref>, we discussed the application of our approach to a multi-task learning scenario. Specifically, we consolidated medium-replay datasets from Ant, HalfCheetah, Hopper, and Walker2d. To standardize returns across these varied environments, we used scaling statistics (maximum return, minimum return, and return scale) from the official Decision Transformer repository (<https://github.com/kzl/decision-transformer>). As detailed in Section <ref>, we further segmented the return into 120 discrete buckets and, following the approach of <cit.>, sampled from the top 85th percentile logits within the discrete return distribution.
However, it's important to highlight that our return maximizer, R, estimates the scaled value directly rather than the discretized one. For the sequence ⟨ ...,𝐨,R,a,...⟩, we supplement each token embedding with a learned positional embedding. Given the differing state dimensions across tasks, we employ an additional state embedding layer. This layer transforms the raw state representation into a consistent state embedding size, while keeping the remainder of the architecture unchanged across tasks.
Approximate R Estimation. Section <ref> outlines how we use Bayes' Rule to estimate expert returns. A conventional R prediction typically involves an autoregressive process, given the necessity of sampling from the discrete R̂_expert distribution at each time step during the search for the optimal history length. To simplify this process, we follow the approximation strategy used in <cit.>'s implementation. We mask all return values except the first one, thus making R solely dependent on 𝐨, a, and the initial return value.
History Length Search Heuristic. In Algorithm <ref>, we illustrated that a larger δ value allows us to infer actions more rapidly. Adopting this method requires balancing inference speed against search accuracy. Based on the concept introduced in Sec. <ref>, the optimal history length at the current state s_t+1 might be close to that of the previous state s_t. Therefore, given the optimal length l_t at time step t, we search within the range {l_t-Δ, l_t-Δ+1, ..., l_t+Δ} for the optimal length l_t+1 at the next step. Our results show that with Δ=3, we can achieve an inference speed that is around three times faster than Algorithm <ref> with δ=1, with a slight improvement in performance and lower variance.
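A minimal sketch of this local-window heuristic is given below; as before, estimate_expert_return stands in for the model's expert-return prediction and is not the authors' code.

def local_length_search(history, estimate_expert_return, prev_len, max_len=20, window=3):
    # Search only within +/- window of the previous optimal length l_t.
    lo = max(1, prev_len - window)
    hi = min(max_len, prev_len + window)
    scores = {length: estimate_expert_return(history[-length:]) for length in range(lo, hi + 1)}
    return max(scores, key=scores.get)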
§.§ Multi-Task Learning on Atari Games
The process for action inference in multi-task learning on Atari games closely aligns with that described in Sec. <ref>, including the method for approximating R. However, there are several noteworthy distinctions, which we elaborate upon in this section.
Given that all games utilize grayscale frames of the same dimensions (84x84), we do not need to implement an additional state embedding layer as we did in the Locomotion scenario. Instead, we introduce a shared image encoder for all games, with further details outlined in Sec. <ref>.
It's important to note the distinction between the action spaces of Atari games and Locomotion tasks. The action space of Atari games is discrete, so our sampling approach mirrors how we sample from the return distribution: we select from the top 85th percentile of action logits. Inspired by <cit.>, we discretize the reward signal into {-1, 0, +1}, while the return is split into 120 buckets spanning the range {-20, ..., 100}.
Given the complexity and potential pitfalls of learning on 20 Atari games from scratch, we use a subset of GPT-2 to initialize the transformer decoder. This step improves both the convergence rate and the quality of the learned policy. Our dataset consists of 2 training runs from <cit.>, with each run featuring rollouts from 50 checkpoints. Lastly, we enhance the dataset with image augmentations, including random cropping and rotation.
§.§ Atari Games
Due to time and computational constraints, we randomly selected 20 tasks from the 41 tasks in the study by <cit.>. Details about the game types and number of action spaces can be found in Table <ref>. We also provide the raw scores of the Atari experiments in Table <ref>. Given that we transform the original observations to grayscale and rescale them to 84x84 images, we illustrate these transformed observations in Figure <ref>.
§.§ Image Encoder
To ensure a fair comparison, we standardize the image encoder across EDT and the two baseline approaches. We adopt the image encoder from DrQ-v2 <cit.> for this purpose.
The architecture of the image encoder is as follows (a minimal sketch is given after the list):
* 1 convolutional layer with stride 2, 32 output channels, kernel size 3, followed by ReLU.
* 3 convolutional layers with stride 1, 32 output channels, kernel size 3, each followed by ReLU.
* 1 fully connected layer with H output dimensions.
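The sketch below is one possible PyTorch rendering of this DrQ-v2-style encoder, assuming 84×84 grayscale inputs; the flattened feature size 32·35·35 follows from the listed strides and kernels but should be checked against the actual implementation.

import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    # DrQ-v2-style encoder: 4 convolutions followed by a linear projection to H dimensions.
    def __init__(self, h_dim, in_channels=1):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=1), nn.ReLU(),
        )
        # For 84x84 inputs the convolutional stack yields 32 x 35 x 35 feature maps.
        self.fc = nn.Linear(32 * 35 * 35, h_dim)

    def forward(self, x):              # x: (B, 1, 84, 84), pixel values scaled to [0, 1]
        z = self.convs(x)
        return self.fc(z.flatten(1))   # (B, h_dim)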
§ INTER-QUARTILE MEAN OF THE ATARI RESULTS
Figure: Results of learning from 20 Atari games in terms of IQM.
Alongside the main paper's results, we present the Inter-Quartile Mean (IQM) of our findings. Notably, IQL outperforms DT in terms of IQM. This occurs due to the variance in task difficulties, despite our use of HNS to balance scales across tasks. The difficulty disparity among games leads to significant variance. However, despite this, our proposed EDT still significantly surpasses both baselines, demonstrating its robust performance.
In the original Decision Transformer model, estimating the remaining return-to-go is dependent on a preset expected return-to-go value. This requirement presents a challenge when applied to multi-task environments, where return-to-go values can significantly vary across different tasks.
The issue is particularly pronounced in Atari games where high scores often necessitate long trajectories. These long trajectories may consequently yield negative estimates for the return-to-go, which presents an issue for the model's performance and application. Thus, the unique characteristics of different tasks highlight the limitations of a one-size-fits-all approach in the context of return-to-go estimation in the Decision Transformer model.
§ ARCHITECTURE DETAILS
We adopt the causal transformer decoder architecture and process the sequences as follows; a minimal sketch of the embedding layers and prediction heads is given after the list.
* Transformer Blocks: Composed of masked causal attention and a multilayer perceptron (MLP), with layer normalization and residual connections. The activation function used is GELU (Gaussian Error Linear Unit).
* Embedding Layers:
* State Embedding: A linear layer followed by positional (time) embeddings addition.
* Action Embedding: A linear layer followed by positional (time) embeddings addition.
* Return to Go Embedding: A linear layer followed by positional (time) embeddings addition.
* Timestep Embedding: An embedding layer for encoding timesteps.
* Prediction Heads:
* State Prediction: A linear layer taking as input the concatenated action and state embeddings.
* Action Prediction: A linear layer followed by a Tanh activation function.
* Return-to-go Prediction: A linear layer.
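The following sketch illustrates how the embedding layers and prediction heads listed above could be laid out; the hidden size h, state_dim, act_dim and max_timestep are assumed hyperparameters, and the transformer body itself is omitted.

import torch.nn as nn

class EDTEmbeddingsAndHeads(nn.Module):
    def __init__(self, h, state_dim, act_dim, max_timestep=1000):
        super().__init__()
        # Embedding layers: a linear map per modality plus a timestep lookup table.
        self.embed_state = nn.Linear(state_dim, h)
        self.embed_action = nn.Linear(act_dim, h)
        self.embed_return = nn.Linear(1, h)
        self.embed_timestep = nn.Embedding(max_timestep, h)
        # Prediction heads.
        self.predict_state = nn.Linear(2 * h, state_dim)   # from concatenated action and state embeddings
        self.predict_action = nn.Sequential(nn.Linear(h, act_dim), nn.Tanh())
        self.predict_return = nn.Linear(h, 1)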
|
http://arxiv.org/abs/2307.01382v1
|
20230703222707
|
Fractionary Charged Particles Confronting Lepton Flavor Violation and the Muon's Anomalous Magnetic Moment
|
[
"Elmer Ramirez Barreto",
"Alex G. Dias"
] |
hep-ph
|
[
"hep-ph",
"hep-ex"
] |
Departamento de Ciencias Exactas, Facultad de Ciencias y Filosofia, Universidad Peruana Cayetano Heredia, Av. Honorio Delgado 430, Lima 31
Centro de Ciências Naturais e Humanas, Universidade Federal do ABC, 09210-580, Santo André-SP, Brasil
In light of the result published by the Fermilab Muon (g-2) experiment, we investigate a simple model that includes particles of fractional
electric charges: a colour-singlet fermion and a scalar with charges 2/3e and 1/3e, respectively. The impact of these particles on the
muon anomalous magnetic moment are examined, particularly the restrictions on their Yukawa couplings with the light leptons. Given that
lepton flavor violation processes impose stringent constraints on certain scenarios beyond the Standard Model, we assess the one-loop contribution
of the new particles to (g-2) in order to identify regions in the parameter space consistent with the Fermilab results and compatible with the
current and projected limits on the branching ratio Br(μ→ e γ). Taking into account the current lower bound for the masses of
fractionally charged particles, which is around 634 GeV, we show that the mass of the scalar particle with fractional charge must exceed 1 TeV
and may be discovered in future collider experiments. Finally, we also study the validity of our model in light of the QCD lattice results on
the muon (g-2).
Fractionary Charged Particles Confronting Lepton Flavor Violation and the Muon's Anomalous Magnetic Moment
Elmer Ramirez Barreto and Alex G. Dias
August 1, 2023
==========================================================================================================
§ INTRODUCTION
Recently, a new measurement of the muon anomalous magnetic moment a_μ= 1/2(g- 2)_μ was reported by the Fermilab Muon g-2 collaboration. The measured value, a_μ^FNAL = (116 592 040 ± 54) × 10^-11<cit.>, is in agreement with the previous result from the Brookhaven E821 muon g-2 experiment, a_μ^E821 = (116592089 ± 63)× 10^-11<cit.>.
With the combination of these results, there is a 4.2σ deviation from the Standard Model (SM) prediction, a_μ^SM =(116591810 ± 43)× 10^-11 <cit.>, given by Δ a_μ=a_μ^EXP-a_μ^SM=(251 ± 59)× 10^-11 – the so-called muon (g-2) anomaly. This confirms, with higher statistics, the 3.7σ deviation for Δ a_μ obtained by the BNL Collaboration <cit.>. If this anomaly is not due to unknown theoretical and experimental uncertainties, it raises the possibility of new physics, at an energy scale not far above the electroweak scale, manifesting through new particles interacting with the leptons.
On the other hand, any assumption of the existence of new particles interacting with leptons must consider the stringent constraints imposed by lepton flavour violation (LFV) processes.
A crucial constraint arises from the absence of an observed muon decay into an electron and a photon, whose branching ratio is limited according to Br(μ→ e + γ) < 4.2 × 10^-13 <cit.>, with a projected limit expected to reach Br(μ→ e + γ) < 6 × 10^-14 <cit.>. Such limits play an important role in constraining the masses and couplings of these new particles, thereby indicating whether any potential new physics may appear just above the electroweak scale.
Different models have been developed to explain the anomalous magnetic moment of the muon, keeping at the same time consistency with the current constraints imposed by charged LFV processes <cit.>. Examples are the supersymmetric models <cit.>, extra dimensions-grand unification frameworks <cit.>, models with an extended gauge group or extended scalar sector <cit.>, and within the Standard Model gauge group the inclusion of vector-like leptons
and leptoquarks <cit.>.
In this work, within the SM gauge group, we investigate to what extent specific leptons and scalars with fractional charges can account for the anomalous magnetic moment of the muon while satisfying the current limit on the branching ratio Br(μ→ e + γ). The possibility of colour-singlet fermions, which we will also refer to as leptons for short, and scalars with fractional charges has already been considered in the literature, including the Standard Model framework <cit.>, 3-3-1 models <cit.> as well as models of grand unification <cit.>. Particles with non-conventional electric charge could also originate from a mechanism like the one proposed in <cit.>, where a gauge boson associated with a U(1)^' symmetry kinetically mixes with the SM hypercharge gauge boson, allowing a fermion with only U(1)^' charge to couple with the photon. Experimentally, fractionally charged
particles were investigated through Drell-Yan pair production, leading to the exclusion of particles with charges 1/3e
and 2/3e for masses below 200 GeV and 480 GeV, respectively <cit.>. A recent analysis, utilizing data from proton-proton collisions at a center-of-mass energy of 13 TeV, has established exclusion limits for masses up to 636 GeV and charges above 1/2e <cit.>.
In this context, we consider a model with renormalizable interactions of particles with fractional electric charge and look for regions of the parameter space, spanned by their masses and coupling constants, that are consistent with the current experimental results as well as the projected ones. Our results on leptons with fractional charges are complementary to the searches in high-energy colliders <cit.>.
It has to be pointed out that recent results coming from high-precision QCD lattice simulations show agreement with the experimental measurements
of the muon's anomalous magnetic moment, reducing the Δ a_μ deviation to 1.5 σ<cit.>.
As these results could be in conflict with the e^+ e^- data, various theory and lattice groups are expected to present updated results
in order to confirm it. For the purposes of our study, we will assume that the anomaly exists with the 4.2 σ deviation.
In the next section we construct a simple model of leptons and scalars with fractional electric charges within the SM symmetry group.
Following this, we present our results, discussions and conclusions.
§ THE SIMPLEST MODEL OF LEPTONS WITH FRACTIONAL CHARGE
The simplest renormalizable interaction model of a lepton with fractional electric charge coupled to the SM fermion is built by introducing a vector-like fermion
ℰ^(n)∼(1, n),
and the scalar
h^(1-n)∼(1,1-n),
which are both singlets of SU(2)_L and have electric charges n and 1-n, respectively. The numbers in parenthesis represent the field transformations under SU(2)_L and hypercharge U(1)_Y. These fields are supposed to couple with the SM right-handed lepton singlets ℓ_R∼(1, -1), ℓ=e, μ, τ, through the interaction term in the Lagrangian
ℒ⊃𝒴_ℓℰ (ℓ_ R)^c ℰ^ n h^1-n + H.c.,
where c denotes charge conjugation and 𝒴_ℓℰ is a coupling constant, which we assume to be real. From now on we will specialize to the case n=2/3.
The corresponding interaction Lagrangian in Eq. (<ref>) is similar to the one proposed in Refs. <cit.>, where a single charged Higgs is included. The field ℰ is called a leptonic one because we assume it carries lepton number, so that the interaction Lagrangian in Eq. (<ref>) is invariant under such a symmetry. With a single ℰ field it is not possible to have a lepton number distinguishing the families.
The vector-like lepton and the charged scalar couplings with the photon field A_μ are given by
ℒ ⊃ - 2e/3 ℰ̅^2/3γ^μℰ^2/3A_μ + e/3(h^1/3 †∂^μ h^1/3 - h^1/3∂^μ h^1/3 †)A_μ .
With the interactions in Eqs. (<ref>) and (<ref>), we have new radiative corrections for the family-lepton-number violating muon decay, μ→ e + γ, and for the anomalous magnetic moment of the muon, according to the Feynman diagrams in Figure <ref>.
§.§ Lepton flavor violation decay for the muon
The partial decay width of a lepton ℓ decaying into a lepton ℓ^' plus a photon is <cit.>
Γ(ℓ→ℓ^'+ γ) = ( m_ℓ^2 - m_ℓ^'^2 )^3 ( | σ_L|^2 + | σ_R|^2 )/(16 π m_1^3) ,
where m_ℓ (m_ℓ^') is the mass of lepton ℓ (ℓ^'), with σ_L and σ_R the form factors defined in Appendix <ref>. The branching ratio for the LFV process μ→ e + γ is then taken as
Br(μ→ e + γ)=Γ(μ→ e + γ)/Γ(μ→ e + ν̅_̅e̅ + ν_μ).
In this expression, the total width is assumed to be the Standard Model one for the muon decay into an electron, an electron anti-neutrino and a muon neutrino at leading order, i.e. Γ(μ→ e + ν̅_e + ν_μ)=G_F^2 m_μ^5/(192 π^3), with G_F the Fermi coupling constant; this width is much larger than Γ(μ→ e + γ).
Thus, we obtain for the branching ratio, the expression:
Br(ℓ_1→ℓ_2 + γ)=3 (4 π)^3 α_em (m_1^2-m_2^2)^3/4 G_F^2 m_1^8 ( | σ_L|^2 + | σ_R|^2 )Br(ℓ_1→ℓ_2 + ν̅_̅2̅ + ν_1),
where α_em is the electromagnetic fine-structure constant and, from the experimental side, we know that Br(μ→ e ν̅_e ν_μ)= 100%, Br(τ→ e ν̅_e ν_τ)= 17.82% and Br(τ→μν̅_μν_τ)= 17.39% <cit.>.
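As a quick numerical check, the expression above can be evaluated directly once |σ_L|^2 and |σ_R|^2 are known; the sketch below assumes masses in GeV and form factors in GeV^-2, consistent with the width formula, and is not part of the original analysis.

import math

ALPHA_EM = 1.0 / 137.035999   # electromagnetic fine-structure constant
G_F = 1.1663787e-5            # Fermi constant in GeV^-2

def br_lfv(m1, m2, sigma_L_sq, sigma_R_sq, br_l1_l2_nunu=1.0):
    # Br(l1 -> l2 + gamma) from the expression above; masses in GeV,
    # |sigma_L|^2 and |sigma_R|^2 in GeV^-2 (form-factor convention of the appendix).
    prefactor = 3.0 * (4.0 * math.pi)**3 * ALPHA_EM * (m1**2 - m2**2)**3
    return prefactor / (4.0 * G_F**2 * m1**8) * (sigma_L_sq + sigma_R_sq) * br_l1_l2_nunu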
§.§ Muon's anomalous magnetic moment
Loop diagrams with the particles ℰ and h, as shown in Figure <ref>, generate additional corrections to the muon anomalous magnetic moment a_μ= (g - 2)_μ /2. They give the following contribution to Δ a_μ <cit.>
Δ a_μ = Q_h/8 π^2m_μ^2/m_h^2∫_0^1 dx 𝒴^ 2_μ -ℰ P(x)/R^h(λ,λ^', x)
+ Q_ℰ/8 π^2m_μ^2/m_h^2∫_0^1 dx 𝒴^ 2_μ -ℰ P^'(x)/R^ℰ(λ,λ^', x),
with the functions of the mass ratios of ℰ, h and the muon, i.e. ϵ = m_ℰ/m_μ, λ = m_μ/m_h and λ^' = m_ℰ/m_h, given by
P(x) = x^3 - x^2 + ϵ (x^2 - x),
P^'(x) = x^2 - x^3 + ϵ x^2,
R^ h(λ, λ^',x) = λ^2 x^2 + (1 - λ^2) x + λ^' 2(1 - x),
R^ ℰ(λ, λ^',x) = λ^2 x^2 + (λ^' 2 - λ^2) x + (1 - x).
In Eq. (<ref>), Q_ℰ and Q_h represent the electric charges of ℰ and h, respectively.
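For reference, the two one-dimensional integrals in Eq. (<ref>) are straightforward to evaluate numerically; the following sketch uses SciPy and assumes the charge assignments Q_ℰ=2/3 and Q_h=1/3 discussed in the text (masses in GeV).

import math
from scipy.integrate import quad

def delta_a_mu(y_mu_E, m_E, m_h, m_mu=0.10566, Q_E=2.0/3.0, Q_h=1.0/3.0):
    # One-loop contribution of the (E, h) pair to a_mu, following Eq. (<ref>).
    eps, lam, lamp = m_E / m_mu, m_mu / m_h, m_E / m_h
    P   = lambda x: x**3 - x**2 + eps * (x**2 - x)
    Pp  = lambda x: x**2 - x**3 + eps * x**2
    R_h = lambda x: lam**2 * x**2 + (1 - lam**2) * x + lamp**2 * (1 - x)
    R_E = lambda x: lam**2 * x**2 + (lamp**2 - lam**2) * x + (1 - x)
    I_h, _ = quad(lambda x: P(x) / R_h(x), 0.0, 1.0)
    I_E, _ = quad(lambda x: Pp(x) / R_E(x), 0.0, 1.0)
    prefactor = y_mu_E**2 * m_mu**2 / (8.0 * math.pi**2 * m_h**2)
    return prefactor * (Q_h * I_h + Q_E * I_E)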
§ RESULTS
Let us discuss the potential phenomenological implications of our model on the observable quantities. We will consider the experimental constraints for Δ a_μ from the Muon Collaboration at Fermilab, as well as the current experimental limit for the LFV branching ratio involving the muon <cit.>: Br(μ→ e + γ) < 4.2 × 10^-13 and the projected value <cit.>Br(μ→ e + γ) < 6 × 10^-14.
By utilizing the one-loop analytic expressions in Eqs. (<ref>) and (<ref>), which encompass the contributions of the new particles to the muon magnetic moment and the branching ratios, we explore the parameter space in order to identify points that simultaneously satisfy the muon (g -2) anomaly and adhere to the constraints imposed by Br(μ→ e + γ). Thus, the relevant input parameters in our investigation are the masses M_ℰ, M_h and the Yukawa couplings 𝒴_μ -ℰ and 𝒴_e-ℰ.
To facilitate our analysis, we impose limitations on the masses of the new exotic particles and on the Yukawa couplings based on collider searches and on the electron (g -2). We initially fix the mass of the exotic lepton ℰ^ +2/3 to be 650 GeV, in accordance with the experimental limits from the LHC <cit.>. However, considering that the contributions of ℰ and h to Δ a_e must be negligible, we have some freedom in constraining the values of 𝒴_e -ℰ appropriately. Thus, our choice of these couplings takes into account the values obtained for m_h^1/3 within the energy range of the LHC, while avoiding excessive fine-tuning in the Yukawa sector.
We now show in Figure 2 our numerical results fixing 𝒴_e -ℰ=10^-6. In the left panel, for M_ℰ= 650 GeV, the green and soft green regions contain points (M_h, 𝒴_μ -ℰ) compatible with the current and projected bounds for Br(μ→ e + γ), while the gray zone represents an exclusion region for this observable. In addition, in the same plot, we show the values of M_h and 𝒴_μ -ℰ that match the Δ a_μ data, represented by the blue line and its respective 1 σ and 2 σ ranges. Thus, for m_h ≥ 4.2 TeV and 𝒴_μ -ℰ≥ 0.19, the contributions of the new particles explain Δ a_μ and respect the limits for Br(μ→ e + γ). If we take M_ℰ= 800 GeV (right panel), we need m_h ≥ 4.7 TeV and 𝒴_μ -ℰ≥ 0.20 in order to accommodate both observables. On the other hand, if we consider the region defined by the -2σ deviation,
we identified m_h ≥ 3.2 TeV and 𝒴_μ -ℰ≥ 0.12 for M_ℰ= 650 GeV, and
m_h ≥ 3.6 TeV and 𝒴_μ -ℰ≥ 0.12 for M_ℰ= 800 GeV.
In Figure 3, we present a new scenario with 𝒴_e -ℰ=10^-7. The left panel, again for M_ℰ= 650 GeV, shows that for m_h ≥ 1.25 TeV and 𝒴_μ -ℰ≥ 0.10 it is possible to explain Δ a_μ and respect the Br(μ→ e + γ) limits. In addition, for M_ℰ= 800 GeV (right panel), the limits are increased, so m_h ≥ 1.1 TeV and 𝒴_μ -ℰ≥ 0.11 allow us to explain Δ a_μ and respect the current constraint for Br(μ→ e + γ).
the respective current constrain for Br(μ→ e + γ). If we consider again the region defined by the - 2σ deviation, we identified m_h ≥ 1.2 TeV and 𝒴_μ -ℰ≥ 0.07 for M_ℰ= 650 GeV, and m_h ≥ 1.05 TeV and 𝒴_μ -ℰ≥ 0.075 for M_ℰ= 800 GeV.
For this last scenario, the considered exotic lepton masses and the exotic scalar mass are within the range of energies reached by the LHC, so these particles could be produced through the Drell-Yan processes p p →ℰ^ 2/3ℰ^ - 2/3 and p p → h^+ 1/3 h^- 1/3 at the next high-luminosity run of the LHC. We can expect that for masses as in the former scenario, ∼ 4 TeV, there might be a reasonable discovery potential for these particles at future higher-energy colliders, such as the High-Energy LHC.
Finally, as already mentioned in the introduction, the latest lattice results predict a larger value of the muon (g -2), bringing it closer to the experimental value. In this sense, our model remains consistent with such a result for Δ a _μ, respecting the limits for Br(μ→ e + γ). Thus, the contributions of the exotic particles in that scenario are represented by red lines in Figures 2 and 3, where for 𝒴_e -ℰ=10^-6 (Figure 2) the lattice QCD results involve m_h ≳ 3 TeV and 𝒴_μ -ℰ≳ 0.10 for both fixed M_ℰ. On the other hand, for 𝒴_e -ℰ=10^-7 (Figure 3), we identify m_h ≳ 1.5 TeV and 𝒴_μ -ℰ≳ 0.06 as compatible with the lattice results and the respective current constraint for Br(μ→ e + γ).
§ CONCLUSIONS AND FINAL REMARKS
In this work, we have demonstrated that the inclusion of a vector-like lepton ℰ and a scalar h with exotic electric charges allows us to predict significant contributions to Δ a _μ. This enables us to explain the muon (g -2) anomaly while also respecting the experimental constraints on Br(μ→ e + γ).
By considering the contributions of such exotic particles at the one-loop level for Δ a _μ, and taking into account the current limits on particles with fractional electric charges, we explore the parameter space defined by m_h^1/3 and 𝒴_μ -ℰ. Through our numerical analysis, we have identified the regions of mass and Yukawa couplings in the parameter space able to explain the muon (g -2) anomaly while satisfying phenomenological constraints for the charged lepton flavor-violating decay μ→ e + γ. In two benchmark scenarios, with the vector-like lepton mass m_ℰ fixed at 650 and 800 GeV, we have found masses around the TeV scale for the exotic Higgs and couplings that account for the muon (g -2) anomaly while satisfying the branching ratio constraint. Furthermore, if we consider a conservative scenario, which includes lattice QCD results, our model remains capable of explaining both results.
We would like to call attention to the fact that there is a search at the LHC looking for experimental evidence of particles with exotic charges and masses above the electroweak energy scale <cit.>. Therefore, further phenomenological analyses will deserve attention in future studies.
§ ACKNOWLEDGMENTS
This study was financed in part by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), grant 305802/2019-4 (A.G.D).
§ FORM FACTORS
In this appendix, we show the general expressions for the form factors used in subsection II. Thus, we have:
σ_L= q_ℰ [ c_1 κ_1+ c_2 κ_2+c_3 κ_3 ] + q_h [ c_1 κ̅_1+ c_2 κ̅_2+c_3 κ̅_3],
σ_R= q_ℰ [ d_1 κ_1+ d_2 κ_2+d_3 κ_3 ] + q_h [ d_1 κ̅_1+ d_2 κ̅_2+d_3 κ̅_3]
where q_ℰ and q_h are the electric charges of the exotic particles, and the c_i and d_i coefficients are defined as functions of the
scalar and pseudoscalar Yukawa couplings:
c_1 = (𝒴^ s *_2+𝒴^ p *_2)(𝒴^ s_1+𝒴^ p_1), c_2 = (𝒴^ s *_2-𝒴^ p *_2)(𝒴^ s_1-𝒴^ p_1),
c_3 = (𝒴^ s *_2+𝒴^ p *_2)(𝒴^ s_1-𝒴^ p_1), d_1 = (𝒴^ s *_2-𝒴^ p *_2)(𝒴^ s_1-𝒴^ p_1),
d_2 = (𝒴^ s *_2+𝒴^ p *_2)(𝒴^ s_1+𝒴^ p_1), d_3 = (𝒴^ s *_2-𝒴^ p *_2)(𝒴^ s_1+𝒴^ p_1).
The factors κ_i and κ̅_i can be written as:
κ_1,2=i m_1,2/16 π^2 m_h^2 [x^2-5x - 2/12 (x-1)^3+x lnx/2 (x-1)^4],
κ_3=i m_ℓ/16 π^2 m_h^2 [x - 3/2 (x-1)^3+lnx/(x-1)^3],
and
κ̅_1,2=i m_1,2/16 π^2 m_h^2 [2 x^2+5 x - 1/12 (x-1)^3-x^2 lnx/2 (x-1)^4],
κ̅_3=i m_ℓ/16 π^2 m_h^2 [x + 1/2 (x-1)^2-x lnx/(x-1)^3].
|
http://arxiv.org/abs/2307.01091v2
|
20230703150932
|
UW-ProCCaps: UnderWater Progressive Colourisation with Capsules
|
[
"Rita Pucci",
"Niki Martinel"
] |
cs.CV
|
[
"cs.CV"
] |
UW-ProCCaps: UnderWater Progressive Colourisation with Capsules
Rita Pucci, Niki Martinel
Manuscript created October 2022;
July 2023
================================================================
Underwater images are fundamental for studying and understanding the status of marine life. We focus on reducing the memory space required for image storage, since memory consumption during the collection phase limits how long that phase can last and thus leads to the need for additional image collection campaigns. We present a novel machine-learning model that reconstructs the colours of underwater images from their luminescence channel, thus saving 2/3 of the available storage space. Our model specialises in underwater colour reconstruction and consists of an encoder-decoder architecture. The encoder is composed of a convolutional encoder and a parallel specialised classifier trained with webly-supervised data. The encoder and the decoder use layers of capsules to capture the features of the entities in the image. The colour reconstruction process combines progressive and generative adversarial training procedures. The progressive training lays the ground for a generative adversarial routine focused on refining the colours, giving the image bright and saturated colours that bring it back to life. We validate the model both qualitatively and quantitatively on four benchmark datasets. This is the first attempt at colour reconstruction for greyscale underwater images.
Extensive results on four benchmark datasets demonstrate that our solution outperforms state-of-the-art (SOTA) solutions ..... We also demonstrate that the generated colourisation enhances the quality of images compared to enhancement models at the SOTA.
Capsule, Underwater Image, Colourisation, Webly-Supervised.
§ INTRODUCTION
The oceans cover most of our planet and are as fascinating as harsh, complex, and dangerous for exploration. Part of the exploration and protection of the rich ecosystems leverages images and videos collected by sophisticated visual sensing systems that allow biologists to analyse them in safe lab environments. These systems are often embedded in robots – an attractive option because of their non-intrusive, passive, and energy-efficient nature.
Such a solution has been used for monitoring the coral barrier reef <cit.>, exploring the depth of the ocean <cit.>, analysing the seabed <cit.>, and much more.
However, the application of robots comes with limitations in terms of power and memory storage capacity.
In this work, we focus on reducing the memory space needed to store the collected data.
This is achieved by storing the greyscale version of the acquired image and restoring the colours when the collection phase is over.
To this end, we propose a novel architecture for automatic image colourisation. Before diving into the innovations proposed in this paper, we introduce the challenge posed by the colourisation problem.
The image colourisation task is a challenging and ill-posed problem because the "correct" colourisation of an entity is not always well defined. There are entities that have a small range of colours, e.g., the grass is usually a tone of green, clouds are generally white, and the sky is blue or black, but others have a broader variation in appearance, e.g., t-shirts, desks, and many other objects which can appear in different shapes and colours.
Recent works model the global content (image-level features) and objects' instances (entity-level features) in order to extract the needed features to deal with the multimodality of colourisation. Works like <cit.> focus mainly on the global content resulting in unsaturated entities' colourisation with smudges on boundaries.
Works presented by <cit.> extract both types of features but neglect the importance of the interaction between the global content and the objects' instances for the colourisation of an image. Differently, in <cit.>, we proposed a model that encourages the collaboration between the entities and the global content of the image increasing the naturalness of generated colours.
In this paper, we propose UW-ProCCaps, an encoder-decoder architecture. UW-ProCCaps takes as input a greyscale image, reducing the size of the stored image to a third of the original, since the two colour channels are not stored. The architecture consists of convolutional and capsule layers to extract the image-level and the entity-level features, respectively. The colour channels are reconstructed through the collaboration of the encoder and decoder, which are connected by skip connections. UW-ProCCaps is able to deal with the multimodality of colourisation by focusing attention on the entity to be colourised and by reconstructing the colours based on the structural information given by the encoder phase. The architecture is inspired by <cit.> and is specialised in underwater colour generation. In Fig. <ref>, we show samples of colours generated with our architecture.
We compare UW-ProCCaps with two works that represent the SOTA of the two types of feature extraction, Deodify <cit.> and InstanceAware <cit.>. The comparison is presented qualitatively and quantitatively on the PSNR, SSIM, and LPIPS metrics over four benchmarks. We demonstrate that UW-ProCCaps is able to generate saturated and vibrant colours for underwater images from their luminescence alone and that it outperforms the state-of-the-art on these metrics.
§ RELATED WORK
§.§ Underwater image reconstruction
Underwater images are characterised by a whole range of light distortions due to the water absorption of light waves <cit.>. The perception of colours and edges changes with depth, illumination, and turbidity of the water, making the image appear completely different. The low definition of edges, the distortion of colours towards a blueish or greenish cast, and the distortion of light mainly affect the luminescence channel of the images. Fig. <ref> shows the original image from the benchmark and the greyscale image obtained from the L channel.
§.§ Colours reconstruction
The automatic image colourisation task is an open challenge that is generally addressed following two main directions. The former considers image-level features, hence the colourisation is done without specifically considering object entities. In this direction, <cit.> apply conditional adversarial networks for image-to-image translation, <cit.> use generative models that predict with semantic and prior knowledge, and <cit.> employ multi-task models with single-pixel significance to predict a plausible colourisation for the image.
Methods following the latter direction consider entity-level features. These are introduced through semantic labels, as in <cit.>, made interpretable by a cross-channel encoding scheme <cit.>, and then enriched with a pre-trained classification model <cit.>. <cit.> implement the entity-level and image-level feature extractions with two parallel models. In <cit.>, the focus is on the image-level features with an encoder-decoder structure trained in two phases (end-to-end, then GAN).
Our model integrates the entity-level and image-level feature extractions in one architecture, trained in two phases. The former follows the progressive learning paradigm to let the architecture gradually learn how to reconstruct colours. The latter exploits a GAN training scheme to improve the naturalness of the predicted colours.
§.§ Progressive learning
ProGL is a training methodology proposed by <cit.> for generative networks. It consists of starting with low-resolution images and then progressively increasing the resolution by adding layers. This allows the training to first discover the large-scale structure of the image distribution and then shift attention to the fine details. Works like <cit.> proposed a ProGL scheme for GANs to generate images of people, while <cit.> applied ProGL to obtain a multi-tasking method based on salient object detection.
A first application of progressive learning (ProGL) for colour reconstruction was proposed in <cit.>.
In our work, ProGL is applied in the first phase of training to let the model learn gradually the collaboration among features extracted by the different layers to reconstruct the colour information.
§ PROPOSED METHOD
We consider the images in the CIELab colour space (where the luminescence channel L is independent of the chrominance channels (a,b) which identify the four unique colours of human vision) with the assumption that only the luminescence channel L is stored while recording underwater videos.
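A minimal sketch of the underlying storage scheme, assuming scikit-image is used for the CIELab conversion (the actual acquisition pipeline is not specified here):

import numpy as np
from skimage import color

def store_luminance(rgb_image):
    # Keep only the L channel of CIELab; the two chrominance channels are discarded.
    lab = color.rgb2lab(rgb_image)          # rgb_image: float RGB in [0, 1], shape (H, W, 3)
    return lab[..., 0].astype(np.float32)   # L values lie in [0, 100]

def recombine(L, ab_pred):
    # Rebuild an RGB image from the stored L channel and the predicted (a, b) channels.
    lab = np.concatenate([L[..., None], ab_pred], axis=-1)
    return color.lab2rgb(lab)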
§.§ UW-ProCCaps architecture
The UW-ProCCaps architecture follows an encoder-decoder structure. The encoder is composed of two parallel models, a convolutional encoder, and a classifier. The former processes the inputs by transforming them into a set of feature maps. The latter emits classification vectors. Feature maps and classification vectors contain image-level information. A capsules encoder aggregates such vectors to emit entity-level features.
The decoder, composed of a capsules decoder and a convolutional decoder, receives the entity-level features to generate the chromatic channels.
The convolutional encoder and the convolutional decoder implement a tight collaboration by skip and residual connections which connect each layer of the convolutional encoder with the correspondent layer in the convolutional decoder, as illustrated in Fig. <ref>.
Convolutional Encoder
The luminescence channel of the input image, 𝐈_L∈ℝ^224× 224× 1, is given to the convolutional encoder and to the classifier. The convolutional encoder consists of the combination of a preprocessing block () and n double blocks down (^n, where n∈[4,..,1]). The preprocessing block is composed of a ---. It performs an initial feature extraction, yielding Ω=f_(𝐈_L)∈ℝ^56× 56× 32.
Ω then goes through all the double blocks down, each of which is composed of two consecutive sequences of a 3×3 heterogeneous convolution, an instance normalisation, and a non-linearity used to transform the fused features <cit.>.
At its last stage, ^1 outputs 𝐃^1∈ℝ^16× 16× 512, i.e.,
𝐃^1=f_^1(f_^2(f_^3(f_^4(f_(𝐈_L)))))
Classifier Model
The classifier model enriches the information extracted by the convolutional encoder with specific features related to the predicted class of the image and helps the model deal with the multimodality of the colorization by clustering the information by class.
The classifier provides a classification vector formatted to be concatenated with 𝐃^1 in Υ∈ℝ^16× 16× 519, explained in Sec.<ref>.
Capsules Encoder ()
This encoder aggregates the image-level information included in Υ to extract entity-level features.
This is achieved in three steps.
We first compute the activation vectors 𝐔:
𝐔 = [Flatten(^1(Υ))^T, ..., Flatten(^C(Υ))^T]
where C=32 is the number of capsules and each column of 𝐔∈ℝ^k× C is the capsule output u_i∈ℝ^k. The second step is to compute the entity-level prediction vectors 𝐔̂. We apply an affine transformation on 𝐔 with a weight matrix W_ij∈ℝ^k× k to obtain
𝐮̂_j|i=W_ij𝐮_i
In the third step, we apply the “routing by agreement” mechanism <cit.> on 𝐔̂. This mechanism, during the training phase, uses the coupling coefficients 𝐜_i|j to identify the clusters of the features in 𝐔̂. Each cluster of features identifies one (or part of an) entity and is formally defined by the weighted sum of 𝐮̂_j|i vectors:
𝐯_j = squash(∑𝐜_i|j*𝐮̂_j|i)
where 𝐯_j ∈ℝ^k̂. The final output of the downsampling is the matrix 𝐕=[𝐯_0,⋯, 𝐯_j]; it carries information about how strongly the capsules agree on the presence of an entity. 𝐕∈ℝ^32× 8× 8× 128, and the feature matrix of each capsule contains the entity-level features.
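For clarity, the routing-by-agreement step can be sketched as follows; this is a generic NumPy rendering of the mechanism of Sabour et al., with an illustrative tensor layout that may differ from the one used in UW-ProCCaps.

import numpy as np

def squash(s, axis=-1, eps=1e-8):
    n2 = np.sum(s**2, axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

def routing_by_agreement(u_hat, n_iters=3):
    # u_hat: prediction vectors of shape (N_in, N_out, k); returns v of shape (N_out, k).
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))                               # routing logits
    for _ in range(n_iters):
        c = np.exp(b - b.max(axis=1, keepdims=True))
        c /= c.sum(axis=1, keepdims=True)                     # coupling coefficients c_{i|j}
        s = (c[..., None] * u_hat).sum(axis=0)                # weighted sum per output capsule
        v = squash(s)
        b = b + (u_hat * v[None]).sum(axis=-1)                # agreement update
    return v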
Capsules Decoder
The capsules decoder () elaborates the entity-level features 𝐕 to reconstruct the colours' information.
The features in 𝐕 lack information about their spatial displacement with respect to the input datum. This is however needed to properly reconstruct colours within entities' boundaries.
The inverts the process.
A weight matrix 𝐖^r_ji∈ℝ^k̂× k reverses the affine transformation:
𝐮^r_i|j = 𝐖^r_ji𝐯_j
then the 𝐮^r_i|j∈ℝ^k are stacked in 𝐔^r ∈ℝ^k× C. Each 𝐮^r_i is given to the i-th reversed capsule, implemented as a transpose convolutional layer (_i).
This yields to
𝐗 = [(_1(𝐮^r_1)),⋯,
(_k(𝐮^r_k))]
where 𝐮^r_i denotes the i-th row of 𝐔^r.
The matrix 𝐗 consists of the initial colours reconstruction from entity-level features 𝐕.
Convolutional Decoder
𝐗 is the input to the convolutional decoder which consists of four stacked layers.
At its first stage, i.e., ^1, the block input is 𝐗, which generates:
𝐘^1 =f_^1(𝐗)
Following blocks (^m, with m ∈ [2,..,4]), apply a skip connection to promote the collaboration between the encoder and decoder phases, i.e.,
𝐘^m =f_^m(cat(𝐘^m-1,𝐃^m-1))
The stacked layers are followed by the last upsampling layer ().
This layer performs the reversed function of :
Ψ = f_(cat(𝐘^4,𝐃^4))
The output Ψ∈𝐑^H× W×Γ has the same spatial resolution as Ω.
It consists of the final composition of all the features extracted in the phase. The five functional blocks together constitute the 28-layer network.
Quantization of colours
The last two layers of UW-ProCCaps are two convolutional layers which quantise and reconstruct the colours predicted in Ψ; we refer to this block as the colour quantisation block (). The first convolutional layer computes a quantised representation of the colours, based on the idea in <cit.>. This prevents the model from generating values outside the set of in-gamut colours, which would provide implausible results, as demonstrated in <cit.>. Following <cit.>, Ψ is remapped and quantised into the in-gamut CIELab colours with bin=10 to obtain 313 colour classes.
This layer receives the residual of Ψ and Ω and maps it over the quantised colour distribution
Ẑ = f_Quantisation(sum(Ψ,Ω))
where Ẑ∈ℝ^56× 56× 313. This makes the task a classification problem for each point in the input.
The second convolutional layer computes the chroma representation, which lets the model predict the a and b channels consistently with the CIELab colour definition. This layer consists of a 1× 1 convolutional layer followed by bilinear upsampling to resize Ẑ by a factor of 4, hence mapping Ẑ onto the two chrominance channels (â,b̂)∈ℝ^224× 224× 2.
§.§ Loss function
The UW-ProCCaps model training consists of three phases, described in Sec. <ref>. In each phase, we apply the loss function relevant to the corresponding task and training methodology. Classifier finetuning: The classifier receives 𝐈_L× 3 as input and provides the classification of the input image, ŷ∈{c_0,..., c_p}, where p is the number of classes in the training dataset. In this phase, the class of the input image is known as y. The loss function is computed as ℒ_class = CrossEntropyLoss(ŷ,y). End-to-End UW-ProCCaps training: The UW-ProCCaps is trained progressively and we compute a composed loss function which takes into consideration the two layers of reconstruction in the , ℒ_end2end = ℒ_q+ℒ_ch, where ℒ_q is the quantised colour loss and ℒ_ch is the chrominance loss. To compute ℒ_q, the matrix Ẑ is compared with the projection of the ground-truth chroma channels on the quantised representation <cit.>. The ground truth, 𝐈_a,b, is converted by the soft-encoding scheme into the quantised representation 𝐙.
ℒ_q = -∑_h,wv(𝐙_h,w) ∑_q𝐙_h,w,qlog(Ẑ_h,w,q)
where v(·) re-weights the loss for each pixel based on pixel colour rarity. We have considered the soft-encoding and the v(·) values introduced by <cit.>.
To compute the ℒ_ch, we minimise the difference between the real (𝐈_a,b) and the predicted (𝐈_â,b̂) colour channels as:
ℒ_ch = ||â-a||^2_2 + ||b̂-b||^2_2.
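A compact PyTorch-style sketch of ℒ_end2end = ℒ_q + ℒ_ch is given below; the tensor shapes and the reduction over the batch are illustrative assumptions, and the soft-encoding and rarity weights v(·) are assumed to be precomputed following Zhang et al.

import torch
import torch.nn.functional as F

def end_to_end_loss(z_logits, z_soft, ab_hat, ab_true, pixel_weights):
    # z_logits: predicted scores over the 313 colour bins, (B, 313, H, W)
    # z_soft:   soft-encoded ground-truth distribution, same shape
    # ab_hat, ab_true: predicted and true chrominance channels, (B, 2, H', W')
    # pixel_weights: rarity re-weighting v(.), (B, H, W)
    log_p = F.log_softmax(z_logits, dim=1)
    l_q = -(pixel_weights * (z_soft * log_p).sum(dim=1)).sum() / z_logits.shape[0]
    l_ch = F.mse_loss(ab_hat, ab_true)   # squared error on the (a, b) channels
    return l_q + l_ch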
GAN UW-ProCCaps Training:
In the GAN phase of training, we apply a composed loss function ℒ_GAN = ℒ_ADV+ℒ_perc, where ℒ_ADV is the adversarial loss <cit.> and ℒ_perc is the perceptual loss <cit.>. The adversarial loss is implemented with the Binary Cross-Entropy (BCE) with logits. The perceptual loss evaluates the distance between features extracted from the predicted colourisation and from the ground-truth images by a pre-trained base network ϕ(·). It does not require an exact reconstruction, allowing for variations in the
reconstructed image and focusing on an understanding of the global structure. For this purpose, ℒ_perc uses a high-receptive-field base model:
ℒ_perc = M([ϕ_HRF(𝐙)-ϕ_HRF(Ẑ)]^2)
M is the sequential two-stage mean operation (interlayer mean of intralayer means). The ϕ_HRF(x) is implemented using Dilated convolutions <cit.>.
§.§ Training UW-ProCCaps
We train the model in three phases.
Classifier finetuning: The classifier used for this work is available in the state-of-the-art. We use the pre-trained model and fine-tune it on the Flickr-UW-7 dataset (described in Sec. <ref>) for h_class epochs. This initial phase lets the classifier build an understanding of underwater image structure.
End-to-End UW-ProCCaps training: The entire UW-ProCCaps architecture described in Sec. <ref>, except for the classifier model which is frozen, is trained on the UFO120 dataset for h_end2end epochs following the ProGL methodology presented in Sec. <ref>. To implement the progression, we introduce the temporary quantisation block (), which consists of a quantisation layer and a chroma layer whose output dimensions change based on the depth of the step, as explained in Tab. <ref>. The progression methodology trains the model by adding a block every ρ epochs of training. Together with ^m, ProGL adds the relative ^m and, if it is present, removes ^m-1, as shown in Fig. <ref>.
Each ^m follows the structure proposed for , where the resolution of 𝐙̂ and (â,b̂) is equal to that of 𝐘^m and is defined by the level of progression, Tab. <ref>. At the beginning of training, the consists only of . Being the first layer of reconstruction, ^p provides Ẑ^p and (â,b̂)^p. In the following layers, for ^m, we add ^m and it provides Ẑ^m and (â,b̂)^m. The last layer of growing is the , which completes the structure.
GAN UW-ProCCaps Training: We refine the knowledge of the model by fine-tuning the architecture (while keeping the classifier block frozen) through a GAN training procedure with the Pix2Pix discriminator <cit.>. The model is trained for h_GAN epochs on the UFO120 dataset. We observe that by performing the GAN training phase, the model learns to provide colours that are more vibrant and neat; moreover, it yields a greater range of colourisation compared to the model obtained from the End-to-End UW-ProCCaps training phase. This point is discussed in the ablation study in Sec. <ref>.
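The overall two-phase schedule can be summarised as follows; grow_one_level, end_to_end_step and gan_step are placeholder callables standing in for the progressive growing of the decoder, the ℒ_end2end update and the adversarial update, respectively.

def train_uw_proccaps(model, loader, grow_one_level, end_to_end_step, gan_step,
                      rho=30, h_end2end=240, h_gan=1000):
    # Phase 1: progressive end-to-end training, growing one decoder level every rho epochs.
    for epoch in range(h_end2end):
        if epoch > 0 and epoch % rho == 0:
            grow_one_level(model)              # add DBU^m and its temporary quantisation block
        for batch in loader:
            end_to_end_step(model, batch)      # minimise L_q + L_ch at the current resolution
    # Phase 2: adversarial fine-tuning with the Pix2Pix discriminator (classifier kept frozen).
    for epoch in range(h_gan):
        for batch in loader:
            gan_step(model, batch)             # minimise L_ADV + L_perc
    return model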
§ EXPERIMENTAL RESULTS
§.§ Datasets
Training phase The encoder phase has a parallel classifier. We train such a classifier following a webly-supervised procedure. To do this, we have collected a new dataset, named Flickr-UW-7, by scraping the Flickr platform for free public license images containing one of 7 tags: dive, coral, fish, jellyfish, seabed, shark-whales, and weirdo.
The scraping process generated 2009 images.
The End-to-End UW-ProCCaps and the GAN UW-ProCCaps training phases exploit data in the UFO120 dataset <cit.>. This contains 1500 samples shot underwater with no labels. Each sample consists of a noisy and denoised image pair. The noisy image is shot underwater and shows water distortion, while the denoised image does not have water distortion. We considered the noisy images to let the model learn to colourise the original underwater images.
Validation phase We evaluated our model on four benchmarks.
In the Enhancing Underwater Visual Perception (EUVP) dataset <cit.>, we considered the validation split consisting of 515 paired images with water distortion.
The Heron Island Coral Reef Dataset (HICRD) <cit.> is focused on the coral reef in the deep sea, we use the paired HR split which consists of 300 images.
The Underwater Image Enhancement Benchmark (UIEB) <cit.> includes 890 images, which involve rich underwater scenes (lighting conditions, water types, and target categories) and better visual quality reference images than the existing ones.
The Underwater Image Super-Resolution (USR248) <cit.> contains 248 samples of underwater images.
§.§ Implementation details
The input images 𝐈_L∈ℝ^224× 224× 1 and the outputs of UW-ProCCaps are Ẑ∈ℝ^56× 56× 313 and the (â,b̂)∈ℝ^224× 224× 2. We implemented the classification model with a ResNet34 <cit.> (hereafter referred to as ResNet). In the training process, we used a batch of 16 samples and the Adam optimiser with a learning rate of 2e^-3. In the classifier training, we trained the ResNet for h_class = 20 epochs. In the End-to-End training, we set ρ=30, hence run h_end2end = 240 epochs in ProGL. Then we trained the whole network in GAN for h_GAN = 1000 epochs.
§.§ Evaluations metrics
The UW-ProCCaps model is evaluated qualitatively for the naturalness of the predicted colourisation, and quantitatively over the metrics results. To assess our colourisation performance, we follow the experimental protocol in <cit.> and consider the Peak Signal to Noise Ratio (PSNR), the Learned Perceptual Image Patch Similarity (LPIPS) <cit.> (version 0.1 with VGG backbone), and the Structural Similarity Index Measure (SSIM) <cit.>.
§.§ Ablation study
The proposed architecture consists of different parts and implementation choices joined together to perform the task of colourisation. In this section, we analyse the importance of each of the main parts proposed.
Are the capsules bringing a beneficial outcome? To answer this question we present quantitative and qualitative results obtained with and without capsules over the validation datasets introduced in Sec. <ref>. The UW-UNet consists of the encoder, the convolutional decoder, and the quantisation of colours as described in Sec. <ref>. Both networks are trained following the ProGL methodology for only the End-to-End training phase.
Fig. <ref> shows the results of colour reconstructions obtained with UW-UNet and UW-ProCCaps.
UW-ProCCaps reconstructs pleasant colourisation that looks natural, plausible, and well defined at the contours of the entities in the images. In Tab. <ref>, we compare the metric results obtained with the UW-UNet and the UW-ProCCaps (referred to in Tab. <ref> with the name UWPCC_FTFlickrResNet_E2E to distinguish it from the other ablation study cases). We observe that the quantitative results underline that the two networks are competitive with each other. We consider both the qualitative and quantitative results to prove that the application of capsules is an important addition to the architecture to obtain a good colour reconstruction.
Is the parallel classifier bringing a beneficial outcome? To prove the importance of the classifier network, in Fig. <ref> we compare the first row, obtained with UW-ProCCaps, and the second row, obtained with the same architecture but without the classifier network (here referred to as ResNet, as described in Sec. <ref>); both variations are trained following Sec. <ref>. The colours obtained with UW-ProCCaps are bright and vibrant and they vary from subject to subject, while removing the classifier network yields low-toned and sometimes greyish colours. In Tab. <ref>, the results on the metrics depend on the dataset considered but are always competitive for both models. We propose UW-ProCCaps with the classifier network considering both the qualitative and the quantitative results.
Do we have to fine-tune the classifier network even if it is already pre-trained? This question arises naturally and the idea behind the fine-tuning is to let the network focus on the classes that are present underwater improving the model for better colour reconstruction. We perform experiments with the ResNet pre-trained on ImageNet and with the same network fine-tuned on Flickr-UW-7. In Fig. <ref>, the third row shows the results obtained with UW-ProCCaps with the ResNet only pre-trained on ImageNet while the first row shows results obtained fine-tuning the ResNet. In the first row, we note that all the entities (fish, corals, and alga) in the images are well-coloured with plausible and diversified colours in contrast with the third row where the majority of entities are not properly coloured. In Tabs. <ref> respectively for end-to-end and GAN, the metrics prove that the fine-tuning of the ResNet improves the quantitative performances with all the datasets considered and for almost all the metrics.
The progressive learning is applied in the end-to-end training phase, but is it improving the generated colours? The fourth row in Fig. <ref> shows the results obtained with UW-ProCCaps where the end-to-end training phase is performed without the ProGL methodology. We compare the fourth row with the first row, the output of UW-ProCCaps where ProGL is applied. The first aspect that we note is that the details of the entities in the images are well-defined and the colourisation proposed for them is bright and of high quality compared with the one obtained without ProGL in the end-to-end phase. We prove that the application of ProGL improves the quality of the output obtained by the network. In Tabs. <ref>, respectively for end-to-end and GAN, the results obtained for the metrics are improved in almost all the datasets by applying the fine-tuning of the model in GAN.
Is the GAN training phase improving the performance? As described in Sec. <ref>, we train the entire UW-ProCCaps first end-to-end and then in GAN. In the end-to-end phase, the model is trained to reproduce the colourisation of the ground truth and ends up averaging the possible colours of each pixel in order to reduce the loss error. This results in under-toned and brownish colourisation. The proof of this observation is shown in Fig. <ref>, comparing the first row, where UW-ProCCaps is trained in GAN, and the last row, where UW-ProCCaps is trained end-to-end. The colours reconstructed at the end-to-end phase (referred to in the image as E2E) are brownish and tend not to be vibrant compared to the colours reconstructed after the GAN phase. The quantitative results are presented in Tabs. <ref>, respectively for end-to-end and GAN. The results show that the application of a GAN training phase provides a consistent improvement in the colour reconstruction for almost all the variations of the model in all the validation datasets.
§ RESULTS
In this section, we analyse the qualitative and quantitative results obtained with UW-ProCCaps. We compare our model with two well-known models at the state-of-the-art for the task of colourisation, Deodify <cit.> and InstanceAware <cit.>. In <cit.>, the authors presented a two-phase training like the one proposed in this paper in Sec. <ref>, and in <cit.>, the model architecture takes into consideration the identification of entity-level and image-level features, which we implement here with capsules. Our model and the state-of-the-art models are trained following the procedure proposed in Sec. <ref>, and on the same datasets described in Sec. <ref>.
We finally analyse whether the model is able to improve the quality of the colours while reconstructing them. With this intent, we compare the results with SOTA models trained on the enhancement task on UFO-120: Deep Sesr <cit.>, Funie gan <cit.>, Ugan and Ugan-p <cit.>. These methods take the coloured noisy image as the input image.
Quantitative comparison
We analyse the results obtained for the metrics presented in Sec. <ref>. Tabs. <ref> report the results obtained with the proposed UW-ProCCaps, Deodify <cit.> and InstanceAware <cit.>. The results in Tab. <ref> are obtained with the models trained end-to-end on the UFO120 dataset. We note that the UW-ProCCaps model outperforms the two state-of-the-art models. This proves that UW-ProCCaps is a promising model that reaches good results already at the first stage of training. In Tab. <ref>, the models are fine-tuned in GAN on UFO120. The results summarised in the tables are also shown for UW-ProCCaps, Deodify and InstanceAware in Fig. <ref> to facilitate the interpretation of the performances. For the PSNR and SSIM metrics, a higher result is better, while for the LPIPS metric, a lower result is better. As shown in the tables, UW-ProCCaps outperforms the SOTA models on almost all the datasets. Finally, in Tab. <ref>, we compare the UW-ProCCaps model with the enhancement models. The results obtained with our model outperform all the considered models. This proves that the final model is robust and obtains high performance in both the colourisation task and the enhancement task.
Qualitative comparison
In Fig. <ref>, we summarise some samples of colourisations obtained with the proposed model and the state-of-the-art in both the end-to-end and GAN training phases. We think that presenting both the end-to-end and GAN results demonstrates that the model achieves a robust reconstruction of underwater colourisation, while this behaviour is not obtained with the other models in the same conditions. We present, in the UW-ProCCaps columns, the colours reconstructed with our model on the validation datasets. The InstanceAware and the Deoldify columns show the colours reconstructed with the models at the state-of-the-art. The colours obtained with UW-ProCCaps in the GAN column are plausible and they present high quality on the detailed entities. The model is able to reconstruct different colourisations of the same entity, as shown for the yellow-black fish, the purple fish, and the red fish in Fig. <ref>, and for the sea colours. The third row from the top, the starfish, presents a complex case for colourisation because the model has to colourise the starfish against the seabed in the background. UW-ProCCaps deals with the starfish by providing a vibrant red colour, while the state-of-the-art does not provide a colourisation.
§ CONCLUSION
In this paper, we consider the colourisation of greyscale images for the underwater domain. By working with greyscale images, we become independent of the cameras' colour resolution and of the colour distortion that is common underwater. Moreover, compressing each image to only the luminescence channel reduces the memory space required for image collection campaigns. The proposed UW-ProCCaps brings together different architectural strategies, such as capsules, a parallel classifier, and an encoder-decoder structure, as well as different training methodologies, such as ProGL and GAN training. We presented an ablation study of each of the main choices made for this model, proving that each piece plays its part in obtaining the best balance between quantitative and qualitative performance. The results obtained are qualitatively superior to the state-of-the-art, showing bright colourisation and high-quality images. The quantitative results on the metrics outperform the state-of-the-art under the same training conditions.
|
http://arxiv.org/abs/2307.02658v1
|
20230705211913
|
Spherical Feature Pyramid Networks For Semantic Segmentation
|
[
"Thomas Walker",
"Varun Anand",
"Pavlos Andreadis"
] |
cs.CV
|
[
"cs.CV",
"I.4.6"
] |
Spherical Feature Pyramid Networks For Semantic Segmentation
Thomas Walker, Varun Anand, Pavlos Andreadis
August 1, 2023
=============================================================
Semantic segmentation for spherical data is a challenging problem in machine learning since conventional planar approaches require projecting the spherical image to the Euclidean plane. Representing the signal on a fundamentally different topology introduces edges and distortions which impact network performance. Recently, graph-based approaches have bypassed these challenges to attain significant improvements by representing the signal on a spherical mesh. Current approaches to spherical segmentation exclusively use variants of the UNet architecture, meaning more successful planar architectures remain unexplored. Inspired by the success of feature pyramid networks (FPNs) in planar image segmentation, we leverage the pyramidal hierarchy of graph-based spherical CNNs to design spherical FPNs (S^2FPN). Our S^2FPN models show consistent improvements over spherical UNets, whilst using fewer parameters. On the Stanford 2D-3D-S dataset, our models achieve state-of-the-art performance with an mIOU of 48.75, an improvement of 3.75 IoU points over the previous best spherical CNN.
§ INTRODUCTION
In recent years spherical image data has become increasingly common due to omnidirectional imaging and LiDAR sensors used in autonomous vehicles <cit.>. However, spherical data is encountered across all disciplines as atomic charge distributions <cit.>, brain activity <cit.>, climate patterns <cit.>, and the cosmic microwave background <cit.>. The task of semantic segmentation frequently appears in these spherical domains, perhaps most notably in the visual systems employed by autonomous vehicles. For planar segmentation, feature pyramids are an established component of state-of-the-art segmentation models <cit.>. We introduce a Feature Pyramid Network (FPN) for spherical images. As part of this design, we present an improved scheme to transition spherical signals between pyramid levels, and perform an ablation study over these design choices.
Early attempts to utilize deep learning models for segmenting spherical images involved mapping them to the Euclidean plane and using well-studied planar, convolutional neural networks (CNN) <cit.>. Almost exclusively, these methods use the equirectangular projection, mapping the latitude and longitude of the spherical images to Euclidean grid coordinates. However, this mapping introduces polar discontinuities, greatly distorting objects in a scene according to their proximity to the poles <cit.>. Objects close to the top and bottom edge of the projection have their scale exaggerated, a distortion which planar segmentation models are known to be sensitive to <cit.>. In fact, the introduction of boundaries by planar representations is known to harm classification accuracy of close-by pixels <cit.>.
Collectively, these drawbacks have motivated a branch of literature aimed at generalizing successful planar deep learning techniques to operate natively on the sphere. The primary effort has been directed towards designing convolutions that directly consume spherical signals without the need for destructive projections. One of the biggest challenges is the lack of a perfectly symmetrical grid to discretize spherical signals <cit.>. Without a spatially consistent notion of a `pixel', there exists no clear way to define a traditional spatial convolution operation. Further still, there is no canonical choice of filter orientation on the sphere. This results in undesirable path-dependent convolutions, since spatial kernels will change their orientation depending on the choice of path taken during convolution <cit.>. The seminal work on fully spherical CNNs <cit.> led to a line of research on spectral methods. These approaches can bypass the aforementioned challenges by breaking the signal down into spherical harmonics and performing convolution in frequency space as a spectral dot product. However, the repeated computation of Fourier transforms and their inverses are very computationally expensive and scale poorly to high resolution images <cit.>.
Recently, the works of <cit.> bypass these challenges by sampling the spherical signal onto the nodes of an icosahedral graph, an approximation of a sphere at varying "levels" (analogous to image resolution in the planar case; see figure <ref>). Two of these methods parameterize spherical convolutions as a linear combination of partial differential operators (PDOs). These PDO-based models are efficient and have achieved competitive results on benchmark segmentation data sets <cit.>.
We continue the trend of generalizing successful planar techniques to the spherical domain. In planar segmentation models, the importance of multi-scale features is long established, even before the success of deep CNNs <cit.>. <cit.> introduced Feature Pyramid Networks (FPNs), which have now become commonplace in state-of-the-art networks <cit.>. FPNs leverage the intrinsic receptive field hierarchy of a CNN to extract semantic feature maps across various scales, which enable them to detect objects of varying shapes and sizes. For our approach, we leverage the varying receptive field of PDOs on an icosahedral graph to design Spherical Feature Pyramid Networks for graph-based spherical models.
Our contributions are as follows:
* We design spherical feature pyramid networks and present results for a range of model complexities. Our best model improves the state-of-the-art on Stanford 2D-3D-S <cit.> by a significant margin.
* We present optimal pooling and up-sampling routines for constructing icosahedral feature pyramids.
§ RELATED WORK
§.§ Graph-based Spherical CNNs
Despite the significant progress in making spectral approaches more scalable <cit.>, graph-based spherical CNNs still emerge as the most efficient approach to processing spherical signals. However, a notable cost of approximating the sphere as a graph is the diminished ability to facilitate rotational equivariance. The most recent research efforts have been in attaining more uniform spherical samplings, and designing rotationally equivariant convolutions <cit.>. Hence, these works focus on the signal domain and the layer-wise operations, not considering improvements to the architectural design they employ for the task of segmentation. Our work is orthogonal to surrounding research, and builds on the PDO-based approach introduced by <cit.> to utilize feature pyramids for segmentation.
§.§ Multi-scale Features
Since objects in the real world have different shapes and sizes, it is desirable for networks to be able to extract multi-scale features that capture information across all object scales. Even before the success of deep CNNs, the literature on planar image segmentation and object detection has a long history of methods using Gaussian image pyramids <cit.> to identify objects across scales. These image pyramids were combined with hand-engineered features <cit.> to achieve state-of-the-art results at the time. Though CNNs have replaced engineered features, the need for multi-scale features still remains. Early CNN-based architectures <cit.> used the pyramidal feature hierarchy of a CNN to do object detection. However, these networks weight semantic information unequally across the levels of the pyramid, and perform poorly on small objects. The Feature Pyramid Network (FPN) <cit.> was the seminal work to introduce a top-down pathway that extracts high-level semantic information uniformly at multiple scales. FPNs efficiently leverage a CNN's receptive field and semantic hierarchies simultaneously, and have been widely adopted in successful planar object detection and segmentation systems <cit.>, <cit.>.
§ PRELIMINARIES
In this section we provide an overview of the core mathematical components of graph-based spherical CNNs which use PDOs.
§.§ Icosahedral Mesh
We define our signal domain, the icosahedral spherical mesh. As originally proposed by <cit.>, we can accurately discretize a sphere by recursively applying a subdivision routine initialized on an icosahedron. In each iteration we add vertices at the midpoints between pre-existing vertices, and then re-project them to unit distance from the origin. Fully connecting the new vertices subdivides each original face into four new triangles. Following this scheme we can naturally define upsampling and downsampling algorithms which are analogous to different resolutions in planar images, see figure <ref>. We refer to the original icosahedron as a level-0 (ℓ = 0) mesh. Each iteration of this subdivision routine increases the level by one. The number of vertices n_v scales with level ℓ according to,
n_v = 10·4^ℓ + 2.
Our input and output spherical signals are level-5 meshes, with the minimum level-0 mesh at the lowest level of the network.
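As a quick sanity check on this bookkeeping, the following Python sketch (our own illustration, not part of any released code) tabulates the vertex, edge, and face counts per subdivision level; the vertex count reproduces n_v = 10·4^ℓ + 2.

def icosahedral_counts(level: int):
    """Return (n_vertices, n_edges, n_faces) of a level-`level` icosahedral mesh."""
    n_faces = 20 * 4 ** level          # each subdivision splits every face into four
    n_edges = 30 * 4 ** level          # likewise, each subdivision quadruples the edges
    n_vertices = 10 * 4 ** level + 2   # consistent with Euler's formula V - E + F = 2
    return n_vertices, n_edges, n_faces

if __name__ == "__main__":
    for level in range(6):             # level-0 icosahedron up to the level-5 input mesh
        print(level, icosahedral_counts(level))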
§.§ MeshConv Operation
We apply the "MeshConv" operation as defined in <cit.>. Given the spherical signal F, and a partial differential operator (PDO) kernel G_θ, we parameterize a convolution at each vertex as,
F ∗ G_θ = θ_0 I F + θ_1 ∇_x F + θ_2 ∇_y F + θ_3 ∇^2 F,
where I is the identity operator and ∇_x and ∇_y are gradients in the east-west and north-south directions respectively. Collectively, the constituent PDOs capture diffusion properties of the signal at each vertex. First- and second-order differential operators, as well as the Laplacian operator, are computed using the Libigl library <cit.>, and follow from results in discrete differential geometry <cit.>. Namely, we represent scalar functions on a mesh as a piece-wise linear function with values defined at each mesh vertex:
f(𝐱) ≈∑_i=1^n ϕ_i(𝐱) f_i,
where ϕ_i is a piece-wise linear basis function defined on the mesh. On each triangle, ϕ_i is a linear function which equals one at the vertex x = v_i and zero at the other vertices.
For future discussion, note that with this prescription we are making the choice to perform bilinear interpolation to find the function values between the vertices of the spherical signal. We can consider gradients of piecewise linear functions as simply sums of gradients of the hat functions:
∇ f(𝐱) ≈ ∇ ∑_i=1^n ϕ_i(𝐱) f_i = ∑_i=1^n ∇ϕ_i(𝐱) f_i.
Around a given vertex v_i, these basis gradients ∇ϕ_i are zero everywhere on the mesh except the surrounding faces which contain v_i. For the spherical gradient signal itself, we sum the gradients on these faces weighted by each face's area. Following <cit.>, at each vertex we take the dot product of this gradient with north-south and east-west basis vector fields, separating these components for individual parameterization within the PDO kernel.
Within this framework, the signal's Laplacian can similarly be computed using the cotangent formulation of the Laplace-Beltrami operator,
Δ f(v_i) = 1/(2A_i) ∑_v_j ∈𝒩(v_i) (cot α_ij + cot β_ij) (f(v_j) - f(v_i)),
where 𝒩(v_i) is the set of vertices in the 1-ring neighborhood of v_i, A_i is the cell area of vertex v_i, and α_ij, β_ij are the cotangent angles associated with the edge (v_i, v_j).
Note that these computations use information from the one-ring neighbourhood of each vertex to compute features. The size of this neighbourhood is larger at lower levels of the mesh (see Figure <ref>). This property enables MeshConv operating at lower mesh levels to have a larger receptive field on the input spherical signal. We leverage this receptive field hierarchy to construct Spherical Feature Pyramid Networks (S^2FPNs).
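To make the parameterization concrete, here is a minimal Python sketch of a single MeshConv channel, assuming the gradient components and the cotangent Laplacian have already been assembled as sparse vertex-to-vertex operator matrices (e.g., with Libigl). The variable names are illustrative, and the random operators in the toy usage are placeholders rather than actual mesh differentials.

import numpy as np
import scipy.sparse as sp

def mesh_conv(F, Dx, Dy, L, theta):
    """One MeshConv channel: theta_0*F + theta_1*Dx@F + theta_2*Dy@F + theta_3*L@F.
    F: (n_vertices, C) signal; Dx, Dy, L: sparse (n_vertices, n_vertices) operators
    (east-west gradient component, north-south gradient component, cotangent Laplacian);
    theta: length-4 array of learnable coefficients (in a full model, per channel pair)."""
    return theta[0] * F + theta[1] * (Dx @ F) + theta[2] * (Dy @ F) + theta[3] * (L @ F)

# Toy usage with random placeholder operators on a 12-vertex level-0 mesh.
n = 12
rng = np.random.default_rng(0)
F = rng.standard_normal((n, 3))
Dx, Dy, L = (sp.random(n, n, density=0.3, random_state=s).tocsr() for s in (1, 2, 3))
out = mesh_conv(F, Dx, Dy, L, theta=np.array([1.0, 0.1, 0.1, 0.01]))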
§ METHODOLOGY
In this section, we describe the architecture of our S^2FPN model, followed by a description of our design choices and their impact on the receptive field of the models.
§.§ Feature Pyramid Networks
Our spherical FPN consists of three stages which we detail below. Figure <ref> provides an overview of our overall architecture.
Encoder: We use a modified version of the encoder used by the Spherical UNet <cit.>. Starting with a level-5 mesh, we apply bottlenecked ResBlocks down to the lowest mesh level of the model. Each ResBlock consists of a MeshConv operation sandwiched between standard 1x1 convolutions, with average pooling applied after the MeshConv. Our model differs from the UNet in terms of the order of operations and the choice of downsampling. This is explained further in section <ref>. The number of channels is set to 32 at level-5 and is doubled on each transition to a lower level mesh. At level-0, the channel width is capped at 512 (equal to level-1), to avoid a drastic increase in parameters. Such a design results in a semantic hierarchy, with lower mesh levels extracting higher-level semantic information at coarser resolutions.
Pyramid: To construct our feature pyramid we follow <cit.> and add a 256-channel top-down pathway with lateral connections from the encoder. This pathway upsamples features from the lowest level of the encoder, building semantically rich feature maps across all scales. Each upsampling block consists of bilinear upsampling, followed by addition with feature maps of the same level from the encoder (Upsamp Block). The lateral connections use a 1x1 convolution to match the number of channels in the pyramid. By keeping the number of channels fixed across all pyramid levels, we treat each scale equivalently, weighting semantic information equally across all scales.
Head: While the design of the encoder and the pyramid are generic and can be used for any task, we use a classification head designed for semantic segmentation following <cit.>. Starting from the lowest mesh level, every pyramid feature map is upsampled using bilinear upsampling followed by MeshConv, to the output mesh level (CrossUpSamp). This results in a set of level-5 feature maps which are element-wise summed and passed through a final MeshConv layer for prediction. We set the number of channels to 128 across all the head feature maps.
Finally, all our MeshConv operations are followed by batch normalization and ReLU. Our pyramid differs from <cit.>, who empirically found non-linearities to not have an effect on their task. We found batch normalization to be necessary to prevent explosions in MeshConv’s differential operators.
§.§ Up/Down-sampling Spherical Signals
Between levels of the feature pyramid the spherical image is up/down-sampled. We diverge from the seminal work on PDO-based spherical CNNs <cit.> in how we compute new vertex values when transitioning between levels, as well as the order of operations in our “Up/Down-Samp" blocks.
§.§.§ Down-sampling
<cit.> downsample by sampling the signal values only at the nodes shared by the subsequent, lower-level graph. Instead, we average pool vertex values in a 1-ring neighborhood around these shared vertices to ensure no information is lost. By taking information from a 1-ring neighbourhood, we also increase the receptive field of vertices at each level.
§.§.§ Up-sampling
In each iteration of the icosahedral sub-division routine, additional vertices are added at the midpoints between existing vertices. In the seminal model proposed by <cit.>, new vertices are assigned a value of zero, essentially “zero-padding" the shared vertices of the existing, lower-level, signal. This approach introduces artificial edges, since every vertex in the mesh is now surrounded by a ring of black vertices (Figure <ref>a). The MeshConv operator is particularly sensitive to these artifacts, since the constituent differentials act as edge detectors. Accordingly, we bilinearly interpolate the signal at parent vertices in order to compute the new midpoint vertex values.
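A minimal sketch of this midpoint rule is given below; the parent-pair bookkeeping (which midpoint vertex comes from which edge of the coarser mesh) is assumed to be available from the subdivision routine, and the function name is ours.

import numpy as np

def upsample_midpoints(values_coarse, parent_pairs):
    """Up-sample a per-vertex signal from level l to level l+1.
    values_coarse: (n_coarse, C) array; parent_pairs: (n_new, 2) int array giving,
    for each newly inserted midpoint vertex, the indices of its two parent vertices.
    Shared vertices keep their value; midpoints get the mean of their parents
    (the bilinear rule) instead of zero (zero-padding)."""
    midpoint_values = 0.5 * (values_coarse[parent_pairs[:, 0]] + values_coarse[parent_pairs[:, 1]])
    return np.concatenate([values_coarse, midpoint_values], axis=0)

# E.g. level-0 -> level-1: 12 old vertices plus 30 midpoints (one per edge) gives 42 = 10*4 + 2.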
§.§.§ Order Of Operations
Finally, note that as part of the up/down-sampling block, <cit.> apply a final MeshConv after performing up/down-sampling. In our approach, this order of operations is swapped such that convolution is performed prior to transitioning between graph levels.
These design choices have a direct effect on the receptive field of the MeshConv operation at each level, and therefore the receptive hierarchy our FPN is designed to exploit. Our choice of ordering and average pooling maintains the same receptive field as the seminal work, but without losing information in down-sampling. Alternatively, by keeping the original ordering with average pooling, the receptive field is larger by a one-ring neighborhood of the previous level, see figure <ref>. This will increase the receptive field at each level of the encoder, which may not be beneficial for finding smaller scale features. For this reason, we perform an ablation study over these design choices to empirically determine the optimal up/down-sampling procedure for spherical FPNs.
§ EXPERIMENTS
§.§ Stanford 2D-3D-S Experiments
We follow <cit.> and analyse our models on the Stanford 2D3DS spherical image dataset, which contains 1413 equirectangular images of indoor scenes, with RGBD channels and semantic labels corresponding to 13 classes. We use the pre-processed data provided by <cit.>, which samples the original images at the latitude-longitude positions of the spherical mesh vertices to form the spherical signal. The input RGB-D channels are interpolated using bilinear interpolation, and semantic labels are acquired using nearest-neighbor interpolation. Model performance is measured using two standard metrics: pixel-wise accuracy and mean intersection-over-union (mIoU).
We test a range of models, varying the maximum depth of the feature pyramid. Besides simply providing models of varying complexity, we motivate this based on the extreme spatial coarseness of the lowest level meshes. Planar FPNs <cit.>, which operate on established image recognition benchmark datasets such as ImageNet <cit.> and MS COCO <cit.>, cap their lowest pyramid level to feature maps with a resolution of 20 × 15, or 300 pixels. In contrast, the icosahedral mesh at the lowest level-0 consists of a mere 12 vertices, roughly corresponding to a 3 × 4 2D image. At this extreme of spatial coarseness, the potential gains from including increasingly lower pyramid levels may be non-trivial. Objects in the scene may not have the excessively large features which are best represented at this resolution. For this reason, we experiment with a range of pyramid depths.
Experimental Setup: We test spherical FPNs with maximum level of 5, and minimum level in {0,1,2,3}. All models use the bilinear upsampling and average pooling blocks defined previously. For all experiments we use the Adam optimizer to train our networks for 100 epochs, with an initial learning rate of 0.01 and a step decay of 0.9 every 20 epochs. We use a batch size of 16 for all models except the L0:5 FPN, for which we use a batch size of 8 due to memory restrictions. All models were trained on Google Cloud using an NVIDIA Tesla T4. The results are shown in Table <ref>.
Results and Discussion: All four FPN models achieve state-of-the-art performance, with the smallest L3:5 FPN using 1.5× fewer parameters. The L2:5 model demonstrates substantial gains of 3.14 mIoU points over <cit.>, whilst using a comparable number of parameters. Our deepest FPN achieves 48.75 mIoU, setting a new state-of-the-art for graph-based spherical CNNs on this dataset. The per-class mIoU scores are shown in Table <ref>. Collectively, our S^2FPNs achieve the best performance across all the classes except door, where they fall short by 0.007 mIoU points.
§.§ Ablation Study
We perform an ablation study over our choice of down-sampling and up-sampling operations, as well as the order of MeshConv operation relative to down-sampling. We consider two types of down-sampling to a lower-level mesh, average pooling vertex values to compute new vertices (“average"), or simply taking the values of vertices shared with the lower mesh (“drop"), as in <cit.>. For up-sampling, we test bilinear interpolation (“bilinear") and having new vertices take a value of zero (“zero-pad"). Finally, the condition “swapped" refers to whether the order of MeshConv and downsampling operation is swapped relative to <cit.>. If the order is swapped, MeshConv is applied before downsampling, as in <ref>a). All models used a L3:5 architecture, and were tested using an identical experimental set up to the previous section.
Results and Discussion: The use of bilinear up-sampling and average pooling is seen to improve the model both independently and in conjunction. Swapping the order of MeshConv and down-sampling operation is seen to harm performance for the model using "drop" down-sampling, but improve the model when applied to a model with average pooling. The reason for this requires more experimentation, but could be due to the fact that both swapping the order and using “drop" down-sampling reduce the receptive field on the previous level mesh. More precisely, used together, they have a receptive field nine times as small as that of average pooling and the original ordering. Combined with using the shallowest L3:5 model, this significantly reduces receptive field on the input signal and may be harming the network performance.
§.§ ClimateNet
We also evaluate our method on the task proposed by <cit.>, the segmentation of climate events from a 20-year run of the Community Atmospheric Model v5 (CAM5) <cit.>. We use the data preprocessed by <cit.>, which consists of spherical signals sampled onto a level-5 icosahedral grid. Each map consists of 16 channels of measurements such as temperature, wind, humidity, and pressure. The training, validation, and test set size is 43917, 6275, and 12549, respectively. The task is to use these climate measurements to segment Atmospheric Rivers (AR) and Tropical Cyclones (TC). The labels are heavily unbalanced with 0.1% TC, 2.2% AR, and 97.7% background (BG) pixels, and so we use a weighted cross-entropy loss. Please see <cit.> for information on ground truth label production. Since the model proposed by <cit.> is our most direct comparison, we reduce the number of channels in our S^2FPN model (see <ref>) by a factor of 4, in order to have a comparable number of parameters.
For all experiments we train for 50 epochs using the Adam optimizer, with an initial learning rate of 0.001, and a step decay of 0.4 every 20 epochs. We use a batch size of 128, training all models on a NVIDIA Tesla T4.
We omit comparisons to <cit.> and <cit.> since their parameter scales are vastly different (52M and 2.3M parameters, respectively). Compared to Spherical UNet, our FPN models show substantial improvements in terms of mAP. The L1:5 model performs better than the larger L0:5 model, perhaps indicating the effect of diminishing returns on extremely coarse meshes.
§ CONCLUSION
At present, there is not a universally accepted approach for performing convolution on a sphere. Generalising convolutions to work on arbitrary shapes and unstructured grids is an active research topic. However, the importance of multi-scale features is generic. In theory, feature pyramids can be constructed using any network with semantic and receptive field hierarchies. In this work, we generalise the idea of feature pyramids to the spherical domain and propose an alternative model to UNet for spherical image segmentation. In addition, motivated by their influence on the receptive field hierarchy of the network, we present improved pooling and up-sampling schemes and measure their respective contributions through an ablation study. Our work demonstrates that FPNs are a powerful domain-agnostic architecture, which successfully generalizes to PDO-based spherical CNNs, attaining state-of-the-art results.
§ ACKNOWLEDGMENTS
We would like to thank Chiyu "Max" Jiang for his valuable comments on the up/down-sampling methods, and for their work on pre-processing both ClimateNet and Stanford 2D-3D-S datasets.
|
http://arxiv.org/abs/2307.01988v1
|
20230705024448
|
The linear convergence of the greedy randomized Kaczmarz method is deterministic
|
[
"Yansheng Su",
"Deren Han",
"Yun Zeng",
"Jiaxin Xie"
] |
math.NA
|
[
"math.NA",
"cs.NA"
] |
The linear convergence of the greedy randomized Kaczmarz method is deterministic
School of Mathematical Sciences, Beihang University, Beijing, 100191, China.
[email protected]
LMIB of the Ministry of Education, School of Mathematical Sciences, Beihang University, Beijing, 100191, China.
[email protected]
School of Mathematical Sciences, Beihang University, Beijing, 100191, China.
[email protected]
LMIB of the Ministry of Education, School of Mathematical Sciences, Beihang University, Beijing, 100191, China.
[email protected]
[
Jiaxin Xie
==============
To improve the convergence property of the randomized Kaczmarz (RK) method for solving linear systems, Bai and Wu (SIAM J. Sci. Comput., 40(1):A592–A606, 2018) originally introduced a greedy probability criterion for effectively selecting the working row from the coefficient matrix and constructed the greedy randomized Kaczmarz (GRK) method. Due to its simplicity and efficiency, this approach has inspired numerous subsequent works in recent years, such as the capped adaptive sampling rule, the greedy augmented randomized Kaczmarz method, and the greedy randomized coordinate descent method. Since the iterates of the GRK method are actually random variables, existing convergence analyses are all related to the expectation of the error. In this note, we prove that the linear convergence rate of the GRK method is deterministic, i.e. not in the sense of expectation. Moreover, the Polyak's heavy ball momentum technique is incorporated to improve the performance of the GRK method. We propose a refined convergence analysis, compared with the technique used in Loizou and Richtárik (Comput. Optim. Appl., 77(3):653–710, 2020), of momentum variants of randomized iterative methods, which shows that the proposed GRK method with momentum (mGRK) also enjoys a deterministic linear convergence. Numerical experiments show that the mGRK method is more efficient than the GRK method.
Key words: Linear systems, Kaczmarz, greedy probability criterion, heavy ball momentum, deterministic linear convergence
Mathematics subject classification (2020): 65F10, 65F20, 90C25, 15A06, 68W20
§ INTRODUCTION
The Kaczmarz method <cit.>, also known as the algebraic reconstruction technique (ART) <cit.>, is an iterative method for solving large-scale linear systems
Ax = b,  A ∈ ℝ^{m×n},  b ∈ ℝ^m.
Throughout this note, we assume that the linear system (<ref>) is consistent, i.e. there exists an x such that Ax = b. For any i∈[m]:={1,…,m}, we use a_i to denote the transpose of the i-th row of A and use b_i to denote the i-th entry of b. Starting from x^(0)∈ℝ^n, the original Kaczmarz method constructs x^(k+1) by
x^(k+1) = x^(k) - ((a_{i_k}^⊤ x^(k) - b_{i_k})/∥a_{i_k}∥_2^2) a_{i_k},
where the index i_k = (k mod m) + 1.
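For concreteness, a minimal Python sketch of this update with the cyclic row choice reads as follows (dense NumPy arrays are assumed; this is only an illustration, not the authors' code).

import numpy as np

def kaczmarz(A, b, x0, n_iters):
    """Classical (cyclic) Kaczmarz: project onto one hyperplane a_i^T x = b_i per step."""
    x = x0.astype(float)
    m = A.shape[0]
    for k in range(n_iters):
        i = k % m                            # i_k = (k mod m) + 1 in 1-based indexing
        a = A[i]
        x = x - (a @ x - b[i]) / (a @ a) * a # orthogonal projection; (a @ a) = ||a_i||_2^2
    return x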
There is empirical evidence that selecting the working row from the matrix A randomly can often lead to better convergence of the Kaczmarz method compared to choosing it sequentially <cit.>.
The celebrated result of Strohmer and Vershynin <cit.> shows that if the index i_k is selected randomly with probability proportional to ∥a_{i_k}∥_2^2, then the resulting randomized Kaczmarz (RK) method converges linearly in expectation.
Since then, RK-type methods have received extensive attention due to their computational efficiency and scalability. They have a wide range of applications in many areas of scientific computing and engineering such as computerized tomography <cit.>, signal processing <cit.>, optimal control <cit.>, and machine learning <cit.>. We refer to <cit.> for a recent survey on the Kaczmarz method.
It is well known that using a better probability criterion can lead to a more favorable order of working rows and thus accelerate the convergence of the RK method. To enhance the convergence property of the RK method,
Bai and Wu <cit.> first introduced the greedy probability criterion, and constructed the
greedy randomized Kaczmarz (GRK) method for solving the linear system (<ref>).
At the k-th iteration, GRK determines a subset ℐ_k of [m] such that the magnitude of the residual a_i^⊤ x^(k) - b_i exceeds a threshold,
i.e.
ℐ_k = { i : |a_i^⊤ x^(k) - b_i|^2 ≥ ε_k ∥Ax^(k) - b∥_2^2 ∥a_i∥_2^2 },
where
ε_k = (1/2) ( (1/∥Ax^(k) - b∥_2^2) max_{1≤i≤m}{ |a_i^⊤ x^(k) - b_i|^2/∥a_i∥_2^2 } + 1/∥A∥_F^2 ).
Then, a modified residual vector r̃^(k) is defined as r̃^(k)_i = a_i^⊤ x^(k) - b_i if i ∈ ℐ_k, and r̃^(k)_i = 0 otherwise. The GRK method selects the index of the working row i_k ∈ ℐ_k with probability
Prob(i_k = i) = |r̃^(k)_i|^2/∥r̃^(k)∥_2^2.
Finally, the GRK method orthogonally projects the current iterate x^(k) onto the i_k-th hyperplane {x | ⟨a_{i_k}, x⟩ = b_{i_k}} to obtain the next iterate x^(k+1).
By using the above greedy idea, small entries of the residual vector Ax^(k) - b may not be selected, which guarantees the progress of each iteration of GRK. This property leads to a faster convergence rate of GRK compared to RK.
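The following Python sketch implements one GRK iteration as described above, with the probability criterion just stated; it is a plain illustration for dense matrices, not an optimized or official implementation, and it omits safeguards such as stopping when the residual vanishes.

import numpy as np

def grk_step(A, b, x, rng):
    """One greedy randomized Kaczmarz (GRK) step."""
    row_norms2 = np.einsum("ij,ij->i", A, A)        # ||a_i||_2^2
    r = A @ x - b                                   # residual A x^(k) - b
    ratios = r**2 / row_norms2                      # |a_i^T x^(k) - b_i|^2 / ||a_i||_2^2
    fro2 = row_norms2.sum()                         # ||A||_F^2
    eps_k = 0.5 * (ratios.max() / (r @ r) + 1.0 / fro2)
    idx = np.flatnonzero(r**2 >= eps_k * (r @ r) * row_norms2)   # the index set I_k
    rtilde2 = r[idx] ** 2                           # squared entries of the modified residual
    i = rng.choice(idx, p=rtilde2 / rtilde2.sum())  # sample i_k with prob |r~_i|^2 / ||r~||_2^2
    return x - (A[i] @ x - b[i]) / row_norms2[i] * A[i]

# Usage: rng = np.random.default_rng(0); repeat x = grk_step(A, b, x, rng) until ||Ax - b|| is small.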
In recent years, there has been a large amount of work on the refinements and extensions of the GRK method, such as the capped adaptive sampling rule <cit.>, the greedy augmented randomized Kaczmarz method for inconsistent linear systems <cit.>, the greedy randomized coordinate descent method <cit.>, and the capped nonlinear Kaczmarz method <cit.>. In addition, we note that there has also been some work on non-random Kaczmarz methods <cit.> inspired by the GRK method.
In this note, we show that the linear convergence of the GRK method is deterministic.
In general, the convergence analyses of RK-type methods are based on the expectation of the error 𝔼[∥x^(k) - x_⋆∥_2^2].
Specifically, Bai and Wu <cit.> proved that the iteration sequence of GRK satisfies
𝔼[∥x^(k) - x_⋆∥_2^2] ≤ ( 1 - (1/2)( ∥A∥_F^2/γ + 1 ) σ_min^2(A)/∥A∥_F^2 )^{k-1} ( 1 - σ_min^2(A)/∥A∥_F^2 ) ∥x^(0) - x_⋆∥_2^2,
where γ is a constant (see Theorem <ref>). We demonstrate that the greedy strategy always guarantees a certain reduction of ∥x^(k) - x_⋆∥_2^2 at each iteration. As a result, the convergence bound in (<ref>) is valid not only for the expectation 𝔼[∥x^(k) - x_⋆∥_2^2], but also for the quantity ∥x^(k) - x_⋆∥_2^2 itself. Particularly, we show that the iteration sequence of GRK satisfies
∥x^(k) - x_⋆∥_2^2 ≤ ( 1 - (1/2)( ∥A∥_F^2/γ + 1 ) σ_min^2(A)/∥A∥_F^2 )^{k-1} ( 1 - σ_min^2(A)/∥A∥_F^2 ) ∥x^(0) - x_⋆∥_2^2.
To the best of our knowledge, this is the first time that convergence not based on the expectation has been explored for RK-derived methods. We refer to such convergence for random algorithms as deterministic.
Our result is mainly based on the following observation: by the definition of ℐ_k, we know that any index i ∈ ℐ_k satisfies
|a_i^⊤ x^(k) - b_i|^2/∥a_i∥_2^2 ≥ (1/2)( max_{1≤j≤m}{ |a_j^⊤ x^(k) - b_j|^2/∥a_j∥_2^2 } + ∥Ax^(k) - b∥_2^2/∥A∥_F^2 ),
and since
max_{1≤j≤m}{ |a_j^⊤ x^(k) - b_j|^2/∥a_j∥_2^2 } ≥ ∑_{i=1}^m (∥a_i∥_2^2/∥A∥_F^2)·(|a_i^⊤ x^(k) - b_i|^2/∥a_i∥_2^2) = ∥Ax^(k) - b∥_2^2/∥A∥_F^2,
the following inequality always holds for any i ∈ ℐ_k, regardless of the probability employed:
|a_i^⊤ x^(k) - b_i|^2/∥a_i∥_2^2 ≥ ∥Ax^(k) - b∥_2^2/∥A∥_F^2,
which ensures deterministic convergence. This observation makes it possible for many variants of the GRK method to achieve deterministic convergence. Particularly, we investigate the heavy ball momentum <cit.> variant of the GRK method and demonstrate that the linear convergence of the GRK method with momentum (mGRK) is also deterministic. In recent years, the incorporation of momentum acceleration techniques with Kaczmarz-type methods has been a popular topic in the literature <cit.>. Convergence of the heavy ball momentum variant of the randomized Kaczmarz method has been analyzed in <cit.> and <cit.>. In this note, we provide a different convergence analysis of the heavy ball momentum variant of the Kaczmarz method, where a smaller convergence factor for the Kaczmarz method with momentum can be obtained.
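A small numerical check of this observation (our own illustration, with randomly generated data) is the following: every index admitted to ℐ_k indeed satisfies the row-wise bound, independently of how one samples within ℐ_k.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
x_star, xk = rng.standard_normal(50), rng.standard_normal(50)
b = A @ x_star                                   # consistent system
r = A @ xk - b
row_norms2 = np.einsum("ij,ij->i", A, A)
ratios = r**2 / row_norms2
eps_k = 0.5 * (ratios.max() / (r @ r) + 1.0 / row_norms2.sum())
idx = np.flatnonzero(r**2 >= eps_k * (r @ r) * row_norms2)       # the index set I_k
# Every selected row beats the Frobenius-averaged residual bound:
assert np.all(ratios[idx] >= (r @ r) / row_norms2.sum() - 1e-12)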
§.§ Notations
We here give some notations that will be used in the note. For a vector x ∈ ℝ^n, we use x_i, x^⊤, and ∥x∥_2 to denote the i-th entry, the transpose, and the Euclidean norm of x, respectively. For a matrix A ∈ ℝ^{m×n}, we use a_i, A^⊤, A^†, ∥A∥_F, Range(A), Rank(A), σ_max(A), and σ_min(A) to denote the i-th row, the transpose, the Moore-Penrose pseudoinverse, the Frobenius norm, the range space, the rank, the largest singular value, and the smallest non-zero singular value of A, respectively.
§.§ Organization
The remainder of the note is organized as follows. We prove the deterministic convergence of GRK in Section 2. In Section 3, we propose the momentum variant of the GRK method. In Section 4, we perform some numerical experiments to show the effectiveness of the proposed method. Finally, we conclude the note in Section 5.
§ DETERMINISTIC CONVERGENCE FOR GRK
In this section, we study the deterministic convergence of the greedy randomized Kaczmarz (GRK) method. The GRK method is presented in Algorithm <ref>. We note that, compared with the GRK method proposed by Bai and Wu <cit.> where the probability criterion (<ref>) is used, any probability that satisfies
p^(k)_i = 0 for i ∉ ℐ_k, p^(k)_i ≥ 0 for i ∈ ℐ_k, and ∑_{i∈ℐ_k} p^(k)_i = 1
will be appropriate for Algorithm <ref>.
The convergence result for Algorithm <ref> is as follows. We note that the proof of this theorem is nearly identical to that of Theorem 3.1 in <cit.>, except for the handling of expectation.
Suppose that x^(0) ∈ ℝ^n and let x_⋆ = A^† b + (I - A^† A)x^(0) denote the projection of x^(0) onto the solution set of Ax = b. Then for k ≥ 1, the iteration sequence {x^(k)} generated by Algorithm <ref> satisfies
∥x^(k) - x_⋆∥_2^2 ≤ ( 1 - (1/2)( ∥A∥_F^2/γ + 1 ) σ_min^2(A)/∥A∥_F^2 )^{k-1} ( 1 - σ_min^2(A)/∥A∥_F^2 ) ∥x^(0) - x_⋆∥_2^2,
where γ = max_{1 ≤ i ≤ m} ∑_{j=1, j≠ i}^m ∥a_j∥_2^2.
By the iterative strategy of Algorithm <ref>, we have
∥x^(k+1) - x_⋆∥_2^2 = ∥ x^(k) - ((a_{i_k}^⊤ x^(k) - b_{i_k})/∥a_{i_k}∥_2^2) a_{i_k} - x_⋆ ∥_2^2
= ∥x^(k) - x_⋆∥_2^2 - |a_{i_k}^⊤ x^(k) - b_{i_k}|^2/∥a_{i_k}∥_2^2
≤ ∥x^(k) - x_⋆∥_2^2 - ε_k ∥Ax^(k) - b∥_2^2,
where the second equality follows from the fact that b_{i_k} = a_{i_k}^⊤ x_⋆ and the last inequality follows from (<ref>). Next, let us give an estimate for the quantity ε_k.
For k ≥ 1, we have
r^(k)_{i_{k-1}} = a_{i_{k-1}}^⊤ x^(k) - b_{i_{k-1}}
= a_{i_{k-1}}^⊤ ( x^(k-1) - ((a_{i_{k-1}}^⊤ x^(k-1) - b_{i_{k-1}})/∥a_{i_{k-1}}∥_2^2) a_{i_{k-1}} ) - b_{i_{k-1}}
= a_{i_{k-1}}^⊤ x^(k-1) - b_{i_{k-1}} - (a_{i_{k-1}}^⊤ x^(k-1) - b_{i_{k-1}})
= 0.
Therefore, when k ≥ 1, we have
ε_k ∥A∥_F^2 = max_{1≤i≤m}{ |a_i^⊤ x^(k) - b_i|^2/∥a_i∥_2^2 } / ( 2 ∑_{i=1}^m (∥a_i∥_2^2/∥A∥_F^2)·(|a_i^⊤ x^(k) - b_i|^2/∥a_i∥_2^2) ) + 1/2
= max_{1≤i≤m}{ |a_i^⊤ x^(k) - b_i|^2/∥a_i∥_2^2 } / ( 2 ∑_{i=1, i≠ i_{k-1}}^m (∥a_i∥_2^2/∥A∥_F^2)·(|a_i^⊤ x^(k) - b_i|^2/∥a_i∥_2^2) ) + 1/2
≥ (1/2) ( ∥A∥_F^2/∑_{i=1, i≠ i_{k-1}}^m ∥a_i∥_2^2 + 1 )
≥ (1/2) ( ∥A∥_F^2/γ + 1 ).
For the case where k = 0, we have
ε_0 ∥A∥_F^2 = max_{1≤i≤m}{ |a_i^⊤ x^(0) - b_i|^2/∥a_i∥_2^2 } / ( 2 ∑_{i=1}^m (∥a_i∥_2^2/∥A∥_F^2)·(|a_i^⊤ x^(0) - b_i|^2/∥a_i∥_2^2) ) + 1/2 ≥ 1/2 + 1/2
= 1.
Overall, we have
ε_k ≥ 1/∥A∥_F^2 for k = 0, and ε_k ≥ (1/2)( ∥A∥_F^2/γ + 1 )·(1/∥A∥_F^2) for k ≥ 1.
Now, let us give an estimate for ∥Ax^(k) - b∥_2^2. We have that for any k ≥ 0, x^(k) - x_⋆ ∈ Range(A^⊤). Indeed, from the definition of x_⋆, we know that x^(0) - x_⋆ = A^†(Ax^(0) - b) ∈ Range(A^⊤). Suppose that x^(k) - x_⋆ ∈ Range(A^⊤) holds; then x^(k+1) - x_⋆ = x^(k) - x_⋆ - ((a_{i_k}^⊤ x^(k) - b_{i_k})/∥a_{i_k}∥_2^2) a_{i_k} ∈ Range(A^⊤). By induction, we have that x^(k) - x_⋆ ∈ Range(A^⊤) holds for any k ≥ 0. Therefore,
∥Ax^(k) - b∥_2^2 = ∥A(x^(k) - x_⋆)∥_2^2 ≥ σ_min^2(A) ∥x^(k) - x_⋆∥_2^2.
Substituting this and (<ref>) into (<ref>) completes the proof.
We note that the inequality established in (<ref>) can also be used to prove Theorem <ref>, but can only yield a weaker convergence rate (see Remark <ref>).
§ GRK WITH MOMENTUM
In this section, we will incorporate Polyak's heavy ball momentum into the GRK method and show that the momentum variant of the GRK method also achieves deterministic linear convergence.
§.§ Heavy ball momentum
Recall that the gradient descent (GD) method for solving the optimization problem
min_x∈ℝ^n f(x)
utilizes the update x^{k+1} = x^k - α_k ∇ f(x^k), where α_k > 0 is the step-size, f is a differentiable convex function, and ∇ f(x^k) denotes the gradient of f at x^k.
When f is a convex function with L-Lipschitz gradient, GD requires O(L/ε) steps to guarantee an error within ε. If f is also μ-strongly convex, it converges linearly with a convergence rate of O(log(ε^{-1})(L/μ)) <cit.>.
To improve the convergence behavior of the method, Polyak modified GD by introducing a momentum term, β_k(x^k - x^{k-1}). This leads to the gradient descent method with momentum (mGD), commonly known as the heavy ball method:
x^{k+1} = x^k - α_k ∇ f(x^k) + β_k(x^k - x^{k-1}).
Polyak <cit.> proved that, for twice continuously differentiable, μ-strongly convex objective functions f(x) with L-Lipschitz gradient, mGD achieves a local accelerated linear convergence rate of O(log(ε^{-1})√(L/μ)) (with an appropriate choice of the step-size α_k and momentum parameter β_k). In this section, we aim to use the heavy ball momentum technique to improve the performance of the GRK method.
§.§ The proposed method
The proposed mGRK method for solving linear systems utilizes the following update rule:
x^(k+1) = x^(k) - α ((a_{i_k}^⊤ x^(k) - b_{i_k})/∥a_{i_k}∥_2^2) a_{i_k} + β(x^(k) - x^(k-1)),
where the indexi_kis selected using a certain greedy probability criterion.
We note that Bai and Wu <cit.> introduced a relaxation parameter θ in ε_k in Algorithm <ref> such that the factor 1/2 in front of the two terms of ε_k is replaced by θ in the first term and by 1-θ in the second term, proposing the relaxed greedy probability criterion. The mGRK method will adopt this relaxed greedy probability criterion and is described in Algorithm <ref>.
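A minimal Python sketch of the mGRK iteration with the relaxed criterion is given below; the default parameter values are only illustrative, and no claim is made that this mirrors the authors' Matlab implementation.

import numpy as np

def mgrk(A, b, x0, n_iters, alpha=1.0, beta=0.4, theta=0.5, seed=0):
    """GRK with heavy ball momentum (mGRK): greedy row selection plus a momentum term."""
    rng = np.random.default_rng(seed)
    row_norms2 = np.einsum("ij,ij->i", A, A)
    fro2 = row_norms2.sum()
    x_prev = x0.astype(float)
    x = x0.astype(float)                            # x^(1) = x^(0)
    for _ in range(n_iters):
        r = A @ x - b
        ratios = r**2 / row_norms2
        eps_k = theta * ratios.max() / (r @ r) + (1.0 - theta) / fro2   # relaxed criterion
        idx = np.flatnonzero(r**2 >= eps_k * (r @ r) * row_norms2)
        w = r[idx] ** 2
        i = rng.choice(idx, p=w / w.sum())
        x_new = x - alpha * (A[i] @ x - b[i]) / row_norms2[i] * A[i] + beta * (x - x_prev)
        x_prev, x = x, x_new
    return x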
§.§ Convergence analysis
To establish the linear convergence of mGRK, the following lemma is useful.
Fix F^(1)=F^(0)≥ 0 and let {F^(k)}_k≥ 0 be a sequence of nonnegative real numbers satisfying the relation
F^(k+1)≤γ_1 F^(k)+γ_2F^(k-1), ∀ k≥ 1,
where γ_2≥0,γ_1+γ_2<1. Then the sequence satisfies the relation
F^(k+1)≤ q^k(1+δ)F^(0), ∀ k≥ 0,
where q = (γ_1 + √(γ_1^2 + 4γ_2))/2 if γ_2 > 0, and q = γ_1 if γ_2 = 0, and where δ = q - γ_1 ≥ 0. Moreover,
γ_1+γ_2≤ q<1,
with equality if and only if γ_2=0.
We have the following convergence result for Algorithm <ref>.
Suppose that x^(1) = x^(0) ∈ ℝ^n, θ ∈ [0,1], and let x_⋆ = A^† b + (I - A^† A)x^(0) denote the projection of x^(0) onto the solution set of Ax = b.
Assume that α ∈ (0,2) if β = 0 or α ∈ (0, 1+β) if β > 0, and that the expressions
γ_1 = 2β^2 + 3β + 1 - (3αβ + 2α - α^2) σ_min^2(A)/∥A∥_F^2 and γ_2 = 2β^2 + β
satisfy γ_1 + γ_2 < 1. Then the iteration sequence {x^(k)}_{k≥0} generated by Algorithm <ref> satisfies
∥x^(k) - x_⋆∥_2^2 ≤ q^k (1+δ) ∥x^(0) - x_⋆∥_2^2,
where q = (γ_1 + √(γ_1^2 + 4γ_2))/2 and δ = q - γ_1. Moreover, γ_1 + γ_2 ≤ q < 1.
To state conveniently, we set P := a_{i_k} a_{i_k}^⊤/∥a_{i_k}∥_2^2; then we have
((a_{i_k}^⊤ x^(k) - b_{i_k})/∥a_{i_k}∥_2^2) a_{i_k} = (a_{i_k} a_{i_k}^⊤/∥a_{i_k}∥_2^2)(x^(k) - x_⋆) = P(x^(k) - x_⋆),
where the first equality follows from the fact that a_i^⊤ x_⋆ = b_i for any i ∈ [m].
Noting that P^⊤ = P and P^2 = a_{i_k} a_{i_k}^⊤ a_{i_k} a_{i_k}^⊤/∥a_{i_k}∥_2^4 = a_{i_k} a_{i_k}^⊤/∥a_{i_k}∥_2^2 = P, we have
⟨x^(k) - x_⋆, P(x^(k) - x_⋆)⟩ = ⟨x^(k) - x_⋆, P^2(x^(k) - x_⋆)⟩ = ∥P(x^(k) - x_⋆)∥_2^2.
Now by the iterative strategy of Algorithm <ref>, we have
∥x^(k+1) - x_⋆∥_2^2 = ∥x^(k) - x_⋆ - α P(x^(k) - x_⋆) + β(x^(k) - x^(k-1))∥_2^2
= ∥(I - α P)(x^(k) - x_⋆)∥_2^2 + β^2 ∥x^(k) - x^(k-1)∥_2^2
+ 2β⟨x^(k) - x_⋆, x^(k) - x^(k-1)⟩ - 2αβ⟨P(x^(k) - x_⋆), x^(k) - x^(k-1)⟩.
We shall analyze the four terms in the last expression separately. By using (<ref>), the first term satisfies
∥(I - α P)(x^(k) - x_⋆)∥_2^2 = ⟨x^(k) - x_⋆, (I - 2α P + α^2 P^2)(x^(k) - x_⋆)⟩
= ∥x^(k) - x_⋆∥_2^2 - (2α - α^2)∥P(x^(k) - x_⋆)∥_2^2.
We keep the second term β^2∥x^(k) - x^(k-1)∥_2^2 unchanged and reformulate the third term by
2β⟨x^(k) - x_⋆, x^(k) - x^(k-1)⟩ = β( ∥x^(k) - x_⋆∥_2^2 + ∥x^(k) - x^(k-1)∥_2^2 - ∥x^(k-1) - x_⋆∥_2^2 ).
For the last term,
-2αβ⟨P(x^(k) - x_⋆), x^(k) - x^(k-1)⟩
= αβ( ∥x^(k) - x^(k-1) - P(x^(k) - x_⋆)∥_2^2 - ∥x^(k) - x^(k-1)∥_2^2 - ∥P(x^(k) - x_⋆)∥_2^2 )
= αβ( ∥(I - P)(x^(k) - x_⋆) - (x^(k-1) - x_⋆)∥_2^2 - ∥x^(k) - x^(k-1)∥_2^2 - ∥P(x^(k) - x_⋆)∥_2^2 )
≤ αβ( 2∥(I - P)(x^(k) - x_⋆)∥_2^2 + 2∥x^(k-1) - x_⋆∥_2^2 - ∥x^(k) - x^(k-1)∥_2^2 - ∥P(x^(k) - x_⋆)∥_2^2 )
= αβ( 2∥x^(k) - x_⋆∥_2^2 - 3∥P(x^(k) - x_⋆)∥_2^2 + 2∥x^(k-1) - x_⋆∥_2^2 - ∥x^(k) - x^(k-1)∥_2^2 ),
where the inequality follows from ∥a + b∥_2^2 ≤ 2∥a∥_2^2 + 2∥b∥_2^2. Overall, substituting the above bounds into (<ref>), we obtain
∥x^(k+1) - x_⋆∥_2^2 ≤ (2αβ + β + 1)∥x^(k) - x_⋆∥_2^2 + (2αβ - β)∥x^(k-1) - x_⋆∥_2^2
+ (β^2 + β - αβ)∥x^(k) - x^(k-1)∥_2^2 - (3αβ + 2α - α^2)∥P(x^(k) - x_⋆)∥_2^2.
Since β^2 + β - αβ ≥ 0 by the assumption in this theorem, we eliminate the term ∥x^(k) - x^(k-1)∥_2^2 by ∥x^(k) - x^(k-1)∥_2^2 ≤ 2∥x^(k) - x_⋆∥_2^2 + 2∥x^(k-1) - x_⋆∥_2^2 and have
∥x^(k+1) - x_⋆∥_2^2 ≤ (2β^2 + 3β + 1)∥x^(k) - x_⋆∥_2^2 + (2β^2 + β)∥x^(k-1) - x_⋆∥_2^2
- (3αβ + 2α - α^2)∥P(x^(k) - x_⋆)∥_2^2.
Now we focus on the last term ∥P(x^(k) - x_⋆)∥_2^2 and establish its relationship with ∥x^(k) - x_⋆∥_2^2. It follows from (<ref>) that the inequality (<ref>) also holds for Algorithm <ref>, i.e.
∥P(x^(k) - x_⋆)∥_2^2 = |a_{i_k}^⊤ x^(k) - b_{i_k}|^2/∥a_{i_k}∥_2^2 ≥ ∥Ax^(k) - b∥_2^2/∥A∥_F^2,
where the equality follows from (<ref>).
Let us show that x^(k) - x_⋆ ∈ Range(A^⊤) for all k ≥ 0, which can be proved by induction. By the definition of x^(0), x^(1), and x_⋆, we have x^(0) - x_⋆, x^(1) - x_⋆ ∈ Range(A^⊤). If x^(ℓ) - x_⋆ ∈ Range(A^⊤) holds for ℓ = 0, …, k, then
x^(k+1) - x_⋆ = x^(k) - x_⋆ - α((a_{i_k}^⊤ x^(k) - b_{i_k})/∥a_{i_k}∥_2^2) a_{i_k} + β(x^(k) - x^(k-1))
= (1 + β)(x^(k) - x_⋆) - β(x^(k-1) - x_⋆) - α((a_{i_k}^⊤ x^(k) - b_{i_k})/∥a_{i_k}∥_2^2) a_{i_k} ∈ Range(A^⊤).
Hence, by induction we have that x^(k) - x_⋆ ∈ Range(A^⊤) for all k ≥ 0. Hence, we have
∥P(x^(k) - x_⋆)∥_2^2 ≥ ∥Ax^(k) - b∥_2^2/∥A∥_F^2 ≥ (σ_min^2(A)/∥A∥_F^2)∥x^(k) - x_⋆∥_2^2.
Substituting it into (<ref>), we can get
∥x^(k+1) - x_⋆∥_2^2 ≤ ( 2β^2 + 3β + 1 - (3αβ + 2α - α^2) σ_min^2(A)/∥A∥_F^2 )∥x^(k) - x_⋆∥_2^2
+ (2β^2 + β)∥x^(k-1) - x_⋆∥_2^2.
Suppose that F^(k) := ∥x^(k) - x_⋆∥_2^2,
γ_1 = 2β^2 + 3β + 1 - (3αβ + 2α - α^2) σ_min^2(A)/∥A∥_F^2, and γ_2 = 2β^2 + β. Note that the conditions of Lemma <ref> are satisfied. Indeed, γ_2 ≥ 0, and if γ_2 = 0, then β = 0 and q = γ_1 ≥ 0.
The condition γ_1 + γ_2 < 1 holds by assumption.
Then by Lemma <ref>, one can get the theorem.
When β = 0, the conclusion in Theorem <ref> reduces to
∥x^(k) - x_⋆∥_2^2 ≤ ( 1 - (2α - α^2) σ_min^2(A)/∥A∥_F^2 )^k ∥x^(0) - x_⋆∥_2^2,
which is the convergence rate for the GRK method with relaxation. It can be observed that when β = 0 and α = 1, the bound established in Theorem <ref> is tighter than the one obtained here.
In <cit.>, the authors provided a linear convergence result for the momentum variant of the RK method of the same form as ours, where
γ̃_1 = 2β^2 + 3β + 1 - (αβ + 2α - α^2) σ_min^2(A)/∥A∥_F^2 and γ̃_2 = 2β^2 + β + αβ σ_max^2(A)/∥A∥_F^2.
These constants are larger than those obtained by Theorem <ref>, and hence, a smaller convergence factor can be guaranteed for the mGRK method.
Let us explain how to choose the parameters α and β such that γ_1 + γ_2 < 1 is satisfied in Theorem <ref>. Indeed, set
τ_1 := 4 - 3α σ_min^2(A)/∥A∥_F^2 and τ_2 := (2α - α^2) σ_min^2(A)/∥A∥_F^2.
Let α ∈ (0,1]; then we have τ_2 > 0 and the condition γ_1 + γ_2 < 1 is satisfied for all
0 ≤ β < (1/8)( √(τ_1^2 + 16τ_2) - τ_1 ).
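For illustration, the admissible range of β can be computed as follows (our own helper; it requires the smallest singular value, so it assumes A has full column rank and is mainly of theoretical interest).

import numpy as np

def beta_upper_bound(A, alpha):
    """Largest momentum parameter allowed by the bound above, for a given step-size alpha."""
    # smallest singular value squared over ||A||_F^2 (full column rank assumed)
    lam = np.linalg.svd(A, compute_uv=False)[-1] ** 2 / np.linalg.norm(A, "fro") ** 2
    tau1 = 4.0 - 3.0 * alpha * lam
    tau2 = (2.0 * alpha - alpha**2) * lam
    return (np.sqrt(tau1**2 + 16.0 * tau2) - tau1) / 8.0

# Usage: beta_upper_bound(np.random.default_rng(0).standard_normal((1000, 100)), alpha=1.0)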
Finally, let us compare the convergence rates obtained in Theorem <ref> and Remark <ref>.
From the definition of γ_1 and γ_2, we know that the convergence rate q(β) in Theorem <ref> can be viewed as a function of β. We further assume that 0 < α < min{2, 4∥A∥_F^2/(3σ_min^2(A))}, so that τ_1 ≥ 0. We note that 4∥A∥_F^2/(3σ_min^2(A)) ≥ 2 can be easily satisfied in practice, for instance, when Rank(A) ≥ 2. Then we have
q(β) ≥ γ_1 + γ_2 = 4β^2 + τ_1 β - τ_2 + 1 ≥ 1 - τ_2 = 1 - (2α - α^2) σ_min^2(A)/∥A∥_F^2 = q(0).
Since this lower bound is an increasing function of β, we know that the convergence rate guaranteed for mGRK is always inferior to that of GRK. Although numerical experiments suggest that mGRK performs better than GRK in practice, it is challenging to achieve a better convergence rate in theory for mGRK. One possible approach to overcome this problem is to use the adaptive strategy proposed in <cit.>.
§ NUMERICAL EXPERIMENTS
In this section, we describe some numerical results for the mGRK method for solving linear systems. We also compare the mGRK method with the GRK method and the stochastic conjugate gradient (SCG) method <cit.> on a variety of test problems. Our numerical results suggest that incorporating momentum in the GRK method can lead to improved convergence and efficiency in solving linear systems.
All methods are implemented in Matlab R2022a for Windows 10 on a desktop PC with the Intel(R) Core(TM) i7-10710U CPU @ 1.10GHz and 16 GB memory.
We use the following two types of coefficient matrices. One is matrices randomly generated by the Matlab function randn. For given m, n, r, and κ > 1, we construct a dense matrix A by A = U D V^⊤, where U ∈ ℝ^{m×r}, D ∈ ℝ^{r×r}, and V ∈ ℝ^{n×r}. Using Matlab notation, these matrices are generated by [U,∼]=qr(randn(m,r),0), [V,∼]=qr(randn(n,r),0), and D=diag(1+(κ-1).*rand(r,1)). So the condition number and the rank of A are upper bounded by κ and r, respectively.
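An equivalent construction in Python (a sketch mirroring the Matlab commands above, not the authors' script) is:

import numpy as np

def synthetic_matrix(m, n, r, kappa, seed=0):
    """Dense m x n test matrix A = U D V^T with rank <= r and condition number <= kappa.
    Assumes m >= r and n >= r so that the economy-size QR factors have r columns."""
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((m, r)))   # U: m x r with orthonormal columns
    V, _ = np.linalg.qr(rng.standard_normal((n, r)))   # V: n x r with orthonormal columns
    D = np.diag(1.0 + (kappa - 1.0) * rng.random(r))   # diagonal entries in [1, kappa)
    return U @ D @ V.T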
Another is real-world data which are available via SuiteSparse Matrix Collection <cit.>.
In our implementations, to ensure the consistency of the linear system, we first generate the solution by x=randn(n,1) and then set b=Ax. All computations are started from the initial vector x^(0) = 0. We stop the algorithms if the relative solution error (RSE) ∥x^(k) - A^†b∥_2^2/∥A^†b∥_2^2 ≤ 10^{-12}. All the results are averaged over 20 trials and we report the average number of iterations (denoted as Iter) and the average computing time in seconds (denoted as CPU). We set the step-size α = 1 and the relaxation parameter θ = 1/2 for the mGRK method. We use (<ref>) as the probability criterion for selecting the working row.
Figures <ref> and <ref> illustrate our experimental results with different choices of the momentum parameter β. When β = 0, the mGRK method is exactly the GRK method.
We note that in all of the presented tests, the momentum parameters β of the methods are chosen as non-negative constants that do not depend on parameters unknown to the users, such as σ_min^2(A). It is evident that the incorporation of the momentum term has resulted in an improvement in the performance of the GRK method. We can observe that β = 0.4 is consistently a good choice for achieving sufficiently fast convergence of the mGRK method on random matrices. For the datasets WorldCities and crew1, we find that β = 0.6 is a good option for WorldCities (we have not plotted the case where β = 0.7, as the mGRK method is observed to diverge in this case) and β = 0.4 is a good option for crew1. This indicates that we need to select appropriate values of β for handling different types of data.
In Figure <ref>, we present the computing time of GRK, SCG, and mGRK with random matrices A, where m = 1000, 2000, …, 10000, n = 100, κ(A) = 10 (top) or κ(A) = 40 (bottom), and r = 100 (left) or r = 90 (right). From the figure, we can observe that the GRK method and the SCG method exhibit comparable performance. In particular, if the condition number of the coefficient matrix is larger, the SCG method may outperform the GRK method. Additionally, regardless of whether the coefficient matrix A ∈ ℝ^{m×n} is full rank or rank-deficient, the mGRK method outperforms both the GRK method and the SCG method in terms of CPU time. Specifically, the mGRK method is approximately two times faster than the GRK method.
Table <ref> presents the iteration counts and computing times for the GRK, SCG, and mGRK methods when applied to sparse matrices obtained from the SuiteSparse Matrix Collection <cit.>. The matrices used in the experiments include bibd_16_8, crew1, WorldCities, nemsafm, model1, ash958, Franz1, and mk10-b2. Some of these matrices are full rank, while others are rank-deficient. From Table <ref>, it can be observed that both the GRK and mGRK methods are more effective than the SCG method. This is because, in the case of sparse matrices, the computation of the parameters ε_k, i.e., of the residual Ax^(k) - b, in both GRK and mGRK is significantly reduced. It can also be observed that the mGRK method with an appropriate momentum parameter generally exhibits better performance than the GRK method.
§ CONCLUDING REMARKS
In this note, we proved that the linear convergence of the greedy randomized Kaczmarz (GRK) method is deterministic. Moreover, we showed that the deterministic convergence can be inherited to the heavy ball momentum variant of the GRK method. The preliminary numerical results showed that the mGRK method performs better than the GRK method.
It can be seen from (<ref>) that, theoretically, the optimal choices of α and β for the mGRK method require knowledge of the smallest nonzero singular value of the matrix, which is often not accessible.
Furthermore, during our experiments, we have observed the need to select suitable parameters β for different types of data. Hence, it would be a valuable topic to investigate the GRK method with adaptive heavy ball momentum <cit.>, where the parameters can be learned adaptively using iterative information. This adaptive approach could potentially overcome the challenge of selecting optimal parameters and improve the overall performance of the GRK method in various scenarios.
|
http://arxiv.org/abs/2307.03227v1
|
20230706180003
|
Single-event likelihood of star cluster properties with LIGO-Virgo-Kagra binary black hole observations
|
[
"Ken K. Y. Ng",
"Konstantinos Kritos",
"Andrea Antonelli",
"Roberto Cotesta",
"Emanuele Berti"
] |
astro-ph.HE
|
[
"astro-ph.HE",
"astro-ph.CO",
"gr-qc"
] |
[email protected]
The population of binary black hole mergers observed in gravitational waves, together with astrophysical simulations, can help us to understand the properties of the progenitors and the binary formation mechanisms in different astrophysical scenarios.
Here we focus on dynamical formation in star clusters. We use the third gravitational-wave transient catalog (GWTC-3) and Rapster, a rapid code to simulate cluster dynamics, to show that it is possible to construct the single-event likelihood of star cluster properties from individual observations. We find that the measured primary mass in a binary black hole merger correlates with the measured star cluster mass, because the mass spectrum of the primary component increases with the mass of the cluster.
This trend may be caused by two physical mechanisms: (i) the more efficient production of hierarchical mergers with primary mass above ∼ 40 M_⊙ for cluster masses of ≳ 10^6 M_⊙, and (ii) the suppression of more massive first-generation binaries, which happens because ejected binaries do not merge within the lookback time for cluster masses of ≲ 10^5 M_⊙.
The formalism presented here can be generalized to infer the population properties of binary progenitors in more realistic scenarios involving multiple formation channels.
Single-event likelihood of star cluster properties with LIGO-Virgo-Kagra binary black hole observations
Emanuele Berti
August 1, 2023
=======================================================================================================
§ INTRODUCTION
The recent catalogs of gravitational-wave (GW) transients released by the LIGO-Virgo-Kagra Collaboration <cit.> motivated efforts to investigate the properties of the binary black hole (BBH) population and their possible formation channels. There are various ways to address this problem.
One approach is to construct phenomenological models that reproduce the main distinctive features of astrophysical formation channels <cit.>.
This is a sensible approach because it requires minimal astrophysical modeling.
For instance, certain broad features of the population – such as the BH spin alignment with the orbital angular momentum <cit.> or the measurement of binary component masses populating the pair instability supernova (PISN) mass gap, above ∼ 40 <cit.> – may provide evidence for multiple formation channels. This is because isolated binary evolution is expected to produce mostly binaries with aligned spins and masses below the mass gap <cit.>, while the spins of BHs produced through dynamical formation in star clusters should be isotropically oriented, and hierarchical mergers can populate the PISN mass gap <cit.>. Other features that can be captured by phenomenological models include the time (or redshift) evolution of the merger rate density <cit.>, or the presence of peaks and tails in the redshift evolution of merger rate densities due to putative Population III or primordial BBH components, which could be observable with next-generation GW detectors <cit.>.
One drawback of phenomenological models is that they are affected by modeling systematics: for example, the class of parametrized functions used to reproduce the data may be too restrictive, leading to an erroneous mapping between the parametrized models and detailed astrophysical simulations.
A second approach is to infer the empirical distribution using data-driven models, leaving the interpretation of the resulting distribution to the postprocessing stage <cit.>. Even within this approach, finding a suitable statistical metric connecting data with astrophysical simulations could be problematic.
A third approach (and one that we follow in this paper) is to build a direct mapping between the measured parameters of a BBH merger event
and the observables predicted by astrophysical simulations <cit.>.
While the inference is still limited by our incomplete knowledge of astrophysical formation scenarios, this approach allows for in-depth studies of the astrophysical mechanisms that correspond to certain features seen in the populations.
There have been many attempts to infer
some of the key astrophysical parameters affecting the isolated binary evolution scenario, as well as the relative contribution (or branching ratios) of multiple formation channels: see e.g. <cit.> for an incomplete list.
In this paper we avoid the complications related to multiple formation channels, and we focus on the dynamical formation scenario in dense star clusters.
Each cluster in the star cluster population has different properties, and therefore it produces a different BBH subpopulation. Here we develop a two-level hierarchical Bayesian framework that can ultimately infer the properties of the star cluster population from BBH merger observations (Sec. <ref>).
We focus on the first step in this framework, which consists in constructing the single-event likelihood of star cluster properties: i.e., we aim to identify clusters with parameters which are more likely to generate a particular BBH observed in the third GW transient catalog (GWTC-3) <cit.>.
To this end, we use a code for rapid simulations of cluster dynamics, Rapster <cit.>, to build a statistical mapping between the BBH parameters and the star cluster parameters.
By analyzing these simulations we observe a positive correlation between the measured BBH primary mass and the inferred cluster mass.
As we discuss in Sec. <ref>, this correlation may be explained by the cluster mass scaling of the efficiency in the production of hierarchical mergers and by the inspiral timescale of the ejected binaries.
In Sec. <ref> we discuss some technical aspects and future prospects to interpret the observed BBH population using astrophysical simulations.
In Appendix <ref> we give details on the cluster simulations and on the kernel density estimation (KDE) we use to approximate the joint distribution from the simulated mergers.
§ TWO-LEVEL HIERARCHICAL BAYESIAN FRAMEWORK
As the Universe evolves, numerous star clusters form with redshift-dependent rates and with different physical properties (such as total mass, radius, and metallicity) <cit.>.
Each cluster evolves dynamically and produces an ensemble of BBHs whose statistical distribution depends on the properties of the host cluster.
Therefore, the distribution of BBH properties observed by LVK in the cluster scenario should be modeled by considering the population of BBHs originating from a population of star clusters.
The “inverse problem” consists of inferring the properties of the star cluster population that can host BBH mergers observable in GW detectors.
One may attempt to perform the full hierarchical analysis by simulating BBHs drawing from different realizations of cluster populations.
However, it is more beneficial to consider a two-level hierarchical model that can break down the inference, as follows.
In the first level of hierarchy, we map the single-event likelihood of BBH parameters to the likelihood of parameters of individual clusters that are likely to produce them, using the BBH properties predicted by star cluster simulations.
In the second level of hierarchy, we combine these single-event cluster likelihoods and infer the distribution of the cluster properties.
To see how the single-event cluster likelihood enters in the hierarchical framework, we derive it using a top-down approach.
The full hierarchical likelihood based on a Poisson process of data generation is given by <cit.>
p(Λ | d) ∝ e^{-N_det(Λ)} ∏_{i=1}^N ∫ p(d_i | θ_i) (dN/dθ)(θ_i | Λ) dθ_i,
where d = {d_i}_{i=1}^N is the data set of N BBH observations, p(d_i | θ_i) is the individual likelihood of the i-th BBH characterized by parameters θ_i such as component masses and spins, dN/dθ is the differential number of BBHs expected for a given cluster population characterized by hyperparameters Λ, and N_det is the number of detectable BBHs:
N_det(Λ) = ∫ (dN/dθ)(θ | Λ) ϵ_det(θ) dθ,
where 0 ≤ ϵ_det(θ) ≤ 1 is the detection efficiency for a BBH merger with binary parameters θ.
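In practice, N_det(Λ) is often estimated by Monte Carlo reweighting of a campaign of recovered injections; the following Python sketch shows this standard estimator under our own naming conventions, without implying that this is the exact implementation used here.

import numpy as np

def expected_detections(dN_dtheta, found_thetas, p_draw, n_drawn):
    """Monte Carlo estimate of N_det(Lambda) = int dN/dtheta * eps_det dtheta.
    dN_dtheta: callable population model evaluated at a parameter vector theta;
    found_thetas: iterable of parameter vectors of injections recovered by the search;
    p_draw: array of densities from which those injections were drawn;
    n_drawn: total number of injections drawn (found or not)."""
    weights = np.array([dN_dtheta(th) for th in found_thetas]) / p_draw
    return weights.sum() / n_drawn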
The differential rate can be written as
(dN/dθ)(θ | Λ)
= ∫ (d^2 N/dθ dλ)(θ, λ | Λ) dλ
= ∫ p(θ | λ) N_BBH(λ) (dN_cl/dλ)(λ | Λ) dλ,
where p(θ | λ) is the distribution of θ originating from a single cluster characterized by some λ, N_BBH(λ) is the number of BBHs produced by the cluster, and N_cl is the total number of clusters.
For example, λ could be the mass of a single cluster, and Λ the power-law index of the cluster mass function.
Both p(θ | λ) and N_BBH(λ) are predicted by the simulation, while p(d_i | θ_i) is obtained from GW observations.
The integral in Eq. (<ref>) is equivalent to the expected number of BBHs averaged over the individual BBH likelihoods. It can be rewritten as
⟨N⟩_i(Λ)
= ∬ p(d_i | θ_i) p(θ_i | λ_i) N_BBH(λ_i) (dN_cl/dλ)(λ_i | Λ) dθ_i dλ_i
= ∫ p(d_i | λ_i) N_BBH(λ_i) (dN_cl/dλ)(λ_i | Λ) dλ_i,
where the individual cluster likelihood of the i-th observation, p(d_i | λ_i), is the marginalization of p(θ_i | λ_i) over the individual BBH likelihoods:
p(d_i | λ_i) ≡ ∫ p(d_i | θ_i) p(θ_i | λ_i) dθ_i.
This result may also be obtained by applying Bayes' theorem and marginalizing over θ_i on the joint distribution p(d_i, θ_i | λ_i) “directly” in the bottom-up approach.
This procedure is practically advantageous, as we only need to approximate p(θ | λ) and N_BBH(λ) once with the finite samples produced by the simulations.
On the contrary, the emulation of the entire BBH population dN/dθ originating from all possible cluster populations is limited to the choices of the prior functions used in the training set, and thus hinders the use of the more flexible nonparametric models for the cluster population that we are interested in.
In the following, we will study solely the single-event cluster likelihood in Eq. (<ref>) for selected events in GWTC-3.
§ INDIVIDUAL LIKELIHOOD OF CLUSTER PROPERTIES
The best-measured BBH parameters in current GW observations, and therefore the parameters that are most informative in the inference of formation channels, are the (source-frame) masses of the primary, m_1, and secondary, m_2; the effective spin projected along the orbital angular momentum, χ_eff; and the redshift, z.
The Rapster code has a total of 19 input parameters. While it is hopeless to constrain all of these parameters, as a proof of principle we explore two of the most important intrinsic properties of individual clusters: the total mass of the cluster, M_cl, and the half-mass radius at the time of cluster formation, r_h.
In other words, we set θ = (m_1, m_2, χ_eff, z) and λ = (M_cl, r_h) in the formalism of Sec. <ref>.
We reweight the LVK posterior samples and obtain the likelihood samples of θ.
We limit the cluster parameter space to the ranges M_cl ∈ [10^4, 10^7] M_⊙ and r_h ∈ [0.5, 3] pc, respectively, based on current observations of young star clusters <cit.>.
The simulation samples for the constructing the KDE are generated by the following settings.
The initial cluster masses M_cl are drawn from a power-law distribution with a spectral index -2 in the range [10^{3.7}, 10^{7.3}] M_⊙. To avoid hard cutoffs in the range [10^4, 10^7] M_⊙ where we construct our KDE, we taper the distribution using a Tukey window function with shape parameter 0.18.
The initial half-mass radius r_h is drawn from a linear distribution in the range [0.3, 3] pc.
This choice is to balance the number of mergers per cluster in the simulation set, which scales with the inverse of the cluster radius.
In the inference, we obtain the likelihood of M_cl and r_h by reweighting the priors chosen above.
The other cluster parameters and the initial cluster conditions are fixed as follows.
We use SEVN to compute the initial mass function of BHs so that the PISN cut-off is at ∼ 40 M_⊙, with the exact value depending on the metallicity <cit.>.
The dimensionless natal spin of first-generation BHs is sampled from a uniform distribution in the range [0, 0.2], as in Ref. <cit.>.
The masses and spins of BBH merger remnants are computed using the precession code <cit.>.
The initial central stellar density is calculated as 3 M_cl/(4π (r_h/1.3)^3), assuming a Plummer profile <cit.>.
These two choices are motivated by observations of star clusters in the local Universe <cit.>.
Moreover, we assume a young massive cluster population with redshift of cluster formation and mean metallicity sampled from the Madau-Fragos distribution <cit.>.
We also apply a log-normal spread with a variance of 0.3 in the metallicity distribution at each redshift.
The rest of the input parameters are set to their default Rapster values, as listed in Table I of Ref. <cit.>.
With one node (48 processors) at the Maryland Advanced Research Computing Center at Johns Hopkins, we can simulate ∼ 10^6 star clusters within 2 days.
To approximate the conditional probability distribution p(θ | λ), we employ a Gaussian KDE on a set of ∼ 7× 10^5 simulated BBH mergers generated by the synthesis code Rapster.
Given p(θ | λ), we can evaluate Eq. (<ref>) for each BBH observation in GWTC-3 released by the LVK Collaboration.
Since the integral is generally intractable, we sample the likelihood in Eq. (<ref>) by Monte Carlo methods.
Technical details on the KDE and on the integration are given in Appendix <ref>.
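As an illustration of how such a mapping can be evaluated in practice, the sketch below builds a Gaussian KDE of the simulated (θ, λ) samples and averages the resulting conditional density over the per-event likelihood samples of θ; all names are our own, and a realistic analysis would additionally handle parameter rescaling, KDE boundary effects, and sample weights.

import numpy as np
from scipy.stats import gaussian_kde

def cluster_likelihood(theta_event, sim_theta, sim_lambda, lambda_grid):
    """p(d_i | lambda) on a grid, up to an overall constant.
    theta_event: (S, d_theta) likelihood samples of the BBH parameters for one event;
    sim_theta, sim_lambda: (N, d_theta) and (N, d_lambda) simulated samples;
    lambda_grid: iterable of cluster-parameter vectors at which to evaluate the likelihood.
    p(theta | lambda) is approximated as a joint KDE in (theta, lambda) divided by a
    marginal KDE in lambda, then averaged over the event's theta samples."""
    joint = gaussian_kde(np.hstack([sim_theta, sim_lambda]).T)   # KDE over (theta, lambda)
    marg = gaussian_kde(sim_lambda.T)                            # KDE over lambda alone
    like = np.zeros(len(lambda_grid))
    for j, lam in enumerate(lambda_grid):
        lam_block = np.tile(lam, (len(theta_event), 1))
        joint_vals = joint(np.hstack([theta_event, lam_block]).T)
        like[j] = np.mean(joint_vals) / marg(np.atleast_2d(lam).T)[0]
    return like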
In Fig. <ref> we show the joint likelihoods of (M_cl, r_h) for two events: GW190521 <cit.>, and GW151226 <cit.>.
These events were chosen because they have very different primary masses: GW190521 has m_1 ∼ 100 M_⊙, suggestive of a hierarchical merger origin <cit.>, while GW151226 has m_1 ∼ 14 M_⊙, a more typical value for events in the GWTC-3 catalog <cit.>.
We find that the likelihood for r_h is almost uninformative even for GW190521.
This is because, in the Rapster simulations <cit.>, the compactness of the cluster mostly affects the number of BBHs produced in the cluster, i.e. N_BBH, rather than the shape of the BBH mass distribution.
As N_BBH is not involved in the single-event cluster likelihood, we do not extract any new information about r_h from the single-event analysis.
However, we note that the current version of Rapster does not include stellar mergers, which would allow for the possibility to form initial BHs within the PISN mass gap.
This mechanism may skew the distribution of m_1 to higher values for very compact clusters <cit.>.
The key feature of Fig. <ref> is that the likelihood of M_cl in the two systems is very different: GW190521 favors M_cl ≳ 10^6 M_⊙, and GW151226 favors M_cl ≲ 10^6 M_⊙.
This may hint at a positive correlation between the primary BH mass m_1 and the probable cluster mass that produces the corresponding BBH merger event.
To test this hypothesis, we have analyzed all BBH events in GWTC-3.
The results of this analysis are shown in Fig. <ref>, where we show the inferred value of M_cl as a function of m_1 for the GWTC-3 catalog. Most events are shown in grey, but a selected subset (listed in the legend) is highlighted in color. The highlighted subset is chosen to cover three different ranges of m_1: values in the PISN mass gap (GW190521 and GW191109, with m_1 ≳ 40 M_⊙), events with 40 M_⊙ ≳ m_1 ≳ 20 M_⊙ (GW150914 and GW190412), and low primary mass events with m_1 ≲ 20 M_⊙ (GW151226 and GW190930).
As anticipated, we observe that the inferred values of M_cl (within the 90% credible interval) tend to increase as a function of the measured values of m_1.
We have also checked that the correlation of M_cl with other binary parameters (such as χ_eff and q) is not as significant as the correlation with m_1.
To understand this correlation, in Fig. <ref> we compare the primary mass distributions p(m_1) generated by clusters having masses in different ranges (highlighted by histograms in different colors).
These mass distribution histograms have two major features.
First of all, the relative fraction of BBHs above the PISN mass gap is larger when ≥ 10^5 (i.e., for the orange and green histograms).
Hierarchical mergers within the mass gap occur more frequently in more massive clusters, because these clusters have larger escape velocities and thus they are more likely to retain the merger remnants despite their gravitational recoils.
This is compatible with the correlation between and primary BHs having m_1≳ 40 observed in Fig. <ref>.
Secondly, the mass distribution of first-generation mergers below the PISN mass gap (those with m_1 ≲ 40) is skewed toward lower values for ≤ 10^5 (blue histogram): for these light clusters, the peak in m_1 decreases from ∼ 35 to ∼ 15.
This trend may be qualitatively explained by a combination of the ejection mechanism discussed above, and the finite merging time window.
In a star cluster, first-generation mergers are typically formed by a combination of mass segregation and exchange interactions.
The majority of first-generation mergers are nearly equal-mass systems, whose critical semimajor axis for getting ejected out of the cluster after a binary-single interaction scales with ∝ m_1/: see e.g. Eq. (8) in Ref. <cit.>, or Eq. (8) in Ref. <cit.>.
Therefore, in less massive clusters the more massive BBHs are ejected at an earlier stage of their inspiral evolution.
Since the GW inspiral timescale τ∝^4/m_1^3 <cit.>, the typical inspiral time for ejected mergers has the scaling τ∝ m_1/^4.
As the binaries can only merge within the (finite) cosmic time since their formation,
the critical m_1 below which BBHs can merge scales with m_1 ∝^4. This leads to the observed shift in the primary mass distribution as decreases.
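To make this scaling explicit, denote by a_ej the semimajor axis at ejection, by M_cl the cluster mass, and by t_H the available cosmic time (this notation is ours and only summarizes the two proportionalities quoted above); then
a_ej ∝ m_1/M_cl ,   τ ∝ a_ej^4/m_1^3 ∝ m_1/M_cl^4 ,
so requiring τ ≲ t_H gives a critical primary mass that scales as m_1 ∝ M_cl^4.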
Note that this is only a qualitative explanation, and the quantitative correlation between and m_1 for first-generation BHs is very likely model-dependent.
The ejection efficiency and the resulting merging timescale are sensitive to nonlinear effects in cluster dynamics, to the formation redshift and to the cluster metallicity. All of these effects may modify the shape of the primary mass distribution at different merger redshifts.
§ DISCUSSION
In this paper we propose a two-level hierarchical framework to analyze the BBH population observed in GWs under the assumption that the merger events are produced dynamically in star clusters.
The two-level hierarchy is based on the idea that each cluster in the cluster population has different physical properties, and therefore produces a different BBH population.
The implication is that we can not only characterize the BBH population produced by an assumed cluster population, but (vice versa) we can also infer the population properties of the clusters, given a physical model of cluster dynamics.
To carry out this hierarchical inference, we first need to perform single-event inference – that is, we need to identify the cluster properties that are most likely to produce any observed BBH merger event.
Estimating the single-event likelihood of cluster properties requires a knowledge of how the distribution of BBH parameters (such as binary masses, spins, and redshift) depend on the individual cluster properties, such as the cluster mass and radius. In this paper we carried out a proof-of-principle demonstration that this hierarchical inference is possible using, for illustrative purposes, astrophysical models built with the Rapster code.
With Rapster we can simulate the formation of BBHs in star clusters, and then approximate the joint distribution of BBH parameters and cluster parameters from the simulated BBH samples by KDE methods.
We find that the inferred cluster mass is correlated with the measured BBH primary mass, as shown in Fig. <ref>.
The correlation is a result of the variation of the primary mass distribution as a function of cluster mass observed in Fig. <ref>: more massive clusters enhance the production of hierarchical mergers above the PISN mass gap, and less massive clusters eject more massive first-generation binaries at semimajor axes that are too large to efficiently produce mergers.
As emphasized in the main text, this is a mostly qualitative explanation: the extent of the shift of the primary mass distribution in first-generation BBHs is sensitive to the details of cluster dynamics and to the initial conditions of cluster formation (including redshift and metallicity).
For cluster radii ∈ [0.5, 3] pc, the radius only affects the overall number of BBHs produced in each cluster (because stellar mergers have not been included in the model), and it has a negligible effect on the primary mass distribution. In a more realistic scenario the radius should play an important role in the inference of cluster population properties, because the number of BBHs produced per cluster entering the next hierarchy depends on the cluster radius, and affects the expected BBH merger rate.
Our results are again model-dependent, and therefore it would be useful to validate this trend by comparing against other existing codes <cit.>.
While the KDE method employed in this work suffices to capture the broad features discussed above, it may not be robust enough to proceed to the next hierarchy and infer the properties of the cluster population.
Performing the full hierarchical analysis may require more advanced techniques, such as deep generative modeling, to better approximate the multidimensional probability density functions involved in Eq. (<ref>).
For example, one may learn p(|) and () separately by simulating the BBH populations given a set of cluster properties {}, or work with the joint distribution p(, |) at a chosen cluster population characterized directly by , and obtain p(|)() ∝ p(, |)/p( | ) by reweighing the chosen prior of the cluster population.
Finally, we note that the two-level hierarchy may be generalized to include contributions from multiple formation channels.
This may be achieved by using the relevant parametrization to set up Eq. (<ref>) for each channel and build a mixture model in Eq. (<ref>).
For example, one may obtain the likelihood of progenitor redshift and metallicity based on binary evolution simulations for galactic field binaries (either by backpropagation or though other numerical techniques, as in Refs. <cit.>); combine with the similar likelihood of cluster binaries; and trace the evolution of star formation rate, cluster formation rate, and stellar metallicity all at once.
The full hierarchical inference will be presented in future work.
We thank Davide Gerosa, Will Farr, Chase Kimball, Sharan Banagiri, and Vicky Kalogera for fruitful discussions.
KKYN is supported by a Miller Fellowship at Johns Hopkins University.
K.K., A.A., R.C. and E.B. are supported by NSF Grants No. AST-2006538, PHY-2207502, PHY-090003 and PHY-20043, and NASA Grants No. 20-LPS20- 0011 and 21-ATP21-0010. This research project was conducted using computational resources at the Maryland Advanced Research Computing Center (MARCC).
This research has made use of data or software obtained from the Gravitational Wave Open Science Center (gwosc.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. LIGO Laboratory and Advanced LIGO are funded by the United States National Science Foundation (NSF) as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain. KAGRA is supported by Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan Society for the Promotion of Science (JSPS) in Japan; National Research Foundation (NRF) and Ministry of Science and ICT (MSIT) in Korea; Academia Sinica (AS) and National Science and Technology Council (NSTC) in Taiwan.
§ CONSTRUCTION OF KDE AND IMPORTANCE SAMPLING
Rapster enables rapid simulations of BBHs generated from a population of clusters.
Therefore, it is more convenient to (i) choose a cluster population (parametrized by Λ_ cl) that produces enough simulated BBHs in the range of the BBH parameter space (our simulated samples), (ii) perform kernel density estimation (KDE) to approximate the joint distribution p(, |) from the simulated mergers, and (iii) obtain p(|) = p(, |)/p(|) from Bayes' theorem.
Here, p(|) contains a factor of the differential merger rate, (), on top of the chosen prior of the cluster population put into the simulations, since each input cluster may produce a different number of mergers.
To ensure that p( | ) is properly normalized to unity for each , we require a second KDE for p(|) ∝() p(|), which can also be constructed from the simulated samples, because the count of is proportional to the differential merger rate.
We employ from , written in the infrastructure, to speed up the KDE. We use ∼ 7× 10^5 simulation points, with a bandwidth of ∼ 0.25 set by Silverman's rule.
One may attempt to first approximate the integral of Eq. (<ref>) by an importance sum over the samples of individual BBH likelihoods, and then draw the sample from the approximated likelihood by Monte Carlo methods.
Since the parameter space of has a relatively low dimension, we simplify the sampling procedures further by importance sampling of the joint distribution p(d_i, _i | _i) ≡ p(d_i | _i) p(_i | _i).
In practice, we append an additional set of _i samples, {_i,j}_j=1^K, drawn from a uniform distribution U(_i)∝ 1 to the set of BBH likelihood samples, {_i,j}_j=1^K.
The aggregated set {_i,j, _i,j}_j=1^K follows the joint distribution p(d_i | _i) U(_i).
The desired set of _i samples that are representative of the marginalized likelihood p(d_i | _i) is equivalent to the set of {_i,j}_j=1^K weighed by {w^ cl_i,j∝ p(_i,j | _i,j)}_j=1^K.
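A minimal Python sketch of this reweighting step could look as follows (variable names are placeholders; p_cond stands for the KDE-based conditional density of the cluster parameters given the BBH parameters described above).

import numpy as np

rng = np.random.default_rng(0)

def sample_cluster_posterior(theta_samples, p_cond, lcl_lo, lcl_hi, n_draw=10_000):
    # theta_samples: K posterior samples of the BBH parameters of one event.
    # p_cond(lam, theta): callable returning the KDE-based density p(Lambda | theta).
    # lcl_lo, lcl_hi: bounds of the uniform auxiliary prior U(Lambda) on (log10 Mcl, log10 rh).
    K = theta_samples.shape[0]
    # Append one uniformly drawn cluster point Lambda_j to each BBH likelihood sample theta_j.
    lambda_samples = rng.uniform(lcl_lo, lcl_hi, size=(K, len(lcl_lo)))
    # Importance weights w_j proportional to p(Lambda_j | theta_j).
    w = np.array([p_cond(lam, th) for lam, th in zip(lambda_samples, theta_samples)])
    w /= w.sum()
    # Resampling the aggregated set with weights w yields draws representative of p(d | Lambda).
    idx = rng.choice(K, size=n_draw, replace=True, p=w)
    return lambda_samples[idx]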
|
http://arxiv.org/abs/2307.00559v1
|
20230702125254
|
Entropy Accumulation under Post-Quantum Cryptographic Assumptions
|
[
"Ilya Merkulov",
"Rotem Arnon-Friedman"
] |
quant-ph
|
[
"quant-ph",
"cs.CR"
] |
The Center for Quantum Science and Technology, Department of Physics of Complex Systems, Weizmann Institute of Science, Rehovot, Israel
In device-independent (DI) quantum protocols, the security statements are oblivious to the characterization of the quantum apparatus – they are based solely on the classical interaction with the quantum devices as well as some well-defined assumptions.
The most commonly known setup is the so-called non-local one, in which two devices that cannot communicate between themselves present a violation of a Bell inequality.
In recent years, a new variant of DI protocols, that requires only a single device, arose.
In this novel research avenue, the no-communication assumption is replaced with a computational assumption, namely, that the device cannot solve certain post-quantum cryptographic tasks.
The protocols analyzed in the literature for this setting, e.g., for randomness certification, use ad hoc proof techniques, and the strength of the achieved results is hard to judge and compare due to their complexity.
Here, we build on ideas coming from the study of non-local DI protocols and develop a modular proof technique for the single-device computational setting.
We present a flexible framework for proving the security of such protocols by utilizing a combination of tools from quantum information theory, such as the entropic uncertainty relation and the entropy accumulation theorem.
This leads to an insightful and simple proof of security, as well as to explicit quantitative bounds.
Our work acts as the basis for the analysis of future protocols for DI randomness generation, expansion, amplification and key distribution based on post-quantum cryptographic assumptions.
Entropy Accumulation under Post-Quantum Cryptographic Assumptions
Ilya Merkulov and Rotem Arnon-Friedman
August 1, 2023
=================================================================
§ INTRODUCTION
The fields of quantum and post-quantum cryptography are rapidly evolving.
In particular, the device-independent (DI) approach for quantum cryptography is being investigated in different setups and for various protocols.
Consider a cryptographic protocol and a physical device that is being used to implement the protocol. The DI paradigm states that when proving the security of the protocol, one should treat the device itself as an untrusted one, prepared by an adversarial entity.
Only limited, well-defined assumptions regarding the inner-workings of the device are placed.
In such protocols, the honest party, called the verifier here, interacts with the untrusted device in a black-box manner, using classical communication. The security proofs are then based on properties of the transcript of the interaction, i.e., the classical data collected during the execution of the protocol, and the underlying assumptions.
The most well-studied DI setup is the so called “non-local setting” <cit.>. There, the protocols are being implemented using (at least) two untrusted devices and the assumption being made is that the devices cannot communicate between themselves during the execution of the protocol (or parts of it).
In recent years, another variant has been introduced: Instead of working with two devices, the protocol requires only a single device and the no-communication assumption is replaced by an assumption regarding the computational power of the device. More specifically, one assumes that during the execution of the protocol, the device is unable to solve certain computational problems, such as Learning With Errors (LWE) <cit.>, which are believed to be hard for a quantum computer. (The exact setup and assumptions are explained in Section <ref>).
Both models are DI, in the sense that the actions of the quantum devices are uncharacterized.
Figure <ref> schematically presents the two scenarios.
As in practice it might be challenging to assure that two quantum devices do not communicate, as required in the non-local setting, the incentive to study what can be done using only a single device is high.
Indeed, after the novel proposals made in <cit.> for DI protocols for verification of computation and randomness certification, many more protocols for various tasks and computational assumptions were investigated; see, e.g., <cit.>.
Experimental works also followed, aiming at verifying quantum computation in the model of a single computationally restricted device <cit.>.
With this research agenda advancing, more theoretical and experimental progress needs to be made before one can estimate whether this avenue is of relevance for future quantum technologies or of sheer theoretical interest.
In this work we are interested in the task of generating randomness.[Unless otherwise written, when we discuss the generation of randomness we refer to a broad family of tasks: randomness certification, expansion and amplification as well as, potentially, quantum key distribution. In the context of the current work, the differences are minor.]
We consider a situation in which the device is prepared by a quantum adversary and then given to the verifier (the end-user or customer). The verifier wishes to use the device in order to produce a sequence of random bits. In particular, the bits should be random also from the perspective of the quantum adversary.
The protocol that indicates the form in which the verifier interacts with the device (interchangeably also called the prover) should be constructed in a way that guarantees that either the verifier aborts with high probability if the device is not to be trusted or that, indeed, we can certify that the bits produced by the device are random and unknown to the adversary.
Let us slightly formalize the above.
The story begins with the adversary preparing a quantum state ρ^in_PE; the marginal ρ^in_P is the initial state of the device given to the verifier while the adversary keeps the quantum register E for herself.
In addition to the initial state ρ^in_P, the device is described by the quantum operations, e.g., measurements, that it performs during the execution of the protocol.
We assume that the device is a quantum polynomial time (QPT) device and thus can only perform efficient operations. Namely, the initial state ρ^in_P is of polynomial size and the operations can be described by a polynomial size quantum circuit; these are formally defined in Section <ref>.
The verifier can perform only (efficient) classical operations.
Together with the verifier, the initial state of all parties is denoted by ρ^in_PVE.[In a simple scenario one can consider ρ^in_PVE=ρ^in_PE⊗ρ^in_V, i.e., the verifier is initially decoupled from the device and the adversary. This is, however, not necessarily the case in, e.g., randomness amplification protocols <cit.>. We therefore allow for this flexibility with the above more general notation.]
The verifier then executes the considered protocol using the device.
All protocols consist of what we call “test rounds” and “generation rounds”.
The goal of a test round is to allow the verifier to check that the device is performing the operations that it is asked to apply by making it pass a test that only certain quantum devices can pass. The test and its correctness are based on the chosen computational assumption.[In the non-local setting, the test is based on the violation of a Bell inequality, or winning a non-local game with sufficiently high winning probability. Here the computational assumption “replaces” the Bell inequality.]
For example, in <cit.>, the cryptographic scheme being used is “Trapdoor Claw-Free Functions” (TCF)– a family of pairs of injective functions f_0,f_1:{0,1}^n →{0,1}^n.
Informally speaking, it is assumed that, for every image y: (a) Given a “trapdoor” one could classically and efficiently compute two pre-images x_0 ,x_1 such that f_0(x_0)=f_1(x_1)=y; (b) Without a trapdoor, even a quantum computer cannot come up with x_0 ,x_1 such that f_0(x_0)=f_1(x_1)=y (with high probability).
While there exists no efficient quantum algorithm that can compute both pre-images x_0, x_1 for a given image y without a trapdoor, a quantum device can nonetheless hold a superposition of the pre-images by computing the function over a uniform superposition of all inputs to receive ∑_y∈{0,1}^n(|0,x_0⟩ + |1,x_1⟩)|y⟩.
These insights (and more– see Section <ref> for details) allow one to define a test based on TCF such that a quantum device that creates ∑_y∈{0,1}^n(|0,x_0⟩ + |1,x_1⟩)|y⟩ can win while other devices that do not hold a trapdoor cannot. Moreover, the verifier, holding the trapdoor, will be able to check classically that the device indeed passes the test.
Let us move on to the generation rounds. In these rounds, the device produces the output bits O, which are supposed to be random.
During the execution of the protocol some additional information, such as a chosen public key for example (or whatever is determined by the protocol), may be publicly announced or leaked; we denote this side information by S.
After executing all the test and generation rounds the verifier checks if the average winning probability in the test rounds is higher than some pre-determined threshold probability ω∈(0,1). If it is, then the protocol continues and otherwise aborts.
Let the final state of the entire system, conditioned on not aborting the protocol, be ρ_|Ω.
To show that randomness has been produced, one needs to lower-bound the conditional smooth min-entropy H^ε_min(O|SE)_ρ_|Ω <cit.> (the formal definitions are given in Section <ref>). Indeed, this is the quantity that tightly describes the amount of information-theoretically uniform bits that can be extracted from the output O, given S and E, using a quantum-proof randomness extractor <cit.>.
The focus of our work is to supply explicit lower-bounds on H^ε_min(O|SE)_ρ_|Ω in a general and modular way.
An important remark is in order before continuing. Notice that in the above the device is computationally bounded and yet we ask to get output bits O that are information-theoretically secure with respect to the adversary. This means that the adversary may keep her system E and perform on it, using the knowledge of S, any, not necessarily efficient, operation. By proving such a strong statement, it is implied that the computational assumption only needs to hold during the execution of the protocol in order to provide ever-lasting security of the final output. This also means that the outcomes are random with respect to the adversary even if the computational assumption, e.g., hardness of LWE, is broken after the creation of the bits. This property is termed “security lifting” and is a fundamental and crucial feature of all DI protocols based on computational assumptions.
§.§ Motivation of the current work
The setup of two non-communicating devices has naturally emerged from the study of quantum key distribution (QKD) protocols and non-local games (or Bell inequalities). As such, the quantum information theoretic toolkit for proving the security of protocols such as DIQKD, DI randomness certification and alike was developed over many years and used for the analysis of numerous quantum protocols (see, e.g., the survey <cit.>).
The well-established toolkit includes powerful techniques that allow bounding the conditional smooth min-entropy H^ε_min(O|SE)_ρ_|Ω mentioned above. Examples for such tools are the entropic uncertainty relations <cit.>, the entropy accumulation theorem <cit.> and more.
The usage of these tools allows one to derive quantitatively strong lower bounds on H^ε_min(O|SE)_ρ_|Ω as well as handle realistically noisy quantum devices <cit.>.
Unlike the protocols in the non-local setup, the newly developed protocols for, e.g., randomness certification with a single device restricted by its computational power, are each analyzed using ad hoc proof techniques.
On the qualitative side, such proofs make it harder to separate the wheat from the chaff, resulting in less modular and less insightful claims.
Quantitatively, the strength of the achieved statements is hard to judge– they are most likely not strong enough to lead to practical applications and it is unknown whether this is due to a fundamental difficulty or a result of the proof technique. As a consequence, it is unclear whether such protocols are of relevance for future technology.
In this work, we show how to combine the information theoretic toolkit with assumptions regarding the computational power of the device. More specifically, we prove lower bounds on the quantity H^ε_min(O|SE)_ρ_|Ω by exploiting post-quantum cryptographic assumptions and quantum information-theoretic techniques, in particular the entropic uncertainty relation and the entropy accumulation theorem. Prior to our work, it was believed that such an approach cannot be taken in the computational setting (see the discussion in <cit.>).
Once a bound on H^ε_min(O|SE)_ρ_|Ω is proven, the security of the considered protocols then follows from our bounds using standard tools.
The developed framework is general and modular. We use the original work of <cit.> as an explicit example; the same steps can be easily applied to, e.g., the protocols studied in the recent works <cit.>.
§.§ Main ideas and results
The main tool which is used to lower bound the conditional smooth min-entropy in DI protocols in the non-local setting is the entropy accumulation theorem (EAT) <cit.>. The EAT deals with sequential protocols, namely, protocols that proceed in rounds, one after the other, and in each round some bits are being output. Roughly speaking, the theorem allows us to relate the total amount of entropy that accumulates throughout the execution of the protocol to, in some sense, an “average worst-case entropy of a single round” (see Section <ref> for the formal statements).
It was previously unclear how to use the EAT in the context of computational assumptions and so <cit.> used an ad hoc proof to bound the total entropy.
The general structure of a protocol that we consider is shown in Figure <ref>.
The initial state of the system is ρ^in_PVE. The protocol, as mentioned, proceeds in rounds. Each round includes interaction between the verifier and the prover and, overall, can be described by an efficient quantum channel ℳ_i for round i∈[n]. The channels output some outcomes O_i and side information S_i. In addition, the device (as well as the verifier) may keep quantum and classical memory from previous rounds– this is denoted in the figure by the registers R_i. We remark that there is only one device and the figure merely describes the way that the protocol proceeds. That is, in each round i∈[n] the combination of the actions of the device and the verifier, together, define the maps ℳ_i. The adversary's system E is untouched by the protocol.[One could also consider more complex protocols in which the adversary's information does change in some ways. This can be covered using the generalized EAT <cit.>. We do not do so in the current manuscript since all protocols so far fall in the above description but the results can be extended to the setup of <cit.>.]
We are interested in bounding the entropy that accumulated by the end of the protocol: H^ε_min(𝐎|𝐒E)_ρ_|Ω, where 𝐎=O_1,⋯,O_n and similarly 𝐒=S_1,⋯,S_n.
The EAT tells us that, under certain conditions, this quantity is lower bounded, to first order in n, by n·t, with t of the form
t = min_σ∈Σ(ω) H(O_i|S_iE)_ℳ_i(σ) ,
where ω is the average winning probability of the device in the test rounds and Σ(ω) is the set of all (including inefficiently constructed) states of polynomial size that achieve the winning probability ω.
To lower bound the entropy appearing in Equation (<ref>) one needs to use the computational assumption being made. This can be done in various ways.
In the current work, we show how statements about anti-commuting operators, such as those proven in <cit.>, can be brought together with the entropic uncertainty relation <cit.>, another tool frequently used in quantum information theory, to get the desired bound.
In combination with our usage of the EAT the bound on H^ε_min(𝐎|𝐒E)_ρ_|Ω follows.
The described techniques are being made formal in the rest of the manuscript. We use <cit.> as an explicit example,[For readers who are familiar with <cit.>, our work exploits a few lemmas from <cit.>, which define “the heart” of the computational assumption in the context of randomness generation and then we replace <cit.> completely.] also deriving quantitative bounds. We discuss the implications of the quantitative results in Section <ref> and their importance for future works.
Furthermore, our results can be used directly to prove full security of the DIQKD protocol of <cit.>.
We add that on top of the generality and modularity of our technique, its simplicity contributes to a better understanding of the usage of post-quantum cryptographic assumptions in randomness generation protocols.
Thus, apart from being a tool for the analysis of protocols, we also shed light on what is required for a protocol to be strong and useful.
For example, we can see from the explanation given in this section that the computational assumptions enter in three forms in Equation (<ref>):
* The channels ℳ_i must be efficient.
* The states σ must be of polynomial size.
* Due to the minimization, the states σ that we need to consider may also be inefficient to construct, even though the device is efficient.
Points 1 and 2 are the basis for the computational assumption and the construction of the test; in particular, they allow one to bound H(O_i|S_iE)_ℳ_i(σ), up to a negligible function η(λ), where λ is a security parameter defined by the computational assumption.
Point 3 is slightly different. The set over which the minimization is taken, determines the strength of the computational assumption that one needs to consider. For example, it indicates that the protocol in <cit.> requires that the LWE assumption is hard even with a potentially-inefficient polynomial-size advice state. This is a (potentially) stronger assumption than the “standard” formulation of LWE.
Note that the stronger assumption is required even though the initial state of the device, ρ^in_P is efficient to prepare (i.e., it does not act as an advice state in this case).
In <cit.>, this delicate issue arises only when analyzing everything that can happen to the initial state throughout the entire protocol and conditioning on not aborting.
In the current work, we directly see and deal with the need of allowing advice states from the minimization in Equation (<ref>).
§ PRELIMINARIES
Throughout this work, we use the symbol 1 as the identity operator and as the characteristic function interchangeably; the usage is clear due to context.
We denote x← X when x is sampled uniformly from the set X or x←𝒟 when x is sampled according to a distribution 𝒟.
The Bernoulli distribution with p(0)=γ is denoted as Bernoulli(1-γ)
The Pauli operators are denoted by σ_x,σ_y,σ_z.
The set {1,⋯,n} is denoted as [n].
§.§ Mathematical background
A function η:ℕ→ℝ^+ is said to be negligible if for every c ∈ℕ there exists N∈ℕ such that for every n>N, η(n)<n^-c.
Let η:ℕ→ℝ^+ be a negligible function. The function η(n)ln(1/η(n)) is also negligible.
Assume, w.l.o.g, monotonicity and positivity of some negligible function η.
Given c∈ℕ, we wish to find N∈ℕ s.t. ∀ n>N, η(n)ln(1/η(n)) < n^-c.
We define the function
g(n) := max{k∈ℕ : η(n)<n^-k} .
This means
n^{-(g(n)+1)} ≤ η(n) < n^{-g(n)} ,
⇒ -(g(n)+1)ln n ≤ ln η(n) < -g(n)ln n ,
⇒ (g(n)+1)ln n ≥ ln(1/η(n)) > g(n)ln n ,
yielding
η(n)ln(1/η(n)) < n^{-g(n)}(g(n)+1)ln n = [(g(n)+1)/n^{g(n)-1}]·[ln n/n] < (g(n)+1)/n^{g(n)-1} = [(g(n)+1)/n^{g(n)-1-c}]·(1/n^c) .
By the negligibility property of η(n), the function g(n) diverges to infinity.
Therefore, there exists N∈ℕ such that for all n > N,
(g(n)+1)/n^{g(n)-1-c} ≤ 1 ,
and hence for all n>N,
η(n)ln(1/η(n)) < [(g(n)+1)/n^{g(n)-1-c}]·(1/n^c) ≤ 1/n^c .
Let η be a negligible function and let h(x)=-xlog x - (1-x)log(1-x) be the binary entropy function. There exists a negligible function ξ for which the following holds.
h(x - η(n)) ≥ h(x) - ξ(n) .
Given two probability distributions P=p_i_i and Q=q_i_i, the Hellinger distance between P and Q is defined as
H(P,Q) = (1/√(2)) √(∑_i(√(p_i) - √(q_i))^2) .
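For concreteness, a small Python helper implementing this definition, together with two sanity checks:

import numpy as np

def hellinger(p, q):
    # Hellinger distance between two discrete distributions P = {p_i} and Q = {q_i}.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)) / np.sqrt(2.0)

# Sanity checks: H(P,P) = 0, and H = 1 for distributions with disjoint support.
assert np.isclose(hellinger([0.3, 0.7], [0.3, 0.7]), 0.0)
assert np.isclose(hellinger([1.0, 0.0], [0.0, 1.0]), 1.0)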
Let U_1,U_2 be two self adjoint unitary operators acting on a Hilbert space ℋ, of a countable dimension.
Let L be a normal operator acting on the same space such that
[L,U_1]=[L,U_2]=0.
There exists a decomposition of the Hilbert space into a direct sum of orthogonal subspaces ℋ = ⊕_αℋ_α
such that for all α, dim(ℋ_α)≤ 2 and, given |ψ⟩∈ℋ_α, all 3 operators satisfy
U_1|ψ⟩,U_2|ψ⟩,L|ψ⟩∈ℋ_α.
Jordan's lemma appears, among other places, in <cit.>.
The sole change in the proof of the extension is in the choice of diagonalizing basis of the unitary operator U_2 U_1.
The chosen basis is now a mutual diagonalizing basis of U_2 U_1 and L which exists due to commutative relations.
Let Π, M, K be Hermitian projections acting on a Hilbert space ℋ of a countable dimension such
that [K,Π]=[K,M]=0.
There exists a decomposition of the Hilbert space into a direct sum of orthogonal subspaces such that Π,M and K are 2
by 2 block diagonal.
In addition, in subspaces ℋ_α of dimension 2, Π and M take the forms
Π_α = [ 1 0; 0 0 ] ,    M_α = [ c_α^2 c_α s_α; c_α s_α s_α^2 ] ,
where c_α = cosθ_α,s_α=sinθ_α for some θ_α.
Given a Hermitian projection Π, consider the unitary operator 2Π-1, which satisfies (2Π-1)^2 = 1.
Therefore, the operators 2Π-1, 2M-1 satisfy the conditions for Lemma <ref> and there exists decomposition of ℋ to a direct sum of orthogonal subspaces ℋ_α such that in these subspaces
2Π_α - 1_α = [ 0 1; 1 0 ] ,    2M_α - 1_α = [ 0 ω; ω̅ 0 ] .
One can recognize the operators as σ_x and cosθσ_x + sinθσ_y, respectively, for
some angle θ.
Therefore, there exists a basis of ℋ_α such that the operators are σ_z and cosϕσ_z + sinϕσ_x, respectively, for some angle ϕ.
Consequently, in this subspace, the projections take the form
Π_α = 1/2( [ 1 0; 0 -1 ] + [ 1 0; 0 1 ] ) = [ 1 0; 0 0 ] ,
M_α = 1/2( [ cosϕ sinϕ; sinϕ -cosϕ ] + [ 1 0; 0 1 ] ) = [ c_α^2 c_α s_α; c_α s_α s_α^2 ] ,
with θ_α = ϕ/2.
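As a numerical illustration of this corollary (the construction below is ours and purely illustrative), one can generate two projections with hidden 2×2 blocks and recover the block angles from the nonzero spectrum of Π M Π:

import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(1)

# Build two projections that are 2x2 block diagonal with known angles theta_j,
# hide the block structure by an orthogonal change of basis, and recover
# cos^2(theta_j) from the nonzero spectrum of Pi M Pi.
thetas = rng.uniform(0.0, np.pi / 2, size=4)
blocks_P, blocks_M = [], []
for th in thetas:
    c, s = np.cos(th), np.sin(th)
    blocks_P.append(np.array([[1.0, 0.0], [0.0, 0.0]]))
    blocks_M.append(np.array([[c**2, c * s], [c * s, s**2]]))
P = block_diag(*blocks_P)
M = block_diag(*blocks_M)

Q, _ = np.linalg.qr(rng.standard_normal(P.shape))   # random orthogonal basis change
P, M = Q @ P @ Q.T, Q @ M @ Q.T

ev = np.linalg.eigvalsh(P @ M @ P)
recovered = np.sort(ev[ev > 1e-10])                 # the c_j^2 of the hidden blocks
print(np.allclose(recovered, np.sort(np.cos(thetas) ** 2)))   # True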
Let f:U→ℝ be a convex function over a convex set U⊆ℝ^n such that for every u∈ U there exists a subgradient.
Let (X_i)_i∈[n] be a sequence of random variables with support over U.
Then, 𝔼[f(X_1, ..., X_n)] ≥ f(𝔼[X_1],...,𝔼[X_n]).
§.§ Tools in quantum information theory
We state here the main quantum information theoretic definitions and techniques appearing in previous work, on which we build in the current manuscript.
Let ρ_AB be a density operator over the Hilbert space ℋ_A ⊗ℋ_B. The von Neumann entropy of the marginal ρ_A is defined as
H(A)_ρ = -Tr(ρ_A logρ_A) .
The conditional von Neumann entropy of A given B is defined to be
H(A|B)_ρ = H(AB)_ρ - H(B)_ρ .
Let Π and M be two observables, described by orthonormal bases |π_i⟩_i and |m_j⟩_j on a d-dimensional Hilbert space ℋ_A.
The measurement processes are then described by the completely positive maps
𝒫: ρ ↦ ∑_i ⟨π_i|ρ|π_i⟩ |π_i⟩⟨π_i| ,  ℳ: ρ ↦ ∑_j ⟨m_j|ρ|m_j⟩ |m_j⟩⟨m_j| .
The square overlap of Π and M is then defined as
c := max_i,j |⟨π_i|m_j⟩|^2 .
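A short sketch of how the square overlap can be computed for two explicitly given bases (the example bases below are ours):

import numpy as np

def square_overlap(basis_pi, basis_m):
    # c = max_{i,j} |<pi_i|m_j>|^2 for two orthonormal bases given as matrix columns.
    return np.max(np.abs(basis_pi.conj().T @ basis_m) ** 2)

# Example: the computational and Hadamard bases of a qubit give c = 1/2,
# for which the uncertainty bound log2(1/c) attains its maximal qubit value of one bit.
Z = np.eye(2)
X = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
print(square_overlap(Z, X))   # 0.5 (up to rounding)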
Let M be an observable on a d dimensional Hilbert space ℋ_A and let ℳ be its corresponding map as appearing in Equation(<ref>).
Given a state ρ∈ℋ_A⊗ℋ_B, we define the conditional entropy of the measurement M given the side information B as
H(M|B)_ρ = H(A|B)_(ℳ⊗1_B)(ρ) = H(AB)_(ℳ⊗1_B) (ρ) - H(B)_(ℳ⊗1_B)(ρ) .
Let Π and M be two observables on a Hilbert space ℋ_P and let c be their square overlap.
For any density operator ρ∈ℋ_V⊗ℋ_P⊗ℋ_E,
H(Π|E) ≥log (1/c) - H(M|V) .
In order to provide a clear understanding of the quantum uncertainty that arises from two measurements, as in Lemma <ref>, it is beneficial to examine the square overlap between those measurements (Definition <ref>).
The Bloch sphere representation, which pertains to Hilbert spaces of two dimensions, offers a lucid illustration of this concept, as shown in Figure <ref>.
Given two non-trivial Hermitian projections
Π=1/2(1+σ_z) ;
M=1/2(1+cos(θ)σ_z+sin(θ)σ_x)
acting on a 2-dimensional Hilbert space ℋ, the eigenvalues of the operator Π M Π + (1 - Π) M (1 - Π) are cos^2(θ/2) and sin^2(θ/2).
In addition, the square overlap of the operators in Equation (<ref>) is c=maxcos^2(θ/2),sin^2(θ/2).
For states ρ and σ on a Hilbert space ℋ_A⊗ℋ_B, if (1/2)‖ρ-σ‖_1 ≤ ϵ ≤ 1, then
|H(A|B)_ρ - H(A|B)_σ| ≤ ϵ log_2(|A|) + (1+ϵ) h(ϵ/(1+ϵ)) .
Let Π and M be two Hermitian projections on ℋ and ϕ a state on ℋ.
Let ω = Tr(Mϕ) and
μ = | 1/2 - Tr(MΠϕΠ) - Tr(M(1-Π)ϕ(1-Π)) | .
Let c∈(1/2,1].
Let B_j be the orthogonal projection on the j'th 2×2 block, as given in Lemma <ref> and denote c_j as the square overlap of Π and M in the corresponding block.
Define Γ to be the orthogonal projection on all blocks where the square overlap is bounded by c:
Γ := ∑_j:c_j ≤ c B_j .
Then,
Tr((1-Γ)ϕ) ≤ (2μ + 10√(1 - ω))/(2c - 1)^2 .
The above lemma was proven in <cit.>.
Since it plays a crucial part in this work, we include the proof for completeness.
Using Jordan's lemma we find a basis of ℋ in which
M = ⊕_j [ a_j^2 a_j b_j; a_j b_j b_j^2 ]  and  Π = ⊕_j [ 1 0; 0 0 ]
where a_j = cosθ_j, b_j = sinθ_j, for some angles θ_j.
Let Γ be the orthogonal projection on those 2-dimensional blocks such that max(a_j^2, b_j^2)≡ c_j ≤ c.
Note that:
* Γ commutes with both M and Π, but not necessarily with ϕ.
* Γ is the projection on 2×2 blocks where the eigenvalues of the operator Π MΠ + (1-Π)M(1-Π) are in the range [1-c,c] as a consequence of Lemma <ref>.
Assume that ω = 1.
This implies that ϕ is supported on the range of M.
For any block j, let B_j be the projection on the block and p_j = (B_j ϕ).
It follows from the decomposition of M and Π to 2 dimensional blocks and the definition of μ that[This is easily seen in the Bloch sphere representation.]
|
∑_j:c_j ≤ c p_j (a_j^4 + b_j^4) +
∑_j:c_j > c p_j (a_j^4 + b_j^4) -
1/2
| = μ .
Using that, for j with c_j > c, we have max(a_j^2, b_j^2) = c_j > c, and the fact that the polynomial 1-2x(1-x) is strictly increasing in the regime x>1/2, we get
a_j^4 + b_j^4 = 1 - 2 a_j^2 b_j^2 =
1 - 2 max(a_j^2, b_j^2)(1 - max(a_j^2, b_j^2))
≥ 1 - 2c(1-c) =
1/2 + 1/2(2c-1)^2 .
From Equation (<ref>) and a_j^4 + b_j^4 ≥ 1/2 it follows that
2 μ =
∑_j:c_j ≤ c p_j (1+(2c_j-1)^2) +
∑_j:c_j > c p_j (1+(2c_j-1)^2) - 1
= ∑_j:c_j ≤ c p_j (2c_j-1)^2 + ∑_j:c_j > c p_j (2c_j-1)^2
≥∑_j:c_j > c p_j (2c-1)^2 .
Consequently,
Tr((1-Γ)ϕ) ≤ 2μ/(2c - 1)^2 .
This concludes the case where ω=1.
Consider ω < 1 and Tr(Mϕ) > 0, as otherwise the lemma is trivial.
Let ϕ' = MϕM/Tr(Mϕ).
By the gentle measurement lemma <cit.>,
‖ϕ' - ϕ‖_1 ≤ 2√(1 - ω) .
Using the definition of μ it follows that
| 1/2 - Tr(MΠϕ'Π) - Tr(M(1-Π)ϕ'(1-Π)) | ≤ μ + 4√(1 - ω) .
Following the same steps as those used for the case ω = 1 yields an analogue of (<ref>), with ϕ' instead of ϕ on the left-hand side and μ+4√(1 - ω) instead of μ on the right-hand side.
Applying again Equation (<ref>) the same bound transfers to ϕ with an additional loss of 2√(1 - ω).
A main tool that we are going to use is the entropy accumulation theorem (EAT) <cit.>.
For our quantitive results we used the version appearing in <cit.>; One can use any other version of the EAT in order to optimize the randomness rates, e.g., <cit.> as well as the generalization in <cit.>.
We do not explain the EAT in detail; the interested reader is directed to <cit.> for a pedagogical explanation.
Quantum channels {ℳ_i : R_i-1 → R_i O_i S_i Q_i}_i∈[n] are said to be EAT channels if the following requirements hold:
* O_i_i∈[n] are finite dimensional quantum systems of dimension d_O and Q_i_i∈[n] are finite-dimensional classical systems (RV). S_i_i∈[n] are arbitrary quantum systems.
* For any i∈[n] and any input state σ_R_i-1, the output state σ_R_i O_i S_i=ℳ_i (σ_R_i-1) has the property that the classical value Q_i can be measured from the marginal σ_O_i S_i without changing the state. That is, for the map 𝒯_i : O_i S_i → O_i S_i Q_i describing the process of deriving Q_i from O_i and S_i, it holds that _Q_i∘𝒯_i (σ_O_i S_i)=σ_O_i S_i.
For any initial state ρ^in_R_0 E, the final state ρ_𝐎𝐒𝐗E=((Tr_R_n∘ℳ_n ∘⋯∘ℳ_1)⊗1_E) ρ^in_R_0 E fulfils the Markov-chain conditions O_1,...,O_i-1↔ S_1,...,S_i-1,E↔ S_i.
Let ℳ_i be a family of EAT channels and 𝒬 denote the common alphabet of Q_1 ,..., Q_n. A differentiable and convex function f_min from the set of probability distributions p over 𝒬 to the real numbers is called a min-tradeoff function for ℳ_i if it satisfies
f_min (p) ≤inf_σ_R_i-1 R' : ℳ_i (σ)_Q_i=pH(O_i | S_i R')_ℳ_i (σ)
for all i∈[n], where the infimum is taken over all purifications of input states of ℳ_i for which the marginal on Q_i of the output state is the probability distribution p.
Given an alphabet 𝒬, for any 𝐪∈𝒬^n we define the probability distribution freq_𝐪 over 𝒬 such that for any q̃∈𝒬,
freq_𝐪(q̃) = |{i∈[n] : q_i=q̃}|/n .
Let ℳ_i : R_i-1→ R_i O_i S_i Q_i for i∈[n] be EAT channels,
let ρ be the final state,
Ω an event defined over 𝒬^n,
p_Ω the probability of Ω in ρ and ρ_|Ω the final state conditioned on Ω.
Let ε∈(0,1).
For Ω̂ = {freq_𝐪 : 𝐪∈Ω} convex,
f_min a min-tradeoff function for {ℳ_i}_i∈[n],
and any t∈ℝ such that f_min(freq_𝐪) ≥ t for any freq_𝐪∈Ω̂,
H^ε_min(𝐎 | 𝐒 E)_ρ_|Ω ≥ n t - μ√(n) ,
where
μ = 2 (log (1+2 d_O) + ⌈‖∇ f_min‖_∞⌉ ) √(1 - 2log(ε· p_Ω)) .
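For orientation, the following sketch evaluates the bound n t - μ√(n) with the μ of the theorem above, taking all logarithms in base 2 and using arbitrary placeholder numbers; it is only meant to show how the second-order term behaves, not to reproduce the rates of any specific protocol.

import numpy as np

def eat_lower_bound(n, t, d_O, grad_inf, eps, p_omega):
    # n*t - mu*sqrt(n), with mu as in the theorem above (logarithms in base 2);
    # t is the constant lower bound of the min-tradeoff function on the event,
    # grad_inf an upper bound on the sup-norm of its gradient.
    mu = 2.0 * (np.log2(1 + 2 * d_O) + np.ceil(grad_inf)) * np.sqrt(1 - 2 * np.log2(eps * p_omega))
    return n * t - mu * np.sqrt(n)

# Placeholder numbers: with t = 0.1 bits per round the bound becomes positive
# only once n is large enough for the sqrt(n) correction to be subleading.
for n in (10**4, 10**6, 10**8):
    print(n, eat_lower_bound(n, t=0.1, d_O=3, grad_inf=5.0, eps=1e-8, p_omega=0.5))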
§.§ Post-quantum cryptography
For every security parameter λ∈ℕ, let 𝒳⊆{0,1}^w, 𝒴 and 𝒦 be finite sets of inputs, outputs and keys, respectively.
A family of injective functions
ℱ = {f_k,b : 𝒳→𝒴}_k∈𝒦, b∈{0,1}
is said to be a trapdoor claw-free (TCF) family if the following holds:
* Efficient Function Generation: There exists a PPT algorithm Gen_ℱ which takes the security parameter 1^λ and outputs a key k∈ and a trapdoor t.
* Trapdoor: For all keys k∈ there exists an efficient deterministic algorithm Inv_k such that, given t, for all b∈0,1 and x∈𝒳, Inv_k(t,b,f_k,b(x))=x.
* Claw-Free: For every QPT algorithm 𝒜 receiving as input (1^λ,k) and outputting a pair (x_0,x_1)∈𝒳^2 the probability to find a claw is negligible.
I.e. there exists a negligible function η for which the following holds.
Pr_k←𝒦, (x_0,x_1)←𝒜(1^λ,k)[f_k,0(x_0)=f_k,1(x_1)] ≤ η(λ).
For every security parameter λ∈ℕ, let 𝒳, 𝒴 and 𝒦 be finite sets of inputs, outputs and keys, respectively, and let 𝒟_𝒴 be the set of distributions over 𝒴. A family of functions
ℱ = {f_k,b : 𝒳→𝒟_𝒴}_k∈𝒦, b∈{0,1}
is said to be a noisy trapdoor claw-free (NTCF) family if the following conditions hold:
* Efficient Function Generation: There exists a probabilistic polynomial time (PPT) algorithm Gen_ℱ which takes the security parameter 1^λ and outputs a key k∈ and a trapdoor t.
* Trapdoor Injective Pair: For all keys k∈, the following 2 conditions are satisfied.
* Trapdoor: For all b∈0,1 and x≠ x'∈𝒳, Supp(f_k,b(x))∩Supp(f_k,b(x'))=∅. In addition, there exists an efficient deterministic algorithm Inv_k such that for all b∈0,1, x∈𝒳 and y∈Supp(f_k,b (x)), Inv_k(t,b,y)=x.
* Injective Pair: There exists a perfect matching relation ℛ_k⊆𝒳×𝒳 such that f_k,0(x_0)=f_k,1(x_1) if and only if (x_0,x_1)∈ℛ_k.
* Efficient Range Superposition: For every function in the family f_k,b∈ℱ, there exists a function f'_k,b:𝒳→𝒟_𝒴 (not necessarily a member of ℱ) such that the following hold.
* For all (x_0,x_1)∈ℛ_k and y∈Supp(f'_k,b(x_b)), Inv_k(t,b,y)=x_b and Inv_k(t,1-b,y)=x_1-b.
* There exists an efficient deterministic algorithm Chk_k such that
Chk_k(b,x,y)=1_y∈Supp(f'_k,b(x)).
* There exists some negligible function η such that
𝔼_x←𝒳[H^2(f_k,b(x), f'_k,b(x))] ≤ η(λ) ,
where H is the Hellinger distance for distributions defined in Definition <ref>.
* There exists a quantum polynomial time (QPT) algorithm Samp_k,b that prepares the quantum state
|ψ'⟩ = (1/√(|𝒳|)) ∑_x∈𝒳, y∈𝒴 √((f'_k,b(x))(y)) |x⟩|y⟩ .
* Adaptive Hardcore Bit: For all keys k∈, the following holds. For some integer w that is a polynomially bounded function of λ
* For all b∈{0,1} and x∈𝒳, there exists a set G_k,b,x⊆{0,1}^w such that Pr_d←{0,1}^w[d∉ G_k,b,x]≤η(λ) for some negligible function η. In addition, there exists a PPT algorithm that checks for membership in G_k,b,x given k, b, x and the trapdoor t.
* Let
H_k := {(b,x_b,d,d·(x_0⊕ x_1)) | b∈{0,1}, (x_0,x_1)∈ℛ_k, d∈ G_k,0,x_0∩ G_k,1,x_1} ,
H̅_k := {(b,x_b,d,e) | e∈{0,1}, (b,x_b,d,1-e)∈ H_k} ,
then for any QPT 𝒜 and polynomial size (potentially inefficient to prepare) advice state ϕ independent of the key k, there exists a negligible function η' for which the following holds[We remark that the same computational assumption is being used in <cit.>, even though the advice state is not included explicitly in the definition therein.]
| Pr_(k,t)← Gen_ℱ(1^λ)[𝒜(k, ϕ)∈ H_k] - Pr_(k,t)← Gen_ℱ(1^λ)[𝒜(k, ϕ)∈H̅_k] | ≤ η'(λ) .
There exists an LWE-based construction of an NTCF family as in Definition <ref>.[As previously noted, <cit.> assumes that the LWE problem is hard also when given an advice state. Thus, though not explicitly stated, the construction in <cit.> is secure with respect to an advice state.]
§ RANDOMNESS CERTIFICATION
Our main goal in this work is to provide a framework to lower bound the entropy accumulated during the execution of a protocol that uses a single quantum device.
The randomness generation protocol that we use is given as Protocol <ref>, where multiple rounds of interaction of a verifier with a quantum prover are performed.
The quantitative result of this section is a lower bound on the amount of smooth min-entropy of the output of the protocol, namely, H_min^ε(Π̂|𝐊𝐓𝐆E).
The protocol and its security are based on the existence of an NTCF family ℱ = {f_k,b : 𝒳→𝒴} (see Lemma <ref>).
We use the same assumption as in <cit.>, where the LWE problem <cit.> is exploited to construct an NTCF on which the protocol builds.
The definitions made in this section are stated implicitly with respect to ℱ, a security parameter λ and a corresponding set of keys 𝒦.
Before proving the validity of this protocol in the multi-round setting, we inspect a single round of the protocol in Section <ref>.
This is done using a definition and inspection of a simplified one-round protocol and device, a reduction to single qubits and then the usage of the entropic uncertainty relation to define a min-tradeoff function.
Following these steps, in Sections <ref> and <ref>, we use the results of Section <ref> in combination with the EAT to prove a lower bound on the total amount of entropy accumulated during the execution of Protocol <ref>.
§.§ Single-round entropy
In this Section we analyze a single round of Protocol <ref>, presented as Protocol <ref>.
§.§.§ One round protocols and devices
Protocol <ref>, roughly, describes a single round of Protocol <ref>.
The goal of Protocol <ref> can be clarified by thinking about an interaction between a verifier 𝒱 and a quantum prover 𝒫 in an independent and identically distributed (IID) scenario, in which the device is repeating the same actions in each round.
Upon multiple rounds of interaction between the two, 𝒱 is convinced that the winning rate provided by 𝒫 is as high as expected and therefore can continue with the protocol. We will, of course, use the protocol later on without assuming that the device behaves in an IID manner throughout the multiple rounds of interaction.
We begin by formally defining the most general device that can be used to execute the considered single round Protocol <ref>.
A general device is a tuple D=(ϕ,Π,M) that receives a key k as input and is specified by the following:
* A normalized density matrix ϕ∈ℋ_D⊗ℋ_Y.
* ℋ_D is a polynomial (in λ) space, private to the device.
* ℋ_Y is a space private to the device whose dimension is the same of the cardinality of 𝒴.
* For every y∈𝒴, ϕ_y is a sub-normalized state such that
ϕ_y = (1_D⊗⟨y|_Y)ϕ(1_D⊗|y⟩_Y).
* For every y∈𝒴, a projective measurement M_y^(u,d) on ℋ_D, with outcomes (u,d)∈0,1×0,1^w.
* For every y∈𝒴, a projective measurement Π_y^(b,x) on ℋ_D, with outcomes (b,x)∈0,1×𝒳.
For each y, this measurement has two designated outcomes (0,x_0),(1,x_1).
We say that a device D=(ϕ,Π,M) is efficient if:
* The state ϕ is a polynomial size (in λ) “advice state” that is independent of the chosen keys k∈. The state might not be producible using a polynomial time quantum circuit (we say that the state can be inefficient).
* The measurements Π and M can be implemented by polynomial size quantum circuit.
We emphasize that the device D is computationally bounded by both time steps and memory.
This prevents pre-processing schemes in which the device manually goes over all keys and pre-images to store a table of answers to all the possible challenges as such schemes demand exponential memory in λ.
The cryptographic assumption made on the device is the following.
Intuitively, the lemma states that due to the hardcore bit property in Equation (<ref>), the device cannot pass both pre-image and equation tests; once it passes the pre-image test, trying to pass also the equation test results in two computationally indistinguishable states: one in which the device also passes the equation test and one in which it does not.
Let D=(ϕ, Π, M) be an efficient general device, as in Definitions <ref> and <ref>. Define a sub-normalized density matrix
ϕ̃_YBXD = ∑_y∈𝒴 |y⟩⟨y|_Y ⊗ ∑_b∈{0,1} |b,x_b⟩⟨b,x_b|_BX ⊗ Π_y^(b, x_b) ϕ_y Π_y^(b, x_b) .
Let
σ_0 = ∑_b∈{0,1} |b,x_b⟩⟨b,x_b|_BX ⊗ ∑_(u,d)∈ V_y,1 |u,d⟩⟨u,d|_U ⊗ (1_Y ⊗ M_y^(u,d)) ϕ̃^(b)_YD (1_Y ⊗ M_y^(u,d)) ,
σ_1 = ∑_b∈{0,1} |b,x_b⟩⟨b,x_b|_BX ⊗ ∑_(u,d)∉ V_y,1 1_d∈Ĝ_y |u,d⟩⟨u,d|_U ⊗ (1_Y ⊗ M_y^(u,d)) ϕ̃^(b)_YD (1_Y ⊗ M_y^(u,d)) ,
where V_y,1 is the set of valid answers to challenge 1 (equation test).
Then, σ_0 and σ_1 are computationally indistinguishable.
The proof is given in <cit.>.[Note that even though the proof in <cit.> does not address the potentially inefficient advice state ϕ explicitly, it holds due to the non-explicit definition of their computational assumption.]
We proceed with a reduction of Protocol <ref> to a simplified one, Protocol <ref>.
As the name suggests, it will be easier to work with the simplified protocol and devices when bounding the produced entropy.
We remark that a similar reduction is used in <cit.>; the main difference is that we are using the reduction on the level of a single round, in contrast to the way it is used in <cit.> when dealing with the full multi-round protocol. Using the reduction in the single round protocol instead of the full protocol helps disentangling the various challenges that arise in the analysis of the entropy.
A simplified device is a tuple D̃=(ϕ,Π̃,M̃) that receives a key k as input and is specified by the following:
* ϕ={ϕ_y}_y∈𝒴⊆Pos(ℋ_D) is a family of positive
semidefinite operators on an arbitrary space ℋ_D such that ∑_yTr(ϕ̃_y)≤ 1;
* Π̃ and M̃ are defined as the sets {Π̃_y}_y and {M̃_y}_y, respectively, such that for each y∈𝒴 the operators M̃_y = {M̃_y^0, M̃_y^1 = 1-M̃_y^0} and Π̃_y = {Π̃_y^0, Π̃_y^1, Π̃_y^2 = 1-Π̃_y^0 - Π̃_y^1} are projective measurements on ℋ_D.
Given a general device as in Definition <ref> D=(ϕ,Π,M), we construct a simplified device D̃=(ϕ,Π̃,M̃) in the following manner:
* The device D̃ measures y∈𝒴 like the general device D would.
* The measurement Π̃={Π̃_y^0,Π̃_y^1,Π̃_y^2} is defined as follows.
* Perform the measurement {Π_y^(b,x)}_b∈0,1,x∈𝒳 for an outcome (b,x).
* If Chk_k (b,x,y)=1, the constructed device returns b corresponding to the projection Π̃_y^b∈Π̃_y^0, Π̃_y^1.
* If Chk_k (b,x,y)=0, the constructed device returns 2 corresponding to the projection Π̃_y^2.
* The measurement M̃={M̃^0_y,M̃^1_y} is defined as follows.
M̃^0_y = ∑_(u,d)∈ V_y,1 M_y^(u,d) , M̃^1_y = 1 - M̃^0_y
,
where V_y,1 is valid answers for the equation test.
Meaning the outcome M̃=0 corresponds to a valid response (u,d) in the equation test.
The above construction of a simplified device maintains important properties of the general device.
Firstly, the simplified device fulfils the same cryptographic assumption as the general one. This is stated in the following corollary.
Given a general efficient device D=(ϕ,Π,M), a simplified device D̃=(ϕ,Π̃,M̃) constructed according to Definition <ref> is also efficient. Hence, the cryptographic assumption described in Lemma <ref> holds also for D̃ as well.
Secondly, the entropy produced by the simplified device in the simplified single-round protocol, Protocol <ref>, is identical to that produced by the general device in single round protocol, Protocol <ref>.
A general device executing Protocol <ref> defines a probability distribution of π̂ over 0,1,2.
Using the same general device to construct a simplified one, via Definition <ref>, leads to the same distribution for Π̃ when executing Protocol <ref>.
This results in the following corollary.
Given a general efficient device D=(ϕ,Π,M) and a simplified device D̃=(ϕ,Π̃,M̃) constructed according to Definition <ref>, we have for all k,
H(π̂|EY)^General = H(Π̃|EY)^Simplified ,
where H(π̂|EY)^General is the entropy produced by Protocol <ref> using the general device D and H(Π̃|EY)^Simplified is the entropy produced in Protocol <ref> using the simplified device D̃.
Both entropies are evaluated on the purification of the state ϕ.
The above corollary tells us that we can reduce the analysis of the entropy created by the general device to that of the simplified one, hence justifying its construction and the following sections.
§.§.§ Reduction to qubits
In order to provide a clear understanding of the quantum uncertainty that arises from two measurements, it is beneficial to examine the square overlap between those measurements (Definition <ref>).
The Bloch sphere representation, which pertains to Hilbert spaces of two dimensions, offers a lucid illustration of this concept.
Throughout this section, it is demonstrated that the devices under investigation, under specific conditions, can be expressed as a convex combination of devices, each operating on a single qubit.
Working in a qubit subspace then allows one to make definitive statements regarding the entropy of the measurement outcomes produced by the device.
We remark that this is in complete analogy with the proof techniques used when studying DI protocols in the non-local setting, in which one reduces the analysis to that of two single qubit devices <cit.>.
Let D̃=(ϕ,Π̃,M̃) be a simplified device (Definition <ref>), acting within a Hilbert space ℋ of a countable dimension, with the additional assumption that Π̃ consists of only 2 outcomes and let Γ be a Hermitian projection that commutes with both Π̃ and M̃.
Given an operator F=f(M̃,Γ) constructed from some non-commutative polynomial of 2 variables f, let S_D̃ = ⟨f(M̃,Γ)_ϕ⟩ be the expectation value of F.
Then, there exists a set of Hermitian projections B_j_j, acting within the same Hilbert space ℋ, satisfying the following conditions
1) ∀ j, Rank(B_j) ≤ 2 ;  2) ∑_j B_j = 1 ;  3) ∀ j, [Π̃,B_j] = [M̃, B_j] = [Γ,B_j] = 0 ,
such that
S_D̃ = ∑_j(j) S_D̃_j ,
where (j) = (B_jϕ) and S_D̃_j is the expectation value of F given the state B_jϕ B_j/[B_jϕ B_j], corresponding to the simplified device D̃_j = (B_jϕ B_j/[B_jϕ B_j],Π̃,M̃).
That is, S_D̃_j=⟨ f(M̃,Γ)⟩_B_jϕ B_j/[B_jϕ B_j].
As an immediate result of Lemma <ref>, there exists a basis in which Π̃,M̃ and Γ are 2× 2 block diagonal.
In this basis, we take the projection on every block j as B_j.
This satisfies the conditions in Conditions (<ref>).
Furthermore,
S_D̃ = (∑_j F B_j ϕ B_j) = ∑_j (j) ( F B_j ϕ B_j/(B_j ϕ B_j)) = ∑_j (j) S_D̃_j .
Note that the simplified device D̃_j=(B_jϕ B_j/[B_jϕ B_j],Π̃,M̃) yields the same expectation values to those of the simplified device D̃_j=(B_jϕ B_j/[B_jϕ B_j],B_jΠ̃ B_j,B_jM̃B_j).
The resulting operation is therefore, effectively, done in a space of a single qubit. In addition, due to the symmetry of Π̃ and M̃ in this proof, the lemma also holds for an observable constructed from Π̃ and Γ instead of M̃ and Γ. I.e. S_D̃ = f(Π̃,Γ)_ϕ.
The simplified protocol permits the use of the uncertainty principle, appearing in Lemma <ref>, in a vivid way since it has a geometrical interpretation on the Bloch sphere; recall Figure <ref>.
Under the assumption that Π̃ has two outcomes (this has yet to be justified) we can represent Π̃ and M̃ as two Bloch vectors with some angle between them that corresponds to their square overlap – The smaller the square overlap, the closer the angle is to π/2.
In the ideal case, the square overlap is 1/2 which means that in some basis the two measurements are the standard and the Hadamard measurements.
If one is able to confirm that Pr(M̃=0)=1, the only possible distribution on the outcomes of Π̃ is a uniform one, which has the maximal entropy.
Π̃, however, has 3 outcomes and not 2, rendering the reduction to qubits unjustified.
We can nonetheless argue that the state being used in the protocol is very close to some other state which produces only the first 2 outcomes.
The entropy of both states can then be related using Equation (<ref>).
The ideas described here, are combined and explained thoroughly in the main proof shown in the following subsection.
§.§.§ Conditional entropy bound
The main proof of this subsection is done with respect to a simplified device constructed from a standard one using Definition <ref>.
Before proceeding with the proof, we define the winning probability in both challenges.
Given a simplified device (with implicit key k) D̃=(ϕ, Π̃, M̃), for a given y∈𝒴 we define the Π̃_y and M̃_y winning probabilities, respectively, as
ω_p^y := Tr((1 - Π̃_y^2)ϕ_y)/Tr(ϕ_y) ;  ω_m^y := Tr(M̃_y^0ϕ_y)/Tr(ϕ_y) .
Likewise, the winning probabilities of Π and M as
ω_p := ∑_y Tr((1 - Π̃_y^2)ϕ_y) ;  ω_m := ∑_y Tr(M̃_y^0ϕ_y) .
Recall that ϕ_y are sub-normalized.
We are now ready to prove our main technical lemma.
Let D̃=(ϕ, Π̃, M̃) be a simplified device as in Definition <ref>, constructed from an efficient general device D=(ϕ, Π, M) in the manner depicted in Definition <ref>. Let Φ_y∈ℋ_D⊗ℋ_E be a purification of ϕ_y (respecting the sub-normalization of ϕ_y) and Φ={Φ_y}_y∈𝒴.
For all c∈(1/2,1] and some negligible function ξ(λ), the following inequality holds:
H(Π̃|E,Y)_Φ ≥ max{0, 1-√(2)A(c)√(1-ω_p)-A(c)√(1-ω_m)}
×(log_2(1/c) - h(ω_m - 2√(1-ω_p)-√(2)A(c)√(1-ω_p)-A(c)√(1-ω_m)))
- √(1-ω_p)log_2(3) - (1+√(1-ω_p)) h(√(1-ω_p)/(1+√(1-ω_p))) - ξ(λ) ,
where h(·) is the binary entropy function and
A(c) := 10/(2c-1)^2 .
For every y∈𝒴 we introduce the state
Ψ_y := (Π̃_y^0+Π̃_y^1)Φ_y(Π̃_y^0+Π̃_y^1)/Tr((Π̃_y^0+Π̃_y^1)Φ_y)
in order to reduce the problem to a convex combination of 2-dimensional ones, as described in Section <ref>. We provide a lower bound for the entropy H(Π̃|E,Y)_Ψ_y and by continuity of entropies, a lower bound for H(Π̃|E,Y)_Φ_y is then derived.
The proof proceeds in steps.
* Let
U_y^0 = Π̃_y^0-(Π̃_y^1+Π̃_y^2) ; U_y^1=M̃_y^0-M̃_y^1 .
Using Jordan's lemma, Lemma <ref>, there exists an orthonormal basis where the operators are 2× 2-block diagonal. Let Γ_y be the Hermitian projection on the blocks where the square overlap of U_y^0 and U_y^1 is bounded by c (good blocks).
Note that Γ_y also commutes with both unitaries.
* Using Lemma <ref>, we bound the probability of being in a subspace where the square overlap of Π and M is larger than c (bad blocks):
Tr((1 - Γ_y)Ψ_y) ≤ (2μ + 10√(Tr(M̃^1_yΨ_y)))/(2c-1)^2 = A(c)(μ/5 + √(Tr(M̃^1_yΨ_y))) ,
where A(c) is given by Equation (<ref>) and
μ := |1/2 - Tr(M̃_y^0Π̃_y^0Ψ_yΠ̃_y^0) - Tr(M̃_y^0Π̃_y^1Ψ_yΠ̃_y^1)|
= 1/2 | ∑_b∈{0,1} Tr(Π̃_y^bΨ_yΠ̃_y^b) - 2∑_b∈{0,1} Tr(M̃_y^0Π̃_y^bΨ_yΠ̃_y^b) |
= 1/2 | ∑_b∈{0,1} Tr(M̃_y^0Π̃_y^bΨ_yΠ̃_y^b) - ∑_b∈{0,1} Tr((1-M̃_y^0)Π̃_y^bΨ_yΠ̃_y^b) | .
Due to Lemma <ref> and Corollary <ref>, μ is negligible in the security parameter λ.
Thus, for some negligible function η,
Tr((1 - Γ_y)Ψ_y) ≤ A(c)√(Tr(M̃^1_yΨ_y))+η(λ) .
* Note that Φ is the state of the device (and the adversary), not Ψ. Hence, we cannot, a priori, relate Tr(M̃^1_yΨ_y) to the winning probabilities of the device.
We therefore want to translate Equation (<ref>) to quantities observed in the application of Protocol <ref> when using the simplified device D̃.
That is, we would like to use the values ω_p, ω_m given in Definition <ref> in our equations:
√(Tr(M̃^1_yΨ_y)) = √(Tr(M̃^1_y(Ψ_y - Φ_y)) + Tr(M̃^1_yΦ_y))
≤ √(Tr(|Ψ_y - Φ_y|)) + √(Tr(M̃^1_yΦ_y))
≤ √(2√(Tr(Π̃_y^2 Φ_y))) + √(Tr(M̃^1_yΦ_y))
= √(2√(1-ω_p^y)) + √(1-ω_m^y) ,
where in the last inequality we used the gentle measurement lemma <cit.>.
The last equation, combined with Equation (<ref>), immediately yields
Tr(Γ_y Ψ_y) ≥ 1 - A(c)(√(2)√(1-ω_p^y) +√(1-ω_m^y)) - η(λ) .
* In a similar manner, for later use, we must find a lower bound on Tr(M̃_y^0 Ψ_y) using quantities that can be observed from the simplified protocol. To that end, we again use the gentle measurement lemma:
Tr(M̃_y^0 Φ_y) = Tr(M̃_y^0( Φ_y - Ψ_y)) + Tr(M̃_y^0 Ψ_y)
≤ 2√(1 - Tr((1 - Π̃_y^2) Φ_y)) + Tr(M̃_y^0 Ψ_y) .
Using the definitions of ω_m^y,ω_p^y,
ω_m^y ≤ 2√(1-ω_p^y) + Tr(M̃_y^0 Ψ_y)   ⇒   Tr(M̃_y^0 Ψ_y) ≥ ω_m^y - 2√(1-ω_p^y) .
* We proceed by providing a bound on the conditional entropy, given that Γ_y =0, using the uncertainty principle in Lemma <ref>.
Conditioned on being in a good subspace (which happens with probability Pr(Γ_y = 0)), the square overlap of Π̃ and M̃ is upper bounded by c. We can then bound the entropy of Π̃ using the entropy of M̃:
H(Π̃|E,Y=y,Γ=0)_Ψ_y ≥log_2 (1/c) - H(M̃|Y=y,Γ=0)_Ψ_y
= log_2 (1/c) - h(Pr(M̃_y = 0| Γ_y = 0)_Ψ_y).
Since we have a lower bound on Tr(M̃_y^0 Ψ_y), we proceed by working in the regime where the binary entropy function is strictly
decreasing. To that end, it is henceforth assumed that the argument of the binary entropy function, and all of its subsequent lower bounds, are larger than 1/2.
By using the inequality
Pr(M̃_y = 0| Γ_y = 0) ≥Pr(M̃_y = 0) + Pr(Γ_y = 0) - 1
we obtain
- h(Pr(M̃_y = 0| Γ_y = 0)) ≥ - h(Pr(M̃_y = 0) + Pr(Γ_y = 0) - 1) .
Therefore,
H(Π̃|E,Y=y,Γ=0)_Ψ_y≥log_2 (1/c) - h(Pr(M̃_y = 0) + Pr(Γ_y = 0) - 1) .
* We now want to bound the value H(Π̃| E, Y)_Ψ_y, i.e., without conditioning on the event Γ_y=0.
We write,
H(Π̃| E, Y=y)_Ψ_y ≥ H(Π̃|E,Y=y,Γ_y)_Ψ_y
= Pr(Γ_y=0) H(Π̃|E,Y=y,Γ_y=0)_Ψ_y + Pr(Γ_y=1) H(Π̃|E,Y=y,Γ_y=1)_Ψ_y
≥ Pr(Γ_y=0) H(Π̃|E,Y=y,Γ_y=0)_Ψ_y .
This allows us to use Equation (<ref>) and Equation (<ref>) to bound each term, respectively, on the right hand side of the last inequality with
H(Π̃| E,Y=y)_Ψ_y ≥ (1 - A(c)(√(2)√(1-ω_p^y) + √(1-ω_m^y)) - η(λ)) × (log_2(1/c) - h(Pr(M̃_y = 0) + Pr(Γ_y = 0) - 1)) .
We use Equation (<ref>) a second time together with Equation (<ref>) to lower bound both probability terms in the argument of the binary entropy function and get
H(Π̃|E,Y=y)_Ψ_y ≥ (1 - A(c)(√(2)√(1-ω_p^y) + √(1-ω_m^y)) - η(λ))
× (log_2(1/c) - h(ω_m^y - 2√(1-ω_p^y) - √(2)A(c)√(1-ω_p^y) - A(c)√(1-ω_m^y) - η(λ))) .
* Now, taking the expectation over y on both sides of the inequality and using Lemma <ref>:
H(Π̃|E,Y)_Ψ ≥ (1-√(2)A(c)√(1-ω_p) - A(c)√(1-ω_m) - η(λ))
× (log_2(1/c) - h(ω_m - 2√(1-ω_p) - √(2)A(c)√(1-ω_p) - A(c)√(1-ω_m) - η(λ))) .
* Using Equation (<ref>), we can extract η(λ) from the argument of the binary entropy function in Equation (<ref>) such that for some negligible function ξ(λ),
H(Π̃|E,Y)_Ψ ≥ (1-√(2)A(c)√(1-ω_p) - A(c)√(1-ω_m))
× (log_2(1/c) - h(ω_m - 2√(1-ω_p) - √(2)A(c)√(1-ω_p) - A(c)√(1-ω_m))) - ξ(λ) .
* Using the continuity bound in Equation (<ref>) with ‖Ψ-Φ‖_1/2 ≤ √(1-ω_p) yields
|H(Π̃|E,Y)_Ψ - H(Π̃|E,Y)_Φ| ≤√(1-ω_p)log_2 (3) + (1+√(1-ω_p)) h(√(1-ω_p)/1+√(1-ω_p)) .
* Combining Equation (<ref>) with Equation (<ref>), we conclude that the lemma holds.
For the sake of brevity, denote the bound in Equation (<ref>) as
H(Π̃|Y,E) ≥ g(ω_p, ω_m, c) - ξ(λ) .
Since this inequality holds for all values c∈(1/2,1], for each pair of winning probabilities (ω_p, ω_m) we can pick an optimal value of c that maximizes the bound. We do this implicitly and rewrite the bound as
g(ω_p, ω_m) := max_c∈(1/2,1] g(ω_p, ω_m, c) .
Doing so yields the graph in Figure <ref>.
Later on in the protocol, the verifier chooses whether or not to abort depending on the overall winning probability ω:
ω := Pr(W=1) = Pr(T=0)·Pr(W=1|T=0) + Pr(T=1)·Pr(W=1|T=1) = (1-β)·ω_p + β·ω_m .
We therefore define, for every β∈(0,1), another bound on the entropy that depends only on ω (assuming ω ≥ 1/2):
g(ω; β) := min_ω_p g(ω_p, ω_m = (ω - (1-β)ω_p)/β) .
The optimal β can be found numerically. We plot the bound of Equation (<ref>) in Figure <ref> (neglecting the negligible term ξ(λ)).
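As an illustration, the following minimal Python sketch (our own, not code from the original work) evaluates the bound g(ω_p, ω_m, c) of the preceding lemma with the negligible term ξ(λ) dropped, optimizes it implicitly over c ∈ (1/2, 1], and approximates g(ω; β) by a grid search over ω_p; falling back to the trivial bound 0 outside the regime where the binary entropy argument exceeds 1/2 is a conservative simplification of ours.

import numpy as np
from scipy.optimize import minimize_scalar

def h(p):
    # Binary entropy in bits; 0 at and outside the boundary.
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

def g(wp, wm, c):
    # Right-hand side of the conditional-entropy bound, with xi(lambda) dropped.
    A = 10.0 / (2.0 * c - 1.0) ** 2
    pref = max(0.0, 1.0 - np.sqrt(2.0) * A * np.sqrt(1.0 - wp) - A * np.sqrt(1.0 - wm))
    arg = wm - 2.0 * np.sqrt(1.0 - wp) - np.sqrt(2.0) * A * np.sqrt(1.0 - wp) - A * np.sqrt(1.0 - wm)
    if arg < 0.5:
        return 0.0  # outside the regime where h is decreasing: fall back to the trivial bound
    s = np.sqrt(1.0 - wp)
    continuity = s * np.log2(3.0) + (1.0 + s) * h(s / (1.0 + s))
    return max(0.0, pref * (np.log2(1.0 / c) - h(arg)) - continuity)

def g_opt(wp, wm):
    # Implicit optimization over c in (1/2, 1].
    res = minimize_scalar(lambda c: -g(wp, wm, c), bounds=(0.5 + 1e-6, 1.0), method="bounded")
    return -res.fun

def g_omega(omega, beta, grid=201):
    # g(omega; beta): minimize over omega_p compatible with the overall winning probability omega.
    values = []
    for wp in np.linspace(0.0, 1.0, grid):
        wm = (omega - (1.0 - beta) * wp) / beta
        if 0.0 <= wm <= 1.0:
            values.append(g_opt(wp, wm))
    return min(values) if values else 0.0

print(g_opt(1 - 1e-7, 1 - 1e-7))  # the bound is strictly positive only for near-perfect winning probabilities

Evaluating g_omega on a grid of ω values reproduces the qualitative shape of the plotted min-tradeoff curve: it is zero up to a very high threshold winning probability and grows quickly afterwards.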
§.§ Entropy accumulation
Combining Lemma <ref> with Corollary <ref>, we now have a lower bound on the von Neumann entropy of Protocol <ref>:
we can connect any winning probability ω to a lower bound on the entropy of the pre-image test.
This allows us to proceed with the task of entropy accumulation.
To lower bound the total amount of smooth min-entropy accumulated throughout the entire execution of Protocol <ref>, we use the Entropy Accumulation Theorem (EAT), stated as Theorem <ref>.
To use the EAT, we need to first define the channels corresponding to Protocol <ref> followed by a proof that they are in fact EAT channels.
In the notation of Definition <ref>, we make the following choice of channels:
ℳ_i : R_i-1 → R_i O_i S_i ,   where O_i = Π̂_i M̂_i and S_i = K_i T_i G_i ,
and set Q_i=W_i.
The channels {ℳ_i : R_i-1 → R_i Π̂_i M̂_i K_i T_i G_i}_i∈[n], defined by the CPTP map describing the i-th round of Protocol <ref> as implemented by the computationally bounded untrusted device D̃ and the verifier, are EAT channels according to Definition <ref>.
To prove that the constructed channels {ℳ_i}_i∈[n] are EAT channels, we need to show that the following conditions are fulfilled:
* {O_i}_i∈[n]={Π̂_i M̂_i}_i∈[n], {S_i}_i∈[n]={K_i T_i G_i}_i∈[n] and {Q_i}_i∈[n]={W_i}_i∈[n] are all finite-dimensional classical systems. {R_i}_i∈[n] are arbitrary quantum systems. Finally, we have
d_O = d_Π̂_i · d_M̂_i = |{0,1,2}| · |{0,1}| = 6 < ∞ .
* For any i∈[n] and any input state σ_R_i-1, W_i is a function of the classical values Π̂_i M̂_i K_i T_i G_i. Hence, the marginal σ_O_i S_i is unchanged when deriving W_i from it.
* For any initial state ρ^in_R_0 E and the resulting final state ρ_𝐎𝐒𝐐E = ρ_Π̂𝐌̂𝐊𝐓𝐆𝐖E, the Markov-chain conditions
(Π̂M̂)_1, ⋯ ,(Π̂M̂)_i-1↔(K T G)_1, ⋯ ,(K T G)_i-1, E ↔(K T G)_i
trivially hold for all i∈ [n] as K_i,T_i and G_i are chosen independently from everything else.[For a reader interested in randomness expansion protocols, note that in order to use less randomness as an initial resource one could reuse the keys K_i in some of the rounds (similarly to what happens in standard DI randomness expansion protocols). The Markov-chain condition also holds when the keys are being reused and so one can still follow our proof technique.]
Let β,ω,γ∈(0,1) and g(ω;β) be the function in Equation (<ref>).
Let p be a probability distribution over 𝒲={⊥, 0, 1} such that γ=1-p(⊥) and ω=p(1)/γ. Define Σ(p) = {σ_R_i-1R' : ℳ_i(σ)_W_i = p}.
Then, there exists a negligible function ξ(λ) such that
(g(ω; β)-ξ(λ)) (1 - βγ) ≤ inf_σ_R_i-1 R'∈Σ(p) H(Π̂_i M̂_i | K_i T_i G_i R')_ℳ_i (σ) .
In particular, this implies that for every β∈(0,1) the function
f_min(p) := (g(ω; β) - ξ(λ)) (1 - βγ)
satisfies Definition <ref> and is therefore a min-tradeoff function.
Due to Lemma <ref> and the consequent Equation (<ref>), the following holds for any polynomial sized state σ (not necessarily efficient):
g(ω; β) - ξ(λ) ≤ H(Π̂_i | Y_i K_i, T_i = 0, R')_ℳ_i (σ)
≤ 1/Pr(T_i=0) [ Pr(T_i=0) H(Π̂_i | Y_i K_i, T_i = 0, R')_ℳ_i (σ) + Pr(T_i=1) H(Π̂_i | Y_i K_i, T_i = 1, R')_ℳ_i (σ) ]
= 1/Pr(T_i=0) H(Π̂_i | Y_i K_i T_i R')_ℳ_i (σ)
= 1/(1 - βγ) H(Π̂_i | Y_i K_i T_i R')_ℳ_i (σ)
≤ 1/(1 - βγ) H(Π̂_i M̂_i | K_i T_i R')_ℳ_i (σ)
= 1/(1 - βγ) H(Π̂_i M̂_i | K_i T_i G_i R')_ℳ_i (σ) .
For the last equality, note that the device only knows which test to perform while being unaware if it is for a generation round or not.
Therefore, once T is given, G does not provide any additional information.
Using Theorem <ref>, we can bound the smooth min-entropy resulting from the application of Protocol <ref>:
H_min^ε_s(Π̂𝐌̂ | 𝐊𝐓𝐆)_ρ_|Ω ≥ n f_min - μ√(n) ,
where f_min is given by Equation (<ref>).
We simplify the right-hand side in Equation (<ref>) to a single entropy accumulation rate, μ_opt, and a negligible reduction ξ(λ).
H_min^ε_s(Π̂𝐌̂ | 𝐊𝐓𝐆)_ρ_|Ω ≥ n (μ_opt(n, ω, γ, ε_s, p_Ω; β) - ξ(λ)) .
Note that the gradient of f_min is unbounded.
This prevents us from using the EAT due to Equation (<ref>).
This issue is addressed by defining a new min-tradeoff function f̃_min such that
f̃_min(ω; ω_0) =
  f_min(ω),   if ω ≤ ω_0 ,
  d/dω f_min(ω)|_ω=ω_0 · (ω - ω_0) + f_min(ω_0),   otherwise .
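For concreteness, the following small Python sketch (ours) mirrors this construction; the derivative at ω_0 is approximated by a finite difference, which is an implementation choice of ours rather than something specified above.

def f_min_tilde(omega, omega_0, f_min, eps=1e-4):
    # Min-tradeoff function with bounded slope: keep f_min below the cutoff omega_0,
    # continue linearly with the tangent at omega_0 above it.
    if omega <= omega_0:
        return f_min(omega)
    slope = (f_min(omega_0) - f_min(omega_0 - eps)) / eps  # finite-difference derivative
    return slope * (omega - omega_0) + f_min(omega_0)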
We provide a number of plots of μ_opt as a function of ω for various values of n in Figure <ref>. We remark that we did not fully optimize the code used to derive the plots, and tighter curves could probably be obtained.
§.§ Randomness rates
In the previous section we derived a lower bound on H_min^ε_s(Π̂𝐌̂ | 𝐊𝐓𝐆)_ρ_|Ω.
One is, however, interested in a bound on H_min^ε_s(Π̂|𝐊𝐓𝐆E)_ρ_|Ω instead.
To derive the desired bound we follow similar steps to those taken in the proof of <cit.>.
Under the same conditions used to derive Equation (<ref>), the following holds:
H_min^ε_s(Π̂|𝐊𝐓𝐆E)_ρ_|Ω ≥ n (μ_opt(n, ω, γ, ε_s/4, p_Ω; β) - ξ(λ) - γ)
- 2log(7)√(1 - 2log(ε_s/4 · p_Ω)) - 3log(1 - √(1-(ε_s/4)^2)) .
We begin with the entropy chain rule <cit.>:
H_min^ε_s(Π̂|𝐊𝐓𝐆E)_ρ_|Ω ≥ H_min^ε_s/4(Π̂𝐌̂|𝐊𝐓𝐆E)_ρ_|Ω - H_max^ε_s/4(𝐌̂|𝐊𝐓𝐆E)_ρ_|Ω - 3log(1 - √(1-(ε_s/4)^2)) .
The first term on the right-hand side is given in Equation (<ref>); it remains to find an upper bound for the second term. Let us start from
H_max^ε_s/4(𝐌̂|𝐊𝐆𝐓E)_ρ_|Ω ≤ H_max^ε_s/4(𝐌̂|𝐓E)_ρ_|Ω .
We then use the EAT again in order to bound H_max^ε_s/4(𝐌̂|𝐓E)_ρ_|Ω.
We identify the EAT channels with 𝐎→𝐌̂,𝐒→𝐓 and E → E.
The Markov conditions then trivially hold and the max-tradeoff function reads
f_max(p) ≥ sup_σ_R_i-1R' : ℳ_i(σ)_W_i=ω H(M̂_i|T_i R')_ℳ_i (σ) .
Since the following distributions are satisfied for all i∈[n]
Pr[M̂_i =⊥ | T_i =0] = 1,    Pr[M̂_i ∈{0,1} | T_i =1] = 1,    Pr[T_i =1] = γ,
the max-tradeoff function is simply f_max(p)=γ (thus ‖∇ f_max‖ _∞ = 0).
We therefore get
H_max^ε_s/4(𝐌̂|𝐓E)_ρ_|Ω ≤ γ n + 2log(7)√(1 - 2log(ε_s/4 · p_Ω)) .
We wish to make a final remark. One could repeat the above analysis under the assumption that Y and X are leaked, to get a stronger security statement (this is however not done in previous works). In this case, Y_i and X_i should be part of the output O_i of the channel.
Then, the dimension of O_i scales exponentially with λ, which worsens the accumulation rate μ_opt since this results in exponential factors in Equation (<ref>).
In that case, μ_opt becomes a monotonically decreasing function of λ and we get a certain tradeoff between the two elements in the following expression:
μ_opt(n, ω, γ, ε_s, p_Ω, λ; β) - ξ(λ) .
This means that there is an optimal value of λ that maximizes the entropy and to find it one requires an explicit bound on ξ(λ).
§ CONCLUSION AND OUTLOOK
By utilizing a combination of results from quantum information theory and post-quantum cryptography, we have shown that entropy accumulation is feasible when interacting with a single device.
While this was previously done in <cit.> using ad hoc techniques, we provide a flexible framework that builds on well-studied tools and follows similar steps to those used in DI protocols based on the violation of Bell inequalities using two devices <cit.>. Prior to our work, it was believed that such an approach cannot be taken (see the discussion in <cit.>).
We remark that while we focused on randomness certification in the current manuscript, one could now easily extend the analysis to randomness expansion, amplification and key distribution using the same standard techniques applied when working with two devices <cit.>.
Furthermore, even though we carried out the proof here specifically for the computational challenge derived from a NTCF, the methods that we establish are modular and can be generalized to other protocols with different cryptographic assumptions.
For example, in two recent works <cit.>, the winning probability in various “computational games” are tied to the anti-commutator of the measurements used by the device that plays the game (see <cit.> and <cit.> in particular). Thus, their results can be used to derive a bound on the conditional von Neumann entropy as we do in Lemma <ref>.
From there onward, the final bound on the accumulated smooth min-entropy is derived exactly as in our work.
Apart from the theoretical contribution, the new proof method allows us to derive explicit bounds for a finite number of rounds of the protocol, in contrast to asymptotic statements. Thus, one can use the bounds to study the practicality of DI randomness certification protocols based on computational assumptions.
For the current protocol, in order to get a positive rate the number of repetitions n required, as seen in Figure <ref>, is too demanding for actual implementation. In addition, the necessary observed winning probability is extremely high.
We pinpoint the “source of the problem” to the min-tradeoff function presented in Figure <ref> and provide below a number of suggestions as to how one might improve the derived bounds.
In general, however, we expect that for other protocols (e.g., those suggested in <cit.>) one could derive better min-tradeoff functions that will bring us closer to the regime of experimentally relevant protocols.
In fact, our framework allows us to compare different protocols via their min-tradeoff functions and thus can be used as a tool for bench-marking new protocols.
We conclude with several open questions.
* In both the original analysis done in <cit.> and our work, the cryptographic assumption needs to hold even when the (efficient) device is getting an (inefficient) “advice state”. Including the advice states is necessary when using the entropy accumulation in its current form, due to the usage of a min-tradeoff function (see Equation (<ref>)).
One fundamental question is therefore whether this is necessary in all DI protocols based on post-quantum computational assumptions or not.
* As mentioned above, what makes the protocol considered here potentially unfeasible for experimental implementations is its min-tradeoff function. Ideally, one would like to decrease both the winning probability needed in order to certify entropy and the derivative of the function, which is currently too large. The derivative impacts the second-order term of the accumulated entropy, and this is why we observe the need for a large number of rounds in the protocol: many orders of magnitude more than in the DI setup with two devices.
Any improvement of Lemma <ref> may be useful; when looking into the details of the proof, there is indeed some room for it.
* Once one is interested in non-asymptotic statements and actual implementations, the unknown negligible function ξ(λ) needs to be better understood. The assumption is that ξ(λ)→0 as λ→∞, but any given execution fixes a finite λ. More explicit statements should then be made regarding ξ(λ) and incorporated into the final bound.
* In the current manuscript we worked with the EAT presented in <cit.>. A generalized version, that allows for potentially more complex protocols, appears in <cit.>. All of our lemmas and theorems can also be derived using <cit.> without any modifications. An interesting question is whether there are DI protocols with a single device that can exploit the more general structure of <cit.>.
The authors would like to thank Zvika Brakerski, Tony Metger, Thomas Vidick and Tina Zhang for useful discussions.
This research was generously supported by the Peter and Patricia Gruber Award, the Daniel E. Koshland Career Development Chair, the Koshland Research Fund, the Karen Siem Fellowship for Women in Science and the Israel Science Foundation (ISF) and the Directorate for Defense Research and Development (DDR&D), grant No. 3426/21.
http://arxiv.org/abs/2307.03325v1 | 20230706223233 | 3D Environment Modeling for Falsification and Beyond with Scenic 3.0 | Eric Vin, Shun Kashiwa, Matthew Rhea, Daniel J. Fremont, Edward Kim, Tommaso Dreossi, Shromona Ghosh, Xiangyu Yue, Alberto L. Sangiovanni-Vincentelli, Sanjit A. Seshia | cs.PL | cs.PL
University of California, Santa Cruz
University of California, Berkeley
SentinelOne
insitro
Waymo LLC
The Chinese University of Hong Kong
{evin, shkashiw, dfremont}@ucsc.edu
3D Environment Modeling for Falsification and Beyond with Scenic 3.0
Eric Vin10000-0002-3089-1129 Shun Kashiwa1 Matthew Rhea3 Daniel J. Fremont10000-0002-9992-9965
Edward Kim2 Tommaso Dreossi4 Shromona Ghosh5 Xiangyu Yue6
Alberto L. Sangiovanni-Vincentelli2 Sanjit A. Seshia2
====================================================================================================================================================================================================================
We present a major new version of Scenic, a probabilistic programming language for writing formal models of the environments of cyber-physical systems.
Scenic has been successfully used for the design and analysis of CPS in a variety of domains, but earlier versions are limited to environments that are essentially two-dimensional.
In this paper, we extend Scenic with native support for 3D geometry, introducing new syntax that provides expressive ways to describe 3D configurations while preserving the simplicity and readability of the language.
We replace Scenic's simplistic representation of objects as boxes with precise modeling of complex shapes, including a ray tracing-based visibility system that accounts for object occlusion.
We also extend the language to support arbitrary temporal requirements expressed in LTL, and build an extensible Scenic parser generated from a formal grammar of the language.
Finally, we illustrate the new application domains these features enable with case studies that would have been impossible to accurately model in Scenic 2.
Artifact available at: https://doi.org/10.5281/zenodo.7887049
§ INTRODUCTION
A major challenge in the design of cyber-physical systems (CPS) like autonomous vehicles is the heterogeneity and complexity of their environments.
Increasingly, problems of perception, planning, and control in such environments have been tackled using machine learning (ML) algorithms whose behavior is not well-understood.
This trend calls for verification techniques for ML-based CPS; however, a significant barrier has been the difficulty of constructing formal models that capture the diversity of these systems' environments <cit.>.
Indeed, building such models is a prerequisite not only for verification but any formal analysis.
Scenic <cit.> is a probabilistic programming language that addresses this challenge by providing a precise yet readable formalism for modeling the environments of CPS.
A Scenic program defines a scenario describing physical objects in a world, placing a probability distribution on their positions and other properties; a single program can generate many different concrete scenes by sampling from this distribution.
Scenic also allows defining a stochastic policy describing how agents behave over time, and implementing the resulting dynamic scenarios in a variety of external simulators.
Environment models defined in Scenic can be used for many tasks: falsification, as in the VerifAI toolkit <cit.>, but also debugging, training data generation, and real-world experiment design <cit.>.
These tasks have been successfully demonstrated in a variety of domains including autonomous driving <cit.>, aviation <cit.>, and reinforcement learning agents <cit.>.
Despite Scenic's successes, it has several limitations that prevent its use in a number of applications of interest.
First, the original language models the world as being two-dimensional, since this enables a substantial simplification in the language's syntax (e.g., orientations being a single angle) as well as optimizations in its implementation.
The 2D assumption is reasonable for domains such as driving but leaves Scenic unable to properly model environments for aerial and underwater vehicles, for example.
There can be problems even for ground vehicles: Scenic could not generate a scene where a robot vacuum is underneath a table, as their 2D bounding boxes would overlap and Scenic would treat them as colliding.
The use of bounding boxes rather than precise shapes also leads Scenic to use a simplistic visibility model that ignores occlusion, making it possible for Scenic to claim objects are visible when they are not and vice versa: a serious problem when generating training data for a perception system.
Fundamentally, verification of AI-based autonomous systems requires reasoning about perception and physics in a 3D world.
To support such reasoning, a formal environment modeling language must provide faithful representations of 3D geometry.
Towards this end, we present Scenic 3.0 [Available at: <https://github.com/BerkeleyLearnVerify/Scenic/>], a largely backwards-compatible major release featuring:
* Native 3D Syntax: We update Scenic's existing syntax to support 3D geometry, and add new syntax making it possible to define complex 3D scenarios simply. For example, an object's orientation can be specified as being tangent to a surface and facing another object as much as possible.
* Precise 3D Shapes: The shapes of objects (as well as surfaces and volumes) can be given by arbitrary 3D meshes, with Scenic performing precise reasoning about collisions, containment, tangency, etc.
* Precise Visibility: We use ray tracing for precise visibility checks that take occlusion into account.
* Temporal Requirements: We support arbitrary Linear Temporal Logic <cit.> properties to constrain dynamic scenarios (vs. only G p and F p in Scenic 2).
* Rewritten Parser: We give a Parsing Expression Grammar <cit.> for Scenic, using it to generate a parser with more precise error messages and better support for new syntax and optimization passes.
We first define the new features in Scenic 3 in detail in Sec. <ref>, working through several toy examples.
Then, in Sec. <ref>, we describe two case studies using Scenic with scenarios that could not be accurately modeled without the new features: falsifying a specification for a robot vacuum and generating training data constrained by an LTL formula for a self-driving car's perception system.
Related Work.
There are many tools for test and data generation <cit.>.
Some approaches learn from examples <cit.> and so do not provide specific control over scenarios as Scenic does.
Approaches based on rules or grammars <cit.> provide some control but have difficulty enforcing requirements over the generated data as a whole.
Several probabilistic programming languages have been used for generation of objects and scenes <cit.>, but none of them provide specialized syntax to lay out geometric scenarios, nor for describing dynamic behaviors.
Finally, there has been work on synthetic data generation of 3D scenes and objects using ML techniques such as GANs (e.g., <cit.>), but these lack the specificity and controllability provided by a programming language like Scenic.
§ NEW FEATURES
§.§ 3D Geometry
The primary new feature in Scenic 3 is the generalization of the language to 3 dimensions.
Some changes, like changing the type system so that vectors have length 3, are obvious: here we focus on cases where the existing syntax of Scenic does not easily generalize, using simple scenarios to motivate our design choices.
The first challenge when moving to 3D is the representation of an object's orientation in space: Scenic's existing heading property, providing a single angle, is no longer sufficient.
Instead, we introduce yaw, pitch, and roll angles, using the common convention for aircraft that these represent intrinsic rotations (i.e., yaw is applied first, then pitch is applied to the resulting orientation, etc.).
Using intrinsic angles makes it easy to compose rotations: for example if we point an airplane towards a landing strip with yaw and pitch (either manually or using Scenic's facing toward specifier — more on this below), we can add an additional roll by adding to that property.
To further simplify composition, we add a parentOrientation property which specifies the local coordinate system in which the 3 angles above should be interpreted (by default, the global coordinate system).
This allows the user to specify an orientation with respect to a previously-computed orientation, for instance that of a tilted surface.
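The following short Python sketch (not Scenic code, and not Scenic's internal implementation) illustrates the convention just described using SciPy rotations: intrinsic yaw-pitch-roll angles, optionally interpreted inside a parent orientation; the concrete angle values are only illustrative.

import numpy as np
from scipy.spatial.transform import Rotation as R

def orientation(yaw, pitch, roll, parent=None):
    # Intrinsic Z-Y'-X'' rotation (yaw applied first, then pitch, then roll),
    # optionally expressed in a parent's local frame; angles in radians.
    local = R.from_euler("ZYX", [yaw, pitch, roll])
    return local if parent is None else parent * local

# A first object faces (45 deg, 0, 90 deg) in global coordinates; a second object
# reuses that orientation as its parentOrientation and adds a 30 deg yaw on top.
orient_a = orientation(np.deg2rad(45), 0.0, np.deg2rad(90))
orient_b = orientation(np.deg2rad(30), 0.0, 0.0, parent=orient_a)
print(orient_b.as_euler("ZYX", degrees=True))  # resulting global yaw/pitch/roll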
Scenic provides a flexible system of natural language specifiers which can be combined to define properties of objects.
Consider the following Scenic 3 code:
objectA = new Object at (1, 2, 3), facing (45 deg, 0, 90 deg)
objectB = new Object left of objectA by 1
objectC = new Object above objectB by 1,
facing (Range(0,30) deg, Range(0,30) deg, 0)
Here, we use the at specifier to define a specific position for object A; the facing specifier defines the object's orientation using explicit yaw, pitch, and roll angles.
We then place object B left of A by 1 unit with the left of specifier: this specifier now not only sets the position property, but also sets the parentOrientation property to the orientation of object A (unless explicitly overridden).
Thus object B will be oriented the same way as A.
Similarly, object C is positioned relative to B and so inherits its orientation as its parentOrientation.
However, this time we use the facing specifier to define random yaw and pitch angles, so object C will face up to 30^∘ off of B.
Another way to specify an object's orientation is the facing toward specifier.
This is a case where the 2D semantics become ambiguous in 3D.
Consider a scenario where the user wants an airplane to be “facing toward” a runway: the plane's body should be oriented toward the runway (giving its yaw), but it is not clear whether in addition the plane should be pitched downward so that its nose points directly toward the runway.
To allow for both interpretations, Scenic 3 has facing toward only specify yaw, while the new facing directly toward specifier also specifies pitch.
This is illustrated in Fig. <ref>.
Another common practice in 3D space is to place one object on another.
For example, we may want to place a chair on a floor, or a painting on a wall.
Scenic's existing on specifier, which sets the position of an object to be a uniformly random point in a given region, does not suffice for such cases because it would cause the chair to intersect the floor or the painting to penetrate the wall (or both).
To fix this issue, we allow each object to define a base point, which the on specifier positions instead of the object's center.
The default base point is the bottom center of the object's bounding box, suitable for cars and chairs for example; a class could override this to be the back center.
Finally, to enable placing objects on each other, objects can provide a topSurface property specifying the surface which is considered the “top” for the purposes of the on specifier.
As before, there is a reasonable default (the upward-pointing faces of the object's mesh) that can be overridden.
This syntax is illustrated in Fig. <ref>.
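As a rough illustration of these semantics (assuming the trimesh library; this is not how Scenic implements the on specifier, and for simplicity the whole surface is sampled rather than only the upward-pointing faces), one can sample a point of a surface mesh and translate an object so that the bottom centre of its bounding box, i.e. its default base point, lands on it:

import numpy as np
import trimesh

floor = trimesh.creation.box(extents=[5.0, 5.0, 0.1])
chair = trimesh.creation.box(extents=[0.4, 0.4, 0.9])

point, _ = trimesh.sample.sample_surface(floor, 1)       # uniformly random point of the floor mesh
base_point = np.array([0.0, 0.0, chair.bounds[0, 2]])    # bottom centre of the chair's bounding box
chair.apply_translation(point[0] - base_point)           # base point now sits on the sampled point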
A final 3D complication arises when positioning objects on irregular surfaces.
Consider a pair of cars driving up an uneven mountain road, with one 10 meters behind the other.
We can use the ahead of specifier to place one car 10 meters ahead of the other, but then the car will penetrate the road due to its upward slope.
Alternatively, the on specifier can correctly place the car so it is tangent to the road, but then we cannot directly specify the distance between the cars.
The natural semantics here would be to combine the constraints from both specifiers, but this is illegal in Scenic 2 where a given property (such as position) can only be specified by a single specifier at a time.
We enable this usage in Scenic 3 by introducing the concept of a modifying specifier that modifies the value of a property already defined by another specifier.
Specifically, if an object's position is already specified, the on specifier will project that position down onto the given surface.
This is illustrated by the green chair in Fig. <ref>.
Note that the green chair is correctly upright on the floor even though it was positioned relative to the cube, and so should inherit parentOrientation from the cube as discussed above.
In this situation, the user has provided no explicit orientation for the chair, and both below and on can provide one.
To resolve this ambiguity, we introduce a specifier priority system, where specifiers have different priorities for the properties they specify (generalizing Scenic's existing system where a specifier could specify a property optionally).
In our example, below specifies position with priority 1 and parentOrientation with priority 3, while on specifies these with priorities 1 and 2 respectively.
So both specifiers determine position (with on modifying the value from below as explained above), but on takes precedence over below when specifying parentOrientation.
This yields the expected behavior while still allowing below to determine the orientation when used in combination with other specifiers than on.
§.§ Mesh Shapes and Regions
Scenic 2's approximation of objects by their bounding boxes was adequate for 2D driving scenarios, for example, but is wholly inadequate in 3D, where objects are commonly far from box-shaped.
For example, consider placing a chair tucked in under a table.
Since the bounding boxes of these two objects intersect, Scenic 2 would always reject this situation as a collision and try to generate a new scene, even if the chair and table are entirely separate.
In Scenic 3, each object has a precise shape given by its shape property, which is set to an instance of a shape class.
The most general such class represents an arbitrary 3D mesh and can be loaded from standard formats; classes for primitive shapes like spheres are provided for convenience.
These shapes are used to perform precise collision and containment checks between objects and regions.
Scenic also supports mesh regions, which can either represent surfaces or volumes in 3D space. For example, given a mesh representing an ocean we might want to sample on the surface for a boat or in the volume for a submarine.
All meshes in Scenic are handled using Trimesh <cit.>, a Python library for triangular meshes, which internally calls out to the tools Blender <cit.> and OpenSCAD <cit.> for several operations.
These operations tend to be expensive, so Scenic uses several heuristics to cheaply settle simple cases; these can give a 10x to 1000x speedup when sampling scenes.
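A small example of the kind of check this enables, written directly against trimesh (and assuming python-fcl is installed for its collision manager; this is not Scenic's own code), is sketched below: a chair seat tucked under a table collides according to bounding boxes but not according to the actual meshes.

import numpy as np
import trimesh

top = trimesh.creation.box(extents=[1.0, 1.0, 0.05])
top.apply_translation([0.0, 0.0, 0.75])
leg = trimesh.creation.box(extents=[0.05, 0.05, 0.75])
leg.apply_translation([0.45, 0.45, 0.375])
table = trimesh.util.concatenate([top, leg])

seat = trimesh.creation.box(extents=[0.4, 0.4, 0.05])
seat.apply_translation([0.0, 0.0, 0.45])    # chair seat tucked in under the table top

# Bounding-box test (roughly what a 2D/box-based check would do): the boxes overlap.
boxes_overlap = bool(np.all(table.bounds[0] <= seat.bounds[1])
                     and np.all(seat.bounds[0] <= table.bounds[1]))

# Precise mesh test: the meshes themselves do not intersect.
manager = trimesh.collision.CollisionManager()
manager.add_object("table", table)
manager.add_object("seat", seat)
print(boxes_overlap, manager.in_collision_internal())   # True False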
§.§ Precise Visibility Model
Scenic 2's visibility system simply checks if the bounding box corners of objects are contained in the view cone of the viewing object, which is no longer adequate for 3D scenarios with complex shapes.
Visibility checks are now done using ray tracing, and account for objects being able to occlude visibility.
In addition to standard pyramidal view cones used for cameras, Scenic correctly handles wrap-around view regions such as those of common LiDAR sensors.
Visibility checks use a configurable density of rays, and are optimized to only send rays in areas where they could feasibly hit the object.
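The sketch below (ours, using trimesh ray casting rather than Scenic's actual visibility system, and with a single ray instead of a configurable ray density) conveys the idea of an occlusion-aware check: a target counts as visible along a ray only if the ray reaches the target before any occluder.

import numpy as np
import trimesh

viewer = np.array([0.0, 0.0, 1.0])
target_center = np.array([5.0, 0.0, 1.0])
target = trimesh.creation.icosphere(radius=0.3)
target.apply_translation(target_center)
occluder = trimesh.creation.box(extents=[0.2, 2.0, 2.0])
occluder.apply_translation([2.5, 0.0, 1.0])

direction = target_center - viewer
direction = direction / np.linalg.norm(direction)

def first_hit(mesh, origin, direction):
    # Distance to the first intersection of the ray with the mesh (inf if it misses).
    locations, _, _ = mesh.ray.intersects_location([origin], [direction])
    return np.inf if len(locations) == 0 else np.linalg.norm(locations - origin, axis=1).min()

visible = first_hit(target, viewer, direction) < first_hit(occluder, viewer, direction)
print(visible)   # False: the box occludes the sphere along this viewing ray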
§.§ Temporal Requirements
A key feature of Scenic is the ability to declaratively impose constraints on generated scenes using require statements.
However, Scenic 2 only provides limited support for temporal requirements constraining how a dynamic scenario evolves over time, with the require always and require eventually statements.
Slightly more complex examples, like “cars A and B enter the intersection after car C”, require the user to explicitly encode them as monitors, which is error-prone and yields verbose hard-to-read imperative code: this property requires an 8-line monitor in <cit.>.
Scenic 3 extends require to arbitrary properties in Linear Temporal Logic <cit.>, allowing natural properties like this to be concisely expressed:
require (carA not in intersection and carB not in intersection
until carC in intersection)
The semantics of the operators always, eventually, next, and until are taken from RV-LTL <cit.> to properly model the finite length of Scenic simulations.
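To make the finite-trace reading concrete, here is an illustrative Python check of "p until q" on a finite simulation trace; RV-LTL's four-valued verdicts are collapsed into a plain pass/fail here, so this is only a sketch of the semantics, not Scenic's monitor code.

def until(p, q, trace):
    # True iff q holds at some step and p holds at every earlier step.
    for state in trace:
        if q(state):
            return True
        if not p(state):
            return False
    return False  # q never observed within the finite simulation

# "cars A and B enter the intersection after car C":
trace = [{"A": False, "B": False, "C": False},
         {"A": False, "B": False, "C": True},
         {"A": True,  "B": True,  "C": True}]
print(until(lambda s: not s["A"] and not s["B"], lambda s: s["C"], trace))  # True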
§.§ Rewritten Parser
For interoperability with Python libraries, Scenic is compiled to Python, and the original Scenic parser was implemented on top of the Python parser.
This approach imposed serious restrictions on the language design (e.g., forcing non-intuitive operator precedences), made extending the parser difficult, and led to misleading error messages which pointed to the wrong part of the program.
Scenic 3 uses a parser automatically generated from a Parsing Expression Grammar (PEG) <cit.> for the language.
The parser is based on Pegen <cit.>, the parser generator developed for CPython, and the grammar itself was obtained by extending the Python PEG.
The new parser outputs an abstract syntax tree representing the structure of the original Scenic code (unlike the old parser), ensuring that syntax errors are correctly localized and simplifying the task of writing analysis and optimization passes for Scenic.
This new parser gives us flexibility in designing and implementing the language. For example, we carefully assigned precedence to the four new temporal operators so that users can naturally express temporal requirements without unnecessary parentheses.
There are additional benefits from having a precise machine-readable grammar for Scenic: for instance, as we wrote the grammar, we discovered ambiguities that had previously been unnoticed and made minor changes to the language to eliminate them.
The grammar could also be be used to fuzz test the compiler and other tools operating on Scenic programs.
§ CASE STUDIES
In this section, we discuss two case studies in the robotics simulator Webots <cit.>. The code for both case studies is available in the Scenic GitHub repository <cit.>.
The first case study, performing falsification of a robot vacuum, illustrates a domain that could not be modeled in Scenic 2 due to the lack of 3D support.
The second case study, generating data constrained by an LTL formula for testing or training the perception system of an autonomous vehicle, is an example of how the new features in Scenic 3 can significantly improve effectiveness even in one of Scenic's original target domains.
§.§ Falsification of a Robot Vacuum
In this example we evaluate the iRobot Create <cit.>, a robot vacuum, on its ability to effectively clean a room filled with objects.
We use a specification stating that the robot must clean at least a third of the room within 5 minutes: in Signal Temporal Logic <cit.>, the formula φ = F_[0,300] (coverage > 1/3).
We use Scenic to generate a complete room and export it to Webots for simulation.
The room is surrounded by four walls and contains two main sections: in the dining room section, we place a table of varied width and length randomly on the floor, with 3 chairs tucked in around it and another chair fallen over.
In the living room section, we place a couch with a coffee table in front of it, both leaving randomly-sized spaces roughly the diameter of the robot vacuum.
We then add a variable number of toys, modeled as small boxes, cylinders, cones, and spheres, placed randomly around the room; for a taller obstacle, we place a stack of 3 box toys somewhere in the room.
Finally, we place the vacuum randomly on the floor, and use Scenic's mutate statement to add noise to the positions and yaw of the furniture.
Several scenes sampled from this scenario are shown in Fig. <ref>.
We tested the default controller for the vacuum against 0, 1, 2, 4, 8, and 16-toy variants of our Scenic scenario, running 25 simulations for each variant.
For each simulation, we computed the robustness value <cit.> of our spec φ.
The average values are plotted in Fig. <ref>, showing a clear decline as the number of toys increases.
Many of the runs actually falsified φ: up to 44% with 16 toys.
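For reference, the robustness value of φ = F_[0,300](coverage > 1/3) on a sampled trace is simply the largest margin by which the coverage threshold is exceeded within the first 300 seconds; the short sketch below (with a coverage trace and sampling period of our own choosing) makes this explicit.

def robustness(coverage_trace, dt, horizon=300.0, threshold=1.0 / 3.0):
    # coverage_trace: coverage ratio sampled every dt seconds, starting at t = 0.
    n = int(horizon / dt) + 1
    return max(c - threshold for c in coverage_trace[:n])

print(robustness([0.0, 0.1, 0.25, 0.4], dt=100.0))  # positive: the specification is satisfied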
There are several aspects of this example that would not be possible in Scenic 2.
First, the new syntax in Scenic 3 allows for convenient placement of objects, specifically the use of on in combination with left of and right of, to place the chairs on the appropriate side of the dining table but on the floor.
Many of the objects are also above others and have overlapping bounding boxes, but because Scenic now models shapes precisely, it is able to properly register these objects as non-intersecting and place them in truly feasible locations (e.g., in Fig. <ref>, the toy under the dining table in the top left scene and the robot under the coffee table in the bottom right scene).
§.§ Constrained Data Generation for an Autonomous Vehicle
In this example we generate instances of a potentially-unsafe driving scenario for use in training or testing the perception system of an AV.
Consider a car passing in front of the AV in an intersection where the AV must yield, and so needs to detect the other car before it becomes too late to brake and avoid a collision.
We want to generate time series of images labeled with whether or not the crossing car is visible, for a variety of different scenes with different city layouts to provide various openings and backdrops.
Our scenario places both the ego car (the AV) and the crossing car randomly on the appropriate road ahead of the intersection.
We place several buildings along the crossing road that block visibility, allowing some randomness in their position and yaw values.
We also place several buildings completely randomly behind the crossing road to provide a diverse backdrop of buildings in the images.
Finally, we want to constrain data generation to instances of this scenario where the crossing car is not visible until it is close to the AV, as these will be the most challenging for the perception system.
Using the new LTL syntax, we simply write:
require (not ego can see car) until distance to car < 75
Fig. <ref> shows a simulation sampled from this scenario.
In Scenic 2, the crossing car would be wrongly labeled as visible in image (a), since the occluding buildings would not be taken into account.
This would introduce significant error into the generated training set, which in previous uses of Scenic had to be addressed by manually filtering out spurious images; this is avoided with the new system.
§ CONCLUSION
In this paper we presented Scenic 3, a major new version of the Scenic programming language that provides full native support for 3D geometry, a precise occlusion-aware visibility system, support for more expressive temporal operators, and a rewritten extensible parser.
These new features extend Scenic's use cases for developing, testing, debugging, and verifying cyber-physical systems to a broader range of application domains that could not be accurately modeled in Scenic 2.
Our case study in Section <ref> demonstrated how Scenic 3 makes it easier to perform falsification for CPS with complex 3D environments.
Our case study in Section <ref> further showed that even in domains that could already be modeled in Scenic 2, like autonomous driving, Scenic 3 allows for significantly more precise specifications due to its ability to reason accurately about 3D orientations, collisions, visibility, etc.; these concepts are often relevant to the properties we seek to prove about a system or an environment we want to specify.
We expect the improvements to Scenic we describe in this paper will impact the formal methods community both by extending Scenic’s proven use cases in simulation-based verification and analysis to a much wider range of application domains, and by providing a 3D environment specification language which is general enough to allow a variety of new CPS verification tools to be built on top of it.
In future work, we plan to develop 3D scenario optimization techniques (complementing the 2D methods Scenic already uses) and explore additional 3D application domains such as drones.
We also plan to leverage the new parser to allow users to define their own custom specifiers and pruning techniques.
Acknowledgements
The authors thank Ellen Kalvan for helping debug and write tests for the prototype, and several anonymous reviewers for their helpful comments.
This work was supported in part by
DARPA contracts FA8750-16-C0043 (Assured Autonomy) and FA8750-20-C-0156 (Symbiotic Design of Cyber-Physical Systems), by Berkeley Deep Drive, by Toyota through the iCyPhy center, and NSF grants 1545126 (VeHICaL) and 1837132.
splncs04
http://arxiv.org/abs/2307.03270v1 | 20230704082959 | A Comprehensive Multi-scale Approach for Speech and Dynamics Synchrony in Talking Head Generation | Louis Airale, Dominique Vaufreydaz, Xavier Alameda-Pineda | cs.GR | cs.GR, cs.CV, cs.LG, cs.SD, eess.AS
A Comprehensive Multi-scale Approach for Speech and Dynamics Synchrony in Talking Head Generation
Louis Airale
Univ. Grenoble Alpes, CNRS
Grenoble INP, LIG
38000 Grenoble
France
Dominique Vaufreydaz
Univ. Grenoble Alpes, CNRS
Grenoble INP, LIG
38000 Grenoble
France
Xavier Alameda-Pineda
Univ. Grenoble Alpes, Inria, CNRS
Grenoble INP, LJK
38000 Grenoble
France
August 1, 2023
========================================================================================================================================================================================================================================================================================
Animating still face images with deep generative models using a speech input signal is an active research topic and has seen important recent progress.
However, much of the effort has been put into lip syncing and rendering quality while the generation of natural head motion, let alone the audio-visual correlation between head motion and speech, has often been neglected.
In this work, we propose a multi-scale audio-visual synchrony loss and a multi-scale autoregressive GAN to better handle short and long-term correlation between speech and the dynamics of the head and lips.
In particular, we train a stack of syncer models on multimodal input pyramids and use these models as guidance in a multi-scale generator network to produce audio-aligned motion unfolding over diverse time scales.
Our generator operates in the facial landmark domain, which is a standard low-dimensional head representation.
The experiments show significant improvements over the state of the art in head motion dynamics quality and in multi-scale audio-visual synchrony both in the landmark domain and in the image domain.
Our code, models and demo will be made available on the project's GitHub page.[https://github.com/LouisBearing/HMo-audiohttps://github.com/LouisBearing/HMo-audio.]
§ INTRODUCTION
Among the many computer vision tasks that have benefited from the breakthrough of deep learning, talking face generation, that aims to animate still images from a conditioning audio signal, has received considerable attention in the previous years.
The advent of potent reenactment systems, as <cit.> or <cit.>, and powerful loss functions allowing for a finer correlation between the generated lip motion and the audio input <cit.> have paved the way for a new state of the art.
In both tasks of talking head generation and face reenactment, where lip and head motion are given as a driving video sequence, it is customary to represent face dynamics in a low dimensional space <cit.>.
For this reason recent breakthrough in face reenactment has also benefited the talking head synthesis task.
The above approach assumes that image texture and face dynamics can be processed independently, and that all necessary cues to handle the dynamics fit on a low dimensional manifold.
It is then a reliable strategy to treat audio-conditioned talking face synthesis as a two-step procedure, where the audio-correlated dynamics are first generated in the intermediate space of an off-the-shelf face reenactment model, which is later used to reconstruct photorealistic video samples <cit.>.
This allows to focus on improving the audio-visual (AV) correlation between the input speech signal and the produced face and lips movements in a much sparser space than that of real-world images.
Nevertheless, synthesising natural-looking head and lip motion sequences adequately correlated with an input audio signal remains a challenging task.
In particular, although it has long been known that speech and head motion are tightly associated <cit.>, only recently has this relation attracted the attention of the computer vision community.
A likely reason for the difficulty of producing realistic head motion is the lack of an adequate loss function.
So far, the most successful strategy to produce synchronized lip movements has relied on the maximization of the cross-modal correlation between short audio and output motion clips, measured by a pre-trained model <cit.>.
This fails, however, to account for lower frequency motion as that of the head which remains quasi-static over the short duration considered, typically of the order of a few hundreds of milliseconds.
Surprisingly, there was no attempt to generalize this approach beyond lip synchronization.
Neither has possible multi-scale audio-visual correlation been explored in the talking face generation literature.
Head motion is often produced through the use of a separate sub-network trained to match the dynamics of a ground truth sequence, which in practice decouples the animation of head and lips.
We argue that to account for motion that unfolds over longer duration such as the head rhythm, a dedicated loss enforcing the synchrony of AV segments of various lengths is needed.
We propose to implement this loss using a pyramid of syncers, replacing the lip-sync expert of <cit.> with a stack of syncer models evaluating the correlation between the audio input and the dynamics of the whole face over different time scales.
One advantage of this syncer-pyramid loss function is that it allows to produce head and lip movements together; here one may train a single network end-to-end on the dynamics of both head and lips, resulting in overall lighter architecture and training procedure.
A natural way to exploit the gradients from the multi-scale AV correlation loss is then to construct a similar hierarchical structure in the generative model itself.
The proposed method is implemented in the landmark domain <cit.>: for the reasons previously mentioned it is sufficient to parameterize the speech-correlated facial dynamics, which is the focus of the present work.
Our generative model, loss functions and most of the metrics used to measure the quality and synchrony of the produced motion therefore apply in this domain.
Although it is out of the scope of this study, several landmark-based real-world face reconstruction methods exist <cit.>.
Last, in contrast with the current trend in talking face synthesis, we rely on an autoregressive generative network for its inherent ability to model sequential dependencies, and its flexibility to handle sequences of arbitrary length.
To do so, we build on the autoregressive Generative Adversarial Network (GAN) baseline of <cit.>, and show that the conditioning speech signal has a stabilizing effect that hinders error accumulation on a much longer term than in the unconditional setting.
In particular, we demonstrate experimentally that the error drift can be mitigated on test sequences more than five times the length of the training sequences.
More importantly, we show that the proposed model, coupled with the multi-scale discriminator of <cit.>, largely outperforms the state of the art in terms of multi-scale audio-visual correlation and head dynamics quality.
The contributions of the present work are:
* A multi-scale audio-visual correlation loss based on a pyramid of syncer networks,
* A multi-scale autoregressive GAN for the generation of speech-synchronized head and lip motion in the 2D-landmarks domain with minimal error accumulation,
* Extensive experiments on three datasets that show that our architecture outperforms previous works on all metrics related to both quality of head dynamics and AV correlation.
§ RELATED WORK
§.§ Talking head generation
The task of animating a human face with a neural network can be either guided, when the head motion comes from a driving sequence, or unguided, in which case the head and lip motion must be inferred by the generative model from other modalities.
Compelling results have been achieved over the years to improve the photorealistic rendering of guided methods <cit.>.
Among these, several works rely on low dimensional representations, e.g. facial landmarks <cit.>, learned keypoints <cit.>, or morphable models <cit.> to handle the dynamics, which are later used to warp or normalize the style of the source identity image.
On the other hand, the primary focus of audio-driven talking head synthesis has been on syncing output lip movements and input speech signal, either leaving visual reenactment as a separate task, or limiting it to static pose scenarios <cit.>.
For this reason, there has been comparatively few endeavors to generate realistic head motion <cit.>.
As a noticeable improvement over previous research, recent works showed very promising results producing rich head motion in a low-dimensional keypoint space in combination with proficient visual reenactment systems <cit.>.
However there remains a margin of improvement in particular in the diversity of output head motion, and in the time alignment between speech, lips and head motion over different time scales, which has never been addressed before.
§.§ Learning to align speech and head dynamics
Two trends coexist regarding the syncing of audio and face dynamics.
Originally, learning audio-correlated lip movements was only done with a mean squared error loss to the ground truth sequence <cit.>.
In parallel, following SyncNET <cit.>, the use of contrastive loss variants turned out to be a strong alternative for its effectiveness on cross-modal training tasks <cit.>.
In particular, in <cit.> the authors proposed to train a lip-sync expert network to regress the cross-modal alignment between short audio and video segments.
The expert would later be frozen during the generative model training phase, and used as a loss function to enhance output audio-visual alignment.
This strategy was later employed successfully in several works <cit.>.
We argue however that the commonly used segment length of 200 ms is insufficient to properly align lower-frequency movements like that of the head, for which several such syncer networks operating on various segment lengths are required.
§.§ Multi-scale data processing
Learning on representations of the input data over multiple scales has become the standard in computer vision tasks such as object detection or semantic segmentation where objects of the same class can have different sizes <cit.>.
In the generative models literature, multi-scale approaches may either be implemented in the discriminator network of GANs as a way to improve multi-scale faithfulness of generated data <cit.> but also in the generative model itself <cit.>.
Although this was not explored so far in talking head generation, multi-scale feature hierarchies can be readily computed to align speech and dynamics of various motion frequencies.
§ METHOD
Given a set of initial landmark coordinates x_0 ∈ℝ^2L (the 2D coordinates of the L=68 landmarks) and a conditioning audio signal a_0:T = (a_0, …, a_T) ∈ℝ^d × T (here d=26) over T time steps, we aim to produce a sequence of landmark positions x_1:T such that the joint distributions over generated and data samples match:
p_g(x_0:T,a_0:T) = p_data(x_0:T,a_0:T), ∀ x_0:T, a_0:T.
In this section we describe our procedure to tackle this problem as follows.
We start by introducing in <ref> the multi-scale AV synchrony loss which is the major contribution of the present work.
Then in <ref> we propose a multi-scale generator architecture able to exploit appropriately the devised multi-scale AV loss.
Finally section <ref> details our overall training procedure.
§.§ Multi-scale audio-visual synchrony loss
The most prominent procedure to align dynamics with speech input relies on the optimization of a correlation score computed on short audio-visual segments of the generated sequence using a pre-trained AV syncer network <cit.>.
Several contrastive loss formulations are possible to train the syncer network; they all maximize the agreement between in-sync AV segments, or positive pairs (a_t, x_t), relative to that of out-of-sync, or negative, pairs.
One particularly interesting formulation is the Info Noise Contrastive Estimation loss, that maximizes the mutual information between its two input modalities <cit.>.
Given a set X = (a_t, x_t, x_1^neg, …, x_N^neg) containing a positive pair and N negative position segments, this loss writes:
ℒ_InfoNCE = - 𝔼_X [ e^S(a_t, x_t) / ( e^S(a_t, x_t) + ∑_n=1^N e^S(a_t, x_n^neg) ) ] ,
with S the syncer model score function, which is hereafter implemented as the cosine similarity of the outputs from the audio and position embeddings e_a and e_x of the syncer network:
S(a_t, x_t) = e_a(a_t)^⊤ e_x(x_t) / ( ||e_a(a_t)|| · ||e_x(x_t)|| ) .
Following the usual practice, we take a_t and x_t respectively as the MFCC spectrogram and position segment of a 200 ms window centered on time step t.
Negative pairs can be indifferently misaligned audio and position segments from the same sequence sample, or segments from different samples, and N is hereafter fixed to 48.
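A compact PyTorch sketch of this training objective is given below; e_a and e_x stand for the audio and position embedding networks, the batching conventions are ours rather than the authors', and we follow the ratio exactly as displayed above (the classical InfoNCE formulation additionally applies a logarithm to it).

import torch

def sync_score(e_a, e_x, audio, pos):
    # Cosine similarity between audio and landmark-position embeddings, per sample.
    za = torch.nn.functional.normalize(e_a(audio), dim=-1)
    zx = torch.nn.functional.normalize(e_x(pos), dim=-1)
    return (za * zx).sum(dim=-1)

def info_nce_loss(e_a, e_x, audio, pos, negatives):
    # audio, pos: aligned 200 ms segments of shape [B, ...];
    # negatives: N misaligned position segments, each of shape [B, ...].
    s_pos = sync_score(e_a, e_x, audio, pos)                                   # [B]
    s_neg = torch.stack([sync_score(e_a, e_x, audio, n) for n in negatives])   # [N, B]
    logits = torch.cat([s_pos.unsqueeze(0), s_neg], dim=0)                     # [N + 1, B]
    return -torch.softmax(logits, dim=0)[0].mean()  # minus the softmax weight of the positive pair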
Once trained, the weights of e_a and e_x are frozen and the following term is added
to the loss function of the generative model:
ℒ_AV = - 𝔼_a_t, x_t S(a_t, x_t),
where a_t is now part of the conditioning signal and x_t is output by the model.
The above procedure is insufficient when one needs to discover AV correlations over different time scales.
A straightforward extension consists in building multi-scale representations of the audio-visual inputs and training one syncer network S^i for each level i in the resulting pyramid.
The training process of the pyramid of syncers is represented in Figure <ref> (left).
Specifically, the audio and landmark position hierarchies {a_0:T/2^i-1^i}_i and {x_0:T/2^i-1^i}_i are constructed by successive passes through an average pooling operator that blurs and downscales its input by a factor 2, e.g. for positions:
x_t^i = 1/2k + 1∑_τ=-k^k x_2t + τ^i-1
where we choose k=3.
The objective is to progressively blur out the highest frequency motion when moving upward in the pyramid, forcing the top level syncers to exploit better the rhythm of the head motion.
A total of four syncer networks are trained on the input pyramid following (<ref>), input segment duration ranging from the standard 200 ms on the bottom level to 1600 ms at the coarsest scale.
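The pyramid construction amounts to repeated blur-and-downsample steps; a short PyTorch sketch follows (boundary handling at the sequence edges is a simplification of ours).

import torch
import torch.nn.functional as F

def build_pyramid(x, levels=4, k=3):
    # x: tensor of shape [batch, channels, T] (audio features or landmark coordinates).
    # Each level averages a window of 2k+1 frames and downsamples by a factor 2.
    pyramid = [x]
    for _ in range(levels - 1):
        x = F.avg_pool1d(x, kernel_size=2 * k + 1, stride=2, padding=k,
                         count_include_pad=False)
        pyramid.append(x)
    return pyramid

landmarks = torch.randn(8, 136, 128)                     # 2L = 136 coordinates over T = 128 steps
print([p.shape[-1] for p in build_pyramid(landmarks)])   # [128, 64, 32, 16]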
After the training of the pyramid of syncers, all networks S^1 to S^4 are frozen and used to compute the multi-scale audio-visual synchrony loss.
The principle of this loss is presented in Figure <ref> (right).
Similar to the input pyramids used to train the syncer networks, we construct a multi-scale representation of the input speech a_0:T and the generated landmark positions x_0:T.
Then for each hierarchy level i one loss term ℒ_AV^i is computed according to (<ref>) using pre-trained syncer S^i.
Those terms are then averaged to give the overall multi-scale AV synchrony loss ℒ_AV^MS.
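The following sketch summarizes how the frozen syncers could be combined into ℒ_AV^MS; the .audio_emb / .pos_emb attribute names are hypothetical placeholders for the two embedding heads of each syncer.

```python
import torch.nn.functional as F

def multiscale_av_loss(audio_pyramid, pos_pyramid, syncers):
    """Sketch of L_AV^MS: average of the per-level synchrony losses L_AV^i."""
    losses = []
    for a_i, x_i, syncer in zip(audio_pyramid, pos_pyramid, syncers):
        e_a = syncer.audio_emb(a_i)   # frozen audio embedding at level i
        e_x = syncer.pos_emb(x_i)     # frozen position embedding at level i
        # L_AV^i = -E[ S(a_t, x_t) ], with S the cosine similarity
        losses.append(-F.cosine_similarity(e_a, e_x, dim=-1).mean())
    return sum(losses) / len(losses)
```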
To better exploit the effects of this loss, we propose a multi-scale autoregressive generator network that we describe in the following section.
§.§ Multi-scale autoregressive generator
Through the multi-scale synchrony loss, the generator receives gradients that push it to produce audio-synced landmark positions over multiple time scales.
In this section, we describe the architecture of our generator network, which is itself implemented with a multi-scale structure to better exploit the loss gradients.
The overall architecture is described in Figure <ref>.
The proposed generative model is inspired by SUHMo <cit.>, which implements an autoregressive model to generate facial landmark velocities.
It however requires substantial adaptations to deal with the present multimodal data.
Very generally, given landmark positions x_0:t until time step t and next frame audio input a_t+1, the generator G produces instantaneous velocities v_t+1:
v_t+1 = G(x_0:t, a_t+1),
x_t+1 = x_t + v_t+1.
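In generation mode this amounts to the simple rollout sketched below; the exact interface of G (in particular how the temporal module consumes the history) is an assumption made for illustration, since the real model keeps an LSTM state rather than re-reading the full history.

```python
import torch

@torch.no_grad()
def rollout(generator, x0, audio_feats):
    """Sketch of the autoregressive loop: v_{t+1} = G(x_{0:t}, a_{t+1}),
    x_{t+1} = x_t + v_{t+1}.

    x0          : (B, 2L) initial landmark coordinates
    audio_feats : (B, T, d) conditioning audio features a_{1:T}
    """
    xs = [x0]
    for t in range(audio_feats.size(1)):
        v_next = generator(torch.stack(xs, dim=1), audio_feats[:, t])  # v_{t+1}
        xs.append(xs[-1] + v_next)                                     # x_{t+1}
    return torch.stack(xs, dim=1)   # (B, T+1, 2L)
```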
As depicted in Figure <ref>, G is constituted of a temporal module operating on a sequence of landmark positions, and of a multi-scale module that takes the output of the temporal module h_t, the positions x_t and audio a_t+1 as input to produce v_t+1.
We implement the multi-scale module as the bottom-up path of a Feature Pyramid Network <cit.>.
Namely, the input spectrogram is processed by several downsampling convolutional layers, producing feature maps a_0:T^1 to a_0:T/2^3^4 of the same resolution as those used to compute the AV loss pyramid.
Feature maps 2 to 4 are later interpolated back to the length T of the finest map, such that one vector a_t+1^i can be extracted from each pyramid level i to produce the next step velocity.
Concretely, each vector a_t+1^i is concatenated with x_t and h_t and is processed by an independent fully connected branch, the rationale being that processing each input resolution separately would allow the model to produce different motion frequencies.
The outputs of the four branches of the multi-scale generator are merged using a learnable soft spatial mask.
Each branch i outputs a velocity vector v^i ∈ℝ^2L (note that time index is omitted for the sake of clarity) and a mask vector w^i ∈ℝ^2L such that w^i_2k-1 = w^i_2k, ∀ k ≤ L, responsible for enhancing or weakening the contribution of each landmark on the given branch.
This is because we expect facial regions to play different roles depending on the scale: the finest resolution branch might emphasize lip landmarks, while at the coarsest scale, more weight may be put on head contour.
The output of the multi-scale module finally writes:
v_t+1 = ∑_i=1^4 (e^w^i/∑_j e^w^j) v^i
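In other words, the four branch outputs are blended with a per-coordinate softmax over the learned masks, as in the short sketch below (tensor shapes are illustrative assumptions).

```python
import torch

def merge_branches(v_list, w_list):
    """Sketch of the soft spatial mask merge in the equation above.

    v_list : list of 4 velocity tensors of shape (B, 2L), one per branch
    w_list : list of 4 mask tensors of shape (B, 2L), equal per landmark pair
    """
    v = torch.stack(v_list, dim=0)        # (4, B, 2L)
    w = torch.stack(w_list, dim=0)        # (4, B, 2L)
    weights = torch.softmax(w, dim=0)     # e^{w^i} / sum_j e^{w^j}, per coordinate
    return (weights * v).sum(dim=0)       # v_{t+1}
```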
§.§ Overall architecture and training
In addition to the AV synchrony loss ℒ_AV^MS, we make use of the discriminator networks proposed in <cit.>, that proved effective to train an autoregressive generator on landmark sequences.
These consist of one frame discriminator D_f, which assesses the realism of static landmarks, and two window-based multi-scale networks D_s and D_s^j operating on sequences.
The difference between the two is that D_s^j processes pairs of samples to help reduce mode collapse <cit.>.
Although in our audio-conditioned setting mode collapse is at most a minor issue, we found that using this additional loss slightly improves the dynamics quality.
Adversarial losses are implemented with the geometric GAN formulation of <cit.>.
Namely, given the generated and ground truth landmark position distributions p_g and p_data, the generator losses write:
ℒ_G_f = -𝔼_x_0:T∼ p_g[ 1/T∑_t ≥ 1 D_f(x_t) ],
ℒ_G_s = -𝔼_x_0:T∼ p_g[ D_s(x_0:T) ],
ℒ_G_s^j = -𝔼_x_0:T∼ p_g, x'_0:T∼ p_g[ D_s^j(x_0:T, x'_0:T) ],
while the generic discriminator loss reads:
ℒ_D_* = 𝔼_x ∼ p_g [max(0, 1 + D_*(x)) ]
+ 𝔼_x ∼ p_data[max(0, 1 - D_*(x)) ],
where D_* is replaced respectively by D_f, D_s and D_s^j and sequences are sampled according to equations <ref> to <ref>.
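These hinge-style objectives are standard; a compact sketch (with tensor names of our choosing) is:

```python
import torch

def hinge_d_loss(d_real, d_fake):
    """Discriminator hinge loss: E_data[max(0, 1 - D(x))] + E_g[max(0, 1 + D(x))]."""
    return torch.relu(1.0 - d_real).mean() + torch.relu(1.0 + d_fake).mean()

def hinge_g_loss(d_fake):
    """Generator term: -E_g[D(x)]."""
    return -d_fake.mean()
```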
Additionally, we trained with the weak L_2 reconstruction loss in <cit.> but found no significant improvement.
Overall, training consists in alternately minimizing the following two terms:
ℒ_D = ℒ_D_f + ℒ_D_s + ℒ_D_s^j,
ℒ = λℒ_AV^MS + ℒ_G_f + ℒ_G_s + ℒ_G_s^j,
with λ = 8 in all experiments.
§ EXPERIMENTS
We conducted three benchmark evaluations to measure the proficiency of our model, assessing respectively head dynamics quality, multi-scale AV synchrony in the landmark domain, and AV synchrony in the image domain.
§.§ Experimental protocol
Datasets.
Experiments are conducted on two versions of the VoxCeleb2 dataset <cit.> with different preprocessing.
The first version, labelled VoxCeleb2 (I), follows the standard preprocessing that centers the face in every frame.
Second, we use the preprocessing strategy in <cit.> to re-generate subsets of respectively ∼18k and 500 short video clips from the original VoxCeleb2 train and test sets.
The advantage of this preprocessing method is that it keeps the reference frames fixed, thus preserving head motion.
We refer to this second version as VoxCeleb2 (II).
The HDTF dataset <cit.> contains ∼400 long-duration frontal-view videos of political addresses, which, despite limited dynamics diversity, makes it suitable for AV correlation measurements.
Last, we use LRS2 <cit.>, which is preprocessed similarly to VoxCeleb2 (I), to measure the AV synchrony in the image space.
Benchmark models.
We compare our method with the following prominent speech-driven talking head generation models.
Wav2Lip <cit.> uses a pre-trained lip syncer to
learn the AV synchrony, and achieved state-of-the-art performances on the visual dubbing task.
However, it only reenacts the lip region and therefore does not produce any head motion.
Similarly, PC-AVS <cit.> produces speech-synchronized talking head videos using a driving head motion sequence, mapping the results directly in image space without any explicit intermediate representation.
MakeItTalk <cit.> was one of the first successful attempts to produce speech-correlated head motion.
Its dynamics are learned in the landmark domain on VoxCeleb2 (I), i.e. no head translation was seen at training.
Audio2Head <cit.> and its follow-up model OSTF <cit.> propose methods to generate vivid dynamics, learning head motion and AV synchrony in a sparse keypoint space using a two-step training procedure.
Audio2Head dynamics module is trained on VoxCeleb2 (II), while OSTF is trained on a single identity, namely using Obama Weekly Address dataset.
As a noticeable improvement over Audio2Head, in OSTF AV synchrony is controlled with the contrastive loss of <cit.>.
An overview of the different models can be found in Table <ref>.
Training details.
The temporal module introduced in Section <ref> is implemented as a 1-layer LSTM with hidden size 256.
All convolutions and fully connected layers are implemented as 1D blocks (with kernel size 1 for dense layers) <cit.>.
We trained two versions of our model, varying only the training sequence length from 40 to 80 frames, resulting in -short and -long: the aim is to see how this affects the quality of the produced sequences on various output lengths.
Models were trained on VoxCeleb2 (II) for 70k iterations (about 500 epochs) using Adam optimizers with β_1=0 and β_2=0.999 and learning rates 2× 10^-5 and 1× 10^-5 respectively for the generator and the discriminator, after which a decay factor of 0.1 was applied on the learning rates for 5k additional iterations.
All audio inputs are sampled at 16 kHz, and to generate the 26-dimensional MFCC spectrogram we used a window size of 400 and hop size of 160.
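As an illustration, such features could be extracted as follows (a sketch using librosa; the authors' exact feature pipeline may differ):

```python
import librosa

def audio_features(wav_path):
    """Sketch: 26-dimensional MFCC features from 16 kHz audio,
    with a 400-sample window and a 160-sample hop."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=26, n_fft=400, hop_length=160)
    return mfcc.T   # (T, 26): one 26-dim vector per hop
```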
§.§ Dynamics quality
Protocol.
The quality of the produced dynamics is evaluated on the 500 videos of VoxCeleb2 (II) test set, which preserve head motion.
The Fréchet Inception Distance (FID) is used to measure static face realism, while the Fréchet Video Distance (FVD) and temporal FID <cit.> metrics measure the distance between the distributions of data and generated motion.
The two latter metrics require a fixed sequence length that we set to either 40 or 80 frames (equivalent to 1.6 s and 3.2 s at 25 fps), and we refer to the resulting metrics as FVD_40 (t-FID_40) and FVD_80 (t-FID_80), respectively.
When generating longer sequences, we measure the FVD_40, t-FID_40 and FID on the last 40 frames.
Image-rasterized landmarks are used to compute the metrics (see Figure <ref> and <cit.>).
Results.
The results of the dynamics quality evaluations are reported in Table <ref>.
Our -short model shows FVD and t-FID scores similar to OSTF but a significantly better FID, especially on 40- and 80-frame sequences.
Since the faces produced by OSTF are also very sharp, we interpret this result as a hint that this model lacks diversity.
Audio2Head suffers from the same limitation to an even greater extent: although visually compelling, the movements it produces are stereotypical and therefore penalized by their too small variance in the Fréchet distance calculations.
On the other hand, MiT performs well in FID but its dynamics are of noticeably lower quality.
Finally, the -long results show that a mere change in training strategy greatly reduces error accumulation over 200 time steps, although here at the cost of slightly lower quality on shorter sequences.
§.§ Landmark-domain multi-scale AV synchrony
Protocol.
Ideal multi-scale AV synchrony scores should convey how much a model succeeds in exploiting the audio signal to produce motion over diverse time scales.
To that end, we resort to audio-visual datasets which preserve motion dynamics, namely VoxCeleb2 (II) and HDTF <cit.>, and carry our evaluations in the landmark domain.
We split HDTF into 291 and 51 train and test identities, and further split the test videos into 1058 80-frame clips.
Likewise, measurements on VoxCeleb2 (II) are made on sequences of 80 frames.
Pyramids of AV syncers dedicated to metrics calculation, equivalent to landmark-domain SyncNETs <cit.> on full faces, are trained beforehand on both datasets.
Note that contrary to Section <ref>, we use the triplet loss of <cit.> to train the syncers.
The AV synchrony is evaluated using the absolute value of the audio-visual offset (|AV-Off|) and the confidence score (AV-Conf) introduced in <cit.>, measured at four different scales via the syncer pyramids on successively downsampled audio-visual chunks of duration 200 ms, 400 ms, 800 ms and 1600 ms.
Hence an offset of 1 at the finest resolution, sampled at 25 fps, amounts to a misalignment of 40 ms between modalities, whereas at the coarsest scale this rises to 320 ms.
We report in Figure <ref> the performances of the first level syncer on VoxCeleb2 (II) test set.
The distributions of ground-truth synchrony scores are closer to a Gaussian than to a perfect Dirac.
This is partly because the same audio signal may correspond to more than one facial configuration, and the syncers may not fully grasp this diversity.
The synchrony scores reported here should therefore be viewed in light of this assumption: rather than an actual finer AV alignment, results that appear "better" than the ground truth instead correspond to a closer match to the modes of the AV distribution discovered by the syncers.
Results.
The AV synchrony scores are reported in Table <ref> and <ref> for VoxCeleb2 (II) and HDTF, respectively.
We did not include PC-AVS in this section because of distinct cropping strategies producing inconsistent results.
Although the loss functions are different, the syncer pyramid used to train the model and the one which serves to compute the metrics were both trained on VoxCeleb2 (II): our model almost perfectly learned to optimize this loss, hence the very strong AV correlation scores.
The HDTF results expose the generalization abilities of the different methods.
Although Wav2Lip presents the best |AV-Off_1| at the first scale and Audio2Head the best |AV-Off_2| at the second scale, our model obtains the second-best scores and largely outperforms all models in terms of AV confidence.
What is more, the gap in favor of our model increases at the two coarsest scales, highlighting the effectiveness of the proposed approach to correlate input speech and generated motion on multiple time scales.
Last, it is noteworthy that although it produces no head motion, Wav2Lip's results remain well above the static, uncorrelated boundary even at the coarsest resolution.
This means that the mouth region is still partly informative at the top pyramid level, possibly calling for a stronger blurring strategy.
§.§ Image-domain AV synchrony
Protocol.
In a third batch of experiments the synchrony is calculated in the image domain, similar to the classical evaluation protocol.
To do so, we first map the landmarks output by our model in the image space using the reenactment system from MakeItTalk (hereafter MiT <cit.>).
Although this leads to blurry results when the pose changes from the initial orientation, we found it sufficient for the sake of AV synchrony measurements.
We use two datasets for these experiments: a subset of 2141 videos from the original test set of VoxCeleb2 (I), and LRS2.
To cope with the imbalanced duration of VoxCeleb2 videos and keep computation time manageable, we work with the first 40 frames in each clip, while we use the whole LRS2 test set, which contains shorter videos.
In addition to the absolute AV offset and confidence score given by SyncNET, we compute the Landmark Distance (LMD <cit.>), together with a frontalized version LMD_front that better accounts for face rotation.
For a fair comparison, we do not directly use the landmarks produced by our model to measure the LMD but extract them back from the reenacted video clips; the same procedure is applied to the ground truth landmarks to help assess the effects of each of the previous steps on the metrics.
Results.
Although not the primary scope of our study, the landmarks produced by our model and reenacted with MiT behave surprisingly well in the image domain (Table <ref>).
Our model outperforms all other models in terms of AV offset on both VoxCeleb2 (I) and LRS2, only falling short of Wav2Lip in terms of AV confidence on the two datasets, and of LMD on VoxCeleb2.
In particular, it performs better than all other models with head motion (and notably MakeItTalk) on all considered metrics.
Notice also that Wav2Lip leaves the whole input face beyond the lips intact, which seems to bias the AV confidence calculation in its favor, especially when considering the MiT-reenacted ground truth landmarks.
This suggests that SyncNET is sensitive to the image sharpness: the shortcomings of the image reenactment systems probably set limits on the achievable offset and confidence values.
§.§ Qualitative results
In Figure <ref> we present several sequences output by different models and the corresponding ground truth sequences over 120 frames.
An examination of these examples shows that the mouth closings and openings produced by our model look correctly aligned with the original; interestingly, this also seems to be the case for head motion, although the loss only enforces convergence of distributions.
Although the motion produced by OSTF is qualitatively good, it is slightly less diverse and tends to frontalize the face disregarding the original orientation.
Wav2Lip, on the other hand, only synchronizes the lips.
§.§ Ablation study
In this section we explore the roles of the multi-scale AV synchrony loss and of the multi-scale generator on the output results, and in particular in AV confidence at different resolutions.
Figure <ref> shows the evolution of the validation AV confidence during training: at the finest resolution, almost no difference is visible between the full model and its single-scale loss, single-scale generator equivalent.
However, as expected, the confidence of the latter model falls significantly below that of the full model as one moves upward in the feature pyramid, since its loss does not explicitly enforce multi-scale synchrony.
This effect can be circumvented by enabling the multi-scale AV synchrony loss; however, if the generator is not itself a multi-scale network, it clearly lacks the capacity to fully exploit the loss, resulting in average performances at every scale.
§ CONCLUSION
The approach proposed in this work is the first attempt to learn and model audio-visual correlations at multiple scales for talking head generation.
This is enabled thanks to a pyramid of syncer models that are trained on hierarchical representations of input audio and landmark position sequences, and then used to compute the loss for the training of the generative model.
Importantly, we showed that this model should also be built on a multi-scale backbone, implemented here as a feature pyramid network together with individual branches for each pyramid level that are merged using a soft learnable mask.
The very encouraging results obtained here let us foresee numerous applications of similar approaches to other audio-visual generation tasks.
One research direction could thus consist in replacing the facial landmarks with other quantities, be it low dimensional keypoints or body joints, or real-world images.
Another, orthogonal, direction may be to extend the focus to additional cross-modal relationships, such as audio-visual emotions.
arXiv:2307.00896v1 (2023-07-03), math.AP (35S99, 49J10)
On the interior Bernoulli free boundary problem for the fractional Laplacian on an interval
Tadeusz Kulczycki, Jacek Wszoła
We study the structure of solutions of the interior Bernoulli free boundary problem for (-Δ)^α/2 on an interval D with parameter λ > 0. In particular, we show that there exists a constant λ_α,D > 0 (called the Bernoulli constant) such that the problem has no solution for λ∈ (0,λ_α,D), at least one solution for λ = λ_α,D and at least two solutions for λ > λ_α,D. We also study the interior Bernoulli problem for the fractional Laplacian for an interval with one free boundary point. We discuss the connection of the Bernoulli problem with the corresponding variational problem and present some conjectures. In particular, we show for α = 1 that there exists solutions of the interior Bernoulli free boundary problem for (-Δ)^α/2 on an interval which are not minimizers of the corresponding variational problem.
§ INTRODUCTION
The interior Bernoulli free boundary problem for the fractional Laplacian is formulated as follows.
Given α∈ (0,2), d ∈ℕ, a bounded domain D ⊂ℝ^d and a constant λ > 0 we look for a continuous function u: ℝ^d → [0,1] and a domain K ⊂ D of class C^1 satisfying
{
(-Δ)^α/2u(x) =0 for x ∈ D ∖K,
u(x) =1 for x ∈K,
u(x) =0 for x ∈ D^c,
D_n^α/2u(x) = -λ for x ∈∂ K..
Here (-Δ)^α/2 denotes the fractional Laplacian given by
(-Δ)^α/2f(x) = lim_r → 0^+ ( α 2^α-1Γ((d+α)/2) / ( π^d/2Γ(1-α/2) ) ) ∫_{z ∈ℝ^d: |z| > r} ( f(x+z) - f(x) ) / |z|^d+α dz
and D_n^α/2 denotes the generalized normal derivative given by
D_n^α/2u(x) = lim_t → 0^+ ( u(x + t n(x)) - u(x) ) / t^α/2,
where n(x) is the outward unit normal vector to K at x. As usual, by a domain we understand a nonempty, connected open set; by a domain of class C^1 we understand a domain whose boundary is locally the graph of a C^1 function.
When α = 2, that is if (-Δ)^α/2 is replaced by (-Δ) and D_n^α/2 is replaced by the normal derivative
∂_n u(x) = lim_t → 0^+u(x + t n(x)) - u(x)/t,
Problem <ref> is just the classical interior Bernoulli problem, which has been intensively studied, see e.g. <cit.>. It arises in various nonlinear flow laws and in several physical situations, e.g. electrochemical machining and potential flow in fluid mechanics.
In the classical case (α = 2) it is well known that Problem <ref> does not have a solution for any positive level λ. For example, when D is convex, it is proved in <cit.> that there is some positive constant λ_D such that this problem has a solution for level λ if and only if λ≥λ_D. This constant λ_D is called a Bernoulli constant. It is also known that even if there are solutions to the problem for some λ there is no uniqueness in general. For example, if D is a ball in ^d (d ≥ 2) there are exactly 2 solutions to the problem for any λ > λ_D, while for λ = λ_D the solution is unique (see e.g. <cit.>). A very interesting and open question is whether for general convex bounded domains D the structure of the solutions to the Bernoulli problem enjoys similar features. See <cit.> for some discussion on this question in the classical case.
The main aim of this paper is to study the structure of solutions of Problem <ref> in the simplest geometric case i.e. when D is an interval. The main results of our paper are the following theorems.
Let α∈ (0,2), x_0 ∈ℝ, r > 0 and D = (x_0-r,x_0+r). Then there is a constant λ_α,D > 0 such that
Problem <ref> has
* at least two solutions for λ> λ_α,D,
* at least one solution for λ = λ_α,D,
* no solution for λ < λ_α,D.
Moreover, if (u,K) is any solution of Problem <ref>, then K is symmetric with respect to x_0. We also have λ_α,D = λ_α,(x_0-r,x_0+r) = r^-α/2λ_α,(-1,1).
The constant λ_α,D is called the Bernoulli constant for the fractional Laplacian for a domain D. The next result provides
some estimates of this constant for the interval (-1,1).
For any α∈ (0,2) we have
C_α( T_α + 1/(α 2^α) ) ≤λ_α,(-1,1)≤ C_α( 2/α)^α/2( T_α + (1/(α 2^α)) ( α/(2-α) )^α ),
where C_α = π^-1sin(πα/2) and T_α = B(α,1-α/2).
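A quick, non-rigorous numerical evaluation of these two bounds can be obtained as in the sketch below (Python/SciPy, helper names ours); for instance, at α = 1 the lower and upper bounds evaluate to approximately 0.80 and 1.13.

```python
import numpy as np
from scipy.special import beta

def bernoulli_bounds(alpha):
    """Evaluate the lower and upper bounds of the theorem above for 0 < alpha < 2."""
    C = np.sin(np.pi * alpha / 2) / np.pi       # C_alpha
    T = beta(alpha, 1 - alpha / 2)              # T_alpha = B(alpha, 1 - alpha/2)
    lower = C * (T + 1 / (alpha * 2**alpha))
    upper = C * (2 / alpha)**(alpha / 2) * (
        T + (alpha / (2 - alpha))**alpha / (alpha * 2**alpha))
    return lower, upper

print(bernoulli_bounds(1.0))   # roughly (0.796, 1.125)
```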
Bernoulli problems for fractional Laplacians have been investigated for the first time by Caffarelli, Roquejoffre and Sire in <cit.>. Such problems are relevant in classical physical models in mediums where long range interactions are present. Bernoulli problems for (-Δ)^α/2 have been intensively studied in recent years see e.g. <cit.>. In these papers mainly regularity of free boundary of solutions of the related variational problem was studied. This variational problem is formulated in Section <ref>.
One-dimensional Bernoulli problems for the fractional Laplacian for α = 1 are related to some reaction-diffusion equations, which can model the combustion of an oil slick on the ground with the temperature diffusing above ground (see <cit.>).
Problem <ref> for α = 1 has already been studied in <cit.>. In that paper (in the part devoted to the inner Bernoulli problem) only the existence of solutions of the related variational problem was studied, without investigating the number of solutions.
In our paper we study all solutions to Problem <ref> and not only solutions to the variational problem. The relation between Problem <ref> and the variational problem is discussed in Section <ref>.
The main difficulty in proving Theorem <ref> is caused by the nonlocality of the fractional Laplacian. From a technical point of view this is manifested by the fact that there is no explicit formula for the Poisson kernel corresponding to (-Δ)^α/2 for an open set which is the union of two disjoint intervals, although the explicit formula for the Poisson kernel corresponding to (-Δ)^α/2 for an interval is known. The nonlocal Problem <ref> for an interval can be transformed to a local one in one dimension higher (see e.g. <cit.>), but this does not seem to allow one to find the explicit formula for the Poisson kernel corresponding to (-Δ)^α/2 for the union of two disjoint intervals.
In our paper we study also a simplified version of Problem <ref>. This is the interior Bernoulli problem for the fractional Laplacian for an interval with one free boundary point. It is formulated as follows.
Given α∈ (0,2), x_0 ∈ℝ, r > 0, D = (x_0 - r, x_0 + r) and a constant λ > 0 we look for a Borel measurable function u: ℝ→ [0,1], continuous in D, and an interval K = (a,x_0+r) ⊂ D satisfying
{
(-Δ)^α/2u(x) =0 for x ∈ D ∖K,
u(x) =1 for x ∈K∖{x_0 + r},
u(x) =0 for x ∈ D^c,
D_n^α/2u(a) = -λ, .
where D_n^α/2 is given by (<ref>).
Clearly, the point a is the unique free boundary point for this solution.
The main result concerning Problem <ref> is the following theorem.
Let α∈ (0,2), x_0 ∈ℝ, r > 0 and D = (x_0-r,x_0+r). Then there is a constant μ_α,D > 0 such that Problem <ref> has
* exactly two solutions for λ> μ_α,D,
* exactly one solution for λ = μ_α,D,
* no solution for λ < μ_α,D.
We also have μ_α,D = μ_α,(x_0-r,x_0+r) = r^-α/2μ_α,(-1,1).
For α = 1 we are able to obtain explicit formulas for μ_α,D and solutions of Problem <ref>.
Let α = 1, x_0 ∈ℝ, r > 0 and D = (x_0-r,x_0+r). Then we have μ_1,D = 2√(2)/(π√(r)). For λ = μ_1,D we have one solution of Problem <ref> given by
K = (x_0,x_0+r), u(x) = (2/π)arctan( (x-x_0+r)^1/2/(2(x_0-x))^1/2 ), x ∈ D ∖K.
For λ > μ_1,D we have two solutions (K_1,u_1) and (K_2,u_2) of Problem <ref> given by
K_i = (a_i, x_0+r), u_i(x) = (2/π)arctan( ( (x-x_0+r)^1/2(x_0+r-a_i)^1/2 ) / ( (a_i-x)^1/2(2r)^1/2 ) ),
where x ∈ D ∖K_i, a_i = x_0 + (-1)^i r(1-μ^2_1,D/λ^2)^1/2 for i=1,2.
Figure <ref> shows graphs of solutions given in Theorem <ref>. Note, that although there is an extensive literature devoted to free boundary problems for fractional Laplacians, explicit solutions of these problems are quite rare.
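For the reader's convenience, the two λ > μ_1,D profiles (here for x_0 = 0, r = 1) can be reproduced with a few lines of Python such as the sketch below; the chosen value of λ is arbitrary.

```python
import numpy as np
import matplotlib.pyplot as plt

def u_alpha1(x, a, x0=0.0, r=1.0):
    """Explicit alpha = 1 solution of the one-free-boundary-point problem on
    D = (x0 - r, x0 + r) with K = (a, x0 + r), evaluated for x in (x0 - r, a)."""
    num = np.sqrt(x - x0 + r) * np.sqrt(x0 + r - a)
    den = np.sqrt(a - x) * np.sqrt(2 * r)
    return (2 / np.pi) * np.arctan(num / den)

mu = 2 * np.sqrt(2) / np.pi          # mu_{1,D} for x0 = 0, r = 1
lam = 1.5 * mu                       # any lambda > mu_{1,D}
for sign in (-1, 1):
    a = sign * np.sqrt(1 - mu**2 / lam**2)   # the two free boundary points
    x = np.linspace(-1 + 1e-4, a - 1e-4, 400)
    plt.plot(x, u_alpha1(x, a), label=f"a = {a:.3f}")
plt.xlabel("x"); plt.ylabel("u(x)"); plt.legend(); plt.show()
```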
The paper is organized as follows. In Section <ref> we collect well known facts which will be used in the sequel. In Section <ref> we study the interior Bernoulli problem for the fractional Laplacian for an interval with one free boundary point. Section <ref> contains proofs of main results of our paper. In Section <ref> we present some conjectures and discuss the connection of the Bernoulli problem with the corresponding variational problem.
§ PRELIMINARIES
In this section we present notation and gather some well known facts, which we need in the paper. In particular, we introduce Poisson kernel and Green function corresponding to the fractional Laplacian (-Δ)^α/2 (for α∈ (0,2)). We concentrate only on one-dimensional case and only on facts which are needed in this paper. For the detailed exposition of the potential theory corresponding to the fractional Laplacian we refer the reader to <cit.> or <cit.>.
We denote = {1, 2, 3, …}, for D ⊂ we put D^c = ∖ D and for any x ∈ we put δ_D(x) = (x,D^c).
Let be a class of nonempty open sets G in , G which has the representation
G = ⋃_k = 1^n G_k,
where n ∈ and all G_k are intervals (bounded or unbounded) and (G_i,G_j) > 0 for any i, j ∈{1, …, n}, i j. The class plays a role of smooth sets in .
Fix α∈ (0,2). For any D ∈ we denote by P_D the Poisson kernel for D corresponding to (-Δ)^α/2. P_D: D × (D)^c → (0,∞) has the following properties. For any x ∈ D we have
∫_(D)^c P_D(x,y) dy ≤ 1,
and if D ∈ is additionally bounded, then
∫_(D)^c P_D(x,y) dy = 1.
Let D ∈ be bounded and let f: D^c → be bounded and measurable. Let us consider the following outer Dirichlet problem for the fractional Laplacian (-Δ)^α/2. We look for a bounded measurable function u: →, continuous on D, satisfying
{
(-Δ)^α/2u(x) =0, for x ∈ D,
u(x) =f(x), for x ∈ D^c..
Such Dirichlet problem has a unique solution, which is given by the formula
u(x) = ∫_(D)^c P_D(x,y) f(y) dy, x ∈ D,
(see e.g. <cit.>). When f is continuous on D^c, it is well known that u is continuous on .
On the other hand, if measurable, bounded function u:→, continuous on D satisfies
(-Δ)^α/2 u(x) = 0, for x ∈ D,
then the following mean value property is satisfied (see e.g. <cit.>). For any bounded B ⊂ D, B ∈ and any x ∈ B we have
u(x) = ∫_(B)^c P_B(x,y) u(y) dy.
There are known explicit formulas of the Poisson kernels for intervals. For any a, b ∈, a < b we have (see e.g. <cit.>)
P_(a,b)(x,y) = C_α( ((x-a)(b-x)) / ((y-a)(y-b)) )^α/2·1/|x-y|, x ∈ (a,b), y ∈ [a,b]^c,
where C_α is defined as in Theorem <ref>.
For any D ∈ we denote by G_D the Green function for D corresponding to (-Δ)^α/2. It has the following properties: G_D: ×→ [0,∞] and G_D(x,y) = 0 if x ∈ D^c or y ∈ D^c. For x, y ∈ denote h_D,y(x) = G_D(x,y). For any fixed y ∈ D we have
(-Δ)^α/2 h_D,y(x) = 0, for x ∈ D.
The function h_D,y satisfies the following mean value property (see e.g. <cit.>). For any fixed y ∈ D, any bounded B ⊂ D, B ∈ such that (B,y) > 0 we have
h_D,y(x) = ∫_(B)^c P_B(x,z) h_D,y(z) dz, for x ∈ B.
For any B ⊂ D, B, D ∈ we have
G_B(x,y) ≤ G_D(x,y), for x, y ∈ℝ.
As in the classical case, there is known representation of the Poisson kernel in terms of the Green function. For any bounded D ∈ we have (see e.g. <cit.>)
P_D(x,z) = ∫_D G_D(x,y) _α/|y - z|^1+α dy for x ∈ D, z ∈ (D)^c,
where
_α = α 2^αΓ(1+α/2)/2 √(π)Γ(1 - α/2).
This is the same constant, which appear in the definition of (-Δ)^α/2 for dimension d = 1.
In the paper we need some well known facts concerning hypergeometric functions. For the convenience of the reader, we briefly present them below. Let |z|<1, p,q,r ∈ℝ and -r ∉. The (Gaussian) hypergeometric function is defined as
_2F_1(p,q;r;z) = ∑_n=0^∞(p)_n(q)_n/(r)_nz^n/n!,
where (·)_n is a Pochhammer symbol. For p ∈, r > q >0 and |z|<1 we have
B(q,r-q) _2F_1(p,q;r;z) = ∫_0^1 t^q-1(1-t)^r-q-1(1-tz)^-p dt,
where B(· ,·) is the beta function. Note, that we can rewrite the above as
B(q,r-q) _2F_1(p,q;r;z) = ∫_1^∞ t^p-r(t-1)^r-q-1(t-z)^-p dt
= ∫_0^∞ t^r-q-1(t+1)^p-r(t-z+1)^-p dt.
We will also need the following easy result concerning beta function.
For p ∈ (0,2) we have B(p, 1-p/2) > 1/p.
By definition we have
B(p, 1-p/2) = ∫_0^1 t^p-1 (1-t)^-p/2 dt > ∫_0^1 t^p-1 dt = 1/p.
§ BERNOULLI PROBLEM WITH ONE FREE BOUNDARY POINT
This section is devoted to the study of the interior Bernoulli problem for the fractional Laplacian for an interval with one free boundary point.
For a fixed a ∈ (0,1) let w_a:→ be a (unique) measurable, bounded function, continuous on (0,a), which satisfies the following Dirichlet problem:
{
(-Δ)^α/2w_a(x) =0 for x ∈ (0,a),
w_a(x) =0 for x ∈ [a,1),
w_a(x) =1 for x ∈ (0,1)^c..
Clearly, for each fixed a ∈ (0,1) we have w_a: → [0,1]. By (<ref>), we have
w_a(x) = ∫_[0,1]^c P_(0,a)(x,y) w_a(y) dy, for x ∈ (0,a).
By (<ref>), we get
P_(0,a)(x, y) = C_α( (a-x)x/y(y-a))^α/2 |x-y|^-1
for x ∈ (0,a) and y ∈ [0,a]^c.
For a ∈ (0,1) let us define
R(a) = lim_t → 0^+w_a(a - t) - w_a(a)/t^α/2.
Using (<ref>) and (<ref>), we obtain
R(a)
= lim_t → 0^+1/t^α/2∫_[0,1]^c P_(0,a)(a-t, z) dz
= lim_t → 0^+C_α/t^α/2∫_[0,1]^c( -t^2+at/z^2-az)^α/2 |a-t-z|^-1 dz
= C_α a^α/2∫_[0,1]^c (z^2-az)^-α/2 |a-z|^-1 dz
= C_α a^α/2( ∫_0^∞ z^-α/2(z+a)^-α/2-1 dz+ ∫_1^∞ z^-α/2(z-a)^-α/2-1 dz ).
By (<ref>), we get
∫_0^∞ z^-α/2(z+a)^-α/2-1 dz = B(α, 1-α/2) _2F_1(α/2+1, α; α/2+1; 1-a)
= B(α, 1-α/2) a^-α.
Similarly, by (<ref>), we get
∫_1^∞ z^-α/2(z-a)^-α/2-1 dz = B(α, 1) _2F_1(α/2+1, α; α+1; a)
= α^-1 _2F_1(α/2+1, α; α+1; a).
For convenience, throughout the rest of this section we denote
F_α(a) = _2F_1(α/2+1, α; α+1; a).
Recall that in the whole paper we use notation T_α = B(α, 1-α/2).
Using the above formulas we obtain
R(a) = C_α(T_α a^-α/2 + a^α/2α^-1 F_α(a)).
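The function R is elementary to evaluate numerically; the sketch below (Python/SciPy, helper names ours) computes R(a) directly from this formula and minimizes it over (0,1), which gives a non-rigorous numerical value of the constant μ_α,(0,1), i.e. the minimum of R over (0,1).

```python
import numpy as np
from scipy.special import beta, hyp2f1
from scipy.optimize import minimize_scalar

def R(a, alpha):
    """R(a) = C_alpha * ( T_alpha * a^{-alpha/2} + a^{alpha/2} F_alpha(a) / alpha )."""
    C = np.sin(np.pi * alpha / 2) / np.pi
    T = beta(alpha, 1 - alpha / 2)
    F = hyp2f1(alpha / 2 + 1, alpha, alpha + 1, a)   # F_alpha(a)
    return C * (T * a**(-alpha / 2) + a**(alpha / 2) * F / alpha)

def mu_numeric(alpha):
    """Numerical (non-rigorous) value of min_{a in (0,1)} R(a)."""
    res = minimize_scalar(R, bounds=(1e-6, 1 - 1e-6), args=(alpha,), method='bounded')
    return res.fun

print(mu_numeric(1.0))   # should be close to 4/pi, cf. the alpha = 1 computation below
```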
Now, fix a ∈ (0,1) and put u_a = 1 - w_a, K_a = (a,1). Let n(a) be the outward unit normal vector to K_a at a. Then we have
D_n^α/2u_a(a) = lim_t → 0^+u_a(a + t n(a)) - u_a(a)/t^α/2 = lim_t → 0^+u_a(a - t) - u_a(a)/t^α/2 = -R(a).
It follows that for any fixed a ∈ (0,1) the function u_a and the set K_a is a solution of Problem <ref> for D=(0,1) and λ = R(a).
Now we will study properties of the function (0,1) ∋ a ↦ R(a). These properties will allow to justify Theorem <ref>.
The function (0,1) ∋ a ↦ R(a) is strictly convex.
By differentiating (<ref>) twice, we get
d^2/d a^2 R(a) = C_α( α(α+2)/4· T_α a^-α/2-2 +
α^-1d^2/d a^2 (a^α/2F_α(a)) ).
Recall that
F_α(a) = ∑_n=0^∞ ( (α/2+1)_n (α)_n ) / ( (α+1)_n n! ) a^n = ∑_n=0^∞ ( α/(α+n) ) \binom{α/2+n}{n} a^n.
Hence
a^α/2F_α(a) = ∑_n=0^∞ ( α/(α+n) ) \binom{α/2+n}{n} a^α/2+n.
It follows that d^2/d a^2 (a^α/2F_α(a)) is equal to
(α/2)(α/2 -1) a^α/2-2 + ∑_n=1^∞ ( α/(α+n) ) \binom{α/2+n}{n}( α/2+n ) ( α/2 + n -1 ) a^α/2+n-2.
Hence
d^2/d a^2 (a^α/2F_α(a)) > α/2(α/2 -1) a^α/2-2.
Using this and Lemma <ref>, we finally obtain
( d^2/d a^2 R(a) ) · 4a^α/2+2 C_α^-1 = α(α+2) T_α + (4a^α/2+2/α) ·d^2/d a^2 (a^α/2F_α(a))
> (α+2) + (α-2) = 2α > 0.
We have
lim_a → 0^+ R(a) = lim_a → 1^- R(a) = ∞.
The limit for a → 0^+ immediately follows from (<ref>), continuity of F_α(a) in a=1 and the fact that F_α(0) = 1.
By the inequality (α/2+1)_n > (1)_n = n!, we get
F_α(a) = ∑_n=0^∞(α/2+1)_n (α)_n/(α+1)_na^n/n! > ∑_n=0^∞(α)_n/(α+1)_n a^n = ∑_n=0^∞α/α+n a^n.
From this inequality we obtain
lim_a → 1^- F_α(a) = ∞.
Therefore
lim_a → 1^- R(a) = C_α( T_α + α^-1lim_a → 1^- F_α(a) )=∞.
Assume that (u,K) is a solution of Problem <ref> for D = (-r,r) and λ >0, where K = (a,r), r > 0 and a ∈ D. Let s > 0. Put D_s = (-sr,sr), K_s = (sa,sr) and define u_s:→ [0,1] by u_s(x) = u(x/s). Then (u_s,K_s) is a solution of Problem <ref> for D_s and s^-α/2λ.
For x ∈K_s = s K we have x/s ∈K, so u_s(x) = u(x/s) = 1. Similarly, for x ∈ (D_s)^c = s D^c we have x/s ∈ D^c, so u_s(x) = u(x/s) = 0. We also have
D_n^α/2 u_s(s a) = lim_t → 0^+u_s(s a - t) - u_s(s a)/t^α/2 =
lim_t → 0^+u(a - t/s) - u(a)/(t/s)^α/21/s^α/2 = -λ/s^α/2.
Put W = D ∖K. By (<ref>), we obtain for x ∈ W
u(x) = ∫_(W)^c P_W(x,z) u(z) dz = ∫_K P_W(x,z) dz.
Note that for x ∈ D_s ∖K_s = s W we have x/s ∈ W. Using this, <cit.> and (<ref>), we obtain for x ∈ D_s ∖K_s
u_s(x) = u(x/s) = ∫_K P_W(x/s,z) dz = ∫_K s P_sW(x,sz) dz.
By substitution y = sz, this is equal to
∫_K_s P_sW(x,y) dy = ∫_(sW)^c P_sW(x,y) u_s(y) dy.
Hence (-Δ)^α/2 u_s(x) = 0 for x ∈ D_s ∖K_s, which finishes the proof.
By the definition of the fractional Laplacian one easily obtains the following result.
Assume that (u,K) is a solution of Problem <ref> for D = (x_0-r,x_0+r) and λ >0, where K = (a,x_0+r), x_0 ∈, r > 0 and a ∈ D. Let y_0 ∈. Put D_* = (x_0+y_0-r, x_0+y_0+r), K_* = (a+y_0,x_0+y_0) and define u_*:→ [0,1] by u_*(x) = u(x - y_0). Then (u_*,K_*) is a solution of Problem <ref> for D_* and λ.
Recall that for any fixed a ∈ (0,1) the function u = u_a = 1- w_a and K = K_a = (a,1) is a solution of Problem <ref> for D=(0,1) and λ = R(a). By (<ref>), the function (0,1) ∋ a ↦ R(a) is positive and continuous. Using this and Propositions <ref>, <ref>, we obtain the assertion of the theorem for D=(0,1). The assertion for arbitrary D follows from Lemmas <ref> and <ref>.
We first show the assertion for D = (0,1). We assume that a ∈ (0,1). Recall that we denote u_a = 1 - w_a. By (<ref>) and (<ref>), we get for x ∈ (0,a)
u_a(x) = ∫_a^1 P_(0,a)(x,z) dz
= 1/π∫_a^1 (a-x)^1/2 x^1/2/(z-a)^1/2z^1/2 (z-x) dz
= 2/πarctan(x^1/2 (1-a)^1/2/(a-x)^1/2).
By (<ref>), we obtain
-D_n^1/2u_a(a) = D_n^1/2w_a(a) = C_1(T_1 a^-1/2 + a^1/2 F_1(a)).
Note that we have
F_1(a) = _2F_1(3/2,1;2;a) = ( 2 - 2 √(1-a) ) / ( a √(1-a) ).
Hence we obtain
-D_n^1/2u_a(a) = 2/(π√(a)√(1-a)).
We have
d/da( 2/(π√(a)√(1-a)) ) = (2a - 1)/(π a^3/2 (1-a)^3/2).
Thus, the minimum of the function (0,1) ∋ a ↦ -D_n^1/2u_a(a) is obtained for a = 1/2 and it is equal to 4/π.
Therefore,
μ_1,(0,1) = 4/π.
For λ = μ_1,(0,1) the unique solution is given by K = (1/2, 1) and u = u_1/2 (given by (<ref>)). For any λ > μ_1,(0,1) we have -D_n^1/2u_a(a) = λ if and only if
2/(π√(a)√(1-a)) = λ.
The above can be reduced to the following quadratic equation:
a^2 - a + μ_1,(0,1)^2/(4 λ^2) = 0.
Since we have chosen λ > μ_1,(0,1), it has exactly two solutions, given by
a_1 = ( 1 + √(1 - μ_1,(0,1)^2/λ^2) )/2, a_2 = ( 1 - √(1 - μ_1,(0,1)^2/λ^2) )/2.
This implies that Problem <ref> has exactly two solutions. The first solution is K = (a_1,1) and u = u_a_1. The second solution is K = (a_2,1) and u = u_a_2. Functions u_a_1, u_a_2 are given by (<ref>). This gives the assertion of the theorem for D = (0,1).
The assertion for arbitrary D follows from Lemmas <ref> and <ref>.
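As a small numerical illustration of this computation (for D = (0,1), α = 1, so μ_1,(0,1) = 4/π), the two free boundary points for a given λ > μ_1,(0,1) can be obtained and checked as follows.

```python
import numpy as np

MU = 4 / np.pi   # mu_{1,(0,1)}

def free_boundary_points(lam):
    """The two roots a_1 > a_2 of a^2 - a + MU^2/(4 lam^2) = 0 for lam > MU."""
    disc = np.sqrt(1 - MU**2 / lam**2)
    return (1 + disc) / 2, (1 - disc) / 2

lam = 1.5
for a in free_boundary_points(lam):
    # sanity check: the boundary condition 2/(pi sqrt(a) sqrt(1-a)) = lam
    print(a, 2 / (np.pi * np.sqrt(a) * np.sqrt(1 - a)))
```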
§ PROOFS OF MAIN RESULTS
In this section we present proofs of Theorems <ref> and <ref>.
Let x_0 ∈, r > 0, D = (x_0-r,x_0+r) and λ > 0. Assume that (u,K) is a solution of Problem <ref> for D and λ. Then K is symmetric with respect to x_0.
Before we prove this proposition, we need some estimates of the Green function corresponding to the fractional Laplacian.
Fix 0 < b < w. Put U = (-w,-b) ∪ (b,w). There exists c_* > 0 such that for any x, y ∈ (b,w) we have G_U(-x,y) ≤ c_* δ_U^α/2(x). The constant c_* depends on α, b, w.
By (<ref>) and (<ref>), for any x, y ∈ (b,w) we have
G_U(-x,y) = ∫_b^w P_(-w,-b)(-x,z) G_U(z,y) dz
= C_α∫_b^w ((-x-(-w))(-b-(-x)))^α/2/((z-(-w))(z-(-b)))^α/2G_U(z,y)/|-x-z| dz
≤ c δ_(-w,-b)^α/2(-x) ∫_b^w G_U(z,y) dz
≤ c_* δ_U^α/2(x),
where c depends on α, b, w.
Fix 0 < b < w and let c_* be the constant from Lemma <ref>. Put U = (-w,-b) ∪ (b,w). There exist t ∈ (0,(w-b)/8) (depending on α, b, w, c_*) such that for any x ∈ (b, b+t) and y ∈ (b+3t,b+4t) we have
G_U(x,y) - G_U(-x,y) ≥ c_* δ_U^α/2(x).
Let t ∈ (0,(w-b)/8) (which will be chosen later) and assume that x ∈ (b, b+t) and y ∈ (b+3t,b+4t). We have
δ_U(x) δ_U(y)/|x - y|^2≤4 t^2/4 t^2 = 1.
Using this (<ref>) and <cit.>, we get
G_U(x,y) ≥ G_(b,w)(x,y)
≥ c_1 δ_U^α/2(x) δ_U^α/2(y)/|x - y|
≥ c_2 δ_U^α/2(x) (3t)^α/2/2t
= c_3 δ_U^α/2(x)/t^1-α/2,
where c_1, c_2, c_3 depends on α, b, w. Let c_* be the constant from Lemma <ref>. Put
t = min((c_3/2 c_*)^2/(2-α), w-b/8).
Note that we have t^1 - α/2≤ c_3/(2 c_*). Using this and Lemma <ref>, we get
G_U(x,y) - G_U(-x,y) ≥(c_3/t^1-α/2 - c_*) δ_U^α/2(x)
≥ c_* δ_U^α/2(x).
On the contrary, assume that a - (x_0 - r) ≠ (x_0 + r) - b. We may suppose that a - (x_0 - r) < (x_0 + r) - b. Note that this is equivalent to a + b < 2 x_0. Put w_-1 = x_0 - r, w_0 = (a+b)/2, s = a - (x_0 - r), w_1 = b + s. We have w_1 = b + a - (x_0 - r) < x_0 + r. We may assume that w_0 = 0. Then w_-1 = - w_1 and a = -b. Put U = (-w_1,-b) ∪ (b,w_1). Note that
D_n^α/2u(b) - D_n^α/2u(a)
= lim_x → b^+u(x) - u(b)/δ_U^α/2(x) - lim_x → b^+u(-x) - u(-b)/δ_U^α/2(-x)
= lim_x → b^+u(x) - u(-x)/δ_U^α/2(x).
We have (-Δ)^α/2u(x) = 0 for x ∈ U. The function u is equal to 1 on [-b,b], and it is equal to 0 on (-∞,-w_1]∪[x_0+r,∞). By (<ref>) and (<ref>), for x ∈ U we have
u(x) = ∫_(U)^cP_U(x,z) u(z) dz =
∫_(U)^cu(z) ∫_U G_U(x,y) _α/|y - z|^1 + α dy dz.
This is equal to
∫_U G_U(x,y) h(y) dy,
where h(y) = ∫_(U)^cu(z) _α |y - z|^-1 - α dz, y ∈ U. For y ∈ (b,w_1) we have
h(y) - h(-y) = _α∫_w_1^x_0 + r u(z) (1/|y - z|^1 + α - 1/|-y - z|^1 + α) dz > 0.
Put U_+ = {x ∈ U: x > 0} = (b,w_1). By the same arguments as in the proof of Lemma 3.3 in <cit.>, we get
u(x) - u(-x) = ∫_U_+ (G_U(x,y) - G_U(-x,y)) (h(y) - h(-y)) dy.
Using this, Lemma <ref>, (<ref>) and (<ref>), we get D_n^α/2u(b) - D_n^α/2u(a) > 0. This contradicts the conditions of Problem <ref> which imply that
D_n^α/2u(b) - D_n^α/2u(a) = -λ+λ = 0.
Assume that (u,K) is a solution of Problem <ref> for D = (-r,r) and λ >0, where K = (-a,a), r > 0 and a ∈ (0,r). Let s > 0. Put D_s = (-sr,sr), K_s = (-sa,sa) and define u_s:→ [0,1] by u_s(x) = u(x/s). Then (u_s,K_s) is a solution of Problem <ref> for D_s and s^-α/2λ.
The proof of this lemma is very similar to the proof of Lemma <ref> and it is omitted.
By the definition of the fractional Laplacian, one easily obtains the following result.
Assume that (u,K) is a solution of Problem <ref> for D = (x_0-r,x_0+r) and λ >0, where K = (x_0 -a, x_0+a), x_0 ∈, r > 0 and a ∈ (0,r). Let y_0 ∈. Put D_* = (x_0+y_0-r, x_0+y_0+r), K_* = (x_0+y_0 -a, x_0+y_0+a) and define u_*:→ [0,1] by u_*(x) = u(x - y_0). Then (u_*,K_*) is a solution of Problem <ref> for D_* and λ.
Now, we study the solution of Problem <ref> for D = (-1,1). For a fixed a ∈ (0,1) let f_a be a (unique) bounded continuous solution of the following Dirichlet problem:
{
(-Δ)^α/2f_a(x) =0 for x ∈ (-1,-a) ∪ (a,1),
f_a(x) =1 for x ∈ [-a,a],
f_a(x) =0 for x ∈ (-∞,-1] ∪ [1,∞)..
Clearly, we have f_a: → [0,1], the function f_a satisfies f_a(-x) = f_a(x) for x ∈. By (<ref>), we have
f_a(x) = ∫_(W(a))^c P_W(a)(x,y) f_a(y) dy
for x ∈ W(a), where W(a) = (-1,-a) ∪ (a,1).
By (<ref>), we get
P_(a,1)(x, y) = C_α( (1-x)(x-a)/(y-a)(y-1))^α/2 |x-y|^-1
for x ∈ (a,1) and y ∈ [a,1]^c. By (<ref>), we have
f_a(x) = ∫_[a,1]^c P_(a,1)(x,y) f_a(y) dy
for x ∈ (a,1). Using this and induction, we obtain for x ∈ (a,1)
f_a(x) = ∑_n = 1^∞ f_a^(n)(x),
where
f_a^(1)(x) = ∫_-a^a P_(a,1)(x,y) dy, f_a^(n)(x) = ∫_a^1 P_(a,1)(x,-y) f_a^(n-1)(y) dy,
for n ∈, n ≥ 2.
The function (0,1) ∋ a ↦ f_a(x) is continuous for any fixed x ∈.
By formulas (<ref>), (<ref>) and induction for any fixed n ∈ and x ∈, the function (0,1) ∋ a ↦ f_a^(n)(x) is continuous. By standard properties of the Poisson kernel, we obtain that for any > 0 there exists δ > 0 such that
sup_a ∈ [,1]sup_x ∈ (a,1)∫_a^1 P_(a,1)(x,-y) dy ≤sup_x ∈ (,1)∫_^1 P_(,1)(x,-y) dy
= 1 - δ < 1.
Using this and induction, for any fixed > 0 and arbitrary n ∈ we obtain
sup_a ∈ [,1]sup_x ∈ (a,1) f_a^(n)(x) ≤ (1-δ)^n-1.
By this and continuity of (0,1) ∋ a ↦ f_a^(n)(x), we get the assertion of the lemma.
For a ∈ (0,1) put
Ψ(a) = - lim_h → 0^+f_a(a+h) - f_a(a)/h^α/2.
The next two lemmas concern some properties of the function Ψ.
For any a ∈ (0,1) Ψ(a) is well defined and we have
Ψ(a) = C_α (1-a)^α/2( ∫_1^∞Φ(a,y) dy + ∫_a^∞Φ(a,-y) dy - ∫_a^1 Φ(a,-y) f_a(y) dy )
where
Φ(a,y)=((y-a)(y-1))^-α/2|y-a|^-1.
For any a ∈ (0,1) we have Ψ(a) ∈ (0,∞) and Ψ is continuous on (0,1).
We have
Ψ(a) = lim_h → 0^+ h^-α/2( f_a(a) - ∫_[a,1]^c P_(a,1)(a+h,y) f_a(y) dy ).
Moreover, f_a(a)=1=∫_[a,1]^c P_(a,1)(a+h,y) dy for h ∈ (0,1-a). Thus,
Ψ(a) = lim_h → 0^+ h^-α/2( ∫_1^∞ P_(a,1)(a+h, y) dy + ∫_a^∞ P_(a,1)(a+h, -y) dy
- ∫_a^1 P(a+h, -y) f_a(y) dy ).
Using (<ref>), the right-hand side of this equation tends to the right-hand side of (<ref>). This implies that Ψ(a) is well defined and gives (<ref>).
The fact that Ψ(a) ∈ (0,∞) for any a ∈ (0,1) easily follows from (<ref>) and (<ref>). (<ref>), (<ref>) and Lemma <ref> imply continuity of Ψ on (0,1).
We have lim_a → 0^+Ψ(a) = lim_a → 1^-Ψ(a) = ∞.
In the whole proof we assume that a ∈ (0,1). Using (<ref>), we get
Ψ(a) > C_α (1-a)^α/2∫_1^∞Φ(a,y) dy
= C_α (1-a)^α/2∫_1^∞ (y-a)^-α/2-1 (y-1)^-α/2 dy
> C_α (1-a)^α/2∫_1^∞ (y-a)^-α-1 dy
= C_α/α(1-a)^α/2.
The right-hand side tends to infinity as a → 1^-, so as Ψ(a).
In order to prove the second limit, put g_a(x) = 1 - f_a(x). By (<ref>), we get
Ψ(a) ≥ C_α (1-a)^α/2∫_a^1 Φ(a,-y) g_a(y) dy.
For x ∈ (a,1) we have 1 = ∫_[a,1]^c P_(a,1)(x,y) dy. Using this and (<ref>), we obtain for x ∈ (a,1)
g_a(x)
= ∫_[a,1]^c P_(a,1)(x,y) g_a(y) dy
≥∫_1^∞ P_(a,1)(x,y) dy.
Note that for x ∈ (a,3/4), y ∈ (1,∞) we have (1-x)^α/2≥ (1/4)^α/2, (y-a)^-α/2(y-1)^-α/2 (y-x)^-1≥ y^-1-α. Thus, using (<ref>), for all such x we get
g_a(x) ≥∫_1^∞ P_(a,1)(x,y) dy
≥ C_α 2^-α (x - a)^α/2∫_1^∞ y^-1-α dy
= C_α 2^-αα^-1 (x - a)^α/2.
Additionally, for y ∈ (2a,1) we have
(y-a/y+a)^α/2
= (y/a-1/y/a+1)^α/2≥(y/a-y/(2a)/y/a+y/(2a))^α/2
= (1/3)^α/2.
Using inequalities (<ref>), (<ref>) and (<ref>), we obtain for a ∈ (0,1/4)
Ψ(a) ≥ C_α (1/2)^α/2∫_a^3/4Φ(a,-y) C_α 2^-αα^-1 (y - a)^α/2 dy
= C_α^2 2^-3α/2α^-1∫_a^3/4 (y + a)^-1-α/2 (y+1)^-α/2 (y - a)^α/2 dy
≥ C_α^2 2^-3α/2α^-1∫_2a^3/4 2^-α/2 3^-α/2 (y + a)^-1 dy
= C_α^2 2^-2α 3^-α/2α^-1 (log(3/4+a) - log(3a)).
This implies that
lim_a → 0^+Ψ(a) = ∞.
If (u,K) is a solution of Problem <ref> for D = (x_0-r,x_0+r) and some λ > 0, then, by Proposition <ref>, K is symmetric with respect to x_0. By Lemmas <ref>, <ref>, it is obvious that it is sufficient to show the theorem for D = (-1,1). Using this lemma, one also obtains λ_α,D = λ_α,(x_0-r,x_0+r) = r^-α/2λ_α,(-1,1).
So, we may assume that D = (-1,1) (that is x_0 = 0, r = 1). Hence, if (u,K) is a solution of Problem <ref> for D and some λ > 0, then u = f_a and K = (-a,a) for some a ∈ (0,1).
On the other hand, by Lemma <ref>, for any a ∈ (0,1) the function f_a is a solution of Problem <ref> for D and λ = Ψ(a). Now, using Lemma <ref> and the fact that the function (0,1) ∋ a ↦Ψ(a) is continuous and positive on (0,1), we obtain the assertion of the theorem.
By (<ref>), we have for any a ∈ (0,1)
∫_1^∞Φ(a,y) dy = T_α _2F_1(1+α/2, α; 1+α/2; a) = T_α (1-a)^-α,
where Φ is given by (<ref>). Using this and Lemma <ref>, for any a ∈ (0,1) we obtain
Ψ(a) ≥ C_α (1-a)^α/2( T_α (1-a)^-α + ∫_1^∞Φ(a, -z) dz )
≥ C_α (1-a)^α/2( T_α (1-a)^-α + ∫_1^∞ (z+1)^-α-1 dz )
= C_α (1-a)^α/2( T_α (1-a)^-α + 1/(α 2^α) ).
Put L(a) = C_α (1-a)^α/2( T_α (1-a)^-α + 1/(α 2^α) ). We will now show that L is increasing on (0,1). Applying Lemma <ref> yields
d/da L(a) = C_α (α/2) (1-a)^-1-α/2( T_α - (1/(α 2^α))(1-a)^α ) > 0
for any a ∈ (0,1). We conclude that
min_a ∈ (0,1)Ψ(a) ≥min_a ∈ (0,1) L(a) ≥ L(0) = C_α( T_α + 1/(α 2^α) ).
On the other hand, for any a ∈ (0,1) we have
Ψ(a) ≤ C_α (1-a)^α/2( T_α (1-a)^-α + ∫_a^∞Φ(a, -z) dz )
≤ C_α (1-a)^α/2( T_α (1-a)^-α + ∫_a^∞ (a+z)^-α-1 dz )
= C_α (1-a)^-α/2( T_α + (1/(α 2^α))(1/a - 1)^α ).
Put U(a) = C_α (1-a)^-α/2( T_α + (1/(α 2^α))(1/a - 1)^α ). We observe that
min_a ∈ (0,1)Ψ(a) ≤min_a ∈ (0,1) U(a) ≤ U(1-α/2) = C_α( 2/α)^α/2( T_α + (1/(α 2^α))( α/(2-α) )^α ),
which proves the assertion of the theorem.
§ DISCUSSION
In this section we discuss properties of solutions of the inner Bernoulli problem for the fractional Laplacian on intervals and balls. We also discuss the connection of the Bernoulli problem with the corresponding variational problem.
Let us start with the following conjecture.
Let α∈ (0,2), x_0 ∈, r > 0 and D = (x_0-r,x_0+r). Then there is a constant λ_α,D > 0 such that the Problem <ref> has
* exactly two solutions for λ> λ_α,D,
* exactly one solution for λ = λ_α,D,
* no solution for λ < λ_α,D.
A standard way to show such result is to study properties of the function Ψ (which is defined by (<ref>)). In Section <ref> it is shown that (0,1)∋ a ↦Ψ(a) is continuous and positive and lim_a → 0^+Ψ(a) = lim_a → 1^-Ψ(a) = ∞. So to justify Conjecture <ref> it is enough to prove that for each α∈ (0,2) functions (0,1)∋ a ↦Ψ(a) are unimodal. Graphs of Ψ for α = 1 and α = 1/2 (see Figure <ref>) suggest that these functions have this property. It seems that it is possible to justify Conjecture <ref> using a computer assisted proof. Below we present some rough idea of such a proof. Of course, this is only the idea, we are not claiming in any way that this is a formal proof.
By (<ref>), we have
Ψ(a) = C_α (1-a)^α/2( ∫_1^∞Φ(a,y) dy + ∫_1^∞Φ(a,-y) dy +
∫_a^1 Φ(a,-y) g_a(y) dy ),
where g_a(y) = 1 - f_a(y). Fix α∈ (0,2). It is easy to show that d/d aΨ(a), d^2/d a^2Ψ(a) are well defined for any a ∈ (0,1). Note also that for any y ∈ (0,1) the function (0,y) ∋ a ↦ g_a(y) is decreasing. Using this and (<ref>), one can obtain that there exists a_0 (depending on α) such that d/d aΨ(a) < 0 for all a ∈ (0,a_0]. Now it is enough to show that Ψ is convex on [a_0,1). Note that (cf. <ref>) for a ∈ (0,1), x ∈ (a,1) we have
g_a(x) = ∑_n = 1^∞ g_a^(n)(x).
where
g_a^(1)(x) = ∫_[-1,1]^c P_(a,1)(x,y) dy, g_a^(n)(x) = ∫_a^1 P_(a,1)(x,-y) g_a^(n-1)(y) dy,
for n ∈, n ≥ 2. For some n_0 ∈ denote Ψ = Ψ^(1) + Ψ^(2), where
Ψ^(1)(a) = C_α (1-a)^α/2( ∫_1^∞Φ(a,y) dy + ∫_1^∞Φ(a,-y) dy
+ ∫_a^1 Φ(a,-y) ∑_k = 1^n_0 g_a^(k)(y) dy ),
Ψ^(2)(a) = C_α (1-a)^α/2∫_a^1 Φ(a,-y) ∑_k = n_0 + 1^∞ g_a^(k)(y) dy.
Now, the strategy of the proof could be the following. One could obtain that for sufficiently large k ∈
|d^2/d a^2(C_α (1-a)^α/2∫_a^1 Φ(a,-y) g_a^(k)(y) dy)|
is sufficiently small for a ∈ [a_0,1). Then one should be able to show that there exist ε > 0 and n_0 ∈ so that d^2/d a^2Ψ^(1)(a) > ε for a ∈ [a_0,1) and |d^2/d a^2Ψ^(2)(a)| ≤ε. Some numerical experiments suggest that this is possible. This would show that (0,1)∋ a ↦Ψ(a) is unimodal. Of course, the above way of proving Conjecture <ref> demands numerical estimates in many steps and it is beyond the scope of this paper.
Well known results for the classical inner Bernoulli problem for balls (see e.g. Figure 3 in <cit.>) suggest that the following hypothesis holds.
Let α∈ (0,2), d ≥ 2, x_0 ∈^d, r > 0 and D = {x ∈^d: |x - x_0| <r}. Then there is a constant λ_α,D > 0 such that the Problem <ref> has
* exactly two solutions for λ> λ_α,D,
* exactly one solution for λ = λ_α,D,
* no solution for λ < λ_α,D.
It seems, however, that this result is much more difficult to prove than Conjecture <ref>. Even the proof of result similar to Theorem <ref> for balls seems quite challenging.
One of the standard ways to study Bernoulli problems (in the classical or fractional case) is to investigate appropriate variational problems (see e.g. <cit.>). Fix d ≥ 1, α∈ (0,2) and a bounded domain D ⊂^d. Let us define the energy functional on the Sobolev space H^α/2(^d)
i_α,λ,D(u) = α 2^α-2Γ(d+α/2)/π^d/2Γ(1-α/2)[u]_d,α + (Γ(1+α/2))^2 λ^2 |{x ∈ D: u(x) < 1}|
depending on the parameter λ > 0, where
[u]_d,α = ∬_^d ×^d(u(x) -u(y))^2/|x - y|^d+α dx dy,
and |{x ∈ D: u(x) < 1}| denotes the Lebesgue measure of {x ∈ D: u(x) < 1}.
A similar definition appears in Section 1.1 in <cit.>.
Let us consider the following problem of finding minimizers to this energy functional. This problem is connected with the inner Bernoulli problem for the fractional Laplacian.
Given α∈ (0,2), λ>0 and a bounded domain D⊂^d, find a nontrivial minimizer u∈ H^α/2(^d) of i_α,λ,D subject to the constraint u=0 on D^c.
For α = 1 this variational problem was studied in Section 3 in <cit.>. In that paper the problem was investigated for general bounded domains. For the particular case when D is an interval <cit.> implies the following result.
Let x_0 ∈, r > 0 and D = (x_0-r,x_0+r). There exists a constant Λ_1,D such that for any λ≥Λ_1,D there exists a solution u of Problem <ref> for α = 1, λ, D, which is symmetric with respect to x_0, continuous on and nonincreasing on [x_0,∞). The function u and K = {x ∈: u(x) = 1} is a solution of Problem <ref> for α = 1, λ, D. For any λ∈ (0,Λ_1,D) there are no solutions of Problem <ref> for α = 1, λ, D.
Note that the above result does not guarantee the uniqueness of solution of Problem <ref>. The results for the classical Bernoulli problem for balls suggest that uniqueness holds (cf. Conjecture <ref> below).
By translation and scaling we may assume that x_0 = 0 and r = 1. By arguments as in the proof of Lemma 2.5 in <cit.> (by changing in that proof E_λ to I_λ,(-1,1)), we obtain that there exists a solution of Problem <ref>, which is symmetric with respect to 0, continuous on and nonincreasing on [0,∞). It follows that {x ∈: u(x) = 1} = [-a,a] for some a ∈ (0,1). By Theorem 1.7 (d) in <cit.>,
lim_t → 0^+u(a) - u(a+t)/√(t) = λ.
Using again Theorem 1.7 in <cit.>, we obtain that u and K = {x ∈: u(x) = 1} is a solution of Problem <ref> for α = 1, λ, D = (-1,1).
The next result gives an inequality between the variational constant Λ_1,D and the Bernoulli constant λ_1,D for any interval D. The proof of this result is computer-assisted.
Let x_0 ∈, r > 0 and D = (x_0-r,x_0+r). We have
Λ_1,D > λ_1,D.
It is well known that the analogous result holds for the classical inner Bernoulli problem on a ball (see e.g. Example 11 in <cit.>).
In the whole proof we put α = 1. By translation and scaling, we may assume that x_0 = 0 and r = 1 (so D = (-1,1)). First, we estimate Λ_1,D from below. By Proposition <ref>, the solution u of Problem <ref> is continuous on and nonincreasing on [0,∞). Let a = max{x∈ (0,1): u(x) = 1} and b = max{x∈ (0,1): u(x) = 1/2}. By arguments from the proof of Lemma 1.10 from <cit.>, we have
π/4Λ_1,D^2 ≥1/2π/2a [u]_1,1,
so
Λ_1,D^2 ≥1/π^2 a [u]_1,1
≥2/π^2 a∫_-a^a ∫_(-1,1)^c(u(x) - u(y))^2/|x-y|^2 dx dy
+ 2/π^2 a∫_-a^a ∫_(-1,-b) ∪ (b,1)(u(x) - u(y))^2/|x-y|^2 dx dy
+ 2/π^2 a∫_(-b,-a) ∪ (a,b)∫_(-1,1)^c(u(x) - u(y))^2/|x-y|^2 dx dy
= I + II + III.
We have
I = 2/π^2 a∫_-a^a ∫_(-1,1)^cdx dy/|x-y|^2 = 2/π^2 alog((1+a)^2/(1-a)^2),
II = 2/π^2 a 2 ∫_-a^a ∫_b^1 (1/4) dx dy/|x-y|^2 = 1/π^2 alog((1-a)(a+b)/(1+a)(b-a)),
III = 2/π^2 a 2 ∫_a^b ∫_(-1,1)^c(1/4) dx dy/|x-y|^2 = 1/π^2 alog((1-a)(1+b)/(1+a)(1-b)).
For 0 < a < b < 1 put
F_1(a,b) = (2/(π^2 a)) log( (1+a)^2/(1-a)^2 ) + (1/(π^2 a)) log( ((1-a)(a+b))/((1+a)(b-a)) )
+ (1/(π^2 a)) log( ((1-a)(1+b))/((1+a)(1-b)) ).
Using Mathematica, we find that
inf_0 < a < b < 1 F_1(a,b) ≥ 1.1582,
so Λ_1,D^2 ≥ 1.1582, which gives
Λ_1,D≥ 1.0761.
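A crude, non-rigorous numerical cross-check of this Mathematica computation can be done with a simple grid search, e.g. as below (Python, helper names ours).

```python
import numpy as np

def F1(a, b):
    """F_1(a, b) as defined above, for 0 < a < b < 1."""
    c = 1 / (np.pi**2 * a)
    return (2 * c * np.log((1 + a)**2 / (1 - a)**2)
            + c * np.log((1 - a) * (a + b) / ((1 + a) * (b - a)))
            + c * np.log((1 - a) * (1 + b) / ((1 + a) * (1 - b))))

best = np.inf
for a in np.linspace(1e-3, 1 - 1e-3, 800):
    b = np.linspace(a + 1e-3, 1 - 1e-3, 800)
    best = min(best, F1(a, b).min())
print(best)   # a grid estimate of the infimum, to be compared with 1.1582
```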
In order to obtain upper bound estimate of λ_1,D, we need to estimate Ψ(a). We will use formula (<ref>).
The term f_a(y), present in this formula, needs to be estimated from below. We have
f_a(y) = ∑_n = 1^∞ f_a^(n)(y),
where f_a^(n) is given by (<ref>).
Note that for any n ∈, a ∈ (0,1) and y ∈ (a,1) we have f_a^(n)(y) > 0. Hence f_a(y) > f_a^(1)(y). For any a ∈ (0,1) and y ∈ (a,1) we have
f_a^(1)(y) = ∫_-a^a P_(a,1)(y,z) dz = 1 - 2/πarctan(√(1+a)√(y-a)/√(2a)√(1-y)).
For any a ∈ (0,1) put
F_2(a) = 1/π (1-a)^α/2( ∫_1^∞Φ(a,y) dy + ∫_a^∞Φ(a,-y) dy - ∫_a^1 Φ(a,-y) f_a^(1)(y) dy ),
where Φ is given by (<ref>). By (<ref>), we obtain Ψ(a) ≤ F_2(a) for any a ∈ (0,1). Using Mathematica, we find that min_a ∈ (0,1) F_2(a) < 1.03. Indeed, one can check, using Mathematica, that F_2(0.34) < 1.03. Hence
λ_1,D = min_a ∈ (0,1)Ψ(a) < 1.03.
This and (<ref>) gives (<ref>).
Proposition <ref> implies the following result.
For α = 1 and any interval D there exists a solution for the inner Bernoulli problem for the fractional Laplacian on D, which is not a minimizer of the corresponding variational problem on D (Problem <ref>).
For any interval D and λ≥Λ_1,D we have at least 2 solutions of Problem <ref>. One may ask how many of them are solutions of Problem <ref>. Knowing results for the classical inner Bernoulli problem for balls (see e.g. Section 5.3 in <cit.>) and Propositions <ref>, <ref>, we may formulate the following hypothesis.
Let α∈ (0,2), x_0 ∈, r > 0 and D = (x_0-r,x_0+r). There exists a constant Λ_α,D such that Λ_α,D > λ_α,D. For any λ≥Λ_α,D there exists a unique solution of Problem <ref>. It is symmetric with respect to x_0, continuous on , nonincreasing on [x_0,∞) and it is a solution of Problem <ref>. For any λ∈ (0,Λ_α,D) there are no solutions of Problem <ref>.
A1989 A. Acker, Uniqueness and monotonicity of solutions for the interior Bernoulli free boundary problem in the convex n-dimensional case, Nonlinear Anal. 13, No. 12 (1989), 1409-1425.
AC1981 H. W. Alt, L. A. Caffarelli, Existence and regularity for a minimum problem with a free boundary, J. Reine Angew. Math. 325 (1981), 105-144.
BS2009 C. Bianchini, P. Salani, Concavity properties for elliptic free boundary problems, Nonlinear Anal. 71, No. 10 (2009), 4461-4470.
BB2000 K. Bogdan, T. Byczkowski, Potential theory of Schrödinger operator based on fractional Laplacian, Probab. Math. Statist. 20 (2000), 293-335.
BBKRSV2009 K. Bogdan, T. Byczkowski, T. Kulczycki, M. Ryznar, R. Song, Z. Vondracek, Potential Analysis of Stable Processes and its Extensions, Lecture Notes in Mathematics 1980, Springer, Berlin (2009).
BKK2008 K. Bogdan, T. Kulczycki, M. Kwaśnicki, Estimates and structure of α-harmonic functions, Probab. Theory Relat. Fields 140 (2008), 345-381.
CMS2012 L. A. Caffarelli, A. Mellet, Y. Sire, Traveling waves for a boundary reaction–diffusion
equation, Adv. Math. 230 (2012), no. 2, 433-457.
CRS2010 L. A. Caffarelli, J.-M. Roquejoffre, Y. Sire, Variational problems with free boundaries for the fractional Laplacian J. Eur. Math. Soc. 12, No. 5 (2010), 1151-1179.
CS2007 L. A. Caffarelli, L. Silvestre, An Extension Problem Related to the Fractional Laplacian, Comm. Partial Differential Equations 32 (2007), 1245-1260.
CT2002 P. Cardaliaguet, R. Tahraoui, Some uniqueness results for the Bernoulli interior free-boundary problems in convex domains, Electron. J. Differential Equations (2002), n. 102, 1-16.
C1999 Z.-Q. Chen, Multidimensional symmetric stable processes, Korean J. Comput. Appl. Math. 6 (1999), no. 2, 227-266.
SR2012 D. De Silva, J. M. Roquejoffre, Regularity in a one-phase free boundary problem for the fractional Laplacian, Ann. Inst. H. Poincare Anal. Non Lineaire 29, No. 3 (2012), 335-367.
DSS2015b D. De Silva, O. Savin, C^∞ regularity of certain thin free boundaries, Indiana Univ. Math. J. 64, No. 5 (2015), 1575-1608.
DSS2015 D. De Silva, O. Savin, Regularity of Lipschitz free boundaries for the thin one-phase problem, J. Eur. Math. Soc. (JEMS) 17, No. 6 (2015), 1293-1326.
SSS2014 D. De Silva, O. Savin, Y. Sire, A one-phase problem for the fractional Laplacian: regularity of flat free boundaries, Bull. Inst. Math. Acad. Sin. (N.S.) 9, No. 1 (2014), 111-145.
EKPSS2019 M. Engelstein, A. Kauranen, M. Prats, G. Sakellaris, and Y. Sire, Minimizers for the thin one-phase free boundary problem, Comm. Pure Appl. Math. 74 (2021), no. 9, 1971-2022.
FR2022 X. Fernández-Real and X. Ros-Oton, Stable cones in the thin one-phase problem, Amer. J. Math., in press (2022).
FR1997 M. Flucher, M. Rumpf, Bernoulli's free-boundary problem, qualitative theory and numerical approximation, J. Reine Angew. Math. 486 (1997), 165-204.
HS1997 A. Henrot, H. Shahgholian, Convexity of free boundaries with Bernoulli type boundary condition Nonlinear Anal. 28, No. 5 (1997), 815-823.
HS2000 A. Henrot, H. Shahgholian, Existence of classical solution to a free boundary problem for the p-Laplace operator: (II) The interior convex case, Indiana Univ. Math. J. 49, No. 1 (2000), 311-323.
JKS2022 S. Jarohs, T. Kulczycki, P. Salani, On the Bernoulli free boundary problems for the half Laplacian and for the spectral half Laplacian, Nonlinear Anal. 222 (2022), Paper No. 112956, 39 pp.
K2013 T. Kulczycki, Gradient estimates of q-harmonic functions of fractional Schrödinger operator, Potential Anal. 39 (2013), no. 1, 69-98.
|
http://arxiv.org/abs/2307.00685v1
|
20230702233818
|
Ferromagnetic filament shapes in a rotating field reveal their magnetoelastic properties
|
[
"Andris P. Stikuts",
"Andrejs Cēbers",
"Guntars Kitenbergs"
] |
cond-mat.soft
|
[
"cond-mat.soft",
"physics.flu-dyn"
] |
Ferromagnetic filament shapes in a rotating field reveal their magnetoelastic properties
Andris P. Stikuts, Andrejs Cēbers, Guntars Kitenbergs
August 1, 2023
=========================================================================================
§ ABSTRACT
Flexible ferromagnetic filaments can be used to control the flow on the micro-scale with external magnetic field.
To accurately model them, it is crucial to know their parameters such as their magnetization and bending modulus, the latter of which is hard to determine precisely.
We present a method that uses the ferromagnetic filament's shape in a rotating field to determine the magnetoelastic number Cm - the ratio of magnetic to elastic forces.
Then once the magnetization of the filament is known, it is possible to determine its bending modulus.
The main idea of the method is that Cm is the only parameter that determines whether the filament is straight or whether its tips are bent towards the magnetic field direction.
Comparing with numerical solutions, we show that the method determines Cm with an error of 15...20%, which is more precise than estimates obtained with other methods.
This method will improve the comparability between theoretical filament models and experimental measurements.
§ INTRODUCTION
Magnetic filaments can be created by connecting paramagnetic or ferromagnetic beads with a linker, for example, DNA fragments or some other polymer
<cit.>.
The resulting filaments are typically tens of microns long.
The diameter of the beads (and thus the width of the filament) range from less than a micron for the paramagnetic case <cit.> to 4 μm for the ferromagnetic case <cit.>.
Apart from the choice of the beads, another variable that determines the properties of the filaments is the choice and length of the linker polymers.
The impact of DNA linker length to the filament's flexibility was analyzed in ref. <cit.>.
Magnetic filaments can be used to influence the flow on the micro-scale using external magnetic fields.
A few of their applications include microswimming <cit.>, micromixing <cit.>, and
navigation through microfluidic channels <cit.>.
To describe such filaments theoretically <cit.>, it is vital to accurately determine their magnetization M and bending modulus A_B (such that the associated energy is A_B∫ k^2 dl/2, where k is the curvature of the filament).
The magnetization depending on the applied magnetic field can be determined from bulk measurements of the beads that make up the filaments <cit.>.
The determination of the bending modulus A_B requires more subtle techniques since the filaments are micron-sized.
Paramagnetic filaments when placed in a static field form long lived metastable hairpin-like U shapes, whose maximum curvature k_max for strong fields is proportional to the square root of the magnetoelastic number Cm (the ratio of magnetic to elastic forces) <cit.>.
From this it is possible to determine the value of A_B <cit.>.
Theoretically, stationary U shapes can also be achieved with ferromagnetic filaments; however, they are less stable than the paramagnetic U shapes and quickly relax to straight shapes through the third dimension.
Nonetheless, attempts have been made to estimate A_B by observing the curvature just before the relaxation <cit.>.
Another approach uses the fact that a slightly deformed filament exponentially relaxes to a straight shape with the characteristic time
t_0=ζ_⊥ L^4 /A_B, where ζ_⊥ is the drag coefficient in the direction normal to the filament's centerline <cit.>.
Using this method, A_B was estimated in ref. <cit.>, where the drag coefficient was approximated as ζ_⊥≈ 4πη, with η being the viscosity of the surrounding fluid.
This, however, might be an underestimate since the filaments are close to the bottom of the sample cell and the drag close to it rapidly increases <cit.>.
Many experimental works <cit.> use ferromagnetic filaments formed using streptavidin coated micron sized (d=4.26 μ m) ferromagnetic beads (Spherotec, 1%w/v) that are linked with 1000bp long biotinized DNA fragments (ASLA biotech or Latvian Biomedical Research and Study Centre) following the procedure outlined in <cit.>.
The estimated bending modulus A_B values for some of these works are gathered in table <ref>.
There is a variation of several orders of magnitude, which motivates us to devise a relatively simple procedure to determine the filament's bending modulus.
We have corrected the final step of the calculation, where L in the formula 3.93^4 L^-4 A_B/ζ needs to be taken as half of the filament's length.
In this work we show how a ferromagnetic filament's shape in a rotating field can be used to determine the magnetoelastic number Cm and thus the bending modulus A_B.
We solve for the equilibrium shape of the filament for small deviation from the magnetic field direction.
We then extend this solution to include large deviations when Cm is small.
Finally, we outline the procedure for determining the filament's parameters and compare it with full numerical simulations to assess its accuracy.
§ MATHEMATICAL MODEL
Elastic magnetic filaments are commonly modeled using Kirchhoff theory of an elastic rod with additional terms that describe the magnetic interactions <cit.>.
The filament of length L is described by the radius vector r(l) which is parameterized by the arc length l∈[-L/2,L/2].
The force acting on the cross-section of the filament reads
F = -A_B r_lll+Λ r_l + F_m,
where the subscript l denotes the derivative with respect to the arc length, A_B is the bending modulus and Λ(l) r_l is the tension force that ensures the inextensibility of the filament.
F_m is the magnetic force that for a ferromagnetic filament reads
F_m = -μ_0M H,
where M is the magnetic moment per unit length of the filament, and H is the applied magnetic field intensity.
When the filament is slender, its motion can be described by the resistive-force theory <cit.>.
The linear force density in a Stokes flow is connected with the velocity through the drag coefficients parallel to the filament ζ_∥ and perpendicular to it ζ_⊥
F_l = ζ_⊥ (( v - v_∞)· n) n + ζ_∥ (( v- v_∞)· t) t,
where t and n are the tangent and normal vectors of the filament, respectively, and v_∞ is the background flow velocity.
The local inextensibility of the filament dictates that
r_l · r_l = 1.
Taking the time derivative of equation (<ref>) we get a constraint on the velocity r_l · v_l = 0.
Finally, the mathematical model is completed with the boundary conditions of torque- and force-free filament ends, which at l=± L/2 require
r_ll |_l=± L/2 = 0
and
(-A_B r_lll + Λ r_l -μ_0 M H)|_l=± L/2 = 0.
§.§ Dimensionless parameters
The mathematical model can be rendered dimensionless by introducing the following scales:
* length scale r_0=L,
* time scale t_0=ζ_⊥ L^4 /A_B.
With this scaling, dimensionless parameters appear in the mathematical formulation:
* the magnetoelastic number Cm=μ_0 M H L^2 / A_B,
* the ratio of perpendicular and parallel drag coefficients ζ_⊥/ζ_∥.
The ratio of drag coefficients is close to 2 for slender filaments <cit.>.
If there is a rotating magnetic field driving the filament, a third dimensionless parameter arises:
* the Mason number M_a=ωζ_⊥ L^2 / (12 μ_0 M H),
where ω is the angular frequency of the field.
The equations rendered dimensionless by r_0 and t_0 are used in section <ref>, where the equilibrium shape for small deformations is derived.
Elsewhere, to facilitate reading, dimensional formulas are used.
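To make the scaling concrete, the short Python sketch below evaluates Cm, M_a and t_0 from dimensional quantities. It is an illustration only: the numerical values are those quoted for the experimental example later in the text, and the drag estimate ζ_⊥≈ 4πη is the rough approximation mentioned in the introduction.

import math

mu0 = 4e-7 * math.pi          # vacuum permeability [T*m/A]

def magnetoelastic_number(M, H, L, A_B):
    # Cm = mu0*M*H*L^2 / A_B : ratio of magnetic to elastic forces
    return mu0 * M * H * L**2 / A_B

def mason_number(omega, zeta_perp, L, M, H):
    # Ma = omega*zeta_perp*L^2 / (12*mu0*M*H) : ratio of viscous to magnetic forces
    return omega * zeta_perp * L**2 / (12 * mu0 * M * H)

def relaxation_time(zeta_perp, L, A_B):
    # t0 = zeta_perp*L^4 / A_B : elastic relaxation time scale
    return zeta_perp * L**4 / A_B

# Values quoted later in the text for the experimental filament
M = 3.3e-8                     # linear magnetization [A*m]
H = 0.86e-3 / mu0              # field intensity corresponding to mu0*H = 0.86 mT [A/m]
L = 67.4e-6                    # filament length [m]
A_B = 2.8e-21                  # bending modulus [J*m]
eta = 1e-3                     # viscosity of water [Pa*s]
zeta_perp = 4 * math.pi * eta  # rough drag estimate, see the introduction
omega = 2 * math.pi * 0.1      # assumed field rotation frequency of 0.1 Hz [rad/s]

print(magnetoelastic_number(M, H, L, A_B))       # ~46 for these numbers
print(mason_number(omega, zeta_perp, L, M, H))
print(relaxation_time(zeta_perp, L, A_B))        # ~90 s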
§ DERIVATION OF THE EQUILIBRIUM SHAPE OF A FILAMENT IN A ROTATING FIELD
To find the equilibrium shape of the filament in a rotating magnetic field, we can utilize the lack of inertia in the Stokes flow regime, and move to a coordinate system that rotates with the magnetic field (figure <ref>).
We set the magnetic field along the x axis.
A background flow of v_∞={-ω y, ω x, 0} arises, where ω is the angular velocity of the magnetic field.
When ω=0, the filament lies along the x axis.
We seek an approximate equilibrium shape up to first order in ω and y.
The arclength parameter becomes l=x+O(y^2).
With this approximation, along the x axis the equations stated in the section <ref> read
Λ_x = 0,
where the subscript x denotes the derivative with respect to x.
This together with the only non-trivial boundary condition
(-Cm + Λ)|_x = ± 1/2 = 0
gives us the solution for the tension force
Λ=Cm.
Along the y axis the equations stated in the section <ref> read
-y_xxxx + Λ y_xx +ω x = 0.
The boundary conditions are
y_xx|_x=± 1/2=0,
(-y_xxx+Λ y_x)|_x=± 1/2=0.
Plugging in Λ from equation (<ref>), and requiring that y(0)=0, we get the solution for the equilibrium shape of the filament
y=ω/12 Cm[ 6sinh(x√(Cm))/Cmsinh(√(Cm)/2)
-2x^3 + x (3/2 -12/Cm)
].
It is possible to verify that up to the first order in y, the filament inextensibility condition (eq. (<ref>)) is satisfied.
Additionally from the equation (<ref>) it is evident that y is small when ω/(12Cm) is small, which suggests that ω/(12Cm) ≪ 1 is the criterion for the validity of this solution.
The square-bracketed expression is bounded between ± 1/2 for x∈[-1/2,1/2].
§ ANALYSIS OF THE ASYMPTOTIC EQUILIBRIUM SHAPE
For convenience from now on we will again use dimensional expressions.
Equation (<ref>) for the equilibrium shape in dimensional form reads
y/L=M_a [ 6sinh(x√(Cm)/L)/Cmsinh(√(Cm)/2)
-2x^3/L^3 + x/L(3/2 -12/Cm)
],
where we identify the coefficient in front of the brackets as the Mason number M_a=ωζ_⊥ L^2 / (12 μ_0 M H),
which is the ratio of viscous to magnetic forces in the system.
Interestingly, the tip coordinates of the filament are independent of Cm and are determined solely by M_a.
Denoting with Δ x and Δ y the difference of x and y coordinates between the tips (figure <ref>), we can write
Δ y/Δ x = M_a.
This means that the expression in the square brackets solely determines the shape of the filament connecting the two end points.
We plot equation (<ref>) divided by M_a for different values of Cm (see figure <ref>) to observe how this happens.
For small values of Cm, the filament is nearly straight (which is expected, since in the limit of A_B→∞ the filament should be a rigid rod), whereas a large Cm means that the tips of the filament become bent in the direction of the field.
§ CORRECTION TO THE ASYMPTOTIC SOLUTION TO TAKE INTO ACCOUNT THE LIMIT OF A RIGID ROTATING ROD
It is possible to write the equation for the deviation of a rigid ferromagnetic rod from the magnetic field direction <cit.>
sinθ = Δ y/L = M_a,
where θ is the angle between the rod and the field, and Δ y and L are defined in the same way as for the flexible filament.
In terms of y(l) the equation for the rigid rod (Cm→0) reads
y=M_a l .
Note that unlike eq. (<ref>), eq. (<ref>) is valid for arbitrary deviations from the magnetic field direction.
Knowing this, we can modify eq. (<ref>) to include the limit of rigid rod as Cm→ 0.
We write
y/L= M_a [ 6sinh(l√(Cm)/L)/Cmsinh(√(Cm)/2)
-2l^3/L^3 + l/L(3/2 -12/Cm)
],
where we replaced x → l.
For small y/L, this corrected expression is asymptotically identical to the previously derived eq. (<ref>), whereas in the limit of small Cm it is identical to the rigid rod (eq. (<ref>)) for arbitrary y/L.
Note that the formula only gives the y coordinate of the filament; however, the x coordinate can be determined by integrating dx/dy = √((dl/dy)^2 -1).
We hope that this correction will improve the applicability range of the solution.
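The corrected expression and the arc-length reconstruction of the x coordinate can be evaluated numerically, for example as in the following sketch (Python with NumPy; the parameter values are hypothetical and chosen only for illustration).

import numpy as np

def shape_y(l, L, Cm, Ma):
    # Corrected equilibrium shape y(l), dimensional form of the expression above
    s = np.asarray(l) / L
    bracket = (6.0 * np.sinh(s * np.sqrt(Cm)) / (Cm * np.sinh(np.sqrt(Cm) / 2.0))
               - 2.0 * s**3 + s * (1.5 - 12.0 / Cm))
    return L * Ma * bracket

def shape_xy(L, Cm, Ma, n=2001):
    # Recover x(l) by integrating dx/dl = sqrt(1 - (dy/dl)^2), i.e. the
    # inextensibility (arc-length) constraint rewritten for the x coordinate
    l = np.linspace(-L / 2, L / 2, n)
    y = shape_y(l, L, Cm, Ma)
    dy_dl = np.gradient(y, l)
    dx_dl = np.sqrt(np.clip(1.0 - dy_dl**2, 0.0, None))
    x = np.concatenate(([0.0], np.cumsum(0.5 * (dx_dl[1:] + dx_dl[:-1]) * np.diff(l))))
    return x - np.interp(0.0, l, x), y   # shift so that x = 0 at l = 0

# Example: unit-length filament at Cm = 40, Ma = 0.3 (hypothetical values)
x, y = shape_xy(L=1.0, Cm=40.0, Ma=0.3)
print(y[-1] - y[0])   # tip-to-tip difference Δy equals L*Ma, so Δy/L recovers Ma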
Indeed, looking at figure <ref> we see that for small M_a and small deviations from the magnetic field direction, both eqs. (<ref>) and (<ref>) coincide well with the full numerical solution.
As M_a increases, the corrected eq. (<ref>) follows the numerical shape much more closely; however, even it starts to deviate noticeably for M_a>0.5.
This is of course expected since in the derivation only the perpendicular drag coefficient ζ_⊥ is used, but for larger deformations of the shape, the parallel drag coefficient ζ_∥ starts to play a role.
§ PROCEDURE TO DETERMINE THE FILAMENT PARAMETERS
The decoupling of the effects of M_a and Cm on the shape (one determines the deviation from the field, while the other determines the shape) inspires us to propose the following procedure to determine them for an experimental filament (a code sketch of the first two steps is given after the list).
* Find the tips of the filament, and using eq. (<ref>), determine the Mason number M_a =Δ y / L.
* Plug the found M_a in eq. (<ref>) and determine Cm by varying it until it best describes the filament's shape.
* Once the magnetic moment per unit length M is known, calculate the bending modulus A_B=μ_0 H M L^2 / Cm.
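A minimal sketch of the first two steps is given below (Python with NumPy). The grid search over Cm and the synthetic "measured" bead centers are assumptions made only for illustration; they do not reproduce the fitting code used for the experiments.

import numpy as np

def shape_y(l, L, Cm, Ma):
    # Equilibrium y(l) of the filament (corrected expression from the previous section)
    s = np.asarray(l) / L
    return L * Ma * (6.0 * np.sinh(s * np.sqrt(Cm)) / (Cm * np.sinh(np.sqrt(Cm) / 2.0))
                     - 2.0 * s**3 + s * (1.5 - 12.0 / Cm))

def fit_Cm(l_data, y_data, L, Ma, Cm_grid=np.linspace(1.0, 200.0, 2000)):
    # Step 2: scan Cm and keep the value whose predicted shape best matches
    # the observed bead centers in the least-squares sense
    residuals = [np.sum((shape_y(l_data, L, Cm, Ma) - y_data) ** 2) for Cm in Cm_grid]
    return Cm_grid[int(np.argmin(residuals))]

# Synthetic "measurement": bead centers generated at Cm = 40 with a little noise
L, Ma_true, rng = 1.0, 0.2, np.random.default_rng(0)
l_data = np.linspace(-L / 2, L / 2, 17)
y_data = shape_y(l_data, L, Cm=40.0, Ma=Ma_true) + 1e-3 * rng.standard_normal(l_data.size)

Ma_est = (y_data[-1] - y_data[0]) / L        # step 1: Mason number from the tips
Cm_est = fit_Cm(l_data, y_data, L, Ma_est)   # step 2: magnetoelastic number
print(Ma_est, Cm_est, 1.15 * Cm_est)         # the last value applies the ~15% correction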
Using this procedure, we determined the magnetoelastic Cm and Mason M_a numbers from the numerical equilibrium shapes.
The relative error (fitted - true)/true of the parameters is shown in figure <ref>.
The determined M_a is accurate for relatively large deviations from the magnetic field direction.
The error in M_a is less than 10% for deviations of up to Δ y/L≈ 0.5 from the magnetic field direction (which corresponds to roughly 30^∘ between the filament and the field).
The deviation from the magnetic field direction can be experimentally controlled by the rotation frequency.
One should choose a low enough frequency such that the deviations are small, but the shape is still visually discernible.
As expected, equation (<ref>) gives very accurate M_a values for small Cm, which corresponds to a nearly rigid rod.
However, increasing Cm values leads to an underestimate in M_a.
The error in the estimated Cm is noticeably larger, and is dependent on Cm itself.
The method is most accurate for Cm≈38.
As can be seen in figure <ref>, this is because for this Cm, the shape lies in between the two extreme configurations.
To minimize the error one should find the magnetic field value such that the experimentally observed shape is between the extremes of a straight rod and an S shape, whose tips align with the magnetic field.
Additionally, the deviation from the magnetic field direction should be less than ≈20^∘ and Cm should be in the range ≈ 10...70 to keep the relative error at 15...20 %.
Interestingly, the error seems to be systematic - the method underestimates the true Cm value.
We can therefore increase the fitted Cm by ≈ 15% to get a more accurate result.
To conclude the section let us examine a particular experimental observation (figure 1 (a) in ref. <cit.>, whose data are archived in ref. <cit.>).
We used the procedure outlined in the beginning of this section to estimate the M_a and Cm numbers.
The experimental filament's shape is taken from the center coordinates of the beads that make up the filament.
From the centers of the first and last bead of the filament we get
M_a=0.31, and the shape fit then gives us Cm=40±15, which we can increase by ≈ 15% (to offset the systematic underestimate as seen in figure <ref>) to obtain Cm=46.
From magnetization measurements it was found that these beads possess a magnetic moment of m=1.4· 10^-13 A· m^2 <cit.>.
Dividing by their typical diameter d=4.26 μ m gives us the linear magnetization of the filament M=3.3· 10^-8A· m.
The length of the filament in the experiment is L=67.4 μ m, and the magnetic field is μ_0 H = 0.86 mT.
This allows us to determine the bending modulus A_B=μ_0 H M L^2 / Cm=2.8· 10^-21 J· m.
This result falls in the middle of the values shown in table <ref>.
Finally, for comparison, the effective bending elasticity that arises just from the interaction between magnetic dipoles in the chain is two orders of magnitude smaller, A_B^mag=μ_0 M^2 (ζ(3)+1/6)/(18π)=3.3·10^-23 J· m <cit.>, where ζ(n) is the Riemann zeta function.
This confirms that the bending stiffness mostly comes from the DNA linkers between the beads.
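The arithmetic of this worked example can be checked directly, as in the short Python sketch below; the numerical values are those quoted above.

import math

mu0 = 4e-7 * math.pi        # vacuum permeability [T*m/A]
zeta3 = 1.2020569031595943  # Riemann zeta(3), Apery's constant

mu0_H = 0.86e-3             # applied field mu0*H [T]
M = 3.3e-8                  # linear magnetization [A*m]
L = 67.4e-6                 # filament length [m]
Cm = 46.0                   # fitted magnetoelastic number after the ~15% correction

A_B = mu0_H * M * L**2 / Cm
print(A_B)                  # ~2.8e-21 J*m, the bending modulus quoted above

A_B_mag = mu0 * M**2 * (zeta3 + 1.0 / 6.0) / (18.0 * math.pi)
print(A_B_mag)              # ~3.3e-23 J*m, two orders of magnitude smaller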
§ CONCLUSIONS
The equilibrium shape of a ferromagnetic filament in a rotating field contains the information about the filament's properties.
In particular, the tip positions relative to the magnetic field direction encode the value of the Mason number M_a - the ratio of viscous to magnetic forces.
The shape that connects the tips is only dependent on the magnetoelastic number Cm - the ratio of magnetic to elastic forces.
For small values of Cm the filament takes up a straight shape like a rigid magnetic rod, while for a large Cm the filament's tips bend in the direction of the magnetic field, resulting in an S-like shape.
This allows us to determine Cm just by visually observing the shape of the filament.
Once we know the magnetization of the filament, we can also determine the bending modulus A_B.
§ DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
§ ACKNOWLEDGEMENTS
The authors acknowledge funding by the Latvian Council of Science, project A4Mswim, project No. lzp-2021/1-0470.
|
http://arxiv.org/abs/2307.01783v1
|
20230704154336
|
The Path to Fault- and Intrusion-Resilient Manycore Systems on a Chip
|
[
"Ali Shoker",
"Paulo Esteves Verissimo",
"Marcus Völp"
] |
cs.CR
|
[
"cs.CR",
"cs.AR",
"cs.DC"
] |
The Path to Fault- and Intrusion-Resilient
Manycore Systems on a Chip
Ali Shoker Paulo Esteves-Verissimo
RC3 Center, CEMSE Division,
King Abdullah University of Science
and Technology (KAUST)
<[email protected]>, <[email protected]>
Marcus Völp
University of Luxembourg
Interdisciplinary Center for Security,
Reliability and Trust (SnT) - CritiX group
<[email protected]>
August 1, 2023
=====================================================================================================================================================================================================================================================================================================================================================================================================================================================================
The hardware computing landscape is changing. What used to be distributed systems can now be found on a chip with highly configurable, diverse, specialized and general purpose units. Such Systems-on-a-Chip (SoC) are used to control today's cyber-physical systems, being the building blocks of critical infrastructures. They are deployed in harsh environments and are connected to cyberspace, which makes them exposed to both accidental faults and targeted cyberattacks. This is in addition to the changing fault landscape that continued technology scaling, emerging devices and novel application scenarios will bring. In this paper, we discuss how the very features—distributed, parallelized, reconfigurable, heterogeneous—that cause many of the imminent and emerging security and resilience challenges, also open avenues for their cure through SoC replication, diversity, rejuvenation, adaptation, and hybridization. We show how to leverage these techniques at different levels across the entire SoC hardware/software stack, calling for more research on the topic.
fault and intrusion tolerance, resilience, hardware, system on a chip, FPGA
§ OPPORTUNITIES FOR HARDWARE RESILIENCE
Hardware chips continue to be the core building blocks of computing devices due to their inherent immutability and speed, required in modern digital and mission-critical systems like Cyber-Physical Systems, Healthcare, Fintech, Automotive, and Space. This hardware can implement an entire monolithic system or even serve as a proof-of-trust anchor. Contrary to common belief, hardware is prone to unintentional (benign) and intentional/malicious (intrusion or Byzantine <cit.>) faults. The former can be caused by fabrication materials (e.g., silicon) that are prone to dust, aging, and overheating, or by design/implementation glitches <cit.>. Malicious faults manifest in many forms, prior to or after fabrication, where stealthy logic, backdoors, trojans, kill switches, and post-fab fabric editing are possible <cit.>. In line with this, the trends of building complex hardware out of smaller commercial-off-the-shelf (COTS) components and of introducing programmable/reconfigurable hardware, e.g., FPGAs <cit.>, are closing the gap with software systems: hardware systems are no longer rigid, immutable, and fixed creatures. This raises both new challenges and opportunities, which call for revisiting the way resilient and secure hardware systems are built.
The notable demand on hardware due to the automation and digitalization of services in many sectors has raised new challenges in the hardware fabrication industry, where vendors need to deliver on time and reduce production costs. This has resulted in a divide-and-conquer <cit.> production style: a system is split into smaller and cheaper building blocks, i.e., components. Components are developed in parallel to reduce the production cycle time. Each block is likely developed by a dedicated specialized vendor, i.e., generating COTS <cit.>. This means that the entity synthesizing these COTS can focus on the technology it masters, rather than distributing its efforts across multiple fronts. Despite this, these cheap components are becoming more prone to failures and attacks <cit.>, which can have drastic impacts on critical sectors like Cyber-Physical Systems, smart health systems, mission-critical space systems, etc. Our experience with software systems shows that building systems from small and cheap components can yield more resilience than a single complex monolithic system, which is usually very expensive.
There are ample opportunities for hardware resilience leveraging the above advancements. To demonstrate this, we showcase in Fig. <ref> different levels of the chip development process, from low-level fine-grained gate logic blocks up to multicore systems-on-chip (SoC). The literature reports selected resiliency techniques at most of these layers, for constructing resilient clock networks, replicated power domains, and lock-step coupling of cores <cit.>, which is a good starting point. We, however, advocate for more systematic and comprehensive resiliency, probably leveraging hardware hybrids to simplify the designs. This holistic view helps optimize SoC designs by suggesting the right level of resiliency at each stage to reduce redundant complexity and cost.
In a nutshell, the lowest level in Fig. <ref> is building a single-layer microchip that constitutes a simple logical circuit of gates. Different gates are known to have different resiliency levels <cit.>. Recently, SiNW transistors have been used to bridge source to drain with multiple nanowires to compensate for manufacturing defects and aging <cit.>. While a typical design process mainly considers the space, energy, and time metrics of the design, making these circuits more resilient would mean trading these metrics for resiliency, e.g., using backup gates, replicated parallel gates, or diverse gates <cit.>.
On the other hand, single-layered circuits can today be synthesized in a 3D fabric <cit.>. Layers typically have different, complementary functionalities. However, a fabric can also contain layers of identical functionality from different vendors, which is useful to improve diversity in fault-masking scenarios (discussed later). It is also helpful to synthesize a monolithic chip from multi-vendor layers to avoid vendor lock-in or potential aging issues, backdoors, and kill switches <cit.>—the so-called distribution attack on the supply chain.
At a higher level, again depicted in Fig. <ref>, these 3D microchips can be assembled to build a system-on-chip fabric <cit.>. Again, components of identical functionality can be used to build a fault- and intrusion-masking SoC fabric. This can be enriched with heterogeneous, diverse microchips at a higher level, thus building resilient Multicore Systems on Chip (MPSoC) <cit.>.
At the higher layers, where a software stack complements the functionality of the system to form more programmable, flexible hardware (discussed next), one can take advantage of a remarkable body of research and practice to build resilient soft-custom logic <cit.>. This can be done by exploiting virtualization techniques to provide software-level containment and replication.
More complex systems can be built through networked systems of systems on chip. First instances of networked SoC systems are already emerging in the automotive, aeronautics, and CPS domain.
Across this spectrum, we foresee a need and opportunities to revisit how resilient hardware is built:
* building complex systems of systems and MPSoCs out of smaller COTS;
* taking advantage of the programability and elasticity of modern hardware, e.g., FPGA, GPGPU, to replicate, diversify, and adapt; and
* simplifying the design of secure robust systems using smaller hardware hybrids—easy to design and verify, as resilient anchors.
Challenge 1: fault containment. On a single chip, several single points of failure threaten containment: a shared energy supply (mitigated by separate power domains), a shared clock network (mitigated by GALS designs and resilient clock distribution such as Steininger's work), crosstalk, and heat, which can serve as a cross-core fault-injection vector as well as a side or covert channel. Systems must therefore be constructed such that, at some level, only accidental faults can happen; error detection and correction then constrain how faults propagate. The lower this level, the better, although faults may still propagate into legitimately accessible objects. Fault independence must be addressed as well, both in terms of requirements and of existing solutions (resilient clock algorithms, heterogeneous cores), ultimately leading to reconfiguration and organic computing as primitives.
Challenge 2: need for operational resilience. The state of the art is static partitioning, but modern applications require change: to adapt to threats, to adapt to application changes, and to save power. Supporting change without trusted components is impossible: change requires changing access to resources, but the entity performing that change and the entity enforcing it may fail as well; in particular, a failure in either of the two may grant access to a resource that should not be accessed, and faults may propagate into that resource. It suffices, however, for the enforcement to be trusted, provided the decision of what to enforce is made consensually, as in Midir <cit.>. Change is also required to recover from faults, for example to relocate and rejuvenate components.
§ PROGRAMABILITY, ELASTICITY, PLASTICITY
The genuine
immutability properties of hardware components and elements, make them ideal for security hardening and containment, i.e., by making the hard-implemented logic tamper-resistant against both benign and intrusion faults.
Despite these facts, there is a continuous wave of relaxing these “rigid” hardware designs through introducing programmable (including reconfigurable and adaptable) fabric <cit.>. The main reason is to improve hardware flexibility and compatibility, i.e., making them application-agnostic, and to facilitate the daunting design verification process prior to fabrication, hence cutting off fabrication costs thereof. For this, programmable hardware is considered a tradeoff between software logic—fully flexible, slow, and mutable—and hard logic—fully rigid, fast, and immutable.
We believe that there are promising opportunities to boost the resilience of the programmable platforms against faults and intrusions, although immutability is slightly reduced. To explain these benefits, we consider two classes of programmable hardware:
Soft Custom Logic Fabric (SCLF): these are commonly known as software-defined devices like PLCs, ECUs, and SDN devices <cit.>. This hardware is mostly domain-specialized, with computing done on general-purpose microcontrollers or microprocessors, often managed by a full software stack: hypervisors, RTOS/OS, drivers, libraries, and applications. Consequently, these devices exhibit high programability, analogous to IT computing, although they have specialized roles and use domain-specific peripherals, e.g., sensors, actuators, and interfaces.
Hard Custom Logic Fabric (HCLF): these are hardware chip fabrics, e.g., FPGA <cit.> and GPGPU <cit.>, composed of arrays of logical components, e.g., gates and multiplexers, that are not "hard etched", i.e., they can be reprogrammed as needed. The programming logic in this case is almost entirely implemented in hardware, without the need for a software stack at runtime. The fabric is reprogrammed through soft IP cores <cit.>
(HDL code <cit.>) or through components (softcores or blocks) synthesized on the chip as needed. This programability feature is a very interesting tradeoff that retains the speed and security of Application-Specific Integrated Circuit (ASIC) chips, while giving the flexibility to support diverse applications and update implementations without the need for costly and slow fabrication.
Although programability, in both classes, opens the door to tampering with the system, and thus to injecting surveillance circuits, intrusions, and backdoors <cit.> after fabrication (though to a lesser extent than in software systems), there is a huge opportunity to leverage this programability to improve the resilience of these systems through four main ingredients: replication, diversity, rejuvenation, and adaptation.
§.§ Replication
Replication is often useful to build resilience against benign or Byzantine faults. Passive replication <cit.> allows a failing system to fail over to a backup replica. This is a cheap solution that typically requires one passive backup replica. However, recovery is slow, requires reliable detection, and is not seamless to the user, even if implemented entirely at the transistor level.
For example, Razor <cit.> integrates detection capabilities, originally for timing faults in sequential logic, but also for power instability <cit.> and side channels <cit.>, and re-injects stored state into the pipeline for re-execution. Albeit functionally transparent, users may observe timing differences and anomalies caused by them.
Active replication masks faults by building a deterministic replicated state machine <cit.>, composed of replicas of identical functionality, which execute an agreement protocol, e.g., Paxos <cit.> or PBFT <cit.>. The number of required replicas is typically 2f+1 or 3f+1 in order to tolerate f crash or Byzantine faults, respectively. Interestingly, several works make use of hardware hybrids as a root of trust to simplify these protocols and build resilient broadcast and agreement abstractions for embedded real-time systems <cit.> (requiring only 2f+1 replicas to tolerate f Byzantine ones).
Replication in SCLF is analogous to replication at the software layer. While some works have studied this in selected settings <cit.>, there are research opportunities in other real-time applications like software-defined vehicles, UXVs, the Smart Grid, etc.
On the other hand, replication in HCLF is today easier than ever. Using an FPGA, it is possible to spawn replicas as soft cores or logical blocks, using off-the-shelf soft IPs. This is a nice hardware feature that gives the flexibility to create hard-replicas quickly and on-demand, using only one fabric, in a similar way to creating virtual machines or containers at software level.
§.§ Diversity
Resiliency through active replication is, however, only guaranteed as long as the replicas fail independently <cit.>. The second ingredient, diversity, helps build replicas of the same functionality but with different implementations. The aim is to avoid common-mode benign failures and intrusions.
Since programability in both classes, SCLF and HCLF, opens new avenues for multi-vendor implementations and COTS, the likelihood of diversity is higher than in the case of monolithic hardware, which requires deep technology capabilities. An interesting trend that would greatly benefit this model is more standardization of architectures and APIs. For instance, the introduction of the AutoSAR <cit.> standard has greatly enriched the automotive market with multi-vendor implementations of the entire software and hardware stack, which act as black boxes of identical functionality.
CUDA <cit.> and OpenGL <cit.> provide standard APIs to implement accelerated parallel computing logic on a GPGPU using COTS implementations.
Open source hardware platforms like RISC-V <cit.> also standardize the architectures provided by different vendors, and enrich the market with diverse architectures.
Interestingly, FPGAs allow for hardware diversity by modifying the hard logic using different implementations or specifications of the softcore/block IP, possibly from different vendors, which is then used to spawn computing cores. It would be interesting to study the case where IP compilers generate diverse versions of identical softcores to be used on the fly. First approaches towards such a generation of morphable softcores have been investigated in the context of organic computing <cit.>.
§.§ Rejuvenation
Rejuvenation is the third ingredient, complementary to replication and diversity. The latter techniques can only maintain resilience as long as the number of failing replicas stays within the assumed bound f. This assumption is unfortunately hard to uphold due to benign faults and malicious behaviours. The first is related to aging, which manifests in software <cit.> as memory leakage, failure to release resources and locks, failure to garbage collect, data corruption, etc. Surprisingly, aging also occurs in hardware, due to the deterioration of hardware material under overuse, overheating, etc. The second reason has recently been getting more attention with the increasing number of Advanced Persistent Attacks (APTs)—where a great deal of time and effort is usually put into identifying vulnerabilities and exploiting them. While this might be clear at the software level, there are continuous concerns about hardware backdoors and timed Trojans. Indeed, this is behind the recent agendas of acquiring chip sovereignty or split manufacturing in many countries <cit.>.
SCLF reprogramability can greatly benefit from the huge body of research on software rejuvenation, which is proven to mitigate failures. This would be even more effective when rejuvenation is combined with diversity, which allows rejuvenating to a different implementation with identical functionality, in consequence reducing the success rate of APTs. Using FPGAs, rejuvenation can also happen at the hardware level in HCLF <cit.>. An FPGA allows restarting or spawning new soft cores and logical blocks at runtime—avoiding slow device restarts. In fact, one can partially rejuvenate some soft cores while others continue to run. FPGAs allow for even smarter techniques, e.g., rejuvenating to diverse softcore variants that are loaded at different FPGA spatial locations, which can avoid potential backdoors in the FPGA grid fabric.
§.§ Adaptation
Yet another way to withstand a varying number of faults f is to adapt the resilient system accordingly. Among the adaptation forms are scaling the system out/in when f may change, e.g., upon experiencing more threats, or switching to a backup protocol that is more adequate for the current conditions <cit.> (considering safety, liveness, performance, etc.). This requires research on the aforementioned adaptation mechanisms and, importantly, on severity detectors that can trigger adaptation actions when needed. As discussed above, both SCLF (e.g., virtualization) and HCLF (e.g., FPGAs) provide tempo-spatial elasticity, which allows changing the number of replicas and their locations on the fabric as needed. It will be interesting to study these research questions from scratch or to validate the feasibility of existing solutions developed in the software realm.
§.§ Resilient Reconfiguration
It should be evident that reconfiguration must be resilient to faults and attacks, irrespective of the kind of adjustment performed (i.e., diverse rejuvenation, relocation, or adaptation). This holds for both reconfiguration of an FPGA grid fabric as well as multi-chip FPGAs—where the individual FPGA chiplets are the unit of reconfiguration. We shall focus here exclusively on internal, partial and dynamic reconfiguration,
since the reliance on external complex and non-configurable modules (e.g., CPUs) would induce a weak spot in the system, which could contaminate its resilience or introduce downtimes. Nevertheless, dependencies on external hybrids that are simple, and thus easy to verify, are allowed if they simplify the design.
Internal, partial and dynamic mean respectively that reconfiguration (i) is driven from within the FPGA, e.g., by an HCLF or softcore defining the configuration bitstream to be loaded into a reconfigurable region (or frame) through interfaces like internal configuration access ports, (ii) it is bound to the reconfigured area and elements therein, and (iii) it happens while other parts of the FPGA continue to execute.
Optimizing the mapping of blocks to the FPGA grid fabric and integrating the configured block with the remaining blocks remain sufficiently complex tasks to be executed by a software-level operating-system kernel. Disabling and enabling configured circuits and frames constitute the critical operations, which leaves writing the configuration memory and validating that a correct bitstream is written as tasks that can be executed by the responsible kernel or possibly even kernel replicas. Provided sufficient access controls are in place at the internal configuration access ports, the actual configuration of a frame can even be delegated to its current user. However, as shown in Gouveia et al. <cit.>, privilege change must remain a trusted operation executed consensually and enforced by a trusted-trustworthy component. This leads to the more general question of architectural hybridization, which we address next.
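Before turning to hybridization, the following Python sketch gives a rough illustration of consensual enabling: a trusted-trustworthy gate activates a reconfigured frame only after a quorum of kernel replicas has voted on the digest of the written and validated bitstream. The class, its interface and the f+1-out-of-2f+1 quorum rule are assumptions made purely for illustration, not a description of an existing implementation.

import hashlib
from collections import Counter

class TrustedEnableGate:
    # Hypothetical trusted component guarding the internal configuration access
    # port: it enables a frame only if f+1 of the kernel replicas agree on the
    # digest of the bitstream written into that frame's configuration memory.
    def __init__(self, f):
        self.f = f
        self.votes = {}                     # frame_id -> {replica_id: digest}

    def vote(self, replica_id, frame_id, bitstream: bytes):
        digest = hashlib.sha256(bitstream).hexdigest()
        self.votes.setdefault(frame_id, {})[replica_id] = digest
        return digest

    def try_enable(self, frame_id):
        tally = Counter(self.votes.get(frame_id, {}).values())
        digest, count = tally.most_common(1)[0] if tally else (None, 0)
        if count >= self.f + 1:
            # In hardware this would assert the enable signal of the frame
            return ("enabled", digest)
        return ("waiting", None)

gate = TrustedEnableGate(f=1)               # tolerate one faulty kernel replica
bitstream = b"...partial bitstream..."
for replica in ("k0", "k1"):                # two correct replicas vote identically
    gate.vote(replica, frame_id=7, bitstream=bitstream)
print(gate.try_enable(7))                   # ('enabled', <sha256 digest>)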
§ ARCHITECTURAL HYBRIDIZATION
Differentiating how the individual hard- and software components of an MPSoC architecture can fail, architectural hybridization aims at benefiting from small easy-to-verify and therefore more trustworthy components, called hybrids. The goal is to enable, simplify or improve the performance of the overall system, by serving as trust anchors for these properties. These could be components (registers, memory, trusted execution environments or networks) such as USIG, A2M, TrInc, SGX and others, used in hybrid BFT-SMR protocols <cit.>.
Realizing hybridization poses a challenge dual to the question whether SCLF or HCLF leads to more reliable systems-on-a-chip. For software-only hybrids, we used to equate simplicity (measured for example in lines-of-code required to realize a certain functionality) with a low likelihood of failure and ease of verification. However, at hardware level, this equation is not as obvious, even if we consider lines-of-VHDL or another hardware description language.
We illustrate this using the USIG from the MinBFT protocol by Veronese et al. <cit.> as example. USIG is essentially a sequential circuit, which is driven by the counter register and a few additional registers, which provide as constants the secret key for the HMAC and the ID of the replica. The lowest complexity version of such a circuit will use normal registers. But then any bitflip in the counter will have catastrophic effects on the consensus problem at hand since it is reflected unchanged in the computed HMAC and USIG output. ECC-registers on the other hand add extra bits and the logic required for correction, which both increase the complexity of the circuit at the benefit of tolerating a certain number of bitflips. We also see the converse effect when the required complexity of producing a special purpose circuit for a given functionality exceeds the complexity of a simple core that is able to fetch, decode and execute software. Once the inherent complexity of such a functionality exceeds this bound, software implementations become preferable and hybridization amounts to providing such an isolated core.
The objective of hardware-level hybridization is therefore to remain in this middle-ground. Hardware hybrids, protected by ECC and other accidental- and malicious-fault countermeasures, provide the desired functionality. This can then be extended into the realm of software hybrids that are possibly executed in a replicated manner and that vote to perform critical operations <cit.>.
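To make the notion of such a hybrid concrete, the following Python sketch mimics a USIG-like create/verify interface in software. The exact message layout and field widths are assumptions made for illustration; the sketch does not reproduce the MinBFT implementation.

import hmac, hashlib

class USIG:
    # Software sketch of the hybrid described above: a monotonic counter plus an
    # HMAC over (replica id, counter, message digest). Note that a single bit
    # flip in the counter would propagate unchanged into the MAC, which is the
    # motivation for ECC protection discussed in the text.
    def __init__(self, replica_id: int, key: bytes):
        self.replica_id = replica_id
        self.key = key
        self.counter = 0                    # the trusted monotonic counter

    def _mac(self, replica_id, counter, message):
        data = (replica_id.to_bytes(4, "big") + counter.to_bytes(8, "big")
                + hashlib.sha256(message).digest())
        return hmac.new(self.key, data, hashlib.sha256).digest()

    def create_ui(self, message: bytes):
        self.counter += 1                   # never reused, never decreased
        return self.counter, self._mac(self.replica_id, self.counter, message)

    def verify_ui(self, replica_id, counter, message, mac):
        return hmac.compare_digest(self._mac(replica_id, counter, message), mac)

usig = USIG(replica_id=0, key=b"shared-secret")
c, mac = usig.create_ui(b"PREPARE view=1 req=42")
assert usig.verify_ui(0, c, b"PREPARE view=1 req=42", mac)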
§ CONCLUSIONS AND CALL TO ACTION
We emphasized that hardware architectures, and in particular multi- and manycore systems-on-a-chip, are not the robust, dependable and reliable computing units we would like to have. We have subsequently started to replicate entire systems, which has ultimately led to the huge body of knowledge on implementing resilient distributed systems. However, as we have seen, the continuing miniaturization and integration of processing elements into a single MPSoC makes full-system resilience increasingly costly, in particular when a single system already provides all the processing power that future critical applications need. We have shown how reconfiguration, rejuvenation and adaptation already allow the hardware to repair itself, to recover from faults, and to retain the resources classical resilience mechanisms need, when applied entirely on chip.
Hybridization, rooted in circuits of exactly the right complexity and applied to construct incrementally more complex dependable systems, will produce the next generation of flexible, morphable and highly trustworthy systems that mission-critical applications will need.
We therefore appeal for more research to study the resilience of hardware-based systems, systems of systems, and MPSoCs at different layers and cutting vertically across layers, probably through validating the techniques developed in the software Systems and Dependability areas.
|
http://arxiv.org/abs/2307.02105v2
|
20230705082618
|
Incremental Model Transformations with Triple Graph Grammars for Multi-version Models
|
[
"Matthias Barkowsky",
"Holger Giese"
] |
cs.SE
|
[
"cs.SE"
] |
Incremental Model Transformations with Triple Graph Grammars for Multi-version Models
This work was developed mainly in the course of the project modular and incremental Global Model Management (project number 336677879) funded by the DFG.
1st Matthias Barkowsky
System Analysis and Modeling Group
Hasso-Plattner Institute at the University of Potsdam
Prof.-Dr.-Helmert-Str. 2-3, D-14482 Potsdam, Germany
[email protected]
2nd Holger Giese
System Analysis and Modeling Group
Hasso-Plattner Institute at the University of Potsdam
Prof.-Dr.-Helmert-Str. 2-3, D-14482 Potsdam, Germany
[email protected]
August 1, 2023
=====================================================================================================================================================================================================================================================================================================================================================================================================================================
Like conventional software projects, projects in model-driven software engineering require adequate management of multiple versions of development artifacts, importantly allowing living with temporary inconsistencies. In previous work, multi-version models for model-driven software engineering have been introduced, which allow checking well-formedness and finding merge conflicts for multiple versions of a model at once. However, also for multi-version models, situations where different artifacts, that is, different models, are linked via automatic model transformations have to be handled.
In this paper, we propose a technique for jointly handling the transformation of multiple versions of a source model into corresponding versions of a target model, which enables the use of a more compact representation that may afford improved execution time of both the transformation and further analysis operations. Our approach is based on the well-known formalism of triple graph grammars and the aforementioned encoding of model version histories called multi-version models. In addition to batch transformation of an entire model version history, the technique also covers incremental synchronization of changes in the framework of multi-version models.
We show the correctness of our approach with respect to the standard semantics of triple graph grammars and conduct an empirical evaluation to investigate the performance of our technique regarding execution time and memory consumption. Our results indicate that the proposed technique affords lower memory consumption and may improve execution time for batch transformation of large version histories, but can also come with computational overhead in unfavorable cases.
Multi-version Models,
Triple Graph Grammars,
Incremental Model Transformation
§ INTRODUCTION
In model-driven software development, models are treated as primary development artifacts. Complex projects can involve multiple models, which describe the system under development at different levels of abstraction or with respect to different system aspects and can be edited independently by a team of developers. In this case, consistency of the holistic system description is ensured by batch model transformations, which automatically derive new models from existing ones, and incremental model transformations, that is, model synchronizations, which propagate changes to a transformation's source model to the transformation's target model<cit.>.
Similarly to program code, the evolution of models via changes by different developers requires management of the resulting versions of the software description. In particular, version management has to support parallel development activities of multiple developers working on the same artifact, where living with inconsistencies may temporarily be necessary to avoid loss of information <cit.>. In <cit.>, we have introduced multi-version models as a means of managing multiple versions of the same model that also enables monitoring the consistency of the individual model versions and potential merge results of versions developed in parallel.
However, with model transformations effectively linking multiple models via consistency relationships, considering only the evolution of a single model without its context is insufficient for larger model-driven software development projects. Thus, a mechanism for establishing consistency of different versions of such linked models that allows parallel development of multiple versions is required. On the one hand, this requires support for transforming multiple versions of a source model into the corresponding target model versions, for instance when a new kind of model is introduced to the development process. On the other hand, the inherently incremental scenario of development with multiple versions calls for efficient synchronization of changes between versions of related models. In order to achieve efficient transformation and synchronization of models with multiple versions and enable further analysis operations as described in <cit.>, a close integration of this transformation and synchronization mechanism and version handling seems desirable.
Therefore, in this paper[See <cit.> for the technical report version of the paper.], we explore a step in the direction of model transformations for multi-version models by adapting the well-known formalism of triple graph grammars, which enables the implementation of single-version model transformations and synchronizations, to the multi-version case.
The remainder of the paper is structured as follows: In Section <ref>, we reiterate the concepts of graphs, graph transformations, triple graph grammars, and multi-version models. We subsequently present our approach for deriving transformation rules that work on multi-version models from single-version transformation specifications in the form of triple graph grammars in Section <ref>. In Section <ref>, we describe how the derived rules can be used to realize joint transformation of all model versions encoded in a multi-version model and prove the technique's correctness with respect to the semantics of triple graph grammars. Section <ref> discusses the extension of the approach to incremental model synchronization. Section <ref> reports on results of an initial evaluation of the solution's performance based on an application scenario in the software development domain. Related work is discussed in Section <ref>, before Section <ref> concludes the paper.
§ PRELIMINARIES
In this section, we give an overview of required preliminaries regarding graphs and graph transformations, triple graph grammars, and multi-version models.
§.§ Graphs and Graph Transformations
We briefly reiterate the concepts of graphs, graph morphisms and graph transformations and their typed analogs as defined in <cit.> and required in the remainder of the paper.
A graph G = (V^G, E^G, s^G, t^G) is given by a set of nodes V^G, a set of edges E^G and two functions s^G: E^G → V^G and t^G: E^G → V^G assigning each edge a source and target node. A graph morphism m: G → H consists of two functions m^V: V^G → V^H and m^E: E^G → E^H such that s^H ∘ m^E = m^V ∘ s^G and t^H ∘ m^E = m^V ∘ t^G. We call m^V the vertex morphism and m^E the edge morphism.
A typed graph G^T = (G, 𝑡𝑦𝑝𝑒^G) comprises a graph G along with a typing morphism 𝑡𝑦𝑝𝑒: G → TG into a type graph TG. In this paper, we consider a model to be a typed graph, with the type graph defining a modeling language by acting as a metamodel. A typed graph morphism from a typed graph G^T = (G, 𝑡𝑦𝑝𝑒^G) into a typed graph H^T = (H, 𝑡𝑦𝑝𝑒^H) with the same type graph is a graph morphism m^T: G → H such that 𝑡𝑦𝑝𝑒^G = 𝑡𝑦𝑝𝑒^H ∘ m^T. A (typed) graph morphism m with injective functions m^V and m^E is called a monomorphism. If m^V and m^E are also surjective, m is called an isomorphism.
Figure <ref> shows an example typed graph on the left along with the corresponding type graph on the right. The typing morphism is encoded by the node's labels. The graph represents an abstract syntax graph of a program written in an object-oriented programming language. Nodes may represent class declarations (ClassDecl), field declarations (FieldDecl) or type accesses (TypeAccess). Class declarations can contain field declarations via edges of type declaration, whereas field declarations can reference a class declaration as the field type via a TypeAccess node and edges of type access and type. The graph contains two class declarations, one of which contains a field declaration, the field type of which is given by the other class declaration.
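As an illustration of these definitions, the following Python sketch (a hypothetical encoding chosen only for this example) represents the type graph and the instance graph described above and checks the typing-morphism conditions.

from dataclasses import dataclass, field

@dataclass
class Graph:
    # A graph G = (V, E, s, t): nodes, edges, and source/target functions
    V: set = field(default_factory=set)
    E: set = field(default_factory=set)
    s: dict = field(default_factory=dict)   # edge -> source node
    t: dict = field(default_factory=dict)   # edge -> target node

def is_morphism(m_V, m_E, G, H):
    # Check s^H(m_E(e)) = m_V(s^G(e)) and t^H(m_E(e)) = m_V(t^G(e)) for all edges of G
    return all(H.s[m_E[e]] == m_V[G.s[e]] and H.t[m_E[e]] == m_V[G.t[e]] for e in G.E)

# Type graph: ClassDecl, FieldDecl, TypeAccess and the edge types between them
TG = Graph(V={"ClassDecl", "FieldDecl", "TypeAccess"},
           E={"declaration", "access", "type"},
           s={"declaration": "ClassDecl", "access": "FieldDecl", "type": "TypeAccess"},
           t={"declaration": "FieldDecl", "access": "TypeAccess", "type": "ClassDecl"})

# Instance graph: one class containing a field whose type is the other class
G = Graph(V={"c1", "c2", "f1", "ta1"}, E={"d1", "a1", "t1"},
          s={"d1": "c1", "a1": "f1", "t1": "ta1"},
          t={"d1": "f1", "a1": "ta1", "t1": "c2"})

# Typing morphism type: G -> TG, given by its vertex and edge components
type_V = {"c1": "ClassDecl", "c2": "ClassDecl", "f1": "FieldDecl", "ta1": "TypeAccess"}
type_E = {"d1": "declaration", "a1": "access", "t1": "type"}
assert is_morphism(type_V, type_E, G, TG)    # G together with type is a typed graph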
A (typed) graph transformation rule γ is characterized by a span of (typed) graph monomorphisms L ←^l K →^r R and can be applied to a graph G via a monomorphism m : L → G called match that satisfies the so-called dangling condition <cit.>. The result graph H of the rule application is then formally defined by a double pushout over an intermediate graph <cit.>. We denote the application of γ via m by G →^γ_m H. Intuitively, the application of γ deletes the elements in m(L) that do not have a corresponding element in R and creates new elements for elements in R that do not have a corresponding element in L. The graph L is called the rule's left-hand side, K is called the rule's glueing graph, and R is called the right-hand side.
γ is a graph production if it does not delete any elements, that is, l is surjective. In this case, since L and K are isomorphic, we also use the simplified representation L → R.
Figure <ref> shows an example graph production in shorthand notation, where preserved elements are colored black, whereas created elements are colored green and marked by an additional “++” label. For two existing classes, the production creates a field declaration in one of them that references the other class declaration as the field type.
We denote a sequence of applications of rules from a set of rules Γ to a graph G with resulting graph G' by G →^Γ G'. We say that such a rule application sequence is maximal if it cannot be extended by any application of a rule from Γ.
Maximal Rule Application Sequence
A sequence of rule applications G →^Γ G' with a set of graph transformation rules Γ is maximal if no rule in Γ is applicable to G'.
§.§ Triple Graph Grammars
Triple graph grammars (TGGs) were initially presented by Schuerr <cit.>. This paper is based on the slightly adapted version introduced in <cit.>. A TGG relates a source and a target modeling language via a correspondence modeling language and is characterized by a set of TGG rules. In <cit.>, a TGG rule is defined by a graph production that simultaneously transforms connected graphs from the source, correspondence and target modeling language into a consistently modified graph triplet. The set of TGG rules has to include an axiom rule, which has a triplet of empty graphs as its left-hand side and defines a triplet of starting graphs via its right-hand side.
The left-hand side of a TGG rule γ = L → R can be divided into the source, correspondence, and target domains L_S, L_C, and L_T respectively, with L_S ⊆ L, L_C ⊆ L, and L_T ⊆ L and L_S ⊎ L_C ⊎ L_T = L. The right-hand side can similarly be divided into three domains R_S, R_C, and R_T. The type graph for graph triplets and TGG rules is hence given by the union of the type graphs defining the source, correspondence, and target language along with additional edges connecting nodes in the correspondence language to elements in the source and target language. It is also assumed that each element in L_S, L_T, R_S, and R_T is connected to exactly one node in L_C or R_C and that each rule creates exactly one node in the correspondence domain.
TGGs can be employed to transform a model of the source language into a model of the target language. This requires the derivation of so-called forward rules from the set of TGG rules. A forward rule for a TGG rule γ = L → R can be constructed as γ^F = L^F ← L^F → R, where L^F = L ∪ (R_S ∖ r(L)) and r^F = r ∪ id, with id the identity morphism. Intuitively, γ^F already requires the existence of the elements in the source domain that would be created by an application of γ and only creates elements in the correspondence and target domain. In the following, we also denote the subgraph of a forward rule that corresponds to the subgraph that is newly transformed by the rule by L^T = L^F ∖ L.
Additionally, the derivation of a forward rule requires a technical extension to avoid redundant translation of the same element. Therefore, a dedicated bookkeeping node, which is connected to every currently untranslated source element via a bookkeeping edge, is introduced. Then, a bookkeeping node and bookkeeping edges to all elements in L^T are added to the forward rule's left-hand side. The bookkeeping node is also added to the rule's glueing graph and right-hand side. The application of the forward rule via m thus requires that elements in m(L^T) are untranslated, as indicated by the presence of bookkeeping edges, and marks them as translated by deleting the adjacent bookkeeping edges.
Note that, in order to allow bookkeeping edges and outgoing edges of correspondence nodes to target regular edges, a slightly extended graph model is used, which is detailed in <cit.>. In this paper, we will call such graphs graphs with bookkeeping. We say that two graphs with bookkeeping G and H are equal up to isomorphism including bookkeeping if and only if there exists an isomorphism iso : G → H such that for all nodes and edges x ∈ G, it holds that (∃ b ∈ E'^G: t'^G(b) = x) ↔ (∃ b' ∈ E'^H: t'^H(b') = iso(x)), where E'^G denotes the set of bookkeeping edges and t'^G the related target function.
Figure <ref> shows a TGG rule for linking the language for abstract syntax graphs given by the type graph in Figure <ref> to a modeling language for class diagrams given by the type graph on the right in Figure <ref>, using the correspondence language on the left. The rule simultaneously creates a FieldDecl and TypeAccess along with associated edges in the source domain (labeled S) and a corresponding Association with associated edges in the target domain (labeled T), which are linked via a newly created correspondence node of type CorrField in the correspondence domain (labeled C). Edges from the correspondence node to other edges are omitted for readability.
Figure <ref> shows the forward rule derived from the TGG rule in Figure <ref>. The elements f_1 and t_1 and adjacent edges are no longer created but preserved instead. Also, the rule contains a bookkeeping node and adjacent bookkeeping edges to these elements. The rule's application then deletes these bookkeeping edges and creates the corresponding elements in the target domain along with the linking node cf_1 and edges in the correspondence domain. The bookkeeping mechanism is however not visualized for readability reasons.
TGGs can also be used to perform a transformation from the target to the source language by means of similarly derived backward rules. In the following, we will focus on the forward case. However, the backward case simply works analogously.
A TGG without any critical pairs <cit.> among its rules is called deterministic <cit.>. A forward transformation with a deterministic TGG can be executed via an operation trans^F, which adds a bookkeeping node and bookkeeping edges to all elements in the source model and then applies the TGG's forward rules for as long as there is a match for any of them. For a deterministic TGG with a set of forward rules Γ and a starting model triplet SCT, any produced maximal rule application sequence SCT →^Γ SCT' that deletes all bookkeeping edges in SCT then constitutes a correct model transformation, and all such sequences yield the same result. In this paper, we will focus on such deterministic TGGs, which allow for efficient practical implementations that avoid potentially expensive undoing of forward rule applications and backtracking <cit.>.
In addition to a full batch transformation of a previously untransformed model, TGGs also enable incremental synchronization of changes to an already transformed model to the transformation result. In the most basic case, this involves undoing all forward rule applications that are invalidated by the deletion of elements in a first step. In a second step, elements that are no longer covered as well as newly created elements are then transformed via the TGG's forward rules.
§.§ Multi-version Models
In this paper, we consider models in the form of typed graphs. A model modification can in this context be represented by a span of morphisms M ← K → M', where M is the original model, which is modified into a changed model M' via an intermediate model K <cit.>. A version history of a model is then given by a set of model modifications Δ^M_{1,...,n} between models M_1, M_2, ..., M_n with type graph TM. We call a version history with a unique initial version and acyclic model modification relationships between the individual versions a correct version history.
In <cit.>, we have introduced multi-version models as a means of encoding such a version history in a single consolidated graph. Therefore, an adapted version of TM, TM_mv, is created. To represent model structure, TM_mv contains a node for each node and each edge in TM. Source and target relationships of edges in TM are represented by edges in TM_mv. In addition, a version node with a reflexive suc edge is added to TM_mv, which allows the materialization of the version history's version graph. The version graph and the model structure are linked via cv_v and dv_v edges from each node v in TM_mv to the version node.
Figure <ref> displays the adaptation of the type graph from Figure <ref>. cv and dv edges are omitted for readability reasons.
TM_mv allows the translation of Δ^M_{1,...,n} into a single typed graph MVM conforming to TM_mv, which is called a multi-version model, via a procedure comb. This translation yields a bijective function origin: V^MVM→⋃_i ∈{1, 2, ..., n} V^M_i∪ E^M_i mapping the nodes in MVM to their respective original element. An individual model version can be extracted from MVM via the projection operation proj(MVM, i) = M_i. Finally for a node v_mv∈ V^MVM, the set of model versions that include the element origin(v_mv) can be computed via the function p, with p(v_mv) = {M_i ∈{M_1, M_2, ..., M_n} | origin(v_mv) ∈ M_i}.
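To give an intuition of how p(v_mv) can be evaluated, the following sketch computes the set of versions in which an element is present by traversing the version graph from its creation versions and stopping at its deletion versions; the dictionary-based encoding of suc edges and the exact stopping rule are simplifying assumptions rather than the precise definition from <cit.>.

# Illustrative computation of p(v_mv): versions reachable (via suc edges) from the
# creation version(s) of an element, stopping at the versions where it is deleted.
def presence_set(creation_versions, deletion_versions, suc):
    """suc: dict mapping a version to the set of its successor versions."""
    present, stack = set(), list(creation_versions)
    while stack:
        version = stack.pop()
        if version in present or version in deletion_versions:
            continue                     # deleted here or already visited
        present.add(version)
        stack.extend(suc.get(version, ()))
    return present

# usage sketch: element created in v1, deleted in v3, history v1 -> v2 -> v3 -> v4
suc = {"v1": {"v2"}, "v2": {"v3"}, "v3": {"v4"}}
print(presence_set({"v1"}, {"v3"}, suc))   # {'v1', 'v2'}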
§ DERIVATION OF MULTI-VERSION TRANSFORMATION RULES FROM TRIPLE GRAPH GRAMMARS
The transformation of the individual model versions encoded in a multi-version model with a TGG can trivially be realized via the projection operation proj. However, the multi-version model may in practice afford a more compact representation compared to an explicit enumeration of all model versions, as derived via proj.
In such practical application scenarios, operations concerning all model versions that directly work on the multi-version model may therefore also perform better regarding execution time than the corresponding operations on individual model versions, as has already been demonstrated for the case of pattern matching in <cit.>. Since pattern matching also constitutes an important task in model transformation via TGGs, a direct, joint translation of all model versions based on the multi-version model representation seems desirable.
Given a TGG, graph transformation rules for the joint translation of all source or target model versions encoded in a multi-version model can be derived from the regular translation rules in a straightforward manner. In the following, we will discuss the derivation for forward translation. Rules for the backward case can be derived analogously.
First, the adapted multi-version type graph for the TGG's merged source, correspondence and target type graph is created via the translation procedure described in <cit.>. However, edges between the correspondence and source or target type graph are simply translated as edges rather than nodes. The resulting adapted type graph TG_mv for multi-version models is also extended by two additional edges, ucv_v and udv_v, for each node v in the source domain of the merged type graph. Source and target of these edges are given by s^TG_mv(ucv_v) = s^TG_mv(udv_v) = v and t^TG_mv(ucv_v) = t^TG_mv(udv_v) = version, where version is the dedicated version node.
Analogously to the bookkeeping edges in the original type graph, these edges will be used to encode in which versions an element represented by a node v_mv with type v has already been translated. We therefore define the set of versions u(v_mv) in which v_mv has not been translated yet analogously to the set of versions p(v_mv) in which v_mv is present <cit.>, except that ucv_v and udv_v replace cv_v and dv_v in the definition.
Then, for each forward rule γ = L ← K → R, a corresponding multi-version forward rule is created via a procedure adapt, with adapt(γ) = trans'(L) ←^l_mv trans'(K) →^r_mv trans'(R). The vertex morphism of l_mv is given by l_mv^V = origin^-1 ∘ l ∘ origin, and the edge morphism l_mv^E is given by s ∘ origin^-1 ∘ l^E ∘ origin ∘ s^-1 for edges representing source relationships and by t ∘ origin^-1 ∘ l^E ∘ origin ∘ t^-1 for edges representing target relationships. r_mv is constructed analogously.
The trans' procedure is a minor adaptation of the trans procedure in <cit.>, which ignores the bookkeeping node and bookkeeping edges, and translates correspondence edges to edges rather than nodes, but otherwise works analogously. The bookkeeping mechanism is translated into the additional constraint P ≠∅ over trans'(L), where P = (⋂_v_mv∈ V^trans'(L) p(v_mv) ∩⋂_v_mv∈ origin^-1(L^T) u(v_mv)) ∖⋃_v_mv∈ V^trans'(L)∖ origin^-1(L^T) u(v_mv).
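A possible way to evaluate the constraint P ≠ ∅ for a concrete match is sketched below; it assumes that the version sets p and u have already been computed for every matched node, which abstracts away the underlying path queries over the version graph.

# Sketch of the version-set constraint of a multi-version forward rule match:
# P contains the versions in which every matched element is present, every element
# to be translated (origin^-1(L^T)) is still untranslated, and no other matched
# element is untranslated.
def applicable_versions(p_sets, u_sets, newly_translated):
    """p_sets, u_sets: dicts mapping matched nodes to sets of versions."""
    nodes = set(p_sets)
    P = set.intersection(*(p_sets[v] for v in nodes)) if nodes else set()
    if newly_translated:
        P &= set.intersection(*(u_sets[v] for v in newly_translated))
    for v in nodes - set(newly_translated):
        P -= u_sets[v]
    return P            # the adapted rule is applicable for this match iff P is non-empty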
The application of the adapted rule additionally creates outgoing cv and dv edges for all nodes v^C_mv ∈ V^trans'(R) ∖ (origin^-1 ∘ r ∘ origin)(trans'(K)) to realize the assignment p(v^C_mv) := P. Furthermore, for each v_mv ∈ origin^-1(r(l^-1(L^T))), the application also adds and deletes outgoing ucv and udv edges to realize the modification u(v_mv) := u(v_mv) ∖ P. Note that, since the computation of the p and u sets requires considering paths of arbitrary length, these computations cannot technically be defined as part of the graph transformation but have to be realized externally.
For a set of forward rules Γ, adapt(Γ) = {adapt(γ) | γ∈Γ} denotes the corresponding set of multi-version forward rules.
§ EXECUTION OF MULTI-VERSION TRANSFORMATIONS (DETAILED)
The forward transformation of all model versions in a multi-version model MVM according to a TGG can jointly be performed via the TGG's set of multi-version forward rules.
In a first step, all ucv and udv edges in MVM are removed. Then, for each edge e_cv ∈ E^MVM with type(e_cv) = cv_x, an edge e_ucv with type(e_ucv) = ucv_x, s^MVM(e_ucv) = s^MVM(e_cv), and t^MVM(e_ucv) = t^MVM(e_cv) is created. For all dv edges, corresponding udv edges are created analogously. Thus, after the creation of the ucv and udv edges, it holds that ∀ v_mv ∈ V^MVM: u(v_mv) = p(v_mv).
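This preparation step can be pictured as follows; edges are represented as plain (type, source, target) tuples, which is an illustrative simplification of the actual graph model with bookkeeping.

# Illustrative preparation: drop stale ucv/udv edges, then mirror every cv/dv edge
# as a ucv/udv edge, so that afterwards u(v) = p(v) holds for every node.
def prepare_multi_version_bookkeeping(edges):
    prepared = [e for e in edges if not e[0].startswith(("ucv", "udv"))]
    mirrored = [("u" + etype, src, tgt)
                for (etype, src, tgt) in prepared
                if etype.startswith(("cv", "dv"))]
    return prepared + mirrored

edges = [("cv_Field", "f1", "v1"), ("dv_Field", "f1", "v3")]
print(prepare_multi_version_bookkeeping(edges))
# [('cv_Field', 'f1', 'v1'), ('dv_Field', 'f1', 'v3'),
#  ('ucv_Field', 'f1', 'v1'), ('udv_Field', 'f1', 'v3')]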
Subsequently, the simultaneous transformation of all model versions encoded in MVM is performed similarly to the regular transformation of a single model version via the TGG. More specifically, the adapted forward rules of the TGG are applied to MVM until no such rule is applicable anymore.
In the following, we will argue that this transformation approach is correct in the sense that it yields the same result as the transformation of the individual model versions via regular forward rules. Therefore, we first extend the projection operation proj from <cit.> to a bookkeeping-sensitive variant.
(Bookkeeping-sensitive Projection)
For a multi-version model MVM with version graph V and model version M_t with corresponding m_t ∈ V^V, the bookkeeping-sensitive projection operation works similarly to the regular projection operation proj, except that it also adds a bookkeeping node and, for every v ∈ V^MVM, a bookkeeping edge to the element origin(v) iff M_t ∈ u(v). We also denote the result of the bookkeeping-sensitive projection operation by MVM[t] = proj^M(MVM, t).
We also define two sets that represent the bookkeeping during the transformation process.
(Bookkeeping Set)
For a model M, we denote the set of translated elements (vertices and edges) by B(M) = {x ∈ M | ∄ b ∈ E'^M: t'^M(b) = x}, with E'^M the set of bookkeeping edges in M and t'^M the target function for bookkeeping edges. We also call B(M) the bookkeeping set of M.
(Projection Bookkeeping Set)
For a multi-version model MVM and version t ∈ V^V, with V the version graph, we denote the set of already handled elements (vertices and edges) in MVM[t] by B_mv(MVM[t]) = {x ∈ MVM[t] | t ∉ u(proj^-1(x))}. We also call B_mv(MVM[t]) the projection bookkeeping set of MVM[t].
The following theorem then states that, at the start of the transformation process via adapted forward rules, the prepared multi-version model correctly encodes the starting situation for the translation of the individual model versions.
Given a multi-version model MVM encoding a version history with model versions M_1, M_2, ..., M_n such that ∀ v_mv ∈ V^MVM: u(v_mv) = p(v_mv), it holds that
∀ t ∈{1, 2, ..., n}: MVM[t] = init_F(M_t)
up to isomorphism including bookkeeping, where init_F(M_t) denotes the graph with bookkeeping resulting from the preparation of M_t for the regular forward transformation, that is, the graph M_t with an added bookkeeping node and bookkeeping edges to all elements in M_t.
Follows directly from the fact that ∀ t ∈{1, 2, ..., n}: proj(MVM, t) = M_t, which has been shown in <cit.>, and the definition of the bookkeeping-sensitive projection operation.
By Theorem <ref>, we also get the following corollary:
Given a multi-version model MVM encoding a version history with model versions M_1, M_2, ..., M_n such that ∀ v_mv ∈ V^MVM: u(v_mv) = p(v_mv), it holds that
∀ t ∈{1, 2, ..., n}: B_mv(MVM[t]) = B(init_F(M_t))
up to isomorphism, where init_F(M_t) denotes the graph with bookkeeping resulting from the preparation of M_t for the regular forward transformation process, that is, the graph M_t with an added bookkeeping node and bookkeeping edges to all elements in M_t.
Follows directly from Theorem <ref> and the definition of bookkeeping set and projection bookkeeping set.
We now show that a multi-version rule is applicable to a multi-version model iff the corresponding regular rule is applicable to all model versions affected by the rule application.
A multi-version forward rule γ_mv = L_mv← K_mv→ R_mv is applicable to a multi-version model triplet SCT_mv with bookkeeping via match m, if and only if for all t ∈ P, the associated original forward rule γ = L ← K → R is applicable to SCT_mv[t] via match origin(m), with P = ⋂_v ∈ V^L_mv p(m(v)) ∩⋂_v ∈ V^L^T_mv u(m(v)).
For a version t, as has already been shown in <cit.>, the match m : L_mv → SCT_mv has a corresponding match origin(m) : L → SCT_mv[t] if and only if t ∈ ⋂_v ∈ V^L_mv p(m(v)). Furthermore, due to the definition of P and the construction of γ_mv, all elements in origin(m)(L^T) have an adjacent bookkeeping edge in SCT_mv[t] iff t ∈ ⋂_v ∈ V^L^T_mv u(m(v)). Similarly, all elements in origin(m)(L ∖ L^T) have no adjacent bookkeeping edge in SCT_mv[t] iff t ∉ ⋃_v ∈ V^L_mv ∖ L^T_mv u(m(v)). Since γ and γ_mv delete no vertices, the dangling condition is trivially satisfied for γ and the match origin(m). γ_mv is hence applicable to SCT_mv via m for a version t ∈ P iff γ is applicable to SCT_mv[t] via origin(m).
We can now show the equivalence of a single multi-version rule application to a multi-version model to the application of the corresponding regular rule to all affected model versions.
For an application SCT_mv→^γ_mv_m SCT_mv' of a multi-version forward rule γ_mv = L_mv← K_mv→ R_mv with original forward rule γ = L ← K → R to a multi-version model triplet SCT_mv with bookkeeping and version graph V via match m, it holds up to isomorphism including bookkeeping that ∀ t ∈ P: SCT_mv'[t] = SCT', with the corresponding application SCT_mv[t] →^γ_origin(m) SCT', and ∀ t ∈ V^V ∖ P: SCT_mv'[t] = SCT_mv[t], where P = ⋂_v ∈ V^L_mv p(m(v)) ∩⋂_v ∈ V^L^T_mv u(m(v)).
Disregarding bookkeeping edges, all forward rules and thus also the adapted forward rules are productions. Due to the construction of the adapted forward rules, all elements created by the rule's application are only mv-present in SCT_mv' for the versions in P. Therefore, for all remaining versions, SCT_mv[t] contains exactly the same elements as SCT_mv'[t]. An isomorphism iso: SCT_mv[t] → SCT_mv'[t] is hence trivially given by the identity in this case. Since the application of γ_mv only changes the projection bookkeeping sets for versions in P, B_mv(SCT_mv'[t]) = B(SCT_mv[t]) with isomorphism iso.
It thus holds up to isomorphism that ∀ t ∈ V^V ∖ P: SCT_mv'[t] = SCT_mv[t] ∧ B_mv(SCT_mv'[t]) = B(SCT_mv[t]).
The application of γ_mv to SCT_mv yields a comatch n : R_mv→ SCT_mv' and the associated application of γ to SCT_mv[t] similarly yields a comatch n' : R → SCT' for any t ∈ P.
An isomorphism iso: SCT_mv'[t] → SCT' can then be constructed as follows: Since γ_mv is a production, SCT_mv is a subgraph of SCT_mv' and hence SCT_mv[t] is also a subgraph of SCT_mv'[t] except for bookkeeping. Since γ is a production, SCT_mv[t] is also a subgraph of SCT'. Isomorphic mappings for SCT_mv[t] between SCT_mv'[t] and SCT' are thus simply given by the identity. This leaves only the elements in n(R_mv∖ L_mv) and the elements in n'(R ∖ L) unmapped. Due to the construction of γ_mv being unique up to isomorphism, n and n' being monomorphisms, and trans and origin being bijections, the remaining isomorphic mappings are given by n' ∘ trans ∘ n^-1∘ origin. Note that for elements in n(L_mv), the definition of iso via identity and n' ∘ trans ∘ n^-1∘ origin is redundant but compatible.
Due to the definition of bookkeeping-sensitive projection, bookkeeping set, and projection bookkeeping set, it holds that B_mv(SCT_mv[t]) = B(SCT_mv[t]). Compared to B_mv(SCT_mv[t]), the application of γ_mv only changes the projection bookkeeping set B_mv(SCT_mv'[t]) by adding the elements in trans(m(L^T_mv)). The modification to B_mv(SCT_mv'[t]) hence corresponds to the modification of the bookkeeping set B(SCT') by the application of γ via trans(m) for the isomorphism iso due to the construction of γ_mv.
It thus holds that ∀ t ∈ P: SCT_mv'[t] = SCT' ∧ B_mv(SCT_mv'[t]) = B(SCT').
Based on Theorem <ref> for individual rule applications, we get the following corollary for sequences of rule applications:
For a TGG with associated set of forward rules Γ and multi-version forward rules Γ_mv and a multi-version model triplet SCT_mv with bookkeeping and version graph V, there is a sequence of rule applications SCT_mv→^Γ_mv SCT_mv' if and only if for all t ∈ V^V, there is a sequence of rule applications SCT_mv[t] →^Γ SCT' with SCT_mv'[t] = SCT' up to isomorphism including bookkeeping.
We prove the corollary by induction over the length of the multi-version rule application sequence.
For the base case of application sequences of length 0, the identity morphism and empty application sequences trivially satisfy the corollary.
Assume as induction hypothesis that there is a sequence of rule applications SCT_mv →^Γ_mv SCT_mv' if and only if for all t ∈ V^V, there is a sequence of rule applications SCT_mv[t] →^Γ SCT' with SCT_mv'[t] = SCT' ∧ B_mv(SCT_mv'[t]) = B(SCT'). Using Theorem <ref>, we show that there is an extended multi-version sequence SCT_mv →^Γ_mv SCT_mv' →^γ_mv_m SCT_mv” if and only if for all t ∈ V^V, there is a sequence of regular rule applications SCT_mv[t] →^Γ SCT” with SCT_mv”[t] = SCT” ∧ B_mv(SCT_mv”[t]) = B(SCT”).
For all t ∈ V^V ∖ P, where P = ⋂_v ∈ V^L_mv p(m(v)) ∩⋂_v ∈ V^L^T_mv u(m(v)), the corresponding regular rule application sequence SCT_mv[t] →^Γ SCT' and isomorphism iso : SCT_mv'[t] → SCT' are also valid for SCT_mv”[t] and satisfy the condition on bookkeeping sets, since SCT' = SCT_mv'[t] = SCT_mv”[t] (up to isomorphism).
In accordance with Theorem <ref>, there is an extended sequence SCT_mv→^Γ_mv SCT_mv' →^γ_mv_m SCT_mv” if and only if for all t ∈ P, the regular rule application sequence SCT_mv[t] →^Γ SCT_mv'[t] can be extended by a rule application SCT_mv'[t] →^γ_trans(m) SCT_mv”[t] that satisfies the condition on bookkeeping sets.
Thus, there is a sequence of rule applications SCT_mv→^Γ_mv SCT_mv' →^γ_mv_m SCT_mv” if and only if for all t ∈ V^V, there is a sequence of rule applications SCT_mv[t] →^Γ SCT” with SCT_mv”[t] = SCT”∧ B_mv(SCT_mv”[t]) = B(SCT”).
With the proof for the base case and the induction step, we have proven the correctness of the corollary.
Intuitively, the multi-version forward rules perform a simultaneous transformation of multiple model versions encoded in SCT_mv. The application of a multi-version rule L_mv← K_mv→ R_mv corresponds to the application of the original rule to all model versions in P = ⋂_v ∈ V^L_mv p(m(v)) ∩⋂_v ∈ V^L^T_mv u(m(v)) and leaves other model versions unchanged. Thus, a multi-version rule application effectively extends the original rule application sequences for versions in P by the associated original rule application, whereas it represents the “skipping” of a step for versions not in P.
For a TGG with associated set of forward rules Γ and multi-version forward rules Γ_mv and a multi-version model triplet SCT_mv with bookkeeping and version graph V, there is a maximal sequence of rule applications SCT_mv→^Γ_mv SCT_mv' if and only if for all t ∈ V^V, there is a maximal sequence of regular rule applications SCT_mv[t] →^Γ SCT' such that SCT_mv'[t] = SCT' up to isomorphism including bookkeeping.
The existence of a sequence of original rule applications for a sequence of multi-version rule applications and all versions t ∈ V^V and vice-versa is given by Corollary <ref>. From Theorem <ref>, it follows directly that the multi-version sequence is maximal if and only if the regular sequences are maximal for all t ∈ V^V.
For a deterministic TGG, a correct translation of a source graph S is given by any maximal rule application sequence of forward rules that deletes all bookkeeping edges in the source model. Note that, because of the determinism criterion, either every maximal rule application sequence satisfies the bookkeeping criterion or none of them does.
Thus, for a deterministic TGG and by Theorem <ref> and Corollary <ref>, the results of jointly transforming the model versions using the TGG, that is, the result of repeated application of adapted transformation rules to a multi-version model prepared for multi-version translation until a fixpoint is reached, are equivalent to the results of repeated application of the original rules to the individual model versions prepared for translation.
We thereby have the correctness of the forward transformation using multi-version forward rules trans^F_mv, which applies multi-version forward rules to a multi-version model with bookkeeping until a fixpoint is reached.
For a correct version history Δ^M_{1,...,n} and a triple graph grammar with set of forward rules Γ, it holds up to isomorphism that
∀ t ∈{1, ..., n} : trans^F_mv(init_F(comb(Δ^M_{1,...,n})), adapt(Γ))[t] = trans^F(M_t, Γ) if trans^F(M_t, Γ) contains no bookkeeping edges.
Follows from Theorem <ref> and Corollary <ref>.
§ INCREMENTAL EXECUTION OF MULTI-VERSION SYNCHRONIZATIONS (DETAILED)
As TGGs naturally offer capabilities for incremental synchronization of single-version models <cit.>, the related concepts can be transferred to the multi-version case to enable direct incremental model synchronization when developing with multi-version models. We therefore consider a standard scenario where new versions of a model can be created, changed, and merged by developers and the associated modifications to the model's version history should correctly be propagated to a related target model of a TGG-based transformation. In the following, we discuss how TGGs can be used to react to the different kinds of modifications of multi-version models required in such a scenario.
Formally, the creation of a new version for single-version models corresponds to the introduction of a new model modification M_i ← K → M_n+1 into a version history Δ^M_{1,...,n} such that M_i and M_n+1 are isomorphic, requiring copying of the base version to retain the version history. The realization in the context of multi-version models via a procedure apply^v_mv only requires adding a new version node for M_n+1 along with an incoming suc edge from M_i.
The following theorem states that applying this procedure to a previously transformed multi-version model triplet already yields a triplet where all encoded versions of the source model are correctly transformed. Thus, no further synchronization effort is required.
For a correct version history Δ^M_{1,...,n}, an extended version history Δ^M_{1,...,n + 1} = Δ^M_{1,...,n}∪{M_i ← K → M_n+1} with i ∈{1,...,n} and (V^M_i, E^M_i, s^M_i, t^M_i) = (V^M_n + 1, E^M_n + 1, s^M_n + 1, t^M_n + 1), and a TGG with set of forward rules Γ, it holds up to isomorphism including bookkeeping that
∀ t ∈{1, ..., n + 1} : apply^v_mv(SCT_mv, M_i ← K → M_n+1)[t] = SCT_t',
with SCT_mv = trans^F_mv(init_F(comb(Δ^M_{1,...,n})), adapt(Γ)) and M_t →^Γ SCT_t' a maximal rule application sequence.
Since apply^v_mv only introduces a new version node for version n + 1 along with a single incoming suc edge from the version node for version i and does not make any further changes, it follows that ∀ t ∈{1, ..., n}: apply^v_mv(SCT_mv)[t] = SCT_mv[t]. It also follows that ∀ v ∈ V^apply^v_mv(SCT_mv): ((n + 1) ∈ p(v) ↔ i ∈ p(v)) ∧ ((n + 1) ∈ u(v) ↔ i ∈ u(v)) and hence apply^v_mv(SCT_mv)[n + 1] = SCT_mv[i]. Thus, because M_i and M_n+1 are isomorphic, the theorem holds.
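For illustration, the effect of apply^v_mv used above can be sketched as follows; the dictionary representation of the version graph is an assumption made purely for readability.

# Sketch of apply^v_mv: creating a new version M_{n+1} from base version M_i only
# adds a version node and one incoming suc edge; no model element is touched, so
# every existing projection (and thus every transformed version) stays valid.
def apply_version_creation(mvm, base_version, new_version):
    mvm["version_nodes"].add(new_version)
    mvm["suc_edges"].add((base_version, new_version))
    return mvm

mvm = {"version_nodes": {"v1", "v2"}, "suc_edges": {("v1", "v2")}}
apply_version_creation(mvm, "v2", "v3")     # v3 starts as an exact copy of v2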
The creation of an element x in a new version M_i in the single-version case formally consists of simply adding the element to the set of nodes or edges of M_i and adjusting source and target functions if x is an edge. In a multi-version model representation, instead a new node v_mv of the corresponding adapted type is created and connected to the version node representing M_i via a cv edge. Furthermore, if x is an edge, the related source and target edges are created.
Such a modification to a source model can be synchronized to a target model via the procedures mark^F_c and trans^F_mv. mark^F_c connects v_mv to the version node representing M_i via a ucv edge. trans^F_mv applies the TGG's multi-version forward rules until a fixpoint is reached. The following theorem states that this yields a multi-version model triplet that correctly encodes the transformation results for all model versions, with apply^+ and apply_mv^+ the procedures for applying a creation modification to the original model respectively the multi-version model.
For a correct version history Δ^M_{1,...,n}, an element x, a version M_i such that ∄ M_i ← K → M_x ∈Δ^M_{1,...,n}, and a TGG with set of forward rules Γ, it holds up to isomorphism including bookkeeping that
∀ t ∈{1, ..., n}∖{i}: trans^F_mv(mark^F_c(SCT_mv', x, i), adapt(Γ))[t] = SCT_t'
∧
trans^F_mv(mark^F_c(SCT_mv', x, i), adapt(Γ))[i] = SCT_i',
with SCT_mv = trans^F_mv(init_F(comb(Δ^M_{1,...,n})), adapt(Γ)), SCT_mv' = apply_mv^+(SCT_mv, x, i), and M_t →^Γ SCT_t' and apply^+(M_i, x) →^Γ SCT_i' maximal rule application sequences.
Since neither apply_mv^+ nor mark^F_c, and consequently also not trans^F_mv, impacts the result of the projection operation for any t ∈ {1, ..., n} ∖ {i}, the theorem holds for all t ∈ {1, ..., n} ∖ {i}. For version i, it follows from Theorem <ref> that there exists a rule application sequence apply^+(M_i, x) →^Γ mark^F_c(SCT'_mv, x, i)[i], since the projection contains a new, yet untranslated element corresponding to x and is otherwise unchanged. The correctness of the theorem hence follows from Corollary <ref>.
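A minimal sketch of the mark^F_c step, assuming the same tuple-based edge encoding as in the earlier sketches: the freshly created source element is marked as untranslated in version i, after which trans^F_mv re-applies the multi-version forward rules until a fixpoint is reached.

# Sketch of mark^F_c: the cv edge to version i was already added by apply_mv^+;
# mark^F_c only adds the corresponding ucv edge so that the new element counts as
# untranslated in version i and can be picked up by the multi-version forward rules.
def mark_forward_creation(mvm_edges, new_node, version_i):
    mvm_edges.add(("ucv", new_node, version_i))
    return mvm_edges

edges = {("cv", "f2", "v3")}
mark_forward_creation(edges, "f2", "v3")    # now contains ("cv", ...) and ("ucv", ...)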
Deletion of an element x from a new version M_i corresponds to the removal of the respective element from the set of nodes or edges of M_i and adjusting source and target functions if x is an edge. To update a related multi-version model accordingly, a dv edge from the node v_mv representing x to the version node m_i representing M_i is created.
The procedures mark^F_d and trans^F_mv synchronize such a modification to a source model to a corresponding target model. mark^F_d consists of two steps. First, if M_i ∈ u(v_mv), a udv edge between v_mv and m_i is created. Otherwise, any correspondence node c_mv adjacent to v_mv with M_i ∈ p(c_mv) as well as any attached target model nodes are connected to m_i via dv edges. Then, this adjustment is transitively propagated to the correspondence node's dependent correspondence nodes that fulfill the presence condition and their attached target model nodes. A correspondence node is dependent on another if the image of the match of the rule application that created the dependent correspondence node contained the required correspondence node. Second, all source model nodes connected to an affected correspondence node other than v_mv are connected to m_i via a ucv edge. Finally, multi-version forward rules are applied by trans^F_mv until a fixpoint is reached. This yields correct transformation results, as stated by the following theorem, with apply^- and apply_mv^- the procedures for applying a deletion modification to the original model respectively the multi-version model.
For a correct version history Δ^M_{1,...,n}, an element x, a version M_i such that ∄ M_i ← K → M_x ∈Δ^M_{1,...,n}, and a TGG with set of forward rules Γ, it holds up to isomorphism including bookkeeping that
∀ t ∈{1, ..., n}∖{i}: trans^F_mv(mark^F_d(SCT'_mv, x, i), adapt(Γ))[t] = SCT_t'
∧ trans^F_mv(mark^F_d(SCT'_mv, x, i), adapt(Γ))[i] = SCT_i',
with SCT_mv = trans^F_mv(init_F(comb(Δ^M_{1,...,n})), adapt(Γ)), SCT_mv' = apply_mv^-(SCT_mv, x, i), and M_t →^Γ SCT_t' and apply^-(M_i, x) →^Γ SCT_i' maximal rule application sequences.
Since neither apply_mv^- nor mark^F_d impacts the result of the projection operation for any t ∈ {1, ..., n} ∖ {i}, the theorem holds for all t ∈ {1, ..., n} ∖ {i}. For version i, mark^F_d(SCT'_mv, x, i)[i] no longer contains the element corresponding to x, any directly attached or dependent correspondence node, and any related target elements, effectively structurally undoing the rule applications that created these elements. Furthermore, in the projection, all source elements covered by a previously present correspondence node are again marked as untranslated. Since the projection is otherwise unchanged, it follows from Theorem <ref> that there exists a rule application sequence apply^-(M_i, x) →^Γ mark^F_d(SCT'_mv, x, i)[i]. The correctness of the theorem then follows from Corollary <ref>.
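The transitive propagation performed by mark^F_d can be pictured as follows; dependency information, attached target elements, and presence sets are assumed to be given as dictionaries, which abstracts from how they are obtained from the correspondence model.

# Sketch of the propagation step of mark^F_d: starting from the correspondence node
# attached to the deleted source element, dv edges to version i are added to all
# (transitively) dependent correspondence nodes still present in i and to their
# attached target elements, structurally undoing the related rule applications.
def propagate_deletion(start_corr, version_i, dependents, attached_targets,
                       present_in, dv_edges):
    """dependents: corr node -> corr nodes created by rule applications that used it."""
    stack = [start_corr]
    while stack:
        corr = stack.pop()
        if (corr, version_i) in dv_edges or version_i not in present_in.get(corr, set()):
            continue
        dv_edges.add((corr, version_i))
        for target in attached_targets.get(corr, ()):
            dv_edges.add((target, version_i))
        stack.extend(dependents.get(corr, ()))
    return dv_edges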
Essentially, the synchronization procedures for element creation and deletion thus realize a synchronization similar to the technique described in <cit.> in the context of multi-version models.
We consider a merge of two versions for single-version models to be formally represented by the introduction of two new model modifications M_i ← K → M_n+1 and M_j ← K → M_n+1 with M_n+1⊆ M_i ∪ M_j into a version history Δ^M_{1,...,n}. In a multi-version model encoding, this modification corresponds to the creation of a new version node for M_n+1, m_n+1, along with incoming suc edges from the version nodes for M_i and M_j. Furthermore, dv edges to m_n+1 from all nodes representing elements in M_i and M_j that are not in M_n+1 are created.
Synchronization of such a change to a source version history is achieved via the procedures mark^F_m and trans^F_mv. mark^F_m performs three steps: First, similarly to how deletion modifications for new versions are handled, any correspondence node c_mv with M_n+1∈ p(c_mv) adjacent to a node v_mv representing a deleted element, as well as any attached target model nodes are connected to m_n+1 via dv edges and this adjustment is transitively propagated to the correspondence node's dependent correspondence nodes that fulfill the presence condition and their attached target model nodes. For any such v_mv with M_n+1∈ u(v_mv), a udv edge between v_mv and m_n+1 is added. Second, for each source model element that would be connected to more than one correspondence node in the projection to n + 1 (one of which has to be present in the projection to i and the other in the projection to j), a dv edge to m_n+1 is added to the correspondence node present in the projection to j and this adjustment is again propagated to dependent correspondence nodes and associated target model elements. Third, all source model nodes v'_mv connected to any affected correspondence node with m_n+1∈ p(v'_mv) are connected to m_n+1 via a ucv edge. trans^F_mv then applies the TGG's multi-version forward rules until a fixpoint is reached. This yields a correct multi-version encoding of the transformation results for all model versions, with apply^m_mv the procedure for applying the merge modification to the multi-version model.
For a correct version history Δ^M_{1,...,n}, an extended version history Δ^M_{1,...,n + 1} = Δ^M_{1,...,n}∪{M_i ← K_i → M_n+1, M_j ← K_j → M_n+1} with i, j ∈{1,...,n}, i ≠ j, and M_n+1⊆ M_i ∪ M_j, and a TGG with set of forward rules Γ, it holds up to isomorphism including bookkeeping that
∀ t ∈{1, ..., n + 1} : trans^F_mv(mark^F_m(SCT'_mv, M_i ← K_i → M_n+1, M_j ← K_j → M_n+1), adapt(Γ))[t] = trans^F(M_t, Γ),
with SCT_mv = trans^F_mv(init_F(comb(Δ^M_{1,...,n})), adapt(Γ)), SCT'_mv = apply^m_mv(SCT_mv, M_i ← K_i → M_n+1, M_j ← K_j → M_n+1), and M_t →^Γ SCT_t' a maximal rule application sequence.
Since apply^m_mv only introduces a new version node for version n + 1 along with two incoming suc edges from the version nodes for versions i and j and hence neither apply^m_mv nor mark^F_m and consequently also not trans^F_mv impact the result of the projection operation for any t' ∈{1, ..., n}, it follows that ∀ t ∈{1, ..., n} : trans^F_mv(mark^F_m(SCT'_mv, M_i ← K_i → M_n+1, M_j ← K_j → M_n+1), adapt(Γ))[t] = trans^F(M_t, Γ).
Furthermore, since dv edges to the version node corresponding to M_n+1 are transitively added to any attached correspondence node that might otherwise be present in mark^F_m(SCT'_mv, M_i ← K_i → M_n+1, M_j ← K_j → M_n+1)[n+1], dependent correspondence nodes, and associated target elements, if any of its associated source elements is not present anymore, it essentially structurally undoes the related forward rule applications that created these elements. It thus follows that for any correspondence node and target element remaining in mark^F_m(SCT'_mv, M_i ← K_i → M_n+1, M_j ← K_j → M_n+1)[n + 1], there must exist a sequence of applications of rules from Γ that creates the correspondence node and its target elements.
Furthermore, since dv edges are similarly added for redundant correspondence nodes, it follows that no two correspondence nodes and associated target elements in mark^F_m(SCT'_mv, M_i ← K_i → M_n+1, M_j ← K_j → M_n+1)[n + 1] can be created by separate rule applications that require the deletion of new bookkeeping edges for the same source element. Since rules from Γ are productions except for the bookkeeping mechanism, the application of one rule can only prevent the application of another rule via the bookkeeping mechanism. Thus, because of the adjustment of the set u(v) for any impacted node v ∈ V^SCT_mv, there must exist a sequence of rule applications M_n+1→^Γ apply^m_mv(SCT_mv, M_i ← K_i → M_n+1, M_j ← K_j → M_n+1)[n + 1].
From corollary <ref> then follows the correctness of the theorem.
From the correctness of the synchronization procedures for the considered types of modifications to a version history, it follows that a sequence of such modifications can be handled via a procedure sync^F_mv, which performs the appropriate synchronization for each modification in the sequence in order; here, apply denotes the procedure for applying a sequence of modifications to the original model one after another.
For a correct version history Δ^M_{1,...,n}, a sequence of version creation, element creation, element deletion, and merge modifications S_δ creating new versions M_n+1, ..., M_n + m, and a TGG with set of forward rules Γ, it holds up to isomorphism that
∀ t ∈{1, ..., n + m} : sync^F_mv(SCT_mv, S_δ, adapt(Γ))[t] = trans^F(M_t, Γ) if trans^F(M_t, Γ) contains no bookkeeping edges,
with SCT_mv = trans^F_mv(init_F(comb(Δ^M_{1,...,n})), adapt(Γ)) and Δ^M_{1, ..., n + m} = apply(Δ^M_{1,...,n}, S_δ).
Follows from the correctness of Theorems <ref>-<ref> and the determinism property of TGGs.
§ EVALUATION
In order to evaluate our approach empirically with respect to execution time performance and memory consumption, we have realized the presented concepts in the MoTE2 tool <cit.> for TGG-based model transformation, which is implemented in the context of the Java-based Eclipse Modeling Framework <cit.> and has been shown to be efficient compared to other model transformation tools <cit.>.
As an application scenario, we consider the transformation of Java abstract syntax graphs to class diagrams. We have therefore modeled this transformation as a TGG with MoTE2 and use the original and our adapted implementation to automatically derive forward rules respectively multi-version forward rules according to Section <ref>, as well as the single-version forward synchronization rules employed by MoTE2.
To obtain realistic source models, we have extracted the version history of one small Java project (rete, about 60 versions) and one larger open source Java project (henshin <cit.>, about 2600 versions) from their respective Git <cit.> repositories and have constructed the corresponding history of the related abstract syntax graphs using the MoDisco tool <cit.>. As input for the solution presented in Sections <ref>, <ref>, and <ref>, we have consolidated both version histories into multi-version models using a mapping based on hierarchy and naming.[Our implementation and datasets are available at <cit.> and <cit.>, respectively.] Based on this, we experiment with two application scenarios for model transformation in the context of models with multiple versions.
§.§ Batch Transformation Scenario
First, we consider a batch transformation scenario, where all versions of the Java abstract syntax graph are to be translated to their corresponding class diagram. This scenario emulates a situation where a new model along with a transformation to derive it from an existing model is introduced to an ongoing development process.
We therefore run the following model transformations for both repositories and measure the overall execution time and memory consumption of the involved models[All experiments were performed on a Linux SMP Debian 4.19.67-2 machine with Intel Xeon E5-2630 CPU (2.3 GHz clock rate) and 386 GB system memory running OpenJDK version 11.0.6. Reported execution time and memory consumption measurements correspond to the mean result of 10 runs of the respective experiment. Memory consumption measurements were obtained using the Java Runtime class.]:
* SVM B: individual forward transformation of all model versions in the version history using the original MoTE2 implementation
* MVM B: joint forward transformation of all model versions in the version history using a multi-version model encoding and our implementation of the technique presented in Sections <ref> and <ref>
Note that the SVM B strategy would require initial projection operations and a final combination of transformation results to work within the framework of multi-version models. However, for fairness of comparison of the transformation, we do not consider these additional operations in our evaluation. We however do consider initialization effort for each model version required by the MoTE2 engine.
To investigate scalability, we also execute the transformations for subsets of the version history, that is, only for a subset of all model versions in the case of SVM B and for a multi-version model encoding only a subset of all versions in the case of MVM B. Figure <ref> shows the execution times of the transformations using the two strategies for subsets of different size and how the measured execution times are composed of time required to initialize the MoTE2 engine (init), indexing the models for pattern matching by the employed pattern matching tool (index), and actual execution of forward rules (execute). Except for the two smaller subsets of the version history of the smaller repository, the transformation based on multi-version models requires less time than the transformation of the individual model versions using the original MoTE2 tool, with the most pronounced improvement for the full history of the larger repository.
The improvement in efficiency and scalability for larger version histories can be explained by the fact that many elements in the abstract syntax graphs of the repositories are shared between many versions. SVM B has to perform a separate transformation, including separate pattern matching and indexing, for each model version. In contrast, MVM B only performs a transformation including pattern matching over a single multi-version model, along with efficient search operations over the version graph, and only requires indexing once. For the larger multi-version models, this effect outweighs the higher initialization effort for the MVM variant and the increase in pattern matching and indexing effort resulting from the technically less efficient encoding of edges and attributes as nodes in the multi-version model.
To investigate the memory consumption of the different encodings of the model triplets produced by the transformation, we stored the produced triplets and loaded them into memory in a second experiment. The resulting memory measurements can be found in Figure <ref>, which shows that the multi-version encoding was more compact than the corresponding naïve encoding for all considered subsets of the larger repository's history, while it required more memory for the three smaller subsets of the smaller repository's version history. The overhead can be explained by the less efficient encoding of edges and attributes as nodes in a multi-version model, which for larger histories is however outweighed by the reduction in redundancy. With a memory consumption of about 200 MB and 400 MB, the multi-version model encoding in both cases is compact enough to fit into a regular PC's main memory.
Notably, memory consumption for the three smaller subsets of the larger repository's version history is very similar. This is due to the fact that many of the models for versions between version 600 and 2000 are actually almost empty as the repository does not contain a project with the given name. This is also the case for the first 15 versions in the smaller repository, which causes memory consumption for the smallest version history subset to fall below the accuracy threshold of the methods used to measure it. However, due to a quirk of the MoTE2 engine, which performs pattern matching over the input TGG and the engine itself, even these empty models cause some indexing and initialization effort, which explains the execution time measurements in Figure <ref>.
The measurements overall indicate that at least in our current implementation, encoding edges and attributes as nodes in a multi-version model causes substantial overhead regarding both execution time and memory consumption. However, the results also confirm that joint transformation of all encoded versions can significantly improve performance compared to the separate transformation of each individual model version for large histories with many shared elements between model versions, more than compensating for the overhead caused.
§.§ Incremental Synchronization Scenario
As a second scenario, we consider the incremental synchronization of changes between pairs of versions of the abstract syntax graphs. This scenario aims to emulate an ongoing development process where new versions of the abstract syntax graph are iteratively produced by user edits and the corresponding version history of the corresponding class diagram should continuously be updated to reflect the newly introduced versions.
We therefore incrementally rebuild the final models in the repositories' histories, starting with the initial version and iteratively applying the changes of the successor versions according to a topological sorting of the version DAG. In case of branching development, we split execution and consider each branch separately until the branches are merged again. After an initial batch transformation of the root version, we perform an incremental synchronization after the integration of each successor version and measure the related execution time. For synchronization, we consider the following techniques:
* SVM I: forward synchronization of changes modifying a direct predecessor version into each version in the history via the regular single-version synchronization by MoTE2
* MVM I: forward synchronization of changes resulting from the iterative integration of each version in the history into a multi-version model encoding via the multi-version synchronization approach from Section <ref>
Note that we assume that the version history of the involved models is to be preserved, that is, the introduction of a new version must not override the base version. For the synchronization working with single-version models, this means that in order to create a new version of source and target model, the base version of both models along with the connecting correspondence model has to be copied before changes can be applied and the synchronization can be executed. We therefore consider the time required for this copying effort in the execution time measurements for SVM I. In contrast, this is not required for the MVM I strategy, which natively preserves the full version history by only allowing the types of changes described in Section <ref>.
The aggregate execution times for synchronizing all versions up to the n-th version in the topological sorting of the version graph are plotted in Figure <ref>. For the smaller repository, SVM I outperforms MVM I by about factor 1.5 even after considering the required copying. For the larger repository, the execution time of MVM I is comparable to the execution time of SVM I including copying.
The plots demonstrate that SVM I requires some computational effort even for versions without changes due to the copying of the base version. In principle, MVM I only has to perform computations for actual changes of the source model, but requires substantially more time to process such a change than SVM I due to the less efficient encoding of nodes and edges and the fact that in some cases, synchronization of a change requires traversal of larger parts of the version DAG, potentially causing effort that grows with the version DAG's size. To reduce the required effort for the latter, we employ a simple indexing structure for the version DAG, the updating of which theoretically causes effort proportional to the version DAG's size whenever a new version is introduced. However, the required execution time was below the granularity threshold of milliseconds in our experiments and is hence not visible in the plots.
Lastly, even without a significant gain in performance for incremental synchronization, the usage of the multi-version model encoding of the version histories in both the batch and incremental scenario means that analyses that are specific to this representation or more efficient there, such as those presented in <cit.>, can directly be executed over the transformation results without requiring a change of encoding.
§.§ Threats to Validity
Threats to the internal validity of our experimental results include unexpected behavior of the Java virtual machine such as garbage collection. To address this threat, we have performed multiple runs of all experiments and report the mean measurement result, with the standard deviation of overall execution time and memory consumption always below 10% of the mean value. To minimize the impact of the concrete implementation, we have realized our solution in the framework of the transformation tool we use for comparison and thereby largely use the same execution mechanism.
Measuring memory consumption of Java programs is known to be a challenge. While we attempted to improve the reliability of these measurements by performing multiple runs of each experiment and suggesting to the JVM to perform garbage collection before every measurement, the reported results are not necessarily accurate but can only serve as an indicator.
To mitigate threats to external validity, we use real-world models as the source models of the transformation. However, we remark that our results are not necessarily generalizable to different examples or application domains and make no quantitative claims regarding the performance of our approach.
§ RELATED WORK
The general problem of model versioning has already been studied extensively, both formally <cit.> and in the form of concrete tool implementations <cit.>. Several solutions employ a unified representation of a model's version history similar to multi-version models <cit.>. However, due to the problem definition focusing on the management of different versions of a single model, model transformation based on a unified encoding is out of scope for these approaches.
There is also a significant body of previous work on synchronization of concurrently modified pairs of models using triple graph grammars <cit.>. The focus of these works is the derivation of compatible versions of source and target model that respect the modifications to either of them. This paper aims to make a step in an orthogonal direction, namely towards allowing living with inconsistencies by enabling developers to temporarily work with multiple modified, possibly conflicting versions of source and target model.
Furthermore, there exist several approaches for optimizing the performance of incremental model synchronization with TGGs, for instance <cit.> and <cit.>. Since the underlying concepts are mostly orthogonal to the ideas related to multi-version models, an integration into the approach proposed in this paper is an interesting direction for future work.
In the context of software product lines, so-called 150% models are employed to encode different configurations of a software system <cit.>. In this context, Greiner and Westfechtel present an approach for propagating so-called variability annotations along trace links created by model transformations <cit.>, explicitly considering the case of transformations implemented via TGGs. However, not integrating this propagation with the transformation process would mean that certain cases that are covered by our approach could not be handled, for instance if a model element would be translated differently in different model versions based on its context.
The joint execution of queries over multiple versions of an evolving model has been considered for both the case with <cit.> and without <cit.> parallel, branching development. This paper builds on these results, but instead of focusing on pure queries without side-effects considers the case of writing operations in the form of model transformations.
§ CONCLUSION
In this paper, we have presented a step in the direction of model transformation and synchronization for multi-version models in the form of an adaptation of the well-known triple graph grammar formalism that enables the joint transformation of all versions encoded in a multi-version model as well as synchronization of subsequent updates. The presented approach is correct with respect to the translation semantics of deterministic triple graph grammars for individual model versions, that is, it produces equivalent results. Initial experiments for evaluating the efficiency of our approach demonstrate that our technique can improve performance of the transformation compared to a naïve realization, but can also cause significant computational overhead especially in the synchronization case, in a realistic application scenario.
In future work, we want to explore the possibility of improving the efficiency of multi-version model transformations via incremental pattern matching for multi-version models. Another interesting direction is the integration of advanced application conditions for the specification of triple graph grammar rules, such as nested graph conditions, into our approach. Finally, a more extensive evaluation can be conducted to further study the performance of the presented technique.
§ ACKNOWLEDGEMENTS
This work was developed mainly in the course of the project modular and incremental Global Model Management (project number 336677879) funded by the DFG.
CFSum: A Coarse-to-Fine Contribution Network for Multimodal Summarization
Min Xiao, Junnan Zhu, Haitao Lin, Yu Zhou, Chengqing Zong
http://arxiv.org/abs/2307.02716v1
Multimodal summarization usually suffers from the problem that the contribution of the visual modality is unclear. Existing multimodal summarization approaches focus on designing the fusion methods of different modalities, while ignoring the adaptive conditions under which visual modalities are useful. Therefore, we propose a novel Coarse-to-Fine contribution network for multimodal Summarization (CFSum) to consider different contributions of images for summarization. First, to eliminate the interference of useless images, we propose a pre-filter module to abandon useless images. Second, to make accurate use of useful images, we propose two levels of visual complement modules, word level and phrase level. Specifically, image contributions are calculated and are adopted to guide the attention of both textual and visual modalities. Experimental results have shown that CFSum significantly outperforms multiple strong baselines on the standard benchmark. Furthermore, the analysis verifies that useful images can even help generate non-visual words which are implicitly represented in the image[Code is available at <https://github.com/xiaomin418/CFSum>].
§ INTRODUCTION
With the information explosion, the internet is flooded with various kinds of multimodal information. Multimodal summarization (MMS) can help generate richer and more comprehensive summaries than unimodal summarization by exploiting the extra visual information. Existing studies on multimodal summarization include multimodal sentence summarization <cit.>, multimodal summarization with multimodal output <cit.>, multimodal meeting summarization <cit.>, and so on. In this paper, we focus on the task of generating a text summary from the input of a text and an image. It has been shown that integrating multimodal data can help improve the quality of the summary <cit.>.
However, it is unclear whether the visual modality can indeed benefit the process of summarization. Thus, we conduct an experiment to explore the influence of masking images on the summary. As shown in Figure <ref>, the solid lines show the performance of summaries generated when masking portions of the images, and the dashed lines indicate the original performance. It can be observed that the dashed and the solid lines roughly coincide, which indicates that masking images does not affect the performance of the multimodal model. Some masking rates can even raise the ROUGE-1 value of the summary. This indicates that existing models do not make effective use of image information for the summary.
Existing approaches have two major problems. First, existing studies focus on multimodal fusion, such as concatenate, attention-based, and gate-based fusion (referring to sec:related). However, they ignore the adaptive conditions under which visual modalities are helpful. Thus they are poor at extracting useful visual information. Furthermore, all fusion methods do not explicitly model the image complementarity for the summary. Especially for the attention-based method, the inter-attention is not accurate enough, which leads to inefficient use of the image. Second, in many samples, the image may introduce noise, while existing fusion methods assume that all images are helpful for the summary without considering the interference of useless images. As analyzed above, we believe that: 1) It is essential to eliminate the influence of the useless image. 2) The contributions of the image to the summary need to be clarified. In particular, it is necessary to consider the complementarity of visual information relative to textual information.
Although we have identified the lack of explicit image contributions, it is difficult to disentangle the various roles of images within a single fusion layer. Thus, in this work, we propose a novel Coarse-to-Fine contribution network for multimodal Summarization (CFSum) to extract the role of the image at different stages. First, we apply a pre-filter module to abandon useless images; it coarsely identifies the images that are helpful for the summary. Specifically, the consistency of content between image and text is calculated, and if the consistency is low, the image is masked in the subsequent encoding. Second, when the image is coarsely useful, the complement module is employed to finely guide the fusion of the text with the image. To consider image contributions for text of different granularities, the complement module consists of two levels, word level and phrase level. For the word level complement module, to obtain the image complementarity over the text, the difference between the bi-modal and the uni-modal input is measured through a classification task. Then we add a loss to guide the attention between words and the image. For the phrase level complement module, similar to the word level, the image complementarity on phrases is acquired to guide the attention between phrases and the image. Through these modules, the model can acquire more explicit image contributions and provide better multimodal encoding for summary generation.
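To make the idea of the coarse pre-filter concrete, a simplified sketch is given below; the pooled text and image embeddings, the cosine-similarity consistency measure, and the threshold value are illustrative assumptions, not the exact formulation used in CFSum.

# Hedged sketch of the coarse pre-filter: if the image-text consistency (here a
# plain cosine similarity of pooled embeddings) falls below a threshold, the image
# features are masked and do not take part in the subsequent multimodal encoding.
def prefilter_image(text_embedding, image_embedding, threshold=0.3):
    dot = sum(t * v for t, v in zip(text_embedding, image_embedding))
    norm_t = sum(t * t for t in text_embedding) ** 0.5
    norm_v = sum(v * v for v in image_embedding) ** 0.5
    consistency = dot / (norm_t * norm_v + 1e-8)
    if consistency < threshold:
        return [0.0] * len(image_embedding), False   # image judged useless: mask it
    return image_embedding, True                     # image kept for fusion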
Our contributions are as follows:
(1) We propose a Coarse-to-Fine contribution network for multimodal Summarization (CFSum) to model different contributions of images for summarization.
(2) We innovatively design a pre-filter module to coarsely reduce the interference of the useless images and develop two visual complement modules to finely obtain image complementarity over the summary.
(3) Experimental results show that our model outperforms strong baselines. Moreover, extensive analysis proves that a useful image even contributes to non-visual words that are implicitly represented in the image.
§ RELATED WORK
Multimodal Summarization Tasks.
In the field of multimodal summarization, there are usually three steps. First, different feature extractor modules are adopted to extract the features of the text and the image, respectively. Second, the different features are fused at the fusion layer. Finally, the fused context features are fed into the text decoder to generate a summary.
Existing studies focus on multimodal fusion. Specifically, the fusion methods can be grouped into concatenate, attention-based, and gate-based approaches. Concatenate fusion directly concatenates multimodal features into a fusion context <cit.>; it can fully exploit high-level features of different modalities, but there is a large gap between their high-dimensional spaces. Attention-based methods fuse all multimodal features with an attention mechanism <cit.>, which captures the correlations between each unit of text and image. Gate-based methods take text as the central modality <cit.> and exploit images to help focus on the core information <cit.>. In summary, (1) none of these fusion methods explicitly models the image complementarity for the summary, which leads to inefficient use of the image; (2) concatenate and attention-based methods cannot eliminate the influence of useless images in the fusion layer.
Cross-modal tasks.
Some studies have noted the contributions of modalities and explored cross-modal influence in other multimodal tasks. <cit.> propose loss modulation to explore the contribution of individual modalities and devise a modality filter to reduce modality noise, which considers consistency and complementarity between different modalities. <cit.> propose multi-task summarization: the method also selects the image that best matches the summary when generating a text summary, which guarantees a positive effect of images on the summary. <cit.> exploit ReLU-based cross-attention to align visual features with textual representations, abandoning low-value attention scores of unaligned visual features. Inspired by the above studies, we propose CFSum, which considers various image contributions for better encoding of the input text and generation of the final summary.
§ PROPOSED METHODS
§.§ Overview
In this section, we introduce the details of CFSum. Given a dataset consisting of n triplets (t_i, v_i,s_i)_i ∈ [1,n] with a text t_i, an image v_i, and a summary s_i, the multimodal summarization task aims at generating s_i based on t_i and v_i.
As depicted in Figure <ref>, CFSum takes bi-modal and uni-modal streams as input in parallel. It builds coarse and fine image contributions with three modules (sec:hierachical). First, the pre-filter module coarsely filters out the images that are inconsistent with the texts (sec:filtering). Second, two levels of visual complement modules, at the word level (sec:word-level) and the phrase level (sec:semantic-level), make accurate use of the useful images.
§.§ Coarse-to-Fine Structure
We build our model on the multimodal transformer UNITER <cit.> and a GRU <cit.> encoder-decoder architecture, and refer to this model as UniG. As shown in Figure <ref>(a), in order to evaluate the complementarity of the different modalities, the bi-modal and uni-modal inputs are processed in parallel by the same encoder. The two parallel streams capture the gain brought by the image. The summary is generated from the bi-modal encoding, while the uni-modal encoding assists in measuring the various contributions and guiding the bi-modal encoding.
Specifically, the multimodal encoder consists of L=12 multimodal transformer layers. We treat the L layers as a hierarchical structure and divide them into three parts, as shown in Figure <ref>(a). L_f, L_w, and L_p denote the starting layers of the pre-filter, the word-level complement, and the phrase-level complement modules, respectively. Existing studies assume that all images benefit summary generation or input text encoding, which results in damage from unnecessary images. The pre-filter module is utilized to eliminate the interference of misleading images in advance. Next, the word-level complement module is developed to model the gain of the image on the input words for the summary; this gain then guides the subsequent attention between words and the image. Finally, similar to the word level, the phrase-level complement module concentrates on phrases at higher layers. Each component is elaborated in the following sub-sections.
§.§ Pre-filter Module
The bi-modal and uni-modal features from the i^th layer are encoded as m^i∈ℝ^C × H, u^i∈ℝ^T × H, where i∈ [1,L], and C,T denote the lengths of bi-modal and uni-modal tokens. H denotes the hidden dimension. The bi-modal self-attention matrix in the i^th layer is A^i=(a_r,s^i) ∈ℝ^C × C.
The pre-filter module aims at filtering out images that are unnecessary for the summary. As shown in Figure <ref>(a), given the two encoded features m^L_f and u^L_f from the L_f^th layer, the goal of the module is to detect useless images and guide the self-attention of all subsequent layers. We believe that if the bi-modal feature has low consistency with the uni-modal feature, the image may introduce interferential information. Specifically, we first calculate the consistency Δ^C between the uni-modal feature u^L_f and the bi-modal feature m^L_f as follows:
pu = MeanPool(u^L_f),
pm = MeanPool(m^L_f),
Δ^C = Sign(cosine(pu, pm)-α)
We define the indicator function as:
I_r,s =
1, if position pair (r,s) involves the image, i.e., the text attending to the image, the image attending to the text, or the image attending to itself (Figure <ref>(a)),
0, otherwise.
Then we calculate the new subsequent self-attention na^i_r,s with:
na^i_r,s= a^i_r,s×(1-I_r,s) + a^i_r,s× I_r,s×Δ^C,
i∈ [L_f+1,L]
By correcting the attention matrix, an image whose content deviates strongly from the text is cropped out. In other words, inconsistent multimodal features degenerate into text-only features through this process. This simple method proves effective in our experiments.
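For concreteness, the masking logic of the pre-filter can be sketched in PyTorch-style Python as below. The tensor names and the interpretation of the Sign gate as a 0/1 switch (keep vs. mask the image-related attention) are assumptions of this sketch, not the released implementation.

import torch
import torch.nn.functional as F

def prefilter_attention(uni_feat, bi_feat, attn, T, alpha=0.65):
    # uni_feat: [T, H] uni-modal features from layer L_f
    # bi_feat:  [C, H] bi-modal features from layer L_f (first T rows are text tokens)
    # attn:     [C, C] self-attention matrix of a subsequent layer
    pu = uni_feat.mean(dim=0)                       # MeanPool(u^{L_f})
    pm = bi_feat.mean(dim=0)                        # MeanPool(m^{L_f})
    consistency = F.cosine_similarity(pu, pm, dim=0)
    delta_c = 1.0 if consistency >= alpha else 0.0  # Sign(cos - alpha), used here as a 0/1 gate

    C = bi_feat.size(0)
    # I[r, s] = 1 where the pair (r, s) involves the image:
    # text -> image, image -> text, or image -> image
    I = torch.zeros(C, C)
    I[:, T:] = 1.0
    I[T:, :] = 1.0
    # keep text-text attention untouched, gate the image-related attention by delta_c
    return attn * (1 - I) + attn * I * delta_c

When the consistency falls below α, the image-related attention is zeroed out, so the bi-modal stream degenerates into a text-only encoding, as described above.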
§.§ Word-level Complement
This section introduces a word-level complement module, considered as an auxiliary task during the training process. First, we measure the image gain on input words for the summary. Then the image gain is applied to guide the attention between words and the image (as shown in Figure <ref>(b)).
Image gain measurement. Intuitively, the text tokens should attend to an image that is helpful for the summary. In previous attention-based studies, the inter-modality correlation is modeled as softmax(QK^⊤/√(D))V, where Q, K, V are features projected from the bi-modal input. However, this does not explicitly model the image complementarity for the summary, which leads to inefficient use of the image.
Following the motivation above, we hope to calculate the image gain on the summary in information-theoretic terms. In other words, we want to measure whether generating the summary from the bi-modal feature m^L is more deterministic than generating it from the uni-modal feature u^L. Thus, we would like to calculate the image gain on the k-th word of the reference summary:
GI_k = Gain(s_k | m^L, s_k | u^L)
However, we intend to obtain GI_k before generating the summary S and encoding m^L, so that GI can in turn be beneficial for generating S and encoding m^L. To this end, we define a Copy Classification task Y to approximate the summary task S: for each input text token t_j, the target is to binary-classify whether it appears in the reference summary. If the token appears in the reference summary, it is labeled ŷ_j=1; otherwise, ŷ_j=0. Next, GI_j is given by:
GI_j = Gain(y_j | m^L_w, y_j | u^L_w)
where u^L_w and m^L_w denote the uni-modal and bi-modal features obtained at the L_w^th layer. Finally, we measure the gain that the image brings to correctly predicting whether a word appears in the summary as follows:
GI_j = Gain(y_j | m^L_w, y_j | u^L_w)
= log P(y_j=ŷ_j | m^L_w) - log P(y_j=ŷ_j | u^L_w)
Derivation details are given in Appendix <ref>. In addition, to ensure the correct gain direction, we add a binary cross-entropy loss to train the Copy Classification task Y:
ℒ_copyc = BCE(y_j, ŷ_j | m^L_w) + BCE(y_j, ŷ_j | u^L_w)
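A minimal sketch of the gain measurement is given below, assuming a shared hypothetical binary classification head copy_head applied to the text tokens of both streams; it illustrates the formulas above rather than the exact implementation.

import torch
import torch.nn.functional as F

def word_gain_and_copy_loss(copy_head, uni_tok, bi_tok, copy_labels):
    # copy_head:   hypothetical shared classifier (H -> 2 logits)
    # uni_tok:     [T, H] text-token features from the uni-modal stream (layer L_w)
    # bi_tok:      [T, H] text-token features from the bi-modal stream (layer L_w)
    # copy_labels: [T] 0/1 targets, 1 if the token appears in the reference summary
    logp_uni = F.log_softmax(copy_head(uni_tok), dim=-1)   # log P(y_j | u^{L_w})
    logp_bi = F.log_softmax(copy_head(bi_tok), dim=-1)     # log P(y_j | m^{L_w})

    idx = copy_labels.long().unsqueeze(-1)                 # [T, 1]
    # GI_j = log P(y_j = y_hat_j | m^{L_w}) - log P(y_j = y_hat_j | u^{L_w})
    gain = (logp_bi.gather(-1, idx) - logp_uni.gather(-1, idx)).squeeze(-1)

    # L_copyc, applied to both streams so that the gain direction stays meaningful
    loss_copyc = F.nll_loss(logp_bi, copy_labels.long()) + F.nll_loss(logp_uni, copy_labels.long())
    return gain, loss_copyc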
Image gain application. We introduce a divergence loss to enforce that an image with greater gain receives more textual attention. In the successive layers i ∈ [L_w+1, L_w+3], the average inter-attention between each text token t_j and the image is:
T2V_j^i = 1/(2(C-T)) (∑_s=T+1^C a_j,s^i + ∑_s=T+1^C a_s,j^i)
where a_j,s^i and a_s,j^i denote the text-to-image and image-to-text attention, respectively.
Finally, an attention divergence loss is added to align the inter-attention scores T2V_j^i with GI_j:
ℒ_word=KL(Softmax(GI_j)||Avg(T2V_j^i))
By minimizing this divergence loss, each text token attends to the image according to the gain the image brings. The interaction between word gain and inter-attention teaches the model to attend to useful images. Appendix <ref> provides examples to illustrate the word-level complement.
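The attention guidance itself can be sketched as below; the renormalization of the averaged inter-attention into a proper distribution before the KL term is an assumption made to keep the sketch well defined.

import torch
import torch.nn.functional as F

def word_attention_loss(attn_layers, gain, T, C):
    # attn_layers: [C, C] attention matrices from layers L_w+1 .. L_w+3
    # gain:        [T] word-level gains GI_j; positions T..C-1 are image regions
    t2v_layers = []
    for attn in attn_layers:
        text_to_img = attn[:T, T:].sum(dim=1)   # each text token attending to the image
        img_to_text = attn[T:, :T].sum(dim=0)   # the image attending to each text token
        t2v_layers.append((text_to_img + img_to_text) / (2 * (C - T)))
    t2v = torch.stack(t2v_layers).mean(dim=0)   # Avg(T2V_j^i), shape [T]

    target = F.softmax(gain, dim=0)             # Softmax(GI_j)
    pred = (t2v + 1e-12) / (t2v + 1e-12).sum()  # renormalized inter-attention
    # KL(Softmax(GI) || Avg(T2V)); F.kl_div(log q, p) computes KL(p || q)
    return F.kl_div(pred.log(), target, reduction="sum")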
§.§ Phrase-level Complement
Considering the image contribution to text of different granularities, we put forward a phrase-level complement module similar to the word level (as shown in Figure <ref>(c)).
Image gain measurement. Different from the copy classification task at the word level, we define a Copy Scorer task to measure the image gain on phrases. We obtain phrases {p_1,...,p_k,...} from the text with StanfordNLP[< https://github.com/stanfordnlp>], and {l_1,...,l_k,...} are the numbers of words in the phrases. The task targets scoring the proportion of words that appear in both the phrase and the reference summary:
R_p_k^u=Scorer(u^L_p)
R_p_k^m=Scorer(m^L_p)
where Scorer is an MLP. The ground-truth proportion is obtained as follows:
R̂_p_k = Count_t_j'∈ p_k(t_j')/l_k
where Count_t_j'∈ p_k denotes the number of words that appear in both the phrase p_k and the reference summary. Therefore, the image gain on the phrase can be acquired as:
GS_p_k = |R_p_k^u - R̂_p_k| - |R_p_k^m - R̂_p_k|
Similarly, to guarantee the correctness of the phrase gain, we add a mean squared error loss for the Copy Scorer task:
ℒ_copys = MSE(R_p_k^m, R̂_p_k)
+ MSE(R_p_k^u, R̂_p_k)
In particular, for the convenience of applying the phrase gain GS_p_k, we project it to a token gain GS_j as:
GS_j = max{GS_p_k : t_j ∈ p_k}
Image gain application. We then introduce a phrase attention divergence loss to enforce that an image with greater phrase gain receives more textual attention. We obtain the inter-attention score T2V_j^i from the layers i∈ [L_p+1,L_p+3] as in Eq. <ref> and constrain it with:
ℒ_phrase=KL(Softmax(GS_j)||Avg(T2V_j^i))
The phrase-level constraint guarantees that the image contributes to the text at phrase granularity.
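A sketch of the phrase-level gain is given below. Pooling each phrase span by mean and scoring it with a small head are assumptions of this illustration; phrase spans are assumed to be precomputed token index ranges.

import torch
import torch.nn.functional as F

def phrase_gain_and_scorer_loss(scorer, uni_tok, bi_tok, phrase_spans, ratio_targets):
    # scorer:         hypothetical MLP head mapping a pooled phrase feature (H) to a copy ratio
    # uni_tok/bi_tok: [T, H] text-token features of the two streams (layer L_p)
    # phrase_spans:   list of (start, end) token indices, one per phrase p_k
    # ratio_targets:  [K] ground-truth copy ratios R_hat_{p_k}
    T = uni_tok.size(0)
    token_gain = torch.full((T,), float("-inf"))
    loss_copys = 0.0
    for k, (s, e) in enumerate(phrase_spans):
        r_u = scorer(uni_tok[s:e].mean(dim=0)).squeeze()   # R^u_{p_k}
        r_m = scorer(bi_tok[s:e].mean(dim=0)).squeeze()    # R^m_{p_k}
        r_hat = ratio_targets[k]
        gs = torch.abs(r_u - r_hat) - torch.abs(r_m - r_hat)              # GS_{p_k}
        token_gain[s:e] = torch.maximum(token_gain[s:e], gs.expand(e - s))
        loss_copys = loss_copys + F.mse_loss(r_m, r_hat) + F.mse_loss(r_u, r_hat)
    # tokens not covered by any phrase keep zero gain (an assumption of this sketch)
    token_gain = torch.where(torch.isfinite(token_gain), token_gain, torch.zeros_like(token_gain))
    return token_gain, loss_copys

The resulting token_gain plays the same role as GI_j in the word-level module and enters ℒ_phrase.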
§.§ Training and Inference
In the training phase, to ensure the accuracy of the information difference between bi-modal and uni-modal, we initialize the model only with the summary generation loss. We apply negative log-likelihood for the target word sequence as the overall loss:
ℒ_gen = 1/T∑_t=1^T(-logP(s_t))
Then the model is finetuned with the hierarchical modules' objectives:
ℒ = ℒ_gen + ℒ_word + ℒ_phrase+ ℒ_copyc + ℒ_copys
In the inference phase, only the pre-filter module is kept active. ℒ_word and ℒ_phrase are used during training only to let the model learn how to fuse multimodal information, so the difference between the training and inference phases does not hurt generation.
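The two-stage schedule can be summarized by the following sketch (the dictionary of losses and the warm-up length are assumptions consistent with the settings reported in the next section).

def training_objective(losses, epoch, warmup_epochs=35):
    # losses: dict with entries "gen", "word", "phrase", "copyc", "copys"
    # Stage 1: generation loss only, so the bi-modal/uni-modal difference is meaningful.
    if epoch < warmup_epochs:
        return losses["gen"]
    # Stage 2: joint fine-tuning with the hierarchical modules' objectives.
    return losses["gen"] + losses["word"] + losses["phrase"] + losses["copyc"] + losses["copys"]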
§ EXPERIMENT
§.§ Settings
We experiment with the multimodal sentence summarization dataset[<http://www.nlpr.ia.ac.cn/cip/dataset.htm>] <cit.>. It contains 66,000 samples in total.
And each sample is a triplet of <sentence, image, summary>. Some statistical information is shown in Table <ref>. Appendix <ref> gives the categories of test images.
Because the outputs of some comparison systems are not publicly available, we can only report the ROUGE scores given in their papers; BLEU, BERTScore, and MoverScore cannot be recalculated for them.
We set both the text embedding dimension and the hidden dimension to 768. We use the “bert-base-uncased” <cit.> vocabulary with 28,996 tokens. The dropout <cit.> rate is set to 0.1, and the batch size is set to 8. For texts, the maximum text encoding length is 60 and the minimum decoding length is 8. For images, the object detection tool BUTD <cit.> is applied to extract the image features, with a maximum of 36 bounding boxes. We use the Adam <cit.> optimizer with a learning rate of 5e-05 and momentum parameters β_1=0.9, β_2=0.98.
The model is initially trained with the summary generation loss for 35 epochs. To obtain our final model, we train for a further 15 epochs with the hierarchical framework. In the test phase, we employ beam search and set the beam size as 4 to generate the summary. The parameter α in the pre-filter module is set as α=0.65.
§.§ Comparative Methods
Lead: Exploiting the first eight words as the summary.
Compress <cit.>: It uses integer linear programming to infer global optimal compressions.
ABS <cit.>: It utilizes an attention-based model to generate words of summary conditioned on the input text.
SEASS <cit.>: It constructs a second-level sentence representation with a sentence encoder and a selective gate for summarization.
Multi-Source <cit.>: It combines multiple source modalities based on the hierarchical attention mechanisms over each modality for solving the multimodal machine translation.
Doubly-attentive <cit.>: It uses two separate attention mechanisms to incorporate the visual feature, which narrows the gap between the image and the translation.
MAtt <cit.>: It proposes modality attention and image filtering for multimodal summarization.
MSE <cit.>: It proposes to apply the visual selective gates to multimodal summarization.
UniG: It is our base model with multimodal transformer UNITER and GRU decoder.
UniG (T): UniG fed only with textual modality.
§.§ Automatic Evaluation Results
Our methods are reported with six automatic metrics, including ROUGE-1, ROUGE-2, ROUGE-L <cit.>, BLEU <cit.>, BERTScore <cit.>, and MoverScore <cit.>. More details of evaluation scripts are given in Appendix <ref>.
Comparisons with Baselines.
We compare our work with our baselines and previous work on the multimodal sentence summarization dataset. Table <ref> shows the results of the different models. UniG performs comparably with UniG (T). CFSum builds on UniG and introduces the coarse-to-fine contribution network. “F”, “W”, and “P” denote the pre-filter, word-level complement, and phrase-level complement modules contained in CFSum, and the subscript gives the starting layer of the corresponding module. For example, CFSum-F_3 contains a pre-filter module with L_f=3.
Overall, our CFSum variants outperform the baselines UniG (T) and UniG. The best variant is CFSum-F_3W_6P_9, which achieves 1.64 more ROUGE-1 points than UniG.
We also conduct ablation experiments by applying one or two kinds of contributions. The results demonstrate that each image contribution benefits the model. In addition, combining all image contributions brings greater gains than a single contribution. Therefore, it can be concluded that different contributions are complementary to the summary.
Besides, we conduct ablation studies by placing the pre-filter module at the beginning (L_f=3) or the end of the hierarchical layers (L_f=9). In comparison, placing the pre-filter module at the beginning (CFSum-F_3W_6P_9) yields better performance.
§.§ Human Evaluation Results
We randomly select 50 samples from the test dataset and invite three postgraduates to score the summary quality from 1 to 5. The evaluation metrics include informativeness, fluency, and non-redundancy. (1) Informativeness: does the system summary contain comprehensive reference content? (2) Fluency: is the system summary grammatically correct and readable? (3) Non-redundancy: does the system summary avoid redundant or incorrect information relative to the reference summary? Table <ref> shows the human evaluation results. We run an inter-annotator agreement study on the three volunteers' scores and obtain reasonable agreement of 0.47, 0.39, and 0.43 on informativeness, fluency, and non-redundancy, respectively. The results show that our method CFSum-F_3W_6P_9 achieves the best performance on all three aspects over the UniG (T) and UniG baselines. Thus we conclude that our method is also effective according to human evaluation.
§.§ Further Analysis
§.§.§ Complement Modules Analysis
In other multimodal tasks such as image captioning and multimodal translation, their models learn to attend to the image more for visual words like “red”, “rose” and “woman” <cit.>. Since our proposed complement modules aim at extracting complementary information relative to textual modality, we want to know which word or phrase the image provides gains on. As shown in Figure <ref>, we visualize the complement gain value for the input words. We manually align the reference summary and the input text. The word highlighted with a red box indicates that it appears in the reference summary generatively[“Generatively” means that the summary word is obtained by paraphrasing or synonymous substitution of the input word.] or extractively.
First, we find that words with positive image gain basically cover the reference summary information, which shows that the calculated gain helps generate the target summary words. Second, different complement modules bring positive gains in different areas, which means the different levels of complement modules are complementary; this further explains why multiple contributions are better than a single contribution in the experimental results. Finally, it is worth noting that some words, e.g., “relatives” and “victims”, gain from the image even though they are not visible in it. Therefore, we believe the image brings gains on both visual and non-visual words. We explain further in sec:gainable.
§.§.§ Pre-filter Module Analysis
Since we believe that images should provide meaningful contributions instead of robustness enhancements in multimodal summarization, we wonder whether unpaired multimodal data may affect the performance of our model. Therefore, we try generating the summary based on the unpaired image and text.
In the test set, most of the images are highly similar in theme and content, so generating unpaired data by automatic shuffling is not meaningful for analysis. Therefore, we manually exchange v_i and v_j in pairs <t_i,v_i>, <t_j,v_j> where v_i and v_j have different themes or contents.
We exchange 20 pairs out of 100 pairs of test samples and repeat the experiment with three different samplings. The mean and standard deviation are reported in Table <ref>. “Paired” denotes ROUGE-1 on the original test set, “Unpaired” denotes ROUGE-1 on the unpaired set, and “CFSum (filter-off)” denotes turning off the pre-filter mechanism.
The results show different trends for the two models. For UniG, unpaired multi-modalities do not affect the performance; we suspect UniG does not exploit meaningful image information and relies only on the text to generate the summary. In contrast, CFSum suffers more severely from unpairing. The difference arises because CFSum depends on both the image and the text, so an unpaired image reduces the correct information that CFSum receives. However, CFSum still performs better than UniG, proving that it is fault-tolerant. Furthermore, CFSum (filter-off) suffers significantly from unpaired data, showing that the pre-filter can eliminate useless images.
§.§.§ Ablation Study
One of the most important hyperparameters in CFSum is the location of different contribution modules. Because the three modules' order in the network is fixed, we change their absolute position in the encoder layers and report the corresponding performance in Figure <ref>. w denotes the number of layers between two modules, and the X axis denotes the starting layer of the pre-filter module.
The results show that the different layer settings achieve comparable performance. It is noticeable that w=2 weakens the model. This is due to the fact that the network with small w loses the advantage of a hierarchical structure in the encoder.
§.§.§ Gainable Images
We select three gained words and corresponding gainable images to show in Figure <ref>. Consistent with intuition, images bring gains on visual words such as “earthquake”. More importantly, they also bring gains on non-visual words such as “celebrate” and “victims”. For example, “celebrate” may be used in competitions, events, and diplomacy, as shown in Figure <ref>. Multimodal tasks such as image captioning or multimodal question answering focus on establishing associations between visual words and images, but multimodal summarization also needs to pay attention to the associations between non-visual words and images. In other words, the image contributes to both visual and non-visual words.
§ CONCLUSION
Based on the observation that existing multimodal summary models do not take full advantage of useful image information, this paper focuses on modeling different contributions of images for summarization. We propose a novel framework, CFSum, consisting of pre-filter, word-level complement, and phrase-level complement modules. The pre-filter coarsely eliminates the impact of useless images, while the two-level visual complement modules measure different aspects of image gains and guide the fusion of the different modalities. Experimental results show that CFSum can significantly improve summary quality. More importantly, the complement modules make images contribute to both visual and non-visual words.
§ LIMITATIONS
Since our method is built on a single-stream multimodal transformer, it cannot be directly migrated to dual-stream models. Experimental results show that CFSum achieves comparable performance with strong baselines, but it still cannot surpass the state of the art obtained by some large dual-stream models.
§ ACKNOWLEDGEMENTS
The research work has been supported by the Natural Science Foundation of China under Grant No. 62106263.
§ EXPERIMENT DETAILS
Here, we will introduce some detailed settings for our experiments. All methods are run on NVIDIA GeForce RTX 3090. UniG has 139M parameters. When the batch size is 8, it takes 20 hours to train for 50 epochs with a single GPU.
We also provide evaluation scripts for reproduction. For ROUGE score, we use file2rouge[https://github.com/pltrdy/files2rouge] with default settings. For BERTScore[https://pypi.org/project/bert-score/0.2.1], we use the official API. It exploits the pre-trained contextual embeddings from BERT to calculate the similarity between the hypothesis sentences and the reference sentences. For MoverScore, we use moverscore_v2[https://github.com/AIPHES/emnlp19-moverscore], which leverages BERT and Earth Mover Distance to measure the similarity.
§ DERIVATION DETAILS
The derivation of formula <ref> is:
GI_j = Gain(y_j | m^L_w, y_j | u^L_w)
= KL(ŷ_j || y_j | u^L_w) - KL(ŷ_j || y_j | m^L_w)
= P(ŷ_j=1)· log P(y_j=1 | m^L_w)
+ P(ŷ_j=0)· log P(y_j=0 | m^L_w)
- P(ŷ_j=1)· log P(y_j=1 | u^L_w)
- P(ŷ_j=0)· log P(y_j=0 | u^L_w)
= P(y_j=ŷ_j)· log P(y_j=ŷ_j | m^L_w)
- P(y_j=ŷ_j)· log P(y_j=ŷ_j | u^L_w)
= log P(y_j=ŷ_j | m^L_w) - log P(y_j=ŷ_j | u^L_w)
where the expansion uses the fact that the label distribution is one-hot (P(ŷ_j=ŷ_j)=1), so that the label-entropy terms cancel and only the realized class remains. Thus the gain simplifies to a difference of log-likelihoods.
§ EXAMPLES OF COMPLEMENT MODULES
We provide some examples to further explain the word-level complement module (sec:word-level). For one of the input words t_j, assume that it appears in the reference summary; the ground truth of the copy classification is then ŷ_j=1. We list hypothetical classification results of the bi-modal and uni-modal streams in Table <ref>.
Then GI_j is calculated as:
GI_j = Gain(y_j | m^L_w, y_j | u^L_w)
= log P(y_j=1 | m^L_w) - log P(y_j=1 | u^L_w)
= log 0.6 - log 0.4
= 0.405
which means the image gives the input word t_j a gain of 0.405. Since the gain is positive, the text word t_j should assign the image a higher attention score in the attention layers.
§ IMPACT OF IMAGE CATEGORY
To further analyze the impact of our approach on different categories of images, we categorize the test images with VGG19 and report the performance for each category. As shown in Figure <ref>, there are 380 categories in the test images; we list the top 10 categories with the highest proportion. The images are fairly evenly distributed, and the line charts show that CFSum is superior to UniG in all categories. Therefore, there is no category bias in our method.
§ GUIDED ATTENTION
We visualize (1) the attention matrix from the 8^th encoder layer of CFSum-F_3W_6P_9, which is under word-level guidance, and (2) the attention matrix from the 11^th encoder layer, which is under phrase-level guidance. The attention matrices are renormalized after removing [CLS] and [SEP] and are shown in Figure <ref> and Figure <ref>.
From the attention under word-level guidance, we observe that some input words that occur generatively or extractively in the reference summary, such as “crash” and “relatives”, attend to the image. From the attention under phrase-level guidance, we observe that input phrases occurring generatively or extractively in the reference summary attend to the image more. Overall, this also shows that the two visual complement modules succeed in providing better encodings for generating summaries.
|
http://arxiv.org/abs/2307.02229v1
|
20230705121356
|
Knowledge-Guided Additive Modeling For Supervised Regression
|
[
"Yann Claes",
"Vân Anh Huynh-Thu",
"Pierre Geurts"
] |
cs.LG
|
[
"cs.LG"
] |
Y. Claes et al.
University of Liège, Liège 4000, Belgium
{y.claes,vahuynh,p.geurts}@uliege.be
Knowledge-Guided Additive Modeling
For Supervised RegressionThis work was supported by Service Public de Wallonie Recherche under Grant No. 2010235 - ARIAC by DIGITALWALLONIA4.AI. Computational
resources have been provided by the Consortium des Equipements de Calcul Intensif (CECI), funded by the Fonds de la
Recherche Scientifique de Belgique (F.R.S.-FNRS) under Grant No. 2.5020.11 and by the Walloon Region.
Yann Claes0009-0005-2551-5152 Vân Anh Huynh-Thu0000-0001-5492-2498 Pierre Geurts0000-0001-8527-5000
August 1, 2023
=============================================================================================================================================================================================================================================================================================================================================================================================================================
Learning processes by exploiting restricted domain knowledge is an important task across a plethora of scientific areas, with more and more hybrid methods combining data-driven and model-based approaches. However, while such hybrid methods have been tested in various scientific applications, they have mostly been applied to dynamical systems, with only limited study of the influence of each model component on global performance and parameter identification. In this work, we assess the performance of hybrid modeling against traditional machine learning methods on standard regression problems. We compare, on both synthetic and real regression problems, several approaches for training such hybrid models. We focus on hybrid methods that additively combine a parametric physical term with a machine learning term and investigate model-agnostic training procedures. We also introduce a new hybrid approach based on partial dependence functions. Experiments are carried out with different types of machine learning models, including tree-based models and artificial neural networks[Our Python implementations of the hybrid methods are available at <https://github.com/yannclaes/kg-regression>.].
§ INTRODUCTION
For the past decades, machine learning (ML) models have been developed to tackle a variety of real-life problems, complementing/replacing model-based (MB) approaches, which mostly remain approximations that make stringent assumptions about the system under study. Traditional ML approaches are said to be data-driven, i.e. their prediction model is solely built from some learning dataset, be it (deep) neural networks or regression trees. While their design comes with great expressiveness, they are likely to be subject to over-fitting without enough training examples and to show a lack of robustness on unseen samples, with predictions that can be inconsistent w.r.t. domain knowledge <cit.>. To overcome this generalization issue, hybrid approaches have been introduced to incorporate a priori domain knowledge within statistical models, which can be leveraged in a multitude of ways (see <cit.> for reviews). The success of these hybrid methods has been shown empirically on a range of synthetic and real-world problems <cit.>. However, while these models have been mostly applied to dynamical systems, they have not been thoroughly studied in the context of standard regression problems. Furthermore, the majority of ML models that have been considered in these approaches are neural networks and variants of the latter, leaving aside other methods. Our contributions are the following:
* We investigate empirically the performance and benefits of hybrid methods against data-driven methods on static regression problems (as opposed to dynamical problems). The static context removes a layer of complexity related to the temporal correlation between observed states, which makes it easier to assess the impact of, and interaction between, the MB and ML components. Specifically, we focus on hybrid models that combine, in an additive way, a parametric physical term with an ML term.
* We compare different approaches for training such hybrid additive models. We highlight specific hypotheses under which these approaches are expected to work well and relate the differences in terms of prediction and parameter recovery performance. We focus on model-agnostic approaches, where the ML term can be of any type, and we compare tree-based methods against neural networks. Tree-based methods have several advantages over neural networks, which motivate their use on static regression problems: they have much less hyperparameters to tune, appear robust to the presence of irrelevant features and have been shown to outperform neural networks on tabular data <cit.>.
* We introduce a new hybrid approach based on partial dependence functions, which makes it easier to find the right balance between the MB and ML components, makes less hypotheses than other approaches, and is shown to be competitive in our experiments.
§ PROBLEM STATEMENT
Let us define a regression problem, with y ∈ℝ and 𝐱∈ℝ^d, with d ∈ℕ_+, drawn from a distribution p(𝐱,y) such that y = f(𝐱) + ε
with f:ℝ^d ↦ℝ the partially known generating function and ε∼𝒩(0, σ^2) the noise term. We focus on problems such that f(𝐱) can be decomposed as:
[H1, Additivity]
y = f_k(𝐱_k) + f_a(𝐱) + ε,
where 𝐱_k is a subset of K ≤ d input variables. We assume partial knowledge of the generating function through some known algebraic function h_k^θ_k(𝐱_k) ∈ℋ_k with tunable parameters θ_k, such that for the optimal parameters θ_k^* we have h_k^θ_k^* = f_k. The residual term f_a(𝐱) is unknown and is approximated in this work through an ML component h_a^θ_a∈ℋ_a, with parameters θ_a[In the following, h_k^θ_k and h_a^θ_a will sometimes be denoted simply as h_k and h_a to lighten the notations.]. The final model h∈ H is denoted h(𝐱) = h_k^θ_k(𝐱_k) + h_a^θ_a(𝐱), with the function space H defined as H_k + H_a. H<ref> is common when MB methods and ML models are combined <cit.>.
Given a learning sample of N input-output pairs LS = {(𝐱_i, y_i)}_i=1^N, drawn from p(𝐱,y), we seek to identify a function h = h_k^θ_k + h_a^θ_a, i.e. parameters θ_k and θ_a, that minimizes the following two distances:
d(h, y) = 𝔼_(𝐱,y)∼ p(𝐱,y){(h(𝐱) - y)^2},
d_k(h^θ_k_k, f_k) = 𝔼_𝐱_k ∼ p(𝐱_k){ (h^θ_k_k(𝐱_k) - f_k(𝐱_k))^2}.
The first distance measures the standard generalization error of the global model h. The hope is that taking h_k into account will help learning a better global model than fitting directly a pure data-driven model on y, especially in the small sample size regime. The second distance d_k measures how well the tuned h_k approximates f_k. The main motivation for this second objective is interpretability: one expects that the algebraic form of h_k will be derived from first principles by domain experts, who will be interested in estimating the parameters of this term from data. An alternative to d_k is a loss that would compare the estimated and optimal parameters θ̂_k and θ^*_k (e.g., ||θ̂_k - θ^*_k||^2). d_k however has the advantage not to require θ_k^* to be fully identifiable, i.e. there can exist several sets of parameters θ^*_k such that h^θ^*_k_k = f_k. In our experiment, we will report both d_k and the relative mean absolute error on the estimated parameters.
The following approximation of (<ref>) can be used as training objective:
d̂(h, y; LS) = 1/N∑_i=1^N (h(𝐱_i) - y_i)^2.
Minimizing the distance in (<ref>) is expected to be challenging and sometimes even ill-posed. Indeed, if h_a is too powerful, it could capture f entirely and leave little room for the estimation of f_k. Finding the right balance between h_k and h_a is thus very challenging, if not impossible, using only guidance of the learning sample LS. Unlike (<ref>), (<ref>) cannot be estimated from a sample of input-output pairs and hence cannot be explicitly used to guide model training. There are however several scenarios that will make the problem easier. In the following, we will discuss the optimality of the hybrid methods under two additional assumptions:
[H2, Disjoint features]
Let 𝐱_a be a subset of features disjoint from 𝐱_k (𝐱_k∩𝐱_a = ∅). There exists a function f^r_a(𝐱_a) such that f_a(𝐱) = f^r_a(𝐱_a) for all 𝐱.
[H3, Independence]
Features in 𝐱_k are independent from features in 𝐱_a (𝐱_k⊥⊥𝐱_a).
H<ref> makes the problem easier as f_k captures all the dependence of y on 𝐱_k. In the absence of H<ref>, it might be hard to distinguish real contributions of 𝐱_k to f from those due to correlations with features not in 𝐱_k.
§ RELATED WORK
Hybrid additive modeling methods emerged several decades ago, combining first-principles models with different ML models. Already in the 1990's, approaches in <cit.> complemented physics-based models with neural networks, weighting contributions of both components (e.g. through radial basis function networks), to achieve enhanced physical consistency with better generalization properties. More recently, other works applied the same principles to model dynamical systems in various domains, still massively relying on neural networks <cit.>. In a more standard regression setting, <cit.> combined a linear parametric term with a tree-based ML term.
Previous works have introduced regularization of the ML term to reduce parameter identification issues in the decomposition <cit.>. Further works on this matter introduced physically-motivated constraints in the learning objective to better control contributions of the MB/ML components <cit.>. Elements of discussion about the well-posedness of this additive decomposition have been introduced in previous works: <cit.> showed the existence and uniqueness of an optimal pair (h_k, h_a) when the contributions of h_a are constrained to be minimal, and <cit.> demonstrated the convergence of an algorithm alternating between the optimization of h_k and the optimization of h_a, without however any guarantee about convergence points.
§ METHODS
We focus on model-agnostic approaches, i.e. approaches that can be applied with any algebraic function h_k and any type of ML model h_a. For both terms, we only assume access to training functions, respectively denoted fit^h_k, fit^h_k+γ, and fit^h_a, that can estimate each model's parameters, respectively θ_k, (θ_k,γ) and θ_a, so as to minimize the mean squared error (MSE) over LS (see below for the meaning of γ), where parametric methods rely on gradient descent.
§.§ Sequential training of h_k and h_a
This baseline approach first fits h_k on the observed output y, then fits h_a on the resulting residuals, as done in <cit.>. More precisely, we first train h_k^θ_k on y by introducing a constant term γ∈ℝ, such that
(θ̂_k, γ̂) = fit^h_k + γ(LS).
Our motivation for introducing the term γ will be explained below. Afterwards, we fit h_a on the output residuals: θ̂_a = fit^h_a({(𝐱_i, y_i - h_k^θ̂_k(𝐱_i,k)-γ̂)}_i=1^N).
Let ℱ̂_k be the set of all functions f̂_k mapping 𝐱_k ∈𝒳_k to some value y ∈ℝ, i.e. ℱ̂_k = {f̂_k : 𝒳_k ↦ℝ}.
Under H<ref> and H<ref>, it can be shown that f̂_k^* = min_f̂_k ∈ℱ̂_k d(f̂_k, y)
is such that f̂_k^*(𝐱_k) = f_k(𝐱_k) + C, for every 𝐱_k ∈𝒳_k, with C = 𝔼_𝐱_a{f^r_a(𝐱_a)} (see <ref>). Hence, this approach is sound at least asymptotically and justifies the introduction of γ. Note however that even under H<ref> and H<ref>, we have no guarantee that this approach produces the best estimator for a finite sample size, as f^r_a(𝐱_a)+ε acts as a pure additive noise term that needs to be averaged out during training. The approaches described in Sections <ref> and <ref> try to overcome this issue by fitting h^θ_k_k on corrected outputs that are expected to be closer to f_k(𝐱_k). Without H<ref> and H<ref>, the quality of the estimation of f_k by h_k^θ̂_k, according to (<ref>), is not guaranteed as there are regression problems satisfying H<ref> such that:
∄γ∈ℝ: min_f̂_k∈ℱ̂_k d(f̂_k, y) = f_k + γ.
An example will be given in Section <ref>.
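As an illustration, a minimal Python sketch of the sequential scheme is given below, with a random forest as h_a. The paper fits the parametric term by gradient descent; ordinary non-linear least squares (scipy's curve_fit) is used here only to keep the sketch short, and the function names are hypothetical.

import numpy as np
from scipy.optimize import curve_fit
from sklearn.ensemble import RandomForestRegressor

def sequential_fit(h_k, theta0, X, y, k_idx):
    # h_k(Xk, *theta): known parametric term; k_idx: column indices of x_k
    def model(Xk, *params):
        *theta, gamma = params
        return h_k(Xk, *theta) + gamma
    # Step 1: fit h_k + gamma on y (gamma is the extra constant term)
    popt, _ = curve_fit(model, X[:, k_idx], y, p0=list(theta0) + [0.0])
    theta_hat, gamma_hat = popt[:-1], popt[-1]
    # Step 2: fit the ML term on the residuals
    residuals = y - h_k(X[:, k_idx], *theta_hat) - gamma_hat
    h_a = RandomForestRegressor(n_estimators=100).fit(X, residuals)
    return theta_hat, gamma_hat, h_a

Under H2 and H3, step 1 already recovers f_k up to the constant C, which is absorbed by γ.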
§.§ Alternate training of h_k and h_a
A hybrid additive approach was proposed in <cit.> that alternates between updating h_k and updating h_a, using neural networks for h_a. Such alternate training was also proposed in <cit.> with a single decision tree as h_a and a linear h_k. We include this approach in our comparison, but also investigate it with random forests <cit.> and tree gradient boosting <cit.>. θ̂_k is initialized by (fully) fitting h_k^θ_k+γ on y. Then, we alternate between: (1) a single epoch of gradient descent on h_k^θ_k+γ and (2) either a single epoch for h_a (in the case of neural networks, as in <cit.>) or a complete fit of h_a (in the case of tree-based models).
While some theoretical results are provided in <cit.>, convergence of the alternate method towards the optimal solution is not guaranteed in general. Despite an initialization favoring h_k, it is unclear whether a too expressive h_a will end up dominating h_k, and finding the right balance between these two terms, e.g. by further regularizing h_a, is challenging. Under H<ref> and H<ref> however, the population version of the algorithm produces an optimal solution. Indeed, h_k will be initialized as the true f_k, as shown previously, making the residuals y - h_k at the first iteration, as well as h_a, independent of 𝐱_k. h_k will thus remain unchanged (and optimal) at subsequent iterations.
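A sketch of the alternate scheme with a tree-based h_a is given below; hk_model is a hypothetical wrapper exposing a full gradient-descent fit, a single epoch of updates, and prediction for h_k + γ.

from sklearn.ensemble import GradientBoostingRegressor

def alternate_fit(hk_model, X, y, k_idx, n_iter=100):
    Xk = X[:, k_idx]
    hk_model.full_fit(Xk, y)                  # initialization favouring h_k
    h_a = None
    for _ in range(n_iter):
        # (1) one epoch of gradient descent on h_k + gamma
        target_k = y if h_a is None else y - h_a.predict(X)
        hk_model.one_epoch(Xk, target_k)
        # (2) complete refit of the tree-based term on the current residuals
        residuals = y - hk_model.predict(Xk)
        h_a = GradientBoostingRegressor().fit(X, residuals)
    return hk_model, h_a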
§.§ Partial Dependence-based training of h_k and h_a
We propose a novel approach relying on partial dependence (PD) functions <cit.> to produce a proxy dataset depending only on 𝐱_k to fit h_k. PD measures how a given subset of features impacts the prediction of a model, on average. Let 𝐱_k be the subset of interest and 𝐱_-k its complement, with 𝐱_k ∪𝐱_-k = 𝐱; then the PD of a function f(𝐱) on 𝐱_k is:
PD(f,𝐱_k) = 𝔼_𝐱_-k[f(𝐱_k, 𝐱_-k)] = ∫ f(𝐱_k, 𝐱_-k) p(𝐱_-k) d𝐱_-k,
where p(𝐱_-k) is the marginal distribution of 𝐱_-k. Under H<ref> and H<ref>, the PD of
f(𝐱) = f_k(𝐱_k) + f^r_a(𝐱_a) is <cit.>:
PD(f,𝐱_k) = f_k(𝐱_k) + C, C = 𝔼_𝐱_a{f^r_a(𝐱_a)}.
The idea of our method is to first fit any sufficiently expressive ML model h_a(𝐱) on LS and to compute its PD w.r.t. 𝐱_k to obtain a first approximation of f_k(𝐱_k) (up to a constant). Although computing the actual PD of a function using (<ref>) requires in principle access to the input distribution, an approximation can be estimated from LS as follows:
PD(h_a,𝐱_k; LS) = 1/N∑_i=1^N h_a(𝐱_k, 𝐱_i, -k),
where 𝐱_i, -k denotes the values of 𝐱_-k in the i-th sample of LS. A new dataset of pairs (𝐱_k, PD(h_a,𝐱_k; LS)) can then be built to fit h_k. In our experiments, we consider only the 𝐱_k values observed in the learning sample, but PD(h_a, 𝐱_k; LS) could also be estimated at other points 𝐱_k to artificially increase the size of the proxy dataset.
In practice, optimizing θ_k only once on the PD of h_a could leave residual dependence of 𝐱_k on the resulting y - h_k^θ̂_k(𝐱_k) - γ̂. We thus repeat the sequence of fitting h_a on the latter residuals, then fitting h_k on the obtained PD(h_a^θ̂_a, 𝐱_k; LS) + h_k^θ̂_k(𝐱_k) + γ̂, with θ̂_k and θ̂_a the current optimized parameter vectors (see <ref>).
The main advantage of this approach over the alternate one is to avoid domination of h_a over h_k. Unlike the two previous approaches, this one is also sound even if H<ref> is not satisfied, as it is not a requirement for (<ref>) to hold. One drawback is that it requires h_a to capture well the dependence of f on 𝐱_k so that its PD is a good approximation of f_k. The hope is that, even if this is not the case at the first iteration, fitting h_k, which contains the right inductive bias, will make the estimates better and better over the iterations.
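The PD-based procedure can be sketched as follows; pd_estimate evaluates the empirical partial dependence at the observed x_k values, and hk_model is again a hypothetical wrapper fitting h_k + γ on a proxy dataset.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def pd_estimate(h_a, X, k_idx):
    # PD(h_a, x_k; LS) = (1/N) sum_i h_a(x_k, x_{i,-k}), evaluated at each observed x_k
    N = X.shape[0]
    pd = np.empty(N)
    for i in range(N):
        X_rep = X.copy()
        X_rep[:, k_idx] = X[i, k_idx]      # fix x_k to its i-th observed value
        pd[i] = h_a.predict(X_rep).mean()
    return pd

def pd_fit(hk_model, X, y, k_idx, n_iter=5):
    Xk = X[:, k_idx]
    hk_pred = np.zeros(len(y))
    for _ in range(n_iter):
        h_a = RandomForestRegressor(n_estimators=100).fit(X, y - hk_pred)   # fit h_a on residuals
        proxy_target = pd_estimate(h_a, X, k_idx) + hk_pred                 # PD proxy for f_k + C
        hk_model.fit(Xk, proxy_target)                                      # refit h_k + gamma
        hk_pred = hk_model.predict(Xk)
    return hk_model, h_a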
§ EXPERIMENTS
We compare the different methods on several regression datasets, both simulated and real. Performance is measured through estimates of (<ref>) and (<ref>) (the latter only on simulated problems) on a test set TS, respectively denoted d̂(h, y; TS) and d̂_k(h_k^θ_k, f_k; TS). In some cases, we also report rMAE(θ_k^*, θ_k), the relative mean absolute error between θ_k^* and θ_k (lower is better for all measures). For the hybrid approaches, we use as h_a either a multilayer perceptron (MLP), gradient boosting with decision trees (GB) or random forests (RF). We compare these hybrid models to a standard data-driven model that uses only h_a. We also compare fitting h_a with and without input filtering, i.e. respectively removing or keeping 𝐱_k from its inputs, to verify convergence claims about h_k in <ref>. Architectures (e.g. for MLP, the number of layers and neurons) are kept fixed across training methods to allow a fair comparison between them, and are given in <ref>. We use early stopping of gradient descent training by monitoring the loss on a validation set (except for pure tree-based models, which are trained in the standard way, hence not using gradient descent).
§.§ Friedman problem (H<ref> and H<ref> satisfied)
We consider the following synthetic regression problem:
y = θ_0sin(θ_1 x_0 x_1) + θ_2 (x_2 - θ_3)^2 + θ_4 x_3 + θ_5 x_4 + ε,
where x_j ∼𝒰(0, 1), j=0, …, 9, and ε∼𝒩(0, 1) <cit.>. We generate 10 different datasets using 10 different sets of values for θ_0, …, θ_5, each with 300, 300 and 600 samples for respectively the training, validation and test sets. For the hybrid approaches, we use the first term as prior knowledge, i.e. f_k(𝐱_k) = θ_0sin(θ_1 x_0 x_1).
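For reference, the generating process can be sketched as follows (the function name is hypothetical and the ten θ draws used in the experiments are not reproduced here).

import numpy as np

def friedman_data(theta, n, noise=1.0, seed=0):
    # theta = (theta_0, ..., theta_5); x_5..x_9 are irrelevant inputs
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n, 10))
    y = (theta[0] * np.sin(theta[1] * X[:, 0] * X[:, 1])
         + theta[2] * (X[:, 2] - theta[3]) ** 2
         + theta[4] * X[:, 3] + theta[5] * X[:, 4]
         + rng.normal(0.0, noise, size=n))
    return X, y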
We see in <ref> that all hybrid training schemes outperform their data-driven counterpart. They come very close to the ideal f_k → h_a method, and sometimes even slightly better, probably due to small overfitting issues. Sequential fitting of h_k and h_a performs as well as the alternate or PD-based approaches, as H<ref> and H<ref> are satisfied for this problem (see <ref>). Filtering generally improves the performance of hybrid schemes as H<ref> is verified. PD-based optimization yields good approximations of f_k (as shown by a low d̂_k). The alternate approach follows closely whereas the sequential one ends up last, which can be expected as fitting h_k only on y induces a higher noise level centered around 𝔼_𝐱_a{f_a(𝐱_a)}, while the other approaches benefit from reduced perturbations through h_a estimation, as explained in <ref>. Filtering vastly decreases d̂_k for alternate approaches, supporting claims introduced in <ref>, while this measure remains unimpaired for sequential and PD-based training by construction.
§.§ Correlated input features (H<ref> not satisfied)
Correlated linear model.
Let y = β_0 x_0 + β_1 x_1 + ε, with β_0 = -0.5, β_1 = 1, 𝐱∼𝒩(0, Σ), and ε∼𝒩(0.5^2, 1). We generate 50, 50 and 600 samples respectively for the training, validation and test sets. We use as known term f_k(𝐱_k) = β_0 x_0. Regressing y on x_0 alone yields the least-squares solution <cit.>:
𝔼[β̂_0] = β_0 + cov(x_0, x_1)/var(x_0)β_1.
We set cov(x_0, x_1) = 2.25 and var(x_0) = 2 so that (<ref>) reverses the sign of β_0 and (<ref>) is satisfied. The sequential approach should hence yield parameter estimates of β_0 close to (<ref>) while we expect the others to correct for this bias.
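The bias formula can be checked numerically with the short sketch below; the value of var(x_1) is not specified in the text and is set here arbitrarily (it does not enter the formula, it only has to keep Σ positive definite).

import numpy as np

rng = np.random.default_rng(0)
var_x1 = 3.0                                     # assumption: not given in the text
Sigma = np.array([[2.0, 2.25], [2.25, var_x1]])  # var(x0)=2, cov(x0,x1)=2.25
X = rng.multivariate_normal([0.0, 0.0], Sigma, size=200_000)
y = -0.5 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(0.25, 1.0, size=len(X))
beta0_hat = np.cov(X[:, 0], y)[0, 1] / np.var(X[:, 0], ddof=1)
print(beta0_hat)   # ~ -0.5 + 2.25/2 = 0.625: the sign of beta_0 is reversed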
From <ref>, we observe that, contrary to the PD-based approach, the sequential and alternate methods return very bad estimations of β_0, as H<ref> is no longer verified.
Filtering corrects the bias for the alternate approach but degrades the MSE performance for the sequential method as it removes the ability to compensate for the h_k misfit.
Correlated Friedman problem.
The structure is identical to the one in <ref> but with correlated inputs drawn from a multivariate normal distribution where μ_i = 0.5 and var(x_i) = 0.75, ∀ i, and cov(x_i, x_j) = ± 0.3, ∀ i ≠ j (the covariance sign being chosen randomly). Sizes of the training, validation and test sets are identical to those of <ref>. Inputs are then scaled to be roughly in [-1, 1]. Here again, we use f_k(𝐱_k) = θ_0 sin(θ_1 x_0 x_1).
As in <ref>, <ref> shows that hybrid models outperform their data-driven equivalents. PD-based methods usually yield more robust h_k estimations in the general unfiltered case, but struggle to line up with the alternate scheme in terms of predictive performance, except for GB-related models. For RF, this can be explained by a worse h_k estimation while for MLP we assume that it is due to h_a overfitting: in the alternate approach, it is optimized one epoch at a time, interleaved with one step on h_k, whereas that of PD-based methods is fully optimized (with identical complexities). Sequential and alternate approaches undergo stronger h_k misparameterization without filtering since H<ref> is not met, but the latter mitigates this w.r.t. the former, as was already observed in <ref>. Input filtering degrades predictive performance for the sequential methods as they cannot counterbalance a poor h_k.
§.§ Overlapping additive structure (H<ref> and H<ref> not satisfied)
Let y = β x_0^2 + sin(γ x_0) + δ x_1 + ε with ε∼𝒩(0, 0.5^2), β = 0.2, γ = 1.5, δ = 1 and 𝐱 sampled as in the correlated linear problem. We generate 50, 50 and 600 samples respectively for the training, validation and test sets. We define f_k(𝐱_k) = β x_0^2 and f_a(𝐱) = sin(γ x_0) + δ x_1 + ε. Hence, H<ref> and H<ref> do not hold. Even with β̂ = β^*,
h_a still needs to compensate for sin(γ x_0). Filtering is thus expected to degrade performance for all hybrid approaches as h_a(x_1) will never compensate this gap, which is observed in <ref>. Results for RF are not shown for the sake of space, but are similar to GB.
§.§ Real regression problems
We now apply all methods on two real-world static datasets. As algebraic term h_k, we chose to use a linear prior on x_k, where x_k is the feature with the highest importance score in a RF model trained on the full dataset (assumed to be inaccessible at training time). As there is no guarantee that any of our hypotheses are met (and in particular H<ref>), we do not measure the distance d_k in (<ref>) as it is deemed irrelevant.
We consider two settings for each dataset, inspired by <cit.>. In the first setting (INT), the training and test sets are sampled from the same distribution p(, y) whereas the second one (EXT) evaluates extrapolation performance for samples with unseen target values. If the linear prior is a reasonable assumption or, at the very least, the target increases or decreases monotonically with x_k, then we can expect hybrid methods to yield better results in the latter setting.
In the INT setting for each dataset, we randomly select 100 samples for the learning set, 100 samples for the validation set and keep the rest as test set. For the EXT setting, we select the samples (one fourth of the dataset) with the lowest output values as test set. From the remaining samples, we randomly select 100 samples for the learning set and 100 samples for the validation set. For both INT and EXT settings, performance metrics are averaged over 10 different splits. We standardize both input and output variables.
The features for both datasets are described in <ref>. The Combined Cycle Power Plant (CCPP) dataset <cit.> collects 9,568 measurements of net hourly electrical energy output for a combined cycle power plant, along with four hourly average input variables. The Concrete Compressive Strength (CCS) dataset <cit.> is composed of 1,030 samples relating amounts of concrete components with the resulting compressive strength. As done in <cit.>, we introduce a new feature corresponding to the cement-to-water ratio.
From <ref>, it seems that introducing a linear prior does not yield any benefit in the interpolation setting, as all models perform equivalently good, which suggests that either the prior is not adequate or that H<ref> is not verified. In the extrapolation scenario, we can however observe that the linear prior allows to mitigate the impact of moving out of the distribution compared to data-driven models.
Indeed, compared to the INT setting, the performance of all purely data-driven methods degrades in the EXT scenario, especially for GB and RF as their output predictions are bounded by the minimum target value observed in the training set. PD-based hybrid methods consistently outperform other hybrid approaches and are only slightly impacted, while sequential and alternating methods attain similar results.
§ CONCLUSION
We study several hybrid methods on supervised regression problems modeled in an additive way, using neural network models and tree-based approaches. We empirically show that trends observed for neural networks also apply to the non-parametric tree-based approaches, in terms of predictive performance as well as in the estimation of the known algebraic function. We introduce claims related to the convergence of these hybrid approaches, under mild hypotheses, and verify their soundness on illustrative experiments. We present a new hybrid approach leveraging partial dependence and show its competitiveness against sequential and alternate optimization schemes on both synthetic and real-world problems. We highlight its benefits in estimating the parametric prior and show that it alleviates both the risk of the ML term dominating the known term and the need for assuming independent input feature sets. As future work, we plan to investigate further the theoretical properties of the PD-based approach and extend it to dynamical problems.
§ OPTIMAL MODEL UNDER H2 AND H3
Let us recall the regression problem, where y ∈ℝ can be decomposed into the addition of two independent terms:
y = f_k(𝐱_k) + f_a^r(𝐱_a) + ε, ε∼𝒩(0, σ^2), 𝐱_k ∪𝐱_a = 𝐱, 𝐱_k∩𝐱_a = ∅, 𝐱_k ⊥⊥𝐱_a.
For clarity, let us denote by 𝔼_𝐱 the subsequent expectations over the input space 𝔼_𝐱∼ p(𝐱){·}. We have:
f̂_k^* = min_f̂_k ∈ℱ̂_k d(f̂_k, y) = min_f̂_k ∈ℱ̂_k𝔼_𝐱, ε{(f̂_k(𝐱_k) - f_k(𝐱_k) - f_a^r(𝐱_a) - ε)^2}
= min_f̂_k ∈ℱ̂_k𝔼_𝐱_k{(f̂_k(𝐱_k) - f_k(𝐱_k))^2} + 𝔼_𝐱_a, ε{(f_a^r(𝐱_a) + ε)^2}
- 𝔼_𝐱, ε{2(f̂_k(𝐱_k) - f_k(𝐱_k))(f_a^r(𝐱_a) + ε)}.
The second term is independent of f̂_k and thus has no impact on the minimization. Moreover, since 𝐱_k ⊥⊥𝐱_a, the last term writes as the product of two expectations, one of which is constant w.r.t. f̂_k. We thus have:
f̂_k^* = min_f̂_k ∈ℱ̂_k𝔼_𝐱_k{(f̂_k(𝐱_k) - f_k(𝐱_k))^2} - 2 C 𝔼_𝐱_k{(f̂_k(𝐱_k) - f_k(𝐱_k))},
with C = 𝔼_𝐱_a, ε{(f_a^r(𝐱_a) + ε)} = 𝔼_𝐱_a{f_a^r(𝐱_a)}. Cancelling the derivative of (<ref>) w.r.t. f̂_k, we obtain the optimal model f̂_k^*(𝐱_k) = f_k(𝐱_k) + C, for every 𝐱_k ∈𝒳_k.
§ MODEL ARCHITECTURES
Model hyperparameters are reported in <ref>. We used <cit.> for MLP, <cit.> for RF and <cit.> for GB. Unspecified parameters keep their default values. Learning rates for training h_k and MLP are set to 0.005. H is the number of hidden layers in MLP and W the number of neurons per hidden layer.
T is the number of trees in GB and RF, d the maximum tree depth and mss the minimum number of samples required to split an internal tree node.
|
http://arxiv.org/abs/2307.01002v1
|
20230703133517
|
Yielding transition of amorphous solids in the presence of aspherical impurities
|
[
"Anoop Mutneja",
"Bhanu Prasad Bhowmik",
"Smarajit Karmakar"
] |
cond-mat.soft
|
[
"cond-mat.soft",
"cond-mat.stat-mech"
] |
^1 Tata Institute of Fundamental Research, 36/P, Gopanpally Village, Serilingampally Mandal,Ranga Reddy District,
Hyderabad, Telangana 500107, India
^2School of Engineering, The University of Edinburgh,
King’s Buildings, Edinburgh EH9 3FG, United Kingdom
Understanding the mechanical properties of amorphous solids has been a field of intense research not only for the zoo of interesting phenomena one observes once these solids are subjected to external deformations but also for their importance in industrial applications. Amorphous solids are known to have higher yield strength compared to crystalline solids made with similar compositions. However, they fail catastrophically via shear band formation, and tuning their mechanical yielding is of significant importance for designing better materials. For decades, engineers have been working on ways to improve the ability of amorphous solids to sustain external deformations. One popular method is micro-alloying, which involves adding small amounts of different materials to the pure sample; while there are many examples of how micro-alloying can improve the yield strain, the microscopic mechanism behind it still needs to be better understood. Via extensive molecular dynamics simulation of model amorphous solids, we have studied the effect of elongated impurity particles on the yielding transition of these solids and show that rod-like particles can enhance the yield threshold but can also make these materials more brittle depending on their sphericity and the concentration. In particular, we show that the rotational relaxation of these impurity particles plays a significant role in suppressing or enhancing the occurrence of shear band formation, which are the main players for the eventual brittle failure of any amorphous solids.
Yielding transition of amorphous solids in the presence of aspherical impurities
Anoop Mutneja ^1,
Bhanu Prasad Bhowmik ^2 and Smarajit Karmakar ^1
August 1, 2023
================================================================================
§ INTRODUCTION
Unlike crystalline materials, which yield under external shear due to the motion of embedded dislocations and defects, amorphous solids lack long-range order; thus, defining and identifying defects or soft zones that are more prone to rearrangements under external deformation is very challenging, and it remains a research topic of significant current interest in both the science and the engineering communities <cit.>. The study of mechanical failure in amorphous solids is also important because of the widespread use of these materials in both industrial and daily-life settings <cit.>. Amorphous solids or glasses are abundant in nature; for example, toothpaste, whipped cream, ketchup, window glass, and metallic glass razors are disordered solids whose mechanical response to external forcing is essential to study for better design and performance. It is important to understand how much mechanical load a metallic glass sample can withstand, or how large an earthquake on the Richter scale would make the earth flow like a liquid. Though these phenomena happen at very different scales, they share similar characteristics and probably have the same microscopic origin, making the macroscopic and microscopic study of the mechanical properties of amorphous solids even more important. Despite decades of research, a clear understanding of how an amorphous solid yields remains elusive, with one set of studies suggesting it is a non-equilibrium phase transition <cit.> while other studies argue it is purely a dynamic crossover <cit.>.
When external deformation is applied, the stress (σ_xy) in the material increases linearly with the applied strain (γ) through reversible affine movements up to a threshold strain; however, with increasing strain, irreversible events known as plastic events start to set in, causing discontinuities in the stress-strain curve and leading to an on-average quasi-elastic behavior of the loading curve <cit.>. Such a mechanical response is quite general in a variety of systems, both in simulation studies <cit.> and in experiments <cit.>. The plastic events at small strain comprise a small set of particles undergoing irreversible structural rearrangements and are thus typically localized; they are sometimes broadly referred to as shear transformation zones (STZ) <cit.>, although a clear identification of soft defect zones (or STZs) is still a matter of intense debate. As loading continues, the number and the typical size of plastic drops increase, culminating at γ=γ_Y in a system-spanning avalanche called a shear band, which signals the onset of mechanical failure or the yielding transition. The physics of elementary plastic events, or STZs, is well studied, along with the collective phenomenon of shear banding <cit.>, which has applications in material design.
Under various external deformations, a system can fail in two distinct ways. The first is a brittle failure, where highly localized, irreversible particle movements associated with plastic events span the entire system, leaving most parts of the material unaffected. The second is a ductile failure, where plastic regions appear homogeneously and prevent strong spatial stress concentration, leading to a more rounded and gradual failure. The kinetic stability of the initial sample often determines the type of failure, with stability being directly related to the sample's degree of annealing <cit.>. Samples that are prepared via a slow cooling process and subjected to further annealing are more kinetically stable and will typically fail catastrophically by forming a shear band, while ductile behaviour is shown by poorly annealed samples obtained from higher temperatures via rapid cooling procedures. Numerical studies of mechanical yielding in amorphous solids are abundant in the literature, but the lowest temperatures accessible via standard molecular dynamics were not low enough to see such brittle behaviour until recently. The development of various enhanced sampling methods like the swap-Monte Carlo method <cit.> has made numerical studies of this brittle yielding possible <cit.>. The low-energy states obtained by these enhanced sampling methods are often referred to as ultra-stable glasses. Extremely brittle behaviour can also be seen in states generated by cyclically loading the system <cit.>, where it is found that cyclic loading enhances the annealing processes in amorphous solids. Similarly, amorphous solid samples prepared with a random pinning protocol can also show higher kinetic stability <cit.>. A recent study was conducted on a system formed by randomly bonding the nearest neighbours of a poorly annealed system, resulting in a system of dimers <cit.>. This dense assembly of dimers, which is physically relevant to patchy colloid systems, was found to be in an ultra-stable state. The phenomenon of brittle yielding has recently been extensively studied both because of its ubiquitous nature and its possible link to a non-equilibrium phase transition. It has been theorized that the formation of a shear band comprised of an array of N Eshelby-quadrupolar singularities is the energy-minimized state at large strain <cit.>. This framework suggests that brittle yielding is a critical spinodal phenomenon <cit.>. Other studies describe the yield point as a discontinuous transition in the mode coupling framework <cit.> or as a spinodal point in random-first-order-transition theory <cit.>.
For decades, engineers have been working on ways to improve the strength of amorphous solids to sustain external deformations. One popular method is micro-alloying, which involves adding small amounts of different materials to the pure sample <cit.>. While there are many examples of how micro-alloying can improve the yield strain, the microscopic mechanism behind it has yet to be fully understood. Recent studies have focused on the role of impurities in the mechanical response of amorphous solids, particularly through particle pinning <cit.>. This technique has been shown to prevent plastic events and delay the yielding transition; moreover, random pinning leads to a transition from heterogeneous, shear-band-mediated yielding to homogeneous yielding. Simply put, when the system is pinned, shear localization or shear band formation is highly suppressed. Soft pinning, which involves replacing pinned particles by impurity particles with larger diameters or masses, has also shown promise in improving the mechanical response. These findings suggest that computer simulations could provide insight into the microscopic mechanism of micro-alloying and the role of impurities in yielding transitions. Our study revisits the concept of soft pinning and highlights the importance of the rotational degrees of freedom of the embedded inclusions in altering the mechanical properties of amorphous solids, especially shear band formation. A similar theoretical study <cit.> also supports the mechanical enhancement of metallic glasses with impurities. That study suggests that the impurities create a new local structure different from the crystalline and glassy matrix. By treating the impurities and the surrounding structure as a defect, the study shows that a shift in the yield point is necessary to form a system-spanning instability or a shear band. Additionally, the study predicts that no shear bands will be present once a critical concentration of impurities is reached.
Using computer simulations of a binary glass-forming system, we investigate the effect of impurities on the yielding transition of amorphous solids in the athermal quasi-static scenario. We use both isotropic and anisotropic impurities. For the former, we add a third type of particle into the glass matrix, with a larger diameter than the constituent particles. Due to the bigger size, these impurities have smaller diffusion coefficients and can effectively have a similar influence to particle pinning <cit.> without breaking any translational symmetry. In the case of anisotropic impurities, we add rod-shaped particles into the glass matrix. Recently, rod-shaped particles have been extensively used to extract various length scales in glassy systems <cit.>. The advantage of replacing some of the constituent spherical particles with rods is twofold. On the one hand, due to their larger size, they have a smaller diffusion coefficient, similar to larger particles. On the other hand, due to their asymmetric shape, they introduce rotational degrees of freedom (rDoF) into the system.
We find that the addition of large symmetric impurities reduces the plasticity at smaller strains, increases the yield strain, and leads to a more brittle type of yielding. Anisotropic dumbbells of comparable size to the large particles exhibit a similar response, but the system can sustain a much larger load than with spherical impurities. We also study the effect of increasing the rod length on the yield point and find that such long rods change the yielding transition to a highly brittle type, similar to ultra-stable glasses. Our study also points out the role of rDoF and how freezing the rDoF leads to ultra-stable-like behaviour, as also observed in a recent work <cit.>.
§ MODEL SYSTEMS & SIMULATION DETAILS
We simulate Lennard-Jones particles in both two and three dimensions. The details of the models are as follows.
3dKA Model:
The 3dKA (3d Kob-Andersen) model <cit.> is a binary mixture of A- and B-type Lennard-Jones particles with a concentration ratio of 80:20. This is a generic model that mimics molecular glass-forming liquids (such as Ni_80P_20). The particles interact via the following potential:
V_αβ(r)=4ϵ_αβ[(σ_αβ/r)^12-(σ_αβ/r)^6]
where α and β run over A and B, and the interaction strengths and radii are ϵ__AA=1.0, ϵ__AB=1.5, ϵ__BB=0.5; σ__AA=1.0, σ__BB=0.88 and σ__AB=0.8. The interaction is truncated at r=2.5σ_αβ and smoothed by adding terms up to second order.
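For concreteness, the following short Python sketch evaluates this pair potential under one possible reading of the smoothing: the value and the first and second derivatives of the potential at the cutoff are subtracted, so that the potential vanishes smoothly at r=2.5σ_αβ. The exact smoothing used in the simulations is not spelled out above, so this form is an illustrative assumption.

import numpy as np

# Kob-Andersen parameters; index 0 -> A, 1 -> B.
EPS = np.array([[1.0, 1.5],
                [1.5, 0.5]])
SIG = np.array([[1.0, 0.8],
                [0.8, 0.88]])
RCUT_FACTOR = 2.5  # cutoff at 2.5*sigma_ab

def lj(r, eps, sig):
    sr6 = (sig / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def lj_d1(r, eps, sig):  # first derivative dV/dr
    sr6 = (sig / r) ** 6
    return 4.0 * eps * (-12.0 * sr6 * sr6 + 6.0 * sr6) / r

def lj_d2(r, eps, sig):  # second derivative d2V/dr2
    sr6 = (sig / r) ** 6
    return 4.0 * eps * (156.0 * sr6 * sr6 - 42.0 * sr6) / r ** 2

def smoothed_lj(r, a, b):
    # Subtract the Taylor expansion of V at the cutoff up to second order,
    # so V, V' and V'' all vanish continuously at r = rc (assumed smoothing).
    eps, sig = EPS[a, b], SIG[a, b]
    rc = RCUT_FACTOR * sig
    if r >= rc:
        return 0.0
    dr = r - rc
    return (lj(r, eps, sig) - lj(rc, eps, sig)
            - lj_d1(rc, eps, sig) * dr - 0.5 * lj_d2(rc, eps, sig) * dr ** 2)

print(smoothed_lj(1.0, 0, 0), smoothed_lj(2.49, 0, 0))  # second value is ~0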
2dmKA Model:
The 2dmKA (2d modified Kob-Andersen) model <cit.> is a glass-forming model in two dimensions with properties similar to the 3dKA model. It is a 65:35 binary mixture of the same A- and B-type particles as in the 3dKA model, interacting with the same potential and parameters.
Rods and ternary particles:
In the glass formers mentioned above, we added a concentration c_rod of rods to the parent system of N=(1-c_rod)N_T particles; we used a system with N_T=100000 total particles. Each rod is formed by gluing n_b spheres at a fixed spacing of d=0.3σ_AA.
In this study, to achieve the soft pinning effect, the rods are made up of spheres with σ_rod,rod=2.0, while σ_rod,α=0.5(σ_rod,rod+σ_αα); the same applies to the ternary spheres. They have the same mass and interact via the same potential as the parent spheres, with ϵ_rod,α=1 and ϵ_rod,rod=1/2 for both models; ϵ_rod,rod=1/2 is chosen to avoid nematic ordering. We have also studied the mechanical response with thin rod inclusions with σ_rod,rod=1.0. The rod length is defined as L=0.3σ_AA(n_b-1)+σ_rod,rod.
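A minimal helper, consistent with the definitions above, for the cross-interaction diameters and the rod length; the bead counts in the usage lines are illustrative (a thick dimer reproduces the 2.3:2 aspect ratio quoted later, and six thin beads give the 2.5:1 rods).

def sigma_cross(sigma_rod, sigma_alpha):
    # additive mixing rule used above for rod-parent (and ternary-parent) pairs
    return 0.5 * (sigma_rod + sigma_alpha)

def rod_length(n_beads, sigma_rod, spacing=0.3):
    # n_beads spheres glued at fixed spacing d = 0.3*sigma_AA (sigma_AA = 1)
    return spacing * (n_beads - 1) + sigma_rod

print(sigma_cross(2.0, 1.0))  # thick rod bead vs. A particle -> 1.5
print(rod_length(2, 2.0))     # thick dimer -> length 2.3 (aspect ratio 2.3:2)
print(rod_length(6, 1.0))     # thin rod of six beads -> length 2.5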
Sample preparation:
The moderately annealed state of a glassy system with rod concentration c_rod is prepared by first equilibrating the system at a high temperature (T=1.0). It is then slowly cooled to T=5×10^-4 with a cooling rate of dT/dt=1×10^-4. This annealed state is minimized via the conjugate-gradient method to reach the inherent state (IS). This inherent state is then used as the starting point for all shear procedures.
Shear protocol:
In this work we focus only on the athermal quasi-static (T → 0 and γ̇→ 0) deformation. We start with an inherent state and deform it by increasing the strain by δγ = 5×10^-5 in every step. Each deformation step consists of two parts. The first is an affine transformation, in which particle positions are modified as x_i → x_i + δγ y_i, y_i → y_i, z_i → z_i; here the strain is applied in the x direction. The second involves energy minimization, in which the particles are brought back to mechanical equilibrium. We use the conjugate gradient method for the energy minimization.
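The protocol can be summarized by the following Python sketch; the energy minimizer and the stress evaluation are represented by a placeholder callable, since the actual simulations use a full conjugate-gradient minimization of the interaction energy.

import numpy as np

def aqs_shear(positions, minimize_energy, gamma_max=0.2, dgamma=5.0e-5):
    # Athermal quasi-static shear: an affine x-shift proportional to y,
    # followed by energy minimization back to mechanical equilibrium.
    # `minimize_energy` is a placeholder for the conjugate-gradient minimizer;
    # it is assumed to return the relaxed positions and the shear stress.
    pos = positions.copy()
    gamma, stress_strain = 0.0, []
    while gamma < gamma_max:
        pos[:, 0] += dgamma * pos[:, 1]        # affine step: x -> x + dgamma * y
        pos, sigma_xy = minimize_energy(pos)   # non-affine relaxation
        gamma += dgamma
        stress_strain.append((gamma, sigma_xy))
    return np.array(stress_strain)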
§ RESULTS
§.§ Increasing Mechanical Strength
The bulk mechanical behaviour of the system containing various kinds of dopants is studied here in a strain-controlled setting with the stress as the observable. In Fig.<ref>(a) and (b), we show the stress-strain curves for the ternary system, which contains a concentration c_T of larger-diameter particles as discussed in the Methods section, for the 2dmKA and 3dKA models, respectively. Since increasing the pinning concentration is known to enhance the material strength (an increase in the modulus and a shift of the yield point to larger strain) <cit.>, if the larger-diameter particles of the ternary system act as soft pinning centers, one expects the system to yield at larger strain values and the slope of the stress-strain curve (the shear modulus μ) to increase with increasing pinning concentration. The ternary particle concentration varies over c_T∈ [0.5%-10%] in a system of N_T=100000 particles. The increase in the yield strain and the shear modulus μ with increasing dopant concentration is clearly visible in both dimensions, providing a proof of concept that larger-diameter particles act like soft pinning centers, both delaying mechanical breakdown and enhancing the shear strength of the system. Also notice the appearance of a prominent stress peak with increasing dopant concentration, especially for the 3dKA model, along with an increase in the steady-state (flow) stress. To check the nature of the yielding transition, we computed the susceptibility
χ(γ)=N_T (⟨σ_xy^2⟩-⟨σ_xy⟩^2).
χ(γ) shows a peak at the yield point; a sharper peak with a larger amplitude implies a more brittle character of the yielding process. The peak becomes sharper and its height increases with increasing c_T (Fig.<ref>(c, d)), signifying the emerging brittle character. The peak position also shifts to larger strain values along with the yield point. For the 2dmKA model the peaks are very broad, suggesting that yielding in the 2d system is not very sharp, although the magnitude increases with increasing c_T. Although these results are encouraging, we must admit that the improvement is minimal (from γ_Y(c_T=0)=0.09 to γ_Y(c_T=0.1)=0.107 for the 3dKA system). We therefore conclude that larger-diameter particles act as soft pinning sites but do not have a very strong pinning effect. Next, by adding rod-like particles, we investigate the effect of asymmetric dopants on the yielding transition. These dopants have both rotational and translational degrees of freedom (DoF), and it will be interesting to see the effect of these DoFs on mechanical loading. We expect a significant improvement over the ternary system, as asymmetric dopants have even slower diffusion because of their larger hydrodynamic radii, along with anisotropic diffusion in the system. The interplay of these dynamical processes has interesting effects on the yielding transition, as discussed in the subsequent sections.
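A sketch of how the susceptibility defined above can be evaluated from an ensemble of stress-strain curves; the synthetic input in the usage lines is only there to make the snippet runnable.

import numpy as np

def susceptibility(stress_samples, n_particles):
    # chi(gamma) = N_T * (<sigma_xy^2> - <sigma_xy>^2), averaged over samples;
    # stress_samples has shape (n_samples, n_strain_steps)
    mean = stress_samples.mean(axis=0)
    mean_sq = (stress_samples ** 2).mean(axis=0)
    return n_particles * (mean_sq - mean ** 2)

rng = np.random.default_rng(0)                     # illustrative synthetic data
fake_stress = 1.0 + 0.01 * rng.normal(size=(50, 400))
chi = susceptibility(fake_stress, n_particles=100000)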
Now we focus our attention on micro-alloying with dimers. Fig.<ref>(a, b) shows the stress-strain curves for the studied model systems in both dimensions with different concentrations of dimers (c_rod) of aspect ratio 2.3:2. Interestingly, the yield strain changes by a large amount (around 40%), from γ_Y=0.09 for the pure system to γ_Y=0.127 for c_rod = 10%, compared to only around 18% for the ternary system in the 3dKA model. An increase in yield strain is also very prominent in the 2dmKA model, along with a significant increase in the shear modulus in both dimensions and the appearance of clear stress overshoots. The systematic increase of the yield point with increasing c_rod shows that the dimers help the system sustain more load and act as effective soft pinning sites. This result is further supported by the χ(γ) plots shown in Fig.<ref>(c). One can clearly see that the peak position shifts to larger strain, indicating an increase in yield strain with increasing dimer concentration; the increasing peak height indicates behavior characteristic of increased stability rather than increased brittleness, as the width of the χ vs. γ curve remains very similar with increasing c_rod. Fig.<ref>(d-f) shows the non-affine displacement (D^2_min <cit.>) maps for the 3dKA system with increasing concentration of dimers. The maps are obtained at the strain value where ∑_i=1^N D^2_min≈ 75000, measured from the initial γ=0 configuration. The shear bands formed at different dimer concentrations are very similar, suggesting no significant change in the character of the yielding process with increasing c_rod, although both the modulus and the yield strain increase significantly.
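For reference, the standard Falk-Langer construction of D^2_min used in such maps can be sketched as follows (periodic-boundary minimum-image corrections and the neighbour search are omitted for brevity).

import numpy as np

def d2_min(ref_pos, cur_pos, neighbors, i):
    # Falk-Langer non-affine displacement for particle i: fit the best local
    # affine deformation J to the neighbour displacements and return the
    # residual sum of squares.
    d0 = ref_pos[neighbors] - ref_pos[i]   # relative positions, reference frame
    d1 = cur_pos[neighbors] - cur_pos[i]   # relative positions, current frame
    X = d1.T @ d0
    Y = d0.T @ d0
    J = X @ np.linalg.inv(Y)               # best-fit affine transformation
    residual = d1 - d0 @ J.T
    return np.sum(residual ** 2)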
The results look promising in terms of enhancing material strength by doping. However, this considerable augmentation poses a conundrum, since dimers of such an aspect ratio add only a small amount of particle volume, implying that the increase in particle volume may not be the sole cause of such an enhancement. The results point toward a significant role of the rotational DoFs afforded by the anisotropy of the molecules. To further understand the role of the rotational DoFs, we manually stopped the rotational motion of these dimers. The same samples with different concentrations of rods were taken and sheared using the AQS protocol, but during the minimization at each strain step the orientation of each rod was kept frozen, to rule out any effect it may have on the mechanical properties. Fig.<ref>(a) and Fig.<ref>(b) show the stress-strain curves for the 2dmKA and 3dKA systems with frozen rotational DoFs. The increase in the yield point has now vanished, confirming the vital role of the rotational degrees of freedom. By providing additional pathways to dissipate internal stresses, these DoFs enable the system to frequently release extra stresses and, as a result, resist larger strains. In contrast, with frozen DoFs the internal stresses accumulate and are released abruptly, causing the system to yield in a very brittle manner. In Fig.<ref>(b), we show the susceptibility χ for the 2dmKA model; it is clear that the system now yields at smaller strain with increasing dimer concentration, while at the same time the peak height increases and the width of the χ-γ curves decreases significantly. This signals increasingly brittle-like failure. The trend in the 3dKA model is even more dramatic, as shown in Fig.<ref>(c), in which the stress-strain curve becomes nearly discontinuous at the transition point and the corresponding susceptibility χ attains a very large peak value with a smaller width. Thus the rotational degrees of freedom play a vital role in controlling the mechanical strength of the medium, making them a crucial component in material design.
§.§ Ultra-stability with Frozen Rotational DoFs
Mechanical Aspects: It is intriguing to observe the highly brittle yielding behaviour that occurs when the rotational degrees of freedom are absent. The brittleness becomes more prominent as the concentration of rotationally stuck dimers increases, indicating the emergence of ultra-stable-glass-like properties. In Fig.<ref>(e), we show the non-affine displacement (D^2_min) maps, measured from the initial state, for systems with different concentrations of rotationally frozen dimers. The shear band is more localized at large doping concentration (right), suggesting the emergence of an ultra-stable character of the system. At this point, one may argue that artificially freezing the rotational degrees of freedom of a molecule is unrealistic and of no practical importance, so that even if rotational degrees of freedom play an important role and their absence can lead to enhanced shear band formation in a material, this will not be observed in a realistic scenario. This is not true, as the same effect can appear if the rods are longer, which significantly decreases their rotational diffusion constant in the medium. To test our hypothesis that frozen rotational degrees of freedom lead to this extreme brittleness, we now analyze systems containing long, thin rods, which are expected to be more rotationally immobile.
We now introduce thinner but longer rods into the system as dopants. The diameter of each bead in the rod is kept at σ_rod,rod=1.0 (thin rod in Fig.<ref>), the same as the particles in the system. Fig.<ref>(a) shows the averaged stress-strain curve for a 3dKA system with 10% rods of various lengths. One can clearly see the increasingly brittle behaviour with increasing rod length. It is also worth pointing out that systems with longer rods are dynamically slower; thus, the same preparation protocol generates more poorly annealed states, which should smoothen the stress-strain curve, implying that the intrinsic brittleness is even stronger than what we obtain in this study because of this simulation difficulty. Note that the yield strain decreases systematically with increasing rod length, in complete agreement with our previous observation with frozen rDoF. The stress peak and the shear moduli of the samples with longer rods increase considerably. Fig.<ref>(b) shows sample-to-sample stress-strain curves for two different dopant rod lengths. For rod length 1.3, the stress-strain curves have many more stress drops near the yielding transition. In contrast, for rod length 2.5, the individual stress-strain curves are much more abrupt, and one sees larger stress drops during plastic events, indicating the emergence of shear bands in the system. The emerging mechanically ultra-stable phase is thus quite evident from these results and is well supported by the sharply peaked χ(γ) for longer rods (see Fig.<ref>(c)). The clear emergence of shear bands with increasing mechanical ultra-stability is also seen in the non-affine displacement D^2_min maps shown in Fig.<ref>(d). For aspect ratio AR=2.5:1, the shear band is very sharp and localized, in contrast to the somewhat diffuse band in the system with shorter rods of AR=1.3:1 (Fig.<ref>(d)).
Recently, in Ref.<cit.>, a bulk ultra-stable glass phase was claimed to be formed by randomly bonding the nearest neighbors of an otherwise poorly annealed glassy state. Such bonded molecules would also be rotationally stuck because of the packing. Thus our results offer an alternative explanation of the results reported in <cit.>. The observation that long or rotationally stuck impurities make the system behave in a highly brittle way can be hypothesized as an effect of a larger length scale in the system. Ultra-stable states are sampled from extremely low temperatures, implying a sizeable structural length scale, while states sampled from high temperatures have a structural length scale of only a few particle diameters. By inserting a rod of length L_rod, a static correlation of the same length is induced. Thus the observed similarity between systems doped with long rod-like impurities and ultra-stable glasses may simply be due to the increased static correlation length. Although we do not have direct proof of this argument, it seems the most likely scenario, and further work is needed to understand the microscopic reason for the strong similarity between ultra-stable glasses and micro-alloyed glasses with long rod-like dopants.
Kinetic Aspects:
After demonstrating their enhanced mechanical stability, we now focus on the kinetic stability of systems with frozen rotational DoFs. To assess the kinetic stability of a solid, we monitor the potential energy per particle (e) while subjecting it to heating-cooling cycles. An ultra-stable state is characterized by a deep potential energy minimum that cannot be reached through normal cooling, resulting in hysteresis in the potential energy plot. To test this, we melted the rotationally frozen system with 10% dimers and then cooled it back to the same temperature at a cooling/heating rate of Ṫ=10^-4 (the same rate with which the sample was prepared in the first place). As shown in Fig.<ref>(a), hysteresis is observed, indicating classic ultra-stable characteristics. Upon heating, the system remains in its glassy state until T=1.12, while on cooling it reaches its glassy state at T=1.0, resulting in a glass with a higher potential energy per particle. The second heating cycle does not exhibit hysteresis. Furthermore, the absence of hysteresis in the dashed lines of Fig.<ref>(a), which depict the same procedure with rotationally free dimers, confirms the effectiveness of frozen rotational degrees of freedom in creating ultra-stable glasses. The specific heat (C_V=de/dT) plots in Fig.<ref>(b) also support this conclusion.
§ DISCUSSION
In summary, our results demonstrate that incorporating soft-pinning centers into a glassy system can significantly increase its ability to withstand external loads beyond its usual limit, along with increasing the shear modulus and the yield stress. Furthermore, asymmetric inclusions with rotational degrees of freedom are much better suited for micro-alloying, as they can significantly increase the yield strain, the shear modulus, and the yield stress even at small concentrations. The presence of rotational degrees of freedom permits additional pathways for relieving local stresses, leading to a stronger and more strain-resistant material. Conversely, inclusions with frozen or constrained rotational degrees of freedom do not exhibit the same enhancement of the material properties; rather, they lead to a decrease in yield strain along with a more brittle, catastrophic yielding transition. Our findings also suggest that longer rods or rotationally stuck particles accumulate stress and ultimately break in a brittle manner by forming shear bands, similar to ultra-stable glasses.
It is indeed puzzling that systems containing rod inclusions with frozen or constrained rDoFs behave more like ultra-stable glasses in terms of both their mechanical and kinetic stability. We do not have an immediate microscopic understanding of this phenomenon, but it seems plausible that long rod-like inclusions introduce a static correlation in the system of the order of the rod length. This increased static correlation can then be compared with the enhanced static correlation in ultra-stable glasses. This line of thought is consistent with the observation of extreme brittleness in ultra-stable states and in micro-alloyed glasses with long rod-like impurities. Ultra-stable states created through the swap Monte-Carlo technique will have large correlated domains, referred to as the mosaic scale <cit.>, which have much less rotational freedom, resulting in brittle behaviour. States sampled from higher temperatures may have smaller mosaic domains, which are rotationally free and able to dissipate local stresses through multiple plastic events. Although these arguments might seem reasonable, further studies are needed to ascertain their validity. If this reasoning is correct, it could significantly contribute to understanding mechanical failures in disordered systems and to linking mechanical properties to inherent length scales in the problem, as suggested in a recent work <cit.>. In Ref.<cit.>, a static correlation length is obtained using a new soft-matrix method associated with plastic events, and it was found that this correlation length increases with increasing annealing of the material; thus, it will be interesting to see whether the correlation length associated with plastic events also increases with increasing rod-like dopant concentration. This would provide important evidence supporting a growing static correlation in micro-alloyed materials with asymmetric impurities, leading to enhanced shear modulus and yield stress similar to ultra-stable glasses.
§ ACKNOWLEDGMENT
SK acknowledges funding by intramural funds at TIFR Hyderabad from the Department of Atomic Energy (DAE) under Project Identification No. RTI 4007. Core Research Grant CRG/2019/005373 and Swarna Jayanti Fellowship SB/SFJ/2019-20/05 from Science and Engineering Research Board (SERB) are acknowledged for generous funding. Most of the computations are done using the HPC clusters bought using CRG/2019/005373 grant and Swarna Jayanti Fellowship, grants DST/SJF/PSA01/2018-19, and SB/SFJ/2019-20/05 of SK.
|
http://arxiv.org/abs/2307.01432v1
|
20230704015056
|
Verifying the magnitude dependence in earthquake occurrence
|
[
"Giuseppe Petrillo",
"Jiancang Zhuang"
] |
physics.geo-ph
|
[
"physics.geo-ph",
"physics.data-an"
] |
The Institute of Statistical Mathematics, Research Organization of Information and Systems, Tokyo, Japan
The Institute of Statistical Mathematics, Research Organization of Information and Systems, Tokyo, Japan
The existence of a magnitude dependence in earthquake triggering has been reported. Such a correlation is linked to the issue of seismic predictability, and it remains under intense debate whether it is physical or is caused by incomplete data due to missing short-term aftershocks. We work first with a synthetic catalogue generated by a numerical model that captures most statistical features of earthquakes, and then with a high-resolution earthquake catalogue for the Amatrice-Norcia (2016) sequence in Italy; for the latter we employ the stochastic declustering method to reconstruct the family tree among seismic events and limit our analysis to events above the magnitude of completeness. In both cases we find that the hypothesis of magnitude correlation can be rejected.
Verifying the magnitude dependence in earthquake occurrence
Jiancang Zhuang
August 1, 2023
===========================================================
Introduction–
The question of whether earthquakes can be predicted is one of the most important in both social and scientific contexts <cit.>. The study of earthquake occurrence is of great interest and involves multiple fields of research and technology, including engineering, geophysics, seismology, statistical mechanics, and more. It is well known that seismicity is not completely random, and the biggest predictable component in seismicity is clustering. The Epidemic Type Aftershock Sequence (ETAS) model is considered the standard baseline for modelling earthquake clusters <cit.> and for short-term aftershock forecasting. In the traditional ETAS model, all event magnitudes are assumed to be independent of the occurrence times and identically distributed according to the Gutenberg-Richter law, which in fact implies that earthquake magnitudes are completely random and hence unpredictable.
Recently, some researchers reported the presence of correlations between seismic magnitudes within an earthquake sequence <cit.>, i.e., subsequent events tend to have larger magnitudes than expected based on the Gutenberg-Richter law. This implies some departure from complete randomness in forecasting earthquake magnitude, since it would then be possible to predict, to some extent, the magnitude of an earthquake from a seismic signal before its rupture process completes.
However, it has also been argued that such apparent correlation is caused by short-term aftershock incompleteness (STAI) <cit.>, which refers to the lack of recorded earthquakes following a major event due to overlapping coda-waves, particularly in the immediate aftermath of a large earthquake <cit.>. STAI not only leads to a bias in the estimation of model parameters and in forecasting, but also creates an apparent magnitude correlation.
Both STAI and magnitude correlation seem able to explain each other. The existence of magnitude dependence offers an alternative explanation for STAI: the lack of recorded earthquakes following a major event may not be a recording issue but rather a preference to trigger earthquakes of a certain magnitude. It is important to note that the incompleteness of the instrumental seismic catalogue due to the overlapping of coda-waves is a well-established effect, and supporters of the existence of correlations between magnitudes do not deny the existence of STAI. Rather, they attribute the absence of minor events to both instrumental issues and a physical phenomenon caused by magnitude clustering.
The traditional ETAS model does not account for either STAI or magnitude dependence.
To improve the ETAS model's ability to describe seismicity, we need to choose between two different approaches. The first is to tackle the influence of artificial incompleteness, by “obscuring" events produced by a simulated ETAS catalog to reproduce the sequence of events present in the real catalog (as suggested in <cit.>) or by “reconstructing" the complete catalog by reintroducing missing events (as suggested in <cit.>). The second approach introduces a “constrained" magnitude frequency distribution P(m| m^*) for the aftershocks that are triggered directly by a parent event of magnitude m^* in the ETAS model, to account for the existence of correlations between seismic magnitudes. Both approaches appear to improve the ETAS model's ability to describe seismicity, but it is still unclear which one corresponds to reality and should be used for the next generation of statistical seismic forecasting models.
In this article, we study magnitude correlations first in a synthetic seismic catalogue produced with a 2-layer OFC model (<cit.>), which is able to produce realistic earthquake statistics. We then perform a direct correlation analysis of a machine-learning high-resolution catalogue for the Amatrice-Norcia (2016) sequence in Italy, while avoiding biases due to uncertainty about descendants in the triggering phase. Using the stochastic declustering technique <cit.>, we assign to each event j a probability of being the offspring of a previous event i, or a background event. By declustering the instrumental catalog, we calculate correlations weighting the results by the probability that the two events are related. After establishing the completeness magnitude of the catalog and performing a statistical analysis on the correlated pairs, we can check whether the magnitude correlation hypothesis can be rejected with a high level of confidence.
The Physical Model and Magnitude Correlation–
We implement the model defined in <cit.> and tested in <cit.>, composed of two elastic layers. The first represents the brittle fault; the second is ductile. The aftershocks on the fault are nucleated by the interaction with the second layer. We consider a rectangular fault modeled as a lattice of blocks of size L_x = 1000 and L_y = 400. The stress acting on the i-th block is the sum of two contributions which account for the intra-layer and inter-layer interactions. The friction in the two layers is different, being velocity weakening (modelled via a Coulomb failure criterion) in the brittle layer and velocity strengthening in the ductile layer. The ingredients introduced for the friction induce stick-slip dynamics in the ductile layer, which allows all the statistical laws of earthquakes to be recovered. For more details on the model, see reference <cit.>. The output seismic catalogue we use in this study contains ∼ 5,000,000 events.
Completeness of the Amatrice-Norcia seismic catalogue–
The Machine-Learning-Based High-Resolution Earthquake Catalog consists of 885,616 events spanning a one-year period, based on arrival times derived using a deep-neural-network-based picker <cit.>. It is well known that immediately after a large earthquake many aftershocks cannot be recorded (Fig.<ref>). The seismic waveforms generated by the aftershocks, many of which occur shortly after the mainshock, overlap with each other and cannot be accurately distinguished. Therefore, catalog completeness is quantified in terms of a minimum threshold m_c, defined as the magnitude above which all events are identified and included in the seismic catalogue. The value of m_c depends on the level of noise present in the seismic data and on the distance between the earthquake epicenter and the recording seismic stations <cit.>. Several methods have been proposed to estimate m_c <cit.>, but many of them have limitations. To address the problem of calculating m_c, we estimate the completeness magnitude of the catalogue by plotting the quantity
F_M(t|m_th)=∑_i=1^N 1(t_i<t, m_i<m_th)/∑_i=1^N 1(m_i<m_th)
where 1 is the indicator function and m_th is the threshold magnitude chosen for the calculation. In Fig.(<ref>) it is easy to see that for small values of m_th the curves are distinctly separate, whereas they blur together for larger values (m_th ≥ 2). Neglecting the small noise, a complete collapse of all curves means that the catalog is complete and all occurred events have been recorded. Here, we consider the catalogue complete when restricted to earthquakes with m>3.
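A minimal sketch of this completeness diagnostic, under the assumption that the plotted quantity is the normalized cumulative count of events below the chosen threshold; the synthetic catalogue in the usage lines is illustrative only.

import numpy as np

def completeness_curve(times, mags, m_th, t_grid):
    # Normalized cumulative count of events with m < m_th up to time t;
    # curves for different thresholds collapse once the catalogue is complete.
    sel = mags < m_th
    t_sel = np.sort(times[sel])
    return np.searchsorted(t_sel, t_grid) / max(t_sel.size, 1)

rng = np.random.default_rng(1)                                 # synthetic example
times = np.sort(rng.uniform(0.0, 365.0, size=20000))
mags = 1.0 + rng.exponential(1.0 / np.log(10.0), size=20000)   # GR-like, b = 1
t_grid = np.linspace(0.0, 365.0, 200)
curves = {m_th: completeness_curve(times, mags, m_th, t_grid)
          for m_th in (2.0, 2.5, 3.0, 3.5)}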
Stochastic declustering–
The main weakness of a direct statistical approach to calculating the correlations between magnitudes is that the calculation is performed by ordering the earthquakes chronologically within a fixed spatial region. Thus there is a non-zero probability that related events occurring close in time are spatially distant; conversely, related events occurring close in space can be separated by a very large time interval. For this reason a simple space-time window selection is not suitable for this kind of study. To overcome this problem we employ the stochastic declustering methodology introduced by <cit.>, with which it is possible to estimate the probability that an event is a spontaneous (background) event or is instead triggered by others. We define ρ_ij as the probability that event i is an offspring of event j. Since we are only interested in understanding whether there is magnitude clustering between triggering events, we remove all background contributions from the computation, i.e., all terms ρ_ij with i=j. After this procedure, we obtain a probability tree among the events. In particular, we build a matrix (i,j,ρ_ij), where j is the possible mother of i, ranging from 1 up to the total number of mothers, while i is the index of the possible offspring related to j, ranging from 0 up to the total number of offspring. We obtain N_c=706,266 combinations of events with magnitude m ≥ 3.
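Schematically, the declustering weights can be computed as the ratio of the triggering rate of each candidate parent to the total rate at the child event, as in the following sketch; the triggering kernel and the background rate mu are assumed to come from a fitted ETAS model and are not estimated here.

import numpy as np

def triggering_probabilities(times, mags, xy, kernel, mu):
    # rho[i, j]: probability that event i was triggered by earlier event j,
    # prob_bg[i]: probability that event i is a background event.
    # `kernel(dt, dr, m_parent)` and the background rate `mu` are assumed to
    # come from a fitted ETAS model; they are not estimated here.
    n = times.size
    rho = np.zeros((n, n))
    prob_bg = np.zeros(n)
    for i in range(n):
        rates = np.zeros(n)
        for j in range(i):
            dt = times[i] - times[j]
            dr = np.linalg.norm(xy[i] - xy[j])
            rates[j] = kernel(dt, dr, mags[j])
        total = mu + rates.sum()
        rho[i] = rates / total
        prob_bg[i] = mu / total
    return rho, prob_bg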
Correlations of the Empirical Magnitudes–
Instead of looking at the pairs (m_i,m_j) directly, we estimate the counts of the pairs (EM_i, EM_j)=(ecdf_m_1:n(m_i), ecdf_m_1:n(m_j)) in the unit square [0,1] × [0,1] on a regular grid, weighted by the probability ρ_ij (see Supp. Mat.).
If there is no magnitude dependence, these points are distributed homogeneously in the unit square without any regular pattern (Fig.(<ref>)). To statistically test whether a correlation exists, we compute a ρ_ij-weighted histogram of the differences between EM-values, Δ_ij = EM_i-EM_j (Fig.(<ref>a,<ref>b)). Under the null hypothesis of no correlation, Δ_ij has a probability density function (pdf) with a triangular shape: Δ_ij +1 if -1 < Δ_ij < 0, and 1-Δ_ij if 0< Δ_ij < 1 (see Suppl. Mat.). Conversely, if m_i and m_j are positively correlated, then the pdf of Δ_ij will be more concentrated around 0. In Fig.(<ref>c,<ref>d) the cumulative distribution function (cdf) of Δ_ij is compared with the theoretical one for the null hypothesis. We find that the hypothesis of magnitude dependence is rejected for m ≥ 3; conversely, for m < 3 a concentration of points around 0 is more evident and the hypothesis of magnitude correlation cannot be rejected.
We can therefore regard the correlations observed when including all events in the catalogue as spurious and caused by the minor-magnitude events missing from the catalogue. In conclusion, we state that the magnitude dependence found in the machine-learning Amatrice-Norcia catalogue might be due to short-term aftershock missing and cannot be attributed to a real dependence between magnitudes.
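The weighted comparison against the triangular null distribution can be sketched as follows; for simplicity the empirical cdf is built from the pooled pair magnitudes rather than from the full catalogue, which is a small simplification of the procedure described above.

import numpy as np

def weighted_delta_cdf(mags_child, mags_parent, weights, grid):
    # Delta = EM_child - EM_parent, weighted by the declustering probabilities,
    # compared with the triangular null cdf (pdf 1+d on (-1,0), 1-d on (0,1)).
    all_m = np.concatenate([mags_child, mags_parent])
    sorted_m = np.sort(all_m)
    ecdf = np.searchsorted(sorted_m, all_m, side="right") / all_m.size
    em_child, em_parent = ecdf[:mags_child.size], ecdf[mags_child.size:]
    delta = em_child - em_parent
    order = np.argsort(delta)
    w = weights[order] / weights.sum()
    cdf_emp = np.interp(grid, delta[order], np.cumsum(w))
    cdf_null = np.where(grid < 0.0, 0.5 * (grid + 1.0) ** 2,
                        1.0 - 0.5 * (1.0 - grid) ** 2)
    return cdf_emp, cdf_null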
Conclusions–
Resolving the magnitude correlation debate is crucial for statistical seismologists to focus on developing a next-generation epidemic model. Moreover, the presence of correlations is intrinsically linked to greater predictability of a seismic event. In this article we have shown that the correlation between magnitudes is an artificial effect due to the incompleteness of the instrumental catalog, caused in turn by the overlapping of coda-waves. Compared to what has been done in the literature, we propose three improvements: 1) we study the correlations in a synthetic catalogue produced by a physical model that captures the real statistical features of earthquakes; 2) we use a high-resolution experimental machine-learning catalogue; 3) to be sure of calculating the correlation between the right pairs of events (parent and descendants), we use the technique of stochastic declustering.
We want to underline that the proposed ETAS models with magnitude correlation may still perform well; however, it is likely that they do not capture the real underlying process.
This research activity has been supported by MEXT Project for Seismology TowArd Research innovation with Data of Earthquake (STAR-E Project), Grant Number: JPJ010217. We would like to acknowledge David Marsan for the useful discussion.
|
http://arxiv.org/abs/2307.02433v1
|
20230705165427
|
Unconditionally stable higher order semi-implicit level set method for advection equations
|
[
"Peter Frolkovič",
"Nikola Gajdošová"
] |
math.NA
|
[
"math.NA",
"cs.NA",
"35L60, 65M06"
] |
[mytitlenote]The research was supported by VEGA 1/0314/23 and APVV 19-0460.
mysecondaryaddress]Peter Frolkovičmycorrespondingauthor
[mycorrespondingauthor]Corresponding author
[email protected]
mysecondaryaddress]Nikola Gajdošová
[email protected]
[mysecondaryaddress]Department of Mathematics and Descriptive Geometry, Slovak Technical University, Radlinského 11, 81005 Bratislava, Slovakia
We present compact semi-implicit finite difference schemes on structured grids for numerical solutions of the advection by an external velocity and by a speed in normal direction that are applicable in level set methods. The most involved numerical scheme is third order accurate for the linear advection with a space dependent velocity and unconditionally stable in the sense of von Neumann stability analysis. We also present a simple high-resolution scheme that gives a TVD (Total Variation Diminishing) approximation of the spatial derivative for the advected level set function. In the case of nonlinear advection, the semi-implicit discretization is proposed to linearize the problem. The compact form of implicit stencil in numerical schemes containing unknowns only in the upwind direction allows applications of efficient algebraic solvers like fast sweeping methods. Numerical tests to evolve a smooth and non-smooth interface and an example with a large variation of velocity confirm the good accuracy of the methods and fast convergence of the algebraic solver even in the case of very large Courant numbers.
[2010] 35L60 65M06
§ INTRODUCTION
Numerical methods to solve mathematical models expressed by partial differential equations are an important tool for applications of such models in research and industry. As they provide only approximate solutions, one is interested in numerical methods that are accurate and robust enough at the same time. In this paper, we attempt to offer a candidate for these types of schemes to solve a prototype of advection equations used in level set methods <cit.>.
The basic idea of level set methods is to describe moving interfaces that can have a complex shape, such as evolving curves in 2D and evolving surfaces in 3D. For that purpose, a time dependent level set function is considered, whose zero level set implicitly represents the position of the interface at each time. In this way, the level set function can be determined by solving a nonlinear advection equation whose velocity is typically prescribed by some external velocity field and/or by a speed in the normal direction. The equation can then be solved numerically, e.g., on a uniform structured grid using finite difference methods, avoiding the difficulties of direct interface tracking, where a nontrivial treatment of a nonuniform dynamic mesh is typically required.
In particular, we are considering the following nonlinear advection equation,
∂_t ϕ + (u⃗ + δ∇ϕ/|∇ϕ|) ·∇ϕ = 0 , ϕ( x,0)=ϕ^0( x)
where ϕ=ϕ( x,t) for x∈ R^d and t>0 is the unknown level set function given at t=0 by the given function ϕ^0. The vector field u⃗=u⃗( x) prescribes the movement of all level sets by an external velocity, and δ is the speed in the normal direction given by the normalized gradient.
Numerical solutions of the level set equation (<ref>) are of great interest in research and applications, see monographs or review articles <cit.> for an overview.
Among the wide variety of applications of level set methods, we are concerned with the tracking of interfaces in two-phase flows <cit.>, groundwater flow with a moving water table <cit.>, evolving porous media <cit.>, forest fire propagation <cit.>, and image segmentation by subjective surfaces <cit.>.
We are interested in numerical methods that do not require constraints on the choice of discretization steps to ensure the stability of computations.
Such restrictions are usually quantified by the so-called (grid) Courant numbers, which typically must be small enough to provide stable numerical results. The restriction can be impractical to follow in cases where a large variation of Courant numbers occurs due to, e.g., large variations of discretization steps for unfitted grids with computational domains having complex boundaries; see the so-called "small cut cells" problem in <cit.>. Moreover, large time steps, and consequently large Courant numbers, are suitable in problems where the time dependent solution approaches a stationary form, or when an auxiliary time variable is used to solve stationary problems by time marching methods or relaxation algorithms <cit.>.
To derive an implicit scheme with no stability restriction on time steps, we follow several techniques that are popular in the construction of numerical methods for hyperbolic problems. First, we apply the so-called Lax-Wendroff (or Cauchy-Kowalevskaya) procedure in connection with finite Taylor series in time, where the time derivatives are replaced by terms involving space derivatives using the relation between them given by the partial differential equation. The standard form of this procedure uses a replacement by terms that involve only spatial derivatives, which are then approximated by some space discretization method <cit.>. We follow the approach where mixed derivatives are used <cit.>, combined with the idea that the sequence of terms obtained in the Taylor series can be approximated with a decreasing order of accuracy <cit.>. Using these tools, we construct a third order accurate implicit scheme that is unconditionally stable and produces algebraic systems which can be solved by efficient methods such as fast sweeping methods <cit.>. The accuracy and stability are formally shown for the linear advection equation and smooth solutions, but the method is applied successfully to the nonlinear form (<ref>) with non-smooth solutions, and it shows clearly better precision than the second order scheme for the chosen representative examples.
Opposite to hyperbolic problems that describe conservation laws, for which discontinuous solutions must be considered, the solutions of the non-conservative level set advection equation (<ref>) are supposed to be continuous. Nevertheless, the gradient of the level set function can contain discontinuities, and its numerical approximation can play an important role in some applications of level set methods. To deal with this, we use a simple relation in the 1D case between the non-conservative advection equation for the level set function and the conservative advection equation for the spatial derivative of the level set function, which has been used to motivate the derivation of many numerical methods, including the one in the seminal work of Osher and Sethian <cit.>. We use this property to derive a high resolution method to solve (<ref>) based on a parametric second order scheme in 1D that can be locally limited to prevent unphysical oscillations in the approximation of the gradient, in the spirit of Essentially Non-Oscillatory (ENO) approximations <cit.>. Such a scheme has a very simple form, opposite to the more involved third order scheme, and it can be applied easily to problems in several dimensions by discretizing dimension by dimension. It is important that such a scheme is again unconditionally stable, which is not the case if the unlimited second order scheme is used in this form.
In summary, we offer a semi-implicit method for the solution of (<ref>) based on two numerical schemes. The third order accurate scheme is more involved, and it is studied here in detail in the two-dimensional case. It offers a very good approximation of the solution ϕ in (<ref>), but it does not ensure a non-oscillatory approximation of ∇ϕ in the case of discontinuities, which might be an issue in some applications of level set methods. The high-resolution scheme is very simple to implement even for problems in several dimensions, and in addition to unconditional stability it offers the possibility to approximate the gradient in the spirit of ENO and related methods.
The paper is structured as follows. In Section 2 we present details of all schemes in the 1D case. In Section 3 we extend the method to several dimensions, and in Section 4 we extend it to the nonlinear case. In Section 5 we present representative numerical experiments.
§ ONE-DIMENSIONAL CASE
For clarity of presentation, we describe the method in the one-dimensional linear case and then extend it to several dimensions and nonlinear form (<ref>). The linear advection equation for an unknown function ϕ=ϕ(x,t) with a given velocity function u=u(x) can be written as
∂_t ϕ(x,t) + u(x) ∂_x ϕ(x,t) = 0 , x ∈ (0,L) , t >0 .
Let x_i ∈ [0,L] (with L given) and t^n ≥ 0 be the discrete spatial and temporal points with the indices running from 0 to given values I and N, respectively. We restrict ourselves to a uniform spatial mesh, so h := x_i+1-x_i, with x_0=0 and x_I=L being the boundary nodes. For simplicity, we use a uniform time step τ:=t^n+1-t^n, but the method can be used with variable time steps. In what follows, we use the short notation ϕ_i^n:=ϕ(x_i,t^n) and similarly for the partial derivatives of ϕ. Analogously, u_i:=u(x_i).
The equation (<ref>) must be accompanied by a given initial function ϕ^0=ϕ^0(x) and given boundary functions ϕ_0=ϕ_0(t) (if u(0) >0) and ϕ_L=ϕ_L(t) (if u(L)<0), which are used to define the discrete values
ϕ_i^0=ϕ^0(x_i) , i=0,1,…,I ,
ϕ^n_0=ϕ_0(t^n) if u_0>0 , n=1,2,…,N ,
ϕ^n_I=ϕ_L(t^n) if u_I<0 , n=1,2,…,N .
We begin the derivation of the scheme with the Taylor series expansion in a form suitable to derive an implicit type of schemes,
ϕ_i^n-1=ϕ_i^n-τ∂_t ϕ_i^n + τ^2/2∂_ttϕ_i^n - τ^3/6∂_tttϕ_i^n + 𝒪(τ^4) .
The idea of the Lax-Wendroff (or Cauchy-Kowalevskaya) procedure is to replace the time derivatives in (<ref>) by terms containing spatial derivatives using the equation (<ref>). In what follows, we do it gradually. First, we derive a parametric form of the 2nd order scheme, then its high resolution extension, and, finally, we extend the scheme to the 3rd order accurate form.
§.§ Parametric second order accurate scheme
Opposite to the original Lax-Wendroff procedure, where all time derivatives in (<ref>) are replaced by space derivatives <cit.> using (<ref>), we use a partial Lax-Wendroff procedure where mixed derivatives are allowed <cit.>,
∂_t ϕ_i^n = - u_i ∂_x ϕ_i^n , ∂_ttϕ_i^n = - u_i ∂_txϕ_i^n .
Applying (<ref>) in (<ref>) we obtain
ϕ_i^n-1 = ϕ_i^n + τ u_i ∂_x ϕ_i^n
- τ^2/2 u_i ∂_txϕ_i^n + 𝒪(τ^3) .
To obtain a fully discrete scheme, one can approximate the terms after τ and τ^2 in (<ref>) with the second and first order accurate finite differences, respectively.
To do so, we denote non-dimensional Courant numbers
C_i := τ u_i/h .
First, we derive the scheme for the case C_i > 0 that determines an upwind form of finite difference approximations.
Later, we present the scheme for a general case of arbitrary signs of C_i.
To approximate the term in (<ref>) after τ, we consider the parametric approximation
h ∂_x ϕ_i^n ≈ϕ_i^n - ϕ_i-1^n + 1-w_i/2 (ϕ_i+1^n - 2 ϕ_i^n + ϕ_i-1^n) + w_i/2 (ϕ_i^n - 2 ϕ_i-1^n + ϕ_i-2^n) .
The approximation is second order accurate for any choice of the parameter w_i∈ R, and it is third order accurate for the particular choice w_i=1/3 <cit.>. The two particular choices w_i=0 and w_i=1 give approximations using reduced stencils that can be used for approximations near boundary nodes where the full stencil is not available. All other choices w_i ∈ (0,1) use in (<ref>) the interpolation with the full stencil of the values from ϕ_i-2^n up to ϕ_i+1^n. For stability reasons <cit.>, only w_i≥ 0 shall be considered, therefore, the choice w_i>1 in (<ref>) is possible and can be viewed as an extrapolation. Note that, in general, different w_i can be used in each time step, which we do not emphasize in the notation.
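A direct transcription of this parametric approximation (for C_i>0) reads, in Python,

import numpy as np

def dphi_dx(phi, i, w):
    # parametric upwind approximation of h * d(phi)/dx at node i (C_i > 0);
    # second order for any w, third order for w = 1/3
    base = phi[i] - phi[i - 1]
    central = 0.5 * (1.0 - w) * (phi[i + 1] - 2.0 * phi[i] + phi[i - 1])
    upwind = 0.5 * w * (phi[i] - 2.0 * phi[i - 1] + phi[i - 2])
    return base + central + upwind

x = np.linspace(0.0, 1.0, 101)
h = x[1] - x[0]
phi = np.sin(2.0 * np.pi * x)
approx = dphi_dx(phi, 50, 1.0 / 3.0) / h          # compare with the exact value
exact = 2.0 * np.pi * np.cos(2.0 * np.pi * x[50])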
Next, we approximate the terms after τ^2, for which it is enough to use a first order accurate approximation. We again propose a parametric approximation, but now with the purpose of obtaining a “compact scheme”. Namely, we want to cancel the term with ϕ_i+1^n in (<ref>) for any w_i when used in the Taylor series (<ref>). To do so, we propose
τ^2/2 u_i ∂_txϕ_i^n ≈ C_i/2( (1-w_i) (ϕ_i+1^n - ϕ_i^n - ϕ_i+1^n-1 + ϕ_i^n-1) + w_i (ϕ_i^n - ϕ_i-1^n - ϕ_i^n-1 + ϕ_i-1^n-1) ) .
Putting all these approximations together and neglecting the truncation error for the values Φ_i^n ≈ϕ_i^n of numerical solution, we obtain
the final parametric second order accurate numerical scheme for C_i>0
Φ_i^n + C_i ( Φ_i^n - Φ_i-1^n + 1-w_i/2( Φ_i+1^n-1 - Φ_i^n-1 - Φ_i^n + Φ_i-1^n ) + w_i/2( Φ_i^n-1 - Φ_i-1^n-1 - Φ_i-1^n + Φ_i-2^n) ) = Φ_i^n-1
The leading error term E of the scheme (<ref>) can be expressed in the form
E = τ^3/6 ∂_tttϕ_i^n + h τ^2/4 u_i ∂_ttxϕ_i^n + h^2 τ/4 (2w_i-1) u_i ∂_txxϕ_i^n + h^3/6 (3w_i-1) u_i ∂_xxxϕ_i^n .
In the case of constant velocity, i.e. u_i ≡u̅, so C_i ≡C̅, we can apply the standard Lax-Wendroff procedure,
∂_txxϕ_i^n = -u̅∂_xxxϕ_i^n , ∂_ttxϕ_i^n = u̅^2 ∂_xxxϕ_i^n , ∂_tttϕ_i^n = - u̅^3 ∂_xxxϕ_i^n .
Using it in (<ref>), we obtain
E = h^3/12C̅ (1+C̅) (2 + C̅ - 6 w_i) ∂_xxxϕ_i^n .
Clearly, the choice w_i = (2 + C̅)/6 cancels the third order error term E, so for this choice of parameter the scheme (<ref>) is third order accurate if the velocity u is constant. Such a possibility is well known also for analogous parametric fully explicit schemes <cit.> or fully implicit schemes <cit.>.
The main advantage of the compact scheme (<ref>) is that the resulting linear algebraic system is defined by a lower triangular matrix; therefore, the unknowns Φ_i^n can be obtained directly if the equations (<ref>) are solved in the order i=1,2,…,I. The system (<ref>) must be accompanied by proper approximations near the boundary nodes.
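The following Python sketch illustrates this forward substitution for one time step with C_i>0; the boundary closures (a Dirichlet value at the inflow node and reduced stencils with w=0 and w=1 next to the boundaries) are one simple choice made for the sketch.

import numpy as np

def advance_compact(phi_old, C, w, phi_left):
    # One time step of the compact semi-implicit second order scheme, C_i > 0,
    # solved by forward substitution (the system matrix is lower triangular).
    # Reduced stencils (w = 0 and w = 1) are used next to the boundaries.
    I = phi_old.size - 1
    phi = np.empty_like(phi_old)
    phi[0] = phi_left                          # inflow Dirichlet value
    a = 1.0 + 0.5 * C[1]                       # node 1: reduced stencil w = 0
    phi[1] = (phi_old[1] + C[1] * (phi[0]
              - 0.5 * (phi_old[2] - phi_old[1] + phi[0]))) / a
    for i in range(2, I):
        a = 1.0 + 0.5 * C[i] * (1.0 + w[i])
        rhs = (phi_old[i] + C[i] * (phi[i - 1]
               - 0.5 * (1.0 - w[i]) * (phi_old[i + 1] - phi_old[i] + phi[i - 1])
               - 0.5 * w[i] * (phi_old[i] - phi_old[i - 1] - phi[i - 1] + phi[i - 2])))
        phi[i] = rhs / a
    a = 1.0 + C[I]                             # node I: reduced stencil w = 1
    phi[I] = (phi_old[I] + C[I] * (phi[I - 1]
              - 0.5 * (phi_old[I] - phi_old[I - 1] - phi[I - 1] + phi[I - 2]))) / a
    return phi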
When deriving the scheme for C_i<0, we obtain
Φ_i^n + C_i ( Φ_i+1^n - Φ_i^n - 1-w_i/2( Φ_i+1^n - Φ_i^n - Φ_i^n-1 + Φ_i-1^n-1) - w_i/2( Φ_i+2^n - Φ_i+1^n - Φ_i+1^n-1 + Φ_i^n-1) ) = Φ_i^n-1 .
Analogous considerations about the linear algebraic system obtained by (<ref>) lead to the fact that it can be solved directly if treated in the order i=I-1,I-2,…,0, and if a proper treatment of approximations near boundary nodes is used.
The general case that covers (<ref>) and (<ref>) can be written as follows,
Φ_i^n + | C_i | ( Φ_i^n - Φ_i∓ 1^n + 1-w_i/2( Φ_i± 1^n-1 - Φ_i^n-1 - Φ_i^n + Φ_i∓ 1^n) + w_i/2( Φ_i^n-1 - Φ_i∓ 1^n-1 - Φ_i∓ 1^n + Φ_i∓ 2^n) ) = Φ_i^n-1 .
where ±=sign(C_i) and ∓=-sign(C_i). To solve the linear system of algebraic equations (<ref>), we use the fast sweeping method <cit.>, for which each iteration consists of two Gauss-Seidel iterations with alternating ordering of the equations, as used for (<ref>) and (<ref>). In fact, if there is no index 𝚒∈{2,3,…,I-2} such that C_𝚒<0 and C_𝚒+1>0, the system can be solved using one fast sweeping iteration; see <cit.> for a proof. If such an index occurs, there exists x̅∈ (x_𝚒,x_𝚒+1) such that u(x̅)=0 and u'(x̅)>0. To preserve the efficiency of the fast sweeping method,
we switch locally to the first order scheme for 𝚒 and 𝚒+1. Let Φ̅ denote an interpolated value of the numerical solution in x̅ at time level n-1, e.g. a linear interpolation of Φ_𝚒^n-1 and Φ_𝚒+1^n-1. As u(x̅)=0, we preserve this value for t ∈ (t^n-1,t^n), and we define discrete equations
Φ_𝚒^n + τ u_𝚒/x̅ - x_𝚒( Φ̅- Φ_𝚒^n ) = Φ_𝚒^n-1 , Φ_𝚒+1^n + τ u_𝚒+1/x_𝚒+1-x̅( Φ_𝚒+1^n - Φ̅) = Φ_𝚒+1^n-1
that are explicitly and independently solvable, see also <cit.> for more details.
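A schematic implementation of the fast sweeping iteration for the general-sign scheme (<ref>) is given below; periodic indexing and a fixed number of sweeps are simplifications chosen to keep the sketch short, and the special treatment of sign-change nodes described above is omitted.

import numpy as np

def sweep_step(phi, phi_old, C, w, order):
    # one Gauss-Seidel pass over the general-sign compact scheme; periodic
    # indexing keeps the sketch short (boundaries and sign-change nodes are
    # treated separately in the text)
    n = phi.size
    for i in order:
        s = 1 if C[i] >= 0.0 else -1                    # upwind direction
        im, im2, ip = (i - s) % n, (i - 2 * s) % n, (i + s) % n
        c = abs(C[i])
        a = 1.0 + 0.5 * c * (1.0 + w[i])
        rhs = (phi_old[i] + c * (phi[im]
               - 0.5 * (1.0 - w[i]) * (phi_old[ip] - phi_old[i] + phi[im])
               - 0.5 * w[i] * (phi_old[i] - phi_old[im] - phi[im] + phi[im2])))
        phi[i] = rhs / a

def fast_sweeping(phi_old, C, w, n_sweeps=5):
    phi = phi_old.copy()
    idx = np.arange(phi.size)
    for _ in range(n_sweeps):
        sweep_step(phi, phi_old, C, w, idx)             # forward ordering
        sweep_step(phi, phi_old, C, w, idx[::-1])       # backward ordering
    return phi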
Concerning a choice of the value for w_i, we prefer the space dependent value
w_i = 2+| C_i|/6
that is third order accurate in the case of constant velocity and that appears appropriate also in the case of variable velocity <cit.>. In the next section, we introduce a high resolution form of the semi-implicit scheme (<ref>) where the parameter w_i will depend on the numerical solution and will differ from (<ref>) for the grid nodes where the approximation of ∂_x ϕ_i^n varies significantly.
§.§ High-resolution scheme
For level set methods, the quality of the approximation of the first derivative of the solution (the gradient) can be of great importance. The level set function itself is continuous, but its derivatives can in general be only piecewise continuous, having jumps in some parts of the computational domain. Therefore, one can expect nonphysical oscillations in the approximation of the first derivative if the second order scheme (<ref>) is used with a fixed stencil <cit.>, that is, with fixed values of the parameters w_i.
In this section, to avoid such behavior when the approximation of the gradient is important near discontinuities, we propose to use a nonlinear form of the scheme with parameters w_i depending on the numerical solution, similar to <cit.>, which we adapt to the level set equation (<ref>). Moreover, we propose the scheme in a predictor-corrector form that simplifies the solution of the resulting nonlinear algebraic equations.
Note that the scheme (<ref>) for Φ_i^n can be used to define an analogous “conservative” scheme for the (undivided) backward finite differences (if C_i>0) to approximate Ψ_i^n ≈ h ∂_x ϕ(x_i,t^n),
Ψ_i^n := Φ_i^n - Φ_i-1^n , i=1,2,…,I , n=0,1,… .
To show it, we rewrite the scheme (<ref>) into the form
Φ_i^n + C_i ( Ψ_i^n +
1-w_i/2( Ψ_i+1^n-1 - Ψ_i^n ) + w_i/2( Ψ_i^n-1 - Ψ_i-1^n) ) = Φ_i^n-1 .
Furthermore, using the notation for a “numerical flux function”,
F_i := u_i ( Ψ_i^n + 1/2( (1-w_i) ( Ψ_i+1^n-1 - Ψ_i^n ) + w_i ( Ψ_i^n-1 - Ψ_i-1^n) ) )
and computing the difference of (<ref>) for i and i-1, we obtain
Ψ_i^n + τ/h( F_i - F_i-1) = Ψ_i^n-1 ,
that can be viewed formally as a conservative finite difference scheme to solve
∂_t ψ + ∂_x (u ψ) = 0
with ψ:=∂_x ϕ.
Compact implicit conservative schemes of the type (<ref>) with (<ref>) were studied in <cit.>, which we use here to define a high-resolution form of the numerical fluxes in (<ref>) to obtain TVD (Total Variation Diminishing) approximations of ψ in (<ref>). Such a property in the discrete form is defined by
∑_i=1^I |Ψ_i^n - Ψ_i-1^n |≤∑_i=1^I |Ψ_i^n-1 - Ψ_i-1^n-1|
if appropriate boundary conditions are supposed (e.g., periodic ones).
An enormous amount of research is available for (TVD) high-resolution schemes in the case of fully explicit time discretizations starting with <cit.>, see also monographs <cit.> or review in <cit.>. Similarly, high-resolution schemes for implicit time discretization are developed <cit.>. Here we adapt the methodology for the semi-implicit time discretization.
To propose a (nonlinear) TVD form of the parametric 2nd order scheme (<ref>) with w_i depending on numerical solution, we introduce indicators r_i that measure a ratio between two variants of the 2nd order updates in (<ref>) (i.e., the term multiplied by either (1-w_i) or w_i)
r_i := (Ψ_i^n-1 - Ψ_i-1^n)/(Ψ_i+1^n-1 - Ψ_i^n) .
This indicator r_i is clearly specific for the semi-implicit scheme and it depends on the unknown value Ψ_i^n (i.e., on Φ_i^n). Next, we continue in the spirit of other (explicit or implicit) high-resolution methods. If we define the coefficients s_i,
s_i = 1 - w_i + w_i r_i ,
the fluxes F_i can be written in the form,
F_i = u_i (Ψ_i^n + 1/2 s_i ( Ψ_i+1^n-1 - Ψ_i^n))
The values s_i in (<ref>) can be formally viewed as the “slopes” of the second order updates of the first order scheme. In what follows, we propose a high-resolution form of the general 1D scheme (<ref>) where the slopes s_i are replaced by limited and predicted values l_i that should not differ from s_i whenever possible. We present here an algorithm to compute l_i; the motivation for such computations, with a proof of the TVD property in the case of constant velocity, is given in the Appendix.
Firstly, since the indicator r_i depends on the unknown solution Φ_i^n, we have to predict its value. We compute it with (<ref>) using the choice of the parameter w_i in (<ref>). Denoting the predicted solution by Φ_i^n,p, we compute the predicted value r_i^p of r_i,
r_i^p = (Φ_i^n-1 - Φ_i∓ 1^n-1 - Φ_i∓ 1^n + Φ_i∓ 2^n)/(Φ_i± 1^n-1 - Φ_i^n-1 - Φ_i^n,p + Φ_i∓ 1^n) .
Next, we compute a preliminary value l_i^p of l_i,
l_i^p = max{ 0 , min{ s_i^p , 2 }} , s_i^p = 1 - w_i + w_i r_i^p .
Finally, we compute the value l_i by
l_i = max{ 0, min{l_i^p , (2/| C_i | + l_i∓ 1) r_i^p }} .
We note that other approaches can be used to compute the limited values l_i of s_i^p than in (<ref>), see, e.g., <cit.>.
Having the value l_i, the final scheme with one corrector step takes the form
Φ_i^n + | C_i |( Φ_i^n - Φ_i∓ 1^n +
1/2 l_i ( Φ_i± 1^n-1 - Φ_i^n-1 - Φ_i^n,p +Φ_i∓ 1^n ) ) = Φ_i^n-1 .
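For illustration, a minimal Python sketch of one predictor-corrector step is given below. It is not the authors' MATLAB implementation; it assumes a uniform grid, a constant positive Courant number C, Dirichlet inflow data frozen in the first two nodes, the last node used only through its old-time value, and the already corrected new-time neighbors in r_i^p. The function name is hypothetical.

import numpy as np

def high_resolution_step_1d(phi_old, C):
    # phi_old holds Phi^{n-1}; returns Phi^n for constant Courant number C > 0
    phi_old = np.asarray(phi_old, dtype=float)
    I = phi_old.size
    w = (2.0 + C) / 6.0                     # parameter choice used in the predictor
    # predictor: unlimited second order scheme, exact forward substitution for C > 0
    phi_p = phi_old.copy()
    for i in range(2, I - 1):
        b_known = (-phi_p[i-1]
                   + 0.5 * (1.0 - w) * (phi_old[i+1] - phi_old[i] + phi_p[i-1])
                   + 0.5 * w * (phi_old[i] - phi_old[i-1] - phi_p[i-1] + phi_p[i-2]))
        phi_p[i] = (phi_old[i] - C * b_known) / (1.0 + 0.5 * C * (1.0 + w))
    # corrector: limited slopes l_i, again solved by a forward sweep
    phi_new = phi_old.copy()
    l_prev, eps = 0.0, 1e-14
    for i in range(2, I - 1):
        num = phi_old[i] - phi_old[i-1] - phi_new[i-1] + phi_new[i-2]
        den = phi_old[i+1] - phi_old[i] - phi_p[i] + phi_new[i-1]
        r_p = num / (den if abs(den) > eps else eps)
        l_p = max(0.0, min(1.0 - w + w * r_p, 2.0))
        l_i = max(0.0, min(l_p, (2.0 / C + l_prev) * r_p))
        phi_new[i] = (phi_old[i] + C * phi_new[i-1]
                      - 0.5 * C * l_i * (phi_old[i+1] - phi_old[i]
                                         - phi_p[i] + phi_new[i-1])) / (1.0 + C)
        l_prev = l_i
    return phi_new

The forward ordering of the two loops plays the role of one Gauss-Seidel sweep, which is exact here because the velocity does not change sign.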
§.§ Third order accurate scheme
In this section, we use the property that the parametric approximation of ∂_x ϕ_i^n in (<ref>) is third order accurate for the particular choice w=1/3 and with some additional effort we extend the scheme (<ref>) to third order accuracy in space and time.
To do so, we apply the Lax-Wendroff procedure to the derivatives ∂_tttϕ_i^n and ∂_ttxϕ_i^n in (<ref>) to replace them with mixed derivatives,
∂_tttϕ_i^n = - u_i ∂_ttxϕ_i^n , ∂_ttxϕ_i^n =
- ∂_x(u_i ∂_txϕ_i^n) .
Using now (<ref>) together with w=1/3, the leading error term E in (<ref>) simplifies to the form,
E = -u_i τ^3/12∂_x (u_i ∂_txϕ_i^n) -h τ^2/12 u_i ∂_txxϕ_i^n .
To obtain the third order accurate numerical scheme for C_i>0, we extend the second order scheme (<ref>) by adding finite difference approximations of E in (<ref>). Of course, we have to do it using the chosen stencil, namely,
u_i τ^3/12∂_x (u_i ∂_txϕ_i^n) ≈C_i/12( C_i (ϕ_i^n - ϕ_i-1^n - ϕ_i^n-1 + ϕ_i-1^n-1)
- C_i-1 (ϕ_i-1^n - ϕ_i-2^n - ϕ_i-1^n-1 + ϕ_i-2^n-1) )
and
h τ^2/12 u_i ∂_txxϕ_i^n ≈C_i/12(ϕ_i^n - 2 ϕ_i-1^n + ϕ_i-2^n - ϕ_i^n-1 + 2 ϕ_i-1^n-1 - ϕ_i-2^n-1) .
The final form of the scheme can be written as follows
Φ_i^n +
C_i/12( 9 Φ_i^n - 12 Φ_i-1^n + 3 Φ_i-2^n + 4 Φ_i+1^n-1 - 3 Φ_i^n-1 - Φ_i-2^n-1
+ C_i (Φ_i^n-Φ_i-1^n - Φ_i^n-1 + Φ_i-1^n-1)
- C_i-1(Φ_i-1^n-Φ_i-2^n - Φ_i-1^n-1 + Φ_i-2^n-1) ) =Φ_i^n-1 .
Again, the general case can be derived in the following form,
Φ_i^n +
| C_i|/12( 9 Φ_i^n - 12 Φ_i∓ 1^n + 3 Φ_i∓ 2^n + 4 Φ_i± 1^n-1 - 3 Φ_i^n-1 - Φ_i∓ 2^n-1
+ | C_i | (Φ_i^n-Φ_i∓ 1^n - Φ_i^n-1 + Φ_i∓ 1^n-1)
∓ C_i∓ 1 (Φ_i∓ 1^n-Φ_i∓ 2^n - Φ_i∓ 1^n-1 + Φ_i∓ 2^n-1) ) =Φ_i^n-1 ,
where ± = sign(C_i) and ∓ = -sign(C_i). We note that the scheme (<ref>) in the case of constant velocity is different from the scheme (<ref>) with choice (<ref>). It seems too complex to prove the unconditional stability of (<ref>) for a constant velocity case using an analytical von Neumann stability analysis as in <cit.>. However, the numerical stability analysis <cit.> and all numerical experiments suggest that this property is preserved for (<ref>).
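Because the stencil of the scheme above is fully upwind in the new time level, a single forward sweep solves the implicit system exactly when all C_i > 0. The following minimal Python sketch illustrates such a sweep; the function name, the boundary handling (two inflow nodes on the left, the last node used only through its old-time value), and the interface are illustrative assumptions, not the authors' MATLAB code.

import numpy as np

def third_order_step_1d(phi_old, C):
    # phi_old holds Phi^{n-1}; C holds the local Courant numbers C_i, assumed positive
    phi_old = np.asarray(phi_old, dtype=float)
    C = np.asarray(C, dtype=float)
    I = phi_old.size
    phi_new = phi_old.copy()
    for i in range(2, I - 1):
        Ci, Cim1 = C[i], C[i-1]
        known = (Ci / 12.0) * (-12.0 * phi_new[i-1] + 3.0 * phi_new[i-2]
                               + 4.0 * phi_old[i+1] - 3.0 * phi_old[i] - phi_old[i-2]
                               + Ci * (-phi_new[i-1] - phi_old[i] + phi_old[i-1])
                               - Cim1 * (phi_new[i-1] - phi_new[i-2]
                                         - phi_old[i-1] + phi_old[i-2]))
        phi_new[i] = (phi_old[i] - known) / (1.0 + Ci * (9.0 + Ci) / 12.0)
    return phi_new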
§ ADVECTION IN SEVERAL DIMENSIONS
An advantage of the one-dimensional second order accurate scheme from Section <ref> and the high resolution scheme from Section <ref> is that they can be used in a straightforward manner in several dimensions by applying them dimension by dimension <cit.>. We show this first for the linear advection equation
∂_t ϕ( x, t) + v⃗( x) ·∇ϕ( x, t) = 0 .
The partial Lax-Wendroff procedure takes in the two-dimensional case (i.e., x=(x,y), v⃗=(u,v)) the form
∂_t ϕ_ij^n = - u_ij∂_x ϕ_ij^n - v_ij∂_y ϕ_ij^n , ∂_ttϕ_ij^n = - u_ij∂_txϕ_ij^n - v_ij∂_tyϕ_ij^n ,
where each term occurs analogously in x and y direction. We have extended the notation from (<ref>) in Section <ref> as follows: y_j = j h, j=0,1,…,J (with J given), ϕ_ij^n = ϕ(x_i,y_j,t^n) and similarly for u_ij, v_ij and C_ij. Moreover, we have to introduce the local Courant numbers for the second component of the velocity,
D_ij = τ v_ij/h .
The unlimited version of the 2D scheme can be then written formally as follows,
Φ_ij^n
+ | C_i j|( Φ_i j^n - Φ_i∓ 1 j^n + (1-w^x_i j)/2( Φ_i∓ 1 j^n - …) )
+ | D_i j|( Φ_i j^n - Φ_i j∓ 1^n + (1-w^y_i j)/2( Φ_i j∓ 1^n - …) ) = Φ_i j^n-1 ,
where the first term in large parentheses shall be completed analogously to (<ref>), and the second one likewise but adapted to the y direction. The parameters w_ij^x and w_ij^y now correspond to w_i in (<ref>) applied in the x and y directions, respectively.
It was shown in <cit.> that the linear second order scheme (<ref>) is only conditionally stable. For example, for the choice w_i ≡ 0.5, the scheme is stable up to Courant numbers | C_ij| and | D_ij| approximately equal to 7.396 <cit.>, which is a significant improvement with respect to analogous explicit schemes.
In general, if computations are realized with very large Courant numbers, or if the approximation of the gradient of a non-smooth level set function is important, the high resolution method (<ref>) can be straightforwardly extended to several dimensions as follows,
Φ_ij^n + | C_ij|( Φ_ij^n - Φ_i∓ 1 j^n +
1/2 l^x_ij( Φ_i± 1 j^n-1 - Φ_i j^n-1 - Φ_ij^n,p +Φ_i∓ 1 j^n ) )
+ | D_ij|( Φ_ij^n - Φ_i j∓ 1^n + 1/2 l^y_ij( Φ_i j± 1^n-1 - Φ_i j^n-1 - Φ_ij^n,p +Φ_i j∓ 1^n ) )
= Φ_ij^n-1 .
The predicted values Φ_ij^n,p and the values l_ij^x and l_ij^y of the limiter are obtained by a natural extension of the one dimensional case with l_i in (<ref>).
Finally, we extend the third order method from Section <ref> to the two-dimensional case of (<ref>). Interestingly, such a scheme not only increases the accuracy, but also delivers unconditional stability.
To derive the scheme, we have to extend (<ref>) as follows
∂_tttϕ_ij^n =
- u_ij∂_ttxϕ_ij^n
- v_ij∂_ttyϕ_ij^n
together with
∂_ttxϕ_ij^n =
- ∂_x(u_ij∂_txϕ_ij^n) - ∂_x(v_ij∂_tyϕ_ij^n) ,
∂_ttyϕ_ij^n =
- ∂_y(u_ij∂_txϕ_ij^n) - ∂_y(v_ij∂_tyϕ_ij^n) .
Consequently, the leading error term (<ref>) in the 2D case takes the form
E = - u_ijh τ^2/12∂_txxϕ_ij^n - u_ijτ^3/12∂_x (u_ij∂_txϕ_ij^n)
- v_ijh τ^2/12∂_tyyϕ_ij^n
- v_ijτ^3/12∂_y (v_ij∂_tyϕ_ij^n)
-u_ijτ^3/12∂_x (v_ij∂_tyϕ_ij^n) -v_ijτ^3/12∂_y (u_ij∂_txϕ_ij^n) .
While the first four terms also occur in the one-dimensional case, see (<ref>), the last two terms in (<ref>) are specific for problems in several dimensions.
Their finite difference approximation is rather straightforward,
u_ijτ^3/12∂_x (v_ij∂_tyϕ_ij^n) ≈ sign(D_ij) | C_ij|/12( D_ij (ϕ_ij^n - ϕ_i j∓ 1^n - ϕ_ij^n-1 + ϕ_i j∓ 1^n-1)
- D_i∓ 1 j (ϕ_i∓ 1 j^n - ϕ_i∓ 1 j∓ 1^n - ϕ_i∓ 1 j^n-1 + ϕ_i∓ 1 j∓ 1^n-1) ) ,
where the sign in the first index, e.g. in i∓ 1, is decided from ∓ = -sign(C_ij) and ± = sign(C_ij), and analogously for the second index, e.g. in j∓ 1, one takes ∓ = -sign(D_ij) and ± = sign(D_ij). The second term is discretized analogously,
v_ijτ^3/12∂_y (u_ij∂_txϕ_ij^n) ≈ sign(C_ij) | D_ij|/12( C_ij (ϕ_ij^n - ϕ_i∓ 1 j^n - ϕ_ij^n-1 + ϕ_i∓ 1 j^n-1)
- C_i j∓ 1 (ϕ_i j∓ 1^n - ϕ_i∓ 1 j∓ 1^n - ϕ_i j∓ 1^n-1 + ϕ_i∓ 1 j∓ 1^n-1) ) .
To define the complete scheme in the general case, we do it formally as follows,
Φ_ij^n +
| C_ij|/12(9 Φ_ij^n - 12 Φ_i∓ 1 j^n + …)
+ | D_ij|/12(9 Φ_ij^n - 12 Φ_i j ∓ 1^n + …)
+ sign(D_ij) | C_ij|/12( D_ij (Φ_ij^n - Φ_i j∓ 1^n + …)
+ sign(C_ij) | D_ij|/12( C_ij (Φ_ij^n - Φ_i∓ 1 j^n + …) = Φ_ij^n-1 ,
where the first term in parentheses of (<ref>) is completed as in (<ref>), analogously the second one, but adapted to the variable y, and the third and fourth terms are completed according to (<ref>) and (<ref>), respectively.
Note that the matrix for the system (<ref>) of linear algebraic equations has off-diagonal terms only in an upwind direction, therefore, algebraic solvers like the fast sweeping methods with four alternating directions of Gauss-Seidel iterations <cit.> can be used efficiently. As noted above, we have applied the von Neumann stability analysis for the system (<ref>) with constant (“frozen”) values of the velocity, and unconditional stability is clearly indicated by advanced numerical optimization procedures in the software Mathematica <cit.> and is confirmed by all numerical experiments.
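The four alternating orderings can be organized as in the following schematic Python sketch, where update_node is a placeholder for the explicit solution of the scheme's equation for Φ_ij^n at one grid node using the upwind neighbors from the current iterate; the names and the calling convention are illustrative assumptions.

def fast_sweeping(phi, update_node, I, J, n_iter=1):
    # four Gauss-Seidel orderings with alternating directions in i and j
    orderings = [
        (range(I), range(J)),
        (range(I - 1, -1, -1), range(J)),
        (range(I - 1, -1, -1), range(J - 1, -1, -1)),
        (range(I), range(J - 1, -1, -1)),
    ]
    for _ in range(n_iter):
        for i_range, j_range in orderings:
            for i in i_range:
                for j in j_range:
                    # overwrite phi[i][j] in place using the scheme's upwind stencil
                    update_node(phi, i, j)
    return phi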
§ NONLINEAR ADVECTION EQUATION
Up to now, we have considered only linear advection (<ref>). To solve the nonlinear level set advection equation (<ref>), we use a well-known approach of semi-implicit schemes <cit.>, where a semi-linear form of the PDE is linearized by evaluating the nonlinear coefficients with the solution at the previous time level. In particular, we replace (<ref>) for t ∈ (t^n-1,t^n) by (<ref>) with
v⃗(x,y) = u⃗(x,y) + δ(x,y) ∇ϕ(x,y,t^n-1)/|∇ϕ(x,y,t^n-1)| .
In general, one can lose some accuracy in the numerical approximation, as the normal direction of the level sets is frozen at the left point of the time interval.
To compute the approximations of the gradient ∇ϕ(x_i,y_j,t^n-1) in (<ref>), one has to choose very carefully an upwind type of finite differences with appropriate accuracy.
To propose such an upwind finite difference, we follow the strategy in <cit.>.
We suppose that ϕ(x,y,t) fulfills the standard sign property, namely that its zero level set represents a closed interface and ϕ<0 inside of the closed region and ϕ>0 otherwise <cit.>.
Having such property, we use the approximations
h ∂_x ϕ_ij^n-1≈{[ Φ_ij^n-1 - Φ_i-1 j^n-1,w^x , Φ_i-1 j^n-1,w^x < min{Φ_ij^n-1, Φ_i+1 j^n-1,w^x}; Φ_i+1 j^n-1,w^x - Φ_ij^n-1 , Φ_i+1 j^n-1,w^x < min{Φ_ij^n-1, Φ_i-1 j^n-1,w^x}; 0 otherwise ].
and analogously for ∂_y ϕ_ij^n-1.
The values of Φ_i∓ 1 j^n-1,w^x are computed using the variable parametric form of the second order accurate approximation for ∂_x ϕ_i^n-1 analogously to (<ref>) together with the idea of Weighted Essentially Non-Oscillatory (WENO) approximations as used in <cit.>.
Namely, the value of w^x=w^x_ij is computed as
w^x_ij = 1/1+2 (r_ij^x)^2 ,
where the indicators r_ij^x, and consequently the parameters w^x_ij, are computed differently for Φ_i-1 j^n-1,w^x and Φ_i+1 j^n-1,w^x. Namely,
Φ_i± 1 j^n-1,w^x = Φ_ij^n-1±(1-w^x_ij)/2 (Φ_i+1 j^n-1 - Φ_i-1 j^n-1)
+ w^x_ij/2 (- 3 Φ_i j^n-1 + 4 Φ_i± 1 j^n-1 - Φ_i± 2 j^n-1)
and
r_ij^x = (ϵ + (Φ_i± 2 j^n-1 - 2 Φ_i± 1 j^n-1 + Φ_i j^n-1)^2)/(ϵ + (Φ_i+1 j^n-1 - 2 Φ_i j^n-1 + Φ_i-1 j^n-1)^2) .
The parameter ϵ has a small value to avoid a division by zero, e.g. ϵ=10^-7. Analogous definitions are used to define the approximation of ∂_y ϕ_ij^n-1 in (<ref>).
Once we have the approximation (<ref>), we can evaluate the velocity v⃗ in (<ref>) at each grid point (x_i,y_j), and the schemes in Section <ref> can be applied straightforwardly.
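For the x-direction, the weighted one-sided reconstruction and the selection rule above can be sketched in Python as follows, working on one grid row and assuming at least two neighbors on each side of the node; the function name and the slice-based interface are illustrative, not the authors' implementation.

def upwind_dx(row, i, eps=1.0e-7):
    # returns an approximation of h * d(phi)/dx at node i of a 1D slice of Phi^{n-1}
    def reconstruct(sgn):
        # smoothness indicator and weight for the one-sided value (sgn = -1 or +1)
        r = (eps + (row[i + 2*sgn] - 2.0*row[i + sgn] + row[i])**2) \
            / (eps + (row[i + 1] - 2.0*row[i] + row[i - 1])**2)
        w = 1.0 / (1.0 + 2.0 * r**2)
        return (row[i] + sgn * 0.5 * (1.0 - w) * (row[i + 1] - row[i - 1])
                + 0.5 * w * (-3.0*row[i] + 4.0*row[i + sgn] - row[i + 2*sgn]))
    phi_minus = reconstruct(-1)   # plays the role of Phi_{i-1}^{n-1,w^x}
    phi_plus = reconstruct(+1)    # plays the role of Phi_{i+1}^{n-1,w^x}
    if phi_minus < min(row[i], phi_plus):
        return row[i] - phi_minus
    if phi_plus < min(row[i], phi_minus):
        return phi_plus - row[i]
    return 0.0

An analogous function in the y-direction then yields the frozen normal direction ∇ϕ/|∇ϕ| used in the linearized velocity above.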
§ NUMERICAL EXPERIMENTS
In this section, we illustrate the properties of the proposed numerical schemes on several test problems. If an exact solution is available, we use it to set the boundary conditions and the initial condition. To check the Experimental Order of Convergence (EOC), we use the exact values of the solution not only at the boundary points with the inflow boundary conditions, but also at the neighboring points outside of the computational domain, if necessary. The implementation is realized in Matlab <cit.>.
The main purpose of experiments is to show that the schemes produce good accuracy when Courant numbers are larger than one (i.e., significantly larger than typical restrictions of explicit schemes) and that they preserve the expected order of convergence for very large Courant numbers with no instabilities produced.
§.§ Linear advection in 1D with a smooth solution
In the following experiment, the velocity is defined as u(x)=sin(x) and the exact solution by
ϕ(x, t)=sin(2arctan(e^-t tan(x/2))) ⇒ ϕ(x,0)=sin(x) .
The example is computed for x ∈ [-π/2,7π/2] and t ∈ [0,2]. Note that the velocity u changes sign four times in the interval, and the local treatment (<ref>) was therefore used twice, at the two points where u changes sign from negative to positive.
The error is computed by
E_I := h ∑_i=0^I |ϕ_i^N - Φ_i^N | .
One can see in Table <ref> that the third order EOC is obtained for both the medium and the large Courant numbers for sufficiently fine computational grids.
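For reference, the error norm E_I and the EOC values reported in the tables can be computed as in the short Python sketch below, assuming the standard definition EOC = log_2(E_I/E_2I) for grids refined by a factor of two (the exact EOC formula is not restated in the text, so this is an assumption).

import numpy as np

def discrete_l1_error(phi_exact, phi_num, h):
    # E_I = h * sum_i |phi_i^N - Phi_i^N|
    return h * np.sum(np.abs(np.asarray(phi_exact) - np.asarray(phi_num)))

def eoc(errors):
    # experimental orders of convergence for errors listed from coarse to fine grids,
    # assuming a refinement factor of two between consecutive grids
    return [float(np.log2(errors[k] / errors[k + 1])) for k in range(len(errors) - 1)]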
§.§ Advection with a non-smooth solution
In the following example, we solve the linear advection equation (<ref>) with constant velocity u(x)≡ 1 and a special form of the initial condition taken from <cit.>, namely, ϕ(x,0)=ϕ^0(x-0.5), where ϕ^0(x) is defined to be periodic and
ϕ^0(x)= - c (x+1)+ {[ 2cos(3 π x^2/2)-√(3) -1≤ x < -1/3,; 3/2+3cos(2π x) -1/3≤ x<0,; 15/2-3cos(2π x) 0 ≤ x<1/3,; 6 π x(x-1) + (28+4π+cos(3π x))/3 1/3≤ x<1 , ].
and c=√(3)/2+9/2+2π/3.
We consider the intervals x ∈ [-1,1] and t ∈ [0,2].
In Figures <ref> and <ref> we present the solutions obtained by the high-resolution scheme (<ref>) and the third order scheme (<ref>).
In Figure <ref>, one can see that the oscillations in the approximation of ∂_x ϕ occur for the third order scheme at the initial time, but they are not amplified as the scheme is stable, see Figure <ref> for the solution at t=2. The high-resolution method successfully reduces such oscillations at each time step. In Table <ref> we present the comparison of errors and EOCs for both methods. Clearly, the high-resolution method produces smaller errors for this example.
§.§ Advection in two-dimensional case
In this section, we present several linear and nonlinear test problems in the two-dimensional case. To solve the resulting linear systems of algebraic equations, we use the fast sweeping method <cit.> with only two iterations for all used grids.
Firstly, we present advection problems with a velocity given as a sum of a linear velocity field and a nonlinear one describing the movement in the normal direction, namely,
u⃗=
[ -y; x ]
+ δ∇ϕ/|∇ϕ|,
where δ is a constant. The linear part of the velocity describes a rotation around the origin with the period 2 π, and the nonlinear part describes an expansion of level sets if δ>0 and a shrinking if δ<0. We choose two representative initial conditions defined by level set functions for a smooth and a non-smooth interface, namely, a circular interface and a square interface, see Figure <ref>. The functions are defined later within the corresponding exact solutions.
Afterwards, an example with exponentially varying velocity will be presented for which the property of no restriction on the time step can be used with a clear profit.
We compute the maximal Courant number 𝒞 as given here,
𝒞 = max{max_i,j,nτ| u(x_i,y_j,∇ϕ_ij^n)|/h , max_i,j,nτ| v(x_i,y_j,∇ϕ_ij^n)|/h}
We present the results for the second order scheme (<ref>) with w_ij^x =w_ij^y ≡ 0.5, the high-resolution scheme (<ref>), and the third order scheme (<ref>). The error is computed by
E_I^N := τ h^2 ∑_n=1^N∑_i,j=0^I |ϕ_i j^n - Φ_i j^n | .
§.§.§ Rotation of a quartic function
Firstly, we show the convergence order of the proposed second and third order schemes for a quartic initial function
with the velocity u⃗ defined in (<ref>) and δ=0. The exact solution is given by
ϕ(x̃,ỹ,t) = x̃^4 + ỹ^4,
x̃=xcos(t)+ysin(t)+ 0.25,
ỹ=ycos(t)-xsin(t) ,
where (x,y)∈Ω=[-1,1]×[-1,1] and t∈[0,π]. We use the exact value of the solution only at the inflow part of the boundary ∂Ω. In Table <ref> we can see the results of the EOC according to the order of the schemes.
§.§.§ Level set function for a smooth interface
In the first nonlinear example, we consider Ω=[-0.5,0.5]×[-0.5,0.5] and t∈[0,π]. The initial condition is a distance function to the point (-0.25,0) ∈Ω,
and the exact solution is defined by
ϕ(x̃,ỹ,t)=max{0,√(x̃^2 + ỹ^2) - δ t } ,
where x̃ and ỹ are defined in (<ref>).
In the first version, we choose δ=-0.1/π, so the initial level sets will rotate and shrink.
In Table <ref>, one can compare the results for the third and the second order schemes.
As the problem is nonlinear with the velocity dependent on the gradient of the solution, the third order scheme exhibits for this example the EOC approaching the value 2 from above, see Table <ref>. On the other hand, the EOC of the second order scheme is converging to 2 from below and the error is clearly larger for all grid levels. A visual comparison is given in Figure <ref> where we present the results for a very large 𝒞 when the second order scheme loses its precision more profoundly.
Next, we perform this experiment for the rotation and expansion of the initial level sets by choosing δ=0.1/π in (<ref>). One can see analogous properties as discussed in the first version of this example.
§.§.§ Level set function for a non-smooth interface
In the second example, we again consider the square computational domain Ω=[-0.5,0.5]×[-0.5,0.5] and the time interval t∈[0,π]. The evolved level set function will now represent interfaces that are (at least initially) non-smooth. Namely, the initial function is now ϕ(x,y,0) given in (<ref>) that has square level sets.
In the first version of this example, analogously to the previous section, we consider the velocity in (<ref>) with δ=-0.1/π, so the interface will rotate and shrink. The exact solution is defined by
ϕ(x̃,ỹ, t) =
{[ ỹ- δ t , ỹ≥|x̃|; -ỹ- δ t , -ỹ≥|x̃|; x̃- δ t , x̃≥|ỹ|; -x̃- δ t , -x̃≥|ỹ| ].
where the transformed coordinates (x̃, ỹ) are defined in (<ref>). Note that in this case, the exact solution is non-smooth. As will be clear from the numerical results presented here, for large Courant numbers the second order scheme (<ref>) produces very imprecise results for this example. Therefore, we also apply the high-resolution scheme (<ref>) to this example and compare the two schemes in Figure <ref>. One can clearly see that the “oscillatory” behavior of the second order scheme is significantly reduced in the results obtained by the high-resolution scheme. Furthermore, we compare the numerical results for the third order scheme with the exact solution in Figure <ref>; they are of good quality even for larger Courant numbers 𝒞. The errors and EOCs of all schemes for this example are given in Table <ref>. Note that small bumps at the corners of square level sets are a typical behavior of higher order approximations <cit.>.
Analogously, we performed this experiment for the rotation and expansion of the initial profile by choosing δ=0.1/π in (<ref>). The exact solution for this example is as follows,
ϕ(x̃,ỹ,t) =
{[ 0, I_1; -x̃-δ t, I_2; x̃-δ t, I_3; -ỹ-δ t, I_4; ỹ-δ t, I_5; √((x̃-d_6)^2+(ỹ-d_6)^2)-δ t+d_6 , I_6 ∖( I_3 ∪ I_5 ); √((x̃-d_7)^2+(ỹ+d_7)^2)-δ t+d_7, I_7 ∖( I_3 ∪ I_4 ); √((x̃+d_8)^2+(ỹ+d_8)^2)-δ t+d_8, I_8 ∖( I_2 ∪ I_4 ); √((x̃+d_9)^2+(ỹ-d_9)^2)-δ t+d_9, I_9 ∖( I_2 ∪ I_5 ) ].
where
I_1 =x̃^2+ỹ^2≤ (δ t)^2
I_2 =(x̃≤-δ t) & (x̃+δ t ≤ỹ) & (ỹ≤ -x̃-δ t)
I_3 =(x̃≥δ t) & (-x̃+δ t ≤ỹ) & (ỹ≤x̃-δ t)
I_4 =(ỹ≤-δ t) & (ỹ+δ t ≤x̃) & (x̃≤ -ỹ-δ t)
I_5 =(ỹ≥δ t) & (-ỹ+δ t ≤x̃) & (x̃≤ỹ-δ t)
I_6 =(x̃ > 0) & (ỹ > 0), d_6=1/2(x̃+ỹ-√(2(δ t)^2-(x̃ - ỹ)^2))
I_7 =(x̃ > 0) & (ỹ < 0), d_7=1/2(x̃-ỹ-√(2(δ t)^2-(x̃ + ỹ)^2))
I_8 =(x̃ < 0) & (ỹ < 0), d_8=1/2(-x̃-ỹ-√(2(δ t)^2-(x̃ - ỹ)^2))
I_9 =(x̃ < 0) & (ỹ > 0), d_9=1/2(-x̃+ỹ-√(2(δ t)^2-(x̃ + ỹ)^2)).
The visual results are presented in Figures <ref> and <ref>, and the errors with the corresponding EOCs are given in Table <ref>. One can again observe that the high-resolution method significantly decreases the irregularities in the numerical solutions obtained by the second order method, and that the third order method gives stable results even for large values of 𝒞.
§.§.§ Exponentially varying velocity
In the following example, we illustrate the behavior of the scheme for a solution of the advection equation when the velocity is varying significantly in the computational domain. Namely we choose the velocity that varies exponentially,
v⃗=(u, v)=(e^2(y-x), e^2(y-x)).
Consequently, the velocity v⃗ is constant along each diagonal given by y-x=c with any constant c ∈ R.
The example is defined for the square domain Ω=[-1,1]×[-1,1] and the time interval t∈[0,0.4]. The initial condition ϕ^0, see Figure <ref>, is the distance function to the point [-1,-1] defined as
ϕ^0(x,y) = √((x+1)^2+(y+1)^2).
The exact solution ϕ=ϕ(x,y,t) defined for (x,y) ∈ R^2 and t∈ R is given by
ϕ(x,y,t) = ϕ^0(x-tu(x,y),y-tv(x,y)).
In the first version of the example, we set time dependent Dirichlet boundary conditions with the values given by ϕ from (<ref>) only at the inflow edges of the square domain. The comparison of the exact solution at the final time t=0.4 with numerical solutions is presented in Figure <ref>, the norms of errors and the EOCs are given in Table <ref>. One can see stable results and good accuracy for both methods and the expected behavior of EOCs. Note that in this version of the example, the solution is smooth.
In the second version of the example we use time independent Dirichlet boundary conditions at the left and bottom edges of the domain with the values given by ϕ^0. The solution ϕ is given by
ϕ(x,y,t) =
{[ ϕ^0(x-t u(x,y),y-t v(x,y)), y≥ x & x-t u(x,y) ≥ -1;
ϕ^0(-1,y-x-1), y≥ x & x-t u(x,y) < -1;
ϕ^0(x-t u(x,y),y-t v(x,y)), x≥ y & y-t v(x,y) ≥ -1;
ϕ^0(x-1-y,-1), x≥ y & y-t v(x,y) < -1 ].
The solution reaches a stationary form in finite time, with the stationary values equilibrated much faster in the part of the square domain above its diagonal, where the Courant numbers are large; see Figure <ref>. The third order scheme computes the results with similar precision for 𝒞≈ 11 and 𝒞≈ 109, in contrast to the second order scheme, for which the differences are much more visible. The norms of the errors and the corresponding EOCs are presented in Table <ref>. Note that the exact solution is non-smooth in this case.
§ CONCLUSIONS
We present a method based on semi-implicit higher order schemes for the numerical solution of the level set equation, described by an advection equation with the velocity given by an external velocity field and by a speed in the normal direction. We offer two variants of the method based on such schemes. The first one is of high-resolution form: it is based on limiting the simple second order scheme, it can be applied straightforwardly dimension by dimension to problems in several dimensions, and in the case of a 1D linear advection equation it can be proven to be Total Variation Diminishing (TVD) for the approximation of the space derivative. The second one is based on the third order scheme and has a more involved form that we present here in the two-dimensional case. This variant clearly offers higher accuracy than the high-resolution variant, and it seems suitable for examples with sufficiently smooth solutions and/or sufficiently refined computational meshes. Both variants of the method are unconditionally stable in the sense of von Neumann stability analysis, which is also confirmed by the chosen linear and nonlinear examples computed with significantly larger Courant numbers than those required by analogous explicit schemes.
§ APPENDIX
Here, we prove that the high-resolution scheme (<ref>) is TVD in the case of the advection equation with a constant velocity. Let C>0; the case of a constant negative Courant number is treated analogously.
Firstly, the flux F_i in (<ref>) can be rewritten to the form,
F_i = u_i (Ψ_i^n + 1/2 s_i ( Ψ_i+1^n-1 - Ψ_i^n)) =
u_i (Ψ_i^n + 1/2s_i/r_i( Ψ_i^n-1 - Ψ_i-1^n)) .
Consequently, the scheme (<ref>) can be written in the form
Ψ_i^n - Ψ_i^n-1 + C ( Ψ_i^n - Ψ_i-1^n + 1/2(s_i/r_i - s_i-1) ( Ψ_i^n-1 - Ψ_i-1^n) ) = 0 .
Moreover, as Ψ_i^n-1 - Ψ_i-1^n = Ψ_i^n - Ψ_i-1^n - (Ψ_i^n - Ψ_i^n-1), we can rewrite (<ref>) as follows,
Ψ_i^n - Ψ_i^n-1 + C (1 + 1/2( s_i/r_i - s_i-1))/(1 - C/2( s_i/r_i - s_i-1)) (Ψ_i^n - Ψ_i-1^n) = 0 .
Similar schemes are studied in <cit.> with the straightforward conclusion that the scheme (<ref>) is TVD if the coefficient before (Ψ_i^n - Ψ_i-1^n) is non-negative. Clearly, the coefficients s_i do not ensure such property in general, therefore, we replace s_i in (<ref>) with limited values l_i,
(Ψ_i^n - Ψ_i^n-1) + C (1 + 1/2( l_i/r_i - l_i-1))/(1 - C/2( l_i/r_i - l_i-1)) (Ψ_i^n - Ψ_i-1^n) = 0 .
The required property is obtained if the limited values fulfill the following inequalities for any r ∈ℛ,
0 ≤ l_i-1≤ 2 ,
0 ≤l_i/r≤2/C + l_i-1 .
The definition of the values l_i in (<ref>) ensures these inequalities; consequently, the coefficient in (<ref>) is nonnegative and the scheme (<ref>) is TVD.
§ REFERENCES
set99
J. Sethian, Level Set Methods and Fast Marching Methods, Cambridge UP,
Cambridge, 1999.
osh02
S. Osher, R. Fedkiw, Level Set Methods and Dynamic Implicit Surfaces, Springer,
New York, 2002.
gibou2018review
F. Gibou, R. Fedkiw, S. Osher, A review of level-set methods and some recent
applications, Journal of Computational Physics 353 (2018) 82–109.
sussman1998improved
M. Sussman, E. Fatemi, P. Smereka, S. Osher, An improved level set method for
incompressible two-phase flows, Computers & Fluids 27 (5-6) (1998) 663–680.
olsson2007conservative
E. Olsson, G. Kreiss, S. Zahedi, A conservative level set method for two phase
flow II, Journal of Computational Physics 225 (1) (2007) 785–807.
frolkovic2016flux
P. Frolkovič, D. Logashenko, C. Wehner, Flux-based level-set method for
two-phase flows on unstructured grids, Computing and Visualization in Science
18 (1) (2016) 31–52.
holm_method_1999
E. J. Holm, H. P. Langtangen, A method for simulating sharp fluid interfaces in
groundwater flow, Advances in Water Resources 23 (1) (1999) 83–95.
herreros2006application
M. Herreros, M. Mabssout, M. Pastor, Application of level-set approach to
moving interfaces and free surface problems in flow through porous media,
Computer Methods in Applied Mechanics and Engineering 195 (1-3) (2006) 1–25.
fro12
P. Frolkovič, Application of level set method for groundwater flow with
moving boundary, Adv. Water. Resour. 47 (2012) 56–66.
robinson2023new
N. I. Robinson, New analysis and numerical values for the classical dam
problem, Advances in Water Resources 175 (2023) 104356.
van2009crystal
T. L. van Noorden, Crystal precipitation and dissolution in a porous medium:
effective equations and numerical experiments, Multiscale Modeling &
Simulation 7 (3) (2009) 1220–1236.
schulz2017effective
R. Schulz, P. Knabner, An effective model for biofilm growth made by
chemotactical bacteria in evolving porous media, SIAM Journal on Applied
Mathematics 77 (5) (2017) 1653–1677.
ray2019numerical
N. Ray, J. Oberlander, P. Frolkovic, Numerical investigation of a fully coupled
micro-macro model for mineral dissolution and precipitation, Computational
Geosciences 23 (2019) 1173–1192.
garttner2020efficiency
S. Gärttner, P. Frolkovič, P. Knabner, N. Ray, Efficiency and
accuracy of micro-macro models for mineral dissolution, Water Resources
Research 56 (8) (2020) e2020WR027585.
kelm2022comparison
M. Kelm, S. Gärttner, C. Bringedal, B. Flemisch, P. Knabner, N. Ray,
Comparison study of phase-field and level-set method for three-phase systems
including two minerals, Computational Geosciences 26 (3) (2022) 545–570.
mallet2009modeling
V. Mallet, D. E. Keyes, F. Fendell, Modeling wildland fire propagation with
level set methods, Computers & Mathematics with Applications 57 (7) (2009)
1089–1101.
frolkovic2015semi
P. Frolkovič, K. Mikula, J. Urbán, Semi-implicit finite volume level
set method for advective motion of interfaces in normal direction, Appl. Num.
Math. 95 (2015) 214–228.
alessandri2021parameter
A. Alessandri, P. Bagnerini, M. Gaggero, L. Mantelli, Parameter estimation of
fire propagation models using level set methods, Applied Mathematical
Modelling 92 (2021) 731–747.
sarti2000subjective
A. Sarti, R. Malladi, J. A. Sethian, Subjective surfaces: A method for
completing missing boundaries, Proceedings of the National Academy of
Sciences 97 (12) (2000) 6258–6263.
mikula2005co
K. Mikula, A. Sarti, F. Sgallari, Co-volume level set method in subjective
surface based medical image segmentation, Handbook of Biomedical Image
Analysis: Volume I: Segmentation Models Part A (2005) 583–626.
bourgine2009extraction
P. Bourgine, P. Frolkovič, K. Mikula, N. Peyriéras,
M. Remešíková, Extraction of the intercellular skeleton from
2D images of embryogenesis using eikonal equation and advective subjective
surface method, in: Scale Space and Variational Methods in Computer Vision:
SSVM 2009, Voss, Norway. Proceedings 2, Springer, 2009, pp. 38–49.
fmu15
P. Frolkovič, K. Mikula, J. Urbán, Distance function and extension in
normal direction for implicitly defined interfaces, DCDS - Series S 8 (5)
(2015) 871–880.
may2017explicit
S. May, M. Berger, An explicit implicit scheme for cut cells in embedded
boundary meshes, J. Sci. Comput. 71 (3) (2017) 919–943.
frolkovic2018semi
P. Frolkovič, K. Mikula, Semi-implicit second order schemes for numerical
solution of level set advection equation on Cartesian grids, Appl. Num.
Math. 329 (2018) 129–142.
Engwer2020A3673
C. Engwer, S. May, A. Nuing, F. Streitburger, A stabilized DG cut cell method
for discretizing the linear transport equation, SIAM Journal on Scientific
Computing 42 (6) (2020) A3673 – A3703.
Xie2022
Z. Xie, P. Lin, T. Stoesser, A conservative and consistent implicit Cartesian
cut-cell method for moving geometries with reduced spurious pressure
oscillations, Journal of Computational Physics 459 (2022).
li_absolutely_2021
L. Li, J. Zhu, Y.-T. Zhang, Absolutely convergent fixed-point fast sweeping
WENO methods for steady state of hyperbolic conservation laws, Journal of
Computational Physics 443 (2021) 110516.
hahn2022finite
J. Hahn, K. Mikula, P. Frolkovič, B. Basara, Finite volume method with
the soner boundary condition for computing the signed distance function on
polyhedral meshes, International Journal for Numerical Methods in Engineering
123 (4) (2022) 1057–1077.
qiu_finite_2003
J. Qiu, C.-W. Shu, Finite Difference WENO Schemes with
Lax–Wendroff-Type Time Discretizations, SIAM J. Sci. Comp. 24 (May
2003).
leveque_finite_2004
R. J. Leveque, Finite Volume Methods for Hyperbolic Problems, 2nd
Edition, Cambridge UP, Cambridge, 2004.
toro_riemann_2009
E. F. Toro, Riemann solvers and numerical methods for fluid dynamics: a
practical introduction, 3rd Edition, Springer, Dordrecht; New York, 2009.
zorio_approximate_2017
D. Zorío, A. Baeza, P. Mulet, An Approximate Lax–Wendroff-Type
Procedure for High Order Accurate Schemes for Hyperbolic
Conservation Laws, J. Sci. Comput. 71 (1) (2017) 246–273.
carrillo2019compact
H. Carrillo, C. Parés, Compact approximate Taylor methods for systems of
conservation laws, Journal of Scientific Computing 80 (3) (2019) 1832–1866.
carrillo2021lax
H. Carrillo, C. Parés, D. Zorío, Lax-Wendroff approximate Taylor
methods with fast and optimized weighted essentially non-oscillatory
reconstructions, Journal of Scientific Computing 86 (1) (2021) 1–41.
seal_high-order_2014
D. C. Seal, Y. Güçlü, A. J. Christlieb, High-Order Multiderivative
Time Integrators for Hyperbolic Conservation Laws, J. Sci. Comput.
60 (1) (2014) 101–140.
frolkovic2023high
P. Frolkovič, M. Žeravý, High resolution compact implicit
numerical scheme for conservation laws, Applied Mathematics and Computation
442 (2023) 127720.
zhao2005fast
H. Zhao, A fast sweeping method for eikonal equations, Math. Comput. 74 (250)
(2005) 603–627.
osh88
S. Osher, J. Sethian, Fronts propagating with curvature-dependent speed:
Algorithms based on Hamilton-Jacobi formulations, J. Comput. Phys. 79
(1988) 12–49.
shu_essentially_1998
C.-W. Shu, Essentially non-oscillatory and weighted essentially non-oscillatory
schemes for hyperbolic conservation laws, in: Advanced Numerical
Approximation of Nonlinear Hyperbolic Equations, Lecture Notes in
Mathematics, Springer, Berlin, Heidelberg, 1998, pp. 325–432.
wesseling2009principles
P. Wesseling, Principles of computational fluid dynamics, Vol. 29, Springer,
Heidelberg, 2009.
nishikawa2021truncation
H. Nishikawa, A truncation error analysis of third-order MUSCL scheme for
nonlinear conservation laws, International Journal for Numerical Methods in
Fluids 93 (4) (2021) 1031–1052.
frolkovic2022semi
P. Frolkovič, S. Krišková, M. Rohová, M. Žeravý,
Semi-implicit methods for advection equations with explicit forms of
numerical solution, Japan J. Indust. Appl. Math. 39 (2022) 843–867.
kemm_comparative_2011
F. Kemm, A comparative study of TVD-limiters—well-known limiters and an
introduction of new ones, International Journal for Numerical Methods in
Fluids 67 (4) (2011) 404–440.
harten_class_1984
A. Harten, On a Class of High Resolution Total-Variation-Stable
Finite-Difference Schemes, SIAM J. Numer. Anal. 21 (1) (1984) 1–23.
sweby1984high
P. K. Sweby, High resolution schemes using flux limiters for hyperbolic
conservation laws, SIAM Journal on numerical analysis 21 (5) (1984)
995–1011.
yee1989class
H. Yee, A class of high-resolution explicit and implicit shock-capturing
methods, Rep., NASA-TM-101088 (1989).
duraisamy_implicit_2007
K. Duraisamy, J. D. Baeder, Implicit Scheme for Hyperbolic Conservation
Laws Using Nonoscillatory Reconstruction in Space and Time, SIAM
J. Sci. Comput. 29 (6) (2007) 2607–2620.
arbogast2020third
T. Arbogast, C.-S. Huang, X. Zhao, D. N. King, A third order, implicit, finite
volume, adaptive Runge–Kutta WENO scheme for advection–diffusion
equations, Comput. Meth. Appl. Mech. Eng. 368 (2020) 113–155.
puppo_quinpi_2022
G. Puppo, M. Semplice, G. Visconti, Quinpi: Integrating Conservation Laws
with CWENO Implicit Methods, Commun. Appl. Math. Comput. (Feb. 2022).
billett1997on
S. Billett, E. Toro, On WAF-type schemes for multidimensional hyperbolic
conservation laws, Journal of Computational Physics 130 (1) (1997) 1–24.
Mathematica
W. R. Inc., Mathematica 13, Champaign, IL, 2021.
<https://www.wolfram.com/mathematica>
boscarino_high_2016
S. Boscarino, F. Filbet, G. Russo, High Order Semi-implicit Schemes for
Time Dependent Partial Differential Equations, J Sci Comput 68 (3)
(2016) 975–1001.
zhang2006high
Y.-T. Zhang, H.-K. Zhao, J. Qian, High order fast sweeping methods for static
Hamilton–Jacobi equations, Journal of Scientific Computing 29 (2006)
25–56.
MATLAB:2020
MATLAB, version 9.9.0 (R2020b), The MathWorks Inc., Natick, Massachusetts,
2022.
qiushu1dspecialIC
J. Qiu, C.-W. Shu, Hermite WENO schemes for Hamilton-Jacobi equations,
Journal of Computational Physics 204 (1) (2005) 82–99.
fm07
P. Frolkovič, K. Mikula, High-resolution flux-based level set method, SIAM
J. Sci. Comp. 29 (2) (2007) 579–597.
mo10
K. Mikula, M.Ohlberger, A new level set method for motion in normal direction
based on a semi-implicit forward-backward diffusion approach, SIAM J. Sci.
Comp. 32 (3) (2010) 1527–1544.
saye2014high
R. Saye, High-order methods for computing distances to implicitly defined
surfaces, Communications in Applied Mathematics and Computational Science
9 (1) (2014) 107–141.
http://arxiv.org/abs/2307.01276v1 [astro-ph.GA], published 3 July 2023
Fly-by galaxy encounters with multiple black holes produce star-forming linear wakes
Nianyi Chen, Patrick LaChance, Yueying Ni, Tiziana Di Matteo, Rupert Croft, Priyamvada Natarajan, Simeon Bird
McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213
McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213
Center for Astrophysics | Harvard & Smithsonian, Cambridge, MA 02138, US
McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213
McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213
Black Hole Initiative, Harvard University, Cambridge, MA 02138, USA
Department of Astronomy, Yale University, New Haven, CT 06511, USA
Department of Physics, Yale University, New Haven, CT 06520, USA
Department of Physics & Astronomy, University of California, Riverside, 900 University Ave., Riverside, CA 92521, US
Corresponding author: Nianyi Chen ([email protected])
We look for simulated star-forming linear wakes such as the one recently discovered by <cit.> in the cosmological hydrodynamical simulation ASTRID.
Amongst the runaway black holes in ASTRID, none are able to produce clear star-forming wakes. Meanwhile, fly-by encounters, typically involving a compact galaxy (with a central black hole) and a star-forming galaxy (with a duo of black holes), reproduce remarkably well many of the key properties (its length and linearity, recent star formation, etc.) of the observed star-forming linear feature. We predict the feature to persist for approximately 100 Myr in such a system and hence to constitute a rare event. The feature contains a partly stripped galaxy (with M_ gal=10^9 ∼ 10^10M_⊙) and a dual BH system (M_ BH=10^5 ∼ 10^7 M_⊙) in its brightest knot. X-ray emission from AGN in the knot should be detectable in such systems. After 100∼ 200 Myrs from the first fly-by, the galaxies merge, leaving behind a triple black hole system in a (still) actively star-forming early-type remnant of mass ∼ 5× 10^10 M_⊙. Follow-up JWST observations may be key for revealing the nature of these linear features by potentially detecting the older stellar populations constituting the bright knot. Confirmation of such detections may therefore help discriminate a fly-by encounter from a massive BH wake to reveal the origin of such features.
§ INTRODUCTION
Given that nearly all galaxies host central black holes, it is likely that during complex dynamical mergers involving multiple galaxies, one or more of the BHs get ejected. Evidence from observations for such anticipated three-body interactions has been sparse till recently.
New observations have reported serendipitous discoveries of extended linear star-forming features with unclear origins that likely involve multiple BH interactions and accompanying triggered star formation. <cit.> (VD23 hereafter) reports the discovery of a thin stellar streak extending ∼ 60 kpc from a nearby compact galaxy at z∼ 1. Based on the length and linearity of the feature, as well as the emission lines associated with dense star-forming gas, the authors interpret it as the stellar wake induced by the passage of a massive black hole (MBH) kicked out from the compact galaxy during a 3-body encounter of the MBHs.
In another recent paper, <cit.> reports finding an even longer, collimated galactic tail 380 kpc in length, also associated with recent and ongoing star formation.
Having discussed the challenges of a more commonplace origin for this feature such as ram pressure or tidal stripping, these authors also propose that this feature is likely created by a three-body encounter between three interacting galaxies.
Other observed cases of potential runaway BHs include the one detected in a peculiar source in the COSMOS survey at z=0.359 (CXOCJ100043.1+020637 (CID-42)) that presents as two compact optical sources embedded in the same galaxy with a large velocity offset <cit.>.
In this work, we focus on linear features with accompanying star formation activity.
If the observed linear features were indeed produced by a runaway BH in a compact star cluster, or even a runaway galaxy itself, they open up a new channel for finding ejected stars and MBHs undergoing complex dynamical interaction.
However, the runaway BH explanation itself also faces theoretical challenges: it is not clear, for example, whether a BH with M_ BH∼ 10^7 M_⊙ can trigger such a high level of star formation, and leave a trail of stars weighing almost half the mass of the original galaxy (> 10^9 M_⊙ as per VD23).
On the other hand, it is also theoretically challenging to produce bright stellar wakes that are so thin and straight, without having some ejected material associated with them.
Further clarification on the origin of these exotic systems demands work from the theoretical side, and the first step would be to identify theoretical counterparts of these observed systems in cosmological simulations, a path we pursue here, so that we can better investigate their origins.
To this end, our work presents a first attempt to query and investigate the origin of extended linear stellar wakes in cosmological simulations and their possible association with runaway MBHs.
We use the large volume simulation ASTRID with a large galaxy population <cit.>, allowing us to find the very rare occurrences of extended star-forming systems.
Moreover, the updated BH dynamics model <cit.> produces a vast number of wandering MBHs <cit.> and even potential runaway MBHs among MBH triplets <cit.>, allowing us to look for connections between stellar wakes and MBH tuples or runaway BHs.
Our letter is organized as follows: in Section <ref> we first introduce the simulation, and the selection criteria we used to find runaway BHs in the simulation as well as the theoretical counterparts of the currently observed linear features.
In Section <ref>, we present systems from the simulation that morphologically resemble the observed linear star-forming features.
We then examine their origin, evolution, and detailed properties of the galaxies and BHs associated with such systems.
§ METHOD
§.§ Simulations
ASTRID is a cosmological hydrodynamical simulation performed using a new version of the Smoothed Particle Hydrodynamics code MP-Gadget. The simulation evolves a cube of 250 per side with 2×5500^3 initial tracer particles comprising dark matter and baryons, and has currently reached z=1.3. ASTRID has a dark matter particle mass resolution of M_ DM = 6.7 × 10^6 and M_ gas = 1.3 × 10^6 in the initial conditions. The gravitational softening length is ϵ_ g = 1.5 for both DM and gas particles. The simulation includes a full-physics sub-grid treatment for modeling galaxy formation, SMBHs and their associated supernova and AGN (Active Galactic Nuclei) feedback, as well as inhomogeneous hydrogen and helium reionization. We refer the readers to the introductory papers <cit.> for detailed descriptions of the physical models deployed in the simulation.
Here we briefly summarize BH modeling, which is most relevant to our current investigation of runaway MBHs. BHs are seeded in haloes with M_ halo,FOF > 5 × 10^9 and M_ *,FOF > 2 × 10^6, with seed masses stochastically drawn between 3×10^4 and 3×10^5, motivated by the direct collapse scenario proposed in <cit.>.
The gas accretion rate onto the black hole is estimated via a Bondi-Hoyle-Lyttleton-like prescription <cit.>.
The black hole radiates with a bolometric luminosity L_ bol proportional to the accretion rate Ṁ_∙, with a mass-to-energy conversion efficiency η=0.1 in an accretion disk according to <cit.>.
5% of the radiated energy is coupled to the surrounding gas as the AGN feedback.
Dynamics of the black holes are modeled with a sub-grid dynamical friction model <cit.> to replace the original implementation that directly repositioned BHs to the local minimum of the potential.
This gives well-defined black hole trajectories and velocities.
Per this implementation, two black holes can merge if their separation is within two times the gravitational softening length 2ϵ_g, once their kinetic energy is dissipated by dynamical friction and they are gravitationally bound.
§.§ Galaxy Population and Candidate Selection
Galaxies in the simulation are identified with the algorithm <cit.>. The large volume of provides a rich galaxy population matching the host galaxy of the observed systems, from which we can conduct a comprehensive search for runaway BHs and star-forming wakes. We perform the search at two redshifts, z=2 and z=1.3 (the current lowest redshift of the simulation), with the latter being closer in time to the observed streak in VD23 at z=0.964.
Target galaxies:
In Figure <ref>, we show the properties of z=2 galaxies in and our selection criteria. To match with the properties of the main galaxy in <cit.>, we focus on the star-forming (SFR> 1 M_⊙/ yr), compact (r_ half < 2 kpc), and non-satellite galaxies with masses 5× 10^9 < M_ gal < 5× 10^10 that contain at least 3 BHs with a total BH mass above 10^7.
We apply the 3-BH criterion because it is necessary to produce a runaway BH.
After applying these cuts, we narrow down our search to 29888 (11399) galaxies at z=1.3 (z=2), and hereafter we refer to these selected galaxies as our “target galaxies".
Runaway BH selection:
We identify runaway BHs within the target galaxies above (i.e. the orange population in Figure <ref>).
To quantitatively define a “runaway" MBH in these galaxies, we compute the ratio between consecutive apocentric radii between each satellite BH in the galaxy and the central MBH <cit.>.
If we see a consistent and sudden increase in orbital size by a factor of more than three in at least three full orbits, then we categorize the MBH as a “runaway" MBH. Among the ∼ 11399 target galaxies at z=2, we found ∼ 200 potential runaway BHs that satisfy these criteria.
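A minimal Python sketch of this criterion is given below; it is an illustrative reading of the text (the ratio of consecutive apocentric radii must exceed three for at least three successive orbits), and the function name and inputs are hypothetical rather than the paper's actual pipeline.

def is_runaway(apocentre_radii, factor=3.0, n_orbits=3):
    # apocentre_radii: consecutive apocentric separations of a satellite BH
    # from the central MBH, one entry per completed orbit
    consecutive = 0
    for k in range(1, len(apocentre_radii)):
        if apocentre_radii[k] > factor * apocentre_radii[k - 1]:
            consecutive += 1
            if consecutive >= n_orbits:
                return True
        else:
            consecutive = 0
    return False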
Linear star-forming wake selection:
We look for radial linear star-forming features with the following method:
we divide all the young stars (stellar age < 100 Myrs) in each halo into 45 radial bins spanning [10, 100] kpc from the galaxy center. To match a ∼ 50 kpc observable linear feature, we select systems with at least 2× 10^6 stars in each bin for 25 consecutive radial bins. This gives us 514 (435) candidates with potentially observable stellar streaks at z=1.3 (z=2), respectively, which makes up 2-4% of the target galaxy population (we will refer to these as “linear feature candidates"). We then visually inspect these potential linear features to select the ones with large signal-to-noise, resulting in ∼ 30 visually-confirmed linear features at each redshift (note that this number of viable candidates is likely a lower limit, as we only look for linearity along the x-y and x-z planes).
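A schematic Python version of this radial-bin selection is given below; the per-bin threshold follows the value quoted in the text (its unit appears to have been lost in extraction and is assumed here to be a stellar mass), and the function name and interface are illustrative assumptions.

import numpy as np

def is_linear_wake_candidate(r_kpc, age_myr, mass, n_bins=45, r_min=10.0, r_max=100.0,
                             per_bin_threshold=2e6, n_consecutive=25):
    # r_kpc, age_myr, mass: per-star-particle arrays for one halo
    r_kpc, age_myr, mass = np.asarray(r_kpc), np.asarray(age_myr), np.asarray(mass)
    young = age_myr < 100.0                       # young stars only
    edges = np.linspace(r_min, r_max, n_bins + 1) # radial bins between 10 and 100 kpc
    binned, _ = np.histogram(r_kpc[young], bins=edges, weights=mass[young])
    # require at least `n_consecutive` contiguous bins above the threshold
    run = 0
    for value in binned:
        run = run + 1 if value >= per_bin_threshold else 0
        if run >= n_consecutive:
            return True
    return False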
§ RESULTS
§.§ Do runaway BHs produce linear stellar wakes?
<cit.> proposed that one of the most likely mechanisms for producing the observed star-forming wake is a runaway BH induced by a three-body encounter. Following this proposal and the works by <cit.> and <cit.>, we look for possible associations between linear star-forming features and runaway BHs.
However, upon an initial search through the target galaxies (orange points in Figure <ref>), we do not find any visible stellar features along the wakes of the runaway BHs.
As the BH-induced star formation may be more prominent within a denser circum-galactic medium (CGM) than that in our target galaxies, we relax the upper limit on the galaxy mass and did another search among MBH triplets in larger halos (the sample of MBH triplets from <cit.>).
In these halos, the runaway BHs also tend to be more massive, and the formation of a stellar wake is less affected by the resolution limit.
Figure <ref> shows an example runaway BH candidate within a MBH triplet, found in a galaxy with mass 1.3× 10^11 M_⊙.
In this system, a BH with mass 1.3× 10^7 M_⊙ is ejected from the host galaxy ∼ 100 Myrs after the initial formation of the MBH triplet at a speed of ∼ 700 km/s, and it traverses a distance of ∼ 45 kpc from the galaxy center in another ∼ 100 Myrs.
Even in this case, we still do not see a star-forming signature associated with the wake of the runaway BH: we found at most <10^6M_⊙ total newly formed stars within 10 kpc of the ejected BH.
We note a few limitations of the simulation potentially responsible for this null result: the resolution of ASTRID cannot fully resolve the ejection due to three-body encounters, and BHs that “run away" are mostly disrupted by a blob of incoming stars/gas.
This implies that the maximum speed reached by any of our runaway BH candidates is ∼ 800 km/s and decreases to ∼ 100 km/s at the target separation of ∼ 50 kpc, barely reaching the speed to shock the surrounding gas <cit.>.
Also, the simulation may lack the mass resolution to resolve gas shocked by a BH: the target runaway BH (M_ BH∼ 10^7M_⊙) is a few times larger than the gas particle mass, and so the gas over-density drawn by the BH may not be directly resolvable. A follow-up study would need to deploy higher-resolution simulations, to detect BHs ejected at a higher velocity in environments similar to our target galaxies to evaluate whether a similar star-forming wake can be produced.
§.§ Stellar wakes associated with fly-by galaxy encounters
Although we do not find visible star-forming wakes associated with the passage of runaway BHs, we do find a few tens of star-forming wakes in the simulation following the method detailed in Section <ref>.
The natural question to ask is then: what mechanisms, if not runaway BHs, produce these wakes?
By inspecting the simulation counterparts to the observed systems selected in Section <ref>, we find that most star-forming streaks originate from the fly-by encounter of a recently merged, dual-BH young galaxy with a more massive compact galaxy.
We present some representative cases of such systems and their evolution in this section.
Figure <ref> shows star and gas visualizations of two systems with linear star-forming features, and the mock images of the systems seen through the different filters of HST and JWST.
In both cases, the young stars extend linearly to more than 50 kpc away from the main, compact galaxy (Galaxy 1, on the left end of each image). The young galaxy (Galaxy 2, on the right) is embedded in a stream of cold, dense gas with T∼ 10^4-10^5 K clearly distinguishable from the background hot gas in the surrounding medium (second row), along which we see ongoing star formation and young stars formed within a few tens Myrs (fourth row).
Figure <ref> also shows the mock HST and JWST observations, created by assigning spectral energy distributions (SEDs) to each star particle according to the Binary Population and Spectral Synthesis (BPASS) model <cit.>.
These SEDs are convolved with the filter transmission functions associated with the chosen filters.
We smooth the stars with an SPH kernel and make 2D projections with pixel sizes matching the sensor for HST and JWST (0.049" for HST ACS, and 0.031"/0.063" for JWST NIRCAM short/long wavelength).
The stellar age along the wake follows a bi-modal distribution: half of the stars are relatively old with ages ∼ 1 Gyr, and half are formed within <100 Myrs during the emergence of the streak.
The young stars form the linear feature when captured by the HST F606W and F814W bands (third row), while the older stars are better seen with JWST at longer wavelengths (fifth row). The redder stars may be a distinctive signature of the stripped galaxy scenario, as they are unlikely to be found in the runaway-BH case.
§.§ Evolution of the linear feature
To better understand the formation of the star-forming streak, we trace System 1 across a Gyr before and after the time when the linear feature is most prominent. Figure <ref> shows the time evolution of the two galaxies involved in the linear stellar feature in a face-on view.
The feature is a by-product of a fly-by encounter between two galaxies at a speed of 580 km/s.
The galaxies barely touch each other during their first passage (frame 1), after which Galaxy 1 remains unperturbed while Galaxy 2 develops elongated tidal arms on both sides.
The feature formed around 150 Myrs after the initial passage between the two galaxies, and lasts for ∼ 200 Myrs (frame 2-4), after which the stars along the streak become old (frame 5), and the two galaxies experience a head-on collision (frame 6-7).
Our simulated linear features are extreme cases of the tidal arms/tails found in previous numerical studies of galaxy mergers <cit.>.
Idealized simulations have shown that within these tails accumulations of gas and stars can be found <cit.>.
These clumps harbor molecular gas which provides a reservoir for new stars <cit.>, and could be the origin of the blue knots along the VD23 system.
Upon the close inspection of the ∼ 50 visually-confirmed linear features, we find that the long tails are usually associated with dual-BH galaxies.
70% of these tails contain close BH pairs, indicating a recent galaxy merger within the tail.
Isolated galaxy simulations may not produce features like System 1 and System 2 if it takes two consecutive galaxy mergers to produce very long star-forming tails.
We also show in Figure <ref> the trajectories of the BHs along the tail.
BH3 was brought into Galaxy 2 in a previous galaxy merger a few hundred Myrs before the wake formation.
Then it keeps orbiting around BH2 during and after the production of the stellar wake, until the final merger between Galaxy 1 and Galaxy 2 (frame 7), after which both BH2 and BH3 ended up in wide orbits around BH1.
We will examine the detailed evolution of the triple-BH system in the next section.
§.§ BHs along the linear feature
A common feature shared by our linear feature candidates is a dual-BH system along the streak merging into a central BH.
Such MBH triplets are indicators of consecutive galaxy mergers producing the stellar streak, and if active, they may be also responsible for triggering the OIII emission along the streak. Here we examine the evolution of BHs along the linear features and their AGN activity.
Figure <ref> shows the time evolution of the X-ray luminosity, BH mass, and the orbits of the three BHs associated with the merging galaxies in System 1 and System 2.
In both cases, the BH in the compact galaxy is fairly luminous, maintaining an X-ray luminosity of nearly 10^43 erg/s for more than a Gyr.
The second brightest BH (BH2) embedded along the linear feature also has a relatively high X-ray emission between 10^42-10^43 erg/s.
If the observed wake is produced by the scenario we show here, then BH2 is responsible for triggering the OIII emission along the wake. As mentioned in VD23, the amount of OIII emission seen would correspond to an AGN with L_X>10^43 erg/s according to the scaling relation in <cit.>. Our BH2s are slightly fainter than required for a 1.9 × 10^41 erg/s OIII luminosity, but still fall well within the scatter of the <cit.> relation and are capable of triggering the observed OIII signature.
The detectability of the third BH may depend on its orbit within Galaxy 2, but it is possible that BH3 is captured at the pericenter of the orbit and is at its brightest when the linear feature is produced (e.g. the case in System 1). In this (optimistic) case, follow-up observations may simultaneously see an MBH triplet associated with the stellar wake.
§ CONCLUSIONS AND DISCUSSION
In this work, we identify runaway BHs ejected at ∼ 700 km/s from host galaxies, as well as linear star-forming features extending >50 kpc from compact galaxies in the simulation at z=1∼2.
We look for the association between the two types of systems following the proposal in VD23, and find that runaway BHs do not produce visible star-forming wakes in the simulation.
However, we find that linear star-forming features in fly-by galaxy encounters resemble the recent observation by <cit.>, with very bright, thin, and straight young stars when seen in the HST F606W/F814W filters.
Such linear features potentially exist among ∼ 0.1% of compact, star-forming galaxies with 5× 10^9 < M_ gal < 5× 10^10 and multiple BHs, and the most prominent linear features can last for 100-200 Myrs.
We examine two representative systems with strong linear features in detail, both with stellar masses between 10^10-5× 10^10 M_⊙ and SFRs between 10-50 M_⊙/ yr. Among the BHs associated with the two galaxies producing the linear feature, the central BH has a bright X-ray emission with 10^42 erg/s< L_X < 10^43 erg/s, which could potentially be seen by Chandra.
The brightest BH along the feature emits at 10^41 erg/s< L_X < 10^43 erg/s, also with a chance to be detected in the X-rays.
The production of the linear feature involves consecutive encounters between three galaxies.
The remnant of the first galaxy merger (∼ 10^9-10^10 M_⊙ and star-forming) goes through a fly-by encounter with a more massive, compact galaxy, during which its surrounding gas gets tidally stripped, leaving a trace of star-forming gas and young stars.
∼ 70% of the visually-confirmed systems with linear features harbor a dual-BH star-forming galaxy along the feature, supporting the three-galaxy encounter scenario. Our findings indicate that these linear features may offer a robust signature for finding BH binaries and multiple-BH systems, but further work is required to better establish the connection between linear long tidal tails and binary-hosting galaxies. There have been catalogs of long tidal tails in the COSMOS field <cit.>, out of which two galaxy nuclei can be clearly identified in some extended linear systems.
Finally, although our results show that only tidal features during fly-by galaxy mergers can produce these recently observed features, we acknowledge that our resolution limit leaves room for the possibility of a runaway-BH-induced star-forming wake. Due to the similarity with many features of the observed signatures (star-forming gas, OIII emission, and a population of young stars) shared by these two scenarios, follow-up studies are needed to potentially distinguish between them.
Follow-up observations using longer-wavelength bands with JWST can tell whether there exists an underlying old stellar population along the linear feature. The lack of old stars will greatly support the runaway BH origin of the feature. Meanwhile, higher-resolution simulations can study the triggering of such dense star formation by runaway BHs, and the mass/velocity of the BH required. In both formation channels of the linear feature, follow-up observations may be able to reveal multiple MBH signatures along the feature, and such systems are very likely the sites for complex three-body encounters and mergers between MBHs <cit.>.
§ ACKNOWLEDGEMENTS
We thank Mohit Bhardwaj, Pieter van Dokkum, Charlie Conroy and Qian Yang for their helpful discussions.
The simulation was run on the Frontera facility at the Texas Advanced Computing Center.
TDM and RACC acknowledge funding from the NSF AI Institute: Physics of the Future, NSF PHY-2020295, NASA ATP NNX17AK56G, and NASA ATP 80NSSC18K101.
TDM acknowledges additional support from NSF ACI-1614853, NSF AST-1616168, NASA ATP 19-ATP19-0084, and NASA ATP 80NSSC20K0519, and RACC from NSF AST-1909193.
SB acknowledges funding supported by NASA-80NSSC22K1897.
§ DATA AVAILABILITY
The code to reproduce the simulation is available at <https://github.com/MP-Gadget/MP-Gadget>, and continues to be developed.
Part of the snapshots are available at <https://astrid-portal.psc.edu/>.
|
http://arxiv.org/abs/2307.02970v1
|
20230706131651
|
HST/WFC3 Light Curve Confirms the Closest Exoplanet to Transit an M Dwarf is Terrestrial
|
[
"Emily K Pass",
"Jennifer G Winters",
"David Charbonneau",
"Aurelia Balkanski",
"Nikole Lewis",
"Maura Lally",
"Jacob L Bean",
"Ryan Cloutier",
"Jason D Eastman"
] |
astro-ph.EP
|
[
"astro-ph.EP"
] |
Emily K. Pass (ORCID 0000-0002-1533-9029)
Center for Astrophysics | Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
Jennifer G. Winters (ORCID 0000-0002-9003-484X)
Center for Astrophysics | Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
Thompson Physics Lab, Williams College, 880 Main Street, Williamstown, MA 01267, USA
David Charbonneau (ORCID 0000-0002-9003-484X)
Center for Astrophysics | Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
Aurelia Balkanski
Center for Astrophysics | Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
Nikole Lewis (ORCID 0000-0002-8507-1304)
Department of Astronomy and Carl Sagan Institute, Cornell University, 122 Sciences Drive, Ithaca, NY 14853, USA
Maura Lally (ORCID 0000-0002-4443-6725)
Department of Astronomy and Carl Sagan Institute, Cornell University, 122 Sciences Drive, Ithaca, NY 14853, USA
Jacob L. Bean (ORCID 0000-0003-4733-6532)
Department of Astronomy & Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637, USA
Ryan Cloutier (ORCID 0000-0001-5383-9393)
Center for Astrophysics | Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
Department of Physics & Astronomy, McMaster University, 1280 Main Street West, Hamilton, ON L8S 4L8, Canada
Jason D. Eastman (ORCID 0000-0003-3773-5142)
Center for Astrophysics | Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
Previous studies of the exoplanet LTT 1445Ac concluded that the light curve from the Transiting Exoplanet Survey Satellite (TESS) was consistent with both grazing and non-grazing geometries. As a result, the radius and hence density of the planet remained unknown. To resolve this ambiguity, we observed the LTT 1445 system for six spacecraft orbits of the Hubble Space Telescope (HST) using WFC3/UVIS imaging in spatial scan mode, including one partial transit of LTT 1445Ac. This imaging produces resolved light curves of each of the three stars in the LTT 1445 system. We confirm that the planet transits LTT 1445A and that LTT 1445C is the source of the rotational modulation seen in the TESS light curve, and we refine the estimate of the dilution factor for the TESS data. We perform a joint fit to the TESS and HST observations, finding that the transit of LTT 1445Ac is not grazing with 97% confidence. We measure a planetary radius of 1.10_-0.07^+0.10 R_⊕. Combined with previous radial velocity observations, our analysis yields a planetary mass of 1.36±0.19 M_⊕ and a planetary density of 5.6_-1.5^+1.7 g cm^-3. LTT 1445Ac is an Earth analog with respect to its mass and radius, albeit with a higher instellation, and is therefore an exciting target for future atmospheric studies.
§ INTRODUCTION
LTT 1445 is a triple star system <cit.>, located at 6.9 pc and comprised of three fully convective M dwarfs. The primary, LTT 1445A, is separated from the close binary LTT 1445BC by 7.2" as of Gaia DR3 <cit.>. Photometry from the Transiting Exoplanet Survey Satellite <cit.> has revealed two small transiting planets in the LTT 1445 system, with the discovery of planet b reported in <cit.> and planet c in <cit.>. While all three stars fall within the same 21" square TESS pixel, the ground-based transit and radial velocity follow-up presented in those works indicate that the planets orbit the A component. High-precision radial velocity follow-up of LTT 1445A from the ESPRESSO spectrograph <cit.> has also yielded an additional candidate planet d, which is likely non-transiting <cit.>.
LTT 1445A is the nearest M dwarf with a known transiting planet; planets b and c therefore offer some of the most favorable conditions to characterize the atmospheres of terrestrial exoplanets. This favorability can be quantified by the transmission spectroscopy metric <cit.>, with <cit.> calculating a TSM of 30 for planet b and 46 for planet c. <cit.> propose that planets with TSMs substantially larger than 10 are high-priority targets for atmospheric characterization. Moreover, <cit.> argue that a future discovery of a planet more favorable for transmission spectroscopy is unlikely based on our understanding of planetary occurrence rates and the fraction of nearby stars already probed by TESS and ground-based surveys. While planet c therefore appears to be an optimal target to devote follow-up resources such as JWST, its TSM was calculated with a caveat: it is possible that the transit of this planet is grazing. A grazing geometry would yield a large uncertainty in the planetary radius, affecting the TSM, our ability to interpret atmospheric observations, and potentially, the terrestrial nature of the planet. While there are non-grazing geometries that are consistent with the TESS data, <cit.> found an 85% chance that c is grazing. Additional data are needed to resolve this uncertainty and establish whether planet c is suitable for detailed atmospheric characterization.
In this work, we present a transit of LTT 1445Ac as observed by the Hubble Space Telescope (HST) using WFC3/UVIS imaging in spatial scan mode. Unlike the TESS data, these observations resolve LTT 1445A, B, and C as three independent sources. In Section <ref>, we describe the HST data collection and reduction. In Section <ref>, we perform a joint fit to the TESS and HST observations. We conclude in Section <ref>.
§ HST DATA COLLECTION AND REDUCTION
§.§ Observation setup
We gathered the HST photometry under program 16503 (PI: Winters), using WFC3/UVIS in spatial scanning mode with the F814W filter and the 512x512 subarray. The WFC3/UVIS spatial scanning mode has not been widely described in the literature, although it has been employed for a handful of projects, including <cit.>, <cit.>, <cit.>, <cit.> and <cit.>.
Our data set is comprised of six HST orbits, divided between two visits and consisting of sequences of 22s exposures taken at a 76s cadence. The first visit occurred on 2021 Sep 26 from 11:27:24–15:35:39 UT and the second on 2021 Sep 29 from 14:07:57–18:14:44 UT. We observed a partial transit of LTT 1445Ac in the first visit; in the second, the transit fell in the gap between orbital visibility periods.
§.§ Preprocessing and extraction
We download the 188 .flc files associated with our program from the Mikulski Archive for Space Telescopes (MAST). These files are calibrated, individual exposures with charge transfer efficiency corrections applied; for details on the WFC3 calibration pipeline, see appendix E.1 of the WFC3 Instrument Handbook <cit.>.[https://hst-docs.stsci.edu/wfc3ihbhttps://hst-docs.stsci.edu/wfc3ihb]
Next, we use <cit.>,[https://github.com/cshanahan1/WFC3_phot_toolshttps://github.com/cshanahan1/WFC3_phot_tools] a package designed for WFC3/UVIS spatially scanned data. With this package, we perform cosmic-ray rejection, apply the pixel area map (PAM) correction, and extract each of the three stars using rectangular apertures. We identify appropriate apertures using the routine (which uses ; ) with a SNR threshold of 100. This threshold was chosen to ensure that the algorithm does not combine LTT 1445BC into a single object, which was an issue when using the function's default parameters. We use the x and y centroids determined by this algorithm to center our apertures in each exposure, but use a fixed rectangular aperture size of 380×20 pixels for all three stars (Figure <ref>). Our choice of width is motivated by the second visit, in which the spatial scan of A is very close to the edge of the detector; we select the maximal width that does not result in the aperture being truncated by the edge of the detector in any exposure. We explore other choices of aperture size and find that our extraction is robust against modest variations in this choice. We do not perform background subtraction, as the stars are very bright and the background count level is therefore negligible (note the logarithmic color bar in Figure <ref>). The HST timestamps are in UTC, which we convert to BJD using <cit.>.
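To make these reduction steps concrete, the sketch below shows a rectangular-aperture extraction and a UTC-to-BJD_TDB conversion of the kind described above, using photutils and astropy. It is a minimal illustration under stated assumptions, not the pipeline used for this work: the file name, centroid positions, and the geocentric observer location are placeholders (HST's orbital position would slightly change the light-travel-time term).

```python
# Minimal sketch (not the actual reduction pipeline): rectangular-aperture
# extraction of a spatial scan and UTC->BJD_TDB conversion with astropy.
# The RA/Dec values, centroids, and file name below are placeholders.
from astropy.io import fits
from astropy.time import Time
from astropy.coordinates import SkyCoord, EarthLocation
from photutils.aperture import RectangularAperture, aperture_photometry

def extract_scan(image, x_cen, y_cen, width=380, height=20):
    """Sum counts in a fixed rectangular aperture centered on the scan."""
    aper = RectangularAperture((x_cen, y_cen), w=width, h=height)
    return float(aperture_photometry(image, aper)["aperture_sum"][0])

def utc_jd_to_bjd_tdb(jd_utc, ra_deg, dec_deg):
    """Convert a UTC Julian Date to BJD_TDB for a target direction."""
    target = SkyCoord(ra=ra_deg, dec=dec_deg, unit="deg")
    # Geocentric observer is a simplification; HST's orbit changes the
    # light-travel-time correction only at the tens-of-milliseconds level.
    t = Time(jd_utc, format="jd", scale="utc",
             location=EarthLocation.from_geocentric(0, 0, 0, unit="m"))
    ltt = t.light_travel_time(target, kind="barycentric")
    return (t.tdb + ltt).jd

# Example usage on one calibrated exposure (placeholder file name):
# image = fits.getdata("iexample_flc.fits", ext=1)
# flux_A = extract_scan(image, x_cen=256.0, y_cen=300.0)
```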
Figure <ref> shows our extracted time series. Some systematics are present. Most prominently, there is an offset between forward and reverse scans. There are also trends with HST orbital phase, with these trends seeming to vary between the first and second visit (the left and right panels of the figure); nonetheless, these trends are much less pronounced than what is typically seen in WFC3/IR spatial scan observations. Figure <ref> also shows that the flux of LTT 1445C varies substantially, both within a visit and over the 3-day interval between visits. In <cit.>, the authors suspected that the C component was the source of the 1.4-day rotation period observed in the TESS data; our observations confirm this hypothesis.
§.§ Filter comparison
Our HST observations use the F814W filter, which offers good coverage over the red-optical wavelengths where M dwarfs emit much of their light and avoids the 6563 Å Hα feature, which is sensitive to stellar flares. In Figure <ref>, we compare the response functions of the TESS and HST observations. While the TESS bandpass is wider, the two are nonetheless similar; for a model 3300K M dwarf <cit.>, we find an effective wavelength of 8100Å for our HST observations, as opposed to 8300Å for TESS. The similarity of the response functions allows us to make two simplifications in our analysis, which we describe below.
Firstly, this similarity suggests that we do not need to account for the wavelength dependence of the limb-darkening parameters. To verify this, we use <cit.> to estimate quadratic limb-darkening parameters for LTT 1445A in each of the two bands, adopting the estimates of the stellar properties from <cit.>: T_ eff=3340 K, logg=4.967, and [Fe/H]=-0.34 dex. For the TESS bandpass, we find u_1=0.194, u_2=0.360. For the HST bandpass, we find u_1=0.181, u_2=0.367. The variations in limb darkening between these bandpasses are very small, particularly considering that <cit.> allow the limb-darkening parameters to vary in their fit to the TESS data, centered on the computed value and with a standard deviation of 0.1. The parameters are indistinguishable at this level of precision.
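As a quick numerical check of this statement, the snippet below evaluates the quadratic limb-darkening law I(μ)/I(1) = 1 − u_1(1−μ) − u_2(1−μ)^2 for the two sets of coefficients quoted above; the resulting intensity profiles differ by far less than the 0.1 prior width used in the fit.

```python
# Quick check that the TESS and F814W quadratic limb-darkening coefficients
# quoted in the text imply nearly identical intensity profiles.
import numpy as np

def quad_ld(mu, u1, u2):
    """Quadratic limb-darkening law, I(mu)/I(1)."""
    return 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu) ** 2

mu = np.linspace(0.05, 1.0, 200)
tess = quad_ld(mu, u1=0.194, u2=0.360)
hst = quad_ld(mu, u1=0.181, u2=0.367)
print(f"max |I_TESS - I_HST| = {np.max(np.abs(tess - hst)):.4f}")
# ~0.006 near the limb, far below the 0.1 prior width used in the fit.
```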
A second consideration is the dilution. As the TESS light curve contains all three stars, a dilution correction is required to remove contamination by LTT 1445BC when analyzing those data. In <cit.>, the authors estimated this dilution using TESS magnitudes approximated from (I_ KC-K_ s) colors, ultimately measuring A_D = (f_ A/( f_ A+ f_ B + f_ C))=0.480±0.013. As HST independently resolves each star, we are equipped to improve this dilution estimate. We measure A_D= 0.47541 ± 0.00024 from our HST observations, with the uncertainty in our measurement driven by spot modulation of LTT 1445C. Our HST value is consistent with 0.480±0.013 within errors. Because the TESS and HST bandpasses are relatively similar, we suspect that differences between the magnitude systems would introduce negligible uncertainty in the dilution correction. To test this, we approximate LTT 1445A, B, and C using the 3300K, 3200K, and 3100K models shown in Figure <ref>. We then use these models to calculate A_D in each of the two bands, finding that the estimates differ by only 0.0009. Without any correction for bandpass mismatch, our estimates of A_D from the HST observations still reduce the uncertainty in the TESS dilution by an order of magnitude.
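The dilution factor itself is a simple flux ratio; the sketch below, with placeholder arrays standing in for the per-exposure aperture sums, illustrates how A_D and its scatter would be computed from the resolved HST photometry.

```python
# Sketch of the dilution calculation from the resolved HST fluxes.
# flux_A, flux_B, flux_C would be per-exposure aperture sums (placeholders here).
import numpy as np

def dilution(flux_A, flux_B, flux_C):
    """A_D = f_A / (f_A + f_B + f_C), evaluated per exposure."""
    return flux_A / (flux_A + flux_B + flux_C)

# Illustrative values only (not the measured fluxes):
rng = np.random.default_rng(1)
flux_A = 1.0e6 * (1 + 1e-4 * rng.normal(size=188))
flux_B = 6.0e5 * (1 + 1e-4 * rng.normal(size=188))
flux_C = 5.0e5 * (1 + 1e-3 * rng.normal(size=188))   # spot-modulated star C
a_d = dilution(flux_A, flux_B, flux_C)
print(f"A_D = {a_d.mean():.5f} +/- {a_d.std():.5f}")
```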
§ JOINT TESS AND HST ANALYSIS
§.§ TESS model
In <cit.>, the authors use the package <cit.> to remove stellar variability and <cit.> to perform the orbital fit. For simplicity, we perform both tasks within a single modeling framework in this work. This approach has the benefit of allowing uncertainties in the stellar variability removal to propagate into the final fit.
While we did not observe additional transits of LTT 1445Ab, we fit for both planets in our TESS model. As discussed in <cit.>, transit duration provides an independent measurement of the stellar density that can constrain the stellar radius and mass even beyond the 4–5% systematic noise floors of current models <cit.>. The inclusion of planet b therefore improves our fit for planet c by reducing the uncertainty in the stellar radius.
We model the rotational modulation with a Gaussian process (GP), using the kernel implemented in <cit.>. This kernel is a mixture of two simple harmonic oscillators and is parameterized by five hyperparameters: an amplitude, the rotation period, a quality factor, the difference between the quality factors of the two modes, and the fractional amplitude of the secondary mode. As this treatment is fairly standard, we will not discuss the mathematical formalism in greater detail here. We refer the reader to the documentation for full details, or other works such as <cit.>. We adopt the priors on the hyperparameters given in the tutorial on this topic:[https://gallery.exoplanet.codes/tutorials/lc-gp-transit/https://gallery.exoplanet.codes/tutorials/lc-gp-transit/] an inverse gamma distribution with a lower tail of 1 and an upper tail of 5 on the amplitude, normal distributions with means of 0 and standard deviations of 2 on the logs of the two quality-factor parameters, a uniform distribution from 0.01 to 1 on the fractional amplitude, and a normal distribution on the log of the period, centered at the log of the measured rotation period (here, 1.398 days) with a standard deviation of 0.02. We also fit for a jitter term, which we add to the TESS uncertainties in quadrature. We again adopt the suggested prior: the log of the jitter is a normal distribution centered at the log of the mean TESS error, with a standard deviation of 2.
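For concreteness, the following sketch builds a rotation-term GP of this kind with celerite2, whose RotationTerm implements the mixture of two simple harmonic oscillators; the hyperparameter names follow celerite2's convention, and the numerical values are illustrative rather than the fitted ones.

```python
# Sketch of a rotation GP like the one used to model the TESS variability,
# written with celerite2's RotationTerm (two coupled simple harmonic
# oscillators). Values below are illustrative, not the fitted ones.
import numpy as np
import celerite2
from celerite2 import terms

kernel = terms.RotationTerm(
    sigma=1.0,      # amplitude of the variability (ppt)
    period=1.398,   # rotation period of LTT 1445C (days)
    Q0=1.0,         # quality factor of the secondary mode (celerite2 convention)
    dQ=1.0,         # difference between the quality factors of the two modes
    f=0.5,          # fractional amplitude of the secondary mode
)
gp = celerite2.GaussianProcess(kernel, mean=0.0)

# t, y, yerr stand in for TESS time stamps, fluxes (ppt), and uncertainties
t = np.linspace(0.0, 27.0, 1000)
y = np.sin(2 * np.pi * t / 1.398)
yerr = np.full_like(t, 0.62)            # jitter-inflated error level
gp.compute(t, yerr=yerr)
print("GP log-likelihood:", gp.log_likelihood(y))
```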
As in <cit.>, we remove the crowding correction applied by the TESS pipeline and apply our own. While those works used A_D = 0.480, we instead use A_D = 0.47541, as measured from our HST observations in Section <ref>.
We use <cit.> as our modeling framework. As in <cit.>, we use the <cit.> K-band mass–luminosity relation as our prior on the stellar mass, which produces an estimate of 0.258±0.014 M_⊙, and the <cit.> mass–radius relation as our prior on the stellar radius, corresponding to 0.268±0.027 R_⊙. We model the star as an object, which accounts for quadratic limb darkening. We use the limb-darkening estimates from Section <ref> as our priors: normal distributions centered at u_1 = 0.19 and u_2 = 0.36 and with standard deviations of 0.10.
We model the light curve corresponding to the two planets using the class, which takes as input the planetary periods, times of conjunction, impact parameters, radius ratios, and transit durations, as well as the stellar radius. We also include a free parameter for the normalization of the light curve. This class outputs an estimate of stellar mass implied by the transit duration of each planet; we use these estimates as observed parameters, comparing them against our mass prior.
As <cit.> and <cit.> both found that the orbits of planets b and c were consistent with circular, we do not include eccentricity in our fit. Moreover, low eccentricities are expected due to tidal circularization <cit.>: for Earth-like tidal dissipation factors, the circularization timescale for both planets is on the order of 1–10 million years. While ages are challenging to measure for M dwarfs, the long rotation period and Hα inactivity of LTT 1445A rule out such extreme youth <cit.>.
We use uninformative uniform priors for the radius ratios and transit durations. For the impact parameter, we use a uniform prior ranging between 0 and 1+R_P/R_*; an impact parameter exceeding this upper limit would correspond to a completely non-transiting planet. For the periods, we use normally distributed priors centered at the <cit.> value with standard deviations of 0.00001 days. We select this standard deviation such that our prior is wider than the 0.000004-day uncertainty estimated in <cit.>, ensuring that our solutions can deviate from the <cit.> results if necessitated by the data. For the times of conjunction, we do the same but with standard deviations of 0.001 days, as the <cit.> uncertainties are larger for these parameters.
§.§ HST model
We fit our HST and TESS data jointly, and therefore the planetary and stellar parameters described in the previous section are also used to model the HST transit. As discussed in Section <ref>, we do not find it necessary to use separate limb-darkening parameters for the HST observations, as we find that the differences between the bandpasses will not measurably change the limb darkening at the level of precision of our data. We perform our systematics correction within the fit to allow uncertainties in the correction to propagate into our inferred system parameters.
Table: Raw and Systematics-corrected Light Curves of the LTT 1445 system

  Column  Format  Units   Description
  1       F9.5    days    BJD - 2457000
  2       F7.0    counts  Raw Flux A
  3       F7.0    counts  Raw Flux B
  4       F7.0    counts  Raw Flux C
  5       F3.3    ppt     Corrected Flux A
  6       F3.3    ppt     Error in Corrected Flux A
  7       F3.3    ppt     Corrected Flux B
  8       F3.3    ppt     Error in Corrected Flux B
  9       F3.3    ppt     Corrected Flux C
  10      F3.3    ppt     Error in Corrected Flux C

Note: Full table available in machine-readable form. The corrected flux columns use the maximum a posteriori systematics correction, as plotted in Figure <ref>.
Table: Maximum a posteriori Jitter Parameters

  Jitter Parameter    ppt
  A, HST Visit 1      0.15
  B, HST Visit 1      0.15
  C, HST Visit 1      0.09
  A, HST Visit 2      0.15
  B, HST Visit 2      0.17
  C, HST Visit 2      0.17
  TESS                0.62

Note: These jitter terms are added to the observational uncertainties in quadrature.
For each star, we compare the normalized observed fluxes to our model. This model comprises the prediction (which is always equal to unity for stars B and C), divided by a normalized systematics term. This systematics term is a fourth-order polynomial in HST orbital phase, with coefficients that are shared between the three stars but allowed to vary between the two visits. The normalization of the observed fluxes is also a free parameter, which is allowed to vary between stars, visits, and forward/reverse scans. For LTT 1445C, this normalization includes a linear slope to model the observed rotational modulation, which we allow to vary between visits. Lastly, we include a GP to clean up residual correlations that we observe between the light curves of the three stars. The kernel for this GP is a stochastically driven, damped harmonic oscillator, with two hyperparameters that govern the characteristic amplitude and length scale of oscillations. These hyperparameters are shared across all three stars and across both visits. We also fit for a jitter term that is added to our photometric uncertainties in quadrature. This jitter is allowed to vary between stars and between visits.
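A simplified sketch of this systematics model is given below: it implements the per-visit fourth-order polynomial in HST orbital phase, the per-scan normalization, and the linear slope used for LTT 1445C, while omitting the residual-correlation GP. The function signatures are illustrative assumptions, not the code used for the published fit.

```python
# Simplified sketch of the HST systematics model described above. The GP that
# absorbs residual correlations between the three stars is omitted for brevity.
import numpy as np

def systematics_model(phase, coeffs):
    """Normalized fourth-order polynomial in HST orbital phase.
    coeffs: [c4, c3, c2, c1, c0], shared between stars within a visit."""
    poly = np.polyval(coeffs, phase)
    return poly / np.median(poly)

def hst_model(transit, phase, coeffs, norm, slope=0.0, time=None):
    """Model flux for one star / visit / scan direction.
    `transit` is the transit prediction (identically 1 for stars B and C);
    `norm` is the per-star/visit/scan normalization; `slope` is the linear
    rotational-modulation term used only for LTT 1445C."""
    trend = 1.0 if time is None else 1.0 + slope * (time - np.median(time))
    return norm * transit * trend / systematics_model(phase, coeffs)
```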
The maximum a posteriori (MAP) solution for our systematics correction is shown in Figure <ref>, with the corresponding flux values provided in Table <ref>. In Figure <ref>, we show the phased transit of LTT 1445Ac as it appears in the detrended TESS and HST data. In Table <ref>, we note the MAP values for the jitter terms included in our model. The standard deviation of the residuals of our MAP fit is 188 ppm for the HST observations of A (22s exposures) and 1.02 ppt for the TESS observations (2m exposures).
Table: Median Parameters and 68% Confidence Intervals from our Joint HST–TESS Fit

Host Star Parameters: LTT 1445A
  M_*   Stellar Mass (M_⊙)                      0.257±0.014
  R_*   Stellar Radius (R_⊙)                    0.270^+0.019_-0.010
  u_1   Linear limb-darkening coefficient       0.157^+0.083_-0.080
  u_2   Quadratic limb-darkening coefficient    0.34^+0.10_-0.09

Planetary Parameters                            Planet c                              Planet b
  P          Period (days)                      3.1238999^+0.0000025_-0.0000026       5.3587643^+0.0000041_-0.0000038
  a          Semimajor axis (au)                0.02658^+0.00047_-0.00049             0.03808^+0.00067_-0.00070
  T_0        Time of conjunction (BJD)          2458412.58194^+0.00058_-0.00057       2458412.70899^+0.00039_-0.00040
  T_14       Total transit duration (days)      0.02120^+0.00089_-0.00084             0.05694±0.00076
  R_P        Radius (R_⊕)                       1.10^+0.10_-0.07                      1.34^+0.11_-0.06
  R_P/R_*    Radius of planet in stellar radii  0.0370^+0.0020_-0.0018                0.0454±0.0012
  δ          Transit depth (fraction)           0.001146^+0.000090_-0.000082          0.00230±0.00011
  i          Inclination (deg)                  87.46^+0.12_-0.20                     89.52^+0.33_-0.40
  b          Impact parameter                   0.936^+0.012_-0.011                   0.25^+0.18_-0.17
  b+R_P/R_*  Grazing parameter                  0.973^+0.013_-0.011                   0.30^+0.18_-0.17
  K          RV semi-amplitude (m s^-1)         1.48±0.20                             2.47±0.20
  M_P        Mass (M_⊕)                         1.36±0.19                             2.74^+0.25_-0.24
  ρ_P        Density (g cm^-3)                  5.6^+1.7_-1.5                         6.2^+1.2_-1.3
§.§ Sampling
We run a Markov-Chain Monte Carlo (MCMC) analysis to determine the uncertainties in our model parameters. Starting from the MAP solution found by the optimizer, we use the modified sampler implemented in our modeling framework to sample four chains, each with a 1500-draw burn-in and 2000 draws. We use an initial acceptance fraction of 0.5, a target acceptance fraction of 0.95, and 100 regularization steps. We find that the sampler properly converges, as evidenced by Gelman–Rubin statistics <cit.> near 1 for all parameters.
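As a reminder of what this convergence check computes, here is a minimal implementation of the (unsplit) Gelman–Rubin statistic for a single parameter; values near 1 indicate that the between-chain and within-chain variances agree.

```python
# Minimal Gelman-Rubin statistic for one parameter, to illustrate the
# convergence diagnostic quoted above (values near 1 indicate convergence).
import numpy as np

def gelman_rubin(chains):
    """chains: array of shape (n_chains, n_draws) for a single parameter."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)              # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()        # within-chain variance
    var_hat = (n - 1) / n * W + B / n            # pooled variance estimate
    return np.sqrt(var_hat / W)

# Example with 4 well-mixed chains of 2000 draws:
rng = np.random.default_rng(0)
draws = rng.normal(size=(4, 2000))
print(f"R_hat = {gelman_rubin(draws):.3f}")      # ~1.00
```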
In Table <ref>, we tabulate our median and 68% confidence intervals for the orbital parameters. We are particularly interested in the grazing parameter, b+R_P/R_*. When this parameter exceeds 1, only part of the planet eclipses the stellar disk during transit and the transit is considered grazing. We measure a value of 0.973^+0.013_-0.011, indicating a non-grazing geometry. Moreover, we find that fewer than 3% of samples in our posterior distribution have a grazing parameter in excess of 1 (Figure <ref>). We are therefore able to constrain the radius of LTT 1445Ac with good precision, finding a value of 1.10^+0.10_-0.07 R_⊕. This measurement is consistent with the 1.147^+0.055_-0.054 R_⊕ reported in <cit.>, although that value was estimated using the <cit.> planetary mass–radius relation as a prior and not measured solely from the light curve.
§.§ Radial velocities
For circular orbits, the correlation between the transit parameters and the RV semiamplitude, K, is negligible. As we have not collected any new RV data, we can adopt the measurement of K from a previous work. However, no previous work has performed a radial-velocity fit using all extant RV data. <cit.> analyzed 136 radial velocities from the ESPRESSO <cit.>, HARPS <cit.>, HIRES <cit.>, MAROON-X <cit.>, and PFS <cit.> spectrographs, finding K_c=1.67^+0.21_-0.20 ms^-1. <cit.> collected 85 additional radial velocities with ESPRESSO and found K_c=1.11±0.20 ms^-1, but they analyzed only their ESPRESSO data and archival data from HARPS. The two estimates do not agree within stated errors. While the <cit.> fit also includes a third planet, they report that their measurement of K_c is effectively unchanged in a model fit that contains only planets b and c; therefore, the inclusion or exclusion of planet d does not explain the discrepancy.
Repeating the RV-only analysis from <cit.> but with the addition of the 85 new ESPRESSO radial velocities from <cit.>, we find K_c=1.48±0.20 ms^-1, which splits the difference between the previous measurements. This fit assumes circular orbits and uses separate zero points for the observations collected before and after the COVID-19 shutdown of ESPRESSO. For planet b, this analysis produces K_b=2.47±0.20 ms^-1, which also is intermediate between the 2.60±0.21 ms^-1 found by <cit.> and the 2.15±0.19 ms^-1 found by <cit.>. We prefer these new estimates over those published in the previous works, as they account for all extant radial velocity data. We combine these K measurements with our posterior distributions of period, inclination, and stellar mass to estimate the planetary masses, finding 1.36±0.19 M_⊕ for c and 2.74^+0.25_-0.24 M_⊕ for b. The densities implied by our mass and radius estimates are fully consistent with Earth-like planetary compositions (Figure <ref>). Note that both planets are more highly irradiated than the Earth, with an instellation of roughly 12S_⊕ for c and 6S_⊕ for b.
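As a rough consistency check of these numbers, the snippet below converts the semi-amplitude of planet c into a planetary mass using the standard circular-orbit radial-velocity relation in the limit M_p ≪ M_*, with the point estimates from Table <ref>; the full analysis instead propagates the posterior samples of P, i, and M_*.

```python
# Back-of-the-envelope planetary mass of LTT 1445Ac from the RV semi-amplitude,
# assuming a circular orbit and M_p << M_*. Point values only, for illustration.
import numpy as np
from astropy import units as u
from astropy import constants as const

K = 1.48 * u.m / u.s            # RV semi-amplitude of planet c
P = 3.1238999 * u.day
inc = 87.46 * u.deg
M_star = 0.257 * u.M_sun

M_p = (K / np.sin(inc.to_value(u.rad))
       * (P / (2 * np.pi * const.G)) ** (1 / 3)
       * M_star ** (2 / 3))
print(M_p.to(u.M_earth))        # ~1.4 M_earth, consistent with the table values
```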
§ CONCLUSION
We observed the three stars of the LTT 1445 system for six orbits of the Hubble Space Telescope using WFC3/UVIS imaging in spatial scan mode, including one transit of LTT 1445Ac. We jointly fit our observations with extant TESS data, allowing us to establish that the transit of LTT 1445Ac is non-grazing with 97% confidence and measure the planetary radius to be 1.10_-0.07^+0.10 R_⊕. Using radial velocity observations previously published in <cit.> and <cit.>, we find a planetary mass of 1.36±0.19 M_⊕.
We tabulate our constraints on planetary parameters for LTT 1445Ab and c in Table <ref>. These estimates supersede those of <cit.>, as ours include additional transit data from HST and radial velocities from <cit.>, and are not calculated under the assumption of the <cit.> mass–radius relation. Our revised system parameters yield a TSM of 46 for LTT 1445Ac, which is the same value as was estimated in <cit.> using that mass–radius assumption. Our HST observations also allow us to confirm that LTT 1445C is the source of the rotational modulation in the TESS observations and refine the estimate of the TESS dilution to A_D=0.4754.
Taken together, our inferred mass and radius indicate that LTT 1445Ac has a likely terrestrial composition, falling on the rocky side of the radius gap <cit.>. As the nearest terrestrial exoplanet to transit an M dwarf (alongside LTT 1445 Ab), this planet is an exciting target for atmospheric characterization, particularly now that it is known to be non-grazing and its radius is therefore appropriately constrained.
§ ACKNOWLEDGEMENTS
We thank Jonathan Irwin and Johanna Teske for their feedback on both the HST proposal and this manuscript, and Nicola Astudillo-Defru, Xavier Bonfils, Martti Holst Kristiansen, Andrew Howard, Alton Spencer, and Andrew Vanderburg for their participation in the HST proposal. E.P. is supported in part by a Natural Sciences and Engineering Research Council of Canada (NSERC) Postgraduate Scholarship, M.L. by a National Science Foundation (NSF) Graduate Research Fellowship, and R.C. by an NSERC Banting Postdoctoral Fellowship.
This work is based on observations with the NASA/ESA Hubble Space Telescope obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. Support for program number HST-GO-16503 was provided through a grant from the STScI under NASA contract NAS5-26555. This paper includes data collected by the TESS mission, which are publicly available from the Mikulski Archive for Space Telescopes (MAST). Funding for the TESS mission is provided by the NASA's Science Mission Directorate.
Facilities: HST, TESS
Software: <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, as well as <cit.> and its dependencies <cit.>.
|
http://arxiv.org/abs/2307.01893v1
|
20230704193453
|
EANet: Enhanced Attribute-based RGBT Tracker Network
|
[
"Abbas Türkoğlu",
"Erdem Akagündüz"
] |
cs.CV
|
[
"cs.CV"
] |
EANet: Enhanced Attribute-based RGBT Tracker Network
====================================================
Tracking objects can be a difficult task in computer vision, especially when faced with challenges such as occlusion, changes in lighting, and motion blur. Recent advances in deep learning have shown promise in handling these conditions. However, most deep learning-based object trackers only use visible band (RGB) images. Thermal infrared electromagnetic waves (TIR) can provide additional information about an object, including its temperature, when faced with challenging conditions. We propose a deep learning-based image tracking approach that fuses RGB and thermal images (RGBT). The proposed model consists of two main components: a feature extractor and a tracker. The feature extractor encodes deep features from both the RGB and the TIR images. The tracker then uses these features to track the object using an enhanced attribute-based architecture. We propose a fusion of attribute-specific feature selection with an aggregation module. The proposed method is evaluated on the RGBT234 <cit.> and LasHeR <cit.> datasets, which are the most widely used RGBT object-tracking datasets in the literature. The results show that the proposed system outperforms state-of-the-art RGBT object trackers on these datasets, with a relatively smaller number of parameters.
§ INTRODUCTION
Tracking objects using both RGB and thermal images, known as RGBT tracking, is a complex task due to the differences in the two modalities. In challenging scenarios, the fusion of RGB and thermal images into a single stream using traditional tracking methods may not be the most effective approach. This work aims to enhance RGBT tracking through the use of an improved fusion module. With our approach, we aim to enhance feature fusion capability with a relatively small number of parameters and reduce dependency on large-scale training datasets.
For this purpose, we focus on developing attribute-specific fusion branches and attribute-based aggregation fusion modules, which are ideas previously proposed by <cit.>. In order to improve the effectiveness of this approach, we combine the attribute-specific fusion branch and the attribute-based aggregation fusion module with an architecture inspired by the so-called ESKNet <cit.>. The aggregation module based on ESKNet is designed to fuse features from all branches adaptively. When summing the branch features, this module determines which features to keep, using attention-based scoring to suppress noisy attribute features. The purpose of this paper is to combine this idea with attribute-specific fusion in order to improve RGBT tracking.
We evaluate the effectiveness of our work by comparing its performance against other state-of-the-art methods on benchmark datasets such as RGBT234<cit.> and LasHeR <cit.>. We aim to prove that our efforts provide a viable solution for enhancing the precision and speed of RGBT tracking systems across a diverse range of uses, such as autonomous driving, security surveillance, and robotics.
§ RELATED WORK
When it comes to visual tracking, combining multiple imaging modalities has shown potential in overcoming the limitations of single-modal tracking systems. In recent years, RGBT tracking studies have increased significantly due to the availability of all-in-one RGB and TIR optical systems <cit.>. In particular, deep learning-based RGBT architectures dominated the literature with the rise of deep learning. The reader may refer to <cit.> for a detailed survey on the subject. In the following, we present details of several outstanding state-of-the-art RGBT object tracking methods that we used to benchmark our study.
* MDNet: An exemplary visual tracking algorithm is the Multi-Domain Network (MDNet) <cit.>. It delivers superior performance when compared to other algorithms in the field. There are several branches of domain-specific layers in MDNet, and each domain corresponds to a separate training sequence. Each training video is treated as a separate domain, with domain-specific layers for binary classification at the end of the network. The algorithm also includes a multi-domain learning framework that separates domain-independent information from domain-specific information and an effective hard negative mining technique for the learning procedure.
* FANet: FANet <cit.> offers a practical approach to implementing all layer features in a trained model and developing resilient target representations. It uses a fully connected layer to learn a nonlinear interaction between channels of varying modalities and employs a second fully connected layer in conjunction with a SoftMax activation function to predict the modality weights that regulate the information flows across modalities in adaptive aggregation.
* DAPNet: Dense feature Aggregation and Pruning network (DAPNet) <cit.> offers a unique feature aggregation and pruning system for RGBT fusion object tracking, recursively combining all layers' deep features while compressing feature channels.
* CMPP: Cross-Modal Pattern-Propagation for RGBT Tracking (CMPP) <cit.> diffuses instance patterns across the two modalities on both the spatial and temporal domain, incorporating long-term historical contexts into the current frame for more effective information inheritance. <cit.> proposed a dual visual attention-guided deep RGBT tracking algorithm that utilizes both local attention and target-driven global attention.
* JMMAC: Joint Modeling of Motion and Appearance Cues (JMMAC) <cit.> is proposed for jointly modeling both appearance and motion cues. The framework includes multimodal fusion and motion mining components. When the appearance cue is unreliable, the motion cue, which includes target and camera movements, is used to make the tracker more robust.
* MANet++: MANet++ <cit.> jointly performs modality-shared, modality-specific, and instance-aware target representation learning for RGBT tracking. Multimodal Cross-Layer Bilinear Pooling for RGBT Tracking (CBPNet) <cit.> includes a feature extractor, channel attention mechanism, cross-layer bilinear pooling module, and three fully connected layers for binary classification.
In addition to the listed above, M^5L <cit.>, MaCNet <cit.>, TFNet <cit.>, CAT <cit.>, ADRNet <cit.>, and APFNet <cit.> are various RGBT trackers that improve tracking performance by learning effective residual representations to enhance target appearance under various challenging circumstances. M^5L <cit.> uses a novel loss function called Multi-modal Multi-margin Structured Loss, which preserves structured information from both RGB and thermal modalities. MaCNet <cit.> focuses on scene-adaptive fusion for cross-modal features, while TFNet <cit.> tracks specific instances in consecutive frames using both RGB and thermal infrared information.
CAT <cit.>, ADRNet <cit.>, and APFNet <cit.> differ from most existing RGBT trackers in their network construction, making them more comprehensible for both modality-specific and modality-shared challenges. ADRNet <cit.> models target appearance in different attributes individually using the attribute-driven branch (ADRB), which is then adaptively aggregated via the channel-wise ensemble network (CENet) module. APFNet <cit.> disentangles the fusion process into five challenge attributes, including thermal crossover, illumination variation, scale variation, occlusion, and fast motion. APFNet <cit.> is trained using a three-stage algorithm, using a dual-stream hierarchical architecture for online tracking and an enhancement transformer for interactive enhancement. These trackers are able to handle various challenges and can be trained efficiently using a small amount of data.
§ PROPOSED METHOD
As stated earlier in the Introduction section, our research focuses on combining the attribute-specific fusion proposed by <cit.> with an aggregation module inspired by <cit.>. The so-called “Attribute-Based Progressive Fusion Module” is utilized by APFNet for fusion. This module consists of three main components, namely the Attribute-Specific Fusion Branch, Attribute-based Aggregation, and Attribute-based Enhanced Fusion <cit.>. This architecture has proven to be a powerful RGBT tracker network; however, in this paper, we investigate ways to reduce system complexity by replacing the Attribute-based Enhanced Fusion module.
Another architecture we are inspired by, the ESKNet, adds spatial attention mechanisms to a previous model, the SKNET <cit.>. Spatial attention enables the network to dynamically assign higher weights or focus to specific spatial locations, successfully calibrating the significance of various regions. The spatial attention mechanism prioritizes important regions for tracking by assigning them higher weights, while disregarding or suppressing regions that are less informative. The model can better capture the appearance of the object and adapt to changes in its position or appearance over time by focusing on the most important spatial locations. By calibrating the spatial dimension features through the integration of spatial attention, we hope to improve the model's ability to pay attention to and track the object correctly, even in tough situations like occlusions, cluttered backgrounds, or changing lighting conditions.
In order to accommodate for the differences between the various imaging modalities, we adopt a parallel network <cit.> as our network's backbone in order to extract features from both RGB and thermal infrared pictures independently.
The backbone of our model is the first three convolution layers from VGG-M <cit.>, which have kernel sizes of 7x7, 5x5, and 3x3. To initialize the convolution kernel parameters, we employ a pre-trained model on ImageNet-vid <cit.>. By incorporating the enhanced attribute-based module into all levels of the backbone, we create a hierarchical design that facilitates effective fusion of diverse modalities. Following the last convolutional layer, we introduce three fully connected layers, with the final FC6 layer adapted to different domains in a manner similar to the approach in <cit.>.
The fusion process is applied to both RGB and thermal data simultaneously, passing through branches specially trained for different attributes. The Enhanced Selective Kernel (ESK) <cit.> is used for both fusion branches and aggregation of branches data. The convolution data and the data formed as a result of the aggregation are subjected to element-wise addition, which proceeds separately for RGB and Thermal data in parallel.
In the interest of keeping things as straightforward as possible, we give the fusion branches corresponding to the various attributes the same structure. To be more specific, for each branch we first extract features from the two modalities using a convolutional layer with a kernel size of 5x5, a rectified linear unit (ReLU) <cit.>, and a convolutional layer with a kernel size of 4x4. After that, we make use of a structure similar to ESK <cit.> to perform an adaptive selection of the features, as sketched below. Figure <ref> and <ref> provide further information regarding these particulars.
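The following PyTorch sketch illustrates one such attribute-specific fusion branch: per-modality Conv5x5 → ReLU → Conv4x4 encoders followed by an SK-style selection with channel and spatial attention. The channel counts, reduction ratio, and spatial-attention kernel size are illustrative assumptions rather than the trained configuration.

```python
# Illustrative sketch of one attribute-specific fusion branch:
# per-modality Conv5x5 -> ReLU -> Conv4x4, then an SK-style selection that
# reweights the two modality features with channel and spatial attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributeFusionBranch(nn.Module):
    def __init__(self, in_ch=96, mid_ch=32, reduction=4):
        super().__init__()
        def enc():
            return nn.Sequential(
                nn.Conv2d(in_ch, mid_ch, kernel_size=5, padding=2),
                nn.ReLU(inplace=True),
                nn.Conv2d(mid_ch, in_ch, kernel_size=4, padding=2),
            )
        self.enc_rgb, self.enc_tir = enc(), enc()
        # channel attention: SK-style selection between the two modalities
        self.fc = nn.Sequential(
            nn.Linear(in_ch, in_ch // reduction), nn.ReLU(inplace=True),
            nn.Linear(in_ch // reduction, 2 * in_ch),
        )
        # spatial attention over the fused features
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x_rgb, x_tir):
        f_rgb, f_tir = self.enc_rgb(x_rgb), self.enc_tir(x_tir)
        f_rgb = F.interpolate(f_rgb, size=x_rgb.shape[-2:])   # align sizes
        f_tir = F.interpolate(f_tir, size=x_tir.shape[-2:])
        u = f_rgb + f_tir
        # channel-wise selection weights (softmax over the two modalities)
        s = u.mean(dim=(2, 3))                                 # (B, C)
        w = self.fc(s).view(-1, 2, u.size(1), 1, 1).softmax(dim=1)
        fused = w[:, 0] * f_rgb + w[:, 1] * f_tir
        # spatial calibration of the fused feature map
        sp = torch.cat([fused.mean(1, keepdim=True),
                        fused.amax(1, keepdim=True)], dim=1)
        return fused * torch.sigmoid(self.spatial(sp))
```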
The network is trained in two phases: first, each branch is trained individually, with the pre-trained model on the ImageNet dataset initializing the parameters of the VGG-M <cit.> backbone. During the second phase, we rectify the trained branches and employ all the accessible training data to refine the aggregation fusion modules. This process involves a total of 500 training epochs. All the other settings remain the same as in the first phase. We store the parameters of the FC4 and FC5 aggregation fusion modules.
In the online tracking phase, the tracker is established using the target's location and the first frame of the series. 500 positive samples of various scales surrounding the target are collected using Gaussian sampling in the first frame, while 1000 samples are collected simultaneously to train the regressor. The coordinates of the tracking results are modified using the regressor to get more precise tracking results in follow-up tracking.
During the online tracking phase, only the parameters of the fully-connected layers are updated. When tracking the target in the t-th frame, 256 candidate samples for the current frame are drawn via Gaussian sampling around the tracking result of the (t-1)-th frame. The trained model computes the scores of the 256 samples and then takes the mean of the 5 samples with the highest scores. Finally, the learned regressor fine-tunes the target location.
Long-term updates at regular intervals and short-term updates triggered by tracking errors guarantee the robustness of the method. The tracking result X_t^* is the candidate with the highest positive (target) score among the N samples:
X_t^* = argmax_{x_t^i} f^+(x_t^i), i=1,2,...,N.
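The sketch below summarizes this online tracking step: Gaussian candidate sampling around the previous state, scoring with the trained classifier, averaging the top-5 candidates, and refining with the bounding-box regressor. The sampling standard deviations and the helper objects `model`, `regressor`, and `extract_patches` are assumptions for illustration.

```python
# Sketch of the online tracking step: draw N Gaussian candidates around the
# previous target state, score them with the trained classifier, average the
# top-5 candidates, and refine with the bounding-box regressor.
import numpy as np

def track_frame(model, regressor, prev_box, extract_patches,
                n=256, top_k=5, sigma=(0.6, 0.6, 0.05)):
    cx, cy, w, h = prev_box
    rng = np.random.default_rng()
    # Gaussian perturbations in position and scale around the previous box
    dx = rng.normal(0.0, sigma[0] * w, n)
    dy = rng.normal(0.0, sigma[1] * h, n)
    ds = 1.05 ** rng.normal(0.0, sigma[2] * 10, n)
    boxes = np.stack([cx + dx, cy + dy, w * ds, h * ds], axis=1)

    scores = model(extract_patches(boxes))      # positive-class scores f+
    top = np.argsort(scores)[-top_k:]
    best_box = boxes[top].mean(axis=0)          # mean of the top-5 candidates
    best_score = scores[top].mean()
    return regressor(best_box), best_score      # regressor refines the box
```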
§ EXPERIMENTAL SETUP
The code for our model is implemented in PyTorch.
The experiments were carried out on a system using an NVIDIA A4000 GPU with 16 GB RAM.
§.§ Datasets
Our method underwent an extensive evaluation on the RGBT234 <cit.> dataset, which is widely recognized as a benchmark for RGB and thermal object tracking. The dataset comprises 234 high-resolution RGB and thermal image sequences, encompassing a diverse range of scenarios and challenges. The purpose of evaluating our method on the RGBT234 <cit.> dataset was to demonstrate its superior performance in terms of accuracy and effectiveness in tracking targets under challenging conditions when compared to several state-of-the-art RGBT trackers.
The other dataset we utilized in our experiments, namely LasHeR <cit.> is comprised of 1224 (245 for testing and 979 for training) pairs of visible and thermal infrared video and over 730K frame pairs in total. Each frame is aligned spatially and manually labeled with a bounding box, resulting in a heavily annotated dataset. It will play a vital role in both the training of deep RGBT fusion object trackers and the evaluation of RGBT fusion object tracking methods comprehensively.
The GTOT <cit.> benchmark consists of 50 video sequences that were captured using both grayscale and thermal cameras. The sequences include a variety of challenging scenarios, such as low illumination, fast motion, partial occlusion, and cluttered backgrounds. The GTOT dataset also includes annotations of the object's bounding box in each frame, as well as statistics on the bias between the two modalities.
In this study, GTOT is used as the training set, and RGBT234 and LasHeR are used as the test sets.
§.§ Evaluation Metrics
Evaluation metrics for RGBT Tracking vary depending on the benchmark used.
The GTOT <cit.>, RGBT234 <cit.>, and LasHeR <cit.> benchmarks all make use of the same evaluation metrics, referred to as Precision Rate (PR) and Success Rate (SR). The precision rate measures the fraction of frames in which the distance between the centers of the predicted and ground-truth bounding boxes is below a given threshold. The success rate is the proportion of frames whose Intersection over Union (IoU) with the ground-truth label exceeds a predetermined threshold. For quantitative performance evaluation, we use the PR and SR from the one-pass evaluation (OPE). To calculate the PR score, we set the distance threshold to 20 pixels. The representative SR score is computed as the area under the success curve, which expresses the fraction of correctly tracked frames whose overlaps exceed the varying thresholds.
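A minimal implementation of these two metrics is shown below; the 20-pixel center-error threshold follows the text, while the grid of overlap thresholds used for the success-curve area is an assumption of the sketch.

```python
# Sketch of the PR/SR computation: PR is the fraction of frames whose
# center-location error is below 20 px; SR is the area under the success
# curve over IoU thresholds in [0, 1] (the mean over an even threshold grid).
import numpy as np

def precision_rate(pred_centers, gt_centers, threshold=20.0):
    err = np.linalg.norm(pred_centers - gt_centers, axis=1)
    return np.mean(err <= threshold)

def success_rate(ious, thresholds=np.linspace(0.0, 1.0, 21)):
    curve = np.array([(ious > t).mean() for t in thresholds])
    return curve.mean()        # area under the success curve (even grid)

# pred_centers, gt_centers: arrays of shape (n_frames, 2); ious: (n_frames,)
```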
§ RESULTS
§.§ Evaluation on RGBT234 Dataset
In terms of overall performance, our method achieved remarkable results on the RGBT234 dataset. The precision (PR) score reached an impressive 83.5%, showcasing the high level of accuracy in target tracking. Additionally, the success rate (SR) achieved an exceptional score of 58.4%, indicating the effectiveness of our method in successfully tracking targets throughout the sequences. We compared our model with state-of-the-art models such as <cit.>. The evaluation curves of Precision Rate and Success Rate can be seen in Figure <ref>.
To provide a comprehensive analysis of our method's performance, we further evaluated its effectiveness based on different attributes of the dataset. This attribute-based analysis allowed us to analyze how well our method handled specific challenges commonly encountered in object tracking, such as background clutter, occlusion, motion blur, and more.
Our method demonstrated robust performance across most attributes, surpassing the comparison trackers in terms of PR and SR. We compared our attribute-based performance with state-of-the-art trackers. Our attribute-based performance results can be seen in Table <ref>.
According to attribute-based performance results, in sequences with high background clutter (BC), our method achieved the best result with a precision rate of 83.2% and a success rate of 54.6%. This indicates its effectiveness in accurately tracking targets in cluttered environments. Our method performed well even in scenarios with heavy occlusion (HO), achieving a precision rate of 76.0% and a success rate of 52.2%. These results indicate that our approach is capable of dealing with challenging situations where the target is significantly occluded. Under motion blur (MB) conditions, our method achieved a precision rate of 76.9% and a success rate of 55.7%. This showcases its effectiveness in accurately tracking targets despite the presence of motion blur artifacts. When faced with partial occlusion (PO), our method achieved a precision rate of 86.6% and a success rate of 60.6%. This demonstrates the ability of our method to handle partially occluded targets and maintain accurate tracking.
In sequences with thermal crossover (TC), our method achieved a precision rate of 82.8% and the best success rate with a 59.2% score. This showcases the effectiveness of our approach in accurately tracking targets when the thermal and RGB modalities overlap.
Our method also demonstrated good performance in sequences with scale variation (SV), achieving second-best scores with a precision rate of 83.1% and a success rate of 58.5%. This highlights its ability to handle changes in target size and maintain accurate tracking.
The attribute-based performance results provide comprehensive insights into the capabilities of our method in handling diverse challenges within the RGBT234 dataset. Our model achieves the best score on close to 5 of the 12 attributes, as well as the best overall score across all attributes. Our method outperformed the comparison trackers in terms of PR and SR and demonstrated its effectiveness in various challenging scenarios.
§.§ Evaluation on LasHeR Dataset
In addition to the RGBT234 dataset, we also evaluated our method on the LasHeR dataset to further assess its performance. The evaluation on this set was consistent, although with slightly lower performance compared to the RGBT234 dataset. The evaluation curve of Precision Rate and Success Rate on LasHeR dataset can be seen in Figure <ref>.
Our method achieved a PR of 50.6% on the LasHeR dataset. Although this score is lower than that obtained on the RGBT234 dataset, it is nonetheless the best among the compared trackers, and it demonstrates the effectiveness of our approach in accurately localizing and tracking targets.
The SR achieved by our method on the LasHeR dataset is 36.7%. While this score may seem lower, it reflects the challenges present in the LasHeR dataset and the ability of our approach to handle them to a significant extent. The SR score of our model is also the best among the compared trackers.
The evaluation on the LasHeR dataset further supports the robustness and versatility of our method in handling different tracking scenarios and challenges. Despite the slightly lower performance compared to the RGBT234 dataset, our approach showcased its effectiveness in tracking targets in diverse environments.
§.§ Ablation Study
We conducted an ablation study to measure the effectiveness of the sub-modules of our model. In order to see the contribution of the Enhanced Aggregation Module to the success of the model, we removed this module and aggregated the data from the Enhanced Fusion Branch Modules with element-wise addition. We call this variation of the model “Var-AggESK” in Table <ref>. The “Var-AggESK” variation achieved a PR of 81.2% and an SR of 56.4%. As seen in Table <ref>, when we compare “Var-AggESK” with the proposed model, we conclude that the Enhanced Aggregation Module contributes to the success of the model and is a necessary part of our design.
§ CONCLUSION
In conclusion, our method demonstrated outstanding performance on both the RGBT234 <cit.> and LasHeR <cit.> datasets, highlighting its effectiveness in object tracking tasks. The evaluation on the RGBT234 <cit.> dataset showcased the high accuracy and effectiveness of our approach, with impressive precision and success rate scores. The attribute-based analysis further demonstrated its robustness in handling various challenges, including background clutter, occlusion, motion blur, and more.
While the evaluation on the LasHeR <cit.> dataset yielded slightly lower performance scores compared to the RGBT234 dataset, our model still achieves the best scores among the compared trackers, confirming the capabilities of our method in accurately tracking targets in diverse scenarios. The precision and success rate scores obtained on the LasHeR <cit.> dataset demonstrate the effectiveness of our approach, considering the challenges specific to that dataset.
Overall, our method showcases significant potential in the field of RGBT object tracking. Its performance on the RGBT234 <cit.> and LasHeR <cit.> datasets provides evidence of its accuracy, versatility, and ability to handle challenging scenarios. The results of our evaluation contribute to the advancement of RGBT object-tracking techniques and serve as a foundation for future research and development in the field.
This work is partially supported by Middle East Technical University Scientific Research Projects Coordination Unit (METU-BAP), under the project title "Real-Time Visual Tracking System based on Deep Learning using Infrared and Visible Band Fusion" (Kızılötesi ve Görünür Bant Kaynaştırma Kullanarak Derin Öğrenme Tabanlı ve Gerçek Zamanlı Görsel Takip Sistemi - AGEP-704-2022-11000).
|
http://arxiv.org/abs/2307.01217v1
|
20230701125237
|
FedCP: Separating Feature Information for Personalized Federated Learning via Conditional Policy
|
[
"Jianqing Zhang",
"Yang Hua",
"Hao Wang",
"Tao Song",
"Zhengui Xue",
"Ruhui Ma",
"Haibing Guan"
] |
cs.LG
|
[
"cs.LG",
"cs.AI"
] |
Shanghai Jiao Tong University
Shanghai
China
[email protected]
Queen's University Belfast
Belfast
UK
[email protected]
Louisiana State University
Baton Rouge
USA
[email protected]
Shanghai Jiao Tong University
Shanghai
China
[email protected]
Shanghai Jiao Tong University
Shanghai
China
[email protected]
Shanghai Jiao Tong University
Shanghai
China
[email protected]
Shanghai Jiao Tong University
Shanghai
China
[email protected]
Recently, personalized federated learning (pFL) has attracted increasing attention in privacy protection, collaborative learning, and tackling statistical heterogeneity among clients, e.g., hospitals, mobile smartphones, etc. Most existing pFL methods focus on exploiting the global information and personalized information in the client-level model parameters while neglecting that data is the source of these two kinds of information. To address this, we propose the Federated Conditional Policy (FedCP) method, which generates a conditional policy for each sample to separate the global information and personalized information in its features and then processes them by a global head and a personalized head, respectively. FedCP considers personalization in a more fine-grained, sample-specific manner than existing pFL methods. Extensive experiments in computer vision and natural language processing domains show that FedCP outperforms eleven state-of-the-art methods by up to 6.69%. Furthermore, FedCP maintains its superiority when some clients accidentally drop out, which frequently happens in mobile settings. Our code is public at <https://github.com/TsingZ0/FedCP>.
[500]Computing methodologies Multi-agent systems
[500]Computing methodologies Distributed algorithms
[300]Computing methodologies Supervised learning
FedCP: Separating Feature Information for Personalized Federated Learning via Conditional Policy
Haibing Guan
===========================================================================================
§ INTRODUCTION
Nowadays, many web-based services, such as personalized recommendations <cit.>, benefit from artificial intelligence (AI) and the huge volume of data generated locally on various clients <cit.>, e.g., hospitals, mobile smartphones, internet of things, etc. At the same time, legislation endeavors on data privacy protection continue to increase, e.g., the General Data Protection Regulation (GDPR) of Europe <cit.> and the California Consumer Privacy Act (CCPA) <cit.>. Due to privacy concerns and regulations, centralized AI faces significant challenges <cit.>. On the other hand, because of the data sparsity problem, it is hard to learn a reasonable model for a given task independently on each client <cit.>.
Federated learning (FL) is proposed as a collaborative learning paradigm <cit.> to utilize local data on the participating clients for the global model training without sharing the private data of clients. As one of the famous FL methods, FedAvg conducts four steps in each communication iteration: (1) The server sends the old global model parameters to the selected clients. (2) Each selected client initializes the local model with the received global parameters and trains the local model on local data. (3) The selected clients upload the updated local model parameters to the server. (4) The server generates new global model parameters by aggregating the received client model parameters. However, in practice, the data on the client is typically not independent and identically distributed (non-IID) as well as unbalanced <cit.>. With this statistical heterogeneity challenge <cit.>, the single global model in traditional FL methods, such as FedAvg, can hardly fit the local data well on each client and achieve good performance <cit.>.
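The four steps above correspond to the following minimal sketch of one FedAvg communication round, with models represented as PyTorch state_dicts and aggregation weighted by local sample counts; client selection and the local training routine are assumed to be provided, and buffers are cast to float for simplicity.

```python
# Minimal sketch of one FedAvg communication round (steps 1-4 above).
import copy
import torch

def fedavg_round(global_state, clients, local_train):
    """clients: list of (n_samples, make_model); local_train returns a state_dict."""
    local_states, sizes = [], []
    for n_samples, make_model in clients:
        model = make_model()
        model.load_state_dict(global_state)        # steps 1-2: send + initialize
        local_states.append(local_train(model))    # step 2: local training
        sizes.append(n_samples)                    # step 3: upload to server
    total = float(sum(sizes))
    new_state = copy.deepcopy(global_state)        # step 4: weighted aggregation
    for key in new_state:
        new_state[key] = sum((w / total) * s[key].float()
                             for w, s in zip(sizes, local_states))
    return new_state
```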
To meet the personalized demand of each client and address the challenge of statistical heterogeneity in FL, personalized federated learning (pFL) comes along that focuses on learning personalized models rather than a single global model <cit.>. Most existing pFL methods consider the global model as a container that stores the global information and enriches the personalized models with the parameters in the global model.
However, they only focus on client-level model parameters, i.e., the global/personalized model, to exploit the global/personalized information. Specifically, the meta-learning-based methods (such as Per-FedAvg <cit.>) only fine-tune global model parameters to fit local data, and the regularization-based methods (such as pFedMe <cit.>, FedAMP <cit.>, and Ditto <cit.>) only regularize model parameters during local training.
Although personalized-head-based methods (such as FedPer <cit.>, FedRep <cit.>, and FedRoD <cit.>) explicitly split a backbone into a global part (feature extractor) and a personalized part (head), they still focus on exploiting global and personalized information in model parameters rather than the source of information: data.
As the model is trained on data, the global/personalized information in model parameters is derived from client data. In other words, the heterogeneous data on clients contain both global and personalized information. As shown in <Ref>, widely used colors, e.g., blue, and rarely used colors, e.g., purple and pink, carry global information and personalized information in images, respectively.
To exploit the global and personalized information in the data separately, we propose a Federated Conditional Policy (FedCP) method based on conditional computing techniques <cit.>. Since the dimension of raw input data is much larger than that of the feature vector extracted by the feature extractor, we focus on the feature vector for efficiency. As the proportions of global and personalized information in the features differ among samples and clients, we propose an auxiliary Conditional Policy Network (CPN) to generate the sample-specific policy for feature information separation. Then, we process the global feature information and the personalized feature information by a global head and a personalized head in different routes, respectively, as shown in <Ref>. We store the personalized information in the personalized head and preserve the global information by freezing the global head without locally training it.
Through end-to-end learning, the CPN automatically learns to generate the sample-specific policy. We visualize six cases in <Ref> to show the effectiveness of the feature information separation ability.
To evaluate FedCP, we conduct extensive experiments on various datasets in two widely used scenarios <cit.>, i.e., the pathological setting and the practical setting. FedCP outperforms eleven state-of-the-art (SOTA) methods in both scenarios, and we analyze the reasons in <Ref>.
In summary, our key contributions are:
* To the best of our knowledge, we are the first to consider personalization on the sample-specific feature information in FL. It is more fine-grained than using the client-level model parameters in most existing FL methods.
* We propose a novel method, FedCP, with a Conditional Policy Network that generates a sample-specific policy to separate the global information and personalized information in features on each client. It processes these two kinds of feature information through a frozen global head and a personalized head on each client, respectively.
* We conduct extensive experiments in computer vision (CV) and natural language processing (NLP) domains to show the effectiveness of FedCP. Besides, FedCP keeps its superior performance even when some clients accidentally drop out.
§ RELATED WORK
§.§ Personalized Federated Learning
To collaboratively learn models among clients on their local private data while protecting privacy, traditional FL methods, such as FedAvg <cit.> and FedProx <cit.>, come along. Based on FedAvg, FedProx improves the stability of the FL process through a regularization term. However, in practice, statistical heterogeneity widely exists in the FL setting, so it is hard to learn a single global model that fits well with the local data in each client <cit.>.
Recently, pFL has attracted increasing attention for its ability to tackle statistical heterogeneity in FL <cit.>. Among meta-learning-based methods, Per-FedAvg <cit.> learns an initial shared model as the global model that satisfies the learning trend for each client.
Among regularization-based methods, pFedMe <cit.> learns an additional personalized model locally for each client with Moreau envelopes. Instead of learning only one global model for all clients, FedAMP <cit.> generates one server model for each client through an attention-inducing function to find similar clients. In Ditto <cit.>, each client learns its personalized model locally with a proximal term to fetch global information from the global model parameters. Among personalized-head-based methods, FedPer <cit.> and FedRep <cit.> learn a global feature extractor and a client-specific head. The former locally trains the head with the feature extractor, while the latter locally fine-tunes the head until convergence before training the feature extractor in each iteration. To bridge traditional FL and pFL, FedRoD <cit.> explicitly learns two prediction tasks with a global feature extractor and two heads. It uses the balanced softmax (BSM) loss <cit.> for the global prediction task and processes the personalized task by the personalized head. Among other pFL methods, FedFomo <cit.> calculates the client-specific weights for aggregation on each client using the personalized models from other clients. FedPHP <cit.> locally aggregates the global model and the old personalized model using a moving average to keep the historical personalized information. It also transfers the information in the global feature extractor through the widely used maximum mean discrepancy (MMD) loss <cit.>.
The above pFL methods only focus on exploiting the global and personalized information in model parameters but do not dig deep into the data.
§.§ Conditional Computing
Conditional computing is a technique that introduces dynamic characteristics into models according to task-dependent conditional inputs <cit.>. Formally, given a conditional input C (e.g., an image/text, a model parameter vector, or other auxiliary information) and an auxiliary module AM(·; θ), a signal S can be generated by S = AM(C; θ) and used to interfere with models, e.g., for dynamic routing and feature adaptation.
To activate specific parts in a model and process the data in different routes for each input sample, many approaches generate sample-specific policies for route selection. Conditioned on the input image, ConvNet-AIG <cit.> can decide which layers are needed during inference using Gumbel Softmax <cit.>.
With a policy network, SpotTune <cit.> makes decisions for each image to select which blocks in a pre-trained residual network to fine-tune.
Instead of focusing on dynamic model topology, some methods propose adapting the learned features. In the few-shot learning field, TADAM <cit.> adapts the features through an affine transformation conditioned by the extracted task representation. In the video object detection field, TMA <cit.> proposes a learnable affine transformation conditioned by video frames for feature adaptation.
The above methods use conditional computing techniques but are designed for centralized AI scenarios and specific tasks. Combining the ideas of dynamic routing and feature adaptation, we devise the CPN module in our FedCP to separate global feature information and personalized feature information and then process them in different routes for pFL scenarios and various tasks.
§ METHOD
§.§ Overview
In statistically heterogeneous pFL settings, non-IID and unbalanced data exist on N clients, who train their personalized models W_1, …, W_N in a collaborative manner. N clients own private datasets 𝒟_1, …, 𝒟_N, respectively, which are sampled from N distinct distributions without overlapping.
Similar to FedPer <cit.>, FedRep <cit.>, and FedRoD <cit.>, we split the backbone into a feature extractor f:ℝ^D →ℝ^K, which maps input samples to the feature space, and a head g:ℝ^K →ℝ^C, which maps from the low-dimensional feature space to the label space. Following FedRep, we consider the last fully connected (FC) layer in each given backbone as the head. D, K, and C are the dimensions of the input space, feature space, and label space, respectively. K is determined by the given backbone and typically D ≫ K.
Different from FedPer, FedRep, and FedRoD, on client i, we have a global feature extractor (parameterized by W^fe), a global head (parameterized by W^hd), a personalized feature extractor (parameterized by W^fe_i), a personalized head (parameterized by W^hd_i), and a CPN (parameterized by Θ_i).
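For concreteness, the sketch below (Python/PyTorch, with hypothetical module and variable names that do not come from the authors' code) illustrates how a backbone can be split into a feature extractor f and a one-FC-layer head g, and how the client-side parameter groups can be organized; the exact module layout is our assumption.

import copy
import torch.nn as nn

D_in, K, C = 3 * 32 * 32, 512, 100   # assumed input/feature/label dimensions

def make_backbone():
    # A toy backbone: everything except the last FC layer is the feature
    # extractor f (R^D -> R^K); the last FC layer is the head g (R^K -> R^C).
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(D_in, K), nn.ReLU(),   # feature-extractor part
        nn.Linear(K, C),                 # head (last FC layer)
    )

def split_backbone(backbone: nn.Sequential):
    return backbone[:-1], backbone[-1]   # (feature extractor, head)

class ClientModules:
    """Per-client module set mirroring the paper's notation (illustrative)."""
    def __init__(self, global_backbone, cpn):
        self.f_glob, self.g_glob = split_backbone(copy.deepcopy(global_backbone))   # W^fe, W^hd
        self.f_pers, self.g_pers = split_backbone(copy.deepcopy(global_backbone))   # W^fe_i, W^hd_i
        self.cpn = cpn                                                              # Theta_i
        for p in list(self.f_glob.parameters()) + list(self.g_glob.parameters()):
            p.requires_grad_(False)   # received global copies stay frozen during local training

The deep copies keep the frozen global copies separate from the locally trained personalized copies.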
Specifically, for the feature extractors, we initialize W^fe_i by overwriting it with the corresponding global parameters W^fe in each iteration and then locally learn the personalized feature extractor. The features generated by the changing personalized feature extractor may not fit the frozen global head during local learning. Thus, we freeze the global feature extractor after receiving it and align the features output by the personalized feature extractor with the ones generated by the global feature extractor through the MMD loss, as shown in <Ref>. For the global head, we freeze it after it has been initialized by W^hd to preserve global information.
In short, at the start of each iteration, we overwrite W^fe_i with the new W^fe and then freeze W^fe and W^hd.
As shown by the non-transparent module in <Ref>, the personalized model used for inference (parameterized by W_i) consists of the personalized feature extractor, the global head, the personalized head, and the CPN, i.e., W_i := { W^fe_i, W^hd, W^hd_i, Θ_i}. The frozen global feature extractor is only used for local learning and is not part of the personalized model. We omit iteration notation, sample index notation, and biases for simplicity. Given the local loss ℱ_i (described later), our objective is
{ W_1, …, W_N} = arg min_{ W_1, …, W_N}𝒢(ℱ_1, …, ℱ_N).
Typically, 𝒢(ℱ_1, …, ℱ_N) = ∑^N_i=1 n_i ℱ_i, n_i = |𝒟_i| / ∑^N_j=1 |𝒟_j|, and |𝒟_i| is the sample amount on client i.
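The weighted objective implies a FedAvg-style server aggregation with weights n_i = |𝒟_i| / ∑_j |𝒟_j|. The following minimal sketch shows this weighting, assuming every client uploads a state dict (a mapping from parameter names to tensors) with identical keys; it is an illustration, not the authors' implementation.

def aggregate(client_state_dicts, client_sample_counts):
    """FedAvg-style weighted aggregation with n_i = |D_i| / sum_j |D_j| (sketch)."""
    total = float(sum(client_sample_counts))
    weights = [n / total for n in client_sample_counts]
    keys = client_state_dicts[0].keys()
    return {
        k: sum(w * sd[k].float() for w, sd in zip(weights, client_state_dicts))
        for k in keys
    }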
§.§ Federated Conditional Policy (FedCP)
We focus on feature information separation for the feature vector
h_i = f( x_i; W^fe_i), ∀ ( x_i, y_i) ∈𝒟_i.
Due to statistical heterogeneity, h_i ∈ℝ^K contains both global and personalized feature information. To exploit these two kinds of information separately, we propose the CPN, which learns the sample-specific separation in an end-to-end manner, as shown in <Ref>.
§.§.§ Separating feature information
Guided by the global information in the frozen global head and the personalized information in the personalized head, the CPN (the core of FedCP) can learn to generate the sample-specific policy and separate the global and personalized information in h_i automatically.
Specifically, we devise the CPN as the concatenation of an FC layer and a layer-normalization layer <cit.> followed by the ReLU activation function <cit.>, as shown in <Ref>. On client i, we generate the sample-specific policy by
{ r_i, s_i} := CPN(𝒞_i; Θ_i),
where r_i∈ℝ^K, s_i∈ℝ^K, r^k_i + s^k_i = 1, ∀ k ∈ [K], and 𝒞_i ∈ℝ^K is the sample-specific input of the CPN. We describe the details of the input 𝒞_i and the output { r_i, s_i} as follows.
𝒞_i is generated to achieve the sample-specific characteristic and introduce personalized (client-specific) information. We can directly obtain the sample-specific vector h_i, so we only introduce how to obtain the client-specific information here.
Based on FedRep and FedRoD, the parameters in the personalized head, i.e., W^hd_i, naturally contain client-specific information. However, W^hd_i is a matrix, not a vector. Thus, we generate v_i by reducing the dimension of W^hd_i.
Recall that a head is an FC layer in FedCP, i.e., W^hd_i ∈ℝ^C× K, so the kth column of W^hd_i corresponds to the kth feature in h_i. We obtain v_i := ∑^C_c=1 w^T_c, where w_c is the cth row in W^hd_i and v_i ∈ℝ^K. In this way, we obtain a client-specific vector with the same shape and feature-wise semantics as h_i. Then we combine the sample-specific h_i and the client-specific v_i via
𝒞_i:=( v_i / || v_i||_2) ⊙ h_i, where || v_i||_2 is the ℓ_2-norm <cit.> of v_i and ⊙ is the Hadamard product. We obtain v_i before local learning in each iteration and regard it as a constant during training. During inference, we reuse the latest v_i.
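A minimal sketch of building the conditional input 𝒞_i from the personalized head and the feature vector is given below; treating v_i as a constant is modeled by detaching it, and the small epsilon added to the norm is our own safeguard rather than part of the paper.

import torch

def conditional_input(h, W_hd_personal):
    """Build C_i = (v_i / ||v_i||_2) (Hadamard) h_i (sketch).
    h:              (B, K) features from the personalized feature extractor.
    W_hd_personal:  (C, K) weight matrix of the personalized head."""
    v = W_hd_personal.sum(dim=0).detach()   # column-wise sum over classes -> (K,), kept constant
    v = v / (v.norm(p=2) + 1e-12)           # l2-normalize the client-specific vector
    return v.unsqueeze(0) * h               # Hadamard product, broadcast over the batch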
We separate information by multiplying the policy { r_i, s_i} with h_i to obtain the global feature information r_i ⊙ h_i and the personalized feature information s_i ⊙ h_i. Since there are connections among features <cit.>, we output { r_i, s_i} with real numbers instead of Boolean values, i.e., r^k_i∈ (0, 1) and s^k_i∈ (0, 1). Inspired by the Gumbel-Max trick for policy generation <cit.>, we generate the policy with the help of intermediates and a softmax <cit.> operation through the following two steps. Firstly, the CPN generates the intermediates a_i ∈ℝ^K× 2, where a^k_i = {a^k_i, 1, a^k_i, 2}, k ∈ [K], and a^k_i, 1 and a^k_i, 2 are unconstrained scalars.
Secondly, we obtain r^k_i and s^k_i by
r^k_i = exp(a^k_i, 1)/∑_j∈{1, 2}exp(a^k_i, j), s^k_i = exp(a^k_i, 2)/∑_j∈{1, 2}exp(a^k_i, j).
Note that r^k_i∈ (0, 1), s^k_i∈ (0, 1), and r^k_i + s^k_i = 1, ∀ k ∈ [K], still hold.
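The policy generation can be sketched as follows (PyTorch); the exact ordering of the FC, layer-normalization, ReLU, reshaping, and softmax steps is our assumption based on the description above, not the authors' reference code.

import torch.nn as nn
import torch.nn.functional as F

class CPN(nn.Module):
    """Minimal Conditional Policy Network sketch: FC -> LayerNorm -> ReLU,
    producing K pairs of intermediates that a per-feature softmax turns into (r, s)."""
    def __init__(self, K):
        super().__init__()
        self.fc = nn.Linear(K, 2 * K)
        self.ln = nn.LayerNorm(2 * K)

    def forward(self, cond):                 # cond: (B, K) conditional input C_i
        a = F.relu(self.ln(self.fc(cond)))   # intermediates a_i
        a = a.view(-1, cond.shape[1], 2)     # (B, K, 2)
        policy = F.softmax(a, dim=-1)        # softmax per feature pair, so r + s = 1
        r, s = policy[..., 0], policy[..., 1]
        return r, s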
§.§.§ Processing feature information
Then, we feed r_i ⊙ h_i and s_i ⊙ h_i to the global head and the personalized head, respectively. The outputs of the global head and the personalized head are out^r_i = g( r_i ⊙ h_i; W^hd) and out^s_i = g( s_i ⊙ h_i; W^hd_i), respectively. We define the final output as out_i := out^r_i + out^s_i. Then the local loss is
ℰ_i = 𝔼_( x_i, y_i) ∼𝒟_iℒ(out_i, y_i),
where ℒ is the cross-entropy loss function <cit.>.
From the view of each sample, the extracted features are processed by both the global head and the personalized head. For simplicity, we aggregate these two heads through averaging to form the uploaded head Ŵ^hd_i:
Ŵ^hd_i = ( W^hd + W^hd_i)/2.
In each iteration, we upload { W^fe_i, Ŵ^hd_i, Θ_i} to the server.
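A sketch of the dual-head forward pass and of the head averaging used for uploading is shown below; the function names are illustrative, and g_global and g_personal stand for the frozen global head and the personalized head.

def dual_head_forward(h, r, s, g_global, g_personal):
    """Route the separated feature information through the two heads and sum the logits."""
    out_r = g_global(r * h)      # global feature information -> frozen global head
    out_s = g_personal(s * h)    # personalized feature information -> personalized head
    return out_r + out_s

def upload_head(W_hd_global, W_hd_personal):
    """Average the two heads' parameter tensors to form the head uploaded to the server."""
    return {k: (W_hd_global[k] + W_hd_personal[k]) / 2 for k in W_hd_global}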
§.§.§ Aligning features
To fit the features outputted by the personalized feature extractor with the frozen global head, we align the features outputted by the personalized feature extractor and the global feature extractor through the MMD loss ℰ^d_i,
ℰ^d_i = 𝔼_( x_i, y_i) ∼𝒟_iκ[ h_i, f( x_i; W^fe)],
where κ is the radial basis function (RBF) kernel. Finally, we have the local loss ℱ_i = ℰ_i + λℰ^d_i, where λ is a hyper-parameter. Specifically,
ℱ_i =𝔼_( x_i, y_i) ∼𝒟_i{ℒ[g( r_i ⊙ h_i; W^hd) + g( s_i ⊙ h_i; W^hd_i), y_i]
+ λκ[ h_i, f( x_i; W^fe)]},
where h_i is the feature vector extracted by <ref>, and r_i and s_i are obtained through <ref>.
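The local loss can thus be sketched as a cross-entropy term plus a kernel-based alignment term. The single-bandwidth squared-MMD estimate below is a simplification of our own, since the text only specifies an RBF kernel; the bandwidth sigma and the default lam are assumed values.

import torch
import torch.nn.functional as F

def rbf_kernel(a, b, sigma=1.0):
    d2 = torch.cdist(a, b) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd_loss(feat_pers, feat_glob, sigma=1.0):
    """Squared MMD with an RBF kernel between personalized- and global-extractor features."""
    k_pp = rbf_kernel(feat_pers, feat_pers, sigma).mean()
    k_gg = rbf_kernel(feat_glob, feat_glob, sigma).mean()
    k_pg = rbf_kernel(feat_pers, feat_glob, sigma).mean()
    return k_pp + k_gg - 2 * k_pg

def local_loss(logits, labels, feat_pers, feat_glob, lam=1.0):
    # Cross-entropy on the summed head outputs plus lambda-weighted feature alignment.
    return F.cross_entropy(logits, labels) + lam * mmd_loss(feat_pers, feat_glob.detach())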
We show the entire learning process in <Ref>. For inference, we use the personalized model as illustrated by the non-gray border modules in <Ref>.
§.§ Privacy Analysis
According to <Ref> and <Ref>, our proposed FedCP shares the parameters of one feature extractor, one head, and one CPN. As for the head part, we upload Ŵ^hd_i from each client after aggregating W^hd and W^hd_i by <ref>. This process can be viewed as adding noise (the global parameters W^hd) to W^hd_i, thus protecting privacy during uploading and downloading. Besides, the sample-specific characteristic further improves the privacy-preserving ability of FedCP. On the one hand, since 𝒞_i is dynamically generated without being shared with the server, it is hard to recover the sample-specific policy with the CPN or through model inversion attacks <cit.>. On the other hand, without the sample-specific policy, the connection between the feature extractor and the head is broken, increasing the difficulty of attacks based on shared model parameters. We evaluate the privacy-preserving ability of FedCP in <Ref>.
§ EXPERIMENTAL SETUP
We evaluate FedCP on various image/text classification tasks.
For the image classification tasks, we use four famous datasets, including MNIST <cit.>, Cifar10 <cit.>, Cifar100 <cit.> and Tiny-ImageNet <cit.> (100K images with 200 classes) using a famous 4-layer CNN <cit.>.
To evaluate FedCP on a larger backbone model than the 4-layer CNN, we also use ResNet-18 <cit.> on Tiny-ImageNet. We set the local learning rate to η = 0.005 for the 4-layer CNN and η = 0.1 for ResNet-18. For the text classification tasks, we use the AG News <cit.> dataset with fastText <cit.> and set η = 0.1 for fastText, with other settings being the same as in the image classification tasks.
We simulate the heterogeneous settings in two widely used scenarios, i.e., the pathological setting <cit.> and the practical setting <cit.>. For the pathological setting, we sample 2/2/10 classes on MNIST/Cifar10/Cifar100 from a total of 10/10/100 classes for each client with disjoint data. Specifically, similar to FedAvg <cit.>, we separate clients into groups that own unbalanced data with the same labels. Following MOON <cit.>, we create the practical setting through the Dirichlet distribution, denoted as Dir(β). Specifically, we sample q_c, i∼ Dir(β) and allocate a q_c, i proportion of the samples of class c to client i.
We set β = 0.1 for the default practical setting <cit.>. Then, we split the data on each client into a training dataset (75%) and a test dataset (25%).
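A minimal sketch of the Dirichlet-based practical partition is given below; it is not the authors' exact script, and the random seed and the handling of the resulting splits are our assumptions.

import numpy as np

def dirichlet_partition(labels, num_clients, beta=0.1, seed=0):
    """Allocate a Dir(beta) proportion q_{c,i} of each class c to client i (sketch)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx_c = rng.permutation(np.where(labels == c)[0])
        q = rng.dirichlet([beta] * num_clients)              # q_{c, .} ~ Dir(beta)
        splits = (np.cumsum(q)[:-1] * len(idx_c)).astype(int)
        for i, part in enumerate(np.split(idx_c, splits)):
            client_indices[i].extend(part.tolist())
    return client_indices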
Following FedAvg, we set the local batch size to 10 and the number of local learning epochs to 1. We run all tasks up to 2000 iterations until all methods converge empirically. Based on pFedMe, FedFomo, and FedRoD, we set the total number of clients to 20 and the client joining ratio ρ = 1 by default.
Following pFedMe, we report the test accuracy of the best global model for traditional FL methods and the average test accuracy of the best personalized models for pFL methods. We run all the experiments five times and report the mean and standard deviation. Besides, we run all experiments on a machine with two Intel Xeon Gold 6140 CPUs (36 cores), 128G memory, eight NVIDIA 2080 Ti GPUs, and CentOS 7.8. For more results and details, please refer to the Appendix.
§ ABLATION STUDY
§.§ Feature Information Visualization
To visualize the separated global and personalized feature information when using ResNet-18, we adopt the Grad-CAM <cit.> on the learned personalized model when only the global head or the personalized head is activated. Six cases from Tiny-ImageNet are shown in <Ref>.
According to <Ref>, with only the global head activated, the personalized model focuses on relatively global information, such as trees (Case 0 and Case 4), grasses (Case 1), or sky (Case 2 and Case 5) in the background. When we only activate the personalized head, the personalized model focuses on the relatively personalized information, such as foreground (Case 2 and Case 5) or objects (Case 0, Case 1, and Case 4). As for Case 3, the rarely-used pink color is more personalized than the widely-used blue color.
§.§ Effectiveness of the CPN Input
To show the effectiveness of each part of the CPN input, we remove its parts one by one and obtain the variants: without the client-specific vector (w.o. cs), without the sample-specific vector (w.o. ss), and without both the client-specific and sample-specific vectors (w.o. cs & ss). For w.o. cs & ss, we use a randomly initialized frozen vector as the input, which has the same shape as the sample-specific vector.
In <Ref>, removing either the client-specific vector or the sample-specific vector causes an accuracy decrease. However, w.o. cs performs better than w.o. ss, so the sample-specific vector is more significant than the client-specific one. According to <Ref> and <Ref>, even after removing these two kinds of information and using the random vector, w.o. cs & ss still achieves higher accuracy than all the baselines, because the CPN module can still learn to separate feature information through end-to-end training.
§.§ Effectiveness of modules
To show the effectiveness of each module in FedCP, we remove them one by one and obtain the variants: without the frozen global feature extractor and the MMD loss (GFM for short, i.e., w.o. GFM), without the CPN (w.o. CPN), without the CPN and GFM (w.o. CPN & GFM), without the CPN and the frozen global head (w.o. CPN & GH), and without the CPN, GFM, and the frozen global head (w.o. CPN & GFM & GH, similar to FedPer), as shown in <Ref>. It is invalid to keep the CPN while removing the frozen global head, since they form a union for our feature-separation goal.
In <Ref>, without the GFM to align the features, the accuracy of w.o. GFM decreases by 1.31% compared to FedCP, but it still outperforms the other baselines (see <Ref>). Without the CPN, the accuracy of w.o. CPN decreases by 3.01%, so the CPN is more critical than the GFM when the frozen global head exists. Removing both the CPN and the GFM (w.o. CPN & GFM) degrades performance further than removing either one, which means that these two modules facilitate each other. The CPN and the frozen global head are the key modules in FedCP. Without them, the performance of w.o. CPN & GH degrades significantly, with an 8.74% drop compared to FedCP. Furthermore, w.o. CPN & GFM & GH (removing all the modules) performs better than w.o. CPN & GH, which means that simply adding the GFM to w.o. CPN & GFM & GH causes performance degradation.
§ EVALUATION AND ANALYSIS
§.§ Main Experiments
Due to the limited space, we use “TINY” and “TINY*” to denote using the 4-layer CNN and ResNet-18 on Tiny-ImageNet, respectively. <Ref> shows that FedCP outperforms all the baselines when using either the 4-layer CNN or ResNet-18, especially on relatively challenging tasks. In the default practical setting on Cifar100, FedCP exceeds the best baseline (Ditto) by 6.69%. Our CPN only introduces an additional 0.527M (million) parameters on each client, which is 9.25% and 4.67% of the parameters in the 4-layer CNN (5.695M) and ResNet-18 (11.279M), respectively. In the following, we analyze why FedCP outperforms all the baselines.
In <Ref>, FedAvg and FedProx perform poorly, as the global model cannot fit the local data well on all the clients. They directly feed features to the global head, regardless of the personalized information in the features. In contrast, FedCP separates the global information and the personalized information in the features and feeds them to the global head and the personalized head, respectively.
Per-FedAvg performs poorly among pFL methods, as the aggregated learning trend can hardly meet the trend of each personalized model. In contrast, FedCP considers personalization in a sample-specific manner conditioned on the client-specific vector, which meets the demand of each client, thus performing better.
pFedMe and FedAMP utilize regularization terms to extract information from the local model and the client-specific server model, respectively. However, excessively concentrating on personalization is not beneficial to the collaborative goal of FL. Since Ditto extracts global information from the global model, it performs better than pFedMe and FedAMP. Like Ditto, FedCP also takes advantage of global information for each client.
FedPer and FedRep only share the feature extractor without sharing heads. They ignore some global information in the head part, so they perform worse than FedCP. FedRoD bridges the goals of traditional FL and pFL by learning two heads with two objectives. However, these two goals are competing <cit.>, so FedRoD performs worse than FedRep, which also learns a personalized head but only focuses on the goal of pFL.
Like FedRep, FedCP only focuses on the pFL goal, thus performing the best.
Similar to FedAMP, FedFomo aggregates client models with client-specific weights, thus losing some global information. FedPHP transfers the global information only in the global feature extractor through the MMD loss. Although it achieves excellent performance, FedPHP loses the global information in the global head during local training, so it performs worse than FedCP.
§.§ Computing and Communication Overhead
Here, we focus on the training phase. We report the total time and the number of iterations required for each method to converge and calculate the average time consumption in each iteration, as shown in <Ref>. Ditto and pFedMe cost more time in each iteration than most methods since the additional personalized model training takes much extra time.
Compared to most baselines, e.g., Per-FedAvg, pFedMe, Ditto, FedRep, and FedPHP, FedCP costs less training time in each iteration. In FedCP, the parameters in the CPN module only require an additional 4.67% communication overhead per iteration when using ResNet-18 compared to FedAvg.
§.§ Different Heterogeneity Degrees
In addition to <Ref>, we conduct experiments on settings with different degrees of heterogeneity on Tiny-ImageNet and AG News by varying β. The smaller β is, the more heterogeneous the setting is. We show the accuracy in <Ref>, where FedCP still outperforms the baselines. Most pFL methods achieve higher accuracy than traditional FL methods in the more heterogeneous setting. In the setting with a larger β, most of them cannot achieve higher accuracy than FedAvg on Tiny-ImageNet. In contrast, the methods that utilize global information during local learning (FedPHP, FedRoD, and FedCP) maintain excellent performance.
FedRoD performs worse than FedRep, as the latter focuses only on the goal of pFL.
pFedMe and FedAMP perform poorly among pFL methods. Their accuracy is lower than traditional FL methods when β = 1.
§.§ Scalability with Different Client Amounts
Following MOON <cit.>, we conduct another six experiments (i.e., N = 10, N = 30, N = 50, N = 100, N = 200, and N = 500) to study the scalability of FedCP while keeping other settings unchanged. Per-FedAvg requires more data than other methods, as meta-learning requires at least two batches of data, which is unavailable on some clients in our unbalanced settings when N ≥ 200. Since the total data amount is constant on Cifar100, the local data amount (on average) decreases as the client amount increases. With both N and the local data amount changing, it is unreasonable to compare the results among different N in <Ref>.
Some pFL methods, including Per-FedAvg and pFedMe, achieve relatively poor performance in the setting with N = 10, where few clients (e.g., hospitals) participate in FL, and each of them possesses a large data repository.
When N = 500 (e.g., mobile smartphones), each client only has 90 training samples on average, which is not enough for the weight calculation in FedFomo, so it performs worse than FedAvg. FedAMP diverges, as it is hard to find similar clients when they have little data. According to <Ref>, FedCP still outperforms all the baselines.
To simulate a real-world scenario where more clients mean a larger total data amount in FL, we consider the Cifar100 setting (β = 0.1, ρ = 1, and N = 50) used above as the base setting and randomly sample 10 and 30 clients from the existing 50 clients to form the Cifar100 (β = 0.1, ρ = 1, and N = 10|50) and Cifar100 (β = 0.1, ρ = 1, and N = 30|50) settings, respectively. When we increase the client amount, the accuracy increases as more data are utilized to train the globally shared modules, which facilitates information transfer among clients. The superior performance of FedCP in <Ref> shows its scalability in this real-world scenario.
§.§ Large Local Epochs
Large local epochs can reduce total communication iterations but increase computing overhead per iteration for most of the methods in FL <cit.>.
With larger local epochs, FedCP can still maintain its superiority, as shown in <Ref>. Most of the methods perform worse with larger local epochs, since more local training aggravates the discrepancy among client models, which is adverse to server aggregation. For example, the accuracy of FedRoD drops by 2.16% when the number of local epochs increases from 5 to 40.
§.§ Clients Accidentally Dropping Out
Due to the changing network connection quality, some clients may accidentally (randomly) drop out at one iteration and become active again at another iteration, which frequently happens in the mobile settings. We compare the performance of pFL methods when some clients accidentally drop out, as shown in <Ref>. Instead of using the constant ρ, we randomly choose a value within a given range for ρ in each iteration. The larger the range of ρ is, the more unstable the setting is. It simulates a more practical setting with a random drop-out rate than the settings used by the SOTA methods, which set a constant drop-out rate in all iterations.
Most pFL methods suffer from an accuracy decrease in unstable settings. pFedMe and FedPHP have up to 6.65% and 9.80% accuracy decreases, respectively, compared to ρ = 1 in <Ref>. Some methods, such as FedRep and FedRoD, perform worse with a larger range of ρ. The standard deviation of Per-FedAvg, pFedMe, Ditto, and FedRoD is greater than 1% when ρ∈ [0.1, 1], which means their performance is unstable with a random ρ. Since FedCP separates feature information automatically, it can adapt to the changing environments and thus still maintains superior and stable performance in these unstable settings.
§ EFFECT OF THE HYPER-PARAMETER Λ
To guide the learned features to fit the frozen global head, we use the hyper-parameter λ to control the importance of MMD loss that aligns the outputs of the personalized feature extractor and the outputs of the global feature extractor. The larger the λ is, the closer these two outputs are.
From <Ref>, the accuracy first increases and then decreases as λ increases, which is similar among the three settings with different degrees of heterogeneity. By assigning a proper value to λ, the personalized feature extractor can learn the information from the local data while guiding the output features to fit the frozen global head. When the value of λ is too large (e.g., λ=50), the personalized feature extractor can hardly learn from the local data. Instead, it tends to output features similar to those of the frozen global feature extractor. To pay more attention to the local data in a more heterogeneous setting (e.g., β=0.01), FedCP requires a relatively smaller λ, as the global information plays a less critical role in this situation.
§ POLICY STUDY
We show the policy change for the training samples and the generated policies for all the test samples during inference in <Ref>. For clarity, we collect all the sample-specific s_i on each client and average them to obtain s̅_i. Then we further average the elements in s̅_i to generate one scalar, which we call the personalization identification ratio (PIR): PIR_i := (1/K)∑^K_k=1s̅^k_i, i∈ [N], where s̅^k_i is the kth element of s̅_i.
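Computing the PIR from the collected policies amounts to a double average, as sketched below; s_policies is assumed to stack the per-sample personalized-branch policies of one client.

def personalization_identification_ratio(s_policies):
    """PIR_i = (1/K) * sum_k (mean over samples of s^k_i).
    s_policies: (num_samples, K) tensor of the s-policies collected on one client."""
    s_bar = s_policies.mean(dim=0)   # average the sample-specific policies -> (K,)
    return s_bar.mean().item()       # average over the K feature dimensions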
When using diverse backbones with different feature extraction abilities, the policies vary in both the PIR change and the s̅_i distribution. As shown in <Ref>, on client #0, the PIR increases from the initial value of 0.50 to around 0.58 in the first 20 iterations and remains almost unchanged when using the 4-layer CNN. However, when using ResNet-18, the PIR decreases first and then increases rapidly to around 0.61, which means that the features extracted by the feature extractor in ResNet-18 contain more global feature information in early iterations, and our CPN can automatically capture this dynamic characteristic during all FL iterations. In <Ref>, the value range of s̅_i varies among clients, as they contain diverse samples. For example, the s̅_i range on client #10 is the largest among clients. Although the policies differ among samples, the mean values of s̅_i are similar among clients when using one specific backbone, as shown in <Ref>. The values of s̅_i are all larger than 0.5 during inference, which means the learned features contain more personalized feature information than global feature information on clients in these scenarios.
§ CONCLUSION
We propose the Federated Conditional Policy (FedCP) method, which generates a policy for each sample to separate its features into global feature information and personalized feature information, and then processes them by the global head and the personalized head, respectively. FedCP outperforms eleven SOTA methods by up to 6.69% under various settings with excellent privacy-preserving ability. Besides, FedCP also maintains excellent performance when some clients accidentally drop out.
This work was supported in part by the Shanghai Key Laboratory of Scalable Computing and Systems, National Key R&D Program of China (2022YFB4402102), Internet of Things special subject program, China Institute of IoT (Wuxi), Wuxi IoT Innovation Promotion Center (2022SP-T13-C), Industry-university-research Cooperation Funding Project from the Eighth Research Institute in China Aerospace Science and Technology Corporation (Shanghai) (USCAST2022-17), and Intel Corporation (UFunding 12679). The work of H. Wang was supported in part by the NSF grant CRII-OAC-2153502. Ruhui Ma is the corresponding author.
§ CONVERGENCE ANALYSIS
Recall that our objective is
{ W_1, …, W_N} = arg min_{ W_1, …, W_N}𝒢(ℱ_1, …, ℱ_N),
where ℱ_i, ∀ i ∈ [N], is the local loss and 𝒢(ℱ_1, …, ℱ_N) = ∑^N_i=1 n_i ℱ_i. During the training phase, the value of 𝒢 is the training loss of FedCP. To study the convergence of FedCP, we denote the loss calculated with the trained personalized models after local learning as loss_aft and the loss calculated with the initialized personalized models before local learning as loss_bef. Besides the loss values, we also evaluate the corresponding test accuracy, calculated by averaging the accuracy of all the personalized models on the corresponding local test datasets of the clients.
To empirically analyze the convergence of FedCP, we draw the training loss curves and test accuracy curves for FedCP when using ResNet-18, as shown in <Ref>. On Tiny-ImageNet in the default practical setting, loss_aft becomes close to loss_bef after 74 iterations, and both of them reach the minimum value at that point. In other words, FedCP converges after around 74 training iterations. As the training loss decreases, the test accuracy increases. Both the loss curve and the accuracy curve fluctuate before iteration 56 when using ResNet-18 due to the policy update, as shown in <Ref> in the main body of this paper.
§ PRIVACY-PRESERVING ABILITY
Here, following a traditional FL method FedCG <cit.>, we consider a semi-honest scenario where the server follows the FL protocol but may recover original data from a victim client with its model updates via Deep Leakage from Gradients (DLG) attack <cit.>.
Among the baselines in our paper, there are two categories in terms of information transmission between the server and clients. Methods in Category 1 share the parameters in the entire backbone model, such as FedAvg, FedProx, Per-FedAvg, pFedMe, Ditto, FedRoD, FedFomo, and FedPHP. Methods in Category 2 only share the parameters in the feature extractor, such as FedPer and FedRep. Without loss of generality, we select the most famous methods in each category as the representative baselines: FedAvg for Category 1 and FedPer for Category 2.
Also following FedCG, we provide the experimental results in <Ref> to evaluate the privacy-preserving ability of FedCP against representative baselines in terms of the Peak Signal-to-Noise Ratio (PSNR). A lower PSNR value indicates better privacy-preserving ability. The results in <Ref> show the superiority of FedCP.
§ CONDITIONAL POLICY NETWORK DESIGN
By default, our CPN consists of a fully connected (FC) layer <cit.> and a layer-normalization layer <cit.> (LN for short) followed by the ReLU activation function <cit.>. Here, we investigate how different designs affect the effectiveness of the CPN by varying the number of FC layers, the normalization layer, and the activation function, as shown in <Ref>. Since the intermediate outputs a_i ∈ℝ^K × 2 have two groups, we set the number of groups to two for group normalization <cit.> (GN for short). We only change the considered component based on the default CPN. The accuracy results with an underline are higher than the accuracy of the default FedCP.
The results in <Ref> show that we can further improve FedCP by using other architectures for the CPN. Adding more FC layers to process its input improves the test accuracy for ResNet-18 but causes a slight decrease for the 4-layer CNN. The additional parameters introduced by the CPN with 1 FC, 2 FC, 3 FC, and 4 FC layers are 0.527M (million), 0.790M, 1.052M, and 1.315M, respectively. However, the additional computing cost in each iteration introduced by the extra FC layers is not worth the small accuracy increase. As for the normalization layer, replacing the LN with batch normalization <cit.> (BN) improves the test accuracy by 0.64% for the 4-layer CNN. However, it decreases the accuracy by around 0.48% for ResNet-18, which also contains BN layers. Unlike LN, which normalizes the entire a_i, GN normalizes a_i, 1 and a_i, 2 separately. However, the test accuracy for both the 4-layer CNN and ResNet-18 decreases with the GN layer. As for the activation function, using tanh only increases the accuracy for the 4-layer CNN, while using sigmoid improves the performance for both backbones compared to using ReLU, as an output in (0, 1) is more suitable for producing a policy.
§ HYPERPARAMETER SETTINGS
We use the grid search to find the optimal λ. Specifically, we perform the grid search in the following search space:
* λ: 0, 0.1, 1, 5, 10
In this paper, we set λ=5 for the 4-layer CNN and λ=1 for the ResNet-18 and the fastText, respectively.
§ DATA DISTRIBUTION VISUALIZATION
Here, we show visualizations of the data distributions (including training and test data) in the image and text tasks.
arXiv:2307.02050v1 [cs.CR], 2023-07-05
From Ideal to Practice: Data Encryption in eADR-based Secure Non-Volatile Memory Systems
Jianming Huang, Yu Hua
Emoji Prediction using Transformer Models
Muhammad Osama Nusrat, Zeeshan Habib, and Mehreen Alam (Department of Computing, FAST NUCES, Islamabad, Pakistan; [email protected], [email protected], [email protected]); Saad Ahmed Jamal (Department of Geoinformatics Z_GIS, University of Salzburg, Salzburg, Austria; [email protected])
Extended Asynchronous DRAM Refresh (eADR), proposed by Intel, extends the persistence domain from the Non-Volatile Memory (NVM) to the CPU caches and offers a persistence guarantee. By allowing lazy persistence and reducing the number of required instructions, eADR-based NVM systems significantly improve performance. Existing designs however fail to provide efficient encryption schemes to ensure data confidentiality in eADR-based NVM systems. It is challenging to guarantee both data persistence and confidentiality in a cost-efficient manner due to the transient persistence property of caches in eADR. Once the system crashes, eADR flushes the unencrypted data from the cache into NVM, where security issues arise because the data are not encrypted. To bridge the gap between persistence and confidentiality, we propose the cost-efficient BBE and Sepencr encryption schemes (this paper is an extended version of our published paper in IEEE Computer Architecture Letters, DOI: 10.1109/LCA.2022.3225949) that efficiently match different eADR execution models from ideal to practice. Under the ideal eADR execution model, BBE supports the encryption module via the battery of eADR upon crashes. Under the practical eADR execution model, Sepencr generates the one-time paddings (OTPs) at the system startup to encrypt the cached data in case the system crashes. Our evaluation results show that, compared with an intuitive in-cache encryption scheme in eADR-based systems, our designs significantly reduce performance overheads while efficiently ensuring data confidentiality.
§ INTRODUCTION
Non-Volatile Memory (NVM) provides near-DRAM performance, low standby power consumption, and disk-like durability <cit.>. The byte-addressability of NVM helps deliver high performance for NVM devices. NVM, as Persistent Memory (PM) <cit.>, allows the persistence boundary to move from storage devices, e.g., disks and SSDs, to the memory <cit.>. Moreover, Intel proposes Asynchronous DRAM Refresh (ADR) <cit.> and extended ADR (eADR) <cit.> to further extend the persistence domain to the on-chip memory controller (MC) and CPU caches. Specifically, ADR guarantees that the write pending queue (WPQ) in the memory controller becomes a persistent domain by flushing the data from the WPQ into NVM upon power-down via the backup battery. eADR further guarantees that all on-chip data buffers, including CPU caches, become persistent domains <cit.> by flushing data from these buffers into NVM upon crashes. By using ADR and eADR, the persistent domains are extended to the on-chip buffers.
These significant persistence extensions efficiently deliver high performance, but they overlook the encryption schemes in the heterogeneous memory/storage devices. Data in the persistent domains need to be encrypted to ensure confidentiality, i.e., to guarantee that the data cannot be accessed by attackers. In the existing storage architecture, different persistent devices leverage different encryption schemes. For external storage, e.g., disks and SSDs, data-at-rest encryption schemes are used to encrypt data <cit.>, e.g., full disk encryption (FDE) in the disk <cit.>. For NVM, e.g., Intel Optane PM <cit.>, the standard 256-AES hardware encryption <cit.> is used to encrypt data.
Since the on-chip memory controller (specifically, the WPQ in the memory controller) exists in the persistent domain due to the support of ADR, the memory controller also leverages an encryption scheme for data confidentiality. Unlike the disk and PM in the off-chip domain, the encryption/decryption latencies for the data in the on-chip memory controller significantly impact the system performance. To efficiently encrypt the data in the on-chip memory controller for data confidentiality, a low-overhead counter mode encryption (CME) scheme in the memory controller has been proposed and widely used in existing works <cit.>.
While prior designs mainly offer data confidentiality in disks, NVM, and ADR-based NVM systems <cit.>, few schemes discuss how to encrypt data in eADR-based NVM systems when the CPU caches become a persistent domain. Although the on-chip caches are security domains <cit.>, efficient encryption is still important for the cached data in the eADR-based NVM system for data confidentiality. Specifically, the caches in the eADR domain offer transient persistence, which means that these caches themselves are volatile, and the data in the caches are guaranteed to be persisted in the real persistent device, e.g., NVM, upon crashes. The plaintext data moving from the caches to the unsafe NVM without encryption become vulnerable to attackers. Unfortunately, we observe that the existing CME scheme in the ADR-based memory controller cannot guarantee data confidentiality in eADR-based NVM systems. The encryption module in CME does not work upon crashes due to power loss. However, after the system crashes, eADR still allows the cached data, whether encrypted or not, to be flushed from the caches to NVM, thus leading to information leakage.
In the eADR-based NVM system, upon crashes, the plaintext data in caches are flushed into NVM to achieve data persistence, without a confidentiality guarantee. The eADR-based NVM systems cannot ensure both data persistence and confidentiality at the same time due to the lack of a comprehensive analysis of access patterns and execution models.
The NVM system behavior can be reduced to three types of operations: read, computation, and write. The system reads the source data from NVM, computes results from the source data, and writes the results into NVM for long-term storage or the next computation. To implement the encryption scheme in eADR-based NVM systems, we model the execution operations in eADR, ignoring software and hardware implementation details. (1) All-Operation Model: eADR supports data read, write, and computation upon crashes via the backup battery. (2) Write-Compute-Order Model: eADR supports data write and computation upon crashes via the backup battery. (3) Write-Only Model: eADR only supports data write upon crashes via the backup battery. We do not consider a Read-Write-Order Model, since reading data is meaningless without processing the read data. It is worth noting that currently the All-Operation Model and the Write-Compute-Order Model are ideal in theory, and only the Write-Only Model is supported by the currently available eADR in practice <cit.>. Existing work <cit.> has leveraged the ideal models, and we believe eADR can support these ideal models in the near future.
To efficiently address the dilemma between data persistence and confidentiality in the eADR-based NVM systems, we design different schemes under different eADR execution models. Under the All-Operation Model, existing CME ensures data confidentiality. Under the Write-Compute-Order Model, we introduce the Battery-Backed Encryption (BBE) scheme via the observation that the backup battery in eADR can support the AES encryption engine and XOR gates upon crashes. We leverage the on-chip incremental counter and outside-the-memory-space address to generate the OTP that is used to XOR with the cached data for encryption upon crashes. Under the Write-Only Model, we further propose the Separate Encryption (Sepencr) scheme that generates the OTPs for all cached data in advance and stores the OTPs on chip. The pre-generated OTPs are XORed with the cached data to complete the encryption in case the system crashes. These encrypted data are finally flushed into NVM.
To evaluate the performance of our proposed designs, we use Gem5 <cit.> to implement BBE and Sepencr and run 5 persistent workloads, which are widely used in existing PM designs <cit.>. Compared with the intuitive in-cache encryption scheme in eADR-based NVM systems, our Sepencr/BBE significantly reduces performance overheads from 403% to 31%/4% under different eADR execution models, respectively.
In summary, this paper makes the following contributions:
* Comprehensive modeling of eADR execution from ideal to practice. We analyze three different eADR execution models from ideal to practice. The All-Operation and Write-Compute-Order Models are ideal, and the Write-Only Model is practical and supported by the current eADR mechanism, which work together to construct comprehensive models for eADR-based systems.
* The analysis of dilemma between data persistence and confidentiality in eADR-based NVM systems. We observed that existing eADR-based NVM systems cannot ensure both data persistence and confidentiality at the same time since the CME does not work after crashes while the cached data are still flushed into NVM via eADR, thus causing information leakage.
* The encryption schemes under different eADR execution models. To encrypt data in eADR-based NVM systems with low overheads, we leverage the battery of eADR to support the AES engine upon crashes under the ideal eADR execution models. We also pre-generate the OTPs to encrypt the cached data under the practical eADR execution model.
* Extensive experiments and analysis. We implement and evaluate our proposed designs. The experimental results show that our BBE and Sepencr significantly reduce performance overheads compared with an intuitive in-cache encryption scheme. We also discuss the applicability issues in eADR-based NVM systems.
§ BACKGROUNDS
§.§ Threat Models
Like the threat models in existing designs <cit.>, we assume that only the on-chip domain, including the processor, caches and memory controller (MC), in the computer system is safe. In our threat model, attackers can reveal the data by snooping the memory bus and physically stealing the non-volatile DIMM. The data integrity attacks <cit.> are beyond the scope of our paper like existing works <cit.>, which can be defended via Merkle tree based designs <cit.>.
§.§ Counter Mode Encryption
To ensure data confidentiality, counter mode encryption (CME) has been widely used in existing NVM-based secure systems <cit.>. The CME is executed in the memory controller and becomes transparent to applications. As shown in Fig. <ref>, the AES engine encrypts the counter, data memory address and padding via the secret key to generate the one-time padding (OTP). When writing data into NVM, the plaintext data XOR the OTPs to generate the encrypted data. When reading data from NVM, the encrypted data XOR the OTPs to generate the plaintext data.
In order to guarantee security, the OTPs are one-time and cannot be reused. The inputs for generating OTPs include data memory addresses and counters. For different data lines, since the memory addresses of data are different, the OTPs are different. For the same data line, the corresponding minor counter in Fig. <ref> increases by 1 when persisting the data line into NVM. Therefore, the OTPs for the same data line on different memory writes are different. The counter block is formed via one 64-bit major counter and 64 7-bit minor counters <cit.>. When one minor counter in the counter block overflows, the major counter increases by 1. All 64 minor counters then are reset to 0 and all corresponding 64 data blocks are read to be re-encrypted via the new counters. Compared with direct AES encryption using the unchanged secret key, CME is more secure since the OTPs for encryption are one-time. Moreover, since counter blocks are cached in the memory controller, systems generate the OTPs in parallel with reading the encrypted data from NVM. The decryption latency in CME is hidden by the latency of reading data.
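The following sketch illustrates counter-mode OTP generation and XOR-based encryption for one 64-byte cache line. It uses the Python cryptography package as a software stand-in for the hardware AES engine, and the exact byte layout of the address/counter seed is our assumption rather than Intel's specification.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = os.urandom(16)  # stand-in for the on-chip secret key (illustrative only)

def make_otp(line_addr: int, major: int, minor: int) -> bytes:
    """OTP for one 64-byte line: AES_K(addr || major counter || minor counter || block id),
    expanded to 64 bytes by encrypting four 16-byte blocks (layout is assumed)."""
    otp = b""
    for block in range(4):
        seed = (line_addr.to_bytes(6, "little") + major.to_bytes(8, "little")
                + minor.to_bytes(1, "little") + bytes([block]))   # 16-byte AES input
        enc = Cipher(algorithms.AES(KEY), modes.ECB()).encryptor()
        otp += enc.update(seed) + enc.finalize()
    return otp

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Writing a line: ciphertext = xor_bytes(plaintext, make_otp(addr, major, minor)).
# Reading XORs the ciphertext with the same OTP; bumping the minor counter on each
# write ensures the pad is never reused for the same line.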
§.§ Integrity Tree
Data integrity is important to NVM, which means that the data in NVM cannot be modified by attackers. In general, integrity trees are used to fast detect data integrity <cit.>. Specifically, a Merkle Tree (MT) <cit.> is a typical integrity tree that is constructed by iteratively hashing the protected user data. A hash value stored on chip is finally generated as the root of MT. In the secure NVM systems with counter mode encryption (CME), a Bonsai Merkle Tree (BMT) <cit.> is proposed by coalescing with CME. Unlike MT that iteratively hashes the user data to construct the tree, BMT iteratively hashes the counter blocks of user data in CME to generate the tree root. The user data are protected by the HMACs that are constructed by hashing the user data and their counter blocks. Since the number of counter blocks is less than that of user data, the BMT is smaller than MT with lower storage overheads. MT and BMT are non-parallelizable integrity trees <cit.>, i.e., in MT/BMT, the parent node is computed after the child node has been computed since the child node is the input of the parent node. To improve the system performance, the parallelizable integrity trees, i.e., SGX-style integrity trees (SIT) <cit.>, are proposed. In SIT, one tree node consists of 8 counters and 1 HMAC. The HMAC in one SIT node is computed by hashing all counters in this node and one counter in the parent node. Therefore, when counters in different nodes have been updated, the HMACs in SIT nodes can be computed in parallel.
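A simplified sketch of building a Merkle-style root over counter blocks is given below; it omits the per-line HMACs, the SIT parallelization, and the caching of intermediate nodes, and the 8-ary grouping is an assumption borrowed from the SIT description above.

import hashlib

def merkle_root(counter_blocks, arity=8):
    """Iteratively hash groups of counter blocks until a single digest remains;
    the resulting root would be kept on chip to attest all counters (sketch)."""
    level = [hashlib.sha256(b).digest() for b in counter_blocks]   # leaf digests
    while len(level) > 1:
        level = [hashlib.sha256(b"".join(level[i:i + arity])).digest()
                 for i in range(0, len(level), arity)]
    return level[0]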
§.§ eADR Mechanism
Intel's Asynchronous DRAM Refresh (ADR) technique <cit.> is able to flush data from the write pending queue (WPQ) in the memory controller into NVM upon system crashes. The WPQ thus becomes a persistent domain. Intel recently proposed extended ADR (eADR) <cit.> to further extend the persistent domains and improve the system performance. As shown in Fig. <ref>, in the eADR environment, the on-chip caches are included in the power-failure protection domain. If system crashes occur, the data in caches can be written into NVM with the support of eADR <cit.>. By integrating ADR with eADR, the on-chip domain, including the WPQ and caches, is regarded as the persistent domain. By using eADR, the flush instructions, such as CLWB, CLFLUSHOPT, or CLFLUSH, become largely unnecessary and can be simplified in the context of non-volatile memory programming <cit.>. Simplified instruction semantics are efficient and helpful for improving system performance.
As shown in Fig. <ref>, we evaluate different workloads in ADR and eADR systems. The system configurations and workloads are described in the evaluation. Note that we remove the CLWB, CLFLUSHOPT, and CLFLUSH instructions from the source code to build the eADR-based workloads. This approach to building eADR-based workloads is widely used and well recognized in existing eADR-based NVM schemes <cit.>. Since the data do not need to be explicitly flushed into NVM, eADR significantly improves the system performance. The quantitative performance improvements depend on the number of writes in the workloads. On average, the eADR system improves the running time by 57% compared with the ADR system.
The battery of eADR is expected to be used to support some operations, including but not limited to writing data upon crashes <cit.>. We thus propose three different eADR execution models in this paper, from ideal to practice, as shown in Table <ref>: (1) All-Operation Model: eADR supports data write, read, and computation upon crashes via the backup battery. (2) Write-Compute-Order Model: eADR supports data write and computation upon crashes via the backup battery. (3) Write-Only Model: eADR only supports data write upon crashes via the backup battery. The Write-Only Model is the only model supported by the current version of eADR. Although the All-Operation Model and the Write-Compute-Order Model are ideal, we believe that they can be supported by eADR in the near future.
§.§ Counter Crash Consistency
Systems need to use correct counters to decrypt the encrypted data. While the encrypted data are persisted in NVM, the updated counters are cached in the memory controller for performance improvements. After system crashes, the cached counters in the memory controller are discarded. The stale counters in NVM become inconsistent with the user data and cannot decrypt the data after system recovery.
Existing designs propose different approaches to ensuring the counter crash consistency <cit.>. Supermem <cit.> leverages the write-through counter cache to ensure counter crash consistency. When counters in the counter cache are updated, the modified counters are directly persisted into NVM. Osiris <cit.> does not force to persist the counter blocks with user data. The stale counter blocks can be restored by increasing the counter values. In eADR-based NVM systems, the on-chip domain, including the counter buffer, is the persistent domain via the support of eADR. The counters in the counter buffer can be persisted into NVM upon crashes. Therefore, it is simple to achieve counter crash consistency in eADR-based NVM systems.
§ THE DILEMMA BETWEEN PERSISTENCE AND CONFIDENTIALITY
§.§ The Changes of Persistence Boundary
In order to store data, hierarchical architectures generally consist of caches, memory, and storage, which have dramatically changed over the decades. As shown in Fig. <ref>, the bottom-up hierarchy contains more and more persistence levels, which meanwhile introduce associated encryption schemes to protect data in the persistent domain. Specifically, the disk-based persistent domain encrypts data via full disk encryption (FDE) <cit.>. Moreover, non-volatile memory (NVM) is available on the market <cit.>. When using NVM, the memory is the persistent domain, i.e., the persistence boundary moves up. To protect the data in the NVM, a standard 256-AES hardware encryption <cit.> is used.
To efficiently offer system consistency, Intel proposed ADR <cit.>, which is often used with NVM, to flush data from the WPQ in the memory controller into NVM upon system crashes. The memory controller thus becomes the persistent domain in the ADR-based NVM system. To protect the data transiting in the memory bus, which exists in the threatened domain in the context of our threat model (Threat-Models), the ADR-based NVM system leverages the CME <cit.> in the memory controller to encrypt the data. As shown in Fig. <ref>, when the persistent domain expands from the disk to the memory controller, the encryption domain also expands to match the persistent domain for data confidentiality.
Recently, the eADR <cit.> technique is further proposed to extend the persistent domain, in which the data caches become the persistent domain as shown in Fig. <ref>. However, the encryption domain is not extended, causing a mismatch between the persistent domain and the encryption domain. In the eADR-based NVM system, although the persistence boundary moves up compared with the ADR-based NVM system, there are no efficient encryption schemes to protect data.
§.§ Data Confidentiality Issues
In our threat model, the on-chip caches are in the safe domain, and the cached data cannot be attacked by attackers. However, in the eADR-based NVM system, the cached data may be attacked when they are flushed into NVM. Specifically, from the persistence view, the eADR-based caches offer transient persistence and require out-of-place flushing to guarantee persistence. The data in cache cannot be persisted in-place upon crashes but need to be flushed into the out-of-place persistent devices, e.g., NVM. To store the data, the system flushes the cached data into NVM upon crashes via the support of eADR. From the security view, the cached data are flushed from the safe domain, i.e., the cache, to the unsafe domain, i.e., the memory bus and NVM (Threat-Models). Without data encryption, the plaintext data under the unsafe domain cause information leakage.
We argue that existing encryption schemes cannot address the data confidentiality issues in eADR-based NVM systems under the Write-Compute-Order and Write-Only eADR models. As shown in Fig. <ref>, the FDE in the disk and the standard AES-256 hardware encryption in the PM cannot protect data on the memory bus. The CME in the memory controller protects the data passing through the memory bus, but it is ineffective in eADR-based NVM systems upon crashes. As shown in Fig. <ref>, the CME in the memory controller does not work upon a crash with power failure, due to counter cache misses and the inability of the encryption engine to operate. However, the cached plaintext data are still flushed into NVM for persistence via the support of eADR, and this plaintext data may be attacked on the memory bus, causing data confidentiality issues. In Fig. <ref>, the eADR-based NVM system supports data persistence in the CPU caches but fails to guarantee data confidentiality.
We observe a dilemma between data confidentiality and persistence in the eADR-based NVM system. Once the system crashes, if the cached plaintext data are flushed into NVM via eADR, data confidentiality cannot be guaranteed because the data are not encrypted. On the other hand, if we discard the cached data upon crashes for the sake of confidentiality, eADR fails to support data persistence since the data are not flushed into NVM, which defeats the purpose of eADR.
§.§ The Requirements of Secure eADR-based NVM Systems
In this section, we describe the requirements of a secure eADR-based NVM system that contains persistent off-chip memory and on-chip caches via the support of eADR.
Requirement 1: Data Confidentiality Requirement. All user data on the off-chip domain need to be encrypted. As shown in Threat-Models, in our threat model, only the on-chip domains are secure. The off-chip domains, including the memory bus and NVM, are vulnerable to attackers. Since the user data contains sensitive information, all user data in the memory bus and NVM need to be encrypted. Note that the secure metadata do not need to be encrypted, e.g., the counter blocks in the CME and tree nodes in the integrity tree <cit.>.
Requirement 2: Data Persistence Requirement. All data in the caches need to be persisted into NVM upon system crashes. Since the data in the caches can be flushed into NVM via the support of eADR, eADR-based NVM systems can remove the CLFLUSH, CLFLUSHOPT, and CLWB instructions and do not need to consider security metadata crash inconsistency. In order to efficiently leverage the persistent caches, many designs remove these flush instructions from the workloads <cit.>. If the data in the caches are not persisted upon crashes, these eADR-based workloads run incorrectly.
§ SYSTEM DESIGNS AND IMPLEMENTATIONS
To ensure data confidentiality, in this section we first demonstrate an intuitive idea that directly encrypts the data in the cache, which, however, incurs high performance overheads. To reduce these overheads, we further present adaptive encryption schemes that flexibly meet the requirements (requirements) of a secure eADR-based NVM system under the different eADR execution models.
§.§ Move the Encryption Up
To address the dilemma between persistence and confidentiality in the eADR-based NVM system, an intuitive idea is to move the encryption engine into the cache and directly encrypt the data there, as shown in Fig. <ref>. The OTPs are generated in the cache, and all cached data are encrypted by XORing them with the corresponding OTPs. After a system crash, the encrypted data in the cache are directly flushed into NVM. However, directly encrypting the data in the cache is inefficient. When a processor reads data from the cache, the data need to be decrypted, i.e., the OTPs must be generated and XORed with the data. This decryption/encryption in the latency-sensitive caches sits on the critical path of reading/writing data and significantly decreases system performance, as shown in evaluation. Moreover, the cache is designed to store data and is typically built from SRAM <cit.>; moving the AES engine into the cache would increase the complexity of the manufacturing process.
§.§ Data Confidentiality under All-Operation Model
All-Operation Model is the most relaxed eADR execution model that allows the system to continue reading, writing and processing data upon crashes, like the Uninterruptible Power Supply (UPS) system <cit.>. Under this model, ensuring data confidentiality is simple since we use existing CME to encrypt cached data by reading the security metadata from NVM, and generating and XORing the OTPs in the memory controller upon crashes.
§.§ BBE under Write-Compute-Order Model
The Write-Compute-Order Model, which allows fewer operations, is stricter than the All-Operation Model: the system can write data and continue to run the encryption module upon crashes. We propose a Battery-Backed Encryption (BBE) scheme to encrypt the data upon crashes and meet both the Data Confidentiality and Data Persistence Requirements (requirements) under the Write-Compute-Order eADR execution model. We observed two challenges for encrypting cached data after crashes. (1) The loss of counters. The counters are partially stored in the metadata cache for high performance. However, due to the limited size of the metadata cache, counter misses may occur and prevent OTP generation upon crashes. Although the system can read counters from NVM into the metadata cache during running time, the Write-Compute-Order Model does not support reading data upon crashes. (2) The inefficiency of the encryption engine. After the system crashes, the power is down; without power, the AES encryption engine cannot generate the OTPs for encryption. To address these challenges, BBE introduces an increment counter (incr-counter) register, which provides counters for the encryption engine upon crashes, and leverages the battery of eADR to support the execution of the encryption module.
§.§.§ The inputs of CME upon crashes
The inputs of the AES engine for generating OTPs for memory data are counters, data addresses, and padding. The padding is appended after the counter and the data address to ensure that the OTP size matches the memory line size (64B), and it never changes during running time. We now discuss the counters and data addresses. As shown in Fig. <ref>, during running time the counters are cached in the counter cache, and the system reads counters from NVM into the counter cache. The incr-counter is stored in a 64b non-volatile register and is used upon crashes. The initial value of the incr-counter is 1, and each time its value is used to generate an OTP, it is incremented to ensure that the counters of different OTPs differ. Specifically, assume a 4MB cache with 65,536 cache lines. Upon a crash, these cache lines are flushed to the memory controller for encryption and finally written into NVM one by one via the support of eADR. In the memory controller, the initial incr-counter is used to generate the OTP for encrypting the 1st cache line; the incr-counter is then incremented to generate the OTP for the 2nd cache line; finally, the incr-counter value is 65,536 when encrypting the 65,536th cache line. After system recovery, the initial incr-counter value becomes 65,537 to ensure that OTPs are never reused. Note that the maximum value of the incr-counter in the 64b non-volatile register is 2^64, so the incr-counter overflows only after about 2^48 system crashes. Therefore, we argue that the incr-counter never overflows during the system lifetime.
The data addresses are another input for generating the OTPs. To ensure the OTPs are never reused, the addresses used to generate OTPs upon crashes must differ from those used during running time. Specifically, for a particular cache line, since the incr-counter increases from 1, the incr-counter value for that line may coincide with the counter used to encrypt the line during running time. If the data address used to generate the OTP were unchanged, the OTP of the line could be reused, violating the one-time principle (cme). To guarantee that the OTPs are never reused, we use outside-the-memory-space addresses to generate the OTPs. Assuming the NVM is 16GB, there are many unused memory addresses in the 64-bit address space. For the 16GB NVM, we use 16GB+N×64B as the address of the Nth cache line when generating its OTP upon crashes. These outside-the-memory-space addresses are never used during system running time. Therefore, the OTPs used upon crashes are guaranteed to differ from those used during running time.
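To make the seed construction concrete, the sketch below illustrates in Python how an OTP could be derived from the incr-counter and an outside-the-memory-space address. It is a simplified software model rather than the hardware implementation: the key, the packing format, and the function names are illustrative assumptions, and a hash-based stand-in replaces the on-chip AES engine. Later sketches in this section reuse these helpers.

```python
import hashlib
import struct

NVM_SIZE = 16 * 2**30     # 16GB NVM, as assumed in the text
LINE_SIZE = 64            # 64B cache/memory line
KEY = b"\x00" * 16        # placeholder for the secret key held in the memory controller

def aes_engine(seed: bytes) -> bytes:
    """Stand-in for the hardware AES engine: expands a 64B seed into a 64B OTP.
    (A real design runs AES; SHA-256 is used here only to keep the sketch
    dependency-free.)"""
    otp = b""
    block = 0
    while len(otp) < LINE_SIZE:
        otp += hashlib.sha256(KEY + seed + struct.pack("<I", block)).digest()
        block += 1
    return otp[:LINE_SIZE]

def bbe_otp(incr_counter: int, line_index: int) -> bytes:
    """OTP used by BBE upon a crash for the line_index-th cache line. The
    address lies outside the 16GB memory space, so it can never collide with
    an address used by normal CME traffic at run time."""
    crash_address = NVM_SIZE + line_index * LINE_SIZE   # 16GB + N*64B
    seed = struct.pack("<QQ", incr_counter, crash_address)
    seed += b"\x00" * (LINE_SIZE - len(seed))           # padding up to 64B
    return aes_engine(seed)

def encrypt_line(line: bytes, otp: bytes) -> bytes:
    """XOR a 64B line with its OTP (encryption and decryption are the same operation)."""
    return bytes(a ^ b for a, b in zip(line, otp))
```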
Generating the OTPs adds write latency and decreases system performance, an issue addressed by existing works <cit.>. Moreover, since system crashes are rare, the higher write latency upon crashes does not affect the overall system performance.
§.§.§ The encryption engine with backup battery
After a system crash, the encryption engine cannot continue running without power. To support OTP generation upon crashes, as shown in Fig. <ref>, we place the AES engine and the XOR gate in the eADR protection domain. The AES engine includes more than ten rounds of SubBytes, ShiftRows, MixColumns, and AddRoundKey <cit.> to encrypt data. Fortunately, modern AES engines are cost-efficient; e.g., the DW-AES engine <cit.> achieves an energy efficiency of 24 pJ/bit with an area cost of about 78.121 um^2. The XOR gate is one of the basic circuits with the simplest operation <cit.>, and its silicon area and energy overheads are very small, i.e., about 16.56 x 12.81 um^2 with 11 transistors, and 100 fJ/byte <cit.>. We thus leverage the backup battery to run the encryption engine to generate the OTPs and the XOR gate to XOR the data with the OTPs upon crashes under the eADR Write-Compute-Order Model. We analyze the energy consumption of the BBE scheme in evaluation. The shadow cache in NVM in Fig. <ref> has the same structure as the on-chip cache and is used to store the cached data one by one upon crashes, e.g., the Nth memory line in the shadow cache stores the Nth cache line after a crash.
§.§.§ The work flow of BBE
There are two states in the eADR-based system: system running time and system crashes. As shown in Fig. <ref>, during system running time, the cached data are encrypted by CME and flushed to their corresponding addresses in NVM. After a crash, the AES engine generates the OTPs via the incr-counter, and the XOR gate XORs the OTPs with the cached data to encrypt them. The AES engine and XOR gates are powered by the backup battery of eADR upon crashes under the Write-Compute-Order Model. Once a cached line is encrypted, it is flushed into the shadow cache in NVM instead of its corresponding address, since this data is encrypted differently (i.e., by BBE) from the other data in NVM (i.e., by conventional CME).
During system recovery, the system reads the data from the shadow cache to the memory controller and decrypts them. Since the structure of the shadow cache in NVM is the same as the data cache, we can obtain the correct counter by checking the location of the data in the shadow cache and the current incr-counter. All the data in the shadow cache are read, decrypted, and stored in the cache. The system then continues running.
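Building on the sketch above, the following toy functions outline the BBE crash path and recovery path described in this subsection. The data structures (a list of cache lines and an indexable shadow-cache array) and the bookkeeping of the incr-counter are simplifying assumptions.

```python
def bbe_crash_flush(cache_lines, incr_counter, shadow_cache):
    """Crash path under the Write-Compute-Order Model: each cache line is
    encrypted with a fresh incr-counter value and written, in order, into the
    shadow cache region of NVM."""
    for n, line in enumerate(cache_lines):
        otp = bbe_otp(incr_counter, n)            # from the previous sketch
        shadow_cache[n] = encrypt_line(line, otp)
        incr_counter += 1                         # counters are never reused
    return incr_counter                           # kept in the 64b NV register

def bbe_recover(shadow_cache, incr_counter_after_crash, num_lines):
    """Recovery path: the counter of the n-th shadow line is deduced from its
    position in the shadow cache and the saved incr-counter value."""
    base = incr_counter_after_crash - num_lines   # counter used for line 0
    return [encrypt_line(shadow_cache[n], bbe_otp(base + n, n))  # XOR decrypts
            for n in range(num_lines)]
```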
In BBE, in addition to writing data into NVM upon crashes, we leverage the battery of eADR to support the running of the AES engine and XOR gate under the Write-Compute-Order Model of eADR.
§.§ Sepencr under Write-Only Model
The Write-Only Model only allows the system to write data into NVM upon crashes, which matches the functionality of the current version of eADR. We propose a Separate Encryption (Sepencr) scheme to ensure data confidentiality in the eADR-based NVM system under the practical Write-Only Model of eADR by generating the OTPs in advance.
§.§.§ The Separate Structure of CME
As shown in Fig. <ref>, CME is partitioned into two stages: OTP generation and XOR encryption. In the OTP generation stage, the counter-based seeds (i.e., counters, addresses, and padding) are encrypted with the secret key via the AES engine to generate the OTPs. In the XOR encryption stage, the OTPs are XORed with the plaintext data to perform encryption. In CME, OTP generation dominates the latency, whereas XORing the OTPs with the data is fast, i.e., less than 1 cycle <cit.>. The data are finally encrypted in the XOR encryption stage.
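The two-stage structure can be summarized with a short sketch (reusing the stand-in AES engine from the BBE sketch above); the key point is that the slow stage depends only on the counter and address, while the fast stage is a plain XOR, so the two can be performed at different times and in different places.

```python
# Stage 1: OTP generation (AES-bound, dominates the latency). It depends only
# on the counter, the address, and the padding, so it can run before the data exist.
def generate_otp(counter: int, address: int) -> bytes:
    seed = struct.pack("<QQ", counter, address)
    seed += b"\x00" * (LINE_SIZE - len(seed))
    return aes_engine(seed)

# Stage 2: XOR encryption (fast, less than one cycle in hardware). It needs the
# data, but amounts to a bitwise XOR with the precomputed OTP.
def xor_encrypt(data: bytes, otp: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, otp))
```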
From Fig. <ref>, we observed that CME essentially is a separate encryption structure. The OTP generation and XOR encryption stages can be separately performed. Based on this observation, we propose Separate Encryption (Sepencr) to encrypt data in the eADR-based NVM systems under the practical Write-Only eADR execution model. The idea of Sepencr is to decouple the OTP generation and data encryption (i.e., XORing OTP with data). In our Sepencr, the OTP generation and data encryption are performed in different locations and times.
§.§.§ Sepencr Overview
Sepencr generates the OTPs in advance and leverages these pre-generated OTPs to encrypt the cached data in case the system crashes. As shown in Fig. <ref>, the pre-generated OTPs are stored in the cache (called C-OTPs). Every cache line is encrypted in the cache by XORing it with the corresponding C-OTP. In the memory controller, conventional CME generates OTPs (called M-OTPs) from the memory addresses and counters. During system running time, an encrypted cache line is flushed into the memory controller together with its corresponding C-OTP; there it is decrypted with the C-OTP and re-encrypted with the M-OTP, and the re-encrypted data is then written into NVM. Upon crashes, the encrypted cached data are directly flushed into the shadow cache in NVM. The structure of the shadow cache in Fig. <ref> is described in sectionBBE.
§.§.§ Generating C-OTPs
The addresses for generating OTPs.
When a cached data block enters the memory controller, its M-OTP is generated on demand from the block's memory address. However, the memory addresses of cached data are not suitable for generating the C-OTPs due to security issues. Specifically, suppose we leveraged the memory addresses of cached data to generate the C-OTPs. When writing a data block into the cache, the system would need to obtain the memory address, generate the OTP, move the OTP into the cache as the C-OTP, and encrypt the cached data with the C-OTP. If a system crash occurred after the data had been written into the cache but before the OTP was generated and XORed with the data, the plaintext data would be flushed into NVM. Moreover, the memory address used to generate a C-OTP may already have been used to generate an M-OTP. If the counters of the C-OTP and M-OTP are also the same (we discuss the counters later), the C-OTP/M-OTP may be reused, which violates the security principle of OTPs.
To ensure the cached data are instantly encrypted for confidentiality and avoid address reuses between M-OTPs and C-OTPs, we leverage different types of addresses to generate M-OTPs and C-OTPs like the BBE scheme (sectionBBE). For M-OTPs, we still use the memory addresses of the data to-be-encrypted as the inputs of CME. For C-OTPs, we use the outside-the-memory-space addresses as the input of CME. For the Nth cache line in the CPU cache, we leverage 16GB+N×64B as the address (called C-address) to generate the corresponding C-OTP when the NVM size is 16GB. According to the line's location in the cache, each cache line corresponds to a unique and fixed C-address.
The C-addresses of all cache lines can be deduced from the locations of the cache lines in the cache. Using the C-addresses, we generate the C-OTPs for all cache lines when the system starts, at which time there are no data in the cache. To immediately encrypt the cached data, as shown in Fig. <ref>, we store all C-OTPs in the cache. Consequently, any data written into a cache line are XORed with the corresponding C-OTP and thus encrypted instantly. When reading data from the cache into a CPU register, the cached data are decrypted by XORing the encrypted data with the C-OTPs in the cache. The XOR operation is fast <cit.> and does not incur high performance overheads, as shown in evaluation. Since the cached data in a 64B cache line are XORed with the C-OTP bit by bit, the size of a C-OTP is the same as that of the corresponding cache line. Half of the cache space is hence used to store C-OTPs for encrypting the cached data.
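A minimal sketch of the startup-time C-OTP generation follows, again reusing the stand-in engine from the earlier sketches. The crash-count counter and the C-address formula follow the description above, while the in-memory list standing in for half of the data cache is an assumption of the sketch.

```python
def generate_c_otps(num_cache_lines: int, crash_count: int):
    """Build the C-OTP table at system startup. All C-OTPs share the same
    counter (the number of crashes so far, kept in a 64b register), while the
    C-address 16GB + N*64B is unique per cache line and never used at run time."""
    c_otps = []
    for n in range(num_cache_lines):
        c_address = NVM_SIZE + n * LINE_SIZE
        seed = struct.pack("<QQ", crash_count, c_address)
        seed += b"\x00" * (LINE_SIZE - len(seed))
        c_otps.append(aes_engine(seed))
    return c_otps          # stored in half of the data cache in Sepencr
```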
When to use C-OTPs.
However, in a multi-way set-associative cache <cit.>, multiple 64B data blocks in memory can be cached into the same cache line. These data blocks would hence share the same C-address and thus the same C-OTP, which violates the OTP principle described in cme, i.e., different data lines must use different OTPs. To ensure data security, as shown in Fig. <ref>, we divide the system into two states, i.e., the running state and the crash state. During system running time, the data are encrypted by M-OTPs; upon system crashes, the data are encrypted by C-OTPs.
The data blocks in the cache are always encrypted by the C-OTPs, whereas the data blocks persisted into NVM during running time are encrypted by M-OTPs. Specifically, as shown in Fig. <ref>, when an encrypted cached data block is flushed into NVM, it is XORed with the corresponding C-OTP and M-OTP in the memory controller: the block is decrypted by the C-OTP to obtain the plaintext data block and then re-encrypted by the M-OTP, which is done in a single pass by XORing the block with the combination of the C-OTP and the M-OTP. The data block re-encrypted via the M-OTP is then flushed into NVM. Therefore, during running time, the data in the system are encrypted just like in existing secure NVM systems using CME <cit.>. Upon system crashes, the encryption module (i.e., the AES engine and XOR gates) framed by the green dotted line in Fig. <ref> does not work under the Write-Only Model of eADR; the data blocks encrypted via C-OTPs in the cache are directly flushed into NVM via the support of eADR. The C-OTPs are hence only used upon system crashes.
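The run-time write-back path can thus be expressed as a single XOR with the combination of the two pads, as sketched below under the same simplifying assumptions as the earlier snippets.

```python
def writeback_reencrypt(c_encrypted_line: bytes, c_otp: bytes, m_otp: bytes) -> bytes:
    """Run-time write-back in Sepencr: the line arriving from the cache is
    already XORed with its C-OTP; XORing it with (C-OTP XOR M-OTP) removes the
    C-OTP and applies the M-OTP in one pass, so only M-OTP-encrypted data ever
    reach NVM during normal operation."""
    combined = bytes(c ^ m for c, m in zip(c_otp, m_otp))
    return bytes(d ^ p for d, p in zip(c_encrypted_line, combined))
```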
The counters for generating C-OTPs.
Sepencr generates the C-OTPs at system startup. Although the cached data are always encrypted via C-OTPs, the C-OTPs effectively come into use only upon system crashes, and new C-OTPs are generated on system recovery for future use. Consequently, the C-OTPs are one-time, like the OTPs. During running time, the C-OTPs are not consumed and do not need to be changed until a crash occurs. The counters of the C-OTPs increase by 1 only upon system crash and recovery. Since the initial value of all counters is 1 and the counters increase at the same time, the values of all counters are the same at any time. We use one 64b non-volatile on-chip register to store the value of the C-OTP counters, which also equals the number of times the system has crashed during its lifetime. Since the 64b counter overflows only after 2^64 system crashes, which is impossible during the NVM system lifetime, the 64b register is sufficient to record the counter value. Sepencr only generates the C-OTPs upon system crashes and recovery, so the overheads of generating C-OTPs are negligible.
The counters used to generate C-OTPs differ from the incr-counter in BBE. The incr-counter is incremented after encrypting one cached line so that the next line is encrypted with a fresh counter. In Sepencr, all C-OTPs are generated with the same counter, and this counter is incremented only after system crashes and recovery.
§.§ Integrity Trees in eADR-Based Systems
Integrity trees are used to protect data integrity in NVM systems. When user data are written into NVM, the integrity tree needs to read the intermediate tree nodes to propagate the modification to the root by generating the HMACs in the intermediate nodes. Since this propagation process requires reading data upon crashes, it can only run upon crashes in the ideal All-Operation eADR execution model. In the Write-Compute-Order and Write-Only eADR models, when a crash occurs, the root becomes inconsistent with the user data since the modification propagation in the integrity tree is interrupted.
Fortunately, existing work SCUE <cit.> proposes a shortcut update scheme to immediately update the integrity tree root from user data by skipping the intermediate tree nodes in SIT, which can be leveraged in the Write-Compute-Order and Write-Only eADR Models. Data integrity is beyond the scope of our confidentiality work, and our design is orthogonal to SCUE to ensure data integrity in eADR-based NVM systems.
§ PERFORMANCE EVALUATION
§.§ Evaluation Methodology
To evaluate the performance of BBE and Sepencr, we model cycle-accurate systems in Gem5 <cit.>. The main parameters are shown in Table <ref>. The counter cache in the memory controller is 512KB and stores counter blocks. Since integrity verification <cit.> is beyond the scope of our paper, we do not model the integrity tree nodes in the system. We model a 16GB NVM using PCM technology, with the PCM latency modeled following well-recognized designs <cit.>. Since counter mode encryption is not leveraged in DRAM, we model the NVM without DRAM, like existing schemes <cit.>. We use 5 typical persistent workloads, i.e., array, queue, btree, hash, and rbtree, which are widely used in state-of-the-art NVM schemes <cit.>, to evaluate the systems. Like existing eADR-based designs <cit.>, we remove the CLFLUSH, CLFLUSHOPT, and CLWB instructions from the source codes to build the workloads.
To comprehensively examine the performance of our proposed designs, we evaluate and compare the following schemes.
* Unsecure eADR-based NVM system as Baseline. The Baseline system contains the eADR mechanism without any data encryption, and hence achieves the optimal performance.
* The eADR system with CME in cache (eADR-CME). eADR-CME moves the AES engine from the memory controller to the cache (base) to encrypt data in the cache. The encryption and decryption operations exist on the critical path of writing/reading data into/from caches. eADR-CME can work on all eADR execution models to ensure data confidentiality.
* Our proposed BBE (BBE). BBE leverages the battery of eADR to support the AES engine and XOR gates upon crashes to encrypt the cached data (sectionBBE). BBE works under the Write-Compute-Order Model.
* Our proposed Sepencr (Sepencr). Sepencr pre-generates the C-OTPs, and stores the C-OTPs in the cache (sepencr). Upon crashes, the data encrypted via C-OTPs in the cache are flushed into NVM. The Sepencr works under the Write-Only Model to protect data in eADR-based systems.
§.§ Result Analysis
The Single-core Performance.
We run the workloads with different transaction sizes in all schemes, e.g., from 64B to 1024B. As shown in Fig. <ref>, the overheads of eADR-CME are very high: on average, the execution latency of eADR-CME is 4.03x–6.90x that of the Baseline scheme. In eADR-CME, when writing data from the processor into the cache, the AES engine generates the OTPs to encrypt the plaintext data; when reading data from the cache to the processor, the AES engine generates the OTPs to decrypt the encrypted data. This encryption/decryption significantly decreases the performance of eADR-CME. Unlike eADR-CME, Sepencr prepares the OTPs (i.e., the C-OTPs) at system startup, stores them in the cache, and leverages them to encrypt cached data in case the system crashes. Since half of the cache in Sepencr is used to store C-OTPs, the increased cache contention for user data in Sepencr raises the performance overheads. However, larger transaction sizes also cause severe cache contention in the Baseline; the performance overheads of Sepencr in some workloads (e.g., the array workload), normalized to Baseline, therefore decrease when the transaction size reaches 1024B, i.e., the size of 16 cache lines. On the other hand, the XOR operations in the latency-sensitive caches in Sepencr decrease system performance. On average, the execution latency of Sepencr is 1.31x–1.46x that of the Baseline scheme. In BBE, the data are encrypted/decrypted via the traditional CME method during running time; upon crashes, BBE encrypts the user data with the support of eADR energy via new counters and addresses. Since system crashes are rare, the encryption latencies of BBE upon crashes do not affect system performance. Moreover, since the decryption latency in CME during running time is masked by that of reading data from NVM <cit.>, the execution latency of BBE is small, i.e., 1.04x–1.10x that of Baseline on average.
The Multi-core Performance.
To demonstrate the performance impact of our proposed schemes, we run the workloads with different numbers of cores (from 1 to 8), where each thread executes the same workload with a 64B transaction size on a different core. As shown in Fig. <ref>, the performance trends of Sepencr in the multi-core system differ across workloads. For example, from 2 cores to 4 cores, the performance overheads of the array workload in Sepencr increase since cache contention in Sepencr is severe when only half of the cache space is used to store user data. In the 8-core system, the performance overheads of Sepencr in the array workload significantly decrease. The reason is that the L2 and L3 caches are shared by all cores, which leads to cache contention when running the array workload even in Baseline, so the performance of Baseline in an 8-core system significantly decreases. On average, the execution latency of Sepencr is 1.42x–1.47x that of Baseline.
Unlike Sepencr, the performance of BBE normalized to Baseline is stable, since the cache usage of BBE is similar to that of Baseline. The average execution latency of BBE is 1.05x–1.07x that of Baseline.
§.§ Space and Energy Costs
The extra space costs of Sepencr over the Baseline scheme come from storing the C-OTPs in the cache and the encryption module in the memory controller. To execute traditional CME, a 512KB counter cache is used in the memory controller. A 64b register is used to store the counter value for generating C-OTPs in Sepencr. The size of one C-OTP is the same as that of one data line (64B); to encrypt all cached data, half of the cache space is used to store C-OTPs in Sepencr. As shown in Table <ref>, in our configuration, the size of the data caches (eight private L1 data caches, one L2, and one L3 cache) is 4MB (8x128KB + 1MB + 2MB = 4MB). In Sepencr, 2MB of cache space is used to store C-OTPs, and the other 2MB is used to store user data. Moreover, since the atomic operation granularity in the processor is 8b, we place 8 XOR gates in the memory controller in Sepencr. The silicon area cost of the 8 XOR gates is 0.001697 mm^2 with 88 transistors <cit.>, and the DW-AES engine requires about 78.121 um^2 <cit.>. For BBE, in addition to the counter cache, AES engine, and XOR gates, a 64b register is used to store the incr-counter.
We estimate the energy costs of Sepencr and BBE following BBB <cit.> by using the results from Dhinakaran et al. <cit.>. Since these papers <cit.> do not include the costs of flushing data from the L3 cache and memory controller into NVM, we assume the cost is the same as that of flushing data from the L2 cache into NVM even though the L3 cache and memory controller are closer to NVM than the L2 cache in the memory hierarchy. The energy costs of flushing data from the L1 data cache/L2 cache/L3 cache/memory controller into NVM are 11.839/11.228/11.228/11.228nJ/Byte. Moreover, the energy cost of XORing two bytes to obtain one byte is 800fJ <cit.>, and generating OTPs via the DW-AES engine is 192pJ/Byte <cit.>. As shown in Table <ref>, we estimate the energy cost of flushing data from the caches (L1 data, L2, and L3 caches) and memory controller (counter cache) into NVM upon crashes. We also estimate the energy cost of the encryption module (AES engine and XOR gates) to process the data. We emphasize that the results in Table <ref> do not accurately demonstrate the energy cost of eADR. However, they show the relative energy costs of different schemes.
Since only half of the cache stores user data in Sepencr, the energy cost of Sepencr is about half that of BBE and less than that of Baseline. Compared with Baseline, BBE needs to drain the 512KB counter cache from the memory controller into NVM and consumes energy to run the encryption engine and XOR gates upon crashes. BBE incurs 14% extra energy cost over Baseline due to flushing the counter cache. Moreover, it is worth noting that the energy costs of the encryption engine and XOR gates are extremely low compared with the cost of Baseline.
§.§ Recovery Time
After a reboot from system crashes, since the cached data before crashes are encrypted via BBE/Sepencr while other data in NVM are encrypted via conventional CME, the system needs to first read the cached data and leverage the BBE/Sepencr to generate the OTPs to decrypt the encrypted data. After the data are decrypted, the Sepencr generates the new C-OTPs to encrypt the cached data. Following existing designs <cit.>, we assume that reading a 64B block from NVM to cache needs about 200ns. Since generating OTPs can be executed in parallel with reading data, the latencies of reading data from NVM dominate the recovery time.
The recovery times of different schemes are shown in Fig. <ref>. Compared with BBE, half of the data cache space in Sepencr is used to store the C-OTPs, so Sepencr needs to read only half as much data as BBE on recovery, and its recovery time is also half that of BBE. Even for a large data cache (e.g., a 32MB data cache), the recovery time of BBE is less than 0.11s. Moreover, after a crash and reboot, the system needs 10–100s to perform self-checks <cit.>, so the recovery time of Sepencr/BBE is negligible.
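As a back-of-the-envelope check of the quoted figure, assuming non-overlapped 200ns reads of 64B blocks:

```python
cache_size = 32 * 2**20              # 32MB data cache
lines = cache_size // 64             # 524,288 cache lines to read back
bbe_recovery = lines * 200e-9        # ~0.105s, consistent with the "<0.11s" figure
sepencr_recovery = bbe_recovery / 2  # only half of the cache holds user data
print(f"BBE: {bbe_recovery:.3f}s, Sepencr: {sepencr_recovery:.3f}s")
```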
§ DISCUSSIONS
The difference between dirty and clean data in eADR-based caches.
An intuitive optimization for an eADR-based NVM system is to distinguish between dirty and clean data in the cache and handle them in different ways <cit.>. Specifically, the dirty data in the cache have been modified, while the clean data have not. Upon crashes, only the dirty data need to be flushed into NVM; the clean data in the cache can be discarded. In persistent workloads with flush instructions, there are many clean data in the cache since the dirty data are actively flushed via the flush instructions during running time. However, due to the elimination of flush instructions in eADR-based NVM systems, the data in the cache are only passively evicted via the cache replacement policy. For example, with the Least Recently Used (LRU) replacement policy, the system tends to keep dirty data in the cache and evict clean data, since the dirty data are more likely to have been recently used than the clean data, especially in write-intensive workloads. In our experiments, we found that after system warm-up, almost all data in the eADR-based cache are dirty. In our Sepencr/BBE, we also distinguish between clean and dirty data and discard the clean data upon crashes, although this yields no performance improvement.
The high space overheads in Sepencr.
In this paper, we abstract three eADR execution models from ideal to practice. In the first two ideal models, eADR supports computation upon crashes, and our design is low-overhead and high-performance. However, the practical Write-Only Model is the only model supported by the current eADR product. Since eADR cannot support the running of the encryption module upon crashes, the design space for ensuring data confidentiality under the Write-Only Model is limited. Our Sepencr incurs high space overheads to store the C-OTPs. However, the evaluation results under multi-core systems demonstrate that the performance of Sepencr is still high even though the cache contention is severe. We leave the optimization of Sepencr in terms of space overheads as our future work.
The applicability of the eADR mechanism.
In this paper, we discuss encryption schemes for eADR-based NVM systems under different eADR execution models. We also observe that other mechanisms cannot be directly applied to eADR-based NVM systems under the current version of eADR. Specifically, Error Correction Codes (ECCs) computed in the memory controller <cit.> are proposed to enhance data fault tolerance, but upon crashes the data in the eADR-based cache are directly flushed into NVM without generating ECCs. Moreover, Oblivious RAM (ORAM) requires complex data processing before flushing data in order to hide the program's access pattern <cit.>; directly flushing data from the cache into NVM in the eADR-based NVM system breaks the principle of ORAM. It is therefore important to consider how to apply the eADR mechanism in different systems.
§ RELATED WORK
eADR-based NVM systems. The recently proposed eADR mechanism significantly improves system performance by allowing lazy updates and decreasing the number of flush instructions. HTMFS <cit.> builds a Hardware Transactional Memory in eADR-based NVM systems to achieve both high performance and strong consistency. Dang et al. <cit.> evaluate their persistent memory allocator in eADR-based NVM systems, and this allocator significantly improves system performance. BBB <cit.> provides a battery-backed persist buffer in each core to bridge the gap between visibility and persistence. BMF <cit.> leverages a cache protected by a backup battery to store the nodes of integrity trees and ensure crash consistency. Horus <cit.> reduces the number of security metadata accesses upon crashes in eADR-based systems. Unlike these designs, we discuss how to implement encryption in eADR-based NVM systems from the ideal eADR model to the practical eADR model.
Data integrity in NVM systems. To defend against unauthorized modifications, i.e., integrity attacks, integrity trees are widely used in NVM systems <cit.>. To reduce the overheads of integrity verification, Janus <cit.> executes the integrity tree updates in parallel with backend operations (e.g., encryption and deduplication) and also pre-executes the tree updates before the write requests arrive at the memory controller. Freij et al. <cit.> observed that updating the Bonsai Merkle Tree (BMT) in the correct order incurs large overheads and propose a pipelined BMT update scheme to reduce the latency of updating the BMT. SCUE <cit.> skips the modifications of the intermediate nodes to immediately update the root in SIT. Moreover, crash inconsistency problems exist between tree nodes and user data upon crashes; Anubis <cit.>, Triad-NVM <cit.>, STAR <cit.>, and Phoenix <cit.> propose different approaches to recovering the integrity trees from crash states with low recovery time. Unlike these designs, our Sepencr and BBE focus on data confidentiality in eADR-based NVM systems. Since BBE and Sepencr still leverage counters to encrypt data, our designs are orthogonal to these counter-based integrity trees, e.g., BMT and SIT.
§ CONCLUSION
To efficiently bridge the gap between data persistence and confidentiality, this paper comprehensively studies the ideal and practical models of eADR and proposes BBE and Sepencr for eADR-based NVM systems. BBE, under the Write-Compute-Order Model, leverages the battery of eADR to keep the AES engine and XOR gates running so as to encrypt the cached data upon crashes. Sepencr, under the Write-Only Model, leverages outside-the-memory-space addresses to generate the C-OTPs at system startup; these C-OTPs are stored in the cache and used to encrypt the cached data in case the system crashes, and they are only used upon system crashes. During system running time, the data in the system are encrypted by the conventional CME scheme. Experimental results show that BBE/Sepencr significantly reduces the performance overheads compared with the intuitive approach of directly encrypting data in the caches.
|
http://arxiv.org/abs/2307.01503v1
|
20230704062304
|
On Evaluating and Mitigating Gender Biases in Multilingual Settings
|
[
"Aniket Vashishtha",
"Kabir Ahuja",
"Sunayana Sitaram"
] |
cs.CL
|
[
"cs.CL"
] |
While understanding and removing gender biases in language models has been a long-standing problem in Natural Language Processing, prior research work has primarily been limited to English. In this work, we investigate some of the challenges with evaluating and mitigating biases in multilingual settings, which stem from a lack of existing benchmarks and resources for bias evaluation beyond English, especially for non-western contexts. In this paper, we first create a benchmark for evaluating gender biases in pre-trained masked language models by extending DisCo to different Indian languages using human annotations. We extend various debiasing methods to work beyond English and evaluate their effectiveness for SOTA massively multilingual models on our proposed metric. Overall, our work highlights the challenges that arise while studying social biases in multilingual settings and provides resources as well as mitigation techniques to take a step toward scaling to more languages.
§ INTRODUCTION
Large Language Models (LLMs) <cit.> have obtained impressive performance on a wide range of NLP tasks, showing great potential in several downstream applications with real-world impact. However, these models have been shown to pick up unwanted correlations and stereotypes from the pre-training data <cit.>, which can perpetuate harmful biases against people belonging to marginalized groups. While there has been a great deal of interest in understanding and mitigating such biases in LLMs <cit.>, the focus of such studies has primarily been on English.
While Massively Multilingual Language Models <cit.> have shown impressive performance across a wide range of languages, especially with their surprising effectiveness at zero-shot cross-lingual transfer, there is still a lack of focused research on evaluating and mitigating the biases that exist in these models. This can lead to a lack of inclusive and responsible technologies for groups whose native language is not English and can also lead to the dissemination of stereotypes and the widening of existing cultural gaps.
Past work on evaluating and mitigating biases in multilingual models has mostly been concerned with gender bias in cross-lingual word embeddings <cit.> which fails to account for contextual information <cit.>, making them unreliable for LLMs. Other methods for estimating biases in contextualized representations involve Multilingual Bias Evaluation <cit.>, which utilizes parallel translation corpora in different languages that might lack non-western cultural contexts <cit.>. For debiasing LLMs, <cit.> proposed an adapter <cit.> based approach. However, the biases are measured in the word representations and only English data was used for debiasing, missing out on cultural context for other languages.
To address these concerns, we make the following key contributions in our work. First, we extend the DisCo metric <cit.> by creating human-corrected templates for 6 Indian languages.
DisCo takes sentence-level context while measuring bias and our templates are largely culturally agnostic making them more generally applicable. Second, we extend existing debiasing strategies like Counterfactual Data Augmentation <cit.> and Self-Debiasing <cit.> to mitigate gender biases across languages in Masked Language Models (MLMs).
Finally, we also evaluate the transferability of debiasing MLMs from one source language to other target languages and observe limited transfer from English to languages lacking western context. However, we do observe that typologically and culturally similar languages aid each other in reducing gender bias. While there have been multiple studies on measuring biases in multilingual models, previous work has not explored mitigating gender biases in these models across multiple languages or studying the transferability of debiasing across different languages. This is especially true for non-embedding-based approaches to evaluation and debiasing. To the best of our knowledge, ours is the first work to debias multilingual LLMs for different languages and measure the cross-lingual transfer of gender bias mitigation. To encourage future research in this area, we will release our code and datasets publicly[<https://aka.ms/multilingual-bias>].
§ MEASURING BIAS IN MULTILINGUAL MODELS
In this section, we describe the benchmarks to evaluate biases in MLMs across different languages. Since most existing benchmarks for bias evaluation in contextualized representations are designed for English, we discuss our multilingual variant of DisCo and the recently proposed MBE metric.
§.§ Multilingual DisCo
Discovery of Correlations (DisCo) is a template-based metric that measures unfair or biased associations between the predictions of an MLM and a particular gender. It follows a slot-filling procedure: for each template, predictions are made for a masked token, and these predictions are evaluated to assess whether there is a statistically significant difference in the top predictions across male and female genders. For calculating the bias score using DisCo, a χ^2 test is performed to reject the null hypothesis (with a p-value of 0.05) that the model has the same prediction rate in both male and female contexts. We use the modified version of the metric from <cit.> that measures the fraction of slot-fills containing predictions with gendered associations (a fully biased model gets a score of 1, and a fully unbiased one gets a score of 0).
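To make the scoring procedure concrete, the sketch below shows one plausible way to operationalize the modified DisCo metric described above; the exact accounting in the original implementation may differ, and the function and variable names are ours.

```python
from collections import Counter
from scipy.stats import chisquare

def disco_score(male_fills, female_fills, alpha=0.05):
    """male_fills / female_fills: lists of predicted slot-fill words, one entry
    per (template, name) query with a male or female name, respectively.
    For each predicted word, a chi-squared test checks whether its prediction
    rate differs between the two contexts; the score is the fraction of
    predicted words with a statistically significant (gendered) association."""
    male_counts, female_counts = Counter(male_fills), Counter(female_fills)
    vocab = set(male_counts) | set(female_counts)
    if not vocab:
        return 0.0
    biased = 0
    for w in vocab:
        observed = [male_counts[w], female_counts[w]]
        _, p_value = chisquare(observed)   # null: equal rate in both contexts
        if p_value < alpha:
            biased += 1
    return biased / len(vocab)             # 0 = fully unbiased, 1 = fully biased
```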
We extend the Names variant of DisCo, as personal names can act as proxies for various socio-demographic attributes and thus capture cultural context <cit.>. In India especially, surnames are a strong cultural identifier: the majority of Indian surnames typically indicate belonging to a particular caste, religion, and culture. We use surnames from the specific communities that speak the languages for which we prepare the name pairs. We further use these surnames to filter male and female personal first names from an open-source list containing a large number of popular Indian names (details in Appendix <ref>) and word-translate the names from English to the corresponding languages to be used for slot-filling.
Further, unlike nouns and pronouns which might be gender-neutral in some languages, names are indicative of gender to a large extent across cultures.
Dataset Construction: We start with the 14 templates provided in <cit.> and translate them using the Bing translation API [<https://www.microsoft.com/en-us/translator/>] into 6 Indian languages of varying resource levels. We use the Class taxonomy from <cit.> to characterize language resources, where Class 5 represents the highest-resource and Class 0 the lowest-resource languages. Our set of Indian languages contains the Class 4 language Hindi (hi); the Class 3 language Bengali (bn); the Class 2 languages Marathi (mr) and Punjabi (pa); and the Class 1 language Gujarati (gu). A challenge in transferring templates from English to these languages is that, unlike in English, a common template might not be applicable to both genders. For example, the template "{PERSON} likes to {BLANK}" will have different translations in Hindi depending on the gender of the slot fill for {PERSON}, as Hindi has gendered verbs. Hence, during translation we first filled the {PERSON} slot with a male and a female name to obtain two templates, one corresponding to each gender (see Figure <ref>). All the translated templates in our dataset were then thoroughly reviewed and corrected by human annotators who are native speakers of the languages (details in Appendix <ref>).
§.§ Multilingual Bias Evaluation (MBE)
We also evaluate MLMs with the MBE score proposed in <cit.> containing datasets for bias evaluation in 8 high resource languages: German (de), Japanese (ja), Arabic (ar), Spanish (es), and Mandarin (zh) belonging to Class 5; Portuguese (pt) and Russian (ru) in Class 4; and Indonesian (id) in Class 3. For evaluation, it first considers parallel corpora from English to different languages and extracts the set of sentences containing male and female words.
Next, the likelihood of each sentence is evaluated with the MLM, and the bias score is measured as the percentage of pairs for which the male sentence gets a higher likelihood than the female sentence. Hence, a value close to 50 for an MLM indicates no bias towards either group, while values farther from 50 in either direction indicate a bias towards one of the two gender groups. For better interpretability, we report |50 - MBE| in our results.
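A minimal sketch of the MBE computation as described above follows; likelihood scoring with the MLM is assumed to have been done already, and the names are ours.

```python
def mbe_score(pair_log_likelihoods):
    """pair_log_likelihoods: list of (male_sentence_ll, female_sentence_ll)
    tuples over the extracted parallel sentence pairs, scored by the MLM.
    Returns the raw MBE score and the |50 - MBE| value reported in our results."""
    higher_male = sum(1 for ll_m, ll_f in pair_log_likelihoods if ll_m > ll_f)
    mbe = 100.0 * higher_male / len(pair_log_likelihoods)
    return mbe, abs(50.0 - mbe)
```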
§ MITIGATING BIAS IN MULTILINGUAL MODELS
We next discuss how we extend bias mitigation techniques to work beyond English along with different fine-tuning and prompting strategies that we deploy in our experiments.
§.§ Counterfactual Data Augmentation (CDA)
CDA <cit.> is an effective method for reducing biases picked up by the language models during pre-training. It operates by augmenting an unlabeled text corpus with counterfactuals generated for each sentence based on a specific dimension like gender. As an example, the counterfactual for a sentence s = “The doctor went to his home” will be ŝ =“The doctor went to her home”. The model is then fine-tuned on the augmented data, which helps balance out any spurious correlations that would have existed in the pre-training dataset.
To generate counterfactuals in English, we perform word replacements on Wikipedia data using 193 gendered term pairs (e.g., {he, she}, {actor, actress}, etc.) following <cit.>. However, generating counterfactuals for languages other than English can be challenging, as acquiring term pairs requires recruiting annotators, which can be expensive for low-resource languages. Further, word replacement can prove unreliable for languages that mark gender case on objects (like Hindi), producing ungrammatical sentences <cit.>.
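The English word-replacement step can be illustrated with a small sketch; only a handful of the 193 term pairs are shown, and the handling of casing and ambiguous forms (e.g., "her" mapping to either "his" or "him") is deliberately simplified.

```python
GENDER_PAIRS = [("he", "she"), ("his", "her"), ("man", "woman"),
                ("actor", "actress"), ("father", "mother")]  # subset of the 193 pairs

SWAP = {}
for a, b in GENDER_PAIRS:
    SWAP[a], SWAP[b] = b, a      # bidirectional lookup

def counterfactual(sentence: str) -> str:
    """Word-replacement CDA for English: every gendered term is swapped with
    its counterpart; all other tokens are left unchanged."""
    return " ".join(SWAP.get(tok.lower(), tok) for tok in sentence.split())

# counterfactual("The doctor went to his home") -> "The doctor went to her home"
```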
Generating Multilingual Counterfactuals: We use a translation-based approach to obtain counterfactually augmented examples in different languages. We first select the sentences in the English Wikipedia corpus containing India-related keywords, which were extracted using ConceptNet <cit.> and include keywords related to Indian food, locations, languages, religions, etc. Using these keywords, we select a set of 20K sentences to avoid under-representation of Indian-culture-specific context. Moreover, since generating counterfactuals for the whole corpus and fine-tuning MLMs for each of the languages would require substantial energy consumption <cit.>, we use this set of 20K filtered sentences for debiasing the MLMs. Further, we augment the list of 193 term pairs with pairs of Indian personal names. We align the male and female names through a greedy search that selects pairs with minimum edit distance. Finally, using the augmented term-pair list and the filtered data with Indian context, we generate counterfactuals via word replacement and translate the obtained data into the 6 Indian languages.
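The greedy name alignment can be sketched as follows; we use a string-similarity ratio as a proxy for edit distance, and the exact greedy order used in our pipeline may differ.

```python
import difflib

def pair_names(male_names, female_names):
    """Greedily align male and female first names into counterfactual pairs by
    picking, for each male name, the remaining female name with the highest
    string similarity (a proxy for minimum edit distance)."""
    female_pool = list(female_names)
    pairs = []
    for m in male_names:
        if not female_pool:
            break
        best = max(female_pool,
                   key=lambda f: difflib.SequenceMatcher(None, m, f).ratio())
        pairs.append((m, best))
        female_pool.remove(best)
    return pairs

# e.g., pair_names(["Amit", "Arjun"], ["Amita", "Anjali"]) tends to pair names
# with similar spellings, such as ("Amit", "Amita").
```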
Once we have obtained CDA data in different languages, we can utilize it to debias the model. We define CDA-𝒮 as a fine-tuning setup where the MLM is debiased using CDA data for languages belonging to the set 𝒮⊂ℒ, where ℒ = {en, hi, pa, bn, ta, gu, mr}.
In particular, we explore the following classes of fine-tuning setups:
1. CDA-{en}: Fine-tune the model with English CDA data only (zero-shot debiasing).
2. CDA-{l}: Fine-tune the model with language-l-specific CDA data (monolingual-debiasing).
3. CDA-{en, l}: Fine-tune the model with English and language l's CDA data (few-shot debiasing).
4. CDA-(ℒ∖{en}): Fine-tune the model with CDA data in all non-English languages (multilingual-debiasing).
§.§ Self-Debiasing
Self-Debiasing <cit.> is a post-hoc method to reduce corpus-based biases in language models. It is based on the observation that pretrained language models can recognize biases in text fairly well: the input text is prepended with a prompt encouraging the model to exhibit the undesired behavior, the undesirable predictions are identified as the ones whose likelihood increases when the prompt is provided, and these predictions are suppressed in the final output distribution.
We translate the English prompt "The following text discriminates against people because of their gender" into different languages and use these translations for bias mitigation (SD-l). We also experiment with using the English prompt for other languages (SD-en).
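The suppression step can be illustrated with a toy re-weighting over a candidate-token distribution, following the description above. This is only an approximation with names of our own choosing; the exact scaling function and the role of the ϵ hyperparameter follow the original implementation of <cit.>.

```python
import math

def self_debias(p_default, p_biased, decay=50.0):
    """p_default / p_biased map candidate tokens to their probabilities under
    the plain input and under the input prefixed with the bias-encouraging
    prompt. Tokens whose probability increases with the prompt are deemed
    undesirable and scaled down exponentially; the result is renormalized."""
    scaled = {}
    for tok, p in p_default.items():
        delta = p - p_biased.get(tok, 0.0)
        alpha = 1.0 if delta >= 0 else math.exp(decay * delta)
        scaled[tok] = p * alpha
    z = sum(scaled.values())
    return {tok: v / z for tok, v in scaled.items()}
```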
§ RESULTS
We evaluate the Out Of Box (OOB) biases as well as the effect of applying the aforementioned debiasing techniques in multilingual MLMs like XLMR-base <cit.>, IndicBERT <cit.>, and mBERT (cased) <cit.> using our multilingual DisCo metric. Additionally, we also evaluate language-specific monolingual models (see Table <ref> in the appendix) and XLMR on the MBE score.
Comparison Between Different Fine-tuning Setups for CDA: We first compare the results of bias mitigation across all four classes of fine-tuning setups for CDA to understand the effect each has on the final bias reduction. As can be seen in Table <ref>, even though zero-shot transfer from English (CDA-{en}) results in some reduction in biases compared to the models without any debiasing (OOB), most of the other fine-tuning setups that use language-specific counterfactuals achieve larger drops in the DisCo score. Specifically, few-shot debiasing (CDA-{en, l}) and multilingual-debiasing (CDA-(ℒ∖{en})) perform consistently the best for both models, with the multilingual setup performing slightly better for XLMR and substantially better for IndicBERT. This shows that even though the language-specific counterfactuals were obtained via translation, using them to debias the models helped achieve considerable bias reduction. We also observe that monolingual debiasing (CDA-{l}) leads to a drop similar to that of CDA-{en}, which we conjecture might be attributed to the small amount of debiasing data available in languages other than English. Further, the dominant performance of the multilingual setup highlights that languages from a similar culture can collectively help reduce biases in such models. We also observe similar results for mBERT, which are provided in Table <ref> in the appendix.
Comparison Between CDA and Self-Debiasing:
In contrast to CDA, Self-Debiasing shows different bias mitigation trends for the Indian languages. Table <ref> shows that, for both multilingual MLMs, the overall bias ends up increasing when Self-Debiasing is applied, and by a considerable amount for IndicBERT.
This seems to be in contrast to past work <cit.> that shows Self-Debiasing to be the strongest debiasing technique. However, as we will see next, there are cases where it can indeed be effective in reducing biases.
Evaluation on MBE Metric:
We first investigate the effect of Self-Debiasing on monolingual models when evaluated with the MBE metric. As can be observed in Figure <ref>, for most languages (except Russian and Spanish), both variants of Self-Debiasing manage to reduce the biases substantially. However, when we examine the results for a multilingual model, i.e., XLMR, in Figure <ref>, we again observe the same phenomenon as for multilingual DisCo: the biases tend to increase upon applying Self-Debiasing. Figure <ref> also shows that SD-en and SD-l have similar debiasing performance for monolingual models. It is intriguing that monolingual models are able to self-debias so well based on English prompts; this similarity in results between non-English and English prompts could possibly be explained by English contamination in the monolingual pretraining data <cit.>.
We also compare the effect of CDA-based debiasing on reducing the biases and observe that it is more successful in most languages (except Spanish and Japanese). Even though MBE and Multilingual DisCo have different experimental setups, we obtain consistent findings across the two metrics, such as English-only debiasing being insufficient to reduce biases in other languages and Self-Debiasing being ineffective at mitigating biases in multilingual models, which strengthens the applicability of our results.
Our results indicate that Self-Debiasing might be limited for multilingual models and we leave the investigation of this phenomenon to future work.
§ CONCLUSION
In this work, we investigated gender biases in multilingual settings by proposing a bias evaluation dataset in 6 Indian languages. We further extended debiasing approaches like CDA and Self-Debiasing to work for languages beyond English and evaluated their effectiveness in removing biases across languages in MLMs. One of our key findings is that debiasing with English data may only provide limited bias reduction in other languages, while even a limited amount of counterfactual data collected through translation can lead to substantial improvements when the model is jointly trained with such data from similar languages. Finally, we showed that, despite being effective on monolingual models, Self-Debiasing is limited in reducing biases in multilingual models, often even resulting in an increase in overall bias. We hope that our work will act as a useful resource for the community to build more inclusive technologies for all cultures.
§ LIMITATIONS
The present study is limited to exploring biases in MLMs along the gender dimension only. Future work can explore other important dimensions, especially for non-western contexts, such as caste and ethnicity <cit.>.
We also used machine translation on English counterfactuals to obtain CDA data in each language of our dataset. Translations are prone to errors and issues like translationese <cit.>, especially for lower-resource languages, and can therefore make the quality of the generated counterfactuals unreliable. In the future, we would like to explore learning generative <cit.> or editing models <cit.> for automatically generating gender counterfactuals given text data in different languages. This can help us scale our counterfactual generation process to a much larger number of samples while also avoiding the losses in quality that may arise due to machine translation. Our multilingual DisCo metric is currently limited to 6 Indian languages, and we hope our work will inspire further extensions to cover different language families and improve the focus on multilingual bias evaluation.
§ ETHICAL CONSIDERATIONS
Our work dealt with evaluating biases in MLMs and different methods for bias mitigation in multilingual settings. While most of the current work disproportionately favors high-resource languages like English, it is extremely important to address this linguistic disparity in order to build inclusive and responsible language technology. Through our work, we provided a dataset to evaluate gender biases in languages of varying resource levels as well as methods to reduce such biases.
§ ACKNOWLEDGEMENTS
We would like to thank the following people who helped in evaluating and improving the Multilingual DisCo templates: Ranajoy Sadhukhan, Atharv Sonwane, Abhinav Rao, Krut Patel and Mirza Baig.
§ APPENDIX
§.§ Dataset Construction Details
Scraping Language-Specific Personal Names: We curated a list of personal names corresponding to the cultures associated with each language by scraping the popular surnames associated with each culture from Wikipedia[<https://en.wikipedia.org/wiki/Category:Indian_surnames>]. We then obtained an open-source list of Indian male[<https://gist.github.com/mbejda/7f86ca901fe41bc14a63>] and female[<https://gist.github.com/mbejda/9b93c7545c9dd93060bd>] names and segmented the names into the different languages by referring to our culture-specific surname lists. The names obtained this way are in Latin script, so we transliterate them to the corresponding languages using the Bing Translator API.
Annotator Details: For verifying the templates obtained using machine translation we asked human annotators to correct them. Our annotators were colleagues working at our research lab and all of them were of South Asian (Indian) descent, native to different parts of India, and each having one of the six Indian languages that we consider as their L1. They all identify as males and are in their mid-20s.
The annotators were provided original English templates along with the translated ones in their native language and were asked to verify that they were grammatically correct and conveyed the exact same meaning as the original base template. Further, they were asked to make corrections to ensure that a template pair was as close to each other as possible except for modifications in the gendered terms, like verbs in the case of Hindi (Figure <ref>).
Dataset Statistics:
Our dataset consists of 14 templates in each language, and the number of name pairs for each language is given in Table <ref>.
§.§ Experimental Setup
We performed all our experiments on a single A100 GPU. For the fine-tuning setup, we trained for 50K steps using a batch size of 32, a learning rate of 2e-5, and a weight decay of 0.01. We follow the same hyperparameters for the other fine-tuning setups as well, but instead of fine-tuning for 50K steps, we train for 1 epoch following <cit.>, as the amount of data is limited in the other languages. For Self-Debiasing, we used the default hyperparameters, i.e., the decay constant λ = 50 and ϵ = 0.01. For all of our experiments, we used the pre-trained models provided with HuggingFace's transformers library <cit.>. The details of all the pre-trained models that we use in the paper are provided in Table <ref>.
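For concreteness, a minimal sketch of this fine-tuning configuration (our illustration, not the authors' released code; the model name, dataset loading, and masking probability are placeholders/assumptions) could look as follows:

from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "xlm-roberta-base"  # placeholder multilingual MLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

train_dataset = ...  # tokenized counterfactually-augmented (CDA) corpus goes here

args = TrainingArguments(
    output_dir="cda-debiasing",
    max_steps=50_000,               # the low-resource CDA setups train for 1 epoch instead
    per_device_train_batch_size=32,
    learning_rate=2e-5,
    weight_decay=0.01,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),  # standard MLM masking (assumed)
)
trainer.train()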
|
http://arxiv.org/abs/2307.01658v1
|
20230704113630
|
SliceOps: Explainable MLOps for Streamlined Automation-Native 6G Networks
|
[
"Farhad Rezazadeh",
"Hatim Chergui",
"Luis Alonso",
"Christos Verikoukis"
] |
cs.NI
|
[
"cs.NI",
"eess.SP"
] |
|
http://arxiv.org/abs/2307.00229v1
|
20230701053414
|
Constrained Local Approximate Ideal Restriction for Advection-Diffusion Problems
|
[
"Ahsan Ali",
"James Brannick",
"Karsten Kahl",
"Oliver A. Krzysik",
"Jacob B. Schroder",
"Ben S. Southworth"
] |
math.NA
|
[
"math.NA",
"cs.NA",
"65N55 (Primary), 65N22, 65F08, 65F10 (Secondary)"
] |
|
http://arxiv.org/abs/2307.02550v1
|
20230705180005
|
K-classes of delta-matroids and equivariant localization
|
[
"Christopher Eur",
"Matt Larson",
"Hunter Spink"
] |
math.CO
|
[
"math.CO",
"math.AG"
] |
K-classes of delta-matroids and equivariant localization
Christopher Eur, Matt Larson, Hunter Spink
August 1, 2023
========================================================
Delta-matroids are “type B” generalizations of matroids in the same way that maximal orthogonal Grassmannians are generalizations of Grassmannians. A delta-matroid analogue of the Tutte polynomial of a matroid is the interlace polynomial. We give a geometric interpretation for the interlace polynomial via the K-theory of maximal orthogonal Grassmannians. To do so, we develop a new Hirzebruch–Riemann–Roch-type formula for the type B permutohedral variety.
§ INTRODUCTION
For a nonnegative integer n, let [n] = {1, …, n}, and for a subset S⊆ [n], let _S = ∑_i∈ S_i be the sum of the corresponding standard basis vectors in ^n.
Let [n̅] = {1̅, …, n̅}, and
consider [n, n̅] = [n]⊔ [n̅] equipped with the involution i ↦i̅.
Writing _i̅ = -_i, let _S = ∑_i∈ S_i for a subset S⊆ [n,n̅].
A subset S⊆ [n,n̅] is admissible if {i,i̅}⊄S for all i∈ [n]. Note that a maximal admissible subset of [n,n̅] has cardinality n.
A delta-matroid on [n,n̅] is a nonempty collection ℱ of maximal admissible subsets of [n,n̅] such that
each edge of the polytope
P() = the convex hull of {_B∩ [n] : B∈ℱ}⊂^n
is a parallel translate of _i or _i±_j for some i,j∈[n].
The collection ℱ is called the feasible sets of , and P() is called the base polytope of .
One often works with the following translation of the twice-dilated base polytope
P() = 2P() - (1, …, 1) = the convex hull of {_B : B ∈ℱ}⊂^n.
Delta-matroids generalize matroids as the “minuscule type B matroids” in the theory of Coxeter matroids <cit.>, and as “2-matroids” in the theory of multimatroids <cit.>.
The Tutte polynomial of a matroid <cit.> admits a delta-matroid analogue called the interlace polynomial, introduced in <cit.>.
For a delta-matroid on [n,n̅] with feasible sets ℱ and a subset S ⊆ [n], let
d_(S) = min_B ∈ℱ(| S ∪ (B∩ [n])| - |S∩ B∩ [n]| ), the lattice distance between _S and P().
Then, the interlace polynomial Int_(v)∈[v] of is defined as
Int_(v) = ∑_S ⊆ [n] v^d_(S).
Similar to the Tutte polynomial of a matroid, the interlace polynomial has several alternative definitions: it satisfies a deletion-contraction recursion <cit.>, and Int_(v-1) has an activities description <cit.>. Additionally, its evaluation at v=0 gives the number of feasible sets.
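As a concrete illustration (ours, not from the paper), the definition can be evaluated directly by brute force over all subsets S ⊆ [n]; for instance, for the delta-matroid on [3,3̅] with feasible sets {1,2,3}, {1,2̅,3̅}, {1̅,2,3̅}, {1̅,2̅,3} (which reappears among the examples in the last section), this brute-force evaluation gives 4v + 4:

from itertools import chain, combinations
from collections import Counter

def interlace_polynomial(n, feasible):
    """feasible: the sets B ∩ [n] (subsets of {1,...,n}) of the feasible sets of the delta-matroid."""
    ground = range(1, n + 1)
    subsets = chain.from_iterable(combinations(ground, r) for r in range(n + 1))
    coeffs = Counter()  # coeffs[d] = number of subsets S with d(S) = d
    for S in map(set, subsets):
        d = min(len(S | B) - len(S & B) for B in feasible)  # |S ∪ (B∩[n])| - |S ∩ B ∩ [n]|
        coeffs[d] += 1
    return dict(coeffs)  # {exponent: coefficient} of the interlace polynomial

feasible = [frozenset(B) for B in [{1, 2, 3}, {1}, {2}, {3}]]  # intersections of the feasible sets with [3]
print(interlace_polynomial(3, feasible))  # {0: 4, 1: 4}, i.e. 4v + 4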
Here, we show that Fink and Speyer's geometric interpretation of Tutte polynomials via the K-theory of Grassmannians <cit.> also generalizes to interlace polynomials.
Let us first recall their result.
Each r-dimensional linear space L ⊆^n over a field gives rise to a matroid on [n] and a point Ł in the Grassmannian Gr(r; n).
The torus T = (^*)^n acts on Gr(r; n), and we consider the torus-orbit-closure T·Ł of L.
The K-class of the structure sheaf [𝒪_T ·Ł] in Grothendieck ring K(Gr(r; n)) of vector bundles on Gr(r;n) depends only on , and it admits a combinatorial formula which makes sense for any matroid of rank r on [n].
This formula is used to define a class y() ∈ K(Gr(r; n)) such that y() = [𝒪_T·Ł] whenever has a realization L.
Now, consider the diagram
[diagram: Fl(1, r, n-1; n) with maps π_r to Gr(r; n) and π_1n to Fl(1, n-1; n) ↪ℙ^n-1 ×ℙ^n-1]
where π_r and π_1n are the natural forgetful maps.
Then <cit.> states that
π_1n*π_r^* ( y() · [𝒪(1)] ) = T_(α, β),
where 𝒪(1) is the line bundle on Gr(r; n) defining the Plücker embedding, α and β are the K-classes of the structure sheaves of hyperplanes in each of the ^n-1 factors, and T_ is the Tutte polynomial of .
This result was subsequently generalized to Tutte polynomials of morphisms of matroids in <cit.>.
Here, we establish a similar geometric interpretation for the interlace polynomials of delta-matroids via the K-theory of maximal orthogonal Grassmanians.
Let ^2n+1 have coordinates labelled n̅, …, 1̅, 0, 1, …, n.
Let q be the nondegenerate quadratic form on ^2n+1 given by q(x) = x_1 x_1̅ + … + x_n x_n̅ + x_0^2.
For 0≤ r ≤ n,
let OGr(r;2n+1) be the orthogonal Grassmannian, which is the subvariety of Gr(r;2n+1) consisting of isotropic r-dimensional subspaces, i.e.,
OGr(r;2n+1) = {r-dimensional linear subspaces L⊂^2n+1 such that q|_L is identically zero}.
The action of the torus T = (^*)^n on ^2n+1 given by
(t_1, …, t_n) · (x_n̅, …, x_1̅, x_0, x_1, …, x_n) = (t_n^-1x_n̅, …, t_1^-1x_1̅, x_0, t_1x_1, …, t_nx_n)
preserves the quadratic form q, and hence induces a T-action on OGr(r;2n+1).
One has the T-equivariant Plücker embedding OGr(r;2n+1) ↪ Gr(r;2n+1) ↪(⋀^r ^2n+1).
The maximal orthogonal Grassmannian is OGr(n;2n+1). Points on OGr(n;2n+1) realize delta-matroids in the same way that points on the usual Grassmannian realize matroids. More precisely, <cit.> <cit.> showed that the torus-orbit-closure T·Ł of a point Ł∈ OGr(n;2n+1), considered as a T-invariant subvariety of (⋀^n ^2n+1) via the Plücker embedding, has moment polytope μ(T·Ł) equal to P(), where is a delta-matroid with the set of feasible sets
{maximal admissible B⊂ [n,n̅] such that the B-th Plücker coordinate of L is nonzero}.
Using this polyhedral property, we construct for any (not necessarily realizable) delta-matroid an element y() in the Grothendieck ring K(OGr(n;2n+1)) of vector bundles on OGr(n;2n+1) (see Proposition <ref>).[We caution that, unlike the matroid case in <cit.>, the class y() of a delta-matroid with a realization [L]∈ OGr(n;2n+1) may not be equal to the K-class of the structure sheaf [𝒪_T·Ł], although it is closely related, see <Ref> and <Ref>. For a detailed discussion of [𝒪_T·Ł], see <Ref> and <Ref> .]
To relate the K-class y() to the interlace polynomial, we consider the orthogonal partial flag variety OFl(1,n; 2n+1) ⊂ OGr(1; 2n+1) × OGr(n; 2n+1).
Note that OGr(1; 2n+1) is a smooth quadric inside of Gr(1; 2n+1) = ℙ^2n. We have the diagram
[diagram: OFl(1, n; 2n+1) with maps π_n to OGr(n; 2n+1) and π_1 to OGr(1; 2n+1) ↪ℙ^2n.]
Let 𝒪(1) denote the ample line bundle that generates the Picard group of OGr(n;2n+1), i.e., its square 𝒪(2) defines the Plücker embedding OGr(n;2n+1) ↪ Gr(n;2n+1) ↪(⋀^n ^2n+1). The line bundle 𝒪(1) defines the Spinor embedding of OGr(n; 2n+1) into ℙ^2^n - 1.
Recall that K(ℙ^2n)≃ℤ[u]/(u^2n+1), where u is the structure sheaf of a hyperplane in ^2n. So we may represent any class in K(ℙ^2n) uniquely as a polynomial in u of degree at most 2n.
Let Int_(v)∈[v] be the interlace polynomial of a delta-matroid . We have
π_1*π_n^* (y() · [𝒪(1)] ) = u ·Int_(u - 1) ∈ K(ℙ^2n).
To prove the theorem, in Proposition <ref> we transport the pullback-pushforward π_1_*π_n^*(-) computation to a sheaf Euler characteristic χ(-) computation on a smooth projective toric variety X_B_n known as the type B permutohedral variety (Definition <ref>).
Then, to carry out the sheaf Euler characteristic computation, we establish the following new Hirzebruch–Riemann–Roch-type formula for X_B_n. Let A^∙(X_B_n) be the Chow ring of X_B_n, with the degree map ∫_X_B_n A^n(X_B_n) ∼→.
There is an injective ring homomorphism ψ K(X_B_n) → A^∙(X_B_n), which becomes an isomorphism after tensoring with ℤ[1/2]. For any [ℰ]∈ K(X_B_n), the map ψ satisfies
χ(X_B_n, [ℰ]) = 1/2^n∫_X_B_nψ([ℰ]) · (1+γ + γ^2 + ⋯ + γ^n)
where γ is the anti-canonical divisor of X_B_n.
The map ψ in Theorem <ref> is unrelated to the usual Chern character. It also differs from the Hirzebruch–Riemann–Roch-type isomorphism of <cit.>, which is not as suitable for proving Theorem <ref>.
The g-polynomial <cit.> of a matroid is an invariant of matroids that can be (conjecturally) used to give strong bounds on the number of pieces in a matroid polytope subdivision. The coefficients of the g-polynomial are certain linear combinations of the coefficients that are used to express y() in terms of structure sheaves of Schubert varieties in K(Gr(r; n)). In <cit.>, the authors express the g-polynomial in terms of a computation similar to the one in Theorem <ref>. Is there an invariant of delta-matroids which gives strong bounds on the number of pieces in a delta-matroid polytope subdivision?
The paper is organized as follows. In Section <ref>, we discuss equivariant K-theory and define y(). In Section <ref>, we prove Theorem <ref> and discuss a certain class in K(X_B_n) which will be used in the proof of Theorem <ref>. In Section <ref>, we prove Theorem <ref>. In Section <ref>, we give some examples and questions.
§.§ Acknowledgements
We thank Alex Fink, Steven Noble, Kris Shaw, and David Speyer for helpful conversations.
The first author is partially supported by the US National Science Foundation (DMS-2001854 and DMS-2246518). The second author is supported by an NDSEG graduate fellowship.
§ K-CLASSES OF DELTA-MATROIDS
Throughout, we will use localization for the torus-equivariant K-theory of toric varieties and flag varieties, for which one can consult <cit.>, <cit.>, or <cit.> along with references therein. Let T = (^*)^n for an algebraically closed field, and denote by K_T(X) the T-equivariant K-ring of vector bundles on a T-variety X. Identifying the character lattice of T with ^n, we write K_T(pt) = [T_1^± 1, …, T_n^± 1] for the equivariant K-ring of a point pt. For 𝐯 = (v_1, …, v_n) ∈^n, we write T^𝐯 = T_1^v_1⋯ T_n^v_n.
For a countable-dimensional T-representation V≃⊕_i · v_i, where T acts on v_i by t· v_i = t^𝐦_i v_i, the Hilbert series Hilb(V) = ∑_i T^𝐦_i is the sum of the characters of the action, which is often a rational function. For an affine semigroup S⊆^n, we write Hilb(S) = Hilb([S]) = ∑_𝐦∈ S T^-𝐦. Note the minus sign, which arises because for χ^𝐦∈[S], we have t·χ^𝐦 = t^-𝐦χ^𝐦.
§.§ K-classes on the maximal orthogonal Grassmannian
We begin by recalling some facts about the T-action on OGr(n;2n+1), whose verification is routine and is omitted. Recall that we have set _i̅ = - _i.
* The T-fixed points OGr(n;2n+1)^T of OGr(n;2n+1) are in bijection with maximal admissible subsets, where such a subset B ⊂ [n,n̅] corresponds to the isotropic subspace
L_B = {x∈^2n+1 : x_0 = 0 and x_j = 0 for all j ∈ [n,n̅]∖ B}.
Polyhedrally, by identifying B⊂ [n,n̅] with _B∩ [n]∈^n, we may further identify the T-fixed points with the vertices of the unit cube [0,1]^n ⊂^n.
* Each T-fixed point L_B admits a T-invariant affine chart U_B ≃𝔸^n(n+1)/2, on which T acts with characters
𝒯_B = {-_i : i∈ B}∪{-_i-_j : i≠ j ∈ B}.
In particular, for 𝐯∈𝒯_B with B' ⊂ [n,n̅] such that _B' = _B + 2𝐯, we have a 1-dimensional T-orbit in OGr(n;2n+1) whose boundary points are L_B and L_B'.
All 1-dimensional T-orbits of OGr(n;2n+1) arise in this way.
Now, the localization theorem applied to K_T(OGr(n;2n+1)) states the following:
<cit.>
The restriction map
K_T(OGr(n;2n+1)) → K_T(OGr(n;2n+1)^T) = ∏_L_B ∈ OGr(n;2n+1)^T[T_1^± 1, … T_n^± 1]
is injective, and its image is
{ (f_B)_B ∈∏_L_B ∈ OGr(n;2n+1)^T[T_1^± 1, … T_n^± 1] : for 𝐯∈𝒯_B with B' ⊂ [n,n̅] such that _B' = _B + 2𝐯
f_B - f_B'≡ 0 mod (1 - T^𝐯) }.
For an equivariant K-class [ℰ]∈ K_T(OGr(n;2n+1)) and a maximal admissible subset B, we write [ℰ]_B ∈[T_1^± 1, …, T_n^± 1] for the B-th factor of the image of [ℰ] under the restriction map in <Ref>.
For a matroid on a ground set [n], Fink and Speyer defined a T-equivariant K-class y() on a Grassmannian Gr(r;n).
We now define an analogous T-equivariant K-class y() for a delta-matroid . For a feasible set B of , denote by cone_B() the tangent cone of P() at the vertex _B∩ [n], i.e.,
cone_B() = _≥ 0{P() - _B∩ [n]}.
Since cone_B() is a rational strongly convex cone whose set of primitive rays is a subset of 𝒯_B, the multigraded Hilbert series
Hilb(cone_B()∩^n) = ∑_𝐦∈cone_B()∩^n T^-𝐦
is a rational function whose denominator divides ∏_𝐯∈𝒯_B (1-T^-𝐯) <cit.>.
For a delta-matroid on [n,n̅], define y() ∈ K_T(OGr(n;2n+1)^T) by
y()_B = Hilb(cone_B() ∩^n) ·∏_𝐯∈𝒯_B (1-T^-𝐯) if B a feasible set of
0 otherwise
for any maximal admissible subset B ⊂ [n,n̅]. Then y() lies in the subring K_T(OGr(n;2n+1)).
We omit the proof of the proposition, as it is essentially identical to the proof of the analogous statement <cit.> for matroids. Alternatively, it can be deduced from Theorem <ref> and Proposition <ref>.
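To illustrate the definition in the simplest case (our example, not taken from the paper), let n = 1 and let the delta-matroid on [1,1̅] have both feasible sets {1} and {1̅}, so that P() = [0,1]. For B = {1} we have cone_B() = _≥ 0{-_1} and 𝒯_B = {-_1}, hence
y()_B = Hilb(cone_B() ∩^1)·(1-T_1) = (1 + T_1 + T_1^2 + ⋯)·(1-T_1) = 1,
and similarly y()_B = 1 for B = {1̅}. Thus y() = [𝒪_OGr(1;3)], consistent with the fact that the torus-orbit closure of a generic point of OGr(1;3) (a smooth conic in ℙ^2) is all of OGr(1;3).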
Let us note however the following difference from the matroid case.
For a matroid on [n], the class y() in <cit.> has the property that if Ł∈ Gr(r;n) realizes , then y() equals [𝒪_T·Ł], the K-class of the structure sheaf of the torus-orbit closure.
This property often fails for delta-matroids because delta-matroid base polytopes often do not enjoy certain polyhedral properties enjoyed by matroid base polytopes, namely normality and very ampleness.
Recall that a lattice polytope P ⊂^n (with respect to the lattice ^n) is normal if
for all positive integer ℓ one has (ℓ P) ∩^n = {𝐦_1 + … + 𝐦_ℓ : 𝐦_i ∈ P ∩^n for all i =1, …, ℓ}. If P is normal, then it is very ample, meaning that for every vertex 𝐯 of P, one has
(_≥ 0{P - 𝐯}) ∩^n = _≥ 0{(P - 𝐯)∩^n}.
For a delta-matroid realized by Ł∈ OGr(n;2n+1), the T-equivariant K-class [𝒪_T·Ł] of the structure sheaf of the torus-orbit-closure of L satisfies
[𝒪_T·Ł]_B =
Hilb( _≥ 0{(P() - _B∩ [n])∩^n})∏_𝐯∈𝒯_B (1- T^-𝐯) if B a feasible subset of
0 otherwise
for any maximal admissible subset B.
In particular, the T-equivariant K-class y() equals [𝒪_T·Ł] if and only if P() is very ample.
For a finite subset 𝒜⊂^n, let Y_𝒜 be the projective toric variety defined as the closure of the image of the map T→^|𝒜|-1 given by 𝐭↦ (𝐭^𝐦)_𝐦∈𝒜.
Writing _0 = 0 ∈^n, let us consider
𝒜(L) = {_S : S ⊂ [n,n̅]∪{0} with |S| = n such that
the S-th Plücker coordinate of L is nonzero}.
There is an embedding of ℙ^|𝒜| - 1 into (⋀^n ^2n+1) which identifies the orbit closure T·Ł⊂(⋀^n ^2n+1) with Y_𝒜(L).
We now claim that
𝒜(L) = {𝐦 + 𝐦' - (1, …, 1) : 𝐦, 𝐦' ∈ P() ∩^n}⊂P().
That is, up to translation by -(1, …, 1), the set 𝒜(L) is the set of all sums of two (not necessarily distinct) lattice points in P().
When B is a feasible set of , in the T-invariant affine chart U_B around L_B, the coordinate ring 𝒪_T·Ł(U_B) equals the semigroup algebra [_≥ 0{𝐦 - _B : 𝐦∈𝒜(L)}], which the claim implies equals [_≥ 0{(P() - _B∩ [n])∩^n}], and thus the proposition follows from <cit.> (see also <cit.>).
For the claim, we first note that 𝒜(L) is contained in P()∩^n and contains all vertices of P() because the moment polytope μ(T·Ł) equals P() by <cit.>.
The Plücker embedding OGr(n;2n+1) ↪(⋀^n ^2n+1) is given by the square 𝒪(2) of the very ample generator 𝒪(1) of the Picard group of OGr(n;2n+1).
Because homogeneous spaces are projectively normal, we find that T·Ł is isomorphic to Y_𝒜 for some subset 𝒜⊆ P()∩^n that includes all vertices of P(). But all lattice points of P() are its vertices, so 𝒜 = P() ∩^n. Therefore, the projective embedding of T·Ł given by 𝒪(2) is isomorphic to Y_2𝒜 where 2𝒜 = {𝐦 + 𝐦' : 𝐦, 𝐦' ∈𝒜}, which after translating each element by -(1,…, 1) is exactly 𝒜(L).
The polytope P() can fail to be very ample in various degrees. See <Ref> for a series of examples. In particular, the class y() may not equal [𝒪_T·Ł] when L realizes .
<Ref> also implies that the class [𝒪_T·Ł] depends only on the delta-matroid , independently of the realization L of .
The analogous statement fails when delta-matroids are considered as “type C Coxeter matroids,” a.k.a. symplectic matroids. More precisely, in <cit.>, realizations of delta-matroids are points on the Lagrangian Grassmannian LGr(n;2n) consisting of maximal isotropic subspaces with respect to the standard symplectic form on ^2n. However, in this case, the K-class of the torus-orbit-closure of a point Ł∈ LGr(n;2n) may not depend only on the delta-matroid that L realizes. See the following example. This is related to the fact that the parabolic corresponding to OGr(n; 2n+1) is minuscule, but the parabolic corresponding to LGr(n; 2n) is not.
Let ℂ^4 (with coordinates labeled by (1, 2, 1̅, 2̅)) be equipped with the standard symplectic form.
The torus T = (ℂ^*)^2 acts on ℂ^4 by (t_1, t_2) · (x_1, x_2, x_1̅, x_2̅) = (t_1 x_1, t_2x_2, t_1^-1 x_1̅, t_2^-1x_2̅).
For each z ∈ℂ, consider the 2-dimensional subspace L_z spanned by (1, 0, 1, z) and (0, 1, z, 1), which is Lagrangian.
For all z ≠± 1, every Plücker coordinate corresponding to a maximal admissible subset is nonzero.
Thus, the moment polytope μ(T· [L_z]) always equals [-1,1]^2 ⊂^2 as long as z ≠± 1.
However, when z = 0, one computes that T· [L_z]≃^1×^1, whereas T· [L_z] is a toric surface with four conical singularities when z ≠±1 and z≠ 0.
As a result, one verifies that the [𝒪_T· [L_0]] ≠ [𝒪_T· [L_3]], even as non-equivariant K-classes.
§.§ K-classes on the type B permutohedral variety
We explain how the geometry of the type B permutohedral variety X_B_n relates to the class y() on OGr(n;2n+1), which we will use to prove <Ref>.
We begin by briefly reviewing the relation between delta-matroids and X_B_n, details of which can be found in <cit.>.
Let W be the signed permutation group on [n,n̅], which is the subgroup of the permutation group 𝔖_[n,n̅] defined as
W = {w∈𝔖_[n,n̅] : w(i) = w(i) for all i∈ [n]}.
The B_n permutohedral fan Σ_B_n is the complete fan in ^n, unimodular with respect to the lattice ^n, whose maximal cones are labeled by elements of W, with the maximal cone σ_w being
_≥ 0{_w(1), _w(1)+_w(2), …, _w(1)+ _w(2)+… + _w(n)} for each w∈ W.
Let X_B_n be the (smooth projective) toric variety of the fan Σ_B_n, which contains T as its open dense torus. For each w∈ W, let pt_w be the T-fixed point of X_B_n corresponding to the maximal cone σ_w.
For toric variety conventions, we follow <cit.>.
The normal fan of a delta-matroid polytope P() is always a coarsening of Σ_B_n<cit.>. Hence, under the standard correspondence between nef toric line bundles and polytopes, the polytope P() defines a line bundle whose K-class we denote [P()] ∈ K(X_B_n). See <cit.> and <cit.> for details.
The assignment ↦ [P()] is valuative in the following sense.
For a subset S⊂^n, let _S ^n → be defined by _S(x) = 1 if x∈ S and _S(x) = 0 if otherwise. Define the valuative group of delta-matroids on [n,n̅] to be
𝕀(𝖣𝖬𝖺𝗍_n) = the subgroup of ^(^n) generated by {_P() : a delta-matroid on [n,n̅]}.
A function f on delta-matroids valued in an abelian group is valuative if it factors through 𝕀(𝖣𝖬𝖺𝗍_n).
We record the following useful consequence of <cit.>.
Let 𝒟 = { a delta-matroid on [n,n̅]: has a realization L with [𝒪_T·Ł] = y()}.
Then, the delta-matroids in 𝒟 generate both the K-ring K(X_B_n), considered as an abelian group, and the valuative group 𝕀(𝖣𝖬𝖺𝗍_n). That is, the set {[P()]: ∈𝒟} generates K(X_B_n), and the set {1_P() : ∈𝒟} generates 𝕀(𝖣𝖬𝖺𝗍_n).
We first note that the set 𝒟 includes the family of delta-matroids known as Schubert delta-matroids <cit.>. Indeed, Schubert delta-matroids are realizable <cit.>, and their base polytopes, being isomorphic to a polymatroid polytope, are normal <cit.>.
Hence, by <Ref>, the set 𝒟 includes all Schubert delta-matroids.
Now, Schubert delta-matroids generate both K(X_B_n) <cit.> and 𝕀(𝖣𝖬𝖺𝗍_n) <cit.>.
Lastly, the K-class y() relates to the geometry of X_B_n in the following way. When has a realization Ł∈ OGr(n;2n+1), there exists a unique T-equivariant map φ_L X_B_n→ OGr(n;2n+1) such that the identity point of the torus T⊂ X_B_n is mapped to Ł <cit.>. Note that its image is the torus-orbit-closure T·Ł.
The assignment ↦ y() is the unique valuative map such that y() = φ_L_*[𝒪_X_B_n] whenever has a realization L.
The assignment ↦ y() is valuative because taking the Hilbert series of the tangent cone at a chosen point is valuative. When has a realization L and P() is very ample, the map φ_L, considered as a map X_B_n→T·Ł of toric varieties, is induced by a map of tori with a connected kernel. Hence, in this case we have φ_L_*[𝒪_X_B_n] = [𝒪_T·Ł] by <cit.> and [𝒪_T·Ł] = y() by <Ref>.
The uniqueness then follows from <Ref>.
To see that y() = φ_L_*[𝒪_X_B_n] whenever has a realization L, even if P() is not very ample, we compute the pushforward using Atiyah–Bott.
First, for a maximal admissible B⊂ [n,n̅], the construction of the map φ_L shows that the fiber φ_L^-1(L_B) is
φ_L^-1(L_B) = {pt_w ∈ X_B_n^T: w∈ W such that the dual cone of
_≥ 0{P() - _B∩ [n]} contains σ_w} if B a feasible set of
∅ otherwise.
We note that because the normal fan of P() is a coarsening of Σ_B_n, for B a feasible set of , the cones {σ_w : pt_w∈φ_L^-1(L_B)} form a polyhedral subdivision of the dual cone of _≥ 0{P() - _B∩ [n]}. Now, the desired result follows from combining <cit.> and the generalized Brion's formula <cit.>, <cit.>.
One could have defined a K-class on OGr(n;2n+1) for an arbitrary delta-matroid via the formula in <Ref> instead of <Ref>.
Abusing notation, denote this alternate K-class by [𝒪_T·], even though may not be realizable.
<Ref> states that y() = [𝒪_T·] exactly when P() is very ample (with respect to ^n).
Unlike ↦ y(), the assignment ↦ [𝒪_T·] enjoys the feature that [𝒪_T·] = [𝒪_T·Ł] whenever has a realization L, but it is not valuative by <Ref>.
Moreover, <Ref> fails when [𝒪_T·] is used in place of y(), and we do not know a description of π_1*π_n^* ([𝒪_T·]· [𝒪(1)] ) in terms of known delta-matroid invariants.
See <Ref> for examples and questions about [𝒪_T·].
§ THE EXCEPTIONAL HIRZEBRUCH–RIEMANN–ROCH FORMULA
In this section, we prove Theorem <ref>.
We first construct ψ and prove that it is an isomorphism after inverting 2.
Then, we discuss how ψ relates to the isotropic tautological classes of delta-matroids constructed in <cit.>, which we use to finish the proof of <Ref>.
§.§ The isomorphism
We follow the notation and conventions in <cit.>, recalling what is necessary. For a variety with a T-action, we will denote the Chow ring and equivariant Chow ring by A^∙(X) and A_T^∙(X) respectively. We use the language of moment graphs; see <cit.> or <cit.>.
We first define the moment graph Γ associated to the T-action on X_B_n. The vertex set V(Γ) is the signed permutation group W, which indexes the torus-fixed points of X_B_n, and the edges E(Γ) are given by (w,wτ) for a transposition τ∈{(1,2),(2,3),…,(n-1,n),(n, n̅)}, indexing T-invariant ℙ^1's joining torus-fixed points of X_B_n. Denote τ_i,i+1:=(i,i+1) and τ_n:=(n,n̅). We have edge labels c(w,wτ) which are characters of T up to sign (i.e., elements of ℤ^n/± 1) by taking c(w,wτ_n)=±_w(n)∈ℤ^n/± 1 and c(w,wτ_i,i+1)=± (_w(i)-_w(i+1))∈ℤ^n/± 1, recalling the convention that _i=-_i.
By the identification of the character lattice of T with ℤ^n, we write K_T(pt)=ℤ[T_1^± 1,…,T_n^± 1] and A_T^∙(pt)=ℤ[t_1,…,t_n].
By equivariant localization we have
K_T(X_B_n)={(f_v)_v∈ V(Γ):f_i-f_j≡ 0 mod (1-∏_k=1^nT_k^c(ij)_k) for all (i, j)∈ E(Γ)}⊂⊕_v∈Γ K_T(pt),
A_T^∙(X_B_n)={(f_v)_v∈ V(Γ):f_i-f_j≡ 0 mod ∑_k=1^nc(ij)_k· t_k for all (i, j)∈ E(Γ)}⊂⊕_v∈Γ A^∙_T(pt).
Note that both compatibility conditions are invariant under c(ij)↦ -c(ij). These are algebras over the rings ℤ[T_1^± 1,…,T_n^± 1] and ℤ[t_1,…,t_n] respectively, which are identified as subrings of K_T(X_B_n) and A^∙_T(X_B_n) via the constant collections of (f_v)_v∈ V. Additionally, we have that
K(X_B_n)=K_T(X_B_n)/(T_1-1,…,T_n-1) and A^∙(X_B_n)=A^∙_T(X_B_n)/(t_1,…,t_n).
Finally, there are W-actions on K_T(X_B_n) by (w· f)_w'(T_1,…,T_n)=f_w^-1w'(T_w(1),…,T_w(n)), and on A_T(X_B_n) by (w· f)_w'(t_1,…,t_n)=f_w^-1w'(t_w(1),…,t_w(n)), where we set
T_i̅=T_i^-1 and t_i̅=-t_i.
These actions descend to the usual action of W ⊂Aut X_B_n on K(X_B_n) and A^∙(X_B_n).
There is an injective ring map
ψ_T : K_T(X_B_n)→ A^∙_T(X_B_n)[1/(1± t_i)]:=A_T^∙(X_B_n)[{1/(1-t_i),1/(1+t_i)}_1≤ i ≤ n]
obtained by
(ψ_T(f))_w(t_1,…,t_n)=f_w((1+t_1)/(1-t_1),…, (1+t_n)/(1-t_n)).
This map descends to a non-equivariant map ψ K(X_B_n)→ A^∙(X_B_n), which is injective and becomes an isomorphism after tensoring with [1/2].
Finally, ψ_T and ψ are W-equivariant in the sense that they intertwine the W-actions:
ψ_T(w· f)=w·ψ_T(f) and ψ(w· f)=w·ψ(f).
The map ψ_T is an injective ring homomorphism if it is well-defined, so we need to check that the compatibility conditions are preserved by ψ_T.
Let p(z)=(1+z)/(1-z).
* If c(ij)=±_k, then f_i(T_1,…,T_n)=f_j(T_1,…,T_n) when T_k=1. Because p(0)=1, this implies that f_i(p(t_1),…,p(t_n))=f_j(p(t_1),…,p(t_n)) when t_k=0.
* If c(ij)=± (_k-_ℓ), then f_i(T_1,…,T_n)=f_j(T_1,…,T_n) when T_k=T_ℓ. This implies that f_i(p(t_1),…,p(t_n))=f_j(p(t_1),…,p(t_n)) when t_i=t_j.
* If c(ij)=± (_k+_ℓ), then f_i(T_1,…,T_n)=f_j(T_1,…,T_n) when T_k=T_ℓ^-1. Because p(z)=p(-z)^-1, this implies that f_i(p(t_1),…,p(t_n))=f_j(p(t_1),…,p(t_n)) when t_k=-t_ℓ.
We now check that the map ψ_T descends non-equivariantly to a map ψ : K(X_B_n)→ A^∙(X_B_n). Note that under the map A^∙_T(X_B_n)→ A^∙(X_B_n) we have 1± t_i↦ 1, so there is an induced map A^∙_T(X_B_n)[1/1± t_i]→ A^∙(X_B_n). To obtain the map ψ, we have to show that under the composition K_T(X_B_n)→ A^∙_T(X_B_n)[1/1± t_i]→ A^∙(X_B_n), the ideal (T_1-1,…,T_n-1) gets mapped to 0. Indeed, ψ_T(T_i-1)=2t_i/(1-t_i), which gets mapped to 0 under the map A^∙_T(X_B_n)[1/1± t_i]→ A^∙(X_B_n) because t_i maps to 0.
We now check that ψ is an isomorphism after inverting 2. Note that under the map K_T(X_B_n)→ A^∙_T(X_B_n)[1/1± t_i][1/2], the element 1+T_i maps to the unit 2/1-t_i, and hence, by the universal property of localization, we have a map K_T(X_B_n)[1/1+T_i][1/2]→ A^∙_T(X_B_n)[1/1± t_i][1/2].
We claim that this is an isomorphism.
Indeed, first note that it is clearly injective by definition of ψ_T, so we just have to check surjectivity.
For g∈ A^∙_T(X_B_n)[1/1± t_i][1/2], it is easy to see that g_w((T_1-1)/(T_1+1),…,(T_n-1)/(T_n+1))∈ K_T(pt)[1/1+T_i][1/2], and arguing as before, we see that
w↦ g_w((T_1-1)/(T_1+1),…,(T_n-1)/(T_n+1))
gives a preimage of g in K_T(X_B_n)[1/1+T_i][1/2].
Now the ideal (T_1-1,…,T_n-1)⊂ K_T(X_B_n)[1/1+T_i][1/2] maps to the ideal (-2t_1/(1-t_1),…,-2t_n/(1-t_n))=(t_1,…,t_n)⊂ A^∙_T(X_B_n)[1/1± t_i][1/2]. Hence we obtain that ψ⊗ℤ[1/2] is the isomorphism
K(X_B_n)[1/2]=K_T(X_B_n)[1/2]/(T_1-1,…,T_n-1) =K_T(X_B_n)[1/1+T_i][1/2]/(T_1-1,…,T_n-1)
≅ A_T^∙(X_B_n)[1/1± t_i][1/2]/(t_1,…,t_n)
=A_T^∙(X_B_n)[1/2]/(t_1,…,t_n)=A^∙(X_B_n)[1/2].
Finally, we check W-equivariance. Let ϵ_i(w) equal 1 if w(i)∈{1,…,n} and -1 if w(i)∈{1̅,…,n̅}. Then for f∈ K_T(X_B_n) we verify W-equivariance of ψ_T by computing
(w·ψ_T(f))_w' =f_w^-1w'((1+t_w(1))/(1-t_w(1)),…,(1+t_w(n))/(1-t_w(n))), and
(ψ_T(w· f))_w' =f_w^-1w'(((1+ϵ_1(w)t_w(1))/(1-ϵ_1(w)t_w(1)))^ϵ_1(w),…, ((1+ϵ_n(w)t_w(n))/(1-ϵ_n(w)t_w(n)))^ϵ_n(w))
which are equal as p(z)=(1+z)/(1-z) satisfies p(z)=p(-z)^-1. The W-equivariance then descends to ψ.
Although we state the theorem above for X_B_n, we note that the only hypothesis on the moment graph Γ used in the proof up to the verification of W-equivariance is that all edge labels lie in the set {±_k:1≤ k ≤ n}∪{± (_k+_ℓ):1≤ k <ℓ≤ n}∪{± (_k-_ℓ):1≤ k<ℓ≤ n}.
The map ψ K(X_B_n) → A^∙(X_B_n) differs from the previous Hirzebruch–Riemann–Roch-type isomorphisms for X_B_n established in <cit.>, but is related as follows.
Let ϕ^B and ζ^B be the exceptional isomorphisms K(X_B_n) ∼→ A^∙(X_B_n) as in <cit.> and <cit.>.
Comparing the formulas for their T-equivariant maps, one can show that ψ is the unique ring map such that
ψ([ℒ]) = ϕ^B([ℒ])·ζ^B([ℒ]) for any T-equivariant line bundle ℒ on X_B_n.
§.§ Isotropic tautological classes
We now discuss the “isotropic tautological class” [ℐ_D]∈ K(X_B_n) of a delta-matroid , which was introduced in <cit.>. We show how this class is related to [P()] via the ψ map, which will allow us to use the relationship between [ℐ_] and interlace polynomials established in <cit.>.
By pulling back the tautological sequence 0→𝒮→𝒪_Gr(n;2n+1)^⊕ 2n+1→𝒬→ 0 involving the tautological subbundle and quotient bundle on the Grassmannian, one has a short exact sequence
0→ℐ→𝒪_OGr(n;2n+1)^⊕ 2n+1→𝒬→ 0
of vector bundles on OGr(n;2n+1).
For a realization Ł∈ OGr(n;2n+1) of a delta-matroid , pulling back the sequence via φ_L yields T-equivariant vector bundles ℐ_L and 𝒬_L on X_B_n.
In general, we have the following T-equivariant K-classes for a delta-matroid <cit.>.
Denote T_i̅ = T_i^-1 for i∈ [n], and let B_w() denote the w-minimal feasible set of for w∈ W, which is the feasible set corresponding to the vertex of P() that minimizes the inner product with any vector 𝐯 in the interior of σ_w.
For a delta-matroid on [n, n̅],
define [ℐ_] ∈ K_T(X_B_n) to be the isotropic tautological class of , given by
[ℐ_]_w = ∑_i ∈ B_w() T_i for all w∈ W.
Define [𝒬_] ∈ K_T(X_B_n) as [𝒪_X_B_n^⊕ 2n+1] - [ℐ_], that is,
[𝒬_]_w = 1 + ∑_i∈ [n, n̅]∖ B_w() T_i.
We will use the following fundamental computation relating Chern classes of isotropic tautological classes and interlace polynomials. For [ℰ]∈ K(X_B_n), let c_i(ℰ) denote its i-th Chern class, and denote by c(ℰ, q) = ∑_i ≥ 0 c_i(ℰ)q^i its Chern polynomial. Recall that γ is the class of the anti-canonical divisor on X_B_n, which is the line bundle on X_B_n corresponding to the cross polytope.
<cit.>
Let be a delta-matroid on [n, n̅]. Then
∫_X_B_n c(ℐ_^∨, v) ·1/(1 - γ) = (1 + v)^n Int_((1 - v)/(1 + v)).
Many constructions using isotropic tautological classes are valuative (cf. <cit.>), which is often useful when combined with <Ref>.
Any function that maps a delta-matroid to a fixed polynomial expression in the exterior powers of [ℐ_] or [𝒬_] or their duals is valuative, and similarly for a fixed polynomial expression in the Chern classes of [ℐ_] or [𝒬_].
Let ℤ^2^[n, n̅] be the free abelian group with basis given by subsets of [n, n̅]. By <cit.> (see also <cit.>), the function
{delta-matroids on [n, n̅]}→⊕_w ∈ Wℤ^2^[n, n̅] given by ↦∑_w ∈ W_B_w()
is valuative. Any such polynomial expression depends only on B_w() for each w ∈ W, and so it factors through this map and is therefore valuative.
We also note the following property of Chern classes of [ℐ_] and [𝒬_].
Let be a delta-matroid. Then c(ℐ_) = c(𝒬_^∨) and c(ℐ_) c(ℐ_^∨) = 1.
We claim that one has the following short exact sequence of vector bundles
0 →ℐ→𝒬^∨→𝒪_OGr(n;2n+1)→ 0.
The claim implies the proposition for realizable delta-matroids, and by valuativity (Theorem <ref> and Lemma <ref>), for all delta-matroids.
For the claim, let b be the map ^2n+1→ (^2n+1)^∨ given by the bilinear pairing of the quadratic form q, that is, b(x) y ↦ q(x+y) - q(x) - q(y).
Note that if L ⊆^2n+1 is isotropic, then b(L) ⊆ (^2n+1/L)^∨⊆ (^2n+1)^∨, since b(ℓ)(ℓ') = q(ℓ+ℓ') - q(ℓ) - q(ℓ') = 0 for all ℓ, ℓ'∈ L.
When char≠ 2, the map b is an isomorphism, and when char = 2, its kernel is span(_0), which is not isotropic.
Hence, the map b gives
an injection of vector bundles 0→ℐ→𝒬^∨, whose quotient line bundle is necessarily trivial because ℐ≃𝒬^∨ from (<ref>).
Alternatively, one can prove the proposition via localization as follows. In K_T(X_B_n), we have that [ℐ_] +1 = [𝒬_^∨], which gives that c(ℐ_) = c(𝒬_^∨), and therefore that c(ℐ_^∨) = c(𝒬_). Because [ℐ_] + [𝒬_] = [𝒪_X_B_n^⊕ 2n+1], we have that c(ℐ_) c(𝒬_) = 1, and substituting gives the result.
In order to prove Theorem <ref>, it remains to prove the Hirzebruch–Riemann–Roch-type formula. We prepare by doing the following computation, which will be used in the proof of Theorem <ref> as well.
Let be a delta-matroid. Then ψ([P()]) = c(ℐ_^∨).
The class in K_T(X_B_n) defined by the line bundle corresponding to P() under the usual correspondence between polytopes and nef toric line bundles on a toric variety has
[P()]_w = ∏_i ∈ B_w() T_i̅.
Therefore, we see that
ψ^T([P()])_w = ∏_a ∈ B_w() ∩ [n](1 -t_a)/(1 + t_a)·∏_a̅∈ B_w() ∩ [n̅](1 + t_a)/(1 - t_a).
On the other hand, by the definition of [ℐ_] and [𝒬_], we have that
c^T(ℐ_)_w = ∏_i ∈ B_w()(1 + t_i), and c^T(𝒬_)_w = ∏_i ∈ B_w()(1 - t_i).
We see that ψ^T([P()]) = c^T(𝒬_)/c^T(ℐ_). Because c(ℐ_^∨) = c(ℐ_)^-1 = c(𝒬_) by Proposition <ref>, we get that
ψ([P()]) = ψ([P()]^2) = c(ℐ_^∨)^2.
In a graded ring, a class which has degree zero part equal to 1 has at most one square root with degree zero part equal to 1. Using this, we conclude that ψ([P()]) = c(ℐ_^∨).
We have already constructed ψ, so it suffices to show that, for any [ℰ] ∈ K(X_B_n),
χ(X_B_n, [ℰ]) = 1/2^n∫_X_B_nψ([ℰ]) ·1/(1 - γ).
By Theorem <ref>, K(X_B_n) is spanned by the classes [P()] for a delta-matroid, so it suffices to check this for [ℰ] = [P()]. Note that χ(X_B_n, [P()]) is the number of lattice points in P(), which is the number of feasible sets of . It follows from Proposition <ref> that 1/2^n∫_X_B_n c(ℐ_^∨) ·1/(1 - γ) is the number of feasible sets of as well, so the result follows from Proposition <ref>.
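As a quick sanity check (our illustration, not from the paper), take n = 1 and the delta-matroid with feasible sets {1} and {1̅}, realized by a generic point of OGr(1;3). Then X_B_1≅ℙ^1, the anti-canonical class is γ = 2h with h the class of a point, ℐ_≅𝒪_ℙ^1(-2) (the restriction of 𝒪(-1) to the conic OGr(1;3)⊂ℙ^2), and ψ([P()]) = c(ℐ_^∨) = 1 + 2h by the proposition above. Since γ^2 = 0, we have 1/(1-γ) = 1 + 2h, so
1/2∫_X_B_1ψ([P()]) ·1/(1 - γ) = 1/2∫_ℙ^1(1 + 2h)(1 + 2h) = 1/2· 4 = 2,
which is indeed the number of feasible sets of , i.e. the number of lattice points of P() = [0,1].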
§ THE PUSH-PULL COMPUTATION
Our strategy to prove Theorem <ref> is based on transferring the computation of π_1*π_n^* (y() · [𝒪(1)]) to a computation on OGr(n; 2n+1). This idea first appeared in <cit.> and was also used in <cit.>. This is implemented in Proposition <ref>. We then reduce the computation to a computation on X_B_n, following the strategy in <cit.>.
For ϵ∈ K(OGr(n; 2n+1)), define a polynomial
R_ϵ(v) = ∑_i ≥ 0χ(OGr(n; 2n+1), ϵ· [ ⋀^i 𝒬^∨]) v^i.
Then π_1*π_n^* ϵ = R_ϵ(u - 1) ∈ K(ℙ^2n), where u = [𝒪_H] ∈ K(ℙ^2n) is the class of the structure sheaf of a hyperplane H⊂^2n.
We prove the claim in a slightly more general setting: Let X be a variety with a short exact sequence of vector bundles 0→𝒮→𝒪_X^⊕ N→𝒬→ 0. Let _X(𝒮) = ProjSym^∙𝒮^∨ be the projective bundle with the projection π : _X(𝒮) → X and the inclusion _X(𝒮) ↪ X ×^N-1. Let ρ : _X(𝒮) →^N-1 be the composition _X(𝒮) ↪ X ×^N-1→^N-1. We claim that for ϵ∈ K(X), one has
∑_i ≥ 0χ(X, ϵ· [ ⋀^i𝒬^∨])(u-1)^i = ρ_* π^* ϵ,
where u is the class of the structure sheaf of a hyperplane in ^N-1.
To prove the claim, since K(^N-1) ≃[u]/( u^N), and since χ(^N-1, u^k) is equal to 1 if 0≤ k ≤ N-1 and is equal to 0 if k ≥ N, we first note that
ξ = ∑_i ≥ 0χ(ℙ^N-1, ξ· u^N -1- i· (1 - u)) u^i for ξ∈ K(ℙ^N-1).
We consider the polynomial
∑_i ≥ 0χ( ^N-1, ρ_* π^* ϵ· u^N-1-i(1-u)) v^i = χ( ^N-1, ρ_* π^* ϵ· v^N·(1 - u)/v·1/(1- u v^-1))
= v^N χ( ^N-1, ρ_* π^* ϵ·1/(1+ (1-u)^-1(v-1))).
Letting λ = (1 - u)^-1 = [𝒪(1)] ∈ K(^N-1) and substituting v with v+1, the right-hand-side becomes
(v+1)^N χ( ^N-1, ρ_* π^* ϵ·1/(1+λ v)) = (v+1)^N χ( X, ϵ·π_* ρ^* (1/(1+λ v))),
where the equality is due to the projection formula in K-theory.
Thus, to finish we need to show
(v+1)^N π_* ρ^* (1/(1+λ v)) = ∑_i ≥ 0 [ ⋀^i𝒬^∨]v^i.
But this follows by combining the following three facts from <cit.> and <cit.>:
* We have π_*ρ^*(λ^i) = [Sym^i 𝒮^∨] for all i≥ 0.
* We have (∑_i ≥ 0[⋀^i 𝒮^∨]v^i) (∑_i≥ 0 [⋀^i 𝒬^∨]v^i ) = (v+1)^N from the dual short exact sequence 0→𝒬^∨→ (𝒪_X^⊕ N)^∨→𝒮^∨→ 0.
* We have (∑_i≥ 0 (-1)^i[Sym^i𝒮^∨]v^i)(∑_i≥ 0[ ⋀^i 𝒮^∨]v^i) = 1 from the exactness of the Koszul complex ⋀^∙𝒮^∨⊗Sym^∙𝒮^∨→𝒪_X → 0.
Lastly, the desired result follows from the general claim by setting X = OGr(n;2n+1) and 𝒮 = ℐ, since OFl(1,n;2n+1) = _OGr(n;2n+1)(ℐ).
Before proving Theorem <ref>, we make one more preparatory computation.
Let be a delta-matroid. Then
ψ(∑_p ≥ 0 [∧^p 𝒬^∨_]v^p) = (v + 1)^n+1· c(ℐ_, (v-1)/(v+1)) · c(ℐ_).
We compute equivariantly. We have that
∑_p ≥ 0 [∧^p 𝒬^∨_]_w v^p = (1 + v) ∏_i ∈ B_w()(1 + T_i v),
see, e.g., <cit.>. Therefore, we get that
ψ^T(∑_p ≥ 0 [∧^p 𝒬^∨_]v^p)_w = (1 + v) ∏_i ∈ B_w()(1 + (1 + t_i)/(1 - t_i)· v )
= (1 + v)^n+1∏_i ∈ B_w()(1 + t_i(v - 1)/(v+1)) ·∏_i ∈ B_w()1/(1 -t_i)
= (1 + v)^n+1· c^T(ℐ_, (v-1)/(v+1)) · c^T(ℐ_^∨)^-1.
As c(ℐ_^∨)^-1 = c(ℐ_) by Proposition <ref>, the result follows.
By Proposition <ref>, we need to show that
R_y() · [𝒪(1)](v) := ∑_p ≥ 0χ(OGr(n; 2n+1), y() · [𝒪(1)] · [∧^p 𝒬^∨]) v^p = (v + 1) Int_(v).
The left-hand-side is valuative by <Ref>, and the right-hand-side also by <cit.>.
Thus, by <Ref>, it suffices to verify this equality when has a realization Ł∈ OGr(n; 2n+1) such that y() = [𝒪_T ·Ł].
As in the proof of <Ref>, in this case we have a toric map φ_L X_B_n→T·Ł such that φ_L_*[𝒪_X_B_n] = y(), and by construction φ_L^*[𝒪(1)] = [P()] and φ_L^*[∧^p 𝒬^∨] = [∧^p 𝒬_^∨]. Hence, by the projection formula, we have that
R_y() · [𝒪(1)](v) =
∑_p ≥ 0χ(X_B_n, [P()] · [∧^p 𝒬_^∨]) v^p.
Applying Theorem <ref> and Proposition <ref>, we get that
R_y() · [𝒪(1)](v) = 1/2^n∫_X_B_n1/(1 - γ)· c(ℐ_^∨) · (v + 1)^n+1· c(ℐ_, (v - 1)/(v + 1)) · c(ℐ_)
=(v + 1)^n+1/2^n∫_X_B_n1/(1 - γ)· c(ℐ_, (v - 1)/(v + 1))
= (v + 1) Int_(v).
In the second line we used Proposition <ref>, and in the third line we used Proposition <ref>.
§ STRUCTURE SHEAVES OF ORBIT CLOSURES
We noted in <Ref> that, using the formula in <Ref>, one may assign a K-class [𝒪_T ·] to a delta-matroid , different from y(). It has the feature that [𝒪_T ·] = [𝒪_T ·Ł] whenever has a realization Ł∈ OGr(n;2n+1).
Here, we collect various examples and questions about this K-class.
The Macaulay2 code used for the computation of these examples can be found at <https://github.com/chrisweur/KThryDeltaMat>.
A database of small delta-matroids can be found at <https://eprints.bbk.ac.uk/id/eprint/19837/> <cit.>.
We start with the smallest example where y() ≠ [𝒪_T ·].
Let L ⊂^7 be the maximal isotropic subspace given by the row span of the matrix
[ 1 0 0 0 a b 0; 0 1 0 -a 0 c 0; 0 0 1 -b -c 0 0 ]
for a, b, c generic elements of . Then the delta-matroid represented by L has feasible sets
{1, 2, 3}, {1, 2̅, 3̅}, {1̅, 2, 3̅}, {1̅, 2̅, 3}.
The stabilizer of Ł is {(1,1,1), (-1, -1, -1)}⊂ T, so the map X_B_3→T ·Ł is a double cover.
This implies that y() ≠ [𝒪_T ·Ł]. Alternatively, one can verify that P() is not very ample with respect to ^3 and then use <Ref>.
We have π_1*π_n^* ([𝒪_T ·Ł] · [𝒪(1)]) = R_[𝒪_T ·Ł] · [𝒪(1)](u-1) by Proposition <ref>.
A computer computation shows that
R_[𝒪_T ·Ł] · [𝒪(1)](v) = 4v^2 + 8v + 4 = (v + 1)Int_(v).
In other words, here <Ref> holds with [𝒪_T ·Ł] in place of y() although y() ≠ [𝒪_T ·Ł].
Let us say that a delta-matroid has property (<ref>) if <Ref> holds with [𝒪_T ·] in place of y(), that is, by <Ref>, if
*
R_[𝒪_T ·] · [𝒪(1)](v) = (v + 1)Int_(v).
We now feature an example where (<ref>) fails.
Let be the delta-matroid with feasible sets
{1̅, 2̅, 3̅, 4̅}, {1, 2̅, 3̅, 4̅}, {1̅, 2, 3̅, 4̅}, {1̅, 2̅, 3, 4̅}, {1̅, 2̅, 3̅, 4}, {1̅, 2, 3, 4}, {1, 2̅, 3, 4}, {1, 2, 3̅, 4}, {1, 2, 3, 4̅}.
A computer computation shows that (v + 1)Int_(v) = 9 + 16v + 7v^2, but
R_[𝒪_T ·] · [𝒪(1)] (v) = 9 + 16v + 6v^2 - v^3 + v^4 + v^5.
A computer search shows that <Ref> is the only delta-matroid up to n=4 that fails (<ref>).
The delta-matroids in the above two examples differ in the following ways. The delta-matroid in <Ref>
* is realizable,
* is even in the sense that the parity of |B∩ [n]| is constant over all feasible sets B, and
* has the polytope P() very ample with respect to the lattice (affinely) generated by its vertices.
The last property, when has a realization Ł, is equivalent to stating that T·Ł is a normal variety.
All three properties fail for the delta-matroid in <Ref>. We thus ask:
When does <Ref> hold with [𝒪_T ·] in place of y()? More specifically, is (<ref>) satisfied when
* is realizable?
* is an even delta-matroid?
* the polytope P() is very ample with respect to the lattice (affinely) generated by its vertices?
We expect (<ref>) to fail for some realizable delta-matroid, but do not know any examples.
We conclude with the following realizable even delta-matroid example.
Let G be a graph on vertices [7] with edges {12,13,23,34,45,56,57,67}. Let A(G) be its adjacency matrix, considered over 𝔽_2 so that it is skew-symmetric with zero diagonal entries. Let be the delta-matroid realized by the row span of the 7× (7+7+1) matrix [ A | I_7 | 0 ]. That is, its feasible sets are
{maximal admissible subsets B ⊂ [7,7̅] such that the principal minor
of A(G) corresponding to the subset B∩ [7] is nonzero}.
The polytope P() is not very ample with respect to the lattice (affinely) generated by its vertices, demonstrated as follows.
One verifies that P() contains the origin, and the semigroup _≥ 0{P() ∩^7} is generated by
{_12,_13,_23,_34,_45,_56,_57,_67}.
In the intersection of the cone _≥ 0{P()} and the lattice {P() ∩^7}, we have the point
(1,1,1,0,1,1,1) = 1/2(_12+_13 + _23) + 1/2(_56+_57 + _67) = _13 + _23 - _34 + _45 + _67,
but this point is not in the semigroup _≥ 0{P() ∩^7}. In particular, the torus-orbit-closure is not normal.
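This non-membership can also be confirmed by a brute-force check (ours, not the paper's Macaulay2 code): each generator has coordinate sum 2 while the target has coordinate sum 6, so any representation would have to use exactly three generators, and none of the finitely many choices works.

from itertools import combinations_with_replacement

gens = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (5, 6), (5, 7), (6, 7)]  # the edges of G, i.e. the generators e_ij
target = (1, 1, 1, 0, 1, 1, 1)

def vec(edge):
    v = [0] * 7
    for i in edge:
        v[i - 1] = 1
    return tuple(v)

# any nonnegative-integer representation must use exactly three generators (with repetition)
hits = [c for c in combinations_with_replacement([vec(g) for g in gens], 3)
        if tuple(map(sum, zip(*c))) == target]
print(hits)  # [] -- the point is not in the semigroup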
Nonetheless, this even delta-matroid satisfies (<ref>): a computer computation shows that
R_[𝒪_T ·] · [𝒪(1)] (v) = 32 + 92v+ 92v^2 + 36v^3+ 4v^4 = (v + 1)Int_(v) .
|
http://arxiv.org/abs/2307.02901v1
|
20230706102140
|
Asymptotic degeneracies of M2-brane SCFTs
|
[
"Hirotaka Hayashi",
"Tomoki Nosaka",
"Tadashi Okazaki"
] |
hep-th
|
[
"hep-th"
] | |
http://arxiv.org/abs/2307.00768v1
|
20230703061503
|
A family of infinite degree tt-rings
|
[
"Juan Omar Gómez"
] |
math.RT
|
[
"math.RT",
"math.AT",
"math.CT",
"18G80"
] |
A family of infinite degree tt-rings
Juan Omar Gómez
August 1, 2023
====================================
We construct a family of infinite degree tt-rings, giving a negative answer to an open question by P. Balmer.
§ INTRODUCTION
The abstraction of tensor-triangular geometry makes it possible to connect ideas and unify techniques in the study of tensor triangulated categories arising in different areas of mathematics, including algebraic geometry, commutative algebra, modular representation theory and stable homotopy theory. We refer to <cit.> for an account of standard tensor triangulated categories.
A tt-ring is a separable commutative algebra object in a tensor triangulated category. These objects play an important role in tensor-triangular geometry. For instance, the Eilenberg-Moore category of modules over a tt-ring remains a tensor triangulated category, and extension of scalars is a tt-functor. The notion of degree introduced by Balmer in <cit.> has been successfully exploited in many applications: notably, in establishing a connection between the Going-Up Theorem and Quillen's Stratification Theorem (see <cit.>) and generalizing the étale topology (see <cit.>), both in the setting of tensor-triangular geometry.
Remarkably, all tt-rings in standard tensor triangulated categories have finite degree <cit.>. It is an open question in <cit.> whether the degree of a tt-ring must always be finite. We provide a family of infinite degree tt-rings, giving a negative answer to this question. In fact, this family extends to a family of infinite degree rigid-compact tt-rings in the framework of rigidly-compactly generated tensor triangulated categories.
§ INFINITE DEGREE TT-RINGS
For i∈ℕ, let _i be a non-trivial essentially small tensor triangulated category. Define
:=∏_i∈ℕ_i.
It is clear that this product category is essentially small; the product of small skeletons in each component defines a small skeleton of it. We give it a triangulated structure and a symmetric monoidal structure, both defined component-wise. In particular, it is a non-trivial essentially small tensor triangulated category.
Let the categories be as above, and let 1_n denote the monoidal unit of the n-th factor. Then the tt-ring
A:=(1_n^× n)_n∈ℕ
in the product category has infinite degree with the component-wise tt-ring structure.
It is clear that A is a tt-ring with component-wise multiplication, and a component-wise bilinear section. On the other hand, by the definition of , the projection functor
pr_n→_n
is a tensor triangulated functor for each n≥1. In particular, pr_n(A)= 1_n^× n, which has finite degree n (see <cit.>). Then A has infinite degree, since otherwise we would contradict <cit.>.
By <cit.>, it follows that there exists a prime in such that the tt-ring q_ (A) has infinite degree in _. Therefore placing the adjective local on an essentially small tensor triangulated category is not enough to guarantee that tt-rings have finite degree.
At first glance, our example of a tt-ring of infinite degree seems to live in an artificial tensor triangulated category. However, it is possible to find this type of example in practice, for instance in the study of stable module categories for infinite groups.
Let CAlg(Pr^L_st) denote the ∞-category of stable homotopy theories, that is, presentable, symmetric monoidal, stable ∞-categories with cocontinuous tensor product in each variable[Also known as stable homotopy theories.]. Let 2-Ring denote the ∞-category of essentially small, symmetric monoidal, stable ∞-categories with exact tensor product in each variable. We refer to <cit.> for further details about these ∞-categories.
Let G be the fundamental group of the following graph of finite groups,
[diagram: a graph of groups with vertex groups G_1, G_2, …, and all edge groups trivial]
where G_n is a non-trivial finite group, for n≥1. In other words, the group G corresponds to the free product of the groups G_n. In particular, G is a group of type Φ (see <cit.>). By <cit.>, the stable module ∞-category StMod(kG) (see <cit.>) decomposes in terms of the above graph of groups, that is, we have an equivalence
StMod(kG)≃∏_n∈ℕStMod(kG_n)
in CAlg(Pr^L_st). Note that dualizable objects in StMod(kG) are detected component-wise via this equivalence. In other words, we have a similar decomposition in 2-Ring for the dualizable part of StMod(kG), i.e., the symmetric monoidal, stable ∞-category on the dualizable objects of StMod(kG). Moreover, this factorization induces a product decomposition at the level of homotopy categories. Hence the homotopy category of the dualizable part of StMod(kG) satisfies the hypothesis of Theorem <ref>.
In practice, essentially small tensor triangulated categories arise as the dualizable part of a bigger tensor triangulated category which, for instance, admits small coproducts, just as in Example <ref>. Then we can consider tt-rings in a tensor triangulated category which sits inside a bigger one. In particular, the framework of rigidly-compactly generated tensor triangulated categories has been extensively studied (see for instance <cit.>). In fact, all tt-rings that have been proved to have finite degree in <cit.> sit in the dualizable part of a rigidly-compactly generated tensor triangulated category, so we might think these are the conditions we should impose on a tensor triangulated category to guarantee that any tt-ring has finite degree. We will see in Example <ref> that this is not the case.
Recall that an object x in a triangulated category with small coproducts is compact if the functor Hom(x,-) commutes with small coproducts. In particular, the subcategory ^c of compact objects remains triangulated.
A tensor triangulated category is rigidly-compactly generated if ^c is essentially small, the smallest triangulated subcategory containing ^c which is closed under small coproducts is , and the class of compact objects coincides with the class of dualizable objects. In this case, ^c remains tensor triangulated.
For a general tensor triangulated category with small coproducts, compact objects are not necessarily dualizable, and vice versa, dualizable objects are not necessarily compact. However, if the monoidal unit of is compact, then any dualizable object in is compact. This follows from the fact that a dualizable object x and its dual y determine adjoint functors x⊗-⊣ y⊗-.
For i∈ℕ, let _i be a non-trivial rigid 2-ring. Define :=∏_i∈ℕ_i in 2-Ring. Note that is a rigid 2-ring. Let ℒ denote the Ind-completion of which lies in CAlg(Pr^L_st) (see <cit.>). In particular, the compact objects of ℒ are precisely the elements of . Since the inclusion functor
↪ℒ
is strongly monoidal, we deduce that any compact element in ℒ is dualizable. Therefore the homotopy category of ℒ is a rigidly-compactly generated tensor triangulated category. In particular, we can construct a tt-ring in the dualizable part of ℒ, just as in Theorem <ref>, which has infinite degree.
Acknowledgments. I deeply thank my supervisor José Cantarero for his support and for many interesting conversations on this work. I thank Paul Balmer and Luca Pol for helpful comments on this project. This work is part of the author's PhD thesis.
|
http://arxiv.org/abs/2307.05394v1
|
20230705135303
|
Reinforcement learning-guided long-timescale simulation of hydrogen transport in metals
|
[
"Hao Tang",
"Boning Li",
"Yixuan Song",
"Mengren Liu",
"Haowei Xu",
"Guoqing Wang",
"Heejung Chung",
"Ju Li"
] |
cond-mat.mtrl-sci
|
[
"cond-mat.mtrl-sci",
"physics.comp-ph"
] |
Department of Materials Science and Engineering, Massachusetts Institute of Technology, MA 02139, USA
Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Department of Physics, Massachusetts Institute of Technology, MA 02139, USA
Department of Materials Science and Engineering, Massachusetts Institute of Technology, MA 02139, USA
Department of Materials Science and Engineering, Massachusetts Institute of Technology, MA 02139, USA
Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Department of Materials Science and Engineering, Massachusetts Institute of Technology, MA 02139, USA
[email protected]
Department of Materials Science and Engineering, Massachusetts Institute of Technology, MA 02139, USA
Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Atomic diffusion in solids is an important process in various phenomena. However, atomistic simulations of diffusion processes are confronted with the timescale problem: the accessible simulation time is usually far shorter than that of experimental interests. In this work, we developed a long-timescale method using reinforcement learning that simulates diffusion processes.
As a testbed, we simulate hydrogen diffusion in pure metals and a medium entropy alloy, CrCoNi, getting hydrogen diffusivity reasonably consistent with previous experiments. We also demonstrate that our method can accelerate the sampling of low-energy configurations compared to the Metropolis-Hastings algorithm using hydrogen migration to copper (111) surface sites as an example.
Reinforcement learning-guided long-timescale simulation of hydrogen transport in metals
Ju Li
August 1, 2023
=======================================================================================
§ INTRODUCTION
Diffusional atomic motion is an essential microscopic process in the kinetics of materials <cit.>. Various interesting phenomena and applications are rooted in diffusion-related processes, from the interdiffusion at metal interfaces, vacancy and void formation, to hydrogen embrittlement <cit.> and resistance switching in oxide memristors <cit.>. One important tool to study the diffusion process is atomistic simulation <cit.>, which can simulate a wide range of materials phenomena <cit.>. However, a critical challenge of atomistic simulation of diffusion-related process is the timescale problem <cit.>: the atomic vibration has a timescale of fs - ps; however, the diffusion-related transitions between adjacent energy minima have orders of magnitude larger timescale. That is because the energy barriers on the diffusion pathway slow down the diffusion process <cit.>. The timescale problem limits most of the straightforward molecular dynamics simulations to nanoseconds, which fall short of the timescales relevant to many diffusion-related phenomena <cit.>. Therefore, different methods are needed to deal with the long-timescale problem <cit.>.
Our work is based on one of the widely studied algorithms, the kinetic Monte Carlo (KMC) method <cit.>, where one directly works with diffusion timescale without explicitly showing the vibration timescale motion. Traditional KMC (in contrast with off-lattice KMC) requires energy minima and transition pathways (the so-called event table) as input.
However, as the diffusion pathway is sometimes counter-intuitive, correctly determining the necessary input information of KMC is not a trivial task <cit.>.
To conduct a simulation without a known event table, the off-lattice KMC is developed <cit.>. The algorithm conducts saddle point searches to obtain the diffusion pathways along with the KMC simulation. Another method reported to have advantageous efficiency is temperature accelerated dynamics (TAD), where the transition pathways are explored by high-temperature molecular dynamics <cit.>. In both methods, the transition pathway is explored by random sampling (random initial guess in the saddle point search for off-lattice KMC, and random thermal motion for TAD). However, as the configuration space is high-dimensional, it requires a large amount of random sampling to be confident that the correct transition pathway is obtained, which limits the simulation system size and accessible timescale <cit.>.
In this work, we developed a reinforcement learning (RL) method that guides the transition pathway sampling in off-lattice KMC to simulate long-timescale diffusion processes. Instead of searching for all nearby saddle points along randomly sampled initial directions <cit.>, we use a parameterized neural network model to guide the saddle-point search. The model can predict the direction of atomic motion that yields the high-probability transition pathway. That avoids the repeated saddle-point searches, which are the most significant contributor to the computational cost of the off-lattice KMC. We demonstrate that our RL model can either simulate physical diffusion trajectories or sample low-energy configurations in complex energy landscapes by simulating the hydrogen diffusion in alloys and metal surfaces.
§ RESULTS
Here, we briefly describe our RL method, as illustrated in Fig. <ref>a. In atomic diffusion, the energy landscape has a large number of local minima separated by transition energy barriers. In this paper, we use hydrogen diffusion in face-centered cubic (FCC) alloys as an example, as shown in Fig. <ref>b. In the local energy minimum configurations of FCC bulk structures, hydrogen atoms reside in octahedral and tetrahedral interstitial sites shown as the deep blue and shallow green potential wells in Fig. <ref>c, where the octahedral site has lower energy. The energy landscape is provided by a universal neural network PreFerred Potential (PFP) <cit.> throughout this paper. Beginning from a given local energy minimum configuration or “state" s_t = (r⃗_1,r⃗_2,⋯ ,r⃗_N) (the orange circles in Fig. <ref>, where r⃗_i is the coordinates of the ith atom), a set of possible transition displacements {a_ti} (also called “actions") are first identified. In our problem, this is realized by identifying the polyhedron surrounding each hydrogen atom formed by its metal neighbors where possible actions are defined by translations through all face centers of the polyhedron (See section 4.A for details).
In the next step, an action a_t is selected from the action space 𝒜_s_t≡{a_ti} based on the atomic descriptor 𝒟 of the configuration s_t. The probability of selecting each action a, π_θ (a|s_t), is given by the Boltzmann policy based on a neural-network value function Q_θ (s,a) <cit.>:
π_θ (a|s_t) = e^Q_θ (s_t,a)/k_ BT/∑_a'∈𝒜_s_te^Q_θ (s_t,a')/k_ BT,
where θ represents the model parameters, k_ B and T are the Boltzmann constant and temperature. Q_θ (s,a) can also depend on T if the vibrational entropy contribution is considered, which will be discussed later.
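To make the action-selection step concrete, a minimal NumPy sketch of the Boltzmann policy and of action sampling is given below; the Q values passed in are hypothetical placeholders standing in for the output of the trained network Q_θ, not results of this work.

import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant in eV/K


def boltzmann_policy(q_values, temperature):
    """Return pi(a|s): softmax of Q_theta(s,a)/(k_B*T) over the available actions."""
    q = np.asarray(q_values, dtype=float)
    logits = (q - q.max()) / (K_B * temperature)  # subtract the max for numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()


def select_action(q_values, temperature, rng=None):
    """Sample an action index according to the Boltzmann policy."""
    rng = np.random.default_rng() if rng is None else rng
    probs = boltzmann_policy(q_values, temperature)
    return rng.choice(len(probs), p=probs)


# Example: three candidate displacements with hypothetical Q values (in eV) at 300 K.
print(boltzmann_policy([-0.50, -0.55, -0.70], temperature=300.0))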
After selecting an action a_t = (i,v⃗), the ith atom is displaced by vector v⃗ across the energy barrier. The system is then relaxed to the next state, s_t+1, using the MDMin algorithm implemented in the Atomistic Simulation Environment <cit.>. Parameters of the transition, including the transition energy barrier E_b^ NEB, the attempt frequency ν_a^ NEB, and the energy change after the transition Δ E, can then be estimated using the NEB method <cit.> setting s_t and s_t+1 as the initial and final points. The reward of this transition, r_t, is designed to encourage either reproducing transition probabilities of the harmonic transition state theory (HTST) <cit.> or an energy minimization strategy, which will be discussed in the next part. The whole simulation trajectory is produced by repeating the above scheme that generates the next state according to the current state.
The Q_θ (s, a) model is constructed based on the DeepPot-SE sub-networks <cit.>. As the atomic interaction in alloys is short-range, we assume Q_θ (s, a=(i,v⃗)) is a function of the atomic environment of the moved atom i and its displacement v⃗. The descriptor 𝒟^i should be equivariant under translation, rotation, and permutation symmetry operations of the atomic system, realized by the following construction:
R̃^i =
[
r̂⃗̂_i1·r̂⃗̂_i1 ⋯ r̂⃗̂_i1·r̂⃗̂_iM r̂⃗̂_i1·v⃗
r̂⃗̂_i2·r̂⃗̂_i1 ⋯ r̂⃗̂_i2·r̂⃗̂_iM r̂⃗̂_i2·v⃗
⋮ ⋮ ⋮
r̂⃗̂_iM·r̂⃗̂_i1 ⋯ r̂⃗̂_iM·r̂⃗̂_iM r̂⃗̂_iM·v⃗
v⃗·r̂⃗̂_i1 ⋯ v⃗·r̂⃗̂_iM |v⃗|^2
],
𝒟^i_kl = ∑_m,n=1^M+1 G^1_k(f_c(r_im),c_m)R̃^i_mnG^2_l(f_c(r_in),c_n),
where r̂⃗̂_ij≡f_c(r_ij)r⃗_ij/r_ij, r⃗_ij≡r⃗_j - r⃗_i, r_ij≡ |r⃗_ij|, j=1,2,⋯ ,M goes through all atoms around the ith atom within a cut-off radius r_c. f_c(r) is a cutoff function as defined in Ref. <cit.>, which goes smoothly to zero at a cutoff radius r_c, and G_k^1 and G_l^2 are embedding neural networks parametrized by θ_ emb. c_m (m = 1,2,⋯, M) are the atomic species of the mth atom, and we set c_M+1 as a unique “action species". The descriptor 𝒟^i is invariant under all symmetry operations. The descriptor is then flattened to a vector and passed to a multilayer perceptron (MLP) that outputs the Q function: Q_θ (s, a=(i,v⃗)) = MLP_θ_ fit(𝒟^i(θ_ emb)), where the model parameters θ = (θ_ fit, θ_ emb) includes both parameters of the MLP θ_ fit and that of the embedding network θ_ emb (see section 4.B for detailed parameter settings).
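The assembly of R̃^i and its contraction into 𝒟^i can be sketched as follows. The cutoff function and the embedding outputs G^1, G^2 are illustrative placeholders (a cosine cutoff and random matrices), not the trained DeepPot-SE sub-networks, and the species dependence of the embeddings is omitted.

import numpy as np


def cutoff(r, r_c):
    """Placeholder smooth cutoff f_c(r): 1 at r=0 and 0 beyond r_c."""
    x = np.clip(r / r_c, 0.0, 1.0)
    return 0.5 * (np.cos(np.pi * x) + 1.0)


def build_R_tilde(r_i, neighbors, v, r_c):
    """Assemble the (M+1)x(M+1) matrix of pairwise products of the weighted
    unit vectors r_hat_ij = f_c(r_ij) r_ij / |r_ij| and the action vector v."""
    rij = neighbors - r_i
    dij = np.linalg.norm(rij, axis=1)
    r_hat = (cutoff(dij, r_c) / dij)[:, None] * rij
    rows = np.vstack([r_hat, v])          # the last generalized coordinate is the action v
    return rows @ rows.T                  # entries r_hat.r_hat, r_hat.v and |v|^2


def descriptor(R_tilde, G1, G2):
    """D^i_{kl} = sum_{m,n} G1[m,k] * R_tilde[m,n] * G2[n,l]."""
    return G1.T @ R_tilde @ G2


rng = np.random.default_rng(0)
neighbors = rng.normal(scale=2.0, size=(12, 3))      # M = 12 metal neighbors (toy data)
v = np.array([0.0, 0.0, 1.2])                        # proposed displacement in Angstrom
R = build_R_tilde(np.zeros(3), neighbors, v, r_c=4.0)
G1 = rng.normal(size=(13, 12))                       # placeholder embedding outputs, shape (M+1, 12)
G2 = G1[:, :3]                                       # first quarter of the columns, as in Sec. 4.B
D = descriptor(R, G1, G2).ravel()                    # flattened input to the fitting MLP
print(D.shape)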
By choosing different reward functions, our method has two working modes: transition kinetics simulator (TKS) and low-energy states sampler (LSS). The TKS aims to simulate physical transition probabilities according to HTST, and the LSS aims to converge to global energy minimum configurations.
The TKS adopts the reward function of
r_t = -E_b^ NEB + k_ BTlogν_a^ NEB , ν_a^ NEB = ∏_i=1^3Mν_i/∏_j=1^3M-1ν_j^*,
where ν_i and ν_j^* are the ith normal mode vibration frequency at state s_t and the jth positive vibration frequency at the transition saddle point between s_t and s_t+1. The model is trained as a contextual bandit problem <cit.>, where the value function Q_θ (s_t,a_t) is trained to fit the instantaneous reward r_t (minimizing ⟨ (Q_θ (s_t,a_t)-r_t)^2⟩). Then, as Γ_s_ta=ν_a^ NEBe^-E_b^ NEB/k_ BT=e^r_t/k_ BT (according to HTST) gives an estimation of the rate of the transition corresponding to action a, the policy in Eq. (<ref>) gives the physical transition probability P(a|s_t)=Γ_s_ta/∑_a'∈𝒜_s_tΓ_s_ta'. The average residence time of the system on the state s_t, ⟨Δ t⟩ = (∑_a∈𝒜_s_tΓ_s_ta)^-1, is then estimated as (∑_a∈𝒜_s_te^Q_θ (s_t,a))^-1. Expressing the reward r_t = r_t^0+r_t^1T as a linear function of T, the constant term r_t^0 and linear term r_t^1 can be fitted simultaneously by a two-component value function (Q_θ^0,Q_θ^1) in Q_θ=Q_θ^0+Q_θ^1T to make the model applicable to different temperatures:
θ←θ - λ∇_θ∑_t[(Q_θ^0(s_t,a_t)-r_t^0)^2+T_tr^2(Q_θ^1(s_t,a_t)-r_t^1)^2],
where λ is the learning rate, and T_tr, the training temperature, is a hyperparameter that controls the relative importance of the two terms in the loss function. The two components give neural-network predictions for the energy barrier E_b^ NN≡ -Q_θ^0 and attempt frequency
logν_a^ NN≡Q_θ^1/k_ B.
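A minimal sketch of this two-component fit is given below, with the descriptor-plus-MLP model replaced by a linear model on precomputed features; the feature matrix and reward values are synthetic placeholders used only to illustrate the loss of Eq. (<ref>). (In the actual training, Adam with a learning rate of 10^-3 is used, per section 4.B; here a single tiny plain gradient step is shown.)

import numpy as np


def two_component_loss(w0, w1, feats, r0, r1, T_tr):
    """Mean of (Q0 - r0)^2 + T_tr^2 * (Q1 - r1)^2 with linear stand-ins Q0, Q1."""
    q0, q1 = feats @ w0, feats @ w1
    return np.mean((q0 - r0) ** 2 + (T_tr ** 2) * (q1 - r1) ** 2)


def gradient_step(w0, w1, feats, r0, r1, T_tr, lr=1e-7):
    """One gradient-descent step on the loss above (gradients written out analytically)."""
    n = len(r0)
    g0 = 2.0 / n * feats.T @ (feats @ w0 - r0)
    g1 = 2.0 / n * (T_tr ** 2) * feats.T @ (feats @ w1 - r1)
    return w0 - lr * g0, w1 - lr * g1


rng = np.random.default_rng(1)
feats = rng.normal(size=(64, 8))
r0 = rng.normal(loc=-0.5, scale=0.1, size=64)       # r^0 = -E_b^NEB in eV (toy values)
r1 = rng.normal(loc=2.5e-3, scale=5e-4, size=64)    # r^1 = k_B*log(nu_a^NEB) in eV/K (rough scale)
w0, w1 = np.zeros(8), np.zeros(8)
print(two_component_loss(w0, w1, feats, r0, r1, T_tr=1000.0))
w0, w1 = gradient_step(w0, w1, feats, r0, r1, T_tr=1000.0)
print(two_component_loss(w0, w1, feats, r0, r1, T_tr=1000.0))  # loss decreases after one step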
As a testbed, we first apply our method to hydrogen diffusion in pure FCC Cu and Ni. The model is trained on a 4× 4× 4 cubic supercell with 4 randomly sampled hydrogen sites. The model is then deployed to simulate a single hydrogen diffusion in a 3× 3× 3 cubic supercell for 500 timesteps. This system is simulated at temperatures spanning 250 K to 500 K with an interval of 50 K and repeated 50 times for each temperature. The final displacement Δ x_i, total time t_i, and temperature T_i of the ith simulation trajectory are recorded. The two parameters D_0 and Q in the Arrhenius form of diffusivity D=D_0e^-Q/k_ BT are fitted by the maximum likelihood estimation (MLE):
max_D_0, Q∏_i 4πΔ x_i^2/(12π D_0 t_i e^-Q/k_ BT)^3/2exp-Δ x_i^2/12D_0t_ie^Q/k_ BT.
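A sketch of this fit using scipy is shown below; the trajectory summaries (Δx_i, t_i, T_i) are synthetic placeholders generated with the same factor-of-12 convention as the likelihood above, so the recovered parameters should be close to the values used to generate them.

import numpy as np
from scipy.optimize import minimize

K_B = 8.617333262e-5  # eV/K


def neg_log_likelihood(params, dx, t, T):
    """Negative log of the likelihood above, with D = D0 * exp(-Q/(k_B*T))."""
    log_D0, Q = params
    D = np.exp(log_D0) * np.exp(-Q / (K_B * T))
    var = 12.0 * D * t
    log_p = np.log(4.0 * np.pi * dx**2) - 1.5 * np.log(np.pi * var) - dx**2 / var
    return -np.sum(log_p)


def fit_arrhenius(dx, t, T, D0_guess=1e-6, Q_guess=0.3):
    res = minimize(neg_log_likelihood, x0=[np.log(D0_guess), Q_guess],
                   args=(dx, t, T), method="Nelder-Mead")
    return np.exp(res.x[0]), res.x[1]


# Synthetic trajectory summaries: dx in m, t in s, T in K (placeholder values).
rng = np.random.default_rng(2)
T = rng.choice(np.arange(250.0, 501.0, 50.0), size=300)
t = np.full_like(T, 1e-6)
D_true = 5e-7 * np.exp(-0.43 / (K_B * T))
dx = np.sqrt(rng.chisquare(3, size=T.size) * 6.0 * D_true * t)  # consistent with the 12*D*t convention
print(fit_arrhenius(dx, t, T))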
The derived D_0 and Q are reasonably consistent with previous experimental measurements, as shown in Table <ref>. The effective activation energy Q in simulation tends to be slightly smaller than the experimental results, probably because the small concentration of trapping sites (defects or impurities) present in experiments is not considered in the simulation; such traps slightly increase the average energy barrier.
To test the method's capability to capture compositional complexity, we train the RL model on the equiatomic CrCoNi medium-entropy alloy. The CrCoNi alloy has recently attracted broad interest because of its outstanding fracture toughness and ductility <cit.>. In the CrCoNi solid solution, each metal atom near the hydrogen can be of a different atomic species, giving a complex state space. The predicted E_b^ NN and ν_a^ NN are approximately consistent with the values in the training and testing datasets, as shown in Fig. <ref>, where the data points are distributed close to the diagonal line over the wide range of observed quantities. The standard deviations of the model prediction errors are similar on the training and testing datasets, confirming that the model does not overfit the training data despite the large number of model parameters.
The hydrogen self-diffusion in CrCoNi is simulated using the trained model running on one hydrogen in a 4× 4× 4 rhombohedral supercell with short-range ordering obtained from Ref. <cit.>. The hydrogen displacement as a function of simulation time at 300 K is shown in Fig. <ref>a, using 30 repetitions of μs-long simulations. The approximate functional form ⟨Δ x^2⟩∝ t is shown by the blue line, and the diffusivity is estimated as 2.84× 10^-14 m^2/s. Similar simulations are performed at different temperatures, as shown in Fig. <ref>b. The Arrhenius plot shows a good linear relation. The estimated effective activation energy Q equals 0.43± 0.01 eV, and the pre-exponential factor D_0 equals (5± 2)× 10^-7 m^2/s. To our knowledge, these parameters have not been reported in the literature, so we present these results as predictions of our method.
In CrCoNi, short-range ordering (SRO) has a significant influence on various properties of the material, ranging from hardness <cit.> and stacking fault energy <cit.> to magnetism <cit.>. We show that the SRO also has an evident influence on the hydrogen diffusivity in CrCoNi, as shown in Fig. <ref>c. The system with SRO corresponding to thermal equilibrium (SRO=1) gives approximately twice the hydrogen diffusivity of the fully random configuration (SRO=0), showing that SRO enhances hydrogen diffusion. This can be explained by the reduction of the Cr-Cr bond concentration by the SRO <cit.>, as hydrogen transition energy barriers adjacent to Cr-Cr bonds are found to be higher than the average hydrogen transition energy barrier in our calculations. Our results predict that the hydrogen diffusion behavior can also be tuned by the SRO in multi-principal element alloys.
The second working mode of our method, the LSS, sets the reward function as the energy reduction after the transition:
r_t = E(s_t)-E(s_t+1)=-Δ E. The model is trained by the deep Q network (DQN) algorithm <cit.>, which aims to maximize the total reward R=∑_t=0^T γ^t r_t on a trajectory with a discount factor γ close to one (set as 0.8 in our calculation). The model parameters are updated through the Bellman equation <cit.>:
θ←θ - λ∇_θ∑_t(r_t+γmax _a^' Q_θ^t(s_t+1,a^')-Q_θ (s_t, a_t))^2,
where θ^t is the target network that updates less frequently than θ. The converged Q_θ (s_t,a_t) fits the maximal total rewards after timestep t, max_(a_t+1,a_t+2,⋯ )∑_t'=t^T γ^t'-tr_t'.
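A minimal sketch of this update with a linear stand-in for Q_θ is given below; the features and rewards are synthetic placeholders, and the infrequently copied weight vector plays the role of the target network θ^t.

import numpy as np


def dqn_update(w, w_target, transitions, gamma=0.8, lr=1e-3):
    """One gradient step on sum_t (r_t + gamma*max_a' Q_target(s_{t+1},a') - Q(s_t,a_t))^2,
    with the linear stand-in Q(s,a) = phi(s,a) @ w."""
    grad = np.zeros_like(w)
    for phi_sa, r, phi_next_all in transitions:
        target = r + gamma * np.max(phi_next_all @ w_target)  # target network is held fixed
        td_error = phi_sa @ w - target
        grad += 2.0 * td_error * phi_sa
    return w - lr * grad


rng = np.random.default_rng(3)
w = np.zeros(6)
w_target = w.copy()
# Each transition: features of the taken action, reward -dE, features of all next-state actions.
transitions = [(rng.normal(size=6), rng.normal(scale=0.1), rng.normal(size=(4, 6)))
               for _ in range(5)]
for step in range(200):
    w = dqn_update(w, w_target, transitions)
    if step % 20 == 0:
        w_target = w.copy()   # the target network theta^t is updated less frequently
print(w)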
As the Q function "foresees" the energy reduction of future steps and chooses actions that maximize the "long-term" energy reduction, it is expected to converge to low-energy configurations faster than local strategies that only consider single-step energy terms. This makes the LSS a simulator of an annealing process, which converges to a near-ground state in fewer timesteps than the TKS.
We demonstrate the LSS's performance in simulating annealing using the process of hydrogen migration to the copper (111) surface, as shown in Fig. <ref>a. 4× 4× 3 hexagonal supercells are constructed with 10 randomly sampled hydrogen sites, and the (111) surface is created with a 15 Å vacuum layer. Hydrogen in the surface adsorption sites has lower energy than in the bulk interstitial sites, so the energy ground state has all hydrogen atoms on surface adsorption sites. However, because of the energy difference between the octahedral and tetrahedral sites, the migration pathway involves multiple local energy minima and low energy barriers, making it challenging to sample the low-energy states <cit.>. After training, our RL policy gives the most likely action from each state, as shown in Fig. <ref>a. Within the cut-off radius of 8.5 Å in Eq. (<ref>) from the surface, the highest-probability actions (HPAs) from all sites are oriented towards the surface. The HPAs from surface adsorption sites point to neighboring surface sites. This policy guides the hydrogen atoms to migrate across the local energy barriers toward the surface sites. The HPAs from sites close to the surface have larger Q values than those far from the surface, as the discount factor reduces the contribution of long-term rewards to the Q function compared to short-term rewards. For sites deeper than the cut-off radius, all moves give the same Q value due to the constraint of symmetry.
We compare the annealing process using the LSS and the Metropolis-Hastings algorithm <cit.>, as shown in Fig. <ref>b,c. The LSS annealing leads all hydrogen atoms to surface adsorption sites and converges to the energy ground state within 200 timesteps in all 50 trajectories. From the grey lines, one can observe that the system moves across a large number of low energy barriers and approaches the ground state. In comparison, the Metropolis-Hastings algorithm converges slowly. Less than half of the hydrogen migrates to the surface sites in both the 200- and 500-timestep annealing runs, leaving ∼ 4 eV of energy above the ground state on average. These results demonstrate that the LSS shows advantageous performance in approaching low-energy configurations compared to straightforward Monte Carlo methods.
§ DISCUSSION AND CONCLUSIONS
The TKS and LSS can be viewed as two special cases of a unified DQN framework. The general reward function is:
r_t = -α (F̃(s^saddle_t) - F(s_t)) - β (F(s_t+1)-F(s_t)),
where F(s)≡ E(s) + k_ BT∑_i=1^3Mlogν_i(s) + F_0 is the free energy of state s, and F̃(s^saddle)≡ E(s^saddle) + k_ BT∑_j=1^3M-1logν_j^*(s^saddle) + F_0 is the effective free energy of the saddle point (F_0 is a state-independent constant). There are three tunable parameters, α, β, and γ (in Eq. (<ref>)), controlling the importance assigned to reproducing the correct transition probability, energy reduction, and long-term performance of the model. The TKS and LSS correspond to α=1, β=γ =0 and α=0, β=1, γ≃ 1, respectively. Other parameter settings, despite the lack of direct physical interpretation, can be used to explore different configurations in the energy landscape with certain preferences. A probabilistic interpretation of the general framework is discussed in section 4.C, mapping each parameter set to a probability distribution function from which the trajectory is sampled.
Our method provides a computational framework to simulate the long-timescale diffusion and annealing process. Although the simulations in this paper focus on hydrogen diffusion in metals, the method is applicable to diffusion processes in different materials and microstructures, given a specifically designed action space. This method can also bridge large length scales, by first training a model on varied small structures, then deploying the model to guide the long-timescale simulation in a large supercell that includes the complexity of all trained structures.
§ EXPERIMENTAL SECTION
§.§ Action space identification algorithm
The action space 𝒜(s) = {a = (i,v⃗)} is identified based on the atomic configuration s. The algorithm first identifies all hydrogen atoms with indices i_1, i_2, ⋯. For each hydrogen atom i, the distances of all metal atoms j within a cut-off radius r_c are ranked:
r_ij_1≤ r_ij_2≤⋯≤ r_ij_M
where r_ij_k is the distance between atom i and atom j_k. Then, we use all metal atoms j_k with a distance r_ij_k<1.2r_ij_4 (we denote the largest k satisfying this condition as n) and the hydrogen atom i itself to construct a convex hull including these atoms. If the hydrogen atom i is a corner of the convex hull, the hydrogen atom is on a surface adsorption site; if the hydrogen atom i is inside the convex hull, the hydrogen atom is in a bulk interstitial site.
If the hydrogen atom is in a bulk interstitial site, we choose all face centers, (c⃗_1,c⃗_2,⋯ , c⃗_m), of the convex hull (j_1,⋯ ,j_n). Then, the actions towards every face center, (i, max(1.6(c⃗_k-r⃗_i), 1.2Åc⃗_k-r⃗_i/|c⃗_k-r⃗_i|)), k=1,2,⋯ ,m, are included in the action space, unless they cause “collision" events. A “collision" event is defined as follows: if the hydrogen atom i takes the action, it ends up less than 0.5 Å away from at least one other atom. If the hydrogen atom “collides" with another hydrogen atom, the action is directly discarded. If the hydrogen atom “collides" with a metal atom, the metal atom is added to reconstruct the convex hull, and actions towards face centers adjacent to the added atom are included, unless they cause another “collision", in which case the action is directly discarded.
If the hydrogen atom is on a surface adsorption site, the convex hull is reconstructed using the metal atoms j_k satisfying r_ij_k<1.2r_ij_3. The atoms directly connected with the hydrogen atom, (j_1, j_2,⋯ ,j_n), are identified as the adsorption site (we sort (j_1, j_2,⋯ ,j_n) to form a counterclockwise loop). The adsorption site center is obtained as c⃗ = 1/n∑_k r⃗_j_k. The adsorption site has n edges, and the sth edge center is e⃗_s = (r⃗_j_s+r⃗_j_s+1)/2. First, the surface diffusion actions (i,1.6(e⃗_s-c⃗)), s=1,2,⋯ ,n, are included. Then, the action towards the bulk, (i, 3Åc⃗_k-r⃗_i/|c⃗_k-r⃗_i|), is included. If a “collision" happens, the same procedure as in the bulk interstitial case is applied.
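A simplified sketch of this identification step using scipy's convex hull routine is shown below; the collision handling and the full surface-site action construction are omitted, the thresholds (1.2 r_ij_4, 1.6 and 1.2 Å) follow the text, and the toy geometry is only meant to exercise the bulk branch.

import numpy as np
from scipy.spatial import ConvexHull


def classify_and_propose(r_H, metal_neighbors):
    """Classify one hydrogen atom as surface/bulk and propose bulk displacement vectors.

    r_H: (3,) hydrogen position; metal_neighbors: (M,3) metal positions sorted by distance.
    """
    d = np.linalg.norm(metal_neighbors - r_H, axis=1)
    near = metal_neighbors[d < 1.2 * d[3]]           # metal atoms with r_ijk < 1.2 * r_ij4
    pts = np.vstack([near, r_H])
    hull = ConvexHull(pts)
    if len(pts) - 1 in hull.vertices:                # hydrogen is a corner of the hull
        return "surface", []                         # surface-site actions omitted in this sketch
    actions = []
    for simplex in hull.simplices:                   # triangular facets of the hull in 3D
        c = pts[simplex].mean(axis=0)                # face center c_k
        step = c - r_H
        length = max(1.6 * np.linalg.norm(step), 1.2)   # displacement length in Angstrom
        actions.append(length * step / np.linalg.norm(step))
    return "bulk", actions


# Toy usage: hydrogen at the center of an ideal octahedral cage of 6 metal atoms.
a = 3.6
cage = 0.5 * a * np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                           [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
kind, moves = classify_and_propose(np.zeros(3), cage)
print(kind, len(moves))   # expected: 'bulk' and 8 moves towards the neighboring tetrahedral sites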
§.§ Detailed parameter settings
The model training on pure copper and nickel is conducted on a 4× 4× 4 cubic supercell of the FCC metals. 3 atomic configurations are generated for each metal, with 4 hydrogen atoms randomly sampled among all octahedral and tetrahedral sites in each configuration. 20 and 40 trajectories are sampled for copper and nickel, respectively, with 30 timesteps in each. In the atomic relaxation and NEB calculations, all forces are converged to 0.05 eV/Å under the PreFerred Potential (PFP) v4.0.0, which is used throughout this paper. The cut-off radius of the neural network model is 4 Å. The embedding network G_k^1 has one hidden layer and an output layer, both with a size of 12. Throughout the paper, we take the first 1/4 of the columns of G_k^1 to form G_k^2, and the input layers of G_k^1,2 have a size of N_c+1, where N_c is the number of element species. We define an element species list: C = (C_1, C_2, ⋯ , C_N_c, C_N_c+1= action), where C_l is the lth element. For G_k^1,2(f_c(r_im), c_m=C_l), the input layer takes the (N_c+1)-dimensional input vector whose lth component is f_c(r_im) and whose other components are zeros. The fitting network has two hidden layers with a size of 32. The maximum atom number is set as 40, which has not been exceeded during the training. The training temperature is set as 1000 K throughout this paper. After including the nth trajectory, one randomly samples a trajectory from the probability distribution P_i = (1-0.99)/(1-0.99^n)· 0.99^(n-i) (more recent trajectories have larger probability), trains 20 gradient-descent steps on the sampled trajectory, and repeats this n times. The training algorithm is Adam throughout this paper, and the learning rate is set as 10^-3 in all online training. Offline training is conducted to further improve the model's accuracy. We separate the training data into a training dataset (2/3 of the data) and a testing dataset (1/3 of the data). 10000 full-batch gradient-descent steps are performed on the training dataset. The learning rate decays exponentially with the training step from 10^-3 to 10^-5 in all offline training in this paper.
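The trajectory-replay rule described above (sampling the ith of n stored trajectories with probability P_i) can be written as a small helper; the decay constant 0.99 is taken from the text, everything else is illustrative.

import numpy as np


def sample_trajectory_index(n, decay=0.99, rng=None):
    """Sample i in {1,...,n} with P_i = (1-decay)/(1-decay**n) * decay**(n-i),
    so that recently collected trajectories are replayed more often."""
    rng = np.random.default_rng() if rng is None else rng
    i = np.arange(1, n + 1)
    p = (1.0 - decay) / (1.0 - decay**n) * decay**(n - i)
    return int(rng.choice(i, p=p))


print([sample_trajectory_index(50) for _ in range(10)])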
The model training on the NiCrCo medium-entropy alloy is conducted on a 4× 4× 4 cubic supercell of the FCC fully random solid solution. 9 atomic configurations are generated, with 4 hydrogen atoms randomly sampled among all octahedral and tetrahedral sites in each configuration. 3 independent training processes are conducted with 101 trajectories each, and each trajectory contains 30 timesteps. In the atomic relaxation and NEB calculations, all forces are converged to 0.05 and 0.07 eV/Å, respectively. The cut-off radius of the neural network model is 5 Å. The embedding network G_k^1 has one hidden layer and an output layer, both with a size of 24. The fitting network has two hidden layers with a size of 128. The maximum atom number is set as 50, which was not exceeded during the training. The online training parameters are the same as for the pure metals. As for offline training, we split the training data in the same way as for the pure metals. Stochastic gradient descent is implemented with a minibatch size of 500 data points (one timestep is a data point). Each minibatch is randomly sampled from all data points, and 10 gradient-descent steps are applied to it; this is repeated for 20000 iterations. To avoid overfitting, a regularization term of 5× 10^-6∥θ∥^2 is added to the loss function.
The deep Q learning for the copper (111) surface is conducted on a 4× 4× 3 hexagonal supercell of FCC copper (4 replications along the a and b directions and 3 along the c direction, where c is along the 3-fold axis). A vacuum layer of 15 Å is included in the c direction. We implemented 7 independent training processes: 4 of them have only one randomly sampled hydrogen atom in the copper slab (12 configurations are sampled as starting points, and initial configurations are randomly selected from them), and the other 3 have 10 randomly sampled hydrogen atoms (10 configurations are sampled as starting points). 300 trajectories are sampled with 30 timesteps in each. In the atomic relaxation, all forces are converged to 0.05 eV/Å. The cut-off radius of the neural network model is 8.5 Å, as the model needs more distant atomic information to foresee the long-term rewards. The embedding network G_k^1 has one hidden layer and an output layer, both with a size of 24. The fitting network has two hidden layers with a size of 128. The maximum atom number is set as 260, which has not been exceeded during the training. After including the nth trajectory, one randomly samples a trajectory, trains 5 gradient-descent steps on the sampled trajectory, and repeats this ⌈ n^2/3⌉ times. The offline training randomly samples a mini-batch of 10 trajectories and applies 10 steps of gradient descent at each iteration. There are 1010 iterations in the training process.
§.§ Probabilistic Interpretation of the DQN framework
By setting the parameters α, β, and γ, our method samples different probability distributions. In physical reality, the transition rate is approximately determined by harmonic transition state theory (HTST):
Γ_s_ta_t = ∏_i=1^3Mν_i(s_t)/∏_j=1^3M-1ν_j^*(s^ saddle_t)· e^-(E(s^ saddle_t)-E(s_t))/k_ BT = e^-(F̃(s^ saddle_t)-F(s_t))/k_ BT
At thermal equilibrium, the probability distribution among different states in the state space 𝒮 is:
P(s) = 1/Ze^-F(s)/k_ BT, Z = ∑_s∈𝒮e^-F(s)/k_ BT
§.§.§ γ = 0: sampling exact transition probabilities
If γ = 0, the exact value function is Q^*(s_t,a_t) = r_t = -α (F̃(s^ saddle_t)-F(s_t)) - β (F(s_t+1)-F(s_t)). The problem simplifies to choosing an action based on the next-step reward, namely, a contextual bandit problem. If the parameterized Q_θ (s,a) properly reproduces the exact value function Q^*(s,a), the policy gives:
π_θ (a|s) = (Γ_sa)^α P(s'_sa)^β/∑_a'∈𝒜_s(Γ_sa')^α P(s'_sa')^β
where s'_sa is the next state after taking action a in state s. For the kinetics simulation (TKS), which reproduces the transition probabilities of Eq. (<ref>), the coefficients are set as α = 1, β = 0. The expected residence time is then evaluated as:
τ_t = 1/∑_a∈𝒜_s_tΓ_s_ta = 1/∑_a∈𝒜_s_te^Q_θ (s_t,a)/k_ BT
In certain scenarios, the goal is to sample the thermal equilibrium distribution. By the detailed balance principle, the sampled probability distribution follows Eq. (<ref>) as long as α +2β = 1. One can set β to a larger value to sample rarer transition events while keeping the thermodynamic properties correct.
§.§.§ γ∼ 1: maximizing global probability of a trajectory
When we set γ∼ 1, the algorithm maximizes R(𝒯)≃∑_t=0^T r_t (we consider setting γ slightly smaller than 1 as a convergence technique that leads to a small bias). The probability of the trajectory is:
P(𝒯) = P(s_0)∏_t=0^T-1 e^-τ_tΓ_s_tΓ_s_t→ s_t+1, where Γ_s_t≡∑_a∈𝒜_s_tΓ_s_ta is the total escape rate from s_t.
Using the expected value τ_t = 1/Γ_s_t, the probability becomes P(𝒯|τ_t = 1/Γ_s_t) = P(s_0)e^-T∏_t=0^T-1Γ_s_t→ s_t+1. Then, maximizing the total reward corresponds to maximizing:
e^R(𝒯)/k_ BT-α TP(s_0)^α +β = P(𝒯|τ_t = 1/Γ_s_t)^α P(s_T)^β
Here, the initial state s_0 does not depend on the policy, so it is constant when doing the maximization. If α = 0, β = 1, the method aims to sample the most probable final state s_T, corresponding to an annealing process that targets the ground state. If α = 1, β = 0, the method aims to sample the most probable trajectory based on transition kinetics. In the general case, α and β can be tuned according to sample probabilities considering both the final state distribution and transition kinetics.
§ ACKNOWLEDGEMENTS
We thank Prof. Cathy Wu, Dr. Shen Shen, and Zhiyuan Shu for their insightful discussions. This work was supported by NSF CMMI-1922206 and DTRA (Award No. HDTRA1-20-2-0002) Interaction of Ionizing Radiation with Matter (IIRM) University Research Alliance (URA). The calculations in this work were performed in part on the Matlantis High-Speed Universal Atomistic Simulator and the Texas Advanced Computing Center (TACC).
|
http://arxiv.org/abs/2307.00416v1
|
20230701194349
|
On the stability of vanishing cycles of étale sheaves in positive characteristic
|
[
"Tong Zhou"
] |
math.AG
|
[
"math.AG"
] |
On the stability of vanishing cycles of étale sheaves in positive characteristic
Tong Zhou
=================================================================================
In positive characteristic, in contrast to the complex analytic case, vanishing cycles are highly sensitive to test functions (the maps to the henselian traits). We study this dependence and show that on a smooth surface, this dependence is generically (in a precise sense) only up to a finite jet of the test functions. We also study the class of sheaves whose vanishing cycles have the strongest stability. Among other things, we show that tame simple normal crossing sheaves belong to this class, and this class is stable under the Radon transform.
§ INTRODUCTION
The microlocal point of view of viewing objects as living on the cotangent bundle rather than the base space was introduced by Sato in the field of partial differential equations. This idea led to the birth of microlocal analysis and spread out to other fields of mathematics. In <cit.> Kashiwara and Schapira systematically developed the theory of sheaves on real/complex analytic manifolds from this point of view.
Question: what does the theory look like for ℓ-adic sheaves on schemes in positive characteristic? In this algebraic context, among many other works, we point out: in <cit.> Verdier defined the specialisation[which, however, kills wild ramifications.], in <cit.> Abbes and Saito defined the characteristic class, and in <cit.> Kato and Saito defined the Swan class. A recent breakthrough is from Beilinson <cit.> who defined singular supports (SS), and based on that Saito defined characteristic cycles (CC) in <cit.>.
The starting point of this paper is the following line of thought: apart from SS and CC, another key notion in microlocal sheaf theory in the real/complex analytic context is the microstalk, which is to microlocal sheaves as the stalk is to sheaves. One definition of microstalk is via the vanishing cycle functor. Namely, for (x,ξ) a smooth point of SSℱ, the microstalk of ℱ at (x,ξ) can be defined to be ϕ_f(ℱ)_x for any transverse test function f at (x,ξ) (ttfun, see Definition <ref>). This makes sense because of the wonderful fact that vanishing cycles have strong stability with respect to the variation of the ttfun (Proposition <ref>). In particular, as vector spaces with monodromy actions, they are (noncanonically) independent of the choice of the ttfun.
On the other hand, this totally fails in the algebraic context because of wild ramifications. Here is an example (see Section <ref> for details): consider 𝔸^2 over an algebraically closed field of characteristic p>3. Let D be the y-axis and U be the complement. Let ℱ be the !-extension to 𝔸^2 of the Artin-Schreier sheaf on U determined by the equation t^p-t=y/x^p. One can show SSℱ=T^*_XX ∪⟨ dy⟩_D, where ⟨ dy⟩_D denotes the subspace of D×_X T^*X consisting of covectors proportional to dy. Consider the vanishing cycles with respect to the following two functions: f_0(x,y)=y/(1+x), f_1(x,y)=y/(1+x)+x^3. It is easily checked that f_0 and f_1 are ttfun's at (a,dy), where a is the origin. However, using a theorem of Deligne-Laumon (Theorem <ref>), one can compute: dim(ϕ_f_0(ℱ)_a)=-(p-1), while dim(ϕ_f_1(ℱ)_a)=-2.
Question: what stability do vanishing cycles have in the algebraic context?
In the first part of this paper, we show that on a smooth surface, the dependence of vanishing cycles on the ttfun is generically only up to a finite jet. This result is inspired by a result of Saito <cit.>, see the end of Section <ref> for a discussion of the relation between these two results.
Let X be a smooth surface over an algebraically closed field of characteristic p>2, D a prime divisor, U=X-D, ℱ=j_!ℱ_U, where ℱ_U is a local system on U concentrated in degree 0. Then, there exists a Zariski open dense V=X-{finitely many points on D} and a Zariski open dense S⊆ SS(ℱ|_V) such that for any (x,ξ)∈ S, there exists a natural number N≥2 (depending on (x,ξ)) such that, with respect to generic transverse test functions at (x,ξ), the vanishing cycles depend on the ttfun up to the N-th jet. Moreover, we have an upper bound: N≤2^M-1.b+(2p+1)^M.max_σ≠ id∈ G{ep(I_σ)}.ϵ.|G|. The terms are explained below.
We briefly explain the notions in this theorem. See Section <ref> for details. By shrinking, we may assume X is affine. Let Ū→ U be the minimal Galois covering trivialising ℱ, with Galois group G. Let X̄→ X be the normalisation of X in Ū. Then, I_σ:= the ideal corresponding to the subscheme of fixed points of σ acting on X̄; ep(I_σ):=the smallest r ∈ℕ such that √(I_σ)^r⊆I_σ; ϵ:= the intersection number at x of f^-1(0) and D (which is either 1 or 2 for a generic (x,ξ)∈ SSℱ); b:=max_{x̄ above x} {the smallest r∈ℕ s.t. 𝔪_x̄^r⊆𝔪_x.𝒪_X̄,x̄}. The meaning of the vanishing cycles depending on the ttfun up to the N-th jet and the meaning of the term M are explained in the next two paragraphs.
Given a class 𝒞 of ttfun's, we say that vanishing cycles depend on the ttfun up to the N-th jet with respect to 𝒞 if they form a local system as the ttfun varies within 𝒞 and in order ≥ N terms. More precisely, for any T-family of ttfun's (ttfam), the theory of vanishing cycles over general bases (<cit.>) allows us to define a “big” vanishing cycle ϕ_f(ℱ). This is a sheaf on a “punctured tubular neighbourhood” of T, whose restrictions to slices are equal to the usual vanishing cycles over the slices. We then consider those ttfam's with the additional property that all functions in this family belong to 𝒞 and are equal mod 𝔪_x^N. If ϕ_f(ℱ) is a local system, we say that vanishing cycles of ℱ depend on the ttfun up to the N-th jet with respect to 𝒞.
In the theorem, 𝒞 is taken to be generic ttfun's at (x,ξ), where "generic" has a precise meaning as defined in Definition <ref>. Roughly speaking, a ttfun f at (x,ξ) is generic if the number of blowups at closed points needed to resolve the singularities of the curve f^-1(0)×_X X̄↪X̄ is "generic", in particular, bounded by some number independent of f. We take M to be this number.
Here is an outline of the proof. We first use a distinguished triangle of Saito to rephrase the local constancy of ϕ_f(ℱ) as the pair (f_T: V_T→ T, ℱ_T) being universally locally acyclic (ULA), where V_T→ T is the family of zero loci of this ttfam, and ℱ_T is the pullback of ℱ to V_T. Then we apply the theorem of Deligne-Laumon to further translate the ULA condition to the Swan conductor on each fibre being constant along the family. This reduces the question to the stability of Swan conductors of restrictions to curves. Then we recall the computation of Swan in terms of intersection numbers and representation-theoretic data and finally reduce the question to a purely geometric one, which we are able to analyse. The restriction to V and S is due to two reasons: a) to get a control on ttfam's, ensuring that Deligne-Laumon is applicable; b) our analysis of the geometric question requires certain regularities of the spaces involved.
The second part of this paper studies the class of sheaves whose vanishing cycles have the strongest stability. For technical reasons, we need to introduce two notions (Definition <ref>): a) μ c: sheaves whose vanishing cycles only depend on the ttfun up to 2-jet; b) μ c^s: sheaves which are μ c under all smooth pullbacks. The stability of vanishing cycles for μ c, μ c^s sheaves is similar to that in the real/complex analytic context. More precisely, among other things, we show:
i) Let X be a smooth variety over an algebraically closed field of characteristic p>2, D↪ X be a simple normal crossing divisor (allowed to be empty), j: U↪ X be its complement, ℱ be a local system on U. Then j_!ℱ is μ c^s.
ii) Let X be a smooth variety over an algebraically closed field of characteristic p>2, ℱ be a μ c sheaf on X, and (x,ξ) be a smooth point in SSℱ. Then for any two ttfun's f, g at (x, ξ), there exists a (noncanonical) isomorphism ϕ_f(ℱ)_x≅ϕ_g(ℱ)_x as objects in D^b_c(𝔽_ℓ). We call this the microstalk of ℱ at (x,ξ).
iii) μ c^s sheaves are preserved under the Radon transform. Moreover, their microstalks are invariant under the Radon transform.
We briefly explain the proofs. For i), as above, we first rephrase the question as showing (f_T: V_T→ T, ℱ_T) being ULA. We then need to understand the singularities of the intersections of the zero loci of ttfun's and the simple normal crossing divisors. We give explicit resolutions of such singularities, which implies the ULA statement. ii) basically follows from the definition of μ c sheaves. For iii), the argument is similar to the complex analytic case and essentially reduces to a detailed understanding of the geometry of the Radon transform. Actually, one can show the stability of μ c, μ c^s sheaves under any proper pushforward which shares similar geometric properties with (the pushforward part of) the Radon transform (Lemma <ref>).
A basic question we cannot answer yet is: how does the stability of vanishing cycles change under smooth pullbacks? In particular, how does depth (Definition <ref>) change? Is μ c=μ c^s (Definition <ref>)? One expects that the stability remains the same. We know by i) above that the answer to the last question is yes for tame simple normal crossing sheaves.
In the appendix, we list some analogies and contrasts among several sheaf theories from the microlocal point of view.
§.§ Conventions
Unless otherwise specified, a “sheaf” means a bounded complex of sheaves whose cohomology sheaves are constructible, a “local system” means a bounded complex of sheaves whose cohomology sheaves are locally constant. All functors are derived.
In the complex analytic context, for X a complex analytic manifold, D(X) denotes the triangulated category of sheaves (in the sense above) with ℂ-coefficients. “Constructible” (ℂ-constructible) is with respect to stratifications by locally closed complex analytic submanifolds. We fix a generator of π_1(ℂ^×,1) and identify it with ℤ.
In the algebraic context, we fix an algebraically closed field k of characteristic p. For X a scheme over k, D(X) denotes the triangulated category of sheaves (in the sense above) with 𝔽_ℓ-coefficients for a fixed prime ℓ≠ p. "Constructible" is with respect to stratifications by locally closed subschemes. A "variety" means a finite type scheme over k. "Smooth" means smooth relative to k. A "geometric point" means a map from the Spec of a separably closed field. G_η denotes π_1(𝔸^1_k,(0)-{0},η), where 𝔸^1_k,(0) is the strict henselisation of 𝔸^1_k at the origin, and η is a fixed geometric point over its generic point.
D^b_c(ℂ) denotes the triangulated category of bounded complexes of finite dimensional ℂ vector spaces. D^b_c(ℂ[ℤ]) denotes the triangulated category of bounded complexes of ℤ representations on finite dimensional ℂ vector spaces. Similarly for D^b_c(𝔽_ℓ), D^b_c(𝔽_ℓ[G_η]).
For f a complex analytic (resp. regular) function on a complex analytic manifold (resp. smooth scheme over k) X, df denotes its differential, Γ_df denotes the graph of df in the cotangent bundle T^*X (resp. T^*X:=T^*(X/k)). For f: X→ Y a map of complex analytic manifolds (resp. smooth schemes over k), we have the correspondence T^*X← X×_Y T^*Y→ T^*Y. df denotes the first map. For C⊆ T^*X, f_∘C denotes the image in T^*Y of its preimage in X×_Y T^*Y; for C⊆ T^*Y, f^∘C denotes the image in T^*X of its preimage in X×_Y T^*Y.
§.§ Acknowledgement
I would like to express sincere gratitude to my advisor David Nadler for his guidance, generosity, and encouragements.
I would like to thank Owen Barrett, Sasha Beilinson, Mark Macerato, Martin Olsson and Jeremy Taylor for valuable discussions, especially to Owen Barrett for showing me Lemma <ref>.
§ REVIEW
In this section, we review basic microlocal-sheaf-theoretic constructions in both complex analytic and algebraic contexts, and compare them. Except for the definitions of the ttfun and the Radon setup, Section <ref> is logically independent of the rest of the paper, but serves as a motivation.
§.§ Complex analytic context
The basic reference is <cit.>. Consider a complex analytic manifold X, let D(X) be the triangulated category of bounded ℂ-constructible complexes of sheaves of ℂ-vector spaces. The notion of the singular support (or the microsupport) SSℱ is defined for ℱ∈ D(X). It is a half-dimensional ℂ^×-conic closed Lagrangian subset in T^*X which records the codirections in which ℱ is not locally constant. More precisely, it equals the closure of all (x,ξ)∈ T^*X such that there exists some complex analytic function f on some open neighbourhood of x such that the vanishing cycle ϕ_f(ℱ)_x is nonzero. SSℱ is the 0-th order invariant measuring the “singularity” of ℱ, i.e. the locus. Clearly, the vanishing cycle, viewed as an object in D^b_c(ℂ[ℤ]) (bounded complexes of ℤ representations on finite dimensional ℂ vector spaces), is a much more refined measurement. However, it depends on the choice of the test function f. It is a wonderful fact that, when restricted to transverse test functions, ϕ_f(ℱ)_x is essentially independent of f, in the precise sense below.
A transverse test function (ttfun) of ℱ at a smooth point (x, ξ) of SSℱ is a complex analytic function f defined on an open neighbourhood U of x such that
i) f(x)=0
ii) Γ_df (the graph of the differential of f) intersects SSℱ|_U at (x, ξ) transversely.
A transverse test family (ttfam) of ℱ at a smooth point (x, ξ) of SSℱ, denoted by (T,U,V,f), is the following data (here 𝔸^1=ℂ):
x_T:=x× T ↪ V ↪ U× T, f: V→𝔸^1_T:=𝔸^1× T
where:
i) T is a connected complex manifold, serving as the parameter space of the family. We will often identify T with 0× T⊆𝔸^1× T, and occasionally with x_T;
ii) U is an open neighbourhood of x, V is an open of U× T containing x_T;
iii) f is a complex analytic map such that, for all s∈ T, the s-slice f_s: V_s (:= V×_𝔸^1_T𝔸^1_s)→𝔸^1_s is a ttfun with respect to ℱ at (x,ξ) (in particular, f_s is SSℱ-transversal except at x).
i) The following variation of the definition of a ttfam will be useful below: as s varies, instead of requiring f_s to be ttfun's at a fixed ν_0=(x,ξ), we allow f_s to be ttfun's at ν(s)=(x(s),ξ(s)) for varying smooth points ν(s) on SSℱ, and require ν(s_0)=ν_0 for some s_0. We will call such families weak transverse test families (wttfam) at (x,ξ).
ii) In the real analytic context, the same definitions of the ttfun, ttfam and wttfam apply, with “complex” replaced by “real” and “ ℂ” replaced by “ ℝ”.
iii) In the algebraic context, the same definition of the ttfun applies, with “complex analytic” replaced by “regular”, and “neighbourhood” means Zariski neighbourhood. For the ttfam, see Definition <ref>.
Let X be a complex analytic manifold, ℱ∈ D(X), (x, ξ) a smooth point of SSℱ. Then:
i) For a ttfun f of ℱ at (x, ξ), ϕ_f(ℱ)_x∈ D^b_c(ℂ[ℤ]) is abstractly independent of f, i.e. for any other ttfun g, there exists a (noncanonical) isomorphism ϕ_f(ℱ)_x≅ϕ_g(ℱ)_x in D^b_c(ℂ[ℤ]).
ii) For any ttfam (T,U,V,f) of ℱ at (x, ξ), ϕ_pf(ℱ_V) is a local system on x_T≅ T, with stalks at s canonically isomorphic to ϕ_f_s(ℱ)_x, for all s∈ T. Here p is the projection 𝔸^1× T→𝔸^1, ℱ_V is the pullback of ℱ to V.
The real analytic counterpart of this result is contained in the statement and proof of <cit.>. The complex case can be easily deduced from it. We include a proof for completeness.
We refer to <cit.> for details of this paragraph and the paragraph after the next proposition. For a real analytic manifold X, D(X) denotes the triangulated category of bounded ℝ-constructible sheaves of ℂ-vector spaces. For f a real analytic function and ℱ∈ D(X), the vanishing cycle is defined as ϕ_f(ℱ)=RΓ_{f≥0}(ℱ)|_H, where H={f=0}. If H is smooth, it is also equal to d_f^*μ_Y(ℱ),[We use the notation “^*” instead of “^-1” (as in <cit.>) for sheaf pullback.] where μ_Y is the microlocalisation along Y, d_f is the map Y→ T^*_YX, y↦ (y,df).
<cit.>
Let X be a real analytic manifold, ℱ∈ D(X), (x, ξ) a smooth point of SSℱ. Then for any wttfam (T,U,V,f) of ℱ at (x, ξ) (Remark <ref> i)), ϕ_pf(ℱ_V) is a local system on x_T≅ T, with stalks at s canonically isomorphic to ϕ_f_s(ℱ)_x, for all s∈ T. Here p is the projection 𝔸^1× T→𝔸^1, ℱ_V is the pullback of ℱ to V.
The proof in <cit.> works for ξ≠ 0. We sketch the argument. Let H={pf=0}, H_s={f_s=0}. We have ϕ_pf(ℱ_V)=d_pf^*μ_H(ℱ_V), ϕ_f_s(ℱ)=d_f_s^*μ_H_s(ℱ). Let W=SSℱ_V∩ T^*_HV=ℝ_> 0{(s,x,d(f_s))}_s∈ T, W_s=SSℱ∩ T^*_H_sV_s=ℝ_> 0{(s,x,d(f_s))}. By the estimate of SS of microlocalisations (<cit.>), one checks that SS(μ_H(ℱ_V))⊆ T^*_WT^*_HX. This implies μ_H(ℱ_V)|_W is locally constant. The first statement follows. By functoriality of the microlocalisation under noncharacteristic pullbacks (<cit.>), we get μ_H(ℱ_V)|_W_s≅μ_H_s(ℱ). The second statement follows.
For ξ=0. Consider the embedding i: X=X×{0}↪ X×ℝ. Let z be the standard coordinate on ℝ. One checks that the family of functions {z-f_s}_s∈ T gives a wttfam for i_*ℱ at (x,ξ'), where ξ' is any nonzero conormal vector at x of X in X×ℝ. Then the previous case applies, and the compatibility of vanishing cycles and proper pushforwards implies this case.
For a complex analytic manifold Y, denote by Y^ℝ the underlying real analytic manifold, there is a canonical identification (T^*Y)^ℝ=T^*Y^ℝ (see, e.g., <cit.>). For a complex analytic function h, (Γ_dh)^ℝ=Γ_Re(h) under this identification. In particular, if h is a ttfun for some ℱ then so is Re(h). Furthermore, by <cit.>, for a general h we have a canonical isomorphism ϕ_h≅ϕ_Re(h)|_H in D(H), where H={Re(h)=0}.
ii) is immediate from the above paragraph and Proposition <ref>: a ttfam on X induces a ttfam on X^ℝ whose vanishing cycle is a local system with stalks isomorphic to the vanishing cycles on the slices. Transfer back to complex vanishing cycles, we get the result.
i) follows from the following observation: given any ttfam (T,U,V,f) on X, consider the family ((T×ℂ^×)^ℝ,U^ℝ,(V×ℂ^×)^ℝ,g) on X^ℝ, which on each slice (s,λ)∈ (T×ℂ^×)^ℝ is given by g_(s,λ)=Re(λ f_s). One checks this is a wttfam. By Proposition <ref>, ϕ_pg(ℱ_(V×ℂ^×)^ℝ) is a local system on (T×ℂ^×)^ℝ with stalks at (s,λ) isomorphic to ϕ_Re(λ f_s)(ℱ)_x. Moreover, ϕ_Re(λ f_s)(ℱ)_x viewed as a local system with respect to λ (i.e. ϕ_pg(ℱ_(V×ℂ^×)^ℝ)|_s×ℂ^×) is exactly ϕ_f_s(ℱ)_x viewed as a local system on ℂ^×. This implies ϕ_f_s(ℱ)_x≅ϕ_f_s'(ℱ)_x (noncanonically) for any s,s'∈ T.
So, to show i), it suffices to show that any two ttfun's can be connected by a ttfam. This is a simple exercise: fix a coordinate, expand a ttfun in power series, cut off degree ≥ 3 terms with a ttfam[Note that being a ttfun only depends on degree ≤ 2 terms.], then observe that the space of all quadratic terms which makes the function a ttfun is a connected complex manifold. (See proof of Lemma <ref> i) for a detailed argument in the algebraic context.)
As mentioned in the introduction, Proposition <ref> is a fundamental fact underlying many microlocal-sheaf-theoretic constructions. In particular, to any smooth point (x, ξ) in SSℱ, this allows us to define the microstalk(μstalk) of ℱ at (x,ξ): take ϕ_f(ℱ)_x for any ttfun f at (x,ξ). It is an object in D^b_c(ℂ[ℤ]), independent of f in the sense above.
Another (related) fundamental feature of real/complex analytic microlocal sheaf theory is its invariance under contact transformations, of which the Radon transform is the prototypical case. We will not discuss the full invariance, but focus on one aspect of it: how microstalks change under the Radon transform.
[for both complex analytic and algebraic contexts]
ℙ ←^p Q →^q ℙ^∨
Here ℙ is the abbreviation for ℂℙ^n, ℙ^∨ is its dual, Q is the universal incidence variety. Let ℱ∈ D(ℙ). Its Radon transform is defined as Rℱ=q_!p^*ℱ[n-1]. Denote by P(...) the projectivisation of (...) after removing the zero section. We have the following facts (see, e.g., <cit.><cit.>):
i) P(T^*ℙ)≅ Q ≅ P(T^*ℙ^∨);
ii) If z is a point in Q, x=p(z), a=q(z), let ξ, α be nonzero covectors at x, a which are conormal to the hyperplanes represented by a, x, respectively. Then z is the codirection represented by ξ, α under the identifications in i). Furthermore, T_z^*Q equals the pushout of T_x^*ℙ and T_a^*ℙ^∨ along ⟨ξ⟩ and ⟨α⟩ (via dp_x and dq_a). We say (x, ξ) and (a, α) correspond to each other;
iii) SS^+(Rℱ)=q_∘SS^+(p^*ℱ)=q_∘p^∘SS^+ℱ, where ^+ means adding the zero section. PSSℱ=PSSRℱ as subvarieties of Q.
Let ν be a smooth point in PSSℱ=PSSRℱ. Then, μ stalk(Rℱ)_ν≅μ stalk(ℱ)_ν if n is odd, and μ stalk(Rℱ)_ν≅μ stalk(ℱ)_ν⊗𝒦_2 if n is even. Here 𝒦_2 ∈ D^b_c(ℂ[ℤ]) is the vector space ℂ concentrated in degree 0, with 1∈ℤ acting by multiplication by -1.
In particular, as vector spaces (i.e. as objects in D^b_c(ℂ)), microstalks are invariant for all n.
We will prove a similar result in the algebraic context (Proposition <ref>). The same proof plus Proposition <ref> imply the statement here. For Thom-Sebastiani in the complex analytic context, see, e.g., <cit.>.
§.§ Algebraic context
We now consider the algebraic context: let X be a smooth variety over a field k algebraically closed of characteristic p≥0, and D(X) be the triangulated category of bounded constructible complexes of étale sheaves of 𝔽_ℓ-vector spaces. The notion of the singular support SSℱ is defined for ℱ∈ D(X) (<cit.>). It is a half-dimensional conic closed subset in T^*X. As in the analytic case, it records the non-locally-acyclic codirections of ℱ, and has a similar description in terms of test functions and vanishing cycles.
In the world of positive characteristic, a basic difference is that singular supports need not be Lagrangian[Actually, Deligne (<cit.>) showed that on a smooth surface X, any half-dimensional conic closed subset in T^*X can be generically realised as a component of some SSℱ.]. We will record later more new phenomena (Section <ref>). Here we discuss the failure of the analogue of Proposition <ref>.
We will use the following result of Deligne-Laumon to compute the dimensions of vanishing cycles (<cit.>, see <cit.> for a more general version):
Let S be a Noetherian excellent scheme, f: X → S a separated smooth morphism of relative dimension 1, Z a closed subscheme of X finite flat over S with a single point in each fibre. Let ℱ∈ D(X) be a !-extension of a locally constant sheaf concentrated in degree 0 on U=X-Z. Define an ℕ-valued function a_s on the points of S:
a_s:= dimtot((ℱ|_X_s̄)_η_z)
where s̄ is a geometric point over s with residue field an algebraic closure of the residue field of s, z is a geometric point of Z above s̄, η_z is a geometric point over the generic point of the strict henselisation of X_s̄ at z, and dimtot means swan+dim_𝔽_ℓ.
Then:
i) a_s is constructible, and a_s≤ a_η if η specialises to s.
ii) (f,ℱ) is universally locally acyclic (ULA) if and only if a_s is locally constant.
iii) If S is an excellent strict henselian trait, denote its closed and generic points by s,η respectively, then
a_s-a_η=dim(ϕ_f(ℱ)_z).
We now come to examples showing the failure of the analogue of Proposition <ref>.
(p>2) Let X=𝔸^2=Spec( k[x,y]). Fix a nontrivial character ψ: 𝔽_p→𝔽_ℓ.[assuming it exists, i.e., p|(ℓ-1).] Let ℱ be the Artin-Schreier sheaf determined by the equation t^p-t=y/x^p, !-extended along D={x=0}.[This equation determines a finite Galois covering of U=X-{x=0}, with Galois group 𝔽_p, corresponding to a surjection π(U,η_U)↠𝔽_p. Composing with ψ gives a representation of π(U,η_U), which is the same thing as a local system on U.] One can show SSℱ=T^*_XX ∪⟨ dy⟩_D, where ⟨ dy⟩_D denotes the subspace of D×_X T^*X consisting of covectors proportional to dy (see, e.g., [Saito17, 3.6]). Consider the following family of ttfun's (Definition <ref>) at ν=((0,0),dy): f_s(x,y):=y/(1+x)+sx^N, where N is some integer ≥ 3, s∈ k. It is simple to check that for each fixed s, f_s (restricted to some Zariski neighbourhood of (0,0)) is indeed a ttfun.
For a fixed s, apply Deligne-Laumon to f_s: U →𝔸^1, where U is some Zariski open neighbourhood of (0,0) on which f_s is defined. Let ρ be the standard coordinate on 𝔸^1. The fibre f_s^-1(ρ) is locally isomorphic to 𝔸^1 with x as a coordinate. ℱ|_f_s^-1(ρ) is the Artin-Schreier sheaf determined by t^p-t=(ρ-sx^N)(1+x)/x^p (!-extended at x=0). In formula (<ref>), dim(ℱ_z)=0, dimtot((ℱ|_X_s)_η_z)=1+sw((ℱ|_X_s)_η_z). The Swan conductors are easily computed:
sw(ρ, s)       s=0      s generic
ρ=0            0        p-N
ρ generic      p-1      p-1
(case 3≤ N < p)

sw(ρ, s)       s=0      s generic
ρ=0            0        0
ρ generic      p-1      p-1
(case N ≥ p)
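These values follow from the standard reduction for Artin-Schreier sheaves: the Swan conductor at x=0 of the sheaf defined by t^p-t=g equals the order of the pole of g after removing terms of the form h^p-h, provided that order is prime to p. Spelling this out for the fibre above ρ:

(ρ-sx^N)(1+x)/x^p = ρ/x^p + ρ/x^(p-1) - sx^(N-p) - sx^(N-p+1) ∼ ρ^(1/p)/x + ρ/x^(p-1) - sx^(N-p) - sx^(N-p+1),

where we subtracted (ρ^(1/p)/x)^p-ρ^(1/p)/x. For ρ generic the pole order is p-1; for ρ=0 it is p-N if 3≤ N<p and s is generic, and 0 if s=0 or N≥ p, in agreement with the tables above.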
By formula (<ref>), dim(ϕ_f_s(ℱ)) = a_(ρ=0) - a_(ρ generic) = sw(ρ=0)-(p-1), so the dimensions of ϕ_f_s(ℱ) are as follows:

dim(ϕ_f_s(ℱ))      s=0        s generic
3≤ N < p           -(p-1)     -(N-1)
N ≥ p              -(p-1)     -(p-1)
We see that if p>3 and 3≤ N < p, then dim(ϕ_f_s(ℱ)) depends on the parameter s. So the analogue of Proposition <ref> is false. Nevertheless, if N ≥ p, then dim(ϕ_f_s(ℱ)) does not depend on s (for s in a small neighbourhood of 0∈𝔸^1). This is a first indication that vanishing cycles depend on the ttfun only up to a finite jet. We will come back to this in Section <ref>.
In this example, SSℱ is not Lagrangian. Does the analogue of Proposition <ref> hold if restricted to sheaves whose SS's are Lagrangian? The answer is no, as the next example shows:
Same setup and notation as above, but consider the Artin-Schreier sheaf determined by t^p-t=y/x^(p-1). One can show SSℱ=T^*_XX ∪ T^*_(0,0)X ∪⟨ dx⟩_D. Consider the same ν and the same family of ttfun's as above.
The computation is similar, the results are as follows:
sw(ρ, s)       s=0      s generic
ρ=0            0        p-N-1
ρ generic      p-1      p-1
(case 3≤ N < p-1)

sw(ρ, s)       s=0      s generic
ρ=0            0        0
ρ generic      p-1      p-1
(case N ≥ p-1)
We remark that the simplicity of π_1 (in particular, that it splits locally, and that all ramifications are tame) is one fundamental reason why in the analytic context vanishing cycles have strong stability, so strong that they “live” on the cotangent bundle, leading to fundamental constructions in the theory of microlocal sheaves. In the positive characteristic algebraic context, due to the complexity of π_1 (or more or less the same thing, wild ramifications), the (micro)local data of a sheaf is huge. This is analogous to the distinction between regular holonomic D-modules and general holonomic D-modules. In the appendix, we list some more analogies and distinctions.
§ THE STABILITY OF VANISHING CYCLES
This section is devoted to discussing the stability of vanishing cycles in the positive characteristic algebraic context. In Section <ref> we discuss the independence of dimtot(ϕ) with respect to the ttfun. In Section <ref> we discuss the independence of ϕ of high jets of the ttfun. We fix the following setup:
X is a smooth variety over a field k algebraically closed of characteristic p>2, ℱ∈ D(X), (x,ξ)∈ SSℱ a smooth point. Note that in this setup ttfun's (Definition <ref>) at (x,ξ) always exist (<cit.>).
§.§ The stability of dimtot(ϕ)
<cit.>
With the above setup, for a ttfun f, dimtot(ϕ_f(ℱ)_x) is independent of f.
To see this is true, just apply the Milnor formula (<cit.>): for a ttfun f, dimtot(ϕ_f(ℱ)_x) is equal to minus the coefficient of CCℱ at x. But logically, this proposition comes before the Milnor formula. Indeed, the very fact that dimtot of vanishing cycles “live” on the cotangent bundle allows one to define the characteristic cycle. See Remark <ref> for a direct proof of this proposition.
<cit.>
dimtot(ϕ) is invariant under the Radon transform. More precisely, in the Radon setup <ref>, let (x,ξ) be a smooth point of SSℱ with ξ≠ 0. Denote by ν the image of (x,ξ) in PSSℱ. Let (a, α) be any representative of ν in T^*ℙ^∨. Let f (resp. g) be any ttfun for ℱ (resp. Rℱ) at (x,ξ) (resp. (a,α)). Then dimtot(ϕ_f(ℱ)_x) = dimtot(ϕ_g(Rℱ)_a).
This is a consequence of <cit.>. For later purposes, we record a direct proof in our setting.
By the compatibility of vanishing cycles with proper pushforwards, ϕ_g(q_!p^*ℱ)_a≅ q_*ϕ_gq(p^*ℱ). By the following Lemma <ref>, gq is a ttfun for p^*ℱ at (z, ζ), where z=(x,a), ζ=dq(α). Moreover, z is the only point in q^-1(a) where d(gq) and SS(p^*ℱ) intersects. So q_*ϕ_gq(p^*ℱ)≅ϕ_gq(p^*ℱ)_z as objects in D^b_c(𝔽_ℓ[G_η]) (see Conventions for notations). By Proposition <ref>, dimtotϕ_gq(p^*ℱ)_z can be computed by any ttfun for p^*ℱ at (z, ζ). Use f+h where h is a quadratic function in the fibre direction of p (in a local coordinate). It is an exercise to check that this is a ttfun. Apply Thom-Sebastiani <cit.>, the assertion follows. Note the shift is computed as (n-1)+(-(n-2))+(-1), where the first term comes from the definition of the Radon transform, the second and third terms come from Thom-Sebastiani.
In the Radon setup <ref>, let ℱ∈ D(ℙ), (a,α) a smooth point in SSRℱ, α≠ 0, g a ttfun for Rℱ at (a,α). Then
i) on q^-1(a), Γ_d(gq) intersects SS(p^*ℱ) only at (z, ζ)∈ T^*Q, where z is the point in Q≅ PT^*ℙ^∨ corresponding to (a, α), and ζ=dq(α);
ii) the intersection of Γ_d(gq) and SS(p^*ℱ) at (z, ζ) is transverse.
In particular, gq is a ttfun for p^*ℱ at (z, ζ).
i) If (z', ζ') is in the intersection and z'∈ q^-1(a), because SS^+(Rℱ)=q_∘SS^+(p^*ℱ)=q_∘p^∘SS^+ℱ, there must exist an (x', ξ')∈ T^*ℙ such that a) it lies in SS^+ℱ; b) it corresponds to (a, α). But a) forces x'=p(z'), ξ'= the unique covector at x' which pulls back under dp_x' to ζ'; b) forces (z', ζ')=(z, ζ).
Note actually more is true: if we restrict to a small Zariski neighbourhood V of a on which Γ_dg intersects SS^+(Rℱ) only at (a, α), then (z, ζ) is the only point of intersection of Γ_d(gq) and SS(p^*ℱ) on q^-1(V).
ii) Let V be a neighbourhood as above. Consider the correspondence:
T^*Q ←^u Q×_VT^*V →^v T^*V
Abbreviate SS(p^*ℱ) as C. Abuse notation, denote the restriction of Γ_dg, Γ_d(gq) to over V by Γ_dg, Γ_d(gq) again. Let u_*, v^* denote the intersection theoretic pushforward and pullback. We claim Γ_d(gq)=uv^-1Γ_dg=u_*v^*Γ_dg (i.e. no > 1 multiplicities are introduced in the intersection theoretic pull and push). Assume this for now. Note Γ_d(gq) intersects C at the single point (z, ζ). We want to compute the intersection number. Since u is a closed immersion and v is proper smooth, u^*C and v^*Γ_dg also intersect at a single point and, by the projection formula from intersection theory, C.u_*v^*Γ_dg=(u^*C).(v^*Γ_dg)=(v_*u^*C).Γ_dg. We claim that v_*u^*C=vu^-1C. Assuming this, then vu^-1C^+=q_∘C^+=SS^+(Rℱ), so C.u_*v^*Γ_dg=SSRℱ.Γ_d(gq)=1.
It remains to show the two claims. The first claim follows from v being smooth and u being a closed immersion. For the second claim, we show separately below u^* and v_* introduce no > 1 multiplicities:
u^*: This is intersecting C with Q×_VT^*V. Since (a,α) is a smooth point, C is also smooth near (z, ζ). By counting dimensions, it suffices to find n-1 tangent vectors of C which are not tangent to Q×_VT^*V. One verifies that the tangents of C in the p fibre direction work.
v_*: After removing the zero section, u^*C lies in the “diagonal” of Q×_VT^*V, so is mapped isomorphically to its image. More precisely, consider Q×_ℙ^∨T^*ℙ^∨→ T^*ℙ^∨ (Q×_VT^*V→ T^*V is then its base change to V). Note Q×_ℙ^∨(T^*ℙ^∨-T_ℙ^∨^*ℙ^∨)→ (T^*ℙ^∨-T_ℙ^∨^*ℙ^∨) admits a natural 𝔾_m action. Take the quotient (which does not change multiplicity computations), we get Q×_ℙ^∨Q→ Q (identifying (T^*ℙ^∨-T_ℙ^∨^*ℙ^∨)/𝔾_m with Q), where the map is just the projection to the second factor. Then, by C=p^∘SSℱ and the description of T_z'^*Q in the Radon setup <ref>, one checks that (u^*C-zero section)/𝔾_m lies in the diagonal of Q×_ℙ^∨Q, so is mapped isomorphically to its image.
§.§ The high-jet stability of ϕ
Returning to Examples <ref>, <ref>, we noticed that when N is large enough, the Swan conductors are independent of s (for s in a small neighbourhood of 0∈𝔸^1); consequently the dim(ϕ)'s are independent of s. This suggests that the dependence of vanishing cycles on the ttfun is only up to a high enough jet. In this subsection, we formulate precisely the notion of vanishing cycles being stable with respect to the variation of the ttfun in order ≥ N terms and prove such a result in a special case. At the end, we discuss the relation between our result and a result of Saito (whose formulation and proof inspired ours). Recall the setup for Section <ref>.
§.§.§ Transverse test families
We introduce some preliminary notions (c.f. Definitions <ref>, <ref> and Remark <ref> iii)).
An N-transverse test family (N-ttfam) of ℱ at a smooth point (x, ξ) of SSℱ for N≥ 2 in ℕ, denoted by (T,U,V,f), is the following data (here 𝔸^1=𝔸^1_k):
x_T:=x× T ↪ V ↪ U× T, f: V→𝔸^1_T:=𝔸^1× T
where:
i) T is a connected smooth finite type scheme over k. We will often identify T with 0× T⊆𝔸^1× T, and occasionally with x_T;
ii) U is a neighbourhood of x (x is implicitly viewed as a point of U), V is a Zariski open of U× T containing x_T;
iii) f is a morphism such that, for all geometric points s of T, the s-slice f_s: V×_𝔸^1_T𝔸^1_s =: V_s →𝔸^1_s is a ttfun with respect to ℱ at (x,ξ)[Abuse notation: this means the base change of (x,ξ) to over s.] (recall Definition <ref>, in particular, f_s is SSℱ-transversal except at x).
iv) For any two closed points s,s' of T, f_s ≡ f_s' (mod 𝔪_x^N).
For the rest of this section, we will often say ttfam without specifying N, meaning N-ttfam for some N.
Let ℱ∈ D(X) and (T,U,V,f) be a ttfam for SSℱ at (x,ξ). The vanishing cycle associated to this ttfam is the following sheaf on TT:=x_T×_S (S-T), where S:=𝔸^1_T:
ϕ_f(ℱ):= Φ_f(ℱ_V)|_TT
Here ℱ_V is ℱ pulled back to V, × is the oriented product, Φ is the vanishing cycle over general bases for f: V→ S.[We refer to <cit.> for basics of oriented products and vanishing cycle over general bases.]
i) In a ttfam, the condition on f implies that f is SSℱ-transversal outside x_T (essentially because for s↪ T the conormal bundle of V_s↪ V is isomorphic to the pullback of conormal bundle of 𝔸^1_s↪𝔸^1_T, see <cit.> for a proof). This implies it is ULA with respect to ℱ outside x_T.
ii) Directly from the definition of being SSℱ-transversal, f is smooth in a neighbourhood of the base of SSℱ_V except possibly at points in x_T. If ξ≠ 0, then f is also smooth on x_T; if ξ=0, then f is not smooth on x_T, nevertheless it is still flat on x_T because
V→𝔸^1_T is always a family of hypersurfaces in a neighbourhood of x_T, hence flat there.
iii) Apply <cit.> and <cit.> we see: Φ_f(ℱ_V) is constructible and commutes with any base change. In particular, it is supported on x_T×_𝔸^1_T𝔸^1_T and its restriction to each slice equals the usual vanishing cycle, i.e. for any geometric point s of T, Φ_f(ℱ_V)|_V_s×_𝔸^1_T𝔸^1_s is supported on x×_𝔸^1_s (𝔸^1_s-0)≅𝔸^1_s,(0)-{0} and canonically isomorphic to ϕ_f_s(ℱ)_x.
iv) Apply <cit.> to geometric points x of x_T, t of T (identified with 0× T⊆𝔸^1_T) and u of 𝔸^1_T -T, we get a distinguished triangle:
Ψ_f(ℱ_V)_x← t→Ψ_f(ℱ_V)_x← u→Φ_f(ℱ_V)_t← u→
where the first map is the cospecialisation and the second is the composition of Ψ_f(ℱ_V)_x← u→Φ_f(ℱ_V)_x← u and the cospecialisation Φ_f(ℱ_V)_x← u→Φ_f(ℱ_V)_t← u.
Compose this with
ℱ_x = ℱ_x → 0 →
and take the cone (<cit.>) we get
ℱ_x  =  ℱ_x  →  0  →
 ↓          ↓          ↓
Ψ_f(ℱ_V)_x← t → Ψ_f(ℱ_V)_x← u → Φ_f(ℱ_V)_t← u →
 ↓          ↓          ∥
Φ_f(ℱ_V)_x← t → Φ_f(ℱ_V)_x← u → Φ_f(ℱ_V)_t← u →
(each column is a distinguished triangle)
where the two maps in the third row are cospecialisations.
v) Later we will often consider the condition of Φ_f(ℱ_V)|_TT being a local system. Combine iii) and iv), we see: Φ_f(ℱ_V)|_TT is a local system if and only if in the following diagram, (f_T, ℱ_T) (ℱ_T is the base change of ℱ to V_T) is ULA, which is equivalent to Φ_f_T(ℱ_T)=0 since Φ commutes with base change by iii) (c.f. <cit.>):
f_T: V_T:=V×_𝔸^1_TT ⟶ T
If this is satisfied, Φ_f(ℱ_V) is automatically supported on TT.
§.§.§ Generic finite depth
Now we define precisely the notion of vanishing cycles being stable up to some N-th jet of the ttfun. Roughly speaking, this means that the vanishing cycles form a local system with respect to the variation of the ttfun in order ≥ N terms.
The setup is as at the beginning of the section. The depth of ℱ at (x,ξ), denoted by depth(ℱ)_(x,ξ), is the smallest N∈ℕ such that for any N-ttfam (T,U,V,f) at (x,ξ), ϕ_f(ℱ) is a local system.
i) One cannot change the assumption on test functions from having transverse intersection with SS to having isolated intersection. Because the latter puts no restrictions on the intersection multiplicities[e.g. f_s: 𝔸^1→𝔸^1, x↦ (1-s)x^n+sx^m.], and the resulting depth would be infinity in general.
ii) depth is local.
iii) In the next section, we will prove certain functorialities of the depth and study sheaves with depth 2.
We record a basic question which we do not know how to answer yet:
How does depth change under smooth pullbacks?
We introduce the following terminology for convenience, to distinguish from the number of blowups.
(blowup stages)
We say a blowup sequence at closed points has r blowup stages if the longest sequence of successive blowups at points in new exceptional divisors has length r.
For example, the following blowup sequence has 2 blowup stages, while its number of blowups is 3.
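For instance (our own illustrative configuration, not necessarily the one drawn in the figure): blow up the surface at two distinct closed points x_1 and x_2, and then blow up a closed point lying on the exceptional divisor over x_1. This is a sequence of 3 blowups, while the longest chain of successive blowups has length 2, so the sequence has 2 blowup stages.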
Here is the first version of our result.
Let X be a smooth surface over an algebraically closed field k of characteristic p>2, x∈ D be a smooth point of a prime divisor, U=X-D, ℱ=j_!ℱ_U, where ℱ_U is a local system on U concentrated in degree 0, (x,ξ) be a smooth point of SSℱ with ξ≠ 0. Then ℱ has finite depth at (x,ξ) if the following conditions are satisfied:
i) (x,ξ)∈ SSℱ is nonexceptional, in the sense that either it is not conormal to D, or the component of SSℱ it lies in is the conormal of D.
ii) Let Ū→ U be the minimal Galois covering trivialising ℱ,[i.e., the covering corresponding to the quotient π_1(X,η_X)↠ G, where G is the image of π_1(X,η_X) in Aut_𝔽_ℓ(ℱ_η_X).] X̄ be the normalisation of X in Ū, and D̄=D×_XX̄. We require that X̄ and D̄_red are smooth at points above x.
iii) Let C↪ W be a smooth curve on some open neighbourhood W of x passing through x, with ξ∈ T^*_CW. We call C a test curve if C is the zero locus of some ttfun on W. Base change X̄→ X to W̄→ W and let C̄=C×_WW̄. Consider the number M_C:= the minimal number of blowup stages needed to resolve C̄. We require that M_C is bounded by some M uniform for all test curves (on all open neighbourhoods of x).
Before giving the proof, we need to introduce a measure of “thickness” of an ideal.
Let A be a Noetherian ring and I an ideal. Define ep(I):= the smallest r ∈ℕ such that √(I)^r⊆ I; it exists because A is Noetherian.
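As a quick sanity check of the definition (a standard toy example, not taken from the text): in A=k[x,y] take I=(x^2,y). Then √(I)=(x,y), and (x,y)^2=(x^2,xy,y^2)⊆(x^2,y) while x∉(x^2,y), so ep(I)=2.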
i) ep(I)= sup_x(ep(I_x)), where x ranges through all closed points of Spec(A), and I_x denotes the localisation of I at x.
ii) Assume A is Noetherian local excellent, then ep(I)=ep(Î), where Î is the completion of I with respect to the maximal ideal.
Note that an adic completion of a Noetherian ring is Noetherian ([Stacks, 0316]), so statement ii) makes sense. Effectively, the lemma says: ep(I) can be computed locally in the completion.
i) Follows from two standard commutative algebra facts: a) localisation commutes with taking radicals; b) inclusion relations of ideals can be checked by localisations at all closed points.
ii) (Recall that by Krull's Intersection Theorem (<cit.>), ideals of A (in particular A itself) inject into their completions.) Let J denote the closure of √(I) in Â (equivalently, the extension √(I)Â, since A is Noetherian). First note that A/√(I) being reduced implies, by excellence of A, that Â/J, the completion of A/√(I), is reduced, so J is a radical ideal. Moreover J is contained in √(Î): if x_∞∈ J, let {x_i}⊆√(I) converge to it; then {x_i^ep(I)}⊆ I converges to x_∞^ep(I), so x_∞^ep(I) lies in Î and x_∞∈√(Î). Since J contains Î and is radical, also √(Î)⊆ J; hence J=√(Î). This argument also shows ep(Î)≤ ep(I). For the converse, just notice √(I)^ep(Î)⊆ J^ep(Î)= √(Î)^ep(Î)⊆Î, so √(I)^ep(Î)⊆Î∩ A=I, where the last step uses the fact that ideals of A are closed in A (see, e.g., <cit.>).
By Remark <ref> v), ϕ_f(ℱ) being a local system is equivalent to Diagram <ref> being ULA. By Deligne-Laumon (Theorem <ref>), this is equivalent to the function a_s (Formula <ref>) being constant. By the constructibility of a_s, this is further equivalent to being constant for closed s. Note that assumption i) ensures Deligne-Laumon is applicable in our situation, and a_s is just the Swan conductor of ℱ restricted to the curve C_s:={f_s=0}⊆ V_s at x. So it suffices to show:
In the setup of the theorem, there exists some N∈ℕ such that for any two test curves C≡ C' mod 𝔪_x^N on the same open neighbourhood of x, we have sw(C)=sw(C'), where sw denotes the Swan conductor at x of the restriction of ℱ.
We digress to recall a few facts about Swan conductors. For details, see, e.g., <cit.>. The notations here are independent of the rest of the proof. Let C be a strict henselian trait and ℱ a sheaf at its generic point η, concentrated in degree 0, given by the Galois representation G_η↠ G↪ Aut_𝔽_ℓ(ℱ_η). Let C'→ C be the normalisation of C in the Galois cover of η corresponding to G; C' is a trait. To compute the Swan conductor of ℱ, one first forms the filtration G=G_0⊇ G_1⊇... induced by i_G: G→ℕ∪{∞}, σ↦ v'(σ(π')-π') if σ≠ id, ∞ if σ=id, where π' is any uniformiser of C', v' is the discrete valuation on C', and G_i={σ∈ G| i_G(σ)≥ i+1}. Then
sw(ℱ)=∑_i≥ 1 dim(ℱ_η/ℱ_η^G_i)/[G:G_i]
Important for us is the following geometric interpretation of i_G (see, e.g., <cit.>). Consider the G-action on C'. Then for σ≠ id, i_G(σ)=(Γ_σ.Δ_C'), where the latter is the intersection number of the graph of σ and the diagonal. Denote by I_σ,C' the ideal on C' corresponding to this intersection, then (Γ_σ.Δ_C')=length_𝒪_C'(𝒪_C'/I_σ,C').
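For orientation, recall also the standard example (a well-known computation, included here only for convenience): if C=Spec k[[u]] with k algebraically closed of characteristic p, and ℱ is the rank-one Artin-Schreier sheaf at η defined by t^p-t=u^-n with p∤n, then all breaks of the corresponding character equal n and sw(ℱ)=n. In other words, sw equals the pole order of the defining function whenever that order is prime to p; this is the computation used in the explicit examples below.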
Back to the proof of the theorem. We first fix C↪ W for W some open neighbourhood of x and find N_C such that for any other test curve C'↪ W satisfying C≡ C' mod 𝔪_x^N_C, we have sw(C)=sw(C'). Then we show N_C is bounded uniformly as C varies.
Base change X̄→ X to W̄→ W and consider the following diagram:
C̃+E ↪ W̃
  ↓         ↓
 C̄   ↪  W̄
  ↓         ↓
 C   ↪  W
(both squares are cartesian)
where W̃ is obtained from W̄ by successive blowups at closed points until C̄ is resolved, C̃ is the strict transform of C̄, and E is the collection of exceptional divisors (with multiplicities). Note C̃ is smooth and equal to the normalisation of C̄[e.g. by Zariski's Main Theorem.]. We require that we blow up each time simultaneously at all points above x in the strict transforms of C̄, so that the G-action always extends. Let M_1= the maximum of the multiplicities in E. Let M_2=max_σ≠ id ∈ G{ep(I_σ̃)}, where σ̃ is the extension (by the universal property of normalisations) of σ to W̃.
Claim: N_C:= M_1+M_2.(D.C)_x.|G| satisfies our purpose. Here (D.C)_x is the intersection number of D and C at x. A simple computation shows that in the nonexceptional situation, if ξ is not conormal to D, then (D.C)_x=1; if ξ is conormal to D, then (D.C)_x=2.
Proof of the claim: if C'↪ W is another test curve, let C̄' be its normalisation. By the above recollection on Swan conductors, to show sw(C)=sw(C') it suffices to exhibit a bijection {x̃}↔{x'} between the points of C̃, C̄' above x such that, for corresponding points, the quantities length_𝒪_C̃,x̃(𝒪_C̃,x̃/I_σ̃,C̃.x̃), length_𝒪_C̄',x'(𝒪_C̄',x'/I_σ',C̄'.x') are equal for each σ≠ id in G. But if C≡ C' mod 𝔪_x^N_C, then C̃≡C̃' mod I_E_red^N_C-M_1, which implies: a) {x̃}:=C̃∩ E_red=C̃'∩ E_red; b) a fortiori C̃≡C̃' mod 𝔪_x̃^N_C-M_1, so C̃' is also smooth (hence equal to C̄', the normalisation of C'). From now on we abbreviate 𝒪_C̃,x̃ etc. by 𝒪 and drop the subscripts.
Consider length_𝒪_C̃,x̃(𝒪_C̃,x̃/I_σ̃,C̃.x̃). Estimate:
length(𝒪/I_σ̃,C̃.x̃)=length(𝒪/I_σ̃.𝒪)≤ M_2.length(𝒪/√(I_σ̃).𝒪)
≤ M_2.length(𝒪/√(I_D).𝒪)≤ M_2.length(𝒪/I_D.𝒪)=M_2.(D.C)_x≤ M_2.(D.C)=M_2.(D.C)_x.|G|
where for the last equality we used the projection formula from intersection theory in the form of <cit.>. Since C̃≡C̃' mod 𝔪_x̃^M_2.(D.C)_x.|G|, length(𝒪/I_σ̃,C̃.x̃) and length(𝒪/I_σ̃,C̃'.x̃) must be equal, because C̃ and C̃' are equal in the (M_2.(D.C)_x.|G|)-th infinitesimal neighbourhood of x̃∈W̃. This proves the claim.
It remains to show that N_C is bounded with respect to C. For M_1, by Lemma <ref>, M_1≤ 2^(M_C-1).max_x̄{mult_x̄(C̄)}. Let b=max_x̄ {smallest r∈ℕ s.t. 𝔪_x̄^r⊆𝔪_x.𝒪_X̄,x̄}. Then mult_x̄(C̄)≤ b, so M_1≤ 2^(M_C-1).b≤ 2^(M-1).b. The estimate for M_2 is done in Lemma <ref>.
Let C be a curve on a smooth surface X. Suppose C has a single singularity at a closed point x. Let mult_x(C) be the multiplicity of C at x. Then, after M stages of blowups at closed points (Terminology <ref>), the largest multiplicity of the strict transform of C at its singularities is ≤ 2^(M-1).mult_x(C).
After the first blowup, C_1:= the strict transform of C has mult_x(C_1)≤mult_x(C) at each point x above x, and the exceptional divisor E_1 has multiplicity =mult_x(C) (<cit.>). In the second blowup (at a closed point in E_1), C_2 still has multiplicities ≤mult_x(C), and mult(E_2)≤max_{x}{mult_x(C_1)}+mult(E_1)≤ 2.mult_x(C). Iterate.
Let X be a smooth surface, σ≠ id be an automorphism of X, X^σ_red be the fixed locus, x_0∈ X^σ_red be a closed point. Assume X^σ_red is a prime divisor smooth at x_0. Then, after M stages of blowups at closed points which are fixed by (extensions of) σ, we have ep(I_σ̃)≤ (2p+1)^M.ep(I_σ), where σ̃ denotes the last extension of σ.
We will implicitly use Lemma <ref> in the following proof.
Clearly, it suffices to consider M successive blowups, each time at a closed point in the new exceptional divisors. σ induces an automorphism of the Zariski localisation of X at x_0. So we may assume X=Spec(A) is local. We need to analyse all possible configurations of fixed points in the successive blowups.
The configuration we start with is X^σ_red= a smooth curve. The blowup replaces x_0 by an exceptional divisor isomorphic to ℙ^1 (after reduction). Denote the extension of σ by σ̄. Then σ̄ acts linearly on ℙ^1 (the action is just the derivative of σ at x_0). There are three possibilities: a) σ̄|_ℙ^1 is the identity; b) σ̄|_ℙ^1 fixes two points, each with multiplicity 1; c) σ̄|_ℙ^1 fixes a single point with multiplicity 2. Then perform the second blowup, and so on. In Figure <ref>, we draw all possible local configuration changes under blowups. Solid lines and points represent fixed points. The dotted arrow represents the point of blowup. Each crossing is a simple normal crossing. One can analyse the change of ep(I_σ) in all cases and find the desired estimate. We illustrate with two cases; the others are similar.
To separate notations, in the following we will use W, w, σ, etc. to denote the starting space, point, action, etc., and W̄, w̄, σ̄, etc. to denote those after one blowup. We abbreviate ep(I_σ), ep(I_σ̄) as ep, ep̄. Let π be the projection W̄→ W.
General setup and observations: choose local coordinates (x,y) at w, and use coordinates ((x,y),[u:v]) on W̄. If W=Spec(A) then W̄ is Spec(A[x/y]) in the (y,u)-chart and Spec(A[y/x]) in the (x,v)-chart. Denote by σ^* the corresponding action on A. Then, essentially by definition, I_σ=(σ^*(a)-a) where a ranges through elements of A. Let σ^*(x)=x+f, σ^*(y)=y+g, so f,g∈ I_σ. Note π^*(I_σ)⊆ I_σ̄.[This is one precise sense in which the blowup “improves the situation”.]
We look at two specific situations.
i) Line to cross (left column top): we may assume W^σ_red={x=0}. We have √(I_σ)=(x) and x^ep∈ I_σ. We look at the (y,u)-chart; the other chart is similar. Denote by I_σ̄,(y,u) the restriction of I_σ̄ to the (y,u)-chart. Then √(I_σ̄,(y,u))=(yu). Since π^*(I_σ)⊆ I_σ̄, x^ep lies in I_σ̄,(y,u). But in the (y,u)-chart x=yu, so (yu)^ep∈ I_σ̄,(y,u), hence ep̄_(y,u)≤ ep.
ii) Point to point (right column bottom): we have √(I_σ)=(x,y) and (x,y)^ep⊆ I_σ. Without loss of generality, assume that the new fixed point w̄ lies in the (y,u)-chart, with coordinates (0,u_1). Consider the completion of W̄ at w̄. Denote by I_σ̄,ŵ the completion of I_σ̄ at w̄. Then √(I_σ̄,ŵ)=(y,u-u_1).
We now exhibit some elements of I_σ̄,ŵ, which will be sufficient for estimating ep̄. Since σ̄^*(u)=σ^*(x)/σ^*(y)=(x+f)/(y+g)=(u+f/y)/(1+g/y), we have σ̄^*(u)-u=(f/y-ug/y)/(1+g/y)∈ I_σ̄,(y,u). Upon localising at w̄, 1+g/y becomes a unit, so f/y-ug/y∈ I_σ̄,ŵ. On the other hand, we know (x,y)^ep⊆ I_σ and π^*(I_σ)⊆ I_σ̄. This implies y^ep∈ I_σ̄,ŵ.
Expand f,g in power series in y and u'=u-u_1; the terms in f/y-ug/y which do not involve y are precisely the terms of the quadratic equation on ℙ^1 for the fixed point w̄. So f/y-ug/y∈ k[[y,u']] is of the form au'^2 or au'^2+y^r(...) with (...)≠ 0, for some a≠ 0∈ k, r≥ 1. It is an exercise to see that in the former case ep̄≤ ep+2, and in the latter case ep̄≤ (2p+1)ep. (The bounds are by no means sharp.)
This completes the proof of Theorem <ref>. A natural question is: in what generality does assumption iii) hold? As we now discuss, this holds “generically”, in a precise sense to be defined.
Use the same notations as in Theorem <ref>. Let x̄∈X̄ be any point above x. We have a map between the maximal ideals in the completions of the local rings, induced by pullback: 𝔪̂_x→𝔪̂_x̄. We view elements of these maximal ideals as germs of functions or curves. Let 𝒯⊆𝔪̂_x̄ be the set of all ttfun-germs at x, i.e. the image of all ttfun's (on varying neighbourhoods of x) in 𝔪̂_x̄. Consider the following two partitions of 𝔪̂_x̄ (as a set):
i) partition by the multiplicity: 𝔪̂_x̄=R_1∐ R_2∐ R_3∐ ..., where R_r:={f∈𝔪̂_x̄| multiplicity of f=r}. Here multiplicity means the degree of the lowest order terms in the expansion of f in any coordinates.
ii) partition by the “stability”: 𝔪̂_x̄=S_0∐ S_1∐ S_2∐...∐ S_∞, where the S_i are defined as follows: given f∈𝔪̂_x̄, consider the flow chart <ref>, read from left to right. Start at the point marked by m= multiplicity of f. Consider the blowup at x̄. Let m'= the maximum of the multiplicities at the infinitely near points[This means the points of intersection of the strict transform of the curve germ and the exceptional divisor.] of this blowup. If m'<m, go up; otherwise, go down. Iterate: in each step we blow up at all infinitely near singularities created by the previous blowup and compare the maximum of the multiplicities before and after; if it strictly decreases, go up; otherwise, go down. Terminate when the curve becomes smooth. For i=0,1,2,...,∞, S_i:={f∈𝔪̂_x̄| the number of down steps before termination for f equals i}.
We use the same letters to denote the induced partitions on 𝒯.
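To illustrate how the stability partition works on concrete germs (toy examples of the flow chart only, ignoring the covering X̄→ X), consider the cuspidal germ f=y^2-x^3: its multiplicity is 2, and after one blowup (in the chart y=xy_1 the total transform is x^2(y_1^2-x)) the strict transform y_1^2=x is smooth, so the maximal multiplicity drops from 2 to 1, the single step goes up, and f∈ R_2∩ S_0. For the tacnodal germ y^2-x^4, the strict transform after one blowup is the node y_1^2=x^2, still of multiplicity 2, so the first step goes down; the next blowup separates the two branches and the curve becomes smooth, so y^2-x^4∈ R_2∩ S_1.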
Given X→ X, SSℱ, with notations as above, let r_1 be the smallest i such that R_i⊆𝒯 is nonempty, let r_2 be the smallest j<∞ such that R_r_1∩ S_j is nonempty. 𝒯_gen:=R_r_1∩ S_r_2⊆𝒯. We call elements of 𝒯_gen generic ttfun germs for ℱ at (x,ξ).
A ttfam (T,U,V,f) for ℱ at (x,ξ) is generic if, for all closed point s∈ T, the image of f_s in 𝔪̂_x lies in 𝒯_gen. If it is moreover an N-ttfam for some N, we say it is a generic N-ttfam.
i) r_2 exists because we know all singularities on a curve can be resolved by finitely many blowups at closed points (see, e.g., <cit.>). In particular, 𝒯_gen is nonempty. Curve germs in 𝒯_gen can be resolved in r_1+r_2 blowup stages.
ii) 𝒯_gen is independent of the choice of x, by G-symmetry.
iii) 𝒯_gen depends on ℱ only indirectly, through X→ X and SSℱ.
The justification for the word “generic” is as follows: for f∈𝔪̂_x to lie in R_i or S_i for larger i, the coefficients of f have to satisfy more equalities. Consequently, a “generic” f∈𝒯 lies in R_i or S_i with i small. This is analogous to classical Galois theory: for a generic (in a precise sense) polynomial of fixed degree n≥ 5, the coefficients do not satisfy any algebraic relations and the polynomial is not solvable.
We can now restate Theorem <ref> in the following way:
3.11'[Generic finite depth]
Let X be a smooth surface over an algebraically closed field k of characteristic p>2, D be a prime divisor, U=X-D, ℱ=j_!ℱ_U, where ℱ_U is a local system on U concentrated in degree 0. Then ℱ has generically finite depth in the following sense: there exists a Zariski open dense V=X-{finitely many points on D} and a Zariski open dense S⊆ SS(ℱ|_V) such that for any (x,ξ)∈ S, there exists a natural number N≥2 (depending on (x,ξ)) such that, for any generic N-ttfam (T,U,V,f) at (x,ξ), ϕ_f(ℱ) is a local system. Moreover, we have an upper bound: N≤2^(M-1).b+(2p+1)^M.max_σ≠ id∈ G{ep(I_σ)}.(D.C)_x.|G|, where M=r_1+r_2 (as in Definition <ref>), and the other terms are as in the proof of Theorem <ref>.
The existence of V and S on which the conditions in Theorem <ref> i), ii) are satisfied is clear. Note that this gives a precise description of S.
iii) is satisfied for generic ttfam's by the definition of generic ttfam, where we take M to be r_1+r_2.
Consider the sheaf in Example <ref>. (x,ξ)=((0,0),dy). One checks that the normalisation of k[x,y] in k(x,y)[t]/(t^p-t-y/x^p) is k[x,y,xt]/((xt)^p-x^(p-1)(xt)-y)=k[x,τ], τ=xt. So X̄=Spec(k[x,τ])≅𝔸^2. There is a single point x̄={x=τ=0} above x. The map 𝔪̂_x→𝔪̂_x̄ is x↦ x, y↦τ^p-x^(p-1)τ. 𝒯 consists of power series of the form y+c_x^2x^2+c_xyxy+c_y^2y^2+.... The multiplicity partition on 𝒯 starts with R_2, which consists of the power series with c_x^2≠ 0. One checks that when c_x^2≠ 0, (p-1)/2 blowups resolve the curve germ. So 𝒯_gen=R_2⊆𝒯, r_1=2, r_2=(p-1)/2. G=ℤ/p, and σ∈ℤ/p acts on X̄ by (x,τ)↦(x,τ+σ x). For σ≠ 0, I_σ=(x), so ep(I_σ)=1. Our estimate gives depth(ℱ)_((0,0),dy)≤ 2^(2+(p-1)/2-1).2+(2p+1)^(2+(p-1)/2).1=2^((p+3)/2)+(2p+1)^((p+3)/2).
However, we comment that by directly computing the Swan of test curves using explicit equations, one can show depth(ℱ)=p at every point ((0,y),ξ)∈ SSℱ with ξ≠ 0 (see Example <ref> for an illustration in a simpler case). So our estimate is more of theoretical value (in particular, it explicates what structures are involved in the question of the stability) rather than being useful in practice.
We now discuss Saito's result and discuss its relation with ours.
<cit.>
Let X be a smooth surface over a field k which is algebraically closed of characteristic p>0. Let ℱ∈ D(X) be of the form ℱ=j_!ℱ_U, where U is an open dense subscheme of X and ℱ_U is a local system on it concentrated in degree 0. Let Z=X-U. Let (x,ξ)∈ SSℱ be a smooth point, x closed. Let f: X →𝔸^1 be a morphism such that (x,ξ) is an isolated characteristic point of f with respect to ℱ. Assume f is flat and its restriction to Z-x is étale. Then there exists a positive integer N such that for any g: X →𝔸^1 satisfying f ≡ g mod 𝔪_x^N, there exists an isomorphism ϕ_f(ℱ)_x ≅ϕ_g(ℱ)_x as objects in D^b_c(𝔽_ℓ[G_η]).
i) Saito's result fixes a test function f which has an isolated characteristic point, while our result does not fix f but restricts to transverse test functions. In other words, under our assumptions, we get a uniform bound for all f. We do not know if all these assumptions are necessary, c.f. Statements <ref>, <ref>. (The transversality condition is necessary, see the footnote to Remark <ref>.)
ii) The isomorphism of vanishing cycles in Saito's result is an isomorphism of G_η-representations. In our result, although ϕ_f(ℱ) being a local system certainly implies ϕ_f_s(ℱ)'s are isomorphic as (complexes of) vector spaces for all closed points s in T, it is not clear what representation-theoretic data is contained in our notion of stability.
iii) On the other hand, our loss in representation-theoretic data gained us more functoriality. For example, one has a version of 2-out-of-3 property for the depth, see Lemma <ref>.
To end this section, we formulate two statements which we think are plausible but currently do not know how to (dis)prove.
Let X be a smooth variety over an algebraically closed field k of characteristic p≠ 2.[We need p≠ 2 so that there are enough ttfun's and the depth makes sense.] Then ℱ∈ D(X) has finite depth at all smooth points of SSℱ.
Let X be a smooth variety over an algebraically closed field k of characteristic p≠ 2, ℱ∈ D(X), and (x,ξ) a smooth point of SSℱ. Then there exists a positive integer N (depending on (x,ξ)) such that for any neighbourhood U of x and f, g: U →𝔸^1 satisfying a) f and g are ttfun's at (x,ξ); b) f ≡ g mod 𝔪_x^N, there exists an isomorphism ϕ_f(ℱ)_x ≅ϕ_g(ℱ)_x as objects in D^b_c(𝔽_ℓ[G_η]).
§ Μ C SHEAVES
In this section, “ttfam” always means 2-ttfam. See Definition <ref>. Note that condition iv) in the definition is automatic for 2-ttfam.
We maintain the same setup in Section <ref>. As mentioned in the introduction, our motivation is to build a microlocal sheaf theory in this setting. A microlocal sheaf theory “lives” on the cotangent bundle, but as discussed in Section <ref>, due to the complexity of π_1 (or wild ramification), microlocal data is huge, reflected in the fact that vanishing cycles depend on higher jets of the ttfun. This suggests at least two directions to go: i) work on a space larger than T^*X (e.g., higher jet bundles), ii) restrict the class of sheaves. The previous section is a step in i): we showed that on a surface, generically, the vanishing cycles “live” on some finite jet bundle. In this section, we explore the second route.
An immediate thought is to restrict to tame sheaves. However, this is not satisfactory, as tameness is not even preserved under the Radon transform (see Example <ref>), while as mentioned in Section <ref>, a fundamental feature of microlocal sheaf theory is contact invariance, of which the Radon transform is the prototypical case. Inspired by the situation in complex analytic context, we instead consider the class of sheaves with the strongest stability.
ℱ∈ D(X) is μ c at a smooth point (x,ξ)∈ SSℱ if for all ttfam's (i.e. 2-ttfam's) of ℱ at (x,ξ), ϕ_f(ℱ) is a local system. ℱ is μ c if it is μ c at all smooth points of SSℱ.
ℱ∈ D(X) is μ c^s at a smooth point (x,ξ)∈ SSℱ if for all smooth morphisms p: Y→ X and all (y,η)∈ T^*Y with y↦ x, η=dp(ξ), and all ttfam's of p^*ℱ at (y,η), ϕ_f(p^*ℱ) is a local system. ℱ is μ c^s if it is μ c^s at all smooth points of SSℱ.
We record a question we do not know how to answer yet, it is a special case of Question <ref>.
Is μ c equivalent to μ c^s?
A μ c sheaf is just a sheaf of depth 2 at all smooth points of its SS. We give them a special name as they are closest to the complex analytic case and are good candidates for microlocal constructions. Actually, we have the analogues of Propositions <ref>, <ref>. (Note we have no control on the representation structure, see item ii) after Theorem <ref>.)
i) Let ℱ∈ D(X) be μ c and (x,ξ) be a smooth point in SSℱ. Then for any two ttfun's f, g at (x, ξ), there exists a (noncanonical) isomorphism ϕ_f(ℱ)_x≅ϕ_g(ℱ)_x as objects in D^b_c(𝔽_ℓ). We call this the microstalk of ℱ at (x,ξ).
ii) For μ c^s sheaves, the microstalks are invariant under the Radon transform: let ℱ∈ D(ℙ) be μ c^s, and (x,ξ) be a smooth point of SSℱ with ξ≠ 0. Let (a,α) be a point in SSRℱ corresponding to (x,ξ). Let f, g be ttfun's for ℱ, Rℱ at (x,ξ), (a,α) respectively. Then there exists an isomorphism ϕ_g(Rℱ)_a≅ϕ_f(ℱ)_x as objects in D^b_c(𝔽_ℓ).
i) ℱ being μ c implies that in a ttfam (T,U,V,g), the stalks of ϕ_g(ℱ) are all isomorphic. So it suffices to show that any two ttfun's can be connected by a ttfam. Fix an coordinate {x_1,...,x_n} at x. Let f be a ttfun on some neighbourhood U of x. The restriction of f to the strict localisation X_(x)≅Spec(k{x,y}) is of the form f|_X_(x)=∑ξ_i x_i+ ∑ a_ij x_i x_j+ H, where ξ_i are components of ξ and H means higher order terms. Consider the 𝔸^1=Spec(k[s])-family: f_s:=f+(s-1)(f-(∑ξ_i x_i+ ∑ a_ij x_i x_j)), then f_s|_X_(x)=∑ξ_i x_i+ ∑ a_ij x_i x_j+sH. Note these are also defined on U, and since we have not changed ≤ second order terms, ν is still a transverse intersection point of Γ_df_s and SSℱ. f_s is a ttfun on some Zariski open neighbourhood V_s of x∈ U. Put them together, we get a ttfam (𝔸^1,U,V,f), connecting f_1=f to f_0=∑ξ_i x_i+ ∑ a_ij x_i x_j. Now consider Q= the space of all quadratic forms {b_ij} such that ∑ξ_i x_i+ ∑ b_ij x_i x_j is a ttfun on some Zariski open neighbourhood of x∈ X. It is an open dense subspace of an affine space. Let f_{b_ij}=∑ξ_i x_i+ ∑ b_ij x_i x_j. This defines a ttfam (Q,U',V',f_{b_ij}) for some Zariski open U' of X, connecting all ttfun's parametrised by Q.
ii) By Corollary <ref>, Rℱ is also μ c^s, so its microstalks are well-defined. The same computation as in the proof of Proposition <ref> then gives the result.
Here is a direct proof of Proposition <ref>: by the proof of the above Lemma i), all ttfun's can be connected via ttfam's. By <cit.>, dimtot is constant in a ttfam (note that by the definition of the ttfam, the nonacyclicity locus is mapped isomorphically to the base 𝔸^1_T, so being flat implies being locally constant in the terminology of <cit.>).
The rest of this section is devoted to:
i) showing some basic sheaves are μ c (μ c^s). In particular, tame simple normal crossing sheaves are μ c^s;
ii) showing some functorialities of the μ c condition. In particular, μ c^s sheaves are preserved under the Radon transform;
iii) computing some examples.
§.§ Basic objects
Recall definitions and remarks in Section <ref>.
If X is a smooth curve, then any ℱ∈ D(X) is μ c.
Let (x,ξ) be a smooth point of SSℱ. Notice that for any ttfam at (x,ξ), in Diagram <ref>, f_T is an isomorphism and ℱ_T is a constant sheaf, so (f_T,ℱ_T) is ULA.
Local systems are μ c^s.
It suffices to show they are μ c, because pullbacks of local systems are local systems. The problem being local, we may assume the sheaf is constant. Let ℱ∈ D(X) be a constant sheaf, x∈ X. Let (T, U, V, f) be a ttfam for ℱ at (x, ξ=0). On each slice V_s→𝔸^1_s, the fact that f_s is a ttfun implies that it has a nondegenerate quadratic singularity at x over 0∈𝔸^1_s (in the sense of [SGA7 XV, 1.2.1]). We want to show Diagram <ref> is ULA. Consider the following diagram:
Ṽ_T ↪ Ṽ ↩ Ṽ-Ṽ_T
  ↓       ↓ π      ↓ ≅
V_T ↪^i V ↩^j V-V_T,        f_T: V_T→ T,    h: V→ T
where h is the composition of f and the projection 𝔸^1_T→ T, and π: Ṽ→ V is the blowup of V along x_T, with Ṽ_T:=π^-1(V_T). Note Ṽ_T↪Ṽ is a simple normal crossing divisor over T. By the distinguished triangle j_!j^*ℱ→ℱ→ i_*ℱ_T→ and the fact that (h,ℱ) is ULA, to show (f_T,ℱ_T) is ULA it suffices to show (h,j_!j^*ℱ) is ULA. But j_!j^*ℱ≅π_*j_!𝒢 (abusing notation, j also denotes the inclusion Ṽ-Ṽ_T↪Ṽ), where 𝒢 is the pullback of j^*ℱ to Ṽ-Ṽ_T. By <cit.>, SS(j_!𝒢)=T_Ṽ^*Ṽ∪ T_Ṽ_T^*Ṽ,[We use the following notation: for D=∪_i=1^rD_i↪ X a sncd, T^*_DX:=∪_I T^*_D_IX, where I ranges through nonempty subsets of {1,2,...,r}, and D_I:=∩_i∈ ID_i.] so Ṽ→ T is SS(j_!𝒢)-transversal, so (hπ, j_!𝒢) is ULA. By the compatibility of vanishing cycles and proper pushforwards, Φ_h(π_*j_!𝒢)≅π_*Φ_hπ(j_!𝒢)=0.
Let D⊆ X be a simple normal crossing divisor (sncd), j: U→ X be its complement. If ℱ∈ D(X) is of the form ℱ=j_!ℱ_U for ℱ_U a local system tame along D, then ℱ is μ c (hence μ c^s because its smooth pullbacks are of the same form).
Recall that in this situation SSℱ=T^*_XX∪ T^*_D X by <cit.> (see Footnote footnoteonnotation for the notation).
The question being local, we may assume that we are in the situation D=∪_i=1^rD_i↪ X=𝔸^n_k, where 0<r≤ n, D_i={x_i=0}, with {x_1,...,x_n} the standard coordinates on 𝔸^n_k. The locally constant locus has been dealt with in the previous proposition. It suffices to show ℱ is μ c at (x,ξ) for x=origin, ξ=dx_1+...+dx_r.
Let (T,U,V,f) be a ttfam for ℱ at (x,ξ). We want to show Φ_f_T(ℱ_T)=0 in Diagram <ref>. For this, we need to understand the geometry of D_T:=(D× T)∩ V_T↪ V_T near x_T. First look at each slice D=D×{s}↪ V_s.
Claim: the embedded singularity D∩ H_s↪ H_s can be resolved in two steps: first blow up at x, then blow up along the intersection of the exceptional divisor with the strict transform of D_1∩ ... ∩ D_r-1∩ H_s. (For r=1, there is only one blowup.)[Of course the choice {1,2,...,r-1} is unimportant: one can choose any r-1 elements in {1,2,...,r}.]
Assuming this, return to D_T↪ V_T. It follows that the embedded singularity D_T↪ V_T can be resolved by first blowing up along x_T, then blowing up along the intersection of the exceptional divisor with the strict transform of D_1,T∩ ... ∩ D_r-1,T, where D_i,T:=(D_i× T)∩ V_T. We get the following diagram:
π: Ṽ_T→ V_T,    f_T: V_T→ T,    g_T:=f_T∘π: Ṽ_T→ T
where π is proper and induces an isomorphism over V_T-D_T, and π^-1(D_T)↪Ṽ_T is a sncd relative to T. Note ℱ_T=π_*π^*ℱ_T (because ℱ_T is a !-extension from the open part), and π^*ℱ_T is still a sncd tame sheaf (by <cit.>). So Φ_f_T(ℱ_T)=π_*Φ_g_T(π^*ℱ_T)=0, where the last equality comes from the fact that the SS of a sncd tame sheaf is conormal.
The claim is shown in the next two lemmas.
Let D=∪_i=1^rD_i↪ X=𝔸^n_k, where 0<r≤ n, D_i={x_i=0}, with {x_1,...,x_n} the standard coordinates on 𝔸^n_k. Denote D=D_1∩...∩ D_r. Let f be a ttfun of T^*_DX at (x,ξ) where x=origin, ξ=dx_1+...+dx_r.[For C a conic closed subset of T^*X and (x,ξ) a smooth point of C, we say f is a ttfun of C at (x,ξ) if f satisfies the same conditions as in Definition <ref> (Remark <ref> iii)), with “SSℱ” replaced by “C”.] Denote H=f^-1(0). Then D_1∩ H,...,D_r-1∩ H form a sncd on H and x_r|_H is a ttfun of T^*_D_1∩...∩ D_r-1∩ HH at (x,-(dx_1+...+dx_r-1)|_H). (For r=1, T^*_D_1∩...∩ D_r-1∩ HH:=T^*_HH.)
It follows easily from f being a ttfun that:
i) D_1,...,D_r-1 indeed form a sncd on H;
ii) Γ_dx_r|_H intersects T^*_D_1∩...∩ D_r-1∩ HH precisely at (x,-(dx_1+...+dx_r-1)|_H).
We want to show the intersection is transverse. By the “⇒” part of the proof of Lemma <ref>, it suffices to show this in the ambient space X, i.e., that Γ_dx_r.⟨ dx_1,...,dx_r-1, df⟩=1.(x, dx_r), where ⟨ dx_1,...,dx_r-1, df⟩ denotes the pushforward of T^*_D_1∩...∩ D_r-1∩ HH into X. The r=n case is easy. We assume r≤ n-1 in the following. Note Γ_dx_r and ⟨ dx_1,...,dx_r-1, df⟩ intersect precisely at (x, dx_r), so it suffices to show that their tangents at (x, dx_r) are linearly independent. The computation is straightforward, here are the results:
Use coordinates {x_1,...,x_n;p_1,...,p_n} on T^*X.
Γ_dx_r: tangent space at (x, dx_r) is spanned by {∂_x_1,...,∂_x_n}.
⟨ dx_1,...,dx_r-1, df⟩: tangent space at (x, dx_r) is spanned by {∂_p_1,...,∂_p_r-1, ∑ _i=1^n ∂_p_i,
(∂_x_r+1+∑_i=1^n f_i,r+1∂_p_i),...,(∂_x_n+∑_i=1^n f_i,n∂_p_i)}, where f_i,j denotes the derivative of f in x_i followed by in x_j.
These are linearly independent if and only if the matrix {f_i,j}_i,j∈{r+1,...,n} is nondegenerate. But this follows exactly from the assumption that f is a ttfun for T^*_DX.
Same set up as in the previous lemma. Then the embedded singularity D∩ H ↪ H can be resolved in two steps: first blow up at x, then blow up along the intersection of the exceptional divisor with the strict transform of D_1∩ ... ∩ D_r-1∩ H. (For r=1, there is only one blowup.).
As everything happens on H, for convenience we make the following notation changes in this proof (new ← old): X ← H; D_i ← D_i∩ H (for i=1,2,...,r-1); D ← ∪_i=1^r-1(D_i∩ H); D ← ∩_i=1^r-1(D_i∩ H); H ← D_r∩ H; f ← x_r|_H. We also rename n and r such that our new X is of dimension n and the new D has r components. In this new notation, the statement becomes: the embedded singularity D∪ H ↪ X can be resolved by first blowing up at x, then blowing up along the intersection of the exceptional divisor with the strict transform of D.
The problem being local, we may assume X=𝔸^n_k, with the standard coordinates {x_1,...,x_n}, and x is the origin. The r=n case is simple. We check the r≤ n-1 cases. The condition on f implies that f is of the form x_1+...+x_r+Q+Q'+P+(...) where Q is a nondegenerate quadratic form in {x_r+1,...,x_n}, Q' is a quadratic form in {x_1,...,x_r}, P is a linear combination of monomials of the form x_ax_α for a∈{1,...,r}, α∈{r+1,...,n}, and (...) means higher degree terms. In the rest of the proof, a,b... always mean an index in {1,...,r}; α,β... always mean an index in {r+1,...,n}; i,j... always mean an index in {1,...,n}; ∑ means over all allowed indices unless specified. By a linear change of coordinates in {x_α}, we may assume Q=∑ x_α^2.
Blow up at x. Use new coordinates ((x_1,...,x_n),[p_1:...:p_n]) with relations {x_ip_j=x_jp_i}_all i,j. We look at p_n=1 piece, the others can be checked by the same method. On this piece, we may use coordinates {x_n,p_1,...,p_n-1}, and we have x_i=x_np_i for i=1,...,n-1. The exceptional divisor is E={x_n=0}. We list the strict transforms of relevant things:
D_a: D_a'={p_a=0};
D: D'={p_1=...=p_r=0};
H: H'={f^(1)=∑ p_a+(x_n+∑_α=r+1^n-1 x_np_α^2)+Q'/x_n+P/x_n+(...)=0}. Here Q'/x_n consists of (linear combinations of) terms of the form x_np_ap_b, and P/x_n of the forms x_np_ap_α, x_np_a.
The conormals of D_a', D' are spanned by dp_a, {dp_1,...,dp_r} respectively, and we compute df^(1)|_E=∑ dp_a+ (1+∑_α=r+1^n-1 p_α^2)dx_n+A, where A consists terms of the form p_ap_bdx_n, p_ap_αdx_n, p_adx_n. It is an exercise to deduce from these that:
i) {D_1',...,D_r', H', E} form a sncd except along D'∩ H'∩ E=D'∩ E;
ii) along D'∩ E: outside the conic C:={x_n=p_1=...=p_r=0, 1+∑_α=r+1^n-1 p_α^2=0}, any r-1 members of {D_1',...,D_r', H', E} form a sncd; on C, {D_1',...,D_r', E} form a sncd, and df^(1)=∑ dp_a.
It remains to resolve the singularity along D'∩ E.
Blow up along D'∩ E. It is another exercise to see, using ii), that the singularity outside C is resolved. We check that the singularity is also resolved over C.
Use new coordinates ((x_n,p_1,...,p_n-1),[q_n:q_1:...:q_r]) with relations {x_nq_i=q_np_i, p_iq_j=q_ip_j}_all i,j. We look at q_n=1 piece, the others can be checked by the same method. On this piece, we may use coordinates {x_n,q_1,...,q_r,p_r+1,...,p_n-1}, and we have p_a=x_nq_a for a=1,...,r. The exceptional divisor is F={x_n=0}. We list the strict transform of relevant things:
E: E' lies at infinity and is irrelevant on this piece;
D_a': D_a”={q_a=0};
D': D”={q_1=...=q_r=0};
H': H”={f^(2)=∑ q_a+(1+∑_α=r+1^n-1 p_α^2)+Q'/x^2_n+P/x^2_n+(...)=0}. Here Q'/x^2_n consists of terms of the form x_n^2q_aq_b, and P/x^2_n of the form x_nq_ap_α, x_nq_a.
Compute: df^(2)|_x_n=0=∑ dq_a+(∑_α=r+1^n-1 2p_αdp_α)+A, where A consists of terms of the form q_ap_αdx_n, q_adx_n.
Recall we want to show {D_1”,...,D_r”, H”, F, E'} form a sncd. E' is irrelevant here, and it suffices to check over C, i.e., on the locus {x_n=0, 1+∑_α=r+1^n-1 p_α^2=0}.
But along this locus, {p_r+1,...,p_n-1} are not all 0, so df^(2)|_x_n=0 contains some dp_α component and is thus not contained in the span of {dx_n,dq_1,...,dq_r}, consequently {D_1”,...,D_r”, H”, F, E'} form a sncd.
§.§ Properties
Let i: Z↪ X be a closed immersion of smooth varieties, ℱ∈ D(Z). Then
i) ℱ is μ c if and only if i_*ℱ is μ c;
ii) Same for μ c^s;
i') More generally, if (z, ζ) is a smooth point in SSℱ and (x, ξ) is in SS(i_*ℱ) such that x=i(z), di(ξ)=ζ, then depth(ℱ)_(z,ζ)=depth(i_*ℱ)_(x,ξ).
i) Let (z, ζ) be a point in SSℱ, and (x, ξ) be a point in SS(i_*ℱ)=i_∘SSℱ such that x=i(z), di(ξ)=ζ. First note that (x, ξ) is a smooth point of SS(i_*ℱ) if and only if (z, ζ) is a smooth point of SSℱ. This follows from the observation that in
the following correspondence u is smooth and v is a closed immersion.
T^*Z ⟵^u Z×_XT^*X ⟶^v T^*X
ℱ μ c ⇒ i_*ℱ μ c: Let (T, U, V, f) be a ttfam at (x, ξ) for i_*ℱ, we want to show ϕ_f(i_*ℱ) is locally constant. Consider the restriction of (T,U,V,f) to Z:
z_T ↪ V_Z ↪ U_Z× T ⟶ Z× T
 ↓≅      ↓ i'       ↓              ↓ i
x_T ↪ V ↪ U× T ⟶ X× T,        f: V→𝔸^1_T
(the two right-hand squares are cartesian)
By the compatibility of vanishing cycles (over general bases) with proper pushforwards, Φ_f((i_*ℱ)_V)≅i'_*Φ_fi'(ℱ_V_Z). But Φ_f((i_*ℱ)_V) is supported on z_T×_𝔸^1_T𝔸^1_T and i' restricted to z_T×_𝔸^1_T𝔸^1_T is an isomorphism, so ϕ_f(i_*ℱ)=Φ_f(i_*ℱ_V)|_TT≅Φ_fi'(ℱ_V_Z)|_TT=ϕ_fi'(ℱ). So, ℱ being μ c, it suffices to show the restriction of (T, U, V, f) to Z is a ttfam for ℱ at (z, ζ). In the definition of the ttfam i), ii) are clear. We check iii):
The computation being local, we may assume V_s=X. Consider the correspondence above. Abbreviate SSℱ as C. We want to compute C.Γ_d(f_s|_Z)=C.uv^-1Γ_df_s. First note (vu^-1C).Γ_df_s(=(i_∘SSℱ).Γ_df_s=1.(x,ξ)), (u^-1C).v^-1Γ_df_s, C.uv^-1Γ_df_s are all supported at a single point because f_s is a ttfun, u is smooth and v is a closed immersion. Then compute: (vu^-1C).Γ_df_s=(v_*u^-1C).Γ_df_s=(u^-1C).v^*Γ_df_s, where the second equality comes from v being a closed immersion, third equality comes from the projection formula in intersection theory. A simple computation in a local coordinate shows that the intersection of Γ_df_s and Z×_XT^*X is transverse. So v^-1Γ_df_s is also smooth and (u^-1C).v^*Γ_df_s=(u^-1C).v^-1Γ_df_s, i.e. u^-1C and v^-1Γ_df_s intersect transversely at a single point. So C and uv^-1Γ_df_s also intersect transversely at a single point.
ℱ μ c ⇐ i_*ℱ μ c: Let (T, Z',V,f) be a ttfam at (z,ζ) for ℱ. If ζ=0, then ℱ is locally constant near z and the assertion is clear. Assume ζ≠ 0. By [EGA IV, 18.1.2], we can extend Z' to an étale neighbourhood X' of x in X. By the following Lemma <ref>, after possibly shrinking Z' and X', there exists an étale neighbourhood X̃'→ X' of x in X and maps α, r satisfying the following diagram:
Z' ↪^α X̃' ⟶^β X',        r: X̃'→ Z'
where α is a closed immersion, β is étale, r is a retraction, and βα coincides with the closed immersion Z'↪ X'. Consider the pullback of (T, Z',V,f) via r:
z_T ↪ Ṽ ↪ X̃'× T
 ∥        ↓          ↓ r× id
z_T ↪ V ↪ Z'× T,        f: V→𝔸^1_T,   f̃: Ṽ→𝔸^1_T the composite of Ṽ→ V and f
(the right-hand square is cartesian)
On each slice, f̃_s=f_s∘ r is an extension of f_s. A similar intersection theoretic computation as above shows that (T, X̃',Ṽ,f̃) is a ttfam at (z,(df̃_s)_z) for i_*ℱ. Then again by the compatibility of vanishing cycles with proper pushforwards, ϕ_f(ℱ)≅ϕ_f̃(i_*ℱ), and the latter is a local system by assumption.
ii) “⇒” is clear. For “⇐”: the question being local, we may reduce to showing that the pullback of a μ c sheaf ℱ along 𝔸^m× Z→ Z is μ c. But it equals the restriction to 𝔸^m× Z of the pullback of ℱ along 𝔸^m× X→ X, which is μ c by i).
i') This follows from the same method as for i) plus Lemma <ref>.
The following lemma is well-known. We learned it from Owen Barrett.
Let i: Z↪ X be a closed immersion of smooth schemes over a field k, and z∈ Z a point. Then in some Zariski open neighbourhood X' of z in X, there exists an étale neighbourhood X̃'→ X' of z and maps α, r fitting into the following diagram:
Z' ↪^α X̃' ⟶^β X',        r: X̃'→ Z',
where Z'=Z×_X X', α is a closed immersion, β is étale, r is a retraction, and βα=i. Moreover, the retraction r is smooth along Z'.
Z being smooth, there exists an étale map Z'→𝔸_k^m for some Zariski open neighbourhood Z' of z in Z. Extend Z' to a Zariski open X' in X. By further shrinking, we may assume Z', X' are affine. Consider the following pushout, and choose a retraction r':
Z' ↪ X'
 ↓        ↓ f
𝔸^m_k ↪ X”        (pushout),    with a retraction r': X”→𝔸^m_k
(If X”=Spec A and 𝔸^m_k=Spec(k[x_1,...,x_m]), choosing an r' amounts to choosing a lift to A of each x_i along A↠ k[x_1,...,x_m].) We construct X̃' etc. using the following two pullback diagrams:
Left square (cartesian): Z'×_𝔸^m_kX' ⟶^pr_1 Z', with pr_2: Z'×_𝔸^m_kX'→ X', the étale map Z'→𝔸^m_k, and r'f: X'→𝔸^m_k.
Right square (cartesian): Z'×_𝔸^m_kZ' ↪^i' Z'×_𝔸^m_kX', with pr_2: Z'×_𝔸^m_kZ'→ Z', pr_2: Z'×_𝔸^m_kX'→ X', i: Z'↪ X', and the diagonal section Δ: Z'→ Z'×_𝔸^m_kZ'.
In the right diagram, note Z'×_𝔸^m_kZ' is a disjoint union of several copies of Z' because pr_2 is étale. Δ is an isomorphism onto the diagonal copy. Let X̃'=Z'×_𝔸^m_kX'-(Z'×_𝔸^m_kZ'-Δ(Z')), α=i'Δ, β=pr_2, r=pr_1. It is an exercise to see that these satisfy the requirements. Note the smoothness of r along Z' follows from the smoothness of Z', X̃' and the injectivity of dr on cotangent spaces (see, e.g., [Liu02, 6.2.10]).
Let f: X→ Y be a morphism of schemes and y∈ Y. If g,h∈𝒪_y,Y are such that g≡ h mod 𝔪^N_y,Y for some N∈ℕ, then g∘ f≡ h∘ f mod 𝔪^N_x,X for any x∈ f^-1(y).
Let φ: 𝒪_x,X←𝒪_y,Y be the induced local ring map. g≡ h mod 𝔪^N_y,Y implies g∘ f≡ h∘ f mod φ(𝔪^N_y,Y)𝒪_x,X=φ(𝔪_y,Y)^N𝒪_x,X, and a fortiori g∘ f≡ h∘ f mod 𝔪^N_x,X.
Like tame sheaves, μ c sheaves are not stable under general proper pushforwards, however they are stable under pushforwards which resemble (the pushforward part of) an integral transform.
Let f: Y→ X be a morphism of smooth varieties, ℱ be a μ c sheaf on Y.
i) If f is special with respect to ℱ, then f_*ℱ is μ c;
ii) Same for μ c^s;
i') More generally, if f is special with respect to ℱ, then for any pair (x,ξ), (y,η) (see notations below), we have depth(f_*ℱ)_(x,ξ)≤depth(ℱ)_(y,η).
Here we say f: Y→ X is special with respect to ℱ if
a) it is smooth and proper;
b) for any smooth point (x, ξ)∈ SS(f_*ℱ) with ξ≠ 0, there exists a unique point (y, η)∈ (SSℱ)|_f^-1(x) such that df(ξ)=η. Furthermore, (y, η) is a smooth point of SSℱ;
c) f_+SSℱ=f_∘SS^+ℱ. Here f_+ is the map from cycles on T^*Y to cycles on T^*X defined as follows: take the intersection theoretic pull and push under the correspondence T^*Y← Y×_X T^*X→ T^*X, then set the coefficient of the zero section to be 1. We will use a similar notation for pullbacks.
Note, being special implies that the pull back of any ttfun for f_*ℱ at (x,ξ) with ξ≠0 is a ttfun for ℱ at (y,η) (c.f. the proof of Lemma <ref>).
i) Let (x, ξ) be a smooth point of SS(f_*ℱ) with ξ≠ 0, (y, η) be the point in SSℱ corresponding to it. Let (T, U, V, g) be a ttfam for f_*ℱ at (x, ξ). Consider the pullback of this ttfam along f:
y_T ↪ Ṽ ↪ Ũ× T ⟶ Y× T
 ↓≅      ↓          ↓              ↓ f× id
x_T ↪ V ↪ U× T ⟶ X× T,        g: V→𝔸^1_T,   h: Ṽ→𝔸^1_T the composite of Ṽ→ V and g
(the two right-hand squares are cartesian)
f being special with respect to ℱ implies that, for any geometric point s∈ T, the slice h_s: Ṽ_s→𝔸^1_s satisfies condition iii) in the definition of a ttfam for ℱ at (y, η). Conditions i), ii) are clearly satisfied, so (T, Ũ, Ṽ, h) is a ttfam for ℱ at (y, η). Since ℱ is μ c, ϕ_h(ℱ) is a local system. By the compatibility of vanishing cycles (over general bases) with proper pushforwards, we conclude that ϕ_g(f_*ℱ) is also a local system.
ii) For the same statement for μ c^s, it suffices to check that being special with respect to a sheaf is preserved under smooth pullback.
Let g: W→ X be a smooth map. We have the following diagram:
Y Y_W
X W
["g", from=2-3, to=2-2]
["f"', from=1-2, to=2-2]
["f'", from=1-3, to=2-3]
["g'"', from=1-3, to=1-2]
["⌟"anchor=center, pos=0.125, rotate=-90, draw=none, from=1-3, to=2-2]
We want to show f' is special with respect to g'^*ℱ.
a): Clear;
b): We need to know the intersection (away from the zero section) of f'^∘SS(f'_*g'^*ℱ)=f'^∘SS(g^*f_*ℱ)=f'^∘g^∘SS(f_*ℱ)=g'^∘f^∘SS(f_*ℱ) and SS(g'^*ℱ)=g'^∘SSℱ. Clearly, on the fibre f'^-1(x'), for any x'∈ W, the intersection is nonempty if and only if, on f^-1(g(x')), the intersection of f^∘SS(f_*ℱ) and SSℱ is nonempty, and if so the intersection is a single smooth point of SS(g'^*ℱ);
c): f'_+SS(g'^*ℱ)=f'_+g'^+SSℱ=g^+f_+SSℱ=g^∘f_∘SS^+ℱ=f'_∘g'^∘SS^+ℱ=f'_∘SS^+(g'^*ℱ), where the second equality comes from the base change formula in intersection theory.
i') This follows from the same method as for i) plus Lemma <ref>.
Recall the notations in Radon setup <ref> for the next corollary and remark.
The Radon transform preserves μ c^s sheaves: if ℱ∈ D(ℙ) is μ c^s, then Rℱ∈ D(ℙ^∨) is μ c^s.
It suffices to observe that, by the proof of Lemma <ref>, q is special with respect to p^*ℱ for any ℱ∈ D(ℙ).
Note that we actually proved a pointwise statement: if ℱ∈ D(ℙ) is μ c^s at (x,ξ), ξ≠ 0, then Rℱ∈ D(ℙ^∨) is μ c^s at (a,α), where (a,α) is any point corresponding to (x,ξ).
Being μ c is compatible with distinguished triangles: let ℱ→𝒢→ℋ→ be a distinguished triangle, assume SS𝒢=SSℱ∪ SSℋ, then
i) ℱ and ℋ μ c implies 𝒢 μ c;
ii) Same for μ c^s;
i') More generally, for any smooth point ν∈ SS𝒢, depth(𝒢)_ν≤max{depth(ℱ)_ν, depth(ℋ)_ν}.
These all follow from applying Remark <ref> v), and using the distinguished triangle
Φ_f_Tℱ_T →Φ_f_T𝒢_T →Φ_f_Tℋ_T →
Note that we need the assumption SS𝒢=SSℱ∪ SSℋ because otherwise SSℱ and SSℋ may cancel each other and a smooth point of SS𝒢 may be a nonsmooth point of SSℱ or SSℋ (see, e.g., Example <ref>).
Item i) in the following lemma is well-known, we include a proof for completeness.
Let Z be a smooth closed subvariety of X of codim≥ 2, j: U↪ X be the complement. Let ℱ=j_!ℱ_U with ℱ_U a local system. Then
i) SSℱ=T^*_ZX ∪ T^*_XX;
ii) ℱ is μ c^s.
i) By induction on amplitudes and the compatibility of μ c with distinguished triangles, it suffices to deal with the case where ℱ_U is concentrated in a single degree. By the Theorem of Purity (e.g. [SGA1 X]), ℱ_U extends to some local system (concentrated in a single degree) ℱ̃ on X. Consider the exact sequence 0→ℱ→ℱ̃→ i_*ℱ̃_Z → 0. The second and third terms have SS contained in T^*_ZX ∪ T^*_XX, so the first term has SS⊆ T^*_ZX ∪ T^*_XX. But ℱ is not locally constant along Z, so SSℱ≠ T^*_XX, and the inclusion is an equality for dimension reasons.
ii) By induction on amplitudes, we reduce to the case ℱ_U is concentrated in a single degree. Then it follows from the same exact sequence above and Lemma <ref>, <ref>, <ref>.
Let Z be a smooth closed subvariety of X of codim≥ 2, j: U↪ X be the complement. Let ℱ∈ D(X), assume it is not a local system. Then
i) SSℱ=T^*_ZX ∪ T^*_XX if and only if ℱ is a local system on U and Z;
ii) If so, ℱ is μ c^s.
In the real/complex analytic context, statement i) is true without assumptions on the codimension (<cit.>). In the positive characteristic algebraic context, it is false for codim = 1, see Example <ref>.
By the previous lemma, ii) and “⇐” in i) are clear. For “⇒” in i): suppose SSℱ=T^*_ZX ∪ T^*_XX. Consider the distinguished triangle j_!ℱ_U →ℱ→ i_*ℱ_Z →. The first and second terms both have SS=T^*_ZX ∪ T^*_XX, so the third has SS⊆ T^*_ZX. But SS(i_*ℱ_Z)=i_∘SSℱ_Z, these force SS(i_*ℱ_Z)=T^*_ZX and SSℱ_Z=T^*_ZZ, hence ℱ_Z is a local system.
§.§ Examples
The sheaves in Examples <ref>, <ref> are not μ c (for p>3) by the computations there and Lemma <ref>. Example <ref> shows that SS being Lagrangian does not imply being μ c. We do not know if being μ c implies SS being Lagrangian. Furthermore, similar computations at other points show that Example <ref> is not μ c anywhere. On the other hand, as we show now, Example <ref> is μ c everywhere (along the smooth locus of SSℱ) except above the origin.
Consider ((0,1),dx)∈ SSℱ; the other points are similar. Changing coordinates, we may assume the sheaf is given by t^p-t=(y+1)/x^(p-1) and the point in question is (x_0,ξ)=((0,0),dx). By the same reasoning as in the first paragraph of the proof of Theorem <ref>, it suffices to show: for any smooth curve C on an open neighbourhood of x_0 passing through x_0 with conormal at x_0 proportional to dx, sw(C) is independent of C. By the Implicit Function Theorem, any such curve is of the form {x=c_2y^2+c_3y^3+c_4y^4+...}, c_2≠0, in the formal neighbourhood of x_0. The restriction of the sheaf is given by the Artin-Schreier equation
t^p-t=(y+1)/(c_2y^2+c_3y^3+c_4y^4+...)^(p-1)=(y+1)/(y^(2p-2)(c_2+c_3y+c_4y^2+...)^(p-1))
which has Swan conductor 2p-1, independent of C.
In the following, we will use coordinates [x:y:z] on ℙ^2 and [a:b:c] on its dual.
(p>2)
Consider the Artin-Schreier sheaf on ℙ^2 determined by the equation t^p-t=yz^(p-2)/x^(p-1), !-extended along {x=0}. Note that on the affine chart {[x:y:1]} this is just Example <ref>. One can compute: SSℱ=T^*_ℙℙ∪ T^*_{x=0}ℙ∪ T^*_[0:0:1]ℙ∪ T^*_[0:1:0]ℙ, SSRℱ=T^*_ℙ^∨ℙ^∨∪ T^*_{b=0}ℙ^∨∪ T^*_{c=0}ℙ^∨∪ T^*_[1:0:0]ℙ^∨. Focus on a neighbourhood of the point [0:1:0]∈ℙ^∨. Claim: although SSRℱ is the zero section union the conormal to a smooth divisor near this point, Rℱ is not locally constant on the divisor near this point. Indeed, as a varies, the points [a:1:0] correspond to the lines {aX+Y=0} on ℙ and the stalks (Rℱ)_[a:1:0]≅ RΓ({aX+Y=0},ℱ) have a jump at a=0.[This can be seen, e.g., by computing the Euler-Poincaré characteristics using Grothendieck-Ogg-Shafarevich.]
This shows that the analogue of Corollary <ref> i) for codim=1 is false. This is in steep contrast with the real/complex analytic case.
(p>2)
Let Z↪ℙ^2 be the closed subscheme with equation z^(p-1)y=x^p. Let ℱ be the constant sheaf on Z, *-extended to ℙ. One can compute: SSℱ=T^*_Zℙ, SSRℱ=T^*_{b=0}ℙ^∨∪Λ, where Λ is described as follows: on the affine chart {c=1}, Λ|_{c=1}={((0,b),⟨ (1/b^(1/p))da+(1/b)db⟩)} (for b=0 this means ((0,0),⟨ db⟩)), and Λ is the closure of Λ|_{c=1} in T^*ℙ^∨. By Remark <ref>, Lemma <ref> and Remark <ref>, Rℱ is μ c^s except possibly along {b=0}. Its SS shows that it has wild ramification along {a=0}.
This tells us: a) being tame is not stable under proper pushforwards; b) μ c and μ c^s sheaves can have wild ramification.
§ APPENDIX
We list some analogies and contrasts among the following contexts from the microlocal perspective (well-known to experts):
i) Ét.: bounded constructible complexes of ℤ/ℓ^n-sheaves on smooth algebraic varieties over algebraically closed fields of positive characteristic p≠ l;
ii) Dist.: complex valued tempered distributions on ℝ^n;
iii) D-mod_h: bounded holonomic complexes of algebraic D-modules on smooth complex algebraic varieties;
iv) ℂ-ana.: bounded ℂ-constructible complexes of ℂ-sheaves on complex analytic manifolds. By Riemann-Hilbert this is equivalent to bounded regular holonomic complexes of analytic D-modules.
6-functor formalisms All except Dist. have 6-functor formalisms.
Special features:
Ét.: the subclass of tame sheaves is not preserved under (proper) pushforward;
Dist.: having polynomial growth is not preserved under integrations;
D-mod_h: subclass of regular holonomic D-modules is stable under 6-functors;
Singular supports (SS) and characteristic cycles (CC): SS and CC are defined for Ét., D-mod_h and ℂ-ana.. CC's satisfy index formulas. SS is also defined for Dist. (which are called wavefronts instead). SS's are closed conic subsets in T^*X. Special features:
Ét.: SS's are half-dimensional;
Dist.: no special feature;
D-mod_h: SS's are Lagrangian; for general coherent (not necessarily holonomic) D-modules SS's are coisotropic;
ℂ-ana.: SS's are Lagrangian.
Fourier transforms: All of them have Fourier transforms (on X=𝔸^n). Special features:
Ét., Dist., D-mod_h: equivalence on the whole category;
ℂ-ana.: not an equivalence on the whole category but becomes an equivalence after restriction to conic sheaves.
Microlocal data:
Ét.: large data contained in wild ramifications (in dimension one: representation of local Galois groups);
Dist.: large data contained in (essential) singularities[e.g., Great Picard's Theorem: at an essential singularity x of a complex analytic function f, in any punctured neighbourhood of x, f takes all complex values infinitely many times, with at most one exception.];
D-mod_h: large data contained in irregular singularities (in dimension one: Stokes data); for general analytic D-modules, microlocalisation can be carried out and is the content of the theory of algebraic analysis (microfunctions, microdifferential operators...);
ℂ-ana.: relatively small data, microlocalisation can be carried out.
Extension properties:
Ét.: given a local system on 𝔸^1_(0)-{0}, there are in general many ways to extend it to a local system on 𝔾_m. However, there exists a unique (up to isomorphism) extension which is tame at ∞ (<cit.>);
Dist.: given a smooth function on a small punctured disk at the origin of ℝ, there are many ways to extend to a smooth function on ℝ-{0};
D-mod_h: given a vector bundle with a flat connection on the punctured formal disk at 0∈ℂ, there are in general many ways to extend it to a vector bundle with a flat connection on ℂ^×. However, there exists a unique (up to isomorphism) extension which is special (<cit.>);
ℂ-ana.: there is a unique way to extend a local system on a punctured small disk at the origin of ℂ to a local system on ℂ^×.
[EGA IV] A. Grothendieck and J. Dieudonné, Éléments de géométrie algébrique: IV. Étude locale des schémas et des morphismes de schémas, Quatrième partie, Publ. Math. IHÉS 32 (1967), 5-361.
[SGA4½] P. Deligne, Cohomologie étale, Séminaire de Géométrie Algébrique du Bois-Marie, Lecture Notes in Math. 569, Springer-Verlag, 1977.
[SGA7] Groupes de monodromie en géométrie algébrique, Séminaire de Géométrie Algébrique du Bois-Marie 1967-1969, I dirigé par A. Grothendieck, II par P. Deligne et N. Katz, Lecture Notes in Math. 288, 340, Springer-Verlag, 1972, 1973.
[Stacks] The Stacks Project, https://stacks.math.columbia.edu/, 2023.
ESGCN: Edge Squeeze Attention Graph Convolutional Network for Traffic Flow Forecasting
Sangrok Lee, Ha Young Kim
arXiv:2307.01227v2 [cs.LG, cs.AI], 3 July 2023
Traffic forecasting is a highly challenging task owing to the dynamical spatio-temporal dependencies of traffic flows.
To handle this, we focus on modeling the spatio-temporal dynamics and propose a network termed Edge Squeeze Graph Convolutional Network (ESGCN) to forecast traffic flow in multiple regions.
ESGCN consists of two modules: W-module and ES module.
W-module is a fully node-wise convolutional network.
It encodes the time-series of each traffic region separately and decomposes the time-series at various scales to capture fine and coarse features.
The ES module models the spatio-temporal dynamics using Graph Convolutional Network (GCN) and generates an Adaptive Adjacency Matrix (AAM) with temporal features.
To improve the accuracy of AAM, we introduce three key concepts.
1) Using edge features to directly capture the spatio-temporal flow representation among regions.
2) Applying an edge attention mechanism to GCN to extract the AAM from the edge features.
Here, the attention mechanism can effectively determine important spatio-temporal adjacency relations.
3) Proposing a novel node contrastive loss to suppress obstructed connections and emphasize related connections.
Experimental results show that ESGCN achieves state-of-the-art performance by a large margin on four real-world datasets (PEMS03, 04, 07, and 08) with a low computational cost.
§ INTRODUCTION
Traffic flow forecasting is a core component of intelligent transportation systems.
It is essential for analyzing traffic situations and aims at predicting the future traffic flow of regions using historical traffic data.
However, this task is challenging because of the heterogeneity and dynamic spatio-temporal dependence of traffic data.
Traffic data can be modeled using Graph Neural Networks (GNNs).
In such networks, the regions are represented as nodes and flows between regions as edges.
Graph Convolutional Network (GCN) is a type of GNN commonly used to handle traffic flow forecasting tasks.
It can adequately leverage the graph structure and aggregate node information <cit.>.
Because the edges define the intensity of the adjacency matrix in a graph operation, an accurate edge graph is an important factor in determining the performance of a GCN.
Recent studies focus on capturing the connection patterns via an adaptive adjacency matrix (AAM) that is found in the training process <cit.>.
In our study, we focus on enhancing the AAM with the following three distinctive features.
First, we propose an Edge Squeeze (ES) module that directly uses spatio-temporal flows with edge features to construct an AAM.
Recent studies in GCN revealed that edge features are equally important as node features <cit.>.
While the edge features can be used to simulate the traffic flows between regions, to the best of our knowledge this is the first study to build an adjacency matrix from the edge features in this task.
Existing methods use node embeddings which cannot accurately reflect the relationship among nodes because the embedding vectors are fixed for inference and only represent spatial nodes.
Therefore, they cannot handle dynamic patterns that occur at inference time and are unable to accurately capture spatio-temporal features.
However, the ES module leverages the temporal features directly to construct the AAM and reflects the changes of temporal features in the inference.
ES module creates three-dimensional (3D) spatio-temporal correlations beyond the spatial-specific embeddings.
Second, we develop a novel edge attention mechanism.
We further explore the edge features with the attention mechanism to refine the AAM.
Because the edge features represent the adjacency relations, we apply the attention mechanism to activate meaningful edges and suppress the others of the adjacency matrix.
The edge attention exploits the features' channel information, as in SENet <cit.>, and is referred to as squeeze attention.
Existing methods apply the transformer attention mechanism <cit.>, which imposes a high computational burden.
The channel attention mechanism has a relatively lower computational cost and can generate a more refined AAM.
Third, we introduce a novel node contrastive loss.
Previous studies computed the similarities between the node embeddings that were trained with a forecasting objective function for the AAM.
This method generated the adjacency matrix without an explicit objective for the shape of the graph, consequently inducing suboptimal performance.
To overcome this, we maximize the difference between related and unrelated nodes to facilitate separation of the forecasting relevant nodes from the residuals.
This aids in preventing the propagation of information on unrelated nodes through AAM.
Additionally, we propose a backbone network, W-module, to extract multi-scale temporal features for traffic flow forecasting.
W-module is a fully convolutional network that consists of node-wise convolution to handle time-series features of each node separately.
Owing to its non-autoregressive attributes and receptive field of convolution layers, the W-module can extract multiple levels of temporal features from shallow to deep layers and provide a hierarchical decomposition.
We combine the ES module with W-module and propose ESGCN to forecast traffic flow.
The main contributions of this study are summarized as follows:
* We propose an end-to-end framework (ESGCN) using two novel modules: ES module and W-module. ESGCN effectively learns hidden and dynamic spatio-temporal relationships using edge features.
* We introduce an edge attention mechanism and node contrastive loss to construct an AAM which captures accurately the relationships among nodes.
* We perform extensive experiments on four real-world datasets (PEMS03, 04, 07, and 08); the results show that ESGCN achieves state-of-the-art performance by a large margin with a low computational cost.
§ RELATED WORK
GCN for traffic forecasting. GCN is a special type of convolutional neural network that is widely used in traffic forecasting tasks <cit.>.
Recurrent-based GCN adopts a recurrent architecture, such as LSTM and replaces inner layers with GCN <cit.>.
This type of GCN handles spatial and temporal features recurrently; however, it suffers from the long-range memory loss problem.
STGCN <cit.>, GSTNet <cit.>, and Graph WaveNet <cit.> use fully convolutional architecture and graph operation.
They exploit spatial and temporal convolution separately to model spatio-temporal data.
These methods show relatively fast inference speed and improved performance on long-range temporal data.
Attention mechanism for traffic forecasting. Attention mechanisms are used to effectively capture spatio-temporal dynamics for traffic forecasting.
ASTGCN <cit.>, GMAN <cit.>, and STGRAT <cit.> exploited the attention mechanism which considers changes in road speed and diverse influence of spatial and temporal network.
Existing approaches leveraged transformer attention <cit.> that computes key, query and value relations.
Contrastingly, our proposed network adopts a channel attention such as SENet <cit.> and CBAM <cit.> which have relatively low computation cost and high speed.
Adaptive Adjacency Matrix Construction. Previous studies used a predefined adjacency matrix for GCN <cit.>.
In a previous study, a spatial AAM was proposed as a supplementary for the predefined adjacency matrix for graph WaveNet <cit.>.
STGAT <cit.> also utilized an AAM; however, it was limited to spatial dependencies.
AGCRN <cit.> exploited an AAM solely based on spatial node embedding.
In this study, we introduce a spatio-temporal based AAM that captures accurate relations.
§ PROBLEM DEFINITION
To predict future traffic, univariate time-series data from each region {X_1, X_2, X_3, …} are provided where X_i∈ℝ^n is the traffic record at time step i and n is the number of regions.
The purpose is to find a function ℰ that is capable of predicting the future of length o by analyzing existing T-length past data.
Traffic flows are generated for multiple regions and the problem is formulated as follows:
{X_t+1, …, X_t+o} = ℰ(X_t-T+1, …, X_t),
where t is an arbitrary time step.
§ PROPOSED METHOD
This section describes the edge squeeze graph convolution network (ESGCN).
The network comprises two modules: W-module and edge squeeze (ES) module, as shown in Fig. <ref>.
W-module extracts region-specific temporal features and ES module produces an AAM using temporal features with an edge attention and 3D spatio-temporal relations.
§.§ W-Module
W-module extracts the time-series features of each node and is split into four W-block groups as illustrated in Fig. <ref>.
We call each group a stage.
To organize the W-module, we use a combination of a gated node-wise convolution (GNC) and layer normalization <cit.>, called a W-block, as the smallest unit.
GNC has two node-wise convolution layers: one for embedding features with a tanh function and the other for gating with a sigmoid function, as shown in Fig. <ref>.
The two outputs are multiplied element-wise and summarized as follows:
GNC(X) = sigmoid(𝒞_g(X)) ∘ tanh(𝒞_e(X)),
where 𝒞_e and 𝒞_g are 2D convolution layers with a 1 × 3 kernel and (0, 1) padding (one for embedding and one for gating), and ∘ is element-wise multiplication.
X is an input feature of X ∈ℝ^c × n × t, where c is the number of channels, n is the number of nodes, and t is an arbitrary temporal dimension.
This 1×3 convolution layer is referred to as node-wise convolution.
Given that the convolution kernel has a size of one in the node dimension and expands the receptive field in the time dimension, the W-block is only responsible for extracting temporal features.
The output features of the GNC are fed to the layer normalization layer.
Each stage has 1, 2, 2, and 2 W-blocks, with a stride of 1 for the first stage and a stride of 2 for the others.
The output of the i-th stage is defined as F_i∈ℝ^c_i× n × l/2^(i-1), where c_i is specified as a hyperparameter and l is the input time-series length.
The receptive field of convolution layer expands as the feature passes through the various stages.
Therefore, the features of the early stages capture local signals and subtle changes and the features of late stages are learned for global signals and overall movements.
To take advantage of this when handling signals at multiple scales, the outputs of the intermediate stages are also used together with the last output features for the final prediction.
In the W-module, the four stages have receptive field sizes of 3, 9, 12, and 12, respectively.
This decomposes the time series hierarchically, and each stage is efficiently trained to be responsible for the signals within its receptive field.
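For concreteness, the following PyTorch sketch (our illustration, not the authors' released code) implements the gated node-wise convolution of the W-block under the (batch, channel, node, time) layout assumed above; the class name, argument names, and the stand-alone module structure are our own choices.

import torch
import torch.nn as nn

class GatedNodewiseConv(nn.Module):
    # Gated node-wise convolution (GNC): the 1x3 kernel spans only the time axis,
    # so every traffic region (node) is processed independently.
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.embed = nn.Conv2d(in_ch, out_ch, kernel_size=(1, 3),
                               stride=(1, stride), padding=(0, 1))
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size=(1, 3),
                              stride=(1, stride), padding=(0, 1))

    def forward(self, x):
        # x: (batch, channels, nodes, time); the node axis is left untouched
        return torch.sigmoid(self.gate(x)) * torch.tanh(self.embed(x))

# A W-block would follow the GNC with layer normalization; stacking 1, 2, 2, and 2
# such blocks (stride 2 from the second stage on) yields the four stages.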
§.§ Edge Squeeze Module
The ES module is a spatio-temporal feature extractor, which enhances temporal features with relations from the W-module.
As described in Fig. <ref>, it reflects the node relations through three steps: spatio-temporal correlation computation, relational feature extraction, and GCN operation.
Spatio-temporal correlation computation.
In this step, we construct spatio-temporal correlations to model flows among nodes.
The modeling involves three steps.
First, we feed the last features of W-module, F_4, to a single convolution layer to reduce the channel size and computation cost. This is defined as:
F_c = 𝒞(F_4), F_c ∈ℝ^ c_4/4× n × l_4,
where 𝒞 is a 1 × 1 convolution with c_4/4 output channels and l_4 = l/8.
Second, we extract a temporal representative from the l_4 temporal nodes of each region in order to compute spatio-temporal correlations between the n representatives and all n × l_4 nodes.
The node at the last time step is chosen as the representative.
This is inspired by the Markov decision process (MDP) <cit.>, which considers only the state at the current time step to decide the action at the next time step; moreover, the last time step is the value closest to the prediction horizon.
The representative function 𝒯 is as follows:
F_l = 𝒯(F_c), F_l ∈ℝ^c_4/4 × n,
where F_l collects the features of the last time step.
Finally, the relations between the features and the representatives are computed.
The original feature, F_c has n × l_4 nodes, and representative, F_l has n nodes.
The spatio-temporal correlations between F_c and F_l are in n × n × l_4 space.
Inspired by previous studies <cit.>, a similarity function is adopted to measure the correlations between spatio-temporal nodes.
The cosine similarity is adopted for this study.
The result of cosine similarity is bounded from -1 to 1, and it is suitable for comparing high-dimensional features <cit.>.
The cosine similarity cos is defined as follows:
cos(x_1, x_2) = x_1 · x_2/‖ x_1 ‖ _2 ·‖ x_2 ‖ _2,
where · is dot product and ‖·‖_2 is Euclidean norm.
The final spatio-temporal correlations S is defined as:
S = cos(F_l, F_c), S ∈ℝ^n × n × l_4
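The correlation step can be summarized by the following sketch, assuming the (batch, channel, node, time) layout used above; selecting the last time step inside the function and the batched einsum formulation are our illustrative choices rather than the authors' implementation.

import torch
import torch.nn.functional as F

def spatio_temporal_correlation(f_c):
    # f_c: (batch, c, n, t) reduced features; the representative f_l is the
    # last time step of f_c, following the MDP-inspired choice in the text.
    f_l = f_c[..., -1]                              # (batch, c, n)
    f_c_n = F.normalize(f_c, dim=1)                 # unit norm along channels
    f_l_n = F.normalize(f_l, dim=1)
    # S[b, i, j, k] = cosine similarity between representative node i and
    # the (node j, time step k) feature
    return torch.einsum('bci,bcjt->bijt', f_l_n, f_c_n)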
Relational features extraction.
Based on the previous step, each node has n × l_4 correlations.
We employ these correlations to weight F_4 and generate correlation-aware features in ℝ^c_4 × n × l_4 for each node.
Subsequently, we aggregate the temporal channel features, l_4, to reflect temporal contexts.
To facilitate computation, we expand the dimension of S to S^e ∈ℝ^n × c_4 × n × l_4 (1 to c_4) and define S_i^e ∈ℝ^c_4 × n × l_4 as the correlation of the i^th node.
In following equations, z(:,:,*) denotes *^th tensor of z on the last dimension.
The computation process is defined as:
R(:,:,k) = ∑_j=1^l_4 S_k^e(:,:,j) ∘ F_4(:,:,j),
where R(:, :, k) ∈ℝ^c_4 × n is the k^th node's relational features between all nodes, k = 1, 2, …, n.
We use the relational features as a feature matrix for graph convolution.
This has an advantage over other features as it reflects temporal relations.
The GCN operation only considers spatial node relations.
However, in spatio-temporal data, temporal relations need to be included.
In the relational features, each node has n different nodes created by considering temporal contexts.
Thus, we use R(:,:,k) tensor as n neighbor nodes with c_4 dimension to predict k^th node future flow.
It can reflect the temporal relations in the relational features and spatial relations in GCN operation.
GCN operation. GCN requires a feature matrix and an adjacency matrix.
As previously mentioned, the relational features are used as the feature matrix.
To construct the adjacency matrix, we apply an attention mechanism to the relational features.
Inspired by SENet <cit.> and CBAM <cit.>, we adopt a channel attention mechanism known as squeeze attention to refine the AAM from the relational features.
The squeeze attention activates important spatial and channel positions.
Therefore, we use this mechanism to extract the edge positions' importance.
To squeeze the edge features, we feed the relational features to a max operation, tanh, and ReLU activation as follows:
A = ReLU(tanh(max(R)))
The outcome, A ∈ℝ^n × n, is the generated AAM.
The graph operation with A and R is defined as:
F_g(:,k) = W ⊗ R(:, :, k) ⊗ A(k,:) + B,
where F_g(:,k) ∈ℝ^c_4 represents the features of the k-th node, W ∈ℝ^c_4 × c_4 is a learnable weight, B ∈ℝ^c_4 is a bias, and ⊗ is matrix multiplication.
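A minimal sketch of the squeeze attention and the graph operation is given below; the batch dimension is omitted for clarity, and indexing the adjacency with the target node along its second axis is our implementation choice, not necessarily the authors' convention.

import torch

def edge_squeeze_graph_op(R, W, B):
    # R: (c, n, n) relational features, where R[:, :, k] belongs to target node k
    # W: (c, c) learnable weight, B: (c,) bias
    A = torch.relu(torch.tanh(R.amax(dim=0)))       # (n, n) adaptive adjacency
    # F_g[:, k] = W @ R[:, :, k] @ A[:, k] + B  (columns of A indexed by target node)
    F_g = torch.einsum('cd,dnk,nk->ck', W, R, A) + B.unsqueeze(1)
    return F_g, A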
§.§ ESGCN
The proposed framework consists of the W-module and ES module.
The outputs of the first three stages of the W-module and ES module are employed to access all levels of features.
The computation process is summarized as follows:
P = ∑_i=1^t-1𝒞^ i(F_i) + 𝒞^e(F_g),
where t is the number of stages, and 𝒞^i and 𝒞^e are 1 × 1 convolution layers for the i-th stage and the ES module, respectively.
To predict future flow, two fully connected layers with ReLU activation function are used.
P is fed to the two fully connected layers.
Ŷ = W_b ⊗ ReLU (W_a ⊗ P + B_a) + B_b, Ŷ∈ℝ^o × n,
where W_a and W_b are the weights and B_a and B_b are the biases of the fully connected layers, and
o is the number of time steps in the prediction.
§.§ Loss Function
We adopt the Huber loss and the proposed node contrastive loss.
Huber loss is defined as follows:
h(Ŷ, Y) = { 1/2 (Ŷ - Y)^2,  if |Ŷ - Y| < δ;
δ|Ŷ - Y| - 1/2 δ^2,  if |Ŷ - Y| ≥ δ,
where δ is set to 1, Ŷ denotes predicted values and Y is ground truth values.
The objective function, L_h is defined as follows:
L_h = 1/b × o∑_k=1^b∑_i=1^o h( Ŷ_i^k, Y_i^k ),
where Ŷ_i^k and Y_i^k are the predicted value and ground truth of i-th time step of k-th sample in a mini batch respectively, and b is the number of samples in a mini batch.
Node contrastive loss. This is used to effectively separate related and unrelated nodes.
As shown in Fig. <ref>, the reversed adjacency matrix, A_r, is generated.
We modify the squeeze function as follows:
A_r = ReLU(-tanh(max(R)))
Subsequently, the unrelated features are extracted using the graph operation in the reversed adjacency matrix.
Finally, we maximize the distance between the related features, F_g, with the original adjacency matrix and the unrelated features, F_gr, with the reversed adjacency matrix.
The node contrastive loss is defined as:
L_n = 1/n tr(F_g^T⊗ F_gr),
where tr is the trace of a matrix and F_g^T ∈ℝ^n × c_4 denotes the transposed matrix. Minimizing the trace of this product can be viewed as reducing the dot-product similarity between the corresponding related and unrelated node features.
The final loss function is calculated as follows:
L = L_h + λ L_n,
where λ is set to 0.1 in our experiments.
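The combined objective can be sketched as follows; the function names are ours, and the related/unrelated features F_g and F_gr are assumed to be given as (c, n) matrices produced by the graph operations above.

import torch

def huber(y_hat, y, delta=1.0):
    # Huber loss of Eq. (h): quadratic below delta, linear above
    err = (y_hat - y).abs()
    return torch.where(err < delta,
                       0.5 * err ** 2,
                       delta * err - 0.5 * delta ** 2).mean()

def node_contrastive(F_g, F_gr):
    # Eq. (L_n): trace of F_g^T @ F_gr averaged over the n nodes
    n = F_g.shape[1]
    return torch.trace(F_g.t() @ F_gr) / n

def total_loss(y_hat, y, F_g, F_gr, lam=0.1):
    return huber(y_hat, y) + lam * node_contrastive(F_g, F_gr)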
§ EXPERIMENTS
§.§ Implementation Details
The proposed model is trained with an Adam optimizer <cit.> for 50 epochs.
The initial learning rate is 0.0003 and reduced by 0.7 every 5 epochs.
The weight decay factor for L2 regulation is set to 0.0001, and the batch size is set to 30 for PEMS07 and 64 for PEMS03, 04, and 08. Training sessions are conducted on an NVIDIA Tesla V100 and Intel Xeon Gold 5120 CPU.
§.§ Datasets
The framework is validated on four real-world traffic datasets, namely: PEMS03, PEMS04, PEMS07, and PEMS08 <cit.>.
Table <ref> shows the description of each dataset.
These four datasets contain generated traffic flows in four different regions of California using the Caltrans Performance Measurement System.
Time-series data are collected at 5-minute intervals.
Standard normalization and linear interpolation are used for stable training.
For a fair comparison, all datasets are split into training, validation, and test data at a ratio of 6 : 2 : 2. Twelve time steps (1 h) are used to predict the next 12 time steps (1 h) and all experiments are repeated 10 times with random seeds. The test data performance is verified by selecting the model of the epoch that showed the best performance in the validation data.
§.§ Baseline Methods
ESGCN is compared with the following models on the same hyperparameters and official implementations:
* STGCN: Spatio-temporal graph convolutional networks, which comprise spatial and temporal dilated convolutions <cit.>.
* ASTGCN: Attention-based spatial temporal graph convolutional networks, which adopt spatial and temporal attention into the model <cit.>.
* GraphWaveNet: Graph WaveNet exploits an adaptive adjacency graph and dilated 1D convolution <cit.>.
* GMAN: Graph multi-attention network uses spatial and temporal attention in graph neural network <cit.>.
* STSGCN: Spatial–temporal synchronous graph convolutional networks, which utilize a spatio-temporal graph that extends the spatial graph to the temporal dimension <cit.>.
* AGCRN: Adaptive graph convolutional recurrent network for traffic forecasting. This model exploits node adaptive parameter learning and an adaptive graph <cit.>.
* STFGNN: Spatial–temporal fusion graph neural networks, which leverage fast-DTW to construct a spatiotemporal graph <cit.>.
§.§ Comparison with the Baseline Methods
The proposed model is compared with state-of-the-art models.
ESGCN outperformed all other baselines in terms of RMSE, MAE, and MAPE, as shown in Table <ref>.
Compared to the current best performing models in each dataset (PEMS07: AGCRN, PEMS03,04, and 08: GMAN), ESGCN yields 4%, 2.7%, and 3.8% relative improvements on average for all datasets in RMSE, MAE, and MAPE.
Graph WaveNet and AGCRN employ the AAM and STFGNN uses a non-adaptive spatio-temporal adjacency matrix.
Compared to these methods, the proposed model shows superior performance, which confirms the effectiveness of our AAM refined with attention and the node contrastive loss.
Based on the experimental results, ESGCN has improved representation ability and exhibits promising forecasting performance.
§.§ Computational Cost
To evaluate the computational cost, we compare the number of parameters, the training time, and the inference time of our model with those of baselines in Table <ref>.
ESGCN has the fewest parameters compared to the other baselines.
In training time, ESGCN is faster than STFGNN and slightly slower than AGCRN.
ESGCN has the third-fastest inference speed.
Although AGCRN shows faster training and inference times, the differences (5 s in training and 0.6 s in inference) are insignificant.
Additionally, AGCRN requires three times more parameters than our model.
In particular, our model, which uses channel attention, runs faster and has fewer parameters than GMAN, which leverages transformer attention.
Considering its superior performance, ESGCN has an acceptable computational cost.
§.§ Ablation Study
§.§.§ Components
To validate the proposed ES module and the node contrastive loss, we conduct experiments (cases 1-3) as shown in Table <ref>.
The model with only W-module (case 1) shows the lowest performance.
However, the W-module with the ES module which reflects traffic flows using GCN (case 2) achieves significant improvement.
ESGCN, consisting of the W-module, ES module, and the node contrastive loss (case 3) outperforms the others.
This highlights the feature-enhancement ability of the ES module and the benefit of the proposed loss function.
§.§.§ Attention operation
To extract important adjacency relations, our edge attention uses a max operation to squeeze the channel features.
However, CBAM <cit.> also uses average operation to squeeze channel features.
In this section, we conduct an ablation study (cases 3, 8, and 9) on the attention operation in Table <ref>.
We empirically find that the max operation is the best setting.
Notably, the attention function with a learnable layer (case 9) shows lower performance because the dimension reduction in that layer hampers extraction of the AAM <cit.>.
§.§.§ Node contrastive loss
We use a hyperparameter λ to balance the Huber loss and node contrastive loss.
Although the node contrastive loss assists in the construction of the refined AAM, an excessive effect of this loss can negatively affect forecasting performance.
We empirically determine the magnitude of λ.
The experimental results (cases 3-7) are shown in Table <ref>.
Based on the results, the optimal value of λ is 0.1.
We set λ to 0.1 in all experiments.
Fig. <ref> additionally shows the resulting sparse AAM, in which unrelated connections are removed.
§.§.§ Representative function
The representative function of the ES module extracts temporal representatives along the time dimension.
Inspired by MDP, the representative function returns the last step node of F_c.
To test whether the last node can be a temporal representative, we conduct an experiment by replacing the last node with the first node, F_c(:, :, 0), and the middle node, F_c(:, :, l_4/2).
The experiment results (cases 3,10, and 11) are shown in Table <ref>.
The closer the representative node is to the initial time step, the lower the performance.
Our representative function is based on the assumption that the value closest to the prediction horizon is the best representative.
This ablation study empirically shows that this assumption is reasonable.
§ CONCLUSION AND FUTURE WORK
This study proposes a novel method, ESGCN, to address traffic flow forecasting.
Experiments show that the proposed model achieves state-of-the-art performance on four real-world datasets while requiring the fewest parameters and offering relatively fast inference and training.
In the future, given that ESGCN is designed as a general framework to handle spatio-temporal data, it can be applied to other applications that have spatio-temporal data structures such as regional housing market prediction and electricity demand forecasting.
arXiv:2307.01998v1 [cs.LG, cs.CV], 5 July 2023
Recently, zero-shot (or training-free) Neural Architecture Search (NAS) approaches have been proposed to liberate the NAS from training requirements. The key idea behind zero-shot NAS approaches is to design proxies that predict the accuracies of the given networks without training network parameters. The proxies proposed so far are usually inspired by recent progress in theoretical deep learning and have shown great potential on several NAS benchmark datasets. This paper aims to comprehensively review and compare the state-of-the-art (SOTA) zero-shot NAS approaches, with an emphasis on their hardware awareness. To this end, we first review the mainstream zero-shot proxies and discuss their theoretical underpinnings.
We then compare these zero-shot proxies through large-scale experiments and demonstrate their effectiveness in both hardware-aware and hardware-oblivious NAS scenarios. Finally, we point out several promising ideas to design better proxies. Our source code and the related paper list are available on <https://github.com/SLDGroup/survey-zero-shot-nas>.
Neural Architecture Search, Zero-shot proxy, Hardware-aware neural network design
Zero-Shot Neural Architecture Search: Challenges, Solutions, and Opportunities
Guihong Li, Student Member, IEEE,
Duc Hoang, Student Member, IEEE,
Kartikeya Bhardwaj, Member, IEEE,
Ming Lin, Member, IEEE,
Zhangyang Wang, Senior Member, IEEE,
Radu Marculescu, Fellow, IEEE
Guihong Li, Duc Hoang, Zhangyang Wang, and Radu Marculescu are with the Department of Electrical and Computer Engineering, The University
of Texas at Austin, TX, 78712. E-mail: {lgh, hoangduc, atlaswang, radum}@utexas.edu
Kartikeya Bhardwaj is with Qualcomm AI Research, an initiative of Qualcomm Technologies, Inc., CA, 92121. E-mail: [email protected]
Ming Lin is with Amazon, WA, 98004. E-mail: [email protected].
Correspondence to Radu Marculescu ([email protected]).
August 1, 2023
§ INTRODUCTION
In recent years, deep neural networks have made significant breakthroughs in many applications, such as recommendation systems, image classification, and natural language modeling <cit.>. To automatically design high-performance deep networks, Neural Architecture Search (NAS) has been proposed during the past decade <cit.>. Specifically, NAS boils down to solving an optimization problem with specific targets (e.g., high classification accuracy) over a set of possible candidate architectures (the search space) under a set of computational budgets. Recent breakthroughs in NAS simplify the trial-and-error manual architecture design process and discover new deep network architectures with better performance and efficiency than hand-crafted ones <cit.>. Therefore, NAS has attracted significant attention from both academia and industry.
One important application of NAS is to design hardware-efficient deep models under various constraints, such as memory footprint, inference latency, and power consumption; hence, hardware-aware NAS has been proposed to incorporate such hardware constraints during the search process <cit.>. Roughly, existing NAS approaches can be categorized into three groups, as shown in Fig. <ref>: multi-shot NAS, one-shot NAS, and zero-shot NAS. Multi-shot NAS methods involve training multiple candidate networks and are therefore time-consuming; they can take from a few hundred GPU hours <cit.> to thousands of GPU hours <cit.>. One-shot NAS methods alleviate the computational burden by sharing candidate operations via a hyper-network <cit.>.
For example, as shown in Fig. <ref>, one-shot NAS can obtain the optimal architecture by using gradient descent on some training samples over the learnable weights of each candidate operation. Notably, one-shot NAS only needs to train a single hyper-network instead of multiple candidate architectures, whose number is usually exponentially large. The orders-of-magnitude reduction in training time enables differentiable search to achieve competitive accuracy against multi-shot NAS, but with much lower search costs <cit.>.
Nevertheless, naively merging all candidate operations into a hyper-network is not efficient because the parameters of all operations need to be stored and updated during the search process. Consequently, weight-sharing methods improve the search efficiency of NAS even further <cit.>. As shown in Fig. <ref>, the key idea of weight-sharing NAS is to share the parameters across different operations. At each training step, a sub-network is sampled from the original hyper-network and the updated parameters are then copied back to the hyper-network. By sharing the parameters of various sub-networks, this differentiable search approach significantly reduces the total number of parameters, thus reducing the search costs to a few or tens of GPU hours <cit.>.
Though differentiable search and weight sharing have significantly improved the time efficiency of NAS, training is still required in one-shot NAS methods. In the last few years, zero-shot NAS has been proposed to liberate NAS from parameter training entirely <cit.>.
Compared to multi-shot and one-shot methods, zero-shot NAS has the following major advantages: (i) Time efficiency: zero-shot NAS utilizes a proxy for the model's test accuracy to eliminate model training altogether during the search stage. Compared to model training, the computation of these proxies is much more lightweight. Therefore, zero-shot NAS can significantly reduce the cost of NAS while achieving test accuracy comparable to one-shot and multi-shot NAS approaches (see Fig. <ref>). (ii) Explainability: Clearly, the quality of the accuracy proxy ultimately determines the performance of zero-shot NAS. The design of an accuracy proxy for zero-shot NAS is usually inspired by some theoretical analysis of deep neural networks, thus deepening the theoretical understanding of why certain networks may work better. For instance, some recent approaches use the number of linear regions to approximate the complexity of a deep neural network <cit.>. Moreover, the connection between the gradient of a network at random initialization and the accuracy of that network after training is widely explored to build proxies of the model's test accuracy in zero-shot NAS <cit.>.
Based on these overarching observations, this paper aims to comprehensively analyze existing hardware-aware zero-shot NAS methods. Starting from the theoretical foundations of deep learning, we first investigate various proxies of test accuracy and their theoretical underpinnings. Then, we introduce several popular benchmarks for evaluating the performance of zero-shot NAS methods. Moreover, we compare the performance of these proxies on several NAS benchmarks and demonstrate their effectiveness when applied to hardware-aware NAS; notably, we reveal some fundamental limitations of existing proxies. Finally, we discuss several potential research directions for hardware-aware zero-shot NAS. Overall, this paper makes the following contributions:
* We review existing proxies for zero-shot NAS and provide the theoretical insights behind these proxies. We categorize the existing accuracy proxies into (i) gradient-based proxies and (ii) gradient-free proxies.
* We conduct direct comparisons of various zero-shot proxies against two naive proxies, i.e., #Params and #FLOPs, and reveal a fundamental limitation of many existing proxies: they correlate much worse with the test accuracy in constrained search settings (i.e., when considering only networks of high accuracy) compared to unconstrained settings (i.e., considering all architectures in the given search space).
* We further conduct a thorough study including proxy design, benchmarks, and real hardware profiling for zero-shot NAS. We show that a few proxies have a better correlation with the test accuracy than these two naive proxies (#Params and #FLOPs) on top-performing architectures such as ResNets and MobileNets.
* We discuss the limitations of existing zero-shot proxies and NAS benchmarks; we then outline a few possible directions for future research.
Of note, compared to other existing zero-shot NAS surveys <cit.>, we not only cover all existing proxies, but also provide a deep analysis of the theoretical underpinnings behind them. We believe that understanding the theoretical design considerations behind these proxies is very important for future proxy improvements. Moreover, for the first time, we conduct detailed comparisons when applying zero-shot NAS to hardware-aware scenarios. This is crucial for deploying zero-shot approaches in practice, especially for edge-AI applications.
The remainder of this paper is organized as follows.
We introduce the zero-shot proxies in Section <ref>. Section <ref> surveys
existing NAS benchmarks. The mainstream hardware performance models are presented in Section <ref>. Next, we evaluate the various zero-shot proxies under hardware-aware settings in Section <ref> and point out future research directions. We conclude the paper in Section <ref>.
§ ZERO-SHOT PROXIES
The goal of zero-shot NAS is to design proxies that can rank the accuracy of candidate network architectures at the initialization stage, i.e., without training, such that we can replace the expensive training process in NAS with some computation-efficient alternatives. Hence, the proxy for the accuracy ranking is the key factor of zero-shot NAS.
In <cit.>, the existing accuracy proxies are classified into two classes: (i) data-dependent, where the accuracy proxy is calculated over the real data of the target task; (ii) data-independent, where the proxy's value doesn't rely on the real data. In this paper, we categorize zero-shot proxies from another perspective: depending on whether or not the gradients are involved in proxy's calculation, the existing accuracy proxies fall into two major classes: (i) gradient-based accuracy proxy and (ii) gradient-free accuracy proxy (summarized in Table <ref>). The symbols used in this section and their corresponding meaning are summarized in Table <ref>.
§.§ Gradient-based accuracy proxies
We first introduce several similar proxies derived from the gradient over parameters of deep networks.
§.§.§ Gradient norm
The gradient norm is the sum of norms for each layer's gradient vector <cit.>.
To calculate the gradient norm, we first input a mini-batch of data into the network and then propagate the loss values backward.
Next, we calculate the ℓ_2-norm of each layer's gradient and then add them up for all the convolution and linear layers of the given network. Formally, the definition of gradient norm G is as follows:
G≜∑_i=1^D ‖∇_θ_iL‖_2
where D, θ_i, and L are the number of layers, the parameter vector of the i-th layer of a given network, and the loss value, respectively.
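A one-batch estimate of this proxy can be sketched in PyTorch as follows; summing over every parameter tensor is used here as a stand-in for the per-layer sum, and the function signature is our own.

import torch

def grad_norm_proxy(model, loss_fn, x, y):
    # One mini-batch estimate of Eq. (G): sum of L2 norms of the gradients
    model.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    return sum(param.grad.norm(2).item()
               for param in model.parameters() if param.grad is not None)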
§.§.§ SNIP
The gradient norm only measures the property of the gradient's propagation for a given network. To jointly measure the parameter importance both in forward inference and gradient propagation, SNIP consists of multiplying the value of each parameter and its corresponding gradient <cit.>. Formally, SNIP is defined as below:
SNIP≜∑_i^D|⟨θ_i,∇_θ_iL ⟩|
where ⟨·,·⟩ denotes the inner product; D, θ_i, and L are the number of layers, the parameter vector of the i-th layer of a given network, and the loss value, respectively, as before.
§.§.§ Synflow
Similar to SNIP, Synflow is obtained by keeping the sign of each term instead of taking its absolute value <cit.>:
Synflow≜∑_i^D⟨θ_i,∇_θ_iL ⟩
§.§.§ GraSP
The above three proxies only take the first-order derivatives of neural networks into account. The GraSP proxy considers both the first-order and second-order derivatives of neural networks <cit.>. Specifically, GraSP multiplies the gradient and Hessian matrix of parameters:
GraSP≜∑_i^D -⟨H_i∇_θ_iL, θ_i⟩
where H_i is the Hessian matrix of the i-th layer.
There are multiple theoretical analyses for the above proxies. Specifically, Synflow and SNIP have been proven to be layer-wise constants in linear networks during the back-propagation process <cit.>. Moreover, the authors in <cit.> show that Synflow and GraSP are different approximations of the first-order Taylor expansion of deep neural networks. We remark that the Taylor expansion of a deep network can identify the parameters that contribute the most to the loss value; thus, it can measure the importance of parameters.
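The three parameter-gradient proxies can be estimated on a single batch as sketched below. This is only a sketch following the survey's definitions above (in particular, Synflow is computed here as SNIP without the absolute value, whereas the original Synflow paper uses a data-free, all-ones-input loss), and the Hessian-gradient product for GraSP is obtained via double backpropagation.

import torch

def snip_synflow_grasp(model, loss_fn, x, y):
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, params, create_graph=True)

    snip = sum((p * g).abs().sum() for p, g in zip(params, grads))
    synflow = sum((p * g).sum() for p, g in zip(params, grads))

    # GraSP: Hessian-gradient product via double backward, using
    # d/dtheta <grad(theta), stop_grad(grad)> = H * grad
    z = sum((g * g.detach()).sum() for g in grads)
    hg = torch.autograd.grad(z, params)
    grasp = -sum((h * p).sum() for h, p in zip(hg, params))
    return snip.item(), synflow.item(), grasp.item()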
Besides the gradient over parameters, the gradient over each layer's activation is also explored to build the accuracy proxy as shown below.
§.§.§ Fisher information
Authors of <cit.> show that Fisher information of a neural network can be approximated by the square of the activation value and their gradients:
Fisher≜∑_i^D⟨∇_A_iL, A_i⟩^2
where A_i is the feature map (activation) vector of the i-th layer of a given network.
Previous works show that a second-order approximation of Taylor expansion in a neural network is equivalent to an empirical estimate of the Fisher information <cit.>. Hence, measuring the Fisher information of each neuron/channel of a given network can reflect the importance of these neurons/channels.
§.§.§ Jacobian covariant
Besides the gradient over parameters and activations, the Jacobian covariant (Jacob_cov) leverages the gradient over the input data <cit.>. To get the Jacob_cov proxy, given an input batch with B input samples {x_1, x_2, ..., x_B}, the gradient matrix of the outputs {y_1, y_2, ..., y_B} with respect to these inputs is first computed:
J=(∇_x_1y_1, ∇_x_2y_2, ..., ∇_x_By_B)^T
Next, the raw covariance matrix G is generated as:
G = (J - M)(J - M)^T
where M_i,j =1/B∑_n=1^BJ_i,n. Then the raw covariance matrix is normalized to get the real covariance matrix Γ:
Γ_i,j = G_i,j/√( G_i,i G_j,j)
where Γ_i,j denotes the entries of Γ.
Let λ_1≤λ_2≤... ≤λ_B be the B eigenvalues of Γ; then the Jacobian covariant is generated as follows:
Jacob_cov≜ -∑_i=1^B[log(λ_i+ϵ) + (λ_i+ϵ)^-1]
where ϵ is a small value used for numerical stability.
As discussed in <cit.>, Jacob_cov can reflect the expressivity of deep networks thus higher Jacob_cov values indicate better accuracy.
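A sketch of this computation is given below; taking the sum of the network outputs as the per-sample scalar to differentiate is our assumption, since the text does not fix this choice.

import torch

def jacob_cov(model, x, eps=1e-5):
    # Sketch of J, Gamma, and the Jacob_cov score on one mini-batch
    x = x.clone().detach().requires_grad_(True)
    model(x).sum().backward()
    J = x.grad.reshape(x.shape[0], -1)                 # (B, input_dim)
    Jc = J - J.mean(dim=1, keepdim=True)               # row-centered Jacobian
    G = Jc @ Jc.t()                                    # raw covariance (B, B)
    d = torch.sqrt(torch.diag(G))
    Gamma = G / (d[:, None] * d[None, :])              # normalized covariance
    lam = torch.linalg.eigvalsh(Gamma)
    return -(torch.log(lam + eps) + 1.0 / (lam + eps)).sum().item()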
§.§.§ Zen-score
Zen-score is a more recently proposed proxy for a given model <cit.>. The Zen-score is defined as:
Zen≜log𝔼_x,ϵ(‖f_e(x)- f_e(x + αϵ)‖_F) +∑_k,ilog(√(∑_jσ_ij^k/Ch_i)),
x∼𝒩(0,I)
where x is a sampled Gaussian random vector, ϵ is a small input perturbation, ‖·‖_F indicates the Frobenius norm, α is a tunable hyper-parameter, Ch_i is the number of channels of the i-th convolution layer, and σ_ij^k is the variance of the i-th layer's j-th channel for the k-th sample in an input batch. As shown in Eq. <ref>, the Zen-score measures model expressivity by averaging the Gaussian complexity under randomly sampled x and ϵ. We note that this is equivalent to computing the expected gradient norm of f with respect to the input x instead of the network parameters. Hence, the Zen-score measures the expressivity of neural networks rather than their trainability: networks with a higher Zen-score have better expressivity and thus tend to have better accuracy.
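The first (perturbation) term of the Zen-score can be estimated by simple Monte-Carlo sampling, as sketched below; the batch-normalization variance term is omitted here, and feature_extractor denotes the pre-pooling part f_e of the network (our naming, not part of any particular library).

import torch

def zen_score_first_term(feature_extractor, input_shape, alpha=0.01, repeats=16):
    # Monte-Carlo estimate of log E || f_e(x) - f_e(x + alpha * eps) ||_F
    # for Gaussian x and eps; the BN-variance correction term is not included.
    diffs = []
    with torch.no_grad():
        for _ in range(repeats):
            x = torch.randn(input_shape)
            eps = torch.randn(input_shape)
            delta = feature_extractor(x) - feature_extractor(x + alpha * eps)
            diffs.append(delta.pow(2).sum().sqrt())
    return torch.log(torch.stack(diffs).mean()).item()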
§.§.§ NTK Condition Number
The Neural Tangent Kernel (NTK) was proposed to study the training dynamics of neural networks <cit.>. More precisely, given two input samples x_1 and x_2, the NTK is defined as:
κ(x_1,x_2)=J(x_1)J(x_2)^T
where J(x) is the Jacobian matrix evaluated at the sample x <cit.>. The authors in <cit.> prove that the training dynamics of wide neural networks can be solved as follows:
μ_t(X_train)=(𝕀-e^-ηκ(X_train, X_train) t)Y_train
where t denotes the training step; μ_t represents the output expectations at training step t; X_train and Y_train are the training samples and their corresponding labels; η is the learning rate. By conducting the eigendecomposition of Eq. <ref>, the i-th dimension in the eigenspace of the output expectation can be written as follows:
μ_t(X_train)_i=(1-e^-ηλ_i t)Y_train,i, i={1,2,...,m}
where λ_1≤λ_2≤... ≤λ_m are the eigenvalues of the NTK κ(X_train, X_train). Therefore, a smaller difference between λ_1 and λ_m indicates (on average) a more “balanced" convergence among the different dimensions in the eigenspace. To quantify the above observation, the NTK Condition Number (NTK_Cond) is defined as follows <cit.>:
NTK_Cond≜𝔼_X_train,Θ[λ_m/λ_1]
where Θ denotes the randomly initialized network parameters. The authors of <cit.> demonstrate that NTK_Cond is negatively correlated with the architecture’s test accuracy. Hence, networks with lower NTK_Cond values tend to have a higher test accuracy. Similar insights are reported and leveraged in <cit.> for NAS of vision transformers (ViTs).
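An empirical estimate of NTK_Cond on one batch can be sketched as follows; using the sum of the logits as the per-sample scalar output is an assumption, and the dense per-parameter Jacobian below is only practical for small models.

import torch

def ntk_condition_number(model, x):
    # Empirical NTK: K = J J^T, with J the parameter Jacobian of a scalar
    # per-sample output; the score is lambda_m / lambda_1.
    params = [p for p in model.parameters() if p.requires_grad]
    rows = []
    for xi in x:
        out = model(xi.unsqueeze(0)).sum()
        g = torch.autograd.grad(out, params)
        rows.append(torch.cat([gi.reshape(-1) for gi in g]))
    J = torch.stack(rows)                          # (B, #params)
    lam = torch.linalg.eigvalsh(J @ J.t())         # ascending eigenvalues
    return (lam[-1] / lam[0]).item()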
§.§ Gradient-free accuracy proxy
Though the gradient-based proxies do not require the training process on the entire dataset, backward propagation is still necessary to compute the gradient. To entirely remove the gradient computation from the neural architecture search, several gradient-free proxies have been proposed lately.
§.§.§ Number of linear regions
The number of linear regions of a network quantifies how many regions a given network could divide the input space into; thus, it describes the expressivity of a given network <cit.>. For instance, a single-neuron perceptron with a ReLU activation function can divide its input space into two regions.
<cit.> proves that one can estimate the number of linear regions with the help of the binary activation patterns, collected in an activation matrix A whose rows are the activation patterns of the input samples:
R=1·1^T- sign[A(1-A)^T + (1-A)A^T]
where 1 is an all-ones vector. Next, by removing the repeating patterns and assigning the weights to each pattern, the number of linear regions ρ is as follows:
ρ≜∑_j1/∑_kR_j,k
where R_j,k is the entry of R. Therefore, the number of linear regions measures how many unique regions the network can divide the entire activation space into (see Fig. <ref>).
§.§.§ Logdet
Logdet is another proxy proposed based on the number of linear regions <cit.>:
K=[ N_LR-d_H(c_1,c_1) ... N_LR-d_H(c_1,c_N); ... ... ...; N_LR-d_H(c_N,c_1) ... N_LR-d_H(c_N,c_N) ]
Logdet≜log|K|
where N_LR is the total number of linear regions, d_H is the Hamming distance, and c_i is the binary coding vector of the i-th linear region, as shown in Fig. <ref>. Previous work shows that networks with a higher Logdet at initialization tend to have higher test accuracy after training <cit.>.
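A sketch in the spirit of these two proxies is shown below; it uses per-sample binary ReLU codes and the total number of ReLU units (a common NASWOT-style simplification) rather than the exact enumeration of distinct linear regions described above, and the helper names are ours.

import torch

def logdet_score(model, x):
    # Collect binary post-ReLU codes for each input sample via forward hooks,
    # then score the Hamming-distance kernel by log|K|.
    codes, hooks = [], []

    def hook(module, inputs, output):
        codes.append((output.detach() > 0).flatten(1).float())

    for m in model.modules():
        if isinstance(m, torch.nn.ReLU):
            hooks.append(m.register_forward_hook(hook))
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()

    C = torch.cat(codes, dim=1)                      # (B, total ReLU units)
    hamming = C @ (1 - C).t() + (1 - C) @ C.t()      # pairwise Hamming distances
    K = C.shape[1] - hamming                         # kernel matrix
    return torch.logdet(K).item()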
§.§.§ Topology inspired proxies
The very first pioneering work on theoretically grounded, training-free architecture design was done by Bhardwaj et al. <cit.>. While the above proxies are proposed for a general search space, i.e., without any constraints on the candidate architectures, as discussed later, these general-purpose proxies are not better than some naive proxies, e.g., the number of parameters (#Params) of a model. To design better accuracy proxies than #Params, Bhardwaj et al. <cit.> constrained the search space to specific topologies, e.g., DenseNets, ResNets, and MobileNets, and theoretically studied how network topology influences gradient propagation. Inspired by network science, NN-Mass is defined as follows <cit.>:
ρ_c ≜#Actual skip connections of cell c/#Total possible skip connections of cell c
NN-Mass≜∑_each cell cρ_c w_c d_c
where w_c and d_c are the width and depth values of a cell[A cell represents a group of layers with the same width values or commonly used blocks in CNN, e.g., Basic/Bottleneck blocks in ResNet, and Inverted bottleneck blocks in MobileNet-v2.], respectively. Authors of <cit.> prove that higher NN-Mass values indicate better trainability of networks and faster convergence rate during training. Moreover, they also show that networks with higher NN-Mass values tend to achieve a higher accuracy. NN-Mass has also been used to perform training-free model scaling to significantly improve accuracy-MACs tradeoffs compared to highly accurate models like ConvNexts <cit.>. In <cit.>, the authors show the connection between NN-Mass and expressive power of deep networks for ResNet-type networks.
As an extension of NN-Mass, NN-Degree is proposed by relaxing the constraints on the width of networks. Formally, NN-Degree is defined as follows <cit.>:
NN-Degree=∑_each cell c(w_c + #Actual skip connections/#Total input channels)
where w_c is the average width value of a cell c. Similarly to NN-Mass, NN-Degree has shown a high positive correlation with the test accuracy.
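Both proxies reduce to simple arithmetic over the cell descriptions; the sketch below computes NN-Mass, where the dictionary keys are hypothetical names for the quantities in the equation above.

def nn_mass(cells):
    # NN-Mass = sum over cells of rho_c * width_c * depth_c
    total = 0.0
    for c in cells:
        rho_c = c['actual_skips'] / c['possible_skips']   # skip-connection density
        total += rho_c * c['width'] * c['depth']
    return total

# Example: two identical cells of width 64 and depth 5 with 6 of 10 possible skips
# give nn_mass([{'width': 64, 'depth': 5, 'actual_skips': 6, 'possible_skips': 10}] * 2) = 384.0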
Lately, <cit.> developed another principled understanding of how a neural network's connectivity pattern affects its capacity and trainability. Specifically, the authors theoretically characterized, at a fine granularity, the impact of connectivity patterns on the convergence of deep networks under gradient-descent training, by assuming a wide network and analyzing its Neural Network Gaussian Process (NNGP) <cit.>. They prove that the way the spectrum of the NNGP kernel propagates through a particular connectivity pattern affects the bounds on the convergence rate. On the practical side, they show that such an NNGP-based characterization can act as a simple filter of “unpromising" connectivity patterns, significantly accelerating large-scale neural architecture search without any overhead.
§ BENCHMARKS AND PROFILING MODELS
NAS benchmarks have been proposed to provide a standard test kit for fair evaluation and comparisons of various NAS approaches <cit.>. A NAS benchmark defines a set of candidate architectures and their test accuracy or hardware costs. We classify the existing NAS benchmarks as standard NAS (i.e., without hardware costs) and hardware-aware NAS benchmarks. Next, we introduce these two types of NAS benchmarks.
§.§ Standard NAS Benchmarks
§.§.§ NASBench-101
NASBench-101 provides users with 423k neural architectures and their test accuracy on the CIFAR10 dataset; the architectures are built by stacking a cell multiple times <cit.>. The cells in the NASBench-101 search space can have at most seven operations; each operation is sampled from the following three options: 3× 3 convolution, 1× 1 convolution, and max pooling.
§.§.§ NATS-Bench
NATS-Bench has two different search spaces.
NATS-Bench-TSS is also called NASBench-201. Similar to NASBench-101, the networks in NASBench-201 are also built by repeating a cell multiple times <cit.>. One of the advantages of NASBench-201 over NASBench-101 is that NASBench-201 provides test accuracy on more datasets, namely, CIFAR10, CIFAR100, and ImageNet16-120. As shown in Fig. <ref>, the cells in the NASBench-201 search space have six operations; each operation is sampled from the following five options: null, skip connection, 3× 3 convolution, 1× 1 convolution, and 3× 3 average pooling. Therefore, there are 5^6=15625 network architectures in the NASBench-201 benchmark.
NATS-Bench-SSS is the succeeding version of NASBench-201: it contains 32768 architectures with different width values for each layer; in the rest of the paper, we use NATS-Bench to refer to NATS-Bench-SSS for short <cit.>.
§.§.§ TransNAS-Bench-101
TransNAS-Bench-101 is a benchmark dataset containing network performance on seven diverse vision tasks, including image classification, image reconstruction, and pixel-level prediction <cit.>. As for the candidate architectures, there are two different sub-search spaces: (i) a cell-level search space consisting of 4,096 unique networks with different cells, where each cell has six operations sampled from the following four options: null, skip connection, 1×1 convolution, and 3×3 convolution; (ii) a macro-level search space containing 3,256 unique networks with different depths (number of blocks, varying from 4 to 6) and different locations of the down-sampling and widening layers.
§.§ Hardware-aware NAS benchmarks
The above benchmarks do not consider hardware constraints. Recent hardware-aware NAS approaches aim to jointly optimize the test performance and hardware efficiency of neural architectures. Hence, hardware-aware NAS benchmarks have been proposed by incorporating the hardware costs of networks into the search process.
HW-NAS-Bench covers the search spaces of both NASBench-201 and FBNet <cit.>. It provides, for all architectures in these two search spaces, the measured or estimated
hardware cost (i.e., latency and energy consumption) on six different types of devices: NVIDIA Jetson TX2 (EdgeGPU), Raspberry Pi 4, Google Edge TPU, Pixel 3 phone, ASIC-Eyeriss, and Xilinx ZC706 (FPGA).
§.§ Hardware Performance Models
To incorporate hardware awareness into NAS, we also need to construct models that efficiently and accurately estimate the hardware performance (e.g., latency) of given networks. In this section, we consider latency to characterize the hardware performance and use NASBench-201 as an example to compare several representative hardware performance models.
BRP-NAS is a pioneering approach that uses deep learning to build hardware performance models <cit.>. Specifically, BRP-NAS first converts a neural network into a directed acyclic graph by modeling each layer as an edge in the graph and the inputs/outputs as nodes. Next, by using different values to represent different types of layers, BRP-NAS uses a graph convolutional network (GCN) to build the hardware performance model. The model is then trained with multiple networks and their real hardware performance data on the target hardware. In particular, for networks with fixed depth, BRP-NAS can also use an MLP to build the performance model.
Though BRP-NAS can achieve good prediction results with enough training samples, it has a limitation: the performance model is trained for a specific hardware platform; if new hardware arrives, the entire process needs to be repeated.
To address the above problem, HELP builds hardware performance models by taking the hardware information as extra input features (e.g., the type of hardware, the number of computing elements, and the size of the on-chip memory) <cit.>. HELP is trained with latency data collected from multiple platforms, such as desktop CPUs/GPUs and mobile CPUs/GPUs. This way, if new hardware comes in, HELP only needs a few samples (typically around 10) for fine-tuning. Hence, HELP is very efficient in terms of transferability to new hardware.
Nevertheless, both BRP-NAS and HELP are built on layer-level analysis, which is relatively coarse for accurate prediction.
To further improve the accuracy of performance models, NN-Meter analyzes the neural network at a finer, run-time granularity. Specifically, NN-Meter works at the level of the kernels of each neural network, which are originally generated during the compilation process <cit.>. To remove the necessity of the compilation process, NN-Meter utilizes an algorithm that automatically predicts the generated kernels. Hence, as shown in Table <ref>, NN-Meter has a much higher prediction quality than both HELP and BRP-NAS.
§ EXPERIMENTAL RESULTS
In this section, we compare the existing proxies on multiple NAS benchmarks under various scenarios. Besides the proxies mentioned above, we also evaluate two naive proxies, i.e., #Params and #FLOPs.
§.§ NAS without hardware-awareness
To compare the performance of these proposed accuracy proxies, we calculate the correlation of these proxy values vs. the real test accuracy. We next discuss the results on two NAS benchmarks: NASBench-201 and NATS-Bench.
§.§.§ Unconstrained search space
We first investigate the performance of zero-shot proxies for the unconstrained search spaces, i.e., considering all networks in the benchmarks.
NASBench-201:
We calculate the correlation coefficients between multiple proxies and the test accuracy on CIFAR-100 and ImageNet16-120 datasets. As shown in Fig. <ref> and <ref>, the #Params generally works best for these two datasets. Except for the #Params, several gradient-based proxies, such as Grad_norm, SNIP, GraSP, and Fisher, also work well.
As shown in Table <ref>, we compare the neural architectures with the highest test accuracy found via various proxies. The neural architectures obtained via #Params and #FLOPs have the highest test accuracy on NASBench-201, which is a natural and expected result given the correlation results above.
NATS-Bench:
Similar to NASBench-201, we calculate the correlation coefficients between these proxies and the test accuracy on CIFAR-100 and ImageNet16-120 datasets for NATS-Bench. As shown in Fig. <ref> and <ref>, the #Params and Zen-score generally work best for these two datasets.
Overall, it appears that all these proposed accuracy proxies do not have a higher correlation with the test accuracy compared to #Params and #FLOPs for these two NAS benchmarks.
§.§.§ Constrained search space
We note that the architectures with high accuracy are much more important than the networks with low test accuracy. Hence, we calculate the correlation coefficient for the architectures whose test accuracy ranks in the top 5% of the entire search space. Fig. <ref> and <ref> show that, compared to ranking without constraints (i.e., considering all architectures), the correlation score drops significantly, except for the Zen-score on NASBench-201. Similarly, on NATS-Bench, Fig. <ref> and <ref> show that most of the proxies have a significant correlation score drop when constrained to the top 5% networks in the search space, including #Params and #FLOPs.
This correlation score drop on the top 5% networks makes zero-shot NAS likely to miss the optimal or near-optimal networks. Table <ref> shows that there is a large accuracy gap between the ground truth and the networks obtained by each proxy. These results become even worse for a search with more relaxed hardware constraints (see Sec. <ref>).
As shown in the previous literature, #Params and #FLOPs outperform other proxies on multiple benchmarks <cit.>. Hence, we dig deeper into the effectiveness of #Params and #FLOPs by gradually making the search space more constrained. As shown in Fig. <ref> and Fig. <ref>, if we compute the correlation for networks with higher accuracy, both #Params and #FLOPs have a significant drop in correlation score.
Given the above results, we conclude that all of the existing proxies (including #Params and #FLOPs) do not correlate well for the networks with high accuracy. This is a fundamental drawback because what matters most for NAS are precisely these networks with high accuracy. Hence, there is great potential for designing better proxies that could yield high correlation scores for these top networks.
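A hedged sketch of this "gradually constrained" analysis is shown below: the correlation is recomputed on progressively smaller subsets that keep only the most accurate networks. The data here are synthetic placeholders; only the procedure mirrors the evaluation described above.

```python
import numpy as np
from scipy.stats import spearmanr

def top_fraction_correlation(proxy, acc, fraction):
    """Spearman correlation restricted to the top `fraction` of networks ranked by accuracy."""
    k = max(2, int(len(acc) * fraction))
    idx = np.argsort(acc)[-k:]                      # keep only the most accurate networks
    rho, _ = spearmanr(proxy[idx], acc[idx])
    return rho

rng = np.random.default_rng(0)
acc = rng.uniform(0.40, 0.75, size=1000)            # placeholder accuracies
params = acc + rng.normal(0.0, 0.05, size=1000)     # placeholder stand-in for #Params

for frac in (1.0, 0.5, 0.2, 0.05):                  # whole space down to the top 5%
    print(f"top {frac:5.0%}: Spearman rho = {top_fraction_correlation(params, acc, frac):.3f}")
```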
§.§.§ Specific Network Families
We remark that most NAS benchmarks only contain cell-based architectures, where many popular architectures are not included. Hence, in this section, we consider several commonly used networks family as the search space since they are widely used in various applications.
As shown in Fig. <ref>, if we search within networks from the ResNet and Wide-ResNet families, then SNIP, Zen-score, #Params, #FLOPs, and NN-Mass have a significantly high correlation with the test accuracy (i.e., Spearman's ρ>0.9).
As shown in Fig. <ref>, Grad_norm, SNIP, Fisher, Synflow, Zen-score and NN-Mass work best for the MobileNet-v2 network family, which is slightly better than the two naive proxies #Params and #FLOPs. These results show that there is great potential in designing good proxies for a constrained yet widely-used search space.
In practice, the test accuracy is not the only design consideration. Indeed, the models obtained by NAS shall meet some hardware constraints, especially for deployment on edge devices. Hence, we next explore the performance of these proxies for the hardware-aware search scenarios.
§.§ Hardware-aware NAS
In this part, we conduct the hardware-aware NAS using the zero-shot proxies introduced above. Specifically, we use these zero-shot proxies instead of the real test accuracy to search for the Pareto-optimal networks under various constraints. We next introduce the results on NASBench-201 (with HW-NAS-Bench) and NATS-Bench.
§.§.§ NASBench-201 / HW-NAS-Bench
We use EdgeGPU (NVIDIA Jetson TX2) as the target hardware and use the energy consumption data from HW-NAS-Bench; then we set various energy consumption values as the hardware constraints. Next, we use different accuracy proxies to traverse all candidate architectures in the search space and obtain the Pareto-optimal networks under various energy constraints.
To illustrate the quality of these networks, we plot these networks and the ground truth results obtained via actual accuracy in Fig. <ref>. As shown, when the energy constraint is tight (e.g., less than 10mJ), most of the proxies could find networks very close to the real Pareto-optimal except the Jacob_cov. However, when the energy constraint is more relaxed (e.g., more than 20mJ), only #Params, #FLOPs, and Jacob_cov can find several networks close to the ground truth.
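The hardware-aware selection used here can be sketched as follows. The entries and field names (energy_mj, proxy, accuracy) are illustrative assumptions rather than the HW-NAS-Bench API; the point is only that, for each budget, the proxy replaces the real accuracy when ranking the feasible candidates.

```python
def best_under_constraint(archs, energy_budget_mj, score_key):
    """Return the feasible architecture with the highest score (proxy or real accuracy)."""
    feasible = [a for a in archs if a["energy_mj"] <= energy_budget_mj]
    return max(feasible, key=lambda a: a[score_key], default=None)

# Hypothetical candidate networks; 'proxy' could be #Params, SNIP, Zen-score, etc.
archs = [
    {"name": "net-a", "energy_mj": 8.0,  "proxy": 1.2, "accuracy": 0.61},
    {"name": "net-b", "energy_mj": 18.0, "proxy": 2.9, "accuracy": 0.68},
    {"name": "net-c", "energy_mj": 35.0, "proxy": 2.4, "accuracy": 0.73},
]

for budget in (10.0, 20.0, 40.0):
    chosen = best_under_constraint(archs, budget, "proxy")      # what zero-shot NAS selects
    oracle = best_under_constraint(archs, budget, "accuracy")   # ground-truth choice
    print(f"budget {budget:5.1f} mJ: proxy picks {chosen['name']}, oracle picks {oracle['name']}")
```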
§.§.§ NATS-Bench
We measure the latency data on NVIDIA GTX-1080 for NATS-Bench. We then use different accuracy proxies to traverse all candidate architectures to obtain the Pareto-optimal networks under various latency constraints. As shown in Fig. <ref>, we plot these networks and the ground truth results. When we set the latency constraint to around 50ms, only #Params, SNIP, and Zen-score can still find the networks that nearly match the real Pareto-optimal networks.
The results on these two benchmarks further verify that current proxies don't correlate well for networks with high accuracy because the real Pareto-optimal networks have higher accuracy when the hardware constraints are more relaxed. This observation suggests a great potential to design better proxies in this scenario.
§.§ Discussion and future work
§.§.§ NAS Benchmarks
Diversity of search space:
We remark that the search space of most existing NAS benchmarks only contains cell-based neural architectures. To further improve the generality of NAS benchmarks, the community may need to incorporate new architectures from more diverse search spaces. For instance, the NATS-Bench has added architectures with different cells for different stages of the search space.
Moreover, the cells in these existing benchmarks are similar to the DARTS cell structure. However, in practice, the inverted bottleneck blocks from MobileNet-v2 are more widely used for higher hardware efficiency. Therefore, the next direction of NAS benchmarks may need to cover a more practical and widely used search space, such as FBNet-v3.
Awareness of hardware efficiency:
So far, only HW-NAS-Bench provides multiple hardware constraints on several types of hardware platforms, but it does not have the accuracy data for most of the networks in the benchmark. Therefore, we suggest that future NAS benchmarks provide both accuracy and hardware metrics on typical hardware platforms.
§.§.§ Zero-shot proxies
Why #Params works:
As shown in Section <ref>, #Params achieves a higher correlation than other proxies with multiple datasets and multiple benchmarks for unconstrained search space. One may wonder why such a trivial proxy works so well. In general, a good neural architecture should satisfy the following properties: good convergence/trainability and high expressive capacity.
We provide the following observations:
* Trainability On the one hand, given similar depth, wider networks have better trainability, higher convergence rates, and clearly more parameters <cit.>. On the other hand, most of the networks evaluated on popular benchmarks share a similar depth. Hence, within these benchmarks, more parameters also indicate better trainability.
* Expressivity It is well known that models with more parameters are generally able to approximate more complex functions <cit.>. Hence, more parameters indicate a higher expressive capacity of a given network.
Hence, #Params captures both the expressivity and trainability of the networks in these benchmarks.
In contrast, most of the proposed proxies usually emphasize either the expressivity or the trainability of networks (but not both). That may be why #Params outperforms these proposed proxies. Hence, future work should try to design a proxy that could indicate both the convergence/trainability and expressive capacity of a given network. For instance, the recently proposed proxy ZiCo indicates both the trainability and the generalization capacity of neural networks, thus consistently outperforming #Params on multiple NAS benchmarks.
When #Params fails: (i) As shown in this section, when accounting only for the architectures whose test accuracy ranks in the top 5%, several proxies outperform both #Params and #FLOPs on some benchmarks. Furthermore, these top-performing architectures matter most, since NAS focuses on obtaining networks with high accuracy. (ii) Many proxies work well on constrained search spaces, such as the MobileNet and ResNet families. These networks are widely used in many applications (e.g., MobileNet-v2 for EdgeAI). Clearly, the above two failing cases are very important for pushing zero-shot NAS to more practical scenarios. Hence, there is great potential to explore better zero-shot proxies in these cases.
Search method:
Though #Params outperforms most proxies in several scenarios in terms of correlation coefficients, there are alternative search methods to use these zero-shot proxies. For example, as demonstrated in <cit.>, to better leverage these proxies, one potential search method can merge all candidate networks into a supernet and then apply these proxies to prune the network at the initialization stage until hardware constraints are met. This way, the time efficiency of zero-shot NAS approaches can be further improved since the search space is gradually compressed with pruning going on.
Theoretical support:
We remark that most gradient-based proxies are first proposed to estimate the importance of each parameter or neuron/channel of a given network, thus originally applied to the model pruning problem space instead of ranking networks. Hence, the effectiveness of these gradient-based proxies for zero-shot NAS needs a more profound understanding from a theoretical perspective. Moreover, though most gradient-free proxies are usually presented with some theoretical analysis for NAS, as shown in Section <ref> and Section <ref>, they generally have a lower correlation with the gradient-based ones. The theoretical understanding of why these zero-shot proxies can or cannot estimate the test accuracy of different networks is still an open question.
Customized proxy for different types of networks:
As mentioned in Section <ref>, several zero-shot proxies do not work well for a general search space, but do show a great correlation with the test accuracy and beat the #Params on constrained search spaces. In fact, Section <ref> and Section <ref> show that designing a zero-shot proxy that generally works well is extremely difficult. One potential direction for zero-shot proxies design may lie in partitioning the entire search space into several sub-spaces and then proposing customized proxies specifically designed for different sub-spaces.
§ CONCLUSION
In this paper, we have presented a comprehensive review of existing zero-shot NAS approaches. To this end, we have first introduced accuracy proxies for zero-shot NAS by providing the theoretical inspirations behind these proxies and several commonly used NAS benchmarks. We have then introduced several popular approaches for hardware performance prediction. We have also compared the existing proxies against two naive proxies, namely, #Params and #FLOPs. By calculating the correlation between these proxies and the real test accuracy, we have shown that the proxies proposed to date are not necessarily better than #Params and #FLOPs for unconstrained search spaces (i.e., considering all architectures in the benchmarks). However, for constrained search spaces (i.e., when considering only networks with high accuracy), we have revealed that the existing proxies, including #Params and #FLOPs, correlate much worse with the real accuracy. Based on these analyses, we have explained why #Params works and when it fails. Finally, we have pointed out several potential research directions for designing better NAS benchmarks and better zero-shot NAS approaches.
|
http://arxiv.org/abs/2307.03157v1
|
20230706173238
|
Can Domain Adaptation Improve Accuracy and Fairness of Skin Lesion Classification?
|
[
"Janet Wang",
"Yunbei Zhang",
"Zhengming Ding",
"Jihun Hamm"
] |
cs.CV
|
[
"cs.CV",
"cs.CY",
"cs.LG"
] |
Tulane University
{swang47, yzhang111, zding1, jhamm3}@tulane.edu
Can Domain Adaptation Improve Accuracy and Fairness of Skin Lesion Classification?
Janet Wang, Yunbei Zhang, Zhengming Ding, Jihun Hamm
August 1, 2023
==================================================================================
Deep learning-based diagnostic systems have demonstrated potential in classifying skin cancer conditions when labeled training examples are abundant. However, skin lesion analysis often suffers from a scarcity of labeled data, hindering the development of an accurate and reliable diagnostic system. In this work, we leverage multiple skin lesion datasets and investigate the feasibility of various unsupervised domain adaptation (UDA) methods in binary and multi-class skin lesion classification. In particular, we assess three UDA training schemes: single-, combined-, and multi-source. Our experiment results show that UDA is effective in binary classification, with further improvement being observed when imbalance is mitigated. In the multi-class task, its performance is less prominent, and the imbalance problem again needs to be addressed to achieve above-baseline accuracy. Through our quantitative analysis, we find that the test error of multi-class tasks is strongly correlated with label shift, and feature-level UDA methods have limitations when handling imbalanced datasets. Finally, our study reveals that UDA can effectively reduce bias against minority groups and promote fairness, even without the explicit use of fairness-focused techniques.
§ INTRODUCTION
With the development of Convolutional Neural Networks (CNNs), AI-assisted diagnostic systems have demonstrated expert-level capability in classifying skin cancers, a condition often identified through visual diagnoses <cit.><cit.><cit.>. The great potential of these systems can contribute to teledermatology as a diagnostic and decision support tool, thereby enhancing dermatological accessibility in rural areas where medical resources are limited <cit.>. Nevertheless, significant challenges arise due to the scarcity of labeled skin disease images available for developing accurate diagnostic models – an issue commonly encountered in medical imaging analysis. Efforts have been made to address the issue of data scarcity in skin lesion analysis. <cit.>, for instance, leverages Generative Adversarial Networks (GANs) to supplement the training set with synthetic images covering a wide array of skin colors and conditions. Even though these generated images are visually indistinguishable from actual examples, their addition to the training set only leads to marginal improvements over the baseline and primarily benefits rare cases, at the expense of the accuracy of prevalent classes. Additional research also has explored GANs as an augmentation technique to diversify skin lesion training samples <cit.><cit.>. However, using GANs as an augmentation has demonstrated only minimal enhancements while demanding substantial computational resources.
More often, there's no labeled data available to train an accurate classifier in skin lesion analysis and the target domain distribution is unknown. To overcome this challenge, we can utilize external labeled skin lesion datasets to enhance the training set. However, managing the distribution shift between source and target domains is crucial to ensure that the features learned from the source domains can be effectively generalized to the target domains. In this study, we explore multiple skin lesion datasets and conduct an extensive analysis of various UDA methods that align features for skin lesion analysis, within different training and task frameworks. The key contributions of our work are as follows:
* To the best of our knowledge, this is the first comprehensive evaluation of current single- and multi-source UDA methods applied to skin lesion classification across six datasets. This assessment considers single, aggregated, and multiple sources for both binary and multi-class classification tasks;
* We provide a detailed, quantitative analysis of UDA's performance in skin lesion classification, assessing its potential and limitations;
* We explore the feasibility of imbalance techniques in addressing the imbalance issue in skin lesion datasets;
* Fairness has become an increasing concern in machine learning. In this work, we test UDA's ability to ameliorate bias against underrepresented groups.
§ RELATED WORKS
§.§ Unsupervised Domain Adaptation.
UDA methods aim to mitigate the domain shifts between source and related, yet unlabeled target domains (Fig.<ref>). Existing UDA methods can be categorized based on their knowledge transfer strategies into two primary groups: image alignment and feature alignment. Image alignment uses GANs to translate source images into those that are visually similar to the target domain. In contrast, feature alignment approaches distribution shifts at the feature level. Domain Adversarial Neural Network (DANN) and Adversarial Discriminative Domain Adaptation (ADDA) exemplify adversarial training methods that learn invariant features across domains <cit.><cit.>. There's a growing interest in adapting multiple resources to an unlabeled domain. For instance, Multi-source Domain Adversarial Networks (MDAN) enhance domain adaptation by optimizing task-adaptive generalization bounds <cit.>. Moreover, the Moment Matching for Multi-Source Domain Adaptation (M^3SDA) method transfers knowledge learned from multiple labeled sources to an unlabeled target domain by dynamically aligning moments of their feature distributions <cit.>.
§.§ Domain Adaptation in Medical Imaging.
Recent studies have validated the feasibility of DA to enhance performance in skin lesion classification. <cit.>, for example, close the distribution gap between skin lesion datasets, which vary in image types and patient cohorts, by using cycleGAN to synthesize source domain images into the target domain. In <cit.>, source and target domains are aligned to include the same disease classes, under the assumption that target labels are known. In addition, <cit.> show that UDA is effective in mitigating domain shifts and boosting performance on the target task for melanoma and nevus classification. <cit.>, in particular, subgroups dermoscopic images from the ISIC archive based on their metadata to create different domains and evaluates performance on these domains, both with and without a single-source UDA technique. Further research expands on this work, investigating the effectiveness of 8 UDA methods for binary classification <cit.>.
§ METHODS AND MATERIALS
§.§ Dataset
We evaluate domain adaptation methods using 6 public and well-annotated skin lesion datasets. From each dataset, we select data for the diagnostic classes listed in Table <ref>, and subsequently group the data from these datasets into eight classes (see Table <ref>). We specifically separate the dermoscopic and clinical pairs in the Derm7pt dataset into two independent datasets—Derm7pt-derm and Derm7pt-clinic—due to their differing image types. Except for Derm7pt, where we use the official split by the dataset provider, we split each dataset using a 0.2 test ratio with class stratification. For the Fitzpatrick17k dataset, we eliminate duplicate and invalid data (non-lesion images). Besides, we apply ROI preprocessing to Fitzpatrick17k images and use ROI-cropped images in the following experiments. Details can be found in <ref>. Since our objective is to assess the performance of domain adaptation in skin lesion analysis, we resize each image to a 64 × 64 pixel format to facilitate the experiments. Throughout the remainder of the paper, we will refer to these datasets by their abbreviated names in Table <ref>.
§.§ Addressing Data Imbalance Problem
Two popular approaches for handling imbalanced datasets are resampling and reweighting. In this context, we operate under the assumption that the label distribution in both the source and target domains is known. It is crucial to point out that we do not require individual labels for the target data samples, adhering to the principles of UDA.
Denoting the probabilities associated with label i in the source and target domains by p_i and q_i, respectively, the importance weight can be calculated as w_i = q_i/p_i.
In the resampling method, w_i is utilized to determine the weighted sample in each mini-batch. On the other hand, reweighting leverages w_i to compute a weighted cross-entropy loss.
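A minimal sketch of both strategies is given below, under the stated assumption that only the class-level label distributions of the source and target domains are known. The distributions and label arrays are hypothetical placeholders.

```python
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import WeightedRandomSampler

p = np.array([0.5, 0.3, 0.2])                       # hypothetical source label distribution p_i
q = np.array([0.2, 0.3, 0.5])                       # hypothetical target label distribution q_i
w = q / p                                           # importance weight w_i = q_i / p_i

# (i) Resampling: each source sample is drawn with probability proportional to w[label].
source_labels = np.random.choice(3, size=1000, p=p)
sampler = WeightedRandomSampler(torch.as_tensor(w[source_labels]), num_samples=1000)

# (ii) Reweighting: the per-class weights enter a weighted cross-entropy loss.
criterion = nn.CrossEntropyLoss(weight=torch.as_tensor(w, dtype=torch.float32))
```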
§.§ Experiment Design
In this research, we examine UDA methods for binary and multi-class tasks within single-source, combined-source, and multi-source schemes. We first train classifiers on single and combined sources for each target domain, without using UDA, to compare single-source and combined-source pipelines. This comparison is carried out for both tasks. Next, we introduce UDA methods to these schemes. For binary tasks, we use DANN, whose advantage in binary skin lesion classification is validated in previous studies <cit.><cit.>. Applying single-source UDA to multi-class tasks is impractical because some target classes may be entirely absent from the source domain. So we focus on combined-source UDA pipelines, to which ADDA and DANN are applied. Lastly, we employ multi-source UDA methods, namely MDAN and M^3SDA, for both tasks. We evaluate performance using AUROC for binary tasks and accuracy for multi-class tasks. This inconsistency in metrics is because class-wise AUROC can not be computed when classes are missing from either the source or target domain.
§.§ Implementation Details
All of our experiments utilized a pre-trained VGG16-BN<cit.> in PyTorch as a feature extractor. Our methodology for single- and combined-source UDA experiments are based on a well-established repository[https://github.com/thuml/Transfer-Learning-Libraryhttps://github.com/thuml/Transfer-Learning-Library]. The default hyperparameters were employed for both single and combined sources, with the exception of the learning rates for single sources, which were slightly decreased to 1 × 10^-4. As for the multi-source UDA, we utilized the official code with the default hyperparameters[https://github.com/hanzhaoml/MDANhttps://github.com/hanzhaoml/MDAN] [https://github.com/VisionLearningGroup/VisionLearningGroup.github.io/tree/master/M3SDA/code_MSDA_digithttps://github.com/VisionLearningGroup/VisionLearningGroup.github.io/tree/master/M3SDA].
The data loading process was specifically adjusted to ensure the compatibility of these codes with our datasets. We deploy models on RTX A6000 and RTX 3090.
§ RESULTS
§.§ Single Source vs. Combined Sources
Table <ref> presents the results for binary classification without DA. In the single-source pipelines, we find that the classifier trained on the ISIC2018 dataset performs best on average across target domains, whereas the PAD-UFES-20 dataset as source yields the lowest average results. This discrepancy could be attributed to the size difference between these two datasets, with ISIC2018 having the most examples and PAD-UFES-20 the fewest. Furthermore, for each target domain (column), an aggregated source consistently outperforms any single-source pipeline. We observe similar patterns in the more challenging multi-class task (Table <ref>), where a simple combination of sources outperforms a single source domain, as expected. However, it is worth noting that, for each target domain, the average performance of classifiers trained on a single source consistently falls below the chance-level baseline, i.e., worse than always predicting the head class for the target-domain examples.
To better understand these results, we use Wasserstein Distance <cit.> to measure the pixel-level difference and Chi-square divergence to measure label shift, between each source and target pair Fig.<ref>. Calculating each of their Pearson correlations with the test error, we find that label shift is highly correlated with test error, revealing a Pearson correlation coefficient as high as 0.78. The significant role of label shift is intuitive and anticipated. For instance, a classifier trained on ISIC 2020 performs uniformly poorly on other target domains, with the exception of ISIC 2018, which has a small feature and label distance from ISIC 2020. Specifically, there are three classes—NEV, MEL, and BKL—in ISIC 2020, limiting the model trained on ISIC 2020 to predictions within these classes. As a result, the label shift caused by missing classes hinders the classifier trained on ISIC2020 from performing well on PAD-UFES-20 and Fitzpatrick-ROI, whose major classes are BCC, BKL, and SCC—none of which exist in ISIC 2020.
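A minimal sketch of this quantitative analysis is shown below. The label distributions and test errors are random placeholders, and the chi-square divergence is written out explicitly; only the procedure, not the data, is taken from the experiments above.

```python
import numpy as np
from scipy.stats import pearsonr

def chi_square_divergence(p, q, eps=1e-12):
    """Chi-square divergence between two discrete label distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum((p - q) ** 2 / (q + eps)))

rng = np.random.default_rng(0)
pairs = [(rng.dirichlet(np.ones(8)), rng.dirichlet(np.ones(8))) for _ in range(6)]  # placeholder source/target pairs
test_errors = rng.uniform(0.3, 0.8, size=6)                                         # placeholder test errors

label_shift = [chi_square_divergence(p, q) for p, q in pairs]
r, _ = pearsonr(label_shift, test_errors)            # the analysis above reports r as high as 0.78
print(f"Pearson correlation between label shift and test error: {r:.2f}")
```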
The combination of datasets can effectively mitigate issues arising from missing classes in source domains compared to the target domain, especially when the distribution of the target domain is unknown. Therefore, these experiments validate that without target labels, combining various datasets to enrich and diversify a source domain can significantly enhance the classification performance on the target skin lesion classification task. The experiments also demonstrate the impact of distribution shifts, characterized by feature and label distance, on knowledge transfer from the source domain to the target domain.
§.§ Domain Adaptation vs. Baseline
In binary classification, the single source+DANN method uniformly surpasses its non-DA counterpart across target domains, except when ISIC2018 is the target domain. In this case, the drop in average performance could be attributed to the fact that all other single-domain sources are considerably smaller than ISIC2018. The performance improves further when DANN and ADDA are applied to the combined sources. However, these results are comparable to those of the combined source without DA. When resampling is used in the combined source+DANN scheme, greater improvements are observed. Among multi-source DA schemes, M^3SDA with resampling yields the best result, which is also the best among all the pipelines for binary classification. This outcome is significantly closer to the average oracle result, thereby demonstrating the effectiveness of DA for melanoma and nevus classification. Fig. <ref> shows that after DANN, domain representations are intertwined, while classes remain separable.
In the multi-class task, for each target domain, the combined source provides results comparable to the baseline. However, applying DANN and ADDA to the combined sources rarely improves, and sometimes even undermines, the performance of the classifiers, causing accuracy to fall below the baseline level. Similar observations apply to the multi-source training scheme. We speculate that this performance decrease is attributable to an imbalanced distribution, and so we apply imbalance techniques to these three training schemes respectively. After applying the imbalance technique, we consistently observe improvements in average performance in each case, as well as improvement over baselines for each target domain. While the effectiveness of UDA is not as remarkable as in binary classification, given the challenge of differentiating 8 types of skin lesions without target domain labels, we interpret this as an encouraging indication that UDA is effective for classifying multiple skin lesion classes when imbalance regularization is enforced. Currently, most UDA methods aim to align at the feature level, instead of seeking to address label shift, which could explain their limitations in multi-class skin lesion classification when the dataset is skewed.
§.§ Impact on Fairness
Fairness is tested on three datasets - Fitzpatrick17k, ISIC 2020, and PAD-UFES-20 - for the binary classification task. The training and test sets from these datasets are combined into a single dataset for evaluation. In line with <cit.>, we consider skin color in Fitzpatrick17k as a sensitive attribute, grouping data into three categories according to their skin color scale (1&2, 3&4, and 5&6). For the other two datasets, age (<=30 and >30) and gender are regarded as sensitive attributes. Adapting the methods from <cit.><cit.>, fairness is quantified using three metrics: PQD, DPM, and EOM. A detailed description of these metrics can be found in Appendix <ref>. In Table <ref>, none of the best classification or fairness values are provided by the non-DA single-source pipeline. On the contrary, the optimal fairness results tend to align with the right-hand three UDA methods, which also offer superior overall classification results. Although fairness isn't intentionally sought in our experiments, the outcomes suggest that UDA, while enhancing AUROC, also steers the classifiers toward better fairness. In other words, UDA ensures equal opportunities for accurate diagnoses across various groups, an important achievement in medical imaging analysis.
§ CONCLUSION
In this study, we validate the effectiveness of aggregating skin lesion datasets to enhance performance on unlabeled target domains for binary and multi-class tasks. We explored three UDA training pipelines: single-source, combined source, and multi-source. The results demonstrate these methods' superiority over non-DA single-source pipelines in binary classification, particularly when imbalance is addressed. However, UDA proves less effective for multi-class skin lesion classification, owing to the strong tie between performance and label distribution. This is largely because our selection of UDA methods mainly mitigate feature-level distribution shifts, neglecting label shift. Notably, the application of imbalance techniques brings more evident improvement in both tasks. We see this as a promising step towards the challenging 8-class skin lesion classification task and anticipate that UDA methods designed to handle imbalance will significantly advance skin lesion analysis—a future direction for our research. Finally, our results reveal UDA's ability to reduce bias and improve fairness, an encouraging side-effect of UDA in skin lesion analysis.
Appendix
§ ROI PREPROCESSING
In <cit.>, region of interest (ROI) detection is utilized to separate skin lesions from clinical photos, effectively reducing noise and enhancing the lesion information ratio. In the context of our problem setting, dermoscopic images of skin lesions are mostly close-up shots centered on the lesions, whereas clinical photos are taken at varying distances from the lesions or from different angles. To minimize background noise, as well as the discrepancy between dermoscopic and clinical images, we fine-tune a YOLO-8 model, one of the SOTA ROI detection algorithms <cit.>. This automated approach aids in the detection and cropping of skin lesions from clinical photos. Moreover, this preprocessing ensures that lesions are preserved and not inadvertently removed by standard cropping augmentations before images being fed into convolutional neural networks. After cropping, each image is resized to a resolution of 64× 64 pixels, which is the designated image size for subsequent experiments.
§ FAIRNESS METRICS
Let S be the set of sensitive groups, for example, S = {male, female}. Denote the label set by {1, ..., M}. To quantify fairness, we adapt the three metrics from <cit.>: (i) Predictive Quality Disparity (PQD) measures the difference in prediction quality between sensitive groups:
PQD = min(acc_j, j∈ S)/max(acc_j, j ∈ S)
where acc_j is the accuracy of the data in the j-th sensitive group.
(ii) Demographic Disparity Metric (DPM) computes the disparity in the percentages of predicted outcomes across sensitive groups.
DPM = 1/M ∑_i=1^M min[ p(ŷ=i | s=j), j∈ S ] / max[ p(ŷ=i | s=j), j∈ S ]
where ŷ is the prediction of the model.
(iii) Equality of Opportunity Metric (EOM) requires that different sensitive groups have similar true positive rates.
EOM = 1/M ∑_i=1^M min[ p(ŷ=i | y=i, s=j), j∈ S ] / max[ p(ŷ=i | y=i, s=j), j∈ S ]
where y is the true label.
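A minimal sketch of these three metrics is given below. It is an implementation assumption rather than the authors' code, and it presumes that every (class, group) pair occurs at least once in the evaluation data.

```python
import numpy as np

def fairness_metrics(y_true, y_pred, s, num_classes, eps=1e-12):
    """Compute PQD, DPM, and EOM for predictions y_pred, labels y_true, and group ids s."""
    groups = np.unique(s)
    acc = [np.mean(y_pred[s == g] == y_true[s == g]) for g in groups]
    pqd = min(acc) / (max(acc) + eps)

    dpm_terms, eom_terms = [], []
    for i in range(num_classes):
        dp = [np.mean(y_pred[s == g] == i) for g in groups]                     # p(y_hat = i | s = g)
        tp = [np.mean(y_pred[(s == g) & (y_true == i)] == i) for g in groups]   # p(y_hat = i | y = i, s = g)
        dpm_terms.append(min(dp) / (max(dp) + eps))
        eom_terms.append(min(tp) / (max(tp) + eps))
    return pqd, float(np.mean(dpm_terms)), float(np.mean(eom_terms))

# Toy usage with two groups and binary labels.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_pred = rng.integers(0, 2, size=200)
s = rng.integers(0, 2, size=200)
print(fairness_metrics(y_true, y_pred, s, num_classes=2))
```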
|
http://arxiv.org/abs/2307.00449v1
|
20230702012547
|
A Dual-Stream Recurrence-Attention Network with Global-Local Awareness for Emotion Recognition in Textual Dialogue
|
[
"Jiang Li",
"Xiaoping Wang",
"Zhigang Zeng"
] |
cs.CL
|
[
"cs.CL"
] |
A Dual-Stream Recurrence-Attention Network with Global-Local Awareness for Emotion Recognition in Textual Dialogue
Jiang Li^1,2,3, Xiaoping Wang^1,2,3, Zhigang Zeng^1,2,3
^1School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
^2Key Laboratory of Image Processing and Intelligent Control of Education Ministry of China, Wuhan 430074, China
^3Hubei Key Laboratory of Brain-inspired Intelligent Systems, Wuhan 430074, China
Corresponding authors: Jiang Li and Xiaoping Wang. E-mail addresses: [email protected] (J. Li), [email protected] (X. Wang), [email protected] (Z. Zeng).
In real-world dialogue systems, the ability to understand the user's emotions and interact anthropomorphically is of great significance. Emotion Recognition in Conversation (ERC) is one of the key ways to accomplish this goal and has attracted growing attention. How to model the context in a conversation is a central aspect and a major challenge of ERC tasks. Most existing approaches are generally unable to capture both global and local contextual information efficiently, and their network structures are often overly complex. For this reason, in this work, we propose a straightforward Dual-stream Recurrence-Attention Network (DualRAN) based on Recurrent Neural Network (RNN) and Multi-head ATtention network (MAT). The proposed model eschews the complex network structure of current methods and focuses on combining recurrence-based methods with attention-based methods. DualRAN is a dual-stream structure mainly consisting of local- and global-aware modules, modeling a conversation from distinct perspectives. To implement the local-aware module, we extend the structure of RNN, thus enhancing the expressive capability of the network. In addition, we develop two single-stream network variants for DualRAN, i.e., SingleRANv1 and SingleRANv2. We conduct extensive experiments on four widely used benchmark datasets, and the results reveal that the proposed model outshines all baselines. Ablation studies further demonstrate the effectiveness of each component.
Keywords: Dialogue Emotion Recognition; Recurrent Neural Network; Multi-head Attention Network; Dialogue System; Dual-stream Network
August 1, 2023
==================
§ INTRODUCTION
Emotion recognition is a promising application and has received a great deal of attention from academics in recent years. Emotion Recognition in Conversation (ERC) is a subfield of emotion recognition with special scenarios. Distinct from general emotion recognition, ERC not only focuses on the utterance itself but also demands that the context of the utterance is sufficiently understood <cit.>. Figure <ref> illustrates the general flow of the ERC task. With the rapid deployment and development of human-computer interaction, there is an urgent need for machines that can interact more naturally and humanely with humans. As a result, the importance of building conversational systems that can understand human emotion and intention has grown significantly <cit.>. The development of ERC, which fits the above-mentioned usage scenarios for dialogue systems, is therefore pressing and has attracted increasing research attention in the natural language understanding community.
Plenty of efforts have been made in context-based modeling, and these ERC models fall into three main categories: recurrence-based approaches, Transformer-based approaches, and graph-based approaches. Recurrence-based methods treat the utterance in a conversation as temporal series data. COSMIC <cit.> is a conversational emotion recognition framework based on commonsense knowledge guidance, claiming to alleviate the problems of emotion shift and similar emotion. AGHMN <cit.> is a conversational emotion recognition model based on Gated Recurrent Units (GRUs) <cit.> for building a memory bank to capture historical contexts and summarize memories to extract critical information. DialogueCRN <cit.> enhances the extraction and integration of emotional cues and is a contextual reasoning network based on cognitive theory. CauAIN <cit.> introduces commonsense knowledge as a cue for emotion cause detection in conversation, explicitly modeling intra-speaker and inter-speaker dependencies. Transformer-based methods allow for the consideration of long-range contextual information. HiTrans <cit.> is a context- and speaker-aware model based on the hierarchical Transformer <cit.>. DialogXL <cit.> is a pioneering work based on the pre-trained language network XLNet <cit.>, which modifies the network structure of XLNet to better model conversational emotion data. CoG-BART <cit.> is a conversational emotion recognition model that applies the encoder-decoder model BART <cit.> as a backbone network. Graph-based methods are similar to Transformer-based methods in that the context can be modeled with a global perspective. SKAIG-ERC <cit.> utilizes a psychological-knowledge-aware interaction graph to model the historical context and commonsense knowledge of utterance. I-GCN <cit.> first represents conversations at different times using a graph structure and then simulates dynamic conversational processes using an incremental graph structure to capture both semantic correlation information of utterances and time-varying information of conversations.
However, these methods either focus on the local sequence information of utterance or the global association information of utterance, ignoring the combination of local and global information. Although recurrence-based ERC methods can extract the temporal sequence information of dialogue sequences, they tend to capture the nearest contextual information (i.e., focusing on the extraction of local information) and have difficulty in capturing long-range contextual information. Transformer-based and graph-based ERC methods can alleviate these problems, but they do not take into account the temporal information of utterance and have difficulty in adequately capturing the local information of utterance. In addition, some ERC models have an overly complex network structure, such as incorporating commonsense knowledge <cit.>, including multiple complex modules <cit.>, adopting an encoder-decoder structure <cit.>, etc., consuming numerous computational resources in return for a weak performance gain.
Therefore, in this paper, we provide a simple and effective dual-stream network structure that explores combining recurrence-based ERC and attention-based ERC so that they complement each other. Based on Recurrent Neural Network (RNN) and Multi-head ATtention network (MAT), we construct a local-aware module and a global-aware module, respectively, and propose a Dual-stream Recurrence-Attention Network (DualRAN) for the ERC task to capture both local and global information about the context. Furthermore, relying on the local and global aware modules of DualRAN, we devise two Single-stream Recurrence-Attention Networks (SingleRAN), which can be regarded as two variants of DualRAN.
Our contribution is as follows:
* A simple Dual-stream Recurrence-Attention Network (DualRAN) with global-local-aware capacity is proposed to sufficiently model the contextual dependencies of utterance from both local and global perspectives. DualRAN adopts a dual-stream network structure, consisting mainly of an RNN-based local-aware module and a MAT-based global-aware module.
* To enhance the expressive capacity of RNN, we add two skip connections and a feed-forward network layer to the local-aware module inspired by Transformer architecture. In addition, we encode speaker identities to model speaker dependencies as well as to explore the influence of different speakers on utterance emotion.
* We only change the dual-stream structure in DualRAN to a single-stream structure and maintain other components unchanged, providing two single-stream recurrence-attention networks, i.e., SingleRANv1 and SingleRANv2.
* We conduct extensive experiments on four public emotion datasets, including comparisons with baselines, comparisons with two SingleRANs, ablation studies with different components, and sentiment classification. The empirical results demonstrate that the proposed DualRAN can effectively model the ERC dataset and still surpass other models without using external commonsense knowledge.
The remaining sections primarily cover related works, methodology, experimental settings, experimental results and analysis, and conclusion and prospect. In Section <ref>, we introduce the existing related works. Section <ref> corresponds to the methodology of this paper, i.e., we present in detail the DualRAN and its variants proposed in this paper. In Sections <ref> and <ref>, we first describe the experimental setup of this work, and then report, discuss, and analyze the experimental results. The last section (i.e., Section <ref>) contains our conclusion and prospect of this work.
§ RELATED WORKS
Emotion Recognition in Conversation (ERC) is a burning and promising task in recent years. Unlike general emotion recognition, ERC involves conversational context.
§.§ Emotion Recognition
Emotion Recognition (ER) has received increasing attention from natural language processing and social robotics communities. For many years, emotion recognition has been an active research field and explored in interdisciplinary domains such as computer vision <cit.>, natural language understanding <cit.>, automatic speech recognition <cit.>, machine learning, signal processing, and cognitive science. General works typically treat ER tasks as context-independent based classification tasks. These efforts can be divided into two main categories, namely feature engineering-based approaches <cit.> and deep learning-based approaches <cit.>.
§.§ Dialogue Emotion Recognition
Emotion Recognition in Conversation (ERC) has attracted extensive research attention owing to its wide range of applications. Depending on the structure of the network, there are mainly recurrence-based methods, Transformer-based methods, and graph-based methods.
Recurrence-based methods: DialogueRNN <cit.> is an ERC method based on multiple GRUs that incorporates speaker information for each utterance to provide more reliable contextual information. COSMIC <cit.> models different aspects of commonsense knowledge by considering mental states, events, actions, and cause-effect relations, and thus extracts complex interactions between personality, events, mental states, intents, and emotions. AGHMN <cit.> mainly consists of Hierarchical Memory Network (HMN) and Bi-directional GRU (BiGRU), where HMN is used to extract interaction information between historical utterances and BiGRU is used to summarize recent memory and long-term memory with the help of attention weights. DialogueCRN <cit.> constructs a multi-turn reasoning module to perform the intuitive retrieving process and conscious reasoning process, thus simulating the cognitive thinking of humans. BiERU <cit.> designs a generalized neural tensor block and a two-channel classifier namely bidirectional emotional recurrent unit to perform contextual feature extraction and sentiment classification. CauAIN <cit.> models the context of utterance through the perspective of emotion cause detection and is known as a causal aware interaction network. CauAIN consists of two main cause-aware interactions, i.e., causal cue retrieval and causal utterance retrieval, which are used to find the causal utterance of the emotion expressed by the target utterance.
Transformer-based methods: HiTrans <cit.> extracts the contextual information of the utterance with the help of low-level and high-level Transformers, and it extracts the speaker information with the aid of an auxiliary task called pairwise utterance speaker verification. TODKAT <cit.> is a transformer encoder-decoder structure that combines topic representation and commonsense knowledge for conversational emotion recognition. DialogXL <cit.> is improved in two main ways, the first one is to improve the recurrence mechanism of XLNet from segment-level to utterance-level, and the second one is to replace the original vanilla attention by utilizing dialogue-aware self-attention. EmotionFlow <cit.> encodes the utterances of speakers by connecting contexts and auxiliary tasks, and it applies conditional random fields to capture sequential features at the emotion level. CoG-BART <cit.> first adopts the utterance-level Transformer to model the long-range contextual dependencies between utterances, then utilizes supervised contrast learning to solve the similar emotion problem, and finally introduces auxiliary response generation task to enhance the capability of the model to capture contextual information.
Graph-based methods: KI-Net <cit.> consists of two main components to enhance the semantic information of utterances, namely a self-matching module for internal utterance-knowledge interaction and a phrase-level sentiment polarity intensity prediction task. SKAIG-ERC <cit.> captures contextually inferred behavioral action information and future contextually implied intention information leveraging the structure of graph, while knowledge representation of edges is performed with the help of commonsense knowledge to enhance the emotional expression of utterance. The approach claims to model past human actions and future intentions while modeling the mental state of the speaker. S+PAGE <cit.> is a graph neural network-based emotion recognition model. The method models a conversation as a graph, adding relative location encoding and speaker encoding to the representation of edge weight and edge type, respectively, to better capture speaker- and location-aware conversational structure information. I-GCN <cit.> first extracts latent correlation information between utterances with an improved multi-head attention module, then focuses on mining the correlation between speakers and utterances to provide guidance for utterance feature learning from another perspective. LR-GCN <cit.> first integrates contextual information and speaker dependencies by utilizing the potential relationship graph network, and then it extracts potential associations between utterances with the multi-head attention mechanism to fully explore the potential relationships between utterances.
Although the above approaches model the context to varying extents, the perspective considered is not comprehensive enough. Recurrence-based ERC focuses on local modeling, making it extremely difficult to consider the context from a global perspective, while Transformer- and graph-based approaches share the problems of often neglecting local and temporal modeling. Additionally, most of the models are overly complex in structure, but the performance gains are not significant enough.
§.§ Recurrent Neural Network
Recurrent Neural Networks (RNNs) are a class of neural network architectures for processing sequential data. The gating mechanism is introduced to solve the problem of gradient explosion or gradient disappearance in the traditional RNN <cit.>. Hochreiter et al. <cit.> proposed Long and Short Term Memory (LSTM) network to correctly deal with the problem of vanishing gradient. Gated Recurrent Unit (GRU) was proposed by Chung et al <cit.> in 2014 and is another classical RNN architecture. RNNs have been widely applied in the field of natural language processing due to their ability to process temporal data. Bahdanau et al. <cit.> introduce an extension of encoder-decoder architecture to learn alignment and translation. Johnson et al. <cit.> proposed an LSTM-based neural machine translation model to achieve translation between multiple languages in a simple solution. Recurrent neural networks such as LSTM and GRU can theoretically propagate both contextual and sequential information. There are currently some ERC works modeling the context of discourse based on RNNs. DialogueRNN <cit.> updates the status of the speaker and the global information of the conversation by employing multiple GRUs. DialogueCRN <cit.> is a cognitive theory-inspired approach that designs a cognitive inference module by exploiting LSTM to capture emotional cues contained in the context.
§.§ Multi-Head Attention Network
Multi-head ATtention network (MAT) is first proposed by Vaswani et al. <cit.>. It is powerful in feature dependency extraction, leading to remarkable achievements in many tasks. Contrary to RNNs which focus on local information, MAT can extract long-distance elemental dependencies. In recent years, MAT has been widely used in many research areas, such as automatic speech recognition <cit.>, natural language processing <cit.>, and computer vision <cit.>. In addition, there exist some pre-trained models constructed with the help of MAT, such as BERT <cit.>, RoBERTa <cit.>, and BART <cit.>. Assuming that the context of utterance and speaker information is not considered, ERC can be regarded as a text classification task. In this case, each utterance can be fine-tuned with a pre-trained model to extract utterance-level feature. HiTrans <cit.> adopts BERT to extract utterance-level features, while COSMIC <cit.> leverages RoBERTa as a feature extractor for each utterance. In this paper, we follow COSMIC's manner and extract utterance-level features by utilizing RoBERTa.
§ METHODOLOGY
We elaborate the proposed dual-stream network structure and its single-stream variants in this section. Our DualRAN is designed with the original intention of combining recurrence-based and attention-based methods to extract both local contextual information and global contextual information. As shown in Figure <ref>, our DualRAN mainly consists of speaker-aware module, global-local-aware modeling, and emotion prediction. Among them, global-local-aware modeling includes RNN-based local-aware module and MAT-based global-aware module.
§.§ Task Definition
A conversation contains |S| speakers {s_1,s_2,...,s_|S|} and |U| utterances {u_1,u_2,...,u_|U|}. Each utterance u_i corresponds to a speaker s_i. The utterance u_i and utterance u_j may be spoken by the same speaker, i.e., s_i=s_j, or by different speakers, i.e., s_i≠ s_j. The task of conversational emotion recognition is to infer the corresponding emotion state e_i based on the utterance u_i spoken by the speaker s_i. There may be differences in the categories and number of emotions in distinct datasets. For instance, in the IEMOCAP dataset, the emotion categories include happy, sad, neutral, angry, excited, and frustrated; while in the MELD dataset, the emotion categories include joy, anger, fear, disgust, sadness, surprise, and neutral.
§.§ Speaker-Aware Module
Differences in the identity of speakers may have different effects on the semantics of utterances. To put it another way, the current emotional state of a speaker is influenced not only by his or her own historical utterances but also by the historical utterances of other speakers. That is, there is emotional inertia and emotional contagion within and between speakers. In order to distinguish the influence of different speakers, we add the corresponding identity of the speaker to each utterance, thus implementing speaker-aware encoding. Specifically, we first encode word embedding for each speaker, then add the encoded speaker embedding to the utterance feature, and finally take the obtained new utterance feature as the input to the global-local-aware network. The above process can be formulated as follows:
SPK=𝙴𝙼𝙱(S),
X = C + SPK,
where S denotes the set of speakers corresponding to the utterance set U, while 𝙴𝙼𝙱 denotes the word embedding network; C denotes the utterance-level feature matrix of U, which is extracted by the method of COSMIC <cit.>.
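A minimal PyTorch sketch of this speaker-aware encoding is shown below; the feature dimension and the number of speakers are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

num_speakers, feat_dim = 9, 1024                            # assumed values for illustration
speaker_embedding = nn.Embedding(num_speakers, feat_dim)    # EMB(.)

C = torch.randn(32, feat_dim)                               # utterance-level features of one dialogue
S = torch.randint(0, num_speakers, (32,))                   # speaker id of each utterance
X = C + speaker_embedding(S)                                # speaker-aware utterance features
```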
§.§ Global-Local-Aware Modeling
The network structure of global-local-aware modeling is simple and effective, as the name suggests, it mainly consists of the local-aware module and global-aware module, which extract local contextual information and global contextual information, respectively. When performing backpropagation, the designed local-aware module and global-aware module are trained simultaneously to update the network parameters. In the following two parts, we describe their network structures respectively.
§.§.§ Local-Aware Module
Numerous previous works have demonstrated that modeling the context of utterance is crucial for ERC. Therefore, we construct a local-aware module with a modified RNN. The designed local-aware module is shown in Figure <ref>. First, in order to extract temporal information of the utterance, we input the utterance feature to the vanilla RNN; then, inspired by the Transformer architecture, we adopt skip connection, i.e., the input and output of RNN are summed; finally, to enhance the expressiveness and stability of the network, we add a feedforward network layer consisting of two fully connected layers. The network structure of the local-aware module can be described by the following equation:
X_rnn^l = 𝙽𝙾𝚁𝙼(X^l + 𝚁𝙽𝙽^'(X^l)),
X^l+1 = 𝙽𝙾𝚁𝙼(X_rnn^l + 𝙵𝙴𝙴𝙳(X_rnn^l)),
where X^l indicates the l-th layer feature matrix composed of all utterances, 𝙽𝙾𝚁𝙼(·) denotes the normalization function; the layer normalization operation is used in our experiments. 𝚁𝙽𝙽^'(·) stands for the RNN layer with the addition of a fully connected layer, which can be formulated as,
𝚁𝙽𝙽^'(X^l) = 𝙳𝙿(𝙵𝙲(𝚁𝙽𝙽(X^l))),
𝚁𝙽𝙽(·) denotes the bidirectional vanilla RNN such as LSTM and GRU; 𝙵𝙲(·) means the fully connected layer, converting the feature dimension of the output to half of the input; 𝙳𝙿(·) indicates the dropout operation. 𝙵𝙴𝙴𝙳(·) is the feedforward network layer, which can be expressed as,
𝙵𝙴𝙴𝙳(X_rnn^l) = 𝙳𝙿(𝙵𝙲(𝙳𝙿(α(𝙵𝙲(X_rnn^l))))),
α(·) denotes the activation function, e.g., ReLU. In our experiments, we place 𝙽𝙾𝚁𝙼(·) in front of 𝚁𝙽𝙽^'(·), i.e.,
X_rnn^l = X^l + 𝚁𝙽𝙽^'(𝙽𝙾𝚁𝙼(X^l)),
X^l+1 = X_rnn^l + 𝙵𝙴𝙴𝙳(𝙽𝙾𝚁𝙼(X_rnn^l)).
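A hedged PyTorch sketch of one local-aware layer is given below. The pre-norm placement, the residual connections, and the projection that maps the BiGRU output back to the model dimension follow the equations above, while the concrete hyperparameters (hidden sizes, dropout) are assumptions.

```python
import torch
import torch.nn as nn

class LocalAwareBlock(nn.Module):
    def __init__(self, d_model, d_ff=2048, dropout=0.1):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.rnn = nn.GRU(d_model, d_model, bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * d_model, d_model)          # FC(.) halves the BiGRU output dimension
        self.drop = nn.Dropout(dropout)
        self.norm2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Dropout(dropout),
                                nn.Linear(d_ff, d_model), nn.Dropout(dropout))

    def forward(self, x):                                    # x: (batch, seq_len, d_model)
        h, _ = self.rnn(self.norm1(x))
        x = x + self.drop(self.proj(h))                      # X_rnn = X + RNN'(NORM(X))
        return x + self.ff(self.norm2(x))                    # X^{l+1} = X_rnn + FEED(NORM(X_rnn))
```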
§.§.§ Global-Aware Module
The local-aware module possesses powerful temporal extraction capability, but it tends to capture local contextual information, while it is quite difficult to aggregate long-distance information. Therefore, we build a global-aware module with the help of Multi-head ATtention network (MAT) to capture global contextual information. As shown in Figure <ref>, our global-aware module borrows the encoder structure of Transformer, and note that we do not incorporate position encoding because the local-aware module can capture temporal information in the conversation. The network structure of the global-aware module can be expressed as:
X_att^l = 𝙽𝙾𝚁𝙼(X^l + 𝙰𝚃𝚃(X^l)),
(X^')^l+1 = 𝙽𝙾𝚁𝙼(X_att^l + 𝙵𝙴𝙴𝙳(X_att^l)),
where X^l is the same as the input of the local-aware module and denotes the l-th layer feature matrix composed of all utterances; 𝙰𝚃𝚃(·) denotes the attention network with multi-head setting,
𝙰𝚃𝚃(X^l) = W_cat𝙲𝙰𝚃(head_1^l,head_2^l,⋯,head_h^l),
𝚜.𝚝. head_i = 𝚂𝙼𝙰𝚇(W_qX^l · (W_kX^l)^⊤/√(d_k))· W_vX^l,
𝙲𝙰𝚃(·) indicates the concatenation operation; W_cat, W_q, W_k, and W_v denote the learnable parameters; d_k denotes the dimensions of W_kX^l or W_vX^l, and 𝚂𝙼𝙰𝚇(·) represents the softmax function. 𝙵𝙴𝙴𝙳(·) denotes the feedforward network layer, which can be expressed as,
𝙵𝙴𝙴𝙳(X_att^l) = 𝙳𝙿(𝙵𝙲(𝙳𝙿(α(𝙵𝙲(X_att^l))))).
As with the local-aware module, 𝙽𝙾𝚁𝙼(·) is placed ahead of 𝙰𝚃𝚃(·), i.e,
X_att^l = X^l + 𝙰𝚃𝚃(𝙽𝙾𝚁𝙼(X^l)),
(X^')^l+1 = X_att^l + 𝙵𝙴𝙴𝙳(𝙽𝙾𝚁𝙼(X_att^l)).
After both local-aware modeling and global-aware modeling, we obtain the feature matrix X with local information and the feature matrix X^' with global information, respectively. Finally, to obtain the global-local-aware feature matrix, we concatenate X and X^',
X_gl = W_gl𝙲𝙰𝚃(X,X^'),
where W_gl is the trainable parameter, and X_gl denotes the feature matrix with global-local awareness.
§.§ Emotion Prediction
We make the obtained feature matrix X_gl as the input of the emotion prediction module. Specifically, the feature dimension of X_gl is converted to |E| (number of emotions) through a fully connected layer, and thus the predicted emotion e_i^' (e_i^'∈ E) is obtained. The process can be formulated as follows:
y_i^' = 𝚂𝙼𝙰𝚇(W_smax x_gl,i),
e_i^' = 𝙰𝚁𝙶𝙼𝙰𝚇(y_i^'[k]),
where x_gl,i∈X_gl, W_smax is the learnable parameter, and 𝙰𝚁𝙶𝙼𝙰𝚇(·) denotes the argmax function. To learn the network parameters of DualRAN, we define the loss function as follows:
ℒ = - 1/∑_t=1^O o(t)∑_i=1^O∑_j=1^o(i) y_ij log y^'_ij + η‖ W_all‖,
where o(i) is the number of utterances of the i-th dialogue, and O is the number of all dialogues in training set; y^'_ij denotes the probability distribution of predicted emotion label of the j-th utterance in the i-th dialogue, and y_ij denotes the ground truth label; η is the L2-regularizer weight, and W_all is the set of all learnable parameters.
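A brief sketch of the prediction head and loss is given below. Averaging the cross-entropy over all utterances in a batch and delegating the L2 term to the optimizer's weight decay are implementation assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

num_emotions, d_model = 6, 512                       # six emotions as in IEMOCAP; d_model is assumed
classifier = nn.Linear(d_model, num_emotions)        # W_smax

x_gl = torch.randn(128, d_model)                     # global-local-aware features of 128 utterances
labels = torch.randint(0, num_emotions, (128,))

logits = classifier(x_gl)
loss = nn.functional.cross_entropy(logits, labels)   # -(1/N) sum_ij y_ij log y'_ij (softmax applied internally)
predictions = logits.argmax(dim=-1)                  # e' = ARGMAX_k y'[k]

optimizer = torch.optim.Adam(classifier.parameters(), weight_decay=1e-4)  # eta * ||W_all|| via weight decay
```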
§.§ Single-Stream Recurrence-Attention Network
We only change the network structure of global-local-aware modeling in DualRAN to construct the Single-stream Recurrence-Attention Networks (SingleRANs). Like DualRAN, the global-local-aware modeling of SingleRAN contains two modules: local-aware module and global-aware module. The structure of the local-aware module and global-aware module itself remains unchanged, but they are combined in a single-stream and sequential manner, as shown in Figure <ref>. According to the order of combining the local-aware module and global-aware module, we divide SingleRAN into two categories, i.e., SingleRANv1 and SingleRANv2.
In SingleRANv1 (see Figure <ref>), the local-aware module is in the front and the global-aware module is in the back, i.e.:
X_rnn^l_1 = X^l_1 + 𝚁𝙽𝙽^'(𝙽𝙾𝚁𝙼(X^l_1)),
X^l_1+1 = X_rnn^l+𝙵𝙴𝙴𝙳(𝙽𝙾𝚁𝙼(X_rnn^l_1)).
After the L_1 layers local-aware network, we obtain the feature matrix X. Here, X is the output of the L_1-th layer local-aware network and is also treated as the input to the global-aware module. The feature matrix of the global-aware module is calculated as follows,
X_att^l_2 = X^l_2 + 𝙰𝚃𝚃(𝙽𝙾𝚁𝙼(X^l_2)),
X^l_2+1 = X_att^l_2 + 𝙵𝙴𝙴𝙳(𝙽𝙾𝚁𝙼(X_att^l_2)).
We obtain the feature matrix X_gl after L_2 layers of the global-aware network, and it is treated as the input to the emotion prediction module.
In SingleRANv2 (see Figure <ref>), the global-aware module is in front and the local-aware module is in the back, i.e.:
X_att^l_1 = X^l_1 + 𝙰𝚃𝚃(𝙽𝙾𝚁𝙼(X^l_1)),
X^l_1+1 = X_att^l_1+𝙵𝙴𝙴𝙳(𝙽𝙾𝚁𝙼(X_att^l_1)).
We obtain the feature matrix X after L_1 layers of the global-aware network, and it is used as the input to the local-aware module. The feature matrix of the local-aware module is computed as follows,
X_rnn^l_2 = X^l_2 + 𝚁𝙽𝙽^'(𝙽𝙾𝚁𝙼(X^l_2)),
X^l_2+1 = X_rnn^l_2 + 𝙵𝙴𝙴𝙳(𝙽𝙾𝚁𝙼(X_rnn^l_2)).
Like SingleRANv1, after L_2 layers of the local-aware network, we obtain the feature matrix X_gl, which is treated as the input to the emotion prediction module.
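A compact sketch of the single-stream variants is given below; it reuses the GlobalAwareLayer sketched earlier and adds a minimal pre-norm recurrent block. The bidirectional LSTM with halved hidden size and the layer counts are assumptions made only to keep the example self-contained.

import torch.nn as nn

class LocalAwareLayer(nn.Module):
    # Pre-norm recurrent block: X_rnn = X + RNN'(NORM(X)); output = X_rnn + FEED(NORM(X_rnn)).
    def __init__(self, dim=300, dropout=0.1):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.rnn = nn.LSTM(dim, dim // 2, bidirectional=True, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.feed = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Dropout(dropout),
                                  nn.Linear(dim, dim), nn.Dropout(dropout))

    def forward(self, x):                          # x: (batch, num_utterances, dim)
        x = x + self.rnn(self.norm1(x))[0]
        return x + self.feed(self.norm2(x))

class SingleRANv1(nn.Module):
    # Local-aware layers first, then global-aware layers (GlobalAwareLayer from the sketch above).
    def __init__(self, dim=300, num_local=2, num_global=2):
        super().__init__()
        self.local_layers = nn.ModuleList([LocalAwareLayer(dim) for _ in range(num_local)])
        self.global_layers = nn.ModuleList([GlobalAwareLayer(dim) for _ in range(num_global)])

    def forward(self, x):
        for layer in self.local_layers:
            x = layer(x)                           # X after L_1 local-aware layers
        for layer in self.global_layers:
            x = layer(x)                           # X_gl after L_2 global-aware layers
        return x

# SingleRANv2 simply swaps the two stacks: global-aware layers first, then local-aware layers.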
§ EXPERIMENTAL SETTINGS
§.§ Datasets
To evaluate the effectiveness of our model, we conduct extensive experiments on four benchmark emotion datasets, i.e., IEMOCAP[https://sail.usc.edu/iemocap/] <cit.>, MELD[https://github.com/SenticNet/MELD] <cit.>, EmoryNLP[https://github.com/emorynlp/emotion-detection] <cit.>, and DailyDialog <cit.>. The statistics are reported in TABLE <ref>.
IEMOCAP is a dyadic conversational dataset containing 10 unique speakers, of which the first 8 speakers belong to the training set and the last two to the test set. The dataset consists of approximately 12 hours of multimodal dialogue data, and we employ only text modality in this work. The dataset contains 152 conversations with a total of 7433 utterances, where these utterances are annotated with one of six emotions, namely happy, sad, neutral, anger, excited, and frustrated. MELD is a multi-party multimodal dialogue dataset from the TV show "Friends", and we use only text modality in this work. The dataset contains 1433 dialogues with a total of 13708 utterances and has seven emotion categories: neutral, surprise, fear, sadness, joy, disgust, and anger. The utterances are labeled with sentiment categories, i.e., positive, negative, or neutral, in addition to being labeled as emotions. EmoryNLP collects multi-party conversations from the TV show "Friends". However, the selection of scenes and emotion labels differs from MELD. The dataset contains 897 dialogues with a total of 12606 utterances and seven emotional categories: sad, mad, scared, powerful, peaceful, joyful, and neutral. DailyDialog is a large scale multi-turn dyadic dialogue dataset with the conversations reflecting various topics in daily life. The dataset contains 13118 conversations with a total of 102979 utterances and seven emotion categories: neutral, happiness, surprise, sadness, anger, disgust, and fear. The dataset suffers from a severe class imbalance, with over 83% of the emotion labels being neutral.
Figure <ref> shows the percentage of each emotion in the four datasets. We can observe that all datasets exhibit the class-imbalance problem, with DailyDialog being the most severe because neutral accounts for 83.10% of its labels, which poses a serious challenge to ERC models. Following COSMIC[https://github.com/declare-lab/conv-emotion/tree/master/COSMIC] <cit.>, we use utterance-level text features extracted with a fine-tuned RoBERTa <cit.> to implement the ERC task.
§.§ Baselines and Evaluation Metrics
Baselines: The baselines used for comparison in this work include COSMIC <cit.>, HiTrans <cit.>, AGHMN <cit.>, DialogueCRN <cit.>, SKAIG-ERC <cit.>, DialogXL <cit.>, I-GCN <cit.>, LR-GCN <cit.>, CauAIN <cit.>, and CoG-BART <cit.>. Of all these baselines, COSMIC, SKAIG-ERC, and CauAIN use commonsense knowledge, while the others do not.
COSMIC is a classic work that introduces external commonsense knowledge into emotion recognition in conversation, which leverages multiple GRUs to integrate commonsense knowledge and extract complex interaction patterns. HiTrans extracts multidimensional contextual information with the use of two hierarchical Transformers and then captures speaker-aware information utilizing pairwise utterance speaker verification. AGHMN constructs the hierarchical memory network and attention gated recurrent units through multiple GRUs respectively to adequately model the context of the utterance. DialogueCRN employs multiple LSTMs to construct the perception phase module and cognition phase module respectively in order to simulate human cognitive behavior, thus enhancing the ability to extract and integrate emotional cues. SKAIG-ERC models the context of utterance adopting graph structure and commonsense knowledge, simulating the mental state of the speaker, and then it employs graph convolutional networks for information propagation, which enhances the emotional representation of the utterance. DialogXL is a pioneering work that applies XLNet to emotion recognition in conversation, which focuses on improvements to the recurrence and attention mechanisms to model conversational emotion data. I-GCN is a dialogue emotion recognition method that models the dialogue as a graph structure, and it extracts semantic correlation information of the utterances and temporal sequence information of the conversation with the help of graph convolutional networks. LR-GCN mainly consists of two modules, latent relation exploration and information propagation, which adopt a multi-branch graph architecture in order to simultaneously capture the speaker information, contextual information of the utterance, and potential correlations between utterances. CauAIN is a cause-aware interaction based model that explicitly models speaker dependencies and contextual dependencies of utterance in combination with commonsense knowledge. CoG-BART is an approach that employs both contrast learning and generative models, which can capture the information of long-distance utterances while alleviating the problem of similar emotion.
Evaluation Metrics: Our evaluation metrics include accuracy (%), weighted F1 (%), micro F1 (%), and macro F1 (%) scores. For the IEMOCAP and MELD datasets, we use accuracy and weighted F1 scores as evaluation metrics; for the EmoryNLP dataset, micro F1 and weighted F1 scores are taken as evaluation metrics; for the DailyDialog dataset, since neutral accounts for about 83% of the dataset, we adopt micro F1 score without neutral and macro F1 score to evaluate our model.
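The paper does not include its evaluation script, but the metrics above can be computed with scikit-learn roughly as follows; the neutral label id used to exclude neutral from DailyDialog's micro F1 is a hypothetical placeholder.

from sklearn.metrics import accuracy_score, f1_score

def erc_metrics(y_true, y_pred, neutral_id=0):
    # neutral_id is a placeholder for the id of the "neutral" class in DailyDialog.
    non_neutral = [c for c in sorted(set(y_true)) if c != neutral_id]
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "weighted_f1": f1_score(y_true, y_pred, average="weighted"),
        "micro_f1_wo_neutral": f1_score(y_true, y_pred, labels=non_neutral, average="micro"),
        "macro_f1": f1_score(y_true, y_pred, average="macro"),
    }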
§.§ Training Details
Our experiments are conducted on a single NVIDIA GeForce RTX 3090 and trained in an end-to-end fashion. We use PyTorch 2.0.0 as the deep learning framework, and the operating system is Ubuntu 20.04. We choose AdamW <cit.> as the optimizer, the L2 regularization factor is 3e-4, and the maximum number of epochs is set to 100. In our experiments, we utilize LSTM <cit.> as the recurrent neural network of the local-aware module. Other hyperparameter settings for the different datasets are displayed in Table <ref>.
§ EXPERIMENTAL RESULTS AND ANALYSIS
In this section, we conduct extensive comparative experiments and ablation studies on four public datasets to demonstrate the effectiveness of our proposed method. In addition, we conduct a case study and an error analysis in the last two parts of this section.
§.§ Comparison with Baselines
We report the results of comparative experiments on four emotion datasets in Table <ref>, which allow the following conclusions to be drawn:
* Our proposed DualRAN achieves remarkable performance on all four emotion datasets, with the most significant improvements in scores on the IEMOCAP dataset. It indicates that DualRAN can adequately model the context and thus effectively extract both global dependency information and local dependency information.
* On the IEMOCAP dataset, DualRAN attains 69.62% accuracy and a 69.73% weighted F1 score. Compared with DialogueCRN, the accuracy of our method is improved by 3.57%; compared with CoG-BART, the proposed DualRAN has a 3.55% improvement in the weighted F1 score.
* On the MELD dataset, the weighted F1 score of our DualRAN is 0.78% higher than that of CauAIN, reaching 66.24%. DualRAN achieves an accuracy of 67.70%, which is a 6.97% improvement relative to that of DialogueCRN. Without using external knowledge, the weighted F1 score of our method is still 1.03% higher than that of COSMIC.
* On the EmoryNLP dataset, the micro F1 score of the proposed DualRAN is 2.24% higher than that of CoG-BART, achieving 44.82%. Compared to DialogXL's weighted F1 score, the improvement of our model is 4.49%, achieving 39.22%.
* On the DailyDialog dataset, DualRAN obtains a micro F1 score of 60.07%, which is 1.86% higher than that of CauAIN. The macro F1 score of our model is 0.94% higher than that of SKAIG-ERC, achieving 52.89%. However, DualRAN's macro F1 score is 0.96% lower than CauAIN's and fails to achieve the best performance.
Overall, DualRAN shows the most dramatic performance improvement on the IEMOCAP dataset compared to the performance on the other datasets. By examining the dataset, it is found that the number of utterances in a conversation is much higher in the IEMOCAP dataset than in any other dataset. In this case, IEMOCAP relies more on contextual modeling than other datasets. Therefore, our DualRAN shows a definite advantage over other baselines on the IEMOCAP dataset with the help of the global-local-aware network.
We also record the F1 scores of DualRAN for each emotion on the IEMOCAP and MELD datasets, as shown in Table <ref>. It is evident from Table <ref> that the proposed DualRAN achieves the best or second-best F1 score for each emotion. For a more intuitive representation, we draw bar charts based on Table <ref> to compare DualRAN with the baselines for each emotion, as shown in Figure <ref>. On the IEMOCAP dataset, DualRAN achieves the best results on happy, neutral, and frustrated, and the second-best F1 scores on sad, angry, and excited, ultimately achieving the best weighted F1 score. Similar results are obtained on the MELD dataset. Notably, our model achieves a 31.78% F1 score on disgust, an extremely rare emotion category, which is far higher than the other baselines. The above results demonstrate that the proposed DualRAN provides powerful contextual modeling capabilities. In particular, on the MELD dataset, disgust can be identified better than by other models with the aid of contextual modeling, and the class-imbalance problem is alleviated to some extent.
§.§ Comparison of SingleRAN with DualRAN
In this subsection, we test the performance of the two single-stream variants of DualRAN, namely SingleRANv1 and SingleRANv2, on the four datasets. As shown in Table <ref>, on the IEMOCAP dataset, both SingleRANv1 and SingleRANv2 have an accuracy of 68.27%, which is 1.35% lower than that of DualRAN. On the MELD dataset, the weighted F1 score of SingleRANv1 is 0.22% lower than that of DualRAN, while the weighted F1 score of SingleRANv2 is 0.92% lower than that of DualRAN. Similar results appear on the EmoryNLP and DailyDialog datasets. The micro F1 score of SingleRANv1 decreases by 2.04% relative to that of DualRAN on the EmoryNLP dataset, while SingleRANv2's micro F1 score of 43.6% is 1.22% lower than DualRAN's. On the DailyDialog dataset, the micro F1 scores of SingleRANv1 and SingleRANv2 decline by 0.5% and 0.92% relative to those of DualRAN, respectively. Overall, the performance of the two variants, i.e., SingleRANv1 and SingleRANv2, slightly lags behind that of DualRAN.
§.§ Impact of Local- and Global-Aware Modules
In this subsection, we remove the local-aware and global-aware modules separately to explore their impact on the performance of DualRAN. From Table <ref>, we can conclude that removing either the local-aware module or the global-aware module degrades the performance of our model. On the IEMOCAP dataset, the weighted F1 score decreases from 69.73% to 64.22% when we remove the local-aware module, while it drops to 65.06% when the global-aware module is removed. The magnitude of the reduction suggests that the IEMOCAP dataset depends more on local-aware modeling than on global-aware modeling, and similar patterns are observed for the other datasets (i.e., MELD and EmoryNLP) except for DailyDialog. Overall, removing either module has a more significant impact on the IEMOCAP dataset than on the others. This is because a conversation in the IEMOCAP dataset contains more utterances and relies more on contextual modeling than in the other datasets.
§.§ Effect of Distinct Number of Network Layers
To investigate the effect of the number of network layers used for global-local-aware modeling on the performance of DualRAN, we conduct ablation studies on the number of network layers in this subsection. We fix the number of layers of the global-aware module while adjusting that of the local-aware module and record the experimental results. As shown in Figure <ref>, the blue lines depict the effect of the number of local-aware layers on the accuracy and weighted F1 scores. Note that these results are derived from experiments conducted on the IEMOCAP dataset. It can be seen that as the number of layers increases, both the accuracy and the weighted F1 score fluctuate around the optimal performance, roughly showing an increasing trend followed by a decreasing trend. Similarly, fixing the number of layers of the local-aware module, we adjust the number of layers of the global-aware module to explore its impact on the performance of DualRAN. As shown by the green lines in Figure <ref>, the performance of the proposed model tends to increase and then decrease as the number of global-aware layers increases.
§.§ Impact of Distinct Recurrent Neural Networks
We test the effect of different recurrent neural networks on DualRAN in this subsection. Figure <ref> shows the experimental results using the improved LSTM and the improved GRU as the local-aware module, respectively. We can see that the accuracy and weighted F1 scores obtained with the improved LSTM are higher than those obtained with the improved GRU on all benchmark datasets. On the whole, better results are obtained with the improved LSTM, which indicates that it can perform better local-aware modeling than the improved GRU. Figure <ref> shows the comparison between the improved LSTM and the vanilla LSTM. DualRAN with the improved LSTM achieves better performance than with the vanilla LSTM on all four datasets. This suggests that including skip connections and feedforward layers in the local-aware module is beneficial for enhancing the expressiveness of the model.
§.§ Effect of Speaker Identity
To explore the effect of speaker identity on the proposed DualRAN, we conduct ablation experiments on speaker information, and the results are displayed in Figure <ref>. On the IEMOCAP dataset, the accuracy of our model decreases from 69.62% to 66.42% when speaker embedding is not employed, a decrease of 3.20%. On the MELD dataset, the weighted F1 score drops to 65.35% when speaker information is removed. Similar performance decreases are found on the EmoryNLP and DailyDialog datasets. These phenomena suggest that speaker identity can effectively model emotional inertia and emotional contagion within and between speakers, which helps improve the performance of the model.
§.§ Impact of Skip Connections
Several studies <cit.> have demonstrated that skip connections can improve the expressiveness and stability of a model, so we add skip connections to both the local-aware module and the global-aware module. To demonstrate their effectiveness, we conduct ablation studies on skip connections in this subsection, and the results are reported in Table <ref>. As we can observe, the performance of the proposed model degrades on all datasets whether skip connections are removed from the local-aware module or from the global-aware module. As expected, the degradation is even more pronounced when we remove the skip connections of both modules at the same time. These phenomena suggest that introducing skip connections in the global-local-aware network effectively improves the performance of the model.
§.§ Results of Sentiment Classification
In this subsection, we replace emotion with sentiment as the classification target. Accordingly, we transform DualRAN into a three-class (i.e., neutral, positive, and negative) model. Note that since the IEMOCAP, EmoryNLP, and DailyDialog datasets contain no sentiment labels, we need to merge the original emotion labels. The specific merging scheme is shown in Table <ref>.
As shown in Table <ref>, the results of DualRAN are similar to those of COSMIC after coarsening emotion into sentiment, and the performance improves on all datasets. For instance, the accuracy of DualRAN on the IEMOCAP dataset improves from 69.62% to 82.38%, an increase of 12.76%. Although the weighted F1 scores of DualRAN on the MELD and EmoryNLP datasets improve relative to COSMIC's sentiment classification results, the gains are limited. This is mainly because most utterances can be easily classified once the fine-grained emotions are coarsened into sentiments. In a nutshell, the task becomes relatively simple, so modeling both local and global contexts is not necessary to achieve good results.
§.§ Case Study
We extract raw utterances from the MELD dataset for the case study. As shown in Figure <ref>, several baseline models (e.g., LR-GCN) tend to classify utterances whose true labels are disgust and fear as neutral. This is due to the class-imbalance problem in the MELD dataset, where disgust and fear belong to the minority classes, while neutral belongs to the majority class. The baselines cannot adequately model the context and tend to predict the emotion of an utterance as the majority class, leading to model failure. As can be seen in Figure <ref>, our proposed DualRAN, in contrast to the baselines, takes into account both global and local information and correctly identifies the utterances whose true emotions are disgust and fear. Overall, relative to the baseline models, DualRAN can sufficiently capture local and global contextual information to accurately identify the minority categories by exploiting the global-local-aware network in some scenarios.
§.§ Limitation
As shown in Figure <ref>, we depict the performance of DualRAN on the four public emotion datasets with confusion matrices. It can be seen that the proposed DualRAN achieves superior results on the IEMOCAP dataset. Similar to some previous models, DualRAN works less well on the DailyDialog dataset, as shown in Figure <ref>. One main factor is that the DailyDialog dataset suffers from an extreme class imbalance, i.e., the utterances annotated as neutral account for a very large proportion of the dataset, causing DualRAN to be biased toward neutral during training. It is evident from Figure <ref> that most utterances tend to be predicted as neutral. We expect that the performance on the DailyDialog dataset would improve if neutral were removed. The class-imbalance problem is also present in the MELD dataset. As shown in Figure <ref>, utterances with true labels of fear, sadness, and disgust tend to be classified as neutral. After examining the MELD dataset, we find that these three emotions belong to the minority classes. In addition, DualRAN suffers from the problem of similar emotions, i.e., some utterances are easily misidentified as similar emotions. For example, as shown in Figure <ref>, utterances whose true label is happy are easily predicted as excited, and utterances whose true emotion is angry are easily classified as frustrated on the IEMOCAP dataset.
§ CONCLUSION AND PROSPECT
In this paper, we propose a Dual-stream Recurrence-Attention Network (DualRAN) with global-local awareness to adequately capture both local and global contextual information of the utterance. The proposed DualRAN is a simple and effective dual-stream network consisting of local- and global-aware modules and focuses on combining recurrence-based and attention-based methods. To construct the local-aware module, we improve the structure of the vanilla RNN with reference to the Transformer, that is, we add a skip connection and a feedforward network layer to enhance the expressiveness of the network. To explore the importance of speaker information for the ERC task, we encode the speaker identity and add it to the corresponding utterance feature. Additionally, based on the local- and global-aware modules of DualRAN, we propose two single-stream recurrence-attention networks, i.e., SingleRANv1 and SingleRANv2. We conduct extensive comparative experiments, and the results demonstrate that our proposed model outperforms all the baselines. Meanwhile, we perform ablation experiments for each component, and the empirical results prove the validity of these components. In future research, we will work on addressing the widespread class-imbalance problem in the benchmark emotion datasets, as well as further exploring the multimodal setting of the ERC task.
§ CREDIT AUTHORSHIP CONTRIBUTION STATEMENT
Jiang Li: Conceptualization, Methodology, Data curation, Software, Validation, Formal analysis, Investigation, Visualization, Writing - original draft, Writing - review & editing, Project administration. Xiaoping Wang: Supervision, Writing - review & editing, Funding acquisition. Zhigang Zeng: Supervision, Funding acquisition.
§ DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
§ ACKNOWLEDGMENTS
This work was supported in part by the National Natural Science Foundation of China under Grant 62236005, 61936004, and U1913602.
| http://arxiv.org/abs/2307.00583v1 | 20230702144359 | A multi-task learning framework for carotid plaque segmentation and classification from ultrasound images | ["Haitao Gan", "Ran Zhou", "Yanghan Ou", "Furong Wang", "Xinyao Cheng", "Xiaoyan Wu", "Aaron Fenster"] | eess.IV | ["eess.IV", "cs.CV"] |
A multi-task learning framework for carotid plaque segmentation and classification from ultrasound images
Haitao Gan, Ran Zhou, Yanghan Ou, Furong Wang, Xinyao Cheng, Xiaoyan Wu,
Aaron Fenster, Fellow, IEEE
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
This work was supported by the National Natural Science Foundation of China under grant No. 62201203, the Natural Science Foundation of Hubei Province under grant No. 2021CFB282, the High-level Talents Fund of Hubei University of Technology under grant No. GCRC2020016. (Corresponding author: Ran Zhou.)
Haitao Gan is with the School of Computer Science, the Hubei University of Technology, Wuhan, Hubei, 430068, China (e-mail: [email protected]).
Ran Zhou is with the School of Computer Science, the Hubei University of Technology, Wuhan, Hubei, 430068, China (e-mail: [email protected]).
Yanghan Ou is with the School of Computer Science, the Hubei University of Technology, Wuhan, Hubei, 430068, China (e-mail: [email protected]).
Furong Wang is with the Liyuan Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China (e-mail: [email protected]).
Xinyao Cheng is with the Department of Cardiology, Zhongnan Hospital, Wuhan University, Wuhan, Hubei, 430068, China (e-mail: [email protected]).
Xiaoyan Wu is with the Cardiovascular Division, Zhongnan Hospital, Wuhan University, Wuhan, Hubei, 430068, China (e-mail: [email protected]).
Aaron Fenster is with the Imaging Research Laboratories, Robarts Research Institute, Western University, London, ON N6A 5B7, Canada (e-mail: [email protected]).
August 1, 2023
Carotid plaque segmentation and classification play important roles in the treatment of atherosclerosis and assessment for risk of stroke. Although deep learning methods have been used for carotid plaque segmentation and classification, most focused on a single task and ignored the relationship between the segmentation and classification of carotid plaques. Therefore, we propose a multi-task learning framework for ultrasound carotid plaque segmentation and classification, which utilizes a region-weight module (RWM) and a sample-weight module (SWM) to exploit the correlation between these two tasks. The RWM provides a plaque regional prior knowledge to the classification task, while the SWM is designed to learn the categorical sample weight for the segmentation task. A total of 1270 2D ultrasound images of carotid plaques were collected from Zhongnan Hospital (Wuhan, China) for our experiments. The results of the experiments showed that the proposed method can significantly improve the performance compared to existing networks trained for a single task, with an accuracy of 85.82% for classification and a Dice similarity coefficient of 84.92% for segmentation. In the ablation study, the results demonstrated that both the designed RWM and SWM were beneficial in improving the network's performance. Therefore, we believe that the proposed method could be useful for carotid plaque analysis in clinical trials and practice.
Carotid plaque, Multi-task learning, Segmentation, Plaque Classification
§ INTRODUCTION
Cardiovascular disease (CVD) is the global leading cause of death and disability, and ischemic heart disease and stroke are the primary causes of death among CVD patients <cit.>. Atherosclerotic plaques can rupture, resulting in thrombus formation and vascular stenosis that can lead to corresponding hemodynamic changes and ischemic cardiovascular events <cit.>. Because the carotid arteries (CA) have a simple geometry and are easily accessible, they are commonly used to visualize and assess carotid plaques and their potential risk for causing a stroke. Therefore, identifying carotid plaques and assessing them is a critical aspect of treating atherosclerosis and reducing the risk for stroke <cit.>.
Carotid ultrasound is widely used in carotid plaque identification and assessment due to its non-invasive and low-cost characteristics. Over the past decade, researchers have developed various ultrasound biomarkers for the quantification of carotid plaques and showed them to be useful for monitoring changes in the carotid arteries, including intima-media thickness (IMT) <cit.>, total plaque area (TPA) <cit.>, and total plaque volume (TPV) <cit.>. Furthermore, different types of carotid plaques pose different risks of causing cerebrovascular events such as stroke and TIA. The appearance of carotid plaques in B-mode ultrasound images can be divided into three types: hyperechoic plaque, hypoechoic plaque, and mixed-echoic plaque <cit.>. Measurement of these ultrasound-based biomarkers requires segmentation of plaque boundaries, and identification of carotid plaque type using classification methods. Thus, segmentation and classification of carotid plaques in ultrasound images are two critical tools in carotid ultrasound image analysis.
Many computer-aided diagnosis algorithms have been proposed for carotid plaque segmentation, which can be divided into two categories: traditional image processing-based methods and deep learning-based approaches. Traditional methods, including level sets, active contour models, snake models, Gaussian mixture models, and geometric priors, have alleviated the burden of manual segmentation <cit.>. However, these methods are sensitive to contour initialization and image quality, resulting in segmentation inaccuracy and instability that cannot meet the demands of clinical applications. As a result, deep learning-based methods have become the mainstream research direction for carotid plaque segmentation, with most studies focusing on deep learning network structures, loss function design, and post-processing of the segmentation results. Jain et al. developed hybrid deep learning segmentation models (i.e., UNet, UNet+, SegNet, SegNet-UNet, and SegNet-UNet+) for atherosclerotic plaques in the internal carotid artery using B-mode ultrasound images <cit.>. They further used seven U-series architectures for measuring the far-wall plaque area of the common carotid (CCA) and internal carotid arteries (ICA) in B-mode ultrasound images <cit.>. Mi et al. proposed MBFF-Net for carotid plaque segmentation in ultrasound images, designing a multi-branch feature fusion module to extract multiple scales and different contexts while exploiting prior information about the carotid artery wall <cit.>. Li et al. proposed a U-shaped CSWin transformer for carotid artery segmentation in 3D ultrasound images, where hierarchical CSWT modules capture the rich global context information in the 3D image <cit.>. We also used a U-Net to generate TPA from B-mode carotid ultrasound images <cit.><cit.> and further proposed ensemble learning approaches to reduce segmentation inconsistency, smooth segmentation contours, and improve the accuracy and robustness of plaque segmentation models <cit.>.
Many other works have focused on ultrasound carotid plaque classification. Christodoulou et al. extracted ten different texture feature sets and used the statistical k-nearest neighbor method for atherosclerotic carotid plaque classification <cit.>. Kyriacou et al. used two classifiers, the Probabilistic Neural Network and the Support Vector Machine (SVM), to classify atherosclerotic carotid plaques into symptomatic or asymptomatic types <cit.>. Prahl et al. proposed a percentage-white feature to classify the echogenicity of carotid plaques <cit.>. Acharya et al. extracted several grayscale features and input them into an SVM classifier for plaque tissue classification <cit.>. Engelen et al. used 3D texture features to predict vascular events from 3D carotid plaque ultrasound images <cit.>. These traditional plaque identification methods rely on feature extraction algorithms that cannot accurately and comprehensively extract carotid plaque features. In recent years, however, researchers have successfully applied deep learning methods to carotid ultrasound image classification. Lekadir et al. used a convolutional neural network (CNN) to identify plaque compositions from carotid ultrasound images <cit.> and Saba et al. implemented six deep artificial intelligence models for carotid ultrasound plaque characterization using images obtained from a multicenter study <cit.>.
Although deep learning methods have been used for carotid plaque assessment, these methods focused solely on a single segmentation or classification task. Such methods have limitations: (1) when two stages are used for carotid plaque image analysis, the classification performance is sensitive to the accuracy of the preceding segmentation task; (2) these methods did not consider the relationship between the segmentation and classification tasks, which could result in under- or over-segmentation. Therefore, we believe that implementing a multi-task deep learning framework can be advantageous for both the segmentation and classification tasks. This approach has the potential to generate more robust results and could also simplify the processing stream and reduce the computation time. Thus, in this study, we propose a novel multi-task learning framework aimed at facilitating both ultrasound plaque segmentation and classification. The framework incorporates a region-weight module (RWM) and a sample-weight module (SWM) to effectively leverage the correlation between the two tasks. The RWM fuses the segmentation probability map with the high-level features of the classification task, which is then applied to the feature fusion processing of the classification task. The SWM uses the predicted probability to generate category sample weights that are embedded in the loss function of the segmentation task. This approach can provide rapid and automatic plaque segmentation and classification with high accuracy and low variability, making it possibly suitable for clinical use.
§ METHODS
The proposed multi-task learning framework leverages the complementary information of the carotid plaque segmentation and classification tasks to enhance the overall performance of the system. Figure <ref> shows the general framework of our multi-task algorithm, which contains a segmentation branch and a classification branch. By segmenting the carotid plaque, the network learns crucial information about the plaque size and shape, enabling the classification branch to concentrate on features in plaque regions while minimizing the influence of irrelevant background information. The classification task provides valuable category information about plaques, and this category information is integrated into the loss function of the segmentation task to improve segmentation performance on difficult plaque images. The network incorporates two new modules, the region-weight module (RWM) and the sample-weight module (SWM), which mutually enhance the performance of the classification and segmentation tasks. RWM integrates the segmentation probability maps into the feature maps extracted in the classification task, which makes the classification model focus on the plaque region. SWM assigns a confidence level to each sample (image) by comparing the predicted and labeled probability distributions of the classification task. This confidence level is used as the sample weight in the loss function of the segmentation task. The feature extraction layers (layers in red rectangles in Fig. 1) share training weights between the carotid plaque segmentation and classification branches.
§.§ Network Architecture
U-Net-based network architectures have been widely used in medical image segmentation, and UNet++ is an improvement of U-Net that is superior to other U-Net variants <cit.>. Figure <ref> shows the architecture of UNet++, which fuses five convolutional blocks to extract multi-level features. Correspondingly, each convolutional block except the first has a decoding path. Each decoding path uses upsampling blocks to convert feature maps of different sizes to the input size and to predict the segmentation mask of the input image. This architecture not only uses skip connections to connect feature maps from the encoder to each decoder path, but also uses skip connections to connect different decoder paths. Such flexible feature fusion helps to alleviate the information loss that occurs when extracting features and converting images. In our multi-task algorithm, residual blocks replace the convolutional blocks in the UNet++ backbone and serve as the feature extraction layers.
The residual block uses two 1×1 convolution kernels to change the number of channels in the feature map, a 3×3 convolution kernel to extract features, and a shortcut connection to sum the input and output of the convolution kernels <cit.>. This block alleviates the degradation of deep neural networks, reduces the number of model parameters, and makes the deep neural network easier to train.
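A sketch of such a residual (bottleneck) block is shown below; the use of batch normalization, ReLU, and a 1×1 shortcut projection when the channel counts differ are standard choices assumed here rather than details taken from the paper.

import torch.nn as nn

class ResidualBlock(nn.Module):
    # Two 1x1 convolutions change the channel count, one 3x3 convolution extracts features,
    # and a shortcut sums the block input with its output.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        mid_ch = out_ch // 4
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.shortcut = (nn.Identity() if in_ch == out_ch
                         else nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.shortcut(x))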
§.§ Region-Weight Module (RWM)
To facilitate the classification task, RWM combines the outputs of the UNet++ segmentation branch to generate region weights. These weights are then used to fuse the feature maps obtained in the feature extraction layers. Figure <ref> shows the detailed architecture of RWM. First, the four outputs of UNet++ are downsampled to the same size as the feature maps obtained in the feature extraction layers. After that, a softmax function is applied to obtain the probability maps. These probability maps from the segmentation task are used as region weights in the feature fusion step to improve the classification feature extraction. Each probability map is multiplied element-wise with each channel of the feature map to obtain a region feature map. Finally, the four region feature maps are summed to obtain the fused feature map.
Assuming that the four outputs of UNet++ are S={s_1,s_2,s_3,s_4}, the region probability maps (P={p_1,p_2,p_3,p_4}) are obtained by
p_i=e^F_down(s_i)/∑_i=1^4e^F_down(s_i)
where F_down is a down-sampling function that makes S the same size as the feature maps obtained in the feature extraction layers. Let M denote the feature maps obtained in the feature extraction layers of the classification task. The fused feature maps (FM) are then formulated as
FM=∑_i=1^4α_ip_iM
where the α_i are hyperparameters whose sum equals 1 (∑_i=1^4α_i=1).
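A minimal PyTorch sketch of RWM following Eqs. (1)-(2) is given below. Each segmentation output s_i is assumed here to be a single-channel plaque score map, and bilinear interpolation is assumed for the down-sampling function F_down; the α values follow those used in the experiments, while everything else is an illustrative assumption.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionWeightModule(nn.Module):
    # Downsample the four UNet++ outputs, normalize them with a softmax across the
    # four outputs (Eq. (1)), and use them as region weights in feature fusion (Eq. (2)).
    def __init__(self, alphas=(0.1, 0.2, 0.3, 0.4)):
        super().__init__()
        self.alphas = alphas

    def forward(self, seg_outputs, feat):
        # seg_outputs: list of four (B, 1, H_s, W_s) maps s_i; feat: (B, C, H_f, W_f) = M
        h, w = feat.shape[-2:]
        down = [F.interpolate(s, size=(h, w), mode="bilinear", align_corners=False)
                for s in seg_outputs]                          # F_down(s_i)
        p = torch.softmax(torch.cat(down, dim=1), dim=1)       # p_i, normalized over the 4 outputs
        fused = sum(alpha * p[:, i:i + 1] * feat               # FM = sum_i alpha_i * p_i * M
                    for i, alpha in enumerate(self.alphas))
        return fused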
§.§ Sample-Weight Module (SWM)
SWM uses the results of the classification task to assign a confidence level to each sample, which is then used as the sample weight in the loss function of the segmentation task. The Kullback-Leibler (K-L) divergence between the probability distribution predicted by the classification branch and the label distribution is used to determine this confidence level, and its value represents the weight of each sample. Figure <ref> shows the implementation details of SWM.
Assuming that y_i denotes the label probability distribution of the ith input sample for classification and g(x_i;θ_cls) denotes the probability distribution predicted by the classification network, the confidence level (sample-weight) for the ith sample is obtained by
ω_i=KL(y_i||g(x_i;θ_cls))=∑_j=1^3y_i,jlog(y_i,j/g_j(x_i;θ_cls))
For training samples that are incorrectly predicted by the encoder in SWM, the larger the K-L divergence, the smaller the weight assigned to the sample. A large K-L divergence indicates that the predicted probability distribution differs greatly from the ground-truth distribution, i.e., the model is less confident in its prediction, so the sample receives a smaller weight. Conversely, a small K-L divergence indicates that the predicted distribution is close to the ground-truth distribution and the model is more confident in its prediction, which results in a larger weight for the sample in the loss function.
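The per-sample quantity of Eq. (3) can be computed as in the sketch below; the small epsilon inside the logarithms for numerical stability is an implementation detail assumed here and not specified in the paper.

import torch

def sample_weights(class_probs, class_labels_onehot, eps=1e-8):
    # omega_i = KL(y_i || g(x_i)): per-image divergence between the one-hot label
    # distribution y_i and the predicted class distribution g(x_i; theta_cls).
    kl = (class_labels_onehot *
          (torch.log(class_labels_onehot + eps) - torch.log(class_probs + eps))).sum(dim=1)
    return kl                                                   # shape (batch,)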
§.§ Loss Function
Consider a carotid ultrasound dataset X={(x_i,y_i,z_i)|i∈[1,n], x_i∈ℝ^W×H, y_i∈ℝ^3×1, z_i∈ℝ^2×W×H}, where x_i denotes the ith image, y_i is the class label of the ith image, and z_i is the segmentation label of the ith image, where y_i and z_i are presented in a one-hot format. The number of images in the dataset is n. W and H denote the width and height of carotid ultrasound images, respectively. The weighted cross-entropy loss for segmentation is given by Eq. (4) and the cross-entropy loss for classification is given by Eq. (5).
loss_sc=-1/n∑_i=1^n∑_j=1^2w_iz_i,jlogf_j(x_i;θ_seg)
loss_cc=-1/n∑_i=1^n∑_j=1^3y_i,jlogg_j(x_i;θ_cls)
where w_i is the sample weight generated by SWM for the ith sample, z_i,j and y_i,j denote the jth elements of z_i and y_i, respectively, f(·;θ_seg) is the segmentation model's prediction with network parameters θ_seg, and g(·;θ_cls) is the classification model's prediction with network parameters θ_cls.
Furthermore, we introduced an entropy loss to promote the network to generate a one-hot distribution with a high probability for a single class, instead of using a flat distribution. Consequently, the network will predict results with increased certainty. Through the minimization of the entropy loss, the network can acquire the ability to produce more precise and confident predictions for each input sample. For both segmentation and classification tasks, the entropy losses can be defined as
loss_se=-1/n∑_i=1^n∑_j=1^2f_j(x_i;θ_seg)logf_j(x_i;θ_seg)
loss_ce=-1/n∑_i=1^n∑_j=1^3g_j(x_i;θ_cls)logg_j(x_i;θ_cls)
Finally, the loss function of our multi-task framework is
Loss=loss_sc+loss_se+loss_cc+loss_ce
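The four terms of Eq. (8) can be sketched in PyTorch as follows; the averaging over pixels and over the batch, and the small epsilon inside the logarithms, are conventions assumed here rather than details given in the paper.

import torch
import torch.nn.functional as F

def multitask_loss(seg_logits, seg_labels, cls_logits, cls_labels, w, eps=1e-8):
    # seg_logits: (B, 2, H, W); seg_labels: (B, H, W) pixel class indices;
    # cls_logits: (B, 3); cls_labels: (B,) class indices; w: (B,) SWM sample weights.
    ce_pixel = F.cross_entropy(seg_logits, seg_labels, reduction="none")    # (B, H, W)
    loss_sc = (w * ce_pixel.mean(dim=(1, 2))).mean()                        # weighted seg CE
    loss_cc = F.cross_entropy(cls_logits, cls_labels)                       # classification CE

    p_seg = torch.softmax(seg_logits, dim=1)
    p_cls = torch.softmax(cls_logits, dim=1)
    loss_se = -(p_seg * torch.log(p_seg + eps)).sum(dim=1).mean()           # seg entropy
    loss_ce = -(p_cls * torch.log(p_cls + eps)).sum(dim=1).mean()           # cls entropy
    return loss_sc + loss_se + loss_cc + loss_ce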
§.§ Evaluation Metrics
To evaluate the performance of our proposed method comprehensively, we employed a set of widely used metrics to measure both segmentation and classification performance. Specifically, the Dice similarity coefficient (DSC), absolute plaque area difference (|ΔPA|), Hausdorff distance (HD), average symmetric surface distance (ASSD), and Pearson correlation coefficient (PCC) were used to quantify the accuracy of our segmentation results. In addition, we used accuracy (ACC), precision, F1-score, and the Kappa coefficient to assess the classification performance of our algorithm.
DSC is used to calculate the similarity between the algorithm segmentation (A) and the manual segmentation (M), as
DSC(A, M)=2|A∩ M|/(|A|+|M|)×100%
|ΔPA| is used to calculate the plaque area difference between the algorithm segmentation and manual segmentation as
|Δ PA|= |PA_alg-PA_man|
where PA_alg is the algorithm-generated plaque area and PA_man is the manually segmented plaque area. ASSD and HD were used to evaluate the distance between the algorithm and manual segmentation contours and are given by:
ASSD(A,M)= 1/2(1/|∂ R_A|∑_p∈∂ R_Ad(p,∂ R_M)+1/|∂ R_M|∑_p∈∂ R_Md(p,∂ R_A))
HD(A,M)=max(max_p∈∂ R_A d(p,∂ R_M), max_p∈∂ R_M d(p,∂ R_A))
where ∂ R_A is the algorithm segmentation surface and d(p, ∂ R_M) is the shortest Euclidean distance from a point p to the manual segmentation surface ∂ R_M; d(p,∂ R_A) is defined in the same manner. The Pearson correlation coefficient (PCC) was used to measure the correlation between the algorithm and manually generated plaque areas.
In classification tasks, in addition to ACC, we also used precision and F1-score to measure the classification performance.
F1-score=2× Precision × Recall/Precision + Recall
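Two of the segmentation metrics are simple enough to state directly in code; the sketch below computes the DSC and |ΔPA| from binary masks, with the pixel-to-mm^2 calibration factor left as an assumed input.

import numpy as np

def dice_similarity(algo_mask, manual_mask):
    # DSC(A, M) = 2|A and M| / (|A| + |M|) * 100%, for binary numpy masks.
    intersection = np.logical_and(algo_mask, manual_mask).sum()
    return 200.0 * intersection / (algo_mask.sum() + manual_mask.sum())

def abs_plaque_area_difference(algo_mask, manual_mask, pixel_area_mm2):
    # |DeltaPA| = |PA_alg - PA_man|; pixel_area_mm2 converts pixel counts to mm^2.
    return abs(int(algo_mask.sum()) - int(manual_mask.sum())) * pixel_area_mm2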
§ EXPERIMENTS AND RESULTS
§.§ Data Acquisition
A total of 1270 longitudinal carotid ultrasound images were obtained from Zhongnan Hospital of Wuhan University. The Zhongnan Hospital Institutional Review Board approved the use of the ultrasound data, and all patients provided consent. These patients, who had risk factors such as hypertension or hyperlipidemia or had a history of vascular events, underwent ultrasound imaging of their carotid arteries (common, internal, and external). All images were acquired using an Acuson SC2000 (Siemens, Erlangen, Germany) ultrasound system with a 5-12 MHz linear array probe (9L4). Two experienced experts annotated the plaque contours and classified plaques into three categories (i.e., hyperechoic plaque, hypoechoic plaque, and mixed-echoic plaque) according to the European Carotid Plaque Study Group criteria.
Before algorithm implementation, we reduced the size of the examined 2D ultrasound images by cropping a region-of-interest around each plaque, as would be done during the clinical examination. The cropped images and manual segmentations were automatically resized to 96×144 matrices as network inputs.
§.§ Experiment Setting
During the training of the multi-task network, the dataset was randomly divided into training, validation, and test sets in the ratio of 6:2:2, resulting in 751, 258 and 261 images in the three sets, respectively. The following parameters were used for network training: number of epochs=200, batch size=10, optimizer=ADAM, and learning rate=1e-3. The hyperparameters α_1, α_2, α_3 and α_4 in the RWM were set to 0.1, 0.2, 0.3 and 0.4, respectively. All deep learning networks were implemented using PyTorch 1.10.0, CUDA 11.6, and Python 3.8 on an NVIDIA GTX3090 GPU.
All experimental results in this paper were obtained from three randomized experiments with different random seeds. The segmentation evaluation metrics, DSC, ASSD, HD and |ΔPA|, are expressed as mean±SD of the average measurements over all patients, and the classification metrics, ACC, Precision, F1-score and Kappa, are presented as mean±SD of the results generated by the three randomized experiments.
§.§ Segmentation Accuracy of the Multi-task Framework
Figure <ref> shows representative segmentations generated by our multi-task approach compared to the baseline UNet++. The segmentations generated by the multi-task algorithm outperformed those of the baseline UNet++ and were very close to the manual segmentations.
To evaluate the effectiveness of our proposed multi-task framework, we further compared the performance of our algorithm to networks used for segmentation only, including U-Net <cit.>, SegNet <cit.>, HRNet <cit.>, as well as to the baseline UNet++ <cit.>. Moreover, our multi-task framework was also compared to an ensemble algorithm (SegNet-UNet+) used for carotid plaque segmentation, which fuses the outputs of SegNet and UNet++ <cit.>. Table <ref> shows the results of carotid plaque segmentation. Our multi-task framework achieved better performance than the existing single-task segmentation networks (i.e., U-Net, SegNet, HRNet and UNet++) for all metrics. Compared to the baseline UNet++, our multi-task framework increased the DSC over the three different backbones by 1.15%, reduced the ASSD, HD and |ΔPA| by 5.65%, 11.19% and 16.15%, respectively, and increased the Pearson correlation coefficient from 0.889 to 0.927. Compared to SegNet-UNet+, the multi-task framework achieved similar performance for most of the evaluation metrics, but the PCC of our algorithm was slightly lower than that of SegNet-UNet+.
Figure <ref> shows the correlation and Bland-Altman plots comparing the multi-task algorithm and the manually generated plaque areas for 261 carotid ultrasound images. The correlation is strong and significant, with a correlation coefficient of 0.939 (p<0.0001) (Fig. 6(a)), and there is a small bias of 1.68 mm^2 with limits of agreement from -10.23 to 13.59 mm^2 (Fig. 6(b)).
§.§ Classification Accuracy of the Multi-task Framework
To evaluate the classification accuracy of our proposed multi-task framework, we compared the performance of our algorithm to that of different single-task classification networks, including EfficientNet <cit.>, ResNet <cit.>, DenseNet <cit.>, RepVGG <cit.> and HRNet <cit.>. Table <ref> summarizes the carotid plaque classification performance of our multi-task framework and the results of the other networks. Our multi-task framework achieved better performance than the existing single-task classification networks for all metrics. Moreover, our proposed method increased the ACC, Precision, F1-score and Kappa coefficient of the baseline by 3.06%, 2.66%, 3.02% and 5.58%, respectively.
In Figure <ref> we show the carotid plaque classification performance as confusion matrices of our multi-task model and the baseline UNet++. These results demonstrate that the multi-task approach improves the classification accuracy for all three networks. These results also show that the accuracy of classifying hyperechoic and hypoechoic plaques is higher than that of mixed-echoic plaques.
§.§ Ablation Experiments
We conducted an ablation study to examine the effectiveness of the RWM and SWM modules. The baseline method (Base) is a general multi-task framework <cit.> without the use of RWM and SWM. The experimental results are shown in Table <ref> for plaque segmentation and Table <ref> for plaque classification.
From Table <ref>, we observe that using SWM improves the DSC, ASSD, and HD of the three backbones by 0.92%, 1.43%, and 7.37%, respectively. Base+SWM yields significantly better performance in all metrics except |ΔPA| and PCC, which are very close to the Base results. This suggests that SWM improves the segmentation task by allowing it to focus on misclassified samples with the help of the classification task. Furthermore, by adding the RWM module to the multi-task framework, we observe that Base+SWM+RWM improves all metrics for the three backbones, except for |ΔPA|, indicating that RWM can also improve the segmentation performance.
Table <ref> shows that Base+RWM outperforms Base in almost all metrics, except for the F1-score of ResNet. Specifically, the accuracy, precision, and Kappa are improved by 0.31%, 0.51% and 0.40%, respectively. Base+RWM achieves the same F1-score as 'Base'. These results indicate that the classification task can learn new features from the segmentation task, i.e., the segmentation task can facilitate the classification task. Furthermore, by adding the SWM module to Base+RWM, the performance is improved in all metrics, indicating that SWM is also helpful in the classification task.
§.§ Computational Time
The mean testing time for a single plaque segmentation and classification by our multi-task framework was 32.09±4.15 ms per image, which is similar to a single segmentation or classification task. This time is sufficiently short to allow plaque analysis immediately after the carotid image acquisition.
§ DISCUSSION
Accurate segmentation and classification of carotid plaques is crucial for identifying high-risk patients, selecting appropriate treatments, and monitoring the progression of atherosclerosis. In this study, we propose a new multi-task deep learning framework for carotid ultrasound image analysis that trains a single model to generate both carotid plaque segmentations and plaque types simultaneously. Our experiments show a strong correlation between the algorithm-generated and manual plaque areas (PA) and excellent accuracy in plaque classification using our multi-task framework. Furthermore, we show the mutually beneficial relationship between the segmentation and classification tasks in improving performance, as well as the efficiency of our algorithm in generating PA and plaque types simultaneously.
The proposed multi-task deep learning framework differs from previous works that used two-stage model training and from existing multi-task algorithms. In particular, previous carotid ultrasound image analysis methods involved training separate segmentation and classification models and required two stages for testing <cit.>. However, these methods have several limitations, including segmentation errors leading to decreased classification accuracy and increased training time due to the need for two separate models. Previous multi-task algorithms did not consider the mutually beneficial relationship between the segmentation and classification tasks. For example, Shen et al. developed a multi-task learning method named NDDR-LCS that leverages auxiliary information from ultrasound reports to assist the carotid plaque classification task <cit.>. In contrast, our method provides a novel framework aimed at using a single model to accomplish both carotid plaque segmentation and classification. Also, our method incorporates a region-weight module (RWM) and a sample-weight module (SWM) to exploit the relationship between these two tasks, enabling mutual promotion of the performance of the two tasks.
The proposed multi-task algorithm was evaluated on a large dataset with 1270 longitudinal carotid ultrasound images collected at Zhongnan Hospital (Wuhan, China). The manual plaque area (PA) measurements have a mean of 24.03 mm^2 and range from 1.45 mm^2 to 187.06 mm^2, with a majority in the range of 10 mm^2 to 80 mm^2. The small areas of these plaques, representing early-stage disease, make accurate segmentation and classification difficult. Nonetheless, the segmentation task of the developed multi-task algorithm yielded DSC, ASSD, HD and |ΔPA| in excellent agreement with the manual segmentation results. Compared to the widely used UNet++ network, our approach achieved better performance by 1.15%, 5.65%, 11.19%, 16.15% for DSC, ASSD, HD, and |ΔPA|, respectively. The algorithm PA measurements were strongly and significantly (r=0.927 and p=0.0001) correlated with the manual measurements. For classification, our algorithm also showed high accuracy and agreement. Compared to popular classification networks (i.e., EfficientNet, ResNet, DenseNet, RepVGG, and HRNet), our approach achieved the highest performance for ACC, Precision, F1-score, and Kappa coefficient. These results indicate that our approach provides plaque classification performance similar to that of an experienced human operator, possibly making our method usable clinically.
Furthermore, compared to the general multi-task algorithms, our method incorporated RWM and SWM to learn the relationship between plaque segmentation and classification tasks. The ablation results showed that RWM and SWM caused the two tasks to benefit from each other. For the segmentation results as shown in Table <ref>, the DSC, ASSD, HD, |ΔPA| improved with the help of SWM. For the classification results as shown in Table <ref>, SWM also showed an increase in specificity, accuracy, precision, and Kappa. This is due to two reasons: (1) RWM integrates the probability map of segmentation results into the high-level features of the classification task, enabling the classification task to learn location information of plaques; (2) SWM utilizes prediction errors as sample weights in the loss function of the segmentation task, enabling the segmentation task to focus on training misclassified samples. These results suggest that the proposed RWM and SWM modules could increase the performance of the multi-task framework.
Although we achieved high segmentation and classification accuracy, we acknowledge some limitations. We note that the classification accuracy for mixed-echoic plaques is much lower than that for hyperechoic and hypoechoic plaques. This might be because the complex structure of mixed-echoic plaques makes them more difficult to identify, so they have a higher probability of being misclassified as hyperechoic or hypoechoic plaques. For future work, we will investigate a spatial attention module to make the networks focus on the features within plaques and use sample weights to bias the networks toward difficult samples. Furthermore, we will also apply this approach to other plaque burden measurements (i.e., total plaque volume (TPV) and vessel wall volume (VWV)).
§ CONCLUSION
In conclusion, this work is the first to report on the development of a multi-task framework to generate carotid plaque area measurements and plaque types simultaneously, in which the region-weight module and the sample-weight module were developed to exploit the correlation between classification and segmentation. Experimental results show that the proposed approach can effectively improve the accuracy of carotid plaque classification and segmentation, suggesting it may be useful in clinical practice and clinical trials for evaluating new therapies for atherosclerosis.
§ ACKNOWLEDGMENT
The authors thank the Zhongnan Hospital research team for providing the 2D US images. We also thank the Liyuan Hospital research team for their efforts in manually identifying plaques from carotid ultrasound images.
| http://arxiv.org/abs/2307.03006v1 | 20230706141005 | Predicting a noisy signal: the costs and benefits of time averaging as a noise mitigation strategy | ["Jenny Poulton", "Age Tjalma", "Lotte Slim", "Pieter Rein ten Wolde"] | q-bio.QM | ["q-bio.QM", "cond-mat.stat-mech"] |
Self-supervised Optimization of Hand Pose Estimation using Anatomical Features and Iterative Learning
We thank the Baden-Württemberg Ministry of Economic Affairs, Labour and Tourism for funding the AI Innovation Center “Learning Systems and Cognitive Robotics" where this work was carried out.
Christian Jauch1,
Timo Leitritz1, and
Marco F. Huber12
1Machine Vision and Signal Processing, Fraunhofer Institute for Manufacturing Engineering and Automation IPA,
Stuttgart, Germany,
2Institute of Industrial Manufacturing and Management IFF, University of Stuttgart, Germany,
August 1, 2023
=========================================================================================================================================================================================================================================================================================================
One major challenge for living cells is the measurement and prediction of signals corrupted by noise. In general, cells need to make decisions based on their compressed representation of noisy, time-varying signals. Strategies for signal noise mitigation are often tackled using Wiener filtering theory, but this theory cannot account for systems that have limited resources and hence must compress the signal. To study how accurately linear systems can predict noisy, time-varying signals in the presence of a compression constraint, we extend the information bottleneck method. We show that the optimal integration kernel reduces to the Wiener filter in the absence of a compression constraint. This kernel combines a delta function at short times and an exponential function that decays on a timescale that sets the integration time. Moreover, there exists an optimal integration time, which arises from a trade-off between time averaging signal noise and dynamical error. As the level of compression is increased, time averaging becomes harder, and as a result the optimal integration time decreases and the delta peak increases. We compare the behaviour of the optimal system with that of a canonical motif in cell signalling, the push-pull network, finding that the system reacts to signal noise and compression in a similar way.
§ INTRODUCTION
Autonomous or self-perpetuating systems such as cells typically exist in dynamic environments. A general requirement for self-perpetuating systems to thrive in such environments is the ability to respond to changing conditions. Ideally, a system would make an instantaneous change to respond to an environmental change. In reality, mounting a response takes time. Given this, an optimal response requires systems to predict an environmental change <cit.>. Intriguingly, experiments have revealed that even single-celled organisms can predict environmental change <cit.>. For example, cells can use the arrival of one sugar to predict that the next one will arrive <cit.>. In this work, we consider the optimal prediction of time-varying signals. We consider biological sensing systems, but our ideas can be applied to any system predicting a time-varying signal.
Living cells live in rich sensory environments and can sense and react to many different external signals. These include light, motion and chemical concentrations. In this work, we consider the trajectory of the changing concentration of ligand molecules in the environment as a function of time. These concentrations are measured via receptors, which are typically located on the surface of the cell. The ligand molecules bind to these receptors, which transmit information to a downstream system within the cell. Receptor-ligand binding, like all processes at the cellular scale, is noisy. As a result, the signal that is propagated to the downstream system is corrupted by signal noise, also called input noise. Living cells, like any signal detection system, are thus inevitably affected by signal noise.
This work is interested in understanding how systems can mitigate the effect of this signal noise.
How cells can maximize their sensing precision by minimizing the propagation of signal noise has been studied extensively. In their pioneering paper, Berg and Purcell <cit.> pointed out that cells can reduce the sensing error via the mechanism of time integration. In this mechanism, the cell does not infer the ligand concentration from the current concentration but rather from its average over some given integration time. Following the work of Berg and Purcell, many studies have addressed the question of how accurately living cells can measure ligand concentrations via the mechanism of time integration <cit.>. Importantly, these studies assume that the signal is constant on the timescale of the response and that the different signal values are averaged uniformly in time. However, when the integration time is comparable to the correlation time of receptor-ligand binding, the optimal weighting becomes non-uniform <cit.>. Moreover, ligand concentrations often fluctuate on a timescale that is comparable to the response time of the system, as, for example, in chemotaxis <cit.>. Predicting these signals optimally requires a non-uniform time average <cit.>. Another sensing strategy, which can reach a higher sensing precision, is that of maximum-likelihood sensing <cit.> or Bayesian filtering <cit.>.
Since systems cannot generally respond instantaneously to changes in their environment, it becomes beneficial to anticipate the change and mount a response ahead of time. How accurately this can be done is determined not by how accurately they can predict the current signal but rather the future signal. For a system to predict the future, it must extract characteristics of the past signal that are informative about the future signal. The amount of predictive information stored in the past signal trajectory about the future signal is the mutual information I(s⃗,s(τ)) between the past signal trajectory s⃗ and the signal value s(τ) at a future timepoint τ. This predictive information puts a fundamental lower bound on the prediction error. However, signal noise means that this bound can, in general, not be reached.
Wiener filtering theory <cit.> makes it possible to derive, for linear systems, the optimal integration function that minimizes the prediction error for time-varying signals in the presence of signal noise, and it has been applied to cellular systems <cit.>.
Wiener filtering theory, however, does not recognize that systems are built with finite resources. In general, and as assumed in Wiener filtering theory, systems do not predict the future signal s(τ) from the input signal trajectory s⃗ directly, but rather indirectly, from the output of the signalling system, x. This output depends on the past input signal trajectory s⃗. Wiener filtering theory assumes that the input trajectory can be reliably mapped onto the output x.
In general, however, the output trajectory x(t) is a noisy and compressed representation of the input trajectory s(t) because resources such as protein copy numbers and energy are finite. The data processing inequality implies that the mutual information between the compressed output x and the future of the signal s(τ) is less than that between the uncompressed signal and the future: I(x;s(τ))≤ I(s⃗,s(τ)). In this work, we go beyond Wiener filtering theory to study systems which have limited resources.
Here, we study the optimal compression of the input signal into the output for prediction under resource constraints. We define the optimal compression as that which maximises the predictive information in the compressed output, I_pred=I(x;s(τ)),
subject to the constraint that the information the output has about the past, I_past=I(x;s⃗), is limited. We will confine ourselves to linear systems, and derive the optimal compression via the information bottleneck method <cit.>.
The information bottleneck method has been applied to a wide range of biological systems. The method has led to a greater understanding of optimal gene expression patterns for fly development and has identified the optimal sensors associated with this process <cit.>. It has been used to analyse retinal ganglion cells <cit.>, finding that the retina provides a nearly optimal representation of the predictive information contained in the visual input <cit.>. A related work calculates whether position or velocity information is more useful to the retina for predicting a moving image <cit.>. Yet, none of these studies has directly considered signals that are corrupted by signal noise. In this work, we will extend the Gaussian information bottleneck presented by Tishby et al. <cit.> to systems with signal noise. Using this approach, we will derive the optimal integration function, which captures the characteristics of the past signal that are most informative about the future signal in the presence of signal noise and a compression constraint.
In section <ref>, we will outline the discrete information bottleneck for systems with Gaussian signal noise. This method combines the information bottleneck method and the Wiener filter, considering both signal noise and compression. Previous attempts to link the information bottleneck method and the Wiener filter have not included signal noise on which the kernel acts <cit.>, without which the Wiener filter does not straightforwardly apply.
In section <ref>, we introduce a discrete Markovian signal modelled with an autoregressive model of order 1. A Markovian signal is the simplest signal in which the past is predictive of the future. We will then add correlated Gaussian noise to that signal, also modelled with an autoregressive model of order 1.
In section <ref>, we address the optimal prediction of this signal in the presence of signal noise and resource constraints. We derive optimal kernels for compressing the past signal and calculate the amount of predictive information these compressed representations contain. We find that the optimal kernel combines a δ peak at zero with a decaying exponential, which allows for time averaging over the signal noise. The relative importance of these two contributions, as well as the integration time (the timescale on which the exponential contribution decays), depends on the compression level. When the resources are limited, and the compression level is high, the δ peak is relatively large, and the integration time is short because the system cannot time average. In the other limit, the system time averages over an optimal integration time, which arises from the interplay between time averaging and the dynamical error or signal distortion <cit.>. Additionally, the relative contribution of the δ peak reduces. Finally, we examine the effect of changing the variance and the correlation time of the noise. When the noise variance is larger, more priority is given to the exponential part of the kernel, and its range, the integration time, also increases because this allows for more time averaging. When the correlation time of the noise is larger, the exponential part of the kernel widens to enable effective time averaging, while the importance of the δ peak increases because time averaging becomes less successful.
In section <ref>, we will compare our optimal kernels with the kernel of a well-known biological signalling motif, the push-pull network <cit.>. Push-pull systems are omnipresent in prokaryotic and eukaryotic cells <cit.>. Examples are phosphorylation cycles, as in MAPK cascades; GTPase cycles, such as the Ras system; and two-component systems, including the chemotaxis system of Escherichia coli. Push–pull networks constitute a simple exponential filter <cit.>, and hence do not contain a contribution with a δ peak implying that the push-pull motif is not optimally compressing the signal.
This work develops a very general method which can be used to study optimal compression for prediction in noisy systems with resource constraints. While we use it to study biological systems, the effect of compression on systems predicting any number of noisy signals, from financial data to robotic sensing data, can be studied using this method.
§ DERIVING THE INFORMATION BOTTLENECK FOR A SYSTEM WITH SIGNAL NOISE
This work seeks to find the optimal scheme for compressing a signal to predict the future given constrained resources. To start, we must define a general process that captures the essence of the problem, and that can be optimised. We consider signals that obey Gaussian statistics, which are corrupted by noise that also obeys Gaussian statistics. It has been shown that the optimal response systems for these signals are linear<cit.>. We, therefore, consider systems that respond linearly over the range of input fluctuations:
x=A⃗(s⃗+η⃗)+ξ.
Here s⃗ is a vector representing a discretised signal trajectory, η⃗ is a vector representing a noise trajectory. The linear kernel A⃗ is a vector and A⃗(s⃗+η⃗) is a scalar representing a weighted average over all timepoints of the signal corrupted by signal noise. The compression noise ξ is a scalar. Thus our compressed output x is a scalar. This output is correlated with the value of the signal in the future, and we are interested in its correlation with the value at one particular timepoint τ into the future, the scalar s(τ).
The information bottleneck finds the optimal kernel A⃗ over the signal trajectory to maximise predictive information while also compressing the signal. This optimal compression is found by maximising the information bottleneck Lagrangian with respect to A⃗:
max_A⃗ℒ=max_A⃗(I_pred-γ I_past).
Here γ is a Lagrange multiplier that dictates the compression level. When this Lagrangian is maximised, the mutual information between the compressed output and the signal value in the future is maximised subject to the constraint that the mutual information between the compressed output and the signal trajectory in the past is limited.
The compression level γ runs from zero to one. Recall that x is a compression of s⃗, so I_pred≤ I_past. Given this, at γ=1 the optimum is I_pred=I_past=0. At lower γ, the system is allowed to increase I_past via A⃗ to make a better prediction of the future. We can rewrite the information bottleneck Lagrangian in terms of entropy using I(a;b)=H(a)-H(a|b). The entropy for a Gaussian system in one dimension is H(Σ_x)=1/2log|Σ_x| where Σ_x is the covariance of x (where x is a vector, Σ_x is a covariance matrix). The conditional entropy is H(Σ_y|z)=1/2log|Σ_y|z| where Σ_y|z is the conditional covariance of variable y given variable z, where one or both of y and z can be vectors. Combining these, we rewrite the information bottleneck Lagrangian as
max_A⃗ℒ =H(x)-H(x|s(τ))-γ H(x)+γ H(x|s⃗)
max_A⃗ℒ =(1-γ)1/2log|Σ_x|-1/2log|Σ_x|s(τ)|+γ1/2log|Σ_x|s⃗|.
In the absence of signal noise, the information bottleneck method expresses Σ_x, Σ_x|s⃗ and Σ_x|s(τ) in terms of A⃗ and then differentiates with respect to A⃗, resulting in an eigenvalue equation <cit.>.
We follow the same method, but because of the addition of signal noise, the definition of the covariances has changed. Given x=A⃗(s⃗+η⃗)+ξ and noting that here there are no correlations between the signal s⃗, the signal noise η⃗, and the compression noise ξ, respectively, we find that:
Σ_x =⟨δ x δ x⟩
=⟨δ (A⃗(s⃗+η⃗)+ξ) δ ((s⃗^T+η⃗^T)A⃗^T+ξ)⟩
=A⃗⟨δs⃗δs⃗^T⟩A⃗^T + A⃗⟨δη⃗δη⃗^T ⟩A⃗^T +⟨δξδξ⟩
=A⃗Σ_s⃗A⃗^T + A⃗Σ_η⃗A⃗^T +Σ_ξ
If s⃗ is known, the remaining uncertainty in x is A⃗η⃗+ξ. Hence, Σ_x|s⃗=A⃗Σ_η⃗A⃗^T+Σ_ξ. Finally, to find Σ_x|s(τ) we use the Schur complement formula: Σ_x|s(τ)=Σ_x-Σ_x s(τ)Σ_s(τ)^-1Σ_s(τ)x. Now Σ_xs(τ)=⟨δ xδ s(τ) ⟩=A⃗⟨δs⃗δ s(τ)⟩=A⃗Σ_s⃗ s(τ) and similarly Σ_s(τ)x=Σ_s(τ)s⃗A⃗^T. Thus
Σ_x|s(τ) =A⃗Σ_s⃗A⃗^T+A⃗Σ_η⃗A⃗^T+Σ_ξ-A⃗Σ_s⃗ s(τ)Σ_s(τ)^-1Σ_s(τ) s⃗A⃗^T
=A⃗(Σ_s⃗-Σ_s⃗ s(τ)Σ_s(τ)^-1Σ_s(τ) s⃗)A⃗^T+A⃗Σ_η⃗A⃗^T+Σ_ξ
=A⃗Σ_s⃗|s(τ)A⃗^T+A⃗Σ_η⃗A⃗^T+Σ_ξ.
The information bottleneck Lagrangian (equation <ref>) can be rewritten as
max_A⃗ℒ =(1-γ)1/2log(A⃗Σ_s⃗A⃗^T+A⃗Σ_η⃗A⃗^T+Σ_ξ)+γ1/2log(A⃗Σ_η⃗A⃗^T+Σ_ξ)
-1/2log(A⃗Σ_s⃗|s(τ)A⃗^T+A⃗Σ_η⃗A⃗^T+Σ_ξ).
Differentiating and setting equal to zero gives
dℒ/dA =(1-γ)A⃗(Σ_s⃗+Σ_η⃗)/A⃗(Σ_s⃗+Σ_η⃗)A⃗^T+Σ_ξ+γA⃗Σ_η⃗/A⃗Σ_η⃗A⃗^T+Σ_ξ
-A⃗(Σ_s⃗|s(τ)+Σ_η⃗)/A⃗Σ_s⃗|s(τ)A⃗^T+A⃗Σ_η⃗A⃗^T+Σ_ξ=0.
Unlike the system without signal noise <cit.>, this equation no longer reduces to an eigenvalue equation and must be solved numerically.
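Because the stationarity condition above has no closed-form solution, one can instead maximise the Gaussian Lagrangian directly over the discrete kernel. The following is a minimal numerical sketch of that idea; the covariance construction follows the exponential forms used later in the text, but all parameter values, the optimizer choice, and the initial guess are illustrative assumptions rather than the procedure used for the figures.

```python
import numpy as np
from scipy.optimize import minimize

# illustrative parameters (assumptions)
dt, N = 0.01, 100            # time step and number of past samples
tau_s, sigma_s2 = 1.0, 1.0   # signal correlation time and variance
tau_e, sigma_e2 = 0.02, 2.0  # signal-noise correlation time and variance
sigma_xi2 = 1.0              # compression-noise variance
tau_fwd = 0.1                # prediction horizon tau
gamma = 0.3                  # compression level

t = np.arange(N) * dt        # lags into the past (0 = current time)
Sig_s = sigma_s2 * np.exp(-np.abs(t[:, None] - t[None, :]) / tau_s)
Sig_e = sigma_e2 * np.exp(-np.abs(t[:, None] - t[None, :]) / tau_e)
Sig_sf = sigma_s2 * np.exp(-(tau_fwd + t) / tau_s)      # cov of past samples with s(tau)
Sig_cond = Sig_s - np.outer(Sig_sf, Sig_sf) / sigma_s2  # Sigma_{s|s(tau)}

def neg_lagrangian(A):
    var_x      = A @ (Sig_s + Sig_e) @ A + sigma_xi2     # Sigma_x
    var_x_s    = A @ Sig_e @ A + sigma_xi2               # Sigma_{x|s}
    var_x_stau = A @ (Sig_cond + Sig_e) @ A + sigma_xi2  # Sigma_{x|s(tau)}
    L = 0.5 * ((1 - gamma) * np.log(var_x)
               - np.log(var_x_stau) + gamma * np.log(var_x_s))
    return -L

res = minimize(neg_lagrangian, x0=np.ones(N) / N, method="L-BFGS-B")
A_opt = res.x                # discrete integration kernel over the past trajectory
```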
Can this method be compared to the Wiener filter? The Wiener filter minimises the mean squared error between the filter output (here x) and the signal at a present or future time (here, the signal at a future point s(τ)). For a Gaussian system, minimising the mean squared error, ⟨(x-s(τ))^2⟩, is equivalent to maximising the mutual information I_pred=I(x;s(τ)). Maximising this mutual information is equivalent to maximising the information bottleneck Lagrangian, ℒ=I_pred-γ I_past, for γ=0. Thus, as γ→0, the optimal kernels found by the information bottleneck method (IBM) converge to the kernel that optimally filters out signal noise, given by the Wiener filter. Convergence to the Wiener filter will generally hold for kernels found using this method. This explicit link has only been made possible by including signal noise on which the kernel acts, a vital component of the Wiener filter problem. Some attempt has been made to link the information bottleneck method and the Wiener filter before <cit.>; however, that work did not include a noise source acted on by the kernel. Since the Wiener filter traditionally mitigates noise via the kernel, this made the comparison between the IBM and the Wiener filter somewhat unclear.
§ A DISCRETE SIGNAL MODELLED BY AN AUTOREGRESSIVE MODEL
Since our method of calculating the information bottleneck is discrete, we need a discrete signal. We consider a discrete Markovian input signal given by an autoregressive model. We choose a Markovian process as the simplest example of a signal in which the future depends on the past and can therefore be predicted. The autoregressive model is a time-series model, where each value is regressed upon previous values in sequence <cit.>. This work will focus on an order 1 autoregressive model with a zero mean, which models a Markovian process:
S_t= ϕ_1 S_t-1 + σ_ARη.
Here ϕ_1 is the weighting of how the previous value affects the current value, η is a white noise process of variance one and mean zero, and σ^2_AR is the variance of the white noise term σ_ARη. The covariance function of an order 1 autoregressive model with zero mean is
⟨δ_S(0)δ_S(t)⟩=σ_AR^2 ϕ_1^|t|/dt/(1-ϕ_1^2).
Here the current time t must be an integer multiple of the timestep dt. At non-integer multiples of dt, the function is not defined. Where the function is defined, we want this discrete covariance function to take the same form as that for a continuous Markovian signal with covariance function ⟨δ_S(t_1)δ_S(t_2)⟩=σ_s^2 e^-1/τ_s |t_1 -t_2|. Here σ_s^2 is the variance of the signal, and τ_s is the correlation time of the signal. To give the autoregressive function the same correlation function, we take ϕ_1 = e^-dt/τ_s and σ_AR^2=σ_s^2(1-e^-2dt/τ_s).
Our signal is corrupted by correlated signaling noise with covariance ⟨δ_η⃗(t_1)δ_η⃗(t_2)⟩=σ_η^2 e^-1/τ_η|t_1 -t_2|, where σ_η^2 is the variance of the noise and τ_η is the correlation time of the noise. This is once again modelled by an autoregressive function with ϕ_1 = e^-dt/τ_η and σ_AR^2=σ_η^2(1-e^-2dt/τ_η).
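As an illustration of this parametrization, the short sketch below draws discrete AR(1) trajectories for the signal and for the correlated signal noise using the mappings ϕ_1 = e^(-dt/τ) and σ_AR^2 = σ^2(1-e^(-2dt/τ)); the numerical values are arbitrary examples, not the values used for the figures.

```python
import numpy as np

def simulate_ar1(n_steps, dt, tau, var, rng):
    """Zero-mean AR(1) trajectory with covariance var * exp(-|t|/tau)."""
    phi = np.exp(-dt / tau)                  # lag-1 weight phi_1
    sigma_ar = np.sqrt(var * (1 - phi**2))   # std of the white-noise increments
    x = np.empty(n_steps)
    x[0] = rng.normal(0.0, np.sqrt(var))     # start in the stationary distribution
    for i in range(1, n_steps):
        x[i] = phi * x[i - 1] + sigma_ar * rng.normal()
    return x

rng = np.random.default_rng(0)
dt = 0.01
signal = simulate_ar1(10_000, dt, tau=1.0,  var=1.0, rng=rng)  # s(t)
noise  = simulate_ar1(10_000, dt, tau=0.02, var=2.0, rng=rng)  # eta(t)
corrupted = signal + noise    # the trajectory the integration kernel acts on
```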
§ OPTIMAL KERNELS AND THE INFORMATION BOTTLENECK LIMITS
How does the optimal kernel compress the trajectory of the signal and the noise into a prediction of the future? How much information about the past and the future is retained in the compressed representation x? In this section, we examine the forms of optimal kernels A⃗ for discrete input signals with Markovian statistics. Once we have found the optimal kernels, we calculate the corresponding predictive information I_pred and past information I_past. Because these kernels are optimal, they will maximise the ratio of the predictive to the past information, I_pred/I_past, for a given system. For illustrative purposes, we will also calculate the predictive information and past information for an arbitrary kernel A⃗, but I_pred/I_past will be lower for such non-optimal kernels.
For a given signal and signal noise, there is an absolute limit on the amount of predictive information that can be extracted from a given amount of past information. Figure <ref>a is a parametric plot of I_pred and I_past for varying values of our compression variable γ (see Eq. <ref>). Here, each curve is the fundamental bound on I_pred for a given I_past, and for a given set of signal statistics: σ_s^2, τ_s, σ_η^2 and τ_η. At the top right of this information bound, the compression level γ is zero, and the system has maximum I_pred and I_past. Moving down the information bound, the system is compressed, and the system has access to reduced I_pred and I_past. Optimal kernels will result in values of I_pred and I_past on the information bound, while arbitrary non-optimal kernels acting on signals with the same statistics will result in values of I_pred and I_past below these bounds. The curves for σ_η^2=0 have been calculated before <cit.>, but the other limits are new. We see that increasing the variance of the noise σ_η^2 reduces the amount of predictive information and past information a system can extract at a given γ (fig. <ref>a). At high I_pred and I_past, the predictive information extracted from a given amount of past information is indeed significantly lower when σ_η^2 is higher. Moreover, the maximum I_pred also decreases as σ_η^2 becomes larger. Perhaps surprisingly, for lower I_pred and I_past, the predictive information that can be extracted for a given amount of past information is nearly independent of σ_η^2. Similarly, for σ_η^2>0, increasing the correlation time of the noise also reduces the amount of predictive information and past information the system can extract from the signal trajectory. The ratio I_pred/I_past is once again reduced with increased correlation time for high I_past, while the ratio is constant at low I_past (fig. <ref>b).
Examining the optimal kernel provides information about which characteristics of the signal are most important for predicting the future of the signal. As shown in fig. <ref>a, b and c, the optimal kernels take the form of a δ function at t=0 with a decaying exponential for t<0:
A_ opt(-t)=a_ opt(b_ optδ(t)+(1-b_ opt)/τ_A^ opt e^-t/τ_A^ opt).
The δ peak prioritises the signal's current value, but the exponential function averages over the trajectory, with lower weight given to time points further back into the past. Here a_ opt is the amplitude of the entire kernel, b_ opt is the weighting of the δ peak relative to the exponential part of the kernel, and τ_A^ opt is the decay rate of the kernel. We normalise the exponential part of the kernel with the decay rate. We note that while we write down a continuous form of the kernel, the kernel itself is discrete and only defined at integer multiples of the timestep dt.
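For concreteness, the discretised form of this kernel can be written as in the short sketch below, where the δ contribution is represented as an extra weight of b/dt on the t=0 bin so that its discrete integral equals b; this representation of the δ function and the parameter values are illustrative assumptions.

```python
import numpy as np

def discrete_kernel(a, b, tau_A, dt, n_points):
    """Discretised A(-t) = a * (b*delta(t) + (1-b)/tau_A * exp(-t/tau_A)),
    sampled at t = 0, dt, ..., (n_points-1)*dt."""
    t = np.arange(n_points) * dt
    kernel = a * (1 - b) / tau_A * np.exp(-t / tau_A)   # exponential part
    kernel[0] += a * b / dt                             # delta peak as weight b/dt in the t=0 bin
    return kernel

A = discrete_kernel(a=1.0, b=0.3, tau_A=0.035, dt=0.01, n_points=100)
print(A.sum() * dt)   # roughly equal to a, up to discretisation error
```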
The shape of the kernel changes along the information bound. Consider the system where σ_η^2=2 and τ_η=0.02s (fig. <ref>a, middle blue line, fig. <ref>b, middle red line). At the top right of the information bound, the compression level γ=0, the system is uncompressed and can access maximum I_pred and I_past. In this limit, the kernel is a slowly decaying exponential supplemented by a δ function (fig.<ref>c, dashed line). Initially, as we move down the information bound, increasing γ, the width and relative height of the exponential function decrease until the function becomes a δ function (fig.<ref>c, light yellow line). Only then does the amplitude of the whole kernel decrease to zero.
To understand the optimal shape of the kernel, we need to understand the origins of the fluctuations in the output because these fluctuations limit the accuracy of prediction. Two of these we have already discussed: signal noise and compression noise, modelled by η and ξ in Eq. <ref>, respectively. Signal noise causes errors in the signal at the point of detection. Compression noise corrupts the output of the compression process. The final source of fluctuations in the output is known as the “dynamical error” <cit.>. It arises from time integration. Due to time integration, the output depends on input values further back into the past, which are less correlated with the current input <cit.>.
To understand how a system can mitigate these sources of error, we note that the compressed output is given by x=A⃗(s⃗+η⃗)+ξ. Increasing the amplitude of the kernel can mitigate the effect of the compression noise ξ by amplifying the signal over the compression noise. Changing the amplitude cannot reduce the effect of the signal noise η⃗ because the signal and noise will be amplified together. Signal noise must be mitigated by time averaging. By using more independent time points further into the past, the system can better estimate the current value of the signal. The integration time τ_A sets the width of the kernel and the window over which time averaging is performed. However, using time points further back into the past introduces dynamical error, which is mitigated by prioritising more recent values over values further into the past. Mitigating signal noise and dynamical error thus put opposing requirements on the integration time, leading to an optimum in τ_A <cit.>.
We next ask how varying the key parameters of the kernel: a, b and τ_A, affects these error sources. Answering this question will clarify how these parameters affect the past and predictive information and , which in turn helps us understand how the optimal kernel's shape varies along the information bounds shown in fig. <ref>a and b. We generate a set of non-optimal kernels for a given signal, A(t)=a(bδ(t)+(1-b)/τ_A e^-1/τ_At), by varying a, b and τ_A away from the optimum. To separate the effects of varying these three quantities, we fix two of the three quantities a=a_ opt, b=b_ opt and τ_A=τ_A^ opt, and vary the other.
How does varying the amplitude, a, allow the system to mitigate our three error types? Recall the expression for the compressed output: x=A⃗(s⃗+η⃗)+ξ. When the amplitude of A⃗, a, is small, the compression noise ξ dominates the signal. In this case, both I_pred and I_past are small (fig. <ref>a). As a increases, both I_pred and I_past increase as the kernel amplifies the corrupted signal over the compression noise. Eventually, the compression noise becomes negligible compared to the propagated input noise, and I_pred and I_past plateau as a function of a. Indeed, while changing a can lift the signal above the compression noise ξ, it cannot mitigate the effect of signal noise η because the kernel amplifies the signal and input noise together. Similarly, as increasing the amplitude of the kernel does not affect how different points in the trajectory are weighted relatively in the kernel, it cannot decrease dynamical error.
Since varying the amplitude cannot mitigate signal noise, it can only be mitigated by varying the relative height and width of the exponential part of the kernel. The exponential part of the kernel takes a non-uniform time average over those time points in the past, mitigating signal noise. Consider first the integration time of the kernel, set by τ_A. As τ_A increases and the kernel widens, both I_pred and I_past initially increase as the system averages out the signal noise (fig. <ref>c). They then peak at two different optimal integration times, which arise from the trade-off between minimizing the dynamical error and time averaging<cit.>.
We next ask how I_pred and I_past change with the relative importance of the δ peak: b. Initially, I_pred and I_past decrease very slowly as the relative importance of the δ peak increases. As b approaches one, both quantities drop sharply. For all values of the compression level γ, having a δ peak decreases the amount of past information the system obtains with the kernel (fig. <ref>a). For all but the lowest values of the compression level γ, having a δ peak also decreases the amount of predictive information the system obtains with the kernel (fig. <ref>b). Only in the zero compression limit does adding a δ peak increase I_pred; in the SI, we prove that this is true even as dt→ 0. In this limit, the system finds the optimal trade-off between minimising signal noise via a wide integration kernel and minimising dynamical error via a δ peak (the compression noise is negligible). The δ peak emphasises the most recent signal value, the signal value most correlated with the future point the system is trying to predict. In this limit, I_pred peaks at b=b_ wiener (fig. <ref>d, dashed lines).
Since both I_pred and I_past (except for the uncompressed limit) decrease upon adding a δ peak, a pertinent question that arises is why the optimal kernel of the system at the information bound features a δ peak at all. The answer is that I_past decreases more than I_pred upon adding a δ peak, so that the ratio I_pred/I_past increases. This effect is strongest in the compressed regime (fig. 5c), which explains why the δ peak is most pronounced in the high-γ regime of strong compression.
We can now understand the shape of the kernel along the information bottleneck curves (fig. <ref>). We start in the highly compressed region where
I_pred and I_past are low, because the amplitude of the kernel a is low (Fig <ref>a) and the compression noise is relatively large. Because of the latter, the effect of the signal noise is relatively small. This means that time averaging is not important. The optimal integration time will be short because that minimizes the dynamical error (Fig <ref>c). The δ term will be relatively large (fig. <ref>b) because increasing the δ peak maximises the objective function by decreasing I_past more than I_pred.
To increase I_pred and I_past (corresponding to decreasing γ), the amplitude of the kernel must rise so that the signal is lifted above the compression noise (fig. <ref>). Because the kernel acts on both the signal and the signalling noise but not the compression noise, this inevitably makes the effect of the signal noise stronger than the compression noise. This means that time averaging becomes more important, which in turn necessitates a longer integration time (fig. <ref>c). Since increasing τ_A also increases the magnitude of the kernel, amplifying the signal and signalling noise over the compression noise, the relative importance of the δ-peak contribution falls.
In the regime of high I_pred and I_past (low γ), the compression noise has become negligible, and the output noise is caused by a combination of signal noise and dynamical error.
The optimal integration time in the uncompressed limit arises from the trade-off between the two error types. Similarly, since adding a δ peak reduces dynamical error, this trade-off also sets the optimal relative height of the exponential part of the kernel and the δ peak.
Because the compression noise is negligible in this regime, the information quantities become insensitive to the overall amplitude of the kernel; hence the numerical procedure no longer finds a unique solution for the amplitude and only ensures that it is large enough. In the limit γ→ 0, the kernel becomes identical to that given by the Wiener filter, as we show in Appendix <ref>. The Wiener filter has been used to analyse optimal kernels for Markovian signals <cit.>, although that study did not address the effect of correlations in the noise.
Now that we understand the optimal shape of the integration kernel, we are in a position to understand the effects of varying the magnitude and the correlation time of the input noise, σ^2_η and τ_η, respectively. The correlation time of the exponential part of the kernel increases and the relative weight of the δ peak decreases with σ_η^2 because more signal noise requires more time averaging (fig. <ref>a and b). In the absence of noise, σ_η^2=0, the kernel takes the form of a δ function because, for a Markovian signal, all the predictive power is stored in the current signal value. Indeed, in all of our systems, time averaging is performed to better estimate the current signal, which is then maximally predictive of the future signal.
The kernel also changes with the noise's correlation time. To mitigate the effects of correlated noise, the system must time average over periods longer than the correlation time of the noise, but shorter than the correlation time of the signal: τ_s>τ_A>τ_η. Initially, as τ_η increases, the width of the exponential part of the kernel τ_A increases (fig. <ref>c). As τ_η→τ_s, the width of the kernel decreases because the system can no longer average out the noise without averaging out the signal. This also explains why the relative importance of the exponential filter decreases and that of the δ peak increases as τ_η→τ_s (fig. <ref>d). Conversely, decreasing the input correlation time prioritises the exponential filter. Indeed, Becker et al. derived using Wiener filtering theory the optimal integration function for signals with δ correlated input noise, corresponding to τ_η→ 0, and found that the optimal kernel is a simple exponential filter <cit.>. Lastly, we note that when the correlation time of the signal noise becomes comparable to the correlation time of the signal itself, τ_η∼τ_s, the system cannot time average out the signal noise without time averaging out the signal itself. The system cannot do better than taking an instantaneous kernel, and the relative height of the exponential part of the kernel goes to zero (fig. <ref>d). This behaviour reflects that observed for cellular signaling systems <cit.>.
§ THE PUSH-PULL NETWORK
Having calculated the properties of optimal kernels, we now wish to compare our results to a standard signal-processing motif in biology: the push-pull motif. The cell must detect and predict the concentration of ligand molecules in the environment. The push-pull motif consists of receptors on the surface of a cell that detect the concentration of ligand molecules in the environment by binding to them (fig.<ref>a). Inside the cell, output molecules diffuse in and out of contact with the receptors. Output molecules in contact with bound receptors are activated, using ATP to drive the reaction. These molecules then spontaneously deactivate over time. The number of activated output molecules reflects the number of bound receptors, allowing the cell to estimate the concentration of ligand molecules outside the cell. Intrinsic to the push-pull network is correlated signal noise caused by the binding of ligand molecules to receptors. We are interested in whether the push-pull kernel can mitigate this noise.
The push-pull kernel is an exponential function: A(-t)∝ e^-t/τ_A <cit.>. Here, τ_A is the integration time of the kernel. In the supplementary information, we extract the variance of the signal noise from the push-pull system (SI, equation <ref>), which we simplify to:
σ_η^2∝1/R_T
to aid understanding.
Here R_T is the total number of receptors. Additionally, for the push-pull motif, resources, and therefore compression level, are dictated by the number of receptors R_T and the number of output molecules X_T.
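To illustrate what this motif computes, the sketch below applies an exponential push-pull kernel A(-t) ∝ e^(-t/τ_A) to a noise-corrupted input as a discrete convolution over the past. The white-noise stand-in input, the gain, and the integration time are illustrative assumptions; the full biophysical model with receptor-binding and activation noise is given in the SI.

```python
import numpy as np

def pushpull_output(corrupted, dt, tau_A, gain=1.0):
    """Filter a noise-corrupted input with the exponential push-pull kernel
    A(-t) = gain * exp(-t/tau_A), as a discrete (causal) convolution over the past."""
    n_lags = int(10 * tau_A / dt)               # truncate the kernel tail
    lags = np.arange(n_lags) * dt
    kernel = gain * np.exp(-lags / tau_A) * dt  # weight for each past sample
    return np.convolve(corrupted, kernel)[: len(corrupted)]

rng = np.random.default_rng(1)
dt = 0.01
corrupted = rng.normal(size=5_000)              # stand-in for s(t) + eta(t)
x_star = pushpull_output(corrupted, dt, tau_A=0.05)
```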
In what follows, we increase the compression level by reducing X_T while keeping R_T constant to keep the signal noise variance constant. Fig. <ref>b shows that the information curves traced by the push-pull kernel (dashed lines) fall below those of the optimal kernels (solid line). However, the difference is small, hinting that the push-pull network is nearly optimal. To analyze this further, we study the integration kernels.
We find that, just like the theoretically optimal kernels, the optimised push-pull kernels widen with the variance of the signal noise and narrow with compression. Fig. <ref>a shows that when R_T decreases, which increases the signal noise variance, the push-pull kernels widen. In contrast, when X_T decreases, which increases the compression level, the kernels narrow (fig. <ref>b). Thus, the push-pull kernel uses time averaging to mitigate signal noise, and the ability to time average is reduced by compression, like the optimal kernels (fig. <ref>).
In ref. <cit.>, the authors observe that cells using the push-pull motif can reduce the sensing error by either increasing the number of receptors R_T or by taking more measurements per receptor via the mechanism of time integration (increasing τ_A). These two statements can now be directly related to signal noise. Increasing the number of receptors reduces the signal noise σ_η^2, while time integration corresponds to widening the kernel to average out signal noise. In the same study, the authors observe an optimal integration time which increases as the number of receptors decreases. Moreover, they found that the optimal integration time decreases for larger compression noise, i.e., smaller X_T. Our results corroborate and explain these findings (see figs. <ref>a and <ref>a).
Additionally, in ref. <cit.>, the authors observe that cells using the push-pull motif respond to an increasing noise correlation time by initially increasing the integration time of the kernel (fig. <ref>c). Then, as τ_η approaches τ_s, time integration averages out the signal as well as the noise, lowering the utility of time integration as a strategy. As τ_η increases further, the optimal integration time gradually decreases back to zero. Here, the push-pull network's best strategy is merely to capture the current value of the signal, despite noise corruption. In the limit that X_T is so large as to be effectively infinite, rendering the compression noise negligible, the integration time decreases slowly beyond the peak. For a smaller X_T where compression noise is finite, this drop is sharp (SI, fig. <ref>), mirroring the equivalent drop found for our optimal system (fig. <ref>d).
The push-pull kernel does not and cannot manifest a δ peak. The system instead has an exponentially decaying kernel alone. As τ_η→ 0, the theoretical optimal kernels also tend towards an exponentially decaying kernel without a δ peak (fig. <ref>a). We suggest, therefore, that the push-pull kernel is optimised for signal noise with a short correlation time. Nonetheless, while the system cannot replicate the δ peak, the push-pull kernel still attempts to mitigate correlated signal noise in other ways. Specifically, the kernel widens as τ_η increases (fig. <ref>c), as observed for the optimal kernels (fig. <ref>c).
What does the push-pull system lose by not being able to implement a δ peak? For all but the lowest values of the compression level γ, the kernel without a δ peak collects more predictive and past information (fig. <ref>a and b) than a kernel with a δ peak. The δ peak emerges only when the predictive information is maximized under the specific constraint of limiting past information. As discussed in <cit.>, maximizing predictive information while constraining past information will yield systems that differ from those that maximize predictive information under the constraint of resource cost in terms of protein copies and energy. It is conceivable that the latter would not yield a δ peak.
A biological system could hypothetically create a network capable of implementing a kernel much closer in shape to our theoretically optimal kernels. Creating an additional δ peak would require coupling two push-pull motifs in parallel, with different turnover rates of the readout <cit.>. The faster push-pull motif would provide a sharp spike in the kernel close to t=0s, which would approximate a δ function, while the slower one would act as the exponentially decaying part of the kernel. A motif such as this is resource intensive, so the limited potential advantages may explain why two such parallel push-pull networks have not yet been observed in cellular systems.
§ CONCLUSIONS
Time averaging is essential for accurately detecting and predicting the true values of signals corrupted by signal noise. An optimal system will vary the width of the kernel to compensate for different characteristics of this signal noise, widening it for a greater variance or longer correlation times and shortening it if the opposite is true. Where the noise characteristics demand it, most notably when the correlation time of the noise is long, kernels will widen to the extent that dynamical error becomes a concern for the system. In this case, an optimal system will add a δ peak in the kernel at the current time. The push-pull kernel replicates the optimal kernel for systems where the noise correlation time is very short but otherwise fails as it cannot replicate a δ peak.
Suppose a system has finite resources for prediction. In that case, its ability to time average is reduced—both the theoretically optimal kernels and those of the push-pull motif narrow as their resources are restricted at fixed signal noise. With sufficient compression, the optimal kernels will only collect the most recent time point, omitting time averaging completely. In such cases, the system cannot mitigate the effect of signal noise at all.
We have combined the information bottleneck and the Wiener filter to study these systems. This technique can be applied to more complex signals, such as those described by the generalised Langevin equation <cit.>. Studying how noise corruption affects techniques for processing more complex signals is the subject of further work.
§ SUPPLEMENTARY MATERIAL
§ THE KERNEL SHAPE (BUT NOT AMPLITUDE) AND INFORMATION BOTTLENECK CURVES ARE INDEPENDENT OF COMPRESSION NOISE
In figure <ref>, we compare the rescaled kernels (a) and information curves (b) for a system with σ_ξ^2=1 and σ_ξ^2=100 to see that they are identical. The kernels will be amplified for higher σ_ξ^2, but the shape will not change.
§ THE WIENER FILTER IS IDENTICAL TO THAT FOUND BY THE IBM WHEN Γ→0.
The discrete Wiener filter minimises the mean squared error between the output x and the future signal value s(τ) for a signal with δ x=A⃗(δs⃗+ δη⃗) and reduces to
A =Σ_s⃗ s(τ)(Σ_s⃗+Σ_η⃗)^-1.
The discrete Wiener filter minimises the mean squared error between the filtered signal x and the value of the signal at some future timepoint s(τ) for a signal which has been corrupted by noise δ x=A⃗(δs⃗+ δη⃗).
E[ϵ^2] =⟨(δ s(τ)-A⃗(δs⃗+δη⃗))^2⟩
=⟨δ s(τ) δ s(τ) ⟩-2A⃗⟨δ s(τ) δs⃗⟩+ A⃗⟨δs⃗δs⃗^T ⟩A⃗^T+A⃗⟨δη⃗δη⃗^T⟩A⃗^T
We convert Σ_xy=⟨δ x δ y ⟩ and Σ_x=⟨δ x δ x ⟩. Differentiating with respect to A and equating to zero gives
d E[ϵ^2]/dA =-2Σ_s(τ) s⃗+ 2A⃗Σ_s⃗ +2A⃗Σ_η⃗=0
A =Σ_s(τ)s⃗(Σ_s⃗+Σ_η⃗)^-1.
As shown in figure <ref>, the kernel obtained using this method has an identical shape to that found using the IBM with noise as γ→0.
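For reference, the closed-form expression above can be evaluated directly once the covariance matrices are specified. The following is a small numerical sketch using the exponential covariances of the main text; the parameter values are illustrative assumptions.

```python
import numpy as np

# illustrative parameters (assumptions)
dt, N = 0.01, 100
tau_s, sigma_s2 = 1.0, 1.0   # signal correlation time and variance
tau_e, sigma_e2 = 0.02, 2.0  # noise correlation time and variance
tau_fwd = 0.1                # prediction horizon tau

t = np.arange(N) * dt        # lags into the past (0 = current time)
Sig_s = sigma_s2 * np.exp(-np.abs(t[:, None] - t[None, :]) / tau_s)
Sig_e = sigma_e2 * np.exp(-np.abs(t[:, None] - t[None, :]) / tau_e)
Sig_sf = sigma_s2 * np.exp(-(tau_fwd + t) / tau_s)   # cov of past samples with s(tau)

# A = Sigma_{s(tau) s} (Sigma_s + Sigma_eta)^{-1}; the matrices are symmetric, so solve directly
A_wiener = np.linalg.solve(Sig_s + Sig_e, Sig_sf)
```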
§ FINDING THE LIMITS IN WHICH I_PRED IS GREATER WHEN THE KERNEL TAKES THE FORM A(T)=A_δδ(T)+A_EXP E^-T/τ_A AS OPPOSED TO A(T)=A_EXP E^-T/τ_A
Consider I_pred in the continuous form for correlated noise: I_pred=1/2log((ρ_s+ρ_η+ρ_ξ)/(ρ_s+ρ_η+ρ_ξ-ρ_s,s(τ)))=1/2log(1+ρ_s,s(τ)/(ρ_s+ρ_η+ρ_ξ-ρ_s,s(τ))) where
ρ_s =σ_s^2 ∫^0_-∞∫^0_-∞A(t-s)A(t-s') e^-1/τ_s|s-s'| ds ds',
ρ_s s(τ) =σ_s^2 (∫^0_-∞A(t-s) e^-1/τ_s|τ-s| ds)^2,
ρ_η =σ_η^2 ∫^0_-∞∫^0_-∞A(t-s)A(t-s') e^-1/τ_η|s-s'| ds ds',
ρ_ξ =σ_ξ^2.
The optimal kernel shape for this system has the form A(t)=A_δδ(t)+A_ exp/τ_Ae^-1/τ_A t. Completing the integrals gives
ρ_s =A_ exp^2σ_s^2/τ_A+2A_δA_ expσ_s^2/τ_A+A_δ^2σ_s^2(1/τ_A+1/τ_s)/(1/τ_A+1/τ_s),
ρ_s s(τ) =σ_s^2e^2τ/τ_s(A_ exp/τ_A+A_δ(1/τ_s+1/τ_A))^2/(1/τ_s+1/τ_A)^2 ,
ρ_η =A_ exp^2/τ_Aσ_η^2+2A_δA_ exp/τ_Aσ_η^2+A_δ^2σ_η^2(1/τ_A+1/τ_η)/(1/τ_A+1/τ_η),
ρ_ξ =σ_ξ^2.
In figure <ref>, we plot I_pred against A_δ. In order to be able to plot it, we extract the optimal kernel integration time τ_A^ opt=0.03476s from the discrete system, and set A_ exp=100, an arbitrarily high value at which the effect of the compression noise ξ becomes negligible. We see that, like in the discrete case (fig. <ref>c), the predictive information increases to a peak as A_δ increases, giving a finite, non-zero A_δ as the optimal value for maximising I_pred.
§ DISCRETISING THE PUSH-PULL NETWORK
To understand our two systems, we compare the discrete covariance function of the IBM;
Σ_x=A⃗Σ_s⃗A⃗^T + A⃗Σ_η⃗A⃗^T +Σ_ξ,
to the full continuous-time covariances of the push-pull motif from <cit.> to extract the individual covariances.
The push-pull motif consists of receptors on the surface of a cell that detect the concentration of ligand molecules in the environment l by binding to them (fig.<ref>a). Inside the cell, X_T output molecules diffuse in and out of contact with the R_T receptors. Output molecules in contact with bound receptors are activated, using ATP to drive the reaction. These molecules then spontaneously deactivate over time. At steady state, a fraction ϕ_l of the receptors is bound and a fraction ϕ_x of the output molecules is activated. The deviations of these quantities from their average are modelled with the linear noise approximation as:
dδ RL(t)/dt = γδ l(t)-δ RL(t)/τ_η+η_RL(t)
dδ x^*(t)/dt = ρδ RL(t)-δ x^*(t)/τ_A+η_x^*(t)
The covariances of the concentration of ligand molecules ⟨δ_l(t_1) δ_l(t_2)⟩, the receptor-ligand binding noise ⟨δ_η_RL(t_1)δ_η_RL(t_2)⟩, and the activation noise ⟨δ_η_x^*(t_1)δ_ηx^*(t_2)⟩ are given by
⟨δ_l(t_1) δ_l(t_2)⟩=σ_s^2 e^-1/τ_s |t_1-t_2|,
⟨δ_η_RL(t_1)δ_η_RL(t_2)⟩=2 R_T ϕ_l (1-ϕ_l)1/τ_ηδ(t_1-t_2),
⟨δ_η_x^*(t_1)δ_η_x^*(t_2)⟩=2X_Tϕ_x(1-ϕ_x)1/τ_Aδ(t_1-t_2),
where ρ=ϕ_x(1-ϕ_x)X_T/τ_Aϕ_l R_T, γ= ϕ_l (1-ϕ_l) R_T/τ_η c are constants related to the push-pull network and c is the average ligand concentration. The covariances of the number of ligand-bound receptors and activated molecules are then:
⟨δ_RL(t_1) δ_RL(t_2)⟩=
∫_-∞^t_1∫_-∞^t_2(γ^2 ⟨δ_l(t_1') δ_l(t_2')⟩+⟨δ_η_RL(t_1') δ_η_RL(t_2')⟩) e^-1/τ_η(t_1-t_1') e^-1/τ_η(t_2-t_2') dt_2' dt_1'
σ^2_x^*=
∫_-∞^0∫_-∞^0(ρ^2 ⟨δ_RL(t_1') δ_RL(t_2')⟩+⟨δ_η_x^*(t_1') δ_η_x^*(t_2')⟩) e^-1/τ_A(-t_1') e^-1/τ_A(-t_2') dt_2' dt_1'
Substituting equations <ref>-<ref> into equation <ref> gives:
⟨δ_RL(t_1) δ_RL(t_2)⟩=
+∫_-∞^t_1∫_-∞^t_2γ^2 σ_s^2 e^-1/τ_s |t_1'-t_2'| e^-1/τ_η(t_1-t_1') e^-1/τ_η(t_2-t_2') dt_2' dt_1'
+∫_-∞^t_1∫_-∞^t_22 R_T ϕ_l (1-ϕ_l)1/τ_ηδ(t_1'-t_2') e^-1/τ_η(t_1-t_1') e^-1/τ_η(t_2-t_2') dt_2' dt_1'.
Completing the integrals and simplifying gives:
⟨δ_RL(t_1) δ_RL(t_2)⟩=
γ^2 σ_s^2(e^-|t_1-t_2|/τ_s-τ_η/τ_se^-|t_1-t_2|/τ_η)/1/τ_η^2-1/τ_s^2+R_T ϕ_l (1-ϕ_l) e^-|t_1-t_2|/τ_η
RL is now the signal plus signal noise the system acts on. Plugging this expression into eq. <ref> gives:
σ^2_x^*=
∫_-∞^0∫_-∞^0(σ_s^2(e^-|t_1'-t_2'|/τ_s-τ_η/τ_se^-|t_1'-t_2'|/τ_η)+(1/τ_η^2-1/τ_s^2)/γ^2R_T ϕ_l (1-ϕ_l) e^-|t_1'-t_2'|/τ_η+(1/τ_η^2-1/τ_s^2)/ρ^2γ^22X_Tϕ_x(1-ϕ_x)1/τ_Aδ(t_1'-t_2'))
× e^-1/τ_A(t_1-t_1') e^-1/τ_A(t_2-t_2')ρ^2γ^2/(1/τ_η^2-1/τ_s^2) dt_2' dt_1'
we next take a factor of ρ^2γ^2/(1/τ_η^2-1/τ_s^2) outside the integral and substitute ρ=ϕ_x(1-ϕ_x)X_T/τ_Aϕ_l R_T, γ= ϕ_l (1-ϕ_l) R_T/τ_η c inside the integrals in eq. <ref>. Taking out the factor allows us to highlight the relative importance of the signal, signal noise and compression noise. This process gives:
σ^2_x^*=
∫_-∞^t_1∫_-∞^t_2(σ_s^2(e^-|t_1'-t_2'|/τ_s-τ_η/τ_se^-|t_1'-t_2'|/τ_η)+τ_η^2 c^2/ϕ_l (1-ϕ_l) R_T(1/τ_η^2-1/τ_s^2) e^-|t_1'-t_2'|/τ_η+(1/τ_η^2-1/τ_s^2)2τ_η^2 c^2τ_A/ϕ_x(1-ϕ_x)X_T (1-ϕ_l)^2 δ(t_1'-t_2'))
× e^-1/τ_A(t_1-t_1') e^-1/τ_A(t_2-t_2')ρ^2γ^2/(1/τ_η^2-1/τ_s^2) dt_2' dt_1'
In order to compare our system to the discrete optimal IBM for the autoregressive signal, we must discretise the system. This way, we can identify the relative importance of the signal, signal noise and compression noise in the discrete case. Discretising the integrals gives:
Σ^2_x^*=
∑_i=0^T/Δ t∑_j=0^T/Δ t(σ_s^2(e^-|i-j|Δ t/τ_s-τ_η/τ_se^-|i-j|Δ t/τ_η)Δ tΔ t+τ_η^2 c^2/ϕ_l (1-ϕ_l) R_T(1/τ_η^2-1/τ_s^2) e^-|i-j| Δ t/τ_ηΔ tΔ t+(1/τ_η^2-1/τ_s^2)2τ_η^2 c^2τ_A/ϕ_x(1-ϕ_x)X_T (1-ϕ_l)^2 δ_ijΔ t)
× e^-|N-i|Δ t/τ_A e^-|N-j|Δ t/τ_Aρ^2γ^2/(1/τ_η^2-1/τ_s^2)
Summing over the Kronecker δ, δ_ij, gives:
Σ^2_x^*=
∑_i=0^T/Δ t∑_j=0^T/Δ t(σ_s^2(e^-|i-j|Δ t/τ_s-τ_η/τ_se^-|i-j|Δ t/τ_η)Δ tΔ t+τ_η^2 c^2/ϕ_l (1-ϕ_l) R_T(1/τ_η^2-1/τ_s^2) e^-|i-j| Δ t/τ_ηΔ tΔ t) e^-|N-i|Δ t/τ_A e^-|N-j|Δ t/τ_Aρ^2γ^2/(1/τ_η^2-1/τ_s^2)
+ ∑_i=0^T/Δ t((1/τ_η^2-1/τ_s^2)2τ_η^2 c^2τ_A/ϕ_x(1-ϕ_x)X_T (1-ϕ_l)^2 Δ t) e^-2|N-i|Δ t/τ_Aρ^2γ^2/(1/τ_η^2-1/τ_s^2)
Next we take the limit τ_s>>τ_η.
Σ^2_x^*=
∑_i=0^T/Δ t∑_j=0^T/Δ t(σ_s^2(e^-|i-j|Δ t/τ_s)+c^2/ϕ_l (1-ϕ_l) R_T e^-|i-j| Δ t/τ_η)Δ tΔ t e^-|N-i|Δ t/τ_A e^-|N-j|Δ t/τ_Aρ^2γ^2/(1/τ_η^2-1/τ_s^2)
+ ∑_i=0^T/Δ t2 c^2τ_A/ϕ_x(1-ϕ_x)X_T (1-ϕ_l)^2 Δ t e^-2|N-i|Δ t/τ_Aρ^2γ^2/(1/τ_η^2-1/τ_s^2)
= ∑_i=0^T/Δ t∑_j=0^T/Δ t(σ_s^2 e^-|i-j|Δ t/τ_s+σ_η^2 e^-|i-j| Δ t/τ_η) Δ tΔ t A(|N-i|Δ t) A(|N-j|Δ t)+∑_i=0^T/Δ tσ_ξ^2Δ t A(|N-i|Δ t)^2
where
A(t)= √(ρ^2γ^2/(1/τ_η^2-1/τ_s^2))e^-t/τ_A
,
σ_η^2=c^2/ϕ_l (1-ϕ_l) R_T
and
σ_ξ^2=2 c^2τ_A/ϕ_x(1-ϕ_x)X_T (1-ϕ_l)^2
.
Similarly, using the Schur complement formula:
Σ^2_x^*|s(τ)=
∑_i=0^T/Δ t∑_j=0^T/Δ t(σ_s^2 e^-|i-j|Δ t/τ_s+σ_η^2 e^-|i-j| Δ t/τ_η) Δ tΔ t A(|N-i|Δ t) A(|N-j|Δ t)+∑_i=0^T/Δ tσ_ξ^2Δ t A(|N-i|Δ t)^2
-σ_s^2 (∑_i=0^T/Δ tA(|N-i| Δ t) e^-1/τ_s|τ+ |N-i| Δ t|Δ t)^2.
Finally, the variance given the signal trajectory is:
Σ^2_x^*|s=
∑_i=0^T/Δ t∑_j=0^T/Δ tσ_η^2 e^-|i-j| Δ t/τ_ηΔ tΔ t A(|N-i|Δ t) A(|N-j|Δ t)+∑_i=0^T/Δ tσ_ξ^2Δ t A(|N-i|Δ t).
Now I_past=1/2log(Σ^2_x^*/Σ^2_x^*|s) and I_pred=1/2log(Σ^2_x^*/Σ^2_x^*|s(τ)).
§ THE OPTIMAL INTEGRATION TIME OF THE PUSH-PULL NETWORK FOR SMALL X_T
In ref. <cit.>, the authors observe that cells using the push-pull motif respond to an increasing noise correlation time by initially increasing the integration time of the kernel. Then, as τ_η approaches τ_s, time integration averages out the signal as well as the noise, lowering the utility of time integration as a strategy. As τ_η increases further, the optimal integration time drops sharply back to zero (fig. <ref>c), mirroring the equivalent drop found for our optimal system (fig. <ref>d). Here, the push-pull network's best strategy is merely to capture the current value of the signal, despite noise corruption.
|
http://arxiv.org/abs/2307.01622v2
|
20230704101816
|
Renewable energy management in smart home environment via forecast embedded scheduling based on Recurrent Trend Predictive Neural Network
|
[
"Mert Nakıp",
"Onur Çopur",
"Emrah Biyik",
"Cüneyt Güzeliş"
] |
cs.LG
|
[
"cs.LG",
"cs.SY",
"eess.SY"
] |
inst1]Mert Nakıp[cor1]
[email protected]
[inst1]Institute of Theoretical and Applied Informatics,
Polish Academy of Sciences (PAN), 44–100
Gliwice,
Poland
inst2]Onur Çopur
[email protected]
[inst2]Prime Vision, 2600 JA, Delft,Netherlands
inst3]Emrah Biyik
[email protected]
[inst3]Department of Energy Systems Engineering, Yaşar University, 35100, Izmir, Turkey
inst4]Cüneyt Güzeliş
[email protected]
[inst4]Department of Electrical and Electronics Engineering,
Yaşar University, 35100, Izmir, Turkey
[cor1]Corresponding author
The final version of this preprint is published at Applied Energy https://doi.org/10.1016/j.apenergy.2023.121014.
Smart home energy management systems help the distribution grid operate more efficiently and reliably, and enable effective penetration of distributed renewable energy sources. These systems rely on robust forecasting, optimization, and control/scheduling algorithms that can handle the uncertain nature of demand and renewable generation. This paper proposes an advanced ML algorithm, called Recurrent Trend Predictive Neural Network based Forecast Embedded Scheduling (rTPNN-FES), to provide efficient residential demand control. rTPNN-FES is a novel neural network architecture that simultaneously forecasts renewable energy generation and schedules household appliances. By its embedded structure, rTPNN-FES eliminates the utilization of separate algorithms for forecasting and scheduling and generates a schedule that is robust against forecasting errors. This paper also evaluates the performance of the proposed algorithm for an IoT-enabled smart home. The evaluation results reveal that rTPNN-FES provides near-optimal scheduling 37.5 times faster than the optimization while outperforming state-of-the-art forecasting techniques.
energy management, forecasting, scheduling, neural networks, recurrent trend predictive neural network
Renewable Energy Management in Smart Home Environment via Forecast Embedded Scheduling based on Recurrent Trend Predictive Neural Network
[
August 1, 2023
=========================================================================================================================================
§ INTRODUCTION
Residential loads account for a significant portion of the demand on the power system. Therefore, intelligent control and scheduling of these loads enable a more flexible, robust, and economical power system operation. Moreover, the distributed nature of the local residential load controllers increases system scalability. On the distribution level, the smart grid benefits from the increased adoption of residential demand and generation control systems, because they improve system flexibility, help to achieve a better demand-supply balance, and enable increased penetration of renewable energy sources. Increasing flexibility of the building energy demand depends on multiple developments, including accurate forecasting and effective scheduling of the loads, incorporation of renewable energy sources such as solar and wind power, and integration of suitable energy storage technologies (e.g. batteries and/or electric vehicle charging) into the building energy management system. Advanced control, optimization and forecasting approaches are necessary to operate these complex systems seamlessly.
In this paper, in order to address this problem, we propose a novel embedded neural network architecture, called Recurrent Trend Predictive Neural Network based Forecast Embedded Scheduling (rTPNN-FES), which simultaneously forecasts the renewable energy generation and schedules the household appliances (loads). rTPNN-FES is a unique neural network architecture that enables both accurate forecasting and heuristic scheduling in a single neural network. This architecture is comprised of two main layers: 1) the Forecasting Layer which consists of replicated Recurrent Trend Predictive Neural Networks (rTPNN) with weight-sharing properties, and 2) the Scheduling Layer which contains parallel softmax layers with customized inputs each of which is assigned to a single load. In this paper, we also develop a 2-Stage Training algorithm that trains rTPNN-FES to learn the optimal scheduling along with the forecasting. However, the proposed rTPNN-FES architecture does not depend on the particular training algorithm, and the main contributions and advantages are provided by the architectural design. Note that the rTPNN model was originally proposed by Nakıp et al. <cit.> for multivariate time series prediction, and its superior performance compared to other ML models was demonstrated when making predictions based on multiple time series features in the case of multi-sensor fire detection. On the other hand, rTPNN has not yet been used in an energy management system and for forecasting renewable energy generation.
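To make the two-layer idea more tangible, the following is a highly schematic PyTorch sketch of a shared recurrent forecaster feeding one softmax scheduling head per appliance. It is not the authors' rTPNN-FES: a plain GRU stands in for the rTPNN cell, and the layer sizes, number of appliances, and number of scheduling slots are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class ForecastEmbeddedSchedulerSketch(nn.Module):
    """Schematic only: shared forecaster + per-appliance softmax scheduling heads."""
    def __init__(self, n_features, n_appliances, n_slots, hidden=32):
        super().__init__()
        self.forecaster = nn.GRU(n_features, hidden, batch_first=True)  # shared (weight-tied) forecaster
        self.gen_head = nn.Linear(hidden, n_slots)                      # forecast of generation per slot
        self.sched_heads = nn.ModuleList(
            [nn.Linear(hidden + n_slots, n_slots) for _ in range(n_appliances)]
        )

    def forward(self, history):                  # history: (batch, time, n_features)
        _, h = self.forecaster(history)
        h = h.squeeze(0)                         # (batch, hidden)
        gen_forecast = self.gen_head(h)          # predicted generation for each slot
        z = torch.cat([h, gen_forecast], dim=-1)
        # one softmax per appliance: a distribution over candidate time slots
        schedules = [torch.softmax(head(z), dim=-1) for head in self.sched_heads]
        return gen_forecast, torch.stack(schedules, dim=1)

model = ForecastEmbeddedSchedulerSketch(n_features=4, n_appliances=12, n_slots=24)
gen, sched = model(torch.randn(8, 48, 4))        # 8 samples, 48 past steps, 4 features
```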
Furthermore, the advantages of using rTPNN-FES instead of a separate forecaster and scheduler are threefold:
* rTPNN-FES learns how to construct a schedule adapted to forecast energy generation by emulating (mimicking) optimal scheduling. Thus, the scheduling via rTPNN-FES is highly robust against forecasting errors.
* The requirements of rTPNN-FES for the memory space and computation time are significantly lower compared to the combination of a forecaster and an optimal scheduler.
* rTPNN-FES offers considerably higher scalability for systems in which the set of loads varies over time, e.g. adding new devices into a smart home Internet of Things (IoT) network.
We numerically evaluate the performance of the proposed rTPNN-FES architecture against 7 different well-known ML algorithms combined with optimal scheduling. To this end, publicly available datasets <cit.> are utilized for a smart home environment with 12 distinct appliances. Our results reveal that the proposed rTPNN-FES architecture achieves high forecasting accuracy while generating a close-to-optimal schedule over a period of one year. It also outperforms existing techniques in both forecasting and scheduling tasks.
The remainder of this paper is organized as follows: Section <ref> reviews the differences between this paper and the state-of-the-art. Section <ref> presents the system set-up and initiates the optimization problem. Section <ref> presents the rTPNN-FES architecture and the 2-Stage Training algorithm which is used to learn and emulate the optimal scheduling. Section <ref> presents the performance evaluation and comparison. Finally, Section <ref> summarizes the main contributions of this paper.
§ RELATED WORKS
In this section, we present the comparison of this paper with the state-of-the-art works in three categories: 1) The works in the first category develop an optimization-based energy management system without interacting with ML. 2) The works in the second category focus on forecasting renewable energy generation using either statistical or deep learning techniques. 3) The works in the last category develop energy management systems using ML algorithms.
§.§ Optimization-based Energy Management Systems
We first review the recent works which developed optimization-based energy management systems. In <cit.>, Shareef et al. gave a comprehensive summary of heuristic optimization techniques used for home energy management systems. In <cit.>, Nezhad et al. presented a model predictive controller for a home energy management system with loads, photovoltaic (PV) and battery electric storage. They formulated the MPC as a mixed-integer programming problem and evaluated its economic performance under different energy pricing schemes. In <cit.>, Albogamy et al. utilized Lyapunov-based optimization to regulate HVAC loads in a home with battery energy storage and renewable generation. In <cit.>, S. Ali et al. considered heuristic optimization techniques to develop a demand response scheduler for smart homes with renewable energy sources, energy storage, and electric and thermal loads. In <cit.>, G. Belli et al. resorted to mixed integer linear programming for optimal scheduling of thermal and electrical appliances in homes within a demand response framework. They utilized a cloud service provider to compute and share aggregate data in a distributed fashion. In <cit.>, variants of several heuristic optimization methods (optimal stopping rule, particle swarm optimization, and grey wolf optimization) were applied to the scheduling of home appliances under a virtual power plant framework for the distribution grid. Then, their performance was compared for three types of homes with different demand levels and profiles.
There is a wealth of research on optimization- and model predictive controller-based scheduling of residential loads. In this literature, the prediction of the load demand and generation (if available) is usually pursued independently of the scheduling algorithm and is merely used as a constraint parameter in the optimization problem. The discrepancy between predicted and observed demand and generation may lead to poor performance and robustness issues. The proposed rTPNN-FES in this paper handles forecasting and scheduling in a unified way and, therefore, provides robustness in the presence of forecasting errors.
§.§ Forecasting of Renewable Energy Generation
We now briefly review the related works on forecasting renewable energy generation, which have also been reviewed in more detail in the literature, e.g. <cit.>.
The earlier research in this category forecasts energy generation using statistical methods. For example, in <cit.>, Kushwaha et al. used the well-known seasonal autoregressive integrated moving average technique to forecast PV generation in 20-minute intervals. In <cit.>, Rogier et al. evaluated the performance of a nonlinear autoregressive neural network for forecasting PV generation data collected through a LoRa-based IoT network. In <cit.>, Fentis et al. used a Feed Forward Neural Network and Least Square Support Vector Regression with exogenous inputs to perform short-term forecasting of PV generation. In <cit.>, the authors analyzed the performance of Autoregressive Integrated Moving Average (ARIMA) and Artificial Neural Network (ANN) models for forecasting PV energy generation. In <cit.>, Atique et al. used ARIMA with parameter selection based on the Akaike information criterion and the sum of squared estimates to forecast PV generation. In <cit.>, Erdem and Shi analyzed the performance of autoregressive moving averages to forecast wind speed and direction using four different approaches, such as decomposing the speed into lateral and longitudinal components. In <cit.>, Cadenas et al. performed a comparative study between ARIMA and a nonlinear autoregressive exogenous artificial neural network for forecasting wind speed.
The recent trend of research focuses on the development of ML and (neural network-based) deep learning techniques. In <cit.>, Pawar et al. combined an ANN and a Support Vector Regressor (SVR) to predict renewable energy generated via PV. In <cit.>, Corizzo et al. forecast renewable energy using a regression tree with an adopted Tucker tensor decomposition. In <cit.>, the authors forecast PV generation based on the historical data of features such as irradiance, temperature and relative humidity. In <cit.>, Shi et al. proposed a pooling-based deep recurrent neural network technique to prevent overfitting in household load forecasting. In <cit.>, Zheng et al. developed an adaptive neuro-fuzzy system that forecasts the generation of wind turbines in conjunction with the forecast of weather features such as wind speed.
In <cit.>, Vandeventer et al. used a genetic algorithm to select the parameters of an SVM to forecast residential PV generation. In <cit.>, van der Meer et al. performed a probabilistic forecast of solar power using quantile regression and a dynamic Gaussian process. In <cit.>, He and Li combined quantile regression with kernel density estimation to predict wind power density. In <cit.>, Alessandrini et al. used an analogue ensemble method to probabilistically forecast wind power. In <cit.>, Cervone et al. combined an ANN with the analogue ensemble method to forecast PV generation in both deterministic and probabilistic ways. Recently, in <cit.>, Guo et al. proposed a combined load forecasting method for Multi-Energy Systems (MES) based on Bi-directional Long Short-Term Memory (BiLSTM). The combined load forecasting framework is trained with a multi-tasking approach for sharing the coupling information among the loads.
Although there is a large number of studies that forecast renewable energy generation and/or other generation-related factors, this paper differs sharply from the existing literature as it proposes an embedded neural network architecture, called rTPNN-FES, that performs both forecasting and scheduling simultaneously.
§.§ Machine Learning Enabled Energy Management Systems
In this category, we review the recent studies that aim to develop energy management systems enabled by ML, especially for residential buildings.
The first group of works in this category performed scheduling (based on either optimization or heuristics) using the forecasts provided by an ML algorithm. In <cit.>, Elkazaz et al. developed a heuristic energy management algorithm for hybrid systems using autoregressive ML for forecasting and optimization for parameter settings. In <cit.>, Zaouali et al. developed an auto-configurable middleware using Long Short-Term Memory (LSTM) based forecasting of renewable energy generated via PV. In <cit.>, Shakir et al. developed a home energy management system using LSTM for forecasting and a Genetic Algorithm for optimization. In <cit.>, Manue et al. used LSTM to forecast the load for battery utilization in a solar-powered smart home system. In <cit.>, the authors developed a hybrid system of renewable and grid-supplied energy via exponentially weighted moving average-based forecasting and a heuristic load control algorithm. In <cit.>, Aurangzeb et al. developed an energy management system which uses a convolutional neural network to forecast renewable energy generation. Finally, in <cit.>, in order to distribute the load and decrease the costs, Sarker et al. developed a home energy management system based on heuristic scheduling.
The second group of works in this category developed energy management systems based on reinforcement learning. In <cit.>, Ren et al. developed a model-free Dueling-double deep Q-learning neural network for home energy management systems. In <cit.>, Lissa et al. used ANN-based deep reinforcement learning to minimize energy consumption by adjusting the hot water temperature in the PV-enabled home energy management system. In <cit.>, Yu et al. developed an energy management system using a deep deterministic policy gradient algorithm. In <cit.>, Wan et al. used a deep reinforcement learning algorithm to learn the energy management strategy for a residential building. In <cit.>, Mathew et al. developed a reinforcement learning-based energy management system to reduce both the peak load and the electricity cost. In <cit.>, Liu et al. developed a home energy management system using deep and double deep Q-learning techniques for scheduling home appliances.
In <cit.>, Lu et al. developed an energy management system with hybrid CNN-LSTM based forecasting and rolling horizon scheduling. In <cit.>, Ji et al. developed a microgrid energy management system using the Markov decision process for modelling and ANN-based deep reinforcement learning for determining actions.
Deep learning-based control systems are also very popular for off-grid scenarios, as off-grid energy management systems are gaining increasing attention to provide sustainable and reliable energy services. In References <cit.> and <cit.>, the authors developed algorithms based on deep reinforcement learning to deal with the uncertain and stochastic nature of renewable energy sources.
All of these works have used ML techniques, especially deep learning and reinforcement learning, to build energy management systems. Moreover, in a recent work <cit.>, Nakıp et al. mimicked the scheduling via ANN and developed an energy management system using this ANN-based scheduling. However, in contrast with rTPNN-FES proposed in this paper, none of them has used ANN to generate scheduling or combined forecasting and scheduling in a single neural network architecture.
§ SYSTEM SETUP AND OPTIMIZATION PROBLEM
In this section, we present the assumptions, mathematical definitions and the optimization problem related to the system setup, which is used for embedded forecasting and scheduling via rTPNN-FES and shown in Figure <ref>. Throughout this paper, rTPNN-FES is assumed to operate at the beginning of a scheduling window that consists of S equal-length slots and has a total duration of H in actual time (i.e. the horizon length). In addition, the length of each slot s equals H/S, and the actual time instance at which slot s starts is denoted by m_s. Then, we let g^m_s denote the power generation by the renewable energy source within slot s. Also, ĝ^m_s denotes the forecast of g^m_s.
We let 𝒩 be the set of devices that need to be scheduled until H (in other words until the end of slot S), and N denote the total number of devices, i.e. |𝒩| = N. Each device n ∈𝒩 has a constant power consumption per slot denoted by E_n. In addition, n should be active uninterruptedly for a_n successive slots. That is, when n is started, it consumes a_n E_n until it stops. Moreover, we assume that the considered renewable energy system contains a battery with a capacity of B_max, where the stored energy in this battery is used via an inverter with a supply limit of Θ. We assume that there is enough energy in total (the sum of the stored energy in the battery and total generation) to supply all devices within [0, H].
At the beginning of the scheduling window, we forecast the renewable energy generation and schedule the devices accordingly. To this end, as the main contribution of this paper, we combine the forecaster and scheduler in a single neural network architecture, called rTPNN-FES, which shall be presented in Section <ref>.
Optimization Problem: We now define the optimization problem for the non-preemptive scheduling of the starting slots of devices to minimize user dissatisfaction. In other words, this optimization problem aims to distribute the energy consumption over slots prioritizing “user satisfaction”, assuming that the operation of each device is uninterruptible. In this article, we consider a completely off-grid system –which utilizes only renewable energy sources– where it is crucial to achieve near-optimal scheduling to use limited available resources. Recall that this optimization problem is re-solved at the beginning of each scheduling window for the available set of devices 𝒩 using the forecast generation ĝ^m_s over the scheduling window in Figure <ref>.
Moreover, for each n ∈𝒩, there is a predefined cost of user dissatisfaction, denoted by c_(n, s), for scheduling the start of n at slot s. This cost can take values in the range [0, +∞), and c_(n, s) is set to +∞ if the user does not want slot s to be reserved for device n. As we shall explain in more detail in Section <ref>, we determine the user dissatisfaction cost c_(n, s) as an increasing function of the distance between s and the desired start time of the considered device n. We should note that the definition of the user dissatisfaction cost only affects the numerical results since the proposed rTPNN-FES methodology does not depend on its definition.
Then, we let x_(n, s) denote a binary schedule for the start of the activity of device n at slot s. That is, x_(n, s) = 1 if device n is scheduled to start at the beginning of slot s, and x_(n, s) = 0 otherwise. In addition, in our optimization program, we let x^*_(n, s) be a binary decision variable denoting the optimal value of x_(n, s).
Accordingly, we define the optimization problem as follows:
min ∑_n ∈𝒩∑_s=1^Sx^*_(n, s) c_(n, s)
subject to
∑_s=1^S-(a_n-1)x^*_(n, s) = 1, ∀ n ∈𝒩
∑_n ∈𝒩∑_s'=[s-(a_n-1)]^+^sE_n x^*_(n, s')≤Θ, ∀ s ∈{1, …, S}
∑_n ∈𝒩∑_s'=[s-(a_n-1)]^+^sE_n x^*_(n, s')≤ĝ^m_s + B_max,
∀ s ∈{1,…, S}
∑_n ∈𝒩∑_s'=1^s∑_s”=[s'-(a_n-1)]^+^s'E_n x^*_(n, s”)≤ B + ∑_s'=1^sĝ^m_s',
∀ s ∈{1,…, S}
where [Ξ]^+ = Ξ if Ξ≥ 1; otherwise, [Ξ]^+ = 1.
The objective function (<ref>) minimizes the total user dissatisfaction cost over all devices as (∑_n ∈𝒩∑_s=1^Sx^*_(n, s) c_(n, s)). While minimizing user dissatisfaction, the optimization problem also considers the following constraints (an illustrative solver sketch is given after the list):
* Uniqueness and Operation constraint in (<ref>) ensures that each device n is scheduled to start at exactly one slot between the 1st and the [S-(a_n-1)]-th slot. The upper limit for the start of the operation of device n is set to [S-(a_n-1)] because n must operate for a_n successive slots before the end of the last slot S.
* Inverter Limitation constraint in (<ref>) limits the total power consumption at each slot s to the maximum power Θ that can be provided by the inverter. Note that the term ∑_s'=[s-(a_n-1)]^+^s x^*_(n,s') equals 1 if device n is scheduled to be active at slot s (i.e. n is scheduled to start between [s-(a_n-1)]^+ and s).
* Maximum Storage constraint in (<ref>) ensures that the scheduled consumption at each slot s does not exceed the sum of the predicted generation (ĝ^m_s) at this slot and the maximum energy (B_max) that can be stored in the battery.
* Total Consumption constraint in (<ref>) ensures that the scheduled total power consumption until each slot s is not greater than the summation of the stored energy, B, at the beginning of the scheduling window and the total generation until s. This constraint is used as we are considering a completely off-grid system.
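To make the formulation concrete, the following sketch models the objective and the four constraints above with the open-source PuLP library; PuLP itself, the helper name solve_schedule, and the 0-based slot indexing are our own illustrative assumptions rather than the solver used for the results reported later.

import pulp

def solve_schedule(c, E, a, g_hat, B, B_max, theta):
    """c[n][s]: dissatisfaction cost, E[n]: power per slot, a[n]: number of active slots,
    g_hat[s]: forecast generation, B: stored energy at the window start, B_max: battery
    capacity, theta: inverter limit. Returns the binary schedule x[n][s]."""
    N, S = len(E), len(g_hat)
    prob = pulp.LpProblem("forecast_scheduling", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (range(N), range(S)), cat="Binary")

    def plus(v):                       # the [.]^+ operator defined above
        return max(v, 1)

    def active(n, s):                  # 1 if device n is running in (0-based) slot s
        return pulp.lpSum(x[n][sp] for sp in range(plus(s + 1 - (a[n] - 1)) - 1, s + 1))

    # Objective: total user dissatisfaction over all devices and slots.
    prob += pulp.lpSum(c[n][s] * x[n][s] for n in range(N) for s in range(S))
    for n in range(N):
        # Uniqueness and operation: each device starts exactly once, early enough to finish.
        prob += pulp.lpSum(x[n][s] for s in range(S - (a[n] - 1))) == 1
        for s in range(S - (a[n] - 1), S):
            prob += x[n][s] == 0
    for s in range(S):
        load = pulp.lpSum(E[n] * active(n, s) for n in range(N))
        prob += load <= theta                      # inverter limitation
        prob += load <= g_hat[s] + B_max           # maximum storage
        # Total consumption up to slot s cannot exceed stored plus generated energy.
        prob += pulp.lpSum(E[n] * active(n, sp)
                           for n in range(N) for sp in range(s + 1)) <= B + sum(g_hat[:s + 1])
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [[int(x[n][s].value()) for s in range(S)] for n in range(N)]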
§ RECURRENT TREND PREDICTIVE NEURAL NETWORK BASED FORECAST EMBEDDED SCHEDULING (RTPNN-FES)
In this section, we present our rTPNN-FES neural network architecture. Figure <ref> displays the architectural design of rTPNN-FES which aims to generate scheduling for the considered window while forecasting the power generation through this window automatically and simultaneously. To this end, rTPNN-FES is comprised of two main layers of “Forecasting Layer” and “Scheduling Layer”, and it is trained using the “2-Stage Training Procedure”.
We let ℱ be the set of features and ℱ≡{1, …, F}. In addition, z_f^m_s denotes the value of input feature f in slot s, which starts at m_s, where this feature can be any external data, such as weather predictions, that is directly or indirectly related to the power generation g^m_s. We also let τ_f be a duration of time over which the system developer has observed that feature f has periodicity; τ_0 represents the periodicity duration for g^m_s. Note that we do not assume that the features have a periodic nature. If there is no observed periodicity, τ_f can be set to H.
As shown in Figure <ref>, the inputs of rTPNN-FES are {g^m_s - 2τ_0, g^m_s - τ_0} and {z_f^m_s - 2τ_f, z_f^m_s - τ_f} for f ∈ℱ, and its output is {x_(n, s)}_n∈{1,…,N}^s∈{1,…,S}.
§.§ Forecasting Layer
The Forecasting Layer is responsible for forecasting the power generation within the architecture of rTPNN-FES. For each slot s in the scheduling window, rTPNN-FES forecasts the renewable energy generation ĝ^m_s based on the collection of the past feature values for two periods, {z_f^m_s - 2τ_f, z_f^m_s - τ_f}_f ∈ℱ, as well as the past generation for two periods, {g^m_s - 2τ_0, g^m_s - τ_0}. To this end, this layer consists of S parallel rTPNN models that share the same parameter set (connection weights and biases). That is, in this layer, there are S replicas of a single trained rTPNN; in other words, one may say that a single rTPNN is used with different inputs to forecast the energy generation for each slot s. Therefore, all but one of the Trained rTPNN blocks are shown as transparent in Figure <ref>.
The weight sharing among rTPNN models (i.e. using replicated rTPNNs) has the following advantages:
* The number of parameters in the Forecasting Layer decreases by a factor of S; thus reducing both time and space complexity.
* By avoiding rTPNN training repeated S times, the training time is also reduced by a factor of S.
* Because a single rTPNN is trained on the data collected over S different slots, the rTPNN can now capture recurrent trends and relationships with higher generalization ability.
§.§.§ Structure of rTPNN
We now briefly explain the structure of rTPNN, which was originally proposed in <cit.>, as used in our rTPNN-FES neural network architecture. As shown in Figure <ref>, which displays the structure of rTPNN, for any s, the inputs of rTPNN are {g^m_s - 2τ_0, g^m_s - τ_0} and {z_f^m_s - 2τ_f, z_f^m_s - τ_f} for f ∈ℱ, and the output is ĝ^m_s. In addition, the rTPNN architecture consists of (F+1) Data Processing (DP) units and L fully connected layers, including the output layer.
§.§.§ DP units
In the architecture of rTPNN, there is one DP unit for the past values of energy generation, denoted by DP_0, and one for each time series feature f, denoted by DP_f. That is, DP_f for any feature f (including f=0) has the same structure, but its corresponding input is different for each f. For example, the input of DP_f is {z_f^m_s - 2τ_f, z_f^m_s - τ_f} for any time series feature f ∈{1, …, F}, while the input of DP_0 is the past values of energy generation {g^m_s - 2τ_0, g^m_s - τ_0}. Thus, one may notice that DP_0 is the only unit with a special input.
During the explanation of the DP unit, we focus on a particular instance DP_f, which is also shown in detail in Figure <ref>. Using {z_f^m_s - 2τ_f, z_f^m_s - τ_f} input pair, DP_f aims to learn the relationship between this pair and each of the predicted trend t_f^s and the predicted level l_f^s. To this end, DP_f consists of Trend Predictor and Level Predictor sub-units each of which is a linear recurrent neuron.
As shown in Figure <ref>, Trend Predictor of DP_f computes the weighted sum of the change in the value of feature f from m_s - 2τ_f to m_s - τ_f and the previous value of the predicted trend. That is, DP_f calculates the sum of the difference between (z_f^m_s - τ_f - z_f^m_s - 2τ_f) with connection weight of α^1_f and the previous value of the predicted trend t_f^s-1 with the connection weight of α^2_f as
t_f^s=α^1_f (z_f^m_s - τ_f - z_f^m_s - 2τ_f)+ α^2_f t_f^s-1
By calculating the trend of a feature and learning the parameters in (<ref>), rTPNN is able to capture behavioural changes over time, particularly those related to the forecasting of ĝ^m_s.
Level Predictor sub-unit of DP_f predicts the level of feature value, which is the smoothed version of the value of feature f, using only z_f^m_s - τ_f and the previous state of the predicted level l_f^s-1. To this end, it computes the sum of the z_f^m_s - τ_f and l_f^s-1 with weights of β^1_f and β^2_f respectively as
l_f^s=β^1_f z_f^m_s - τ_f+ β^2_f l_f^s-1
By predicting the level, we can reduce the effect of anomalous instantaneous changes in the measurement of feature f on the forecast.
Note that parameters α^1_f, α^2_f, β_f^1 and β_f^2 of Trend Predictor and Level Predictor sub-units are learned during the rTPNN training like all other parameters (i.e. connection weights).
§.§.§ Feed-forward of rTPNN
We now describe the calculations performed during the execution of the rTPNN; that is, when making a prediction via rTPNN. To this end, first, let 𝐖_l denote the connection weight matrix for the inputs of hidden layer l, and 𝐛_l denote the vector of biases of l. Thus, for each s, the forward pass of rTPNN is as follows:
* Trend Predictors of DP_0-DP_F:
t_0^s=α^1_0 (g^m_s - τ_0-g^m_s - 2τ_0)+ α^2_0 t_0^s-1,
t_f^s=α^1_f (z_f^m_s - τ_f-z_f^m_s - 2τ_f)+ α^2_f t_f^s-1, ∀ f ∈ℱ
* Level Predictors of DP_0-DP_F:
l_0^s=β^1_0 g^m_s - τ_0+ β^2_0 l_0^s-1,
l_f^s=β^1_f z_f^m_s - τ_f+ β^2_f l_f^s-1, ∀ f ∈ℱ
* Concatenation of the outputs of DP_0-DP_F to feed to the hidden layers:
𝐳^s=[t_0^s, l_0^s, g^m_s - τ_0, …, t_F^s, l_F^s, z_F^m_s - τ_F]
* Hidden Layers from l=1 to l=L:
𝐎^s_1 = Ψ(𝐖_1 (𝐳^s)^T+ 𝐛_1),
𝐎^s_l = Ψ(𝐖_l 𝐎^s_l-1 + 𝐛_l), ∀ l ∈{2, …, L-1}
ĝ^m_s = Ψ(𝐖_L𝐎^s_L-1 + 𝐛_L),
where (𝐳^s)^T is the transpose of the input vector 𝐳^s, 𝐎^s_l is the output vector of hidden layer l, and Ψ(·) denotes the activation function as an element-wise operator.
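For concreteness, a minimal NumPy sketch of this forward pass is given below; the dictionary-based parameter layout, the use of the sigmoid activation in every layer, and the function name rtpnn_forward are our own assumptions and not the exact Keras implementation released with the paper.

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def rtpnn_forward(g_prev, g_prev2, z_prev, z_prev2, state, params):
    """One forward pass of rTPNN for a single slot s.
    g_prev, g_prev2: generation one and two periods ago (scalars);
    z_prev, z_prev2: feature values one and two periods ago, shape (F,);
    state: dict with previous trends 't' and levels 'l', each of shape (F+1,);
    params: dict with 'alpha', 'beta' of shape (F+1, 2) and lists 'W', 'b' of layer weights/biases."""
    x_prev = np.concatenate(([g_prev], z_prev))       # index 0 plays the role of DP_0
    x_prev2 = np.concatenate(([g_prev2], z_prev2))
    alpha, beta = params["alpha"], params["beta"]
    t = alpha[:, 0] * (x_prev - x_prev2) + alpha[:, 1] * state["t"]   # trend predictors
    l = beta[:, 0] * x_prev + beta[:, 1] * state["l"]                 # level predictors
    z = np.stack([t, l, x_prev], axis=1).ravel()      # [t_0, l_0, g, ..., t_F, l_F, z_F]
    o = z
    for W, b in zip(params["W"], params["b"]):        # fully connected layers, last one scalar
        o = sigmoid(W @ o + b)
    state["t"], state["l"] = t, l                     # recurrent state carried to the next slot
    return float(o[0]), state                         # forecast g_hat for slot s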
§.§ Scheduling Layer
The Scheduling Layer consists of N parallel softmax layers, each responsible for generating a schedule for a single device's start time. A single softmax layer for device n is shown in Figure <ref>. Since this layer is cascaded behind the Forecasting Layer, each device n is scheduled to be started at each slot s based on the output of the Forecasting Layer ĝ^m_s as well as the system parameters c_(n,s), E_n, B, B_max and Θ for this device n and this slot s.
In Figure <ref>, each arrow represents a connection weight. Accordingly, for device n for slot s in a softmax layer of the Scheduling Layer, a neuron first calculates the weighted sum of the inputs as
α_(n, s) = w^g_(n, s) ĝ^m_s + w^B_(n, s)B/S - w^c_(n, s) c_(n, s)
-w^E_(n, s) E_n - w^Θ_(n, s)Θ - w^B_max_(n, s) B_max
where all of the connection weights w^g_(n, s), w^B_(n, s), w^c_(n, s), w^E_(n, s), w^Θ_(n, s), and w^B_max_(n, s) are strictly positive. In addition, the signs of the terms are determined considering the intuitive effect of each parameter on the schedule decision for device n at slot s. For example, a higher ĝ^m_s makes slot s a better candidate to schedule n, while a higher user dissatisfaction cost c_(n,s) makes slot s a worse candidate.
In addition, a softmax activation is applied at the output of this neuron:
x_(n, s) = Φ(α_(n, s)) = e^α_(n, s)/∑_s'=1^S e^α_(n, s')
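A small sketch of the per-device scheduling neuron and softmax described above is given next; enforcing the strictly positive connection weights by squaring raw parameters is purely our own illustrative choice.

import numpy as np

def schedule_device(g_hat, c_n, E_n, B, B_max, theta, raw_w):
    """Scheduling-layer output x_(n,s) for one device n over S slots.
    g_hat, c_n: length-S arrays; raw_w: dict with entries 'g', 'B', 'c', 'E', 'theta', 'Bmax'."""
    S = len(g_hat)
    w = {k: np.square(v) for k, v in raw_w.items()}           # keep connection weights positive
    alpha = (w["g"] * g_hat + w["B"] * (B / S)
             - w["c"] * c_n - w["E"] * E_n
             - w["theta"] * theta - w["Bmax"] * B_max)
    e = np.exp(alpha - alpha.max())                           # numerically stable softmax over slots
    return e / e.sum()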
§.§ 2-Stage Training Procedure
We train our rTPNN-FES architecture to learn the optimal scheduling of devices as well as the forecasting of energy generation in a single neural network. To this end, we first assume that there is a collected dataset comprised of the actual values of g^m_s and {z_f^m_s}_f∈ℱ for s ∈{1,…,S} for multiple scheduling windows. Note that rTPNN-FES does not depend on the developed 2-stage training procedure, so it can be used with any training algorithm. For each window in this dataset, the 2-stage procedure works as follows:
§.§.§ Stage 1 - Training of rTPNN Separately for Forecasting
In this first stage of training, in order to create a forecaster, the rTPNN model (Figure <ref>) is trained separately from the rTPNN-FES architecture (Figure <ref>). To this end, the deviation of ĝ^m_s from g^m_s for s ∈{1, …,S}, i.e. the forecasting error of rTPNN, is measured via Mean Squared Error as
MSE_forecast≡1/S∑_s=1^S(g^m_s - ĝ^m_s)^2
We update the parameters (connection weights and biases) of rTPNN via back-propagation with gradient descent, in particular the Adam algorithm, to minimize MSE_forecast, where the initial parameters are set to those found in the previous training. We repeat the parameter updates for as many epochs as required without over-fitting to the training samples.
When Stage 1 is completed, the parameters of “Trained rTPNN” in Figure <ref> are replaced by the resulting parameters found in this stage. Then, the parameters of Trained rTPNN are frozen to continue further training of rTPNN-FES in Stage 2. That is, the parameters of Trained rTPNN are not updated in Stage 2.
§.§.§ Stage 2 - Training of rTPNN-FES for Scheduling
In Stage 2 of training, in order to create a scheduler emulating optimization, the rTPNN-FES architecture (Figure <ref>) is trained following the steps shown in Figure <ref>.
The steps in Stage 2 shown in Figure <ref> are as follows:
* The optimal schedule, {x_n,s^*}_n∈{1,…,N}^s∈{1,…,S} is computed by solving the optimization problem given in Section <ref> in (<ref>)-(<ref>).
* The feed-forward output of rTPNN-FES, {x_n,s}_n∈{1,…,N}^s∈{1,…,S}, which is the estimation of scheduling, is computed through (<ref>)-(<ref>) using the architecture in Figure <ref>.
* The performance of rTPNN-FES for scheduling, i.e. total estimation error of rTPNN-FES, is measured via Categorical Cross-Entropy as
CCE_schedule≡ - ∑_n=1^N∑_s=1^S x_n,s^* log(x_n,s)
* The parameters (connection weights and biases) in the “Scheduling Layers” of rTPNN-FES are updated via back-propagation with gradient descent (using the Adam optimization algorithm) to minimize CCE_schedule.
Once this training procedure is completed, i.e. during real-time operation, rTPNN-FES generates both forecasts of renewable energy generation, {ĝ^m_s}_s∈{1,…,S}, and a schedule {x_(n, s)}_n∈{1,…,N}^s∈{1,…,S} that emulates the optimization.
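A compact tf.keras sketch of the two stages could look as follows; the layer name trained_rtpnn, the data arguments, and the way the forecaster weights are copied and frozen are illustrative assumptions, since the released code may organize the models differently.

import tensorflow as tf

def two_stage_training(rtpnn, rtpnn_fes, X_hist, g_true, X_window, x_star):
    """rtpnn: standalone forecaster; rtpnn_fes: full model whose forecasting part is assumed
    to be a layer named 'trained_rtpnn'; x_star: optimal schedules obtained from the solver."""
    # Stage 1: train the forecaster alone on MSE_forecast.
    rtpnn.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
    rtpnn.fit(X_hist, g_true, epochs=40, batch_size=24)

    # Copy the trained forecaster weights into rTPNN-FES and freeze them.
    rtpnn_fes.get_layer("trained_rtpnn").set_weights(rtpnn.get_weights())
    rtpnn_fes.get_layer("trained_rtpnn").trainable = False

    # Stage 2: train only the scheduling layers on CCE_schedule against the optimal schedules.
    rtpnn_fes.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                      loss="categorical_crossentropy")
    rtpnn_fes.fit(X_window, x_star, epochs=20, batch_size=1)
    return rtpnn_fes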
§ RESULTS
In this section, we aim to evaluate the performance of our rTPNN-FES. To this end, we first present the considered datasets and hyper-parameter settings. We also perform a brief time-series data analysis aiming to determine the most important features for the forecasting of PV energy generation. Then, we numerically evaluate the performance of our technique and compare it with existing techniques.
§.§ Methodology of Experiments
§.§.§ Datasets
For the performance evaluation of the proposed rTPNN-FES, we combine two publicly available datasets <cit.> and <cit.>. The first dataset <cit.> consists of hourly solar power generation (kW) of various residential buildings in Konstanz, Germany between 22-05-2015 and 12-03-2017. Within this dataset, we consider only the residential building called “freq_DE_KN_residential1_pv”, which corresponds to 15864 samples in total. The second dataset contains weather-related information scraped via the World Weather Online (WWO) API <cit.>. This API provides 19 features related to temperature, precipitation, illumination and wind.
§.§.§ Experimental Set-up
Considering the limitations of the available dataset, we perform our experiments on a virtual residential building which is, each year, actively used between May and September. It is assumed that there are 12 different smart home appliances in the active months. These appliances are shown in Table <ref>, where each appliance should operate at least once a day. Note that the Electric Water Heater and the Central AC operate twice a day, where the desired start times are 6:00 and 17:00 for the heater, and 6:00 and 18:00 for the AC. In order to produce sufficient energy for the operation of these appliances, this building has its own PV system which consists of the following elements: 1) PV panels, whose generation values are taken from the dataset <cit.> explained above, 2) three batteries with a capacity of 13.5 kWh each, and 3) an inverter with a power rating of 10 kW.
Furthermore, during our experimental work, we set H=24 h, and we define the user dissatisfaction cost c_(n, s) for each device n at each slot s based on the “Desired Start Time”, which is given in Table <ref>, as
c_(n, s) = 1 - 1/σ_n √(2 π) exp(-1/2 (s - μ_n/σ_n)^2)
where μ_n is the desired start time of n, and σ_n is the acceptable variance for the start of n. The value of σ_n is 1 for Iron and Electric Water Heater, 2 for TV, Oven, Dishwasher and AC, 3 for Washing Machine and Dryer, and 5 for Robot Vacuum Cleaner. Also, the value of c_(n, s) is set to infinity for s earlier than the earliest start time and for s later than the latest start time.
Recall that the Water Heater and AC, which are activated twice a day, are modelled as two separate devices.
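For clarity, the following sketch evaluates this Gaussian-shaped dissatisfaction cost over an H=24 h window; the handling of the earliest/latest start times via np.inf is our own reading of the description above.

import numpy as np

def dissatisfaction_cost(mu_n, sigma_n, S=24, earliest=None, latest=None):
    """User dissatisfaction cost c_(n,s) for one device over S hourly slots."""
    s = np.arange(1, S + 1)
    c = 1.0 - np.exp(-0.5 * ((s - mu_n) / sigma_n) ** 2) / (sigma_n * np.sqrt(2 * np.pi))
    if earliest is not None:
        c[s < earliest] = np.inf      # slots the user does not want reserved
    if latest is not None:
        c[s > latest] = np.inf
    return c

# Example: Central AC with desired start time 18:00 and sigma_n = 2.
c_ac = dissatisfaction_cost(mu_n=18, sigma_n=2)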
§.§.§ Implementation and Hyper-Parameter Settings for rTPNN-FES
We implemented rTPNN-FES by using Keras API on Python 3.7.13. The experiments are executed on the Google Colab platform with an operating system of Linux 5.4.144 and a 2.2GHz processor with 13 GB RAM.
The Forecasting Layer is trained on this platform via the Adam optimizer for 40 epochs with an initial learning rate of 10^-3. In order to exploit the daily trend of PV generation, the batch size is fixed at 24. Moreover, an L_2 regularization term is injected into the Trend and Level Predictors in the rTPNN layer in order to avoid gradient vanishing. Finally, we used fully connected layers of rTPNN which are comprised of F+1 and (F+1)/2 neurons, respectively, with sigmoid activation. The Scheduling Layer of each device is trained on the same platform, also using the Adam optimizer, for 20 epochs with a batch size of 1 and an initial learning rate of 10^-3. Note that setting the batch size to 1 is due to the particular implementation of rTPNN-FES which uses the Keras library. In addition, the infinity values of c_(n,s) are set to 100 at the inputs of the scheduling layer in order to be able to calculate the neuron activation. We also set the periodicity τ_0 of g^m_s as 24 h.
Furthermore, the source codes of the rTPNN-FES and experiments in this paper are shared in <cit.> in addition to the repository of the original rTPNN.
§.§.§ Genetic Algorithm-based Scheduling for Comparison
Genetic algorithms (GAs) have been widely used in scheduling tasks due to their ability to effectively solve complex optimization problems. GAs are able to incorporate various constraints and prior knowledge into the optimization process, making them well-suited for scheduling tasks with many constraints. GAs are also able to efficiently search through a vast search space to find near-optimal solutions, even for problems with a large number of variables <cit.>. These characteristics make GAs powerful tools for finding high-quality solutions in our experimental setup and good candidates to compare against rTPNN-FES.
The experiments are executed on the Google Colab platform with the same hardware configuration as rTPNN-FES. In this experimental setting, a chromosome is a daily schedule matrix. Cross-over is performed by selecting a random cross-over point among the devices and swapping the device schedules, while mutation is introduced by randomly changing the scheduled time of a single device with probability 0.1. The GA starts by sampling feasible solutions out of 5000 random solutions as the initial population. After that, 1000 new generations are simulated while the population size is fixed at 200, with selections made in an elitist style.
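For reference, a condensed sketch of this elitist GA is given below; the cost and feasibility checks are passed in as callables, and all other implementation details are our own assumptions.

import random

def genetic_schedule(N, S, cost, feasible, pop_size=200, generations=1000, p_mut=0.1):
    """Elitist GA over start-slot vectors (one start slot per device).
    cost(ind) returns the total dissatisfaction; feasible(ind) checks the energy constraints."""
    # Initial population: feasible candidates sampled out of 5000 random solutions.
    candidates = [[random.randrange(S) for _ in range(N)] for _ in range(5000)]
    population = [ind for ind in candidates if feasible(ind)][:pop_size]
    for _ in range(generations):
        offspring = []
        for _ in range(pop_size):
            p1, p2 = random.sample(population, 2)
            cut = random.randrange(1, N)                  # cross-over point over the devices
            child = p1[:cut] + p2[cut:]
            if random.random() < p_mut:                   # mutate one device's start slot
                child[random.randrange(N)] = random.randrange(S)
            if feasible(child):
                offspring.append(child)
        # Elitist selection: keep the best pop_size individuals.
        population = sorted(population + offspring, key=cost)[:pop_size]
    return min(population, key=cost)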
§.§ Forecasting Performance of rTPNN-FES
We now compare the forecasting performance of rTPNN with the performances of LSTM, MLP, Linear Regression, Lasso, Ridge, ElasticNet, Random Forest as well as the 1-Day Naive Forecast.[The 1-Day Naive Forecast equals the original time series with a 1-day lag.] Recall that in the recent literature, References <cit.> used LSTM, and Reference <cit.> used MLP.
During our experimental work, the dataset is partitioned into training and test sets consisting of the first 300 days (corresponding to 7200 samples) and the remaining 361 days (corresponding to 8664 samples), respectively.
First, Table <ref> presents the performances of all models on both training and test sets with respect to Mean Squared Error (MSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE) and Symmetric Mean Absolute Percentage Error (SMAPE) metrics, which are calculated as
MSE = 1/S∑_s=1^S(g^m_s - ĝ^m_s)^2
MAE = 1/S∑_s=1^S| g^m_s - ĝ^m_s|
MAPE = 100%/S∑_s=1^S| g^m_s - ĝ^m_s/g^m_s|
SMAPE = 100%/S∑_s=1^S|g^m_s - ĝ^m_s| /((|g^m_s| + |ĝ^m_s|)/2)
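These four metrics can be computed with a few lines of NumPy, as in the sketch below; note that MAPE is undefined whenever the actual generation contains zero values.

import numpy as np

def forecast_metrics(g, g_hat):
    """MSE, MAE, MAPE and SMAPE as defined above (g, g_hat: arrays of equal length)."""
    g, g_hat = np.asarray(g, dtype=float), np.asarray(g_hat, dtype=float)
    err = g - g_hat
    mse = np.mean(err ** 2)
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err / g))               # undefined when g contains zeros
    smape = 100.0 * np.mean(np.abs(err) / ((np.abs(g) + np.abs(g_hat)) / 2.0))
    return mse, mae, mape, smape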
In Table <ref>, the results on the test set show that rTPNN outperforms all of the other forecasters for the majority of the error metrics, while some forecasters may perform better in individual metrics. However, observations on an individual error metric (without considering the other metrics) may be misleading due to its properties. For example, the MAPE of Ridge Regression is significantly low but its MSE, MAE and SMAPE are high. The reason is that Ridge is more accurate in forecasting samples with high energy generation than forecasting those with low generation. Moreover, rTPNN is shown to have high generalization ability since it performs well for both training and test sets with regard to all metrics. Also, only rTPNN and LSTM are able to achieve better performances than the benchmark performance of the 1-Day Naive Forecast with respect to MSE, MAE and SMAPE.
We also see that SMAPE yields significantly larger values than those of other metrics (including MAPE) because SMAPE takes values in [0%, 200%] and has a scaling effect as a result of the denominator in (<ref>). In particular, the absolute deviation of forecast values from the actual values is divided by the sum of those. Therefore, under- and over-forecasting have different effects on SMAPE, where under-forecasting results in higher SMAPE.
Next, in Figure <ref>, we present the actual energy generation between the fifth and the seventh days of the test set as well as those forecast by the best three techniques (rTPNN, LSTM and MLP). Our results show that the predictions of rTPNN are the closest to the actual generation among these three techniques. In addition, we see that rTPNN can successfully capture both increases and decreases in energy generation while LSTM and MLP struggle to predict sharp increases and decreases.
Finally, Figure <ref> displays the histogram of the forecasting error that is realized by each of rTPNN, LSTM, and MLP on the test set. Our results in this figure show that the forecasting error of rTPNN is around zero for a significantly large number of samples (around 5000 out of 8664 samples). We also see that the absolute error is smaller than 2 for 93% of the samples. Finally, the overall forecasting error is lower for rTPNN than for both LSTM and MLP.
§.§ Scheduling Performance of rTPNN-FES
We now evaluate the scheduling performance of rTPNN-FES for the considered smart home energy management system. To this end, we compare the schedule generated by rTPNN-FES with that by optimization (solving (<ref>)-(<ref>)) using actual energy generations as well as the GA-based scheduling (presented in Section <ref>).
Note that although the schedule generated by the optimization using actual generations is the best achievable schedule, it is practically not available due to the lack of future information about the actual generations.
Figure <ref> (top) displays the comparison of rTPNN-FES against the optimal scheduling and the GA-based scheduling regarding the cost value for the days of the test set. In this figure, we see that rTPNN-FES significantly outperforms GA-based scheduling achieving close-to-optimal cost. In other words, the user dissatisfaction cost – which is defined in (<ref>) – of rTPNN-FES is significantly lower than the cost of GA-based scheduling, and it is slightly higher than that of optimal scheduling. The average cost difference between rTPNN-FES and optimal scheduling is 1.3% and the maximum difference is about 3.48%.
Furthermore, Figure <ref> (bottom) displays the summary of the statistics for the cost difference between rTPNN-FES and the optimal scheduling as well as the difference between GA-based and optimal scheduling as a boxplot. In Figure <ref> (bottom), we first see that the cost difference is significantly lower for rTPNN-FES, where even the upper quartile of rTPNN-FES is smaller than the lower quartile of GA-based scheduling. We also see that the median of the cost difference between rTPNN-FES and optimal scheduling is 0.13 and its upper quartile is about 0.146. That is, the cost difference is less than 0.146 for 75% of the days in the test set. In addition, we see that there are only 7 outlier days for which the cost is between 0.19 and 0.3. According to the results presented in Figure <ref>, rTPNN-FES can be considered a successful heuristic with a low increase in cost.
§.§ Evaluation of the Computation Time
In Table <ref>, we present measurements on the training and execution times of each forecasting model. Our results first show that the execution time of rTPNN (0.17 ms) is comparable with the execution time of LSTM and highly acceptable for real-time applications. On the other hand, the training time measurements show that the training of rTPNN takes longer than that of other forecasting models. Accordingly, one may say that there is a trade-off between training time and the forecasting performance of rTPNN.
Figure <ref> displays the computation time of rTPNN-FES and that of optimization combined with LSTM (the second-best forecaster after rTPNN) in seconds. Note that we do not present the computation time of GA-based scheduling in this figure since it takes 4.61 seconds on average to find a schedule for a single window, which is approximately 3 orders of magnitude higher than the computation time of rTPNN-FES and 1 order of magnitude higher than that of optimization.
Our results in this figure show that rTPNN-FES requires significantly lower computation time than optimization to generate a daily schedule of household appliances. The average computation time of rTPNN-FES is about 4 ms while that of optimization with LSTM is 150 ms. That is, rTPNN-FES is 37.5 times faster than optimization with LSTM to simultaneously forecast and schedule. Although the absolute computation time difference seems insignificant for a small use case (as in this paper), it would have important effects on the operation of large renewable energy networks with a high number of sources and devices.
§ CONCLUSION
We have proposed a novel neural network architecture, called Recurrent Trend Predictive Neural Network based Forecast Embedded Scheduling (namely rTPNN-FES), for smart home energy management systems. The rTPNN-FES architecture forecasts renewable energy generation and schedules household appliances to use renewable energy efficiently and to minimize user dissatisfaction. As the main contribution of rTPNN-FES, it performs both forecasting and scheduling in a single architecture. Thus, it 1) provides a schedule that is robust against forecasting and measurement errors, 2) requires significantly less computation time and memory space by eliminating the use of two separate algorithms for forecasting and scheduling, and 3) offers high scalability as the load set grows (i.e., as devices are added) over time.
We have evaluated the performance of rTPNN-FES for both forecasting renewable energy generation and scheduling household appliances using two publicly available datasets. During the performance evaluation, rTPNN-FES is compared against 8 different techniques for forecasting and against the optimization and genetic algorithm for scheduling. Our experimental results have drawn the following conclusions:
* The forecasting layer of rTPNN-FES outperforms all of the other forecasters for the majority of MSE, MAE, MAPE, and SMAPE metrics.
* rTPNN-FES achieves a highly successful schedule which is very close to the optimal schedule with only 1.3% of the cost difference.
* rTPNN-FES requires a much shorter time than both optimal and GA-based scheduling to generate embedded forecasts and scheduling, although its forecasting time alone is slightly higher than that of other forecasters.
Future work shall improve the training of rTPNN-FES by directly minimizing the cost of user dissatisfaction (or other scheduling costs) to eliminate the collection of optimal schedules for training. In addition, the integration of a predictive dynamic thermal model into the rTPNN-FES framework shall be pursued in future studies. (Such integration is required to utilize more advanced HVAC scheduling/control system designs.) It would also be interesting to observe the performance of rTPNN-FES for large-scale renewable energy networks. Furthermore, since the architecture of rTPNN-FES is not dependent on the particular optimization problem formulated in this paper, rTPNN-FES shall be applied for other forecasting/scheduling problems such as optimal dispatch in microgrids, flow control in networks, and smart energy distribution in future work.
|
http://arxiv.org/abs/2307.01123v1
|
20230703155454
|
Facilitating Cooperation in Human-Agent Hybrid Populations through Autonomous Agents
|
[
"Hao Guo",
"Chen Shen",
"Shuyue Hu",
"Junliang Xing",
"Pin Tao",
"Yuanchun Shi",
"Zhen Wang"
] |
physics.soc-ph
|
[
"physics.soc-ph"
] |
Hao Guo^1, Chen Shen^2, Shuyue Hu^3, Junliang Xing^4, †, Pin Tao^4,
Yuanchun Shi^4, Zhen Wang^1,†
†Corresponding author: [email protected], [email protected]
1. School of Mechanical Engineering, and School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi'an 710072, China
2. Faculty of Engineering Sciences, Kyushu University, Kasuga-koen, Kasuga-shi, Fukuoka 816-8580, Japan
3. Shanghai Artificial Intelligence Laboratory, Shanghai, China
4. Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
August 1, 2023
§ SUMMARY
Cooperation is a vital social behavior that plays a crucial role in human prosperity, enabling conflict resolution and averting disastrous outcomes. With the increasing presence of autonomous agents (AAs), human-agent interaction becomes more frequent in modern society. We investigate the impact of cooperative and defective AAs on human cooperation within the framework of evolutionary game theory, particularly in one-shot social dilemma games. Our findings reveal that cooperative AAs have a limited impact on prisoner's dilemma, but facilitate cooperation in stag hunt games. Surprisingly, defective AAs, rather than cooperative AAs, promote complete dominance of cooperation in snowdrift games. Meanwhile, in scenarios with weak imitation strength, cooperative AAs are able to maintain or even promote cooperation in all these games. Additionally, the results obtained from structured populations also imply that the effectiveness of AAs in promoting cooperation can be maximized by carefully considering their design and application in a given context.
§ INTRODUCTION
Cooperation, which serves as a fundamental social behavior (<cit.>), plays a crucial role in ensuring human prosperity. It not only facilitates the resolution of individual conflicts, such as hunting and driving, but also mitigates burdensome catastrophes like global climate change and disease transmission (<cit.>). However, cooperation often struggles to survive in the face of competition with defection due to lower payoffs (<cit.>). Although mutual cooperation is beneficial to collective interests, individuals are frequently tempted to choose defection. The concept of social dilemma captures the inherent challenge in the evolution of cooperation, referring to a situation where an individual's interests conflict with collective interests. Two-player social dilemma games such as prisoner's dilemma (PD) game, stag hunt (SH) game, and snowdrift (SD) game, are employed ubiquitously, portraying the rational decision-making of two participants using a strategy set and payoff matrix (<cit.>). This type of matrix game allows for equilibrium analysis and has been extensively utilized in research within the fields of social science, biology, and artificial intelligence (AI) (<cit.>).
With the integration of AI into various aspects of human life, advancements in science and technology have allowed humans to delegate decision-making tasks to machines (<cit.>).
Although previous studies have suggested fascinating solutions to encourage cooperation in human-human interactions (<cit.>), they do not address this problem within human-agent hybrid populations.
Consequently, research on human-agent coordination has gained significant attention and encompasses diverse areas. One typical example is autonomous driving (<cit.>), where humans relinquish decision-making power to cars, thereby freeing themselves from the physical demands of driving and making travel easier and more enjoyable. However, most of this research focuses on situations where humans and agents share a common goal (<cit.>). When conflicts of interest arise, it becomes crucial to investigate the evolution of human behavior in a human-agent hybrid environment (<cit.>).
As social interactions have become more hybrid (<cit.>), involving humans and AAs, there lies an opportunity to gain new insights into how human cooperation is affected (<cit.>).
This work aims to examine the influence of AAs on human cooperative behavior when social dilemmas exist.
Understanding how human behavior changes in the presence of robots or AAs is a challenging but essential topic (<cit.>).
To accurately capture human cooperation toward agents, previous studies have primarily focused on developing (or designing) algorithms for AAs (<cit.>). In particular, they mainly focused on repeated games where human players (HPs) can make decisions based on historical information about AAs. The impact of one-shot settings, where players lack prior experience and information about their counterparts, has been generally ignored with few exceptions (<cit.>). In this paper, we focus on how cooperative and defective AAs affect human cooperation in two-player social dilemma games, and we ask: Are cooperative AAs always beneficial to human cooperation? Do defective AAs consistently impede the evolution of human cooperation? How do population dynamics change in structured and unstructured populations when the ratio of human-human interaction to human-agent interaction varies?
To address the research questions mentioned above, we utilize an evolutionary game theoretic framework to study the conundrum of cooperation in social dilemma games with a one-shot setting. As shown in Fig. <ref>, the typical games involve PD, SD, and SH games. As previous evidence has proved (or hypothesized), human players update strategies according to payoff differences, with social learning being the most well-known modality (<cit.>).
Therefore, we examine the evolutionary dynamics of human cooperation by employing replicator dynamics and pairwise comparison (<cit.>).
The main difference between these two dynamics lies in the consideration of AAs. In the pairwise comparison rule, human players can imitate the strategies of AAs, whereas replicator dynamics do not incorporate such imitation.
In the human-agent hybrid population, the fraction of cooperation among human players is denoted as ρ_C (0 ≤ρ_C ≤ 1). Assuming that the human players constitute one unit, y units of AAs are added to the hybrid population. Specifically, AAs are programmed to choose cooperation with a fixed probability ϕ that remains constant over time. The summary of the notations is given in Table <ref>. Our findings indicate that in well-mixed populations with replicator dynamics, AAs have little impact on equilibrium in games with a dominant strategy (Theorem 1 in electronic supplementary material). Cooperative agents facilitate cooperation in SH games (Theorem 3 in electronic supplementary material) but undermine cooperation in SD games. Counterintuitively, seemingly harmful defective agents can support the dominance of cooperation in SD games (Theorem 2 in electronic supplementary material). Additionally, we conduct stability analysis and establish the conditions for the prevalence of cooperation. Our results demonstrate that even a minority of defective (or cooperative) AAs can significantly enhance human cooperation in SD (or SH) games. However, when cooperative (or defective) AAs constitute a majority, they may trigger the collapse of cooperation in SD (or SH) games. These findings are further corroborated by the pairwise comparison rule when strong imitation strength is considered. In scenarios with weak imitation strength, cooperative AAs are more likely to stimulate human cooperation.
In contrast to well-mixed populations, where players can interact with others with an equal probability, structured populations restrict interactions to locally connected neighbors. Such a difference in interactive environments is deemed a determinate factor influencing the emergence of cooperation (<cit.>).
To investigate this, we conduct experimental simulations on complex networks and observe that structured populations yield comparable results to well-mixed scenarios, except in the case of heterogeneous networks. This divergence can be attributed to the influential role of nodes with higher degrees. Our results, taken together, provide valuable insights into the impact of population state control on humans.
§ RESULTS
In this section, we mainly present the theoretical results for well-mixed populations in PD, SD, and SH games by analyzing the replicator dynamics and the pairwise comparison rule.
The replicator equation (<cit.>) is a differential equation depicting the growth of a specific strategy based on the payoff difference. The pairwise comparison rule describes the process of strategy imitation according to the Fermi function. Finally, we present an extension to complex networks.
§.§ Replicator dynamics
§.§.§ Prisoner's dilemma game and harmony game
In PD game, even in the presence of AAs, defection is the dominant strategy and the expected payoff of defection is equal to or larger than that of cooperation. Therefore, evolutionary dynamics reach a full defection equilibrium state, and human cooperation diminishes irrespective of its initial frequency (see Theorem 1 in electronic supplementary material). This finding remains robust against any cooperation probability of AAs, as shown in Fig. <ref> A. Although the equilibrium remains constant, the convergence rate is influenced by the values of y and ϕ.
In contrast, in a harmony game, the expected payoff of cooperation is equal to or larger than that of defection (see Fig. <ref> B); thereby the system converges to a full cooperation equilibrium state.
The results show that both cooperative and defective AAs have no effect on the convergence state if the game has a dominant strategy.
§.§.§ Snowdrift game
In SD game, when AAs are absent, replicator dynamics have demonstrated that the interior equilibrium ρ̂_C is the unique asymptotically stable state, while the equilibria ρ_C=0 and ρ_C=1 are always unstable. However, the results will be different if we take AAs into consideration, as indicated in Theorem 2 (electronic supplementary material). We find that the population converges to a full cooperation
equilibrium state regardless of the initial frequency of human cooperation, provided the condition (1+yϕ)/(1+y) ≤ (P-S)/(R-T-S+P) is satisfied. Interestingly, to achieve a full cooperation equilibrium state with as few AAs as possible, the optimal approach is to introduce AAs with a cooperation probability of ϕ=0. Correspondingly, the minimum value for y is (T-R)/(S-P). On the other hand, when yϕ/(1+y) ≥ (P-S)/(R-T-S+P), the population converges to a full defection equilibrium state regardless of the initial frequency of human cooperation. Therefore, we reveal that defective AAs can actually stimulate human cooperation, acting as catalysts for cooperative behavior in SD games.
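To illustrate this result numerically, the following sketch integrates the replicator dynamics of human cooperation in the hybrid population; the payoff expressions are our reconstruction from the verbal description above (one unit of humans mixed with y units of AAs cooperating with probability ϕ, with R=1 and P=0 assumed), and the exact equations are given in the electronic supplementary material.

import numpy as np

def replicator_with_AAs(rho0, y, phi, R, S, T, P, dt=0.01, steps=200000):
    """Numerically integrate the replicator dynamics of human cooperation when
    y units of AAs (cooperating with probability phi) are mixed with one unit of humans."""
    rho = rho0
    for _ in range(steps):
        pc = (rho + y * phi) / (1.0 + y)        # probability that a random co-player cooperates
        pi_C = pc * R + (1.0 - pc) * S          # expected payoff of a human cooperator
        pi_D = pc * T + (1.0 - pc) * P          # expected payoff of a human defector
        rho += dt * rho * (1.0 - rho) * (pi_C - pi_D)
        rho = min(max(rho, 0.0), 1.0)
    return rho

# Snowdrift example (T > R > S > P): a few purely defective AAs (phi = 0)
# drive the human subpopulation to full cooperation.
print(replicator_with_AAs(rho0=0.5, y=0.5, phi=0.0, R=1.0, S=0.5, T=1.2, P=0.0))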
We then present analytical results regarding how AAs affect the equilibrium by setting T=1.2 and S=0.5 in Fig. <ref>.
Panel A depicts the ϕ-y phase diagram, which consists of three parts: full cooperation state, full defection state, and coexistence state of C and D. We find that a minority of defective AAs (see the area with y<1) can shift the equilibrium state from coexistence state to a full cooperation equilibrium state, whereas a sufficiently large fraction of cooperative AAs drives the system to a full defection equilibrium state (blue color).
To better understand how such an unexpected full defection state happens, we examine the frequency of human cooperation as a function of AAs' cooperation probability ϕ at y=4 in Fig. <ref> B. We observe that with a fixed ratio of AAs to human players y, the lower the cooperation probability of AAs, the higher the level of human cooperation. In detail, the unique asymptotically stable state moves from full cooperation to the coexistence of C and D, and ultimately to complete defection with the increase of ϕ.
§.§.§ Stag hunt game
In the absence of AAs, replicator dynamics have revealed that the coexistence of C and D is an unstable equilibrium state, whereas the full cooperation and defection equilibrium states are both asymptotically stable. The equilibrium state that the population evolves in depends on the initial frequency of human cooperation.
However, the full cooperation equilibrium state provides each player with a higher payoff compared to the full defection equilibrium state. Consequently, the question arises of how to steer the population towards a full cooperation equilibrium state that is independent of the initial frequency of human cooperation. This can be addressed by incorporating AAs, particularly cooperative AAs (see Theorem 3 in electronic supplementary material). In detail, when yϕ/(1+y) ≥ (P-S)/(R-T-S+P), the full cooperation equilibrium state becomes the unique asymptotically stable solution, implying that the population converges to full cooperation irrespective of the initial frequency of human cooperation. In particular, to achieve full cooperation with as few AAs as possible, the best option is to introduce AAs with cooperation probability ϕ=1. On the other hand, the population converges to full defection regardless of the initial frequency of human cooperation when (1+yϕ)/(1+y) ≤ (P-S)/(R-T-S+P). To prevent the collapse of cooperation, it is advisable to control the cooperation probability of AAs so that ϕ ≥ (P-S)/(R-S-T+P).
In Fig. <ref> A, we present the phase diagram of analytical solutions. There exist three phases, including full C, full D, and a bistable state of C or D. The results demonstrate that the higher the cooperation probability of AAs, the lower the threshold for the proportion of AAs required to achieve a full cooperation equilibrium state. Notably, when y<1, we showcase that even a minority of cooperative AAs can stimulate a full cooperation equilibrium state. On the other hand, AAs with a lower cooperation probability (ϕ < (P-S)/(R-S-T+P)) can result in the collapse of cooperation in a population containing a large fraction of AAs (see blue area).
We then show how the equilibrium of the population varies as a function of ϕ by fixing y=4. In the monostable state, the equilibrium is insensitive to the initial frequency of human cooperation. However, in the bistable state, increasing ϕ decreases the unstable interior equilibrium and expands the area capable of reaching a full cooperation equilibrium state.
Overall, we find that a minority of defective (or cooperative) AAs can trigger a pronounced phase transition towards a full cooperation equilibrium state in SD (or SH) games. In contrast, if AAs take a larger proportion, although a full cooperation equilibrium state is easy to reach, there is a risk of transitioning to a full defection equilibrium state. Given the recognition of social learning as a means of describing human strategy updating (<cit.>), we further investigate the results by considering the pairwise comparison rule, focusing on the situation where human players can imitate not only the strategies of other human players but also those of AAs.
§.§ Pairwise comparison rule
In the pairwise comparison rule, we investigate the influence of AAs as well as the imitation strength K. Note that K →∞ and K → 0 correspond to strong and weak imitation strength, respectively. The probability given by the Fermi function is determined solely by the sign of the payoff difference π_C-π_D if K →∞, and tends to 0.5 if K → 0.
Since obtaining an analytical solution for the pairwise comparison rule is intractable, we present numerical and simulation results in this section.
A fascinating finding is that the results are qualitatively consistent with replicator dynamics if we consider strong imitation strength in the pairwise comparison rule. However, the results vary as the imitation strength weakens.
We utilize the same values of T and S as in the replicator dynamics (RD) section and present the numerical and simulation results for the pairwise comparison rule in Fig. <ref>. Note that here we represent defective and cooperative AAs as ϕ=0.1 and ϕ=0.9, respectively. The simulation results are obtained following Algorithm I (electronic supplementary material).
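Since Algorithm I is only given in the electronic supplementary material, the following is a minimal agent-based sketch consistent with the description above (Fermi imitation with strength K, human players imitating either humans or AAs); every implementation detail here is our own assumption.

import numpy as np

def simulate_pairwise(n_h=1000, y=4.0, phi=0.1, K=1.0, R=1.0, S=0.5, T=1.2, P=0.0,
                      rho0=0.5, steps=500000, seed=0):
    """Monte Carlo sketch of the pairwise comparison rule in a well-mixed hybrid population:
    human players imitate humans or AAs via the Fermi rule; AAs cooperate with probability phi."""
    rng = np.random.default_rng(seed)
    n_a = int(round(y * n_h))
    human = rng.random(n_h) < rho0                    # True = cooperate

    def payoff(me, other):                            # one-shot game payoff of 'me' against 'other'
        return (R if other else S) if me else (T if other else P)

    def random_strategy(idx):                         # strategy of a random population member
        return bool(human[idx]) if idx < n_h else bool(rng.random() < phi)

    for _ in range(steps):
        i = rng.integers(n_h)                         # focal human player
        j = rng.integers(n_h + n_a)                   # role model (human or AA)
        s_i, s_j = bool(human[i]), random_strategy(j)
        pi_i = payoff(s_i, random_strategy(rng.integers(n_h + n_a)))
        pi_j = payoff(s_j, random_strategy(rng.integers(n_h + n_a)))
        if rng.random() < 1.0 / (1.0 + np.exp(-K * (pi_j - pi_i))):   # Fermi imitation
            human[i] = s_j
    return human.mean()                               # final frequency of human cooperation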
In PD game, human cooperation, under either the ϕ=0.1 or the ϕ=0.9 condition, hardly emerges under strong imitation strength (see Fig. <ref> A). In detail, when K=100, human cooperation is insensitive to the strategy and fraction of AAs, which is consistent with replicator dynamics. However, when we reduce the imitation strength, the results change. Both the AAs' proportion and their cooperation probability positively influence the evolution of human cooperation. In particular, cooperative AAs promote human cooperation more effectively than defective AAs. This effect becomes more significant with lower imitation strength (see K=1).
In SD game (see Fig. <ref> B), human cooperation increases (or decreases) with AA's proportion when ϕ=0.1 (or ϕ=0.9) under strong imitation strength. Defective AAs benefit the evolution of human cooperation, which is consistent with the finding in RD. In particular, this effect is insensitive to the initial fraction of human cooperation, as shown in Fig. S1 (electronic supplementary material).
However, the results are reversed when the imitation strength weakens (see K=1): cooperative AAs are more beneficial to human cooperation.
In SH game (see Fig. <ref> C), there exist two kinds of states: a unique asymptotically stable state ρ_C1^* (or ρ_C2^*) and a bistable state with ρ_C1^* and ρ_C2^*, where ρ_C1^* denotes the coexistence of C and D with a higher frequency of cooperation and ρ_C2^* denotes the coexistence of C and D with a lower frequency of cooperation. As shown in Fig. S2 (electronic supplementary material), in the bistable state, the equilibrium the system evolves to is affected by the initial frequency of human cooperation. Furthermore, similar to the findings in RD, AAs with a lower (or higher) cooperation probability ϕ=0.1 (or ϕ=0.9) are more likely to drive the system to the state ρ_C2^* (or ρ_C1^*) as y increases under strong imitation strength. This finding remains robust when the imitation strength becomes weak.
Our findings demonstrate consistent results under both the replicator dynamics and the pairwise comparison rule with strong imitation strength. However, if weak imitation strength (which allows for irrational choices by human players) is taken into account, cooperative AAs are more beneficial to the evolution of human cooperation in all three types of games. While our previous discussions primarily focused on well-mixed populations, exploring the outcomes within networks that incorporate local interactions is also pertinent.
§.§ Extension to complex networks
We first present the average human cooperation frequency ρ_C as a function of ϕ for the square lattice, the Barabási–Albert (BA) scale-free network, and the Erdős–Rényi (ER) random network in Fig. <ref>. In the PD game, cooperation is consistently promoted with increasing ϕ, regardless of the network type. These results are qualitatively consistent with the previous theoretical analyses. Turning our attention to SD games, we find that AAs inhibit cooperation compared to scenarios without AAs, regardless of the network type. On the square lattice, we reach the similar conclusion that cooperation is further weakened as ϕ increases. However, a contrasting phenomenon emerges in heterogeneous networks, such as the ER random and BA scale-free networks. We infer that this discrepancy may be attributed to AAs occupying network hub nodes.
To verify this, we conduct simulation experiments on a BA scale-free network under two scenarios (see Fig. S5 in the electronic supplementary material):
i) By assigning the 4871 nodes with the highest degrees as AAs (to maintain a similar number of AAs as in Fig. <ref>), Fig. S5 presents results similar to those in Fig. <ref> B.
ii) By assigning the 4871 nodes with the lowest degrees as AAs, we again arrive at the conclusion that defective AAs effectively promote cooperation. This finding highlights the significant influence of the AAs' location on human cooperation. Next, in the SH game, AAs with a higher cooperation probability are more beneficial for human cooperation than defective AAs. In particular, increasing ϕ triggers tipping points in all three types of networks.
Given the significance of hub nodes, we conduct an additional analysis focusing on only ten AAs in the three different types of games, as shown in Fig. <ref>. When these AAs are assigned to the nodes with the highest degrees, their strategy has a profound impact on human cooperation. Even a slight change in the behavior of the AAs, especially in PD games (see Fig. <ref> A), can lead to a significant increase in overall human cooperation. Conversely, when these ten AAs are randomly allocated across the network, they have little influence on human behavior and exhibit limited utility in altering cooperation levels.
§ DISCUSSION
In this study, we investigate human cooperation in hybrid populations involving interactions between human players and AAs. The AAs, which are programmed to choose cooperation with a specific probability, are employed to answer our motivating questions. Human players are assumed to update their strategies according to payoff differences, following either replicator dynamics or the pairwise comparison rule. Theoretical analysis and simulation experiments are carried out for well-mixed populations and network structures, respectively. Using replicator dynamics, we investigate the impact of AAs on the equilibria of various social dilemma games, namely the PD, SH, and SD games. We show that cooperative AAs effectively promote human cooperation in the SH game, but their influence is limited in games with dominant strategies. Surprisingly, in the SD game, cooperative AAs can even disrupt cooperation. To achieve a full cooperation state with as few AAs as possible in the SH game, introducing AAs that always cooperate proves to be the most effective approach. Furthermore, our results also show that defective AAs are not useless, as they can stimulate cooperation in SD games. Correspondingly, to achieve a cooperation-dominant state with as few AAs as possible, the best choice is to introduce AAs that always defect. These findings are further verified using the pairwise comparison rule with strong imitation strength.
On the other hand, if taking weak imitation strength (which includes irrational options of human players) into account, we demonstrate that cooperative AAs are beneficial for promoting human cooperation regardless of social dilemmas.
In an extended study, we implement experimental simulations involving three types of complex networks. By incorporating spatial structures into the interaction environment, we obtain qualitatively consistent results in the homogeneous network. The differences in heterogeneous networks are mainly due to the location of AAs. By controlling AAs' location, we find that assigning AAs to hub nodes, even in a small proportion, can significantly affect evolutionary outcomes.
Thus far, we have contributed a model for studying human cooperation in hybrid populations, showing that it is essential to consider environments related to social dilemmas, networks, and imitation strength when designing AAs. The insights gained from our results have practical implications for developing AI algorithms to foster human cooperation.
Our research distinguishes itself from previous studies in several key aspects.
Firstly, in addition to considering the committed minorities (<cit.>), we incorporate a substantial number of autonomous agents into our model. These AAs cooperate with their counterparts with a certain probability. The inclusion of a substantial number of AAs is motivated by recent advancements in social network research (<cit.>), which suggest that machine accounts make up approximately 32% of all tweets based on empirical evidence from Twitter data. Moreover, there is an increasing trend in the number of machine accounts, posing significant challenges in terms of reducing their potential risks (<cit.>). In the context of human-agent games, we validate that minority cooperative AAs stimulate cooperation, which aligns with existing literature (<cit.>). However, our research reveals an unexpected result: the inclusion of a large number of AAs leads to a breakdown of the cooperative system, surpassing the effects observed with a mere minority of AAs, as demonstrated in the defective region depicted in the left panel of Fig.<ref> and Fig.<ref>. This discovery emphasizes the potential risks posed by the growing prevalence of AAs in relation to human cooperation.
Secondly, we introduce defective AAs, whose role in fostering cooperation in two-player social dilemma games has been largely overlooked in the context of human-agent interaction.
Under either replicator dynamics or the pairwise comparison rule, we find that the inclusion of defective AAs can indeed trigger the dominance of cooperation in SD games, an effect that remains hidden when solely focusing on cooperative AAs.
Furthermore, although several existing studies primarily use AAs to address fairness or collective risk problems (<cit.>), they have not introduced AAs in structured populations (<cit.>). These interactive environments have been recognized as important factors in the context of human-human interactions. By introducing structured populations, we reveal tipping points that are triggered by AAs in SH games. Meanwhile, we investigate the effect of nodes with higher degrees on triggering human cooperation. These additional critical extensions provide a comprehensive understanding of the role of cooperative and defective AAs in the evolution of human cooperation. They offer a more realistic representation of interactive environments in human-agent interactions, shedding light on the complex dynamics at play in social systems.
There are still intriguing avenues for future exploration in this field. In human-human interactions, punishment has been proven to be a powerful behavior in eliciting cooperation (<cit.>). It is also essential to understand its utility in human-agent populations, particularly in solving second-order free-rider problems (<cit.>). Even though theoretical analysis helps identify critical values, it neglects human players' emotional and social factors. Conducting experiments involving structured and unstructured populations to test these findings will open up exciting avenues for research in human-agent interaction.
§.§ Limitations of the study
The current study focuses solely on the simplest social dilemmas involving two decisions: cooperation and defection. In social dilemmas where players have no prior information about counterparts, we have shown that AAs with even simple intelligence can facilitate cooperation. However, it still remains uncertain how they would perform in more complex scenarios, such as stochastic games and sequential social dilemma games (<cit.>). Interaction among players in these scenarios may be influenced by historical information or the state of the environment, making it necessary and meaningful to address such problems.
Meanwhile, although simple algorithms may be effective in stimulating cooperation (<cit.>), developing algorithms with more intelligence will further benefit the development of artificial intelligence, especially for human-machine or human-robot interaction (<cit.>).
Several subsequent studies have investigated the effect of the information level (i.e., to what extent human players know their opponents are agents) on human cooperation (<cit.>). These studies revealed that human cooperation fares better in situations where human players have no information about the true nature of their opponents (<cit.>). Conversely, human players tend to reduce their willingness to cooperate once they know that their opponents are agents, even when they recognize that agents perform better than human players at inducing cooperation.
However, Shirado and Christakis conducted the human-agent interactions on network structures and presented contrasting results that human cooperation can still be promoted even if the identity of agents is transparent (<cit.>). In social dilemma games with one-shot settings, there is still no deterministic answer. Our study ignored the intrinsic property of AAs, exploring a scenario without learning bias between humans and AAs. Therefore, it is crucial to consider the true nature of agents, especially in one-shot settings, to expand our understanding of their behavior and implications.
§ STAR METHODS
In this section, we briefly describe the basic concept of social dilemma games and then present our model in the context of hybrid populations.
§.§ Social Dilemma Games
The two-player social dilemma game, a typical subclass of social dilemmas, describes the rational decision-making of two participants via a strategy set and a payoff matrix. In the simplest version, each player selects a strategy from a strategy set 𝒮={C, D}, where C and D represent cooperation and defection, respectively. Mutual cooperation yields a reward R to both players, while mutual defection results in a punishment P. Unilateral cooperation leads to the sucker's payoff S, while the defecting counterpart receives the temptation to defect T. The above process can be represented by a payoff matrix:
𝒜 = ( [ R S; T P ] ).
Using this payoff matrix, the so-called social dilemma arises if the following four conditions hold simultaneously (<cit.>):
i) R>P. Players prefer to cooperate with each other than to defect from each other.
ii) R>S. Mutual cooperation is preferred over unilateral cooperation.
iii) 2R>T+S. Mutual cooperation is more beneficial for the collective than defecting against a cooperator.
iv) Either T>R (greed) or P>S (fear). The former condition means players prefer exploiting a cooperator to cooperating with him. The latter condition means players prefer mutual defection over being exploited by a defector.
According to the ranking order of these parameters, two-player social dilemma games can be classified into four different kinds of games (<cit.>): PD games (T>R>P>S), where defection is the dominant strategy, SD games (T>R>S>P), SH games (R>T>P>S), and the harmony (H) game (R>T, S>P), where cooperation is the dominant strategy. It is noteworthy that the first three games exhibit social dilemmas (<cit.>), while the harmony game does not. Unless stated otherwise, we set R=1 and P=0 throughout this paper.
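As a concrete illustration, the following minimal Python sketch (the helper name and the sample values of T and S are our own) maps a parameter pair with R=1 and P=0 to one of the four game types.

```python
def classify_game(T, S, R=1.0, P=0.0):
    """Classify a two-player game by the ranking of R, S, T, P (here R=1, P=0)."""
    if T > R and P > S:
        return "Prisoner's Dilemma (PD)"   # T > R > P > S: defection dominates
    if T > R and S > P:
        return "Snowdrift (SD)"            # T > R > S > P: greed only
    if R > T and P > S:
        return "Stag Hunt (SH)"            # R > T > P > S: fear only
    return "Harmony (H)"                   # R > T and S > P: cooperation dominates

# illustrative parameter choices
print(classify_game(T=1.5, S=0.5))    # SD
print(classify_game(T=0.5, S=-0.5))   # SH
print(classify_game(T=1.5, S=-0.5))   # PD
```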
§.§ Population Setup and Autonomous Agents
We consider a well-mixed and infinitely large population 𝒫={1, 2, ⋯, N}, where N →∞, and each player can interact with each other with equal probability. In the population, player i ∈𝒫 can choose one of two strategies from set 𝒮={C, D}. We denote the strategy of player i as a vector 𝒳_i = (x_1, x_2)', where x_j=1 if the jth strategy is chosen and the other element is equal to 0.
To investigate how AAs affect the cooperative behavior among human players, we consider a hybrid population consisting of human players and AAs (see Fig. <ref> A).
Human players actively participate in the game and update their strategies through a social learning process. AAs, on the other hand, follow a pre-designed algorithm to make their choices: they cooperate with a fixed probability ϕ (0 ≤ϕ≤ 1) and defect otherwise. We refer to them as cooperative AAs if 0.5 ≤ϕ≤ 1, and defective AAs if 0≤ϕ < 0.5. In the human-agent hybrid population, each player has an equal chance to engage in a two-player social game with other players (see Fig. <ref> B). Consequently, the interaction probability between humans and AAs significantly depends on the composition of this population.
§.§ Hybrid Population Game
In the hybrid population, the fraction of cooperation among human players is denoted as ρ_C (0 ≤ρ_C ≤ 1).
Normalizing the mass of human players to one unit, we add y units of AAs to the hybrid population. Consequently, the fraction of human cooperators in the whole population is f_C = ρ_C/1+y.
The parameter y can also be used to quantify the composition of the population: if 0 < y < 1, it implies that the fraction of AAs is lower than that of human players; whereas y ≥ 1 indicates a higher proportion of AAs.
Accordingly, the expected payoff of cooperation and defection among human players in a hybrid population can be calculated as follows:
π_C=1/1+y(ρ_C R + (1-ρ_C)S) + y/1+y(ϕ R+(1-ϕ)S),
π_D=1/1+y(ρ_C T + (1-ρ_C)P) + y/1+y(ϕ T+(1-ϕ)P),
where the first term on the right-hand side represents the payoff from interacting with human players, and the second term represents the payoff obtained from interacting with AAs.
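For reference, the expected payoffs above can be evaluated with the following Python sketch; the function name and the example parameters are illustrative choices of ours.

```python
def expected_payoffs(rho_C, y, phi, T, S, R=1.0, P=0.0):
    """Expected payoffs pi_C, pi_D of human cooperators and defectors.

    rho_C : fraction of cooperators among human players
    y     : units of AAs per unit of human players
    phi   : cooperation probability of the AAs
    """
    w_h, w_a = 1.0 / (1.0 + y), y / (1.0 + y)   # weights of human and AA co-players
    pi_C = w_h * (rho_C * R + (1 - rho_C) * S) + w_a * (phi * R + (1 - phi) * S)
    pi_D = w_h * (rho_C * T + (1 - rho_C) * P) + w_a * (phi * T + (1 - phi) * P)
    return pi_C, pi_D

# example: SD parameters (T=1.5, S=0.5) with 30% extra defective AAs
print(expected_payoffs(rho_C=0.5, y=0.3, phi=0.1, T=1.5, S=0.5))
```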
§.§ Replicator Dynamics
The replicator equation (<cit.>) is a widely used differential equation that depicts evolutionary dynamics in infinitely large populations. Following this rule, the growth of a specific strategy is proportional to the payoff difference. Therefore, the dynamics of human cooperation can be represented by the following differential equation:
ρ̇_C = (1+y) ḟ_C
= (1+y) ·ρ_C/1+y·1-ρ_C/1+y·(π_C-π_D)
=ρ_C(1-ρ_C)/1+y(π_C-π_D),
where
π_C-π_D = ρ_C+yϕ/1+y(R-T-S+P)+S-P.
By solving ρ̇_C=0, we find that there exist two trivial equilibria, ρ_C=0 and ρ_C=1, and a third equilibrium ρ_C^* that is closely associated with the game model and the AAs. By solving π_C-π_D=0, one can derive:
ρ_C^* = P-S/P+R-T-S+y(P-S/P+R-T-S-ϕ)
=ρ̂_C+y(ρ̂_C-ϕ),
where ρ̂_C=P-S/R+P-T-S. It is easy to deduce that the restriction T>R>S>P or R>T>P>S guarantees 0<ρ̂_C<1. Moreover, ρ_C^* increases with y if ρ̂_C-ϕ>0, whereas it decreases with y if ρ̂_C-ϕ<0. Since ρ_C^* measures the cooperation rate among human players, this equilibrium vanishes if ρ_C^*<0 or ρ_C^*>1.
Note that ρ̂_C is also the interior equilibrium when the population consists only of human players (<cit.>), i.e., the scenario y=0. Subsequently, the stability of the equilibria will be discussed for the three types of social dilemmas.
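As a numerical illustration, the following Python sketch integrates the replicator equation above by a forward Euler scheme and compares the result with the interior equilibrium ρ_C^*; the SD parameters T=1.5, S=0.5, the AA configuration, and the step size are our own illustrative choices rather than values prescribed by the text.

```python
def payoff_gap(rho_C, y, phi, T, S, R=1.0, P=0.0):
    """pi_C - pi_D in the hybrid population, in the closed form given above."""
    return (rho_C + y * phi) / (1.0 + y) * (R - T - S + P) + S - P

def replicator_rhs(rho_C, y, phi, T, S):
    """Right-hand side rho_C (1 - rho_C) / (1 + y) * (pi_C - pi_D)."""
    return rho_C * (1.0 - rho_C) / (1.0 + y) * payoff_gap(rho_C, y, phi, T, S)

def interior_equilibrium(y, phi, T, S, R=1.0, P=0.0):
    """rho_C^* = rho_hat + y (rho_hat - phi); None if it falls outside [0, 1]."""
    rho_hat = (P - S) / (R + P - T - S)
    rho_star = rho_hat + y * (rho_hat - phi)
    return rho_star if 0.0 <= rho_star <= 1.0 else None

# forward-Euler integration for an SD game (T=1.5, S=0.5) with defective AAs (phi=0.1)
rho, dt = 0.5, 1e-3
for _ in range(200_000):
    rho += dt * replicator_rhs(rho, y=0.3, phi=0.1, T=1.5, S=0.5)
print(rho, interior_equilibrium(y=0.3, phi=0.1, T=1.5, S=0.5))  # both near 0.62
```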
We first demonstrate hybrid population dynamics by analyzing replicator equations. As evidence has revealed that human players may update their behaviors through social learning (<cit.>), one may ask: what results can be obtained when considering pairwise comparison rule? Under this dynamic, we can also examine the effect of imitation strength on the outcomes.
§.§ Pairwise Comparison Rule
Pairwise comparison is a well-known social learning mechanism that accurately depicts game dynamics.
In this process, strategy updating takes place within a randomly chosen pair of players, denoted as i and j, with strategy 𝒳 and 𝒴 (𝒳,𝒴∈𝒮). If 𝒳≠𝒴, player i takes j as a reference and imitates its strategy with a probability determined by the Fermi function,
W_𝒳←𝒴=1/1+e^-K(π_𝒴-π_𝒳),
where K represents the selection intensity (also known as the imitation strength) and measures the extent to which players base their decisions on payoff comparisons; smaller values of K correspond to a higher degree of irrationality of human players (<cit.>). In the hybrid population defined above, the probability that a cooperator takes a defector as a reference is given by:
𝐏_1 = ρ_C(1-ρ_C+y(1-ϕ))/1+y.
Subsequently, the probability that cooperators decrease by one is
Q^- = 𝐏_1 W_C ← D
= ρ_C(1-ρ_C+y(1-ϕ))/1+y·1/1+e^-K(π_D-π_C).
Similarly, the probability that a defector takes a cooperator as a reference is
𝐏_2 = (ρ_C+yϕ)(1-ρ_C)/1+y.
Consequently, the probability that cooperators increase by one is
Q^+ = 𝐏_2 W_D ← C
=(ρ_C+yϕ)(1-ρ_C)/1+y·1/1+e^-K(π_C-π_D).
In total, the dynamics of cooperation can be represented by a master equation
ρ̇_C = Q^+ - Q^-
= (ρ_C-ρ_C^2+yϕ-ρ_Cyϕ)e^K(π_C-π_D)/(1+y)(1+e^K(π_C-π_D)) - ρ_C-ρ_C^2+ρ_Cy-ρ_Cyϕ/(1+y)(1+e^K(π_C-π_D)).
We can derive the equilibrium by solving ρ̇_C=0.
Since the denominator is evidently positive, the equilibrium is mainly determined by the numerator. We then denote the numerator by f(ρ_C) and examine its boundary values:
f(0) = yϕ e^K(yϕ/1+y(R-S-T+P)+S-P),
f(1) = yϕ-y.
In the presence of AAs, f(0) ≥ 0 and f(1) ≤ 0 hold, with equality when ϕ=0 and ϕ=1, respectively. Therefore, there exists at least one interior equilibrium when 0<ϕ<1. Note that ϕ=1 corresponds to the so-called zealous cooperator (<cit.>). Moreover, the imitation strength K also plays a crucial role in the dynamics of human cooperation.
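Interior equilibria of this master equation can be located numerically, for instance by scanning for sign changes of Q^+ - Q^- as in the following Python sketch; the game parameters and the values of y, ϕ, and K are illustrative choices of ours.

```python
import numpy as np

def fermi(delta, K):
    """Imitation probability 1 / (1 + exp(-K * delta))."""
    return 1.0 / (1.0 + np.exp(-K * delta))

def pairwise_rhs(rho_C, y, phi, K, T, S, R=1.0, P=0.0):
    """Q^+ - Q^- of the master equation for the hybrid population."""
    gap = (rho_C + y * phi) / (1.0 + y) * (R - T - S + P) + S - P      # pi_C - pi_D
    q_plus = (rho_C + y * phi) * (1.0 - rho_C) / (1.0 + y) * fermi(gap, K)
    q_minus = rho_C * (1.0 - rho_C + y * (1.0 - phi)) / (1.0 + y) * fermi(-gap, K)
    return q_plus - q_minus

# interior equilibria for an SD game (T=1.5, S=0.5) with cooperative AAs (phi=0.9)
grid = np.linspace(0.0, 1.0, 100_001)
for K in (1.0, 100.0):
    f = pairwise_rhs(grid, y=0.3, phi=0.9, K=K, T=1.5, S=0.5)
    roots = grid[1:][np.sign(f[:-1]) != np.sign(f[1:])]   # sign changes of rho_C-dot
    print(K, roots)
```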
§.§ Simulation for complex networks
Building upon the aforementioned results, we further study the network structure effect on human-agent cooperation in this section. Since interaction, in reality, is not limited to well-mixed populations, we also implement experiments in complex networks that contain local interactions. This means that players can only interact with a limited set of neighboring individuals.
To assess the effect of network structure on cooperation, we employ pairwise imitation as a strategy updating rule and measure the expected cooperation rate among human players.
Following this, players are matched in pairs and imitate their opponent's strategy based on a probability determined by their payoff difference (<cit.>). We begin by introducing three types of complex networks.
§.§.§ Network settings
Denote by 𝒢={𝒱, ℰ} a complex network, where 𝒱={1,2,⋯,N} is the node set and ℰ⊆𝒱×𝒱 is the link set. Each node i∈𝒱 represents either a human player or an AA. For each edge (i, j)∈ℰ, player i is paired with player j to play a two-player social dilemma game. We consider networks with N=10,000 players and an average degree of ⟨ k ⟩=4.
* The square lattice is a homogeneous network. Each player interacts with, and receives payoffs from, its north, south, east, and west neighbors. We consider a lattice with periodic boundary conditions.
* The Barabási–Albert scale-free network is generated following the growth and preferential attachment rules (<cit.>). The degree distribution of the resulting network follows a power law.
* The Erdős–Rényi random network is generated by linking each pair of distinct nodes with a fixed probability (<cit.>). The degree distribution of the resulting network follows a Poisson distribution.
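The three network types above can be generated, for instance, with the networkx library (our choice here; the text does not prescribe a particular implementation), matching N=10,000 and ⟨ k ⟩=4:

```python
import networkx as nx

N, k_avg = 10_000, 4

lattice = nx.grid_2d_graph(100, 100, periodic=True)   # 100 x 100 torus, degree 4
ba = nx.barabasi_albert_graph(N, m=2)                  # BA scale-free, <k> close to 4
er = nx.erdos_renyi_graph(N, p=k_avg / (N - 1))        # ER random graph, <k> close to 4

for name, g in [("lattice", lattice), ("BA", ba), ("ER", er)]:
    degrees = [d for _, d in g.degree()]
    print(name, g.number_of_nodes(), sum(degrees) / len(degrees))
```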
§.§.§ Agent-based simulation
We utilize Monte Carlo simulations to examine the variation of cooperation across different networks. The pseudocode for the simulation is provided in Algorithm II (electronic supplementary material). Initially, each human player is assigned either cooperation or defection with probability 0.5, whereas each AA adopts cooperation and defection with probability ϕ and 1-ϕ, respectively. Given these strategies, a randomly chosen human player, denoted as i, obtains its payoff by interacting with its connected neighbors,
𝒫_i = ∑_y ∈Ω_i𝒳_i^T𝒜𝒳_y ,
where Ω_i denotes the neighbor set of player i and 𝒜 is the payoff matrix given in Fig. <ref> B.
After calculating the cumulative payoff, player i decides whether to imitate one of his/her neighbors' strategy with the probability given by the Fermi function
W_𝒳_i ←𝒳_j=1/1+e^-K(𝒫_j-𝒫_i),
where j is a randomly chosen neighbor. We set K=10 in the following simulations.
Results are averaged over 60 independent realizations. For each realization, we fix the total number of steps at 50,000, and each value is averaged over 5,000 steps after the network has reached an asymptotic state.
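A condensed Python sketch of a single run is given below. It is not Algorithm II itself (which is provided in the electronic supplementary material) and simplifies several details: AAs are placed uniformly at random, elementary updates are used instead of full sweeps, and the time averaging over the final steps is omitted.

```python
import numpy as np
import networkx as nx

def run_once(G, phi, frac_AA, T, S, K=10.0, R=1.0, P=0.0, steps=50_000, seed=0):
    """One Monte Carlo run on network G: strategies are 1 (C) or 0 (D);
    a random fraction frac_AA of nodes are AAs, which never update."""
    rng = np.random.default_rng(seed)
    nodes = list(G.nodes())
    idx = {v: i for i, v in enumerate(nodes)}
    is_AA = rng.random(len(nodes)) < frac_AA
    strat = np.where(is_AA, rng.random(len(nodes)) < phi,
                     rng.random(len(nodes)) < 0.5).astype(int)

    def payoff(v):
        s = strat[idx[v]]
        return sum((R if strat[idx[u]] else S) if s else (T if strat[idx[u]] else P)
                   for u in G[v])

    for _ in range(steps):                       # elementary updates, not full sweeps
        i = nodes[rng.integers(len(nodes))]
        if is_AA[idx[i]]:
            continue                             # AAs keep their strategy
        j = list(G[i])[rng.integers(G.degree(i))]
        if rng.random() < 1.0 / (1.0 + np.exp(-K * (payoff(j) - payoff(i)))):
            strat[idx[i]] = strat[idx[j]]
    return strat[~is_AA].mean()                  # human cooperation frequency

G = nx.barabasi_albert_graph(1_000, m=2)         # small network for a quick demo
print(run_once(G, phi=0.9, frac_AA=0.2, T=1.5, S=0.5))
```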
§ ACKNOWLEDGMENTS
This research was supported by the National Science Fund for Distinguished Young Scholars (No. 62025602), the National Science Fund for Excellent Young Scholars (No. 62222606), the National Natural Science Foundation of China (Nos. 11931015, U1803263, 81961138010 and 62076238), Fok Ying-Tong Education Foundation, China (No. 171105), Technological Innovation Team of Shaanxi Province (No. 2020TD-013), Fundamental Research Funds for the Central Universities (No. D5000211001), the Tencent Foundation and XPLORER PRIZE, JSPS Postdoctoral Fellowship Program for Foreign Researchers (grant no. P21374).
|
http://arxiv.org/abs/2307.02582v1
|
20230705182951
|
Estimating the roughness exponent of stochastic volatility from discrete observations of the realized variance
|
[
"Xiyue Han",
"Alexander Schied"
] |
q-fin.ST
|
[
"q-fin.ST",
"math.PR",
"math.ST",
"stat.TH"
] |
Estimating the roughness exponent of stochastic volatility from discrete observations of the realized variance
Xiyue Han^* and Alexander Schied
Department of Statistics and Actuarial Science, University of Waterloo, 200 University Ave W, Waterloo, Ontario, N2L 3G1, Canada. E-Mails: [email protected], [email protected]. The authors gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada through grant RGPIN-2017-04054.
June 25, 2023
=================================================================================================================================================================================================================================================================================================================================================================================
We consider the problem of estimating the roughness of the volatility in a stochastic volatility model that arises as a nonlinear function of fractional Brownian motion with drift. To this end, we introduce a new estimator that measures the so-called roughness exponent of a continuous trajectory, based on discrete observations of its antiderivative. We provide conditions on the underlying trajectory under which our estimator converges in a strictly pathwise sense. Then we verify that these conditions are satisfied by almost every sample path of fractional Brownian motion (with drift). As a consequence, we obtain strong consistency theorems in the context of a large class of rough volatility models. Numerical simulations show that our estimation procedure performs well after passing to a scale-invariant modification of our estimator.
MSC2020 subject classifications: 91G70, 62P05, 60F15, 60G22
Keywords: Rough volatility, roughness exponent, fractional Brownian motion with drift, strong consistency
§ INTRODUCTION
Consider a stochastic volatility model whose price process satisfies
dS_t=σ_tS_t dB_t, S_0=s_0>0,
where B is a standard Brownian motion and σ_t is a progressively measurable stochastic process. Since the publication of the seminal paper <cit.> by Gatheral, Jaisson, and Rosenbaum, it has been widely accepted that the sample paths of σ_t often do not exhibit diffusive behavior but instead are much rougher. A specific example suggested in <cit.> is to model the log volatility by a fractional Ornstein–Uhlenbeck process. That is,
σ_t=exp(X^H_t),
where X^H solves the following integral equation
X^H_t=x_0+ρ∫_0^t(μ-X^H_s) ds+W^H_t, t≥0,
for a fractional Brownian motion W^H with Hurst parameter H∈(0,1). In this model, the roughness' of the trajectories of X^H is governed by the Hurst parameter H, and it was pointed
out
in <cit.> that rather small values of H appear to be most adequate for capturing the stylized facts of empirical volatility time series. Since the publication of <cit.>, many alternative rough volatility models have been proposed, e.g., the rough Heston model <cit.> and the rough Bergomi model <cit.>.
The present paper contributes to the literature on rough volatility by considering the statistical estimation of the degree of roughness of the volatility process σ_t.
There are several difficulties that arise in this context.
The first difficulty consists in the fact that in reality the volatility process σ_t cannot be observed directly; only the asset prices S are known. Thus, one typically computes the quadratic variation of the log stock prices,
log S_t = ∫_0^tσ^2_s ds,
which is also called the realized variance or the integrated volatility, and then performs numerical differentiation to estimate proxies σ_t for the actual values of σ_t. The roughness estimation is then based on those proxy values σ_t. For instance, this two-step procedure is underlying the statistical analysis for empirical volatilities in <cit.>, where roughness estimates were based on proxy values σ_t taken from the Oxford-Man Institute of Quantitative Finance Realized Library. A problem with that approach is that estimation errors in in the proxy values σ_t might substantially distort the outcomes of the final roughness estimation; see Fukasawa et al. <cit.> and Cont and Das <cit.>.
As a matter of fact, the quadratic variation (<ref>) is usually approximated by a finite sum of the form ∑_i(log S_t_i-log S_t_i-1)^2 based on discrete observations S_t_i of the price process. The bias caused by this error is emphasized in <cit.>, where it is assumed that the approximation errors are log-normally distributed and independent of the Brownian motion B in (<ref>), and a Whittle-type estimator for the Hurst parameter is developed based on quasi-likelihood. Another attempt to tackle this measurement error is made by Bolko et al. <cit.>, where in a similar framework, the proposed estimator is based on the generalized method of moments approach. Chong et al. <cit.> substantially extend the previous results by alleviating the assumption on proxy errors and basing the volatility model on a semi-parametric setup, in which, with the exception of the Hurst parameter of the underlying fractional Brownian motion, all components are fully non-parametric. One of the conclusions from <cit.> is that the error arising from approximating the quadratic variation (<ref>) with finite sum ∑_i(log S_t_i-log S_t_i-1)^2 can be negligible when properly controlled. For this reason, we do not consider that error source in our present paper.
Here, we analyze a new estimator for the roughness of the volatility process σ_t that is based directly on discrete observations of the quadratic variation (<ref>). Our estimator has a very simple form and can be computed with great efficiency on large data sets. It is not derived from distributional assumptions, as most other estimators in the literature, but from strictly pathwise considerations that were developed in <cit.>. As a consequence, our estimator does not actually measure the traditional Hurst parameter, which quantifies the autocorrelation of a stochastic process and hence does not make sense in a strictly pathwise setting. Instead, our estimator measures the so-called roughness exponent, which was introduced in <cit.> as the reciprocal of the critical exponent for the power variations of trajectories.
For fractional Brownian motion, this roughness exponent coincides with the Hurst parameter, but it can also be computed for many other trajectories, including certain fractal functions.
In <cit.>, we state conditions under which a given trajectory x∈ C[0,1] admits a roughness exponent R and we provide several estimators that approximate R, based on the Faber–Schauder expansion of x. In <cit.>, we derive a robust method for estimating the Faber–Schauder coefficients of x for the situation in which only the antiderivative y(t)=∫_0^tx(s) ds, and not x itself, is observed on a discrete time grid. As explained in greater detail in <Ref>, that method, when combined with one of the estimators from <cit.>, gives rise to the specific form of the estimator _n we propose here. In <Ref>, we formulate conditions on the trajectory x under which _n(v) converges to the roughness exponent of x, resting on discrete observations of the function v(t)=∫_0^tg(x(s)) ds, where g is a generic, strictly monotone C^2-function. In <Ref>, we then verify that the aforementioned conditions on the trajectory x are satisfied by almost every sample path of fractional Brownian motion (with drift). This verification yields immediately the strong consistency of our estimator for the case in which the stochastic volatility is a nonlinear function of a fractional Brownian motion with drift. This includes in particular the rough volatility model defined by (<ref>) and (<ref>). These results
are stated in <Ref>.
We believe that the fact that our estimator is built on a strictly pathwise approach makes it very versatile and applicable also in situations in which trajectories are not based on fractional Brownian motion. As a matter of fact, our Examples <ref> and <ref> illustrate that our estimation procedure can work very well for certain deterministic fractal functions.
One disadvantage of our original estimator _n is that it is not scale invariant. Using an idea from <cit.>, we thus propose a scale-invariant modification of _n in <Ref>.
The subsequent <Ref> contains a simulation study illustrating the performance of our estimators. This study illustrates that passing to the scale-invariant estimator can greatly improve the estimation accuracy in practice.
§ MAIN RESULTS
Consider a stochastic volatility model whose price process satisfies
dS_t=σ_tS_t dB_t, S_0=s_0>0,
where B is a standard Brownian motion and σ_t is a progressively measurable stochastic process.
As explained in the introduction, our goal in this paper is to estimate the roughness of the trajectories t↦σ_t directly from discrete, equidistant observations of the realized variance,
log S_t = ∫_0^tσ^2_s ds,
without having first to compute proxy values for σ_t via numerical differentiation of t↦log S_t. This is important, because in reality the volatility σ_t is not directly observable and numerical errors in the computation of its proxy values might distort the roughness estimate (see, e.g., <cit.>).
While our main results are concerned with rough stochastic volatility models based on fractional Brownian motion, a significant portion of our approach actually works in a strictly pathwise, model-free setting; see <Ref>. So let x:[0,1]→ be any continuous function.
For p ≥1, the p^th variation of the function x along the n^th dyadic partition is defined as
x^(p)_n:= ∑_k = 0^2^n-1|x((k+1)2^-n) - x(k2^-n)|^p.
If there exists R∈ [0,1] such that
lim_n ∞x^(p)_n = 0 for p > 1/R, and lim_n ∞x^(p)_n = ∞ for p < 1/R,
we follow <cit.> in referring to R as the roughness exponent of x. Intuitively, the smaller R, the rougher the trajectory x, and vice versa. Moreover, if x is a typical sample path of fractional Brownian motion, the roughness exponent R is equal to the traditional Hurst parameter (see <cit.>). An analysis of general properties of the roughness exponent can be found in <cit.>. There, we also provide an estimation procedure for R from discrete observations of the trajectory x.
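As a quick pathwise illustration (not part of the simulation study reported later), the following Python sketch computes the p-th variation along dyadic partitions from discrete samples and applies it to a simulated Brownian path, for which R=1/2, so that the p-th variation grows with n for p<2 and shrinks for p>2; the grid size and the values of p are our own choices.

```python
import numpy as np

def pth_variation(x, n, p):
    """p-th variation of x along the n-th dyadic partition of [0, 1].

    x holds samples on the grid k 2^{-N}, k = 0, ..., 2^N (len(x) = 2^N + 1, n <= N)."""
    step = (len(x) - 1) // 2 ** n
    return float(np.sum(np.abs(np.diff(x[::step])) ** p))

# Brownian sample path (roughness exponent R = 1/2): the p-th variation grows
# with n for p < 2 = 1/R and shrinks for p > 2
rng = np.random.default_rng(1)
N = 16
w = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, 2.0 ** (-N / 2), 2 ** N))))
for n in (8, 10, 12, 14):
    print(n, pth_variation(w, n, 1.5), pth_variation(w, n, 2.5))
```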
However, the problem of estimating R for a trajectory of stochastic volatility is more complex, because volatility cannot be measured directly; only asset prices and their realized variance (<ref>) can be observed. In our current pathwise setting, this corresponds to making discrete observations of
y(t)=∫_0^t g(x(s)) ds, 0≤ t≤ 1,
where g:→ is sufficiently regular. For instance, in the rough stochastic volatility model (<ref>), (<ref>), where log-volatility is given by a fractional Ornstein–Uhlenbeck process (<ref>), we will take x as a trajectory of the fractional Ornstein–Uhlenbeck process and g(t)=(e^t)^2=e^2t.
Let us now introduce our estimator. Suppose that for some given n∈ we have the discrete observations {y(k2^-n-2):k=0,…, 2^n+2} of the function y in (<ref>). Based on these data points, we introduce the coefficients
ϑ_n,k:= 2^3n/2+3(y(4k/2^n+2)-2y(4k+1/2^n+2)+2y(4k+3/2^n+2)-y(4k+4/2^n+2)),
for 0 ≤ k ≤ 2^n-1.
Our estimator for the roughness exponent of the trajectory g∘ x is now given by
_n(y): = 1-1/nlog_2 √(∑_k = 0^2^n-1ϑ_n,k^2).
This estimator was first proposed in <cit.>. In <Ref>, we provide a detailed explanation of the rationale behind the estimator _n and how it relates to the results in <cit.>.
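The coefficients ϑ_n,k and the estimator above translate directly into the following Python sketch; the function name is ours, and the input is assumed to be the vector of observations y(k2^-n-2), k=0,…,2^n+2.

```python
import numpy as np

def roughness_estimate(y):
    """Estimate of the roughness exponent from samples y(k 2^{-n-2}), k = 0, ..., 2^{n+2}."""
    n = int(np.log2(len(y) - 1)) - 2
    k = np.arange(2 ** n)
    theta = 2.0 ** (1.5 * n + 3) * (y[4 * k] - 2 * y[4 * k + 1]
                                    + 2 * y[4 * k + 3] - y[4 * k + 4])
    return 1.0 - np.log2(np.sqrt(np.sum(theta ** 2))) / n
```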
§.§ Strong consistency theorems
We can now state our main results, which show the strong consistency of _n when it is applied to
the situation in which x is a typical trajectory of fractional Brownian motion with possible drift. In the sequel, W^H=(W^H_t)_0≤ t≤1 will denote a fractional Brownian motion with Hurst parameter H, defined on a given probability space (,,).
For H∈(0,1) and a strictly monotone function g∈ C^2(), let X_t:=g(W^H_t) and
Y_t := ∫_0^tX_s ds=∫_0^tg(W^H_s) ds, 0≤ t≤ 1.
Then, with probability one, X admits the roughness exponent H and we have lim_n_n(Y)= H.
The preceding theorem solves our problem of consistently estimating the roughness exponent for a rough volatility model with σ_t^2=g(W^H_t). However, empirical
volatility is mean-reverting, and that effect is not captured by this model. Therefore, it is desirable to replace the fractional Brownian motion W^H with a mean-reverting process such as the fractional Ornstein–Uhlenbeck process.
This process
was first introduced in <cit.> as the solution of the integral equation
X^H_t=x_0+ρ∫_0^t(μ-X^H_s) ds+W^H_t, t∈[0,1],
where x_0,ρ,μ∈ℝ are given parameters. The integral equation (<ref>) can be uniquely solved in a pathwise manner. The fractional Ornstein–Uhlenbeck process was suggested by Gatheral et al. <cit.> as a suitable model for log volatility, i.e., σ_t=e^X^H_t. In our context, this model choice implies that we are making discrete observations of the process
∫_0^t σ^2_s ds=∫_0^t e^2X^H_s ds, 0≤ t≤1.
The fractional Ornstein–Uhlenbeck process can simply be regarded as a fractional Brownian motion with starting point x_0 and adapted and absolutely continuous drift ρ(μ-X^H_s), and so it falls into the class of stochastic processes considered in the following theorem, which we are quoting from <cit.> for the convenience of the reader.
Let X be given by
X_t:=x_0+W^H_t+∫_0^tξ_s ds, 0≤ t≤ 1,
where ξ is progressively measurable with respect to the natural filtration of W^H and satisfies the following additional assumption.
* If H<1/2, we assume that t↦ξ_t is -a.s. bounded in the sense that there exists a finite random variable C such that ξ_t()≤ C() for a.e. t and -a.e. ∈.
* If H>1/2, we assume that ξ_0=0 and that t↦ξ_t is -a.s. Hölder continuous with some exponent α>2H-1.
Then the law of (X_t)_t∈[0,1] is absolutely continuous with respect to the law of (x_0+W^H_t)_t∈[0,1].
More specifically, if X is a solution of the fractional integral equation
X_t=x_0+∫_0^tb(X_s) ds+W^H_t, 0≤ t≤1,
where b:→ is locally bounded and, for H>1/2, locally Hölder continuous with some exponent α>2-1/H, it is further stated in <cit.> that the law of (X_t)_t ∈ [0,1] is equivalent to the law of (x_0 + W^H_t)_t ∈ [0,1]. This applies in particular to the fractional Ornstein–Uhlenbeck process X^H defined in (<ref>), where b(x) = ρ(μ - x).
The main result of our paper is now an immediate corollary of Theorems <ref> and <ref>.
Suppose that X is as in <Ref> and g∈ C^2() is strictly monotone. Then the stochastic process
g(X_t)=g(x_0+∫_0^tξ_s ds+W^H_t), 0≤ t≤1,
admits -a.s. the roughness exponent H, and for
Y_t = ∫_0^tg(X_s) ds we have lim_n_n(Y) =H -a.s.
By <Ref>, adding a drift to fractional Brownian motion can also be regarded as changing the underlying probability law. <Ref> can therefore also be stated as follows: The strong consistency of _n observed in <Ref> remains true after replacing the law of W^H with a law that arises in the context of <Ref>. This invariance can be seen as robustness of _n with respect to model misspecification. In addition, the strong consistency of our estimator is unaffected by changes of the nonlinear scale function g, which is yet another indication of the estimator's robustness and versatility.
§.§ A scale-invariant estimator
By definition, the roughness exponent is scale-invariant, but our estimator is not. To wit, for every trajectory y∈ C[0,1] we have
_n(λ y) - _n(y) = -log_2 |λ|/n for λ≠ 0.
Consequently, a scaling factor λ may either remove or introduce a bias into an estimate and it can notably slow down or speed up the convergence of _n(y). This will be illustrated by the simulation studies provided in <Ref>.
A number of scale-invariant modifications of _n can be constructed in a manner completely analogous to the definitions in <cit.>. Here, we carry this out for the analogue of sequential scaling proposed in <cit.>. The underlying idea is fairly simple: We choose m<n and then search for that scaling factor λ that minimizes the weighted mean-squared differences _k(λ y) - _k-1(λ y) for k = n-m,…,n. The intuition is that such an optimal scaling factor λ enforces the convergence of the estimates _k(λ y).
Fix m∈ and α_0,…,α_m≥0 with α_0>0.
For n>m, the sequential scaling factor λ_n^s and the sequential scale estimate ^s_n(y) are defined as follows,
λ^s_n := _λ > 0∑_k = n-m^nα_n-k(_k(λ y) - _k-1(λ y))^2 and ^s_n(y):= _n(λ^s_n y).
The corresponding mapping ^s_n:C[0,1]→ will be called the sequential scale estimator.
Just as Proposition 5.3 in <cit.>, one can prove the following result.
Consider the context of Definition <ref> with fixed m∈ and α_0,…,α_m≥0 such that α_0>0.
* The optimization problem (<ref>) admits a unique solution for every function y∈ C[0,1]. In particular, all objects in Definition <ref> are well defined.
* The sequential scale estimator ^s_n can be represented as follows as a linear combination of _n-m-1,…, _n,
R_n^s = β_n,n_n + β_n,n-1_n-1 + ⋯ + β_n,n-m-1_n-m-1,
where
β_n,k=1+α_0/c^ s_nn^2(n-1) if k=n,
1/c^ s_nnk(α_n-k/k-1-α_n-k-1/k+1) if n-m≤ k≤ n-1,
-α_m/c^ s_nn(n-m)(n-m-1) if k= n-m-1, for c^s_n:= ∑_k = n-m^nα_n-k/k^2(k-1)^2.
* The sequential scale estimator is scale-invariant. That is, for n>m, y ∈ C[0,1], and λ≠ 0, we have ^s_n(λ y) = ^s_n(y).
* If y∈ C[0,1] and R∈[0,1] are such that there exists λ≠ 0 for which |_n(λ y)-R|=O(a_n) as n∞ for some sequence (a_n) with a_n=o(1/n), then |^s_n(y)-R| =O(na_n).
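In practice, the minimization defining λ^s_n reduces to a one-dimensional quadratic problem in μ=log_2λ, since replacing y by λ y shifts the level-k estimate by -μ/k. The following Python sketch exploits this closed form; the window length m=4 and the uniform weights α_j=1 are illustrative choices of ours, and the routine requires n≥ m+2.

```python
import numpy as np

def estimates_all_levels(y, n_max):
    """Raw roughness estimates for k = 1, ..., n_max, from y sampled on the grid j 2^{-n_max-2}."""
    est = {}
    for k in range(1, n_max + 1):
        yk = y[::(len(y) - 1) // 2 ** (k + 2)]       # sub-sample to the level-k grid
        j = np.arange(2 ** k)
        theta = 2.0 ** (1.5 * k + 3) * (yk[4 * j] - 2 * yk[4 * j + 1]
                                        + 2 * yk[4 * j + 3] - yk[4 * j + 4])
        est[k] = 1.0 - np.log2(np.sqrt(np.sum(theta ** 2))) / k
    return est

def sequential_scale_estimate(y, n, m=4, alpha=None):
    """Scale-invariant estimate: optimize mu = log2(lambda) in closed form, using
    that scaling y by lambda shifts the level-k estimate by -mu/k."""
    alpha = np.ones(m + 1) if alpha is None else np.asarray(alpha, float)
    est = estimates_all_levels(y, n)
    ks = np.arange(n - m, n + 1)
    d = np.array([est[k] - est[k - 1] for k in ks])   # unscaled increments
    c = 1.0 / (ks * (ks - 1.0))                       # increment shift per unit of mu
    w = alpha[n - ks]
    mu = -np.sum(w * c * d) / np.sum(w * c * c)       # minimizer of the weighted sum
    return est[n] - mu / n                            # estimate after optimal rescaling
```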
§.§ Simulation study
In this section, we illustrate the practical application of <Ref>, <Ref>, and <Ref> by means of simulations.
We will see that the estimation performance can be significantly boosted by replacing _n with the sequential scale estimator ^s_n.
We start by illustrating <Ref> for the simple choice g(x)=x. Recall from (<ref>) and (<ref>) that for given n∈, the computation of _n(y) requires observations of the trajectory y at all values of the time grid _n+2:={k2^-n-2:k=0,1,…, 2^n+2}. When using for y the antiderivative of a sample path of fractional Brownian motion W^H, we generate the values of W^H on the finer grid _N with N=n+6. Then we put
Y_k2^-n-2:=2^-N∑_j = 1^2^N-n-2kW^H_j2^-N, k=0,1,…, 2^n+2,
which is an approximation of ∫_0^tW^H_s ds by Riemann sums. Our corresponding simulation results are displayed in <Ref>.
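A self-contained Python sketch of this experiment is given below. To keep the exact Cholesky simulation of W^H small, it uses the coarser refinement N=n+4 instead of the N=n+6 used for the figures; all function names are ours.

```python
import numpy as np

def fbm_path(N, H, rng):
    """W^H on the grid k 2^{-N}, k = 0, ..., 2^N, via Cholesky factorization of the
    fractional-Gaussian-noise covariance (exact, but O(2^{3N}) -- keep N moderate)."""
    M, dt = 2 ** N, 2.0 ** (-N)
    k = np.abs(np.subtract.outer(np.arange(M), np.arange(M)))
    cov = 0.5 * dt ** (2 * H) * ((k + 1.0) ** (2 * H) + np.abs(k - 1.0) ** (2 * H)
                                 - 2.0 * k ** (2 * H))
    incr = np.linalg.cholesky(cov) @ rng.normal(size=M)
    return np.concatenate(([0.0], np.cumsum(incr)))

def estimate_from_antiderivative(y):
    """The estimator defined in the text, applied to samples y(k 2^{-n-2}), k = 0, ..., 2^{n+2}."""
    n = int(np.log2(len(y) - 1)) - 2
    k = np.arange(2 ** n)
    theta = 2.0 ** (1.5 * n + 3) * (y[4 * k] - 2 * y[4 * k + 1]
                                    + 2 * y[4 * k + 3] - y[4 * k + 4])
    return 1.0 - np.log2(np.sqrt(np.sum(theta ** 2))) / n

rng = np.random.default_rng(0)
n, N, H = 7, 11, 0.3
w = fbm_path(N, H, rng)
# right Riemann sums approximating Y(k 2^{-n-2}), sub-sampled to the estimation grid
Y = (2.0 ** (-N) * np.concatenate(([0.0], np.cumsum(w[1:]))))[:: 2 ** (N - n - 2)]
print(estimate_from_antiderivative(Y))   # typically in the vicinity of H = 0.3
```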
As one can see from <Ref>, the estimator _n performs relatively well but also exhibits a certain bias.
This bias can be completely removed by passing to the scale-invariant estimator ^s_n; see <Ref>.
Now we apply our estimator _n to a model in which log-volatility, logσ_t, is given by a fractional Ornstein–Uhlenbeck process X^H of the form
X^H_t=x_0+ρ∫_0^t(μ-X^H_s) ds+W^H_t, t∈[0,1],
and we make discrete observations of the process
∫_0^t σ^2_s ds=∫_0^t e^2X^H_s ds, 0≤ t≤1.
To this end, we take again N=n+6 and simulate the values X^H_k2^-N (k=0,…, 2^N) by means of an Euler scheme. Then we put
Y^σ_k2^-n-2:=2^-N∑_j = 1^2^N-n-2kexp(2X^H_j2^-N), k=0,1,…, 2^n+2,
which is an approximation of ∫_0^te^2X^H_s ds by Riemann sums. As one can see from <Ref>, the original estimator _n performs rather poorly in this case, while the sequential scale estimator ^s_n performs almost as well as for the simple case Y_t=∫_0^tW^H_s ds. This is due to the fact that the function g(t)=e^2t used in (<ref>) distorts substantially the scale of the underlying process, but this distortion can be remedied by using the sequential scale estimator.
§ PATHWISE ESTIMATION
In this section, we formulate conditions on a single trajectory x∈ C[0,1] and its antiderivative y(t)=∫_0^tx(s) ds under which the estimates _n(y) converge to the roughness exponent of x. In <Ref>, we will then verify that these conditions are satisfied for the typical sample paths of fractional Brownian motion. The results in the present section are hence of independent interest in situations in which it is not clear whether a given trajectory x arises from fractional Brownian motion.
We start by summarizing some key results and concepts from <cit.> and also outline our rationale behind the specific form of the estimator _n.
§.§ The rationale behind the estimator _n
Recall that the Faber–Schauder functions are defined as
e_-1,0(t):= t, e_0,0(t):= (min{t,1-t})^+, e_m,k(t):= 2^-m/2e_0,0(2^mt-k)
for t ∈ℝ, m ∈ℕ_0, and k ∈ℤ. It is well known that the restrictions of the Faber–Schauder functions to [0,1] form a Schauder basis for C[0,1]. More precisely, our function x ∈ C[0,1] can be uniquely represented as the uniform limit x=lim_nx_n, where
x_n= x(0)+(x(1)-x(0))e_-1,0 + ∑_m = 0^n-1∑_k = 0^2^m-1θ_m,ke_m,k,
and the Faber–Schauder coefficients θ_m,k are given by
θ_m,k = 2^m/2(2x(2k+1/2^m+1)-x(k/2^m)-x(k+1/2^m)).
As a matter of fact, it is easy to see that the function x_n is simply the linear interpolation of x based on the supporting grid _n={k2^-n:k=0,…, 2^n}.
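The coefficient formula above translates directly into the following Python sketch, which computes the generation-m coefficients from samples of x on a dyadic grid (the function name is ours).

```python
import numpy as np

def faber_schauder_coefficients(x, m):
    """theta_{m,k}, k = 0, ..., 2^m - 1, from samples of x on a dyadic grid of size
    2^N + 1 with N >= m + 1."""
    xm = x[::(len(x) - 1) // 2 ** (m + 1)]        # values on the grid k 2^{-m-1}
    k = np.arange(2 ** m)
    return 2.0 ** (m / 2) * (2 * xm[2 * k + 1] - xm[2 * k] - xm[2 * k + 2])
```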
In <cit.>, we derived simple conditions under which the trajectory x admits a roughness exponent R∈[0,1] and also suggested a way in which R can be estimated from discrete observations of x. Specifically, it follows from Theorem 2.5 and Proposition 4.8 in <cit.> that, if the Faber–Schauder coefficients satisfy the so-called reverse Jensen condition (see Definition 2.4 in <cit.>) and the sequence
R^*_n(x):=1-1/nlog_2√(∑_k=0^2^n-1θ_n,k^2)
converges to a finite limit R, then x admits the roughness exponent R.
Note that it is assumed in <cit.> that the trajectory x can be observed directly. This, however, is not the case in the context of our present paper, where
x is the (squared) volatility in a stochastic volatility model. So let us suppose now
that we can only observe the values the antiderivative y(t)=∫_0^tx(s) ds takes on the supporting grid _n+2. If we can interpolate the data points {y(t):t∈_n+2} by means of a piecewise quadratic function y_n+2∈ C^1(), then its derivative y'_n+2 will be a continuous and piecewise linear function with supporting grid _n+1 and hence representable in the form
y'_n+2=x̂_0+θ_-1,0e_-1,0 + ∑_m = 0^n+1∑_k = 0^2^m-1θ_m,ke_m,k
for some initial value x̂_0 and certain coefficients θ_m,k. Such a piecewise quadratic C^1-interpolation y_n+2 exists in the form of the standard quadratic spline interpolation. Unfortunately, though, it is well known that quadratic spline interpolation suffers some serious drawbacks:
* the initial value x̂_0 is not uniquely determined by the given data {y(t):t∈_n+2};
* the values y_n+2(t) depend in a highly sensitive manner on the choice of x̂_0;
* the values y_n+2(s) depend in a nonlocal way on the given data {y(t):t∈_n+2}, i.e., altering one data point y(t) may affect the value y_n+2(s) also if s is located far away from t.
In <cit.>, we investigate the analytical properties of the estimated
Faber–Schauder coefficients θ_m,k defined in (<ref>). It turns out that, when looking at quadratic spline interpolation through the lens of these coefficients, a miracle occurs.
To see what happens, let us recall from <cit.> the formula for the Faber–Schauder coefficients of y'_n+2 for the generations m=0,…, n and for generation n+1,
θ_m,k = 2^n+m/2+3∑_j = 1^2^n+1-m(-1)^j(y(k/2^m+j/2^n+2)-y(k/2^m+j-1/2^n+2)+y(k+1/2^m-j-1/2^n+2)-y(k+1/2^m-j/2^n+2)),
θ_n+1,k = -2^(n+1)/2+2x̂_0 -2^3(n+1)/2+4∑_j = 1^2k(-1)^j(y(j/2^n+2)-y(j-1/2^n+2))
+3 · 2^3(n+1)/2+2(y(2k+1/2^n+2)-y(2k/2^n+2))-2^3(n+1)/2+2(y(2k+2/2^n+2)-y(2k+1/2^n+2)).
As one can see immediately from those formulas, the coefficients in generations m=0,… n are independent of x̂_0, whereas the coefficients in generation n+1 contain the additive term -2^(n+1)/2+2x̂_0, which translates any error made in estimating x̂_0 into a 2^(n+1)/2+2-fold error for each final-generation coefficient.
Moreover, for m=0,… n, each θ_m,k depends only on those data points y(t) for which t belongs to the closure of the support of the corresponding wavelet function e_m,k. Thus, the entire nonlocality of the function y_n+2 arises from the coefficients in generation n+1, while the coefficients of all lower generations depend only locally on the given data. We refer to <cit.> for an illustration.
The main results in <cit.> concern error bounds for the estimated Faber–Schauder coefficients θ_m,k. Specifically, we found that the ℓ_2-norm of the combined errors in generations m=0,… n is typically benign, whereas the error in the final generation m=n+1 can be as large as a factor of order 𝒪(2^n) times the error of all previous generations combined. While the exact error bounds from <cit.> will not be needed in our present paper, the proof of <Ref> will rely on an algebraic representation of the error terms obtained in <cit.> and stated in <Ref> below.
The above-mentioned facts make it clear that the coefficients in generations m=0,… n provide robust estimates for the corresponding true coefficients, while the estimates θ_n+1,k are highly non-robust and should be discarded. It is now obvious that in estimating the roughness exponent of x from the data {y(t):t∈_n+2}, we should replace the true coefficients θ_n,k in our formula (<ref>) for R^*_n(x) with their estimates θ_n,k. It remains to note that θ_n,k is in fact equal to ϑ_n,k defined in (<ref>), so that we finally arrive at the rationale behind our estimator _n.
The following example provides a concrete instance where choosing the final-generation coefficients θ_n+1,k instead of ϑ_n,k=θ_n,k leads to an estimate that is non-robust and also otherwise inferior.
For R∈(0,1], let x^R∈ C[0,1] be the function with Faber–Schauder coefficients θ_n,k = 2^n(1/2-R). These functions belong to the well-studied class of fractal Takagi–Landsberg functions. It was shown in <cit.> that x^R has the roughness exponent R. Moreover, for y^R(t) = ∫_0^tx^R(s) ds, it was shown in <cit.> that the robust approximation (<ref>) based on discrete observations of y^R recovers exactly the Faber–Schauder coefficients of x^R. That is, for n ∈ and 0 ≤ k ≤ 2^n-1, we have
ϑ_n,k = θ_n,k = 2^n(1/2-R).
It follows that
_n(y^R)= 1 - 1/nlog_2√(∑_k = 0^2^n-12^(1-2R)n) = 1 - 1/nlog_22^(1-R)n = R.
Hence, the estimator _n is not only consistent but also exact in the sense that it gives the correct value R for every finite n.
Now we replace ϑ_n,k=θ_n,k with the final-generation estimates θ_n+1,k as defined in (<ref>). Note that this requires the choice of an initial value x̂_0. The corresponding estimator is given by
_n(y^R): = 1 - 1/n+1log_2 √(∑_k = 0^2^n+1-1θ_n+1,k^2).
We get from <cit.> that for R<1/2,
θ_n+1,k = -2^(n+1)/2+2x̂_0 + ∑_m = n+1^∞ 2^m(1/2-R) = -2^(n+1)/2+2x̂_0+ 2^(n+1)(1/2-R)/1 - 2^1/2-R.
Hence,
√(∑_k = 0^2^n+1-1θ_n+1,k^2 )= 2^n+1|-4x̂_0+ 2^-(n+1)R/1 - 2^1/2-R| .
It follows that
lim_n ∞_n(y^R)= 0 if x̂_0≠0,
R if x̂_0=0.
This shows that the estimator _n is extremely sensitive with respect to the estimate x̂_0 of the exact initial value x(0), which in typical applications will be unknown. Even in the case that x(0) is known, the correct value R is only obtained asymptotically, whereas _n(y^R)=R for all finite n. These observations illustrate once again why we deliberately discard the final generation θ_n+1,k of estimated Faber–Schauder coefficients.
§.§ Pathwise consistency of _n(y)
Let us fix x ∈ C[0,1] and denote by θ_m,k its Faber–Schauder coefficients (<ref>). As before, we denote by y(t)=∫_0^tx(s) ds the antiderivative of x and by ϑ_n,k the coefficients defined in (<ref>). To be consistent with <cit.>, we introduce the following vector notation,
θ̅_n := (θ_n,0, θ_n,1, ⋯, θ_n,2^n-1)^⊤∈^2^n and ϑ̅_n = (ϑ_n,0, ϑ_n,1, ⋯, ϑ_n,2^n-1)^⊤∈^2^n,
Then the estimators R^*_n and _n defined in (<ref>) and (<ref>) can be written as
R_n^*(x)=1-1/nlog_2θ̅_n_ℓ_2 and _n(y)=1-1/nlog_2ϑ̅_n_ℓ_2.
Following <cit.>, we introduce the column vector z_n:= (z^(n)_i)_1 ≤ i ≤ 2^n with components
z^(n)_i = 2^3n/2∑_m = n^∞2^-3m/2∑_k = 0^2^m-n-1θ_m,k+2^m-n(i-1) for 1 ≤ i ≤ 2^n.
As observed in <cit.>, the infinite series in (<ref>) converges absolutely if x satisfies a Hölder condition, and for simplicity we are henceforth going to make this assumption.
For 1 ≤ i,j ≤ 2^n, we let furthermore
η_i,j = r for 1 ≤ i = j ≤ 2^n,
0_1 × 4 for 1 ≤ i ≠ j ≤ 2^n,
where r := 1/4(-1,+1,+1,-1), and 0_m × n denotes the m × n-dimensional zero matrix. Moreover, we denote
Q_n:= [
[ η_1,1 η_1,2 ⋯ η_1,2^n-1 η_1,2^n; η_2,1 η_2,2 ⋯ η_2,2^n-1 η_2,2^n; ⋮ ⋮ ⋱ ⋮ ⋮; η_2^n,1 η_2^n,2 ⋯ η_2^n,2^n-1 η_2^n,2^n; ]] ∈^2^n × 2^n+2.
It was shown in <cit.> that the error between the true and estimated Faber–Schauder coefficients can be represented as follows,
ϑ̅_n-θ̅_n= w_n, where w_n = Q_n z_n+2∈^2^n.
Consider the following condition:
There exists κ∈∖{1} such that w_n_ℓ_2/θ̅_n_ℓ_2κ as n∞.
We will see in <Ref> that condition (<ref>) is -a.s. satisfied for fractional Brownian motion.
Under condition (<ref>), there exist n_0∈ and constants 0<κ_-≤κ_+<∞ such that
κ_-θ̅_n_ℓ_2≤ϑ̅_n_ℓ_2≤κ_+θ̅_n_ℓ_2 for all n≥ n_0.
Let κ be as in (<ref>). Then, for any ε < |κ - 1|/2, there exists n_ε∈ such that for n ≥ n_ε, we have θ̅_n_ℓ_2(κ - ε) < w_n_ℓ_2 < θ̅_n_ℓ_2(κ + ε). Using the representation (<ref>) and applying the triangle inequality gives
ϑ̅_n_ℓ_2 = θ̅_n - w_n_ℓ_2≤θ̅_n_ℓ_2 + w_n_ℓ_2≤ (κ + ε + 1) θ̅_n_ℓ_2.
On the other hand, we have
ϑ̅_n_ℓ_2 = θ̅_n - w_n_ℓ_2≥|θ̅_n_ℓ_2 - w_n_ℓ_2| ≥(|1 - κ - ε| ∧ |1 - κ + ε|) θ̅_n_ℓ_2.
This completes the proof.
By taking logarithms in (<ref>), <Ref> immediately yields the following result.
Under condition (<ref>), the limit lim_n_n(y) exists if and only if lim_n R^*_n(x) exists. Moreover, in this case, lim_n_n(y) = lim_n R^*_n(x).
In the situation of <Ref>, we have seen that ϑ_n,k = θ_n,k = 2^n(1/2-R). Applying the representation (<ref>) yields that w_n = 0_2^n× 1. This implies
lim_n w_n_ℓ_2/θ̅_n_ℓ_2 = 0. That is, x^R satisfies condition (<ref>).
Hence, <Ref> applies, which gives an additional proof of the previously observed fact that lim_n_n(y)= R.
Next, we consider the following question: Under which conditions on x and g does u:=g∘ x admit the roughness exponent R?
To answer this question, we fix the following notation throughout the remainder of this section,
u(t)=g(x(t)) and v(t)=∫_0^tu(s) ds=∫_0^tg(x(s)) ds.
If x admits the roughness exponent R, g belongs to C^1(), and g' is nonzero on the range x([0,1]) of x, then u = g∘ x also admits the roughness exponent R.
For any p > 0, the mean value theorem and the intermediate value theorem yield numbers τ_n,k∈ [k2^-n,(k+1)2^-n] such that
u^(p)_n = ∑_k = 0^2^n-1|g'(x(τ_n,k))(x(k+1/2^n)-x(k/2^n))|^p = ∑_k = 0^2^n-1|g'(x(τ_n,k))|^p|x(k+1/2^n)-x(k/2^n)|^p,
where the notation u^(p)_n was introduced in (<ref>).
Since g' is continuous and nonzero, there are constants 0<c_-<c_+<∞ such that c_-≤|g'(x(t))|≤ c_+ for all t∈[0,1]. Hence, c_-^p x^(p)_n≤u^(p)_n≤ c_+^p x^(p)_n holds for all n. Passing to the limit n∞ for p>1/R and p<1/R yields the result.
Now we turn to the following question: Under which conditions do we have _n(v)→R, where v is as in (<ref>)? The conditions we are going to introduce for answering this question are relatively strong. Nevertheless, they hold for the sample paths of fractional Brownian motion.
Suppose there exists R∈(0,1) such that the following conditions hold.
* We have
0< lim inf_n ∞2^n(2R-2)∑_k = 0^2^n-1ϑ_n,k^2≤lim sup_n ∞2^n(2R-2)∑_k = 0^2^n-1ϑ_n,k^2<∞.
* The function x is Hölder continuous with exponent α∈(2R/5,1].
Then, if g∈ C^2() is strictly monotone, we have lim_n_n(v)=R.
In this proof, we will work with the actual and estimated Faber–Schauder coefficients of the various functions x, y, u, and v. For this reason, we will temporarily use a superscript to indicate from which function the Faber–Schauder coefficients will be computed. That is, for any function f, we write
θ^f_n,k = 2^n/2(2f(2k+1/2^n+1)-f(k/2^n) - f(k+1/2^n)),
ϑ^f_n,k = 2^3n/2+3(f(4k/2^n+2)-2f(4k+1/2^n+2)+2f(4k+3/2^n+2)-f(4k+4/2^n+2)).
With this notation, the coefficients ϑ_n,k in (<ref>) should be re-written as ϑ_n,k^y. In particular, (<ref>) refers to the coefficients ϑ^y_n,k.
Our goal in this proof is to show that (<ref>) carries over to the coefficients ϑ^v_n,k. That is,
0<lim inf_n ∞2^n(2R-2)∑_k = 0^2^n-1(ϑ^v_n,k)^2≤lim sup_n ∞2^n(2R-2)∑_k = 0^2^n-1(ϑ^v_n,k)^2<∞.
Taking logarithms, dividing by 2n, and passing to the limit will then yield
R-_n(v)→0, which is the assertion.
It remains to establish (<ref>).
Rewriting the second line in (<ref>)
gives after a short computation that
ϑ^f_n,k = 2^n+5/2(θ^f_n+1,2k+1-θ^f_n+1,2k).
Let us introduce the notation θ_m,k^f(s):=θ_m.k^f(s+·). That is, θ_m,k^f(s) are the Faber–Schauder coefficients of the function t↦ f(s+t) for given s≥0. One can avoid undefined arguments of functions in case s+t>1 by assuming without loss of generality that all occurring functions on [0,1] are in fact defined on all of [0,∞). With this notation,
we get from (<ref>) that for f∈ C^1[0,∞),
ϑ^f_n,k = 2^n+5/2∫_0^2^-n-1θ^f'_n+1,2k(s) ds .
Applying the mean-value theorem and the intermediate value theorem yields certain intermediate times τ_n+2,k(s) ∈ [2^-n-2k+s, 2^-n-2(k+1)+s] such that for s ∈ [0,2^-n-1],
θ^u_n+1,2k(s) = 2^(n+1)/2(2u(4k+1/2^n+2+s)-u(4k/2^n+2+s)-u(4k+2/2^n+2+s))
= 2^(n+1)/2 g'(x(τ_n+2,4k(s)))(x(4k+1/2^n+2+s)-x(4k/2^n+2+s))
+2^(n+1)/2 g'(x(τ_n+2,4k+1(s)))(x(4k+1/2^n+2+s)-x(4k+2/2^n+2+s)),
=2^(n+1)/20.9(g'(x(τ_n+2,4k(s)))+g'(x(τ_n+2,4k+1(s)))/2(2x(4k+1/2^n+2+s)-x(4k/2^n+2+s)-x(4k+2/2^n+2+s)))
+ 2^(n+1)/2(g'(x(τ_n+2,4k(s)))-g'(x(τ_n+2,4k+1(s)))/2(x(4k+2/2^n+2+s)-x(4k/2^n+2+s))).
The intermediate value theorem and the mean-value theorem also imply that there are intermediate times τ^♯_n+1,2k(s),τ^♭_n+1,2k(s) ∈ [2^-n-12k+s,2^-n-1(2k+1)+s] such that
1/2(g'(x(τ_n+2,4k(s)))+g'(x(τ_n+2,4k+1(s)))) = g'(τ^♯_n+1,2k(s)),
1/2(g'(x(τ_n+2,4k(s)))-g'(x(τ_n+2,4k+1(s)))) = 1/2g”(τ^♭_n+1,2k(s))(x(τ_n+2,4k(s))-x(τ_n+2,4k+1(s))).
With the shorthand notation
ζ^x_n+1,2k(s):= 2^(n+1)/2(x(4k+2/2^n+2+s)-x(4k/2^n+2+s))(x(τ_n+2,4k(s))-x(τ_n+2,4k+1(s))),
we then have
θ^u_n+1,2k(s) =g'(τ^♯_n+1,2k(s))θ^x_n+1,2k(s) + g”(τ^♭_n+1,2k(s))ζ^x_n+1,2k(s).
Plugging the preceding equation into (<ref>) and applying the mean value theorem for integrals yields intermediate times τ^♯_n+1,k, τ^♭_n+1,k∈ [2^-n-1k,2^-n-1(k+1)] that are independent of s such that
ϑ^v_n,k = 2^n+5/2g'(x(τ^♯_n+1,2k))∫_0^2^-n-1θ^x_n+1,2k(s) ds + 2^n+5/2g”(x(τ^♭_n+1,2k))∫_0^2^-n-1ζ^x_n+1,2k(s) ds
= g'(x(τ^♯_n+1,2k))ϑ^y_n,k + 2^n+5/2g”(x(τ^♭_n+1,2k))∫_0^2^-n-1ζ^x_n+1,2k(s) ds.
Introducing the shorthand notation
ζ^x_n+1,2k:= 2^n+5/2∫_0^2^-n-1ζ^x_n+1,2k(s) ds,
we can write
( ϑ^v_n,k)^2
= (g'(x(τ^♯_n+1,2k)))^2(ϑ^y_n,k)^2 + (g”(x(τ^♭_n+1,2k)))^2(ζ^x_n+1,2k)^2
+ 2g'(x(τ^♯_n+1,2k))g”(x(τ^♭_n+1,2k))ϑ^y_n,kζ^x_n+1,2k.
For each of the three terms on the right, we will now analyze its contribution to the quantities in (<ref>).
The main contribution comes from the first term on the right. Indeed, our assumptions on g imply that there are constants 0<c_-≤ c_+<∞ such that c_-<(g'(x(t)))^2<c_+ for all t∈[0,1], and so
c_-2^n(2R-2)∑_k = 0^2^n-1(ϑ^y_n,k)^2≤ 2^n(2R-2)∑_k = 0^2^n-1(g'(x(τ^♯_n+1,2k)))^2(ϑ^y_n,k)^2≤ c_+2^n(2R-2)∑_k = 0^2^n-1(ϑ^y_n,k)^2.
This will establish (<ref>) as soon as
we have shown that the contributions of the two remaining terms in (<ref>) are asymptotically negligible. For the second term, we use the Hölder continuity of x to get a constant c_x for which
|x(τ_n+2,4k(s))-x(τ_n+2,4k+1(s))| ≤ c_x|τ_n+2,4k(s) - τ_n+2,4k+1(s)|^α≤ c_x2^-α n
Furthermore, there exists κ_x > 0 such that 32 (g”(x(s)))^2 ≤κ_x for all s∈[0,1]. Then,
2^(2R-2)n∑_k = 0^2^n-1(g”(x(τ^♭_n+1,2k)))^2(ζ_n+1,2k)^2
= 2^(2R-2)n∑_k = 0^2^n-1(g”(x(τ^♭_n+1,2k)))^2(2^n+5/2∫_0^2^-n-1ζ^x_n+1,2k(s) ds)^2
≤κ_x 2^2Rn∑_k = 0^2^n-1(∫_0^2^-n-1ζ^x_n+1,2k(s) ds)^2 ≤κ_x 2^2Rn∑_k = 0^2^n-12^-n-1∫_0^2^-n-1(ζ^x_n+1,2k(s))^2 ds
≤κ_x 2^(2R-1)n∫_0^2^-n-1∑_k = 0^2^n-1(x(4k+2/2^n+2+s)-x(4k/2^n+2+s))^2(x(τ_n+2,4k(s))-x(τ_n+2,4k+1(s)))^2 ds
≤κ_x 2^α n∫_0^2^-n-12^(2R-1-α)n∑_k = 0^2^n-1(x(2k+1/2^n+1+s)-x(2k/2^n+1+s))^2(c^2_x2^-2α n) ds
≤κ_x c^2_x ∫_0^2^-n-1sup_s ∈ [0,2^-n-1](2^(2R-1-3α)n∑_k = 0^2^n+1-1(x(2k+1/2^n+1+s)-x(2k/2^n+1+s))^2) ds.
Moreover, <ref> implies that the integrand in the final term converges to zero:
lim_n∞sup_0≤ s≤ 2^-n-12^(2R-1-3α)n∑_k = 0^2^n+1-1(x(2k+1/2^n+1+s)-x(2k/2^n+1+s))^2=0.
Indeed, by the Hölder continuity of x, we can again use the constant c_x to get
2^(2R-1-3α)n∑_k = 0^2^n+1-1(x(2k+1/2^n+1+s)-x(2k/2^n+1+s))^2 ≤ 2^(2R-1-3α)n· 2^n+1· c_x^22^-2(n+1)α;
the right-hand side is equal to c_x^2·2^1-2α·2^(2R-5α)n, which converges to zero as n∞. Altogether, this shows that the contribution of the second term on the right-hand side of (<ref>) is negligible.
For the cross-product term on the rightmost side of (<ref>), we get from the Cauchy–Schwarz inequality,
lim_n ∞2^(2R-2)n∑_k = 0^2^n-1g'(x(τ^♯_n+1,2k))g”(x(τ^♭_n+1,2k))ϑ^y_n,kζ^x_n+1,2k
≤√(lim_n ∞2^(2R-2)n∑_k = 0^2^n-1(g'(x(τ^♯_n+1,2k)))^2(ϑ^y_n,k)^2)√(lim_n ∞2^(2R-2)n∑_k = 0^2^n-1(g”(x(τ^♭_n+1,2k)))^2(ζ^x_n+1,2k)^2) = 0.
Altogether, (<ref>) follows.
To conclude this section, we state and prove a lemma, which will be needed for the proof of <Ref>. For possible future reference, we include it into our present pathwise context.
For n,k ∈, let us consider the vector z_(n,k) = (z^(n,k)_i) ∈^2^n, where
z^(n,k)_i = 2^3n/2∑_m = n^n+k2^-3m/2∑_j = 0^2^m-n-1θ_m,j+2^m-n(i-1) for 1 ≤ i ≤ 2^n.
It is clear that the vector z_(n,k) is a truncated version of the vector z_n defined in (<ref>). Since each Faber–Schauder coefficient θ_m,k is a linear combination of the values x(j2^-n-k-1), each z^(n,k)_i must admit the following representation,
z^(n,k)_i = ∑_j = 0^2^n+k+1ξ^(n,k,i)_j x(j/2^n+k+1),
for certain coefficients ξ^(n,k,i)_j. The following lemma computes the values of these coefficients.
We have
ξ_j^(n,k,i) =
0 if j ≤ 2^k+1(i-1)-1 or j ≥ 2^k+1i+1,
2^n/2(2^-k-2) if j = 2^k+1(i-1) or j = 2^k+1i,
2^1-k+n/2 if 2^k+1(i-1)+1 ≤ j ≤ 2^k+1i-1.
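Before turning to the inductive proof, we note that the closed form can be cross-checked numerically. The sketch below (ours, purely illustrative) compares the formula with the defining sum for z^(n,k)_i; the test function x is an arbitrary smooth choice.

```python
# Numerical cross-check (ours) of the closed-form coefficients xi against the defining sum.
import math

def theta(x, m, j):
    return 2 ** (m / 2) * (2 * x((2 * j + 1) / 2 ** (m + 1)) - x(j / 2 ** m) - x((j + 1) / 2 ** m))

def z_direct(x, n, k, i):
    return 2 ** (1.5 * n) * sum(2 ** (-1.5 * m)
                                * sum(theta(x, m, j + 2 ** (m - n) * (i - 1)) for j in range(2 ** (m - n)))
                                for m in range(n, n + k + 1))

def xi(n, k, i, j):
    if j <= 2 ** (k + 1) * (i - 1) - 1 or j >= 2 ** (k + 1) * i + 1:
        return 0.0
    if j == 2 ** (k + 1) * (i - 1) or j == 2 ** (k + 1) * i:
        return 2 ** (n / 2) * (2 ** (-k) - 2)
    return 2 ** (1 - k + n / 2)      # interior indices 2^{k+1}(i-1)+1 <= j <= 2^{k+1} i - 1

x = lambda t: math.cos(2 * t) + t ** 3
for n in range(1, 4):
    for k in range(0, 3):
        for i in range(1, 2 ** n + 1):
            lhs = z_direct(x, n, k, i)
            rhs = sum(xi(n, k, i, j) * x(j / 2 ** (n + k + 1)) for j in range(2 ** (n + k + 1) + 1))
            assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
print("coefficient formula verified")
```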
We fix n ∈ and 1 ≤ i ≤ 2^n and proceed by induction on k ∈. First, let us establish the base case k = 0. Then
z^(n,0)_i = θ_n,i-1 = 2^n/2+1x(2i-1/2^n+1) - 2^n/2 x(2i/2^n+1)-2^n/2x(2i-2/2^n+1).
Moreover, plugging k = 0 into (<ref>) yields that ξ_2i-2^(n,0,i) = ξ_2i^(n,0,i) = -2^n/2, ξ_2i-1^(n,0,i) = 2^n/2+1 and ξ_j^(n,0,i) = 0 otherwise. It is clear that those coefficients coincide with the corresponding ones in (<ref>), which proves our induction for the initial step k = 0.
Next, let us assume that (<ref>) holds for k = m and subsequently prove that this identity also holds for k = m + 1. It follows from (<ref>) that
z^(n,m+1)_i = z^(n,m)_i + 2^n/2-m - 1∑_j = 2^m+1(i-1)^2^m + 1i-1(2x(2j+1/2^n+m +2)-x(j/2^n+m +1)-x(j+1/2^n+m +1)).
For 2^m + 1(i-1) < j < 2^m + 1i-1, the point 2^-n-m-2(2j+1) cannot be written in the form ℓ2^-n-m-1 for some ℓ∈_0. Hence
ξ_2j+1^(n,m+1,i) = 2· 2^n/2-m-1 = 2^n/2-m,
as the term x(2^-n-m-2(2j+1)) does not appear in the linear combination (<ref>) for k = m. Next, for 2^m+1(i-1) < j < 2^m+1i, the point 2^-n-m-2·2j = 2^-n-m-1j can be written in the form ℓ2^-n-m-1 for some ℓ∈ℕ_0. It thus follows from (<ref>) and (<ref>) that
ξ^(n,m+1,i)_2j = ξ^(n,m,i)_j - 2 · 2^n/2-m-1 = 2^n/2-m,
as the term x(2^-n-m-1j) contributes to the representation of z^(n,m)_i with ξ^(n,m,i)_j = 2^n/2-m+1. Moreover, for j = 2^m+1(i-1) or j = 2^m+1i, we have
ξ_2j^(n,m+1,i) = ξ_j^(n,m,i) - 2^n/2-m-1 = 2^n/2(2^-m-2) - 2^n/2-m-1 = 2^n/2(2^-m-1-2).
Last, for j ≤ 2^m+2(i-1)-1 or j ≥ 2^m+2i + 1, the term x(j/2^n+m+2) does not appear on the right-hand side of (<ref>). Thus, we have ξ_j^(n,m+1,i) = 0. Comparing the above identities with (<ref>) proves the case k = m+1.
§ PROOF OF <REF>
It was shown in <cit.> that W^H admits -a.s. the roughness exponent H. It now follows from <Ref> that the sample paths of X=g(W^H)
also admit the roughness exponent H.
Now we prove that, with probability one, _n(X)→ H. To this end, we use the following result by Gladyshev <cit.> on the convergence of the weighted quadratic variation of W^H,
2^(2H-1)nW^H_n^(2)→ 1 ℙ-a.s.
Hence, if θ̅_n=(θ_n,k) are the Faber–Schauder coefficients of the sample paths of W^H, then <cit.> yields that
2^(2H-2)n‖θ̅_n‖_ℓ_2^2=2^(2H-2)n∑_k = 0^2^n-1θ^2_n,k→ 2^2-2H-1 ℙ-a.s.
<Ref> now implies that condition (a) of <Ref> is satisfied. Condition (b) of that proposition is also satisfied, because it is well known that the sample paths of W^H are -a.s. Hölder continuous for every exponent α<H; see, e.g., <cit.>. Hence, we may apply <Ref> and so _n(X)→ H follows.
For completing the proof of <Ref>, it remains to establish (<ref>). This is achieved in the following proposition.
With probability one, the sample paths of fractional Brownian motion W^H satisfy condition (<ref>).
§.§ Proof of <Ref>
To prove <Ref>, we need to obtain the asymptotic behavior of the w_n_ℓ_2 associated with a fractional Brownian motion W^H. Let ϑ̅_n, z_(n,k), and z_n be defined as in (<ref>), (<ref>), and (<ref>) for the sample paths of W^H. It is clear that z_n is well defined, since the sample paths of W^H satisfy a Hölder condition. Moreover, all three are Gaussian random vectors. Our next lemma characterizes the covariance structure of the Gaussian vector z_n. To this end, consider the function g_H := h_1 + h_2 + h_3, where the h_i: _0 → are defined as follows,
h_1(ς) = -2(2ς^2H + |ς-1|^2H+|ς+1|^2H)
h_2(ς) = 8/2H+1((ς+1)^2H+1 -(ς-1)^2H+1)
for ς≥ 1,
16/2H+1 for ς = 0,
h_3(ς) =-8/(2H+2)(2H+1)(|ς+1|^2H+2-2ς^2H+2+|ς-1|^2H+2).
Furthermore, we introduce the Toeplitz matrix G_n := (g_H(|i-j|))_1 ≤ i,j ≤ 2^n.
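The function g_H is elementary, so the two facts about it that are used below can be checked numerically. The following sketch (ours, not part of the original argument) verifies the value of g_H(0) appearing later in this subsection and illustrates the decay g_H(ς)=𝒪(ς^2H-4) established at the end of the subsection.

```python
# Illustrative helper (ours): evaluate g_H and check g_H(0) and the decay rate.
def g_H(s, H):
    h1 = -2 * (2 * s ** (2 * H) + abs(s - 1) ** (2 * H) + abs(s + 1) ** (2 * H))
    h2 = 16 / (2 * H + 1) if s == 0 else 8 / (2 * H + 1) * ((s + 1) ** (2 * H + 1) - (s - 1) ** (2 * H + 1))
    h3 = -8 / ((2 * H + 2) * (2 * H + 1)) * (abs(s + 1) ** (2 * H + 2) - 2 * s ** (2 * H + 2) + abs(s - 1) ** (2 * H + 2))
    return h1 + h2 + h3

for H in (0.2, 0.5, 0.8):
    assert abs(g_H(0, H) - (-4 + 16 / (2 * H + 1) - 16 / ((2 * H + 1) * (2 * H + 2)))) < 1e-12
    # the rescaled values g_H(s) * s^(4-2H) stay bounded, illustrating g_H(s) = O(s^{2H-4})
    print(H, [round(g_H(s, H) * s ** (4 - 2 * H), 6) for s in (5, 10, 20, 40)])
```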
For each n ∈, the random vector z_n is a well-defined zero-mean Gaussian vector with covariance matrix
Γ_n = (γ^(n)_i,j)_1 ≤ i,j ≤ 2^n = 2^(1-2H)nG_n.
For n,k ∈, let us denote
Γ_(n,k) = (γ^(n,k)_i,j)_1≤ i,j ≤ 2^n := [ z_(n,k)z_(n,k)^⊤].
It suffices to show that the components γ^(n,k)_i,j converge to γ^(n)_i,j as k →∞. Moreover, by symmetry, it suffices to consider the case j≥ i. <Ref> yields
γ^(n,k)_i,j = [z^(n,k)_iz^(n,k)_j]=∑_τ_1 = 0^2^n+k+1∑_τ_2 = 0^2^n+k+1ξ^(n,k,i)_τ_1ξ^(n,k,j)_τ_2[W^H_τ_1/2^n+k+1· W^H_τ_2/2^n+k+1].
We also get from <Ref> that
∑_τ = 0^2^n+k+1ξ_τ^(n,k,i) = 0 and ξ_j^(n,k,i) = 0 for j ≤ 2^k+1(i-1)-1 or j ≥ 2^k+1i+1. Hence, for ς := j-i ≥0,
γ^(n,k)_i,j =-∑_τ_1 = 0^2^n+k+1∑_τ_2 = 0^2^n+k+1ξ^(n,k,i)_τ_1ξ^(n,k,j)_τ_2/2|τ_1-τ_2/2^n+k+1|^2H = -2^-2Hn∑_τ_1 = 0^2^k+1∑_τ_2 =0^2^k+1ξ^(n,k,i)_τ_1ξ^(n,k,j)_τ_2/2|τ_1-τ_2/2^k+1 + ς|^2H.
Using once again (<ref>) yields that
γ^(n,k)_i,j = 2^(1-2H)n(h_1,k(ς)+h_2,k(ς)+h_3,k(ς)),
where functions h_i,k are defined as follows,
h_1,k(ς) = -(2^-k-2)^2/2(2|ς|^2H + |ς-1|^2H+|ς+1|^2H),
h_2,k(ς) = 2^-k(2-2^-k)∑_τ = 1^2^k+1-1(|τ/2^k+1 + ς|^2H+ |τ/2^k+1 + ς-1|^2H + |-τ/2^k+1 + ς|^2H+ |-τ/2^k+1 + ς+1|^2H),
h_3,k(ς) = -2^1-2k∑_τ_1 = 1^2^k+1-1∑_τ_2 = 1^2^k+1-1|τ_1-τ_2/2^k+1 + ς|^2H.
Let us first consider the case ς≥ 1. Then,
lim_k ∞h_1,k(ς) = -2(2ς^2H + (ς-1)^2H+(ς+1)^2H).
Furthermore,
lim_k ∞h_2,k(ς) = lim_k ∞2^1-k(2-2^-k)∑_τ = 1^2^k+1-1(|τ/2^k+1 + ς|^2H+ |τ/2^k+1+ς-1|^2H)
= 8lim_k ∞2^-k-1∑_τ = 1^2^k+1-1(|τ/2^k+1 + ς|^2H+ |τ/2^k+1+ς-1|^2H)
= 8 (∫_ς^ς+1t^2H dt + ∫_ς-1^ςt^2H dt)= 8((ς+1)^2H+1 - (ς-1)^2H+1)/2H+1
We also get in a similar way that
lim_k ∞h_3,k(ς) = -8lim_k ∞2^-2-2k∑_τ_1 = 1^2^k+1-1∑_τ_2 = 1^2^k+1-1(τ_1-τ_2/2^k+1 + ς)^2H=-8 ∫_0^1∫_0^1(t-s+ς)^2H ds dt
= -8((ς+1)^2H+2-2ς^2H+2+(ς-1)^2H+2)/(2H+2)(2H+1).
For the case ς = 0, lim_k ∞h_1,k(0) = h_1(0) as in (<ref>). Next, we have
lim_k ∞h_2,k(0) = lim_k ∞2^2-k(2-2^-k)∑_τ = 1^2^k+1-1|τ/2^k+1|^2H = 16∫_0^1t^2H dt = 16/2H+1.
Finally,
lim_k ∞h_3,k(0) = lim_k ∞-2^1-2k∑_τ_1 = 1^2^k+1-1∑_τ_2 = 1^2^k+1-1|τ_1-τ_2/2^k+1|^2H = -8∫_0^1∫_0^1|t-s|^2H dsdt
= 16∫_0^1∫_0^t(t-s)^2H dsdt = 16/(2H+1)(2H+2).
Comparing the above equations with (<ref>) completes the proof.
Our next lemma investigates the limit of 2^n(H-1) z_n_ℓ_2 as n ∞ by applying a concentration inequality from <cit.>. In the form needed here, it states that if Z is a centered Gaussian random vector with covariance matrix C, T:=√(C), and γ:=√(C_2), then there exists a universal constant κ independent of C such that
[| Z_ℓ_2-T|≥ t]≤κexp(-t^2/4γ^2) for all t>0.
With probability one,
lim_n →∞2^n(H-1)‖ z_n‖_ℓ_2=2√((1-H)/(H+1)).
It follows from (<ref>) that
√(Γ_n) = √(∑_i = 1^2^nγ^(n)_i,i) = 2^(1-H)n√(g_H(0))=2^(1-H)n√(-4 + 16/2H+1 - 16/(2H+1)(2H+2))
= 2^(1-H)n+1√(1-H/H+1).
Let ·_p denote the ℓ_p-induced operator norm. As shown in <Ref>, the covariance matrix Γ_n is a symmetric Toeplitz matrix and so γ^(n)_i,j = γ^(n)_j,i = γ^(n)_1,|j-i+1|. Hence, we have Γ_n_1 = Γ_n_∞, and this gives
Γ_n_2 ≤√(Γ_n_1Γ_n_∞) = Γ_n_1 = max_1 ≤ j ≤ 2^n∑_i = 1^2^n|γ^(n)_i,j| = max_1 ≤ j ≤ 2^n∑_i = 1^2^n|γ^(n)_1,|j-i+1||
≤ 2∑_i=1^2^n|γ^(n)_1,i| ≤ 2^(1-2H)n+1∑_ς = 0^2^n|g_H(ς)|,
where the first inequality is a well-known bound for the spectral norm of a matrix; see, e.g., <cit.>.
In the next step, we will show that g_H(ς) = 𝒪(ς^2H-4) as ς∞. For ς≥ 3, Taylor expansion yields u_1 ∈ (ς-1, ς) and u_2 ∈ (ς, ς+1) such that
(ς - 1)^2H = ς^2H + ∑_i = 1^3(-1)^i∏_j = 1^i(2H-j+1)/i!ς^2H-i+ ∏_j = 1^4(2H-j+1)/4!u_1^2H-4,
(ς + 1)^2H = ς^2H + ∑_i = 1^3∏_j = 1^i(2H-j+1)/i!ς^2H-i + ∏_j = 1^4(2H-j+1)/4!u_2^2H-4.
Note that
∑_i = 1^3((-1)^i + 1)∏_j = 1^i(2H-j+1)/i!ς^2H-i= 2H(2H-1)ς^2H-2,
and therefore, we have
h_1(ς) = -2(4 ς^2H + 2H(2H-1)ς^2H-2 + ∏_j = 1^4(2H-j+1)/4!(u_1^2H-4+u_2^2H-4)).
In the same way, we obtain
h_2(ς) = 8(2ς^2H + 2(2H)(2H-1)/3!ς^2H-2 +∏_j = 1^4(2H-j+1)/5! (u_3^2H-4+u_4^2H-4)),
h_3(ς) = -8(2/2!ς^2H + 2(2H)(2H-1)/4!ς^2H-2 +∏_j = 1^4(2H-j+1)/6!(u_5^2H-4+u_6^2H-4) ),
for some u_3,u_5 ∈ (ς-1, ς) and u_4, u_6 ∈ (ς, ς+1). Since u_i ≥ς - 1, we get u_i^2H-4≤ (ς-1)^2H-4. Summing up (<ref>), (<ref>) and (<ref>) yields that g_H(ς) = 𝒪(ς^2H-4) as ς∞, which with (<ref>) implies that
‖Γ_n‖_2 ≤ 2^(1-2H)n+1(|g_H(0)|+|g_H(1)|+∑_ς = 2^2^n|g_H(ς)|) = 𝒪(2^(1-2H)n) for H ∈ (0,1).
Therefore, for each H ∈ (0,1), there exist c_H> 0 and n_c,H∈ℕ such that for n ≥ n_c,H, we have ‖Γ_n‖_2 ≤ c_H2^(1-2H)n. Thus, for n ≥ n_c,H and any given ε > 0, the concentration inequality (<ref>) gives
ℙ(|2^n(H-1)‖ z_n‖_ℓ_2- 2√((1-H)/(H+1))| ≥ε)= ℙ(|‖ z_n‖_ℓ_2- 2^(1-H)n+1√((1-H)/(H+1))| ≥ 2^(1-H)nε)
≤κexp(-2^n(2-2H)ε^2/(4‖Γ_n‖_2))≤κexp(-(4c_H)^-12^nε^2).
The latter expression is summable in n for every ε>0, and so a Borel–Cantelli argument yields that 2^n(H-1)‖ z_n‖_ℓ_2→ 2√((1-H)/(H+1)) with probability one as n →∞.
In the following lemma, we will derive the asymptotic behaviour of the norms of w_n defined in (<ref>).
With probability one, we have
lim_n →∞2^n(H-1)‖ w_n‖_ℓ_2=2^-2H√(α(H)),
where α(H) = g_H(0) - 1/2g_H(1) - g_H(2) + 1/2g_H(3).
Let us denote the covariance matrix of w_n by Φ_n := (ϕ^(n)_i,j)_i,j = 1^2^n = Q_nΓ_n+2Q^⊤_n. We first show that
tr Φ_n= 2^(2-2H)n-4Hα(H).
For the fixed n ∈, consider the following partition of the covariance matrix Γ_n+2,
Γ_n+2 = [[ Γ^∗_1,1 Γ^∗_1,2 ⋯ Γ^∗_1,2^n; Γ^∗_2,1 Γ^∗_2,2 ⋯ Γ^∗_2,2^n; ⋮ ⋮ ⋱ ⋮; Γ^∗_2^n,1 Γ^∗_2^n,2 ⋯ Γ^∗_2^n,2^n ]],
where Γ^∗_i,j are 4 × 4-dimensional matrices. In particular, for 1 ≤ i ≤ 2^n, the diagonal partitioned matrices Γ^∗_i,i are of the form:
Γ^∗_i,i = 2^(1-2H)(n+2)G_2 = 2^(1-2H)(n+2)[[ g_H(0) g_H(1) g_H(2) g_H(3); g_H(1) g_H(0) g_H(1) g_H(2); g_H(2) g_H(1) g_H(0) g_H(1); g_H(3) g_H(2) g_H(1) g_H(0); ]].
Recall the definition of η_i,j from (<ref>), we get
ϕ^(n)_i,i = (
η_i,1, η_i,2, …, η_i,2^n)Γ_n+2(
η_i,1, η_i,2,… , η_i,2^n)^⊤
= (
0_1× 4, …, η_i,i, …, 0_1× 4)Γ_n+2(
0_1× 4, …, η_i,i, …, 0_1× 4)^⊤
= rΓ^∗_i,i r^⊤ = 2^(1-2H)(n+2) rG_2 r^⊤.
To evaluate the last argument in the above equation, we have
rG_2 r^⊤ =1/16 1_1 × 4[[ g_H(0) -g_H(1) -g_H(2) g_H(3); -g_H(1) g_H(0) g_H(1) -g_H(2); -g_H(2) g_H(1) g_H(0) -g_H(1); g_H(3) - g_H(2) -g_H(1) g_H(0); ]] 1_4 × 1 = α(H)/4.
Therefore, we have ϕ^(n)_i,i = 2^(1-2H)(n+2)-2α(H) for every 1 ≤ i ≤ 2^n, and
tr Φ_n = ∑_i = 1^2^nϕ^(n)_i,i = 2^(2-2H)n-4Hα(H).
In our next step, we shall show that 2^n(H-1)‖ w_n‖_ℓ_2 converges to 2^-2H√(α(H)). First of all, it follows from <cit.> that ‖Q_n‖_2 = 1/4, and due to (<ref>), there exists a constant c_H > 0 such that
‖Φ_n‖_2 ≤‖Q_n‖^2_2 ‖Γ_n+2‖_2 ≤ c_H2^(1-2H)n.
For any given ε > 0, the concentration inequality (<ref>)
yields that
ℙ(|2^n(H-1)‖ w_n‖_ℓ_2- 2^n(H-1)√(tr Φ_n)| ≥ε) = ℙ(|2^n(H-1)‖ w_n‖_ℓ_2- √(2^-4Hα(H))| ≥ε)
≤κexp(-2^n(2-2H)ε^2/(4‖Φ_n‖_2))≤κexp(-(4c_H)^-12^nε^2).
From here, a Borel–Cantelli argument yields the assertion.
By (<ref>) and <Ref>,
lim_n →∞‖ w_n‖_ℓ_2/‖θ̅_n‖_ℓ_2 = lim_n →∞√(2^n(2H-2)‖ w_n‖^2_ℓ_2/2^n(2H-2)‖θ̅_n‖_ℓ_2^2) = √(2^-4Hα(H)/(2^2-2H-1)) = √(α(H)/(2^2+2H-2^4H)) < 1.
See <Ref> for an illustration of the latter inequality.
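As a purely numerical illustration of this last inequality (our own check, independent of the proofs above), the following sketch recomputes α(H) from g_H and confirms that the ratio α(H)/(2^2+2H-2^4H) lies strictly between 0 and 1 on a grid of Hurst parameters.

```python
# Our own numerical check of the final inequality alpha(H) < 2^{2+2H} - 2^{4H} for H in (0,1).
def g_H(s, H):
    h1 = -2 * (2 * s ** (2 * H) + abs(s - 1) ** (2 * H) + abs(s + 1) ** (2 * H))
    h2 = 16 / (2 * H + 1) if s == 0 else 8 / (2 * H + 1) * ((s + 1) ** (2 * H + 1) - (s - 1) ** (2 * H + 1))
    h3 = -8 / ((2 * H + 2) * (2 * H + 1)) * (abs(s + 1) ** (2 * H + 2) - 2 * s ** (2 * H + 2) + abs(s - 1) ** (2 * H + 2))
    return h1 + h2 + h3

for H in [i / 20 for i in range(1, 20)]:                      # H = 0.05, 0.10, ..., 0.95
    alpha = g_H(0, H) - g_H(1, H) / 2 - g_H(2, H) + g_H(3, H) / 2
    ratio = alpha / (2 ** (2 + 2 * H) - 2 ** (4 * H))
    assert 0 < ratio < 1, (H, ratio)
    print(f"H = {H:.2f}:  limiting norm ratio = {ratio ** 0.5:.4f}")
```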
|
http://arxiv.org/abs/2307.01450v1
|
20230704025615
|
The $p$-adic constant for mock modular forms associated to CM forms
|
[
"Ryota Tajima"
] |
math.NT
|
[
"math.NT"
] |
The p-adic constant for mock modular forms associated to CM forms
Kyushu university
[email protected]
The author was supported by JSPS KAKENHI Grant Number JP23KJ1720 and WISE program (MEXT) at Kyushu University.
Ryota Tajima
August 1, 2023
Let g ∈ S_k(Γ_0(N)) be a normalized newform and f be a harmonic Maass form that is good for g. The holomorphic part of f is called a mock modular form and denoted by f^+.
For an odd prime p, K. Bringmann, P. Guerzhoy, and B. Kane obtained a p-adic modular form of level pN from f^+ and a certain p-adic constant α_g(f) in <cit.>. When g has complex multiplication by an imaginary quadratic field K and p is split in 𝒪_K, it is known that α_g(f) is zero. On the other hand, we do not know much about α_g(f) for an inert prime p. In this paper, we prove that α_g(f) is a p-adic unit when p is inert in 𝒪_K and dim_ℂS_k(Γ_0(N))=1.
§ INTRODUCTION
A mock modular form is the holomorphic part f^+(z) of a harmonic Maass form f(z). The non-holomorphic part of f is connected to a cusp form by the differential operator ξ_2-k that maps from harmonic Maass forms to cusp forms
ξ_2-k:=2iy^2-k\overline{∂/∂ z̄} :H_2-k ( Γ _0( N) ) → S_k( Γ _0( N) ).
The image of f(z) by ξ_2-k is called the shadow of f^+. For a cusp form g, we consider lifts of g by ξ_2-k since ξ_2-k is surjective. In <cit.>, J. H. Bruinier, K. Ono, and R. C. Rhoades found lifts that satisfy some algebraic properties and they called them good for g. (cf. Definition <ref>.) For example, if g is a normalized newform with complex multiplication and f is good for g, then all coefficients of f^+ are algebraic.
It is a fundamental problem to find a direct relation between mock modular forms and shadows. In <cit.>, K. Bringmann, P. Guerthoy, and B. Kane revealed the p-adic relation between a normalized newform and the holomorphic part of their good lifts. We state this result more precisely.
Let g ∈ S_k(Γ_0(N)) be a normalized newform with complex multiplication by an imaginary quadratic field K and f ∈ H_2-k ( Γ _0( N) ) good for g. Let p be a prime number such that p ∤ N and inert in 𝒪_K. We define two operators U(p), V(p), and D^k-1 acting a formal power series by
U(p)(∑_n ∈ℤ C(n)q^n):=∑_n ∈ℤ C(pn)q^n
V(p)(∑_n ∈ℤ C(n)q^n):=∑_n ∈ℤ C(n/p)q^n
D^k-1(∑_n ∈ℤ C(n)q^n):=∑_n ∈ℤ n^k-1C(n)q^n.
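For concreteness, the following minimal sketch (our own notation, purely illustrative) implements these three operators on truncated q-expansions stored as dictionaries mapping exponents to coefficients, and checks the elementary identity U(p)V(p)=id.

```python
# A minimal sketch (ours) of the operators acting on formal q-series {exponent: coefficient}.
def U_op(p, f):                      # U(p): keeps the coefficients C(pn)
    return {m // p: c for m, c in f.items() if m % p == 0}

def V_op(p, f):                      # V(p): sends q^n to q^{pn}
    return {p * m: c for m, c in f.items()}

def D_pow(k_minus_1, f):             # D^{k-1}: multiplies C(n) by n^{k-1}
    return {m: (m ** k_minus_1) * c for m, c in f.items()}

# toy check of U(p)V(p) = identity on an arbitrary finite series
f = {-1: 2, 0: 5, 3: -1, 7: 4}
assert U_op(3, V_op(3, f)) == f
```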
For a p-adic number γ, we define the formal power series ℱ_γ∈ℂ_p[[q]][q^-1] by
ℱ_γ:=f^+-γ E_g|V(p)=∑ _n≫ -∞( C_f^+(n) -n^1-kC_g(n/p) )q^n=∑ _n≫ -∞ n^1-kd_γ(n)q^n.
Keep the notation above. Then for all but exactly one γ∈ℂ_p, we have the p-adic limit
lim _m→∞(D^k-1ℱ_γ) | U ( p^2m+1)/d_γ( p^2m+1) =g.
They also showed a theorem similar to Theorem <ref> when g does not have complex multiplication.
We denote the exceptional constant γ of Theorem <ref> by α_g(f). In <cit.>, K. Bringmann, P. Guerzhoy, and B. Kane showed a remarkable result that ℱ_α_g(f) is not only a formal power series but also a p-adic modular form. Similarly, if g does not have complex multiplication, they showed that there exists precisely one α_g(f) ∈ℂ_p such that ℱ_α_g(f) is a p-adic modular form. In order to develop the p-adic theory of mock modular forms, it is important to investigate the p-adic constant α_g(f). For example, it is an interesting question whether α_g(f) is zero or not. If α_g(f) is not zero, we can choose γ=0 in Theorem <ref>. Therefore we recover the shadow g from only a mock modular form f^+.
In this paper, we assume that g has complex multiplication. Then α_g(f) is independent of the lift of g, and we denote α_g(f) by α_g from now on. If p is split in 𝒪_K, then α_g=0 by <cit.>. However, it is not known whether α_g is zero or not when p is inert in 𝒪_K. When p is inert in 𝒪_K, no example with α_g=0 is known, and only one example with α_g≠0 is known.
Let g(z) := η ( 3z) ^8∈ S_4( Γ _0( 9) ).
Then g has complex multiplication by K=ℚ(√(-3)), and the following statements are hold.
(1) There exists a good lift f of g such that
D^k-1(f^+)=-η ( 3z) ^8 ( η ( z) ^3/η (9z) ^3+3) ^2 = ∑ C( n) q^n
(2) Let p be inert in 𝒪_K.
If p^3∤ C( p), then α_g≠0.
It was shown that p^3∤ C( p) for all inert primes p<32500 by <cit.>.
In 2022, Hanson and Jameson showed that p^3∤ C(p) for all inert primes in <cit.>.
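The divisibility condition can also be tested directly from the q-expansion. The following sketch (ours, assuming only the eta-product formula displayed above) expands -η(3z)^8(η(z)^3/η(9z)^3+3)^2 with integer arithmetic and checks that p^3 does not divide C(p) for the first few odd inert primes.

```python
# Illustrative computation (ours): expand F = -eta(3z)^8 (eta(z)^3/eta(9z)^3 + 3)^2 and
# check p^3 does not divide C(p) for small primes p = 2 (mod 3), i.e. inert in Q(sqrt(-3)).
N = 120                                        # number of q-coefficients kept

def euler_product(step, power, N):
    """Coefficients of prod_{n>=1} (1 - q^(step*n))^power up to q^(N-1)."""
    s = [0] * N
    s[0] = 1
    for n in range(1, N // step + 1):
        for _ in range(power):                 # multiply by (1 - q^(step*n)) once per pass
            for i in range(N - 1, step * n - 1, -1):
                s[i] -= s[i - step * n]
    return s

def mul(a, b, N):
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(min(len(b), N - i)):
                c[i + j] += ai * b[j]
    return c

def inverse(a, N):                             # power-series inverse, assumes a[0] == 1
    inv = [0] * N
    inv[0] = 1
    for n in range(1, N):
        inv[n] = -sum(a[k] * inv[n - k] for k in range(1, n + 1))
    return inv

A = euler_product(1, 3, N)                     # prod (1-q^n)^3
B = euler_product(9, 3, N)                     # prod (1-q^(9n))^3
Q = mul(A, inverse(B, N), N)                   # eta(z)^3/eta(9z)^3 = q^{-1} Q(q)
R = mul(Q, Q, N)                               # (q^{-1}Q + 3)^2 = q^{-2}(Q^2 + 6 q Q + 9 q^2)
for i in range(N - 1, 0, -1):
    R[i] += 6 * Q[i - 1]
R[2] += 9
G = mul(euler_product(3, 8, N), R, N)          # F = -q^{-1} * prod(1-q^{3n})^8 * R(q)
C = lambda n: -G[n + 1]                        # C(n) = coefficient of q^n in F

assert C(-1) == -1                             # F = -q^{-1} + O(q^2)
for p in (5, 11, 17, 23, 29, 41, 47, 53, 59, 71, 83, 89, 101):
    assert C(p) % p ** 3 != 0, p
print("p^3 does not divide C(p) for the inert primes tested")
```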
The operator D^k-1 defines the map from harmonic Maass forms to weakly holomorphic modular forms
D^k-1:H_2-k ( Γ _0( N) ) → M_k^!( Γ _0( N) )
and kills the non-holomorphic part of f.
In this paper, we show that if dim_ℂS_k(Γ_0(N))=1 and p is an odd prime inert in 𝒪_K, then α_g≠0. Furthermore, we also determine the p-adic valuation of α_g. We state our main theorem.
Suppose that k is an even integer and N is a natural number. Let g ∈ S_k(Γ_0(N)) be a normalized newform with complex multiplication by an imaginary quadratic field K and p an odd prime number.
Assume that p is inert in 𝒪_K and p ∤ N.
If dim_ℂ S_k(Γ_0(N))=1, then α_g is a p-adic unit.
We explain the idea of the proof. In the proof of (2) in Theorem <ref>, they used the fact that D^k-1(f^+) is equal to a weakly holomorphic modular form F defined by Dedekind's eta-function (cf. (1) in Theorem <ref>).
However, an explicit calculation of the holomorphic part of good lifts is very difficult in general. This difficulty comes from the fact that the image space of D^k-1 has infinite dimension over ℂ. In this paper, we consider a quotient space of M_k^!( Γ _0( N) ) and we denote this quotient space by S̄_k^#, 0( Γ _0( N) ). (cf. Lemma <ref>.) We show that S̄_k^#, 0( Γ _0( N) ) has finite dimension over ℂ and we can regard D^k-1 as a map to S̄_k^#, 0( Γ _0( N) )
D^k-1:H_2-k ( Γ _0( N) ) →S̄_k^#, 0( Γ _0( N) ).
It is easily shown that
D^k-1(f^+)=cF in S̄_k^#, 0( Γ _0( N) )
for some non-zero scalar c. Lastly, we evaluate the error term between D^k-1(f) and cF.
From the definition of S_k^#, 0( Γ _0( N) ), there exists a weakly holomorphic modular form h and a complex number d such that
D^k-1(f^+)=cF+D^k-1(h)+dg in M_k^!( Γ _0( N) ).
The error term coming from g is zero since p is inert in 𝒪_K.
Considering the Galois action on <ref>, it is shown that h is defined over some algebraic field. This fact implies that the error term coming from D^k-1(h) is equal to 0. Thus we conclude that α_g≠0.
§ ACKNOWLEDGEMENTS
The author would like to express his greatest appreciation to Professor Shinichi Kobayashi for giving numerous and extremely helpful comments on earlier versions of this paper. He also thanks Professor Toshiki Matsusaka for many comments about mock modular forms. The author was supported by JSPS KAKENHI Grant Number JP23KJ1720 and the WISE program (MEXT) at Kyushu University.
§ HARMONIC MAASS FORMS AND WEAKLY HOLOMORPHIC MODULAR FORMS
In this section, we introduce facts for harmonic Maass forms and weakly holomorphic modular forms.
Throughout, let ℍ be the upper half-plane and z=x+iy ∈ℍ with x, y ∈ℝ.
Let k ∈ℤ and N ∈ℕ.
Then a harmonic Maass form of weight k on Γ _0( N) is any smooth function f on ℍ satisfying:
(1) f((az+b)/(cz+d))=(cz+d)^kf(z) for all γ = [[ a b; c d ]]∈Γ _0( N).
(2)Δ_kf=0, where
Δ_k=-y^2( ∂ ^2/∂ x^2+∂ ^2/∂ y^2) +iky( ∂/∂ x+i∂/∂ y).
(3)There is a polynomial P_∞(z) ∈ℂ[q^-1] such that
f(z)-P_∞(z)=O (e^-ε y) as y →∞ for some ϵ >0.
Analogous conditions are required at all cusps.
We denote the vector space of these harmonic Maass forms by H_k( Γ _0( N) ).
Every harmonic Maass form f(z) of weight 2-k has a Fourier expansion of the form
f=∑ _n≫ -∞C_f^+( n) q^n+∑ _n <0C_f^-( n) Γ ( k-1,4π| n| y) q^n.
Obviously, each f(z) is the sum of two disjoint pieces, the holomorphic part of f(z)
f^+(z):=∑ _n≫ -∞C_f^+( n) q^n,
and the non-holomorphic part of f(z)
f^-(z):=∑ _n <0C_f^-( n) Γ ( k-1,4π| n| y) q^n.
In addition, ∑ _n≤ 0C_f^+( n) q^n is called the principal part of f(z) at the cusp ∞.
Every weakly holomorphic modular form f(z) is in H_k( Γ _0( N) ) with f^-(z)=0.
A mock modular form is the holomorphic part of a harmonic Maass form.
Suppose that k is an integer greater than or equal to 2.
We define two operators D:=(1/2π i)· d/dz and ξ _w:=2iy^w\overline{∂/∂ z̄} where w ∈ℤ.
Then
D^k-1:H_2-k ( Γ _0( N) ) → M_k^!( Γ _0( N) ),
ξ_2-k:H_2-k ( Γ _0( N) ) → S_k( Γ _0( N) )
and
D^k-1(f^-)=0, ξ_2-k(f^+)=0.
In particular, ξ_2-k:H_2-k ( Γ _0( N) ) → S_k( Γ _0( N) ) is surjective.
The image of f by ξ_2-k is called the shadow of f^+.
If f ∈ H_2-k(Γ_0(N)) has the property that ξ_2-k(f)≠0, then the principal part of f is nonconstant for at least one cusp.
Let g ∈ S_k( Γ _0( N) ) be a normalized newform and F_g be the number field obtained by adjoining the coefficients of g to ℚ.
We say that a harmonic Maass form f ∈ H_2-k ( Γ _0( N) ) is good for g if it satisfies the following properties.
(1)The principal part of f at the ∞ belongs to F_g[q^-1].
(2)The principal part of f at the other cusps of Γ_0(N) are constant.
(3)We have that ξ _2-k( f) =g/‖ g‖ ^2.
Let g ∈ S_k( Γ _0( N) ) be a normalized newform with complex multiplication.
If f ∈ H_2-k ( Γ _0( N) ) is good for g,
then there exists a positive integer M such that all coefficients of f^+ are in F_g(ζ_M), where ζ_M:=e^2π i/M.
The one-dimensional spaces S_k(Γ_0(N)) which satisfy the assumption of Theorem <ref> are precisely
S_2( Γ _0( 27) ), S_2( Γ _0( 32) ), S_2( Γ _0( 36) ), S_2( Γ _0( 49) ),
S_4( Γ _0( 9) ).
In addition, the genus of X_0(N) is 0 or 1 for each of N=27, 32, 36, 49, 9.
We define two subspaces of weakly holomorphic modular forms
M_k^#( Γ _0( N) ) :={ f∈ M_k^!( Γ _0( N) ) |f is holomorphic at every cusp except possibly ∞},
S_k^# ,0( Γ _0( N) ) :={ f∈ M_k^#( Γ _0( N) ) |f vanishes at every cusp except possibly ∞}.
Let k be a positive even integer and N be a positive integer for which the
genus of Γ_0(N) is zero or one.
We define the space S̄_k^# ,0 ( Γ _0( N) ) by
S̄_k^# ,0 ( Γ _0( N) ) :=S_k^# ,0( Γ_0( N) ) / ( D^k-1( M_2-k^#( Γ _0( N) )) ⊕ S_k( Γ_0( N) ) ) .
Then
S̄_k^# ,0 ( Γ _0( N) ) ≅ S_k( Γ _0( N) ).
Suppose that S_k( Γ _0( N) ) is one-dimensional and that the unique normalized
cusp form g has complex multiplication by K.
There exists
F=-q^-1+∑ ^∞_n=2C_F( n) q^n∈ S_k^# ,0( Γ _0( N) ) ∩ℤ[[q]][q^-1]
such that for every odd prime p which is inert in 𝒪_K and every integer m ≥ 0 we have that
v_p( C_F( p^2m+1) )=(k-1)m.
The above result except for F ∈ S_k^# ,0( Γ _0( N) ) is clear by <cit.>.
The function F is defined by F_1 in <cit.>.
It is clear that F_1∈ S_k^# ,0( Γ _0( N) ) from the definition of F_1.
When (k,N)=(4,9), we have F=-η ( 3z) ^8 ( η ( z) ^3/η ( 9z) ^3+3) ^2 (see Theorem <ref>).
Let g be a normalized newform in S_k( Γ _0( N) ).
If f ∈ H_2-k ( Γ _0( N) ) is good for g,
then
D^k-1(f) ∈ S_k^# ,0( Γ _0( N) ).
We have D^k-1(f) ∈ M_k^!( Γ _0( N) ) by <cit.>.
We will show that the constant term of D^k-1(f) is zero at every cusp of Γ _0( N).
Let s be a cusp of Γ _0( N), and h be the width of s.
We denote the Fourier expansion of f^+ at s by ∑ _n≫ -∞C_s( n) q_h^n where q_h:= e^2π i z/h.
Then the Fourier expansion of D^k-1(f) at s is
D^k-1(f)=D^k-1(f^+)=∑ _n≫ -∞(n/h)^k-1C_s( n) q_h^n.
Therefore the constant term of D^k-1(f) is zero at every cusp of Γ _0( N).
We will show that D^k-1(f) is holomorphic at every cusp except for ∞.
Let s ∈ℚ be a cusp of Γ _0( N) and h be the width of s.
We denote the Fourier expansion of f^+ at s by ∑ _n≫ -∞C_s( n) q_h^n.
Since f is good for g, C_s( n)=0 holds for all n <0.
Hence D^k-1(f) is holomorphic at every cusp except for ∞.
§ P-ADIC PROPERTIES OF MOCK MODULAR FORMS
In this section, we recall p-adic properties of mock modular forms.
From now on, we fix an algebraic closure ℚ_p along with embedding ιℚ→ℚ_p for each prime number p. We denote the p-adic closure by ℂ_p and normalize the p-adic valuation so that v_p(p)=1.
Let g ∈ S_k(Γ_0(N)) be a normalized newform with complex multiplication by K and f ∈ H_2-k(Γ_0(N)) be good for g. We denote the holomorphic part of f by f^+. We define the Eichler integral of g by
E_g( z) :=∑ _n>0n^1-kC_g( n) q^n
where C_g(n) denotes the n-th coefficient of g.
For γ∈ℂ_p, we define
ℱ_γ:=f^+-γ E_g|V(p).
Let β, β' be the roots of the polynomial X^2-C_g(p)X+p^k-1 such that v_p( β ) ≤ v_p( β ').
Let g be a normalized newform with complex multiplication by K and let p be inert in 𝒪_K. Then the limit
lim _m→∞C_D^k-1(f)( p^2m+1)/β ^2m
exists in ℂ_p.
Assume that p ∤ N and p is inert in 𝒪_K.
Then there exists exactly one α_g∈ℂ_p such that ℱ_α_g is a p-adic modular form of weight 2-k and level pN, given by the p-adic limit
α_g=lim _m→∞C_D^k-1(f)(p^2m+1)/β ^2m.
We will show that α_g is well-defined.
Let h ∈ M_2-k^!( Γ _0( N) ) be defined over some algebraic field. Then there is a real number A such that
v_p(C_D^k-1(h)(p^2m+1)) ≥ (2m+1)(k-1)-A for all m ∈ℕ.
If f and f' are good for g, then ξ_2-k(f-f')=0. Therefore f-f' is an element of M_2-k^!( Γ _0(N)) and defined over F_g(ζ_M) by Theorem <ref>. By Lemma <ref>, we have
lim _m→∞(C_D^k-1(f)(p^2m+1)-C_D^k-1(f')(p^2m+1))/β ^2m=0.
Therefore α_g is well-defined.
§ PROOF OF THE MAIN THEOREM
In this section, we prove the main theorem.
Firstly, we will show that α_g≠0.
Suppose that S_k( Γ _0( N) ) is one-dimensional and that the unique normalized
cusp form g has complex multiplication by K. Then there exist c, d ∈ℂ and h ∈ M_2-k^#( Γ _0( N) ) such that
D^k-1(f)=cF+D^k-1(h)+dg.
From Lemma <ref> and Lemma <ref>, we obtain that dim_ℂS̄_k^# ,0 ( Γ _0( N) )=1.
Therefore the lemma follows, since D^k-1(f), F ∈S_k^# ,0 ( Γ _0( N) ).
The constant c in Lemma <ref> is not zero.
Assume that c=0.
We write
f^+=∑ _n≫ -∞C_f^+( n) q^n and
h=∑ _n≫ -∞C_h( n) q^n.
Since c=0, we have
D^k-1(f)=D^k-1(h)+dg.
Therefore if n≤ -1, then
C_f^+(n)=C_h(n).
We put H =f-h ∈ H_2-k ( Γ _0( N) ). Then the principal part of H at ∞ is constant.
Let s ∈ℚ be a cusp of Γ_0(N).
Since f is good for g, the principal part of f at s is constant, and h ∈ M_2-k^#( Γ _0( N) ) is holomorphic at s.
Therefore the principal part of H at s is constant.
Consequently, the principal part of H is constant at all cusps and
ξ_2-k(H)=ξ_2-k(f)=g/‖ g‖ ^2≠0.
This contradicts Theorem <ref>.
There exists a positive integer M such that the constants c, d in Lemma <ref> are in F_g(ζ_M).
In a manner similar to the proof of Lemma <ref>, we can show that F≠0 as an element of S̄_k^# ,0 ( Γ _0( N) ).
By Theorem <ref>, we have
D^k-1(f)=c^σF+D^k-1(h^σ)+d^σg
for all σ∈Aut(ℂ / F_g(ζ_M)).
From the discussion of <cit.>, h^σ∈ M_2-k^#( Γ _0( N) ) holds. Therefore as an element of S̄_k^# ,0 ( Γ _0( N) ), we have
D^k-1(f)=cF=c^σF.
Since F≠0, we obtain c ∈ F_g(ζ_M).
Therefore, we have
D^k-1(f)=cF+D^k-1(h^σ)+d^σg
for all σ∈Aut(ℂ / F_g(ζ_M)).
Therefore
(d-d^σ)g=D^k-1(h^σ-h)
holds.
Since F is written by F=-q^-1+∑ ^∞_n=2C( n) q^n and (<ref>),
C_h(-1)=C_f^+(-1)+1
and
C_h(n)=C_f^+(n)
for all n ≤ -2.
Consequently, all coefficients of h at negative integers are in F_g(ζ_M).
Therefore, we have
h^σ-h ∈ M_2-k(Γ_0(N)).
By Lemma <ref> and the assumption of Lemma <ref>, we have k=2, 4.
(1)In case that k=2, D^k-1(h^σ-h)=0 by M_0(Γ_0(N))=ℂ.
(2)In case that k=4, D^k-1(h^σ-h)=0 by M_-2(Γ_0(N))=0.
Consequently, we obtain d=d^σ.
All coefficients of h in Lemma <ref> are elements of F_g(ζ_M).
By the proof of Theorem <ref>, we have
h^σ-h ∈ M_2-k(Γ_0(N)).
for all σ∈Aut(ℂ / F_g(ζ_M)).
If k=4, then
h^σ=h
holds.
Therefore we assume that k=2.
Let C_h(0) be the 0-th coefficient of h.
Then we have
h-C_h(0) ∈ M_0^!(Γ_0(N))
and h-C_h(0) satisfies <ref>.
Therefore we can assume that the constant term of h is zero.
Thanks to this assumption,
h^σ=h
holds.
Suppose that k is an even integer and N is a natural number. Let g ∈ S_k(Γ_0(N)) be a normalized newform with complex multiplication by an imaginary quadratic field K and p an odd prime number.
Assume that p is inert in 𝒪_K and p ∤ N.
If dim_ℂ S_k(Γ_0(N))=1, then α_g≠0.
Since p is inert in 𝒪_K, we have
C_g(p^2m+1)=0
for all m ∈ℕ.
Hence we obtain
C_D^k-1(f)^+(p^2m+1)=cC_F(p^2m+1)+C_D^k-1(h)(p^2m+1).
Therefore
C_D^k-1(f)(p^2m+1)/β^2m=c C_F(p^2m+1)/β^2m+C_D^k-1(h)(p^2m+1)/β^2m
holds.
Since β is a root of the polynomial X^2-C_g(p)X+p^k-1=X^2+p^k-1, Lemma <ref> shows that the term coming from D^k-1(h) tends to zero, and we have
α_g = lim _m→∞C_D^k-1(f)( p^2m+1)/β ^2m = c lim _m→∞C_F( p^2m+1)/β ^2m.
From Lemma <ref> and the fact that { x ∈ℂ_p| v_p(x)=0 } is closed,
lim _m→∞C_F( p^2m+1)/β ^2m≠0
holds.
By Lemma <ref>, we obtain
α_g≠ 0.
Lastly, we will show that v_p(α_g)=0.
Let f ∈ H_2-k( Γ _0( N) ) and g ∈ S_k( Γ _0( N) ).
We denote the principal part of f at cusp s by
∑_n<0C_f, s^+(n)q_h_s^n
and the Fourier expansion of g at cusp s by
∑_n>0C_g, s(n)q_h_s^n
where h_s is the width of s.
Then we have
(ξ_2-k(f), g)=∑_s : cusp∑_n<0 C_f, s^+(n)C_g, s(-n)
where (· , ·) is the Petersson inner product.
We generalize the pairing that is defined by P. Guerzhoy. (cf. <cit.>)
We define the pairing
⟨· , ·⟩ : S̄_k^#, 0( Γ _0( N) ) × S_k ( Γ _0( N) ) →ℂ by
⟨∑_n≫ -∞ a_nq^n, ∑_n>0 b_nq^n⟩:=∑_n < 0a_nb_-n/n^k-1.
Then this pairing is well-defined.
It is sufficient to show that ⟨ D^k-1(h), g⟩=0 for all h ∈ M_2-k^#( Γ _0( N) ) and g ∈ S_k ( Γ _0( N) ); the pairing clearly vanishes on S_k( Γ _0( N) ), since cusp forms have no terms with n<0. Since M_2-k^#( Γ _0( N) ) ⊂ H_2-k( Γ _0( N) ), Lemma <ref> gives
⟨ D^k-1(h), g⟩=∑_n<0C_h(n)C_g(-n)= (ξ_2-k(h), g)=(0, g)=0.
Let g be a normalized newform. If f is good for g then we have
⟨ D^k-1(f), g⟩=1.
This lemma follows from Lemma <ref> and the definition of “good for g”.
Suppose that k is an even integer and N is a natural number. Let g ∈ S_k(Γ_0(N)) be a normalized newform with complex multiplication by an imaginary quadratic field K and p an odd prime number.
Assume that p is inert in 𝒪_K and p ∤ N.
If dim_ℂ S_k(Γ_0(N))=1, then v_p(α_g)=0.
From the discussion in Theorem <ref>, it is sufficient to show that c=1.
From the definition of ⟨· , ·⟩, we have
⟨ F , g⟩=1.
Since D^k-1(f)=cF in S̄_k^#, 0( Γ _0( N) ),
1=⟨ D^k-1(f), g⟩=⟨ cF, g ⟩=c⟨ F, g⟩=c.
|
http://arxiv.org/abs/2307.00962v2
|
20230703122608
|
Complex translation methods and its application to resonances for quantum walks
|
[
"Kenta Higuchi",
"Hisashi Morioka"
] |
math.SP
|
[
"math.SP",
"math-ph",
"math.AP",
"math.MP",
"81U24, 47A75"
] |
In this paper, some properties of resonances for multi-dimensional quantum walks are studied.
Resonances for quantum walks are defined as eigenvalues of complex translated time evolution operators in the pseudo momentum space.
For some typical cases, we show some results of existence or nonexistence of resonances.
One is a perturbation of an elastic scattering of a quantum walk which is an analogue of classical mechanics.
Another one is a shape resonance model which is a perturbation of a quantum walk with a non-penetrable barrier.
§ INTRODUCTION
In the research area of quantum physics, the study of resonances has a long history.
For Schrödinger operators, one often adopts the framework of semi-classical analysis.
The shape resonance models (<cit.>, <cit.>, <cit.>, <cit.>, <cit.>) deserve our attention in this paper.
A typical shape resonance model is the Schrödinger operator H(h)=- (h^2 /2)Δ +V on R^d with a small parameter h>0 where the potential V is like a cut-off harmonic oscillator (see Figure <ref>).
In the classical mechanics, if a particle has an energy λ in Figure <ref>, there is a trajectory bounded in the domain Ω^i.
On the other hand, the Schrödinger operator has no bound state since there is the tunneling effect.
Combes et al. <cit.> showed that there exist some resonances of H(h) in a small neighborhood of Dirichlet eigenvalues of -(h^2 /2)Δ +V in Ω^i if we take sufficiently small h>0.
Klein <cit.> proved the absence of resonances in a neighborhood of the essential spectrum of H(h) (which is the positive-semi axis) for the case where V is a non-trapping potential.
Their results suggest that the existence of resonances near the positive semi-axis reflects the existence of bounded classical trajectories.
We refer to <cit.> and <cit.> for general information.
See also the references therein.
In these previous works, the complex dilation (<cit.>) for Schrödinger operators was often used.
The essential spectrum is rotated and becomes a half line in the lower half plane.
On the other hand, eigenvalues and resonances are invariant under the complex dilation after the deformed essential spectrum moves over them.
Thus the complex dilation allows us to study resonances as isolated eigenvalues of the dilated Hamiltonian.
In this paper, we study resonances in the context of quantum walks (QWs) as a perturbation problem of eigenvalues of time evolution operators.
The spectral theory and the eigenvalue problem for time evolution operators of QWs on Z^d are studied in some recent works.
The properties of the essential spectrum are studied in <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>.
As an earlier work than these articles, Kato-Kuroda <cit.> presented an abstract theory of wave operators in view of perturbations of unitary operators rather than that of self-adjoint operators.
Their theory corresponds to considering the discrete unitary group associated with the unitary operators rather than the continuous one-parameter unitary group associated with self-adjoint operators.
We introduce the model of the d-dimensional QW which is studied in this paper.
For the sake of simplicity of notations, we restrict our consideration to the case d=2.
For other d, the argument is parallel.
In this paper, we consider finite rank perturbations of the free QW.
Let e_1 = [1,0]^𝖳 and e_2 = [0,1]^𝖳.
The free QW is given by the unitary operator U_0 =S on ℋ:= ℓ^2 ( Z^2 ; C^4 ) where S is the shift operator
(Su)(x)= [ u _← (x+e_1 ); u_→ (x-e_1 ); u_↓ (x+e_2 ); u_↑ (x-e_2 ) ] , x∈ Z^2 ,
for a C^4-valued sequence u={ [u_← (x), u_→ (x), u_↓ (x), u_↑ (x) ]^𝖳} _x∈ Z^2.
The perturbed QW is defined by a unitary operator U=SC on ℋ where C is the operator of multiplication by a matrix C(x)∈U (4) at every x∈ Z^2.
Note that the time evolutions of the free QW and the perturbed QW are defined by
Ψ (t,· )= U^t ψ , Ψ_0 (t,· )= U_0^t ψ , t∈ Z ,
for an initial state ψ.
If ψ∈ℋ, we have
Ψ (t,· ) _ℋ = Ψ_0 (t,· ) _ℋ = ψ _ℋ ,
for any t∈ Z since U and U_0 are unitary on ℋ.
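For readers who wish to experiment with this model, here is a small illustrative sketch (ours, not part of the analysis). It performs one time step Ψ ↦ UΨ = SCΨ for a finitely supported state stored as a dictionary from lattice points to 4-vectors; the Grover-type coin used inside Ω^i is an arbitrary choice consistent with the assumption below, and the conservation of the ℓ^2-norm reflects the unitarity of U.

```python
# Small sketch (ours) of one step Psi -> U Psi = S C Psi; chirality order (left, right, down, up).
import numpy as np

def step(psi, C_of):
    coined = {x: C_of(x) @ v for x, v in psi.items()}          # apply the coin at every occupied site
    out = {}
    for (x1, x2), v in coined.items():
        for target, comp in (((x1 - 1, x2), 0),                # left-movers go to x - e1
                             ((x1 + 1, x2), 1),                # right-movers to x + e1
                             ((x1, x2 - 1), 2),                # down-movers to x - e2
                             ((x1, x2 + 1), 3)):               # up-movers to x + e2
            out.setdefault(target, np.zeros(4, dtype=complex))[comp] += v[comp]
    return out

# example: a Grover-type coin inside Omega^i = {|x1|,|x2| <= M0}, identity outside (arbitrary choice)
M0 = 1
G = np.full((4, 4), 0.5) - np.eye(4)
C_of = lambda x: G if max(abs(x[0]), abs(x[1])) <= M0 else np.eye(4)

psi = {(0, 0): np.array([1.0, 0, 0, 0], dtype=complex)}
for _ in range(10):
    psi = step(psi, C_of)
norm2 = sum(np.vdot(v, v).real for v in psi.values())
assert abs(norm2 - 1.0) < 1e-12                                # the l^2 norm is conserved
```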
Throughout this paper, we assume the following property.
Due to this assumption, the perturbation V:= U-U_0 is of finite rank.
(A-1) There exists a positive integer M_0 such that C(x)=I_4 which is the 4× 4 identity matrix for x∈ Z^2 ∖Ω^i where Ω^i = { x∈ Z^2 ; |x_1| ≤ M_0, |x_2| ≤ M_0 }.
In general, the operator U may have some eigenvalues.
In the settings introduced as above, associated eigenstates are localized in the domain Ω^i.
Under some suitable conditions, we can find another QW U' such that U' has no eigenvalue even if U'-U is sufficiently small in a suitable topology and U has some eigenvalues.
This kind of situations motivates us to study resonances of QWs.
Namely, the eigenvalues of U may move into the “unphysical” second sheet of the Riemann surface.
In order to define resonances of the QW U, we consider the meromorphic extension of the resolvent operator R(κ)=(U-e^-iκ )^-1 with Im κ >0 to the lower half region of the complex torus T _ C: = C /2π Z.
In this paper, we use the complex translation in the pseudo momentum space as an analogue of the complex dilation.
We note that a resonance expansion in view of the dynamics of one-dimensional QWs is going to be shown in the forthcoming paper <cit.>.
This paper is organized as follows.
At the beginning of Section 2, we recall some known facts in the spectral theory for QWs.
After that, we introduce the complex translation in the pseudo momentum space.
The rigorous definition of (outgoing) resonances is given here.
In Section 3, we study elastic scattering of QWs.
Some relations between eigenvalues or resonances and closed trajectories are given here.
Elastic scattering of QWs is an analogue of classical mechanics.
After that, we also study a QW with a non-penetrable barrier on the boundary of Ω^i.
Some eigenvalues appear in this model.
This model corresponds to the Schrödinger operator with the Dirichlet boundary condition.
Some estimates of resolvent operators are given here.
In Section 4, an example of resonances is given in a constructive approach.
Proposition <ref> is our result of this section.
Namely, we consider U for the case where U has (approximately) a few closed trajectories in view of the elastic scattering.
Next we consider an analogue of shape resonance models in Section 5.
We prove that some eigenvalues of a QW with non-penetrable barrier move into the second sheet of the Riemann surface due to a perturbation.
As a conclusion, we show Theorem <ref>.
Corollary <ref> is our main result.
The notations which are used throughout this paper are as follows.
As above, we put ℋ = ℓ^2 ( Z^2 ; C^4 ) equipped with the inner product
(f,g) _ℋ = ∑ _x∈ Z^2∑ _j∈{← , → , ↓ , ↑} f_j (x) \overline{g_j (x)} , f,g∈ℋ.
We often use the standard basis on the vector space R^2 and C^4.
We denote it by e_1 = [1,0]^𝖳, e_2 = [0,1]^𝖳, and e _← = [1,0,0,0]^𝖳, e _→ = [0,1,0,0]^𝖳, e _↓ = [0,0,1,0] ^𝖳, e _↑ = [0,0,0,1]^𝖳.
For an operator A on ℋ, we denote by σ (A), σ_ess (A), σ_p (A) and σ_ac (A) the spectrum, the essential spectrum, the point spectrum and the absolutely continuous spectrum of A, respectively.
For Banach spaces ℋ_1 and ℋ_2, B ( ℋ_1 ; ℋ_2 ) denotes the space of bounded linear operators from ℋ_1 to ℋ_2.
If ℋ_1 = ℋ_2, we simply write B (ℋ_1 ; ℋ_1 )= B (ℋ_1 ).
The flat torus is defined by T^d = R^d / 2π Z^d.
The complex torus is defined by T _ C^d = C^d /2π Z ^d = T ^d +i R ^d.
For a∈ R, we put
𝒪^+_a = {κ∈ T _ C ; Im κ >a } , 𝒪^-_a = {κ∈ T _ C ; Im κ < a } .
§ COMPLEX TRANSLATION ON THE PSEUDO MOMENTUM SPACE
§.§ Preliminary results of spectra
First of all, let us recall the spectra of U_0 and U under the assumption (A-1).
The Fourier transform is defined by
u (ξ )=(ℱu)(ξ )= 1/2π∑ _x∈ Z^2 e^-ix·ξ u(x), ξ∈ T^2 ,
where x·ξ = x_1 ξ_1 +x_2 ξ_2.
It is well-known that ℱ is a unitary operator from ℋ to ℋ := L^2 ( T^2 ; C^4 ).
The operator U_0 = ℱ U_0 ℱ^* is multiplication by the 4× 4 unitary matrix
U_0 (ξ )= diag [ e^iξ_1 , e^-iξ_1 , e^iξ_2 , e^-iξ_2 ] .
For κ∈ C, we have
det (U_0 (ξ ) - e^-iκ ) = (e^iξ_1 -e^-iκ )(e^-iξ_1 -e^-iκ )(e^iξ_2 -e^-iκ )(e^-iξ_2 -e^-iκ ).
This formula determines the spectrum of U_0.
We have σ (U_0)= σ_ac (U_0)= { e^-iλ ; λ∈ [0,2π )} =S^1.
Since V=U-U_0 is of finite rank, we can see that the essential spectrum of U coincides with σ_ess (U_0)=S^1.
For details of this topic, the rigorous proof was given in <cit.> which is parallel to the well-known Weyl's singular sequence lemma for compact perturbations of self-adjoint operators (see e.g. <cit.>).
We can also prove the absence of the singular continuous spectrum of U (see <cit.> for 1DQWs and <cit.> for 2DQWs).
If there exist some eigenvalues, they are embedded in the essential spectrum.
Under the assumption (A-1), we have σ_ess (U)=S^1 and σ_p (U) consists of eigenvalues lying on S^1 with finite multiplicities.
Remark.
If we add another condition (C), we can show the absence of eigenvalues (<cit.>, <cit.>, <cit.> for 1DQWs, and <cit.> for 2DQWs).
(C) Let C(x)=[ c_j,k (x) ] _j,k∈{← , → , ↓ , ↑} for every x∈Ω^i.
One of the following properties holds true.
* Neither [c_j,k (x)] _j,k∈{← , ↓} nor [c_j,k (x)] _j,k∈{→ , ↑} vanishes for any x∈Ω^i.
* Neither [c_j,k (x)] _j,k∈{← , ↑} nor [c_j,k (x)] _j,k∈{→ , ↓} vanishes for any x∈Ω^i.
Note that the condition (C) is sufficient for the absence of eigenvalues.
For the most part of our argument, we do not assume that (C) holds true.
We use (C) in a typical case appearing in the context of scattering theory.
§.§ Complex translation on the pseudo momentum space
Suppose that an operator A on a Hilbert space ℍ has an isolated eigenvalue λ∈ C.
The algebraic multiplicity of λ is defined by the rank of the operator
P_A (λ)=-1/2π i∮ _ℒ(λ) (A-z)^-1 dz,
where ℒ (λ ) is a sufficiently small counterclockwise loop without self-intersection such that there is no other eigenvalues inside ℒ (λ ).
Then the resolvent operator (A-z)^-1 acts a crucial role in the study of eigenvalues.
On the other hand, if an eigenvalue λ is embedded in the continuous spectrum of A, the projection P_A (λ) is not well-defined.
For the case ℍ=ℋ and A=U, eigenvalues are embedded in the essential spectrum in view of Lemma <ref>.
Then we have to avoid the difficulty for P_A (λ).
In order to do this, we deform the continuous spectrum σ_ess (U) by using a complex translation on the pseudo momentum space T^2.
Let us begin with the real translation operator.
For θ∈ T, we define the operator T (θ ) of translation by
(T (θ ) u )(ξ )= [ u _← (ξ_1 - θ , ξ_2 ); u _→ (ξ_1 + θ , ξ_2 ); u _↓ (ξ_1 , ξ_2 - θ ); u_↑ (ξ_1 , ξ_2 + θ ) ] , u∈ L^2 ( T^2 ; C^4 ).
Obviously, T ( θ ) is unitary on L^2 ( T^2 ; C^4 ) for every θ∈ T.
We immediately see
U_0 (θ ):= T ( θ ) U_0 T (θ )^-1 = e^-iθU_0 ,
R_0 ( κ , θ ):= T ( θ ) R_0 (κ) T (θ )^-1 = e^iθR_0 (κ -θ) , κ∈ T _ C ,
where
R_0 (κ )=(U_0 -e^-iκ )^-1 , R_0 (κ )=ℱ R_0 (κ) ℱ^* .
Thus U_0 (θ) as well as R_0 (κ ,θ) are naturally defined for θ∈𝒪^-_0 and κ∈𝒪_Im θ^+, and they are operators belonging to B (L^2 ( T^2 ; C^4)).
Precisely, we have
U_0 (θ )= e^-i Re θ e^Im θU_0 , θ∈𝒪^-_0 .
As a direct consequence, the spectrum of U_0 is deformed as follows.
We have σ (U_0 (θ ))= σ_ac (U_0 (θ)) = { e^Im θ e^iλ ; λ∈ [0,2π ) } for θ∈𝒪^-_0.
Let us turn to the complex translation U(θ) of U.
At first we consider the real translation again.
In order to construct U(θ), it is convenient to use T(θ)= ℱ^* T (θ ) ℱ for θ∈ T.
In fact, we have
(T(θ)u)(x)= diag [e^iθ x_1 , e^-iθ x_1 , e^i θ x_2 , e^-i θ x_2 ] u(x) , x∈ Z^2 ,
for θ∈ T.
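The identity U_0(θ)=e^{-iθ}U_0 can be tested directly in position space with this formula. The following sketch (ours, purely illustrative) conjugates the free shift S by T(θ) for a complex θ on a finitely supported random state and checks that the result is e^{-iθ}S applied to the same state.

```python
# Quick check (ours): T(theta) S T(-theta) = e^{-i theta} S on finitely supported states.
import numpy as np

def shift(psi):                                   # the free shift S; chirality order (left, right, down, up)
    out = {}
    for (x1, x2), v in psi.items():
        for dx, dy, c in ((-1, 0, 0), (1, 0, 1), (0, -1, 2), (0, 1, 3)):
            out.setdefault((x1 + dx, x2 + dy), np.zeros(4, dtype=complex))[c] += v[c]
    return out

def T(theta, psi):                                # diag(e^{i th x1}, e^{-i th x1}, e^{i th x2}, e^{-i th x2})
    return {(x1, x2): np.array([np.exp(1j * theta * x1), np.exp(-1j * theta * x1),
                                np.exp(1j * theta * x2), np.exp(-1j * theta * x2)]) * v
            for (x1, x2), v in psi.items()}

rng = np.random.default_rng(0)
psi = {(x1, x2): rng.standard_normal(4) + 1j * rng.standard_normal(4)
       for x1 in range(-2, 3) for x2 in range(-2, 3)}
theta = 0.3 - 0.2j
lhs = T(theta, shift(T(-theta, psi)))
rhs = {x: np.exp(-1j * theta) * v for x, v in shift(psi).items()}
assert all(np.allclose(lhs[x], rhs[x]) for x in rhs)
```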
We put
V(θ ):= T(θ )VT(θ )^-1 .
In view of the assumption (A-1), we can extend V(θ) to θ∈𝒪^-_0 as a finite rank operator on ℋ.
Then we obtain a construction of the complex translation
U(θ)=T(θ)UT(θ)^-1 =U_0 (θ) +V(θ ) ∈ B (ℋ), θ∈𝒪^-_0,
and formally define
R(κ , θ)= T(θ)R(κ)T(θ)^-1 = (U(θ)-e^-iκ )^-1∈ B (ℋ ), e^-iκ∉σ (U(θ)) .
The compactness of V(θ ) and the definition of T(θ) imply the following property of the continuous spectrum.
For θ∈𝒪^-_0, we have σ_ess (U(θ)) = σ_ess (U_0 (θ)) = { e^Im θ e^iλ ; λ∈ [0,2π ) }.
Now let us consider the meromorphic extension of R(κ) from 𝒪^+_0 to the second sheet of the Riemann surface. We show that this meromorphic extension coincides with R(κ,θ) for θ∈𝒪^- _0 and κ∈𝒪^+ _Im θ.
We put
𝒟 = {f | _ T^2 ; f is a C^4 -valued analytic function on T^2 _ C} ,
and
𝒟 = ℱ^* 𝒟 .
Note that 𝒟 is the set of C^4-valued, super-exponentially decreasing sequences on Z^2 in view of Paley-Wiener's theorem (<cit.>) :
𝒟 = ∩ _γ >0{ f∈ℋ ; e^τ⟨·⟩f ∈ℋ for any 0≤τ <γ} .
The operator T(θ) for θ∈𝒪^-_0 has 𝒟 as its dense domain in ℋ.
We define the function F_f ( κ ) for κ∈𝒪_0^+ and f∈𝒟 by
F_f (κ )= (R(κ)f,f) _ℋ .
For any f∈𝒟 and any a<0, the function F_f (κ) has the meromorphic extension to 𝒪^+_a with poles of finite rank.
Poles of F_f (κ) lie on T or in 𝒪^-_0 ∩𝒪^+ _a.
Proof.
In this proof, we sometimes use the analytic Fredholm theory (see e.g. <cit.>).
Since U is a unitary operator on ℋ, R(κ) is analytic in 𝒪^+_0 with respect to κ.
Thus F_f (κ) with κ∈𝒪^+_0 is well-defined for any f∈ℋ.
In view of the resolvent equation
R(κ)=R_0 (κ)-R(κ)VR_0 (κ)= R_0 (κ)-R_0 (κ)VR(κ),
we have
(1-VR(κ))(1+VR_0 (κ))=(1+VR_0 (κ))(1-VR(κ))=1.
This implies that 1+VR_0 (κ) is invertible for κ∈𝒪^+_0 and we obtain
R(κ)= R_0 (κ) (1+VR_0 (κ))^-1 , κ∈𝒪^+_0 .
By the same way, we also see
R(κ,θ)=R_0 (κ,θ)-R(κ,θ) V (θ )R_0 (κ,θ)=R_0 (κ,θ)-R_0 (κ,θ) V (θ )R (κ,θ),
(1-V(θ)R(κ,θ))(1+V(θ)R_0 (κ,θ))
= (1+V(θ)R_0 (κ,θ))(1-V(θ)R(κ,θ))
=1,
R(κ,θ)= R_0 (κ,θ) (1+V(θ)R_0 (κ,θ))^-1 ,
for κ∈𝒪^+_0 and θ∈ T.
Since T(θ ) is unitary on ℋ for θ∈ T, we have
F_f (κ ) = (T(θ)R(κ )f,T(θ)f)_ℋ
= (R(κ,θ) T(θ)f,T(θ)f)_ℋ
= ( R_0 (κ ,θ) (1+ V (θ) R_0 (κ,θ))^-1 T( θ )f, T(θ )f)_ℋ ,
for κ∈𝒪^+_0.
Here we have used the formulas (<ref>)-(<ref>).
We fix κ_0 ∈𝒪^+ _0 and f∈𝒟.
If θ∈𝒪^- _Im κ_0, we have Im (κ _0 -θ)>0.
Then V (θ) R_0 (κ_0,θ) = e^iθ V(θ) R_0 (κ_0 - θ) is compact and analytic with respect to θ∈𝒪^-_Im κ_0.
The existence of the inverse (1+V(θ)R_0 (κ_0 ,θ ))^-1 for θ∈ T⊂𝒪^-_Im κ_0 follows from (<ref>).
The analytic Fredholm theory shows the existence of the meromorphic extension of (1+ V (θ) R_0 (κ_0,θ) )^-1 to θ∈𝒪^- _Im κ_0 with poles of finite rank.
Then the function
G_f (κ_0 ,θ )= ( R_0 (κ_0 , θ) (1+ V (θ) R_0 (κ_0,θ) )^-1 T(θ) f, T(θ)f )_ℋ ,
of θ is meromorphic in 𝒪^-_Im κ_0.
Furthermore, the equality (<ref>) for θ∈ T implies that G_f (κ_0 , θ )=F_f (κ_0 ) for any θ∈ T.
Then the function G_f (κ_0 , θ ) is a constant in 𝒪^- _Im κ_0, i.e.,
G_f (κ_0 , θ )=F_f (κ_0) , θ∈𝒪^- _Im κ_0 .
As a consequence, we see that (1+ V (θ) R_0 (κ_0,θ) )^-1 is analytic with respect to θ∈𝒪^- _Im κ_0.
Next we fix θ_0 ∈𝒪^-_0 and f∈𝒟.
From the above argument, we have G_f (κ ,θ_0 )= F_f (κ ) for κ∈𝒪^+_0.
Noting Im κ - Im θ_0 >0 for κ∈𝒪^+ _Im θ_0, we see that R_0 (κ , θ_0 )= e^iθ_0 R_0 (κ - θ_0 ) and V(θ_0 ) R_0 (κ, θ_0 )= e^iθ_0 V( θ_0 ) R_0 ( κ - θ_0 ) are analytic with respect to κ∈𝒪^+_Im θ_0.
We also see that V(θ_0 ) R_0 (κ, θ_0 ) is compact for κ∈𝒪^+_Im θ_0.
The existence of the inverse (1+ V(θ_0 ) R_0 (κ , θ_0 ))^-1 for some κ∈𝒪^+_0 has been shown in the above argument, since (1+ V(θ ) R_0 (κ , θ ))^-1 is analytic with respect to θ∈𝒪^- _Im κ.
Then the analytic Fredholm theory implies that (1+ V(θ_0 ) R_0 (κ , θ_0 ))^-1 has the meromorphic extension to κ∈𝒪^+ _Im θ_0 with poles of finite rank.
Therefore, we obtain the meromorphic extension of F_f (κ) to the domain 𝒪^+ _Im θ_0 with poles of finite rank.
Since θ_0 ∈𝒪^-_0 is arbitrary, we obtain the meromorphic extension of F_f (κ ) to 𝒪^+_a for any a<0.
Finally, we note that the discrete eigenvalues of U(θ) in the region C^e _θ := { z∈ C ; |z|> e^Im θ} are invariant with respect to θ.
An eigenvalue λ (θ ) of U(θ ) moves analytically with respect to θ∈𝒪^+_a for any a<0 as long as λ (θ ) belongs to C^e _θ.
For any α∈ T, U( θ + α ) and U(θ ) are unitary equivalent.
Then we have λ ( θ + α )=λ (θ ).
By the analyticity of λ (θ ), this implies that λ (θ ) is a constant with respect to θ as long as λ (θ)∈ C^e _θ.
In view of Lemma <ref> and its proof, the rigorous definition of resonances is given as follows.
Note that the resonances are independent of choice of θ as long as they are in C^e _θ even though they are defined as eigenvalues of U( θ ).
Suppose that θ∈𝒪^-_0 is fixed.
We call e^-iκ for κ∈𝒪^+ _Im θ a resonance of U if e^-iκ∈σ_p (U(θ )).
Now we can define the algebraic multiplicity of a resonance λ of U by
P_U ( λ )=- 1/2π i∮ _ℒ (λ ) (U(θ)-z )^-1 dz,
where ℒ (λ ) is a sufficiently small counterclockwise loop without self-intersection and the inside of ℒ (λ ) contains no eigenvalues except for λ.
However, it is convenient to replace P_U (κ):= P_U (e^-iκ ) with z=e^-iμ by the integral of μ as
P_U (κ )= 1/2π∮ _ℒ (κ) e^-iμ R(μ ,θ ) dμ,
where ℒ (κ ) is a sufficiently small counterclockwise loop without self-intersection such that there is no other poles inside ℒ (κ ).
Note that we can take ℒ (e^-iκ) such that ℒ (κ) satisfies the above condition under the change of variable z=e^-iμ.
See Lemma <ref>.
Suppose that θ∈𝒪^-_0 is fixed.
If e^-iκ with κ∈𝒪^+ _Im θ is a resonance of U, the algebraic multiplicity of e^-iκ is defined by rank P_U (κ ).
§.§ Outgoing solution
If e^-iλ with λ∈ T is an eigenvalue of U, then e^-iλ is an eigenvalue of U(θ ) for any θ∈𝒪^-_0 as we have seen in the proof of Lemma <ref>.
More precisely, the following property holds true.
If e^-iκ∈σ_p (U) with κ∈ T, then we have e^-iκ∈σ_p (U(θ )).
Each associated eigenfunction u∈ℋ is supported in Ω^i.
Proof.
It suffices to show supp u ⊂Ω^i for any eigenfunction u.
Suppose that Uu=e^-iκ u holds true.
For x∈ Z^2 ∖Ω^i with x_1 ≤ -M_0 -1, we have u_← (x )=e^-iκ u_← (x-e_1 ).
Applying this equation at points x,x-e_1,…,x-(N-1)e_1 for any positive integer N, we have u_← (x)= e^-iκ N u _← (x-Ne_1 ).
In view of u∈ℋ, u_← (x) satisfies
u_← (x )= lim_N→∞ e^-iκ N u_← (x-Ne_1 )=0.
We can see u(x)=0 for any x∈ Z^2 ∖Ω^i repeating similar procedures.
The resonances given by Definition <ref> are related to outgoing states as follows.
Let us fix θ∈𝒪^-_0 arbitrarily.
A resonance e^-iκ with κ∈𝒪^+ _Im θ is characterized by the existence of outgoing solutions to the equation Uu=e^-iκ u, i.e., u is of the form
u_← (x)= {
a_← (x_2) e^-iκ x_1 , x_1 < -M_0 ,
0 , x_1 > M_0 ,
.
u_→ (x)= {
0 , x_1 < -M_0 ,
a_→ (x_2) e^iκ x_1 , x_1 > M_0 ,
.
u_↓ (x)= {
a_↓ (x_1) e^-iκ x_2 , x_2 < -M_0 ,
0 , x_2 > M_0 ,
.
u_↑ (x)= {
0 , x_2 < -M_0 ,
a_↑ (x_1) e^iκ x_2 , x_2 > M_0 ,
.
where a_← (x_2 ), a_→ (x_2 ), a_↓ (x_1) and a_→ (x_1 ) are complex valued sequences.
We show that
supp a_j ⊂{ -M_0 , -M_0 +1 , … ,M_0 -1, M_0 } ,
for every j∈{← , → , ↓ , ↑} in the proof of Proposition <ref>.
By the definition, outgoing solutions do not belong to ℋ and they are exponentially growing at infinity if κ∈𝒪^+ _Im θ∩𝒪^-_0.
For κ∈𝒪^-_0, e^-iκ is a resonance of U if and only if there exists a non-trivial outgoing solution u to the equation Uu=e^-iκ u.
A solution u is outgoing if and only if v:=T(θ)u belongs to ℋ for θ∈𝒪_Imθ^-.
Proof.
If e^-iκ is a resonance of U, there exists a non-trivial solution v∈ℋ to the equation U(θ)v=e^-iκ v by Definition <ref>.
In view of U(θ) = e^-iθ ST(θ)CT(θ)^-1, v satisfies ST(θ)CT(θ)^-1 v= e^-i(κ -θ) v.
For x_1 < -M_0, we have
v_← (x)=e^-i(κ -θ) v_← (x-e_1 ) ,
which implies v_← (x) = a_← (x_2 ) e^-i (κ - θ )x_1 for a sequence a_← (x_2 ).
For x_1 > M_0, we have
v_← ( x+e_1 )= e^-i(κ - θ ) v_← (x),
which implies v_← (x)= a'_← (x_2) e^-i(κ-θ )x_1 for a sequence a'_← (x_2 ).
In view of v∈ℋ, a' _← (x_2) must vanish for any x_2, since e^-i (κ - θ) x_1 grows exponentially as x_1 →∞.
For v_j with j∈{→ , ↓ , ↑}, the argument is similar.
We obtain
v_← (x)= {
a_← (x_2 ) e^-i (κ - θ ) x_1 , x_1 < -M_0 ,
0 , x_1 > M_0 ,
.
v_→ (x)= {
0 , x_1 < -M_0 ,
a_→ (x_2 ) e^i (κ - θ ) x_1 , x_1 >M_0 ,
.
v_↓ (x)= {
a_↓ (x_1) e^-i (κ - θ ) x_2 , x_2 < -M_0 ,
0 , x_2 > M_0 ,
.
v_↑ (x)= {
0 , x_2 < -M_0 ,
a_↑ (x_1 ) e^i (κ - θ ) x_2 , x_2 >M_0 .
.
For |x_2| >M_0, v_← (x+e_1 )= e^-i (κ - θ ) v_← (x) for all x_1 ∈ Z.
Then a_← (x_2)=0 for |x_2| > M_0 due to v∈ℋ.
For a_→ (x_2), a_↓ (x_1 ) and a_↑ (x_1), the proof is similar.
Now we put u=T(θ)^-1 v.
In view of U(θ)v=e^-iκv, we have Uu=e^-iκ u.
Due to (<ref>)-(<ref>), u is obviously outgoing.
Let us turn to the proof of the converse.
For an outgoing solution u to Uu=e^-iκ u, we put v= T(θ )u.
Note that we see supp a_j ⊂{ -M_0 , -M_0 +1 , … ,M_0 -1, M_0 } for every j∈{← , → , ↓ , ↑} by the equation Uu=e^-iκ u and the assumption κ∈𝒪^+ _Im θ∩𝒪^- _0.
Due to the definition of T(θ ), we see that v∈ℋ solves the equation U(θ)v= e^-iκ v.
§ TRAPPING AND NON-TRAPPING TRAJECTORY
§.§ Elastic scattering of QWs
Here we consider a special case of QWs.
Namely, we introduce elastic scattering of QWs and define a trajectory of the quantum walker as a particle.
Let 𝒫 _ch be the set of permutations
σ = [ ← → ↓ ↑; σ (←) σ (→) σ (↓) σ (↑) ].
We define the coin operator C_el of multiplication by the matrix C_el (x) ∈U (4) of the form
C_el (x)= {
[ e^iα _j (x) e _σ (x,j) ]_j∈{← , → , ↓ , ↑} , x∈Ω^i ,
I_4 , x∈ Z^2 ∖Ω^i ,
.
where α_j (x)∈ T and
σ (x):= [ ← → ↓ ↑; σ (x,←) σ (x,→) σ (x,↓) σ (x,↑) ]∈𝒫_ch .
The scattering process associated with the QW U_el =SC_el is an elastic scattering, i.e., the quantum walker behaves like a classical particle and scatters without loss of its energy at every point x∈ Z^2.
Indeed, we have
(C_elu)(x)= ∑ _j∈{← , → , ↓ , ↑} e^iα_j (x) u_j (x) e _σ (x,j).
It follows that
(U_el u)_← (x)= (C_el u)_← (x+e_1) = e^iα_σ^-1 (x+e_1 ,←) (x+e_1) u _σ^-1 (x+e_1,←) (x+e_1) ,
(U_el u)_→ (x)= (C_el u)_→ (x-e_1) = e^iα_σ^-1 (x-e_1 ,→) (x-e_1) u _σ^-1 (x-e_1,→) (x-e_1) ,
(U_el u)_↓ (x)= (C_el u)_↓ (x+e_2) = e^iα_σ^-1 (x+e_2 ,↓) (x+e_2) u _σ^-1 (x+e_2 ,↓) (x+e_2) ,
(U_el u)_↑ (x)= (C_el u)_↑ (x-e_2) = e^iα_σ^-1 (x-e_2 ,↑) (x-e_2) u _σ^-1 (x-e_2,↑) (x-e_2) ,
for every x ∈ Z^2, extending σ (x)∈𝒫_ch to be the identity for x∈ Z^2 ∖Ω^i.
Let us introduce the trajectory of the QW U_el.
We take the initial state
f_ y,j,α ={δ_x,y e^iα e _j } _x∈ Z^2∈ℋ ,
for y∈ Z^2, j∈{← , → , ↓ , ↑}, and α∈ T.
Here δ_x,y denotes the Kronecker delta.
The dynamics of U_el is an elastic scattering.
U^t_el f_ y,j,α for every t∈ Z is of the form
(U^t_el f_ y,j,α )(x)= δ_x,q(t,y,j ) e^iα (t,y,j) e _p(t,y,j) ,
where α (t,y,j)∈ T and
q(t,y,j)= supp (U^t_el f_ y,j,α ), q(0,y,j)=y,
p(t,y,j)∈{← , → , ↓ , ↑} , p(0,y,j)=j .
In view of (<ref>)-(<ref>), the mapping Φ ( · ,y,j) : Z→ Z^2 ×{← , → , ↓ , ↑} defined by
Φ (t,y,j)= (q(t,y,j),p(t,y,j)), Φ (0,y,j)=(y,j),
is one of analogues of classical trajectories.
By the definition, Φ (· ,y,j) is independent of α.
Obviously, the trajectory Φ (·,y,j) is uniquely determined by the operator U_el and the initial value (y,j).
Thus it follows that each trajectory Φ (·,y,j) does not have junctions (Φ (·,y,j) has neither confluences nor branches).
For Φ ( · ,y,j), we define the following notions.
* We call Φ ( · ,y,j) a bounded trajectory if there exists a constant ρ >0 such that |q(t,y,j)| ≤ρ for any t∈ Z.
* We call Φ ( · ,y,j) a closed trajectory if there exist integers t_1 , t_2 ∈ Z with t_1 < t_2 such that Φ (t_1 ,y,j) = Φ (t_2 ,y,j).
For a constant ρ >0, the subset { x∈ Z^2 ; |x|≤ρ}×{← , → , ↓ , ↑} is a finite set.
Thus we see the following property.
Let Φ (· ,y,j) for (y,j)∈ Z^2 ×{← , → , ↓ , ↑} be a trajectory.
* The trajectory Φ (· ,y,j) is closed if and only if Φ (· ,y,j) is bounded.
* The trajectory Φ (· ,y,j) satisfies |q(t,y,j)|→∞ as t→∞ if and only if |q(t,y,j)|→∞ as t→ -∞.
Proof.
The statement (1) is trivial in view of the finiteness of the subset { x∈ Z^2 ; |x|≤ρ}×{← , → , ↓ , ↑} for any constant ρ >0.
We consider the statement (2).
Suppose that there exists a constant ρ_+ >0 such that |q(t,y,j)| ≤ρ_+ for t≥ 0 even though |q(t,y,j)|→∞ as t→ -∞.
The finiteness of the subset { x∈ Z^2 ; |x|≤ρ_+ }×{← , → , ↓ , ↑} implies that Φ (t,y,j) for t≥ 0 consists of a closed trajectory.
Then we can take nonnegative integers t_1 and t_2 with t_1 < t_2 such that Φ (t_1 ,y,j)=Φ (t_2 ,y,j).
Moreover, we can choose t_1 ≥ 0 as the minimum of integers which satisfy the above situation.
By the assumption, Φ (t,y,j) for t< t_1 is an unbounded trajectory without junctions.
On the other hand, Φ (· ,y,j) has a junction (a confluence) at the point q(t_1,y,j).
This is a contradiction for the definition of U_el.
If we replace t→∞ and t→ -∞, the proof is parallel.
On a closed trajectory Φ (·,y,j), we can construct an eigenfunction of U_el as follows.
The existence of closed trajectories associated with U_el determines the existence of eigenvalues of U_el.
There exists a pair (y,j) ∈ Z^2 ×{← , → , ↓ , ↑} such that Φ ( · , y,j ) is closed if and only if there exist some eigenvalues of U_el.
If Φ ( · , y,j ) is closed with its period N, then there exist N eigenvalues satisfying (<ref>), where associated eigenfunctions are supported only on {Φ(t,y,j); t≥0}.
Remark.
Here, we say a sequence u={[u_← (x),u_→ (x),u_↓ (x),u_↑ (x)]^𝖳}_x∈ Z^2 is supported on a subset A⊂ Z^2×{← , → , ↓ , ↑} if u_j(x)=0 for each (x,j)∉ A.
Note that the condition (<ref>) associated with the periodic trajectory is similar to the well-known Bohr-Sommerfeld quantization condition in the quantum mechanics.
Proof.
Suppose that there is an initial value (y,j) such that Φ (· , y,j) is closed.
Without loss of generality, we can assume that Φ (0,y,j)=Φ (N,y,j) for a positive integer N.
In fact, N must be even.
In the following, we choose the smallest N such that the above situation holds.
Let q(t) = q(t,y,j) and p(t) = p(t,y,j) for t=0,1,2,….
Thus the trajectory { (q(t),p(t)) } _t≥ 0 is closed with (q(0),p(0))=(q(N),p(N))=(y,j).
By the definition of Φ (· ,y,j), we note
(U_el u)_p(t) (q(t) )= e^iα _p(t-1) (q(t-1)) u_p(t-1) (q(t-1) ),
for every t=0,1,…,N-1.
Letting f_t = u_p(t) (q(t) ) and β_t = α _p(t) (q(t) ), the equation U_el u= e^-iλ u on { q(t) } _t=0^N can be rewritten as
e^iβ _t-1 f_t-1 = e^-iλ f_t ,
in view of (<ref>)-(<ref>).
As a consequence, we have the equality
f_0 = e^i(λ N+β_0 + ⋯ + β_N-1) f_0 .
Suppose that f_0 ≠ 0 is given and the equality
λ≡ - 1/N∑ _t=0^N-1β_t modulo 2π/N ,
holds true.
Then the sequence { f_t } is uniquely determined from f_0 by the equality (<ref>).
Moreover, the function u∈ℋ with
u_j (x)= {
f_t , x=q(t) , j=p(t) , t=0,1,…,N-1,
0 , otherwise ,
.
is an eigenfunction of U_el associated with the eigenvalue e ^-iλ.
Conversely, suppose that for any (y,j), the trajectory Φ (· ,y,j) is not closed.
Let u∈ℋ be a solution to the equation U_el u= e^-iλ u for a constant λ∈ T. We prove the claim by showing that u is trivial, that is, u vanishes identically.
Fix an arbitrary point x∈ Z^2 and an arbitrary chirality j∈{← , → , ↓ , ↑}.
We consider the trajectory Φ (· ,x,j).
Since there is no bounded trajectory in view of Lemma <ref>, we note |q(t,x,j)| →∞ as |t|→∞.
We introduce the notation Φ (t,x,j) = (q(t),p(t)) by q(0) =x, q(± 1) =q ( ± 1,x,j), q(± 2) = q(± 2,x,j), … and p(0)=j, p(± 1) =p(± 1,x,j), p(± 2) = p(± 2 ,x,j), ….
Letting f_t = u_p(t) (q(t) ) and β_t = α _p(t) (q(t) ), we obtain
e^ iβ_t-1 f_t-1 = e^-iλ f_t , t∈ Z,
and thus
f_0 = e^-i (λ N + β_0 + β_1 +⋯ + β_N-1 ) f_N, |f_N|=|f_0|
for any positive integer N.
On the other hand, u∈ℋ implies
∑_t∈ Z| f_0 |^2 = ∑_t∈ Z|f_t|^2 ≤ ‖u‖^2_ℋ < + ∞ .
As a consequence, we obtain f_0 =u_j (x)=0. We conclude that u=0 since the choice of (x,j) is arbitrary.
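As a small numerical illustration of the quantization condition (a sketch added here, not taken from the paper), the N eigenvalues attached to a closed trajectory of period N can be computed directly from the coin phases β_0,…,β_N-1; the phase list in the example is a hypothetical choice (all phases zero), which reproduces the spectrum of the rectangle model treated in Section 4.

```python
import numpy as np

def eigenvalues_from_closed_trajectory(beta):
    """The N unimodular eigenvalues z = e^{-i*lambda} attached to a closed trajectory
    carrying coin phases beta_0, ..., beta_{N-1}, i.e. the solutions of
    lambda = -(1/N) * sum(beta)  (mod 2*pi/N)."""
    beta = np.asarray(beta, dtype=float)
    N = len(beta)
    lam = -(beta.sum() + 2 * np.pi * np.arange(N)) / N
    return np.exp(-1j * lam)

# hypothetical example: zero phases, period N = 2*(m0+n0) as for the rectangle model
m0, n0 = 2, 3
N = 2 * (m0 + n0)
z = eigenvalues_from_closed_trajectory(np.zeros(N))
print(np.allclose(z**N, 1.0))    # True: the eigenvalues are exactly the N-th roots of unity
```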
In view of Lemma <ref>, we call the QW U_el non-trapping if there is no closed trajectory associated with U_el.
We saw that a non-trapping elastic QW has no eigenvalues. The following lemma shows that it has no resonances either. Moreover, the absence of resonances other than eigenvalues holds for all elastic QWs.
Elastic QWs have no resonance other than eigenvalues. Moreover, a non-trapping QW does not have eigenvalues either.
Proof.
Let an outgoing sequence u satisfy e^-iκu=U_elu for some κ∈𝒪_0^-. For each (x,j)∈ Z^2×{←,→,↓,↑} such that Φ(·,x,j) is unbounded, u_j(x)=0 follows.
In particular, the absence of resonances of non-trapping elastic QWs is already obtained. Here, we use the facts that u is outgoing and that Φ(·,x,j) is an incoming straight line for t∈(-∞,t_0] for some t_0∈ Z.
If u_j(x)≠0 holds for some (x,j) such that Φ(·,x,j) is bounded, the same argument as in the proof of Lemma <ref> shows that κ has to satisfy the quantization condition (<ref>). This implies that κ is real, which contradicts κ∈𝒪_0^-.
§.§ Eigenvalue of QWs via non-penetrable barriers
Let us consider another setting of QWs which have no resonances other than some eigenvalues.
In this model, we can determine the sum of the geometric multiplicities of the eigenvalues.
Namely, we introduce a “non-penetrable barrier" on the boundary of Ω^i.
Let U_np denote a QW satisfying
(U u)_→ (x+e_1 )= u_← (x) , x∈ K_1^+ ,
(U u)_← (x-e_1 )= u_← (x) , x∈ K_1^- ,
(U u)_↑ (x+e_2 )= u_↓ (x) , x∈ K_2^+ ,
(U u)_↓ (x-e_2 )= u_↑ (x) , x∈ K_2^- ,
in addition to the assumption (A-1), where we put
K_1^± = { x∈Ω^i ; x_1 = ± M_0 , -M_0 ≤ x_2 ≤ M_0 } ,
K_2^± = { x∈Ω^i ; x_2 = ± M_0 , -M_0 ≤ x_1 ≤ M_0 } .
Under the above boundary condition on K= K^+_1 ∪ K^- _1 ∪ K^+ _2 ∪ K_2^- , U_np is completely split into two QWs U_i and U_e, independent of each other, in the following manner. Note that (± M_0 , ± M_0 ) ∈ K_1^±∩ K_2 ^± and (± M_0 , ∓ M_0 )∈ K_1^±∩ K_2^∓, respectively.
We define the exterior domain Ω^e by
Ω^e = ( Z^2 ∖Ω^i ) ∪ K.
Now we decompose ℋ into the direct sum of Hilbert spaces ℋ_i ⊕ℋ_e where
ℋ_i = { u∈ℓ^2 ( Z^2 ; C^4 ) ; u satisfies the condition (<ref>) } ,
ℋ_e = { u∈ℓ^2 ( Z^2 ; C^4 ) ; u satisfies the condition (<ref>) } ,
with
supp u ⊂Ω^i, u_← | _K_1^+ = u_→ | _K_1^- = u_↓ | _K_2^+ = u _↑ | _K_2^- =0,
supp u ⊂Ω^e , { u_← | _K_1^- ∪ K_2^+ ∪ K_2^- = u_→ | _K_1^+ ∪ K_2^+ ∪ K_2^- =0 ,
u_↓ | _K_2^- ∪ K_1^+ ∪ K_1^- = u_↑ | _K_2^+ ∪ K_1^+ ∪ K_1^- =0.
.
Then we have U _np = U_i ⊕ U_e on ℋ_i ⊕ℋ_e with U_i=U|_ℋ_i and U_e=U|_ℋ_e.
Roughly speaking, the condition (<ref>) means that the boundary K acts as a reflector sending walkers from Ω^e back into Ω^e, while quantum walkers do not leak out of Ω^i.
See Figure <ref>.
Remark.
The boundary condition defined by (<ref>) and (<ref>) is slightly complicated.
In order to avoid the complexity, it is convenient to identify Ω^i and Ω^e with graphs Γ^i and Γ^e and consider the corresponding QWs on Γ^i and Γ^e as follows.
Let Γ^i = (𝒱^i , 𝒜^i ) be a finite graph where 𝒱^i = Ω^i is the set of vertices and 𝒜^i is the set of oriented edges ω = ( o(ω),t(ω)) with o(ω),t(ω) ∈Ω^i and |o(ω)-t(ω)|=1.
Here o(ω) and t(ω) denote the origin and the terminus of the edge ω, respectively.
The infinite graph Γ^e = (𝒱^e , 𝒜^e ) corresponds to the exterior domain Ω^e.
Here 𝒱^e = Ω^e and 𝒜^e is the set of oriented edges ω = (o(ω),t(ω)) such that both of neighboring vertices o(ω) and t(ω) belong to Ω^e ∖ K or one of neighboring vertices o(ω) and t(ω) lies in K and the other lies in Ω^e ∖ K.
Let ℓ^2 (𝒜^i ) be the Hilbert space equipped with the inner product
(u,v )_ℓ^2 (𝒜^i ) = ∑ _ω∈𝒜^iu (ω) v (ω) , u,v∈ℓ^2 (𝒜^i ) .
Now we consider the identification by the unitary transform ℐ_i :ℋ_i →ℓ^2 (𝒜^i ) as
(ℐ_i u)(x+e_1,x)=u_← (x), (ℐ_i u)(x-e_1,x)=u_→ (x),
(ℐ_i u)(x+e_2,x)=u_↓ (x), (ℐ_i u)(x-e_2,x)=u_↑ (x),
if (x± e_1,x), (x± e_2,x)∈𝒜^i.
Then U_i := ℐ_i U_i ℐ_i^-1 is a QW on the finite graph Γ^i without any boundary condition.
We also see that U_i is unitary on ℓ^2 (𝒜^i ).
For U_e and Γ^e, the argument is similar.
In view of the above remark, we see the following result on the spectrum of U_i.
We have σ (U_i) = σ_p (U_i) ⊂ S^1.
Each eigenvalue has a finite multiplicity.
The sum of geometric multiplicities of eigenvalues coincides with N= #𝒜^i.
Namely, there exist orthonormal eigenfunctions u^(1) , … , u^(N) in ℋ_i.
Proof.
Noting that the graph Γ^i is finite and U_i is unitary on ℓ^2 (𝒜^i ), this lemma is a direct consequence of the above remark.
Let us turn to the representation of the Green operator
R_i (κ )= (U_i -e^-iκ )^-1 ,
by the eigenvalues and the associated eigenfunctions.
Suppose that e^-iκ_1 , …, e^-iκ_N∈σ_p (U_i) and take orthonormal eigenfunctions u^(1) , … ,u^(N)∈ℋ_i.
Letting u=R_i (κ )f for f∈ℋ_i and e^-iκ∉σ_p (U_i), we obtain
u= ∑ _j=1^N (f,u^(j) ) _ℋ_i/e^-iκ_j - e^-iκ u^(j) ,
by a direct calculation.
For an eigenvalue e^ -iμ∈σ_p (U_i), let us show an estimate of R_i (κ ) near μ∈ T.
We take a counterclockwise loop ℒ_ϵ,s (μ )=∑ _j=1^4 ℒ _ϵ ,s,j (μ ) where
ℒ_ϵ,s,1 (μ )= { (1-τ )(μ +aϵ^s -ib ϵ^s ) + τ (μ +aϵ^s +ib ϵ^s ) ; τ∈ [0,1] } ,
ℒ_ϵ,s,2 (μ )= { (1-τ )(μ +aϵ^s +ib ϵ^s ) + τ (μ -aϵ^s +ib ϵ^s ) ; τ∈ [0,1] } ,
ℒ_ϵ,s,3 (μ )= { (1-τ )(μ -aϵ^s +ib ϵ^s ) + τ (μ -aϵ^s -ib ϵ^s ) ; τ∈ [0,1] } ,
ℒ_ϵ,s,4 (μ )= { (1-τ )(μ -aϵ^s -ib ϵ^s ) + τ (μ +aϵ^s -ib ϵ^s ) ; τ∈ [0,1] } ,
for some constants a,b,s,ϵ >0.
Taking sufficiently small ϵ >0, we can assume that there is no other κ inside ℒ_ϵ,s (μ ) such that e^-iκ∈σ_p (U_i).
Let e^-iμ∈σ_p (U_i).
If κ∈ T _ C varies on the loop ℒ _ϵ,s (μ ), we have ‖R_i (κ)‖ _ B (ℋ_i ) = O(ϵ^-s ) as ϵ↓ 0.
Proof.
We have
| e^-iμ - e^-iκ |^2 = 4e^Im κ |sin ((κ- μ)/2)|^2 .
Thus, for sufficiently small |κ-μ|, there exists a constant δ >0 such that
| e^-iμ - e^-iκ | ≥δ |κ -μ | .
Suppose that κ∈ℒ _ϵ,s (μ ).
For the case κ∈ℒ _ϵ,s,1 (μ ) ∪ℒ _ϵ,s,3 (μ ), we have | Re κ - μ | =aϵ^s and |Im κ | ≤ b ϵ^s.
If κ∈ℒ _ϵ,s,2 (μ ) ∪ℒ _ϵ,s,4 (μ ), we have | Im κ | = bϵ^s and |Re κ - μ |≤ a ϵ^s.
Then, by the formulas (<ref>) and (<ref>), we can take a constant δ >0 such that ‖R_i (κ ) f‖ _ℋ_i≤δϵ^-s ‖f‖ _ℋ_i for any f∈ℋ_i.
For f∈ℋ_i, the complex translation T(θ)f ∈ℋ_i can be defined naturally in view of (<ref>).
Then we consider the operator U_i (θ)=T(θ)U_i T(θ)^-1 and R_i (κ ,θ)= (U_i (θ) -e^-iκ )^-1 for some κ∈ T _ C.
Fix θ∈𝒪^-_0.
* For each κ∈ T_ C with Im κ≥0, e^-iκ∈σ_p (U_i) if and only if e^-iκ∈σ_p (U_i (θ)). In particular, U_i has no resonances other than eigenvalues.
* For f∈ℋ_i and κ∈𝒪^+_Im θ with e^-iκ∉σ_p ( U_i (θ )), we have
R_i (κ , θ)f = ∑ _j=1^N (T(θ)^-1 f,u^(j) ) _ℋ_i/e^-iκ_j - e^-iκ T(θ)u^(j).
Suppose that ϵ >0 is sufficiently small.
If κ varies on ℒ _ϵ,s (μ ) for an eigenvalue e^-iμ∈σ_p (U_i(θ)), we have ‖R_i (κ , θ)‖ _ B (ℋ_i) =O(ϵ^-s).
Proof.
The assertion (1) follows from
U_i u=e^-iκ u ⟺ U_i (θ)v=e^-iκ v,
for v= T(θ)u. According to Lemma <ref>, the eigenfunctions of U_i span ℋ_i. Thus, there is no resonance of U_i other than eigenvalues.
Suppose that u,f∈ℋ_i satisfy (U_i (θ) -e^-iκ )u=f.
This equation is equivalent to (U_i -e^-iκ ) T(θ)^-1 u=T(θ )^-1 f.
We apply the formula (<ref>) to this equation.
The representation in the assertion (2) follows.
The remaining part is parallel to Lemma <ref>.
Let us turn to the exterior QW U_e and its resolvent operator
R_e (κ) = (U_e -e^-iκ )^-1 .
We define U_e (θ)= T(θ)U_e T(θ)^-1 and R_e (κ , θ)= (U_e ( θ)-e^-iκ )^-1 for some κ∈ T _ C.
The following estimate is used in Section 5.
Fix θ∈𝒪^-_0 and a compact subset Z ⊂𝒪^+ _Im θ.
Then there exists a constant γ >0 which depends only on Z such that
‖R_e (κ , θ)‖ _ B (ℋ_e)≤γ ,
for κ∈ Z.
As a consequence, there is no resonance of U_e.
Proof.
Let us introduce the operator of restriction J_e : ℋ→ℋ_e.
Thus J_e is the orthogonal projection onto ℋ_e.
Its adjoint J_e^* satisfies J^*_e : ℋ_e ∋ g ↦ g∈ℋ.
Letting u= R_e ( κ , θ )f for f∈ℋ_e, we have
(U_0 (θ) -e^-iκ ) J_e^* u =f+Q(θ)u,
where Q(θ)= U_0 (θ) J_e^* - J_e^* U_e (θ ).
It follows from this equality that
J_e^* R_e (κ, θ )= R_0 (κ , θ ) + R_0 (κ , θ) Q(θ) R_e (κ , θ) .
If the estimate of the lemma fails, there exist sequences { f^(j)} _j≥ 1⊂ℋ_e, {κ_j } _j≥ 1⊂ Z, and κ∈ Z such that ‖f^(j)‖ _ℋ_e→ 0, ‖R_e (κ _j , θ )f^(j)‖ _ℋ_e =1, and κ_j →κ as j→∞.
We put u^(j) = R_e ( κ_j , θ )f^(j).
Since the operator Q(θ) is of finite rank, there exists a subsequence { u^(j_k)} _k≥ 1 such that Q(θ) u^(j_k)→ g as k→∞ for a function g∈ℋ.
Obviously, U_0 is non-trapping.
Then Lemma <ref> implies R_0 (κ _j_k ,θ ) → R_0 (κ , θ ) in B (ℋ) as k→∞.
Then it follows from (<ref>) that
v:= lim _k→∞ J_e^* u^(j_k) = R_0 (κ , θ) g in ℋ .
We put u_e =J_e v = lim_k→∞ u^(j_k)∈ℋ_e.
By the definition of u^(j_k), we have (U_e (θ) -e^-iκ ) u_e = 0.
However, U_e is also non-trapping since U_e does not have bounded trajectories in Ω^e.
Then we see u_e =0 in the same way as in the proof of Lemma <ref>.
This contradicts ‖u^(j_k)‖ _ℋ_e =1 for every k.
§ RESONANCE ASSOCIATED WITH PERTURBED CLOSED TRAJECTORIES
§.§ Perturbation of closed trajectories
In this section, an example of resonances is given by a constructive approach for the case where the QW U is a small perturbation of a U_el with only a few closed trajectories.
For the sake of simplicity, we consider a simple model as follows.
Fix four points (0,0), (m_0, 0), (m_0 ,n_0 ), (0,n_0 ) for two positive integers m_0 and n_0.
We introduce C _el (x) for x∈ Z^2 by
C_el (x)= {
I_4 , x∈ Z^2 ∖{ (0,0), (m_0, 0), (m_0 ,n_0 ), (0,n_0 ) } ,
[ e _↑ , e _← , e _→ , e _↓ ] , x= (0,0),
[ e _→ , e _↑ , e _← , e _↓ ] , x= (m_0 ,0),
[ e _→ , e _↓ , e _↑ , e _← ] , x= (m_0 ,n_ 0),
[ e _↓ , e _← , e _↑ , e _→ ] , x= (0,n_0 ).
.
See Figure <ref>.
Then U_el satisfies (A-1) and has only two closed trajectories Φ_+ (·)=(q_+(·),p_+(·)):=Φ(· ,0,← ) and Φ_-(·)=(q_- (· ),p_- (·)):=Φ (·,0,↓) where
q_+ ( t)= { (0,t), 0≤ t≤ n_0 ,
(t-n_0,n_0 ), n_0 <t≤ n_0 +m_0 ,
(m_0 , 2n_0+m_0 -t), n_0 +m_0 < t≤ 2n_0 + m_0 ,
(2(n_0+m_0) -t,0), 2n_0 +m_0<t≤ 2(n_0 +m_0 ) ,
.
p_+ ( t)= {↑ , 0<t≤ n_0 ,
→ , n_0 <t≤ n_0 +m_0 ,
↓ , n_0 +m_0 < t≤ 2n_0 + m_0 ,
← , 2n_0 +m_0<t≤ 2(n_0 +m_0 ) ,
.
and Φ_- (·)=Φ(· ,0,↓ ) is the inverse path of Φ _+ (·)=Φ(· ,0,← ).
As we have seen in Lemma <ref>, the set of eigenvalues is given by
σ_p(U_el)= { e^-iπ k/ (m_0 +n_0 ) ; k= 0,1,…, 2(m_0 +n_0 )-1 } .
Each eigenvalue has geometric multiplicity 2, corresponding to the two periodic paths which have the same period as each other.
Now we take a family of QWs {U_ϵ;ϵ∈(0,1]} with U_ϵ=SC_ϵ such that each matrix C_ϵ (x) ∈U (4) satisfies
C_ϵ (x)=C_el(x)=I_4 , x∈ Z^2 ∖{ (0,0), (m_0, 0), (m_0 ,n_0 ), (0,n_0 ) } ,
|C_ϵ (x)- C_el (x)| _∞ < ϵ , x∈{ (0,0), (m_0, 0), (m_0 ,n_0 ), (0,n_0 ) },
c_ϵ,→,←(0,0)=c_ϵ,↑,↓(0,0)
=c_ϵ,↑,↓(m_0,0)=c_ϵ,←,→(m_0,0)
=c_ϵ,↓,↑(m_0,n_0)=c_ϵ,←,→(m_0,n_0)
=c_ϵ,→,←(0,n_0)=c_ϵ,↓,↑(0,n_0)=0,
for each ϵ∈(0,1], where c_ϵ,j,k(x) (j,k ∈{← , → , ↓ , ↑}) denotes the (j,k)-entry of C_ϵ (x).
Here we used the norm |A|_∞ = max _j,k |a_j,k | for a matrix A=[a_j,k ]_j,k ∈{← , → , ↓ , ↑}.
§.§ Construction of outgoing solutions
We here discuss the eigenvalues and the resonances of the QW U_ϵ for each fixed ϵ. The following proposition gives the quantization conditions along Φ_+ of eigenvalues and resonances, which is an analogue of (<ref>).
For each ϵ, there exist eigenvalues of U_ϵ whose associated eigenfunctions are supported only on the image of Φ_+ if and only if
|c_ϵ^+|=1,
c_ϵ^+:=
c_ϵ,↑,← (0,0) c_ϵ,←,↓ (m_0 ,0) c_ϵ,↓,→ (m_0 ,n_0 )c_ϵ,→,↑ (0,n_0 ).
Under this condition, such eigenvalues are 2(m_0+n_0)-roots belonging to T of
e^-2i(m_0+n_0)κ=c_ϵ^+.
Otherwise, the 2(m_0+n_0) roots of (<ref>), which belong to 𝒪_0^-,
are resonances. The associated resonant states are supported on the union of the image of Φ_+ and the eight outgoing tails starting from the four corners (0,0), (m_0,0), (m_0,n_0), (0,n_0), i.e.,
{((-N,y_2),←)}_N≥1, {((y_1,n_0+N),↑)}_N≥1,
{((y_1,-N),↓)}_N≥1, {((m_0+N,y_2),→)}_N≥1,
for y_1∈{0,m_0}, y_2∈{0,n_0}.
Proof.
The eigenfunctions are constructed in the same way as in Lemma <ref>. Let us suppose U_ϵ u = e^-iκ u for a κ∈ T_ C. We put f_t=u_p_+(t)(q_+(t)) for (q_+(t),p_+(t))=Φ_+(t) defined by (<ref>). Then we have f_t=e^-iκf_t+1 for t∉{0,n_0,m_0+n_0,m_0+2n_0} and
c_ϵ,↑,← (0,0) f_0 = e^-iκ f_1 ,
c_ϵ,→,↑ (0,n_0 )f_n_0= e^-iκ f_n_0+1,
c_ϵ,↓,→ (m_0 ,n_0 ) f_m_0 +n_0 = e^-iκ f_m_0 +n_0 +1 , c_ϵ,←,↓ (m_0 ,0 ) f_m_0 +2n_0 = e^-iκ f_m_0 +2n_0 +1 .
Plugging these equalities into f_2(m_0+n_0)=f_0, we obtain
f_0 = e^2iκ (m_0 +n_0 ) c_ϵ^+ f_0.
This shows that (<ref>) is a necessary condition to have a solution u with u_←(0,0)=f_0≠0.
Moreover, under the condition (<ref>), the 2(m_0+n_0) roots of (<ref>) are real, and u defined by (<ref>) is an associated eigenfunction.
When the condition (<ref>) is false, at least one of the four factors of c_ϵ^+ has its modulus less than 1. For example, let us suppose that |c_ϵ,↑,←(0,0) |<1. Then u also satisfies
e^-iκu_←(-1,0)=c_ϵ,←,←(0,0) u_←(0,0),
e^-iκu_↓(0,-1)=c_ϵ,↓,←(0,0)u_←(0,0).
Note that under the condition (<ref>), u_→(1,0) is independent of u_←(0,0). It follows inductively that
u_←(-N,0)=e^iNκc_ϵ,←,←(0,0)f_0,
u_↓(0,-N)=e^iNκc_ϵ,↓,←(0,0)f_0,
for N=1,2,…. By a symmetric argument for other corners, we obtain a resonant state.
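The quantization condition of the proposition can also be checked numerically; the following sketch (an added illustration, with a hypothetical value of c_ϵ^+) computes the 2(m_0+n_0) roots κ of e^-2i(m_0+n_0)κ = c_ϵ^+ and verifies that they lie in the lower half plane when |c_ϵ^+| < 1 and stay close to the unperturbed values π k/(m_0+n_0), cf. the asymptotic distribution discussed in the next subsection.

```python
import numpy as np

def quantization_roots(c, m0, n0):
    """All kappa with exp(-2i*(m0+n0)*kappa) = c, with Re(kappa) taken mod 2*pi."""
    N = 2 * (m0 + n0)
    k = np.arange(N)
    return ((-(np.angle(c) + 2 * np.pi * k) / N) % (2 * np.pi)
            + 1j * np.log(abs(c)) / N)

m0, n0 = 2, 3
eps = 0.1
c_plus = (1 - eps) * np.exp(0.05j)      # hypothetical corner product with |c_plus| < 1
kappa = quantization_roots(c_plus, m0, n0)

print(np.max(kappa.imag) < 0)           # True: all roots are resonances (Im kappa < 0)

# each root sits within O(eps) of an unperturbed eigenvalue phase pi*k/(m0+n0)
grid = np.pi * np.arange(2 * (m0 + n0)) / (m0 + n0)
dist = np.abs(kappa.real[:, None] - grid[None, :])
dist = np.minimum(dist, 2 * np.pi - dist).min(axis=1)   # distance on the torus
print(np.max(dist) < eps)               # True
```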
The above proposition with a symmetric argument along Φ_- shows the following instability of eigenvalues under small perturbations.
For any ϵ∈(0,1], and any N∈{0,2(n_0+m_0),4(n_0+m_0)}, one can construct a QW U_ϵ having N eigenvalues. However, the sum of the numbers of eigenvalues and resonances is always 4(n_0+m_0).
Proof.
It suffices to show that there are no eigenfunctions or resonant states other than those we have constructed above. If an outgoing solution u to U_ϵ u=e^-iκu for a κ∈ T_ C does not vanish at some point of the image of Φ_±, the other entries of u are automatically determined by the argument used in the proof of Proposition <ref>. Thus, u coincides with one of the eigenfunctions or resonant states constructed above.
Otherwise, that is, if u vanishes at every point of the image of Φ_±, then the same argument as in the proof of Lemma <ref> shows that u vanishes identically.
§.§ Asymptotic distribution of eigenvalues and resonances
In the previous subsection, we saw that U_ϵ can be eigenvalue-free however small the perturbation is (Corollary <ref>). In contrast, we show in the present subsection that the eigenvalues of the conjugated operator U_ϵ(θ) are stable under small perturbations. The following proposition is the main result of this section. Recall that the eigenvalues of U_el are given by (<ref>).
Suppose that U_ϵ satisfies (<ref>), (<ref>), (<ref>).
Let θ∈𝒪_0^-.
Then there exists ϵ_0>0 such that for any ϵ∈(0,ϵ_0], U_ϵ(θ)=T(θ)U_ϵ T(θ)^-1 has 4(m_0+n_0) eigenvalues, as does U_el.
Moreover, there exists r>0 such that for each k=0,1,…, 2(m_0 +n_0 )-1, there exist κ_k,ϵ^+, κ_k,ϵ^-∈ T_ C such that e^-iκ_k,ϵ^±∈σ_p(U_ϵ(θ)) and
| κ_k,ϵ^± - π k/(m_0 +n_0 ) | <rϵ in T_ C
for any ϵ∈(0,ϵ_0].
Proof. Under the assumption (<ref>), we have
c_ϵ^±=(1+O(ϵ))^4=1+O(ϵ).
Note that c_ϵ^+ is defined by (<ref>), and c_ϵ^- is by
c_ϵ^-=c_ϵ,→,↓(0,0) c_ϵ, ↑,→(m_0,0) c_ϵ,←,↑(m_0,n_0) c_ϵ,↓,←(0,n_0).
Recalling the quantization condition (<ref>), each κ∈ T_ C satisfying one of
e^2i(m_0+n_0)κ=c_ϵ^+=1+O(ϵ) or
e^2i(m_0+n_0)κ=c_ϵ^-=1+O(ϵ)
is a resonance or an eigenvalue of U_ϵ. Consequently, for each κ, there exists k∈{0,1,…,2(m_0+n_0)-1} such that
κ=π k/m_0+n_0+O(ϵ).
This shows in particular that κ∈𝒪^+_Im θ (thus e^-iκ∈σ_p(U_ϵ(θ))) for sufficiently small ϵ>0.
§ SHAPE RESONANCE MODEL FOR QW
§.§ Shape resonance model
In this section, we redefine the QW U_ϵ = SC_ϵ as a perturbation of U_np=SC_np = U_i ⊕ U_e which has been introduced in Subsection 3.2.
Suppose that C_ϵ (x)∈U (4) satisfies the assumption
C_ϵ (x)= C_np (x) for x∈ Z^2 ∖ K ,
|C_ϵ (x) -C_np (x)| _∞ < ϵ for x∈ K,
for ϵ∈(0,1].
As a consequence, the estimate
‖U_ϵ - U_np‖ _ B (ℋ) =O(ϵ ),
follows.
§.§ Resolvent estimate
We fix θ∈𝒪^-_0. In order to study the resonances, we consider the difference
T_ϵ (κ , θ )= R_ϵ (κ , θ)-R_np (κ , θ ),
of the resolvent operators
R_ϵ (κ , θ ) = (U_ϵ (θ)-e^-iκ )^-1 , R_np ( κ , θ )=R_i (κ , θ ) ⊕ R_e (κ , θ ),
where we put U_ϵ (θ )= T(θ ) U_ϵ T(θ )^-1 and U_np (θ )= T(θ ) U_np T(θ )^-1.
For κ∈𝒪^+ _Im θ such that R_ϵ ( κ , θ ) and R_np (κ , θ ) are well-defined, we have
T_ϵ (κ,θ )= R_ϵ (κ , θ )Q_ϵ (θ ) R_np (κ , θ )= R_np (κ,θ )Q_ϵ (θ ) R_ϵ (κ , θ ),
where Q_ϵ ( θ )= U_np ( θ )-U_ϵ (θ ).
Proof.
We put
v=R_np (κ,θ)f.
The first equality of the lemma follows from the computation
(T_ϵ (κ , θ)f,f)_ℋ = (f,R_ϵ (κ,θ)^* f )_ℋ -(v,f)_ℋ
= ((U _np (θ)-e^-iκ)R_np (κ,θ)f,R_ϵ (κ,θ)^* f)_ℋ
- (v,(U_ϵ (θ) ^* -e^-iκ ) R_ϵ (κ , θ )^* f )_ℋ ,
for any f∈ℋ.
The second equality can be proven in a symmetric argument.
Now we take an eigenvalue e^-iμ_0∈σ_p (U_np ).
Suppose that κ∈ T _ C varies on the counterclockwise loop ℒ _ϵ,s ( μ_0 ) which has been introduced in Subsection 3.2.
Taking sufficiently small ϵ >0, we assume that there is no other μ∈ T such that e^-iμ∈σ_p (U_np ) inside the loop ℒ _ϵ,s ( μ_0 ).
Fix s∈(0,1/2].
There exists ϵ_0>0 such that
‖T_ϵ (κ,θ)‖ _ B (ℋ)
is uniformly bounded for κ∈ℒ_ϵ,s (μ_0 ) and for ϵ∈(0,ϵ_0].
Proof.
Lemma <ref> implies
‖R_i (κ,θ)‖ _ B (ℋ_i) =O(ϵ^-s),
and Lemma <ref> shows the existence of a constant γ >0 such that
‖R_e (κ,θ)‖ _ B (ℋ_e)≤γ .
Recall the operator of restriction J_e : ℋ→ℋ_e which has been used in the proof of Lemma <ref>.
We define J_i : ℋ→ℋ_i in the similar way.
We give the following estimates:
‖J_i R_ϵ (κ,θ)‖ _ B (ℋ;ℋ_i )
= ‖R_i (κ , θ )(U_i (θ)-e^-iκ ) J_i R_ϵ (κ , θ)‖ _ B (ℋ;ℋ_i)
≤ ‖R_i (κ , θ )‖ _ B (ℋ_i ) ( 1+ ‖(U_i (θ)J_i - J_i U_ϵ (θ) )R_ϵ (κ , θ )‖ _ B (ℋ;ℋ_i ) )
≤ ‖R_i (κ , θ )‖ _ B (ℋ_i )( 1+ ‖J_i (U_i (θ) - U_ϵ (θ) )R_ϵ (κ , θ )‖ _ B (ℋ;ℋ_i )),
and similarly
‖J_e R_ϵ (κ,θ)‖ _ B (ℋ;ℋ_e )
≤ ‖R_e (κ , θ )‖ _ B (ℋ_e )( 1+ ‖J_e (U_e (θ) - U_ϵ (θ) )R_ϵ (κ , θ )‖ _ B (ℋ;ℋ_e )) .
By the triangle inequality with (<ref>)-(<ref>), we obtain
‖R_ϵ (κ,θ)‖ _ B (ℋ)≤‖J_i R_ϵ (κ,θ)‖ _ B (ℋ;ℋ_i) + ‖J_e R_ϵ (κ,θ)‖ _ B (ℋ;ℋ_e)
≤ O(ϵ^-s ) ( 1+O(ϵ) ‖R_ϵ (κ,θ)‖ _ B (ℋ )) + γ( 1+O(ϵ) ‖R_ϵ (κ,θ)‖ _ B (ℋ))
= O(ϵ^-s) +O(ϵ^1-s) ‖R_ϵ (κ,θ)‖ _ B (ℋ ) .
Since ϵ>0 is sufficiently small, we can show
‖R_ϵ (κ , θ )‖ _ B (ℋ )≤ (1+O(ϵ^1-s))^-1 O(ϵ^-s ) ≤ O(ϵ^-s) ,
for s∈ (0,1).
Now we apply Lemma <ref> in order to show
‖T_ϵ (κ,θ)‖ _ B (ℋ ) ≤‖R_ϵ (κ,θ)‖ _ B (ℋ) ‖Q_ϵ (θ)‖ _ B (ℋ) ‖R_np (κ,θ)‖ _ B (ℋ)
= O(ϵ^-s)O(ϵ)O(ϵ^-s)= O(ϵ^1-2s)=O(1),
for s∈ (0,1/2].
§.§ Existence of resonance
The eigenvalues of U_np are unstable under perturbation, like those of U_el discussed in the previous section.
For example, in view of the sufficient condition (C), if neither [c_np,j,k (x)] _j,k∈{← , ↓} nor [c_np,j,k (x)] _j,k∈{→ , ↑} vanishes for any x∈Ω^i∖ K, we can take U_ϵ such that it is eigenvalue-free for any small ϵ∈(0,1).
Here, c_np,j,k(x) is the (j,k)-entry of the coin matrix C_np(x).
As a concrete example, we derive the following case.
In view of the definition of C_np (x), it is a unitary matrix of the form
C_np (x)= [ 0 1 0 0; c_→,← (x) 0 c_→,↓ (x) c_→,↑ (x); c_↓,← (x) 0 c_↓,↓ (x) c_↓,↑ (x); c_↑,← (x) 0 c_↑,↓ (x) c_↑,↑ (x) ] , x∈ K_1^- ,
C_np (x)= [ 0 c_←,→ (x) c_←,↓ (x) c_←,↑ (x); 1 0 0 0; 0 c_↓,→ (x) c_↓,↓ (x) c_↓,↑ (x); 0 c_↑,→ (x) c_↑,↓ (x) c_↑,↑ (x) ] , x∈ K_1^+ ,
C_np (x)= [ c_←,← (x) c_←,→ (x) c_←,↓ (x) 0; c_→,← (x) c_→,→ (x) c_→,↓ (x) 0; 0 0 0 1; c_↑,← (x) c_↑,→ (x) c_↑,↓ (x) 0 ] , x∈ K_2^- ,
C_np (x)= [ c_←,← (x) c_←,→ (x) 0 c_←,↑ (x); c_→,← (x) c_→,→ (x) 0 c_→,↑ (x); c_↓,← (x) c_↓,→ (x) 0 c_↓,↑ (x); 0 0 1 0 ] , x∈ K_2^+ ,
where c_j,k (x)= c_np,j,k (x) for j,k∈{←, →, ↓, ↑}.
As the matrix C_ϵ (x) for x∈ K, we can take
C_ϵ (x)= [ ϵ √(1-ϵ^2) 0 0; √(1-ϵ^2) c_→,← (x) -ϵ c_→,← (x) c_→,↓ (x) c_→,↑ (x); √(1-ϵ^2) c_↓,← (x) -ϵ c_↓,← (x) c_↓,↓ (x) c_↓,↑ (x); √(1-ϵ^2) c_↑,← (x) -ϵ c_↑,← (x) c_↑,↓ (x) c_↑,↑ (x) ] , x∈ K_1^- ,
C_ϵ (x)= [ ϵ c_←,→ (x) √(1-ϵ^2) c_←,→ (x) c_←,↓ (x) c_←,↑ (x); √(1-ϵ^2) -ϵ 0 0; ϵ c_↓,→ (x) √(1-ϵ^2) c_↓,→ (x) c_↓,↓ (x) c_↓,↑ (x); ϵ c_↑,→ (x) √(1-ϵ^2) c_↑,→ (x) c_↑,↓ (x) c_↑,↑ (x) ] , x∈ K_1^+ ,
C_ϵ (x)= [ c_←,← (x) c_←,→ (x) √(1-ϵ^2) c_←,↓ (x) -ϵ c_←,↓ (x); c_→,← (x) c_→,→ (x) √(1-ϵ^2) c_→,↓ (x) -ϵ c_→,↓ (x); 0 0 ϵ √(1-ϵ^2); c_↑,← (x) c_↑,→ (x) √(1-ϵ^2) c_↑,↓ (x) -ϵ c_↑,↓ (x) ] , x∈ K_2^- ,
C_ϵ (x)= [ c_←,← (x) c_←,→ (x) ϵ c_←,↑ (x) √( 1-ϵ^2 ) c_←,↑ (x); c_→,← (x) c_→,→ (x) ϵ c_→,↑ (x) √( 1-ϵ^2 ) c_→,↑ (x); c_↓,← (x) c_↓,→ (x) ϵ c_↓,↑ (x) √( 1-ϵ^2 ) c_↓,↑ (x); 0 0 √( 1-ϵ^2 ) -ϵ ] , x∈ K_2^+ .
It is easily checked that the matrix C_ϵ (x) for every x∈ K is unitary.
For the matrix C_ϵ (x)=[ c_ϵ,j,k (x)]_j,k∈{←, →, ↓, ↑}, we have
det[ c_ϵ,j,k (x)] _j,k∈{←,↓} = {ϵ c_↓,↓ (x) , x∈ K_1^- ,
ϵ ( c_←,→ (x) c_↓,↓ (x)-c_←,↓ (x) c_↓,→ (x)) , x∈ K_1^+ ,
ϵ c_←,← (x) , x∈ K_2^- ,
ϵ ( c_←,← (x)c_↓,↑ (x)-c_←,↑ (x) c_↓,← (x)) , x∈ K_2^+ ,
.
and
det[ c_ϵ,j,k (x)] _j,k∈{→,↑} = {ϵ (c_→,↑ (x) c_↑,← (x)- c_←,→ (x) c_↑,↑ (x)) , x∈ K_1^- ,
-ϵ c_↑,↑ (x) , x∈ K_1^+ ,
ϵ ( c_→,↓ (x) c_↑,→ (x)-c_→,→ (x)c_↑,↓ (x)) , x∈ K_2^- ,
-ϵ c_→,→ (x) , x∈ K_2^+ .
.
Now suppose that neither [ c_np,j,k (x)] _j,k∈{←,↓} nor [ c_np,j,k (x)] _j,k∈{→,↑} vanishes for all x∈Ω^i ∖ K.
If C_np (x) for every x∈ K satisfies
c_↓,↓ (x)≠0 , c_→,↑ (x) c_↑,← (x)- c_←,→ (x) c_↑,↑ (x) ≠ 0 , x∈ K_1^- ,
c_↑,↑ (x)≠ 0 , c_←,↓ (x) c_↓,→ (x)- c_←,→ (x) c_↓,↓ (x) ≠ 0 , x∈ K_1^+ ,
c_←,← (x)≠ 0, c_→,↓ (x) c_↑,→ (x)- c_→,→ (x)c_↑,↓ (x) ≠ 0 , x∈ K_2^- ,
c_→,→ (x)≠ 0 , c_←,↑ (x) c_↓,← (x)- c_←,← (x)c_↓,↑ (x) ≠ 0 , x∈ K_2^+ ,
it follows that C_ϵ (x) satisfies the condition (C).
One of the simplest cases is given by
C_ϵ (x)= [ ϵ √(1-ϵ^2) 0 0; √(1-ϵ^2) -ϵ 0 0; 0 0 ϵ √(1-ϵ^2); 0 0 √(1-ϵ^2) -ϵ ] , x∈ K,
and C_np (x)= C_ϵ (x)| _ϵ =0 for x∈ K.
Now we shall prove the existence result of resonances for U_ϵ near each eigenvalue of U_np.
We fix e^-iμ_0∈σ_p (U_np ).
In view of (<ref>) and (<ref>), we define the projection operators
P_U_ϵ (μ_0 )= 1/2π∮ _ℒ _ϵ,s (μ_0) e^-iκ R_ϵ (κ , θ)dκ ,
P_U_np (μ_0 )= 1/2π∮ _ℒ _ϵ,s (μ_0) e^-iκ R_np (κ , θ)dκ ,
for sufficiently small ϵ >0 and s∈ (0,1/2].
Let
P _ϵ (μ_0)= P_U_ϵ (μ_0 ) - P_U_np (μ_0 )= 1/2π∮ _ℒ _ϵ,s (μ_0) e^-iκ T_ϵ (κ , θ )dκ .
Fix θ∈𝒪^-_0.
For sufficiently small ϵ >0 and s∈ (0,1/2], we have
‖P _ϵ (μ_0)‖ _ B(ℋ)=O(ϵ^s).
Proof.
For any θ∈𝒪^-_0, we apply Lemma <ref>, which states the uniform boundedness of ‖T_ϵ (κ , θ)‖ _ B(ℋ) on the loop ℒ_ϵ,s(μ_0). Since the length of the integration path, the loop ℒ_ϵ,s(μ_0), is O(ϵ^s), we obtain the required estimate.
As a consequence, we obtain the following result immediately.
Let μ_0∈ T be such that e^-iμ_0∈σ_p ( U_np ), and put
exp (-iℒ _ϵ,s (μ_0 )) = { e^-iκ ; κ∈ℒ _ϵ,s (μ_0 ) } .
Fix s∈(0,1/2] and θ∈𝒪^-_0.
There exists ϵ_0>0 such that, for any ϵ∈(0,ϵ_0], the number of eigenvalues of U_ϵ(θ) inside the loop exp (-iℒ _ϵ,s (μ_0) ) coincides with the multiplicity of the eigenvalue e^-iμ_0, where the eigenvalues are counted according to their algebraic multiplicity.
Remark.
Note that each eigenvalue of U_ϵ(θ) is either a resonance or an eigenvalue of U_ϵ. Corollary <ref> is an analogue of Proposition <ref>.
In view of Proposition <ref>, the estimate in Corollary <ref> does not give the optimal result for the existence of resonances near an eigenvalue of U_np.
For some models, we expect that Corollary <ref> can be improved to s=1.
arXiv entry: http://arxiv.org/abs/2307.02811v2 (published 20230706070232)
Title: Machine Learning Classification of Repeating FRBs from FRB121102
Authors: Bjorn Jasper R. Raquel, Tetsuya Hashimoto, Tomotsugu Goto, Bo Han Chen, Yuri Uno, Tiger Yu-Yang Hsiao, Seong Jin Kim, Simon C. -C. Ho
Categories: astro-ph.HE (primary), astro-ph.GA
Fast Radio Bursts (FRBs) are mysterious bursts on millisecond timescales at radio wavelengths. Currently, there is little understanding of how repeating FRBs should be classified based on differences in physics, which is of great importance in understanding their origin. Recent works in the literature focus on using specific parameters to classify FRBs in order to draw inferences on the possible physical mechanisms or properties of these FRB subtypes. In this study, we use the 1652 publicly available bursts of the repeating FRB121102 detected with the Five-hundred-meter Aperture Spherical Telescope (FAST) and study them with an unsupervised machine learning model. By fine-tuning the hyperparameters of the model, we find an indication of four clusters in the bursts of FRB121102 instead of the two clusters ("Classical" and "Atypical") suggested in the literature; in particular, the "Atypical" cluster can be further classified into three sub-clusters with distinct characteristics. Our findings show that the clustering result we obtained is more comprehensive, not only because our study produces results consistent with those in the literature but also because our work uses more physical parameters to create these clusters. Overall, our methods and analyses provide a more holistic approach to clustering the repeating bursts of FRB121102.
(transients:) fast radio bursts – stars: magnetars – stars: neutron – methods: data analysis
§ INTRODUCTION
Fast Radio Bursts (FRBs) are bright millisecond-duration radio flashes of extragalactic origin (). They are characterized by their anomalously high dispersion measure (DM) and millisecond duration, indicating high brightness temperature and isotropic energy release (). FRBs are usually classified as either ‘repeating’ or ‘non-repeating.’ Repeating FRBs have multiple bursts, while non-repeating FRBs have one-off bursts (). Currently, there are > 600 FRBs that are reported as of April 2022 ().
FRB121102, first discovered in 2014 () and identified as a repeater in 2016 (), is the most extensively studied FRB across a broad range of radio frequencies from 600 MHz up to 8 GHz ().The repetition allowed for localization with a high precision of 100 mas, leading to the first unambiguous identification of an FRB host galaxy at ∼1 Gpc (z = 0.193) and its association with a persistent radio source <cit.>). Many theoretical models have been developed to explain the physical nature of FRB121102 (see for review). In particular, it has been suggested that FRB121102 might have originated from a young magnetar <cit.>. Performing follow-up observations using the Arecibo Telescope, found ten additional bursts for FRB121102. Shortly after, found six bursts from two different telescopes. Five from the Green Bank Telescope (GBT) at 2 GHz, and one from the Arecibo Telescope at 1.4 GHz.
detected 16 bursts from FRB121102 using the William E. Gordon Telescope at the Arecibo Observatory at 4.1-4.9 GHz. Most recently, found 1652 bursts using the Five-hundred-meter Aperture Spherical radio Telescope (FAST) at 1.05-1.45 GHz. In addition to these, discovered a tentative period of 157 d with a duty cycle of 56 percent, and showed that FRB121102 exhibits a complex time-frequency structure.
Machine Learning (ML) has been proven helpful in astronomy and its related fields. In the field of FRB research, ML has found its applications in the works of , wherein they used a combination of neural network detection with dedispersion verification to work on pulse detection and periodicity of FRB121102; in the development of automated methods in identifying events of interest; in applying deep learning to single-pulse classification and developing a hierarchical framework for ranking events by their probability of being astrophysical transients; and most recently, where an unsupervised machine learning algorithm, namely Uniform Manifold Approximation and Projection or UMAP <cit.>, was used to understand, classify, and identify possible FRB repeaters from a sample of 501 non-repeating and 93 repeating FRBs.
Despite these developments, there is still little understanding about the nature of the repeating FRBs (e.g., <cit.>). Thus, the main purpose of this research is to shed light on the underlying physical mechanisms of repeating FRBs by studying FRB121102. Specifically, this study focuses on determining and characterizing burst subtypes of FRB121102 in order to unveil latent features or properties of repeating FRBs. Also, we would limit the focus of this paper to classifying FRBs leaving the discussion of the possible mechanisms to future theoretical studies.
This paper is structured as follows: Section <ref> (Data Preprocessing) discusses the selection of the samples from the archival data shown in the Supplementary Table 1 of the <cit.> paper. Section <ref> (Unsupervised Machine Learning) is divided into two subsections. Section <ref> (Uniform Manifold Approximation and Projection (UMAP)) focuses on finding the low-dimensional representation of the data using UMAP and Section <ref> (Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN)) discusses how HDBSCAN was used to cluster the data. In Section <ref> (Results) we show the parameter coloring of the UMAP embedding results to see any trends and investigate the properties of each cluster. In Section <ref> (Discussion) we discuss the implication of the results, change in cluster membership (Section <ref>, Cluster membership change), and compare it to the results found in the literature (Section <ref>, Comparison with other results). Lastly, Section <ref> (Conclusions) summarizes the findings and conclusions of this study. An Appendix <ref> has also been included to show other important results that is used in the analysis but not central to the goal of the paper.
§ DATA PREPROCESSING
In this paper, we used the archival data from FAST as presented in the Supplementary Table 1 of <cit.>. Wherein, they reported 1652 independent bursts in a total of 59.5 hours all throughout the continuous monitoring campaign of FRB121102 from August 29, 2019, up until October 29, 2019, using the FAST. The archival data from the Supplementary Table 1 of <cit.> have the following parameters:
* Burst Arrival Time (MJD)
* Dispersion Measure (pc · cm^-3)
* Time Width (ms)
* Bandwidth (GHz)
* Peak Flux (mJy)
* Fluence (Jy · ms)
* Energy (erg)
We want to include as many parameters as possible to ensure the veracity of the results. Thus, we included waiting time, which is defined to be the arrival-time difference between two subsequent bursts.
Among the parameters used for the unsupervised machine learning, we excluded the Burst Arrival Time (MJD) because the observational periods of the monitoring campaign are not uniform, based on Figure 1.a of <cit.>. The Dispersion Measure (pc · cm^-3) is also excluded because it is not intrinsic to the FRB source and is mainly related to the distance of the source. Some of the remaining parameters are known to be correlated with each other, but we still included them in the analysis because their inclusion does not introduce bias, as <cit.> found in their analysis of the treatment of collinearity in quantitative empirical research; additionally, it does not hurt to include as many parameters as possible. Thus, the parameters used for the unsupervised machine learning are Time Width (ms), Bandwidth (GHz), Peak Flux (mJy), Fluence (Jy · ms), Energy (erg), and Waiting Time (s).
It is also important to realize that, since the observation period is not uniform, we need to exclude the data points with waiting times of one day or longer, as these are merely artifacts of the monitoring campaign and are of no use for our analysis in this paper. In Figure <ref>, the red dotted line represents a waiting time of one day and the blue dash-dotted line represents a waiting time of half a day; we exclude the data points beyond the blue dash-dotted line because of the observational cadence of FAST.
From the 1652 independent bursts reported by <cit.>, after following the data selection method explained above, the number of independent burst samples we use for the unsupervised machine learning is 1613. It is known that FRB121102 has a bimodal waiting time distribution <cit.>, and, as Figure <ref> shows, the exclusion of 39 data points did not affect this property.
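For concreteness, the data selection described above can be written as a short preprocessing sketch; the CSV file name and column names used below are hypothetical placeholders for the exported Supplementary Table 1, not the actual headers, and the half-day cut follows the blue dash-dotted line discussed above.

```python
import pandas as pd

# hypothetical file and column names standing in for the exported Supplementary Table 1
bursts = pd.read_csv("frb121102_fast_bursts.csv").sort_values("mjd")

# waiting time = arrival-time difference between two subsequent bursts, in seconds
bursts["waiting_time_s"] = bursts["mjd"].diff() * 86400.0

# drop bursts whose waiting time exceeds the observational cadence of FAST
# (the half-day line); the first burst, whose waiting time is undefined, is dropped too
bursts = bursts[bursts["waiting_time_s"] < 0.5 * 86400.0]

features = bursts[["width_ms", "bandwidth_ghz", "peak_flux_mjy",
                   "fluence_jyms", "energy_erg", "waiting_time_s"]]
print(features.shape)   # the selection in the text retains 1613 of the 1652 bursts
```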
§ UNSUPERVISED MACHINE LEARNING
§.§ Uniform Manifold Approximation and Projection (UMAP)
Our data have 1613 rows and 7 columns after the preprocessing. Now, we employ a dimension-reduction algorithm to visualize our data and conduct our unsupervised learning. This can be done using Uniform Manifold Approximation and Projection (UMAP) (). Based on ideas from topological data analysis and manifold learning techniques, UMAP finds a low-dimensional representation of a given data set by using basic Riemannian geometry to bring the data much closer to the underlying assumptions of the topological data analysis algorithm.
UMAP has four basic hyperparameters which significantly affect the resulting embedding of the data. These hyperparameters are min_dist, metric, n_components, and n_neighbors. It is important to realize that in this work we would like to uncover if there are underlying physical mechanisms or properties that make an FRB a repeater. Thus, we tune these parameters in a way that we will be able to notice a structure in the embedding.
min_dist restricts the clumping of the points in the resulting embedding. Providing the minimum distance apart that the points are allowed to be in the low dimensional representation. Meaning the closer the value of min_dist is to zero the clumpier the embedding of the locally connected points. Since we would like to, as much as possible, see clustering in the embedding, we set min_dist = 0.
metric defines the way the distance between two points is measured. For our purpose of extracting intuitive realizations, we set metric = euclidean.
n_components is just the dimension of the resulting embedding. This hyperparameter helps us to visualize the data in the reduced dimension space of our own choosing and since we want to visualize our result in the two-dimensional (2D) plane, we set n_components = 2.
n_neighbors constrains the size of the local neighborhood UMAP considers when estimating the manifold structure of the data. This hyperparameter focuses much more on the local structure when it has low values and on the global structure when it has a higher value. In our analysis, we considered a range of values for n_neighbors. Namely, n_neighbors = 5,6,7,8, and 9 which is a reasonable range of values as these provide us with distinct clusters. However, for our interests, we will be only focusing on the clustering result of n_neighbors = 9.
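A minimal sketch of this dimension-reduction step with the hyperparameter values quoted above is given below; `features` is the preprocessed parameter table of Section 2, and the feature standardization and the fixed random_state are assumptions added for reproducibility rather than settings stated explicitly in the text.

```python
import umap
from sklearn.preprocessing import StandardScaler

# features: the parameter table prepared in Section 2 (standardized here, an assumption)
X = StandardScaler().fit_transform(features)

embeddings = {}
for nn in (5, 6, 7, 8, 9):
    reducer = umap.UMAP(n_neighbors=nn, min_dist=0.0, metric="euclidean",
                        n_components=2, random_state=42)
    embeddings[nn] = reducer.fit_transform(X)   # one 2D embedding per n_neighbors value
```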
As shown in Figure <ref>, the UMAP embedding has a higher density of data points in the lower left of the plot than in the upper right. It is also evident that as the value of n_neighbors increases, more of the overall structure of the data is highlighted (see <ref>). This is why we only considered these values of n_neighbors: for higher values, the embeddings no longer show a clear division or separation between data points, which is not useful for investigating the underlying mechanisms of the FRB.
§.§ Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN)
In this paper, we use the clustering algorithm developed by Campello, Moulavi, and Sander <cit.>, namely Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN), to cluster the UMAP results obtained in Section <ref>. In HDBSCAN there are only four major parameters that can be tuned: min_cluster_size, min_samples, cluster_selection_epsilon, and alpha. Each of these has a significant effect on the clustering result of the data points.
min_cluster_size affects the size of grouping that can be considered a cluster. The bigger the value of this parameter the lesser the number of the resulting clusters. For our purposes, we set min_cluster_size = 200.
min_samples controls the number of points that will be declared as noise. Considering points that are far from dense areas to be noise. The larger the value of this parameter the larger number of points that will be considered noise. In this study, we have min_samples = 10.
cluster_selection_epsilon, when tuned, allows micro-clusters in high-concentration regions to be merged, preventing clusters from being split up further than the given threshold. Since we obtained our desired clustering, we allowed this parameter to retain its default value, cluster_selection_epsilon = 0.
Lastly, alpha is a parameter that is usually not changed or modified. However, if the clustering result obtained after adjusting min_samples and cluster_selection_epsilon is unwieldy, one can then adjust alpha to make the clustering more conservative, meaning that more points will be considered as noise. As with cluster_selection_epsilon, we allowed this parameter to retain its default value, alpha = 1.
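With these hyperparameter values, the clustering step can be sketched as follows, applied to each UMAP embedding computed in the previous subsection; in the hdbscan library, the label -1 marks points assigned to the Noise cluster.

```python
import hdbscan

labels = {}
for nn, emb in embeddings.items():
    clusterer = hdbscan.HDBSCAN(min_cluster_size=200, min_samples=10,
                                cluster_selection_epsilon=0.0, alpha=1.0)
    labels[nn] = clusterer.fit_predict(emb)     # cluster labels; -1 denotes Noise
```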
After using UMAP to find a low-dimensional representation of our data, we now use HDBSCAN to cluster this embedding. Looking at Fig. <ref> and the figures of <ref>, it is evident that the HDBSCAN clustering results for n_neighbors = 7, n_neighbors = 8, and n_neighbors = 9 have three clusters with noise, while n_neighbors = 5 has three clusters without noise and n_neighbors = 6 has two clusters without noise. Since clusters with the same number but different n_neighbors values are not identical to each other, we introduce the following nomenclature: nn[n_neighbors value].c[cluster number]. As an example, nn5.c1 refers to cluster 1 of n_neighbors = 5. For the Noise clusters we will use "N" in place of the [cluster number], e.g., nn5.cN.
§ RESULTS
Since this paper is focused on determining which physical mechanisms or properties underlie repeating FRBs, it is important to see how the parameters of our data behave or manifest in our clustering. This can be achieved by colouring the clustering result with the parameters. Doing so allows us to recognize trends, patterns, and properties among the clusters that can be helpful in investigating the repeating FRBs. Since we have six parameters, we have six plots, each coloured by a single parameter. In Figure <ref>, we have the Bandwidth colouring (GHz) <ref>, the Energy colouring (erg) <ref>, the Fluence colouring (Jy ms) <ref>, the Peak Flux colouring (mJy) <ref>, the Time Width colouring (ms) <ref>, and the Waiting Time colouring (s) <ref>. The other parameter colouring results that we have considered in our analyses can be found in Figures <ref> – <ref>.
From these, we can see that parameter colouring enables us to easily characterize each cluster. From Figure <ref> (Bandwidth), nn9.c1, nn9.c3, and nn9.cN are narrowband and nn9.c2 is broadband. Figures <ref> – <ref> (Energy, Fluence, and Peak Flux) all exhibit a similar trend for their clusters: nn9.c1 and nn9.cN both exhibit low energy, low fluence, and low peak flux; the majority of nn9.c3 has high energy, high fluence, and high peak flux; and nn9.c2 has diverse energy, fluence, and peak flux. Figure <ref> (Time Width) exhibits an interesting colouring of the clustering results: points with longer duration tend to be located near the center of the clustering, and points with shorter duration tend to be located away from the center of the embedding. Lastly, in Figure <ref> (Waiting Time), points with long waiting times form small clusters scattered among the bigger clusters of short-waiting-time points, showing that, regardless of cluster, FRB121102 can be described as having very short waiting times. In addition, all of these clusters have a bimodal waiting time distribution. The significance of these properties is also supported by the histograms included in Appendix <ref>. Inspecting the histograms, especially the Bandwidth <ref>, Energy <ref>, Fluence <ref>, and Peak Flux <ref> histograms, we see that the clusters are distinct from each other, showing different distributions for each cluster. One of the most notable examples is the set of bandwidth distributions, where all clusters differ. This supports the idea that the resulting clusters are significantly different.
We can then summarize the characterization of each cluster, regardless of n_neighbors value, as shown in Table <ref>. It is important to keep in mind that the qualitative description of the clusters is relative to the range of values of each parameter within a given cluster. As shown in Table <ref>, each cluster possesses a unique set of properties that remains constant as the n_neighbors value changes, which highlights the difference in physics among clusters. We refer to these properties as "invariant" cluster properties. Identifying these invariant cluster properties is essential for describing the underlying physical mechanisms that we might discover based on the number of clusters that we found, and it supports the idea that the resulting clusters are significant.
§ DISCUSSION
The model used in this study is similar to what was employed in the work of . UMAP was used to find the low dimensional representation of the data and then HDBSCAN was used to cluster the data points. Several key differences can be pointed out between our work and . First, used both non-repeater and repeater sources in their study while we used a single repeating FRB which is FRB121102. Second, one of the main goals of their work is to evaluate the assumption that non-repeating FRBs are contaminated by repeating FRBs while this work focuses on characterizing or identifying the underlying properties of FRB121102 to further understand repeating FRBs. Lastly, the work of provided a new way to classify repeating and non-repeating FRBs while this study aims to provide a classification of repeating FRBs from FRB121102. Nevertheless, this study also introduced additional analysis which helped us to qualitatively characterize FRB121102, such as parameter colouring, identification of invariant cluster properties, and the cluster membership change of the data points/FRBs that will be discussed in the succeeding subsection.
§.§ Cluster membership change
In Section <ref>, we found the invariant cluster properties (see Table <ref>) regardless of the n_neighbors values. However, it is easy to see that the number of clusters did not remain the same as the n_neighbors value changed. Four clusters (including the Noise cluster) were found for n_neighbors = 7,8,9, three clusters were found for n_neighbors = 5, and two clusters were found for n_neighbors = 6. This suggests that the FRB clusters of n_neighbors = 5,6 might be shared among more than one cluster of n_neighbors = 7,8,9. Therefore, we investigate the change in the cluster membership of the FRBs as the n_neighbors value varies to see whether this is true. This can be done using an alluvial diagram, as shown below in Figure <ref>.
Looking at Figure <ref>, the axes (oriented vertically) represent the n_neighbors values 5, 6, 7, 8, and 9, respectively. The strata contained in each axis represent the clusters we found using HDBSCAN, namely Cluster 1, Cluster 2, Cluster 3, and Noise. A flow between axes connecting two strata represents the change of cluster membership of those data points across an n_neighbors value change. An alluvium in our diagram consists of four flows connecting different strata or clusters on different axes or clusterings. From the diagram we find the following observations: (i) the majority of the data points tend to retain their cluster membership over n_neighbors value changes, (ii) the number of clusters increases as the n_neighbors value increases, and (iii) if we only consider "thick flows" (i.e., flows that contain the majority of the data points) to be significant, we can divide the alluvial diagram into two alluviums, one of which is the alluvium connecting the clusters nn5.c2, nn6.c1, nn7.c2, nn8.c2, and nn9.c2.
Result (i), based on the diagram, shows that the clustering we found in the data is due to the data itself and not an effect of the clustering algorithm. This lends itself to a more physical interpretation and characterization of the clusters. Result (ii), compared to (i), can also be attributed to how the clustering algorithm works; in particular, the hyperparameters n_neighbors from UMAP and min_cluster_size and min_samples from HDBSCAN affect the resulting embedding and clustering of the data points. To this effect, looking at the alluvium from nn5.c2 to nn9.c2, we can see that nn5.c2 evidently bears greater and greater significance as we increase the n_neighbors value, showing that data points once grouped into nn5.c2 are now considered a major part of nn9.c2. This observation also holds for the other alluviums with significant flows. Lastly, result (iii) implies that the two significant clusters (nn6.c2 and nn6.c1) are really made up of four clusters (nn9.c1, nn9.c2, nn9.c3, nn9.cN), and that one of these two major clusters, nn6.c2, can be split up further into three clusters, namely nn9.c1, nn9.c3, and nn9.cN.
Since the data have noise, it is to be expected that our unsupervised learning might produce non-physical clusters. Regarding this matter, we can use the alluvial diagram, which keeps track of the cluster membership change of each FRB, as an additional cross-check. It enables us to look past the cluster membership assigned to each FRB by considering thick flows to be significant and seeing how each cluster evolves as n_neighbors changes. This eliminates complete dependency of our final results on a particular set of hyperparameter values, showing that certain groups of FRBs/data points remain together throughout the n_neighbors changes. It also supports the idea of certain cluster properties being carried over from one cluster into another as the n_neighbors value changes. This entails that the clustering result is based on the difference in physics, which is also supported by the one-dimensional histograms (see Section <ref>).
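A simple tabular counterpart of the alluvial diagram (an added illustration, not the figure itself) is a cross-tabulation of the labels from two runs; `labels` refers to the HDBSCAN labels of Section 3.2.

```python
import pandas as pd

# how many bursts keep or change their membership between n_neighbors = 5 and 9
flow = pd.crosstab(pd.Series(labels[5], name="nn5"),
                   pd.Series(labels[9], name="nn9"))
print(flow)   # rows: nn5 clusters (-1 = Noise), columns: nn9 clusters
```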
§.§ Comparison with other results
In relation to the number of clusters we have from the HDBSCAN clustering, there are similar results from literature that found the same number of clusters as our results for the n_neighbors = 6 clustering. Using the same dataset from the Supplementary Table 1 of , found two clusters by assigning a critical brightness temperature (T_B,cri) of 10^33 K. In their work, they used the brightness temperature of the FRBs as a criterion to cluster the bursts of FRB121102 because it directly relates to the radiation mechanism of FRBs. These clusters contain bursts depending on whether the bursts have a T_B value greater than or equal or less than the T_B,cri. "Classical" bursts are bursts that have T_B ≥ T_B,cri while the "Atypical" bursts are bursts that have T_B < T_B,cri. also found that the 76 "Classical" bursts have a tight width - fluence (T - ℱ_ν) relation described by log(T) = 0.306·log(ℱ_ν) + 0.399 with a correlation coefficient of r = 0.936. Given that this relation does not hold true for the case of "Atypical" bursts and the total bursts of FRB121102, it leads to suggest that these "Atypical" bursts may be further grouped into several subtypes consisting of different radio transient types.
Marking the "Classical" and "Atypical" bursts of in our UMAP results and checking their cluster membership based on the clusters we identified, we have Figure <ref>. In the figure, the clustering based on HDBSCAN is represented by a convex hull boundary and the number of "Classical" bursts within a cluster is indicated within the parenthesis. Since there are 76 "Classical" bursts and not all "Classical" bursts are members of clusters 1, 2, and 3. Then, these other "Classical" bursts are members of the Noise cluster. Tracking the change of "Classical" bursts membership using Figure <ref>. We found that the change in cluster membership of the majority (≥ 75% = 57) of "Classical" bursts follow the alluvium connecting nn5.c2 and nn9.c2. Suggesting that nn6.c1 corresponds to the "Classical" bursts while nn6.c2 corresponds to the "Atypical" bursts.
However, the work of only used a single parameter to group or cluster the FRBs of FRB121102, whereas this study used seven of the parameters from the Supplementary Table 1 of to cluster the FRBs, giving a more robust clustering result. Nevertheless, the agreement between the clustering results implies that there is an existing structure in the FRBs of FRB121102 regardless of the clustering method that is used.
The work of also agrees with the findings of and this study in terms of the number of clusters. Using the version of the CHIME/FRB catalog data <cit.> which contains 536 events (repeaters and non-repeaters). , also found two clusters with significant differences in their morphology by using the frbmclust software <cit.>. The first cluster is described to have broad widths, low flux, several peaks per event (13.4% of events have >1 peak), mean boxcar width = 24.79 ms, median flux = 0.56 Jy, and has 28 repeaters. The second cluster is described to have narrow widths, high flux, single peaks per event (6.3% of events have >1 peak), mean boxcar width = 4.12 ms, median flux = 1.08 Jy, and has 33 repeaters. From these descriptions of each cluster, concluded that what they identified as second cluster corresponds to the "Classical" bursts of and what they identified as first cluster resembles the findings of for the broad population of sources. This result of then suggests that the clustering of FRB121102 into two according to must also be the same to the 536 events (repeaters and non-repeaters) they studied.
In addition to these, also suggested that the bimodality of the energy distribution points to more than one emission mechanism or emission site or beam shape. Whereas pointed out that the bimodal burst energy distribution found by already hints (if not indicate) that there are two subtypes of FRBs and the subsequent work of supports this result and along with ours. Thus, we find that the number of significant clusters we found for n_neighbors = 6 corresponds to clusters found by , , and .
Since we have established that the clusters of n_neighbors = 6 are consistent with the results of , , and , and that nn6.c2, based on Figure <ref>, is really composed of three clusters, it follows that what we found as nn9.c1, nn9.c3, and nn9.cN must correspond to the "Atypical" bursts described in the work of . This shows that these "Atypical" bursts can be further split into three clusters with distinct properties (see Table <ref>). Therefore, we can describe the properties of these "Atypical" bursts based on the properties of nn9.c1, nn9.c3, and nn9.cN. However, since the primary focus of our work revolves around classifying repeating FRBs, discussion of the physical mechanisms of each cluster is left for future theoretical works.
§.§ Clustering performance
To evaluate the agreement or similarity of the clustering results presented in this paper, we employ a clustering performance metric, namely the Rand Index <cit.> and its corrected-for-chance version, the Adjusted Rand Index <cit.>. These give a Rand score and an Adjusted Rand score for each pair of clustering results we compare, where a high score indicates that the two clusterings are in very good agreement. In addition, we also computed the Rand and Adjusted Rand scores for the case where Cluster 3 and the Noise cluster of each clustering result are merged. The reason for this is rooted in the results presented in Figures <ref>–<ref>, where the existence of Cluster 3 is not a "firm detection" but an "indicator" of another cluster, one that is intermediate between Cluster 1 and Cluster 2.
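As an aside, the pairwise scores described above can be reproduced with standard tooling; the following is a minimal sketch using scikit-learn, where the two label arrays are hypothetical cluster assignments (with -1 standing for the Noise cluster) rather than our actual results.

```python
# Minimal sketch of the clustering-agreement scores used here, with invented
# label arrays for the same bursts (cluster IDs per burst; -1 denotes Noise).
import numpy as np
from sklearn.metrics import rand_score, adjusted_rand_score

labels_nn8 = np.array([0, 0, 1, 1, 2, -1, 0, 1])   # e.g. clustering for n_neighbors = 8
labels_nn9 = np.array([0, 0, 1, 1, -1, -1, 0, 1])  # e.g. clustering for n_neighbors = 9

print("Rand score:         ", rand_score(labels_nn8, labels_nn9))
print("Adjusted Rand score:", adjusted_rand_score(labels_nn8, labels_nn9))

# Variant with Cluster 3 (label 2) merged into Noise (-1), as discussed above.
merged_nn8 = np.where(labels_nn8 == 2, -1, labels_nn8)
merged_nn9 = np.where(labels_nn9 == 2, -1, labels_nn9)
print("Rand score (merged):", rand_score(merged_nn8, merged_nn9))
```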
In Figure <ref>, we present the Rand and Adjusted Rand scores as a scatter plot of their values. Each point is annotated with the pair of n_neighbors values for which the scores are calculated; thus, every data point is a pair of clustering results with its corresponding Rand and Adjusted Rand scores as coordinates. Figure <ref> shows the scores for each pair of clustering results where Cluster 3 and Noise are not merged, while Figure <ref> shows the scores where they are merged. In both scatter plots, the clustering results for n_neighbors = 5, 7, 8, and 9 exhibit Rand scores greater than 0.80 and Adjusted Rand scores of at least 0.60, indicating that these clustering results are in very good agreement with one another, with the exception of n_neighbors = 6. Comparing the scores for the two cases, one where Noise is not merged with Cluster 3 (Figure <ref>) and one where it is (Figure <ref>), we see no significant difference or improvement after merging, which implies that there is no significant merit in merging Cluster 3 and Noise. Lastly, both figures show that the clustering result for either n_neighbors = 8 or 9 is a good representative of the clustering of our dataset. In this paper, we adopt the case with Noise kept separate from Cluster 3.
§ CONCLUSIONS
With the above underpinnings, this paper concludes the following:
* Using parameter colouring, we have identified the invariant properties of each cluster regardless of the n_neighbors value. Describing the FRB subtypes without any dependence on the chosen n_neighbors value permits comparison with other works that aim at classifying FRBs. Invariant cluster properties also aid in determining possible physical mechanisms corresponding to the characterization of each cluster, which can be further discussed in future theoretical works.
* Investigating and plotting the change in cluster membership of the FRBs proved useful in pointing out connections between clusters obtained with different n_neighbors values. This analysis revealed more about the underlying structure of the FRBs of FRB121102 by showing that certain clusters may have a complex composition and consist of smaller distinct clusters. In particular, we found that the "Atypical" cluster of FRB121102 can be further split into three smaller clusters.
* Compared to existing results in the literature, our clustering does not rely on a single parameter to group the FRBs. By using the pertinent physical parameters, if not all parameters, the model we employed produced a more robust classification of the repeating FRBs from FRB121102. Nevertheless, the degree of agreement with other results (e.g., recovering the previously used FRB classification) demonstrates that the clusters are consistent and grounded in the physical parameters.
§ ACKNOWLEDGMENTS
We thank the anonymous referee for many insightful comments, which improved the paper significantly.
TG acknowledges the support of the National Science and Technology Council of Taiwan through grants 108-2628-M-007-004-MY3, 111-2112-M-007-021, and 111-2123-M-001-008-.
TH acknowledges the support of the National Science and Technology Council of Taiwan through grants 110-2112-M-005-013-MY3, 110-2112-M-007-034-, and 111-2123-M-001-008-.
The authors would also like to extend their utmost gratitude to Professor Wang Pei and his colleagues for sharing and making the data openly available in Science Data Bank. The authors would also like to thank Dr. Shotaro Yamasaki for his valuable insights and suggestions.
§ DATA AVAILABILITY
The data underlying this article are available in the work of <cit.>. The dataset was derived from Science Data Bank, at <http://doi.org/10.11922/sciencedb.01092> (DOI: 10.11922/sciencedb.01092).
§ APPENDIX
Additional results that are important to the analyses in our paper.
§.§ UMAP results for n_neighbors = 5,6,7, and 8
§.§ HDBSCAN results for n_neighbors = 5,6,7, and 8
§.§ Parameter Colouring results for n_neighbors = 5,6,7, and 8
§.§ Histogram results for n_neighbors = 5,6,7,8, and 9
|
http://arxiv.org/abs/2307.03109v5
|
20230706162835
|
A Survey on Evaluation of Large Language Models
|
[
"Yupeng Chang",
"Xu Wang",
"Jindong Wang",
"Yuan Wu",
"Kaijie Zhu",
"Hao Chen",
"Linyi Yang",
"Xiaoyuan Yi",
"Cunxiang Wang",
"Yidong Wang",
"Wei Ye",
"Yue Zhang",
"Yi Chang",
"Philip S. Yu",
"Qiang Yang",
"Xing Xie"
] |
cs.CL
|
[
"cs.CL",
"cs.AI"
] |
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications. As LLMs continue to play a vital role in both research and daily use, their evaluation becomes increasingly critical, not only at the task level, but also at the societal level, for a better understanding of their potential risks. Over the past years, significant efforts have been made to examine LLMs from various perspectives.
This paper presents a comprehensive review of these evaluation methods for LLMs, focusing on three key dimensions: what to evaluate, where to evaluate, and how to evaluate.
Firstly, we provide an overview from the perspective of evaluation tasks, encompassing general natural language processing tasks, reasoning, medical usage, ethics, education, natural and social sciences, agent applications, and other areas.
Secondly, we answer the `where' and `how' questions by diving into the evaluation methods and benchmarks, which serve as crucial components in assessing the performance of LLMs.
Then, we summarize the success and failure cases of LLMs in different tasks.
Finally, we shed light on several future challenges that lie ahead in LLM evaluation. Our aim is to offer invaluable insights to researchers in the realm of LLM evaluation, thereby aiding the development of more proficient LLMs. Our key point is that evaluation should be treated as an essential discipline to better assist the development of LLMs. We consistently maintain the related open-source materials at: <https://github.com/MLGroupJLU/LLM-eval-survey>.
Large language models, evaluation, model assessment, benchmark
A Survey on Evaluation of Large Language Models
Yupeng Chang,
Xu Wang,
Jindong Wang,
Yuan Wu,
Kaijie Zhu,
Hao Chen,
Linyi Yang,
Xiaoyuan Yi,
Cunxiang Wang,
Yidong Wang,
Wei Ye,
Yue Zhang,
Yi Chang, Senior Member, IEEE,
Philip S. Yu, Fellow, IEEE,
Qiang Yang, Fellow, IEEE,
and Xing Xie, Fellow, IEEE
Y. Chang, X. Wang, Y. Wu and Y. Chang are with the School of Artificial Intelligence, Jilin University, Changchun, China. The first two authors contributed equally.
J. Wang, X. Yi, and X. Xie are with Microsoft Research, Beijing, China.
K. Zhu is with Institute of Automation, CAS, Beijing, China.
H. Chen is with Carnegie Mellon University, PA, USA.
L. Yang, C. Wang, and Y. Zhang are with Westlake University, Hangzhou, China.
Y. Wang and W. Ye are with Peking University, Beijing, China.
P. Yu is with the University of Illinois at Chicago, IL, USA.
Q. Yang is with Hong Kong University of Science and Technology, Kowloon, Hong Kong.
Correspondence to: Yuan Wu ([email protected]) and Jindong Wang ([email protected]).
August 1, 2023
=================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
Understanding the essence of intelligence and establishing whether a machine embodies it poses a compelling question for scientists.
It is generally agreed upon that authentic intelligence equips us with reasoning capabilities, enables us to test hypotheses, and prepare for future eventualities <cit.>.
In particular, Artificial Intelligence (AI) researchers focus on the development of machine-based intelligence, as opposed to biologically based intellect <cit.>.
Proper measurement helps to understand intelligence.
For instance, measures for general intelligence in human individuals often encompass IQ tests <cit.>.
Within the scope of AI, the Turing Test <cit.>, a widely recognized test for assessing intelligence by discerning if responses are of human or machine origin, has been a longstanding objective in AI evolution.
It is generally believed among researchers that a computing machine that successfully passes the Turing Test can be regarded as intelligent.
Consequently, when viewed from a wider lens, the chronicle of AI can be depicted as the timeline of creation and evaluation of intelligent models and algorithms.
With each emergence of a novel AI model or algorithm, researchers invariably scrutinize its capabilities in real-world scenarios through evaluation using specific and challenging tasks.
For instance, the Perceptron algorithm <cit.>, touted as an Artificial General Intelligence (AGI) approach in the 1950s, was later revealed as inadequate due to its inability to resolve the XOR problem.
The subsequent rise and application of Support Vector Machines (SVMs) <cit.> and deep learning <cit.> have marked both progress and setbacks in the AI landscape.
A significant takeaway from previous attempts is the paramount importance of AI evaluation, which serves as a critical tool to identify current system limitations and inform the design of more powerful models.
Recently, large language models (LLMs) have incited substantial interest across both academic and industrial domains <cit.>.
As demonstrated by existing work <cit.>, the great performance of LLMs has raised the promise that they could be AGI in this era.
LLMs possess the capability to solve diverse tasks, in contrast with prior models confined to solving specific tasks.
Due to their great performance in handling different applications, such as general natural language tasks and domain-specific ones,
LLMs are increasingly used by individuals with critical information needs, such as students or patients.
Evaluation is of paramount importance to the success of LLMs for several reasons.
First, evaluating LLMs helps us better understand their strengths and weaknesses.
For instance, the PromptBench <cit.> benchmark illustrates that current LLMs are sensitive to adversarial prompts, so careful prompt engineering is necessary for better performance.
Second, better evaluations can provide better guidance for human–LLM interaction, which could inspire future interaction design and implementation.
Third, the broad applicability of LLMs underscores the paramount importance of ensuring their safety and reliability, particularly in safety-sensitive sectors such as financial institutions and healthcare facilities.
Finally, as LLMs are becoming larger with more emergent abilities, existing evaluation protocols may not be enough to evaluate their capabilities and potential risks.
Therefore, we aim to raise the community's awareness of the importance of LLM evaluation by reviewing the current evaluation protocols and, most importantly, to shed light on future research on designing new evaluation protocols.
With the introduction of ChatGPT <cit.> and GPT-4 <cit.>, there have been a number of research efforts aimed at evaluating ChatGPT and other LLMs from different aspects (<ref>), encompassing a range of factors such as natural language tasks, reasoning, robustness, trustworthiness, medical applications, and ethical considerations.
Despite these efforts, a comprehensive overview capturing the entire gamut of evaluations is still lacking.
Furthermore, the ongoing evolution of LLMs has also presented novel aspects for evaluation, thereby challenging existing evaluation protocols and reinforcing the need for thorough, multifaceted evaluation techniques.
While existing research such as <cit.> claimed that GPT-4 can be seen as sparks of AGI, others contest this claim due to the human-crafted nature of its evaluation approach.
This paper serves as the first comprehensive survey on evaluation of large language models.
As depicted in <ref>, we explore existing work along three dimensions: 1) what to evaluate, 2) where to evaluate, and 3) how to evaluate. Specifically, “what to evaluate" encapsulates existing evaluation tasks for LLMs, “where to evaluate" involves selecting appropriate datasets and benchmarks for evaluation, while “how to evaluate" is concerned with the evaluation process given appropriate tasks and datasets. These three dimensions are integral to the evaluation of LLMs. We subsequently discuss potential future challenges in the realm of LLM evaluation.
The contributions of this paper are as follows:
* We provide a comprehensive overview of LLM evaluations from three aspects: what to evaluate, where to evaluate, and how to evaluate. Our categorization is general and encompasses the entire life cycle of LLM evaluation.
* Regarding what to evaluate, we summarize existing tasks in various areas and obtain insightful conclusions on the success and failure cases of LLMs (Sec. <ref>), providing experience for future research.
* As for where to evaluate, we summarize evaluation metrics, datasets, and benchmarks to provide a profound understanding of current LLM evaluations. In terms of how to evaluate, we explore current protocols and summarize novel evaluation approaches.
* We further discuss future challenges in evaluating LLMs. We open-source and maintain the related materials on LLM evaluation at <https://github.com/MLGroupJLU/LLM-eval-survey> to foster a collaborative community for better evaluation.
The paper is organized as follows.
In Sec. <ref>, we provide basic information on LLMs and AI model evaluation.
Then, Sec. <ref> reviews existing work from the aspects of “what to evaluate”.
After that, Sec. <ref> is the “where to evaluate” part, which summarizes existing datasets and benchmarks.
Sec. <ref> discusses how to perform the evaluation.
In Sec. <ref>, we summarize the key findings of this paper.
We discuss grand future challenges in Sec. <ref> and Sec. <ref> concludes the paper.
§ BACKGROUND
§.§ Large Language Models
Language models (LMs) <cit.> are computational models that have the capability to understand and generate human language.
LMs have the transformative ability to predict the likelihood of word sequences or generate new text based on a given input.
N-gram models <cit.>, the most common type of LM, estimate word probabilities based on the preceding context.
However, LMs also face challenges, such as the issue of rare or unseen words, the problem of overfitting, and the difficulty in capturing complex linguistic phenomena. Researchers are continuously working on improving LM architectures and training methods to address these challenges.
Large Language Models (LLMs) <cit.> are advanced language models with massive parameter sizes and exceptional learning capabilities.
The core module behind many LLMs such as GPT-3 <cit.>, InstructGPT <cit.>, and GPT-4 <cit.> is the self-attention module in Transformer
<cit.> that serves as the fundamental building block for language modeling tasks. Transformers have revolutionized the field of NLP with their ability to handle sequential data efficiently, allowing for parallelization and capturing long-range dependencies in text.
One key feature of LLMs is in-context learning <cit.>, where the model is trained to generate text based on a given context or prompt. This enables LLMs to generate more coherent and contextually relevant responses, making them suitable for interactive and conversational applications.
Reinforcement Learning from Human Feedback (RLHF) <cit.> is another crucial aspect of LLMs. This technique involves fine-tuning the model using human-generated responses as rewards, allowing the model to learn from its mistakes and improve its performance over time.
In an autoregressive language model, such as GPT-3 and PaLM <cit.>, given a context sequence X, the LM tasks aim to predict the next token y. The model is trained by maximizing the probability of the given token sequence conditioned on the context, i.e., P(y | X) = P(y | x_1, x_2, ..., x_t-1), where x_1, x_2, ..., x_t-1 are the tokens in the context sequence, and t is the current position. By using the chain rule, the conditional probability can be decomposed into a product of probabilities at each position:
P(y | X) = ∏_t=1^T P(y_t | x_1, x_2, ..., x_t-1),
where T is the sequence length. In this way, the model predicts each token at each position in an autoregressive manner, generating a complete text sequence.
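To make the chain-rule factorization above concrete, the following toy sketch accumulates log-probabilities token by token; the `next_token_probs` function is a hypothetical stand-in for a real Transformer's softmax output, not an actual model.

```python
# Toy illustration of autoregressive scoring: log P(y | X) is accumulated
# token by token, conditioning on the context plus previously generated tokens.
import math

VOCAB = ["<eos>", "the", "cat", "sat", "mat"]

def next_token_probs(prefix):
    # Hypothetical next-token distribution; a real model would run a
    # Transformer forward pass here and return softmax probabilities.
    uniform = 1.0 / len(VOCAB)
    return {tok: uniform for tok in VOCAB}

def sequence_log_prob(context, continuation):
    """log P(y | X) = sum_t log P(y_t | X, y_1, ..., y_{t-1})."""
    prefix = list(context)
    log_p = 0.0
    for token in continuation:
        probs = next_token_probs(prefix)
        log_p += math.log(probs[token])
        prefix.append(token)  # autoregressive: condition on what was generated
    return log_p

print(sequence_log_prob(["the", "cat"], ["sat", "<eos>"]))  # 2 * log(1/5)
```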
One common approach to interacting with LLMs is prompt engineering <cit.>, where users design and provide specific prompt texts to guide LLMs in generating desired responses or completing specific tasks. This is widely adopted in existing evaluation efforts.
People can also engage in question-and-answer interactions <cit.>, where they pose questions to the model and receive answers, or engage in dialogue interactions, having natural language conversations with LLMs.
In conclusion, LLMs, with their Transformer architecture, in-context learning, and RLHF capabilities, have revolutionized NLP and hold promise in various applications.
<ref> provides a brief comparison of traditional ML, deep learning, and LLMs.
§.§ AI Model Evaluation
AI model evaluation is an essential step in assessing the performance of a model.
There are some standard model evaluation protocols, including k-fold cross-validation, holdout validation, leave one out cross-validation (LOOCV), bootstrap, and reduced set <cit.>.
For instance, k-fold cross-validation divides the dataset into k parts, with one part used as the test set and the rest as the training set, which reduces the loss of training data and yields a relatively more accurate estimate of model performance <cit.>; holdout validation splits the dataset into a training and a test set, with lower computational cost but potentially more significant bias; LOOCV is the special case of k-fold cross-validation in which only one data point is used as the test set <cit.>; the reduced-set approach trains the model on one dataset and tests it on the remaining data, which is computationally simple, but its applicability is limited.
The appropriate evaluation method should be chosen according to the specific problem and data characteristics for more reliable performance indicators.
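As an illustration of the protocols above, the following sketch runs k-fold cross-validation and LOOCV with scikit-learn on a synthetic dataset; the model and data are illustrative placeholders, not those used in the cited studies.

```python
# Minimal sketch of k-fold cross-validation and LOOCV on a synthetic
# classification dataset using scikit-learn.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000)

# k-fold: each of the k parts serves once as the test set.
kfold_scores = cross_val_score(
    model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0)
)
print("5-fold accuracies:", kfold_scores, "mean:", kfold_scores.mean())

# LOOCV is the special case where each fold contains a single data point.
loocv_scores = cross_val_score(model, X, y, cv=LeaveOneOut())
print("LOOCV mean accuracy:", loocv_scores.mean())
```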
<ref> illustrates the evaluation process of AI models, including LLMs.
Some evaluation protocols may not be feasible to evaluate deep learning models due to the extensive training size.
Thus, evaluation on a static validation set has long been the standard choice for deep learning models.
For instance, computer vision models leverage static test sets such as ImageNet <cit.> and MS COCO <cit.> for evaluation.
LLMs also use GLUE <cit.> or SuperGLUE <cit.> as common test sets.
As LLMs are becoming more popular with even poorer interpretability, existing evaluation protocols may not be enough to thoroughly evaluate their true capabilities.
We will introduce recent evaluations of LLMs in Sec. <ref>.
§ WHAT TO EVALUATE
What tasks should we evaluate LLMs on to show their performance?
On what tasks can we claim the strengths and weaknesses of LLMs?
In this section, we divide existing tasks into the following categories: natural language processing, robustness, ethics, biases and trustworthiness, social sciences, natural science and engineering, medical applications, agent applications (using LLMs as agents), and other applications.[Note that LLMs are evaluated on various tasks and the categorization in this paper is only one possible way to classify these works. There are certainly other taxonomies.]
§.§ Natural Language Processing Tasks
The initial objective behind the development of language models, particularly large language models, was to enhance performance on natural language processing tasks, encompassing both understanding and generation.
Consequently, the majority of evaluation research has been primarily focused on natural language tasks.
<ref> summarizes the evaluation aspects of existing research, and we mainly highlight their conclusions in the following.[Several NLP areas overlap, and thus our categorization of these areas is only one possible way to categorize them.]
§.§.§ Natural language understanding
Natural language understanding represents a wide spectrum of tasks that aims to obtain a better understanding of the input sequence.
We summarize recent efforts to evaluate LLMs from several aspects.
Sentiment analysis is a task that analyzes and interprets the text to determine the emotional inclination.
It is typically a binary (positive and negative) or triple (positive, neutral, and negative) class classification problem.
Evaluating sentiment analysis tasks is a popular direction.
<cit.> showed that the performance of the models on this task is usually high.
ChatGPT's sentiment analysis prediction performance is superior to traditional sentiment analysis methods <cit.> and comes close to that of GPT-3.5 <cit.>.
In fine-grained sentiment and emotion cause analysis, ChatGPT also exhibits exceptional performance <cit.>.
In low-resource learning environments, LLMs exhibit significant advantages over small language models <cit.>, but the ability of ChatGPT to understand low-resource languages is limited <cit.>.
In conclusion, LLMs have demonstrated commendable performance in sentiment analysis tasks. Future work should focus on enhancing their capability to understand emotions in under-resourced languages.
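For concreteness, a typical zero-shot sentiment evaluation of an LLM can be set up as in the sketch below; `query_llm` is a hypothetical placeholder for an actual model or API call, and the examples are invented.

```python
# Sketch of zero-shot sentiment evaluation: prompt the model per example and
# compare its predicted label to the gold label.
def query_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM endpoint here.
    return "positive"

def zero_shot_sentiment_accuracy(examples):
    correct = 0
    for text, gold in examples:
        prompt = (
            "Classify the sentiment of the following review as "
            f"'positive' or 'negative'.\nReview: {text}\nSentiment:"
        )
        pred = query_llm(prompt).strip().lower()
        correct += int(pred == gold)
    return correct / len(examples)

examples = [("A wonderful, moving film.", "positive"),
            ("Dull and far too long.", "negative")]
print(zero_shot_sentiment_accuracy(examples))
```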
Text classification is related to sentiment analysis, but it does not only focus on sentiment; it also includes the processing of all kinds of texts and tasks.
The work of <cit.> showed that GLM-130B was the best-performed model, with an overall accuracy of 85.8% for miscellaneous text classification.
<cit.> found that ChatGPT can produce credibility ratings for a wide range of news outlets, and these ratings have a moderate correlation with those from human experts.
Furthermore, ChatGPT achieves acceptable accuracy in a binary classification scenario (AUC=0.89).
<cit.> discussed the problem of topic classification for public affairs documents and showed that using an LLM backbone in combination with SVM classifiers is a useful strategy to conduct the multi-label topic classification task in the domain of public affairs with accuracies over 85%.
Overall, LLMs perform well on text classification and can even handle text classification tasks in unconventional problem settings.
Natural language inference (NLI) is the task of determining whether the given “hypothesis” logically follows from the “premise”.
<cit.> showed that ChatGPT outperforms GPT-3.5 for NLI tasks.
They also found that ChatGPT excels in handling factual input that could be attributed to its RLHF training process in favoring human feedback.
However, <cit.> observed that LLMs perform poorly in the scope of NLI and further fail to represent human disagreement, which indicates that LLMs still have large room for improvement in this field.
Semantic understanding refers to the meaning or understanding of language and its associated concepts. It involves the interpretation and comprehension of words, phrases, sentences and the relationships between them. Semantic processing goes beyond the surface level and focuses on understanding the underlying meaning and intent.
<cit.> comprehensively evaluated the event semantic processing abilities of LLMs, covering understanding, reasoning, and prediction about event semantics. The results indicated that LLMs possess an understanding of individual events, but their capacity to perceive the semantic similarity among events is constrained. In reasoning tasks, LLMs exhibit robust reasoning abilities for causal and intentional relations, yet their performance on other relation types is comparatively weaker. In prediction tasks, LLMs exhibit enhanced predictive capabilities for future events with increased contextual information.
<cit.> explored the semantic proficiency of LLMs and showed that these models perform poorly in evaluating basic phrases.
Furthermore, GPT-3.5 and Bard cannot distinguish between meaningful and nonsense phrases, consistently classifying highly nonsense phrases as meaningful.
GPT-4 shows significant improvements, but its performance is still significantly lower than that of humans.
In summary, the performance of LLMs in semantic understanding tasks is poor. Future work could start from this aspect and focus on improving their performance on such applications.
In the field of social knowledge understanding, <cit.> evaluated how well models perform at learning and recognizing concepts of social knowledge, and the results revealed that, despite being much smaller in the number of parameters, fine-tuned supervised models such as BERT lead to much better performance than zero-shot use of state-of-the-art LLMs such as GPT <cit.>, GPT-J-6B <cit.>, and so on.
This statement demonstrates that supervised models significantly outperform zero-shot models in terms of performance, highlighting that an increase in parameters does not necessarily guarantee a higher level of social knowledge in this particular scenario.
§.§.§ Reasoning
The task of reasoning poses significant challenges for an intelligent AI model.
To effectively tackle reasoning tasks, the models need to not only comprehend the provided information but also utilize reasoning and inference to deduce answers when explicit responses are absent.
<ref> reveals that there is a growing interest in evaluating the reasoning ability of LLMs, as evidenced by the increasing number of articles focusing on this aspect.
Currently, the evaluation of reasoning tasks can be broadly categorized into mathematical reasoning, commonsense reasoning, logical reasoning, and domain-specific reasoning.
ChatGPT exhibits a strong capability for arithmetic reasoning by outperforming GPT-3.5 in the majority of tasks <cit.>.
However, its proficiency in mathematical reasoning still requires improvement <cit.>.
On symbolic reasoning tasks, ChatGPT is mostly worse than GPT-3.5, which may be because ChatGPT is prone to uncertain responses, leading to poor performance <cit.>.
Through the poor performance of LLMs on task variants with counterfactual conditions, <cit.> showed that current LLMs have certain limitations in abstract reasoning ability.
In logical reasoning, <cit.> indicated that ChatGPT and GPT-4 outperform traditional fine-tuning methods on most benchmarks, demonstrating their superiority in logical reasoning.
However, both models face challenges when handling new and out-of-distribution data.
ChatGPT does not perform as well as other LLMs, including GPT-3.5 and BARD <cit.>.
This is because ChatGPT is designed explicitly for chatting, so it does an excellent job of maintaining rationality.
FLAN-T5, LLaMA, GPT-3.5, and PaLM perform well in general deductive reasoning tasks <cit.>.
GPT-3.5 struggles to stay oriented when reasoning in the inductive setting <cit.>.
For multi-step reasoning, <cit.> showed that PaLM and Claude2 are the only two model families achieving similar performance (though still worse than the GPT model family).
Moreover, LLaMA-65B is the most robust open-source LLM to date, performing closely to code-davinci-002.
Some papers separately evaluate the performance of ChatGPT on specific reasoning tasks: ChatGPT generally performs poorly on commonsense reasoning tasks, but relatively better than on non-text semantic reasoning <cit.>.
Meanwhile, ChatGPT also lacks spatial reasoning ability, but exhibits better temporal reasoning.
Finally, while the performance of ChatGPT is acceptable on causal and analogical reasoning, it performs poorly on multi-hop reasoning, a weakness it shares with other LLMs on complex reasoning <cit.>.
In professional domain reasoning tasks, zero-shot InstructGPT and Codex are capable of complex medical reasoning tasks, but still need to be further improved <cit.>.
In terms of language insight issues, <cit.> demonstrated the potential of ChatGPT for solving verbal insight problems, as ChatGPT's performance was comparable to that of human participants.
It should be noted that most of the above conclusions are obtained for specific data sets.
Overall, LLMs show great potential in reasoning and exhibit a trend of continuous improvement, but they still face many challenges and limitations, requiring more in-depth research and optimization.
§.§.§ Natural language generation
Natural language generation (NLG) evaluation assesses the capabilities of LLMs in generating specific texts; it consists of several tasks, including summarization, dialogue generation, machine translation, question answering, and other open-ended generation applications.
Summarization is a generation task that aims to learn a concise abstract for the given sentence.
In this evaluation, <cit.> found that TNLG v2 (530B) <cit.> achieved the highest score in both scenarios, followed by OPT (175B) <cit.> in second place.
It is disappointing that ChatGPT sometimes generates a longer summary than the input document <cit.>.
The fine-tuned Bart <cit.> is still better than zero-shot ChatGPT.
Specifically, ChatGPT demonstrates comparable zero-shot performance to the text-davinci-002 <cit.>, but performs worse than GPT-3.5 <cit.>.
In controllable text summarization, <cit.> showed that ChatGPT summaries are slightly more extractive (i.e., containing more content copied directly from the source) compared to human summaries.
These findings indicate that LLMs, particularly ChatGPT, achieve only average performance in summarization tasks.
However, their summary and generalization abilities still require further improvement.
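Summarization evaluations of this kind typically report reference-based scores such as ROUGE; the sketch below, assuming the `rouge-score` package and invented texts, shows how such scores are computed.

```python
# Minimal sketch of reference-based summarization scoring with ROUGE.
from rouge_score import rouge_scorer

reference = "the model summarizes the document in a few sentences"
candidate = "the model produces a short summary of the document"

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, candidate)
for name, result in scores.items():
    print(name,
          f"P={result.precision:.2f}",
          f"R={result.recall:.2f}",
          f"F1={result.fmeasure:.2f}")
```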
Evaluating the performance of LLMs on dialogue tasks is crucial to the development of dialogue systems and to improving human–computer interaction.
Through such evaluation, the natural language processing, context understanding, and generation abilities of the model can be improved, so as to realize more intelligent and more natural dialogue systems.
Both Claude and ChatGPT generally achieve better performance across all dimensions when compared to GPT-3.5 <cit.>.
When comparing the Claude and ChatGPT models, both models demonstrate competitive performance across different evaluation dimensions, with Claude slightly outperforming ChatGPT in specific configurations.
<cit.> conducted tests on ChatGPT for response generation in different dialogue settings: 1) Knowledge-Grounded Open-Domain Dialogue and 2) Task-Oriented Dialogue.
The automatic evaluation results revealed that ChatGPT’s performance is comparatively lower than that of GPT-2 fine-tuned on the dataset for knowledge-grounded open-domain dialogue.
In task-oriented dialogue, ChatGPT’s performance is acceptable; however, it tends to make errors in the presence of the following challenges: long-term multi-turn dependency, fundamental reasoning failure, and extrinsic hallucination.
While LLMs are not explicitly trained for translation tasks, they can still demonstrate strong performance.
<cit.> demonstrated that ChatGPT and GPT-4 exhibit superior performance in comparison to commercial machine translation (MT) systems, as evaluated by humans.
Additionally, they outperform most document-level NMT methods in terms of sacreBLEU scores.
During contrastive testing, ChatGPT shows lower accuracy in comparison to traditional translation models.
However, GPT-4 demonstrates a robust capability in explaining discourse knowledge, even though it may occasionally select incorrect translation candidates.
The findings of <cit.> indicated that ChatGPT performs X → Eng translation well, but it still lacks the ability to perform Eng → X translation.
<cit.> investigated several research directions in MT utilizing LLMs.
This study significantly contributes to the advancement of MT research and highlights the potential of LLMs in enhancing translation capabilities.
In summary, while LLMs perform satisfactorily in several translation tasks, there is still room for improvement, e.g., enhancing the translation capability from English to non-English languages.
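Since several of the studies above report sacreBLEU scores, the following minimal sketch shows how such a corpus-level score is computed with the `sacrebleu` package; the hypotheses and references are invented examples.

```python
# Minimal sketch of corpus-level BLEU scoring with sacrebleu.
import sacrebleu

hypotheses = ["the cat sat on the mat", "it is raining today"]
# One reference stream, aligned with the hypotheses (one reference each).
references = [["the cat sat on the mat", "it rains today"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```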
Question answering is a crucial technology in the field of human-computer interaction, and it has found wide application in scenarios like search engines, intelligent customer service, and question answering systems.
The measurement of accuracy and efficiency in QA models will have significant implications for these applications.
According to <cit.>, among all the evaluated models, InstructGPT davinci v2 (175B) exhibited the highest performance in terms of accuracy, robustness, and fairness across the 9 question answering scenarios.
Both GPT-3.5 and ChatGPT demonstrate significant advancements compared to GPT-3 in their ability to answer general knowledge questions.
In most domains, ChatGPT surpasses GPT-3.5 by more than 2% in terms of performance <cit.>.
However, ChatGPT performs slightly weaker than GPT-3.5 on the CommonsenseQA and Social IQA benchmarks.
This can be attributed to ChatGPT’s cautious nature, as it tends to decline providing an answer when there is insufficient information available.
Fine-tuned models, such as Vicuna and ChatGPT, exhibit exceptional performance with near-perfect scores, surpassing models that lack supervised fine-tuning by a significant margin <cit.>.
<cit.> evaluated the effectiveness of ChatGPT on a range of academic datasets, including various tasks such as answering questions, summarizing text, generating code, reasoning with commonsense, solving math problems, translating languages, detecting bias, and addressing ethical issues.
Overall, LLMs showcase flawless performance on QA tasks and hold the potential for further enhancing their proficiency in social, event, and temporal commonsense knowledge in the future.
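QA evaluations of this kind commonly rely on exact-match and token-level F1 scores (SQuAD-style); the self-contained sketch below illustrates these two metrics on invented answers.

```python
# Self-contained sketch of exact-match and token-level F1 for QA evaluation.
import re
from collections import Counter

def normalize(text):
    text = text.lower()
    text = re.sub(r"[^a-z0-9 ]", " ", text)  # drop punctuation
    return " ".join(text.split())

def exact_match(pred, gold):
    return float(normalize(pred) == normalize(gold))

def token_f1(pred, gold):
    pred_tokens, gold_tokens = normalize(pred).split(), normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Paris", "paris"), token_f1("in Paris, France", "Paris"))
```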
There are also other generation tasks to explore.
In the field of sentence style transfer, <cit.> demonstrated that ChatGPT surpasses the previous SOTA supervised model through training on the same subset for few-shot learning, as evident from the higher BLEU score.
However, when it comes to controlling the formality of sentence style, ChatGPT’s performance still differs significantly from human behavior.
In writing tasks, <cit.> discovered that LLMs exhibit consistent performance across various categories such as informative, professional, argumentative, and creative writing.
This finding implies that LLMs possess a general proficiency in writing.
In text generation quality, <cit.> revealed that ChatGPT excels in assessing text quality from multiple angles, even in the absence of reference texts, surpassing the performance of most existing automated metrics.
Employing ChatGPT to generate numerical scores for text quality emerged as the most reliable and effective approach among the various testing methods studied.
§.§.§ Multilingual tasks
While English is the predominant language, many LLMs are trained on mixed-language training data.
The combination of multilingual data indeed helps LLMs gain the ability to process inputs and generate responses in different languages, making them widely adopted and accepted across the globe.
However, due to the relatively recent emergence of this technology, LLMs are primarily evaluated on English data, leading to a potential oversight of their multilingual performance.
To address this, several articles have provided comprehensive, open, and independent evaluations of LLMs' performance on various NLP tasks in different non-English languages.
These evaluations offer valuable insights and perspectives for future research and applications.
<cit.> evaluated the performance of ChatGPT in standard Arabic NLP tasks and observed that ChatGPT exhibits lower performance compared to SOTA models in the zero-shot setting for most tasks.
<cit.> utilized a greater number of languages across multiple datasets, encompassing a wider range of tasks, and conducted a more comprehensive evaluation of , including BLOOM, Vicuna, Claude, ChatGPT, and GPT-4.
The results indicated that these LLMs perform poorly on non-Latin languages and languages with limited resources.
Even when the input is translated to English and used as the query, generative LLMs still display subpar performance across tasks and languages compared to SOTA models <cit.>.
Furthermore, <cit.> highlighted that ChatGPT still faces a limitation in translating sentences written in non-Latin script languages with rich linguistic resources.
The aforementioned results demonstrate that there are numerous challenges and ample opportunities for enhancing LLMs on multilingual tasks.
Future research should prioritize achieving multilingual balance and addressing the challenges faced by non-Latin languages and low-resource languages, with the aim of better supporting users worldwide.
At the same time, attention should be paid to the impartiality and neutrality of the language in order to mitigate any potential biases, including English bias or other biases, that could impact multilingual applications.
§.§.§ Factuality
Factuality in the context of LLMs refers to the extent to which the information or answers provided by the model align with real-world truths and verifiable facts. Factuality in LLMs significantly impacts a variety of tasks and downstream applications, such as question answering systems, information extraction, text summarization, dialogue systems, and automated fact-checking, where incorrect or inconsistent information could lead to substantial misunderstandings and misinterpretations. Evaluating factuality is of great importance in order to trust and efficiently use these models. This includes the ability of these models to maintain consistency with known facts, avoid generating misleading or false information (known as "factual hallucination"), and effectively learn and recall factual knowledge. A range of methodologies have been proposed to measure and improve the factuality of LLMs.
<cit.> assessed the internal knowledge capabilities of several large models, namely InstructGPT, ChatGPT-3.5, GPT-4, and BingChat <cit.>, by examining their ability to answer open questions based on the Natural Questions <cit.> and TriviaQA <cit.> datasets. The evaluation process involved human assessment. The results of the study indicated that while GPT-4 and BingChat can provide correct answers for more than 80% of the questions, there is still a remaining gap of over 15% to achieve complete accuracy.
In the work of <cit.>, they conducted a review of current factual consistency evaluation methods and highlighted the absence of a unified comparison framework and the limited reference value of related scores compared to binary labels. To address this, they transformed existing fact consistency tasks into binary labels, specifically considering only whether there is a factual conflict with the input text, without factoring in external knowledge. The research discovered that fact evaluation methods founded on natural language inference and question generation-question answering exhibit superior performance and can complement each other.
<cit.> proposed a novel metric, based on information theory, to assess the inclusion of specific knowledge in LLMs. The metric utilized the concept of uncertainty in knowledge to measure factualness, calculated by filling in prompts and examining the probability distribution of the answer. The paper discussed two methods for injecting knowledge into LLMs: explicit inclusion of knowledge in the prompts and implicit fine-tuning of the LLM using knowledge-related data. The study demonstrated that this approach surpasses traditional ranking methods by achieving an accuracy improvement of over 30%.
<cit.> improved the method for evaluating fact consistency in summarization tasks. It proposed a novel approach that involved training student NLI models using summaries generated by multiple models and annotated by to ensure fact consistency. The trained student model was then used for summarization fact consistency evaluation.
<cit.> operated on two hypotheses regarding how LLMs generate factual or hallucinated responses. It proposed the use of three formulas (BERTScore <cit.>, MQAG <cit.>, and n-gram) to evaluate factuality and employed alternative LLMs to gather token probabilities for black-box language models. The study discovered that simply computing sentence likelihood or entropy helps validate the factuality of the responses.
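The likelihood- and entropy-based signals mentioned above can be illustrated with a toy sketch; the per-token probabilities below are invented stand-ins for real model outputs, not results from the cited study.

```python
# Toy sketch: flagging potentially hallucinated sentences via low average
# token log-likelihood or high average token entropy.
import math

def avg_log_likelihood(chosen_token_probs):
    """Mean log-probability of the tokens the model actually produced."""
    return sum(math.log(p) for p in chosen_token_probs) / len(chosen_token_probs)

def avg_entropy(per_token_distributions):
    """Mean entropy of the next-token distributions along the sentence."""
    entropies = [-sum(p * math.log(p) for p in dist if p > 0)
                 for dist in per_token_distributions]
    return sum(entropies) / len(entropies)

confident = [0.9, 0.8, 0.95]   # chosen-token probabilities (invented)
uncertain = [0.2, 0.15, 0.3]
print(avg_log_likelihood(confident), avg_log_likelihood(uncertain))

dists = [[0.7, 0.2, 0.1], [0.4, 0.3, 0.3]]
print(avg_entropy(dists))
```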
<cit.> broke down text generated by LLMs into individual 'atomic' facts, which were then evaluated for their correctness. The FActScore is used to measure the performance of estimators through the calculation of F1 scores. The paper tested various estimators and revealed that current estimators still have some way to go in effectively addressing the task.
<cit.> introduced the TruthfulQA dataset, designed to cause models to make mistakes. Multiple language models were tested on providing factual answers. The findings from these experiments suggest that simply scaling up model sizes may not necessarily improve their truthfulness, and recommendations are provided for the training approach. This dataset has become widely used for evaluating the factuality of LLMs <cit.>.
§.§ Robustness, Ethic, Bias, and Trustworthiness
The evaluation of LLMs encompasses the crucial aspects of robustness, ethics, biases, and trustworthiness.
These factors have gained increasing importance in assessing the performance of LLMs comprehensively.
§.§.§ Robustness
Robustness studies the stability of a system when facing unexpected inputs.
Specifically, out-of-distribution (OOD) <cit.> and adversarial robustness are two popular research topics for robustness.
<cit.> is an early work that evaluated ChatGPT and other LLMs from both the adversarial and OOD perspectives using existing benchmarks such as the AdvGLUE <cit.>, ANLI <cit.>, and DDXPlus <cit.> datasets.
<cit.> evaluated the robustness of semantic parsing.
<cit.> evaluated OOD robustness by extending the GLUE <cit.> dataset.
For vision-language models, <cit.> evaluated adversarial attacks on the visual input and transferred them to other visual-linguistic models, revealing the vulnerability of visual inputs.
The results of this study emphasize the potential risks to overall system security when the visual input is manipulated.
<cit.> provided an overview of OOD evaluation for language models: adversarial robustness, domain generalization, and dataset biases.
The authors compared and unified the three research lines, summarized the data-generating processes and evaluation protocols for each line, and highlighted the challenges and opportunities for future work.
For adversarial robustness, <cit.> evaluated the robustness of LLMs to prompts by proposing a unified benchmark called PromptBench.
They comprehensively evaluated adversarial text attacks at multiple levels (character, word, sentence, and semantics). The results showed that contemporary LLMs are vulnerable to adversarial prompts, highlighting the importance of model robustness when facing adversarial inputs.
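As an illustration (not PromptBench itself), a character-level prompt perturbation and the accuracy comparison it enables might look like the sketch below; `query_llm` is a hypothetical placeholder for a real model call.

```python
# Sketch of a character-level prompt perturbation for robustness evaluation.
import random

def char_perturb(prompt, n_swaps=2, seed=0):
    """Randomly swap adjacent characters to simulate typo-style attacks."""
    rng = random.Random(seed)
    chars = list(prompt)
    for _ in range(n_swaps):
        i = rng.randrange(len(chars) - 1)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def query_llm(prompt):
    return "positive"  # placeholder for an actual LLM response

clean_prompt = "Classify the sentiment of this review as positive or negative:"
adv_prompt = char_perturb(clean_prompt)
print(adv_prompt)

# In a real robustness study, accuracy would be measured with both the clean
# and the perturbed prompt over a labelled dataset, and the drop reported.
```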
As for new adversarial datasets, <cit.> introduced the use of the AdvGLUE++ benchmark data for assessing adversarial robustness and implemented a new evaluation protocol to scrutinize machine ethics via jailbreaking system prompts.
§.§.§ Ethic and bias
LLMs have been found to internalize, spread, and potentially magnify harmful information present in the crawled training corpora, usually toxic language such as offensiveness, hate speech, and insults <cit.>, as well as social biases such as stereotypes towards people with a particular demographic identity (e.g., gender, race, religion, occupation, and ideology) <cit.>. More recently, <cit.> used conventional testing sets and metrics <cit.> to perform a systematic evaluation of ChatGPT's toxicity and social bias, finding that it still exhibits noxious content to some extent. Taking a further step, <cit.> introduced role-playing into the model and observed an increase in generated toxicity of up to 6x. Furthermore, such role-playing also caused biased toxicity towards specific entities. Going beyond simply measuring social biases, <cit.> investigated the sources, underlying mechanisms, and corresponding ethical consequences of the biases potentially produced by ChatGPT. Beyond social biases, LLMs have also been assessed for political tendency and personality traits using questionnaires such as the Political Compass Test and the MBTI test <cit.>, demonstrating a propensity for progressive views and an ENFJ personality type. In addition, LLMs like GPT-3 were found to have moral biases <cit.> in terms of the Moral Foundations theory <cit.>.
The study conducted by <cit.> reveals that existing LMs have potential in ethical judgment, but still need improvement.
Moreover, in the assessment of GPT-4 alignment, <cit.> discovered a systematic bias.
ChatGPT was also observed to exhibit some bias regarding cultural values <cit.>.
<cit.> also incorporated an evaluation dataset specifically aimed at gauging stereotype bias, using both targeted and untargeted system prompts.
All these ethical issues might elicit serious risks, impeding the deployment of LLMs and having a profound negative impact on society.
§.§.§ Trustworthiness
Some work focuses on other trustworthiness problems in addition to robustness and ethics.[The term `trustworthiness' in this section refers to other work that contains more than robustness and ethics.]
In their 2023 study, DecodingTrust, <cit.> offered a multifaceted exploration of trustworthiness vulnerabilities in the GPT models, especially GPT-3.5 and GPT-4.
Their evaluation expanded beyond the typical trustworthiness concerns to include eight critical aspects: toxicity, stereotype bias, adversarial and out-of-distribution robustness, robustness to adversarial demonstrations, privacy, machine ethics, and fairness. DecodingTrust's investigation employs an array of newly constructed scenarios, tasks, and metrics.
They revealed that while GPT-4 often showcases improved trustworthiness over GPT-3.5 in standard evaluations, it is simultaneously more susceptible to attacks.
In another study by <cit.>, LLMs with enhanced cognitive abilities were evaluated.
They found that these models can avoid common human intuitions and cognitive errors, demonstrating super-rational performance. By utilizing cognitive reflection tests and semantic illusion experiments, the researchers gained insights into the psychological aspects of .
This method offers new perspectives for evaluating model biases and ethical issues that may not have been previously identified.
§.§ Social Science
Social science involves the study of human society and individual behavior, including economics, sociology, political science, law, and other disciplines.
Evaluating the performance of LLMs in social science is important for academic research, policy formulation, and social problem-solving.
Such evaluations can help improve the applicability and quality of models in the social sciences, increasing understanding of human societies and promoting social progress.
<cit.> evaluated the potential use of LLMs in addressing scaling and measurement issues in social science and found that LLMs could generate meaningful responses regarding political ideology and significantly improve text-as-data methods in social science.
In computational social science (CSS) tasks, <cit.> presented a comprehensive evaluation of LLMs on several CSS tasks.
In classification tasks, LLMs exhibit the lowest absolute performance on event argument extraction, character tropes, implicit hate, and empathy classification, achieving accuracy below 40%.
These tasks either involve complex structures (event arguments) or subjective expert taxonomies with semantics that differ from those learned during LLM pretraining.
Conversely, LLMs achieve the best performance on misinformation, stance, and emotion classification.
When it comes to generation tasks, LLMs often produce explanations that surpass the quality of gold references provided by crowdworkers.
In summary, while LLMs can greatly enhance the traditional CSS research pipeline, they cannot completely replace it.
Some articles also evaluate LLMs on legal tasks.
The zero-shot performance of LLMs is mediocre in legal case judgment summarization.
LLMs have several problems, including incomplete sentences and words, meaningless sentence merges, and more serious errors such as inconsistent and hallucinated information <cit.>.
The results show that further improvement is necessary for LLMs to be useful for case judgment summarization by legal experts.
<cit.> indicated that LLMs, particularly when combined with prompting enhancements and the correct legal texts, could perform better, but not yet at expert tax lawyer levels.
Lastly, within the realm of psychology,
<cit.> adopts an interdisciplinary approach and draws insights from developmental psychology and comparative psychology to explore alternative methods for evaluating the capabilities of large language models (LLMs). By integrating different perspectives, researchers can deepen their understanding of the essence of cognition and effectively leverage the potential of advanced technologies such as large language models, while mitigating potential risks.
In summary, although these models have shown excellent performance in various tasks, the existing models are primarily designed for single-task systems and lack sufficient expressive and interactive capabilities, which creates a gap between their capabilities and the practical clinical requirements.
While these models bring hope for interactive medical systems, they still face challenges such as generating erroneous outputs and illusions, making them currently unsuitable for direct application in real-world scenarios.
§.§ Natural Science and Engineering
Evaluating the performance of LLMs in natural science and engineering fields can help guide applications and development in scientific research, technology development, and engineering studies.
§.§.§ Mathematics
For fundamental mathematical problems, most large language models (LLMs) demonstrate proficiency in addition and subtraction, and possess some capability in multiplication. However, they face challenges when it comes to division, exponentiation, trigonometric functions, and logarithmic functions. On the other hand, LLMs exhibit competence in handling decimal numbers, negative numbers, and irrational numbers <cit.>.
In terms of performance, GPT-4 and ChatGPT outperform other models significantly, showcasing their superiority in solving mathematical tasks <cit.>.
These two models have a distinct advantage in dealing with large numbers (greater than 1e12) and complex, lengthy mathematical queries.
GPT-4 outperforms ChatGPT by achieving a significant increase in accuracy of 10 percentage points and a reduction in relative error by 50%, due to its superior division and trigonometry abilities, proper understanding of irrational numbers, and consistent step-by-step calculation of long expressions.
When confronted with complex and challenging mathematical problems, LLMs exhibit subpar performance.
Specifically, GPT-3 demonstrates nearly random performance, while GPT-3.5 shows improvement, and GPT-4 performs the best <cit.>.
Despite the advancements made in the new models, it is important to note that the peak performance remains relatively low compared to that of experts and these models lack the capability to engage in mathematical research <cit.>.
The specific tasks of algebraic manipulation and calculation continue to pose challenges for GPTs <cit.>.
The primary reasons behind GPT-4's low performance in these tasks are errors in algebraic manipulation and difficulties in retrieving pertinent domain-specific concepts.
<cit.> evaluated the use of GPT-4 on difficult high school competition problems and GPT-4 reached 60% accuracy on half of the categories.
Intermediate algebra and precalculus can only be solved with a low accuracy rate of around 20%.
ChatGPT is not good at answering questions on topics including derivatives and applications, Oxyz spatial calculus and spatial geometry <cit.>.
<cit.> showed that ChatGPT's performance worsens as task difficulty increases: it correctly answered 83% of the questions at the recognition level, 62% at the comprehension level, 27% at the application level, and only 10% at the highest cognitive complexity level.
Given those problems at higher knowledge levels tend to be more complex, requiring in-depth understanding and problem-solving skills, such results are to be expected.
These results suggest that LLMs' ability is easily affected by the complexity of the problems.
This has important implications for the design of artificial intelligence systems optimized for handling such challenging tasks.
§.§.§ General science
The application of LLMs in chemistry is still in its infancy.
<cit.> posed five simple tasks in different subareas of chemistry to evaluate ChatGPT's understanding of chemistry, with accuracy ranging from 25% to 100%.
<cit.> showed that LLMs perform worse on physics problems than on chemistry problems, probably because chemistry problems have lower inference complexity than physics problems in this setting.
There are few evaluation studies of LLMs in general science, and the existing results show that the performance of LLMs in this field still needs to be improved.
§.§.§ Engineering
In the field of engineering, the task from easy to difficult can be arranged as code generation, software engineering, and commonsense planning.
In code generation tasks, smaller LLMs trained for the task are competitive in performance, and CODEGEN-16B is comparable in performance to ChatGPT when using a larger parameter setting, reaching about a 78% match <cit.>.
Despite facing challenges in mastering and comprehending certain fundamental concepts in programming languages, ChatGPT showcases a commendable level of coding ability <cit.>.
Specifically, ChatGPT has developed superior skills in dynamic programming, greedy algorithms, and search, surpassing highly capable college students, but it struggles with data structures, trees, and graph theory.
GPT-4 exhibits an advanced ability to write code based on provided instructions and comprehend existing code <cit.>. Additionally, it can effectively reason about code execution, simulate the impact of instructions, articulate outcomes in natural language, and execute pseudocode.
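As a hedged aside, a common way to score code generation, though not necessarily the metric used in the works cited above, is the unbiased pass@k estimator of Chen et al. (2021): with n samples per problem, of which c pass the unit tests, pass@k = 1 - C(n-c, k)/C(n, k). A minimal implementation:

```python
# Unbiased pass@k estimator for code generation evaluation.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k sampled solutions passes the tests."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 20 samples per problem, 5 of which pass the unit tests.
print(f"pass@1  = {pass_at_k(20, 5, 1):.3f}")
print(f"pass@10 = {pass_at_k(20, 5, 10):.3f}")
```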
In software engineering tasks, ChatGPT usually performs credibly and the response from it is detailed and often better than the human expert output or the SOTA output.
However, in the case of a few other tasks like code vulnerability detection and information retrieval-based test prioritization, the current form of ChatGPT fails to deliver accurate answers, making it unsuitable for such tasks <cit.>.
In commonsense planning tasks, LLMs may not perform well, even on simple planning tasks that humans are good at <cit.>.
<cit.> demonstrated that the fine-tuned CodeT5 model performed best across all considered domains, with the least inference time.
Moreover, it explored whether LLMs are capable of plan generalization and found that their generalization capabilities seem limited.
It turns out that LLMs can handle simple engineering tasks but perform poorly on complex engineering tasks.
§.§ Medical Applications
The application of LLMs in the medical field has recently gained significant attention.
In this section, we review existing efforts to apply LLMs to medical applications.
Specifically, we categorized them into four aspects as shown in <ref>: medical QA, medical examination, medical assessment, and medical education.
§.§.§ Medical QA
<ref> illustrates that in medical applications, most evaluations of LLMs are in medical question answering.
This trend can be attributed to the extensive utilization and demand for precise and trustworthy answers in the medical field.
Several studies have been conducted to evaluate the performance of ChatGPT in medical QA, demonstrating its abilities relative to human respondents <cit.>, in QA with bariatric surgery patients <cit.>, for medical physicists <cit.>, in biomedical applications <cit.>, and in many other QA situations <cit.>.
As for the limitations, <cit.> assess its performance in primary care and find that ChatGPT's average score in the student comprehensive assessment falls below the passing score, indicating room for improvement.
<cit.> highlight that while ChatGPT can generate responses similar to existing sources in fertility-related clinical prompts, its limitations in reliably citing sources and potential for fabricating information restrict its clinical utility.
§.§.§ Medical examination
<cit.> evaluate the performance of LLMs in medical exam assessment to explore their potential applications in the USMLE [<https://www.usmle.org/>].
In <cit.>, ChatGPT's performance in answering USMLE Step 1 and Step 2 exam questions was assessed using novel multiple-choice question sets. The results indicated that ChatGPT achieved varying accuracies across different datasets. However, the presence of out-of-context information was found to be lower compared to the correct answer in the NBME-Free-Step1 and NBME-Free-Step2 datasets.
<cit.> showed that ChatGPT achieved or approached the passing threshold in these exams with no tailored training.
The model demonstrated high consistency and insight, indicating its potential to assist in medical education and clinical decision-making. ChatGPT can be used as a tool to answer medical questions, provide explanations, and support decision-making processes. This offers additional resources and support for medical students and clinicians in their educational and clinical practices.
Moreover, <cit.> found that answers generated by ChatGPT are more context-aware with better deductive reasoning abilities compared to Google search results.
§.§.§ Medical education
Several studies have evaluated the performance and feasibility of ChatGPT in the medical education field.
In the study by <cit.>, ChatGPT, specifically GPT-3.5 and GPT-4 models, were evaluated in terms of their understanding of surgical clinical information and their potential impact on surgical education and training. The results indicate an overall accuracy of 46.8% for GPT-3.5 and 76.4% for GPT-4, demonstrating a significant performance difference between the two models. Notably, GPT-4 consistently performs well across different subspecialties, suggesting its capability to comprehend complex clinical information and enhance surgical education and training.
Another study by <cit.> explores the feasibility of utilizing ChatGPT in clinical education, particularly in translating radiology reports into easily understandable language.
The findings demonstrate that ChatGPT effectively translates radiology reports into accessible language and provides general recommendations.
Furthermore, the quality of ChatGPT has shown improvement compared to GPT-4.
These findings suggest that employing large-scale language models in clinical education is feasible, although further efforts are needed to address limitations and unlock their full potential.
§.§.§ Medical assistants
In the field of medical assistance, LLMs demonstrate potential applications, including research on identifying gastrointestinal diseases <cit.>, dementia diagnosis <cit.>, and accelerating the evaluation of COVID-19 literature <cit.>. However, there are also limitations and challenges, such as lack of originality, high input requirements, resource constraints, and uncertainty in answers.
§.§ Agent Applications
Instead of focusing solely on general language tasks, LLMs can be utilized as powerful tools in various domains. Equipping LLMs with external tools can greatly expand the capabilities of the model.
<cit.> introduce KOSMOS-1, which is capable of understanding general patterns, following instructions, and learning based on context.
<cit.> emphasize that knowing when and how to use these external symbolic tools is crucial, and this knowledge is determined by the LLMs' capabilities, especially when these tools can function reliably.
In addition, two other studies,
Toolformer <cit.> and TALM <cit.>, explore the utilization of tools to enhance language models. Toolformer employs a training approach to determine the optimal usage of specific APIs and integrates the obtained results into subsequent token predictions. On the other hand, TALM combines non-differentiable tools with text-based methods to augment language models and employs an iterative technique known as "self-play," guided by minimal tool demonstrations.
<cit.> propose the HuggingGPT framework, which leverages LLMs to connect various artificial intelligence models within the machine learning community (such as Hugging Face), aiming to address artificial intelligence tasks.
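To make the control flow behind such tool-augmented LLMs concrete, the following minimal Python sketch routes a query either to a calculator tool or to the model itself. The `query_model` function is a hypothetical placeholder for any LLM API, and the routing heuristic is deliberately naive; none of this code is taken from Toolformer, TALM, or HuggingGPT.

```python
import re

def query_model(prompt: str) -> str:
    # Hypothetical stand-in for an LLM API call (e.g., a chat-completion endpoint).
    raise NotImplementedError("plug in your own model client here")

def calculator(expression: str) -> str:
    # A trivial "external tool": evaluate a purely arithmetic expression.
    if not re.fullmatch(r"[\d\s\.\+\-\*/\(\)]+", expression):
        raise ValueError("unsupported expression")
    return str(eval(expression))  # character filter rules out names and function calls

def answer(query: str) -> str:
    # Route arithmetic-looking spans to the tool; fall back to the model otherwise.
    match = re.search(r"[\d\s\.\+\-\*/\(\)]{3,}", query)
    if match and any(op in match.group(0) for op in "+-*/"):
        return calculator(match.group(0).strip())
    return query_model(query)

# answer("What is 12 * (3 + 4)?") returns "84" without ever calling the model.
```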
§.§ Other Applications
In addition to the categories mentioned above, there have been evaluations of LLMs in various other domains, including education, search and recommendation, personality testing, and specific applications.
§.§.§ Education
LLMs have shown promise in revolutionizing the field of education. They have the potential to contribute significantly to several areas, such as assisting students in improving their writing skills, facilitating better comprehension of complex concepts, expediting the delivery of information, and providing personalized feedback to enhance student engagement. These applications aim to create more efficient and interactive learning experiences, offering students a wider range of educational opportunities. However, to fully harness the potential of LLMs in education, extensive research and ongoing refinement are necessary.
The evaluation of LLMs for educational assistance aims to investigate and assess their potential contributions to the field of education. Such evaluations can be conducted from various perspectives.
According to <cit.>, ChatGPT demonstrates the ability to generate detailed, fluent, and coherent feedback that surpasses that of human teachers. It can accurately assess student assignments and provide feedback on task completion, thereby assisting in the development of student skills.
However, as mentioned by <cit.>, ChatGPT's responses may lack novelty or insightful perspectives regarding teaching improvement.
Additionally, the study conducted by <cit.> revealed that LLMs can successfully identify at least one actual problem in student code, although instances of misjudgment were also observed.
In conclusion, the utilization of LLMs shows promise in addressing program logic issues, although challenges remain in achieving proficiency in output formatting. It is important to note that while these models can provide valuable insights, they may still generate errors similar to those made by students.
In educational testing, researchers aim to evaluate the effectiveness of applying LLMs, including automatic scoring, question generation, and learning guidance.
<cit.> showed that ChatGPT achieved an average of 71.8% correctness, which is comparable to the average score of all participating students.
Subsequently, the evaluation was conducted using GPT-4, and it achieved a score of 8.33. Furthermore, this evaluation showed the effectiveness of leveraging bootstrapping that combines randomness via the “temperature” parameter in diagnosing incorrect answers.
<cit.> claimed that GPT-3.5 can solve MIT math and EECS exams, with GPT-4 achieving better performance.
However, the evaluation turned out to be unfair, since the correct answers were accidentally included in the prompts.
§.§.§ Search and recommendation
The assessment of LLMs in search and recommendation can be broadly categorized into two areas:
In the realm of information retrieval, <cit.> investigate the effectiveness of generative ranking algorithms, such as ChatGPT and GPT-4, for information retrieval tasks. Experimental results demonstrate that guided ChatGPT and GPT-4 exhibit competitive performance on popular benchmark tests, even outperforming supervised methods. Additionally, the extraction of ChatGPT's ranking functionality into a specialized model shows superior performance when trained on 10K ChatGPT-generated data compared to training on 400K annotated MS MARCO data in the BEIR dataset <cit.>.
Furthermore, <cit.> conducted a randomized online experiment to investigate the behavioral differences of users when performing information retrieval tasks using search engine and chatbot tools.
Participants were divided into two groups: one using tools similar to ChatGPT and the other using tools similar to Google Search. The results show that the ChatGPT group spent less time on all tasks and the difference between these two groups is not significant.
Moving to the domain of recommendation systems,
LLMs have emerged as essential components that leverage their natural language processing capabilities to comprehend user preferences, item descriptions, and contextual information <cit.>. By incorporating LLMs into recommendation pipelines, these systems can offer more accurate and personalized recommendations, thereby improving user experience and overall recommendation quality.
However, it is crucial to address the potential risks associated with using LLMs for recommendations. Recent research by <cit.> has highlighted the issue of unfair recommendations generated by ChatGPT. This emphasizes the importance of evaluating fairness when employing LLMs in recommendation scenarios.
<cit.> reveals that ChatGPT exhibits strong performance in recommender systems. The use of listwise ranking is found to strike the best balance between cost and performance. Furthermore, ChatGPT shows promise in addressing the cold-start problem and providing interpretable recommendations.
§.§.§ Personality testing
Personality testing aims to measure individuals' personality traits and behavioral tendencies, and LLMs, as powerful natural language processing models, have been widely applied to such tasks.
Research conducted by <cit.> investigated the personality features of LLMs, using Davinci-003 as a chatbot, and found variations in the consistency of its answers, despite exhibiting prosocial characteristics.
However, there remains uncertainty regarding whether the chatbot's responses are driven by conscious self-reflection or algorithmic processes.
<cit.> examined the manifestation of personality in language models and discovered that many models performed unreliably in self-assessment tests and exhibited inherent biases.
Therefore, it is necessary to develop specific machine personality measurement tools to enhance reliability.
These studies offer vital insights to better understand LLMs in personality testing.
<cit.> proposed a comprehensive approach to conducting effective psychometric testing for the personality traits reflected in the text generated by LLMs.
<cit.> discussed the challenges of incorporating humor into LLMs, particularly ChatGPT.
They found that while ChatGPT demonstrates impressive capabilities in NLP tasks, it falls short in generating humorous responses.
This study emphasizes the importance of humor in human communication and the difficulties that LLMs face in capturing the subtleties and context-dependent nature of humor.
It discusses the limitations of current approaches and highlights the need for further research to develop more sophisticated models that can effectively understand and generate humor.
§.§.§ Specific applications
Furthermore, several studies have investigated the application and evaluation of large language models across diverse tasks, such as game design <cit.>, model performance assessment <cit.>, and log parsing <cit.>.
Collectively, these findings enhance our comprehension of the practical implications associated with the utilization of large language models across diverse tasks. They shed light on the potential and limitations of these models while providing valuable insights for performance improvement.
§ WHERE TO EVALUATE: DATASETS AND BENCHMARKS
LLM evaluation datasets are used to test and compare the performance of different language models on various tasks, as depicted in Sec. <ref>.
These datasets, such as GLUE <cit.> and SuperGLUE <cit.>, aim to simulate real-world language processing scenarios and cover diverse tasks such as text classification, machine translation, reading comprehension, and dialogue generation.
This section will not discuss any single dataset for language models, but rather benchmarks for LLMs.
As benchmarks for LLMs are evolving, a variety of benchmarks have emerged to evaluate their performance.
In this study, we compile a selection of 26 popular benchmarks, as shown in <ref>.[Note that as the evaluation of is a hot research area, it is very likely that we cannot cover all benchmarks. We welcome suggestions and comments to make this list perfect.]
Each benchmark focuses on different aspects and evaluation criteria, providing valuable contributions to their respective domains.
For a better summarization, we divide these benchmarks into two categories: benchmarks for general language tasks and benchmarks for specific downstream tasks.
§.§ Benchmarks for General Tasks
LLMs are designed to solve a vast range of tasks.
To this end, existing benchmarks tend to evaluate their performance on different tasks.
Chatbot Arena <cit.> and MT-Bench <cit.> are two significant benchmarks that contribute to the evaluation and advancement of chatbot models and LLMs in different contexts.
Chatbot Arena provides a platform to assess and compare diverse chatbot models through user engagement and voting.
Users can engage with anonymous models and express their preferences via voting.
The platform gathers a significant volume of votes, facilitating the evaluation of models' performance in realistic scenarios.
Chatbot Arena provides valuable insights into the strengths and limitations of chatbot models, thereby contributing to the progress of chatbot research and advancement.
Meanwhile, MT-Bench evaluates LLMs on multi-turn dialogues using comprehensive questions tailored to handling conversations.
It provides a comprehensive set of questions specifically designed for assessing the capabilities of models in handling multi-turn dialogues.
MT-Bench possesses several distinguishing features that differentiate it from conventional evaluation methodologies.
Notably, it excels in simulating dialogue scenarios representative of real-world settings, thereby facilitating a more precise evaluation of a model's practical performance.
Moreover, MT-Bench effectively overcomes the limitations in traditional evaluation approaches, particularly in gauging a model's competence in handling intricate multi-turn dialogue inquiries.
Instead of focusing on specific tasks and evaluation metrics, HELM <cit.> provides a comprehensive assessment of LLMs. It evaluates language models across various aspects such as language understanding, generation, coherence, context sensitivity, common-sense reasoning, and domain-specific knowledge. HELM aims to holistically evaluate the performance of language models across different tasks and domains.
Furthermore, Xiezhi <cit.> provides a comprehensive suite for assessing the knowledge level of large-scale language models in different subject areas.
The evaluation conducted through Xiezhi enables researchers to comprehend the notable limitations inherent in these models and facilitates a deeper comprehension of their capabilities in diverse fields.
Big-Bench <cit.> introduces a diverse collection of 204 challenging tasks contributed by 450 authors from 132 institutions.
These tasks cover various domains such as math, childhood development, linguistics, biology, common-sense reasoning, social bias, physics, software development, etc.
The primary objective of Big-Bench is to evaluate tasks that go beyond the capabilities of existing language models.
Moreover, MME <cit.> serves as an extensive evaluative benchmark specifically designed for multimodal large language models (MLLM), aiming to assess their perceptual and cognitive aptitudes. MME employs meticulously crafted instruction-answer pairs alongside succinct instruction design, thereby guaranteeing equitable evaluation conditions.
KoLA <cit.>, a Knowledge-Oriented Evaluation Benchmark, is specially designed to evaluate the language understanding and reasoning abilities of LLMs. It emphasizes the comprehension and utilization of semantic knowledge and inference. KoLA serves as a crucial platform for researchers to assess the depth of LLMs' understanding and reasoning, thereby propelling progress in language comprehension models.
To allow for crowd-sourcing evaluations in language tasks, DynaBench <cit.> is designed for conducting dynamic benchmark testing. It explores exciting new research directions, such as the impact of integration within a loop, characteristics of distributional shifts, exploring annotator efficiency, studying the influence of expert annotators, and enhancing model robustness against targeted adversarial attacks in interactive environments. Additionally, it contributes to advancing research on dynamic data collection and conducting cross-task analysis in the domain of general human-computer interaction.
The main goal of MMLU <cit.> is to develop a comprehensive test for evaluating the performance of text models in multi-task contexts.
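To make concrete how such a multi-task, multiple-choice benchmark is typically consumed in an automatic harness, the hedged sketch below loads one MMLU subject with the Hugging Face `datasets` library and computes accuracy. The hub identifier `cais/mmlu`, the field names, and the `predict_letter` wrapper are assumptions for illustration rather than part of any official evaluation code.

```python
from datasets import load_dataset  # pip install datasets

def predict_letter(question: str, choices: list[str]) -> str:
    # Hypothetical model wrapper: should return one of "A", "B", "C", "D".
    raise NotImplementedError("plug in your own model here")

def mmlu_subject_accuracy(subject: str = "abstract_algebra") -> float:
    # Assumed dataset id and fields: "question", "choices", and "answer" (gold index).
    data = load_dataset("cais/mmlu", subject, split="test")
    letters = "ABCD"
    correct = sum(
        predict_letter(row["question"], row["choices"]) == letters[row["answer"]]
        for row in data
    )
    return correct / len(data)
```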
Additionally, AlpacaEval <cit.> stands as an automated evaluation benchmark, which places its focus on assessing the performance of LLMs across various natural language processing tasks. It provides a range of metrics, robustness measures, and diversity evaluations to gauge the capabilities of LLMs. AlpacaEval has significantly contributed to advancing LLMs in diverse domains and promoting a deeper understanding of their performance.
Furthermore, AGIEval, <cit.>, serves as a dedicated evaluation framework for assessing the performance of foundation models in the domain of human-centric standardized exams.
Moreover, OpenLLM <cit.> functions as an evaluation benchmark by offering a public competition platform for comparing and assessing different LLM models' performance on various tasks. It encourages researchers to submit their models and compete on different tasks, driving progress and competition in the field of LLM research.
As for tasks beyond standard performance, there are benchmarks designed for OOD, adversarial robustness, and fine-tuning.
GLUE-X <cit.> is a novel attempt to create a unified benchmark aimed at evaluating the robustness of NLP models in OOD scenarios. This benchmark emphasizes the significance of robustness in NLP and provides insights into measuring and enhancing the robustness of models.
PromptBench <cit.> centers on the importance of prompt engineering in fine-tuning LLMs. It provides a standardized evaluation framework to compare different prompt engineering techniques and assess their impact on model performance. PromptBench facilitates the enhancement and optimization of fine-tuning methods for LLMs.
To ensure impartial and equitable evaluation, PandaLM <cit.> is introduced as a discriminative large-scale language model specifically designed to differentiate among multiple high-proficiency LLMs through training. In contrast to conventional evaluation datasets that predominantly emphasize objective correctness, PandaLM incorporates crucial subjective elements, including relative conciseness, clarity, adherence to instructions, comprehensiveness, and formality.
§.§ Benchmarks for Specific Downstream Tasks
Other than benchmarks for general tasks, there exist benchmarks specifically designed for certain downstream tasks.
MultiMedQA <cit.> is a medical QA benchmark that focuses on medical examinations, medical research, and consumer healthcare questions. It consists of seven datasets related to medical QA, including six existing datasets and one new dataset. The goal of this benchmark is to evaluate the performance of LLMs in terms of clinical knowledge and QA abilities.
Other specific benchmarks include C-Eval <cit.>, the first extensive benchmark to assess the advanced knowledge and reasoning capabilities of foundation models in Chinese.
M3Exam <cit.> provides a unique and comprehensive evaluation framework that incorporates multiple languages, modalities, and levels to test the general capabilities of LLMs in diverse contexts.
Additionally, GAOKAO-Bench <cit.> provides a comprehensive evaluation benchmark for gauging the proficiency of large language models in intricate and context-specific tasks, utilizing questions sourced from the Chinese Gaokao examination.
On the other hand, SOCKET <cit.> serves as an NLP benchmark designed to evaluate the performance of LLMs in learning and recognizing social knowledge concepts.
It consists of several tasks and case studies to assess the limitations of LLMs in social capabilities.
MATH <cit.> concentrates on assessing reasoning and problem-solving proficiencies of AI models within the domain of mathematics.
APPS <cit.> is a more comprehensive and rigorous benchmark for evaluating code generation, measuring the ability of language models to generate python code according to natural language specifications.
CUAD <cit.> is an expert-annotated, domain-specific legal contract review dataset that presents a challenging research benchmark and potential for enhancing deep learning models' performance in contract understanding tasks.
CVALUES <cit.> introduces a humanistic evaluation benchmark to assess the alignment of LLMs with safety and responsibility standards.
In addition to existing evaluation benchmarks, there is a research gap in assessing the effectiveness of tool utilization by LLMs. To address this gap,
the API-Bank benchmark <cit.> is introduced as the first benchmark explicitly designed for tool-augmented LLMs. It comprises a comprehensive Tool-Augmented LLM workflow, encompassing 53 commonly used API tools and 264 annotated dialogues, encompassing a total of 568 API calls.
Furthermore, the ToolBench project <cit.> aims to empower the development of large language models that effectively leverage the capabilities of general-purpose tools. By providing a platform for creating optimized instruction datasets, the ToolBench project seeks to drive progress in language models and enhance their practical applications.
§ HOW TO EVALUATE
In this section, we introduce two common evaluation methods: automatic evaluation and human evaluation.
In fact, the taxonomy of “how to evaluate” is also not definite.
Our categorization is based on whether or not the evaluation criterion can be automatically computed.
If it can be automatically calculated, we categorize it into automatic evaluation; otherwise, it falls into human evaluation.
§.§ Automatic Evaluation
Automated evaluation of LLMs is a common and perhaps the most popular evaluation method; it usually uses standard metrics or indicators and evaluation tools to assess the performance of models, such as accuracy, BLEU <cit.>, ROUGE <cit.>, and BERTScore <cit.>, to name a few.
For instance, we can use the BLEU score to quantify the similarity and quality between the model-generated text and the reference text in a machine translation task.
In fact, most of the existing evaluation efforts adopt this protocol due to its objectivity, automatic computation, and simplicity.
Thus, most deterministic tasks, such as natural language understanding and math problems, often adopt this evaluation protocol.
Compared with human evaluation, automatic evaluation does not require intensive human participation, which saves costs and time.
For example, both <cit.> and <cit.> use automated evaluation methods to evaluate a large number of tasks.
Recently, with the development of LLMs, some advanced automatic evaluation techniques have also been designed to assist evaluation.
<cit.> proposed LLM-EVAL, a unified multidimensional automatic evaluation method for open-domain conversations with LLMs.
PandaLM <cit.> can achieve reproducible and automated language model assessment by training an LLM that serves as the “judge” to evaluate different models.
Proposing a self-supervised evaluation framework, <cit.> enabled a more efficient form of evaluating models in real-world deployment settings by eliminating the need for laborious labeling of new data.
Due to the large volume of automatic evaluation papers, we will not introduce them in detail.
The principle of automatic evaluation is in fact the same as in other AI model evaluation processes: we use standard metrics to compute certain values under these metrics, which serve as indicators of model performance.
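As a small illustration of this metric-based protocol, the snippet below computes a sentence-level BLEU score with NLTK; it is a generic sketch of how such a metric is applied, not the exact setup used in any of the cited evaluations.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ["the", "cat", "sits", "on", "the", "mat"]            # tokenized gold text
hypothesis = ["the", "cat", "is", "sitting", "on", "the", "mat"]  # tokenized model output

# sentence_bleu expects a list of references; smoothing avoids zero scores
# when higher-order n-grams have no overlap with the reference.
score = sentence_bleu([reference], hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU = {score:.3f}")  # a single automatic, reproducible number per example
```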
§.§ Human Evaluation
The increasingly strengthened capabilities of LLMs have certainly gone beyond standard evaluation metrics on general natural language tasks.
Therefore, human evaluation becomes a natural choice in some non-standard cases where automatic evaluation is not suitable.
For instance, in open generation tasks where embedded similarity metrics (such as BERTScore) are not enough, human evaluation is more reliable <cit.>.
While some generation tasks can adopt certain automatic evaluation protocols, human evaluation in these tasks is more favorable as generation can always go better than standard answers.
Human evaluation of LLMs is a way to evaluate the quality and accuracy of model-generated results through human participation.
Compared with automatic evaluation, manual evaluation is closer to the actual application scenario and can provide more comprehensive and accurate feedback.
In the manual evaluation of LLMs, evaluators (such as experts, researchers, or ordinary users) are usually invited to evaluate the results generated by the model. For example, <cit.> used annotations from experts for generation tasks.
By human evaluation, <cit.> performed human evaluation on summarization and disinformation scenarios on 6 models and <cit.> evaluated analogical reasoning tasks.
The seminal evaluation work by <cit.> did a series of human-crafted tests using GPT-4 and they found that GPT-4 performs close to or even exceeds human performance on multiple tasks.
This evaluation requires human evaluators to actually test and compare the performance of the models, not just evaluate the models through automated evaluation metrics.
Note that even human evaluations can have high variance and instability, which could be due to cultural and individual differences <cit.>.
In practical applications, these two evaluation methods are considered and weighed in combination with the actual situation.
§ SUMMARY
In this section, we summarize the key findings based on our review in sections <ref>, <ref>, and <ref>.
First of all, we would like to highlight that, despite all the efforts spent on summarizing existing works on evaluation, there is no evidence to show that one particular evaluation protocol or benchmark is the most useful and successful; rather, each has different characteristics and focuses.
This also demonstrates that not a single model can perform best in all kinds of tasks.
The purpose of this survey is to go beyond simply determining the “best” benchmark or evaluation protocol.
By summarizing and analyzing existing efforts on evaluation, we may identify the current success and failure cases of LLMs, derive new trends for evaluation protocols, and, most importantly, propose new challenges and opportunities for future research.
§.§ Task: Success and Failure Cases of LLMs
We now summarize the success and failure cases of LLMs in different tasks.
Note that all the following conclusions are made based on existing evaluation efforts and the results are only dependent on specific datasets.
§.§.§ What can LLMs do well?
* LLMs demonstrate proficiency in generating text, producing fluent and precise linguistic expressions.
* LLMs obtain impressive performance in tasks involving language understanding, such as sentiment analysis and text classification.
* LLMs exhibit robust contextual comprehension, enabling them to generate coherent responses that align with the given input.
* LLMs achieve satisfying performance across several natural language processing tasks, including machine translation, text generation, and question answering.
§.§.§ When can LLMs fail?
* LLMs may exhibit biases and inaccuracies during the generation process, resulting in biased outputs.
* LLMs have limited abilities in comprehending complex logic and reasoning tasks, often becoming confused or making errors in intricate contexts.
* LLMs face constraints in handling extensive datasets and long-term memory, which can pose challenges in processing lengthy texts and tasks involving long-term dependencies.
* LLMs have limitations in incorporating real-time or dynamic information, making them less suitable for tasks that require up-to-date knowledge or rapid adaptation to changing contexts.
* LLMs are sensitive to prompts, especially adversarial prompts, which motivates new evaluations and algorithms to improve their robustness.
* In the domain of text summarization, LLMs might demonstrate subpar performance on particular evaluation metrics, which can potentially be attributed to inherent limitations or inadequacies within those specific metrics.
* LLMs do not achieve satisfying performance in counterfactual tasks.
§.§ Benchmark and Evaluation Protocol
With the rapid development and widespread use of LLMs, evaluating them in practical applications and research has become crucial. This evaluation process should include not only task-level evaluation but also a deep understanding of the potential risks they pose from a societal perspective.
In this section, we summarize existing benchmark and evaluation protocols in <ref>.
First, there is a shift from objective calculation to human-in-the-loop testing, allowing for greater human feedback during the evaluation process.
AdaVision <cit.>, an interactive process for testing vision models, enables users to label a small amount of data for model correctness, which helps users identify and fix coherent failure modes.
In AdaTest <cit.>, the user filters test samples by only selecting high quality tests and organizing them into semantically related topics.
Second, a move from static to crowd-sourcing test sets is becoming more common.
Tools like DynaBench <cit.>, DynaBoard <cit.>, and DynaTask <cit.> rely on crowdworkers to create and test hard samples.
Additionally, DynamicTempLAMA <cit.> allows for dynamically constructed time-related tests.
Third, there is a shift from unified to challenging settings in evaluating machine learning models.
While unified settings involve a test set with no preference for any specific task, challenging settings create test sets for specific tasks.
Tools like DeepTest <cit.> use seeds to generate input transformations for testing, CheckList <cit.> builds test sets based on templates, and AdaFilter <cit.> adversarially constructs tests.
However, it is worth noting that AdaFilter may not be entirely fair as it relies on adversarial examples.
HELM <cit.> evaluates LLMs from different aspects, while the Big-Bench <cit.> platform is used to design hard tasks for machine learning models to tackle.
PromptBench <cit.> aims to evaluate the adversarial robustness of LLMs by creating adversarial prompts, which is more challenging; the results demonstrate that current LLMs are not robust to adversarial prompts.
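The template-driven idea behind tools like CheckList can be sketched in a few lines: fill slots in a template to mass-produce test cases with known expected labels, then report the model's failure rate. The `predict_sentiment` function below is a hypothetical stand-in for the system under test, and the templates are illustrative only; this is not code from the CheckList toolkit itself.

```python
from itertools import product

def predict_sentiment(text: str) -> str:
    # Hypothetical model under test: returns "positive" or "negative".
    raise NotImplementedError

TEMPLATE = "The {noun} was {adj}."
NOUNS = ["movie", "flight", "service"]
NEGATIVE_ADJS = ["terrible", "disappointing", "awful"]

def failure_rate() -> float:
    # Every generated case is negative by construction, so the expected label is known.
    cases = [TEMPLATE.format(noun=n, adj=a) for n, a in product(NOUNS, NEGATIVE_ADJS)]
    failures = sum(predict_sentiment(c) != "negative" for c in cases)
    return failures / len(cases)
```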
§ GRAND CHALLENGES AND OPPORTUNITIES FOR FUTURE RESEARCH
Evaluation as a new discipline:
Our summarization inspires us to redesign a wide spectrum of aspects related to evaluation in the era of LLMs.
In this section, we present several grand challenges.
Our key point is that evaluation should be treated as an essential discipline to drive the success of LLMs and other AI models.
Existing protocols are not enough to thoroughly evaluate the true capabilities of LLMs, which poses grand challenges and triggers new opportunities for future research on evaluation.
§.§ Designing AGI Benchmarks
As we discussed earlier, while all tasks can potentially serve as evaluation tools for LLMs, the question remains as to which can truly measure AGI capabilities.
As we expect LLMs to demonstrate AGI abilities, a comprehensive understanding of the differences between human and AGI capacities becomes crucial in the creation of AGI benchmarks.
The prevailing trend seems to conceptualize AGI as a superhuman entity, thereby utilizing cross-disciplinary knowledge from fields such as education, psychology, and social sciences to design innovative benchmarks.
Nonetheless, there remains a plethora of unresolved issues. For instance, does it make sense to use human values as a starting point for test construction, or should alternative perspectives be considered?
The process of developing suitable AGI benchmarks presents many open questions demanding further exploration.
§.§ Complete Behavioral Evaluation
An ideal AGI evaluation should contain not only standard benchmarks on common tasks, but also evaluations on open tasks such as complete behavioral tests.
By behavioral test, we mean that AGI models should also be evaluated in an open environment.
For instance, by treating an LLM as the central controller, we can construct evaluations of a robot manipulated by the LLM to test its behaviors in real situations.
By treating an LLM as a complete intelligent machine, evaluations of its multi-modal dimensions should also be considered.
In fact, complete behavioral evaluations are complementary to standard AGI benchmarks and they should work together for better testing.
§.§ Robustness Evaluation
Beyond general tasks, it is crucial for LLMs to maintain robustness against a wide variety of inputs in order to perform optimally for end-users, given their extensive integration into daily life.
For instance, the same prompts phrased with different grammars and expressions can lead ChatGPT and other LLMs to generate diverse results, indicating that current LLMs are not robust to their inputs.
While there is some prior work on robustness evaluation <cit.>, there is much room for advancement, such as including more diverse evaluation sets, examining more evaluation aspects, and developing more efficient ways to generate robustness tests.
Concurrently, the concept and definition of robustness are constantly evolving. It is thus vital to consider updating the evaluation system to better align with emerging requirements related to ethics and bias.
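A very small version of such a robustness check is sketched below: surface-level rewrites that should not change the correct answer are applied to a prompt, and the agreement among the model's answers is measured. The `query_model` function is a hypothetical placeholder for whichever LLM is being evaluated, and these perturbations are far simpler than those used by dedicated tools such as PromptBench.

```python
from collections import Counter

def query_model(prompt: str) -> str:
    # Hypothetical stand-in for an LLM API call.
    raise NotImplementedError

def perturb(prompt: str) -> list[str]:
    # Meaning-preserving surface rewrites of the same request.
    return [
        prompt,
        prompt.lower(),
        prompt.replace("Please ", ""),
        "Kindly answer the following. " + prompt,
    ]

def consistency(prompt: str) -> float:
    # Fraction of perturbed prompts whose answer matches the majority answer.
    answers = [query_model(p).strip() for p in perturb(prompt)]
    majority_count = Counter(answers).most_common(1)[0][1]
    return majority_count / len(answers)
```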
§.§ Dynamic and Evolving Evaluation
Existing evaluation protocols for most AI tasks rely on static and public benchmarks, i.e., the evaluation datasets and protocols are often publicly available.
While this facilitates rapid and convenient evaluation within the community, it is unable to accurately assess the evolving abilities of LLMs, given their rapid rate of development.
The capabilities of LLMs may improve over time, which cannot be consistently evaluated by existing static benchmarks.
On the other hand, as LLMs grow increasingly powerful with larger model sizes and training set sizes, static and public benchmarks are likely to be memorized by LLMs, resulting in potential training data contamination.
Therefore, developing dynamic and evolving evaluation systems is the key to providing a fair evaluation of LLMs.
§.§ Principled and Trustworthy Evaluation
When introducing an evaluation system, it is crucial to ascertain its integrity and trustworthiness.
Therefore, the necessity for trustworthy computing extends to the requirement for reliable evaluation systems as well.
This poses a challenging research question that intertwines with measurement theory, probability, and numerous other domains.
For instance, how can we ensure that dynamic testing truly generates out-of-distribution examples?
There is a scarcity of research in this domain, and it is hoped that future work will aim to scrutinize not only the algorithms but the evaluation system itself.
§.§ Unified Evaluation that Supports All Tasks
There are many other research areas of LLMs, and we need to develop evaluation systems that can support all kinds of tasks, such as value alignment, safety, verification, interdisciplinary research, fine-tuning, and others.
For instance, PandaLM <cit.> is an evaluation system that assists fine-tuning by providing an open-source evaluation model, which can automatically assess the performance of fine-tuning.
We expect that more evaluation systems are becoming more general and can be used as assistance in certain tasks.
§.§ Beyond Evaluation: Enhancement
Ultimately, evaluation is not the end goal but rather the starting point.
Following the evaluation, there are undoubtedly conclusions to be drawn regarding performance, robustness, stability, and other factors.
A proficient evaluation system should not only offer benchmark results but should also deliver an insightful analysis, recommendations, and guidance for future research and development.
For instance, PromptBench <cit.> provides not only robustness evaluation results on adversarial prompts but also a comprehensive analysis through attention visualization, elucidating how adversarial texts can result in erroneous responses.
The system further offers a word frequency analysis to identify robust and non-robust words in the test sets, thus providing prompt engineering guidance for end users.
Subsequent research can leverage these findings to enhance LLMs.
Another example is that <cit.> first explored the performance of large vision-language models on imbalanced (long-tailed) tasks, which demonstrates the limitation of current large models.
Then, they explored different methodologies to enhance the performance on these tasks.
In summary, enhancement after evaluation helps to build better LLMs, and much can be done in the future.
§ CONCLUSION
Evaluation carries profound significance, becoming imperative in the advancement of AI models, especially within the context of large language models.
This paper presents the first survey to give a comprehensive overview of the evaluation of LLMs from three aspects: what to evaluate, how to evaluate, and where to evaluate.
By encapsulating evaluation tasks, protocols, and benchmarks, our aim is to augment understanding of the current status of LLMs, elucidate their strengths and limitations, and furnish insights for future progress.
Our survey reveals that current LLMs exhibit certain limitations in numerous tasks, notably reasoning and robustness tasks.
Concurrently, the need for contemporary evaluation systems to adapt and evolve remains evident, ensuring the accurate assessment of LLMs' inherent capabilities and limitations.
We identify several grand challenges that future research should address, with the aspiration that LLMs can progressively enhance their service to humanity.
§ DISCLAIMER
The goal of this paper is mainly to summarize and discuss existing evaluation efforts on large language models.
Results and conclusions in each paper are original contributions of their corresponding authors, particularly for potential issues in ethics and biases.
This paper may discuss some side effects of LLMs, and the only intention is to foster a better understanding of large language models.
Additionally, due to the evolution of LLMs, especially online services such as Claude and ChatGPT, it is very likely that they will become stronger and that some of the limitations described in this paper will be mitigated (while new limitations may arise).
We encourage interested readers to take this survey as a reference for future research and conduct real experiments in current systems when performing evaluations.
Finally, the evaluation of LLMs is continuously developing, so we may have missed some new papers or benchmarks.
We welcome all constructive feedback and suggestions to help make this survey better.
|
http://arxiv.org/abs/2307.03195v1
|
20230703075320
|
A Comprehensive Survey of Artificial Intelligence Techniques for Talent Analytics
|
[
"Chuan Qin",
"Le Zhang",
"Rui Zha",
"Dazhong Shen",
"Qi Zhang",
"Ying Sun",
"Chen Zhu",
"Hengshu Zhu",
"Hui Xiong"
] |
cs.CY
|
[
"cs.CY",
"cs.AI"
] |
In today’s competitive and fast-evolving business environment, it is a critical time for organizations to rethink how to make talent-related decisions in a quantitative manner. Indeed, the recent development of Big Data and Artificial Intelligence (AI) techniques has revolutionized human resource management. The availability of large-scale talent and management-related data provides unparalleled opportunities for business leaders to comprehend organizational behaviors and gain tangible knowledge from a data science perspective, which in turn delivers intelligence for real-time decision-making and effective talent management at work for their organizations. In the last decade, talent analytics has emerged as a promising field in applied data science for human resource management, garnering significant attention from AI communities and inspiring numerous research efforts. To this end, we present an up-to-date and comprehensive survey on AI technologies used for talent analytics in the field of human resource management. Specifically, we first provide the background knowledge of talent analytics and categorize various pertinent data. Subsequently, we offer a comprehensive taxonomy of relevant research efforts, categorized based on three distinct application-driven scenarios: talent management, organization management, and labor market analysis. In conclusion, we summarize the open challenges and potential prospects for future research directions in the domain of AI-driven talent analytics.
Artificial intelligence, talent analytics, talent management, organization management, labor market analysis
A Comprehensive Survey of Artificial Intelligence Techniques for Talent Analytics
Chuan Qin, Member, IEEE,
Le Zhang,
Rui Zha,
Dazhong Shen,
Qi Zhang, Ying Sun, Member, IEEE,
Chen Zhu, Member, IEEE,
Hengshu Zhu*, Senior Member, IEEE,
Hui Xiong*, Fellow, IEEE
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
C. Qin, C. Zhu, and H. Zhu are with the Career Science Lab, BOSS Zhipin, Beijing, China. E-mail: [email protected], [email protected], [email protected].
L. Zhang is with the Business Intelligence Lab, Baidu Inc, Beijing, China. E-mail: [email protected].
R. Zha is with the University of Science and Technology of China, Anhui, China. E-mail: [email protected].
D. Shen and Q. Zhang are with the Shanghai Artificial Intelligence Laboratory. E-mail: [email protected], [email protected].
Y. Sun and H. Xiong are with the Hong Kong University of Science and Technology (Guangzhou), China. E-mail: [email protected], [email protected].
H. Zhu and H. Xiong are the corresponding authors.
August 1, 2023
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
In the world of volatility, uncertainty, complexity, and ambiguity (VUCA), talents are always precious treasures and play an important role in business success. To cope with the fast-evolving business environment and maintain competitive edges, it is critical for organizations to rethink how to make talent-related decisions in a quantitative manner. Indeed, thanks to the era of big data, the availability of large-scale talent data provides unparalleled opportunities for business leaders to understand the rules of talent and management, which in turn delivers intelligence for effective decision-making and management for their organizations <cit.>. Along this line, as an emerging applied data science direction in human resource management, talent analytics has attracted a wide range of attention from both academic and industry circles. Specifically, talent analytics, also known as workforce analysis or people analytics, focuses on leveraging data science technologies to analyze extensive sets of talent-related data, empowering organizations with informed decision-making capabilities that enhance their organizational and operational effectiveness <cit.>. In practice, talent analytics plays a pivotal role in strategic human resource management (HRM), encompassing diverse applications such as talent acquisition, development, and retention, as well as examining organizational behaviors and external labor market dynamics.
Generally, the research directions of talent analytics can be divided into three categories, as illustrated in Figure <ref>, including talent management, organization management, and labor market analysis.
To be specific, first, talent management is a constant strategic process of attracting and hiring high-potential employees, training their skills, motivating them to improve their performance, and retaining them to maintain organizational competitiveness. In this particular scenario, talent analytics primarily focuses on individual-level analysis. For instance, it can help human resource managers find the right talents for different jobs in a practical way <cit.>, and it can support employee performance or turnover prediction <cit.>. Second, organization management is the art of fostering collaboration among talents and guiding the organizations toward achieving success. In this scenario, talent analytics can diagnose the health of an organization and measure organizational performance by leveraging various relationship information between talents or organizations, such as organizational structure, communication patterns, and project collaborations <cit.>. It can also assist the organization in effectively structuring and optimizing teams <cit.>. Third, talent analytics can be applied from an external and macro perspective, i.e., to the labor market analysis scenario. This is crucial for devising talent and organizational strategies. For instance, by analyzing talent demands within the labor market, managers can effectively craft recruitment strategies <cit.>.
While the traditional studies around talent analytics have achieved great theoretical and practical success, the field of research has been facing dramatic changes in recent years. On the one hand, talent analytics is facing digital disruption, which enables the availability of large-scale relevant data. For instance, online recruitment platforms are rapidly developing and have amassed a significant amount of recruitment data. One such platform, Indeed, a world-renowned job search site, had 11.3 million active jobs as of January 2022 <cit.>.
Meanwhile, LinkedIn, the largest online professional network, had 774 million members from around 200 countries as of March 2022 <cit.>, building up a wealth of labor market data. Moreover, numerous enterprises are setting up their Digital Human Resource Management Systems (Digital HRMS), enabling the collection, storage, and processing of talent and organizational information in a digital environment <cit.>. On the other hand, with the advent of talent-related big data, Artificial Intelligence (AI) techniques have rapidly revolutionized research and practice in this field, which in turn delivers intelligence for decision-making and management in organizations. For instance, deep learning methods have enabled new paradigms in person-job fit <cit.> and person-organization fit <cit.>, supporting efficient and accurate talent selection and development. Text mining methods, such as probabilistic topic models, have been adopted in employer brand analysis based on large-scale labor market data <cit.>, which enables forward-looking strategic planning for the business. Meanwhile, in recent years, several high-tech companies have gradually incorporated AI technologies into their HRMS. As an illustration, IBM leverages AI technology to achieve a remarkable 95 percent accuracy in predicting employees who are considering leaving their positions, which saved IBM $300 million in retention costs <cit.>.
This survey attempts to provide a comprehensive review of the rapidly evolving AI techniques for talent analytics. Based on our investigation, we first provide a detailed taxonomy of relevant data, laying a data foundation for leveraging AI techniques to better understand talents, organizations, and management. Along this line, we introduce research efforts on AI techniques for talent analytics from three aspects, including talent management, organization management, and labor market analysis. Finally, we identify challenges for future AI-based talent analytics and suggest potential research directions.
Moreover, in order to help the readers learn more effectively, we highlight the systematic resources provided in this survey as follows,
* Table <ref> summarizes the data for talent analytics.
* Table <ref> summarizes the recent AI-based talent analytics efforts in the talent management scenario.
* Table <ref> summarizes the recent AI-based talent analytics efforts in the organization management scenario.
* Table <ref> summarizes the recent AI-based talent analytics efforts in the labor market analysis scenario.
§ DATA FOR TALENT ANALYTICS
Nowadays, as enterprises undergo an accelerated digital transformation, a large amount of talent analytics-related data has been accumulated. In this section, we will introduce the data collected across various scenarios, providing readers with a foundational understanding of the related research data and the motivation for model design. Generally, the data can be divided into internal data, which are collected from the internal enterprise management system, and external data, which are collected from the external labor market.
§.§ Internal Data
Based on the described objects, internal data can be broadly divided into three categories: recruitment data, employee data, and organizational data.
§.§.§ Recruitment Data
Recruitment data in pre-employment mainly includes the following types:
Resume:
A resume or Curriculum Vitae (CV) is a document that outlines a person's background, skills, and accomplishments, which plays a vital role in the recruitment process as it serves to facilitate talent screening and assessment <cit.>. It serves as an important tool for job seekers to showcase their qualifications and suitability for a job position.
Recently, a large amount of resume data, in either Word or PDF format, has accumulated with the development of online recruitment.
As shown in Figure <ref>, a resume typically comprises structured information such as gender, age, and education, as well as semi-structured information like educational experience, work experience, and project experience.
Accordingly, several resume parsing techniques have been developed to extract the relevant information automatically <cit.>.
On this basis, substantial effort has been devoted to talent analytics with resume data from different perspectives.
For instance, Yao et al. <cit.> introduced a keyphrase extraction approach to explore job seekers' skills in resumes, and Pena et al. <cit.> used image information in resume data to improve screening performance and explore fairness issues.
Moreover, several studies have proposed leveraging text mining techniques to determine the matching degree between jobs and job seekers based on their resumes <cit.>.
In addition, the resumes also encompass the career trajectories of the job seekers. As illustrated on the right side of Figure <ref>, the candidate's profile showcases three job experiences, including a job change at Microsoft and work experience at Google. Along this line, Zhang et al. <cit.> introduced the ResumeVis system to visualize the individual career trajectory and mobility within different organizations. Researchers have further analyzed the sequential patterns of career trajectories and proposed personalized career development recommendations <cit.>.
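As a rough illustration of the simplest text-based matching signal underlying the person-job fit studies mentioned above, one can compare a resume with a job posting through TF-IDF vectors and cosine similarity. This is a generic baseline sketch, not a reproduction of any cited neural matching model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def match_score(resume_text: str, job_posting_text: str) -> float:
    # Cosine similarity between TF-IDF representations of the two documents.
    vectorizer = TfidfVectorizer(stop_words="english")
    vectors = vectorizer.fit_transform([resume_text, job_posting_text])
    return float(cosine_similarity(vectors[0], vectors[1])[0, 0])

# match_score("Python, machine learning, NLP, resume parsing ...",
#             "We are hiring an NLP engineer with strong Python skills ...")
# returns a value in [0, 1]; higher means a closer lexical match.
```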
Job Posting:
A job posting is an advertisement for a vacant position that provides job seekers with information on the job description and requirements. The posting offers applicants a clear understanding of what the position is responsible for and what qualifications are necessary. Recently, the proliferation of online recruitment services has made it increasingly common to publish job postings as web pages. Figure <ref> illustrates a typical job posting that comprises structured information, such as the salary range and education requirements, as well as the semi-structured content that includes job duties descriptions and abilities requirements.
Nevertheless, it is still difficult for Human Resource (HR) experts to deal with such a large corpus of data manually.
To this end, researchers have attempted to reduce the dependence on manual labor by using neural network-based techniques, particularly NLP, on voluminous job postings.
As mentioned before, considerable effort has been devoted to Person-Job Fit <cit.>, which aims to match job postings with suitable resumes. Moreover, Shen et al. <cit.> leveraged the latent variable model to jointly model the job description, candidate resume, and interview assessment, which can further benefit several downstream applications such as person-job fit and interview question recommendation. In order to reduce the expense of manual screening, researchers have also extracted job entities from postings and generated interview questions automatically <cit.>.
Apart from these in-firm applications, some studies are carried out to provide comprehensive insights into the global labor market. For instance, researchers have proposed several data-driven methods for salary analysis across different companies and positions <cit.>. Zhang et al. <cit.> utilized large-scale job postings from one of the largest Chinese online recruitment websites and forecast fine-grained talent demand in the recruitment market. Moreover, some studies aim to measure the popularity of job skills and forecast their evolving trends <cit.>. Along this line, Sun et al. <cit.> further focus on measuring the values of job skills based on massive job postings, contributing to the quantitative assessment of job skills.
Interview-related Data:
Interview-related data is typically collected during the interview process and serves the purpose of evaluating applicants' overall qualifications for the position they are applying for. In general, an interview can be conducted either in-person or through video, resulting in textual or video-based assessments, respectively. Both of these two kinds of data enable comprehensive evaluations for the candidates and facilitate the integration of AI in HR.
To address the subjectivity of traditional interviews, Shen et al. <cit.> utilized the latent variable model to explore the relationship among job descriptions, candidate resumes, and textual interview assessments. The results provide an interpretable understanding of job interview assessments.
Indeed, textual interview assessments within a company are usually private and sensitive, whereas video assessments draw more attention.
For instance, several studies extract multimodal features from the videos for automatic analysis of job interviews <cit.>. In addition, Hemamou et al. <cit.> proposed a hierarchical attention model to predict the hirability of the candidates using multimodal information, including text, audio, and video. Along this line, Chen et al. <cit.> leveraged a hierarchical reasoning graph neural network to automatically score candidate competencies using textual features in asynchronous video interviews.
§.§.§ Employee Data
Regarding the development of employees within a company, a significant amount of employee data has been accumulated, including training records and individual work outcomes. An overview of employee-related data is shown in Figure <ref>.
Employee Profiles: Employee profiles typically describe an individual based on two main aspects: demographic characteristics and individual work outcomes. In specific, the former branch includes characteristics such as age, gender, and education levels <cit.>, which can be used to enhance employee representations and benefit various downstream analyses, such as career mobility prediction <cit.> and performance forecasting <cit.>. In addition to these static variables, individual work outcomes depict the dynamic career development from different dimensions, such as performance appraisals, promotion, and turnover records. In particular, a performance appraisal is a systematic evaluation of an employee's job performance and productivity that is typically conducted by line managers. Besides, the promotion and turnover records show employee movements within and across companies, respectively.
All of this information contributes to further insights into employee dynamics.
For instance, researchers have leveraged performance appraisals to identify the high-potential talents within a company <cit.>. Li et al. <cit.> utilized static profiles, performance appraisals, and reporting lines of employees to model career development within a company, focusing on turnover and career progression. Sun et al. <cit.> proposed to capture the dynamic nature of person-organization fit based on individual profiles, reporting lines, and communication records. To investigate the contagious effect of turnovers, researchers have utilized both employee profiles and turnover data <cit.>. Furthermore, Hang et al. <cit.> leveraged five kinds of standardized data, including employee turnover records, to predict the turnover probability and period.
Training Records: Employee training is a program designed to improve the performance of employees by equipping them with specific skills. Ongoing employee training has proven to be crucial in attracting and retaining top talent <cit.>. Typically, the training record describes the learning path of an employee, which is a sequence of different skills.
Based on these training records, considerable effort has been devoted to exploring the learning patterns of employees. For instance, Wang et al. <cit.> utilized both learning records and skill profiles of employees from a high-tech company in China to develop a personalized online course recommendation system. Along this line, Srivastava et al. <cit.> collected employees' training and work history from a large multinational IT organization to provide personalized next training recommendations. In addition, some researchers also provided insights into employee competency study <cit.>. For instance, multi-dimensional features, including learning and training dimensions, were collected from a Chinese state-owned enterprise to provide competency assessment for employees <cit.>.
§.§.§ Organizational Data
An organizational structure is a system that outlines how activities are directed toward the achievement of organizational aims <cit.>, which plays an important role in decision-making and knowledge management. Generally, an organization is commonly represented as a hierarchical tree structure, which can take on diverse forms, such as matrix, flat, and network structures. Figure <ref> shows several common types of organizational structures.
Typically, existing studies explore these complex structures from various dimensions, such as reporting lines and in-firm social networks.
Reporting lines are generally the most representative aspect of an organizational structure, which delineates how authority and responsibility are allocated in an organization. Regarding this point, Sun et al. <cit.> developed an organization structure-aware convolutional neural network to hierarchically extract compatibility features for measuring person-organization fit and its impact on talent management.
Nevertheless, due to privacy concerns, mainstream studies utilize in-firm social networks for human resource management. In general, an in-firm social network can be formed from email or Instant Messaging (IM) records across employees.
For example, text-based communication has been analyzed with several machine learning classifiers to identify group mood <cit.>. Besides, Cao et al. <cit.> leveraged the lasso regression model to explore team viability using text conversations of online teams. In addition to social networks, researchers have also taken other information into account.
For example, Ye et al. <cit.> utilized both email communication and a high-potential talent list to identify employees with high potential. Along this line, Teng et al. <cit.> further utilized datasets from three sources for organizational turnover prediction, including profile and turnover, social network, and job levels.
§.§ External Data
Apart from the aforementioned internal data, external sources also contribute to a comprehensive understanding of the labor market, which can be broadly classified into two categories: social media platforms and job search websites.
Social Media:
Widely-used social media platforms that contribute to a comprehensive understanding of the labor market include Twitter [https://twitter.com], Facebook [https://www.facebook.com], and news reports.
With the help of NLP and Topic Model techniques <cit.>, numerous studies have been carried out to explore the semantic information in this corpus.
For example, more than 60,000 tweets related to nine energy companies were collected for sentiment analysis expressed on Twitter <cit.>. To gain further insight into the impact of public opinion, Spears et al. <cit.> collected earnings reports and news articles spanning eight years from four companies.
The results indicate that companies may face a decline in valuation when they receive negative publicity.
Job Search Websites:
Recent years have witnessed the rapid growth of job search websites, such as Indeed [https://www.indeed.com], LinkedIn [https://www.linkedin.com], and Glassdoor [https://www.glassdoor.com]. Specifically, Indeed and Glassdoor allow users to comment on a company, providing an overall understanding of the employer brand. For instance, Lin et al. <cit.> collaboratively modeled both textual (i.e., reviews) and numerical information (i.e., salaries and ratings) for learning latent structural patterns of employer brands. In addition, Bajpai <cit.> leveraged the data from Glassdoor to perform aspect-level sentiment analysis. Along this line, large-scale reviews of Fortune 500 companies are collected to identify topics that matter to employees <cit.>. Differently, LinkedIn provides a wide range of business services, including job listings, professional profile creation, and career development services, with personal profiles being the most analyzed, as they describe users' employment history. For example, Park et al. <cit.> used LinkedIn’s employment history data from more than 500 million users over 25 years to construct a labor flow network of over 4 million firms worldwide, demonstrating a strong association between the influx of educated workers and financial performance in detected geo-industrial clusters.
Furthermore, there are also several third-party business investigation platforms that offer detailed information about companies and their board members' relationships, such as Crunchbase [https://www.crunchbase.com], Owler [https://www.owler.com], Tianyancha [https://www.tianyancha.com] and Aiqicha [https://www.aiqicha.com]. These details can be viewed as complementary information to the job search websites. Building upon this foundation, one can gain deeper insights into the aligned companies and conduct more relevant research, such as analyzing cooperative and competitive relationships <cit.> and providing investment target recommendations <cit.>.
§ TALENT MANAGEMENT
Talent management, which focuses on placing the right person in the right job at the right time, has emerged as a predominant human capital topic in the early twenty-first century <cit.>.
In this section, we discuss the role of AI technologies in talent management, specifically from an individual perspective, including talent recruitment, assessment, and career development. First, we describe various intelligent talent recruitment scenarios, such as job posting generation <cit.> and talent searching <cit.>. Next, we discuss two primary issues in talent assessment: interview question recommendation <cit.> and assessment scoring <cit.>. Furthermore, we outline several post-employment career development problems, including course recommendations <cit.> for employee training and employee dynamics analysis <cit.>. In the following sections, we will delve into these issues in more detail.
§.§ Talent Recruitment
Talent recruitment is a critical component of talent management, as it involves identifying the right candidates for positions within an organization. The quality of this function can significantly impact the organization's future development, which is why considerable human and material resources have been invested in ensuring the efficiency and effectiveness of related procedures. According to a Forbes article, US corporations spend nearly $72 billion annually on various recruiting services, and the global amount is likely three times larger [https://www.forbes.com/sites/joshbersin/2013/05/23/corporate-recruitmenttransformed-new-breed-of-service-providers/].
However, traditional talent recruitment methods rely heavily on the personal knowledge and experience of recruiters, which may introduce bias due to the subjective nature of the process. This potential bias can be exacerbated by varying levels of experience and personal qualities among different recruiters. Fortunately, the rapid development of online recruitment platforms, such as LinkedIn and Lagou, has ushered in a new era of data-driven talent recruitment, empowered by AI technologies.
§.§.§ Job Posting Generation
A job posting comprises both job duties, which describe the responsibilities and tasks of the role to candidates, and job requirements, which outline the professional experience, skills, and domain knowledge that an employer expects from the ideal candidate to perform the role.
Job duties are usually tailored to each specific position. However, determining the required capabilities and prioritizing them in line with the job duties can be challenging, especially when recruiters have limited experience and domain knowledge.
An intuitive solution to address this challenge is to generate job requirements in a data-driven manner, based on the job duties. This can be formulated as follows:
Given a set of job postings 𝒞, where each C_i∈𝒞 contains a job duty X_i and a job requirement Y_i, the target of job posting generation is to learn a model M that can generate a fluent and rational job requirement Y_new when a new job duty X_new is given.
Technically, the task of job posting generation can be viewed as a text-to-text generation problem, where the job duties and job requirements are typically long sequences of text.
To address this task, sequence-to-sequence models in an encoder-decoder architecture are commonly employed, as depicted in Figure <ref>.
For instance, Liu et al. <cit.> applied two Long Short-Term Memory (LSTM) layers as the encoder and decoder to extract the key information from job duties and to generate the job requirements, respectively.
Since it is important to precisely use and organize skill-related keywords in job requirements, they implemented the decoder in a two-pass manner. The first-pass decoder is to predict skill-related keywords, while the second-pass decoder aims to generate fluent text guided by the predicted skills. In particular, the attention mechanism is used to combine hidden states in the LSTM layer of the encoder with context information when predicting skills in the decoder.
Furthermore, Qin et al. <cit.> trained a neural topic model in job duties to capture the global topic information. The topic distribution of each job duty is used as the context information to guide the generation of each word in the LSTM-based decoder.
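To make the sequence-to-sequence formulation above concrete, the sketch below shows a minimal single-pass LSTM encoder-decoder in PyTorch that maps tokenized job duties to job requirements. It only illustrates the general architecture, not the cited two-pass or topic-guided models; the vocabulary size, dimensions, and random token ids are assumptions for demonstration.

```python
import torch
import torch.nn as nn

class DutyToRequirementSeq2Seq(nn.Module):
    """Minimal LSTM encoder-decoder: job-duty tokens -> job-requirement tokens."""
    def __init__(self, vocab_size=8000, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, duty_ids, req_in_ids):
        # Encode the job duty and keep the final (h, c) state as context.
        _, state = self.encoder(self.embed(duty_ids))
        # Decode the requirement conditioned on that context (teacher forcing).
        dec_out, _ = self.decoder(self.embed(req_in_ids), state)
        return self.out(dec_out)             # (batch, req_len, vocab_size) logits

# Toy usage with random token ids (real inputs come from a tokenizer).
model = DutyToRequirementSeq2Seq()
duty = torch.randint(0, 8000, (4, 60))       # 4 job duties, 60 tokens each
req = torch.randint(0, 8000, (4, 41))        # gold requirements
req_in, req_out = req[:, :-1], req[:, 1:]    # decoder input is shifted by one
logits = model(duty, req_in)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 8000), req_out.reshape(-1))
loss.backward()
```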
§.§.§ Talent Searching
Talent Searching aims to find suitable candidates based on the search query given by a recruiter or a hiring manager.
Formally, consider the candidate set ℛ = {R_1, ..., R_n}, where each candidate R_i is represented by her/his resume, consisting of work experiences, education experiences, and so on. Talent searching can then be approached as an information retrieval task:
Given the candidate set ℛ and a searching query q consisting of search criteria, the goal of talent searching is to determine a subset of candidates 𝒰⊂ℛ satisfying the search criteria, and rank those candidates based on the fitness.
Intuitively, to solve this problem, we need to measure the fitness between the search query q and each resume R_i. However, in industrial practice, the efficiency of computing the fitness for each pair (q, R_i) is a challenge due to the large scale of ℛ. Therefore, before the online talent search, it is essential to derive the crucial representation from the resume in the offline pipeline, even clustering them first to reduce computation.
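As a minimal illustration of this offline/online split, the following sketch embeds a toy resume corpus offline with TF-IDF (a simple stand-in for a learned resume representation) and then ranks candidates for a recruiter query by cosine similarity; the resume texts and query are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy resume corpus; in practice these are parsed resume texts.
resumes = [
    "software engineer python machine learning 5 years",
    "accountant financial reporting excel cpa",
    "data scientist nlp deep learning pytorch",
]

# Offline pipeline: embed all resumes once.
vectorizer = TfidfVectorizer()
resume_vecs = vectorizer.fit_transform(resumes)

# Online pipeline: embed the recruiter query and rank candidates by fitness.
query = "machine learning engineer with python experience"
query_vec = vectorizer.transform([query])
scores = cosine_similarity(query_vec, resume_vecs).ravel()
ranking = scores.argsort()[::-1]
print([(i, round(scores[i], 3)) for i in ranking])
```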
Resume Representation Learning.
Here, researchers aim to extract crucial and comprehensive information from resumes. Most related works primarily focus on analyzing the textual content.
The problem is thus transformed into an NLP problem, and various language models have been employed to represent the resume content, such as LSTM <cit.>, Bidirectional Encoder Representations from Transformers (BERT) <cit.>, Generative Adversarial Networks (GAN) <cit.>, and semantic processing technologies <cit.>. Recently, knowledge graphs built from the keywords in the resume have also been introduced to enhance the resume representation <cit.>.
In addition to understanding the textual content, the classification of resume blocks is another crucial step in resume information extraction, which aims to distinguish the semantic purpose of different resume blocks.
Both the textual content in each block and the interrelation between different blocks should be considered.
Therefore, some sequence models have been applied to this task, such as Conditional Random Field (CRF) <cit.>, Recurrent Neural Network (RNN) <cit.>, and Transformer <cit.>.
Online Talent Searching Engine.
After extracting information from the resume, we turn to introduce the AI-based talent searching engine by employing resume representation learning in the offline pipeline. The core task is to rank candidates using different representation techniques and AI-based models.
For example, Ozcaglar et al. <cit.> proposed a two-level ranking system to integrate structured candidate features by combining Generalized Linear Mixed (GLMix) models and Gradient Boosted Decision Tree (GBDT) models. This approach utilizes recruiter actions as supervised information to learn to rank candidates.
In addition, Manad et al. <cit.> extracted skills from unstructured candidate resumes and ranked them by scoring the proficiency of the skills.
Furthermore, Geyik et al. <cit.> involved recruiter's immediate feedback on the recommendation results to cluster candidates based on recruiter's intent. Then, a multi-armed bandit-based approach was developed to choose the appropriate intent cluster for the current recruiter, followed by ranking the candidates in this cluster. Unlike the aforementioned works, where searching queries are given by recruiters, Ha et al. <cit.> designed a new talent search engine to recommend candidates based on ideal candidates, where the query is constructed from the keywords extracted from the ideal candidates.
§.§.§ Person-Job Fitting
In this phase, we introduce several applications of AI technologies in the task of Person-Job Fitting (PJF), which focuses on measuring the matching degree between a job posting, consisting of job duties and job requirements, and a candidate's resume, consisting of work and educational experiences. Following <cit.>, we denote a job application as a tuple S=(J,R) of a job posting J and a resume R, together with a binary label y that corresponds to the recruitment result, where y=1 indicates that the candidate has been selected for further interviews. On this basis, the problem of person-job fit can be formalized as a text matching problem.
Given a set of job applications 𝒮, where each application S ∈𝒮 contains a job posting J and a resume R, as well as the corresponding recruitment result label y, the target of Person-Job Fit is to learn a predictive model M for measuring the matching degree between J and R, and then the corresponding result label y can be predicted.
In the following, we will introduce several AI-based solutions for this problem, along with two extended applications: job recommendation and talent recommendation.
AI-based Solutions for PJF.
The early research efforts in the field of Person-Job Fit with AI-based technologies can be traced back to <cit.>. They proposed that the compatibility between a job and a candidate is often dependent on underlying factors that may not be explicitly stated in the job posting or the candidate's resume. To address this issue, the authors developed a latent variable model to represent job requirements and candidate abilities, and formulated PJF as a bilateral matching problem from a demands-abilities perspective.
Since then, several studies have explored the use of AI-based technologies to extract job and candidate profiles from textual data. For instance, Zhu et al. <cit.> introduced a Convolutional Neural Network (CNN) based neural network to extract representation vectors from job postings and resumes. They then evaluated the fit between a candidate's qualifications and the job requirements by measuring the similarity between these vectors.
Qin et al. <cit.> used LSTM to model the text sequence and applied ability-aware attention strategies to measure the importance of each job requirement or candidate's ability on the final PJF decision. Similarly, Luo et al. <cit.> treated job postings and resumes as multi-sentence documents and utilized Bidirectional Gated Recurrent Units (BIGRU) and word-level attention to model sentences and documents.
Luo et al. <cit.> extended these efforts by integrating LSTM, CNN, and attention models to handle different types of structured textual information, such as skills and experiences, and proposed an adversarial learning-based framework to enhance expressive representations.
Recently, Lavi et al. <cit.> applied the fine-tuning BERT model to handle the noise and heterogeneous application data. Specifically, they matched jobs and candidates with both classification and similarity-based objectives. Yao et al. <cit.> introduced a skill knowledge graph to enhance the representation of job postings and resumes in a knowledge-aware manner.
Besides capturing textual features from application data, several works have explored including other related information to improve the performance or address challenges in the PJF task. For example, it has been reported that certain numerical and categorical attributes of jobs and candidates are also important and helpful, such as career level, education, company name, tag, and region <cit.>. As a result, several neural network-based approaches have been designed to capture the comprehensive interaction of different types of data, such as Factorization Machine <cit.>, CNN <cit.>, and Self-Attention <cit.>.
Additionally, job-resume relation graphs have been used to aggregate useful evidence from similar jobs or resumes <cit.> with Graph Neural Networks (GNN), where jobs or resumes are considered as nodes and their relation-specific connections, such as job applications, are considered as links.
Meanwhile, the potential semantic relation among different job categories has been noted and leveraged to tackle the availability of labeled data among them with domain adaptation technologies <cit.>.
Moreover, the user interaction record on online recruitment services is also an important complementary feature to represent both jobs and candidates. Specifically, Yan et al. and Jiang et al. <cit.> integrated historical interviewed applicants for a job posting and historically applied jobs for a particular talent to complement and enhance the representation learning for jobs and candidates. The hidden idea is that historical interview choices and job applicants reveal the preference of jobs and candidates for each other. Fu et al. <cit.> further explored users' dynamic preferences in browsing, clicking, and online chat behaviors and regarded them as the cascading relations between jobs and candidates.
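The text-matching formulation above can be illustrated with a minimal two-tower sketch in PyTorch, in which a job posting and a resume are encoded separately and their representations are combined into a matching score trained against the recruitment label y. This is a simplified stand-in for the cited architectures; the vocabulary size, GRU encoders, and random inputs are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TwoTowerPJF(nn.Module):
    """Encode a job posting and a resume separately, then score their match."""
    def __init__(self, vocab_size=20000, emb_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.job_enc = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.resume_enc = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.scorer = nn.Linear(2 * hidden_dim, 1)

    def forward(self, job_ids, resume_ids):
        _, h_job = self.job_enc(self.embed(job_ids))        # (1, batch, hidden)
        _, h_res = self.resume_enc(self.embed(resume_ids))
        pair = torch.cat([h_job.squeeze(0), h_res.squeeze(0)], dim=-1)
        return self.scorer(pair).squeeze(-1)                # matching logit

model = TwoTowerPJF()
job = torch.randint(1, 20000, (8, 100))      # 8 tokenized job postings
resume = torch.randint(1, 20000, (8, 150))   # 8 paired tokenized resumes
label = torch.randint(0, 2, (8,)).float()    # 1 = invited to interview
loss = nn.BCEWithLogitsLoss()(model(job, resume), label)
loss.backward()
```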
Applications for Talent Recommendation.
One intuitive application of person-job fit is talent recommendation, which involves finding suitable candidates for a specific job.
Given a specific job posting J and a set of candidates {R_1, R_2,..., R_n}, a well-trained PJF architecture can be used to measure the fitness of each pair (J, R_i) and rank the candidates.
While the PJF models mentioned previously can all be applied in this context, some studies have attempted to enhance candidate ranking with a more comprehensive evaluation.
For instance, personality traits have been identified as critical success factors for job performance and organizational effectiveness <cit.>. Researchers have mined these traits through linguistic analysis of social media text data <cit.>. Moreover, by integrating various measurements and data sources, researchers and companies have developed electronic recruitment (e-recruitment) systems for more effective and efficient recruitment, particularly in the talent pre-screening stage <cit.>. These systems have been demonstrated to have a positive impact on recruitment <cit.>.
Applications for Job Recommendation.
As a dual problem of talent recommendation, job recommendation aims to recommend jobs for a specific candidate.
The PJF architecture, which can be built on any of the PJF models mentioned previously, can output the fitness of each job in the job set {J_1, J_2, ..., J_n} for the given candidate R.
This can be valuable in recruitment scenarios where job application redistribution is necessary, such as position assignments for candidates in campus recruitment or for employees in internal position adjustment. Job recommendation can also be used in online recruitment services to assist job seekers in finding suitable job opportunities <cit.>.
§.§ Talent Assessment
Talent assessment is a crucial process for companies to identify the competency of candidates.
In this paper, we discuss two primary branches of talent assessment, i.e., interview question recommendation and assessment scoring.
§.§.§ Interview Question Recommendation
A job interview aims to assess the fitness between candidates and job positions by evaluating their skills and experiences. A critical task is how to design appropriate questions for comprehensively assessing the competencies of candidates. In this phase, personalized question recommendation has emerged as a feasible approach that selects the right questions from a question set based on the candidate and the job position.
Along this line, Qin et al. <cit.> first proposed to recommend personalized question sets for various applications, taking into consideration both job requirements and candidates' experiences. In particular, to enhance the performance of the recommendation system, a knowledge graph of job skills was built using query logs from Baidu, the largest Chinese search engine. Subsequently, Shi et al. <cit.> proposed an automated system for generating personalized screening questionnaires based on job postings. They encoded the job posting with the BERT model, selected the question templates with a Multi-Layer Perceptron (MLP) classifier, and extracted the necessary parameters from the templates using feature-based regression models.
To cope with the substantial risk of bias arising from the subjective nature of traditional in-person interviews, Shen et al. <cit.> proposed to learn the representative perspectives of in-person interviews from the successful job interview records in history. With the help of topic models, they represented job postings, resumes, and interview assessment reports in an interpretable way. The potential relationships among them are also mined to recommend questions or skills that should be estimated during interviews <cit.>.
§.§.§ Assessment Scoring
Assessment scoring is another critical problem in talent management, which evaluates the competency of candidates or employees based on their performance.
Typically, this problem can be formulated as a binary classification problem as follows:
Based on the observed attributes x_u of employee u, the target is to predict whether she is competent for the current job, i.e., P(Y=1 | x_u).
To date, substantial effort has been devoted to the interviewing phase <cit.>.
For instance, Naim et al. <cit.> predicted overall interview scores based on 82 features from 138 recorded interview videos covering three dimensions: prosodic, lexical, and facial information. With the progression of recruitment techniques, Asynchronous Video Interview (AVI) has received considerable attention from researchers. Specifically, Chen et al. <cit.> proposed to extract features from a monologue video interview multimodal corpus, including text, audio, and video. Several shallow classification models are utilized to make the prediction, such as Support Vector Machine (SVM) and Random Forest (RF).
To help recruiters, Hemamou et al. <cit.> collected a corpus of over 7,000 candidates who participated in asynchronous video job interviews for real positions and recorded videos of themselves answering a set of questions. Based on such a massive dataset, they designed a hierarchical attention model to predict the hirability of the candidates. To identify the relevant parts of an answer, the authors also developed attention mechanisms to extract fine-grained temporal information <cit.>. Moreover, Chen et al. <cit.> introduced Graph Neural Networks to model the dependency relations between questions; unlike previous work, they only used the automatic speech recognition transcriptions in the AVI and scored multiple Question-Answer pairs. Besides, Singhania et al. <cit.> were the first to investigate fairness with regard to gender and race in video interviews.
In addition to the interviewing phase, there is still a focus on predicting competency based on employee profile information. For instance, Liu et al. and Hong et al. <cit.> applied the SVM model to predict the competence level of civil servants and highway construction foremen, respectively. Furthermore, Li et al. <cit.> compared the performance of multiple traditional classifiers such as SVM, RF, and Adaboost in competency evaluation and found that the prediction result is not ideal based solely on structured personal static data. For better performance, more unstructured and dynamic data, such as textual data or social networks can be involved, which may be a significant direction for future work.
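Since assessment scoring is typically cast as binary classification over extracted features, the following sketch trains a Random Forest on a toy feature matrix (echoing the 82 interview features mentioned above) and evaluates it with AUC; the synthetic features and labels are placeholders for features extracted from real interview recordings or profiles.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy stand-in for interview features (e.g., prosodic, lexical, facial statistics).
X = rng.normal(size=(300, 82))
# Toy competency labels loosely tied to two of the features.
y = (X[:, 0] + 0.7 * X[:, 5] + rng.normal(scale=1.0, size=300) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```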
§.§ Career Development
Career development is the process of acquiring and experiencing planned and unplanned activities that support the attainment of life and work goals <cit.>. In this paper, we investigate four correlated tasks in individual career development: course recommendation, promotion prediction, turnover prediction, and career mobility prediction.
§.§.§ Course Recommendation
Course Recommendation aims to provide personalized courses based on the different preferences and needs of users in various aspects. Much research in this direction is concerned with student education <cit.>. Recently, some effort has also been devoted to the talent management field <cit.>. Typically, we formalize this problem as follows:
Given the sequential history learning record ℋ_u of the employee u and her/his profile 𝒮_u, the target is to predict the rating of employee u on course c, i.e., E[r |ℋ_u, 𝒮_u, c].
As a recommendation task, most of the methods in the recommendation system can be utilized to conduct course recommendations. However, unlike traditional item recommendation, where the decision is determined by the users' rating, more attention should be paid to mining employees’ competencies and their needs for further development from side information such as the employee's profiles.
Therefore, Wang et al. <cit.> used a topic model to extract the latent interpretable representations of the employee's current competencies from their skill profiles, as well as a recognition mechanism to explore the personal demands from learning records. Then, they integrated the collaborative filtering algorithm with the Variational AutoEncoder (VAE) to develop an explainable course recommendation system.
In addition to learning records, Srivastava et al. <cit.> also introduced work history into the learning content recommendation and defined a Markov Decision Process (MDP) to extract the past training patterns. Yang et al. <cit.> introduced a contextualized knowledge graph embedding to recommend training courses to the talent in an explainable manner.
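A greatly simplified, non-neural sketch of course recommendation from learning records is shown below: it builds item-item similarities from a binary employee-course matrix and scores unseen courses for one employee. The cited works instead learn latent competencies and demands (e.g., with a VAE or knowledge graph embeddings); the matrix here is toy data.

```python
import numpy as np

# Binary learning-record matrix: rows = employees, columns = courses,
# 1 means the employee completed the course (toy data).
R = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 0, 1, 0],
])

# Item-item cosine similarity from co-enrollment patterns.
norms = np.linalg.norm(R, axis=0, keepdims=True) + 1e-9
sim = (R.T @ R) / (norms.T @ norms)

# Score unseen courses for one employee and recommend the best ones.
u = 1
scores = R[u] @ sim
scores[R[u] == 1] = -np.inf                  # do not re-recommend completed courses
print("recommended course ids:", np.argsort(scores)[::-1][:2])
```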
§.§.§ Promotion Prediction
Promotions serve two essential roles in the organization, that is, assigning individuals to the jobs for which they are best suited and providing incentives for lower-level employees <cit.>. To this end, some research is concerned with identifying the features that are correlated with promotion and applying machine learning methods to predict employee promotion. Typically, the problem can be formulated as a classification task as follows:
Given the history record ℋ_i^t in a certain period t of the employee i, the target is to predict the probability of a promotion P(M_i = 1 |ℋ_i^t).
In this phase, Yuan et al. <cit.> suggested that work-related interactions and online social connections are strongly predictive and correlate with promotion and resignation. Along this line, Long et al. <cit.> used Random Forest to predict promotions based on demographic and job features. However, these methods may not fully capture the complexity and dynamism of career development. To address this, Li et al. <cit.> proposed a novel survival analysis approach, where predicting promotion events is transformed into estimating the expected duration of time until promotion occurs.
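The survival-analysis view of promotion mentioned above can be sketched with the lifelines library, assuming a tabular dataset with covariates, an observation window, and a promotion indicator (all toy values below); this Cox proportional-hazards sketch is only an illustration of estimating time-to-promotion, not the cited model.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy employee records: covariates plus time-to-event data, where
# 'months_observed' is the observation window and 'promoted' marks whether
# a promotion occurred within it (0 = censored).
df = pd.DataFrame({
    "performance_score": [3.2, 4.5, 2.8, 4.1, 3.6, 2.5, 3.8, 3.0],
    "trainings_completed": [2, 6, 1, 5, 4, 0, 3, 2],
    "months_observed": [18, 10, 24, 12, 15, 30, 20, 9],
    "promoted": [1, 1, 0, 1, 0, 0, 1, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_observed", event_col="promoted")
cph.print_summary()                           # hazard ratios for each covariate
# Estimated median time until promotion for each employee's covariate profile.
print(cph.predict_median(df.drop(columns=["months_observed", "promoted"])))
```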
§.§.§ Turnover Prediction
As turnover is considered one of the major factors causing declining productivity <cit.>, many machine learning-based methods have been developed to predict and mitigate it. Mathematically, the turnover prediction problem can be formulated as follows:
Given the history record ℋ_i^t in a certain period t of employee i, the target is to predict the probability of a turnover P(E_i = 1 |ℋ_i^t).
For instance, Nagadevara <cit.> used five data mining techniques to predict turnover, including MLP, Logistic Regression, Decision Tree, etc. The results reveal that absenteeism and lateness, job content, demographics, and experience in the current team are strong predictors of turnover. Subsequently, more data mining techniques have been introduced to investigate retaining employees <cit.>. However, the above studies fail to account for the noise in the data, such as the limited understanding of benefits and cost. To this end, Ajit et al. <cit.> suggested using Extreme Gradient Boosting (XGBoost) to obtain better performance.
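As a minimal illustration of gradient-boosting-based turnover prediction on static features, the sketch below fits an XGBoost classifier on synthetic employee features and reports AUC; the feature semantics (tenure, absenteeism, job level, etc.) and labels are assumptions, not a real HR dataset.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)

# Toy static features per employee (e.g., tenure, absenteeism rate, job level,
# commute time); real studies assemble these from HR information systems.
X = rng.normal(size=(500, 12))
# Toy turnover labels weakly driven by two of the features.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.8, size=500) > 0.7).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7)
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("turnover AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```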
Besides predicting turnover based on static features, considerable attention is invested in capturing the dynamic factors, especially the evolving neural network-based methods, which have demonstrated significant expressive ability <cit.>. In this phase, Teng et al. <cit.> investigated the contagious effect of employee turnover on an individual and organizational level, respectively. Specifically, they developed two LSTM cells to process peers' turnover sequence and environmental change, as well as a global attention mechanism to evaluate the heterogeneous impact on potential turnover behavior. The experiments conducted on the dataset provided by a high-tech company in China demonstrate the effectiveness of their proposed framework, including profile information and turnover records. On this basis, Hang et al. <cit.> further modeled employee turnover from both internal and external views. For the internal component, they captured the influence of close collaborators and colleagues with similar skills using a graph convolutional network. From the external-market view, they connected employees and external job markets through shared job skills. Finally, both internal and external information is fed into Bidirectional Long Short-Term Memory (BiLSTM) and survival analysis for turnover predictions.
Job Satisfaction.
In this phase, another problem also receives considerable attention, namely, how to measure the employee's job satisfaction, which is inversely related to turnover intention <cit.>. Traditional approaches in this direction are based on self-reported questionnaire surveys, which are time-consuming and cannot be applied to large organizations. Recently, AI-based technologies have been introduced to automatically analyze job satisfaction from various aspects. Overall, the prediction of job satisfaction can be formulated as a binary classification problem, defined as follows:
Given a set of independent variables X, the target is to predict the dependent variable Y, where X characterizes some job-related features and Y indicates job satisfaction as a binary variable.
For instance, Arambepola <cit.> explored the influence of job-specific factors on job satisfaction level by combining both the employee's background data and company-related factors, where several classifiers, including Random Forest, Logistic Regression, and SVM, were used for the prediction of the job satisfaction level of software developers. Saha <cit.> proposed to assess job satisfaction by leveraging large-scale social media, i.e., employees' Twitter post dataset, where word frequency statistics, lexical analysis, and sentiment analysis have been conducted to extract features from textual data. Accordingly, multiple classifiers, such as SVM and MLP, were used to predict employees' job satisfaction.
In addition to the aforementioned classification models, Mcjames <cit.> developed a causal inference machine learning approach to identify practical interventions for improving job satisfaction. This approach was based on the TALIS 2018 dataset, which provides a representative sample of school teachers.
§.§.§ Career Mobility Prediction
To cope with the fast-evolving job-hopping phenomenon, methodologies are being developed to understand the underlying job movement patterns in the labor market. Following <cit.>, we define the career mobility prediction problem as follows:
Given the career path 𝒮(u) = {𝒥(u), Ω(u) } of employee u, where 𝒥_i(u) = {c_i, p_i, d_i} records the work experience of u at company c_i, position p_i with duration d_i, and Ω(u) stands for the personal information. We attempt to predict the employee u's next career move, including company c_L+1, position p_L+1, as well as duration d_L.
Generally, as shown in Figure <ref>, we can solve this task based on time series analysis, where the career path is treated as an event sequence. Along this line, Li et al. <cit.> first designed a contextual LSTM model to integrate the profile context and career path dynamics simultaneously for predicting the next company/position of talents.
To provide a fine-grained prediction, researchers further developed several methods to model the career trajectories for predicting both the next employer and the corresponding job duration.
Along this line, Meng et al. exploited a hierarchical neural network structure with an embedded attention mechanism for characterizing internal and external job mobility <cit.>. Furthermore, Wang et al. introduced a temporal encoding mechanism to handle dynamic temporal information <cit.>.
Indeed, macro-level job transition behavior may also affect individual career choices. To this end, Zhang et al. first constructed a heterogeneous company-position network based on massive career trajectory data and integrated macro information from the company-position into personal career move prediction <cit.>.
Moreover, Guo et al. proposed an intelligent sequential career planning system via stochastic subsampling reinforcement learning, which is capable of finding globally optimal career paths for talents <cit.>.
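A minimal sequence-model sketch of the career mobility problem is given below: an LSTM encodes a career path of (company, position, duration) records and jointly predicts the next company, the next position, and a duration. It abstracts away the attention, temporal encoding, and market-level signals of the cited models; all sizes and inputs are toy assumptions.

```python
import torch
import torch.nn as nn

class NextMovePredictor(nn.Module):
    """Encode a career path and predict the next company, position, and duration."""
    def __init__(self, n_companies=500, n_positions=50, emb=32, hidden=64):
        super().__init__()
        self.comp_emb = nn.Embedding(n_companies, emb)
        self.pos_emb = nn.Embedding(n_positions, emb)
        self.lstm = nn.LSTM(2 * emb + 1, hidden, batch_first=True)
        self.next_company = nn.Linear(hidden, n_companies)
        self.next_position = nn.Linear(hidden, n_positions)
        self.duration = nn.Linear(hidden, 1)

    def forward(self, companies, positions, durations):
        x = torch.cat([self.comp_emb(companies),
                       self.pos_emb(positions),
                       durations.unsqueeze(-1)], dim=-1)
        out, _ = self.lstm(x)
        h_last = out[:, -1]                   # state after the latest job record
        return (self.next_company(h_last),
                self.next_position(h_last),
                self.duration(h_last).squeeze(-1))

# Toy career paths: 4 talents, 3 past jobs each.
model = NextMovePredictor()
comp = torch.randint(0, 500, (4, 3))
pos = torch.randint(0, 50, (4, 3))
dur = torch.rand(4, 3)                        # normalized durations
c_logits, p_logits, d_pred = model(comp, pos, dur)
loss = (nn.CrossEntropyLoss()(c_logits, torch.randint(0, 500, (4,)))
        + nn.CrossEntropyLoss()(p_logits, torch.randint(0, 50, (4,)))
        + nn.MSELoss()(d_pred, torch.rand(4)))
loss.backward()
```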
§.§ Summary
In summary, AI-based approaches have been applied in talent management for recruitment, assessment, and career development. Specifically, the majority of research efforts have focused on talent recruitment, which may be attributed to the rapidly growing demand brought about by the development of online recruitment platforms. However, the application of AI-based approaches to comprehensively assess talent and plan career development has also attracted increasing attention in this era of information.
§ ORGANIZATION MANAGEMENT
Organization management is arguably the art of getting talents to cooperate and leading the entire organization toward a common predefined goal. In this section, we will introduce organization management with AI technologies, including organizational network analysis, organizational stability analysis, and organizational incentive analysis. Generally, the complex relationships among employees and organizations naturally form a network structure, and AI-related techniques for organizational network analysis aim to help understand the importance of critical connections and flows in an organization by modeling this special network structure <cit.>, thus serving downstream management applications such as organizational turnover prediction <cit.> and high-potential talent identification <cit.>. Then, to study the stability of the organization, some studies propose to analyze the composition of the organization from the formation and optimization perspectives <cit.>. Besides, several studies explore the compatibility between employees and organizations <cit.>. Finally, in order to motivate the talents in an organization to perform well, some AI-related studies conduct organizational incentive analysis, which mainly focuses on two important tasks in human resources, namely job title benchmarking <cit.> and job salary benchmarking <cit.>.
§.§ Organizational Network Analysis
In modern organizations, it is common for employees to build informal “go-to” teams to facilitate business collaboration beyond the organizational structure. Organizational social networks often emerge spontaneously, forming communicative and socio-technical connections. In this context, Organizational Network Analysis (ONA) serves a crucial role. It aids in understanding the significance of these critical connections and information flows within an organization, making leaders aware of the importance of vibrant communities and helping employees become more targeted and effective in business operations. In this subsection, we will introduce AI-related techniques for organizational network modeling and a classic application of ONA, namely high-potential talent identification.
§.§.§ Organizational Network Modeling
In the real scenario, abundant talent data can be utilized to construct an organizational network, unveiling the intricate relationships among employees as they form project teams or forge alliances across different groups. An illustration of an organizational network is shown in Figure <ref>. Without loss of generality, the generalized organization network can be defined as follows:
Organizational network is defined as G = (V, E), where V denotes the node set representing employees and departments (or organizations), and E denotes the edge set representing the relationship between nodes, such as the belonging relationship between employees and department, the frequency of communication (e.g., email or instant message) and the reporting lines between employees. Besides, the employee node has a specific profession, which indicates the type of their work (e.g., engineer and product). Meanwhile, each employee and department node has some work-related attributes (e.g., length of service, job level).
Based on the organizational network, organizational network modeling aims to capture knowledge from this network to support talent management-related tasks. Generally, network embedding techniques <cit.> are good choices to achieve this goal: they describe the network as low-dimensional vectors that can further serve downstream applications. For instance, Ye et al. <cit.> proposed a multiplex attentive network embedding approach for modeling the organizational network in a holistic way. In their work, the organizational network is composed of the multiple communication interactions among employees. They generated embeddings for employees based on a random walk strategy <cit.> with k-core and approximated shortest path algorithms. Furthermore, they proposed a relational transition-based approach to represent each department. In this way, the learned representations can be leveraged for several talent management tasks, including employee turnover prediction, performance prediction, and department performance prediction. Besides, Teng et al. <cit.> exploited a network fusion technique for organizational turnover prediction. They concentrated on modeling the relationships among organizations. Specifically, they demonstrated the correlation between the topology of the organizational network and organizational turnover. To this end, they constructed a turnover similarity network based on multiple organizational social networks and took advantage of GNNs to learn comprehensive knowledge from these topological structures, which was further used for organizational turnover prediction. Apart from that, there are also some other applications. For example, Dong et al. <cit.> studied the problem of cross-group community search on labeled graphs, namely Butterfly-Core Community (BCC) search. Specifically, they demonstrated that the BCC problem is NP-hard and proposed an approximation algorithm to solve it. The proposed algorithm was evaluated on a real-world organizational network from Baidu Inc. and can effectively find communities formed by cross-group collaborations given two employees with different positions.
§.§.§ High-potential Talent Identification
High-potential talents (HIPOs) possess leadership abilities, business acumen, and a strong drive for success, making them more likely to emerge as future leaders within organizations when compared to their peers <cit.>. Proactively identifying and developing HIPOs has always been a major issue in human resource management, as it plays a significant role in the execution of organizational strategy and the optimization of organizational structure <cit.>.
Traditional methods for HIPO identification usually rely on the subjective selection of HR experts. They primarily focus on evaluating certain talent factors, such as communication skills, teamwork, and self-learning <cit.>. However, these manually selected factors may lead to unintentional bias and inconsistencies <cit.>. Recently, with the development of ONA, objective data-driven HIPO identification has become possible. The rationale behind this is that HIPOs usually behave more actively and have higher competencies than their peers, accumulating social capital during their daily work <cit.>, so HIPOs can be detected implicitly through social information. Formally, the HIPO identification problem based on the organizational network can be formulated as:
Given a new employee v, who joined the company in the t-th time slice, and a set of organizational networks G = {G_t, G_t+1, ..., G_t+k}, where G_t represents the organizational network of time slice t, the objective is to develop a model f(v, G)=y to predict whether v is a HIPO (i.e., y=1) or not (i.e., y=0).
To solve this problem, Ye et al. <cit.> proposed a neural network-based dynamic social profiling approach for quantitative identification of HIPOs, which focuses on modeling the dynamics of employees' behaviors within the organizational network. In particular, they applied GCN and social centrality analysis to extract both local and global information in the organizational network as social profiles for each employee. They then adopted an LSTM with a global attention mechanism to capture the profile dynamics of employees during their early careers. Finally, they evaluated their model on real-world talent data, which clearly validated the effectiveness and interpretability of the proposed model.
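As a simplified, non-neural stand-in for the GCN-and-LSTM pipeline described above, the sketch below builds per-slice social profiles from graph centralities over a sequence of toy communication networks, concatenates them into a trajectory feature vector, and fits a logistic regression to predict HIPO labels; the networks and labels are synthetic.

```python
import numpy as np
import networkx as nx
from sklearn.linear_model import LogisticRegression

def social_profile(graph, employee):
    """Simple per-slice social profile: degree, betweenness, and closeness centrality."""
    return [
        nx.degree_centrality(graph).get(employee, 0.0),
        nx.betweenness_centrality(graph).get(employee, 0.0),
        nx.closeness_centrality(graph).get(employee, 0.0),
    ]

rng = np.random.default_rng(1)

# Toy monthly communication networks over 50 employees and 6 time slices.
slices = [nx.gnp_random_graph(50, 0.1, seed=s) for s in range(6)]

# Concatenate each employee's centrality trajectory into one feature vector
# (the cited work instead feeds the sequence into an attentive LSTM).
X = np.array([np.concatenate([social_profile(G, v) for G in slices])
              for v in range(50)])
y = rng.integers(0, 2, size=50)               # toy HIPO labels from HR records

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("predicted HIPO probability of employee 0:", clf.predict_proba(X[:1])[0, 1])
```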
§.§ Organizational Stability Analysis
Generally, organizational stability is determined by multiple factors, such as the structure of the organization itself and the compatibility between employees and organizations. In this part, we will introduce AI-related techniques for organizational stability from two perspectives, namely organizational structure and person-organization fit.
§.§.§ Organizational Structure
An organizational structure defines how activities such as task allocation, coordination, and supervision are directed toward the achievement of organizational aims <cit.>. Considering the availability of data, existing AI-related research has largely explored the formation and optimization of organizations (e.g., teams) under certain goals.
Team Formation. Given a project, team formation aims to discover a team of experts that collectively cover all the required skills, as shown in Figure <ref>.
Although the problem is proven to be NP-hard <cit.>, it still needs to be solved in many real-world scenarios, such as team discovery in a social network that contains professionals who provide specialized skills or services.
For instance, Kargar et al. <cit.> proposed a method to find the target team with minimal communication cost as well as minimal personnel cost for the project. Specifically, they used a graph to model a social network where nodes represent experts and formulated the task as a constrained bi-criteria optimization problem. Since minimizing the combined cost function is proved to be NP-hard as well, the authors efficiently solved the problem with an approximation algorithm and three heuristic algorithms in polynomial time.
Later, Zihayat et al. <cit.> took both communication cost and experts' authority into account and proposed greedy algorithms to solve the optimization problem.
Since these team formation algorithms are based on very different criteria and performance metrics, Wang et al. <cit.> implemented these algorithms using a common platform and evaluated their performance with several real datasets.
However, these studies have limitations in terms of scalability and fail to effectively manage the dynamic nature of expert networks. To this end, instead of searching over the graph representation of the expert network, Hamidi et al. <cit.> searched for variational distributions of experts and skills in the context of a team. To be specific, they employed a variational Bayesian neural network to form the optimal team, contributing to a better performance than prior state-of-the-art.
Another scenario that draws wide attention is the fast-growing online labor marketplaces, which sharply decrease communication costs. For example, Liu et al. <cit.> first implemented team formation in crowdsourcing markets with consideration of the impact of teamwork. They designed a mechanism that combines a greedy selection rule and a special payment scheme, obtaining various desirable properties such as efficiency, profitability, and truthfulness. In addition, Barnabo et al. <cit.> considered the fairness of algorithms related to these online marketplaces. They formalized Fair Team Formation as the problem of finding the cheapest team that can complete the task and, at the same time, counts the same number of people from two non-overlapping classes. Consequently, four algorithms were designed to solve the problem, and experiments on real-world data confirmed their effectiveness.
Yet, most of these works focus on the offline version of the team formation problem, i.e., the tasks to be completed are known a priori. To this end, Anagnostopoulos et al. <cit.> studied the problem of online cost minimization, where the goal is to minimize the overall cost (hiring, outsourcing, and salary costs) of maintaining a team that can complete the arriving tasks. Moreover, the study considers a more complex case of outsourcing, i.e., hiring, firing, and outsourcing decisions can be taken by an online algorithm, leading to cost savings with respect to the alternatives.
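Because exact team formation is NP-hard, heuristics are common in practice. The sketch below shows a plain greedy set-cover heuristic that repeatedly hires the expert covering the most still-missing skills; unlike the cited algorithms, it ignores communication and personnel costs, and the candidate pool is hypothetical.

```python
def greedy_team_formation(required_skills, candidates):
    """Greedy set-cover heuristic: repeatedly hire the expert covering the most
    still-missing skills (an approximation; the exact problem is NP-hard)."""
    missing = set(required_skills)
    team = []
    while missing:
        best = max(candidates, key=lambda name: len(candidates[name] & missing))
        gained = candidates[best] & missing
        if not gained:                        # remaining skills cannot be covered
            break
        team.append(best)
        missing -= gained
    return team, missing

candidates = {
    "alice": {"python", "nlp"},
    "bob": {"python", "databases"},
    "carol": {"nlp", "ux", "frontend"},
    "dave": {"databases", "devops"},
}
team, uncovered = greedy_team_formation(
    {"python", "nlp", "databases", "devops"}, candidates)
print(team, uncovered)
```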
Team Optimization. Two key problems within the scope of team optimization are team member replacement and team expansion, as illustrated in Figure <ref>.
Specifically, team member replacement was first put forward in <cit.>; it aims to find a good candidate to best replace a team member who becomes unavailable to perform the task. To tackle this problem, the authors introduced the concept of graph kernels that takes into account the interaction of both skill and structure matching requirements. Furthermore, they proposed a series of effective and scalable algorithms for this problem.
Later in <cit.>, the authors further took the synergy between skill similarity and structural similarity into consideration, instead of considering the two aspects independently.
In addition, some effort has been paid on team expansion. Zhao et al. <cit.> formally defined the problem in collaborative environments and proposed a neural network-based approach, considering three important factors (team task, existing team members, and candidate team member) as well as their interactions simultaneously.
However, most works on team optimization treat teams as a static system and recommend a single action to optimize a short-term objective. To this end, Zhou et al. <cit.> proposed a deep reinforcement learning-based framework that continuously learns and updates its team optimization strategy by incorporating both skill similarity and structural consistency.
§.§.§ Person-Organization Fit
Person-Organization fit (P-O fit) refers to the compatibility between employees and their organizations. In fact, P-O fit has been widely recognized as an effective indicator of proactive talent management, and it has a significant impact on outcomes such as work attitudes, turnover intentions, and job performance <cit.>.
In the domain of organizational behavior, most studies measure P-O fit based on the similarity between organizational profile and employees' profile. Figure <ref> presents the classical P-O fit modeling process. At first, experts collect information with questionnaires to extract employee and organization profiles and manually design metrics. Then, the statistical methods are applied to measure the congruence between an employee and an organization as P-O fit score. However, this process is labor-intensive and subjective, which is difficult to apply to real-world applications. To this end, AI-driven techniques are proposed to automatically extract the profiles and model P-O fit in a dynamic, quantitative, and objective manner. Formally, the P-O fit problem is defined as follows:
Given a sequence of time periods, each associated with an organizational network G^t=(V, E^t), where the nodes V are employees and the links E^t indicate their relationships (e.g., reporting relationships) in time period t, each node v_i has a feature vector x^t_i representing its traits and behaviors in the t-th time period. The target of Person-Organization Fit is to model the compatibility of each node on the organizational tree with its local environment, capture its dynamic nature and patterns, and accordingly predict relevant talent outcomes y.
To solve this problem, Sun et al. <cit.> proposed a new P-O fit modeling process based on AI technology, as shown in Figure <ref>. Specifically, they first extracted features automatically from collected employees’ in-firm data and generated person profiles by dimension reduction on these features. Then, they exploited the organization profile by combining the organization’s structure with the profiles of the employees and extracted a unique environment profile for each employee based on their corresponding positions. Finally, they applied a deep neural network to achieve a more complicated mapping from person and environment profiles to a P-O fit representation. To capture the dynamic nature of P-O fit and its consequent impact, they exploited an adapted Recurrent Neural Network with an attention mechanism to model the temporal information of P-O fit. Later, in <cit.>, Sun et al. further proposed the attentional features extraction layers that can distinguish individualized relation-level and individual-level influence differences for different nodes on the organizational tree. This largely enhanced the performance of person-organization compatibility modeling and improved the interpretability.
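In its simplest static form, the P-O fit idea above reduces to comparing a person profile with an environment profile aggregated from the employee's neighborhood on the organizational tree. The sketch below computes such a fit score as a cosine similarity over toy profile vectors; it omits the deep mapping and temporal modeling of the cited approach.

```python
import numpy as np

# Toy profiles: each employee is a feature vector derived from in-firm behavior data.
profiles = {
    "emp_a": np.array([0.9, 0.2, 0.4]),
    "emp_b": np.array([0.8, 0.3, 0.5]),
    "emp_c": np.array([0.1, 0.9, 0.6]),
    "emp_d": np.array([0.7, 0.4, 0.3]),
}
# Local environment on the organizational tree (manager and direct peers).
environment = {"emp_a": ["emp_b", "emp_d"],
               "emp_b": ["emp_a", "emp_c"],
               "emp_c": ["emp_b"],
               "emp_d": ["emp_a"]}

def po_fit(employee):
    """Cosine similarity between a person profile and the mean profile of
    her local environment on the organizational tree."""
    env = np.mean([profiles[n] for n in environment[employee]], axis=0)
    p = profiles[employee]
    return float(p @ env / (np.linalg.norm(p) * np.linalg.norm(env)))

for e in profiles:
    print(e, round(po_fit(e), 3))
```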
§.§ Organizational Incentive Analysis
Compensation and benefits (C&B) represent one of the most important branches of human resources, which plays an indispensable role in attracting, motivating, and retaining talents. It includes the process of determining how much an employee should be paid and deciding what benefits should be offered. In the past few decades, considerable efforts have been made in this research direction from the management perspective. Recently, the accumulation of massive job related-data enables a new paradigm for organizational incentive analysis in a data-driven view. In this part, we will introduce two classic data-driven tasks in C&B, namely job title benchmarking and salary benchmarking respectively.
§.§.§ Job Title Benchmarking
Job title benchmarking (JTB), as an important function in C&B, aims at matching job titles with similar expertise levels across various organizations (i.e., companies), which provides precise and substantial facilitation of job and salary calibration/forecasting for both talent recruitment and job seekers. Traditional JTB mainly relies on manual market surveys, which are expensive and labor-intensive. Recently, the popularity of online professional networks helps to accumulate massive career records, which provides the opportunity for a data-driven solution. Formally, JTB can be defined as follows:
JTB is a process that matches job titles with similar expertise levels across various companies. Formally, given two job title-company pairs, i.e., (title_i, company_i) and (title_j, company_j), the objective is to determine whether the given pairs are on the same level.
To handle this problem, Zhang et al. <cit.> proposed to construct a Job-Graph by extracting information from large-scale career trajectory data, where nodes represent job titles affiliated with the specific companies and edges represent the numbers of transitions between job titles. They redefined JTB as a link prediction task on the Job-Graph by assuming that the benchmarked job title pairs should have a strong correlation with the link. Along this line, they proposed a collective multi-view representation learning model to represent job titles from multiple views, including graph topology view, semantic view, job transition balance view, and job transition duration view. Subsequently, they devised a fusion strategy to generate a unified representation from multi-view representation. Finally, they leveraged the similarity between these representations as an indicator for job title benchmarking.
§.§.§ Job Salary Benchmarking
Job salary benchmarking (JSB) refers to the process by which organizations obtain and analyze labor market data to determine appropriate compensation for their existing and potential employees <cit.>. Traditional approaches for JSB mainly rely on the experience from domain experts and market surveys provided by third-party consulting companies or governmental organizations <cit.>. However, fast-developing technology and industrial structure lead to changes in positions and job requirements, making it difficult to conduct salary benchmarking in a timely manner in dynamic scenarios.
In recent years, the prevalence of emerging online recruiting services, such as Indeed and Lagou, has provided the opportunity to accumulate vast amounts of job-related data from a wide range of companies, enabling a new paradigm of compensation benchmarking in a data-driven manner. Formally, the job salary benchmarking problem can be formulated as follows:
Suppose there are job positions i=1,2,...,I and location-specific companies j=1,2,...,J. Each position i has some features, e.g., bag-of-words, and each location-specific company j can be described by a list of features, e.g., location and industry. Given a combination of position and company (i,j), the objective is to predict its salary ŝ_ij so that the similarity between ŝ_ij and the real observation s_ij is maximized.
To address this problem, one straightforward procedure is to construct a job-company salary matrix, where each entry indicates the salary of the corresponding job-company pair; the JSB problem can then be regarded as a matrix completion task. Generally, matrix factorization (MF) is a widely used method for handling this task. It factorizes the incomplete job-company salary matrix into two lower-rank latent matrices and uses their product to estimate the possible salary of the missing entries. However, this intuitive method is too general to meet the various special needs of C&B professionals. To this end, Meng et al. <cit.> proposed an expanded salary matrix that augments the original job-company salary matrix with location and time information for fine-grained salary benchmarking. They then designed a matrix factorization based model for predicting the missing salary information in the expanded salary matrix by integrating multiple confounding factors, including company similarity, job similarity, and spatial-temporal similarity. Further, Meng et al. <cit.> designed a nonparametric Dirichlet-process-based latent factor model for JSB, which learns representations for companies and positions to alleviate the data deficiency problem. Experiments on two large-scale real-world datasets demonstrated the effectiveness and interpretability of the proposed model.
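To make the matrix-completion view concrete, the following minimal sketch factorizes a toy job-company (log-)salary matrix using observed entries only; the sizes, data, and hyper-parameters are our own illustrative assumptions, and the expanded spatio-temporal models of <cit.> go well beyond this plain MF baseline.

import numpy as np

rng = np.random.default_rng(0)
I, J, K = 50, 30, 8                          # jobs, companies, latent rank (assumed)
S = np.log(rng.uniform(3e4, 2e5, (I, J)))    # toy "true" log-salaries
mask = rng.random((I, J)) < 0.2              # only ~20% of job-company pairs observed

mu = S[mask].mean()                          # global bias over observed entries
U = 0.1 * rng.standard_normal((I, K))
V = 0.1 * rng.standard_normal((J, K))
lr, reg = 0.05, 0.02
for _ in range(500):
    E = mask * (S - mu - U @ V.T)            # residual error on observed entries only
    U += lr * (E @ V - reg * U)              # gradient steps on the regularized squared loss
    V += lr * (E.T @ U - reg * V)

S_hat = np.exp(mu + U @ V.T)                 # benchmark salaries for the missing pairs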
§.§ Summary
In conclusion, AI-related techniques for organization management cover three aspects: organizational network analysis, organizational stability analysis, and organizational incentive analysis. Specifically, organizational network analysis aims to help understand the importance of critical connections and flows in an organization, which can serve downstream talent management applications. Organizational stability analysis focuses on analyzing the composition of the organization and exploring the compatibility between employees and organizations. Finally, organizational incentive analysis concentrates on leveraging data-mining techniques to solve the job title/salary benchmarking problems in human resources.
§ LABOR MARKET ANALYSIS
Labor market analysis is crucial to the formulation of talent strategy and is therefore an important part of intelligent talent management.
Traditionally, most existing studies on the labor market rely on expert knowledge, subjective surveys, and qualitative analysis from psychological, economic, and cultural perspectives <cit.>. These methods make it challenging to uncover the complex associations among multi-source data and the hidden patterns in massive data, and their efficiency is limited by manual analysis. Moreover, some studies that rely on data collected online typically employ causal inference or statistical analysis methods. For example, Jackson et al. <cit.> deployed psychometric measures with internet surveys to infer the reasons behind talent flow. Hershbein et al. <cit.> analyzed concentration in labor markets from vacancy and employment data. Hershbein et al. <cit.> deployed various statistical approaches to analyze the different skill requirements of job postings in different economic situations.
Recently, the prevalence of Online Professional Networks (OPNs) and online recruitment websites has facilitated the accumulation of a large number of job reviews, company reviews, digital resumes, and job postings. These sources contain a wealth of intricate and diverse information about the labor market, including talent flow, talent demand, market trends, job skills, company branding, and more. These extensive datasets provide novel perspectives and opportunities for conducting a more fine-grained analysis of the labor market at a large scale.
However, traditional methods struggle to efficiently discover complex market patterns from data and accurately predict market trends. AI and machine learning algorithms possess powerful pattern recognition, data generalization, and fitting capabilities, making them well-suited for exploring labor market data <cit.>. Many researchers have analyzed the labor market with AI methods, mainly from four aspects: talent flow analysis, job analysis, skill analysis, and brand analysis. To help readers check the literature, we summarize these papers in Table <ref>, which lists their tasks, techniques, and the data adopted in these works. Figure <ref> presents an overview of works related to labor market analysis.
§.§ Talent Flow Analysis
Talent flow analysis mainly includes talent flow prediction and various other flow pattern analyses. These tasks primarily leverage OPN data, which reflect the flow of talent between different companies, to analyze talent flow in the market and help formulate company strategies. Following <cit.>, we denote talent flow as a transition tensor R^t∈ℝ^N× N × M for each time slice t, where N denotes the number of companies, M denotes the number of job positions, and each element R^t_ijk is defined as the normalized number of corresponding job transitions:
R^t_ijk = Num^t_i,j,k / ∑_j'=1^N Num^t_i,j',k,
where Num^t_i,j,k denotes the number of transitions from job position k of company i to company j at time slice t. Based on this definition, talent flow analysis mainly comprises talent flow prediction and various flow pattern analyses.
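As a concrete illustration of this definition, the normalized tensor R^t can be computed from raw transition counts as follows (the toy sizes and random counts are assumptions for the example):

import numpy as np

rng = np.random.default_rng(0)
N, M = 5, 3                                    # toy numbers of companies and positions
num_t = rng.integers(0, 20, size=(N, N, M)).astype(float)   # raw transition counts

denom = num_t.sum(axis=1, keepdims=True)       # total outflow of position k from company i
R_t = np.divide(num_t, denom, out=np.zeros_like(num_t), where=denom > 0)

# Each slice R_t[i, :, k] now sums to 1 (whenever company i had any outflow for position k),
# i.e. it is the empirical distribution of destinations for that origin/position pair.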
§.§.§ Flow Prediction
The task of talent flow prediction mainly revolves around anticipating changes in the labor market, thereby offering guidance for talent strategies. AI-driven techniques can enhance the accuracy and flexibility of such predictions.
Formally, the talent flow prediction problem can be defined as follows:
Given a set of talent flow tensors R^1, ..., R^T, together with some attributes of companies and the market context, the goal of talent flow prediction is to predict the value of R^T+1_ijk.
To solve this problem,
Zhang et al. <cit.> designed a dynamic latent factor-based Evolving Tensor Factorization (ETF) model for predicting future talent flows. In detail, they used U^t_i, V^t_j, W^t_k to represent the latent vectors of origin company i, destination company j, and job position k at time slice t, and evolved them to time slice t+1 to predict talent flows at t+1. This model also integrates several representative attributes of companies as side information for regularizing the model inference.
The authors also proposed a Talent Flow Embedding (TFE) model to learn the bi-directional talent attractions of each company <cit.>. Subsequently, they explored the competition between different companies by analyzing talent flows using data from OPNs. In detail, the objective of this latent variable model is to learn two attraction vectors S_u and T_u from the talent flow network G, where S_u is the source attraction vector of company u and T_u is the target attraction vector of company u. The dot product of S_u and T_v indicates the talent flow from company u to company v. The experimental results reveal pairwise competitive relationships between different companies.
Xu et al. <cit.> enriched the sparse talent flow data by exploiting the correlations between the stock price movement and the talent flows of public companies. They developed a fine-grained data-driven RNN model to capture the dynamics and evolving nature of talent flows, utilizing the rich information available in job transition networks.
§.§.§ Flow Pattern Analysis
Many researchers also explored other various flow pattern analysis tasks, such as competitiveness analysis, hopping behavior analysis, and talent circle detection.
In <cit.>, the authors first gathered job-related information from various social media data. Subsequently, they developed a model called JobMiner, which mainly employs graph mining techniques to identify influential companies and uncover talent flow patterns. This method provides a better understanding of company competition and talent flow in professional social networks.
Cheng et al. <cit.> further developed machine learning and analytical techniques for mining OPN data, from which they can identify influential companies together with their related company groups and evaluate each company's influence and competitiveness.
Oentaryo et al. <cit.> developed a series of data mining methodologies to analyze job-hopping behavior between different jobs and companies using publicly available OPN data. In detail, they used a weighted version of PageRank to measure the competitiveness of jobs or companies, and constructed several metrics to measure the relationship between various properties of jobs and the propensity of hopping.
Then, Oentaryo et al. <cit.> enhanced the data mining framework for analyzing talent flow patterns. The results show that the factors influencing employee turnover can mainly be divided into four categories: employee personal factors, organizational factors, external environmental factors, and structural factors.
Xu et al. <cit.> developed a talent circle detection model and designed the corresponding learning method, which maximizes the Normalized Discounted Cumulative Gain (NDCG) to detect suitable circle structures. Each talent circle includes organizations with similar talent exchange patterns. Formally, a talent circle is a subset of neighbors of an ego node; within one circle, nodes are closely connected and similar to each other. Circles can be denoted as {C_m∈ℂ}, where m = 1, 2, ..., M and C_m ⊆ V, with V representing the set of all organizations. Circles may overlap, and appropriate talent circles correspond to similar flow patterns and close connections.
The detected talent circles can be used to predict future talent exchange and improve recommendations in talent recruitment and job search.
§.§ Job Analysis
Job analysis mainly focuses on the analysis of trends, such as demand trends, topic trends, and other relevant factors. These tasks mainly use job posting data, which contain information about the recruitment demand of companies, to analyze the situation of the recruitment market. Following <cit.>, the job recruitment data can be denoted as a trend tensor J^t∈ℝ^N × M for each time slice t, where N denotes the number of companies, M denotes the number of job positions, and each element J^t_ij is the number of job postings published at time slice t by company i for position j. Meanwhile, additional context information regarding companies and job positions can be denoted as {C_1,...,C_N} and {P_1,...,P_M}. Based on these data, a variety of trend analysis tasks can be explored.
§.§.§ Demand Trend Analysis
Demand trend analysis mainly focuses on the volume of job recruitment.
Some studies <cit.> used the SVM classifier and the STL decomposition method (Seasonal and Trend decomposition using Loess) to analyze the characteristics of demand time series obtained from web data and official data. These studies show that web data can reflect labor market trends.
Karakatsanis et al. <cit.> suggested a data mining-based approach for identifying the most in-demand occupations in the modern job market. In detail, a Latent Semantic Indexing (LSI) model was developed for online job posts with job description data. The analysis highlights the most in-demand jobs and identifies occupational clusters.
Zhang et al. <cit.> proposed the Talent Demand Attention Network (TDAN), which forecasts fine-grained talent demand in the labor market. Specifically, they constructed multiple granularity levels of information (e.g., market level, company level, and job level) as well as the intrinsic attributes of both companies and job positions from recruitment posting data. Then, they designed a transformer-based attentive neural network that automatically utilizes this information to forecast the demand trend of each job in each company. Such demand forecasting results are crucial for continuously reviewing a company's talent recruitment strategies.
§.§.§ Topic Trend Analysis
The topic trend primarily focuses on text mining and language modeling techniques applied to job postings. For instance,
Mbah et al. <cit.> analyzed and visualized job description data using text mining techniques to discover the trends of the job market.
Marrara et al. <cit.> designed a language modeling approach for discovering novel occupations in the labor market, which can help companies catch new recruitment trends.
Zhu et al. <cit.> developed MTLVM, a sequential latent variable model that captures sequential patterns of recruitment states. Moreover, it automatically learns latent recruitment topics through a Bayesian generative framework. In detail, it uses c_e,t to represent the latent recruitment state of company e at time step t. The transition probabilities between different states are then learned to analyze the evolving rules of the recruitment trend, and the topic model is deployed to reveal the trends of different recruitment topics.
§.§ Skill Analysis
Tasks related to skill analysis mainly concentrate on exploring the relationship between jobs and skills, such as analyzing the skills required for different jobs and estimating the value of newly emerged skills. These tasks rely on job posting data, which contains information on skills, jobs, and salaries, to analyze the situation of job skills in the recruitment market. Following <cit.>, the job postings can be denoted as
𝒫={(D_i,J_i,S_i,Y_i,T_i)| i=1,2,...},
where D_i denotes the job description, J_i the job title, S_i the required skill set, Y_i the job salary, and T_i the publication time.
§.§.§ Skills Requirement
The skill requirements can be inferred from job postings, and analyzing these requirements can provide valuable assistance in talent selection, job description formulation, and other related tasks.
The skill requirement prediction task can generally be formulated as:
Given a set of job postings 𝒫={(J_i,S_i,(D_i,Y_i,T_i)^*)| i=1,2,...}, where * indicates optional information, the goal of skill requirement prediction is to measure the required level or popularity of the potential skills S_i for job J_i or for the labor market.
To solve this problem,
Colombo et al. <cit.> deployed language models and machine learning classification approaches, e.g., SVM, to calculate the skill requirements of jobs. Furthermore, they classified job skills into a standard classification system and measured the relevance of soft and hard skills, which is important for talent selection and culture cultivation.
Akhriza et al. <cit.> applied the Apriori association rule algorithm and used recommendation techniques based on the mined skill associations to determine the most sought-after IT skills in the industry.
Patacsil et al. <cit.> applied the Frequent Pattern-growth (FP-growth) association rule algorithm to analyze the relationship between jobs and skill requirements, which provides a new dimension in labor market research. The resulting job skill requirements are important for enhancing training strategies.
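The following hedged sketch shows how such association rules over skill sets could be mined with the mlxtend library; the toy skill lists and thresholds are our assumptions, and the cited works rely on Apriori or FP-growth implementations of their own rather than this exact code.

import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Each posting is treated as a transaction of required skills.
postings = [
    ["python", "sql", "spark"],
    ["python", "sql"],
    ["java", "sql", "spark"],
    ["python", "machine learning", "sql"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(postings).transform(postings), columns=te.columns_)
frequent = apriori(onehot, min_support=0.5, use_colnames=True)        # frequent skill sets
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])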
Wowczko et al. <cit.> used k-NN clustering methods to identify key skill requirements in online job postings.
Wu et al. <cit.> designed the Trend-Aware Tensor Factorization (TATF) framework to analyze the skill demand of jobs. In detail, TATF models the relationship between skills and jobs as a four-dimensional tensor, where each element e_t,c,p,s reflects the demand trend of skill s in job p at company c at time t. They then enhanced tensor factorization with aggregation-based constraints, i.e., competition-based (among companies) and co-occurrence-based (among skills) aggregations. Furthermore, they designed a temporal constraint on top of previous models to output job and skill representations that can quantify the potential skill trends of jobs.
Xu et al. <cit.> proposed the Skill Popularity based Topic Model (SPTM) for modeling the generation of the skill network. They used the neighbors of a skill on the skill network to generate a document for this skill; the documents can then be used to analyze the popularity of the skill with topic models. This kind of topic model can integrate different criteria of jobs (e.g., salary levels, company size) and the latent connections between different skills. They then effectively ranked job skills based on this multi-faceted popularity.
§.§.§ Skill Value Estimation
In recent times, the evaluation of skills has garnered significant attention from researchers and companies alike. This task holds importance not only for companies seeking to identify and retain top talent but also for individuals aiming to proactively acquire essential skills for their desired career path.
Formally, the task of estimating the value of skills can be defined as follows:
Given a set of job postings 𝒫={(J_i,S_i,Y_i,(D_i,T_i)^*)| i=1,2,...}, where * indicates optional information, the goal of skill value estimation is to measure the value of each skill in S_i such that the combined skill values of job J_i compose the salary Y_i.
Sun et al. <cit.> proposed an enhanced neural network with a cooperative structure, the Salary-Skill Composition Network (SSCN), for separating job skills and measuring their value from massive job postings. Figure <ref> shows an overview of the workflow. In detail, this method mainly contains two modules: the Context-aware Skill Valuation Network (CSVN), which dynamically models the skills, extracts context-skill interactions, and estimates the context-aware skill value; and the Attentive Skill Domination Network (ASDN), which extracts an influence representation for each skill to model their mutual dominating influence from the skill graph. The estimated value of job skills can help companies formulate talent strategies.
§.§ Brand Analysis
Brands are among the most precious assets of a company, highlighting its talent attractiveness in terms of working and innovation, and its corporate image as perceived by employees and public opinion. It is crucial for corporations to manage brands as a strategic talent tool to keep up with the continuously changing business world, and how to formulate a strategy for improving the brand is attracting increasing attention in the area of talent management. Traditionally, approaches for brand analysis mainly depend on surveys and interviews with expert knowledge. For example, Ambler et al. <cit.> interviewed respondents from several companies about the relevance of branding to HRM. Arasanmi et al. <cit.> used an online survey to collect data and analyzed the relationships between employer branding, job designs, and employee performance with statistical methods. Fatma et al. <cit.> collected data through a social survey and analyzed the impact of Corporate Social Responsibility (CSR) on corporate brand equity.
Recently, with the development of the Internet and online professional social networks, a large amount of company review data and various public data related to companies, e.g., online reviews, news, and Twitter posts, can be collected. These data provide new perspectives and opportunities for more comprehensive company brand analysis. However, traditional methods struggle to analyze massive unstructured text data, whereas rapidly developing AI technology provides suitable methods for this kind of data. In particular, the topic model <cit.>, which can cluster the latent semantic structure of a corpus in an unsupervised manner, is well suited to the semantic analysis and text mining required for brand analysis <cit.>. Figure <ref> presents an overview of works related to brand analysis.
§.§.§ Company Profiling
Company profiling is an analytical task for understanding the fundamental characteristics of companies. AI-driven approaches provide an opportunity to profile companies from abundant and diverse online employment data.
In <cit.>, Lin et al. proposed CPCTR, a Bayesian model that combines topic modeling with matrix factorization to obtain company profiles from online job and company reviews. In detail, CPCTR groups reviews by job position and company, and denotes two word lists {w^P_n,j,e}^N_n=1 and {w^C_m,j,e}^M_m=1 representing positive and negative opinions for a specific job position j and company e. It then formulates a joint optimization framework for learning the latent patterns v_j,e of companies with different jobs, which leads to a more comprehensive interpretation of company profiling and provides a collaborative view of opinion modeling. Subsequently, in <cit.>, they provided a Gaussian processes–based extension, GPCTR, which can capture the complex correlation among heterogeneous information and improve the profiling performance.
Bajpai et al. <cit.> provided a hybrid algorithm, which works as an ensemble of unsupervised and machine learning approaches, for company profiling from online company review data. First, this work uses a CNN and Doc2Vec to extract the important opinion aspects from reviews. It then combines universal dependency modifiers with a sentiment dictionary to assign a polarity to each aspect of each company; if this fails to assign a score to an aspect, an ELM model is used to predict the polarity. Finally, each company can be embedded in an n-dimensional representation space, where n is the number of aspects.
§.§.§ Opinion Analysis
The opinion analysis task focuses on the analysis of a company's public reviews, especially the reviews of employees. These reviews reflect feedback on the company's talent strategy, which is important for further iterating toward the right strategy. As a widely used text analysis model, the topic model is an appropriate method for the opinion analysis task. For example,
Moniz et al. <cit.> proposed an aspect-sentiment model based on the LDA approach for analyzing company reviews. This LDA-based approach identifies salient aspects in company reviews and manually infers one latent topic that appears related to the firm's vision. They then combined the satisfaction topic information of company reviews with existing methods for earnings prediction. According to the results, employee satisfaction is important for firm earnings.
Ikoro et al. <cit.> proposed a lexicon-based sentiment analysis method for analyzing public opinion about corporate brands. In detail, they combined two sentiment lexica to extract two levels of sentiment terms, and collected over 60,000 tweets covering nine companies from Twitter. LDA methods were then deployed to discover the sentiment topics.
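As a minimal illustration of the topic-model-based opinion analysis discussed above (the toy review texts and topic count are our assumptions; the cited aspect-sentiment and structural topic models extend plain LDA considerably):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "great work life balance and supportive manager",
    "low salary and slow promotion but friendly colleagues",
    "strong engineering culture, interesting projects, long hours",
    "management ignores feedback, high turnover in my team",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)                      # bag-of-words counts per review
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):         # word weights of each latent topic
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}:", ", ".join(top))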
Chae et al. <cit.> analyzed CSR based on Twitter posts; they applied the Structural Topic Model algorithm to discover the correlation between different responsibility topics and the sequential trend of topics. The results also show that the CSR and corporate reputation of a firm are important to its brand equity.
In addition, some authors explored other machine learning methods for brand analysis. For example,
Spears et al. <cit.> investigated the impact of public opinion on companies' earnings over time. The public opinions were extracted from news and social media, and the earnings were collected from earnings reports. They then used a Markov switching model to quantify the relationship between the impact of bad publicity and the finances of companies, which can guide a company in building a strong brand image.
§.§ Summary
In general, AI-related labor market analysis primarily focuses on four key aspects: talent flow analysis, job analysis, skill analysis, and brand analysis. The research data predominantly consists of various public data sources, including online professional networks (OPNs), social media platforms, job postings, and job and company reviews.
Talent flow analysis mainly includes the talent flow prediction task and various other flow pattern analysis tasks. Job analysis mainly focuses on the analysis of trends, such as new job trends, demand trends, and topic trends. Skill analysis mainly explores how to measure the skill requirements of jobs and the value of skills.
Moreover, the brand analysis related works aim to model the brand and culture of companies and analyze opinions about them.
However, this research on the labor market is still at an early stage; many advanced and promising AI methods could be combined with labor market analysis tasks to improve the intelligence of talent management.
§ PROSPECTS
In the above sections, we reviewed a variety of recent efforts in AI-based talent analytics in human resource management from three different aspects: talent management, organization management, and labor market analytics. Although it has helped enterprises deliver intelligence for effective decision-making and management, some urgent and vital issues still remain to be resolved. In this section, we outline some potential research directions toward handling those challenges and fostering further advancements in this field.
§.§ Multimodal Talent Analytics
Information about a phenomenon or a process in talent analytics-related scenarios usually comes in different modalities. For instance, we can obtain communication and project collaboration networks in employee collaboration analysis. Indeed, mining the multimodal data in talent analytics can help us enhance the effectiveness of different applications. For example, Hemamou et al. collected multimodal data from the job interview process, including text, audio, and video, and proposed a hierarchical attention model that achieves the best performance in predicting the hirability of candidates <cit.>. Recently, multimodal learning has been used to achieve multimodal data representation, translation, alignment, fusion, and co-learning in various domains, such as the commercial, social, and biomedical domains <cit.>. We can foresee that more multimodal learning methods will gradually be used extensively in talent analytics.
§.§ Talent Knowledge Management
Though AI-based approaches have achieved great success in acquiring and developing talents, relatively few works explore managing those talents' knowledge with AI technologies, which is the primary driving force of the economics of ideas <cit.>. There is an urgent need for additional AI-based technologies that focus on talent knowledge creation, sharing, utilization, and management. Such efforts are crucial for maximizing the potential of human resources and enhancing organizational productivity. Indeed, we can leverage knowledge graph-related technologies <cit.> to construct the talent's knowledge base and achieve efficient knowledge management. Moreover, we can transform the scenarios in talent development, such as knowledge learning and collaboration, into different recommendation scenarios and utilize recommendation algorithms to solve these problems. Recently, Wang et al. developed a personalized online course recommendation system based on employees' current profiles <cit.>. However, there is still a lack of algorithms that can recommend heterogeneous knowledge. The existing algorithms only take the individual perspective and have not been analyzed from the organizational aspect, such as organizational knowledge diversity or competitiveness.
§.§ Market-oriented Talent Analytics
AI technology has been effectively applied in labor market analytics <cit.>. However, those approaches mainly focus on the perspective of global market analysis and have not explored how the changing environment of the labor market affects internal talent management or organization management. In fact, combining macro and micro data in talent analytics is a vital research direction <cit.>. With the recent accumulation of internal and external data, there exists an exceptional opportunity to implement market-oriented talent analytics. For instance, Hang et al. leveraged job posting data to capture the potential popularity of employees in external markets specific to their skills and further achieved more accurate employee turnover prediction based on the market trend <cit.>. Moreover, the rapid development of AI technologies provides an excellent technical foundation for this direction. We can utilize multi-task learning <cit.> to jointly learn both macro and micro talent analytics-related tasks, and heterogeneous graph learning <cit.> can help us effectively model the correlation between macro and micro data.
§.§ Organizational Culture Management
In our survey, we reviewed the recent advances in AI techniques for talent analytics in HRM from three perspectives: talent management, organization management, and labor market analysis. Indeed, the culture of an organization is one of the most important means of addressing the never-ending quest to maintain organizational viability and effectiveness <cit.>. Generally, culture mainly contains three aspects: Mission, Vision, and Values (MVVs), which help employees understand what is encouraged, discouraged, accepted, or rejected within an organization, and facilitate the organization to thrive with a shared purpose. Recently, the availability of large-scale data on the whole life cycle of talents and organizations is gradually revealing extraordinary opportunities for leaders in culture management. For instance, culture and leadership are inextricably linked, and the best team leaders can effectively shape the culture <cit.>. Some researchers <cit.> discussed how ML techniques can be used to inform predictive and causal models of leadership effects. Accordingly, they further provided a step-by-step guide on designing studies that combine field experiments with ML applications to establish causal relationships with maximal predictive power. Meanwhile, several studies analyze leadership styles with data mining algorithms, demonstrating that different leadership styles significantly influence leadership outcomes <cit.>. Moreover, some researchers have tried to leverage text mining to analyze organizational culture. For instance, Schmiedel et al. leveraged online company review data and a topic model to explore employees' perception of corporate culture <cit.>. Li et al. <cit.> applied a topic model to obtain firm-level measures of exposure and response related to COVID-19 for many U.S. firms. In detail, they deployed the Correlated Topic Model (CTM) <cit.>, which is similar to Latent Dirichlet Allocation (LDA), with 35 topics to discover the correlation between COVID-19 and the company-level measures. The results show that, despite the large negative impact of COVID-19 on their operations, firms with a strong corporate culture outperform their peers without a strong culture. As a famous saying goes, “Culture eats strategy for breakfast”; employing AI technologies in organizational culture management will become one of the most critical research directions in the near future, because it can help managers conduct culture management scientifically.
§.§ Ethical AI in Talent Analytics
Admittedly, AI technologies have been widely used in talent analytics and have significantly improved the efficiency and accuracy of management. However, they still raise various concerns about how to ensure that AI technologies adhere to well-defined ethical guidelines regarding fundamental values. Recently, researchers have made efforts from two perspectives, i.e., fairness and explainability.
§.§.§ Fairness
Generally, the fairness of talent analytics needs to be considered frequently, because management decisions affect employees' lives and are directly related to the organization's values. Although AI technologies have achieved various successes in talent analytics, there is growing concern that such approaches may bring issues of unfairness to people and organizations, as evidenced by some recent reports <cit.>. For instance, Amazon scrapped its AI-based recruitment system due to its discriminatory impact against women <cit.>.
Recently, several studies have focused on the fairness of talent management with AI technologies from different perspectives. For instance, Qin et al. <cit.> verified that when sensitive features, such as gender and age, are involved in the person-job fit model, a model without special design will easily learn bias from the original data. Intuitively, we can solve this problem by removing sensitive features, which is regarded as one of the pre-processing methods for imposing fairness <cit.>. However, a large amount of unstructured data, such as audio and video data from the interview process, already contains sensitive information. The model can easily infer potentially sensitive attributes from such data, which may still bias AI algorithms <cit.>. In <cit.>, Pena et al. focused on a multimodal system to predict recruitability from the rich information in the candidate's resume, i.e., image and structured features. The authors first demonstrated that the deep learning model can reproduce biases from the training data, even without the sensitive features. They further introduced an adversarial regularizer <cit.> that can remove the sensitive information from unstructured features, so as to achieve fairness of the algorithm. Similarly, Yan et al. leveraged both data balancing and adversarial learning to mitigate bias in multimodal personality assessment <cit.>. Moreover, there exist several open-source tools, such as AIF360 <cit.>, FairML <cit.>, and Themis-ML <cit.>, that can facilitate systematic bias checks and embed fairness in AI algorithms <cit.>. In addition, AI technologies can also help to reduce human bias in different talent management scenarios. For instance, AI technologies have been applied to detect potentially problematic words in job postings that lead to bias or even legal risks, and further assist employers in writing inclusive job descriptions <cit.>. Therefore, ensuring the fairness of AI algorithms is an important research direction that is receiving more and more attention in talent analytics.
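To illustrate the adversarial idea in a generic form, the sketch below uses a gradient-reversal layer so that an auxiliary head tries to predict a sensitive attribute from the learned representation while the reversed gradient pushes the encoder to discard that information; this is our simplified illustration, not the exact regularizer of the cited works.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None   # flip the gradient flowing into the encoder

encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
task_head = nn.Linear(16, 1)                  # e.g. a fit / hirability score
adv_head = nn.Linear(16, 2)                   # tries to recover a binary sensitive attribute

x = torch.randn(64, 32)                       # dummy candidate features
y = torch.randn(64, 1)                        # dummy task targets
s = torch.randint(0, 2, (64,))                # dummy sensitive attribute labels

z = encoder(x)
task_loss = nn.functional.mse_loss(task_head(z), y)
adv_loss = nn.functional.cross_entropy(adv_head(GradReverse.apply(z, 1.0)), s)
(task_loss + adv_loss).backward()             # one joint step (optimizer omitted for brevity)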
§.§.§ Explainability
Recently, there has been an increasing concern among employees and managers regarding the decisions made by black-box AI algorithms. Questions arise regarding the basis of these decisions, understanding the factors behind algorithmic success or failure, and determining how to rectify any errors that occur.
Therefore, research interest in increasing the transparency of AI-based automated decision-making in talent management is re-emerging <cit.>. For instance, Qin et al. proposed to leverage attention mechanisms to explain the matching degree between the content of a job posting and a resume <cit.>. Zhang et al. further introduced hierarchical attention and collaborative attention mechanisms to increase the explainability of the person-job fit model at both the structured and unstructured information levels <cit.>. Upadhyay et al. leveraged knowledge graph and named entity recognition technologies to generate understandable textual explanations for job recommendations <cit.>. In <cit.>, Kaya et al. focused on constructing an end-to-end system for explainable automatic job candidate screening from video resumes. The authors extracted audio, face, and scene features and leveraged decision trees to both predict whether candidates would be invited to an interview and explain the decisions by using binarization with a threshold. Liem et al. further handled the job candidate screening problem from the interdisciplinary viewpoint of psychologists and machine learning scientists <cit.>. Moreover, Juvitayapun et al. used tree-based models to calculate the importance of different features to enhance the explainability of AI-based turnover prediction <cit.>.
However, the current approaches only consider the perspective of AI model design and fail to consider whether employees or managers can easily comprehend and grasp the explanatory conclusions provided by the model. Indeed, visual analytics is a natural way to help people who are inexperienced in AI understand the data and the model <cit.>. Therefore, combining visual analysis and explainable AI to build an intelligent talent management system is a valuable research direction. Furthermore, the vast amount of interaction data generated by users can assist us in iterating the model from various perspectives, such as rectifying errors in automated decision-making and enhancing the efficiency of visual information presentation.
§ CONCLUSIONS
Artificial Intelligence (AI)-driven talent analytics represents a potent frontier of innovation and opportunity in today's competitive and fast-evolving business environment. This survey provides a comprehensive exploration of the recent advancements in this domain. Specifically, we first delineated a detailed taxonomy of pertinent data, establishing a critical foundation for utilizing AI techniques to better understand talents, organizations, and management. Subsequently, we illustrated the research efforts on AI techniques for talent analytics across three critical aspects: talent management, organization management, and labor market analysis. Finally, we summarized the open challenges and potential prospects for future research directions within the AI-driven talent analytics sphere. The primary intent of this survey is to provide readers with a thorough understanding of the recent efforts in this emergent field, thereby fostering insights into the dynamic intersection of AI and talent analytics.
|
http://arxiv.org/abs/2307.02953v2
|
20230706123906
|
SegNetr: Rethinking the local-global interactions and skip connections in U-shaped networks
|
[
"Junlong Cheng",
"Chengrui Gao",
"Fengjie Wang",
"Min Zhu"
] |
eess.IV
|
[
"eess.IV",
"cs.CV",
"cs.LG"
] |
SegNetr
Junlong Cheng et al.
College of Computer Science, Sichuan University, Chengdu 610065, China
[email protected]
SegNetr: Rethinking the local-global interactions and skip connections in U-shaped networks
Junlong Cheng1 Chengrui Gao1 Fengjie Wang1 Min Zhu1 ()
August 1, 2023
===========================================================================================
Recently, U-shaped networks have dominated the field of medical image segmentation due to their simple and easily tuned structure. However, existing U-shaped segmentation networks: 1) mostly focus on designing complex self-attention modules to compensate for the lack of long-range dependency modeling in convolution operations, which increases the overall number of parameters and the computational complexity of the network;
2) simply fuse the features of the encoder and decoder, ignoring the connection between their spatial locations. In this paper, we rethink the above problems and build a lightweight medical image segmentation network, called SegNetr. Specifically, we introduce a novel SegNetr block that can perform local-global interactions dynamically at any stage with only linear complexity. At the same time, we design a general information retention skip connection (IRSC) to preserve the spatial location information of encoder features and achieve accurate fusion with the decoder features. We validate the effectiveness of SegNetr on four mainstream medical image segmentation datasets, with 59% and 76% fewer parameters and GFLOPs than vanilla U-Net, while achieving segmentation performance comparable to state-of-the-art methods. Notably, the components proposed in this paper can be applied to other U-shaped networks to improve their segmentation performance.
§ INTRODUCTION
Medical image segmentation has been one of the key aspects in developing automated assisted diagnosis systems, which aims to separate objects or structures in medical images for independent analysis and processing.
Normally, segmentation needs to be performed manually by professional physicians, which is time-consuming and error-prone.
In contrast, developing computer-aided segmentation algorithms can be faster and more accurate for batch processing.
The approach represented by U-Net <cit.> is a general architecture for medical image segmentation, which generates a hierarchical feature representation of the image through a top-down encoder path and uses a bottom-up decoder path to map the learned feature representation to the original resolution to achieve pixel-by-pixel classification.
After U-Net, U-shaped methods based on Convolutional Neural Networks (CNN) have been extended for various medical image segmentation tasks <cit.>. They either enhance the feature representation capabilities of the encoder-decoder or carefully design the attention module to focus on specific content in the image.
Although these extensions can improve upon the benchmark approach, the local nature of convolution limits their ability to capture long-range dependencies, which are critical for medical image segmentation. Recently, segmentation methods based on U-shaped networks have undergone significant changes driven by the Transformer <cit.>.
Chen et al. <cit.> proposed the first Transformer-based U-shaped segmentation network.
Cao et al. <cit.> extended the Swin Transformer <cit.> directly to the U-shaped structure. The above methods suffer from an explosion of computational and memory costs when the feature map size becomes large. In addition, some researchers have tried to build hybrid networks by combining the advantages of CNNs and Transformers, such as UNeXt <cit.>, TransFuse <cit.>, MedT <cit.>, and FAT-Net <cit.>.
Similar to these works, we redesign the window-based local-global interaction and insert it into a pure convolutional framework to compensate for the deficiency of convolution in capturing global features and to reduce the high computational cost arising from self-attention operations.
Skip connection is the most basic operation for fusing shallow and deep features in U-shaped networks.
Considering that this simple fusion does not fully exploit the available information, researchers have proposed some novel skip connection schemes <cit.>.
UNet++ <cit.> designed a series of dense skip connections to reduce the semantic gap between the encoder and decoder sub-network feature maps.
SegNet <cit.> used the max-pooling indices to preserve location information and avoid the ambiguity problem of up-sampling with deconvolution.
BiO-Net <cit.> proposed bi-directional skip connections to reuse building blocks in a cyclic manner.
UCTransNet <cit.> designed a Transformer-based channel feature fusion method to bridge the semantic gap between shallow and deep features. Our approach focuses on the connection between the spatial locations of the encoder and decoder, preserving more of the original features to help recover the resolution of the feature map in the upsampling phase, and thus obtaining a more accurate segmentation map.
By reviewing the above successful cases based on the U-shaped structure, we believe that the efficiency and performance of U-shaped networks can be improved by addressing the following two aspects:
(i) Local-global interactions. Networks often need to deal with objects of different sizes in medical images, and local-global interactions can help the network understand the content of the images more accurately.
(ii) Spatial connection between encoder and decoder. Semantically stronger and positionally more accurate features can be obtained using the spatial information between the encoder and decoder.
Based on the above analysis, this paper rethinks the design of the U-shaped network.
Specifically, we construct lightweight SegNetr (Segmentation Network with Transformer) blocks to dynamically learn local-global information over non-overlapping windows and maintain linear complexity.
We propose information retention skip connection (IRSC), which focuses on the connection between encoder and decoder spatial locations, retaining more original features to help recover the resolution of the feature map in the up-sampling phase.
In summary, the contributions of this paper can be summarized as follows:
1) We propose a lightweight U-shaped segmentation network, SegNetr, with lower computational cost and better segmentation performance.
2) We investigate the potential deficiency of skip connections in the traditional U-shaped framework and propose an improved skip connection with information retention.
3) When the components proposed in this paper are applied to other U-shaped methods, their segmentation performance improves consistently.
§ METHOD
As shown in Fig. <ref>, SegNetr is a hierarchical U-shaped network with important components including SegNetr blocks and IRSC.
To make the network more lightweight, we use MBConv <cit.> as the base convolutional building block.
SegNetr blocks implement dynamic local-global interaction in the encoder and decoder stages.
Patch merging <cit.> is used to reduce the resolution by a factor of two without losing the original image information.
IRSC is used to fuse encoder and decoder features, reducing the detailed information lost by the network as the depth deepens.
Note that by changing the number of channels, we can get the smaller version of SegNetr-S (C=32) and the standard version of SegNetr (C=64).
Next, we will explain in detail the important components in SegNetr.
§.§ SegNetr Block
The self-attention mechanism with global interactions is one of the keys to the Transformer's success, but computing the attention matrix over the entire space requires quadratic complexity. Inspired by the window attention method <cit.>, we construct SegNetr blocks that require only linear complexity to implement local-global interactions. Let the input feature map be X∈ R^H× W× C. We first extract the feature X_MBConv∈ R^H× W× C using MBConv <cit.>, which provides implicit position encoding compared to the usual convolutional layer.
Local interaction can be achieved by calculating the attention matrix of non-overlapping small patches (P denotes the patch size). First, we divide X_MBConv into a series of spatially continuous patches (H× W/P× P,P,P,C) (Fig. <ref> shows the patch size for P = 2) using a computationally costless local partition (LP) operation. Then, we average the information over the channel dimension and flatten the spatial dimensions to obtain (H× W/P× P,P× P), which is fed into the FFN <cit.> for linear computation. Since the importance of the channel aspect is already weighted in MBConv <cit.>, we focus on the computation of spatial attention when performing local interactions. Finally, we use Softmax to obtain the spatial probability distribution and weight the input features X_MBConv. This approach is not only beneficial for parallel computation, but also focuses more purely on the importance of the local space.
Considering that local interactions alone are not sufficient and may cause under-fitting problems, we also design a parallel global interaction branch. First, we use the global partition (GP) operation to aggregate spatially non-contiguous patches. GP adds a window displacement operation to LP with the aim of changing the overall spatial distribution of features (the global branch in Fig. <ref> shows the change in patch location after displacement). The displacement rules are: odd patches in the horizontal direction move one window to the left (even patches move to the right), and odd patches in the vertical direction move one window up (even patches move down). Note that the displacement of patches does not incur any computational cost; only memory changes occur. Compared to the sliding window operation of <cit.>, our approach is more global in nature. Then, we decompose the spatially shifted feature map into patches of size 2P, i.e., (H× W/2P× 2P,2P,2P,C), and perform the global attention computation (similar to the local interaction branch). Even though the global interaction computes the attention matrix over a larger window than the local interaction operation, the amount of computation required is much smaller than that of the standard self-attention model.
The local and global branches are finally fused by weighted summation, before which the feature map shape is recovered by the LP and GP reversal operations (i.e., local reverse (LR) and global reverse (GR)). In addition, our approach employs efficient Transformer designs, such as Norm, feed-forward networks (FFN), and residual connections. Most Transformer models use fixed-size patches <cit.>, but this limits their ability to focus on wider regions in the early stages. This paper alleviates this problem by applying dynamically sized patches. In the encoder stage, we compute local attention using patches of size (8, 4, 2, 1) in turn, and the global branch expands the patches to sizes (16, 8, 4, 2). To reduce the number of hyper-parameters, the patches in the decoder stage have the same size as the encoder patches of the corresponding stage.
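The following PyTorch sketch reflects our reading of the local branch described above (it is not the authors' code): features are partitioned into P x P windows, averaged over channels, scored by a small FFN with Softmax, and used to reweight the input. The global branch would additionally roll the feature map before partitioning into 2P windows, and the full block adds Norm, residual connections, and the weighted fusion of the two branches.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalInteraction(nn.Module):
    def __init__(self, patch=4):
        super().__init__()
        self.p = patch
        self.ffn = nn.Sequential(nn.Linear(patch * patch, patch * patch),
                                 nn.GELU(),
                                 nn.Linear(patch * patch, patch * patch))

    def forward(self, x):                        # x: (B, C, H, W), H and W divisible by P
        B, C, H, W = x.shape
        p = self.p
        # local partition: split into non-overlapping P x P windows
        win = x.reshape(B, C, H // p, p, W // p, p).permute(0, 2, 4, 3, 5, 1)
        score = win.mean(dim=-1).reshape(-1, p * p)           # average over channels
        attn = F.softmax(self.ffn(score), dim=-1).reshape(B, H // p, W // p, p, p, 1)
        out = win * attn                                      # reweight positions inside each window
        # local reverse: restore the original (B, C, H, W) layout
        return out.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)

x = torch.randn(2, 64, 32, 32)
y = LocalInteraction(patch=4)(x)                 # same shape as x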
§.§ Information Retention Skip Connection
Fig. <ref> shows three different types of skip connections. U-Net concatenates the channel dimensions at the corresponding stages of the encoder and decoder, allowing the decoder to retain more high-resolution detail information when performing up-sampling. SegNet assists the decoder in recovering the feature map resolution by retaining the position information of the down-sampling process in the encoder. We design IRSC to combine both of these features, i.e., to preserve the location information of encoder features while achieving the fusion of shallow and deep features. Specifically, the patch merging (PM) operation in the encoder reduces the resolution of the input feature map X_in∈ R^H× W× C by a factor of two, while the channel dimension is expanded to four times the original, yielding X_PM∈ R^H/2×W/2× 4C. The essence of the PM operation is to convert the information in the spatial dimension into a channel representation without any computational cost, retaining all the information of the input features. The patch reverse (PR) in IRSC is used to recover the spatial resolution of the encoder features, and it is the reciprocal operation of PM. We alternately select half the number of channels of X_PM (i.e., H/2×W/2× 2C) as the input of PR, which on the one hand reduces the redundant features of the encoder and on the other hand aligns with the number of feature channels in the decoder. PR largely avoids the information loss of traditional up-sampling methods, while providing accurate location information. Finally, the output features X_PR∈ R^H×W×C/2 of PR are fused with the up-sampled features of the decoder for the next stage of learning.
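Since PM converts spatial information into channels losslessly and PR inverts it, a close analogue can be sketched with pixel unshuffle/shuffle; the alternating channel selection of the paper is approximated here by a simple stride-2 slice, so this is an interpretation rather than the exact IRSC implementation.

import torch
import torch.nn.functional as F

x_in = torch.randn(1, 64, 56, 56)                  # encoder feature (B, C, H, W)

x_pm = F.pixel_unshuffle(x_in, 2)                  # patch merging analogue: (B, 4C, H/2, W/2), lossless
x_sel = x_pm[:, ::2]                               # keep every other channel -> (B, 2C, H/2, W/2)
x_pr = F.pixel_shuffle(x_sel, 2)                   # patch reverse analogue: (B, C/2, H, W), exact positions

# x_pr can now be fused (e.g. concatenated) with the decoder's up-sampled features.
print(x_pm.shape, x_pr.shape)                      # torch.Size([1, 256, 28, 28]) torch.Size([1, 32, 56, 56])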
§ EXPERIMENTS AND DISCUSSION
Datasets. To verify the validity of SegNetr, we selected four datasets, ISIC2017 <cit.>, PH2 <cit.>, TNSCUI <cit.> and ACDC <cit.>, for benchmarking. ISIC2017 consists of 2000 training images, 200 validation images, and 600 test images. The PH2 and ISIC2017 tasks are the same, but this dataset contains only 200 images without any specific test set, so we use a five-fold cross-validation approach to validate the different models. The TNSCUI dataset has 3644 ultrasound images of thyroid nodules, which we randomly divided into a 6:2:2 ratio for training, validation, and testing. The ACDC contains Cardiac MRI images from 150 patients, and we obtained a total of 1489 slice images from 150 3D images, of which 951 were used for training and 538 for testing. Unlike the three datasets mentioned above, the ACDC dataset contains three categories: left ventricle (LV), right ventricle (RV), and myocardium (Myo). We use this dataset to explore the performance of different models for multi-category segmentation.
Implementation details. We implement SegNetr in the PyTorch framework and train on an NVIDIA 3090 GPU with 24 GB of memory. We use the Adam optimizer with a fixed learning rate of 1e-4. All networks use a cross-entropy loss function and an input image resolution of 224 × 224, and training is stopped after 200 epochs of iterative optimization. We use the source code provided by the respective authors to conduct experiments with the same datasets and data augmentation strategy. In addition, we use the IoU and Dice metrics to evaluate the segmentation performance, while also reporting the number of parameters and GFLOPs of the compared models.
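A hedged sketch of this training configuration is given below; the model is a trivial stand-in for SegNetr and the random tensors replace a real data loader, so only the optimizer, loss, resolution, and schedule mirror the description above.

import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

model = nn.Conv2d(3, 4, kernel_size=1)                 # placeholder segmentation head (4 classes)
optimizer = optim.Adam(model.parameters(), lr=1e-4)    # fixed learning rate, as described above
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)                   # dummy 224 x 224 inputs
masks = torch.randint(0, 4, (8, 224, 224))             # dummy integer class masks
loader = DataLoader(TensorDataset(images, masks), batch_size=4)

for epoch in range(2):                                 # the paper trains for 200 epochs
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)                  # pixel-wise cross-entropy
        loss.backward()
        optimizer.step()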
§.§ Comparison with State-of-the-arts
ISIC2017 and PH2 Results. As shown in Table. <ref>, we compared SegNetr with the baseline U-Net and eight other state-of-the-art methods <cit.>. On the ISIC2017 dataset, SegNetr and TransUNet obtained the highest IoU (0.775), which is 3.9% higher than the baseline U-Net. Even SegNetr-S with a smaller number of parameters can obtain a segmentation performance similar to that of its UNeXt-L counterpart. By observing the experimental results of PH2, we found that the Transformer-based method Swin-UNet segmentation has the worst performance, which is directly related to the data volume of the target dataset. Our method obtains the best segmentation performance on this dataset and keeps the overhead low. Although we use an attention method based on window displacement, the convolutional neural network has a better inductive bias, so the dependence on the amount of data is smaller compared to Transformer-based methods such as Swin-UNet or TransUNet.
TNSCUI and ACDC Results. As shown in Table <ref>, SegNetr's IoU and Dice are 1.6% and 0.8% higher than those of the dual-encoder FATNet, respectively, while requiring 32.65 fewer GFLOPs. In the ACDC dataset, the left ventricle is easier to segment, with an IoU of 0.861 for U-Net, which is still 1.1% worse than SegNetr. The myocardium lies in an annular pattern between the left and right ventricles, and our method achieves a 0.6% higher IoU than EANet, which focuses on boundary segmentation. In addition, observing the segmentation performance of the four networks UNeXt, UNeXt-L, SegNetr-S, and SegNetr, we find that a smaller number of parameters may limit the learning ability of the network. The proposed method shows competitive segmentation performance on all four datasets, indicating that it has good generalization performance and robustness. Additional qualitative results are provided in the supplementary material.
In addition, Fig. 3 provides qualitative examples that demonstrate the effectiveness and robustness of our proposed method. The results show that SegNetr is capable of accurately describing skin lesions with less data, and achieves multi-class segmentation with minimized under-segmentation and over-segmentation.
§.§ Ablation Study
Effect of local-global interactions. The role of local-global interactions in SegNetr can be understood from Table <ref>. The overall number of parameters of the network is smaller when there is no local or global interaction, but the segmentation performance also suffers greatly. With the addition of local or global interactions, the segmentation performance of the network on different categories improves. In addition, running the local-global interaction modules in series or in parallel yields similar performance, but the series connection leads to lower computational efficiency and affects the running speed.
Effect of patch size. As shown in Table <ref> (left), different patch sizes significantly affect the efficiency and the number of parameters of the model. The number of parameters reaches 54.34 M when patches of size 2 are used in each phase, an increase of 42.08 M compared to using dynamic patches of size (8, 4, 2, 1). Based on this ablation study, we recommend using a patch size of [Resolution/14] at the different stages.
Effect of IRSC. Table <ref> (right) shows the results of replacing the skip connections of UNeXt, U-Net, U-Net++, and SegNet with IRSC. These methods obtain consistent improvements with the help of IRSC, which clearly shows that IRSC is useful.
§ CONCLUSION
In this study, we introduce SegNetr, a novel framework for medical image segmentation that improves segmentation performance by optimizing local-global interactions and skip connections.
Specifically, the SegNetr block implements dynamic interactions on non-overlapping windows using parallel local and global branches, and IRSC enables more accurate fusion of shallow and deep features by preserving spatial information.
We evaluated the proposed method on four medical image datasets, and extensive experiments showed that SegNetr obtains competitive results while maintaining a small number of parameters and GFLOPs.
The proposed framework is general and flexible, and we believe it can be easily extended to other U-shaped networks.
[1] Ronneberger, O., Fischer, P., Brox, T.: U-Net: Convolutional networks for biomedical image segmentation. In: MICCAI, pp. 234–241. Springer (2015)
[2] Ma, Q., Zu, C., Wu, X., et al.: Coarse-to-fine segmentation of organs at risk in nasopharyngeal carcinoma radiotherapy. In: MICCAI, pp. 358–368. Springer (2021)
[3] Han, Z., Jian, M., Wang, G.G.: ConvUNeXt: An efficient convolution neural network for medical image segmentation. KBS 253, 109512 (2022)
[4] Oktay, O., Schlemper, J., Folgoc, L.L., et al.: Attention U-Net: Learning where to look for the pancreas. arXiv preprint arXiv:1804.03999 (2018)
[5] Cheng, J., Tian, S., Yu, L., et al.: ResGANet: Residual group attention network for medical image classification and segmentation. MED IMAGE ANAL 76, 102313 (2022)
[6] Wang, K., Zhan, B., Zu, C., et al.: Semi-supervised medical image segmentation via a tripled-uncertainty guided mean teacher model with contrastive learning. MED IMAGE ANAL 79, 102447 (2022)
[7] Gu, Z., Cheng, J., Fu, H., et al.: CE-Net: Context encoder network for 2D medical image segmentation. IEEE TMI 38(10), 2281–2292 (2019)
[8] Wu, Y., Liao, K., Chen, J., et al.: D-Former: A U-shaped dilated transformer for 3D medical image segmentation. NEURAL COMPUT APPL, 1–14 (2022)
[9] Cheng, J., Tian, S., Yu, L., et al.: A deep learning algorithm using contrast-enhanced computed tomography (CT) images for segmentation and rapid automatic detection of aortic dissection. BSPC 62, 102145 (2020)
[10] Dosovitskiy, A., Beyer, L., Kolesnikov, A., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. In: ICLR, pp. 3–7 (2021)
[11] Vaswani, A., Shazeer, N., Parmar, N., et al.: Attention is all you need. NIPS 30 (2017)
[12] Chen, J., Lu, Y., Yu, Q., et al.: TransUNet: Transformers make strong encoders for medical image segmentation. arXiv preprint arXiv:2102.04306 (2021)
[13] Cao, H., Wang, Y., Chen, J., et al.: Swin-Unet: Unet-like pure transformer for medical image segmentation. In: ECCV Workshops, pp. 205–218 (2023)
[14] Liu, Z., Lin, Y., Cao, Y., et al.: Swin Transformer: Hierarchical vision transformer using shifted windows. In: IEEE ICCV, pp. 10012–10022 (2021)
[15] Valanarasu, J.M.J., Patel, V.M.: UNeXt: MLP-based rapid medical image segmentation network. In: MICCAI, pp. 23–33. Springer (2022)
[16] Zhang, Y., Liu, H., Hu, Q.: TransFuse: Fusing transformers and CNNs for medical image segmentation. In: MICCAI, pp. 14–24. Springer (2021)
[17] Valanarasu, J.M.J., Oza, P., Hacihaliloglu, I., et al.: Medical Transformer: Gated axial-attention for medical image segmentation. In: MICCAI, pp. 36–46. Springer (2021)
[18] Wu, H., Chen, S., Chen, G., et al.: FAT-Net: Feature adaptive transformers for automated skin lesion segmentation. MED IMAGE ANAL 76, 102327 (2022)
[19] Zhou, Z., Rahman Siddiquee, M.M., Tajbakhsh, N., et al.: UNet++: A nested U-Net architecture for medical image segmentation. In: MICCAI, pp. 3–11. Springer (2018)
[20] Badrinarayanan, V., Kendall, A., Cipolla, R.: SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE TPAMI 39(12), 2481–2495 (2017)
[21] Xiang, T., Zhang, C., Liu, D., et al.: BiO-Net: Learning recurrent bi-directional connections for encoder-decoder architecture. In: MICCAI, pp. 74–84. Springer (2020)
[22] Wang, H., Cao, P., Wang, J., et al.: UCTransNet: Rethinking the skip connections in U-Net from a channel-wise perspective with transformer. In: AAAI, 36(3), pp. 2441–2449 (2022)
[23] Tu, Z., Talebi, H., Zhang, H., et al.: MaxViT: Multi-axis vision transformer. In: ECCV, pp. 459–479 (2022)
[24] Tan, M., Le, Q.: EfficientNet: Rethinking model scaling for convolutional neural networks. In: ICML, pp. 6105–6114 (2019)
[25] Quang, N.H.: Automatic skin lesion analysis towards melanoma detection. In: IES, pp. 106–111. IEEE (2017)
[26] Mendonça, T., Ferreira, P.M., Marques, J.S., et al.: PH2 - A dermoscopic image database for research and benchmarking. In: EMBC, pp. 5437–5440. IEEE (2013)
[27] Pedraza, L., Vargas, C., Narváez, F., et al.: An open access thyroid ultrasound image database. In: SPIE, vol. 9287, pp. 188–193 (2015)
[28] Bernard, O., Lalande, A., Zotti, C., et al.: Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: Is the problem solved? IEEE TMI 37(11), 2514–2525 (2018)
[29] Isensee, F., Jaeger, P.F., Kohl, S.A.A., et al.: nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nature Methods 18(2), 203–211 (2021)
[30] Wang, K., Zhang, X., Zhang, X., et al.: EANet: Iterative edge attention network for medical image segmentation. Pattern Recognition 127, 108636 (2022)
Semi-automated Thermal Envelope Model Setup for Adaptive Model Predictive Control with Event-triggered System Identification
Lu Wan, Xiaobing Dai, Torsten Welfonder, Ekaterina Petrova, Pieter Pauwels
§ ABSTRACT
To reach carbon neutrality in the middle of this century, smart controls for building energy systems are urgently required. Model predictive control (MPC) demonstrates great potential in improving the performance of heating, ventilation and air-conditioning (HVAC) systems, whereas its wide application in the building sector is impeded by the considerable manual effort involved in setting up the control-oriented model. To facilitate the system identification (SI) of the building envelope as well as the configuration of the MPC algorithms with less human intervention, a semantic-assisted control framework is proposed in this paper. We first integrate the different data sources required by the MPC algorithms, such as the building topology, HVAC systems, sensor data streams and control settings, in the form of a knowledge graph, and then employ these data to set up the MPC algorithm automatically. Moreover, an event-triggered SI scheme is designed to ensure the computational efficiency and accuracy of the MPC algorithm simultaneously. The proposed method is validated via simulations. The results demonstrate the practical relevance and effectiveness of the proposed semantic-assisted MPC framework with event-triggered learning of system dynamics.
§ HIGHLIGHTS
* Semantic web technologies
* Ontology-based data integration
* Adaptive model predictive control
* Event-triggered system identification
§ INTRODUCTION
The building sector takes up about 40% of the primary energy consumption and the greenhouse gas emissions worldwide <cit.>, more than half of which occurs during the operational stage.
Heating, ventilation and air-conditioning (HVAC) systems account for a large part of a building's total energy use during operation, inducing considerable CO_2 emissions and monetary costs. Due to the globally increasing need for space cooling and heating, smart and economical control strategies are required for HVAC systems to decrease the carbon footprint.
Model Predictive Control (MPC) is a control method that uses building models and disturbance forecasts to solve constrained optimization problems in a dynamic manner <cit.>.
It has gained increasing attention in building control these years due to its effectiveness in energy cost reduction and energy efficiency improvement <cit.>.
The control-oriented prediction model for MPC includes the dynamic of the thermal envelope and HVAC systems.
In this study, we mainly deal with thermal envelope modeling, where Resistance and Capacitance (RC) models are widely employed <cit.>. The lumped parameters of RC models, characterizing the heat transfer of buildings, are usually estimated through data-driven system identification (SI).
In recent studies, adaptive SI methods are often employed, which update the RC model parameters regularly by performing moving horizon estimation (MHE) or model identification at a daily frequency <cit.>, because the performance of MPC deteriorates if the estimated RC parameters are inaccurate.
We adopt an MPC algorithm with event-triggered model identification, to reduce the computational effort caused by frequent SI.
The wide application of MPC algorithms in the building sector remains a challenge, however, because the control-oriented model configuration requires the interpretation of different data sources such as building geometry, HVAC systems, and sensor measurements <cit.>, demanding knowledge from several domains. Furthermore, the MPC algorithm requires forecasts of disturbances caused by, e.g., weather and occupancy, as well as appropriate hyper-parameter settings, e.g., the prediction horizon. As buildings have diverse geometries, energy systems, and geographical locations, such a model-based approach is labor-intensive and difficult to transfer among buildings. Most previous studies have focused solely on the automatic setup of control models with optimal structure and parameters, with little regard to the practical implementation effort required by both the SI and MPC algorithms.
For instance, <cit.> developed a Python toolbox to identify the RC model of the building envelope, which is also deployed in <cit.> to automatically determine the optimal model structures for multi-zone buildings. However, the meta-information required by the toolbox is manually configured.
<cit.> propose a tool-chain to generate RC models in Modelica, which automatically uses geometric data from the building information model (BIM) and monitoring data from the building management system (BMS). This study is insightful, but the connection between the BIM model and the BMS is established in a hard-coded manner, and the implementation depends heavily on the proprietary commercial software Revit. In <cit.>, a comprehensive open-source toolchain is developed that can automatically generate the MPC algorithm (including both model parameter estimation and the optimal control formulation).
Nevertheless, the data integration and interpretation process involved in the MPC algorithm generation is implemented specifically for that case study. Thus, the developed infrastructure is hard to transfer to another building. An integrated framework that collects and interprets the diverse data required by MPC needs to be studied in order to promote MPC applications.
There is no mature common platform yet to keep the heterogeneous data well-connected in the architecture, engineering, and construction (AEC) industry. Semantic technologies are considered promising to solve the data silo dilemma <cit.>. Many previous attempts use Industry Foundation Classes (IFC) <cit.>, which is an open BIM schema for data exchange in the AEC domain. However, IFC is not suitable for describing dynamic operational data such as sensor measurements, despite its strength in geometry modeling. Semantic web technologies (SWT) [<https://www.w3.org/standards/semanticweb/>]
enable the exchange and sharing of diverse data sources (semantic graphs) over the web, which is hard to achieve by using classical approaches.
Semantic graphs, also referred to as knowledge graphs or metadata schemata, contain structured information describing the meaning of the underlying data <cit.>. Built from ontologies, i.e., specifications of a conceptualization <cit.>, semantic graphs are able to connect heterogeneous data. A few studies try to integrate building data from the design and operation stages using SWT. <cit.> propose an SWT-based methodology to link static building design data modeled in IFC with dynamic sensor data modeled by the Brick Schema <cit.>. Building topology, product, and sensor data are connected using these two schemata for further exploitation. A similar approach is adopted in <cit.> to integrate IFC data and BMS sensor data, by first converting the different data sources into knowledge graphs and then linking them together. They build a vendor-neutral web application to visualize the 3D BIM model and the spatially related sensor measurements in a common platform. Ontology-based data integration can enable more efficient data-driven applications via properly linked data sources. However, to the best of the authors' knowledge, no existing study has investigated a semantics-aware framework for advanced building control, such as MPC.
In this paper, we propose a semantic-assisted control framework to support MPC applications in buildings. In the proposed framework, the data from diverse sources, including building geometry, building physics, and sensors, is first collected and managed in a machine-interpretable way and then employed to set up the algorithm automatically.
Furthermore, an MPC algorithm with event-triggered SI is designed and implemented in order to minimize operating costs while maintaining desirable indoor temperatures.
The effectiveness of the proposed framework and algorithm is demonstrated through simulations.
The paper is structured as follows. In the system modeling section, we introduce the building and the HVAC system. In the architecture section, we explain the proposed control framework and elaborate on the designed MPC algorithm with event-triggered SI. The results section shows the effectiveness of the proposed approach via simulations. Finally, conclusions are drawn with remarks on future research.
§ SYSTEM MODELING
In this paper, we study a typical system of a European office building, consisting of a one-zone building envelope equipped with a Variable Air Volume (VAV) flow system and a radiator heating system.
The physical model of the system is adapted from "Buildings.Examples.ScalableBenchmarks.BuildingVAV.One_Floor_OneZone" provided in the open-source Modelica Buildings library <cit.>.
The existing control logic in the model is selected as the baseline, i.e., a rule-based controller (RBC) designed according to <cit.>, and compared with the proposed MPC algorithm in the result section.
§.§ Building model description
The BESTEST Case 600 <cit.>, a benchmark model for building energy simulation, is used in the study.
The envelope consists of a single zone with a window on the south facade and a constant infiltration mass flow rate.
A BIM model for the building envelope is manually created.
The building represents a typical small-group office located in Stuttgart, Germany. Internal heat gains q_int and occupancy table are set according to the local standard <cit.>.
Moreover, bounds for desired zone temperature T_z are adjusted according to the occupancy.
In particular, for the occupied time t ∈𝕆 (8:00 to 18:00) during weekdays, the upper and lower bounds of the zone temperature are set as T_max^occ= 27 ^oC and T_min^occ= 21 ^oC.
For unoccupied time t ∉𝕆, the temperature bounds are relaxed to T_max^un= 32 ^oC and T_min^un= 17 ^oC, respectively.
The air in the thermal zone is assumed to be well-mixed.
The building is controlled by the system as in Figure <ref>, in which the VAV system consists of a heating coil, a cooling coil, and a reheat coil with maximal power Q̇^hc_max, Q̇^cc_max and Q̇^rc_max, respectively.
Moreover, an economizer is located between the main supply branch and the return branch. In the zone, the radiator heating system is deployed with a maximal power Q̇^rad_max.
The HVAC components are modeled as ideal devices with constant overall efficiency, which considers the energy loss in the hydraulic distribution system and the generator system by using a discount coefficient.
The efficiencies are η_hc, η_rc, η_rad and COP_cc, respectively.
The total thermal power Q̇^hvac delivered by the HVAC system to the zone is given in <ref>:
Q̇^hvac = u^T ΓQ̇_max
where u^T=[u^cc,u^hc,u^rc,u^rad] is the vector of normalized component powers (the control variables), with values between 0 and 1.
The efficiency matrix Γ is set as Γ = diag(COP_cc, η_hc, η_rc, η_rad), while the maximal power vector is set as Q̇_max^T = [Q̇^cc_max, Q̇^hc_max, Q̇^rc_max, Q̇^rad_max].
§.§ Control problem setting
The control objective is to achieve low operating costs while satisfying the requirements for the desired zone temperature.
For this purpose, the MPC controller is designed with a first-order RC model for prediction.
The continuous time 1R1C model is expressed as in <ref>
C_z Ṫ_z = R_w^-1 (T_amb- T_z) +Q̇^hvac+ q_int A +α H_glo
where T_amb stands for the ambient temperature, H_glo for the global horizontal irradiation, and A for zone floor area.
The parameter is set as θ= [C_z, R_w,α]^T, a collection of the heat capacity C_z, thermal resistance of the walls R_w and solar irradiance coefficient α.
The lumped parameter θ is identified using data-driven methods with historical measurements as the training data.
SI aims to find the parameters that minimize the difference between the true state and the prediction, as defined in <ref>:
θ̂ = arg min_θ̂ J^s, J^s = ∫ε^2(t,θ̂) dt, s.t. θ̂∈Θ = [θ, θ̅]
where ε(t,θ̂) represents the deviation of RC model prediction compared with measured data.
Note that for the optimization, a proper setup of the initial guess θ̂_0 and the boundaries Θ is necessary for plausible results.
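As an illustration of this identification step, the sketch below discretizes the 1R1C model with a forward-Euler step and fits θ = [C_z, R_w, α] to logged data by bounded least squares. The solver choice (SciPy), the sampling time and the variable names are illustrative assumptions and not the toolchain used in the paper.

import numpy as np
from scipy.optimize import least_squares

def rc_step(T_z, T_amb, Q_hvac, q_int_A, H_glo, theta, dt):
    # One forward-Euler step of C_z*dT_z/dt = (T_amb - T_z)/R_w + Q_hvac + q_int*A + alpha*H_glo.
    C_z, R_w, alpha = theta
    dTdt = ((T_amb - T_z) / R_w + Q_hvac + q_int_A + alpha * H_glo) / C_z
    return T_z + dt * dTdt

def residuals(theta, T_z, T_amb, Q_hvac, q_int_A, H_glo, dt):
    # One-step-ahead prediction errors epsilon over the identification window.
    pred = rc_step(T_z[:-1], T_amb[:-1], Q_hvac[:-1], q_int_A[:-1], H_glo[:-1], theta, dt)
    return pred - T_z[1:]

def identify(theta0, data, dt=300.0):
    # Bounded least squares around the initial guess theta0 (bounds 0.1*theta0 .. 10*theta0).
    theta0 = np.asarray(theta0, dtype=float)
    sol = least_squares(residuals, x0=theta0, bounds=(0.1 * theta0, 10.0 * theta0),
                        args=(data["T_z"], data["T_amb"], data["Q_hvac"],
                              data["q_int_A"], data["H_glo"], dt))
    return sol.x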
As the real parameter θ is time-varying, repeated SI is needed, but it induces additional computational effort. To reduce this burden, we design an event-triggered SI strategy that updates the parameters via <ref> only when the model error is large. The details are discussed in the subsection on the MPC service.
As both the SI described above and the subsequent MPC implementation require inputs from different sources, such as a proper initial guess and continuous state measurements,
this leads to the demand for an integrated data framework.
We endeavor to integrate building design data (IFC file) and operational data (sensor measurements), in order to reduce the manual efforts required for the RC model set-up and the control algorithm.
A semantic-assisted architecture is proposed, to realize the semi-automated setup of the proposed MPC algorithm with event-triggered SI.
§ SEMANTIC-ASSISTED ARCHITECTURE FOR MPC
The proposed semantic-assisted framework adopts a layered service-oriented architecture as illustrated in Figure <ref>.
Starting at the bottom, design data (BIM model) and operation data (data points of sensors and actuators) are first pre-processed and then delivered to the graph database and time-series databases accordingly.
In the integration layer, semantic graphs for different chunks of entities e.g. buildings, HVAC systems, sensors and actuators are connected with each other and the overall information is integrated. The link between the virtual sensors in the graph DB and their measurements in the time-series DB is realized via the sensor ID. Via the semantic layer, the building design data and its operational data are seamlessly combined.
Eventually, the functional service layer communicates with the semantic integration layer, exchanging corresponding data to execute the services. Currently, there are two services implemented, namely the forecast and control services.
As for the software implementation, all the components except the MPC service are implemented in Python due to its versatile packages available.
The manipulation of the semantic graphs is realized using RDFLib
[<https://rdflib.readthedocs.io/en/stable/index.html>].
We use MATLAB to develop the MPC service because of its powerful numeric solvers available.
The real-time communication between the semantic layer and the databases, as well as between the field measurements and the time-series database, is realized via RESTful APIs, which rely on the HTTP protocol and can be secured.
The communication between the semantic layer and the services, as well as between the semantic layer and the field actuators, on the other hand, is realized via the User Datagram Protocol (UDP), a lightweight, connectionless transport protocol with the advantage of low communication latency.
In this way, the field devices (i.e. sensors and actuators) and the MPC service form a closed-loop system with a local area network.
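As a minimal illustration of the UDP leg of this architecture, the snippet below sends a vector of normalized component powers from the MPC service to a field gateway using Python's standard socket module; the IP address, port and JSON message layout are placeholders, not the actual interface of the test system.

import json
import socket

ACTUATOR_ADDR = ("192.168.1.50", 5005)   # placeholder address of the field gateway

def send_setpoints(u):
    # Send the normalized component powers u = [u_cc, u_hc, u_rc, u_rad] as a JSON datagram.
    payload = json.dumps({"u_cc": u[0], "u_hc": u[1], "u_rc": u[2], "u_rad": u[3]}).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, ACTUATOR_ADDR)

send_setpoints([0.0, 0.0, 0.0, 0.3])     # example: mild radiator heating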
Individual components are specifically explained in the following subsections.
§.§ Data sources and generation of graphs
In this study, we take the IFC file as the information source for data about envelope thermal properties, building geometry, and topology.
The semantic graph for building-related information is automatically generated by the IFCtoLBD converter <cit.>.
Since we use a simulated building with a fictive HVAC system, the graphs for the HVAC system and the BMS data points are currently configured manually using the Resource Description Framework (RDF).
Note that the RDF graph generation for these sources can theoretically also be automated via tools described in <cit.> and <cit.>, which will be handled in the future study.
§.§ Databases
The generated semantic graphs for the building, the HVAC system, and the metadata about sensors as well as actuators are stored in a graph DB, because graph databases are fast at querying relationships between entities. The schema used in the semantic graphs is detailed in the next section. In contrast, the sensor measurements are stored in a time-series database to enable further time-series analyses. We use GraphDB [<https://www.ontotext.com/products/graphdb/>] and InfluxDB [<https://www.influxdata.com/>] for the specific implementation.
[5]
URIs for all used ontologies:
BOT: <https://w3id.org/bot##>;
Brick: <https://brickschema.org/schema/Brick##>;
PEP: <https://w3id.org/pep/>;
FSO: <https://w3id.org/fso##>;
PROPS: <https://w3id.org/props##>;
SEAS forecasting ontology: <https://w3id.org/seas/ForecastingOntology>;
SOSA: <https://www.w3.org/ns/sosa/>;
SSN: <https://www.w3.org/ns/ssn/>;
TIME: <http://www.w3.org/2006/time/>.
§.§ Semantic integration layer
In this subsection, we explain in detail how the semantic graph is modeled in terms of the adopted terminologies (T-box). This is fundamental for ontology-based data integration. Based on the controller design, we categorize the data and information that requires human inputs into the following five main aspects:
* Building elements and topology.
The topological information and geometrical information about the thermal zones are required for RC model structure estimation and therefore modeled.
In addition, the properties of building elements (walls, windows, doors etc.) such as area, thermal transmittance, solar transmittance and thermal capacitance are needed in calculating the initial guess and bounds of the parameter θ̂_0.
The information above is described using Building Topology Ontology (BOT)[5] <cit.> and PROPS Ontology[5] <cit.>.
* Components and topology for HVAC systems.
The properties of the HVAC system components, such as the nominal power Q̇_max and the nominal efficiency Γ, are modeled using the Semantic Sensor Network (SSN) ontology[5] <cit.>.
Furthermore, the interaction between the HVAC systems and zones is also modeled, so that the building envelope model is matched with the corresponding HVAC components efficiently. We use Flow System Ontology[5] (FSO) <cit.> to describe the heat and fluid transfer among zones and HVAC components.
* Sensor data collection.
The sensor data required by the MPC algorithm includes the state measurement T_z.
In addition, the set-points, i.e., T_max^(·) and T_min^(·), are required to set up the constraints in the MPC as in <ref>. Such properties (e.g., temperature and power) observed by sensors are described by the SOSA ontology[5] <cit.>. The virtual data points attached to these properties are further modeled using Brick, with identifiers linking them to the time-series measurements in the databases.
* Forecast information.
Forecasts required by the MPC algorithm include weather, occupancy, and the energy price in the market.
The link to the forecast models in the file system is described using SEAS forecasting ontology[5] and SEAS Procedure Execution ontology[5] <cit.>.
* Controller setup.
The proposed adaptive control algorithm has 2 sub-modules as shown in Figure <ref>, namely MPC and the event-triggered SI.
For MPC, the prediction horizon N_c is tuned according to the specific use case. For the event-triggered SI, the optimal setting of the trigger horizon N_t and the identification horizon N_s, as well as the trigger threshold ρ, typically comes from domain experts' experience. The above hyper-parameters for the controller setup are represented by the SEAS optimization ontology and the Time ontology[5] (Hobbs and Pan 2004), and can easily be modified by domain experts.
With the proposed information modeling paradigm, the information needed to build the MPC algorithm is organized in a uniform manner, reducing redundant effort in data preparation.
How the aforementioned information benefits the MPC algorithm setup is detailed in the subsection on the MPC service.
A concrete instance model (A-box) is shown in the results section.
The data from the different sources are integrated and manipulated via the semantic integration layer, which serves as the coordinator between the services and the other components. The first task of the semantic integration layer is to calculate the initial guess θ̂_0 and the boundaries of the RC parameters from the material data, in which R̂_w,0 is derived from the thermal resistances of all surfaces, Ĉ_z,0 from the total thermal capacitance of the surfaces and the air within the zone, and α̂_0 from the solar irradiation transmitted through the window area.
The lower and upper bounds of θ̂ is set as [0.1 θ̂_0, 10 θ̂_0].
The second task of the semantic layer is to retrieve the relevant data from the databases and deliver them to the corresponding service, as detailed for each specific service below.
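The first of these tasks can be sketched as follows. The element properties (area, U-value, g-value, area-specific capacitance) are assumed to have already been queried from the graph; note that R̂_w,0 is aggregated here as the parallel combination 1/Σ(U·A), which matches the order of magnitude of the value reported later, while the paper's exact aggregation rule may differ.

def initial_guess(elements, zone_volume, rho_cp_air=1.2 * 1005.0):
    # elements: list of dicts with 'area' (m^2), 'u_value' (W/m^2K),
    #           'capacitance' (J/m^2K, 0 for the window) and optionally 'g_value' (-).
    UA = sum(e["area"] * e["u_value"] for e in elements)
    R_w0 = 1.0 / UA                                              # combined envelope resistance (K/W)
    C_z0 = sum(e["area"] * e["capacitance"] for e in elements) \
           + rho_cp_air * zone_volume                            # surfaces plus zone air (J/K)
    alpha_0 = sum(e["area"] * e.get("g_value", 0.0) for e in elements)  # effective solar aperture (m^2)
    theta_0 = [C_z0, R_w0, alpha_0]
    bounds = ([0.1 * v for v in theta_0], [10.0 * v for v in theta_0])
    return theta_0, bounds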
§.§ Functional service layer
§.§.§ Forecast service
The forecast service provides predictions on the weather environment, energy price and the occupancy of the building.
We use the predictions made by existing models and store them in the file system. The test reference year weather data set[<https://energyplus.net/weather>] of Stuttgart is employed. We adopt the day-ahead German electricity price[<https://www.smard.de/en/>] for the price forecast. The occupancy profile defined in the norm <cit.> is employed for predicting occupancy and internal gains. The SPARQL [<https://www.w3.org/TR/sparql11-overview/>] query against prediction-related information, which runs in the forecast service, is shown in Figure <ref>.
§.§.§ MPC service with event-triggered SI
The proposed algorithm in the control service consists of 2 sub-modules, namely MPC and event-triggered SI.
The MPC module is formulated as an optimization problem, aiming to minimize the operating costs of the HVAC system with desired zone temperature.
The optimization problem with receding horizon N_c is written as <ref>:
min_{û_i, s̅_i, s_i} J^c_k = ∑_i = k^k + N_c - 1( λ̂_i Q̇_max^T Γ û_i + μ̅ s̅_i + μ s_i )
s.t. T_min(t_i) - s_i ≤T̂_z, i≤ T_max(t_i) + s̅_i
û^cc_i ∈ [0,1], û^hc_i ∈ [0,1], û^rh_i ∈ [0,1], û^rad_i ∈ [0,1]
s_i ≥ 0, s̅_i ≥ 0, ∀ i = k, ⋯, k + N_c
T̂_z, i + 1 = f(T̂_z,i,û_i,ê_i,θ̂_i), T̂_z, k = T_z, k
where λ̂_i is the predicted electricity price, Q̇_max is the vector of maximal thermal powers, and û_i are the control variables.
The slack variables s_i and s̅_i relax the lower and upper temperature bounds and guarantee the feasibility of the optimization by softening the hard constraints on T_z in <ref>, with the penalty coefficients μ and μ̅ penalizing violations.
<ref> and <ref> are the constraints on the state T_z,i and control variable u_i, respectively.
<ref> is the discrete form of <ref> with the estimated parameter θ̂_k.
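For concreteness, the optimization above can be written down almost directly with CVXPY, as sketched below; the paper's MATLAB implementation is not reproduced, the discrete dynamics f(·) are assumed to be the forward-Euler form of the 1R1C model, and the cost uses the absolute nominal powers so that cooling (negative nominal power) is also paid for.

import cvxpy as cp
import numpy as np

def solve_economic_mpc(T_z0, theta, price, T_amb, H_glo, q_int_A, T_min, T_max,
                       Q_max, Gamma, dt=300.0, N_c=96, mu=1e3):
    # theta = [C_z, R_w, alpha]; Gamma = [COP_cc, eta_hc, eta_rc, eta_rad]; Q_max with cooling negative.
    C_z, R_w, alpha = theta
    g = np.asarray(Gamma) * np.asarray(Q_max)        # delivered thermal power per unit control input
    u = cp.Variable((4, N_c), nonneg=True)           # normalized powers of cc, hc, rc, rad
    s_lo = cp.Variable(N_c, nonneg=True)             # lower-bound slack
    s_hi = cp.Variable(N_c, nonneg=True)             # upper-bound slack
    T = cp.Variable(N_c + 1)

    cost, constr = 0, [T[0] == T_z0, u <= 1]
    for i in range(N_c):
        constr += [T[i + 1] == T[i] + dt / C_z * ((T_amb[i] - T[i]) / R_w
                                                  + g @ u[:, i] + q_int_A[i] + alpha * H_glo[i]),
                   T[i + 1] >= T_min[i] - s_lo[i],
                   T[i + 1] <= T_max[i] + s_hi[i]]
        # Energy cost per step (power-to-energy conversion factors omitted) plus slack penalties.
        cost += price[i] * (np.abs(g) @ u[:, i]) + mu * (s_lo[i] + s_hi[i])
    cp.Problem(cp.Minimize(cost), constr).solve()
    return u.value[:, 0]                             # apply only the first control move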
The estimated parameter is updated in an event-triggered way, i.e., θ̂_k =γ_k θ̂_k^* + (1 -γ_k) θ̂_k-1, where the binary variable γ_k indicates whether the parameter is updated.
The newly estimated parameter θ̂_k^* is obtained from historical data of length N_s through the optimization in <ref>:
θ̂_k^* = arg min_θ̂ J^s_k, J^s_k = ∑_i = k - N_s + 1^k - 1ξ_i ε_i^2(θ̂)
s.t. R̂_w∈ [R, R̅], Ĉ_z ∈ [C, C̅], α̂∈ [α, α̅]
ε_i(θ̂) = T_z,i + 1- f(T_z,i,u_i,e_i,θ̂),∀ i = k - 1, ⋯, k - N_s + 1
where R, C, α and R̅, C̅, α̅ are the constant lower and upper bounds of R_w, C_z and α.
The indicator γ_k is the output of the event trigger, which is designed by considering the N_t previous samples as in <ref>:
RMSE_k = √(N_t^-1∑_i = k - N_t^k - 1ε_i^2(θ̂_k)) > ρ⇔γ_k = 1
where ρ> 0 is the trigger threshold.
The choice of ρ is important but empirical, considering the trade-off between the trigger times and model-induced control accuracy.
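A compact sketch of this trigger logic is given below; identify_fn stands for the bounded least-squares SI routine sketched earlier, and the error buffer is assumed to hold the N_t most recent one-step prediction errors obtained with the currently used parameters.

import numpy as np

def event_triggered_update(theta_prev, recent_errors, rho, identify_fn):
    # gamma_k = 1 (re-identify) only if the RMSE over the last N_t samples exceeds rho.
    rmse = np.sqrt(np.mean(np.square(recent_errors)))
    if rmse > rho:
        return identify_fn(), True      # theta_k = theta_k^*
    return theta_prev, False            # theta_k = theta_{k-1}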
According to equations (<ref>), (<ref>) and (<ref>), the setup of the MPC algorithm requires concrete data on (i) forecasts of the electricity price, internal heat gains and ambient climate, (ii) the initial guess of the thermal envelope parameters and their boundaries, (iii) the thermal zone and the properties of the HVAC components connected to it, including nominal powers and efficiencies, (iv) the specific control algorithm settings for the use case, e.g., horizons and threshold, and (v) the sensor measurements of the states of the studied thermal zone, to which the semantic graph corresponds (described in the semantic integration layer subsection).
Regarding the sources, data (i) can be derived from the forecast service, data (ii) to (iv) originate from the BIM model and the BMS and are stored in the graph DB, while data (v) are stored in the time-series DB. To fetch data (i)–(iv), SPARQL queries against the graph DB are sufficient. To retrieve data (v), the appropriate sensor ID first needs to be queried using SPARQL; afterward, the sensor ID as well as the time range need to be encoded into a Flux query to get the data out of InfluxDB. For illustration, the SPARQL query to retrieve all sensors related to the observed thermal zone is shown in Figure <ref>.
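To complement the SPARQL step, a minimal sketch of the subsequent time-series retrieval with the official influxdb-client package is shown below; the URL, token, organization, bucket and tag name are placeholders rather than the configuration used in the case study.

from influxdb_client import InfluxDBClient

def fetch_sensor_series(sensor_id, start="-7d", url="http://localhost:8086",
                        token="<token>", org="<org>", bucket="bms"):
    # Encode the sensor ID (obtained via SPARQL) and the time range into a Flux query.
    flux = f'''
        from(bucket: "{bucket}")
          |> range(start: {start})
          |> filter(fn: (r) => r["sensor_id"] == "{sensor_id}")
    '''
    client = InfluxDBClient(url=url, token=token, org=org)
    try:
        tables = client.query_api().query(flux)
    finally:
        client.close()
    return [(rec.get_time(), rec.get_value()) for table in tables for rec in table.records]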
With the assistance of the semantic integration layer and the underlying semantic model, the data required by the MPC service are easily redirected to the correct components and then merged to instantiate the algorithm for the specific use case. In this manner, the re-usability of the MPC among buildings is improved.
§ RESULTS
§.§ Controller setup using the semantic graph
Here, we elaborate on how to set up the control algorithm with the semantic graph. An excerpt of the semantic graph deployed in the integration layer is shown in Figure <ref>.
The lower right corner of Figure <ref> demonstrates the modeling of building-related information. The studied zone is modeled as a zone instance with a volume of 129.6 m^3 and an area of 48 m^2. It has 7 adjacent surfaces, including 4 walls, 1 floor, 1 ceiling, and 1 window. Taking the modeling of the window as an example, the window has an area of 6 m^2, a U-value of 0.2 W/m^2·K and a g-value of 0.7, whereas its thermal capacitance is neglected. The properties of the other surfaces are modeled in a similar way, with one additional property for the area-specific capacitance.
Using the data above, the initial guess and bounds of the RC parameters are calculated as θ̂_0=[0.017K/W,6.6MJ/K,4.2m^2] and Θ̂_0=[0.1θ̂_0, 10θ̂_0], which are used by the event-triggered SI module in the MPC service.
Moreover, the information on the sensor data points connected to the zone is queried using the SPARQL query described in Figure <ref>, and the results are listed in Table <ref>. In total, four sensors are related to the zone, measuring T_max, T_min, T_z and the occupancy head count. Using the sensor ID, the historic measurements made by a sensor are retrieved from the time-series DB via a Flux query. The historic measurements of T_z are taken as an example and shown in Table <ref>.
The historical measurements of occupancy and T_z are sent to the MPC service when the SI procedure is triggered, while the real-time measurement of the state T_z is sent to the MPC service every 5 minutes. Note that the historical time-series weather data related to the building's geographical location is sent to the MPC service in a similar way and is not detailed here.
The upper right corner of the figure illustrates the HVAC systems.
The VAV system exchanges fluids (supply and return air) directly with the thermal zone, whereas the radiator exchanges heat directly with it.
The radiator has nominal properties, such as Q̇^rad_max of 2787 W and η_rad of 0.9, and the dynamic property Q̇^rad, which is monitored and also optimized by the MPC procedure. The rest of the HVAC system's topology and properties are modeled in a similar way. The maximal powers of the heating coil, reheat coil, and cooling coil are 1477 W, 261 W, and -1814 W, with respective efficiencies of 0.8, 0.8, and 2.7, where negative power means cooling. The maximal power matrix Q̇_max and the efficiency matrix Γ are used to initialize the MPC at the beginning, and the historical measurements Q̂ are retrieved by the SI module when a system update is activated, in the same way as described in the last paragraph.
The forecaster information is modeled in the lower left corner of Figure <ref>.
Using the query in Figure <ref>, the forecast file paths are first retrieved via the forecast service (results in Table <ref>), and passed to the MPC service.
The information about the hyper-parameter settings in the MPC algorithm is modeled in the left upper corner of Figure <ref>.
The MPC service has two sub-modules: event-triggered SI and economic MPC. The former optimizes the RC model parameters to ensure the accuracy of the RC model, which is later used to predict T̂_z in the MPC; the latter optimizes the relative powers u of all related components to minimize the operating costs. For the event-triggered SI, the trigger threshold ρ is set to 0.1 °C and the trigger horizon N_t to 1 day to ensure accurate daily prediction. The SI horizon N_s (training data length) is set to 7 days, as recommended in <cit.>.
For the economic MPC module, we set the prediction horizon N_c to 8 hours, because the 1R1C model is not accurate for long-term prediction <cit.>.
§.§ Performance of MPC with Event-triggered SI
The simulation model in Modelica is exported as a Functional Mock-up Unit (FMU), defined by the Functional Mock-up Interface (FMI) standard, and simulated in Python with the PyFMI library.
The simulation results for July are presented in the following.
Figure <ref> shows the values of the RMSE in (<ref>) together with its threshold ρ and the estimated parameters θ̂_k over time in July.
The evaluation of the RMSE starts after 1 day, while the SI starts after 7 days in order to collect enough data.
The SI is triggered repeatedly in certain periods because the eventual convergence of θ̂ requires some new data.
Moreover, more pronounced changes are observed in R_w than in C_z and α, which results from their different sensitivities to the environmental boundary conditions and their different effects on the short-term (8 h) prediction accuracy.
Overall, SI is activated 3545 times among 6912 simulation steps when using the proposed event-trigger scheme, saving 49% of the computations. A comparison with MPC using MHE shows that the proposed event-triggered SI achieves similar control performance while demanding less computational effort.
Figure <ref> shows the performance of the MPC algorithm in July in comparison to the RBC defined in the system modeling section.
More specifically, diagram (a) shows the varying electricity price, diagram (b) the measured zone temperatures, diagram (c) the total heating power Q̇^h = Q̇^hc + Q̇^rh + Q̇^rad, and diagram (d) the cooling power Q̇^c = Q̇^cc.
According to the temperature profiles in Figure <ref>, the MPC algorithm controls the indoor climate better than the RBC, in terms of fewer violations of temperature constraints.
Especially during the weekends (07-15, 07-16 and 07-28), the RBC forces the HVAC system into the “Unoccupied-Off” mode and no cooling power is supplied, while the MPC algorithm predicts that a temperature peak is about to arrive and provides moderate cooling power.
Overall, MPC achieves an operating cost reduction of 12% compared to RBC.
Combined with the successful integration of the building design data and the sensor measurements explained above, the results demonstrate that the proposed semantic-assisted framework can be used in practice to instantiate the algorithm.
§ CONCLUSION
In this paper, a semantic-assisted control framework for an MPC algorithm with event-triggered SI is proposed.
The framework facilitates the MPC algorithm setup by integrating heterogeneous data sources via semantic modeling.
To ensure the computational efficiency and the accuracy of the MPC model at the same time, an event-triggered SI scheme is designed, where an educated initial guess and reasonable boundaries of the RC model parameters of the thermal envelope are automatically instantiated using the semantic graph.
The effectiveness of the proposed MPC algorithm is validated via simulations, where lower operating costs and better indoor temperature control are achieved compared to the legacy RBC sequence. In future work, the proposed framework will be verified on a real building.
§ ACKNOWLEDGMENT
The authors would like to thank Dr. Philipp Kotman for the informative feedback dedicated to this paper.
Xiaobing Dai is supported by the BMBF “Souverän. Digital. Vernetzt.” joint project 6G-life: 16KISK002.
Audio-visual End-to-end Multi-channel Speech Separation, Dereverberation and Recognition
Guinan Li, Jiajun Deng, Mengzhe Geng, Zengrui Jin, Tianzi Wang, Shujie Hu,
Mingyu Cui, Helen Meng, Fellow, IEEE, Xunying Liu, Member, IEEE
Guinan Li, Jiajun Deng, Mengzhe Geng, Zengrui Jin, Tianzi Wang, Shujie Hu, Mingyu Cui are with the Chinese University of Hong Kong, China (email: {gnli, jjdeng, mzgeng, zrjin, twang, sjhu, mycui}@se.cuhk.edu.hk)
Helen Meng is with the Chinese University of Hong Kong, China (email: [email protected]).
Xunying Liu is with the Chinese University of Hong Kong, China and the corresponding author (email: [email protected]).
Accurate recognition of cocktail party speech containing overlapping speakers, noise and reverberation remains a highly challenging task to date. Motivated by the invariance of visual modality to acoustic signal corruption, an audio-visual multi-channel speech separation, dereverberation and recognition approach featuring a full incorporation of visual information into all system components is proposed in this paper. The efficacy of the video input is consistently demonstrated in mask-based MVDR speech separation, DNN-WPE or spectral mapping (SpecM) based speech dereverberation front-end and Conformer ASR back-end. Audio-visual integrated front-end architectures performing speech separation and dereverberation in a pipelined or joint fashion via mask-based WPD are investigated. The error cost mismatch between the speech enhancement front-end and ASR back-end components is minimized by end-to-end jointly fine-tuning using either the ASR cost function alone, or its interpolation with the speech enhancement loss. Experiments were conducted on the mixture overlapped and reverberant speech data constructed using simulation or replay of the Oxford LRS2 dataset. The proposed audio-visual multi-channel speech separation, dereverberation and recognition systems consistently outperformed the comparable audio-only baseline by 9.1% and 6.2% absolute (41.7% and 36.0% relative) word error rate (WER) reductions. Consistent speech enhancement improvements were also obtained on PESQ, STOI and SRMR scores[Enhanced audio examples for demonstration purposes are available in https://liguinan.github.io/AV-E2E-MC-ASR].
Audio-visual, Speech separation, Speech dereverberation, Speech recognition, End-to-end, Conformer
§ INTRODUCTION
Despite the rapid progress of automatic speech recognition (ASR) in the past few decades, accurate recognition of cocktail party speech <cit.> remains a highly challenging task to date.
Its difficulty can be attributed to multiple sources of interference including overlapping speakers, background noise and room reverberation. These lead to a large mismatch between the resulting mixture speech and clean signals.
To this end, microphone arrays play a key role in state-of-the-art speech enhancement and recognition systems designed for cocktail party overlapped speech and far-field scenarios <cit.>.
The required array beamforming techniques used to perform multi-channel signal integration are normally implemented as either time or frequency domain filters.
These are represented by time domain delay and sum <cit.>, frequency domain minimum variance distortionless response (MVDR) <cit.> and generalized eigenvalue (GEV) <cit.> based multi-channel integration approaches.
Earlier generations of mixed speech separation and recognition systems featuring conventional multi-channel array beamforming techniques typically used a pipelined system architecture.
It contains separately constructed speech enhancement front-end modules designed to perform speech separation, dereverberation as well as denoising tasks, and speech recognition back-end components.
With the wider application of deep neural networks (DNNs) based speech technologies, microphone array beamforming techniques have also evolved into a rich variety of neural network based designs in recent few years.
These include:
a) neural time-frequency (TF) masking approaches <cit.> used to predict spectral mask labels for a reference channel that specify whether a particular TF spectrum point is dominated by the target speaker or interfering sources to facilitate speech separation;
b) neural Filter and Sum approaches directly estimating the beamforming filter parameters in either time domain <cit.> or frequency domain <cit.> to produce the separated outputs;
and c) mask-based MVDR <cit.>, and mask-based GEV <cit.> approaches utilizing DNN estimated TF masks to compute target speaker and noise specific speech power spectral density (PSD) matrices and to obtain the beamforming filter parameters, while alleviating the need of explicit direction of arrival (DOA) estimation.
In many practical applications, reverberation presents a further challenge which can lead to severe speech recognition performance degradation <cit.> when such systems are trained on anechoic and non-reverberant data.
Classical solutions to the resulting dereverberation problem represented by, for example, weighted prediction error (WPE) <cit.>, require the estimation of a time delayed linear filter.
In recent years, there has been a similar trend of conventional speech dereverberation approaches <cit.> such as WPE evolving into their current DNN based variants. These include:
a) the DNN-WPE <cit.> method, which uses neural network estimated target signal PSD matrices in place of those traditionally obtained using maximum likelihood estimation trained complex value Gaussian Mixture Models <cit.> in the dereverberation filter estimation;
and b) complex spectral masking <cit.> and spectral mapping <cit.> learning a transformation between reverberant and anechoic data.
End-to-end all neural microphone array based speech enhancement and recognition systems present a comprehensive and overarching solution to the cocktail party speech problem by simultaneously performing speech separation, denoising and dereverberation. However, efforts on developing such systems are confronted by a number of key research challenges.
1) Full incorporation of video modality: Motivated by the bimodal nature of human speech perception and the invariance of visual information to extrinsic acoustic corruption, there has been a long history of developing audio-visual speech enhancement <cit.> and recognition <cit.> techniques.
When processing the cocktail mixed speech, a holistic, consistent incorporation of visual information in all components of the entire system (speech separation, dereverberation and recognition) is preferred.
In contrast, among existing researches, video information has mainly been partially incorporated into:
a) the speech enhancement (separation and/or dereverberation) front-end <cit.> alone;
or b) the speech recognition back-end <cit.> only.
More recent works used video information in both the multi-channel speech separation and ASR <cit.>, but not in speech dereverberation.
2) Integration between speech separation and dereverberation modules: Surface reflection of speech signals in reverberant environments distorts the DOA or TF-mask estimation for the target speaker.
At the same time, interfering sound sources also impact the dereverberation filter estimation. Hence, a suitable form of integration between the speech separation and dereverberation techniques is required within the speech enhancement front-end sub-system.
Possible integration solutions include:
a) a pipelined architecture within which the speech separation and dereverberation components are sequentially connected in any order such as the previous researches in <cit.>;
or b) a single architecture where both these two enhancement functions are implemented, for example, using weighted power minimization distortionless response (WPD) <cit.> and the related DNN TF-mask based WPD <cit.> approaches. To date, such integration problem has only been investigated for audio-only speech enhancement <cit.>, but has not been studied for audio-visual speech separation and dereverberation.
3) Joint optimization of audio-visual speech enhancement front-end and recognition back-end: Conventional non-DNN based speech enhancement front-end models are often separately constructed and cannot be easily integrated with the ASR back-end.
The wide application of deep learning approaches for speech enhancement and recognition components allows them to be more tightly integrated and consistently optimized in an end-to-end manner.
An improved trade-off between the speech enhancement front-end loss function and ASR accuracy can then be obtained, for example, using multi-task learning <cit.>.
To date, such joint speech enhancement front-end and ASR back-end optimization has been only conducted among: a) audio-only speech enhancement and recognition systems using no video input <cit.>; or b) audio-visual speech separation and recognition tasks only while not considering speech dereverberation <cit.>.
Hence, there is a pressing need to derive suitable joint optimization methods for a complete audio-visual multi-channel speech separation, dereverberation and recognition system.
In order to address the above issues, an audio-visual multi-channel speech separation, dereverberation and recognition approach featuring a full incorporation of visual information into all three components of the entire system is proposed in this paper.
The efficacy of the video input is consistently demonstrated when being used in the mask-based MVDR speech separation, DNN-WPE or spectral mapping (SpecM) based speech dereverberation front-end and Conformer encoder-decoder based ASR back-end components.
Both the pipelined integration methods using either a) a serial connection of the audio-visual speech separation component with the following dereverberation module; or b) audio-visual speech dereverberation followed by separation; and c) joint speech separation and dereverberation via audio-visual mask-based WPD are investigated.
In order to reduce the error cost mismatch between the speech enhancement front-end and ASR back-end components, they are jointly fine-tuned using either only the Conformer ASR cost function (CTC plus Attention) <cit.>, or the ASR cost function interpolated with the speech enhancement loss based on mean square error (MSE) and scale-invariant signal to noise ratio (SISNR).
Experiments conducted on the mixture overlapped and reverberant speech data constructed using either simulation or replay of the benchmark Oxford LRS2 dataset <cit.> suggest:
1) The proposed audio-visual multi-channel speech separation, dereverberation and recognition systems consistently outperformed the comparable audio-only baseline systems by 9.1% and 6.2% absolute (41.7% and 36.0% relative) word error rate (WER) reductions on the LRS2 simulated and replayed evaluation datasets, respectively. Consistent improvements of perceptual evaluation of speech quality (PESQ) <cit.>, short-time objective intelligibility (STOI) <cit.> and speech to reverberation modulation energy ratio (SRMR) <cit.> scores were also obtained.
2) In particular, when compared with audio-only dereverberation, incorporating visual information into the DNN-WPE or SpecM based dereverberation module produced consistent improvements of PESQ, STOI and SRMR scores and a statistically significant[Matched pairs sentence-segment word error (MAPSSWE) based statistical significance test <cit.> was performed at a significance level α=0.05.] WER reduction by up to 1.9% absolute (5.9% relative), irrespective of the form of integration between speech separation and dereverberation components.
3) Among different architectures to integrate the speech separation and dereverberation components within the front-end, a pipelined, full audio-visual configuration performing DNN-WPE based speech dereverberation followed by mask-based MVDR speech separation using video input in both stages produced the best overall speech enhancement and recognition performance.
4) Consistent WER reductions and improvements on speech enhancement metric scores were also obtained after joint fine-tuning the entire audio-visual speech separation, dereverberation and recognition system in a fully end-to-end manner.
The main contributions of this paper are summarized below:
1) To the best of our knowledge, this paper presents the first use of a complete audio-visual multi-channel speech separation, dereverberation and recognition system architecture featuring a full incorporation of visual information into all three stages. In contrast, prior researches incorporate visual modality in either only the speech enhancement front-end <cit.>, ASR back-end <cit.>, or both the multi-channel speech separation and recognition stages <cit.> but excluding the dereverberation component.
2) This paper presents a more complete investigation of the advantages of audio-visual dereverberation approaches versus audio-only dereverberation methods based on DNN-WPE and SpecM. In contrast, similar prior studies <cit.> were conducted only in the context of SpecM based dereverberation.
3) To the best of our knowledge, this is the first work that systematically investigates the suitable form of integration between the full audio-visual speech separation and dereverberation modules within the speech enhancement front-end. In contrast, similar studies in previous researches were only conducted for audio-only speech enhancement <cit.>.
4) This paper presents the first research to demonstrate that performing an end-to-end joint optimization is useful for training a complete audio-visual multi-channel speech separation, dereverberation and recognition system. In contrast, related prior studies were conducted only in the context of audio-only speech enhancement and recognition <cit.>.
We hope these findings above will provide valuable insights for the practical development of state-of-the-art audio-visual speech separation, dereverberation and recognition systems for cocktail party and far-field scenarios.
The rest of the paper is organized as follows. Audio-visual multi-channel speech separation is reviewed in Section II. Section III presents audio-visual multi-channel speech dereverberation. Integrated audio-visual speech separation and dereverberation approaches are proposed in Section IV. Section V presents the audio-visual Conformer ASR back-end component and its joint fine-tuning with the speech enhancement front-end. Experimental data setup and results are presented in Section VI and VII, respectively. Section VIII draws the conclusion and discusses future research directions.
§ AUDIO-VISUAL MULTI-CHANNEL SPEECH SEPARATION
In this section, the multi-channel far-field speech signal model is reviewed first, before the audio-visual multi-channel mask-based MVDR approach for speech separation is presented.
§.§ Multi-channel Far-field Signal Model
In the far-field scenarios, the short-time Fourier transform (STFT) spectrum of the received multi-channel speech signal 𝐲(t, f) ∈ℂ^R recorded by a microphone array consisting of R channels can be modeled as:
𝐲(t, f) = 𝐱(t, f) + 𝐧(t, f) = 𝐠(f)S(t, f) + 𝐧(t, f),
where t and f denote the indices of time and frequency bins, respectively. 𝐱(t, f) ∈ℂ^R is a complex vector containing the clean speech signals received by the array channels. 𝐧(t, f) ∈ℂ^R represents either the interfering speaker’s speech or additive background noise alone, or a combination of both. 𝐠(f) ∈ℂ^R denotes the array steering vector and S(t, f) is the STFT spectrum of the target speaker's clean speech.
§.§ Mask-based MVDR
Classic acoustic beamforming approaches <cit.> are designed to capture the speech from the target speaker’s direction while attenuating the interfering sounds coming from other locations. This is realized by setting, or “steering", the beamforming filter parameters to the target direction. Taking the MVDR beamformer as an example, a linear filter 𝐰_MVDR(f) ∈ℂ^R is applied to the multi-channel mixture speech spectrum 𝐲(t, f) to produce the filtered output Ŝ_MVDR(t,f) as:
Ŝ_MVDR(t, f) = 𝐰_MVDR(f)^H 𝐲(t, f)
= 𝐰_MVDR(f)^H 𝐱(t, f) (target speech component) + 𝐰_MVDR(f)^H 𝐧(t, f) (residual noise),
where (·)^H denotes the conjugate transpose operator.
The MVDR beamformer is designed to minimize the residual noise output while imposing a distortionless constraint on the target speech <cit.>, which can be formulated as
min _𝐰_MVDR(f)∑_t | 𝐰_MVDR(f)^H𝐧(t, f) |^2,
subject to : ∑_t |(𝐮_r-𝐰_MVDR(f))^H𝐱(t, f) |^2=0,
where 𝐮_r=[0,0, …, 1, …, 0]^T ∈ℝ^R is a one-hot reference vector whose r-th component equals one. (·)^T denotes the transpose operator. Without loss of generality, we select the first channel, i.e., r=1, as the reference channel among the R channels throughout this paper.
The distortionless constraint in the above optimization problem is equivalent to 𝐰_MVDR(f)^H𝐠(f)=1, which can be interpreted as maintaining the energy along the target direction.
The MVDR beamforming filter is estimated as
𝐰_MVDR(f)=Φ_n(f)^-1𝐠(f)/𝐠(f)^HΦ_n(f)^-1𝐠(f)=Φ_n(f)^-1Φ_x(f)/tr(Φ_n(f)^-1Φ_x(f))𝐮_r,
where the target speaker and noise specific power spectral density (PSD) matrices
Φ_x(f) =∑_t(M^x_MVDR(t, f) 𝐲(t, f))(M^x_MVDR(t, f) 𝐲(t, f))^H/∑_t M^x_MVDR(t, f) (M^x_MVDR(t, f))^*,
Φ_n(f) =∑_t(M^n_MVDR(t, f) 𝐲(t, f))(M^n_MVDR(t, f)𝐲(t, f))^H/∑_t M^n_MVDR(t, f)(M^n_MVDR(t, f))^*,
are computed using the DNN-predicted complex TF masks M^x_MVDR(t, f)∈ℂ and M^n_MVDR(t, f) ∈ℂ <cit.>. tr(·) denotes the trace operator and (·)^* the complex conjugate operator.
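For illustration only, a minimal numpy sketch of this mask-based PSD and MVDR filter estimation is given below. It is a schematic rather than the implementation used in our experiments; the tensor layout, the flooring constant eps, the reference-channel default and all function names are our own assumptions.

```python
import numpy as np

def masked_psd(Y, M, eps=1e-8):
    """Mask-weighted PSD matrix per frequency bin.
    Y: (T, F, R) complex mixture STFT; M: (T, F) complex TF mask."""
    MY = M[..., None] * Y                                        # masked observations
    num = np.einsum('tfr,tfs->frs', MY, np.conj(MY))             # sum_t (My)(My)^H
    den = np.sum(M * np.conj(M), axis=0).real[:, None, None]     # sum_t M M^*
    return num / (den + eps)                                     # (F, R, R)

def mvdr_filter(phi_x, phi_n, ref=0, eps=1e-8):
    """Souden-style MVDR solution w(f) = Phi_n^-1 Phi_x / tr(Phi_n^-1 Phi_x) u_r."""
    F, R, _ = phi_x.shape
    u = np.zeros(R); u[ref] = 1.0
    w = np.zeros((F, R), dtype=complex)
    for f in range(F):
        numerator = np.linalg.solve(phi_n[f] + eps * np.eye(R), phi_x[f])
        w[f] = (numerator / (np.trace(numerator) + eps)) @ u
    return w

def beamform(w, Y):
    """S_hat(t, f) = w(f)^H y(t, f)."""
    return np.einsum('fr,tfr->tf', np.conj(w), Y)
```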
§.§ Audio Modality
As illustrated in the top left corner of Fig. <ref>, three types of audio features, including the complex STFT spectrum of all the microphone array channels, the inter-microphone phase differences (IPDs) <cit.> and the location-guided angle feature (AF) <cit.>, are adopted as the audio inputs. IPD features are used to capture the relative phase difference between different microphone channels and provide additional spatial cues for mask-based multi-channel speech separation.
Angle features based on the approximated DOA of the target speaker[The target speaker is located using a 180-degree wide-angle camera to track the speaker’s face.
The camera-approximated DOA of the target speaker is only used in the AF features.] are also incorporated to provide further spatial filtering constraints.
In this work, the approximated DOA of the target speaker is obtained by tracking the speaker’s face from a 180^∘ wide-angle camera (Fig. <ref>, bottom left corner).
Following prior research on audio-visual multi-channel speech separation <cit.>, the temporal convolutional network (TCN) architecture <cit.>, which uses a long receptive field to capture richer contextual information, is used in our separation system. As shown in the left of Fig. <ref>, each TCN block is built by stacking 8 Dilated 1-D ConvBlocks with exponentially increasing dilation factors 2^0, 2^1, …, 2^7. As shown in the top left corner of Fig. <ref>, the log-power spectrum (LPS) features of the reference microphone channel are concatenated with the IPD and AF features before being fed into a single TCN module based Audio Block to compute the audio embeddings 𝐀∈ℝ^F_a × T_a, where F_a is the dimension of the audio embeddings and T_a is the number of audio frames.
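As a minimal illustration (not our actual feature pipeline), the LPS and IPD inputs can be computed as sketched below. The AF additionally requires the camera-approximated DOA and the array geometry, which are omitted here, and some implementations use the cosine/sine of the phase difference rather than the raw angle.

```python
import numpy as np

def lps_and_ipd(Y, mic_pairs, ref=0, eps=1e-8):
    """Y: (T, F, R) complex multi-channel STFT; mic_pairs: list of (i, j) channel indices.
    Returns the reference-channel LPS (T, F) and the IPD features (T, F, P)."""
    lps = np.log(np.abs(Y[..., ref]) ** 2 + eps)
    ipd = np.stack([np.angle(Y[..., i] * np.conj(Y[..., j])) for i, j in mic_pairs], axis=-1)
    return lps, ipd
```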
§.§ Visual Modality
The lip region of a target speaker obtained via face tracking is fed into a LipNet <cit.> which consists of a 3D convolutional layer (Fig. <ref>, bottom left, in pink) and an 18-layer ResNet <cit.> (Fig. <ref>, bottom left, in light turquoise), to extract the visual features from the target speaker’s lip movements.
Before fusing the visual features with the audio embeddings to improve the TF masks estimation, the visual features are firstly fed into the linear layer followed by the Visual Block containing five Visual Conv1DBlocks (Fig. <ref>, bottom, in light brown, the detailed network architecture is illustrated in the right of Fig. <ref>), and then the output of Visual Block is up-sampled to be time synchronised with the audio frames via linear interpolation to compute the visual embeddings 𝐕∈ℝ^F_v × T_a, where F_v is the dimension of visual embeddings. In this work, the LipNet model is pretrained on the lipreading task as described in <cit.>.
§.§ Modality Fusion
In order to effectively integrate the audio and visual embeddings, a factorized attention-based modality fusion method <cit.> is utilized in the audio-visual speech separation module.
As shown in Fig. <ref> (middle up), the acoustic embeddings at frame index t, denoted by 𝐀(t), are first factorized into K acoustic subspace vectors [𝐞_1^a(t), 𝐞_2^a(t),…, 𝐞_K^a(t)] by a series of parallel linear transformations 𝐏_k^a ∈ℝ^F_a × F_a. The visual embeddings at frame index t, denoted by 𝐕(t), are mapped into a K-dimensional vector 𝐞^v(t)=[e_1^v(t), e_2^v(t), …,e_K^v(t)]^T by the projection matrix 𝐏^v ∈ℝ^K × F_v as
[𝐞_1^a(t), 𝐞_2^a(t), …, 𝐞_K^a(t)]= [𝐏_1^a, 𝐏_2^a, …, 𝐏_K^a]𝐀(t),
𝐞^v(t)=Softmax( 𝐏^v 𝐕(t)),
Then the fused audio-visual embeddings 𝐀𝐕(t) ∈ℝ^F_a are
𝐀 𝐕(t)=σ(∑_k=1^K e_k^v(t) 𝐞_k^a(t)),
where σ(·) is the sigmoid function.
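A per-frame numpy sketch of this factorized attention fusion is given below for clarity. In the actual model the projections are learned linear layers, so the explicit matrices, softmax and sigmoid here are only illustrative and not our implementation.

```python
import numpy as np

def factorized_fusion(A_t, V_t, P_a, P_v):
    """A_t: (F_a,) audio embedding at frame t; V_t: (F_v,) visual embedding;
    P_a: (K, F_a, F_a) parallel audio projections; P_v: (K, F_v) visual projection."""
    e_a = np.einsum('kij,j->ki', P_a, A_t)                  # K acoustic subspace vectors
    logits = P_v @ V_t                                       # (K,)
    e_v = np.exp(logits - logits.max()); e_v /= e_v.sum()    # softmax over the K subspaces
    av = (e_v[:, None] * e_a).sum(axis=0)                    # sum_k e_k^v(t) e_k^a(t)
    return 1.0 / (1.0 + np.exp(-av))                         # sigmoid -> AV(t), shape (F_a,)
```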
The above audio-visual embeddings are fed into both the Target Speech Block and the Noise Block (Fig. <ref>, center), before their respective outputs are further fed into the corresponding linear layers (Fig. <ref>, top right, yellow blocks) to estimate the complex TF masks M^x_MVDR(t, f)∈ℂ and M^n_MVDR(t, f) ∈ℂ required by the target speech and noise PSD matrices in Eqns. (<ref>) and (<ref>)
for MVDR filter estimation. After MVDR filtering, the separated target speech spectrum is inverse STFT (iSTFT) transformed to produce the corresponding waveform.
§.§ Separation Network Training Cost Function
Following prior research <cit.>, the mask-MVDR based multi-channel speech separation network is trained to maximize the SISNR metric, unless further joint fine-tuning with the back-end ASR error loss later presented in Section <ref> is performed.
§ AUDIO-VISUAL MULTI-CHANNEL SPEECH DEREVERBERATION
In this section, the multi-channel far-field signal model is reformulated with additional reverberation.
Audio-visual multi-channel speech dereverberation approaches based on audio-visual DNN-WPE and SpecM are then proposed.
The incorporation of visual features and their fusion with the audio modality in both methods are also presented.
§.§ Multi-channel Far-field Signal Model with Reverberation
In reverberant conditions, the target speech signal 𝐱(t, f) of Eqn. (<ref>) is further decomposed into two parts.
The first part consists of the direct signal and early reflections, referred to as the desired signal 𝐝(t, f) ∈ℂ^R, while the other contains the late reverberation 𝐫(t, f) ∈ℂ^R. This is given by
𝐱(t, f) = ∑_τ=0^D-1𝐚(τ, f) S(t-τ, f) (the desired signal 𝐝(t, f)) + ∑_τ=D^D+L-1𝐚(τ, f) S(t-τ, f) (the late reverberation 𝐫(t, f)),
where D denotes the prediction delay parameter and L is the number of filter taps. 𝐚(τ, f) ∈ℂ^R is the room reverberant transfer function from a given speaker to all microphones for τ∈{ 0, 1, …, D+L-1 }.
The dereverberation process requires the desired signal 𝐝(t, f) to be preserved, to enhance speech intelligibility and improve ASR performance, while the late reverberation 𝐫(t, f) is to be eliminated <cit.>.
§.§ DNN-WPE Based Dereverberation
In conventional WPE <cit.>,
the dereverberated signal 𝐝̂(t, f) can be obtained by applying the WPE filter 𝐖_WPE(f) ∈ℂ^LR × R to the reverberant multi-channel signal as follows:
𝐝̂(t, f) = 𝐱(t, f) - 𝐖_WPE(f)^H 𝐱̃(t-D, f),
where 𝐱̃(t-D, f) = [𝐱(t-D, f)^T, …, 𝐱(t-D-L+1, f)^T]^T∈ℂ^LR
is the time-delayed reverberant speech spectrum vector.
The required WPE filter coefficients are traditionally estimated using maximum likelihood estimation <cit.>.
It is assumed that the desired signal at each microphone follows a time-varying complex Gaussian distribution with a mean of zero and a time-varying variance λ(t, f), which corresponds to the power of the desired signal. Minimizing the average power of the frame prediction errors weighted by λ^-1(t, f),
min_{𝐖_WPE(f), λ(t, f)}∑_t ‖𝐱(t, f) -𝐖_WPE(f)^H 𝐱̃(t-D, f)‖_2^2/λ(t, f),
leads to alternating updates between the WPE filter parameters,
𝐖_WPE(f) = (∑_t𝐱̃(t-D, f) 𝐱̃(t-D, f)^H/λ(t,f))^-1
(∑_t𝐱̃(t-D, f) 𝐱(t, f)^H/λ(t,f))
and the residual signal power given the current WPE filter
λ(t,f) = 1/R‖𝐝̂(t, f)‖_2^2,
where ‖·‖_2 denotes the Euclidean norm. The above alternating estimation procedure iterates until convergence.
A recent deep neural network extension to WPE led to the DNN-WPE approach <cit.>, where the filtered signal power λ(t,f) is estimated using a DNN (e.g. LSTM <cit.>) predicted TF complex mask[Alternatively, using a channel-dependent predicted mask M_WPE^r(t,f) produced comparable performance in practice while increasing the system training time approximately by a factor of 5, and is therefore not considered.]
M_WPE(t,f) ∈ℂ. This is given by
λ(t,f) = 1/R‖ M_WPE(t,f) 𝐱(t,f)‖_2^2.
An example of DNN-WPE based dereverberation is shown in Fig. <ref> (top right, in light blue).
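To make the alternating estimation concrete, a single-frequency-bin numpy sketch of the (DNN-)WPE recursion is given below. It is not our implementation: in DNN-WPE the power λ(t, f) would be supplied by the predicted mask rather than the iterative estimate, and the tap, delay and flooring values are placeholders.

```python
import numpy as np

def wpe_one_bin(X, delay=2, taps=10, iters=3, eps=1e-8):
    """X: (T, R) complex STFT of one frequency bin. Returns the dereverberated (T, R) signal."""
    T, R = X.shape
    # stacked, time-delayed observations x_tilde(t - D, f) of shape (T, taps * R)
    X_tilde = np.zeros((T, taps * R), dtype=complex)
    for tau in range(taps):
        shift = delay + tau
        X_tilde[shift:, tau * R:(tau + 1) * R] = X[:T - shift]
    D_hat = X.copy()
    for _ in range(iters):
        lam = np.maximum(np.mean(np.abs(D_hat) ** 2, axis=1), eps)   # desired-signal power
        A = (X_tilde.T / lam) @ X_tilde.conj()                        # sum_t x~ x~^H / lambda
        B = (X_tilde.T / lam) @ X.conj()                              # sum_t x~ x^H / lambda
        W = np.linalg.solve(A + eps * np.eye(taps * R), B)            # WPE filter, (taps*R, R)
        D_hat = X - X_tilde @ W.conj()                                # d = x - W^H x~
    return D_hat
```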
§.§ SpecM Based Dereverberation
In addition to DNN-WPE based dereverberation, SpecM based dereverberation is also leveraged in this work. A neural network based TF spectral transformation between the input reverberant and desired anechoic speech spectrum is learned as follows:
𝐝̂(t, f) = W_SpecM(t,f) 𝐱(t,f)=M_SpecM(t,f) 𝐱(t,f),
where W_SpecM(t,f) ∈ℂ denotes the SpecM filter and M_SpecM(t,f)∈ℂ is the estimated complex TF-mask for SpecM based dereverberation.
An example of SpecM based speech dereverberation is shown in Fig. <ref> (bottom right, in light yellow). Compared with DNN-WPE, although the SpecM based dereverberation approach can provide perceptually enhanced sounds, it has been reported that the artifacts resulting from deterministic spectral masking introduced a negative impact on downstream speech recognition system performance <cit.>.
§.§ Audio-visual Speech Dereverberation
The audio and video embeddings previously used in the mask-based MVDR speech separation network of Section <ref> and Fig. <ref> are concatenated[Alternative audio-visual modality fusion methods, e.g. using the factorized attention based fusion mechanism of Section <ref> for speech separation, led to performance degradation in practice and are therefore not considered.]
before being fed into an AV Fusion Block consisting of three TCN modules to produce the integrated audio-visual embeddings (Fig. <ref>, left).
These audio-visual embeddings are then forwarded into linear layers (Fig. <ref>, right, yellow blocks) to estimate the complex TF masks of the desired speech for either DNN-WPE (Fig. <ref>, top right, light blue) or SpecM (Fig. <ref>, bottom right, light yellow) based dereverberation filter estimation. In this work, the dereverberation network is trained in both cases using the MSE loss computed between the filtered and ground-truth anechoic speech spectrum<cit.>.
§ AUDIO-VISUAL SEPARATION AND DEREVERBERATION
In this section, three integrated audio-visual speech separation and dereverberation architectures are proposed.
These include: a) a serial pipelined connection of the audio-visual speech separation component with the following dereverberation module; or b) conversely audio-visual speech dereverberation followed by separation; and c) joint speech separation & dereverberation using audio-visual mask-based WPD.
§.§ Audio-visual Speech Separation-Dereverberation
In the audio-visual speech separation-dereverberation architecture, the multi-channel mixture speech spectra 𝐲(t,f) ∈ℂ^R as well as the extracted visual features and the camera captured target speaker's DOA from the Visual Front-end module (e.g. Fig. <ref>, bottom left corner, in light green) are first fed into the MVDR separation module as shown in Fig. <ref>(a) to produce single-channel outputs, Ŝ_MVDR(t, f), before being connected to the dereverberation module based on DNN-WPE or SpecM as shown in Fig. <ref> to obtain the final enhanced speech d̂_MVDR-WPE(t, f) ∈ℂ or d̂_MVDR-SpecM(t, f) ∈ℂ, respectively.
When DNN-WPE based dereverberation is used, this is computed in a two stage, pipelined manner as
Ŝ_MVDR(t, f) =𝐰_MVDR(f)^H 𝐲(t, f),
d̂_MVDR-WPE(t, f) = Ŝ_MVDR(t, f) - 𝐖_WPE(f)^H𝐬̂_MVDR(t-D, f),
where
𝐬̂_MVDR(t-D, f) =[Ŝ_MVDR(t-D, f), …, Ŝ_MVDR(t-D-L+1, f)]^T
denotes the enhanced single-channel output of the MVDR beamformer from the past L frames and 𝐬̂_MVDR(t-D, f) ∈ℂ^L. Here, 𝐖_WPE(f) ∈ℂ^L represents the single-channel WPE filter.
L is the number of filter taps and D denotes the prediction delay parameter in WPE.
When SpecM based dereverberation is used, the final enhanced single-channel speech spectrum is computed as
Ŝ_MVDR(t, f) =𝐰_MVDR(f)^H 𝐲(t, f),
d̂_MVDR-SpecM(t, f) = W_SpecM(t,f) Ŝ_MVDR(t, f).
§.§ Audio-visual Speech Dereverberation-Separation
In contrast to the above, connecting the speech dereverberation and separation modules in a reverse order leads to the audio-visual speech dereverberation-separation architecture.
The sequence of filtering operations of this architecture is performed
as follows:
When using DNN-WPE based dereverberation, the dereverberated multi-channel output 𝐝̂_WPE(t, f) is first produced, before being fed into the MVDR separation filter to produce the final single-channel speech spectrum Ŝ_WPE-MVDR(t, f) as
𝐝̂_WPE(t, f) =𝐲(t, f)-𝐖_WPE(f)^H 𝐲̃(t-D, f),
Ŝ_WPE-MVDR(t, f) = 𝐰_MVDR(f)^H 𝐝̂_WPE(t, f),
where
𝐲̃(t-D, f) =[𝐲(t-D, f)^T, …, 𝐲(t-D-L+1, f)^T]^T∈ℂ^LR
denotes the stacked vector representation of the input multi-channel mixture speech signal.
When using SpecM based dereverberation, the above can be expressed as
𝐝̂_SpecM(t, f) = W_SpecM(t,f) 𝐲(t, f),
Ŝ_SpecM-MVDR(t, f) = 𝐰_MVDR(f)^H 𝐝̂_SpecM(t, f).
§.§ Audio-visual Joint Speech Separation & Dereverberation
Combining the multi-channel speech separation and dereverberation
functions into a single convolutional filter leads to a joint
speech separation and dereverberation architecture, for example, based on
WPD <cit.>
and their DNN predicted mask-based variants <cit.>.
When producing the final enhanced speech spectrum, a single WPD filter 𝐰̃_WPD(f) ∈ℂ^(L+1)R is applied to the time-delayed multi-channel mixed speech vector stacked by 𝐲(t, f) ∈ℂ^R and 𝐲̃(t-D, f)^T ∈ℂ^LR as follows:
d̂(t, f)=𝐰̃_WPD(f)^H [𝐲(t, f)^T, 𝐲̃(t-D, f)^T ]^T.
The WPD beamformer is trained to minimize the average weighted power of the filtered signal while satisfying an orthogonal constraint for channel synchronization without distorting the target speech. This is given by
min _𝐰̃_WPD(f)∑_t |𝐰̃_WPD(f)^H [𝐲(t, f)^T, 𝐲̃(t-D, f)^T ]^T |^2/λ(t, f),
subject to : 𝐰̃_WPD(f)^H 𝐠̃(f)=1.
where the signal variance λ(t, f), averaged across the R channels as
λ(t,f) = 1/R∑_r=1^R | M^λ_WPD(t,f)Y_r(t,f) |^2,
is estimated using a DNN-predicted TF complex mask of the desired signal M^λ_WPD(t, f) ∈ℂ.
Y_r(t,f) represents the r-th component of the multi-channel mixture speech signal 𝐲(t,f).
𝐠̃(f)=[𝐠(f)^T, 0, …, 0]^T ∈ℂ^(L+1)R is the padded steering vector, which is composed of the steering vector 𝐠(f) ∈ℂ^R followed by L zero vectors 0∈ℂ^R. It can be shown that the solution of the above WPD convolutional beamformer is:
𝐰̃_WPD(f)=Φ_ỹ(f)^-1𝐠̃(f)/𝐠̃(f)^H Φ_ỹ(f)^-1𝐠̃(f)=Φ_ỹ(f)^-1Φ_𝐱̃(f)/tr(Φ_ỹ(f)^-1Φ_𝐱̃(f))𝐮̃_r,
where the target speaker and power normalized spatial-temporal PSD matrices are
Φ_x̃(f) =∑_t(M^x̃_WPD(t, f) 𝐲̄(t, f))(M^x̃_WPD(t, f) 𝐲̄(t, f))^H/∑_t M^x̃_WPD(t, f) (M^x̃_WPD(t, f))^*,
Φ_ỹ(f)=∑_t𝐲̄(t, f) 𝐲̄(t, f)^H/λ(t, f),
where 𝐲̄(t, f) = [𝐲(t, f)^T, 𝐲̃(t-D, f)^T ]^T ∈ℂ^(L+1)R is the stacked observation vector.
𝐮̃_r = [𝐮_r^T, 0, …, 0]^T is the padded reference vector, and M^x̃_WPD(t, f) ∈ℂ denotes the complex TF mask of the target speech.
An example of mask-based WPD is illustrated in Fig. <ref>(b) (bottom right, in light blue).
The same audio-visual embeddings that are used in the mask-based MVDR separation module (Fig. <ref>, top right, light yellow) are now fed into the three-TCN based Target Speech Block and Time-varying Power Block for WPD filtering.
Their respective outputs are then fed into the separate linear layers to estimate the complex TF masks M^x̃_WPD(t, f) ∈ℂ and M^λ_WPD(t, f) ∈ℂ required for the computation of the two spatial-temporal PSD matrices and finally the WPD filter parameters.
The entire mask-based WPD network is trained using an equally weighted interpolation between the SISNR and MSE losses
to perform joint speech separation & dereverberation.
§ AUDIO-VISUAL MULTI-CHANNEL SPEECH RECOGNITION
In this section, the Conformer-based audio-visual speech recognition back-end and its further integration with the speech enhancement front-end are introduced.
§.§ Audio-visual Conformer Speech Recognition Back-end
As shown in Fig. <ref> (bottom left), the enhanced speech waveform produced by the speech separation and dereverberation front-ends of Sections <ref>, <ref> and <ref> is fed through a STFT transform before log Mel-filterbank (Mel-FBK) audio features are calculated.
As is also shown in Fig. <ref> (top left), the visual features extracted from the Visual Front-end are forwarded into a linear layer before being up-sampled to be time synchronised with the Mel-FBK audio frames.
Finally, the audio and visual features are concatenated and fed into the ASR back-end.
The Conformer ASR back-end <cit.> comprises a Conformer encoder and a Transformer decoder.
The Conformer encoder has one convolutional subsampling module, and a linear layer with dropout operation followed by stacked encoder blocks.
The internal components of each Conformer encoder block include: a position-wise feed-forward network module, a multi-head self-attention module, a convolution module, and a final position-wise feed-forward network module at the end.
All the encoder blocks additionally undergo layer normalization and residual connections.
Fig. <ref> (right) shows an example of a Conformer ASR system, where the backbone model architecture is in the grey colored part (Fig. <ref>, bottom right). The detailed encoder block compositions are in the blue colored part (Fig. <ref>, top right).
The following multi-task criterion interpolation between the CTC and attention error costs <cit.> is utilized in Conformer model training,
ℒ_𝒜𝒮ℛ=(1-β) ℒ_a t t + βℒ_c t c,
where β∈[0,1] is a tunable hyper-parameter and empirically set as 0.3 for training and 0.4 for recognition in this paper.
§.§ Integration of Speech Enhancement and Recognition
Traditionally, the speech enhancement front-end and recognition back-end components are optimized separately and used in a pipelined manner <cit.>.
However, two issues arise with this pipelined approach:
1) the learning cost function mismatch between speech enhancement front-end and recognition back-end components is not addressed;
2) the artifacts brought by the speech enhancement front-end can lead to ASR performance degradation.
To this end, a tight integration of the audio-visual speech separation, dereverberation and recognition components via joint fine-tuning <cit.> is considered in this paper.
Three fine-tuning methods are investigated:
a) only fine-tuning the back-end ASR component using the enhanced speech outputs while the front-end remains unchanged;
b) end-to-end jointly fine-tuning the entire system including the speech enhancement front-end and the recognition back-end components using the ASR cost function;
c) end-to-end jointly fine-tuning the entire system using a multi-task criterion interpolation between the speech enhancement and recognition cost functions as follows:
ℒ=(1 - γ) ℒ_ASR + γℒ_SE,
where γ is empirically set as 0.5 in the experiments unless otherwise stated.
The precise form of the speech enhancement loss function, ℒ_SE, is determined by the underlying integrated front-end architectures being used, as described in Section <ref>. This is expressed as follows:
a) ℒ_SE=ℒ_SISNR for audio-visual speech separation followed by dereverberation, as in Section <ref>;
b) ℒ_SE=ℒ_MSE for audio-visual speech dereverberation followed by separation, as in Section <ref>;
and c) ℒ_SE=ℒ_SISNR + ℒ_MSE for joint speech separation & dereverberation in Section <ref>.
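The loss selection above can be summarized by the following schematic Python snippet. It is purely illustrative (scalar placeholders rather than the actual tensor losses and training graph), with the β and γ defaults simply restating the empirical values quoted above, and the dictionary keys being our own labels.

```python
def joint_finetune_loss(L_att, L_ctc, L_sisnr, L_mse, frontend, beta=0.3, gamma=0.5):
    """Interpolated end-to-end fine-tuning loss combining the ASR and enhancement costs."""
    L_asr = (1 - beta) * L_att + beta * L_ctc
    L_se = {"sep_then_dervb": L_sisnr,          # separation followed by dereverberation
            "dervb_then_sep": L_mse,            # dereverberation followed by separation
            "joint_wpd": L_sisnr + L_mse}[frontend]
    return (1 - gamma) * L_asr + gamma * L_se
```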
§ EXPERIMENTAL SETUP
This section is organized as follows.
Section <ref> gives the details of the LRS2 corpus.
The simulated and replayed multi-channel mixture speech datasets are described in Sections <ref> and <ref>, respectively.
Section <ref> presents the performance of the baseline single-channel ASR and AVSR systems on mixture speech.
Finally, two important implementation issues that affect the performance of the proposed audio-visual multi-channel speech separation, dereverberation and recognition systems are discussed in Section <ref>.
§.§ LRS2 Corpus
The Oxford LRS2 corpus <cit.> is
one of the largest publicly available corpora for audio-visual speech recognition.
This corpus consists of news and talk shows from BBC programs.
This is a challenging AVSR task since it contains thousands of speakers with large variations in head pose.
The LRS2 corpus is divided into four subsets, i.e. Pre-train, Train, Validation and Test sets. In our experiments, the official Pre-train and Train data sets are combined for model training.
§.§ Simulated Overlapped and Reverberant Speech
Since there is no publicly available audio-visual multi-channel mixture speech corpus, we simulated the multi-channel mixture speech with overlapping and reverberation based on the LRS2 corpus in the experiments.
Details of the simulation process are described in Algorithm 1.
A 15-channel symmetric linear array with non-even inter-channel spacing [7,6,5,4,3,2,1,1,2,3,4,5,6,7]cm is used in the simulation process.
843 point-source noises <cit.> and 20000 room impulse responses (RIRs) generated by the image method <cit.> in 400 different simulated rooms are used in our experiment.
The distance between a sound source and the microphone array center is uniformly sampled from a range of 1m to 5m and the room size ranges from 4m×4m×3m to 10m×10m ×6m (length×width×height).
The reverberation time T_60 is uniformly sampled from a range of 0.14s to 0.92s.
The average overlapping ratio is around 80%.
The signal-to-noise ratio (SNR) is uniformly sampled from {0, 5, 10, 15, 20}dB, and the signal-to-interference ratio (SIR) is uniformly sampled from {-6, 0, 6}dB.
In addition, the angle difference relative to the microphone array between the target and interfering speakers is uniformly sampled from four ranges of the angle difference
{[0^∘, 15^∘), [15^∘, 45^∘), [45^∘, 90^∘), [90^∘, 180^∘)}.
The final simulated multi-channel datasets contain three subsets with 96997, 4272 and 4972 utterances respectively for training (91.37 hours), validation (2.59 hours) and test (2.32 hours).
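Algorithm 1 itself is not reproduced here; the following scipy/numpy sketch only illustrates the core mixing step (reverberation, SIR and SNR scaling) under the sampling ranges stated above. The RIR generation via the image method, overlap-ratio control and array geometry are omitted, and all function names are our own.

```python
import numpy as np
from scipy.signal import fftconvolve

def mix_one_utterance(target, interferer, rir_tgt, rir_int, noise, sir_db, snr_db):
    """target/interferer: (N,) waveforms; rir_tgt/rir_int: (R, L) impulse responses;
    noise: (R, N) noise recording. Returns an (R, N) reverberant mixture."""
    def reverberate(sig, rirs):
        return np.stack([fftconvolve(sig, h)[: len(sig)] for h in rirs])
    x = reverberate(target, rir_tgt)                       # reverberant target speech
    v = reverberate(interferer, rir_int)                   # reverberant interfering speech
    p = lambda s: np.sum(np.abs(s) ** 2) + 1e-12           # power on the reference channel
    v = v * np.sqrt(p(x[0]) / (p(v[0]) * 10 ** (sir_db / 10)))
    n = noise[:, : x.shape[1]]
    n = n * np.sqrt(p(x[0]) / (p(n[0]) * 10 ** (snr_db / 10)))
    return x + v + n
```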
§.§ Replayed Mixture Speech
To further evaluate the performance of the proposed approach in a more realistic application environment, a replayed test set <cit.> with 1200 utterances (0.5 hours) of LRS2 Test set recorded in a 10m×5m×3m meeting room is also used in our experiments.
Two loudspeakers are used to replay different utterances simultaneously to produce mixture speech.
The geometric specification of the microphone array used during recording is the same as that used in the simulation.
The target and interfering speakers are located at the following directions relative to the microphone array, i.e. {15^∘/30^∘, 45^∘/30^∘, 75^∘/30^∘, 105^∘/30^∘, 30^∘/60^∘, 90^∘/60^∘, 120^∘/60^∘, 150^∘/60^∘}, where the distance between the loudspeakers and microphones ranges from 1m to 1.5m.
In the replayed data, the target speaker’s DOA is captured by a 180^∘ camera <cit.>.
The average overlapping ratio of the replayed mixture speech is around 80% and SIR is around 1.5dB.
§.§ Baseline System Description
1) Speech Enhancement Front-end:
The 257-dimensional complex spectrum of each channel is extracted using a 512-point STFT with a 32ms square-root Hanning window and 16ms frame rate (e.g. Fig. <ref>, top left corner).
The AF and IPD features are computed using 9 microphone pairs {1/15, 2/14, 3/13, 1/7, 12/4, 11/5, 12/8, 7/10, 8/9} to sample different spacing between microphones following <cit.>.
For each Dilated 1D Conv Block in a TCN module (Fig. <ref>, left), the number of channels in the 1×1 Conv layer is set to 256.
The kernel size of the D-Conv layer is set to 3, with 512 channels.
The output dimension of the linear layer is set to 257.
2) Visual Front-end:
The original 160×160 dimensional video frames in the LRS2 datasets are centrally cropped by a 112×112 dimensional window and then up-sampled to be time synchronised with the audio frames via linear interpolation.
The Visual Front-end (e.g. Fig. <ref>, bottom left corner, in light green) uses the same hyper-parameter settings as described in <cit.>.
In addition, the number of the acoustic subspaces K is set to 10 with 𝐏_k^a ∈ℝ^256 × 256 and 𝐏^v ∈ℝ^10 × 256 in the factorized attention layer <cit.>.
3) Recognition Back-end:
The 80-dimensional log Mel-FBK features extracted using a 25ms window and 10ms frame rate serve as the inputs to the recognition back-end.
The baseline Conformer models consist of 12 encoder and 6 decoder blocks following the ESPnet recipe[github.com/espnet/espnet/blob/master/egs/lrs2/asr1/run.sh].
Each encoder or decoder block is configured with 4-head attention of 256 dimensions and 2048 feed-forward hidden units.
The convolutional sub-sampling module includes two 2D
convolutional layers with a stride of 2, each followed by a ReLU activation.
500 byte-pair-encoding (BPE) tokens are used as decoder outputs.
All models are trained using NVIDIA A40 GPU cards[The jointly fine-tuned speech enhancement front-end and recognition back-end systems in Table V are trained using one thread on a single NVIDIA A40 GPU with a batch size of 24, and the GPU memory usage varies from 32G up to a maximum of 43G.].
4) Performance of Speech Recognition without Speech Enhancement Front-end:
Table <ref> presents the WER results of the single-channel input based Conformer ASR and AVSR systems (without using a microphone array and any speech enhancement front-end) on the anechoic, reverberant-only and mixture speech.
It can be observed that using visual information can consistently improve the recognition performance over the audio-only ASR systems by up to 1.5% absolute (17.0% relative) WER reduction on the anechoic speech (sys. 2 vs. sys. 1) and 3.3% absolute (23.9% relative) WER reduction on the reverberant-only speech (sys. 4 vs. sys. 3).
In particular, the AVSR system significantly outperforms the audio-only ASR system (sys. 6 vs. sys. 5) by up to 32.3% and 36.0% absolute (56.2% and 61.4% relative) WER reductions on the simulated and replayed mixture speech respectively.
§.§ Implementation Details
1) Number of Filter Taps:
The number of filter taps L used in WPE and WPD approaches has a huge impact on the quality of the enhanced speech and the downstream recognition performance.
A set of ablation studies on the settings of filter taps L are conducted for each of the three integrated speech separation and dereverberation front-end architectures of Section <ref> (i.e. “Sep. → Dervb", “Dervb. → Sep." and “Joint Sep. & Dervb." denote the speech separation followed by dereverberation, speech dereverberation followed by separation and joint speech separation & dereverberation, respectively.)
These are shown in Table <ref> for audio-only speech enhancement.
Considering the speech enhancement performance in terms of PESQ, STOI and SRMR scores, the number of filter taps for single-channel DNN-WPE, multi-channel DNN-WPE and mask-based WPD are respectively chosen and fixed as 18 (sys. 10), 2 (sys. 2) and 1 (sys. 1) in the following experiments.
In addition, the prediction delay D is empirically set to 2 for DNN-WPE and mask-based WPD.
2) Matrix Inversion:
The inversion of the PSD matrices for MVDR and WPD (Eqn. (<ref>) and Eqn. (<ref>)) and of the temporal correlation matrix for WPE (Eqn. (<ref>)) is prone to numerical issues when these matrices are ill-conditioned or singular.
To this end, the diagonal variance flooring approach <cit.> is utilized in this work.
A complex PSD or correlation matrix Φ is floored as Φ^'=Φ+εtr(Φ) 𝐈 before inversion, where a flooring scaling term ε needs to be set, and 𝐈 is the identity matrix.
In addition, a more stable complex matrix inversion algorithm <cit.> is adopted in this paper.
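For clarity, the flooring step can be written as the following one-line numpy helper; this is a sketch rather than the exact routine used in our toolkit.

```python
import numpy as np

def floored_inverse(phi, eps):
    """Invert a complex PSD/correlation matrix after diagonal variance flooring:
    Phi' = Phi + eps * tr(Phi) * I."""
    return np.linalg.inv(phi + eps * np.trace(phi) * np.eye(phi.shape[-1]))
```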
A set of ablation studies on the setting of the flooring scaling ε is shown in Table <ref> for audio-only speech enhancement front-end systems with different separation only or integrated (separation and dereverberation) architectures.
Based on the PESQ, STOI and SRMR scores, 10^-5 (sys. 4), 10^-5 (sys. 4), 10^-6 (sys. 5) and 10^-4 (sys. 3) are selected as the optimal values of the diagonal variance flooring scaling ε for mask-based MVDR, single-channel DNN-WPE, multi-channel DNN-WPE and mask-based WPD respectively in the following experiments.
§ EXPERIMENTAL RESULTS
In this section, the performance of three integrated audio-visual multi-channel speech separation, dereverberation and recognition architectures of Section <ref> are evaluated on the LRS2 simulated and replayed mixture speech datasets.
Section <ref> analyses the performance improvements by incorporating visual features into different speech enhancement front-end components as well as the recognition back-end.
After end-to-end joint fine-tuning, the performance of the tightly integrated audio-visual speech separation, dereverberation and recognition systems is presented in Section <ref>.
§.§ Performance of Audio-visual Multi-channel Speech Enhancement and Recognition Systems
In this part, we systematically investigate the performance improvements attributed to the visual modality in the proposed integrated speech enhancement architectures of Section <ref> on the LRS2 simulated multi-channel mixture dataset with four angle difference ranges [0^∘, 15^∘), [15^∘, 45^∘), [45^∘, 90^∘) and [90^∘, 180^∘).
The mask-based MVDR approach is used in the separation module, and the dereverberation module leverages either DNN-WPE or SpecM based dereverberation methods. The mask-based WPD is used for joint speech separation & dereverberation.
The multi-channel audio (including AF and IPD) features and visual modality features and their fusion mechanism presented in Sections <ref>, <ref>, <ref> and <ref> for speech separation and dereverberation are used.
The visual features are also incorporated into the Conformer speech recognition back-end, as described in Section <ref>.
The speech recognition systems in Table <ref> are obtained by fine-tuning the baseline single-channel Conformer ASR (Table <ref>, sys. 1) or AVSR (Table <ref>, sys. 2) systems using the enhanced outputs of the corresponding speech enhancement front-ends.
From Table <ref>, several trends can be observed:
1) The proposed audio-visual multi-channel speech separation, dereverberation and recognition systems (sys. 11,18,25,32,36) consistently outperformed the corresponding audio-only baseline systems (sys. 5,12,19,26,33) on the LRS2 simulated test set.
Consistent performance improvements in PESQ, STOI and SRMR scores were also obtained.
For example, a statistically significant WER reduction of 12.4% absolute (45.1% relative) was obtained by the full audio-visual system (sys. 25) over the corresponding audio-only baseline (sys. 19) using a pipelined front-end architecture whereby speech dereverberation was followed by separation.
A general trend can also be found that the performance gap between systems with full incorporation of video modality (sys. 11,18,25,32,36) and those using audio-only (sys. 5,12,19,26,33) was much larger when examining the performance on the more challenging subsets, e.g. when inter-speaker angle difference fell in the smallest range of [0^∘, 15^∘).
2) When compared with audio-only dereverberation, incorporating visual information into the corresponding DNN-WPE (sys. 6,8,10,20,22,24 vs. sys. 5,7,9,19,21,23) or SpecM based dereverberation (sys. 13,15,17,27,29,31 vs. sys. 12,14,16,26,28,30) module produced consistent improvements in terms of PESQ, STOI and SRMR scores, irrespective of the underlying form of integration between speech separation and dereverberation components.
A statistically significant WER reduction by up to 1.9% absolute (sys. 13 vs. sys. 12, 5.9% relative) was also obtained.
3) Among the proposed architectures to integrate speech separation and dereverberation components within the speech enhancement front-end, a pipelined, full audio-visual configuration performing DNN-WPE based speech dereverberation followed by mask-based MVDR speech separation using visual input in both enhancement and recognition stages (sys. 25 vs. sys. 11,18,32,36) produced the lowest overall WERs.
4) The integrated audio-visual speech separation, dereverberation and recognition systems (sys. 11,18,25,32,36) consistently outperformed the corresponding separation-only AVSR systems (sys. 4) in terms of PESQ, STOI and SRMR scores.
However, with regard to recognition performance, the SpecM based AVSR systems (sys. 18,32) and the mask-WPD based AVSR system (sys. 36) did not outperform the baseline system (sys. 4).
The potential causes were:
a) For systems using SpecM based dereverberation (sys. 18,32), although perceptually enhanced speech quality was obtained when compared to the corresponding baseline systems (sys. 4), the spectral artifacts caused by SpecM introduced a negative impact on downstream speech recognition performance;
and b) For mask-based WPD systems, the number of filter taps and microphone channels together produced spatial-temporal PSD matrices in Eqns. (<ref>)-(<ref>) larger than, for example, those in Eqns. (<ref>)-(<ref>) for MVDR speech separation only, and thus increased difficulty in their inversion.
This was further suggested by the larger variance flooring scaling ε=10^-4 in mask-based WPD than all the other systems shown in the ablation studies of Table <ref>.
This issue can offset the benefit of joint speech separation & dereverberation from WPD.
5) Finally, incorporating both the video modality and AF spatial features into the front-ends (e.g. sys. 3,10,17,24,31,35) consistently outperformed the comparable systems using either only AF features (sys. 1,5,12,19,26,33), or video features alone (sys. 2,8,15,22,29,34).
§.§ Performance of End-to-end Joint Fine-tuning of Speech Enhancement Front-end and Recognition Back-end
The most representative subset of audio-visual and audio-only multi-channel systems in Table <ref> is then end-to-end jointly fine-tuned using either the ASR cost function alone, or a multi-task criterion interpolation between the speech enhancement and recognition costs as described in Section <ref>.
Their performance in terms of WER and front-end metrics (PESQ, STOI and SRMR) is evaluated on both the LRS2 simulated (“Simu”) and replayed (“Replay”) test sets and shown in Table <ref> (original system numbering in Table <ref> carried over). Several main trends can be observed:
1) After end-to-end joint fine-tuning, consistent performance improvements in WER were obtained over all systems without doing so (sys. marked with “-" in Col. 3, Table <ref>), irrespective of the joint fine-tuning criterion based on ASR loss alone (sys. marked with “(a)"), or its interpolation with enhancement loss (sys. marked with “(b)"). In particular, statistically significant overall (“O.V.") WER reductions of 3.3% and 1.6% absolute (14.6% and 11.9% relative) were obtained using the jointly fine-tuned ASR (sys. 19(a) vs. sys. 19) and AVSR (sys. 25(b) vs. sys. 25) systems across both test sets. Consistent performance improvements in speech enhancement front-end metric scores were also obtained. Fig. <ref> shows a set of example spectra of (a) Overlapped-reverberant-noisy speech, (b) Target clean speech, (c) Pipelined audio-only speech enhancement output (Table <ref>, sys. 19), (d) Pipelined audio-visual speech enhancement output (Table <ref>, sys. 25), (e) Jointly fine-tuned audio-only speech enhancement output (Table <ref>, sys. 19(b)), and (f) Jointly fine-tuned audio-visual speech enhancement output (Table <ref>, sys. 25(b)). The spectrum portions circled using blue dotted lines in (a) represent the interfering speaker’s speech, background noise and reverberation, which have been largely removed in (f).
2) The best overall performance was produced by the end-to-end joint fine-tuned audio-visual system with DNN-WPE based dereverberation followed by mask-based MVDR (sys.25(b)).
Using this system, statistically significant WER reductions of up to 9.1% and 6.2% absolute (41.7% and 36.0% relative) were obtained on the LRS2 simulated and replayed test sets over the audio-only baseline (19(b)).
In addition, all the jointly fine-tuned audio-visual speech separation, dereverberation and recognition systems consistently outperformed the comparable baseline separation-only AVSR systems (e.g. sys. 11(b),18(b),25(b),32(b),36(b) vs. sys. 4(b)), with a statistically significant WER reduction up to 1.9% absolute (13.8% relative) (sys. 25(b) vs. sys. 4(b)).
3) End-to-end joint fine-tuning of the speech enhancement front-end and recognition back-end is effective in mitigating the impact from spectral artifacts produced in SpecM based dereverberation <cit.> (e.g. sys. 12(b),18(b),26(b),32(b)).
This leads to a smaller performance gap against systems using DNN-WPE dereverberation (sys. 5(b),11(b),19(b),25(b)) when compared with the gap before joint fine-tuning.
4) A further ablation study is conducted on the setting of the speech enhancement cost weight γ in Eqn. (<ref>) using three end-to-end joint fine-tuned multi-channel speech enhancement and recognition systems: sys. 1(b), 4(b) and 25(b) of Table <ref>. Their WER performance with respect to γ on the LRS2 simulated (“Simu") and replayed (“Replay") test sets are shown in Table <ref>. These results suggest that the performance of the audio-visual multi-channel speech separation, dereverberation and recognition system (sys. 25(b)) is largely insensitive to the setting of γ∈ [0, 0.75] during end-to-end joint fine-tuning using interpolated speech enhancement and ASR error costs.
5) The performance of the most important systems shown in Table <ref> (sys. 1,4,5,11,12,18,19,25,26,32,33,36) and Table <ref> (sys. 1(b),4(b),5(b),11(b),12(b),18(b),19(b),25(b),26(b),32(b),
33(b),36(b)) are further evaluated on the LRS3 <cit.> test set after applying the same multi-channel mixture speech simulation protocol of Algorithm <ref>. These results are shown in Table <ref>. Similar trends of WER reductions and improvements on speech enhancement metric scores, as well as the same performance ranking among the corresponding systems previously shown in Table <ref> and Table <ref>, can also be found in Table <ref>.
§ CONCLUSION
In this paper, an audio-visual multi-channel speech separation, dereverberation and recognition approach featuring a full incorporation of visual information into all system components is proposed. The advantages of additional visual modality over using acoustic features only are demonstrated consistently in mask-based MVDR speech separation, DNN-WPE or spectral mapping (SpecM) based speech dereverberation front-end and Conformer based ASR back-end. A set of audio-visual front-end architectures that integrates the speech separation and dereverberation modules in a pipelined or joint fashion are also derived. They are end-to-end jointly fine-tuned to minimize the error cost mismatch between the speech enhancement front-end and ASR back-end.
Experiments were conducted on the mixture overlapped and reverberant speech data constructed using simulation or replay of the benchmark Oxford LRS2 dataset.
The proposed audio-visual multi-channel speech separation, dereverberation and recognition systems consistently outperformed the comparable audio-only multi-channel baseline by 9.1% and 6.2% absolute (41.7% and 36.0% relative) in word error rate (WER) reductions, together with consistent improvements obtained on PESQ, STOI and SRMR based speech enhancement metrics. Future research will focus on improving system generalization to diverse microphone array geometrics and room acoustics.
§ ACKNOWLEDGMENT
This research is supported by Hong Kong RGC GRF grant No. 14200021, 14200218, 14200220, TRS T45-407/19N and Innovation & Technology Fund grant No. ITS/218/21.
We would like to thank Wangyou Zhang for the insightful discussions in the preliminary experiments of WPD.
|
http://arxiv.org/abs/2307.02186v1
|
20230705102725
|
Possible Circumstellar Interaction Origin of the Early Excess Emission in Thermonuclear Supernovae
|
[
"Maokai Hu",
"Lifan Wang",
"Xiaofeng Wang",
"Lingzhi Wang"
] |
astro-ph.HE
|
[
"astro-ph.HE",
"astro-ph.SR"
] |
Type Ia supernovae (SNe Ia) arise from the thermonuclear explosion in binary systems involving carbon-oxygen white dwarfs (WDs). The pathway of WDs acquiring mass may produce circumstellar material (CSM). Observing SNe Ia within a few hours to a few days after the explosion can provide insight into the nature of CSM relating to the progenitor systems. In this paper, we propose a CSM model to investigate the effect of ejecta-CSM interaction on the early-time multi-band light curves of SNe Ia. By varying the mass-loss history of the progenitor system, we apply the ejecta-CSM interaction model to fit the optical and ultraviolet (UV) photometric data of eight SNe Ia with early excess. The photometric data of SNe Ia in our sample can be well-matched by our CSM model except for the UV-band light curve of iPTF14atg, indicating its early excess may not be due to the ejecta-CSM interaction. Meanwhile, the CSM interaction can generate synchrotron radiation from relativistic electrons in the shocked gas, making radio observations a distinctive probe of CSM. The radio luminosity based on our models suggests that positive detection of the radio signal is only possible within a few days after the explosion at higher radio frequencies (e.g., ∼250 GHz); at lower frequencies (e.g., ∼1.5 GHz) the detection is difficult. These models lead us to conclude that a multi-messenger approach that involves UV, optical, and radio observations of SNe Ia a few days past explosion is needed to address many of the outstanding questions concerning the progenitor systems of SNe Ia.
supernovae: general – supernovae: circumstellar material – supernovae: light curves
§ INTRODUCTION
Type Ia supernovae (SNe Ia) are employed as the standardized candle in measuring cosmological distance through the luminosity-width relation <cit.>, although their progenitor systems are still unclear (e.g., ) and they may have different progenitor populations even for spectroscopically normal ones (e.g., ). The conventional scenario is that SNe Ia are the results of the thermonuclear explosions of carbon-oxygen white dwarfs (WDs) whose masses approach the Chandrasekhar limit through merging with or accretion from a binary companion (e.g., ). In the merger scenario, the so-called double degenerate (DD) channel, the companion is another carbon-oxygen WD <cit.>, while in the single degenerate (SD) channel, a WD accretes matter from a main sequence, red giant, or Helium star <cit.>. These two channels may both encounter difficulties when confronted with observations. The DD channel predicts a relatively high degree of polarization <cit.>, while the observed polarization of SNe Ia is usually lower than 0.2% <cit.>. On the other hand, direct evidences of the SD channel have not been found from extensive observational efforts, such as the null detection of H/He emission lines in the nebular spectrum <cit.>, and the absence of super-soft X-ray signals as can be expected from the accretion process of progenitors <cit.>.
Multi-band observations within a few days after the explosion provide a powerful probe to investigate the physical origins of SNe Ia. In the SD channel, interaction with the companion can lead to radiation at X-ray, ultraviolet (UV), and optical wavelengths several hours after the explosion for certain viewing angles <cit.>. An early flux excess can also be produced if ^56Ni is mixed into the outer layers of the ejecta due to hydrodynamic turbulence during the thermonuclear explosion <cit.>, or if there is nuclear burning on the surface of the WD progenitor <cit.>. The interaction with circumstellar matter (CSM) can transform the kinetic energy of the ejecta into radiation and power the light curves of SNe with a significant mass-loss history <cit.>. CSM interaction can also be the energy source of the first light-curve peak seen a few days after the explosion for some core-collapse supernovae <cit.>. Likewise, the possibility exists that the early flux excess of SNe Ia may originate from ejecta-CSM interaction <cit.>.
In recent decades, a large number of photometric and spectroscopic observations of SNe Ia have become available due to the rapid growth in time-domain surveys (e.g., ), but data within the first few days after the explosion are still rare. This situation is mainly limited by the cadence of supernova survey programs, which is usually around 2∼3 days to cover as large a survey area as possible. With recent wide-field supernova survey programs <cit.>, more and more early signals of SNe Ia have been captured, such as the spectroscopically normal ones (e.g., SN 2011fe <cit.>, SN 2012cg <cit.>, SN 2017cbv <cit.>, SN 2018oh <cit.>, SN 2019np <cit.>, and SN 2021aefx <cit.>), subluminous 2002es-like ones (e.g., iPTF14atg <cit.> and SN 2019yvq <cit.>), and the super-Chandrasekhar explosion (e.g., SN 2020hvf <cit.>).
The above nine SNe Ia also constitute the sample of this paper. The first detection of SN 2011fe occurred just several hours after its explosion, and such early photometric data, consistent with a t^α law, constrain the radius of the progenitor to that of a WD <cit.>. The other eight SNe Ia are revisited in this paper, because they show apparent flux excess during their early phases compared with the light curve of typical objects such as SN 2011fe. In particular, SN 2017cbv exhibits an apparent blue excess at its early phases. This flux excess may be generated by the decay of ^56Ni mixed into the outer layers of the ejecta <cit.>, by ^56Ni produced in the surface layers due to a helium detonation <cit.>, by the interaction with the companion star <cit.>, or by ejecta-CSM interaction. For the companion interaction scenario, the predicted large amount of UV radiation is not supported by observations <cit.>. In addition, the H/He emission lines in the nebular spectra predicted for the SD channel are not observed for SN 2017cbv <cit.>.
In this paper, we revisited the influence of CSM interaction on the early multi-band light curves of SNe Ia, since the popular channels of progenitor systems may generate CSM through the processes involving mass accretion/excretion, stellar wind, or nova explosions. Section <ref> describes the early flux excess of the eight revisited SNe Ia in our sample. In Section <ref>, two models of ejecta-CSM interaction are introduced. The fits to the optical and UV luminosity are shown in Section <ref>. We show the radio radiation from the relativistic electrons generated by the ejecta-CSM interaction in Section <ref>. The conclusions are given in Section <ref>.
§ THE EARLY EXCESS OF THERMONUCLEAR SUPERNOVAE
Several recent studies have modeled the early-phase observations of SNe Ia through their UV properties <cit.>, optical rises <cit.>, and color evolutions <cit.>. In this paper, we focus on the ejecta-CSM interaction to model eight SNe Ia with the strongest evidences of early flux excess. Among them, SN 2012cg <cit.>, iPTF14atg <cit.>, and SN 2019yvq <cit.> show an initial declining flux excess in the UV bands which may be related to the ejecta-CSM interaction. The early flux excesses of SN 2017cbv <cit.>, SN 2018oh <cit.>, SN 2019np <cit.>, and SN 2021aefx <cit.> are still under debate, while SN 2020hvf <cit.> seems to show optical bumps within the first day since the discovery which is consistent with the expectations from ejecta-CSM interaction. SN 2016jhr also has early observations showing flux excess compared to typical normal SNe Ia, but it is not in our sample since its early flash is likely to be triggered by a helium detonation on the surface of the WD <cit.>.
The optical light curves of the SNe Ia in our study come from different photometric systems, including the Sloan Digital Sky Survey photometry, the Johnson-Cousins UBVRI system, the Kepler filter (SN 2018oh), and no-filter observations (SN 2020hvf). All the UV-band light curves are from the Swift satellite. Therefore, we adopt the optical luminosity (L_opti) to characterize the early excess of the eight SNe Ia in our study, to reduce the influence of the different magnitude systems among the observations, and we adopt the UV-band luminosity (L_UV) to represent the early-phase evolution in the UV bands. The L_opti defined in this paper is the integration of the black-body spectrum fitted to the multi-band photometric data from 4000 Å to 8000 Å. As shown in Figure <ref>, the black-body spectrum is a favorable profile for fitting the early-time multi-band photometric data of SNe Ia. For iPTF14atg, SN 2018oh, and SN 2020hvf, optical multi-band observations are absent during the early excess, and the observational band is the PTFr band (iPTF14atg), the Kepler filter (SN 2018oh), or no-filter (SN 2020hvf), respectively. Thus, we shifted the single-band flux to the scale of the corresponding L_opti to approximately acquire the L_opti curve of these three SNe Ia during their flux-excess phases.
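To make the definition of L_opti explicit, a schematic scipy sketch is given below; it is not the code used for our measurements. The constants are in cgs units, the fitting set-up (per-Å flux inputs, initial guesses, luminosity distance d_L) is our own assumption, and filter responses and K-corrections are not modeled.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import quad

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16        # cgs: Planck constant, speed of light, Boltzmann

def planck_lambda(lam_cm, T):
    """Planck function B_lambda(T) [erg s^-1 cm^-2 cm^-1 sr^-1]."""
    return (2.0 * H * C**2 / lam_cm**5) / np.expm1(H * C / (lam_cm * KB * T))

def fit_blackbody(lam_angstrom, f_lambda_per_A):
    """Fit a scaled blackbody to de-reddened fluxes f_lambda [erg s^-1 cm^-2 A^-1]."""
    lam_cm = np.asarray(lam_angstrom) * 1e-8
    model = lambda lam, T, scale: scale * planck_lambda(lam, T)
    (T, scale), _ = curve_fit(model, lam_cm, np.asarray(f_lambda_per_A) * 1e8,
                              p0=(1e4, 1e-22), maxfev=10000)
    return T, scale

def optical_luminosity(T, scale, d_l_cm):
    """L_opti: fitted blackbody integrated from 4000 A to 8000 A, scaled by 4 pi d_L^2."""
    flux, _ = quad(lambda lam: scale * planck_lambda(lam, T), 4000e-8, 8000e-8)
    return 4.0 * np.pi * d_l_cm**2 * flux
```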
The optical photometric data of the SNe Ia are corrected for the extinction of the Milky Way and the host galaxies. The color excess (E(B-V)), the total-to-selective extinction ratio (R_V), and the luminosity distance of each SN Ia are all taken from the respective references. Note that we adopted 12.3 Mpc as the distance of SN 2017cbv derived from <cit.>. A similar distance to SN 2017cbv is also adopted in several other studies <cit.>. Figure <ref> displays the normalized L_opti curves of the eight SNe Ia, together with SN 2011fe for comparison. The early-time L_opti curve of SNe Ia satisfies the t^α law with the index α∼ 2.0, which is consistent with previous results <cit.>. The early-time optical excesses of the eight SNe Ia over the t^α law (L_opti, excess) are shown in the lower panel of Figure <ref>, and they can be roughly described by two quantities, the maximum of L_opti, excess (L_opti, excess^max) and the rising time (T_rise) of L_opti, excess since the explosion. For simplicity, the values of L_opti, excess^max and T_rise are taken directly from the corresponding data points, without any smoothing or Gaussian-process fitting. These two quantities can provide a preliminary diagnostic of our ejecta-CSM interaction model.
The same process is also applied to generate the early-phase UV-band light curves of each SN Ia with extinction corrections using the same R_V and E(B-V) as for L_opti. Figure <ref> shows the normalized L_UV of SNe Ia. Similarly, a t^α law of L_UV is generated from the observed data of SN 2011fe, with α = 2.3, 2.4, or 2.0 for UVW1, UVW2, or UVM2 band, which are consistent with the result reported in <cit.>. Note that the template of UV-band light curve is from the smoothed curve of SN 2011fe, rather than the fitted t^α law of SN 2011fe, and the difference between the smoothed curve and the t^α law is minimal within a few days since the explosion as shown in Figure <ref>. Comparing Figure <ref> and Figure <ref>, SN 2012cg, iPTF14atg, SN 2017cbv, SN 2019np, SN 2019yvq, and SN 2021aefx all have early-time multi-band observations (from optical to UV bands) and all show significant excess over the t^α law, while the UV-band coverage is absent for SN 2018oh and SN 2020hvf during the phases corresponding to the early optical excess.
With the definitions of both L_opti and L_UVW1, it is straightforward to examine the possible CSM interaction origin of the early excess emission in SNe Ia. We will compare L_opti, excess^max and T_rise of the revisited SNe Ia with our CSM model over a broad range of model parameters, to obtain a quick assessment of whether our CSM model is reasonable for the early excess of SNe Ia. We then fit the early excess of the L_opti curves and predict the related L_UVW1 curves. The parameters of these well-fitted models are further employed to predict the radio radiation related to the ejecta-CSM interactions of these SNe Ia.
§ THE CSM INTERACTION MODEL
The ejecta-CSM interaction has been studied previously for SNe (e.g., ). The CSM density (ρ_csm) considered in this study follows the expression ρ_csm = Ṁ_w10/(4π R^2), where R is the distance from the SN and Ṁ_w10 is the mass-loss rate that produces the CSM, normalized to the wind speed v_w = 10 km s^-1. We adopt v_w = 10 km s^-1 in this study. For a constant Ṁ_w10, the total CSM mass M_csm is equal to (R_out - R_in)Ṁ_w10, with R_in and R_out being the inner and outer boundaries of the CSM, respectively. In general, R_in is related to the position of the surface of the progenitor and is much smaller than R_out, while R_out is assumed to vary from 10^11 cm to 10^16 cm in our study.
The velocity of SN ejecta (v_ej) satisfies v_ej = R/t as expected from a homologous expansion, where t is the time since SN explosion. The density (ρ_ej) of the ejecta follows the power-law profile of ρ_ej∝ R^-δ and ρ_ej∝ R^-n for regions interior and exterior of a transition velocity v_t <cit.>, respectively. The indices n and δ are equal to 10.0 and 0.5 as expected from self-similar solutions. The transition velocity v_t is formulated by the SN kinetic energy E_ej and ejecta mass M_ej as v_t = [2(5-δ)(n-5)E_ej/((3-δ)(n-3)M_ej)]^1/2 <cit.>. The value of v_t is about 1.2×10^4 km s^-1 assuming E_ej = 1.5×10^51 erg and M_ej = 1.4 M_⊙ <cit.>.
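As a quick numerical check of the quoted transition velocity (a worked example only, in cgs units):

```python
import numpy as np

M_SUN = 1.989e33                       # g
E_ej, M_ej = 1.5e51, 1.4 * M_SUN       # erg, g
n, delta = 10.0, 0.5

v_t = np.sqrt(2 * (5 - delta) * (n - 5) * E_ej / ((3 - delta) * (n - 3) * M_ej))
print(v_t / 1e5)                       # ~1.2e4 km s^-1, as quoted above
```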
§.§ Model_sh
The first scenario considered in our study, named Model_sh, has the characteristic parameters R_out∼10^12 cm, Ṁ_w10∼10^-1 M_⊙ yr^-1, and a corresponding total CSM mass M_csm∼0.003 M_⊙. The duration of the CSM interaction in Model_sh is less than an hour and the interaction process can be regarded as a shock breakout, which results in a thin shell of thickness Δ R_sh expanding with velocity V_sh at distance R_sh. We adopt Δ R_sh/R_sh∼0.2 in Model_sh, which is different from <cit.>, but is consistent with the results in <cit.>. The R_sh evolves as R_sh = R_out + V_sh t. V_sh is determined by requiring that the mass of the shocked ejecta equal the total mass of the CSM, i.e., ∫_V_sh^∞4π (vt)^2ρ_ej t dv = M_csm <cit.>. The bolometric luminosity (L) from this adiabatically expanding shell can be solved from the first law of thermodynamics as L∝exp(-(t_h t+t^2/2)/(t_h t_d(0))), where t_h = R_out / V_sh and t_d(0) is the diffusion timescale at t = 0 <cit.>. The observed multi-band light curves can then be generated with the assumption of blackbody radiation. Note that the bolometric luminosity decreases monotonically with time, while the light curve in a given waveband has a unimodal structure. Thus the flux contributions predicted by Model_sh allow the calculation of quantities such as the maximum optical luminosity of the ejecta-CSM interaction and the rising time since the explosion.
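A minimal sketch of Model_sh is given below. The luminosity normalization L(0) and the initial diffusion time t_d(0), which in our full calculation follow from the shock-breakout energetics, are treated here as free inputs, the broken power-law normalization of ρ_ej is the standard one implied by the profile above, and the function names are our own.

```python
import numpy as np
from scipy.optimize import brentq

M_SUN = 1.989e33
N_OUT, DELTA = 10.0, 0.5

def ejecta_norm(M_ej, v_t):
    """A in rho_ej = A t^-3 (v/v_t)^(-delta) (inner) or (-n) (outer), continuous at v_t."""
    return M_ej * (3 - DELTA) * (N_OUT - 3) / (4 * np.pi * v_t**3 * (N_OUT - DELTA))

def shell_velocity(M_csm, M_ej=1.4 * M_SUN, v_t=1.2e9):
    """Solve int_{V_sh}^inf 4 pi (v t)^2 rho_ej t dv = M_csm for V_sh (outer power law)."""
    A = ejecta_norm(M_ej, v_t)
    mass_above = lambda V: 4 * np.pi * A * v_t**3 / (N_OUT - 3) * (V / v_t)**(3 - N_OUT) - M_csm
    return brentq(mass_above, 1.001 * v_t, 100 * v_t)

def shell_luminosity(t, L0, t_d0, R_out, V_sh):
    """L(t) = L(0) exp(-(t_h t + t^2/2) / (t_h t_d(0))), with t_h = R_out / V_sh."""
    t_h = R_out / V_sh
    return L0 * np.exp(-(t_h * t + 0.5 * t**2) / (t_h * t_d0))

# example: M_csm ~ 0.003 M_sun yields V_sh of a few 10^9 cm s^-1
V_sh = shell_velocity(0.003 * M_SUN)
```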
§.§ Model_ext
The interaction with extended CSM cannot be simplified to the shock breakout process since the interaction can last more than a few days. A similar situation may happen for SNe Ia, because the mass-loss history for the progenitor may be long enough to generate CSM with an extended distribution. Based on this picture, we consider the scenario Model_ext, which has a more extended CSM (e.g., the outer boundary of CSM is ∼10^15 cm), and we assume that the un-shocked CSM is optically thin. The evolution of R_sh and V_sh for the shocked CSM satisfies the conservation of momentum as follows,
M_shdV_sh/dt = 4π R_sh^2[ρ_ej(v_ej - V_sh)^2 - ρ_csm(V_sh - v_w)^2]
where M_sh is the total mass of the shocked ejecta and CSM. In Model_ext, we only consider the interaction process during the first few days after the explosion, and Ṁ_w10 is basically less than 10^-4 M_⊙ yr^-1 as has been constrained by radio or X-ray observations of SNe Ia <cit.>. Thus, the shocked SN ejecta is always confined inside the exterior part of the ejecta with v_ej > v_t. With the solution of the kinetic evolution, the corresponding bolometric luminosity L is given by the power of the shocked CSM with a conversion efficiency ϵ as L = ϵ/2Ṁ_w10V_sh^3, where ϵ = 0.15 in our simulations in consistence with previous studies <cit.>. On the other hand, one important quantity in the Model_ext is Ṁ_w10(R) which is a function of the distance R as given below,
Ṁ_w10(R) = Ṁ_w10(0) (R/R_1)^n_1,                    for R ≤ R_1,
           Ṁ_w10(0),                                 for R_1 < R ≤ R_2,
           Ṁ_w10(0) [(R_3 - R)/(R_3 - R_2)]^n_2,      for R_2 < R ≤ R_3.
As shown in Equation <ref>, Ṁ_w10(R) increases to Ṁ_w10(0) within the distance R_1 with a power-law index n_1, equals the constant Ṁ_w10(0) between R_1 and R_2, and decreases to zero from R_2 to R_3 with an index n_2. The CSM is neglected at distances larger than R_3.
Therefore, the observed light curves for Model_ext can be solved numerically based on Equation <ref>. For the simplified case of a constant Ṁ_w10(R), <cit.> derived an integrated formula for the luminosity of the CSM interaction. We compared the evolution of R_sh between the integrated formula from <cit.> and our numerical solutions with a constant Ṁ_w10(R), as shown in Figure <ref>, which demonstrates the validity of our numerical procedure.
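The momentum equation above, together with the piecewise mass-loss profile of Equation <ref>, can be integrated numerically; a simplified explicit-Euler sketch is given below (Python). The swept-up-mass bookkeeping is schematic, the default parameter values (Mdot0, R1-R3, n1, n2, and the initial shell state) are placeholders rather than the fitted values of Table <ref>, and rho_ej stands for a user-supplied ejecta density profile such as the broken power law above:

import numpy as np

M_SUN, YR = 1.989e33, 3.156e7

def Mdot_w10(R, Mdot0=1e-6 * M_SUN / YR / 1e6, R1=1e14, R2=1e15, R3=2e15,
             n1=2.0, n2=2.0):
    """Piecewise mass-loss profile of Eq. <ref>, normalized to v_w = 10 km/s (g/cm)."""
    if R <= R1:
        return Mdot0 * (R / R1)**n1
    if R <= R2:
        return Mdot0
    if R <= R3:
        return Mdot0 * ((R3 - R) / (R3 - R2))**n2
    return 0.0

def evolve_shell(rho_ej, t_end=5 * 86400.0, dt=100.0, R0=1e13, V0=2.5e9,
                 M0=1e28, v_w=1.0e6, eps=0.15):
    """Euler integration of M_sh dV/dt = 4 pi R^2 [rho_ej (v_ej-V)^2 - rho_csm (V-v_w)^2]."""
    t, R, V, M, out = dt, R0, V0, M0, []
    while t < t_end:
        rho_c = Mdot_w10(R) / (4.0 * np.pi * R**2)     # CSM density, per the text
        v_ej = R / t                                   # homologous ejecta velocity at R_sh
        rho_e = rho_ej(R, t)
        dVdt = 4*np.pi*R**2 * (rho_e*(v_ej - V)**2 - rho_c*(V - v_w)**2) / M
        dMdt = 4*np.pi*R**2 * (rho_e*(v_ej - V) + rho_c*(V - v_w))   # swept-up mass (schematic)
        L = 0.5 * eps * Mdot_w10(R) * V**3             # L = (eps/2) Mdot_w10 V_sh^3
        out.append((t, R, V, L))
        V += dVdt * dt; R += V * dt; M += dMdt * dt; t += dt
    return out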
§.§ Dusty CSM
Assuming a typical gas-to-dust ratio, we introduce the effect of dusty CSM on the UV-band light curves. Dust located within ∼1×10^15 cm of the SN will be destroyed by radiative heating soon after the explosion. Therefore, we only consider dusty CSM in Model_ext rather than in Model_sh, owing to the difference in characteristic distances. To investigate the absorption and scattering by circumstellar dust, we consider a simple dust model in which the chemical composition is pure silicate with a typical grain size of 0.05 μm, so that the dust extinction is more significant in the UV bands than in the optical.
For simplicity, we assume a spherical distribution of the dust between an inner boundary of 1×10^15 cm and an outer boundary of 5×10^15 cm. The optical depth in the B band is adopted as 0.15, the corresponding optical depth in the UVW1 band is 1.1, and the averaged optical depth from 4000 Å to 8000 Å is about 0.07. Thus, the radiative transfer in the dusty CSM is ignored for the optical bands in this paper. Assuming the same wind velocity (10 km s^-1), the mass-loss rate of the dust is about 1×10^-8 M_⊙ yr^-1, which is about 10^-2 times the typical value of Ṁ_w10(0) (∼ 10^-6 M_⊙ yr^-1) in Model_ext, as illustrated in Figure <ref>. The dust destruction takes about 2 to 3 days under the simplified assumption that the photosphere of a SN Ia has a velocity of 15,000 km s^-1 and a temperature of 10,000 K, and that the vaporization temperature of silicate grains of size 0.05 μm is approximately 1,500 K <cit.>. The time-dependent dust destruction makes the radiative transfer in the dusty CSM a dynamic process, which we incorporated into our Monte Carlo radiative transfer program of <cit.> to solve for the UV fluxes in the dusty CSM.
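The band-dependent attenuation by the circumstellar dust can be sketched as a simple extinction factor; the linear destruction law below is only a crude stand-in for the full time-dependent Monte Carlo treatment used in this paper, and the destruction timescale is a placeholder consistent with the 2-3 day estimate above:

import numpy as np

def tau_dust(t, tau0, t_destroy=2.5 * 86400.0):
    """Optical depth of the circumstellar dust shell, decreasing linearly to
    zero as the dust is vaporized (schematic stand-in for the dynamic
    destruction calculation)."""
    return tau0 * max(0.0, 1.0 - t / t_destroy)

def attenuate(L_band, t, tau0_band):
    """Observed band luminosity after circumstellar-dust extinction."""
    return L_band * np.exp(-tau_dust(t, tau0_band))

# e.g. UVW1 (tau0 ~ 1.1) is attenuated much more strongly than B (tau0 ~ 0.15)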
§ FITTING THE EARLY EXCESS EMISSION WITH CSM INTERACTION
Figure <ref> displays the predicted rise time of the optical excess versus the maximum of the optical excess for Model_sh and Model_ext with different parameter configurations. For Model_sh, R_out is set to 10^12 cm, 10^13 cm, or 5×10^13 cm, and Ṁ_w10 ranges from 0.001 M_⊙ yr^-1 to 1.0 M_⊙ yr^-1; the corresponding total CSM mass lies in the range 1.6×10^-5 M_⊙ to 0.16 M_⊙. For Model_ext, although R_1, R_2, R_3, and Ṁ_w10(0) are all free parameters, R_2 and Ṁ_w10(0) most strongly influence the flux from the ejecta-CSM interaction. The parameter R_2 is set to 2×10^14 cm, 10^15 cm, or 5×10^15 cm, and Ṁ_w10(0) varies from 10^-7 M_⊙ yr^-1 to 10^-4 M_⊙ yr^-1. Model_sh is characterized by a very short duration over its entire parameter grid, which contradicts the early flux excess of the SNe Ia revisited in this paper, except for SN 2020hvf. Meanwhile, Model_ext with suitable parameters can fit the early optical excess of SNe Ia satisfactorily. However, combining the optical and UV photometric data can further test the hypothesis that the early excess arises from the ejecta-CSM interaction.
We adopt Model_sh to fit the early-time optical excess of SN 2020hvf (R_out=3×10^13 cm, M_CSM = 0.05 M_⊙) and Model_ext for the remaining seven SNe Ia, with the parameter values listed in Table <ref>. The fitted optical luminosity curves and the predicted UVW1-band luminosities are shown in Figure <ref>. The result suggests that ejecta-CSM interaction can explain the early optical excess of SNe Ia, with a total CSM mass at the level of about 10^-4 M_⊙, in agreement with the non-detection of H emission lines in nebular spectra (e.g., ).
However, the large deviation of the predicted UVW1-band luminosity suggests that the early-time excess of iPTF14atg may be generated not by ejecta-CSM interaction but by ejecta-companion interaction, since the latter produces much higher temperatures and hence more luminous UV-band radiation <cit.>. As discussed in <cit.>, the early excess of SN 2020hvf most likely originates from the CSM interaction process, given the short duration of its optical flash. The values of R_out and M_csm used in fitting SN 2020hvf differ slightly from those in <cit.> because of the simplified treatment of L_opti for SN 2020hvf in this paper. For SN 2018oh, the fit to L_opti can only indicate that ejecta-CSM interaction is one possible origin, owing to the lack of early-time UV-band observations. For SNe 2012cg, 2017cbv, 2019np, and 2021aefx, the predicted L_UVW1 is consistent with the observed data once the extinction from dusty CSM is considered. A further diagnostic from radio observations is discussed in Section <ref>.
§ THE RADIO RADIATION FROM CSM INTERACTION
An evident signature of CSM interaction is the radio radiation emitted by relativistic electrons. Although almost all radio observations of spectroscopically normal SNe Ia provide only upper limits, the radio radiation from ejecta-CSM interaction has important potential for distinguishing between the various scenarios. The theory of radio radiation from CSM interaction is well established <cit.>, and here we apply it to SNe Ia with ejecta-CSM interaction soon after explosion.
§.§ The Synchrotron Radiation
A reasonable assumption is that the relativistic electrons produced by the ejecta-CSM interaction follow a power-law distribution, dN/dE = N_0E^-p, where N and N_0 are the number density of the relativistic electrons and a scaling parameter, respectively. E=γ m_e c^2 is the energy of the electrons with γ being the Lorentz factor. The corresponding synchrotron emission coefficient (j_ν) is proportional to a declining power law of the frequency of the radiated photons, j_ν∝ν^-α, where the parameter α is equal to (p-1)/2. We adopt α = 1 and p = 3 in this study.
§.§ The Synchrotron Self-Absorption
The effect of synchrotron self-absorption (SSA) cannot be ignored, because N_0 and the magnetic field (B) may be large enough to make the shocked region optically thick to the radio radiation. Assuming a uniform opacity distribution along a path length Δ s, the optical depth is τ_ν = κ_νΔ s, where the absorption coefficient is κ_ν = κ_0(p)N_0B^(p+2)/2ν^-(p+4)/2 and κ_0(p) is a constant (= 5.5×10^26 for p = 3). The intensity (I_ν) follows from the integral I_ν = ∫_0^Δ s j_νexp(-κ_νs)ds = j_ν/κ_ν(1-exp(-τ_ν)). Thus, the source function (S_ν = j_ν/κ_ν) is proportional to ν^5/2.
To simplify the calculation of S_ν, we introduce a characteristic frequency ν_abs at which the optical depth is τ_abs∼ 1. This directly leads to τ_ν = (ν/ν_abs)^-(p+4)/2. In addition, we define a frequency ν_peak through I_ν_peak≡2kT_bright(ν_peak/c)^2, where k is the Boltzmann constant and T_bright is the brightness temperature. The intensity at any frequency can then be written as I_ν = (S_ν/S_ν_peak) [1-exp(-τ_ν)]/[1-exp(-τ_ν_peak)] I_ν_peak. After rearrangement, the intensity reads
I_ν = 2kT_bright/c^2ν^5/2/f(x)ν_abs^1/2[1-exp(-τ_ν)]
where x = ν_peak/ν_abs and f(x) = x^1/2[1 - exp(-x^-(p+4)/2)]. Based on Equation 12 in <cit.>, x ≈ 1.137 for p = 3. With τ_abs∼ 1, we obtain ν_abs = (Δ sκ_0(p)N_0B^(p+2)/2)^2/(p+4).
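The SSA quantities above can be combined into a short routine (Python; kappa0 = 5.5e26, p = 3, and x ≈ 1.137 follow the text, while the brightness temperature T_bright is left as a free input):

import numpy as np

K_B, C = 1.381e-16, 2.998e10   # cgs

def f_of_x(x, p=3.0):
    """f(x) = x^(1/2) * [1 - exp(-x^(-(p+4)/2))], with x = nu_peak/nu_abs."""
    return np.sqrt(x) * (1.0 - np.exp(-x**(-(p + 4.0) / 2.0)))

def nu_abs(N0, B, delta_s, p=3.0, kappa0=5.5e26):
    """Frequency at which the SSA optical depth is ~1:
    nu_abs = (delta_s * kappa0 * N0 * B^((p+2)/2))^(2/(p+4))."""
    return (delta_s * kappa0 * N0 * B**((p + 2.0) / 2.0))**(2.0 / (p + 4.0))

def I_nu(nu, nu_a, T_bright, p=3.0, x=1.137):
    """Specific intensity of Eq. <ref>:
    I_nu = (2 k T_b / c^2) * nu^(5/2) / (f(x) * nu_a^(1/2)) * [1 - exp(-tau_nu)],
    with tau_nu = (nu/nu_a)^(-(p+4)/2)."""
    tau = (nu / nu_a)**(-(p + 4.0) / 2.0)
    return (2.0 * K_B * T_bright / C**2) * nu**2.5 \
        / (f_of_x(x, p) * np.sqrt(nu_a)) * (1.0 - np.exp(-tau))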
§.§ The Radio Luminosity from CSM Interaction
The kinetic evolution of the shocked shell is obtained from Model_ext with the assumption Δ R_sh = 0.2R_sh. Following the results in <cit.>, we assume γ_min≈ 1.64[V_sh/(70,000 km s^-1)]^2 with γ_min≥ 1. We then have N_0 = (p-2)ϵ_relu_thE_min^p-2 by integrating the power-law distribution of the relativistic electrons, where u_th = (9/8)ρ_csmV_sh^2 is the thermal energy density and ϵ_rel is the ratio of the energy density of the relativistic electrons to u_th. The magnetic field is determined by B^2/(8π) = ϵ_Bu_th, where ϵ_B is the ratio of the magnetic energy density to u_th. We set ϵ_rel = 0.1 and ϵ_B = 0.01 in our simulations <cit.>.
Assuming that the shocked shell is homogeneous, the intensity along the line of sight is a function of the polar angle through the path length. We define a parameter h = sinθ, where θ is the polar angle with respect to the line of sight. For h = 0, we denote ν_abs = ν_abs,0, τ_ν = τ_ν,0, and τ_ν_abs = τ_ν_abs,0 = 1. For 0 ≤ h ≤ 1, τ_ν(h) = ξ_hτ_ν,0, where ξ_h = Δ s(h)/(2Δ R_sh). Thus, I_ν(h) can be derived directly from Equation <ref> by replacing ν_abs and τ_ν with ν_abs,0 and τ_ν(h), respectively. The luminosity L_ν is the integral over h, L_ν = 8π^2R_sh^2∫_0^1I_ν(h)hdh. We then define a factor ϑ = L_ν/L_ν,0, where L_ν,0 = 4π^2R_sh^2I_ν(0), so that the observed luminosity can be written as
L_ν = L_0ν^5/2/ν_abs,0^1/2[1-exp(-τ_ν,0)]
where L_0 = 8π^2kT_bright/c^2f(x)R_sh^2ϑ. For an optically thin or optically thick shell, Equation <ref> reduces to L_ν = L_0ν_abs,0^(p+3)/2ν^-(p-1)/2 or L_ν = L_0ν^5/2/ν_abs,0^1/2, respectively. In our simulations, the optical depth τ_ν,0 evolves with time during the ejecta-CSM interaction.
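The shock microphysics (N_0 and B from ε_rel and ε_B) and the angular integration over h = sin θ can then be assembled as follows (Python, reusing the nu_abs and f_of_x helpers from the previous sketch). The chord-length factor ξ_h is written here in a simplified geometric form for a thin spherical shell, capped near h = 1, which is an assumption of this sketch rather than the exact prescription of the paper:

import numpy as np

M_E, C, K_B = 9.109e-28, 2.998e10, 1.381e-16   # cgs

def shock_microphysics(rho_csm, V_sh, eps_rel=0.1, eps_B=0.01, p=3.0):
    """N0 and B behind the shock: u_th = (9/8) rho_csm V_sh^2,
    B^2/(8 pi) = eps_B u_th,  N0 = (p-2) eps_rel u_th E_min^(p-2)."""
    u_th = 9.0 / 8.0 * rho_csm * V_sh**2
    B = np.sqrt(8.0 * np.pi * eps_B * u_th)
    gamma_min = max(1.0, 1.64 * (V_sh / 7.0e9)**2)
    E_min = gamma_min * M_E * C**2
    N0 = (p - 2.0) * eps_rel * u_th * E_min**(p - 2.0)
    return N0, B

def L_nu_total(nu, R_sh, dR_sh, N0, B, T_bright, p=3.0, n_h=200):
    """L_nu = 8 pi^2 R_sh^2 * int_0^1 I_nu(h) h dh, with tau(h) = xi_h * tau_0."""
    nu_a0 = nu_abs(N0, B, 2.0 * dR_sh, p)                 # tau ~ 1 along h = 0
    hs = np.linspace(0.0, 0.999, n_h)
    tau0 = (nu / nu_a0)**(-(p + 4.0) / 2.0)
    xi = 1.0 / np.sqrt(np.maximum(1.0 - hs**2, 1e-3))     # simplified chord-length scaling
    I_h = (2.0 * K_B * T_bright / C**2) * nu**2.5 \
        / (f_of_x(1.137, p) * np.sqrt(nu_a0)) * (1.0 - np.exp(-xi * tau0))
    return 8.0 * np.pi**2 * R_sh**2 * np.trapz(I_h * hs, hs)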
§.§ The Predicted Radio Luminosity by the Model_ext
Here, we compare the radio radiation predicted from the ejecta-CSM interaction with the early-phase radio observations of SNe Ia. Early-phase radio observations exist for SNe 2012cg, 2019np, and 2021aefx, whereas we use the observational data from <cit.> for SNe 2017cbv, 2018oh, and 2019yvq, which lack early-time radio observations. The predicted radio luminosity curves at low frequencies (e.g., 1.5 GHz, 4.0 GHz, and 5.5 GHz) and at high frequency (250 GHz) are shown in Figure <ref>, using the same CSM parameter values as in Table <ref>. At the beginning of the ejecta-CSM interaction, the SSA optical depth at radio frequencies is so large that the low-frequency radio luminosity remains at a relatively low level. As the shocked shell travels outwards, the CSM density rapidly decreases, resulting in a sharp decline of the radio radiation at both high and low frequencies. The predicted radio luminosities are compared in Figure <ref> with the observations of SNe Ia, excluding peculiar subclasses such as Iax, 02es-like, Ca-rich, super-Chandrasekhar, and Ia-CSM. The predicted curves lie below the upper limits of the radio observations, except for SNe 2011fe and 2014J. This implies that, even for the revisited SNe Ia that show obvious early light-curve bumps, the existing observations are not sensitive enough to reveal the underlying CSM interaction. The progenitor mass-loss rates shortly before explosion must be even lower for those SNe whose detection limits lie below the predicted radio flux. For instance, the upper limit on the mass-loss rate of SN 2011fe and SN 2014J is about 2×10^-10 M_⊙ yr^-1 from our calculation, slightly smaller than the upper limit from <cit.> owing to the different configuration of the CSM interaction models.
Moreover, the radio light curves shown in Figure <ref> suggest that the radio emission at higher frequencies (e.g., ∼250 GHz) is several orders of magnitude stronger than at lower frequencies (e.g., ∼1.5 GHz). However, the main constraint is that the radio observations must be triggered within a few days after the explosion of SNe Ia with early optical excess. Such high-frequency observations may be achievable with the Atacama Large Millimeter/submillimeter Array (ALMA). As shown in Figure <ref>, the 250 GHz luminosity can exceed about 10^27 erg s^-1 Hz^-1 between +1 and +5 days with respect to the explosion. The corresponding flux is about 10.0 mJy at a distance of about 20 Mpc, which is within the sensitivity of ALMA. It is therefore critical to discover nearby SNe Ia within one or two days after explosion and to trigger multi-band photometric, spectroscopic, and radio observations. A time-domain observational approach combining optical facilities such as the Zwicky Transient Facility (ZTF, ), the Wide Field Survey Telescope (WFST, ), and the Ultraviolet Transient Astronomy Satellite (ULTRASAT, ) with ALMA radio observations will provide the best chance to capture the UV, optical, and radio signals from the ejecta-CSM interaction of SNe Ia.
§ CONCLUSIONS
In this paper, we revisited the possible ejecta-CSM interaction origin of the early excess emission in SNe Ia. The CSM interaction described by Model_sh resembles a shock breakout process, in which the CSM extends to about 10^11-10^13 cm. At such a small distance scale, the temperature of the shocked CSM rapidly decreases as it expands, and the corresponding thermal radiation is so short-lived that Model_sh can fit only the early flash of SN 2020hvf among the eight revisited SNe Ia. When the radial distribution of CSM extends to about 10^15 cm, the CSM interaction continues for a few days. Model_ext describes a situation in which the mass-loss rate is a function of the time before the explosion. With appropriate parameter values, Model_ext can fit the optical excess of the remaining seven SNe Ia. When the extinction and scattering by circumstellar dust are taken into account, Model_ext also matches the UV-band light curves, except for iPTF14atg, which may rule out an ejecta-CSM interaction origin for the early excess emission of that event. In particular, the CSM configuration of Model_ext also predicts radio emission detectable a few days after explosion at ∼250 GHz, enabling a multi-band diagnosis of the circumstellar environment surrounding SNe Ia.
The success of Model_ext in fitting the observed data of the revisited SNe Ia suggests that SNe Ia with early excess require more observations to distinguish whether this excess originates from ^56Ni mixing in the ejecta, helium detonation on the surface of a WD, interaction with a companion, or ejecta-CSM interaction. It is necessary to compare the observational characteristics of these four scenarios in the first few days after the SN explosion. In particular, multi-wavelength observations covering the X-ray, UV, optical, and radio bands are all needed to distinguish these scenarios.
§ ACKNOWLEDGEMENTS
This work is supported by the Major Science and Technology Project of Qinghai Province (2019-ZJ-A10) and the National Key Research and Development Programs of China (2022SKA0130100). Maokai Hu acknowledges support from the Jiangsu Funding Program for Excellent Postdoctoral Talent. Xiaofeng Wang is supported by the National Natural Science Foundation of China (NSFC grants 12288102, 12033003, and 11633002), the Scholar Program of Beijing Academy of Science and Technology (DZ: BS202002), and the Tencent Xplorer Prize. Lingzhi Wang is sponsored (in part) by the Chinese Academy of Sciences (CAS), through a grant to the CAS South America Center for Astronomy (CASSACA) in Santiago, Chile.
§ DATA AVAILABILITY
The data underlying this article will be shared on reasonable request to the corresponding author.
mnras
|
http://arxiv.org/abs/2307.03363v1
|
20230707030726
|
Federated Unlearning via Active Forgetting
|
[
"Yuyuan Li",
"Chaochao Chen",
"Xiaolin Zheng",
"Jiaming Zhang"
] |
cs.LG
|
[
"cs.LG"
] |
[email protected]
Zhejiang University
China
[email protected]
Zhejiang University
China
[email protected]
Zhejiang University
China
[email protected]
Zhejiang University
China
The increasing concerns regarding the privacy of machine learning models have catalyzed the exploration of machine unlearning, i.e., a process that removes the influence of training data on machine learning models.
This concern also arises in the realm of federated learning, prompting researchers to address the federated unlearning problem.
However, federated unlearning remains challenging.
Existing unlearning methods can be broadly categorized into two approaches, i.e., exact unlearning and approximate unlearning.
Firstly, implementing exact unlearning, which typically relies on the partition-aggregation framework, in a distributed manner does not improve time efficiency theoretically.
Secondly, existing federated (approximate) unlearning methods suffer from imprecise data influence estimation, significant computational burden, or both.
To this end, we propose a novel federated unlearning framework based on incremental learning, which is independent of specific models and federated settings.
Our framework differs from existing federated unlearning methods that rely on approximate retraining or data influence estimation.
Instead, we leverage new memories to overwrite old ones, imitating the process of active forgetting in neurology.
Specifically, the model, intended to unlearn, serves as a student model that continuously learns from randomly initiated teacher models.
To prevent catastrophic forgetting of non-target data, we utilize elastic weight consolidation to elastically constrain weight changes.
Extensive experiments on three benchmark datasets demonstrate the efficiency and effectiveness of our proposed method.
The results of backdoor attacks demonstrate that our proposed method achieves satisfactory completeness.
Federated Unlearning via Active Forgetting
Jiaming Zhang
§ INTRODUCTION
With the prevalence of Machine Learning (ML) in various areas <cit.>, there is a growing concern regarding the potential negative impacts of ML models.
In response, several regulatory requirements have emerged to promote privacy in ML systems.
For example, the General Data Protection Regulation (GDPR) <cit.> in the European Union allows individuals to request the removal of their data, including any influence it may have had on training models, i.e., the Right To Be Forgotten (RTBF).
Similarly, the California Consumer Privacy Act (CCPA) <cit.>, proposed as a state law in the U.S., requires businesses to disclose what personal information they collect and gives consumers the right to request deletion.
Machine unlearning is an effective approach that can help preserve privacy in ML systems, which focuses on removing previously used data and learned information from ML models.
On the one hand, individuals can remove their sensitive information learned by ML models, in addition to removing their training data.
On the other hand, companies can proactively unlearn the dirty data that is no longer accurate <cit.>.
The most straightforward method of unlearning is retraining the model from scratch using the updated dataset, without including the target data, i.e., data to be unlearned.
However, retraining from scratch is often impractical due to the significant computational overhead involved when training ML models.
Based on the degree of unlearning completeness, unlearning methods can be categorized into two approaches, i.e., exact unlearning (full completeness) and approximate unlearning (partial completeness).
In recent years, federated learning has emerged as a promising approach for training ML models in a distributed manner, without compromising user privacy.
Despite its potential benefits, federated learning still falls short in addressing the aforementioned concerns, i.e., RTBF and fairness.
As a result, researchers have turned their attention to exploring unlearning in a distributed manner, known as federated unlearning.
Existing federated unlearning methods predominantly focus on approximate unlearning.
This is due to the fact that implementing exact unlearning, which typically relies on the partition-aggregation framework, in a distributed manner does not yield substantial improvements in time efficiency.
Existing federated (approximate) unlearning methods are inadequate as they are plagued by imprecise data influence estimation <cit.>, significant computational burden <cit.>, or both, leaving much room for improvement.
To be specific, imprecise estimation will significantly affect the completeness of unlearning and hinder model utility,
while computational burden reduces the efficiency of unlearning.
In this paper, we propose a novel federated unlearning method, named , which is independent of specific models and federated settings.
As shown in Figure <ref>, our proposed presents a distinct design in comparison to the prevailing unlearning approaches.
Taking inspiration from active forgetting in neurology <cit.>, enables the target model, i.e., the model in a client with the intention to unlearn, to use new memories to overwrite the old ones, mimicking the function of dopamine-producing forgetting cells that accelerate the elimination of memorization.
This allows for a seamless unlearning process that is integrated into the original federated learning procedure, which fundamentally overcomes the limitations of existing federated unlearning methods, without the need for storing historical updates or estimating data influence.
Specifically, we implement the design of active unlearning based on incremental learning, which focuses on the task of learning multiple tasks in sequence.
In the context of unlearning, the original learning process and the subsequent unlearning process can be regarded as two sequential tasks.
Here we treat all unlearning requests as one process for conciseness.
The key distinction between incremental learning and federated unlearning lies in the presence of task conflict.
Incremental learning acquires sequentially new knowledge without any conflict between two sequential tasks.
In federated unlearning, there is a conflict as the subsequent task requires unlearning previously acquired knowledge in the previous task.
When using incremental learning to unlearn, there are two critical points to consider: (i) generating effective new memories, and (ii) overcoming catastrophic forgetting.
consists of two modules, i.e., memory generator and knowledge preserver, to address these issues, respectively.
Firstly, we need to generate knowledge-free and easy-to-learn new memories to overwrite the old ones.
Knowledge-free means containing no effective information about the data, e.g., a random label vector.
Meanwhile, the new memories have to be easy-to-learn so that it will not cost considerable computational overhead to accomplish unlearning.
Memory generator follows a teacher-student learning pattern to generate fake labels and pairs them with original features.
Secondly, previous studies found that conventional deep learning methods fail to tackle incremental learning due to the phenomenon of catastrophic forgetting <cit.>.
In other words, besides removing the influence of target data, the influence of non-target data is also removed.
Knowledge preserver alleviates the issue of unintentional forgetting by elastically constraining the model's parameters with our derived loss, which is to address the problem of task conflict.
The main contributions of this paper are summarized as follows:
* We propose a novel federated unlearning method, i.e., based on the concept of active forgetting, which fundamentally overcomes the limitations posed by existing federated unlearning methods.
* To effectively generate new memories to overwrite the old ones, we adopt a teacher-student learning pattern.
The model in a target client, i.e., student, distills knowledge from manipulated data which is generated by teacher models.
* To alleviate the catastrophic forgetting phenomenon of non-target data, we first derive a new loss to address the problem of task conflict, and then dynamically constrain the change of model's parameters with elastic weight consolidation.
* We conduct extensive experiments on three benchmark datasets to evaluate the performance of .
The results show that our proposed outperforms compared methods in terms of efficiency, utility, and completeness.
§ PRELIMINARIES
In this section, we first introduce the notations of federated learning and unlearning, followed by the principles of federated unlearning.
Afterwards, we clarify the unlearning target in this paper.
§.§ Notation
Federated Learning Federated learning allows multiple clients to collaboratively train a global model without sharing their private data.
A typical architecture of federated learning, i.e., FedAvg <cit.> can be formulated as follows:
There are K clients denoted by 𝒦 = {1, 2, …, K}, and a server.
Each client k ∈𝒦 has a local dataset 𝒟_k with n_k denoting the number of samples in it.
The goal is to obtain a global model parameterized by θ^* that minimizes the empirical risk over all clients:
θ^* = argmin_θ1/K∑_k=1^K w_k L_k(θ),
where w_k = n_k/∑_k n_k is the weight assigned to client k, and L_k(θ) is the local loss function of client k.
At t-th federated iteration, each client k performs E epochs of stochastic gradient descent on its local dataset 𝒟_k using the current global model parameters θ_t, and then uploads the updated model parameters θ_k, t+1 to the server for aggregation.
The server aggregates the model updates from all clients using a weighted average:
θ_t+1 = ∑_k=1^K w_k θ_k, t+1.
This process continues until the global model θ converges or reaches the maximum federated iterations.
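A minimal FedAvg round matching the notation above can be sketched as follows (PyTorch-style Python; the client-side routine is reduced to plain SGD, all tensors are assumed to be on the same device, and integer buffers are cast for simplicity):

import copy
import torch

def local_update(global_model, loader, epochs=1, lr=0.01):
    """Client-side step: E epochs of SGD on the local dataset, starting from theta_t."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict(), len(loader.dataset)

def fedavg_round(global_model, client_loaders):
    """Server-side step: theta_{t+1} = sum_k w_k theta_{k,t+1}, w_k = n_k / sum_k n_k."""
    updates = [local_update(global_model, dl) for dl in client_loaders]
    total = sum(n for _, n in updates)
    avg = {key: sum(sd[key].float() * (n / total) for sd, n in updates)
           for key in updates[0][0]}
    global_model.load_state_dict(avg)
    return global_model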
Federated Unlearning As formulated above, each client k possesses a local dataset 𝒟_k.
Thus, the client k has the right to remove any of its data and corresponding influence in the global model.
This process is termed as unlearning.
Formally, the client k submits a request asking to unlearn a specific target ℛ∈𝒟_k.
Following <cit.>, we assume that the requests are submitted after the end of federated training, i.e., the global model has reached convergence.
In practice, clients may submit unlearning requests during federated training, but comparing the performance of the global model before and after unlearning during federated training is inconvenient.
After receiving the requests, the server can instruct all clients to take any necessary steps to unlearn the target data while ensuring that their local data remains private.
We denote the parameters of the ground-truth unlearned model by θ_¬^* which is collaboratively retrained across all clients on 𝒟\ℛ from scratch.
§.§ Unlearning Principles
We identify four principles that we consider as the pillars of achieving successful federated unlearning.
Similar objectives of the first three principles can also be found in <cit.>.
P1: Unlearning Completeness
Completely unlearning the influence of target data refers to entirely revoking the target data information learned by ML models, making it irretrievable.
In some real-world scenarios, completeness may be partially sacrificed for efficiency.
P2: Unlearning Efficiency
Efficiency is another important principle of unlearning.
Practical ML models often involve large-scale datasets and parameters, which results in significant computational overheads in terms of both time and space.
As a consequence, retraining from scratch is prohibitive.
P3: Model Utility
It is evident that both clients and servers desire to maintain model performance after unlearning.
However, it is important to acknowledge that unlearning a significant amount of data lineage would inevitably diminish the model utility, because unlearning is equivalent to reducing the amount of training data.
Thus, an adequate unlearning method needs to generate an unlearned model that achieves comparable performance to a model retrained from scratch.
P4: Data Privacy
In the context of federated learning, the raw data is possessed by a set of clients who are unwilling and not authorized to share it with each other.
Due to this inherent limitation, it becomes imperative to ensure that user privacy, i.e., sharing raw data, is not compromised during the federated unlearning process.
§.§ Unlearning Targets
Unlearning targets can be mainly classified into four categories based on their scope, i.e., client-wise (all data of a client), class-wise (all data of a class), sample-wise (a data sample) and feature-wise (one feature dimension of a data sample).
In federated learning, the data is possessed by various clients without permission for sharing.
Thus, in this paper, the class-wise target refers to a specific class of data that is possessed by the target client.
Except for feature-wise targets, our proposed method () is capable of handling the other three types of targets, and is particularly well-suited for class-wise targets.
To fully exploit the capability of , in this paper, we conduct experiments of class-wise unlearning, without compromising the generality.
§ RELATED WORK
§.§ Machine Unlearning
Machine unlearning methods can be categorized into two approaches: exact unlearning (full completeness) and approximate unlearning (partial completeness).
The choice of approach depends on the situation and desired outcome.
Exact Unlearning (EU) This approach aims to completely remove the influence of target data, guaranteeing that there is no residual influence in the unlearned model.
Retraining from scratch is a naive but algorithmically exact approach, i.e., it naturally achieves full completeness.
As retraining takes considerable computational overhead in practice, EU approach mainly focuses on enhancing retraining efficiency.
The main idea behind the existing EU methods is partition-aggregation, which involves partitioning the dataset or model into sub-components, training them individually, and then aggregating them at the end <cit.>.
With this idea, EU methods can limit the overhead of retraining to sub-components, and avoid retraining from scratch.
However, EU methods suffer from a trade-off between unlearning efficiency (P2) and model utility (P3).
On the one hand, increasing the number of sub-components can enhance unlearning efficiency, but this may lead to the issue of weak learners, i.e., models with poor utility.
On the other hand, limiting the number of sub-components can preserve model utility, but this simultaneously restricts the unlearning efficiency.
Approximated Unlearning (AU) This approach aims to expedite unlearning by approximating the parameters of the ground truth model, i.e., retraining from scratch.
Existing AU methods estimate the influence of target data and directly remove it through reverse gradient operations <cit.>.
Estimating the data influence is mainly based on influence function <cit.>.
Despite the theoretical ability of AU methods to improve unlearning efficiency, the associated computational overhead required for influence estimation remains a significant and limiting factor, particularly for large-scale models.
The latest AU methods manage to accelerate influence estimation by approximation, i.e., approximate the approximation <cit.>, which inevitably results in decreased accuracy of influence estimation.
§.§ Federated Unlearning
Due to the collaboration among clients in federated learning, achieving exact federated unlearning costs extra-prohibitive computational overhead.
Therefore, existing federated unlearning methods focus on approximate unlearning, which can be further divided into the following three categories.
Retrain Unlearning This approach mimics EU approach, but does not adopt the partition-aggregation framework, which can not improve time efficiency in the context of federated learning.
Instead, retrain unlearning methods accelerate retraining by approximating the gradients using previously stored historical updates <cit.>.
This approach is hindered by inaccurate approximation and the burden of data storage.
Reverse Unlearning This approach follows the idea of AU, removing the estimated influence through reverse gradient operations, e.g., loss maximization <cit.> and stochastic gradient ascent <cit.>.
Analogous challenges to the AU approach are encountered by this approach.
Others There are other federated unlearning methods using knowledge distillation <cit.>, scaled gradients <cit.>, and channel pruning (CNN-specific) <cit.>.
The theoretical underpinnings of these methods are notably weaker than that of the above two approaches, placing them at a greater risk of encountering privacy concerns.
As for comparison, our proposed is based on a novel approach that continues the original federated learning process to achieve unlearning.
§.§ Incremental Learning
Incremental learning, which is also known as continual learning <cit.> and lifelong learning in literature, aims to sequentially learn multiple tasks.
Different from transfer learning and multi-task learning, incremental learning
focuses on achieving high performance across all tasks while only having access to the data of new task(s).
This aligns with the setting of federated unlearning, where it can be arduous to reach for historical updates.
There are mainly three approaches to address catastrophic forgetting in incremental learning:
* Selective Synaptic Plasticity elastically constrains parameter change based on synaptic importance to preserve learned knowledge <cit.>.
It is best known through the Elastic Weight Consolidation (EWC) framework.
* Additional Neural Resource Allocation allocates new parameters for new tasks <cit.>.
This approach changes the model structure, which is obviously unsuitable for the unlearning problem.
* Memory Reply stores previous data or generates pseudo-data, and replays this data with the new data <cit.>.
This approach either increases extra computational overhead or changes the original learning process.
Thus it is out of our consideration.
§ ACTIVE FORGETTING FRAMEWORK
Motivation
The challenge of federated unlearning arises due to the extensive collaboration between clients and the server in federated learning.
Existing federated unlearning methods suffer from imprecise data influence estimation, significant computational burden, or both.
Inspired by active forgetting <cit.> in Neurology, we propose a novel federated unlearning framework named .
Different from natural forgetting, i.e., passive forgetting, active forgetting can induce the forgetting of specific memories <cit.>.
Advantages To actively forget the target data influence, our proposed Federated Active Forgetting Framework () continually trains the model with new memories, i.e., manipulated data, which are generated from the old memories that need to be forgotten, i.e., target data.
Compared with existing framework, our proposed has the following advantages:
* 's unlearning process can be considered as a part of the extended learning process, reducing the additional impact on the model utility to a minimal level.
* has wide applicability.
As it seamlessly integrated the unlearning process into the learning process, it is independent of specific models and federated settings.
* Compared with retrain unlearning, neither requires additional storage for historical updates nor spends extra computational overhead for non-target data, which reduces the deployment cost of unlearning in practice.
* Compared with reverse unlearning, avoids estimating the influence of data, which is found analytically intractable <cit.>, and continues the original training to achieve unlearning.
This enables to unlearn with higher precision.
§.§ Framework Overview
learns from the new memories to achieve unlearning.
The new memories contain no effective knowledge of the target data, which can overwrite the old ones.
Figure <ref> shows the learning (black arrows) and unlearning (blue arrows) workflow of .
In the learning workflow, neither interferes with the original federated learning process nor stores any historical update.
As described in Section <ref>, we can condense the original federated learning process into two loops, i.e., local training loop and federated training loop.
In the unlearning workflow, incorporates an unlearning loop within the local training loop and then broadcasts the unlearned update through the federated training loop.
The unlearning loop in consists of two modules, i.e., memory generator and knowledge preserver.
These two modules tackle the aforementioned issues respectively, i.e., generating effective new memories and overcoming catastrophic forgetting.
In general, the memory generator initializes a set of teacher models to generate fake labels, and then pairs them with the original feature to produce the manipulated data.
The knowledge preserver continually trains the model on the manipulated data.
With the help of the new loss function derived from EWC framework, the knowledge preserver manages to alleviate the catastrophic forgetting phenomenon.
Algorithm <ref> summarizes the details in the unlearning workflow, where lines 1 to 6 represent memory generator and line 7 represents knowledge preserver.
§.§ Memory Generator
The goal of memory generator is to produce knowledge-free and easy-to-learn new memories for overwriting.
In this paper, we focus on supervised learning, where the knowledge usually lies in labels.
Therefore, we manipulate labels to generate new memories.
§.§.§ Knowledge-Free
The primary characteristic of new memories is knowledge-free.
Removing the influence of the target data means making the model behave as if it has never seen the target data before.
In other words, the model is supposed to have no knowledge about the target data.
Following the idea of active forgetting, we generate knowledge-free labels, and overwrite the previously learned knowledge by training the model on these manipulated data (original features with new labels).
Assuming there is a binary classification task, and the label of a target data point can be [0, 1]^⊤, which means it belongs to the second class.
There are two types of straightforward knowledge-free labels: i) the uniform label, e.g., [0.5, 0.5]^⊤, and ii) the random label, e.g., [r∼𝒰(0, 1), 1 - r]^⊤ where r is a random value sampled from a uniform distribution 𝒰(0, 1).
We compared with both types of the above labels in our ablation study (Section <ref>).
§.§.§ Easy-to-Learn
The above two introduced knowledge-free labels do not take prior knowledge of models into consideration, which makes them hard to learn by the model.
Our empirical study (Figure <ref>) shows that both the uniform and random labels have residual old memories and unstable performance.
These hard-to-learn labels cause two potential problems during unlearning: i) the incapacity of completely removing the influence of target data, and ii) increasing the computational overhead for the unlearning process.
To make it easier to learn, we distill the prior knowledge from the model into labels.
Specifically, we adopt a teacher-student learning pattern <cit.>.
We first initialize a set of untrained models as teacher models, since they have no learned knowledge about training data.
Then we feed teacher models with features of target data, producing the teacher predictions.
Finally, the teacher label is obtained by averaging the teacher predictions.
Formally, the teacher label ŷ is computed as
ŷ = 1/Q∑_i=1^Q θ_i(x),
where x is data features and Q is the number of teachers.
Note that the teacher models do not require training and can be released after generating the fake labels.
Thus, the memory generator has both space and time advantages over the space-for-time strategy in existing retrain unlearning methods.
However, the prior knowledge can be so strong that it results in bias on particular data.
The bias is added purely by the model or algorithm, which is referred to as algorithmic bias <cit.>.
Our empirical study (Figure <ref>) shows that the untrained model naturally has prediction bias on particular classes, which makes the testing accuracy of these classes relatively high.
This potentially leaks knowledge about the data, which is conflicted with the primary characteristic, i.e., knowledge-free.
Consequently, we import a debias vector ν to underweight the target class <cit.>.
Each element of ν represents the propensity score ν_i of the corresponding class.
We set the propensity score of target class ν_target = σ∈ [0, 1] and the other classes ν_non-target = 1.
Formally, debias teacher label is computed as
ỹ = debias(ŷ) = νŷ/|νŷ|_1, ν = 1 + (σ - 1)y,
where 1 is an all-one vector, y is the original label, and σ is debias weight.
νŷ is scaled by its L1 norm to satisfy the summation constraint, i.e., ∑ y = 1.
The debias weight can be dynamically determined by the quotient of the target prediction weight and average prediction weight.
In this way, we enjoy a balanced trade-off between the characteristics of knowledge-free and easy-to-learn.
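A sketch of the memory generator described above is given below (PyTorch-style Python). Here `make_model` is a placeholder for the client's (Xavier-initialized, untrained) model constructor, softmax outputs are assumed when averaging the teacher predictions, and the dynamic choice of σ is one plausible reading of the rule in the text:

import torch

def teacher_label(x, make_model, Q=10):
    """y_hat = (1/Q) * sum_i theta_i(x) over Q randomly initialized, untrained teachers."""
    with torch.no_grad():
        probs = [torch.softmax(make_model()(x), dim=1) for _ in range(Q)]
    return torch.stack(probs).mean(dim=0)

def debias(y_hat, y_onehot, sigma=None):
    """y_tilde = (nu * y_hat) / ||nu * y_hat||_1 with nu = 1 + (sigma - 1) * y.
    If sigma is not given, it is set from the ratio of the average and target
    prediction weights, clamped to [0, 1] (an assumption of this sketch)."""
    if sigma is None:
        target_w = (y_hat * y_onehot).sum(dim=1, keepdim=True)
        sigma = torch.clamp(y_hat.mean(dim=1, keepdim=True) / target_w, max=1.0)
    nu = 1.0 + (sigma - 1.0) * y_onehot
    out = nu * y_hat
    return out / out.sum(dim=1, keepdim=True)

# new memories: pair the original target features x with the debiased fake labels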
§.§ Knowledge Preserver
While unlearning target data, we also need to preserve the remaining knowledge learned from non-target data.
Conventional deep learning methods suffer from the phenomenon of catastrophic forgetting when incrementally learning a sequence of tasks.
To imitate active forgetting in the human brain, our proposed is based on the idea of incremental learning.
The target client k sequentially trains the model on two tasks, i.e., 𝒟_k and ℳ.
To avoid confusion, we directly name the task by its dataset.
Tasks 𝒟_k and ℳ stand for the learning and unlearning process, respectively.
Catastrophic forgetting happens when training on the latter task, i.e., ℳ.
The new memories not only overwrite the old memories of target data, but also make the model forget the memories of non-target data.
Thus, we introduce EWC training to alleviate this phenomenon.
As introduced in Section <ref>, we do not change the learning process, which means that task 𝒟_k adopts conventional training.
In this paper, we take empirical risk minimization as an example.
The client k optimizes model θ by minimizing the loss of task 𝒟_k as follows:
L_𝒟_k(θ) = 1/|𝒟_k|∑_x ∈𝒟_kℓ(θ, x).
At the end of training, θ is supposed to converge to a solution space θ^*_𝒟_k where ∀θ∈θ^*_𝒟_k is an acceptable solution for task 𝒟_k.
The next step is the unlearning process, where we train on task ℳ.
We explain the difference between conventional training and EWC training as follows.
Conventional Training The model θ^*_𝒟_k continues training by minimizing the loss of task ℳ, i.e., L_ℳ(θ).
As shown in Figure <ref>, conventional training methods suffer from catastrophic forgetting, which is detrimental to model performance.
EWC Training A straightforward way to preserve knowledge is to train the model on both tasks simultaneously.
As shown in Figure <ref>, this method is based on the assumption that there is an overlapping solution space of both tasks.
We empirically validate this assumption in Section <ref>.
Ignoring the scaling term 1/|𝒟', ℳ|, the overall loss on both tasks is defined as
L_𝒟', ℳ(θ) = L_ℳ(θ) + L_𝒟'(θ),
where 𝒟' = 𝒟_k \ℛ denotes the task of non-target data, since we are required to unlearn the target data ℛ.
However, computing the loss of 𝒟' requires non-target data, which is against our intention of avoiding extra computational overhead.
EWC provides an approximation of L_𝒟_k(θ) when θ^*_𝒟_k is available but 𝒟_k is not.
The overall loss L_𝒟_k, ℳ(θ) is approximated by computing the second order Taylor expansion of L_𝒟_k(θ) at θ^*_𝒟_k as follows:
L_𝒟_k, ℳ(θ) = L_ℳ(θ) + L_𝒟_k(θ)
≈ L_ℳ(θ) + λ/2(θ - θ^*_𝒟_k)^⊤ H_𝒟_k(θ - θ^*_𝒟_k) + ϵ,
where λ is a hyper-parameter introduced to have a trade-off between learning ℳ and not forgetting 𝒟_k, H_𝒟_k = ∂^2L(θ^*_𝒟_k)/∂^2θ^*_𝒟_k denotes Hessian matrix at θ^*_𝒟_k, and ϵ accounts for all constants.
Please refer to <cit.> for more details of derivation.
As shown in Eq. (<ref>), the Hessian matrix can be interpreted as a regularizer weight matrix on (θ - θ^*_𝒟_k), where each element represents the synaptic importance of a parameter.
The more important a parameter is, the larger the regularization weight assigned to it, so that it changes less when learning ℳ.
The Hessian can be efficiently approximated by the Fisher information matrix, which is computed from first-order derivatives <cit.>.
Based on the above approximation of L_𝒟_k(θ), we compute the overall loss on tasks 𝒟' and ℳ, i.e., the unlearning loss, as follows:
L_𝒟', ℳ(θ) = L_ℳ(θ) + L_𝒟'(θ) = L_ℳ(θ) + L_𝒟_k(θ) - L_ℛ(θ)
≈ L_ℳ(θ) - L_ℛ(θ)+ λ/2(θ - θ^*_𝒟_k)^⊤ H_𝒟_k(θ - θ^*_𝒟_k) + ϵ,
where ℛ and ℳ denote the original target data (old memories) and the manipulated target data (new memories) respectively.
Consequently, the model θ^*_𝒟_k continues training on our proposed new loss to alleviate the phenomenon of catastrophic forgetting.
In this way, the knowledge preserver can unlearn the target data without breaking the original training procedure.
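The unlearning loss of Eq. (<ref>) can be sketched as follows (PyTorch-style Python). The diagonal empirical Fisher is used in place of the full Hessian, as is common in EWC implementations, and the choice of soft-label cross-entropy for the manipulated targets and standard cross-entropy for the original target labels is an assumption of this sketch:

import torch

def diag_fisher(model, loader, loss_fn):
    """Diagonal (empirical) Fisher information at theta*_{D_k}, replacing the Hessian."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            fisher[n] += p.grad.detach()**2 / len(loader)
    return fisher

def unlearning_loss(model, star_params, fisher, x_m, y_m, x_r, y_r, lam=10.0):
    """L = L_M(theta) - L_R(theta) + (lam/2) * sum_i F_i * (theta_i - theta*_i)^2."""
    soft_ce = lambda logits, t: -(t * torch.log_softmax(logits, dim=1)).sum(dim=1).mean()
    loss = soft_ce(model(x_m), y_m) \
        - torch.nn.functional.cross_entropy(model(x_r), y_r)
    for n, p in model.named_parameters():
        loss = loss + 0.5 * lam * (fisher[n] * (p - star_params[n])**2).sum()
    return loss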
§ EXPERIMENTS
We evaluate the effectiveness of our proposed method on three widely used datasets based on three principles, i.e., unlearning completeness, unlearning efficiency, and model utility.
To further investigate our proposed method, we also conduct an ablation study.
§.§ Dataset
Our experiments are conducted on three benchmark datasets, i.e., MNIST <cit.>, CIFAR10 <cit.>, and CelebA <cit.>.
We provide the information about these datasets in Table <ref>.
For MNIST and CIFAR10, following their original papers, we leave 10,000 samples for testing and use the others for training.
For CelebA, we select two representative attributes, i.e., Male and Mouth_Slightly_Open.
We label the samples based on whether they possess these attributes or not, which results in four distinct classes, i.e., having both attributes, having none of the attributes, only having the Male attribute, and only having the Mouth_Slightly_Open attribute.
We use 80%, 10%, and 10% of the original dataset for training, validation, and testing, respectively.
§.§ Experimental Settings
Model We use a network that consists of 2 convolutional layers followed by 1 fully-connected layer for MNIST, ResNet-10 <cit.> for CIFAR10 and CelebA.
For simplicity, we did not sufficiently fine-tune the models to get their optimal performance, which is not the focus of this paper.
Training Details We initialize all model parameters, including teacher models, with Xavier Initialization <cit.>.
As initialization involves randomness, we run all models for 10 trials and report the average results.
We adopt the cross entropy loss function and stochastic gradient descent to train the models.
The learning rate is set as 0.001 for MNIST, 0.01 for CIFAR10, and 0.01 for CelebA.
In this paper, we conduct class-wise unlearning, and unlearn one class of a specific client at one time for all classes.
All models are implemented with PyTorch.
We run all experiments on an Ubuntu 20.04 LTS system server with 256GB RAM, and NVIDIA GeForce RTX 3090 GPU.
Federated Settings For federated learning, we adopt the widely acknowledged FedAvg <cit.>.
Specifically, we set the number of clients as 4, and the local epoch as 1.
Ensuring convergence of the global model, the maximum round of federated iteration is set as 5, 20, and 10 for MNIST, CIFAR10, and CelebA, respectively.
Hyper-parameters In our proposed , there are three hyper-parameters, i.e., EWC training epoch, trade-off coefficient λ, and the number of teacher models Q.
For the EWC training epoch, we investigate it in {1, 2, 3, 4, 5}.
For λ, we investigate it in {0.1, 0.5, 1, 10, 50}.
Based on empirical results, we set the EWC training epoch as 1 and λ as 10 for the following experiments in this paper.
We report empirical results in Appendix A.
For Q, we investigate it in {5, 10, 15, 20} and finally set Q=10, since we observe that a larger number over 10 cannot significantly improve the overall performance of teacher models.
§.§ Compared Methods
We compare our proposed with two representative unlearning methods which are applicable for class-wise unlearning.
* Retrain: Retraining from scratch is the ground-truth unlearning method with heavy computational overhead.
* FRR <cit.>: Federated Rapid Retraining (FRR) is the State-Of-The-Art (SOTA) sample-wise federated unlearning method, which can be applied to class-wise unlearning.
We implement FRR using the published code.
§.§ Results and Discussions
§.§.§ Unlearning Completeness
Following <cit.>, we utilize backdoor attack <cit.> to evaluate the unlearning completeness.
Specifically, we implant backdoor triggers into the target data that we aim to unlearn, and flip their labels to a random class.
Through this process, the backdoor attack enforces the model to build a mapping between the trigger pattern and the flipped label.
As a consequence, we can evaluate unlearning completeness by comparing the performance of target data before and after unlearning.
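The backdoor-based completeness check can be sketched as follows (PyTorch-style Python; the trigger pattern — a small bright patch in the image corner — and the purely random label flip are placeholder design choices, not the exact ones used in the cited attack):

import torch

def implant_trigger(x, y, num_classes, patch=3):
    """Stamp a small trigger patch onto each target image and flip its label randomly."""
    x_bd = x.clone()
    x_bd[:, :, -patch:, -patch:] = x.max()          # bright square in the corner
    y_bd = torch.randint(0, num_classes, y.shape)   # flipped (random) labels
    return x_bd, y_bd

def backdoor_accuracy(model, x_bd, y_bd):
    """BD Acc: fraction of triggered target samples still mapped to the flipped labels;
    a large drop after unlearning indicates that the target influence was removed."""
    model.eval()
    with torch.no_grad():
        pred = model(x_bd).argmax(dim=1)
    return (pred == y_bd).float().mean().item()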
We report BackDoor Accuracy (BD Acc) of the unlearned data in Table <ref>.
From it, we have the following observations:
i) All compared methods significantly decrease the BD Acc after unlearning (from 76.21% to 1.00% on average), indicating that the influence of target data is significantly reduced;
ii) FRR and cannot achieve as low BD Acc (after unlearning) as Retrain.
Compared with the BD Acc of Retrain, FRR and have an average increase of 1.02% and 0.90%, respectively;
iii) There is only a marginal difference between FRR and , regarding performance on completeness.
Overall, slightly outperforms the SOTA method in terms of unlearning completeness.
§.§.§ Unlearning Efficiency
We use running time to measure the unlearning efficiency of the compared methods.
Specifically, we report the average running time of unlearning the first class in a specific client in Table <ref>.
As shown in Table <ref>, our proposed on average improves the efficiency of Retrain by 39.51 times.
On the contrary, FRR spends a significantly greater amount of time than Retrain due to the precise computation required by the Newton optimizer.
This is totally unacceptable because the design ethos behind unlearning methods is to achieve more efficient unlearning than Retrain.
§.§.§ Model Utility
As we mentioned in Section <ref>, preserving the model utility is also an important principle of federated unlearning.
An adequate unlearning method is supposed to avoid over-removing the influence of target data.
Therefore, we compare the after-unlearning model utility by evaluating the model performance on the testing set, and report the results in Table <ref>.
From it, we observe that all compared methods, i.e., Retrain, FRR, and , achieve Test Acc that is close to what was achieved before unlearning.
Specifically, regarding Retrain as a baseline, FRR and can limit the difference between their Test Acc and that of Retrain to below 3.12% and 2.31% respectively, indicating that there is no significant forgetting of the non-target data.
§.§ Ablation Study
§.§.§ Memory Generator
We conduct an overlapping validation experiment for two purposes:
i) to empirically validate that there is an overlapping solution space for tasks 𝒟' and ℳ. As we mentioned in Section <ref>, the existence of overlapping solution space is one of the most fundamental requirements for EWC training,
and ii) to compare the performance of the aforementioned labels, including two benchmark labels, i.e., uniform label and random label, and two our proposed labels, i.e., teacher label and debias teacher label.
Thus, we train the model on the mixed task 𝒟' + ℳ, and observe the performance, which indicates the upper bound of EWC training.
For task 𝒟', we remove the target class ℛ from the original local dataset 𝒟_k.
For task ℳ, we replace the label of the target class with a fake label to construct manipulated data.
We mix the data from the above two tasks for training and test the model on the testing set.
Manipulating one class at each time, we report the accuracy of the target class in Figure <ref> and the accuracy of non-target data in Appendix B.
If the overlapping solution space exists, the model is supposed to achieve desirable performance on both tasks, i.e., i) on task 𝒟': having high accuracy on non-target data, and ii) on task ℳ: having low accuracy on target class, which means it has no knowledge about target class.
From the accuracy of non-target data, we observe that all types of labels achieve high accuracy, indicating that using whichever label can learn the knowledge from task 𝒟'.
From Figure <ref>, we observe that the uniform and random labels fail to yield an overlapping solution space.
They suffer from the phenomenon of intransigence: they achieve considerably high accuracy on the target class and reach low accuracy in only a few classes, which means they retain residual knowledge of the target class.
Their performance is also unstable, which indicates that they are hard to learn.
Teacher label has low accuracy in most classes.
However, it fails in a few classes.
This is because the untrained model potentially has prediction bias on particular classes, making the accuracy on these classes relatively high <cit.>.
The debias teacher label has low testing accuracy for all classes, indicating that an overlapping solution space robustly exists.
§.§.§ Knowledge Preserver
To investigate the effectiveness of EWC, we compare with -C which continually trains the model with conventional loss.
As shown in Table <ref> (after unlearning), -C achieves comparable DB Acc as Retrain.
But it also shows a decrease in Test Acc by 62.14% when compared with Retrain.
This indicates that -C unlearns the target data, but also forgets some of the non-target data (catastrophic forgetting).
As for comparison, our proposed utilizes EWC training to alleviate catastrophic forgetting.
§ CONCLUSION AND FUTURE WORK
In this paper, we propose a novel federated unlearning method, named .
Inspired by active forgetting in neurology, unlearns by leveraging new memories to overwrite the old ones, which fundamentally overcomes the limitations of existing federated unlearning methods, without requiring the storage of historical updates or the estimation of data influence.
In general, we build based on the idea of incremental learning to achieve active forgetting.
Specifically, consists of two modules in each client, i.e., memory generator and knowledge preserver.
Memory generator produces knowledge-free and easy-to-learn fake labels based on teacher-student learning, and pairs them with the target data features to generate new memories.
Knowledge preserver continually trains the model with new memories, and adopts EWC training with our derived loss to alleviate catastrophic forgetting.
The target client triggers these two modules by submitting unlearning requests, while other clients follow the standard training process.
Experiments conducted on three benchmark datasets demonstrate that our proposed can not only efficiently unlearn the target data of a specific client from the global model, but also preserve the knowledge of non-target data.
§ HYPER-PARAMETERS
We empirically investigate the effect of two hyper-parameters, i.e., EWC training epoch and trade-off coefficient λ.
To properly achieve unlearning, our proposed framework is supposed to have i) low accuracy on target data (ideally 0% accuracy), and ii) relatively high accuracy on non-target data (results comparable to the learning process).
Therefore, we choose adequate hyper-parameters based on the above two metrics.
§.§ EWC Training Epoch
For the EWC training epoch, we investigate it in {1, 2, 3, 4, 5} for all datasets.
Target data
Figures <ref>, <ref>, and <ref> report the accuracy of target data.
From them, we observe that EWC training already achieves almost 0% accuracy with the minimal epoch option on all datasets.
Non-target data
Figures <ref>, <ref>, and <ref> report the accuracy of non-target data.
From them, we observe that EWC training achieves the best non-target accuracy with the minimal epoch option on all datasets, after which the accuracy gradually declines.
Summary
Therefore, we choose the minimal epoch options, setting EWC training epoch as 1.
§.§ Trade-off Coefficient λ
For λ, we investigate it in {0.1, 0.5, 1, 10, 50} for all datasets and report the results in Figure <ref>.
λ balances the trade-off between unlearning target data (low accuracy on target data) and not forgetting non-target data (high accuracy on non-target data).
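To make the role of λ concrete, the following minimal sketch shows a standard EWC-regularized objective (the paper uses its own derived loss, which may differ; model, old_params, fisher, and lam are assumed inputs, and the model is assumed to be PyTorch-style).

def ewc_regularized_loss(task_loss, model, old_params, fisher, lam):
    # task_loss: loss on the new memories; old_params/fisher: dicts keyed by
    # parameter name with the previous parameters and their Fisher information.
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (fisher[name] * (p - old_params[name]).pow(2)).sum()
    return task_loss + lam * penalty  # larger lambda preserves more old (non-target) knowledge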
Target data
As shown in Figures <ref>, <ref>, and <ref>, when λ≤ 10, our proposed framework achieves almost 0% accuracy on target data on all datasets.
The target data accuracy gradually grows when λ is increased further, which means the model does not completely unlearn the target data.
Non-target data
As shown in Figures <ref>, <ref>, and <ref>, the accuracy of non-target data increases with λ, but the rate of increase gradually diminishes.
Summary
Based on the empirical results in Figure <ref>, we set λ as 10, since it strikes a good balance.
§ OVERLAPPING VALIDATION
Manipulating one class at a time, we report the accuracy of non-target data in Table <ref> and the results on target data (CIFAR10 and CelebA) in Figure <ref>.
As shown in Table <ref>, all types of labels achieve high accuracy, indicating that the model learns the knowledge of task 𝒟' regardless of which label is used.
The results presented in Figure <ref> are consistent with the observations from the experiments on MNIST.
Specifically, we observe that uniform and random labels do not yield an overlapping solution space.
While the teacher label exhibits low accuracy across most classes, it does demonstrate a bias towards certain ones (relatively high accuracy).
|
http://arxiv.org/abs/2307.01544v1
|
20230704075436
|
SDSS-IV MaNGA: Ionization sources of diffuse extra-planar galactic medium
|
[
"Vera K. Postnikova",
"Dmitry Bizyaev"
] |
astro-ph.GA
|
[
"astro-ph.GA"
] |
1Sternberg Astronomical Institute, Lomonosov Moscow State University, Moscow, Russia
2Physics Department of Lomonosov Moscow State University, Moscow, Russia
3Apache Point Observatory and New Mexico State University, Sunspot, NM, 88349, USA
We explore the sources of ionization of the diffuse gas at different altitudes in galaxies
as a function of their stellar mass, luminosity, and specific star formation rate.
We use the MaNGA data from SDSS-IV data release DR16 together with
photoionization and shock ionization models provided by the 3MdB database.
Our sample comprises 239 edge-on galaxies, which makes our results statistically
valuable. We reach very high galactic altitudes with the help of spectra stacking.
We demonstrate that models of gas photoionization by a combination of young
OB-stars and hot low-mass evolved stars (HOLMES) describe the gas ionization
state of galaxies of all types on the diagnostic diagrams.
Nevertheless, shock waves may contribute to the gas ionization in massive
galaxies with passive star formation. We observe a general trend of a decreasing
fraction of the ionizing flux from OB-stars and a decreasing ionization parameter
with altitude, while the role of ionization by the HOLMES
increases. The biggest difference in the contribution from these types of ionizing sources
correlates with the specific star formation rate and with stellar masses
of galaxies. The HOLMES are the principal gas ionization sources in massive galaxies
with passive star formation, while OB-stars dominate the gas ionization in low-mass
galaxies with active star formation.
§ INTRODUCTION
<cit.> started investigating the diffuse ionized gas medium (DIG)
in the Milky Way galaxy, which led to its detection not only in the galactic midplane
but also at high galactic altitudes <cit.>. Later, the DIG was discovered in
other galaxies <cit.>. It was found that the DIG phase prevails
at several kpc above the galactic midplane <cit.>. A decade ago, the kinematics of
the neutral <cit.> and ionized gas <cit.> was well studied
in only a few nearby galaxies. Recent progress in massive multi-object spectral extragalactic
surveys enables us to study kinematics of ionized gas in and around objects of the
local Universe for statistically large samples of galaxies <cit.>.
At the same time, the ionization sources of the extraplanar gas at large galactic altitudes
(eDIG hereafter) are still not well understood for our own and other galaxies. On the one hand, the ionizing
photon flux from the OB-stars in the galactic midplane is sufficient to explain the amount of
ionized gas in galaxies with active star formation <cit.>. On the
other hand, the bright forbidden line ratios at high galactic altitudes in some galaxies
require taking into account evolved stars as the main source of the gas ionization
<cit.>. The shock wave ionization was also proposed
as a scenario for the explanation of the eDIG emission <cit.>.
A large data release DR16 <cit.> of the Mapping Nearby Galaxies at the
Apache Point Observatory (MaNGA, <cit.>), a part of the Sloan Digital Sky
Survey-IV (SDSS-IV, <cit.>) allows us to assemble a large sample of
objects with conveniently observable eDIG. Owing to the large number of galaxies, spectra
stacking lets us trace the eDIG to extremely large altitudes above
the galactic midplane, up to a dozen kpc. The main purpose of this study is
to advance the eDIG studies with the new, large MaNGA sample and new spectra
modeling results.
In the next section we describe the MaNGA data that we utilize and their
analysis. Then we describe the diagnostic diagrams used and the line ratio modeling.
Then we report our results and discuss them. Finally, we summarize our results.
We assume that the Hubble constant is 70 km s^-1 Mpc^-1 throughout our paper.
§ SDSS-IV MANGA DATA
§.§ The MaNGA Spectra
We employ data from the MaNGA survey released in the frames of DR16 of SDSS-IV.
The MaNGA survey was conducted with the 2.5-m Sloan telescope <cit.>
at the Apache Point Observatory with the resolution of R ∼ 2000 and in the
wavelength range of 3600–10300Å <cit.>. The survey followed
a sample of over 10,000 galaxies with a uniform distribution by stellar mass
at the median redshift of z ≈ 0.03 <cit.>.
The MaNGA obtained resolved two-dimensional spectral maps
for its objects via two fiber-fed spectrographs <cit.> with
the Integral Field Unit (IFU) heads <cit.> that consisted of
densely packed optical fibers allocated at the telescope's focal plane.
The fiber projection diameter was 2 arcsec, and the spatial filling factor
for the packed circular fibers was 56%. The full coverage of the observed
objects was achieved via 3-point dithering, which allowed the restoration of
contiguous spectral images, see <cit.>.
The MaNGA data reduction pipeline consists of two main stages.
The first stage is the Data Reduction Pipeline (DRP, <cit.>), which
delivers flux-calibrated spectra cubes homogenized to the uniform
angular resolution of about 2.5 arcsec (FWHM) placed on a regular
rectangular spatial grid with a 0.5 arcsec spaxel. The photometric
calibration precision was not worse than 5% <cit.>.
The second stage is the Data Analysis Pipeline (DAP, <cit.>),
that separated the absorption and emission spectra using the Penalized
Pixel Fitting (pPXF) method <cit.>, which
allowed one to estimate global parameters of galaxies, to obtain
two-dimensional maps of various astrophysical parameters,
cubes of co-added binned spectra, and best-fitting model spectra.
We make use of the cubes of emission spectra obtained
after subtracting the model continuum from the observed
spectra, maps of some emission line fluxes, and gas velocity
maps out of the MaNGA products. We also use some global
parameters derived from published SDSS photometry.
§.§ Making Masks for Selected Galaxies
We create spaxel masks to select only good-quality data for the subsequent
spectra stacking. We leave only spaxels that satisfy the following
criteria (a minimal computational sketch follows the list):
* the emission spectrum was successfully modeled
for this spaxel, with no bad data processing flags;
* the radial velocity in the emission line was successfully determined
for this spaxel, with no bad data processing flags;
* the signal-to-noise ratio (SNR) in the line was ≥ 3.0;
* the absolute value of the radial velocity for this spaxel was within
350 km s^-1 of the galactic center velocity.
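These cuts can be expressed as in the following minimal sketch (illustrative Python, not the actual pipeline; the per-spaxel arrays are assumed to come from the DAP products):

import numpy as np

def spaxel_mask(emission_flag, velocity_flag, line_snr, v_rad, v_center, v_max=350.0):
    good = (emission_flag == 0) & (velocity_flag == 0)   # no bad data processing flags
    good &= line_snr >= 3.0                              # SNR cut in the emission line
    good &= np.abs(v_rad - v_center) <= v_max            # velocity within 350 km/s of the galactic center
    return good                                          # boolean mask of accepted spaxels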
§.§ Analysis of the Sample
The primary selection of edge-on galaxies was performed via the visual inspection
of composite color images made by the SDSS survey. Our experience in selecting
edge-on objects for large catalogs <cit.> and for individual studies
of MaNGA objects <cit.> shows that the visibility of a dust lane projected onto
the central region of a galaxy suggests a high inclination of the galactic midplane
to the line of sight, ≥ 85°, which has also been confirmed
with calculations by <cit.>.
In turn, the high inclination ensures that we can study highly elevated gas
without its overlapping on bright star formation regions in the midplanes of galaxies.
Our result of the visual selection is a sample of 258 edge-on galaxies.
After the visual inspection, we notice that some galaxies should be rejected based
on their emission maps, maps of the equivalent width EW(Hα), and
maps of the stellar and gas velocities. Thus, our sample has a few objects in
which the gas and stars rotate in nearly orthogonal planes, resembling
galaxies with polar rings, see e.g. <cit.>. While they can be interesting
objects for further studies, they do not allow us to study the eDIG with the methods that
this paper uses. As a result, we reject 32 more
galaxies that have one or more of the following features:
* a large angle between the stellar and gas rotation;
* evident inconsistency between reported photometric parameters
and observed picture, e. g. when the effective radius R_eff is
too large with respect to the visible size of the object;
* the spaxel mask rejects the majority of galactic regions,
which can statistically bias the contribution of the remained
spaxels from this object to the stacked spectra.
The resulting sample comprises 239 galaxies. We show them as a mosaic in Figure 1.
§.§ The Spectra Stacking Procedure and the Emission Line Flux Estimation
Since we study faint regions far away from the galactic midplanes, the individual SNR
of their emission lines is often too low for the analysis. To increase the SNR, we stack
spectra from regions that are similar in their properties. To do this, we subdivide the sample into
a small number of groups, or bins, with relatively similar global parameters.
Along with studying the eDIG properties in different altitude bins, we also
incorporate a binning by the following galactic parameters:
* the integral stellar mass M_s estimated by the NASA-Sloan Atlas of galaxies
(NSA[<http://nsatlas.org>]);
* the galactic luminosity not corrected for the reddening L_Hα-R_eff(r),
which is the luminosity within one effective radius R_eff in the r-band, and proportional to the star formation rate (SFR), and also taken
from the NSA;
* the specific star formation rate sSFR estimated as
sSFR = L_Hα-R_eff(r) / 10^41.27 / M_s,
where the normalization is derived by <cit.>, <cit.>, <cit.>;
* the visual altitude of spaxels z/z_0 above the galactic midplane, normalized by
the exponential scale height; the latter was estimated as z_0=0.596 · R_eff· b/a,
where R_eff is the effective radius and b/a is the minor-to-major axis ratio
from a 2D fitting in the r-band taken from the NSA (a minimal computational sketch of sSFR and z_0 follows this list).
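The two derived quantities used above can be illustrated with the following minimal sketch (Python, with hypothetical input values):

def specific_sfr(L_ha, M_s):
    return L_ha / 10**41.27 / M_s        # sSFR = L_Halpha-R_eff(r) / 10^41.27 / M_s

def scale_height(R_eff, b_a):
    return 0.596 * R_eff * b_a           # z_0 = 0.596 * R_eff * (b/a)

# Example: normalized altitude z/z_0 of a spaxel at projected height z
z, R_eff, b_a = 3.0, 5.0, 0.2            # hypothetical values in consistent units
print(z / scale_height(R_eff, b_a))      # value used for the altitude binning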
We optimize the binning by selecting the number of bins and their borders
such that each bin, both by the galactic parameters and by the altitude,
has a contribution from at least 10% of all galaxies in the sample. After that, we co-add the radial-velocity-corrected emission spectra in each bin. We ensure that the corrected spectra have no other lines
within ± 7.5 Å of the line of interest. We also check and confirm that
the DAP subtracted the continuum so well that an additional continuum
correction is not required. The flux in the selected emission lines is
found via a simple integration of the line intensity profiles within ± 7.5 Å of the line centers. Then we correct the line fluxes for
extinction based on the Balmer decrement. Since the DAP provides uncertainties
of the emission intensities in each spaxel, we find the resulting flux
uncertainties via the standard error propagation method.
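The stacking and flux measurement can be illustrated with the following minimal sketch (Python; wave, spectra, and errors are assumed arrays on a common wavelength grid, not the actual survey pipeline):

import numpy as np

def stack_and_integrate(wave, spectra, errors, line_center, half_width=7.5):
    # spectra, errors: arrays of shape (n_spaxels, n_wave), already corrected for radial velocity
    stacked = spectra.sum(axis=0)                        # co-added spectrum of the bin
    stacked_err = np.sqrt((errors**2).sum(axis=0))       # propagated per-pixel uncertainty
    window = np.abs(wave - line_center) <= half_width    # +/- 7.5 Angstrom integration window
    dlam = np.gradient(wave)                             # wavelength step per pixel
    flux = np.sum(stacked[window] * dlam[window])        # simple integration of the line profile
    flux_err = np.sqrt(np.sum((stacked_err[window] * dlam[window])**2))
    return flux, flux_err                                # extinction correction via the Balmer decrement would follow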
As a result, we co-add the emission spectra for 239 galaxies in the optimal
bins by the general galactic parameters and by the galactic altitude
according to the procedure described above. Then we find the emission line
fluxes and their uncertainties. An example of our binning by the galactic altitudes
is shown in Figure 2.
§ DIAGNOSTIC DIAGRAMS AND THEORETICAL MODELS
§.§ Diagnostic Diagrams
In order to compare the data of observations with models, we
employ diagnostic diagrams that are based on relative intensities
of emission lines. They enable us to efficiently separate regions with different
physical conditions. The diagnostic diagrams are widely
used after a study by <cit.>, where the advantage of
two-dimensional classification was demonstrated and also the most
useful combinations of strong emission lines were introduced.
<cit.> also considered a set of line ratios
on diagnostic diagrams, and both works contributed to a set
of diagrams that are traditionally called BPT diagrams. <cit.>
explained and updated the classification with the help of models
that describe the gaseous medium.
An important addition to the diagnostic diagrams is the demarcation lines,
which show the borders between gas with different ionization mechanisms.
The most frequently used demarcation lines were introduced by
<cit.>, <cit.>, <cit.>. A more advanced approach to setting the
demarcation lines, utilizing gas dynamics, was considered
by <cit.>. The MaNGA data allow us to use a variety of
emission line combinations for the diagnostic diagrams, but in this
work we limit the study to the 3 traditional BPT diagrams (their coordinates are computed as in the sketch after the list):
* log([OIII]λ5007/Hβ) vs log([NII]λ6583/Hα);
* log([OIII]λ5007/Hβ) vs log([SII]λλ6716,6731/Hα);
* log([OIII]λ5007/Hβ) vs log([OI]λ6300/Hα).
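For concreteness, the BPT coordinates can be obtained as in the following minimal sketch (Python; the extinction-corrected line fluxes of a stacked bin are assumed to be given):

import numpy as np

def bpt_coordinates(oiii5007, hbeta, nii6583, sii6716_6731, oi6300, halpha):
    y = np.log10(oiii5007 / hbeta)           # common ordinate of all three diagrams
    x_nii = np.log10(nii6583 / halpha)       # [NII]-based abscissa
    x_sii = np.log10(sii6716_6731 / halpha)  # [SII]-based abscissa (sum of the doublet)
    x_oi = np.log10(oi6300 / halpha)         # [OI]-based abscissa
    return y, x_nii, x_sii, x_oi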
§.§ The 3MdB Models
Models of the gas ionization based on numerical computations
are in high demand for studies of the interstellar medium. At the same time,
running the computations is a time-consuming process, and they are
run and published mostly for limited and specific cases. This problem was mitigated by
the creation of the Mexican Million Models dataBase (3MdB) by
<cit.> and <cit.>. The 3MdB contains
pre-computed model emission line ratios that are organized as a
MySQL database. This approach allows one to save time and computation
resources and simplifies comparison between observations
and modeling.
At the moment the 3MdB consists of two major parts:
* The database of photoionization models 3MdB-p <cit.> computed with
the package Cloudy <cit.>, version C13 <cit.>.
This database considers several principal setups of the photoionization modeling
and includes the grids ”DIG_HR” — the models designed for the DIG (and eDIG)
description. They assume that the gas is ionized by a combination of
radiation from OB-stars and from HOt Low-Mass Evolved Stars (HOLMES).
The choice of these two main sources of the gas ionization is based on
a study of a well-known edge-on galaxy NGC 891 <cit.>.
One of the purposes of our study is to verify whether the generalization
of this assumption is suitable for all types of galaxies.
* The database of shock ionization models 3MdB-s <cit.> computed
with the help of the MAPPINGS astrophysical plasma modeling
code <cit.>, version V <cit.>.
Out of cases considered by this set of models, the most relevant are
the models ”Allen08” described by <cit.>.
Another goal that we pursue in this paper is to verify whether
the shock models can be relevant to the ionization of gas at different
galactic altitudes in various types of galaxies.
§.§ The Photoionization models ”DIG_HR”
The photoionization grids that describe the diffuse gas within
the 3MdB-p database have 4 model parameters.
* The flux from the OB-stars Φ_OB, photons/sec/cm^2.
It spans the range log Φ_OB=(3.5 ÷ 7.5)
with the step of 0.25 dex.
* The ionization parameter U=Φ_total/n_e/c, where Φ_total=Φ_OB+ Φ_HOLMES, n_e is the electron density, and c is the speed of light.
It spans the range log U=(-4.0 ÷ -3.0) with the step of 0.1 dex.
* The gas metallicity O/H defined as the ratio of the number of corresponding atoms.
It spans the range Δ O/H= (-1.0 ÷ 0.6)
with the step of 0.1 dex, where Δ O/H + 8.69 = 12.00 + log O/H.
* The nitrogen abundance N/O defined as the ratio of the number of corresponding atoms.
It spans the range log N/O=(-1.4 ÷ -0.2)
with the step of 0.1 dex.
It is important to notice that the HOLMES flux is fixed
at Φ_HOLMES= 8.4 · 10^4 photons/sec/cm^2 in all ”DIG_HR”
models.
All other metal abundances relative to O are fixed to the solar values except for the Mg, Si,
and Fe, which are decreased by 1 dex with respect
to the solar values.
§.§ The shock models ”Allen08”
The grids of models for the shock ionization ”Allen08”
also have 4 model parameters:
* Pre-shock gas density. We consider only the two values that are realistic
in our case: n=0.1 cm^-3 and n=0.01 cm^-3.
* The gas metallicity. We have to stick to the solar metallicities
for these grids because the lower metallicity cases, e. g. typical for
the Large and Small Magellanic Clouds, are not computed for very
low gas densities typical for the eDIG. We analyze possible behavior of
the models for the eDIG set of parameters below.
* The magnetic field. In the midplane of our own and other galaxies
the magnetic field is B ≃ 10 μ G, which decreases with altitude
down to B ≃ 5 μ G at several kpc above the midplane. We consider a
range of B between 1 and 10 μ G. Within this range the ”Allen08” grids
are computed for
B=(1.0, 1.26, 1.58, 2.0, 3.16, 4.0, 5.0, 10.0) μ G with
n=0.1 cm^-3 and for B=(1.0, 10.0) μ G with n=0.01 cm^-3.
* The shock wave velocity. We consider all provided values
of this parameter.
For the grids with n=0.1 cm^-3 the velocities range between
100 and 1000 km s^-1, and with n=0.01 cm^-3 they range between
200 and 1000 km s^-1, both with a step of 25 km s^-1.
§ RESULTS
§.§ The optimality of our binning
Our binning scheme fills all altitude bins with statistically valuable number
of representatives. When only the altitude binning is considered,
about 30% of all galaxies contribute to the highest, least populated bin.
When an additional galaxy parameter binning is added, some 10% of all galaxies
have representative data in the highest bin.
We ensure that our binning scheme is stable against the number of galaxies in
the parental sample, the number of altitude bins, and the bin altitude
border values by changing these values and inspecting the resulting diagrams
explained below.
§.§ Co-added Galactic Spectra in the Diagnostic Diagrams
Here we consider how the line ratios of our co-added spectra
overlap with the model photoionization and shock grids as a function of
the integral galactic parameters M_s, L_Hα-R_eff(r), and sSFR.
Note that we vary Δ O/H and log N/O for the
photoionization models depending on the galactic parameters, as
discussed below. The results are shown in Figures 3, 4, and 5.
§.§ The Diagrams
As we can see in Figures 3–5, some high-altitude data for the intermediate and high M_s,
for the low and high L_Hα-R_eff(r), and for the low and intermediate sSFR
fall onto the overlap region of the photoionization and shock grids. Nevertheless,
all our observed line ratios can be explained with the photoionization
grids only. The position of the points on the BPT diagrams can be
translated into the relative flux from OB-stars and the ionization parameter
via a grid interpolation procedure. Below we estimate these parameters
by neglecting the shock contribution to the gas ionization.
To begin with, let us consider our simplest binning scheme, by galactic altitude only.
We note that the O/H ratio should change with altitude, but using the photoionization
models for HII regions from <cit.> we conclude that the O/H variation
with altitude can be neglected within the calibration uncertainties
of ±0.1 dex. When the different BPT diagrams are compared with each other,
the best agreement for our data is achieved for Δ O/H = 0.0 and log N/O = -0.9,
which corresponds to the solar abundance.
Note that the models do not allow us to vary the S abundance, which leads to a systematic
bias of all diagrams that include S. To avoid this bias, for the regression below
we utilize only the BPT plots based on H, N, and O. We also exclude the very first,
in-midplane bin from the regression because the very high dust extinction there can
bias line ratios even for nearby lines in the spectra. The results of the grid interpolation and
regression are shown in Figure 6.
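The interpolation and regression steps can be illustrated with the following minimal sketch (Python with placeholder arrays; the actual 3MdB grid values and the measured line ratios are used in the paper):

import numpy as np
from scipy.interpolate import griddata

# Placeholder model grid: BPT coordinates and the corresponding model parameters.
gx, gy = np.meshgrid(np.linspace(-1.5, 0.5, 21), np.linspace(-1.5, 1.0, 26))
grid_x, grid_y = gx.ravel(), gy.ravel()
grid_log_phi_ob = np.linspace(3.5, 7.5, grid_x.size)   # placeholder log Phi_OB values
grid_log_u = np.linspace(-4.0, -3.0, grid_x.size)      # placeholder log U values

# Placeholder observed line ratios of the stacked bins and their altitudes.
obs_x = np.array([-0.5, -0.4, -0.3, -0.2])             # log([NII]/Halpha)
obs_y = np.array([-0.3, -0.1, 0.1, 0.3])               # log([OIII]/Hbeta)
z_over_z0 = np.array([1.0, 2.5, 4.0, 6.0])             # normalized altitudes of the bins

# Interpolate the model parameters at the observed BPT positions.
log_phi_ob = griddata((grid_x, grid_y), grid_log_phi_ob, (obs_x, obs_y), method='linear')
log_u = griddata((grid_x, grid_y), grid_log_u, (obs_x, obs_y), method='linear')

# Linear regression of an extracted parameter against altitude.
slope, intercept = np.polyfit(z_over_z0, log_u, 1)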
Next, we consider more complicated binning cases and include the additional galactic
parameters introduced above. Before the interpolation, we again use the results of HII
region modeling from <cit.> and ensure that the O/H
ratio would not change by more than ±0.1 dex in the diagrams
when M_s, L_Hα-R_eff(r), or sSFR is added to the binning
scheme; hence we conclude that we can neglect the O/H variation with the galactic altitude.
Then we make the plots similar to Figure 6 that also include additional galactic
parameters and find Δ O/H and log N/O for which the extracted
parameters best agree between all our BPT plots.
We notice that the best values of Δ O/H and log N/O for the
galaxies with different M_s (Figure 3) correlate with M_s in the same way
as the well-known O/H – M_s <cit.>,
N/O – M_s <cit.>, and N/O – O/H
<cit.> relations.
As for Figure 6, we derive the best-fitting model parameters
for each observed bin by interpolating
among the model grids on the BPT diagrams. Then, for the regression, we include the BPT diagrams with H, N, and O only and exclude the lowest galactic altitude points. The altitude distributions of the extracted
parameters and the linear regression lines are shown in Figures 7–9.
§ DISCUSSION
Before drawing firm conclusions about the possible contribution of shocks to the DIG ionization, we
need to verify how the shock grids would move in Figures 3–5 for gas metallicities lower than solar,
as expected for the eDIG at high galactic altitudes. As we
noted above, grids for subsolar metallicities were computed in the 3MdB only
with n= 1 cm^-3, which is much higher than typical eDIG densities. Therefore,
we consider the direction of trends for n= 1 cm^-3 and assume that the
trends will be the same for lower gas densities. The gas metallicities can be ordered
as (1) twice as solar, (2) solar, (3) subsolar with the Large Magellanic Cloud
values, and (4) subsolar with the Small Magellanic Cloud values. Comparing the model
grids in this order,
one can see that the shock grids move from the right to the left
in Figures 3–5, parallel to the abscissa axes. In this case the shock grids in Figures 3–5
will not overlap with the observed points any more than before, and the
conclusions are expected to remain the same for all low gas metallicities.
The eDIG emission line ratios on the BPT diagrams can be explained with
only two sources of ionization: hot massive stars and HOLMES. It is
important to stress that OB-stars alone cannot explain the line ratios
in the eDIG at high altitudes, and incorporating HOLMES is required.
Nevertheless, we notice that some sets of parameters, such as
z > 4.5 z_0 for high and intermediate masses M_s (Figure 3),
the lowest and highest L_Hα-R_eff(r) (Figure 4), or
the lowest and intermediate sSFR (Figure 5), place the observed
points in the areas where the photoionization and shock grids
overlap. This may suggest a non-negligible contribution
of shock waves to the eDIG ionization in those types of galaxies.
As we noted above, the assumption of two types of sources (OB-stars and HOLMES)
that ionize the gas in the ”DIG_HR” photoionization models was based on the study
by <cit.> that relies on a single galaxy, NGC 891, in which
the gas was traced up to 4 kpc above the galactic midplane. If we considered
NGC 891 with its parameters from <cit.>, <cit.>, <cit.>
as a part of our sample, it would fall into the high M_s bin, high
L_Hα-R_eff(r) bin, and intermediate bin by sSFR. Its altitudes
z would span up to (3÷ 4) z_0.
For comparison, parameters of the Milky Way from <cit.>, <cit.>, <cit.>
would place it to the high M_s bin, high L_Hα-R_eff(r) bin,
and intermediate sSFR bin — similar to NGC 891.
We conclude that the assumption made by <cit.>, namely that the eDIG on the BPT diagrams
at any altitude in NGC 891 can be described by photoionization models alone,
is in agreement with our Figures 3–5.
Therefore, the assumption that OB-stars and HOLMES are the main sources of the DIG
ionization, based on a single galaxy, NGC 891, works well for the majority of other galaxies
and for a wide range of galactic altitudes. Nevertheless, it does not take into account
a possible contribution from shocks, which reveals itself at the highest altitudes, in the most
massive galaxies, and especially in galaxies with the lowest sSFR. Still, the eDIG in our
galaxies can be described by the photoionization grids alone, and we do not have to take
the possible contribution of shocks into account.
In a general case (Figure 6), we observe increasing contribution of HOLMES and
decreasing contribution of OB-stars to the gas ionization with the galactic altitude.
At low altitudes OB-stars contribute 3–5 times more to the ionization flux than
HOLMES. At the highest altitudes, on the contrary, HOLMES contribute 2–3 times
more to the gas ionization than OB-stars. The ionization parameter decreases
systematically with the altitude. When the additional binning by the galactic
parameters is applied, the trends described above are kept, qualitatively.
When the sample is additionally binned by stellar mass M_s (Figures 3 and 7), at any altitude
the OB-flux contribution decreases when the mass increases. In the least massive
galaxies the ionization parameter drops significantly with the altitude.
The ionization parameter also increases with the mass at high galactic altitudes.
When the luminosity L_Hα-R_eff(r) is used for the binning (Figures 4 and 8),
at low galactic altitudes the OB-flux contribution increases with L_Hα-R_eff(r), but this trend
blurs at high altitudes.
In the case of sSFR binning (Figures 5 and 9) we see the clearest difference between the two main
sources of the gas ionization. At any galactic altitude increasing sSFR means increasing
contribution of OB-stars to the ionizing flux. Moreover, in the high sSFR galaxies OB-stars
contribute more than HOLMES at almost all galactic altitudes. Also the highest sSFR galaxies
demonstrate the most significant drop of the ionization parameter with altitude.
In the low-sSFR galaxies the OB-star contribution is less than that of HOLMES
at almost all galactic altitudes. This contribution varies slowly with altitude,
as does the ionization parameter.
This study is a logical extension of the work by <cit.>, which used a much smaller
sample of galaxies, and we find that our conclusions agree with those by <cit.>.
Thus, we observe a systematic growth of the forbidden emission lines with respect to the
Balmer lines as the altitude increases. Also, we confirm the suggestion
by Jones et al. that the eDIG properties depend on some integral properties
of the galaxies, e.g. on the stellar mass and the specific star formation rate.
We also notice that our results are in agreement with works by
<cit.>, <cit.> that conclude that HOLMES are important for
the DIG ionization. Unfortunately, a direct comparison with these works
is difficult because the galaxies considered by <cit.>, <cit.>
allow them to study the DIG in the galactic midplanes only.
The necessity of taking HOLMES into account for the description of
the emission lines ratios in galactic gas is a significant addition
to the previous assumption that OB-stars can explain the ratios in the majority
of galaxies made by <cit.>.
Promising conclusions were made in a study by <cit.> based
on panoramic spectroscopy results from the survey CALIFA.
<cit.> proposed to use the equivalent width (EW) of
Hα emission as an indicator of the gas ionization regime,
i. e. to distinguish the gas in DIG from HII regions.
A modest sample of edge-on galaxies presented by <cit.>
suggests that the extraplanar gas is in the DIG ionization
regime as well. We note that our approach to the gas analysis makes
a reliable estimation of EWs difficult: while the flux in
emission lines can be reliably measured even at the highest altitudes,
the stellar continuum is not well detected very far from
the galactic midplane. We therefore leave the EW analysis outside the scope of this work.
Finally, we would like to discuss the limitations of our approach to the analysis of the
gas state and ionization sources at high galactic altitudes. Although
the spectra stacking procedure helps us increase the range where the eDIG
can be studied, the stacked spectra are contributed by galaxies of different
kinds. The limited spatial resolution of MaNGA does not allow us to resolve
individual HII regions in the eDIG. We can only extrapolate our knowledge
of the smooth density distribution of gas at high galactic altitudes
gained from studies of our Galaxy. We also note that we do not
separate galaxies by dominating gas ionization mechanisms,
e. g. by the presence of active galactic nuclei. Also, we do not
consider any radial binning in this paper, which would help
us check whether powerful central sources can contribute to the
gas ionization. We also acknowledge a non-optimal determination
of the vertical scale height of the disks via the effective radii.
In this case, a two-dimensional photometric decomposition would
allow us to determine the vertical disk scale more reliably and to eliminate
inconsistencies for the case of galaxies with large bulges. We purposely
prefer to estimate the vertical scale from the effective radius for this
study in order to better compare results to the work by <cit.>.
We postpone the possible adjustments to our analysis mentioned above to
a future work.
§ CONCLUSIONS
We present results of a study of the ionization sources at different altitudes
in galaxies of various types. The key element of this work is using a large
sample of 239 true edge-on galaxies selected from a recent SDSS data release DR16,
which allows us to study emission lines at extremely high altitudes via a
spectra stacking procedure.
We compare the derived emission line ratios with results of modeling available from
the 3MdB database, which enables us to consider the gas ionization by a combination of
OB-stars and HOLMES, and also by shocks. We employ three BPT diagrams for
the comparison.
We find that the model of gas ionization that takes into account two types of sources,
OB-stars and HOLMES, adequately describes the observed distributions of emission line ratios
in galactic gas. Nevertheless, shock waves may be required for a better description
of DIG at high altitudes, especially at z > 4.5 z_0 in galaxies with intermediate
and high stellar masses or with low specific star formation rates. Interaction of gas in galaxies with circumgalactic medium
<cit.> can be a source of the shocks, which also affects the gas kinematics
at high galactic altitudes <cit.>.
We infer how the OB-stars contribution to the total ionizing flux and the ionization
parameter change with the altitude via the grid interpolation from the BPT diagrams
and find the following trends:
* In the galaxies of all types (Figure 6) the contribution of OB-stars decreases,
the contribution of HOLMES increases, and the ionization parameter decreases with the
galactic altitude.
* Increasing stellar mass (Figures 3 and 7) leads to a decreasing OB contribution and an increasing HOLMES
contribution at all galactic altitudes. The vertical gradient of the ionization parameter
also decreases in this case. In turn, at the highest galactic altitudes, increasing stellar mass
leads to an increase of the ionization parameter.
* At fixed low galactic altitudes, the growth of the Hα luminosity (Figures 4 and 8) leads to
an increasing contribution of OB-stars and a decreasing contribution of HOLMES. This trend blurs
at higher altitudes.
* The most prominent difference between the considered sources of the gas ionization
among all binning cases is seen for galaxies with different specific star formation rates (Figures 5 and 9).
With increasing specific star formation rate, the OB-star contribution increases and the HOLMES contribution decreases.
The vertical gradient of the ionization parameter also increases in this case. Moreover,
for galaxies with active star formation the contribution of OB-stars exceeds that of HOLMES
at almost all galactic altitudes. In galaxies with passive star formation the OB-star contribution is
less than that of HOLMES at almost all galactic altitudes, and it does not change significantly with
altitude, nor does the ionization parameter.
§ ACKNOWLEDGEMENTS
This is a preprint of the work accepted for publication in Astronomy Letters,
©, copyright 2023, belonging to the authors,
see http://pleiades.online/.
This study was partly supported by the Russian Science Foundation via
grant 22-12-00080, rscf.ru/project/22-12-00080/.
The authors thank the anonymous referee for his constructive feedback that improved the paper.
The study makes use of the SDSS-IV MaNGA data available from http://www.sdss.org/dr16/data_access/.
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS-IV website is www.sdss4.org.
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatário Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
[Ahumada et al.(2020)]ahumada20 Ahumada, R., Allende Prieto, C., Almeida, A., et al. 2020, , 249, 3
[Alarie & Morisset(2019)]alarie19 Alarie, A., & Morisset, C. 2019, Revista Mexicana de Astronomía y Astrofísica, 55, 377
[Allen et al.(2008)]allen08 Allen, M. G., Groves, B. A., Dopita, M. A., Sutherland, R. S., & Kewley, L. J. 2008, , 178, 20
[Andrews & Martini(2013)]andrews13 Andrews, B. H., & Martini, P. 2013, , 765, 140
[Baldwin et al.(1981)]bpt Baldwin, J. A., Phillips, M. M., & Terlevich, R. 1981, , 93, 5
[Belfiore et al.(2016)]belfiore16 Belfiore, F., Maiolino, R., Maraston, C., et al. 2016, , 461, 3111
[Binette et al.(1985)]binette85 Binette, L., Dopita, M. A., & Tuohy, I. R. 1985, , 297, 476
[Bizyaev & Kajsin(2004)]bizyaev04 Bizyaev, D., & Kajsin, S. 2004, , 613, 886
[Bizyaev et al.(2014)]bizyaev14 Bizyaev, D. V., Kautsch, S. J., Mosenkov, A. V., et al. 2014, , 787, 24
[Bizyaev et al.(2017)]bizyaev17 Bizyaev, D., Walterbos, R. A. M., Yoachim, P., et al. 2017, , 839, 87
[Bizyaev et al.(2022)]bizyaev22 Bizyaev, D., Walterbos, R. A. M., Chen, Y.-M., et al. 2022, , 515, 1598
[Blanton et al.(2017)]blanton17 Blanton, M. R., Bershady, M. A., Abolfathi, B., et al. 2017, , 154, 28
[Bundy et al.(2015)]bundy15 Bundy, K., Bershady, M. A., Law, D. R., et al. 2015, , 798, 7
[Cappellari(2017)]cappellari17 Cappellari, M. 2017, , 466, 798
[Cappellari & Emsellem(2004)]cappellari04 Cappellari, M., & Emsellem, E. 2004 , 116, 138
[Collins & Rand(2001)]collins01 Collins, J. A., & Rand, R. J. 2001, Gas and Galaxy Evolution, Astronomical Society of the Pacific Conference Proceedings, 240 (Ed. Hibbard, J. E., Rupen, M., & van Gorkom, J. H., San Francisco: Astronomical Society of the Pacific), 392
[Curti et al.(2017)]curti17 Curti, M., Cresci, G., Mannucci, F., et al. 2017, , 465, 1384
[Dettmar(1990)]dettmar90 Dettmar, R.-J. 1990, 232, L15
[Dopita et al.(2000)]dopita00 Dopita, M. A., Kewley, L. J., Heisler, C. A., & Sutherland, R. S. 2000, , 542, 224
[Dopita et al.(2016)]dopita16 Dopita, M. A., Kewley, L. J., Sutherland, R. S., et al. 2016, , 361, 61
[Drory et al.(2015)]drory15 Drory, N., MacDonald, N., Bershady, M. A., et al. 2015, , 149, 77
[Ferland et al.(1998)]ferland98 Ferland, G. J., Korista, K. T., Verner, D. A., et al. 1998, , 110, 761
[Ferland et al.(2013)]ferland13 Ferland, G. J., Porter, R. L., van Hoof, P. A. M., et al. 2013, Revista Mexicana de Astronomía y Astrofísica, 49, 137
[Flores-Fajardo et al.(2011)]floresfajardo11 Flores-Fajardo, N., Morisset, C., Stasińska, G., & Binette L. 2011, , 415, 2182
[Gunn et al.(2006)]gunn06 Gunn, J. E., Siegmund, W. A., Mannery, E. J., et al. 2006, , 131, 2332
[Haffner et al.(2009)]haffner09 Haffner, L. M., Dettmar, R.-J., Beckman, J. E., et al. 2009, Reviews of Modern Physics, 81, 969
[Hao et al.(2011)]hao11 Hao, C.-N., Kennicutt, R. C., Johnson, B. D., et al. 2011, , 741, 124
[Hoyle & Ellis(1963)]hoyle63 Hoyle F., & Ellis, G. R. 1963, Aust. J. Phys., 16, 1
[Jones et al.(2017)]jones17 Jones, A., Kauffmann, G., D'Souza, R., et al. 2017, , 599, A141
[Karachentsev et al.(2013)]karachentsev13 Karachentsev, I. D., Makarov, D. I., & Kaisina, E. I. 2013, , 145, 101
[Kauffmann et al.(2003)]kauffmann03 Kauffmann, G., Heckman, T. M., Tremonti, C., et al. 2003, , 346, 1055
[Kennicutt & Evans(2012)]kennicutt12 Kennicutt, R. C., & Evans, N. J. 2012, , 50, 531
[Kewley et al.(2001)]kewley01 Kewley, L. J., Dopita, M. A., Sutherland, R. S., et al. 2001, , 556, 121
[Kewley et al.(2006)]kewley06 Kewley, L. J., Groves, B., Kauffmann, G., & Heckman T. 2006, , 372, 961
[Lacerda et al.(2018)]lacerda18 Lacerda, E. A. D., Cid Fernandes, R., Couto, G. S., et al. 2018, , 474, 3727
[Law et al.(2015)]law15 Law, D. R., Yan, R., Bershady, M. A., et al. 2015, , 150, 19
[Law et al.(2016)]law16 Law, D. R., Cherinka, B., Yan, R., et al. 2016, , 152, 83
[Law et al.(2021)]law21 Law, D. R., Ji, X., Belfiore, F., et al. 2021, , 915, 35
[Levy et al.(2019)]levy19 Levy, R. C., Bolatto, A. D., Sánchez, S. F., et al. 2019, , 882, 84
[Licquia & Newman(2015)]licquia15 Licquia T. C., & Newman J. A. 2015, , 806, 96
[Marasco et al.(2019)]marasco19 Marasco, A., Fraternali, F., & Heald, G. 2019, , 631, 50
[Masters et al.(2016)]masters16 Masters, D., Faisst, A., & Capak, P. 2016, , 828, 18
[McMillan(2017)]mcmillan17 McMillan, P. J. 2017, , 465, 76
[Moiseev et al.(2011)]moiseev11 Moiseev, A. V., Smirnova, K. I., Smirnova, A. A., & Reshetnikov V. P. 2011, , 418, 244
[Morisset et al.(2015)]morisset15 Morisset, C., Delgado-Inglada, G., & Flores-Fajardo, N. 2015, Revista Mexicana de Astronomía y Astrofísica, 51, 103
[Mosenkov & et al.(2015)]mosenkov15 Mosenkov, A. V., Sotnikova, N. Ya., Reshetnikov, V. P., et al. 2015, , 451, 2376
[Murphy et al.(2011)]murphy11 Murphy, E. J., Condon, J. J., Schinnerer, E., et al. 2011, , 737, 67
[Pérez-Montero & Contini(2009)]perezmontero09 Pérez-Montero E., & Contini, T. 2009, , 398, 949
[Rand(2000)]rand00 Rand, R. J. 2000, , 537, L13
[Rand et al.(1990)]rand90 Rand, R. J., Kulkarni, S. R., & Hester, J. J. 1990, , 352, L1
[Reynolds et al.(1973)]reynolds73 Reynolds, R. J., Scherb, F., & Roesler, F. L. 1973, , 185, 869
[Reynolds(1991)]reynolds91 Reynolds, R. J. 1991, The Interstellar Disk-Halo Connection in Galaxies, IAU Symp., 144 (Ed. Bloemen H., Dordrecht: Kluwer Acad. Publ.), 67
[Robitaille & Whitney(2010)]robitaille10 Robitaille T. P., & Whitney, B. A. 2010, , 710, L11
[Shaw & Gilmore(1989)]shaw89 Shaw M. A., & Gilmore, G. 1989, , 237, 903
[Slavin et al.(1993)]slavin93 Slavin, J. D., Shull, J. M., & Begelman M. C. 1993, , 407, 83
[Smee et al.(2013)]smee13 Smee, S. A., Gunn, J. E., Uomoto, A., et al. 2013, , 146, 32
[Sutherland et al.(2018)]sutherland18 Sutherland, R., Dopita, M., Binette, L., & Groves, B. 2018, ”MAPPINGS V: Astrophysical plasma modeling code”, Astrophysics Source Code Library, ascl:1807.005
[Swaters et al.(1997)]swaters97 Swaters, R. A., Sancisi, R., & van der Hulst J. M. 1997, , 491, 140
[Tremonti et al.(2004)]tremonti04 Tremonti, C. A., Heckman, T. M., Kauffmann, G., et al. 2004, , 613, 898
[Veilleux & Osterbrock(1987)]veilleux87 Veilleux S., & Osterbrock, D. E. 1987, , 63, 295
[Wake et al.(2017)]wake17 Wake, D. A., Bundy, K., Diamond-Stanic, A. M., et al. 2017, , 154, 86
[Westfall et al.(2019)]westfall19 Westfall, K. B., Cappellari, M., Bershady, M. A., et al. 2019, , 158, 231
[Yan et al.(2016a)]yan16a Yan, R., Bundy, K., Law, D. R., et al. 2016, , 152, 197
[Yan et al.(2016b)]yan16b Yan, R., Tremonti, C., Bershady, M. A., et al. 2016, , 151, 8
[York et al.(2000)]york00 York, D. G., Adelman, J., Anderson, J. E., et al. 2000, , 120, 1579
[Zhang et al.(2017)]zhang17 Zhang, K., Yan, R., Bundy, K., et al. 2017, , 466, 3217
|
http://arxiv.org/abs/2307.00417v1
|
20230701200030
|
Aggregation Consistency Errors in Semantic Layers and How to Avoid Them
|
[
"Zezhou Huang",
"Pavan Kalyan Damalapati",
"Eugene Wu"
] |
cs.DB
|
[
"cs.DB",
"cs.HC"
] |
[email protected]
Columbia University
[email protected]
Columbia University
[email protected]
DSI, Columbia University
Analysts often struggle with analyzing data from multiple tables in a database due to their lack of knowledge on how to join and aggregate the data.
To address this, data engineers pre-specify "semantic layers" which include the join conditions and "metrics" of interest with aggregation functions and expressions.
However, joins can cause "aggregation consistency issues".
For example, analysts may observe inflated total revenue caused by double counting from join fanouts.
Existing BI tools rely on heuristics for deduplication, resulting in imprecise and challenging-to-understand outcomes.
To overcome these challenges, we propose "weighing" as a core primitive to counteract join fanouts.
"Weighing" has been used in various areas, such as market attribution and order management, ensuring metrics consistency (e.g., total revenue remains the same) even for many-to-many joins.
The idea is to assign equal weight to each join key group (rather than each tuple) and then distribute the weights among tuples. Implementing weighing techniques necessitates user input; therefore, we recommend a human-in-the-loop framework that enables users to iteratively explore different strategies and visualize the results.
Aggregation Consistency Errors in Semantic Layers and How to Avoid Them
Eugene Wu
=======================================================================
§ INTRODUCTION
Cloud technology has enabled enterprises to store unlimited amounts of tables in data warehouses.
However, analysts with domain knowledge of the business but with limited expertise in data management face challenges when analyzing metrics of interest across multiple tables. In particular, they may find it difficult to determine which tables to join, what the join conditions are, and how the attributes of these tables relate to the metrics of interest <cit.>.
To bridge this knowledge gap, one popular and easy approach is to denormalize (join) tables into a wide table <cit.>, which is usually carried out by data engineers offline.
This simplifies the data analysis process for analysts, who only need to work with one table as the single source of truth.
However, joins cause spurious duplications and drop rows due to the lack of matching data or missing values <cit.>; these issues are difficult to detect, understand, and ameliorate. We illustrate them with the following example:
The example retail business database <Ref> and join graph <Ref> contain information on the user ad view and purchase history. There are two fact tables: Ad_view and Purchase. The standard approach of denormalization <cit.> can lead to incorrect results. The denormalized relations are defined as follows:
CREATE VIEW Denormalized_Ad_View AS SELECT *
FROM V JOIN U ON V.uid = U.uid
JOIN A on V.aid = A.aid;
CREATE VIEW Denormalized_Purchase AS SELECT *
FROM H JOIN U ON H.uid = U.uid
JOIN I ON H.iid = I.iid
JOIN P ON H.pid = P.pid;
Let us consider three simple questions and their SQL queries:
Q1: What is the total cost of ads from all sources?
SELECT SUM(A.cost) FROM Denormalized_Ad_View;
This query above is incorrect because Ad_view duplicates the cost for each view. In the table below, Google's cost is double-counted in red:
We should instead aggregate on only A: SELECT SUM(A.cost) FROM A
Q2: What is the total revenue from the purchased items?
SELECT SUM(I.price) FROM Denormalized_Purchase;
Unlike ad cost, item price should be duplicated for each purchase in order to compute the total revenue. The above query is incorrect because there may be missing payments, and joining on NULL values removes the tuple. One fix is to use outer joins.
Q3: What is the total revenue from different ad sources?
SELECT A.source, SUM(I.price)
FROM V FULL OUTER JOIN U ON V.uid = U.uid
FULL OUTER JOIN A on V.aid = A.aid
FULL OUTER JOIN H ON H.uid = U.uid
FULL OUTER JOIN I ON H.iid = I.iid
FULL OUTER JOIN P ON H.pid = P.pid
GROUP BY A.source;
The query above is incorrect because of the one-to-many relationship between a user's revenue (Q2) and their Ad Views; a full outer join would result in duplications and an increase in the total revenue.
One approach widely adopted by the marketing domain <cit.> is to weigh each ad view based on its “importance”, and ensure that the sum of weights for each join key value (uid) is 1 to counteract join fanouts. The choice of weights is necessarily based on the analyst's domain knowledge: one analyst may believe that the first ad view is the most important, while another prioritizes the last one <cit.>. The following illustrates weights where all ad views are equally important:
For each user, the weights are uniform among ad views. For instance, uid=1 has weights of 1/2=0.5 as there are 2 views.
Note that the total revenue from User and Ad View are the same (70).
Therefore, analysts can examine how each ad source contributes to revenue based on their assumptions, despite the one-to-many relationship.
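To make the weighing mechanics concrete, the following sketch (Python with hypothetical toy data, not the paper's Table 1) assigns each ad view a weight of 1 divided by that user's number of views, so the weighted per-source revenues sum to the unweighted total revenue:

import pandas as pd

# Hypothetical toy data, for illustration only.
ad_view = pd.DataFrame({"uid": [1, 1, 2], "source": ["Google", "Facebook", "Google"]})
revenue = pd.DataFrame({"uid": [1, 2], "revenue": [50, 20]})   # per-user revenue (Q2 level)

# Weigh: each uid group gets total weight 1, split uniformly across its ad views.
ad_view["weight"] = 1.0 / ad_view.groupby("uid")["uid"].transform("count")

# Join and aggregate; the weights counteract the one-to-many fanout.
joined = ad_view.merge(revenue, on="uid")
per_source = (joined["weight"] * joined["revenue"]).groupby(joined["source"]).sum()

print(per_source)         # weighted revenue attributed to each ad source
print(per_source.sum())   # equals the unweighted total revenue of 70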
Such an idea of "weighing" has been used across multiple domains, as summarized by Kimball and Ross <cit.>: In order management, freight charges are allocated (or weighed) to a line's products based on their sizes. In financial services, personal incomes are weighed across individual accounts. In accounting, payments are weighed across organizations according to ownership.
From the above example, we see that the correctness of the aggregates hinges upon the selection of tables to join, deduplication methods, null handling, and weighing designs. Unfortunately, the choices depend on the specific analysis query and the analyst's understanding of the issue at hand, rendering a singular static interpretation unsuitable for all analyses. Nevertheless, the denormalized wide table is still considered the conventional method.
To overcome denormalization's limits and tailor the decisions to specific queries, the industry has developed the notion of a "semantic layer" <cit.> as a way of decoupling the needs and expertise of data engineers and analysts.
In the offline phase, a data engineer designs a wide table in the form of a join graph, along with appropriate metrics (i.e., aggregation functions over expressions) pertinent to a given business problem (by discussing the requirements from analysts). In the online exploration phase, analysts use BI tools to specify exploratory queries over the metrics with grouping and filtering. Behind the scenes, the semantic layer employs heuristics to determine the tables to join, deduplication methods, and null handling specifically for each query.
For example, Tableau "relationships" <cit.> consist of a join graph (<Ref>), and analysts treat it as a base table and create visualizations by dragging and dropping attributes onto the canvas to explore different metrics, such as Q2-3 in <Ref>.
Unfortunately, to decide the tables to join for each query, Tableau arbitrarily selects those in the minimum sub-tree of the join graph that cover all referenced attributes,
which leads to subtle errors.
For example, Q2 only references attribute I.price. Tableau, therefore, sums the prices over I and computes the total revenue as SUM(I.price)=85 (<Ref>), even though I.price should be duplicated for each purchase (by joining with H) to compute total revenue correctly.
For Q3, Tableau generates results "Google" 50 and "FaceBook" 20, which internally applies specialized deduplication <cit.> to handle the join fanout. However, the deduplication mechanism is not surfaced to the analysts, even though it impacts query results and may not align with the analysts' intentions.
Tableau is not alone in applying incorrect heuristics. Of the 5 BI tools and semantic layers we surveyed,
2 of them determine the tables to join in a heuristic manner that's not disclosed to the analyst, which leads to incorrect outcomes for Q2. Additionally, for Q3 with many-to-many relationships, 2 of these BI tools don't support it, and the remaining 3 apply arbitrary deduplication rules solely based on heuristics.
They are therefore all susceptible to correctness errors, and hide from the analyst the ability to interpret and control how the final metrics in a query are decided upon.
How do analysts make informed decisions about how to handle duplication problems that arise from joins? How can analysts best decide to include a table in a query, duplicate or deduplicate tuples, or reweigh duplicates, to correctly compute their desired business metrics?
This paper formalizes these issues and proposes “weighing” as the core primitive to address them, whose idea is as follows:
Naive query treats every tuple as having an equal "weight" when aggregated. However, with join fanout, the "weights" of join key groups (calculated as the sum of the "weights" of their tuples) can be amplified, which may inadvertently bias the join results. To overcome this, we "weigh" tables by assigning equal "weight" to each join group (instead of each tuple) and then distributing the weight among the tuples within the group. Consequently, tuple values are aggregated based on these weights to counteract join fanouts.
Previous BI tools use deduplication, which can be considered a special case of "weighing" with weights of 0 or 1 and no fractional values.
We demonstrate how reweighing can resolve fanout-induced errors, and how instances of this formulation have been used in existing problems such as market attribution, unbiased sampling, and ML fairness.
A key challenge that necessitates a human-in-the-loop approach is that the appropriate weighing cannot be determined offline, but is dependent on the specific wide table, query, and even analyst needs. For instance, the weights for Q3 may vary depending on different analysts' belief in the importance of different ad views—Shannon may only care about the ad user viewed last, while Erika may care about all ad views equally.
To help users specify the weights and visualize the outcomes, we propose a human-in-the-loop framework. We observe that visualizing each table one-by-one makes it difficult to contextualize the aggregates across joins, and presenting the full join can be overwhelming. Our framework addresses these issues by (1) enabling users to decide the weights iteratively along the join path, (2) visualizing partial aggregates that summarize the current aggregates for the decided weights (instead of computing the full join), and (3) providing an interface with common weighing options to declaratively specify the weights.
§ BACKGROUND
This section surveys the existing academic literature for pitfalls that can arise when aggregating over join graphs,
and identifies the missing gaps and limitations of existing approaches.
We further survey prominent BI tools for how they address these pitfalls.
§.§ Pitfalls in Join Aggregation Query
"Summarizability" <cit.> studies the correctness of querying aggregated fine-grained values at a coarser level. Within snowflake-schema multidimensional models, fact values are usually fine-grained, while aggregation queries that join and group dimensional attributes are considered coarse-grained.
To address the fanout issues in Q1 and Q2, they require users to specify the "level of detail" of metrics, which determines what should be joined for duplications before the values are aggregated. They cover other additive issues not directly related to join fanouts. For example, age should not be added, and population can be added over geographical areas but not over time. These are complementary to this work.
However, "summarizability" is too strict for practical exploratory queries.
For example, when there are missing join keys in dimension tables, "roll up" and "drill down" are considered "incomplete" and therefore not summarizable, which can be addressed by outer join in practice <cit.>. Besides, many-to-many joins are considered "non-strict" and also not summarizable, but are important in applications like market attribution <cit.> and order management <cit.>. In such cases, weighing can be used (<Ref>) to counteract join fanouts.
The same join-induced issues arise in domains beyond BI. Previous analytics works over joins, such as ML <cit.> and sampling <cit.>, are susceptible to bias <cit.> and correctness errors when computed over the materialized join result. Consider the following:
Consider the database shown in Table <ref>. Suppose analysts aim to train a model to predict user purchases or to sample tuples for insights. They want to include features from the join of all three tables A ⋈ U ⋈ H for enrichment. Despite the even gender distribution in U (1 of each), the full join produces 6× more tuples for females due to fanout. Consequently, this creates a potential imbalance for ML training and a bias for the sampled data.
A common approach to address data imbalance in the single table regime is via weighing <cit.>. We extend weighing to analytics over joins.
Another approach to address join fanout is to pre-aggregate (e.g., averaging) <cit.> to reduce a N-M relationship into a N-1 relationship. However, this introduces errors for long join paths (e.g., the average of averages is not the total average), and so BI tools that apply pre-aggregation only support a single join <cit.>.
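As an illustration of the pre-aggregation strategy (a sketch with hypothetical data, not any specific BI tool's implementation), the many-to-many side is first collapsed to one row per join key before the join:

import pandas as pd

# Hypothetical purchase and rating tables with an N-M relationship on iid.
purchases = pd.DataFrame({"uid": [1, 1, 2], "iid": [10, 11, 10]})
ratings = pd.DataFrame({"iid": [10, 10, 11], "rating": [4, 2, 5]})

# Pre-aggregate ratings to one row per iid (N-M becomes N-1), then join.
avg_rating = ratings.groupby("iid", as_index=False)["rating"].mean()
enriched = purchases.merge(avg_rating, on="iid")
print(enriched)

# Caveat from the text: averaging averages along longer join paths does not equal
# the overall average, which is why tools using this strategy support a single join.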
§.§ Current Business Intelligence Tools
Modern BI tools offer different ways to pre-define join graphs and metrics offline, and then allow for online exploratory analysis. For join graphs and metrics, some automatically infer default joins from the database schema, while others offer GUI, SQL, or custom languages for user inputs.
During online exploration, analysts query the join graphs as a wide table,
and the system dynamically determines the appropriate joins to avoid logical pitfalls. In all cases, they vary in how they handle different fanout conditions (1-1, 1-N, N-M) and NULLs. We now examine how popular BI tools address the above pitfalls for Q1-3. Our findings are summarized in <Ref>.
* Blend is adopted by both Tableau and Looker <cit.>.
The user specifies the join condition between two tables and the query, and the BI tool executes it using pre-aggregation before the outer join. Blend is limited in two ways: (1) it only supports queries over a join path of length 2 (and not general join graphs), and therefore doesn't support Q3 (whose join path has 5 tables); (2) it doesn't require an explicit definition of the metrics, and the decision to join is based on whether attributes are referenced. For Q2, it avoids the join because the referenced attribute (I.price) does not come from H, which is incorrect since the join is necessary to duplicate prices for the total revenue metric.
* Relationship is supported by Tableau and PowerBI, where users specify the join graph (tables and join conditions) and can query over it directly. However, both tools make the same error as Blend for Q2 as they don't require explicit metric definitions. For Q3, PowerBI generates errors as it doesn't support many-to-many relationships, while Tableau applies arbitrary deduplication that impacts query results but may deviate from the analyst's intentions without even notifying the analyst.
* Looker <cit.> and Malloy <cit.> use a SQL-based declarative language to model a source table to query against, which defines both join graph and metrics. For instance, the following is used for Q2, which defines the join condition with I to construct a join graph, along with the total revenue as metric (sum(I.price)).
source: H is table('PurchaseHistory')
join_one: I is table('Items') on iid = I.iid
measure: total_revenue is sum(I.price)
Note that there are different options for selecting the source table. In the example, H is the source, but I could also be the source. The choice of the source table determines the metric duplication level: in this example, because the total revenue metric is sourced from H, it is aggregated after H ⋈ I.
For Q1 and Q2, the correctness depends on if the correct source table is used for duplication (A for Q1 and H for Q2 in the Source column). For Q3, choosing different source tables yields different results, and both BI tools apply arbitrary deduplication without user interactions.
* Finally, Sigma Computing supports queries over attributes across two tables through lookup join <cit.>, which employs pre-aggregation and, like Blend, is not applicable to Q3. Unlike Blend but similar to the model approach, users can explicitly specify whether the aggregation is performed before or after the join for Q1-2.
In summary, current BI tools frequently depend on implicit assumptions for deduplication, which can be dangerous when those assumptions are wrong, and the resulting errors are challenging to identify.
§ WEIGHING TO THE RESCUE
In this work, we argue that users should be able to easily specify reweighing policies in order to avoid correctness errors when performing analytics over joins.
This section first defines the consistency errors in the context of semantic layers that store pre-defined join graphs and associated metrics. Our solution then builds on semi-ring aggregation. Although semi-ring aggregation is traditionally used to accelerate join-aggregation queries by pushing aggregations through joins, this push-down computes partial aggregates that are also useful for reasoning about the appropriate reweighing decisions.
§.§ Problem
We consider the setting where a data engineer has pre-defined (1) the metric as an aggregation function and (2) the duplication as an acyclic join; the aggregation and join together constitute a base query Q_base=γ(R_1 ⋈⋯⋈ R_n) (without group-by and selection).
For example, the base query for Q2 can be represented as Q_base=γ_SUM(I.price)(H ⋈ I).
The analyst then composes an SPJA query Q that uses the same metric but may also include selection and group-by expressions referencing attributes in additional tables, which then need to be brought in via left outer joins.
For instance, to answer Q3, they may issue Q=γ_A.source,SUM(I.price)(H ⋈ I ⋈ U ⋈ V ⋈ A).
We next formalize the "consistency" error.
To facilitate understanding, we consider "sum" as the aggregation in this section, and we will extend it to arbitrary aggregation when introducing the solution framework.
The "consistency" error refers to
the inequality between the sum of the groupby [Selection removes data and the inequality is expected. For the problem definition purposes, we regard "selection" as a groupby based on whether the selection predicate is satisfied and post-process it to obtain the final selected value.] results for Q and Q_base:
Q includes additional joins for enriched group-bys and selections, and the total measurement (e.g., revenue or expense) can be amplified compared to Q_base. Such inconsistently larger results have been complained about across different BI tools <cit.>, and we want to help analysts understand and avoid them.
Given the base query Q_base=γ_SUM(R_1 ⋈⋯⋈ R_k) and the exploration query Q=γ_gb,SUM(σ(R_1 ⋈⋯⋈ R_k ⋈⋯⋈ R_j)) that includes additional joins, selections, and group-bys, the objective is to find the re-weighing function W that weighs (multiplies) the sum when joined (e.g., the revenue is weighed for Q3 in <Ref>), such that Q^*=γ_gb(σ(R_1 ⋈⋯⋈ R_k ⋈ W(R_k+1) ⋈⋯⋈ W(R_j))) is:
* Consistent:
If Q doesn't have a selection σ, then γ(Q^*)= Q_base.
If Q has a selection σ, let ¬σ be its negation and γ(Q^*_¬σ)=γ_gb(¬σ(W(R_1) ⋈⋯⋈ W(R_j))).
Then γ(Q^*) + γ(Q^*_¬σ) = Q_base.
* User-Directed: There could be various valid weighing strategies. For example, in Q3, different analysts may assign different weights based on their opinions about the importance of the ad views. The weighing should be transparent and understandable to analysts, who should be able to guide based on their interpretation of the data and domain knowledge.
We observe that the re-weighing function W is applied only to the relations that appear solely in the exploratory query (R_k+1, …, R_j) to mitigate fanout effects, but not to those in the base query. The duplication in the base query is assumed intentional, to compute the total metric correctly. Although there might be other reasons to weigh the base-query relations, such as addressing an imbalanced distribution <cit.>, we consider these as a preprocessing step, separate from W.
Scope. Many previous works <cit.> have studied syntactic and semantic issues related to SQL query correctness, which remains challenging. We specifically focus on the amplified or reduced aggregate results caused by join fanout and ensure "consistency" as formally defined above. We hope that ensuring consistency helps users identify semantic errors and achieve final query correctness.
§.§ Semi-ring Aggregation Preliminary
Semi-ring aggregation breaks down an aggregation into two fundamental operations: addition (e.g., to aggregate values) and multiplication (e.g., to weigh values). It is highly expressive and covers nearly all common aggregations, such as sum, count, average, and max, with the benefit of efficient partial aggregation. Other aggregations, such as median, can also be expressed using semi-ring aggregation (by exhaustively tracking all values in a long list), but derive less benefit from partial aggregation from both performance and interpretability perspectives.
Data Model.
We use the traditional relational data model: Given relation R, let A be an attribute, dom(A) be its domain, S_R=[A_1,⋯,A_n] be its schema, t∈ R be a tuple of R, and t[A] be the value of attribute A in tuple t. The domain of R is then the Cartesian product of attribute domains, i.e., dom(R) = dom(A_1)×⋯× dom(A_n).
Semi-ring Aggregation Query. We begin by extending relations with annotations <cit.>, which map each tuple t∈ R to an element of a commutative semi-ring (D, +, ×, 0, 1), where D is a set, + and × are commutative binary operators closed over D, and 0/1 are the zero/unit elements. Annotations are useful for query optimizations based on algebraic manipulation. Different semi-ring definitions support various aggregation functions, from standard statistical functions to machine learning models. For example, the natural numbers semi-ring (ℕ,+,×,0,1) allows for integer addition and multiplication, and supports the count aggregate. For an annotated relation R, let R(t) represent the annotation of tuple t.
Tuples in the domain but not in the table are assumed to have 0 as annotations.
Aggregation queries can now be redefined over annotated relations by translating group-by and join operations into + and × operations over the semi-ring annotations, respectively:
(γ_𝐀 R)(t) = ∑{R(t_1) | t_1 ∈ R , t = π_𝐀(t_1)}
(R ⋈ T)(t) = R(π_S_R(t)) × T(π_S_T(t))
(1) The annotation for each group-by result in γ_𝐀 R is the sum of the annotations of all tuples in its input group. (2) The annotation for each join result in R ⋈ T is the product of the semi-ring annotations of its contributing tuples in R and T. This can be extended to outer joins, where a non-matching tuple retains its original annotation and has NULLs for the remaining attributes.
To translate an aggregation function like SUM into semi-ring aggregation, we need to determine (1) the semi-ring and (2) the annotations of the initial tables. For example, in the case of SUM(I.price), the semi-ring is the real numbers with standard addition and multiplication. For the annotations, only I has each tuple t annotated with t[price]; all other tables have all annotations equal to 1.
Partial Aggregation.
The key optimization in factorized query execution <cit.> is to distribute aggregations (additions) through joins (multiplications).
Consider γ(H ⋈ V) under the count semi-ring, where H ⋈ V is a many-to-many join. Part of the aggregation can be pushed down below the join, as illustrated in <Ref>.
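The following minimal sketch (Python dictionaries standing in for annotated relations; the table contents are hypothetical) illustrates both ideas: annotations combined with × under join and + under group-by, and the push-down of partial aggregates through a many-to-many join under the count semi-ring.

# Annotated relations under the count semi-ring: each tuple maps to an integer
# annotation; join multiplies annotations, group-by sums them.
H = {("u1", "i1"): 1, ("u1", "i2"): 1, ("u2", "i1"): 1}     # (uid, iid) -> count
V = {("u1", "ad1"): 1, ("u1", "ad2"): 1, ("u2", "ad3"): 1}  # (uid, ad)  -> count

# Join-then-aggregate: pair up tuples that share the uid and sum the products.
full = sum(h * v for (hu, _), h in H.items()
                 for (vu, _), v in V.items() if hu == vu)

# Push-down: partially aggregate each side by the join key first, then
# multiply the partial counts -- the same result with less work.
def partial(rel):
    agg = {}
    for (uid, _), ann in rel.items():
        agg[uid] = agg.get(uid, 0) + ann
    return agg

Hp, Vp = partial(H), partial(V)
pushed = sum(Hp[u] * Vp.get(u, 0) for u in Hp)
assert full == pushed == 5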
§.§ Consistency through Weighing
Several aggregation settings, such as min/max <cit.> and probabilistic databases <cit.>, provide consistent results even with many-to-many joins. We highlight the key property that ensures their consistency, and generalize it to other aggregations through reweighing.
Given Q_base=γ(R_1 ⋈⋯⋈ R_k), consider the query with an additional join γ(Q)=γ(R_1 ⋈⋯⋈ R_k ⋈ R_k+1) = γ(γ_J(R_1 ⋈⋯⋈ R_k) ⋈ γ_J(R_k+1)), where J is the join key between R_k+1 and the rest of the relations.
Then, the key sufficient condition for consistency is that:
∀ j∈ Dom(J), γ_J(R_k+1)(j) = 1
The consistency γ(Q) = Q_base is ensured by the multiplicative identity property of the 1 element of the semi-ring, and the argument applies recursively to an arbitrary number of joins. It also holds with additional group-bys (because the groups are summed) and selections (as special group-bys on whether a tuple is selected or not).
Previous applications that support many-to-many joins satisfy this property. For probabilistic tables, each conditional probability table is constructed so that the probabilities conditioned (selected) on the join key sum to 1. In the case of min/max aggregation, the relations without the attribute to maximize have all tuples annotated with 1, and the min/max semi-ring has max as its addition operator ⊕, which ensures 1 ⊕ ... ⊕ 1 = max(1,..., 1) = 1.
For other aggregation queries such as sum and count, such a property is not natively satisfied and it's necessary to weigh relations. For instance, in the context of market attribution of Q3, we can assign equal weights to all tuples in V with the same uid. However, assigning weights is not one-time, as different analysts may have different opinions on how to weigh the ad views. Therefore, we next introduce a framework to assist users in assigning weights.
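As a small sketch of the sufficient condition above, the helper below checks that the weights (sum-semi-ring annotations) of a weighed relation sum to 1 within every join-key group; the tuples and the numeric tolerance are illustrative assumptions.

import math

# Hypothetical weighed relation R_k+1: (join key, weight) pairs, e.g. ad views per uid.
weighed = [("u1", 0.5), ("u1", 0.5), ("u2", 1.0), ("u3", 0.4), ("u3", 0.6)]

def satisfies_condition(rows, tol=1e-9):
    """Check gamma_J(R_k+1)(j) = 1 for every join key j."""
    sums = {}
    for key, w in rows:
        sums[key] = sums.get(key, 0.0) + w
    return all(math.isclose(s, 1.0, abs_tol=tol) for s in sums.values())

assert satisfies_condition(weighed)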
§.§ Human-in-the-loop Weighing
To make informed decisions, users need to contextualize aggregation within the larger join graph and be capable of specifying weights for the joined table. However, visualizing each table individually makes it challenging to contextualize aggregates across joins, while presenting the full join can be overwhelming.
A key design decision for understanding the aggregation relationship is utilizing partial aggregates. These partial aggregates progressively join with new tables and compress the results by aggregating out attributes not in use, retaining only the attributes necessary for users to comprehend and design weights.
The challenge in specifying weights lies in the worst-case scenario, where users need to assign a weight for each tuple, which is impractical. Instead, we identify common and efficient special cases that simplify the process for the user. We aim to determine the minimum information required, given reasonable assumptions for specific settings that are clearly stated and easier for the analyst to specify. This section delves into these details and introduces an interface.
Interface. Our interface comprises two panels illustrated in <Ref> to solicit weighings from users, and visualize outcomes.
§.§.§ Weighing.
For exploratory query Q, we request weighing only the necessary relations in a depth-first fashion, where the relations in Q_base are the roots.
Some relations, such as those in Q_base, don't require weighing (<Ref>).
Other relations, with one-to-many or one-to-one relationships (to the parent relation in the depth-first search), have one tuple per join key and are directly annotated with a weight of 1 without user request. We only request weights to relations that (1) have many-to-one or many-to-many relationships and (2) are part of any paths from the Q_base relations to the Q relations, excluding the Q_base relations themselves.
Based on the sophistication of users, we offer parameterized options for common cases to streamline the process such as:
* Equal Weighing: Tuples within the group have the same weight.
* Order-Based Weighing: The first/last tuple ordered by some user-specified attribute has weight 1, while the rest have 0.
* Position-Based Weighing: Similar to order-based one, but users can additionally determine the allocation percentages for the first/last tuple and the rest. E.g., the first and last have weights of 0.4, while the remaining 0.2 is distributed evenly for the rest.
* Proportional Weighing: The weights are distributed proportionally to some user-specified attribute (e.g., distributing freight charges based on the item sizes <cit.>).
For advanced users, we provide a SQL-based interface that allows for customized weighing specifications.
The SQL query is intended to create a weight column. Since relations are unordered, we ask users to create a weight table W[rowid, weight] and then join it with the relation to append the weight column.
For instance, the uniform weighing (linear attribution) in <Ref> Q3 can be specified using the following SQL queries:
SELECT rowid,
       1.0 / COUNT(*) OVER (PARTITION BY uid) AS weight  -- 1.0 avoids integer division
FROM V;
Users can then modify the SQL query to customize the weights as per their requirements. The weighing can also be based on multiple attributes and even on other tables in SQL. Once the weight column is specified, we perform sanity checks: for exploratory queries, we verify that the weights within each join-key group sum to 1.
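For illustration, the parameterized options above can also be produced outside SQL; the pandas sketch below (column names such as ts and size, and the concrete values, are illustrative assumptions) generates equal, order-based (last-touch), and proportional weights, and applies the same sanity check.

import pandas as pd

V = pd.DataFrame({"uid":  [1, 1, 1, 2],
                  "ts":   [10, 20, 30, 5],          # view timestamp
                  "size": [2.0, 1.0, 1.0, 4.0]})    # attribute for proportional weighing
g = V.groupby("uid")

V["w_equal"] = 1.0 / g["uid"].transform("size")                     # equal weighing
V["w_last"]  = (V["ts"] == g["ts"].transform("max")).astype(float)  # order-based (last-touch)
V["w_prop"]  = V["size"] / g["size"].transform("sum")               # proportional weighing

# Sanity check: every weighing scheme sums to 1 within each join-key group.
sums = V.groupby("uid")[["w_equal", "w_last", "w_prop"]].sum()
assert ((sums - 1.0).abs() < 1e-9).all().all()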
§.§.§ Visualization.
We have developed two types of visualizations for the weighing interface: (1) Tabular views of weights for detailed weights, (2) Visualizations of query results (e.g., Q_base, Q) over the whole join along with the join graph for an overview view.
During the weighing process, we provide users with a tabular view of the relation with weights for detailed inspection.
It is necessary to visualize both the relation being weighed and the relation it joins with, in order to understand the distribution of the aggregation. However, these two relations have a potentially many-to-many relationship, which is hard to visualize.
To address this, we implement partial aggregation to summarize aggregates of the parent table from the depth-first search grouped by its join key and present it as a one-to-many join using nested table layout <cit.>. For instance, in <Ref>, we aggregate the revenue of each user by the join key "uid". The relation could be large, and we only sample n (=100 by default) join key groups to display, but the table can be expanded to display more rows (by clicking on the button at the bottom of the table).
For visualizations, we display Q_base and Q by default but offer a library for users to build extra visualizations.
Before weighing, we use the "equal weighing" by default to compute the query results for visualizations, which are progressively updated as users specify weighing.
We place Q_base at the top since it remains unaffected by weighing, and show other possible visualizations for various weighing options to assist users in understanding potential outcomes. Additionally, we present the join graph, which depicts the relationships (e.g., many-to-many, one-to-one), highlighting the Q_base relations (in black) and the relation being weighed (in blue).
§.§ Limitations and Future works
Our existing interface evaluates each metric independently, necessitating users to examine and navigate various metrics individually. However, users might prefer to compare and visualize multiple metrics collectively and further generate metrics based on the existing ones. In our future work, we plan to create a unified interface that not only displays multiple metrics but also enables users to seamlessly comprehend weighed outcomes across diverse metrics and weighing options, in order to fully actualize the semantic layer.
|
http://arxiv.org/abs/2307.02781v1
|
20230706052003
|
Dynamic Factor Analysis with Dependent Gaussian Processes for High-Dimensional Gene Expression Trajectories
|
[
"Jiachen Cai",
"Robert J. B. Goudie",
"Colin Starr",
"Brian D. M. Tom"
] |
stat.AP
|
[
"stat.AP",
"stat.CO",
"stat.ME"
] |
Dynamic Factor Analysis with Dependent Gaussian Processes for High-Dimensional Gene Expression Trajectories
===========================================================================================================
The increasing availability of high-dimensional, longitudinal measures of genetic expression can facilitate analysis of the biological mechanisms of disease and prediction of future trajectories, as required for precision medicine. Biological knowledge suggests that it may be best to describe complex diseases at the level of underlying pathways, which may interact with one another. We propose a Bayesian approach that allows for characterising such correlation among different pathways through Dependent Gaussian Processes (DGP) and mapping the observed high-dimensional gene expression trajectories into unobserved low-dimensional pathway expression trajectories via Bayesian Sparse Factor Analysis. Compared to previous approaches that model each pathway expression trajectory independently, our model demonstrates better performance in recovering the shape of pathway expression trajectories, revealing the relationships between genes and pathways, and predicting gene expressions (closer point estimates and narrower predictive intervals), as demonstrated in the simulation study and real data analysis. To fit the model, we propose a Monte Carlo Expectation Maximization (MCEM) scheme that can be implemented conveniently by combining a standard Markov Chain Monte Carlo sampler and an R package GPFDA <cit.>, which returns the maximum likelihood estimates of DGP parameters. The modular structure of MCEM makes it generalizable to other complex models involving the DGP model component. An R package has been developed that implements the proposed approach.
Keywords: High-Dimensional Gene Expression Trajectories; Multivariate Longitudinal Data; Pathways; Sparse Factor Analysis; Dependent Gaussian Processes; Monte Carlo Expectation Maximization.
§ INTRODUCTION
The development of high-throughput technology has enabled researchers to collect high-dimensional genomic data repeatedly over time, facilitating the discovery of disease mechanisms. Biological knowledge suggests that it may be best to describe complex diseases at the level of pathways, rather than the level of individual genes <cit.>. A biological pathway is a series of interactions among molecules that results in a certain biological function or response, or describes a particular mechanism or phenomenon. Pathways are sometimes summarized by activity scores derived from the expression values of the genes in the corresponding gene set <cit.>; such scores form a basis for making comparisons between people in different clinical statuses <cit.>. The relationship between genes and pathways can be illustrated (in a simplified representation) using Figure <ref>. Each pathway involves only a small proportion of genes (sparsity), and a gene may not contribute to any relevant pathway (e.g. gene 4), may contribute to a single pathway (e.g. gene 1 only contributes to pathway 1 and gene 3 only contributes to pathway 2), or may contribute to multiple pathways (e.g. gene 2 contributes to both pathways). Pathways are often unobservable in practice. Therefore, recovering their trajectories from observed gene expression data has been one of the major interests in genomic data analysis.
The Bayesian Sparse Factor Analysis (BSFA) model is an established statistical approach used to map high-dimensional gene expression data to a low-dimensional pathway expression representation; the former are treated as observed variables and the latter as latent factors <cit.>. One of the main assumptions in BSFA is the independence of factors. In our context, this assumption means that the expressions of pathways are uncorrelated. However, this may not be true biologically. In fact, previous research has found that pathways often interact with one another to achieve complex biological functions <cit.>.
In this paper we propose relaxing the classical assumption of independent factors to allow for the possibility of correlated factors, and estimate the cross-correlation among the factors from data. To do this, we model the latent factor trajectories using Dependent Gaussian Processes (DGP). As we will show in the simulation study and real data analysis, our approach performs better at recovering the shape of latent factor trajectories, estimating the relationship between genes and pathways, and predicting future gene expression.
In addition to the modeling innovation, another contribution of this paper is with regard to the algorithm developed for estimating parameters of the DGP model when it is embedded within another model. To obtain the maximum likelihood estimate (MLE) of the DGP parameters, we developed a Monte Carlo Expectation Maximization (MCEM) algorithm, which can be conveniently implemented by combining an existing R package, GPFDA <cit.>, with a standard Markov Chain Monte Carlo (MCMC) sampler. An R package DGP4LCF (Dependent Gaussian Processes for Longitudinal Correlated Factors) has been developed to implement the proposed method; available on Github: <https://github.com/jcai-1122/DGP4LCF>.
The remainder of the article is organized as follows. In Section <ref>, we review BSFA and DGP, then propose our integrated model based on them. In Section <ref>, we introduce the inference method for the proposed model, and discuss identifiability issues with the model and our approach to addressing these. We explore various aspects of the behavior of our proposed approach under different factor generation mechanisms in the simulation study in Section <ref>: prediction of gene expression, estimation of gene-pathway relationships and the shape of pathway expression trajectories; and we demonstrate that its performance in the aforementioned aspects is always superior to the traditional model ignoring correlation among factors. In Section <ref>, we apply the proposed method to real data and compare our results with a previous analysis in <cit.>. We conclude with a discussion on future research directions in Section <ref>.
§ MODEL
Let t_ij denote the jth measured time point of the ith individual, i=1,…,n, j=1,…,q_i, where n and q_i are the number of subjects and subject-specific time points, respectively. At time t_ij, x_ijg is the gth gene expression, g=1,…,p, where p is the number of genes. We seek to describe these data in terms of k latent factors/pathways y_ija, a = 1,…,k. In practice, we expect k to be much smaller than p.
Throughout this paper we assume that k is pre-specified and fixed, with the choice primarily based on previous knowledge about the data at hand. Note, however, that it may be possible to identify when k is unnecessarily large (as we will show in the simulation study), since redundant factors will have no significantly loaded genes.
Our proposed model is based on BSFA and DGP. Therefore, before presenting the proposed model in Section <ref>, we first introduce BSFA in Section <ref> and DGP in Section <ref>.
§.§ Uncovering Sparse Factor Structure via BSFA
The BSFA model connects observations x_ijg with the unobserved latent factors y_ija via a factor loading matrix 𝐋={l_ga}_g = 1,..., p, a = 1,...,k∈ℝ^p × k, and incorporates the prior belief of sparsity by imposing a sparsity-inducing prior distribution on 𝐋. It can be expressed as follows,
x_ijg = μ_ig + ∑_a=1^kl_gay_ija + e_ijg,
where μ_ig is the intercept term for the gth gene of the ith individual (hereafter the “subject-gene mean”), e_ijg is the residual error, and each element l_ga quantifies the extent to which the gth gene expression is related to the ath pathway expression, with larger absolute values indicating a stronger contribution of the gene expression to the pathway expression. In this paper, we assume l_ga is constant across all time points.
* x_ijg: the gth gene expression at time t_ij, the jth time point of the ith individual
* y_ija: the ath pathway expression at time t_ij
* l_ga: the contribution of the gth gene expression to the ath pathway expression
* μ_ig: the intercept term for the gth gene of the ith individual
* e_ijg: the residual
We adopt point-mass mixture priors to induce sparsity <cit.> because we want the model to shrink insignificant parameters completely to zero, without the need to further set up a threshold for inclusion, as in continuous shrinkage priors <cit.>. The point-mass mixture prior is introduced by first decomposing l_ga as the product of a binary variable Z_ga indicating inclusion and a continuous variable A_ga denoting the regression coefficient, and then specifying a Bernoulli-Beta prior for Z_ga and a Normal-Inverse-Gamma prior for A_ga:
l_ga = Z_ga· A_ga,
Z_ga∼Bern(π_a), π_a ∼Beta(c_0, d_0),
A_ga∼N(0,ρ_a^2), ρ_a^2 ∼Inverse-Gamma(c_1, d_1),
where g=1,…,p; a=1,…,k, and c_0, d_0, c_1, d_1 are pre-specified positive constants. An a priori belief about sparsity can be represented via (c_0, d_0), which controls π_a, the proportion of genes that contribute to the ath pathway. If Z_ga=0, meaning that the gth gene does not contribute to the ath pathway, then the corresponding loading l_ga=0; otherwise the loading is drawn from the prior distribution N(0, ρ_a^2).
* Z_ga: a binary variable indicating inclusion of the gth gene on the ath pathway
* A_ga: a continuous variable denoting the regression coefficient of the gth gene on the ath pathway
* π_a: control the proportion of genes that contribute to the ath pathway
* ρ_a^2: variance for regression coefficients related to the ath pathway
* c_0, d_0, c_1, d_1: pre-specified positive constants
We complete the model specification by assigning a Normal-Inverse-Gamma prior to both subject-gene means μ_ig and residuals e_ijg,
μ_ig ∼N(μ_g, σ_g^2), σ_g^2 ∼Inverse-Gamma(c_2, d_2),
e_ijg ∼N(0, ϕ_g^2), ϕ_g^2 ∼Inverse-Gamma(c_3, d_3),
where μ_g is fixed as the mean of the gth gene expression across all time points of all people, and c_2, d_2, c_3, d_3 are pre-specified positive constants.
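To make the prior concrete, the sketch below draws one sparse loading matrix from the point-mass mixture prior; NumPy is used for illustration, and the hyper-parameter values are assumptions chosen only to give roughly 10% expected sparsity.

import numpy as np

rng = np.random.default_rng(0)
p, k = 100, 4
c0, d0, c1, d1 = 10.0, 90.0, 2.0, 2.0                # illustrative hyper-parameters (E[pi_a] = 0.1)

pi   = rng.beta(c0, d0, size=k)                      # pi_a    ~ Beta(c0, d0)
rho2 = 1.0 / rng.gamma(c1, 1.0 / d1, size=k)         # rho_a^2 ~ Inverse-Gamma(c1, d1)
Z    = rng.binomial(1, pi, size=(p, k))              # Z_ga    ~ Bern(pi_a)
A    = rng.normal(0.0, np.sqrt(rho2), size=(p, k))   # A_ga    ~ N(0, rho_a^2)
L    = Z * A                                         # l_ga = Z_ga * A_ga (sparse loadings)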
To implement the BSFA model, the software BFRM (https://www2.stat.duke.edu/~mw/mwsoftware/BFRM/index.html) has been developed <cit.>. However, the current version of BFRM has two major limitations. First, it only returns point estimates of parameters, without any quantification of uncertainty. Second, it can handle only independent data; therefore it does not account for within-individual correlation when supplied with longitudinal data.
§.§ Modeling Correlated, Time-Dependent Factor Trajectories via DGP
Several approaches treating factor trajectories y_a(t), a = 1,...,k as functional data have been proposed, including spline functions <cit.>, differential equations <cit.>, autoregressive models <cit.>, and Gaussian Processes (GP)<cit.>. As mentioned previously, we are interested in incorporating the cross-correlation among different factors into the model. GPs are well-suited to this task because DGPs can account for the inter-dependence of factors in a straight-forward manner. Indeed, the DGP model has been widely applied to model dependent multi-output time series in the machine learning community <cit.>, where it is also known as “multitask learning”. Sharing information between tasks using DGPs can improve prediction compared to using Independent Gaussian Processes (IGPs) <cit.>. DGPs have also been used to model correlated, multivariate spatial data in the field of geostatistics <cit.>, improving prediction performance over IGPs <cit.>.
The main difficulty with DGP modeling is how to appropriately define cross-covariance functions that imply a positive definite covariance matrix. <cit.> reviewed existing strategies developed to address this issue and found, in simulation studies, that no single approach outperformed all others in all scenarios. When the intent was to improve the predictions of all the outputs jointly (as is the case here), the kernel convolution framework (KCF) <cit.> was among the best performers. The KCF has also been widely employed by other researchers <cit.>. Therefore, we adopt the KCF strategy for DGP modeling here; a detailed illustration can be found in Section A.2. of the supplementary materials.
The distribution of (y_i1a, …, y_iq_ia, y_i1b,…, y_iq_ib)^T induced under the KCF is a multivariate normal distribution (MVN) with mean vector 0 and covariance matrix fully determined by the parameters of the kernel functions and the noise parameter; we will use Θ to denote all of them hereafter. The covariance matrix contains information on both the auto-correlation of each single process and the cross-correlation between different processes. In this paper, we focus on the cross-correlation, which corresponds to interactions across different biological pathways that have been ignored in previous analyses <cit.>.
The KCF can be implemented via the R package GPFDA <cit.>, which outputs the MLE for DGP parameters given measurements of the processes y_a(t), a = 1,…,k. Its availability inspired and facilitated the algorithm developed for the proposed model. GPFDA assumes all input processes are measured at common time points, which is often unrealistic in practice; but we will show the adaptation of GPFDA to our case of subject-specific time points in Section <ref>.
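The KCF construction itself is deferred to the supplementary material; as a rough, simplified stand-in (not the KCF used in the paper), the sketch below builds a valid joint covariance for two correlated factor trajectories from a separable (Kronecker) model, a cross-factor correlation matrix times a shared squared-exponential temporal kernel, and draws a vec(Y_i^T)-style sample from it. All parameter values are assumptions.

import numpy as np

def joint_covariance(times, B, lengthscale=1.0, jitter=1e-6):
    """Separable stand-in for Sigma_Y: kron(B, K_t), with B a k x k positive-
    definite cross-factor correlation matrix and K_t a squared-exponential
    temporal kernel. A valid covariance, but not the paper's KCF."""
    t = np.asarray(times, dtype=float)[:, None]
    K_t = np.exp(-0.5 * (t - t.T) ** 2 / lengthscale ** 2)
    return np.kron(B, K_t) + jitter * np.eye(len(times) * B.shape[0])

times = np.linspace(0.0, 7.0, 8)
B = np.array([[1.0, -0.7], [-0.7, 1.0]])        # assumed cross-correlation of two factors
Sigma_Y = joint_covariance(times, B)
rng = np.random.default_rng(1)
y = rng.multivariate_normal(np.zeros(Sigma_Y.shape[0]), Sigma_Y)  # factor 1 over all times, then factor 2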
§.§ Proposed Integrated Model
We propose a model that combines BSFA and DGP (referred as “BSFA-DGP” hereafter), and present it in matrix notation below. To accommodate irregularly measured time points across individuals, we first introduce the vector of all unique observation times 𝐭 = ⋃_i=1^n𝐭_i across all individuals, and denote its length as q; each 𝐭_i=⋃_j=1^q_it_ij is a vector of observed time points for the ith individual. Let 𝐘_i = (𝐲_i1,...,𝐲_ik)^T∈ℝ^k× q be the matrix of pathway expression, with 𝐲_ia= (y_i1a, ..., y_iqa)^T denoting the ath factor's expression across all observation times 𝐭; and let 𝐘_i,obs, 𝐘_i,miss be the sub-matrices of 𝐘_i, denoting pathway expression at times when gene expression of the ith person are observed and missing, respectively. Let (𝐘_i^T) denote the column vector obtained by stacking the columns of matrix 𝐘_i^T on top of one another; similar definitions apply to (𝐘_i,obs^T) and (𝐘_i,miss^T).
Let 𝐗_i=(𝐱_i1,...,𝐱_ip)^T∈ℝ^p × q_i be the matrix of gene expression measurements at the q_i observation times for the ith individual, with 𝐱_ig= (x_i1g, ..., x_iq_ig)^T denoting the gth gene's trajectory; and correspondingly let 𝐌_i = (μ_i1, ..., μ_ip)^T∈ℝ^p × q_i be the matrix of subject-gene means, with μ_ig=μ_ig1, where 1 is a q_i-dimensional column vector consisting of the scalar 1. Furthermore, let 𝐀={A_ga}_g=1,...,p; a = 1,...,k∈ℝ^p× k be the matrix of regression coefficients and 𝐙={Z_ga}_g=1,...,p; a = 1,...,k∈ℝ^p× k be the matrix of inclusion indicators,
𝐗_i =𝐌_i + 𝐋𝐘_i, obs+𝐄_i,
𝐋 =𝐀∘𝐙,
(𝐘_i^T) ∼MVN(0, Σ_𝐘)
where ∘ denotes element-wise matrix multiplication, Σ_𝐘∈ℝ^kq × kq is the covariance matrix induced via the KCF modeling, and 𝐄_i is the residual matrix. The prior distributions for components of 𝐀, 𝐙, 𝐌_i, and 𝐄_i have been described in Section <ref>.
* 𝐗_i: 𝐗_i=(𝐱_i1,...,𝐱_ip)^T∈ℝ^p × q_i is the matrix of gene expression measurements at the q_i observation times for the ith individual, with 𝐱_ig= (x_i1g, ..., x_iq_ig)^T denoting the gth gene's trajectory.
* 𝐘_i: 𝐘_i = (𝐲_i1,...,𝐲_ik)^T∈ℝ^k× q is the matrix of pathway expression, with 𝐲_ia= (y_i1a, ..., y_iqa)^T denoting the ath factor's trajectory across all unique observation times
𝐭 = ⋃_i=1^n𝐭_i; each 𝐭_i=⋃_j=1^q_it_ij is a vector of observed time points for the ith individual.
* (𝐘_i^T): the column vector obtained by stacking columns of matrices 𝐘_i^T on top of one another.
* 𝐘_i, obs: the sub-matrices of 𝐘_i, denoting pathway expression at times when gene expression of the ith person are observed.
* Σ_𝐘: the covariance matrix induced via the KCF modeling.
* 𝐋, 𝐀, 𝐙, 𝐌_i, 𝐄_i: the matrix of factor loadings, regression coefficients, inclusion indicators, subject-gene means, residuals, respectively.
§ INFERENCE
In this section, we describe inference for our proposed model. We develop an MCEM framework to obtain the MLE for DGP parameters Θ in Section <ref> and the framework is summarized in Algorithm <ref>. For fixed DGP parameter values, we propose a Gibbs sampler for the other variables in the model, denoted by Ω={𝐌,𝐘, 𝐀, 𝐙, ρ, π, σ, ϕ}, where 𝐌={𝐌_i}_i=1,...,n, 𝐘={𝐘_i}_i=1,...,n, ρ={ρ_a^2}_a=1,...,k, π={π_a}_a=1,...,k, σ={σ_g^2}_g=1,...,p, ϕ={ϕ_g^2}_g=1,...,p (Section <ref>). This sampler serves two purposes. First, within the MCEM algorithm (called “Gibbs-within-MCEM” hereafter), it generates samples for approximating the expectation in Equation <ref>, which is used for updating estimates of Θ. Second, after the final DGP estimate, denoted by Θ^MLE, is obtained from MCEM, we proceed with implementing the sampler (called “Gibbs-after-MCEM” hereafter and a summary is provided in Supplementary Algorithm 1) to find the final posterior distribution of interest, which is f(Ω|𝐗, Θ_MLE), where 𝐗={𝐗_i}_i=1,...,n represents observed gene expression. Finally, we discuss the underlying reason for and our approach to addressing the identifiability issues encountered throughout the whole process of inference in Section <ref>.
§.§ MCEM Framework for Estimating Cross-Correlation Determined by DGP Parameters
§.§.§ Options for Estimating DGP Parameters
To estimate the DGP parameters Θ, two strategies have been widely adopted: Fully Bayesian (FB) and Empirical Bayesian (EB). The former proceeds by assigning prior distributions to Θ to account for our uncertainty. The latter proceeds by fixing Θ to reasonable values based on the data; for example, we can set them to the MLE. Compared to FB, EB sacrifices the quantification of uncertainty for a lower computational cost. Here, we adopt the EB approach because the key quantities of interest for inference in our model are the factor loading matrix 𝐋, latent factors 𝐘_i, and the prediction of gene expression, rather than DGP parameters Θ. Therefore, we simply want a reasonably good estimate of Θ to proceed, without expending excessive computation time.
§.§.§ Finding the MLE for DGP Parameters via an MCEM Framework
To derive Θ^MLE, we first write out the marginal likelihood function with respect to Θ for our proposed model in Section <ref>, with f denoting probability density function,
f(𝐗|Θ) =∫f(𝐗,Ω|Θ)dΩ
=∫f(𝐗|𝐌, 𝐘, 𝐀, 𝐙, ϕ) f(𝐌|σ) f(𝐘|Θ)f(𝐀|ρ)f(𝐙|π)f(ϕ)f(σ)f(ρ)f(π)dΩ,
which requires high-dimensional integration.
To deal with the integration, one strategy is the Expectation-Maximization (EM) algorithm <cit.>. Here, we view Ω as the hidden variables, and Θ as the parameters to be estimated by maximum likelihood. The essential idea of EM is to iteratively construct a series of estimates Θ^(l), l=1,2,3,..., that converges to Θ^MLE <cit.>. Specifically, each iteration involves two alternating steps. Assuming the algorithm is at its lth iteration, the first step requires evaluation of the conditional expectation of the log-likelihood of the complete data {𝐗, Ω} given the observed data 𝐗 and the previously iterated parameter value Θ^(l-1). This step is known as the expectation step (E-Step), and the conditional expectation is called the “Q-function”,
Q(Θ,Θ^(l-1)) = 𝔼_Ω[lnf(𝐗, Ω|Θ) | 𝐗, Θ^(l-1)].
In the second step, this expectation is maximized to obtain the updated parameter Θ^(l),
Θ^(l) = Θarg max Q(Θ,Θ^(l-1)).
Therefore, this step is known as the maximization step (M-Step). The EM algorithm keeps updating parameters in this way until the pre-specified stopping condition is met.
As for many complex models, the analytic form of the aforementioned conditional expectation in Equation <ref> is unavailable. To address this issue, a Monte Carlo version of EM (MCEM) has been developed <cit.>, which uses Monte Carlo samples to approximate the exact expectation. <cit.> showed that MCEM has attractive statistical properties: it is able to result in consistent estimates of posterior distributions and asymptotically valid confidence sets. MCEM has been widely applied before in different models <cit.>; but to the best of our knowledge, this paper is the first use of MCEM in the context of the DGP model.
Suppose that we have R samples {Ω^r}_r = 1, …, R drawn from the posterior distribution f(Ω|𝐗,Θ^(l-1)) using an MCMC sampler (details of the sampler are provided in Section <ref>), then Equation <ref> can be approximated as,
Q(Θ,Θ^(l-1)) ≈Q(Θ,Θ^(l-1)) =1/R∑_r=1^Rlnf(𝐗, Ω^r|Θ).
Similar to Equation <ref>, the complete data likelihood f(𝐗, Ω^r|Θ) can be decomposed as,
f(𝐗, Ω^r|Θ)= f(𝐗|𝐌^r,𝐘^r, 𝐀^r, 𝐙^r, ϕ^r)· f(𝐌^r|σ^r) · f(𝐘^r|Θ)·
f(𝐀^r|ρ^r)· f(𝐙^r|π^r)· f(ϕ^r) · f(σ^r) · f(ρ^r)· f(π^r).
The M-Step maximizes Equation <ref> with respect to Θ, and from Equation <ref> the only term that depends on Θ is f(𝐘^r|Θ). Therefore, the approximated Q-function in Equation <ref> can be simplified as,
Q(Θ,Θ^(l-1)) =1/R∑_r=1^Rlnf(𝐘^r|Θ).
This simplification implies that, the maximizer Θ^(l) for Equation <ref> is exactly the same as that of Equation <ref>. As a result, the task has now been reduced to finding Θ that can maximize the likelihood function of {𝐘^r}_r=1,...,R = {𝐘_i^r}_i=1,...,n; r=1,...,R, given that each 𝐘_i^r follows a DGP distribution determined by Θ. Although gene expressions are measured at irregular time points for different individuals, samples of latent factors are available at common times 𝐭; therefore enabling the use of GPFDA to estimate the MLE.
The MCEM framework described above is summarized in Algorithm <ref>. Its implementation challenges are rooted in one common consideration: the computational cost of the algorithm. We discuss these challenges, including the choice of MCMC sample size R and the stopping condition, in Section <ref> and <ref>, respectively.
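The overall control flow can be summarized by the following Python sketch; gibbs_sample, fit_dgp_mle, and ascent_ok are user-supplied placeholders standing in for the Gibbs-within-MCEM sampler, the GPFDA maximization, and a check of the ascent property; they are not real APIs.

def mcem(X, theta_init, gibbs_sample, fit_dgp_mle, ascent_ok, R_init=50, m=2, W=5):
    """Schematic MCEM loop (a sketch of Algorithm 1, not the implementation).
    gibbs_sample(X, theta, R) -> R posterior draws of the latent factors Y;
    fit_dgp_mle(Y_draws)      -> argmax_Theta (1/R) sum_r log f(Y^r | Theta);
    ascent_ok(Y_draws, theta_new, theta) -> whether the approximate ascent holds."""
    theta, R, n_increases = theta_init, R_init, 0
    while n_increases < W:                        # stop after W sample-size increases
        Y_draws = gibbs_sample(X, theta, R)       # E-step: Monte Carlo approximation of Q
        theta_new = fit_dgp_mle(Y_draws)          # M-step: only f(Y^r | Theta) depends on Theta
        if not ascent_ok(Y_draws, theta_new, theta):
            R, n_increases = m * R, n_increases + 1   # larger R gives a sharper Q approximation
        theta = theta_new
    return theta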
§.§.§ Choosing MCMC Sample Size
First and foremost, the Gibbs-within-MCEM sample size R determines the computational cost for both the Gibbs sampler and EM. Choosing R too small, while saving computational expense for the Gibbs sampler, results in a less precise approximation of the Q-function, and thus leads to significantly slower convergence of the EM algorithm. In contrast, choosing R too large, while providing an accurate approximation of the Q-function and faster EM convergence, results in the time taken for generating the required Gibbs samples becoming prohibitive. Consequently, the choice of R must trade-off the accuracy of the Q-function with the computational cost of generating Gibbs-within-MCEM samples. We provided a literature review on this topic in Section A.3.1. of the supplementary materials and adapted the approach proposed by <cit.> to our model, which automatically determines when to increase R dependent on whether the ascent property of the marginal likelihood under EM is preserved or not <cit.>. Section A.3.2. shows the derivation of the adapted algorithm.
§.§.§ Specifying the Stopping Condition
The other challenge when implementing the MCEM algorithm is to specify the stopping criterion. This can be based on the change in individual parameter estimates <cit.>, the marginal likelihood <cit.>, or the Q-function <cit.>: if there is little change between consecutive values, the algorithm can be stopped. We propose to stop the algorithm when the total number of sample-size increases exceeds a pre-specified value W; by doing so, the number of Gibbs samples and of samples input to GPFDA is bounded, thereby ensuring that the MCEM algorithm returns results within a reasonable time.
Finally, post-processing Gibbs samples {𝐘^r}_r=1,...,R (including burn-in and thinning) before inputting them to GPFDA can help drastically reduce the computational cost of GPFDA while still preserving most information in the samples.
§.§ Gibbs Sampler for Other Variables under Fixed GP Estimates
To acquire samples for Ω from the posterior distribution under fixed DGP estimates f(Ω|𝐗,Θ), we use a Gibbs sampler since the full conditionals for all variables in Ω are analytically available. We summarise the high-level approach here; the details of the full conditionals are available in Section A.1. of the supplementary materials.
For the key variables 𝐙, 𝐀, and 𝐘, we use blocked Gibbs to improve mixing. To block-update the gth row of the binary matrix 𝐙, we compute the posterior probability under 2^k possible values of (Z_g1,...,Z_gk), then sample with corresponding probabilities. To update the gth row of the regression coefficient matrix 𝐀, (A_g1,...,A_gk), we draw from a MVN distribution.
When updating (𝐘_i^T), the vectorized form of the ith individual's factor scores, we divide it into two groups (𝐘_i^T) = ((𝐘_i,obs^T), (𝐘_i,miss^T)) because the form of the conditional posterior distributions differ. We first sample (𝐘_i,obs^T) and then (𝐘_i,miss^T),
f((𝐘_i^T)|𝐗,Θ,Ω∖(𝐘_i^T))
= f((𝐘_i,obs^T)|𝐗_i,Θ, Ω∖(𝐘_i^T)) · f((𝐘_i,miss^T)|(𝐘_i,obs^T), Θ),
where Ω∖(𝐘_i^T) denotes the remaining parameters excluding (𝐘_i^T). The first term follows a MVN distribution depending on measured gene expression, and the second term also follows a MVN distribution according to standard properties of the DGP model <cit.>.
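The second factor of the decomposition uses the standard Gaussian conditioning formula; a small NumPy sketch (the index sets, Σ_Y, and the observed values are assumed inputs) is:

import numpy as np

def sample_y_miss(Sigma_Y, obs_idx, miss_idx, y_obs, rng):
    """Draw vec(Y_miss) | vec(Y_obs) for a zero-mean Gaussian prior:
    mean = S_mo S_oo^{-1} y_obs, cov = S_mm - S_mo S_oo^{-1} S_om."""
    S_oo = Sigma_Y[np.ix_(obs_idx, obs_idx)]
    S_om = Sigma_Y[np.ix_(obs_idx, miss_idx)]
    S_mm = Sigma_Y[np.ix_(miss_idx, miss_idx)]
    A = np.linalg.solve(S_oo, S_om)              # S_oo^{-1} S_om
    mean = A.T @ y_obs
    cov = S_mm - S_om.T @ A
    return rng.multivariate_normal(mean, cov)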
§.§ Identifiability Issue of the Proposed Model
To facilitate illustrating the identifiability issue, we first re-express the proposed model in Section <ref> as,
(𝐗_i^T) =(𝐌_i^T)+𝐋_i^*(𝐘_i,obs^T)+ (𝐄_i^T),
(𝐘_i^T) ∼MVN(0,Σ_𝐘),
(𝐄_i^T) ∼MVN(0,Σ_𝐗_i),
where (𝐗_i^T), (𝐌_i^T), and (𝐄_i^T) are vectorized from matrices 𝐗_i^T, 𝐌_i^T, and 𝐄_i^T, respectively. To make the above equations hold, 𝐋_i^* and Σ_𝐗_i are constructed using components of 𝐋 and ϕ, respectively. Specific forms are available in Section A.1. of the supplementary materials.
The distribution of (𝐗_i^T) after integrating out (𝐘_i,obs^T), is
(𝐗_i^T) | (𝐌_i^T), 𝐋_i^*, Σ_𝐗_i, Σ_𝐘∼MVN((𝐌_i^T), 𝐋_i^*Σ_𝐘_i,obs(𝐋_i^*)^T+Σ_𝐗_i),
where Σ_𝐘_i,obs is a sub-matrix of Σ_𝐘 that characterizes the covariance structure for (𝐘_i,obs^T).
The identifiability issue arises from the invariance of the covariance 𝐋_i^*Σ_𝐘_i,obs(𝐋_i^*)^T+Σ_𝐗_i in Equation <ref>. The uniqueness of Σ_𝐗_i has been ensured in previous research <cit.>; given its identifiability, we are concerned with identifiability of the factor loadings 𝐋_i^*, the factor scores (𝐘_i,obs^T) and Σ_𝐘_i,obs. Non-identifiability is present because for any non-singular transformation matrix 𝐃∈ℝ^kq_i × kq_i, the covariances of the estimator {𝐋_i^*, (𝐘_i,obs^T), Σ_𝐘_i,obs} and the transformed estimator {𝐋_i^*𝐃, 𝐃^-1(𝐘_i,obs^T), 𝐃^-1Σ_𝐘_i,obs(𝐃^-1)^T} are equal. To address the issue, we first place a constraint on Σ_𝐘 that requires its diagonal element to be 1. In other words, the covariance matrix of latent factors Σ_𝐘 was forced to be a correlation matrix. This restriction ensures the uniqueness of Σ_𝐘, and has also been used in <cit.>.
However, under particular classes of transformation matrices, {𝐋_i^*, (𝐘_i,obs^T)} are still not identifiable. This is because 𝐃^-1Σ_𝐘_i,obs(𝐃^-1)^T might still equal Σ_𝐘_i,obs when 𝐃 is a specific signed matrix (diagonal matrix with diagonal elements being 1 or -1) or permutation matrix (under this case, the identifiability problem is also known as label switching). The specific form of the transformation matrix leading to such invariance depends on the real correlation structure of the data. To deal with this unidentifiability, we use the R package “factor.switch” developed by <cit.> to align samples across Gibbs iterations. Alignment of {𝐋^r, 𝐘^r}_r=1,...,R should be completed before inputting factor scores to GPFDA for estimating DGP parameters Θ within the MCEM algorithm, and also before the final posterior summary of {𝐋,𝐘} using the Gibbs-After-MCEM samples. For the latter Gibbs sampler, we ran five chains in parallel; therefore alignment should be firstly carried out within each chain, then across chains.
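A much-simplified sketch of the sign-alignment step (permutation alignment and the actual factor.switch logic are omitted; this is only meant to convey the idea of aligning each Gibbs draw to a reference draw):

import numpy as np

def align_signs(L_ref, L_r, Y_r):
    """Flip the sign of each factor in one Gibbs draw so that its loadings
    correlate positively with a reference draw. L_* are p x k loading matrices;
    Y_r holds the corresponding factors in its rows (k x q)."""
    flips = np.sign(np.sum(L_ref * L_r, axis=0))   # one +/-1 per factor
    flips[flips == 0] = 1.0
    return L_r * flips, Y_r * flips[:, None]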
§ SIMULATION
We simulated observed gene expression from the model we propose. Section <ref> describes the data generation processes. We fitted the proposed model and a comparator model to the generated data. Section <ref> introduces the model settings. We assessed the models' performance in terms of estimating the correlation structure, predicting gene expressions, recovering factor trajectories and estimating factor loadings. Section <ref> introduces the metrics we used for assessment and Section <ref> discusses models' performance using these metrics.
§.§ Simulation Setting
To mimic the real data (discussed in more detail in Section <ref>), we chose the sample size n = 17 and the true number of latent factors k=4. The number of training and test time points was set to u_1 = 8 and u_2 = 2, respectively, for all individuals (i.e. we split the observed data into training and test datasets to assess models' performance in predicting gene expression). Since recovering latent factor trajectories is one of our major interests, we considered 4 different mechanisms for generating true latent factors according to a 2 × 2 factorial design: the first variable determined whether different factors were actually correlated (“C”) or uncorrelated (“U”), and the second variable determined whether the variability of factors was small (“S”) or large (“L”). The mean value for each factor score y_ija was fixed to be 0, and we generated the covariance matrix Σ_𝐘 for different scenarios in the following way: (1) under scenario “C”, we set the cross-correlation based on the estimated covariance matrix from real data under k=4; otherwise under scenario “U” the true cross-correlation was set to be 0. (2) under scenario “S”, we set the standard deviation for factors 1-4 to be 0.21, 0.23, 0.21, and 0.17, respectively, following the estimated results from real data under k=4; while under scenario “L”, the standard deviation for each y_ija was set to be 1. Below we use the factors' generation mechanism to name each scenario. For example, “scenario CS” refers to the case where true factors were correlated (C) and had small (S) variability.
Each factor was assumed to regulate 10% of all genes, and the total number of genes was p=100. Note that we allow for the possibility that one gene may be regulated by more than one factor. If a gene was regulated by an underlying factor, then the corresponding factor loading was generated from a normal distribution N(4,1^2); otherwise the factor loading was set to 0. Each subject-gene mean μ_ig was generated from N(μ_g, σ^2_g), where μ_g ranged between 4 and 16 (to match the real data) and σ_g = 0.5. Finally, observed genes were generated according to Equation <ref>, x_ijg∼N(μ_ig + ∑_a=1^k l_gay_ija, ϕ_g^2), where ϕ_g = 0.5.
* Number of individuals n=17, genes p=100, latent factors k=4, time points q=8
* Generate the data from the assumed model (a generative sketch follows after this list)
* x_ijg∼N(μ_ig + ∑_a=1^k l_gay_ija, ϕ_g^2), where ϕ_g = 0.5
* μ_ig∼N(μ_g, σ^2_g), where μ_g ranged between 4 and 16 (to match the real data) and σ_g = 0.5
* (𝐘_i^T) ∼MVN(0, Σ_𝐘)
* l_ga = Z_ga· A_ga, where Z_ga∼Bern(0.1), A_ga∼N(4,1^2)
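A generative sketch of this setting (NumPy; Sigma_Y is whichever scenario-specific factor covariance is being studied, and drawing the gene-level means uniformly on [4, 16] is an illustrative assumption):

import numpy as np

def simulate_data(Sigma_Y, n=17, p=100, k=4, q=8, rng=None):
    """Generate one simulated dataset from the assumed model (a sketch)."""
    rng = rng or np.random.default_rng(0)
    Z = rng.binomial(1, 0.1, size=(p, k))           # each factor regulates ~10% of genes
    A = rng.normal(4.0, 1.0, size=(p, k))
    L = Z * A                                       # sparse loadings l_ga = Z_ga * A_ga
    mu_g = rng.uniform(4.0, 16.0, size=p)           # gene-level means in [4, 16] (assumed uniform)
    X = np.empty((n, q, p))
    for i in range(n):
        y = rng.multivariate_normal(np.zeros(k * q), Sigma_Y).reshape(k, q)  # vec(Y_i^T) draw
        mu_ig = rng.normal(mu_g, 0.5)               # subject-gene means, sigma_g = 0.5
        X[i] = rng.normal(mu_ig[None, :] + (L @ y).T, 0.5)                   # phi_g = 0.5
    return X, L, mu_g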
§.§ Analysis Methods
§.§.§ Setting of the Proposed Model
We fit the generated data using the proposed approach BSFA-DGP under different numbers of factors k=3, 4, 5. The true number is usually unavailable in practice. Therefore, we investigate how the mis-specification of k impacts model performance. In our model, we specified c_1=d_1=c_2=d_2=c_3=d_3=10^-2 for an uninformative Inverse-Gamma prior distribution for the variance terms. We also set c_0=10^-1· p and d_0 = (1-10^-1) · p to obtain a sparsity-inducing prior on Z_ga, as the prior expectation of π_a under this specification is 𝔼[π_a] = c_0/c_0 + d_0 = 10^-1, implying that only 10% of genes are expected to be involved in each pathway. For the pre-specified values of m and W in the MCEM algorithm, we set the rate of sample size increase to m=2 and the maximum number of attempts to increase the sample size to W=5.
§.§.§ Comparator Model
To compare with traditional approaches modeling each latent factor independently, we also fit the data assuming an IGP prior for each factor trajectory 𝐲_ia (other parts of the model remain unchanged); this model specification is referred to as BSFA-IGP.
§.§.§ Obtaining Good Initial Values
To provide good initial values for the MCEM algorithm, we implemented a two-step approach using available software. First, point estimates of latent factor scores y_ija were obtained via the BFRM software (described in Section <ref>). We centered the gene expression within each individual before inputting it into BFRM: the input data 𝐗^c consisted of the centered x_ijg^c, x_ijg^c=x_ijg-∑_j=1^q_i x_ijg/q_i. We did so because BFRM assumes independent data, therefore the intercept is specific to only a gene and not to a subject; in other words, it cannot estimate subject-gene mean μ_ig. Second, initial values of GP parameters were obtained by inputting 𝐘 to GPFDA, with either a DGP or IGP specification.
§.§ Performance Metrics
§.§.§ Evaluation of MCEM Algorithm
We evaluated two aspects of the performance of MCEM. Firstly, we compared the final correlation estimate with the truth and secondly, we monitored the change in the correlation estimate throughout the whole iterative process, to assess the speed of convergence of the algorithm.
§.§.§ Assessment Of Convergence
Before summarising the posterior using Gibbs samples from five parallel chains, we assessed convergence of the continuous variables by Gelman-Rubin diagnostic (also known as Rhat) <cit.>. Convergence of the latent factor scores y_ija and predictions of gene expression x_ijg^new were of particular interest. We used 1.2 as an empirical cutoff.
For factor loadings l_ga, which is the product of a binary variable Z_ga and a continuous variable A_ga, we first summarized the corresponding Z_ga: if the proportion of Z_ga=0 exceeded 0.5 for all chains, then l_ga was directly summarized as 0; otherwise l_ga=A_ga, and we assessed convergence for these factor loadings using Rhat.
Originally, for each chain, we set the total number of iterations to be 10,000, with 3000 burn-in iterations and retaining only every 10th iteration. However, we noticed that it was harder for factor scores y_ija and loadings l_ga to converge under certain cases: the “large variability” scenario for both DGP and IGP models, and the “small variability” scenario for only the IGP model. Therefore, for these cases we increased the number of iterations to 100,000, with 30,000 burn-in iterations and retaining only every 500th iteration. We list the largest Rhat for each variable type, x_ijg^new, y_ija, l_ga, in Supplementary Table 1. They reflect no apparent concerns over non-convergence.
§.§.§ Prediction of Gene Expression
We assessed the performance of predicting gene expressions using three metrics: mean absolute error (MAE_𝐗), mean width of the 95% predictive interval (MWI_𝐗), and proportion of genes within the 95% predictive interval (PWI_𝐗), which were calculated as:
MAE_𝐗 = ∑_i=1^n∑_j=u_1+1^u_1+u_2∑_g=1^p |x_ijg^0.5-x_ijg^true|/nu_2p,
MWI_𝐗 = ∑_i=1^n∑_j=u_1+1^u_1+u_2∑_g=1^p |x_ijg^0.975-x_ijg^0.025|/nu_2p,
PWI_𝐗 = ∑_i=1^n∑_j=u_1+1^u_1+u_2∑_g=1^p I(x_ijg^0.025 < x_ijg^true<x_ijg^0.975)/nu_2p,
where x_ijg^true denotes the true value of x_ijg, and x_ijg^0.025, x_ijg^0.5, x_ijg^0.975 denotes 2.5%, 50%, and 97.5% quantiles of the posterior samples, respectively; and I(x_ijg^0.025 < x_ijg^true<x_ijg^0.975)=1 if x_ijg^true is within the predictive interval, otherwise it is 0.
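Given posterior predictive draws on the test grid, these three metrics can be computed in a few lines; the array shapes below are our assumption about how the draws are stored.

import numpy as np

def prediction_metrics(draws, x_true):
    """MAE_X, MWI_X and PWI_X from posterior predictive draws.
    draws: (n_draws, n, u2, p) array; x_true: (n, u2, p) array."""
    lo, med, hi = np.quantile(draws, [0.025, 0.5, 0.975], axis=0)
    mae = np.mean(np.abs(med - x_true))
    mwi = np.mean(hi - lo)
    pwi = np.mean((x_true > lo) & (x_true < hi))
    return mae, mwi, pwi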
§.§.§ Recovery of Latent Factor Trajectories
To assess the performance in recovering factor trajectories underlying gene expression observed in the training data, we first present an overview of estimation results using the metric MAE_𝐘:
MAE_𝐘 = ∑_i=1^n∑_j=1^u_1∑_a=1^k|y_ija^0.5-y_ija^true|/nu_1k,
where y_ija^true is the true value of y_ija, and y_ija^0.5 is the 50% quantile of the posterior samples.
In addition, we plot true and estimated factor trajectories for visual comparison. For the convenience of discussing results, we present trajectories for factor 1 of person 1 as an example. This chosen person-factor estimation result is representative of all factors of all people.
Note that in the scenarios CS and US, true (𝐘_i^T) was rescaled as 𝐃^-1(𝐘_i^T) before the MAE_𝐘 calculation, where 𝐃 is a diagonal matrix consisting of the square root of corresponding diagonal elements of the true covariance matrix Σ_𝐘. This was to account for the difference in scale between the true and estimated factor trajectories caused by the unidentifiability of the factor covariance matrix, as discussed in Section <ref>. However, we did not apply such transformation when plotting the factor trajectories because the shape, in practice, can be estimated well, even though the true scale cannot be recovered.
§.§.§ Estimation of Factor Loadings
We evaluated the ability to estimate factor loadings from two perspectives. First, for a specific factor, could the model identify all genes that were truly affected by this factor? Second, for a specific gene, could the model identify all factors that regulate this gene? To answer these questions, we calculated 95% credible intervals for l_gas and presented those l_gas of which the interval did not contain 0 in the heatmap, using posterior median estimates. For the convenience of comparing estimates with the truth, true factor loadings under the scenarios CS and US were displayed after scaling by 𝐃. Note that genes displayed in the heatmap are ordered following two rules: first, genes on factors with smaller indexes are ranked first; second, genes with larger absolute factor loadings are ranked first.
§.§ Results
§.§.§ Number of Factors Correctly Specified
When the number of latent factors k is correctly specified, the MCEM algorithm can recover the true auto-correlation and cross-correlation well. In terms of auto-correlation, DGP and IGP return similar results. The unique advantage of DGP over IGP is the ability to estimate cross-correlation. Under all data generation mechanisms considered, the algorithm returns satisfactory estimation of cross-correlation (Figure <ref>).
We also present an example showing the evolution of cross-correlation estimates across iterations under the scenario CS in Supplementary Figure 2. The initial estimate (from the two-step approach) is poor, but MCEM is able to propose estimates that rapidly approximate the truth.
In terms of predicting gene expression, similar PWI_𝐗 is observed for both models. However, the DGP specification always leads to smaller MAE_𝐗 and narrower MWI_𝐗, even when the factors are uncorrelated in truth. This indicates more accuracy and less uncertainty in prediction (Table <ref>).
Summary results in Table <ref> suggest that estimating factors is generally easier when the true variability of factors is larger. A closer inspection of factor trajectories delineated in Figure <ref> further confirms this: under scenarios CL and UL, both DGP and IGP models are able to recover the shape of factor trajectories very well. This could be explained by the relatively strong expression of latent factors. In contrast, in the case of CS and US, where expression is relatively weak, recovery of details of factor shapes is harder: if factors are not correlated at all (scenario US), both DGP and IGP fail to recover the details at the 2nd and 5th time point, though the overall shape is still close to the truth. However, if factors are truly correlated (scenario CS), DGP is able to recover trajectories very well, due to its ability to borrow information from other related factors.
With regard to factor loading estimation, Figure <ref> shows results under the scenario CS, and Supplementary Figures 3-5 show results for the remaining scenarios. Both DGP and IGP perform well in the first task (identify the correct genes for a given factor) under all scenarios. However, for the second task (identify the correct factors for a given gene), although both models perform well under scenarios CL, UL and US, we observe that IGP performs less well under the scenario CS while DGP still performs well. As can be seen from Figure <ref>, IGP specification leads to the result that genes that are estimated to be significantly loaded on the first and second factor also significantly load on the fourth factor. One possible explanation for this is that, if in truth, factors have strong signals and/or are not correlated at all (corresponding to scenarios CL, UL and US), then it is relatively easy to distinguish contributions from different factors. In contrast, under the scenario CS where factors actually have weak signals and are highly-correlated (true correlation between factor 1 and 4 is -0.69, and correlation between factor 2 and 4 is -0.71, as can be found from Figure <ref>), it would be more difficult. DGP specification could greatly improve the estimation result as it explicitly takes the correlation among factors into consideration.
§.§.§ Number of Factors Misspecified
We also investigated the performance of our approach when the number of latent factors k is misspecified. The performance of BSFA-DGP in predicting gene expression is similar to when k is correctly specified (Supplementary Table 2). In addition, when k is misspecified to 5 (larger than the truth), the DGP model detected the redundant factor under all scenarios. Supplementary Figure 6 shows an example under scenario CL. In the estimated factor loading matrix, there is one factor with no genes significantly loaded on it, which indicates that a smaller k may suffice. In practice, we suggest users specify k as the upper limit of the expected number of latent factors initially, and decrease it accordingly if the result suggests redundancy.
§ DATA APPLICATION
§.§ Data Description
The human challenge study described in <cit.> is used as a real data example to illustrate the proposed approach. In this study, 17 healthy individuals were inoculated with the H3N2 influenza virus, and their blood samples were collected at regular time intervals until the individuals were discharged after a fixed period of 7 days. The blood samples were then assayed with DNA microarray technology, to produce gene expression values for 11,961 genes. Additionally, each participant was assigned a binary label based on a clinical assessment: symptomatic, or asymptomatic. This dataset will be called the "H3N2 data" below, and it is publicly available from this website: http://people.ee.duke.edu/~lcarin/reproduce.html.
§.§ Analysis Methods
To fit the BSFA-DGP model to the H3N2 data, we adopted the same constant parameters as those described in the simulation study. We ran 500,000 iterations for the final Gibbs sampler, with a 50% burn-in proportion, retaining only every 100th iteration. We also compared our results with two alternative models, both of which assume independence between factors: the BSFA-IGP model, and a previous model in <cit.>, which adopted spline functions to model each factor trajectory independently.
§.§ Statistical Results
In terms of the prediction performance, similar PWI_𝐗 is observed under BSFA-DGP (0.951) and BSFA-IGP (0.956); whereas MAE_𝐗 and MWI_𝐗 are both smaller under BSFA-DGP (MAE_𝐗 0.211; MWI_𝐗 1.113) than BSFA-IGP (MAE_𝐗 0.217; MWI_𝐗 1.213). This again demonstrates the advantage of reducing prediction error and uncertainty if cross-correlations among factors are taken into consideration. With respect to the recovery of latent factors, both BSFA-DGP and BSFA-IGP have similar results, hence we will only discuss results under BSFA-DGP below and compare them with those in <cit.>.
Estimated factor trajectories are displayed in Figure <ref>. Among all, factor 1 is able to distinguish symptomatic people from asymptomatic people most clearly, therefore we investigate this factor further. We find that its shape is largely similar to the “principal factor” identified in <cit.>. To facilitate comparison, Figure <ref> displays the factor trajectory for all people in the same format as Figure 4 in <cit.>. For symptomatic people, both factors display an increase after the inoculation (time 0), then decrease to a level that is higher than before-inoculation; but for asymptomatic people, both factor shapes have little change.
In addition to the shape similarity, genes significantly loaded on factor 1 are also largely similar to those loaded on <cit.>'s principal factor. <cit.> list the top 50 genes sorted according to the absolute loading values (from most important to least important); we present this alongside the full list of our top 50 genes in Supplementary Table 3 for comparison. 33 out of the 50 genes are the same.
Despite the similarity between factor 1 estimated by our BSFA-DGP and the principal factor estimated by <cit.>, as discussed above, it is noteworthy that the trajectory of factor 1 is actually more individualized and informative than the principal factor. This can be seen by observing that, for symptomatic people, the shapes of the principal factor are exactly the same after the factor starts changing (Figure 4 in <cit.>) whereas the shapes of factor 1 still vary locally for different subjects, as can be seen from Figure <ref>. This difference is caused by the different assumptions underlying the two models. <cit.> assume that the dynamic trajectory of any factor is common to all individuals. Different factor values are observed at the same time point for different people only due to individual-dependent biological time shifts and random noise. Therefore, symptomatic and asymptomatic individuals are distinguished based on the time shift (Figure 4 in <cit.>). In contrast, we assume that the trajectory of each factor comes from a distribution (i.e., there is no common curve for every individual), therefore allowing for different curve shapes for different people. We make direct use of the diversity of curve shapes to distinguish patients without introducing the “time shift” quantity.
§.§ Biological Intepretation
To identify the biological counterpart (i.e., pathway) of the statistical factor, we used the KEGG Pathway analysis of the online bioinformatics platform DAVID (https://david.ncifcrf.gov/home.jsp). We uploaded the total of 11,961 genes as the "Background" and the top 50 genes loaded on factor 1 as the "Gene List".
Results show that these selected genes are significantly enriched on several pathways, including the pathways associated with Influenza A and Covid-19 (see Supplementary Figure 7 for a full list of pathway results returned by DAVID). What these pathways have in common is that they are all related to human innate immune and inflammatory response. This suggests that the genes selected by our model play important roles during the biological processes of detecting viral RNA and initiating an immune and inflammatory response, which is as expected because the data we analyze comes from a viral infection study.
§ DISCUSSION
In this paper we propose a BSFA-DGP model, which relaxes the classical assumption of independent factors when mapping the high-dimensional gene expressions to low-dimensional latent factor representations. By borrowing information from correlated factor trajectories, this model has demonstrated advantages in both gene expression prediction and latent factor recovery compared to other models.
It is worth mentioning that, although the proposed model was motivated by the specific context of gene-pathway relationship, it can be applied to other fields where such latent factor structure exists. For example, in Alzheimer's disease, three latent dimensions (cerebral anatomy, cognitive ability and functional autonomy) have been defined underlying six measured markers based on previous knowledge (see Figure 1 in <cit.> for more details). Understanding how these dimensions change over time according to the clinical stage is of interest in Alzheimer's research. Our model can be applied in this case to infer both the trajectories of the latent dimensions and the factor loading structure directly from data, which may discover factors that were not identified previously.
We also develop an MCEM algorithm for the inference of the model; the main motivation for adopting this algorithm is that it can make full use of the existing package to estimate DGP parameters. In the M-step of MCEM, GPFDA can be directly implemented to find the maximizer of the Q-function; thus we can exploit the existing optimization algorithm. In addition to its implementation convenience, the modular structure of the MCEM framework makes it generalizable to other complex models involving the DGP model component. MCEM comprises two relatively simple parts: one is to obtain empirical estimates of DGP parameters Θ given Monte Carlo samples of 𝐘 (M-step), using the GPFDA package; the other is to generate samples 𝐘 under a fixed estimate of Θ (E-step), using a standard MCMC sampler. For example, when using a DGP model for the latent continuous variable introduced to model multivariate dependent, non-continuous data (such as binary or count data), <cit.> used Laplace Approximation; the MCEM framework we develop would be an alternative inference approach in this case.
Computation time of our approach is largely dependent on GPFDA, which returns hyperparameter estimates for the DGP to maximize the exact likelihood of observing the inputs 𝐘.
GPFDA does not scale well with n, and this means our approach does not scale well with sample size.
Simulation results showed that the algorithm could not end within 36 hours once n was larger than 500.
Potential solutions to this scalability issue include the model approximation methods proposed in <cit.>, or variational inference discussed in <cit.>.
However, our approach does scale well with p: our MCEM algorithm took around 1.5 hours for the real data in Section <ref> and 4 minutes for the simulated data in Section <ref> on a standard laptop (Quad-Core Intel Core i5).
The good scaling in p is because the number of latent factors k input to GPFDA is always small regardless of the number of biomarkers.
Throughout this paper, several assumptions were made in the model, and below we will discuss how they could be relaxed in future work.
First, when using GPs to model latent factor trajectories, we assume the mean function to be 0. In practice, if prior knowledge suggests that there are covariates that may affect the pathway-level expression, we could easily incorporate these covariates to model the mean function of GP <cit.>. This extension will not change the original covariance structure imposed on the latent trajectories, therefore the MCEM framework will still work for this modified model. In addition, if there is prior knowledge that one or more of the latent factors have a particular shape, then this shape can be described by the mean functions of the relevant GPs. For example, if it is a priori known that the cell cycle pathway is involved, then the mean function can be chosen to have a sinusoidal shape to capture this information.
Another assumption is that the impact of latent factors on genes is time-invariant (i.e., factor loading 𝐋 is constant across all time points). This assumption may not be satisfied in practice; a potential solution would be to use Hidden Markov Models <cit.> to introduce a hidden, discrete variable that varies with time, of which different states correspond to different factor loading matrices.
In addition, the number of latent factors k was fixed and needed to be pre-specified. Though our model was able to identify redundant factors (as demonstrated in the simulation study), an automatic approach to infer this number from the data may be preferable in some contexts. A potential solution would be to introduce the Indian Buffet Process as a prior distribution over equivalence classes of infinite-dimensional binary matrices <cit.>.
§ SOFTWARE
The R code used for implementing the proposed BSFA-DGP model is available as an R package, DGP4LCF, on Github: <https://github.com/jcai-1122/DGP4LCF>. The package contains vignettes, which illustrate the usage of the functions within the package by applying them to analyze a simulated dataset. The release used in this paper is available at https://doi.org/10.5281/zenodo.8108150.
§ ACKNOWLEDGMENTS
This work is supported through the United Kingdom Medical Research Council programme grants and . The authors are grateful to Oscar Rueda for the helpful discussion on the bioinformatics side of the project.
Conflict of Interest: None declared.
Supplementary Materials
§ A. MATHEMATICAL DERIVATIONS
§.§ A.1. Full Conditionals of the Gibbs Sampler for the Proposed Model
Throughout the following derivation, "∘" denotes element-wise multiplication; "-" denotes the observed data and all parameters in the model other than the parameter under derivation; "||𝐳||^2" denotes the sum of squares of the elements of the vector 𝐳; "diag(𝐳)" denotes a diagonal matrix with the vector 𝐳 as its main diagonal; "MVN(𝐳; μ_𝐳, Σ_𝐳)" denotes that 𝐳 follows a multivariate normal distribution with mean μ_𝐳 and covariance Σ_𝐳, and similar interpretations apply to other distributions. "𝐳^T" denotes the transpose of the vector or matrix 𝐳, and the superscript "pos" is short for "posterior".
* Full conditional for the latent factors (𝐘_i^T),i=1,…,n
f((𝐘_i^T)|-)
= f((𝐘_i,obs^T)|-) · f((𝐘_i,miss^T)|(𝐘_i,obs^T),Σ_𝐘)
We first sample for (𝐘_i,obs^T):
f((𝐘_i,obs^T)|-)
∝MVN((𝐗_i^T); (𝐌_i^T)+ 𝐋_i^*(𝐘_i,obs^T), Σ_𝐗_i)
·MVN((𝐘_i,obs^T); 0, Σ_𝐘_i,obs)
=MVN(μ_𝐘_i,obs^pos, Σ_𝐘_i, obs^pos),
with
Σ^pos_𝐘_i,obs =[𝐋_i^*^TΣ_𝐗_i^-1𝐋_i^*+Σ_𝐘_i,obs^-1]^-1
μ^pos_𝐘_i,obs =Σ^pos_𝐘_i,obs (𝐋_i^*^TΣ_𝐗_i^-1((𝐗_i^T)-(𝐌_i^T))).
Then we sample for (𝐘_i,miss^T) from f((𝐘_i,miss^T)|(𝐘_i,obs^T), Σ_𝐘), a MVN distribution due to the property of the DGP model.
In the above equations:
* Σ_𝐘 is the covariance matrix of factor scores at full time 𝐭, and Σ_𝐘_i,obs is a sub-matrix of it (at subject-specific time points);
* 𝐋_i^* is constructed using components of the factor loading matrix 𝐋: 𝐋_i^*=(𝐋_1^*,...,𝐋_p^*)^T∈ℝ^pq_i × kq_i, where (𝐋_g^*)^T=(diag(l_g1)_q_i × q_i,...,diag(l_gk)_q_i × q_i) ∈ℝ^q_i × kq_i;
* Σ_𝐗_i=diag((ϕ_1^2)_× q_i,...,(ϕ_p^2)_× q_i) ∈ℝ^pq_i × pq_i, where (ϕ_a^2)_× q_i represents a q_i-dimensional row vector consisting of the scalar ϕ_a^2.
Note that when coding the algorithm, there is no need to explicitly create the pq_i × pq_i diagonal matrix Σ_𝐗_i^-1, as this would exhaust the memory. To calculate the term 𝐋_i^*^TΣ_𝐗_i^-1, we can use a property of multiplication by a diagonal matrix: post-multiplying by a diagonal matrix is equivalent to multiplying each column of the first matrix by the corresponding diagonal element.
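A minimal illustration of this memory-saving trick (all array names and sizes below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
k, q_i, p = 4, 9, 50
L_star = rng.normal(size=(p * q_i, k * q_i))   # stand-in for L_i^*
phi2 = rng.uniform(0.5, 2.0, size=p)           # gene-specific variances phi_g^2
sigma_x_diag = np.repeat(phi2, q_i)            # diagonal of Sigma_X_i

# Naive version: build the full pq_i x pq_i diagonal matrix explicitly.
naive = L_star.T @ np.diag(1.0 / sigma_x_diag)

# Memory-saving version: post-multiplying by a diagonal matrix only rescales
# the columns of L_i^*^T, so a broadcast division is enough.
cheap = L_star.T / sigma_x_diag
assert np.allclose(naive, cheap)
```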
* Full conditional for the binary matrix 𝐙
Let 𝐙_g · = (Z_g1,...,Z_gk) denote the gth row of the matrix 𝐙, g=1,...,p; then,
f(𝐙_g ·|-)
∝∏_i=1^nMVN(𝐱_ig; μ_ig + (𝐀_g ·∘𝐙_g ·)𝐘_i, diag(ϕ_g^2, q_i)) ·∏_a=1^kBernoulli(Z_ga; π_a),
We calculate the posterior probability of each of the 2^k possible values of 𝐙_g · based on the above formula, and then sample 𝐙_g · from the resulting discrete distribution.
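A sketch of this enumeration step for a single gene g is given below, working with log-probabilities for numerical stability; the array names and the simplified set-up (a common number of time points q for all subjects) are illustrative assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
n, q, k = 6, 5, 3
Y = rng.normal(size=(n, k, q))        # factor scores Y_i (k x q), common q here
x_g = rng.normal(size=(n, q))         # expression of gene g for each person
mu_g = rng.normal(size=n)             # intercepts mu_ig
A_g = rng.normal(size=k)              # regression coefficients A_ga
phi2_g = 1.0                          # residual variance phi_g^2
pi = np.full(k, 0.3)                  # inclusion probabilities pi_a

candidates = list(itertools.product([0, 1], repeat=k))   # the 2^k possible rows
log_post = []
for z in candidates:
    z = np.asarray(z)
    resid = x_g - mu_g[:, None] - np.einsum('a,iaq->iq', A_g * z, Y)
    loglik = -0.5 * np.sum(resid ** 2) / phi2_g     # Gaussian kernel, up to a constant
    logprior = np.sum(z * np.log(pi) + (1 - z) * np.log1p(-pi))
    log_post.append(loglik + logprior)

log_post = np.asarray(log_post)
prob = np.exp(log_post - log_post.max())
prob /= prob.sum()
Z_g = np.asarray(candidates[rng.choice(len(candidates), p=prob)])   # one Gibbs draw
```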
* Full conditional for the regression coefficient matrix 𝐀
Let 𝐀_g · = (A_g1,...,A_gk) denote the gth row of the matrix 𝐀, g=1,...,p; then,
f(𝐀_g ·|-)
∝∏_i=1^nMVN(𝐱_ig; μ_ig + (𝐀_g ·∘𝐙_g ·)𝐘_i, diag(ϕ_g^2, q_i)) ·MVN(𝐀_g ·; 0, diag(ρ^2))
=MVN(𝐀_g ·; μ_𝐀_g ·^pos, Σ_𝐀_g ·^pos),
where
Σ_𝐀_g ·^pos =(diag(𝐙_g·)(∑_i=1^n𝐘_i^T𝐘_i)diag(𝐙_g·)/ϕ^2_g+diag(1/ρ^2))^-1
μ_𝐀_g ·^pos =Σ_𝐀_g ·(diag(𝐙_g·)∑_i=1^n𝐘_i(𝐱_ig-μ_ig))/ϕ^2_g.
and ρ^2=(ρ^2_1,...,ρ^2_k).
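A sketch of one corresponding Gibbs draw of 𝐀_g ·, with assumed inputs, could look as follows.

```python
import numpy as np

rng = np.random.default_rng(4)
n, q, k = 6, 5, 3
Y = rng.normal(size=(n, k, q))                  # factor scores Y_i (k x q)
x_g = rng.normal(size=(n, q))                   # expression of gene g
mu_g = rng.normal(size=n)                       # intercepts mu_ig
Z_g = np.array([1, 0, 1])                       # current inclusion indicators
phi2_g = 1.0                                    # residual variance phi_g^2
rho2 = np.full(k, 2.0)                          # prior variances rho_a^2

YYt = np.einsum('iaq,ibq->ab', Y, Y)                   # sum_i Y_i Y_i^T  (k x k)
Yx = np.einsum('iaq,iq->a', Y, x_g - mu_g[:, None])    # sum_i Y_i (x_ig - mu_ig)

D = np.diag(Z_g.astype(float))
prec = D @ YYt @ D / phi2_g + np.diag(1.0 / rho2)      # posterior precision
cov = np.linalg.inv(prec)                              # Sigma_{A_g.}^pos
mean = cov @ (D @ Yx) / phi2_g                         # mu_{A_g.}^pos

A_g = rng.multivariate_normal(mean, cov)               # one Gibbs draw of A_g.
```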
* Full conditional for the intercept μ_ig, i=1,...,n; g=1,...,p
f(μ_ig|-)
∝∏_j=1^q_iN(x_ijg; μ_ig + ∑_a=1^kl_gay_ija, ϕ^2_g) ·N(μ_ig;μ_g, σ^2_g)
= N(μ_ig; μ_ig^pos, σ_ig^2,pos),
where
σ_ig^2,pos = (1/σ^2_g + q_i/ϕ^2_g)^-1
μ_ig^pos = (μ_g/σ^2_g + ∑_j=1^q_i(x_ijg-∑_a=1^kl_gay_ija)/ϕ^2_g) ·σ_ig^2,pos
* Full conditional for π_a, a=1,...,k
f(π_a|-)
∝∏_g=1^pBernoulli(Z_ga; π_a) ·Beta(π_a; c_0, d_0)
=Beta(c_0+∑_g=1^pZ_ga, d_0+∑_g=1^p(1-Z_ga))
* Full conditional for ρ_a^2, a=1,...,k
f(ρ_a^2|-)
∝∏_g=1^pN(A_ga;0, ρ^2_a) ·Inverse-Gamma(ρ_a^2; c_1, d_1)
=Inverse-Gamma(c_1+p/2, d_1+1/2∑_g=1^pA_ga^2)
* Full conditional for σ_g^2, g=1,...,p
f(σ_g^2|-)
∝∏_i=1^nN(μ_ig;μ_g, σ^2_g) ·Inverse-Gamma(σ^2_g; c_2, d_2)
=Inverse-Gamma(c_2+1/2n, d_2+1/2∑_i=1^n(μ_ig-μ_g)^2)
* Full conditional for ϕ_g^2, g=1,...,p
f(ϕ_g^2|-)
∝∏_i=1^nMVN(𝐱_ig; μ_ig + (𝐀_g ·∘𝐙_g ·)𝐘_i, diag(ϕ_g^2, q_i)) ·Inverse-Gamma(ϕ^2_g; c_3, d_3)
=Inverse-Gamma(c_3+1/2∑_i=1^n q_i, d_3+1/2∑_i=1^n||𝐱_ig-μ_ig-(𝐀_g·∘𝐙_g·)𝐘_i||^2)
* Full conditional for predictions of gene expression (only implemented when assessing models' prediction performance on the test dataset)
Suppose that 𝐗_i^new, 𝐘_i^new represent predicted gene expression and factor expression of the ith individual at new time points, respectively. The posterior predictive distribution under MCEM-algorithm-returned Θ^MLE can be expressed as
f(𝐗_i^new| Θ^MLE, Ω)
= ∫ f(𝐗_i^new, 𝐘_i^new| Θ^MLE, Ω) d 𝐘_i^new
= ∫ f(𝐗_i^new|𝐘_i^new, Ω) · f(𝐘_i^new| Θ^MLE,𝐘_i,obs) d 𝐘_i^new,
where the first term of the integrand is an MVN because of the assumed factor model, and the second term is also an MVN because of the assumed DGP model on the latent factor trajectories <cit.>. Therefore, once a sample of parameters Ω^r is generated, the rth sample of 𝐘_i^new can be generated from f(𝐘_i^new| Θ^MLE,𝐘_i,obs^r), and then the rth sample of 𝐗_i^new can be sampled from f(𝐗_i^new| 𝐘_i^new,Ω^r).
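A minimal sketch of this two-stage sampling for a single posterior draw is given below; it uses standard Gaussian conditioning for the DGP step, and the joint covariance, loadings and intercepts are random stand-ins rather than quantities produced by the actual algorithm.

```python
import numpy as np

def conditional_mvn(rng, y_obs, S_oo, S_no, S_nn):
    """Draw Y_new | Y_obs for a zero-mean jointly Gaussian vector, where
    S_oo = Cov(Y_obs), S_no = Cov(Y_new, Y_obs) and S_nn = Cov(Y_new)."""
    mean = S_no @ np.linalg.solve(S_oo, y_obs)
    cov = S_nn - S_no @ np.linalg.solve(S_oo, S_no.T)
    cov = 0.5 * (cov + cov.T)                  # guard against numerical asymmetry
    return rng.multivariate_normal(mean, cov)

rng = np.random.default_rng(5)
kq_obs, kq_new, p = 8, 4, 10                   # hypothetical dimensions

# Random stand-in for the joint DGP covariance of (Y_obs, Y_new).
M = rng.normal(size=(kq_obs + kq_new, kq_obs + kq_new))
S = M @ M.T + np.eye(kq_obs + kq_new)
S_oo, S_no, S_nn = S[:kq_obs, :kq_obs], S[kq_obs:, :kq_obs], S[kq_obs:, kq_obs:]

y_obs = rng.multivariate_normal(np.zeros(kq_obs), S_oo)  # stand-in Gibbs sample of Y_i,obs
L_new = rng.normal(size=(p, kq_new))                     # stand-in loading map at new times
mu_new = rng.normal(size=p)                              # stand-in intercepts
phi2 = np.ones(p)                                        # stand-in residual variances

# Step 1: draw the new factor values conditional on the observed ones (DGP step).
y_new = conditional_mvn(rng, y_obs, S_oo, S_no, S_nn)

# Step 2: draw the predicted gene expression given the new factor values (factor model step).
x_new = rng.normal(mu_new + L_new @ y_new, np.sqrt(phi2))
```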
§.§ A.2. Kernel Convolution Framework to Model Dependent Gaussian Processes
§.§.§ A.2.1. Illustration Using Two Processes
The KCF constructs correlated processes by introducing common “base processes”. Take two processes y_a(t) and y_b(t) as an example. KCF constructs them as,
y_a(t) =η_a(t)+ξ_a(t)+ϵ_a(t),
y_b(t) =η_b(t)+ξ_b(t)+ϵ_b(t),
where ϵ_a(t), ϵ_b(t) are residual errors from
N(0, ψ^2), and η_a(t), η_b(t), ξ_a(t), ξ_b(t) are processes constructed in the following way, illustrated in Supplementary Figure <ref> below.
First, three independent, zero-mean base processes τ_0(t), τ_a(t) and τ_b(t) are introduced, which are all Gaussian white noise processes. The first process τ_0(t) is shared by both y_a(t) and y_b(t), thereby inducing dependence between them. In contrast, τ_a(t) and τ_b(t) are specific to y_a(t) and y_b(t), respectively; they are responsible for capturing the unique aspects of each process.
Second, Gaussian kernel functions h_a0(t), h_a1(t), h_b0(t), h_b1(t) are applied to convolve the base processes: with h_-0(t) applied to the shared process τ_0(t) and h_-1(t) to the output-specific processes τ_a(t) and τ_b(t),
ξ_a(t) =h_a0(t)*τ_0(t), η_a(t) =h_a1(t)*τ_a(t),
ξ_b(t) =h_b0(t)*τ_0(t), η_b(t) =h_b1(t)*τ_b(t),
where the convolution operator * is defined as h(t) * τ(t) = ∫^∞_-∞ h(t-s)τ(s)ds. All kernel functions h(t) take the form h(t) = v exp{-1/2Bt^2}, where v and B are positive parameters that are specific to each kernel function.
§.§.§ A.2.2. Specific Form of Covariance Function
Under the kernel convolution framework, the covariance function between the time t_j and t_ℓ within a single process a, denoted as C_aa^Y(t_j, t_ℓ), can be decomposed as,
C_aa^Y(t_j,t_ℓ) =C_aa^ξ(t_j,t_ℓ)+C_aa^η(t_j,t_ℓ)+δ_jℓψ^2,
C_aa^ξ(t_j,t_ℓ) =v_a0^2(π)^1/2/√(|B_a0|)exp{-1/4B_a0d_t^2},
C_aa^η(t_j,t_ℓ) =v_a1^2(π)^1/2/√(|B_a1|)exp{-1/4B_a1d_t^2},
where j, ℓ are time indexes, d_t=t_j-t_ℓ, and δ_jℓ=1 if j=ℓ, otherwise δ_jℓ=0.
The covariance function between the time t_j of process a and t_ℓ of process b, denoted as C_ab^Y(t_j,t_ℓ), can be expressed as,
C_ab^Y(t_j,t_ℓ) =C_ab^ξ(t_j,t_ℓ), a b,
C_ab^ξ(t_j,t_ℓ) =v_a0v_b0(2π)^1/2/√(|B_a0+B_b0|)exp{-1/2B_ab d_t^2},
where B_ab= B_a0B_b0/B_a0+B_b0.
Note that the original paper <cit.> provides derivation results for more general cases. Let Q denote the dimension of the input variable t_j, and M the number of shared base processes τ_0(t). In our proposed approach, Q=1 and M=1. In <cit.>, Q and M can be arbitrary positive integers.
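The covariance expressions of Section A.2.2 can be transcribed directly; the sketch below (with illustrative parameter values) builds the joint covariance matrix of two correlated processes on a common grid and checks that it is positive semi-definite.

```python
import numpy as np

def within_cov(t, v0, B0, v1, B1, psi2):
    """C_aa^Y on a time grid t: shared term + specific term + noise."""
    d = t[:, None] - t[None, :]
    shared = v0 ** 2 * np.sqrt(np.pi / B0) * np.exp(-0.25 * B0 * d ** 2)
    specific = v1 ** 2 * np.sqrt(np.pi / B1) * np.exp(-0.25 * B1 * d ** 2)
    return shared + specific + psi2 * np.eye(len(t))

def cross_cov(t, va0, Ba0, vb0, Bb0):
    """C_ab^Y (a != b): only the shared base process contributes."""
    d = t[:, None] - t[None, :]
    Bab = Ba0 * Bb0 / (Ba0 + Bb0)
    return va0 * vb0 * np.sqrt(2.0 * np.pi / (Ba0 + Bb0)) * np.exp(-0.5 * Bab * d ** 2)

# Joint covariance of (y_a(t), y_b(t)) stacked on a common grid, illustrative values.
t = np.linspace(0.0, 7.0, 8)
Caa = within_cov(t, v0=1.0, B0=0.8, v1=0.5, B1=1.5, psi2=0.1)
Cbb = within_cov(t, v0=0.7, B0=1.2, v1=0.6, B1=0.9, psi2=0.1)
Cab = cross_cov(t, va0=1.0, Ba0=0.8, vb0=0.7, Bb0=1.2)
joint = np.block([[Caa, Cab], [Cab.T, Cbb]])

# Sanity check: the joint covariance must be (numerically) positive semi-definite.
assert np.min(np.linalg.eigvalsh(joint)) > -1e-8
```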
§.§ A.3. Choosing the Gibbs Sample Size within the MCEM Algorithm
§.§.§ A.3.1. Literature Review
A thorough review of strategies for choosing R can be found in <cit.>. We focus on approaches that automatically adjust the sample size to avoid tedious manual tuning; <cit.> provided a review of this class of approaches. Briefly, there are primarily two methods with the main difference between them being the criterion to increase the sample size. The first approach <cit.> achieves automatic tuning by monitoring the Monte Carlo error associated with each individual parameter, which is the approximation error incurred when using Monte Carlo samples to approximate the exact expectation. Additional samples are needed if the Monte Carlo error for any of the parameters is deemed too large. However, this approach may be difficult to implement here because our experiments with GPFDA revealed that individual DGP parameters are not identifiable: for the same input data, different runs of GPFDA could return differing individual estimates yet still ensure similar estimates of the covariance matrix (therefore similar marginal likelihoods).
An alternative, proposed by <cit.>, considers increasing the sample size dependent on whether the ascent property of the marginal likelihood under EM is preserved or not <cit.>. For exact EM, where the expectation can be calculated precisely, the likelihood function is non-decreasing as the algorithm progresses. However, under MCEM, where the E-step is estimated with Monte Carlo samples, it is possible for the likelihood to decrease, due to approximation error. The algorithm by <cit.> ensures that the likelihood still increases with a high probability, as the algorithm iterates, by introducing a mechanism for rejection of proposed parameter estimates. Specifically, they use the Q-function as a proxy for the marginal likelihood. An updated Θ^(l) will be accepted only when it increases the Q-function compared to the previous Θ^(l-1); otherwise, the algorithm will increase the sample size and will propose a new Θ^(l) using the larger sample.
§.§.§ A.3.2. Adaptation of Caffo's Approach to Our Model
To apply the method of <cit.> to our model, we begin by writing out the exact Q-function after the (l-1)th iteration of EM. Following Equations 3.5 and 3.8 of the main manuscript, Q(Θ,Θ^(l-1)) = 𝔼_𝐘[lnf(𝐘|Θ)|𝐗,Θ^(l-1)].
Thus, the change in the value of the Q-function after obtaining an updated Θ^(l) compared to the current Θ^(l-1) can be represented as,
Δ Q
=Q(Θ^(l),Θ^(l-1))-Q(Θ^(l-1),Θ^(l-1))
=𝔼_𝐘[lnf(𝐘|Θ^(l))/f(𝐘|Θ^(l-1)) | 𝐗,Θ^(l-1)]
=𝔼_𝐘[g(𝐘)],
where g(𝐘)=lnf(𝐘|Θ^(l))/f(𝐘|Θ^(l-1)), and the expectation is with respect to f(𝐘|𝐗,Θ^(l-1)).
Under MCEM, we approximate this change Δ Q using the approximate Q-function Q in Equation 3.7,
ΔQ =Q(Θ^(l),Θ^(l-1))-Q(Θ^(l-1),Θ^(l-1))
=1/R∑_r=1^Rlnf(𝐘^r|Θ^(l))/f(𝐘^r|Θ^(l-1))
=1/R∑_r=1^Rg(𝐘^r), 𝐘^r∼ f(𝐘|𝐗,Θ^(l-1))
=g_R.
A generalized version of the Central Limit Theorem (CLT) <cit.> shows that g_R converges, in distribution, to N(𝔼_𝐘[g(𝐘)],ζ/R) as R →∞, where ζ = Var(g(𝐘)) can be estimated using either the sample variance of g(𝐘^r) when the samples {𝐘^r}_r=1,...,R are independent, or the batch means approach <cit.> when the samples are dependent (as is the case here, because they are obtained using an MCMC sampler). This implies that,
P(g_R-𝔼_𝐘[g(𝐘)]/√(ζ̂/R)<Z_1-α) ≈ 1-α; or equivalently,
P(Δ Q>ΔQ-√(ζ̂/R)Z_1-α) ≈ 1-α,
where ζ̂ is the estimate of ζ and Z_1-α is the upper α quantile of the standard normal distribution. (ΔQ-√(ζ̂/R)Z_1-α) is called the "Lower Bound" (LB) for Δ Q <cit.> because there is a high chance that Δ Q is larger than this estimator if we choose α to be small. When LB is positive, it is highly likely that Δ Q is also positive. The automatic updating rule for the sample size is based on LB. In the lth iteration, if LB is positive, then we accept the updated Θ^(l) and keep the current sample size R; otherwise, we reject Θ^(l) and continue generating additional samples under Θ^(l-1) before updating again. <cit.> suggests a geometric rate of increase for the sample size, drawing R/m additional samples for some fixed m.
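A compact sketch of this accept/increase rule is given below; the per-sample log-likelihood ratios g(𝐘^r) are replaced by random stand-ins, and the sample variance is used in place of the batch-means estimate for simplicity.

```python
import numpy as np
from scipy import stats

def lower_bound(g_values, alpha=0.25):
    """LB for Delta Q computed from the per-sample log-likelihood ratios g(Y^r).

    The sample variance is used here for simplicity; with dependent MCMC draws
    the batch-means estimate discussed above should be used instead."""
    R = len(g_values)
    delta_q_hat = np.mean(g_values)
    zeta_hat = np.var(g_values, ddof=1)
    return delta_q_hat - np.sqrt(zeta_hat / R) * stats.norm.ppf(1.0 - alpha)

# Illustrative accept/increase decision for one MCEM iteration.
rng = np.random.default_rng(6)
g = rng.normal(loc=0.05, scale=1.0, size=200)    # stand-in values of g(Y^r)
R, m = len(g), 3
if lower_bound(g) > 0:
    print("accept the proposed Theta and keep R =", R)
else:
    print("reject, draw", R // m, "additional samples and update again")
```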
Top 50 genes sorted according to absolute factor loadings. Numbers within parentheses in the right column represent the rank of the gene in the left column, and "-" denotes that the gene is not in the left column. Note that names beginning with "M97935" in <cit.> are actually control sequences rather than genes.
Row Index Principal Factor by <cit.> Factor 1 by Our Approach
1 RSAD2 LAMP3 (13)
2 IFI44L RSAD2 (1)
3 IFIT1 IFI44L (2)
4 IFI44 SERPING1 (10)
5 HERC5 SPATS2L (-)
6 OAS3 SIGLEC1 (22)
7 MX1 ISG15 (8)
8 ISG15 IFIT1 (3)
9 IFIT3 IFI44 (4)
10 SERPING1 RTP4 (35)
11 IFIT2 OAS3 (6)
12 OASL IFI6 (17)
13 LAMP3 CCL2 (-)
14 IFI27 IDO1 (-)
15 OAS1 HERC5 (5)
16 OAS2 MS4A4A (-)
17 IFI6 IFIT3 (9)
18 IFIT5 OAS1 (15)
19 IFITM3 OAS2 (16)
20 XAF1 LY6E (24)
21 DDX58 OASL (12)
22 SIGLEC1 ATF3 (-)
23 DDX60 CXCL10 (-)
24 LY6E CCL8 (-)
25 GBP1 XAF1 (20)
26 IFIH1 IFI27 (14)
26 LOC26010 SAMD4A (-)
28 ZCCHC2 MX1 (7)
29 EIF2AK2 LGALS3BP (-)
30 LAP3 C1QB (-)
31 IFI35 IFITM3 (19)
32 IRF7 LAP3 (30)
33 PLSCR1 IRF7 (32)
34 M97935_MA_at ZBP1 (42)
35 RTP4 HERC6 (37)
36 M97935_MB_at TFEC (-)
37 HERC6 IFI35 (31)
38 TNFAIP6 MT2A (-)
39 PARP12 SCO2 (41)
40 M97935_5_at DDX58 (21)
41 SCO2 IFIH1 (26)
42 ZBP1 IFIT2 (11)
43 STAT1 DHX58 (-)
44 UBE2L6 TMEM255A (-)
45 MX2 TNFAIP6 (38)
46 TOR1B VAMP5 (-)
47 M97935_3_at PARP12 (39)
48 TNFSF10 GBP1 (25)
49 TRIM22 TIMM10 (-)
50 APOL6 C1QA (-)
|
http://arxiv.org/abs/2307.00609v1
|
20230702163049
|
Reionisation time fields reconstruction from 21 cm signal map
|
[
"Julien Hiegel",
"Emilie Thélie",
"Dominique Aubert",
"Jonathan Chardin",
"Nicolas Gillet",
"Pierre Galois",
"Nicolas Mai",
"Pierre Ocvirk",
"Rodrigo Ibata"
] |
astro-ph.CO
|
[
"astro-ph.CO"
] |
Université de Strasbourg, CNRS UMR 7550, Observatoire Astronomique de Strasbourg, Strasbourg, France
[email protected]
During the Epoch of reionisation, the intergalactic medium is reionised by the UV radiation from the first generation of stars and galaxies. One tracer of the process is the 21 cm line of hydrogen that will be observed by the Square Kilometre Array (SKA) at low frequencies, thus imaging the distribution of ionised and neutral regions and their evolution.
To prepare for these upcoming observations, we investigate a deep learning method to predict from 21 cm maps the reionisation time field t_reion(r), i.e. the time at which each location has been reionised. t_reion(r) encodes the propagation of ionisation fronts in a single field and gives access to the times of local reionisation or to the extent of the radiative reach of early sources. Moreover, it gives access to the time evolution of ionisation in the plane of the sky, whereas such evolution is usually probed along the line-of-sight direction.
We trained a convolutional neural network (CNN) using simulated 21 cm maps and reionisation time fields produced by the simulation code 21cmFAST. We also investigate the performance of the CNN when adding instrumental effects.
Globally, we find that without instrumental effects the 21 cm maps can be used to reconstruct the associated reionisation time field in a satisfying manner: the quality of the reconstruction depends on the redshift at which the 21 cm observation is made and, in general, small scale (<10cMpch^-1) features are smoothed in the reconstructed field, while larger scale features are well recovered. When instrumental effects are included, the scale dependence of the reconstruction is even more pronounced, with significant smoothing on small and intermediate scales.
The reionisation time field can be reconstructed, at least partially, from 21 cm maps of the IGM during the Epoch of reionisation. This quantity can thus in principle be derived from observations and should then provide a means to investigate the effect of local histories of reionisation on the first structures that appear in a given region.
Reionisation time fields reconstruction from 21 cm signal maps
Julien Hiegel
Emilie Thélie
Dominique Aubert
Jonathan Chardin
Nicolas Gillet
Pierre Galois
Nicolas Mai
Pierre Ocvirk
Rodrigo Ibata
Received ..., accepted ...
==============================================================================================================================================================================================================================================
§ INTRODUCTION
One of the most important transitions in the history of the Universe is the Epoch of reionisation (EoR), a period driven by collapsed dark matter halos in which the first galaxies and stars emerge (Loeb_2001, Wise2019, Dayal2018, JulianB2020). The light emitted by these sources started to reionise the intergalactic medium (IGM), mainly composed of hydrogen. This phenomenon is often pictured as a network of growing ionised bubbles, whose centres host the sources of light (Furlanetto_2004, 2022A A...658A.139T). Eventually, these growing regions percolate until the whole IGM gets reionised, ending the EoR near z=5.5-6 (e.g. Kulkarni2019, Konno2014).
This epoch can be probed using the 21 cm signal produced by a spin-flip transition (Furlanetto2006). This process releases a photon with an initial frequency f_0 = 1420 MHz that will be redshifted until it reaches us. Such low frequency radio observations allow us to infer EoR properties from e.g. the 21 cm power spectrum (e.g. Furlanetto2004, Zaldarriaga2004, Mesinger2013, Iliev2012, Greig2017, Zhao2022, Nasirudin_2020, Pagano_2020, Gazagnes_2021, Liu_2016, Gorce_2023) or the 21 cm bispectrum (Karagiannis_2022, hutter_2020). For example, the Low Frequency Array[https://www.astron.nl/telescopes/lofar/] (LOFAR, VanHaarlem) sets upper limits on the 21 cm signal power spectrum, putting first constraints on the state of the IGM, on the high emissivity of UV photons (Ghara2020), or on the radio background (Mondal_2020).
Likewise, the Hydrogen Epoch of reionisation Array[http://reionisation.org/] (HERA) is designed to study the 21 cm power spectrum to constrain several parameters such as the EoR timing (DeBoer_2017): for example, it was recently able to place actual bounds on the X-ray heating produced by the first galaxies (Hera2022).
Soon, the Square Kilometer Array[https://skatelescope.org] (SKA, see e.g. mellema2013) will be built with enough sensitivity, resolution, and coverage at low frequencies to measure the 21 cm signal at high redshift and map the hydrogen distribution during the EoR. While SKA will also be able to investigate the EoR from the 21 cm power spectrum, it will, more importantly, give us the unique opportunity to get images of the HI state. Such observations at different frequencies, hence different redshifts, will not only track the HI in 2D on the sky but also along the line of sight, providing the time evolution of the signal.
This tomography is a great opportunity to explore the EoR (e.g. GiriThesis, mellema2015). SKA will allow us to study astrophysical parameters providing information on the IGM, the size and distribution of ionised bubbles, or the properties of the first generation of galaxies (e.g. mellema2013).
By extension, 21 cm observations from the EoR would help to improve our understanding of the early universe and to constrain many of its facets such as the optical depth τ of the last scattering surface (e.g. Billings), properties of dark matter by studying the non-linear matter power spectrum (e.g. Markus) and to probe the properties of sources and propagation of ionising photons (e.g. Shaw).
In this spirit, this paper aims at investigating how these future 21 cm observations can help us to study how the reionising radiation propagated, how it started and evolved. We want to focus on finding the seeds of ionising photons that set off the reionisation and on monitoring the propagation and eventual percolation of reionisation fronts.
The 21 cm signal contains a significant amount of physical information, encoded by the brightness temperature δ T_b (see Bianco2021, Prelogovic2021, Furlanetto2006, mellema2006):
δ T_b(z) ≈ 27 x_HI(z)(1+δ_b (z))(1+z/10)^1/2(1 - T_CMB(z)/T_s(z))
(Ω_b/0.044h/0.7)(Ω_m/0.27)^-1/2[mK],
that depends on x_HI the neutral fraction of hydrogen, δ_b the density contrast of baryons, the CMB temperature T_CMB and the so-called spin temperature T_s driven by the thermal state of the gas or the local amount of Ly-α radiation (Liszt2001).
A single 21 cm observation can therefore provide direct insight into the state of these quantities at the observed redshift z. Fig.<ref> shows examples of mock 21 cm observations, obtained thanks to 21cmFAST ((Mesinger2011, Murray2020), see section <ref>).
From z=15 to z=5.5, we can observe HII bubbles (in white) inside which no signal can be observed, growing with time until only HII remains and the radio signal vanishes. Since each observation in this sequence is a snapshot of a propagation process, they are correlated. At the extreme, it can even be envisioned that a single 21 cm observation may be used as an anchor point to trace back the sequence in the past (at larger z) or be extrapolated to lower redshift, 'in the future' relative to the observed z. This is the assumption that we aim to test in this work, and more specifically we aim at testing if the chronology of the spatial distribution of ionised gas can be recovered from a 21 cm observation at a single redshift.
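For reference, a direct transcription of the brightness temperature expression above is sketched below (in Python); the cosmological parameter defaults are the values quoted in section <ref>, and the example input values are purely illustrative.

```python
import numpy as np

def delta_tb(z, x_hi, delta_b, t_cmb, t_s, omega_b=0.05, omega_m=0.31, h=0.68):
    """Brightness temperature in mK, following the expression above."""
    return (27.0 * x_hi * (1.0 + delta_b) * np.sqrt((1.0 + z) / 10.0)
            * (1.0 - t_cmb / t_s)
            * (omega_b / 0.044) * (h / 0.7)
            * (omega_m / 0.27) ** -0.5)

# Example: a mostly neutral, slightly overdense cell at z = 10 with T_s >> T_CMB.
z = 10.0
t_cmb = 2.725 * (1.0 + z)       # CMB temperature at that redshift, in K
print(delta_tb(z, x_hi=0.9, delta_b=0.1, t_cmb=t_cmb, t_s=1.0e4))
```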
In order to obtain this chronology, we can use the so-called reionisation time field (Chardin2019).
Mapped on 2D images (see Fig. <ref>), t_reion(r) returns the time of reionisation for each pixel of the map
and encodes the complete history of ionisation propagation in a single field. In 2022A A...658A.139T a, b, it was shown how its topology contains a wealth of information on the reionisation process. For example, minima are the seeds of the propagation fronts where presumably the first sources can be found, isocontours track HII bubbles at a given time, and its skeleton provides the sites of ionisation front encounters. It also gives information on the influence of radiation sources on each other (Thelie2022_2), opening the door to studying distant radiative suppression by nearby objects in the environment. More generally, t_reion(r) gives information on local 'reionisations' rather than the global reionisation, putting an emphasis on the environmental modulation of the ionisation history. Such local modulation of how light is produced and propagates can translate into local variations of star formation suppression (see e.g. Ocvirk_2020) or influence the spatial distribution of low mass galaxies (see e.g. Ocvirk2011). Galaxies experience a great diversity of reionisation from their point of view (e.g. Aubert_2018, zhu2019, Sorce2022), and the reionisation time distribution probes this diversity. Other examples of using a similar description include Trac2008 on the thermal imprint of local reionisations, Trac_2022 for reionisation modelling or Deparis_2019 for ionisation front speed measurements. It should be noted that these specific examples use reionisation redshifts instead of reionisation times: while directly related, we found that times are more easily reconstructed than redshifts for our purposes (see Appendix) and we will solely focus on reionisation times in this paper.
As a means to predict t_reion(r), we will use
convolutional neural network (CNN) methods, which are capable of detecting and learning complex patterns in images. This tool
has been widely used in different problems of astrophysics and cosmology (e.g. recently Bianco2021, Gillet2019, Chardin2019, Prelogovic2021, Ullmo). In this study, we extend such CNN applications to t_reion(r) field reconstructions from mock observations of the 21 cm signal using a
U-shaped convolutional neural network (Ronneberger2015),
allowing us to get the whole history of reionisation of a sky patch from a single observation.
This article is structured as follows: in Section <ref> the CNN algorithm and the procedure used for this analysis are described. We also present the simulations used to produce the data. In sections <ref> and <ref> we present all the metrics used to monitor the neural network performance, together with their results. Then, we discuss instrumental effects in section <ref>, and eventually conclude in section <ref>.
§ CONVOLUTIONAL NEURAL NETWORK AND SIMULATION
The main purpose of this study is to reconstruct the reionisation times spatial distribution from 21 cm images using a convolutional neural network (CNN). CNNs are often used to process pixel data and became widely used for image recognition (LeCun1999).
Our neural network is implemented thanks to the Tensorflow (tensorflow2015-whitepaper) and Keras (chollet2015) Python libraries. It is rooted in the well-known U-net network first developed by Ronneberger2015. The particularity of this network architecture lies in two distinct parts (Fig. <ref>). The first one is a contracting path called the encoder, applying a series of 2D convolutions and downsamplings to the input image (a 21 cm map here), whose size shrinks as it goes deeper through the neural network.
The second part does the opposite: it consists of an expansive path (the decoder) applying the same number of convolutions with upsamplings to propagate the information obtained in the encoder. The resulting final output is then another image, a t_reion(r) map in our case. This special case of CNN is called an auto-encoder.
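A minimal Keras sketch of such a U-shaped auto-encoder for 128x128 single-channel maps is given below; the number of levels, the filter counts and the compilation settings are illustrative choices and not the exact architecture trained in this work.

```python
from tensorflow.keras import Model, layers

def conv_block(x, filters):
    # Two successive 3x3 convolutions, as in the standard U-net building block.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

inputs = layers.Input(shape=(128, 128, 1))     # a standardised 21 cm map

# Contracting path (encoder).
c1 = conv_block(inputs, 16)
p1 = layers.MaxPooling2D(2)(c1)
c2 = conv_block(p1, 32)
p2 = layers.MaxPooling2D(2)(c2)

# Bottleneck.
b = conv_block(p2, 64)

# Expansive path (decoder), with skip connections to the encoder.
u2 = layers.Concatenate()([layers.UpSampling2D(2)(b), c2])
d2 = conv_block(u2, 32)
u1 = layers.Concatenate()([layers.UpSampling2D(2)(d2), c1])
d1 = conv_block(u1, 16)

outputs = layers.Conv2D(1, 1, activation="linear")(d1)   # predicted reionisation time map

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")               # MSE loss, as described in the text
model.summary()
```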
For the learning process, we generate a dataset of histories of reionisation, with their corresponding sequence of 21 cm maps. One CNN predictor will be considered for each redshift at which we have mock 21 cm observations.
In practice, we consider 18 predictors, one for each z_obs shown in Fig.<ref>. Ideally, all CNN predictors have to predict the same t_reion(r) map from mock observations drawn from the same reionisation history. However, depending on the specific properties of a given 21 cm observation (e.g. the non-zero signal fraction) at a given z_obs, the predictions will not be equal in performance.
The public simulation code 21cmFAST (Mesinger2011, Murray2020) has been chosen to obtain the dataset, i.e. the 21 cm signal and t_reion(r) fields. Coeval simulation cubes of size 256 cMpch^-1 with resolution 1cMpch^-1/pixel have been produced using a ΛCDM cosmology with (Ω_m,Ω_b,Ω_Λ,h,σ_8,n_s) = (0.31,0.05,0.69,0.68,0.81,0.97) consistent with the results from planck2018 and using standard (T_vir,ζ) parameters. T_vir sets the minimal virial temperature for haloes to enable star formation (see theseNico, JulianB2020, BARKANA2001125 and SP.Oh) and is chosen such that log_10(T_vir)=4.69798. ζ sets the ionising efficiency of high z galaxies, allowing us to modify the reionisation timing: the larger this value is, the faster the reionisation process will be (Greig Mesinger2015). We will consider two ionising efficiencies ζ=30 and ζ=55 (referred to as ζ30 and ζ55), leading to a total of 36 CNN models to be trained, 18 redshifts per ζ value.
For each ζ, 50 different realisations with different seeds have been run,
giving us access to t_reion(r) and 21 cm 3D fields.
As discussed in the introduction, an alternative approach is to consider the reionisation redshift z_reion(r) instead of t_reion(r). Yet we found that times were better reconstructed, and a brief analysis using z_reion(r) is presented in Appendix <ref>.
To produce 2D images of t_reion(r) and 21 cm, we took 64 evenly spaced slices, 1 out of 4 (of 1 cMpch^-1 thickness, corresponding to 1 cell), in the 3 directions of each cube.
Each slice has been cut into four 128x128 images, finally leading to a total of 768 21 cm images per realisation and per z, giving 38,400 maps per redshift. Eventually, we standardise the 21 cm images to ensure that the range of pixel values is consistent across all images in the dataset and to help the model's training process: the mean value is subtracted and the result is divided by the standard deviation, both computed over the training set. The mean and standard deviation values are thus parameters of our predictors.
As shown in Figure <ref>, the neutral volume fraction Q_HI
is shifted (on the time axis) according to the ionising efficiency ζ. Since ζ controls how many photons escape from galaxies, ζ30 gives a history of reionisation delayed in comparison with ζ55.
Later on, we discuss the non-zero signal fraction, i.e. the fraction of pixels with non-zero 21 cm signal. Its time evolution is plotted as dots and crosses in Fig. <ref>, and is shown to follow Q_HI.
The entire dataset is split into three subsets, out of which 35,000 images are used for the learning phase. The first subset, known as the training set, comprises 31,500 images. At each epoch during the learning phase, this set is fed to the CNN, which computes the loss function (i.e., mean square error or MSE in our case) and modifies the weights to minimise it. Another separate subset of the entire dataset, called the validation set, consists of 3,500 images : it is exclusively used to evaluate the CNN's performance during the learning phase after each epoch, without being used in the weight adjustment process.
The final set is called the test set and consists of the remaining 3,400 images, which are never processed by the CNN during the learning stages.
All the results (besides the loss function and R^2, see section <ref>) shown in this paper are obtained via the test set.
§ MONITORING THE ALGORITHM PERFORMANCE
Once all hyper parameters are set and predictions made, we want to measure the training performance and the prediction accuracy by comparing predicted maps with the ground truth given by the simulation.
§.§ Network's internal metrics
First, two internal metrics are used to monitor the training process. Starting with the loss function, the mean square error (MSE) is defined as the average of the squares of the errors. At each epoch the algorithm tries to minimise this loss function (MSE) by comparing the ground truth (given by the simulation) with the prediction (given by the CNN).
A second indicator, called the coefficient of determination, is defined as:
R^2 = 1 - Σ_n=1^N_pix (Pred_n - True_n)^2/Σ_n=1^N_pix (True_n - ⟨True⟩)^2 = 1 - Σ_n=1^N_pix (Pred_n - True_n)^2/Σ_n=1^N_pix (True_n)^2
Pred and True are the maps of the prediction and the ground truth, respectively. Pred_n/True_n correspond to the n^th pixel of the considered batch of images. ⟨True⟩ depicts the average of the true field and is equal to zero after normalisation, which gives the second equality. In our case, the predicted values, Pred, and the ground-truth values, True, are both (3500, 128, 128) cubes. The summations are performed over N_pix pixels to measure the network performance on a set it has already/never (training/validation set) seen. A unit value stands for a perfect correlation between Pred and True.
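In practice this amounts to a one-line computation; a small sketch (with random stand-in maps) is given below.

```python
import numpy as np

def r2_score_maps(pred, true):
    """R^2 over a batch of maps; `true` is assumed to be normalised so that
    its mean is (close to) zero, as in the equation above."""
    residual = np.sum((pred - true) ** 2)
    total = np.sum((true - true.mean()) ** 2)
    return 1.0 - residual / total

# Example with a (batch, 128, 128) stack of predicted and true maps.
rng = np.random.default_rng(7)
true = rng.normal(size=(32, 128, 128))
pred = true + 0.3 * rng.normal(size=true.shape)   # an imperfect prediction
print(r2_score_maps(pred, true))
```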
Fig. <ref> shows the R^2 coefficient during the validation phase for several observation redshifts and for ζ55. For the validation set, this coefficient gives a first estimation of the similarities between true fields and predicted fields immediately at the end of each epoch, allowing us to follow the model's accuracy during the learning process. At low redshift, the R^2 coefficient is small, and even negative for this model at the lowest z. Nothing can be predicted from low redshift ranges since the non-zero signal fraction at small z_obs∈[5,7] is low or even equal to zero. Then, above z_obs=8, we observe that the prediction performance increases with increasing z_obs until z_obs=11, beyond which it eventually degrades again until z_obs=15.
Fig. <ref> shows the maximal value of R^2 across epochs for each z_obs and the two ζ models, as well as the minimum value reached by the MSE loss. According to these metrics, the best reproduction of t_reion(r) is obtained from the CNN predictor using z_obs=11 21 cm inputs, corresponding to 95% of non-zero signal (see Fig. <ref>).
Furthermore, the ζ30 model returns better results at lower z_obs. It can be easily understood when looking at Fig <ref>, where the non-zero signal fraction of ζ30 is considerably higher than for ζ55 in the z_obs (top axis) range [5,9].
In this range of signal fraction values, the gain in terms of information is such that the network performance increases significantly. Still looking at Fig. <ref> and Fig. <ref>, we can estimate that for a non-zero signal fraction between [0.90,0.96], the neural network performs best. For a non-zero signal fraction greater than 0.96, performances decrease again. Indeed, at such levels of non-zero signal fraction, there are only a few HII bubbles to be found, inducing a loss of information on the location of the seeds of most reionisation regions. Without HII bubbles it is difficult to know where the sources of reionisation are located or how the UV radiation propagates.
Hence, in order to get the best performances, the CNN algorithm needs to have a compromise between a minimal set of HII bubbles and a significant non-zero signal fraction.
Also, the peak value of R^2=0.88 at z=11 suggests that this observation redshift is peculiar. Indeed, looking at the timeline in Figure <ref>, z=11 in our model seems to be the transition between a globally negative brightness temperature and a positive one, as the long range influence of X-rays on the gas becomes more evident. At this z_obs, the 21 cm map contains small HII regions with no signal, easily interpretable by the CNN as the places where the first seeds of reionisation are found. Then there are regions that are hotter than average (shown in red), which will reionise sooner, and blue regions, colder than average, which will be the last regions to reionise. This map thus contains information on the sequence of radiation propagation that seems to be more easily extracted than at other observation redshifts.
§.§ Reionisation time prediction
Beyond the CNN internal metrics, the immediate result is the predicted t_reion(r) map itself, as shown in Fig. <ref>. This z_obs=11 map is one of the best reconstructions (R^2 = 0.84) we could achieve for the ζ30 model. The predicted map on the right seems quite close to the ground truth, yet smoother. It is nevertheless remarkable how close to the reference the CNN reconstruction can be. Note that the best predicted maps for ζ30 and the best predicted maps for ζ55 (not shown here) present a similar qualitative behaviour.
Having the true map of t_reion(r) and its prediction, we can count the number of pixels with values higher than a given reionisation time to get Q_HI(t) on the sky, shown in Fig.<ref>.
For z_obs=11, both the true and predicted measurements match and are consistent with the signal fraction evolution computed from the actual evolution of the 21 cm signal with z. It implies that the information obtained across the sky at a single z_obs via the predicted t_reion(r) is consistent (or can be cross-checked) with the evolution along the line of sight.
Fig. <ref> shows examples of predictions obtained for different models trained at different z_obs. The first column shows the mock observations (21 cm maps) at several redshifts, the middle column shows the predictions obtained from the left panel and the right column shows the difference between the ground truth and the predicted field. Looking at the first two columns, at low redshift, the predictions are suboptimal, as the inferred field gets totally smoothed. For z_obs ≥ 10, our CNN becomes able to capture small scale (<10cMpch^-1) features such as extrema, which leads to a more detailed prediction. However, looking at the right column, the CNN seems to have more difficulties predicting the local extrema of reionisation times even though their locations are well predicted. Such points correspond to the seeds of the propagation of fronts, presumably linked to the first sources of radiation, and seem to suffer from a smoothing intrinsic to our adopted method. Compared to z_obs=10 or 11, the z_obs=15 prediction appears to be slightly smoother, yet the earliest reionisation times seem to be well reproduced.
Finally, Fig. <ref> depicts the normalised histogram of TRUE-PRED maps.
Distributions are centered on zero, with an asymmetry toward negative values: our CNN predictions return larger reionisation times than the ground truth (i.e. a delayed reionisation history), but this systematic effect is less severe for the best CNN predictors trained to process z_obs=8 or 10.5 observations in this figure.
§.§ True versus Pred histograms (TvP) and fitting fraction
One of the most standard tests is the so-called true versus predicted (TvP) histogram, where all the predicted pixels are compared one by one to their true value given by the simulation.
Fig. <ref> shows the TvP corresponding to all the maps in the test set using the z_obs=11 CNN. Most values follow the perfect correlation for typical values (0.4-1 Gyrs), while extreme values are not as well recovered by the CNN. This is not surprising looking at the predicted maps in Fig. <ref>.
Indeed, the extreme values of t_reion(r)
coincide with small scale features that are smoothed out, where the first sources with the lowest t_reion are found. These values are also rare, explaining why the algorithm fails at learning how to recover them.
Fig. <ref>
presents a synthesis of the true versus predicted maps of all the predictors at different z_obs. The fitting fraction is a value between 0 and 1 corresponding to the fraction of predicted pixels whose value fits within an arbitrary error, calculated as ϵ% of the true pixel's value.
Obviously, the larger the allowed error is, the more "good" pixels will be found. It is then clear that low z_obs (<8) gives less accurate results, especially when allowing a small error (<10%): the fitting fraction value is more than 10% lower than for z_obs>8. It can be understood when looking at maps of brightness temperature (or at Fig <ref>), where the maps contain less and less signal for decreasing redshifts from z=8: less than 50% of the map contains observable neutral hydrogen. At the extreme redshift z=5.5, there is no signal left because all HI got reionised and the algorithm cannot predict efficiently anymore.
z_obs>8 seems to give the best results and all predictions made from z_obs above 9 seem to have similar performance: between 70 and 75% of matching pixels for a 5% error, with a slight decrease with growing z_obs. Overall, we recover the two trends identified previously: low z_obs simply lacks the signal for a good reconstruction, while large z_obs lacks the direct imprint of sources that appear later. The best compromise is found for z_obs between 8 and 12, corresponding to a signal fraction ≈ 0.8, with an optimal value of 0.95.
§.§ ζ30 and ζ55
The results comparing the ζ30 and ζ55 scenarios are very close. Prediction accuracies are similar except at lower z_obs, where the signal fraction tends to be quite different between the two situations. In the most extreme case, at z_obs=5.5, the CNN trained with ζ30 maps is quite limited but still returns a prediction, whereas the CNN trained with ζ55 maps cannot predict anything. Nevertheless, this is what was expected to happen since there is no HI left at late times in the ζ55 scenario. In the following sections, we will only consider the ζ30 scenario to discuss results in the whole range of z_obs used.
§ STRUCTURE OF PREDICTED MAPS
We now investigate the spatial structure of the reconstructions using three metrics: the power spectrum, the Dice coefficient and the minima statistics.
§.§ Power Spectrum P_k
We now compare the power spectrum P_k of the reionisation time field with the one predicted by the neural network in order to have a statistical point of view on how well the network reconstructs the different scales present on the map.
Fig. <ref> depicts the power spectrum of the ζ30 model. A first look shows that the lack of 21 cm signal drastically erases the possibility to predict anything: predictions for z=5.5 and z=6 are incompatible with the real P_k at mid scales k ≈ 7e-2 cMpc^-1h: less than 30% of the power remains for z=6, against up to 85% for z=8. At small scales (k > 2e-1 cMpc^-1h), less than 12% of the power remains for z=6 against up to 57% for z=8. Now looking at large scales (small spatial frequencies such as k < 3e-2 cMpc^-1h), our model reproduces them almost perfectly for z_obs>8: more than 95% of the power remains. However, at k=0.2 cMpc^-1h and beyond
the prediction cannot produce enough power, meaning that the smaller scales are difficult to predict at all z_obs. Again, the predictor smoothes the field, predicting a map generally blurrier than the ground truth.
To improve results at the smallest scales, generative adversarial networks (GANs)
could be a solution to get a better prediction resolution (see Ullmo).
§.§ Dice coefficient
Another way to look at the predictor performance is the Dice coefficient (see Ullmo). This method is useful to see what kind of regions the algorithm reconstructs the best, for example whether the first regions that got reionised are well predicted or if, conversely, late regions are reconstructed in a better way. This coefficient tends to focus on the structure of maps by looking at regions with given values. It will tell how the CNN recovers structures instead of giving an accuracy according to the value of pixels or the considered scale.
The Dice coefficient proceeds by taking a threshold t (0 to 100) and considering only the t% of pixels with the largest t_reion values in the true and predicted maps. We can estimate the regions of the map where the prediction overlaps with the ground truth, using a newly formed map with pixels in only 3 possible states:
* Both the predicted and true pixels have a value above the threshold (both fitting the condition), referred to as the yellow state.
* Both have a value below the threshold (both out of the condition), depicted as the blue state.
* There is a mismatch between prediction and simulation, depicted as the green state.
An important fact to note is that the value of the threshold corresponds to a given cosmological time (or redshift). Using 10% for the threshold, the constructed map will only contain the information for large values of cosmological time (low redshifts), typically the last regions to reionise. On the other hand, taking 100% as threshold, the whole map will be considered.
An example of an overlap map is depicted in Fig. <ref> using z_obs=10. The threshold in this example is 0.4, corresponding to the 40% largest values. Only a few green regions are present, telling us that the prediction respects the true field quite well. This range of values is actually well reconstructed and the main differences are located at the edges of these regions.
The Dice coefficient, or association index, is calculated at a given threshold as <cit.>:
Dice = n_yellow/n_yellow+n_green.
with n_i the number of pixels with color i. The Dice coefficient can only take values between 0 and 1: 0 for no correspondence between prediction and ground truth and 1 for a perfect reconstruction.
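A short sketch of this computation for a single threshold (with a noisy random map standing in for the CNN prediction) is given below.

```python
import numpy as np

def dice_coefficient(t_true, t_pred, threshold_percent):
    """Dice coefficient for the threshold_percent% of pixels with the largest
    t_reion values, following the yellow/green counting described above."""
    q = 100.0 - threshold_percent                  # keep the top t% of values
    mask_true = t_true >= np.percentile(t_true, q)
    mask_pred = t_pred >= np.percentile(t_pred, q)
    n_yellow = np.sum(mask_true & mask_pred)       # both above the threshold
    n_green = np.sum(mask_true ^ mask_pred)        # mismatch between the two maps
    return n_yellow / (n_yellow + n_green)

# Example on a true map and a noisy stand-in for the CNN prediction.
rng = np.random.default_rng(8)
t_true = rng.normal(size=(128, 128))
t_pred = t_true + 0.5 * rng.normal(size=t_true.shape)
print(dice_coefficient(t_true, t_pred, threshold_percent=40))
```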
Fig. <ref> shows the Dice coefficient for the ζ30 model.
The dotted black line
stands for the Dice coefficient obtained if we compare the true thresholded map with a map randomly filled with zeros and ones. Globally, we recover that z_obs>8 provides good accuracy compared to the random situation, with similar performances at all z_obs. Furthermore, z_obs=11 seems to dominate up to median values of the threshold. Afterwards, the z_obs=8 coefficients catch up, followed by z_obs=10, meaning that these observation redshifts efficiently recover the first structures of reionisation.
Focusing on low threshold values, the Dice coefficient provides additional insights on the performance at low z_obs, such as z_obs=5.5 or 6. At these redshifts, predictions are slightly better for low threshold values, meaning that at z_obs=6 the neural network predicts the last reionised regions very efficiently: since they are the only regions where a non-zero signal can be found, the predictor can locate them accurately.
§.§ Minima statistics
We now investigate the minima of t_reion, i.e. the regions that reionised at the earliest times, to probe how well our CNN detects the sources of reionisation.
We use DisPerSE (Sousbie2011, Sousbie20112) to identify the distribution of reionisation-time minima: it uses discrete Morse theory to identify persistent topological features in two-dimensional maps, such as voids, walls, filaments, and clusters. While we focus here only on minima, valuable insights into the underlying topology of t_reion can also be obtained from the persistent structures detected by DisPerSE, to see how they relate to the physical processes that shape the distribution of reionisation times (see also Thelie2022_2 a, b).
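As a rough illustration of the idea (not a substitute for the DisPerSE persistence analysis actually used here), local minima of a t_reion map could be located with a simple minimum filter and binned by reionisation time; the window size and binning below are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def minima_statistics(t_map, window=5, n_bins=20):
    """Locate local minima of a 2D t_reion map (candidate reionisation seeds)
    and histogram them as a function of reionisation time."""
    is_min = t_map == minimum_filter(t_map, size=window)  # pixel equals its local minimum
    t_min = t_map[is_min]
    counts, edges = np.histogram(t_min, bins=n_bins)
    return counts, edges
```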
The results of this analysis are illustrated in Figure <ref>.
The black line stands for this analysis performed on the true field: at low t_reion (high z), sources are rare, and their number is maximal at t=0.8 Gyr (approximately z=7). Their number then drops at larger values, not because sources become rarer, but because they appear in already reionised regions and can no longer be traced as extrema of the t_reion maps.
Looking at the statistics of our CNN predictions, shown as dotted lines, it is clear that the CNN has some difficulty detecting the first sources of ionising photons, a consequence of the smoothing of the predicted maps. Yet the maximum of the distribution is well matched at least for z_obs>8, whereas the low z_obs=5.5-7 predictors unsurprisingly fail at recovering the seeds of ionisation fronts from 21 cm maps with very low non-zero signal fractions. Interestingly, an observation made for example at z_obs=10 manages to predict the population of peaks (and thus seeds/sources of reionisation) at later times in a satisfying manner, emphasising the ability of our CNN to extrapolate the 'future' of a given observation. When compared with the previous power spectrum analysis, these results emphasise that the loss of accuracy on small scales mostly affects high-z (low t_reion) peaks, whereas seeds of ionisation fronts at lower redshifts are much better predicted with the lowest z_obs.
§ INSTRUMENTAL EFFECTS AND PREDICTION
The work discussed previously only takes into account a 'perfect' 21 cm signal, without any noise or instrumental effects. Such effects are expected to degrade the predictor's ability to infer the t_reion field.
As a means to study the potential impact of these effects on our predictions, we created a new data set of 21 cm maps with instrumental and noise characteristics corresponding to SKA. The uv coverage and instrumental effects are calculated using the tools21cm library[https://github.com/sambit-giri/tools21cm] (Giri2020), assuming a daily scan of 6 hours, a 10 s integration time, and a total observation of 1000 hours (2022MNRAS.509.3852P, Ghara_2016, Giri_2018). Our investigation is limited to z_obs=8, corresponding to the lowest redshift where the predictor accuracy in terms of the R^2 coefficient remains satisfying (R^2=0.86), while deeper observations are found to be significantly more degraded by noise. A maximum baseline of 2 km is assumed and the angular resolution is Δθ∼ 2.8 arcmin, corresponding to 7.35 cMpc at this redshift. tools21cm also convolves the coeval 21 cm cube in the frequency direction with a matching resolution Δν∼ 0.43 MHz.
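The full instrumental pipeline relies on the tools21cm library; as a purely schematic stand-in (ignoring uv coverage and thermal noise), the quoted angular resolution can be mimicked by convolving a slice with a Gaussian beam, as sketched below with assumed pixel units.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_to_beam(dT_slice, pixel_cmpc, fwhm_cmpc=7.35):
    """Convolve a 21 cm brightness-temperature slice with a Gaussian beam whose
    FWHM matches the quoted resolution (~2.8 arcmin, i.e. ~7.35 cMpc at z=8)."""
    sigma_pix = fwhm_cmpc / (2.0 * np.sqrt(2.0 * np.log(2.0))) / pixel_cmpc
    return gaussian_filter(dT_slice, sigma=sigma_pix)
```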
Fig. <ref> shows the prediction obtained from 'noisy' observations, with the input 21 cm observation shown in the left panel.
The predicted map is much blurrier: at first sight, adding instrumental effects to the input observations smooths the prediction even more.
Fig. <ref> and Fig. <ref> depict the power spectrum of the t_reion field and Q_HI for ζ30, for both the ground truth and the prediction. The two predicted curves (dotted lines) are obtained from an observation at z=8, one including instrumental noise (in blue) and the other using a perfect 21 cm map (in red). In both predictions, the power spectrum is successfully recovered at large scales (k < 3e-2 cMpc^-1h), with approximately 70% and 95% of the power remaining for the noisy and perfect cases, respectively. At smaller scales (k > 2e-1 cMpc^-1h), the power spectrum recovered from noisy maps has a sharper turn-off (17% of the power remaining against 86% for the perfect case) and falls outside the error bars (hatched and shaded areas). As a result, small-scale structures are missed, making it difficult, or even impossible, to detect the first sources of reionisation accurately. On the other hand, Q_HI gives a fair history of reionisation, albeit a more sudden one from SKA maps than from the ground truth and the 'perfect signal' scenario.
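For reference, a radially averaged 2D power spectrum of a map can be estimated along the following lines; the normalisation convention and binning are illustrative and may differ from those used for the figures.

```python
import numpy as np

def radial_power_spectrum(field, box_size, n_bins=25):
    """Isotropic 2D power spectrum of a square map of comoving side `box_size`."""
    n = field.shape[0]
    fk = np.fft.fftn(field)
    p2d = (np.abs(fk) ** 2) * (box_size**2 / n**4)       # one common 2D normalisation
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2).ravel()
    p_flat = p2d.ravel()
    edges = np.linspace(kmag[kmag > 0].min(), kmag.max(), n_bins + 1)
    k_centres, pk = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (kmag >= lo) & (kmag < hi)
        if sel.any():
            k_centres.append(0.5 * (lo + hi))
            pk.append(p_flat[sel].mean())
    return np.array(k_centres), np.array(pk)
```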
This outcome raises several questions. The first one is linked to the architecture of the CNN: is it possible to improve the CNN such that the accuracy of the prediction improves significantly, especially for 'noisy' observations? For this purpose, a solution could be to tune the hyper parameters in order to find the best combination to recover t_reion. Also, keeping the U-shaped CNN, modifying the number of hidden layers or filters, or the depth of the network, could push the prediction in the right direction. Another direction could be to add attention blocks to help our CNN focus on small-scale features (oktay2018attention).
Using GANs (Ullmo) could also improve the output field in order to recover small scales.
Another solution would be to preprocess the noisy 21 cm observation in order to remove/reduce noise and instrumental effects. In such a situation, we would hopefully recover a "perfect observation" scenario, drastically boosting the prediction accuracy.
§ DISCUSSION AND PERSPECTIVES
The work presented in this paper has involved numerous decision-making processes that may have been influenced by factors such as default settings, initial ideas, and implementation challenges.
When it comes to the architecture and hyper parameters of the CNN algorithm, the choices made are typically based on the fact that they yield improved performance (in terms of R^2). However, some choices, such as the number of filters, the inclusion/location of dropout layers, or the choice of loss function for weights adjustment, can potentially be modified: it is conceivable that untested combinations of hyper-parameters might yield better results. Ongoing investigations are being conducted to further explore this matter.
Another decision was to use images of the 21 cm signal at a single redshift/frequency, leading to one CNN per redshift (and per model). A possibility would have been to train the predictors with multiple redshift channels or even lightcones. This could help the predictors to infer t_reion maps even in the regime of low non-zero signal fraction at low z_obs (<8): the inclusion of more information in the prediction process can provide additional constraints (e.g. on the global reionisation history) that cannot be inferred from single low-z_obs 21 cm maps alone. Our choice is largely the result of the history of this work, where it was not obvious at first that any prediction would be possible, even in the case of a perfect 21 cm signal. Investigations are currently ongoing to see what can be gained from a multiple-channel prediction.
However, we also believe that having multiple CNNs has some merits regarding the adequacy of the parameters (cosmological, astrophysical) of a predictor to the parameters that drive a given 21 cm observation. In a real-case scenario, the 'real' parameters of an observation are unknown, and we therefore face a situation where it is unclear which CNN should be used to reconstruct t_reion. One possibility is to assume that the model parameters will be obtained from another analysis (using e.g. the 21 cm power spectrum), and the role of a CNN predictor is therefore 'limited' to the reconstruction of the spatial distribution of the reionisation times of a specific observation. But preliminary investigations also show that when a set of 21 cm maps at different z_obs is processed by the multiple predictors of a 'wrong' model (for example ζ55 maps fed to ζ30 predictors), they lead to a set of t_reion maps that are inconsistent with regard to e.g. their average reionisation history. Meanwhile, a CNN that would reconstruct multiple t_reion maps at once from multiple 21 cm maps would always, by construction, ensure some consistency between its predictions, even for a wrong model. This implies that a set of CNNs at different z_obs provides a means to quantify autonomously the adequacy of its model to the data. The optimal situation is likely to be an intermediate one, with CNNs dedicated to a given z_obs but that ingest multiple 21 cm maps at different redshifts.
§ CONCLUSIONS
In this study, we have implemented and tested a U-net architecture to infer the t_reion field from 21 cm maps produced by the simulation code used in this work. These predictions are especially effective at recovering the large-scale (>10 cMpc h^-1) features of the reionisation times and can, to some extent, recover the past and extrapolate the future evolution of an observation made at a given z_obs.
For our models, z_obs between 8 and 12 seems to provide the best results according to several metrics (e.g. R^2, Dice coefficient, power spectrum, true vs predicted values), corresponding to signal fractions of 65% up to 96% for the ζ30 model. For z_obs<8, even though the last regions to be reionised can be reconstructed, the general lack of 21 cm signal significantly degrades the network's capability to predict t_reion.
For deep observations (z_obs>12), the CNN still manages to reproduce quite well the very first sources of reionisation, thanks to the rare and narrow HII bubbles imprinted in the 21 cm signal, but has more difficulty predicting the location of sources that appear later, leading to smoother maps.
It might nevertheless be interesting to keep the information from observations with low signal fractions (z_obs<8), since they reconstruct quite well the last regions to be reionised.
In addition, our CNN model works well at recovering the largest scales, as seen for example in the power spectrum analysis.
Nevertheless, there are still some limits to what our network can do: it has, for example, difficulties recovering the smallest scales (<10 cMpc h^-1). This could be a problem for constraining physics related to small-scale structures (such as the physics of low-mass objects or the nature of dark matter). It might still be possible to improve results at small scales with the use of GANs to generate a more detailed t_reion field. In addition, inserting attention blocks into our CNN could help the predictors focus on small-scale features.
Two scenarios with different reionisation histories have been used. No significant difference has been detected in either the training or the prediction phase. The main difference arises at the lowest redshifts: the ζ55 scenario reionises sooner, loses signal more rapidly, and is more difficult to predict for low z_obs (<8).
We believe that the method presented here can prove useful for the future interpretation of 21 cm data. First, it demonstrates that information about the reionisation times is somehow encoded in the 21 cm signal.
The t_reion field gives access to the chronology of light propagation in the transverse plane of the sky, which could for example be cross-checked with other estimates of the reionisation evolution obtained along the line of sight (21 cm lightcones or Lyα/21 cm forests, for example). Presumably it can also be related to the global history of structure buildup and star formation. Another application would be the cross-correlation of reionisation-time maps with galaxy distributions or intensity maps other than 21 cm: having access to the propagation history around objects observed through other channels could provide insights into their own local history of light (and therefore star/source) production (see also e.g. Aubert_2018, Sorce2022). There should also be an environmental modulation of star formation suppression by reionisation (e.g. Ocvirk_2020, Ocvirk2011), and reionisation-time maps could provide a way to test this by giving insight into how local reionisation proceeded. As illustrated in Sec. <ref>, the reconstruction of reionisation times from actual 21 cm data will surely be challenging, but it also surely holds potential that we have not fully investigated yet.
§ ACKNOWLEDGEMENT
We thank J. Freundlich for his help and advice. The authors acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 834148).
§ CNN ARCHITECTURE DETAILS AND HYPER PARAMETERS
Here we describe the details of the CNN algorithm used in this study. Table <ref> lists the properties of the hidden layers.
A convolution layer consists in applying a filter of a given size (3x3 in this study) to the whole input, resulting in a feature map. Each convolution is performed with 'same' padding, meaning that it conserves the size of the input.
In the encoder, the first four convolutions are each followed by a MaxPooling operation (of size 2x2) that shrinks the size of the input by a factor of 2: as the 2x2 window slides over the input, only the pixel with the highest value is kept.
Dropout (Drop) layers are also used to help the CNN prevent overfitting (Labach). A dropout layer acts by shutting down a given fraction (0.5 in our case) of the neurons/filters of the hidden layer to which it is applied.
Concerning the decoder part, each convolution is followed by an UpSampling layer that doubles the size of the input.
In addition, concatenate layers (merge or skip connections) are applied to fuse the features of a given hidden layer of the encoder with features of the same size in the decoder. In practice, skip connections tend to improve the accuracy of CNNs and to make them converge faster (XJM).
Finally, the activation function of each layer is a ReLU, except for the last one, which uses a linear activation since we want to predict an output field with continuous values.
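To make the structure concrete, here is a heavily simplified Keras sketch of such a U-shaped network; it is far shallower than the actual architecture of Table <ref>, and the layer counts and filter numbers are placeholders.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_small_unet(input_shape=(128, 128, 1), n_filters=16):
    inp = layers.Input(input_shape)
    # Encoder: 3x3 'same' convolutions, MaxPooling, dropout against overfitting
    c1 = layers.Conv2D(n_filters, 3, padding="same", activation="relu")(inp)
    p1 = layers.Dropout(0.5)(layers.MaxPooling2D(2)(c1))
    c2 = layers.Conv2D(2 * n_filters, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D(2)(c2)
    # Bottleneck
    b = layers.Conv2D(4 * n_filters, 3, padding="same", activation="relu")(p2)
    # Decoder: UpSampling + skip connections (concatenate) at matching resolution
    u2 = layers.concatenate([layers.UpSampling2D(2)(b), c2])
    c3 = layers.Conv2D(2 * n_filters, 3, padding="same", activation="relu")(u2)
    u1 = layers.concatenate([layers.UpSampling2D(2)(c3), c1])
    # Linear activation on the last layer: the target field is continuous
    out = layers.Conv2D(1, 1, activation="linear")(u1)
    return Model(inp, out)
```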
During the learning phase of a CNN algorithm, the weights of each convolution filter are updated each time a batch of data is passed through. In our implementation, we have set the hyper parameter batch size to 16. This means that our model will update its weights after processing each batch of 16 images, and that one epoch is completed after N batches have been processed, where N is the number of batches needed to cover the entire dataset.
In addition, to optimize the performance of the CNN algorithm, we need to carefully choose the hyper parameters. Some of them have already been discussed, such as the batch size, the dropout rate, and the loss function. The "optimizer" hyper parameter was set to Adam. Another important factor is the initialisation of the weights of the model: the "kernel_initializer" hyper parameter controls this and was set to "He Normal". However, because the weights are randomly initialized, there is a possibility that the learning process gets stuck in a local minimum without learning anymore. To prevent this, we added a feature to the code that restarts the weight initialization if such a situation is detected.
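A hedged sketch of such a safeguard is shown below as a Keras callback; the plateau criterion (patience and tolerance values) is our own illustrative choice, not the exact test used in the code.

```python
import tensorflow as tf

class RestartOnPlateau(tf.keras.callbacks.Callback):
    """Re-initialise the weights if the loss has not improved for `patience` epochs."""
    def __init__(self, patience=10, min_delta=1e-4):
        super().__init__()
        self.patience, self.min_delta = patience, min_delta
        self.best, self.wait = float("inf"), 0

    def on_epoch_end(self, epoch, logs=None):
        loss = (logs or {}).get("loss")
        if loss is None:
            return
        if loss < self.best - self.min_delta:
            self.best, self.wait = loss, 0
            return
        self.wait += 1
        if self.wait >= self.patience:
            # Draw fresh random weights layer by layer (He normal, as in the text)
            for layer in self.model.layers:
                if hasattr(layer, "kernel"):
                    init = tf.keras.initializers.HeNormal()
                    layer.kernel.assign(init(shape=layer.kernel.shape))
                if getattr(layer, "bias", None) is not None:
                    layer.bias.assign(tf.zeros_like(layer.bias))
            self.best, self.wait = float("inf"), 0
```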
§ USING THE REDSHIFT OF REIONISATION INSTEAD OF THE REIONISATION TIME
The redshift of reionisation z_reion and the reionisation time t_reion are two fields depicting the same quantity, the time at which regions in the sky reionise, the two being approximately related by a power law (z_reion ∼ t_reion^-2/3) during the reionisation epoch.
We can investigate how choosing redshift instead of time affects the capability of the CNN to predict z_reion or t_reion, and which field gives the best results.
First, Fig. <ref> shows the same plot as Fig. <ref> but for z_reion. For z=11, the values of R^2 and the MSE are quite similar. Nevertheless, going to lower values of z, both metrics degrade much faster than for t_reion. Working with time rather than redshift therefore gives us a greater range of redshifts for which the results are relevant.
Finally, Fig. <ref> shows the power spectrum obtained from the true and predicted z_reion fields. For ζ30, the true power spectrum is well reproduced, especially at large scales. However, the smaller scales turn off faster, and this effect is even worse with ζ55. For t_reion (Fig. <ref>), the smaller scales turn off more gently, showing that small scales are better represented when working with cosmic time instead of redshift.
The exact reason for this discrepancy is unclear. In Thelie2022_2 we found that reionisation times are close to being Gaussian random fields (GRFs) and can be analysed by means of GRF theory, unlike z_reion, which is a non-linear function of t_reion. We suspect that GRFs are more easily reconstructed as they provide, for instance, a symmetric distribution of values around the mean, whereas z_reion, for example, presents an asymmetric distribution of values that suffers the most from the smoothing of extrema inherent to our implementation of CNNs.
|
http://arxiv.org/abs/2307.00270v1
|
20230701083818
|
HrSegNet : Real-time High-Resolution Neural Network with Semantic Guidance for Crack Segmentation
|
[
"Yongshang Li",
"Ronggui Ma",
"Han Liu",
"Gaoli Cheng"
] |
cs.CV
|
[
"cs.CV"
] |
Yongshang Li (corresponding author, [email protected]), Ronggui Ma (corresponding author, [email protected]), Han Liu: School of Information Engineering, Chang'an University, Xi'an 710064, Shaanxi, China
Gaoli Cheng: Shaanxi Expressway Mechanisation Engineering Co., Ltd, Xi'an 710038, Shaanxi, China
Through extensive research on deep learning in recent years and its application in construction, crack detection has evolved rapidly from rough detection at the image-level and patch-level to fine-grained detection at the pixel-level, which better suits the nature of this field. Despite numerous existing studies utilizing off-the-shelf deep learning models or enhancing them, these models are not always effective or efficient in real-world applications. In order to bridge this gap, we propose a High-resolution model with Semantic guidance, specifically designed for real-time crack segmentation, referred to as HrSegNet.
Our model maintains high resolution throughout the entire process, as opposed to recovering from low-resolution features to high-resolution ones, thereby maximizing the preservation of crack details. Moreover, to enhance the context information, we use low-resolution semantic features to guide the reconstruction of high-resolution features. To ensure the efficiency of the algorithm, we design a simple yet effective method to control the computation cost of the entire model by controlling the capacity of high-resolution channels, while providing the model with extremely strong scalability.
Extensive quantitative and qualitative evaluations demonstrate that our proposed HrSegNet has exceptional crack segmentation capabilities, and that maintaining high resolution and semantic guidance are crucial to the final prediction. Compared to state-of-the-art segmentation models, HrSegNet achieves the best trade-off between efficiency and effectiveness. Specifically, on the crack dataset CrackSeg9k, our fastest model HrSegNet-B16 achieves a speed of 182 FPS with 78.43% mIoU, while our most accurate model HrSegNet-B48 achieves 80.32% mIoU with an inference speed of 140.3 FPS. Furthermore, the quantitative results demonstrate that our model maintains robustness and stability in the presence of noisy data.
* HrSegNet is a high-resolution model explicitly designed for real-time crack segmentation, maintaining high resolution throughout the process to maximize the preservation of crack details.
* The proposed model uses low-resolution semantic features to guide the reconstruction of high-resolution features, enhancing the context information in the model and improving the final segmentation.
* The architecture includes a simple yet efficient method to control the entire model's computation cost by controlling the high-resolution channel's capacity, providing strong scalability while maintaining efficiency.
* HrSegNet achieves the best trade-off between efficiency and effectiveness compared to current popular segmentation models. The fastest model, HrSegNet-B16, achieves an inference speed of 182 FPS and 78.43% mIoU on the benchmark crackSeg9k, with a computational complexity of 0.66 GFLOPs. The model with the highest accuracy, HrSegNet-B48, achieves 80.32% mIoU at 140.3 FPS, with a computational complexity of 5.60 GFLOPs.
Keywords: crack segmentation, real-time processing, high-resolution representation, semantic guidance, automated inspection
§ INTRODUCTION
Cracks are early ailments of buildings, bridges, and highways <cit.>. Timely detection and repair can mitigate subsequent maintenance costs and ensure the user's safety. Traditional methods for crack detection, such as visual inspection and manual assessment, are costly, inefficient, and susceptible to subjective errors resulting in missed or false detections. Non-contact detection techniques evaluate cracks or defects in the target without physical contact <cit.>. These methods surpass manual approaches in precision and efficiency but heavily rely on equipment and require specialized knowledge. The advancement of digital image processing techniques has significantly expedited crack detection; however, the results are influenced by image quality, including noise that diminishes detection accuracy. Furthermore, the robustness of digital image processing techniques is weak when facing challenges posed by complex environments characterized by low lighting, reflections, and deformations <cit.>.
The advent of deep learning methods, particularly convolutional neural networks (CNNs), heralds a breakthrough in image processing techniques. Due to the efficiency, accuracy, and end-to-end capabilities, an increasing number of researchers are applying CNNs to the field of crack detection. CNN-based crack detection methods can be classified into three categories: image-level classification, patch-level object detection, and pixel-level semantic segmentation <cit.>. The first two methods can locate the position of cracks in an image, but their results are coarse and cannot determine the morphology and quantification of the cracks. Semantic segmentation assigns a label to each pixel in the image, enabling precise localization of crack pixels. As a result, it is naturally suited for crack detection tasks.
Most existing crack segmentation methods adopt models based on general scene understanding, overlooking the challenges specific to crack segmentation tasks in practical applications. Crack segmentation tasks differ from general scene-agnostic segmentation tasks. In general scene images, such as COCO-stuff <cit.> and Cityscapes <cit.>, multiple object classes of interest have similar pixel proportions. However, in crack images, the proportion of pixels representing the objects of interest is merely 1% of all pixels <cit.>. This gives rise to a highly imbalanced pixel-level classification task. Furthermore, cracks can exhibit diverse shapes, occur in complex backgrounds, and frequently coexist with noise, further complicating the task.
Current crack detection tasks increasingly rely on fast detection devices such as drones <cit.>, road measurement vehicles <cit.>, and specially customized robots <cit.>, as shown in Figure <ref>. These edge devices prioritize lightweight and real-time processing, often lacking high computational power. Therefore, there are strict requirements for algorithm complexity and efficiency. Several studies have found that high-resolution CNNs possess a superior ability to capture fine details and perform well in location-sensitive tasks <cit.>. However, high-resolution features significantly increase computational cost and model complexity, making it challenging for such models to meet real-time demands in practical crack segmentation. Based on these observations, we identify a gap between current CNN-based crack segmentation models and the real-time application in the real-world.
We propose a real-time high-resolution model, HrSegNet, to achieve high performance and efficiency in crack segmentation. Our model includes a high-resolution path designed to extract detailed information while maintaining high resolution throughout, as well as an auxiliary semantic path that provides step-by-step contextual guidance and enhancement to the high-resolution path. To ensure real-time performance while controlling computational cost, we control the channel capacity of the entire high-resolution path, thereby making the model highly lightweight and scalable. HrSegNet uses a two-stage segmentation head to restore resolution incrementally rather than in one step, thereby improving segmentation accuracy at a small computational cost. HrSegNet achieves superior accuracy while maintaining real-time performance, as evidenced by extensive experimental results on two crack benchmarks <cit.>.
The main contributions can be summarized as follows:
* A high-resolution model explicitly designed for crack segmentation, which enhances detailed features with semantic guidance while maintaining high resolution throughout the process.
* We design the HrSegNet to be highly scalable, enabling a lightweight backbone for a breakneck inference speed or increased channel capacity for improved accuracy.
* The fastest model we proposed, HrSegNet-B16, achieves an inference speed of 182 FPS and 78.43% mIoU on the benchmark CrackSeg9k, with a computational complexity of 0.66 GFLOPs. The model with the highest accuracy, HrSegNet-B48, achieves 80.32% mIoU at 140.3 FPS, with a computational complexity of 5.60 GFLOPs.
* The code, trained weights, and training records of the models are publicly available at https://github.com/CHDyshli/HrSegNet4CrackSegmentation
The rest of this paper is organized as follows. Section <ref> presents studies relevant to this work. The methodology is described in Section <ref>. The experiments and results are outlined in Section <ref>. Lastly, we summarize the work.
§ RELATED WORK
Deep learning-based semantic segmentation has dramatically advanced the performance of crack detection. The cutting-edge research mainly explores three directions: higher segmentation accuracy, faster inference speed, and more effective feature fusion. Therefore, this section will introduce crack segmentation-related work from these three aspects.
§.§ High-resolution models
Many studies indicate that high-resolution representation is essential for detecting small objects, such as cracks <cit.>. HRNet <cit.> adopted a high-resolution design, decomposing the feature extraction and fusion processes into different branches, which maintains high-resolution and multi-scale features. <cit.> and <cit.> aimed to deal with high-resolution crack images and strove to maintain the integrity of details, then they used HRNet as the baseline model. <cit.> proposed using higher-resolution feature maps to solve the grid effect problem caused by dilated convolution in deep neural networks.
Given the heavy nature of the original HRNet backbone, <cit.> opted to eliminate the down-sampling layer in the initial stage while reducing the number of high-resolution representation layers. Furthermore, integrating dilated convolution and hierarchical features were introduced to decrease the model's parameters while maintaining accuracy.
<cit.> innovatively proposed a high-resolution network structure based on the transformer to more reasonably utilize and fuse multi-scale semantic features.
Although the abovementioned approaches can achieve high accuracy, they come at the cost of high computational consumption and latency. This is because high-resolution feature maps result in more convolutional operations, which dominate the model's complexity. To achieve real-time performance, models require low-latency inference, which is not feasible with high-precision ones.
§.§ Real-time models
Most methods use lightweight backbones to achieve real-time crack segmentation. A lightweight encoder-decoder model called LinkCrack was designed based on UNet <cit.>. The authors adopted a ResNet34 with reduced channel numbers for the encoder, resulting in an inference speed of 17 FPS and 3.4 M parameters.
<cit.> proposed an improved DeeplabV3+ for road crack segmentation. The authors modified the encoder of the original architecture and introduced Ghost modules from GhostNet to generate more Ghost feature maps. This reduced the parameters required for forward propagation and computational complexity while maintaining performance. <cit.> proposed a novel approach to address the inefficiency of current mainstream CNNs, which overlooks the importance of different-level feature extractors. They introduced an asymmetric convolution enhancement module for low-level feature extraction and a residual expanded involution module for high-level semantic enhancement in crack segmentation task.
§.§ Feature fusion
In the context of semantic segmentation models, it is commonly agreed that the fusion of features from different scales is crucial for achieving accurate results. Currently, two main approaches for feature fusion based on their location are cross-layer connections and pyramid pooling. The typical model for cross-layer connections is UNet <cit.>, which extracts features from different layers through a completely symmetric encoder-decoder structure.
<cit.> compared two models, VGG-UNet and Res-UNet, which utilize VGG and ResNet as backbones respectively.
<cit.> designed an encoder-decoder model similar to UNet for crack segmentation in CCD images. They introduced the transformer to capture long-range contextual features in the image instead of using convolution.
Pyramid pooling <cit.> and atrous spatial pyramid pooling <cit.> are used to model long-range contextual information and extract features of different scales.
High-resolution detailed features are crucial for crack segmentation, but contextual information can still help the model achieve more accurate segmentation. Therefore, we propose a fusion method called "semantic guidance" that complements detailed information with semantic information, as discussed in Section <ref>. Our method differs from both cross-layer connections and pyramid pooling because we extract low-level features and fuse high-level information simultaneously. This parallel processing approach makes our model more efficient.
§ METHOD
The concept behind the proposed model is intuitive, with particular emphasis on crack detection. Our design philosophy is based on three key points: (1) high-resolution representations are crucial for detecting small objects such as cracks; (2) semantic features can guide and strengthen the extraction of comprehensive contextual information from high-resolution representations; (3) high-resolution means high computational costs, so it is necessary to control that in order to achieve real-time segmentation.
§.§ High-resolution path
In tasks requiring attention to detail and location sensitivity, high-resolution representation is of paramount importance. Nevertheless, high resolution entails a concurrent increase in computational demand.
Inspired by the ideas from STDCNet <cit.> and HRNet <cit.>, we design a simple, efficient, and controllable high-resolution path to encode rich detail information in crack images. As shown in Figure <ref>, the high-resolution path contains three High-resolution with Semantic Guidance (HrSeg) blocks and maintains the identical resolution throughout the process. However, ordinary convolutions are very expensive when faced with high-resolution feature maps. When convolution is applied to high spatial resolution, the floating-point operations (FLOPs) are dominated by the spatial size of the output feature map. For ordinary convolution, given input and output channel numbers, C_in and C_out, kernel size k, and output's spatial size W_out*H_out, when ignoring bias, the FLOPs of the convolution can be represented as:
FLOPs = C_in * C_out * k * k * W_out * H_out
In our design, the convolutional kernel size k and the output feature size W_out*H_out remain constant. Therefore, we can control the FLOPs by defining C_in and C_out. In our setting, we set C_in equal to C_out, and the default value is not greater than 64. This effectively controls the computational cost.
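A one-line helper makes this scaling explicit; the example channel counts and output size are illustrative.

```python
def conv2d_flops(c_in, c_out, k, h_out, w_out):
    """Multiply-accumulate count of a plain convolution, bias ignored (equation above)."""
    return c_in * c_out * k * k * h_out * w_out

# Doubling the channel width at fixed kernel size and resolution quadruples the cost:
print(conv2d_flops(32, 32, 3, 100, 100))   # 92,160,000
print(conv2d_flops(64, 64, 3, 100, 100))   # 368,640,000
```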
As shown in Figure <ref>, our high-resolution path consists of three stages, each containing three layers. Each layer includes a convolution with stride 1, followed by Batch Normalization (BN) and ReLU. It should be noted that we omit the stem of the model in Figure <ref>. The stem consists of two Conv-BN-ReLU sequences, each of which down-samples the spatial resolution of the input by a factor of 2. Therefore, before entering the high-resolution path, the size of the feature map is 1/4 of the original image, and the channel number and spatial resolution remain unchanged throughout the subsequent process.
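A minimal sketch of this path is given below in PyTorch for illustration (the released implementation is in PaddlePaddle); the semantic-guidance branches and the segmentation head are omitted here.

```python
import torch.nn as nn

def conv_bn_relu(c_in, c_out, stride=1):
    """3x3 convolution ('same'-style padding) followed by BN and ReLU."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class HighResPath(nn.Module):
    """Stem (two stride-2 convs -> 1/4 resolution), then three stages of three
    stride-1 Conv-BN-ReLU layers at a constant channel width `base`."""
    def __init__(self, in_ch=3, base=32):
        super().__init__()
        self.stem = nn.Sequential(conv_bn_relu(in_ch, base, stride=2),
                                  conv_bn_relu(base, base, stride=2))
        self.stages = nn.Sequential(*[conv_bn_relu(base, base) for _ in range(3 * 3)])

    def forward(self, x):
        return self.stages(self.stem(x))
```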
§.§ Semantic guidance
It is commonly believed that high-resolution feature maps contain rich details while down-sampling provides a sufficient receptive field for extracting contextual semantic information.
A dedicated context path is used to obtain macro features in the two-stream model, BiSeNetV2 <cit.>. However, the dual path causes information and structural redundancy, leading to inefficiency. HRNet <cit.> designs a gradually increasing sub-network from high to low resolution and parallel connects multi-resolution branches. However, its model is too heavy and unwieldy and far exceeds the requirements of real-time inference.
To address the issue of redundancy caused by separate context paths, as seen in BiSeNetV2 and HRNet, we propose a parallel semantic guidance path that is lightweight and flexible. Our approach involves down-sampling the high-resolution features and fusing them with semantic features for guidance and assistance simultaneously throughout the feature reconstruction process. The HrSeg block we designed, shown in Figure <ref>, demonstrates this process. Our design allows for flexible adjustment of semantic guidance, such as using different (Figure <ref> (a)) or identical (Figure <ref> (b)) down-sampling manners in the same block or different fusion methods during feature aggregation.
Figure <ref> (a) illustrates the semantic-guided component within the HrSeg block, which maintains the identical resolution as the high-resolution path but gradually decreases by a factor of 2 in subsequent blocks.
Figure <ref> (b) demonstrates another way to provide semantic guidance by gradually decreasing the spatial resolution of the semantic guidance within a HrSeg block. Each semantic-guided feature map is up-sampled to the same size as the high-resolution path and then adjusted to the same number of channels via a 1 × 1 convolution. The different down-sampling and fusion strategies will be discussed in Section <ref> and <ref>.
§.§ Segmentation head
Many semantic segmentation models with encoder-decoder structures usually perform aggregation of features at different levels before the final segmentation <cit.>. However, since we have continuously fused features at intermediate layers while maintaining high resolution throughout, the output directly enters the segmentation head.
We gradually recover the original spatial resolution from the high-resolution representation in steps instead of directly restoring from a 1/8-sized feature map to the original image size, as many existing works do (see Figure <ref> (a)). Our approach, as shown in Figure <ref> (b), first applies a 3 × 3 transposed convolution to the high-resolution representation, restoring spatial resolution to half the size of the original image. In the second step, the previous features are restored to the original image size through bilinear interpolation. The comparison between the single-step and double-step manners is illustrated in Section <ref>.
§.§ Deep supervision
Additional supervision can facilitate the optimization of deep CNNs during the training process. PSPNet <cit.> demonstrates the effectiveness of this approach by adding auxiliary loss at the output of the res4_22 block in ResNet-101 and setting the corresponding weights to 0.4. BiSeNetV2 <cit.> proposes booster training, which involves adding extra segmentation heads at the end of each stage in the semantic branch.
We add auxiliary loss to the final convolution layer of each HrSeg block, as shown in Figure <ref>. Unlike the final primary loss, the auxiliary loss segmentation heads follow the scheme shown in Figure <ref> (a). During the inference stage, the auxiliary heads are ignored, thus not affecting the overall inference speed. The total loss is the weighted sum of the cross-entropy loss of each segmentation head, as shown in Equation (<ref>):
L_t = L_p + α∑_i=1^n L_i
L_t, L_p, and L_i represent the total loss, the primary loss, and the auxiliary losses, respectively. In this work, the number of auxiliary losses n is 2, and the weight α is set to 0.5.
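In code, this booster training reduces to a weighted sum of per-head cross-entropies; a sketch in PyTorch is shown below for illustration.

```python
import torch.nn.functional as F

def total_loss(main_logits, aux_logits_list, target, alpha=0.5):
    """L_t = L_p + alpha * sum_i L_i, with cross-entropy for each segmentation head;
    `aux_logits_list` holds the auxiliary heads, which are used during training only."""
    loss = F.cross_entropy(main_logits, target)
    for aux_logits in aux_logits_list:
        loss = loss + alpha * F.cross_entropy(aux_logits, target)
    return loss
```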
§.§ Overall architecture
Table <ref> presents an instance of HrSegNet. Each stage consists of a set of convolution operations, with each operation containing the parameters kernel size k, output channel c, and stride s. The default value of c is set to base, which is a constant that controls the computational complexity.
The model comprises six stages, with each of the first two stages containing a stem block consisting of a Conv-BN-ReLU sequence with a stride of 2. The stem blocks quickly reduce the spatial dimensions of the input image to 1/4, with a feature map channel base. To reduce the computations of high-resolution representation, we assigned each stem block only one convolution, which has been proven to be sufficient in subsequent experiments. The second, third, and fourth stages are our carefully crafted HrSeg blocks. Each HrSeg block contains a high-resolution path and a semantic guidance branch. The feature map size of the high-resolution path remains unchanged throughout, while that of the semantic guidance path gradually decreases as the channel number increases. We use the same style as ResNet where channel numbers double when spatial resolution is halved. The final stage is the segmentation head, where the feature map from the previous layer is restored to the original size through a transposed convolution and bilinear interpolation. As we only predict cracks and background, the predicted output channel is 2.
In our experiments, we studied three models: HrSegNet-B16, HrSegNet-B32, and HrSegNet-B48, where 16, 32, and 48 represent the channel numbers of the high-resolution path. By managing the size of the base, we control the computational complexity of the model, making it highly scalable.
§ EXPERIMENTS AND RESULTS
This section will first introduce the datasets and evaluation metrics. Subsequently, we will provide a comprehensive depiction of the experimental setup. We scrutinize the significance and influence of each component in the HrSegNet and assess the scalability and generalization aptitude. Finally, we compare the accuracy and speed of HrSegNet with state-of-the-art.
§.§ Datasets and evaluation metrics
In the field of crack segmentation, publicly available datasets are relatively small in size and number compared to general scenarios, making it difficult to establish a fair benchmark for algorithm comparison. Currently, two works integrate previous crack datasets: CrackSeg9k <cit.> and the crack segmentation dataset of <cit.>. They contain seven identical sub-datasets. However, there are still some differences: CrackSeg9k was refined to address the presence of noisy annotations, while the latter consists of raw images without preprocessing. For convenience, throughout the rest of this paper we refer to the raw collection as the Original Crack Dataset (OCD) and to the refined CrackSeg9k as the Refined Crack Dataset (RCD).
OCD has 9,887 images (448 × 448 resolution), and RCD has 9,255 images (400 × 400 resolution). Both datasets have background and crack labels but lack designated training, validation, and test sets. To ensure a fair comparison, we randomly select 900 images each for validation and testing from shared images, with the rest used for training. OCD is noisier than RCD, so we use it to evaluate the model's generalization ability while we choose RCD to evaluate theoretical performance.
We employ two evaluation metrics to assess the segmentation performance, namely mean Intersection over Union (mIoU) used to assess accuracy and Frames Per Second (FPS) as a measure of speed. In addition, the floating-point operations (FLOPs) and parameters (Params) of the model serve as indicators to evaluate the computational complexity and size.
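For reference, a simple two-class mIoU computation over integer label maps could look as follows; this is our own helper, not the evaluation code of the paper.

```python
import numpy as np

def mean_iou(pred, target, n_classes=2):
    """Mean intersection-over-union over the background and crack classes."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```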
§.§ Implementation details
The training phase on OCD and RCD datasets employs mini-batch stochastic gradient descent with a momentum of 0.9 and weight decay of 5e-4. The batch size is set to 32. A “poly" policy is used to control the learning rate where the initial rate of 0.01 is multiplied by (1 - iter/max_iter)^power, with the power set to 0.9. The models are trained for 100,000 iterations from scratch with “kaiming normal" initialization. A warm-up strategy is used for the first 2000 iterations to ensure stable training. We use various data augmentation techniques, including random distortion, random horizontal flipping, random cropping, random resizing, and normalization. The scale range for random resizing is consistent between the two datasets, as both use a range of 0.5 to 2.0. The random distortion applies random variations to an image's brightness, contrast, and saturation levels, with each parameter set to 0.5. All the training images are cropped to 400 × 400 resolution. Online Hard Example Mining (OHEM) is used to train all models.
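The learning-rate schedule can be written compactly as below; the linear form of the warm-up is an assumption, since the text only states that a warm-up is applied for the first 2000 iterations.

```python
def poly_lr(iteration, base_lr=0.01, max_iter=100_000, power=0.9, warmup=2000):
    """'Poly' learning-rate policy with an (assumed linear) warm-up phase."""
    if iteration < warmup:
        return base_lr * (iteration + 1) / warmup
    return base_lr * (1.0 - iteration / max_iter) ** power
```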
We run the models using TensorRT for a fair comparison during the inference phase. For the OCD, the data is resized to 400 × 400 for inference and then restored to the original 448 × 448. The inference time is measured using an NVIDIA GeForce RTX 2070 SUPER with CUDA 12.1 and cuDNN 8.9. The inference process is repeated over ten iterations to reduce the impact of fluctuations. We conduct all experiments based on Paddle 2.4 and the same hardware platform.
§.§ Ablation study on RCD
In this subsection, we conducted an ablation study on the RCD to evaluate the effectiveness of the components of HrSegNet.
§.§.§ High-resolution path only
We first explore the influence of the high-resolution path's resolution on crack segmentation results. HRNet <cit.> and DDRNet <cit.> keep the high-resolution branch at 1/4 and 1/8 of the original image resolution, respectively, in order to extract detailed features. Previous work has not attempted to maintain the high-resolution path closer to the original image resolution, as the convolutional operations in such a path consume too much computation. However, since our high-resolution path, which controls computational cost by managing channel numbers, is very lightweight, we also try a high-resolution path at 1/2 of the original image resolution. As discussed in Section <ref>, we control the high-resolution path's spatial resolution by defining the stem's output. Table <ref> shows detailed comparative experiments for the three resolutions. When the resolution is set to 1/4 of the original image, the high-resolution model achieves 74.03% mIoU, which is 3.6% and 1.33% higher than that of the 1/2 and 1/8 variants, respectively. Although the accuracy at 1/8 resolution is 1.33% lower than at 1/4, its computational cost is only 28% of the latter. When the computational requirements of the target device are extremely stringent, 1/8 resolution is therefore still an excellent choice. However, in subsequent experiments we keep the most accurate 1/4 resolution as our default.
§.§.§ Semantic guidance
As discussed in Section <ref>, we design two distinct schemes for extracting semantic information. One approach involves multi-resolution (see Figure <ref> (b)) guidance within the HrSeg block, which is repeated three times. The other approach entails single-resolution (see Figure <ref> (a)) guidance within the block but with the use of different resolution guidance paths across the three HrSeg blocks. Table <ref> displays the results of both semantic guidance methods. When compared to the single-path model, both of the semantic guidance schemes prove to be superior. At a resolution of 1/4 of the original image, the high-resolution path achieves 74.03% mIoU. Furthermore, with a simple summation, both of the semantic guidance approaches yield improvements of 2.72% and 1.56%, respectively. This observation suggests that semantic guidance has a notable complementary effect on the features extracted through the high-resolution path. For the two different guidance manners, the computational cost of single-resolution guidance within the block is 40% that of multi-resolution, and the parameter remains in a small range relative to the previous high-resolution model, HRNet <cit.>. Here, we adopt single-resolution semantic guidance within the block as the default.
To better investigate the impact of semantic guidance on crack segmentation, we visualize the activation maps using Seg-Grad-CAM <cit.>. The results are displayed in Figure <ref>. The first and last two columns represent the two stages of HrSeg block 1 and 2. The first row shows the original image, while the second and third rows depict the Class Activation Map (CAM) visualizations without and with semantic guidance, respectively. It is clear that when semantic guidance is introduced, the HrSegNet can pay more attention to crack objects. In contrast, without semantic guidance, the model disperses its attention across the background (see first two columns in Figure <ref>). Additionally, at different stages of the model, as it becomes deeper (HrSeg block 2 in Figure <ref>), we observe that the model focuses more on small cracks when using semantic guidance, whereas, without semantic guidance, the model even struggles to detect them.
§.§.§ Feature fusion
The fusion of features at different levels significantly impacts the result of semantic segmentation. For instance, when using vanilla semantic guidance, combining semantic and detailed information through summation improved the mIoU by 2.72% (see Table <ref>). There are two mainstream methods for feature fusion: one is to fuse features of different positions during the model processing, such as skip connections used by UNet; the other is to fuse features before they enter last the segmentation head, such as PPM and ASPP. However, the latter is too heavy for real-time detection, so the fusion methods used in this paper are all carried out during the model processing.
BiSeNetV2 and DDRNet use a bilateral fusion strategy to merge high and low-level information to improve the feature extraction ability, but this structure leads to information redundancy. We use two simple yet practical fusion methods to reduce computational complexity and maximize semantic information guidance: element-wise multiplication and element-wise summation. Let X_h and X_s denote the high-level path and semantic-guided feature maps, respectively. These two fusion manners can be represented as follows:
X_h = X_h ⊗ Sigmoid(up(X_s))
X_h= X_h ⊕ ReLU(up(X_s))
⊗ and ⊕ represent element-wise multiplication and element-wise summation, respectively. up denotes up-sampling. We use different activation functions: sigmoid for element-wise multiplication and ReLU for element-wise summation.
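The two fusion variants can be sketched as follows (PyTorch used for illustration; the 1 × 1 convolution that matches the channel numbers beforehand is assumed to have been applied already).

```python
import torch
import torch.nn.functional as F

def fuse_multiply(x_h, x_s):
    """X_h = X_h ⊗ sigmoid(up(X_s)): the semantic map acts as a soft gate."""
    up = F.interpolate(x_s, size=x_h.shape[-2:], mode="bilinear", align_corners=False)
    return x_h * torch.sigmoid(up)

def fuse_sum(x_h, x_s):
    """X_h = X_h ⊕ relu(up(X_s)): element-wise summation, the variant adopted here."""
    up = F.interpolate(x_s, size=x_h.shape[-2:], mode="bilinear", align_corners=False)
    return x_h + F.relu(up)
```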
The comparison of the results obtained from the two fusion strategies is presented in Table <ref>. Since both methods are point-wise operations, they have the same computational cost. The summation method outperforms the multiplication by 1.1% in terms of mIoU, indicating its ability to provide better guidance for high-resolution details.
§.§.§ Segmentation head
Our research proposes two forms of segmentation heads, single-step, and double-step, to generate high-resolution segmentation predictions through different up-sampling strategies. Specifically, the single-step segmentation head employs a single up-sampling operation to convert low-resolution feature maps to the same resolution as the input image. In contrast, the double-step segmentation head progressively up-samples the feature maps to the target resolution through two up-sampling operations.
In our experiments, we compared the performance of these two forms of segmentation heads, as shown in Table <ref>. The results demonstrate that, while the computational cost is almost the same for both forms, the double-step segmentation head outperforms the single-step segmentation head by 0.77% in terms of mIoU. This suggests that the double-step up-sampling operation can better capture fine details.
§.§.§ Deep supervision
Deep supervision is only inserted into the high-resolution path during training and ignored during inference, so the additional heads do not affect inference efficiency. As shown in Table <ref>, we explore different positions for deep supervision. It is apparent that incorporating deep supervision results in an enhancement in segmentation accuracy without incurring any reduction in inference speed. Specifically, including deep supervision in HrSeg blocks 1 and 2 simultaneously yielded a 1.69% mIoU increase. We conducted additional research on the convergence behavior of HrSegNet while utilizing deep supervision, illustrated in Figure <ref>. The results indicate that incorporating deep supervision leads to a more rapid and stable convergence process, thereby substantially reducing the overall training time required.
§.§ Scalability
In this section, we delve into the proposed structure's scalability. As we introduce in Section <ref>, our model is designed to be very flexible for real-time applications. We can easily generalize it to larger or smaller models by adjusting the capacity of the high-resolution path. Table <ref> showcases the quantization results for the segmentation models with varying computational complexities. The fastest model we proposed, HrSegNet-B16, achieves an inference speed of 182 FPS and 78.43% mIoU on the benchmark CrackSeg9k, with a computational complexity of 0.66 GFLOPs. The model with the highest accuracy, HrSegNet-B48, achieves 80.32% mIoU at 140.3 FPS, with a computational complexity of 5.60 GFLOPs.
§.§ Stability and robustness
We assess the stability and robustness of our model in this section. As described in Section <ref>, OCD and RCD share seven sub-datasets, but the annotations in OCD are more noisy and challenging. We train the HrSegNet-B32 on both datasets, and the quantization results are shown in Table <ref>. Our model achieves the mIoU score that is on par with the performance obtained on the precisely annotated dataset RCD when applied to the dataset OCD which has more noisy annotations. Qualitative results of the model on the test set are shown in Figure <ref>. The first column displays the input images, while the second and fourth columns show the pseudo-color annotations of the same sample on RCD and OCD, respectively. It can be clearly seen that the annotations of OCD contain more noise, with hair-like boundary distortions around the cracks. This caused scenarios where the model's metric score is lower even for visually accurate predictions due to the distortions in the ground truth. The third and fifth columns display the model's prediction on the two datasets, respectively. Despite the presence of noise, our model exhibits consistent performance and produces stable results even in the presence of noisy conditions. In situations where errors occur in ground truth (see the last row of Figure <ref>), both models accurately predict the location of actual crack pixels.
§.§ Comparisons with state-of-the-art
Our objective is to attain a superior trade-off between accuracy and speed. Thus our emphasis lies in achieving high segmentation accuracy while maintaining real-time inference. In this section, we compare the results of our models with seven segmentation models on the RCD test set. We utilize the RCD training and validation sets to train all the models and evaluate their segmentation accuracy on the test set. Inference time measurements are conducted on NVIDIA GeForce RTX 2070 SUPER with TensorRT 8.6. For the sake of efficient and expedient comparisons, our training is conducted from scratch without any pre-training on other datasets.
Table <ref> compares our method and the state-of-the-art. Our HrSegNet achieves excellent inference speed while maintaining competitive segmentation accuracy. Specifically, our smallest model, HrSegNet-B16, achieves 78.43% mIoU on the self-divided RCD test set at a speed of 182 FPS, outperforming PSPNet <cit.>, BiSeNetV2 <cit.>, STDCSeg <cit.>, and DeeplabV3+ <cit.> with similar accuracy. Moreover, the computational complexity of HrSegNet-B16 is remarkably low, equivalent to only 13.4% and 12.6% of that of the state-of-the-art real-time semantic segmentation models BiSeNetV2 and STDCSeg, respectively. HrSegNet-B16 requires only 0.66 GFLOPs of computation, making it very lightweight. The medium-sized model, HrSegNet-B32, achieves a performance improvement of 1.27% compared to the smaller one. Although its parameters and computational complexity increase fourfold, the model still maintains a very fast real-time segmentation speed of 156.6 FPS. Increasing the channel capacity of the HrSeg block to 48 (HrSegNet-B48) yields a further segmentation accuracy improvement of 0.62%. While the parameters and computational complexity double, it still meets real-time requirements and achieves 140.3 FPS.
Comparative experiments reveal that despite its highest computational complexity of 75.87 GFLOPs, UNet <cit.> only attains a segmentation accuracy of 76.71%, which is the lowest among all models. DDRNet <cit.>, similar to our structure, only achieves 76.58% mIoU on the RCD test set. OCRNet <cit.>, which uses HRNet-W18 as the backbone, achieved the highest 80.90% mIoU. However, as discussed in Section <ref>, HRNet is very heavy and complex, and the inference speed is difficult to achieve real-time segmentation requirements. As CrackSeg9k does, we also test DeeplabV3+, but they use ResNet101 as the backbone, while we use ResNet18 because ResNet101 cannot meet the real-time requirements. In our test, DeeplabV3+ achieves 78.29% mIoU at 60.6 FPS but still lags behind our HrSegNet. In order to emphasize the effectiveness of our method, we show some examples of the RCD test set in Figure <ref>.
§ CONCLUDING REMARKS
We observe that segmenting cracks requires a high-resolution representation and supplementary contextual information. We devise a novel architecture named HrSegNet, which efficiently and parallelly processes high-level and low-level information, thereby merging them. The HrSegNet exhibits high scalability, yielding state-of-the-art segmentation accuracy and significantly outperforming state-of-the-art models in terms of inference speed, as demonstrated on the CrackSeg9k dataset. Compared with popular segmentation models, we find that excessive design for crack segmentation is ostentatious and impractical. Our design, on the other hand, is intuitive, versatile, and remarkably effective. We hope this research will contribute to advancements in the field of crack segmentation.
§ CREDIT AUTHORSHIP CONTRIBUTION STATEMENT
Yongshang Li: Conceptualization, Methodology, Writing – original draft, Writing – review & editing, Investigation, Validation. Ronggui Ma: Resources, Supervision, Funding acquisition, Writing – review & editing, Project administration. Han Liu: Investigation, Writing – review & editing. Gaoli Cheng: Resources, Supervision, Funding acquisition.
§ FUNDING
This work was supported in part by the Key Research and Development Project of China under Grant 2021YFB1600104, in part by the the National Natural Science Foundation of China under Grant 52002031, and also in part by the Scientific Research Project of Department of Transport of Shaanxi Province under Grants 20-24K, 20-25X.
§ DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
http://arxiv.org/abs/2307.02143v3
|
20230705093648
|
A Brief Review on the Asymptotic Symmetries of Gravity in Higher Dimensions
|
[
"Arpan Kundu"
] |
hep-th
|
[
"hep-th",
"gr-qc"
] |
A Brief Review on the Asymptotic Symmetries of Gravity in Higher Dimensions
Arpan Kundu
===========================================================================
§ INTRODUCTION
Inspired by the seminal work <cit.>, over the last decade there has been huge progress in the literature in understanding certain generic properties of the infrared sectors of gauge theories and gravity (see <cit.> for an early review). These works have established an interesting connection among the following three seemingly disjoint subjects of study: (1) infrared factorisation theorems of the flat-space 𝒮-matrix of the theory (soft theorems), (2) symmetries that preserve certain large-distance behaviour of the fields (asymptotic symmetries), and (3) certain physically observable low-frequency effects of radiation (memory effects). Together they have been popularly referred to as the infrared triangle.
Our focus here will be on gravity. The quest to understand the properties of quantum gravity has been one of the biggest questions of theoretical high-energy physics. In the case of spacetimes with a negative cosmological constant (asymptotically AdS spacetimes), there exists a fairly well-established notion of holography which goes by the name of the AdS/CFT correspondence (see <cit.> for a recent review). This correspondence holds in any spacetime dimension. AdS/CFT serves as a theoretical laboratory to explore various ideas related to quantum gravity in spacetimes with a negative cosmological constant. One would like to have a similar understanding in the case of spacetimes with zero cosmological constant (asymptotically flat spacetimes). In this aspiration for Flat Holography, the infrared triangle for gravity plays a crucial role.
Research in the last decade has made great progress in understanding the IR-triangle of gravity in d=4. However, going to higher dimensions brings additional challenges. Earlier works had concluded that the asymptotic symmetries of asymptotically flat spacetimes are trivial in higher even dimensions[In higher odd dimensions, there is a crucial technical roadblock to pursuing similar studies, namely the non-existence of a useful notion of Null Infinity <cit.>. We shall not discuss these issues here and shall restrict to higher even dimensions only.]. But the Infrared triangle in d=4 and the existence of soft theorems in generic dimensions inspired a number of recent works to revisit this issue.
In this short review, we summarize this recent progress in understanding the asymptotic symmetries of asymptotically flat spacetimes in higher (d>4) even dimensions. The rest of this review is organized as follows. In section-<ref>, we introduce the basic background material necessary to understand the rest of the review. In section-<ref>, we briefly discuss the early works on asymptotic symmetries in d=4 and the negative results regarding the existence of non-trivial asymptotic symmetries in higher dimensions. Then in section-<ref> we discuss new insights from the Infrared triangle in perturbative quantum gravity in d=4 and how they motivate revisiting these negative results. In section-<ref> we discuss in detail some of the recent results on asymptotic symmetries in higher dimensions and how they bypass the no-go conditions posed by earlier works. We end this short review with a summary and a discussion of the open issues in section-<ref>.
§ BACKGROUND
Asymptotic symmetries are the symmetries of the solution space of a theory that preserve certain boundary conditions. In our case, we shall consider gravity without a cosmological constant coupled to massless fields. This requires preserving a certain behaviour of the metric at its boundary at infinity. Since we shall be dealing with massless fields, we shall be looking at the infinity reached along null geodesics, i.e. Null Infinity (both future, ℐ^+, and past, ℐ^-). Null infinity (ℐ) in general dimension d has the topology 𝕊^(d-2)×ℝ. We are interested in the class of solutions of the Einstein equations that have a specific behaviour near ℐ, the details of which are given later.
Although asymptotic symmetries are by their very nature a coordinate-independent notion, certain coordinate systems are more convenient for computations. A particularly suitable coordinate system for studying asymptotic symmetries tied to ℐ^+ is retarded Bondi coordinates (u,r,z), where r is the radial distance from the origin, u=t-r is the retarded time, and z is the collective coordinate on the conformal sphere 𝕊^(d-2). In these coordinates, flat spacetime can be written as:
ds^2=-du^2-2dudr+r^2γ_abdz^adz^b.
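As a quick check (a standard change of coordinates, included here for the reader's convenience), substituting u=t-r, i.e. dt=du+dr, into the Minkowski metric ds^2=-dt^2+dr^2+r^2γ_abdz^adz^b gives
-dt^2+dr^2=-(du+dr)^2+dr^2=-du^2-2dudr,
which reproduces the line element above.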
Now, ℐ^+ in this coordinate system is reached as r→∞ keeping u fixed. The question one asks is: what is the class of all spacetimes that behave like flat spacetime at ℐ^+ in a suitable sense? One starts with a general metric in the Bondi gauge:
ds^2= M e^2β du^2 - 2e^2β dudr + g_ab (dz^a - U^a du) (dz^b - U^b du),
where the Bondi gauge condition is given by
g_rr=0 g_ra=0 (g_ab/r^2)= (γ_ab).
Here, M(u,r,z), U^a(u,r,z), β(u,r,z) and g_ab(u,r,z) are general functions of the coordinates. Asymptotic flatness is ensured by demanding that the metric satisfy certain large-r fall-offs near ℐ^+, which amounts to prescribing the large-r behaviour of these functions. There is no unique choice of such conditions, but one is guided by the following basic principles: (1) the conditions should be weak enough to allow physically interesting solutions like black holes and gravitational waves; (2) the conditions should be strong enough to ensure that physical quantities like charge, mass, angular momentum, etc. do not diverge. The algebra of non-trivial symmetry transformations which preserve a specific boundary condition is called the asymptotic symmetry algebra (ASA). Since there is no unique fall-off condition for asymptotically flat spacetimes (AFS), there is no unique ASA; weakening the fall-offs corresponds to enlarging the ASA.
§ EARLY WORKS ON ASYMPTOTIC SYMMETRIES IN GRAVITY
The study of asymptotic symmetries can be traced back to as early as the sixties. In the seminal works <cit.>, in d=4, the ASA was obtained to be the celebrated 𝔅𝔐𝔖 algebra, which is a semidirect product of Supertranslation (𝔖𝔗) and Lorentz. 𝔖𝔗 itself is an infinite-dimensional enlargement of the translation subalgebra of the Poincare algebra (which is the semi-direct product of Translation and Lorentz). The 𝔖𝔗 algebra is parametrized by a free function f(z) on 𝕊^2. In d=4, this 𝔅𝔐𝔖 algebra was further extended in later works: there are different proposals for an infinite-dimensional extension of the 𝔅𝔐𝔖 algebra, using different infinite-dimensional extensions of the Lorentz subalgebra of the 𝔅𝔐𝔖 (𝔖𝔗⋉Lorentz) algebra. In d=4, Lorentz transformations induce the global conformal transformations on 𝕊^2. Inspired by an attempt to build a proposed BMS-CFT correspondence (in analogy to AdS/CFT), in the Extended BMS (𝔈𝔅𝔐𝔖) proposal <cit.>, the Lorentz algebra is extended to include local conformal transformations on 𝕊^2. Later, inspired by an attempt to build an improved understanding of the Infrared Triangle [We shall return to this point in a bit more detail in the next section-<ref>.], in the Generalised BMS (𝔊𝔅𝔐𝔖) proposal <cit.>, the Lorentz algebra is extended to include any area-preserving smooth diffeomorphisms of 𝕊^2. Both of these infinite-dimensional extensions of the Lorentz algebra are called Superrotation in d=4. In the 𝔈𝔅𝔐𝔖 case, Superrotation is parametrized by holomorphic vector fields V^a(z) on 𝕊^2, whereas in the 𝔊𝔅𝔐𝔖 case, Superrotation is parametrized by any smooth area-preserving vector fields V^a(z) on 𝕊^2. It is important to keep in mind that the 𝔖𝔗 algebra is a subalgebra of all three proposed ASAs in d=4, namely 𝔅𝔐𝔖, 𝔈𝔅𝔐𝔖, and 𝔊𝔅𝔐𝔖.
In <cit.>, the ASA corresponding to AFS in even d≥4 was studied in connection with the displacement memory effect. The displacement memory effect is the permanent (DC) shift observed in a pair of gravitational-wave detectors due to the passage of a burst of gravitational waves. In <cit.>, it was argued that in d=4 the Supertranslations are tied to the displacement memory effect, and that if one uses a strict fall-off such that the ASA is Poincare, thus disallowing Supertranslations, generic radiative solutions are automatically excluded. Hence, allowing Supertranslation is essential in d=4. In contrast, in d>4, while the memory effects appear at 𝒪(r) in the r-expansion of the angular part of the metric, radiation occurs at 𝒪(r^-(d-2)). Hence, enlargement of the Poincare algebra to include Supertranslation is not a physical necessity; furthermore, allowing for Supertranslation leads to divergent physical quantities. By this logic, it was argued that Supertranslation does not exist in even dimensions d>4. (See <cit.> for earlier works regarding asymptotic symmetries of higher-dimensional gravity, which also give negative results about the existence of Supertranslation in higher even dimensions.) However, certain new insights from the study of the 𝒮-matrix in the corresponding quantum theory in d=4 have led to revisiting the ASA in higher d. We shall discuss these motivations in section-<ref>.
§ NEW INSIGHTS FROM INFRARED TRIANGLE
So far we have talked about asymptotic symmetries only in the classical theory. One can ask what the implications of these symmetries are at the level of quantum gravity. More specifically, can we say anything about the properties of the perturbative quantum gravity 𝒮-matrix? Starting with <cit.>, a program was initiated in which certain already known factorisation theorems of the perturbative quantum gravity 𝒮-matrix were found to be a consequence of elevating the asymptotic symmetries of the classical theory to a conjectured symmetry of the 𝒮-matrix of the corresponding quantum theory. These factorisation theorems are called Soft graviton theorems[Soft Theorems hold for any gauge theory, but our focus here will be on soft gravitons only.].
Consider a scattering amplitude containing finite energy particles of any mass and spin, and one soft (energy ω→0) graviton. The amplitude can be factorised in terms of the amplitude of the other finite energy particles without the soft graviton, times a universal factor called the Soft factor. In fact, there are similar factorisation theorems for more than one soft graviton, but in this article we shall focus mainly on Single Soft Graviton theorems.
Early works on the Soft graviton theorem in tree-level perturbative quantum gravity can be traced back to the sixties <cit.>. Later, in <cit.>, Soft graviton theorems were extended to subleading orders in the energy of the Soft graviton. Recent works by Sen and his collaborators <cit.> have put this on a much more robust footing by proving soft graviton theorems for an arbitrary but finite number of soft gravitons in generic theories of quantum gravity in generic dimensions, where the finite energy particles can have any mass and spin. In d≥5, due to the absence of infrared divergences, these factorisation theorems are true for all-loop amplitudes. In d=4, infrared divergences force these statements to be made at tree level, and there are additional logarithmic corrections <cit.> to the soft factors once loop effects are taken into account.
In this section, we first introduce the Leading and Subleading Soft graviton theorems in general dimensions and discuss how in d=4 they are related to asymptotic symmetries. Then, we discuss the early hints that showed similar relations might hold in higher even dimensions as well.
§.§ Leading Soft Theorem & Supertranslation Symmetry in d=4
We briefly revisit how the Leading Soft Graviton Theorem is related to the conjectured Supertranslation symmetry of the quantum gravity 𝒮-matrix. Let us start by stating the Leading Soft Graviton Theorem <cit.>.
Consider a scattering amplitude containing i=1,⋯,n finite energy particles of any mass and spin, and one soft graviton (energy going to zero in the limiting sense). Then the Leading Soft Graviton Theorem can be written as:
lim_ω→ 0ω⟨Out|𝔞_λ(ω,z_s)𝒮|in⟩=√(8π G_N)(∑_iϵ_λ^μνk^i_μk^i_ν/(p/ω)· k^i ) ⟨out|𝒮|in⟩.
Here, p^μ and ϵ_μν are the momentum and polarisation tensor of the soft graviton with polarisation label λ. 𝔞_λ(ω,z_s) creates a soft graviton in the “Out" state with energy ω whose direction on the celestial sphere can be denoted using collective coordinate z_s. k^i_μ is the momentum of the i-th finite energy particle.
Although the connection between soft theorems and asymptotic symmetries can be built for finite energy particles of any mass and spin, let us now restrict to perturbative gravity coupled to a massless scalar for simplicity. In this case, it was shown in <cit.> that the above soft theorem (<ref>) is a consequence of the conjectured Supertranslation symmetry of the 𝒮-matrix.
One way to establish this equivalence is to start from the Soft theorem and derive from it a Ward identity of Supertranslation for the 𝒮-matrix. In this way, one obtains a Ward identity of the form
⟨Out|[𝒬^d=4_ST,𝒮]|in⟩=0,
where 𝒬^d=4_ST is the quantized version of the Supertranslation charge in d=4. Since the charge obtained from the Soft theorem matches the charge obtained from classical gravity, this proves that the Soft theorem (<ref>) is a consequence of the Supertranslation symmetry.
Another way is to start from the classical symmetry and obtain a conserved charge (𝒬^d=4_ST); the charges are parametrized by a free function f(z) on 𝕊^2. One then elevates this classical symmetry to a symmetry of the quantum gravity 𝒮-matrix by writing a Ward identity of Supertranslation (<ref>). Finally, from this one derives the soft theorem (<ref>) as a consequence of the Ward identity (<ref>).
A few conceptual points need to be stated here. A priori there are two independent 𝔅𝔐𝔖 algebras: (1) 𝔅𝔐𝔖^+ acting on ℐ^+, labelled by a free function f^+(z), and (2) 𝔅𝔐𝔖^- acting on ℐ^-, labelled by a free function f^-(z). In <cit.>, a diagonal subalgebra 𝔅𝔐𝔖^0 of 𝔅𝔐𝔖^+×𝔅𝔐𝔖^- was identified as the symmetry of the gravitational scattering problem. This is done through the antipodal matching f^+(z)=f^-(-z). Also, when going from the Ward identity (<ref>) to the soft theorem (<ref>), one needs to choose the free function f(z) such that it localises on the particular direction of the soft graviton. Hence, the Leading Soft Theorem can be thought of as a consequence of spontaneous Supertranslation symmetry breaking in the space of degenerate vacua.
It is also worth mentioning here that the Supertranslation symmetry in d=4 is related to classical observable effects called gravitational displacement memory <cit.>.
§.§ Subleading Soft Graviton Theorem & Superrotation Symmetry in d=4
Soft factorization of the amplitude also holds at subleading order in the energy of the soft graviton <cit.>. Consider a scattering amplitude containing i=1,⋯,n finite energy particles of any mass and spin, and one soft graviton (energy going to zero in the limiting sense). Then the Subleading Soft Graviton Theorem can be written as:
lim_ω→ 0(1 + ω∂_ω)⟨Out|𝔞_λ(ω,z_s)𝒮|in⟩ =-i√(8π G_N)(∑_iϵ_λ^μνk^i_νp^ρ𝒥^i_μρ/p· k^i )⟨out|𝒮|in⟩.
Here, as before p^μ and ϵ_μν are the momentum and polarisation tensor of the soft graviton with polarisation label λ. 𝔞_λ(ω,z_s) creates a soft graviton in the “Out" state with energy ω whose direction on the celestial sphere can be denoted using collective coordinate z_s. 𝒥^i_μν is the angular momentum of the i-th finite energy particle.
As before, for simplicity let us restrict to gravity coupled to a massless scalar. As in the leading case, one asks whether there is an asymptotic symmetry origin of this subleading factorisation. In <cit.>, in d=4, starting from the Subleading Soft Theorem, a Ward identity of the form
⟨Out|[𝒬^d=4_SR,𝒮]|in⟩=0
was derived, where 𝒬^d=4_SR is the quantized version of the Superrotation charge in d=4 corresponding to the 𝔈𝔅𝔐𝔖 algebra. However, the singular nature of the vector fields prevented proving this equivalence in the other direction, namely from the Ward identity (<ref>) to the Soft theorem (<ref>). This prompted the authors of <cit.> to propose a different definition of Superrotation based on Diff(𝕊^2) vector fields, as mentioned in section-<ref>, corresponding to the proposal of the 𝔊𝔅𝔐𝔖 algebra as the ASA for AFS in d=4. For Superrotations corresponding to 𝔊𝔅𝔐𝔖 one can go both ways: from the Ward identity (<ref>) to the Soft theorem (<ref>) and the reverse. A first-principles derivation of the charges was given in <cit.>.
As in the leading case, a few conceptual points need to be stated here. A priori there are two independent 𝔊𝔅𝔐𝔖 algebras: (1) 𝔊𝔅𝔐𝔖^+ acting on ℐ^+, labelled by free functions (f^+(z), V^a_+(z)) and (2) 𝔊𝔅𝔐𝔖^- acting on ℐ^-, labelled by free functions (f^-(z), V^a_-(z)). Inspired by <cit.>, a diagonal subalgebra 𝔊𝔅𝔐𝔖^0 of 𝔊𝔅𝔐𝔖^+×𝔊𝔅𝔐𝔖^- can be identified as the symmetry of the gravitational scattering problem. This is done through the following antipodal matching
f^+(z)=f^-(-z) V^a_+(z)=V^a_-(-z).
Also, when going from the Ward identity (<ref>) to the soft theorem (<ref>), one needs to choose the free vector field V^a(z) such that it localises on the particular direction of the soft graviton. Hence, the Subleading Soft Theorem can be thought of as a consequence of spontaneous Superrotation symmetry breaking in the space of degenerate vacua. This corresponds to spontaneous symmetry breaking from 𝔊𝔅𝔐𝔖 to 𝔅𝔐𝔖.
It is also worth mentioning here that the Superrotation symmetry in d=4 is related to classical observable effects called the gravitational Spin memory <cit.>.
§.§ Ward identities from Soft Theorems in Higher Even Dimensions
In d=4, the Leading Soft Graviton Theorem follows from the supertranslation symmetry of the 𝒮-matrix <cit.>. Since the leading soft graviton theorem (<ref>) holds in all dimensions, a natural question is whether supertranslation also exists in all dimensions. Contrary to the classical results of <cit.>, in <cit.>, based on the factorisation properties of the perturbative quantum gravity 𝒮-matrix, it was argued that Supertranslation (and correspondingly 𝔅𝔐𝔖) survives in higher even dimensions (d=2m+2), and Supertranslation-compatible fall-offs of the Bondi metric (<ref>) were proposed. In <cit.>, in all higher even dimensions, a Ward identity for the 𝒮-matrix of the following form was derived starting from the Leading Soft Graviton Theorem (<ref>):
⟨Out|[𝒬^d=2m+2_ST,𝒮]|in⟩=0
From this Ward identity, the Supertranslation charge (𝒬^d=2m+2_ST) can be read off in generic higher even dimensions. This charge was shown to generate the Supertranslation using certain proposed commutation relations among the radiative degrees of freedom. However, since there was no first-principles derivation of the charge in classical gravity, this created an apparent contradiction with the results of classical gravity <cit.>, the resolution of which will be discussed in the next section.
Inspired by <cit.>, in <cit.>, based on an attempted generalisation of Diff(𝕊^2) Superrotation to higher even dimensions (in terms of Diff(𝕊^2m) vector fields), a Superrotation Ward identity of the following form was derived in linearized gravity in higher even dimensions starting from the Subleading Soft Graviton Theorem (<ref>):
⟨Out|[𝒬^d=2m+2_SR,𝒮]|in⟩=0.
However, lacking a first-principles understanding of the 𝔊𝔅𝔐𝔖 symmetries in higher-dimensional classical gravity, it was not clear whether one can indeed generalise superrotations in higher dimensions in terms of Diff(𝕊^2m) vector fields, nor whether one can properly embed the 𝔅𝔐𝔖 algebra as a subalgebra of this 𝔊𝔅𝔐𝔖 algebra. These issues were addressed in later works <cit.>.
§ REVISITING THE ASYMPTOTIC SYMMETRIES IN HIGHER EVEN DIMENSIONS
As already mentioned, regarding the non-trivial ASA in higher even dimensions, there is a contradiction between the results obtained from classical gravity <cit.> and those obtained from the factorisation properties of the quantum gravity 𝒮-matrix <cit.>. This apparent contradiction was resolved in <cit.>, where the Supertranslation charge was derived in linearized gravity in higher even dimensions using the Covariant Phase Space Formalism <cit.>. Despite using fall-off conditions that allow for Supertranslations, the author was able to obtain a finite charge by adding certain additional boundary conditions at the boundaries of ℐ^+, thereby bypassing the no-go conditions of <cit.>. Interestingly, these additional conditions also ensure the correct counting of the number of independent soft theorems. This established the existence of Supertranslation in higher even dimensions on a stronger footing.
The case for the existence of Supertranslation in higher even dimensions was further strengthened in <cit.>, where the authors carried out the covariant phase space analysis in non-linear gravity, focussing on d=6.
In the following, we first summarize the lessons from the above results in a more concrete manner. Then we discuss how one can generalise 𝔅𝔐𝔖 to 𝔊𝔅𝔐𝔖 in d=6 and the consequences of doing so.
§.§ Supertranslations in Higher Even Dimensions & Consequences
In <cit.>, the analysis was done in linearized gravity coupled to matter; hence, the authors worked with the linearized Bondi metric, which can be written as:
ds^2= M du^2 - 2 dudr + g_abdz^adz^b-2 U_adz^a du.
The fall-off conditions chosen for the parameters were:
M = -1+∑_n = 1^∞M^(n)(u, z)/r^n,
U_a = ∑_n = 0^∞U_a^(n)(u, z)/r^n
g_ab = r^2 γ_ab(z) + ∑_n = -1^∞C_ab^(n)(u, z)/r^n
In the linear theory, the determinant condition in (<ref>) ensures that γ^abC_ab^(n)=0 ∀ n, i.e. all C_ab^(n) are traceless. From Einstein's equations one can show that ∂_uC^(-1)_ab=0, and C^(m-2)_ab is the free radiative data. Supertranslations are generated by the vector fields:
ξ_ST=f(z)∂_u-γ^ab(z)𝒟_af(z)∂_b+1/2m𝒟^2f(z)∂_r+⋯
Here, f(z) is any smooth function on 𝕊^(d-2), and ⋯ denotes the subleading (in r) orders of the vector fields. The action of Supertranslation preserves the fall-off (<ref>), and thus Supertranslation qualifies as a valid candidate asymptotic symmetry provided the corresponding Noether charge is finite and non-zero.
Supertranslation shifts C^(-1)_ab as:
δ_STC^(-1)_ab=1/m𝒟^2fγ_ab-2𝒟_a𝒟_bf
In the linearized theory, δ_STC_ab^(n)=0 ∀ n≥0 (including the radiative order m-2). However, we shall see later that this is not true in non-linear gravity, where supertranslation does affect the radiative order as well.
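As a quick consistency check (our own remark, not part of the original derivation), the shift above is traceless with respect to γ_ab, as required by the determinant condition: on 𝕊^2m one has γ^abγ_ab=2m, so
γ^abδ_STC^(-1)_ab=(2m/m)𝒟^2f-2𝒟^2f=0.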
Using covariant phase space techniques (for a review see <cit.>), the Noether charge was calculated in <cit.> for general even dimension d=2m+2. Since the analysis was done in linearized gravity, the hard part [similar to the d=4 case, the nomenclature soft and hard is used to denote the parts of the charge linear in the gravitational free data and quadratic in the gravity/matter free data, respectively.] of the charge, 𝒬^Hard,Lin_ST=∫_ℐ^+f(z)𝒯^Matter(2m)_uu, contains no contribution from the gravitational free data and depends on the matter only. Here, 𝒯^Matter(2m)_uu stands for the term at order r^-2m in the large-r expansion of the uu component of the matter stress-energy tensor.
The soft charge contained finite as well as divergent terms. The divergence could be cured by imposing 2m-2 additional conditions on the behaviour of the C^(n)_ab's at the boundaries of ℐ^+. These conditions are:
𝒟^a𝒟^bC_ab^(n)=u^n+1[∏^n_j=0𝔇_j,m]𝒟^a𝒟^bC_ab^(-1) ∀ 0≤ n≤ m-3
𝒟^a𝒟^bC_ab^(m-2)|_u=±∞,z∼𝒪(|u|^-m+1-ϵ) ϵ>0
𝒟^a𝒟^bC_ab^(m+n-2)|_u=±∞,z∼𝒪(|u|^-m+1+n-ϵ) ∀ 1≤ n≤ m-2, ϵ>0,
where,
𝔇_j,m=j(2m-j-3)/2(j+2)(-2m+j+1)(-m+j+2)[𝒟^2-(j+1)(2m-j-2)].
In <cit.>, an a posteriori motivation for imposing these conditions was given. It is important to note that in d dimensions there are d(d-3)/2 leading soft theorems, corresponding to the number of polarisations of the graviton. However, not all of them are independent. Since we have only one soft charge, with only one free function to choose, there is only one independent soft theorem; this means one needs d(d-3)/2-1 extra conditions. These conditions are called the “Generalised CK conditions" in higher dimensions, in analogy with the Christodoulou-Klainerman (CK) condition in d=4 <cit.>, and they give the correct counting of the number of independent soft theorems. Among these d(d-3)/2-1 conditions, (d-4)=(2m-2) are the conditions (<ref>) necessary for the finiteness of the charge <cit.>. The remaining (d-2)(d-3)/2 conditions, D_aU^(0)_b=D_bU^(0)_a, are obtained from the vanishing of the magnetic part of the Weyl tensor at 𝒪(r^-1) <cit.>.
The finite part of the soft charge obtained in <cit.> matched that of <cit.>, where it was derived from the soft theorems. This finite soft charge is given by:
𝒬^Soft,Lin_ST
=1/8π G_N1/(2m-1)2^-m/Γ(m)∫_ℐ^+f(z)∏_l=m+1^2m-1[𝒟^2-(2m-l)(l-1)]I^(m-2)(𝒟^a𝒟^bC_ab^(m-2))
where the operator I^(n) stands for the n-th antiderivative of its argument with respect to u, i.e. I^(n)=[∫_u]^n. Note that ∫_ℐ^+=∫ d^2mz√(γ)∫_u, and ∫_uI^(m-2)(𝒟^a𝒟^bC_ab^(m-2)) gives the zero mode.
The total supertranslation charge in linearized gravity in a general higher even dimension d=2m+2 is thus given by:
𝒬^Lin_ST= 𝒬^Soft,Lin_ST+𝒬^Hard,Lin_ST
=1/8π G_N1/(2m-1)2^-m/Γ(m)∫_ℐ^+f(z)∏_l=m+1^2m-1[𝒟^2-(2m-l)(l-1)]I^(m-2)(𝒟^a𝒟^bC_ab^(m-2))
+1/8π G_N∫_ℐ^+f(z)𝒯^Matter(2m)_uu
In d=4, the supertranslation charge obtained from the covariant phase space analysis matches the “electric charge" obtained from the Weyl tensor <cit.>. In <cit.>, it was shown that the same is true in higher even dimensions as well, since the charge (<ref>) is the same as the “electric charge" (𝒬^Elec[ξ_ST]) obtained from the Weyl tensor:
𝒬^Lin_ST=𝒬^Elec[ξ_ST]≡-1/8π G_N1/2m-1lim_t→∞∫_Σ_t∂_μ[r√(g)C^μ t_ λ rξ^λ_ST]
where ξ_ST is the supertranslation vector field and C_μνρσ is the Weyl tensor.
So far, we have talked about generic higher even dimensions. Let us now focus on the results of <cit.> in d=6 in particular, as we will discuss this case in detail for non-linear gravity. For notational ease, in d=6, we shall denote C^(0)_ab as D_ab and C^(-1)_ab as C_ab. Here, D_ab(u,z) is the dynamical mode and C_ab(z) is the pure supertranslation mode. The higher C^(n)_ab's will not be important for the discussion in d=6, as they do not contribute at ℐ^+. The supertranslation soft charge in d=6 has a finite and a divergent piece. The divergence is cured by imposing the following u fall-off of the dynamical mode at the boundaries of ℐ^+:
𝒟^a𝒟^bD_ab(u=-∞,z)=𝒟^a𝒟^bD_ab(u=+∞,z)=𝒪(|u|^-1-ϵ) , ϵ>0.
Finally, the soft supertranslation charge is given by:
𝒬^Soft,Lin_ST=1/96π G_N∫_ℐ^+f(z)(𝒟^2-2)𝒟^a𝒟^bD_ab=1/96π G_N∫_𝕊^4f(z)(𝒟^2-2)𝒟^a𝒟^b𝒩^(0)_ab,
where 𝒩^(0)_ab=∫_uD_ab is the leading soft mode.
The hard supertranslation charge is given by:
𝒬^Hard,Lin_ST=1/8π G_N∫_ℐ^+f(z)𝒯^Matter(4)_uu
Finally, one can write the total supertranslation charge in linearized gravity in d=6 as <cit.>:
𝒬^Lin_ST= 𝒬^Soft,Lin_ST+𝒬^Hard,Lin_ST
=1/96π G_N∫_ℐ^+f(z)(𝒟^2-2)𝒟^a𝒟^bD_ab+1/8π G_N∫_ℐ^+f(z)𝒯^Matter(4)_uu
=1/96π G_N∫_𝕊^4f(z)(𝒟^2-2)𝒟^a𝒟^b𝒩^(0)_ab+1/8π G_N∫_ℐ^+f(z)𝒯^Matter(4)_uu
So far, we have talked about asymptotically flat spacetimes in linearised gravity. In <cit.>, the work of <cit.> was extended to non-linear gravity, focussing on d=6. One starts with the general metric (<ref>) satisfying the Bondi gauge (<ref>) and imposes the following fall-off conditions:
M = -1+∑_n = 1^∞M^(n)(u, z)/r^n, β = ∑_n = 2^∞β^(n)(u, z)/r^n,
U_a = ∑_n = 0^∞U_a^(n)(u, z)/r^n
g_ab = r^2 γ_ab(z) + r C_ab(u,z)+D_ab(u,z)+ ∑_n = 1^∞g_ab^(n)(u, z)/r^n
Consider the r expansion of the angular part of the metric in d=6 as in (<ref>). From the equations of motion it can be shown that ∂_uC_ab(u,z)=0, and given γ_ab(z), C_ab(z) and D_ab(u,z) at ℐ^+, the metric can be solved at all orders in the bulk. D_ab corresponds to the radiative mode.[ In <cit.>, the authors worked on the decompactified sphere, i.e. 𝕊^4→ℝ^4, and so γ_ab→δ_ab. However, upon covariantization of the results obtained at the end, one can recover the 𝕊^4 results.]
The above r fall-off (<ref>) is preserved by the BMS vector fields, where 𝔅𝔐𝔖=𝔖𝔗⋉Lorentz. The components of the Supertranslation (𝔖𝔗) vector fields at all orders in r can be written as:
ξ^u_ST= f(z)
ξ^a_ST=-∂_bf∫_∞^re^2βg^abdr^'
ξ^r_ST=U^a∂_af-∂_aξ^a.
The action of supertranslation on C_ab and D_ab can be written as:
δ_STC_ab=-2[∂_a∂_bf-1/4δ_ab∂^2f]
δ_STD_ab=f∂_uD_ab+1/4δ_ab[-4/3∂_cC^cd∂_df-C^cd∂_c∂_df]+1/4C_ab∂^2f-∂_cC_ab∂^cf
-1/2[C_bc∂_a∂^cf+C_ac∂_b∂^cf]+1/2[∂_aC_bc∂^cf+∂_bC_ac∂^cf]+1/6[∂^cC_bc∂^af+∂^cC_ac∂^bf].
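As a cross-check (our own remark), this is consistent with the general d=2m+2 expression quoted earlier: setting m=2 and using the flat representative δ_ab of the sphere metric, (1/2)∂^2f δ_ab-2∂_a∂_bf=-2[∂_a∂_bf-1/4δ_ab∂^2f], which is exactly δ_STC_ab above.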
It is important to note that, from the saddle-point analysis and the finiteness of the symplectic structure, one expects the radiative degrees of freedom to scale as |u|^-(2+ϵ) (ϵ>0) at the boundaries of ℐ^+. However, as is evident from (<ref>), the supertranslation action violates this fall-off.
The news tensor associated with the radiative degrees of freedom is given by N_ab=∂_uD_ab. Since C_ab is independent of u, a redefinition D_ab→ D_ab+χ_ab (where χ_ab is any function constructed from γ_ab and C_ab) does not change the physical news tensor.
So one asks whether there exists a redefinition of the radiative degrees of freedom such that: (1) the redefined field gives the same news tensor, and (2) its u fall-off is preserved by supertranslation. It was identified in <cit.> that the correct variable for the radiative degrees of freedom in the classical theory, and hence, correspondingly, the correct graviton mode in the quantized theory, satisfying the above criteria is not D_ab, but the non-linear field redefinition given by:
D̃^ST_ab(u,z)=D_ab(u,z)-1/4δ^cdC_ac(z)C_bd(z)-1/16δ_abC_cd(z)C^cd(z)
Equipped with this redefinition one finds that:
δ_STD̃^ST_ab(u,z)=f(z)∂_uD̃^ST_ab(u,z).
Using this redefinition one finds a finite supertranslation charge in d=6 for non-linear gravity. The charge can be split into soft and hard parts. Note that the soft and hard parts now depend linearly and quadratically on D̃_ab, respectively.
The hard supertranslation charge is given by:
𝒬^Hard_ST =1/8π G_N∫_ℐ^+f(z)𝒯^(4)_uu(u,z)
=1/8π G_N∫_ℐ^+f(z)[𝒯^Matter(4)_uu(u,z)+1/4N^ab(u,z)N_ab(u,z)],
where N_ab=∂_uD̃^ST_ab is the News tensor in d=6.
The soft supertranslation charge is given by:
𝒬^Soft_ST=1/96π G_N∫_ℐ^+f(z)∂^2∂^abD̃^ST_ab(u,z)=1/96π G_N∫_ℝ^4f(z)∂^2∂^ab𝒩^(0)_ab(z),
where 𝒩^(0)_ab is the leading soft mode given by:
𝒩^(0)_ab(z)=∫_uD̃^ST_ab(u,z).
Hence, for the total supertranslation charge we have:
𝒬_ST
=𝒬^Hard_ST+𝒬^Soft_ST
=1/8π G_N∫_ℐ^+f(z)𝒯^(4)_uu(u,z)+1/96π G_N∫_ℝ^4f(z)∂^2∂^ab𝒩^(0)_ab(z)
=1/8π G_N∫_ℐ^+f(z)[𝒯^Matter(4)_uu(u,z)+1/4N^ab(u,z)N_ab(u,z)]+1/96π G_N∫_ℐ^+f(z)∂^2∂^abD̃^ST_ab(u,z)
It is important to note how the linearized gravity charge (<ref>) in d=6 can be obtained from this charge (<ref>). In linearized gravity, the contribution to the energy-momentum from the gravitational news is absent. So, replacing D̃^ST_ab→ D_ab in (<ref>) and decompactifying 𝕊^4→ℝ^4 in (<ref>), the two charges match.
In <cit.>, the authors worked in non-linear gravity, and using the charge (<ref>), the connection with the leading single soft graviton theorem was established in the generic C_ab≠0 case through a Ward identity of the following form:
⟨out|[𝒬_ST,𝒮]|in⟩=0⇔⟨out|[𝒬^Soft_ST,𝒮]|in⟩=-⟨Out|[𝒬^Hard_ST,𝒮]|in⟩
As we discussed previously, the correct graviton mode in this case is not D_ab but D̃^ST_ab <cit.>. It is important to note that C_ab can be obtained from a scalar potential ψ, and the supertranslated vacua are labeled by this scalar potential.
§.§ Superrotations in Higher Even Dimensions & Consequences
So far, we have talked only about supertranslation in higher dimensions. In d=4, the 𝔅𝔐𝔖 algebra can be further extended to include superrotations. Also, in d=4, the subleading soft graviton theorem follows from the conjectured superrotation symmetry of the quantum gravity 𝒮-matrix <cit.>. Since the subleading soft theorem holds in any dimension <cit.>, a natural question is whether there is any generalisation of the superrotation symmetry to higher dimensions. As already mentioned, in d=4 there are several distinct proposals for superrotations, in the sense that all of them are infinite-dimensional extensions of the Lorentz subalgebra of the 𝔅𝔐𝔖 (𝔖𝔗⋉Lorentz) algebra. In d=4, Lorentz transformations induce the global conformal transformations on 𝕊^2. In the Extended BMS (𝔈𝔅𝔐𝔖) proposal <cit.>, the Lorentz algebra is extended to include local conformal transformations on 𝕊^2. In the Generalised BMS (𝔊𝔅𝔐𝔖) proposal <cit.>, the Lorentz algebra is extended to include any area-preserving smooth diffeomorphisms of 𝕊^2. Since, for d>4, the corresponding local conformal transformations on 𝕊^d-2 are finite dimensional, there is no natural generalisation of 𝔈𝔅𝔐𝔖 to higher even dimensions. On the contrary, in higher even dimensions one can attempt to obtain superrotations, and correspondingly a generalisation of 𝔊𝔅𝔐𝔖, from the area-preserving smooth diffeomorphisms of 𝕊^d-2.
In the following, we focus mainly on d=6, but many of the aspects are expected to have a natural generalisation to any higher even dimension. We start with the Bondi metric (<ref>). Inspired by the generalisation of the symmetry algebra from 𝔅𝔐𝔖 to 𝔊𝔅𝔐𝔖 in the d=4 case, we generalise the fall-off conditions chosen for studying the 𝔅𝔐𝔖 algebra and consider the following fall-offs <cit.>:
M = ∑_n = 0^∞M^(n)(u, z)/r^n, β = ∑_n = 2^∞β^(n)(u, z)/r^n,
U_a = ∑_n = 0^∞U_a^(n)(u, z)/r^n
g_ab = r^2 q_ab(z) + r C_ab(u,z)+D_ab(u,z)+∑_n = 1^∞g_ab^(n)(u, z)/r^n
Here, q_ab(z) is obtained from any area-preserving (√(q)=√(γ)) smooth diffeomorphism of the unit round sphere metric γ_ab(z). From Einstein's equations we get ∂_uC_ab=-ℛ̅_ab^TF, where ℛ̅_ab^TF is the trace-free part of the Ricci tensor corresponding to the q_ab metric. This implies C_ab(u,z)=C̅_ab(z)+uT_ab(z), where T_ab=-ℛ̅_ab^TF. Given q_ab(z), C̅_ab(z) and D_ab(u,z), the metric can be solved at all orders in r. D_ab(u,z) is the dynamical mode.
The connection with the fall-off conditions (<ref>) chosen for studying the 𝔅𝔐𝔖 algebra must be stressed here. If one restricts to the unit round metric γ_ab on 𝕊^4, i.e. q_ab=γ_ab, then T_ab=0 and M^(0)=-1, i.e. one essentially recovers (<ref>), and the corresponding symmetry algebra is 𝔅𝔐𝔖. Demanding preservation of the fall-off conditions (<ref>), one obtains an infinite-dimensional extension of the Lorentz subalgebra of the original 𝔅𝔐𝔖 algebra, parametrized by any smooth vector field V^a on 𝕊^4. Correspondingly, one gets the Generalised-BMS (𝔊𝔅𝔐𝔖) algebra in d=6; hence 𝔊𝔅𝔐𝔖=𝔖𝔗⋉Diff(𝕊^4). Henceforth, by Superrotation in d=6 we shall mean this extension of the Lorentz algebra. The Superrotation vector fields are given by:
ξ^u_SR = u α(z)
ξ^a_SR = V^a(z) - u 𝒟_bα(z) ∫_r^∞ e^2β(u,r',z) g^ab(u,r',z) dr'
ξ^r_SR = - r/4[ 𝒟_a ξ_V^a(u,r,z) - u U^a(u,r,z) 𝒟_a α(z) ] .
Here, α=1/4𝒟_aV^a. The action of superrotation on C̅_ab, T_ab and D_ab can be written as:
δ_SRC̅_ab = ℒ_V C̅_ab - αC̅_ab
δ_SR T_ab = ℒ_V T_ab - 2 ( 𝒟_a 𝒟_b α)^TF
δ_SR D_ab = u α∂_u D_ab + ℒ_V D_ab
+ u {1/4𝒟^2 α C_ab - U_(a^(0)𝒟_b)α + 1/2 q_c(a𝒟_b)( C^cd𝒟_d α) - C_c(a𝒟_b)𝒟^c α
- 𝒟^c α𝒟_c C_ab + 1/2q_ab U^(0)c𝒟_c α - 1/4q_ab𝒟_c (C^cd𝒟_d α) }
As in <cit.>, we shall work on the decompactified sphere (ℝ^4). Borrowing the terminology used in d=4, we call the case of the q_ab=γ_ab metric on 𝕊^4 (or the δ_ab metric on ℝ^4) the Bondi frame. In the Bondi frame, T_ab=0 and C_ab=C̅_ab, and hence the superrotation action (<ref>) takes a simpler form. Note, however, that the superrotation action takes one away from the Bondi frame, i.e. δ_SRT_ab≠0 even when starting from a Bondi frame where T_ab=0.
Due to the generalisation of the r fall-off conditions from (<ref>) to (<ref>), a further field redefinition of the radiative degrees of freedom is needed so that the u fall-off at the boundaries of ℐ^+ is maintained. This generalisation should capture the information of non-zero T_ab, but should smoothly reproduce the redefinition (<ref>) in the Bondi case (T_ab=0, C_ab=C̅_ab). We shall look at the effect of going linearly away from the Bondi frame. In this case, a natural generalisation of the field redefinition becomes:
D̃_ab = D_ab - 1/4 q^mnC̅_amC̅_bn - 1/16 q_abC̅_mnC̅^mn
- u [ 1/4 q^mn (C̅_am T_bn + T_amC̅_bn) + 1/8 q_ab T_mnC̅^mn] + O(T^2).
Note that the supertranslation and superrotation actions on this redefined radiative field can be written as:
δ_STD̃_ab = f ∂_u D̃_ab
δ_SRD̃_ab = ℒ_V D̃_ab + u α∂_u D̃_ab
Thus, the u fall-offs are not violated by the supertranslation or superrotation action starting from a Bondi frame.
In <cit.>, the conserved charge corresponding to superrotation symmetry in the Bondi frame was obtained. The superrotation hard charge was derived from the energy-momentum tensor as follows:
𝒬^Hard_SR=1/8π G_N∫_ℐ^+[uα(z)𝒯^(4)_uu(u,z) + V^a(z) 𝒯^(4)_ua(u,z)].
Hence, for pure gravity the superrotation hard charge was obtained to be:
𝒬^Hard_SR = 1/32π G_N∫_ℐ^+ N^ab( ℒ_V D̃_ab + u α N_ab).
where N_ab=∂_uD̃_ab is the news tensor.
In <cit.>, the following superrotation soft charge was proposed for any generic Bondi frame (C̅_ab≠ 0):
𝒬^Soft_SR= 1/128π G_N∫_ℐ^+ u V^b(x) [ ∂^4 ∂^a D̃_ab - 4/3∂_b ∂^2 ∂^efD̃_ef]
+ 1/96π G_N∫_ℐ^+ (ℒ_V C̅_ab - αC̅_ab) ∂^a ∂^m D̃^b_m
The correctness of the soft charge is tested by the fact that it produces the correct action on the kinematic data (C̅_ab, T_ab) in the Bondi frame. This soft charge is further justified by the fact that it reproduces the correct subleading soft graviton theorem in the quantum theory.
The Subleading Single Soft Graviton Theorem <cit.> in any dimension, including d=6, is given by (<ref>). Now, the 𝔊𝔅𝔐𝔖-compatible fall-off (<ref>) implies that the correct graviton mode to quantize is D̃_ab. We choose the vacua to be labelled by the simultaneous eigenstates of C̅_ab and T_ab. The ordinary Fock vacuum is identified as |0⟩=|C̅_ab=0, T_ab=0⟩.
Next, we consider a massless scalar field coupled to gravity and the implications of superrotation symmetry for the 𝒮-matrix of this theory. In this theory, the soft charge is given by (<ref>) and the hard charge is obtained from the corresponding stress-energy tensor of the scalar using (<ref>).
Next one looks at the Ward identity:
⟨out|[𝒬_SR,𝒮]|in⟩=0⇔⟨out|[𝒬^Soft_SR,𝒮]|in⟩=-⟨Out|[𝒬^Hard_SR,𝒮]|in⟩
In <cit.>, it was found that this identity can be obtained as a consequence of the subleading soft graviton theorem (<ref>) in d=6. It is important to note that the action of the second term in (<ref>) on the ordinary Fock vacuum is zero due to the normal ordering of the operators chosen in <cit.>. The reproduction of the subleading single soft graviton theorem justifies the correctness of the proposed soft charge in (<ref>).
For a very recent and rigorous study on the phase space of gravity in six-dimensional asymptotically flat spacetime, we refer the reader to <cit.>. For a study of general superrotation compatible kinematic space of gravity in generic higher even dimensions, we refer the reader to <cit.>.
§ SUMMARY AND OPEN ISSUES
Let us summarize what we have discussed so far. We started with a discussion of early results on asymptotic symmetries in d=4 and higher. In particular, we described how early results set no-go conditions on the existence of non-trivial asymptotic symmetries in higher even dimensions. Then we discussed new insights gained from certain results regarding the quantum gravity 𝒮-matrix and the need for the existence of supertranslation in higher even dimensions.
We first discussed the consequences of Supertranslation in linearized gravity at the classical level in d=2m+2 dimensions and the corresponding conserved asymptotic charge. We stressed how the earlier no-go conditions could be bypassed by imposing certain late- and early-time behaviour on the metric components. Then we specialised to d=6 and discussed the consequences of supertranslation symmetry both in the linear and in the non-linear theory. An important lesson from the non-linear theory is that, to make the physically necessary fall-off of the radiative degrees of freedom at the boundaries of null infinity supertranslation compatible, a non-linear field redefinition of the radiative degrees of freedom is needed.
Next, we discussed the consequences of extending the symmetry to include superrotation. The lesson is that, to make the physically necessary fall-off of the radiative degrees of freedom compatible with both Supertranslation and superrotation, a further non-linear redefinition of the radiative degrees of freedom becomes necessary. We discussed the conserved asymptotic charge obtained from Superrotation. We then briefly discussed how, by elevating this symmetry to a symmetry of the quantum gravity 𝒮-matrix, a connection with the subleading soft graviton theorem can be made.
Many aspects remain open-ended. In d=4, we now understand how the double soft graviton factorisations are connected to asymptotic symmetries <cit.>; a similar derivation in higher dimensions is not yet known. Also, in d=4, when there are massive particles in the external states, one knows how to build the connection between the single soft graviton theorems and asymptotic symmetries <cit.>. A similar derivation in higher even dimensions is yet to be done.
Soft graviton theorems hold in all dimensions. However, the study of asymptotic symmetries in odd dimensions and their possible connection to soft theorems still remains a largely open issue, although important progress has been made in recent works <cit.>.
In this article, we have focussed on spacetimes with higher non-compact dimensions. For a study of higher-dimensional spacetimes with compact extra dimensions, we refer the reader to <cit.>.
§ ACKNOWLEDGEMENT
I thank Alok Laddha for his constant encouragement for writing this review article and his crucial comments on the first draft. I learned many aspects of the topic of asymptotic symmetries through the discussions and collaborations with Alok Laddha, Anupam AH, Chandramouli Chowdhury, Ankit Aggarwal, Aniket Khairnar, Krishnendu Roy, Miguel Campiglia, Amitabh Virmani, Arnab Priya Saha, Nishant Agarwal, Amit Suthar, Shamim Akhtar at different stages of the past few years. I would like to thank Shrihari Goplakrishnan, V. Ravindran, Sujay Ashok, and various other members of the IMSc high energy physics group for their various conceptual questions, which encouraged me to investigate various minute aspects of the subject.
|
http://arxiv.org/abs/2307.05505v1
|
20230701032458
|
Reconfiguration of Amazon's Connectivity in the Climate System
|
[
"Adam Giammarese",
"Jacob Brown",
"Nishant Malik"
] |
physics.ao-ph
|
[
"physics.ao-ph",
"physics.data-an",
"physics.soc-ph"
] |
Email: [email protected]
School of Mathematical Sciences, Rochester Institute of Technology, Rochester, NY 14623, USA
Department of Mathematics, University of Connecticut, Storrs, CT 06269, USA
Email: [email protected]
School of Mathematical Sciences, Rochester Institute of Technology, Rochester, NY 14623, USA
With the recent increase in deforestation, forest fires, and regional temperatures, concerns about a rapid and complete collapse of the Amazon rainforest ecosystem have heightened. The thresholds of deforestation and temperature increase required for such a catastrophic event are still uncertain. However, our analysis presented here shows that signatures of a changing Amazon are already apparent in historical climate data sets. Here, we extend the methods of climate network analysis and apply them to study the temporal evolution of the connectivity between the Amazon rainforest and the global climate system. We observe that the Amazon rainforest is losing short-range connectivity and gaining more long-range connections, indicating shifts in regional-scale processes. Using embeddings inspired by manifold learning, we show that Amazon connectivity patterns have become more variable in the twenty-first century. By investigating edge-based network metrics on regions similar to the Amazon, we see that the changing properties of the Amazon are significant in comparison. Furthermore, we simulate diffusion and random walks on these networks and observe a faster spread of perturbations from the Amazon in recent decades. Our methodology innovations can act as a template for examining the spatiotemporal patterns of regional climate change and its impact on global climate using the toolbox of climate network analysis.
Reconfiguration of Amazon's Connectivity in the Climate System
Nishant Malik
=======================================================================
The Amazon rainforest is an ecological system of high social significance, identified as a tipping element of the global climate system—increasing global temperatures and deforestation could lead to its dieback. Such an event will have disastrous consequences for the local environment and communities living in the region. Furthermore, the Amazon rainforest is an important carbon sink, and its destruction will negatively impact the planet's climate. Amazon is also facing an immediate threat due to the highest level of deforestation in history, which is already having adverse impacts on global and local environmental, weather, and climate patterns. This work attempts to understand changes occurring in patterns of interactions between the global climate system and the Amazon over the last seven decades. Traditionally, such a study will require broad analysis for large-scale computational models; however, here, we take an entirely data-driven approach and identify the changes occurring in the connectivity between the global climate and the Amazon rainforest through climate network analysis. Our study shows that the connectivity between the Amazon rainforest and the global climate system has been experiencing reconfiguration in their connectivity patterns.
§ INTRODUCTION
Humanity faces an existential crisis in the form of climate change.<cit.> Across the globe, communities are suffering due to increasing temperatures, rising sea levels, and the growing intensity of extreme weather phenomena such as floods, droughts, and hurricanes.<cit.>
Climate change is also driving the destruction of ecosystems, and simultaneously collapsing ecosystems are further exacerbating the climate crisis.<cit.>
The Amazon rainforest is one such ecosystem, also classified as one of the tipping elements of the global climate system; it may die back if global temperature increases by 3-4^∘C, which will have catastrophic consequences for the region in the form of changing precipitation patterns and aridification. Currently, the Amazon rainforest is under an even more immediate threat: historically the highest levels of deforestation.<cit.> It is understood that if one-fourth of the forest is lost, it could lead to a chain of events that will also result in dieback of the forest.<cit.> The destruction of the Amazon rainforest will have far-reaching consequences for the planet, as it plays a critical role in controlling the global fluxes of carbon dioxide, the most significant greenhouse gas. The Amazon rainforest also modulates local and global rainfall patterns through evapotranspiration.<cit.>
Estimates from the end of 2019 show that over 718,000 km^2 of rainforest have been deforested since 1970.<cit.> Recently, it has also been hypothesized that Amazon may have passed a critical point where the unabated deforestation has turned it into a net carbon dioxide source.<cit.> Given this precarious situation of the Amazon, it is paramount to study further the consequences of changes in the Amazon on the global climate system, including developing mathematical and computational tools that can provide physical insights into these changes. The traditional approach to such a study will require a large-scale computational modeling effort, simulating a hierarchy of interactions between various climate sub-systems and phenomena. In contrast, we propose a simpler, data-driven approach known as climate network analysis, which uses existing historical data sets and employs the toolbox of network science to study the evolution of connectivity between the global climate system and the Amazon. Using this technique, we analyze large spatiotemporal surface air temperature data for the last seven decades and show a reconfiguration of interactions between Amazon and the global climate system.
The underlying assumption in climate network analysis is that the global climate system is a complex network of numerous phenomena manifesting on various spatial and temporal scales, where the nodes in these climate networks are geographic locations, and the edges represent interactions between various phenomena.<cit.> For example, it is well known that monsoon systems interact with El Niño–Southern Oscillation (ENSO), North Atlantic Oscillation (NAO), and Indian Ocean Dipole (IOD), and climate networks at the global scale do tend to have edges representative of these interactions.<cit.> The primary advantage of working with climate networks is that they provide a simpler mathematical representation of the information contained in massive spatiotemporal climate datasets, and in the last decade, several significant insights into the global and regional climates have been obtained through climate network analysis. <cit.> Although less explored, climate networks are excellent tools for studying changes in weather patterns and climate due to anthropogenic forcings, as these forcings get engraved into the structure of the climate system and can be identified and explored using the methods of network analysis. Thus, we will use climate networks as an analogous representation for studying climatic phenomena in relation to the Amazon rainforest. Furthermore, we will implement tools previously not explored in the climate network setting, such as manifold learning, diffusion, and random walks on graphs.
A limitation of our study that needs to be underscored here is that we do not claim that the deforestation of the Amazon is causing the changes highlighted between the Amazon and the global climate system; our study cannot establish such causal links. The changes we report are caused by the interplay of many factors, including the region's environmental degradation and other variations occurring on the regional and planetary scales in the climate system due to increasing temperatures. We cannot discern the factors causing the reconfiguration reported in this study. Nonetheless, the Amazon rainforest is a critical component of the global climate system; it influences the global energy balance, the hydrological cycle, and the carbon balance.<cit.> Therefore, quantifying the changes occurring in the structure and patterns of interactions between the Amazon and the global climate will improve our understanding of climate change and the role of the Amazon in it.
§ MATERIALS AND METHODS
§.§ Dataset
We construct networks using surface air temperature data from the NCEP/NCAR Reanalysis 1 data set provided by NOAA's (National Oceanic and Atmospheric Administration's) Physical Sciences Laboratory (PSL).<cit.> This data results from cooperation between NCEP (National Centers for Environmental Prediction) and NCAR (National Center for Atmospheric Research).<cit.> We choose the average daily air temperature at the σ=0.995 level (the height where air pressure is 99.5% of surface air pressure) for our analysis, as the surface air temperature (SAT) is one of the most accurately and extensively measured climate variables and an excellent proxy for the evolving dynamics of the global climate system.<cit.> The data span from 1948 to 2022 and are structured on a 2.5^∘ by 2.5^∘ regular latitude-longitude grid, for a total of 10,512 grid points.
§.§ Network Construction and Analysis
Our analysis involves four steps, out of which the first three transform the above dataset into networks. The fourth step is the analysis of these networks using various network analysis techniques, including analysis based on a variety of network metrics. Below we describe these four steps; the diagram in Fig. <ref> provides an illustration summarizing these steps.
§.§.§ Step 1: Regridding to Icosahedral Grid
A drawback of the regular latitude-longitude grid is that the density of grid points is greater near the poles than at the equator, leading to spatial biases when calculating network properties. One technique to counter this spatial bias in the network metrics is to take into account the area of the grid squares created by the latitude and longitude lines, which are smaller near the poles than at the equator, resulting in a method known as area-weighted connectivity.<cit.> An alternative technique we also employ here is to project the data onto a grid pattern free of such spatial biases. We use an icosahedral grid constructed in the following manner: first, an icosahedron with edge length two is identified in the Euclidean space. Each edge is bisected, a vertex is added at each bisection point, and an edge is added between each pair of adjacent points (i.e., those pairs separated by distance one). This process is repeated five times to give a total of 10,242 vertices on the surface of an icosahedron. Note that at the n-th iteration, edges are added between vertices separated by a distance of 1/2^n-1. The vertices are then projected onto a sphere of radius one, and the Cartesian coordinates of the vertices are converted to latitudes and longitudes. The final output is a set of 10,242 latitude-longitude pairs that are evenly spaced around the globe. After the projection, the nodes on the icosahedral grid are assigned a temperature time series via bilinear interpolation.
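A minimal sketch of this icosahedral grid construction is given below (our illustration of the standard icosphere subdivision; the vertex/face tables and function names are our own, and the bilinear interpolation of the SAT data onto the resulting grid is omitted).

import numpy as np

def icosahedral_grid(subdivisions=5):
    # 12 vertices of an icosahedron with edge length 2 (golden-ratio construction)
    phi = (1 + np.sqrt(5)) / 2
    verts = [np.array(v, dtype=float) for v in
             [(-1, phi, 0), (1, phi, 0), (-1, -phi, 0), (1, -phi, 0),
              (0, -1, phi), (0, 1, phi), (0, -1, -phi), (0, 1, -phi),
              (phi, 0, -1), (phi, 0, 1), (-phi, 0, -1), (-phi, 0, 1)]]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]

    def midpoint(cache, i, j):
        # bisect an edge once, reusing the new vertex for the neighbouring face
        key = (min(i, j), max(i, j))
        if key not in cache:
            verts.append((verts[i] + verts[j]) / 2)
            cache[key] = len(verts) - 1
        return cache[key]

    for _ in range(subdivisions):   # five iterations give 10,242 vertices
        cache, new_faces = {}, []
        for a, b, c in faces:
            ab, bc, ca = midpoint(cache, a, b), midpoint(cache, b, c), midpoint(cache, c, a)
            new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
        faces = new_faces

    # project onto the unit sphere and convert to latitude/longitude
    xyz = np.array([v / np.linalg.norm(v) for v in verts])
    lat = np.degrees(np.arcsin(xyz[:, 2]))
    lon = np.degrees(np.arctan2(xyz[:, 1], xyz[:, 0]))
    return lat, lon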
§.§.§ Step 2: Calculate Correlation Matrix
After re-gridding the data, we calculate correlations between grid points. We first obtain the surface air temperature anomalies (SATA) for each node by subtracting the long-term daily average temperature from the SAT time series and normalizing by the standard deviation of the SAT time series. This process is equivalent to taking the z-score of the SAT time series at each node, and it eliminates the long-term mean annual cycle from the data, resulting in a SATA time series of length m at node i: {x_i(t_j)}_j=1^m, where t_j represents the time index. We divided the period January, 1948 to December, 2022 into 51 time windows, with each time window being 25 years long and an offset of 1 year (365.25 days) between consecutive windows. The time scale was chosen so that the number of El Niño cycles was the same for all networks. We also created a single network for the entire time span of the data set. One 25-year window will consist of m=25 × 365.25 ≈ 9131 time points. For every w-th time window, we calculate the Pearson correlation coefficient C_ij^(w) between grid points i and j; these coefficients can be arranged into an N × N matrix C^(w)=[C_ij^(w)]_N × N where C_ij^(w)∈ [-1,1] and N=10242 is the total number of grid points.
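The anomaly and correlation computations of this step can be sketched as follows (our illustration; array shapes and function names are assumptions, not taken from the original code).

import numpy as np

def sata(sat, day_of_year):
    # sat: (m, N) array of daily SAT values; day_of_year: length-m calendar-day labels
    x = np.empty_like(sat, dtype=float)
    for d in np.unique(day_of_year):
        rows = day_of_year == d
        mu = sat[rows].mean(axis=0)      # long-term mean for this calendar day
        sd = sat[rows].std(axis=0)       # long-term standard deviation for this calendar day
        x[rows] = (sat[rows] - mu) / sd  # z-score removes the mean annual cycle
    return x

def correlation_matrix(x_window):
    # x_window: (m_w, N) anomaly block for one 25-year window; returns the N x N matrix C^(w)
    return np.corrcoef(x_window, rowvar=False)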
§.§.§ Step 3: Calculate Adjacency Matrix (Construct Networks)
We construct two types of networks from the correlation matrices: global threshold networks (GTN) and k-nearest neighbor networks (KNN).
Global Threshold Networks (GTN): An adjacency matrix represents a network or a graph. Here we obtain the w-th adjacency matrix A^(w) corresponding to the w-th time window by thresholding C^(w) as follows: A_ij^(w)=1 if |C_ij^(w)|>τ and i ≠ j (meaning there exists an edge i ↔ j), else A_ij^(w)=0. The threshold τ is chosen so that the network has link density ρ. This choice of threshold is based on our limited focus on studying the most persistent features of the atmospheric and climatic processes, encoded into the strongest correlations in the data. This step transforms large spatiotemporal data into a sparse binary matrix; that is, it converts a complex and large dataset into a simpler mathematical object, a graph or network G^(w) represented by the binary adjacency matrix A^(w).
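A sketch of the GTN construction (our illustration; the link density value below is only a placeholder, since the ρ used in this study is set elsewhere).

import numpy as np

def gtn_adjacency(C, rho=0.005):
    # keep the strongest |C_ij| so that the resulting link density equals rho
    N = C.shape[0]
    absC = np.abs(C)
    np.fill_diagonal(absC, 0.0)
    iu = np.triu_indices(N, k=1)
    tau = np.quantile(absC[iu], 1.0 - rho)   # global threshold
    A = (absC > tau).astype(np.int8)         # symmetric, binary, zero diagonal
    return A, tau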
K-Nearest Neighbor Networks (KNN): The k-nearest neighbors (KNN) is a non-parametric unsupervised learning framework used in clustering, classification, and regression of data; however, here, we use KNN to generate networks from SATA data. For a given node i we take the K other nodes j≠ i such that |C_ij^(w)| are amongst the K greatest entries in |C_i*^(w)|, excluding |C_ii^(w)|; one may also hold the view that for each node i we find an individual threshold τ_i such that there are K indices j≠ i such that |C_ij^(w)|≥τ_i. For each node j amongst the greatest K correlation values, a directed edge i→ j is added in the graph, indicating that j is in the nearest neighbors of i. While it remains true that the global edge density remains fixed in KNN graphs (more particularly, there will always be K· N edges), it differs from GTN due to its selection of local correlation thresholds for each node versus a global threshold.
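A corresponding sketch for the KNN construction (our illustration; the value of K is a placeholder and not necessarily the one used in this study).

import numpy as np

def knn_adjacency(C, K=10):
    # directed edge i -> j if j is among the K nodes most strongly correlated with i
    N = C.shape[0]
    absC = np.abs(C)
    np.fill_diagonal(absC, -np.inf)          # exclude self-correlation
    A = np.zeros((N, N), dtype=np.int8)
    nearest = np.argpartition(-absC, K, axis=1)[:, :K]
    for i in range(N):
        A[i, nearest[i]] = 1                 # K * N directed edges in total
    return A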
§.§.§ Step 4: Network Metrics and Analysis
Step 3 concludes the network construction process, resulting in a set of 51 adjacency matrices {A^(w)}_w=0^w=50, each of these adjacency matrices represents a particular network G^(w). The set of networks {G^(w)}_w=0^w=50 corresponds to the 51 time windows with a 25-year span and offset of 1 year covering the period from January, 1948 to December, 2022. In this last step, we analyze these resulting networks identifying emerging features in Amazon's interaction with the global climate system. Next, we introduce our analysis's mathematical notations, metrics, and methods.
§.§.§ Relative Connectivity of the Amazon
The total number of nodes in any given network is N=10,242, which is the same as the total number of grid points resulting after regridding. The nodes in the Amazon region (the white outlined region in the maps in Fig. <ref>) form the set U with cardinality |U|=166. R_i^(w) is the number of links between the node i and the Amazon in the GTN network G_GTN^(w), while R_i^(←,w) is the number of links from the Amazon to node i and R_i^(→,w) is the number of links from the node i to the Amazon in KNN network G_KNN^(w).
§.§.§ Trends in Connectivity to (and from) the Amazon
As defined above, R_i^(w) is the number of connections that node i has to the Amazon region, and a simple measure of the connectivity shift of the Amazon is the slope of the linear trend in R_i^(w) at every grid point. We denote by m_w(·) the slope of the linear trend line over the range of valid w∈{0,1,…,50} for the respective property.
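The trend slope m_w(·) is simply a least-squares fit over the window index; a minimal sketch (ours, with a hypothetical array R of shape (51, N)):

```python
import numpy as np

def trend_slope(series):
    """Slope of the least-squares linear trend over the window index w = 0, ..., 50."""
    w = np.arange(len(series))
    return np.polyfit(w, series, 1)[0]

# e.g. the trend of node i's connectivity to the Amazon:
# m_R = np.array([trend_slope(R[:, i]) for i in range(R.shape[1])])
```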
§.§.§ Laplacian based Embedding of Amazon nodes:
A popular embedding for graphs is given by the eigenvectors of the Laplacian, which form the basis for various clustering and manifold learning algorithms such as Laplacian Eigenmaps.<cit.> These manifold learning algorithms are state-of-the-art tools for identifying and visualizing low-dimensional structures in high-dimensional data, and here we use Laplacian Eigenmaps to visualize structural changes in the subgraph of the Amazon with respect to the global climate system. The Laplacian is defined as L_sym=D-A_sym, where D=D^(←)+D^(→) is the degree matrix (with ← and → referring to incoming and outgoing edges, respectively), a diagonal matrix whose entry D_ii is the degree of node i, and A_sym=(A + A^T)/2 is the symmetrized adjacency matrix. We use the symmetric normalized version of the Laplacian, defined as ℒ_sym=D^-1/2L_symD^-1/2. For each network, we compute the first μ+1 eigenvectors v_0,v_1,⋯,v_μ of ℒ_sym associated with the eigenvalues 0=λ_0 ≤λ_1 ≤⋯≤λ_μ. Each of these vectors v_i consists of N elements, v_i=[ v_ij ]_j=1^N. We drop the eigenvector v_0 corresponding to eigenvalue 0 and project each node i into (μ=3)-dimensional space, with the position of i given by ξ_i=[ v_1i, v_2i, v_3i]. In theory, a visualization of these manifolds for different networks could provide insights into the structural evolution of the networks over time; however, the manifolds extracted using this technique are prone to mirroring due to the arbitrary direction along each eigenvector, so in practice such a visualization of the manifolds of successive graphs is not informative. To overcome this issue, we take an alternative approach, focusing only on the scale of these embeddings: we take the 2-norm of ξ_i for the Amazon nodes, ‖ξ_i ∈ U‖_2, and generate distributions of ‖ξ_i ∈ U‖_2 for different time windows w. A systematic expansion (contraction) of these distances would indicate an increasing (decreasing) volume of the Amazon sub-manifold, occupying more space in the global climate system and allowing a faster spread of perturbations. In general, an expansion (contraction) of the space described by ‖ξ_i ∈ U‖_2 would be indicative of the Amazon reconfiguring its connectivity.
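A compact sketch of this embedding step (ours, not the authors' code; `A` is one window's adjacency matrix and `U_idx` a hypothetical list of Amazon node indices), following the definitions above:

```python
import numpy as np

def amazon_embedding_norms(A, U_idx, mu=3):
    """||xi_i||_2 for the Amazon nodes, from the symmetric normalized Laplacian embedding."""
    deg = A.sum(axis=0) + A.sum(axis=1)              # D = D^(<-) + D^(->), as defined in the text
    A_sym = (A + A.T) / 2.0
    L = np.diag(deg) - A_sym                         # L_sym = D - A_sym
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    L_norm = d_inv_sqrt[:, None] * L * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L_norm)              # eigenvalues in ascending order
    xi = vecs[:, 1:mu + 1]                           # drop v_0, keep v_1, v_2, v_3
    return np.linalg.norm(xi[U_idx], axis=1)
```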
§.§.§ Trends in Network Metrics across Random Boxes
In order to analyze the changing structure of the Amazon compared to the changing structure of the rest of the globe, we measure average network metrics of edges incident to a variety of random areas across the climate network. We first grow random boxes by initializing a latitude-longitude pair and expanding a box with the same aspect ratio as the box bounding the Amazon region until it contains roughly the same number of nodes as the Amazon (|U|=166). Since the average degree is much higher for nodes in the tropical region, we also create tropical random boxes with the same process, but requiring the box to be contained between the Tropics of Capricorn and Cancer. We investigate four kinds of network metrics for these boxes: the average edge betweenness centrality of edges remaining in the box (labeled “in” edges), the average edge betweenness centrality of edges that connect a node inside with a node outside the box (labeled “out” edges), the connectivity ratio, and the average geodesic distance of edges connecting an inside and an outside node. Edge betweenness centrality,<cit.> B̅_e, measures the proportion of shortest paths in the graph that cross over edge e, and is normalized by the number of edges in the graph; B̅_e measures the importance of edge e based on how many lines of efficient information flow require the use of e. We define the connectivity ratio, Γ, to be the proportion of incident edges that leave the box; Γ is indicative of the amount of teleconnections a box has. Lastly, ⟨ d_g,out⟩ is the average geodesic distance of edges that leave the box, calculated using the Haversine formula.<cit.> All metrics that involve edges leaving the box (namely ⟨B̅_out⟩, Γ, and ⟨ d_g,out⟩) have individual variants for incoming and outgoing edge directions (← and →, respectively) for the directed KNN graphs.
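The geodesic edge length and the connectivity ratio of a box can be sketched as follows. This is our illustration, not the paper's code: `lat` and `lon` are node coordinates in degrees, `box` is a set of node indices, and Γ is computed here under one possible reading, as the fraction of a box's incident edges that leave it.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between points given in degrees (Haversine formula)."""
    p1, p2, dphi, dlam = map(np.radians, (lat1, lat2, lat2 - lat1, lon2 - lon1))
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def box_out_metrics(A, box, lat, lon):
    """Connectivity ratio Gamma and mean geodesic length <d_g,out> of edges leaving `box`."""
    inside = np.zeros(A.shape[0], dtype=bool)
    inside[list(box)] = True
    E = A.astype(bool)
    out_edges = np.argwhere(E & inside[:, None] & ~inside[None, :])   # one endpoint in, one out
    in_edges = np.argwhere(E & inside[:, None] & inside[None, :])     # both endpoints inside
    gamma = len(out_edges) / max(len(out_edges) + len(in_edges), 1)
    d_out = haversine(lat[out_edges[:, 0]], lon[out_edges[:, 0]],
                      lat[out_edges[:, 1]], lon[out_edges[:, 1]]).mean()
    return gamma, d_out
```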
§.§.§ Diffusion on climate networks from Amazon:
In order to further investigate the Amazon's changing connectivity with the global climate system, we carry out perturbations in the Amazon using graph diffusion. Graph diffusion can be viewed as demonstrating how information flows between neighbors. For both the GTN and KNN graphs we build diffusion simulations under the same assumptions: nodes may only send or receive information to and from their neighbors; nodes may only donate information through outgoing edges to nodes with less information; and the rate of information spread between nodes is proportional to their difference in information. We denote by ϕ_i(t) the amount of information at node i at time t, by u(·)=(sign(·)+1)/2 the Heaviside step function, by c the coefficient of diffusion (which we choose to be c=1), and we use the adjacency matrix convention where A_ij=1 if there exists an edge i → j. Note that if the graph is undirected (such as GTN), Eq. <ref> collapses to the classical undirected graph diffusion equation. Also note that, since ϕ_i is only donated across outgoing edges to nodes j with less ϕ_j, diffusion from node i only spreads to the nearest neighbors of i, while node i receives information from those nodes j that have i among their nearest neighbors. In all diffusion simulations, ϕ_i∈ U(t=0)=N/|U| and ϕ_i∈ U^c(t=0)=0; therefore the perturbation over the Amazon diffuses to the rest of the graph, and since our graphs are all connected, all nodes asymptotically approach ϕ_i(t→∞)=1 due to the conservation of ∑_i=1^N ϕ_i(t).
dϕ_i/dt = c∑_j=1^N [ A_ij(ϕ_j - ϕ_i)u(ϕ_i-ϕ_j)
+ A_ji(ϕ_j-ϕ_i)u(ϕ_j-ϕ_i) ]
We use RK45 to obtain a numerical solution of Eq. <ref> for each GTN and KNN graph and for each time window. To measure the spread of diffusion on the graphs we use three metrics: the average non-Amazon information, the proportion of nodes affected by diffusion, and the diffusion distance. The average non-Amazon information is calculated as ⟨ϕ_i ∉ U(t)⟩; we expect this value to start at 0 at t=0 and asymptotically approach 1, and the rate at which it approaches 1 is indicative of how quickly the perturbation diffuses from the Amazon to the rest of the climate system. The proportion of affected nodes is measured simply as the proportion of nodes i∉ U whose ϕ_i(t) is above the threshold |U|/N: |{ i∉ U : ϕ_i(t) ≥ |U|/N }| / |U^c|, where U^c is the complement of U; we also expect this measurement to start at 0 at t=0 and asymptotically approach 1, but its rate is indicative not only of how much information has diffused from the Amazon but also of how much of the globe the perturbation has diffused to. Lastly, we define the diffusion distance between nodes i and j in time window w as
d_i,j^(w)(t) = |ϕ_i^(w)(t) - ϕ_j^(w)(t)|
which measures a natural sense of distance in the diffusion process: two nodes with equal ϕ cannot diffuse to each other (even if they share an edge), while for nodes with differing ϕ, d_i,j is indicative of how much information is left to transfer between the nodes to reach a consensus. To measure an aggregate diffusion distance between node i and the original perturbation, we define
d_i,U^(w)(t) = 1/|U|∑_j∈ U d_i,j^(w)(t) = 1/|U|∑_j ∈ U |ϕ_i^(w)(t) - ϕ_j^(w)(t)|
to be the average diffusion distance between node i and the Amazon.
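The diffusion model of Eq. <ref> can be integrated directly with SciPy's RK45 solver; the following is a minimal sketch (ours, not the authors' implementation), with `amazon_idx` a hypothetical array of Amazon node indices:

```python
import numpy as np
from scipy.integrate import solve_ivp

def diffusion_rhs(t, phi, A, c=1.0):
    """Right-hand side of the gated diffusion equation: donate only to poorer out-neighbors."""
    diff = phi[None, :] - phi[:, None]      # diff[i, j] = phi_j - phi_i
    donate = A * diff * (diff < 0)          # A_ij (phi_j - phi_i) u(phi_i - phi_j)
    receive = A.T * diff * (diff > 0)       # A_ji (phi_j - phi_i) u(phi_j - phi_i)
    return c * (donate + receive).sum(axis=1)   # the u(0) = 1/2 case contributes zero anyway

def run_diffusion(A, amazon_idx, t_max=10.0):
    N = A.shape[0]
    phi0 = np.zeros(N)
    phi0[amazon_idx] = N / len(amazon_idx)  # all information starts inside the Amazon
    return solve_ivp(diffusion_rhs, (0.0, t_max), phi0, args=(A,), method='RK45')

# sol = run_diffusion(A, amazon_idx)        # sol.y has shape (N, len(sol.t))
```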
§.§.§ Random Walks
While diffusion restricts the spread of perturbations to nodes with less information, we may also investigate the scenario where perturbations spread freely. Thus, we perform random walks originating in the Amazon and observe the regions in which the likelihood of receiving a walk is changing. We use Markov chain iteration to evolve the likelihood p_i(η) of a walk of length η being at node i. We initialize the probabilities to p_i∈ U(η=0)=1/|U| and p_i∈ U^c(η=0)=0, and we define p̂_i(η) = Np_i(η) so that p̂_i(η→∞)=1 for connected graphs. The row-normalized transition matrix M is defined such that when a walk is at node i, it has an equal probability of stepping to any neighbor j of i such that there exists an edge i → j:
M_ij = A_ji/∑_k=1^NA_ki.
The random walk probability vector is updated as
p(η) = Mp(η - 1) = M^ηp(0).
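A minimal sketch of this iteration (ours), using the transition matrix exactly as defined above:

```python
import numpy as np

def random_walk_phat(A, amazon_idx, eta=5):
    """p_hat(eta) = N * M^eta p(0), with M_ij = A_ji / sum_k A_ki."""
    N = A.shape[0]
    col_sums = np.maximum(A.sum(axis=0), 1)       # sum_k A_ki (guarded against isolated nodes)
    M = A.T / col_sums[:, None]                   # M[i, j] = A[j, i] / sum_k A[k, i]
    p = np.zeros(N)
    p[amazon_idx] = 1.0 / len(amazon_idx)         # walks start uniformly inside the Amazon
    p = np.linalg.matrix_power(M, eta) @ p
    return N * p
```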
§ RESULTS AND DISCUSSION
Our results show the emergence of distinctive connectivity patterns between the Amazon and the rest of the climate system. While our study cannot discern the causes of these new patterns, it is nevertheless a significant scientific endeavor, allowing us to understand how the changing climate and environment are leading the Amazon rainforest to reconfigure its connectivity patterns within the climate system. Note that the Amazon rainforest is one of the tipping elements of the global climate system, and many of these tipping elements are interconnected: triggering one of the tipping elements could have a cascading domino effect, leading to the failure of many others <cit.>. Climate network analysis has recently provided quantitative evidence that the Amazon possesses teleconnections with other tipping elements. <cit.> In contrast to the existing studies on the analysis of Amazon connectivity using climate networks, our study focuses on quantifying the new connectivity patterns that the Amazon rainforest is exhibiting within the global climate system and the possible manifestation of these new connectivity patterns in the spread of perturbations, as modeled here using diffusion and random walks on networks.
We explore the new connectivity first by identifying the trends in the density of connections that the Amazon has with other regions. In Fig. <ref>, we observe an increase in connectivity over the tropical Atlantic, the eastern Pacific, and the Indian Ocean region, some of the most dynamic regions in the global climate system. The connectivity increase with the tropical Atlantic is a significant observation, as we know that a weakening of the Atlantic Meridional Overturning Circulation (AMOC) can destabilize the Amazon rainforest, leading to its dieback: AMOC weakening can lead to variations in tropical Atlantic SST, which could change rainfall patterns over the Amazon. <cit.> The Amazon does not share a direct link with the Indian Ocean region. However, the El Niño-Southern Oscillation (ENSO) is bi-directionally coupled with the larger Indian Ocean monsoon region. <cit.> Moreover, a recent climate-networks-based study of the Amazon indicates the existence of teleconnections with the Tibetan Plateau. <cit.> In Fig. <ref> (a-c), we observe that the Amazon's connectivity with the Indian Ocean and the Tibetan Plateau is getting stronger. In the central Pacific, we observe two patterns: increasing links to the region of the El Niño pattern associated with the most significant warming, and decreasing links to the region of the La Niña pattern associated with the most significant cooling. The La Niña phase of ENSO positively affects the Amazon, as it brings wetter conditions over the Amazon. In contrast, El Niño brings drier conditions over the Amazon, creating conditions for forest fires and related disasters.
To give stronger credence to our results, we have repeated our experiments with different thresholds on the correlations and also constructed networks using distinct methodologies, GTN vs. KNN; see Fig. <ref>. Our above findings are stable across all the repeated experiments, and we note that the interactions between some of the most dynamic regions of global climate, such as the Indian Ocean region, eastern Pacific, tropical Atlantic, and the Amazon, are getting reconfigured. Next, we explore an innovative methodology known as the Laplacian Eigenmaps to augment our above results that the connectivity between the Amazon and the global climate system is reconfiguring. In addition, this method can also identify the period in the data set when this reconfiguration started to take effect.
While employing Laplacian Eigenmaps, we observe an increase in the length ‖ξ_i ∈ U‖_2, the 2-norm of the Amazon nodes in the embedding obtained using Laplacian Eigenmaps-based projections, after the 1992-2017 window; see Fig. <ref>. This increase is more pronounced in GTN with ρ∈{0.05, 0.1} and KNN with K∈{500, 1000}, see Figs. <ref> (b-f). This analysis does not provide insights into the properties of individual nodes or edges, but it does provide insights into the connectivity of the Amazon within the global climate system; the first interpretation of the observations in Fig. <ref> is that an increase in ‖ξ_i ∈ U‖_2 corresponds to an increase in the volume occupied by the sub-manifold of the Amazon region within the larger manifold of the global climate. Such an increase is only possible if the Amazon reconfigures its connectivity with a bias towards links that allow any perturbation to (from) the Amazon to spread faster in the system, as we also observe explicitly in later results. This pattern emerges from the 1992-2017 window, the period of greatest forest loss in the Amazon.<cit.> This observation indicates that large-scale deforestation in the Amazon could have played a role in developing the new interactions with the climate system.
Another question we explored was whether the Amazon shows trends in connectivity that are different from areas of the same size in other parts of the globe. For this purpose, we compared the Amazon with two types of random boxes on the globe, one lying only in the tropics (tropical random boxes) and one anywhere on the planet (overall random boxes); these boxes have the same number of nodes as the Amazon. We observe that Γ, the connectivity ratio for the Amazon, shows a stronger trend than the median of this quantity for the two types of random boxes (see Fig. <ref> (c, g, and k)). The Amazon is gaining outgoing links faster than half of the regions of the same size distributed in the tropics or other parts of the planet. Moreover, the geodesic length of the links between Amazon and non-Amazon nodes is also growing at a higher rate than the median of other parts of the planet (see Fig. <ref> (d, h, and l)). From these results, we conclude that the Amazon is gaining long-range links faster than other regions globally. From the above observations, we hypothesize that the Amazon's local connectivity patterns are relatively stable, while its longer-range interactions in the climate system are changing; we have also observed similar patterns in Fig. <ref>. We also note that for KNN we do not observe consistent patterns in the trends for Γ, the connectivity ratio of the Amazon: for example, whereas the trends in the connectivity ratio of the outgoing links are positive for all thresholds on K, similar to GTN, for incoming links they are only positive for K=250 (see Fig. <ref> (d, k, r, e, l, and s)). Also, the trends in geodesic distances are all negative in KNN, contrary to our observation in GTN (see Fig. <ref> (d, k, r, e, l, and s)), possibly because the KNN construction is not able to capture all the long-range connections.
In Fig. <ref> (a, e, f, i, and j), we observe that in GTN the average trends in edge betweenness, both for links internal to the Amazon and for links going out of the Amazon, are smaller than the median for random boxes. That is, edges involving the Amazon are not gaining any strategic significance within these networks; instead, they seem to be losing some; note the slight negative trends for most thresholds in Fig. <ref> (a, b, e, f, i, and j). In conclusion, from the GTN networks we infer that the edge betweenness trends are relatively small, close to zero; that is, the betweenness of the links is stable. The KNN case shows similar features in the edge betweenness trends, but with a few exceptions, for example K=500 for links within the Amazon (see Fig. <ref> (h)) and K=1000 for outgoing links (see Fig. <ref> (q)). Moreover, we observe greater values of these trends in KNN and a broader distribution of edge betweenness for random boxes. Given the complexity and inconsistencies of these trends, we cannot draw a stronger hypothesis about changes in the edge betweenness of the Amazon for KNN.
The above analysis identified several structural changes occurring in the connectivity between the Amazon and the rest of the climate system over the last seven decades; however, it does not provide quantitative insights into the possible impacts of this reconfiguring connectivity, such as how environmental or climatic perturbations from (or to) the Amazon will spread in the coming years and decades. It is critical to develop such an understanding, given the increasing incidence of large-scale forest fires resulting in intercontinental smoke transport.<cit.> Furthermore, the dieback of the Amazon rainforest is now a strong possibility, and the dispersal of its cascading consequences into the global climate system still needs to be better comprehended.<cit.> In this work, we have used graph diffusion and random walks to study the transport of perturbations in the evolving climate network. To our knowledge, this is the first attempt to employ these techniques to study evolving climate networks. In the first set of simulations, we use diffusion on graphs (see Section <ref> for details) to model the spread of perturbations from the Amazon to the rest of the globe on GTN, and we find that perturbations not only spread faster in the network but also travel further (see Fig. <ref> (a-b)). An analogous analysis for KNN in Fig. <ref> (a-b) shows a similar feature: in more recent networks, perturbations can diffuse faster through the network. Moreover, in the KNN-based analysis, large parts of the central and eastern Pacific and Atlantic oceans are receiving perturbations faster from the Amazon (see Fig. <ref> (c)). Also, for KNN networks, more complex trends coexist in some parts of the Pacific and Indian Oceans. For the GTN, in Fig. <ref> (c), we observe that the diffusion distance is decreasing for the whole globe except for the Amazon itself, indicating further that perturbations over the Amazon are spreading faster.
The diffusion discussed above is a continuous-time process driven by gradients in a hypothetical quantity over the graph. Random walks offer an alternative model of spreading phenomena on networks: a discrete-time process that is neither as restrictive nor gradient-driven as diffusion. In the GTN networks, the random walks originating from the Amazon in more recent time windows spread faster and further; see the methodological details and Fig. <ref> (a-b). In Fig. <ref> (a-b), a similar feature in the random walks can be observed for the KNN graphs. Moreover, in the GTN networks, we note that the most significant change, in the form of a decrease in random walk probability, is occurring over the Amazon itself (see Fig. <ref> (c)); that is, Amazonian random walks of length η = 5 are returning to the Amazon less frequently over successive time windows, whereas most other regions of the globe have an increasing random walk probability. Such characteristics of the random walk can only manifest if the Amazon is building long-range connections, possibly strengthening its teleconnections. In Fig. <ref> (c), we have also plotted the change in random walk probability for the KNN networks; although the spatial patterns we observe do not have such a clear climatological interpretation, the directionality in the KNN graphs does have significance: the regions of decreasing and increasing random walk probability may indicate a decrease and increase, respectively, in that region's influence over the Amazonian climate.
§ CONCLUSION
The Amazon rainforest is a critically endangered component of our planet's environment and climate, and it is also considered one of the tipping elements of the climate system, which can undergo irreversible changes with far-reaching consequences for the region and the globe. <cit.> Therefore, identifying new, emerging features in the interactions between the Amazon and our climate system is of great significance. In this work, we analyzed the surface air temperature for the last seven decades using climate network analysis to identify and quantify changes in connectivity patterns between the Amazon rainforest region and the global climate system. After studying a collection of network properties, including trends in connectivity, betweenness, diffusion, and random walks, we conclude that the Amazon is changing its connectivity configuration, gaining longer-range connectivity with the ability to spread a perturbation faster and further.
While carrying out the trend analysis of link densities between the Amazon and the globe, we found that the Amazon is gaining long-range links to several highly dynamic climate system components, including the western Atlantic region, which is critical for the future stability of the AMOC. There also appears to be a reconfiguration of connectivity between the Amazon and the South Asian monsoon region (the Indian Ocean and the Tibetan Plateau), along with the climatically critical eastern and central Pacific regions. It is important to note that the AMOC and the South Asian monsoon are also tipping elements. Furthermore, we observed that these new connectivity patterns allow for a faster diffusion of perturbations from the Amazon, an important observation in the context of increasing rates of forest fires, as it may now be plausible for wildfire smoke to spread faster and further.
In this work, we also provided a few methodological innovations within the framework of climate network analysis. We showed that Laplacian Eigenmaps-based embedding could be used to study changes in the global structure of evolving climate networks; this technique could prove helpful in studying temporally evolving networks. Furthermore, we employed the method of graph diffusion and random walks on climate networks to simulate the spread of perturbations. Our results indicate that the inclusion of the technique of graph diffusion in climate network analysis could prove very fruitful in studying the impact of global warming on the spread of climatological and environmental perturbations in the global climate system.
§ ACKNOWLEDGEMENT
The authors thank Prof. Darren Narayan and Dr. Kamal Rana for many helpful discussions.
AG, JB, and NM acknowledge the support of the National Science Foundation (NSF; Grant No. DMS-1950189, REU Site: Extremal Graph Theory and Dynamical Systems). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
§ REFERENCES

[1] H.-O. Pörtner et al., Climate Change 2022: Impacts, Adaptation and Vulnerability, Technical Summary (Cambridge University Press, Cambridge, UK and New York, USA, 2022), pp. 37–118.
[2] W. J. Ripple, C. Wolf, T. M. Newsome, P. Barnard, and W. R. Moomaw, "World Scientists' Warning of a Climate Emergency," BioScience 70, 8–12 (2019).
[3] S. Ornes, "How does climate change influence extreme weather? Impact attribution research seeks answers," Proceedings of the National Academy of Sciences 115, 8232–8235 (2018).
[4] T. M. Lenton, J. Rockström, O. Gaffney, S. Rahmstorf, K. Richardson, W. Steffen, and H. J. Schellnhuber, "Climate tipping points—too risky to bet against" (2019).
[5] K. L. Ebi, J. Vanos, J. W. Baldwin, J. E. Bell, D. M. Hondula, N. A. Errett, K. Hayes, C. E. Reid, S. Saha, J. Spector, and P. Berry, "Extreme weather and climate change: Population health and health system implications," Annual Review of Public Health 42, 293–315 (2021).
[6] J. A. Church and N. J. White, "Sea-level rise from the late 19th to the early 21st century," Surveys in Geophysics 32, 585–602 (2011).
[7] R. Domingues, G. Goni, M. Baringer, and D. Volkov, "What caused the accelerated sea level changes along the U.S. East Coast during 2010–2015?," Geophysical Research Letters 45, 13367–13376 (2018).
[8] D. M. Bergstrom et al., "Combating ecosystem collapse from the tropics to the Antarctic," Global Change Biology 27, 1692–1703 (2021).
[9] C. F. Sato and D. B. Lindenmayer, "Meeting the global ecosystem collapse challenge," Conservation Letters 11, e12348 (2018).
[10] N. Boers, N. Marwan, H. M. Barbosa, and J. Kurths, "A deforestation-induced tipping point for the South American monsoon system," Scientific Reports 7, 41489 (2017).
[11] C. A. Boulton, T. M. Lenton, and N. Boers, "Pronounced loss of Amazon rainforest resilience since the early 2000s," Nature Climate Change 12, 271–278 (2022).
[12] R. Avissar and D. Werth, "Global hydroclimatological teleconnections resulting from tropical deforestation," Journal of Hydrometeorology 6, 134–145 (2005).
[13] R. A. Butler, "What's the deforestation rate in the Amazon?" (2020).
[14] L. V. Gatti et al., "Amazonia as a carbon source linked to deforestation and climate change," Nature 595, 388–393 (2021).
[15] N. Malik, B. Bookhagen, N. Marwan, and J. Kurths, "Analysis of spatial and temporal extreme monsoonal rainfall over South Asia using complex networks," Climate Dynamics 39, 971–987 (2012).
[16] U. Ozturk, N. Malik, K. Cheung, N. Marwan, and J. Kurths, "A network-based comparative study of extreme tropical and frontal storm rainfall over Japan," Climate Dynamics 53, 521–532 (2019).
[17] T. Liu, D. Chen, L. Yang, J. Meng, Z. Wang, J. Ludescher, J. Fan, S. Yang, D. Chen, J. Kurths, X. Chen, S. Havlin, and H. J. Schellnhuber, "Teleconnections among tipping elements in the Earth system," Nature Climate Change 13, 67–74 (2023).
[18] A. A. Tsonis, K. L. Swanson, and P. J. Roebber, "What do networks have to do with climate?," Bulletin of the American Meteorological Society 87, 585–596 (2006).
[19] J. F. Donges, Y. Zou, N. Marwan, and J. Kurths, "Complex networks in climate dynamics," The European Physical Journal Special Topics 174, 157–179 (2009).
[20] A. Radebach, R. V. Donner, J. Runge, J. F. Donges, and J. Kurths, "Disentangling different types of El Niño episodes by evolving climate network analysis," Physical Review E 88, 052807 (2013).
[21] N. Boers, B. Goswami, A. Rheinwalt, B. Bookhagen, B. Hoskins, and J. Kurths, "Complex networks reveal global pattern of extreme-rainfall teleconnections," Nature 566, 373–377 (2019).
[22] The NCEP/NCAR Reanalysis Project at the NOAA Physical Sciences Laboratory: NCEP-NCAR Reanalysis 1, https://psl.noaa.gov/data/gridded/data.ncep.reanalysis.html.
[23] E. Kalnay et al., "The NCEP/NCAR 40-year reanalysis project," Bulletin of the American Meteorological Society 77, 437–472 (1996).
[24] Intergovernmental Panel on Climate Change, "Observations: Ocean," in Climate Change 2013 – The Physical Science Basis: Working Group I Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (Cambridge University Press, 2014), pp. 255–316.
[25] S. Levitus, J. I. Antonov, T. P. Boyer, R. A. Locarnini, H. E. Garcia, and A. V. Mishonov, "Global ocean heat content 1955–2008 in light of recently revealed instrumentation problems," Geophysical Research Letters 36 (2009).
[26] M. Belkin and P. Niyogi, "Laplacian eigenmaps and spectral techniques for embedding and clustering," in Proceedings of the 14th International Conference on Neural Information Processing Systems (NIPS'01) (MIT Press, Cambridge, MA, 2001), pp. 585–591.
[27] U. Brandes, "A faster algorithm for betweenness centrality," The Journal of Mathematical Sociology 25, 163–177 (2001).
[28] F. Cajori, A History of Mathematical Notations: Vol. II (Cosimo, Incorporated, 2007).
[29] W. Steffen et al., "Trajectories of the Earth system in the Anthropocene," Proceedings of the National Academy of Sciences USA 115, 8252–8259 (2018).
[30] C. Ciemer, R. Winkelmann, J. Kurths, and N. Boers, "Impact of an AMOC weakening on the stability of the southern Amazon rainforest," The European Physical Journal Special Topics 230, 3065–3073 (2021).
[31] I. I. Mokhov, D. A. Smirnov, P. I. Nakonechny, S. S. Kozlenko, E. P. Seleznev, and J. Kurths, "Alternating mutual influence of El-Niño/Southern Oscillation and Indian monsoon," Geophysical Research Letters 38 (2011).
[32] C. H. L. Silva Junior, A. C. M. Pessôa, N. S. Carvalho, J. B. C. Reis, L. O. Anderson, and L. E. O. C. Aragão, "The Brazilian Amazon deforestation rate in 2020 is the greatest of the decade," Nature Ecology & Evolution 5, 144–145 (2021).
[33] A. Ansmann, H. Baars, M. Tesche, D. Müller, D. Althausen, R. Engelmann, T. Pauliquevis, and P. Artaxo, "Dust and smoke transport from Africa to South America: Lidar profiling over Cape Verde and the Amazon rainforest," Geophysical Research Letters 36 (2009).
[34] H. Yu, L. A. Remer, R. A. Kahn, M. Chin, and Y. Zhang, "Satellite perspective of aerosol intercontinental transport: From qualitative tracking to quantitative characterization," Atmospheric Research 124, 73–100 (2013).
[35] H. Akimoto, "Global air quality and pollution," Science 302, 1716–1719 (2003).
[36] R. T. Walker, "Collision course: Development pushes Amazonia toward its tipping point," Environment: Science and Policy for Sustainable Development 63, 15–25 (2021).
[37] Y. Malhi, L. E. O. C. Aragão, D. Galbraith, C. Huntingford, R. Fisher, P. Zelazowski, S. Sitch, C. McSweeney, and P. Meir, "Exploring the likelihood and mechanism of a climate-change-induced dieback of the Amazon rainforest," Proceedings of the National Academy of Sciences 106, 20610–20615 (2009).
[38] P. M. Fearnside, "Deforestation in Brazilian Amazonia: History, rates, and consequences," Conservation Biology 19, 680–688 (2005).
[39] G. Mataveli, G. de Oliveira, C. H. L. Silva-Junior, S. C. Stark, N. Carvalho, L. O. Anderson, L. V. Gatti, and L. E. O. C. Aragão, "Record-breaking fires in the Brazilian Amazon associated with uncontrolled deforestation," Nature Ecology & Evolution 6, 1792–1793 (2022).
[40] J. C. Pereira and E. Viola, "Close to a tipping point? The Amazon and the challenge of sustainable development under growing climate pressures," Journal of Latin American Studies 52, 467–494 (2020).
[41] IPCC, Climate Change 2022: Impacts, Adaptation and Vulnerability, Summary for Policymakers (Cambridge University Press, Cambridge, UK and New York, USA, 2022), pp. 3–33.
[42] D. C. Zemp, C.-F. Schleussner, H. M. Barbosa, M. Hirota, V. Montade, G. Sampaio, A. Staal, L. Wang-Erlandsson, and A. Rammig, "Self-amplified Amazon forest loss due to vegetation-atmosphere feedbacks," Nature Communications 8, 1–10 (2017).
[43] J. F. Donges, J. Heitzig, B. Beronov, M. Wiedermann, J. Runge, Q. Y. Feng, L. Tupikina, V. Stolbova, R. V. Donner, N. Marwan, et al., "Unified functional network and nonlinear time series analysis for complex systems science: The pyunicorn package," Chaos: An Interdisciplinary Journal of Nonlinear Science 25, 113101 (2015).
[44] U. Brandes and C. Pich, "Centrality estimation in large networks," International Journal of Bifurcation and Chaos 17, 2303–2318 (2007).
[45] E. Estrada and N. Hatano, "Communicability in complex networks," Physical Review E 77, 036111 (2008).
[46] E. Estrada and N. Hatano, "Communicability angle and the spatial efficiency of networks," SIAM Review 58, 692–715 (2016).
Duality of O(N) and Sp(N) random tensor models: tensors with symmetries
H. Keppler, T. Krajewski, T. Muller, A. Tanasa
In a recent series of papers, a duality between
orthogonal and symplectic random tensor models has been proven, first for quartic models and then for models with interactions of arbitrary order. However, the tensor
models considered so far in the literature had
no symmetry under permutation of the indices.
In this paper, we generalize these results to tensor models with interactions of arbitrary order
which further have non-trivial symmetry under the permutation of the indices.
Totally symmetric and anti-symmetric tensors are thus treated as a particular case of our result.
§ INTRODUCTION
Random tensor models (see the recent books <cit.> or the reviews <cit.>) are 0-dimensional quantum field theoretical generalisations of the celebrated matrix models <cit.>. Within this framework, they can be seen as
probability measures on tensor spaces; this is the point of view we take in this paper.
Tensor models have thus been used as tools to generate discrete random geometries in more than two dimensions.
Moreover, they have been further used to construct models similar to the holographic Sachdev-Ye-Kitaev model but without quenched disorder <cit.>, and new (melonic) Conformal Field Theories <cit.>.
Many of the original rigorous results on tensor models relied on the presence of a very large symmetry group (usually several distinct copies of U(N) or O(N)) that forbids the tensors from having any symmetry under permutation of their indices <cit.>.
Later on, tensor models with tensors living in some non-trivial (mostly O(N)) representation were studied systematically <cit.>.
In <cit.> the authors studied tensor models with symplectic symmetry Sp(N) in which case the tensor components sometimes are anticommuting (fermionic/odd graßmann) variables.
Relations between the representations of O(N) and Sp(N) have a long history. Thus, King <cit.> showed that the dimensions of irreducible representations of both groups agree when exchanging symmetrization and antisymmetrization (transposed Young tableaux) and replacing N by -N. So-called negative dimension theorems, or N to -N dualities, relating the orthogonal and symplectic group via the formal relation SO(-N)≃ Sp(N), are well known <cit.>
for matrix and vector models.
Several incarnations of this relation can be found in the literature: for even N, SO(N) and Sp(N) gauge theories are known to be related by changing N to -N <cit.>; a vector model with symplectic fermions in three space-time dimensions has been studied in <cit.> and an example of SO(N) and Sp(N) gauge theories with matter fields and Yukawa interactions can be found in <cit.>; a duality between orthogonal and symplectic matrix ensembles (the β=1,4 ensembles) has been shown in <cit.>.
From a supergeometric or supersymmetric point of view such relations can be seen to arise naturally <cit.>.
As a natural followup of these matrix model results, we show in this paper how the N to -N symmetry arises in the tensor model case for tensors with interactions of arbitrary order, which further have non-trivial symmetry under the permutation of their indices.
This result is a
generalization of similar results obtained in simpler settings: for quartic interactions this was proven in <cit.> and for tensor models with interactions of arbitrary order this was done in <cit.>. However, let us emphasize that, unlike the results of this paper, both the results of <cit.> and <cit.> were obtained for tensor models that had no symmetry under the permutation of indices.
More precisely, the main result of this paper is the following.
We consider tensors of order D that transform in some tensor representation R of O(N) or Sp(N). This
implies that the tensors may obey some non-trivial symmetry under permutation of their indices. In order to treat models with orthogonal and symplectic symmetry simultaneously, we introduce a grading parameter ϵ∈{0,1}, such that ϵ=0 corresponds to the O(N) symmetric model and ϵ=1 to the Sp(N) symmetric one. The tensor components are real fermionic (anticommuting, odd) if ϵ=1 and D is odd, and real bosonic (commuting, even) otherwise.
The real graded tensor model with symmetry R is defined by the measure
dμ[T] ≃ e^-S[T] ∏_a_1,…,a_D dT^a_1… a_D ,
S[T] = T^a_𝒟 C^-1_a_𝒟 b_𝒟 T^b_𝒟 + ∑_𝒮 connected,
|V(𝒮)|>2λ_𝒮/|V(𝒮)|/D I_𝒮(T) ,
where g^ϵ_a_cb_c is the Kronecker delta δ_a_cb_c for ϵ=0 or the canonical symplectic form ω_a_cb_c for ϵ=1, and the sum runs over independent connected invariants I_𝒮(T) of order higher than two, indexed by undirected stranded graphs (see Section <ref> for more details).
The partition function Z and the expectation value of an invariant ⟨I_𝒮(T)⟩ are defined by:
Z({λ})=∫ dμ[T], and ⟨I_𝒮(T)⟩({λ})=1/Z∫ dμ[T] I_𝒮(T) ,
and can be evaluated in perturbation theory.
The main theorem of this paper is:
The perturbative series of the partition function Z and of the expectation values of invariants ⟨I_𝒮(T)⟩ can be expressed as a formal sum over 2-colored stranded graphs 𝒢.
Each summand, corresponding to a specific graph 𝒢 (called the amplitude of that graph), writes as a product:
K({λ_𝒮},ϵ)·((-1)^ϵ N)^F(𝒢) ,
of a term depending on N and a term K, encoding both the dependence on the coupling constants λ_𝒮 and some combinatorial factors associated to 𝒢 (see Section <ref> for the relevant definitions).
The main result of this paper follows as a direct consequence of the theorem above:
Tensor models of the form in Def. <ref> with symmetry given by the O(N) tensor representation R are dual to corresponding tensor models with Sp(N) symmetry given by the representation with transposed Young diagrams R^' (exchanging symmetrization and antisymmetrization) in the sense that the amplitudes of graphs in their perturbative expansions are mapped into each other after a change of N to -N.
This follows from Theorem <ref>. The replacement ϵ→ϵ+1 (mod 2) and N→ -N leaves the amplitude (<ref>) unchanged, and, as will be noted in Section <ref>, the shift ϵ→ϵ+1 (mod 2) exchanges symmetrization and antisymmetrization in the tensor representation R.
This has the effect of transposing all Young diagrams λ→λ^', and leads to the tensor representation R^'.
The paper is organized as follows. In Section <ref> we recall several
results on representation theory of the orthogonal and symplectic group focusing
on the Brauer algebra, which plays a role similar to that of the group algebra of the symmetric group for representations of GL(N).
At the end of this section we give a dictionary between notions used in the physics/tensor model and representation theory literature. In Section <ref> we define the tensor models of interest for this paper
and give their diagrammatic representation in terms of stranded graphs. In Section <ref> we give the proof of our main result, and in Section <ref> we use, as an explicit example, the totally symmetric and antisymmetric tensor representations to illustrate the duality between O(N) and Sp(N) tensor models proved in the previous section.
§ PREREQUISITE
§.§ Irreducible representations of the orthogonal and symplectic group
In this section we review some definitions and results of the
theory of irreducible representations of the general linear group GL(N) and its connection to representations of the symmetric group 𝔖_D and Young diagrams.
We further review irreducible representations of the groups O(N) and Sp(N), preserving some non-degenerate bilinear form and their connections to the Brauer algebra.
Let V=ℝ^N. Both GL(N) and 𝔖_D act on the tensor product space V^⊗ D. Irreducible representations of GL(N) can be obtained as the images of certain elements of the group algebra ℂ𝔖_D (Young symmetrizers). Analogously, irreducible representations of O(N) or Sp(N) can be obtained using projectors defined by elements of the Brauer algebra B_D.
Our exposition is based on <cit.> and, when concerning the Brauer algebra, on <cit.>.
We further refer the interested reader to <cit.> or to the books <cit.> or <cit.>.
Young tableaux.
For general combinatorial references on Young tableaux, we refer to Chapter XIV of the handbook <cit.>.
To a partition λ=(λ_1,λ_2,…,λ_k) of D∈ℕ, denoted as λ⊢ D, i.e. a sequence of non-increasing integers with |λ|=∑_i=1^kλ_i=D, we associate a Young diagram with λ_i boxes in the ith row. Note that we are using here the English notation for Young diagrams and tableaux.
The dual diagram λ^' is obtained by interchanging rows and columns in the Young diagram. Let us recall that
Young diagrams can be used to define projectors onto irreducible representations of the symmetric group 𝔖_D.
Given a Young diagram, a Young tableau is a numbering of the boxes by the integers 1,2,…, D. The canonical Young tableau is obtained by numbering the boxes consecutively:
1 2 3
4 5 6
7 8
9
10
.
Define the sets of row and column permutations:
P_λ = { g∈𝔖_D | g preserves each row} ,
Q_λ = { g∈𝔖_D | g preserves each column} .
Next, one introduces two elements of the group algebra ℂ𝔖_D:
a_λ = ∑_g∈ P_λ g , b_λ=∑_g∈ Q_λ sgn(g) g .
Noting that ℂ𝔖_D acts on V^⊗ D by permuting factors, a_λ acts as a symmetrizer and b_λ as an antisymmetrizer on the tensors. Finally, the Young symmetrizer is defined as:
c_λ = a_λ· b_λ .
Consider as an example λ=(3) or λ=(1,1,1). The image of the action of c_λ on V^⊗ 3 is Sym^3 V or ⋀^3 V, the space of totally symmetric or antisymmetric tensors, respectively.
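As a small numerical illustration (ours, not part of the original text), one can realize these normalized symmetrizers as matrices on V^⊗ 3 and read off the dimensions of their images from the traces of the resulting projectors:

```python
import itertools
import numpy as np

def perm_operator(g, N):
    """Operator on V^{(x)D} (V = R^N) permuting the tensor factors according to the tuple g."""
    D = len(g)
    P = np.zeros((N ** D, N ** D))
    for idx in itertools.product(range(N), repeat=D):
        new_idx = tuple(idx[g[k]] for k in range(D))
        P[np.ravel_multi_index(new_idx, (N,) * D),
          np.ravel_multi_index(idx, (N,) * D)] = 1.0
    return P

def sign(g):
    """Signature of the permutation g (given as a tuple)."""
    s, g = 1, list(g)
    for i in range(len(g)):
        while g[i] != i:
            j = g[i]
            g[i], g[j] = g[j], g[i]
            s = -s
    return s

N, D = 4, 3
perms = list(itertools.permutations(range(D)))
sym = sum(perm_operator(g, N) for g in perms) / len(perms)                 # normalized a_(3)
antisym = sum(sign(g) * perm_operator(g, N) for g in perms) / len(perms)   # normalized b_(1,1,1)
print(round(np.trace(sym)), round(np.trace(antisym)))   # 20 and 4: dim Sym^3 V and dim Λ^3 V for N = 4
```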
The permutation group 𝔖_D acts on tensors in V^⊗ D by permutation of the indices, with V a vector space of dimension N. Then all the previous projectors give rise to representations, which are in general reducible. The dimension of the representation indexed by the Young diagram λ reads
dim(π_λ,N)=∏_(i,j)∈λ (N-i+j)/h_ij
with i (resp. j) the row (resp. column) label of the box and h_ij the hook length of the box (i,j), i.e.
h_i,j=#{ (k,l) with k=i, l≥ j or l=j, k≥ i}.
Then, it is worthwhile to notice that this is a polynomial in N that obeys the relation
dim(π_λ,-N)=
(-1)^|λ|dim(π_λ',N)
with |λ| the number of boxes in λ and λ' the dual diagram. Therefore, trading N for -N involves exchanging rows and columns, or, equivalently, symmetrization and antisymmetrization. For example, for λ=(2,1,1) and λ'=(3,1),
dim(π_(2,1,1),N) = N(N-1)(N-2)(N+1)/4· 2· 1· 1 ↔_duality dim(π_(3,1),-N) = N(N-1)(N-2)(N+1)/4· 2· 1· 1 .
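The hook-content formula and the N to -N relation are easy to check symbolically; the following sympy sketch (ours, not from the paper) verifies the duality for λ=(2,1,1):

```python
import sympy as sp

N = sp.symbols('N')

def conjugate(lam):
    """Transposed partition lambda' (rows and columns exchanged)."""
    return [sum(1 for r in lam if r > j) for j in range(lam[0])]

def hook(lam, i, j):
    """Hook length of the box in row i, column j (0-based)."""
    arm = lam[i] - (j + 1)
    leg = sum(1 for r in lam[i + 1:] if r > j)
    return arm + leg + 1

def dim_poly(lam):
    """dim(pi_lambda, N) = product over boxes of (N - i + j) / h_ij, with 1-based i, j."""
    p = sp.Integer(1)
    for i, row in enumerate(lam):
        for j in range(row):
            p *= sp.Rational(1, hook(lam, i, j)) * (N - (i + 1) + (j + 1))
    return sp.expand(p)

lam = [2, 1, 1]
lhs = dim_poly(lam).subs(N, -N)
rhs = (-1) ** sum(lam) * dim_poly(conjugate(lam))
print(sp.simplify(lhs - rhs) == 0)    # True: dim(pi_lambda, -N) = (-1)^|lambda| dim(pi_lambda', N)
print(sp.factor(dim_poly(lam)))       # N*(N - 2)*(N - 1)*(N + 1)/8
```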
These are representations of the symmetric group but for our purposes it turns out to be helpful to identify irreducible representations of the groups O(N) and Sp(N) inside the previous ones, as we shall do in the following.
Representations.
Let us recall that a representation of the group GL(N) on V^⊗ D is semisimple and decomposes into a direct sum of irreducible representations that are determined by irreducible representations of 𝔖_D, and thus indexed by Young diagrams. For simplicity, we focus on N much larger than D (N≥ 2D).
Note that for small N not all Young diagrams give irreducible representations.
An analogous construction holds for the groups O(N) and Sp(N) that preserve a non-degenerate (skew-)symmetric bilinear form. The main difference lies in the ability to form traces by contracting two factors of V^⊗ D with the bilinear form. To allow for these contractions, the group algebra ℂ𝔖_D is replaced by the Brauer algebra B_D <cit.>. As subgroups O(N), Sp(N)⊂ GL(N), irreducible representations of GL(N) are still representations of O(N) and Sp(N), but not necessarily irreducible. However, irreducible O(N) or Sp(N) representations can be obtained by traceless projections of irreducible representations of GL(N). In <cit.>, a universal traceless projector 𝔓_D∈ B_D was constructed, such that irreducible O(N) or Sp(N) representations can be obtained by first subtracting traces by applying 𝔓_D, and second applying a projector (e.g. a Young symmetrizer) onto an irreducible GL(N) representation.
Note that, in particular, both operations commute.
Brauer algebra.
Let us now exhibit the Brauer algebra B_D(z), for D∈ℕ, z∈ℂ.
For D∈ℕ, draw two horizontal rows of vertices labelled 1, 2, …, D. Brauer diagrams are represented by pairings of these 2D vertices. If every vertex in the top row is connected to a vertex in the bottom row, these elements represent permutation diagrams. Thus,
𝔖_D is a subset of the set of diagrams and ℂ𝔖_D a subset of the algebra.
For example:
σ = [the permutation diagram on four strands joining top vertex 1 to bottom vertex 2, 2 to 3, 3 to 1, and 4 to 4] , τ = [the permutation diagram joining top vertex 1 to bottom vertex 2, 2 to 1, 3 to 4, and 4 to 3] .
For simplicity, from now on we omit the labels on our diagrams.
Since Brauer diagrams are more general than permutation diagrams, the set of Brauer diagrams includes elements such as
β = [diagram with top arcs (1,3) and (2,4), and bottom arcs (1,2) and (3,4)] , υ = [diagram with top arc (1,2), bottom arc (1,3), and lines connecting top 3 to bottom 4 and top 4 to bottom 2] ,
having arcs connecting vertices of the same row.
The product of two Brauer diagrams στ is defined by placing σ below τ and “straightening” the lines:
στ = [σ drawn below τ with the two middle rows glued] = [permutation diagram connecting top 1 to bottom 3, top 2 to bottom 2, top 3 to bottom 4, and top 4 to bottom 1] .
For permutation diagrams, this is equivalent to the product of the permutations. Whenever loops appear, they get deleted to obtain again a Brauer diagram.
The Brauer algebra B_D(z) is the free ℂ-algebra on the set of Brauer diagrams together with the above product and the additional rule stating that when l≥0 loops appear in the product of two Brauer diagrams, the resulting diagram gets multiplied by a factor z^l.
βυ = [β drawn below υ with the two middle rows glued] = z [diagram with top arcs (1,2) and (3,4), and bottom arcs (1,2) and (3,4)] .
Note that one has: ℂ𝔖_D⊂ B_D(z) (diagrams with zero arcs).
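For readers who prefer a computational check, the diagram product above can be reproduced by representing each Brauer diagram as a linear map on V^⊗ 4; the closed loop in βυ then shows up as the factor z = N. The numpy sketch below is our own illustration (it assumes ε=0, takes g to be the identity form, and uses our own index conventions; it is not code from the paper):

import numpy as np

N = 3
I = np.eye(N)

# components for g = identity: bottom arcs give upper g's, top arcs give lower g's
# beta: bottom arcs (1,2),(3,4); top arcs (1,3),(2,4)
beta = np.einsum('AB,CD,ac,bd->ABCDabcd', I, I, I, I)
# upsilon: bottom arc (1,3); top arc (1,2); through lines bottom4-top3 and bottom2-top4
ups = np.einsum('AC,Dc,Bd,ab->ABCDabcd', I, I, I, I)
# claimed product diagram: top arcs (1,2),(3,4) and bottom arcs (1,2),(3,4)
rho = np.einsum('AB,CD,ab,cd->ABCDabcd', I, I, I, I)

to_matrix = lambda x: x.reshape(N**4, N**4)
lhs = to_matrix(beta) @ to_matrix(ups)
print(np.allclose(lhs, N * to_matrix(rho)))   # True: the single closed loop gives the factor z = N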
A set of generators of B_D(z) is given by σ_i and β_i (i=1,2,…,D-1):
σ_i = [diagram connecting top vertex i to bottom vertex i+1 and top vertex i+1 to bottom vertex i, every other top vertex k being connected to the bottom vertex k] , β_i = [diagram with a top arc and a bottom arc joining the vertices i and i+1, every other top vertex k being connected to the bottom vertex k] .
Furthermore, we introduce the following elements for i<j:
σ_ij = [diagram connecting top vertex i to bottom vertex j and top vertex j to bottom vertex i, all other vertices being connected vertically] , β_ij = [diagram with a top arc and a bottom arc joining the vertices i and j, all other vertices being connected vertically] .
Action on V^⊗ D.
If V is a real N-dimensional vector space with a non-degenerate bilinear form g, which can be the standard symmetric or symplectic form, one considers integer values z=(-1)^ε N, N∈ℕ (ε=0 in the symmetric and ε=1 in the symplectic case). The Brauer algebra B_D((-1)^ε N) acts naturally on tensors of order D that we represent by their components
T=T^a_1a_2… a_D e_a_1⊗ e_a_2⊗…⊗ e_a_D ,
where {e_a}_a=1,2,… N is a standard basis with respect to the bilinear form g on V. An element β∈ B_D((-1)^ε N), corresponding to a single Brauer diagram, acts as follows on T^a_1a_2… a_D:
* Place the indices a_1a_2… a_D in the top row of the Brauer diagram.
* Permute them according to the lines that connect the bottom to the top row.
* Contract them with g if they are connected by an arc in the top row.
* Add a factor g^a_ia_j for each arc in the bottom row.
* Multiply the result by (η(β))^ε, where η(β)= (-1)^m and m is the minimal number of crossings in β. This sign can be expressed as the sign of oriented pairings in subsection <ref>.
Crucially, because of the last point, in applications to Sp(N), B_D(-N) acts in a signed representation.
More explicitly, we can associate to β a linear map in End(V^⊗ D), whose components read
(β)^a_1a_2… a_D_b_1b_2… b_D = η(β)^ε ∏_(i,j): i in the bottom row connected to j in the top row δ^a_i_b_j ∏_(k,l): k connected to l by an arc in the bottom row g^a_ka_l ∏_(m,p): m connected to p by an arc in the top row g_b_mb_p ,
and it acts on the tensor components as:
β· T^a_1a_2… a_D = ∑_b_1,b_2,…,b_D (β)^a_1a_2… a_D_b_1b_2… b_D T^b_1b_2… b_D .
For example, one has:
σ_ij· T^a_1… a_i… a_j… a_D = T^a_1… a_j… a_i… a_D ,
β_ij· T^a_1… a_i… a_j… a_D = g^a_ia_j g_b_ib_j T^a_1… b_i… b_j… a_D ,
υ· T^a_1a_2a_3a_4 = g^a_1a_3 g_b_1b_2 T^b_1b_2a_4a_2 .
The action is extended to arbitrary elements of the Brauer algebra by linearity.
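As a small illustration of these rules (our own numpy sketch, assuming ε=0 and g the identity form), one can check the defining relations σ_13^2 = id and β_13^2 = N β_13 directly on a random order-3 tensor:

import numpy as np

N = 4
rng = np.random.default_rng(1)
T = rng.standard_normal((N, N, N))
g = np.eye(N)                      # epsilon = 0, so g is the identity form

# sigma_13 . T^{a1 a2 a3} = T^{a3 a2 a1}
sigma13 = lambda t: np.einsum('abc->cba', t)
# beta_13 . T^{a1 a2 a3} = g^{a1 a3} g_{b1 b3} T^{b1 a2 b3}
beta13 = lambda t: np.einsum('AC,ik,ibk->AbC', g, g, t)

print(np.allclose(sigma13(sigma13(T)), T))            # sigma_13^2 = id
print(np.allclose(beta13(beta13(T)), N * beta13(T)))  # beta_13^2 = N beta_13: a closed loop gives N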
One can also raise the indices of the linear map using the bilinear form such that:
(β)^a_1a_2… a_D, b_1… b_D = (β)^a_1a_2… a_D_ c_1 c_2… c_D g^c_1 b_1… g^c_D b_D
= η(β)^ε ∏_(i,j): i in the bottom row connected to j in the top row g^a_i b_j ∏_(k,l): k connected to l by an arc in the bottom row g^a_k a_l ∏_(m,p): m connected to p by an arc in the top row g^b_m b_p .
Note that
because of the sign η(β) in the definition of the action on V^⊗ D, the interchange of symmetrization and antisymmetrization when going from O(N) representations to Sp(N) representations is already built in. This can be seen from the fact that, in the case where σ is a permutation, the sign η(σ) corresponds to sgn(σ). The components of the linear maps associated to a_λ and b_λ (see (<ref>)) thus are:
(a_λ)^a_1 … a_D_b_1 … b_D = ∑_σ∈ P_λ sgn(σ)^ε ∏_(i,j): j=σ(i) δ^a_i_b_j ,
(b_λ)^a_1 … a_D_b_1 … b_D = ∑_τ∈ Q_λ sgn(τ)^ε+1 ∏_(i,j): j=τ(i) δ^a_i_b_j .
In conclusion, in the O(N) case (ε=0), a_λ now acts as a symmetrizer and b_λ as an antisymmetrizer, whereas the roles are reversed in the Sp(N) case (ε=1). The product c_λ = a_λ· b_λ thus corresponds to the Young symmetrizer associated to a tableau λ when ε=0 and to the symmetrizer associated to the dual tableau λ^', obtained by permuting the rows and columns of λ, when ε=1.
Traceless projector. In order to implement the projection onto irreducible representations of O(N) or Sp(N), the authors of <cit.> build a universal traceless projector, which we
introduce here, for the sake of completeness. The main building block of this projector is:
A_D=∑_1≤ i<j≤ D β_ij ∈ B_D((-1)^ε N) .
Let us now list some important properties of A_D:
* It commutes with all elements of ℂ𝔖_D⊂ B_D((-1)^ε N). Thus, in particular, it commutes with Young symmetrizers.
* The action of A_D on V^⊗ D is diagonalizable.
* The kernel ker A_D⊂ V^⊗ D is exactly the space of traceless tensors.
* Its non-zero eigenvalues are in (-1)^ε ℕ.
The proof of these statements can be found in <cit.>, and the universal traceless projector is
given by:
𝔓_D= ∏_α non-zero eigenvalue of A_D(1-1/α A_D ) .
Explicit formulas for the non-zero eigenvalues α are also given in <cit.>.
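A minimal numerical illustration (our own, for the simplest case D=2 and ε=0, where A_2=β_12 has the single non-zero eigenvalue N, so 𝔓_2 = 1 - A_2/N): the projected tensor is traceless and the projector is idempotent.

import numpy as np

N = 4
rng = np.random.default_rng(0)

def A2(T):
    # action of beta_12 on an order-2 tensor for g = identity (O(N) case)
    return np.eye(N) * np.trace(T)

T = rng.standard_normal((N, N))
PT = T - A2(T) / N                        # universal traceless projector for D = 2

print(np.isclose(np.trace(PT), 0.0))      # True: the projected tensor is traceless
print(np.allclose(PT - A2(PT) / N, PT))   # True: applying the projector again changes nothing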
§.§ Sign of directed pairings
In this subsection, we define the sign given by two oriented pairings and give some of its properties.
Consider two oriented pairings M⃗_1 and M⃗_2 on a set of 2D elements, and suppose these two pairings are given by
M⃗_1 = { (i_1,i_2), … , (i_2D-1,i_2D) } ,
M⃗_2 = { (j_1,j_2), … , (j_2D-1,j_2D) } .
The sign ϵ(M⃗_1,M⃗_2) of the two pairings is defined as the sign of the permutation σ=([ i_1 i_2 … i_2D-1 i_2D; j_1 j_2 … j_2D-1 j_2D ]):
ϵ(M⃗_1,M⃗_2) = sgn(([ i_1 i_2 … i_2D-1 i_2D; j_1 j_2 … j_2D-1 j_2D ]) )
We give here a list of some of the properties of the sign ϵ(M⃗_1,M⃗_2):
* It is symmetric under permutation of its arguments:
ϵ(M⃗_1,M⃗_2) = ϵ(M⃗_2,M⃗_1) .
* For three pairings M⃗_1, M⃗_2, M⃗_3 on the same set, one has:
ϵ(M⃗_1,M⃗_2) = ϵ(M⃗_1,M⃗_3) ϵ(M⃗_2,M⃗_3).
* For two pairings M⃗_1, M⃗_2 on a first set 𝒮_1 of 2D elements and two pairings M⃗_3, M⃗_4 on a second set 𝒮_2 of 2p elements,
one has:
ϵ(M⃗_1,M⃗_2) ϵ(M⃗_3,M⃗_4) = ϵ(M⃗_1 ⊔M⃗_3,M⃗_2 ⊔M⃗_4) .
* Consider a set of elements 𝒮_v and two pairings M⃗_1 and M⃗_2 on this set. Depict each element of 𝒮_v as a node and each pair in M⃗_1 and M⃗_2 as an oriented edge pointing from the first element to the second, of color 1 for the pairs in M⃗_1 and color 2 for the ones in M⃗_2. The sign ϵ( M⃗_1,M⃗_2 ) can be written as:
ϵ( M⃗_1,M⃗_2 ) = (-1)^F_1/2,even .
In the equation above, F_1/2,even is the number of even faces of color 1 and 2 of the graphical representation described above. An even, resp. odd, face of color 1 and 2 is defined as a closed cycle of alternating colors 1 and 2 where an even, resp. odd, number of edges point in one direction around the cycle. Because each face consists of an even number of edges, this notion is well defined.
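The following plain-Python sketch (our own illustration) computes the sign ϵ(M⃗_1,M⃗_2) as the sign of the two-row permutation defined above, using the fact that this sign factorizes into the signs of the two flattened words, and checks the first two properties on a small example:

from itertools import chain

def perm_sign(word):
    # sign of the permutation that maps positions to the ranks of the entries of `word`
    sign, visited = 1, [False] * len(word)
    rank = {v: k for k, v in enumerate(sorted(word))}
    for i in range(len(word)):
        if not visited[i]:
            j, length = i, 0
            while not visited[j]:
                visited[j] = True
                j = rank[word[j]]
                length += 1
            sign *= -1 if length % 2 == 0 else 1
    return sign

def pairing_sign(m1, m2):
    # epsilon(m1, m2): sign of the two-row permutation (flattened m1 over flattened m2)
    return perm_sign(list(chain.from_iterable(m1))) * perm_sign(list(chain.from_iterable(m2)))

m1 = [(1, 2), (3, 4)]
m2 = [(1, 3), (2, 4)]
m3 = [(1, 4), (2, 3)]
assert pairing_sign(m1, m2) == pairing_sign(m2, m1)                         # symmetry
assert pairing_sign(m1, m2) == pairing_sign(m1, m3) * pairing_sign(m2, m3)  # second property
print(pairing_sign(m1, m2), pairing_sign(m1, m3))                           # -1  1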
In the sequel, the sign
ϵ plays a crucial role in the proof of our main theorem. Let us first link this quantity to the Brauer algebra and connect them to the tensor models.
§.§ Pairings, the Brauer algebra, propagators and projectors
In this subsection we exhibit the connection between the notions of subsections <ref> and <ref>, and
propagators in the random tensor models we study in this paper.
The relation between Brauer diagrams and pairings is straightforward, as each Brauer diagram is a pairing of 2D vertices.
Moreover, the sign η(β) that appears in the description of the action of B_D((-1)^ε N) on V^⊗ D can be expressed as the sign of two directed pairings by the following construction:
* Label the vertices in the Brauer diagram 1,2,…,D in the top row and D+1, D+2,…, 2D in the bottom row.
* Let β⃗ be the directed pairing induced by β, where edges are oriented from top to bottom, left to right in the top row and right to left in the bottom row.
* Let M⃗_ref ={(1,D+1),(2,D+2),… (D,2D)} be the reference pairing, that pairs top to bottom vertices.
* One then has: η(β)=ϵ(β⃗,M⃗_ref).
This follows from the use of (<ref>).
Moreover, (β)^a_1 … a_D, a_D+1… a_2D admits a compact form in terms of the oriented pairing β⃗:
(β)^a_1a_2… a_D, a_D+1… a_2D = ϵ(β⃗,M⃗_ref)^ε ∏_(i,j) ∈β⃗ g^a_i a_j .
In a random tensor model, with tensors of order D, living in a representation R⊂ V^⊗ D of the group O(N) or Sp(N), a propagator is an O(N)- or Sp(N)-linear map C∈End(R). As the Brauer algebra is isomorphic (for N large enough) to this space of O(N)- or Sp(N)-linear maps, each propagator is also an element of B_D((-1)^ε N).
As a consequence of Schur's lemma, if the representation R is irreducible, C is proportional to the identity on R, and if R is reducible and decomposes into a direct sum of distinct irreducible representations
R_i
(R=⊕_i=1^k R_i),
then C decomposes as well into a direct sum of maps P_i, each proportional to the identity on R_i.
Denote by P_R∈End(V^⊗ D) the orthogonal projector onto R, i.e. im(P_R)=R. The propagator can then be trivially extended to the whole space V^⊗ D as C∘P_R. Thus, reformulating the implications of Schur's lemma: if R is irreducible the propagator is proportional to the projector on R, and if R decomposes into distinct irreducible representations as above, the propagator is a linear combination of the projectors on the R_i.
When studying tensor models from a quantum field theoretical perspective, one is interested in the calculation of expectation values of the form:
⟨ f(T) ⟩ = [e^∂_T C∂_T e^-V(T) f(T) ]_T=0/[e^∂_T C∂_T e^-V(T)]_T=0 ,
where V(T) and f(T) are invariant under the group action, and ∂_T C∂_T is a short hand notation for the Laplacian-like second order differential operator:
∂_T C∂_T := ∑_a^1_1,…,a^1_D,a^2_1,…,a^2_D=1^N ∂/∂ T^a^1_1… a^1_D C^a^1_1… a^1_D, a^2_1… a^2_D∂/∂ T^a^2_1… a^2_D .
Note that indices are raised and lowered by the non-degenerate bilinear form, as usual.
In the above formulation, the tensors are elements of R, i.e. have some non-trivial symmetry. But one can as well consider every tensor T∈ R to arise from the projection of a tensor T̃∈ V^⊗ D without symmetry under permutation of its indices. Thus, if we supplement the derivative operator with the appropriate projector, only modes obeying the symmetry (tensors in R) propagate and T can be replaced by T̃:
⟨ f(T) ⟩ = [e^∂_T̃ (C P_R)∂_T̃ e^-V(T̃) f(T̃) ]_T̃=0/[e^∂_T̃ (C P_R)∂_T̃ e^-V(T̃)]_T̃=0 ,
with the convention ∂_T̃T̃=id_V^⊗ D.
§ THE GRADED TENSOR MODEL
Let T^a_1 … a_D be the components of a generic random tensor with D indices (an order D tensor). Each index of the tensor ranges from 1 to N, the tensor has thus N^D independent components.
As already mentioned above, we introduce a parameter ε, equal to 0 or 1, that defines the symmetry properties of the tensor. If ε=0, resp. ε=1, the tensor transforms in some representation R of order D of the orthogonal group O(N), resp. symplectic group Sp(N). Using Einstein summation convention the group action writes:
T^a_1 … a_D → T'^a_1 … a_D = (O_ε)^a_1_b_1 (O_ε)^a_2_b_2 … (O_ε)^a_D_b_D T^b_1 … b_D , O_ε ∈ O(N) if ε=0 , Sp(N) if ε=1 .
Moreover, the indices of the tensor are contracted using a graded symmetric form g^ε such that g^ε_a b = (-1)^ε g^ε_b a. One has:
g^ε_a b = δ_a b for ε=0 and g^ε_a b = ω_a b for ε=1 , with δ = ([ 1_N/2 0; 0 1_N/2 ]) and ω = ([ 0 1_N/2; -1_N/2 0 ]) .
Thus, the tensor components are fermionic (odd Graßmannian) if ε=1 and D is odd, and bosonic otherwise (the parity of the tensor components is εD mod 2).
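For concreteness, a short numpy sketch (our own, not from the paper) of the graded form used above, checking graded symmetry and non-degeneracy for both values of ε:

import numpy as np

def graded_form(eps, N):
    # g^0 = delta (identity), g^1 = omega (standard symplectic form); N must be even for eps = 1
    if eps == 0:
        return np.eye(N)
    half = N // 2
    return np.block([[np.zeros((half, half)), np.eye(half)],
                     [-np.eye(half), np.zeros((half, half))]])

for eps in (0, 1):
    g = graded_form(eps, 6)
    assert np.allclose(g.T, (-1) ** eps * g)              # graded symmetry g_ab = (-1)^eps g_ba
    assert np.allclose(np.linalg.inv(g) @ g, np.eye(6))   # non-degeneracy
print("graded symmetric forms verified for eps = 0, 1")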
Invariants and directed stranded graphs. By contracting indices with g^ one can build invariant polynomials in the tensor components.
Unlike the graded colored tensor models studied previously in <cit.>, two indices at different positions can now be contracted. Therefore, the invariants do not admit a graphic representation in term of directed edge colored graphs but they do admit one in terms of directed stranded graphs such that:
* each tensor is represented by a set of D nodes labeled by its indices.
* each contraction of indices is represented by a strand connecting the corresponding nodes.
We encode a directed stranded graph 𝒮⃗ with D strands by a set of nodes V(𝒮⃗) with |V(𝒮⃗)| elements, that come in groups of D, and a set of edges, called strands, E⃗(𝒮⃗), such that E⃗(𝒮⃗) is a directed pairing of V(𝒮⃗).
One often refers to the
D nodes as vertices. If two such vertices are directly connected by D strands one often refers to this collection of D strands as edge.
We also denote the undirected version of a directed stranded graph by 𝒮. Two examples are drawn in Figure <ref>.
As a shorthand notation we write a_ =(a_1, a_2, … , a_D) for the sequence of D indices.
To each directed stranded graph 𝒮⃗ is then associated an invariant whose expression reads:
I_𝒮⃗(T) = ( ∏_(i,j) ∈M⃗_ref T^a^i_𝒟 T^a^j_𝒟) ϵ (M⃗^D_ref,E⃗(𝒮⃗))^ε ∏_(k,l) ∈E⃗(𝒮⃗) g^ε_a_k a_l .
In the equation above, M⃗_ref is an arbitrary reference pairing of 2p=|V(𝒮⃗)|/D tensors. The pairing M⃗^D_ref is a directed pairing of the indices of the tensors given by the disjoint union of D copies of M⃗_ref. An illustration is given
in Figure <ref>. The term ϵ (M⃗^D_ref,E⃗(𝒮⃗))^ε is the sign of the pairing M⃗^D_ref with respect to E⃗(𝒮⃗);
this sign is defined in (<ref>).
Introducing the sign of the pairings in the expression of an invariant fixes the ambiguity induced by the graded symmetry of g^ε.
Two invariants associated to two directed versions of the same stranded graph 𝒮 are equal; I_𝒮⃗(T) is thus a class function and we can choose a single representative of 𝒮 in the action of our model (more comments on this can be found in <cit.>). As a consequence we drop the arrow in the notation if we refer to the undirected version of the graph, or if the quantity does not depend on the chosen orientation of the graph.
As one may contract indices of different positions together, there are several possible quadratic invariants. We group them into a quadratic term of the form
T^a_𝒟 C^-1_a_𝒟 b_𝒟 T^b_𝒟.
The propagator of the model is given by
C^a_𝒟 b_𝒟 = ∑_M ∈𝐌{a_𝒟 b_𝒟} γ_M ϵ(M⃗, M⃗_ref,C)^ε ∏_(i,j) ∈M⃗ g_ε^i j , γ_M∈ℝ ,
where
g_ε^a_1 b_1 denotes the components of the inverse of g^ε such that g^ε_a c g_ε^c d = δ_a^d. Moreover,
𝐌{a_𝒟 b_𝒟} is the set of non-oriented pairings on the set of 2D indices a_𝒟∪ b_𝒟 and M⃗ is a chosen oriented version of M. The pairing M⃗_ref,C is a reference pairing of the indices given by:
M⃗_ref,C = { (a_1,b_1), …, (a_D, b_D) } .
This corresponds to the case where each index of the first tensor propagates to the index at the same position in the second tensor (see Figure <ref>).
Let us emphasize that the product C P_R in (<ref>) is a particular case of the general propagator (<ref>), when C= 1 and P_R is the projector on the irreducible representation R of O(N) or Sp(N). This is explained in detail in Appendix <ref>.
As noted in subsection <ref>, C^a_𝒟 b_𝒟 is an element of the Brauer algebra B_D((-1)^ε N). Each pairing M in the sum represents a Brauer diagram and the factors γ_M are the coefficients in the linear combination. The reference pairing M⃗_ref,C coincides with M⃗_ref from subsection <ref>. Note that Brauer diagrams are conventionally read from top to bottom, whereas propagators are usually drawn from left to right.
We define the graded tensor model with symmetry R by the measure:
dμ[T] =e^-S[T] [dT], [dT]=ζ∏_a_𝒟 dT^a_1… a_D ,
with
S[T] = T^a_𝒟 C^-1_a_𝒟 b_𝒟 T^b_𝒟 + ∑_𝒮 connected,
|V(𝒮)|>2λ_𝒮/|V(𝒮)|/D I_𝒮(T) ,
and normalization ζ such that ∫ dμ[T]=1 for λ_𝒮 = 0 ∀𝒮. All tensors are
elements of the O(N) (for ε=0), resp. Sp(N) (for ε=1), representation R.
In the definition above, the constant λ_𝒮 is the coupling constant of the invariant associated to 𝒮. The partition function of this model writes:
Z= ∫ dμ[T] = [ e^∂_T C∂_T e^∑_𝒮 λ_𝒮/|V(𝒮)|/D I_𝒮(T)]_T=0 ,
where the derivative representation <cit.> of the Gaussian integral is used and ∂_T C∂_T is a short-hand notation for:
∂_T C∂_T := ∂/∂ T^a^1_𝒟 C^a^1_𝒟 a^2_𝒟 ∂/∂ T^a^2_𝒟 .
When making use of the derivative representation, as discussed in Section <ref>, we can take the tensors to have no symmetries under permutations of their indices, but instead incorporate an appropriate projector on the space R in the definition of the propagator.
§ PROOF OF THE MAIN RESULT
In this section, we prove the main theorem of our paper. We show that the partition function of the graded tensor model is invariant under the change of parameters ε→ε+1 mod 2 and N → -N. By choosing the propagator according to a given symmetry specified by the O(N) or Sp(N) representation R this implies the stated duality.
The appropriate choice of the propagator as an element of the respective Brauer algebra was discussed in Section <ref>.
From a mathematical point of view, the choice is implemented by fixing the pairing and constants γ_M in (<ref>) accordingly.
Let us first recall the commutation relation of the tensor components:
T^a_𝒟 T^b_𝒟 = (-1)^ε D T^b_𝒟 T^a_𝒟.
The Gaußian (free) expectation value ⟨ T^a^1_𝒟… T^a^2p_𝒟⟩_0 of 2p tensors whose order is encoded by M⃗_ref is defined as:
⟨ T^a^1_… T^a^2p_⟩_0
= [ e^∂_T C∂_T T^a^1_… T^a^2p_]_T=0 .
For our model, Wick's theorem
expresses this expectation as a sum over pairings of 2p elements:
⟨ T^a^1_𝒟… T^a^2p_𝒟⟩_0 = ∑_M_0 ∈𝐌_2p ϵ(M⃗_ref,M⃗_0)^ε D ( ∏_(i,j) ∈M⃗_0 C^a^i_𝒟 a^j_𝒟) .
The sign ϵ(M⃗_ref,M⃗_0)^ε D in (<ref>) takes into account the type (bosonic/fermionic) of the tensor components. The directed pairing M⃗_0 is an arbitrary oriented version of M_0, but notice that the term ϵ(M⃗_ref,M⃗_0)^ε D ( ∏_(i,j) ∈M⃗_0 C^a^i_𝒟 a^j_𝒟) is invariant under reorientation of pairs in M⃗_0.
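As a small illustration (our own code, not from the paper), the pairings M_0 that label the terms of Wick's theorem can be enumerated recursively; their number is the double factorial (2p-1)!!, as expected:

def pairings(elements):
    # all perfect matchings of a list with an even number of elements
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for k in range(len(rest)):
        for sub in pairings(rest[:k] + rest[k + 1:]):
            yield [(first, rest[k])] + sub

for p in range(1, 5):
    count = sum(1 for _ in pairings(list(range(2 * p))))
    double_factorial = 1
    for m in range(2 * p - 1, 0, -2):
        double_factorial *= m
    assert count == double_factorial
print("number of Wick pairings of 2p elements equals (2p-1)!! for p = 1,...,4")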
The Gaußian (free) expectation of an invariant I_𝒮(T) of order 2p specified by a stranded graph 𝒮 is defined as:
⟨ I_𝒮(T) ⟩_0 = [ e^∂_T C∂_T I_𝒮(T) ]_T=0 .
This expectation value can be computed by pairing the 2p groups of D vertices in 𝒮 by propagators (<ref>). We represent this pairing by edges of a new color 0, each of which again consists of D strands.
The result is a sum over 2-colored stranded graphs 𝒢, such that 𝒮⊂𝒢 is the maximal subgraph of color 1 (see Fig. <ref> for an example of such a graph).
The Gaussian expectation (<ref>) writes:
⟨ I_𝒮(T)⟩_0 = ∑_𝒢, 𝒮⊂𝒢, |V(𝒢)|=2pD γ_𝒢 ((-1)^ε N )^F(𝒢) ,
where the power of N is given by the number of faces of 𝒢. Moreover, the factor γ_𝒢 is a product of weights associated to the edges of color 0, given by the expression of the propagator in (<ref>). It writes:
γ_𝒢 = ∏_e∈ E_0(𝒢) γ_M^e ,
with E_0(𝒢) the set of edges of color 0 and M^e the pairing (Brauer diagram) defining the path of the D strands of the color 0 edge e.
Applying Wick's theorem (<ref>) to the formula of an invariant of order 2p specified by a directed stranded graph (<ref>), leads to the following form of the Gaußian expectation:
⟨ I_⟩_0 = ⟨∏_(i,j) ∈M⃗_ref T^a^i_𝒟 T^a^j_𝒟⟩_0 ϵ (M⃗^D_ref,E⃗(𝒮⃗))^(∏_(k,l) ∈E⃗(𝒮⃗) g^_ k l)
= ∑_M_0 ∈𝐌_2pϵ (M⃗^D_ref,E⃗(𝒮⃗))^ϵ( _ref,_0)^ D( ∏_(i,j) ∈_0 C^a^i_ a^j_) (∏_(k,l) ∈E⃗(𝒮⃗) g^_ k l) .
First, the dependence on the reference pairing can be eliminated using the properties (<ref>) and (<ref>) of the sign ϵ such that:
ϵ (M⃗^D_ref,E⃗(𝒮⃗))^ϵ( _ref,_0)^ D = ϵ(_0^D, E⃗(𝒮⃗))^ ,
where _0^D is the oriented pairing given by the disjoint union of D copies of _0. This pairing can be seen as taking each pairs of tensors in _0 and pairing their indices, respecting their position. An illustration can be found in Figure <ref>.
Second, we rewrite the term ( ∏_(i,j) ∈_0 C^a^i_ a^j_) using (<ref>) as:
∏_(i,j) ∈_0 C^a^i_ a^j_ = ∏_(i,j) ∈_0( ∑_M_ij∈𝐌{a^i_ a^j_}γ_Mϵ(M⃗_ij, M⃗_ref,ij,C)^∏_(m,n) ∈M⃗_ij g_^ m n)
= ∑_M_tot∈𝐌_totγ_M_totϵ(_tot,_0^D)^( ∏_(m,n)∈_tot g_^ m n) ,
where 𝐌{a^i_ a^j_} is the set of pairings of elements a^i_∪ a^j_, and 𝐌_tot is the set of pairings given by the disjoint union of all 𝐌{a^i_ a^j_} with (i,j) ∈_0.
A pairing M_tot∈𝐌_tot is therefore the disjoint union of p pairings belonging to the sets 𝐌{a^i_ a^j_}. An example is shown in Figure <ref>. Denoting these p pairings as M^1 … M^p, the factor γ_M_tot is equal to:
γ_M_tot= ∏_x=1^pγ_M^x .
We used here the fact that, by construction, the disjoint union of the M_ref,ij,C is equal to _0^D.
This comes from the fact that both contract the indices of a pair of tensors present in _0, respecting the position of indices (see Figure <ref>).
Inserting (<ref>) and (<ref>) in (<ref>), we obtain:
⟨ I_𝒮⃗⟩_0 = ∑_ M_0 ∈𝐌_2p
M_tot∈𝐌_totγ_M_totϵ(_0^D, E⃗(𝒮⃗))^ϵ(_tot,_0^D)^( ∏_(m,n)∈_tot g_^ m n) (∏_(k,l) ∈E⃗(𝒮⃗) g^_ k l) .
This further leads to using property (<ref>):
⟨ I_⟩_0 = ∑_ M_0 ∈𝐌_2p
M_tot∈𝐌_totγ_M_totϵ(_tot, E⃗(𝒮⃗))^( ∏_(m,n)∈_tot g_^ m n) (∏_(k,l) ∈E⃗(𝒮⃗) g^_ k l) .
Adding oriented edges of a new color 0 to 𝒮, according to M⃗_tot, yields a 2-color directed stranded graph 𝒢⃗. We define a face in 𝒢 as a closed cycle of strands of alternating colors. Along a face, g_ε and its inverse alternate and all indices are summed; therefore, each face contributes a factor N. However, because of the graded symmetry g_ε^a c = (-1)^ε g_ε^c a, a face also picks up a factor (-1)^ε if an odd number of strands point in one of the two directions around the face; we call such a face odd, and otherwise a face is called even. The term ( ∏_(m,n)∈M⃗_tot g_ε^m n) (∏_(k,l) ∈E⃗(𝒮⃗) g^ε_k l) thus contributes:
( ∏_(m,n)∈M⃗_tot g_ε^m n) (∏_(k,l) ∈E⃗(𝒮⃗) g^ε_k l) = (-1)^ε F_odd(𝒢⃗) N^F(𝒢)
Using property (<ref>) we also rewrite the term ϵ(M⃗_tot, E⃗(𝒮⃗))^ε as:
ϵ(M⃗_tot, E⃗(𝒮⃗))^ε = (-1)^ε F_even(𝒢⃗) ,
where F_even(𝒢⃗), resp. F_odd(𝒢⃗), denotes the number of even, resp. odd, faces of 𝒢⃗ and F(𝒢) = F_odd(𝒢⃗) + F_even(𝒢⃗) is the total number of faces of 𝒢, which does not depend on any chosen orientation.
The expectation value ⟨ I_⟩_0 can thus be evaluated as a sum over 2-colored stranded graphs :
⟨ I_𝒮⟩_0 = ∑_𝒢, 𝒮⊂𝒢, |V(𝒢)|=2pD γ_M_tot ((-1)^ε N )^F(𝒢) .
This concludes the proof.
Each term in (<ref>) is invariant under the transformation:
ε→ε+1 mod 2 , N → -N .
Thus, this transformation does not affect the Gaußian expectation value of any invariant nor the amplitude of its graphs and is hence a duality of our model.
The invariance of the partition function under the duality follows directly from the above statement, using a perturbative expansion of the interaction part of the action:
Z = [e^∂_T C∂_T ∑_{p_𝒮≥ 0} ∏_𝒮 1/p_𝒮! ( λ_𝒮/|V(𝒮)|/D I_𝒮(T) )^p_𝒮]_T=0
= ∑_{p_𝒮≥ 0} ∏_𝒮 1/p_𝒮! ( λ_𝒮/|V(𝒮)|/D)^p_𝒮 ⟨∏_𝒮 I_𝒮(T)^p_𝒮⟩_0 .
Since any product of invariants is a single disconnected invariant, the factor ⟨∏_𝒮 I_𝒮(T)^p_𝒮⟩_0 is invariant under the duality (<ref>). Hence the partition function of the model is invariant under (<ref>).
As usual, expectation values of invariants are calculated by taking derivatives of ln Z with respect to the couplings λ_𝒮 (see, for example, <cit.>). Diagrammatically, the derivative marks a 1-colored stranded subgraph of type 𝒮, and this leads to the conclusion that the expectation value of an invariant can be expressed as a formal sum over 2-colored stranded graphs (see again <cit.>).
§ ILLUSTRATION: TOTALLY SYMMETRIC AND ANTISYMMETRIC TENSOR MODELS
In this section, we exhibit the general duality result proved in the previous section for the particular case of totally symmetric and antisymmetric tensor models.
§.§ O(N) tensor models
The vector space V is, in this case, an ordinary even (bosonic) N-dimensional real vector space and the tensor product space V^⊗ D is an even vector space ∀ D∈ℕ.
As already explained above, the grading parameter now takes the value ε=0.
To the GL(N) representation of totally symmetric tensors Sym^D(V) of order D is associated the following Young diagram:
λ_S = [a single row of D boxes], i.e. λ_S=(D).
The corresponding Young symmetrizer is: c_S=∑_σ∈𝔖_D σ.
The projector on the O(N) representation of traceless symmetric tensors is (see <cit.>):
P_D,N^(λ_S) = ∏_f=1^⌊ D2⌋( 1- A_D/(N+2(D-f-1))f) .
This
is a restricted version
of the universal traceless projector (<ref>) and it removes the trace modes after restriction to symmetrized tensors.
As an element of the Brauer algebra B_D(N), a propagator (C in Def. <ref>) of a symmetric O(N) tensor model is proportional to the projector:
P_D,N^(λ_S)c_S/D! .
The Brauer algebra acts on tensors by permuting and contracting their indices (see again subsection <ref>).
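A quick numerical illustration (our own, for D=2 and ε=0, where the product above reduces to the single factor 1 - A_2/N): the projector (1 - A_2/N) c_S/2! applied to a random matrix yields a symmetric traceless tensor and is idempotent.

import numpy as np

N = 5
rng = np.random.default_rng(2)
T = rng.standard_normal((N, N))

def project(T):
    sym = 0.5 * (T + T.T)                          # c_S / 2!
    return sym - np.eye(N) * np.trace(sym) / N     # (1 - A_2 / N)

PT = project(T)
print(np.allclose(PT, PT.T), np.isclose(np.trace(PT), 0.0))  # True True: symmetric and traceless
print(np.allclose(project(PT), PT))                          # True: idempotent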
To the GL(N) representation of totally antisymmetric tensors ⋀^D(V) of order D is associated the Young diagram:
λ_∧ = [a single column of D boxes], i.e. λ_∧=(1,…,1).
The corresponding Young symmetrizer is: c_∧=∑_σ∈𝔖_D sgn(σ) σ.
A totally antisymmetric O(N) tensor is automatically traceless, i.e. ⋀^D(V) is already an irreducible O(N) representation.
Thus a propagator of an antisymmetric O(N) tensor model, as an element of B_D(N), is proportional to the projector:
c_∧/D! ,
that acts by antisymmetrizing all indices.
§.§ Sp(N) tensor models
In this case, the N-dimensional vector space V is an odd (fermionic) real super-vector space and order D tensors are bosonic if D is an even integer and fermionic if D is odd.
Therefore the grading parameter now takes the value ε=1.
The representation of the dual tensor model with Sp(N) symmetry is obtained by transposing the Young diagram: λ_∧^'=λ_S.
Therefore, the dual model to the symmetric traceless O(N) tensor model is the antisymmetric traceless Sp(N) tensor model. The projector onto this representation is given by:
P_D,-N^(λ_S) = ∏_f=1^⌊ D2⌋( 1- A_D/(N+2(D-f-1))f) ,
seen as an element of B_D(-N) that differs from (<ref>) by the sign of N.
Recall the difference in the action of β when ε=1 instead of ε=0: the action of β on tensors differs by a factor η(β)=(-1)^m, where
m is the minimal number of crossings in β. If β is a permutation, we have η(β)=sgn(β) and thus the Young symmetrizer c_S∈ B_D(-N) acts by antisymmetrizing the indices of a tensor, whereas c_S∈ B_D(N) acts by symmetrization.
The dual model to the antisymmetric O(N) tensor model contains tensors transforming in the symmetric representation of Sp(N). Note that these tensors are also automatically traceless:
ω_a_ia_j T^a_1… a_i… a_j … a_D =0 ,
because of the antisymmetry of the symplectic form. Thus, the projector is equal to <ref>, but regarded as an element of B_D(-N), and acts by symmetrizing the tensors.
The diagrammatic (Feynman-type) expansions of a model and its dual contain exactly the same stranded graphs, but while the amplitude of a stranded graph picks up a factor N for each face in the O(N) models, in the Sp(N) models each face contributes a factor -N.
For example, the graph in Figure <ref> has three faces and thus
contributes as N^3 in an O(N) model, but -N^3 in an Sp(N) model.
§ PROJECTOR OF TENSORS WITH IRREDUCIBLE SYMMETRY
Let us now consider an irreducible representation of O(N) or Sp(N) given by the Young tableau λ. The projector on this space of tensors is given by the product of the Young symmetrizer c_λ with the traceless projector 𝔓_D:
P_R^a_1 … a_D a_D+1… a_2D = (c_λ·𝔓_D)^a_1 … a_D a_D+1… a_2D
= ∑_α non-zero
eigenvalue of A_D
σ∈ P_λ , τ∈ Q_λ (sgn(τ) σ·τ - sgn(τ)/α σ·τ· A_D)^a_1… a_D a_D+1… a_2D
The products of the elements σ, τ and A_D of the Brauer algebra lead to ϕ and χ, which are elements of B_D((-1)^ε N). We thus rewrite the terms present in P_R as
∑_α non-zero eigenvalue of A_D, σ∈ P_λ , τ∈ Q_λ sgn(τ) σ·τ = ∑_ϕ∈ P_λ· Q_λ γ_ϕ ϕ ,
∑_α non-zero eigenvalue of A_D, σ∈ P_λ , τ∈ Q_λ (-sgn(τ)/α) σ·τ· A_D = ∑_χ∈ P_λ· Q_λ·𝔹 γ_χ χ ,
where 𝔹 is the set of elements β_ij of the Brauer algebra (see (<ref>)), and γ_ϕ, resp. γ_χ, are factors taking into account the fact that different products of σ and τ may lead to the same ϕ, resp. χ. We then use expression (<ref>) to write:
P_R^a_1 … a_D a_D+1… a_2D = ∑_ϕ⃗∈𝕄_ϕ γ_ϕ ϵ(ϕ⃗,M⃗_ref)^ε ∏_(i,j) ∈ϕ⃗ g^a_i a_j + ∑_χ⃗∈𝕄_χ γ_χ ϵ(χ⃗,M⃗_ref)^ε ∏_(i,j) ∈χ⃗ g^a_i a_j ,
where the sums over the elements ϕ and χ of B_D((-1)^ε N) are replaced by sums over their associated oriented pairings ϕ⃗ and χ⃗.
Thus, the projector P_R is shown to be a particular case of the general propagator (<ref>).
*Acknowledgements.
The authors warmly acknowledge Răzvan Gurău for useful discussions at various
stages of this research project.
T. K., T. M. and A. T. have been partially supported by the ANR-20-CE48-0018 “3DMaps” grant and by the PHC Procope program "Combinatorics of random tensors".
A. T. has been partially supported by the PN
23210101/2023 grant. H. K. has been supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC–2181/1 – 390900948 (the Heidelberg STRUCTURES Cluster of Excellence), and his mobilities were partially supported in the form of PPP France (DAAD). The authors further acknowledge support from the Institut Henri Poincaré (UAR 839 CNRS-Sorbonne Université), and LabEx CARMIN (ANR-10-LABX-59-01), where this project initiated, during the “Quantum gravity, random geometry and holography” trimester.
|
http://arxiv.org/abs/2307.03388v1
|
20230707045834
|
General-Purpose Multimodal Transformer meets Remote Sensing Semantic Segmentation
|
[
"Nhi Kieu",
"Kien Nguyen",
"Sridha Sridharan",
"Clinton Fookes"
] |
cs.CV
|
[
"cs.CV"
] |
General-Purpose Multimodal Transformer meets Remote Sensing
Semantic Segmentation
Nhi Kieu
Queensland University of Technology
[email protected]
Kien Nguyen
[email protected]
Sridha Sridharan
[email protected]
Clinton Fookes
[email protected]
==========================================================================================================================================================================================
The advent of high-resolution multispectral/hyperspectral sensors, LiDAR DSM (Digital Surface Model) information and many others has provided us with an unprecedented wealth of data for Earth Observation. Multimodal AI seeks to exploit those complementary data sources, particularly for complex tasks like semantic segmentation. While specialized architectures have been developed, they are highly complicated via significant effort in model design, and require considerable re-engineering whenever a new modality emerges. Recent trends in general-purpose multimodal networks have shown great potential to achieve state-of-the-art performance across multiple multimodal tasks with one unified architecture. In this work, we investigate the performance of PerceiverIO, one in the general-purpose multimodal family, in the remote sensing semantic segmentation domain. Our experiments reveal that this ostensibly universal network does not effectively capture the interactions between different modalities of interest in remote sensing arena. Furthermore, the network struggles with object scale variation in remote sensing images and fails to detect the presence of smaller objects such as cars from a top-down view. To address these issues, we propose a spatial and volumetric learning component, which employs 3D convolutions with an UNet configuration to encode vital local information and learn cross-modal features simultaneously, while reducing network computational burden via the cross-attention mechanism of PerceiverIO. The effectiveness of the proposed approach is validated through extensive experiments comparing it with other methods such as 2D convolution, and dual local module (the combination of Conv2D 1×1 and Conv2D 3×3 inspired by UNetFormer). The proposed method significantly improves the performance of PerceiverIO, and provides competitive performance against specialized architectures like UNetFormer and SwinUNet, showing its potential to minimize network architecture engineering with a minimal compromise on the performance. Code and data will be available at https://github.com/nhikieu/SpatialVolumetricMultimodal.
§ INTRODUCTION
Semantic segmentation of remote sensing imagery refers to the task of categorizing each pixel of an image into a specific class or object to produce a dense pixel-wise segmentation map. Semantic segmentation models with good performance are crucial for the practical application of high-resolution remote-sensing images such as land cover mapping, traffic monitoring and urban management.
However, designing remote-sensing semantic segmentation models such as UNetFormer <cit.> usually requires a significant amount of time, effort and domain knowledge. Moreover, adding new modalities with different structures makes the network subject to heavy re-engineering.
General-purpose transformers provide a new direction to model design by a unified architecture capable of handling different types of data in the same way. General-purpose transformers such as PerceiverIO <cit.> can achieve competitive performance in multiple tasks compared to state-of-the-art domain-specific approaches.
While showing considerable promise in multimodal tasks such as learning joint representations of video, audio, and labels, the performance of these general-purpose transformers in multimodal geospatial settings has not been verified. This paper investigates the effectiveness of these techniques in multimodal settings for geospatial tasks. We apply PerceiverIO <cit.> to the multimodal semantic segmentation task of very-high-resolution remote sensing and compare its performance with the state-of-the-art domain-specific approach UNetFormer <cit.>. Our first observation is that the PerceiverIO performs poorly on segmenting small objects such as cars. In particular, in the Vaihingen <cit.> and Potsdam <cit.> datasets, PerceiverIO failed to detect cars. Our second observation is that PerceiverIO does not effectively fuse data from different modalities that are typically processed in remote sensing settings. The poor performance is firstly due to weak spatial encoding, especially of local information. Secondly, interactions between different modalities are not captured well enough to discriminate between classes. We experiment with multiple configurations and propose a volumetric-aware module to address these issues. <ref> demonstrates the effectiveness of the proposed methods in detecting small objects like cars.
Contributions: our main contributions in this paper are:
* Contribution 1: Propose a convolution-based preprocessing component to help with small objects detection
* Contribution 2: Propose a volumetric-aware preprocessing component to better exploit the synergies across different modalities
The remainder of the paper is organized as follows. Section II discusses related work. Section III describes our proposed methodology. Section IV presents our datasets, experimental setup, and experimental results. The paper is concluded in Section V.
§ RELATED WORK
This section discusses related work in general semantic segmentation architectures, specialized semantic segmentation in remote sensing, and general-purpose multimodal architectures.
§.§ Semantic Segmentation Architecture
UNet <cit.> is a convolutional architecture <cit.> that has been proven to be effective in general image semantic segmentation even though originally developed for the biomedical field. The encoder and decoder branches are independent allowing practitioners to experiment with different combinations of backbones. Hence, the idea is still widely used by the computer vision community today with more advanced backbones such as TransUNet <cit.> and SwinUNet <cit.>. TransUNet, for medical image segmentation, showed that Transformer can be a strong encoder while CNN remains a solid feature extractor and decoder. CNNs remain dominant in the computer vision community partially thanks to their ability of multiscale learning by progressively increasing receptive field. SwinUNet applying Swin Transformer <cit.> with sliding window mechanism aims to achieve the same goal. Skip connection is an important element in UNet-like architecture, which seeks to semantically join features learnt from multiscale between encoder and decoder. However, UCTransNet <cit.> pointed out that there is a huge semantic gap between the encoder and decoder. Especially with a hybrid structure where the encoder and decoder are totally different in nature, the gap is even more significant. Therefore, in this work, we lean towards exploring pure transformer architecture. SegFormer <cit.> and DC-Swin <cit.> demonstrated that a pure attention model can extract multiscale semantic features just as well as convolutional models. In this work, we adapted SwinUNet to multimodal data to understand the performance of state-of-the-art general semantic segmentation architectures on remote sensing data.
§.§ Specialized Architecture in Remote Sensing
UNetFormer <cit.> is the current state-of-the-art architecture, specialized for remote sensing data. However, the original paper only reported results on unimodal input. Its main contribution lies in the proposal of the Feature Refinement Head and Global Local Transformer Block components in the decoder branch. In both of these, a channel path is used in conjunction with a spatial path. Even though it was not explicitly explained in the paper why such a design was used, we speculate that it is an attempt to capture cross-channel features in addition to spatial features. In this work, we adapted UNetFormer to multimodal data and experimented with integrating the idea of a dual local branch into a general-purpose architecture like PerceiverIO.
We also observe that top winners from the IEEE Data Fusion Contest 2018 (DFC2018) <cit.> have reported an early effort in multimodal learning. Independent branches are created for different modalities. For example, the runner-up in the contest used independent predictors for different classes. Also, heavy post-processing is required to boost performance. Since then, the potential of multimodal is gradually appreciated by the remote sensing community. Specialized architectures for tasks in geospatial settings have grown increasingly complex, pushing the boundaries of performance. The multi-stream topology is dominant within this landscape where modalities are encoded in separate branches and fused by advanced modules. While these specialized networks <cit.> and <cit.> achieve high performance, they are largely not generalizable and will require heavy re-engineering when a new modality emerges.
§.§ Multimodal General-Purpose Architecture
Recently, parallel to the development of specialized multimodal architectures, more attention is given to general frameworks such as MultiMAE (Multi-modal Multi-task Masked Autoencoders) <cit.> and GPNA (General-Purpose Neural Architecture) with geospatial inductive bias <cit.>. These studies prove that we can use just one unified Transformer-based encoder to learn features from different modalities offering a greater degree of flexibility. PerceiverIO <cit.> is an important member in this realm. It has demonstrated advantages over convolutional networks and self-attention mechanisms. Cross-attention mechanism transforms the quadratic problem into a linear problem where high resolution and high dimensional inputs can be mapped to a much smaller latent space. It also claims that the network makes little assumption about the nature of the data achieving the general-purpose goal. However, in this work, we revealed its shortcomings when it is applied to remote sensing data. Specifically, it fails to detect small objects like cars from top orthogonal inputs. In addition, it struggles to fuse information across modalities.
§ METHODOLOGY
This section describes our two key contributions to address the issues when applying the general-purpose multimodal PerceiverIO to remote sensing data.
§.§ Contribution 1
Through our empirical experiments, it appears that the default PerceiverIO architecture with either the fixed Fourier or the learnable positional embeddings <cit.> fails to perform segmentation on small objects such as cars. Even if we leverage the pretrained positional embeddings on ImageNet, the situation doesn't improve. We suspect that the model is missing crucial local information. Therefore, we first introduce an extra 2D Conv layer in the preprocessing step before feeding the inputs to the cross-attention head of the PerceiverIO. We immediately saw a huge improvement of PerceiverIO. It can detect cars, which was impossible for the default PerceiverIO.
To put more focus on spatial information and locality, we constructed a UNet-like module (<ref>) using several 2D Conv layers to capture more local details. As expected, there is a pronounced performance boost. However, the model can only detect very bright-colored cars (yellow, red, white) and ignores darker-colored cars (purple, gray, black). We suspected that the prediction was greatly dependent on the color channels and did not take into account the complementary features from the other modality, nDSM (normalised DSM). That leads us to the second contribution.
§.§ Contribution 2
To improve the interaction among input modalities (RGB, DSM, SAR) in the remote sensing setting, we first propose a dual local branch preprocessing module. The module has two local branches: one branch uses Conv 1×1 and the other uses Conv 3×3. This is inspired by the GLTB (Global Local Transformer block) and the FRH (Feature Refinement Head) in the UNetFormer architecture <cit.>. In their GLTB local branch, in order to decode features, one branch uses Conv 1×1 and the other uses Conv 3×3. In their FRH, one branch is called the channel path, using Global Average Pool and Reduce/Expand operations, and the other is named the spatial path, using depth-wise Conv 3×3. Even though these design decisions were not explicitly explained by the authors, they could be interpreted as an attempt to fuse spatial and channel-wise features. Inspired by this, we propose a dual local branch within our UNet-like module as shown in <ref>.
To further improve the interaction among input modalities, we propose a Conv3D-based volumetric-aware module. The key intuition here is that using 3D convolutions enables us to learn the interaction rather than hard-coding it in the network architecture. 3D convolutional kernels can be learned to effectively fuse different input modalities for semantic segmentation. We kept the UNet-inspired design that had been working well and used 3D Conv layers to learn spatial and channel-wise features simultaneously. We observed that 3D Conv works particularly well in this situation, reflecting the volumetric nature of multimodal data, even though it has not been widely applied. <ref> illustrates the design of our preprocessing module using 3D Conv. To ensure that the global information is not thrown away in the preprocessing step while trying to retain local information, we use a multiscale architecture in both the extractor and decoder branches, which can help minimize a well-known limitation of convolution operations. In the extractor line, there are three blocks of stacked 3×3 3D Conv followed by a 3D Maxpool operation (except for the final block). The number of filters increases as the component goes deeper. In the decoder line, the final representation is then upsampled twice by 3D Conv Transpose operations. After every upsampling, features from higher levels in the extractor line are concatenated and passed through another 3×3 3D Conv. Finally, channels and depth are combined and re-projected using a 1×1 2D Conv, resulting in a preprocessed input that is ready to be passed through the PerceiverIO network.
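A simplified PyTorch sketch of such a volumetric preprocessing module is given below. It follows the structure described above (three Conv3D encoder blocks with max pooling, two ConvTranspose3D upsampling stages with skip connections, and a final 1×1 Conv2D re-projection), but the channel widths, activation function, the choice to pool only over the spatial axes, and the assumption of five stacked modalities are our own illustrative choices, not the exact configuration used in the experiments:

import torch
import torch.nn as nn

class VolumetricPreprocessor(nn.Module):
    def __init__(self, base=16, out_channels=64, n_modalities=5):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(nn.Conv3d(cin, cout, 3, padding=1), nn.GELU(),
                                 nn.Conv3d(cout, cout, 3, padding=1), nn.GELU())
        self.enc1 = block(1, base)
        self.enc2 = block(base, 2 * base)
        self.enc3 = block(2 * base, 4 * base)
        self.pool = nn.MaxPool3d((1, 2, 2))              # pool spatial axes, keep the modality depth axis
        self.up2 = nn.ConvTranspose3d(4 * base, 2 * base, (1, 2, 2), stride=(1, 2, 2))
        self.dec2 = block(4 * base, 2 * base)
        self.up1 = nn.ConvTranspose3d(2 * base, base, (1, 2, 2), stride=(1, 2, 2))
        self.dec1 = block(2 * base, base)
        self.project = nn.Conv2d(base * n_modalities, out_channels, 1)  # combine depth and channels

    def forward(self, x):                 # x: (B, n_modalities, H, W) = stacked modalities
        x = x.unsqueeze(1)                # (B, 1, n_modalities, H, W): modalities become the depth axis
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        b, c, d, h, w = d1.shape
        return self.project(d1.reshape(b, c * d, h, w))   # (B, out_channels, H, W), fed to PerceiverIO

x = torch.randn(2, 5, 64, 64)
print(VolumetricPreprocessor()(x).shape)   # torch.Size([2, 64, 64, 64])

Treating the stacked modalities as a depth axis is what lets the 3D kernels mix spatial and cross-modal information in a single operation.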
§ EXPERIMENTAL RESULTS
This section presents our datasets, experimental setup, and experimental results.
§.§ Datasets
Vaihingen: The Vaihingen dataset <cit.> from the International Society for Photogrammetry and Remote Sensing (ISPRS) contains remote sensing data of the Vaihingen region in Germany. It has two modalities: true orthophoto (TOP) and Digital Surface Model (DSM). The TOP modality has three bands RGIR: red, green, and near infrared. The DSM modality is converted from the 3D LiDAR. It contains 33 large image tiles of different sizes with a GSD of 9 cm. Dense ground truth masks are provided for training and testing.
Potsdam: The Potsdam dataset <cit.>, also from the ISPRS, contains remote sensing data of the Potsdam region in Germany. The data set contains 38 patches of the same size, each consisting of a true orthophoto (TOP) and a DSM. The ground sampling distance of both, the TOP and the DSM, is 5 cm. Different to Vaihingen, Potsdam's TOP modality has four bands RGBIR: red, green, blue, and near infrared.
It's worth noting that both datasets are heavily imbalanced as shown in <ref>, which makes it very challenging for the network to pick up already hard-to-learn small objects like cars.
MMFlood: MMFlood is a multimodal dataset used for flood monitoring and analysis. It includes data from Synthetic Aperture Radar (SAR - VV and VH channels), Hydrography and DEM (Digital Elevation Model). However, this dataset is very challenging because of two major issues: (1) more than half of the hydrography information is missing for training, and (2) there is severe class imbalance between flood areas and the background.
§.§ Experimental Setup
Selected tiles for train, validation and test are as specified on ISPRS data portal <cit.>. For training purposes, from 15 large tiles of varying dimensions provided by the Vaihingen dataset, we generated 1,620 samples of size 512×512. Similarly, we created 3,466 samples of size 512×512 for the Potsdam dataset from 22 large tiles with diverse dimensions. Specifically, tiles with the following IDs are used: (1) Vaihingen: Train [1,3,5,7,11,13,15,17,21,23,26,28,32,34,37], Validate [30], Test [2,4,6,8,10,12,14,16,20,22,24,27,29,31,33,35,38]; (2) Potsdam: Train ['2_11', '2_12', '3_10', '3_11', '3_12', '4_10', '4_11', '4_12', '5_10', '5_11', '5_12', '6_7', '6_8', '6_9', '6_10', '6_11', '6_12', '7_7', '7_8', '7_9', '7_11', '7_12'], Validation ['2_10'], Test ['2_13', '2_14', '3_13', '3_14', '4_13', '4_14', '4_15', '5_13', '5_14', '5_15', '6_13', '6_14', '6_15', '7_13']
Multimodal data is introduced to the selected networks by stacking modalities on top of each other. For the Vaihingen dataset, the multimodal input has a shape of (512, 512, 5), where the final dimension includes Red-Green-NearInfrared, nDSM (normalised DSM), and NDVI (Normalized Difference Vegetation Index, derived from the R-G-IR channels). Similarly, Potsdam has the same multimodal input shape of (512, 512, 5); however, the final dimension is the combination of R-G-B-IR and nDSM. On the other hand, to tackle class imbalance issues, we experimented with different loss functions (<ref>). We found that a joint loss of Dice <cit.> and soft cross-entropy without class weights performs the best. This joint loss function was applied in all reported experiments.
L = L_dice + L_ce
L_dice = 1 - 2/N ∑^N_n=1 ∑^K_k=1 (ŷ^n_k y^n_k)/(ŷ^n_k + y^n_k)
L_ce = - 1/N∑^N_n=1∑^K_k=1y^n_k logŷ^n_k
where N is the number of samples and K is the number of classes. y^n_k is the one-hot encoding of the true segmentation label of sample n for class k. ŷ^n_k is the confidence of sample n belonging to class k (the corresponding softmax output from the network).
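A compact PyTorch sketch (ours) of this joint loss, without class weights; the exact implementation used in the experiments may differ (e.g. in smoothing constants or library choice):

import torch
import torch.nn.functional as F

def joint_loss(logits, target, eps=1e-6):
    # logits: (B, K, H, W); target: integer class labels of shape (B, H, W)
    ce = F.cross_entropy(logits, target)                           # cross-entropy term
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes=logits.shape[1]).permute(0, 3, 1, 2).float()
    intersection = (probs * onehot).sum(dim=(0, 2, 3))
    cardinality = (probs + onehot).sum(dim=(0, 2, 3))
    dice = 1.0 - (2.0 * intersection / (cardinality + eps)).mean() # Dice term, averaged over classes
    return dice + ce

logits = torch.randn(2, 6, 32, 32, requires_grad=True)
target = torch.randint(0, 6, (2, 32, 32))
print(joint_loss(logits, target))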
In terms of evaluation metrics, class-wise F1 score (Dice Coefficient), mIoU (mean Intersection over Union), and Average Accuracy are used. They are calculated using the following equations:
F1 = 2 × Precision × Recall/Precision + Recall
IoU = Area of Overlap/Area of Union
AA = 1/C∑^C_i=1N^i_c/N^i_a
where for each class i, N^i_c is the number of samples classified correctly and N^i_a is the total samples.
§.§ Experimental Results
<ref> shows that our proposed approaches result in a pronounced performance boost for PerceiverIO; in particular, they resolve the problem with the car class. The results also show that UNet is an effective architecture for feature encoding, which encodes information through multiple scales and aggregates those features. As indicated in <ref>, the last three methods, which employ a UNet-like architecture, yield superior performance. It is also worth noting that the UNet-like 2D convolution module is only effective up to a certain point: when we increase it to a three-stage module instead of the previous two-stage module, the result is worse.
<ref> demonstrates the effectiveness of the proposed methods compared to the original PerceiverIO. From the first row, it is clear that not only is the prediction of cars significantly improved, but the overall prediction is also more realistic, devoid of obvious edge issues (fewer misclassified pixels at the instances' boundaries). From the second row, the integration of Conv3D improves the network's ability to handle dark-colored cars and reduces prediction noise in shaded areas.
Our proposed component - local spatial and volumetric encoding - allows a multimodal, general-purpose architecture like PerceiverIO to yield highly competitive results when compared to remote sensing specialized networks like UNetFormer and segmentation specialized networks like SwinUNet on both the Potsdam and Vaihingen datasets (<ref> and <ref>). When applied to a different dataset - MMFlood <cit.>, the three models perform very similarly; however, PerceiverIO with our proposed volumetric component and SwinUNet slightly outperform UNetFormer (<ref>). MMFlood dataset is a multimodal collection of remote sensing data focused on flood monitoring and analysis. It includes data from synthetic aperture radar (SAR), and hydrography and DEM (Digital Elevation Model). However, because more than half of the hydrography modality is missing at training, it is excluded in this study.
<ref> presents several examples of semantic segmentation on the Potsdam dataset. The result is consistent with the observation in the Vaihingen dataset. The incorporation of our proposed volumetric preprocessing (UNet-inspired Conv3D) ameliorated the issue with the car class to some extent. However, we have to acknowledge that, while improved, PerceiverIO's performance is still not as precise as the specialized architectures like SwinUNet and UNetFormer, which opens opportunity for future research. Besides, a noteworthy point is SwinUNet assumes a grid-like structure for the input. Its performance hinges on a judicious choice of window size. As demonstrated in <ref>, the predictions made by SwinUNet are pixelated at the boundaries, resulting in a less smooth segmentation map compared to that generated by the PerceiverIO.
§.§ Ablation Study
To arrive at the optimal loss function, some advanced/specialized options are experimented with. They are Focal Tversky Loss <cit.>, Asymmetric Unified Focal Loss <cit.>. However, in this case, they aren't effective because of the challenge of the object scale variation in the scene on top of the severe class imbalance issue. Assigning class weight is another option; nevertheless, it isn't easy to tune and is counter-intuitive if we want to develop a general-purpose architecture that can be applicable to different datasets. Hierarchical Perceiver (HiP) <cit.> - a successor of PerceiverIO, which claims to have multiscale learning power, was explored; however, with limited data, it performed worse than PerceiverIO. We tried different positional encoding scheme suggested by HiP in an attempt to capture local features. They are Fixed Fourier 2D positional embedding, learnable positional embedding, and fine-tuned positional embeddings that were pretrained on ImageNet; however, none could resolve the issue with car detection.
§ CONCLUSION
In this study, we proposed integrating a spatial and volumetric component into a multimodal general-purpose architecture (PerceiverIO). It helps overcome the challenge of object scale variation under severe class imbalance conditions. Moreover, our experiments demonstrated the effectiveness of the UNet-inspired architecture in extracting multiscale features. The baselines we used for performance comparison are specialized architectures in the multimodal context (UNetFormer and SwinUNet). Our proposed method, which deploys multiple layers of 3D convolutions while maintaining computing efficiency via the cross-attention mechanism, provides competitive semantic segmentation results on the Vaihingen, Potsdam and MMFlood datasets. However, the development of multimodal general-purpose AI for semantic segmentation is still hindered by the expense of acquiring high-quality pixel-level annotations. In future work, we will introduce self-supervised and weakly-supervised learning approaches to leverage existing sparse data labels.
|
http://arxiv.org/abs/2307.02369v1
|
20230705153320
|
Interpolating Between the Gauge and Schrödinger Pictures of Quantum Dynamics
|
[
"Sayak Guha Roy",
"Kevin Slagle"
] |
quant-ph
|
[
"quant-ph",
"cond-mat.str-el"
] |
Interpolating Between the Gauge and Schrödinger Pictures of Quantum Dynamics
Sayak Guha Roy1*,
Kevin Slagle2^†
1 Department of Physics and Astronomy, Rice University, Houston, Texas 77005, USA
2 Department of Electrical and Computer Engineering, Rice University, Houston, Texas 77005 USA
* [email protected]
† [email protected]
August 1, 2023
§ ABSTRACT
Although spatial locality is explicit in the Heisenberg picture of quantum dynamics, spatial locality is not explicit in the Schrödinger picture equations of motion. The gauge picture is a modification of Schrödinger's picture such that locality is explicit in the equations of motion. In order to achieve this explicit locality, the gauge picture utilizes (1) a distinct wavefunction associated with each patch of space, and (2) time-dependent unitary connections to relate the Hilbert spaces associated with nearby patches. In this work, we show that by adding an additional spatially-local term to the gauge picture equations of motion, we can effectively interpolate between the gauge and Schrödinger pictures, such that when this additional term has a large coefficient, all of the gauge picture wavefunctions approach the Schrödinger picture wavefunction (and the connections approach the identity).
§ INTRODUCTION
The dynamics of a physical system is explicitly spatially local if
the degrees of freedom are local (i.e. can be associated with a position in space)
and if the time dependence of the degrees of freedom only depends on sufficiently nearby degrees of freedom.
In the Schrödinger picture of quantum dynamics,
the wavefunction is the only time-dependent degree of freedom.
But the wavefunction is not a local degree of freedom;
it is a global degree of freedom since it can not be associated with a particular position in space.
As such, the Schrödinger picture does not exhibit explicit locality.
In contrast, the Heisenberg picture does exhibit explicit locality <cit.>
since the time dependence of local operators only depends on nearby local operators (for local Hamiltonians).
Since locality is of fundamental importance to theoretical physics,
a modified version of the Schrödinger picture <cit.> was recently developed such that locality is explicit in the equations of motion.
To formulate this new picture, we first choose a set of local patches (indexed by capital letters I, J, or K) of space (or the lattice),
as depicted in fig:patches.
In the simplest setting, the patches can be taken to be the spatial support of the different Hamiltonian terms.
A distinct local wavefunction |ψ_J⟩ is associated with each patch J.
Furthermore, the Hilbert spaces of nearby patches (I and J) are related by time-dependent unitary transformations U_IJ.
These unitary transformations resemble gauge connections in a lattice gauge theory (while the local wavefunctions resemble Higgs fields <cit.>),
which motivates the name “gauge picture” for this picture of quantum dynamics.
The equations of motion in the gauge picture are given by
∂_t|ψ_I⟩ = -iH_⟨ I ⟩^G|ψ_I⟩
∂_tU_IJ = -iH_⟨ I ⟩^G U_IJ + i U_IJH_⟨ J ⟩^G
where H_⟨ I ⟩^G is the sum of the Hamiltonian terms on all patches that overlap with patch I,
H_⟨ I ⟩^G = ∑_J^J∩ I ≠ 0 U_IJ H_J^G U_JI
and H_J is a Hamiltonian term supported on patch J,
such that the Hamiltonian of the entire system is H= ∑_J H_J.
We use S, H, and G superscripts to distinguish time-dependent operators in the Schrödinger, Heisenberg, and gauge pictures respectively.
Time-independent operators in the Schrödinger picture are also time-independent in the gauge picture.
The local wavefunctions |ψ_I⟩ are local in the sense that their time dynamics only depends on nearby degrees of freedom (i.e. |ψ_I⟩ and U_IJ where I and J overlap).
As a consequence, although |ψ_I⟩ lives in an exponentially large Hilbert space for a large many-body system
(e.g. of dimension 2^n for a system of n qubits),
by itself, |ψ_I⟩ only encodes enough information to compute expectation values of operators supported on the patch I.
The information that describes operators outside patch I is typically scrambled,
and one must use a string of connections U_IJ, e.g. ⟨ψ_I | A_I^G U_IJ U_JK B_K^G | ψ_K⟩,
to unscramble this information to compute long-range correlation functions.
In this work,
we question to what extent it is possible to obtain explicitly local equations of motion (such as the gauge picture)
such that distant information is not scrambled in this way.
That is,
we ask if it is possible to modify the gauge picture such that local wavefunctions are approximately equal to the Schrödinger picture wavefunction:|ψ_I⟩ ≈|ψ⟩.
To achieve this,
we consider adding a local term to the equations of motion that drives the connections U_IJ towards the identity
(without affecting any expectation values or operator time-dependence in the gauge picture).
We show that if this new term has a large coefficient γ,
then the connections are approximately equal to the identity
and all of the local wavefunctions in the gauge picture are approximately equal to the Schrödinger picture wavefunction.
In this sense, this coefficient is capable of interpolating between the gauge and Schrödinger pictures.
However, we find that the magnitude of the γ coefficient must scale exponentially with system size in order to keep the deviation between the two wavefunctions below a constant bound.
In sec:gaugePicture, we briefly review a derivation of the gauge picture of quantum dynamics.
With the derivation fresh in our mind,
it is clear what kinds of modifications can be straightforwardly made to the gauge picture.
In sec:deriveEoM,
we derive an additional term, with coefficient γ,
that we can add to the gauge picture in order to
interpolate between the gauge and Schrödinger pictures.
In sec:scaling,
we estimate how much the modified gauge picture will deviate from Schrödinger's picture (i.e. how much U_IJ deviates from the identity)
in the limit of large γ.
In Sections <ref> and <ref>,
we validate our estimates using numerical simulations of the 1D transverse-field Ising model <cit.>
in a longitudinal field <cit.>.
§ REVIEW OF THE GAUGE PICTURE
We wish to modify the gauge picture such that the local wavefunctions in the gauge picture are approximately equal to the Schrödinger picture wavefunction.
At the same time, we want the modified gauge picture to be an exact description of the quantum dynamics, while still retaining the explicit locality that originally motivated the gauge picture.
In order to deduce the ideal modification,
it is useful to review how the gauge picture is derived.
The gauge picture can be derived from the Heisenberg picture,
which also features explicitly local equations of motion.
Consider a local Hamiltonian
H = ∑_J H_J
that is a sum over Hamiltonian termsH_J, each supported on some patchJof the lattice.
A local operatorA_Isupported on a patch (I) is time-evolved in the Heisenberg picture via
∂_t A_I^H = i [H^H, A_I^H]
= i [H_⟨ I ⟩^H, A_I^H]
For simplicity, we assume that operators have no explicit time dependence.
We use S, H, and G superscripts to distinguish time-dependent operators in the Schrödinger, Heisenberg, and gauge pictures.
In the second line above,
we note that most terms in the Hamiltonian cancel out in the commutator due to locality,
and only the following Hamiltonian terms contribute:
H_⟨ I ⟩^H = ∑_J^J∩ I ≠ 0 H_J^H
where ∑_J^J∩ I ≠ 0 denotes a sum over all patches that overlap with patch I.
Therefore, the Heisenberg picture equation of motion (<ref>) is explicitly local <cit.>.
In order to obtain the gauge picture,
we need to push the time dependence from the operators into the wavefunction.
This is achieved using the following unitary mapping:
|ψ_I⟩ = U_I |ψ^H⟩
A_I^G=U_IA_I^HU_I^†
Similar to how the Schrödinger and Heisenberg picture wavefunction and operators are related by a unitary transformation,
the above equation relates the wavefunction and operators in the Heisenberg picture (right hand side)
to those in the gauge picture (left hand side)
using a collection of unitary transformationsU_I.
An important difference, however,
is that in order to maintain a sense of local dynamics for the wavefunction,
we must use a different unitary transformation for each patch of space,
which results in the local wavefunctions|ψ_I⟩.
Since U_I is unitary, the time derivative of U_I can be expressed in terms of a Hermitian operator G_I(t):
∂_t U_I = -iG_I U_I
Plugging eq:gaugeHeisenberg and (<ref>) into∂_t|ψ^H⟩=0and the local Heisenberg equation (<ref>) of motion yields:
∂_t |ψ_I⟩ = -iG_I|ψ_I⟩
∂_tA_I^G = i[H_⟨ I ⟩^G - G_I,A_I^G]
In the gauge picture,H_⟨I ⟩from eq:HI takes a modified form:
H_⟨ I ⟩^G = U_I H_⟨ I ⟩^H U_I^†
= ∑_J^J∩ I ≠ 0 U_IJ H_J^G U_JI
Above, we have defined the connections
U_IJ = U_I U_J^†
From eq:dUI, we see that the connections evolve as
∂_t U_IJ = -i G_I U_IJ + i U_IJ G_J
U_IJ connects the wavefunctions of different patches via
U_IJ|ψ_J⟩ = |ψ_I⟩
which follows from eq:gaugeHeisenberg.
The unitary connections U_IJ between different patches (I, J, K) are analogous to “flat” gauge fields; i.e. they obey
U_IJU_JK=U_IK
In the gauge picture <cit.>,
we choose
G_I = H_⟨ I ⟩^G
U_I(t=0) = 1
so that local operators have no time dependence in eq:gaugeEoMG
and are equal local operators in the Schrödinger picture.
This leads to the gauge picture equations of motion in eq:gaugeEoM.
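The bookkeeping above can be checked directly in a few lines of numerical linear algebra. The following minimal NumPy sketch is illustrative only (it is not code from the paper): it uses random per-patch unitaries U_I and a random global state as stand-ins for the time-evolved quantities, and verifies |ψ_I⟩ = U_IJ|ψ_J⟩, the flatness condition U_IJ U_JK = U_IK, and that gauge-picture expectation values reproduce Schrödinger-picture ones.

import numpy as np

rng = np.random.default_rng(0)
n_qubits = 3
N = 2 ** n_qubits                      # Hilbert-space dimension

def rand_unitary(dim):
    # QR decomposition of a random complex matrix yields a unitary Q.
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, _ = np.linalg.qr(z)
    return q

# Global (Schrodinger-picture) state |psi>, normalized.
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)

# One unitary U_I per patch (random stand-ins; in the gauge picture they would
# be generated by integrating dU_I/dt = -i G_I U_I with U_I(0) = 1).
patches = [0, 1, 2]
U = {I: rand_unitary(N) for I in patches}

# Local wavefunctions |psi_I> = U_I |psi> and connections U_IJ = U_I U_J^dagger.
psi_loc = {I: U[I] @ psi for I in patches}
conn = {(I, J): U[I] @ U[J].conj().T for I in patches for J in patches}

for I in patches:
    for J in patches:
        # U_IJ |psi_J> = |psi_I>
        assert np.allclose(conn[(I, J)] @ psi_loc[J], psi_loc[I])
        for K in patches:
            # flatness: U_IJ U_JK = U_IK
            assert np.allclose(conn[(I, J)] @ conn[(J, K)], conn[(I, K)])

# A gauge-picture operator A_I^G = U_I A_I U_I^dagger reproduces Schrodinger
# expectation values: <psi_I| A_I^G |psi_I> = <psi| A_I |psi>.
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = A + A.conj().T                     # arbitrary Hermitian observable
A_G = U[0] @ A @ U[0].conj().T
assert np.isclose(psi_loc[0].conj() @ A_G @ psi_loc[0], psi.conj() @ A @ psi)
print("gauge-picture bookkeeping checks passed")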
§ MODIFIED GAUGE PICTURE
In this section,
we derive how the gauge picture can be modified such that the connections can be kept close to the identity.
From the previous section,
we see that any choice of Hermitian G_I leads to valid equations of motion.
Let us decompose G_I as
G_I = H_⟨ I ⟩^G + γ X_I
where γ is a real-valued constant and X_I is a Hermitian operator.
If X_I commutes with local operators A_I that only act on patch I,
then local operators will still be time-independent in eq:gaugeEoMG.
In this section,
we will derive a choice of X_I such that the connections are pushed towards the identity.
We can quantify how close a connection U_IJ is to the identity via its trace, Tr U_IJ,
which increases as U_IJ approaches the identity.
To be precise, we define
S_IJ(t) = 1 - Re Tr(U_IJ)/N
whereNis the Hilbert space dimension
(e.g.N=2^nfor a system withnqubits).
The value of S_IJ(t) ranges between 0 and 2,
with S_IJ(t) = 0 when U_IJ is the identity.
Therefore, we want to choose X_I such that the average ∂_t S_IJ(t) is minimized (while holding a norm of X_I fixed).
The averaged time derivative is
∂_t ∑_IJ^I∩J≠∅ S_IJ(t)
= -1/N ∑_IJ^I∩J≠∅ Re Tr ∂_t U_IJ
= -1/N ∑_IJ^I∩J≠∅ Re Tr(-i G_I U_IJ + i U_IJ G_J)
= -1/N ∑_I Tr[ G_I ∑_J^I∩J≠∅ (-i U_IJ + i U_IJ^†) ]
where ∑_IJ^I∩J≠∅ sums over all patches I and J that have nonzero overlap.
In the last line, we identify a candidate
X̃_I = ∑_J^I∩J≠∅ (-i) (U_IJ - U_IJ^†)
forX_I,
which will contribute a negative derivative in the totalS_IJ(t).
But as previously mentioned,
we want X_I to commute with any local operator A_I supported on a patch I so that local operators are time-independent.
To achieve this, we simply define X_I as X̃_I after taking the partial trace over the qubits in patch I:
X_I = Tr_I X̃_I
Therefore, with this choice of X_I and for large γ,
the γ X_I term in eq:G_int can be expected to drive ∑_IJ^I∩J≠∅ S_IJ(t) towards zero,
thereby pushing the connections U_IJ toward the identity.
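A minimal NumPy sketch of this construction is given below, purely for illustration (it is not the authors' code). The patch layout (patch I taken to be the first two qubits of a four-qubit system), the random stand-in connections, and the absence of any normalization factor in the partial trace are assumptions made only for concreteness.

import numpy as np

rng = np.random.default_rng(1)
n_qubits = 4
N = 2 ** n_qubits
d_I, d_R = 4, N // 4     # patch I = first two qubits, so H = H_I (x) H_rest

def rand_unitary(dim):
    q, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
    return q

# Stand-in connections U_IJ to the patches J overlapping patch I.
connections = [rand_unitary(N) for _ in range(3)]

# Candidate term: X~_I = -i sum_J (U_IJ - U_IJ^dagger), Hermitian by construction.
X_tilde = sum(-1j * (U - U.conj().T) for U in connections)
assert np.allclose(X_tilde, X_tilde.conj().T)

# X_I = Tr_I X~_I: partial trace over the patch-I factor, re-embedded with an
# identity on patch I so that X_I commutes with every operator supported on I.
X_tilde_r = X_tilde.reshape(d_I, d_R, d_I, d_R)
X_reduced = np.einsum('iris->rs', X_tilde_r)   # trace over the patch index i
X_I = np.kron(np.eye(d_I), X_reduced)

# Check [X_I, A_I (x) 1] = 0 for an arbitrary operator A_I acting only on patch I.
A_I = rng.normal(size=(d_I, d_I)) + 1j * rng.normal(size=(d_I, d_I))
A_full = np.kron(A_I, np.eye(d_R))
assert np.allclose(X_I @ A_full - A_full @ X_I, 0)
print("X_I commutes with operators supported on patch I")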
§ SCALING HYPOTHESIS
We can estimate how effective the γ term is at driving the connections towards the identity.
In the previous section,
we argued that for very large γ,
the connections should be very close to the identity: U_IJ ≈ 1.
We therefore expect the following perturbative expansion in small γ^-1:
ln U_IJ = i ∑_k=1^∞γ^-k A_IJ^(k)
where A_IJ^(k)(t) are time-dependent Hermitian operators with no dependence on γ.
Although this expression is clearly valid for sufficiently small times t,
its validity at long times t ≫ γ^-1 is not immediately clear.
We previously identified S_IJ(t) as a useful metric for how much the connection U_IJ deviates from the identity.
If we boldly assume that the above expansion holds at late times,
then we can estimate
S_IJ(t) = 1 - 1/N Re Tr U_IJ
= γ^-2 Tr[A_IJ^(1) A_IJ^(1)]/(2N) + O(γ^-3)
where the terms linear in i A_IJ^(k)(t) vanish since they have imaginary trace.
We therefore expect that
S_IJ(t) ∼γ^-2
§ SIMULATIONS
We use numerical simulations to check how effective the γ term is at driving the connections towards the identity.
We simulate the one-dimensional transverse field Ising model after a quench in two regimes: (1) at criticality (where it is integrable) and (2) with a longitudinal field (where it is non-integrable <cit.>).
For both cases, we find that for long times t ≫ γ^-1,
large γ,
and long chains of length L,
the deviation S_IJ(t) from the Schrödinger picture
saturates at a value S_IJ(∞).
We will numerically show that S_IJ(∞) obeys the following scaling in the large γ and large L limit:
S_IJ(t=∞) ∼ γ^-2 e^{a L + b + c/L + ⋯}
where “⋯” denotes subleading terms (and for nearest-neighbor two-qubit patches I and J).
Therefore, S_IJ(t=∞) ∼ γ^-2 as expected from the previous section.
In the simulations, we initialize the system with all spins pointing in the +X direction such that ⟨σ_i^x⟩ = 1 at t=0.
We then consider a time evolution under the transverse field Ising model in a longitudinal field, which has the following Hamiltonian:
H = -J ∑_⟨ij⟩ σ^z_i σ^z_j - h_x ∑_i σ^x_i - h_z ∑_i σ^z_i
where J is the Ising interaction strength; ∑_⟨i,j⟩ denotes a sum over nearest-neighbor sites; h_x is the transverse field strength; h_z is the longitudinal field strength;
and σ^μ are Pauli operators.
We take J = h_x = 1 throughout and consider a periodic chain of length L in two regimes:
(1) h_z = 0, for which the model is critical and integrable (via a mapping to free Majorana fermions), and
(2) h_z = 1, for which the model is gapped and non-integrable <cit.>. The time evolution of the expectation value of the spin operator σ^x is shown in fig:X vs t.[
Throughout the main text, all simulations are performed with a time step δ t = 0.005 using the modified RK4 Runge Kutta integration method described in Appendix F of slagle2022quantum. The modification is used to maintain |ψ_I⟩ = U_IJ|ψ_J⟩ exactly, but has the unfortunate side effect (which is probably preventable) of increasing the integration error from (δ t)^4 to (δ t)^3 at time t∼ 1. Nevertheless, we have checked that the time step is sufficiently small to not significantly affect any of our plots.]
To simulate our modified gauge picture,
we first choose a set of patches to cover the lattice.
We take the simplest choice of patches: each patch I = ⟨i,j⟩ is a nearest-neighbor pair of qubits ⟨i,j⟩, as depicted in fig:patches.
Next we must split the Hamiltonian into a sum H = ∑_I H_I of local terms H_I:
H_I=⟨ij⟩ = -J σ^z_i σ^z_j - h_x/2 (σ^x_i + σ^x_j) - h_z/2 (σ^z_i + σ^z_j)
Similar to the usual gauge picture, at t=0 the local wavefunctions are initialized to be equal to the Schrödinger picture wavefunction, |ψ_I(t=0)⟩ = |ψ^S(t=0)⟩,
and the connections are initialized as the identity, U_IJ(t=0) = 1.
Equations eq:gaugeEoMG and (<ref>),
with G_I given in eq:G_int,
are then used to numerically calculate the time-evolved
local wavefunctions |ψ_I(t)⟩ and connections U_IJ(t).
Recall that G_I was chosen such that the local operators (e.g. σ_i^μ)
are equal in the Schrödinger and gauge pictures.
Therefore, in order to reduce notational clutter,
we omit G superscripts and patch index subscripts on the Pauli operators.
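For concreteness, a self-contained forward-Euler sketch of this simulation loop is shown below for a periodic three-qubit chain with three two-qubit patches. It is an illustration only, not the code used for the results in this paper: the paper's simulations use a modified RK4 integrator that preserves |ψ_I⟩ = U_IJ|ψ_J⟩ exactly and much larger chains, and the γ X_I term is omitted here (γ = 0); it would be added to G(I) using the construction of X_I sketched earlier.

import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def embed(op, site, n=3):
    """Embed a single-qubit operator at `site` of an n-qubit chain."""
    out = np.array([[1.]])
    for i in range(n):
        out = np.kron(out, op if i == site else I2)
    return out

J, hx, hz = 1.0, 1.0, 0.0
bonds = [(0, 1), (1, 2), (2, 0)]       # periodic 3-site chain, one patch per bond
H_term = {k: -J * embed(Z, i) @ embed(Z, j)
             - 0.5 * hx * (embed(X, i) + embed(X, j))
             - 0.5 * hz * (embed(Z, i) + embed(Z, j))
          for k, (i, j) in enumerate(bonds)}
patches = list(H_term)                 # every patch overlaps every other one here

plus = np.array([1., 1.]) / np.sqrt(2)
psi0 = np.kron(np.kron(plus, plus), plus).astype(complex)   # all spins along +x
psi = {I: psi0.copy() for I in patches}
U = {(I, Jp): np.eye(8, dtype=complex) for I in patches for Jp in patches}

def G(I):
    # G_I = H^G_<I> = sum_J U_IJ H_J U_JI  (a gamma*X_I term would be added here)
    return sum(U[(I, Jp)] @ H_term[Jp] @ U[(Jp, I)] for Jp in patches)

dt, n_steps = 1e-3, 2000               # crude forward Euler; unitarity drifts at O(dt)
for _ in range(n_steps):
    Gs = {I: G(I) for I in patches}
    psi = {I: psi[I] - 1j * dt * Gs[I] @ psi[I] for I in patches}
    U = {(I, Jp): U[(I, Jp)] - 1j * dt * (Gs[I] @ U[(I, Jp)] - U[(I, Jp)] @ Gs[Jp])
         for (I, Jp) in U}

# sigma^x expectation on qubit 0, available from the patch containing qubit 0.
sx0 = (psi[0].conj() @ embed(X, 0) @ psi[0]).real / (psi[0].conj() @ psi[0]).real
print("approx <sigma^x_0> at t = 2:", sx0)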
§.§ h_z=0
Let us consider the h_z=0 case first.
(h_z=1 will be qualitatively the same.)
In fig:asymptote,
we plot S_IJ(t) [eq:Sch-ness] vs time for various γ and chain lengths L.
We take the patches I=⟨i-1,i⟩ and J=⟨i,i+1⟩ to be nearest neighbors.
Since the model is translation-symmetric with periodic boundary conditions,
there is no dependence on i and all S_IJ are equal for all nearest-neighbor patches.
Recall that S_IJ(t) quantifies how much the modified gauge picture deviates from Schrödinger's picture;
i.e. S_IJ(t) quantifies how much the gauge picture connections deviate from the identity.
For sufficiently large γ, we see that S_IJ(t) asymptotes to a constant after time t ≈ 2,
and this asymptote decreases with γ and increases with system size L.
When γ is small, there does not appear to be a clean asymptote to a constant in fig:asymptoteL10;
instead, S_IJ(t) appears to randomly squiggle around a rough constant.
However, these squiggles appear to be completely absent for γ ≥ 12 in fig:asymptoteL10.
Note that the time after which the squiggling begins appears to increase as γ increases.
In sec:app_a, we provide numerical evidence to show that the time after which the squiggling begins actually diverges at a finite γ (which in turn appears to increase with system size). Therefore, for sufficiently large γ, we expect that there is a clean asymptote to a constant.
In fig:scaling,
we study how the t=∞ asymptote of S_IJ(t) scales with γ and system size L.
We fit the data to eq:fit_asymp and find the following parameters
a = 2.63, b = 0.19, c = -20.63.
These three parameters were used to simultaneously fit the 10 lines shown in fig:scaling.
We only used the twelve
data points with γ = 12, 16, 20 and L = 7, 8, 9, 10 to fit the data.
We excluded points with small γ that did not have a clean asymptote.
The fit is remarkably clean,
which is strong evidence for the validity of the scaling shown in eq:fit_asymp.
§.§ h_z=1
To demonstrate that the above qualitative results are generic,
we also consider a non-integrable transverse field Ising model <cit.> with an applied longitudinal field, where J = h_x = h_z = 1 in eq:hamil.
fig:asymptote_hz1 and <ref>
are analogous to fig:asymptote and <ref>
from the previous subsection.
We again find that the asymptote scales in accordance with eq:fit_asymp.
We extract the fit parameters
a = 4.21, b = 0.13, c = -25.35
by fitting to eq:fit_asymp using the 11 points given by γ = 12, 16, 20 and L = 7, 8, 9, 10, but with the (γ=12, L=10) point excluded (which does not asymptote cleanly, as seen in fig:asymptote_hz1).
§ S(T) SQUIGGLES ONLY FOR SMALL GAMMA
In fig:asymptoteL10 and <ref>,
we observed that for large γ, S_IJ(t) displayed a remarkably clean asymptote to a constant.
However, for smaller γ, S_IJ(t) instead wiggled around an approximate constant.
But the time at which the wiggles began appeared to increase with γ.
In this appendix,
we show that the time at which the wiggles begin actually becomes infinite at a finite γ,
which is strong evidence that there are no wiggles for sufficiently large γ.
Similar to fig:asymptoteL10, in fig:sq_analysis1
we plot S_IJ(t) for the critical transverse field Ising model (h_z=0), but for longer times and a smaller range of γ.
For computational efficiency, we first study a smaller system size of L = 6 (with a Runge Kutta time step of δt = 0.004).
We clearly see that the onset of the squiggles increases rapidly as γ approaches around 2.7.
Let t_s be the time at which the squiggles begin,
which we define as the time at which S_IJ(t) first begins to decrease.
We find that t_s diverges as
t_s^2 = t_0^2/(γ - γ_0)
for large t_s, where γ_0 and t_0 are constants.
To show this divergence, we rewrite the above equation as a linear equation in t_s^-2 vs γ:
t_0^2 t_s^-2 = γ - γ_0
In fig:sq_analysis2,
we plot a series of points (γ, t_s^-2).
The points agree very precisely with a linear fit to the above equation,
which is strong evidence that the temporal squiggle onset t_s diverges at γ_0 ≈ 2.7.
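The fit just described amounts to a one-parameter linear regression. A minimal sketch is given below; the (γ, t_s) values are made-up placeholders for illustration, not data from the paper.

import numpy as np

# Fit t_0^2 * t_s^(-2) = gamma - gamma_0 as a straight line in (gamma, t_s^(-2)).
gamma = np.array([3.0, 3.5, 4.0, 5.0, 6.0])        # placeholder values
t_s = np.array([40.0, 25.0, 20.0, 15.0, 12.5])     # placeholder onset times

y = t_s ** -2.0
slope, intercept = np.polyfit(gamma, y, deg=1)     # y = slope*gamma + intercept
t0_sq = 1.0 / slope                                # t_0^2
gamma_0 = -intercept / slope                       # gamma at which t_s diverges
print(f"t_0^2 ~ {t0_sq:.2f}, gamma_0 ~ {gamma_0:.2f}")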
We also show data points for the numerical integration time step δt = 0.002 (in addition to δt = 0.004)
to show that decreasing the time step by a factor of two has no noticeable effect on the data,
which is good evidence that numerical integration errors are negligible for the data shown.
See sec:app_b for additional details on numerical integration errors.
To demonstrate the generality of this result,
we also show data and analogous asymptotic fits for a larger system size L = 8 in fig:sq_analysisL8,
and with an applied h_z = 1 longitudinal field for L = 6 in fig:sq_analysis_ex.
From fig:sq_analysisL8, we see that the squiggles seem to persist up to larger γ when the system size is larger.
§ CONCLUSION
Our work extends recent work on the gauge picture of quantum dynamics <cit.> to show that it is possible to modify the gauge picture such that the local wavefunctions in the gauge picture are approximately equal to the Schrödinger picture wavefunction
while still enforcing equations of motion that are explicitly local
(while Schrödinger's equation is not explicitly local).
This approximate equivalence of wavefunctions occurs when the connections in the gauge picture are close to the identity,
which we obtain by adding an additional γ term [see eq:G_int, (<ref>), and (<ref>)] to the gauge picture equations of motion (<ref>).
We quantified how close the connections are to the identity via S_IJ(t) in eq:Sch-ness. S_IJ(t) therefore measures how “close” this modified gauge picture is to the Schrödinger picture,
and γ (in some sense) interpolates between the Schrödinger picture and the (unmodified) gauge picture.
We showed that S_IJ(t) ∼ γ^-2 e^{a L + b + ⋯} [sec:scaling]
for large t, large γ, and large system size L using 1D spin chain numerics
for the transverse-field Ising models that we studied.
(We expect that these numerical results apply to generic spin chain models.)
Thus, γ must be exponentially large in the system size in order for the local wavefunctions in the gauge picture to be approximately equal to the Schrödinger picture wavefunction.
We analytically argued that S_IJ(t) ∼ γ^-2 in sec:scaling.
We leave to future study: (1) a more rigorous analytical derivation, and (2) an explanation for the system size dependence.
Interestingly, S_IJ(t) exhibits an extremely clean asymptote as t → ∞ for sufficiently large γ,
but squiggles erratically for smaller γ.
We were surprised to find in sec:app_a that there appears to be a sharp transition between these two regimes.
That is, for small γ we find that S_IJ(t) starts to show a clean asymptote at small t, but then begins to squiggle erratically at later times t ≳ t_s.
But we find that the onset time t_s of the squiggles diverges at a finite γ.
It would be interesting to gain a better understanding of the physics underlying this effect.
We thoroughly checked that our data is free from numerical integration errors.
During that process, we found that the numerical integration errors are also interesting because they exhibit exponential sensitivity to initial conditions—the butterfly effect—even though quantum dynamics are linear.
However, the gauge picture equations of motion are nonlinear,
and we expect that numerical errors turn on the nonlinearity,
which we find leads to numerical errors that increase exponentially
(which does not occur when integrating Schrödinger's equation).
See sec:app_b for details.
Understanding this new kind of gauge picture chaotic dynamics would be another interesting direction for future research.
§ ACKNOWLEDGEMENTS
We would like to thank Toni Panzera and Pranav Satheesh for useful conversations.
Funding information
This research was supported in part by the Welch Foundation through Grant No. C-2166-20230405,
and by the National Science Foundation Grant No. NSF PHY-1748958 and the Gordon and Betty Moore Foundation Grant No. 2919.02.
§ NUMERICAL INSTABILITY IN THE GAUGE PICTURE
In this appendix, we study the numerical integration errors in some detail.
Nonlinear differential equations can exhibit chaotic dynamics <cit.>.
That is, small perturbations to the initial conditions quickly evolve into large effects—the so-called butterfly effect.
Numerically integrating nonlinear differential equations therefore presents a challenge because small numerical errors are likely to similarly lead to large changes due to the same butterfly effect;
but in this context these large changes are viewed as large integration errors.
Schrödinger's equation is a linear differential equation
and therefore does not exhibit this kind of chaotic dynamics;
small changes in |ψ(0)⟩ do not lead to large changes in |ψ(t)⟩ due to unitarity.
However, although the gauge picture equations of motion can reproduce the same expectation values as Schrödinger's picture when integrated exactly,
in this appendix we find that small integration errors lead to exponentially increasing errors.
We expect this occurs because the small integration errors
insert nonlinearity into the dynamics,
which then exhibit the butterfly effect.
In fig:chaosa, we plot the ⟨σ^x_i⟩ expectation value of the critical L=6 transverse field Ising model (with h_z=0) vs time.
We compare our gauge picture numerics
(which use the modified RK4 Runge Kutta integration method described in Appendix F of slagle2022quantum)
to our numerically exact Schrödinger picture numerics
(for which we can simply exponentiate the Hamiltonian).
The two methods agree up until around time t ≈ 18,
at which point our gauge picture numerics become rather inaccurate.
We show gauge picture data for numerical integration time steps
δ t = 0.005 and 0.0005.
In fig:chaosb, we plot the error of the same expectation value on a log scale to show that the integration error increases exponentially with time.
|
http://arxiv.org/abs/2307.03127v1
|
20230706165316
|
Quantitative analysis of optimal Sobolev-Lorentz embeddings with $α$-homogeneous weights
|
[
"Petr Gurka",
"Jan Lang",
"Zdeněk Mihula"
] |
math.FA
|
[
"math.FA",
"46E35, 47B06 (Primary) 46B50 (Secondary)"
] |
§ INTRODUCTION
It is a truth generally acknowledged that Sobolev embeddings hold a prominent position in various areas of mathematics, making comprehensive understanding of their internal structure and behavior essential. One of their oft-studied aspects is their compactness and its quality. Quite often the quality of compactness is analyzed through the decay rate of different s-numbers. Various s-numbers are closely related to the spectral theory of the corresponding differential operators associated with Sobolev-type embeddings and provide estimates for the growth of their eigenvalues (see <cit.>). There is quite extensive literature in which the quality of compactness of Sobolev embeddings is investigated. However, significantly less attention has been devoted to studying the structure of non-compact Sobolev embeddings, where the measure of non-compactness may be related to the shape of the essential spectrum (see <cit.>).
Naturally, there are several ways which Sobolev embeddings can become non-compact, such as:
* when the underlying domain is unbounded (see <cit.>, cf. <cit.>);
* when the boundary of the underlying domain is excessively irregular (see <cit.>);
* when the target function norm is overly strong—in other words, the target function space is too close to the optimal one (see <cit.> and references therein).
Among these possibilities, the last one is particularly intriguing because it has not been explored quantitatively nearly as much as the others, despite the interest in optimal Sobolev embeddings (e.g., see <cit.> and references therein). Previous works investigating the case (c) (see <cit.>) dealt with Sobolev embeddings that are non-compact when restricted to any sub-domain of the underlying domain and, loosely speaking, the non-compactness is spread uniformly over all sub-domains.
A natural question arises: what can we say about quantitative aspects of the non-compactness of Sobolev embeddings in the case (c) when the non-compactness does not occur uniformly over all sub-domains of the underlying domain? This paper aims to address this question, focusing specifically on the scenario where the Sobolev embedding may become non-compact only in the sub-domains touching the boundary of the underlying domain but remains compact in all other bounded sub-domains (with regular boundary) of the domain.
To be more specific, we will consider an optimal weighted Sobolev embedding on an open convex cone Σ⊆, d≥2, with vertex at the origin, endowed with a nonnegative weight w that is α-homogeneous, α>0, and its (1/α)-th power is concave. The Sobolev embedding in question is
E V^1 L^p,q(Σ, μ) → L^p^*, q(Σ, μ),
where V^1 L^p,q(Σ, μ) is a suitable weighted Sobolev space built on the Lorentz space L^p,q(Σ, μ), 1≤ q≤ p < D, D=d+α, μ is the weighted measure dμ(x)=w(x) dx, and p^* = (Dp)/(D - p) (see Section <ref> for exact definitions). A prototypical example of such weights is the monomial weight defined as w(x)=x_1^A_1⋯ x_k^A_k on Σ = {x_i>0, i = 1, …, k}, where 1≤ k ≤ d and A_i > 0. Such monomial weights have been quite fashionable since they appeared in <cit.>. In the former, they were used in connection with the regularity of stable solutions to certain planar reaction–diffusion problems, while in the latter they were the main focus of the paper. It is worth pointing out that the weights considered in this paper are not radial. Furthermore, the embedding (<ref>) itself is new in this generality, to the best of our knowledge.
Three comments are in order. First, even though the open convex cone Σ is unbounded, the lack of compactness is not caused by its unboundedness. The embedding is still non-compact when the underlying domain is replaced by Σ∩ B_R, where B_R is the open ball centered at the origin with radius R>0. Second, the regularity of ∂Σ is also completely immaterial. Last, while the weights considered in this paper may or may not be singular near parts of ∂Σ, they are always singular at the origin. Moreover, it follows from our results that the embedding is, loosely speaking, the most non-compact near the origin.
The paper is structured as follows: in the next section, we introduce the necessary notation and recall some preliminary results. In Section <ref>, we prove the Pólya-Szegő inequality for (<ref>) (Theorem <ref>), and then prove the optimal weighted Sobolev-Lorentz inequality in Σ (Theorem <ref>). Moreover, we obtain the inequality with the optimal constant (Remark <ref>). The optimal weighted Sobolev-Lorentz inequality in Σ proved in this paper improves and extends the inequality obtained in <cit.>, which is restricted to the Lebesgue spaces—that is, the Sobolev space is built on L^p(Σ, μ) and the target space is L^p^*(Σ, μ). Finally, in Section <ref>, we obtain the exact values of the Bernstein numbers for (<ref>) (Theorem <ref>), showing that they all coincide with the norm of the embedding. Leveraging the fact that the non-compactness is the worst in the neighborhoods of the origin, we construct an infinite-dimensional subspace of V^1 L^p,q(Σ, μ) restricted onto which the embedding is an isomorphism into L^p^*, q(Σ, μ) (Proposition <ref>). Hence the embedding (<ref>) is not strictly singular. Moreover, it is also maximally non-compact in the sense that its (ball) measure of non-compactness is equal to its norm.
§ PRELIMINARIES
We start this section with basic definitions and necessary preliminary results.
§.§ Open convex cone and weighted measure
Throughout the paper Σ denotes an open convex cone in ℝ^d, d∈ℕ, d≥2, with vertex at the origin.
Furthermore, w: Σ→ [0, ∞) is a nonnegative (not identically zero) continuous function that is α-homogeneous, α>0,
and such that w^1/α is concave in Σ. Recall that w is α-homogeneous if
w(κ x)=κ^α w(x) for any x∈Σ and all κ>0.
Throughout the paper μ denotes the weighted measure on Σ defined as
dμ(x)=w(x) dx.
For future reference, we set
D=d+α.
A large number of concrete examples of cones Σ and weights w satisfying the assumptions can be found in <cit.>. Important examples are the following monomial weights. Assume that Σ = Σ_1 ∩⋯∩Σ_k, k∈{1, …, d}, where Σ_j ≠ ℝ^d, j=1, …, k, are open convex cones in ℝ^d with vertex at the origin. Given A_1, …, A_k>0, the weight w: Σ→ [0, ∞) defined as
w(x) = ∏_j = 1^k dist(x, ∂Σ_j)^A_j, x∈Σ,
satisfies the assumptions. In particular, when Σ_j = {x∈: x_j > 0}, j=1, …, k, we have (see <cit.>)
w(x) = x_1^A_1⋯ x_k^A_k, x∈Σ.
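As a concrete illustration (a worked instance added here for exposition, not a new result), the parameters associated with this monomial weight can be read off directly:
\[
  w(\kappa x)=\kappa^{A_1+\cdots+A_k}\,w(x)
  \quad\Longrightarrow\quad
  \alpha = A_1+\cdots+A_k,
  \qquad
  D = d + A_1+\cdots+A_k,
  \qquad
  p^{*} = \frac{Dp}{D-p}.
\]
For example, for $d=2$, $k=1$, $A_1=1$ (that is, $w(x)=x_1$ on the half-plane $\{x_1>0\}$) one has $D=3$, and the choice $p=q=2$ gives $p^{*}=\frac{3\cdot 2}{3-2}=6$.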
§.§ Lebesgue space
We denote by L^p(Σ,μ), p∈[1,∞], the Lebesgue space, of all measurable functions f in Σ with finite norm
f_p,μ =
(∫_Σ |f(x)|^p dμ(x))^1/p if p∈[1, ∞),
μ-ess sup_x∈Σ |f(x)| if p=∞.
§.§ Distribution function and rearrangements
For a measurable function f in Σ we define its distribution function with respect to μ as
f_*μ(τ)=μ({x∈Σ: |f(x)|>τ}), τ>0,
and its nonincreasing rearrangement with respect to μ as
f^*_μ(t)=inf{τ>0: f_*μ(τ)≤ t}, t>0.
The function f^_μ defined as
f^_μ(x) = f^*_μ(C_D |x|^D), x∈,
where
C_D = μ(B_1∩Σ),
is the radial rearrangement of f with respect to μ. We use the notation
B_r={x∈: |x|<r}, r>0.
Thanks to the -homogeneity of w, we have
μ(B_r∩Σ) = C_D r^D.
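For completeness, this identity follows from the α-homogeneity of w and the scaling invariance of the cone via the substitution x = r y:
\[
  \mu(B_r\cap\Sigma)
  = \int_{B_r\cap\Sigma} w(x)\,dx
  = r^{d}\int_{B_1\cap\Sigma} w(ry)\,dy
  = r^{d+\alpha}\int_{B_1\cap\Sigma} w(y)\,dy
  = C_D\, r^{D}.
\]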
Note that the function f^_μ is nonnegative and radially nonincreasing. Though it is defined on the whole , it depends only on function values of f in Σ. Furthermore, the functions f and f^_μ are equimeasurable with respect to μ, that is,
μ({x∈Σ: |f(x)|>τ})
=
μ({x∈Σ: |f^_μ(x)|>τ}), τ>0.
When E⊆(0, ∞) is (Lebesgue) measurable, we denote its Lebesgue measure by |E|. For a measurable function ϕ in (0, ∞), we define its nonincreasing rearrangement (with respect to the Lebesgue measure) as
ϕ^*(t) = inf{τ > 0: |{s∈(0, ∞): |ϕ(s)| > τ}| ≤ t}, t∈(0, ∞).
§.§ Lorentz space
Let p,q∈[1, ∞]. Assume that either 1≤ q≤ p < ∞ or p = q = ∞. We denote by L^p,q(Σ,μ) the set of all measurable functions f in Σ such that
f_p,q,μ = t^1/p - 1/q f^*_μ(t)_L^q(0, ∞) < ∞.
Under the imposed restriction on the parameters p and q, the functional ·_p,q,μ is a norm on L^p,q(Σ,μ) (e.g., <cit.>). The function space L^p,q(Σ,μ) is usually called the Lorentz space with indices p, q. Thanks to the layer cake representation formula (e.g., <cit.>), we have ·_p,p,μ = ·_p,μ. The Lorentz norm ·_p,q,μ can be expressed in terms of the distributional function as
f_p,q,μ = ( p ∫_0^∞ t^q - 1 f_*μ(t)^q/p t)^1/q
for every measurable function f in Σ.
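As a simple worked example (added for illustration), both expressions can be evaluated for a characteristic function f = χ_E with μ(E) < ∞, for which f^*_μ = χ_(0, μ(E)):
\[
  \|\chi_E\|_{p,q,\mu}
  = \Bigl(\int_0^{\mu(E)} t^{\frac{q}{p}-1}\,dt\Bigr)^{1/q}
  = \Bigl(\tfrac{p}{q}\Bigr)^{1/q}\mu(E)^{1/p}
  = \Bigl(p\int_0^{1} t^{q-1}\mu(E)^{q/p}\,dt\Bigr)^{1/q},
\]
so the two formulas agree, and the norm depends on E only through μ(E), consistent with rearrangement invariance.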
L^p,q_1(Σ,μ) ⊊ L^p,q_2(Σ,μ) when 1≤ q_1 < q_2 ≤ p < ∞.
Note that the functions equimeasurable with respect to μ have the same ·_p,q,μ norm. In particular,
f_p,q,μ = f^_μ_p,q,μ
for every measurable function f in Σ.
When 1≤ q≤ p < ∞, the Lorentz norm ·_p,q,μ is absolutely continuous, that is,
fχ_E_n_p,q,μ→0 for every sequence
{χ_E_n}_n=1^∞ satisfying E_n→∅μ-a.e.
Here E_n→∅ μ-a.e. means that the sequence {χ_E_n}_n=1^∞ converges pointwise to 0 μ-a.e.
§.§ Sobolev-Lorentz space
Let 1≤ q≤ p < ∞. We denote by V^1L^p,q(Σ, μ) the completion of 𝒞_c^1()—the space of continuously differentiable functions with compact
support in —with respect to the norm
u_V^1L^p,q(Σ, μ) = ∇ u_p,q,μ;
two functions that coincide μ-a.e. in Σ are identified. Note that the functions from V^1L^p,q(Σ, μ) need not vanish on ∂Σ.
We write ∇ u_p,q,μ for short instead of |∇ u|_p,q,μ, where |∇ u| stands for the Euclidean norm of ∇ u.
§.§ Strictly singular operators, Bernstein numbers and measure of non-compactness
Throughout this subsection we assume that X and Y are Banach spaces. We denote by B(X,Y) the collection of all bounded linear operators from X to Y, and by B_X the closed unit ball of X centered at the origin.
An operator T∈ B(X,Y) is said to be strictly singular if there is no infinite dimensional closed subspace Z of X such that the restriction T|_Z of T to Z is an isomorphism of Z onto T(Z)⊆ Y. Equivalently, for each infinite dimensional (closed) subspace Z of X,
inf{‖Tx‖_Y : ‖x‖_X = 1, x ∈ Z} = 0.
The nth, n∈, Bernstein number b_n(T) of T∈ B(X,Y) is defined as
b_n(T) = sup_{X_n ⊆ X} inf_{x ∈ X_n, ‖x‖_X = 1} ‖Tx‖_Y,
where the supremum extends over all n-dimensional subspaces of X.
Bernstein numbers are an important example of so-called (strict) s-numbers. Any rule s T→{s_n(T)}_n=1^∞ that assigns each bounded linear operator T from X to Y a sequence {s_n(T)}_n=1^∞ of nonnegative numbers having, for every n∈, the following properties:
(S1) T=s_1(T)≥ s_2(T)≥⋯≥0;
(S2) s_n(S+T)≤ s_n(S)+T for every S∈ B(X,Y);
(S3) s_n(BTA)≤Bs_n(T)A for every A∈ B(W,X) and B∈ B(Y,Z), where W,Z are Banach spaces;
(S4) s_n(Id_E E → E)=1 for every Banach space E with E≥ n;
(S5) s_n(T)=0 if rank T< n
is called a (strict) s-number. Notable examples of (strict) s-numbers are the approximation numbers a_n, the Bernstein numbers b_n, the Gelfand numbers c_n, the Kolmogorov numbers d_n, the isomorphism numbers i_n, or the Mityagin numbers m_n. For their definitions and the difference between strict s-numbers and `non-strict' s-numbers, the interested reader is referred to <cit.> and references therein.
The (ball) measure of non-compactness β(T) of T∈ B(X,Y) is defined as
β(T) = inf{r>0: T(B_X) can be covered by finitely many balls in Y with radius r}.
Clearly 0 ≤β(T) ≤T. The measure of non-compactness is a quantity that in a certain way measures how far from the class of compact operators the operator T is. We say that the operator T is maximally non-compact if T = β(T).
§ OPTIMAL SOBOLEV-LORENTZ EMBEDDING
We start by proving a suitable Pólya-Szegö inequality.
Let 1≤ q≤ p < ∞. For every function u∈𝒞_c^1(), its radial rearrangement u^_μ with respect to μ belongs to V^1L^p,q(Σ, μ), and
∇ u^_μ_p,q,μ≤∇ u_p,q,μ.
Furthermore, the function u^*_μ is locally absolutely continuous in (0, ∞), and
u^_μ(x) = ∫_C_D|x|^D^∞ (-u^*_μ)'(t) t for every x∈,
where D is defined by (<ref>) and C_D by (<ref>). In particular,
supp u^_μ = B_R where R is such that μ(B_R ∩Σ) = μ(supp u ∩Σ).
The proof of (<ref>) follows the line of argument of Talenti (<cit.>),
which combines the coarea formula with a suitable isoperimetric inequality
(see also the proof of <cit.>). The suitable isoperimetric inequality
in our case is that from <cit.> (cf. <cit.>).
It reads as, for every measurable E⊆ such that μ(E∩Σ) < ∞,
P_w(E; Σ) ≥ D C_D^{1/D} μ(E∩Σ)^{(D-1)/D}.
Here P_w(E; Σ) is the weighted perimeter of E in Σ (its definition in full generality
can be found in <cit.>). Recall that, when P_w(E; Σ) < ∞, then
P_w(E; Σ) = ∫_∂^*E∩Σw(x)ℋ^n-1(x),
where ∂^*E is the reduced boundary of E. Note that
D C_D^1/D = P_w(B_1; Σ)/μ(B_1∩Σ)^D - 1/D
thanks to the homogeneity of w. Following Talenti, it can be shown that the function u^*_μ
is locally absolutely continuous in (0, ∞) and that, for a.e. t∈(0, ∞),
0≤ ( D C_D^1/D t^D - 1/D (-u_μ^*)'(t) )^q ≤d/dt∫_{x∈Σ: |u(x)| > u^*_μ(t)} |∇ u(x)|^q μ(x).
The validity of (<ref>) is an immediate consequence of the local absolute
continuity of u_μ^* combined with the fact that lim_t→∞ u_μ^*(t) = 0.
Let
ϕ(t) = D C_D^{1/D} t^{(D-1)/D} (-u_μ^*)'(t), t∈(0, ∞).
We claim that
∫_0^t ϕ^*(s)^q s≤∫_0^t (∇ u)_μ^*(s)^q s for every t∈(0, ∞).
Thanks to <cit.>, it is sufficient to prove that
∫_E ϕ(s)^q s ≤∫_0^t (∇ u)_μ^*(s)^q s for every measurable E⊆(0, ∞), |E| = t.
Moreover, it follows from the (outer) regularity of the Lebesgue measure that we may assume that E is open. Thus
E = ⋃_j∈𝒥(a_j, b_j),
where {(a_j, b_j)}_j∈𝒥 is a countable system of nonempty mutually disjoint open intervals in (0, ∞). Using (<ref>), we have
∫_E ϕ(s)^q s ≤∑_j ∈𝒥∫_a_j^b_jϕ(s)^q s ≤∑_j ∈𝒥∫_{x∈Σ: u^*_μ(b_j) < |u(x)| ≤ u^*_μ(a_j) } |∇ u(x)|^q μ(x)
= ∑_j ∈𝒥∫_{x∈Σ: u^*_μ(b_j) < |u(x)| < u^*_μ(a_j) } |∇ u(x)|^q μ(x).
Note that the sets {x∈Σ: u^*_μ(b_j) < |u(x)| < u^*_μ(a_j) }, j∈𝒥, are mutually disjoint,
and μ({x∈Σ: u^*_μ(b_j) < |u(x)| < u^*_μ(a_j) }) ≤ b_j - a_j. Combining that with the
Hardy-Littlewood inequality (<cit.>), we obtain
∑_j ∈𝒥∫_{x∈Σ: u^*_μ(b_j) < |u(x)| < u^*_μ(a_j) } |∇ u(x)|^q μ(x) =
∫_⋃_j∈𝒥{x∈Σ: u^*_μ(b_j) < |u(x)| < u^*_μ(a_j) } |∇ u(x)|^q μ(x)
≤∫_0^∑_j ∈𝒥(b_j - a_j) (∇ u)_μ^*(s)^q s
= ∫_0^t (∇ u)_μ^*(s)^q s.
Hence
∫_E ϕ(s)^q s ≤∫_0^t (∇ u)_μ^*(s)^q s.
Now, note that the function (0, ∞) ∋ s ↦ s^q/p - 1 is nonincreasing. Therefore, it follows from (<ref>) combined with the Hardy lemma <cit.> that
∫_0^∞ϕ^*(t)^q t^q/p - 1 t ≤∫_0^∞ (∇ u)_μ^*(t)^q t^q/p - 1 t.
Note that
|∇ u_μ^|(x) = (-u_μ^*)'(C_D |x|^D) D C_D |x|^D-1 = ϕ(C_D |x|^D) for a.e. x∈.
Hence
(∇ u_μ^)_μ^*(t) = ϕ^*(t) for every t∈(0, ∞).
Therefore, (<ref>) follows from (<ref>) and (<ref>).
It remains to prove that u_μ^∈ V^1L^p,q(Σ, μ).
Assume that there is a sequence of functions ψ_n∈𝒞([0, ∞)) with bounded supports such that
0≤ψ_n(t) ↗ (-u_μ^*)'(t) for a.e. t∈(0, ∞) as n→∞.
Note that, for every n∈, the function
v_n(x) = ∫_C_D |x|^Dψ_n(t) t, x∈,
belongs to 𝒞_c^1(). Observe that
0≤ |∇ v_n| ≤ |∇ u_μ^| and lim_n→∞ |∇ v_n - ∇ u_μ^|(x) = 0 for every a.e. x∈.
Since the Lorentz norm ·_p,q,μ is absolutely continuous, it follows (<cit.>) that
lim_n→∞∇ v_n - ∇ u_μ^_p,q,μ = 0.
Hence u_μ^∈ V^1L^p,q(Σ, μ). Now we show that a sequence of functions ψ_n satisfying (<ref>) exists.
It can be constructed either by mollification or by the following approximation. Since
0≤min{k,(-u_μ^*)'(t)}↗ (-u_μ^*)'(t) for a.e. t∈(0, ∞) as k→∞,
we may assume, without loss of generality, that the function (-u_μ^*)'(t) is bounded.
Since the function (-u_μ^*)'(t) is nonnegative, nonincreasing,
bounded, and has bounded support, there is (e.g., <cit.>)
a sequence of (nonincreasing, bounded) functions ψ_n continuous on the interval [0, ∞) with
bounded supports
such that (<ref>) is true. This finishes the proof.
We now prove the optimal Sobolev-Lorentz inequality in Σ.
Let 1≤ q≤ p < D. Set p^* = Dp/(D-p). We have
u_p^*, q,μ ≤ p/(D - p) μ(B_1∩Σ)^{-1/D} ∇ u_p,q,μ for every u∈𝒞^1_c(ℝ^d).
First, recall that the following Hardy inequality, which holds for every nonnegative measurable function f on (0, ∞) (see <cit.> or, e.g., <cit.>, <cit.>):
( ∫_0^∞(t^1/p^* - 1/q∫_t^∞ f(s) s)^q t
)^1/q≤ p^* ( ∫_0^∞(t^1 + 1/p^* - 1/q f(t))^q t )^1/q.
Second, using (<ref>) and (<ref>), we have
u_p^*,q,μ = u_μ^_p^*,q,μ = t^1/p^* - 1/q∫_t^∞ (-u_μ^*)'(s) s _L^q(0, ∞)
≤ p^* t^1 + 1/p^* - 1/q (-u_μ^*)'(t) _L^q(0, ∞) = p^* t^1/p^* - 1/q + 1/D ( -u_μ^*)'(t) t^D - 1/D_L^q(0, ∞)
= p^* t^1/p - 1/q (-u_μ^*)'(t) t^D - 1/D_L^q(0, ∞)
for every u∈𝒞_c^1(). Furthermore, thanks to the Hardy-Littlewood inequality (<cit.>), we have
t^1/p - 1/q (-u_μ^*)'(t) t^D - 1/D_L^q(0, ∞)≤ D^-1 C_D^-1/D t^1/p - 1/qϕ^*(t) _L^q(0, ∞),
where the function ϕ is defined by (<ref>). Hence
u_p^*,q,μ≤ p^*D^-1 C_D^-1/D t^1/p - 1/qϕ^*(t) _L^q(0, ∞)
for every u∈𝒞_c^1().
Finally, combining (<ref>) with (<ref>), we obtain
u_p^*,q,μ≤ p^* D^-1 C_D^-1/D∇ u_μ^_p,q,μ
for every u∈𝒞_c^1(). Hence, combining that with (<ref>), we have
u_p^*,q,μ≤ p^* D^-1 C_D^-1/D∇ u_p,q,μ for every u∈𝒞_c^1().
By straightforwardly modifying the maximizing sequence introduced by Alvino in <cit.> (cf. <cit.>), it is not hard to prove that the constant in (<ref>) is optimal. In other words,
E = p/(D - p) μ(B_1∩Σ)^{-1/D},
where E is the (operator) norm of the embedding operator E V^1L^p,q(Σ, μ) → L^p^*, q(Σ, μ).
§ BERNSTEIN NUMBERS OF THE EMBEDDING
We start with two auxiliary propositions.
Let 1≤ q≤ p < D. Let E V^1L^p,q(Σ, μ) → L^p^*, q(Σ, μ) be
the embedding operator, where p^* is as in Theorem <ref>. For 0 < r < R, set
ℱ_r,R = {u∈𝒞^1_c(): u=u_μ^, u ⊆ B_R, ∇ u≡0 in B_r}.
Then, for every R > 0,
E = sup_u∈ℱ_r,R
r∈(0, R)u_p^*,q,μ/∇ u_p,q,μ.
In view of Theorem <ref>,
E = sup_u∈𝒞^1_c()u_μ^_p^*,q,μ/∇ u_μ^_p,q,μ.
First, we claim that
E = sup_u∈𝒞^1_c()
u_μ^⊆ B_Ru_μ^_p^*,q,μ/∇ u_μ^_p,q,μ.
To that end, it is sufficient to prove that, for every u∈𝒞^1_c(),
u_μ^_p^*,q,μ/∇ u_μ^_p,q,μ≤sup_v∈𝒞^1_c()
v_μ^⊆ B_Rv_μ^_p^*,q,μ/∇ v_μ^_p,q,μ.
Let u∈𝒞^1_c(). Since u is compactly supported in , u_μ^ is supported in B_R̃ for some R̃>0. Set κ = R̃/R and let u_κ be the function defined as
u_κ(x) = u(κ x), x∈.
Clearly u_κ∈𝒞^1_c(). We have
(u_κ)_μ^*(t) = u_μ^*(κ^D t) for every t>0
thanks to the homogeneity of w. Indeed,
(u_κ)_*μ(t)
=
∫_{x∈Σ:|u_κ(x)|>t}w(x) x
=
∫_{x∈Σ:|u(κ x)|>t}w(x) x
=
κ^-d∫_{x∈Σ:|u(x)|>t}w(x/κ) x
=
κ^{-d-α}∫_{x∈Σ:|u(x)|>t}w(x) x
=
κ^-Du_*μ(t)
for every t>0, where we used (<ref>) in the second to last equality. Hence (<ref>) is true. Similarly, it is easy to see that
(∇ (u_κ)_μ^)_μ^*(t) = κ(∇ u_μ^)_μ^*(κ^D t) for every t>0.
By plugging (<ref>) and (<ref>) in the definition of ·_p^*,q,μ and ·_p,q,μ, respectively, we observe that
(u_κ)_μ^_p^*,q,μ/∇ (u_κ)_μ^_p,q,μ = κ^-D/p^*u_μ^_p^*,q,μ/κ^1 - D/p∇ u_μ^_p,q,μ = u_μ^_p^*,q,μ/∇ u_μ^_p,q,μ.
Furthermore, note that (<ref>) implies that
(u_κ)_μ^⊆ B_R̃/κ = B_R.
Hence (<ref>) follows from the last two observations.
Next, we claim that
sup_u∈𝒞^1_c()
u_μ^⊆ B_Ru_μ^_p^*,q,μ/∇ u_μ^_p,q,μ = sup_u∈ℱ_r,R
r∈(0, R)u_p^*,q,μ/∇ u_p,q,μ.
To that end, it is sufficient to show that
sup_u∈𝒞^1_c()
u_μ^⊆ B_Ru_μ^_p^*,q,μ/∇ u_μ^_p,q,μ≤sup_u∈ℱ_r,R
r∈(0, R)u_p^*,q,μ/∇ u_p,q,μ.
Let u∈𝒞^1_c() be such that u_μ^⊆ B_R. Thanks to (<ref>), we have
u^_μ(x) = ∫_C_D|x|^D^∞ (-u_μ^*)'(t) t for every x∈.
Let ψ_n be the sequence of functions as in the proof of Theorem <ref>. Let η_n∈𝒞(0,∞), n∈, be a sequence of cutoff functions such that
0≤η_n≤1, η_n ≡ 0 in (0, 1/n+1], and η_n ≡ 1 in [1/n, ∞).
Let u_n, n∈, be the sequence of functions defined as
u_n(x) = ∫_C_D|x|^D^∞ψ_n(t)η_n(t) t, x∈.
Clearly, u_n∈𝒞_c^1() and u_n=(u_n)_μ^. Furthermore, each function u_n is constant in a ball
centered at the origin. Hence ∇ u_n≡0 in this ball. Combining the fact that
0≤ψ_n(t)η_n(t)↗ (-u^*_μ)'(t) for a.e. t∈(0,∞) as n→∞
with (<ref>), we obtain
(u_n)_μ^*(t)↗ (u_μ^)_μ^*(t) for every t∈(0, ∞) as n→∞.
It follows that
lim_n→∞u_n_p^*,q,μ = u_μ^_p^*,q,μ.
Furthermore, as in the proof of Theorem <ref>, it is easy to see that we also have
lim_n→∞∇ u_n_p,q,μ = ∇ u_μ^_p,q,μ.
Combining (<ref>) and (<ref>) with our observations about the functions u_n, we obtain (<ref>).
Finally, (<ref>) follows from (<ref>) and (<ref>).
Let 1≤ q≤ p < D. Let E V^1L^p,q(Σ, μ) → L^p^*, q(Σ, μ) be the embedding operator, where p^* is as in Theorem <ref>. For every 0 < λ < E, ε_1 > 0, and ε_2 > 0, there is a sequence of functions {u_j}_j = 1^∞⊆𝒞^1_c() such that
* u_j_p^*,q,μ = λ and ∇ u_j_p,q,μ = 1 for every j∈.
* supp u_j+1 ⊊ supp u_j and supp ∇ u_j ⊆ supp u_j ∖ supp u_j+1 for every j∈ℕ.
* u_j = (u_j)_μ^ for every j∈ℕ.
* supp u_j →∅ as j →∞.
* For every sequence {α_j}_j = 1^∞ we have
∑_j = 1^∞α_j u_j _p^*,q,μ≥( λ/1 + ε_1 - ε_2 )( ∑_j = 1^∞ |α_j|^q )^1/q.
We construct the desired sequence of functions inductively. Fix 0 < λ < E, ε_1 > 0, and ε_2 > 0. If q∈(1, p], let a∈(0,1) be so small that
a^q'/1-a^q'≤ε_2^q'.
For j∈, set
γ_j = {[ ε_2 if q = 1,; a^j if q∈(1, p]. ]
Note that
{γ_j}_j = 1^∞_ℓ_q' = ( ∑_j = 1^∞ |γ_j|^q')^1/q'≤ε_2.
First, take any R_1 > 0. Thanks to Proposition <ref>,
there is r_1∈(0, R_1) and a function u_1 = (u_1)_μ^∈𝒞_c^1() such that
u_1_p^*,q,μ = λ, ∇ u_1_p,q,μ = 1, u_1 ⊆ B_R_1, ∇ u_1 ⊆ B_R_1∖B_r_1.
Set δ_1 = μ(B_R_1∩Σ). Using the absolute continuity of the Lorentz norm ·_p^*,q,μ, we can find R̃_2∈(0, r_1) such that
u_1χ_B_R̃_2_p^*,q,μ ≤γ_1
and
(1 + ε_1)u_1χ_B_R_1∖ B_R̃_2_p^*,q,μ ≥u_1_p^*,q,μ.
Furthermore, by the dominated convergence theorem combined with the last inequality, we can find R_2 ∈(0, R̃_2) so small that
(1 + ε_1)^q ∫_δ_2^δ_1 t^q/p^* - 1 (u_1χ_B_R_1∖ B_R̃_2)_μ^*(t)^q t ≥u_1_p^*,q,μ^q,
where δ_2 = μ(B_R_2∩Σ) ∈ (0, δ_1). Moreover, R_2 can be taken so small that B_R_2⊊ u_1. Note that both (<ref>) and (<ref>) are still valid with R̃_2 replaced by R_2.
Second, let m∈, and assume that we have already found {u_j}_j = 1^m, {δ_j}_j = 1^m + 1, {r_j}_j = 1^m, and {R_j}_j = 1^m + 1 such that
u_j_p^*,q,μ = λ, ∇ u_j_p,q,μ = 1, u_j = (u_j)_μ^∈𝒞_c^1(),
B_R_j+1⊊ u_j ⊆ B_R_j, ∇ u_j ⊆ B_R_j∖B_r_j,
0< R_j + 1 < r_j < R_j, δ_j = μ(B_R_j∩Σ), δ_j+1∈ (0, δ_j/j),
u_jχ_B_R_j + 1_p^*,q,μ≤γ_j,
and (1 + ε_1)^q ∫_δ_j+1^δ_j t^q/p^* - 1 (u_1χ_B_R_j∖ B_R_j+1)_μ^*(t)^q t ≥u_j_p^*,q,μ^q
for every j = 1, …, m. The inductive step is very similar to the first step. Thanks to Proposition <ref> with R = R_m+1, there is r_m + 1∈(0, R_m + 1) and a function u_m + 1 = (u_m + 1)_μ^∈𝒞_c^1() such that u_m + 1_p^*,q,μ = λ, ∇ u_m + 1_p,q,μ = 1, u_m+1⊆ B_R_m + 1, and ∇ u_m + 1⊆ B_R_m + 1∖B_r_m + 1. Now, we find R̃_m + 2∈(0, r_m + 1) such that
u_m + 1χ_B_R̃_m + 2_p^*,q,μ ≤γ_m+1
and
(1 + ε_1)u_m + 1χ_B_R_m + 1∖ B_R̃_m + 2_p^*,q,μ ≥u_m + 1_p^*,q,μ.
Next, we find δ_m + 2∈ (0, δ_m+1/(m+1)) so small that R_m + 2 defined by δ_m + 2 = μ(B_R_m+2∩Σ) satisfies R_m + 2∈(0, R̃_m + 2), B_R_m+2⊊ u_m+1, and we have
(1 + ε_1)^q ∫_δ_m+2^δ_m+1 t^q/p^* - 1 (u_m+1χ_B_R_m+1∖ B_R̃_m+2)_μ^*(t)^q t ≥u_m + 1_p^*,q,μ^q.
Finally, we observe that the last three inequalities still hold with R̃_m+2 replaced by R_m+2. This finishes the construction of the desired sequence.
Next, we can easily verify that the constructed
sequence {u_j}_j=1^∞ fulfills properties (i) to (iv).
However, we still need to prove that the fifth property is also satisfied. Fix {α_j}_j = 1^∞⊆. Clearly, we may assume, without loss of generality, that the left-hand side of (<ref>) is finite. Then
∑_j = 1^∞α_j u_j _p^*,q,μ ≥∑_j = 1^∞α_j ũ_j _p^*,q,μ - ∑_j = 1^∞α_j (u_j - ũ_j) _p^*,q,μ
≥∑_j = 1^∞α_j ũ_j _p^*,q,μ - ∑_j = 1^∞α_j u_jχ_B_R_j+1_p^*,q,μ,
where the functions ũ_j are defined as
ũ_j = u_jχ_B_R_j∖ B_R_j+1, j∈.
As for the first term, since the functions ũ_j have mutually disjoint supports,
it follows (e.g., see <cit.>) that
( ∑_j = 1^∞α_j ũ_j )_μ^* ≥∑_j = 1^∞ |α_j|(ũ_j)_μ^*χ_(δ_j+1, δ_j).
Combining that with (<ref>) and with the fact that the intervals {(δ_j+1, δ_j)}_j = 1^∞ are mutually disjoint, we obtain
∑_j = 1^∞α_j ũ_j _p^*,q,μ^q ≥∑_j = 1^∞ |α_j|^q ∫_δ_j+1^δ_j t^q/p^* - 1 (ũ_j)_μ^*(t)^q t ≥∑_j = 1^∞u_j_p^*,q,μ^q/(1 + ε_1)^q|α_j|^q.
Hence, since u_j_p^*,q,μ = λ for every j∈,
∑_j = 1^∞α_j ũ_j _p^*,q,μ≥λ/1 + ε_1( ∑_j = 1^∞ |α_j|^q )^1/q.
Concerning the second term, we use (<ref>) and the Hölder inequality to obtain
∑_j = 1^∞α_j u_jχ_B_R_j+1_p^*,q,μ ≤∑_j = 1^∞ |α_j| u_jχ_B_R_j+1_p^*,q,μ≤∑_j = 1^∞ |α_j| γ_j
≤{α_j}_j = 1^∞_ℓ_q{γ_j}_j = 1^∞_ℓ_q'.
Combining that with (<ref>), we arrive at
∑_j = 1^∞α_j u_jχ_B_R_j+1_p^*,q,μ≤ε_2 {α_j}_j = 1^∞_ℓ_q.
Finally, by combining (<ref>) and
(<ref>) with
(<ref>), we obtain
(<ref>), which finishes the proof.
Now, we can show that all Bernstein numbers of the embedding operator
E V^1L^p,q(Σ, μ) → L^p^*, q(Σ, μ) coincide with its norm.
Let 1≤ q≤ p < D. Let E V^1L^p,q(Σ, μ) → L^p^*, q(Σ, μ) be the embedding operator, where p^* is as in Theorem <ref>. Then
b_m(E) = E for every m∈.
Furthermore, E is not strictly singular and is maximally non-compact.
In view of the property (S1) of (strict) s-numbers, it is sufficient to show that
b_m(E) ≥E for every m∈.
Fix arbitrary 0 < λ < E, ε_1 > 0, and ε_2 > 0. Let {u_j}_j = 1^∞⊆𝒞_c^1() be the sequence of functions whose existence is guaranteed by Proposition <ref>.
Fix m∈. Let X_m = {u_1, …, u_m}. Since u_j+1⊊ u_j for every j∈, X_m is an m-dimensional subspace of V^1L^p,q(Σ, μ). Hence
b_m(E) ≥inf_u∈ X_m∖{0}u_p^*,q,μ/∇ u_p,q,μ.
Since the functions ∇ u_j have disjoint supports, we have
(∑_j = 1^m α_j ∇ u_j )_*μ(t) = ∑_j = 1^m (∇ u_j)_*μ( t/|α_j|) for every t>0
and every {α_j}_j=1^∞⊆ (when α_j = 0, (∇ u_j)_*μ(t/|α_j|) is to be interpreted as 0). Furthermore, since q/p∈(0, 1], the function [0, ∞)∋ a ↦ a^q/p is subadditive, and so
( ∑_j = 1^m (∇ u_j)_*μ( t/|α_j|) )^q/p≤∑_j = 1^m (∇ u_j)_*μ( t/|α_j|)^q/p for every t>0
and every {α_j}_j=1^∞⊆. Therefore, combining these two observations with (<ref>), we arrive at
∑_j = 1^m α_j∇ u_j _p,q,μ^q ≤ p∑_j = 1^m ∫_0^∞ t^q - 1 (∇ u_j)_*μ( t/|α_j|)^q/p t
= p∑_j = 1^m |α_j|^q ∫_0^∞ t^q - 1 (∇ u_j)_*μ(t)^q/p t = ∑_j = 1^m |α_j|^q ∇ u_j_p,q,μ^q
for every {α_j}_j=1^∞⊆. Hence, combining this with the fact that ∇ u_j_p,q,μ = 1 for every j∈, we have
∑_j = 1^m α_j∇ u_j _p,q,μ≤(∑_j = 1^∞ |a_j|^q)^1/q
for every {α_j}_j=1^∞⊆.
Thanks to (<ref>) with (<ref>), we obtain
∑_j = 1^m α_j u_j_p^*,q,μ/∑_j = 1^m α_j ∇ u_j_p,q,μ≥( λ/1 + ε_1 - ε_2 )
for every {α_j}_j = 1^∞∈ℓ_q. It follows that
inf_u∈ X_m∖{0}u_p^*,q,μ/∇ u_p,q,μ≥( λ/1 + ε_1 - ε_2 ).
Therefore, by combining (<ref>) with (<ref>), we arrive at
b_m(E) ≥( λ/1 + ε_1 - ε_2 ).
By letting ε_2→0^+, ε_1→0^+, and λ→E^-, we obtain (<ref>).
Next, as for the fact that E is not strictly singular, we consider the infinite dimensional subspace Z of V^1L^p,q(Σ, μ) defined as Z = {u_j: j∈}. Note that
u=∑_j = 1^∞α_j u_j ∈𝒞_c^1() for every {α_j}_j = 1^∞ because the sum is locally finite and u ⊆ u_1. Using the same arguments as above, we obtain
inf_u∈ Z∖{0}u_p^*,q,μ/∇ u_p,q,μ≥E.
Finally, in order to establish that E is maximally non-compact,
we must show that E = β(E). Let us assume, for the sake of contradiction, that β(E) < E.
Choose any λ∈ (β(E), E ), and consider the sequence of functions {u_j}_j=1^∞
from Proposition <ref>, with arbitrarily chosen ε_1 and
ε_2 (whose specific values are irrelevant). Now, fix an r ∈ (β(E), λ). Since r > β(E), using the definition of the measure of non-compactness,
there are m∈ and functions {g_k}_k = 1^m ⊆ L^p^*,q(Σ, μ) such that
B_V^1 L^p, q(Σ, μ)⊆⋃_k = 1^m ( g_k + r B_L^p^*, q(Σ, μ)).
Set ε_0 = λ - r > 0.
Thanks to (<ref>), for every j∈ there is k_j∈{1, …, m} such that
u_j - g_k_j_p^*,q,μ≤ r.
Set
h_j = g_k_jχ_ u_j for every j∈.
Since
|h_j - u_j| ≤ |g_k_j - u_j| for every j∈ and μ-a.e. in Σ,
it follows from (<ref>) that
u_j - h_j_p^*,q,μ≤ r for every j∈.
Now, on the one hand,
h_j_p^*,q,μ≥u_j_p^*,q,μ - u_j - h_j_p^*,q,μ≥λ - r = ε_0 for every j∈.
On the other hand, since
h_j ⊆ u_j →∅ as j→∞
and
|h_j| ≤∑_k = 1^m |g_k| ∈ L^p^*, q(Σ, μ) for every j∈ and μ-a.e. in Σ,
it follows from the absolute continuity of the ·_p^*,q,μ norm that
lim_j →∞h_j_p^*,q,μ = 0.
However, a contradiction arises when considering
both (<ref>)
and (<ref>), thus achieving the desired result.
It follows from the equality (<ref>) that all injective (strict) s-numbers of the embedding operator E V^1 L^p,q(Σ, μ) → L^p^*,q(Σ, μ) coincide with the norm of the embedding. The reason is that Bernstein numbers are the smallest injective strict s-numbers (<cit.>), that is,
b_m(T) ≤ s_m(T) for every m∈,
for every injective (strict) s-number s and for every T∈ B(X,Y). A (strict) s-number is injective if the values of s_n(T) do not depend on the codomain of T. More precisely, s_n(J_N^Y ∘ T) = s_n(T) for every closed subspace N⊆ Y and every T∈ B(X, N), where J_N^Y N → Y is the canonical embedding operator.
Furthermore, the equality (<ref>) also shows that all entropy numbers e_m(E) of the embedding are equal to E. For m∈, the mth entropy number e_m(E) is defined as
e_m(E) = inf{ε > 0: B_V^1 L^p, q(Σ, μ)⊆⋃_j = 1^2^m - 1(g_j + ε B_L^p^*, q(Σ, μ)), g_1,…, g_2^m - 1∈ L^p^*, q(Σ, μ) }.
It is easy to see that E≥ e_1(E) ≥ e_2(E) ≥⋯≥ 0 and lim_m→∞e_m(E) = β(E), which together with β(E) = E implies that e_m(E) = E for every m∈.
|
http://arxiv.org/abs/2307.05476v1
|
20230705055856
|
Fisher-Weighted Merge of Contrastive Learning Models in Sequential Recommendation
|
[
"Jung Hyun Ryu",
"Jaeheyoung Jeon",
"Jewoong Cho",
"Myungjoo Kang 1"
] |
cs.IR
|
[
"cs.IR",
"cs.AI",
"cs.LG"
] |
Fisher-Weighted Merge of Contrastive Learning Models
in Sequential Recommendation
Jung Hyun Ryu^1,*, Jaeheyoung Jeon^2,*, Jewoong Cho^2,*, Myungjoo Kang^1,2
^1 Interdisciplinary Program in Artificial Intelligence, Seoul National University, Seoul, Korea
^2 Department of Mathematics, Seoul National University, Seoul, Korea
* Equal contribution. Correspondence: Myungjoo Kang <[email protected]>
Along with the exponential growth of online platforms and services, recommendation systems have become essential for identifying relevant items based on user preferences.
The domain of sequential recommendation aims to capture evolving user preferences over time.
To address dynamic preference, various contrastive learning methods have been proposed to target data sparsity, a challenge in recommendation systems due to the limited user-item interactions.
In this paper, we are the first to apply the Fisher-Merging method to Sequential Recommendation, addressing and resolving practical challenges associated with it.
This approach ensures robust fine-tuning by merging the parameters of multiple models, resulting in improved overall performance.
Through extensive experiments, we demonstrate the effectiveness of our proposed methods, highlighting their potential to advance the state-of-the-art in sequential learning and recommendation systems.
§ INTRODUCTION
With the exponential growth of online platforms and services, a significant amount of data is being generated daily.
Recommendation systems have become crucial in utilizing this data effectively.
These systems aim to identify relevant items based on user preferences and interests.
As user preferences evolve over time, sequential recommendation has gained attention as a subfield in this area.
We address the problem of sequential recommendation as follows.
Let 𝒰 be the set of users 𝒰 = {u_1, u_2, ⋯, u_|𝒰 |}, and 𝒱 be the set of items as 𝒱 = {v_1, v_2, ⋯, v_|𝒱 |}.
The sequence of user-item interaction for u_i is a list with chronological order, s_i=[v_1^u_i, v_2^u_i, ⋯, v_t^u_i, ⋯, v_n_u_i^u_i].
Here user u_i ∈𝒰, v_t^u_i∈𝒱, and user u_i interact item v_t^u_i in time step t.
The length of the sequence for user u_i is n_u_i, and our objective is to build a model predicting the item with which the user will interact in the next time step, i.e.
p(v_n_u_i+1^u_i = v | s_i).
The previous methodologies typically employ similar model structures but utilize various learning frameworks <cit.>.
Prior research has shown that ensemble methods yield significant benefits when multiple learning frameworks are employed <cit.>.
We propose a practical and feasible method to ensemble the parameters of models trained with different contrastive learning techniques in sequential recommendation.
The purpose of this study is to effectively aggregate the parameters θ obtained under various learning frameworks and hyperparameter settings, building on previous research and experiments.
By assuming a posterior distribution over the parameters θ_m of each m-th model (Section <ref>), we achieved more effective ensemble results.
This approach allowed us to capture the uncertainty associated with each model's parameter estimates and leverage this information to enhance the ensemble process.
By considering the posterior distributions, we were able to account for the variability in parameter values across different models and obtain a more robust and comprehensive ensemble outcome.
§ RELATED WORKS
Researchers have explored various ensemble methods, including bootstrapping, bagging, and boosting, to improve model performance <cit.>. Ovadia et al. <cit.> demonstrated the accuracy of ensembles even in the presence of distribution shift, while Mustafa et al. <cit.> proposed a method that combines fine-tuned subsets of pre-trained models to achieve high accuracy and resilience to distribution shift. Parameter merging is another technique to reduce model size and computational requirements <cit.>. However, ensemble methods often require additional training, which can be computationally expensive and time-consuming.
§.§ Diverse Learning Framework
Wenzel et al. wenzel2020hyperparameter and Zaidi et al. zaidi2021neural investigated the role of random hyperparameters and architectures in ensembles.
Gontijo et al. gontijo2021no demonstrated the ensemble effect across various training methodologies: initialization, hyperparameter, architecture, framework, and dataset levels.
Diverse training methodologies exhibit different generalization capabilities, which ultimately leads to uncorrelated errors.
Models tend to specialize in subdomains within the data, which highlights the crucial role of ensemble techniques in enhancing overall performance.
§.§ Merging Methods
Model Soup Model Soup <cit.> presents an effective approach for combining parameters without additional training.
It demonstrates research findings that improve the performance of trained models by constructing a "recipe" composed of diverse models and averaging their parameters.
The study introduces three methods for creating the recipe: the uniform soup, which averages the parameter values of all models; the greedy soup, which sequentially adds models based on their performance ranking; and the learned soup, which identifies the optimal model interpolation through training.
These approaches contribute to enhancing the overall performance of the model without the need for additional training.
Fisher Merging Within the scope of related works, parameter merging is interpreted as a process that maximizes the joint likelihood of model parameters' posteriors <cit.>.
A previous study <cit.> considers averaging as a scenario where the posteriors of these models are assumed to follow an isotropic Gaussian distribution, and the joint likelihood is maximized accordingly.
To refine this approach, efforts have been made to approximate the posterior of the model using Laplace approximation <cit.>.
In this case, each model's posterior is modeled as a Gaussian whose mean is the observed (i.e., trained) parameter and whose precision is given by the Fisher matrix. The joint likelihood is then calculated under this formulation.
§.§ Sequential Recommendation System
SASRec <cit.> employs Transformer layers to dynamically assign weights to previous items.
BERT4Rec <cit.> demonstrates an improvement by incorporating user behavior information from both directions using a bidirectional Transformer.
CL4SRec <cit.> employed three data augmentation techniques, namely item cropping, item masking, and item reordering, to create pairs for contrastive learning.
DuoRec <cit.> integrated two types of contrastive loss.
Firstly, it incorporated unsupervised augmentation using dropout-based model-level augmentation to generate positive pairs. Secondly, it incorporated supervised positive sampling, which involves creating pairs by considering sequences with the same target item as positive samples.
§ METHODOLOGY
We perform model ensemble based on different types of loss functions. BERT4Rec <cit.>, CL4SRec <cit.>, and DuoRec <cit.> share the basic structure of BERT4Rec <cit.>.
However, they differ in how they construct positive pairs, a key component of their contrastive learning frameworks.
Figure <ref> represents the overview of parameter merging process.
By sharing the structure of the model, which is parameterized with diverse learning frameworks, we can leverage ensemble methods to our advantage.
Furthermore, inspired by previous studies demonstrating the effectiveness of ensemble models trained using various learning methods, we apply parameter merging techniques, namely Parameter Averaging and Fisher-weighted Parameter Merging, described in Section <ref>, to combine these models.
§.§ Understanding Model Ensemble
We follow the work of Matena et al matena2022merging. Consider a scenario where we have models with the same structure, model_1, model_2, ⋯, model_M, with corresponding parameters θ_1, θ_2, ⋯, θ_M.
Our objective is to find the parameter θ^* that maximizes the joint likelihood of the posteriors of these parameters.
The posterior of θ_m can be represented as p(θ | θ_m).
Since obtaining this posterior directly is generally challenging, we can employ approximation methods such as the Laplace approximation to seek the parameter θ^* <cit.>.
Let us interpret the process of finding θ^* as maximizing the joint likelihood, ∑_m log p(θ|θ_m ).
Assuming that p(θ|θ_m) follows a Gaussian distribution, we set the mean of this Gaussian distribution as the observed θ_m and examine the procedure for averaging parameters and Fisher merging separately, depending on the method used to assume the variance.
Averaging Parameters
Assume that the posterior p(θ|θ_m) follows a Gaussian distribution 𝒩(θ̂_m, I).
Here, θ̂_m represents the parameters of the trained m-th model, and I denotes the identity matrix.
In this case, the desired solution θ^* can be obtained as the average of the parameters of the candidate models, as shown in eq.<ref>:
θ^* = argmax_θ∑_m log p(θ|θ_m, I) = 1/M∑_mθ_m .
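The uniform-averaging solution above can be computed directly from the trained checkpoints. Below is a minimal sketch, assuming all recipe models share the same architecture, the same state_dict keys, and floating-point parameters; the function name is ours.

import torch

def average_parameters(state_dicts):
    # theta* = (1/M) * sum_m theta_m, computed key by key over matching state_dicts.
    merged = {}
    for key in state_dicts[0]:
        merged[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return merged

# Usage: merged = average_parameters([m.state_dict() for m in models])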
Fisher Merging
Let us consider the posterior p(θ|θ_m) following a Gaussian distribution 𝒩(θ̂_m, H^-1).
Here, θ̂_m represents the parameters of the trained m-th model, and H corresponds to the Hessian matrix of θ_m obtained through the second-order Taylor expansion at the mode of the posterior.
It has been established that the Hessian matrix in this distribution coincides with the Fisher information, but for computational efficiency, we only utilize the diagonal elements of the Fisher matrix <cit.>.
The desired solution θ^* can be expressed as eq.<ref>, capturing the essence of the Fisher likelihood :
θ^* = argmax_θ∑_m λ_m log p(θ|θ_m, F_m),
where F_m = 𝔼_x𝔼_y∼ p_θ(y|x)∇_θlog p_θ(y|x)∇_θlog p_θ(y|x)^T. The closed-form solution for θ^* can be obtained as shown in eq.<ref>, which directly incorporates the Fisher matrix. In practice, we utilize an empirical estimate of the Fisher matrix, denoted as F̂, as shown in eq.<ref> <cit.>.
θ^*(j) = ∑_m λ_m F_m^(j)θ_m^(j)/∑_m λ_m F_m^(j),
where F_m = 1/N𝔼_y∼ p_θ(y| x)(∇_θlog p_θ(y| x))^2 and j= 1,⋯,|θ|, with the square taken element-wise.
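A minimal sketch of this closed-form merge is given below, assuming each model provides its trained parameters θ_m and a diagonal Fisher estimate F_m stored as dictionaries with matching keys; the small eps term is our addition to guard against zero Fisher entries.

import torch

def fisher_merge(thetas, fishers, lambdas, eps=1e-8):
    # Element-wise theta*(j) = sum_m lam_m F_m(j) theta_m(j) / sum_m lam_m F_m(j).
    merged = {}
    for key in thetas[0]:
        num = sum(lam * F[key] * th[key] for lam, F, th in zip(lambdas, fishers, thetas))
        den = sum(lam * F[key] for lam, F in zip(lambdas, fishers))
        merged[key] = num / (den + eps)  # eps (our addition) avoids division by zero
    return merged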
§.§ Applying Model Ensemble
By expressing the Fisher matrix we intend to compute in eq.<ref> in terms of recommendation factors, we can decompose it into the following components:
𝔼_x_i𝔼_y∼ p_θ(y| x_i)(∇_θlog p_θ(y| x_i))^2
=1/N∑_i ∑_j p_θ(y_j| x_i) (∇_θlog p_θ(y_j| x_i))^2
=1/|𝒰|∑_i^|𝒰|∑_j^|𝒱| p_θ(v_j| s_i) (∇_θlog p_θ(v_j| s_i))^2 .
There are two computational challenges associated with the above equation.
First, calculations need to be performed for each individual sample s_i.
Second, calculations need to be performed for each item v_j within a single sample.
These points act as drawbacks in recommendation systems because of the large number of users and items in the data.
For instance, in the case of MovieLens-1M dataset <cit.>, there are about 6000 users and 3500 items.
However, performing Fisher matrix calculations that require differentiation with respect to θ for each user and item becomes a computational burden.
§.§.§ Sampling sequences
Batch-wise Computation
To address the first challenge of performing computations on individual samples, we reinterpret the equation and carry out the calculations on a batch basis. It should be noted that p_θ(v_j|s_i) can vary for each sample s_i. Therefore, we perform the sorting of p_θ(v_j|s) to address this variation, where BS indicates batch size:
∑_i^|𝒰|∑_j^|𝒱| p_θ(v_j| s_i) (∇_θlog p_θ(v_j| s_i))^2
=∑_BS_k∑_j^|𝒱|(∑_i^BS_k p_θ(v_j| s_i) ) (∇_θ∑_i^BS_klog p_θ(v_j|s_i))^2.
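The following sketch illustrates, under our assumptions, how such a batch-wise diagonal Fisher estimate could be accumulated; model(batch) returning log-probabilities of shape (batch, |V|), the data loader, and the sampled_items list are hypothetical interfaces, and the final normalization is only up to a constant that is shared by all recipe models and therefore cancels in the merging ratio.

import torch

def accumulate_fisher(model, data_loader, sampled_items):
    # Diagonal Fisher estimate accumulated batch by batch over a subset of items.
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for batch in data_loader:
        log_probs = model(batch)                               # assumed shape: (batch, |V|)
        for j in sampled_items:
            model.zero_grad()
            log_probs[:, j].sum().backward(retain_graph=True)  # grad of sum_i log p_theta(v_j | s_i)
            weight = log_probs[:, j].exp().sum().item()        # sum_i p_theta(v_j | s_i)
            for n, p in model.named_parameters():
                if p.grad is not None:
                    fisher[n] += weight * p.grad.pow(2)
        n_batches += 1
    return {n: f / max(n_batches, 1) for n, f in fisher.items()}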
§.§.§ Sampling items
To alleviate the computational burden associated with iterating over all j values, which scales with |𝒱|, we employ a sampling-based approach within the methodology.
This sampling strategy aims to reduce the computational cost while maintaining the representativeness of the calculations.
Random Sampling
We compute the eq.<ref> by randomly sampling j from the total number of items.
This process was performed to calculate the Fisher matrix without any specific assumptions or prior knowledge.
Top-k Sampling
The probability which is output by the model can be interpreted as the preference or likelihood of the recommended items for a given sample.
Based on this interpretation, we select a set of n items that are most likely to be of interest to the corresponding user, i.e. p_θ(v_j| s_i).
Subsequently, we compute the Fisher matrix with these selected items as the focal points.
By focusing on this subset of items that are expected to be of highest interest, we aim to capture the relevant information for optimizing the model's performance effectively.
∑_j^|𝒱| p_θ(v_j| s) (∇_θlog p_θ(v_j| s))^2
≈∑_j^top-k p_θ(v_j| s) (∇_θlog p_θ(v_j| s))^2.
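A minimal sketch of the top-k selection, assuming log_probs has shape (batch, |V|); ranking items by their batch-averaged probability is one possible instantiation.

import torch

def topk_items(log_probs, k=30):
    # Keep only the k items with the largest batch-averaged probability.
    mean_probs = log_probs.exp().mean(dim=0)      # average p_theta(v_j | s) over the batch
    return torch.topk(mean_probs, k).indices.tolist()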
Model-based Sampling
To select a subset of items for further analysis, we randomly sampled items based on their conditional probability p_θ(v_j|s_i) using a weighted random selection process. The selection probability of each item was determined by its associated probability stored in the model's output. By selecting items with higher probabilities, we focused on a specific number of items that were more likely to align with the user's preferences or interests. This allowed us to analyze and evaluate the subset of items based on their associated probabilities obtained from the model's output. With N denoting the sample size, this approximation can be represented as:
𝔼_y∼ p_θ(y| x)(∇_θlog p_θ(y| x))^2
≈1/N∑_v_j ∼ p_θ(v_j| s)^N(∇_θlog p_θ(v_j| s))^2.
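A minimal sketch of this weighted random selection, again assuming log_probs has shape (batch, |V|); items are drawn with probability proportional to their batch-averaged model probability.

import torch

def model_based_items(log_probs, n=30):
    # Draw n items with probability proportional to their batch-averaged model probability.
    mean_probs = log_probs.exp().mean(dim=0)
    return torch.multinomial(mean_probs, n, replacement=True).tolist()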
Calculate with target item
We compute the Fisher matrix based on the target item, disregarding other items with limited direct relevance.
By employing this approach, we focus solely on the target item and its associated information to calculate the Fisher matrix.
Our rationale behind this decision is to prioritize the target item's impact on the model's optimization process, as it is directly linked to the specific objective or task at hand.
Consequently, we exclude items with minimal direct relevance to ensure a more targeted and meaningful computation of the Fisher matrix.
p_θ(v_j^*| s) (∇_θlogp_θ(v_j^*| s))^2,
where v_j^* is the target item.
§ EXPERIMENTS
We use MovieLens-1M dataset <cit.> for experiments.
For each user, we have sequential data consisting of movies purchased in chronological order.
We adopt next-item prediction task (i.e. leave-one-out evaluation), following previous works <cit.>.
The last movie is considered as the test set, and the validation data is used to predict the preceding movies.
During training, we adopt a masked language modeling approach similar to BERT <cit.>, where we mask certain movies in the sequentially ordered list and task the model with predicting them.
The evaluation method used in this study is the Normalized Discounted Cumulative Gain at 10 (NDCG@10), which is a ranking-based evaluation approach <cit.>. It ranks the top 10 items predicted by the model based on their perceived preference and considers the actual ranking of the preferred items. A higher NDCG value, closer to 1, indicates better performance. Different NDCG values can be obtained depending on the selection of items, such as from the full item pool, a random set of 100 items, or the top 100 most popular items.
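For leave-one-out evaluation each user has a single held-out target, so NDCG@10 reduces to a reciprocal-log-rank score (IDCG = 1). A minimal sketch, with a hypothetical ranked list:

import math

def ndcg_at_10(ranked_items, target):
    # Leave-one-out NDCG@10: 1/log2(rank+2) if the target is in the top-10, else 0.
    if target in ranked_items[:10]:
        rank = ranked_items[:10].index(target)    # 0-based position of the target
        return 1.0 / math.log2(rank + 2)
    return 0.0

print(ndcg_at_10([7, 12, 3, 99], target=3))       # target ranked third: 1/log2(4) = 0.5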
§.§ Results of Model Merging
We examine the results in Table <ref> and Table <ref>.
Table <ref> presents the results obtained by training the models, namely BERT4Rec <cit.>, CL4SRec <cit.>, and DuoRec <cit.>, solely from scratch; we merge these models using the Fisher method.
Table <ref> presents the results of the fine-tuning setting.
In the fine-tuning setting, we train the baseline model without contrastive loss for 20 epochs, which is the convergence point of the baseline experiment without any additional contrastive loss, similar to BERT4Rec.
Following this, each model (BERT4Rec <cit.>, CL4SRec <cit.>, DuoRec <cit.>) underwent fine-tuning according to its respective method, and the results were merged using the Fisher method.
In both settings, we fine-tuned for an additional epoch after the merging process.
Fisher merging fails to improve upon the individual models in the baseline setting.
When Fisher merging is applied in the fine-tuning setting, however, it leads to improved performance compared to the individual models.
This finding aligns with previously reported phenomena <cit.>, where individual models tend to outperform the merged model in the baseline setting.
Moreover, the Fisher-merged model in the fine-tuning setting achieves performance comparable to the individual models of the baseline setting, whereas the individual recipe models of the fine-tuning setting do not reach that level.
The results also indicate that even for models that have not been sufficiently trained, such as CL4SRec in our setting, merging parameters yields performance comparable to the other models, demonstrating robustness.
§.§ The Validity of Batch-wise Computation
We performed batch-wise computations with the aim of implementing an efficient Fisher matrix calculation. Compared to computing on individual samples, grouping samples into batches allowed us to achieve computational efficiency.
The following Figure <ref> in Appendix <ref> illustrates the method for minimizing errors when performing calculations on a batch basis. The figure demonstrates that within a batch containing 10 samples, denoted as s_i, there is a phenomenon where the probabilities of item v_j decrease in a similar manner. By sorting the samples s_i based on the probability of v_j, even when grouping them into batches, it is possible to minimize the error described by the eq.<ref>. Furthermore, the figure illustrates the rationale behind top-k sampling. For the top-k items, the probabilities hold meaningful information, whereas for the remaining items, the probabilities are nearly zero or close to it.
§.§ Effect of Sampling Methods and Size
To investigate the effect of sampling methods, we conduct experiments by varying the number of sampled samples and the sampling techniques employed.
Specifically, we consider three sample sizes: n=10, n=30 and n=50, and four different sampling methods: random sampling, top-k sampling, model-based sampling, and calculate with target item.
The results of these experiments can be observed in Table <ref>.
The table provides insights into the performance of each sampling method under different sample sizes, allowing us to analyze their respective effects on the task at hand.
Note that this result is calculated on batched data.
To examine the results of parameter merging, we conducted experiments in the fine-tuning setting explained in <ref>.
The experiments revealed effective ensemble results, particularly showcasing the robust performance of CL4SRec <cit.>.
Although it performed significantly worse than the other models entering the merging process, CL4SRec exhibited robust performance in the Fisher-merged results.
Regarding the sampling methods, top-k sampling demonstrated the best performance.
This can be attributed to the concentration of probabilities assigned to specific items by the model, effectively approximating the Fisher criterion sought in the evaluation.
Also, the model-based sampling method exhibits a more pronounced improvement in performance as the sampling size increases, compared to the other sampling methods.
We interpret these results as being rooted in the direct interpretation of the equation defined for Fisher merging.
Interestingly, although calculating the Fisher matrix on the target item uses only a single sample per sequence, this method achieved good performance, demonstrating its effectiveness even with a small sample size.
These findings shed light on the interpretation of experimental results in the context of deep learning research.
§.§ Computational Cost
Figure <ref> shows the computational cost, in terms of time consumed, of calculating the Fisher matrix for a single model.
The concept of parameter merging involves additional computation on the existing parameters.
Therefore, it is important to ensure efficiency in this process. To achieve efficiency, considerations such as calculating the Fisher matrix in batch units and performing sampling are necessary.
It is observed that, except for the calculation on the target item, the computational complexity increases linearly with the sampling size.
As for the calculation on the target item, the sampling size remains fixed at 1 since each sequence has a single target item.
Thus, our research is significant in that it approximates the Fisher matrix calculation with a much smaller number of items than the entire item set of around 3000 items.
§.§ Visualization of Merged Weights
We present a visual illustration to aid in the intuitive understanding of the merged weights.
Figure <ref> represents the fine-tuning setting of <ref>, where the three centroids correspond to the weights of individual models.
The plane visualized in <ref> encompasses these three weights.
The scattered points, projected onto the plane, depict 100 samples drawn from 𝒩(θ_m, F_m).
It is observed that the baseline weight exhibits the largest variance.
This can be attributed to the experimental setup where the baseline is pre-trained and then fine-tuned with CL4SRec <cit.> and DuoRec <cit.>.
The weights obtained through uniform merging are represented as the average of the three centroid points, while the weights obtained through Fisher merging take into account the variances of these recipe weights.
It can be seen that the weights obtained through Fisher merging, which take the posterior and its variance into account via the Laplace approximation, provide a good initial point for fine-tuning.
§ CONCLUSION
We apply an ensemble technique, Fisher merging, to sequential recommendation models, enabling robust fine-tuning through parameter merging.
Our experimental results demonstrate the effectiveness of these proposed methods in improving recommendation performance.
These contributions have the potential to advance the field of sequential learning and recommendation systems, offering valuable insights for future research and practical applications.
§ ACKNOWLEDGEMENTS
This work was supported by the NRF grant [2012R1A2C3010887] and the MSIT/IITP [1711117093, 2021-0-00077, 2021-0-01343, Artificial Intelligence Graduate School Program(SNU)].
§ APPENDIX
§.§ Motivation : Error Inconsistency
Previous research <cit.> demonstrated the increased effectiveness of ensemble methods as error inconsistency grows.
Building upon the existing research discourse, we conducted the current experiment.
In this study, we analyze the impact of Fisher merging in the context of sequential recommendation systems, attributing its effectiveness to the selection of recipe models trained using different frameworks.
In our experiments, we employ a model based on the BERT4Rec <cit.> architecture as our baseline.
To enhance the performance of the model, we apply various data augmentation techniques to enable contrastive learning.
To analyze the effects of contrastive loss, we divide the training frameworks into two categories: similar and dissimilar.
The similar learning frameworks are trained using the same loss function but with slight variations such as different seeds and hyperparameters, indicating the relationship among models trained with small changes.
On the other hand, the dissimilar learning frameworks involve different data augmentation techniques, resulting in variations in the construction of positive and negative pairs for contrastive loss <cit.>.
Error inconsistency <cit.> refers to the percentage of data where two models have different classification results, with one model making correct predictions while the other model makes incorrect predictions.
Since we are not dealing with classification, we considered a model to have made a correct prediction if the value of NDCG@10 is above 0.5.
By comparing the error inconsistency between similar framework and dissimilar framework, we observe the effectiveness of contrastive loss.
An observation that can be inferred from Table <ref> is that the construction of positive pairs for the contrastive loss significantly affects the similarity of the samples that the models predict accurately.
As the method for constructing positive pairs varies, the models show considerable differences in which samples they predict correctly.
This finding highlights the sensitivity of the models to the specific construction of the contrastive loss, which in turn impacts their predictive performance.
§.§ Robustness of Fisher Merging; Recipe Selection
We compare two different recipe selections in Table <ref>: Fisher-merged parameters including the least-performing model, and Fisher-merged parameters without that model. In our experimental setup, CL4SRec did not exhibit superior performance compared to the other models under the chosen hyperparameter settings and other factors. We therefore leverage this element of the recipe to demonstrate the robustness effect of Fisher merging. Our findings confirm that, when the underperforming model is removed as an individual component and Fisher merging is applied, the resulting ensemble still demonstrates robustness.
§.§ Visualization of Sorted Probability
The figure displays the sorted probabilities of the top 50 items for 10 sequences, where each line represents a single sequence. The cumulative probability values for sample sizes of 10, 30, and 50 are 0.381, 0.569, and 0.658, respectively. With the exception of a few of the largest ones, the majority of probabilities are approximately 0.
|
http://arxiv.org/abs/2307.00490v1
|
20230702063529
|
Riemannian Trust Region Methods for SC$^1$ Minimization
|
[
"Chenyu Zhang",
"Rufeng Xiao",
"Wen Huang",
"Rujun Jiang"
] |
math.OC
|
[
"math.OC",
"90C30, 49J52, 65K05, 90C26"
] |
Riemannian Trust Region Methods for SC^1 Minimization
Chenyu ZhangData Science Institute, Columbia University, New York, USA. ([email protected]).Rufeng XiaoSchool of Data Science, Fudan University, Shanghai, China. ([email protected]).Wen HuangSchool of Mathematical Sciences, Xiamen University, Xiamen, China. ([email protected]).Rujun JiangSchool of Data Science, Fudan University, Shanghai, China. ([email protected]).
========================================================================================================================================================================================================================================================================================================================================================================================================
Manifold optimization has recently gained significant attention due to its wide range of applications in various areas. This paper introduces the first Riemannian trust region method for minimizing an SC^1 function, which is a differentiable function that has a semismooth gradient vector field, on manifolds with convergence guarantee. We provide proof of both global and local convergence results, along with demonstrating the local superlinear convergence rate of our proposed method. As an application and to demonstrate our motivation, we utilize our trust region method as a subproblem solver within an augmented Lagrangian method for minimizing nonsmooth nonconvex functions over manifolds. This represents the first approach that fully explores the second-order information of the subproblem in the context of augmented Lagrangian methods on manifolds. Numerical experiments confirm that our method outperforms existing methods.
§ INTRODUCTION
Manifold optimization has emerged as a significant research area due to its broad applicability in various fields, including phase retrieval , phase synchronization , low-rank matrix completion , principal component analysis , and deep learning .
In a manifold optimization problem, the feasible region is on a smooth manifold, such as a sphere or a Stiefel manifold.
Extensive research has been conducted in the past few decades on optimizing smooth objective functions on manifolds
, and <cit.> summarizes several classical algorithms in this field, such as Newton's method, line-search methods, and trust region methods.
However, these methods encounter challenges when the objective function becomes nonconvex. Hence, manifold optimization with a nonconvex objective function has become an active area of research in recent years .
In this paper, we consider an unconstrained nonconvex optimization problem on a manifold:
min_x∈ϕ(x)
where φ is bounded below on a complete Riemannian manifold ℳ, has a Lipschitz continuous and locally directionally differentiable gradient field, but may not be twice differentiable.
Particularly, we will consider the case where ϕ is an SC^1-function: a differentiable function that has a semismooth gradient vector field. We defer the definition of semismoothness to <ref>.
SC^1 objective functions are commonly encountered in various domains, including stochastic quadratic programs <cit.> and nonlinear minimax problems <cit.>.
§.§ Related Work
Semismooth Newton methods.
For an SC^1 problem <ref> in a Euclidean space, semismooth Newton (SSN) methods have been widely applied <cit.>.
Recently, extended the SSN methods to Riemannian manifolds.
Then, the Riemannian semismooth Newton method was applied to solve various optimization problems on manifolds. For instance,
<cit.> applied it to solve a primal-dual optimality system on a manifold,
while <cit.> applied it to solve the subproblem of ALM on manifolds.
In both papers, the Newton system was solved inexactly, and a superlinear local convergence rate was established.
Trust region methods.
<cit.> extended trust region methods to Riemannian manifolds, and established a superlinear convergence result similar to the Euclidean case. However, their smoothness requirements for the objective function are strong, assuming ϕ is twice continuously differentiable and its Hessian is Lipschitz continuous.
For objective functions that may not be twice differentiable, generalized Hessians have been considered for Euclidean trust region methods.
<cit.> proposed a globally
and superlinearly convergent trust region algorithm for the variational inequality problem, which utilizes the D-gap function and its computable generalized Hessian.
For the same problem, <cit.> also proposed a trust region type method, which only switches to a trust region step when Newton's step fails to yield a sufficient decrease, thus avoiding the use of the trust region near a strict local minimum.
<cit.> considered a constrained convex SC^1 problem, and similar to <cit.>, their algorithm only resorts to a trust region strategy when Newton's step fails.
Problem (<ref>) as subproblems in two methods.
Recently, <cit.> and extended augmented Lagrangian methods to nonsmooth nonconvex manifold optimization and established convergence guarantees, which motivates the research in this paper. However, neither of these papers incorporates a full second-order method to solve the subproblem in the ALM; the former solves the subproblem using the Riemannian gradient descent method, while the latter employs an SSN method that falls back to the gradient descent method when encountering negative curvatures.
For minimizing a composite function over a manifold, <cit.> reformulated the objective function utilizing dynamic smoothing, whose subproblem is also in form (<ref>).
§.§ Contributions
In this paper, we introduce a novel Riemannian trust region method for minimizing SC^1 functions on Riemannian manifolds. We prove the global convergence, local convergence near nondegenerate local minima, and superlinear local convergence rate of our method under mild conditions, adapted from its Euclidean counterpart. Our method is the first Riemannian trust region approach to attain a provable superlinear local convergence rate without the need for the objective function to be twice differentiable. Moreover, we relax the smoothness requirement on the retraction to only necessitate a Hölder continuous differential, which aligns with the SC^1 objective function. In contrast, the prior work <cit.> requires the retraction to have a Lipschitz continuous differential for global convergence of the algorithm and to be twice continuously differentiable for the algorithm's superlinear local convergence rate. Furthermore, we provide a proof of the trust region's inactivity near a nondegenerate minimizer, which plays a crucial role in establishing the superlinear local convergence rate. To the best of our knowledge, this is the first guarantee of the eventual inactivity of the trust region for semismooth trust region methods, even within the framework of Euclidean spaces.
As an important application, we employ our semismooth Riemannian trust region method to solve the subproblem of the ALM on manifolds. Notably, our approach stands out as the first method to fully exploit the second-order information of the ALM's subproblem, thereby benefiting from the advantages offered by trust region methods over first-order methods and the Newton method, including adaptive step-size and automatic detection of negative curvature.
Through numerical experiments on compressed modes and sparse principal component analysis, we demonstrate that our proposed method outperforms existing methods, achieving better convergence performance, characterized by faster convergence speed and improved objective function values.
§.§ Organization
This paper is organized as follows. In <ref>, we briefly review the basic concepts in Riemannian manifold optimization and trust region methods, introducing tools required for our algorithms and analysis.
In <ref>, we present our Riemannian trust region method for minimizing an SC^1 function on a manifold.
In <ref>, we establish the convergence results of our algorithm, including global convergence, local convergence (including attraction property of nondegenerate local minimizers), and superlinear local convergence rate. In <ref>, we present an application of our method, solving subproblems of augmented Lagrangian methods on manifolds. Finally, we evaluate our algorithm through multiple numerical experiments, including compressed modes and sparse principal component analysis in <ref>.
§ PRELIMINARIES
§.§ Riemannian Manifold Optimization
In this section, we provide a brief introduction to Riemannian manifold optimization, assuming familiarity with basic concepts such as smooth manifolds, tangent spaces, and smooth mappings on manifolds.
Omitted definitions in this section and more details can be found in monographs such as <cit.> and <cit.>.
We summarize the notations on Riemannian manifold optimization we will use in <ref>.
In this paper, we focus on Riemannian manifolds, which are smooth manifolds equipped with an inner product <·,·>_x on the tangent space x varying smoothly with respect to x on the manifold . The family of inner products is called the Riemannian metric on the manifold.
This paper deals with general Riemannian manifolds, and we only consider their intrinsic properties and always use the notation <·, ·>_x to refer to the Riemannian inner product.
The Riemannian metric also introduces a norm on the tangent space, defined by ‖ξ‖_x = √(⟨ξ,ξ⟩_x) for any ξ∈ T_xℳ, and a distance on the manifold, defined by d(x,y) = inf_γ∫_0^1 ‖γ'(t)‖_γ(t) dt for all x,y∈ℳ, where the infimum is taken over all piecewise smooth curves γ: [0,1] →ℳ connecting x and y.
Throughout the paper, we will drop the subscript x of the Riemannian inner product and norm if it is clear from the context.
In manifold optimization, we still need to return to linear spaces, like tangent spaces, to perform various operations.
However, unlike the Euclidean case, the tangent vectors at different points of a manifold are not in the same tangent space. So we need a mapping to bridge different tangent spaces. This is where the concepts of geodesics and parallel transports come into play.
Let γ:[0,1]→ be a smooth curve.
* A vector field X is said to be parallel along γ if ∇_γ'(t)X = 0 for any t ∈ [0,1], where ∇ is the Riemannian connection (Levi-Civita connection).
* γ is said to be geodesic if the field of its tangent vector γ'(t) is parallel along itself.
* For any ξ∈γ(0), there exists a unique parallel vector field X_ξ along γ such that X_ξ(0) = ξ.
The parallel transport operator along γ is defined by P_γ^0→ t: ξ↦ X_ξ(t). When the curve is geodesic and connects x,y, we denote P_xy P_γ^0→ 1.
Table: Notations

ℳ : A complete Riemannian manifold
T_xℳ : The tangent space at x∈ℳ
ℒ(T_xℳ) : The set of linear operators from T_xℳ to itself
ℝ, ℕ : The set of all real numbers; the set of all natural numbers
ℝ^k, E : A vector space; a general Euclidean space
ℝ^k_-, ℝ^k_+ : The set of elements in ℝ^k with non-positive/non-negative components
B_δ(p) : The ball with center p∈ S and radius δ, where p ∈ S can be x∈ℳ, 0_x∈ T_xℳ, id_x∈ℒ(T_xℳ), etc.
x,y,z,p,q : Points on manifolds
X,Y,Z : Vector fields on manifolds
ξ,η,ζ,v : Vectors in the tangent space
γ : A piecewise smooth curve on manifolds
⟨·,·⟩_x, ⟨·,·⟩ : The Riemannian inner product on T_xℳ
‖·‖_x, ‖·‖ : The Riemannian norm on T_xℳ
‖·‖_op : The operator norm
d(x,y) : The distance between x,y∈ℳ
d f : The differential of the function f on manifolds
∇_v X : The Riemannian covariant derivative of X at t along v (γ'(t) = v)
∇ : The Riemannian connection (Levi-Civita connection) on ℳ
P_γ^t_1→ t_2 : The parallel transport along γ from t_1 to t_2
P_xy : The parallel transport along the geodesic from x to y
exp_x : The exponential map at x∈ℳ
R_x : The retraction restricted to x
grad f : The Riemannian gradient of f
Hess f : The Riemannian Hessian of f
∂ f, ∂ X : The Clarke subdifferential of the function f; the Clarke generalized covariant derivative of the vector field X
H_x, H_k : An element in the Clarke generalized covariant derivative ∂ grad ϕ(x) / ∂ grad ϕ(x_k)
The parallel transport is an isometry that preserves the inner product, i.e., <P_xyξ_x, P_xyη_x>_y = <ξ_x,η_x>_x for any tangent vectors ξ_x,η_x∈x. Hence, we can freely transfer vectors between different tangent spaces using the parallel transport.
From the tangent space to the manifold, geodesics naturally introduce a local map called the exponential map defined as exp_x: ξ↦γ(1), where γ is a unique geodesic such that γ(0) = x, γ'(0) = ξ, and (γ(0),γ(1)) = ξ. The exponential map is a local diffeomorphism, and we define the injective radius of x as follows:
inj_x(ℳ) := sup{δ>0: exp_x is a diffeomorphism on B_δ(0_x) ⊂ T_xℳ}.
The global injective radius of the manifold is defined as inj(ℳ) := inf_x∈ℳ inj_x(ℳ).
We rely heavily on the smooth diffeomorphism offered by the exponential map, both in the algorithms and analysis. Therefore, throughout the paper, we restrict all neighborhoods of a point x to be within x's normal neighborhood: exp_x(B_inj_x(ℳ)(0_x)).
This restriction is possible when we only consider a compact subset Ω of the manifold, and thus the injective radius of Ω has a positive lower bound.
For the rest of the paper, we omit this requirement and the concept of injective radius.
In addition, we assume that the manifold is complete, i.e., any locally defined geodesic can be extended to the entire real axis. This assumption ensures that the exponential map is well-defined on the entire tangent space, and the shortest curve connecting two points is a geodesic (see <cit.>).
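As a concrete illustration (not part of the paper's development), the unit sphere with the round metric admits a closed-form exponential map, which the following minimal Python sketch implements under the assumption that ξ is tangent at the unit vector x:

import numpy as np

def sphere_exp(x, xi):
    # Exponential map on the unit sphere: follow the great circle leaving x in direction xi.
    nrm = np.linalg.norm(xi)
    if nrm < 1e-12:
        return x.copy()
    return np.cos(nrm) * x + np.sin(nrm) * xi / nrm

x = np.array([1.0, 0.0, 0.0])
xi = np.array([0.0, np.pi / 2, 0.0])     # tangent vector of length pi/2 at x
print(sphere_exp(x, xi))                 # a quarter of a great circle: approximately [0, 1, 0]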
We now introduce some key tools related to functions on manifolds that will help us apply algorithms to solve manifold optimization problems.
For a function f from ℳ to ℝ, its Riemannian gradient at x is the unique tangent vector grad f(x) such that
⟨grad f(x), ξ⟩ = d f(x)[ξ], ∀ξ∈ T_xℳ,
where d f(x) is the differential of f at x; its Riemannian Hessian Hess f(x) is an element in ℒ(T_xℳ), the set of all linear operators from T_xℳ to itself, such that
Hess f(x)[ξ] = ∇_ξ grad f(x), ∀ξ∈ T_xℳ,
where ∇ is the Riemannian connection (Levi-Civita connection) on ℳ.
The problem we consider is not necessarily smooth; to obtain its second-order information, we need the following definitions, which can be found in .
A vector field X on ℳ is said to be locally Lipschitz continuous if for any x∈ℳ, there exist a radius δ_x > 0 and a constant L_x >0 such that
‖P_yzX(y) - X(z)‖≤ L_x d(y,z), ∀ y,z ∈ B_δ_x(x).
A function is said to be Lipschitz continuously differentiable if its gradient vector field is a Lipschitz vector field with a global Lipschitz constant L and a global radius of neighborhood δ.
For a vector field X, its directional derivative at x∈ℳ along ξ∈ T_xℳ is defined as
∇ X(x;ξ) := lim_t→ 0^+1/t( P_exp_x(tξ),x X(exp_x(tξ)) - X(x) ).
X is said to be directionally differentiable at x if ∇ X(x;ξ) exists for all ξ∈ T_xℳ.
If X is differentiable at x, then it is directionally differentiable, and ∇ X(x;ξ) = ∇_ξ X(x) for all ξ∈x.
For a locally Lipschitz continuous function f on ℳ, its Clarke Riemannian generalized directional derivative at x∈ℳ along v∈ T_xℳ is defined as
f^∘(x;v) := lim sup_y → x, t→ 0^+ [f∘ϕ^-1(ϕ(y) + t dϕ(x)[v]) - f∘ϕ^-1(ϕ(y))]/t,
where (U,ϕ) is any coordinate chart at x.
Then the Clarke subdifferential of f at x∈ℳ is defined as
∂ f(x) := {ξ∈ T_xℳ : ⟨ξ,v⟩≤ f^∘(x;v), ∀ v ∈ T_xℳ}.
The elements in the Clarke subdifferential can be viewed as generalized gradients and are frequently used in subgradient methods for manifold optimization <cit.>.
Similarly, we can define a generalized Hessian for non-twice differentiable functions.
<cit.> states that locally Lipschitz continuous vector fields on are differentiable almost everywhere, allowing us to introduce the Clarke generalized covariant derivative of such vector fields.
The Clarke generalized covariant derivative of a locally Lipschitz continuous vector field X at x∈ℳ is defined as
∂ X(x) := {H∈ℒ(T_xℳ) : ∃{x_k}⊂𝒟_X, lim_k→∞x_k = x, H = lim_k→∞P_x_kx∇ X(x_k) P_x x_k},
where 𝒟_X is the collection of points on the manifold where X is differentiable.
Since Hessians at differentiable points are self-adjoint <cit.>, all elements in the Clarke generalized covariant derivative, and their inverse (if exists), are self-adjoint.
Note that we use the same notation for the Clarke subdifferential and the Clarke generalized covariant derivative, as these two definitions are equivalent and incorporate the Euclidean case (see <cit.>),
although we can define other generalized gradients, such as the Fréchet subdifferential, and establish their corresponding optimality conditions and related algorithms <cit.>.
While elements in the Clarke generalized covariant derivative of the gradient field are not necessarily Hessian operators, they possess desired properties that make them suitable replacements for Hessian operators in our algorithms.
Let ∂ X be the Clarke generalized covariant derivative of a locally Lipschitz continuous vector field X on ℳ. The following statements are valid for any x ∈ℳ:
* ∂ X(x) is a nonempty, convex, and compact subset of ℒ(T_xℳ);
* ∂ X is locally bounded; that is, for any δ>0, there exists C>0 such that for all y ∈ B_δ(x) and H ∈∂ X(y), it holds that ‖H‖≤ C;
* ∂ X is upper semicontinuous at x; that is, for any scalar ϵ>0, there exists δ > 0 such that for all y ∈ B_δ(x), it holds that
P_yx∂ X(y) P_xy⊂∂ X(x)+B_ϵ(0),
where B_ϵ(0):={H ∈ℒ(T_xℳ): ‖H‖<ϵ}.
Consequently, ∂ X is closed at x; that is, if lim _k →+∞ x_k=x, H_k∈∂ X(x_k) for all k=0,1, …, and lim _k →+∞ P_x_k x H_kP_x x_k=H, then H ∈∂ X(x).
§.§ Trust Region Methods
Trust region methods are an extension of Newton's method, which have better convergence properties and relax the convexity requirement by automatically detecting the negative curvature. For a comprehensive discussion, we refer readers to monographs such as <cit.> and <cit.>.
While trust region methods share the same objective function as Newton's method in the model problem, they possess a trust region constraint.
Specifically, for a smooth function ϕ in a Euclidean space E, a classical trust region method (see <cit.>) often chooses the model problem at x_k as
min_η∈ E m_x_k(η) := ϕ(x_k) + ⟨grad_Eϕ(x_k), η⟩_E + 1/2⟨Hess_Eϕ(x_k)η,η⟩_E
s.t. ‖η‖_E≤Δ_k,
where grad_E, Hess_E, ⟨·,·⟩_E, and ‖·‖_E are the Euclidean gradient, Hessian, inner product, and norm respectively.
After solving the model problem, a trust region method compares the actual decrease and model decrease by computing the relative decrease ratio, which is used to determine the next iteration point and trust region radius. We defer the implementation details of a trust region method to the next section.
§ SEMISMOOTH RIEMANNIAN TRUST REGION METHOD
In this section, we present a semismooth Riemannian trust region method to solve the unconstrained nonconvex problem on a manifold:
min_x∈φ(x),
where φ is bounded below on ℳ, has a Lipschitz continuous and locally directionally differentiable gradient field with a Lipschitz constant L, but may not be twice differentiable.
To overcome the absence of the Hessian, we utilize the Clarke generalized covariant derivative of the gradient field in our trust region method. To ensure the super-linear local convergence rate, we require ϕ to be SC^1, i.e., impose the semismoothness condition on the Clarke generalized covariant derivative of grad ϕ.
Let X be a locally Lipschitz continuous vector field on that is directionally differentiable in a neighborhood of x ∈.
X is said to be μ-order semismooth at x with respect to its Clarke generalized covariant derivative, for μ≥ 0, if for any ϵ > 0, there exists δ > 0 such that
‖X(x) - P_yx(X(y) + H_y exp_y^-1(x))‖≤ϵ d(x,y)^1+μ,
∀ y∈ B_δ(x), ∀ H_y∈∂ X(y).
When μ = 0, we simply say X is semismooth. Moreover, if there exists C,δ > 0 such that
‖X(x) - P_yx(X(y) + H_y exp_y^-1(x))‖≤ C d(x,y)^2,
∀ y∈ B_δ(x), ∀ H_y∈ X(y),
we say X is strongly semismooth.
For example, a piecewise smooth vector field X is strongly semismooth with respect to X.
When solving manifold optimization problems iteratively, we typically do not compute the iteration points directly on the manifold. Instead, we use Riemannian gradients and Hessians to calculate iteration points on the tangent space and then retract them onto the manifold. The exponential map, a distance-preserving smooth diffeomorphism between the tangent space and the manifold, ensures that the retracted points preserve desirable properties, such as sufficient descent in the objective function.
However, computing the exponential map can be challenging.
To address this, we introduce retractions, a class of mappings that approximate the exponential map and relax the requirement for a distance-preserving smooth diffeomorphism.
A continuously differentiable mapping R: T→ M is called a retraction, if for any x∈, it satisfies that
* R_x(0_x) = x,
* Ṛ_x(0_x) = id_T_x,
where R_x is the restriction of R to x, 0_x is the zero element of x, and Ṛ_x(0_x) is the differential (pushforward) of R_x at 0_x.
The definition of a retraction shows that it provides a first-order approximation of the exponential map, which also satisfies exp_x(0_x) = x and ẹx̣p̣_x(0_x) = id_x.
Despite this, the above definition may seem abstract. To aid in comprehension, we provide an equivalent definition of retractions for embedded manifolds in Euclidean spaces. While we do not limit our discussion to such manifolds, this equivalence can help clarify the concept of retractions.
When is an embedded manifold in vector space ^k and its Riemannian inner product is induced by the dot product, then the second condition in <ref> is equivalent to
lim_T_xℳ∋ξ→ 0_x ‖R_x(ξ) - (x + ξ)‖_2/‖ξ‖_2 = 0.
<ref> helps translate the conditions on a retraction: R_x simply needs to map the point x + ξ in the tangent space back to the manifold in a way that preserves the distance between the two points as a higher-order term compared to the magnitude of the tangent vector ξ.
This condition ensures that x + ξ and R_x(ξ) exhibit similar properties. For example, if a continuous objective function exhibits a sufficient decrease for x + ξ, it should also demonstrate an acceptable decrease for R_x(ξ).
Retractions are a useful tool for manifold optimization, as indicated by <ref> and other desirable properties (see <cit.>). They turn the tangent space into a first-order approximation of the manifold, with the exponential map being a special retraction.
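To illustrate the definition (again outside the paper's development), the sketch below compares the normalization retraction R_x(ξ) = (x+ξ)/‖x+ξ‖ on the unit sphere with the exponential map; the first-order agreement at ξ = 0 is visible in how fast the gap shrinks:

import numpy as np

def sphere_exp(x, xi):
    nrm = np.linalg.norm(xi)
    return x.copy() if nrm < 1e-12 else np.cos(nrm) * x + np.sin(nrm) * xi / nrm

def sphere_retraction(x, xi):
    # Normalization (metric projection) retraction on the unit sphere.
    return (x + xi) / np.linalg.norm(x + xi)

x = np.array([1.0, 0.0, 0.0])
for t in [1e-1, 1e-2, 1e-3]:
    xi = t * np.array([0.0, 1.0, 0.0])   # small tangent vector at x
    gap = np.linalg.norm(sphere_retraction(x, xi) - sphere_exp(x, xi))
    print(t, gap)                        # the gap shrinks faster than t, i.e. o(||xi||)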
To achieve a quadratic local convergence rate, the retraction needs to be C^2 <cit.>.
However, if the gradient field of the objective function is not strongly semismooth, the quadratic local convergence rate may not be obtained.
Hence, to be more general and consistent with the semismoothness condition,
we only require the retraction to admit a continuous differential, rather than being twice continuously differentiable. We have the following proposition for this class of retractions.
Suppose the differential of R_x is ν-order Hölder continuous,
i.e., there exist ν >0 and C≥ 0 such that for any ξ_1,ξ_2∈ T_xℳ, we have
‖P_R_x(ξ_1)R_x(ξ_2) dR_x(ξ_1) - dR_x(ξ_2)‖_op≤ C‖ξ_1 - ξ_2‖^ν.
Then for any ξ∈ T_xℳ, we have
d(R_x(ξ), exp_x(ξ)) = O(‖ξ‖^1 + ν).
For any ξ, define the curve γ: t ↦ R_x(tξ). For any smooth function f on the manifold, denote f̂(t) = f(R_x(tξ)).
By the mean value theorem, we know there exists τ∈[0,1] such that
f̂(1) = f̂(0) + f̂'(τ)
= f(x) + < f(y), γ'(τ) >
= f(x) + < f(y), Ṛ_x(τξ)[ξ]>,
where y = R_x(τξ), and Ṛ_x(τξ): x→y.
Then by the triangle inequality, we have
< f(y), Ṛ_x(τξ)[ξ]>
= <P_yx f(y), P_yxṚ_x(τξ)[ξ] >
= < f(x), Ṛ_x(0_x)[ξ]>
+ < f(x), P_yxṚ_x(τξ)[ξ] - Ṛ_x(0_x)[ξ] >_S_1
+ <P_yx f(y) - f(x), P_yxṚ_x(τξ)[ξ] >_S_2.
By the Hölder continuity of Ṛ_x, S_1 = O(ξ^1+ν); and by the smoothness of f, P_yx f(y) - f(x) = O((x,y)) = O(ξ), and then S_2 = O(ξ^2). Then, using Ṛ_x(0_x) = id_x in <ref>, we get
< f(y), Ṛ_x(τξ)[ξ] > = < f(x), ξ> + O(ξ^1+ν).
Now let f(p) (p,exp_x(ξ)), which is smooth on ∖{exp _x(ξ) }. Combining <ref> gives
(R_x(ξ),exp_x(ξ)) = f̂(1) = (x,exp_x(ξ)) + < f(x), ξ> + O(ξ^1+ν).
Then the gradient of the Riemannian distance function gives
< f(x), ξ> = <γ_-'((x,exp_x(ξ))),ξ> = -<ξ/ξ,ξ> = -ξ = -(x,exp_x(ξ)),
where γ_- is the geodesic from exp _x(ξ) to x.
Combining <ref> gives
(R_x(ξ),exp_x(ξ))
= O(ξ^1+ν).
Using the retraction, we can first solve the model problem of a trust region method on the tangent space and then map the iteration point back onto the manifold.
The model problem of a Riemannian trust region method can be defined similarly to (<ref>) using the Riemannian gradient, Hessian, inner product, and norm <cit.>.
However, the objective function ϕ we consider may not necessarily be twice differentiable. As a result, we propose replacing the Hessian in the model problem with an arbitrary element in the Clarke generalized covariant derivative of the gradient vector field. That is, at each iteration, we choose an arbitrary H_k ∈∂ grad ϕ(x_k) and define the model problem as follows:
min_η∈ T_x_kℳ m_x_k(η) := ϕ(x_k) + ⟨grad ϕ(x_k), η⟩ + 1/2⟨H_kη,η⟩
s.t. ‖η‖≤Δ_k.
Note that we sometimes use H_x to represent an element in ϕ(x) to make it more self-explanatory. Thus, H_x_k and H_k both represent an element in ϕ(x_k).
After solving the model problem, we compare the descent in the objective function with that of the model function by computing the relative decrease ratio:
ρ_k = (ϕ(x_k) - ϕ(R_x_k(η_k)))/(m_x_k(0) - m_x_k(η_k)),
where η_k is the (approximate) solution to the model problem (<ref>), and R is the chosen retraction. If ρ_k is relatively large, we accept R_x_k(η_k) as the next iteration point. The remaining steps of our semismooth Riemannian trust region method are the same as in a vanilla trust region method <cit.>. We present our algorithm in Algorithm <ref>.
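For orientation, the following schematic Python sketch mirrors a generic trust-region outer loop of this kind; it is not the paper's exact Algorithm, and phi, grad, pick_H (returning an element of the Clarke generalized covariant derivative), solve_model (e.g. truncated CG), and retract are assumed to be supplied by the problem, with tangent vectors represented as NumPy arrays.

import numpy as np

def trust_region_loop(phi, grad, pick_H, solve_model, retract, x0,
                      delta0=1.0, delta_max=10.0, rho_prime=0.1, max_iter=100):
    x, delta = x0, delta0
    for _ in range(max_iter):
        g, H = grad(x), pick_H(x)                          # H: element of the generalized derivative
        eta, model_decrease = solve_model(g, H, delta)     # approximate minimizer and m(0) - m(eta)
        x_trial = retract(x, eta)
        rho = (phi(x) - phi(x_trial)) / max(model_decrease, 1e-16)
        if rho < 0.25:
            delta *= 0.25                                  # poor model agreement: shrink the region
        elif rho > 0.75 and np.linalg.norm(eta) >= 0.999 * delta:
            delta = min(2.0 * delta, delta_max)            # good step on the boundary: expand
        if rho > rho_prime:
            x = x_trial                                    # accept the trial point
    return x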
For the model problem, any approximate method with an appropriate termination condition can be used. For example, we use the truncated conjugate gradient (TCG) method <cit.> with the following stopping criterion
‖r_j+1‖≤‖r_0‖ min{‖r_0‖^θ, ϵ},
where ϵ,θ > 0.
For completeness and convenience of subsequent analysis, we present the TCG method in Algorithm <ref>.
Besides stopping criterion (<ref>), Algorithm <ref> also terminates when one of two truncation conditions (lines 3 and 8) is satisfied and returns the truncated tangent vector η_k = ξ_j + τδ_j, with τ calculated as follows:
τ(ξ_j, δ_j, Δ_k) = (-⟨ξ_j, δ_j⟩+√(⟨ξ_j, δ_j⟩^2+(Δ_k^2-⟨ξ_j, ξ_j⟩) ⟨δ_j, δ_j⟩))/⟨δ_j, δ_j⟩.
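For reference, a minimal sketch of this boundary step size with Euclidean inner products standing in for the Riemannian ones (the function name is ours):

import numpy as np

def boundary_tau(xi, d, radius):
    # Positive root of ||xi + tau * d|| = radius.
    a, b, c = d @ d, xi @ d, xi @ xi
    return (-b + np.sqrt(b * b + (radius ** 2 - c) * a)) / a

xi = np.zeros(3)
d = np.array([1.0, 2.0, 2.0])
print(boundary_tau(xi, d, radius=3.0))   # equals 1.0, since ||d|| = 3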
§ CONVERGENCE ANALYSIS
In this section, we present the convergence results of our semismooth Riemannian trust region method (Algorithm <ref>).
We prove three classical results that are applicable to a smooth Euclidean trust region method.
The first is the global convergence theorem (<ref>), which shows that the algorithm converges to a stationary point for any initial point.
The second is the local convergence theorem (<ref>), which demonstrates that nondegenerate local minimizers form basins of attraction.
Finally, <ref> establishes the super-linear local convergence rate of our algorithm.
§.§ Global Convergence
Before presenting the global convergence theorem, we need some essential lemmas.
Since both the model problem <ref> and Algorithm <ref> are defined in a Euclidean space, the first two lemmas apply to our algorithm without requiring any modification.
Let {ξ_i}_i=0^j
be the first j+1 tangent vectors generated by Algorithm <ref> with j+1 iterations and η_k be the returned tangent vector. Then we have
* If the truncation conditions and the termination condition are not met, there exists j such that
ξ_j+1 = η^* = H_k^-1(-grad ϕ(x_k)),
where H_k is the (general/approximated) Hessian passed to the algorithm, and η^* is the minimum point of m.
* Algorithm <ref> is a descent algorithm, i.e., m(ξ_0) ≥ m(ξ_1) ≥…≥ m(ξ_i) ≥…≥ m(ξ_j) ≥ m(η_k) ≥ m(η^*).
* The norm of ξ_i is monotonically increasing, i.e., ξ_0≤ξ_1≤…≤ξ_i≤…≤ξ_j≤η_k≤η^*.
Let η_k be the tangent vector returned by Algorithm <ref>, then the decrease in the model problem <ref> satisfies
m_x_k(0) - m_x_k(η_k) ≥1/2‖grad ϕ(x_k)‖ min{Δ_k, ‖grad ϕ(x_k)‖/‖H_x_k‖}.
The following lemma, Taylor's theorem, is of utmost importance in our analysis. While there exist several forms and variations of Taylor's theorem on manifolds, we will only present the ones that are relevant to our analysis. Some other variations can be found in <cit.>.
Suppose ϕ∈ C^1(ℳ) has a Lipschitz gradient field and x, y ∈ℳ. Let γ: [0, 1] →ℳ be a geodesic from x to y. Then, there exist τ_1, τ_2∈ [0, 1], and H_τ_2∈∂ grad ϕ(γ(τ_2)) such that
ϕ(y) - ϕ(x) = ⟨P^τ_1→0_γ grad ϕ(γ(τ_1)), γ'(0)⟩,
ϕ(y) - ϕ(x) = ⟨grad ϕ(x), γ'(0)⟩ + 1/2 ⟨P^τ_2→ 0_γ H_τ_2 P^0→τ_2_γγ'(0), γ'(0)⟩.
Furthermore, we have
ϕ(y) - ϕ(x) = ⟨grad ϕ(x), γ'(0)⟩ + O(d(x,y)^2),
ϕ(y) - ϕ(x) = ⟨grad ϕ(x), γ'(0)⟩ + 1/2 ⟨H_x γ'(0), γ'(0)⟩ + o(d(x,y)^2),
for some H_x ∈∂ grad ϕ(x).
Similarly, for a Lipschitz and directionally differentiable vector field X, there exist τ_3 ∈ [0,1], H_τ_3∈∂ X(γ(τ_3)), and H_x∈∂ X(x) such that
X(y) = P_γ^0→ 1[X(x) + P_γ^τ_3→ 0 H_τ_3P_γ^0→τ_3γ'(0)],
‖X(y) - P_γ^0→ 1[X(x) + H_xγ'(0)]‖ = o(d(x,y)).
For <ref>, let γ be the geodesic from x to y and define ϕ̂ϕ∘γ. Then by the first-order expansion of ϕ̂, there exists τ_1∈[0,1] such that
ϕ(y) = ϕ̂(1) = ϕ̂(0) + ϕ̂'(τ_1)
= ϕ(x) + <ϕ(γ(τ_1)),γ'(τ_1) >
= ϕ(x) + <P^τ_1→ 0_γϕ(γ(τ_1)),γ'(0) >,
where the last equality is from the fact that the tangent vector of a geodesic is parallel along itself (see <ref>).
For the proof of (<ref>), please refer to <cit.>. Let L be the Lipschitz constant of ϕ's gradient as in <ref>; by (<ref>), we have
ϕ(y) - ϕ(x) - ϕ(x), γ'(0) = (P_γ^τ_1→ 0ϕ(γ(τ_1)) - ϕ(x) ), γ' (0)
≤P_γ^τ_1→ 0ϕ(γ(τ_1)) - ϕ(x) γ'(0)
≤ L (γ(τ_1), x) γ'(0)
≤ L (x,y)^2,
which gives (<ref>).
By the upper-semicontinuity of the Clarke generalized covariant derivative (Proposition <ref>), for any ϵ > 0, when y is near x, there exist H_x∈ϕ(x) and an operator B whose operator norm is no greater than 1, such that
P^τ_2→ 0_γ H_τ_2 P^0→τ_2_γ = H_x + ϵ B.
Therefore
ϕ(y) - ϕ(x) - ϕ(x), γ'(0) - 1/2 H_xγ'(0), γ'(0)
<ref>≤ 1/2(P^τ_2→ 0_γ H_τ_2 P^0→τ_2_γ - H_x)γ'(0), γ'(0)
≤ 1/2P^τ_2→ 0_γ H_τ_2 P^0→τ_2_γ - H_xγ'(0)^2
≤ ϵ/2(x,y)^2.
By the arbitrariness of ϵ, we get (<ref>).
Equations (<ref>) and (<ref>) can be derived similarly.
We now present the global convergence theorem, which mirrors <cit.>. However, in <cit.>, two additional assumptions are made on the retraction R. We could also adopt the same assumptions and then <cit.> directly applies to our setting, as it does not require the second-order differentiability of ϕ.
Nonetheless, we provide a proof of our theorem with only one assumption: the Hölder continuity of the retraction's differential. This requirement is strictly weaker than the radially Lipschitz continuity assumption made in <cit.>.
Let {x_k} be the sequence generated by Algorithm <ref> with ρ'∈ [0,1/4). Suppose on the level set {x∈ℳ: ϕ(x) ≤ϕ(x_0) }, {H_k} is uniformly bounded. Then we have
lim inf_k→∞ ‖grad ϕ(x_k)‖ = 0.
Moreover, if ρ'∈(0,1/4) and the retraction R has a ν-continuous differential, we have
lim_k→∞ ‖grad ϕ(x_k)‖ = 0.
Our proof is adapted from that of <cit.>.
For <ref>, we only need to reestablish the claim that the trust region radius has a positive lower bound if lim inf_k→∞ϕ(x_k) 0. For the remaining proof, please refer to <cit.>.
Similar to the proof of (<ref>) in <ref>, let γ be the curve such that γ(t) = R_x_k(tη_k). Then, there exists τ∈[0,1] such that
ϕ(R_x_k(η_k)) - ϕ(x_k)
= <ϕ(γ(τ)), γ'(τ) >
= <ϕ(γ(τ)), Ṛ_x_k(τη_k)[η_k] >
= <P_γ(τ)x_kϕ(γ(τ)), P_γ(τ)x_kṚ_x_k(τη_k)[η_k] >.
By the continuity of Ṛ_x_k, there exist C_1 ≥ 0 and ν>0 such that
Ṛ_x_k(0_x_k) - P_γ(τ)x_kṚ_x_k(τη_k)_op≤ C_1τη_k^ν.
By the Lipschitz continuity of ϕ and <ref>, there exists C_2 ≥ 0 such that
ϕ(x_k) - P_γ(τ)x_kϕ(γ(τ))≤ L(x_k, R_x_k(τη_k))
≤ L(x_k, exp _x_k(τη_k)) + L(exp_x_k(τη_k), R_x_k(τη_k))
≤ L(τη_k + C_2τη_k^1+ν)
Plugging <ref> back into (<ref>) gives
ϕ(R_x_k(η_k)) - ϕ(x_k)
= <P_γ(τ)x_kϕ(γ(τ)) - ϕ(x_k),
P_γ(τ)x_kṚ_x_k(τη_k)[η_k]>
+ <ϕ(x_k), (P_γ(τ)x_kṚ_x_k(τη_k) - Ṛ_x_k(0_x_k))[η_k] >
+ <ϕ(x_k), η_k>
≤ L(τη_k+ C_2τη_k^1+ν)P_γ(τ)x_kṚ_x_k(τη_k)_opη_k+ϕ(x_k)· C_1τη_k^ν·η_k+<ϕ(x_k), η_k>
≤ L(η_k + C_2η_k^1+ν)(1 + C_1τη_k^ν)·η_k +
C_1ϕ(x_k)η_k^1+ν +
<ϕ(x_k), η_k>
≤ (C_3 + C_1ϕ(x_k))η_k^1+ν +
<ϕ(x_k), η_k>,
where C_3 L(Δ̅^1-ν + C_2Δ̅)(1 + C_1Δ̅^ν) ≥ L(η_k^1-ν + C_2η_k)(1 + C_1τη_k^ν) because η_k≤Δ̅ and τ∈[0,1], where Δ̅ is the radius cap of the trust region specified in Algorithm <ref>.
Let β be the uniform upper bound of { H_k }. By the definition of the model problem (<ref>), we have
|m_x_k(η_k) - ϕ(R_x_k(η_k))|
≤|1/2η_k, H_kη_k| +
| ϕ(x_k) + <ϕ(x_k),η_k > - ϕ(R_x_k(η_k)) |
≤β/2η_k^2 + (C_3 + C_1ϕ(x_k))η_k^1+ν
≤ (C_1ϕ(x_k) + C_4)η_k^1+ν,
where C_4 = C_3 + βΔ̅^1-ν/2. Then, by <ref>, we get
|ρ_k - 1| = |m_x_k(η_k) - ϕ(R_x_k(η_k))/m_x_k(0) - m_x_k(η_k)|
≤(C_1ϕ(x_k) + C_4)η_k^1+ν/1/2ϕ(x_k)min{Δ_k,ϕ(x_k)/H_k}.
Suppose the lim inf_k→∞ϕ(x_k) 0, then there exist ϵ> 0 and K∈ℕ such that ϕ(x_k)≥ϵ for all k≥ K. Then, for any k≥ K, we have
|ρ_k - 1| ≤C_5η_k^1+ν/min{Δ_k, ϵ /β},
where C_5 = 2(C_1 + C_4 /ϵ).
Let Δ = min{ϵ/β, (2C_5)^-1 /ν}. When Δ_k ≤Δ, we have min{Δ_k, ϵ /β} = Δ_k, and
|ρ_k - 1| ≤C_5 Δ_k^1+ν/Δ_k≤ C_5 Δ^ν≤1/2,
which indicates that ρ_k ≥ 1/2 > ρ'. Therefore, by the trust region radius update rule, we can conclude
Δ_k+1≥Δ_k, if Δ_k ≤Δ;
≥1/4Δ, if Δ_k > Δ.
That is, we establish the positive lower bound for the trust region radius: min{Δ_K, Δ/4}.
For <ref>, we only need to reestablish the claim that there exist positive constants C and δ such that for all x∈ and ξ∈x with ξ≤δ, the following inequality holds:
ξ≥ C(x,R_x(ξ)).
This claim is a direct consequence of <ref>. Specifically, there exist C_1≥0 and ν>0 such that
(x,R_x(ξ)) ≤ (x,exp_x(ξ)) + (exp _x(ξ),R_x(ξ))
≤ ξ + C_1ξ^1+ν.
Choose δ small enough such that C_1δ^ν≤ 1, and we obtain
(x,R_x(ξ)) ≤ 2ξ,
which implies that ξ≥ 1 /2(x,R_x(ξ)) for all x∈ and ξ∈x with ξ≤δ.
Please refer to <cit.> for other parts of the proof.
In the statement of the theorem, we require that { H_k } is uniformly bounded. This seemingly strong condition holds under a mild assumption; we state it as a corollary.
If there exists k∈ℕ such that the level set { x∈:ϕ(x) ≤ϕ(x_k) } is compact, then { H_k } is uniformly bounded due to the local boundedness of the Clarke generalized covariant derivative (see Proposition <ref>). Consequently, the condition required by Theorem <ref> is met.
<ref> does not require the semismoothness of the gradient field of the objective function. Here, we only utilize the Lipschitz continuity of ϕ's gradient field.
One can also prove the global convergence under a weaker assumption that the objective function is C^1 and satisfies a modified Kurdyka-Łojasiewicz condition on the manifold, for example, using techniques proposed in <cit.>.
§.§ Local Convergence
Theorem <ref> states that the algorithm converges to some stationary point, which may not necessarily be a local minimizer.
Our next goal is to demonstrate that if the algorithm operates near a nondegenerate local minimizer, it will be attracted to that local minimizer. To this end, we first introduce the definition of a nondegenerate local minimizer and then present some results highlighting how they shape the landscape of their neighborhoods.
We say x^* is a nondegenerate local minimizer of ϕ if grad ϕ(x^*) = 0 and H_x is positive definite for any H_x ∈∂ grad ϕ(x^*).
In <ref>, we assume a uniform upper norm bound for { H_k }. In the next lemma, we will demonstrate that near a nondegenerate local minimizer, ϕ automatically has some uniform bounds, not only on its norm, but also on its eigenvalues and the norm of its inverse.
For a nondegenerate local minimizer x^* of ϕ, there exist λ_1, λ_2 > 0 and a radius δ̄ > 0 such that
λ_1 ≤ min{⟨ξ,Hξ⟩: H ∈∂ grad ϕ(x), x ∈ B_δ̄(x^*), ‖ξ‖=1},
λ_2 ≥ max{‖H‖: H ∈∂ grad ϕ(x), x ∈ B_δ̄(x^*)},
λ_1^-1 ≥ max{‖H^-1‖: H ∈∂ grad ϕ(x), x ∈ B_δ̄(x^*)}.
For any x∈ and any positive definite H∈ϕ(x), both H and H^-1 are self-adjoint (see <ref>). Thus, we have
min_ξ=1<ξ,Hξ> = λ_min(H) = (λ_max(H^-1))^-1 = H^-1^-1,
max_ξ=1<ξ,Hξ> = λ_max(H) = H,
where λ_min(H) and λ_max(H) are the smallest and the largest eigenvalues of H, respectively.
By item 1 in <ref>, ϕ(x^*) is compact. Then, since all elements in ϕ(x^*) are positive definite and λ_min and λ_max are smooth functions on ℒ(x^*), there exist λ_1^*, λ_2^* > 0 such that
λ_1^* = min{λ_min(H): H ∈ϕ(x^*)},
λ_2^* = max{λ_max(H): H ∈ϕ(x^*)}.
By item 3 in <ref>, ϕ(x^*) is upper-semicontinuous. Let ϵ = λ_1^* /2. Then there exists δ_1 > 0 such that for any H ∈ϕ(x) and x∈ B_δ_1(x^*), there exist Ĥ∈ϕ(x^*) and an operator B with norm no greater than 1 such that
P_xx^*HP_x^*x = Ĥ + λ_1^*/2 B.
Thus, for any ξ∈ T_x^*ℳ∖{ 0 }, we have
<ξ, P_xx^*HP_x^*xξ> = <ξ, Ĥξ> + λ_1^*/2<ξ, Bξ> ≥λ_1^*<ξ,ξ> - λ_1^*/2<ξ,ξ> = λ_1^*/2<ξ,ξ>.
Therefore λ_min(H) = λ_min(P_xx^*HP_x^*x) ≥λ_1^*/2. By the arbitrariness of H and x, we get
min{λ_min(H) : H ∈∂ϕ(x), x ∈ B_δ_1(x^*)}≥λ_1 := λ_1^*/2.
Similarly, we can obtain
max{λ_max(H) : H ∈∂ϕ(x), x ∈ B_δ_1(x^*)}≤λ_2 := λ_2^* + λ_1^*/2.
From the above bounds, we know that elements of ϕ(x) near x^* are positive definite, and we have
max{H^-1: H∈ϕ(x), x∈ B_δ(x^*) }≤λ_1^-1.
A nondegenerate local minimizer is an isolated local minimizer.
Let x^* be a nondegenerate local minimizer. By (<ref>) in <ref> and <ref>, there exists a neighborhood U of x^* such that for any x ∈ U,
ϕ(x) = P_γ^0→ 1[ϕ(x^*) + P_γ^τ→ 0H_τP_γ^0→τγ'(0)] = P_γ^τ→ 1 H_τP_γ^0→τγ'(0) ≠ 0,
where γ is a geodesic joining x^* and x.
Therefore, x^* is the only stationary point in U.
Let x^* be a nondegenerate local minimizer of ϕ. Let λ_1, λ_2, and the neighborhood U be as specified in <ref>. We have the following relationship for any x∈ U:
λ_1 (x,x^*) ≤ϕ(x)≤λ_2 (x,x^*).
By (<ref>) in <ref>, there exists τ∈[0,1] such that
ϕ(x) = ϕ(x^*) + P_γ^τ→0H_γ(τ)P_γ^0→τγ'(0) = P_γ^τ→0H_γ(τ)P_γ^0→τγ'(0),
where γ is the geodesic connecting x,x^*.
Then, by <ref>, we get
λ_1(x,x^*) ≤min_ξ=1H_γ(τ)ξγ'(0)≤ϕ(x)≤max_ξ=1H_γ(τ)ξγ'(0)≤λ_2(x,x^*).
We will now present and prove the local convergence theorem. Our proof is adapted from <cit.>.
Let x^* be a nondegenerate local minimizer of ϕ. Suppose the retraction R has a ν-continuous differential near x^*. Then there exists a neighborhood U of x^* such that for any x_0 ∈ U, {x_k} generated by Algorithm <ref> converges to x^*.
By <ref>, there exists δ_0 > 0 such that x^* is the only local minimizer in B_δ_0(x^*).
By <ref>, there exist c_1, δ_1 > 0 such that H^-1≤ c_1 for any H ∈ϕ(x) and x∈ B_δ_1(x^*).
Moreover, by <ref>, there exist c_2, δ_2 > 0 such that ϕ(x)≤ c_2 (x,x^*) for any x∈ B_δ_2(x^*).
Since B_δ_0(x^*) is compact,
by <ref>, there exist c_3, δ_3 > 0 such that for any x∈ B_δ_0(x^*) and ξ∈ B_δ_3(0_x), we have
(R_x(ξ), x)
≤(R_x(ξ),exp_x(ξ)) + (exp_x(ξ),x)
≤ O(ξ^1+ν) + ξ≤ c_3ξ.
Let δ_4 = min{δ_0, δ_1, δ_2, δ_3/(c_1 c_2)}. Then let
r = δ_4/(c_1c_2 c_3 + 1).
Since ϕ is continuous and x^* is an isolated local minimizer by <ref>, there exists a level set L = {x∈ℳ: ϕ(x) ≤ϕ(x^*) + ϵ} such that L ∩ B_δ_4(x^*) ⊂ B_r(x^*).
Let U = L ∩ B_δ_4(x^*). For any x_0∈ U, let η^* = H_0^-1ϕ(x_0). Then, we have
η^*≤H_0^-1ϕ(x_0)≤ c_1c_2 (x_0,x^*) ≤ c_1 c_2 r ≤ c_1 c_2 δ_4 ≤δ_3.
By <ref>, η_0≤η^*≤δ_3. Therefore, we have
(x_0,x_1)
= (x_0, R_x_0(η_0))
<ref>≤ c_3 η_0≤ c_3 η^*<ref>≤ c_3c_1 c_2 r <ref>=δ_4 - r,
which indicates that
(x_1, x^*) ≤(x_1, x_0) + (x_0, x^*)
≤ (δ_4 - r) + r = δ_4.
Since the Algorithm <ref> is a descent algorithm, we have x_1 ∈ L ∩ B__4(x^*) = U. By induction, we know that {x_k}⊂ U and thus converges to the only minimizer x^* by the descent property of Algorithm <ref>.
§.§ Local Convergence Rate
In this section, we prove the superlinear local convergence rate of our algorithm.
We first state two essential geometric laws on manifolds.
For any two vectors ξ_1,ξ_2 ∈ T_xℳ,
we have
( exp_x(ξ_1+ξ_2), exp_y(P_xyξ_2) ) = O(ξ_1ξ_2),
where y = exp _x(ξ_1).
Or equivalently, for any z, we have
(exp_x(ξ_1 + ξ_2), z) = (exp_y(P_xyξ_2), z) + O(ξ_1ξ_2).
For any two vectors ξ_1,ξ_2 ∈ T_xℳ,
we have
|(exp_x(ξ_1),exp _x(ξ_2)) - ξ_1 - ξ_2| = O(ξ_1ξ_2).
First, we note that
ξ_1 - ξ_2 = (y, exp _y(P_xy(ξ_2 - ξ_1))),
where y = exp _x(ξ_1). Then by the triangle inequality, we get
|(exp_x(ξ_1),exp _x(ξ_2)) - ξ_1 - ξ_2|
= | (y, exp _x(ξ_2)) - (y, exp _y(P_xy(ξ_2 - ξ_1))) |
≤ (exp _x(ξ_2), exp_y(P_xy(ξ_2 - ξ_1)))
= (exp _x(ξ_1 + (ξ_2 - ξ_1)), exp_y(P_xy(ξ_2 - ξ_1))).
Therefore, by <ref>, we get
|(exp_x(ξ_1),exp _x(ξ_2)) - ξ_1 - ξ_2| = O(ξ_1ξ_2 - ξ_1).
Symmetrically, we have
|(exp_x(ξ_1),exp _x(ξ_2)) - ξ_1 - ξ_2| = O(ξ_2ξ_2 - ξ_1).
Combining the above two equations gives
|(exp_x(ξ_1),exp _x(ξ_2)) - ξ_1 - ξ_2|
≤ O(min{ξ_1ξ_2-ξ_1, ξ_2ξ_2-ξ_1})
≤ O(min{ξ_1ξ_2+ξ_1^2, ξ_1ξ_2+ξ_2^2})
= O(ξ_1ξ_2 + min{ξ_1^2, ξ_2^2})
≤ O(ξ_1ξ_2 + ξ_1ξ_2)
= O(ξ_1ξ_2).
This corollary can also be derived from the cosine law on manifolds; see <cit.>.
Proving the superlinear local convergence rate of a trust region method requires demonstrating that the trust region will eventually become inactive.
Once this is established, one can expect the superlinear convergence rate due to the trust region step being precisely Newton's step.
However, previous work on Euclidean semismooth trust region methods either makes the inactivity of the trust region an assumption <cit.>, or directly uses Newton's step near a nondegenerate local minimizer <cit.>.
In this paper, we prove the inactivity of the trust region without any additional assumptions.
The challenge in showing the inactivity of the trust region stems from the non-twice differentiability of the objective function, which makes it difficult to show that the model error is a second-order infinitesimal of the step size, i.e., |m_x(η) - ϕ(R_x(η))| = o(η^2).
For twice differentiable objective functions, this condition automatically holds. However, for objective functions with only a semismooth gradient field, it is not as obvious, as the diameter of ϕ(x) is not an infinitesimal of the step-size. That is, the disparity between the generalized Hessian in m and the one in the Taylor expansion of ϕ can be significant in terms of operator norm.
Nonetheless, we can establish a variation of this condition for a semismooth trust region method near a nondegenerate local minimizer with a slight detour.
The key lies in recognizing that, due to its semismoothness (<ref>), the diameter of the ϕ(x) acting on certain directions can be well controlled.
We now present the lemma on this condition.
Let x^* be a nondegenerate local minimizer of ϕ. Let {x_k}→ x^* be a sequence generated by Algorithm <ref>. Suppose ϕ is semismooth at x^* and the retraction R has a ν-differential near x^*. Then, near x^*, we have
|m_x_k(η_k) - ϕ(exp _x_k(η_k))| = o(η_k(η_k + ϕ(x_k))).
Or equivalently,
lim_k →∞|m_x_k(η_k) - ϕ(exp _x_k(η_k))|/η_k(η_k + ϕ(x_k)) = 0.
By (<ref>) in <ref>, there exists Ĥ_k∈ϕ(x_k) such that
ϕ(exp_x_k(η_k)) = ϕ(x_k) + <ϕ(x_k),η_k> + 1/2<Ĥ_k η_k, η_k> + o(η_k^2).
Also, at each iteration, we arbitrarily choose an H_k ∈ϕ(x_k) for the model problem (<ref>). Therefore, we have
m_x_k(η_k) - ϕ(exp_x_k(η_k)) = 1/2<(H_k - Ĥ_k) η_k,η_k> + o(η_k^2).
When ϕ is twice differentiable, ϕ(x_k) contains only one element, and the condition then holds automatically.
For an SC^1 function ϕ, however, we need to control the diameter of ϕ(x_k) in some way.
Let ζ_k = exp_x_k^-1(x^*). For any ϵ > 0 in Definition <ref>, since { x_k } converges to x^*, we know that for a sufficiently large k, we have
(H_k - Ĥ_k)ζ_k
= (ϕ(x^*) - P_x_kx^*(ϕ(x_k) + H_kζ_k)) -
(ϕ(x^*) - P_x_kx^*(ϕ(x_k) + Ĥ_kζ_k))
≤ ϕ(x^*) - P_x_kx^*(ϕ(x_k) + H_kζ_k) +
ϕ(x^*) - P_x_kx^*(ϕ(x_k) + Ĥ_kζ_k)
≤ 2ϵ(x_k,x^*)^1+μ.
By letting ϵ approach zero and using (x_k,x^*)=ζ_k, we get
lim_k →∞ (H_k - Ĥ_k)ζ_k /ζ_k = 0,
which can be equivalently expressed as (H_k-Ĥ_k)ζ_k=o(ζ_k).
This can be interpreted as saying that the diameter of ϕ(x_k), applied to ζ_k, is controlled by ζ_k.
Then, by the triangle inequality, we have
(H_k - Ĥ_k) η_k,η_k≤ (H_k - Ĥ_k)η_k η_k
≤ (H_k - Ĥ_k)(η_k-ζ_k) η_k + (H_k - Ĥ_k)ζ_k η_k
= (H_k - Ĥ_k)(η_k-ζ_k) η_k + o(η_kζ_k).
When { x_k } converges to x^*, the difference {(x_k,x_k+1) } also shrinks to zero. Therefore, for any ϵ in item 3 of <ref>, for a sufficiently large k, we have
P_x_k x_k+1ϕ(x_k)P_x_k+1x_k⊂ϕ(x_k+1) + B_ϵ(0).
That is, there exist operators B,B̂∈ℒ(x_k+1) and A_k+1, Â_k+1∈ϕ(x_k+1) such that B,B̂≤ϵ and
P_x_k x_k+1H_kP_x_k+1x_k = A_k+1 + B,
P_x_k x_k+1Ĥ_kP_x_k+1x_k = Â_k+1 + B̂.
Then we have
(H_k - Ĥ_k)(η_k-ζ_k)
≤ (A_k+1 - Â_k+1)P_x_k x_k+1(η_k - ζ_k)
+ (B - B̂)P_x_k x_k+1(η_k - ζ_k)
≤ (A_k+1 - Â_k+1)ζ_k+1_G_1
+ (A_k+1 - Â_k+1)(P_x_k x_k+1(ζ_k - η_k) - ζ_k+1) _G_2
+ 2 ϵη_k - ζ_k_G_3,
where we used two triangle inequalities.
By the arbitrariness of ϵ, we know G_3 = o(η_k - ζ_k) = o(ζ_k + η_k). Then similar to (<ref>), we have G_1 = o(ζ_k+1).
We are left with G_2.
By Corollary <ref>, we have
P_x_k x_k+1(ζ_k - η_k) - ζ_k+1 = (exp_x_k+1(P_x_k x_k+1(ζ_k - η_k)), exp_x_k+1(ζ_k+1)) + O (ζ_k - η_kζ_k+1).
Recall that ζ_k+1 = exp _x_k+1^-1(x^*).
By <ref>, we have
(exp_x_k+1(P_x_k x_k+1(ζ_k - η_k)), exp_x_k+1(ζ_k+1))
= (exp_x_k+1(P_x_k x_k+1(ζ_k - η_k)),x^*)
= (exp _x_k(η_k' + (ζ_k - η_k)),x^*) + O(η_k'ζ_k - η_k)
where η_k' = exp^-1_x_k(x_k+1).
Applying ζ_k = exp _x_k^-1(x^*) and <ref> again, we get
(exp _x_k(η_k' + (ζ_k - η_k)), x^*)
= (exp_x_k(ζ_k + (η_k' - η_k)), x^*)
= (exp _x^*(P_x_k x^*(η_k' - η_k)), x^*) + O(ζ_kη_k' - η_k)
= P_x_kx^*(η_k' - η_k) + O(ζ_kη_k' - η_k)
= η_k' - η_k + O(ζ_kη_k' - η_k).
Finally, by <ref> and <ref>, we have
η_k' - η_k
= (x_k+1, exp_x_k(η_k)) + O(η_k'η_k)
≤ (x_k+1, R_x_k(η_k)) + (R_x_k(η_k), exp _x_k(η_k))+ O(η_k'η_k)
≤ 0 + O(η_k^1+ν) + O(η_k'η_k).
Recall that {(x_k,x_k+1) } converges to zero. Thus, {η_k' } also converges to zero. This fact together with (<ref>) gives
η_k' - η_k = o(η_k) and η_k' = O(η_k).
Combining <ref> and <ref> gives
P_x_k x_k+1(ζ_k - η_k) - ζ_k+1
= O(ζ_k - η_kζ_k+1) + O(η_k'ζ_k - η_k) + O(ζ_kη_k' - η_k) + o(η_k)
= o(ζ_k - η_k) + o(ζ_k-η_k) + o(ζ_k) + o(η_k)
= o(η_k + ζ_k),
which further gives ζ_k+1 = O(η_k + ζ_k).
Then for G_1 and G_2 in (<ref>), we have
G_1 = o(η_k + ζ_k),
G_2
= A_k+1 - Â_k+1· o(ζ_k + η_k).
Since { x_k } converges to x^*, by <ref>, { A_k+1, Â_k+1} are uniformly bounded. Also, by <ref>, we have ζ_k = exp_x_k^-1(x^*) = (x_k,x^*) = O(ϕ(x_k)). Therefore, combining <ref> gives
|m_x_k(η_k) - ϕ(exp_x_k (η_k))| ≤ o(η_k(η_k + ϕ(x_k))).
We remark that the above lemma also holds for Euclidean trust region methods, since Euclidean space is itself a Riemannian manifold. The derivation in the Euclidean case, however, is notably more straightforward, because G_2 in <ref> vanishes as a result of P_x_k x_k+1(ζ_k - η_k) = ζ_k+1 in Euclidean spaces.
Consequently, the part of the proof subsequent to (<ref>) is not needed in the Euclidean setting.
On a Riemannian manifold with a general retraction, by contrast, the argument is much more involved due to the non-vanishing G_2.
The next lemma shows that the trust region will eventually be inactive. For a C^2 objective function and retraction, one can easily get |m(η_k) - ϕ(R(η_k))| = O(ϕ(x_k)η_k^2), and then the result follows (see <cit.>). However, in our setting, we need <ref> to tackle the non-twice differentiability of the objective function and retraction, respectively.
Let x^* be a nondegenerate local minimizer of ϕ. Let {x_k}→ x^* be a sequence generated by Algorithm <ref>. If ϕ is semismooth at x^* and the retraction R has a ν-differential near x^*, we have
lim_k→∞ρ_k = 1.
In this proof, where no confusion can arise, we omit the subscript x_k in m_x_k, R_x_k, and exp_x_k.
First by the Taylor equation (<ref>) and <ref>, we have
|ϕ(exp(η_k)) - ϕ(R(η_k))|
= | P_γ^τ→ 0ϕ(γ(τ)),γ'(0)|
≤ P_γ^τ→0ϕ(γ(τ))γ'(0)
= P_γ^τ→0ϕ(γ(τ))(exp (η_k),R(η_k))
≤ P_γ^τ→0ϕ(γ(τ))· o(η_k),
where γ is the geodesic from exp(η_k) to R(η_k) and τ∈ [0,1].
Then we decompose the norm in (<ref>):
P_γ^τ→0ϕ(γ(τ))≤ P_γ^τ→0ϕ(γ(τ)) - P_x_kexp(η_k)ϕ(x_k) + P_x_k exp (η_k)ϕ(x_k)
≤ L(γ(τ), x_k) + ϕ(x_k)
≤ L((γ(τ),exp(η_k)) + (exp (η_k),x_k)) + ϕ(x_k)
= Lτ(R(η_k),exp(η_k)) + Lη_k + ϕ(x_k)
= Lτ· o(η_k) + Lη_k + ϕ(x_k),
where (<ref>) and (<ref>) use the triangle inequality,
(<ref>) uses the Lipschitzness of ϕ and L is the Lipschitz constant as in <ref>,
and (<ref>) is by <ref>.
Combining (<ref>), (<ref>), and <ref> gives
|m(η_k) - ϕ(R(η_k))| ≤
|m(η_k)-ϕ(exp(η_k))| + |ϕ(exp (η_k))-ϕ(R(η_k))|=
o(η_k(η_k + ϕ(x_k))).
Since { x_k } converges to x^*, by <ref>, { H_k } and { H_k^-1} are uniformly bounded; let β_1 and β_2 denote their respective uniform upper bounds in operator norm.
Now we denote ζ_k := ϕ(x_k) and η_k^* := -H_k^-1ζ_k. By <ref>, η_k≤η_k^*≤β_2ζ_k. Substituting these back into (<ref>) gives
|m(η_k) - ϕ(R(η_k))| = o(ζ_k·η_k).
By <ref>, putting the above equation back to ρ_k-1 gives
|ρ_k - 1| ≤2· o(ζ_k·η_k)/ζ_kmin{Δ_k, ζ_k/ β_1 }.
When the denominator is Δ_k, since η_k≤Δ_k, we have
|ρ_k - 1| ≤2· o(ζ_k·Δ_k)/ζ_k·Δ_k→ 0, k→∞.
Otherwise, when the denominator is ζ_k/β_1, since η_k≤β_2ζ_k, we have
|ρ_k - 1| ≤2β_1 β_2· o(ζ_k^2)/ζ_k^2→ 0, k→∞.
In conclusion, we have
lim_k →∞|ρ_k - 1| = 0,
which gives lim_k →∞ρ_k = 1.
To the best of our knowledge, <ref> is the first result on the eventual inactivity of the trust region constraint in trust region methods for SC^1 problems, even in the Euclidean case.
We are now ready to prove the algorithm's superlinear local convergence rate.
Let x^* be a nondegenerate local minimizer of ϕ. Let {x_k}→ x^* be a sequence generated by Algorithm <ref>.
If the retraction is ν-order Hölder differentiable in a neighborhood of x^*,
ϕ(x) is μ-order semismooth at x^*,
and Algorithm <ref> uses (<ref>) as the stopping criterion,
then there exist c,K>0 such that for any k ≥ K,
(x_k+1, x^*) ≤ c (x_k,x^*)^1+min{θ,ν,μ}.
Specifically, if R∈ C^2, ϕ(x) is strongly semismooth at x^*, and θ = 1 in (<ref>), we have
(x_k+1, x^*) ≤ c (x_k,x^*)^2.
First, by <ref>, there exists K_1,c_1 > 0 such that for any k≥ K_1, we have
(x_k+1, x^*)
= (x^*, R_x_k(η_k))
≤(x^*,exp_x_k(η_k)) + (exp_x_k(η_k), R_x_k(η_k))
≤(x^*,exp_x_k(η_k)) + c_1η_k^1+ν.
When x_k is in the normal neighborhood of x^* (say, for k≥ K_1), where exp _x_k is a diffeomorphism, we have x^* = exp_x_k(exp_x_k^-1(x^*)). Let ζ_k := exp_x_k^-1(x^*). By Corollary <ref>, there exists c_2 > 0 such that
(x^*,exp_x_k(η_k))
= (exp_x_k(ζ_k),exp_x_k(η_k))
≤η_k - ζ_k + c_2ζ_kη_k.
Let η_k^* := -H_k^-1ϕ(x_k). Then we have
η_k - ζ_k = η_k - η_k^* + η_k^* - exp_x_k^-1(x^*)
≤η_k - η_k^* + H_k^-1ϕ(x_k) + exp_x_k^-1(x^*).
By <ref>, there exists K_2 > 0 and β_2 > 0 such that {H_k^-1}_k≥ K_2 is uniformly bounded by β_2. Also, by the semismoothness of ϕ(x) at x^*, there exists K_3> 0 such that for any k≥ K_3,
H_k^-1ϕ(x_k) + exp_x_k^-1(x^*) ≤H_k^-1ϕ(x_k) + H_kexp_x_k^-1(x^*)
≤β_2 (x_k,x^*)^1+μ.
Finally, we need to bound η_k -η_k^*. By <ref>, there exists K_4 > 0 such that for any k≥ K_4 the trust region is inactive, so the trust region radius eventually reaches the radius cap set in Algorithm <ref>, i.e., Δ_k = Δ̅ > 0. On the other hand, by <ref> we have
lim_k →∞η_k≤lim_k →∞η_k^*≤lim_k →∞H_k^-1ϕ(x_k)≤β_2lim_k →∞ϕ(x_k) = 0.
Therefore, the second truncation condition in Algorithm <ref> (line 8) will not be met.
Moreover, by <ref>, the elements of ϕ(x) are positive definite for x near x^* (say, for k≥ K_4). Therefore, the first truncation condition in Algorithm <ref> (line 3) will not be met.
All in all, for k≥ K_4, Algorithm <ref> terminates using (<ref>), making the final residual satisfy r_j = r_0 + H_kη_k = ϕ(x_k) + H_kη_k. Therefore,
η_k - η_k^* = H_k^-1(H_kη_k + ϕ(x_k)) = H_k^-1r_j<ref>≤β_2 r_0^1+θ,
where r_0 = ϕ(x_k) and θ > 0 are set in Algorithm <ref>. Again, by <ref>, let β_1 be the operator norm uniform upper bound of { H_k } when k≥ K_4. By <ref>, we have
η_k - η_k^*≤β_2 ϕ(x_k)^1+θ≤β_2 β_1^1+θ(x^*,x_k)^1+θ.
Let K = max{K_1,K_2,K_3,K_4}. Combining <ref> gives
(x_k+1,x^*) ≤ c_1η_k^1+ν + c_2η_k(x_k,x^*) + β_2(1 + β_1^1+θ)(x_k,x^*)^min{1+θ,1+μ}.
Similar to (<ref>), we also have η_k≤η_k^*≤β_2 ϕ(x_k)≤β_2 β_1 (x_k,x^*).
Plugging this back into (<ref>) gives
(x_k+1,x^*) ≤β_2( c_1 β_2^νβ_1^1+ν + c_2β_1 + (1 + β_1^1+θ) ) (x_k,x^*)^1+min{θ,ν,μ}.
Finally, letting c = β_2( c_1 β_2^νβ_1^1+ν + c_2β_1 + (1 + β_1^1+θ) ), we get the result.
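To make the role of the inner solver and of the stopping rule r_j ≤ r_0^1+θ concrete, the following Python/NumPy sketch outlines a Steihaug-type truncated conjugate gradient method for the model problem min_η <g, η> + 1/2<η, H η> subject to η≤Δ. It is a simplified stand-in for Algorithm <ref>, not our exact implementation: the negative-curvature and boundary exits loosely correspond to the two truncation conditions mentioned in the proof, and H is supplied as a matrix-vector product.

import numpy as np

def truncated_cg(grad, hess_vec, Delta, theta=1.0, max_iter=300):
    """Steihaug-type truncated CG for the trust-region model problem
    (a schematic version of the inner solver referred to in the text)."""
    eta = np.zeros_like(grad)
    r = grad.copy()                     # residual r_0 = grad, since eta_0 = 0
    d = -r
    tol = np.linalg.norm(r) ** (1.0 + theta)   # residual-based stopping rule ||r_j|| <= ||r_0||^{1+theta}
    for _ in range(max_iter):
        Hd = hess_vec(d)
        dHd = float(d @ Hd)
        if dHd <= 0:
            # first truncation: negative curvature -> move to the boundary along d
            return _to_boundary(eta, d, Delta)
        alpha = float(r @ r) / dHd
        eta_next = eta + alpha * d
        if np.linalg.norm(eta_next) >= Delta:
            # second truncation: the CG step leaves the trust region
            return _to_boundary(eta, d, Delta)
        r_next = r + alpha * Hd
        if np.linalg.norm(r_next) <= tol:
            return eta_next             # accuracy needed for the superlinear rate
        beta = float(r_next @ r_next) / float(r @ r)
        d = -r_next + beta * d
        eta, r = eta_next, r_next
    return eta

def _to_boundary(eta, d, Delta):
    # choose tau >= 0 with ||eta + tau * d|| = Delta
    a = float(d @ d)
    b = 2.0 * float(eta @ d)
    c = float(eta @ eta) - Delta ** 2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return eta + tau * d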
§ APPLICATION: SOLVING AUGMENTED LAGRANGIAN METHOD SUBPROBLEM
Our primary motivation for proposing a semismooth Riemannian trust region method is to develop a new technique tailored for solving the subproblem of an augmented Lagrangian method (ALM) on manifolds.
In this section, we briefly review ALM on manifolds and formulate its subproblem in the form of (<ref>).
Then, in <ref>, we assess the performance of Algorithm <ref> as the subproblem solver for ALM on manifolds.
We consider the following optimization problem
min_x f(x) + g(h_1(x)) s.t. h_2(x) ≤ 0, h_3(x) = 0,
where f,h_1,h_2 are continuously differentiable, g is lower semicontinuous, and { x: h_3(x)=0 } is a Riemannian manifold.
Recently, <cit.> and <cit.> proposed two manifold inexact augmented Lagrangian methods to tackle this problem with convergence guarantee.
By viewing the equality constraint as a manifold constraint and introducing two new variables, we get the reformulation:
min_x,y,z f(x) + g(y) + δ_ℝ_-^n(z)
s.t. x∈ℳ, y = h_1(x), z = h_2(x),
where δ_ℝ_-^n(z) is the indicator function that equals 0 if z≤ 0 and +∞ otherwise, replacing the inequality constraint.
Then the augmented Lagrangian function of the reformulation is
L_σ(x,y,z,λ,γ)
= f(x) + g(y) + δ_ℝ_-^n(z) + λ^T(h_1(x) - y) + γ^T(h_2(x) - z) + σ/2‖h_1(x) - y‖_2^2 + σ/2‖h_2(x) - z‖_2^2
= f(x) + g(y) + δ_ℝ_-^n(z) + σ/2‖h_1(x) - y + λ/σ‖_2^2 + σ/2‖h_2(x) - z + γ/σ‖_2^2 - (‖λ‖_2^2 + ‖γ‖^2_2)/(2σ).
The idea of ALM is to solve the minimization problem of L_σ with respect to x, y, and z respectively in each step. We then use the Moreau envelope to further simplify the subproblem. Using partial minimization, we have
min_yL_σ = min_y g(y) + σ/2‖h_1(x)+ λ/σ - y‖_2^2 = M_g^σ( h_1(x) + λ/σ),
min_zL_σ = min_z δ_ℝ_-^n(z) + σ/2‖h_2(x)+ γ/σ - z‖_2^2 = M_δ_ℝ_-^n^σ( h_2(x) + γ/σ),
where M_g^σ and M_δ_ℝ_-^n^σ are Moreau envelopes defined as follows:
M^σ_ψ(u) := min_x {ψ(x) + σ/2‖u - x‖_2^2}.
Once the optimal x is given, these two subproblems (the y-subproblem and the z-subproblem) can be directly solved using proximal operators, and their solutions are given by
y = argmin_y L_σ = prox_g/σ( h_1(x) + λ/σ),
z = argmin_z L_σ = prox_δ_ℝ_-^n/σ( h_2(x) + γ/σ) = proj_ℝ_-^n( h_2(x) + γ/σ).
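For the nonsmooth terms used later in our experiments, namely g = μ‖·‖_1 and the indicator of the nonpositive orthant, both proximal maps have simple closed forms. The Python/NumPy sketch below spells out these two updates; the function names are ours, and the ℓ1 choice of g is an assumption matching the CM and SPCA problems.

import numpy as np

def prox_l1(u, mu, sigma):
    # prox_{g/sigma}(u) for g(y) = mu * ||y||_1: entrywise soft-thresholding
    # with threshold mu / sigma (assumes the l1 regularizer used in CM/SPCA).
    t = mu / sigma
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def proj_nonpositive(u):
    # prox of the indicator of {z <= 0}, i.e. the Euclidean projection
    # onto the nonpositive orthant used for the z-update.
    return np.minimum(u, 0.0)

# y- and z-updates of the augmented Lagrangian, given the current x:
#   y = prox_l1(h1(x) + lam / sigma, mu, sigma)
#   z = proj_nonpositive(h2(x) + gam / sigma)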
Then the problem min_x,y,zL_σ is equivalent to solving the following problem:
x = argmin_x∈ℳ { φ(x,σ,λ,γ) := f(x) + M_g^σ( h_1(x) + λ/σ) + M_δ_ℝ_-^n^σ( h_2(x) + γ/σ) }.
Since the Moreau envelopes M_g^σ and M_δ_ℝ_-^n^σ have Lipschitz continuous gradients but are not necessarily twice differentiable, the x-subproblem (<ref>) is captured by (<ref>) and is the most difficult part of the problem. Hence, we usually refer to problem (<ref>) as the subproblem of the ALM.
For reference, we present the ALM using our trust region method as a subproblem solver in Algorithm <ref>.
We also remark that both M_g^σ and M_δ_ℝ_-^n^σ have semismooth gradient fields on the manifold ℳ.
<cit.> solves the subproblem using the Riemannian gradient descent method, while <cit.> employs a semismooth Newton method that falls back to the gradient descent method when encountering negative curvatures. It is worth noting that neither of them incorporates a full second-order method to solve the subproblem in the ALM, whereas our trust region method does.
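For orientation, the resulting outer loop can be sketched as follows. This is a schematic Python version, not the exact Algorithm <ref>: the multiplier updates are the classical ones, the inner tolerance ϵ_k and the penalty factor κ follow the parameter names used in our experiments, and the feasibility-progress test governed by τ is a common choice whose details may differ from our implementation.

import numpy as np

def alm_on_manifold(x0, h1, h2, solve_x_subproblem, prox_g,
                    sigma=1.0, kappa=1.25, tau=0.99, max_outer=100):
    """Schematic manifold ALM: x-step via an inner (trust region) solver,
    closed-form y/z-steps, dual ascent, and a penalty update."""
    x = x0
    lam = np.zeros(h1(x0).shape)     # dual variable for y = h1(x)
    gam = np.zeros(h2(x0).shape)     # dual variable for z = h2(x)
    prev_feas = np.inf
    for k in range(max_outer):
        eps_k = 0.8 ** k                                   # inner tolerance, as in the experiments
        x = solve_x_subproblem(x, lam, gam, sigma, eps_k)  # minimize phi(., sigma, lam, gam) over the manifold
        y = prox_g(h1(x) + lam / sigma, sigma)             # y-update via the proximal map of g
        z = np.minimum(h2(x) + gam / sigma, 0.0)           # z-update: projection onto {z <= 0}
        lam = lam + sigma * (h1(x) - y)                    # classical multiplier updates
        gam = gam + sigma * (h2(x) - z)
        feas = max(np.linalg.norm(h1(x) - y), np.linalg.norm(h2(x) - z))
        if feas > tau * prev_feas:                         # insufficient feasibility progress
            sigma *= kappa                                 #   -> increase the penalty parameter
        prev_feas = feas
    return x, y, z, lam, gam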
§ NUMERICAL EXPERIMENTS
In this paper, we conducted a performance evaluation of our algorithm on two distinct problem domains: compressed modes (CM) <cit.> and sparse principal component analysis (SPCA) <cit.>. To assess the effectiveness of our approach, we compared our results against several state-of-the-art algorithms: SOC <cit.>, ManPG <cit.>, ALMSSN <cit.>, accelerated ManPG (AManPG) <cit.>, accelerated Riemannian proximal gradient (ARPG) <cit.>, and manifold inexact augmented Lagrangian method (MIALM) <cit.>.
Our algorithm was implemented using the manopt package[https://www.manopt.org/index.html] in MATLAB and executed on a standard PC equipped with an AMD Ryzen 7 5800H with Radeon Graphics CPU and 16GB RAM. In our experimental results, we use “ALMSRTR” to represent our algorithm. To ensure statistical significance, we conducted 20 independent instances for each parameter setting and reported the average experimental results.
§.§ Compressed Modes in Physics
The compressed modes (CM) problem is a mathematical physics problem that focuses on obtaining sparse solutions for a specific class of problems, such as the Schrödinger equation in quantum mechanics. To induce sparsity, the wave function undergoes L_1 regularization, leading to compact support solutions known as compressed modes.
Following the framework proposed by <cit.>, the CM problem can be formulated as
min_P ∈St(n, r){tr(P^T H P) + μP_1},
where St(n,r) := {P ∈ℝ^n × r : P^TP = I_r } is the Stiefel manifold and μ is a regularization parameter.
For additional details, readers may refer to <cit.>. Our experimental setup was consistent with that of <cit.>.
In our experiments, we employ the retraction described in Algorithm <ref>, which utilizes the QR decomposition method introduced in <cit.>.
Additionally, to maintain consistency, we employ QR decomposition in ALMSSN in the subsequent experiments.
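For completeness, a QR-based retraction of this kind on St(n, r) can be written in a few lines of NumPy; the sketch below (ours, for illustration) returns the orthonormal Q-factor of P + ξ with the usual sign convention that makes the factorization unique.

import numpy as np

def qr_retraction(P, xi):
    # QR-based retraction on the Stiefel manifold St(n, r):
    # R_P(xi) = qf(P + xi), the Q-factor of P + xi with the diagonal
    # of the triangular factor made positive for uniqueness.
    Q, R = np.linalg.qr(P + xi)
    signs = np.sign(np.diag(R))
    signs[signs == 0] = 1.0        # guard against exact zeros on the diagonal
    return Q * signs               # flip column signs so that diag(R) > 0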
Implementation details. It is worth noting that SOC considers an equivalent problem, which can be expressed as follows:
min_X, Q, P ∈ℝ^n × r tr(P^T H P) + μX_1
s.t. X=P, Q=P, Q^T Q=I_r .
The Lagrangian of the equivalent problem is given by:
L_S(Q, P, X, Γ, Λ) = tr(P^T H P) + μX_1 + tr(Γ^T(Q-P)) + tr(Λ^T(X-P)),
where P, X ∈ℝ^n× r and Q ∈St(n, r). Therefore, the termination conditions for SOC are as follows:
Q-P_∞/max{Q_F,P_F}+1 + X-P_∞/max{X_F,P_F}+1≤ 5 × 10^-7,
grad_Q L_S_∞/Q_F+1 + ∇_P L_S_∞/P_F+1 + min _G ∈∂_X L_SG_∞/X_F+1≤ 5 × 10^-5.
Similar to MIALM and ALMSSN, our algorithm translates the problem to
min _P, Q ∈ℝ^n × r{tr(P^T H P) + μQ_1 },
s.t. P=Q, P ∈St(n, r) .
The Lagrangian is given by L_N(P, Q, Λ) = tr(P^T H P) + μQ_1 + tr(Λ^T(P-Q)), where Q ∈ℝ^n × r and P ∈St(n, r).
We terminate these three algorithms when the following conditions hold:
P-Q_∞/max{P_F,Q_F}+1≤ 5 × 10^-7,
grad_P L_N_∞/P_F+1 + min _G ∈∂_Q L_NG_∞/Q_F+1≤ 5 × 10^-5.
Following the same termination criterion as <cit.>, we terminate ManPG, AManPG, and ARPG when t^-1V_*_∞ /(P_F+1) ≤ 5 × 10^-5, where
V_* = argmin_V ∈ T_P St(n, r) {⟨grad_P tr(P^T H P), V⟩ + 1/(2t)‖V‖_F^2 + μ‖P+V‖_1}.
We adopt the notation 𝐏_P V to represent the projection of a vector V ∈ℝ^n × r onto T_P St(n, r). Since -V_* / t ∈ 2 𝐏_P(HP) + μ𝐏_P(∂P+V_*_1), this termination criterion serves as an approximation of the first-order optimality condition for the CM problem, namely, the condition 0 ∈ 2 𝐏_P(HP) + μ𝐏_P(∂P_1).
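The projection 𝐏_P onto T_P St(n, r) appearing in this criterion has the standard closed form 𝐏_P(V) = V - P sym(P^T V); a short NumPy sketch (ours, for illustration) is given below.

import numpy as np

def proj_tangent_stiefel(P, V):
    # Orthogonal projection of V onto the tangent space of St(n, r) at P:
    # P_P(V) = V - P * sym(P^T V), where sym(A) = (A + A^T) / 2.
    PtV = P.T @ V
    return V - P @ ((PtV + PtV.T) / 2.0)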
The implementations of ManPG, AManPG, ARPG, and SOC in our study are consistent with the approaches employed in <cit.>. We directly utilize the same codes and parameters as described in that particular work. Similarly, for MIALM, we adopt the codes and parameters provided in <cit.>, while the codes and parameters of ALMSSN are based on the specifications presented in <cit.>. In our experimentation, we establish termination criteria based on predefined conditions, and additionally, we terminate all six methods if the number of iterations exceeds 30,000.
In Algorithm <ref>, the sequence ϵ_k is set to ϵ_k = 0.8^k, and the initial value of σ_0 is set to 1. The dual variable λ_0 is initialized as a null matrix, while the primal variable x_0 is randomly generated using the same procedure as the other algorithms. The parameter τ is set to 0.99, and the initial value of the parameter κ is chosen as 1.25. The initial maximum number of iterations in Algorithm <ref> is set to 40 for n ≥ 500 and 60 for n < 500, where n represents the data dimension. Both the maximum number of iterations of Algorithm <ref> and the value of κ are adaptively adjusted during the iteration based on the accuracy of the current iteration. The maximum number of iterations for Algorithm <ref> used to solve the model problem in Algorithm <ref> is set to 300. In Algorithm <ref>, we set Δ̅ = 10, Δ_0 = 0.01, and ρ' = 0.1. During our experiments, we noticed that ManPG exhibited notably slow performance when n=500, which led us to implement a 120-second time limit for the algorithm's termination.
We present the results of our experiments in Table <ref> and visualize them in Figure <ref>. The findings in Table <ref> indicate that all algorithms achieve similar objective function values across different settings. However, in the majority of cases, our proposed method demonstrates superior computational efficiency, suggesting its practical advantage. From Figure <ref>, we observe that first-order algorithms encounter limitations in achieving the desired termination condition, particularly when r is large. In contrast, our method, which consistently leverages second-order information, exhibits improved convergence compared to both first-order and second-order algorithms that do not consistently utilize second-order information, even for large r. This underscores the superiority of our method, which relies on the consistent utilization of second-order information.
§.§ Sparse Principal Component Analysis
This section presents the results of our experiments on the sparse principal component analysis (SPCA) problem.
SPCA is a widely used technique in data analysis that offers better interpretability compared to traditional principal component analysis. It achieves this by incorporating lasso regularization, which produces modified principal components with sparse loadings. The SPCA problem can be formulated as follows:
min_P ∈St(n, r){-tr(P^T A^T A P)+μ‖ P‖_1},
where St(n,r) := {P ∈ℝ^n × r:P^TP = I_r } represents the Stiefel manifold, and A ∈ℝ^p× n is the data matrix with p observations and n variables.
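As with CM, the smooth part of this objective and its Euclidean gradient, which the Riemannian machinery then projects and retracts, are straightforward; a small NumPy sketch (ours, for illustration) is:

import numpy as np

def spca_smooth_value_grad(P, A):
    # Smooth part of the SPCA objective, f(P) = -tr(P^T A^T A P), and its
    # Euclidean gradient -2 A^T A P; the nonsmooth term mu*||P||_1 is handled
    # separately by the proximal / Moreau-envelope machinery described earlier.
    AP = A @ P
    value = -np.trace(P.T @ (A.T @ AP))
    grad = -2.0 * (A.T @ AP)
    return value, grad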
Comparison of CM. Significant results are presented in bold. The setting for (n, r, μ) = (1000, 20, 0.1) remains fixed, while one of the dimensions varies. The ManPG approach refers to the adaptive version proposed by <cit.> (ManPG-Ada). LS-1 denotes the ALMSSN technique with the first line-search method introduced by <cit.>, while LS-2 utilizes the second line-search method proposed by <cit.>.
ManPG AmanPG ARPG SOC MIALM LS-1 LS-2 ALMSRTR
Running time (s)
n 200 23.61 5.76 9.16 2.91 20.93 1.58 2.06 1.30
500 115.56 13.95 13.27 7.15 24.11 5.05 7.61 4.80
1000 69.03 12.70 15.59 22.68 12.11 14.19 12.97 9.71
1500 62.00 59.74 50.38 48.56 70.28 29.28 34.99 21.69
2000 42.02 46.15 49.50 46.71 73.64 29.56 26.98 20.03
r 10 14.64 13.88 13.28 16.60 30.24 51.76 8.96 11.61
15 33.49 10.54 12.93 14.95 8.39 47.95 10.39 8.39
25 91.43 23.43 23.48 26.61 54.02 15.36 17.48 14.00
30 86.31 28.11 27.07 22.09 29.51 10.18 13.06 13.35
μ 0.05 93.51 16.05 14.16 6.72 8.74 6.06 6.72 7.07
0.15 57.12 18.55 21.45 31.57 25.97 12.97 14.41 11.84
0.20 51.48 13.70 21.62 34.21 23.88 19.24 19.73 10.68
0.25 36.41 12.65 22.14 35.83 13.48 53.62 30.18 12.47
Loss function value
n 200 14.18 14.18 14.18 14.18 14.18 14.17 14.17 14.16
500 18.63 18.63 18.63 18.63 18.63 18.63 18.63 18.63
1000 23.37 23.36 23.36 23.36 23.36 23.36 23.36 23.36
1500 26.97 26.86 26.86 26.86 26.86 26.86 26.86 26.86
2000 29.98 29.74 29.74 29.74 29.74 29.74 29.74 29.74
r 10 10.77 10.74 10.74 10.74 10.74 10.74 10.74 10.74
15 16.51 16.46 16.46 16.46 16.46 16.46 16.46 16.46
25 32.01 32.00 32.00 32.00 32.00 32.00 32.00 32.00
30 42.95 42.93 42.93 42.94 42.94 42.92 42.92 42.93
μ 0.05 15.15 15.14 15.14 15.14 15.14 15.14 15.14 15.14
0.15 31.05 31.01 31.01 31.01 31.01 31.01 31.01 31.01
0.20 38.32 38.27 38.27 38.28 38.27 38.27 38.27 38.27
0.25 45.36 45.26 45.26 45.29 45.26 45.26 45.26 45.26
To evaluate the efficacy of our proposed algorithm, we conducted a comparative evaluation against ALMSSN, AManPG, ARPG, and SOC under high-accuracy termination criteria. To ensure reliable convergence, we set the termination threshold at 5 × 10^-8 for synthetic data and 5 × 10^-7 for real data.
The termination condition remains consistent with the preceding section, with the exception that we substitute H with -A^TA.
Due to not meeting our accuracy requirements or exhibiting excessively long runtimes, we excluded ManPG and MIALM from our experiments. The parameters and codes used for ALMSSN are consistent with those reported in <cit.>.
To ensure a high level of accuracy, we incorporated several enhancements into the AManPG and ARPG codes. Our primary modification focused on refining the subproblem solution, aiming to improve overall accuracy. Furthermore, we introduced specific modifications to the AManPG code to address situations where its accuracy remains almost unchanged after a certain iteration number. The parameters employed for AManPG aligned with those documented in <cit.>, while the parameters for ARPG corresponded to those specified in <cit.>. For SOC, we maintained consistency by employing the parameters and code outlined in <cit.>.
In Algorithm <ref>, we set ϵ_k=0.95^k and τ=0.9. The initial value of κ is set to 1.25. In Algorithm <ref>, the maximum number of iterations is set to 50 when n < 2000, 70 when 2000 ≤ n < 3000, and 90 when n ≥ 3000. Both the maximum number of iterations and the value of κ are adaptively adjusted during each iteration based on the accuracy of the current iteration. The remaining parameters in our algorithm are consistent with those in the CM algorithm.
The data matrix A is generated using two different methods:
(1) Synthetic.
The data matrix A is randomly generated using the method described in <cit.>. Various ill-conditioned matrices A ∈ℝ^50× n can be obtained in different dimensions.
(2) Real.
The data matrix A is selected from real datasets. The gene expression datasets Arabidopsis and Leukemia are obtained from <cit.>. Additionally, the NCI 60 dataset (Staunton100 and Staunton200 in the tables) is chosen from <cit.>. Finally, the yeast eQTL dataset, known as realEQTL, is selected from <cit.>.
Our experimental findings unveiled the potential failure of AManPG in cases where the solution of the Lyapunov equation <cit.> is non-unique or non-existent, even when utilizing the original unmodified code. To ensure a fair and equitable comparison among algorithms, we excluded scenarios where AManPG may encounter such challenges. Additionally, we observed that ARPG frequently terminated prematurely without achieving the desired accuracy due to its stopping criteria.
The results obtained from our experiments with synthetic data are presented in <ref> and visualized in <ref>. The outcomes obtained from real data are reported in <ref>. It proved to be challenging to achieve convergence of SOC and ARPG towards the termination condition threshold, whereas our proposed method consistently exhibited convergence properties akin to second-order algorithms. In terms of objective function values, our method demonstrated superior performance in almost all experiments. With the exception of AManPG, which was specifically designed for the SPCA problem, our method generally exhibited the lowest time consumption. These results serve as compelling evidence for the effectiveness of our second-order algorithm, which leverages second-order information, and its superiority over alternative algorithms in tackling the SPCA problem. The experimental results further reinforce the assertion made in the previous section regarding the exceptional performance of our algorithm in similar problem domains.
Comparison of SPCA with synthetic data. Bold numbers indicate superior results. The data matrix A∈ℝ^50× n. The tuple (n, r, μ) = (2000, 20, 1.00), with one of the elements varying.
AManPG ARPG SOC LS-1 LS-2 ALMSRTR
Running time (s)
n 500 15.02 56.77 34.25 6.82 6.52 5.89
1000 21.32 74.65 99.26 14.66 12.83 10.80
1500 30.18 86.53 212.87 26.75 22.17 17.60
2000 31.28 77.78 365.03 36.24 29.46 27.08
2500 36.34 84.11 535.88 51.38 42.45 38.08
3000 42.09 79.32 768.18 67.21 47.13 54.81
r 5 1.47 16.54 226.60 14.88 7.76 8.79
10 4.96 8.06 298.78 25.07 18.19 21.18
15 17.19 23.30 337.94 31.18 24.61 22.50
25 45.01 163.72 405.76 51.50 40.36 33.31
μ 0.25 45.87 57.965 369.03 91.646 109.44 70.67
0.50 33.01 42.74 369.25 52.09 51.33 37.94
0.75 38.32 59.69 369.09 34.41 32.88 33.27
1.25 34.75 91.48 368.12 34.28 29.67 37.43
Loss function: -tr(P^T A^T A P)+μ‖P‖_1
n 500 -347.78 -347.75 -346.87 -347.59 -347.59 -349.15
1000 -762.22 -762.19 -761.43 -761.90 -761.90 -763.53
1500 -1209.59 -1209.53 -1209.42 -1209.70 -1209.70 -1211.15
2000 -1621.05 -1621.37 -1620.70 -1621.05 -1621.05 -1622.98
2500 -2082.34 -2082.72 -2080.97 -2082.82 -2082.82 -2084.18
3000 -2516.72 -2516.77 -2515.30 -2517.06 -2517.06 -2518.69
r 5 -1520.36 -1520.33 -1520.32 -1520.47 -1520.47 -1520.42
10 -1630.61 -1630.62 -1630.35 -1630.36 -1630.36 -1631.21
15 -1631.77 -1631.73 -1631.46 -1632.13 -1632.13 -1633.14
25 -1606.38 -1606.50 -1605.76 -1606.85 -1606.85 -1607.78
μ 0.25 -1873.23 -1873.18 -1870.19 -1873.26 -1873.26 -1873.39
0.50 -1781.25 -1781.05 -1779.03 -1781.27 -1781.27 -1781.89
0.75 -1700.87 -1700.10 -1699.01 -1700.12 -1700.12 -1701.46
1.25 -1555.04 -1555.02 -1555.26 -1555.56 1555.56 -1557.73
Comparison of SPCA with real data. Bold numbers indicate superior results. The slash indicates that the value of the termination condition at the maximum number of iterations is still greater than 10^-3, indicating that the manifold constraint is not fully satisfied and the loss function value is meaningless. It should be noted that our algorithm ALMSRTR meets the threshold of the termination condition for all settings in this table.
Dataset (n,r,μ) AManPG ARPG SOC LS-1 LS-2 ALMSRTR
Running time (s)
Arabidopsis (834, 10, 0.50) 14.16 28.58 34.83 37.32 11.44 9.93
(834, 10, 0.25) 6.05 28.14 34.64 27.88 27.22 7.14
(834, 15, 0.50) 22.16 36.37 45.09 30.73 30.31 19.17
(834, 15, 0.25) 19.50 43.30 44.67 38.73 36.21 23.46
Leukemia (1255, 10, 0.50) 12.98 29.21 97.25 14.33 11.76 18.83
(1255, 10, 0.25) 8.02 35.22 96.83 33.36 57.98 13.94
(1255, 15, 0.50) 16.18 48.22 109.90 55.04 46.51 21.79
(1255, 15, 0.25) 33.65 45.83 109.13 49.12 64.99 21.46
realEQTL (1260, 10, 0.50) 37.00 35.23 96.64 40.47 40.53 9.99
(1260, 10, 0.25) 36.37 35.43 97.10 64.28 65.10 38.91
(1260, 15, 0.50) 50.30 50.17 111.09 55.80 59.05 41.13
(1260, 15, 0.25) 49.73 49.75 111.13 136.97 140.96 31.08
Staunton100 (1517, 10, 0.50) 9.35 37.50 150.69 10.19 10.10 10.15
(1517, 10, 0.25) 9.59 37.09 149.28 44.03 48.41 8.78
(1517, 15, 0.50) 7.85 44.25 163.09 24.41 20.19 40.04
(1517, 15, 0.25) 21.08 51.93 162.80 29.52 30.56 33.25
Staunton200 (2455, 10, 0.50) 16.83 39.68 450.72 98.19 24.75 29.02
(2455, 10, 0.25) 7.99 43.35 451.48 70.53 80.20 37.96
(2455, 15, 0.50) 19.26 61.04 461.16 62.75 55.36 70.18
(2455, 15, 0.25) 44.01 54.17 461.32 91.42 60.66 35.24
Loss function: -tr(P^T A^T A P)+μ‖P‖_1
Arabidopsis (834, 10, 0.50) -65867.82 -65867.73 -65864.81 -65867.42 -65867.42 -65867.51
(834, 10, 0.25) -65922.30 -65922.30 -65938.12 -65922.14 -65922.14 -65922.31
(834, 15, 0.50) -72553.02 -72552.31 -72546.44 -72552.46 -72552.46 -72553.54
(834, 15, 0.25) -72633.64 -72633.51 -72629.79 -72633.59 -72633.59 -72634.14
Leukemia (1255, 10, 0.50) -53787.19 -53787.27 -53780.28 -53787.02 -53787.02 -53787.26
(1255, 10, 0.25) -53854.00 -53854.00 -53849.77 -53853.94 -53853.94 -53854.03
(1255, 15, 0.50) -60108.81 -60109.28 -60101.74 -60109.03 -60109.03 -60109.76
(1255, 15, 0.25) -60208.75 -60208.81 -60203.91 -60208.78 -60208.77 -60209.07
realEQTL (1260, 10, 0.50) -191911.63 -191911.63 / -191912.62 -191912.62 -191912.85
(1260, 10, 0.25) -191972.26 -191972.26 / -191972.76 -191972.76 -191972.88
(1260, 15, 0.50) -207390.14 -207390.16 / -207390.17 -207390.17 -207390.61
(1260, 15, 0.25) -207477.62 -207477.61 / -207477.65 -207477.65 -207477.87
Staunton100 (1517, 10, 0.50) -37081.99 -37080.75 -37080.17 -37081.82 -37081.82 -37082.06
(1517, 10, 0.25) -37147.01 -37147.02 -37146.08 -37147.54 -37147.54 -37147.66
(1517, 15, 0.50) -42827.91 -42827.66 -42822.76 -42827.88 -42827.88 -42828.22
(1517, 15, 0.25) -42926.71 -42926.59 -42923.03 -42926.59 -42926.59 -42926.68
Staunton200 (2455, 10, 0.50) -43880.94 -43880.94 -43878.25 -43880.78 -43880.78 -43880.98
(2455, 10, 0.25) -43962.73 -43962.73 -43960.11 -43962.68 -43962.68 -43962.73
(2455, 15, 0.50) -50790.16 -50790.20 -50779.81 -50790.27 -50790.27 -50790.29
(2455, 15, 0.25) -50912.84 -50912.86 -50905.02 -50912.65 -50912.65 -50912.93
§ CONCLUSION
In this work, we have proposed a trust region method for minimizing an SC^1 function on a Riemannian manifold. We have established our method's global convergence, local convergence near nondegenerate local minima, and superlinear local convergence rate under relaxed smoothness requirements on the objective function and the retraction.
We have also provided the first theoretical guarantee of the trust region's eventual inactivity near nondegenerate local minima for trust region methods applied to SC^1 functions, a result that is new even in the Euclidean case.
To demonstrate the superiority of our method, we have applied it to solve the subproblem of the augmented Lagrangian method on manifolds and performed extensive numerical experiments. Our results show that our proposed method outperforms existing state-of-the-art methods, achieving better convergence performance.
55
urlstyle
[Absil et al.(2006)Absil, Baker, and Gallivan]absil2006convergence
P.-A. Absil, C. G. Baker, and K. A. Gallivan.
Convergence analysis of riemannian trust-region methods.
Technical report, 2006.
URL
<http://www.optimization-online.org/DB_HTML/2006/06/1416.html>.
[Absil et al.(2007)Absil, Baker, and
Gallivan]absilTrustRegionMethodsRiemannian2007
P.-A. Absil, C. G. Baker, and K. A. Gallivan.
Trust-region methods on riemannian manifolds.
Foundations of Computational Mathematics, 7:0
303–330, 2007.
[Absil et al.(2008)Absil, Mahony, and
Sepulchre]absilOptimizationAlgorithmsMatrix2008
P.-A. Absil, R. Mahony, and R. Sepulchre.
Optimization Algorithms on Matrix Manifolds.
Princeton University Press, 2008.
ISBN 978-0-691-13298-3.
[Beck and Rosset(2023)]beck2023dynamic
A. Beck and I. Rosset.
A dynamic smoothing technique for a class of nonsmooth optimization
problems on manifolds.
Technical report, School of Mathematics Sciences, Tel Aviv
University, 2023.
URL <https://www.tau.ac.il/ becka/manifold7.pdf>.
[Bendory et al.(2017)Bendory, Eldar, and Boumal]bendory2017non
T. Bendory, Y. C. Eldar, and N. Boumal.
Non-convex phase retrieval from stft measurements.
IEEE Transactions on Information Theory, 640
(1):0 467–484, 2017.
[Boothby(2003)]boothbyIntroductionDifferentiableManifolds2003
W. M. Boothby.
An Introduction to Differentiable Manifolds and Riemannian
Geometry.
Academic Press, rev. 2nd ed edition, 2003.
ISBN 978-0-12-116051-7.
[Boumal(2016)]boumal2016nonconvex
N. Boumal.
Nonconvex phase synchronization.
SIAM Journal on Optimization, 260 (4):0
2355–2377, 2016.
[Boumal(2023)]boumalIntroductionOptimizationSmooth2020a
N. Boumal.
An introduction to optimization on smooth manifolds.
Cambridge University Press, 2023.
10.1017/9781009166164.
[Boumal and Absil(2011)]boumal2011rtrmc
N. Boumal and P.-A. Absil.
Rtrmc: A riemannian trust-region method for low-rank matrix
completion.
Advances in neural information processing systems, 24, 2011.
[Cai et al.(2019)Cai, Liu, and Wang]cai2019fast
J.-F. Cai, H. Liu, and Y. Wang.
Fast rank-one alternating minimization algorithm for phase retrieval.
Journal of Scientific Computing, 790 (1):0
128–147, 2019.
[Chen et al.(2020)Chen, Ma, So, and Zhang]chen2020proximal
S. Chen, S. Ma, M.-C. A. So, and T. Zhang.
Proximal gradient method for nonsmooth optimization over the stiefel
manifold.
SIAM Journal on Optimization, 300 (1):0
210–239, 2020.
[Chen et al.(2016)Chen, Ji, and You]chen2016augmented
W. Chen, H. Ji, and Y. You.
An augmented lagrangian method for 1-regularized optimization
problems with orthogonality constraints.
SIAM Journal on Scientific Computing, 380
(4):0 B570–B592, 2016.
[Cho and Lee(2017)]cho2017riemannian
M. Cho and J. Lee.
Riemannian approach to batch normalization.
Advances in Neural Information Processing Systems, 30, 2017.
[Clarke(1990)]clarkeOptimizationNonsmoothAnalysis1990
F. H. Clarke.
Optimization and Nonsmooth Analysis.
Number 5 in Classics in Applied Mathematics. Society for industrial
and applied mathematics, 1990.
ISBN 978-0-89871-256-8.
[Conn et al.(2000)Conn, Gould, and Toint]conn2000Trustregion
A. R. Conn, N. I. Gould, and P. L. Toint.
Trust Region Methods.
SIAM, 2000.
[Culhane et al.(2003)Culhane, Perrière, and
Higgins]culhane2003cross
A. C. Culhane, G. Perrière, and D. G. Higgins.
Cross-platform comparison and visualisation of gene expression data
using co-inertia analysis.
BMC bioinformatics, 40 (1):0 1–15, 2003.
[Daniilidis et al.(2018)Daniilidis, Deville, Durand-Cartagena, and
Rifford]daniilidisSelfcontractedCurvesRiemannian2018
A. Daniilidis, R. Deville, E. Durand-Cartagena, and L. Rifford.
Self-contracted curves in riemannian manifolds.
Journal of Mathematical Analysis and Applications,
4570 (2):0 1333–1352, 2018.
[de Oliveira and Ferreira(2020)]deoliveiraNewtonMethodFinding2020
F. R. de Oliveira and O. P. Ferreira.
Newton method for finding a singularity of a special class of locally
lipschitz continuous vector fields on riemannian manifolds.
Journal of Optimization Theory and Applications, 185:0
522–539, 2020.
[Deng and Peng(2022)]kangkang2022inexact
K. Deng and Z. Peng.
A manifold inexact augmented lagrangian method for nonsmooth
optimization on riemannian submanifolds in euclidean space.
IMA Journal of Numerical Analysis, 2022.
[Diepeveen and Lellmann(2021)]diepeveen2021inexact
W. Diepeveen and J. Lellmann.
An inexact semismooth newton method on riemannian manifolds with
application to duality-based total variation denoising.
SIAM Journal on Imaging Sciences, 140 (4):0
1565–1600, 2021.
[do Carmo(2016)]carmoDifferentialGeometryCurves2016
M. P. do Carmo.
Differential Geometry of Curves and Surfaces.
Dover Publications Inc, revised & updated second edition,
2016.
ISBN 978-0-486-80699-0.
[Gabay(1982)]gabay1982minimizing
D. Gabay.
Minimizing a differentiable function over a differential manifold.
Journal of Optimization Theory and Applications, 370
(2):0 177–219, 1982.
[Ghahraei et al.(2017)Ghahraei, Hosseini, and
Pouryayevali]ghahraei2017PseudoJacobiancharacterization
E. Ghahraei, S. Hosseini, and M. R. Pouryayevali.
Pseudo-jacobian and characterization of monotone vector fields on
riemannian manifolds.
J. Convex Anal, 240 (1):0 149–168, 2017.
[Grohs and Hosseini(2016)]grohs2016nonsmooth
P. Grohs and S. Hosseini.
Nonsmooth trust region algorithms for locally lipschitz functions on
riemannian manifolds.
IMA Journal of Numerical Analysis, 360 (3):0
1167–1192, 2016.
[Hiriart-Urruty et al.(1984)Hiriart-Urruty, Strodiot, and
Nguyen]hiriart-urruty1984GeneralizedHessian
J.-B. Hiriart-Urruty, J.-J. Strodiot, and V. H. Nguyen.
Generalized Hessian matrix and second-order optimality conditions
for problems withC 1,1 data.
Applied Mathematics & Optimization, 110 (1):0
43–56, Feb. 1984.
ISSN 0095-4616, 1432-0606.
10.1007/BF01442169.
[Hu et al.(2018)Hu, Milzarek, Wen, and Yuan]hu2018adaptive
J. Hu, A. Milzarek, Z. Wen, and Y. Yuan.
Adaptive quadratically regularized newton method for riemannian
optimization.
SIAM Journal on Matrix Analysis and Applications, 390
(3):0 1181–1207, 2018.
[Hu et al.(2020)Hu, Liu, Wen, and Yuan]hu2020brief
J. Hu, X. Liu, Z.-W. Wen, and Y.-X. Yuan.
A brief introduction to manifold optimization.
Journal of the Operations Research Society of China,
80 (2):0 199–248, 2020.
[Huang and Wei(2021)]huang2021riemannian
W. Huang and K. Wei.
Riemannian proximal gradient methods.
Mathematical Programming, pages 1–43, 2021.
[Huang and Wei(2022)]huang2022extension
W. Huang and K. Wei.
An extension of fast iterative shrinkage-thresholding algorithm to
riemannian optimization for sparse principal component analysis.
Numerical Linear Algebra with Applications, 290
(1):0 e2409, 2022.
[Jiang and Qi(1996)]jiang1996Globallysuperlinearly
H. Jiang and L. Qi.
Globally and superlinearly convergent trust-region algorithm for
convex sc 1-minimization problems and its application to stochastic programs.
Journal of optimization theory and applications, 90:0
649–669, 1996.
[Kummer(1988)]kummer1988Newtonmethod
B. Kummer.
Newton’s method for non-differentiable functions.
Advances in mathematical optimization, 450
(1988):0 114–125, 1988.
[Lai and Osher(2014)]lai2014splitting
R. Lai and S. Osher.
A splitting method for orthogonality constrained problems.
Journal of Scientific Computing, 580 (2):0
431–449, 2014.
[Ledyaev and Zhu(2007)]ledyaevNonsmoothAnalysisSmooth2007
Y. Ledyaev and Q. Zhu.
Nonsmooth analysis on smooth manifolds.
Transactions of the American Mathematical Society,
3590 (8):0 3687–3732, 2007.
[Lee(2013)]lee2012Smoothmanifolds
J. M. Lee.
Introduction to Smooth Manifolds.
Springer, 2013.
[Lee(2018)]lee2018IntroductionRiemannian
J. M. Lee.
Introduction to Riemannian manifolds, volume 2.
Springer, 2018.
[Li et al.(2010)Li, Toh, et al.]li2010inexact
L. Li, K.-C. Toh, et al.
An inexact interior point method for ℓ_1-regularized sparse
covariance selection.
Math. Program. Comput., 20 (3-4):0 291–315,
2010.
[Liu et al.(2017)Liu, Yue, and So]liu2017estimation
H. Liu, M.-C. Yue, and A. M.-C. So.
On the estimation performance and convergence rate of the generalized
power method for phase synchronization.
SIAM Journal on Optimization, 270 (4):0
2426–2446, 2017.
[Lu and Zhang(2012)]lu2012augmented
Z. Lu and Y. Zhang.
An augmented lagrangian approach for sparse principal component
analysis.
Mathematical Programming, 1350 (1):0 149–193,
2012.
[Montanari and Richard(2015)]montanari2015non
A. Montanari and E. Richard.
Non-negative principal component analysis: Message passing algorithms
and sharp asymptotics.
IEEE Transactions on Information Theory, 620
(3):0 1458–1484, 2015.
[Nocedal and Wright(2006)]nocedalNumericalOptimization2006
J. Nocedal and S. J. Wright.
Numerical Optimization.
Springer Series in Operations Research. Springer, 2nd ed edition,
2006.
ISBN 978-0-387-30303-1.
[Noll and Rondepierre(2013)]noll2013Convergencelinesearch
D. Noll and A. Rondepierre.
Convergence of linesearch and trust-region methods using the
kurdyka–łojasiewicz inequality.
In Computational and Analytical Mathematics: In Honor of
Jonathan Borwein's 60th Birthday, pages 593–611. Springer, 2013.
[Ozoliņš et al.(2013)Ozoliņš, Lai, Caflisch,
and Osher]ozolicnvs2013compressed
V. Ozoliņš, R. Lai, R. Caflisch, and S. Osher.
Compressed modes for variational problems in mathematics and physics.
Proceedings of the National Academy of Sciences, 1100
(46):0 18368–18373, 2013.
[Pang and Qi(1995)]pang1995globally
J.-S. Pang and L. Qi.
A globally convergent newton method for convex sc1 minimization
problems.
Journal of Optimization Theory and Applications, 850
(3):0 633–648, 1995.
[Qi and Sun(1993)]qi1993nonsmoothversion
L. Qi and J. Sun.
A nonsmooth version of Newton's method.
Mathematical programming, 580 (1-3):0
353–367, 1993.
[Qi and Womersley(1995)]qi1995sqp
L. Qi and R. S. Womersley.
An sqp algorithm for extended linear-quadratic problems in stochastic
programming.
Annals of Operations Research, 560 (1):0
251–285, 1995.
[Steihaug(1983)]steihaugConjugateGradientMethod1983
T. Steihaug.
The conjugate gradient method and trust regions in large scale
optimization.
SIAM Journal on Numerical Analysis, 200 (3):0
626–637, 1983.
[Sun et al.(1997)Sun, Fukushima, and Qi]sun1997computablegeneralized
D. Sun, M. Fukushima, and L. Qi.
A computable generalized Hessian of the D-gap function and
Newton-type methods for variational inequality problems.
Complementarity and Variational Problems: State of the Art, MC
Ferris and JS Pang (eds.), SIAM, Philadelphia, PA, pages 452–472, 1997.
[Toint(1981)]toint1981efficientsparsity
P. Toint.
Towards an efficient sparsity exploiting Newton method for
minimization.
In Sparse Matrices and Their Uses, pages 57–88. Academic
press, 1981.
[Vandereycken(2013)]vandereycken2013low
B. Vandereycken.
Low-rank matrix completion by riemannian optimization.
SIAM Journal on Optimization, 230 (2):0
1214–1236, 2013.
[Wen and Yin(2013)]wen2013feasible
Z. Wen and W. Yin.
A feasible method for optimization with orthogonality constraints.
Mathematical Programming, 1420 (1):0 397–434,
2013.
[Yang et al.(2000)Yang, LI, and Zhou]yang2000trustregion
Y. Yang, D. LI, and S. Zhou.
A trust region method for a semismooth reformulation to variational
inequality problems.
Optimization Methods and Software, 140 (1-2):0
139–157, 2000.
[Zhang et al.(2016)Zhang, J Reddi, and Sra]zhang2016riemannian
H. Zhang, S. J Reddi, and S. Sra.
Riemannian svrg: Fast stochastic optimization on riemannian
manifolds.
Advances in Neural Information Processing Systems, 29, 2016.
[Zhou et al.(2022)Zhou, Bao, Ding, and Zhu]zhou2022semismooth
Y. Zhou, C. Bao, C. Ding, and J. Zhu.
A semismooth Newton based augmented Lagrangian method for
nonsmooth optimization on matrix manifolds.
Mathematical Programming, pages 1–61, 2022.
[Zhu et al.(2008)Zhu, Zhang, Smith, Drees, Brem, Kruglyak, Bumgarner,
and Schadt]zhu2008integrating
J. Zhu, B. Zhang, E. N. Smith, B. Drees, R. B. Brem, L. Kruglyak, R. E.
Bumgarner, and E. E. Schadt.
Integrating large-scale functional genomic data to dissect the
complexity of yeast regulatory networks.
Nature genetics, 400 (7):0 854–861, 2008.
[Zou et al.(2006)Zou, Hastie, and Tibshirani]zou2006sparse
H. Zou, T. Hastie, and R. Tibshirani.
Sparse principal component analysis.
Journal of computational and graphical statistics, 150
(2):0 265–286, 2006.
Variation-aware Vision Transformer Quantization
Xijie Huang, Zhiqiang Shen, Kwang-Ting Cheng
(arXiv:2307.00331)
===============================================
Despite the remarkable performance of Vision Transformers (ViTs) in various visual tasks, the expanding computation and model size of ViTs have increased the demand for improved efficiency during training and inference. To address the heavy computation and parameter drawbacks, quantization is frequently studied in the community as a representative model compression technique and has seen extensive use on CNNs. However, due to the unique properties of CNNs and ViTs, the quantization applications on ViTs are still limited and underexplored. In this paper, we identify the difficulty of ViT quantization on its unique variation behaviors, which differ from traditional CNN architectures. The variations indicate the magnitude of the parameter fluctuations and can also measure outlier conditions. Moreover, the variation behaviors reflect the various sensitivities to the quantization of each module. The quantization sensitivity analysis and comparison of ViTs with CNNs help us locate the underlying differences in variations. We also find that the variations in ViTs cause training oscillations, bringing instability during quantization-aware training (QAT). Correspondingly, we solve the variation problem with an efficient knowledge-distillation-based variation-aware quantization method. The multi-crop knowledge distillation scheme can accelerate and stabilize the training and alleviate the variation's influence during QAT. We also proposed a module-dependent quantization scheme and a variation-aware regularization term to suppress the oscillation of weights. On ImageNet-1K, we obtain a 77.66% Top-1 accuracy on the extremely low-bit scenario of 2-bit Swin-T, outperforming the previous state-of-the-art quantized model by 3.35%. Code and models are publicly available at <https://github.com/HuangOwen/VVTQ>.
§ INTRODUCTION
Vision Transformers (ViTs), inspired by the success of transformer-based models in Natural Language Processing (NLP) tasks, have achieved impressive accuracy on a variety of computer vision tasks <cit.>. Despite the intrinsic superiority of ViTs, their remarkable performance also comes from the tremendous parameter numbers. For instance, Swin-L <cit.> of input size 224×224 has a total number of parameters of 197M with FLOPs of 34.5G. The high latency and large model size have become the most significant obstacle to the efficient deployment of the ViTs, especially on devices with computation constraints.
In recent years, researchers have explored and proposed various model compression methods to improve the computational efficiency of deep learning models. These model compression techniques include quantization <cit.>, pruning <cit.>, knowledge distillation <cit.>, and compact network design <cit.>. Among these methods, quantization of weights and activations have been the most widely utilized techniques because they enjoy the advantage of the promising affinity across different hardware architectures <cit.>. Although a few efforts <cit.> have been made to apply quantization techniques to ViTs, most of them <cit.> are based on Post-Training Quantization (PTQ) which suffers from a significant decline in performance and a bitwidth limitation at 8-bit or 6-bit. Additionally, the few existing Quantization-Aware Training (QAT) methods <cit.> take much more time than the full-precision model in training, and the models still fail to achieve the desired performance when being quantized to low-precision such as 3-bit and 2-bit.
The lower accuracy of quantized ViTs compared to CNNs guides us to raise the question: What is it that hinders us from improving the performance of quantized ViTs? Meanwhile, the low efficiency of previous QAT methods makes applying quantization to more ViT structures difficult. Thus, another question we would like to raise is: How to improve the efficiency of ViT quantization?
To comprehensively decipher the inherent obstacles that adversely impact the efficacy and performance of ViT quantization, in this work, we initially conduct an exhaustive investigation of the quantization resilience of each component within the structural layout of the ViTs. The empirical findings derived from the isolated variable (leave-one-out) quantization ablation experiments substantiate that specific constituents, such as Multi-head self-attention (MHSA), exhibit higher sensitivity to quantization compared to other constituents. We further perform a comparative analysis between the weight and activation distribution of ViTs and CNNs, deducing that the intrinsic variability of the distribution serves as the pivotal factor instigating complications with respect to ViTs quantization. This is confirmed through constant monitoring of the weight changing trajectory during the training phase, which revealed that this variability instigates a phenomenon known as weight oscillation. Such a phenomenon has detrimental effects on quantization, potentially culminating in decelerated convergence.
[Figure: Top-1 accuracy on ImageNet-1K vs. BitOPs comparison of 2/3/4-bit quantized ViT models (DeiT-T, SReT-T, Swin-T) using LSQ+ <cit.> quantization and our method.]
In light of the variation analysis, we propose an optimized solution for ViT quantization that is attuned to variations, demonstrating enhanced efficiency. Initially, a multi-crop knowledge distillation approach is employed, which aids in decreasing the data variance within mini-batches during the training phase, thereby stabilizing and expediting the training process. In terms of the distribution variance observed across differing modules, we introduce a module-specific scaling methodology. This strategy seeks to identify varying scale factors pertinent to different modules, thereby holistically accommodating the diversity in weight distribution through a gradient scaling technique that is sensitive to weight magnitude. When compared with the baseline quantization method, LSQ+ <cit.>, the presented approach exhibits less susceptibility to fluctuations in weight distribution and outliers that may arise within ViTs. Furthermore, to combat the potential oscillation throughout the training phase, we put forth a regularization process that is attuned to oscillation within quantization bins. This process seeks to penalize the variance in weight distribution within each respective quantization bin.
Extensive experiments across various ViT architectures with different characteristics, including DeiT <cit.>, Swin Transformer <cit.>, and SReT <cit.>, are conducted to verify the effectiveness and efficiency of our proposed method. For DeiT-T on ImageNet-1K dataset, as shown in Figure <ref>, our 4-bit quantized model can significantly improve top-1 accuracy to 74.71% compared to the model quantized by LSQ+ <cit.> which achieves 72.62%.
Furthermore, to the best of our knowledge, our approach is the first to surpass the full-precision baseline with a 4-bit quantized DeiT-T model and the pioneer in extending the frontier of ViT quantization to a 2-bit level, applicable to both weights and activations. Through these methodologies, we exhibit exceptional training optimization, as evidenced by a 50% reduction in total training duration compared to our established baseline. In summary, our contribution can be concluded as:
* We reveal the inherent complexity associated with the quantization of ViTs from the perspective of variation. Our claims that ViTs grapple with weight fluctuations and activation distribution disparities are substantiated through sensitivity analysis, comparison of ViTs to CNNs, and investigation of oscillatory behavior.
* We adopt a multi-crop knowledge distillation-based quantization methodology to decrease the data variance within mini-batches during training following <cit.>, and introduce module-dependent quantization and oscillation-aware regularization strategies. The proposed method is capable of mitigating the impact of variations in ViTs.
* We perform extensive experiments on DeiT, Swin, and SReT architectures using the ImageNet-1K dataset. Our approach significantly outperforms prior state-of-the-art quantization schemes, demonstrating both superior efficiency and performance.
§ RELATED WORK
Vision Transformer: The Transformer <cit.> was originally proposed for natural language processing tasks and demonstrated remarkable performance across various benchmarks. Inspired by this success, Vision Transformers (ViTs) <cit.> replace convolutions with multi-head self-attention blocks and treat an image as a sequence of patches/tokens. The attention mechanism helps capture both short-range and long-range visual dependencies. DeiT <cit.> introduced a teacher-student distillation token strategy and various data augmentation techniques for training ViTs, significantly improving their effectiveness and efficiency. Swin <cit.> proposed a shifted-window attention scheme that limits self-attention computation to local windows at various scales, which largely boosts performance and reduces complexity. Recently, SReT <cit.> introduced a weight-sharing mechanism based on a sliced recursion structure; its convolution layers also help supply the inductive bias that ViTs lack. Various extensions of ViTs <cit.> and further applications <cit.> are still emerging.
Quantization Techniques: Quantization replaces full-precision weights and activations with lower-precision representations. Based on the quantization intervals, methods can be categorized into uniform and non-uniform quantization. Uniform quantization <cit.>, with its equal-width intervals, has better hardware affinity and efficiency, whereas non-uniform quantization <cit.>, thanks to its flexible representation, can usually allocate quantization values to better minimize quantization error and thus achieve higher accuracy. Quantization methods can also be classified into quantization-aware training (QAT) <cit.> and post-training quantization (PTQ) <cit.>, depending on whether the model is retrained with quantized weights and activations or a pre-trained model is quantized directly without extra training. Most previous ViT quantization methods, such as Liu et al. <cit.>, PTQ4ViT <cit.>, and FQ-ViT <cit.>, focus on PTQ. Due to the intrinsic restrictions of PTQ, these methods only reach 8-bit or 6-bit quantization.
Knowledge Distillation: The concept of knowledge distillation was first proposed in <cit.>, where the core insight is to encourage student models to mimic the prediction distribution of teacher models, which carries more information than one-hot labels. More recently, various knowledge distillation methods <cit.> have been proposed for better efficiency and effectiveness. Knowledge distillation is also widely adopted in previous research <cit.> to assist quantization-aware training.
§ APPROACH
§.§ ViT Architecture and Quantization
ViT Architecture.
The basic block of ViTs is the transformer layer, consisting of Multi-head Self Attention (MHSA), Layer Normalization (LN) <cit.>, and Feed-forward Network (FFN). The transformer layer can be formulated as:
X' = LN(X_i + MHSA(X_i)),
X_o = LN(X' + FFN(X')),
where X_i, X', and X_o are the input, intermediate representation, and output of the transformer block. The MHSA module consists of h heads, and each head performs scaled inner products followed by a softmax operation. For the i-th head, the input X_i is projected into query, key, and value matrices by multiplication with the learnable weight matrices W_Q,i, W_K,i, W_V,i, respectively:
Q_i = X_i W_Q,i, K_i = X_i W_K,i, V_i = X_i W_V,i,
and the output of the i-th head is
head_i = softmax(Q_i K_i^T/√(d_k)) V_i,
where 1/√(d_k) is a scaling factor for normalization. MHSA then concatenates the outputs of all heads to improve representational capacity and projects the result with a learnable weight matrix W_o:
MHSA(X_i)=Concat(head_1, head_2, ..., head_h) W_o.
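As an illustration, a minimal PyTorch-style sketch of this per-head computation is given below; the function and variable names are illustrative only and are not taken from any released implementation.

```python
import torch

def mhsa(x, w_q, w_k, w_v, w_o):
    """Minimal multi-head self-attention following the equations above.

    x            : (tokens, dim) input X_i
    w_q, w_k, w_v: lists with one (dim, d_head) projection matrix per head
    w_o          : (num_heads * d_head, dim) output projection W_o
    """
    heads = []
    for wq_i, wk_i, wv_i in zip(w_q, w_k, w_v):
        q, k, v = x @ wq_i, x @ wk_i, x @ wv_i
        d_k = q.shape[-1]
        attn = torch.softmax(q @ k.transpose(-2, -1) / d_k ** 0.5, dim=-1)
        heads.append(attn @ v)                    # output of head_i
    return torch.cat(heads, dim=-1) @ w_o         # Concat(head_1, ..., head_h) W_o
```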
Quantization.
Given the real-valued data to be quantized x^r, the scale factor s of the quantizer, the number of positive quantization levels Q_P, and the number of negative quantization levels Q_N, the quantizer q_b outputs the b-bit quantized representation of the input real value as x^q=q_b(x^r):
x^q=q_b(x^r)=s ×⌊clip(x^r/s, -Q_N, Q_P) ⌉,
where ⌊·⌉ is the rounding function that rounds the input to the nearest integer, and clip(x, r_1, r_2) returns x with all values below r_1 set to r_1 and all values above r_2 set to r_2. For unsigned quantization, Q_N=0 and Q_P=2^b-1, while for signed data, Q_N=2^{b-1} and Q_P=2^{b-1}-1. Since the gradient cannot back-propagate through the rounding in Equation <ref>, the straight-through estimator (STE) <cit.> is utilized to approximate the gradient during quantization-aware training: the gradient of the rounding operation is approximated as 1 within the quantization limits. In the back-propagation with STE, the gradient of the loss ℒ with respect to the real-value data x^r is set to be:
∂ℒ/∂ x^r= ∂ℒ/∂ x^q·1_-Q_N ≤ x^r/s ≤ Q_P,
where 1 is the indicator function that outputs 1 within the quantization limit and 0 otherwise. This STE is widely used in quantization-aware training (QAT). Correspondingly, we focus on uniform quantization and QAT in this work.
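A minimal PyTorch-style sketch of this uniform fake-quantizer, with the STE realized through the standard detach trick, is given below (activation offsets as used in LSQ+ are omitted, and the names are illustrative rather than an exact reproduction of our training code):

```python
import torch

def fake_quantize(x_r: torch.Tensor, s: torch.Tensor, b: int, signed: bool = True) -> torch.Tensor:
    """Uniform b-bit fake quantization x^q = s * round(clip(x^r/s, -Q_N, Q_P)).

    The forward pass returns the quantized value, while the backward pass
    treats round() as the identity, so the gradient w.r.t. x_r is 1 inside
    the clipping range and 0 outside, as in the STE equation above.
    """
    if signed:
        q_n, q_p = 2 ** (b - 1), 2 ** (b - 1) - 1
    else:
        q_n, q_p = 0, 2 ** b - 1

    x_clipped = torch.clamp(x_r / s, -q_n, q_p)
    x_int = x_clipped + (x_clipped.round() - x_clipped).detach()   # STE round
    return s * x_int


# usage: quantize a weight tensor to 3 bits with a learnable scale
w = torch.randn(64, 64, requires_grad=True)
s = torch.tensor(0.05, requires_grad=True)
w_q = fake_quantize(w, s, b=3)
w_q.sum().backward()   # gradients reach both w and s through the STE
```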
§.§ Understanding Variation of ViTs
Many existing studies highlight that ViTs are more sensitive to quantization than CNNs. For instance, Bit-Split <cit.>, which achieves 4-bit quantization on ResNet with an accuracy loss of less than 1%, suffers a degradation of over 2% <cit.> even for 8-bit quantization of DeiT. However, there are few comprehensive analyses of why ViTs are so much more sensitive to quantization than CNNs. In this section, we first examine the quantization sensitivity of each component via a leave-one-out quantization analysis. After identifying the pain points of ViT quantization, we contrast ViTs with CNNs to pin down the fundamental challenge, referred to in this work as variation. We use the term variation to cover two aspects: (1) the differing sensitivity and importance of each module and (2) the variance of the weight distribution. We explore the variation in sensitivity in Section <ref> and the variation in distribution, together with its side effect of oscillation, in Sections <ref> and <ref>.
§.§.§ Quantization Sensitivity Analysis
The prior study Q-ViT <cit.> conducted a quantization robustness analysis of ViTs and concluded that the GELU activation function substantially degrades performance under quantization. However, their experiments relied on post-training quantization (PTQ), which differs markedly from quantization-aware training (QAT). Moreover, their methodology did not analyze the different components at a finer granularity, such as the impact of quantizing the query, key, and value weight matrices. In this section, we disentangle the intricacies of ViT quantization through an in-depth leave-one-out analysis with QAT.
Table: Leave-one-out analysis for quantization of various components in DeiT-T on ImageNet-1K. Para (%) is the percentage of trainable parameters left unquantized.

Quantization Target | Top-1 Acc (%) | Top-5 Acc (%) | Para (%)
None (FP model) | 73.75 | 91.87 | 100
All (baseline 3-bit) | 68.22 | 88.56 | 0
All, except FFN | 69.47 | 89.60 | 62.1
All, except MHSA | 71.28 | 90.66 | 31.1
All, except query in MHSA | 69.66 | 89.94 | 7.8
All, except key in MHSA | 69.92 | 89.81 | 7.8
All, except value in MHSA | 70.72 | 90.40 | 7.8
For the quantization method we employ LSQ+ <cit.>. All components except the analysis target are quantized to 3-bit, while the analysis target is kept at full precision. The experimental results for DeiT-T on ImageNet-1K are presented in Table <ref>. They indicate that MHSA, and particularly the value weight matrices, is highly susceptible to quantization. Although MHSA and the value weight matrices constitute a relatively small fraction of the parameters compared to the FFN, keeping these parts at full precision best preserves the performance of the quantized model.
[Figure: Accuracy degradation relative to the full-precision model when a specific head in a layer is quantized. The label h-l on the abscissa indicates that head h in layer l is quantized.]
Beyond the observation that MHSA is more sensitive to quantization than other components of ViTs, another important clue is that some attention heads are more important than others in Transformer-based models, as has been shown for NLP tasks <cit.>. We therefore apply an analysis similar to <cit.> and quantize individual heads in different layers of ViTs: the target heads are quantized to 2-bit while the remaining components are quantized to 8-bit. The results for DeiT-T, which has three heads per layer and 12 layers, are shown in Figure <ref>. The varying accuracy degradation shows that quantization sensitivity differs across heads and layers; the first and last few layers are more sensitive to quantization. Heads within the same layer also differ in robustness: for example, in layer 8 of the quantized model, the lower precision of head 0 (shown as 8-0 in Figure <ref>) results in a larger accuracy drop than for the two parallel heads in the same layer.
§.§.§ Variation of ViTs and CNNs
In Section <ref>, we have demonstrated that ViTs exhibit significant variation in their sensitivity to quantization. However, previous mixed-precision quantization research on CNNs has also found that different parts of a model differ in quantization robustness. To understand why ViTs are nonetheless more sensitive to quantization than CNNs, we visualize and quantify the weight distributions of different modules in full-precision CNNs and ViTs and compare the actual variation of the two model families.
To give an intuitive picture of the variation in CNNs and ViTs, we first visualize the weight distribution across different channels in a pre-trained full-precision ResNet-18 <cit.> and in DeiT-T. The results are shown in Figure <ref>. ResNet-18 exhibits a similar distribution across channels, while the weight distribution varies significantly across modules in DeiT-T.
Table: Standard Deviation of the Absolute Mean (SDAM) of the real-valued weights in CNNs and ViTs.

Model | ResNet-18 | VGG-11 | ViT-T | DeiT-T | Swin-T
SDAM | 5.59e-2 | 3.74e-2 | 9.65e-2 | 8.35e-2 | 9.71e-2
To quantify the fluctuation of the latent real-valued weight magnitude, we calculate the Standard Deviation of the Absolute Mean (SDAM) of the real-valued weight magnitude within each module of CNNs and ViTs. SDAM has previously been employed to evaluate the stability and fairness of training <cit.>. The results of the SDAM comparison are listed in Table <ref> and confirm that the weight-distribution variability of ViTs exceeds that of CNNs.
Prior work <cit.> has likewise highlighted significant disparities between the activation distributions of ViTs and CNNs. Although these variations may increase the representational capacity of ViTs, they also complicate quantization. Consequently, the design of the quantization scheme becomes crucial, particularly the generation of quantization scales and the determination of clipping factors during quantization-aware training.
§.§.§ Oscillation in Training
High variance in the weight and activation distributions can lead to suboptimal quantization and hence larger quantization errors. In quantization-aware training, certain modules then fail to learn meaningful representations during optimization. This effect and its connection to distribution variation have been investigated in AdamBNN <cit.>, which introduced the notion of flip-flops, i.e., changes in the quantized value of a weight at specific iterations. We observe that low-precision quantization of ViTs is subject to a comparable effect, termed oscillation, in which latent weights fluctuate around the boundary between adjacent quantization bins during quantization-aware training. To our knowledge, <cit.> is the only work probing these effects; however, it is restricted to CNNs and to their impact on batch normalization, a technique not employed in ViTs. We identify and analyze this oscillation phenomenon specifically for ViTs.
An illustration of the oscillation phenomenon is shown in Figure <ref>. The full-precision initialization typically follows a Gaussian distribution, so only a limited number of latent weights coincide exactly with an optimal quantization value, and most weights must be updated during quantization-aware training. However, when a real-valued weight w_t^r crosses a quantization boundary at some iteration t, the update of the real weight |w_t^r - w_{t-1}^r| triggers a jump of the quantized value by a constant |q(w_t^r) - q(w_{t-1}^r)| = s, where s is the quantization scale, i.e., the length of a quantization bin in a uniform quantization scheme. As indicated by the STE in Equation <ref>, the gradient with respect to the real value equals the gradient with respect to the quantized value, which yields a persistent gradient that pushes the real value back across the quantization boundary as long as the learning rate stays comparable.
We further observe this side effect in the quantization-aware training of ViTs. As shown in Figure <ref>, the weights of MHSA tend to accumulate around the quantization thresholds after a certain number of training epochs, and Figure <ref> shows an example of the resulting oscillatory behavior. This oscillation adversely affects the training of ViTs and leads to substantial quantization error. Preventing it, by reducing variation and mitigating its impact, is central to the design of our quantization method.
§.§ Variation-aware ViT Quantization
As observed in Section <ref>, all components of ViTs exhibit substantial fluctuation, which can trigger oscillation and thus instability during training. Motivated by this observation, we introduce a variation-aware quantization scheme that mitigates the impact of these fluctuations and improves both the effectiveness and the efficiency of ViT quantization. As illustrated in Figure <ref>, our approach has three key components: training with multi-crop knowledge distillation, a module-dependent quantization scheme, and an oscillation-aware regularization.
§.§.§ Multi-crop Knowledge Distillation
To address the variation discussed in Section <ref> and stabilize training, we first adopt a Multi-crop Knowledge Distillation (MCKD) scheme. The core idea is to train the quantized ViT with a full-precision model as the teacher, using a loss that enforces similarity between the output distributions of the full-precision teacher and the quantized student ViT:
ℒ_VanillaKD = -1/N∑_c∑^N_i=1 p_c^T_f(X_i)log(p_c^S_q(X_i)),
where the KD loss is the cross-entropy between the output distributions p_c of a full-precision teacher T_f and a quantized ViT student S_q; X_i is the input sample, and c and N denote the classes and the number of samples, respectively. Note that one-hot labels are not used for training in our setting. The KD scheme helps the model converge faster because it learns the mapping directly from the full-precision teacher, whose predictions carry richer information. Previous research <cit.> also points out that the KD loss acts as a regularizer that reduces variance during training, which stabilizes optimization and alleviates the influence of distribution variation. We employ the KD loss as the sole training objective, which has been shown to be more effective when the distillation supervision is adequate <cit.>.
One disadvantage of the conventional KD training scheme is that generating the teacher prediction p_c^T_f takes a relatively long time, which makes training inefficient. To tackle this, we use a multi-crop KD scheme following FKD: we first randomly crop M regions from each image X_i and feed each cropped region to the teacher T_f to obtain the soft label p_c^T_f(X_i,m), m = 1, …, M, where m indexes the cropped region. The soft label is stored together with its crop coordinates and augmentation hyper-parameters. During training, we directly load the soft label and cropping parameters from storage and use the corresponding cropped sample for training with KD. The loss function of this multi-crop KD scheme is:
ℒ_KD = -1/NM∑_c∑^N_i=1∑^M_m=1 p_c^T_f(X_i,m)log(p_c^S_q(X_i,m)).
The higher-quality soft labels generated by this scheme further reduce the variation within a mini-batch. Meanwhile, the data and its label are loaded exactly as in training without knowledge distillation, so the time for teacher inference is saved. We show in the experiments that this multi-crop KD scheme improves performance by reducing variation and significantly boosts efficiency.
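A minimal PyTorch-style sketch of this loss, assuming the per-crop soft labels have been pre-computed and stored offline, is given below (the names are illustrative):

```python
import torch
import torch.nn.functional as F

def multicrop_kd_loss(student_logits: torch.Tensor, teacher_probs: torch.Tensor) -> torch.Tensor:
    """Soft-label cross-entropy over N images x M crops.

    student_logits : (N * M, C) logits of the quantized student S_q on the crops
    teacher_probs  : (N * M, C) stored soft labels p^{T_f} of the full-precision teacher
    """
    log_p_student = F.log_softmax(student_logits, dim=-1)
    return -(teacher_probs * log_p_student).sum(dim=-1).mean()
```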
§.§.§ Module-dependent Quantization
Following the notion in Equation <ref>, the gradient of the quantized value x^q with respect to the scale factor s is defined as
∂x^q/∂s =
  -x^r/s + ⌊ x^r/s ⌉   if x^r/s ∈ (-Q_N, Q_P)
  -Q_N                 if x^r/s ∈ (-∞, -Q_N]
  Q_P                  if x^r/s ∈ [Q_P, ∞)
The scale factor s is the most important parameter in our quantization setting and will be optimized during the quantization-aware training.
We utilize the same scale learning strategy as LSQ+ <cit.>, wherein the scale factor s is dynamically learned during the optimization.
Our exploration in Section <ref> establishes a substantial variation in the sensitivity of distinct modules to quantization. However, conventional implementations of ViT quantization often overlook this characteristic.
In view of the variability observed in ViTs, we propose a module-dependent quantization scheme that learns the quantization scale s at the module level (query, key, and value in each head of MHSA). This contrasts with previous layer-wise schemes that assign a single scale to different modules; instead, we learn scales at a finer granularity.
Previous work <cit.> has pointed out the negative impact of an imbalanced gradient scale. The situation is even more severe when quantizing ViTs, since the weight distribution varies significantly. To overcome this, we adopt a module-dependent gradient scaling that balances the gradients of the weights and of the scale factor while accounting for the distribution variation across modules. We multiply the gradient of the loss with respect to the scale factor s by a gradient scale g that encodes the magnitude of the weights in the module:
∂ℒ/∂ s⟵∂ℒ/∂ s·1/√(Q_P||w||_1),
where ||w||_1 is the L_1-norm of the weights in the quantized module. For modules with higher variation, the L_1-norm of the weights is larger than average, so the update of the scale factor s is damped and outliers of the distribution do not dominate the scale factor.
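In practice this can be realized with the usual gradient-scaling trick, as in the following PyTorch-style sketch (the helper names are ours):

```python
import torch

def grad_scale(t: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
    """Return a tensor equal to t in the forward pass whose gradient is scaled by g."""
    return t * g + (t - t * g).detach()

def module_dependent_scale(s: torch.Tensor, w: torch.Tensor, q_p: int) -> torch.Tensor:
    """Scale factor with module-dependent gradient scaling g = 1 / sqrt(Q_P * ||w||_1)."""
    g = 1.0 / torch.sqrt(q_p * w.detach().abs().sum())
    return grad_scale(s, g)

# usage with the fake_quantize sketch above: each module (e.g. the query projection
# of one head) owns its own learnable scale s, rescaled before quantizing its weights:
#   w_q = fake_quantize(w, module_dependent_scale(s, w, q_p=3), b=3)
```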
§.§.§ Oscillation-aware Bin Regularization
In Section <ref> we identified that the variance of the weight distribution in ViTs causes oscillation, leading to instability during training. Viewed at the level of individual quantization bins, the majority of weights oscillate between the two sides of their bin. To suppress this oscillation during QAT, we regularize the weight distribution with an Oscillation-aware Bin Regularizer (OBR) that encourages the real-valued weights to stay close to the center of their quantization bin. The proposed OBR is formulated as
ℒ_OBR = ∑_{m=1}^{M} ( ||w_m^r - w_m^q||_2 + ∑_{n=1}^{2^b} 𝒱(w_{n,m}^r) ),
where w_m^r and w_m^q denote the real-valued and quantized weights of module m, and w_{n,m}^r denotes the real-valued weights falling into quantization bin n; ||·||_2 is the L_2-norm and 𝒱(·) computes the variance, evaluated for all quantization bins containing more than two elements.
Unlike previous weight regularizations for quantization <cit.>, which only consider the global weight distribution, we minimize both the global quantization error and the local distribution variance within each quantization bin. Ideally, the weights in a bin are regularized towards a Dirac delta distribution, which largely suppresses oscillation during training. The final optimization target is ℒ = ℒ_KD + λℒ_OBR, where λ is a weighting coefficient balancing ℒ_KD and ℒ_OBR. To ensure that the regularization does not interfere with the learning of scale factors at the very beginning of training, we gradually increase λ with a cosine annealing schedule following <cit.>.
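The following PyTorch-style sketch illustrates how ℒ_OBR can be computed for a single module; bin membership is determined from the rounded integer levels, and the names are illustrative.

```python
import torch

def obr_loss(w_r: torch.Tensor, w_q: torch.Tensor, s: torch.Tensor,
             q_n: int, q_p: int) -> torch.Tensor:
    """Oscillation-aware Bin Regularizer for one module.

    Penalizes (i) the global quantization error ||w^r - w^q||_2 and
    (ii) the variance of the latent weights inside each quantization bin.
    """
    loss = torch.norm(w_r - w_q, p=2)
    bins = torch.clamp(w_r / s, -q_n, q_p).round().detach()   # integer bin of each weight
    for idx in bins.unique():
        members = w_r[bins == idx]
        if members.numel() > 2:                                # bins with more than two elements
            loss = loss + members.var()
    return loss

# total objective: loss = kd_loss + lam * sum(obr_loss(...) over modules),
# with lam following a cosine schedule that grows from zero to its final value.
```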
§ EXPERIMENTS
§.§ Experimental Settings
Dataset The experiments are carried out on the ImageNet-1K dataset <cit.>. We only perform basic data augmentation in PyTorch <cit.>, which includes RandomResizedCrop and RandomHorizontalFlip during the training and single-crop operation during the evaluation.
Model We evaluate our quantization method on three ViT architectures: DeiT-T <cit.>, SReT-T <cit.>, and Swin-T <cit.>. Because the first (patch embedding) and last (classification) layers are more sensitive to quantization perturbations than intermediate layers, we fix their bitwidth to 8-bit, following previous work <cit.>.
Training Detail Following previous quantization methods <cit.>, we adopt real-valued pre-trained weights as initialization. The quantization parameters, including scale factors and offsets, are initialized with the MSE-based method of <cit.>. Details of all hyper-parameters and training schemes are given in the Appendix.
§.§ Comparison with State-of-the-Art Methods
Table <ref> compares our efficient variation-aware quantization with existing methods for DeiT-T, SReT-T, and Swin-T on the ImageNet-1K dataset. Since we use different full-precision (FP) models as initialization, the corresponding FP Top-1 accuracy is also reported. To show that our improvement does not simply come from learning from a large teacher model, we also report LSQ+ with vanilla knowledge distillation using the same teacher. Compared with the FP baseline, our 4-bit quantized DeiT-T achieves 74.71% Top-1 accuracy, making it the first 4-bit quantized model to exceed its FP initialization (an absolute gain of 0.96%). Similarly, our 4-bit quantized SReT-T and Swin-T achieve 76.99% and 82.42% Top-1 accuracy, which is 1.18% and 1.42% higher than the FP baselines, respectively.
Compared with the previous quantization methods LSQ+ <cit.>, mixed-precision method Q-ViT <cit.>, and state-of-the-art <cit.>, our model also demonstrates remarkable improvement.
For example, our 4-bit Swin-T achieves a Top-1 accuracy of 82.42%, which has an absolute gain of 2.83% compared to Q-ViT <cit.>. Our method is especially effective for low-precision 2-bit quantization, as our 2-bit quantized Swin-T yields 77.66% Top-1 accuracy, which is 3.35% higher than the previous state-of-the-art method <cit.>.
In addition, our method is more efficient thanks to the multi-crop knowledge distillation scheme, and the improved quantization scheme and regularization help our models converge faster than previous methods under the same training configuration. We train for only 150 epochs, which is sufficient to outperform previous methods in accuracy. The total training time for DeiT-T on 4 NVIDIA A100 GPUs is 57.3 hours, significantly lower than that of the baseline methods listed in Table <ref>.
§.§ Ablation Study
Table: Overall ablation on 4-bit quantized DeiT-T. For "Ours w/o Multi-crop Knowledge Distillation", vanilla knowledge distillation with a ResNet152 teacher is applied.

Method | Top-1 Acc | Top-5 Acc | SDAM
Ours | 74.71 | 92.02 | 2.13e-2
Ours w/o Multi-crop Knowledge Distillation | 73.56 | 91.52 | 2.30e-2
Ours w/o Module-dependent Quantization | 73.79 | 91.54 | 7.15e-2
Ours w/o Oscillation-aware Bin Regularization | 74.22 | 91.41 | 3.79e-2
We first perform an overall ablation on 4-bit quantized DeiT-T to examine the effectiveness of all proposed modules; the results are shown in Table <ref>. The Standard Deviation of the Absolute Mean (SDAM) and accuracy results show that each module helps alleviate the influence of variation and improves the performance of quantized ViTs. The following paragraphs give a more detailed ablation of each module.
Multi-crop Knowledge Distillation Table <ref> compares the Top-1 accuracy of 4-bit quantized DeiT-T without knowledge distillation, with vanilla KD, and with our multi-crop KD using different teachers. The results show improvements in both accuracy and efficiency: a more accurate teacher improves the student ViT regardless of the teacher architecture, and training time is reduced because the soft labels are extracted before training. The times in Table <ref> do not include soft-label generation, which is amortized when QAT is applied to multiple models and settings.
Module-dependent Quantization The proposed module-dependent quantization applies a finer-grained quantization scheme at the module level and rescales the gradients of the scale factors so that their updates are not dominated by the variation in ViTs. Following <cit.>, we visualize the loss landscape to assess the smoothness of optimization, as shown in Figure <ref>. Compared to the baseline quantized model, the more concentrated and smoother loss landscape indicates that the proposed scheme substantially improves training stability and efficiency.
Oscillation-aware Bin Regularization To better understand how our oscillation-aware bin regularization alleviates oscillation, we quantify the degree of oscillation during training by measuring its frequency over time. We say that an oscillation occurs at iteration t when the quantized integer value changes and the direction of this change is opposite to that of the previous change. This can be formulated as:
x^int_t ≠ x^int_{t-1},    sign(Δ^t_int) ≠ sign(Δ^{t^prev}_int),
where x^int_t = ⌊clip(x^r/s, -Q_N, Q_P)⌉ is the integer value of the real-valued input x^r, following the notation of Equation <ref>; Δ^t_int = x^int_t - x^int_{t-1} is the integer update and t^prev is the iteration of the last change of the integer value. The frequency of oscillation is then tracked with an exponential moving average (EMA):
f^t = m · 𝟙[o^t] + (1-m) · f^{t-1},
where 𝟙[o^t] equals 1 if an oscillation occurs at iteration t and 0 otherwise, and m is the EMA momentum.
We define a weight as oscillating at iteration t if f^t > 0.005. The Top-1 accuracy of 3-bit quantized SReT-T and the percentage of oscillating weights are shown in Table <ref>. The results show a clear negative correlation between the percentage of oscillating weights and model performance. The proposed Oscillation-aware Bin Regularization (OBR) with a gradually increasing coefficient helps stabilize training and achieves higher accuracy.
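The oscillation statistics can be tracked as in the following PyTorch-style sketch (class and attribute names are illustrative):

```python
import torch

class OscillationTracker:
    """Track the EMA frequency of integer-level oscillations.

    An oscillation is counted at iteration t when the integer value of a weight
    changes and the direction of the change is opposite to its previous change.
    """
    def __init__(self, shape, momentum: float = 0.01, threshold: float = 0.005):
        self.m = momentum
        self.threshold = threshold
        self.prev_int = None                     # x^int at the previous iteration
        self.prev_delta = torch.zeros(shape)     # sign of the last integer change
        self.freq = torch.zeros(shape)           # EMA oscillation frequency f^t

    def update(self, x_int: torch.Tensor) -> torch.Tensor:
        if self.prev_int is not None:
            delta = x_int - self.prev_int
            changed = delta != 0
            flipped = torch.sign(delta) * torch.sign(self.prev_delta) < 0
            oscillated = (changed & flipped).float()
            self.freq = self.m * oscillated + (1 - self.m) * self.freq
            # remember the direction of the most recent integer change
            self.prev_delta = torch.where(changed, torch.sign(delta), self.prev_delta)
        self.prev_int = x_int.clone()
        return self.freq > self.threshold        # mask of currently oscillating weights
```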
Table: Comparison of 3-bit quantized SReT-T using different regularization. "Oscillation" is the percentage of weights oscillating at the last iteration of training.

Regularization | Top-1 Acc | Top-5 Acc | Oscillation (%)
Baseline | 75.02 | 92.31 | 7.33
KURE <cit.> | 74.85 | 92.24 | 8.12
Ours, λ=cos(0,1) | 75.06 | 92.32 | 0.23
Ours, λ=cos(0,0.1) | 75.40 | 92.49 | 0.78
Ours, λ=cos(0,0.01) | 75.11 | 92.36 | 4.36
§.§ Attention Map Visualization
To demonstrate how our quantization approach preserves the representational capacity of ViT models, we visualize the attention maps of the quantized Swin-T following <cit.> and <cit.>. We fuse the attention heads with a maximum operator and discard low-attention pixels to better highlight the prominent object in the image. As shown in Figure <ref>, our quantized Swin-T maintains a more faithful relative ranking within the attention map and thus a higher representational capacity. The difference becomes more pronounced when the model is quantized to 3-bit and 2-bit: with the baseline LSQ+ quantization <cit.>, the attention deteriorates substantially and spreads almost uniformly over the input at extremely low bit-widths, whereas our 2-bit quantized Swin-T can still segment the salient object region effectively.
§ CONCLUSION
In this work, we have provided a comprehensive understanding of the complexities associated with Vision Transformers quantization. Through an in-depth analysis of quantization sensitivity, and contrasting CNNs with ViTs, we elucidate that the variation behavior inherent to ViTs poses considerable challenges to quantization-aware training. Specifically, the variation in ViTs can induce oscillatory phenomena, necessitating an extended convergence period due to the consequent instability. To address the challenges presented by variation, we propose an effective variation-aware quantization technique. The multi-crop knowledge distillation strategy enhances accuracy and efficiency by mitigating the variation within the mini-batch. Furthermore, we introduce module-dependent quantization and oscillation-aware bin regularization to ensure that the optimization process remains unaffected by variation and to suppress the oscillatory effect instigated by variation. Through extensive demonstrations, we have shown that our proposed solution to variation in ViTs results in state-of-the-art accuracy on the ImageNet-1K dataset across various ViT architectures.
§ APPENDIX
When comparing our ViT quantization results with other methods in the experiments, we use the training settings and hyper-parameters shown in Table <ref>. Most of these hyper-parameters and training settings are shared across the different ViT models and bitwidth settings. We found that the proposed Oscillation-aware Bin Regularization (OBR) is more effective for low-bit quantization, i.e., 3-bit and 2-bit. The difference in its effectiveness across bitwidths arises mainly because penalizing oscillation during QAT interferes with the normal optimization of the latent weights, an effect that is more prominent at higher bitwidths. Accordingly, we only apply OBR to 2-bit and 3-bit quantization.
Fig. <ref> compares the training loss and Top-1 test accuracy of 4-bit quantized DeiT-T trained with our method and with LSQ+ <cit.>. The comparison highlights both the effectiveness and the efficiency of our approach: our method reaches higher Top-1 accuracy with a more stable loss curve, and it converges faster, requiring only half of the total training epochs.
| http://arxiv.org/abs/2307.01149v1 | 20230703164216 | Piercing the Dirac spin liquid: from a single monopole to chiral states | ["Sasank Budaraju", "Yasir Iqbal", "Federico Becca", "Didier Poilblanc"] | cond-mat.str-el | ["cond-mat.str-el"] |
Laboratoire de Physique Théorique, Université de Toulouse, CNRS, UPS, France
Department of Physics and Quantum Centre of Excellence for Diamond and Emergent Materials (QuCenDiEM), Indian Institute of Technology Madras, Chennai 600036, India
Department of Physics and Quantum Centre of Excellence for Diamond and Emergent Materials (QuCenDiEM), Indian Institute of Technology Madras, Chennai 600036, India
Dipartimento di Fisica, Università di Trieste, Strada Costiera 11, I-34151 Trieste, Italy
Laboratoire de Physique Théorique, Université de Toulouse, CNRS, UPS, France
The parton approach for quantum spin liquids gives a transparent description of low-energy elementary excitations, e.g., spinons and emergent
gauge-field fluctuations. The latter ones are directly coupled to the hopping/pairing of spinons. By using the fermionic representation of the
U(1) Dirac state on the kagome lattice and variational Monte Carlo techniques to include the Gutzwiller projection, we analyse the effect of
modifying the gauge fields in the spinon kinematics. In particular, we construct low-energy monopole excitations, which are shown to be gapless
in the thermodynamic limit. States with a finite number of monopoles or with a finite density of them are also considered, with different patterns
of the gauge fluxes. We show that these chiral states are not stabilized in the Heisenberg model with nearest-neighbor super-exchange couplings,
and the Dirac state corresponds to the lowest-energy Ansatz within this family of variational wave functions. Our results support the idea
that spinons with a gapless conical spectrum coexist with gapless monopole excitations, even for the spin-1/2 case.
Piercing the Dirac spin liquid: from a single monopole to chiral states
Didier Poilblanc
August 1, 2023
=======================================================================
Introduction.
Quantum spin models on frustrated low-dimensional lattices represent a playground to investigate a variety of different phases of matter and
the transitions among them <cit.>. Even though a full characterization of their phase diagrams would require a finite-temperature
analysis, in most cases the knowledge of the ground state and a few low-energy excitations is enough to obtain important information on the
relevant (low-temperature) behavior. Still, achieving an accurate description of the exact ground state of frustrated spin models poses itself
as a difficult task. Indeed, a faithful characterization can be obtained whenever (a sizable) magnetic order is present, since here the ground
state is well approximated by a product state, with spins having well-defined expectation values on each site. By contrast, whenever magnetic
order is significantly suppressed, or even absent, the ground-state wave function is much more elusive. The most complicated case is given by
the so-called quantum spin liquids, where the elementary degrees of freedom are no longer the original spin variables, but emergent particles
(spinons) and gauge fields (visons or magnetic monopoles) <cit.>. The standard approach to describe spin liquids is through the
parton construction, where spin operators are represented by using fermionic or bosonic particles; here, the original Hilbert space is enlarged
and additional gauge fields are introduced <cit.>. Thus, the resulting model describes fermions or bosons
that interact through gauge fields on a lattice. A spin liquid corresponds to the deconfined phase of the resulting model, in which particles
(spinons) are free at low energies. In this case, the elementary excitations of the spin model are fractionalized, i.e., they are not integer
multiples of those of the original constituents. By contrast, whenever the gauge fields lead to confinement, the spin liquid is unstable towards
some symmetry-breaking phenomenon, most notably the establishment of valence-bond or magnetic order <cit.>. The analysis of these
lattice gauge theories is not easy and requires non-perturbative methods <cit.>, which also include a detailed
examination of the symmetries of low-energy excitations. Still, some insight can be obtained from mean-field approaches <cit.>, where
gauge fields are frozen and fermions/bosons are free. From there, it is also possible to extract some information on the nature of the most
relevant gauge fluctuations: whenever they are gapped (corresponding to a ℤ_2 symmetry) the low-energy spectrum of the spinons is
not qualitatively modified, leading to stable ℤ_2 spin liquids <cit.> (the most remarkable example being the Kitaev model
on the honeycomb lattice <cit.>). The situation is more delicate when the low-energy gauge fields are gapless (with U(1) symmetry),
since in this case they can spoil the mean-field properties of the spinon spectrum. In particular, monopoles proliferate and may give rise to
a confined phase <cit.>. Still, the presence of a sufficiently large number of massless fermions may screen the monopoles and
prevent confinement <cit.>.
Among various possibilities, the nearest-neighbor S=1/2 Heisenberg antiferromagnetic model on the kagome lattice represents one of the most
intriguing and important examples in which magnetic frustration may give rise to a non-magnetic ground state. The interest in this spin model
was raised after the discovery of a number of compounds, where localized S=1/2 moments interact through a super-exchange mechanism in almost
decoupled kagome layers. The most notable example is given by the so-called Herbertsmithite Cu_3Zn(OH)_6Cl_2 <cit.>.
Here, there is no evidence of magnetic order down to extremely small temperatures, thus suggesting the possibility that the ground state is
indeed a quantum spin liquid <cit.>. Triggered by these outcomes, a huge effort has been spent in the last years to clarify the actual
nature of the ground state of the Heisenberg model on the kagome lattice. Early large-scale density-matrix renormalization group (DMRG) and
pseudo-fermion functional renormalization group calculations suggested the existence of a gapped spin liquid <cit.>,
while variational Monte Carlo techniques and more recent DMRG approaches supported a gapless spin liquid <cit.>.
The variational approach has a very simple and elegant description within the fermionic parton representation; here, the free fermions have only
kinetic terms (no pairing), defining peculiar magnetic fluxes piercing the unit cell (i.e., π-flux through hexagonal plaquettes and 0-flux
through triangular ones), thus leading to two Dirac points in the spinon spectrum <cit.>. As a consequence, this Ansatz
is dubbed as [π,0] Dirac spin liquid. Finally, an accurate variational wave function is obtained by including the Gutzwiller projection, which
imposes a single-fermion occupation on each lattice site <cit.>.
Still, alternative scenarios have been proposed, the most intriguing one suggesting the possibility that the ground state is a chiral spin
liquid <cit.>, which breaks time-reversal and point-group symmetries <cit.>. Originally, chiral spin liquids were
constructed in analogy to the fractional quantum Hall effect <cit.>. However, the main difference with respect to the latter case is
that time-reversal is spontaneously broken, leading to even more exotic phenomena <cit.>. Recently, different calculations suggested that
chiral spin liquids may exist in extended Heisenberg models on the kagome lattice, e.g., adding super-exchange couplings at second or third neighbors,
multi-spin interactions, or Dzyaloshinskii-Moriya terms <cit.>.
In addition, chiral spin liquids have been also analysed within mean-field approaches, in terms of both bosonic <cit.>
and fermionic partons <cit.>.
In this paper, we study the stability of the Dirac spin liquid wave function, which has been proposed to capture the correct ground-state properties
of the nearest-neighbor Heisenberg model on the kagome lattice <cit.>, against chiral perturbations. We analyse the energetics of
Gutzwiller-projected fermionic states that are obtained by adding non-trivial magnetic fluxes to the ones that define the Dirac wave function.
In particular, we can independently (i) consider an additional flux (parametrized by ϕ and spread uniformly on the lattice) and/or (ii)
redistribute the flux inside the unit cell (parametrized by θ); hence, we assume that every unit cell has the same distribution of fluxes in
the hexagonal and triangular plaquettes, see Fig. <ref>. The flux through the triangular plaquettes is given by F_T=ϕ/8+θ,
while the flux through the hexagonal ones is F_H=π-2θ+3ϕ/4, such that the total flux piercing the unit cell is F_C=π+ϕ,
the Dirac state being recovered with ϕ=θ=0. All calculations are performed on tori with 3× L × L sites by using variational
Monte Carlo techniques to assess the properties of the Gutzwiller-projected states <cit.>. On finite clusters, ϕ is quantized,
while θ may assume any value. A “commensurate” flux ϕ=2π/q requires a large super-cell that includes q unit cells (assuming q
divides L) and implies a total flux multiple of 2π L on the whole torus. In addition to these standard cases, we also consider monopole
configurations. A single monopole brings a 2π flux on the torus, thus leading to ϕ=2π/L^2 on each unit cell; states with N_ mp
monopoles are then constructed by considering a flux density ϕ=2π N_ mp/L^2. On the one hand, this allows us to study the energetics of
a single monopole on finite clusters and its scaling in the thermodynamic limit; on the other hand, with monopole configurations, the stability of
the Dirac state may be assessed for very small additional fluxes (i.e., much smaller than the minimal one accessible within the commensurate fluxes).
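As a quick numerical sanity check of this flux bookkeeping (two triangles and one hexagon per unit cell), the following short Python snippet, which is ours and not part of the original analysis, verifies that 2F_T + F_H = π + ϕ for arbitrary ϕ and θ:

```python
import math
import random

# verify 2*F_T + F_H = pi + phi for the parametrization used above
for _ in range(5):
    phi, theta = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
    f_t = phi / 8 + theta                        # flux through each triangular plaquette
    f_h = math.pi - 2 * theta + 3 * phi / 4      # flux through the hexagonal plaquette
    assert abs(2 * f_t + f_h - (math.pi + phi)) < 1e-12
```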
The main outcome of this study is that the Dirac state is stable against chiral perturbations. Still, monopole excitations are gapless in the
thermodynamic limit. We would like to emphasize that, since we work on tori, the analysis of the monopole energy cannot be directly connected to
the scaling dimensions, as usually done within conformal-field theories, which consider a spherical geometry <cit.>.
Model and methods.
We study the Heisenberg model on the kagome lattice with nearest-neighbor super-exchange interaction J>0
H = J ∑_⟨ i,j ⟩ S_i · S_j,
where S_i=(S^x_i,S^y_i,S^z_i) is the spin-1/2 operator on a site i; periodic-boundary conditions are assumed on a cluster with
3 × L × L sites.
The variational wave functions are defined by
|Ψ⟩ = P_G |Φ_0 ⟩,
where |Φ_0 ⟩ is the ground state of the auxiliary (non-interacting) Hamiltonian:
H_0 = ∑_⟨ i,j ⟩, σ χ_i,j c^†_i,σ c_j,σ + h.c.,
where c^†_i,σ (c_i,σ) creates (destroys) a fermion on site i with spin σ = ↑, ↓;
χ_i,j = χ^0_i,j e^{i α_i,j} defines the hopping amplitude for nearest-neighbor sites (i,j). The “bare” term
χ^0_i,j = ± 1 defines the [π,0] flux pattern of the Dirac spin liquid, while the presence of α_i,j ≠ 0 allows us to
consider θ ≠ 0 and/or ϕ ≠ 0 (including single- or multi-monopole states), see Fig. <ref>. In addition, periodic- or
anti-periodic-boundary conditions can be taken in H_0. In practice, the auxiliary Hamiltonian is diagonalized and |Φ_0 ⟩ is
constructed as the Slater determinant of the lowest N single-particle orbitals (where N=3L^2), which is well defined whenever there is a closed
shell configuration, i.e., a finite-size gap between the N-th and the (N+1)-th levels. For commensurate fluxes, we adopt the Landau gauge, which
implies a q × 1 super-cell. By contrast, the single-monopole configuration requires a super-cell as large as the entire cluster (which remains
the case also for multi-monopole configurations). A similar monopole construction has been discussed in Ref. <cit.> for the square lattice.
We remark that, whenever a single monopole is considered on top of the Dirac state, there is an exact degeneracy at the Fermi level (which is robust
to changing the boundary conditions), with two levels per spin, i.e. four levels occupied by two fermions giving rise to 6 monopoles (3 singlets and 1
triplet) <cit.>. We verified that any occupation of these levels gives the same variational energy. In this case, the unprojected
state |Φ_0 ⟩ does not correspond to a closed shell configuration and we use the single-particle orbitals obtained by the real-space
diagonalization, without imposing any lattice symmetry. Then, monopole configurations do not correspond to specific k-points of the Brillouin zone.
Finally, P_G is the Gutzwiller projection onto the configuration space with one particle per site:
P_G=∏_i (n_i,↑ - n_i,↓)^2,
where n_i,σ=c^†_i,σ c^_i,σ. As a result, |Ψ⟩ of Eq. (<ref>) defines a faithful variational
wave function for the spin Hamiltonian (<ref>). Standard Monte Carlo sampling based upon Markov chains is used to evaluate the
variational energy <cit.>.
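As an illustration of this construction, the following Python sketch (ours, using a toy one-dimensional ring instead of the kagome geometry, and ignoring fermion-ordering signs) shows how the occupied orbitals of H_0 are obtained and how the Gutzwiller-projected amplitude of a singly-occupied spin configuration factorizes into two Slater determinants:

```python
import numpy as np

def occupied_orbitals(hopping: np.ndarray, n_occ: int) -> np.ndarray:
    """Lowest n_occ single-particle orbitals of the auxiliary Hamiltonian H_0.

    hopping : (N, N) Hermitian matrix chi_{ij} = chi^0_{ij} * exp(i * alpha_{ij})
    """
    _, orbitals = np.linalg.eigh(hopping)       # eigenvalues come out in ascending order
    return orbitals[:, :n_occ]

def gutzwiller_amplitude(phi, up_sites, down_sites) -> complex:
    """Amplitude <x|P_G|Phi_0> for a configuration with one fermion per site.

    With N/2 up spins on `up_sites` and N/2 down spins on `down_sites`, the
    amplitude factorizes into two Slater determinants built from the same
    occupied spatial orbitals (fermion-ordering signs are ignored here).
    """
    return np.linalg.det(phi[up_sites, :]) * np.linalg.det(phi[down_sites, :])

# toy usage: a 6-site ring pierced by one flux quantum (not the kagome geometry)
N = 6
alpha = 2 * np.pi / N                            # phase per bond
H = np.zeros((N, N), dtype=complex)
for i in range(N):
    H[i, (i + 1) % N] = -np.exp(1j * alpha)
    H[(i + 1) % N, i] = -np.exp(-1j * alpha)
phi = occupied_orbitals(H, N // 2)
amp = gutzwiller_amplitude(phi, up_sites=[0, 2, 4], down_sites=[1, 3, 5])
```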
Results.
The main outcome of this work is that the Dirac state is stable when considering fluxes ϕ ≠ 0 and/or θ ≠ 0. Indeed, the best
variational energy (per site) when varying θ and ϕ is obtained for θ=ϕ=0, corresponding to the [π,0] case. As an
example, in Fig. <ref>, the variational energies for different cuts in the (ϕ,θ) plane are reported for L=8: along
θ=3ϕ/8 (i.e., F_H=π, which connects the Dirac state to the [π,π] one), along θ=-ϕ/8 (i.e., F_T=0, which
connects the Dirac state to the [0,0] one), and θ=0. In all cases, the energy increases with ϕ, even for the smallest possible
values obtained with a few monopoles. Similar results have been obtained for larger cluster sizes and different cuts. In particular, the case
with θ=0 is reported in Fig. <ref>, where several cluster sizes from L=4 to L=16 are shown, including both
commensurate fluxes (the smallest one being ϕ=2π/L) and monopole configurations (which allow us to reach much smaller values of the
fluxes). Our results clearly show that the minimal variational energy is always obtained with ϕ=0, i.e., for the Dirac state.
Next, we perform the explicit size-scaling analysis of the single-monopole gap, see Fig. <ref>. At the unprojected level, i.e.,
when the Gutzwiller projection of Eq. (<ref>) is not imposed, the monopole configuration corresponds to an excited state that becomes
gapless in the thermodynamic limit. Obviously, this result does not depend on the filling of the degenerate levels at the Fermi level, including
the case where a triplet state is taken. We emphasize that the vanishing extrapolation becomes evident only when large clusters are considered
(e.g., L ≳ 30), since a fitting procedure that only includes L ≲ 12 would predict a finite gap for L →∞. Most
importantly, the presence of the Gutzwiller projection has no effect on the overall behavior. In fact, while the slope of the fit is increased,
the extrapolated value in the thermodynamic limit is always consistent (within a few errorbars) with a vanishing gap. In addition, there is no
appreciable difference (for large clusters) between states with S=0 (two fermions occupying orbitals at the Fermi level with up and down
spins) or S=1 (two fermions occupying the orbitals with the same spin). Note that, more generally, monopole excitations in the SU(N_f)
Heisenberg model <cit.> with N_f even and N_f/2>1 fermions per site were also found to be gapless <cit.>.
In order to prove (and improve) the statement that spinons are gapless, we construct particle-hole excitations of the Hamiltonian (<ref>),
by changing the fermion occupation in the unprojected state (i.e., by emptying one of the highest-energy occupied single-particle orbitals and filling one
of the lowest-energy empty ones). Given the shape of the cluster, there are several ways to do this, since both of these shells are four-fold degenerate
(for each spin value). In particular, we can perform excitations within the same Dirac cone or across the two cones. Trivially, these states are
gapless in the unprojected wave function, when L →∞. Most interestingly, they remain gapless even when the Gutzwiller projection is
included. As a consequence, the [π,0] Ansatz, obtained from the auxiliary Hamiltonian (<ref>) with real hoppings
χ_i,j = ± 1, has the remarkable property of describing an (approximate) ground-state wave function that sustains gapless excitations
for both spinons <cit.> and monopoles.
Discussion.
In this work, we constructed monopole excitations on top of the Dirac spin liquid Ansatz and showed them to be gapless in the thermodynamic
limit. By studying the energetics of states with a finite monopole density, we found no sign of an instability towards a chiral state. Our results
provide further evidence that the ground state of the kagome Heisenberg antiferromagnet is well described by the Dirac spin liquid, despite having
gapless monopole excitations <cit.>. Such a remarkable robustness was recently linked to free-fermion band topology dictating symmetry
properties of monopoles <cit.>. Recently, a similar analysis of monopole and bilinear excitations was performed on the Dirac spin
liquid on the triangular lattice <cit.>.
Acknowledgements. We thank L. Di Pietro, A. Läuchli, C. Wang, J. Knolle, J. Willsher, S. Bhattacharjee, S. Sachdev, S. Capponi, and Y.-C.
He for helpful discussions. S. B. also thanks J. Colbois, R. Mishra, and S. Niu for discussions about the project. Y.I., D.P. and S.B. acknowledge
financial support by the Indo-French Centre for the Promotion of Advanced Research – CEFIPRA Project No. 64T3-1. Y.I. and S.B. would like to
acknowledge support from the ICTP through the Associates Programme and from the Simons Foundation through grant number 284558FY19, IIT Madras
through the QuCenDiEM CoE (Project No. SP22231244CPETWOQCDHOC), the International Centre for Theoretical Sciences (ICTS), Bengaluru, India during
a visit for participating in the program “Frustrated Metals and Insulators” (Code: ICTS/frumi2022/9). The research of Y.I. was supported
in part by the National Science Foundation under Grant No. NSF PHY-1748958. The work of Y.I. was performed in part and completed at the Aspen
Center for Physics, which is supported by National Science Foundation grant PHY-2210452. The participation of Y.I. at the Aspen Center for Physics
was supported by the Simons Foundation. Y.I. and S.B. acknowledge the use of the computing resources at HPCE, IIT Madras. This work was granted
access to the HPC resources of CALMIP center under the allocation 2017-P1231. This work was also supported by the TNTOP ANR-18-CE30-0026-01 grant
awarded by the French Research Council.
[1] C. Lacroix, P. Mendels, and F. Mila, Introduction to Frustrated Magnetism: Materials, Experiments, Theory (Springer Series in Solid-State Sciences, 2011), https://doi.org/10.1007/978-3-642-10589-0
[2] L. Savary and L. Balents, Quantum spin liquids: a review, Rep. Prog. Phys. 80, 016502 (2016), https://doi.org/10.1088/0034-4885/80/1/016502
[3] G. Baskaran and P. W. Anderson, Gauge theory of high-temperature superconductors and strongly correlated Fermi systems, Phys. Rev. B 37, 580 (1988), https://doi.org/10.1103/PhysRevB.37.580
[4] D. P. Arovas and A. Auerbach, Functional integral theories of low-dimensional quantum Heisenberg models, Phys. Rev. B 38, 316 (1988), https://doi.org/10.1103/PhysRevB.38.316
[5] I. Affleck, Z. Zou, T. Hsu, and P. W. Anderson, SU(2) gauge symmetry of the large-U limit of the Hubbard model, Phys. Rev. B 38, 745 (1988), https://doi.org/10.1103/PhysRevB.38.745
[6] N. Read and S. Sachdev, Spin-Peierls, valence-bond solid, and Néel ground states of low-dimensional quantum antiferromagnets, Phys. Rev. B 42, 4568 (1990), https://doi.org/10.1103/PhysRevB.42.4568
[7] X. Y. Xu, Y. Qi, L. Zhang, F. F. Assaad, C. Xu, and Z. Y. Meng, Monte Carlo Study of Lattice Compact Quantum Electrodynamics with Fermionic Matter: The Parent State of Quantum Phases, Phys. Rev. X 9, 021022 (2019), https://doi.org/10.1103/PhysRevX.9.021022
[8] X.-Y. Song, C. Wang, A. Vishwanath, and Y.-C. He, Unifying description of competing orders in two-dimensional quantum magnets, Nat. Commun. 10, 4254 (2019), https://doi.org/10.1038/s41467-019-11727-3
[9] X.-Y. Song, Y.-C. He, A. Vishwanath, and C. Wang, From Spinon Band Topology to the Symmetry Quantum Numbers of Monopoles in Dirac Spin Liquids, Phys. Rev. X 10, 011033 (2020), https://doi.org/10.1103/PhysRevX.10.011033
[10] X.-G. Wen, Quantum orders and symmetric spin liquids, Phys. Rev. B 65, 165113 (2002), https://doi.org/10.1103/PhysRevB.65.165113
[11] A. Kitaev, Anyons in an exactly solved model and beyond, Ann. Phys. (Amst.) 321, 2 (2006), https://doi.org/10.1016/j.aop.2005.10.005
[12] A. M. Polyakov, Quark confinement and topology of gauge theories, Nucl. Phys. B 120, 429 (1977), https://doi.org/10.1016/0550-3213(77)90086-4
[13] V. Borokhov, A. Kapustin, and X. Wu, Topological Disorder Operators in Three-Dimensional Conformal Field Theory, J. High Energy Phys. 2002 (11), 049, https://doi.org/10.1088/1126-6708/2002/11/049
[14] M. Hermele, T. Senthil, M. P. A. Fisher, P. A. Lee, N. Nagaosa, and X.-G. Wen, Stability of U(1) spin liquids in two dimensions, Phys. Rev. B 70, 214437 (2004), https://doi.org/10.1103/PhysRevB.70.214437
[15] P. Mendels, F. Bert, M. de Vries, A. Olariu, A. Harrison, F. Duc, J. Trombe, J. Lord, A. Amato, and C. Baines, Quantum Magnetism in the Paratacamite Family: Towards an Ideal Kagomé Lattice, Phys. Rev. Lett. 98, 077204 (2007), https://doi.org/10.1103/PhysRevLett.98.077204
[16] J. Helton, K. Matan, M. Shores, E. Nytko, B. Bartlett, Y. Yoshida, Y. Takano, A. Suslov, Y. Qiu, J.-H. Chung, D. Nocera, and Y. Lee, Spin Dynamics of the Spin-1/2 Kagome Lattice Antiferromagnet ZnCu_3(OH)_6Cl_2, Phys. Rev. Lett. 98, 107204 (2007), https://doi.org/10.1103/PhysRevLett.98.107204
[17] M. de Vries, K. Kamenev, W. Kockelmann, J. Sanchez-Benitez, and A. Harrison, Magnetic Ground State of an Experimental S=1/2 Kagome Antiferromagnet, Phys. Rev. Lett. 100, 157205 (2008), https://doi.org/10.1103/PhysRevLett.100.157205
[18] M. R. Norman, Colloquium: Herbertsmithite and the search for the quantum spin liquid, Rev. Mod. Phys. 88, 041002 (2016), https://doi.org/10.1103/RevModPhys.88.041002
[19] S. Yan, D. A. Huse, and S. R. White, Spin-Liquid Ground State of the S=1/2 Kagome Heisenberg Antiferromagnet, Science 332, 1173 (2011), https://doi.org/10.1126/science.1201080
[20] S. Depenbrock, I. McCulloch, and U. Schollwöck, Nature of the Spin-Liquid Ground State of the S=1/2 Heisenberg Model on the Kagome Lattice, Phys. Rev. Lett. 109, 067201 (2012), https://doi.org/10.1103/PhysRevLett.109.067201
[21] M. Hering, J. Sonnenschein, Y. Iqbal, and J. Reuther, Characterization of quantum spin liquids and their spinon band structures via functional renormalization, Phys. Rev. B 99, 100405 (2019), https://doi.org/10.1103/PhysRevB.99.100405
[22] Y. Ran, M. Hermele, P. Lee, and X.-G. Wen, Projected-Wave-Function Study of the Spin-1/2 Heisenberg Model on the Kagomé Lattice, Phys. Rev. Lett. 98, 117205 (2007), https://doi.org/10.1103/PhysRevLett.98.117205
[23] Y. Iqbal, F. Becca, S. Sorella, and D. Poilblanc, Gapless spin-liquid phase in the kagome spin-1/2 Heisenberg antiferromagnet, Phys. Rev. B 87, 060405 (2013), https://doi.org/10.1103/PhysRevB.87.060405
[24] Y.-C. He, M. Zaletel, M. Oshikawa, and F. Pollmann, Signatures of Dirac Cones in a DMRG Study of the Kagome Heisenberg Model, Phys. Rev. X 7, 031020 (2017), https://doi.org/10.1103/PhysRevX.7.031020
[25] H. Liao, Z. Xie, J. Chen, Z. Liu, H. Xie, R. Huang, B. Normand, and T. Xiang, Gapless Spin-Liquid Ground State in the S=1/2 Kagome Antiferromagnet, Phys. Rev. Lett. 118, 137202 (2017), https://doi.org/10.1103/PhysRevLett.118.137202
[26] M. Hastings, Dirac structure, RVB, and Goldstone modes in the kagomé antiferromagnet, Phys. Rev. B 63, 014413 (2000), https://doi.org/10.1103/PhysRevB.63.014413
[27] L. Messio, B. Bernu, and C. Lhuillier, Kagome Antiferromagnet: A Chiral Topological Spin
Liquid?, https://doi.org/10.1103/PhysRevLett.108.207204
journal journal Phys. Rev. Lett. volume 108, pages 207204 (year
2012)NoStop
[Sun et al.(2022)Sun,
Jin, Tu, and Zhou]sun2022
author author R.-Y. Sun, author H.-K. Jin,
author H.-H. Tu, and author Y. Zhou, @noop title Possible chiral spin liquid state in the S=1/2 kagome Heisenberg
model (year 2022), https://arxiv.org/abs/2203.07321 arXiv:2203.07321 [cond-mat.str-el]
NoStop
[Wen et al.(1989)Wen,
Wilczek, and Zee]wen1989
author author X. G. Wen, author F. Wilczek, and author A. Zee, title title Chiral spin states and superconductivity, https://doi.org/10.1103/PhysRevB.39.11413 journal
journal Phys. Rev. B volume 39, pages 11413 (year 1989)NoStop
[Kalmeyer and Laughlin(1987)]kalmeyer1987
author author V. Kalmeyer and author R. B. Laughlin, title title Equivalence of the
Resonating-Valence-Bond and Fractional Quantum Hall States, https://doi.org/10.1103/PhysRevLett.59.2095 journal journal Phys. Rev. Lett. volume 59, pages 2095 (year 1987)NoStop
[Bieri et al.(2016)Bieri,
Lhuillier, and Messio]bieri2016
author author S. Bieri, author C. Lhuillier, and author L. Messio, title title Projective symmetry group
classification of chiral spin liquids, https://doi.org/10.1103/PhysRevB.93.094437 journal journal Phys. Rev. B volume 93, pages 094437 (year 2016)NoStop
[He et al.(2014)He,
Sheng, and Chen]he2014
author author Y.-C. He, author D. N. Sheng, and author Y. Chen, title title Chiral Spin Liquid in a Frustrated Anisotropic
Kagome Heisenberg Model, https://doi.org/10.1103/PhysRevLett.112.137202 journal
journal Phys. Rev. Lett. volume 112, pages 137202 (year 2014)NoStop
[Gong et al.(2014)Gong,
Zhu, and Sheng]gong2014
author author S.-S. Gong, author W. Zhu, and author D. N. Sheng, title title Emergent Chiral Spin Liquid: Fractional Quantum
Hall Effect in a Kagome Heisenberg Model, https://doi.org/10.1038/srep06317 journal journal Sci. Rep. volume 4, pages
6317 (year 2014)NoStop
[Zhu et al.(2015)Zhu,
Gong, and Sheng]zhu2015
author author W. Zhu, author S. S. Gong, and author D. N. Sheng, title title Chiral and critical spin liquids in a
spin-1/2 kagome antiferromagnet, https://doi.org/10.1103/PhysRevB.92.014424 journal journal Phys. Rev. B volume 92, pages 014424 (year 2015)NoStop
[Kumar et al.(2015)Kumar,
Sun, and Fradkin]kumar2015
author author K. Kumar, author K. Sun, and author E. Fradkin, title title Chiral spin liquids on the kagome lattice, https://doi.org/10.1103/PhysRevB.92.094433 journal
journal Phys. Rev. B volume 92, pages 094433 (year 2015)NoStop
[Messio et al.(2017)Messio,
Bieri, Lhuillier, and Bernu]messio2017
author author L. Messio, author S. Bieri,
author C. Lhuillier, and author B. Bernu, title title Chiral Spin Liquid on a Kagome Antiferromagnet
Induced by the Dzyaloshinskii-Moriya Interaction, https://doi.org/10.1103/PhysRevLett.118.267201 journal
journal Phys. Rev. Lett. volume 118, pages 267201 (year 2017)NoStop
[Wietek et al.(2015)Wietek,
Sterdyniak, and Läuchli]wietek2015
author author A. Wietek, author A. Sterdyniak, and author A. M. Läuchli, title title Nature of chiral spin liquids on the
kagome lattice, https://doi.org/10.1103/PhysRevB.92.125122
journal journal Phys. Rev. B volume 92, pages 125122 (year
2015)NoStop
[Gong et al.(2015)Gong,
Zhu, Balents, and Sheng]gong2015
author author S.-S. Gong, author W. Zhu, author L. Balents, and author
D. N. Sheng, title title Global phase diagram of competing ordered and quantum spin-liquid
phases on the kagome lattice, https://doi.org/10.1103/PhysRevB.91.075112 journal journal Phys. Rev. B volume 91, pages 075112 (year 2015)NoStop
[He and Chen(2015)]he2015
author author Y.-C. He and author Y. Chen, title title Distinct Spin Liquids and Their
Transitions in Spin-1/2 XXZ Kagome Antiferromagnets, https://doi.org/10.1103/PhysRevLett.114.037201 journal
journal Phys. Rev. Lett. volume 114, pages 037201 (year 2015)NoStop
[Kiese et al.(2023)Kiese,
Ferrari, Astrakhantsev, Niggemann, Ghosh, Müller, Thomale, Neupert, Reuther, Gingras, Trebst, and Iqbal]kiese2023
author author D. Kiese, author F. Ferrari,
author N. Astrakhantsev, author N. Niggemann, author
P. Ghosh, author T. Müller, author R. Thomale, author T. Neupert, author J. Reuther, author M. J. P. Gingras, author S. Trebst, and author Y. Iqbal, title title Pinch-points to half-moons
and up in the stars: The kagome skymap, https://doi.org/10.1103/PhysRevResearch.5.L012025 journal
journal Phys. Rev. Res. volume 5, pages L012025 (year 2023)NoStop
[Ferrari et al.(2023)Ferrari, Niu, Hasik, Iqbal,
Poilblanc, and Becca]ferrari2023
author author F. Ferrari, author S. Niu,
author J. Hasik, author Y. Iqbal, author
D. Poilblanc, and author
F. Becca, title title Static and dynamical signatures of Dzyaloshinskii-Moriya
interactions in the Heisenberg model on the kagome lattice, https://doi.org/10.21468/SciPostPhys.14.6.139 journal
journal SciPost Phys. volume 14, pages 139 (year 2023)NoStop
[Messio et al.(2013)Messio,
Lhuillier, and Misguich]messio2013
author author L. Messio, author C. Lhuillier, and author G. Misguich, title title Time reversal symmetry breaking
chiral spin liquids: Projective symmetry group approach of bosonic mean-field
theories, https://doi.org/10.1103/PhysRevB.87.125127 journal journal Phys. Rev. B volume
87, pages 125127 (year 2013)NoStop
[Lugan et al.(2022)Lugan,
Jaubert, Udagawa, and Ralko]lugan2022
author author T. Lugan, author L. D. C. Jaubert, author M. Udagawa, and author A. Ralko, title title Schwinger boson theory of the J_1,
J_2=J_3 kagome antiferromagnet, https://doi.org/10.1103/PhysRevB.106.L140404 journal
journal Phys. Rev. B volume 106, pages L140404 (year 2022)NoStop
[Bieri et al.(2015)Bieri,
Messio, Bernu, and Lhuillier]bieri2015
author author S. Bieri, author L. Messio,
author B. Bernu, and author C. Lhuillier, title
title Gapless chiral spin liquid in a kagome Heisenberg
model, https://doi.org/10.1103/PhysRevB.92.060407 journal journal Phys. Rev. B volume
92, pages 060407 (year 2015)NoStop
[Becca and Sorella(2017)]becca2017
author author F. Becca and author S. Sorella, https://doi.org/10.1017/9781316417041 title Quantum Monte Carlo Approaches for Correlated Systems (publisher Cambridge University Press, year
2017)NoStop
[Dupuis and Witczak-Krempa(2021)]dupuis2021
author author E. Dupuis and author W. Witczak-Krempa, title title Monopole
hierarchy in transitions out of a Dirac spin liquid, https://doi.org/https://doi.org/10.1016/j.aop.2021.168496 journal journal Ann. Phys. volume
435, pages 168496 (year 2021)NoStop
[He et al.(2022)He,
Rong, and Su]he2022
author author Y.-C. He, author J. Rong, and author N. Su, title title Conformal bootstrap bounds for the U(1) Dirac
spin liquid and N=7 Stiefel liquid, https://doi.org/10.21468/SciPostPhys.13.2.014 journal
journal SciPost Phys. volume 13, pages 014 (year 2022)NoStop
[Poilblanc et al.(1990)Poilblanc, Hasegawa, and Rice]poilblanc1990
author author D. Poilblanc, author Y. Hasegawa, and author T. M. Rice, title title Numerical study of flux
phases in the t-J model, https://doi.org/10.1103/PhysRevB.41.1949 journal journal Phys. Rev. B volume 41, pages 1949 (year 1990)NoStop
[Wietek et al.(2023)Wietek,
Capponi, and Läuchli]wietek2023
author author A. Wietek, author S. Capponi, and author A. M. Läuchli, @noop title Quantum Electrodynamics in 2+1 Dimensions
as the Organizing Principle of a Triangular Lattice Antiferromagnet
(year 2023), https://arxiv.org/abs/2303.01585
arXiv:2303.01585 [cond-mat.str-el] NoStop
[Affleck and Marston(1988)]affleck1988b
author author I. Affleck and author J. B. Marston, title title Large-n limit of the
Heisenberg-Hubbard model: Implications for high-T_c superconductors, https://doi.org/10.1103/PhysRevB.37.3774 journal
journal Phys. Rev. B volume 37, pages 3774 (year 1988)NoStop
[SUN()]SUN
https://www.prl.org/??.pdf title See supplemental
material.Stop
[Iqbal et al.(2014)Iqbal,
Poilblanc, and Becca]iqbal2014
author author Y. Iqbal, author D. Poilblanc, and author F. Becca, title title Vanishing spin gap in a competing spin-liquid
phase in the kagome Heisenberg antiferromagnet, https://doi.org/10.1103/PhysRevB.89.020407 journal journal Phys. Rev. B volume 89, pages 020407 (year 2014)NoStop
— Supplemental Material —
Piercing the Dirac spin liquid: from a single monopole to chiral states
Didier Poilblanc
August 1, 2023
=======================================================================
§ SU(N_F) MONOPOLE
Here, we generalize the investigation described in the main text by studying the behavior of monopole excitations for fermion flavors N_f>2.
For that purpose we consider the following SU(N_f) generalization of the Heisenberg Hamiltonian <cit.>
H = ∑_⟨ i,j ⟩∑_α,β=1^N_f c^†_i,α c_i,β c^†_j,β c_j,α
with N_f an even integer and N_f/2 fermions per site. Here, α,β are “spin” indices that take the values α,β=1,2,…,N_f.
For the standard SU(2) case, this is related to the Heisenberg Hamiltonian as
S_i · S_j = 1/2 ∑_α,β c^†_i,α c_i,β c^†_j,β c_j,α - 1/4 n_i n_j ,
where
n_i = c^†_i,↑ c_i,↑ + c^†_i,↓ c_i,↓ .
The latter term in Eq. (<ref>) is just a constant in the subspace with one fermion per site.
At the unprojected level, the expectation values of (<ref>) can be computed using Wick's theorem. The many-body wave function |Φ_0⟩
is constructed as a product of N/2 orbitals for each spin flavor:
|Φ_0⟩ = ∏_α = 1^N_f(∏_x=1^N/2ϕ^†_x,α) |0⟩ ,
where N=3L^2 is the number of sites. The (orthonormal) orbitals ϕ_x,α^† are obtained by diagonalizing the relevant free-fermion
tight-binding model (either the Dirac or the monopole ansatz)
ϕ_x,α^† = ∑_j=1^N U_j,x c^†_j,α ,
where U is the N × N eigenvector matrix. The unprojected expectation value
E_0 = ⟨Φ_0| H |Φ_0⟩ / ⟨Φ_0|Φ_0⟩
can be evaluated, using Eq. (<ref>), as
E_0 = - ∑_⟨ i,j ⟩[ N_f^2 |A_i,j|^2 + N_f/4]
where
A_i,j = ∑_x=1^N/2 U_j,x U_i,x^* .
Thus, the coefficients A_i,j can be readily calculated from a real-space diagonalization of the tight-binding model. The single-monopole gap is
obtained by taking the difference between the case with one monopole (spread over the entire torus) and no monopoles (i.e., the Dirac state):
Δ E_0 = - N_f^2 ∑_⟨ i,j ⟩[ |A^monopole_i,j|^2 - |A^Dirac_i,j|^2 ] .
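As a rough illustration of how these unprojected quantities can be evaluated in practice, the following Python sketch computes A_i,j from the occupied orbitals of a given tight-binding matrix and then the unprojected energy difference Δ E_0. It is not the authors' code: the Hamiltonians below are random Hermitian stand-ins and the bond list is a placeholder, so only the structure of the calculation is meant to be illustrative.

```python
import numpy as np

def correlation_matrix(H, n_occ):
    """A[i, j] = sum_x U[j, x] * conj(U[i, x]) over the n_occ lowest orbitals of H."""
    _, U = np.linalg.eigh(H)          # columns of U are the single-particle orbitals
    U_occ = U[:, :n_occ]              # fill the N/2 lowest states per flavor
    return U_occ.conj() @ U_occ.T

def unprojected_gap(H_mono, H_dirac, bonds, N_f):
    """Delta E_0 = -N_f^2 * sum_<i,j> ( |A^mono_ij|^2 - |A^Dirac_ij|^2 )."""
    n_occ = H_dirac.shape[0] // 2
    A_m = correlation_matrix(H_mono, n_occ)
    A_d = correlation_matrix(H_dirac, n_occ)
    return -N_f**2 * sum(abs(A_m[i, j])**2 - abs(A_d[i, j])**2 for i, j in bonds)

# Toy usage with random Hermitian stand-ins for the two ansaetze; the real
# calculation would use the kagome tight-binding matrices with and without a monopole.
rng = np.random.default_rng(0)
N = 12
def rand_herm():
    M = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    return (M + M.conj().T) / 2
bonds = [(i, (i + 1) % N) for i in range(N)]   # placeholder bond list
print(unprojected_gap(rand_herm(), rand_herm(), bonds, N_f=2))
```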
For the projected wave functions, we use the Monte Carlo sampling to evaluate the variational energies corresponding to the single monopole and
the Dirac state (in both cases, the Gutzwiller projector imposes to have N_f/2 fermions per site).
The size scaling of the total monopole energy for N_f=2,4,6,8,10, and 12 is shown in Fig. <ref>.
At first glance, the data for N_f ≥ 8 would suggest a gapped monopole in the thermodynamic limit. However, after further investigation, we
believe this to be a finite-size effect. Indeed, by scaling the projected energies by N_f^2, we observe that all the data (except N_f=2)
collapse perfectly on top of each other, see Fig. <ref>. Furthermore, there is very good agreement between the projected
and the unprojected energies calculated from Eq. (<ref>), an agreement that improves with system size. Finally, we remark that the unprojected
data indicates a gapless monopole only if large enough system sizes L ≥ 30 are considered, which are very difficult to access for the
projected wave functions.
|
http://arxiv.org/abs/2307.02609v1
|
20230705190700
|
MRecGen: Multimodal Appropriate Reaction Generator
|
[
"Jiaqi Xu",
"Cheng Luo",
"Weicheng Xie",
"Linlin Shen",
"Xiaofeng Liu",
"Lu Liu",
"Hatice Gunes",
"Siyang Song"
] |
cs.CV
|
[
"cs.CV",
"68T40"
] |
University of Leicester
Leicester
United Kingdom
[email protected]
Shenzhen University
Shenzhen
China
[email protected]
Shenzhen University
Shenzhen
China
[email protected]
Shenzhen University
Shenzhen
China
[email protected]
Hohai University
Changzhou
China
[email protected]
University of Leicester
Leicester
United Kingdom
[email protected]
University of Cambridge
Cambridge
United Kingdom
[email protected]
Corresponding author.
University of Leicester & University of Cambridge
Leicester
United Kingdom
[email protected]
Verbal and non-verbal human reaction generation is a challenging task, as different reactions could be appropriate for responding to the same behaviour. This paper proposes the first multiple and multimodal (verbal and nonverbal) appropriate human reaction generation framework that can generate appropriate and realistic human-style reactions (displayed in the form of synchronised text, audio and video streams) in response to an input user behaviour. This novel technique can be applied to various human-computer interaction scenarios by generating appropriate virtual agent/robot behaviours. Our demo is available at <https://github.com/SSYSteve/MRecGen>.
[500]Human-centered computing
[300]Human-centered computing Human computer interaction (HCI)
[300]Computing methodologies Artificial intelligence
MRecGen: Multimodal Appropriate Reaction Generator
Siyang Song
August 1, 2023
==================================================
§ INTRODUCTION
Automatic human behaviour reaction generation (ABRG) is a challenging task, as multiple reactions (consisting of both verbal and non-verbal behaviours) could be appropriate in response to the same behaviour expressed by the conversational partner (defined as the speaker) <cit.>. This complexity and uncertainty arise from the interplay between individuals' internal cognitive processes, personal characteristics, and external environmental factors <cit.>.
Recent advances in large language models (LLMs) <cit.> have resulted in powerful dialogue systems. Although these approaches, especially the GPT system <cit.>, can generate realistic verbal reactions (texts) in response to various textual inputs, they still lack the ability to generate non-verbal audio and visual reactions, which are integral components of authentic human behaviour. Although a few approaches have explored non-verbal reaction generation, such as facial reactions <cit.> and gesture reactions <cit.>, most of them rely on deterministic methods that aim to replicate the real reactions expressed by the corresponding listeners within a specific context. Consequently, these deterministic approaches suffer from the 'one-to-many mapping' problem that arises from multiple different real reactions being triggered by the same speaker behaviour, and it is theoretically infeasible to develop a machine learning (ML) model capable of reproducing behavioural reactions from multiple subjects across diverse contexts. As a result, a novel task called multiple appropriate reaction generation has recently been proposed <cit.> and investigated <cit.>.
In this paper, we propose and demonstrate the first fully automatic multiple appropriate human (verbal and non-verbal) behaviour reaction generation framework (called MRecGen). As illustrated in Fig. <ref>, the MRecGen consists of four main modules: a user behaviour encoding (UBE) module, an appropriate reaction prediction (ARP) module, a behaviour synchronisation (BS) module, and a reaction display (RD) module. Fig. <ref> visualises that our framework can generate multiple appropriate, synchronised and realistic human verbal textual and non-verbal audio-facial behaviour reactions in response to a previous unseen speaker behaviour (i.e., can be either verbal behaviour only or verbal and non-verbal audio-facial behaviours) in dyadic interaction scenarios.
§ MRECGEN FRAMEWORK
The proposed MRecGen (demo) is an end-to-end deep learning framework consisting of four deep learning modules, which are introduced as follows:
User behaviour encoding: The UBE module is a multi-modal transformer which takes the raw multi-modal user behaviour (e.g., audio, text and visual behaviour) as the input, and then encodes them as a set of user behaviour latent representations.
Appropriate reaction prediction: The ARP module takes the aligned and combined user behaviour representation (produced by the BS module) as the input, based on which it predicts a distribution representing multiple different but appropriate verbal and non-verbal reactions. These reactions are expected to be appropriate for responding to the input user behaviour. Finally, this module decodes multiple sets of appropriate reaction representations from the predicted distribution, where each set contains three latent representations describing the text, audio and facial behaviours of an appropriate reaction.
Behaviour synchronisation: This module conducts following operations: (i) synchronising multiple user behaviour representations generated by the UBE module, and then combine them as a single user behaviour representation; (ii) synchronising multiple reaction representations generated by ARP module, which represents the multi-modal reaction behaviour; and (iii) synchronising the synchronised multi-modal reaction representations with the synchronised and combined multi-modal user behaviour representation.
Reaction display: This module finally displays the generated reactions in the form of audio, text and face video.
Implementation details: Our demo employs the architecture proposed in <cit.> as the UBE module. The distribution learning strategy in the ARP module is inherited from <cit.>, where the user behaviour representation is transformed into a graph representation to predict the appropriate reaction distribution. The transformer-based multi-modal and inter-person behaviour synchronisation operations defined by the BS module are built on a similar strategy to that proposed in <cit.>. For the reaction display module, GPT4 <cit.>, Bark [<https://github.com/suno-ai/bark>], and SadTalker <cit.> are employed to generate the final text, audio and facial video, respectively.
§ DEMO EVALUATION
We recruited 58 volunteers (19 females and 39 males) online for the following two user studies, where each volunteer is asked to evaluate the performances of two tasks:
* (i) Evaluating the demo's ability in generating reactions in response to users' audio-visual behaviours. Each volunteer is asked to watch five examples, where each example includes: (1) an audio-visual user clip; (2) a corresponding audio-visual-text human reaction clip; and (3) an audio-visual-text reaction clip generated by our demo.
* (ii) Evaluating the demo's ability in generating reactions in response to users' verbal textual behaviours. Each volunteer is asked to watch five examples, where each example includes: (1) an textual input by user; and (2) an audio-visual-text reaction clip generated by our demo.
We employ the widely used Mean Opinion Scores (MOS) rating protocol <cit.>, where users are required to give their ratings (1-5) on the following seven aspects for each video: (1) textual response appropriateness, (2) audio response appropriateness, (3) audio response smoothness, (4) facial reaction appropriateness, (5) facial reaction smoothness, (6) lip sync quality, and (7) video realism. The results show that volunteers feel that the reactions generated by our demo are good in all aspects (i.e., scores are above 3.0 for all 7 aspects). Particularly, our demo generated appropriate textual/facial reactions with high lip sync quality (their scores are above 3.4). The details of these ratings are provided in the Supplementary Material.
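For reference, the sketch below shows one way such ratings could be summarised: per-aspect MOS values and a Cronbach's alpha consistency estimate. This is an illustrative Python snippet, not the evaluation script used for the demo, and the rating matrix here is random toy data.

```python
import numpy as np

def mos(ratings):
    """Per-aspect Mean Opinion Score; ratings has shape (n_raters, n_aspects), values 1-5."""
    return ratings.mean(axis=0)

def cronbach_alpha(ratings):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    Columns are treated as items; transpose the matrix to treat raters as items instead."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# toy example: 58 raters x 7 aspects with random ratings in 1..5
rng = np.random.default_rng(0)
R = rng.integers(1, 6, size=(58, 7))
print("MOS per aspect:", mos(R))
print("Cronbach's alpha:", round(cronbach_alpha(R), 3))
```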
§ CONCLUSION
This paper proposes the first multiple and multi-modal appropriate human behaviour reaction generation framework, and provides a well-trained model (demo) that can generate multiple appropriate, synchronised and realistic human textual, audio and facial behaviour reactions in response to user behaviours.
ACM-Reference-Format
§ SUPPLEMENTARY MATERIAL
We recruited 58 volunteers (including 19 females and 39 males, and all of them are Chinese) via the ‘Tencent Questionnaire’ platform for the two user studies. An example of the user study screenshot is displayed in Fig. <ref>.
In the following, we provide the detailed user study results (including both gender-dependent and gender-independent results). Specifically, the gender-independent results of task 1 (i.e., each input is a human audio-visual clip) are reported in Table <ref>, where the inter-rater agreement (Cronbach's alpha) for rating the GT and the reactions generated by our MRecGen are 0.478 and 0.803, respectively. Meanwhile, the task 1 results achieved from female users and male users are reported in Table <ref> and Table <ref>, respectively.
The gender-independent results of task 2 (i.e., each input is the text) are reported in Table <ref>, where the inter-rater agreement (Cronbach's alpha) for rating the reactions generated by our MRecGen is 0.825. Meanwhile, the task 2 results achieved from female users and male users are reported in Table <ref> and Table <ref>, respectively.
|
http://arxiv.org/abs/2307.01427v1
|
20230704014108
|
Event Rate of Strongly Lensed Gravitational Waves of Stellar Binary Black Hole Mergers Produced by Dynamical Interactions
|
[
"Zhiwei Chen"
] |
astro-ph.HE
|
[
"astro-ph.HE",
"astro-ph.CO"
] | |
http://arxiv.org/abs/2307.01681v1
|
20230704122315
|
The emergence of expanding space-time in the Lorentzian type IIB matrix model with a novel regularization
|
[
"Mitsuaki Hirasawa",
"Konstantinos N. Anagnostopoulos",
"Takehiro Azuma",
"Kohta Hatakeyama",
"Jun Nishimura",
"Stratos Papadoudis",
"Asato Tsuchiya"
] |
hep-th
|
[
"hep-th",
"gr-qc",
"hep-lat"
] |
KEK-TH-2536
The emergence of expanding space-time in the Lorentzian type IIB matrix model with a novel regularization
Mitsuaki Hirasawa, Konstantinos N. Anagnostopoulos, Takehiro Azuma, Kohta Hatakeyama, Jun Nishimura, Stratos Papadoudis, and Asato Tsuchiya
August 1, 2023
=========================================================================================================
§ INTRODUCTION
Superstring theory is considered to be a promising candidate for quantum gravity. One of the remarkable features of this theory is that the dimensionality of the space-time in which the theory is defined is not arbitrary but is determined by the consistency of the theory. Specifically, the theory is consistently defined only in 10D space-time. Therefore, it is important to clarify the relationship between the 10D space-time and our (3+1)D space-time.
One mechanism to explain (3+1)D space-time, in the theory, is the compactification, in which the physics in (3+1)D space-time is determined by the structure of the compactified extra dimension. At the perturbative level, one needs to fix the dimensionality of the extra dimensions to 6 by hand, and one also has huge ambiguity in the structure of the extra dimension. However, it is difficult to construct the extra dimensions explicitly by requiring that the physics in (3+1)D space-time is consistent with the Standard Model at the low-energy scale. Even if this is feasible, it is not clear whether one can choose one of the many possibilities at the perturbative level. Therefore, it is important to study the non-perturbative aspects of superstring theory.
The type IIB matrix model, also called IKKT matrix model, was proposed in Ref. <cit.>, and it is one of the promising candidates for a non-perturbative formulation of superstring theory. The model is defined by the partition function
Z =∫ dA dΨ dΨ̅ e^i(S_ b + S_ f),
S_ b =-N/4 Tr{ -2[A_0, A_i]^2 + [A_i,A_j]^2 },
S_ f =-N/2 Tr{Ψ̅_α (Γ^μ)_αβ[A_μ,Ψ_β] },
where A_μ (μ = 0,1,...,9) and Ψ_α (α=1,2,...,16) are N× N Hermitian matrices, and Γ^μ are the 10D Gamma matrices after the Weyl projection.
This model has 𝒩=2 supersymmetry (SUSY), which is the maximal SUSY in 10D space-time.
As a consequence, the model includes the gravitational interaction.
From the action, one can see that there are no space-time coordinates a priori, and that they emerge from the degrees of freedom of this model.
According to the SUSY algebra, a homogeneous shift of the diagonal elements of A_μ corresponds to the translation in the μ direction in this model.
As a result, we can interpret the eigenvalues of A_μ as the space-time coordinates.
The Euclidean version of this model has SO(10) rotational symmetry, which is spontaneously broken to SO(3). This was shown for the first time using the Gaussian expansion method (GEM) in Refs. <cit.>, and non-perturbative Monte Carlo simulations in Refs. <cit.> produced consistent results. The relation between the SO(3) symmetric Euclidean space and our (3+1)D space-time is, however, not clear.
Therefore, it is crucial to investigate the Lorentzian version of the model.
Since the action in the Lorentzian model is complex, the usual Monte Carlo methods are not applicable.
This is the sign problem[
The Euclidean model also has the same problem, but in that case it is due to the fermionic action.
When the phase of the Pfaffian that arises from the integration of the fermionic degrees of freedom is quenched, there is no SSB <cit.>. In order to consider the effect of the dynamics of the fermionic degrees of freedom, the authors in <cit.> used the complex Langevin method. We employ the same method in this work.], and
it is necessary to deal with it properly to obtain correct results.
The first-principle calculations of the Lorentzian model were done in Refs. <cit.>, where an approximation was used to avoid the sign problem. Then, the expanding (3+1)D space-time was observed.
However, it was found in Ref. <cit.> that the structure of the expanding space is essentially caused by two points, which implies that the space is not continuous.
The emergence of this singular structure is due to the approximation used to avoid the sign problem.
It was found that this approximation amounts to replacing the Boltzmann weight e^iS by e^-β S, where β is a positive constant.
See Ref. <cit.> for other recent studies on the type IIB matrix model, in which possible applications to cosmology are discussed.
Recently, we have been studying the Lorentzian model without the approximation by using the Complex Langevin Method (CLM) to overcome the sign problem <cit.>.
In this talk, we report on the current status of our work.
The rest of this paper is organized as follows.
In Sec. 2, we discuss the relationship between the Lorentzian and the Euclidean models.
In Sec. 3, we introduce a regulator in the Lorentzian model to make it well-defined.
This regulator was also used in the classical analysis in <cit.>.
In Sec. 4, we explain the CLM and its application to the type IIB matrix model.
The results obtained by the complex Langevin simulations are presented in Sec. 5.
Sec. 6 is devoted to a summary and discussions.
§ RELATIONSHIP BETWEEN THE LORENTZIAN AND EUCLIDEAN MODELS
In this section, we explain the relationship between the Lorentzian and the Euclidean models.
For simplicity, here we consider the bosonic model, in which the fermionic contribution is omitted.
The partition function of the model is given by
Z =∫ dA e^iS_ b,
S_ b =-N/4 Tr{ -2[A_0, A_i]^2 + [A_i,A_j]^2 }.
Here, we consider a Wick rotation as
S̃_ b =-N/4 e^iπ/2u Tr{ -2e^-iπ u[Ã_0, Ã_i]^2 + [Ã_i,Ã_j]^2 } .
We rotate both on the world sheet and in the target space at the same time using one parameter u.
Here, u=0 corresponds to the Lorentzian model, while u=1 corresponds to the Euclidean model.
This Wick rotation is equivalent to the contour deformation
Ã_0 = e^iπ/2u e^-iπ/8uA_0 = e^i3π/8uA_0 ,
Ã_i = e^-iπ/8uA_i .
Note that e^-iπ/8u and e^iπ/2u are the phases of the Wick rotations on the world sheet and in the target space, respectively.
Cauchy's theorem says that the expectation value of any observable ⟨𝒪(e^-i3π/8u Ã_0, e^iπ/8u Ã_i)⟩_u is independent of u under the contour deformation.
Therefore, the following relations hold:
1/N Tr (A_0)^2_ L = e^-i3π/41/N Tr (Ã_0)^2_ E,
1/N Tr (A_i)^2_ L = e^iπ/41/N Tr (Ã_i)^2_ E,
where ·_ L and ·_ E are the expectation values in the Lorentzian and Euclidean models, respectively.
In other words, the Lorentzian and Euclidean models are equivalent to each other under the contour deformation.
We have confirmed this relation by simulations (see Fig. <ref>).
§ REGULARIZATION OF THE LORENTZIAN MODEL
Since the partition function of the Lorentzian model is not absolutely convergent as it is, we need to introduce a regularization.
Here, we use the following mass term as an IR regulator
S_γ = 1/2Nγ{ Tr (A_0)^2 - Tr (A_i)^2 },
where γ is a mass parameter.
This mass term is invariant under a Lorentz transformation in the target space-time.
The model with this mass term has been studied at the classical and perturbative levels in Refs. <cit.>.
In Ref. <cit.>, it was found that the typical classical solutions of the model with this mass term have an expanding space at γ>0, although the dimensionality is not determined by the classical analysis.
This result motivates us to perform a first-principle calculation of the model with this mass term.
§ COMPLEX LANGEVIN SIMULATIONS
In order to study the time evolution in Sec. <ref>, we choose an SU(N) basis, where the temporal matrix A_0 is diagonalized as
A_0 = diag(α_1, α_2, ..., α_N ),α_1 ≤α_2 ≤ ... ≤α_N.
The following change of variables, first introduced in Ref. <cit.>, makes the ordering in Eq. (<ref>) explicit:
α_1 = 0,α_2 = e^τ_1,α_3 = e^τ_1 + e^τ_2 , ...,α_N = ∑_a=1^N-1e^τ_a .
The choice α_1 = 0 is made by using the shift symmetry A_0 → A_0 + c 1.
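As a quick illustration of this parameterization (a toy numerical check, not part of the original work), the map from the unconstrained variables τ_a to the ordered eigenvalues α_a can be written in a few lines of Python:

```python
import numpy as np

# N = 64 eigenvalues of A_0, parameterized by N - 1 unconstrained real variables tau_a:
# alpha_1 = 0 and alpha_{a+1} = alpha_a + exp(tau_a), so the ordering is automatic.
tau = np.random.default_rng(0).normal(size=63)
alpha = np.concatenate([[0.0], np.cumsum(np.exp(tau))])
assert np.all(np.diff(alpha) > 0)   # the alpha_a come out strictly increasing
print(alpha[:5])
```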
In this work, we use the complex Langevin method (CLM) <cit.> to overcome the sign problem.
In this method, the number of degrees of freedom is doubled by “complexifying" the dynamical variables as
τ_a ∈ℝ →τ_a ∈ℂ,
A_i: Hermitian matrices → A_i: general complex matrices.
We generate configurations by using the complex Langevin equations
dτ_a/dt_ L = - ∂ S/∂τ_a + η_a(t_ L),
d(A_i)_ab/dt_ L = - ∂ S/∂ (A_i)_ba + (η_i)_ab(t_ L),
where t_ L is the so-called Langevin time, and η_a(t_ L) and (η_i)_ab(t_ L) are the Gaussian noise with the probability distribution
P(η_a(t_ L)) ∝exp( -1/4∫ dt ∑_a (η_a(t_ L))^2 ),
P((η_i)_ab(t_ L)) ∝exp( -1/4∫ dt Tr(η_i(t_ L))^2 ).
Note that the Langevin equation must be extended to the complexified dynamical variables in a holomorphic way.
It is known that the CLM sometimes converges to wrong solutions. This is called the wrong convergence problem.
Fortunately, a practical criterion for the correct convergence was found recently in Ref. <cit.>.
The criterion says that the results are correct when the probability distribution of the drift term decays exponentially or faster.
When we consider the fermionic contribution, the inverse of the Dirac operator appears in the drift force.
If the Dirac operator has near zero eigenvalues, we have the singular drift problem, and the CLM suffers from the wrong convergence problem.
We avoid this problem by adding a SUSY-breaking fermionic mass term
S_m_ f = iNm_ f Tr[ Ψ̅_α (Γ_7Γ_8^†Γ_9)_αβΨ_β] ,
used in the studies of the Euclidean model <cit.>.
The original model is obtained after an m_ f→ 0 extrapolation.
For some values of γ the CLM can become unstable, and to stabilize the simulation, we perform the redefinition
A_i → (A_i + ϵ A_i^†)/(1 + ϵ)
after each Langevin step. This procedure is similar to the dynamical stabilization used in lattice QCD simulations <cit.>,
and its effect is expected to be small when the A_i are near Hermitian.
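To make the update rule concrete, here is a minimal, self-contained Python sketch of a complex Langevin simulation for a toy one-variable model with weight e^{-S(x)} and S(x) = a x^2/2 for complex a. It only illustrates the discretized drift-plus-noise update and the complexification of the variable, not the matrix-model simulation itself, and the parameter values are arbitrary.

```python
import numpy as np

# Toy complex Langevin: Z = int dx exp(-S(x)) with S(x) = a x^2 / 2, a complex.
# The variable is complexified (z), the noise stays real, and the exact result
# for the observable is <x^2> = 1/a.
a = 1.0 + 1.0j
dt = 1e-3
n_steps, n_therm = 200_000, 20_000

rng = np.random.default_rng(1)
z = 0.0 + 0.0j
samples = []
for step in range(n_steps):
    drift = -a * z                                     # -dS/dz, evaluated at the complexified z
    z = z + drift * dt + np.sqrt(2.0 * dt) * rng.normal()
    if step >= n_therm:
        samples.append(z ** 2)

print("CL estimate of <x^2>:", np.mean(samples))
print("exact value 1/a     :", 1.0 / a)
```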
§ RESULTS
All of the results presented in this paper are obtained through simulations in which the criterion for correct convergence is satisfied.
We fix the matrix size to N=64, and use ϵ=0.01 for the dynamical stabilization (<ref>).
In order to see whether the emergent time is real, we compute the eigenvalues α_a of A_0.
In Fig. <ref> (Left), we plot the eigenvalues α_a in the complex plane.
When 2.6 ≤γ≤ 4.0, the model is in the real-time phase, where α_a+1-α_a is almost real at late times.
At smaller γ, the α_a-distribution becomes wider in the real direction.
We define the matrix
𝒜_pq = 1/9∑_i=1^9 | (A_i)_pq|^2.
In Fig. <ref> (Right), we plot 𝒜_pq against p and q.
𝒜_pq is large for small |p-q|, and drops fast to very small values with increasing |p-q|, showing that
the matrices A_i have a band diagonal structure.
We define the band width n such that 𝒜_pq≈ 0 when |p-q| > n.
In this work, we choose n=12.
The appearance of the band diagonal structure motivates us to define the time and the block matrices that describe the state of the universe at that time as follows:
Time is defined using the average of n diagonal elements
t_a = ∑_i=1^a |α̅_i - α̅_i-1| , a = 1, 2, …, N-n ,
where α̅_i is an average of the α's in the i-th block:
α̅_i = 1/n∑_ν=1^nα_i+ν , i=0, 1, …, N-n .
We define the n × n block matrices within the spatial matrices as
( A̅_i )_kl(t_a) = ( A_i )_(k+a-1)(l+a-1) , k,l=1,2,…,n .
We interpret these block matrices to represent the state of the universe at t_a.
In the following, we omit the index of t_a and use t for simplicity.
In order to check whether the space is real or complex, we define the phase θ_ s(t) as
tr( A̅_i(t) )^2 = e^2iθ_ s(t)| tr( A̅_i(t) )^2 |.
We plot θ_ s(t) against t at γ=2.6 in Fig. <ref>.
At γ=2.6, the α's are almost real, and the phase θ_s(t) approaches 0 at late times.
Therefore, at late times, we obtain real space-time.
For the purpose of studying the SSB of the spatial SO(9) symmetry, we define the “moment of inertia tensor" as
T_ij(t) = tr( X_i(t) X_j(t) ),
where X_i(t) are the Hermitian matrices
X_i(t) = A̅_i(t) + A̅_i^†(t)/2.
As θ_ s(t) is near zero, particularly at late times, the block matrices are near Hermitian.
Therefore, the procedure (<ref>) is justified, even though the CLM only allows the calculation of holomorphic observables.
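For illustration, the block extraction and the eigenvalues of T_ij(t) can be computed along the following lines; this Python sketch is an assumption about implementation details (array layout, indexing) rather than the authors' analysis code.

```python
import numpy as np

def block_observables(alpha, A, n=12):
    """alpha: sorted real eigenvalues of A_0 (length N);
    A: complex array of shape (9, N, N) holding the spatial matrices.
    Returns the times t_a and, for each block, the sorted eigenvalues of T_ij(t_a)."""
    N = len(alpha)
    abar = np.array([alpha[i:i + n].mean() for i in range(N - n + 1)])   # block-averaged alphas
    t = np.concatenate([[0.0], np.cumsum(np.abs(np.diff(abar)))])        # t_a from their differences
    eig_list = []
    for a in range(N - n + 1):
        blocks = A[:, a:a + n, a:a + n]                                  # n x n blocks at time t_a
        X = (blocks + np.conj(np.transpose(blocks, (0, 2, 1)))) / 2      # Hermitian parts X_i(t)
        T = np.einsum('iab,jba->ij', X, X).real                          # T_ij = tr( X_i X_j )
        eig_list.append(np.sort(np.linalg.eigvalsh(T)))
    return t, np.array(eig_list)

# toy usage with random data (N = 64, n = 12), just to show the shapes
rng = np.random.default_rng(0)
alpha = np.sort(rng.normal(size=64))
A = rng.normal(size=(9, 64, 64)) + 1j * rng.normal(size=(9, 64, 64))
t, eig = block_observables(alpha, A)
print(t.shape, eig.shape)
```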
In Fig. <ref>, we plot the eigenvalues of T_ij(t) as a function of time at γ=2.6 and 4.
As we can see, the eigenvalues are almost degenerate around t=0.[
Due to the finite-N effects, the 9 eigenvalues are not exactly degenerate.
]
At some point in time, one out of nine eigenvalues starts to grow exponentially.
Thus, a 1-dimensional space expands exponentially.
The extent of time is larger at smaller γ.
In order to see the m_ f dependence, we compare the results at m_ f=10 and 5 in Fig. <ref>.
The expansion of space becomes more pronounced as m_ f decreases.
The reason for this is that the SUSY effects weaken the attractive force between space-time eigenvalues.
§ SUMMARY AND DISCUSSIONS
We have conducted an investigation into the emergence of space-time in the type IIB matrix model.
We primarily focused on the Lorentzian version of the model as the Euclidean version of the model revealed the SSB of the SO(10) to SO(3), and the connection between the emerging 3D space and our (3+1)D universe was not clear <cit.>.
We have used a Lorentz invariant mass term as a regulator, which breaks the equivalence between the Lorentzian and the Euclidean models.
This was motivated by the results in Ref. <cit.>, where the typical classical solutions with positive mass term (γ > 0) represent an expanding space, whose dimensionality is not fixed at the classical level.
We employed the CLM to overcome the sign problem, and found that real space-time appears for 2.6 ≤γ≤ 4.
The SO(9) rotational symmetry of space breaks spontaneously, and one spatial dimension expands exponentially with time.
This expansion becomes stronger as the fermionic mass m_ f decreases.
The reason why the 1-dimensional expanding space appears may be explained as follows.
If we ignore the fermionic contribution, the configurations that minimize - Tr[A_i,A_j]^2 are dominant.
Therefore, configurations in which the expanding space has small dimensionality are favored.
Particularly, when only one out of the nine matrices is large and the remaining eight are almost zero, - Tr[A_i,A_j]^2 acquires the minimum value (- Tr[A_i,A_j]^2=0).
In Refs. <cit.>, the effect of the Pfaffian of the Dirac matrix was studied, and it was found that the Pfaffian becomes 0 when only 2 out of 10 matrices are nonzero. Then, the appearance of less than 2-dimensional expanding space must be highly suppressed. Therefore, we conclude that m_ f=5 is not small enough to make this effect dominant, and we expect that the emergence of the expanding 3-dimensional space will occur by further decreasing m_ f.
§ ACKNOWLEDGEMENTS
T. A., K. H. and A. T. were supported in part by Grant-in-Aid (Nos.17K05425, 19J10002, and 18K03614, 21K03532, respectively)
from Japan Society for the Promotion of Science. This research was supported by MEXT as “Program for Promoting Researches on the
Supercomputer Fugaku” (Simulation for basic science: from fundamental laws of particles to creation of nuclei, JPMXP1020200105) and JICFuS.
This work used computational resources of supercomputer Fugaku provided by the RIKEN Center for Computational Science (Project ID: hp210165, hp220174), and Oakbridge-CX provided by the University of Tokyo (Project IDs: hp200106, hp200130, hp210094, hp220074) through the HPCI System Research Project.
Numerical computations were also carried out on PC clusters in KEK Computing Research Center.
This work was also supported by computational time granted by the Greek Research and Technology Network (GRNET) in the National HPC facility ARIS,
under the project IDs LIIB and LIIB2.
K. N. A and S. P. were supported in part by a Program of Basic Research PEVE 2020 (No. 65228700) of the National Technical University of Athens.
JHEP
|
http://arxiv.org/abs/2307.00251v1
|
20230701070319
|
Local Eviction Moratoria and the Spread of COVID-19
|
[
"Julia Hatamyar",
"Christopher F. Parmeter"
] |
econ.GN
|
[
"econ.GN",
"q-fin.EC"
] |
Julia Hatamyar, Centre for Health Economics, University of York. Christopher F. Parmeter, Department of Economics, University of Miami, Coral Gables, FL 33146; Corresponding Author e-mail: [email protected]. All R and Stata code used in this paper is available upon request.
Local Eviction Moratoria and the Spread of COVID-19
We thank participants at the University of York Applied Microeconomics Cluster Seminar and the University of Miami for their invaluable feedback. The usual disclaimer applies.
At various stages during the initial onset of the COVID-19 pandemic, various US states and local municipalities enacted eviction moratoria. One of the main aims of these moratoria was to slow the spread of COVID-19 infections. We deploy a semiparametric difference-in-differences approach with an event study specification to test whether the lifting of these local moratoria led to an increase in COVID-19 cases and deaths. Our main findings, across a range of specifications, are inconclusive regarding the impact of the moratoria - especially after accounting for the number of actual evictions and conducting the analysis at the county level. We argue that recently developed augmented synthetic control (ASCM) methods are more appropriate in this setting. Our ASCM results also suggest that the lifting of eviction moratoria had little to no impact on COVID-19 cases and deaths. Thus, it seems that eviction moratoria had little to no robust effect on reducing the spread of COVID-19 throwing into question its use as a non-pharmaceutical intervention.
Julia Hatamyar
Christopher F. Parmeter
August 1, 2023
===========================
§ INTRODUCTION
With the near universal shutdown of the U.S. economy following the outbreak of COVID-19, many individuals could not (or chose not to) work, which led to concerns over late rental payments. To combat these concerns many states and local municipalities enacted (at various times) eviction moratoria that prevented qualified renters from being evicted. One of the primary motivations for these moratoria was to help prevent the spread of COVID-19 given the tangible public health risks of evicting people while a highly contagious respiratory disease was spreading.[https://www.vox.com/21569601/eviction-moratorium-cdc-covid-19-congress-rental-assistance-rent-crisis.]
On the surface, eviction moratoria seem a prudent policy measure. However, given a raft of other COVID-19 policies that were already in place across most US states, the efficacy of such a policy with respect to preventing the spread of COVID-19 is not obvious.[In addition to slowing/mitigating the spread of COVID-19 due to evictions, the moratorium kept tenants in their homes at a time when unemployment was high due to economy-wide impacts from the pandemic.] This suggests that identification of such an impact is likely to prove difficult. This is succinctly characterized by Goodman-Bacon and Marcus (2020, pg. 154): “Good control groups will have to match treatment groups on many dimensions. Smart research designs will try to focus on situations where treatment and control groups differ only by the introduction of a single COVID policy (or, at least, only few policies).”
To date, the findings in the literature related to the ability of eviction moratoria to slow the spread of COVID-19 are mixed, as presaged by Goodman-Bacon and Marcus (2020). The first attempt to study the impact of eviction moratoria on the spread of COVID-19 is Leifheit et al. (2021), who use data from the 44 states that ever instituted an eviction moratorium over the period March 13 to September 3, 2020. Leifheit et al. (2021) deploy a difference-in-differences (DiD) approach with a two-way fixed effects event-study specification and find that both COVID-19 incidence and mortality increased steadily in states after the moratoria expired. They find that a spike in deaths due to evictions occurring after expiration of moratoria preceded a spike in cases, which occurred almost 10 weeks later. In related work, Nande et al. (2021) use a simulated model of viral transmission and predict that evictions increase COVID-19 infection risk. They then apply their simulated model to Philadelphia using locally-specific parameters, and conclude that eviction moratoria are an effective and important policy measure.
Using a panel of individuals who were diagnosed with COVID-19 and a Cox DiD regression, Sandoval-Olascoaga et al. (2021) find an increased likelihood of a COVID diagnosis after state-level moratoria were lifted. Jowers et al. (2021) study the impact of “housing precarity policies" at the county level, which include both eviction and utility disconnection moratoria, on added COVID-19 cases and deaths, using a traditional panel fixed effects regression. Although the authors find that eviction moratoria reduce infections and deaths by a significant amount, their econometric model raises causal identification concerns and does not control for any other local policies in place. In contrast to the above studies, Pan et al. (2020) examine a variety of non-pharmaceutical interventions (including eviction moratoria) using a negative binomial specification, and do not find any statistically significant impact of eviction policies on COVID-19 spread.[The authors find that only shelter-in-place and stay-at-home measures, mask mandates, and travel restrictions achieved a significant effect.]
Our work here critically examines the impact of local eviction moratoria on COVID-19 incidence and mortality. Although the work of Leifheit et al. (2021) and Sandoval-Olascoaga et al. (2021) is crucially important for understanding the potential causal effects of state-level eviction moratoria on limiting the spread of the COVID-19 virus, we nonetheless demonstrate that their results are not robust when replicated using alternative econometric techniques. This paper also differs from previous work in that we include actual eviction numbers as a control, perform the analysis at the county level, and focus mainly on large metropolitan centers (where population density is higher).
We preview our results here. First,
we construct a dataset mimicking that of Leifheit et al. (2021). We also buttress this exercise with several other extensions which we believe lend credence to the estimation of a causal effect, and fail to find that expiring eviction moratoria had quantitatively meaningful impacts on either cases or deaths.[Replication details and results can be found in the appendix.] Next, we construct a new dataset at the county level, for a variety of metropolitan areas. We use Princeton Eviction Lab <cit.> data on the actual number of evictions in each of these counties by week, which allows us to control for this important confounding variable. Lastly, we repeat the analysis using three different estimators (each of which has merits beyond the simple two-way fixed effects DiD approach), and again fail to find significant evidence that expiring moratoria had any causal impact on either cases of, or deaths from, COVID-19.
One reason that we believe the main finding of Leifheit et al. (2021) dissipates is that the timing differences of expiring eviction moratoria suggest that an alternative weighting scheme be used (Goodman-Bacon and Marcus, 2020; Sun and Abraham, 2020; de Chaisemartin and D'Haultfœuille, 2020; Borusyak et al., 2021; Baker et al., 2022). This scheme weights the treatment effects based on the cohorts of time from the expiration of the moratoria, which has meaningful consequences not only for the estimates, but also for the standard errors.[These alternative methods are also in alignment with the recommendations of Goodman-Bacon and Marcus (2020).] When using more recent statistical models to account for this requirement, the Leifheit et al. (2021) analysis fails at the state level. However, even if the results did hold, the county level is arguably the more relevant geographic area of analysis due to significant differences between state and county-level policy implementation (for example, Austin's local moratorium in contrast to the lack of a binding Texas order). Finally, although Leifheit et al. (2021) do control for various policies and population size in their specifications, they do not control for political or eviction-related potential confounders. These variables are likely to impact both the implementation of eviction laws and the number of COVID-19 cases and deaths.
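To fix ideas, the sketch below estimates a plain two-way fixed effects event-study of the kind the newer cohort-weighted estimators are designed to improve upon. It is illustrative Python only (the file and column names are assumptions, and the paper's own code is in R/Stata), and it does not implement the cohort-weighted corrections discussed above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed county-week panel with columns: cases_per_capita, county, week,
# event_time (weeks relative to the local moratorium lifting; NaN if never lifted),
# and an indicator never_lifted.
df = pd.read_csv("county_week_panel.csv")

# Bin event time into leads/lags, keep never-lifted county-weeks as their own category,
# and use t = -1 as the omitted reference period.
df["evt"] = df["event_time"].fillna(0).clip(-8, 12).astype(int).astype(str)
df.loc[df["never_lifted"] == 1, "evt"] = "never"

model = smf.ols(
    "cases_per_capita ~ C(evt, Treatment(reference='-1')) + C(county) + C(week)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["county"]})
print(model.params.filter(like="evt"))
```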
Lastly, even with the cohort-specific weighting, we argue that the most appropriate method to study the potential causal impact of eviction moratoria on the transmission of COVID-19 is augmented synthetic control (ASC) with staggered adoption <cit.>. This method constructs synthetic control observations that can be compared to the treated group while accounting for the staggered adoption that is prevalent in many event study applications. It is an ideal tool since even taking out county-specific averages, as done in a DiD, is unlikely to be credible given the substantial heterogeneity that is likely to be present in differences between counties, both in trends and in levels. As Imbens (2022, pg. 2561) notes, “The basic synthetic control method … has in a short time found many applications in a wide range of fields, including … the effects of country- or state-level COVID-19 policies.” Again, using ASC with staggered adoption, our findings remain consistent. Once the moratoria expire, there is no statistically significant effect on COVID-19 cases or deaths.
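For intuition, the core synthetic control step solves a constrained least-squares problem for the donor weights. The Python sketch below shows only that basic weight construction (the ridge-augmented correction and the staggered-adoption machinery of ASCM are not included), with random toy data standing in for the county panel.

```python
import numpy as np
from scipy.optimize import minimize

def scm_weights(x_treated, X_donors):
    """Basic synthetic-control weights: minimise ||x1 - X0 w||^2 over the simplex.
    x_treated: (p,) pre-period outcomes/covariates for the treated unit;
    X_donors:  (p, J) matrix of the same quantities for the J donor (control) units."""
    p, J = X_donors.shape
    obj = lambda w: np.sum((x_treated - X_donors @ w) ** 2)
    cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]   # weights sum to one
    bounds = [(0.0, 1.0)] * J                                   # and are non-negative
    w0 = np.full(J, 1.0 / J)
    res = minimize(obj, w0, bounds=bounds, constraints=cons, method="SLSQP")
    return res.x

# toy usage: 20 pre-treatment weeks, 10 donor counties
rng = np.random.default_rng(0)
X0 = rng.normal(size=(20, 10))
x1 = X0 @ rng.dirichlet(np.ones(10)) + 0.05 * rng.normal(size=20)
w = scm_weights(x1, X0)
print(np.round(w, 3), w.sum())
```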
Overall, our main finding is that while eviction moratoria certainly helped to keep people in their homes during a time of significant economic upheaval, the moratoria themselves had no statistically significant effect on COVID transmission. The fact that our findings differ from most previous work is likely due to the inability of studies at the state level to pinpoint specific transmission patterns that are likely to vary at a local scale, other policy devices already in place prior to any moratoria expiring, individuals being aware of the transmission of COVID and taking necessary steps to avoid infection, and eviction moratoria not being truly complete bans on evictions. All of these issues combined make it plausible that an eviction moratorium, as a policy instrument for public health, is rather imperfect.[We reiterate that the main aim of the eviction moratoria was to keep people who lost their jobs because of the COVID-19 pandemic from also losing their homes.] Targeted policies such as mask wearing, social distancing, and stay-at-home orders are likely to be much more effective, as shown in Pan et al. (2020).
§ BACKGROUND
Understanding the economic, social, and health impacts of COVID-19, as well as the effects of various policies implemented to address the pandemic, is a crucial topic of research across multiple disciplines. However, a large scale multidisciplinary review of 102 articles attempting to estimate the impact of various COVID-19 policies on COVID-19 outcomes found that only one of them met criteria and design checks for estimating causal impacts <cit.>. We therefore outline relevant background on the policy studied in this paper, eviction moratoria, to highlight the importance of carefully considering the methodological framework used for causal inference.
§.§ Eviction Moratoria and the Pandemic
In the United States, one way that both federal and some state governments interceded to combat the spread of COVID-19 was by placing a moratorium on evictions. The justification for these moratoria was that evictions could lead to shelter overcrowding and homelessness as those forced to leave their homes searched for alternative housing. Thus, preventing landlords from evicting tenants would allow for better self-isolation, potentially limiting community spread. According to a CDC spokesperson,“it’s hard to follow social distancing orders if you have to double-up at a friend's or family member's house, and it's impossible if you're homeless and are forced to turn to shelters[Limited evidence indicates a wide degree of heterogeneity in the incidence of COVID-19 infections in homeless shelters during the initial weeks of the pandemic <cit.>.] as a last resort.”[https://www.vox.com/21569601/eviction-moratorium-cdc-covid-19-congress-rental-assistance-rent-crisis] Figure <ref> depicts the total number of per-county COVID-19 cases by population for our main sample, given the county's current weekly moratorium status. In total, there appears to be a much higher number of COVID-19 cases in counties without a current moratorium; however, this is not controlling for the crucially important presence of other COVID-19 mitigating policies.
At the federal level, the CDC eviction moratorium went into effect on September 4th, 2020.
Until January 1, 2021, landlords were no longer able to “force tenants out of their homes due to a failure to pay rent, as long as the tenants legally declare they qualify for protection[In order to qualify for protection, tenants must have: used “best efforts” to get “all available” rent and housing assistance from the government, been below certain income thresholds, been unable to make rent because of a loss of household income, layoff, or “extraordinary” medical expenses, used “best efforts” to make partial rent payments, and demonstrated that eviction would make them homeless or force them to crowd into a new home.] under the order.”
Landlords could still evict tenants for other reasons – like “engaging in criminal activity” or “threatening the health and safety of other residents.” These requirements for obtaining protection under the national moratorium may explain why a substantial number of evictions still occurred even after September 4th. Alternatively, certain states or counties may have simply decided not to enforce the CDC ruling. Figure <ref> shows the average number of eviction filings in the Eviction Lab database by week in 2020 – with no obvious effect of the September 4th ruling (depicted by the vertical line) for those counties in our sample.
Since there is no formal indication as to whether or why certain counties decided to follow (or not follow) the national moratorium, like Leifheit et al. (2021), we perform our analysis at the local level instead of nationally.
§.§ Eviction Law in the United States
In addition to heterogeneity in COVID response policies across state, there exists substantial heterogeneity in (pre-pandemic) state eviction statutes.[“Eviction Laws" Policy Surveillance Program of the LawAtlas Project] In most states, landlords must present tenants with written complaint (notice of intended eviction) for non-payment of rent a few days to a few weeks prior to the intended eviction date. Most, but not all, states then require court orders or judicial rulings in order for the physical eviction to proceed. If a tenant has the right to appeal the eviction, there is large variation across states in terms of the minimum number of days in which a trial can be scheduled after the tenant receives written notice. This means, in some states, landlords could have started eviction processes so that once moratoria lifted tenants could be removed expeditiously, and these removal processes differ according to underlying statutes.
In the context of COVID-related eviction moratoria, it is especially important to control for whether a state's laws require a landlord to waive the right to evict a tenant after accepting partial repayment of rent. Since part of the tenant's “best efforts” under the national moratorium require partial payment of rent if possible, states in which this prevents an eviction from going forward will have lower eviction rates (and potentially lower infection rates) as a result of the pre-existing eviction laws, not the COVID-related eviction policies. In addition, areas which had a moratorium on both eviction filings and hearings saw more of a surge in evictions following expiration of local moratoria <cit.>. These examples of substantial variation make clear that any potential treatment and control groups for COVID-related policy are likely not comparable in terms of their underlying eviction policies - therefore we rely on ASC in our preferred analyses <cit.>. We also conduct analysis on a subset of cities for which data on underlying eviction legislation is available in Section 6.
§ DATA
Our sample contains 59 counties from the 30 US cities which enacted eviction moratoria and for which eviction data for 2020 is available on Princeton Eviction Lab. The sample period begins April 20, 2020 and ends December 31, 2020.[We begin the sample period in the first week in which all cities had active moratoria.] We extend the sample period to the end of 2020; even though the CDC eviction moratorium went into effect on September 4th, COVID-19 has a lag of 2-3 weeks, so we require data that goes past September to be able to properly extract cohort effects. We also do not have evidence that the nationwide moratorium made any difference at the local level on the actual number of evictions (see Figure <ref>). Since the Eviction Lab eviction data is at the city and/or county level, eviction moratorium information was also collected manually for each local municipality from this website. This is important to capture the true effect of moratorium endings, as there may be localities with orders that differ from their state's. For example, Texas's eviction moratorium ended on May 18, 2020, but the city of Austin, Texas, had an eviction moratorium in place through December 31st, 2020. More concerning, some states may have had no state-level moratorium in place, yet certain metropolitan areas within those states enacted their own orders. It is therefore crucial to collect detailed information about local municipality orders and not rely exclusively on state-level moratorium information. Since eviction data is at either the census-tract or the ZIP code level, all eviction counts were aggregated to the county level (using HUD USPS crosswalk information from Q1 2020). Figure <ref> depicts the number of counties in which moratoria lifted during each week of the sample period, and demonstrates no obvious pattern or grouping of the timing of moratoria endings across observations or with respect to the national CDC moratorium on September 4th, 2020.
COVID-19 case and death information was taken from the New York Times database, which is provided at the county level in the covid19R package available in the R statistical programming environment. Measurement errors in the data resulting in a few negative numbers for new cases and deaths were interpolated using a cubic spline. Demographic variables at the county level were taken from the 2018 American Community Survey, and include racial and ethnic demographics,[Which are known to be correlated with COVID-19 infection rates <cit.>, and are not controlled for by LEIFHEIT_ETAL:2021.] educational attainment, average renting rates, and poverty and inequality indices. We also use Census estimates for population density in each county. OxCGRT provides a database of various COVID-19 policies at the state level, including start and end dates for mask mandates, stay-at-home orders, school closings, and an overall policy Stringency Index.[https://raw.githubusercontent.com/OxCGRT/USA-covid-policy/master/data/OxCGRT_US_latest.csv] County-level policy information was taken from the HHS.[healthdata.gov] Information on political party vote share was taken from the MIT Election Lab <cit.>, and the Yale Climate Communication study <cit.> provides county-level survey data on belief in climate change, which we use as a proxy for trust in science. Finally, we merge selected details on eviction laws from the “Eviction Laws” Policy Surveillance Program of the LawAtlas Project to account for differences across eviction proceedings.
Table <ref> presents summary statistics for selected variables; the high degree of variation in number of weekly eviction filings is of note. There is a strong negative correlation (-0.370) between local moratorium length and the number of eviction filings per county. We also note a weak negative correlation between the number of eviction filings and the strength of various other COVID-19 mitigating policies as captured in the local Stringency Index variable. The lack of correlation between moratorium length and political affiliation or stringency index is also of note. Also, the positive correlation between eviction filings and new COVID-19 cases is consistent with Figure <ref> (and the subsequent correlation with deaths).[Table <ref> in Appendix <ref> contains a full correlation matrix for our policy and political variables.]
§ METHODOLOGY: DIFFERENCE-IN-DIFFERENCES
This section outlines the main econometric methods for staggered treatment timing settings used in this paper. We also present negative binomial results following LEIFHEIT_ETAL:2021, who did not account for cohort effects (as discussed earlier).
For each method, our primary estimand of interest is the Average Treatment Effect on the Treated (ATT), k periods after treatment:
ATT_k ≡1/J∑_j=1^J[ Y_j,T_j + k(T_j) - Y_j,T_j + k(∞)].
Event time relative to treatment time for unit j, T_j, is indexed by k = t - T_j. Y_j,T_j + k(T_j) is the potential outcome at time T_j + k under treatment, and Y_j,T_j + k(∞) is the potential outcome for untreated units. Their difference, Y_j,T_j + k(T_j) - Y_j,T_j + k(∞), is the individual (unit-level) treatment effect, which is averaged to obtain the ATT as in Equation (<ref>).
§.§ Negative Binomial Regression: Leifheit et al. (2021) Analysis
For the state-level analysis, we follow LEIFHEIT_ETAL:2021 and use population-averaged negative binomial regression with two-way fixed effects (i.e. traditional difference-in-differences with an event study approach):
Y_it = α + β_1 T_it + β_2 Post_t + β_3 (T_it×Post_t) + γ_i + λ_t + ϵ_it,
with state-day as the unit of analysis, log of state population included as an offset, first-order autoregressive (AR1) structure, state and week fixed effects γ_i and λ_t, and conventional (non-robust) standard errors. β_3 at various leads and lags from treatment time is the coefficient of interest for estimating ATT_k.
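To make the specification concrete, the sketch below shows a simplified R version of this step. It is only an illustration under stated assumptions: it fits a plain fixed-effects negative binomial model with MASS::glm.nb and a population offset rather than the population-averaged GEE estimator with an AR(1) working correlation described above, and the data frame and column names (cases, treat, post, state, week, pop) are hypothetical.

# Simplified sketch of the negative binomial event-study specification.
# NOTE: this is a plain fixed-effects NB fit, not the population-averaged
# AR(1) GEE model used in the text; all variable names are hypothetical.
library(MASS)

nb_fit <- glm.nb(
  cases ~ treat * post + factor(state) + factor(week) + offset(log(pop)),
  data = state_week_panel    # hypothetical state-week panel data frame
)
summary(nb_fit)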
§.§ DR-DiD
For our preferred DiD approach, we use the Double-Robust DiD (DR-DiD) proposed by callaway2021difference. This semiparametric estimator corrects for the bias inherent in two-way fixed-effects event study estimates <cit.>.
The starting point for estimation in the DR-DiD model is the Group-Time ATT:
ATT(g,t) = 𝔼[Y_t(g) - Y_t(0)| G_g = 1],
i.e., the ATT for units who are members of group g at time period t. Nonparametric identification is obtained using the Double-Robust estimand of sant2020doubly:
ATT(g,t;δ) = 𝔼[( G_g/𝔼[G_g] - (p_g(X)C/(1-p_g(X)))/𝔼[p_g(X)C/(1-p_g(X))] ) (Y_t - Y_g-δ-1 - m(X))]
where G_g = 1 if a unit is first treated in period g, C = 1 if a unit is not treated in any time period (control), p_g(X) = P(G_g = 1|X, G_g + C = 1) is the probability of being first treated in period g conditional on covariates and either being a member of group g or never treated, m(X) = 𝔼[Y_t - Y_g-δ-1 | X, C=1] is the outcome regression for the never-treated group, and g - δ - 1 is the reference time period.[That is, the most recent time period when untreated potential outcomes are observed for group g.] This group-time ATT is then aggregated with respect to time-to-event e, using the weight of each cohort share and the associated influence function to obtain valid confidence intervals:
θ_es(e) = ∑_g ∈ G1{ g + e ≤ T } P(G = g | G + e ≤ T) ATT(g,g+e).
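As an illustration of how this estimator is typically computed in practice, the following R sketch uses the did package, which implements the callaway2021difference group-time ATTs and their event-study aggregation. The column names (new_cases, county_id, week, g_lift) and covariates are hypothetical placeholders for the county-week panel described in the text.

# Sketch: doubly robust group-time ATTs and event-study aggregation ('did' package).
# Column names are hypothetical placeholders for the county-week panel.
library(did)

gt <- att_gt(
  yname   = "new_cases",        # outcome
  tname   = "week",             # calendar time period
  idname  = "county_id",        # unit (FIPS) identifier
  gname   = "g_lift",           # week in which the local moratorium lifted (0 = never)
  xformla = ~ stringency + log_pop + pct_black + pct_hispanic + pct_college,
  data    = county_panel,
  control_group = "notyettreated",
  est_method    = "dr"          # doubly robust estimation
)

es <- aggte(gt, type = "dynamic")   # aggregate ATT(g, t) into event-time effects
summary(es)
ggdid(es)                           # event-study plot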
§.§ Interaction-Weighted DID (IWES)
Drawbacks of the DR-DiD procedure include the inability to include time-varying X_i, as all time-varying X_i are held constant at their value in the last pre-treatment period. Further, in specifications with many controls the estimator does not converge due to propensity scores being very near 0 or 1.[This indicates the overlap condition may be violated, and alternatively, ASC may be more appropriate.] We therefore also perform an event study DiD using the SUN_ABRAHAM:2020 Interaction-Weighted estimator (IWES). This procedure is equivalent to the DR-DiD, except that the group-time ATT is estimated using a traditional two-way fixed effect regression before the weighted aggregation is performed.
Specifically, the Group-Time ATTs β_g,e are estimated:
Y_i,t = α_i + λ_t + ∑_g ∈ G∑_e ≠ -1β_g,e (1{E_i = g}· 1{t - g = e}) + ϵ_it
using a two-way fixed-effects linear regression, where E_i denotes the first treated period of unit i, and then aggregated as in Equation (<ref>).
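A rough illustration of this step in R is given below, using the sunab() helper of the fixest package, which constructs the cohort-by-relative-period interactions and reports the interaction-weighted aggregation of SUN_ABRAHAM:2020; column names are again hypothetical placeholders.

# Sketch: Sun & Abraham interaction-weighted event study via fixest::sunab().
# Column names are hypothetical placeholders for the county-week panel.
library(fixest)

iw <- feols(
  new_cases ~ sunab(g_lift, week) +              # cohort x relative-period dummies
    stringency + stay_home + mask_mandate + log_pop |
    county_id + week,                            # unit and time fixed effects
  data    = county_panel,
  cluster = ~ county_id
)
summary(iw)   # aggregated event-time coefficients
iplot(iw)     # event-study plot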
§.§ Augmented Synthetic Control for Staggered Treatment Adoption
The goal of synthetic control is to use the observed outcomes of Y_jt to construct a weighted average of Y_iT(∞), which is not observed in our data. More specifically, synthetic control imputes the missing potential outcome as a weighted average of the control outcomes <cit.>. The weights are chosen as the solution to the constrained optimization problem:
min_γ∈Δ ||V^1/2(Y_i·-Ỹ^'_j·γ)||^2_2+υ∑_W_i=0f(γ_i).
where Δ is the appropriately sized simplex. Synthetic control has many deep theoretical underpinnings, but at its core it is quite simple: find a set of weights γ that can be used to construct a weighted combination of the controls to serve as the appropriate counterfactual. In fact, this simplicity of intuition is perhaps its greatest strength and one of the reasons for its popularity.
As abadie2010synthetic show, when the treated unit's vector of lagged outcomes Y_i· lies in the interior of the convex hull of the control groups' lagged outcomes Ỹ^'_j·, the corresponding weights achieve perfect pre-treatment fit and the resulting treatment effect estimator possesses many desirable statistical properties. However, due to potential dimensionality issues, it is not universally feasible to achieve perfect pre-treatment fit. Even with close to perfect fit it is commonly recommended <cit.> to run an extensive battery of placebo checks to ensure that γ does not overfit due to noise. ASC, proposed by BEN-MICHAEL_FELLER_ROTHSTEIN:2021, adjusts for poor pre-treatment fit.
BEN-MICHAEL_ETAL:2021 also extend SCM to the staggered treatment adoption setting. In this version, the original SCM estimator is considered for a single unit j. The SCM weights γ̂_j are the solution to:
min_γ_j∈Δ_j^scm1/L_j∑_ℓ=1^L_j( Y_j,T_j -ℓ - ∑_i=1^Nγ_ijY_i,T_j -ℓ)^2 + λ∑_i=1^Nγ_ij^2
where γ_j∈Δ_j^scm has elements that satisfy γ_ij≥ 0∀ i, ∑_i γ_ij = 1, and γ_ij = 0 whenever i is not a possible donor. This modification focuses only on lagged outcomes and penalizes the weights towards uniformity using λ.
Given the vector of weights γ̂_ij solving the optimization problem above, the estimate of the missing potential outcome for treated unit j at event time k is:
Ŷ_j,T_j + k(∞) = ∑_i=1^Nγ̂_ijY_i,T_j + k
and the estimated treatment effect is τ̂_jk = Y_j,T_j + k - Ŷ_j,T_j + k(∞), the difference between the observed outcome under treatment for the treated units and the estimated potential outcome for the synthetic control.
With multiple treated units (i.e. the staggered adoption case), the above setup is generalized to create weights for each treated unit. The estimated treatment effect averages over the unit effect estimates:
ATT_k = 1/J∑_j=1^Jτ̂_jk
which can be interpreted as both the average of individual unit SCM estimates, and an estimate for the average treated unit <cit.>. These equivalent interpretations are used to construct goodness-of-fit measures
q^sep(Γ̂) ≡√(1/J∑_j=1^J1/L_j∑_ℓ=1^L_j(Y_j,T_j -ℓ - ∑_i=1^Nγ_ijY_i,T_j -ℓ )^2)
and
q^pool(Γ̂) ≡√(1/L∑_ℓ=1^L(1/J∑_T_j > ℓ(Y_j,T_j -ℓ - ∑_i=1^Nγ_ijY_i,T_j -ℓ) )^2).
The final “partially pooled” estimator minimizes a weighted average of these two measures:
ν (q̂^pool)^2 + (1-ν)(q̂^sep)^2
where q̂ have been normalized by their values computed with weights Γ̂. BEN-MICHAEL_ETAL:2021 describe a heuristic for choice of ν which we adhere to in our analysis.
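The partially pooled staggered estimator described above is implemented in the R augsynth package; a minimal sketch of how it can be invoked (without auxiliary covariates, as in our application) is shown below, with hypothetical column names.

# Sketch: partially pooled synthetic control with staggered adoption
# via augsynth::multisynth(); column names are hypothetical placeholders.
library(augsynth)

msyn <- multisynth(
  new_cases ~ lifted,    # outcome ~ treatment indicator (1 once the moratorium has lifted)
  unit = county_id,
  time = week,
  data = county_panel
)

msyn_sum <- summary(msyn)   # event-time ATT estimates for each unit and on average
plot(msyn_sum)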
§ RESULTS
Given various cohort effects, we thought it easier to display our findings visually rather than in standard tabular form. For those interested, all specific point estimates and associated standard errors for both cases and deaths can be found in Appendix <ref> for all of the different estimation approaches deployed here.
While we advocate for using the FIPS level, we first discuss state level results to help compare our findings with those of LEIFHEIT_ETAL:2021. We use as covariates state-level COVID policies and the natural logarithm of the population.
§.§ State-Level Analysis
Figure <ref> presents the cohort effects at the state level using the negative binomial specification of LEIFHEIT_ETAL:2021 as well as the IWES and the doubly robust DID estimators. For these estimators we include as controls state-level COVID-19 policies (measures of stay-at-home orders, school closures, and mask mandates) and the logarithm of the population. There are several striking and immediate features. After the expiration of a moratorium, cases go up. This is true for all three estimators. However, where they diverge is in the statistical strength of this increase. The Negative Binomial specification of LEIFHEIT_ETAL:2021 suggests statistically relevant increases in cases after 3 weeks. Neither the IWES nor the DR-DiD estimator finds a statistically significant effect. Further, the estimated effects for the three estimators are quite similar after the lifting of the moratoria with the exception of weeks 11 and 12, where again the Negative Binomial estimator suggests another “spike” in cases. We view the near constancy of the impact on COVID-19 cases after about week 4 to be an equilibrium effect of the moratorium having been lifted.
If we turn our attention to deaths, panel (b) in Figure <ref> paints a much different picture at the state level. Initially, deaths at the state level fluctuate around zero until around week 6, when we start to see a sustained increase. Again, the Negative Binomial specification finds statistically significant increases in deaths attributed to COVID-19 starting at week 6 whereas the IWES and DR-DiD estimators do not find statistically significant effects. The week 6 increase in deaths is intuitive given the roughly two week lag of COVID-19 effects. Thus, finding increases in COVID-19 cases after 3 weeks suggests that around week 5 or 6 an increase in deaths is expected. We also note that while there is more variation in deaths as we move further from the moratorium being lifted, it does appear to be roughly stable, in line with the impact on cases.
So, using the Negative Binomial specification promoted in LEIFHEIT_ETAL:2021 we see an increase in deaths from COVID-19 at around the same time (though our data run longer than their analysis) but we also find a more intuitive increase in cases. LEIFHEIT_ETAL:2021 claimed that their lack of finding an increase in cases prior to the spike in deaths was due to, among other things, undercounting of COVID cases (which could be attributed to those recently evicted not getting tested for COVID-19).
§.§ FIPS Level Analysis
As argued earlier, the state level is not the appropriate level to address the impact of eviction moratoria on the spread and mortality of COVID-19 given the discrepancy between local and state ordinances. To that end we migrate from a state level aggregate dataset to a FIPS level analysis. Once we are in this setting we abandon the Negative Binomial specification advocated by LEIFHEIT_ETAL:2021 and focus our attention exclusively on the IWES and DR-DiD estimators.
Our main specification for both estimators includes the average stringency index (by FIPS), the logarithm of population, the proportion of the population that is black, the proportion of the population that is Hispanic, the proportion of the population that is college educated, and the average number of eviction filings (by FIPS). Figure <ref> presents the cohort comparison of the benchmark specification across the IWES and the DR-DiD estimators.
Several interesting features emerge. First, for cases, while both estimators fail to find statistically relevant effects due to the eviction moratoria expiring, the IWES estimator also finds a near 0 effect, while the DR-DiD estimator has a much larger positive effect which remains throughout the time-frame. We again see that in the first few weeks after the eviction moratoria expire at the FIPS level, there are no noticeable impacts on cases, until around week 4, at which point the DR-DiD estimates experience the intuitive increase in average cases. Perhaps most interesting is that the IWES estimates do not show a similar increase in cases after the simple switch from the state level to the FIPS level. Again this is additional evidence that buttresses our claim that the state level is the inappropriate focus for these effects.
Turning our attention to deaths attributable to COVID-19, we see an expected pattern: in the first few weeks after the eviction moratoria at the FIPS level are lifted, there is no distinguishable pattern in deaths for either the IWES or DR-DiD estimates, and it is not until around week 8 that the DR-DiD estimates start to increase. We also see that even with the DR-DiD estimates increasing starting at week 8, aside from the significant effect at week 10, both estimates (IWES and DR-DiD) remain statistically insignificant throughout the time-frame, with the DR-DiD estimates of COVID-19 deaths being economically larger (and positive).
Having compared the estimated impacts of the eviction moratoria expiration at the FIPS level for a common specification to get a sense of the differences in the estimators, we now turn to different model specifications for each estimator.
§.§.§ Double-Robust DiD
Focusing exclusively on the DR-DiD estimator, we consider three alternative specifications. Model 1 controls for the stringency index (held constant at the first pre-treatment period), a binary indicator for ever having mask mandate or stay-at-home orders, and the logarithm of the FIPS population. Model 2 is the same as Model 1 but includes the proportion of the population that is black, the proportion of the population that is Hispanic, and the proportion of the population that is college educated. Model 3 is the benchmark model previously discussed (adding average eviction filings in a FIPS to Model 2).
Figure <ref> presents the cohort effects across these three models. Several features are worth highlighting. All three specifications have similar patterns for cases, but with varying widths of confidence intervals around the point estimate, with the narrowest intervals stemming from Model 1. We also see a pronounced `bump' in cases starting around week 5 for all three specifications, peaking at week 7 and then slowly decaying back towards 0 by the end of the year. The increase in cases is intuitive but the lack of robust statistical findings is a bit concerning. All of the confidence intervals contain 0, speaking to the difficulty that is inherent in trying to discern the impact that the expiration of the eviction moratoria had on total COVID-19 cases.
Turning our attention to deaths attributable to COVID-19, we see much less agreement across the three specifications. First, the estimated number of deaths does not increase noticeably until around week 8, and the increase is much less pronounced than it was for cases. We also see that for weeks 8 through 12, there is a difference between model specifications 2 and 3 and those from model 1 in terms of the magnitude of the number of deaths. Only in week 10 do we observe an estimated effect that is statistically different from 0, consistent with the totality of our findings so far.
§.§.§ Interaction-Weighted DID
As the DR-DiD does not allow controlling for time-varying covariates, and often fails to converge when including the full suite of controls,[We note that the DR-DiD estimates show no statistical significance even when no controls are used.] we now turn to the Interaction-Weighted DID (IWES). We report our findings in Figure <ref>, while the estimates and associated standard errors for cases and deaths can be found in Table <ref> in Appendix <ref>.
As with our analysis using the DR-DiD estimator, we consider three different specifications. We note that the specifications here are slightly different than those analyzed with the DR-DiD estimator since the estimators make slightly different assumptions on the nature of time variation in the controls. Here we consider a baseline model (Model 1) that includes as controls the stringency index (time-varying), county-level stay-at-home orders, county-level mask mandates, and the logarithm of the population in the FIPS. Model 2 includes all the controls from Model 1, but also includes the proportion of the population that is black, the proportion of the population that is Hispanic, and the proportion of the population that is college educated. Finally, Model 3 adds to Model 2 eviction filings (time-varying) and the political vote-share difference in the 2016 election.
The results are consistent with our earlier DR-DiD estimates; there is no significant increase in COVID-19 cases (Panel A) following the lifting of eviction moratoria. Model 1 has the highest estimated impacts, which seem to occur immediately after the moratoria expire, but with wide confidence intervals containing 0. Models 2 and 3 display the same behavior, but with smaller estimated effects than Model 1 along with confidence intervals that contain 0. Interestingly, for all three model specifications we see that the estimated cohort effect drops at week 7. Overall the IWES estimator suggests that the expiration of the moratoria changed little regarding COVID-19 cases.
Panel B of Figure <ref> presents the results for deaths. Again, the results are nearly identical to the setup with cases. We see that the estimates from Model 1 are higher in magnitude than for Models 2 and 3, as is to be expected, but the confidence intervals contain 0 (outside of weeks 1-3 for Model 1), throwing some doubt as to the true effect of these moratoria. We also can see higher estimated effects for the cohorts immediately after the moratoria are lifted, which is at odds with the behavior of the disease. If the eviction moratoria were implemented with the express intent of mitigating the spread of COVID, and the disease has a one- to two-week lag time of transmission along with another one- to two-week lag time for severe symptoms to lead to death, then we would not anticipate such a large estimated effect for deaths so early on. This result also differs from the state level findings in LEIFHEIT_ETAL:2021.
Thus, across both the IWES and DR-DiD estimators for a variety of model specifications, we see increases in both cases of COVID and mortality from COVID, but with wide confidence intervals and time-varying behavior that is not consistent with the behavior of the virus. This suggests that these eviction expiration effects are difficult to identify in practice, consistent with the concerns raised in GOODMAN-BACON_MARCUS:2020.
§.§ Robustness Checks
Beyond our main DR-DiD and IWES specifications, we also consider whether accounting for additional forms of confounding and using a different identification approach can more accurately reveal the impact of the expiration of eviction moratoria. To that end we consider a subset of our main dataset that incorporates observable features of eviction law at the FIPS/state level, and we estimate the cohort effects using ASC with staggered adoption.
§.§.§ Augmented Synthetic Control with Staggered Adoption
We now present the results for ASC with staggered adoption. Note that the current implementation in R for the augsynth package does not yet allow for matching on auxiliary covariates, so we report results without any matching. Figure <ref> depicts the results for total COVID cases and deaths attributable to COVID.[Figure <ref> in Appendix <ref> depicts the pre-treatment balance and individual treatment effects for cases and deaths. Point estimates, standard errors, and confidence bounds are presented in Table <ref>.] Panel A depicts average effects and demonstrates a very slight and temporarily significant increase in the incidence rate of cases six weeks after the moratoria lifted, and again in weeks 10-12 after lifting.
Looking at Panel B, we again see no statistically meaningful effect of the moratoria expiration on deaths from COVID. We do see an increasing trend over time as we move further away from the moratoria ending, with a strange dip occurring at three weeks post expiration. The overall set of cohort effects is consistent with our earlier story that while there are estimated positive effects on mortality from COVID, said effects are difficult to precisely pin down. Our assumption is that if we were to also match on covariates, this would introduce further noise, as the quality of the matches is likely to be poor as the number of covariates to match on increases.
§.§.§ Eviction Law Subset
As there is a great deal of heterogeneity across both states and individual counties in terms of landlord and tenant protection statutes (see Section 2), we repeat the DR-DiD analysis here using a subset of data for which we have detailed information on existing tenancy laws. Arguably, it may be the case that areas with more tenant protection statutes in existence prior to the COVID-19 pandemic may have been more likely to implement stricter eviction moratoria during the pandemic - introducing confounding. Including eviction law information reduces the size of our data to 17 cities and 36 counties. However, we are now able to control for whether a landlord waives the right to eviction if rent is partially repaid, the minimum number of days the landlord must provide before ending a tenancy due to non-payment of rent, and the minimum number of days between when a landlord gives notice of tenancy termination and when the eviction may take place.[If there is time between notice and repossession, tenants may be able to file appeals or otherwise negotiate in order to avoid the eviction.] Once again, we find no evidence that the lifting of eviction moratoria increased the number of cases or deaths.
§ CONCLUSIONS
This paper set about critically examining the impact of local eviction moratoria on the spread of COVID-19. While several earlier studies documented increases in deaths attributable to COVID-19 following the expiration of these moratoria, we found minimal effects when deploying both newer econometric methods and what we believe to be more sensible data specification choices. Specifically, accounting for the differential timing of eviction moratoria across 44 states, switching from state-level to county-level data, controlling for the number of evictions, and using cohort-specific weighting for our time-to-treatment effects, we found that eviction moratoria likely did not mitigate the spread of COVID-19. This finding was consistent across a range of specifications and estimation approaches. In fact, the only setting where we found an effect of the moratoria was when our data were aggregated up to the state level.
However, as we stated earlier, the state is precisely the wrong level to focus attention on, as different local municipalities had different rules in place for eviction filings and such aggregation washes away local variation in COVID-19 cases and deaths. Further, ignoring the fact that individuals could still be evicted when these eviction moratoria were in place represents a key omitted variable that helps to understand the impact of such a policy. Even the CDC's eviction moratorium, put in place nationally on September 4th, 2020, did not prevent all evictions. Renters needed to qualify to seek eviction protections.
These findings may seem to undermine the need for such moratoria; however, when they were initially instituted, it was not as a non-pharmaceutical intervention per se, but as a means to keep people from becoming homeless at a time of extreme economic uncertainty. Further, as understanding of COVID-19 became more prevalent across the country, it is likely that individuals took other precautions to mitigate the risk of catching the virus, and so eviction moratoria, kept in place as a means of reducing the spread of COVID-19, were simply not effective.
Our crucial requirement of actual eviction numbers leads to the biggest limitation in this study: the sample size would ideally be larger than 30 cities. We believe, however, that the geographic dispersion of cities in the dataset is representative of the country as a whole. Future work may examine each city individually using a synthetic control method to uncover heterogeneity across cities or regions. In terms of methodology, we attempt to address the many issues inherent in policy impact evaluation for the COVID-19 pandemic by applying a variety of econometric techniques to our research question. However, the setting of staggered treatment timing is a fast-growing area of research, and it may be worthwhile for future authors to apply a staggered version of penalized synthetic control <cit.> or synthetic difference-in-differences <cit.>, as they become available.
We stress that our findings do not mean the moratoria were poor policies overall, far from it. The moratoria were designed not only to keep the spread of COVID-19 low, but to insulate individuals from losing their residence during this time of great upheaval. In that view the moratoria likely were quite effective. Indeed, an2021covid find that the eviction moratoria reduced the financial stress of households by allowing them to redirect financial resources towards immediate consumption needs. Evictions in general lead to negative physical and mental health outcomes <cit.>, a decreased likelihood of seeking medical attention <cit.>, and damage to the overall public health of children <cit.> - all of which are arguably even more problematic during a global pandemic.
§ LEIFHEIT ET AL. (2021) REPLICATION
For the initial replication at the state level, all information on moratoria and other policy start/end dates was collected as described in LEIFHEIT_ETAL:2021. COVID-19 case and death incidence data by US state is from the Covid Tracking Project.[covidtracking.com] Importantly, there are instances when the number of new cases and deaths is reported as a negative number, likely due to measurement error or data collection procedural changes. LEIFHEIT_ETAL:2021 do not describe these errors or how and whether they correct negative increases.[Some of these negative increases are on the order of thousands.] We interpolate all negative new cases and deaths using cubic interpolation. Population estimates for each state are taken from the 2018 American Community Survey. We create variables indicating ≥4, 3, 2 and 1 week prior, and 1, 2, 3, and ≥ 4 weeks after the start of mask mandates, school closures, and lifting of stay at home orders.[Arguably, for consistency it would be better to indicate the beginning of stay at home orders rather than their end, but we remain consistent with the Leifheit specification.] We create a variable of number of tests lagged by 7 days. Seven states which never imposed eviction moratoria are dropped from the analysis.[These states are Oklahoma, Arkansas, Georgia, Missouri, Ohio, South Dakota and Wyoming.]
We do not code a state as having an active moratorium if the state does not pass measures with specific language regarding eviction proceedings. This differs from LEIFHEIT_ETAL:2021, who code any state with civil court closures as having an eviction moratorium in place. In some states, courts are not necessarily involved for eviction proceedings to move forward. In order to perform additional analyses, we also extend the sample period to the end of 2020 from the original end date of September 3rd used in LEIFHEIT_ETAL:2021. We add ACS estimates of racial and ethnic demographics, educational attainment, and poverty indicators at the state level. Finally, we add information on the percentage difference between Republican and Democrat vote share in the 2016 election.[ github.com/kshaffer/election2016] Results for the negative binomial specification and the DR-DiD method are below.
§ TABLES AND FIGURES
Table <ref> examines the correlation between the length of a county's eviction moratorium and other variables reported on in the text.
entry_id: http://arxiv.org/abs/2307.02936v1
published: 20230706120238
title: A Meta-Evaluation of C/W/L/A Metrics: System Ranking Similarity, System Ranking Consistency and Discriminative Power
authors: ["Nuo Chen", "Tetsuya Sakai"]
primary_category: cs.IR
categories: ["cs.IR"]
A Meta-Evaluation of C/W/L/A Metrics: System Ranking Similarity, System Ranking Consistency and Discriminative Power
Nuo Chen ([email protected]), Waseda University, Tokyo, Japan; Tetsuya Sakai ([email protected]), Waseda University, Tokyo, Japan
====================================
Recently, Moffat et al. proposed an analytic framework, namely C/W/L/A, for offline evaluation metrics. This framework allows information retrieval (IR) researchers to design evaluation metrics through the flexible combination of user browsing models and user gain aggregations. However, the statistical stability of C/W/L/A metrics with different aggregations has not yet been investigated. In this study, we investigate the statistical stability of C/W/L/A metrics from the perspective of: (1) the system ranking similarity among aggregations, (2) the system ranking consistency of aggregations and (3) the discriminative power of aggregations. More specifically, we combined various aggregation functions with the browsing model of Precision, Discounted Cumulative Gain (DCG), Rank-Biased Precision (RBP), INST, Average Precision (AP) and Expected Reciprocal Rank (ERR), examining their performance in terms of system ranking similarity, system ranking consistency and discriminative power on two offline test collections. Our experimental result suggests that, in terms of system ranking consistency and discriminative power, the aggregation function of expected rate of gain (ERG) has an outstanding performance while the aggregation function of maximum relevance usually has an insufficient performance. The result also suggests that Precision, DCG, RBP, INST and AP with their canonical aggregation all have favourable performances in system ranking consistency and discriminative power; but for ERR, replacing its canonical aggregation with ERG can further strengthen the discriminative power while obtaining a system ranking list similar to the canonical version at the same time.
§ INTRODUCTION
Online and offline evaluations of ranked retrieval systems complement each other to advance the state of the art of web search engines and other ranking applications. Recently, Moffat et al. <cit.> proposed an analytic framework, namely C/W/L/A, for offline evaluation metrics. Under this framework, the score of a metric can be obtained from the combination of a user browsing model given by the continuation probability (C(·)) and a user gain aggregation function (A(·)) as follows.
M_CWLA(𝐫) = ∑_i=1^∞L(i)· A(i)
Here, M_CWLA is the metric score, 𝐫 = <r_1, r_2, ⋯, r_i> is the relevance levels of the documents from position 1 to i, A(i) represents how users accumulate their gain when they end the search interaction at rank i, and L(i) is the probability that a user inspects the item at position i and then stops inspecting the search engine result page (SERP). L(i) can eventually be obtained from C(i), the probability that a user who has inspected the i-th item in the SERP will continue to examine the item at rank i+1, through:
L(i) = (1 - C(i))∏_j=1^i-1C(j)
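To make the mechanics of the two equations above concrete, the short R sketch below (ours, not part of the original framework papers) computes L(i) from C(i) and then the metric score for a ranking truncated at the last judged rank; the truncation is an assumption of the sketch.

# Sketch: computing a C/W/L/A metric score on a finite ranking.
# C_fun(i) returns the continuation probability at rank i, A_fun(i, r) the
# aggregated gain if the user stops at rank i; r holds the (mapped) relevance
# levels. Truncating at the last judged rank is an assumption of this sketch.
cwla_score <- function(C_fun, A_fun, r) {
  n <- length(r)
  C <- sapply(seq_len(n), C_fun)
  C[n] <- 0                                  # force stopping at the last judged rank
  view <- cumprod(c(1, C[-n]))               # P(user reaches rank i) = prod_{j<i} C(j)
  L <- view * (1 - C)                        # last probability L(i)
  sum(L * sapply(seq_len(n), A_fun, r = r))  # metric score: sum_i L(i) A(i)
}

# Example: RBP-style browsing (C(i) = 0.8) with an expected-total-gain aggregation.
r <- c(1, 0, 0.5, 0, 1)
C_rbp <- function(i) 0.8
A_etg <- function(i, r) sum(r[1:i])
cwla_score(C_rbp, A_etg, r)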
The C/W/L/A framework provides the information retrieval (IR) community with numerous alternative evaluation metrics through the flexible combination of C(·) and A(·). For example, one can combine the browsing model of Discounted Cumulative Gain (DCG) <cit.> (refer to Eq.<ref>) and an aggregation function A_PE assuming that the user’s gain from the SERP is in compliance with the peak-end rule <cit.> (refer to Eq.<ref>) to create a new evaluation metric.
These metrics have the potential to be used for evaluating IR systems offline from multiple different perspectives. However, the statistical stability of these alternative evaluation metrics in offline evaluation has not yet been investigated. In particular, in order for alternative metrics obtained by combining various browsing models and different aggregations to be widely used in offline evaluations, the IR community needs to understand how, given a browsing model, metrics obtained from different aggregations perform in terms of statistical stability.
Inspired by previous work <cit.>, in this study, we investigate the statistical stability of C/W/L/A metrics in offline evaluation. Specifically, we come up with the following research questions (RQs).
* RQ1: Given a browsing model, to what extent do metrics obtained from different aggregations resemble one another? (The system ranking similarity among aggregations)
* RQ2: Given a browsing model, how do metrics obtained from different aggregations perform in terms of system ranking consistency across two disjoint topic sets? (The system ranking consistency of aggregations)
* RQ3: Given a browsing model, how do metrics obtained from different aggregations perform in terms of discriminative power, the ability to tell that one run is better than another with statistical significance? (The discriminative power of aggregations)
* Moreover, after investigating the above three questions, we further propose RQ4: Can we find an alternative metric(s) that improves system ranking consistency or (and) discriminative power compared to the canonical version, while at the same time returning system ranking lists that are very similar to the canonical version?
To answer these research questions, we combined various aggregation functions with the browsing model of Precision, Discounted Cumulative Gain (DCG) <cit.>, Rank-Biased Precision (RBP) <cit.>, INST <cit.>, Average Precision (AP) <cit.> and Expected Reciprocal Rank (ERR) <cit.> (Section <ref>); we then conducted experiments on two offline evaluation collections, examining the performance of the aggregations in terms of system ranking similarity (Section <ref>), system ranking consistency (Section <ref>) and discriminative power (Section <ref>).
Our result suggests that, in terms of system ranking consistency and discriminative power, the aggregation function of expected rate of gain (A_ERG) has an outstanding performance while the aggregation function of maximum relevance (A_max) usually has an insufficient performance. Our result also suggests that Precision, DCG, RBP, INST and AP with their canonical aggregation all have favourable performances in system ranking consistency and discriminative power. But for ERR, replacing its canonical aggregation with ERG can further strengthen the discriminative power while obtaining a system ranking list similar to the canonical version at the same time.
As far as we know, we are the first to examine the statistical stability of metrics generated by the C/W/L/A framework. Our work extends the work of Moffat et al. <cit.> from the perspective of statistical reliability in offline evaluation experiments. Based on the results, we suggest that researchers who want to design reliable evaluation metrics with the C/W/L/A framework use ERG as the aggregation function in order to achieve favourable system ranking consistency and discriminative power.
§ RELATED WORK
Evaluating the effectiveness of search engines has long been a central concern for the information retrieval (IR) community. Existing evaluation methods can be broadly divided into two classes, online evaluation and offline evaluation <cit.>. Offline evaluation is often built upon different simulations of the process of a user interacting with a system under operational settings <cit.>, and the evaluation metric scores can be viewed as the simulation of the gain a user accumulated during that process. Widely used offline evaluation metrics include: Precision, DCG <cit.>, AP <cit.>, RBP <cit.>, ERR <cit.>, INST <cit.>, among others <cit.>.
§.§ The C/W/L/A Framework
To characterise the user models behind offline evaluation metrics, the C/W/L framework <cit.> was proposed. In the C/W/L framework, the user model behind a metric can be deconstructed into three interrelated aspects of user behavior:
* Continuation probability, C(i): the probability that a user who has inspected the i-th item in the SERP will continue to examine the item at rank i+1.
* Weight function, W(i): the fraction of user attention on the item at position i. In other words, it is the likelihood of a user viewing the item at position i at any time under a sequence of random selections.
* Last probability, L(i): the probability that a user examines the document at rank i and then stops interacting with the SERP.
In the C/W/L framework, as long as one of the three components is known, the other two components can be calculated as well. For example, L(i) can be calculated from C(i) through Eq. <ref>.
The C/W/L framework thus provides a common ground for comparing models of various widely used metrics like DCG and RBP in terms of user browsing behaviour. For instance, the user model of Rank-Biased Precision (RBP)@p can be viewed as the assumption that at each rank of the SERP, the user will continue to examine the next result with a constant probability of p <cit.>. The C/W/L framework also allows researchers to design new metrics by defining C(i) <cit.>.
However, the original C/W/L framework assumes that users accumulate their gain only through the form of Expected Rate of Gain (also known as Expected Utility, refer to Eq. <ref>) or Expected Total Gain (refer to Eq. <ref>). This assumption cannot explain the user behavior behind ERR, a metric widely used in offline evaluation practice, since there is no appropriate aggregation for ERR given its browsing model (refer to Eq. <ref>) <cit.>. To resolve this incompatibility limitation, Moffat et al. <cit.> extended the C/W/L framework to the C/W/L/A framework by introducing a new component: aggregation (A). The score of a metric under the C/W/L/A framework can be computed through Eq. <ref>.
The introduction of aggregation allowed the C/W/L/A framework to characterize metrics like ERR through incorporating an appropriate aggregation function. For example, the score of ERR can be computed from the combination of C_ERR (Eq. <ref>) and A_ERR (Eq. <ref>) .
§.§ Meta-Evaluation of Metrics
As various evaluation metrics have been proposed, IR researchers have to come to grips with a question: what should a “good” evaluation metric be? Driven by this question, researchers began to shed light on the meta-evaluation of evaluation metrics. Sakai <cit.> argued that, as offline evaluation measures are used in experiments in the hope of ameliorating the effectiveness of search systems for real users, a good evaluation metric should: (a) serve as a surrogate of users’ perspectives so that IR systems can be improved in line with better user experience (user satisfaction); and (b) be statistically stable so that reliable offline experiments can be conducted (statistical stability).
As user satisfaction is regarded as a near-ideal ground truth metric of retrieval effectiveness, meta-evaluating metrics from the perspective of user satisfaction has already been widely adopted in previous studies <cit.>. To measure to what extent metric scores are consistent with users' satisfaction feedbacks, some researchers use correlations with users' satisfaction feedbacks <cit.>, while others use agreements with users' SERP preference <cit.>.
Moffat et al. <cit.> have already meta-evaluated the performances of aggregation functions in terms of the correlation with users' satisfaction feedbacks and the agreement with users' SERP preferences. Experimental results from Moffat et al. <cit.> showed that the aggregation function using maximum relevance (A_max) usually correlates well with users' satisfaction feedbacks, but in terms of agreement users' SERP preferences, the aggregation function using expected rate of gain (ERG) usually performs better. However, Moffat et al. <cit.> did not meta-evaluate the performances of aggregation functions in terms of the statistical stability. Hence the present study complements their work.
Discriminative power <cit.>, the statistical ability of a metric to significantly discriminate system pairs, is a widely used method to meta-evaluate the statistical stability of metrics <cit.>. Discriminative power measures the stability of a metric across the topics based on significance testing <cit.>.
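As an informal illustration (not taken from the cited papers), discriminative power can be approximated by testing, for every pair of runs, whether their per-topic scores differ significantly and reporting the fraction of significantly separated pairs. The R sketch below uses a paired t-test as a simple stand-in for the bootstrap or randomisation tests used in the literature; the score matrix is a placeholder.

# Sketch: discriminative power as the fraction of run pairs separated at level alpha.
# 'scores' is a hypothetical topics-by-runs matrix of per-topic metric scores;
# a paired t-test stands in for the bootstrap/randomisation tests in the literature.
discriminative_power <- function(scores, alpha = 0.05) {
  pairs <- combn(ncol(scores), 2)
  pvals <- apply(pairs, 2, function(p)
    t.test(scores[, p[1]], scores[, p[2]], paired = TRUE)$p.value)
  mean(pvals < alpha)   # proportion of significantly different run pairs
}

set.seed(1)
scores <- matrix(runif(50 * 10), nrow = 50)   # 50 topics, 10 runs (random placeholder)
discriminative_power(scores)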
System ranking consistency, which is based on the swap method, is another way to meta-evaluate the statistical stability of metrics <cit.>. System ranking consistency is the similarity of two rankings given by an evaluation measure on topic set A and topic set B respectively <cit.>. Previous work <cit.> formalised the procedure to measure system ranking consistency as randomly splitting the topic set multiple times, complemented by distribution-free statistical significance testing for the difference in mean τ’s between two topic subsets.
Other meta-evaluation methods include judgement cost <cit.>, coverage <cit.>, and axiomatic approaches <cit.>, but they are beyond the scope of this study.
In this study, we focus on the statistical stability of various aggregation functions in the C/W/L/A framework. In contrast to previous work (e.g., <cit.>), we are concerned with the within-group difference of metrics under the same browsing model while adopting different aggregations rather than the between-group difference of metrics under different browsing models. Note that the statistical stability of a metric cannot tell whether the metric is “measuring what we want to measure” <cit.> (e.g., how well a metric correlates with users' satisfaction feedback). Thus it meta-evaluates metrics on a dimension orthogonal to user satisfaction.
§ EXPERIMENTAL SETTINGS
§.§ Metrics
In our experiment, we consider browsing models of Precision@k=10, DCG@k=10, RBP@p=0.8, INST@T=2.25, AP and ERR. These metrics and parameters are chosen because: (1) they have clearly defined browsing models in the C/W/L/A framework; (2) combinations of their browsing models and different aggregations have already been examined by Moffat et al. <cit.> in terms of consistency with user satisfaction. The browsing models of these metrics in the C/W/L/A framework are as follows.
* Precision@k:
C_Precision(i) = 1 for i < k, and 0 otherwise.
* Discounted Cumulative Gain at k (DCG@k) <cit.>:
C_DCG@k(i) = log_2(i+1)/log_2(i+2) for i < k, and 0 otherwise.
* Rank-Biased Precision (RBP@p) <cit.>:
C_RBP@p(i) = p
* INST@T <cit.>:
C_INST@T(i) = (i - 1 + T + T_i)^2/(i + T + T_i)^2
where T_i = T - ∑_j = 1^i r_j represents the remaining gain the user needs to acquire in order to fulfill the expected gain after inspecting the i-th item.
* Average Precision (AP) <cit.>:
C_AP(i) = ∑_j = i+1^∞ (r_j / j)/∑_j = i^∞ (r_j / j)
* Expected Reciprocal Rank (ERR) <cit.>:
C_ERR(i) = 1 - r_i
In our experiment, we combine the C(·) of the above metrics with the following aggregation functions, whose definitions follow the work of Moffat et al. <cit.>; a short computational sketch combining these components is given after the list. Table <ref> shows the combinations examined in our experiment.
* The expected rate of gain (A_ERG):
A_ERG(i) = 1/V^+∑_j = 1^i r_j
where
V^+ = ∑_i = 1^∞∏_j = 1^i-1C(j)
This aggregation represents the “expected utility accumulated per item inspected” in the original C/W/L framework <cit.>.
* The expected total gain (A_ETG):
A_ETG(i) = ∑_j = 1^i r_j
This aggregation function assumes that users simply sum up the gain collected from each item with the same weight when they leave.
* The average relevance (A_avg):
A_avg(i) = 1/i∑_j = 1^i r_j
This aggregation function assumes that users' gain is determined by the average relevance of items they inspected when they leave.
* The maximum relevance (A_max)
A_max(i) = max_j = 1^i r_j
This aggregation function assumes that the user’s gain from the SERP will be completely dominated by the best element they inspected when they leave.
* The last relevance (A_fin)
A_fin(i) = r_i
This aggregation function assumes that the user’s gain from the SERP will be completely dominated by the last element they observed when they leave.
* The peak-end relevance (A_PE)
A_PE(i) = β· A_max(i) + (1-β)· A_fin(i)
This aggregation function assumes that the user’s gain from the SERP is in compliance with the peak-end rule, which suggests that people judge their experience of a series of past events by how they felt at its peak and by how they felt most recently <cit.>.
In our experiment we set β = 0.5 following the setting in the work of Moffat et al. <cit.>.
* The aggregation for ERR (A_ERR)
A_ERR(i) = 1/i
This aggregation function assumes that the user becomes increasingly dissatisfied as he or she inspects more documents, regardless of the quality of the documents.
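As a concrete illustration of how these aggregations plug into the framework, the self-contained R sketch below (ours, with the same truncation assumption as the earlier sketch, and with C allowed to depend on the relevance vector as ERR requires) scores a small ranking with the ERR browsing model under its canonical aggregation and under ERG.

# Sketch: the aggregation functions above in R, combined with the ERR browsing model.
# r is assumed to be already mapped to [0, 1]; the scorer repeats the earlier
# sketch, truncated at the last judged rank (an assumption of this illustration).
score <- function(C_fun, A_fun, r) {
  n <- length(r); C <- sapply(seq_len(n), C_fun, r = r); C[n] <- 0
  view <- cumprod(c(1, C[-n])); L <- view * (1 - C)
  sum(L * sapply(seq_len(n), A_fun, r = r))
}
v_plus <- function(C_fun, r) {               # expected viewing depth V^+
  n <- length(r); C <- sapply(seq_len(n), C_fun, r = r); C[n] <- 0
  sum(cumprod(c(1, C[-n])))
}

C_err <- function(i, r) 1 - r[i]                              # ERR browsing model
A_erg <- function(i, r) sum(r[1:i]) / v_plus(C_err, r)        # expected rate of gain
A_max <- function(i, r) max(r[1:i])                           # maximum relevance
A_fin <- function(i, r) r[i]                                  # last relevance
A_pe  <- function(i, r) 0.5 * A_max(i, r) + 0.5 * A_fin(i, r) # peak-end, beta = 0.5
A_err <- function(i, r) 1 / i                                 # canonical ERR aggregation

r <- (2^c(3, 0, 2, 1, 0) - 1) / 2^3   # exponential gain mapping used for ERR
score(C_err, A_err, r)                # ERR browsing model, canonical aggregation
score(C_err, A_erg, r)                # ERR browsing model, ERG aggregation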
§.§ Dataset
Table <ref> presents an overview of datasets we used in our experiment, where we examine the system ranking similarity, system ranking consistency and discriminative power of the metrics listed in Table <ref>. These datasets were chosen based on the following principles: (1) they should be recent; (2) they should include enough topics and submitted runs, since we want to obtain reliable experimental results.
The NTCIR-15 WWW-3 (WWW3) <cit.> dataset is from the NTCIR-15 WWW-3 English subtask whose target corpus is clueweb12-B13 (about 50 million web pages) [https://lemurproject.org/clueweb12/]. It includes 80 topics and 39 runs (including 2 baseline runs), with 4-level relevance judgement for documents.
The target corpus for the TREC 2019 Deep Learning track (TR19DL) dataset is an MS MARCO corpus (3.2 million documents). TR19DL dataset includes 43 topics and 38 runs, with 4-level relevance judgement for documents.
When calculating scores for Precision, DCG, RBP, INST and AP, we linearly map the relevance score of the i-th item as r_i = x/x_max. When calculating scores for ERR, we exponentially map the relevance score of the i-th item as r_i = (2^x - 1)/2^x_max. Here x is the original relevance score and x_max is the maximum relevance score in the collection. We exponentially map relevance scores for ERR so that its metric scores are identical to those given by its original definition <cit.>.
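Transcribed directly from the two mappings above (Python), with x the original relevance label and x_max the maximum label in the collection:

def linear_gain(x, x_max):
    """r_i = x / x_max, used for Precision, DCG, RBP, INST and AP."""
    return x / x_max

def exp_gain(x, x_max):
    """r_i = (2^x - 1) / 2^x_max, used for ERR."""
    return (2 ** x - 1) / 2 ** x_max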
§ SYSTEM RANKING SIMILARITY
Figure <ref> shows how the system rankings according to different aggregations under the browsing model of a metric resemble one another in terms of Kendall's τ. 95% CIs for correlations are given in parentheses.
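For illustration, correlations of this kind can be obtained as in the following sketch (Python), which ranks runs by their mean metric score over topics and computes Kendall's τ between the rankings induced by two aggregations; the score matrices here are random placeholders for the real topic-by-run matrices.

import numpy as np
from scipy.stats import kendalltau

def system_ranking(score_matrix):
    """Rank runs (columns) by mean score over topics (rows)."""
    return np.argsort(np.argsort(-score_matrix.mean(axis=0)))

rng = np.random.default_rng(0)
scores_a = rng.random((80, 39))                      # e.g. a WWW3-sized topic-by-run matrix
scores_b = scores_a + 0.05 * rng.random((80, 39))    # scores under another aggregation

tau, _ = kendalltau(system_ranking(scores_a), system_ranking(scores_b))
print(f"Kendall's tau between the two system rankings: {tau:.3f}")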
Note that for Precision, DCG and RBP, the system ranking lists returned by A_ERG and A_ETG are exactly the same. This is because for these metrics C(i) is a constant for a given i (refer to Eq. <ref>, Eq. <ref> and Eq. <ref>), so the metric score given by A_ETG is equal to the metric score given by A_ERG multiplied by a constant (refer to Eq. <ref> and Eq. <ref>). For Precision, the system ranking list returned by A_avg is also the same as those returned by A_ERG and A_ETG. This is because for Precision@k, L(i) = 1 when i = k and L(i) = 0 otherwise, so the metric score of Precision@k is equal to A(k) (refer to Eq. <ref>, Eq. <ref> and Eq. <ref>). Hence, when k is given, the metric score of Precision@k given by A_avg is equal to the one given by A_ETG multiplied by a constant (refer to Eq. <ref> and Eq. <ref>).
From Figure <ref> we can observe that: (1) the system ranking similarity among different aggregations depends on the browsing model of a metric and is hard, if not impossible, to summarise in a few words; (2) generally, system ranking lists returned by different aggregations under the browsing model of a metric are more similar to one another on WWW3 than on TR19DL. More specifically, we can observe the following results in terms of system ranking similarity.
The system ranking similarity among different aggregations tends to be low under the browsing model of Precision. No pair of aggregations has a system ranking similarity of more than 0.90 in terms of τ on the WWW3 dataset, and the τ values are even lower on the TR19DL dataset.
Under the browsing model of DCG, system rankings given by different aggregations tend to be similar to one another on WWW3, while on TR19DL, the system ranking lists returned by A_max tend to be less similar to those returned by A_ETG, A_ERG and A_fin.
Under the browsing model of RBP, system rankings given by different aggregations tend to be similar to one another on both datasets. Specifically, the system ranking lists returned by A_max are relatively less similar to those returned by A_ETG, A_ERG and A_fin.
Under the browsing model of INST, system rankings given by different aggregations also tend to be similar to one another on both datasets. Specifically, the system ranking lists returned by A_max are relatively less similar to those returned by A_ETG and A_ERG.
Under the browsing model of AP, it is clear that: (1) the system ranking lists returned by A_ERR are very different from those returned by other aggregations; (2) A_avg has high system ranking similarity to A_ERG; (3) the system ranking lists returned by A_max, A_fin and A_PE are similar to one another.
Under the browsing model of ERR, it is clear that: (1) the system ranking lists returned by A_ETG are less similar to those returned by other aggregations; (2) the system ranking lists returned by A_ERG, A_avg and A_ERR are similar to one another; (3) the system ranking lists returned by A_max, A_fin and A_PE are similar to one another.
§ SYSTEM RANKING CONSISTENCY
This section compares the performance of different aggregations given the browsing model of a metric in terms of system ranking consistency across two disjoint topic sets. To be more specific, given a test collection whose topic set is T and a set of K runs associated with it, we compare a set {M} of candidate metrics following the method in previous work <cit.> as follows.
* For each measure M, evaluate the K runs with T, and thereby obtain a |T| × K topic-by-run score matrix S_M .
* From each S_M, obtain a τ score B times using the algorithm shown in Algorithm <ref>, where each τ quantifies the system ranking consistency when the K runs are ranked according to two disjoint subsets of T . We thus obtain a B ×|{M}| matrix C containing the consistency τ scores.
* To see if any of the differences in mean consistency τ scores are statistically significant, apply a paired, randomised Tukey HSD test <cit.> to C.
Note that since the Tukey HSD test is a multiple comparison procedure, one can ensure that the familywise Type I error rate is no more than α, which is set to 0.05 throughout our study. Moreover, as the randomised Tukey HSD test is distribution-free, it can be applied regardless of the distribution of the τ scores. We use the Discpower tool [http://research.nii.ac.jp/ntcir/tools/discpower-en.html] for the randomised Tukey HSD test with 2,000 trials. A sketch of the topic-split sampling step is given below.
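A minimal sketch (Python) of the topic-split sampling that produces one column of the matrix C; the significance comparison of the resulting τ columns is then delegated to the randomised Tukey HSD test in Discpower and is not reproduced here. The score matrix is a random placeholder.

import numpy as np
from scipy.stats import kendalltau

def consistency_taus(score_matrix, B=1000, seed=0):
    """B times: split topics into two disjoint halves, rank runs by mean score
    on each half, and record Kendall's tau between the two system rankings."""
    rng = np.random.default_rng(seed)
    n_topics = score_matrix.shape[0]
    half = n_topics // 2
    taus = np.empty(B)
    for b in range(B):
        perm = rng.permutation(n_topics)
        rank_a = np.argsort(np.argsort(-score_matrix[perm[:half]].mean(axis=0)))
        rank_b = np.argsort(np.argsort(-score_matrix[perm[half:2 * half]].mean(axis=0)))
        taus[b] = kendalltau(rank_a, rank_b)[0]
    return taus                                      # one column of the B x |{M}| matrix C

scores = np.random.default_rng(1).random((43, 38))   # e.g. a TR19DL-sized matrix
print(consistency_taus(scores, B=100).mean())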
Table <ref> summarises the results of our system ranking consistency experiments with B = 1, 000 topic subset pairs in each case. From Table <ref> we can observe the following result.
In general, A_ERG and A_ETG perform well in terms of system ranking consistency. A_ERG has the best or the second-best system ranking consistency in all but one case (INST on TR19DL). A_ETG also has the best or the second-best system ranking consistency in all but two cases (INST and ERR on WWW3).
A_max tends to return system rankings that disagree across different topic sets. It is in the last or second-to-last place in terms of system ranking consistency in all but one case (Precision on TR19DL). A possible explanation for the low consistency of the system rankings given by A_max is that it always returns the same result after encountering the maximum relevance score seen so far, so the metric scores tend to be similar, which impairs the ability of the metric to discriminate runs.
The performance of A_avg in terms of system ranking consistency is mediocre but stable. The overall trend is that it underperforms A_ERG and A_ETG while outperforming A_max.
The performance of A_fin in terms of system ranking consistency is volatile. In general, two trends can be observed: (1) it does not perform well under the browsing model of Precision, where it is in the last place; (2) its performance is mediocre under the browsing model of ERR, where it is in the third-to-last place. However, it is hard to summarise its performance briefly when it is combined with the browsing models of the other metrics, as its performance varies between datasets. For example, on WWW3 it has the third-best system ranking consistency under the browsing model of DCG, but on TR19DL, likewise under the browsing model of DCG, it is in the last place.
The performance of A_PE in terms of system ranking consistency is also unstable. Under the browsing models of AP and ERR, its performance tends to be a compromise between those of A_fin and A_max. However, in other cases this trend cannot be confirmed: on one dataset its rank in system ranking consistency is between A_fin and A_max, but on the other it ranks higher than both.
A_ERR has an outstanding performance in system ranking consistency under the browsing model of ERR, but its performance falters when combined with the browsing models of INST and AP. This result suggests that A_ERR is a highly specialised aggregation function for the browsing model of ERR and may perform poorly in terms of system ranking consistency when used with the browsing models of other metrics.
From the perspective of canonical and alternative aggregations, the overall picture is that metrics with their canonical aggregation all have favourable performance in system ranking consistency. For ERR, replacing A_ERR with A_ERG might further improve its system ranking consistency, but whether the improvement is substantial needs further verification in future work. The current results show that the improvement on the WWW3 dataset is statistically significant, while the improvement on the TR19DL dataset is incremental and lacks statistical significance.
§ DISCRIMINATIVE POWER
In offline evaluation practice, a metric that tends to significantly discriminate more system pairs is preferred. The ability to significantly discriminate system pairs is called discriminative power.
To determine the discriminative power of the metrics, we compute the scores of each metric for the K runs on |T| topics with cutoff L = 10. Thus, for each metric we have a |T| × K score matrix and K(K - 1)/2 system pairs on |T| topics. We then carry out significance tests for the difference of metric scores on each system pair. For significance testing, we use the randomised version of the paired Tukey HSD test via the Discpower tool with 2,000 trials. The Achieved Significance Level (ASL) is obtained with the algorithm described by Carterette <cit.>.
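The construction of an ASL curve can be sketched as follows (Python). For brevity the per-pair p-values here come from a simple two-sided sign-flip permutation test on per-topic score differences, not from the randomised Tukey HSD procedure actually used in our experiments; the score matrix is a random placeholder.

import numpy as np
from itertools import combinations

def permutation_pvalue(diff, n_perm=2000, seed=0):
    """Two-sided p-value for the mean per-topic score difference via sign flips."""
    rng = np.random.default_rng(seed)
    observed = abs(diff.mean())
    flips = rng.choice([-1.0, 1.0], size=(n_perm, diff.size))
    null = np.abs((flips * diff).mean(axis=1))
    return float((null >= observed).mean())

def asl_curve(score_matrix):
    """Sorted p-values over all run pairs; plotting index vs p gives the ASL curve."""
    n_runs = score_matrix.shape[1]
    pvals = [permutation_pvalue(score_matrix[:, i] - score_matrix[:, j])
             for i, j in combinations(range(n_runs), 2)]
    return np.sort(pvals)

scores = np.random.default_rng(2).random((80, 10))   # hypothetical 80 topics x 10 runs
print(asl_curve(scores)[:5])                          # smallest p-values among 45 pairs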
Figure <ref> shows the results in the form of ASL curves. Metrics whose curves are close to the origin are the ones with high discriminative power, meaning that they produce smaller p-values for many run pairs than other metrics do. Note that for Precision, DCG and RBP, the ASL curves of A_ETG are not shown in the figures as they are exactly the same as the ASL curves of A_ERG, since the metric score given by A_ETG is equal to the metric score given by A_ERG multiplied by a constant. For Precision, the ASL curve of A_avg is not shown in the figure as it is exactly the same as the ASL curves of A_ERG and A_ETG, since the metric score given by A_avg is equal to the one given by A_ETG multiplied by a constant. From Figure <ref>, we can observe the following results.
A_ERG has a strong discriminative power and in general performs superbly across different metrics. A_ETG also has a strong discriminative power in most cases, but it struggles when combined with the browsing model of ERR. Under the browsing models of Precision, DCG and RBP, it has the same ASL curves as A_ERG and is thus among the best performers in discriminative power. Under the browsing models of INST and AP, the discriminative power of A_ETG is also strong, similar to A_ERG. Nevertheless, the discriminative power of A_ETG weakens under the browsing model of ERR, where it is outperformed by A_avg and the canonical A_ERR. Considering that A_ETG(i) is equal to V^+ · A_ERG(i), the poor performance of A_ETG may be related to the volatile browsing model of ERR.
A_max has a weak discriminative power and generally lags behind. It is in the last place in terms of discriminative power under the browsing models of DCG, RBP, INST and ERR. It is in the second-to-last place under the browsing model of AP, only outperforming A_ERR, whose discriminative power is very weak in that case. Similar to what causes the low consistency of the system rankings given by A_max, a possible explanation for its weak discriminative power is that it always returns the same result after encountering the maximum relevance score seen so far, so the metric scores tend to be similar, which impairs the ability of the metric to discriminate runs.
The discriminative power of A_avg is mediocre but stable, just like its performance in terms of system ranking consistency. In general, it is outperformed by A_ERG and A_ETG while outperforming A_max in discriminative power.
The discriminative power of A_fin is volatile and highly dependent on the browsing model of a metric. Under the browsing models of RBP and INST, it has a strong discriminative power, performing similarly to or even better than A_ERG and A_ETG. Under the browsing models of AP and DCG, its discriminative power is mediocre. Under the browsing model of Precision, its discriminative power falters and it is in the last place.
The discriminative power of A_PE tends to be a compromise between those of A_fin and A_max in general. This result is intuitive given the definition of A_PE. The only exception is that, on the WWW3 dataset, it outperforms both A_fin and A_max in discriminative power when combined with the browsing model of Precision.
A_ERR performs well in discriminative power under the browsing model of ERR, second only to A_ERG. Nevertheless, it performs poorly under the browsing models of AP and INST. Especially in the case of AP, its discriminative power is substantially weaker than that of the other aggregations. This result again suggests that A_ERR is a highly specialised aggregation function for the browsing model of ERR and may perform poorly in terms of discriminative power when combined with the browsing models of other metrics.
From the perspective of canonical and alternative aggregations, the overall picture is that metrics with their canonical aggregation all have good, if not the best, performance in discriminative power. Nevertheless, for ERR, replacing A_ERR with A_ERG can further strengthen its discriminative power.
§ CONCLUSIONS AND DISCUSSION
In this study, we meta-evaluated metrics obtained by combining different aggregation functions with the browsing models of Precision, DCG, RBP, INST, AP and ERR. We compared these metrics in order to determine, given the browsing model of a metric, the impact of using different aggregation functions on system ranking similarity, system ranking consistency and discriminative power. Our work extends the work of Moffat et al. <cit.> from the perspective of statistical reliability in offline evaluation experiments. Our experimental results provide useful insights for researchers who are going to design reliable evaluation metrics for offline evaluation using the C/W/L/A framework. With respect to the RQs, we have the following findings:
RQ1: The system ranking similarity among aggregations. The system ranking similarity among different aggregations depends on the browsing model of a metric and it is hard to give a universal rule.
RQ2: The system ranking consistency of aggregations. A_ERG and A_ETG have outstanding performance in terms of system ranking consistency. A_max usually performs poorly in terms of system ranking consistency. The performance of A_avg is mediocre. The performances of A_fin and A_PE are volatile, depending on the browsing model of a metric. A_ERR has an outstanding performance in system ranking consistency under the browsing model of ERR, but it performs poorly when combined with the browsing models of INST and AP.
RQ3: The discriminative power of aggregations. A_ERG tends to have the strongest discriminative power and performs the best in most cases. A_ETG also has an outstanding discriminative power except for the case of ERR. A_max tends to have a weak discriminative power and performs poorly in most cases. The discriminative power of A_avg is mediocre. The discriminative power of A_fin is volatile and highly dependent on the browsing model of a metric. The discriminative power of A_PE tends to be a compromise between those of A_fin and A_max in most cases. A_ERR performs well in discriminative power under the browsing model of ERR, but it performs poorly under the browsing models of AP and INST.
RQ4: Alternative aggregations that improve the statistical reliability of metrics. Given that the canonical aggregation of Precision, DCG, RBP, INST and AP is A_ERG, and that A_ERG, as noted above, performs well in terms of system ranking consistency and discriminative power, there is no evidence that replacing the canonical aggregation of these metrics with an alternative aggregation would further improve their performance. For ERR, replacing the canonical A_ERR with A_ERG can further strengthen the discriminative power while yielding a system ranking list similar to that of the canonical version.
Overall, our results suggest that, in terms of system ranking consistency and discriminative power, A_ERG has an outstanding performance while A_max usually has an insufficient performance. A possible explanation is that A_ERG uses the information of all relevance scores it has encountered so far, together with the probability of users inspecting documents at each rank (1/V^+); therefore, metric scores given by A_ERG are able to discriminate more runs. On the other hand, A_max only uses the information of the maximum relevance score it has encountered so far, so the metric scores tend to be similar, which impairs the ability of the metric to discriminate runs.
Based on the results in this study, we recommend that IR researchers: (1) use ERR with A_ERG in offline evaluation practice in order to achieve high system ranking consistency and discriminative power while obtaining a system ranking list similar to that of the canonical version; (2) use A_ERG as the aggregation function when designing evaluation metrics using the C/W/L/A framework. This is conducive to improving the system ranking consistency and discriminative power of the metrics.
ACFI-T23-04
[email protected]
Tsung-Dao Lee Institute and School of Physics and Astronomy, Shanghai Jiao Tong University,
800 Dongchuan Road, Shanghai, 200240 China
Shanghai Key Laboratory for Particle Physics and Cosmology,
Key Laboratory for Particle Astrophysics and Cosmology (MOE),
Shanghai Jiao Tong University, Shanghai 200240, China
[email protected]
Tsung-Dao Lee Institute and School of Physics and Astronomy, Shanghai Jiao Tong University,
800 Dongchuan Road, Shanghai, 200240 China
Shanghai Key Laboratory for Particle Physics and Cosmology,
Key Laboratory for Particle Astrophysics and Cosmology (MOE),
Shanghai Jiao Tong University, Shanghai 200240, China
[email protected], [email protected]
Tsung-Dao Lee Institute and School of Physics and Astronomy, Shanghai Jiao Tong University,
800 Dongchuan Road, Shanghai, 200240 China
Shanghai Key Laboratory for Particle Physics and Cosmology,
Key Laboratory for Particle Astrophysics and Cosmology (MOE),
Shanghai Jiao Tong University, Shanghai 200240, China
Amherst Center for Fundamental Interactions, Department of Physics,
University of Massachusetts Amherst, MA 01003, USA
Kellogg Radiation Laboratory, California Institute of Technology,
Pasadena, CA 91125 USA
In this study, we present a comprehensive analysis of the electroweak sphaleron formalism and its application to electroweak phase transition (EWPT) patterns in extensions of the Standard Model scalar sector with electroweak multiplets. We offer an equivalence proof for different choices for the form of sphaleron configurations; construct the previously unestablished high-dimensional SU(2) sphaleron transformation matrix;
and revisit the required boundary conditions needed for solving the sphaleron field equations.
We then investigate the leading order sphaleron dynamics in the context of a multi-step EWPT.
We showcase two distinct analytical approaches for extending the SU(2) scalar multiplet to the standard model (SM) under differing EWPT scenarios, and perform an explicit calculation of the sphaleron energy using a septuplet example. In the context of a single-step EWPT leading to a mixed phase, we find that the additional multiplet's contribution to the sphaleron energy is negligible, primarily due to the prevailing constraint imposed by the ρ parameter. Conversely, in a two-step EWPT scenario, the sphaleron energy can achieve significantly high values during the initial phase, thereby markedly preserving baryon asymmetry if the universe undergoes a first-order EWPT. In both cases, we delineate the relationship between the sphaleron energy and the parameters relevant to dark matter phenomenology.
Electroweak sphalerons, scalar multiplets, and symmetry breaking patterns
Yanda Wu, Wenxing Zhang, Michael J. Ramsey-Musolf
August 1, 2023
==========================================================================
§ INTRODUCTION
The origin of Baryon Asymmetry of the Universe (BAU) remains an open question in the frontier of particle physics and cosmology. In order to explain the BAU, Sakharov proposes three necessary conditions: (1) baryon-number violation; (2) C and CP violation; (3) departure from thermal equilibrium or CPT violation <cit.>.
In principle, the Standard Model (SM) provides all the necessary ingredients for generation of the baryon asymmetry during the era of electroweak symmetry-breaking (EWSB), a scenario known as electroweak baryogenesis (EWBG).
Indeed, the first condition can be fulfilled by the non-perturbative weak sphaleron process. However, the SM fails to satisfy the second and third conditions. The CP violation associated with the Cabibbo-Kobayashi-Maskawa matrix is too weak to generate the observed BAU <cit.>, and EWSB occurs through a smooth crossover transition due to the large Higgs mass <cit.>, thereby failing the out-of-equilibrium requirement. Many beyond Standard Model (BSM) theories have been proposed to remedy these shortcomings and facilitate EWBG (see <cit.> for reviews). In this work, we focus on a key element of BSM EWBG: electroweak sphaleron dynamics. We do so in the context of a general class of BSM scenarios, namely, those involving an extended Higgs sector containing higher-dimensional electroweak multiplets.
Electroweak baryogenesis requires a first order electroweak phase transition (FOEWPT), during which bubbles of broken symmetry nucleate in the symmetric phase. The BSM CP-violating interactions at the bubble walls generate a left-handed fermion number density that biases symmetric phase sphaleron transitions into generation of non-zero baryon plus lepton number (B+L) <cit.>. The asymmetry diffuses into the bubble interiors. A sufficiently strong FOEWPT leads to suppression of the broken phase sphaleron rate, thereby allowing preservation of the asymmetry <cit.>. A central question, therefore, pertains to the broken phase sphaleron rate: is it sufficiently quenched so as to preserve the BAU?
While the most reliable approaches to answering this question are obtained using lattice computations, as a practical matter performing a broad survey of BSM scenarios and associated parameter choices relies on (semi-)analytic methods and perturbation theory. The latter provides a baseline for comparison and validation against non-perturbative studies. The aim of the following study is to refine this baseline and clarify some formal considerations along the way. In doing so, we recall that the analytic result for the broken phase sphaleron rate, Γ_WS can be written as the product of a dynamical prefactor A and a statistical factor <cit.>:
Γ_WS = A e^-E_sph/T ,
where E_sph is the energy associated with the semiclassical sphaleron solution.
Our focus in the present study falls on the latter.
To further set the context, we recall that the thermal history of EWSB can entail either a single, direct transition to the present Higgs phase or a series of steps. In the presence of additional scalar fields Φ, a different vacuum associated with a non-zero vacuum expectation value (vev) for one or more components of Φ may precede the Higgs phase. Alternately, the Higgs phase may also involve a non-zero Φ vev. While Φ may be either a SM gauge singlet or carry SM quantum numbers, in this study we consider the case where Φ is an SU(3)_C singlet but charged under SU(2)_L×U(1)_Y. We further specify that only the neutral component of Φ obtains a non-zero vev. Three representative patterns of EWSB are illustrated in <ref>, where case (a), (b) and (c) represent the SM one-step EWPT, one-step EWPT to the mixed phase and two-step EWPT, respectively.
Each case may accommodate EWBG. For the single-step transitions in (a) and (b), the presence of Φ will modify Γ_WS through thermal loops and, for (b), through additional contributions to the semiclassical sphaleron solution. Note that for (b), constraints from the electroweak ρ-parameter place strong constraints on ⟨Φ⟩, when Φ is neither a gauge singlet or second Higgs doublet. One may evade these constraints through a suitable choice of field content, as in the Georgi-Machacek model <cit.>. For (c), the first step may accommodate EWBG if (i) this step involves a FOEWPT; (ii) if BSM CPV interactions generate a sufficiently large asymmetry; (iii) Γ_WS in the EWSB Φ vacuum is sufficiently suppressed; and (iv) the second step to the Higgs phase does not allow for re-excitation of the EW sphalerons. The viability of this possibility has been demonstrated in Refs. <cit.>.
To our knowledge, EW sphaleron dynamics for these scenarios in the presence of Φ have not been explored in a unified and systematic way. In what follows we endeavor to do so, focusing on cases (b) and (c) wherein Φ can play an active role in the semiclassical sphaleron solution. We investigate both the corresponding topological structure and sphaleron energy.
Thus, this study mainly consists of two parts. In the first part, we review, update, and clarify various formal aspects related to the semiclassical treatment of the sphaleron in BSM theories, including: relationships between various treatments of the sphaleron configuration; a general construction of the 1-form framework for a general scalar multiplet; restrictions arising in the presence of more than one scalar field multiplet; topology pertaining to higher-dimensional (beyond doublet) multiplets; and the equations of motion and choice of boundary conditions. We intend our discussion of these issues to provide a general reader with some background as well as to set the context for our specific choices in the second part of the study.
In the latter part, we compute the sphaleron energy for scenarios (b) and (c) with Φ being an electroweak septuplet, whose presence in the Higgs vacuum of scenario (c) can contribute to the dark matter (DM) relic density. In this instance, we delineate the dependence of E_sph on the parameters relevant to DM phenomenology: the DM mass, its self-interaction, and the coupling to SM fields that enters the annihilation and direct detection cross sections. In the present work, we primarily focus on the analysis of the zero-temperature model. Our aim is to provide a methodology for applying the sphaleron formalism to different EWSB patterns, for which the zero-temperature model can provide a good approximation of physical quantities. The thermally corrected model can be analyzed in a parallel manner. We find that, depending on the values of these parameters, E_sph for step C1 of case (c) can be significantly larger than in the SM single-step transition of case (a), suggesting that the two-step scenario can be particularly conducive to EWBG.
Our discussion of these issues is organized as follows. In section II, we present a detailed analysis of the sphaleron formalism, both in the SM and in BSM scenarios. In section III, we discuss an electroweak multiplet extension to the SM and present three possible types of EWPT in this extension. In section IV, we compute the sphaleron energy of this model under different types of EWPT.
§ SPHALERON FORMALISM
In this section, we address several issues pertaining to sphaleron formalism:
* We first summarize the most widely considered choices for the sphaleron configurations and construct the relations between them, starting with the Weinberg-Salam theory.
* As we will utilize the 1-form choice when treating higher dimensional multiplets, we give a general construction in terms of Wigner D-matrices that applies to scalar fields of arbitrary isospin.
* We apply this construction to an extended scalar sector and point out restrictions on the scalar potential needed to accommodate a multi-scalar field sphaleron solution.
* For Φ differing from a scalar doublet, there exist additional parameters describing trajectories in field space beyond those pertaining to the doublet case. We present a sufficient condition on the additional multiplet to ensure that the sphaleron solution yields the baryon plus lepton charge Q_B+L=1 (equivalent to Q_B=1/2, since B-L is conserved by the sphaleron transitions).
* We review the derivation of the sphaleron field equations for a general electroweak multiplet and clarify requirements on the corresponding boundary conditions.
§.§ Sphaleron configurations in the Standard Model
The SM electroweak sphaleron formalism was first constructed by Manton and Klinkhamer <cit.>. Klinkhamer and Laterveer later proposed another sphaleron configuration with a field configuration different from Manton and Klinkhamer's <cit.>. We will demonstrate that these two configurations are equivalent to each other under a sphaleron gauge transformation. Other sphaleron configurations, e.g. Refs. <cit.>, are also discussed in this work.
Manton first constructs the sphaleron topological non-contractible loop (NCL) within the Weinberg-Salam theory. Through the topological identity map, the Higgs field at spatial infinity is parameterized as <cit.>
H^∞(μ,θ,ϕ)=([ H_1^∞; H_2^∞ ])=([ sinμsinθ e^iϕ; e^-iμ(cosμ+isinμcosθ) ]),
where the Higgs field at spatial infinity r→∞ is denoted as H^∞. The parameter μ∈[0,π] characterizes motion along the NCL, where μ=0,π correspond to the vacuum and μ=π/2 corresponds to the sphaleron configuration. The other two parameters, θ and ϕ, are the spherical angles at spatial infinity. The unitary transformation matrix U^∞ is constructed as
U^∞=([ H^∞ *_2 H^∞_1; -H^∞ *_1 H^∞_2 ]),
Then the field configurations for arbitrary r are given by <cit.>
H(μ,ξ,θ,ϕ)=v/√(2)h(ξ)U^∞([ 0; 1 ]),
𝐀_i(μ,ξ,θ,ϕ)dx^i=-i/gf(ξ)∂_i U^∞ (U^∞)^-1.
where ξ=gΩ r is a dimensionless radial parameter with Ω=246.22 GeV; v is the Higgs vev; 𝐀_i=A_i^a σ^a/2, where σ^a denotes the Pauli matrices; and g represents the weak gauge coupling constant. Under the spherically symmetric ansatz, h(ξ) and f(ξ) denote the Higgs and gauge field radial profile functions. The radial profile functions satisfy a set of coupled differential equations implied by the fields' Euler-Lagrange equations, whose boundary conditions will be discussed in a later subsection. Note that i∈ [r,θ,ϕ] in spherical coordinates. Since the radial gauge is applied, the radial component of the gauge field A_r^a vanishes.
Klinkhamer and Laterveer propose a different field configuration <cit.> (denoted here as the KL configuration), while their sphaleron matrix U^∞ is identical with Manton and Klinkhamer's original construction <cit.> (denoted as the MK configuration). The KL configuration defines the 1-form F_a via
i (U^∞)^-1 dU^∞ = ∑_a=1^3 F_a σ^a/2 ,
where the F_a are crucial for the calculation of the Yang-Mills and kinetic parts of the sphaleron energy.
The NCL in the KL configuration commences and terminates at topologically distinct vacua, and is composed of three phases <cit.>
* I, μ∈ [-π/2,0]: builds up the Higgs field configuration;
* II, μ∈ [0,π]: builds up and destroys the gauge field configuration;
* III, μ∈ [π, 3π/2]: destroys the Higgs field configuration.
where μ=-π/2, 3π/2 represent the vacuum configuration and μ=π/2 denotes the sphaleron configuration. The profile functions differ between the phases. In phases I and III, the field configuration is given by
𝐀_i=a_i=0,
H(μ,ξ,θ,ϕ) = v(sin ^2 μ+h(ξ) cos ^2 μ)/√(2)([ 0 1 ])^T ,
where a_i denotes the U(1) gauge field. In phase II, the field configuration reads
H(μ,ξ,θ,ϕ)=v/√(2)h(ξ)([ 0; 1 ]),
𝐀_i dx^i = 1/g(1-f(ξ))[F_1 σ_1/2+F_2 σ_2/2]
+1/g(1-f_3(ξ))[F_3 σ_3/2],
a_i dx^i = 1/g^'(1-f_0(ξ))F_3 .
where a_i represents the U(1) gauge field, and g^' denotes the U(1) gauge coupling constant.
We now show the equivalence of this KL field configuration with the MK configuration under the restrictions f_0=1 and f=f_3.
Applying the unitary transformation (U^∞)^-1 to the MK configuration, eq. (<ref>), the Higgs field becomes
H(μ,ξ,θ,ϕ) →v/√(2)h(ξ)([ 0; 1 ]),
The gauge field transforms in the usual way 𝐀_μ→ U𝐀_μU^-1-i/g(∂_μU) U^-1, so the transformed gauge field becomes
𝐀_𝐢dx^i →(U^∞)^-1(-i/gf(ξ)∂_iU^∞(U^∞)^-1)U^∞
-i/g∂_i(U^∞)^-1U^∞
=i/g(1-f(ξ))(U^∞)^-1∂_iU^∞
=1/g(1-f(ξ))[F_1 σ_1/2+F_2 σ_2/2+F_3 σ_3/2].
Under the symmetric ansatz (f=f_3,f_0=1), the Higgs field and non-abelian gauge field configurations in eq. (<ref>) are equal to the gauge transformed configurations eq. (<ref>) and eq. (<ref>). Hence, the MK configuration is a special (zero mixing angle) case of KL configuration with the additional stipulation f=f_3.
Different gauge field configurations should lead to the same sphaleron energy, which is gauge independent. Moreover, since the sphaleron itself sits at μ=π/2 in phase II, the differences between the MK and KL configurations in phases I and III, which merely build up and destroy the Higgs field along the NCL, do not affect the sphaleron energy; hence the MK and KL configurations give the same sphaleron energy. Apart from the MK and KL configurations, there are other sphaleron configurations. Akiba, Kikuchi and Yanagida proposed a field configuration based on the general spherically symmetric ansatz <cit.>, denoted here as the AKY configuration. Kleihaus, Kunz and Brihaye constructed a configuration based on a set of orthonormal vectors <cit.>, which is quite similar to Rebbi and Rossi's monopole solution <cit.>; we will call this the KKB configuration. To serve as a comprehensive summary of sphaleron configurations, we discuss the AKY and KKB configurations in the Appendix.
AKY <cit.> show that their field solutions are completely equivalent to the MK sphaleron configuration. In addition, the work <cit.> compares the MK and AKY configurations from the perspective of Bloch wave functions.
In the remainder of this work, we will generalize the sphaleron configuration with scalar multiplet based on the KL configuration.
§.§ A general 1-form for SU(2) multiplet
To that end, it is useful to provide a general construction of the 1-form F_a applicable to a general scalar SU(2)_L multiplet of arbitrary isospin J. In passing, we note that
Ahriche et al. <cit.> calculate the sphaleron energy for higher-dimensional SU(2) scalar representations, wherein they use, but do not prove, the property that the 1-form F_a is invariant across different representation dimensions. We will expand on their work by showing this invariance.
An arbitrary SU(2) matrix can be parameterized in terms of the Wigner-D matrix, which in the fundamental representation reads
U(α, β, γ) = 𝒟^1/2_m,m^'(α,β,γ)
=e^-iασ_3/2 e^-iβσ_2/2 e^-iγσ_3/2
=([ e^-i(α + γ)/2cos(β/2) -e^-i(α - γ)/2sin(β/2); e^i(α - γ)/2sin(β/2) e^i(α + γ)/2cos(β/2) ]),
where α,β,γ are three Euler angles.
Comparing this matrix with sphaleron matrix eq. (<ref>), we can obtain the following relationships
cos(β/2) cos(α/2+γ/2)=1+sin^2μ(cosθ-1),
sin(β/2) sin(α/2-γ/2)=sinϕsinθsinμ,
sin(β/2) cos(α/2-γ/2)=-cosϕsinθsinμ ,
cos(β/2) sin(α/2+γ/2)=sinμcosμ (cosθ-1) .
We obtain these relations by (i) expanding eq. (<ref>) and eq. (<ref>) in the basis I_2× 2, σ_1, σ_2 and σ_3; (ii) equating the basis coefficients of these two matrices.
While it is possible in principle to solve these equations and establish relationships between (α,β,γ) and (μ,θ,ϕ), doing so in practice is cumbersome. Not only must we be careful with the signs of the final solutions for the three Euler angles, but they also depend non-linearly on μ, θ and ϕ, complicating the calculation of the 1-form. Therefore, although eq. (<ref>) looks quite intuitive, we seek an alternate method.
Instead, we can use a product of Wigner-D matrices to represent the sphaleron matrix. For a general representation J with matrix dimension 2J+1, we can write the sphaleron matrix as
U^∞_mn(μ, θ, ϕ)=∑_m^' D^J_mm^'(ω_-, -θ, μ) D^J_m^' n(μ, θ, ω_+) ,
with
ω_±=-μ±(ϕ-π/2) .
If we set J=1/2, we recover the standard sphaleron matrix eq. (<ref>). This parameterization makes it easy to calculate the 1-form, since the Euler-angle arguments are linear in μ, θ and ϕ.
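As a numerical cross-check (not part of the original derivation), one can verify for J=1/2 that the product representation with ω_±=-μ±(ϕ-π/2) reproduces the sphaleron matrix built from H^∞; a short Python sketch:

import numpy as np

def D_half(alpha, beta, gamma):
    """Fundamental (J = 1/2) Wigner-D matrix, as parameterized above."""
    cb, sb = np.cos(beta / 2), np.sin(beta / 2)
    return np.array([[np.exp(-1j * (alpha + gamma) / 2) * cb, -np.exp(-1j * (alpha - gamma) / 2) * sb],
                     [np.exp(1j * (alpha - gamma) / 2) * sb,   np.exp(1j * (alpha + gamma) / 2) * cb]])

def U_manton(mu, theta, phi):
    """Sphaleron matrix U^inf built from H^inf of the non-contractible loop."""
    H1 = np.sin(mu) * np.sin(theta) * np.exp(1j * phi)
    H2 = np.exp(-1j * mu) * (np.cos(mu) + 1j * np.sin(mu) * np.cos(theta))
    return np.array([[np.conj(H2), H1], [-np.conj(H1), H2]])

def U_product(mu, theta, phi):
    """Product representation D(omega_-, -theta, mu) D(mu, theta, omega_+)."""
    wm, wp = -mu - (phi - np.pi / 2), -mu + (phi - np.pi / 2)
    return D_half(wm, -theta, mu) @ D_half(mu, theta, wp)

rng = np.random.default_rng(0)
for _ in range(5):
    mu, theta, phi = rng.uniform(0.0, np.pi, 3)
    assert np.allclose(U_manton(mu, theta, phi), U_product(mu, theta, phi))
print("product representation reproduces the J = 1/2 sphaleron matrix")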
A general form of F_a in the representation J can be calculated through the generalization of eq. (<ref>),
i(U^∞)^-1 dU^∞ = ∑_a=1^3 F_a T_a,
where T_a are the SU(2) generators in a general representation.
The 1-form F_a can then be calculated through
F_3 = 1/Tr(T_3^2) Tr[i(U^∞)^-1dU^∞.T_3],
and
F_1 T_1 + F_2 T_2 = i(U^∞)^-1dU^∞ - F_3 T_3.
Using this calculation method, we verify that the F_a are invariant for J=[3/2, 2, 5/2, 3] under the usual SU(2) generator representations <cit.>.
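For the fundamental representation, the projection in eqs. (<ref>)-(<ref>) can be evaluated numerically by finite differences, as in the sketch below (Python); U_manton is the same helper as in the previous sketch (repeated here so the sketch is self-contained), and the chosen angles are arbitrary.

import numpy as np

sigma3 = np.diag([1.0, -1.0])

def U_manton(mu, theta, phi):
    H1 = np.sin(mu) * np.sin(theta) * np.exp(1j * phi)
    H2 = np.exp(-1j * mu) * (np.cos(mu) + 1j * np.sin(mu) * np.cos(theta))
    return np.array([[np.conj(H2), H1], [-np.conj(H1), H2]])

def F3_components(mu, theta, phi, eps=1e-6):
    """Components of F_3 along (dmu, dtheta, dphi):
    F_3 = Tr[i U^{-1} dU T_3] / Tr[T_3^2], with T_3 = sigma_3 / 2."""
    U = U_manton(mu, theta, phi)
    norm = np.trace(sigma3 @ sigma3 / 4).real          # Tr[T_3^2] = 1/2
    comps = []
    for dmu, dth, dph in np.eye(3) * eps:
        dU = (U_manton(mu + dmu, theta + dth, phi + dph) - U) / eps
        one_form = 1j * np.linalg.inv(U) @ dU          # i U^{-1} dU
        comps.append(np.trace(one_form @ (sigma3 / 2)).real / norm)
    return np.array(comps)

print(F3_components(np.pi / 2, 0.7, 1.2))              # F_3 components at the sphaleron point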
§.§ Sphaleron under a SU (2) scalar multiplet extension
In this subsection, we investigate the sphaleron configuration with a general high-dimensional SU(2) scalar extension to the SM. This configuration was previously constructed by Ahriche et al. <cit.>. However, we present a different perspective on the unitary transformation matrix.
Consider N scalar multiplet fields, denoted as Φ^i with i=1,…,N. In <cit.>, the vacuum configurations of Φ^i are parameterized as
Φ^i=v_i h_i(ξ)/√(2)(0,…,1,…,0)^T.
where v_i and h_i represent the scalar field's vev and radial profile function, respectively.
However, since there exists only a single SU(2) sphaleron gauge transformation matrix, U^∞, it is in general not a priori clear that one choice can transform all scalar fields in phase II to the form in eq. (<ref>) that carries no dependence on (μ,θ,ϕ).
To address this question, one should take into account the number of gauge transformation degrees of freedom. For concreteness, we consider the two Higgs doublet model (2HDM). In the 2HDM, we can perform an SU(2)_L×U(1)_Y transformation to a basis where the vev of neutral component of Φ^1 is real while the corresponding neutral component of Φ^2 is complex <cit.>
Φ^1=([ 0; v_1 ]), Φ^2=([ 0; v_2 e^iδ ]),
where v_1 and v_2 are real and positive, and 0≤δ <2π (we denote this phase by δ to avoid confusion with the radial coordinate ξ). If we follow MK's sphaleron configuration, the field configurations for the 2HDM should be written as
Φ^1 =v_1/√(2)h_1(ξ)U^∞(μ,θ,ϕ)([ 0; 1 ]),
Φ^2 =v_2/√(2)h_2(ξ)U^∞(μ,θ,ϕ)([ 0; e^iδ; ])
=v_2/√(2)h_2(ξ)U^∞(μ^',θ^',ϕ^')([ 0; 1; ]) ,
where h_1(ξ) and h_2(ξ) denote the radial profile function of two doublets, respectively. Generally, μ≠μ^',θ≠θ^', ϕ≠ϕ^'. In other words, the presence of a complex phase in the vacuum configuration that cannot be removed by a gauge transformation implies that there does not exist a single U matrix that can rotate both scalar fields to the form in eq. (<ref>) for a common set of NCL parameters.
Only for certain choices of the scalar potential parameters, for which δ=0, can one achieve such a common set. This situation also holds for a general-dimensional electroweak multiplet extension of the SM, whose field configuration should be written as
Φ=v_ϕ/√(2)ϕ(ξ) U^∞(μ^',θ^',ϕ^')([ 0; ⋯; 1; ⋯; 0 ]) .
We seek situations where μ= μ^', θ = θ^', ϕ = ϕ^'. This requires additional constraints on the model parameters. As in the 2HDM, additional constraints should be applied so that the corresponding phase δ vanishes. In section III, we will analyze these constraints carefully.
Assuming these constraints are satisfied, the sphaleron configuration proposed by Ahriche et al. can be directly applied. We provide a summary of their results for the sake of completeness.
In the first and third phases, when μ∈ [-π/2,0] and μ∈ [π,3π/2], the electroweak multiplet's configuration is
Φ=v_ϕ (sin^2 μ + ϕ(ξ) cos^2 μ)/√(2)(0,⋯,1,⋯,0)^T,
In the second phase when μ∈ [0,π], the field configuration is
Φ=v_ϕϕ(ξ)/√(2)(0,⋯,1,⋯,0)^T.
§.§ The validity check of baryon charge
In general, one should ask how the presence of these additional multiplets affects the Chern-Simons number and, thus, the B+L charge associated with the sphaleron configuration. In the case n=2 (with n=2J+1 the multiplet dimension) it has been shown that
the sphaleron baryonic charge is Q_B=1/2, and the leptonic charge of the sphaleron is the same as the baryonic charge, leading to Q_B+L=1 <cit.>. For a general multiplet, we consider the modified transformation matrix
U^∞'=U^∞exp[if(2n-4)].
where U^∞ is the general J-representation matrix we constructed in (<ref>), and f(2n-4) represents a hermitian (2J+1)× (2J+1) matrix containing 2n-4 additional parameters. The additional factor exp[if(2n-4)] does not influence the 1-form F_a defined in (<ref>), but provides a topological identity map for a general-dimensional SU(2) multiplet. With the additional 2n-4 parameters, the map π_3(S^(2n-1)) becomes π_2n-1(S^(2n-1)), and the latter is a non-trivial map (i.e., it does not map to a single point) and can be used to construct the NCL in the configuration space. We leave the existence proof of f(2n-4) to future work.
We now review the computation of Q_B, which can be written as <cit.>
Q_B(sphaleron)= ∫_-∞^t_0 dt ∫ d^3 x (g^2/32π^2 F^a_μνF̃^aμν),
where t_0 represents the sphaleron configuration while t=-∞ represents the vacuum. The dual field tensor is F̃^aμν=1/2ϵ^μνρσF^a_ρσ. Clearly, since eq. (<ref>) leaves the 1-form F_a unchanged from the KL form, the value of Q_B is also unchanged, as we now demonstrate. To proceed with the latter, note that since
F^a_μνF̃^aμν can be written as a total divergence ∂_μ K^μ, with
K^μ =ϵ^μνρσ (F_νρ^aA_σ^a-g/3f^abcA_ν^aA_ρ^bA_σ^c) ,
so that
Q_B(sphaleron)
=g^2/32π^2( ∫ d^3 x K^0 |_t=t_0+∫_-∞^t_0 dt ∫_S K·d⃗S⃗),
where K^0=0 at vacuum when t=-∞, since the gauge field A_i^a=0 at the vacuum configuration eq. (<ref>). If we work out the explicit gauge field component A_i^a in eq. (<ref>), we would see that A_i^a ∼ 1/r, which means that the surface term in eq. (<ref>) does not vanish. The sphaleron baryon charge is gauge invariant from the definition eq. (<ref>), so that we can make a gauge transformation U_charge such that the gauge field A_i^a falls off faster than 1/r. Such transformation can take the following form <cit.>
U_charge=exp(-iΩ(r)r̂·σ⃗), Ω(r)=μtanh(β r),
where μ is the NCL parameter in the sphaleron configuration, β is a large number. Under such gauge transformation, the surface term would vanish <cit.>. The sphaleron baryon charge becomes
Q_B(sphaleron) = g^2/32π^2∫ d^3 x K^0|_t=t_0 + [2μ-sin(2μ)]/(2π) = 1/2 .
where the first term vanishes because, after the gauge transformation, the gauge field falls off faster than 1/r at spatial infinity, and the NCL parameter is μ=π/2 at the sphaleron point. The result eq. (<ref>) implies that the sphaleron baryon charge is independent of the detailed shape of the radial profile function f(ξ) defined in eq. (<ref>). As we will see below, within the electroweak multiplet extension of the SM, the multiplet field biases the gauge field radial profile function to some extent, while keeping Q_B(sphaleron)=1/2.
§.§ Sphaleron Energy and equation of motion
In the following computations, we utilize the KL configuration defined in eq. (<ref>) and eq. (<ref>). For the additional scalar multiplet, its configuration is established in eq. (<ref>) and eq. (<ref>).
It is convenient to define the sphaleron energy relative to that of the vacuum state in the configuration space, viz.
E_sph=E(μ=π/2)-E(μ=-π/2),
where E(μ=π/2) represents the energy at the saddle point of the configuration space, while E(μ=-π/2) is the vacuum-state value. The general potential V(H,Φ) of the Higgs field and the multiplet Φ includes the Higgs potential, the Higgs-Φ portal interaction and the Φ self-interaction terms. However, one needs to pay attention to the fact that different choices can be made for the value of the potential at the origin of field space, and different choices correspond to different sphaleron vacuum energies. Since the relevant quantity is the energy difference eq. (<ref>), these different choices have no physical consequence.
For example, we can write the Higgs field potential in two forms: one is -μ^2 H^† H+λ (H^† H)^2, the other is λ(H^†H-1/2v^2)^2. In the former case, we should carefully consider the value of the sphaleron vacuum state, and the situation becomes more complicated if more scalar fields enter the potential.
In the following analysis, we construct the sphaleron energy for a single scalar multiplet extension of the SM. Meanwhile, it is sufficient to use V(H,Φ) as a general object to demonstrate the main ideas in this section. We will present the explicit interaction terms in the next section. With one multiplet extension of the SM, either term on the right-hand side of eq. (<ref>) can be written as
E =4πΩ/g∫ dξ[ 1/4F^a_ijF^a_ij+1/4f_ijf_ij+ (D_iH)^†(D_iH)
+ (D_iΦ)^†(D_iΦ) + V(H,Φ) ],
When μ=-π/2, the Yang-Mills term, the U(1) term, and the kinetic terms all vanish trivially, since the gauge fields vanish and the scalar fields are in their vacuum states; when μ=π/2, the formal computation of these terms is carried out in Appendix <ref>. Therefore, the only undetermined terms in eq. (<ref>) are V(H,Φ)(ξ,μ=π/2) and V(H,Φ)(ξ,μ=-π/2). Thus, the sphaleron energy can be expressed as
E_sph = E(μ=π/2) - E(μ=-π/2)
= 4πΩ/g∫ dξ[ 1/4F_ij^aF_ij^a(ξ,μ=π/2) + 1/4f_ijf_ij(ξ,μ=π/2) + (D_iH)^†(D_iH)(ξ,μ=π/2) + (D_iΦ)^†(D_iΦ)(ξ,μ=π/2)
+ V(H,Φ)(ξ,μ=π/2) - V(H,Φ)(ξ,μ=-π/2) ].
The fields' equations of motion (EOMs) can be obtained via the Euler-Lagrange equations. In our analysis, there are two scalar fields, H and Φ. Similar to the case in Ahriche et al.'s work <cit.>, the EOMs read
f^''+2/ξ^2(1-f)[f(f-2)+f_3(1+f_3)]+(1-f)(v^2h^2/4Ω^2+αϕ^2)=0,
f_3^''-2/ξ^2[3 f_3+f(f-2)(1+2 f_3)]+(v^2/4Ω^2h^2+βϕ^2)(f_0-f_3)=0,
f_0^''+2/ξ^2(1-f_0)-g^' 2/g^2(v^2/4Ω^2h^2+βϕ^2)(f_0-f_3)=0,
h^''+2/ξh^'-2/3ξ^2h[2(1-f)^2+(f_0-f_3)^2]-1/g^2 v^2Ω^2∂ V[h,ϕ]/∂ h=0,
ϕ^''+2/ξϕ^'-8Ω^2ϕ/3 v_ϕ^2ξ^2[2α(1-f)^2+β (f_0-f_3)^2]-1/g^2 v_ϕ^2Ω^2∂ V[h,ϕ]/∂ϕ=0,
where f^' denotes df/dξ and f^'' denotes d^2 f/dξ^2. In the zero-temperature computation, we set v=Ω=246.22 GeV; in the high-temperature universe, however, v is a function of the temperature, while Ω is just a dimensional constant. The parameters α and β are defined as
α = [J(J+1)-J_3^2]v_ϕ^2/2 Ω^2, β = J_3^2v_ϕ^2/Ω^2.
where J denotes the multiplet isospin and J_3 is its third-component value. Since we put the multiplet's vev in its neutral component, J_3 equals the negative of the hypercharge Y. For the septuplet considered below, with J=3 and J_3=-Y=0, this gives α = 6 v_ϕ^2/Ω^2 and β=0.
The only undefined term in the EOMs (<ref>) is the potential V[h,ϕ], which depends on the BSM model and the type of EWPT.
§.§ Boundary conditions of the sphaleron EOM
In this subsection, we clarify some subtleties regarding the boundary conditions of the sphaleron EOMs. The boundary condition for the scalar fields at spatial infinity is clear: each field should approach its vacuum. At the origin, on the other hand, some subtleties appear, depending on the choice of coordinate system. At this location,
the boundary conditions of the scalar field profile functions share common features with those of the gauge fields. Therefore, we mainly focus on the analysis of the gauge field profile function boundary conditions. Working in spherical-polar coordinates, the usual criteria for the boundary conditions can be summarized as <cit.>
* when ξ→ 0, the field is free of singularity,
* when ξ→∞, the gauge field should vanish to ensure the finiteness of sphaleron energy, where A_i^a=0 is equivalent to the pure gauge state A_i^adx^i ∼∂_i (U^∞)^-1U^∞ up to a gauge transformation.
In this work, we take a different view of the above two criteria and propose the following additional condition:
* when ξ→∞, if we set the boundary condition as f(ξ→∞)=C, the field profile function should converge to the chosen constant value C. In such a case, both the scalar and gauge fields converge to the vacuum state, where we do not expect the profile functions to exhibit any rapid changes around the vacuum configuration.
Let us elaborate on the singularity issue. According to the MK configuration eq. (<ref>), U^∞ is a function of the angular parameters θ, ϕ. When r→ 0, if the field does not vanish, it has a preferred angular direction at the origin, which can lead to a rotational singularity. In the following, we demonstrate that such a singularity is removable.
As we show in section <ref>, a unitary gauge transformation can connect following two field configurations under the zero weak mixing angle scenario
-f(ξ) ∑_a F_a J_a U^∞⟷ [1-f(ξ)] ∑_a F_a J_a,
which means that such a gauge transformation can interchange the boundary conditions at the origin and at spatial infinity. For example, the following two sets of boundary conditions can be converted into each other by such a gauge transformation.
* (a) ξ→ 0, f(ξ)→ 0; ξ→∞, f(ξ)→ 1;
* (b) ξ→ 0, f(ξ)→ 1; ξ→∞, f(ξ)→ 0.
Thus, when ξ→ 0, the freedom-from-singularity condition is not strict, since we can always make such a gauge transformation to remove the singularity. In fact, the two criteria at the beginning of this subsection can be combined into
* sphaleron has finite energy
A finite sphaleron energy requires that (i) the field is free of singularities everywhere, and (ii) the integrand of eq. (<ref>) vanishes when ξ→∞. For (ii), when ξ→∞, the gauge and scalar fields approaching the vacuum make the Yang-Mills, U(1), and kinetic terms vanish, and make the terms V(H,Φ)(ξ,μ=π/2) and V(H,Φ)(ξ,μ=-π/2) equal. As we have shown, both gauge field boundary conditions (a) and (b) can lead to a finite sphaleron energy. In such a situation, we should consider the third, convergence condition proposed in this work, which can be used to distinguish between (a) and (b).
Now, for our specific sphaleron configuration eq. (<ref>), if we only consider the first two criteria, we can have two sets of boundary conditions, which we label the Normal boundary condition and the Inverse boundary condition. For the Normal condition, we have
for ξ→ 0, {f(ξ),f_3(ξ),h(ξ),ϕ(ξ)}→ 0, f_0 (ξ) → 1;
for ξ→∞, {f(ξ),f_3(ξ),h(ξ),ϕ(ξ), f_0(ξ)}→ 1;
While the Inverse Boundary condition reads
for ξ→ 0, {f(ξ),f_3(ξ)}→ 1, {f_0(ξ), h(ξ),ϕ(ξ)}→ 1;
for ξ→∞, {f(ξ),f_3(ξ)}→ 0, {f_0(ξ), h(ξ),ϕ(ξ)}→ 1.
The field profile functions and sphaleron energies of the SM under these two boundary choices are shown in <ref>. The two scenarios' sphaleron energies are very similar, with the Inverse boundary choice giving a slightly larger value than the Normal one. However, the third, convergence condition requires us to choose the Normal boundary condition, since the field profile functions vary rapidly as ξ→∞ in the Inverse boundary scenario.
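To make the boundary-value problem concrete, the following sketch (Python) solves the SM limit of the EOMs (<ref>) under the zero-mixing-angle reduction f_3=f, f_0=1 and ϕ≡0, with the Normal boundary conditions, and evaluates the energy with the standard reduced Klinkhamer-Manton functional; the domain cutoff, initial guess, tolerances and input values are illustrative rather than the settings used for the results reported below.

import numpy as np
from scipy.integrate import solve_bvp, trapezoid

g, v, mH = 0.65, 246.22, 125.25            # illustrative SM inputs in GeV
lam = mH**2 / (2 * v**2)                   # quartic coupling, lambda / g^2 ~ 0.3

def rhs(xi, y):                            # y = (f, f', h, h')
    f, fp, h, hp = y
    fpp = 2.0 / xi**2 * f * (1 - f) * (1 - 2 * f) - 0.25 * h**2 * (1 - f)
    hpp = -2.0 / xi * hp + 2.0 / xi**2 * h * (1 - f)**2 + lam / g**2 * h * (h**2 - 1)
    return np.vstack([fp, fpp, hp, hpp])

def bc(ya, yb):                            # Normal boundary conditions
    return np.array([ya[0], ya[2], yb[0] - 1, yb[2] - 1])

xi = np.linspace(1e-3, 30, 400)
guess = np.vstack([xi / (xi + 2), 2 / (xi + 2)**2, xi / (xi + 2), 2 / (xi + 2)**2])
sol = solve_bvp(rhs, bc, xi, guess, tol=1e-6, max_nodes=20000)

f, fp, h, hp = sol.sol(xi)
integrand = (4 * fp**2 + 8 / xi**2 * f**2 * (1 - f)**2
             + 0.5 * xi**2 * hp**2 + h**2 * (1 - f)**2
             + lam / g**2 * xi**2 / 4 * (h**2 - 1)**2)
E_sph = 4 * np.pi * v / g * trapezoid(integrand, xi)   # should land near the familiar ~9 TeV
print(f"E_sph ~ {E_sph / 1e3:.2f} TeV")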
§ ELECTROWEAK SEPTUPLET EXTENSION TO THE SM: MODEL ANALYSIS
In this section, we will analyze the scalar septuplet extension to the SM under different EWPT scenarios, using the formalism outlined in Section <ref>.
As a prelude, let us review the motivation for focusing on the scalar septuplet.
In general, for an electroweak multiplet having isospin J, J cannot be arbitrarily large. When J≥ 5, the scale at which the gauge-coupling Landau pole occurs decreases to around Λ_landau≤ 10 TeV <cit.>. Furthermore, the partial-wave unitarity condition for tree-level scattering amplitudes constrains J≤ 7/2 for a complex scalar multiplet and J ≤ 4 for a real scalar multiplet <cit.>.
Besides, we are most interested in the neutral component of the multiplet, for which the charge relation J_3+Y=0 needs to be satisfied. Furthermore, in order to avoid stringent dark matter direct detection constraints, we require that the neutral field does not couple to the Z current, which requires Y=0. Since only multiplets with integer J have a J_3=0 component, we focus on this scenario. Such an electroweak multiplet with zero vev can be a dark matter candidate <cit.>.
Thus, the highest dimension for an electroweak multiplet satisfying the unitarity condition and providing a viable dark matter candidate is the septuplet with J=3 <cit.>. Therefore, the sphaleron energy computation with a septuplet extension to the SM is carried out in this study.
As discussed in the introduction, we consider three patterns of EWSB, as shown in <ref>. <ref> (a) shows the one-step EWPT to the pure Higgs phase, where the additional scalar can change the Higgs phase's sphaleron energy through thermal loops. In principle, the thermal loop corrections should also be included when analyzing patterns (b) or (c), since EWSB occurs in the hot early universe. Three-dimensional effective field theory (3dEFT) is a powerful analytic method for organizing the thermal corrections <cit.>. There are recent applications of 3dEFT to the nucleation rate computation <cit.>, whose results show that the thermal corrections bias the zero-temperature four-dimensional model parameters (including the vev) to some extent. However, the zero-temperature analysis can still provide a useful baseline for subsequent T>0 analyses.
In the present work, we mainly aim to provide a methodology for applying the sphaleron formalism to different EWSB patterns, so the zero-temperature analysis is a good and clear starting point. When temperature effects are included, the same analysis strategy can be applied to the thermal potential.
For our current zero-temperature analysis, we are more interested in cases (b) and (c). We label the vevs of the scalar potential stationary points in <ref> as X(v_x,0), Y(0,v_y) and Z(v_zx,v_zy). In general, v_zx≠ v_x and v_zy≠ v_y. Furthermore, when we parameterize the scalar fields and perform a model analysis, we usually regard the field vevs as input parameters. Thus, for patterns (b) and (c) we cannot use a single model analysis strategy, since the required input vevs and model parameter relationships may differ between EWSB patterns. We will present the two analysis strategies separately after introducing the model.
§.§ The Model
The general potential of the SM Higgs H and another SU(2) multiplet Φ can be written as <cit.>
V= M_A^2(Φ^†Φ)+{M_B^2(ΦΦ)_0+ h.c. }
-μ^2 H^† H+λ(H^† H)^2+λ_1(H^† H)(Φ^†Φ)
+λ_2((H H)_1(ΦΦ)_1)_0+[λ_3(H H)_0(ΦΦ)_0+ h.c. ]
+V_self(Φ,Φ),
with
V_self(Φ,Φ) = ∑_k=0^2 Jκ_k((ΦΦ)_k(Φ Φ)_k)_0
+∑_k=0^2 J{κ_k^'((ΦΦ)_k(ΦΦ)_k)_0 .
. +κ_k^''((ΦΦ)_k(ΦΦ)_k)_0+ h.c. }.
where J is the multiplet isospin index, and J=3 is the septuplet case. The scalar multiplet self-interaction potential V_self(Φ,Φ) may be important in solving the core-cusp problem <cit.>. Here H̅ and Φ̅ are the complex conjugate representations of H and Φ. As pointed out in <cit.>, the terms (ΦΦ)_1, (ΦΦ)_3 and (ΦΦ)_5 vanish due to the properties of the Clebsch-Gordan coefficients. Therefore, for the self-interaction potential, only terms with k∈[0,2,4,6] have non-zero contributions. Furthermore, only the terms with k=0,2 are independent for our septuplet example <cit.>, which simplifies our model analysis.
§.§ One-step EWPT to the mixed phase
In this pattern, we parameterize the general complex Higgs field (H), septuplet field (Φ) and their complex conjugate representation (H̅, Φ̅) as
H=(
[ ω^+; 1/√(2)(v+h+i π); ]);H̅=(
[ 1/√(2)(v+h-i π); -ω^-; ]),
Φ =(
[ ϕ _3,3; ϕ _3,2; ϕ _3,1; 1/√(2)(v_ϕ+ϕ+i π_ϕ); ϕ _3,-1; ϕ _3,-2; ϕ _3,-3; ]);Φ̅=(
[ ϕ _3,-3^*; -ϕ _3,-2^*; ϕ _3,-1^*; - 1/√(2)(v_ϕ+ϕ-i π_ϕ); ϕ _3,1^*; -ϕ _3,2^*; ϕ _3,3^*; ]).
where v and v_ϕ are the vevs of the Higgs field and the septuplet field, respectively. We put the septuplet's vev into its neutral component, since the neutral field is unconstrained by Z-current experiments.
As discussed in Section <ref>, additional constraints need to be applied if we put the Higgs and septuplet vevs both into real neutral components. This can be fulfilled by requiring all the fluctuation fields (inside the Higgs or the septuplet) to have non-negative mass eigenvalues. Before that, an important constraint comes from the tadpole condition
∂ V/∂ x_i|_∀ x_i=0=0 ,
where x_i∈ [h,π,ω^±,ϕ,π_ϕ,ϕ_3,j,ϕ_3,j^*]; j denotes the various subscripts that appear in Φ; and ∀ x_i=0 means set all the field fluctuations equal to zero after the partial derivative. Subsequently, we can obtain five parameter constraints
Im(M_B^2)=Im(λ_3)=0,
Im(κ_0^'')-2Im(κ_0^')+4(Im(κ_2^'')-2Im(κ_2^'))/3√(5)=0,
μ ^2= λ v^2+λ_13v_ϕ^2,
M_A^2-2/√(7)Re(M_B^2)=-λ_sv_ϕ^2-λ_13v^2,
where the first three constraints actually arise from one condition: ∂ V/∂π =0. We convert this single tadpole constraint into three separate constraints, which can eliminate the mixing between h and π_ϕ and simplify our analysis. In addition, λ_13 and λ_s are two combined parameters
λ_13=1/2λ_1-1/√(14)λ_3,
λ_s= +1/7[κ_0+2Re(κ_0^')-2Re(κ_0^'')]
+4/21√(5)[κ_2+2Re(κ_2^')-2Re(κ_2^'')].
As discussed in Ref. <cit.>, λ_13 enters the DM annihilation and direct detection rates, while λ_s characterizes DM self-interactions.
The total potential can be expressed as a sum of quadratic forms built from mass matrices:
V(H,Φ)= 1/2([ h ϕ ]) H_2× 2([ h; ϕ ]) + 1/2([ π π_ϕ ]) Pi_2× 2([ π; π_ϕ ])
+ ([ ω^+ ϕ_3,1 ϕ_3,-1^* ]) C1_3× 3([ ω^-; ϕ_3,1^*; ϕ_3,-1 ])
+ ([ ϕ_3,2 ϕ_3,-2^* ]) C2_2× 2([ ϕ_3,2^*; ϕ_3,-2 ])
+([ ϕ_3,3 ϕ_3,-3^* ]) C3_2× 2([ ϕ_3,3^*; ϕ_3,-3 ]).
where the explicit mass matrix expressions are given in Appendix <ref>. As expected, after computing the matrix eigenvalues we observe a massless pseudo-scalar particle and a massless charged Higgs particle. The matrix Pi_2× 2 has one non-zero eigenvalue and C1_3× 3 has two non-zero eigenvalues.
Let us now enumerate the constraints that we need to apply. If this number plus the number of input parameters is less than or equal to the total number of parameter degrees of freedom, we are free to proceed. On the one hand, requiring non-negative mass-matrix eigenvalues gives 9 constraints: 1 from Pi_2× 2 and 2 from each of the other four matrices. Together with the 5 tadpole constraints, we have 14 parameter constraints. On the other hand, the model of eq. (<ref>) has 19 degrees of freedom (note that some parameters are complex, and we also count the two SM parameters). In total we can therefore choose 5 independent input parameters for this model. We take them to be v, v_ϕ, λ, λ_13 and λ_s, and they will appear in our later potential analysis. After imposing these constraints, we are able to set μ=μ^', θ=θ^', ϕ=ϕ^' in eq. (<ref>).
Now we can compute the sphaleron energy. According to the sphaleron Higgs and multiplet configurations, eqs. (<ref>) and (<ref>), we set all the fluctuation fields in eqs. (<ref>) and (<ref>) to zero. Then, making the replacement
v→ h[ξ]v, v_ϕ→ϕ[ξ]v_ϕ,
we obtain the final potential formula for the one-step EWPT as
V_One(ξ,μ=π/2)=
1/2 v_ϕ^2 ϕ[ξ]^2 [λ_13 v^2 h[ξ]^2-(v_ϕ^2 λ_s+λ_13 v^2)]
+1/4 v^2 h[ξ]^2 [λ v^2 h[ξ]^2-2(λ v^2+λ_13 v_ϕ^2)]
+1/4 v_ϕ^4 λ_s ϕ[ξ]^4 ,
where V_One denotes the potential for the one-step EWPT to the mixed phase. The vacuum potential reads
V_One(ξ,μ=-π/2)=-1/4(v_ϕ^4 λ _s+λ v^4+2 λ _13 v_ϕ^2 v^2) .
Thus far, we have completed the last task needed to solve the EOMs and compute the sphaleron energy. Equations (<ref>) and (<ref>), multiplied by the normalization factor ξ^2/g^2 Ω^4, constitute the potential that appears in eq. (<ref>). However, for the potential term V[h(ξ),ϕ(ξ)] that appears in the EOMs (<ref>), we should use eq. (<ref>) directly, without any such normalization factors.
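As an illustration of how these pieces fit together, the following minimal Python sketch (not the authors' code) evaluates V_One along the sphaleron path and checks that at h[ξ]=ϕ[ξ]=1 it reaches the vacuum value above; the numerical parameter values are illustrative assumptions only.

```python
# Minimal sketch: evaluate the one-step potential V_One for given profile values
# h(xi), phi(xi).  All parameter values below are illustrative assumptions.
import numpy as np

def V_one(h, phi, v, vphi, lam, lam13, lam_s):
    """One-step EWPT potential evaluated on the sphaleron path (mu = pi/2)."""
    t1 = 0.5 * vphi**2 * phi**2 * (lam13 * v**2 * h**2 - (vphi**2 * lam_s + lam13 * v**2))
    t2 = 0.25 * v**2 * h**2 * (lam * v**2 * h**2 - 2.0 * (lam * v**2 + lam13 * vphi**2))
    t3 = 0.25 * vphi**4 * lam_s * phi**4
    return t1 + t2 + t3

def V_one_vacuum(v, vphi, lam, lam13, lam_s):
    """Vacuum value, V_One(mu = -pi/2)."""
    return -0.25 * (vphi**4 * lam_s + lam * v**4 + 2.0 * lam13 * vphi**2 * v**2)

# Consistency check: at h = phi = 1 the path potential reaches the vacuum value.
pars = dict(v=246.22, vphi=1.0, lam=0.13, lam13=0.05, lam_s=0.005)  # assumed inputs
assert np.isclose(V_one(1.0, 1.0, **pars), V_one_vacuum(**pars))
```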
§.§ Two-step EWPT
For this EWPT pattern, as demonstrated previously, the analysis method differs from the one-step case, since the v and v_ϕ computed for the one-step scenario no longer correspond to the true vevs. However, we will continue to use v and v_ϕ to denote the Higgs and septuplet vevs in this subsection, keeping in mind that they bear no relationship to their one-step values.
First, we expand the Higgs and septuplet fields around their extremal scalar field configuration
H=h/√(2)( [ 0; 1 ]), Φ=ϕ/√(2)( [ 0; 0; 0; 1; 0; 0; 0 ]),
Then, substituting eq. (<ref>) into eq. (<ref>), we obtain a general potential expression V_general. Second, applying the tadpole criteria
∂ V_general/∂ h=∂ V_general/∂ϕ=0,
we obtain nine extremal points, which respect a ℤ_2 symmetry. These nine extremal points can be visualized by mirroring <ref> (c) into all four quadrants. The vevs of the points X, Y and Z of <ref> (c) and their Hessian determinants are summarized in Table <ref>, where we have defined a new set of parameters
v=μ/√(λ), v_ϕ=√(2 √(7)M_B^2-7 M_A^2)/√(7λ _s),
v_z^2=λ_s (λ_13 v_ϕ^2 - λ v^2)/(λ_13^2-λλ_s),
v_zϕ^2=λ (λ_13 v^2 - v_ϕ^2 λ_s)/(λ_13^2 - λλ_s),
V_z=v_ϕ^4 λ _s+λ v^4-2 λ _13 v_ϕ^2 v^2,
where the definition of λ_13 is the same as in eq. (<ref>). We note that the relationships between the vevs and the model parameters differ from those of the one-step EWPT to the mixed phase, eq. (<ref>). In the one-step EWPT, v and v_ϕ should be interpreted as the v_z and v_zϕ shown in Table <ref>. One can verify that, using eq. (<ref>), if we insert the expressions for v and v_ϕ into v_z^2 and v_zϕ^2, then v_z^2 and v_zϕ^2 satisfy the following relations
μ^2=λ v_z^2 +λ_13 v_zϕ^2,
M_A^2-2Re(M_B^2)/√(7)=-λ_13 v_z^2-λ_s v_zϕ^2 .
These are just the last two relations in eq. (<ref>), so the two analysis strategies are consistent with each other. Let us elaborate further on the mass matrices in the two-step EWPT. The calculation parallels the one-step scenario, starting from the general field parameterizations in eqs. (<ref>) and (<ref>); the difference lies in the relationship between the vevs and the model parameters. We therefore obtain the same set of mass matrices as in eq. (<ref>), but with different parameter relationships.
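The consistency between the two strategies noted above can also be checked symbolically; the short sympy sketch below (an illustration, not part of the original analysis) verifies that the mixed-point vevs reproduce the two tadpole relations.

```python
# Symbolic check that the mixed-point vevs v_z, v_zphi of the table satisfy the
# last two tadpole relations of the one-step analysis.
import sympy as sp

lam, lam13, lam_s, v, vphi = sp.symbols('lambda lambda_13 lambda_s v v_phi', positive=True)

vz2  = lam_s * (lam13 * vphi**2 - lam * v**2) / (lam13**2 - lam * lam_s)
vzp2 = lam   * (lam13 * v**2 - lam_s * vphi**2) / (lam13**2 - lam * lam_s)

# Using mu^2 = lambda v^2 and M_A^2 - 2 Re(M_B^2)/sqrt(7) = -lambda_s v_phi^2:
print(sp.simplify(lam * vz2 + lam13 * vzp2 - lam * v**2))          # -> 0
print(sp.simplify(lam13 * vz2 + lam_s * vzp2 - lam_s * vphi**2))   # -> 0
```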
Returning to our potential analysis, we can express the potential as
V_general =1/4[ϕ^2 (2 h^2 λ _13-2 v_ϕ^ 2 λ _s)
+h^2 (h^2 λ -2 λ v^2)+ϕ^4 λ _s].
For purposes of deriving and solving the EOM and computing the sphaleron energy, we need to make the substitution h→ h[ξ]v, ϕ→ϕ[ξ]v_ϕ. Then the potential reads
V_Two(ξ,μ=π/2)= 1/4h[ξ]^2 v^2 (h[ξ]^2 v^2λ -2 λ v^2)+1/4ϕ[ξ]^4 v_ϕ^4λ _s
+1/2ϕ[ξ]^2 v_ϕ^2 ( h[ξ]^2 v^2λ _13- v_ϕ^2 λ _s).
where V_Two denotes the potential in the two-step EWPT scenario; it plays the same role as eq. (<ref>) in the sphaleron energy computation.
The vacuum potential in two-step EWPT reads
V_Two(ξ,μ=-π/2)=-1/4(v_ϕ^4 λ _s+λ v^4-2 λ _13 v_ϕ^2 v^2) .
To realize a two-step EWPT, additional parameter constraints must be imposed. As shown in <ref> (c), we require that the universe evolves along O → Y → X. The requirements are:
1. O must be a secondary local minimum, which requires
λ _s>0,
2. V(Y)>V(X), this implies
λ _s v_ϕ^4< λ v^4,
3. Hess(X)>0, which requires
λ _13 v^2-v_ϕ^2 λ _s>0,
4. Hess(Y)>0, this implies
λ _13 v_ϕ^2-λ v^2>0,
5. If we require that the mixed point M exists, we need to solve the equations:
v_z^2=λ_s (λ_13 v_ϕ^2 - λ v^2)/(λ_13^2-λλ_s),
v_zϕ^2=λ (λ_13 v^2 - v_ϕ^2 λ_s)/(λ_13^2 - λλ_s),
with the constraints eq. (<ref>) and eq. (<ref>), the conditions v_z^2>0 and v_zϕ^2>0 require
λ _13^2-λλ _s>0 .
One finds that Hess(M)<0 under all of the above criteria, so the mixed point is a saddle point rather than a local minimum.
These constraints are not all independent: constraint eq. (<ref>) can be derived from eqs. (<ref>) and (<ref>), and the latter two conditions are the crucial ones. Overall, we again have five input parameters: v, v_ϕ, λ, λ_s and λ_13. The parameter ranges that satisfy the two-step EWPT are shown in <ref>. In this plot, the lower bound is set by eq. (<ref>), while the right vertical bound is set by eq. (<ref>). The smaller the value of λ_s, the larger the unconstrained parameter region. At the end of the first step, constrained by the effective portal coupling, the septuplet vev cannot be arbitrarily small.
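A minimal sketch of such a constraint check is given below; it simply tests conditions 1-5 for a single input point, and the numerical values used are assumptions for illustration, not points from the actual scan.

```python
# Illustrative check of the two-step EWPT conditions 1-5 for one parameter point.
def two_step_ok(v, vphi, lam, lam13, lam_s):
    c1 = lam_s > 0                            # O is a (secondary) local minimum
    c2 = lam_s * vphi**4 < lam * v**4         # V(Y) > V(X)
    c3 = lam13 * v**2 - lam_s * vphi**2 > 0   # Hess(X) > 0
    c4 = lam13 * vphi**2 - lam * v**2 > 0     # Hess(Y) > 0
    c5 = lam13**2 - lam * lam_s > 0           # v_z^2, v_zphi^2 > 0 for the mixed point
    return all((c1, c2, c3, c4, c5))

# Assumed numbers: a large septuplet vev at point Y with a small effective quartic.
print(two_step_ok(v=246.22, vphi=450.0, lam=0.13, lam13=0.05, lam_s=0.01))  # -> True
```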
§ SPHALERON ENERGY WITH DIFFERENT EWPT SCENARIOS
The formal sphaleron energy can be defined as <cit.>
E_sph=B·4πΩ/g
where Ω=246.22 GeV and g is the weak coupling constant. The sphaleron B value is the dimensionless integral appearing in eq. (<ref>). In the SM it is a function of λ/g^2; in our case many BSM parameters, such as λ_13 and λ_s, also enter, and since we include the U(1) effect it depends on g^' as well. In the SM, where the EWPT follows pattern (a) in <ref>, the sphaleron B=1.900506. We will compute the sphaleron B value for patterns (b) and (c) in the following subsections.
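For orientation, the following small sketch converts a B value into an energy via the relation above; the value adopted for g is an assumption (g ≈ 0.65), while Ω and B are taken from the text.

```python
# Convert a dimensionless sphaleron B value into an energy, E_sph = B * 4*pi*Omega / g.
import math

Omega = 246.22     # GeV
g     = 0.65       # assumed SU(2)_L gauge coupling
B_SM  = 1.900506   # SM value quoted in the text

E_sph = B_SM * 4.0 * math.pi * Omega / g
print(f"E_sph ~ {E_sph/1e3:.1f} TeV")   # roughly 9 TeV, the familiar SM sphaleron scale
```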
§.§ One-Step EWPT to the Mixed Phase
In this scenario, both the Higgs field and the septuplet field obtain vevs after the phase transition, and v_ϕ is constrained by the ρ parameter. For multiple electroweak scalars, the ρ parameter is defined as
ρ=∑_i[J_i(J_i+1)-Y_i^2] v_i^2/∑_i 2Y_i^2v_i^2,
where J_i is the total isospin and Y_i denotes the hypercharge. In our case we have two scalar fields: the Higgs field with J=1/2 and Y=1/2, and the additional multiplet with isospin J and Y=0. The ρ parameter is then given by
ρ=1+2 J (J+1)v_ϕ^2/v^2,
so the larger the multiplet representation, the stronger the constraint imposed on v_ϕ. According to the latest ρ-parameter determination <cit.>, ρ=1.00038 ± 0.00020. At the 95% confidence level, v_ϕ is constrained to
v_ϕ^2 ≲ 23.401/(J(J+1)) GeV^2,
so for our septuplet case, we are safe to take v_ϕ=1 GeV.
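The bound can be reproduced with a few lines of arithmetic; in the sketch below the "95% CL = central value + 2σ" prescription is our assumption, so the resulting number only approximately matches the quoted bound.

```python
# Rho-parameter bound on the multiplet vev: rho = 1 + 2 J(J+1) v_phi^2 / v^2,
# with rho = 1.00038 +/- 0.00020.
import math

v, J = 246.22, 3.0                    # GeV, septuplet isospin
rho_max = 1.00038 + 2 * 0.00020       # assumed ~95% CL upper value

vphi2_max = (rho_max - 1.0) * v**2 / (2.0 * J * (J + 1.0))
print(f"v_phi^2 < {vphi2_max:.2f} GeV^2, i.e. v_phi < {math.sqrt(vphi2_max):.2f} GeV")
```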
The computation of the sphaleron energy can be separated into two parts: (i) obtain the field profile solutions from the EOMs (<ref>); (ii) insert the field solutions into the sphaleron energy expression, eq. (<ref>). For the first step, we present the profile function solutions in <ref> (left panel) for the parameter choice λ_13=0.05 and λ_s=0.005. The profile solutions converge well as ξ→∞. The sphaleron energy for this parameter choice is B=1.900535, which is quite close to the SM B value. In addition, we perform a parameter scan of the sphaleron energy, with the results shown in <ref>. Since v_ϕ is overwhelmingly small, the sphaleron energy differs little from the pure SM case. Nevertheless, we observe that the effective multiplet self-coupling λ_s has almost no influence on the sphaleron energy, whereas the larger the effective portal coupling λ_13, the greater the sphaleron energy. This behaviour can be inferred from the one-step potential, eq. (<ref>), in the limit of small v_ϕ. Therefore, in the one-step EWPT scenario with a single scalar multiplet extension, the additional multiplet, constrained by the ρ parameter, has a negligible influence on the SM sphaleron energy.
We would like to make some comments about the Georgi-Machacek model <cit.>, in which, for more than one additional EW multiplet, the vevs of the new multiplets can be large while the ρ-parameter constraint is still satisfied. The formalism to analyze this case is the same as discussed here, but includes one additional field vev. We might anticipate a significantly different result for the sphaleron energy in this case. We defer a detailed study to future work.
§.§ Two-step EWPT
Since the modification of the sphaleron energy in the one-step case is very small, we are more interested in the two-step EWPT scenario. As shown in <ref> (c), the first step is C1: O→ Y and the second step is C2: Y→ X. The multiplet vev at point Y is unconstrained, since the ρ parameter is measured at point X in today's universe, where the multiplet vev equals zero. Thus, the sphaleron energy at point Y can reach a sizable value. Parallel to the one-step EWPT analysis, we show the profile function solutions in the right part of <ref> for the same values of λ_13 and λ_s but a larger choice of v_ϕ. The sphaleron energy at Y obtained from a model parameter scan is presented in <ref>.
In <ref>, the intersection of the orange region with the right-hand side of the vertical dashed λ_13 line represents the unconstrained sphaleron energy domain. From eq. (<ref>), we observe that the effective portal coupling λ_13 does not affect the potential V_Two when v=0 at point Y, so λ_13 does not alter the sphaleron energy at Y. In contrast, the greater the value of λ_s, the higher the sphaleron energy. Therefore, the dependence of the sphaleron energy on λ_13 and λ_s in the two-step EWPT differs from the one-step case. This difference can be traced to the different sphaleron potentials in the one-step, eq. (<ref>), and two-step, eq. (<ref>), scenarios.
It is interesting to observe that there is a sizeable orange region with sphaleron energy greater than the SM value. If this pattern persists at T>0; if our universe undergoes a first order EWPT during the first step (C1); and if there exists sufficient BSM CPV to create the baryon asymmetry, this asymmetry can be well preserved at point Y.
For demonstration in the real triplet extension, see Refs. <cit.>.
In general, the second step C2 to the Higgs phase could either preserve or erase this baryon asymmetry. If the second step is first order and if the sphaleron energy at point X is sufficiently large, then this asymmetry can be preserved in the final Higgs phase. A complete analysis of this possibility for the T>0 general electroweak multiplet case will appear in a future study.
Finally, we comment on model constraints implied by dark matter phenomenology.
The work <cit.> studies such constraints: the effective Higgs-septuplet portal coupling λ_eff should be very small in order to satisfy present direct detection limits. In our work, the effective portal parameter is λ_13=λ_eff/2. In our parameter scans, we take λ_13 smaller than 0.1 in both <ref> and <ref>. We have verified that our parameter choices are not constrained by the latest dark matter direct detection limits <cit.>.
§ CONCLUSION
Determining the origin of the cosmic baryon asymmetry remains an important research challenge at the interface of particle and nuclear physics with cosmology. Among various possible baryogenesis mechanisms, we focus on electroweak baryogenesis, which naturally connects with the Higgs mechanism. While the nature of EWSB and the strength of CP violation in the SM do not allow for successful EWBG, it can occur in a variety of BSM scenarios. Of particular interest for our study is the occurrence of a first-order electroweak phase transition and the computation of the corresponding broken-phase sphaleron rate.
We make a detailed study of the sphaleron formalism and compute the sphaleron energy under different EWPT scenarios. For concreteness we have focused on an extension of the SM scalar sector with an electroweak septuplet, whose neutral component can contribute to the dark matter relic density.
For the sphaleron formalism, we summarize the different sphaleron configurations established by Manton and Klinkhamer (MK), Klinkhamer and Laterveer (KL), and others. Furthermore, we show that the MK and KL configurations are equivalent up to a unitary transformation. In the SU(2) multiplet extension of the SM, a proof of the invariance of the 1-form F_a with respect to the representation dimension is of crucial importance; it is based on the construction of the higher-dimensional SU(2) sphaleron transformation matrix. Previously, Ahriche et al. analysed the sphaleron in the SU(2) multiplet case without giving a proof of the invariance of F_a. In this work, we establish the SU(2) transformation matrix in a general representation and demonstrate the invariance of F_a. Besides this, we discuss the restrictions arising in the presence of more than one scalar field multiplet; the topology pertaining to higher-dimensional (beyond doublet) multiplets; and the equations of motion and the choice of boundary conditions. Our formal considerations help clarify some points that appeared in the previous literature. For the multi-step EWPT, we analyse the parameter constraints of the multiplet extension in the one-step EWPT to the mixed phase and in the two-step EWPT scenario separately. In both scenarios we have five input parameters: the Higgs and septuplet vevs, the Higgs and septuplet effective self-couplings, and the Higgs-septuplet effective portal coupling. In the one-step EWPT to the mixed phase, the additional multiplet vev, constrained by the ρ parameter, cannot be too large, and its effect on the SM sphaleron energy is negligible. On the other hand, for the two-step EWPT, the multiplet vev at the end of the first step is unconstrained and can therefore lead to a large enhancement of the sphaleron energy. If our universe underwent a first-order EWPT during the first step, the baryon asymmetry can be well preserved during the first step of the two-step EWPT.
In the future, numerous studies can be built on this work, for instance the computation of the sphaleron energy including thermal corrections and the study of the one-step EWPT in the Georgi-Machacek model.
M.J. Ramsey-Musolf, Y. Wu, and W. Zhang were supported in part by the National Natural Science Foundation of China under grant no. 11975150 and by the Ministry of Science and Technology of China under grant no. WQ20183100522. M. J. Ramsey-Musolf also gratefully acknowledges support under the Double First Class Plan of the Shanghai Jiao Tong University and sponsorship from Shanghai Tang Junyuan Education Foundation.
§ OTHER SPHALERON CONFIGURATIONS
§.§ AKY configuration
Under the general spherically symmetric ansatz, the gauge field configuration is written as <cit.>
A_j^a(x) =1/g[D(r)ϵ_jamx_m +B(r) (r^2 δ_ja-x_j x_a)
+ C(r) x_j x_a ],
The Higgs field is written as
H(x)=v/√(2)[H(r)+iK(r) σ⃗·r⃗̂⃗/2] ([ 0; 1 ]).
where D(r),B(r), C(r), H(r) and K(r)
are all radial functions. Usually, the radial gauge condition sets C(r)=0.
§.§ KKB configuration
Starting from a set of orthonormal vectors <cit.>
𝐮_1(ϕ)=(cosϕ, sinϕ, 0),
𝐮_2(ϕ)=(0,0,1) ,
𝐮_3(ϕ)=(sinϕ,-cosϕ, 0) ,
The fields are expanded as follows
A_i^a(𝐫)=u_j^i(ϕ) u_k^a(ϕ) w_j^k(ρ, z),
a_i(𝐫)=u_j^i(ϕ) a_j(ρ, z),
H(𝐫)=τ^i u_j^i(ϕ) h_j(ρ, z) v/√(2)([ 0; 1 ]) .
where we change the field labels to make them consistent with this study's convention.
§ SPHALERON ENERGY COMPUTATION
In this appendix, we provide detailed calculations of the sphaleron energy for the Yang-Mills term and the kinetic term in a general SU(2) multiplet dimension representation.
§.§ Yang-Mills term
We consider the SU(2) Yang-Mills term computation under a general representation.
F^aijF^a_ij =F^aijF^b_ij 1/(2S(R)) Tr[{T^a,T^b}]
=1/S(R) Tr[F^aijT^a· F^b_ijT^b],
where S(R) is the Dynkin index, and we use Tr[{T^a,T^b}]=2S(R)δ^ab.
Since
F^a_ijT^a=∂_iA^a_jT^a-∂_jA^a_iT^a+gϵ^abcA_i^bA_j^cT^a,
and
ϵ^abcA_i^bA_j^cT^a =ϵ^bcaT^aA_i^bA_j^c,
=1/i[T^b,T^c]A_i^bA_j^c,
=1/i[A_i^bT^bA_j^cT^c-A_j^cT^cA_i^bT^b] .
where we have used the fact that [T^b,T^c]=iϵ^bcaT^a for any SU(2) multiplet. We can thus deduce that the Yang-Mills term takes the same form for different SU(2) multiplet representations.
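The two ingredients used in this argument, the commutation relation and the Dynkin-index normalization, can be verified numerically for any spin-J representation; the following sketch (a cross-check, not part of the formal proof) does so for the septuplet, for which S(R)=J(J+1)(2J+1)/3.

```python
# Numerical check of the spin-J SU(2) generator relations used above.
import itertools
import numpy as np

def su2_generators(J):
    dim = int(2 * J + 1)
    m = np.array([J - k for k in range(dim)])            # m = J, J-1, ..., -J
    Jz = np.diag(m)
    Jp = np.zeros((dim, dim))
    for k in range(1, dim):                               # <m+1| J+ |m>
        mm = m[k]
        Jp[k - 1, k] = np.sqrt(J * (J + 1) - mm * (mm + 1))
    Jm = Jp.T
    return [0.5 * (Jp + Jm), -0.5j * (Jp - Jm), Jz]        # Jx, Jy, Jz

J = 3                                                     # septuplet
T = su2_generators(J)
eps = np.zeros((3, 3, 3))
for a, b, c in itertools.permutations(range(3)):
    eps[a, b, c] = np.linalg.det(np.eye(3)[[a, b, c]])    # permutation sign

# [T^b, T^c] = i eps^{bca} T^a
for b in range(3):
    for c in range(3):
        comm = T[b] @ T[c] - T[c] @ T[b]
        assert np.allclose(comm, 1j * sum(eps[b, c, a] * T[a] for a in range(3)))

# Tr{T^a, T^b} = 2 S(R) delta^ab  with  S(R) = J(J+1)(2J+1)/3
S_R = J * (J + 1) * (2 * J + 1) / 3.0
for a in range(3):
    for b in range(3):
        tr = np.trace(T[a] @ T[b] + T[b] @ T[a])
        assert np.isclose(tr, 2 * S_R * (a == b))
print("spin-J generator relations verified for J =", J)
```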
§.§ Kinetic term
For a general SU(2) multiplet, its covariant derivative reads
(D_iΦ)=∂_i Φ - igA^a_i J^aΦ - ig^'a_iXΦ ,
Since our sphaleron construction occurs in spherical coordinates, the index i∈ [r, θ, ϕ]. The kinetic term in the second phase of KL sphaleron configuration reads
(D_iΦ)^†(D_iΦ) =(∂_i Φ)^†(∂_i Φ)+g^2 ⟨Φ^†|J^bJ^a|Φ⟩ A^a_i A^b_i + g^' 2⟨Φ^†|X^2|Φ⟩ a^ia^i+2gg^'A_i^3a_iJ^3X Φ^†Φ ,
=(∂_i Φ)^†(∂_i Φ)+h^2g^2[v^2/4(J(J+1)-(J^3)^2)A_μ^+A^μ-+v^2/2(J^3)^2A_μ^3A^μ 3]
+g^' 2(J^3)^2h^2v^2/2(a_r^2+a_θ^2+a_ϕ^2)
-g g^' (J^3)^2 v^2 h^2 (a_θ A_θ^3/r^2+a_θ A_ϕ^3/(r sin (θ ))^2) ,
where
A_μ^+A^μ- = [(A_θ^1)^2+(A_θ^2)^2]/r^2+[(A_ϕ^1)^2+(A_ϕ^2)^2]/(r^2 sin^2θ) ,
A_μ^3A^μ3 =(A_θ^3)^2/r^2+(A_ϕ^3)^2/(r^2 sin^2θ) .
Here we need the explicit expressions of A_i^a, where i∈[r,θ,ϕ] labels the spherical coordinates and a∈[1,2,3] labels the SU(2) generators; the A_i^a can be computed from eq. (<ref>).
§.§ General energy form
The computation of the U(1) field sphaleron energy is straightforward, so we do not list the result here. Finally, we scale the sphaleron energy in the following way <cit.>:
∫ d^3 x (1/4F_ij^aF_ij^a + 1/4f_ijf_ij+ (D_iΦ)^†(D_iΦ)) →4 πΩ/g∫ dξ (1/4F_ij^aF_ij^a(ξ) + 1/4f_ijf_ij(ξ)+ (D_iΦ)^†(D_iΦ)(ξ)),
where we add the dimensionless radial parameter (ξ) to each component to label the differences before and after the transformation. When μ=π/2, the formal expression reads
1/4F_ij^aF_ij^a(ξ,μ=π/2) = sin ^2μ(8/3 f^' 2+4/3 f_3^' 2)+8/ξ^2sin ^4μ{2/3 f_3^2(1-f)^2+1/3{f(2-f)-f_3}^2} ,
1/4f_ijf_ij(ξ,μ=π/2) =4/3(g/g^')^2{sin ^2μ f_0^' 2+2/ξ^2sin ^4μ(1-f_0)^2} ,
(D_iΦ)^†(D_iΦ)(ξ,μ=π/2) =v^2_2/Ω^2{1/2ξ^2ϕ^' 2+4/3sin ^2μϕ^2{(J(J+1)-J_3^2)(1-f)^2+J_3^2(f_0-f_3)^2}} .
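For completeness, a minimal Python sketch of how these μ=π/2 densities could be assembled from tabulated profile functions is given below; the dummy profiles, coupling values and multiplet parameters are placeholder assumptions, and the potential term discussed in the text must still be added before the ξ-integral can be read as the B value.

```python
# Illustrative assembly of the mu = pi/2 energy densities from tabulated profiles.
import numpy as np

def yang_mills_density(xi, f, fp, f3, f3p):
    return ((8/3) * fp**2 + (4/3) * f3p**2
            + (8 / xi**2) * ((2/3) * f3**2 * (1 - f)**2
                             + (1/3) * (f * (2 - f) - f3)**2))

def u1_density(xi, f0, f0p, g, gp):
    return (4/3) * (g / gp)**2 * (f0p**2 + (2 / xi**2) * (1 - f0)**2)

def kinetic_density(xi, f, f0, f3, phi, phip, v2, Omega, J, J3):
    return (v2**2 / Omega**2) * (0.5 * xi**2 * phip**2
            + (4/3) * phi**2 * ((J * (J + 1) - J3**2) * (1 - f)**2
                                + J3**2 * (f0 - f3)**2))

xi = np.linspace(1e-3, 30.0, 3000)
f = f3 = f0 = phi = 1.0 - np.exp(-xi)          # dummy profiles with the right boundary trend
fp = f3p = f0p = phip = np.exp(-xi)
density = (yang_mills_density(xi, f, fp, f3, f3p)
           + u1_density(xi, f0, f0p, g=0.65, gp=0.35)
           + kinetic_density(xi, f, f0, f3, phi, phip, v2=1.0, Omega=246.22, J=3, J3=0))
# Trapezoidal integral of the gauge and kinetic pieces only (potential term not included).
B_gauge_and_kinetic = np.sum(0.5 * (density[1:] + density[:-1]) * np.diff(xi))
```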
§ MASS MATRICES IN THE SU(2) DOUBLET PLUS SEPTUPLET MODEL
In this appendix, we list the explicit mass matrices that appear in eq. (<ref>).
§.§ Higgs Matrix
H_2× 2=(
[ 2λ v^2 2λ _13 v v_ϕ; 2 λ _13 v v_ϕ 2v_ϕ^2 λ_s; ]) .
where λ_13 and λ_s are the two combined parameters defined in eqs. (<ref>) and (<ref>).
§.§ Pseudo-Scalar Matrix
Pi_2× 2=(
[ 0 0; 0 4 Re(M_B^2)/√(7)+v_ϕ^2 κ _π+2 Re(λ_3) v^2/√(14); ]),
where
κ_π=2/7[Re(κ_0^'')-4Re(κ_0^')]+8/21√(5)[Re(κ_2^'')-4Re(κ_2^')].
§.§ Charged Higgs Matrices
C1_3× 3=(
[ 0 -v v_ϕλ _2 /2 √(14) -v v_ϕλ _2 /2 √(14); -v v_ϕλ _2 /2 √(14) M_A^2+v_ϕ^2 κ _122+ v^2(λ_1/2+λ_2/4√(42)) 2 Re(M_B^2)/√(7)+Re(λ _3) v^2/√(14)-v_ϕ^2 κ _123; -v v_ϕλ _2 /2 √(14) 2 Re(M_B^2)/√(7)+Re(λ _3) v^2/√(14)-v_ϕ^2 κ _123 M_A^2+v_ϕ^2 κ _122+ v^2(λ_1/2-λ_2/4√(42)); ]),
where
κ _122 =κ_2-4Re(κ_2^'')/21√(5)-Re(κ_0^'')/7 ,
κ_123 =1/7(κ_0+2Re(κ_0^')-Re(κ_0^''))+1/21√(5)(3κ_2+8Re(κ_2^')-4Re(κ_2^'')) .
The three eigenvalues of the matrix C1_3× 3 are difficult to obtain analytically. However, we can calculate them numerically, and we find that one of them equals zero. This zero eigenvalue corresponds to the massless charged Higgs particle.
C2_2× 2=(
[ M_A^2+v_ϕ^2 κ _211+v^2(λ_1/2+λ_2/2√(42)) -2 Re(M_B^2)/√(7)-Re(λ _3) v^2/√(14)+v_ϕ^2 κ _212; -2 Re(M_B^2)/√(7)-Re(λ _3) v^2/√(14)+v_ϕ^2 κ _212^* M_A^2+v_ϕ^2 κ _211+v^2(λ_1/2-λ_2/2√(42)); ]),
where
κ _211 =2√(5)/21[κ_2-Re(κ_2^'')]-Re(κ_0^'')/7,
κ_212 =1/7(κ_0+2κ_0^'-κ_0^'')+2√(5)/21(2κ_2^'-κ_2^'') .
C3_2× 2=(
[ M_A^2+v_ϕ^2 κ _311+v^2(λ_1/2+1/4√(3/14)λ_2) 2 Re(M_B^2)/√(7)+Re(λ _3) v^2/√(14)+v_ϕ^2 κ _312; 2 Re(M_B^2)/√(7)+Re(λ _3) v^2/√(14)+v_ϕ^2 κ _312^* M_A^2+v_ϕ^2 κ _311+v^2(λ_1/2-1/4√(3/14)λ_2); ]),
where
κ _311 =-Re(κ_0^'')/7+√(5)Re(κ_2^'')/21,
κ _312 =-1/7(κ_0+2κ_0^'-κ_0^'')+√(5)/21(κ_2+2κ_2^'-κ_2^'') .
|
http://arxiv.org/abs/2307.03102v1
|
20230706162209
|
Measurement of ambient radon daughter decay rates and energy spectra in liquid argon using the MicroBooNE detector
|
[
"MicroBooNE collaboration",
"P. Abratenko",
"O. Alterkait",
"D. Andrade Aldana",
"L. Arellano",
"J. Asaadi",
"A. Ashkenazi",
"S. Balasubramanian",
"B. Baller",
"G. Barr",
"D. Barrow",
"J. Barrow",
"V. Basque",
"O. Benevides Rodrigues",
"S. Berkman",
"A. Bhanderi",
"A. Bhat",
"M. Bhattacharya",
"M. Bishai",
"A. Blake",
"B. Bogart",
"T. Bolton",
"J. Y. Book",
"L. Camilleri",
"Y. Cao",
"D. Caratelli",
"I. Caro Terrazas",
"F. Cavanna",
"G. Cerati",
"Y. Chen",
"J. M. Conrad",
"M. Convery",
"L. Cooper-Troendle",
"J. I. Crespo-Anadon",
"R. Cross",
"M. Del Tutto",
"S. R. Dennis",
"P. Detje",
"A. Devitt",
"R. Diurba",
"Z. Djurcic",
"R. Dorrill",
"K. Duffy",
"S. Dytman",
"B. Eberly",
"P. Englezos",
"A. Ereditato",
"J. J. Evans",
"R. Fine",
"O. G. Finnerud",
"B. T. Fleming",
"N. Foppiani",
"W. Foreman",
"D. Franco",
"A. P. Furmanski",
"D. Garcia-Gamez",
"S. Gardiner",
"G. Ge",
"S. Gollapinni",
"O. Goodwin",
"E. Gramellini",
"P. Green",
"H. Greenlee",
"W. Gu",
"R. Guenette",
"P. Guzowski",
"L. Hagaman",
"O. Hen",
"R. Hicks",
"C. Hilgenberg",
"G. A. Horton-Smith",
"Z. Imani",
"B. Irwin",
"R. Itay",
"C. James",
"X. Ji",
"L. Jiang",
"J. H. Jo",
"R. A. Johnson",
"Y. J. Jwa",
"D. Kalra",
"N. Kamp",
"G. Karagiorgi",
"W. Ketchum",
"M. Kirby",
"T. Kobilarcik",
"I. Kreslo",
"M. B. Leibovitch",
"I. Lepetic",
"J. -Y. Li",
"K. Li",
"Y. Li",
"K. Lin",
"B. R. Littlejohn",
"H. Liu",
"W. C. Louis",
"X. Luo",
"C. Mariani",
"D. Marsden",
"J. Marshall",
"N. Martinez",
"D. A. Martinez Caicedo",
"S. Martynenko",
"A. Mastbaum",
"N. McConkey",
"V. Meddage",
"J. Micallef",
"K. Miller",
"K. Mistry",
"T. Mohayai",
"A. Mogan",
"M. Mooney",
"A. F. Moor",
"C. D. Moore",
"L. Mora Lepin",
"M. Moudgalya",
"S. Mulleria Babu",
"D. Naples",
"A. Navrer-Agasson",
"N. Nayak",
"M. Nebot-Guinot",
"J. Nowak",
"N. Oza",
"O. Palamara",
"N. Pallat",
"V. Paolone",
"A. Papadopoulou",
"V. Papavassiliou",
"H. Parkinson",
"S. F. Pate",
"N. Patel",
"Z. Pavlovic",
"E. Piasetzky",
"I. Ponce-Pinto",
"I. Pophale",
"X. Qian",
"J. L. Raaf",
"V. Radeka",
"A. Rafique",
"M. Reggiani-Guzzo",
"L. Ren",
"L. Rochester",
"J. Rodriguez Rondon",
"M. Rosenberg",
"M. Ross-Lonergan",
"C. Rudolph von Rohr",
"I. Safa",
"G. Scanavini",
"D. W. Schmitz",
"A. Schukraft",
"W. Seligman",
"M. H. Shaevitz",
"R. Sharankova",
"J. Shi",
"E. L. Snider",
"M. Soderberg",
"S. Soldner-Rembold",
"J. Spitz",
"M. Stancari",
"J. St. John",
"T. Strauss",
"A. M. Szelc",
"W. Tang",
"N. Taniuchi",
"K. Terao",
"C. Thorpe",
"D. Torbunov",
"D. Totani",
"M. Toups",
"Y. -T. Tsai",
"J. Tyler",
"M. A. Uchida",
"T. Usher",
"B. Viren",
"M. Weber",
"H. Wei",
"A. J. White",
"Z. Williams",
"S. Wolbers",
"T. Wongjirad",
"M. Wospakrik",
"K. Wresilo",
"N. Wright",
"W. Wu",
"E. Yandel",
"T. Yang",
"L. E. Yates",
"H. W. Yu",
"G. P. Zeller",
"J. Zennamo",
"C. Zhang"
] |
hep-ex
|
[
"hep-ex",
"physics.ins-det"
] |
The MicroBooNE Collaboration
[email protected]
We report measurements of radon daughters in liquid argon within the MicroBooNE time projection chamber (LArTPC). The presence of radon in MicroBooNE's 85 metric tons of active liquid argon bulk is probed with newly developed charge-based low-energy reconstruction tools and analysis techniques to detect correlated ^214Bi-^214Po radioactive decays. Special datasets taken during periods of active radon doping enable new demonstrations of the calorimetric capabilities of single-phase neutrino LArTPCs for β and α particles with electron-equivalent energies ranging from 0.1 to 3.0 MeV. By applying ^214Bi-^214Po detection algorithms to beam-external physics data recorded over a 46-day period, no statistically significant presence of radon is detected, corresponding to a limit of <0.38 mBq/kg at the 95% confidence level. The obtained radon radiopurity limit – the first ever reported for a noble element detector incorporating liquid-phase purification – is well below the target value of the future DUNE neutrino detector.
Measurement of ambient radon daughter decay rates and energy spectra in liquid argon using the MicroBooNE detector
C. Zhang
August 1, 2023
==================================================================================================================
§ INTRODUCTION
Liquid argon (LAr) detectors are excellent devices for performing nuclear and particle physics measurements where the deposited energy is at the MeV scale or below <cit.>. The ArgoNeuT <cit.> and MicroBooNE <cit.> single-phase time projection chambers (LArTPCs) have used sub-MeV detection capabilities to observe final-state neutrons from GeV-scale neutrino-nucleus interactions <cit.>, to set new limits on the existence of millicharged particles <cit.>, and to demonstrate calibration and reconstruction techniques using MeV-scale signatures <cit.>. The MicroBooNE, ICARUS <cit.>, and LArIAT <cit.> collaborations have also measured 𝒪(10 MeV) Michel electrons <cit.>.
Far lower in energy, the DarkSide-50 dual-phase LArTPC and DEAP single-phase scintillation detector used 𝒪(1–100 keV) ionization signatures from electron and argon nuclear recoils to place new limits on dark matter <cit.>.
While sub-MeV scale reconstruction techniques and tools are mature for dark matter LAr experiments using dual-phase LArTPC or scintillation detector technology, similar tools are in an early stage of development for single-phase LAr neutrino detectors relying primarily on charge readout technologies <cit.>.
At the end of this decade, the ≈10 kT underground single-phase LArTPCs of the DUNE experiment will be sensitive to neutrinos produced in nearby supernovae <cit.>, and may ultimately serve as a probe of solar neutrinos <cit.>, neutrinoless double-β decay <cit.>, and dark sector particle interactions <cit.>. Other impending or proposed future efforts also plan to realize multi-ton-scale LAr detectors, such as the LEGEND neutrinoless double-β decay detector <cit.> and the DarkSide-20k and Argo dark matter detectors <cit.>.
Many future large LAr detector physics goals require high radiopurities to minimize backgrounds to low-energy signals. Radon, specifically ^222Rn, is a significant source of background, as its progeny generate MeV-scale γ rays, β particles, and α particles that can produce neutrons or high-energy γ rays in secondary interactions. In large LAr detectors, these decay products can be generated by radon diffused throughout the LAr bulk, compromising background reduction benefits offered by detector fiducialization. LAr and liquid xenon (LXe) detectors sensitive to low-energy signals have reduced radon contamination by implementing rigorous detector material and outgassing assay campaigns <cit.> and by installing specialized systems capable of filtering radon from gaseous argon <cit.>. Using these methods, the DarkSide-50 and DEAP-3600 dark matter experiments have achieved radon levels of <cit.> and <cit.> in their bulk LAr volumes, respectively.
Existing methods of active radio-purification may not be suitable for large next-generation experiments with LAr or LXe. Gas-phase impurity filtration technologies relying on evaporation and subsequent re-condensing of the bulk LAr may not be able to achieve the throughput required for timely full-volume purification. In addition, as has been demonstrated for the case of electronegative impurities <cit.>, liquid-phase argon may be less susceptible to radon contamination than the gaseous phase, indicating potential benefits in minimizing evaporation of the bulk LAr. While effective high-throughput filtration of electronegative impurities from LAr and LXe has been achieved <cit.>, large-scale liquid-phase radon purification systems have not been demonstrated.
Stringent radiopurity requirements for massive next-generation LAr and LXe detectors highlight the need for more dedicated liquid-phase purification R&D. The DUNE collaboration aims to achieve a bulk radon contamination of in its baseline 10 kT LArTPC modules in service to its diverse MeV-scale physics program <cit.>. The DarkSide-20k and Argo dark matter experiments aim for , about three orders of magnitude lower than the nominal DUNE expectation, and in line with the purity achieved in the smaller DarkSide-50 detector <cit.>.
The MicroBooNE collaboration has shown that its electronegative impurity filtration system also removed radon intended to be actively doped into its LAr bulk <cit.>. After introducing a gaseous radon source into its circulation system, MicroBooNE's LArTPC observed steady rates of MeV-scale signatures on its wire planes consistent with time-correlated decays while purifying the LAr bulk. Rates increased when portions of the purification system were bypassed. Subsequent Geiger counting surveys revealed elevated radioactivity levels in oxygen-removing filter skids containing high-area copper-impregnated aluminum pellets <cit.>. This unexpected demonstration refutes previous conjectures in the literature that large-throughput liquid-phase electronegative impurity filters introduce large amounts of radon into detectors <cit.>.
It also stresses the importance of studying the absolute bulk radon purity of detectors that incorporate liquid-phase filters of this type.
In this paper, we present a limit of the specific activity of radon in MicroBooNE's liquid argon bulk of at the 95% confidence level. This measurement is performed with newly developed charge-based low-energy LArTPC reconstruction tools and analysis techniques that are used to detect decays and subtract backgrounds.
The measured upper limit is well within the target range for DUNE's low-energy physics program. This result provides an example of contamination levels that can be achieved in a liquid-filtered large LAr detector in the absence of any direct efforts toward radio-purification. Using only information from MicroBooNE's charge collection system, we also provide the first demonstration of the calorimetric capabilities of single-phase neutrino LArTPCs for β and α particles ranging from approximately 0.1 to 3.0 MeV in electron-equivalent reconstructed energy.
We begin with a description of the MicroBooNE detector and datasets used in Sec. <ref>. Sections <ref> and <ref> then describe the MeV-scale reconstruction framework and analysis procedures used to perform the measurement of correlated decays. Section <ref> describes Monte Carlo (MC) simulations and data-MC comparisons used to validate reported detection efficiencies and reconstructed energy spectra. Specific activity results are then reported in Sec. <ref>, and conclusions are given in Sec. <ref>.
§ MICROBOONE DETECTOR AND DATASETS
MicroBooNE was a single-phase LArTPC detector located in the Booster Neutrino Beamline at Fermi National Accelerator Laboratory that operated from 2015 to 2021. The primary component was a 2.56 × 2.33 × 10.37 m^3 TPC containing 85 metric tons of purified LAr. The TPC and an accompanying light collection system were contained within a cylindrical cryostat containing 170 metric tons of purified LAr. Supporting components, including readout and triggering electronics, high- and low-voltage supplies, and liquid argon filtration and monitoring systems, were inside the Liquid Argon Test Facility building housing the cryostat. Details of the MicroBooNE detector and support systems are presented in Ref. <cit.>.
In the MicroBooNE LArTPC, an electric field of 274 V/cm causes ionization electrons generated by particle interactions in the active volume to drift at a rate of 1.1 mm/μs, with a maximum drift time of 2.3 ms for ionization deposited near the cathode.
The drift charge arrives at an anode consisting of three planes of conducting sense wires with 3 mm pitch between wires and 3 mm spacing between planes. Inward-facing and middle “induction” planes each contain 2,400 wires oriented at ±60^∘ with respect to the 3,456 vertical “collection” plane wires.
Induction plane wires, voltage-biased to have minimal impact on the electric field, experience bipolar currents induced by passing ionization clouds. These ionization electrons then terminate their drift on collection plane wires, generating unipolar currents. Wire signals are digitized by readout electronics with a sampling period of 500 ns per ADC time-tick.
For each triggered detector readout, 6400 samples (3.2 ms) are saved for each wire, ensuring ample time for collection of all ionization charge present inside the TPC regardless of drift distance.
Ionization charge created after the time of triggering is also collected and recorded as particle interactions continue to occur in the spatial vicinity of existing drifting electrons.
Digitized waveforms are then filtered and processed to perform the analysis described in this paper. The residual equivalent noise charge (ENC) on wires post-filtering is around 400 e^- and 300 e^- for the longest wires on the induction and collection planes, respectively <cit.>.
While scintillation light has played a central role in prior LAr-based radon measurements <cit.>, MicroBooNE's light collection efficiency of 𝒪(1–10) photoelectrons per MeV is too low to provide meaningful information for isolated MeV-scale events, so data from the light-sensitive photomultiplier tubes are not used in this analysis.
MicroBooNE's LAr purification system was designed to remove electronegative impurities, enabling the achievement of electron lifetimes of several tens of milliseconds during physics data-taking <cit.>.
A mixture of recirculated liquid argon and re-condensed boil-off argon gas from the cryostat ullage was fed in series through two filters at approximately 0.6 L/s <cit.>.
The first filter contained 4Å molecular sieve material <cit.>, while the second contained copper-impregnated aluminum pellets <cit.>.
For a set of data-taking runs in 2021, a 500 kBq ^226Ra source was inserted into the gas circulation line upstream from the system's condensers. For these radon doping datasets, ^222Rn-containing argon gas was condensed and combined with recirculated argon prior to liquid filtration. During a subset of these special runs, the re-condensed ^222Rn-containing LAr was routed directly into the TPC, bypassing the recirculating LAr entering the filtration system (“filter bypass” radon doping data). A more detailed description of ^222Rn doping run configurations is given in Ref. <cit.>.
To measure radon activity in liquid-filtered LAr, data from a 46-day period during a MicroBooNE physics data-taking campaign were used, recorded between June 9 and July 24, 2018. The ≈ 654,000 event readouts used for the analysis, representing a cumulative recorded exposure of about 35 minutes, were collected during periods when the BNB beam was not delivering neutrinos to MicroBooNE (“beam-external” data). Instead, each readout was triggered by a low-frequency pulse delivered to the trigger system by a function generator (“unbiased” beam-external data).
This dataset is used to estimate cosmic backgrounds in MicroBooNE's beam neutrino physics analyses. Data taken during the filter bypass radon doping R&D campaign in 2021 were used to validate the MC-reported capability of MeV-scale analysis tools by identifying and reconstructing correlated decays in MicroBooNE. From this dataset, ≈ 81,000 events recorded over two days using the standard filtration/circulation configurations and ≈76,000 events recorded over two days using the filter bypass configuration were used.
§ LOW-ENERGY RECONSTRUCTION
Here we review the novel reconstruction of MeV-scale features using MicroBooNE's charge collection system. These newly-developed techniques utilize lowered thresholds to enhance sensitivity to the low-energy signals sought in this analysis.
Data processing is carried out in , a common software framework used for all Fermilab LArTPCs <cit.>.
§.§ Geometric reconstruction
Filtering and deconvolution algorithms are first applied to each digitized TPC waveform to suppress noise and account for the expected signal shape from the readout electronics.
The end result of this deconvolution process for each readout channel, visualized in Fig. <ref>, is a series of charge pulses corresponding to groups of drifted electrons sensed over 3.2 ms of readout time <cit.>.
An algorithm scans selected regions and fits each pulse to a Gaussian function, creating reconstructed “hits." Properties like amplitude, mean time, and RMS width for each hit are extracted directly from the fit <cit.>. The pattern-recognition algorithm Pandora <cit.> evaluates relative orientation of reconstructed hits from each of the wire planes and identifies contiguous line-like patterns. Features that correlate across multiple planes are reconstructed by Pandora into 3D particle tracks.
Unlike tracks, MeV-scale activity creates charge depositions spanning only a few wires. To reconstruct these features, we first exclude all wire hits associated with 3D tracks longer than 5 cm. Remaining same- or adjacent-wire hits are grouped into clusters based on their relative proximity in time, with a maximum allowable separation that scales with their RMS widths. Each cluster's overall charge-weighted mean time and RMS are computed. Finally, the Gaussian integral of each hit is added up and converted from ADC counts to electrons using a plane-specific electronics calibration scale factor.
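A simplified sketch of this clustering step is shown below (it is not the LArSoft implementation); the time-proximity scale and the ADC-to-electron calibration factor are placeholders.

```python
# Simplified sketch of hit clustering on one wire plane: hits on the same or
# adjacent wires are grouped if their times are close compared to their RMS
# widths, and the cluster charge is the summed Gaussian integral converted to
# electrons with a placeholder calibration factor.
ELECTRONS_PER_ADC = 200.0   # placeholder plane-specific calibration

def cluster_hits(hits, rms_scale=3.0):
    """hits: list of dicts with 'wire', 'time', 'rms' and 'integral' (ADC)."""
    clusters = []
    for hit in sorted(hits, key=lambda h: (h['wire'], h['time'])):
        for cl in clusters:
            near_wire = any(abs(hit['wire'] - h['wire']) <= 1 for h in cl)
            near_time = any(abs(hit['time'] - h['time']) <= rms_scale * (hit['rms'] + h['rms'])
                            for h in cl)
            if near_wire and near_time:
                cl.append(hit)
                break
        else:
            clusters.append([hit])

    summaries = []
    for cl in clusters:
        q_adc = sum(h['integral'] for h in cl)
        t_mean = sum(h['time'] * h['integral'] for h in cl) / q_adc   # charge-weighted mean time
        summaries.append({'charge_e': q_adc * ELECTRONS_PER_ADC,
                          'time': t_mean, 'nhits': len(cl)})
    return summaries
```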
For each hit cluster on the collection plane wires, we search for matching candidate clusters on the two induction planes. Only matches with intersecting wires are considered. If at least one matching induction plane cluster is found, a 3D “blip” is reconstructed. While a minimum of two matched planes is required to form a blip, three-plane matches are common and significantly less likely to be induced by noise.
Several criteria are evaluated to determine if potential matched clusters coincide in time. The fractional overlap of the clusters' time spans must exceed 50%. The clusters' start or end times must also coincide to within 1 (2 time-ticks). Finally, the clusters' charge-weighted mean times must differ by less than 80% of the quadrature sum of the clusters' RMS values.
The relative integrated charge of the candidate clusters is evaluated to reject false matches. This is illustrated in Fig. <ref>, which shows the relation between charge values on the collection plane and one of the induction planes for cluster pairs satisfying the time-based criteria described above. When only two matching planes are required, many matches are found with large charge discrepancies, where a cluster with relatively high charge on one plane is matched with a low-charge cluster on the other. When a match is required on all three planes, this population of charge-disparate matches disappears, suggesting that false hits are induced by electronics noise. To reject these false matches, for clusters with absolute charge differences > 10,000 electrons, we require the ratio of the larger cluster to the smaller cluster be .
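The matching logic described above can be summarized in a small predicate; in the sketch below the charge-ratio threshold MAX_RATIO is a placeholder, since the exact value used in the analysis is not reproduced here.

```python
# Sketch of the time- and charge-based plane-matching criteria described above.
MAX_RATIO = 4.0   # assumed placeholder value

def clusters_match(c1, c2, tick=0.5):
    """c1, c2: dicts with 'start', 'end', 'mean', 'rms' (microseconds), 'charge' (electrons)."""
    # fractional overlap of the time spans (relative to the shorter span) must exceed 50%
    overlap = min(c1['end'], c2['end']) - max(c1['start'], c2['start'])
    shorter = min(c1['end'] - c1['start'], c2['end'] - c2['start'])
    if shorter <= 0 or overlap / shorter <= 0.5:
        return False
    # start or end times must coincide within 1 microsecond (2 time-ticks)
    if min(abs(c1['start'] - c2['start']), abs(c1['end'] - c2['end'])) > 2 * tick:
        return False
    # charge-weighted mean times must differ by < 80% of the quadrature sum of RMS values
    if abs(c1['mean'] - c2['mean']) >= 0.8 * (c1['rms'] ** 2 + c2['rms'] ** 2) ** 0.5:
        return False
    # charge-ratio requirement for clusters with a large absolute charge difference
    q_hi, q_lo = max(c1['charge'], c2['charge']), min(c1['charge'], c2['charge'])
    if q_hi - q_lo > 10_000 and q_hi / max(q_lo, 1.0) > MAX_RATIO:
        return False
    return True
```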
The geometric coordinate system in the MicroBooNE volume is defined such that x̂ is parallel to the electron drift direction, spanning the 2.56 m distance between the wire planes and the cathode. The ŷ and ẑ directions relate to positions along the detector's height (2.32 m) and length (10.36 m), respectively.
Each blip's y and z coordinates are defined by the common point of intersection of the central-most wires in the plane-matched clusters. For reconstruction of the x coordinate, the true interaction time (t_0) of the particle producing the blip must be assumed in order to convert the raw time along the wire readout signal to a physical drift time, which is then multiplied by the ionization drift velocity in the LAr volume.
In LArTPCs, t_0 is usually determined by an external beam signal and/or a flash of scintillation light detected by a photon detection system.
For non-beam physics, scintillation light alone must be used for tagging t_0, requiring the matching of a light flash to features in the charge readout.
In MicroBooNE, most radiological decays do not produce enough light for flash-matching. This lack of t_0 tagging means the x coordinate assignment is ambiguous for the analysis presented in this paper, and is therefore not used.
For example, a blip from an ^39Ar decay reconstructed to appear near the cathode with an assigned coordinate of x = 240 cm (t_reco≈ 2.2 ms) could correspond to a decay that actually occurred near the anode at x≈30 cm, but at a later time during the triggered drift period (t_0 ≈ 1.9 ms).
A Monte Carlo (MC) simulation of the MicroBooNE detector, described in greater detail and validated using <3.3 MeV electrons from ^214Bi decays in Sec. <ref>, is used to characterize reconstruction performance.
Samples of low-energy electrons distributed uniformly throughout the LArTPC active volume are simulated to measure blip reconstruction efficiency.
The efficiency is influenced by settings related to the formation of “regions of interest” (ROIs) in the raw signal deconvolution, and by the absolute ADC signal threshold used in the hit-finding algorithm.
Figure <ref> shows the efficiency as a function of electron-deposited energy for MicroBooNE's standard reconstruction configuration and for a special “low-threshold” configuration (first used in Ref. <cit.>) where the deconvolution ROI and hit-finding thresholds were lowered.
Unresponsive or nonfunctional wires on each plane limit the maximum achievable efficiency to ≈85% and ≈95% for the two induction planes, and ≈90% for the collection plane.
This effect is compounded for 3D plane-matching, which is limited to ≈89% for 2–3 planes (collection + one induction) and ≈73% for 3-plane matches.
Table <ref> shows the energies at which the rising edge of the efficiency curves for these two configurations reach 50% of the maximum achievable efficiency after accounting for nonfunctional wires. The criteria needed for forming ROIs during signal deconvolution loosened significantly in the low-threshold reconstruction, particularly for the collection plane. The result is enhanced sensitivity to lower-energy deposits on the collection plane, coupled with smaller improvements in the two induction planes. Further lowering thresholds leads to an increase in noise-induced hits being reconstructed on each plane. This not only reduces the ability to find non-ambiguous matches for hits between planes, but also impacts the reconstruction of tracks.
§.§ Energy reconstruction
Visible energy is reconstructed using charge from the collection plane.
If t_0 is known, the collected charge is scaled up to account for the electrons absorbed by electronegative impurities during the drift. This correction uses the calibrated electron attenuation lifetime, τ_e, found from anode-to-cathode piercing cosmic muon tracks <cit.>. For reconstruction of ambient radiological signals presented in this analysis, the τ_e correction is not applied.
In standard MicroBooNE operating conditions, τ_e is effectively infinite, as measured charge attenuation across the drift volume is negligible. Corrections based on each 3D blip's y and z coordinate are applied to account for known non-uniformities in charge collection across the collection plane <cit.>.
A significant fraction of ionization electrons recombine with Ar_2^+ before they drift to the wire planes. This effect must be accounted for to reconstruct the total charge deposited by a particle. The probability ℛ of an electron surviving recombination depends on the local density of electrons, dQ/dx, and the electric field, ℰ. The energy can therefore be reconstructed using
E_reco = Q/ℛ(dE/dx,ℰ_local)× W_ion,
where Q is the reconstructed charge in units of electrons, and W_ion = 23.6 eV <cit.> is the mean energy required to produce an electron-ion pair in LAr.
While determining dQ/dx along tracks is straight-forward, it is nearly impossible for MeV-scale depositions, since dx cannot be reliably measured when the collected charge is concentrated on only a few readout channels <cit.>.
Calorimetry at the MeV-scale is further complicated by accumulated space charge effects <cit.> that modify the local electric field, and since electronic stopping power for electrons (and therefore recombination) increases substantially and non-linearly for kinetic energy ≲1 MeV <cit.>.
Simplifying assumptions are therefore made in Eq. <ref>.
Figure <ref> shows the relationship between the deposited energy and free ionization charge for the sample of low-energy electrons described previously. The Modified Box model <cit.> is used to calculate recombination using the local electric field. The error bars give a sense for the mean deviations caused by non-uniformity of the field due to space charge effects.
Despite a small deviation from linearity at low energies, the relationship is approximately linear overall, with an average charge yield of ≈ 24,700 electrons per MeV of deposited energy.
At MicroBooNE's nominal electric field, ℰ = 274 V/cm, this corresponds to an equivalent electron recombination survival fraction of ℛ ≈0.584, and a mean stopping power of ⟨ dE/dx ⟩≈ 2.8 MeV/cm, consistent with values calculated from the NIST table of electronic stopping for electrons below a few MeV <cit.>. Eq. <ref> thus simplifies to an `electron-equivalent' energy,
E_reco [MeVee] = Q/0.584× W_ion.
For electron energy deposits between about 1.5 MeV and 3.5 MeV, this linearized reconstruction yields an energy scale bias within the range of intrinsic variations from ℰ-field non-uniformities. For energies in the range of 0.1-1 MeV, the energy bias ranges between about 10% and 20%. The result presented in this paper is not particularly sensitive to accurately reconstructed energy scales at this level.
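As a simple illustration of this linearized conversion (a sketch, not the analysis code):

```python
# Linearized charge-to-energy conversion: W_ion = 23.6 eV and an assumed mean
# recombination survival fraction R = 0.584, i.e. ~24,700 collected electrons per MeV.
W_ION_MEV = 23.6e-6   # MeV per electron-ion pair
RECOMB = 0.584        # mean recombination survival fraction at 274 V/cm

def energy_MeVee(n_electrons):
    """Electron-equivalent energy from a reconstructed charge in electrons."""
    return n_electrons / RECOMB * W_ION_MEV

print(energy_MeVee(24_700))   # ~1.0 MeVee
```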
Using this linear conversion of reconstructed charge into energy, the energy resolution according to the MC simulation is presented in Fig. <ref>. The resolution and its error-bar in each bin of deposited energy (E_dep) is evaluated by taking the fitted Gaussian width of the distribution of δ E = (E_reco - E_dep)/E_dep. A function that is used to characterize calorimetric detectors is fit to the plotted MC results,
δ E/E = a_0/E [MeV]⊕a_1/√(E [MeV])⊕ b.
The terms in this function represent contributions from electronic noise (a_0 = 3.1%), counting statistics (a_1 = 6.4%), and reconstruction-related systematic effects (b = 7.30%). This best fit corresponds to an electron energy resolution of 10% at 1 MeV and 8% at 5 MeV. This is well below the 10-20% expected in the DUNE detector for supernova neutrinos <cit.>, and roughly consistent with the resolution (7% for electrons over 5 MeV) needed for DUNE to study solar neutrinos <cit.>.
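Evaluating the fitted function with the quoted coefficients reproduces these numbers, for example with the short sketch below.

```python
# Evaluate dE/E = (a0/E) (+) (a1/sqrt(E)) (+) b, added in quadrature, with the
# fitted coefficients quoted in the text.
import math

def resolution(E, a0=0.031, a1=0.064, b=0.073):
    return math.sqrt((a0 / E) ** 2 + a1 ** 2 / E + b ** 2)

for E in (1.0, 5.0):
    print(f"dE/E at {E:.0f} MeV: {100 * resolution(E):.1f}%")   # ~10% and ~8%
```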
§ ANALYSIS PROCEDURE
§.§ Bi-Po decay topology
The activity of ^222Rn in the TPC is inferred by measuring the rate of ^214Bi decaying to ^214Po, a technique successfully used in the recent demonstration of filtration of radon by MicroBooNE's liquid argon purification system <cit.>. The isotope ^214Bi (Q_β=3.27 MeV) decays with a half-life of 19.7 minutes, emitting an electron (or “β particle”) with an energy spectrum extending to the decay endpoint. The ^214Po daughter then decays from the same point in the TPC with a half-life of 164.3 μs, emitting a mono-energetic 7.7 MeV α particle.
Due to the electron drift velocity of 1.1 mm/μs <cit.>, the temporally-separated β and α emissions manifest as two spatially-separated signals occurring on the same readout wire(s) with an average apparent separation of 18 cm. Since the densely-ionizing α signal is highly quenched in LAr due to recombination and other effects <cit.>, it appears much fainter than the β signal and deposits only a few thousand electrons, compared to the β_Bi which deposits on average ≈ 15,000 and a maximum of ≈ 80,000 electrons.
The β decay of the ^214Bi can also produce several low-energy γ rays <cit.>, which interact primarily via Compton scattering in the surrounding LAr, creating additional blips in the vicinity of the β_Bi signal. Since the radiation length at these energies is 𝒪(10 cm), these displaced γ-induced blips can be mistaken for the α signal if they occur on the same readout channel.
This also implies that any other β-decaying radioisotope that emits γ rays can mimic the BiPo signal, such as ^214Pb (Q_β=1.02 MeV) in the ^222Rn decay chain. Fig. <ref> illustrates the ^214Bi-^214Po (`BiPo') topology as it appears in a MicroBooNE event, including several potential γ signals near the candidate β_Bi deposition.
§.§ Signal selection
Here we outline the selection of BiPo decay candidates. As described in Sec. <ref>, we use data collected in 2021 when a ^226Ra source was used to introduce ^222Rn into the MicroBooNE TPC. Our procedure is similar to that used in the study that demonstrated the removal of radon by the filtration system <cit.>, with several modifications to improve the signal-to-background ratio.
To maximize sensitivity at lower energies, TPC data are reconstructed using the low-threshold configuration described in Sec. <ref>. To avoid low-energy activity induced by cosmic ray muons passing through the detector, such as δ rays, we veto all hits within 15 cm of tracks resembling through-going cosmic muons. This proximity is evaluated per-plane, in a 2D space in which each hit's drift time and wire number are converted into distance-equivalent coordinates. Remaining hits are clustered, plane-matched, and reconstructed into 3D blips.
Readout channels that are identified by the upstream signal deconvolution algorithm as particularly noisy are excluded from consideration. Additional requirements are enforced to reject hit clusters that are not sufficiently isolated, as well as those coinciding in time with other hits across nearby wires, a topology consistent with coherent noise.
To ensure none of the deposited energy is missed, collection plane hit clusters adjacent to non-functional wires are vetoed.
Blips are evaluated to identify candidate β_Bi deposits, requiring a match in at least two planes. A fiducial requirement in the yz-plane (, ) excludes energy deposits near the edges of the active volume where space-charge distortion effects and radiological backgrounds from G10 support struts are more prominent <cit.>. To reject noise and blips from ^39Ar β decays (Q_β=0.57 MeV), as well as high-energy blips not consistent with the Q_β of ^214Bi decay, we select only candidates with an integrated charge corresponding to energies between 0.5 MeV and 3.5 MeV.
After a candidate β_Bi blip is identified, we search for associated α candidates on the collection plane wires corresponding to the start and end of the β cluster, as shown in Fig. <ref>. Clusters occurring on these wires within a “signal region” time window of 20-500 μs following the β candidate are evaluated as potential α candidates. The minimum of 20 μs is imposed to ensure the α produces a distinct and well-separated signal on the readout wire.
Only clusters with < 6,000 electrons are selected as candidates for the highly-quenched α signal, corresponding to an electron-equivalent energy < 0.24 MeVee.
§.§ Background subtraction
The time separation ΔT is stored for each BiPo candidate. Such a distribution can be fit to an exponential function, with its decay time fixed to the 164.3 μs half-life of ^214Po, and used to infer the true signal content in the sample. However, our sample will be contaminated by several sources of background outlined below.
* Random electronics noise resulting in a time-independent contribution to ΔT.
* Unrelated radiological or cosmic activity, such as from γ rays or neutrons. Such topologies create groups of closely spaced blips with separations on the order of several centimeters, leading to a ΔT contribution with a characteristic time of ≈10–30 μs.
* Low-energy γ rays emitted in the β decay of radiological isotopes, including but not limited to . This background is particularly problematic since the spatial distribution of these γ interactions relative to the candidate translates to a time distribution with a characteristic time constant resembling that of true BiPo decays.
To account for these backgrounds, we repeat the selection procedure on the same wires but in a time window preceding each candidate. Spatial symmetry with respect to the signal region ensures that the distribution of false candidates, due to noise or γ activity, will be identical to that in the forward signal region.
Figure <ref> shows the distribution of candidate decay times for the forward signal region and time-reversed background region for data taken during the Rn-doping period. Distributions are scaled up to represent the full LArTPC active volume by correcting for the fractional fiducial volume used for the selection (62%).
Fitting the background region's distribution to a function modeling the three background categories discussed above suggests the approximate relative contributions of each are about 50% (1), 20% (2), and 30% (3), respectively.
We also consider additional detector effects that may influence the quantity and spatial distributions of candidates in the signal and background regions. The accumulation of slowly-drifting positive Ar ions from a constant flux of cosmic rays distorts the electric field, leading to a slightly higher field strength in regions nearer to the cathode and a lower field strength nearer to the anode. Since recombination depends on the local electric field, energy deposited nearer to the cathode (i.e., in the signal region) will produce more free charge relative to deposits nearer to the anode (i.e., in our time-reversed background region). Electron drift attenuation and transverse/longitudinal diffusion will have an opposite effect, decreasing the detection efficiency for ionization in the signal region relative to the background region. We employ a data-driven method to account for the confluence of these two effects by running the selection in a “control region” of the collection plane separated from the candidate by at least several wires. Here, we expect symmetrically-distributed contributions from γ_Bi production in both the forward and backward regions, so any differences can be attributed to the aforementioned detector effects. We fit a linear function to the forward-to-backward candidate ratio per time bin and apply this as a bin-by-bin correction factor on the background region distribution.
The end result is a downward scaling in the range of 2%–3% on the background distribution, with bins at higher ΔT requiring a larger correction as expected.
§.§ Extracting the decay rate
The background-subtracted distribution of BiPo candidates' ΔT is shown in Fig. <ref> for the filter bypass period and the equal-length period preceding it in which the full filter was employed. Both subtracted distributions are well-described by a single exponential function of the form p_0 + p_1 ·exp(-ΔT/τ), where τ is fixed to the 164.3 μs ^214Po half-life.
Integrating the exponential component of the fit allows us to extrapolate the rate of BiPo decays present in the sample, regardless of the chosen time window used in the selection. In the nominal fit, the constant background term p_0 is treated as a free parameter to account for the possibility of a background subtraction imperfection. To account for this uncertainty, we repeat the fit with p_0 fixed to zero and treat the difference in outcome between this and our nominal fit as a systematic uncertainty.
The fit functions to each of the Rn-doping data periods shown in Fig. <ref> are integrated to calculate the total decay rate. This results in an average rate of about 0.85±0.2 BiPo decay candidates per 3.2 ms TPC readout period in the filter bypass period compared to (5±7)×10^-3 candidates per readout when the full filter was in use.
To visualize the time evolution of these measurements, we divide the data into 2-hour periods and perform this technique in each of them separately. The resulting rates as a function of event time relative to the start of each respective data-taking period are shown in Fig. <ref>. Vertical error bars include contributions from both the returned fit uncertainty and the systematic uncertainty from fixing the fit parameter p_0.
§ MONTE CARLO SIMULATION
§.§ Generated samples
To translate a measured BiPo rate per TPC readout window into a measurement of the specific activity of ^222Rn in MicroBooNE's liquid argon, the efficiency of the BiPo selection described in Sec. <ref> must be corrected for. Monte Carlo simulations are used to characterize this efficiency. With the aid of the radioactive decay generator <cit.>, a list of γ and β rays is generated matching the kinematic and time distributions expected from individual correlated decay pairs. These particle lists are used to generate simulated MicroBooNE events, each containing 40 decays distributed randomly throughout the active volume with a randomized time within ±2.8 ms relative to the main drift window. This rate is equivalent to about 22.8 decays per 3.2 ms readout window. Particle propagation and detector readout are simulated using an integration of the LArSoft <cit.> and Geant4 <cit.> software packages, referred to as “LArG4.” To realistically account for cosmic backgrounds and for electronics noise present in data, which are challenging to accurately model in simulation, wire signals from each simulated event are overlaid onto an unbiased beam-external data event. Each overlaid event is then processed by the reconstruction and signal selection.
Table <ref> summarizes the crucial detector physics parameters used to generate this Monte Carlo dataset. In the LArG4 framework, electron-ion recombination is simulated with the Modified Box model mentioned in Sec. <ref>.
Since the ArgoNeuT collaboration used data from stopping protons and deuterons to parameterize this model, it is applicable for dE/dx < 35 MeV/cm <cit.>.
For α particles and nuclear recoils, which are more highly-ionizing, additional charge quenching effects must be considered <cit.>.
In this analysis, the charge deposited by α particles comes from an empirical field-dependent model based on fits to existing data, developed by the Noble Element Simulation Technique (NEST) collaboration <cit.>. A random Poisson-like smearing is then applied to the ionization yield (σ = √(N_e)) to mimic binomial fluctuations.
This approach predicts a mean α charge-yield (QY) of about 390 e^-/MeV compared to the Modified Box model's prediction of 530 e^-/MeV.
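As a rough illustration of this smearing (using the approximate mean yield quoted above; the Gaussian approximation to the Poisson-like fluctuation is our own simplification):

```python
import numpy as np

rng = np.random.default_rng(0)

def smeared_alpha_electrons(e_alpha_mev, qy_e_per_mev=390.0):
    """Mean NEST-like alpha charge yield with a sigma = sqrt(N_e) smearing."""
    n_mean = e_alpha_mev * qy_e_per_mev
    return max(rng.normal(n_mean, np.sqrt(n_mean)), 0.0)

n_e = smeared_alpha_electrons(7.7)   # e.g. a 7.7 MeV alpha
```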
Sources of physics-related systematic uncertainty are studied using additional samples with key simulation parameters varied accordingly. The dominant source of systematic uncertainty is the α QY. Since the few existing α data in LAr do not report measurement errors, NEST assigns a ±10% uncertainty on its empirical model. We assume an uncertainty of ±20% for this analysis.
Electron drift diffusion is particularly impactful for low-energy deposits in LAr. Since these features typically span only a few wires, any charge within the main electron cloud that diffuses far enough to be collected on neighboring wires is less likely to produce signals above threshold. The value for the longitudinal diffusion simulated in this analysis comes from a recent MicroBooNE measurement of D_L = 3.74^+0.28_-0.29 cm^2/s <cit.>. This analysis also predicts the associated transverse diffusion, D_T, though no direct measurement of D_T exists at MicroBooNE's electric field. Systematic samples are generated with correlated variations in D_L and D_T of ±1σ and ±30%, respectively.
Systematic effects from MicroBooNE's calibrated energy scale (e^- per ADC) are addressed through samples in which all charge deposits are scaled up or down by 5%. Recombination modeling uncertainties are addressed by using an alternative parameterized model <cit.> and by enhancing recombination fluctuations by a factor of 10 as some data suggest <cit.>.
§.§ Calorimetric validation
While precise calorimetry is not essential for signal selection, energy spectra are reconstructed to validate the simulation of low-energy signatures.
These validations further extend the demonstrated boundaries of charge-based reconstruction capabilities in large single-phase LArTPCs.
Energy reconstruction follows the procedure laid out in Sec. <ref>, allowing us to translate collected charge into “electron-equivalent” energy using Eq. <ref> in which an electron-like recombination factor is assumed.
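A hedged sketch of this conversion is shown below; the ionization work function and the electron-like recombination factor are representative values and not MicroBooNE's calibrated constants.

```python
W_ION_MEV = 23.6e-6    # MeV of deposited energy per ion pair in argon
R_ELECTRON = 0.64      # assumed recombination survival fraction for electron-like dE/dx

def electron_equivalent_energy(n_electrons):
    """Translate collected free charge (number of electrons) into MeV electron-equivalent."""
    return n_electrons * W_ION_MEV / R_ELECTRON
```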
A similar background subtraction technique as described in Sec. <ref> is performed on the energy distributions of and candidates using information from the collection plane.
The filter bypass Rn-doping dataset is used for these calorimetric checks. Due to the lack of filtration, the concentration of LAr impurities rose dramatically during this period. For this reason, data was excluded beyond 35 hours when the measured electron lifetime was found to drop below ≈7 ms. For comparison, MC was generated with an electron lifetime of 8 ms to match the average level of attenuation observed in data events with tagged BiPo candidates.
Figure <ref> shows the background-subtracted energy spectrum of candidates in the data and MC simulation, with the usual energy-based selection requirement (E_β > 0.5 MeV) dropped to reveal the full spectrum. As expected, the data exhibit a tail extending out to an endpoint matching the Q_β value of . The shape of the lower end of the spectrum is sculpted by the energy threshold effects discussed in Sec. <ref>; the efficiency for reconstructing plane-matched blips drops rapidly for electron energies below 0.7 MeV, reaching 50% around 0.5 MeV and becoming negligible at still lower energies. A goodness-of-fit test between data and the MC yields a χ^2 of 58 over 33 degrees of freedom (ndf). Applying an energy shift of -5% to the MC (equivalent to the calibrated energy scale uncertainty) improves the match, yielding χ^2/ndf = 42/33.
The reconstructed energy spectrum from the filter-bypassed Rn-doping R&D run period is shown in Fig. <ref>.
Since the 7.7 MeV α particle experiences significant charge quenching in LAr, its reconstructed energy in electron-equivalent units ranges from only 50 to 200 keV.
Unlike for the signal, the selection of the correlated signal takes place entirely on the collection plane with no plane-matching requirements imposed.
As shown in Fig. <ref> and Table <ref>, the reconstruction efficiency extends far lower in energy on the collection plane alone compared to when plane-matching requirements are imposed.
Despite this lowered threshold, the reconstructed spectrum occupies the very lowest extent of the detector's sensitivity, with an average hit-finding efficiency of ≈10% in the 100–150 keV true electron-equivalent energy range encompassing the signal, and even lower below 100 keV true energy.
The shape of the spectrum is heavily sculpted by this sudden turn-on in sensitivity, exhibiting a sharp rising edge from 70–90 keVee.
This same thresholding effect is also visible in the MC samples, though offset from data by slightly less than 10 keV.
With the α QY scaled up by 20%, the distribution skews too high, overshooting the high-energy tail of the data and resulting in a softened rising edge at the lower end.
When the α QY is scaled down by 20%, the high-energy tail does not extend out as far as the data and the rising edge sharpens.
While there is some broad qualitative agreement in the spectrum between data and MC, this comparison highlights the unresolved systematic uncertainties in modeling this signal.
§.§ Efficiency
Since we demonstrated the accuracy of the simulation through data-MC calorimetric comparisons, we now use it to determine the efficiency in measuring the rate of decays. To best reflect standard MicroBooNE operating conditions, the simulated drift electron lifetime is set sufficiently high such that charge attenuation is negligible.
The analysis procedure is carried out on each MC sample and the underlying cosmic data overlaid onto the simulated events.
The overlay data alone yields a rate of about 0.02 candidates per readout. This is subtracted off the rates obtained from each sample in order to properly gauge the efficiency of the MC contribution.
For the nominal MC sample, a rate of 1.38±0.25 decays per readout is measured compared to the simulated rate of 22.8 per readout, equivalent to an efficiency of . Effects due to nonfunctional wires, vetoing of hits surrounding cosmic tracks, fiducialization, and thresholding are folded into this efficiency. The uncertainty on ϵ_nom arises primarily from the systematic uncertainty assigned during the fitting procedure described in Sec. <ref>.
Table <ref> reports the relative impact on MC efficiency for each physics-related source of systematic uncertainty. Uncertainties related to the α QY and electron diffusion dominate the error budget. Added in quadrature, the total systematic uncertainty on efficiency is about ±50%, yielding a final efficiency of ϵ = (6 ± 3) %.
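The quadrature combination can be sketched as follows; the individual relative variations listed here are hypothetical placeholders chosen only to reproduce the quoted ~50% total.

```python
import math

# hypothetical relative systematic variations (not the table values)
rel_systs = {"alpha_QY": 0.40, "diffusion": 0.25, "energy_scale": 0.10, "recombination": 0.10}
total_rel = math.sqrt(sum(v ** 2 for v in rel_systs.values()))   # ~0.5
eps_nominal = 0.06
print(f"efficiency = {eps_nominal:.2f} +/- {eps_nominal * total_rel:.2f}")
```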
§ AMBIENT RADON RATE RESULTS AND DISCUSSION
To measure the ambient radon rate in standard MicroBooNE operating conditions, rather than during R&D periods used in previous sections during which was actively being added to the TPC, we use a large sample of unbiased beam-external events from the 2018 physics data-taking period described in Sec. <ref>. Figure <ref> shows the background-subtracted ΔT distribution for this data, fitted to the exponential function f = p_0 + p_1exp(-ΔT/τ), with τ fixed to the lifetime.
Integrating the BiPo component of the fit and incorporating statistical and systematic uncertainties from Sec. <ref>, a rate of (0.7 ± 2.8) × 10^-3
candidates per readout is obtained.
The error on this rate is dominated by the statistical uncertainty from the fit.
This rate is converted to a measurement of the specific activity by correcting for the MC efficiency (ϵ) found in Sec. <ref> and dividing by the total mass of LAr in the active volume. This yields an activity of (0.04 ± 0.17) mBq/kg, which is consistent with zero.
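The rate-to-activity conversion amounts to the following arithmetic; the active-volume LAr mass used here is an approximate round number assumed for illustration.

```python
READOUT_S = 3.2e-3   # seconds per TPC readout window
M_LAR_KG = 8.5e4     # assumed active-volume LAr mass

def specific_activity_mbq_per_kg(rate_per_readout, efficiency):
    decays_per_s = rate_per_readout / efficiency / READOUT_S
    return decays_per_s / M_LAR_KG * 1e3   # Bq/kg -> mBq/kg

# specific_activity_mbq_per_kg(0.7e-3, 0.06) is roughly 0.04 mBq/kg
```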
Assuming that secular equilibrium has been reached for and its progeny in the MicroBooNE LAr, we use this result to place a limit on the true decay rate of at the 95% confidence level (C.L.).
We divide this data into a series of 48-hour periods, and repeat our BiPo rate measurement procedure in each. A lower unbiased trigger rate is used in normal data-taking compared to the R&D runs used previously, necessitating the use of longer time periods to achieve sufficient per-bin statistics. Rates for each period are shown in Fig. <ref>. No major trends are observed over time, as might be expected if there were sudden changes in the LAr circulation system's operational state or gradual degradation in filter purities.
The 95% C.L. upper limit of 0.38 mBq/kg for measured in this analysis is well below the current radiopurity target for DUNE's low-energy physics program <cit.>. Given the similarity in LAr filtration system design and components between MicroBooNE and DUNE <cit.>, we expect similar levels of radon purity in DUNE's bulk LAr if a comparable cryogenic recirculation period can be achieved.
Similar analyses with existing and future ProtoDUNE datasets <cit.> would provide further support for this statement.
The result from this analysis lacks the precision necessary for direct relevance to next-generation dark matter experiment radiopurity goals.
However, when combined with Ref. <cit.>, this result suggests promising intrinsic capabilities of liquid-phase filtration systems for achieving high radiopurities, which should be further investigated in liquid noble element dark matter R&D efforts. Analyses with higher statistical precision and lower inherent background contamination should be performed with future Fermilab-based LArTPCs such as SBND <cit.>, given its larger LAr volume and highly capable light collection system.
§ CONCLUSION
Using the MicroBooNE charge collection system and newly developed low-energy reconstruction tools, we have probed the presence of in a large LArTPC by identifying MeV-scale energy depositions produced in decays of its daughter isotopes and .
Blips matching the expected appearance of decay β particles were identified and reconstructed using a multi-plane scheme. Weaker blips matching the appearance of subsequent decay α particles were then reconstructed in a narrow region of spatial/temporal phase space with respect to the signal. Backgrounds to coincident signals arising from randomly-coincident blips, multi-site γ ray interactions, and β+γ radon daughter decays were subtracted using off-window and time-reversed-window side-band methods.
By estimating the efficiency for signal detection using MC simulations and validating these simulations with special MicroBooNE R&D datasets, measured rates were reliably converted into measurements of activity.
We do not detect any presence of in steady-state MicroBooNE physics data-taking conditions, and set a limit of at the 95% confidence level. This limit is well below the targeted upper limit for the DUNE LArTPC experiment's baseline low-energy physics program of <cit.>, and was achieved by MicroBooNE in the absence of any direct efforts towards radio-purification.
This also represents the first in-situ measurement of bulk activity in a liquid-filtered noble element particle detector.
In performing this measurement, we have extended the boundaries of charge-based calorimetry and reconstruction capabilities in large single-phase neutrino LArTPCs. We accurately reconstruct the energy spectrum of β particles in decay within an energy range of 0.2–3.0 MeV, and identify and reconstruct decay α particles with 75–200 keV of electron-equivalent energy.
To our knowledge, these are the lowest energies at which particle calorimetry and identification capabilities have been demonstrated so far in a single-phase neutrino LArTPC.
This document was prepared by the MicroBooNE collaboration using the resources of the Fermi National Accelerator Laboratory (Fermilab), a U.S. Department of Energy, Office of Science, HEP User Facility. Fermilab is managed by Fermi Research Alliance, LLC (FRA), acting under Contract No. DE-AC02-07CH11359. MicroBooNE is supported by the following: the U.S. Department of Energy, Office of Science, Offices of High Energy Physics and Nuclear Physics; the U.S. National Science Foundation; the Swiss National Science Foundation; the Science and Technology Facilities Council (STFC), part of the United Kingdom Research and Innovation; the Royal Society (United Kingdom); and the UK Research and Innovation (UKRI) Future Leaders Fellowship. Additional support for the laser calibration system and cosmic ray tagger was provided by the Albert Einstein Center for Fundamental Physics, Bern, Switzerland. We also acknowledge the contributions of technical and scientific staff to the design, construction, and operation of the MicroBooNE detector as well as the contributions of past collaborators to the development of MicroBooNE analyses, without whom this work would not have been possible. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) public copyright license to any Author Accepted Manuscript version arising from this submission.
|
http://arxiv.org/abs/2307.00811v1
|
20230703075108
|
Review helps learn better: Temporal Supervised Knowledge Distillation
|
[
"Dongwei Wang",
"Zhi Han",
"Yanmei Wang",
"Xiai Chen",
"Baichen Liu",
"Yandong Tang"
] |
cs.CV
|
[
"cs.CV",
"cs.AI"
] |
Reviewing plays an important role when learning knowledge. Knowledge acquisition at a given time point may be strongly inspired by previous experience, so the knowledge-growing process should exhibit a strong relationship along the temporal dimension.
In our research, we find that during network training, the evolution of the feature maps follows a temporal sequence property. A proper temporal supervision may further improve the network training performance.
Inspired by this observation, we design a novel knowledge distillation method. Specifically, we extract the spatiotemporal features in the different training phases of the student with a convolutional long short-term memory network (Conv-LSTM). Then, we train the student network toward a dynamic target, rather than static teacher network features. This process realizes the refinement of old knowledge in the student network and utilizes it to assist current learning.
Extensive experiments verify the effectiveness and advantages of our method over existing knowledge distillation methods across various network architectures and different tasks (image classification and object detection).
§ INTRODUCTION
"Reviewing the old to learn the new."-Confucius. In the learning process, reviewing not only deepens our memory of old knowledge, but also more importantly, inspires us to learn the new knowledge. We believe that the learning process of neural networks also possesses strong temporal relationship similarly.
To verify this hypothesis, we conducted a time series prediction analysis on a fully connected network using the autoregressive integrated moving average (ARIMA) model [2]. Specifically, we train the network to fit a quadratic function and use ARIMA to model the feature maps across epochs. As shown in Fig <ref>, the fitted ARIMA model provides an approximate prediction of the real training process, which indicates the temporal sequence property of network learning. Motivated by this observation, we aim to answer the following two questions: (1) Can networks utilize old knowledge to assist current learning like humans? (2) How can positive supervision be applied to the temporal learning process of a network?
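The ARIMA check can be reproduced schematically as follows; the trace here is synthetic, standing in for one recorded feature-map entry per training epoch, and the model order is an arbitrary choice rather than the one used in the paper.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
feature_trace = np.cumsum(rng.normal(0.05, 0.02, size=120))  # stand-in for a feature value per epoch

fit = ARIMA(feature_trace[:-10], order=(2, 1, 2)).fit()      # fit on the earlier epochs
pred = fit.forecast(steps=10)                                # predict the held-out last 10 epochs
```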
In this work, we propose a novel knowledge distillation framework named Temporal Supervised Knowledge Distillation (TSKD). Different from existing distillation methods that mainly focus on extracting knowledge from spatial features, we attempt to extract more knowledge in the temporal dimension. Moreover, we design a dynamic learning target with teacher features to guide the learning process of the student. This process imitates how humans learn and review knowledge. A better student network can be obtained under the temporal supervision of the teacher network.
Overall, our contributions are summarized as follows:
∙ We find that the knowledge in network training grows regularly over time: there exists exploitable information in the temporal dimension.
∙ We establish a new training paradigm by planning the network training as a memorize-review mode. This makes it possible for the student network to review old knowledge and utilize it to assist current learning.
∙ We propose a novel knowledge distillation framework that supervises the student network in the temporal dimension. The spatiotemporal features extracted by a designed Conv-LSTM can be well guided by teacher features.
∙ We achieve competitive performance against representative feature-based distillation works on various computer vision tasks and network architectures.
§ RELATED WORK
Deep learning[17] has brought significant boosts to a series of pattern recognition tasks, such as image classification[24, 16, 29, 32], object detection[7, 21, 9] and semantic segmentation[34, 3]. As powerful networks usually grow deeper and wider[28, 6], knowledge distillation is an effective way to train small models. The concept of knowledge distillation was first proposed by Hinton et al.[12], where the student model can achieve better performance by learning the output probability distributions of the teacher model. As the intermediate features contain more knowledge, FitNet[22] was proposed to transfer the knowledge from teacher network features to student network features using ℒ_2 distance as a constraint. Following FitNet, many existing methods utilize the knowledge within intermediate features and achieve state-of-the-art distillation performance. The mainstream research is designing new transformation and loss functions to distill knowledge in the transformation space. AT[30] used multiple layer attention maps to transfer spatial information. CRD[25] formulated the pair-wise distillation as contrastive learning. OFD[11] designed a new distance function to distill major knowledge between teacher and student using marginal ReLU.
Another branch is optimizing the matching relationship between teacher and student feature candidates. ReviewKD[4] used multi-level information of the teacher to guide one-level learning of the student network. SAD[14] utilized an attention-based meta-network that learns the relative similarities to identify the possible links between teacher features and student features.
However, previous works have mainly focused on transferring the spatial information in teacher features. Researchers tend to neglect the fact that student networks learn the same data differently due to their structural discrepancy with teachers. None of these methods consider using the teacher to guide the temporal learning process of the student. Moreover, making teacher feature maps the fixed learning goal may not be the best choice for student convergence. In this paper, we establish the distillation framework from a novel perspective.
§ METHOD
§.§ Background and Notations
Let S_0,S_1,…,S_t-1,S_t denote the student network at different training epochs and T denote the teacher network. Given the same input data X, we denote the outputs of teacher layer t_l and student layer s_l as F_t_l∈ℝ^C_t_l× H_t_l× W_t_l and F_s_l^i∈ℝ^C_s_l× H_s_l× W_s_l, respectively, where C, H, and W denote the channels, height, and width, and the superscript i denotes the training epoch of the student model.
Obviously, F_s_l^i and F_t_l have spatial information discrepancies due to the different degrees of convergence of the two networks. Previous works have mainly focused on knowledge transfer in the spatial dimension, usually by reducing the distance between the two in a transformation space. At each training iteration, the added loss term for the current student model S_t can be written as follows:
L_spatial = ∑_(s_l,t_l)∈𝒞𝒟(Map_s(F_s_l^t),Map_t(F_t_l))
where Map() is the transformation function that maps the feature map to a more representative space, 𝒞 is the association set of feature maps that need to be constrained, and 𝒟 is the distance measurement function.
The overall loss term for the student network is:
L_student = L_task + λ L_spatial
Under this framework, the student can acquire knowledge from both the teacher network and the data simultaneously. However, these methods focus only on the intermediate outputs of the student network while neglecting its progressive learning process. Our motivation comes from the regular pattern of change over time: distilling in the temporal dimension may be more effective. In our method, we exploit the spatiotemporal information in student learning and use the teacher model to guide the whole learning process. More intuitively, a Markov chain (MC) view can be used to tell the difference between our method and others. The training of existing feature-based KD methods is a first-order MC, in which the current training depends only on the last state.
In our framework, the current training is affected by the k previous states, and the first-order MC becomes a k-th order MC (shown in Eq. <ref>).
P(S_t|S_t-1) ⇒ P(S_t|S_t-1,…,S_t-k)
§.§ Definitions
As mentioned before, we attempt to use previous knowledge to assist current learning. Specifically, we view the training of the student network as a temporal process and plan it as a memorize-review mode (shown in Fig <ref>). Here we give some definitions to better explain our method.
Action 1 (Memorize): As training progresses, the network gradually converges. We want the network to memorize its current state at certain times for future review. This action is achieved by saving the current model.
Action 2 (Train): This action is the same as general data-based training and continues throughout.
Action 3 (Review): Review the knowledge learned in the k previous memory nodes and utilize it to assist the current training. The implementation details are given in section <ref>.
Memory nodes: Perform Actions 1 and 2. The set of memory nodes is denoted by ℳ.
General nodes: Only perform Action 2. The set of general nodes is denoted by 𝒢.
Review nodes: Perform Actions 2 and 3. The set of review nodes is denoted by ℛ.
Memory interval: The number of general nodes between memory nodes, denoted by δ.
§.§ Review Mechanism
Given input data X, the student model S_i will have responses at different depths. Take layer s_l as an example. The knowledge increment between two adjacent temporal models can be represented as
Δ^i-1,i_s_l = |AT(F_s_l^i-1) - AT(F_s_l^i)|
where AT() is the transformation function that transforms a feature map into an attention map. The details of AT() will be discussed in Section <ref>. In short, Δ^i-1,i_s_l captures the cognitive difference between S_i-1 and S_i for the same input X.
When the training reaches a review node S_t,t∈ℛ, we calculate the increments among the k previous memory nodes and S_t. The increments will compose to a length k knowledge sequence:
knowledge_seq = (Δ^t-kδ,t-(k-1)δ_s_l,…,Δ^t-δ,t_s_l)
where δ denotes the memory interval. The value of δ indicates the number of general training epochs between two memory nodes. The knowledge sequence summarizes the learning process during the corresponding period and is where we extract spatiotemporal information. The main insight of TSKD is to extract the temporal information in this progressive training. In our method, we design a simple Conv-LSTM network to learn the features of the knowledge sequence. The detailed description of the Conv-LSTM is included in Section <ref>. More specifically, the sequence can be seen as "what has been learnt", and the Conv-LSTM gives a prediction of "what to learn next" based on the input sequence.
Δ_s_l^pred = ConvLSTM(knowledge_seq)
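A minimal PyTorch sketch of assembling this knowledge sequence at a review node is given below; the tensor shapes are hypothetical, and a channel dimension may still need to be added before feeding the Conv-LSTM.

```python
import torch

def attention_map(f):                        # AT(): channel-wise sum of squares (defined in the next subsection)
    return (f ** 2).sum(dim=1)               # (B, C, H, W) -> (B, H, W)

def knowledge_sequence(memory_feats, current_feat):
    """memory_feats: k feature maps of layer s_l saved at the memory nodes (oldest first);
    current_feat: the same layer's feature map at the current review node."""
    feats = memory_feats + [current_feat]
    increments = [(attention_map(feats[i + 1]) - attention_map(feats[i])).abs()
                  for i in range(len(memory_feats))]
    return torch.stack(increments, dim=1)    # (B, k, H, W): one increment per memory interval
```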
The remaining problem is the training of the Conv-LSTM network. There does not exist an optimal solution for the whole learning process of a specific neural network, due to the structural complexity and the data distribution. However, we already have a well-trained teacher network whose outputs can be used as an outline for the student. Thus, we design the following distillation mechanism: we calculate the increment between S_t and the teacher network T and use it as the Conv-LSTM's target. The increment is calculated as
Δ^abs_s_l,t_l = |AT(F_t_l)-AT(F_s_l^t)| s.t.(s_l,t_l)∈𝒞
where 𝒞 is the association set of feature maps, and the association strategy is simple one-to-one match in our method.
We call Δ^abs_s_l,t_l the absolute increment because it implies "what needs to be learnt". More importantly, Δ^abs_s_l,t_l is a learning goal that gradually changes as training continues. The student network's learning process is advantageously constrained under the guidance of this dynamic target.
For the current review node, the spatiotemporal loss can be calculated as:
L_ST = ∑_(s_l,t_l)∈𝒞 MSE(Δ_s_l^pred,Δ^abs_s_l,t_l)
where MSE() denotes the mean square error. The summation means that the review is performed at multiple depths of the student network. Note that training the Conv-LSTM also propagates gradients to the student network parameters. The overall learning objective of the network at a review node can be represented as:
L_student = L_task + λ L_ST
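The review-node objective can be sketched as below, where conv_lstm stands for the small encoder-forecaster network described in the following subsection and is assumed to return a (B, H, W) predicted increment; feature maps are assumed to be spatially matched.

```python
import torch
import torch.nn.functional as F

def review_loss(conv_lstm, knowledge_seq, student_feat, teacher_feat):
    delta_pred = conv_lstm(knowledge_seq)                    # "what to learn next"
    at = lambda f: (f ** 2).sum(dim=1)                       # attention map, as above
    delta_abs = (at(teacher_feat) - at(student_feat)).abs()  # absolute increment (dynamic target)
    return F.mse_loss(delta_pred, delta_abs)

# total objective at a review node:
#   loss = task_loss + lam * sum(review_loss(...) for each matched layer pair)
```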
Fig <ref> gives an overview of the proposed method. The full training strategy is summarized in Algorithm <ref>.
§.§ Attention Transfer and Conv-LSTM
There are two key components in the review: the convolutional long short-term memory network (Conv-LSTM) and attention transfer (AT()).
AT() utilizes the insight of [30]: it is a transformation function that maps the 3D feature tensor F ∈ℝ^C× H× W to a 2D attention map F_sum∈ℝ^H× W. In our method, F is flattened by summing the squares along the channel dimension, which can be denoted as:
F_sum = ∑ _n=1^C|F_n|^2
Although various transformation functions have been proposed [25, 11] to map feature maps into a more knowledge-transferable space, we choose AT() because it is simple and intuitive. More importantly, it does not disrupt the temporal pattern contained in the original feature maps. The distribution of values in F_sum reflects the spatial attention of the network more clearly.
Conv-LSTM was first designed for the precipitation nowcasting problem [23]. The main difference between Conv-LSTM and a general LSTM is that the element-wise operations are replaced by convolutional operations. Thus, Conv-LSTM can extract spatial features of the input data. The reason we choose Conv-LSTM as the extractor is that tools such as ARIMA are not sufficient to deal with high-dimensional data.
Given the input sequence (Δ^t-kδ,t-(k-1)δ_s_l,…,Δ^t-δ,t_s_l), taking LSTM step t as an example, the input gate i_t, forget gate f_t, cell state 𝒞_t, output gate o_t, and hidden state ℋ_t are calculated as:
i_t = σ (W_Δ i * Δ^t-δ,t_s_l + W_h i * ℋ_t-1+W_ci∘𝒞_t-1+b_i),
f_t = σ (W_Δ f * Δ^t-δ,t_s_l + W_h f * ℋ_t-1+W_cf∘𝒞_t-1+b_f),
𝒞_t =f_t∘𝒞_t-1 + i_t∘ tanh (W_Δ c * Δ^t-δ,t_s_l + W_h c * ℋ_t-1+b_c),
o_t = σ (W_Δ o * Δ^t-δ,t_s_l + W_h o * ℋ_t-1 + W_co∘𝒞_t + b_o),
ℋ_t = o_t∘ tanh(𝒞_t)
Similar to [23], we design the Conv-LSTM network in an encoding-forecasting style by stacking Conv-LSTM layers. The initial states and cell outputs of the forecaster are copied from the last state of the encoder. The final prediction is obtained by concatenating all the states in the forecaster and feeding them into a 1×1 convolutional layer. However, unlike general spatiotemporal sequence forecasting problems, the knowledge increment sequence has a smaller scale and simpler features. Thus, our Conv-LSTM is a simplified version that has only one layer in the encoder and one layer in the forecaster.
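For concreteness, a minimal single Conv-LSTM cell consistent with the gate equations above is sketched here; the peephole terms (W_c∘𝒞) are omitted for brevity, and the hyper-parameters are placeholders rather than the settings used in the paper.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, kernel=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel, padding=kernel // 2)

    def forward(self, x, state):
        h, c = state                                   # hidden and cell states
        gates = self.conv(torch.cat([x, h], dim=1))    # all four gates from one convolution
        i, f, g, o = gates.chunk(4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)                  # cell update
        h = o * torch.tanh(c)                          # hidden update
        return h, c
```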
§ EXPERIMENTS
§.§ Experimental settings
The loss weight λ, the number of memory nodes k, and the memory interval δ are important hyper-parameters in the review stage. For image classification, we set λ = 1, k=3, and δ=5. For object detection, we set λ = 0.4, k=3, and δ = 1. The influence of different settings of these parameters is explored further in the ablation study.
All experiments are implemented in PyTorch [26] using an A6000 GPU.
§.§ Image Classification
Datasets (1) CIFAR-100 [15] comprises 50,000 training images, with 500 images per class, and 10,000 test images. (2) ImageNet [5] is considered the most challenging dataset for classification, offering 1.2 million images for training and 50,000 images for validation across 1,000 classes.
Implementation Details For the CIFAR-100 dataset, we conduct experiments with different network architectures such as ResNet[10] and WideResNet[31]. Our training settings are the same as [25], except for scaling up the initial learning rate linearly and setting the batch size as recommended in [8].
To be more specific, we train all models for 240 epochs and decay the learning rate by 0.1 every 30 epochs after the first 150 epochs. The initial learning rate is set to 0.1 for other models. The batch size for all models is 128. We train each model three times and report the mean accuracy. In the interest of fairness, results of previous methods are either taken from previous papers (when the training setting matched ours) or obtained using author-released code with our training settings.
For ImageNet, we adopt the standard training process, which involves training the model for 100 epochs and decaying the learning rate every 30 epochs. The initial learning rate is set to 0.1, and the batch size is set to 256.
Results on CIFAR-100 Table <ref> presents the results on CIFAR-100. We have categorized previous works into different groups based on their main idea. KD is the only method that employs logits, while the others mainly utilize the spatial information in feature maps. In contrast, our method utilizes the spatiotemporal features extracted during the training process. It outperforms all previous methods in every group.
Results on ImageNet We also conducted additional experiments on ImageNet to further validate our approach. Specifically, we experimented with two distillation settings: from ResNet50 to MobileNet [13] and from ResNet34 to ResNet18. The experimental results are reported in Table <ref> and Table <ref>. Our method achieves competitive results. In the setting from ResNet34 to ResNet18, the gap between the student and teacher models had already been reduced to a very small value of 1.61 by the previous best method. Nevertheless, we were able to further reduce this gap to 1.41, resulting in a 14% relative performance improvement.
§.§ Object Detection
In addition to the classification task, we also applied our method to the object detection task. For this task, we distilled the output features of the teacher and student backbones, following a similar procedure as in the classification task. We evaluated our method on the widely used COCO2017 dataset [18], used the best pre-trained model provided by Detectron2 as the teacher, and trained the student models using standard training policies [27].
However, we found that the number of training epochs for a classical detection network on the COCO2017 training set is relatively small compared to classification networks (usually hundreds). This makes it difficult to deploy the memorize-review pipeline, and applying TSKD alone can hardly achieve outstanding performance. Thus, we introduce ReviewKD [4] as our strong baseline to obtain satisfactory results. It can be observed that our TSKD brings a further boost to the AP metrics on the COCO2017 validation set.
§.§ Ablation Studies
Feature maps as knowledge sequence. In our distillation method, we extract spatiotemporal features from the increment sequence rather than the feature map sequence. The reason we choose increments is that they filter out irrelevant information in the feature maps, so the network pays more attention to what is new in progressive learning. The experiments show that using the feature maps themselves as sequences also brings improvement, but the increment groups have better performance (Table <ref>).
Effects of memory interval. The memory interval δ determines how many epochs of general training are performed between memory nodes. When δ is relatively high, the learning period recorded in the knowledge increment sequence will become longer. This may make the review more difficult. Different settings of δ are explored in Table <ref>.
Effects of number of memory nodes. To investigate how many memory nodes are appropriate in one review, we compare different settings of k in Table <ref>. Obviously, the more memory nodes that are reviewed, the longer the sequence is. However, given the same training time, the frequency of review decreases accordingly. On the other hand, too few memory nodes can make the temporal property in the sequence insufficiently clear. The R56-R20 experiment shows the highest accuracy when k=6.
§ CONCLUSION
In this paper, we present a novel knowledge distillation method named TSKD. We found a temporal pattern in the evolution of network knowledge. Motivated by this observation, we recast the original training as a memorize-review mode and used the teacher as a temporal supervisor.
Our method achieves competitive performance against other feature-based distillation methods.
A limitation of this research is the lack of validation with other transformation functions and spatiotemporal feature extractors.
For future work, more functions and extractors will be explored in our framework.
§ REFERENCES
[1] Sungsoo Ahn, Shell Xu Hu, Andreas Damianou, Neil D Lawrence, and Zhenwen Dai. Variational
information distillation for knowledge transfer. In Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition, pages 9163–9171, 2019.
[2] George EP Box and David A Pierce. Distribution of residual autocorrelations in autoregressive-integrated
moving average time series models. Journal of the American statistical Association, 65(332):1509–1526,
1970.
[3] Liang-Chieh Chen, George Papandreou, Florian Schroff, and Hartwig Adam. Rethinking atrous convolution
for semantic image segmentation. arXiv preprint arXiv:1706.05587, 2017.
[4] Pengguang Chen, Shu Liu, Hengshuang Zhao, and Jiaya Jia. Distilling knowledge via knowledge review. In
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5008–5017,
2021.
[5] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical
image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255.
IEEE, 2009.
[6] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas
Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth
16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
[7] Ross Girshick. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pages
1440–1448, 2015.
[8] Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew
Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv
preprint arXiv:1706.02677, 2017.
[9] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE
international conference on computer vision, pages 2961–2969, 2017.
[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition.
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
[11] Byeongho Heo, Jeesoo Kim, Sangdoo Yun, Hyojin Park, Nojun Kwak, and Jin Young Choi. A comprehensive
overhaul of feature distillation. In Proceedings of the IEEE/CVF International Conference on
Computer Vision, pages 1921–1930, 2019.
[12] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv
preprint arXiv:1503.02531, 2015.
[13] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco
Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision
applications. arXiv preprint arXiv:1704.04861, 2017.
[14] Mingi Ji, Byeongho Heo, and Sungrae Park. Show, attend and distill: Knowledge distillation via attention-based
feature matching. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35,
pages 7945–7952, 2021.
[15] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
[16] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional
neural networks. Communications of the ACM, 60(6):84–90, 2017.
[17] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. nature, 521(7553):436–444, 2015.
[18] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár,
and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014:
13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages
740–755. Springer, 2014.
[19] Wonpyo Park, Dongju Kim, Yan Lu, and Minsu Cho. Relational knowledge distillation. In Proceedings of
the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3967–3976, 2019.
[20] Nikolaos Passalis and Anastasios Tefas. Probabilistic knowledge transfer for deep representation learning.
CoRR, abs/1803.10837, 1(2):5, 2018.
[21] Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You only look once: Unified, real-time
object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages
779–788, 2016.
[22] Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua
Bengio. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.
[23] Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional
lstm network: A machine learning approach for precipitation nowcasting. Advances in neural
information processing systems, 28, 2015.
[24] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition.
arXiv preprint arXiv:1409.1556, 2014.
[25] Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive representation distillation. arXiv preprint
arXiv:1910.10699, 2019.
[26] Vincent Vanhoucke, Andrew Senior, and Mark Z Mao. Improving the speed of neural networks on cpus.
2011.
[27] Tao Wang, Li Yuan, Xiaopeng Zhang, and Jiashi Feng. Distilling object detectors with fine-grained feature
imitation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,
pages 4933–4942, 2019.
[28] Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations
for deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern
recognition, pages 1492–1500, 2017.
[29] Shan You, Tao Huang, Mingmin Yang, Fei Wang, Chen Qian, and Changshui Zhang. Greedynas: Towards
fast one-shot nas with greedy supernet. In Proceedings of the IEEE/CVF Conference on Computer Vision
and Pattern Recognition, pages 1999–2008, 2020.
[30] Sergey Zagoruyko and Nikos Komodakis. Paying more attention to attention: Improving the performance
of convolutional neural networks via attention transfer. arXiv preprint arXiv:1612.03928, 2016.
[31] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146,
2016.
[32] Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional
neural network for mobile devices. In Proceedings of the IEEE conference on computer vision and pattern
recognition, pages 6848–6856, 2018.
[33] Borui Zhao, Quan Cui, Renjie Song, Yiyu Qiu, and Jiajun Liang. Decoupled knowledge distillation. In
Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition, pages 11953–11962,
2022.
[34] Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia. Pyramid scene parsing
network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages
2881–2890, 2017.
|
http://arxiv.org/abs/2307.02331v1
|
20230705144138
|
Differential recall bias in estimating treatment effects in observational studies
|
[
"Suhwan Bong",
"Kwonsang Lee",
"Francesca Dominici"
] |
stat.ME
|
[
"stat.ME"
] |
Observational studies are frequently used to estimate the effect of an exposure or treatment on an outcome. To obtain an unbiased estimate of the treatment effect, it is crucial to measure the exposure accurately. A common type of exposure misclassification is recall bias, which occurs in retrospective cohort studies when study subjects may inaccurately recall their past exposure. Specifically, differential recall bias can be problematic when examining the effect of a self-reported binary exposure since the magnitude of recall bias can differ between groups. In this paper, we provide the following contributions: 1) we derive bounds for the average treatment effect (ATE) in the presence of recall bias; 2) we develop several estimation approaches under different identification strategies; 3) we conduct simulation studies to evaluate their performance under several scenarios of model misspecification; 4) we propose a sensitivity analysis method that can examine the robustness of our results with respect to different assumptions; and 5) we apply the proposed framework to an observational study, estimating the effect of childhood physical abuse on adulthood mental health.
§ INTRODUCTION
Observational studies are conducted to quantify the evidence of a potential causal relationship between an exposure or treatment and a given outcome. While numerous methods have been proposed to address confounding bias in observational studies, only a few have considered the challenges associated with accurately measuring exposure. One such challenge is the presence of recall bias, which refers to a systematic error that occurs when participants inaccurately recall or omit details of past experiences, potentially influenced by subsequent events. Recall bias is particularly problematic in studies relying on self-reporting, such as retrospective cohort studies. It can lead to exposure misclassification, which can manifest as random or differential misclassification <cit.>. Unlike random recall bias, differential recall bias occurs when the misclassification of exposure information varies according to the value of other study variables. Specifically, under differential recall bias, exposure is differentially under-reported (over-reported) depending on the outcome. In the childhood physical abuse and adult anger data that will be discussed as our example, adults tend to under-report their exposures to childhood abuse because they are hesitant to disclose their experiences, even in anonymous or confidential surveys, due to feelings of shame, guilt, or fear of retaliation. Also, it is possible that individuals who have experienced childhood abuse and suffer from anger issues may be more likely to report their abuse, as their anger may be related to unresolved trauma or emotional distress stemming from the abuse. However, it is important to note that the relationship between childhood abuse and adult anger is complex, and under-reporting of childhood abuse is a common problem that can vary depending on a range of individual and contextual factors <cit.>. In addition to differential recall bias, random recall bias occurs when inaccuracies in the reporting of past events are due to chance and not influenced by any specific factors. If the inaccuracies are equally likely to occur across the groups, then the bias may cancel out and the estimated treatment effect may be unbiased <cit.>. However, differential recall bias will likely lead to a biased estimate <cit.>.
While most observational studies may suffer from recall bias to some degree, retrospective studies are particularly prone to differential recall bias. In prospective cohort studies, researchers follow a group of individuals without the outcome and investigate whether they develop the outcome depending on their exposures. However, differential recall bias can still occur if there are differences in the accuracy of recalling past exposures between groups defined by certain confounders. If the confounders are adequately controlled for, the impact of differential recall bias can be minimized or eliminated. However, in retrospective cohort studies or case-control studies, differential recall bias arises from the groups defined by the outcome, and adjusting for confounders cannot reduce its impact. This paper will focus on methods for addressing differential recall bias in retrospective cohort studies.
Several studies have shown that differential recall bias can have a significant impact on associational measures <cit.>. Existing methods considered (differential) recall bias as a misclassification problem, and attempted to estimate an associational odds ratio. For instance, <cit.> proposed an approach for accounting for recall bias that uses a matrix correction in the context of a misclassification problem. However, the correcting matrix must be derived either from earlier studies or a validation study carried out on a subsample of the study subjects. <cit.> proposed a logistic regression method to assess the impact of recall bias on the conclusions by postulating a simpler misclassification model while adjusting for confounders. However, <cit.> only dealt with case-control studies when the exposure was underestimated for controls and overestimated for cases. The confounder adjustment also heavily depends on the logistic regression model; thus, the target estimand is restricted to a conditional odds ratio. To our knowledge, contributions regarding the effect of differential recall bias on measures with causal interpretations are scarce.
Accounting for measurement error in causal inference is important. However, studies on measurement error have typically focused on mismeasured covariates and misclassified outcomes. For example, previous studies have investigated measurement error in covariates <cit.> and misclassification of binary outcomes <cit.>. However, only a limited number of studies have addressed misclassified exposure, or recall bias. <cit.> proposed a nonparametric identification method for estimating the ATE with differential treatment measurement error. The method addresses both over-reporting and under-reporting measurement errors, but it relies on strict assumptions, such as no misclassification for compliant groups. Furthermore, their bounds for the ATE are derived from the true treatment assigning probability, which may be unknown when recall bias is present. <cit.> and <cit.> have shown that the exposure misclassification could significantly impact causal analysis. <cit.> compared several causal estimators for time-varying exposure reclassification cases, and <cit.> proposed a likelihood-based method that adjusts for exposure misclassification bias, but it relies on non-differential measurement error assumption. In summary, there is a gap in the literature on causal inference methods that can analytically quantify the impact of different recall biases.
The overall goal of this paper is to propose a set of robust estimators for estimating the average treatment effect (ATE) in retrospective cohort studies in the presence of differential recall bias. We provide three significant contributions to our research. First, we describe the impact of recall bias on estimating the ATE within the causal inference framework. We propose a model for recall bias, which allows for the ATE to be identified under assumptions. Furthermore, we investigate the impact of violation of the assumptions on this identification result. We also introduce more relaxed assumptions and discuss partial identification results in Section <ref>. Second, we present two different strategies for estimating the ATE – maximum likelihood estimation (MLE) and stratification. We develop an MLE method based on correctly specifying the exposure and outcome models. We propose three stratification methods that are more robust to model misspecification. When the magnitude of recall bias is extreme, stratification methods may not be effective. We further propose a nearest-neighbor combination method to deal with extreme recall bias. We demonstrate the performance and practicality of the proposed methods through simulation studies under various degrees of recall bias and model misspecifications. Finally, we introduce a novel sensitivity analysis approach to assess the impact of differential recall bias on our conclusion. This is a crucial and useful way to quantify the evidence, given that the degree of recall bias is typically unknown in practice. We demonstrate how the sensitivity analysis method can provide robust conclusions by applying it to the real data in Section <ref>.
§ NOTATION AND RECALL BIAS MODEL
§.§ Causal Inference Framework and Target Parameters
We start by introducing the causal inference framework for an observational study. We rely on the potential outcome framework <cit.> to establish causation between exposure and outcome. Assume N individuals in total. We denote X_i ∈𝒳⊆ℝ^d as the observed covariate vector for the ith individual. We let Z_i=1 indicate that individual i was exposed to a certain binary exposure, and Z_i=0 otherwise. We can define potential outcomes as follows: if Z_i=0, then individual i exhibits response Y_i(0); if Z_i=1, then individual i exhibits Y_i(1). Only one of the two potential outcomes can be observed, depending on the exposure of individual i. The response exhibited by individual i is Y_i = Z_i Y_i(1)+(1-Z_i) Y_i(0). In this paper, Y_i(0) and Y_i(1) are assumed to be binary: depending on the occurrence of the outcome, each potential outcome is equal to either 0 or 1.
Consider two assumptions: (1) unconfoundedness and (2) positivity. The unconfoundedness assumption means that the potential outcomes (Y_i(0), Y_i(1)) are conditionally independent of the treatment Z_i given X_i, i.e., (Y_i(0),Y_i(1)) ⊥⊥ Z_i | X_i. The positivity assumption means that the probability ℙ(Z_i=1|X_i) lies in (0,1). These assumptions together are often called strong ignorability <cit.>. We also adopt the Stable Unit Treatment Value Assumption <cit.> to identify causal effects; that is, the potential outcomes for each individual are not affected by the treatment status of other individuals.
Binary exposure is frequently investigated retrospectively to find the cause of the outcome in observational studies; thus, exposure to a risk factor is never randomized. A naive comparison of the prevalence of the outcome between the exposed and unexposed groups can be misleading due to confounding bias. The effect caused by the treatment on an individual i is defined as the difference Y_i(1)-Y_i(0). However, it is impossible to observe both Y_i(0) and Y_i(1) for any individual. Under the strong ignorability assumptions, it is possible to identify the average treatment effect (ATE). Thus, our parameter of interest is the ATE, τ = 𝔼[Y_i(1)] - 𝔼[Y_i(0)]. In some instances, we are interested in estimating the conditional average treatment effect (CATE) <cit.> at a given level of X_i = x for x ∈𝒳 as τ(x) = 𝔼[Y_i(1)|X_i=x] - 𝔼[Y_i(0)|X_i=x].
§.§ Recall Bias Model
Some observational studies, including retrospective cohort studies, collect exposure information retrospectively; thus, recall bias may occur when the exposures are self-reported. In this paper, we consider situations with differential recall bias where the exposure is under-reported (over-reported) differently depending on the outcome. In the presence of recall bias, the underlying true exposure Z_i is not observed. Instead, we observe the biased exposure Z_i^* affected by recall bias. If no recall bias exists, then Z_i = Z_i^*. We assume that only one of over-reporting and under-reporting occurs. For instance, in our childhood abuse example, exposure can only be under-reported when recall bias is present.
[Differential Recall Bias]
In under-reported cases, recall bias occurs independently with probability η_y(x) for individuals with Y_i=y, Z_i=1, and X_i=x, where y = 0, 1 and x ∈ 𝒳:
η_0(x) = P(Z_i^*=0 | Y_i=0, Z_i=1, X_i=x)
η_1(x) = P(Z_i^*=0 | Y_i=1, Z_i=1, X_i=x).
Similarly, in over-reported cases, recall bias occurs independently with probability ζ_y(x) for individuals with Y_i=y, Z_i=0, and X_i=x, where y = 0, 1 and x ∈ 𝒳:
ζ_0(x) = P(Z_i^*=1 | Y_i=0, Z_i=0, X_i=x)
ζ_1(x) = P(Z_i^*=1 | Y_i=1, Z_i=0, X_i=x).
Assumption <ref> proposes the recall bias model, which assumes that the occurrence and magnitude of the bias depend on the observed outcome Y_i and the covariates X_i. In essence, after stratifying the data based on X_i, the 2 × 2 contingency table of Y_i and Z_i can be subject to misclassification. Specifically, in cases where under-reporting occurs, some of the (Y_i=y, Z_i=1) individuals are categorized as (Y_i=y, Z_i^*=0) for each y, and the misclassification probability is parametrized as η_y(X_i). We can further simplify the recall bias model by removing the dependency on X_i.
The parameters η_0(x) and η_1(x) (ζ_0(x) and ζ_1(x)) represent the probabilities of under-reporting (over-reporting) depending on the outcome. In under-reported cases, Z_i=0 always implies Z_i^*=0, but Z_i=1 implies either Z_i^*=1 or Z_i^*=0. Therefore, recall bias occurs only when Z_i=1. Note that if there is no recall bias, then η_0(x)=η_1(x)=0 (ζ_0(x)=ζ_1(x)=0). For our child abuse example, discussed in Section <ref>, we use the parameter set (η_0(x), η_1(x)) since childhood abuse exposure is under-reported. The parameter η_0(x) (η_1(x)) can be seen as the proportion of adults with low (high) anger scores who fail to recall child abuse correctly.
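As an illustration, the following R sketch simulates under-reported exposures according to this model under the constant-bias simplification; all data-generating values (exposure prevalence, outcome model, η_0, η_1) are hypothetical and only serve to show the mechanism.

# Illustrative simulation of differential under-reporting (hypothetical values).
set.seed(1)
n     <- 5000
eta0  <- 0.10                                      # P(Z* = 0 | Y = 0, Z = 1)
eta1  <- 0.20                                      # P(Z* = 0 | Y = 1, Z = 1)
Z     <- rbinom(n, 1, 0.3)                         # true (unobserved) exposure
Y     <- rbinom(n, 1, plogis(-1 + 1.2 * Z))        # observed binary outcome
fail  <- rbinom(n, 1, ifelse(Y == 1, eta1, eta0))  # recall failure among the exposed
Zstar <- ifelse(Z == 1 & fail == 1, 0, Z)          # reported exposure
mean(Z) - mean(Zstar)                              # exposure prevalence is under-estimated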
§ IDENTIFICATION OF CAUSAL PARAMETERS
If recall bias is absent and the exposure is observed correctly, then τ and τ(x) can be identified under the strong ignorability assumptions. The quantities p_0(x) = E[Y_i(0) | X_i=x] and p_1(x) = E[Y_i(1) | X_i=x] based on the potential outcomes can be identified as p_1|0(x) and p_1|1(x), respectively. Moreover, the CATE and the ATE can be identified as τ(x) = p_1|1(x) - p_1|0(x) and τ = E_X[p_1|1(X_i)] - E_X[p_1|0(X_i)], where p_y|z(x) = P(Y_i=y | Z_i=z, X_i=x) for z=0,1, y=0,1, and x ∈ 𝒳.
In the presence of recall bias, if inference is made on the basis of the observed Z_i^* rather than Z_i, then we obtain a biased estimate because (Y_i(0), Y_i(1)) ⊥̸⊥ Z_i^* | X_i. To describe this, consider the probabilities based on the observed exposure Z_i^*, p_y|z^*(x) = P(Y_i=y | Z_i^*=z, X_i=x) for z=0,1, y=0,1, and x ∈ 𝒳. Then,
p_y|z(x) = p_yz(x) / {p_1z(x) + p_0z(x)} ≠ p_yz^*(x) / {p_1z^*(x) + p_0z^*(x)} = p_y|z^*(x)
where p_yz(x) = P(Y_i=y, Z_i=z | X_i=x) and p_yz^*(x) = P(Y_i=y, Z_i^*=z | X_i=x) for z=0,1, y=0,1, and x ∈ 𝒳. In this section, we examine the identification of the causal parameters in the presence of recall bias. Assume that the exposure is under-reported; properties for the over-reported case can be derived similarly.
The precise recall bias mechanism in real-life scenarios is often unknown. Assuming we lack precise knowledge of the recall bias parameter functions stated in Assumption <ref>, it becomes infeasible to identify the average treatment effect with certainty. However, if we can establish bounds on the recall bias parameter functions, we can potentially bound the target parameters. Under the recall bias model in Assumption <ref>, the following relationships hold.
p_11(x) = p_11^*(x) / (1-η_1(x)),
p_10(x) = p_10^*(x) - {η_1(x)/(1-η_1(x))} p_11^*(x),
p_01(x) = p_01^*(x) / (1-η_0(x)),
p_00(x) = p_00^*(x) - {η_0(x)/(1-η_0(x))} p_01^*(x).
The following proposition allows partial identification of the causal parameters if recall bias occurs with probability at most δ.
Under Assumption <ref>, suppose there exists a constant 0 ≤ δ < 1 such that 0 ≤ η_0(x), η_1(x) ≤ δ holds for all x ∈ 𝒳. Then the following inequalities hold for all x ∈ 𝒳:
p_11^*(x) / {p_11^*(x) + p_01^*(x)/(1-δ)} ≤ p_1|1(x) ≤ p_11^*(x) / {p_11^*(x) + (1-δ) p_01^*(x)}
{p_10^*(x) - (δ/(1-δ)) p_11^*(x)} / {p_10^*(x) + p_00^*(x) - (δ/(1-δ)) p_11^*(x)} ≤ p_1|0(x) ≤ p_10^*(x) / {p_10^*(x) + p_00^*(x) - (δ/(1-δ)) p_01^*(x)}.
This proposition can be used when we can constrain the occurrence probability of recall bias using domain knowledge. For instance, in the example of child abuse, existing literature allows us to restrict the probability of recall bias. This constraint can then be used to bound the target estimands.
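For concreteness, the following R sketch evaluates the bounds of this proposition for a single covariate stratum; the plugged-in values of the observed joint probabilities p_yz^*(x) and of δ are hypothetical, and the function name ate_bounds is ours.

ate_bounds <- function(p11s, p10s, p01s, p00s, delta) {
  r <- delta / (1 - delta)
  p11_lo <- p11s / (p11s + p01s / (1 - delta))            # lower bound for p_1|1(x)
  p11_hi <- p11s / (p11s + (1 - delta) * p01s)            # upper bound for p_1|1(x)
  p10_lo <- (p10s - r * p11s) / (p10s + p00s - r * p11s)  # lower bound for p_1|0(x)
  p10_hi <- p10s / (p10s + p00s - r * p01s)               # upper bound for p_1|0(x)
  c(lower = p11_lo - p10_hi, upper = p11_hi - p10_lo)     # bounds for tau(x)
}
ate_bounds(p11s = 0.15, p10s = 0.10, p01s = 0.25, p00s = 0.50, delta = 0.2)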
Some additional assumptions may be useful to narrow the bounds of the estimands. For example, when studying the potential impact of childhood abuse on mental health issues in adulthood, it is important to consider the possibility of individuals hiding or feeling shame about their previous experiences. Additionally, those who have mental health problems in adulthood may be more likely to under-report their history of abuse, as they may feel particularly affected by their experiences and may be hesitant to disclose them. This additional information enables us to assume that η_0(x) ≤ η_1(x).
Under Assumption <ref>,
(a) Suppose there exists a constant 0 ≤ δ < 1 such that 0 ≤ η_0(x) ≤ η_1(x) ≤ δ holds for all x ∈ 𝒳. Then the following inequalities hold for all x ∈ 𝒳:
p_1|1^*(x) ≤ p_1|1(x) ≤ p_11^*(x) / {p_11^*(x) + p_01^*(x)(1-δ)}
{p_10^*(x) - (δ/(1-δ)) p_11^*(x)} / {p_10^*(x) + p_00^*(x) - (δ/(1-δ)) p_11^*(x)} ≤ p_1|0(x) ≤ max{ p_1|0^*(x), {p_10^*(x) - (δ/(1-δ)) p_11^*(x)} / [p_10^*(x) + p_00^*(x) - (δ/(1-δ)){p_01^*(x) + p_11^*(x)}] }.
(b) Suppose there exists a constant 0 ≤ δ < 1 such that 0 ≤ η_1(x) ≤ η_0(x) ≤ δ holds for all x ∈ 𝒳. Then the following inequalities hold for all x ∈ 𝒳:
p_11^*(x) / {p_11^*(x) + p_01^*(x)/(1-δ)} ≤ p_1|1(x) ≤ p_1|1^*(x)
min{ p_1|0^*(x), {p_10^*(x) - (δ/(1-δ)) p_11^*(x)} / [p_10^*(x) + p_00^*(x) - (δ/(1-δ)){p_01^*(x) + p_11^*(x)}] } ≤ p_1|0(x) ≤ p_10^*(x) / {p_10^*(x) + p_00^*(x) - (δ/(1-δ)) p_01^*(x)}.
This proposition implies that, by assuming a relationship between the two parameters η_0(x) and η_1(x), we can narrow down either the upper or the lower bound of the ATE. Since τ(x) = p_1|1(x) - p_1|0(x) holds, we can partially identify the CATE using the two propositions above. If we can estimate p_yz^*(x) by a regression model or stratification, then the marginal ATE is also partially identified by averaging over the distribution of X_i.
We have partially identified the causal parameter when the exact recall bias parameter functions are unknown. We now further assume that the recall bias parameter functions are known. We can then point-identify the causal treatment effect parameter and obtain the following identification result.
Under Assumption <ref>, the following equality holds for all x ∈ 𝒳:
τ(x) = [p_11^*(x)/(1-η_1(x))] / [p_11^*(x)/(1-η_1(x)) + p_01^*(x)/(1-η_0(x))] - [p_10^*(x) - {η_1(x)/(1-η_1(x))} p_11^*(x)] / [p_10^*(x) - {η_1(x)/(1-η_1(x))} p_11^*(x) + p_00^*(x) - {η_0(x)/(1-η_0(x))} p_01^*(x)].
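A minimal R sketch of this point-identification formula, applying the relationships in (<ref>) to the observed-exposure joint probabilities of a single stratum; the inputs below are hypothetical values, and in practice p_yz^*(x) would be estimated from the data.

tau_point <- function(p11s, p10s, p01s, p00s, eta0, eta1) {
  p11 <- p11s / (1 - eta1)                  # recover p_11(x)
  p01 <- p01s / (1 - eta0)                  # recover p_01(x)
  p10 <- p10s - eta1 / (1 - eta1) * p11s    # recover p_10(x)
  p00 <- p00s - eta0 / (1 - eta0) * p01s    # recover p_00(x)
  p11 / (p11 + p01) - p10 / (p10 + p00)     # tau(x) = p_1|1(x) - p_1|0(x)
}
tau_point(p11s = 0.15, p10s = 0.10, p01s = 0.25, p00s = 0.50, eta0 = 0.1, eta1 = 0.2)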
In many situations, it is not feasible to model the recall bias parameter functions accurately. However, in some cases, domain knowledge allows us to make an assumption about the magnitude of recall bias and treat it as a constant. For example, in the context of our study on childhood abuse, previous research has reported approximately 50% false responses. In this case, we may assume that the differential recall bias parameters η_0(x) and η_1(x) remain constant and are not influenced by the covariates.
[Constant Differential Recall Bias]
The magnitude of recall bias for individual i does not depend on the covariates X_i. In under-reported cases, for all x ∈ 𝒳,
η_0(x) = η_0, η_1(x) = η_1.
Similarly, in over-reported cases, for all x ∈ 𝒳,
ζ_0(x) = ζ_0, ζ_1(x) = ζ_1.
Throughout the remainder of this paper, we further adopt Assumption <ref> by considering the recall bias parameter functions as constants. By making this assumption, we can reliably estimate the treatment effects even in the presence of recall bias. We will introduce several methods to estimate the ATE with a given value of (η_0, η_1) in Section <ref>.
§ METHODS FOR RECOVERING THE TREATMENT EFFECTS IN THE PRESENCE OF RECALL BIAS
In this section, we propose two estimation methods that provide consistent estimates of the ATE in the presence of recall bias and confounding: (1) maximum likelihood estimation and (2) stratification. We suggest three stratification techniques for the stratification method: (1) propensity score stratification, (2) prognostic score stratification, and (3) blocking. Furthermore, we discuss the nearest-neighbor combination method used to address the problems in the stratification method with recall bias.
For a given value of (η_0, η_1), the ML-based method requires correctly specified models for the exposure and the two potential outcomes to obtain a consistent estimate of τ. The stratification-based method requires fewer model assumptions. Stratification can be implemented on the basis of either propensity scores or prognostic scores <cit.>. The propensity score stratification method requires a correctly specified exposure model, while the prognostic score stratification method needs a correctly specified outcome model. The blocking method suggested by <cit.> does not need any model assumption. In the following subsections, we discuss these estimation methods in more detail. For simplicity, we consider the under-reported exposure case with the tuning parameters (η_0, η_1); over-reported exposure cases with (ζ_0, ζ_1) can be dealt with similarly.
§.§ Maximum Likelihood Estimation (ML)
Consider the outcome models m_z(x) = P(Y_i=1 | Z_i=z, X_i=x) for z=0,1, which model the two probabilities p_1|1(x) and p_1|0(x). Moreover, the propensity score model e(x) is considered for the probability P(Z_i=1 | X_i=x). We discussed that the probability p_z(x) can be identified as p_1|z(x), which can thus be estimated by m_z(x). In the absence of recall bias, either m_z(x) or e(x) is required to be correctly specified to obtain a consistent estimate. However, in the presence of recall bias, neither m_z(x) nor e(x) can be estimated from the observable data due to the absence of the true Z_i. We can instead estimate the ATE as a function of the tuning parameters of the recall bias model.
The first method presented in this subsection uses maximum likelihood estimation. Both m_z(x) and e(x) must be specified to construct the likelihood function and obtain an estimate for given values of (η_0, η_1). Under Assumptions <ref> and <ref>, the joint probability P(Y_i, Z_i^* | X_i) of the observable variables can be represented as a function of m_0(x), m_1(x), and e(x). We assume models m_z(x; γ_z), z=0,1, and e(x; β) with parameters γ_z and β, respectively. For instance, logistic regressions can be used, such as m(Z, x; γ) = exp(γ_z Z + γ_x^T x)/{1+ exp(γ_z Z + γ_x^T x)} with m_1(x) = m(1, x; γ) and m_0(x) = m(0, x; γ), and e(x) = exp(β^T x)/{1 + exp(β^T x)}. These model parameters can be estimated by solving the following maximization problem,
θ̂ = (β̂, γ̂_0, γ̂_1) = argmax_β, γ_0, γ_1 ∑_i=1^N log P(Y_i = y_i, Z_i^* = z_i | X_i = x_i)
where
P(Y_i=1, Z_i^*=1 | X_i=x) = (1-η_1) m_1(x; γ_1) e(x; β)
P(Y_i=0, Z_i^*=1 | X_i=x) = (1-η_0) {1-m_1(x; γ_1)} e(x; β)
P(Y_i=1, Z_i^*=0 | X_i=x) = m_0(x; γ_0) {1-e(x; β)} + η_1 m_1(x; γ_1) e(x; β)
P(Y_i=0, Z_i^*=0 | X_i=x) = {1-m_0(x; γ_0)}{1-e(x; β)} + η_0 {1-m_1(x; γ_1)} e(x; β).
Once we obtain the estimate θ̂, we can compute m̂_z(x) = m_z(x; γ̂_z) and ê(x) = e(x; β̂). The marginal probabilities p_z are then estimated by taking sample averages of m̂_z(X_i) as
p̂_1^ML = (1/N) ∑_i=1^N m̂_1(X_i), p̂_0^ML = (1/N) ∑_i=1^N m̂_0(X_i).
Thus, the ATE can be estimated by τ̂^ML = p̂_1^ML - p̂_0^ML. The estimate θ̂ varies with the values of η_0 and η_1; thus, the estimator τ̂^ML can be regarded as a function of η_0 and η_1. Unlike usual situations in causal inference, both m_z(x; γ_z) and e(x; β) must be correctly specified to obtain a consistent estimate. If (η_0, η_1) are known and the three models for m_0, m_1, and e are correctly specified, then τ̂^ML is a consistent estimator of τ. Since τ̂^ML changes for different values of η_0 and η_1, we can consider it an estimator that recovers the true ATE when we believe η_0 and η_1 are correctly specified. However, η_0 and η_1 are unknown in practice, so considering a plausible region of (η_0, η_1) based on previous knowledge is recommended for data applications.
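A minimal R sketch of this estimator, assuming the shared-coefficient logistic forms from the example above; the function names (neg_loglik, est_ate_ml) are ours, X is an n × d covariate matrix (an intercept column can be added if desired), and the code only illustrates the observed-data likelihood, not the authors' implementation.

neg_loglik <- function(theta, Y, Zstar, X, eta0, eta1) {
  d     <- ncol(X)
  beta  <- theta[1:d]                         # propensity score parameters
  gamma <- theta[(d + 1):(2 * d + 1)]         # (gamma_z, gamma_x)
  e  <- plogis(X %*% beta)                    # e(x; beta)
  m1 <- plogis(gamma[1] + X %*% gamma[-1])    # m_1(x; gamma)
  m0 <- plogis(X %*% gamma[-1])               # m_0(x; gamma)
  p11 <- (1 - eta1) * m1 * e
  p01 <- (1 - eta0) * (1 - m1) * e
  p10 <- m0 * (1 - e) + eta1 * m1 * e
  p00 <- (1 - m0) * (1 - e) + eta0 * (1 - m1) * e
  lik <- ifelse(Y == 1 & Zstar == 1, p11,
         ifelse(Y == 0 & Zstar == 1, p01,
         ifelse(Y == 1 & Zstar == 0, p10, p00)))
  -sum(log(lik))
}
est_ate_ml <- function(Y, Zstar, X, eta0, eta1) {
  d   <- ncol(X)
  fit <- optim(rep(0, 2 * d + 1), neg_loglik, Y = Y, Zstar = Zstar, X = X,
               eta0 = eta0, eta1 = eta1, method = "BFGS")
  gamma <- fit$par[(d + 1):(2 * d + 1)]
  # tau_ML = mean of m1_hat(X_i) minus mean of m0_hat(X_i)
  mean(plogis(gamma[1] + X %*% gamma[-1])) - mean(plogis(X %*% gamma[-1]))
}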
§.§ Stratification
Stratification can alternatively be used to estimate τ by aiming to balance the covariate distributions between the exposed and unexposed groups. Compared with the ML method, stratification requires fewer model assumptions. In this section, we suggest three stratification techniques with different model assumption scenarios.
After stratifying, the estimation of τ is straightforward. Assume that there are I strata and that stratum i contains n_i individuals, so there are N = ∑_i=1^I n_i individuals in total. We index the jth individual in stratum i by ij for j=1,…, n_i. If we assume that (Y_ij(1), Y_ij(0)) ⊥⊥ Z_ij holds within each stratum i, then the stratum-specific probabilities p_1i = E_X|stratum i[p_1(X)] and p_0i = E_X|stratum i[p_0(X)] can be identified from the 2×2 table generated by stratum i. However, Z_ij^* is observed instead of Z_ij due to recall bias. Therefore, the recall bias adjustment using (<ref>) is required. For each generated stratum, assume that Table <ref> is observed.
Using the relationships (<ref>), the probabilities p_1i and p_0i are estimated by
p̂_1i = a_i/(a_i + b_i), p̂_0i = c_i/(c_i + d_i)
where a_i = a_i^*/(1-η_1), b_i = b_i^*/(1-η_0), c_i = {c_i^* - η_1(a_i^* + c_i^*)}/(1-η_1), and d_i = {d_i^* - η_0(b_i^* + d_i^*)}/(1-η_0).
The marginal probabilities can be estimated by the weighted average of these stratum-specific probabilities with weights s_i = n_i/N. Thus, p̂_1^S = ∑_i=1^I s_i p̂_1i and p̂_0^S = ∑_i=1^I s_i p̂_0i. Therefore, the ATE is estimated by τ̂^S = p̂_1^S - p̂_0^S.
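A minimal R sketch of τ̂^S, assuming a vector of stratum labels (for instance from one of the stratification techniques below) and a valid corrected table in every stratum; strata where the corrected table is implausible are handled later by the nearest-neighbor combination method. The function name est_ate_strat is ours.

est_ate_strat <- function(Y, Zstar, stratum, eta0, eta1) {
  per_stratum <- function(y, zs) {
    as_ <- sum(y == 1 & zs == 1); bs <- sum(y == 0 & zs == 1)   # a_i^*, b_i^*
    cs  <- sum(y == 1 & zs == 0); ds <- sum(y == 0 & zs == 0)   # c_i^*, d_i^*
    a  <- as_ / (1 - eta1);  b <- bs / (1 - eta0)               # corrected counts
    cc <- (cs - eta1 * (as_ + cs)) / (1 - eta1)
    dd <- (ds - eta0 * (bs + ds)) / (1 - eta0)
    c(p1 = a / (a + b), p0 = cc / (cc + dd), w = length(y))
  }
  est <- t(sapply(split(seq_along(Y), stratum),
                  function(idx) per_stratum(Y[idx], Zstar[idx])))
  w <- est[, "w"] / sum(est[, "w"])                             # weights s_i = n_i / N
  sum(w * est[, "p1"]) - sum(w * est[, "p0"])                   # tau^S
}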
§.§.§ Propensity Score Stratification
Among stratification-based methods, stratification on the propensity score is the most common approach <cit.>. The propensity score is the conditional probability of treatment assignment given the observed covariates, e(x) = P(Z_i=1 | X_i=x). We only have to assume a treatment model to create strata using the propensity score. In contrast to the estimator τ̂^ML, propensity score stratification does not use outcome models; therefore, τ̂^Prop requires fewer modeling assumptions.
However, similar to many stratification-based methods, this method relies on the assumption that stratification at least approximately achieves covariate balance. Furthermore, strata are formed on the basis of the biasedly estimated propensity score ê^*(x) = P(Z_i^*=1 | X_i=x) using Z^* instead of the unobservable Z. It is not feasible to compare the covariate distributions between the exposed (Z=1) and unexposed (Z=0) groups. Thus, constructing strata based on the propensity score can be problematic if η_0 and η_1 are significantly different from 0. We assume that recall bias is independent of the covariates conditioning on the observed outcome. Thus, if η_0 = η_1, then the covariate balance between the Z^*=1 and Z^*=0 groups is asymptotically the same as that between the Z=1 and Z=0 groups. The following proposition justifies propensity score stratification using ê^*(x) when η_0 = η_1.
Let e(x) = P(Z_ij=1 | X_ij=x) and e^*(x) = P(Z_ij^*=1 | X_ij=x) for x ∈ 𝒳. Assume Assumptions <ref> and <ref> hold.
(a) In the under-reported case with η_0=η_1=η, e^*(x) = (1-η) e(x) holds for all x ∈ 𝒳.
(b) In the over-reported case with ζ_0=ζ_1=ζ, e^*(x) = ζ + (1-ζ) e(x) holds for all x ∈ 𝒳.
If recall bias occurs with the same probability in the Y=0 and Y=1 groups (that is, the recall bias is not differential), then e^*(x) is also a balancing score. Thus, we can create valid strata using the biased propensity score obtained from the observable variables.
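A short R sketch of this strategy, justified by the proposition above when η_0 = η_1: fit a logistic model to the reported exposure Z^* and cut the fitted scores into quantile-based strata. The resulting labels can then be fed into the bias-corrected stratified estimator sketched earlier; the function name ps_strata and the default of ten strata are ours.

ps_strata <- function(Zstar, X, n_strata = 10) {
  ps <- fitted(glm(Zstar ~ X, family = binomial))      # biased propensity score e*(x)
  cut(ps, breaks = quantile(ps, probs = seq(0, 1, length.out = n_strata + 1)),
      include.lowest = TRUE, labels = FALSE)           # stratum labels 1, ..., n_strata
}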
§.§.§ Prognostic Score Stratification
Instead of using the propensity score, the prognostic score can be utilized to construct strata <cit.>. If there is a function Ψ(X_ij) such that Y_ij(0) ⊥⊥ X_ij | Ψ(X_ij), then we call Ψ(·) a prognostic score. Similar to propensity score stratification, prognostic score stratification permits the estimation of exposure effects within the exposed group. If (Y_ij(0), Y_ij(1)) ⊥⊥ X_ij | Ψ(X_ij) is further assumed, then prognostic score stratification is valid for estimating overall exposure effects. For instance, if m(Z_ij, X_ij; γ) = exp(γ_z Z_ij + γ_x^T X_ij)/{1+ exp(γ_z Z_ij + γ_x^T X_ij)} is assumed, then Ψ(X_ij) = γ_x^T X_ij is a prognostic score.
Like propensity score stratification, stratification on the prognostic score leads to a desirable, balanced structure. Since we do not know Ψ(X_ij) a priori, it has to be estimated from the data. As mentioned before, if η_0 = η_1, then the probabilities of recall bias occurrence in the Y=1 and Y=0 groups are the same. In this case, the prognostic score can be used for stratification when estimating the treatment effect. Since the exposure is under-reported, we know that Z_ij^* = 1 always implies Z_ij = 1. We first estimate γ_x by using the data of the Z^* = 1 group. Assuming that the recall bias occurs randomly, we then calculate the prognostic scores Ψ(X_ij) = γ̂_x^T X_ij for all individuals. The outcome model must be correctly specified for prognostic score stratification. Even though τ̂^Prog needs fewer modeling assumptions than τ̂^ML, a modeling assumption is still required. Moreover, score-based stratifications need the further assumption that η_0 = η_1 to be justified.
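A minimal R sketch of this construction (the function name prog_strata is ours): the outcome model is fitted on the Z^* = 1 group only, whose members are known to be truly exposed under under-reporting, and all individuals are then stratified by quantiles of the estimated linear predictor γ̂_x^T x.

prog_strata <- function(Y, Zstar, X, n_strata = 10) {
  fit   <- glm(Y ~ X, family = binomial, subset = (Zstar == 1))   # fit among Z* = 1
  score <- drop(X %*% coef(fit)[-1])       # prognostic score gamma_x^T x (intercept dropped)
  cut(score, breaks = quantile(score, probs = seq(0, 1, length.out = n_strata + 1)),
      include.lowest = TRUE, labels = FALSE)
}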
§.§.§ Blocking
In Sections <ref> and <ref>, proper scores based on modeling assumptions are required to create valid strata, and score-based stratifications could be problematic if η_0 and η_1 differ significantly. Stratification based on the propensity score requires a correctly specified treatment model, and the outcome model must be correctly specified to create strata with a prognostic score. The blocking method, in contrast, does not require any model assumption. Our goal is to make the covariates X_ij within each block i similar. If the covariates in each block are almost the same, then we assume that (Y_ij(0), Y_ij(1)) ⊥⊥ Z_ij holds in each block i. <cit.> used the blockingChallenge package in R to build blocks.
Suppose that there are N=Ik individuals. To make I blocks with size k, I individuals are first randomly chosen as template individuals for each block. The remaining I(k-1) individuals are then matched to template individuals using optimal matching at a ratio of (k-1):1. After the first blocking, separate an individual who is the most distant from the remaining k-1 individuals in each block. Setting these I individuals as template individuals for each block, optimal matching is used again to build I blocks. Repeating this process facilitates the implementation of an effective minimum within-block distance stratification. Repeat this process until no changes occur to obtain I blocks, which are strata with size k.
The blocking method does not require any model assumption. However, the covariates X_ij in each block i need to be similar; when achieving covariate balance is difficult or overlap is weak, such blocks cannot be obtained. If the covariate balance within blocks can be achieved easily, the blocking method is likely to provide a reliable estimator. In contrast to τ̂^ML, τ̂^Prop, and τ̂^Prog, an advantage of τ̂^B is that no modeling assumptions are necessary, so this stratification technique remains robust under model misspecification. We examine the performances of these estimators in Section <ref>.
§.§ Nearest-Neighbor Combination of Strata
Propensity score stratification with five strata is usually considered. <cit.> and <cit.> showed that five strata could remove 90% of confounding bias with a continuous outcome. However, <cit.> indicated that stratification with a fixed number of strata might lead to biased inference due to residual confounding, especially when the sample size is large. <cit.> and <cit.> also found that the construction of additional strata helps control residual confounding in the case of a binary outcome; thus, more than five strata can be more powerful and generate less biased results. Especially for the blocking method, we require a large number of strata to obtain an unbiased estimator. However, an estimation of the ATE using stratification with many strata can be problematic in the presence of extreme recall bias. Two severe problems in some strata can be encountered while estimating τ using the relationships (<ref>). For fixed values of η_0 and η_1, observable values a_i^*, b_i^*, c_i^*, d_i^* within certain strata may not be plausible, potentially due to random variations.
(a) c_i = {c_i^* - η_1(a_i^* + c_i^*)}/(1-η_1) or d_i = {d_i^* - η_0(b_i^* + d_i^*)}/(1-η_0) can be negative.
(b) a_i + b_i or c_i + d_i can be equal to 0.
Condition (a) indicates a situation where recall bias occurs less often than expected in the given stratum, leading to unstable performance of the estimator. In case (b), all individuals in the stratum are exposed (or all are unexposed), making it impossible to estimate the treatment effect within that stratum. These problems may occur when the number of strata is large (i.e., the number of individuals in a stratum is too small) or when the overlap is substantially weak. In order to address these challenges, we propose a novel method called nearest-neighbor strata combining. This method effectively tackles the issues at hand with a minimal loss in terms of bias. The fundamental idea is to merge a problematic stratum with its nearest-neighbor stratum; by selectively increasing the size of only the strata facing problems, rather than of all strata, we can effectively manage the issue. The specific technique for combining strata varies depending on the chosen stratification method.
First, strata based on propensity and prognostic scores have an order between strata, facilitating the easy combination of strata. After stratification based on score:
1. Order strata 1,2,…,I by their score quantiles.
2. Produce 2 × 2 tables for stratum i=1,2, … , I and compute a_i, b_i, c_i, and d_i for each stratum i.
3. For stratum i=1,2,…,I-1 with the problem, combine this stratum with the nearest-neighbor stratum i+1. If stratum I has a problem, then combine it with stratum I-1.
4. Repeat this combination method until every stratum does not have a problem.
However, we need another strategy for the blocking method due to the absence of order between blocks. In this case, we use the distances between the centroid of covariate vectors _ij. It is recommended to standardize each covariate to prevent large-scale variables from dominating the method. After blocking:
1. Compute the centroids of the covariate vectors X̄_i = (1/n_i) ∑_j=1^n_i X_ij for each block i = 1,2,…,I.
2. Produce 2 × 2 tables for block i=1,2, …, I and compute a_i, b_i, c_i, and d_i for each block.
3. For each block i=1,2,…,I with a problem, combine it with the nearest-neighbor block j = argmin_j ≠ i ‖X̄_i - X̄_j‖_2, where ‖·‖_2 denotes the Euclidean norm.
4. Repeat steps 1-3 until no block has a problem.
The final stratification result, which has no computing issues, is used to estimate τ̂^Prop, τ̂^Prog, and τ̂^B. Initially, we allocate an equal number of individuals to each stratum; nevertheless, the process of combining strata may result in varying stratum sizes. Under the assumption of a substantial overlap and by selecting an appropriate number of strata, denoted as I, the stratification method is expected to operate without encountering any issues. However, in Sections <ref> and <ref>, we employ the nearest-neighbor combination method as a precautionary measure to avoid potential challenges associated with differential recall bias correction.
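A minimal R sketch of the combination step for score-based strata (which are ordered by their score quantiles); the helper names are ours, the plausibility checks mirror conditions (a) and (b) above, and the stratum labels are assumed to be numeric and ordered, as produced by the quantile-based stratification sketched earlier.

combine_strata <- function(Y, Zstar, stratum, eta0, eta1) {
  problematic <- function(y, zs) {
    as_ <- sum(y == 1 & zs == 1); bs <- sum(y == 0 & zs == 1)
    cs  <- sum(y == 1 & zs == 0); ds <- sum(y == 0 & zs == 0)
    cc  <- cs - eta1 * (as_ + cs); dd <- ds - eta0 * (bs + ds)
    (cc < 0) || (dd < 0) || (as_ + bs == 0) || (cc + dd <= 0)   # conditions (a) and (b)
  }
  repeat {
    labs <- sort(unique(stratum))
    bad  <- labs[sapply(labs, function(l) problematic(Y[stratum == l], Zstar[stratum == l]))]
    if (length(bad) == 0 || length(labs) == 1) break
    l  <- bad[1]
    nb <- if (l < max(labs)) min(labs[labs > l]) else max(labs[labs < l])
    stratum[stratum == l] <- nb                 # merge with the nearest-neighbor stratum
  }
  stratum
}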
§ SIMULATION STUDIES
We conduct simulation studies to compare the performance of the proposed methods: (1) ML, (2) propensity score stratification, (3) prognostic score stratification, and (4) blocking. We consider various model specification scenarios to examine how they can successfully recover the true treatment effect under different model misspecification cases.
We consider four independent covariates, X_i = (X_i1, X_i2, X_i3, X_i4). X_i1 and X_i2 are binary covariates, whereas X_i3 and X_i4 are continuous covariates. We also consider four simulation scenarios where the exposure and outcome models are correctly specified or misspecified: (i) (cor, cor), (ii) (cor, mis), (iii) (mis, cor), and (iv) (mis, mis). For example, (mis, cor) means the exposure model is misspecified, but the outcome model is correctly specified. We randomly generate the exposure Z_i and the potential outcomes (Y_i(0), Y_i(1)) of each individual depending on the model specification scenario. However, due to recall bias, we cannot observe the true exposure Z_i; we observe the biased exposure Z_i^* instead. We assume that the exposure is under-reported for this simulation study. We generate Z_i^* based on the observed outcome Y_i = Z_i Y_i(1) + (1-Z_i) Y_i(0) (see the supplementary materials for the detailed simulation settings).
We compare the methods in terms of how well they recover the true ATE under the different model misspecification scenarios. In addition to this factor, we also consider two sample sizes (N=1000 or 2000) and two recall bias parameter settings ((η_0, η_1) = (0.1, 0.1) or (0.1, 0.2)). We fix the stratum size to 50 in the stratification methods combined with the nearest-neighbor combination method. Table <ref> shows the simulation results obtained from 1000 simulated datasets.
If both the exposure and potential outcome models are correctly specified, then τ̂^ML is the best estimator. τ̂^Prop and τ̂^Prog show similar performance in each scenario. Particularly, even in the treatment model misspecification scenario, stratification based on propensity score shows slightly better results than stratification based on prognostic score. Score-based stratifications perform agreeably, although η_0 and η_1 are different. τ̂^B provides the least biased estimate in the case of misspecification for both models. On the contrary, τ̂^ML shows the worst performance in (mis, mis) scenario. As expected, the model dependency for the blocking method is the smallest, and that for the ML method is the largest. This finding leads to a good result of the blocking estimator and a poor result of the ML estimator in the worst model misspecification scenario. Even though we require weak assumptions, the blocking estimator performs well throughout every model misspecification scenario. If the models are misspecified, τ̂^ML, τ̂^Prop, and τ̂^Prog are no longer consistent estimates of τ.
§ DATA EXAMPLE: CHILD ABUSE AND ADULT ANGER
In this section, we apply the causal inference framework to the motivating example of our research, which examines the causal relationship between childhood abuse and adult anger. We consider a retrospective cohort study to examine the question, “Does child abuse by either parent increase the likelihood of adult anger?". This study focuses on the publicly available 1993-94 sibling survey of the Wisconsin Longitudinal Study (WLS).
<cit.> indicated that the results might be affected by a tendency to under-report abuse, a common weakness in studies of child abuse and adult health. Adults are likely not to report their childhood abuse even when it occurred. Thus, exposure is generally under-reported, and recall bias of the self-reported exposure occurs in this situation. Inference based on the under-reported childhood abuse status may lead to a biased treatment effect estimate. <cit.> also asserted that a severe amount of false negative responses (approximately 50%) exists when reporting childhood abuse, whereas false positive responses are absent. The absence of false positive responses means that if Z_i = 0, then Z_i^* cannot be 1. Therefore, we may regard this recall bias as a differential recall bias with parameters (η_0, η_1).
We define the exposure variable by combining two responses asking about the presence or absence of abuse in their childhood by their father or mother. The responses were measured in the following four categories: “not at all", “a little", “some", and “a lot". Following <cit.>, the exposure of child physical abuse is defined as an indicator of “some" or “a lot" to at least one of the two responses. Since these two questions asked adults to recall their childhood exposures, there may exist a systematic bias in recalling exposures. The outcome was initially measured on the anger scale of <cit.>. We define the binary outcome variable of an individual as 1 only if his/her anger score is larger than or equal to 10, the 75th percentile of the measured anger scores. Seven covariates are considered: sex, age at the time of the interview, father's education, mother's education, parental income, farm background, and an indicator of parents' marital problems or single parent. See <cit.> for additional details regarding the WLS data.
We applied (1) ML, (2) propensity score stratification, (3) prognostic score stratification, and (4) blocking for the estimation of the ATE. The logistic outcome regression with the seven covariates without interaction terms is considered for the ML method. The same exposure model is used for propensity score stratification, whereas prognostic score stratification is based on the same outcome model. Ten strata are constructed by using the quantile values of the estimated score. A block size of 20 is used for the blocking method to build blocks.
Instead of assigning a specific probability, we may restrict the recall bias probability to a specific interval. From Proposition <ref>, we can then calculate bounds for the partially identified ATE. Figure <ref>(a) shows the lower and upper bounds of the ATE when 0 ≤ η_0, η_1 ≤ δ. As δ increases, the uncertainty about the ATE also increases.
Moreover, <cit.> highlighted that feelings of shame are strongly associated with the tendency to conceal experiences of childhood abuse, leading to under-reporting of such exposure. They also noted that individuals who exhibit aggressive behavior may be more prone to experiencing shame in relation to the abuse they suffered. This may be due to the belief that the abuse would not have happened if they had been stronger and more assertive, leading to a vicious cycle of shame and aggression. Based on this study, individuals who exhibit aggressive behaviors and have high anger scores are more likely to experience recall bias when reporting childhood abuse experiences, that is, η_0 ≤η_1. This allows us to narrow down the lower bounds of the ATE from Proposition <ref>, as presented in Figure <ref>(b). Hence, we can conclude that childhood abuse has a causal effect on high anger scores in individuals, even in the presence of differential recall bias, based on this assumption.
We may further narrow down the bounds of the ATE if we make a stronger assumption. <cit.> suggested that the probabilities of recall bias may not differ significantly based on an adult's anger score. This suggests that recall bias may not be strongly related to an individual's level of anger. If we accept this possibility, we can focus on the situation where 0 ≤η_0 = η_1 ≤ 0.5, which is a stronger assumption than our previous ones. The estimates for various values of η_0 and η_1 are shown in Table <ref>. The table also includes the estimates in the absence of recall bias (i.e., η_0=η_1=0). Figure <ref>(c) shows the estimates of the ATE across the line of η_0=η_1. All the estimates increase as η_0 = η_1 increases. Furthermore, the 95% confidence intervals of all methods do not contain 0. In Figure <ref>(d), we particularly focus on the results of the ML and blocking estimators when η_0=η_1. Even though the confidence interval of the blocking estimator is broader than that of the ML estimator, possibly due to the fact that it requires weak assumptions, the confidence interval still stays above 0. These results imply that the under-reporting issue does not alter the initial conclusion; on the contrary, it strengthens the conclusion that there is significant evidence that child abuse increases the adult anger score. We conclude that childhood abuses affect the anger scores of adults even in the presence of recall bias. Note that the variances of the estimates are obtained using 500 bootstrapped samples.
The ML method and stratification based on the propensity score share the same treatment model assumption and thus provide almost identical results. Stratification based on the prognostic score and the blocking method, which do not use a treatment model assumption, also provide results similar to each other, with estimates larger than those from the ML and propensity score stratification methods. Misspecification of the treatment model is one possible explanation for this difference. We may consider other treatment models, such as those including higher-order or interaction terms.
We also conduct a sensitivity analysis of recall bias with parameters 0 ≤ η_0, η_1 ≤ 0.5. Figure <ref> shows contour plots of the estimated ATEs for the values of η_0 and η_1 in this region. The figure reveals that most of the estimates are above 0. For the blocking method in particular, estimates are below 0 only in the small region where η_0 ≥ η_1/2 + 0.4 and 0 ≤ η_1 ≤ 0.2.
§ DISCUSSION
In this paper, we introduced a causal inference framework for observational studies while accounting for recall bias. We start by partially identifying the causal estimands and narrow the bounds of the ATE by gradually introducing stronger assumptions. If we consider two tuning parameters that characterize all possible combinations of recall bias, we can point identify the estimands. We also proposed two estimation approaches for recovering the true ATE in the presence of recall bias. Moreover, we suggested three stratification techniques for the stratification method. In particular, blocking can be used to reduce the risk of model misspecification. We also proposed a nearest-neighbor combination method to improve the quality of stratification methods.
Our proposed approach has the following features. First, to the best of our knowledge, the proposed approach is the first attempt to estimate the ATE accounting for differential recall bias while adjusting for confounding bias. Second, we demonstrated that the causal effects could be identified under some assumptions. Third, we developed estimation and correction methods based on the identification results in causal inference. Finally, we not only provided estimation results in child abuse data, but we also conducted a sensitivity analysis, which can provide information regarding the amount of recall bias necessary to alter the causal conclusion. This condition will provide practical guidance for practitioners to examine the robustness of their findings.
One of the limitations is that the two tuning parameters are unknown in many natural situations. We assumed that the misclassified probabilities (η_0,η_1) or (ζ_0,ζ_1) are equal for all individuals, but this assumption may not be realistic. We show that the ATE can be partially estimated by providing a bound to recall bias probability. Researchers may deduce a reasonable bound for the ATE depending on the domain knowledge of recall bias probabilities. Moreover, unobserved covariates may exist, which lead to residual confounding bias. Sensitivity analyses are usually performed to deal with the unmeasured confounding bias, but most of the proposed methods are unsuitable in the presence of recall bias. This issue will be discussed in our future research. We hope that our work aids in analyzing the treatment effect under the misclassification of treatment assignment.
§ SOFTWARE
Supplementary materials containing the proofs, the nearest-neighbor combination method, and simulation studies are available online. The R-codes described in this paper are available at
<https://github.com/suhwanbong121/recall_bias_observational_study>.
§ ACKNOWLEDGMENTS
This work was supported by NIH grants (R01ES026217, R01MD012769, R01ES028033, 1R01ES030616, 1R01AG066793-01R01, 1R01ES029950, R01ES028033-S1), Alfred P. Sloan Foundation (G-2020-13946), Vice Provost for Research at Harvard University (Climate Change Solutions Fund) and the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIT) (2021R1C1C1012750).
An embarrassingly parallel optimal-space cardinality estimation algorithm
Emin Karayel
==========================================================================
In 2020 Błasiok (ACM Trans. Algorithms 16(2) 3:1-3:28) constructed an optimal space streaming algorithm for the cardinality estimation problem with the space complexity of 𝒪(ε^-2 ln(δ^-1) + ln n), where ε, δ and n denote the relative accuracy, failure probability and universe size, respectively.
However, his solution requires the stream to be processed sequentially.
On the other hand, there are algorithms that admit a merge operation; they can be used in a distributed setting, allowing parallel processing of sections of the stream, and are highly relevant for large-scale distributed applications.
The best-known such algorithm, unfortunately, has a space complexity exceeding Ω(ln(δ^-1) (ε^-2 lnln n + ln n)).
This work presents a new algorithm that improves on the solution by Błasiok, preserving its space complexity, but with the benefit that it admits such a merge operation, thus providing an optimal solution for the problem for both sequential and parallel applications.
Orthogonally, the new algorithm also improves algorithmically on Błasiok's solution (even in the sequential setting) by reducing its implementation complexity and requiring fewer distinct pseudo-random objects.
§ INTRODUCTION
In 1985 Flajolet and Martin <cit.> introduced a space-efficient streaming algorithm for the estimation of the count of distinct elements in a stream a_1,...,a_m whose elements are from a finite universe U.
Their algorithm does not modify the stream, observes each stream element exactly once and its internal state requires space logarithmic in n = |U|.
However, their solution relies on the model assumption that a given hash function can be treated like a random function selected uniformly from the family of all functions with a fixed domain and range.
Despite the ad-hoc assumption, their work spurred a large number of publications[Pettie and Wang <cit.> summarized a comprehensive list.], improving the space efficiency and runtime of the algorithm.
In 1999 Alon et al. <cit.> identified a solution that avoids the ad-hoc model assumption.
They use 2-independent families of hash functions, which can be seeded by a logarithmic number of random bits in U while retaining a restricted set of randomness properties.
Their refined solution was the first rigorous Monte-Carlo algorithm for the problem.
Building on their work, Bar-Yossef et al. in 2002 <cit.>, then Kane et al. in 2010 <cit.> and lastly, Błasiok in 2020 <cit.>[An earlier version of Błasiok's work was presented in the ACM-SIAM Symposium on Discrete Algorithms in 2018. <cit.>] developed successively better algorithms achieving a space complexity of 𝒪(ε^-2 ln(δ^-1) + ln n), which is known to be optimal <cit.>.
These algorithms return an approximation Y of the number of distinct elements |A| (for A := {a_1,…,a_m}) with relative error ε and success probability 1 - δ, i.e.:
P(|Y - |A|| ≤ ε |A|) ≥ 1-δ
where the probability is only over the internal random coin flips of the algorithm but holds for all inputs.
Unmentioned in the source material is the fact that it is possible to run the older algorithms by Alon et al. and Bar-Yossef et al. in a parallel mode of operation.
This is due to the fact that the algorithms make the random coin flips only in a first initialization step, proceeding deterministically afterwards and that the processing step for the stream elements is commutative.
For example, if two runs for sequences a and b of the algorithm had been started with the same coin flips, then it is possible to introduce a new operation that merges the final states of the two runs and computes the state that the algorithm would have reached if it had processed the concatenation of the sequences a and b sequentially.
Note that the elements of the sequences are not required to be disjoint.
This enables processing a large stream using multiple processes in parallel.
The processes have to communicate at the beginning and at the end to compute an estimate.
The communication at the beginning is to share random bits, and the communication at the end is to merge the states.
Because there is no need for communication in between, the speed-up is optimal with respect to the number of processes, such algorithms are also called embarrassingly parallel <cit.>.
This mode of operation has been called the distributed streams model by Gibbons and Tirthaputra <cit.>.
Besides the distributed streams model, such a merge operation allows even more varied use cases, for example, during query processing in a Map-Reduce pipeline <cit.> or as decomposable/distributive aggregate functions within OLAP cubes <cit.>. Figure <ref> illustrates two possible modes of operation (among many) enabled by a merge function.
However, an extension with such a merge operation is not possible for the improved algorithms by Kane et al. and Błasiok.
This is because part of their correctness proof relies inherently on the assumption of sequential execution, in particular, that the sequence of states is monotonically increasing, which is only valid in the sequential case.
This work introduces a new distributed cardinality estimation algorithm which supports a merge operation with the same per-process space usage as the optimal sequential algorithm by Błasiok: 𝒪(ε^-2 ln(δ^-1) + ln n).
Thus the algorithm in this work has the best possible space complexity in both the sequential and distributed streaming model.[That the complexity is also optimal for the distributed setting is established in Section <ref>.] (Table <ref> provides a summary of the algorithms mentioned here.)
The main idea was to modify the algorithm by Błasiok into a history-independent algorithm. This means that the algorithm will, given the same coin-flips, reach the same state independent of the order in which the stream elements arrive, or more precisely, independent of the execution tree as long as its nodes contain the same set of elements.
This also means that the success event, i.e., whether an estimate computed from the state has the required accuracy, only depends on the set of distinct stream elements encountered (during the execution tree) and the initial random coin flips.
As a consequence and in contrast to previous work, the correctness proof does not rely on bounds on the probability of certain events over the entire course of the algorithm, but can be established independent of past events.
Błasiok uses a pseudo-random construction based on hash families, expander walks, an extractor based on Parvaresh-Vardy codes <cit.> and a new sub-sampling strategy <cit.>[Lem. 39]. I was able to build a simpler stack that only relies on hash families and a new two-stage expander graph construction, for which I believe there may be further applications. To summarize — the solution presented in this work has two key improvements:
* Supports the sequential and distributed streaming model with optimal space.
* Requires fewer pseudo-random constructs, i.e., only hash families and expander walks.
In the next Section I will briefly discuss the history of the algorithm by Błasiok, because it is best understood as a succession of improvements starting from Alg. 2 by Bar-Yossef et al. In this context it will be possible to introduce the improvements in the new algorithm in more detail. After that, I present new results on expander walks (Section <ref>) needed in the new pseudo-random construction and a self-contained presentation of the new algorithm and its correctness proof (Sections <ref> and <ref>).
Concluding with a discussion of its optimality in the distributed setting (Section <ref>), its runtime complexity (Section <ref>) and a discussion of open research questions (Section <ref>).
The results obtained in this work have also been formally verified <cit.> using the proof assistant Isabelle <cit.>.
Isabelle has been used to verify many <cit.> advanced results from mathematics (e.g. the prime number theorem <cit.>) and computer science (e.g. the Cook-Levin theorem <cit.>).
For readers mainly interested in the actual results, the formalization can be ignored as the theorems and lemmas all contain traditional mathematical proofs.
Nevertheless, Table <ref> references the corresponding formalized fact for every lemma and theorem in this work.
§ BACKGROUND
The algorithm in this work is a refinement of the solution by Błasiok, Kane et al., and Alg. 2 by Bar-Yossef et al. This section introduces them briefly and describes the key improvements in this work, whenever the relevant concepts are introduced.
Bar-Yossef's algorithm relies on the fact that when r balls are thrown randomly and independently into b bins, the expected number of bins hit by at least one ball will be
b (1-(1-b^-1)^r) .
If r — the number of balls — is close to b, it is possible to invert Eq. <ref> to obtain an estimate for r by counting the number of hit bins. Bar-Yossef et al. were able to show that it is possible to choose the bins for each ball k-wise independently, where k is in 𝒪(ln(ε^-1)), instead of completely independently, because the expectation and variance of the number of hit bins converge exponentially fast to the corresponding values for the idealized case of independently choosing a random bin for each ball.
With that in mind we can imagine an algorithm choosing a hash function g randomly from a k-wise independent hash family from the universe U=[n] to [b] that maps each stream element using g into the b bins and tracks whether a bin was `hit' by a stream element.
To be able to deal with a situation where the number of distinct stream elements is much larger than b they introduce a sub-sampling strategy. This works by choosing a second pairwise-independent hash function f with a geometric distribution, i.e., the universe elements are assigned a level, where each universe element has a level ≥ 0. Only half of them have a level ≥ 1 and only a quarter of them have a level ≥ 2, etc.
They then choose to restrict the analysis to the universe elements with a given minimum level — the sub-sampling threshold — such that the cardinality of the stream elements in that part of the universe is close to the number of bins. To achieve that they not only store for a bin, whether a stream element was hashed into it, but also the maximal level of the stream elements mapping into it. This is the reason for the lnln n factor in the term ε^-2 lnln n in the space complexity of their algorithm.
They also run in parallel a second rough estimation algorithm. At the end they use the rough estimation algorithm to get a ball park for the number of stream elements and determine a sub-sampling threshold s, for which the number of stream elements is expected to be approximately the number of bins. They then count the number of bins hit by at least one stream element of that level or higher. By inverting Eq. <ref> it is then possible to estimate the number of stream elements with the given minimum level s. Scaling that number by 2^s gives an approximation of the total number of distinct stream elements.
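As an illustration only (not part of the original algorithms), the following R sketch shows this estimation step for a single table of per-bin maximum levels: count the bins hit at or above a chosen sub-sampling threshold s, invert Eq. <ref> and rescale by 2^s. The variable names are hypothetical.

estimate_cardinality <- function(bin_levels, s) {
  b    <- length(bin_levels)                   # number of bins
  hits <- sum(bin_levels >= s)                 # bins hit by elements of level >= s
  r    <- log(1 - hits / b) / log(1 - 1 / b)   # invert E[hits] = b (1 - (1 - 1/b)^r)
  2^s * r                                      # undo the geometric sub-sampling
}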
As mentioned in the introduction, the algorithm by Bar-Yossef can be extended with a merge operation, allowing it to be run in the distributed streams model. This essentially works by taking the maximum of the level stored in each bin and relies on the fact that max is a commutative, associative operation.
Kane et al. in 2010 found a solution to avoid the lnln n factor in the term ε^-2 lnln n of the space complexity, i.e., they were able to store a constant number of bits on average per bin instead of lnln n bits. To achieve this, instead of estimating the sub-sampling threshold at the end, they obtain a rough estimate for the cardinality of the set during the course of the algorithm. Whenever the rough estimate indicates a large enough sub-sampling threshold, the information in bins with smaller levels is not going to be needed. (Note that the estimate determined by the rough-estimation algorithm is monotone.) Besides dropping the data in bins with a maximal level below the current sub-sampling threshold, which I will refer to as the cut-level in the following, they only store the difference between the level of the element in each bin and the cut-level.
It is then possible to show that the expected number of bits necessary to store the compressed table values is on average 𝒪(1). To limit the space usage unconditionally the algorithm keeps track of the space usage for the table and, if it exceeds a constant times the table size, the algorithm will reach an error state deleting all information. To succeed they estimate the probability that the rough estimate is correct at all points during the course of the algorithm. Similarly they show that the space usage will be at most a constant times the bin count with high probability assuming the latter is true. To achieve that they rely on the fact that there are at most 𝒪(ln n) points, where the rough estimate increases, which enables a union bound to verify that the error state will not be reached at or at any point before the estimation step.
However, the bound on the number of changes in the rough estimator is only true when the algorithm is executed sequentially, and their analysis does not extend to the distributed streams model.
This is a point where the solution presented here distinguishes itself: The algorithm in this work does not use a rough estimation algorithm to determine the cut-level. Instead, a cut-level is initialized to 0 at the beginning and is increased if and only if the space usage would be too high otherwise. The algorithm never enters a failure state, preserving as much information as possible in the available memory. Because of the monotonicity of the values in the bins it is possible to show that the state of the algorithm is history independent. In the estimation step a sub-sampling threshold is determined using the values in the bins directly. This is distinct from the previously know methods, where two distinct data structures are being maintained in parallel. During the analysis it is necessary to take into account that the threshold is not independent of the values in the bins, which requires a slightly modified proof (see Lemma <ref>). The proof that the cut-level will not be above the sub-sampling threshold works by verifying that the cut-level (resp. sub-sampling threshold) will with high probability be below (resp. above) a certain threshold that is chosen in the proof depending on the cardinality of the set (see Subsection <ref>).
Another crucial idea introduced by Kane et al. is the use of a two-stage hash function, when mapping the universe elements from [n] to [b]. The first hash function is selected from a pairwise independent hash family mapping from [n] to [b̃] and the second is a k-wise independent family from [b̃] to [b]. The value b̃ is chosen such that w.h.p. there are no collisions during the application of the first hash function (for the universe elements above the sub-sampling threshold); in that case the two-stage hash function behaves like a single-stage k-wise independent hash function from [n] to [b]. This is achievable with a choice of b̃ ∈ 𝒪(b^2), thus requiring fewer random bits than a single-stage function from a k-wise independent family would.
The algorithm by Kane et al. discussed before this paragraph has a space complexity of 𝒪(ε^-2 + ln n) for a fixed failure probability (<1/2). It is well known that the success probability of such an algorithm can be improved by running l ∈ 𝒪( ln(δ^-1) ) independent copies of the algorithm and taking the median of the estimates of each independent run. <cit.>[Thm. 2.1] In summary, this solution, as pointed out by Kane et al., leads to a space complexity of 𝒪( ln(δ^-1) (ε^-2 + ln n)). Błasiok observed that this can be further improved: His main technique is to choose seed values of the hash functions using a random walk of length l in an expander graph instead of independently. This reduces the space complexity for the seed values of the hash functions. Similarly he introduces a delta compression scheme for the states of the rough-estimation algorithms [which also have to be duplicated l times]. In a straightforward manner his solution works only for the case where ε < (ln n)^-1/4. In the general case, he needs a more complex pseudo-random construction building on expander walks, Parvaresh-Vardy codes and a sub-sampling step. The main obstacle is the fact that deviation bounds for unbounded functions sampled by a random walk do not exist, even with doubly-exponential tail bounds.
In this work, because there is no distinct rough estimation data structure, the compression of its state is not an issue. However there is still the space usage for the cut-levels: Because maintaining a cut-level for each copy would require too much space, it is necessary to share the cut-level between at least lnln n copies at a time.
To achieve that I use a two-stage expander construction. This means that each vertex of the first stage expander encodes a walk in a second expander. (Here it is essential that the second expander is regular.) The length of the walk of the second expander is 𝒪(lnln n), matching the number of bits required to store a cut-level, while the length of the walk in the first (outer) expander is 𝒪( ln (δ^-1) (lnln n)^-1). Note that the product is again just 𝒪(ln(δ^-1)). The key difference is that the copies in the inner expander have to share the same cut-level, while the outer walk does not, i.e. there are 𝒪( ln (δ^-1) (lnln n)^-1) separate cut-levels. See also Figure <ref>. To work this out the spectral gaps have to be chosen correctly and I introduce a new deviation bound for expander walks (in Section <ref>.) This relies on a result mentioned in Impagliazzo and Kabanets from 2010 <cit.>, which shows a Chernoff bound for expander walks in terms of Kullback-Leibler divergence. Before we can detail that out let us first briefly introduce notation.
§ NOTATION AND PRELIMINARIES
This section summarizes (mostly standard) notation and concepts used in this work:
General constants are indicated as C_1, C_2, ⋯ etc. Their values are fixed throughout this work and are summarized in Table <ref>.
For n ∈ℕ, let us define [n] := {0, 1, …, n-1}. The notation [P] for a predicate P denotes the Iverson bracket, i.e., its value is 1 if the predicate is true and 0 otherwise.
The notation log x (resp. ln x) stands for the logarithm to base 2 (resp. e) of x ∈ ℝ_> 0. The notations ⌊x⌋ and ⌈x⌉ represent the floor and ceiling functions: ℝ → ℤ.
For a probability space Ω, the notation P_ω∼Ω(F(ω)) is the probability of the event {ω | F(ω)}, and E_ω∼Ω(f(ω)) is the expectation of f if ω is sampled from the distribution Ω, i.e., E_ω∼Ω(f(ω)) := ∫_Ω f(ω) dω. Similarly, Var f = E (f - E f)^2.
For a finite non-empty set S, U(S) is the uniform probability space over S, i.e., P({x}) = |S|^-1 for all x ∈ S. (Usually, we will abbreviate U(S) with S when it is obvious from the context.) All probability spaces mentioned in this work will be discrete, i.e., measurability will be trivial.
All graphs in this work are finite and are allowed to contain parallel edges and self-loops. For an ordering of the vertices of such a graph, it is possible to associate an adjacency matrix A = (a_ij), where a_ij is the count of the edges between the i-th to the j-th vertex. We will say it is undirected d-regular if the adjacency matrix is symmetric and all its row (or equivalently) column sums are d. Such an undirected d-regular graph is called a λ-expander if the second largest absolute eigenvalue of its adjacency matrix is at most d λ.
Given an expander graph G, we denote by Walk(G,l), the set of walks of length l. For a walk w ∈Walk(G,l) we write w_i for the i-th vertex and w_i,i+1 for the edge between the i-th and (i+1)-th vertex. Because of the presence of parallel edges, two distinct walks may have the same vertex sequence. As a probability space U(Walk(G,l)) corresponds to choosing a random starting vertex and performing an (l-1)-step random walk.
§ CHERNOFF-TYPE ESTIMATES FOR EXPANDER WALKS
The following theorem has been shown implicitly by Impagliazzo and Kabanets <cit.>:
Let G = (V,E) be a λ-expander graph and f a boolean function on its vertices, i.e., f : V → {0,1}, s.t.
μ = E_v∼U(V) f(v), 6λ ≤ μ and 2λ < ε < 1, then:
P_w∼Walk(G,l)( ∑_i∈[l] f(w_i) ≥ (μ + ε) l ) ≤ exp( - l D( μ + ε || μ + 2λ ) )
Especially, the restriction μ≥ 6 λ in the above result causes technical issues since usually one only has an upper bound for μ. The result follows in Impagliazzo and Kabanets work as a corollary from the application of their main theorem <cit.>[Thm. 1] to the hitting property established by Alon et al. <cit.> in 1995. It is easy to improve Theorem <ref> by using an improved hitting property:
Let G = (V,E) be a λ-expander graph and W ⊆ V, I ⊆ [l], and let μ := |W|/|V|, then:
P_w∼Walk(G,l)( ⋀_i∈I w_i ∈ W ) ≤ (μ(1-λ) + λ)^|I| ≤ (μ+λ)^|I|
The above theorem is shown by Vadhan <cit.> for the case I = [l]. It is, however, possible to extend the proof to the case I ⊊ [l].
To see this, it is important to note that the proof establishes that the desired probability is the l_1 norm of r := P (A P)^{l-1} u, where A is the transition matrix of the graph, P is a diagonal matrix whose diagonal entries are in {0,1} depending on whether the corresponding vertex is in the set W, and u is the vector each of whose components is |V|^{-1}. Note that u represents the stationary distribution of the random walk. If I is a strict subset of [l], then the above term for r needs to be corrected by removing the multiplications by P for the corresponding steps, i.e.:
r' := A^{k_0} P A^{k_1} P A^{k_2} ⋯ A^{k_{|I|-1}} P A^{k_{|I|}} u
where k_i is the distance between the i-th and the (i+1)-th index in I.[The 0-th index in I is defined to be 0 and the (|I|+1)-th index in I is defined to be l.] Because A u = u and because the application of A does not increase the l_1 norm, it is possible to ignore the first and last factors, i.e., it is enough to bound the l_1 norm of
x = P A^{k_1} P A^{k_2} ⋯ A^{k_{|I|-1}} P u .
This can be regarded as an |I|-step random walk, where the transition matrix for step i is A^{k_i}. (Note that k_i > 0 for 1 ≤ i ≤ |I|-1.) The proof of the mentioned theorem <cit.> still works in this setting if we take into account that A^k is itself the adjacency matrix of a λ-expander on the same vertex set. (Indeed, it is even a λ^k-expander.)
With the previous result, it is possible to obtain a new, improved version of Theorem <ref>:
Let G = (V,E) be a λ-expander graph and f a boolean function on its vertices, i.e., f : V → {0,1}, s.t.
μ = E_{v ∼ U(V)} f(v) and μ + λ ≤ γ ≤ 1; then:
P_{w ∼ Walk(G,l)}( ∑_{i ∈ [l]} f(w_i) ≥ γ l ) ≤ exp( - l D( γ || μ + λ ) )
This follows from Theorem <ref> and the generalized Chernoff bound <cit.>[Thm. 1].
Impagliazzo and Kabanets approximate the divergence D( γ || μ + λ ) by 2(γ - (μ+λ))^2. In this work, we are interested in the case where μ + λ→ 0, where such an approximation is too weak, so we cannot follow that approach. (Note that D(γ || μ + λ) can be arbitrarily large, while (γ - (μ + λ))^2 is at most 1.) Instead, we derive a bound of the following form:
Let G = (V,E) be a λ-expander graph and f a boolean function on its vertices, i.e., f : V → {0,1}, s.t.
μ = E_{v ∼ U(V)} f(v) and μ + λ ≤ γ < 1; then:
P_{w ∼ Walk(G,l)}( ∑_{i ∈ [l]} f(w_i) ≥ γ l ) ≤ exp( - l (γ ln((μ + λ)^{-1}) - 2e^{-1}) )
The result follows from Theorem <ref> and the inequality:
D( γ || p ) ≥ γ ln(p^{-1}) - 2e^{-1} for 0 < γ < 1 and 0 < p < 1.
To verify that note:
D( γ || p ) = γ ln γ + γ ln(p^{-1}) + (1-γ) ln(1-γ) + (1-γ) ln((1-p)^{-1})
≥ -e^{-1} + γ ln(p^{-1}) - e^{-1} + 0 = γ ln(p^{-1}) - 2e^{-1}
using x ln x ≥ -e^-1 for x > 0 (and ln y ≥ 0 if y ≥ 1).
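As a quick numerical sanity check of this inequality (illustrative only, not part of the formal development), the following Python snippet compares the binary Kullback-Leibler divergence with the stated lower bound at a few sample points; the function names are ad hoc.

from math import log, e

def kl(gamma, p):
    # binary Kullback-Leibler divergence D(gamma || p)
    return gamma * log(gamma / p) + (1 - gamma) * log((1 - gamma) / (1 - p))

def lower_bound(gamma, p):
    # gamma * ln(1/p) - 2/e
    return gamma * log(1 / p) - 2 / e

for gamma, p in [(0.5, 0.01), (0.9, 0.3), (0.05, 0.04)]:
    assert kl(gamma, p) >= lower_bound(gamma, p)
    print(gamma, p, kl(gamma, p), lower_bound(gamma, p))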
An application of the above inequality where the classic Chernoff bound by Gillman <cit.> would not be useful is establishing a failure probability for the repetition of an algorithm that already has a small failure probability. For example, if an algorithm has a failure probability of δ^*, then it is possible to repeat it O(ln(δ^{-1})/ln((δ^*)^{-1})) times to achieve a failure probability of δ. (This is done in Section <ref>.) Another consequence is a deviation bound for unbounded functions with a sub-gaussian tail bound:
[Deviation Bound]lemmadeviationboundstatement
Let G = (V,E) be a λ-expander graph and f : V → ℝ_{≥ 0} s.t. P_{v ∼ U(V)}( f(v) ≥ x ) ≤ exp(-x (ln x)^3) for x ≥ 20, and λ ≤ exp(-l (ln l)^3 ); then
P_{w ∼ Walk(G,l)}( ∑_{i ∈ [l]} f(w_i) ≥ c:dev_bound l ) ≤ exp(-l)
where c:dev_bound := e^2 + e^3 + (e-1) ≤ 30.
Note that the class includes sub-gaussian random variables but is even larger. The complete proof is in Appendix <ref>. The proof essentially works by approximating the function f using the Iverson bracket:
f(x) ≤ ∑_k [e^k ≤ f(x) < e^{k+1}] e^{k+1}, and establishing bounds on the frequency of each bracket. For large k this is established using the Markov inequality, and for small k the previous lemma is used. The result is a stronger version of a lemma established by Błasiok <cit.>[Lem. 36], and the proof in this work is heavily inspired by his.[The main distinction is that he relies on a tail bound from Rao <cit.>, while this work relies on Lemma <ref>.]
§ EXPLICIT PSEUDO-RANDOM CONSTRUCTIONS
This section introduces two families of pseudo-random objects used in this work along with an explicit construction for each.
§.§ Strongly explicit expander graphs
For the application in this work, it is necessary to use strongly explicit expander graphs. For such a graph, it is possible to sample a vertex uniformly at random and to compute the edges incident to a given vertex algorithmically, i.e., it is possible to sample a random walk without having to represent the graph in memory. Moreover, sampling a random walk of length l from a d-regular graph G with n vertices is possible using a random sample from [n d^{l-1}], i.e., we can map such a number to a walk algorithmically, such that the resulting distribution corresponds to the distribution of Walk(G,l); this allows the previously mentioned two-stage construction.
A possible construction of strongly explicit expander graphs for every vertex count n and spectral bound λ is described by Murtagh et al. <cit.>[Thm. 20, Apx. B][Similar results have also been discussed by Goldreich and Alon: Goldreich <cit.> discusses the same problem but for edge expansion instead of the spectral bound. Alon <cit.> constructs near-optimal expander graphs for every size starting from a minimum vertex count (depending on the degree and the discrepancy from optimality).]. Note that the degree d in their construction grows only polynomially with λ^{-1}, hence ln(d(λ)) ∈ O(ln(λ^{-1})). We will use the notation ℰ([n], λ, l) for the sample space of random walks of length l in the described graph over the vertex set [n]. The same construction can also be used on an arbitrary finite vertex set S, if it is straightforward to map [|S|] to S algorithmically; thus we use the notation ℰ(S, λ, l) for such S. Importantly, |ℰ(S, λ, l)| = |S| d(λ)^{l-1}, so a walk in such a graph requires O(ln|S| + l ln(λ^{-1})) bits to represent.
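To illustrate how a single number from [n d^{l-1}] can be mapped to a walk, consider the following sketch. The details of the strongly explicit construction are abstracted behind a hypothetical neighbor oracle: neighbor(v, j) is assumed to return the j-th of the d neighbors of vertex v, computable without storing the graph.

def decode_walk(index, n, d, l, neighbor):
    """Map index in [n * d**(l-1)] to a length-l walk (list of l vertices)."""
    v = index % n          # random start vertex
    index //= n
    walk = [v]
    for _ in range(l - 1):
        j = index % d      # which of the d edges to follow
        index //= d
        v = neighbor(v, j)
        walk.append(v)
    return walk

# toy example: the complete graph K_5 without self-loops is 4-regular
example = decode_walk(123, n=5, d=4, l=3,
                      neighbor=lambda v, j: (v + 1 + j) % 5)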
§.§ Hash Families
Let us introduce the notation ℋ_k([2^d], [2^d]) for the Carter-Wegman hash family <cit.> from [2^d] to [2^d]. (These consist of polynomials of degree less than k over the finite field GF(2^d).)
It is straightforward to see that a hash family for a domain [2^d] is also a family for a subset of the domain [n] ⊆ [2^d]. Similarly, it is possible to reduce the size of the range by composing the hash function with a modulo operation: [2^d] → [2^c] for c ≤ d. Hence the previous definition can be extended to hash families with more general domains and ranges, for which we will use the notation ℋ_k([n],[2^c]).
Note that ln(|ℋ_k([n],[2^c])|) ∈ O( k (c + ln n) ).
For our application, we will need a second family with a geometric (as opposed to uniform) distribution on the range, in particular such that P(f(a) ≥ k) = 2^{-k}. This is used to assign levels to the stream elements. A straightforward method to achieve this is to compose the functions of the hash family ℋ_k([2^d],[2^d]) with the function [2^d] → [d+1] that computes the number of trailing zeros of the binary representation of its input. We denote such a hash family by 𝒢_k([2^d]), where the range is [d+1]. As above, such a hash family is also one for a domain [n] ⊆ [2^d], and hence we can again extend the notation: 𝒢_k([n]).
Note that P_{f ∼ 𝒢_k([n])}( f(a) ≥ k ) = 2^{-k} for all k ≤ log₂ n, and also ln(|𝒢_k([n])|) ∈ O( k ln n ).
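The following sketch is only meant to illustrate why composing a hash with the trailing-zero count yields P(level ≥ k) ≈ 2^{-k}. It uses a pairwise independent hash modulo the prime 65537 as a toy stand-in; the construction in the paper works over GF(2^d) instead, and the constants here are arbitrary.

import random

def trailing_zeros(x, d):
    # number of trailing zero bits of a d-bit word; x == 0 maps to d
    k = 0
    while k < d and (x >> k) & 1 == 0:
        k += 1
    return k

rng = random.Random(42)
P, D = 65537, 16                      # prime modulus > 2^16, word size 16
a, b = rng.randrange(1, P), rng.randrange(P)
level = lambda x: trailing_zeros(((a * x + b) % P) % (1 << D), D)

counts = [sum(level(x) >= k for x in range(1 << D)) for k in range(5)]
# counts[k] is close to 2^(16-k), i.e., the level is roughly geometric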
§ THE ALGORITHM
Because of all the distinct possible execution models, it is best to present the algorithm as a purely functional data structure with four operations:
init: () →seed single: [n] →seed→sketch
merge: sketch→sketch→sketch estimate: sketch→ℝ
The init step should be called only once globally — it is the only random operation — its result forms the seed and must be the same during the entire course of the algorithm. The operation single returns a sketch for a singleton set corresponding to its first argument. The operation merge computes a sketch representing the union of its input sketches and the operation estimate returns an estimate for the number of distinct elements for a given sketch. It is possible to introduce another primitive for adding a single element to a sketch, which is equivalent to a merge and a single operation, i.e.: add(x,τ,ω) := merge(τ, single(x, ω)). In terms of run-time performance it makes sense to introduce such an operation, especially with an in-place update, but we will not discuss it here.
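For concreteness, the interface can be summarized by the following hypothetical Python protocol; the types Seed and Sketch are placeholders for whatever representation an implementation chooses, and the method names mirror the operations above.

from typing import Protocol, TypeVar

Seed = TypeVar("Seed")
Sketch = TypeVar("Sketch")

class CardinalityEstimator(Protocol[Seed, Sketch]):
    def init(self) -> Seed: ...                        # random; called once globally
    def single(self, x: int, seed: Seed) -> Sketch: ...    # sketch of the singleton {x}
    def merge(self, s: Sketch, t: Sketch) -> Sketch: ...   # sketch of the union
    def estimate(self, s: Sketch) -> float: ...            # estimate of the cardinality

def add(est, x, tau, omega):
    # the derived "add one element" operation mentioned in the text
    return est.merge(tau, est.single(x, omega))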
The algorithm will be introduced in two successive steps. The first step is a solution that works for (ln n)^{-1} ≤ δ < 1. The sketch requires only O(ln(δ^{-1}) ε^{-2} + ln ln n) bits, but the initial coin flips require O(ln n + (ln(ε^{-1}))^2 + (ln(δ^{-1}))^3) bits. For δ ≥ (ln n)^{-1} this is already optimal. In the second step (Section <ref>), a black-box vectorization of the previous algorithm will be used to achieve the optimal O(ln(δ^{-1}) ε^{-2} + ln n) space usage for all 0 < δ < 1.
For this entire section let us fix a universe size n > 0, a relative accuracy 0 < ε < 1, a failure probability (ln n)^{-1} ≤ δ < 1 and define:
l := ⌈ c:eps ln(2 δ^{-1}) ⌉    b := 2^{⌈ log₂( c:delta ε^{-2} ) ⌉}
k := ⌈ c:approx_bin_balls_1 ln b + c:approx_bin_balls_2 ⌉    λ := min(1/16, exp(-l (ln l)^3))
Ψ := 𝒢_2([n]) × ℋ_2([n],[c:pre_bins b^2]) × ℋ_k([c:pre_bins b^2], [b])    Ω := ℰ(Ψ, λ, l)
The implementation of the operations is presented in Algorithm <ref>. Note that these are functional programs that pass the state as arguments and results; there is no global (mutable) state. The sketch consists of two parts (B,q): the first part is a two-dimensional table of dimensions b × l; the second part is a single natural number, the cut-off level. The function compress is an internal operation and is not part of the public API. It increases the cut-off level and decreases the table values if the space usage is too high.
§.§ History-Independence
As mentioned in the introduction, this algorithm is history-independent: given the initial coin flips, it will reach the same state no matter in which order or with which multiplicities the stream elements are encountered. More precisely, the final state depends only on the set of distinct elements encountered over the execution tree and the initial coin flips, but not on the shape of the tree. This is one of the key improvements compared to the solutions by Kane et al. and Błasiok. Informally, this is easy to see because the chosen cut-off level is the smallest possible with respect to the size of the values in the bins, and that property is maintained because the values in the bins are monotonically increasing with respect to the set of elements in the execution tree. Nevertheless, let us prove the property more rigorously:
Let ω∈Ω be the initial coin flips. Then there is a function τ(ω, A) such that following equations hold:
single(ω,x) = τ(ω,{x})
merge(τ(ω,A),τ(ω,B)) = τ(ω,A ∪ B)
The function τ is defined as follows:
τ_0((f,g,h),A) := j ↦ max( { f(a) | a ∈ A ∧ h(g(a)) = j } ∪ {-1} )
τ_1(ψ,A,q) := j ↦ max( τ_0(ψ,A)[j] - q, -1 )
τ_2(ω,A,q) := (i,j) ↦ τ_1(ω_i,A,q)[j]
q(ω,A) := min{ q ≥ 0 | ∑_{i ∈ [l], j ∈ [b]} ⌊ log₂( τ_2(ω,A,q)[i,j] + 2 ) ⌋ ≤ c:space_bound b l }
τ_3(ω,A,q) := (τ_2(ω,A,q), q)
τ(ω,A) := τ_3(ω,A,q(ω,A))
The function τ_0 describes the values in the bins if there were no compression, i.e., when q=0. The function τ_1 describes the same for the given cut-off level q. Both are with respect to the selected hash functions ψ = (f,g,h).
The function τ_2 represents the state of all tables based on a seed for the expander. The next function τ_3 represents the entire state, which consists of the tables and the cut-off level. The function q represents the actual cut-off level that the algorithm would choose based on the values in the bins. Finally, the full state is described by the function τ for a given seed ω and set of elements A.
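A direct transcription of these definitions into Python may make them easier to parse. This is a sketch under assumptions: f, g, h are the three hash functions of a seed ψ (callables), seeds is the list of l seeds along the expander walk ω, and c_space stands in for the constant c:space_bound; the per-entry cost ⌊log₂(value+2)⌋ is the one used in the compress condition above.

from math import floor, log2

def tau0(f, g, h, A, b):
    # largest level landing in bin j, or -1 if the bin is empty
    table = [-1] * b
    for a in A:
        j = h(g(a))
        table[j] = max(table[j], f(a))
    return table

def tau1(f, g, h, A, b, q):
    # the same table, cut off at level q (small values collapse to -1)
    return [max(v - q, -1) for v in tau0(f, g, h, A, b)]

def space(tables):
    # storage estimate: sum of floor(log2(value + 2)) over all entries
    return sum(floor(log2(v + 2)) for row in tables for v in row)

def cutoff(seeds, A, b, l, c_space):
    # q(omega, A): the smallest cut-off level meeting the space budget
    q = 0
    while space([tau1(f, g, h, A, b, q) for (f, g, h) in seeds]) > c_space * b * l:
        q += 1
    return q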
Equations <ref> and <ref> hold for all ω∈Ω and ∅≠ A ⊂ [n].
Let us also introduce the algorithms merge_1 and single_1. These are the algorithms merge and single but without the final compression step. By definition, we have merge(x,y) = compress(merge_1(x,y)) and, similarly, single(ω,x) = compress(single_1(ω,x)).
The following properties follow elementarily[The verification relies on the semi-lattice properties of the max operator, as well as its translation invariance (i.e. max (a+c,b+c) = max (a,b) + c).] from the definitions of τ, q and the algorithms:
* τ(ω, A) = compress(τ_3(ω,A,q)) for all 0 ≤ q ≤ q(ω,A)
* τ_3(ω, A_1 ∪ A_2, max(q(ω,A_1),q(ω,A_2))) = merge_1(τ(ω, A_1), τ(ω, A_2))
* τ_3({x}, 0) = single_1(ω, x)
* q(ω, A_1) ≤ q(ω, A_2) if A_1 ⊆ A_2
* q(ω, A) ≥ 0
To verify Eq. <ref> we can use <ref>, <ref> and <ref> and to verify Eq. <ref> we use <ref>, <ref> taking into account that
max(q(ω, A_1),q(ω, A_2)) ≤ q(ω, A_1 ∪ A_2) because of <ref>.
§.§ Overall Proof
Because of the argument in the previous section, τ(ω,A) will be the state reached after any execution tree over the set A and the initial coin flips, i.e., ω∈Ω. Hence for the correctness of the algorithm, we only need to show that:
Let ∅≠ A ⊆ [n] then
P_{ω ∼ U(Ω)}( |estimate(τ(ω,A)) - |A|| ≥ ε|A| ) < δ.
Proof: postponed. This will be shown in two steps. First, we establish that the cut-off level q will be at most q_max := max(0, ⌈log₂(|A|)⌉ - log₂ b) with high probability. Second, assuming the latter, the estimate will be within the desired accuracy with high probability. For the second part, we verify that the estimation step succeeds with high probability for all 0 ≤ q ≤ q_max. (This is because the sub-sampling threshold s in the estimation step will be ≥ q_max with high probability.)
For the remainder of this section, let ∅≠ A ⊂ [n] be fixed and we will usually omit the dependency on A. For example, we will write τ(ω) instead of τ(ω,A).
Formally, we can express the decomposition discussed above using the following chain:
P_{ω∼Ω}( |estimate(τ(ω)) - |A|| ≥ ε|A| ) ≤
P_{ω∼Ω}( ∃ q ≤ q_max. |estimate(τ_2(ω,q)) - |A|| ≥ ε|A| ∨ q(ω) > q_max ) ≤
P_{ω∼Ω}( ∃ q ≤ q_max. |estimate(τ_2(ω,q)) - |A|| ≥ ε|A| ) + P_{ω∼Ω}( q(ω) > q_max ) ≤ δ/2 + δ/2
The first inequality is the converse of the informal argument from above.[Algebraically it is more succinct to bound the failure event from above, instead of bounding the success event from below, which means that some informal arguments will be accompanied by their algebraic converse. For example, an argument that event A implies B might be accompanied by P( B) ≤( A).] The second inequality is just the sub-additivity of probabilities. And the third inequality consists of the two goals we have, i.e., the overall proof can be split into two parts:
* P_{ω∼Ω}( q(ω) > q_max ) ≤ δ/2
* P_{ω∼Ω}( ∃ q ≤ q_max. |estimate(τ_2(ω,q)) - |A|| ≥ ε|A| ) ≤ δ/2
The first will be shown in the following subsection, and the second in the subsequent one.
Subsection <ref> discusses the space usage of the algorithm.
§.§ Cut-off Level
This subsection proves that the cut-off level will, with high probability, be smaller than or equal to q_max. This is the part where the tail estimate for sub-gaussian random variables over expander walks (Lemma <ref>) is applied:
P_{ω∼Ω}( q(ω) > q_max ) ≤ δ/2
Let us make a few preliminary observations:
⌊log₂(x+2)⌋ ≤ log₂(x+2) ≤ (c+2) + max(x - 2^c, 0) for -1 ≤ x ∈ ℝ and c ∈ ℕ.
This can be verified using a case distinction on whether x + 2 ≥ 2^{c+2}.
E_{f ∼ 𝒢_2([n])} max(f(a) - q_max - 2^c, 0) ≤ 2^{-q_max} 2^{-2^c} for all a ∈ [n] and c ∈ ℕ.
Note that this relies on the fact that f is geometrically distributed.
|A| b^{-1} 2^{-q_max} ≤ 1.
This follows from the definition of q_max via a case distinction.
To establish the result, we take into account that q(ω) is the smallest cut-off level q fulfilling the inequality:
∑_{i ∈ [l], j ∈ [b]} ⌊ log₂( τ_2(ω,q)[i,j] + 2 ) ⌋ ≤ c:space_bound b l.
In particular, if the inequality is true for q_max, then we can conclude that q(ω) is at most q_max, i.e.:
P_{ω∈Ω}( q(ω) > q_max ) = P_{ω∈Ω}( ∑_{i ∈ [l], j ∈ [b]} ⌊ log₂( τ_2(ω,q_max)[i,j] + 2 ) ⌋ > c:space_bound b l )
Let us introduce the random variable X over the seed space Ψ. It describes the space usage of a single column of the table B:
X(ψ) := ∑_{j ∈ [b]} ⌊ log₂( τ_1(ψ,q_max)[j] + 2 ) ⌋,
which can be bounded using Eq. <ref> as follows:
X(ψ) ≤ ∑_{j ∈ [b]} (c+2) + max( τ_1(ψ,q_max)[j] - 2^c, 0 ) = ∑_{j ∈ [b]} (c+2) + max( τ_0(ψ)[j] - q_max - 2^c, 0 )
for all 0 ≤ c ∈ℕ. Hence:
P_{ψ∼Ψ}( X(ψ) ≥ (c+3) b ) ≤ P_{ψ∼Ψ}( ∑_{j ∈ [b]} max( τ_0(ψ)[j] - q_max - 2^c, 0 ) ≥ b ) ≤
P_{(f,g,h) ∼ Ψ}( ∑_{j ∈ [b]} max( { f(a) - q_max - 2^c | a ∈ A ∧ h(g(a)) = j } ∪ {0} ) ≥ b ) ≤
P_{(f,g,h) ∼ Ψ}( ∑_{a ∈ A} max( f(a) - q_max - 2^c, 0 ) ≥ b ) ≤
b^{-1} ∑_{a ∈ A} E_{(f,g,h) ∼ Ψ} max( f(a) - q_max - 2^c, 0 ) ≤
b^{-1} |A| 2^{-q_max} 2^{-2^c} ≤ 2^{-2^c}
where the third and second-last inequality follow from Eq. <ref> and <ref>.
It is straightforward to conclude from the latter that for all 20 ≤ x ∈ℝ:
P_{ψ∼Ψ}( X(ψ)/b - 3 ≥ x ) ≤ P_{ψ∼Ψ}( X(ψ) ≥ b(⌊ x ⌋+3) ) ≤ exp( -2^{⌊ x ⌋} ln 2 ) ≤ e^{-x (ln x)^3}
Hence, it is possible to apply Lemma <ref> to the random variables b^{-1} X(ω_i) - 3, obtaining:
P_{ω∈Ω}( ∑_{i ∈ [l]} ( b^{-1} X(ω_i) - 3 ) ≥ c:dev_bound l ) ≤ exp(-l) ≤ δ/2
The lemma now follows using c:space_bound ≥ c:dev_bound + 3 and the fact that ∑_{i ∈ [l]} X(ω_i) ≤ c:space_bound b l implies q(ω) ≤ q_max, as discussed at the beginning of the proof (Eq. <ref>).
§.§ Accuracy
Let us introduce the random variables:
t(f) := max{ f(a) | a ∈ A } - log₂ b + 9
s(f) := max(0, t(f))
p(f,g,h) := |{ j ∈ [b] | τ_1((f,g,h), 0)[j] ≥ s(f) }|
Y(f,g,h) := 2^{s(f)} ρ^{-1}(p(f,g,h))
where ρ(x) := b (1-(1-b^{-1})^x) is the expected number of hit bins when x balls are thrown into b bins. (See also Figure <ref>.) Note that the definitions of t, p and Y correspond to the terms within the loop in the estimate function under the condition that the approximation threshold q is 0.
In particular: estimate(τ_3(ω,0)) = median_i ∈ [l] Y(ω_i) for ω∈Ω.
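The following sketch mirrors these definitions. It is illustrative only: rho_inv inverts ρ numerically by bisection (the runtime section discusses the cheaper approximation an actual implementation would use), row is one table of length b, q is the current cut-off level, and b is assumed to be a power of two as in the construction.

import math

def rho(x, b):
    # expected number of hit bins when x balls are thrown into b bins
    return b * (1.0 - (1.0 - 1.0 / b) ** x)

def rho_inv(y, b, iters=60):
    # numerical inverse of rho; assumes 0 <= y < b (the analysis guarantees
    # y <= (41/60) b with high probability)
    lo, hi = 0.0, 1.0
    while rho(hi, b) < y:
        hi *= 2.0
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if rho(mid, b) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def single_estimate(row, b, q):
    # one iteration of the estimate loop, following the definitions above
    t = max(v + q for v in row) - round(math.log2(b)) + 9
    s = max(0, t)
    p = sum(1 for v in row if v + q >= s)
    return 2 ** s * rho_inv(p, b)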
Moreover, we denote by R(f) the set of elements in A whose level is above the sub-sampling threshold, i.e.:
R(f) := { a ∈ A | f(a) ≥ s(f) }.
The objective is to show that the individual estimates obtained in the loop of the estimate function (assuming q=0) have the right accuracy and that the threshold s(f) is at least q_max with high probability, i.e.:
P_{ψ∼Ψ}( |Y(ψ) - |A|| > ε|A| ∨ s(f) < q_max ) ≤ 1/16
In Lemma <ref> this will be generalized to 0 ≤ q ≤ q_max.
To be able to establish a bound on the above event, we need to check the likelihood of the following 4 events:
* The computed sub-sampling threshold s(f) is approximately log₂(|A|/b).
* The size |R(f)| of the set of sub-sampled elements is a good approximation of 2^{-s(f)}|A|.
* There is no collision during the application of g on the sub-sampled elements R(f).
* The number of table entries above the sub-sampling threshold is close to the expected number ρ(|R(f)|) (taking collisions due to the application of h into account).
Then it will be possible to conclude that one of the above must fail if the approximation is incorrect. More formally:
E_1(ψ) :↔ 2^{-16} b ≤ 2^{-t(f)}|A| ≤ 2^{-1} b
E_2(ψ) :↔ ||R(f)| - 2^{-s(f)}|A|| ≤ (ε/3) 2^{-s(f)}|A|
E_3(ψ) :↔ ∀ a ≠ b ∈ R(f) . g(a) ≠ g(b)
E_4(ψ) :↔ |p(ψ) - ρ(|R(f)|)| ≤ (ε/12) |R(f)|
for ψ = (f,g,h) ∈ Ψ. The goal is to show that all four events happen simultaneously w.h.p.:
P_{ψ∼Ψ}( ¬E_1(ψ) ∨ ¬E_2(ψ) ∨ ¬E_3(ψ) ∨ ¬E_4(ψ) ) ≤ 1/16
A first idea might be to establish the above by showing separately that:
P_{ψ∼Ψ}( ¬E_i(ψ) ) ≤ 2^{-6} for each i ∈ {1,…,4}. However, this does not work, and the actual strategy is to establish bounds on
P_{ψ∼Ψ}( ⋀_{j < i} E_j(ψ) ∧ ¬E_i(ψ) ) ≤ 2^{-6} for each i ∈ {1,…,4}.
Note that the latter still implies Equation <ref>. Let us start with the case i=1:
P_{ψ∈Ψ}( ¬E_1(ψ) ) ≤ 2^{-6}
For X(f) := max{ f(a) | a ∈ A } it is possible to show:
P_{(f,g,h) ∼ Ψ}( X(f) < log₂(|A|) - k - 1 ) ≤ 2^{-k}    P_{(f,g,h) ∼ Ψ}( X(f) > log₂(|A|) + k ) ≤ 2^{-k}
using the proof of the F_0 algorithm by Alon et al. <cit.>[Proposition 2.3].
The desired result follows by taking k=7 and using that t(f) = X(f) - log₂ b + 9.
The following lemma is the interesting part of the proof in this subsection. In previous work, the sub-sampling threshold is obtained using a separate parallel algorithm, which has the benefit that it is straightforward to verify that |R(f)| approximates 2^{-s}|A|. The drawback is, of course, additional algorithmic complexity and an additional independent hash function. In the solution presented here, however, the threshold is determined from the data to be sub-sampled itself, which means it is not possible to assume independence. The remedy is to show that |R(f)| approximates 2^{-s}|A| with high probability for all possible values of s(f), assuming E_1.
L := P_{ψ∼Ψ}( E_1(ψ) ∧ ¬E_2(ψ) ) ≤ 2^{-6}
Let r(f,t) := |{ a ∈ A | f(a) ≥ t }| and let t_max be maximal s.t. 2^{-16} b ≤ 2^{-t_max}|A|.
Then 2^7 ≤ (ε^2/9) 2^{-16} b ≤ (ε^2/9) 2^{-t_max}|A|.
Hence: 2^{7+t_max-t} ≤ (ε^2/9) 2^{-t}|A| = (ε^2/9) E r(f,t).
Thus:
2^{7+t_max-t} Var r(f,t) ≤ 2^{7+t_max-t} E r(f,t) ≤ (ε^2/9) ( E r(f,t) )^2
for all 0 < t ≤ t_max. (This may be a void statement if t_max ≤ 0.)
Hence:
P_{(f,g,h) ∈ Ψ}( ∃ t. 0 < t ≤ t_max ∧ |r(f,t) - E r(·,t)| > (ε/3) E r(·,t) ) ≤
∑_{t=1}^{t_max} P_{(f,g,h) ∈ Ψ}( |r(f,t) - E r(·,t)| > √( 2^{7+t_max-t} Var r(f,t) ) ) ≤ ∑_{t=1}^{t_max} 2^{-7-t_max+t} ≤ 2^{-6}
Note that the predicate E_2(ψ) is always true if s(f) = 0 because, in that case, there is no sub-sampling, i.e., R(f) = A. On the other hand, if s(f) > 0, then s(f) = t(f) ≤ t_max, assuming E_1(ψ).
Hence:
L ≤ P_{(f,g,h)}( s(f) > 0 ∧ E_1(f,g,h) ∧ ¬E_2(f,g,h) )
≤ P_{(f,g,h)}( 0 < t(f) ≤ t_max ∧ ||R(f)| - 2^{-t(f)}|A|| > (ε/3) 2^{-t(f)}|A| )
≤ P_{(f,g,h)}( 0 < t(f) ≤ t_max ∧ |r(f,t(f)) - 2^{-t(f)}|A|| > (ε/3) 2^{-t(f)}|A| ) ≤ 2^{-6}
where the last step follows from the previous equation, since E r(·,t) = 2^{-t}|A|.
Note that E_1(f,g,h) ∧ E_2(f,g,h) → |R(f)| ≤ (2/3) b for (f,g,h) ∈ Ψ.
L := P_{ψ∼Ψ}( E_1(ψ) ∧ E_2(ψ) ∧ ¬E_3(ψ) ) ≤ 2^{-6}
Using Eq. <ref> we can conclude:
L ≤ P_{(f,g,h) ∼ Ψ}( |R(f)| ≤ b ∧ (∃ a < b ∈ R(f). g(a) = g(b)) )
≤ ∫_{𝒢_2([n])} [ |R(f)| ≤ b ] P_{g ∼ ℋ_2([n], [c:pre_bins b^2])}( ∃ a < b ∈ R(f). g(a) = g(b) ) df
≤ ∫_{𝒢_2([n])} [ |R(f)| ≤ b ] ∑_{a < b ∈ R(f)} P_{g ∼ ℋ_2([n], [c:pre_bins b^2])}( g(a) = g(b) ) df
≤ ∫_{𝒢_2([n])} ( b(b-1) ) / ( 2 c:pre_bins b^2 ) df ≤ 1/(2 c:pre_bins) = 2^{-6}.
L := P_{ψ∼Ψ}( E_1(ψ) ∧ E_2(ψ) ∧ E_3(ψ) ∧ ¬E_4(ψ) ) ≤ 2^{-6}
Let R̃(f,g,h) := { i ∈ [c:pre_bins b^2] | ∃ a ∈ A. f(a) ≥ t(f) ∧ g(a) = i } denote the set of indices hit in the domain [c:pre_bins b^2] by the application of g to the elements above the sub-sampling threshold. If E_3(f,g,h), then |R̃(f,g,h)| = |R(f)|, and if E_1(f,g,h) ∧ E_2(f,g,h), then |R(f)| ≤ b (see Eq. <ref>). Recalling that p(ψ) is the number of bins hit by the application of the k-independent family mapping R̃(ψ) ⊆ [c:pre_bins b^2] to [b], we can apply Lemma <ref>. This implies:
P_{(f,g,h) ∼ Ψ}( ⋀_{i ∈ {1,2,3}} E_i(f,g,h) ∧ |p(f,g,h) - ρ(|R(f)|)| ≥ (ε/12) |R(f)| ) ≤
P_{ψ∼Ψ}( |R̃(ψ)| ≤ b ∧ |p(ψ) - ρ(|R̃(ψ)|)| ≥ (ε/12) |R̃(ψ)| ) ≤
P_{ψ∼Ψ}( |R̃(ψ)| ≤ b ∧ |p(ψ) - ρ(|R̃(ψ)|)| ≥ 9 b^{-1/2} |R̃(ψ)| ) ≤ 2^{-6}
where we used that b ≥ 9^2 12^2 ε^{-2} (i.e., c:delta ≥ 9^2 12^2).
Equation <ref> is true.
Let us start by observing that E_1(ψ) ∧ E_2(ψ) ∧ E_4(ψ) → |Y(ψ) - |A|| ≤ ε|A|. This is basically an error-propagation argument.
First note that by using Eq. <ref>:
p(f,g,h) ≤ ρ(|R(f)|) + (ε/12)|R(f)| ≤ ρ((2/3)b) + (1/12)|R(f)| ≤ (41/60) b.
Moreover, using the mean value theorem:
|ρ^{-1}(p(f,g,h)) - |R(f)|| = (ρ^{-1})'(ξ) |p(f,g,h) - ρ(|R(f)|)| ≤ (ε/3)|R(f)|
for some ξ between ρ(|R(f)|) and p(f,g,h), where we can bound (ρ^{-1})'(ξ) < 4. Hence:
|ρ^{-1}(p(f,g,h)) - 2^{-s(f)}|A|| ≤ |ρ^{-1}(p(f,g,h)) - |R(f)|| + ||R(f)| - 2^{-s(f)}|A||
≤ (ε/3)|R(f)| + ||R(f)| - 2^{-s(f)}|A||
≤ (ε/3) ||R(f)| - 2^{-s(f)}|A|| + (ε/3) 2^{-s(f)}|A| + ||R(f)| - 2^{-s(f)}|A||
≤ ( 2ε/3 + ε^2/9 ) 2^{-s(f)}|A| ≤ ε 2^{-s(f)}|A|
It is also possible to deduce that E_1(f,g,h) → t(f) ≥ log₂(|A|) - log₂ b → s(f) ≥ q_max.
Using Lemmas <ref> to <ref> we can conclude that Equation <ref> is true, and the implications derived here show that then Equation <ref> must be true as well.
To extend the previous result to the case q ≤ q_max, let us introduce the random variables:
t_c(ψ, q) := max{ τ_1(ψ, q)[j] + q | j ∈ [b] } - log₂ b + 9
s_c(ψ, q) := max(0, t_c(ψ, q))
p_c(ψ, q) := |{ j ∈ [b] | τ_1(ψ, q)[j] + q ≥ s_c(ψ,q) }|
Y_c(ψ, q) := 2^{s_c(ψ,q)} ρ^{-1}(p_c(ψ,q))
The definitions of t_c, p_c and Y_c correspond to the terms within the loop in the estimate function for arbitrary q.
P_{ψ∼Ψ}( ∃ q ≤ q_max. |Y_c(ψ,q) - |A|| > ε|A| ) ≤ 1/16
It is possible to see that t_c(ψ,q) = t(ψ) if q ≤ t(ψ). This is because τ_1(ψ,q)[j] + q and τ_1(ψ,0)[j] are equal except for values strictly smaller than q. With a case distinction on t(ψ) ≥ 0, it is also possible to deduce that s_c(ψ,q) = s(ψ) if q ≤ s(ψ).
Hence p_c(ψ, q) = p(ψ) and Y_c(ψ,q) = Y(ψ) (for q ≤ s(ψ)).
Thus this lemma is a consequence of Lemma <ref>.
The previous result established that each of the individual estimates is within the desired accuracy with constant probability. The following establishes that the same is true for the median, with probability 1-δ/2:
L := P_{ω∈Ω}( ∃ q ≤ q_max. |estimate(τ_2(ω,q)) - |A|| ≥ ε|A| ) ≤ δ/2
Because the median of a sequence is certainly contained in an interval if more than half of its elements are, we can bound the left-hand side as follows:
L ≤ P_{ω∈Ω}( ∃ q ≤ q_max. ∑_{i ∈ [l]} [ |Y_c(ω_i,q) - |A|| ≥ ε|A| ] ≥ l/2 )
≤ P_{ω∈Ω}( ∑_{i ∈ [l]} [ ∃ q ≤ q_max. |Y_c(ω_i,q) - |A|| ≥ ε|A| ] ≥ l/2 )
≤ exp( - l ( (1/2) ln((1/16+1/16)^{-1}) - 2 e^{-1} ) ) ≤ exp( -l/4 ) ≤ δ/2
The third inequality follows from Lemma <ref> and <ref>, as well as λ ≤ 1/16.
We can now complete the proof of Theorem <ref>.
Follows from Lemma <ref> and the previous lemma, as well as the reasoning established in Equation <ref>.
§.§ Space Usage
It should be noted that the data structure requires an efficient storage mechanism for the levels in the bins. If we insist on reserving a constant number of bits per bin, the space requirement will be sub-optimal.
Instead, we need to store the table values in a manner in which the number of bits required for a value x is proportional to ln x.
A simple strategy is to store each value using a prefix-free universal code and to concatenate the encoded variable-length bit strings.[Note that a vector of prefix-free values can be decoded even if they are just concatenated.] A well-known universal code for positive integers is the Elias-gamma code, which requires 2⌊log₂ x⌋ + 1 bits for x ≥ 1 <cit.>. Since, in our case, the values are integers greater than or equal to -1, they can be encoded using 2⌊log₂(x+2)⌋ + 1 bits.[There are more sophisticated strategies for representing a sequence of variable-length strings that allow random access. <cit.>] (We add 2 before encoding and subtract 2 after decoding.)
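A minimal encoder/decoder for the scheme described here (shift each entry by 2 so that values ≥ -1 become positive, then apply Elias-gamma) is sketched below; bit strings are represented as Python strings purely for readability, which a real implementation would of course avoid.

def elias_gamma_encode(x):
    # Elias-gamma code of a positive integer x: floor(log2 x) zeros
    # followed by the binary representation of x  (2*floor(log2 x)+1 bits)
    assert x >= 1
    binary = bin(x)[2:]
    return "0" * (len(binary) - 1) + binary

def encode_entry(v):
    # table entries are >= -1, so shift by 2 before encoding
    return elias_gamma_encode(v + 2)

def decode_entries(bits):
    # decode a concatenation of encoded entries; prefix-freeness makes the
    # split unambiguous
    out, i = [], 0
    while i < len(bits):
        zeros = 0
        while bits[i + zeros] == "0":
            zeros += 1
        value = int(bits[i + zeros:i + 2 * zeros + 1], 2)
        out.append(value - 2)
        i += 2 * zeros + 1
    return out

assert decode_entries("".join(encode_entry(v) for v in [-1, 0, 5, 23])) == [-1, 0, 5, 23]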
In combination with the condition established in the compress function of Algorithm <ref>, the space usage for the table is thus at most (2 c:space_bound + 1) b l ∈ O( b l ) ⊆ O( ln(δ^{-1}) ε^{-2} ) bits.
Additionally, the approximation threshold needs to be stored. This threshold is a non-negative integer between 0 and ⌈log₂ n⌉, requiring O( ln ln n ) bits.
In summary, the space required for the sketch is O( ln(δ^{-1}) ε^{-2} + ln ln n ).
For the coin flips, we need to store a random choice from Ω, i.e., ln(|Ω|) bits.
The latter is in
O( ln(|Ω|) ) ⊆ O( ln(|Ψ|) + l ln(λ^{-1}) )
⊆ O( ln(|𝒢_2([n])|) + ln(|ℋ_2([n], [c:pre_bins b^2])|) + ln(|ℋ_k([c:pre_bins b^2],[b])|) + l^2 (ln l)^3 )
⊆ O( ln n + ln n + k ln(ε^{-1}) + (ln(δ^{-1}))^3 )
⊆ O( ln n + (ln(ε^{-1}))^2 + (ln(δ^{-1}))^3 ) .
Overall, the total space for the coin flips and the sketch is O( ln(δ^{-1}) ε^{-2} + ln n + (ln(δ^{-1}))^3 ).
§ EXTENSION TO SMALL FAILURE PROBABILITIES
The data structure described in the previous section has a space complexity that is close to, but exceeds, the optimal O( ln(δ^{-1}) ε^{-2} + ln n ). The main reason is that, with increasing length of the random walk, the spectral gap required of the expander increases as well; this was needed for the application of Lemma <ref> in Subsection <ref>, with which we could establish that the cut-level can be shared between all tables. A natural idea is to restrict that sharing.
If δ^{-1} is smaller than ln n, the term (ln(δ^{-1}))^3 in the complexity of the algorithm is not a problem because it is dominated by the ln n term. If it is larger, we can split the tables into sub-groups and introduce multiple cut-levels. Hence a single cut-level is responsible for a smaller number of tables, and thus the requirements on the spectral gap are lower. (See also Figure <ref>.)
A succinct way to prove the correctness of this proposal precisely is to repeat the previous algorithm, which has only a single shared cut-level, in a black-box manner for the same universe size and accuracy but for a higher failure probability. The seeds of the repetitions are again selected using an expander walk. Here the advantage of Lemma <ref> is welcome, as the inner algorithm needs to have a failure probability depending on n; the natural choice is (ln n)^{-1}. This means the length of the walk of the inner algorithm matches the number of bits of the cut-level, O(ln ln n). The repetition count of the outer algorithm is then O(ln(δ^{-1})/ln ln n). Note that the total repetition count is again O(ln(δ^{-1})).
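In code, the outer layer is nothing more than a median over m repetitions of the inner algorithm whose seeds are the vertices of one expander walk. The sketch below makes assumptions: inner_estimate(seed, A) stands for running the inner algorithm of the previous section with the given seed, sample_outer_walk(m) stands for the walk sampler of ℰ, and n is assumed large enough that ln ln n > 0.

import math
from statistics import median

def outer_estimate(A, n, delta, inner_estimate, sample_outer_walk):
    # black-box repetition: m copies of the inner algorithm (failure
    # probability 1/ln n each), combined by taking the median
    m = math.ceil(4 * math.log(1 / delta) / math.log(math.log(n)))
    seeds = sample_outer_walk(m)
    return median(inner_estimate(seed, A) for seed in seeds)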
Let n > 0, 0 < ε < 1 and 0 < δ < 1. Then there exists a cardinality estimation data structure for the universe [n] with relative accuracy ε and failure probability δ with space usage O( ln(δ^{-1}) ε^{-2} + ln n ).
If δ^{-1} < ln n, then the result follows from Theorem <ref> and the calculation in Subsection <ref>. Moreover, if n < exp(e^5), then the theorem is trivially true, because there is an exact algorithm with space usage exp(e^5) ∈ O(1). Hence we can assume e^5 ≤ ln n ≤ δ^{-1}. Let Ω^*, single^*, merge^* and estimate^* denote the seed space and the API of Algorithm <ref> for the universe [n], relative accuracy ε, and failure probability δ^* := (ln n)^{-1}.
Moreover, let m := ⌈ 4 ln(δ^{-1})/ln ln n ⌉; the plan is to show that, with these definitions, Algorithm <ref> fulfills the conditions of this theorem.
Let ν(θ,A)[i] := τ^*(θ_i, A) for i ∈ [m] and θ∈Θ := U(ℰ(Ω^*,δ^*,m)). Then it is straightforward to check that:
single(θ,x) = ν(θ,{x}) merge(ν(θ,A),ν(θ,B)) = ν(θ,A ∪ B)
for x ∈ [n] and ∅≠ A, B ⊆ [n] taking into account Lemma <ref>.
Hence the correctness follows if:
P_{θ∈Θ}( |estimate(ν(θ,A)) - |A|| > ε|A| ) ≤ δ.
Because the estimate is the median of the individual estimates, this is true if at least half of the individual estimates are in the desired range.
Similar to the proof of Lemma <ref> we can apply Lemma <ref>.
This works if
exp( -m (1/2ln((δ^* + δ^*)^-1) - 2e^-1)) ≤δ
which follows from m ≥ 4 ln (δ^-1) (lnln n)^-1 and lnln n ≥ 5.
The space usage for the seed is: ln(|Θ|) ∈ O( ln n + (ln(ε^{-1}))^2 + (ln((δ^*)^{-1}))^3 + m ln((δ^*)^{-1}) ) ⊆ O( ln n + (ln(ε^{-1}))^2 + ln(δ^{-1}) ).
And the space usage for the sketch is: O( m ln((δ^*)^{-1}) ε^{-2} + m ln ln n ) ⊆ O( ln(δ^{-1}) ε^{-2} + ln ln n ).
§ OPTIMALITY
The optimality of the algorithm introduced by Błasiok <cit.> follows from the lower bound established by Jayram and Woodruff <cit.>. The result (as well as its predecessors <cit.>) follows from a reduction to a communication problem. This also means that their theorem is a lower bound on the information the algorithm needs to retain between processing successive stream elements.
It should be noted that, if additional information is available about the distribution of the input, the problem becomes much easier.
Indeed with such assumptions it is even possible to introduce algorithms that can approximate the cardinality based on observing only a fraction of the input, so the upper bound established in the previous section and the lower bounds discussed here are with respect to algorithms, that work for all inputs.[The probabilistic nature of the correctness condition is only with respect to the internal random bits used.]
An immediate follow-up question to Theorem <ref> is whether the space usage is also optimal in the distributed setting. Unfortunately, this question is not as well posed as it sounds. One interpretation would be to ask whether there is a randomized data structure that fulfills the API described at the beginning of Section <ref>, i.e., with the four operations init, single, merge and estimate, fulfilling the same correctness conditions, requiring only o( ln(δ^{-1}) ε^{-2} + ln n ) bits. For that question the answer is no, because such a data structure could be converted into a sequential streaming algorithm: every time a new stream element is processed, the new state would be computed by obtaining the sketch of the new element using the single operation and merging it with the pre-existing state using the merge operation. (See also the first mode of operation presented in Figure <ref>.)
A more interesting question is, if there is a less general algorithm that works in the distributed streams model. Let us assume there are p processes, each retaining m stream elements, and they are allowed to communicate at the beginning, before observing the stream elements, and after observing all stream elements. Here, let us assume that the processes know how many processes there are and also how many stream elements each process owns. Even with these relaxed constraints, the number of bits that each process will need to maintain will be the same as the minimum number of bits of a sequential streaming solution.
This follows by considering a specific subset of the inputs where, except for process 0, the stream elements on all the other processes are equal to the last stream element of process 0. In particular, the information the processes 1,2,…,p-1 have is 0 bits from the perspective of process 0. If our hypothetical distributed algorithm is correct, it can only be so if the worst-case space usage per process is Ω( ln(δ^{-1}) ε^{-2} + ln n ).
It should be noted that more relaxed constraints, for example, if the processes are allowed to communicate multiple times, after having observed some of the stream elements, prevent the previous reduction argument. And there will be more efficient solutions. Similar things happen, if assumptions about the distribution of the input are made.
§ RUNTIME
The function compress in Algorithm <ref>, which is used as an internal operation within the single and merge operations, is described in a way that makes its correctness properties easy to verify, but as an algorithm it has sub-optimal runtime. In the following, I introduce an alternative, faster implementation with the same behavior.
Let us recall that the function repeatedly decrements every (non-negative) table entry and increments the cut-off level as long as the condition
L := ∑_{i ∈ [l], j ∈ [b]} ⌊ log₂( B[i,j]+2 ) ⌋ > c:space_bound b l
holds. If the number of iterations of the loop, i.e., the minimum value by which the table entries need to be decreased, is known in advance, the while loop can be removed. This results in an algorithm of the following form:
function compress((B,q) : 𝒮) : 𝒮
    Δ ← findrequiredcutoff(B)
    B[i,j] ← max(B[i,j] - Δ, -1) for i ∈ [l], j ∈ [b]
    q ← q + Δ
    return (B,q)
The function findrequiredcutoff is a dynamic-programming algorithm. It starts by computing the minimum amount by which the left-hand side of Eq. <ref> has to be reduced. Then it computes a temporary table with which it is possible to determine the effect of every possible Δ on L. To understand how that works, let us first note that the contribution of a single table entry B[i,j] to L changes only if ⌊log₂(B[i,j]+2)⌋ is affected, which splits the possible Δ values into distinct consecutive intervals. For example: if B[i,j] = 23, then any Δ up to 9 will not affect ⌊log₂(B[i,j]+2)⌋. If Δ is between 10 and 17, the contribution of ⌊log₂(B[i,j]+2)⌋ decreases by one. If Δ is between 18 and 21, it decreases by two, etc. All of that can be kept track of more efficiently using a sequence χ which describes the relative effect of a Δ compared to Δ-1, i.e., the discrete derivative of the function we are looking for. For our example this means that χ is 1 at the values 10, 18, 22, 24 and 0 otherwise. It is, of course, straightforward to accumulate χ for the entire table.
function findrequiredcutoff(B) : ℕ
    R ← ∑_{i ∈ [l], j ∈ [b]} ⌊ log₂( B[i,j]+2 ) ⌋ - c:space_bound b l
    χ[i] ← 0 for i ∈ [⌈log₂ n⌉ + 2]
    for (i,j) ∈ [l] × [b]
        x ← B[i,j] + 2
        inc(χ[x - (2^k - 1)]) for each 1 ≤ k ≤ ⌊log₂ x⌋, k ∈ ℕ
    Δ ← 0
    while R > 0:
        inc(Δ)
        R ← R - χ[Δ]
    return Δ
In the last step, the algorithm determines the smallest Δ fulfilling Eq. <ref> using the sequence χ, i.e., the length of the smallest prefix of χ whose sum reaches R. To estimate the runtime of the above compression algorithm and of the resulting merge and single operations, it makes sense to first obtain a bound on the left-hand side of Eq. <ref> for any possible input of the compress operation:
* For the single operation: L ∈ O( ln(δ^{-1}) ln ln n ) ⊆ O( ln n ).
* For the merge operation: L ∈ O( ln(δ^{-1}) ε^{-2} ).
The first observation follows from the definition of single_1 in Algorithm <ref>. Note that this is within the context of the inner algorithm (Section <ref>), where it is correct to assume δ^{-1} ≤ ln n. For the merge operation, the bound follows from the fact that the initial merge_1 can at most double the space usage of its inputs, where for each input Eq. <ref> can be assumed.
On the other hand, it is easy to check that the runtime of the new compress function is in O( L + ln(δ^{-1}) ε^{-2} + ln n ) in the word RAM model for a word size w ∈ O(max(ln n, ln(ε^{-1}), ln ln(δ^{-1}))). In summary, the operations merge and single require O( ln(δ^{-1}) ε^{-2} + ln n ) operations.
A practical implementation of the estimate function introduced in Algorithm <ref> requires an approximation of ρ^{-1}(x). This can be done by increasing the parameter b by a factor of 4 (and the parameter k accordingly, since it is defined in terms of b) and computing an approximation of
ρ^{-1}(x) with an error of ε/2 (in the range 0 ≤ x ≤ (41/60) b).[Because of Lemma <ref>, it is enough to approximate ρ^{-1} within this range.] In combination, the resulting algorithm again has a total relative error of ε. For such an implementation the number of operations is asymptotically O( ln(δ^{-1}) ε^{-2} + ln n ).
It is straightforward to extend the same result to the extended solution derived in Section <ref>.
§ CONCLUSION
A summary of this work would be that, for the space complexity of cardinality estimation algorithms, there is no gap between the distributed and sequential streaming models. Moreover, it is possible to solve the problem optimally (in either model) with expander graphs and hash families, without using code-based extractors (as they were used in previous work). The main algorithmic idea is to avoid a separate rough-estimation data structure for quantization (cut-off); instead, the cut-off is guided by the space usage. During the estimation step at the end, an independent rough estimate is still derived, but it may be distinct from the cut-off reached at that point. This is the main difference between this solution and the approach by Kane et al. <cit.>. The main mathematical idea is to take the tail estimate based on the Kullback-Leibler divergence for random walks on expander graphs, first noted by Impagliazzo and Kabanets <cit.>, seriously. With it, it is possible to achieve a failure probability of δ using O(ln(δ^{-1})/ln((δ^*)^{-1})) repetitions of an inner algorithm with a failure probability δ^* > δ. Note that the same cannot be done with the standard Gillman-type Chernoff bounds <cit.>. This allows the two-stage expander construction that we needed. As far as I can tell, this strategy is new and has not been used before.
Błasiok <cit.> and Kane et al. <cit.> also discuss strong tracking properties for the sequential streaming algorithm. Their methods do not scale into the distributed stream model, because the possible number of reached states is exponentially larger than the number of possible states in the sequential case. An interesting question is whether there are different approaches for the distributed streams model or complexity bounds with respect to the number of participating processes or total number of stream elements, with which strong-tracking properties can be derived.
Another interesting question is whether the two-stage expander construction can somehow be collapsed into a single stage. For that, it is best to consider the following non-symmetric aggregate:
P_{ω ∼ ℰ(ℰ(S, exp(-l (ln l)^3), l), exp(-l/m), m)}( ∑_{i ∈ [m]} [ ∑_{j ∈ [l]} X(ω_{ij}) ≥ c:dev_bound l ] ≥ m/2 ) ≤ exp(-Ω(lm))
where X may be an unbounded random variable with, e.g., a sub-gaussian distribution. Indeed, the bound on the count of too-large cut-off values for Algorithm <ref> turns out to be a tail estimate of the above form. I tried to obtain such a bound using only a single-stage expander walk but did not succeed without requiring too large spectral gaps, i.e., λ^{-1} ∈ ω(1) for m ≪ l. There is a long list of results on more advanced Chernoff bounds for expander walks <cit.> and investigations into more general aggregation functions (instead of summation) <cit.>, but I could not use any of these results/approaches to avoid the two-stage construction.
This suggests that either there are more advanced results to be found or multi-stage expander walks are inherently more powerful than single-stage walks.
§ PROOF OF LEMMA <REF>
*
Let μ_k := P_{v ∼ V}( e^k ≤ f(v) ) ≤ exp(-e^k k^3) for k ≥ 3. We will show
L_k := P_{w ∼ Walk(G,l)}( ∑_{i ∈ [l]} [e^k ≤ f(w_i)] ≥ l e^{-k} k^{-2} ) ≤ exp(-l-k+2) for all k ≥ 3
by case distinction on the range of k:
Case k ≥max(ln l,3):
In this case the result follows using Markov's inequality. Note that the random walk starts in and remains in the stationary distribution, and thus for any index i ∈ [l] the i-th walk step w_i is uniformly distributed over V; hence:
L_k ≤ e^k k^2 l^{-1} E_{w ∼ Walk(G,l)} ∑_{i ∈ [l]} [e^k ≤ f(w_i)] = e^k k^2 P_{v ∼ V}( e^k ≤ f(v) )
≤ e^k k^2 exp(-e^k k^3 ) = exp( k + 2 ln k - e^k k^3 ) ≤exp( 2 k - e^k (k^2 + 2) )
≤ exp( 2 k - e^k k^2 - e^k - e^k ) ≤exp ( -l -k + 2 )
Here we use that k^3 ≥ k^2 + 2 and e^k ≥ k for k ≥ 3 and e^k ≥ l.
Case 3 ≤ k < ln l:
Then we have
L_k ≤ exp( -l (e^-k k^-2ln ((μ_k + λ)^-1) - 2e^-1) ) using Lemma <ref>
≤ exp( -l (e^-k k^-2 (e^k k^3 - ln 2) - 2e^-1) ) ≤exp( -l ( k - e^-k k^-2ln 2 - 2e^-1) )
≤ exp( -l ( k - 1) ) ≤exp( -l - k + 2 )
Concluding the proof of Eq. <ref>.
Note that:
∑_{i ∈ [l]} f(w_i) ≤ e^2 l + ∑_{i ∈ [l]} ∑_{k ≥ 2} e^{k+1} [e^k ≤ f(w_i) < e^{k+1}]
≤ e^2 l + ∑_{i ∈ [l]} ( ∑_{k ≥ 2} e^{k+1} [e^k ≤ f(w_i)] - ∑_{k ≥ 2} e^{k+1} [e^{k+1} ≤ f(w_i)] )
≤ (e^2 + e^3) l + (e-1) ∑_i ∈ [l]( ∑_k ≥ 3 e^k [e^k ≤ f(w_i)] )
Hence:
P_{w ∼ Walk(G,l)}( ∑_{i ∈ [l]} f(w_i) ≥ c:dev_bound l ) ≤ P_{w ∼ Walk(G,l)}( ∑_{k ≥ 3, i ∈ [l]} e^k [e^k ≤ f(w_i)] ≥ l )
≤ P_{w ∼ Walk(G,l)}( ⋁_{k ≥ 3} ∑_{i ∈ [l]} [e^k ≤ f(w_i)] ≥ l e^{-k} k^{-2} )
≤ ∑_{k ≥ 3} L_k ≤ ∑_{k ≥ 3} exp( -l - k + 2 ) ≤ exp(-l) .
§ BALLS AND BINS
Let Ω = U([r] → [b]) be the uniform probability space over the functions from [r] to [b] for b ≥ 1 and 0 ≤ r ≤ b, and let
X(ω) = |ω([r])| be the size of the image of such a function. This models throwing r balls into b bins independently, where X is the random variable counting the number of hit bins. Moreover, let E_i = {ω | i ∈ ω([r])} be the event that bin i was hit.
Note that X(ω) = ∑_{i ∈ [b]} [E_i(ω)].
We want to show that
E_{ω∼Ω} X(ω) = b (1-(1-1/b)^r)    Var_{ω∼Ω} X(ω) ≤ r(r-1)/b
E_{ω∼Ω} X(ω) = b (1-(1-1/b)^r)
First note that:
P(¬E_i) = P_{ω∼Ω}( ω([r]) ⊆ [b] ∖ {i} ) = (1-1/b)^r,
which can be seen by counting the number of functions from [r] to [b] ∖ {i}.
Hence:
E X = ∑_{i ∈ [b]} ( 1 - P(¬E_i) ) = b (1 - (1-1/b)^r )
Var_{ω∼Ω} X(ω) ≤ r(r-1)/b
Note that for r ≤ 1: Var X = 0 because X is constant. For r ≥ 2:
Var X = E X^2 - (E X)^2 = ∑_{i,j ∈ [b]} P(E_i ∧ E_j) - P(E_i) P(E_j)
= ∑_{i,j ∈ [b]} (1 - P(¬E_i) - P(¬E_j) + P(¬E_i ∧ ¬E_j)) - (1-P(¬E_i)) (1-P(¬E_j))
= ∑_{i,j ∈ [b]} P(¬E_i ∧ ¬E_j) - P(¬E_i) P(¬E_j)
= ∑_{i ≠ j ∈ [b]} (1-2 b^{-1})^r - (1-b^{-1})^{2r} + ∑_{i ∈ [b]} (1-b^{-1})^r - (1-b^{-1})^{2r}
= b (b-1) [ (1-2 b^{-1})^r - (1-b^{-1})^{2r} ] + b [ (1-b^{-1})^r - (1-b^{-1})^{2r} ]
= b^2 [ (1-2 b^{-1})^r - (1-b^{-1})^{2r} ] + b [ (1-b^{-1})^r - (1-2 b^{-1})^r ]
= -r ξ_1^{r-1} + r ξ_2^{r-1}    for some ξ_1 ∈ (1-2b^{-1}, (1-b^{-1})^2), ξ_2 ∈ (1-2b^{-1}, 1-b^{-1})
≤ -r (1-2b^{-1})^{r-1} + r (1-b^{-1})^{r-1}
= r (r-1) b^{-1} ξ_3^{r-2}    for some ξ_3 ∈ (1-2b^{-1}, 1-b^{-1})
≤ r(r-1)/b
The lines where the variables ξ_i were introduced follow from the application of the mean value theorem.
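A small Monte Carlo simulation (illustrative only; the parameter values are arbitrary) confirms both the exact expectation and that the variance bound, while not tight, is safe:

import random

def simulate(r, b, trials=20000, seed=1):
    rng = random.Random(seed)
    hits = []
    for _ in range(trials):
        bins = {rng.randrange(b) for _ in range(r)}   # r balls into b bins
        hits.append(len(bins))
    mean = sum(hits) / trials
    var = sum((h - mean) ** 2 for h in hits) / trials
    return mean, var

r, b = 30, 64
mean, var = simulate(r, b)
exact_mean = b * (1 - (1 - 1 / b) ** r)
var_bound = r * (r - 1) / b
# mean is close to exact_mean, and var stays below var_bound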
The above is a stronger version of the result by Kane et al. <cit.>[Lem. 1]. Their result has the restriction that r ≥ 100 and a superfluous factor of 4.
Interestingly, it is possible to obtain a similar result for k-independent balls into bins. For that, let Ω' be a probability space of functions from [r] to [b] where
P_{ω∼Ω'}( ⋀_{i ∈ I} ω(i) = x(i) ) = b^{-|I|}
for all I ⊆ [r] with |I| ≤ k and all x : I → [b].
As before, let us denote by X'(ω) := |ω([r])| the number of bins hit by the r balls. Then the expectation (resp. variance) of X' approximates that of X with increasing independence k; more precisely:
If ε ≤ e^{-2} and k ≥ 1 + 5 ln(b ε^{-1}) (ln(ln(b ε^{-1})))^{-1} then:
|E_{ω' ∈ Ω'} X'(ω') - E_{ω ∈ Ω} X(ω)| ≤ ε r    |Var_{ω' ∈ Ω'} X'(ω') - Var_{ω ∈ Ω} X(ω)| ≤ ε^2 .
This has been shown[Without the explicit constants mentioned here.] by Kane et al. <cit.>[Lem. 2]. The proof relies on the fact that
X = ∑_{i ∈ [b]} min(1, Y_i), where Y_i denotes the random variable that counts the number of balls in bin i.
It is possible to show that E (Y_i)^j = E (Y'_i)^j for all j ≤ k (where Y'_i denotes the same notion over Ω').
Their approach is to approximate min(1,·) with a polynomial g of degree k.
Since E g(Y_i) = E g(Y'_i), they can estimate the distance between E X and E X' by bounding the expectation of each approximation error: g(Y_i) - min(1,Y_i). Obviously, higher-degree polynomials (and hence increased independence) allow better approximations. The reasoning for the variance is analogous.
If k ≥c:approx_bin_balls_1ln b + c:approx_bin_balls_2 then:
L := P_{ω' ∈ Ω'}( |X'(ω') - ρ(r)| > 9 b^{-1/2} r ) ≤ 2^{-6}
This follows from Lemma <ref>, <ref> and the previous lemma with ε = min(e^{-2}, b^{-1/2}); in particular:
Var X' ≤ Var X + 1/b ≤ r^2/b, and hence:
L ≤ P_{ω' ∈ Ω'}( |X'(ω') - E X'| + |E X' - ρ(r)| ≥ 9 b^{-1/2} r )
≤ P_{ω' ∈ Ω'}( |X'(ω') - E X'| + b^{-1/2} r ≥ 9 b^{-1/2} r )
≤ P_{ω' ∈ Ω'}( |X'(ω') - E X'| ≥ 8 b^{-1/2} r )
≤ P_{ω' ∈ Ω'}( |X'(ω') - E X'| ≥ 8 √(Var X') ) ≤ 2^{-6}
where the last line follows from Chebychev's inequality.
§ TABLE OF CONSTANTS
§ FORMALIZATION
As mentioned in the introduction, the proofs in this work have been machine-checked using Isabelle. They are available <cit.> in the AFP (Archive of Formal Proofs) <cit.>, a site hosting formal proofs verified by Isabelle. Table <ref> references the corresponding facts in the AFP entries. The first column refers to the lemma in this work. The second is the corresponding name of the fact in the formalization. The formalization can be accessed in two distinct forms: as a source repository with distinct theory files, and as two "literate-programming-style" PDF documents with descriptive text alongside the Isabelle facts (optionally with the proofs). The latter is much more informative. The third column of the table refers to the name
of the corresponding source file, while the last column contains the reference of the AFP entry, including the section in the PDF versions.
Lemma | Formalized Entity | Theory | Src.
Thm. <ref> | This theorem from Impagliazzo and Kabanets was stated for motivational reasons and is never used in any of the following results, hence it is not formalized. | - | -
Thm. <ref> | theorem hittingproperty | Expander_Graphs_Walks | <cit.>
Thm. <ref> | theorem klchernoffproperty | Expander_Graphs_Walks | <cit.>
Lem. <ref> | lemma walktailbound | DDE_Tail_Bounds | <cit.>
Lem. <ref> | lemma deviationbound | DDE_Tail_Bounds | <cit.>
Lem. <ref> (1) | lemma singleresult | DDE_Inner_Algorithm | <cit.>
Lem. <ref> (2) | lemma mergeresult | DDE_Inner_Algorithm | <cit.>
Lem. <ref> | lemma cutofflevel | DDE_Cutoff_Level | <cit.>
Lem. <ref> | lemma e1 | DDE_Accuracy_WO_Cutoff | <cit.>
Lem. <ref> | lemma e2 | DDE_Accuracy_WO_Cutoff | <cit.>
Lem. <ref> | lemma e3 | DDE_Accuracy_WO_Cutoff | <cit.>
Lem. <ref> | lemma e4 | DDE_Accuracy_WO_Cutoff | <cit.>
Lem. <ref> | lemma accuracywithoutcutoff | DDE_Accuracy_WO_Cutoff | <cit.>
Lem. <ref> | lemma accuracysingle | DDE_Accuracy | <cit.>
Lem. <ref> | lemma estimateresult1 | DDE_Accuracy | <cit.>
Thm. <ref> | lemma estimateresult | DDE_Accuracy | <cit.>
Thm. <ref> (1) | theorem correctness | DDE_Outer_Algorithm | <cit.>
Thm. <ref> (2) | theorem spaceusage | DDE_Outer_Algorithm | <cit.>
Thm. <ref> (3) | theorem asymptoticspacecomplexity | DDE_Outer_Algorithm | <cit.>
Lem. <ref> | lemma expballsandbins | DDE_Balls_And_Bins | <cit.>
Lem. <ref> | lemma varballsandbins | DDE_Balls_And_Bins | <cit.>
Lem. <ref> (1) | lemma expapprox | DDE_Balls_And_Bins | <cit.>
Lem. <ref> (2) | lemma varapprox | DDE_Balls_And_Bins | <cit.>
Lem. <ref> | lemma deviationbound | DDE_Balls_And_Bins | <cit.>
Table <ref>: References to the formal entities.
entry_id: http://arxiv.org/abs/2307.02493v1
published: 20230704055644
title: FREEDOM: Target Label & Source Data & Domain Information-Free Multi-Source Domain Adaptation for Unsupervised Personalization
authors: Eunju Yang, Gyusang Cho, Chan-Hyun Youn
primary_category: cs.LG
categories: cs.LG, cs.AI
FREEDOM: Target Label & Source Data & Domain Information-Free Multi-Source Domain Adaptation
for Unsupervised Personalization
Eunju Yang, Student Member, IEEE,
Gyusang Cho,
and Chan-Hyun Youn, Senior Member, IEEE
Eunju Yang, Gyusang Cho, and Chan-Hyun Youn are with the Department
of Electrical Engineering, KAIST, Korea,
e-mail: {yejyang, gyusang.cho, chyoun}@kaist.ac.kr.
Manuscript received Feb 10, 2023. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
From a service perspective, Multi-Source Domain Adaptation (MSDA) is a promising scenario for adapting a deployed model to a client's dataset. It provides adaptation without target labels and supports the case where the source dataset is constructed from multiple domains. However, it is impractical in that its training heavily relies on prior domain information about the multi-source dataset, namely how many domains exist and the domain label of each data sample. Moreover, MSDA requires both source and target datasets simultaneously (physically), causing storage limitations on the client device or data privacy issues when transferring client data to a server. For a more practical scenario of model adaptation from a service provider's point of view, we relax these constraints and present a novel problem scenario of Three-Free Domain Adaptation, namely TFDA, where 1) target labels, 2) the source dataset, and, most importantly, 3) source domain information (domain labels + the number of domains) are unavailable.
Under this problem scenario, we propose a practical adaptation framework called FREEDOM. It leverages the power of generative modeling, disentangling data into class and style aspects, where style is defined as the class-independent information of the source data and is modeled with a nonparametric Bayesian approach. In the adaptation stage, FREEDOM aims to match the source class distribution with the target's under the philosophy that the class distribution is consistent even if the style is different; afterward, only part of the classification model is deployed as the personalized network. As a result, FREEDOM achieves state-of-the-art or comparable performance even without domain information, with a reduced final model size on the target side that is independent of the number of source domains.
Source-Free Domain Adaptation, Multi-Source-Free Domain Adaptation, Multi-Source Domain Adaptation.
§ INTRODUCTION
The domain shift problem caused by clients' dissimilar environments is one of the common obstacles for deep-learning-based service providers, as the applications are known to be data-dependent. This problem originates from the distribution discrepancy between the client (target) and server (source)-side datasets <cit.>. Additional adaptation with client data can be an alternative, but providing additional annotation to client data is burdensome in most cases. As a possible workaround, unsupervised domain adaptation (UDA) <cit.> and its downstream, multi-source domain adaptation (MSDA) <cit.> aim to adapt a model to an unlabeled target by leveraging labeled source dataset.
In particular, MSDA considers the more plausible situation in which the source dataset consists of samples from multiple domains.
Despite these technological advances, many factors still exist to consider when projecting real-world service scenarios onto MSDA's. Because of privacy issues on both source and target data, it is almost forbidden to transfer the dataset to each other. In other words, the client's unlabeled data can not be transferred to the server and vice versa. Moreover, sending multiple source datasets to the client may suffer storage limitations. Recent Source-Free UDA (SFUDA) has been introduced to address this situation by only sending a source-side model, not the dataset <cit.>. Multi-source-Free domain adaptation (MSFDA) approaches are also explored to support the multi-source cases <cit.>.
Existing MSFDA approaches <cit.> usually train multiple models, one per source dataset, and weave them together for the target, which requires domain information as prior knowledge. However, there are two additional factors to take into account: 1) maintaining domain labels is pricey; 2) handcrafted information on the number of domains in the training dataset can be an overwhelming prior.
It is a well-known problem of domain adaptation that domain information may not be provided <cit.>. For example, a practitioner may collect training datasets from multiple channels <cit.>, in which case the number of domains can be intractable. Besides, the number of domains can be an overwhelming prior; a single dataset can consist of multiple latent domains, and treating a dataset that is believed to come from a single domain as such may not be optimal <cit.>.
In this paper, we relax the previously unhandled condition on domain information along with the MSFDA scenario, coining three-free domain adaptation (TFDA): a domain adaptation scenario free from 1) target labels, 2) the source dataset at adaptation time, and 3) domain information, which is more pragmatic than the previous scenarios described in Figure <ref>. Here, domain information embraces both multi-source domain labels and the number of source domains.
Under the scenario of the TFDA, we propose a three-FREE DOMain adaptation method termed FREEDOM that trains a single model from a compound multi-source dataset and deploys it to a client, supporting unsupervised adaptation to the client dataset.
Since domain information is not provided and target adaptation must be performed without source datasets, we propose peripheral modules to transfer knowledge. We define 'style' as the remainder after subtracting typical class knowledge from the data; style coincides with, or subsumes, what we usually regard as the domain. We train two encoders and a decoder to disentangle class and style embeddings from the given data while reconstructing its marginal distribution. As a remedy for the absence of domain information, we adopt a nonparametric Bayesian prior for the style encoder. For target adaptation, FREEDOM leverages the trained encoders and decoder from the source side and modulates the class encoder to transform a target input into the most likely embedding in the original class space while freezing the classifier layer.
The ultimate goal of FREEDOM is to adapt the class encoder via hypothesis transfer <cit.>. Thus, the style encoder and decoder are exploited only to enforce stable adaptation in a self-supervised manner and are eventually discarded after tuning. Therefore, FREEDOM can have a lighter inference network than MSFDA methods, whose model size depends on the number of source domains <cit.>. We summarize our contributions as follows:
* We present a more pragmatic paradigm of Multi Source-Free Domain Adaptation with no domain information (domain labels + the number of domains), namely Three-Free Domain Adaptation (TFDA).
* We propose a disentangling-based FREEDOM with a novel alternating adaptation method to match the source and target class distribution; it exemplifies how to employ a generative model in source-free domain adaptation.
* The size of FREEDOM's final adaptation model is independent of the number of source domains, yielding a smaller personalized model without any additional operations.
§ RELATED WORKS
In this section, we introduce related works and provide comparisons across various MSDA scenarios in Table <ref> to clarify the position of this study.
Unsupervised Domain Adaptation aims to boost the accuracy of unlabeled targets by exploiting labeled source data. To this end, the datasets are used to learn features that can reduce the gap between domains represented by ℋ-divergence <cit.>.
Two popular streams for minimizing the gap are measuring the discrepancy between the two domains <cit.> and adversarial training <cit.>. The discrepancy-based methods perform optimization by computing a metric such as the maximum mean discrepancy (MMD) <cit.>. Adversarial training employs a gradient reversal layer (GRL) to find a feature space that does not discriminate between domains while still enabling accurate classification <cit.>.
Furthermore, other generative model-based studies have been conducted for domain alignment <cit.>.
However, since they all presume that only a single source is given, they are not practical in the real world.
Multi-Source Domain Adaptation (MSDA) handles unsupervised domain adaptation employing a source dataset with multiple domains, so it should consider domain discrepancies among various sources as well as domain gaps between the source and target.
The main branch of MSDA is hypothesis combination, where a hypothesis is first found for each pair of a single source and the target, and the ultimate model for the target is implemented as their weighted mixture. Mansour et al. <cit.> and Hoffman et al. <cit.> presented the theoretical support for this hypothesis mixture for MSDA. Recent studies following this lineage train a model for each source-target pair and ensemble them; the algorithms focus on how to find a common hypothesis for each pair and how to combine them.
For pair training, adversarial learning <cit.> or moment matching <cit.> is widely adopted; for weight assignment, the perplexity score <cit.>, weighted averaging <cit.>, or the Wasserstein distance <cit.> is utilized.
Unlike these, <cit.> extracts prototypes from multiple sources as another form of knowledge.
Another branch is to train a single feature extractor across multiple domains. For example, <cit.> implicitly aligns all domain distributions by adopting multiple classifiers while sharing a feature extractor. <cit.> trains a network with mDA layers that can provide domain-wise normalization, generating a network with a normalization layer with a different moment for each domain.
These MSDA methods commonly require domain labels to align the multiple domains. However, identifying domains within a multi-domain dataset is costly.
Latent Domain Discovery (DD) addresses this practical issue of finding domain labels through clustering <cit.> or a discriminative network <cit.>.
Hoffman et al. <cit.> and Wu et al. <cit.> adopt the Gaussian mixture model and hierarchical clustering to find domain identifiers. Meanwhile, <cit.> employ an additional branch for domain discrimination, where the inference result is directly used in the MSDA network. Even though these domain discovery studies alleviate the cost of labeling in the domain aspect, they still require knowledge of the number of source domains as a prior, so they are not entirely free from domain information, unlike FREEDOM.
Source-Free Domain Adaptation is introduced to handle the challenging situation where existing DA methods always require an enormous volume of source data (even from multiple domains). For example, <cit.> resolves the problem via hypothesis transfer with self-supervised pseudo-labeling; <cit.> use self-entropy for pseudo-label selection. Alternatively, <cit.> generates target-like data for model adaptation.
However, these all presume a single-source situation, and their performance degrades with multiple source domains.
Recent Multi-Source-Free Domain Adaptation (MSFDA) studies deal with this via confidence anchors <cit.> or hypothesis transfer with optimization-based ensembling <cit.>. However, despite their outstanding contributions, they still rely on domain labels, and their target model's size increases as the number of source domains increases. Thus, in this paper, FREEDOM considers a more plausible situation where domain information is not given and the target model is independent of the number of source domains.
§ FREEDOM
FREEDOM aims to resolve the TFDA scenario, where a model is trained with a multi-source dataset without domain information and deployed into a client device to support adaptation with the unlabeled target dataset.
Let 𝒟_src = {(x_n, y_n)}_n=1^N_s and 𝒟_tgt = {x̃_n}_n=1^N_t denote the multi-source and target datasets; their data distributions are different. The client's model is adapted to 𝒟_tgt, leveraging the deployed model without any source data sample. So, server-side training is the only way to determine which knowledge to transfer from the multi-source dataset 𝒟_src.
Following the TFDA assumption, the source dataset may consist of training samples from multiple domains while this information is not available, which complicates the problem since domain-wise model training is not possible and additional manipulation is required. Besides, it is desirable to hand over the burden of adaptation to the server as much as possible, since target adaptation is assumed to be performed on limited hardware. Therefore, FREEDOM consists of two training procedures: 1) source-side (server) training and 2) target-side (client device) adaptation, as described in Fig. <ref>. This distinguishes it from precedent MSFDA studies <cit.>, which presume that multiple models are given by regular training.
The source-side algorithm is required to learn beneficial information for target adaptation, which should also be agnostic to the domain information. To accomplish this, FREEDOM takes three pillars of philosophy. First, we posit that every input data consists of class and style knowledge and build a disentangling model comprised of two encoders and a decoder. From the question `Is domain information necessary?', we find that the domain information is auxiliary in achieving the primary goal, and we chiefly need common class knowledge. Thus, if we have a way to draw the gist knowledge, the handcrafted domain labels are unnecessary. Based on this, we define style as a non-class aspect, which means a residual obtained by subtracting class information from the data distribution.
Second, we discover the prior distributions of each class, which are exploited as the blueprint for the target's class space. Thus, we posit that a class embedding follows the Gaussian Mixture Model (GMM); the source-side algorithm finds moments of each class's Gaussian distribution. Then by regularization transfer, we can guide the target's class encoder to find the space.
Finally, we define the prior style distribution with a nonparametric Bayesian method to make it serve without information on the number of source domains; the source-side algorithm regards the style aspect following Dirichlet Process Mixture (DPM).
The target-side adaptation adopts hypothesis transfer, where the classifier is fixed <cit.>, so we only have to match the class embedding space with the original embedding space.
To this end, we exploit the generative model given by the server together with pseudo-labels.
On top of the entropy maximization driven by the pseudo-labels, it adapts the classification model to the target by maximizing the evidence of the target. The rationale is that the target distribution can be described as a compound of the intrinsic class aspect obtained from the multi-source dataset and the target's style aspect. To find the target distribution stably, FREEDOM performs an alternating adaptation relying on the class prior. Figure <ref> summarizes the overall behavior of FREEDOM following the TFDA scenario.
§.§ Probabilistic Graphical Model of FREEDOM
§.§.§ Generative model
Before introducing the algorithms' details, we delineate the underlying generative model that consists of the FREEDOM framework. We posit that input x_n ∈ℝ^D is generated from class embedding z_n^class∈ℝ^H_c and style embedding z_n^style∈ℝ^H_s, where each embedding follows GMM and DPM, respectively.
The generative model of an observation x_n follows the process:
1. Choose latent class embedding z_n^class
* y_n ∼Mult(π^class), where π^class∈Δ^C-1
* z_n^class| y_n ∼𝒩(z|μ_y_n^class, Σ_y_n^class)
2. Choose latent style embedding z_n^style
* π^style|γ∼GEM(γ)
* s_n |π^style∼Mult(π^style)
* μ^style_s ∼𝒩(μ| 0, 𝕀)
* σ_s,h^style∼Gamma(1, 1)
* z_n^style| s_n ∼𝒩(z|μ_s_n^style, Σ_s_n^style), where Σ_s_n^style = σ^style_s_n·𝕀
3. Choose a data point from the two embeddings
* x∼𝒩(x|μ_x, Σ_x), where [μ_x, logΣ_x] = f_Θ([z^class:z^style]). Here, Θ is the decoder parameter,
where all notations are summarized in Table <ref>.
First, the class embedding, the hidden feature to discriminate into C categories, follows a class-specific Gaussian distribution 𝒩(z|y_y_n^class, Σ_y_n^class) specified with its label y_n ∈ [C]. Specifically, the class label y_n is determined by the multinomial distribution parameterized by π^class = {π_y^class}_y=1^C∈ℝ_+^C, where ∑_y=1^Cπ_y^class = 1. Unlike class embeddings, which have explicit latent identifiers, it is challenging to know the number of mixtures for style embedding in advance. Thus, we postulate its prior distribution in a nonparametric Bayesian manner, especially DPM. As with the class embedding, let π^style = {π^style_s}_s=1^∞ be the prior probability of the style identifier, except having an infinite length; it is constructed with the Stick-Breaking process by additional random variable β_s, which follows the beta distribution. Then, we can define π_s = β_s ∏_l=1^s-1 (1-β_l); summing up the two processes, we can represent it with the Griffiths-Engen-McCloskey distribution (GEM). The given style identifier s_n defines style-conditional distribution as Gaussian 𝒩(z|y_s_n^style, Σ_s_n^style), where its mean and variance follow Normal and Gamma distributions, respectively.
Finally, we can construct the data x_n; we presume that the evidence follows Gaussian 𝒩(x|μ_x, Σ_x).
The parameters are derived by the decoder network f_Θ, i.e., the decoder returns the mean μ_x and variance Σ_x from the concatenated tensor of the two embeddings. Figure <ref> describes the generative process; we can factorize the joint probability as follows:
p(x, z^class, z^style, y, s) = p(x|z^class, z^style) p(z^class| y) p(y) p(z^style| s) p(s).
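For concreteness, the generative process above can be sketched as follows. This is a minimal illustration only: the dimensions, the truncation level of the stick-breaking construction, the hyperparameter values, and the linear decoder are assumptions made for the sketch and are not the settings of the paper.

import torch

# Illustrative sizes (assumptions, not the paper's settings)
C, H_c, H_s, D, T = 10, 16, 8, 32, 20   # classes, class/style dims, data dim, DPM truncation

# Class prior: one Gaussian per class (GMM)
pi_class = torch.full((C,), 1.0 / C)
mu_class, sigma_class = torch.randn(C, H_c), torch.ones(C, H_c)

# Style prior: truncated stick-breaking (GEM) construction of the DPM weights
gamma = 1.0
beta = torch.distributions.Beta(1.0, gamma).sample((T,))
sticks = torch.cumprod(torch.cat([torch.ones(1), 1.0 - beta[:-1]]), dim=0)
pi_style = beta * sticks                              # approximately sums to 1 for large T
mu_style, sigma_style = torch.randn(T, H_s), torch.ones(T, H_s)

decoder = torch.nn.Linear(H_c + H_s, 2 * D)           # stands in for f_Theta, returns [mu_x, log Sigma_x]

def sample(n):
    y = torch.multinomial(pi_class, n, replacement=True)                   # class label
    z_class = mu_class[y] + sigma_class[y] * torch.randn(n, H_c)
    s = torch.multinomial(pi_style, n, replacement=True)                   # style identifier
    z_style = mu_style[s] + sigma_style[s] * torch.randn(n, H_s)
    mu_x, log_var_x = decoder(torch.cat([z_class, z_style], dim=1)).chunk(2, dim=1)
    x = mu_x + torch.exp(0.5 * log_var_x) * torch.randn(n, D)
    return x, y, s

x, y, s = sample(4)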
§.§.§ Inference model
We posit inference models of latent variables and find them throughout mean-field variational inference to discover the evidence distribution, where the joint variational distribution can be factorized as
q(z_n^style, z_n^class, s_n, y_n|x_n) = q_Φ^style(z_n^style|x_n) q_Φ^class(z_n^class|x_n) q(s_n|x_n) q(y_n|x_n).
First, for both the class and style embeddings, we presume that the variational distributions q(z^class|x) and q(z^style|x) follow normal distributions, as in their generative models; their means and variances are inferred by the encoders f_Φ_class and f_Φ_style, respectively, i.e., q(z_n^class|x_n) = 𝒩(z;μ̂^class, Σ̂^class) with [μ̂^class, logΣ̂^class] = f_Φ_class(x). We also need inference models for the remaining latent variables, the style identifier s_n and the class label y_n. We propose that inference from inputs can be replaced with inference from the corresponding latent embeddings via Lemma 1, and we establish the inference models on this basis. Inference of the style identifier and its style mode is treated as a DPM inference problem once the style embedding is given. For class labels, it is replaced by an inference network f_W_0:ℝ^H_c→ℝ^C trained in a supervised manner. The final inference model for classification is the compound function of the class encoder and the classifier head, i.e., f_W_0∘ f_Φ_class(x_n).
More details are provided in the subsequent section.
§.§ Source-side Training
We find all parameters of the generative and inference models on the source side as the means of transferring knowledge to the target.
Specifically, the training has two objectives: finding the prior distribution on the class embedding space through evidence likelihood maximization, and finding encoders that disentangle an input into its style and class aspects.
§.§.§ Evidence likelihood maximization
The FREEDOM parameters are adjusted to maximize the log-likelihood of the given multi-source domain samples 𝒟^src. However, maximizing it directly is nontrivial because the term is intractable. As a workaround, we employ a variational distribution q(z^style, z^class, s, y|x) approximating the true posterior, and by Jensen's inequality the objective can be replaced by evidence lower bound (ELBO) maximization as follows:
log p(x) = log∫∫∑_s∑_y p(x, z^style, z^class, s, y) dz^class dz^style ≥𝔼_q[logp(x, z^style,z^class, s, y)/q(z^style,z^class,s,y|x)] = ℒ_ELBO^SRC (x)
Then, by Eq. <ref> and <ref>, we can factorize the source-side ELBO into three terms:
ℒ_ELBO^SRC (x, y) = 𝔼_q(z^style, z^class|x) [log p_Θ(x|z^style, z^class)] - 𝒟_KL[q_Φ_class(z^class, y|x)|| p(z^class, y)] - 𝒟_KL[q_Φ_style(z^style, s|x)|| p(z^style, s)] := ℒ_recon(x) -ℒ_KL^class(x,y) -ℒ_KL^style(x),
where 𝒟_KL denotes the Kullback–Leibler (KL) divergence between the two distributions.
The first term represents the reconstruction loss (ℒ_recon); the remaining two imply the regularization term for class and style embedding to their respective prior, shorthand (ℒ_KL^class) and (ℒ_KL^style), respectively.
The reconstruction loss ℒ_recon is computed by comparing the evidence with samples reconstructed from the latent class and style embeddings taken from the two encoders. The latent embeddings are obtained through the reparameterization trick <cit.>, which injects auxiliary noise into the encoders' outputs, making the loss differentiable.
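The reconstruction term and the reparameterization trick can be illustrated with the following minimal sketch; the encoder and decoder modules are placeholders that output concatenated means and log-variances, and the diagonal Gaussian likelihood for x is an assumption made for the sketch.

import torch

def reparameterize(mu, log_var):
    # z = mu + sigma * eps keeps the sampling step differentiable w.r.t. mu and log_var
    return mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)

def gaussian_nll(x, mu_x, log_var_x):
    # negative log-likelihood of x under the decoder's diagonal Gaussian
    return 0.5 * (log_var_x + (x - mu_x) ** 2 / log_var_x.exp()
                  + torch.log(torch.tensor(2 * torch.pi))).sum(dim=1)

def reconstruction_loss(x, enc_class, enc_style, decoder):
    mu_c, log_var_c = enc_class(x).chunk(2, dim=1)
    mu_s, log_var_s = enc_style(x).chunk(2, dim=1)
    z = torch.cat([reparameterize(mu_c, log_var_c),
                   reparameterize(mu_s, log_var_s)], dim=1)
    mu_x, log_var_x = decoder(z).chunk(2, dim=1)
    return gaussian_nll(x, mu_x, log_var_x).mean()   # estimates -E_q[log p(x | z_class, z_style)]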
The class regularizer, the second term of Eq. <ref>, can be further disassembled as
ℒ_KL^class(x, y) := 𝒟_KL[q(z^class, y|x) || p(z^class, y)] = 𝔼_q[log q(z^class|x)] + 𝔼_q[log q(y|x)] - 𝔼_q[log p(z^class| y)] - 𝔼_q[log p(y)].
Minimizing it enforces the class-wise prior p(z^class| y) and simultaneously drives the class encoder f_Φ^class to map an input to a class embedding consistent with that prior.
We can streamline the loss function by using the one-hot vector of the given class label y∈𝕀^C in place of
the variational posterior of the class q(y|x). The tractable form of the class regularization loss is given in Appendix <ref>.
For the style embedding regularizer, however, the prior distribution is intractable due to its indefinite dimension, which hinders finding a tractable form of the loss ℒ_KL^style; specifically, the terms related to the style identifier s, e.g., 𝔼_q[log p(z^style| s)], 𝔼_q[log p(s)], and 𝔼_q[log q(s|x)], are intractable. Instead, we take a detour based on Lemma 1.
Lemma 1. The optimal variational posterior of the style identifier s is given as
q^*(s|x) = 𝔼_q_Φ^style(z^style|x)[p(s|z^style)].
The lemma indicates that we can use the style embedding z^style drawn from its variational posterior as a stepping stone to approximate the actual posterior of s. Inspired by this, we take an alternating update, decoupling the optimization into finding the style embedding's variational posterior and the prior distribution of the style embedding represented by the DPM. For the sake of explanation, let the subscript t denote the optimization round. Then, instead of directly minimizing ℒ_KL^style with respect to all hidden variables, we (1) explore the style distribution p_t(z^style) = ∑_s p(z^style| s)p(s) using style embeddings from the variational posterior q_Φ^style_t(z^style|x) and (2) leverage it to update the style encoder, i.e., find q_Φ_t+1^style(z^style|x).
(Step 1) Variational inference for style embedding's DPM:
Specifically, let Z_t^style = {z^style_n |z^style∼ q_t(z^style|x_n), x_n ∈𝒟_src} and ρ_t = {β_t, θ_t:={μ_s^style, Σ_s^style}, s_t} denote the set of style embeddings at round t and the set of hidden variables of the style embedding, respectively. We then find the posterior of ρ_t from Z_t^style; since the distributions are still intractable and the evidence is massive, we employ variational inference for the DPM posterior via the truncated stick-breaking approximation <cit.>.
The truncated stick-breaking distribution assumes that the total number of sticks representing β is fixed as T, which implies q(β_T = 1) = 1 and π_s^style = 0, ∀ s > T. Please note that this assumption is applied to variational distribution, not to the actual distribution; it alleviates the approximation difficulty. Then, the mean-field variational approximation for this DPM problem can be achieved by maximizing the following lower bound of DPM on z^style,
log p(z^style) ≥𝔼_q[log p(z^style|μ_s^style, Σ_s^style, s)] + 𝔼_q[log p(μ_s^style)] +∑_n=1^N_s𝔼_q[log p(s_n |β)] + 𝔼_q[log p(β|γ)] - 𝔼_q[log q(ρ)] := ℒ_ELBO^DPM(z^style),
where q(ρ) = ∏_l=1^T-1 q_γ_l(β_l) ∏_l=1^T q_ν_μ_l(μ_l) ∏_h=1^H_s q_a_lh, b_lh(σ_lh) ∏_n=1^N_s q_ϕ_n(s_n). Here, q_γ_l is a Beta distribution, q_ν_μ_l is a Normal distribution, q_a_lh, b_lh(σ_lh) is a Gamma distribution, and q_ϕ_n(s_n) is a multinomial distribution. We then find the optimal ρ^*_t maximizing ℒ_ELBO^DPM via coordinate ascent <cit.>.
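Since the implementation section states that the DPM inference is performed with Scikit-learn, Step 1 can be sketched with BayesianGaussianMixture under a Dirichlet-process prior; the truncation level, the concentration value, and the random stand-in embeddings below are illustrative assumptions.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

T = 20                                    # truncation level of the stick-breaking approximation
Z_style = np.random.randn(1000, 8)        # stand-in for style embeddings drawn from q_t(z^style|x)

dpm = BayesianGaussianMixture(
    n_components=T,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=1.0,       # plays the role of gamma in GEM(gamma)
    covariance_type="diag",               # matches the diagonal Sigma^style of the generative model
    max_iter=200,
)
dpm.fit(Z_style)                          # coordinate-ascent variational inference

# rho*_t: approximate posterior parameters of the style prior used in Step 2
mu_star, var_star = dpm.means_, dpm.covariances_
s_n = dpm.predict(Z_style)                # most likely style identifier per embedding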
(Step 2) Maximizing the style regularization term:
After finding the optimal ρ^* in (Step 1), it is exploited as an approximation of the prior distribution in calculating the style regularizer ℒ_KL^style, simplifying the problem with the finite dimension of the prior distribution.
Given the prior approximation, we should find only the variational parameter for the style embedding, that is, a style encoder Φ_style. Thus, we can remove irrelevant terms, simplifying the regularization term as
ℒ̅_KL^Style(x, β^*, μ^*, Σ^*)
= 𝔼_q[log q(z^style|x)] - 𝔼_q[log p(z^style|μ^*, Σ^*)].
§.§.§ Disentangling loss
Besides data likelihood maximization, FREEDOM must achieve disentanglement from the original input without domain information; the class and style embeddings should be independent while still reconstructing the data. To this end, we control the hyperparameter of each regularizer in turn, inspired by <cit.>.
By first tying the objective strongly to the class embedding's regularizer and loosening it later, we can control the route through which the class encoder receives information.
More specifically, the class encoder preferentially receives information from the class label, and then obtains the rest of the information after the style encoder has absorbed the marginal distribution, and vice versa.
In addition, we adopt two additional loss functions to clarify the knowledge independence between the class and style embedding.
The class helper is imposed to make the class encoder extract class-related knowledge, and we use label smoothing in its loss function to calibrate the classifier W_0.
ℒ_LS^class = - ∑_n=1^N_sỹ·log f_W_0( f_Φ_class(x_n)),
where ỹ = y · (1-l) + l/C with given calibration parameter l.
This calibration is leveraged later for confidence-based filtering in target adaptation.
As a style helper, we compute a cross-entropy through a gradient reversal layer (GRL) <cit.> placed ahead of a copied class hypothesis, so that the reversed gradient drives the style encoder away from class-discriminative features. This helper loss affects the style encoder only, not the class hypothesis W_0: we copy the hypothesis parameters W_0 into a style head W̅_0 that receives no updates.
ℒ_helper^style = - ∑_n=1^N_s y·log f_W̅_0(ℛ(f_Φ_style(x_n))),
where ℛ denotes the GRL layer.
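A minimal sketch of the two helper losses follows: a gradient reversal layer written as a custom autograd function, and a label-smoothing cross-entropy. Only the mechanism (reversed gradients for the style branch, smoothed targets for the class branch) is taken from the text; the function names and the assumption that the copied head accepts the style embedding's dimension are ours.

import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, gradient multiplied by -1 in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

def label_smoothing_ce(logits, y, l=0.15):
    # class helper: cross-entropy against y_tilde = y * (1 - l) + l / C
    C = logits.size(1)
    y_tilde = F.one_hot(y, C).float() * (1 - l) + l / C
    return -(y_tilde * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

def style_helper(z_style, y, copied_classifier):
    # cross-entropy computed through the GRL: the copied head receives no update,
    # while the reversed gradient pushes the style encoder to discard class information
    logits = copied_classifier(GradReverse.apply(z_style))
    return F.cross_entropy(logits, y)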
§.§.§ Summary of Source-side training
Summing all these up, the loss for the source-side training from the given multi-source dataset 𝒟_src and approximation of the style prior at round t is summarized as follows:
ℒ_t^src(x, y, β_style, β_class, ρ_t^*) = - 𝔼_q_Φ_t^style(z^style|x) q_Φ_t^class(z^class|x)[log p_Θ_t(x|z^style, z^class) ] + β_style·ℒ̅_KL^style(x, β^*_t, μ^*_t, Σ_t^*) + β_class·ℒ_KL^class(x,y) + ℒ_LS^class (x, y, l ; W_0) + ℒ_helper^style (x, y; W̅_0).
In summary, the source-side training of FREEDOM consists of two steps. First, it finds DPM parameters throughout truncated variational inference, and then, it minimizes ℒ_t^src with two different weights on the style and class regularizer, in turn. Algorithm <ref> delineates this procedure.
§.§ Target-side Adaptation
On the target side, it starts by taking FREEDOM's all parameters from the source side; it inherits most of the probabilistic model of the source, except that the class label is not observable, so we can leverage the loss functions defined in the previous section by tweaking them with pseudo labels. By the generative model of FREEDOM, we posit that the class-conditional distributions discovered from the source are reusable, while style embeddings should be substituted; if we can reuse the same class distribution with the same hypothesis, then the inference model's accuracy can be guaranteed. Thus, the main objective of target adaptation is to match the target class embedding's space with the source's one represented by the class prior distribution {𝒩(z;μ_y^class, Σ_y^class)}_y=1^C — let us call this `original'. The class encoder should transform target data into the most likely embedding among the original space. To this end, FREEDOM uses 1) target likelihood maximization while sticking to the original and 2) sample selection by confidence and moment matching.
§.§.§ Likelihood maximization with alternating adaptation
We take advantage of the generative model in order to recover the class's original space.
Please remember that the class regularizer in Eq. (<ref>) forces the encoder to adhere to the prior distribution; by preserving the original prior for the class embedding, we can impose inertia to stabilize adaptation.
In addition, if we can find an ideal target distribution, composed of the style knowledge of the target and the class knowledge of the original, that mimics the target distribution, then likelihood maximization amounts to finding a class encoder that maps into the original space.
This guides the class encoder away from the pitfall of non-original embedding spaces. We realize this by alternating updates of FREEDOM's modules — adapting the style encoder Φ_style, the decoder Θ, and the class encoder Φ_class one by one.
Target style encoder adaptation: First, we find the style embedding of the target via style encoder adaptation. Specifically, we find the style prior parameters ρ^* by maximizing Eq. (<ref>) and employ the result in encoder adaptation through a variant of Eq. (<ref>). The likelihood is computed from the style embedding z^style drawn from the variational distribution q_Φ^style(z|x) and a class sample ẑ^class drawn from the original space 𝒩(z;μ_ŷ^class, Σ_ŷ^class) according to the pseudo-label ŷ = arg max f_W_0(f_Φ_class(x)). The loss function of the style encoder adaptation is summarized as follows:
ℒ^style_tgt(x̃, ρ^*) = ℒ_helper^style (x̃, ŷ) + ℒ̅_KL^style(x̃, β^*, μ^*, Σ^*) -𝔼_q_Φ_style(z^style|x)[log p_Θ(x|z^style, ẑ^class)],
where ẑ^class∼𝒩(z;μ_ŷ^class, Σ_ŷ^class). Note that the class embedding used in the reconstruction loss is not drawn from the class encoder but from the original distribution; the style encoder thus learns the embedding that remains after subtracting the original class distribution from the target.
Target decoder adaptation: After the style encoder adaptation, the decoder is tuned to find an ideal target distribution consisting of the original class from the source and the target's style embedding.
The decoder only affects the reconstruction loss term in Eq. (<ref>), so its adaptation is conducted to minimize it; the reconstruction loss is computed similarly to the style encoder adaptation. In addition, we minimize the cross-entropy of the pseudo-label on the reconstructed target x̂ to force the reconstruction to carry more credible class information. The loss function for the target decoder adaptation is:
ℒ^dec_tgt = -𝔼_q_Φ_style(z^style|x)[log p_Θ(x|z^style, ẑ^class)] - ∑_n=1^N_tŷlog f_W_0(f_Φ_class(x̂)),
where x̂∼ p_Θ(x|z^style, ẑ^class).
Target class encoder adaptation:
So far, the style encoder and decoder have been updated to represent the target distribution implying the original space, so likelihood maximization intrinsically leads the class embedding space to the original. The target encoder is updated to minimize the following loss function:
ℒ_tgt^class (x̃) = - α^class_recon·𝔼_q(z^style, z^class|x̃) [log p(x̃|z^style, z^class)] + α^class_KL·ℒ_KL^class(x̃, ŷ) - α^class_helper·∑_n=1^N_tŷlog f_W_0(f_Φ_class(x̃)).
The class regularization term of the ELBO loss is computed to force the class embedding to be tied to a class conditional prior, chosen by the pseudo-label. The entropy maximization adapts the class encoder to contain more information on its inference result.
Alternating update for the target adaptation:
All of these adaptations are conducted in an alternating manner. One might instead consider source-like optimization of Eq. (<ref>) using pseudo-labels. However, such coercive optimization may find another class embedding space that maximizes the likelihood but does not accord with the original space, lowering the accuracy under the fixed hypothesis. The alternating optimization, in contrast, narrows down the optimization objective. To enhance this confinement, we additionally adopt a warm-up step that repeats the style and decoder updates prior to class encoder adaptation.
This warm-up aligns the decoder's distribution more closely with the target's, which clarifies the guiding role of the reconstruction loss in the encoder loss (<ref>). That is to say, it ensures that likelihood maximization does not drift into another class embedding space when updating the class encoder, but moves toward the original one.
The overall target side training is summarized in Algorithm <ref>.
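Structurally, the alternating target adaptation can be sketched as follows; the loss callables stand for the losses defined above, and the optimizer choices, learning rates, and warm-up length are placeholder assumptions rather than the exact settings of the algorithm.

import torch

def adapt_target(enc_style, enc_class, decoder, classifier, loader,
                 style_loss, dec_loss, class_loss, epochs=10, warmup=2):
    opt_style = torch.optim.Adam(enc_style.parameters(), lr=1e-4)
    opt_dec = torch.optim.Adam(decoder.parameters(), lr=1e-4)
    opt_class = torch.optim.Adam(enc_class.parameters(), lr=1e-4)  # the classifier stays frozen

    for epoch in range(epochs):
        for x in loader:
            with torch.no_grad():                       # pseudo-labels from the frozen hypothesis
                y_hat = classifier(enc_class(x)).argmax(dim=1)

            opt_style.zero_grad(); style_loss(x, y_hat).backward(); opt_style.step()
            opt_dec.zero_grad();   dec_loss(x, y_hat).backward();   opt_dec.step()

            if epoch >= warmup:                         # class encoder joins after the warm-up
                opt_class.zero_grad(); class_loss(x, y_hat).backward(); opt_class.step()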
§.§.§ Confident-based data selection
The alternating adaptation algorithm relies heavily on the quality of the pseudo-labels. Especially at the beginning of adaptation, noise in the pseudo-labels is fatal, so we select confident samples that are likely to be correct. To this end, FREEDOM exploits two different pieces of information: the class inference based on the original prior, i.e., γ^*_y = q^*_Φ_class(y|x), and the confidence of the result drawn from the inference network, ŷ = SoftMax(f_W_0(f_Φ_class(x))).
First, we check whether the inference results using the original prior and the classifier match, i.e., arg max_y γ^*_y = arg maxŷ, and only use matching samples in adaptation. Second, we keep only target samples whose confidence in the pseudo-label exceeds a given confidence level L, i.e., maxŷ≥ L. In the evaluation section, we validate the convergence of the confident batch ratio across target training. Here, the class label inference with the original, γ^*_y, is computed as 𝔼_q(z^class|x)[p(y|z^class)],
where its details and tractable form are described in Appendix <ref>.
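The two filters can be sketched as a boolean mask over a mini-batch; gamma_star stands for the original-prior-based posterior q^*(y|x) described above, and the threshold value is the confidence level L used later in the experiments.

import torch

def select_confident(logits, gamma_star, L=0.8):
    """Return a boolean mask of the samples kept for adaptation.

    logits:      classifier outputs f_W0(f_Phi_class(x)), shape (B, C)
    gamma_star:  class posterior computed from the original prior, shape (B, C)
    """
    probs = torch.softmax(logits, dim=1)
    match = probs.argmax(dim=1) == gamma_star.argmax(dim=1)   # agreement of the two inferences
    confident = probs.max(dim=1).values >= L                  # calibrated confidence threshold
    return match & confident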
§ EXPERIMENTAL EVALUATION AND DISCUSSION
In this section, we validate FREEDOM with extensive experiments, from quantitative to qualitative analysis. Prior to describing the empirical results, let us expound on the general experiment settings.
We then present the empirical analysis.
§.§ Experiment Configuration
§.§.§ Dataset
We evaluated FREEDOM on four popular MSFDA benchmarks: the Five-digit, Office, Office-Caltech, and Office-Home datasets. The Five-digit dataset is a digit-classification benchmark with ten classes and five domains: MNISTM, MNIST, SVHN, USPS, and SYNNUM. The Office dataset <cit.> is a multi-domain classification dataset with 31 classes, including Amazon (A), DSLR (D), and Webcam (W) as domains.
The Office-Caltech dataset is the intersection of the Office and Caltech datasets, consisting of 10 shared classes; it has four domains, adding Caltech (C).
The Office-Home dataset <cit.> is another MSDA benchmark with 65 classes containing four domains: Art (A), Clipart (C), Product (P), and Real-World (R).
§.§.§ Competing methods
We compared FREEDOM with diverse variants of MSDA methods.
As baseline MSDA methods, we took MDAN <cit.>, DCTN <cit.>, M3SDA <cit.>, MDDA <cit.>, LtC-MSDA <cit.>, and STEM <cit.>. They focus entirely on reducing the gap between the multiple source domains and the target dataset, without any source-free or domain information-free constraint.
In contrast, mDA <cit.> and MEC <cit.> consider the case where domain labels are not given but the number of source domains is, i.e., they are only partially free of domain information.
As a more challenging comparison, SFUDA approaches — BAIT <cit.>, PrDA <cit.>, SHOT <cit.>, and MA <cit.> — are also adopted as competing methods. For a fair comparison, we report the softmax average over the outputs from each source domain as reported by <cit.>. Finally, MSFDA methods, which are most comparable with the TFDA scenario, are adopted as baselines, e.g., DECISION <cit.> and CAiDA <cit.>.
§.§.§ Implementation
We implemented FREEDOM with PyTorch <cit.> and Scikit-learn <cit.>. In particular, we used the mixture module of Scikit-learn for the DPM's variational inference (used in Algorithms <ref> and <ref>).
When it comes to network architecture, we followed precedents.
We adopted the network architecture from <cit.> for the five-digit dataset while renovating it a little into FREEDOM's format — encoders and decoder structure.
We constructed the same network architecture with a pre-trained ResNet backbone for the Office, Office-Caltech, and Office-Home datasets. We used the same structure for both the style and class encoders; the input size of the decoder is twice that of each embedding, since the concatenation of the two is fed into the decoder. FC layers are used as encoders and decoder, where each encoder takes an embedding from the pre-trained ResNet-50. For the pre-trained parameters, we used the ResNet-50 weights officially deployed in PyTorch. Since the backbone network is only used for input generation, it is not updated during training, while the encoders, the decoder, and the classifier layer are.
All network details are described in Table <ref>.
We used Adam optimizer with β_1, β_2 = 0.5, 0.99 and StepLR scheduler for all training with a decay rate of 0.9.
We commonly adopt 0.1 and 5 as β_low and β_high for alternating training parameters.
In the source-side training, we followed the pre-training strategy, widely adopted in variational model training <cit.>, to train the network without any variational loss before starting the regular training. The label smoothing parameter l is set to 0.15. More detailed hyper-parameters for each dataset are described in the following subsections.
§.§ Evaluation on Five-digit dataset
§.§.§ Quantitative Analysis
We first analyzed FREEDOM's performance on the Five-digit dataset. Following the TFDA scenario, we train the source-side model on all domains except the target. For source-side training, we trained the model for 200 epochs based on Algorithm <ref> with ten epochs of pre-training. After source-side learning, the final model is deployed and adapted to the target. In target adaptation, we set the confidence level to 0.8; according to the confident batch ratio, we imposed different weights on the class adaptation loss. For example, we imposed more weight on the class regularization term when the baseline model retrieves enough confident samples (conf1 in Table <ref>); if not, we gave more weight to the reconstruction loss (conf2).
Table <ref> summarizes the results. It contains FREEDOM's adaptation accuracy for each target domain and baseline, that is, test accuracy right after source-side training without any adaptation. For better comparison, its first three columns explain the characteristics of each method, which denote whether it supports multi-source (MS), source-free (SF), and domain information-free (DIF), respectively. Finally, all numerical results in the table describe the average value measured with four different random seeds, considering the characteristics of the variational model.
The results show that FREEDOM has, on average, the best performance for all target data, even though it satisfies the tighter constraints. The baseline performance is poor without adaptation because the distribution deviation between domains is significant, yet FREEDOM successfully adapts the model without target labels and source datasets. Fig. <ref> demonstrates the convergence graph and the trend of the confidence batch ratio across the adaptation. The confident batch ratio is the number of confident samples normalized with its mini-batch size. Interestingly, the confidence batch ratio reflects its convergence, which can be used as a metric for unsupervised adaptation. The measure gives clues as to when to stop adaptation and hyperparameter tuning.
For the case where the initial confidence ratio is low, e.g., and , it is desirable to give more weight to reconstruction loss rather than the regularization term in (<ref>). Thus, by the metric, we applied conf1 to , , and targets and conf2 to and targets, leading to outstanding performance in adaptation.
§.§.§ Qualitative Analysis
Besides the target adaptation performance, we need to examine whether the FREEDOM model works as intended.
To this end, we examine the target adaptation model's class embedding and style embedding spaces.
FREEDOM aims to adapt a class encoder that transforms any target sample into the original space discovered by source-side training.
In other words, we expect the class embedding space on the source and target sides to be identical.
Figure <ref> demonstrates the tSNE plot of the class embedding of the source from its source-side model (baseline) and the target's class embedding from its adapted model; the plot results imply that the source and target class embeddings share the same space.
Moreover, the class space is expected to have a different distribution for each class, and the result shows that the space is not only shared by the target and the sources but also exhibits ten separate class-conditional distributions.
We explore how the style encoder works. FREEDOM network disentangles data into style and class in order to match class space for both source and target by adapting the networks. Specifically, the style is defined as non-class knowledge completing the data distribution. In these senses, Figures <ref> and <ref> show that FREEDOM's style encoder is trained as we intended.
The figures show the style embedding spaces induced by the source-trained style encoders for the cases where is the target and is the target, respectively. In both Figures, the embeddings look closer to domain-related information than class-related information.
In Figure <ref> (left), one can see that the data of one domain is divided into two groups, and the figure on the right explains why. Despite the domain labels set by humans, owing to the varied styles within one domain, FREEDOM recognized a total of five styles rather than four. It therefore identified that domain as two different style groups, one with a dark background (style identifier: 4) and the other with a relatively light background color (style identifier: 0).
Figure <ref> is more intuitive to understand the non-class aspect of style embedding. Both figures represent the same style of embedding space; only the legends are different. Fig. <ref> (left) highlights the space with the class label as its legend, while the right shows the DPM model result from the style embedding, i.e., style identifier s_n. From this, we can confirm that the style embedding space can extract the rest of the information to restore the characteristics of the input image while being independent of the class as we intended.
§.§ Evaluation on Office, Office-Caltech, and Office-Home
We evaluate FREEDOM on the Office, Office-Caltech, and Office-Home benchmarks with several random seeds; Tables <ref>, <ref> and <ref> describe the results, respectively. The Office and Office-Caltech datasets are similar in that they share three domains. We set the same configuration (conf1 in Table <ref>) for all targets.
Tables <ref> and <ref> show that the proposed FREEDOM outperforms existing source-free methods on average for the Office and Office-Caltech datasets. Since these experiments take a pre-trained ResNet as the backbone, the baseline already shows fairly high performance; because only shallow layers are adapted, there is not much room for further improvement on these benchmarks. Interestingly, our baseline outperforms the competing methods even in some cases, which implies that the source-side learning algorithm proposed by FREEDOM finds a meaningful class embedding space that makes adaptation stable.
On the other hand, the proposed method shows comparable accuracy on the Office-Home dataset. Even though its accuracy is not the highest on average, it outperforms the competing methods on three targets. Moreover, it delivers comparable performance even though domain information is not provided.
§.§ More analysis on FREEDOM
In this section, we present additional empirical analysis to explore the potential of FREEDOM and its modules.
§.§.§ Analysis of DPM-based Style Prior
To cope with the absence of domain information, we leverage the Dirichlet Process as the style prior distribution of FREEDOM. To validate its efficacy, we conduct three experiments. First, we compare it with a naïve Gaussian prior, i.e., a single multivariate Gaussian used as the prior instead of the DPM. Table <ref> demonstrates the comparison. The results show that the DPM gives more room to the style embedding and yields higher accuracy in unsupervised adaptation.
We also compare the adaptation result with the case where domain information is given. In that setting we assign a different multivariate Gaussian to the style embedding of each domain, as we did for the class embedding; for example, we posit four different style priors for the Five-digit dataset. Fig. <ref> shows part of the result, demonstrating that FREEDOM is comparable to, or even better than, the case where domain information is given. When the target is SVHN, the test accuracy with the DPM prior improves by more than 1% point.
Finally, we validate FREEDOM's freedom from domain information by applying our method to the case where the source is configured as a single domain. We compare the performance with the prior work SHOT <cit.>, which is designed for the SFUDA setting. Table <ref> shows that FREEDOM is suitable even for the single-source case, being free from source domain information.
§.§.§ Analysis of Final Model Size
Existing MSFDA methods perform target adaptation by utilizing an ensemble of models learned from each source domain. Accordingly, the size of their target inference network grows with the number of source domains. As shown in Table <ref>, existing techniques have inference networks of different sizes depending on the number of source domains (3 for and 4 for ). Conversely, the size of FREEDOM's target inference network does not increase with the number of source domains; this gain becomes more prominent as the number of source domains grows. In addition, we emphasize that the final FREEDOM model is obtained without additional processing such as knowledge distillation, merely by discarding the redundant parts of the model, i.e., the style encoder and the decoder.
§.§.§ Analysis of the effect of batch selection
For the stable target adaptation, we introduce two batch selection strategies in FREEDOM: 1) filtering out batches using agreement tests on moment-based inference and classifier likelihood inference and 2) confidence-based filtering.
Figure <ref> compares adaptation with and without the matching test at the same confidence level; it demonstrates that matching-based selection provides a more stable adaptation than omitting it.
In addition, we compare several settings of confidence-based batch selection to validate the rationale for using the label smoothing loss in source-side training and for the data selection that relies on it. According to <cit.>, the label smoothing loss can calibrate the network so that its softmax prediction reflects the confidence of the inference. Thus, using the calibrated network, FREEDOM filters the target samples used in pseudo-label inference. To validate its efficacy, we set four different confidence levels {0.8, 0.6, 0.3, 0} and compare the resulting adaptation.
Figure <ref> demonstrates that the target selection based on a higher confidence level (0.8 or 0.6) provides a more stable adaptation than the lower one (0.3 or 0).
§.§.§ Ablation Study
We analyze each FREEDOM module's efficacy through an ablation study. In target adaptation, FREEDOM introduces several submodules, utilizing its generative model. We measure the performance change in the absence of each module in the proposed overall target adaptation algorithm (Algorithm <ref>), as shown in Table <ref>.
The conspicuous performance degradation in a) warm-up highlights the necessity of the alternating adaptation algorithm in target adaptation (Algorithm <ref>).
In other words, adapting the class encoder to the target without sequential optimization on the target data shows that the maximum likelihood loss can hinder regular training.
The results of b) without class-prototype learning validate the class regularization term in Eq. (<ref>).
One of FREEDOM's main strategies is transferring the class-conditional distributions learned on the source side to the target.
We specifically add the ℒ_KL^class regularization term for this strategy when adapting the target's class encoder; excluding this term causes a 6.9% point degradation of the final accuracy.
Finally, c) without confidence level and d) without matching show the effect of batch selection. As explained in the previous section, batch selection determines how good the pseudo-labeled data provided to FREEDOM's target adaptation is. Confidence-level-based and matching-based filtering both utilize FREEDOM's generative model, and the results show that performance improves further when both are used.
§ CONCLUSION
In this paper, we first propose a more pragmatic scenario named TFDA, which relaxes two significant obstacles to applying domain adaptation in AI-based services: the need for 1) domain labels and 2) the number of domains.
This relaxation reduces the amount of information necessary for training and thus increases practicality. On the other hand, it forces the network to learn without domain labels, which is a non-trivial problem. Our proposed method, FREEDOM, resolves this hurdle by disentangling class features from style features and applying Bayesian nonparametric modeling to the style features. We evaluate FREEDOM on four popular MSDA benchmarks to validate our method, and we further demonstrate the role of each module of the proposed technique through experiments on the embedding spaces and various ablation studies.
§ PROOF OF LEMMA
Lemma 1. The optimal variational posterior of the style identifier s is given as
q^*(s|x) = 𝔼_q_Φ^style(z^style|x)[p(s|z^style)].
Proof of Lemma 1.
ℒ_ELBO^SRC(x) = 𝔼_q(z^style,z^class,s,y|x)[logp(x,z^style, z^class,s,y)/q(z^style,z^class,s,y|x)]
=𝔼_q(·|x)[logp(x|z^style, z^class)p(z^class| y) p(y) p(z^style| s) p(s)/q(z^style,z^class,s,y|x)]
=𝔼_q(·|x)[logp(x|z^style, z^class)p(z^class| y) p(y) p(s |z^style) p(z^style)/q(z^style,z^class,s,y|x)]
= 𝔼_q(·|x)[ logp(x,z^style,z^class, y)/q(z^style, z^class, y|x)]_∘ - 𝔼_q(·|x)[logq(s|x)/p(s|z^style)]
= ∘ - ∫ q(z^style|x) ∑_s q(s|x) [logq(s|x)/p(s|z^style)] dz^style
= ∘ - 𝔼_q(z^style|x)[𝒟_KL(q(s|x) || p(s|z^style))],
where ∘ denotes the terms extraneous to the style identifier s. From the above, the optimal variational posterior of s is obtained when 𝒟_KL(q(s|x) || p(s|z^style)) = 0, since the KL divergence is always non-negative. In addition, ∑_s q(s|x) = ∑_s p(s|z^style) = 1, leading to q(s|x) = p(s|z^style). We draw the conclusion by taking the expectation over q(z^style|x) on both sides, i.e., q^*(s|x) = 𝔼_q(z^style|x)[p(s|z^style)]. ▪
§ DETAILS OF LOSS DERIVATION
§.§.§ Class regularization loss
The following Lemma 2 supports to the derivation of the tractable form of the class regularization loss.
Lemma 2. For two multivariate Gaussians with diagonal covariances, p(x) = 𝒩(x;μ, Σ) and q(x) = 𝒩(x;μ̂,Σ̂),
𝔼_q(x)[log p(x)]= -1/2∑_h=1^H(log (2πΣ|_h) + Σ̂|_h/Σ|_h +(μ̂|_h-μ|_h)^2/Σ|_h),
where H is the dimension of x, Σ|_h is the (h,h)^th element of the diagonal matrix Σ, and μ|_h denotes the h^th element of the vector μ.
Then, one can obtain the tractable form of the class regularization loss.
ℒ_KL^class(x, y) = 1/2∑_h=1^H_c( log (2πΣ_y^class|_h) +Σ̂^class|_h/Σ_y^class|_h + (μ̂^class|_h - μ_y^class|_h)^2/Σ_y^class|_h - log (2πΣ̂^class|_h) - 1 ) - logπ_y^class
§.§.§ Style regularization loss
The style regularization loss is computed with the given prior parameters obtained by the variational inference on the DPM; its tractable form is derived with Lemma 2 as well. Here is the loss function:
ℒ̅_KL^Style(x, β^*, μ^*, Σ^*) = 𝔼_q[log q(z^style|x)] - 𝔼_q[log p(z^style|μ^*, Σ^*)] = -H_s/2 + 1/2∑_h=1^H_s( logΣ_s_n^*|_h/Σ̂|_h + Σ̂|_h/Σ_s_n^*|_h + (μ̂|_h - μ_s_n^*|_h)^2/Σ_s_n^*|_h).
§.§.§ Class label inference with original distribution
From the generative model of the class embedding, one can infer the most probable class label. The inference of the class label from an input is derived similarly to Lemma 1: modifying the ELBO with respect to the class label y_n gives the optimal inference q^*(y_n|x_n) = 𝔼_q(z_n^class|x_n)[p(y_n|z_n^class)], which can be further approximated with the reparameterization trick and <cit.> as follows
q^*(y|x) = 𝔼_q(z^class|x)[p(y|z^class)] ≈1/L∑_l=1^L[p(y_k)p(z^class^(l)| y_k)/∑_j=1^C p(y_j) p(z^class^(l)| y_j)]_k=1^C,
where z^class^(l) = μ^class + Σ^class∘ϵ ^(l) and ϵ ^(l) is the random noise following the normal distribution.
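The Monte Carlo approximation above can be sketched as follows, assuming diagonal class priors learned on the source side; the number of samples L and the tensor shapes are illustrative assumptions.

import torch

def class_posterior(mu_hat, log_var_hat, pi_class, mu_class, var_class, L=10):
    """Estimate q*(y|x) = E_{q(z^class|x)}[p(y|z^class)] by sampling z^class."""
    B, H = mu_hat.shape
    C = pi_class.shape[0]
    post = torch.zeros(B, C)
    for _ in range(L):
        z = mu_hat + torch.exp(0.5 * log_var_hat) * torch.randn_like(mu_hat)   # reparameterized draw
        # log p(z | y) for every class y under the diagonal Gaussian priors
        diff = z.unsqueeze(1) - mu_class.unsqueeze(0)                           # (B, C, H)
        log_lik = -0.5 * ((diff ** 2) / var_class + var_class.log()
                          + torch.log(torch.tensor(2 * torch.pi))).sum(dim=2)
        post += torch.softmax(log_lik + pi_class.log(), dim=1)                  # Bayes rule over classes
    return post / L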
entry_id: http://arxiv.org/abs/2307.02399v1
published: 20230705161454
title: Cosmological Background Interpretation of Pulsar Timing Array Data
authors: Daniel G. Figueroa, Mauro Pieroni, Angelo Ricciardone, Peera Simakachorn
primary_category: astro-ph.CO
categories: astro-ph.CO, gr-qc, hep-ph
entry_id: http://arxiv.org/abs/2307.01295v1
published: 20230703190138
title: Hodge diamonds of the Landau--Ginzburg orbifolds
authors: Alexey Basalaev, Andrei Ionov
primary_category: math.AG
categories: math.AG, math-ph, math.MP
A. Basalaev: Faculty of Mathematics, National Research University Higher School of Economics, Usacheva str., 6, 119048 Moscow, Russian Federation, and Skolkovo Institute of Science and Technology, Nobelya str., 3, 121205 Moscow, Russian Federation.
A. Ionov: Boston College, Department of Mathematics, Maloney Hall, Fifth Floor, Chestnut Hill, MA 02467-3806, United States.
Consider the pairs (f,G) with f = f(x_1,…,x_N) being a polynomial defining a quasihomogeneous singularity and G being a subgroup of GL(N,ℂ) preserving f. In particular, G is not necessarily abelian.
Assume further that G contains the grading operator j_f and f satisfies the Calabi-Yau condition.
We prove that the nonvanishing bi-graded pieces of the B–model state space of (f,G) form a diamond.
We identify its topmost, bottommost, leftmost and rightmost entries as one-dimensional and show that this diamond enjoys the essential horizontal and vertical isomorphisms.
Hodge diamonds of the Landau–Ginzburg orbifolds
Andrei Ionov
August 1, 2023
===============================================
§ INTRODUCTION
Let a polynomial f ∈ℂ[x_1,…,x_N] be quasihomogeneous w.r.t. some positive integers d_0,d_1,…,d_N, i.e.
f(λ^d_1x_1,…,λ^d_Nx_N) = λ^d_0 f(x_1,…,x_N), ∀λ∈ℂ^∗.
Assume also that x_1=⋯=x_N = 0 is the only critical point of f. Then the zero set f(x_1,…,x_N)=0 defines a degree d_0 smooth hypersurface X_f in the weighted projective space ℙ(d_1,…,d_N). Such hypersurfaces became of great interest at the end of the 1980s in the context of mirror symmetry (cf. <cit.>). In particular, if the Calabi–Yau condition d_0 = ∑_k=1^N d_k holds, the first Chern class of X_f vanishes and X_f is a Calabi–Yau variety.
The polynomials f above define the so-called quasihomogeneous singularities and can be studied from the point of view of singularity theory. The varieties X_f at the same time are the objects of Kähler geometry. To relate the singularity theory properties of f to the Kähler geometry properties of X_f is an important problem. This problem is in particular interesting in the context of mirror symmetry.
One of the basic features of mirror symmetry is that the Hodge diamond on a Calabi–Yau threefold X_1 coincides with the Hodge diamond of another Calabi–Yau threefold X_2 after a rotation by 90^∘. If X_1 = X_f_1 and X_2 = X_f_2 for some quasihomogeneous f_1,f_2 as above, the “rotation by 90^∘” property can be purely formulated in the language of singularity theory. For this purpose one introduces the so–called Landau–Ginzburg orbifolds (cf. <cit.>).
§.§ Hodge diamonds
The state space of a Calabi–Yau variety X is the cohomology ring H^∗(X).
Its cohomology ring is then bigraded, building up a Hodge diamond of size D := dim_ℂ X. In particular, the following properties hold
* H^∗(X) = ⊕_p,q ∈ H^p,q(X),
* H^p,q(X) = 0 if p <0 or q <0 or p > D or q > D,
* dim_ℂ H^0,0(X) = dim_ℂ H^D,D(X) = 1,
* dim_ℂ H^D,0(X) = dim_ℂ H^0,D(X) = 1,
* there is a “horizontal” vector space isomorphism H^p,q(X) ≅ H^q,p(X),
* there is a “vertical” vector space isomorphism H^p,q(X) ≅ (H^D-q,D-p(X))^∨, where (-)^∨ stands for the dual vector space
This is the A–side of mirror symmetry.
§.§ Landau–Ginzburg orbifolds
The B–side of mirror symmetry is given by a pair (f,G) with f being a quasihomogeneous polynomial with the only critical point 0 ∈ℂ^N and G being a group of symmetries of f.
Consider the maximal group of linear symmetries of f
GL_f := { g ∈ GL(N,ℂ) | f(g · x) = f(x) }.
It's nontrivial because it contains a nontrivial subgroup J generated by j_f
j_f · (x_1,…,x_N) := (e^2 π√(-1) d_1/d_0 x_1,…, e^2 π√(-1) d_N/d_0x_N).
Also important is the group SL_f := GL_f ∩ SL(N,ℂ), consisting of the elements of GL_f that preserve the volume form of ℂ^N.
For any G ⊆ GL_f, the pair (f,G) is called a Landau–Ginzburg orbifold. One associates to it the state space of (f,G), which is an equivariant generalization of the Jacobian ring of f. This is the B–side of mirror symmetry.
Up to now, Landau–Ginzburg orbifolds have mostly been investigated for groups G acting diagonally on ℂ^N and for f belonging to a very special class of polynomials — the so–called invertible polynomials (cf. <cit.>). Also, some work was done for the symmetry groups G = S ⋉ G^d with G^d acting diagonally and S ⊂ S_N (<cit.>).
We relax both conditions in this paper.
§.§ Mirror symmetry
At its most basic level, a Calabi–Yau variety and a Landau–Ginzburg orbifold are said to be mirror dual if their state spaces are isomorphic. Up to now such an isomorphism has only been found for f an invertible polynomial and for diagonal symmetry groups; this was done by Chiodo and Ruan <cit.>.
However, if mirror symmetry holds, the state space of (f,G) should build up a Hodge diamond too. Namely, it should satisfy the properties (1)–(6) above. Formulated for (f,G), this becomes a purely singularity-theoretic question.
This is the main topic of our paper.
Let f ∈ℂ[x_1,…,x_N] define an isolated singularity and be quasihomogeneous, satisfying the Calabi–Yau condition.
Then for any G ⊆ GL_f, s.t. J ⊆ G, the state space of (f,G) forms a Hodge diamond of size N-2.
The proof is summed up in Propositions <ref>, <ref> and <ref>.
The vertical and horizontal isomorphism are given in Section <ref>.
More generally in mirror symmetry one considers the Calabi–Yau orbifolds replacing the ordinary cohomology ring by the Chen–Ruan cohomology ring H^∗_orb.
It is an essential question whether H^∗_orb forms a Hodge diamond too. Some of the Hodge diamond properties above follow in exactly the same way as we show them in Theorem <ref>, while the others (like property (4)) have not been investigated in the literature, to our knowledge.
§.§ Acknowledgements
The work of Alexey Basalaev was supported by International Laboratory of Cluster Geometry NRU HSE, RF Government grant, ag. no. 075-15-2021-608 dated 08.06.2021.
The authors are grateful to Anton Rarovsky for sharing the pictures from his bachelor thesis.
§ PRELIMINARIES AND NOTATION
§.§ Quasihomogeneous singularities
The polynomial f ∈ℂ[x_1,…,x_N] is called quasihomogeneous if there are positive integers d_0,d_1,…,d_N, s.t.
f(λ^d_1x_1,…,λ^d_Nx_N) = λ^d_0 f(x_1,…,x_N), ∀λ∈ℂ.
In what follows we will say that f is quasihomogeneous w.r.t. the weights d_0,d_1,…,d_N or the reduced weights q_1 := d_1/d_0,…,q_N:= d_N/d_0.
We will say that f defines an isolated singularity at 0 ∈ℂ^N if 0 is the only critical point of f.
According to K. Saito <cit.>, one may, without changing the singularity, consider only quasihomogeneous polynomials such that 0 < q_k < 1/2 for all k=1,…,N. We will assume this condition in what follows.
It follows then that f has no summand of the form x_ix_j and that the number of its monomials is not less than the number of variables N.
Ferma, chain and loop type polynomials are examples of quasihomogeneous singularities for any natural a_i
f = x_1^a_1 Ferma type,
f = x_1^a_1 + x_1 x_2^a_2 + … + x_N-1 x_N^a_N chain type,
f = x_1^a_1x_2 + x_2^a_2x_3 + … + x_N-1^a_N-1x_N + x_N^a_Nx_1 loop type.
Using the word 'type' we assume the certain structure of the monomials set and do not specify the exponents a_i.
It's easy to see that for Ferma, chain and loop type polynomials the reduced weights q_1,…,q_N are defined in a unique way. This is also true for any quasihomogeneous singularity (cf. <cit.>). We have
q_1 = 1/a_1 Ferma type,
q_i = ∑_j=1^i(-1)^i-j/a_j a_j+1⋯ a_i chain type,
q_N = (-1)^N-1∑_k=0^N-1 (-1)^k ∏_l=1^k a_N-l/(∏_k=1^N a_k - (-1)^N) loop type,
where in the loop case the remaining weights are obtained by cyclically shifting the indices, assuming a_0 := a_N, a_-1 := a_N-1, a_-2 := a_N-2 and so on.
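These closed-form weights can be checked directly: every graph monomial x_1^α_1⋯ x_N^α_N imposes the linear equation α_1 q_1 + … + α_N q_N = 1 on the reduced weights. A small SymPy sketch for an N = 3 loop type polynomial with illustrative exponents:

from sympy import Matrix, Rational, simplify

# Loop type example f = x1^a1*x2 + x2^a2*x3 + x3^a3*x1 with illustrative exponents
a1, a2, a3 = 3, 4, 5
E = Matrix([[a1, 1, 0],
            [0, a2, 1],
            [1, 0, a3]])          # rows = exponent vectors of the graph monomials

q = E.solve(Matrix([1, 1, 1]))    # each graph monomial must have total weight 1
print(q.T)                        # reduced weights (q1, q2, q3) = (16/61, 13/61, 9/61)

# compare with the closed-form loop weight for q3 (N = 3)
q3_formula = Rational((-1)**2 * (1 - a2 + a2 * a1), a1 * a2 * a3 - (-1)**3)
assert simplify(q[2] - q3_formula) == 0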
Given f ∈ℂ[x_1,…,x_N] and g ∈ℂ[y_1,…,y_M], both defining quasihomogeneous singularities, it follows immediately that f+g ∈ℂ[x_1,…,x_N,y_1,…,y_M] defines a quasihomogeneous singularity too. We will denote such a sum by f ⊕ g.
All quasihomogeneous N ≤ 2 isolated singularities are given by the ⊕–sums of the Ferma, chain or loop type polynomials.
All quasihomogeneous N=3 isolated singularities are given by the polynomials f_I = x_1^a_1 + x_2^a_2+ x_3^a_3, f_II = x_1^a_1 + x_2^a_2x_3 + x_3^a_3 , f_III = x_1^a_1 + x_2^a_2x_1+ x_3^a_3x_1 + x_2^px_3^q, f_IV = x_1^a_1 + x_2^a_2x_3 + x_3^a_3x_1 ,
f_V = x_1^a_1 + x_2^a_2x_1 + x_3^a_3x_2, f_VI = x_1^a_1x_2 + x_2^a_2x_1 + x_3^a_3x_1 + x_2^p x_3^q, f_VII = x_1^a_1x_2 + x_2^a_2x_3 + x_3^a_3x_1, with some positive a_1,a_2,a_3. The numbers a_i are arbitrary for f_I,f_II,f_IV,f_V,f_VII; however, the polynomials f_III and f_VI are only quasihomogeneous if ≠ 0 and some additional combinatorial condition on a_1,a_2,a_3 holds. In particular, the least common multiple of (a_2,a_3) should be divisible by a_1-1 for f_III to exist.
§.§ Graph of a quasihomogeneous singularity
Let f ∈ℂ[x_1,…,x_N] define an isolated singularity. Then for every index j ≤ N the polynomial f has either a summand x_j^a or a summand x_j^ax_k for some exponent a ≥ 2 and index k ≤ N (cf. <cit.>). Construct a map κ: { 1,…,N}→{ 1,…,N}: set κ(j):=j in the first case above and κ(j) := k in the second.
Associate to f the graph Γ_f with N vertices labelled by the numbers 1,…,N and with an oriented arrow j →κ(j) whenever j ≠κ(j). (Such graphs were first considered by Arnold, however with the self-pointing arrows j → j included; we remove such arrows to reduce complexity.) In other words, the vertices correspond to the variables x_i and the arrows to the monomials x_j^ax_k.
The graphs of the N=3 quasihomogeneous singularities are all listed in Figure <ref>.
The following proposition is immediate.
Any graph Γ_f is a disjoint union of the graphs of the following two types
* oriented tree,
* oriented circle with the oriented trees having the roots on this oriented circle.
In what follows we consider the root of the type (1) graph above as a cycle with one vertex. This merges the two types above.
It's easy to see that Γ_f⊕ g = Γ_f ⊔Γ_g, i.e., the graph of f ⊕ g is the disjoint union of the two graphs; however, it is not true that f decomposes into an ⊕–sum whenever Γ_f has more than one connected component.
§.§ Graph decomposition of a polynomial
Assume we only know the graph Γ_f and not the polynomial f itself. The graph structure indicates some monomials that enter f.
Call these monomials graph monomials. In particular, f has only graph monomials if it is of Ferma, chain or loop type or a ⊕–sum of them.
Let f be s.t. Γ_f has only one connected component. Then Γ_f has one oriented circle, and p oriented trees with the roots on this circle.
We have the decomposition
f = f_0 + f_1 + … + f_p + f_add,
with
* f_0,f_1,…,f_p, f_add∈[x_1,…,x_N],
* f_0 consisting of the graph monomials of f, that build up the oriented circle or the common root,
* f_k consisting of the graph monomials of f, that build up the k-th oriented tree,
* f_add := f - f_0 - f_1 - … - f_p consisting of all the non–graph monomials of f.
This decomposition extends easily to the case of Γ_f having several components. Note that we could have had p=0, but f_0 ≠ 0 in any case.
The polynomial f = x_1^3 + x_1 (x_2^2+x_3^2+x_4^2) + ϵ x_2 x_3 x_4 with some non–zero ϵ defines an isolated singularity. It is also quasihomogeneous with q_1=…=q_4=1/3.
We have p = 3,
f_0 = x_1^3, f_1 = x_1 x_2^2, f_2 = x_1 x_3^2, f_3 = x_1 x_4^2, f_add = ϵ x_2x_3x_4.
§.§ Graph exponents matrix
For a quasihomogeneous singularity f define the matrix E_f with entries in ℤ_≥ 0. It follows from Proposition <ref> that f has exactly N graph monomials. Let every row of E_f correspond to a graph monomial. The components of this row will be (α_1,…,α_N) if and only if the corresponding graph monomial is c · x_1^α_1⋯ x_N^α_N for some c ∈ ℂ^∗.
The matrix E_f is only defined up to a permutation of the rows. We will call it the graph exponents matrix.
Let E_ij denote the components of E_f. Then for some non–zero constant c_k we have
f - f_add = ∑_k=1^N c_k x_1^E_k1⋯ x_N^E_kN.
Such a matrix was previously defined in the literature only for the invertible polynomials (see Section <ref>). We consider it here in a wider context.
The matrices E_f of the Example <ref> are
E_f_I = [ a_1 0 0; 0 a_2 0; 0 0 a_3 ],
E_f_II = [ a_1 0 0; 0 a_2 1; 0 0 a_3 ],
E_f_III = [ a_1 0 0; 1 a_2 0; 1 0 a_3 ],
E_f_IV = [ a_1 0 0; 0 a_2 1; 1 0 a_3 ],
E_f_V = [ a_1 0 0; 1 a_2 0; 0 1 a_3 ],
E_f_VI = [ a_1 1 0; 1 a_2 0; 1 0 a_3 ],
E_f_VII = [ a_1 1 0; 0 a_2 1; 1 0 a_3 ].
The graph exponents matrices of loop and chain type polynomials read
E_loop = [ a_1 1 … 0 0; 0 a_2 1 … 0; ⋮ ⋱ ⋱ 0; 0 0 … a_N-1 1; 1 0 0 … a_N ],
E_chain = [ a_1 0 … 0 0; 1 a_2 0 … 0; ⋮ ⋱ ⋱ ⋱ ⋮; 0 … 1 a_N-1 0; 0 0 … 1 a_N ].
In general, by Proposition <ref>, if Γ_f has only one connected component, then after some renumbering of the variables and the rows the matrix E_f has the following block form. The diagonal blocks are several chain type exponents matrices and exactly one loop type exponents matrix as in Eq. (<ref>), s.t. for every chain type block there is exactly one additional matrix entry 1, located in the first row of this block and in a column of the loop type block. All the other matrix entries except those listed vanish:
E_f = [ A_0 0 … 0; U_i_1j_1 A_1 … 0; ⋮ ⋱ ⋮; U_i_pj_p 0 … A_p ],
where A_0 is a loop type polynomial exponents matrix, A_1,…,A_p are chain type polynomial exponents matrices, and U_ij is the rectangular matrix with 1 at position (i,j) and all other entries 0.
Assuming the decomposition of Eq. (<ref>), the matrix A_0 is exactly the exponent matrix of f_0 and the matrices A_1,…,A_p are defined by f_1,…,f_p.
Let f define a quasihomogeneous singularity. Then
(i) the matrix E_f is invertible,
(ii) there is a canonical choice of the weights (d_0,d_1,…,d_N).
Let f be quasihomogeneous with the reduced weights (q_1,…,q_N). Introduce two vectors in ℚ^N: q := (q_1,…,q_N)^T and 1 := (1,…,1)^T.
The weights (q_1,…,q_N) are defined uniquely by the graph monomials of f. In particular, for f decomposed as in Eq. (<ref>) we have that all of f_0,f_1,…,f_p and f_add are quasihomogeneous with the same weight sets.
Note that f_0 is of Fermat or loop type and the weights of its variables are defined uniquely. Similarly, for any f_k with k=1,…,p corresponding to a tree with the root on the oriented circle, the weight of the root's variable is defined by the quasihomogeneity of f_0, and going up the tree one deduces uniquely the weight of every variable of f_k corresponding to the consequent vertex.
Then the quasihomogeneity condition on f is equivalent to the vector equality E_f ·q = 1. It follows now from Cramer's rule that det(E_f) ≠ 0 because this equation has a unique solution. This completes (i).
The canonical weight set is obtained by taking d_0 := det(E_f) and solving E_f ·d = d_0 1 for d := (d_1,…,d_N)^T.
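As an illustration, here is a minimal NumPy sketch of the weight computation just described; the chain-type exponents are chosen arbitrarily, and the convention d_0 := det(E_f) is the one stated above.

import numpy as np

def chain_exponents_matrix(a):
    # E_f for the chain type x_1^{a_1} + x_1 x_2^{a_2} + ... + x_{N-1} x_N^{a_N}
    N = len(a)
    E = np.zeros((N, N))
    for k in range(N):
        E[k, k] = a[k]
        if k > 0:
            E[k, k - 1] = 1  # exponent of x_{k-1} in the monomial x_{k-1} x_k^{a_k}
    return E

a = [3, 2, 4]                              # illustrative exponents only
E = chain_exponents_matrix(a)

q = np.linalg.solve(E, np.ones(len(a)))    # reduced weights: E_f . q = (1,...,1)^T
d0 = round(np.linalg.det(E))               # canonical d_0 := det(E_f)
d = d0 * q                                 # then E_f . d = d_0 (1,...,1)^T

print(q)        # [1/3, 1/3, 1/6] for a = (3, 2, 4)
print(d0, d)    # 24 and (8, 8, 4)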
§.§ Invertible polynomials
The set of all quasihomogeneous singularities contains the following important class. A polynomial f defining an isolated quasihomogeneous singularity, having no monomial of the form x_ix_j and having as many monomials as variables, is called an invertible polynomial and is said to define an invertible singularity.
Let f be an invertible polynomial. Then after some rescaling and renumbering of the variables we have f = f^(1)⊕…⊕ f^(n) with each f^(k) being of Fermat, chain or loop type.
Assume Γ_f to contain a vertex with two incoming arrows. Then f is of the form
α_1 x_i^a x_l^K + α_2 x_j^bx_i + α_3 x_k^cx_i + g(x),
where K ∈{ 0,1}, α_1α_2α_3 ≠ 0, b,c ≥ 2 and g does not depend on x_k,x_i,x_j. Computing
∂f/∂x_i = a α_1 x_i^a-1 x_l^K + α_2 x_j^b + α_3 x_k^c,
∂f/∂x_j = b α_2 x_j^b-1x_i,
∂f/∂x_k = c α_3 x_k^c-1x_i.
Setting x_i = 0 we see that the vanishing ∂f/∂x_i = ∂f/∂x_j = ∂f/∂x_k = 0 is equivalent to α_2 x_j^b + α_3 x_k^c = 0, which shows that x_i=x_j=x_k=0 is not an isolated critical point of f.
The graphs of invertible singularities are disjoint unions of isolated vertices (Fermat types), oriented cycles (loop types) and one-branch trees (chain types).
Assuming the graph decomposition of Section <ref> we always have f_add = 0, p=0 and f_0 = f for Fermat and loop types, but p=1 and f_0 + f_1 = f for chain type with f_0 = x_m^a_m.
The quasihomogeneous singularities with N=2 are all invertible.
The quasihomogeneous singularities with N=3 are not all invertible. In the notation of Example <ref> we have: f_I is Fermat⊕Fermat⊕Fermat, f_II is Fermat⊕chain, f_III is not invertible, f_IV is Fermat⊕loop,
f_V is chain, f_VI is not invertible, f_VII is loop.
§ SYMMETRIES
Given a quasihomogeneous polynomial f = f(x_1,…,x_N) consider the
maximal group of linear symmetries of f defined by
_f := { g ∈ GL(N,ℂ) | f(g · x) = f(x) }.
Note that, since the reduced weights are defined uniquely, any g ∈_f necessarily preserves the weights of the variables, i.e. maps each x_i to a linear combination of the x_j with the same weight.
Let G_f^d ⊆_f be the maximal group of diagonal symmetries of f. This is the group of all diagonal elements of (N,) belonging to _f.
We have
G_f^d ≅{ (λ_1,…,λ_N) ∈ (^*)^N | f(λ_1x_1,…,λ_Nx_N) = f(x_1,…,x_N) }.
It's obvious that
G_f' ⊕ f”^d ≅ G_f'^d × G_f”^d.
Note, however, that the same does not necessary hold for _f' ⊕ f”.
In what follows we will use the notation
[α] := exp(2 π√(-1)α), α∈.
Each element g∈ G_f^d has a unique expression of the form
g= diag([α_1/r], …,[α_N/r])
0 ≤α_i < r, α_i∈
where r is the order of g. We adopt the additive notation
g = (α_1/r,… , α_N/r) or g = 1/r(α_1,… , α_N)
for such an element g.
For f = x_1^a_1 we have _f = G_f^d = ⟨ g ⟩ with g ∈^∗ acting by g(x_1) = [1/a_1] · x_1. Its order is a_1 and in the additive notation we have g = (1/a_1), G_f ≅ / a_1.
For f = x_1^a_1x_2 + x_2^a_2 we have G_f = ⟨ g_1, g_2 ⟩ with g_1 · (x_1,x_2) = ([1/a_1] x_1, x_2) and g_2 · (x_1,x_2) = ([1/a_2(1 - 1/a_1)]x_1, [1/a_2]x_2). In the additive notation g_1 = (1/a_1, 0) and g_2 = ((1 - 1/a_1)/a_2, 1/a_2).
In this example _f = G_f^d because q_1 ≠ q_2.
Let (q_1,…,q_N) be the reduced weight set of f. Then we have
j_f:= ([q_1],…, [q_N]) ∈ G_f^d.
In particular it follows that G_f^d and _f are nontrivial whenever f is quasihomogeneous.
Denote by J the group generated by j_f:
J := ⟨ j_f ⟩⊆ G_f^d.
Since every g∈_f preserves the weights, we see that j_f commutes with g. In other words, J is a central subgroup of _f.
§.§ Fixed loci of the _f elements
For each g∈_f, denote by (g)the fixed locus of g
(g):={ (x_1,…,x_N) ∈^N | g · (x_1,…,x_N) = (x_1,…,x_N) }.
This is an eigenvalue 1 subspace of ^N and therefore a linear subspace of ^N.
By N_g:=_(g) denote its dimension and by f^g:=f|_(g) the restriction of f to the fixed locus of g.
For g∈ G_f^d this linear subspace is furthermore a span of a collection of standard basis vectors.
For each h∈ G_f^d, let I_h := {i_1,…,i_N_h} be a subset of {1,…, N} such that
(h)={(x_1,…,x_N)∈^N | x_j=0, j∉ I_h}.
In other words, (h) is indexed by I_h.
In particular, I_𝕀={1,…, N}.
More generally, for g∈_f, since g preserves the weight subspaces of ^N, the weights of the subspace (g) are well-defined and are the subset of {q_1,…,q_N}. Fix a subset I_g⊂{1,…,N} such that q_k with k∈ I_g are exactly all the weights of (g), so that, in particular, we have |I_g|=N_g. Note that if g∉G_f there is no canonical choice for I_g, but the choice made at this step will not impact our results.
Denote by I_h^c the complement of I_h in I_𝕀 and set d_h:=N-N_h, the codimension of (h).
For any diagonalizable g ∈_f with N_g > 0 the polynomial f^g defines a quasihomogeneous singularity again.
Let x_1,…, x_N be the coordinates of ℂ^N dual to the basis diagonalizing g. In these coordinates the polynomial f^g is obtained by setting some of the x_∙ to zero. The proof now follows by the same argument as in Proposition 5 of <cit.>.
If f is of Fermat or loop type, then any g ∈ G_f, s.t. g ≠𝕀 satisfies (g) = 0.
If f = x_1^a_1 + x_1x_2^a_2 + … x_N-1x_N^a_N is of chain type, then any g ∈ G_f, s.t. g ≠𝕀 satisfies
(g) = { (x_1,…,x_p,0…,0) ∈^N | x_k ∈} for some p depending on g.
The polynomial f^g is of chain type again: f^g = x_1^a_1 + x_1x_2^a_2 + … x_p-1x_p^a_p.
Denote also
_f:=_f∩(N,).
This group will be important later on because it preserves the volume form of ^N.
§.§ Age of a _f element
For g∈_f let λ_1,…,λ_N be the collection of its eigenvalues. Let 0≤α_i < 1 be such that λ_i=[α_i], then age of g is defined as the number
(g) := ∑_k=1^N α_k.
The following properties are clear but will be important in what follows.
1) For any g ∈_f we have
(g) + (g^-1) = N - N_g = d_g.
2) For a diagonalizable g∈_f we have age(g) = 0 if and only if g = 𝕀.
3) We have g∈_f if and only if age(g)∈ℤ.
§.§ Diagonal symmetries and a graph Γ_f
Let Γ_f be the graph of a quasihomogeneous singularity f and g ∈ G_f. If g acts nontrivially on x_k, then it acts nontrivially on every x_i, s.t. there is an oriented path from the i-th to the k-th vertex.
We first show the statement for the arrows pointing at k.
Having an arrow j → k means that f has a monomial x_j^a_jx_k as a summand with a nonzero coefficient. We have g · x_k ≠ x_k and therefore the summand can only be preserved under the action of g if g· x_j ≠ x_j. Having an oriented path i → j_1 →…→ j_n → k we have by using the previous step that g · x_j_n≠ x_j_n and then g · x_j_a≠ x_j_a for all a. Hence, for x_i too.
Let E_f be the graph exponents matrix of f. Consider
G_f^gr := {(λ_1,… ,λ_N)∈ (^*)^N | ∏_j=1^N λ_j ^E_1j=… =∏_j=1^Nλ_j^E_Nj=1 }.
with E_ij being the exponents of E_f.
The group G_f^gr is exactly the maximal group of diagonal symmetries of the difference f-f_add. In particular, every element of G_f^gr preserves all graph monomials of f.
We have
G_f^d ⊆ G_f^gr,
and hence G_f^d is a finite group.
An element g = (1/r)(α_1,…,α_N) belonging to G_f^gr satisfies
E_f ·g ∈ ℤ^N.
This gives yet another characterization of the group G_f^gr:
G_f^gr ≅ { g ∈ (ℚ/ℤ)^N | E_f ·g ∈ ℤ^N } = E_f^-1ℤ^N / ℤ^N.
It follows that every vector g giving a G_f^gr–element is a linear combination with integer coefficients of the columns of E_f^-1. In particular let ρ_i be the i-th column of E_f^-1
E_f^-1 = ( ρ_1 | … | ρ_N ).
Denote also ρ_i := [ρ_i] ∈ G_f.
The elements ρ_k generate G_f^gr and j_f = ρ_1⋯ρ_N.
The columns of E_f generate all relations on ρ_1, …, ρ_N.
In particular, for (E_1k, …, E_Nk)^T being a k–th column of E_f we have in G_f^gr
ρ_1^E_1k⋯ρ_N^E_Nk = 𝕀,
and all other relations among {ρ_k }_k=1^N follow from those written above.
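The following short NumPy sketch illustrates this description of G_f^gr for an arbitrary choice of loop-type exponents: the additive representatives of the generators ρ_i are the columns of E_f^-1 reduced modulo ℤ^N, their sum recovers j_f, and each column of E_f yields a relation.

import numpy as np

a = [2, 3, 2]                                   # loop type x_1^2 x_2 + x_2^3 x_3 + x_3^2 x_1 (illustrative)
N = len(a)
E = np.zeros((N, N))
for k in range(N):
    E[k, k] = a[k]
    E[k, (k + 1) % N] = 1

Einv = np.linalg.inv(E)
rho = [Einv[:, i] % 1.0 for i in range(N)]      # additive representatives of rho_i in [0,1)^N

# j_f corresponds additively to the reduced weight vector q = E_f^{-1}(1,...,1)^T,
# i.e. to rho_1 + ... + rho_N modulo Z^N.
q = np.linalg.solve(E, np.ones(N))
diff = sum(rho) - q
print(np.allclose(diff, np.round(diff)))        # True: j_f = rho_1 ... rho_N

# The k-th column of E_f gives the relation rho_1^{E_1k} ... rho_N^{E_Nk} = id:
k = 0
relation = sum(E[i, k] * rho[i] for i in range(N))
print(np.allclose(relation, np.round(relation)))  # True: the relation holds modulo Z^N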
§.§ Diagonal symmetries of an invertible singularity
In <cit.> for an invertible f the authors gave the set _f of all N–tuples (s_1,…,s_N) s.t. every g ∈ G_f^d \{𝕀} is written uniquely by
g = ∏_k ∈ I_g^cρ_k^s_k,
and s_k = 0 if and only if k ∈ I_g. Due to Eq. (<ref>) and Proposition <ref> it's enough to construct such a set for Fermat, loop or chain type polynomials.
For f of Fermat, chain or loop type the set _f consists of all s = (s_1,…,s_N), s.t.
* (Fermat type): 1 ≤ s_1 ≤ a_1-1
* (loop type): 1 ≤ s_k ≤ a_k and
s ≠ (a_1,1,a_3,1,…, a_N-1,1), s ≠ (1,a_2,1,a_4,1,…, a_N)
if N is even.
* (chain type): s is of the form
(0,…,0,s_p,s_p+1,…,s_N), with {1,…,p-1} = I_g,
with 1 ≤ s_p≤ a_p-1, 1 ≤ s_k ≤ a_k for k > p.
In the additive notation, for the column vector s = (s_1,…,s_N)^T we have
g = E_f^-1 s.
§.§ Diagonal symmetries of a quasihomogeneous singularity
For any quasihomogeneous singularity f consider its graph decomposition as in Eq. (<ref>). Up to the renumbering and rescaling of the variables we have
f_0 = x_1^a_1 or f_0 = x_1^a_1x_2 + … + x_K^a_Kx_1,
f_1 = x_1 x_K+1^b_1 + x_K+1x_K+2^b_2 + … + x_K+L-1x_K+L^b_L
with similar expressions for f_2, …, f_p.
Any nontrivial g ∈ G_f_0^d extends to an element g∈ G_f^gr. Moreover it follows that (g) = 0 and also (g) = 0 as long as g ≠ 𝕀. Similarly any element h ∈ G_f^gr with (h) = 0 acts nontrivially on x_1,…,x_K preserving f_0. Hence it defines h_0 ∈ G_f_0^d by restriction.
At the same time any h ∈ (^∗)^L acting diagonally on (x_K+1,…,x_K+L) preserving f_1 extends to an element of G_f^gr assuming it to act trivially on f_0 and all other f_2,…,f_p. One notes immediately that such elements h are the elements of chain type polynomial symmetry group. Denote the group of all such elements by G_f_1^∘.
We construct the groups G_f_2^∘, …, G_f_p^∘ in a similar way.
It follows from Proposition <ref> that any nontrivial element g ∈ G_f^gr is decomposed uniquely
g = g_0· g_1 ⋯ g_p,
with g_0 being the extension of g_0 ∈ G_f_0^d and g_k ∈ G_f_k^∘ for k=1,…,p acting non–trivially only on f_k variables preserving all variables of f_0 identically.
We have
|G_f^gr| = |G_f_0^d|· |G_f_1^gr| ⋯ |G_f_p^gr|.
Associate to every g_0,g_1,…,g_p an element s_0,s_1,…,s_p as in Proposition <ref>. Assembling them into one column vector s we have
g = E_f^-1 s.
The following proposition is very important in what follows.
For any g ∈ G_f^d, s.t. g = E_f^-1 s we have
age(g) = (1,…,1) E_f^-1 s.
We need to show that the components of g belong to [0,1). This follows immediately from the equality E_f g = s and the special form of the matrix E_f (see Eq. (<ref>)).
For any diagonalizable g ∈_f, s.t. N_g = 0 we have
age(g) ≥∑_k=1^N q_k.
The equality is only reached if g = j_f.
Rewrite f in the coordinates x̄_1,…,x̄_N dual to the basis diagonalizing g. Then each x̄_k is a linear combination of x_1,…,x_N. Moreover, one can renumber the new variables s.t. the weight of x̄_k is the same as the weight of x_k, namely q_k.
The element j_f is represented in the old and the new basis by the same diagonal matrix. The given element g acts on each x̄_k just by a rescaling. Therefore it's enough to show the proposition for g belonging to the maximal group of diagonal symmetries.
To prove the proposition for g ∈ G_f^d it's enough to prove the inequality for any g ∈ G_f^gr with N_g=0 and f, s.t. the graph Γ_f has only one connected component.
Let the matrix E_f^T define a polynomial f^T. Namely, if for f we have Eq (<ref>), then
f^T = ∑_k=1^N c_k x_1^E_1k⋯ x_N^E_Nk.
This polynomial is quasihomogeneous again with some positive weights q_1^T, …, q_N^T by the same argument as in Proposition <ref>.
Then for 1 := (1,…,1)^T we have
∑_k=1^N q_k = (1,…,1) E_f^-11 = (1,…,1) (E_f^T)^-11 = ∑_k=1^N q_k^T.
For a given g assume s, s.t. g = E_f^-1s as in Proposition <ref>. Then none of s_k =0 because N_g=0. We have
age(g) = ( (1,…,1) E_f^-1 s )^T = s^T (E_f^T)^-1 1 = ∑_k=1^N s_k q_k^T ≥∑_k=1^N q_k^T,
where the last inequality holds because every s_k ≥ 1. Combining with Eq. (<ref>) we get the inequality claimed. Moreover it's obvious that the equality is only reached if s_k=1 for all k. This is equivalent to the fact that g = j_f.
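A brute-force numerical check of this inequality for a small loop-type example is sketched below, using the identification G_f^gr = E_f^-1ℤ^N/ℤ^N from the previous subsection; for a diagonal element, N_g = 0 means that no component of its additive representative vanishes, and its age is the sum of these components. The exponents are illustrative only.

import itertools
import numpy as np

a = [2, 2, 3]                               # illustrative loop-type exponents
N = len(a)
E = np.zeros((N, N))
for k in range(N):
    E[k, k] = a[k]
    E[k, (k + 1) % N] = 1

Einv = np.linalg.inv(E)
order = abs(round(np.linalg.det(E)))        # |G_f^gr| = |det E_f|
q = np.linalg.solve(E, np.ones(N))          # reduced weights; age(j_f) = sum(q)

group = set()
for s in itertools.product(range(order), repeat=N):
    v = Einv @ np.array(s, dtype=float)
    v = v - np.floor(v + 1e-9)              # reduce modulo Z^N, robust to round-off
    group.add(tuple(np.round(v, 6)))

assert len(group) == order                  # the expected group order
for g in group:
    if all(c > 1e-6 for c in g):            # N_g = 0
        assert sum(g) >= sum(q) - 1e-9      # age(g) >= q_1 + ... + q_N
print("inequality verified for all", len(group), "elements of G_f^gr")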
§ THE TOTAL SPACE
Consider the quotient ring
(f) := ℂ[x_1,…, x_N] / (∂ f/∂ x_1,…,∂ f/∂ x_N).
It is a finite–dimensional ℂ–vector space whenever f defines an isolated singularity.
Call it the Jacobian algebra of f and set μ_f := dim_ℂ (f), the Milnor number of f.
We will assume an additional convention:
set (f) := ℂ, μ_f := 1 for the constant function f = 0.
§.§ Grading
The reduced weights q_1,…,q_N of f define a ℚ–grading on ℂ[x_1,…,x_N].
Introduce the ℚ–grading on (f) by setting
deg([x_1^α_1⋯ x_N^α_N]) := α_1q_1 + … + α_Nq_N.
Let ϕ_1,…,ϕ_μ be the classes of monomials generating (f) as a ℂ–vector space.
We say that X ∈(f) is of degree κ if it is expressed as a ℂ–linear combination of deg = κ elements ϕ_∙.
Denote by (f)_κ the linear subspace of (f) spanned by the deg = κ elements. Let the Hessian of f be defined as the following determinant:
hess(f) := det(∂^2 f/∂ x_i∂ x_j)_i,j=1,…,N.
Its class is nonzero in (f).
The maximal degree of a (f)–element is c=c(f) := ∑_k=1^N (1 - 2 q_k). Moreover we have
(f)_c = ⟨ [hess(f)] ⟩.
See Section II of <cit.>.
§.§ Pairing
The algebra (f) can be endowed with the ℂ–bilinear nondegenerate pairing η_f called the residue pairing (see <cit.>, <cit.>).
The value η_f([u],[v]) is taken as the projection of [u][v] to the top graded component (f)_c divided by its generator [hess(f)]. In particular, we have
η_f([1],[hess(f)]) = 1.
For any β, s.t. 0 ≤β≤ c, the perfect pairing η_f induces an equivalence ϕ_f,β: (f)_β≅ ((f)_c - β)^∨, [p]↦η_f([p],-),
where (-)^∨ stands for the dual vector space.
See Section II of <cit.>.
§.§ The total space
For each g∈_f fix a generator of a one-dimensional vector space Λ(g):=⋀^d_g(^N/(g)). Denote it by ξ_g.
For g∈ G_f^d it is standard to choose the generator to be the wedge product of x_k with k∈ I^c_g taken in increasing order.
Define _tot(f) as the ℂ–vector space of dimension ∑_g ∈_f dim_ℂ (f^g):
_tot(f) := ⊕_g ∈_f(f^g) ξ_g.
Each direct summand (f^g) ξ_g will be called the g–th sector.
We will write just _tot when the polynomial is clear from the context.
Note that for g,h ∈ G, s.t. (g) = (h), we have f^g = f^h. Then (f^g) = (f^h), but the formal letters ξ_g ≠ξ_h help to distinguish (f^g) ξ_g and (f^h)ξ_h, s.t. (f^g) ξ_g ⊕(f^h)ξ_h is indeed a direct sum of dimension (f^g)+(f^h).
§.§ B-model group action
Note that an element h∈_f induces a map
h: (g) → (hgh^-1),
and hence a map
h^∗: Λ(hgh^-1) → Λ(g). Since we have fixed the generators ξ_∙, the latter map provides a constant ρ_h,g∈ ℂ^* s.t.
h^∗( ξ_hgh^-1) = ρ_h,gξ_g.
Note that if g,h∈ G_f^d or, more generally, if g and h commute, then ρ_h,g is independent of the choice of the generators since g=hgh^-1. More precisely, in this case it can be computed as follows. Let λ_k,λ'_k be the eigenvalues of h and g in their common eigenbasis; then
ρ_h,g = ∏_k=1,…,N; λ'_k ≠ 1 λ_k.
We define the action of _f on _tot by
h^*([p(x)]ξ_g) = ρ_h,g^-1 [p(h·x)] ξ_hgh^-1.
This is indeed a group action, i.e. (h_1h_2)^* = h_1^* · h_2^*.
Note that, in particular, if g,h∈ G_f then h acts on ξ_g by
h: ξ_g↦ h^* (ξ_g) := ∏_k∈ I_g^c h^-1_k ·ξ_g.
Because I_𝕀^c = I_j_f = ∅ we have
h^*(ξ_𝕀) = ξ_𝕀 and h^*(ξ_j_f) = det(h)^-1ξ_j_f
for any h ∈ G_f.
Similarly, for any [p]ξ_g with a homogeneous p ∈ ℂ[x_1,…,x_N] and g∈_f we have
(j_f)^*([p]ξ_g) = [deg(p) - ∑_k ∈ I_g^c q_k] · [p]ξ_g.
For a finite G ⊆_f put
_tot,G:=⊕_g ∈ G(f^g) ξ_g⊂_tot
and define the B–model state space (f,G) by
(f,G) := ( _tot,G)^G.
Namely, the linear span of the _tot vectors that are invariant w.r.t. the action of all elements of G.
In the literature (see, for example, <cit.>) a different definition could be found where the sum is taken over the representatives of the conjugacy classes of G and the invariants in each sector are taken with respect to the centralizer of the corresponding g. The two definitions are in fact equivalent in the same way as in <cit.>.
Let f = x_1^a_1 be the Fermat type polynomial. Assume a_1 = rm and consider G to be generated by g = (1/r).
_tot = ⟨ [1]ξ_𝕀, [x_1]ξ_𝕀, …,[x_1^rm-2] ξ_𝕀⟩⊕⟨ [1]ξ_g,…,[1]ξ_g^r-1⟩.
Because I_g^c = I_g^k^c = {1}, we have
(g^k)^*(ξ_g^l) = exp(- 2 π i ·k/r) ξ_g^l.
However (g^k)^*([x_1^l]) = exp(2 π i ·kl/r) [x_1^l] and the G–invariant monomials are x_1^rn with n ∈ ℤ_≥ 0. This gives
(f,G) = ⟨ [1]ξ_𝕀,[x_1^r]ξ_𝕀, …, [x_1^r(m-1)]ξ_𝕀⟩.
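A short script reproducing this count, under the conventions of the example (f = x_1^a_1 with a_1 = rm, and g acting by x_1 ↦ exp(2πi/r) x_1), might look as follows; the values of r and m are arbitrary.

import numpy as np

r, m = 3, 4
a1 = r * m

invariants = []
# Identity sector: classes [x_1^l], l = 0, ..., a1 - 2; xi_id carries no extra
# factor, so invariance only requires l = 0 mod r.
for l in range(a1 - 1):
    if l % r == 0:
        invariants.append(f"[x1^{l}] xi_id")

# Twisted sectors g^k, k = 1, ..., r - 1: Fix(g^k) = 0, so each is spanned by
# [1] xi_{g^k}; the generator g multiplies it by exp(-2 pi i / r) != 1,
# hence no invariants arise here.
for k in range(1, r):
    if abs(np.exp(-2j * np.pi / r) - 1) < 1e-12:
        invariants.append(f"[1] xi_g^{k}")

print(len(invariants), "==", m)   # dim B(f, G) = m, as stated above
print(invariants)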
§.§ Bigrading
For any homogeneous p ∈ ℂ[x_1,…,x_N] define for [p]ξ_g its left charge q_l and right charge q_r to be
(q_l, q_r) = ( deg p - ∑_k ∈ I_g^c q_k + age(g), deg p - ∑_k ∈ I_g^c q_k + age(g^-1)).
This definition endows _tot with the structure of a ℚ-bigraded vector space.
For u,v∈ G_f^d it follows immediately that
q_∙ (ξ_u) + q_∙ (ξ_v) = q_∙ (ξ_uv)
for u,v s.t. I_u^c ∩ I_v^c = ∅.
This bigrading restricts to (f,G) because q_l,q_r commute with the action of h^∗ for any h ∈_f: h preserves the weights and age(g) = age(hgh^-1).
§ HODGE DIAMOND OF LG ORBIFOLDS
Assume N ≥ 3 and the reduced weight set of f to satisfy ∑_k=1^N q_k = 1. The latter equality is called Calabi–Yau condition and we will say that f satisfies CY condition.
For f satisfying CY condition and G, s.t. J ⊆ G ⊆_f both left and right charges q_l and q_r of any Y ∈(f,G) are integer.
Note that q_r([p]ξ_g) = q_l([p]ξ_g) - (N-N_g) + 2 age(g). Due to age(g)∈ℤ the right charge q_r([p]ξ_g) is integral if and only if q_l([p]ξ_g) is integral.
It remains to recall that J^*([p]ξ_g) = [deg(p) - ∑_k ∈ I_g^c q_k]· [p]ξ_g by Example <ref>. Hence for a class in (f,G) we have [q_l - age(g)]=1 and so q_l is an integer.
The following two propositions state that the graded pieces of (f,G) are organized into a diamond when CY condition holds.
For f satisfying CY condition and a finite G ⊆_f, let V^a,b stand for the bidegree (a,b)–subspace of (f,G). We have
* V^a,b = 0 for a < 0 or b < 0;
* V^0,0≅, generated by [1]ξ_𝕀;
* V^a,b = 0 for a > N-2 or b > N-2;
* V^N-2,N-2≅, generated by [(f)]ξ_𝕀.
Assume X = [p]ξ_g for p a polynomial fixed by g.
(i) Let g = 𝕀. Then q_l(X) = q_r(X) = deg p ≥ 0. For g ≠𝕀 we have age(g) ∈ ℤ_≥ 1. Rewriting
q_l(X) = deg p - ∑_k ∈ I_g^c q_k + age(g)
we see that q_l(X) ≥ 0 because ∑_k ∈ I_g^c q_k ≤ ∑_k =1^N q_k = 1. Similarly for q_r(X) by the same argument applied to age(g^-1).
(ii) For g = 𝕀 we have that q_l(X) = q_r(X) = 0 if and only if deg p = 0. By Proposition <ref> we have [p] = [1] in (f).
For g ≠𝕀 we just saw that age(g) ≥ 1 and ∑_k ∈ I_g^c q_k ≤ 1, so q_l(X) = q_r(X) = 0 is achieved only if deg p=0, N_g=0 and age(g)=age(g^-1), which contradicts the N≥ 3 condition.
(iii) If g=𝕀, the statement follows from Proposition <ref>. For g ≠𝕀 apply the same proposition again to estimate deg p in (f^g). Namely, it gives
q_l(X) ≤ N_g - 2∑_k ∈ I_gq_k - ∑_k ∈ I_g^c q_k + age(g) = N_g - ∑_k ∈ I_gq_k - 1 + age(g).
At the same time we have N_g + age(g) = N - age(g^-1) ≤ N-1 because age(g^-1) ∈ ℤ_≥ 1. Combining this with the inequality above we get
q_l(X) ≤ N - 2 - ∑_k ∈ I_gq_k ≤ N - 2.
One gets in a similar way that q_r(X) ≤ N-2.
(iv) If g = 𝕀, by Proposition <ref> we see that q_l([hess(f)]ξ_𝕀)=q_r([hess(f)]ξ_𝕀)=N-2.
If g ≠𝕀, q_l(X) = q_r(X) = N-2 again implies N_g=0 and age(g)=age(g^-1)=1, which altogether implies N=2.
We now construct two symmetries of _tot. The horizontal morphism Ψ and the vertical morphism Φ.
Consider the ⊕ decomposition of _tot as in Eq. (<ref>).
The vertical morphism Φ is the ⊕ of the isomorphisms of Proposition <ref> acting on the g-th sector of _tot
Φ := ⊕_g ∈_f, β∈ℚ ϕ_f^g,β : _tot→_tot^∨.
It is an isomorphism restricted to _tot, G for any finite G because each of ϕ_f^g,β is an isomorphism.
Define the horizontal morphism Ψ to act on the g–th sector by
Ψ([p] ξ_g) := [p] ξ_g^-1.
Extend it by linearity to all _tot.
This is an isomorphism because f^g = f^g^-1 and (f^g) = (f^g^-1).
1) The maps Φ and Ψ are well defined on (f,G) for any finite G ⊆_f.
2) For f satisfying CY condition and a finite G ⊆_f, let V^a,b stand for the bidegree (a,b)–subspace of (f,G).
Then the maps Ψ and Φ induce the –vector spaces isomorphisms
V^a,b≅ V^b,a and V^a,b≅ (V^N-2-b,N-2-a)^∨.
1) The map Ψ commutes with the G-action since (g)=(g^-1). Hence Ψ preserves the invariants.
To see that Φ commute with the G-action by construction it is sufficient to check that (f^g(h·))=(f^hgh^-1()). This holds since f(h·)=f() and (h)=1.
2) We have directly by the definition that q_l(Ψ(X)) = q_r(X) and q_r(Ψ(X)) = q_l(X). The first isomorphism follows.
To verify the compatibility of Φ with the grading, note first that c(f^g)=∑_k∈ I_g(1-2q_k). Thus, by Proposition <ref> the left charge of [ϕ_f^g, deg p(p)]ξ_g is given by
q_l([ϕ_f^g, deg p(p)]ξ_g) = ∑_k∈ I_g(1-2q_k) - deg p - ∑_k ∈ I_g^c q_k + age(g) =
= N_g - 2∑_k∈ I_g q_k - deg p - ∑_k ∈ I_g^c q_k + (N - N_g - age(g^-1))
= -2 + ∑_k∈ I^c_g q_k - deg p + N - age(g^-1) = N - 2 - q_r([p]ξ_g).
The computation for the right charge is identical.
Consider now two more special graded pieces of (f,G).
For f satisfying CY condition and a finite G ⊆_f, let V^a,b stand for the bidegree (a,b)–subspace of (f,G).
Then
* V^N-2,0≅, generated by [1]ξ_j_f^-1,
* V^0,N-2≅, generated by [1]ξ_j_f.
One notes immediately that [1]ξ_j_f and [1]ξ_j_f^-1 are non-zero in (f,G) and belong to V^0,N-2 and V^N-2,0 respectively. By Proposition <ref> it's enough to show one of the statements.
V^0,N-2 is spanned by the classes [1]ξ_g with g ∈ G s.t. age(g) = 1 and N_g = 0.
Let [p]ξ_g ∈ V^0,N-2. It follows from Eq. (<ref>) and (<ref>) that age(g) = 1 - N_g/2. The statement follows by Proposition <ref>.
Under the CY condition, for g ∈_f \{𝕀} of finite order and with integral age(g) we have by Proposition <ref> that age(g) ≥ 1, with equality being reached only for g = j_f.
This completes the proof of Theorem <ref>.
For a fixed pair (f,G) set
h^a,b := dim_ℂ{ X ∈(f,G) | (q_l(X),q_r(X)) = (a,b)}
and denote D := N-2.
It follows from the propositions above that for f satisfying CY condition and G s.t. J ⊆ G ⊆_f, the numbers h^a,b form a diamond.
[Figure: the Hodge diamond of the numbers h^a,b, 0 ≤ a,b ≤ D, arranged with h^0,0 at the top and h^D,D at the bottom, h^D,0 and h^0,D at the left and right corners. The horizontal morphism Ψ and the vertical morphism Φ are indicated, and the positions of a g–th and an h–th sector are marked together with the corresponding g^-1–th and h^-1–th sectors, placed symmetrically w.r.t. the vertical line.]
Let's call the line { h^a,b | a+b = D } the horizontal line and the line { h^a,b | a-b = 0 } the vertical line.
The Hodge diamond { h^a,b}_a,b=0^D has the following special properties
* The g–th sector of (f,G) contributes as a line symmetric w.r.t. the horizontal line.
* Every g–th sector of (f,G) contributes together with a g^-1–th sector of (f,G), located symmetrically w.r.t. the vertical line.
* All the elements of the form ξ_j_f^k contribute to the horizontal line. In particular, h^a,D-a≥ 1 for all a=0,…,D.
* All the elements of the form [p]ξ_𝕀 contribute to the vertical line.
Consider f = x_1^2 x_2 + x_2^2 + x_2 x_3^6 + x_4^6 + x_1x_3^9 and G = _f. Then G = J = ⟨ j_f ⟩ with
j_f = (1/4, 1/2, 1/12, 1/6).
The basis of (f,G) is given by the elements
ξ_j_f^3, ξ_j_f^5, ξ_j_f^7, ξ_j_f^9,
[x_1]ξ_j_f^4, [x_1]ξ_j_f^8, [x_4^2]ξ_j_f^6,
[x_3^4 x_4^4]ξ_𝕀, [x_1 x_3 x_4^4]ξ_𝕀, [x_1 x_3^3 x_4^3]ξ_𝕀, [x_2 x_4^3]ξ_𝕀 , [x_1^2 x_4^3]ξ_𝕀 , [x_1 x_3^5 x_4^2]ξ_𝕀 , [x_2 x_3^2 x_4^2]ξ_𝕀,
[x_1^2 x_3^2 x_4^2]ξ_𝕀, [x_2 x_3^4 x_4]ξ_𝕀 , [x_1 x_2 x_3 x_4]ξ_𝕀, [x_1^3 x_3 x_4]ξ_𝕀 , [x_1^2]ξ_𝕀, [x_2^2]ξ_𝕀.
all having the bigrading (1,1), and the elements
ξ_j_f, ξ_j_f^11, [1] ξ_𝕀, [x_1 x_2^2 x_3 x_4^4]ξ_𝕀,
having the bigrading (0,2), (2,0), (0,0) and (2,2) respectively.
One gets the following diamond
h^0,0 = h^2,0 = h^0,2 = h^2,2 = 1, h^1,0 = h^0,1 = h^2,1 = h^1,2 = 0, h^1,1 = 20, i.e. the diamond
1
0 0
1 20 1
0 0
1
[AGV85]AGV85
V. Arnold, A. Gusein-Zade, A. Varchenko,
Singularities of Differentiable Maps, vol I
Monographs in Mathematics, 82. Birkhäuser Boston, Inc., Boston, MA, 1985
[BT2]BT2
A. Basalaev, A. Takahashi, Hochschild cohomology and orbifold Jacobian algebras associated to invertible polynomials,
Journal of noncommutative geometry,
Vol. 14. No. 3. pp. 861–877 (2020).
[BTW16]BTW16
A. Basalaev, A. Takahashi, E. Werner, Orbifold Jacobian algebras for invertible polynomials,
arXiv preprint: 1608.08962.
[BTW17]BTW17
A. Basalaev, A. Takahashi, E. Werner, Orbifold Jacobian algebras for exceptional unimodal singularities,
Arnold Math J. 3, pp. 483–498 (2017).
[BI21]BI21
A. Basalaev, A. Ionov, Mirror map for Fermat polynomials with a nonabelian group of symmetries, Theoretical and Mathematical Physics, 209(2), 1491–1506., 2021.
[BI22]BI22
A. Basalaev, A. Ionov, Hochschild cohomology of Fermat type polynomials with non–abelian symmetries, Journal of Geometry and Physics, 174, 104450.
[BH95]BH95
P. Berglund, M. Henningson, Landau–Ginzburg orbifolds, mirror symmetry and the elliptic genus.
Nuclear Phys. B 433, pp. 311–32 (1995).
[BH93]BH93
P. Berglund, T. Hübsch, A generalized construction of mirror manifolds.
Nuclear Phys. B 393, pp. 377–91 (1993).
[CJMPW23]CJMPW23
Clawson, A., Johnson, D., Morais, D., Priddis, N., White, C. B. Mirror Map for Landau-Ginzburg models with nonabelian groups. arXiv preprint arXiv:2302.02782, (2023).
[CR11]CR11
A. Chiodo, Y. Ruan. LG/CY correspondence: the state space isomorphism.
Adv. Math. 227, no. 6 pp. 2157–88 (2011).
[EGZ18]EGZ18
W. Ebeling, S. Gusein-Zade,
A version of the Berglund-Hübsch-Henningson duality with non-abelian groups,
International Mathematical Research Notices (2019) https://doi.org/10.1093/imrn/rnz167.
[EGZ20]EGZ20
W. Ebeling, S. Gusein-Zade, Dual Invertible Polynomials with Permutation Symmetries and the Orbifold Euler Characteristic.
Symmetry, Integrability and Geometry: Methods and Applications, 16, 1–15 (2020).
[ET13]ET13
W. Ebeling, A. Takahashi,
Variance of the exponents of orbifold Landau-Ginzburg models,
MATH RES LETT, 20 (2013), no.01, 51–65.
[FJR]FJR
H.Fan, T.Jarvis, Y.Ruan, The Witten equation, mirror symmetry, and quantum singularity theory.
Annals of Mathematics, 178(1), 1–106 (2013).
[FJJS]FJJS
A. Francis, T. Jarvis, D. Johnson, R. Suggs, Landau-Ginzburg mirror symmetry for orbifolded Frobenius algebras.
In Proceedings of Symposia in Pure Mathematics Vol. 85, pp. 333–353 (2012).
[GH94]GH94
P. Griffiths, J. Harris. Principles of algebraic geometry. John Wiley and Sons, 1994.
[I23]I23
A. Ionov, McKay correspondence and orbifold equivalence,
Journal of Pure and
Applied Algebra, Vol. 227, Issue 5, 107297 (2023).
[IV90]IV90
K. A. Intriligator, C. Vafa, Landau-Ginzburg Orbifolds.
Nucl. Phys., B339:95–120 (1990).
[K03]K03
R. M. Kaufmann, Orbifolding Frobenius Algebras.
International Journal of Mathematics, 14(06), 573–617 (2003).
[K06]K06
Kaufmann, Ralph M. Singularities with symmetries, orbifold Frobenius algebras and mirror symmetry.
Contemporary Mathematics 403: 67–116 (2006).
[K09]K09
M. Krawitz, FJR rings and Landau-Ginzburg Mirror Symmetry,
(2009) arXiv preprint: 0906.0796.
[Kreu94]Kreu94
M. Kreuzer,
The mirror map for invertible LG models,
PHYS LETT B 328 (1994), no.3-4, 312–318.
[M]Muk
D. Mukai,
Nonabelian Landau-Ginzburg orbifolds and Calabi-Yau/Landau-Ginzburg correspondence,
(2017) arXiv preprint: 1704.04889.
[MO70]MO70
Milnor, John, and Peter Orlik. Isolated singularities defined by weighted homogeneous polynomials. Topology 9.4 (1970): 385-393.
[S71]S71
Saito, Kyoji. Quasihomogene isolierte singularitäten von hyperflächen. Inventiones mathematicae 14.2 (1971): 123-142.
[S20]S20
D. Shklyarov, On Hochschild invariants of Landau–Ginzburg orbifolds,
Advances in Theoretical and Mathematical Physics, Vol. 24, pp. 189–258 (2020).
[V89]V89
Vafa, Cumrun. String vacua and orbifoldized LG models.
Modern Physics Letters A 4.12 pp. 1169–1185 (1989).
[W93]W93
Witten E. Phases of N= 2 theories in two dimensions.
Nuclear Physics B. Aug 16;403(1-2):159–222 (1993).
[WWP]WWP
J. Ward, M. Williams, N. Priddis
Mirror Symmetry for Nonabelian Landau-Ginzburg Models,
(2018) arXiv preprint: 1812.06200.
[Y16]Y16
X. Yu. McKay Correspondence and New Calabi–Yau Threefolds.
International Mathematics Research Notices, no. 21: 6444–6468, (2017).
§ HODGE DIAMOND OF THE NONABELIAN LG ORBIFOLDS
In this section we ...
First we introduce the vector space (f,G) for the nonabelian groups G.
§.§ Preliminaries and notation
For any u ∈ S_N ⋉ G_f^d we will denote u = σ· g assuming that σ∈ S_N and g ∈ G_f^d.
Let (u) be the eigenvalue 1 subspace of ℂ^N of u and I_u^c be the set of all indices k, s.t. u · x_k ≠ x_k.
Restriction of f to (u), f^u := f |_(u) is a Fermat type polynomial again.
Let σ = ∏_a=1^p σ_a be the decomposition into non–intersecting cycles. Denote by |σ_a| the length of the cycle σ_a. We will also allow σ_a to be of length 1, so that we always have ∑_a=1^p | σ_a | = N.
There exists a unique set g_1,…,g_p of G_f^d–elements, s.t. g_a acts non–trivially only on I^c_σ_a and σ· g = ∏_a=1^p σ_a g_a. We call the product σ· g = ∏_a=1^p σ_a g_a the generalized cycle decomposition of u and each of the σ_ag_a a generalized cycle.
A generalized cycle σ_ag_a is said to be special if (g_a) = 1 and non–special otherwise. It is clear that (σ_ag_a)∩ℂ^I_σ_ag_a^c = 0 for a non–special cycle, where by ℂ^I_u^c we mean the subspace of ℂ^N spanned by standard basis vectors with indices in I_u^c.
For a special cycle we have dim (f^σ_ag_a|_ℂ^I_σ_ag_a^c) = n-1.
Denote by ⌊ϕ() ⌋ the class of the polynomial ϕ() in (f^σ_ag_a).
Let x_i_a be the σ_ag_a-invariant linear combination of the x_∙ with indices in I_σ_ag_a^c,
s.t. (f^σ_ag_a|_ℂ^I_σ_ag_a^c) has the basis ⌊x_i_a^k⌋, k=0,…,n-2.
Set
'_σ_ag_a := ⟨⌊ 1 ⌋, ⌊x_i_a⌋, …, ⌊x_i_a^n-2⌋⟩ξ_σ_ag_a,
where we denote by ξ_σ_ag_a the formal letter associated to σ_ag_a.
The elements of '_σ_ag_a will be denoted by ⌊ϕ() ⌋ξ_σ_ag_a.
In particular, for g_a = 𝕀 we have x_i_a = ∑_i x_i where the summation is taken over i ∈ I^c_σ_a. We adopt the notation above for the non–special cycles too, assuming ⌊x_i_a^0 ⌋ξ_σ_a g_a = ⌊ 1 ⌋ξ_σ_a g_a. For u∈ G with generalized cycle decomposition u = ∏_a=1^p σ_a g_a we have
'_f,u = ⊗_a=1^p '_σ_ag_a.
Fix ζ_n := exp( 2 π√(-1)/n ) and t_k ∈ G_f^d with k=1,…,N by
t_k: (x_1,…,x_N) → (x_1,…, ζ_n x_k, …, x_N).
Then the Fermat type polynomial maximal diagonal symmetries group G_f^d is generated by t_1,…,t_N. Denote also
_f := { g ∈ G_f^d | det(g) = 1 }, J := t_1⋯ t_N.
The groups S ⋉_f and S ⋉ J with S ⊆ S_N will be particularly important in this paper.
§.§ The phase space
For any G ⊆ S_N ⋉ G_f^d we denote by _f,G the phase space of (f,G), being the subspace of _tot defined as follows.
Let ^G stand for the set of representatives of the conjugacy classes of G.
Denote
_f,G := ⊕_u ∈^G( _f,u' )^Z(u),
where the action of v ∈ Z(u) on _f,u' is computed as follows.
Let λ_k, λ'_k be the eigenvalues of u and v computed in their common eigenvector basis. For X = ⌊ϕ() ⌋ξ_u ∈_f,u' and v ∈ Z(u) we have
v^*( X ) = ∏_ k=1,…,N
λ'_k ≠ 11/λ_k⌊ϕ(v ·) ⌋ξ_u.
For any G_1,G_2 ⊆ S_N ⋉ G_f^d we have the natural inclusion i_1: '_f,G_1→_tot and the projection π_2: _tot→'_f,G_2. In what follows we will consider maps ψ: '_f,G_1→'_f,G_2 obtained from maps ψ: _tot→_tot by ψ := π_2 ∘ψ∘ i_1.
With respect to a generalized cycle decomposition u = ∏_a σ_ag_a we have the relation ξ_u = ∏_a=1^p ξ_σ_ag_a between the generators of the different vector spaces _f,G_1, _f,G_2. This extends to the product of arbitrary X_1 = ⌊ϕ_1 ⌋ξ_u and X_2 = ⌊ϕ_2 ⌋ξ_v, regarded as _tot–elements, by X_1X_2 := ⌊ϕ_1 ϕ_2 ⌋ξ_uv when I_u^c ∩ I_v^c = ∅. This is not to be confused with the cup-product on Hochschild cohomology as in <cit.>.
§.§ Age of a noncommutative LG orbifold
For any u ∈ S_N⋉ G_f^d let λ_1,…,λ_N ∈ ℂ be the eigenvalues of the linear transformation x ↦ u · x. We may assume λ_k = exp(2 π√(-1)α_k) for some α_k ∈ ℚ∩ [0,1). Denote:
(u) := ∑_k=1^N α_k.
This definition agrees with Eq. (<ref>). We still have
(u) + (u^-1) = N - N_u = d_u
for the inverse element u^-1.
One notes immediately that for a generalized cycle σ_ag_a we have age(σ_a g_a) = (|σ_a| - 1)/2 if σ_a g_a is special,
and age(σ_a g_a) = (|σ_a| - 1)/2 + age(g_a) otherwise.
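Since the age is defined purely in terms of eigenvalue phases, it can be checked numerically; the sketch below computes age(u) for a permutation-with-phases matrix and verifies the pure-cycle value (|σ|-1)/2 together with the relation age(u) + age(u^-1) = N - N_u. The cycle length and phases are illustrative only.

import numpy as np

def age(U, tol=1e-9):
    # sum of the phases alpha_k in [0,1) of the eigenvalues of U
    alphas = np.angle(np.linalg.eigvals(U)) / (2 * np.pi) % 1.0
    alphas[alphas > 1 - tol] = 0.0               # guard against round-off at 1
    return alphas.sum()

ell = 5
P = np.roll(np.eye(ell), 1, axis=0)              # a single cycle sigma of length ell
print(age(P), (ell - 1) / 2)                     # a pure cycle has age (ell - 1)/2

# compose with a diagonal element g_a (phases are cube roots of unity here):
phases = np.exp(2j * np.pi * np.array([1, 0, 2, 0, 1]) / 3)
U = P @ np.diag(phases)
d_u = np.linalg.matrix_rank(U - np.eye(ell))             # N - N_u
print(np.isclose(age(U) + age(np.linalg.inv(U)), d_u))   # True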
We have a noncommutative analogue of Proposition <ref>.
For any u ∈ S_N ⋉ G_f^d s.t. N_u = 0 we have
age(u) ≥∑_k=1^N q_k.
The equality is only reached if u = 𝕀· j_f.
Let u be decomposed into the generalized cycles by u = ∏_a=1^p σ_a g_a. Then each σ_ag_a is nonspecial, since a special cycle would contribute a fixed direction and hence N_u > 0. Therefore
age(u) = ∑_a=1^p age(σ_a g_a) = ∑_a( (|σ_a|-1)/2 + age(g_a) ).
§.§ Bigrading
The bigrading of Section <ref> is extended to the nonabelian LG orbifolds as follows. One considers the left and right charges q_l and q_r given by the same formulae, the only difference being that the operator age(u) is now defined as above.
This definition endows _tot with the structure of a ℚ-bigraded vector space.
This is exactly the bigrading introduced in <cit.>. It follows immediately that q_∙ (ξ_u) + q_∙ (ξ_v) = q_∙ (ξ_uv) for u,v∈ G, s.t. I_u^c ∩ I_v^c = ∅.
Hunting for exoplanets via magnetic star-planet interactions: geometrical considerations for radio emission
Robert D. Kavanagh, Harish K. Vedantham
============================================================================================================
Recent low-frequency radio observations suggest that some nearby M dwarfs could be interacting magnetically with undetected close-in planets, powering the emission via the electron cyclotron maser (ECM) instability. Confirmation of such a scenario could reveal the presence of close-in planets around M dwarfs, which are typically difficult to detect via other methods. ECM emission is beamed, and is generally only visible for brief windows depending on the underlying system geometry. Due to this, detection may be favoured at certain orbital phases, or from systems with specific geometric configurations. In this work, we develop a geometric model to explore these two ideas. Our model produces the visibility of the induced emission as a function of time, based on a set of key parameters that characterise magnetic star-planet interactions. Utilising our model, we find that the orbital phases at which emission appears are highly dependent on the underlying parameters, and do not generally coincide with the quadrature points of the orbit as is seen for the Jupiter-Io interaction. Then using non-informative priors on the system geometry, we show that untargeted radio surveys are biased towards detecting emission from systems with planets in near face-on orbits. While transiting exoplanets are still likely to be detectable, they are less likely to be seen than those in near face-on orbits. Our forward model serves as a powerful tool for both interpreting and appropriately scheduling radio observations of exoplanetary systems, as well as for inverting the system geometry from observations.
stars: magnetic field – radio continuum: planetary systems
§ INTRODUCTION
The majority of exoplanets discovered to date orbit around low-mass main-sequence stars, in agreement with formation theory <cit.>. M dwarfs, the lowest mass stars on the main sequence, are the most numerous in the stellar neighbourhood <cit.>, and are expected to preferentially host close-in rocky planets <cit.>. While in theory the detection of an Earth-like planet orbiting an M dwarf is much easier compared to a Sun-like star due to the higher mass/size ratio, these stars generally exhibit much higher levels of magnetic activity. As a result, the majority of these planets likely remain undetected to date via traditional techniques such as the radial velocity and transit methods, as the activity of the host star can readily drown out signatures of the planet.
That being said, an alternative mechanism may produce signatures which can be distinguished from stellar activity, particularly for M dwarfs. This mechanism is thought to occur via magnetic star-planet interactions <cit.>. The inspiration for this comes from Jupiter's sub-Alfvénic interactions with the Galilean moons Io, Europa, and Ganymede. The motion of these bodies through Jupiter's magnetosphere is known to produce bright coherent radio emission along the magnetic field line linking each moon to Jupiter, especially in the case of Io. The radio emission is powered by the electromotive force felt by charges in the ionospheres of the moons as they move across the Jovian magnetic field. This energy is transported towards Jupiter in the form of Alfvén waves <cit.>, which subsequently accelerate electrons that emit radio waves via the electron cyclotron maser (ECM) instability <cit.>.
Determining if the orbit of a satellite is sub-Alfvénic or not requires knowledge of the plasma environment. In this region, the magnetic energy of the plasma exceeds the kinetic energy. Another way to express this is via the Alfvénic Mach number, which is
M_A = Δ u/u_A = Δ u √(4πρ)/B ,
where Δ u is the plasma velocity in the rest frame of the satellite, u_A is the Alfvén velocity, and ρ and B are the density and magnetic field strength of the plasma at the position of the satellite. When the ratio of the velocities is less than unity (M_A < 1), the disturbance in the magnetic field created by the satellite can propagate as Alfvén waves along the field lines back to the star. If M_A > 1 however, the disturbance created by the satellite is moving faster than the Alfvén waves and therefore a shock discontinuity is set up and the disturbance can no longer flow back to the star. The boundary where M_A = 1 is known as the Alfvén surface, which can be complex in shape depending on the magnetic field topology at the stellar surface <cit.>.
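As a rough numerical illustration of this criterion (not tied to any particular system discussed here), the Alfvénic Mach number can be evaluated directly in CGS units for an assumed hydrogen plasma; the parameter values below are purely illustrative placeholders.

import numpy as np

m_p = 1.6726e-24                      # proton mass [g]

def alfven_mach_number(delta_u, n, B):
    # M_A = delta_u * sqrt(4 pi rho) / B, with rho = n * m_p for a hydrogen plasma
    rho = n * m_p                     # mass density [g cm^-3]
    u_A = B / np.sqrt(4 * np.pi * rho)
    return delta_u / u_A

# e.g. 200 km/s relative motion through a 1e4 cm^-3 plasma threaded by a 0.1 G field:
print(alfven_mach_number(2e7, 1e4, 0.1))   # ~0.09 < 1, i.e. a sub-Alfvenic orbit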
The reason why M dwarfs are excellent candidates for the same type of interactions as seen with Jupiter and its inner moons is primarily due to the strong magnetic fields they possess, which can be upwards of a kilogauss (kG) in strength <cit.>. High field strengths correspond to high Alfvén velocities, meaning that the plasma, or wind in the case of a low-mass main-sequence star, must be accelerated to high velocities before M_A > 1. As a result, M dwarfs are likely to harbour large Alfvén surfaces, enclosing a wide range of orbits wherein magnetic SPI can occur <cit.>.
There has been a resurgence in the search for magnetic SPI in recent years, primarily due to the detection of bright radio emission with a high degree of circular polarisation from nearby M dwarfs <cit.>, which is a signpost of the ECM mechanism <cit.>, although not necessarily powered by SPI. In the case of the 19 M dwarfs detected by <cit.>, none show any correlation between their radio luminosities and activity indicators. This is consistent with the driving mechanism being magnetospheric in origin. Yet, none of these stars are known to host close-in planets, leaving the interpretation ambiguous <cit.>. If the detected emission from these systems is in fact due to the presence of undiscovered companions, there is the question of is there something special about these systems? If so, what is it about these systems that makes them more visible compared to other nearby M dwarfs?
ECM emission is beamed, and is generally only visible for brief windows. A result of this can be seen from the emission Io induces on Jupiter, which appears only at `quadrature' points of Io's orbit (orbital phases of 0.25 and 0.75). To determine precisely when emission will appear for a system requires both knowledge of the geometry of the large-scale magnetic field that the satellite interacts with, as well as the properties of the emission cone generated from the interaction <cit.>. It could be the case that certain combinations of the geometry of the stellar magnetic field and planetary orbit could produce emission that is more visible compared to other configurations. We note also that the planet itself could be a source of beamed radio emission <cit.>, which could be difficult to disentangle from the emission induced on the star. However, there are many uncertainties in the frequency at which we expect exoplanetary radio emission, primarily due to our lack of knowledge about exoplanetary magnetic fields <cit.>.
Recently, we utilised magnetohydrodynamic (MHD) models to assess the beaming of emission induced by a hypothetical planet for a variety of orbits around WX UMa <cit.>, one of the M dwarfs detected by <cit.>. The method used was based on the surface magnetic field map of the star obtained using the Zeeman-Doppler imaging technique <cit.>. However, these maps are not generally available for M dwarfs. In fact, the only other star in the sample presented by <cit.> with a magnetic field map is AD Leo <cit.>. Note that <cit.> suggest the detected emission could be in fact due to flaring, and not magnetic SPI.
Our work on WX UMa illustrated that sophisticated MHD models can help us to better understand the underlying mechanism generating ECM emission on nearby M dwarfs, particularly in terms of identifying potential signatures of undiscovered planets. However, they are reliant on the availability of magnetic field maps for M dwarfs, and are also computationally expensive. Therefore, there is a mounting need for an alternative method to estimate the visibility of planet-induced radio emission that does not heavily depend on ZDI and MHD simulations. This would allow for the detected emission reported by <cit.>, as well as future observations, to be better-interpreted. The Exoplanetary and Planetary Radio Emission Simulator (ExPRES) code developed by <cit.> is suitable for this in theory, which was originally developed to model the observed auroral emission on Jupiter and Saturn. To our knowledge however, it has not been utilised to answer the questions laid out in this work. We discuss the comparison between our methods in this work to the ExPRES code in Section <ref>.
In this paper, our main goal is to answer two questions:
* What orbital phases is radio emission most likely to appear at in magnetic SPI?
* What systems are we more likely to detect in untargeted radio surveys?
To answer these questions, and also address the issues mentioned above, we develop a forward model based on key parameters relating to the geometry of magnetic SPI to predict the visibility of planet-induced radio emission as a function of time. This model provides the community with a flexible tool to interpret radio observations from low-mass stars in the context of magnetic SPI. The model is described in Section <ref>. In Section <ref>, we illustrate the use of the model by demonstrating the phenomenon of emission appearing at quadrature points of a satellite's orbit, as is seen for Jupiter's moon Io. Then in Section <ref>, we utilise the model to address the question of are we systematically biased towards detecting emission from systems with certain architectures.
§ MASER: A CODE FOR MODELLING MAGNETIC STAR-PLANET INTERACTIONS
In this Section, we describe the model we develop to predict when radio emission induced on a star via magnetic SPI is visible as a function of time. The model is freely available as a Python code on GitHub as the MASER (Magnetically interActing Stars and Exoplanets in the Radio) code[]. The model takes a key set of inputs relating to the geometry and physical properties of magnetic SPI, as well as an array of times for which the visibility of the radio emission is computed. Table <ref> lists each quantity and their respective symbols, which we use throughout unless noted otherwise.
The MASER code computes what we refer to as the `visibility lightcurve' for the system described by the input parameters (described further in Section <ref>). The code depends only on NumPy <cit.>. It is also compatible with Numba <cit.>, which allows for quick execution. When utilised with Numba's `no Python mode', a lightcurve with 10^4 time elements takes 2.5 milliseconds to compute on average using a single performance core of an Apple M2 chip, which is about 50 times faster than the standard computation time using Python.
§.§ The geometry of magnetic star-planet interactions
To determine if radio emission induced on stars by an orbiting planet is visible at a given time, we first need to establish the key physical and geometrical parameters of the system. The host star has a mass M_⋆, radius R_⋆, and rotation period P_⋆. Its rotation axis is ẑ_⋆, which is inclined relative to the line of sight x̂ by the angle i_⋆. The star rotates about ẑ_⋆ in a clockwise direction when looking along ẑ_⋆. Note that all vectors denoted with a hat are unit vectors (their magnitude is unity).
Given that our focus here is on M dwarfs, we opt to represent the large-scale magnetic field of the star that the planet interacts with as a dipole. Dipolar magnetic fields drop off in strength slowest as a function of distance r compared to higher order modes (quadrupole, octupole, etc.), with the field strength going as r^-3. As a result, unless the planet is very close to its host star, the field that the planet sees is a dipole. In addition to this, M dwarfs often exhibit strong, predominantly-dipolar magnetic fields <cit.>.
The maximum magnetic field strength at the stellar surface is B_⋆, which for a dipolar field occurs at its magnetic poles. The magnetic axis of the star ẑ_B points outward from the center of the star to the Northern magnetic pole, and is tilted relative to the stellar rotation axis ẑ_⋆ by the angle β. This is known as the magnetic obliquity. When β≠ 0, the magnetic axis precesses about the stellar rotation axis as the star rotates. We assume that the magnetic field rotates rigidly with the stellar rotation period. At time t, the rotation phase of the star is
ϕ_⋆ = ϕ_⋆,0 + t/P_⋆,
where ϕ_⋆,0 is the stellar rotation phase at t = 0. Note that from this definition, the phase varies from 0 to 1. As such, we multiply the phase by 2π when used in trigonometric functions. In Appendix <ref>, we describe the coordinate system for the stellar rotation and magnetic field in more detail.
Around the star, a planet orbits at a distance a. Its orbital period P_p is provided via Kepler's third law:
P_p = 2π√(a^3/GM_⋆) ,
where G is the gravitational constant. We assume that the planet's orbit is circular. Its position is described by the vector x̂_p, and the vector normal to its orbital plane ẑ_p is inclined relative to x̂ by the angle i_p. Again, the convention we adopt for the orbit direction is clockwise when looking along ẑ_p. The vector ẑ_p is misaligned with respect to ẑ_⋆ by the angle ψ, known as the spin-orbit angle. Note that in general, it is easier to measure the projected spin-orbit angle λ for exoplanetary systems, which is the angle between ẑ and ẑ', the projections of ẑ_⋆ and ẑ_p on to the plane of the sky <cit.>. The relation between ψ and λ is given by Equation <ref>. The orbital phase of the planet at time t is
ϕ_p = ϕ_p,0 + t/P_p,
where ϕ_p,0 is the orbital phase at t = 0. Again, the values for ϕ_p range from 0 to 1. When ϕ_p = 0, the planet is closest to the observer (at conjunction). However, if i_p = 0 or 180, the planet is always at the same distance from the observer, and the planet's position is either in the direction of -ẑ' or ẑ' respectively at ϕ_p = 0. Appendix <ref> presents the details of the coordinates for the planet and spin-orbit misalignment. A geometric sketch of the quantities introduced here is shown in Figure <ref>.
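For concreteness, the orbital period and the two phases can be evaluated as in the short sketch below; it follows the equations above but is not an excerpt from the MASER code, and the stellar and orbital parameters are placeholders.

import numpy as np

G_grav = 6.674e-8                              # gravitational constant [cm^3 g^-1 s^-2]
M_sun, R_sun, day = 1.989e33, 6.957e10, 86400.0

def orbital_period(a, M_star):
    # Kepler's third law for a circular orbit
    return 2 * np.pi * np.sqrt(a**3 / (G_grav * M_star))

def phase(t, P, phase0=0.0):
    # rotation or orbital phase, in the range [0, 1)
    return (phase0 + t / P) % 1.0

M_star = 0.4 * M_sun                           # an M dwarf (illustrative)
a = 20 * 0.4 * R_sun                           # a close-in orbit at 20 stellar radii
P_p = orbital_period(a, M_star)
print(P_p / day, "days")

t = np.linspace(0.0, 10 * day, 1000)
phi_p = phase(t, P_p)                          # orbital phase of the planet at each time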
§.§ Interactions with dipolar magnetic fields
With the relevant properties of the exoplanetary system established, we now describe the magnetic field of the star in more detail. The shape of a dipolar magnetic field line is described by the following equation <cit.>:
r = Lsin^2θ .
Here, r is the radius of a point on the field line measured from the center of the star, θ is the magnetic co-latitude of the point, which is measured from the direction that ẑ_B points in, and L is the distance between the center of the star and the magnetic field line at the magnetic equator.
At each point in the planet's orbit, it interacts with a field line of size L, which has a certain orientation relative to the line of sight. A sketch of this is shown in Figure <ref>. The magnetic co-latitude of the planet θ_p at a given time is determined by both its position and the direction of the magnetic axis:
cosθ_p = ẑ_B·x̂_p .
With an orbital distance of a, Equation <ref> can then be rewritten as an expression for the size of the field line the planet interacts with at each point in its orbit:
L = a/sin^2θ_p .
To determine the orientation of the field line relative to the observer, we require the vector x̂_B, which points along the magnetic equator of the field line that the planet interacts with. The planet's position can be expressed in terms of this vector along with ẑ_B (see Figure <ref>):
x̂_p = sinθ_px̂_B + cosθ_pẑ_B .
Re-arranging, x̂_B is:
x̂_B = x̂_p/sinθ_p - ẑ_B/tanθ_p .
Knowing the directions of ẑ_B and x̂_B as a function of time provides us with the direction of the emission cone ĉ on the field line, which in turn determines if the radio emission the planet induces along the field line via sub-Alfvénic interactions is detectable (see Section <ref>).
There is a caveat in assuming purely dipolar magnetic field lines for the star. Following from Equation <ref>, L becomes very large for small values of θ_p. However, it is not realistic for the star to have closed field lines that extend to hundreds of stellar radii, as the wind of the star will tend to blow them open once the kinetic wind energy exceeds the magnetic tension of the field line. Therefore, we adopt a maximum size for the field lines of 100 R_⋆. If the size of the field line exceeds this, we limit the interaction to the hemisphere the planet is in only. In other words, if L > 100 R_⋆ and θ_p < π/2, the planet induces emission in the Northern magnetic hemisphere only, and if L > 100 R_⋆ and θ_p > π/2, it induces emission in the Southern magnetic hemisphere only. For sufficiently small orbits/large magnetic co-latitudes however, the planet orbits in the closed-field region of the star's magnetosphere. In this scenario, the planet induces emission in both magnetic hemispheres of the star <cit.>, similar to what is observed for the Io-induced radio emission on Jupiter <cit.>.
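The geometrical quantities introduced in this subsection reduce to a few vector operations, sketched below; the input vectors and distances (in units of R_⋆) are illustrative, and the 100 R_⋆ cut-off is the one adopted above.

import numpy as np

def star_planet_field_line(z_B, x_p, a, R_star, L_max=100.0):
    cos_theta_p = np.dot(z_B, x_p)                       # cos(theta_p) = z_B . x_p
    theta_p = np.arccos(np.clip(cos_theta_p, -1.0, 1.0))
    L = a / np.sin(theta_p)**2                           # size of the field line
    x_B = x_p / np.sin(theta_p) - z_B / np.tan(theta_p)  # direction of its magnetic equator
    both_hemispheres = L < L_max * R_star                # closed-field (two-hemisphere) interaction
    return theta_p, L, x_B, both_hemispheres

z_B = np.array([0.0, 0.0, 1.0])                          # magnetic axis
x_p = np.array([np.sin(1.2), 0.0, np.cos(1.2)])          # planet direction, theta_p = 1.2 rad
theta_p, L, x_B, both = star_planet_field_line(z_B, x_p, a=10.0, R_star=1.0)
print(theta_p, L, x_B, both)                             # distances in units of R_star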
§.§ Radio emission from magnetic star-planet interactions
When a conducting body moves through a magnetised plasma with a sub-Alfvénic velocity, mechanical waves known as Alfvén waves are produced <cit.>. In a planetary context, these waves are thought to travel along magnetic field lines, accelerating electrons in the process. Electrons accelerated with sufficiently large pitch angles (the angle between the velocity and local magnetic field vectors) are thought to experience a magnetic mirroring effect. The mirrored electrons have a so-called `loss-cone' velocity distribution, which are unstable to electromagnetic waves at the local cyclotron frequency <cit.>. Due to this instability, these electrons release their energy as electromagnetic waves via the electron cyclotron maser (ECM) instability, typically in the radio regime <cit.>. ECM emission occurs at the fundamental and harmonics of the local cyclotron frequency <cit.>, which in CGS units is
ν_c = 2.8 B MHz,
where B is in Gauss (G).
Equation <ref> tells us that the emission frequency is a direct probe of the magnetic field strength at which the emission is generated. The field strength at each point on a dipolar field line, which is described by Equation <ref>, is given by <cit.>:
B = B_⋆/2( R_⋆/r)^3 (1 + 3 cos^2 θ)^1/2 .
Using Equation <ref>, we can rewrite Equation <ref> in terms of r only, giving
B = B_⋆(R_⋆/r)^3 (1 - 3r/4L)^1/2.
An example of the shape of dipolar field lines of different sizes along with corresponding regions of different cyclotron frequencies is shown in Figure <ref>.
As mentioned in the previous Section, we allow emission to be generated in both magnetic hemispheres if the size of the field line L is less than 100 R_⋆. To determine if fundamental ECM emission generated along the star-planet field line in either hemisphere at the frequency ν is visible to the observer, we first need to find the radius r_ν and magnetic co-latitude θ_ν on the line that give a field strength B_ν = ν / 2.8 via Equation <ref>. As the field line is symmetric about the magnetic equator, the frequency at the point (r_ν, θ_ν) is equivalent to that at (r_ν,π-θ_ν). Setting Equation <ref> equal to B_ν and re-arranging, we can define a new parameter F, which goes to zero as r approaches r_ν:
F = (B_ν/B_⋆)^2 (r/R_⋆)^6 + 3r/4L - 1.
To the best of our knowledge, there is no analytical solution to F=0. Therefore, we utilise Newton's method find its root (see Appendix <ref> for details).
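A possible implementation of this root-finding step is sketched below (it is not the routine from the MASER appendix); B_⋆ is in Gauss, ν in MHz, and r and L are expressed in stellar radii, with illustrative parameter values.

import numpy as np

def solve_r_nu(nu, B_star, L, r0=1.0, tol=1e-10, max_iter=50):
    B_nu = nu / 2.8                           # field strength probed by nu [G]
    b2 = (B_nu / B_star)**2
    r = r0
    for _ in range(max_iter):
        F = b2 * r**6 + 3 * r / (4 * L) - 1   # the function F defined above
        dF = 6 * b2 * r**5 + 3 / (4 * L)
        step = F / dF                         # Newton update
        r -= step
        if abs(step) < tol:
            return r
    return None                               # did not converge

L = 20.0
r_nu = solve_r_nu(nu=150.0, B_star=1000.0, L=L)
theta_nu = np.arcsin(np.sqrt(r_nu / L))       # from r = L sin^2(theta)
print(r_nu, np.degrees(theta_nu))             # a physical root requires 1 <= r_nu <= L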
Once we find r_ν, we obtain θ_ν via Equation <ref>. The point (r_ν, θ_ν) corresponds to the Northern hemisphere, and (r_ν,π - θ_ν) corresponds to the Southern hemisphere. We then determine the direction of the magnetic field vector at the emitting point B⃗_ν in each magnetic hemisphere (see Appendix <ref>), which in turn tells us the direction of the emission cone for each hemisphere ĉ. In the Northern magnetic hemisphere, the emission cone is parallel with the magnetic field vector (ĉ = B⃗_ν / B_ν), and in the Southern magnetic hemisphere, it is anti-parallel (ĉ = -B⃗_ν / B_ν). The angle between the line of sight x̂ and the vector ĉ determines if the radio emission is beamed towards the observer <cit.>. This angle is
cosγ = x̂·ĉ .
Note that emission from each hemisphere will have opposite circular polarisations, under the assumption that the magnetoionic mode is the same <cit.>.
The emission cone has a characteristic opening angle α, and thickness Δα. When γ is in the range of α±Δα/2, the emission is visible to the observer (see Figure <ref>). According to <cit.>, the cone opening angle and thickness depend on the velocity of the accelerated electrons u, such that cosα = u/c and Δα = u/c rad, where c is the speed of light. In other words, α cannot exceed 90. For the Io-induced emission on Jupiter, opening angles of around 80 to 70 have been inferred from observations <cit.>, corresponding to velocities of 0.17 to 0.34c (kinetic energies of 7.4 to 30 keV). From these values, the corresponding cone thickness ranges from ∼10 to 20. However, estimations from observations of the cone thickness imply values of around 1 <cit.>. It is currently unclear what the cause of this discrepancy is. With this in mind, we choose values for α and Δα independently of one another.
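The beaming criterion can be summarised in a few lines of Python. This is a schematic check rather than the actual MASER routine, and the argument names are ours.

```python
import numpy as np

def beamed_towards_observer(c_hat, alpha_deg, dalpha_deg,
                            x_hat=np.array([1.0, 0.0, 0.0])):
    """True when the angle gamma between the emission cone axis c_hat and the
    line of sight x_hat (cos gamma = x . c) lies within alpha +/- dalpha/2."""
    cos_gamma = np.clip(np.dot(x_hat, c_hat), -1.0, 1.0)
    gamma = np.degrees(np.arccos(cos_gamma))
    return abs(gamma - alpha_deg) <= 0.5 * dalpha_deg
```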
There may be certain configurations wherein the beams from both magnetic hemispheres are seen simultaneously. Assuming emission occurs in the same magnetoionic mode, the flux densities from each hemisphere will have opposite signs (neglecting the effects of radiative transfer). Therefore, in this scenario the circularly polarised flux density from each hemisphere will cancel one another out. However, the total flux density will still be received. In such a situation, we still consider the signal to be visible. Recall however that we limit emission to one hemisphere if the size of the field line exceeds 100 R_⋆ (see Section <ref>).
There are also a few conditions which must be satisfied for emission to be generated at the frequency ν at some point along the field line connecting the planet to the stellar surface. Firstly, the maximum cyclotron frequency on the field line ν_max, which occurs at the footpoint of the field line, must be greater than ν. Using Equation <ref> and <ref>, the maximum observable frequency is
ν_max = 2.8 B_⋆(1 - 3R_⋆/4L)^1/2 ,
where L is given by Equation <ref>. Similarly, the minimum frequency observable must exceed the minimum cyclotron frequency on the field line ν_min, which occurs at the magnetic equator:
ν_min = 1.4 B_⋆( R_⋆/L)^3 .
The cyclotron frequency at the planet's position ν_p must also be considered:
ν_p = 2.8 B_⋆(R_⋆/a)^3 (1 - 3a/4L)^1/2 .
Provided ν_max > ν > ν_min, and ν > ν_p, emission can occur in the hemisphere the planet occupies, as well as the opposite hemisphere provided L < L_max (the maximum closed field line size of 100 R_⋆ adopted in Section <ref>). However, if ν_p > ν, emission can only occur in the opposite hemisphere (again provided L < L_max), since no point on the star-planet field line in the hemisphere the planet occupies has a cyclotron frequency corresponding to the observing frequency.
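These frequency conditions translate into a simple check, sketched below under the conventions of this section (lengths in stellar radii, B_⋆ in G, ν in MHz); the function name and return structure are our own.

```python
import numpy as np

def emission_allowed(nu, B_star, a, L, R_star=1.0, L_max=100.0):
    """Return (planet_hemisphere, opposite_hemisphere) flags for emission at
    frequency nu on the star-planet field line of size L, following the
    conditions nu_min < nu < nu_max, nu > nu_p, and L < L_max."""
    nu_max = 2.8 * B_star * np.sqrt(1.0 - 3.0 * R_star / (4.0 * L))
    nu_min = 1.4 * B_star * (R_star / L) ** 3
    nu_p = 2.8 * B_star * (R_star / a) ** 3 * np.sqrt(1.0 - 3.0 * a / (4.0 * L))
    in_band = nu_min < nu < nu_max
    return in_band and nu > nu_p, in_band and L < L_max
```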
§ WHAT ORBITAL PHASES DOES EMISSION APPEAR AT?
With the model described, we now demonstrate its applicability by illustrating the phenomenon of emission appearing at the quadrature points of a planet or satellite's orbit. These points correspond to orbital phases of around 0.25 and 0.75 (with 0 being primary conjunction). The phenomenon of enhanced radio emission from Jupiter at the quadrature points of Io's orbit was first identified almost six decades ago by <cit.>. As the magnetic SPI scenario represents an effectively scaled-up version of the Jupiter-Io system, there has been recent emphasis in the literature on detecting signatures of such interactions at radio wavelengths at quadrature points <cit.>.
§.§ The expectation of emission at quadrature
To understand the phenomenon of emission occurring near points of quadrature, consider the scenario where the orbital, rotation, and magnetic axes are all aligned along ẑ in the plane of the sky (refer to Appendix <ref> for definitions). In this `aligned' configuration, the planet orbits in the equatorial plane of the star, and its position as a function of orbital phase is described by the following vector (Appendix <ref>):
x̂_p = cosϕ_px̂ + sinϕ_pŷ .
The planet induces radio emission at the observing frequency along the field line connecting it to the star in both hemispheres. These points are (r,θ) and (r,π - θ) in the Northern and Southern hemispheres respectively. As the field line is symmetric about the magnetic equator, and the orbital distance is constant, the co-latitudes θ and π - θ always correspond to this frequency. The emission in each hemisphere is beamed in a cone centered along the vector
ĉ = ±(B_r/Br̂ + B_θ/Bθ̂),
where ± denotes the Northern/Southern hemisphere respectively, and B_r, r̂, B_θ, and θ̂ are defined in Equations <ref> to <ref> (replacing x̂_B with x̂_p and ẑ_B with ẑ in this scenario). The direction of ĉ relative to the line of sight determines if and when the emission is beamed towards the observer (Equation <ref>). For emission from the Northern/Southern hemisphere to be seen twice per orbit, the angle between ĉ and x̂ (γ) must be within the range α±Δα/2 twice per orbit. Using Equations <ref> to <ref>, in both magnetic hemispheres this angle can be shown to be
cosγ = 3sinθcosθ/(1 + 3 cos^2θ)^1/2cosϕ_p .
In aligned scenarios, Equation <ref> tells us that emission at a given frequency is visible from both hemispheres simultaneously, assuming fixed parameters for the emission cone. What this frequency is depends on the values of B_⋆ and a. In Figure <ref>, we show γ versus the orbital phase of the planet for different magnetic co-latitudes of the emitting point in the Northern hemisphere. We see that γ varies between a minimum and maximum at primary (ϕ_p = 0) and secondary transits (ϕ_p = 0.5), with the amplitude of these curves being determined by the quantity 3sinθcosθ / (1 + 3cos^2θ)^1/2. The larger this quantity is, the further from primary transit the angle γ is within the range α±Δα/2. It is maximised when θ = cos^-1(1 / √(3)) ≈ 55, giving cosγ = cosϕ_p. Therefore, the furthest from primary transit that emission can appear is centered at orbital phases of α/360 and 1 - α/360, which for the maximum value of α of 90 correspond to orbital phases of 0.25 and 0.75.
In general, planet-induced radio emission in aligned systems is visible at orbital phases of 0 to (α + Δα / 2)/360 and 1 - (α + Δα / 2)/360 to 1. These phase intervals therefore set the minimum and maximum phases where emission can be considered to be at quadrature, with the exact phases being determined by the magnetic co-latitude of the emitting point. The minimum value of γ is cos^-1(3sinθcosθ / (1 + 3cos^2θ)^1/2), which occurs at primary transit. To appear at least once, this quantity must be in the range α±Δα/2, and for emission to appear twice per orbit, it must be less than α - Δα/2.
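A compact numerical check of this aligned-case behaviour is sketched below. Converting the fractional orbital phase to the phase angle ϕ_p with a factor of 2π is our assumption about conventions rather than part of the MASER code, and the emitting co-latitude and cone parameters are illustrative values.

```python
import numpy as np

def gamma_aligned(phase, theta_deg):
    """Angle (deg) between the cone axis and the line of sight in an aligned
    system, cos(gamma) = 3 sin(t) cos(t) / sqrt(1 + 3 cos^2(t)) * cos(phi_p),
    with the fractional orbital phase converted to an angle."""
    t = np.radians(theta_deg)
    amp = 3.0 * np.sin(t) * np.cos(t) / np.sqrt(1.0 + 3.0 * np.cos(t) ** 2)
    arg = np.clip(amp * np.cos(2.0 * np.pi * np.asarray(phase)), -1.0, 1.0)
    return np.degrees(np.arccos(arg))

phases = np.linspace(0.0, 1.0, 3600, endpoint=False)
gam = gamma_aligned(phases, theta_deg=55.0)       # near-maximal amplitude
visible = phases[np.abs(gam - 75.0) <= 2.5]       # alpha = 75 deg, dalpha = 5 deg
```

For these illustrative values the visible phases cluster around 0.21 and 0.79, consistent with the α/360 limit derived above.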
§.§ Signal visibility for aligned and misaligned systems
As shown in the previous section, planet-induced radio emission is always visible twice per orbit in aligned systems near the quadrature points of the planet's orbit, provided that the emission occurs at a magnetic co-latitude that is not close to 0 or 90. But what happens when the magnetic, rotation, and orbital axes are no longer aligned?
In Figure <ref>, we show what we refer to as `visibility lightcurves' for two configurations: an aligned (described in Section <ref>) and a `misaligned' case. For the parameters in each case, see Table <ref>. These curves show a signal which is either `on' or `off'. When γ is in the range α±Δα/2 in either hemisphere, we say the signal is `on', i.e. the emission can in theory be seen by the observer. Otherwise, the signal is `off'. This means that either the emission is not beamed along the line of sight at that time, or emission at the observing frequency cannot be generated at that time (see Equations <ref> to <ref>).
As can be seen from Figure <ref>, in the aligned case, emission appears at the same orbital phases every orbit, near the quadrature limits described in Section <ref>. However, in the misaligned scenario, this is no longer the case. For the first orbit of the planet, emission appears three times, two of which fall outside of the range of possible quadrature phases. In the second orbit, it is seen four times, twice outside of quadrature. Finally, for the third orbit emission appears three times, once outside of quadrature. This demonstrates how complex morphology arises in lightcurves when the system is no longer aligned, resulting in emission appearing outside of quadrature for a significant amount of time. Note also that in the misaligned case, the time duration of each `on' window varies significantly.
To further illustrate the significant differences between the emission morphology for aligned and misaligned systems, we compute the visibility lightcurves for each scenario for 500 orbits of the planet with 500 time elements per orbit, using the same parameters listed in Table <ref>. We then take the orbital phases where emission is visible and fold them with the orbital period of the planet, and compute the probability density (PD) of the visible emission as a function of orbital phase. These are shown in Figure <ref>. Unsurprisingly, in the aligned case emission is contained entirely within two narrow windows within the range of possible quadrature phases. In the misaligned case however, the distribution is much flatter, and has a significant component in the range of orbital phases associated with quadrature emission. Integrating the probability density in the misaligned case outside of quadrature (orbital phases of (α + Δα/2)/360 to 1 - (α + Δα/2)/360), we find that emission appears outside of quadrature 57% of the time. This illustrates that carrying out targeted radio observations of systems only at points of quadrature when we have little knowledge of the geometrical properties of the planetary orbit, magnetic field, and rotation axis of the star may not be the most appropriate course of action. Similarly, interpreting radio emission away from quadrature as being unrelated to SPI is also fraught. One must therefore use a geometric model such as the one presented in this work for analysis.
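The integral quoted above reduces to a simple phase-window count once the visible phases are folded. The sketch below assumes the quadrature window defined in Section <ref> (cone angles converted to phase by dividing by 360) and is not part of the MASER code itself.

```python
import numpy as np

def fraction_outside_quadrature(visible_phases, alpha_deg, dalpha_deg):
    """Fraction of the folded 'on' phases that lie outside the quadrature
    windows [0, (alpha+dalpha/2)/360] and [1-(alpha+dalpha/2)/360, 1]."""
    edge = (alpha_deg + 0.5 * dalpha_deg) / 360.0
    phases = np.asarray(visible_phases) % 1.0
    outside = (phases > edge) & (phases < 1.0 - edge)
    return outside.mean()
```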
The fact that the Io induced emission on Jupiter appears almost exclusively at the quadrature points of Io's orbit <cit.> is due to the fact it resembles the aligned configuration described here. This is because we view the system from the ecliptic plane of the solar system. We show this in Appendix <ref>, where we compare the probability density of emission in an aligned configuration to the results reported by <cit.>.
§.§ A departure from emission at quadrature
We now explore the effects of each of the angles i_⋆, β, i_p and λ on the PD of the lightcurve, to determine their influence on the range of orbital phases at which emission can appear. Again we fix the remaining values as listed in Table <ref>, and vary i_⋆, β, i_p and λ individually. Our resolution for i_⋆, β, and i_p is 1.8, and 3.6 for λ. We compute the lightcurve in the same manner as in Section <ref>, and then compute the PD of the emission as a function of orbital phase. The results of this are shown in Figure <ref>.
We see that apart from the projected spin-orbit angle, when the values of i_⋆, β, and i_p depart from those describing an aligned configuration, emission no longer primarily appears near the points of quadrature. Therefore, without knowledge of these parameters, scheduling radio observations at the quadrature points of a planet's orbit can result in limited or no visibility of the emission induced by the planet. The converse is also true. If we know these properties, the model provided here can be used to estimate what orbital phases to sample.
We see that if the stellar rotation axis or the orbital axis is oriented close to the line of sight (i_⋆≲20 or ≳160, i_p≲10 or ≳170), emission is never visible regardless of the magnetic obliquity and projected spin-orbit angle for systems described by the remaining parameters listed in Table <ref>. <cit.> found a similar result using the ExPRES code, in that effectively zero planet-induced emission is seen in the systems they simulated for orbital inclination ≲30 or ≳150, irrespective of the magnetic obliquity. Note that their analysis was limited to orbital inclinations and magnetic obliquities in increments of 15.
§ WHAT EXOPLANETS ARE WE BIASED TOWARDS DETECTING IN THE RADIO?
One of the main motivators for developing the model presented here is to determine if we are biased towards detecting planet-induced radio emission from exoplanetary systems with certain architectures. Analogous to our detection bias towards orbits with i_p∼ 90 when using the radial velocity and transit methods, certain orbital configurations may result in the induced ECM emission being beamed towards the observer for a longer duration of time (higher duty cycle) compared to other configurations. If this is the case, then systems identified as candidates for magnetic SPI via blind radio surveys may be more likely to reflect such configurations <cit.>.
To answer this question, we need to compute the visibility lightcurves for a wide range of parameters, and determine which parameters (if any) produce emission with a high duty cycle. As there are a large number of parameters (Table <ref>), we choose random samples for each one. Next, we describe our choices for the range of values for each parameter, as well as the underlying distribution we draw them from.
§.§ The parameter space for planet-hosting M dwarfs
As M dwarfs are likely to be the most favourable targets for detection of planet-induced radio emission, we focus on sampling a parameter space reflective of these stars. The masses of M dwarfs range from ∼0.1 to 0.6 M_⊙, and volume-limited surveys of nearby M dwarfs suggest that the number of M dwarfs drops off linearly with mass, with late-type M dwarfs being around four times as common as early-types <cit.>. Therefore, we draw samples for the stellar mass from a linear distribution with the same slope as that found by <cit.>. The masses and radii of M dwarfs are related via <cit.>
R_⋆ = (0.935±0.015)M_⋆ + (0.0282±0.0068) ,
where R_⋆ and M_⋆ are in solar units. We use Equation <ref> to draw samples for the stellar radius based on the samples drawn for M_⋆, assuming the errors in Equation <ref> are Gaussian. For masses of 0.1 to 0.6 M_⊙, the resulting radii range from ∼0.1 to 0.6 R_⊙. Note that this relation is derived from eclipsing binaries, and is assumed to hold for single stars <cit.>.
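A sketch of how such samples can be drawn is given below. The exact slope adopted in the text is that of the cited survey; here we simply parameterise a linear probability density by the quoted factor-of-four ratio between the low- and high-mass ends, so the `ratio` argument and the rejection-sampling scheme are our own choices, not the MASER implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mass_radius(n, m_lo=0.1, m_hi=0.6, ratio=4.0):
    """Draw stellar masses (solar units) from a linearly decreasing pdf whose
    value at m_lo is `ratio` times that at m_hi, then radii from the linear
    mass-radius relation above, perturbing its coefficients by their quoted
    Gaussian errors."""
    masses = np.empty(0)
    while masses.size < n:
        m = rng.uniform(m_lo, m_hi, n)
        weight = 1.0 + (ratio - 1.0) * (m_hi - m) / (m_hi - m_lo)
        keep = rng.uniform(0.0, ratio, n) < weight
        masses = np.concatenate([masses, m[keep]])
    m = masses[:n]
    r = rng.normal(0.935, 0.015, n) * m + rng.normal(0.0282, 0.0068, n)
    return m, r
```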
The rotation periods of M dwarfs depend on both their spectral type (mass) and age <cit.>. For early M-stars, there is evidence for a bimodal distribution of rotation periods, which disappears past the fully-convective boundary. <cit.> suggest that this is either due to these stars rapidly spinning down at around 3 Gyr, or a detection bias disfavouring stars with intermediate periods, which exhibit lower levels of variability and therefore are more difficult to measure rotation periods for <cit.>. In addition to these uncertainties, there are only a small number of late-M stars with measured rotation periods <cit.>. With this in mind, along with the fact that we do not explicitly consider the age/activity of the star, we opt to choose samples for the rotation period uniformly in the range of 0.1 to 160 days, which covers the rotation periods of the M dwarfs presented by <cit.>. For the inclination of the rotation axis, there should be no preferential orientation of the vector ẑ_⋆ when projected onto a unit sphere centered on the observer. Therefore, we sample cos(i_⋆) uniformly from 1 to -1 (0 to 180), which gives a uniform surface density of points on the unit sphere. For the initial rotation phase, we choose values from 0 to 1 uniformly.
The dipolar magnetic field strengths of M dwarfs are estimated to range from at least 100 G to a few kG depending on their activity. Such information along with the magnetic obliquity can be inferred with ZDI. M dwarfs exhibit a range of surface field magnetic configurations. Going from early to mid-type M dwarfs, their fields transition from being relatively weak and non-axisymmetric <cit.> to being strong and axisymmetric <cit.>, resembling aligned dipoles. Interestingly, late-M stars appear to exhibit both configurations <cit.>. There is a further complication to this. ZDI generally only recovers a fraction of the underlying magnetic energy, which depends on the magnetic multipole. This fraction of energy recovered by ZDI also depends on both the inclination of the stellar rotation axis and the rotation rate of the star <cit.>. Note however this has only been studied in the context of the Sun, as we cannot assess the true magnetic topology of other stars. What is clear however from Figure 12(c) of <cit.> is that this effect is most severe for the dipolar component of the magnetic field. In short, ZDI can provide information about the strength and obliquity of the dipole component of the magnetic fields of M dwarfs. However, depending on the spectral type, inclination, and rotation period, its true strength may be difficult to recover with ZDI. With this in mind, as well as the fact that the dipole field strengths and obliquities are not generally explicitly stated in the literature, we again take an uninformed approach and draw the samples for the dipole field strength and obliquity from uniform distributions. For the field strengths, we consider values from 100 G to 1 kG, and for the obliquity, 0 to 180.
In terms of the planet itself, we can first impose a lower limit for its orbital distance using the Roche limit, which tells us the minimum distance a planet can be to its host star before it starts to disintegrate. Massive, gaseous exoplanets are more susceptible to this compared to rocky exoplanets. Therefore, the shortest period planets around stars are likely to be rocky. For incompressible bodies (i.e. rocky planets), the Roche limit for its orbital distance is <cit.>
a/R_⋆ > 2.44( ρ_⋆/ρ_p)^1/3 ,
where ρ_⋆ and ρ_p are the densities of the star and planet. The density of the star is ρ_⋆ = 3 M_⋆ / 4πR_⋆^3, and the lower limit for the orbital distance as a function of stellar mass is smallest when the planet density is highest. For rocky planets, this is estimated to be around 8 g cm^-3 <cit.>. Therefore
a/R_⋆ > 0.75 M_⋆^1/3R_⋆^-1 .
Note that M_⋆ and R_⋆ are in CGS units here.
The relevant outer limit for the orbital distance in the context of magnetic SPI on M dwarfs is the size of the Alfvén surface. Outside this region, the planet cannot induce radio emission from the star. Therefore, we set the upper limit for the orbital distance as the maximum radius of the Alfvén surface. This generally corresponds to where the magnetic field lines begin to open, which in our model we set to occur at 100 stellar radii, so we adopt the same value for the upper limit. This value is consistent with MHD models of the wind of WX UMa <cit.>, which possesses one of the strongest magnetic fields measured to date <cit.>. <cit.> estimated the size of the Alfvén surface to be around 80 stellar radii. Note however that an Alfvén surface of this size is likely only valid for the most active M dwarfs, and is likely an overestimation in the case of inactive M dwarfs. However, this information cannot be determined without some form of stellar wind modelling.
With these limits in place, the next question is what distribution to choose for the orbital distances. In general, it is easier to find planets the closer they orbit to their host star. Additionally, larger planets are also more easily detected. On top of this, formation models are presently at odds with the observed exoplanet demographics for M dwarfs. So far, more massive and fewer short-period (small orbital distance) planets have been found around M dwarfs compared to what these models predict <cit.>. Given these uncertainties, we again opt for a uniform distribution for the orbital distance.
If the orbital axis ẑ_p is independent of the rotation axis ẑ_⋆, the distribution of orbital inclinations should be uniform in cos i_p such that the tips of the orbital axes are uniformly distributed over a unit sphere. This distribution combined with a uniform distribution for cos i_⋆ results in a distribution for the true spin-orbit angle ψ that is uniform in cosψ. The corresponding distribution for the projected spin-orbit angle λ is also uniform. Observations hint at an underlying bimodal distribution of spin-orbit angles centered at ψ≈0 and 90 <cit.>. If that is the case, then clearly there must be some relationship between the direction of ẑ_⋆ and ẑ_p. However, the number of measurements for ψ and λ are limited, particularly for M dwarfs, and can only be measured for transiting exoplanets. Due to these low numbers, we opt for an uninformed approach, and uniformly sample values for cos i_p from -1 to 1 and for λ from 0 to 360. For the initial orbital phase of the planet, we also uniformly sample the values from 0 to 1.
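The isotropic-orientation draws described here can be written compactly; the snippet below also evaluates the true spin-orbit angle from the relation in Appendix <ref> as a consistency check. Function and variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_orientations(n):
    """Draw cos(i_star) and cos(i_p) uniformly in [-1, 1] and lambda uniformly
    in [0, 360) deg, returning the angles (deg) together with the true
    spin-orbit angle psi from cos(psi) = cos(i_star) cos(i_p)
    + sin(i_star) sin(i_p) cos(lambda)."""
    ci_s = rng.uniform(-1.0, 1.0, n)
    ci_p = rng.uniform(-1.0, 1.0, n)
    lam = rng.uniform(0.0, 360.0, n)
    cpsi = (ci_s * ci_p
            + np.sqrt(1.0 - ci_s ** 2) * np.sqrt(1.0 - ci_p ** 2)
            * np.cos(np.radians(lam)))
    return (np.degrees(np.arccos(ci_s)), np.degrees(np.arccos(ci_p)),
            lam, np.degrees(np.arccos(np.clip(cpsi, -1.0, 1.0))))
```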
The final set of values to sample relate to the emission. For fundamental cyclotron emission, the upper limit for the observing frequency is set by the maximum field strength we consider, which is 1 kG. The corresponding cyclotron frequency for this field strength is 2.8 GHz via Equation <ref>. For the lower limit of the observing frequency, we set this to 10 MHz, which is the lowest operating frequency of current-generation radio telescopes <cit.>. Again, it is not clear what the underlying distribution of emitted frequencies is, given that there has yet to be a conclusive detection of such emission. Furthermore, a sophisticated model for the evolution of the velocity distribution of the electrons powering the maser as they travel along the field line is required to accurately determine the frequencies at which the emission occurs over time. Lacking this information, we once again uniformly sample the observing frequency between 10 MHz to 2.8 GHz.
For the properties of the emission cone, we adopt a range of 70 to 80 for the opening angle based on the discussion in Section <ref>. Similarly for the cone thickness, we are limited to the Jupiter-Io interaction in terms of our knowledge of appropriate values. While observations suggest thicknesses of around 1, theoretical considerations suggest values of 10 to 20 based on the range of observed opening angles. To not overestimate the thickness more than necessary, we set the upper limit to 10, and the lower limit to 1. Again, the lack of observations and of a sophisticated model for the maser limits our ability to implement meaningful ranges and distributions for the cone properties in a stellar context. Our focus here however is to evaluate the geometric dependence of the duty cycle. As such, we employ uniform distributions for both values. Future work that better-establishes what are appropriate values and distributions for these quantities will allow for this to be re-assessed.
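The remaining draws follow directly from the ranges quoted in this section. The sketch below collects them in one place; the unit conversions and the function interface are our own, and the Roche-limit lower bound uses the CGS expression derived above.

```python
import numpy as np

MSUN_G, RSUN_CM = 1.989e33, 6.957e10   # solar mass (g) and radius (cm)
rng = np.random.default_rng(2)

def sample_remaining_parameters(n, m_star, r_star):
    """Uniform draws for the field, cone, frequency, and orbital parameters.
    m_star and r_star are arrays in solar units; the orbital distance a is
    returned in stellar radii, between the Roche limit and 100 R_star."""
    B_star = rng.uniform(100.0, 1000.0, n)      # dipole field strength (G)
    beta = rng.uniform(0.0, 180.0, n)           # magnetic obliquity (deg)
    P_rot = rng.uniform(0.1, 160.0, n)          # rotation period (days)
    nu = rng.uniform(10.0, 2800.0, n)           # observing frequency (MHz)
    alpha = rng.uniform(70.0, 80.0, n)          # cone opening angle (deg)
    dalpha = rng.uniform(1.0, 10.0, n)          # cone thickness (deg)
    a_min = 0.75 * (m_star * MSUN_G) ** (1.0 / 3.0) / (r_star * RSUN_CM)
    a = rng.uniform(a_min, 100.0)               # orbital distance (R_star)
    return B_star, beta, P_rot, nu, alpha, dalpha, a
```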
§.§ Temporal resolution of the lightcurve
An important aspect to consider here is the temporal resolution Δ t of the visibility lightcurve. Generally, systems with short orbital periods (small orbital distances) and narrow cone thicknesses are only visible for very short windows. If Δ t is too large, we can end up undersampling and missing a large fraction of the on phases of the signal. We can determine suitable values for Δ t however by considering the time it takes for the emission cone to sweep across the line of sight. We approximate this as the duration of time taken for the planet to increase in orbital phase by Δα (the cone thickness), which is P_p(Δα / 360). With the aim of resolving each on window with at least two points, we compare the duty cycle for a few hundred random samples for a signal duration of 1000 days, using time intervals of Δ t = P_p(Δα / 720) and Δ t = P_p(Δα / 36000). We find that the duty cycle obtained using the lower resolution varies by less than 4% compared to the high resolution calculation. We therefore determine that Δ t = P_p(Δα / 720) is a suitable resolution to dynamically set for each lightcurve such that the true duty cycle of the signal is recovered.
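The adopted resolution is then a one-line function of the orbital period and cone thickness; the function name below is ours.

```python
def lightcurve_time_step(P_p_days, dalpha_deg):
    """Dynamic lightcurve resolution: the time for the planet to advance in
    orbital phase by half the cone thickness, dt = P_p * dalpha / 720."""
    return P_p_days * dalpha_deg / 720.0
```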
§.§ What systems are easiest to detect?
With the assumed parameter space of planet-hosting M dwarfs laid out, we now perform a Monte Carlo simulation, sampling each parameter from their aforementioned distributions (Section <ref>). We choose 1 million values for each parameter. For each set of values, we first compute their visibility lightcurves, and then their duty cycle (the percentage of time the signal is visible for). The time duration of each lightcurve is 500 days. We find that a randomly sampled system has on average a duty cycle of 4%, and that 48% of all systems can produce emission that is ever visible. In other words, 52% of systems will never be observable, assuming static conditions for the large-scale magnetic field of the star and the emission cone. Of the 48% of systems visible, their average duty cycle is 8%. We also find that emission is as likely to be seen from the Northern magnetic hemisphere as the Southern magnetic hemisphere. This is unsurprising, as unless there is some special configuration of the system, the planet will spend as much time in the Northern hemisphere as the Southern hemisphere. In other words, there is no preferential polarisation for the radio emission, assuming both hemispheres emit via the same magnetoionic mode.
We then investigate each of the parameters to see which (if any) enhance the duty cycle, and if so, what values of the parameters do so. Due to both the high number of dimensions of the model and the random sampling, there is a large amount of scatter when plotting the duty cycle against each parameter for all of our samples. Each scatter plot is shown in Appendix <ref>. However, we see that there are certain values for the stellar inclination, magnetic obliquity, and orbital inclination which result in high duty cycles.
§.§.§ Stellar parameters
We first consider the stellar parameters that produce high duty cycles. In Figure <ref>, we plot the magnetic obliquity against the cosine of the stellar inclination for the systems where the duty cycle exceeds 20%. We see that the majority of the points all lie along curved lines, and the most visible systems are described by two distinct configurations. We refer to these as C1 and C2. In C1, the rotation axis forms the angle α, the cone opening angle, with the line of sight (i_⋆ = α or 180 - α), and the magnetic axis is either parallel or anti-parallel to the rotation axis (β = 0 or 180). In C2, the rotation axis is viewed pole on (i_⋆ = 0 or 180), and the magnetic obliquity is α or 180 - α.
We can understand the structure seen in Figure <ref>, as well as the high duty cycles of C1 and C2 by considering the angle χ that the magnetic axis ẑ_B makes with the line of sight x̂. Using Equations <ref> to <ref>, we can express χ as:
cosχ = ẑ_B·x̂ = cos i_⋆cosβ + sin i_⋆sinβcosϕ_⋆ .
The values for cosχ are always in the range cos(i_⋆±β). Overlaying different values of i_⋆±β onto Figure <ref>, we find that the vast majority of the systems follow the lines where i_⋆±β is either α or 180 - α. In other words, systems with high duty cycles are those where the magnetic axis can form the angle α with the line of sight.
Equation <ref> also explains the most visible systems described by C1 and C2. If sin i_⋆ or sinβ are zero, the angle χ no longer has any time dependence, and the magnetic axis is always inclined relative to the line of sight by the same angle. The key distinction between C1 and C2 is that in C1, the magnetic axis remains fixed in place from an observer's point of view. In C2 however, the magnetic axis precesses about the line of sight. 90% of all systems with duty cycles exceeding 40% are in C1, and 3% are in C2 (with a tolerance of ±10 for the inclination and obliquity), and the max duty cycles in C1 and C2 are 80 and 60% respectively. The lower number of systems in C2 is primarily due to adopting a uniform distribution for the stellar inclination axes (Section <ref>), which results in pole-on systems being much rarer than the near-equator on systems described by C1.
§.§.§ Planetary parameters
We now must also consider the orbit of the planet around the star when it is in C1 or C2 to understand the high duty cycles seen in Figure <ref>. In Figure <ref> we show normalised histograms of the number of systems with duty cycles exceeding 40% as a function of orbital inclination and projected spin-orbit angle. We see that most systems are near face-on (i_p≈ 0 or 180) and have projected spin-orbit angles of either ≈0 or 180. To further explore this, in Figure <ref> we plot the duty cycle of emission induced at 100 MHz as a function of the planet's orbital inclination and projected spin-orbit angle, from the Northern hemisphere of a star in C1 (i_⋆ = α = 75, β = 0) and C2 (β = α = 75, i_⋆ = 0). We compute each lightcurve for 100 orbits, with 100 time samples per orbit. For C1, we find that the maximum duty cycles correspond to planetary orbits that pass over the magnetic poles. For this to occur, the normal to the orbital plane ẑ_p must be perpendicular to the magnetic axis ẑ_B. In C1 (i_⋆ = α = 75, β = 0), this requires that:
ẑ_B·ẑ_p = cos i_pcosα + sin i_psinαcosλ = 0,
meaning that
tan i_pcosλ = -1/tanα.
The dashed line in the left panel of Figure <ref> shows the combined values of i_p and λ that describe orbits which pass over the magnetic poles, satisfying Equation <ref>. This line intersects with the regions where the duty cycle peaks, which occur at i_p≈ 15 and λ≈ 160 or 200, and i_p≈ 165 and λ≈ 20 or 340. Not all orbits described by the dashed line have high duty cycles however, which implies that further constraints exist which likely relate to the fraction of the orbit where the emission cones point along the line of sight. This is not trivial to show analytically with an exact treatment of the geometry. However, since the planet orbits over the magnetic poles, the field lines it interacts with are almost entirely radial for a significant part of its orbit. This means that the emission cone vector ĉ is parallel to the position vector of the planet x̂_p.
If we assume that the field lines are radial, the angle between the cone and line of sight in the Northern magnetic hemisphere is (Equation <ref>):
cosγ = x̂_p·x̂ = sin i_pcosϕ_p .
Near i_p = 15 and 165, γ varies sinusoidally, and its minimum, which occurs at conjunction (ϕ_p = 0), is close to α. If the minimum of γ is α - Δα/2, then the range of orbital phases where γ is within α±Δα/2 is maximised, resulting in the highest duty cycle possible. In other words, the duty cycle is maximised when
cos(α - Δα/2) = sin i_p ,
i.e. when i_p = 90 - α + Δα/2 or 90 + α - Δα/2. For α = 75, we have i_p = 17.5 and 162.5. There are two corresponding values of λ for each of these orbital inclinations that describe orbits which pass over the magnetic poles, which are obtained from Equation <ref>. For i_p = 17.5 we have λ∼ 148.2 and 211.8, and for i_p = 162.5 we have λ∼ 31.8 and 328.2. These four orbital configurations closely align with the regions where the duty cycle peaks seen in Figure <ref>, which are indicated by red circles.
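These four configurations can be recovered numerically from the two conditions above. The snippet below assumes a cone thickness of Δα = 5 (consistent with the quoted i_p = 17.5 and 162.5 for α = 75); it is a check of the algebra rather than output of the MASER code.

```python
import numpy as np

alpha, dalpha = 75.0, 5.0                    # cone opening angle and thickness (deg)
for i_p in (90.0 - alpha + 0.5 * dalpha, 90.0 + alpha - 0.5 * dalpha):
    # Polar orbits in C1 satisfy tan(i_p) cos(lambda) = -1 / tan(alpha)
    cos_lam = -1.0 / (np.tan(np.radians(alpha)) * np.tan(np.radians(i_p)))
    lam = np.degrees(np.arccos(cos_lam))
    print(f"i_p = {i_p:.1f} deg -> lambda = {lam:.1f} or {360.0 - lam:.1f} deg")
# i_p = 17.5 -> lambda = 148.2 or 211.8; i_p = 162.5 -> lambda = 31.8 or 328.2
```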
In C2, the magnetic axis cannot stay in the orbital plane due to its precession about the rotation axis of the star. That being said, the magnetic axis and orbital plane will become aligned twice per stellar rotation. For an example configuration of C2 (i_⋆ = 0, β = α = 75), the magnetic axis and orbit normal are perpendicular when
ẑ_B·ẑ_p = cos i_pcosα - sin i_psinαcos(ϕ_⋆+λ) = 0,
i.e. when
ϕ_⋆ = cos^-1[1/(tan i_p tanα)] - λ or 2π - cos^-1[1/(tan i_p tanα)] - λ .
So, while the rotation phases where ẑ_B and ẑ_p become perpendicular depend on the projected spin-orbit angle λ, the magnetic axis will always align with the orbital plane twice per stellar rotation irrespective of the value of λ, provided that tan i_p > 1 / tanα or tan i_p < - 1 / tanα. When ẑ_B and ẑ_p are perpendicular, Equation <ref> is then valid under the assumption that the field lines are radial. Following the same logic as for C1, the duty cycle is maximised when i_p = 17.5 or 162.5. This is what is seen in the right panel of Figure <ref>.
An interesting result from these configurations is that they produce emission that is predominantly either right (RCP) or left circularly polarised (LCP). In other words, their emission comes from either the Northern or Southern magnetic hemisphere, irrespective of if the star is in C1 or C2. In fact, when the duty cycle exceeds 40%, virtually all systems emit either RCP or LCP exclusively. This is because these two configurations require one magnetic pole to always face towards the observer. This feature has been identified in the dynamic spectra of radio bursts from a sample of M dwarfs by <cit.>, which is expected if the electrons powering the radio emission are accelerated in the large-scale magnetic field of the star as we model in this work.
We also note that we see a marginal bias towards detecting emission induced by closer in planets (see Figure <ref>). This is due to our assumption that the field lines become open when the planet interacts with a field line that connects near to the magnetic poles. If the planet orbits far from the star, virtually all the field lines it sees will be open, and as a result the visibility of the emission is only possible from a single magnetic hemisphere, marginally reducing the likelihood of seeing the emission.
§.§.§ Emission parameters
Aside from the geometrical parameters, the sampling also shows us that low-frequency emission from stars with strong magnetic fields is more favourable (Figure <ref>). This is unsurprising, as under the assumption of a uniform distribution of field strengths, lower frequencies are more likely than higher frequencies. Similarly, stars with stronger fields can produce a wider range of observable frequencies.
In terms of the cone properties, we see a marginal bias towards systems with cone opening angles closer to 90. For systems where the duty cycle exceeds 20%, those which have opening angles close to 80 are about 1.5 times more likely to be seen than those where the opening angle is around 70. This can be understood by considering the configurations which we identify in Sections <ref> and <ref> that correspond to high duty cycles, which rely on the planet passing over the magnetic poles of the star. If the cone opening angle is closer to 90, then the emission cones are at right angles to the magnetic field. In a face-on orbit, this means that the cones always point towards the observer, assuming the magnetic axis lies in the orbital plane. We also see that thicker emission cones produce more visible emission. This is expected since a thicker cone results in wider windows wherein the signal can be seen.
§ DETECTABILITY VIA OTHER METHODS AND PROSPECTS FOR TRANSITING EXOPLANETS
In the previous Section, we have identified two key configurations for the star and planetary orbit which result in planet-induced radio emission being visible for the majority of the time. That being said, these configurations describe planets in near face-on orbits, which are likely to be very difficult to detect via the radial velocity method, and also do not transit. These planets could theoretically be directly imaged if they orbit sufficiently far from their host star. However, the shortest orbital distance inferred to date for a directly-imaged planet is 3.53 au, for the massive exoplanet HD206893 c <cit.>. Normalising this distance by the stellar radius of the main sequence F-type host star of 1.25 R_⊙ <cit.>, this planet orbits at around 600 R_⋆. An Alfvén surface of this size would require an incredibly strong magnetic field strength at the stellar surface. This would be unprecedented for a main sequence F-type star like HD206893, which typically exhibit large-scale surface field strengths of just a few Gauss <cit.>.
Another method which could be more feasible for detecting planets in C1 or C2 with current-generation telescopes is the astrometry method, which uses the reflex motion of the star projected on to the plane of the sky to infer the presence of a companion. This method is expected to lead to an explosion in the number of detected non-transiting exoplanets with survey telescopes such as Gaia <cit.>. To date, the shortest orbital distance planet discovered to orbit a main-sequence star via astrometry is the 2.3 Jupiter mass planet GJ 896Ab (EQ Peg Ab), which orbits its M3.5 host star at 0.639 au <cit.>. Interestingly, this detection was made using the Very Long Baseline Array (VLBA) at 8.4 GHz. Again normalising by the radius of the host star of 0.25 R_⊙, the planet orbits at 550 stellar radii. While the host star is an M dwarf with an average large-scale surface magnetic field of around 500 G <cit.>, it is unlikely its Alfvén surface extends to this distance. Nevertheless, a companion in a system similar to GJ 896A could be discovered using the same method if it is closer and more massive, such as a brown dwarf. Such systems would be very suitable candidates for discovery in tandem via magnetic SPI, as explored in this work.
It is therefore useful to also estimate which of the systems we are able to detect with current techniques, i.e. transiting exoplanets, are most visible in the radio. For the planet to transit the stellar disk, its inclination must be in the range |cos i_p| < R_⋆ / a, neglecting the radius of the planet. Using the same uniform distributions for each parameter as described in Section <ref>, and limiting the values of cos i_p from -R_⋆ / a to R_⋆ / a, we re-run our Monte Carlo sampling of the visibility function.
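In practice this amounts to narrowing the prior on cos i_p; a minimal sketch of the restricted draw is shown below (names are ours).

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_transiting_cos_ip(n, a_over_rstar):
    """Draw cos(i_p) restricted to transiting geometries,
    |cos(i_p)| < R_star / a (planet radius neglected)."""
    limit = 1.0 / np.asarray(a_over_rstar)
    return rng.uniform(-limit, limit, size=n)
```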
For transiting systems, overall we find the same results as for systems where all orbital inclinations are considered, in which a randomly chosen system is visible for 4% of the time, and 49% of all systems are ever visible. Similarly, there is no preferential polarisation to the radio emission. This is unsurprising, given that the inclination of the rotation axis of the star and the magnetic obliquity, both of which remain unchanged in terms of sampling, are the dominant parameter as to whether radio emission is visible at all. It is not that transiting systems are not visible, but that planets in near face-on orbits result in maximum visibility (Figure <ref>). As such, the maximum duty cycle for transiting systems we find for our sample is 56%, compared to 80% for systems that cover the full 180 in orbital inclination. There are also no significant differences in terms of scatter plots of inclinations and magnetic obliquities that produce high duty cycles (Figure <ref>) or the duty cycle against each parameter (Figure <ref>) for transiting systems.
§ SUMMARY & CONCLUSIONS
We now summarise and discuss the main findings of the paper.
§.§ Current limitations to the model
The model developed here, while fast and flexible, is not without limitations. The first of these is the lack of any information about the plasma itself. Accounting for the plasma would allow the plasma frequency, which provides a lower limit to the frequency range of emission <cit.>, to be computed, as well as the radio power emitted via the interaction to be estimated <cit.>. It also allows for the calculation of the position of the Alfvén surface. Knowing this places a further constraint on the regions around the star where magnetic SPI can occur, without needing an arbitrary constraint like the maximum size of the field line as we have currently implemented. Knowledge of the plasma environment would also allow for absorption, reflection, and refraction processes to be incorporated into the radiative transfer. Accounting for these effects could significantly alter the visibility of the ECM emission. The plasma information also allows for the radio flux densities to be estimated, which in turn provides temporal modulation to the signal when visible. With this we can also compute the size of the planet's magnetosphere, which in turn influences the induced flux densities <cit.>.
Another aspect lacking from both the model presented here as well as that developed by <cit.> is the velocity distribution of electrons along the field line that is producing radio emission. Knowing this would allow us to self-consistently calculate many desirable quantities such as the frequency range, cone properties (which we assume to be constant in this work), and emission duration. However, for this one would likely have to couple an MHD simulation to a particle-in-cell type simulation, and account for how Alfvén waves are generated and propagate in a dynamical environment such as a stellar wind. Such a task is well beyond the scope of this paper, as it is for MHD simulations such as those presented by <cit.> and <cit.>. However, it is still worth mentioning with future work in mind. Accounting for the evolution of the electron velocity distribution on the emitting field line would also allow us to compute properties such as the delay time between the interaction and the appearance of the emission, as well as trailing features such as those seen in Io's footpoint on Jupiter in the UV <cit.>.
MHD models also provide information about the plasma inertia, which results in a toroidal component to the large-scale magnetic field that trails behind the direction of rotation in a Parker spiral-like configuration. Similarly, the closed field lines will be stretched outward radially, opening at some distance close to the Alfvén surface. Depending on the conditions, these deviations from purely dipolar field lines could alter the visibility of the emission. The same argument can be made against using a purely dipolar magnetic field as we have done in this work. If the induced emission is generated sufficiently close to the stellar surface, higher order modes of the magnetic field such as the quadrupole and octupole will become more significant. As a result, the total magnetic field vector may deviate from the dipolar component significantly. However, knowledge of the strength of each magnetic mode is only generally obtainable via the ZDI method <cit.>. In future, it could be useful to also parameterise over the higher order magnetic modes in this context, using for example the potential field source surface method <cit.>. This however will be more numerically taxing than the assumption of a purely dipolar field.
In the future, it will also be necessary to develop numerical models for predicting the visibility of radio emission from magnetised low-mass stars that do not invoke the presence of a planet <cit.>. Such models should also account for the aforementioned aspects mentioned in this Section such as propagation effects. Since the same underlying geometric calculations presented in this work are relevant in that regard, the MASER code can be adapted for these scenarios. Then in the case that emission is detected from a system, these models can ideally be utilised to uniquely identify the underlying generation mechanism <cit.>.
§.§ Comparison to the ExPRES code
It is worth noting that there are similarities between the code developed here and the ExPRES code developed by <cit.> <cit.>. The key distinction is that ExPRES requires the pre-computed magnetic field geometry of the system as an input, whereas we compute the geometry of the field line the planet interacts with for an arbitrary set of system parameters on the fly. ExPRES also takes the plasma and energy of the electrons as inputs, which are used to prescribe the underlying electron cyclotron maser conditions. However, these conditions are highly uncertain in a stellar context, and likely require both MHD and particle-in-cell simulations to determine. Our code however does not explicitly assume that the prescriptions which appear to work well for the auroral emission on Jupiter and Saturn apply. ExPRES is also written in IDL, which is not open source. It is also unclear if it can be easily deployed for parametric studies, as we demonstrate in this work with the MASER code.
§.§ Concluding remarks
In this work, we have developed a freely-available tool to assess and predict signatures of magnetic star-planet interactions in the radio regime. It is based on a key set of physical and geometrical parameters, which are generally known in part for exoplanetary systems. For systems with unknown parameters (e.g. the orbital distance of the planet), the model can be utilised in parametric studies to compare to observations that are indicative of such interactions. It is also fast and computationally inexpensive, has few dependencies, and captures most of the key processes of the model presented in <cit.>, without the need for MHD simulations or magnetic field maps.
We first illustrated its ability to explain the phenomenon of radio emission appearing at the quadrature points of a satellite's orbit, which correspond to orbital phases of 0.25 and 0.75. However, this is in fact only possible in the case that the rotation, magnetic, and orbital axes are all aligned and lie in the plane of the sky. This is not the case in general for exoplanetary systems. Therefore, scheduling radio observations to coincide with the quadrature points of a known planetary orbit can result in the majority of the induced emission being missed.
We then utilised the model in a Monte Carlo simulation to assess which (if any) of the model parameters reflect exoplanetary systems we are biased towards detecting. Sampling the parameter space with 1 million values, we find that there are two distinct configurations where emission can be seen for up to 80% of the time. This is significantly higher than the ∼9% average duty cycle of systems that are ever visible. These two configurations rely on the inclination of the magnetic axis relative to the line of sight being fixed at an angle equal to that of the opening angle of the emission cone. Such configurations are possible if the magnetic and rotation axes are aligned (C1), or if we see the star pole-on with an obliquity close to 90 (C2).
For C1, many M dwarfs exhibit strong axisymmetric dipolar magnetic fields at their surfaces <cit.>, and as a result they could be well-suited for detection of radio emission induced by planets in face-on orbits. For C2 however, it is not clear whether any M dwarfs that have had their surface fields mapped with ZDI exhibit obliquities close to 90. Some early and late M dwarfs do exhibit significant non-axisymmetric components, which in theory includes topologies with large obliquities. However, specific information relating to the dipolar component of the recovered magnetic field is often limited in the literature. Another interesting point is that if the magnetic field of the star evolves such that the dipole axis moves in to one of these configurations, emission may become more visible compared to other stages of the magnetic cycle. AD Leo is an M dwarf that has exhibited hints of activity cycles <cit.>; however, we have yet to see any evidence for a significant change to the dipole tilt.
In terms of the planet's orbital characteristics, we find that the most visible systems are those where the planet orbits over the magnetic poles. Combining this with the configurations C1 and C2 described above, these planets are in near face-on configurations. This is quite interesting, as such a population of exoplanets remains largely undiscovered via traditional methods, due to both their low radial velocity signatures and non-transiting nature. This could explain why none of the stars detected at radio wavelengths by <cit.> are known to host any close-in planets. If that is the case, the astrometry method may prove to be very complementary for confirming their presence (see Section <ref>). We note that transiting exoplanets are still likely to be detectable when the star is in C1 or C2, but are less likely to be seen in blind radio surveys compared to planets in near face-on orbits. We also note that these results are based on our assumption of non-informative priors for the underlying system geometry. Further understanding of their true underlying distributions could alter these results.
Although the code developed here has been primarily discussed in a star-planet context, it can be easily adapted to any magnetised host-satellite system by simply changing the units of the input parameters (e.g. Section <ref>). In that sense, it may also be useful in future for interpreting radio emission from brown dwarfs and exoplanets. It also could be easily applied in the area of enhanced chromospheric/coronal emission from stars due to magnetic SPI <cit.>.
§ ACKNOWLEDGEMENTS
We thank the anonymous reviewer for their helpful comments and suggestions. We acknowledge funding from the Dutch Research Council (NWO) for the e-MAPS (exploring magnetism on the planetary scale) project (project number VI.Vidi.203.093) under the NWO talent scheme Vidi. RDK also acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 817540, ASTROFLOW). We would also like to thank Benjamin Pope, Aline Vidotto, and Joan Bautista Climent for their insightful comments and suggestions on the manuscript.
§ DATA AVAILABILITY
All data presented in this work was generated using the MASER Python code we developed, which is freely available on GitHub (see start of Section <ref>). We kindly request that future publications utilising this code acknowledge this work.
§ VECTORS FOR THE STELLAR ROTATION AND MAGNETIC AXES
In this work, we relate all vectors describing the exoplanetary system to the line of sight vector x̂ = (1, 0, 0), the projection of the stellar rotation axis ẑ_⋆ onto the plane of the sky ẑ = (0, 0, 1), and the vector perpendicular to ẑ in the plane of the sky ŷ = ẑ×x̂ = (0, 1, 0). The rotation axis is inclined relative to the line of sight x̂ by the angle i_⋆:
ẑ_⋆ = cos i_⋆x̂ + sin i_⋆ẑ .
The magnetic axis ẑ_B is tilted relative to the rotation axis by the angle β, and the projection of ẑ_B on to the star's equatorial plane is x̂_⋆. The rotation phase of the star ϕ_⋆ is measured between x̂_⋆ and the vector n̂_⋆, which is the projection of x̂ onto the equatorial plane:
n̂_⋆ = sin i_⋆x̂ - cos i_⋆ẑ .
The rotation phase ϕ_⋆ = 0 when x̂_⋆ = n̂_⋆. The vector x̂_⋆ is therefore
x̂_⋆ = cosϕ_⋆n̂_⋆ + sinϕ_⋆ŷ ,
and the magnetic axis is
ẑ_B = sinβx̂_⋆ + cosβẑ_⋆ .
Figure <ref> shows a sketch of the vectors described here.
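For reference, these relations translate directly into code; the short sketch below constructs the rotation and magnetic axis vectors in the line-of-sight frame (angles in radians, function name ours, not the MASER implementation).

```python
import numpy as np

def stellar_axes(i_star, beta, phi_star):
    """Unit vectors of the rotation axis z_star and the magnetic axis z_B in
    the frame x = (1,0,0), y = (0,1,0), z = (0,0,1), following the equations
    of this appendix."""
    x, y, z = np.eye(3)
    z_star = np.cos(i_star) * x + np.sin(i_star) * z
    n_star = np.sin(i_star) * x - np.cos(i_star) * z
    x_star = np.cos(phi_star) * n_star + np.sin(phi_star) * y
    z_B = np.sin(beta) * x_star + np.cos(beta) * z_star
    return z_star, z_B
```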
§ VECTORS FOR THE PLANET POSITION AND SPIN-ORBIT MISALIGNMENT
A planet orbits the star. The normal to its orbital plane is ẑ_p, which is inclined relative to the line of sight by the angle i_p. The projection of ẑ_p on to the plane of the sky is ẑ':
ẑ_p = cos i_px̂ + sin i_pẑ' .
In general, ẑ' is not aligned with ẑ, the projection of the stellar rotation axis on to the plane of the sky, and the angle measured from ẑ' to ẑ is λ. This is known as the projected spin-orbit angle. Similarly, the angle from ŷ to the vector perpendicular to ẑ' in the plane of the sky ŷ' = ẑ'×x̂ is also λ. ŷ' and ẑ' can be expressed as
ŷ' = cosλŷ - sinλẑ ,
ẑ' = sinλŷ + cosλẑ .
The true spin-orbit angle ψ is the angle between the rotation and orbital axes, which is:
cosψ = ẑ_⋆·ẑ_p = cos i_⋆cos i_p + sin i_⋆sin i_pcosλ .
The orbital phase of the planet is measured between the position of the planet x̂_p and the projection of the line of sight on to the orbital plane, n̂_p, which is
n̂_p = sin i_px̂ - cos i_pẑ' .
Therefore, the position of the planet is given by:
x̂_p = cosϕ_pn̂_p + sinϕ_pŷ' .
A sketch of the vectors described here is shown in Figure <ref>.
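The corresponding construction of the planet's position vector is sketched below (angles in radians, function name ours).

```python
import numpy as np

def planet_position(i_p, lam, phi_p):
    """Unit vector towards the planet for orbital inclination i_p, projected
    spin-orbit angle lam, and orbital phase angle phi_p, following the
    equations of this appendix."""
    x, y, z = np.eye(3)
    y_prime = np.cos(lam) * y - np.sin(lam) * z
    z_prime = np.sin(lam) * y + np.cos(lam) * z
    n_p = np.sin(i_p) * x - np.cos(i_p) * z_prime
    return np.cos(phi_p) * n_p + np.sin(phi_p) * y_prime
```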
§ FINDING THE ROOT OF EQUATION <REF>
To find the root of Equation <ref>, we use Newton's method, which utilises the derivative of the function to be solved. The derivative of Equation <ref> with respect to r is
F' = 6/R_⋆(B_ν/B_⋆)^2 (r/R_⋆)^5 + 3/4L .
From an initial value of r = r_i, we linearly extrapolate the tangent line from the point (r_i, F(r_i)) to the point where F = 0. The value of r where this line crosses F = 0 is r_i+1, which can be expressed as
r_i+1 = r_i - F(r_i)/F'(r_i) .
We iterate this process until |B_ν - B(r_i)| / B_ν is less than 1%. Initialising the value of r_i = R_⋆, this typically takes 10 to 50 iterations depending on the values of the coefficients of Equation <ref>. At that point, we take the value of r_ν = r_i, and then compute θ_ν via Equation <ref>.
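A minimal implementation of this iteration is given below. It follows the procedure described here, with one small safeguard that is our own addition: the iterate is capped at r = L, the apex of the field line, so that the square root in the field-strength expression stays real if an early step overshoots.

```python
import numpy as np

def solve_r_nu(B_nu, B_star, R_star, L, tol=0.01, max_iter=100):
    """Newton iteration for the radius r_nu at which the dipolar field line of
    size L has field strength B_nu, starting from r = R_star and stopping
    when |B_nu - B(r)| / B_nu < tol."""
    r = R_star
    for _ in range(max_iter):
        F = (B_nu / B_star) ** 2 * (r / R_star) ** 6 + 3.0 * r / (4.0 * L) - 1.0
        Fp = (6.0 / R_star) * (B_nu / B_star) ** 2 * (r / R_star) ** 5 + 3.0 / (4.0 * L)
        r = min(r - F / Fp, L)                   # stay on the physical branch
        B = B_star * (R_star / r) ** 3 * np.sqrt(1.0 - 3.0 * r / (4.0 * L))
        if abs(B_nu - B) / B_nu < tol:
            break
    return r
```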
§ THE MAGNETIC FIELD VECTOR ALONG THE FIELD LINE
At each point on the magnetic field line, the field vector can be decomposed into a radial and meridional (polar) component:
B⃗ = B_r r̂ + B_θθ̂ .
Here, B_r and B_θ are the radial and meridional components, which at the point (r, θ) are <cit.>:
B_r = B_⋆(R_⋆/r)^3cosθ,
B_θ = B_⋆/2(R_⋆/r)^3sinθ .
The radial and meridional unit vectors r̂ and θ̂ can be expressed in terms of x̂_B and ẑ_B, which define the plane that the magnetic field line lies in (see Figures <ref> and <ref>):
r̂ = sinθx̂_B + cosθẑ_B,
θ̂ = cosθx̂_B - sinθẑ_B .
In the Northern magnetic hemisphere the emission cones point along ĉ, which is aligned with B⃗, i.e. ĉ = B⃗ / B. In the Southern magnetic hemisphere however, the field lines point towards the stellar surface, but the emission cones are still oriented away from the surface. Therefore, in the Southern hemisphere, ĉ = - B⃗ / B.
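Putting these pieces together, the cone axis at an emitting point can be computed as sketched below; x_B and z_B are the unit vectors defined above, angles are in radians, and the function name is ours.

```python
import numpy as np

def cone_axis(B_star, R_star, r, theta, x_B, z_B):
    """Emission cone axis at the point (r, theta) on the field line: parallel
    to B for theta < pi/2 (Northern magnetic hemisphere), anti-parallel
    otherwise, so that the cone always points away from the stellar surface."""
    scale = B_star * (R_star / r) ** 3
    B_r, B_t = scale * np.cos(theta), 0.5 * scale * np.sin(theta)
    r_hat = np.sin(theta) * x_B + np.cos(theta) * z_B
    t_hat = np.cos(theta) * x_B - np.sin(theta) * z_B
    B_vec = B_r * r_hat + B_t * t_hat
    c_hat = B_vec / np.linalg.norm(B_vec)
    return c_hat if theta < np.pi / 2 else -c_hat
```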
§ IO-INDUCED EMISSION FROM JUPITER AT QUADRATURE
The Jupiter-Io interaction is a good example of the aligned scenario discussed in Section <ref>. <cit.> analysed 26 years of 10 – 40 MHz radio data from Jupiter, identifying the components in its dynamic spectra that are due to the sub-Alfvénic interaction with Io. Naturally, this is a great dataset to benchmark our model against, replacing the star with Jupiter and the planet with Io. The relevant properties of Jupiter and Io are listed in Table <ref>.
In Figure <ref> we compare the results of <cit.> to the PD of the lightcurve from a system described by the values listed in Table <ref>. For comparison we also show the PD of the lightcurve for the same system, but with the angles describing an aligned system (e.g. Table <ref>). We compute both lightcurves for 500 orbits of Io, with 1000 time samples per orbit. We fix the initial rotation and orbital phases at zero. For the emission cone, we set the opening angle to 75 and thickness to 1 (Section <ref>). For the observing frequency, we choose a value of 10 MHz.
We see a broadening of the probability density when the values deviate slightly from an aligned configuration. However, there are still two peaks centered about the orbital phases for the aligned case. The actual Jupiter-Io values reproduce a probability density that accurately resembles the long-term results of <cit.>. Note that there is a slight discrepancy, in that the results of <cit.> show that some emission occurs earlier on in Io's orbit, left of the two peaks. They attribute this to the fact that Jupiter rotates faster than Io orbits. As the Alfvén waves have a finite velocity, by the time they have travelled to near the surface and accelerated the electrons that power the radio emission, the field line has rotated past Io. We do not account for such a phenomenon in our model however.
§ SIGNAL VISIBILITY AS A FUNCTION OF EACH MODEL PARAMETER
In Figure <ref>, we show the scatter plot of the duty cycle of each system in the Monte Carlo simulation performed in Section <ref> against each of the model parameters.
|
http://arxiv.org/abs/2307.01863v1
|
20230704182154
|
Comparing Globular Cluster System Properties with Host Galaxy Environment
|
[
"Kate Hartman",
"William E. Harris",
"John P. Blakeslee",
"Chung-Pei Ma",
"Jenny E. Greene"
] |
astro-ph.GA
|
[
"astro-ph.GA"
] |
Department of Physics & Astronomy, McMaster University, 1280 Main St W, Hamilton, ON L8S 1T7, Canada
Department of Physics & Astronomy, McMaster University, 1280 Main St W, Hamilton, ON L8S 1T7, Canada
NOIRLab, 950 N. Cherry Ave., Tucson, AZ 85719, USA
Department of Astronomy and Department of Physics, University of California, Berkeley, CA 94720, USA
Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544, USA
We present Hubble Space Telescope photometry in optical (F475X) and near-infrared (F110W) bands of the globular cluster (GC) systems of the inner halos of a sample of 15 massive elliptical galaxies. The targets are selected from the volume-limited MASSIVE survey, and chosen to sample a range of environments from sparsely populated groups to BCGs in dense clusters. We also present a quantitative model of the relation between (F475X - F110W) colour and cluster metallicity [M/H], using simulated GCs. Because much of the GC population in such galaxies is built up through accretion, the metallicity distribution of the GC systems might be expected to vary with galaxy environment. The photometry is used to create a completeness-corrected metallicity distribution for each galaxy in the sample, and to fit a double Gaussian curve to each histogram in order to model the two standard red and blue subpopulations. Finally, the properties of the GC metallicity distribution are correlated against galaxy environment. We find that almost no GCS properties and host galaxy environmental properties are correlated, with the exception of a weak but consistent correlation between blue fraction and nth-nearest neighbour surface density. The results suggest that the systemic properties of the GCS, at least in the inner to mid-halo regions, are influenced more strongly by the local environment at early times, rather than by the environmental properties we see today.
§ INTRODUCTION
Globular clusters (GCs) are old, massive, dense, gravitationally bound systems of stars. They are nearly ubiquitous in galaxies, found in all but the least massive of dwarfs <cit.>. As some of the earliest stellar structures to form within their host galaxies, they are powerful probes of the early history of hierarchical growth and chemical enrichment.
The most populous globular cluster systems (GCSs) belong to massive elliptical (early-type or ETG) galaxies. Several GCS properties scale with those of the host galaxy, perhaps most notably total GCS mass and galaxy halo mass <cit.>. Massive galaxies typically have distinguishable subpopulations of blue, metal-poor GCs and red, metal-rich GCs <cit.>—although this is not universally the case, with occasional examples of unimodal or multimodal populations known <cit.>.
Simulations such as in <cit.> and <cit.> support a hierarchical merger model of galaxy growth in which massive elliptical galaxies grow to their present sizes by merging with smaller satellite galaxies, and sometimes other large galaxies; other simulation and model work such as E-MOSAICS <cit.>, <cit.>, <cit.>, and <cit.> have been able to reproduce GC scaling relations in a hierarchical merger framework. This process leads to a wide range of evolutionary histories, which differ depending on a galaxy's mass and location in its environment. In rich galaxy clusters, the most massive and luminous elliptical galaxy will settle near the center, and these brightest cluster galaxies (BCGs) draw in satellite galaxies from their surroundings, growing their own stellar and halo masses and adding the satellites' GCs to their own GCSs (cf. the references cited above). Meanwhile, galaxies in low-density areas have access to less accretable material than a BCG in a rich galaxy cluster. It is natural to ask whether or not the present-day GCS properties of BCGs depend on environment.
<cit.> addressed this problem in a coarse way (see their Figure 4) by comparing full merger trees (including accreted satellites) to the main progenitor branch only (excluding accretion), finding that only the full set of accretions matched the observations then available (they used observations of the Virgo Cluster from <cit.>). Observers are beginning to tackle this problem as well. Studies such as <cit.>, <cit.>, <cit.>, and <cit.> specifically targeted isolated elliptical galaxies rather than BCGs in rich clusters, although the subject galaxies in those studies are less massive and less luminous than those in our sample. <cit.> focused on environment as a driver of differences between GCSs, although that work examined GCS structure rather than GCS metallicity.
In this work, we explore the unaddressed parameter space of GCS metallicity versus host galaxy environment. Section <ref> presents our data and our photometry techniques, Section <ref> outlines our procedure to account for incompleteness, and Section <ref> explains our conversions from GC color to metallicity. In Section <ref>, we characterize the shape of each galaxy's GCS metallicity distribution function, and Section <ref> compares GCS parameters to galaxy environment metrics. We discuss our findings and summarize our work in Section <ref>.
§ DATA AND PHOTOMETRY
The strong scaling relations between GCS properties and host galaxy stellar and halo mass put constraints on how the sample of galaxies for this work could be constructed. In an unrestricted sample of galaxies, the relations involving mass drown out more subtle second-order signals from relations such as those involving environment <cit.>. Our sample had to be constructed to minimize differences in stellar and halo mass, and therefore to minimize differences in GCS properties related to galaxy mass.
Our sample comprises fifteen galaxies from the MASSIVE survey sample <cit.>. MASSIVE targeted the most luminous galaxies within ∼ 100 Mpc
and with stellar masses M_* > 10^11.5 M_⊙. Our selected targets all lie within a very narrow stellar mass range, thus effectively controlling for that variable.
§.§ Images
This work makes use of HST archival data from previous work targeting the MASSIVE galaxies <cit.>, along with more recent images from GO proposal 15265. The images were taken with HST's WFC3 instrument using the F475X (475 nm) and F110W (1.1 μm) filters. The total exposure time for each image was approximately one orbit. See Table <ref> for observing information and Figure <ref> for the F110W images. The data are available at MAST: [10.17909/pvve-1002]10.17909/pvve-1002.
§.§ Environmental data: galaxies
The MASSIVE survey selection criteria included a K-band luminosity cutoff of M_K < -25.3, which corresponded to a stellar mass cutoff of M_* ≳ 10^11.5M_⊙ <cit.>. MASSIVE galaxies also have relatively high Galactic latitudes and low foreground reddenings, so the observed fields have few contaminating field stars. Galaxy group and cluster memberships are given in Table <ref>. Distances listed in the Table assume H_0 = 70 km s^-1 Mpc^-1 and mean CMB frame radial velocities from <cit.>, <cit.>, and <cit.>. It should be noted that these distances are systematically slightly greater than the more recent surface brightness fluctuation distances of <cit.> <cit.>, but only by ∼ 4 Mpc in most cases. These slight offsets do not affect any of the later conclusions in this study.
The K-band magnitude and stellar mass data from <cit.> and <cit.>, also seen in Table <ref>, allowed us to test how well we had controlled for stellar mass in our galaxy sample and to ensure that GCS-galaxy mass relations would not overpower any GCS-environment relations.
§.§ Environmental data: groups
The galaxy environmental data used in this work were derived from the group catalogues of <cit.>. Crook and collaborators created two catalogues of galaxy groups: a low density contrast (LDC) catalogue with less stringent group inclusion criteria, and a high density contrast (HDC) catalogue with more stringent criteria. Thirteen of the fifteen galaxies in our sample appear in both the HDC and LDC, while the remaining two, NGC 57 and NGC 4914, are in very sparse groups and appear only in the LDC. This work used HDC data when available, and LDC data only for NGC 57 and NGC 4914. It should be noted that NGC 57's group virial radius from <cit.> is quite large compared to those for the rest of our sample; because of this and the more relaxed LDC inclusion criteria, we should be cautious when working with group data for NGC 57 and NGC 4914.
To characterize the density of each host galaxy's environment more directly, we used the coordinates provided by <cit.> to make two nth-nearest-neighbor measurements in projection, defined as in <cit.>:
Σ_n = n / (π D_p,n^2)
where D_p,n is the projected distance to the nth galaxy from the galaxy of interest. We made a calculation for part of the galaxy sample, excluding NGC 57 and NGC 4914, with n=5, a standard compromise value of n that allows for both measurements of small galaxy groups and avoidance of ultra-small number statistics, and then a measurement for the whole sample with n=2, motivated by the NGC 4914 group, the smallest in the sample with three galaxies including NGC 4914 itself.
Figure <ref> compares all Crook-derived metrics, including nth-nearest-neighbor surface density for n=2. Group virial mass M scales as σ^2 R, the product of the squared group velocity dispersion and the group radius, so positive correlations between the number of group members, group virial mass, and group velocity dispersion are expected given that the central BCGs in this sample all have similar stellar masses. Spearman correlation coefficients are given in the Figure for all significant relations.
§.§ Photometry
The galaxies in our sample are distant enough that their GCs appear as nearly unresolved point sources <cit.>. At a distance of d = 80 Mpc, the 6-pc half-light diameter of a typical GC is equivalent to an angular width of 0.015”, well below the 0.1” optical resolution of HST. With that in mind, we used DOLPHOT <cit.>, a program optimized for stellar photometry with HST data, to measure our images. DOLPHOT identified and measured the integrated light from each GC as it would for a star.
In addition to magnitudes on the VEGAMAG scale, DOLPHOT includes in its output several measurement quality flags and metrics for each detected object, including an object type flag (type 1 objects are point sources; other type numbers denote extended sources, blended sources, cosmic ray strikes, and objects too faint to measure), signal to noise ratio (SNR), chi (for measuring goodness of fit to the standard point spread functions), and sharpness (a measure of object width relative to standard PSF width). After running DOLPHOT, we cleaned the list of detected objects by retaining only those that met the following criteria:
* DOLPHOT object type 1
* Magnitude < 90 in both filters, to reject objects too faint to be measured
* SNR ≥ 4, to ensure high-quality magnitude measurements
* Chi ≤ 1.5 in both filters, to ensure high-quality magnitude measurements (see Figure <ref> for an example)
* -0.15 < F475X sharpness < 0.08, to capture point sources (see Figure <ref> for an example)
* |F110W sharpness| < 2.3(0.3608 - 0.0363(F110W) + 0.000938(F110W)^2), to capture point sources with room for scatter at the faint end
The sharpness and chi criteria are especially effective at removing nonstellar objects (small, faint background galaxies; camera artifacts; bad pixels; etc.) from the sample. The remaining culled lists are completely dominated by the rich GC systems around these giant galaxies. The F475X sharpness criterion is skewed slightly toward negative values (i.e. objects broader than the PSF) because WFC3 with the F475X filter has a higher resolution (0.04” per pixel) than its IR channel used with F110W (0.1” per pixel), so some GCs are expected to be just marginally resolved in F475X <cit.>. With regard to the last two criteria, we found the difference between the linear sharpness cut used for F475X and the parabolic cut used for F110W to be negligible. Both choices were motivated by the actual distribution of sharpness measurements.
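The cuts listed above can be applied directly to a parsed DOLPHOT catalogue. The sketch below assumes the output has been read into a pandas table; the column names (obj_type, mag_*, snr_*, chi_*, sharp_*) are placeholders that will depend on how the raw DOLPHOT output is parsed.

import pandas as pd

def cull_dolphot_catalog(cat: pd.DataFrame) -> pd.DataFrame:
    """Apply the point-source selection cuts listed above to a parsed DOLPHOT
    catalogue. The column names are placeholders for however the raw DOLPHOT
    output has been read in."""
    m110 = cat["mag_f110w"]
    keep = (
        (cat["obj_type"] == 1)
        & (cat["mag_f475x"] < 90) & (cat["mag_f110w"] < 90)
        & (cat["snr_f475x"] >= 4) & (cat["snr_f110w"] >= 4)
        & (cat["chi_f475x"] <= 1.5) & (cat["chi_f110w"] <= 1.5)
        & (cat["sharp_f475x"] > -0.15) & (cat["sharp_f475x"] < 0.08)
        & (cat["sharp_f110w"].abs()
           < 2.3 * (0.3608 - 0.0363 * m110 + 0.000938 * m110 ** 2))
    )
    return cat[keep]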
Close preliminary inspection of the photometry after all the culling steps listed above revealed an anomaly for one of the fields, NGC 741. The SNR values indicated a significant number of bad measurements in F475X for this galaxy. Upon further inspection, we found that one of the two CCD chips used with F475X had produced unrealistically faint magnitudes (see Figure <ref>) and thus extremely red colors on one side of the galaxy. In our analysis, we have therefore used only objects from the other, apparently unaffected chip that had produced F475X measurements in line with those from the rest of our galaxy sample. For this galaxy, the GC sample size is therefore roughly a factor of two smaller than originally intended.
§ COMPLETENESS AND CONSISTENCY CORRECTIONS
Our images posed a challenge when it came to characterizing the detection completeness of our GC samples: the background light from the host galaxy has a strong radial gradient from galaxy center outward (particularly in F110W), so the resulting sky noise therefore obscures more GCs toward the center of each image than at the outskirts. In order to accurately count GCs in all areas of our images, we used the DOLPHOT artificial-star test function to perform an extensive completeness study on three of the fields. We created twelve radial zones centered on each galaxy—circular zones for NGC 1016, and elliptical zones of constant eccentricity for NGC 57 and NGC 777 as seen in the top of Figure <ref>. We added 3500 artificial stars to each zone, and determined the artificial star recovery fraction for each zone as a function of magnitude. We then modeled the recovery fraction in each zone as a sigmoid curve <cit.>:
f_comp = 1 / (1 + e^α(m - m_0))
Eq. <ref> accounts for the slope α of the transition region where objects quickly become too faint to be detected and the magnitude m_0 at which half of the existing objects are detected, and produces a completeness fraction f_comp, which can be interpreted as a probability: how likely are we to detect an object of a given magnitude? Because of the gradient of sky background level, m_0 is itself a function of location on the image, changing smoothly with galactocentric radius (see the bottom of Figure <ref>, comparing 50% completeness levels in the innermost and outermost zones around NGC 777). After determining a constant α for both filters, we found m_0 for each of the twelve zones and compared it to background brightness. The relation is well fitted with an exponential curve (see Figure <ref> for the results from NGC 777). We found that all three of our test galaxies produced similar relations in both filters (Figure <ref>), as expected since the filters and exposure times were essentially identical for all the fields, so we adopted the average exponential parameters for m_0 as functions of local sky brightness β for our entire sample of galaxies:
m_0,F475X = -0.05497β_ F475X^0.6024 + 30.17
m_0,F110W = -0.003423β_ F110W^0.6570 + 27.37
This sky brightness-dependent completeness relation allowed us to more accurately characterize completeness for GCs throughout our images, regardless of how close they were to the centers of their host galaxies. In order to ensure that our samples of GCs would accurately represent each GCS, we calculated f_comp for each object from its magnitude and local sky brightness, and removed all objects from the dataset with a completeness fraction less than 0.5 in either filter.
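As an illustration, the sky-dependent completeness model can be evaluated per object as in the sketch below. The transition slope alpha is the band-dependent value fitted from the artificial-star tests and is passed in explicitly, since its numerical value is not quoted here; the function and argument names are placeholders.

import numpy as np

def completeness_fraction(mag, sky, alpha, band="F475X"):
    """Sigmoid completeness f_comp = 1 / (1 + exp(alpha * (m - m0))), with the
    50% limit m0 tied to the local sky brightness via the fitted relations
    above. alpha is the band-dependent transition slope from the
    artificial-star tests (its value is not quoted here)."""
    sky = np.asarray(sky, dtype=float)
    if band == "F475X":
        m0 = -0.05497 * sky ** 0.6024 + 30.17
    else:  # "F110W"
        m0 = -0.003423 * sky ** 0.6570 + 27.37
    return 1.0 / (1.0 + np.exp(alpha * (np.asarray(mag, dtype=float) - m0)))

# Keep only objects with f_comp >= 0.5 in both filters, for example:
# ok = (completeness_fraction(m475, sky475, a475, "F475X") >= 0.5) & \
#      (completeness_fraction(m110, sky110, a110, "F110W") >= 0.5)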
Finally, to ensure that comparisons between the galaxies would be made based on identical ranges in both GC luminosity and radial region of the halo, we made an absolute magnitude cut based on the faintest object in NGC 4839 (the most distant galaxy in the sample), and a radial distance cut at ∼20 kpc based on NGC 1600 (the nearest galaxy in the sample). These cuts ensured that we were sampling approximately the same portion of each GCS.
The results of our data cleanup procedure can be seen in Figure <ref>. All objects detected by DOLPHOT are shown as small pink points, the 50% completeness limits as determined by our artificial star tests are denoted by the dashed lines, and the effective completeness limits imposed by the final absolute-magnitude and radial cuts are denoted by the dotted lines. All objects shown as bigger red points are included in the final datasets for each galaxy that are used to define the color and metallicity distributions for its GCS.
§ COLOR-METALLICITY CONVERSION
Although integrated GC color is an observationally efficient proxy for metallicity, the relationship between color and metallicity is monotonic but nonlinear to some degree for optical/NIR colors <cit.>, though <cit.> models the relation for (g-z) linearly. To draw conclusions involving GC or GCS composition, it is necessary to adopt a conversion of the GCS color distribution functions (CDFs) to metallicity distribution functions (MDFs).
Because spectroscopic metallicity measurements of extragalactic GCs require major observing campaigns <cit.>, most color indices have not been calibrated this way, including the index used in this work, (F475X - F110W). Instead of converting directly from color to metallicity, we used the methodology described in <cit.>, performing a two-step process using a different, spectroscopically calibrated HST color index along with simulated GCs built with SSP (single stellar population) models. The stellar models adopted here are the widely used Osservatorio Astronomico di Padova suite in its CMD 3.6 version (http://stev.oapd.inaf.it/cgi-bin/cmd_3.6online tool).
The HST color index with the strongest currently available spectroscopic calibration is (F475W - F850LP), equivalent to (g-z) in the SDSS system <cit.>. To quantify HST's (F475W - F850LP) VEGAMAG index versus metallicity, we used a simple quadratic relation to the spectroscopic data as described more completely in <cit.>, which is essentially a combination of the transformations in the spectroscopic studies cited above:
(F475W - F850LP) = 2.158
+ 0.57081[Fe/H] + 0.10026[Fe/H]^2
We then created a set of simulated GCs using the Padova online tool (see <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.> for details), specifying a fixed age of 12 Gyr and a total mass of 10^5 M_⊙ for each cluster, and allowing the metallicity to increase incrementally from [M/H] = -2.2 to [M/H] = 0.3 in steps of 0.1 dex. Using the Padova tool output, we plotted the resulting mock HST observations in both (F475W - F850LP), the calibrated index, and (F475X - F110W), our index, versus the input metallicity. Because our observations allow us to measure color, we inverted the results to obtain metallicity as a function of color. The metallicity-color relations, fitted with exponential equations, appear in Figure <ref>.
Our color-to-color-to-metallicity conversion is summarized in Figure <ref>; it allowed us to express our observed (F475X - F110W) GC colors in terms of (F475W - F850LP), and then metallicity:
(F475W-F850LP) = 2.785 - 5.000e^-0.830( F475X-F110W)
[M/H] = [4.456·10^8 (F475W - F850LP) - 5.996·10^8]^1/2 / 6684 - 2.847
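A compact implementation of this two-step conversion might look as follows; the function name is illustrative, and the relation is only meaningful over the colour range spanned by the simulated clusters (-2.2 ≤ [M/H] ≤ 0.3).

import numpy as np

def f475x_f110w_to_metallicity(color_475x_110w):
    """Convert (F475X - F110W) to [M/H] via the calibrated (F475W - F850LP)
    index, using the fitted relations quoted above."""
    c = np.asarray(color_475x_110w, dtype=float)
    g_z = 2.785 - 5.000 * np.exp(-0.830 * c)                  # (F475W - F850LP)
    return np.sqrt(4.456e8 * g_z - 5.996e8) / 6684.0 - 2.847  # [M/H]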
§ MODELING MDF SHAPE
§.§ Double Gaussian fits
The completeness-sky brightness relation described in Section <ref> and the color-metallicity conversion from Section <ref> enabled us to turn our directly observed color indices into more physically meaningful metallicity values. Figure <ref> shows a typical result of both processes: the completeness correction brings up the total GC numbers, though with little change to the shape of the CDF or MDF, while the color-to-metallicity conversion compresses the red GCs into a metal-rich peak and stretches the blue GCs into a metal-poor tail (note that the data in this figure have not undergone the final magnitude and radius cuts described in Section <ref>; all data in subsequent figures have). The small differences between the raw and completeness-corrected MDFs are largely due to the shallow radial metallicity gradients that are also an observed feature of these systems (see below): because the completeness corrections are somewhat larger in the inner regions, where the metal-rich clusters are more prominent, the metal-rich side of the MDF is boosted slightly more.
To quantify the shape of the MDFs, we modeled each metallicity histogram with a double Gaussian curve:
N_GCs,comp = A_mp e^-(x-μ_mp)^2/(2σ_mp^2) + A_mr e^-(x-μ_mr)^2/(2σ_mr^2)
with each peak (metal-poor, subscript mp, and metal-rich, subscript mr) characterized by an amplitude A, a mean μ, and a standard deviation σ.
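For illustration, a least-squares fit of this double Gaussian to a binned, completeness-weighted MDF could be set up as below; the modality tests in this work actually use GMM, and the initial guesses here are purely illustrative.

import numpy as np
from scipy.optimize import curve_fit

def double_gaussian(x, a_mp, mu_mp, sig_mp, a_mr, mu_mr, sig_mr):
    """Metal-poor plus metal-rich Gaussian components (the model above)."""
    return (a_mp * np.exp(-(x - mu_mp) ** 2 / (2 * sig_mp ** 2))
            + a_mr * np.exp(-(x - mu_mr) ** 2 / (2 * sig_mr ** 2)))

def fit_mdf(metallicity, weights=None, bins=30):
    """Fit the (completeness-weighted) MDF histogram with the double Gaussian;
    the initial guesses simply place the two peaks near the canonical blue and
    red subpopulations."""
    counts, edges = np.histogram(metallicity, bins=bins, weights=weights)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = [counts.max(), -1.0, 0.3, counts.max(), 0.0, 0.2]
    popt, pcov = curve_fit(double_gaussian, centers, counts, p0=p0)
    return popt, pcov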
Testing on the uncorrected GC counts with the Gaussian mixture modeling program from <cit.> strongly favored a double Gaussian over a single Gaussian model, and showed no difference between double Gaussian and multi-Gaussian models (that is, any extra modes above n=2 were damped down to zero in the GMM solution). Table <ref> shows GMM test results from the original (i.e. not completeness-corrected) GC metallicity distributions; the χ and p values are based on a null hypothesis of a unimodal Gaussian distribution, and D is a measure of peak separation. For details on D, see Section <ref> and Equation <ref>.
GMM, the most widely used fitting code for GCS studies, takes into account previous research into the nature of GCS metallicity distributions and builds on the copious evidence for metal-rich and metal-poor GC subpopulations in many galaxies <cit.>, so the question now is: how do those subpopulations combine to form the full-GCS distributions that we observe?
A more basic question regarding uni- vs. bi-modality can be posed by applying a dip test to our results for the MDFs <cit.>, with numerical results as listed in the last two columns of Table <ref>. In most cases the probability p(dip) indicates that the MDF is best considered to be unimodal (p > 0.9), with only NGC 533 and NGC 3842 strongly favoring multimodality. However, the dip test is only useful when the underlying form of the distribution is unknown. While most of our MDFs are visibly unimodal (that is, they have only one clear peak), it is simultaneously true that they are all skewed and the GMM fitting strongly rejects any fit with a unimodal Gaussian. The dip test was last used for GCSs in <cit.>, and since then a double Gaussian mixture has in most cases been found to be an appropriate empirical model for both GCS color and metallicity distributions.
Figure <ref> shows the NGC 777 double Gaussian model as an example, and Figure <ref> compares the MDFs and double Gaussian models for all fifteen galaxies in our sample. All the double Gaussian fit parameters can be found in Table <ref>. Figures <ref> and <ref> can be compared as in Figure <ref>.
§.§ Metallicity gradients
In addition to testing whether the standard double Gaussian was an appropriate model for the GCSs in our sample, we modeled the metallicity gradients of our galaxies and checked them against comparable results from the literature. Figure <ref> shows metallicity versus projected radius, along with one-sigma uncertainties. Slopes, uncertainties, and significance compared to a flat metallicity gradient can also be found in Table <ref>.
Fitting a power law to the metallicity gradients of the form [M/H] = const + α_grad log R, we found an average slope of α_grad = -0.41, with an rms scatter of ± 0.13. These values are consistent with the results for numerous other galaxies over a wide range of luminosities and appear to be a common feature of GCSs: a shallow radial decrease in mean metallicity, though with galaxy-to-galaxy scatter of 0.1-0.2 dex
<cit.>.
Despite the shallow mean gradients, the mean of the metal-rich GC subpopulation (μ_mr in particular) is very similar from one galaxy to another, hovering around [M/H] ∼ 0 with a scatter of only a few percent. A shallow trend for μ_mr to increase with galaxy mass roughly as Z ∼ M_*^0.2 has been established in previous surveys <cit.>, though our selection of targets with nearly the same stellar masses would prevent that trend from showing up here. The mean absolute value of μ_mr∼ 0.0 is, however, more metal-rich by a few tenths of a dex than has been found for red GCs in spectroscopic surveys of massive galaxies (cf. the references cited above), and also from photometric surveys based on optical color indices <cit.>. As described above, the metallicity values we obtain are the direct result of one particular set of stellar model transformations from the F475X and F110W filters; these need to be investigated further with additional sets of models and new empirical transformations to optical indices.
§ COMPARISON WITH HOST GALAXY ENVIRONMENT
Because the amplitudes from our double Gaussian fits are influenced by the number of GCs in the sample for each galaxy (and will be divided by GC count for the rest of this analysis) and by the overall shape of the curve, the key parameters are the metal-poor and metal-rich modes μ_mp and μ_mr and the metal-poor and metal-rich widths (standard deviations) σ_mp and σ_mr. The metal-rich peaks are quite uniform throughout our galaxy sample, with a mode of μ_mr∼ 0.0 and width ranging from 0.1 ≲σ_mr≲ 0.3. Most differences between GCSs arise with the metal-poor peak, which can lie anywhere from ∼ 0.5 to ∼ 0.8 dex away from the metal-rich peak.
In addition to comparing the double Gaussian parameters themselves, we calculated the mean metallicity for each GCS, the difference between modes (μ_mr - μ_mp), a second peak separation metric taking into account peak width:
D = |μ_mp - μ_mr| / √[(σ_mp^2 + σ_mr^2)/2]
<cit.>, and the blue fraction f_b, recovered from the double Gaussian solution.
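Two of these derived quantities, D and the blue fraction, follow directly from the double Gaussian parameters; a sketch of the calculation is given below, with the blue fraction taken as the metal-poor component's share of the total Gaussian area.

import numpy as np

def peak_separation_D(mu_mp, sig_mp, mu_mr, sig_mr):
    """D = |mu_mp - mu_mr| / sqrt((sig_mp^2 + sig_mr^2) / 2)."""
    return abs(mu_mp - mu_mr) / np.sqrt(0.5 * (sig_mp ** 2 + sig_mr ** 2))

def blue_fraction(a_mp, sig_mp, a_mr, sig_mr):
    """Blue (metal-poor) fraction, taking each component's total count as
    proportional to amplitude times width."""
    n_mp, n_mr = a_mp * sig_mp, a_mr * sig_mr
    return n_mp / (n_mp + n_mr)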
In general, very few GCS variables showed any signs of being correlated with environmental metrics—see Table <ref> for significant Spearman coefficients—with the exception of the blue fraction and the normalized metal-rich amplitude, which are themselves related. Because blue fraction is a more physically relevant parameter, our analysis will focus on it rather than on normalized metal-rich amplitude.
Figure <ref> shows f_b versus all environmental metrics derived from <cit.>, with fitted linear models and one-sigma range shaded. Functions of group member count, group virial mass, group virial radius, and group velocity dispersion all produced large uncertainty for linear fit parameters and visual inconsistency when plotted. In contrast, nth-nearest neighbor surface density produces a smaller linear fit uncertainty and a more visually consistent positive trend (i.e. galaxies in denser neighborhoods tend to have a higher blue fraction), in addition to a significant Spearman coefficient. The relation is weak, but stronger than the poorly constrained results for other variable combinations. The linear fit parameters for f_b versus Σ_2 are:
⟨ f_b⟩ = 0.042log(Σ_2) + 0.258
The correlation we find, i.e. that increased GC blue fraction is associated with high local number densities of satellite galaxies, is at least superficially consistent with expectations from current theory that a high fraction of the metal-poor GCs in BCGs have been accreted from nearby satellites (see Section <ref>). It is not yet clear, however, whether the shallowness of this correlation and the others shown in Figure <ref> is in quantitative agreement with present models. The present observational work is necessarily focused on the inner to mid-halo region because of the restricted field of view of the cameras, whereas the theoretical simulations tend to provide global correlations.
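A fit and rank-correlation test of this kind can be reproduced with standard tools, as sketched below; f_b and sigma_2 stand for the per-galaxy blue fractions and Σ_2 values, which are not reproduced here.

import numpy as np
from scipy.stats import spearmanr

def fb_density_trend(f_b, sigma_2):
    """Spearman rank correlation and linear fit of blue fraction against
    log10 of the nth-nearest-neighbour surface density."""
    x = np.log10(np.asarray(sigma_2, dtype=float))
    rho, pval = spearmanr(x, f_b)
    slope, intercept = np.polyfit(x, f_b, 1)   # <f_b> = slope * log10(Sigma_2) + intercept
    return rho, pval, slope, intercept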
§ DISCUSSION AND SUMMARY
In this study, we used HST images of fifteen massive BCGs selected from the MASSIVE survey to investigate the relationship between GCS metallicity and host galaxy environment. We created a model of completeness across the varying background brightness in our images, and a model for GC metallicity as a function of the HST color index (F475X - F110W). After fitting a double Gaussian curve to each MDF, we compared the GCS metallicity parameters to environmental metrics derived from <cit.> and found a weak but consistent correlation between the GCS blue fraction and nth-nearest neighbor density, but no statistically significant relationship between any other variables.
It should be noted that we consider this work as a pilot study into the relationship between GCS metallicity distribution and host galaxy environment. There are several issues that need further exploration:
* The conversion from color to metallicity used here is a preliminary step. We plan to compare the Padova models to other simple stellar population (SSP) models, to build in other features such as age/metallicity relations, and to develop direct empirical transformations between color indices from upcoming HST observations.
* It is possible that the HST images used in this work did not capture enough of our galaxies' halos for us to detect a strong relation between GCS and environmental parameters. Recent observational studies such as <cit.> show that accreted GCs may remain far from their new host galaxy's center, well outside the WFC3 field of view at the distance of our galaxy sample. To understand the full GCSs of our galaxies, we would need either a wider field of view or mosaic images.
* We should also consider the evolutionary clock of these galaxies—we are looking at them at a certain stage in their growth. The galaxies in sparser environments have essentially finished their hierarchical growth as they have no more satellites to absorb, while the ones in richer areas are still moving along their merger trees and will accrete more metal-poor GCs and increase their blue fractions in the future. Environmentally driven differences may thus emerge more strongly as BCGs evolve beyond the present day.
* We need to consider how much host galaxy environment we include. A galaxy's wider environment and more distant satellites may not have an appreciable effect on the inner halo; at what level of locality (or nonlocality) does environment begin to correlate more obviously with GCS properties?
§ ACKNOWLEDGEMENTS
We acknowledge financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC) through a Discovery Grant to WEH. KH thanks Alison Sills, Laura Parker, Veronika Dornan, Claude Cournoyer-Cloutier, and Jeremy Karam for helpful discussions.
This research has made use of the SIMBAD database and the Vizier catalogue access tool, operated at CDS, Strasbourg, France <cit.>.
HST (WFC3)
Python (https://www.python.orghttps://www.python.org), NumPy <cit.>, pandas <cit.>, Matplotlib <cit.>, pyraf <cit.>, DOLPHOT <cit.>, ds9 <cit.>
aasjournal
|
http://arxiv.org/abs/2307.00579v1
|
20230702141756
|
Nonequilibrium interfacial properties of chemically driven fluids
|
[
"Yongick Cho",
"William M. Jacobs"
] |
cond-mat.soft
|
[
"cond-mat.soft",
"cond-mat.mtrl-sci",
"cond-mat.stat-mech"
] |
[email protected].
Department of Chemistry, Princeton University, Princeton, NJ 08544, USA
Chemically driven fluids can demix to form condensed droplets that exhibit phase behaviors not observed at equilibrium.
In particular, nonequilibrium interfacial properties can emerge when the chemical reactions are driven differentially between the interior and exterior of the phase-separated droplets.
Here, we use a minimal model to study changes in the interfacial tension between coexisting phases away from equilibrium.
Simulations of both droplet nucleation and interface roughness indicate that the nonequilibrium interfacial tension can either be increased or decreased relative to its equilibrium value, depending on whether the driven chemical reactions are accelerated or decelerated within the droplets.
Finally, we show that these observations can be understood using a predictive theory based on an effective thermodynamic equilibrium.
§ INTRODUCTION
A “chemically driven” fluid can maintain a nonequilibrium steady-state (NESS) when provided with a constant external supply of a chemical fuel <cit.>.
By coupling this fuel source to chemical reactions taking place in the fluid, and thereby biasing the forward/backward reaction kinetics, it is possible to drive the fluid away from equilibrium.
A NESS is established when the constant supply of chemical fuel prevents the fluid from relaxing to thermal equilibrium.
For example, in living cells, biomolecules can undergo reactions involving post-translational modifications, such as changes in conformational state due to phosphorylation <cit.>.
If the free energy used to drive these transitions away from equilibrium is derived from a constant supply of ATP, then the reaction kinetics will break detailed balance, and the intracellular fluid can be considered to be at a NESS.
Like equilibrium fluids, chemically driven fluids can demix to form coexisting phases via liquid–liquid phase separation <cit.>.
However, the phase behavior and phase-separation dynamics of a chemically driven fluid may differ qualitatively from that of a fluid at equilibrium.
For example, coarsening can be suppressed <cit.>, leading to a monodisperse size distribution of coexisting droplets at steady state <cit.>, or accelerated <cit.> relative to equilibrium.
Nonequilibrium droplets can exhibit self-regulatory behaviors such as self-division <cit.>, self-propulsion <cit.>, and self-organization into complex internal microstructures that are controlled by the chemical driving forces <cit.>.
The kinetics of droplet assembly and disassembly can also differ from equilibrium phase-separation kinetics and can be controlled by tuning the chemical driving forces <cit.>.
This ability to control the dynamics of droplet formation suggests that nonequilibrium phase separation may be a facile means of achieving spatiotemporal regulation within living cells <cit.>.
Understanding how the interfacial tension between coexisting phases changes at a NESS is essential for describing the phase behavior and dynamics of chemically driven fluids <cit.>.
In particular, nonequilibrium interfacial tensions, which imply deviations of the steady-state interfacial properties from equilibrium, emerge when the driven chemical-reaction kinetics differ between the interior and exterior of a phase-separated droplet <cit.>.
Such spatially inhomogeneous reaction kinetics can arise in complex fluids when the chemical fuel source is uniformly distributed but the catalyst required for the driven reaction pathway tends to partition into one of the two coexisting phases.
For example, spatially inhomogeneous reaction kinetics can be induced by the preferential enrichment of enzymes either inside or outside of biomolecular condensates in living cells <cit.>.
Since enzymatic partitioning is observed in a wide variety of intracellular condensates as a means of maintaining chemically specific environments to carry out specialized biochemical functions <cit.>, it is likely that inhomogeneous reaction kinetics—and thus nonequilibrium interfacial properties—are a common feature of nonequilibrium phase-separated droplets in living cells.
In this article, we use a minimal model of a chemically driven fluid to quantify the effects of driven chemical reactions on the interfacial properties between coexisting phases at a NESS.
We show that simulations of droplet nucleation and interface roughness lead to compatible inferences of the nonequilibrium interfacial tension in systems with spatially inhomogeneous chemical reactions.
The rest of this paper is structured as follows:
In section:model, we describe the minimal model and consider two chemical-reaction schemes in which the driven reaction is either accelerated or decelerated inside of the droplets.
In section:thermodynamics, we discuss how a theory based on an “effective equilibrium” can be used to predict both bulk and interfacial properties of the coexisting phases.
Then in section:interfacial_tension, we show that the nonequilibrium interfacial tensions inferred from nucleation and roughness simulations agree with the predictions of our effective-equilibrium theory.
Finally, in section:discussion, we discuss the implications of our results for conducting noninvasive measurements of interfacial properties in nonequilibrium fluids.
§ A MINIMAL MODEL FOR CHEMICALLY DRIVEN FLUIDS
In this section, we present a minimal model of a chemically driven fluid, in which reactions allow particles to interconvert between two internal conformational states.
In subsection:an_open_system, we describe the modeling approach first introduced in Ref. <cit.>.
We then consider two alternative inhomogeneous chemical-reaction schemes, which result in different reaction kinetics in the condensed and dilute phases, in subsection:inhomogeneous_chemical_reaction.
Implementation details of our simulations are presented in subsection:implementation_of_the_model.
§.§ Modeling driven chemical reactions in an open system
In order to investigate interfacial properties in chemically driven systems, we employ a lattice model of a fluid consisting of particles in an implicit solvent <cit.>.
Extending the classical two-dimensional lattice-gas model, we allow particles to undergo chemical reactions between two internal conformational states: a bonding (B) state and an inert (I) state.
Each lattice site can be occupied by at most one particle in either internal state.
Short-ranged attractive interactions between B-state particles tend to drive phase separation, whereas both I-state particles and empty (E) lattice sites, which represent the implicit solvent, are non-interacting.
Specifically, a particle in the B state engages in nearest-neighbor attractive interactions with other neighboring B-state particles with pairwise interaction strength ϵ (< 0).
These interactions give rise to a local potential energy u = ϵ b (≤ 0) at each lattice site, where b is the number of nearest-neighbor lattice sites occupied by B-state particles.
I-state particles, on the other hand, do not interact with nearest-neighbor particles, and are thus isoenergetic to empty lattice sites, with u = 0.
Chemical reactions between the B and I conformational states of individual particles occur via two distinct reaction pathways.
An undriven pathway is governed by equilibrium thermodynamics, while the driven pathway is considered to be coupled to a constant chemical fuel source.
We model these pathways within the framework of stochastic thermodynamics <cit.>, which treats reactions as Markovian events taking place at a constant absolute temperature T.
Along the undriven pathway, the ratio of the forward and backward reaction rates is dictated by detailed balance and the internal free-energy difference, Δf_res, between the B and I states in an ideal-gas reservoir (fig:fig1a).
Throughout this work, we consider scenarios in which Δf_res > 0, such that the I state is more stable in the dilute vapor (v) phase, while attractive interactions are required to stabilize the B state in the condensed liquid (l) phase.
The fugacities z_B and z_I of B and I-state particles in an equilibrium ideal gas are thus related to this equilibrium free-energy difference by βΔf_res ≡ -ln(z_B/z_I), where β≡ (k_BT)^-1.
Along the driven pathway, conformational-state changes are driven out of equilibrium by a chemical-potential difference that originates from the constant chemical fuel source.
The ratio of the forward and backward reaction rates on the driven pathway is thus increased relative to the equilibrium ratio by a factor exp(β).
The relative flux between the driven and undriven pathways then determines the steady-state distribution between the conformational states.
In the limit that the flux through the driven pathway vanishes, we recover the equilibrium distribution.
By contrast, if the flux through the driven pathway dominates, then the system approaches an effective equilibrium distribution governed by an internal free-energy difference of +.
We model the fluid as an open system connected to a particle reservoir, which allows us to simulate competition between the driven and undriven pathways under conditions where the reactions are rate-limiting (fig:fig1a).
This limit describes a scenario in which particle transport within the fluid is much faster than the typical rate of transitions between conformational states along either pathway.
Importantly, this limit is appropriate for studying interfacial properties that arise purely from the competition between the driven and undriven pathways in a phase-separated system, which is the focus of this paper.
Open systems also provide advantages for studying phase coexistence that are analogous to those of the equilibrium grand-canonical ensemble, such as the elimination of interfaces and reduced finite size effects <cit.>.
Within the framework of an open system, we implement reactions along the driven pathway as direct transitions between the B and I states, while reactions along the undriven pathway occur indirectly via particle exchanges with the equilibrium particle reservoir (fig:fig1b).
Particle insertion and removal rates depend on the reservoir fugacities, z_B and z_I, the local potential energy, u, experienced by a B-state particle, and the base exchange rate, D, between the system and the reservoir.
Direct transitions between the B and I states, corresponding to driven I→B and B→I reactions, proceed with reaction rates k_IB and k_BI, respectively.
We use dimensionless reaction rates k̃_IB ≡ D^-1 k_IB and k̃_BI ≡ D^-1 k_BI for notational simplicity throughout the rest of the paper.
When the chemical drive along the driven pathway is nonzero, the probability of observing a sequence of events around the single-cycle network in fig:fig1b differs from that of the time-reversed sequence, breaking detailed balance and leading to a nonzero net probability current.
The chemical drive, Δμ, is directly related to the ratio of the probability of observing a forward sequence of transitions around the cycle in the B-to-I direction relative to that of its time-reversed sequence <cit.>,
βΔμ ≡ ln[ (k̃_BI/k̃_IB) / ((z_I/z_B) e^β u) ].
Rearranging (<ref>) leads to the so-called “local detailed balance” condition for the direct-transition rates <cit.>,
k̃_BI/k̃_IB = exp(βΔf_res + β u + βΔμ).
Thus, (<ref>) implies that the ratio of the forward and reversed reaction rates on the driven pathway is increased relative to the ratio on the undriven pathway by a factor exp(βΔμ), and detailed balance is recovered only when Δμ = 0.
§.§ Models of inhomogeneous driven chemical reactions
In this paper, we focus on inhomogeneous driven chemical reactions, whose kinetics differ between the liquid and vapor phases.
When the reaction kinetics are identical, or homogeneous, in both phases, then the steady-state density distribution of the driven system can be mapped exactly to that of an equilibrium system <cit.>.
Consequently, no change in the interfacial properties is observed when the reaction kinetics are homogeneous <cit.>.
By contrast, when the kinetics differ between the coexisting phases, there is no single effective-equilibrium model that can describe both phases simultaneously.
Coexisting phases in inhomogeneously driven fluids can only be approximately described using different effective-equilibrium models, giving rise to nonequilibrium interfacial properties <cit.>.
Inhomogeneous reaction kinetics are implemented in our model by controlling the relative fluxes between the driven and undriven pathways in the two phases.
To this end, we tune the ratio of the fluxes by making the base rate of the driven, direct transitions between the B and I states dependent on a particle's local environment within the open system.
Specifically, we set the backward rate, k̃_BI, to be dependent on the local potential energy, u.
The forward rate, k̃_IB, then follows from the local detailed balance condition, (<ref>).
Here, we consider two chemical reaction schemes in which k̃_BI takes a Metropolis-like form (fig:fig1c):
k̃_BI(u) =
k^∘ min[1, exp(-βΔf_res - βΔμ - βu)]
k^∘ min[1, exp(-βΔf_res - βΔμ + βu)] .
The prefactor k^∘ sets the relative flux of the driven pathway compared to the undriven pathway in a dilute vapor phase, where u ≈ 0.
We refer to the inhomogeneous reaction kinetics defined by Eqs. (<ref>) and (<ref>) as chemical-reaction models I and II, respectively, throughout the rest of the paper.
In chemical-reaction model I, k̃_BI decreases monotonically with respect to u, implying that, all else being equal, the driven reaction is faster in the liquid phase, where the potential energy tends to be low, than in the vapor phase, where the potential energy tends to be high.
The behavior is the opposite in model II.
§.§ Implementation via kinetic Monte Carlo algorithm
To study the consequences of inhomogeneous driven chemical reactions, we perform kinetic Monte Carlo simulations <cit.> of our model on a two-dimensional square lattice with periodic boundary conditions (fig:fig1a).
We treat particle exchanges and direct transitions between B and I states as first-order Markovian reactions.
We then simulate the stochastic evolution of the lattice with the parameters , , and β held constant in time.
All simulation data presented in this paper are obtained at a dimensionless interaction strength of βϵ = -2.95, which is stronger than the critical dimensionless interaction strength, βϵ_c = -2ln(1+√(2)), <cit.> of the associated equilibrium lattice-gas model.
Two of the three control parameters, , , and β, must then be specified in order to define a unique coexistence point.
We choose parameters for simulation by controlling both the extent of the nonequilibrium drive, β, and the average number density of particles in either internal state in the vapor phase, .
The latter of these conditions is achieved by tuning the internal free-energy difference ; focusing on as opposed to makes for an easier connection to experimental settings.
Throughout this work, we consider coexistence points at which the vapor-phase number density is = 0.05.
Finally, the prefactor for the driven chemical-reaction pathway, k^∘, is set to k^∘ = 0.1 for chemical-reaction model I and k^∘ = 1 for model II.
These choices yield relative timescales for the driven and undriven pathways in which the interfacial properties clearly deviate from equilibrium, as opposed to the behavior in the undriven-reaction-dominated (k^∘→0) or driven-reaction-dominated (k^∘→∞) limits <cit.>.
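A schematic sketch of one rejection-free (Gillespie-type) update is given below. The state encoding, the bundling of the exchange and driven-reaction rates into user-supplied callables, and all names are illustrative; the specific rate expressions defined above are deliberately not hard-coded here.

import numpy as np

E, I, B = 0, 1, 2   # empty, inert, bonding

def site_events(lattice, i, j, rates):
    """List the first-order events available at site (i, j) as
    ((i, j), new_state, rate) tuples. `rates` bundles the interaction strength
    and user-supplied callables for the exchange and driven-reaction rates."""
    n_rows, n_cols = lattice.shape
    s = lattice[i, j]
    # number of nearest-neighbour B-state particles (periodic boundaries)
    nb = sum(int(lattice[(i + di) % n_rows, (j + dj) % n_cols] == B)
             for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))
    u = rates["eps"] * nb            # energy a B particle would have at this site
    if s == E:
        ev = [(B, rates["insert_B"](u)), (I, rates["insert_I"]())]
    elif s == B:
        ev = [(E, rates["remove_B"](u)), (I, rates["k_BI"](u))]
    else:  # s == I
        ev = [(E, rates["remove_I"]()), (B, rates["k_IB"](u))]
    return [((i, j), new, r) for new, r in ev]

def kmc_step(lattice, rates, rng):
    """One rejection-free move: pick an event with probability proportional to
    its rate and return an exponentially distributed waiting time."""
    events = [e for i in range(lattice.shape[0])
                for j in range(lattice.shape[1])
                for e in site_events(lattice, i, j, rates)]
    w = np.array([r for _, _, r in events], dtype=float)
    total = w.sum()
    k = rng.choice(len(events), p=w / total)
    (i, j), new_state, _ = events[k]
    lattice[i, j] = new_state
    return rng.exponential(1.0 / total)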
§ CHEMICAL REACTION-INDUCED NONEQUILIBRIUM PHASE BEHAVIOR AT COEXISTENCE
In this section, we demonstrate the effect of driven chemical reactions on the bulk phase behavior of the coexisting phases, which sets the stage for examining nonequilibrium interfacial properties.
In subsection:direct_coexistence, we establish the condition for phase coexistence under nonequilibrium conditions based on mechanical balance.
We then examine how the nonequilibrium phase behavior depends on the chemical-reaction models defined in subsection:thermodynamics, and we show how these results can be understood in terms of a theory based on an effective-equilibrium approximation in subsection:FLEX.
§.§ Determination of phase coexistence
Two phases are in coexistence when there is no net energy or mass flux between them and they are mechanically balanced.
At equilibrium, a variational principle based on the second law of thermodynamics implies that equal temperatures, chemical potentials, and pressures between the coexisting phases lead to energy, mass, and mechanical balance, respectively, and vice versa.
In the grand-canonical ensemble at equilibrium, in which the temperature and chemical potential are fixed, ensuring that the grand-potential densities are the same in the two phases satisfies the equal-pressure condition <cit.>.
Although our model is driven out-of-equilibrium, the particle reservoir still guarantees that the energy and mass conditions are satisfied by providing the energy and mass input required to keep each phase at steady-state.
However, equating the grand-potential densities in the two phases does not ensure mechanical balance in fluids that are out of equilibrium due to the general absence of an equation of state <cit.>.
We therefore examine two closely related definitions of nonequilibrium phase coexistence based on mechanical balance, which are not guaranteed to coincide in the case of chemically driven fluids a priori.
First, we determine phase coexistence by equating the steady-state probability of observing the open system in either the liquid or the vapor phase.
We then compare this definition of coexistence with the result of nonequilibrium direct-coexistence simulations.
In the steady-state distribution approach, we calculate the steady-state number-density distribution of the B-state particle, p(), on the domain ∈ [0,1].
This calculation is performed using an L× L lattice.
To this end, we apply a form of nonequilibrium umbrella sampling (NEUS) <cit.>, in which we divide the one-dimensional axis into non-overlapping windows, calculate the steady-state distribution within each window, and enforce detailed balance between the adjacent windows in order to reconstruct the entire steady-state distribution <cit.>.
We observe a barrier with respect to -ln p() that scales with L, as expected for a first-order phase transition <cit.> (fig:fig2a).
Based on the location of the top of the barrier, ^*, we determine the probability of being in the vapor phase, p_v≡∫_0^^* p()d, versus the liquid phase, p_l≡∫_^*^1 p()d.
We then define the nonequilibrium potential difference, , between the liquid and vapor phases, β(L) ≡ L^-2ln(p_l/p_v).
In the macroscopic limit, _∞≡lim_L→∞(L), this quantity is exactly equal to the difference between the grand-potential densities of the two phases in a fluid at equilibrium.
We therefore associate _∞ = 0 with nonequilibrium phase coexistence in this approach.
This operational definition is particularly useful, as the quantity it defines plays the role of a thermodynamic potential for the liquid–vapor phase transition under nonequilibrium conditions and can be evaluated directly from NEUS simulations.
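As an illustration, once a steady-state density histogram has been sampled (writing the B-state density as rho_B), the liquid and vapor weights and the potential difference per site can be extracted roughly as follows; the barrier-top search assumes a clearly bimodal distribution with one peak in each half of the density range, and the names are illustrative.

import numpy as np

def potential_difference(p_rho, L):
    """L^{-2} * ln(p_l / p_v) from a sampled histogram p(rho_B) on a uniform
    density grid. The bin width cancels in the ratio, so plain sums are used."""
    p = np.asarray(p_rho, dtype=float)
    half = len(p) // 2
    lo_peak = int(np.argmax(p[:half]))            # vapor peak
    hi_peak = half + int(np.argmax(p[half:]))     # liquid peak
    star = lo_peak + int(np.argmin(p[lo_peak:hi_peak + 1]))  # barrier top
    p_v = p[: star + 1].sum()    # vapor branch weight
    p_l = p[star:].sum()         # liquid branch weight
    return np.log(p_l / p_v) / L ** 2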
In the direct-coexistence approach <cit.>, we consider an open system that is periodic in the vertical direction but always in contact with the liquid and vapor phases in the horizontal direction (inset of fig:fig2b).
Fluctuations at the liquid–vapor interface cause the portion of the lattice that is occupied by the liquid phase to expand or shrink.
Starting from an initial configuration in which the lattice is split equally between the liquid and vapor phases, we simulate the system until the liquid phase either fills the lattice entirely or vanishes completely.
For simulation efficiency, we only allow particle exchanges and chemical reactions to take place within a window of size L× L, which tracks the position of the planar liquid–vapor interface (inset of fig:fig2b); the width of this window is much larger than the interface height fluctuations, which ensures that the window boundaries do not affect the growth or shrinkage of the liquid phase.
We then measure the probability that the lattice is occupied by the liquid phase at the end of the simulation, P_l.
Consistent with the notion of mechanical balance <cit.>, we associate phase coexistence with the condition P_l = 50%.
We find that the coexistence points identified by these two approaches match closely when extrapolated to the macroscopic limit.
In both approaches, we find that the chemical drive at phase coexistence, , shows a consistent dependence on the lattice dimension, L, which allows us to extrapolate the coexistence condition to the macroscopic limit, L→∞ (fig:fig2c).
Interestingly, the 1/L scaling that we observe is reminiscent of effective Coulombic interactions predicted in other treatments of chemically reactive systems <cit.>.
Even in the case of a strongly driven system (|β| ≫ 1), the two definitions of phase coexistence yield very similar coexistence points, (differing by less than 10^-2 k_BT), in the macroscopic limit (fig:fig2c).
These differences are typically orders of magnitude smaller than the chemical drive at coexistence, as well as the range of values over which we simulate droplet nucleation (see subsection:nucleation).
We therefore conclude that these two coexistence definitions agree well within the range of system parameters that we consider here, and we utilize the definition β_∞ = 0 for operational convenience.
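A minimal sketch of this finite-size extrapolation, assuming the observed ~1/L scaling, is given below; L_values and drive_at_coex stand for the simulated lattice sizes and the corresponding coexistence drives.

import numpy as np

def extrapolate_coexistence(L_values, drive_at_coex):
    """Linear fit of the finite-size coexistence drive against 1/L; the
    intercept is the extrapolated macroscopic (L -> infinity) value."""
    inv_L = 1.0 / np.asarray(L_values, dtype=float)
    slope, intercept = np.polyfit(inv_L, np.asarray(drive_at_coex, dtype=float), 1)
    return intercept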
§.§ Effective-equilibrium quantification of nonequilibrium phase behavior
Because the relative fluxes depend on the kinetics of the chemical reactions, reaction models I and II lead to different coexistence lines, as shown in fig:fig3a.
In this phase diagram, decreases with along the coexistence line in the case of model I, while the value of at coexistence is essentially independent of the chemical drive in the case of model II.
In both cases, the total number density in the vapor phase is fixed such that = 0.05 at coexistence.
The differences between the phase diagrams of the two chemical-reaction models suggest that the bulk properties of the coexisting phases depend on the details of the reaction models at a given nonequilibrium chemical drive.
To understand how the driven chemical reactions affect the bulk properties of the coexisting liquid and vapor phases, we map each phase to an effective-equilibrium model.
Here, effective equilibrium implies an equilibrium fluid with the same B-state interaction strength, ϵ, as in the nonequilibrium system, but with a potentially different effective internal free-energy difference between the B and I states, Δ f.
This effective free-energy difference is calculated using a generalized Widom insertion formula <cit.>,
βΔf ≡ -ln(ρ_B/ρ_I) + ln⟨exp(-βΔ u_I→B)⟩_I,
where ρ_B and ρ_I are the steady-state number densities of B and I-state particles, respectively, in a bulk phase; Δ u_I→B is the potential energy change due to changing an I-state particle to the B state; and the angle brackets ⟨·⟩_I indicate a steady-state average conditioned on a tagged lattice site being occupied by an I-state particle.
In equilibrium, Eq. <ref> reduces to β_eq = β, where _eq is the equilibrium internal free-energy difference.
At a nonzero Δμ, we evaluate (<ref>) from a steady-state trajectory in which the lattice is occupied entirely by one phase.
In this way, we are able to determine effective free-energy differences, Δf_l and Δf_v, for the coexisting liquid and vapor phases, respectively (fig:fig1a).
We note that while this effective equilibrium mapping is not exact in general, it nevertheless offers useful insights and provides a quantitative foundation for understanding and predicting nonequilibrium phase behaviors.
In general, coexisting phases cannot be described by the same effective-equilibrium model in driven fluids with inhomogeneous chemical reactions.
It is therefore useful to quantify the difference between the effective internal free-energy differences in the coexisting liquid and vapor phases, δΔf ≡ Δf_l - Δf_v.
In fig:fig3b, we observe that as the system is driven farther away from equilibrium along a coexistence line, δΔf either increases or decreases monotonically, signaling growing differences between the effective thermodynamics of the coexisting phases as the magnitude of βΔμ is increased.
However, the relationship between βδΔf and βΔμ depends on the details of the inhomogeneous chemical-reaction model.
We find that βδΔf increases with βΔμ in the case of model I, while the relationship is reversed in the case of model II.
The physical origin of this behavior is discussed in the following section.
Unexpectedly, we find that the deviations of βδΔf from zero are almost entirely due to the vapor phase for both chemical-reaction models that we examine here.
fig:fig3c shows that βΔf_v either decreases or increases monotonically in the case of models I or II, respectively.
By contrast, Δf_l exhibits an almost negligible change at the coexistence conditions.
A theoretical explanation for these observations is discussed below.
§.§ FLEX description of nonequilibrium phase behavior
To understand the effect of driven chemical reactions on the nonequilibrium phase behavior, we utilize the “Fixed Local Environment approXimation" (FLEX) theoretical framework introduced in Ref. <cit.> (see app:FLEX).
FLEX assumes that the internal state of a particle at a tagged lattice site relaxes to its steady-state distribution much faster than the local configuration, or “environment”, surrounding the tagged lattice site changes.
This assumed separation of timescales allows us to map the steady-state of the tagged lattice site to its own effective equilibrium.
In this way, FLEX predicts that the effective internal free-energy difference, Δf(u), of a lattice site in an environment where a B-state particle would experience a local potential energy u is given by
βΔf(u) = βΔf_res + ln[ (1 + k̃_BI(u)(1+e^βΔf_res) e^βΔμ) / (1 + k̃_BI(u)(1+e^βΔf_res)) ].
This equation implies that the effective internal free-energy difference between the B and I states at a tagged lattice site is the sum of the equilibrium internal free-energy difference in the particle reservoir and the influence of the driven chemical-reaction pathway.
In order to employ FLEX without resorting to simulation, we assume that u ≈ 4ϵ in the liquid phase, meaning that every lattice site is surrounded by B-state particles, and u ≈ 0 in the vapor phase, meaning that particles are sparsely distributed.
(<ref>) then predicts that δΔf = Δf_l - Δf_v = Δf(4ϵ) - Δf(0) is an increasing function of the chemical drive when k̃_BI(4ϵ) > k̃_BI(0), as in model I, while the relationship is the opposite when k̃_BI(4ϵ) < k̃_BI(0), as in model II.
These qualitative predictions agree with the observed simulation results presented in fig:fig3b.
Thus, we attribute the qualitative differences in βδΔf at coexistence to the acceleration of the driven chemical reactions in either the liquid or vapor phase in the case of chemical-reaction models I or II, respectively (fig:fig1c).
Quantitative predictions of the bulk-phase properties obtained from FLEX also agree well with the simulation results at phase coexistence (solid lines in fig:fig3; see app:FLEX-models).
§ EFFECTS OF DRIVEN CHEMICAL REACTIONS ON NONEQUILIBRIUM INTERFACIAL PROPERTIES
In this section, we demonstrate that driven chemical reactions can induce changes in the interfacial properties of a nonequilibrium phase-separated fluid relative to those of an equilibrium fluid.
We first infer the nonequilibrium interfacial tension from simulations of droplet-nucleation kinetics in subsection:nucleation.
These calculations are performed along two independent supersaturation pathways to establish that the interfacial tension is a material property determined by the coexistence point.
We then show how changes in the interfacial tension that arise in both chemical-reaction models can be understood within the FLEX framework in subsection:interfacial_tension_change.
Finally, in subsection:roughness, we demonstrate that the nonequilibrium interfacial tension inferred from nucleation simulations is consistent with the height fluctuations of the liquid–vapor interface at phase coexistence.
§.§ Inferring a nonequilibrium interfacial tension using classical nucleation theory
We first infer the nonequilibrium interfacial tension using simulations of droplet nucleation (fig:fig4a), which we interpret using classical nucleation theory (CNT) <cit.>.
In the case of equilibrium fluids, CNT predicts that the nucleation of a liquid droplet from a supersaturated vapor phase follows the minimum free-energy pathway along a reaction coordinate corresponding to the number of particles, n, in a cluster, or “nucleus”, of the liquid phase (inset of fig:fig4b).
The height of the nucleation barrier on this free-energy landscape, Δ F^*, is determined by a competition between the lower grand potential of the bulk liquid phase and the positive contribution to the cluster free-energy arising from the liquid–vapor interface.
If Δ F^* is much larger than typical thermal fluctuations, such that βΔ F^* ≫ 1, then the nucleation rate is dominated by the rate of crossing this barrier.
Under such conditions, CNT predicts that the nucleation rate density, J, follows the phenomenological law <cit.>
J = ρ_1 D^* Γexp(-βΔ F^*),
where the height of the free-energy barrier enters in the form of a Boltzmann factor.
Three terms comprise the kinetic prefactor: the B-state monomer density ρ_1; the “speed” at which the reaction coordinate n diffuses at the top of the nucleation barrier, D^*; and the Zeldovich factor, Γ, which accounts for recrossings of the reaction coordinate near the top of the barrier before a nucleation attempt ultimately succeeds or fails.
CNT has been shown to provide a quantitative description of nucleation kinetics in two-dimensional lattice-gas models at equilibrium <cit.>.
We compute the nucleation rate density, J, starting from a supersaturated vapor phase using forward-flux sampling (FFS) <cit.>.
In addition to the rate density, FFS determines the commitment probability to the stable liquid phase, ϕ(n), as a function of the reaction coordinate n.
We then compute the Zeldovich factor by fitting ϕ(n) to a harmonic free-energy landscape in the vicinity of the critical nucleus size <cit.>, n^*, at which the commitment probability equals 50% <cit.>.
The remaining quantities ρ_1 and D^* are independently determined from simulations of the bulk vapor phase and the dynamical behavior of the reaction coordinate when n ≈ n^* <cit.>, respectively.
From these measurements, we compute βΔ F^* = -ln(J / ρ_1 D^* Γ), which corresponds to the effective free-energy barrier height for a nonequilibrium nucleation process.
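This post-processing amounts to a short numerical pipeline. The sketch below (Python) is a minimal illustration rather than the code used in this work: it fits the FFS committor to the standard harmonic-barrier form ϕ(n) = ½[1 + erf(√π Γ (n − n*))] to extract the Zeldovich factor, and then evaluates βΔF* = −ln[J/(ρ_1 D* Γ)]; all variable names and the numbers in the example are placeholders.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def committor_harmonic(n, n_star, zeldovich):
    """Committor for diffusive crossing of a parabolic barrier:
    phi(n) = 0.5 * [1 + erf(sqrt(pi) * Gamma * (n - n_star))]."""
    return 0.5 * (1.0 + erf(np.sqrt(np.pi) * zeldovich * (n - n_star)))

def effective_barrier(J, rho1, D_star, n_vals, phi_vals):
    """Effective barrier beta*DeltaF* = -ln(J / (rho1 * D_star * Gamma)),
    with the Zeldovich factor Gamma obtained by fitting the FFS
    committor phi(n) near the critical nucleus size."""
    # initial guesses: n* where phi crosses 0.5, and a modest curvature
    n0 = n_vals[np.argmin(np.abs(phi_vals - 0.5))]
    (n_star, zeldovich), _ = curve_fit(
        committor_harmonic, n_vals, phi_vals, p0=(n0, 0.05))
    barrier = -np.log(J / (rho1 * D_star * zeldovich))
    return barrier, n_star, zeldovich

# illustration with made-up numbers only
n = np.arange(20, 81)
phi = committor_harmonic(n, n_star=50, zeldovich=0.04)
print(effective_barrier(J=1e-12, rho1=5e-3, D_star=2.0,
                        n_vals=n, phi_vals=phi))
```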
We then apply CNT to our model of a fluid with driven chemical reactions by introducing a nonequilibrium interfacial tension, σ_CNT.
To this end, we assume that the nucleation behavior also admits an effective equilibrium description <cit.>.
We therefore fit the observed effective barrier heights, βΔ F^*, as a function of the supersaturation, exp(β_∞), to an analytical expression for the two-dimensional lattice-gas free-energy landscape <cit.> using σ_CNT as the sole fitting parameter <cit.>.
This approach assumes that the thermodynamic driving force for the nucleation process is given by _∞, which is consistent with the fundamental CNT assumption that both the bulk and the interfacial properties of a nucleus are characteristics of the fluid in the macroscopic limit.
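As an illustration of this one-parameter fit, the sketch below replaces the analytical lattice-gas free-energy landscape used in the paper with the elementary two-dimensional CNT estimate βΔF* = π(βσ)²/(βΔμ) for a circular nucleus at liquid density ρ_l ≈ 1 in lattice units; this functional form is an assumption of the sketch, not the expression used in the paper, and the data points are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def cnt_barrier_2d(beta_dmu, beta_sigma):
    """Elementary 2D CNT barrier for a circular nucleus (rho_liquid = 1):
    beta*DeltaF* = pi * (beta*sigma)**2 / (beta*Dmu)."""
    return np.pi * beta_sigma**2 / beta_dmu

def fit_sigma_cnt(beta_dmu, barriers):
    """Fit measured barriers beta*DeltaF* versus supersaturation
    beta*Dmu = ln(S), with beta*sigma as the sole fitting parameter."""
    (beta_sigma,), _ = curve_fit(cnt_barrier_2d, np.asarray(beta_dmu),
                                 np.asarray(barriers), p0=(0.5,))
    return beta_sigma

# illustration with made-up data points consistent with beta*sigma ~ 0.5
print(fit_sigma_cnt([0.05, 0.08, 0.12], [15.7, 9.8, 6.5]))
```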
We now consider how the CNT description of the nucleation kinetics depends on the manner in which a chemically driven fluid is supersaturated.
In an equilibrium lattice-gas model at constant interaction strength, the thermodynamic driving force can be increased only by increasing the total particle density in the vapor phase, .
In fluids with driven chemical reactions, however, the chemical drive, , acts as an additional degree of freedom and thus provides an alternative means of tuning the supersaturation.
We therefore consider two different supersaturation protocols (fig:fig4a), in which we either increase and under fixed and , or increase with + fixed [see Eqs. (<ref>) and (<ref>)].
The former “ protocol” increases the particle density in the vapor phase while keeping and the relative flux between the reaction pathways constant.
By contrast, the latter “ protocol” changes the nonequilibrium potential by tuning the chemical drive along the driven reaction pathway.
In both supersaturation protocols, we keep the reaction-rate, , fixed.
We also use FLEX to predict equi-ΔΦ_∞ contours in the (β,) plane (fig:fig4a).
Viewing the parameter space in this way shows that an entire family of supersaturation protocols could plausibly be defined by monotonically tuning β and away from a coexistence point, where β_∞ = 0.
Remarkably, despite the differences between the and supersaturation protocols, we find that the nucleation kinetics can be described by CNT to near quantitative accuracy using a common value of the nonequilibrium interfacial tension, σ_CNT.
This result can be seen in fig:fig4b, where we plot the effective nucleation barrier as a function of the supersaturation along the two supersaturation protocols shown in fig:fig4a.
We emphasize that the fitted value of σ_CNT differs substantially from the equilibrium interfacial tension in the example shown in fig:fig4b, consistent with the fact that the fluid is driven away from equilibrium by inhomogeneous chemical reactions.
The good agreement between the barrier measurements along both protocols, as well as the fit to CNT, indicates that the effective nucleation barriers along both protocols are determined by the same value of the nonequilibrium interfacial tension.
We therefore propose that the nonequilibrium interfacial tension is relatively independent of the supersaturation protocol, as long as the degree of supersaturation is low (β_∞≪ 1) and the protocols converge at the same coexistence point.
This result agrees with a fundamental premise of CNT that σ_CNT is identical to the interfacial tension of a macroscopic interface between two coexisting phases <cit.>.
Thus, for the fitting parameter σ_CNT to be considered a nonequilibrium interfacial tension, it is important that its value is determined by the coexistence point alone.
Our simulation results along these two supersaturation protocols suggest that this is indeed the case.
§.§ Dependence of the nonequilibrium interfacial tension on the chemical drive at phase coexistence
We find that the nonequilibrium interfacial tension inferred from droplet-nucleation simulations is a function of the chemical drive at coexistence, (fig:fig5a).
As the fluid is driven further away from equilibrium, we find that the interfacial tension determined from nucleation kinetics, σ_CNT, tends to deviate further from the equilibrium value, σ_eq.
However, the manner in which σ_CNT changes as a function of differs between the two chemical-reaction models.
In the case of model I, σ_CNT increases monotonically with respect to , while it monotonically decreases in the case of model II.
Thus, knowing whether the chemical drive biases the driven reaction pathway in either the I→B or the B→I direction is insufficient to predict whether the nonequilibrium interfacial tension will increase or decrease relative to equilibrium.
Instead, the effective-equilibrium parameter offers a common explanation of the trends in the nonequilibrium interfacial tension for both chemical-reaction models (fig:fig5b).
This observation implies that the effects of the driven chemical reactions on the interfacial properties are related to the differences between the effective equilibrium behaviors of the coexisting phases.
This relationship can be explained by considering the relative populations of B and I-state particles at the interface between the coexisting liquid and vapor phases.
We first recall that a low value of the effective internal free-energy difference indicates that the B state is more stable than the I state in terms of its internal free energy; thus, decreasing this quantity increases the probability of finding a particle in the B state at a given u.
If the difference between the liquid- and vapor-phase values is positive, then the effective internal free-energy difference is lower in the vapor phase than in the liquid phase, meaning that the population of B-state particles is higher in the vapor than would be expected on the basis of an effective-equilibrium description of the liquid phase.
A negative difference implies the opposite behavior.
Because the effective internal free-energy difference changes monotonically with u for both chemical-reaction models, the sign of this difference also indicates whether the B-state particle population at the liquid–vapor interface is higher or lower than would be expected on the basis of the liquid phase.
Increasing the population of B-state particles at the interface relative to the liquid phase, at a positive difference, has the effect of lowering the effective free-energy cost of forming the interface.
As a result, a positive difference implies a reduced interfacial tension relative to the equilibrium value.
By the same logic, a negative difference implies an increase in the interfacial tension relative to the equilibrium value.
These qualitative arguments agree with the observed changes in the interfacial tension presented in fig:fig5b.
The effective-equilibrium approach therefore allows us to explain how the interfacial tension changes along the coexistence line for a given chemical-reaction model.
As noted above, this difference is predominantly determined by the behavior of the vapor phase in both chemical-reaction models studied here (fig:fig3c).
Thus, in the case of chemical-reaction model I, the vapor-phase effective free-energy difference decreases monotonically with the applied chemical drive, and the B-state particle population in the vapor phase increases.
The B-state population at the interface is affected in the same way, so that the interfacial tension decreases with the applied chemical drive.
In the case of model II, on the other hand, the vapor-phase effective free-energy difference increases with the applied chemical drive, so that the interfacial tension shows the opposite dependence.
These contrasting trends agree with our simulation findings in fig:fig5a.
These arguments are also borne out quantitatively within the FLEX framework (see app:FLEX-interface).
FLEX assumes that B-state particles at a flat interface experience a local potential energy of u = ϵ, which lies between u ≈ 4ϵ and u ≈ 0 for the liquid and vapor phases, respectively.
Because the effective internal free-energy difference changes monotonically with u for both chemical-reaction models, (<ref>) predicts that its value at u = ϵ lies between the liquid- and vapor-phase values, so that a positive (negative) difference also indicates an increase (decrease) in the B-state particle population at the interface relative to the liquid phase.
FLEX then predicts that the interfacial tension can be determined by defining an effective dimensionless interaction energy at the interface, βϵ̃, which can differ from the actual dimensionless interaction energy, βϵ.
When this difference is positive (negative), the relative increase (decrease) in the B-state particle population at the interface means that the effective interaction energy needed to adsorb B-state particles to the interface is decreased (increased) relative to βϵ.
The nonequilibrium interfacial tension can then be predicted on the basis of the effective interaction energy, βϵ̃, at the interface (see app:FLEX-interface).
These FLEX predictions are shown as solid lines in fig:fig5a.
§.§ Inferring a nonequilibrium interfacial tension using direct-coexistence simulations
We further examine the relationship between driven chemical reactions and the nonequilibrium interfacial tension by focusing on the roughness of the liquid–vapor interface.
Since our interpretation of the nucleation kinetics points to nonequilibrium effects at the droplet interface, we hypothesize that changes in the interfacial properties should be observable in simulations of liquid and vapor phases in direct coexistence.
In order to verify this, we test whether changes in the interfacial tension match changes in the height fluctuations of the liquid phase at the interface, which are directly related to the interfacial tension at equilibrium.
If the effective interaction strength increases, then intuition based on equilibrium phase behavior suggests that the interface should become smoother due to the greater effective free-energy cost of introducing curvature at the interface.
Similarly, we anticipate that rougher interfaces will be observed when the interfacial tension is lower.
In our direct-coexistence simulations, we focus on the location of the planar liquid–vapor interface in the normal direction.
During the simulation, the interface heights {h}, defined for each row as the left-most boundary of the B-state particles filling that row (inset of fig:fig6a), fluctuate as B-state particles attach to and detach from the liquid phase.
We then define the interfacial roughness, Δ h, to be the standard deviation of the heights {h},
Δ h ≡⟨(h - ⟨ h ⟩)^2⟩^1/2,
where the angle brackets denote an average over all L rows of the lattice.
This definition of Δ h ignores overhanging particles on the sides of the particle rows.
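A minimal sketch of this measurement is given below; the array layout (an L × L occupancy grid with the liquid slab at high column indices and B-state particles encoded as 1) is an assumption of the sketch, not a convention taken from the simulations.

```python
import numpy as np

def interfacial_roughness(lattice):
    """Interfacial roughness Delta_h = std of the per-row interface heights.

    `lattice` is assumed to be an L x L integer array with the liquid slab
    on the right (high column indices) and B-state particles encoded as 1.
    For each row, the height h is taken as the left-most boundary of the
    contiguous B-filled block attached to the liquid slab, which ignores
    overhanging particles detached from the slab."""
    L = lattice.shape[1]
    heights = np.empty(lattice.shape[0])
    for i, row in enumerate(lattice):
        h = L  # start at the right edge
        # walk leftwards while the row remains filled with B particles
        while h > 0 and row[h - 1] == 1:
            h -= 1
        heights[i] = h
    return float(np.std(heights))

# tiny illustration: a flat interface at column 4 of an 8 x 8 lattice
flat = np.zeros((8, 8), dtype=int)
flat[:, 4:] = 1
print(interfacial_roughness(flat))  # 0.0
```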
We simulate the interface between the coexisting phases on an L × L lattice that is equally divided between liquid and vapor phases, in the same manner as described in subsection:direct_coexistence.
Due to the system-size dependence of the coexistence point (fig:fig2c) and the maximum length scale of fluctuations imposed by the finite lattice dimension L <cit.>, the interfacial roughness is inherently subject to finite-size effects.
We therefore perform simulations at the L-dependent coexistence point of the finite-sized lattice with L = 64, as opposed to the coexistence point in the macroscopic limit.
We interpret the roughness of the liquid–vapor interface using the equilibrium solid-on-solid (SOS) approximation <cit.>.
In this approximation, B-state particles are assumed to attach to the interface only at the top of the particle rows, without overhanging neighboring rows.
Assuming that the interface can be described using the equilibrium SOS approximation, the observed roughness, , can be related to the equilibrium value, _eq, via
Δ h / Δ h_eq = sinh(βϵ̃/4) / sinh(βϵ/4),
where βϵ̃ is the effective dimensionless interaction strength at the interface under the SOS approximation.
We can then obtain the effective interfacial tension, σ_rough(βϵ̃), by solving for βϵ̃ using (<ref>) and applying an analytical formula for the equilibrium lattice-gas model [see (<ref>) in app:FLEX-interface].
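A sketch of this inversion step is shown below; it simply solves the SOS relation in the form quoted above for βϵ̃ given the measured roughness ratio, and leaves the final conversion to σ_rough to the equilibrium lattice-gas formula referenced in the appendix. The numerical values are placeholders.

```python
import numpy as np

def effective_interaction_from_roughness(ratio, beta_eps):
    """Invert Delta_h / Delta_h_eq = sinh(beta*eps_eff/4) / sinh(beta*eps/4)
    for the effective dimensionless interaction strength beta*eps_eff."""
    return 4.0 * np.arcsinh(ratio * np.sinh(beta_eps / 4.0))

# example: a 10% change in roughness at beta*eps = 1.5 (placeholder values)
print(effective_interaction_from_roughness(ratio=1.1, beta_eps=1.5))
```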
Analyzing the interfacial roughness within the framework of the SOS model broadly supports our hypothesis of a chemically driven nonequilibrium interfacial tension.
In fig:fig6a, we show a comparison between the change in the roughness relative to equilibrium, (Δ h - Δ h_eq) / Δ h_eq, and predictions based on FLEX.
For these theoretical predictions, we use the FLEX expression for the effective dimensionless interaction strength, βϵ̃ [see (<ref>) in app:FLEX-interface], in combination with the SOS relation, (<ref>).
The direct-coexistence measurements and FLEX predictions are consistent in that the interfacial roughness increases with in the case of chemical-reaction model I, while it decreases in the case of model II.
These results are also consistent with the equilibrium intuition that the interfacial roughness increases when the interfacial tension decreases, and vice versa.
To directly compare the results of inference methods based on nucleation kinetics and interfacial roughness, we plot the relationship between σ_CNT and σ_rough in fig:fig6b.
Each point in this comparison corresponds to a unique nonequilibrium coexistence point, β.
Although the dynamic range of σ_rough tends to be smaller than that of σ_CNT, which is likely a consequence of the SOS approximation used in the former measurement, we observe a strong correlation between these two independent inferences of the nonequilibrium interfacial tension.
Importantly, this correlation holds equally well for both chemical-reaction models.
We therefore conclude that both inference methods yield compatible measurements of the nonequilibrium interfacial tension, and that this interfacial property is entirely determined by the nonequilibrium coexistence conditions.
§ DISCUSSION
In this paper, we show that driven chemical reactions alter the material properties of interfaces between coexisting phases in nonequilibrium fluids.
To this end, we consider two models of inhomogeneous chemical-reaction kinetics, which are either accelerated or decelerated in the condensed liquid phase relative to the dilute vapor phase, using a minimal lattice model <cit.>.
We find that the interfacial tension, determined either from measurements of droplet-nucleation kinetics or from measurements of fluctuations at a planar interface, changes as the system is driven out of equilibrium.
Our interpretation of this effect as a nonequilibrium interfacial tension is supported by the strong correlation between these two independent measurements.
However, the relationship between the nonequilibrium interfacial tension and the applied chemical drive depends on the details of the chemical-reaction model.
We can explain the observed deviations in the interfacial properties using a theoretical framework based on effective-equilibrium models of the coexisting phases.
We interpret our simulation results by mapping each phase in a nonequilibrium fluid to an effective-equilibrium model whose particle density distribution is closest to that of the NESS under a constant chemical drive.
By calculating the effective free-energy differences between the internal states of particles in both phases, we find that we can explain the bulk-phase properties of driven fluids with different chemical-reaction models.
Then, by applying this effective-equilibrium approach to predict the effective free-energy cost of forming a liquid–vapor interface, we obtain a consistent explanation for the observed trends in the nonequilibrium interfacial tensions with different chemical-reaction models.
The concept of effective equilibrium thus plays an essential role in our analysis of the effects of driven chemical reactions.
Nonetheless, we emphasize that our simulation approach, which is based on stochastic thermodynamics, and the underlying lattice model do not make any assumptions regarding an effective equilibrium.
Instead, we utilize the concept of an effective equilibrium only when interpreting our simulation results or when generating approximate analytical predictions using FLEX.
Alternatively, the concept of an effective equilibrium can be employed as a starting point for modeling the thermodynamic and transport properties of chemically driven fluids.
For example, in mesoscale modeling approaches based on linear irreversible thermodynamics <cit.>, the NESS is obtained starting from a free-energy functional that implies local equilibration within a fluid element <cit.>.
In such models, therefore, local effective equilibrium is assumed to provide a complete description of the system, from which stationary states and phase-transformation pathways can be computed, rather than an approximation for analyzing nonequilibrium phenomena as in this paper.
Although we have studied the effects of driven chemical reactions in a reaction-limited regime in this paper, we expect that our qualitative predictions of nonequilibrium interfacial properties will also hold beyond this regime.
Extending our modeling approach to examine the influence of particle diffusion on nonequilibrium interfacial properties thus represents a promising direction for future study.
Studying nonequilibrium interfacial properties may also be possible within the context of linear irreversible thermodynamics <cit.>, although local effective-equilibrium assumptions are integral to such models as noted above.
However, we note that any study of interfacial properties in mean-field models of chemically driven fluids <cit.> will need to disentangle interfacial effects from those arising from changes in the coexistence conditions.
For example, distinguishing between interfacial and bulk effects is important for comparing nucleation behavior between stochastic thermodynamics-based <cit.> and mean-field approaches <cit.>.
Our simulations suggest that measurements of interfacial roughness can serve as a general method for identifying nonequilibrium effects in driven fluids.
In particular, our analysis indicates that the interfacial tension under different nonequilibrium coexistence conditions can be inferred quantitatively regardless of the kinetics of driven chemical reactions.
This noninvasive approach is particularly appealing, as it could be applied directly to living samples <cit.> and to “aging” systems whose material properties change over time <cit.>.
We note that a related strategy has been employed for droplet fusion processes in human cell nuclei, in which the interfacial tension is determined from the fluctuation–dissipation theorem <cit.>.
This approach is similar to our proposal, as both strategies are based on the concept of an effective equilibrium; however, our simulations suggest that our inference approach could potentially be used under far-from-equilibrium conditions, where |β| > 1.
Finally, we note that recently developed experimental settings for studying chemically driven fluids could provide suitable platforms for testing our predictions.
To realize the inhomogeneous driven chemical reactions that are essential for observing nonequilibrium interfacial properties, the reaction kinetics along the driven pathway could be tuned by controlling the partitioning of enzymes into phase-separated droplets <cit.> or by engineering the reactions to be dependent on the local concentration of droplet material <cit.>.
Investigations of interface roughening could then be used to probe the nonequilibrium phase behavior of these experimental systems.
Since we anticipate that interface roughening could be observed most easily near the critical point, it will also be necessary to inspect the critical behavior of inhomogeneously driven fluids.
Such studies present another important direction for future investigation via theory and simulation.
The authors thank Dr. Sushant Saryal for helpful discussions and comments on the manuscript.
This work is supported by the National Science Foundation (DMR-2143670).
§ FIXED LOCAL ENVIRONMENT APPROXIMATION (FLEX)
In this appendix, we reproduce, for the reader's convenience, the results of the “Fixed Local Environment approXimation” (FLEX) derived in our previous work <cit.>.
We refer the reader to reference <cit.> for further details.
In FLEX, we assume that each lattice site relaxes to the steady-state much more rapidly than the surrounding local environment, which is represented by the local potential energy u.
Based on this timescale separation, we consider u for the local environment to be fixed, and we calculate the steady-state distribution ρ̃_i (i= E, B, or I) within a tagged site in accordance with the Markovian transition rates shown in fig:fig1a.
Given this assumption of a fixed local environment, ρ̃_B and ρ̃_I can be regarded as the number densities of particles in the B and I states at steady state.
We then map our nonequilibrium model to an effective equilibrium with identical particle number densities ρ̃_B and ρ̃_I.
In this mapping, we define effective fugacities ≡ (/)exp(β u) and ≡/, with the single-site partition function ξ̃=1++.
Because the number densities depend on u, the liquid and vapor phases, which are characterized by different average potential energy, are mapped to distinct effective equilibria.
We quantify the difference in the effective thermodynamic properties of these phases in terms of an effective internal free-energy difference, β≡ -ln(/).
The resulting prediction, (<ref>) in the main text, captures the general trend of how each phase deviates from thermal equilibrium as the system is driven away from equilibrium.
This equation also highlights the importance of controlling how the driven reaction rate depends on the local potential energy in order to realize inhomogeneous chemical reactions.
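The linear-algebra step behind this mapping — solving for the steady state of the tagged site's three internal states (E, B, I) with the environment held fixed — can be sketched as follows. The transition rates of the actual model depend on the local potential energy u and the chemical drive and are specified in fig:fig1a; the numbers below are placeholders only.

```python
import numpy as np

def flex_steady_state(rates):
    """Steady state of a three-state (E, B, I) continuous-time Markov chain.

    `rates[(a, b)]` is the transition rate a -> b; the actual rate
    expressions of the lattice model (which depend on the fixed local
    potential energy u and on the chemical drive) are placeholders here."""
    states = ["E", "B", "I"]
    W = np.zeros((3, 3))
    for (a, b), k in rates.items():
        i, j = states.index(a), states.index(b)
        W[j, i] += k          # gain of state j from state i
        W[i, i] -= k          # loss of state i
    # the steady state spans the null space of the rate matrix W
    _, _, vh = np.linalg.svd(W)
    p = np.abs(vh[-1])
    return dict(zip(states, p / p.sum()))

# illustration with arbitrary placeholder rates
print(flex_steady_state({("E", "B"): 0.2, ("B", "E"): 0.1,
                         ("B", "I"): 0.05, ("I", "B"): 0.3,
                         ("I", "E"): 0.02, ("E", "I"): 0.04}))
```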
FLEX illustrates the roles of and in tuning the relative flux of the two competing reaction pathways between the particle states.
The flux through the direct I→B pathway is j_I→B =, while the flux through the indirect I→E→B pathway is j_I→E→B = /(1+e^β).
Thus, the relative flux between the two pathways is
j_I→B/j_I→E→B = (1+e^β).
FLEX predicts that along the coexistence line, specified by and (fig:fig3a), this relative flux gradually changes, leading to deviations in the interfacial properties and phase-transition kinetics relative to an equilibrium fluid.
FLEX makes quantitative predictions of the phase behavior of the open system at a given nonequilibrium drive, , by assuming that the particle–hole symmetry of the two-dimensional lattice-gas model is a reasonable approximation at the effective equilibrium.
In the equilibrium lattice-gas model, the liquid and vapor phases coexist at μ = 2ϵ, where μ is the chemical potential of a particle, as a result of this particle–hole symmetry <cit.>.
The equilibrium supersaturation, S, is thus approximately equal to exp[β(μ-2ϵ)], which can be interpreted as the ratio ρ/(1-ρ) at u=2ϵ, where ρ is the number density of particles.
We take this ratio as the definition of supersaturation S̃ in the FLEX-mapped effective equilibrium,
S̃≡[1-]_u=2ϵ = (2ϵ; )e^-2βϵ(2ϵ; )+1.
We then relate the FLEX supersaturation, S̃, to the “thermodynamic” driving force of the nonequilibrium phase transition, , via S̃≡exp(β), and we associate phase coexistence with S̃ = 1.
The number density of particles in the vapor phase, , is taken to be the sum of the B and I-state number densities in the effective equilibrium at u = 0,
≡(0; )+(0; ).
Here we assume that the vapor phase is sparsely populated with particles and so interactions among them are negligible.
Under a specified chemical drive at phase coexistence, , Eqs. (<ref>) and (<ref>) uniquely determine the effective fugacities and effective equilibrium of the open system.
§ APPLICATION OF FLEX TO CHEMICAL-REACTION MODELS
Eqs. (<ref>) and (<ref>) can be simplified to provide a qualitative understanding of the nonequilibrium phase behavior of the macroscopic liquid and vapor phases at coexistence.
For the kinetic models employed in this paper, the reaction rate plateaus at low potential energies, either approaching k^∘ in model I or 0 in model II (fig:fig1c).
The plateau allows us to approximately equate (4ϵ) and (2ϵ), meaning that the chemical-reaction kinetics in the liquid phase can be used to determine phase coexistence.
We further assume that the I state is much more stable than the B state so that e^β≫ 1.
With these approximations, we can directly evaluate the coexistence conditions for both chemical-reaction models.
In the case of chemical-reaction model I, approaches zero at u = 0 (fig:fig1b), which leads to identical fugacities in the reservoir and in the FLEX approximation of the vapor phase.
Under this condition, the FLEX coexistence criterion, S̃ = 1, leads to a self-consistent equation for ,
e^β = e^-2βϵ[ (1+(4ϵ)(1+e^β)) / (1+(4ϵ)(1+e^β)e^β) ].
Substituting (<ref>) into (<ref>) yields β = ln( e^-2βϵ), which remains constant along the coexistence line.
When is much higher than the coexistence condition for the equilibrium two-state lattice-gas model, e^-2βϵ≫ 1, this prediction for agrees well with the FLEX result at equilibrium, β = β = β_eq = ln[(e^-2βϵ+1)-1].
In the case of chemical-reaction model II, becomes small near u = 2ϵ, so that the FLEX fugacities become identical to the reservoir values at this value of u.
Applying this condition to Eqs. (<ref>) and (<ref>) leads to a result that is similar to (<ref>) in the case of model I,
e^β = e^-2βϵ[ (1+(0){1+e^β( + )}) / (1+(0)(1+e^β)e^β) ] ≈ e^-2βϵ,
where the approximation in the second step follows from the assumption e^β≫ 1.
(<ref>) indicates that is nearly independent of , which agrees with the results shown in fig:fig3a.
Evaluating (<ref>) leads to β≈β, since (4ϵ)≈0 in this model.
§ PREDICTING INTERFACIAL PROPERTIES WITH FLEX
We now apply FLEX to predict the nonequilibrium interfacial tension.
To this end, we focus on the effective free-energy cost of attaching a B-state particle to a flat liquid–vapor interface.
Since the free-energy cost of attaching a B-state adatom is -βϵ in equilibrium, we propose that an appropriate effective-equilibrium model of a nonequilibrium interface should reproduce the relation = exp(βϵ) / [1 + exp(βϵ)] at a tagged lattice site on the interface.
We therefore invert this relationship to extract the effective dimensionless interaction strength, βϵ̃, of an adatom at the interface at steady state,
βϵ̃≡ln[1-]_u=ϵ = ln[(ϵ; )(ϵ; )+1]-βϵ,
where is the FLEX approximation of the B-state number density at coexistence.
The notation [·]_u=ϵ indicates that the tagged lattice site experiences a fixed local environment consisting of exactly one B-state particle.
Evaluating (<ref>) using this effective interaction strength predicts the deviation of the interfacial roughness from the equilibrium value within the FLEX framework.
From the effective dimensionless interaction strength at the interface, βϵ̃, we predict the nonequilibrium interfacial tension, σ(ϵ̃), using the following analytical formula at equilibrium <cit.>:
σ(ϵ̃) = √(4ϵ̃β^-2πχ(β)∫_β_c^β K'(8[cosh(β'ϵ̃)-1][cosh(β'ϵ̃)+1]^2) [cosh(β'ϵ̃)-3sinh(β'ϵ̃)] dβ'),
where K' is the elliptic integral of the first kind, χ(β) = [1-sinh^-4(βϵ̃/2)]^1/8, and β_c is the inverse critical temperature given by β_c|ϵ̃| = 2ln(1+√(2)).
This prediction captures the general trend of the interfacial tension under nonequilibrium conditions with different chemical-reaction kinetics (fig:fig5a).
§ REFERENCES

[1] C. A. Weber, D. Zwicker, F. Jülicher, and C. F. Lee, Physics of active emulsions, Rep. Prog. Phys. 82, 064601 (2019).
[2] D. Zwicker, The intertwined physics of active chemical reactions and phase separation, Curr. Opin. Coll. Inter. Sci. 61, 101606 (2022).
[3] Z. Monahan, V. H. Ryan, A. M. Janke, K. A. Burke, S. N. Rhoads, G. H. Zerze, R. O'Meally, G. L. Dignon, A. E. Conicella, W. Zheng, R. B. Best, R. N. Cole, J. Mittal, F. Shewmaker, and N. L. Fawzi, Phosphorylation of the FUS low-complexity domain disrupts phase separation, aggregation, and toxicity, EMBO J. 36, 2951 (2017).
[4] T. H. Kim, B. Tsang, R. M. Vernon, N. Sonenberg, L. E. Kay, and J. D. Forman-Kay, Phospho-dependent phase separation of FMRP and CAPRIN1 recapitulates regulation of translation and deadenylation, Science 365, 825 (2019).
[5] M. L. Nosella and J. D. Forman-Kay, Phosphorylation-dependent regulation of messenger RNA transcription, processing and translation within biomolecular condensates, Curr. Opin. Cell Biol. 69, 30 (2021).
[6] J. Berry, C. P. Brangwynne, and M. Haataja, Physical principles of intracellular organization via active and passive phase transitions, Rep. Prog. Phys. 81, 046601 (2018).
[7] D. Zwicker, A. A. Hyman, and F. Jülicher, Suppression of Ostwald ripening in active emulsions, Phys. Rev. E 92, 012317 (2015).
[8] J. D. Wurtz and C. F. Lee, Chemical-reaction-controlled phase separated drops: Formation, size selection, and coarsening, Phys. Rev. Lett. 120, 078102 (2018).
[9] J. Kirschbaum and D. Zwicker, Controlling biomolecular condensates via chemical reactions, J. R. Soc. Interface 18, 20210255 (2021).
[10] M. Tena-Solsona, J. Janssen, C. Wanzke, F. Schnitter, H. Park, B. Rieß, J. M. Gibbs, C. A. Weber, and J. Boekhoven, Accelerated ripening in chemically fueled emulsions, Chem. Sys. Chem. 3, e2000034 (2021).
[11] D. Zwicker, R. Seyboldt, C. A. Weber, A. A. Hyman, and F. Jülicher, Growth and division of active droplets provides a model for protocells, Nature Phys. 13, 408 (2017).
[12] L. Demarchi, A. Goychuk, I. Maryshev, and E. Frey, Enzyme-enriched condensates show self-propulsion, positioning, and coexistence, Phys. Rev. Lett. 130, 128401 (2023).
[13] J. Bauermann, S. Laha, P. M. McCall, F. Jülicher, and C. A. Weber, Chemical kinetics and mass action in coexisting phases, J. Am. Chem. Soc. 144, 19294 (2022).
[14] C. Donau, F. Späth, M. Stasi, A. M. Bergmann, and J. Boekhoven, Phase transitions in chemically fueled, multiphase complex coacervate droplets, Angew. Chem. Int. Ed. 61, e202211905 (2022).
[15] Y. Cho and W. M. Jacobs, Tuning nucleation kinetics via nonequilibrium chemical reactions, Phys. Rev. Lett. 130, 128203 (2023).
[16] M. E. Cates and C. Nardini, Classical nucleation theory for active fluid phase separation, Phys. Rev. Lett. 130, 098203 (2023).
[17] N. Ziethen, J. Kirschbaum, and D. Zwicker, Nucleation of chemically active droplets, Phys. Rev. Lett. 130, 248201 (2023).
[18] T. J. Nott, E. Petsalaki, P. Farber, D. Jervis, E. Fussner, A. Plochowietz, T. D. Craggs, D. P. Bazett-Jones, T. Pawson, J. D. Forman-Kay, and A. J. Baldwin, Phase transition of a disordered nuage protein generates environmentally responsive membraneless organelles, Mol. Cell 57, 936 (2015).
[19] J. Söding, D. Zwicker, S. Sohrabi-Jahromi, M. Boehning, and J. Kirschbaum, Mechanisms for active regulation of biomolecular condensates, Trends Cell Biol. 30, 4 (2020).
[20] M. Besse, G. Fausti, M. E. Cates, B. Delamotte, and C. Nardini, Interface roughening in nonequilibrium phase-separated systems, Phys. Rev. Lett. 130, 187102 (2023).
[21] B. G. O'Flynn and T. Mittag, The role of liquid–liquid phase separation in regulating enzyme activity, Curr. Opin. Cell Biol. 69, 70 (2021).
[22] B. S. Schuster, R. M. Regy, E. M. Dolan, A. Kanchi Ranganath, N. Jovic, S. D. Khare, Z. Shi, and J. Mittal, Biomolecular condensates: Sequence determinants of phase separation, microstructural organization, enzymatic activity, and material properties, J. Phys. Chem. B 125, 3441 (2021).
[23] U. Seifert, Stochastic thermodynamics, fluctuation theorems and molecular machines, Rep. Prog. Phys. 75, 126001 (2012).
[24] C. Van den Broeck and M. Esposito, Ensemble and trajectory thermodynamics: A brief introduction, Physica A 418, 6 (2015).
[25] N. B. Wilding, Critical-point and coexistence-curve properties of the Lennard-Jones fluid: A finite-size scaling study, Phys. Rev. E 52, 602 (1995).
[26] D. T. Gillespie, Stochastic simulation of chemical kinetics, Annu. Rev. Phys. Chem. 58, 35 (2007).
[27] R. K. Pathria, Statistical Mechanics, 2nd ed. (Butterworth-Heinemann, Oxford, 1996).
[28] J.-P. Hansen and I. R. McDonald, Theory of Simple Liquids: With Applications to Soft Matter, 4th ed. (Academic Press, Oxford, 2013).
[29] A. P. Solon, Y. Fily, A. Baskaran, M. E. Cates, Y. Kafri, M. Kardar, and J. Tailleur, Pressure is not a state function for generic active fluids, Nature Phys. 11, 673 (2015).
[30] A. K. Omar, H. Row, S. A. Mallory, and J. F. Brady, Mechanical theory of nonequilibrium coexistence and motility-induced phase separation, Proc. Natl. Acad. Sci. USA 120, e2219900120 (2023).
[31] A. Warmflash, P. Bhimalapuram, and A. R. Dinner, Umbrella sampling for nonequilibrium processes, J. Chem. Phys. 127, 154112 (2007).
[32] D. Chandler, Introduction to Modern Statistical Mechanics (Oxford University Press, New York, 1987).
[33] E. G. Noya, C. Vega, and E. de Miguel, Determination of the melting point of hard spheres from direct coexistence simulation methods, J. Chem. Phys. 128, 154507 (2008).
[34] J. J. Christensen, K. Elder, and H. C. Fogedby, Phase segregation dynamics of a chemically reactive binary mixture, Phys. Rev. E 54, R2212 (1996).
[35] M. P. Allen and D. J. Tildesley, Computer Simulation of Liquids, 2nd ed. (Oxford University Press, New York, 2017).
[36] D. W. Oxtoby, Homogeneous nucleation: Theory and experiment, J. Phys.: Condens. Matter 4, 7627 (1992).
[37] S. Ryu and W. Cai, Validity of classical nucleation theory for Ising models, Phys. Rev. E 81, 030601 (2010).
[38] R. J. Allen, C. Valeriani, and P. R. ten Wolde, Forward flux sampling for rare event simulations, J. Phys.: Condens. Matter 21, 463102 (2009).
[39] S. Chaudhury and D. Makarov, A harmonic transition state approximation for the duration of reactive events in complex molecular arrangements, J. Chem. Phys. 133, 034118 (2010).
[40] G. Hummer, From transition paths to transition states and rate coefficients, J. Chem. Phys. 120, 516 (2004).
[41] S. Auer and D. Frenkel, Quantitative prediction of crystal-nucleation rates for spherical colloids: A computational approach, Annu. Rev. Phys. Chem. 55, 333 (2004).
[42] Y. Saito, Statistical Physics of Crystal Growth (World Scientific, Singapore, 1996).
[43] L. Jawerth, E. Fischer-Friedrich, S. Saha, J. Wang, T. Franzmann, X. Zhang, J. Sachweh, M. Ruer, M. Ijavi, S. Saha, J. Mahamid, A. A. Hyman, and F. Jülicher, Protein condensates as aging Maxwell fluids, Science 370, 1317 (2020).
[44] C. M. Caragine, S. C. Haley, and A. Zidovska, Surface fluctuations and coalescence of nucleolar droplets in the human cell nucleus, Phys. Rev. Lett. 121, 148101 (2018).
[45] O. A. Saleh, B.-J. Jeon, and T. Liedl, Enzymatic degradation of liquid droplets of DNA is modulated near the phase boundary, Proc. Natl. Acad. Sci. U.S.A. 117, 16160 (2020).
[46] F. Späth, C. Donau, A. M. Bergmann, M. Kränzlein, C. V. Synatschke, B. Rieger, and J. Boekhoven, Molecular design of chemically fueled peptide–polyelectrolyte coacervate-based assemblies, J. Am. Chem. Soc. 143, 4782 (2021).
[47] K. K. Nakashima, M. H. van Haren, A. A. M. André, I. Robu, and E. Spruijt, Active coacervate droplets are protocells that grow and resist Ostwald ripening, Nat. Comm. 12, 3819 (2021).
[48] V. A. Shneidman, K. A. Jackson, and K. M. Beatty, On the applicability of the classical nucleation theory in an Ising system, J. Chem. Phys. 111, 6932 (1999).
Autoionization of high-ℓ core-excited Rydberg states of alkaline-earth-metal atoms

Eduardo Marin-Bujedo and Matthieu Génévriez
Institute of Condensed Matter and Nanosciences, Université catholique de Louvain, BE-1348 Louvain-la-Neuve, Belgium
arXiv: http://arxiv.org/abs/2307.03002v1 [physics.atom-ph]
====================================================================================
The autoionization of core-excited Rydberg states is theoretically studied for
a broad range of principal and angular-momentum quantum numbers n and ℓ
in alkaline-earth-metal atoms. We combined two theoretical methods to calculate
accurate autoionization rates for n=10-65 and ℓ=0-45 over the
100 orders of magnitude that they span. The strong interaction between the two
valence electrons for low ℓ states is treated from first principles with
configuration interaction with exterior complex scaling, while at large ℓ
the weak correlation is described by a perturbative approach and
arbitrary-precision floating-point arithmetic. The results, which we benchmark
against available experimental data, provide autoionization rates for the
Np_1/2, 3/2 and, when applicable, (N-1)d_3/2, 5/2 ion-core states of
Mg, Ca and Sr (N=3-5). Using the extensive set of calculated data, we analyze the
dependence of the rates on ℓ and identify five general laws of the
autoionization of high-ℓ states. An empirical formula describing the
scaling of the rates with ℓ is suggested.
§ INTRODUCTION
When the ion core of a Rydberg atom or molecule is excited the system can
decay via three different mechanisms: fluorescence of the ion core,
fluorescence of the Rydberg electron, and, because the energy of the system is
above the first ionization threshold, autoionization. Between these three
mechanisms, autoionization is the fastest by up to several orders of magnitude
for states in which the orbital angular momentum of the Rydberg electron is
low <cit.>. The dynamics governing autoionization are a sensitive probe
of electron correlations and, as such, have been extensively studied in both the time and frequency
domains <cit.>. Experiments based on ion-core
excitation <cit.> have unraveled some of the fascinating electron
dynamics that take place in the dense manifolds of core-excited Rydberg
states <cit.>, and studies
are ongoing to probe the complex correlations that occur for even higher
degrees of core excitation <cit.>. The development of
multi-channel quantum defect theory has led to a clear and powerful way of
understanding autoionization as the inelastic scattering of the Rydberg
electron off the ion core. An alternative method, the configuration interaction
with exterior complex scaling (CI-ECS), was recently
used <cit.> to describe the dynamics of
core-excited Rydberg states, in particular for higher-lying core excitation
where it provided a spectacular visualization of electron
dynamics <cit.>.
The behavior of autoionization when the Rydberg electron has a low
orbital-angular-momentum quantum number ℓ has been extensively studied
(see, e.g., Refs. <cit.> for reviews). In the
absence of series perturbations, the autoionization rates of a given series,
converging to a given ion-core state, scale with the principal quantum number
of the Rydberg electron as n^-3 <cit.>. This scaling simply reflects
the probability of finding the Rydberg electron near the nucleus,
which is where it inelastically scatters off the ion core and autoionizes. Autoionization for states with high ℓ values, on the other hand, is much less well characterized. The centrifugal barrier
high ℓ values, on the other hand, is much less well characterized. The centrifugal barrier
ℓ(ℓ+1)/2r^2
prevents the penetration of the Rydberg-electron wavefunction into the ion core
region, thereby suppressing autoionization. Pioneering experimental studies in
Sr have shown that the rates indeed drop rapidly with ℓ <cit.>, a
result that was later verified for other series and
species <cit.> and confirmed by
theoretical predictions for ℓ≲
10 <cit.>. Fluorescence-decay mechanisms have
been observed to dominate the decay of core-excited Rydberg states for
sufficiently high ℓ values <cit.>. While values or upper limits
of the autoionization rates have been measured for ℓ as high as 50 <cit.>,
theoretical values for ℓ≳ 10 are lacking, a fact that can be
attributed to the difficulty of calculating the matrix elements involved in the
rates.
The autoionization rates of high-ℓ core-excited Rydberg states play an
important role in pulsed-field-ionization zero-kinetic-energy photoelectron
spectroscopy <cit.>. Their low values stabilize core-excited
Rydberg states against autoionization <cit.>, which permits the
measurement of photoelectron spectra of atoms, molecules and ions at high
resolution <cit.>. Autoionization also has
significant interest in cold-atoms experiments where it has been used to image
ultracold Rydberg gases <cit.>, track the formation of ultracold
neutral plasmas <cit.> or realize high-fidelity state detection of
Rydberg atoms in an atomic array <cit.>. The possibility to
suppress autoionization offers many interesting properties for quantum optics
and quantum information experiments with Rydberg atoms <cit.>.
Ion-core fluorescence, which can only be observed if it is faster than
autoionization, has been used to image ultracold Rydberg
gases <cit.>, and optical control of the ion core is a promising
route to manipulate Rydberg atoms without perturbing the Rydberg
electron <cit.>. In these perspectives, it appears
desirable to better understand the behavior of autoionization with ℓ,
from regions where it predominates over other decay rates to regions where it
is completely suppressed.
We present a theoretical study of the autoionization of alkaline-earth-metal atoms
(Mg, Ca, Sr) in core-excited Rydberg states. These species were chosen for two
reasons. First, they are widely used in the quantum optics and quantum
information applications mentioned above. Second, their electronic structure is
both amenable to high accuracy calculations and simple enough so that the
different dynamics governing autoionization can be identified and understood.
We developed and used two theoretical methods to calculate autoionization
rates from ℓ=0 all the way to ℓ=45, for n = 10 - 65 and for
ion-core states comprising the excited states Np_1/2, Np_3/2, and, when
applicable, (N-1)d_3/2 and (N-1)d_5/2 (N=3, 4 and 5 for Mg, Ca and Sr, respectively). The relevant energy-level structures and
energy values of the three species are summarized in
Fig. <ref>. To calculate the rates over such a broad range
of states, we combined the capability of
CI-ECS <cit.> to treat the complete
two-electron dynamics from first principles with a perturbative treatment of
electron correlations to calculate the extremely small autoionization rates of
high-ℓ states with arbitrary numerical precision. The two methods are
discussed in detail in Sec. <ref>. The results, presented in
Sec. <ref>, provide a complete picture of the autoionization rates of
the core-excited Rydberg states of Mg, Ca and Sr. They allow us to identify
general trends and properties of the autoionization of high-ℓ states,
which we rationalize by investigating the underlying electron dynamics. An empirical
formula describing the scaling of the rates with ℓ is suggested.
§ THEORY
§.§ CI-ECS calculations
The description of core-excited Rydberg states is a challenging task for
atomic-structure techniques because it requires one to treat the electronic motion
far from the nucleus (r ∼ 3000 a_0 for n=45), to calculate electronic
correlations over large regions of configuration space, and to describe
continuum processes and resonances. As in other studies (see
Ref. for a review), we reduce the complexity of the problem
by treating alkaline-earth-metal atoms as quasi two-electron systems. The two
valence electrons, subject to the effective field of the closed-shell doubly
charged ion core, are considered explicitly. The effect of the remaining
electrons, on the other hand, is accounted for with a fitted effective core model
potential. The effective Hamiltonian describing the two valence electrons is
given by
Ĥ(r_1,r_2) = -1/2∇_1^2 - 1/2∇_2^2 + V_ℓ_1(r_1) + V_ℓ_2(r_2) + 1/r_12
+ V^SO_ℓ_1j_1(r_1) + V^SO_ℓ_2 j_2(r_2) + V^(2)_pol(r_1, r_2) ,
where the vectors r_1 and r_2 represent the positions of the two
electrons and r_12 is the distance between them. The Hamiltonian includes
ℓ-dependent model potentials V_ℓ_i(r_i) representing the effect of the
doubly charged ion core on the valence electrons independently (i=1, 2). It also
includes the electron repulsion 1/r_12 and the spin-orbit interaction
V^SO_ℓ_i j_i(r_i), with j_i the total-angular-momentum quantum number of each electron. The two-electron term
V^(2)_pol(r_1,
r_2) represents the polarization of the core upon the concerted motion
of the two electrons <cit.>.
The model potentials V_ℓ(r) are of the form proposed in Ref. ,
V_ℓ(r) = -(1/r)[ 2 + (Z-2)e^-α_1^ℓ r + α_2^ℓ e^-α_3^ℓ r] - [α_cp/(2r^4)] W_6(r; r_c^ℓ)
,
with the cutoff function W_6 defined as
W_6(r; r_c^ℓ) = 1 - e^-(r/r_c^ℓ)^6 .
The parameters α_1^ℓ, α_2^ℓ, α_3^ℓ and r_c^ℓ
have been optimized on the experimental values of the energy levels of the
singly-charged ion in Refs. <cit.>, <cit.>
and <cit.> for Mg^+, Ca^+ and Sr^+, respectively. Their
values are listed in Table <ref>.
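A sketch of how this model potential can be evaluated numerically is given below; it follows the expression exactly as written above (in atomic units), and the parameter values in the example are placeholders rather than the fitted values of Table <ref> (Z = 12 is the nuclear charge of Mg).

```python
import numpy as np

def W6(r, rc):
    """Cutoff function W_6(r; r_c) = 1 - exp[-(r/r_c)^6]."""
    return 1.0 - np.exp(-(r / rc) ** 6)

def model_potential(r, Z, a1, a2, a3, rc, alpha_cp):
    """l-dependent model potential V_l(r) as written above, in atomic units."""
    return (-(2.0 + (Z - 2.0) * np.exp(-a1 * r) + a2 * np.exp(-a3 * r)) / r
            - alpha_cp / (2.0 * r**4) * W6(r, rc))

# placeholder parameters for illustration only (not the fitted values)
r = np.linspace(0.5, 20.0, 4)
print(model_potential(r, Z=12, a1=3.0, a2=10.0, a3=2.0, rc=1.2, alpha_cp=0.49))
```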
The spin-orbit
interaction is given by <cit.>
V^SO_ℓ j(r) = α_SO^ℓ (α^2/2) ℓ·s (1/r) (dV_ℓ/dr) [ 1 - (α^2/2) V_ℓ(r)]^-2 ,
with α the fine-structure constant. The additional scaling factor
α_SO^ℓ was introduced and adjusted to reproduce the
spin-orbit splittings of the low-lying excited states of the ion with an
accuracy of better than 1 cm^-1, instead of the 10 cm^-1 accuracy
obtained without it. Its values are also given in
Table <ref>. The 1-cm^-1 accuracy is
required to predict perturbations of the energies and autoionization rates
caused by Rydberg states of adjacent series with sufficient accuracy (see
Ref. for examples with Mg). Without it, the perturbations would occur at the wrong energies and therefore for the wrong Rydberg states.
The two-electron Schrödinger equation associated with the
Hamiltonian (<ref>) is solved using the CI-ECS
method, which has been described in detail
elsewhere <cit.>. Briefly, the
two-electron wavefunction is written as a linear combination of
anti-symmetrized products of one-electron spin-orbitals. Angular momenta are
coupled in the jj coupling scheme, which is the most appropriate for
core-excited Rydberg states <cit.>. Autoionization and other continuum
processes are treated using the technique of exterior complex scaling
(ECS) <cit.>. Following ECS, the radial coordinates r_1
and r_2 of the electrons are rotated into the complex plane by an angle
θ beyond a radius R_0,
r →
r if r < R_0
R_0 + (r - R_0) e^iθ if r ≥ R_0
.
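A minimal sketch of this coordinate mapping, with purely illustrative values of R_0 and θ, is:

import numpy as np

def ecs_contour(r, R0, theta):
    # real coordinate below R0, rotated by theta into the complex plane beyond it
    r = np.asarray(r, dtype=float)
    return np.where(r < R0, r + 0.0j, R0 + (r - R0) * np.exp(1j * theta))

z = ecs_contour(np.linspace(0.0, 200.0, 5), R0=60.0, theta=np.deg2rad(20.0))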
The advantage of ECS lies in the behavior of resonance wavefunctions. For real
r values, the amplitudes of resonance wavefunctions are nonnegligible even
as r →∞. Upon complex scaling, these become exponentially damped at
large distances and can be represented by square-integrable functions.
Calculations can thus be performed in a box of finite radius r_max
even when continua are involved. The size of the box limits the spatial extent of the largest Rydberg wavefunction that can be represented, and
therefore gives an upper bound to the maximal n value that can be reliably
calculated. We typically choose r_max > 10 000 a_0 (see
Table <ref>) such that n_max≳ 70.
Complex scaling requires the use of complete square-integrable basis sets,
which would make the size of the two-electron basis set very large and the
calculations computationally demanding (see, e.g.,
Ref. <cit.>). This issue is overcome by choosing the
complex-rotation radius R_0 to be larger than the extent of the
core-electron wavefunction. In that case the core electron does not reside in
the complex-scaled region and is well described by a small number of radial
functions. Only the outer electron must be described by a (quasi)complete
basis set and the size of the two-electron basis set is dramatically reduced.
In practice, the one-electron spin-orbitals entering the two-electron
wavefunction are constructed from radial functions, spherical harmonics and
spinors (see <cit.> for details). The complex-scaled radial functions describing
each of the two electrons are numerical finite-element
discrete-variable-representation (FEM-DVR)
functions <cit.>. They are obtained by solving the one-electron radial Schrödinger
equation for the singly-charged ion along the complex ECS
contour (<ref>). In the FEM-DVR method, the radial space is split into several finite
elements [r_i, r_i+1] and, in each element i, the Schrödinger equation is
solved on a grid of N_i points with a Legendre-Gauss-Lobatto DVR
method <cit.>. We carefully chose the size of the finite
elements and the number of grid points to minimize the basis-set size and make
the calculations as fast as possible. The parameters of the FEM-DVR
calculations are listed in Table <ref> for each
alkaline-earth-metal atom considered in this work.
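The sketch below shows how a Legendre-Gauss-Lobatto grid can be built for a single finite element with numpy. It is a generic recipe, not the authors' implementation, and the element boundaries and number of points are placeholders rather than the values used in the calculations.

import numpy as np
from numpy.polynomial import legendre as L

def lgl_nodes_weights(N):
    # nodes: +-1 and the roots of P'_{N-1}; weights: 2 / [N (N-1) P_{N-1}(x)^2]
    c = np.zeros(N)
    c[-1] = 1.0                                   # coefficient vector of P_{N-1}
    x = np.sort(np.concatenate(([-1.0], L.legroots(L.legder(c)), [1.0])))
    w = 2.0 / (N * (N - 1) * L.legval(x, c) ** 2)
    return x, w

def element_grid(ra, rb, N):
    # affine map of the reference nodes and weights onto the element [ra, rb]
    x, w = lgl_nodes_weights(N)
    return 0.5 * (rb - ra) * (x + 1.0) + ra, 0.5 * (rb - ra) * w

r, w = element_grid(0.0, 15.0, 10)                # e.g. a 15 a_0 element with 10 points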
In the CI expansion of the two-electron wavefunction, we use the quasi-complete
set of 1 + ∑_i (N_i - 1) FEM-DVR radial functions to describe the Rydberg
electron. The set of FEM-DVR functions representing the core electron is
restricted to those describing the low-lying levels of the Mg^+,
Ca^+ and Sr^+ ions listed in Table <ref>. Together, this means that the
two-electron basis set comprises from 5 000 to 30 000 basis functions
depending on the total angular momentum, the parity, and the atomic species.
The Hamiltonian matrix (<ref>) is calculated along
the ECS contour with the complex-scaled FEM-DVR basis and diagonalized. The eigenvalues and
eigenstates of the Hamiltonian are attributed, by inspecting the coefficients
of the CI expansion, to a Rydberg series with given values of N, ℓ_1,
j_1, ℓ, j and J. The quantum numbers N, ℓ_1 and j_1
correspond to the principal, orbital-angular-momentum and
total-angular-momentum quantum numbers of the core electron, respectively.
They indicate the ionization threshold to which the Rydberg series converges.
The quantum numbers ℓ and j are associated to the angular momenta of
the Rydberg electron, and J is the quantum number for the total angular momentum
of the entire, two-electron system. We use below the notation
(Nℓ_1_j_1nℓ_j)_J to denote the Rydberg states. Because the
quantum defects δ_ℓ of the high-ℓ states considered in this
work are very small, the difference between the principal quantum number n
of the Rydberg electron and its effective principal quantum ν = n -
δ_ℓ number is negligible in most cases. We thus use n
interchangeably to describe either quantity. When channel interactions are
strong, as is often the case for low ℓ values, the Rydberg series are
strongly perturbed and mixed, such that the assignment to one single series is
rather arbitrary.
Because the Hamiltonian matrix is
complex-symmetric, its eigenvalues are complex and given by E -
iΓ/2 (see
Figs. <ref>(a) and <ref>(b)
for the Sr(5p_1/2ns_1/2)_1 and Ca(4p_1/2, 3/2np_j)_2 series). When
the eigenstates correspond to bound states and autoionizing resonances (red
and blue solid circles), the eigenvalues are independent of the complex-rotation
angle θ. The real part E gives the energy of the state while the
imaginary part is half the autoionization rate Γ. The eigenvalues of
continuum states, on the other hand, are rotated with respect to the real axis
by ∼ 2θ (gray solid circles).
To assess the accuracy and reliability of our calculations, we have compared
the energies and autoionization rates of the calculated core-excited Rydberg
states against available experimental
data <cit.>.
Overall, the agreement is excellent and the majority of the calculated rates
agree with the experimental data within their uncertainties.
Two examples are shown in Fig. <ref> for the
Sr(5p_1/2ns_1/2)_1 series (panel a) and Ca(4p_1/2, 3/2np_j)_2
series (panel b), confirming the excellent agreement over the entire range of
n values measured in the experiments. In the upper figure, perturbations
caused by the interaction of the Sr(5p_1/2ns_1/2)_1 series with states
belonging to series converging to the Sr^+(5p_3/2) threshold cause
deviations from the smooth n^-3 decrease of the autoionization rates with
n [dashed line in Fig. <ref>(a)]. The positions of
the perturber states, shown by the assignment bar within the figure, match the
energies at which the autoionization rates of (5p_1/2ns_1/2)_1 states
are larger. This increase is caused by the mixing of these states with
the perturber, which has a lower n value and therefore a larger
autoionization rate. In the lower figure, similar perturbations occur but the
larger number of Rydberg series involved makes the assignment of perturber
states more complicated.
The CI-ECS approach thus allows the accurate calculation of the energies and
autoionization rates of core-excited Rydberg states, even in regions where
perturbations between series are important. The extraction of the
autoionization rates from the calculations is straightforward and does not
involve fitting the density of states or the photoionization cross sections. It is
ideally suited for large-scale calculations of autoionization rates.
§.§ Perturbation theory for high-ℓ states
Although in principle the autoionization rates can be calculated with CI-ECS
for all values of ℓ, this approach becomes cumbersome at high ℓ
where the rates reach values below the numerical accuracy of the calculations
(typically 10^-12 Hartree) and the numerical accuracy of double-precision
arithmetic on the computer (10^-16). Whereas arbitrary-precision
arithmetic could be used to reach higher accuracies, it would make the
calculation and diagonalization of the large complex-rotated Hamiltonian
matrix very demanding computationally. For large ℓ values the
centrifugal barrier experienced by the Rydberg electron is large and
prevents its penetration in the ion-core region. The interelectronic distance r_12
is always large and the electron repulsion is thus always small, such that a
full treatment of two-electron correlations is no longer necessary. Instead, a
perturbative treatment is possible which significantly simplifies the
calculations and makes the use of arbitrary-precision arithmetic possible.
In the perturbative limit, a core-excited Rydberg state and its associated
wavefunction are well described by a single jj-coupled configuration (N
ℓ_1_j_1 n ℓ_j)_J,
|Nℓ_1j_1nℓ jJM_J⟩ = ∑_{m_ℓ_1 m_ℓ m_s_1 m_s m_j_1 m_j} ⟨ℓ_1 m_ℓ_1 1/2 m_s_1 | j_1 m_j_1⟩ ⟨ℓ m_ℓ 1/2 m_s | j m_j⟩ ⟨j_1 m_j_1 j m_j | J M_J⟩ |Nℓ_1 m_ℓ_1 1/2 m_s_1⟩ |nℓ m_ℓ 1/2 m_s⟩ ,
with |nℓ m_ℓ1/2m_s⟩ and |Nℓ_1m_ℓ_11/2m_s_1⟩ describing the spin-orbitals of the Rydberg and core electrons, respectively. We omitted antisymmetrisation in the above because, for high-ℓ states, the effect of exchange is negligible as the
core and Rydberg electrons occupy very different regions of configuration space. The autoionization rate of a high-ℓ core-excited Rydberg state into a given continuum (N' ℓ'_1_j'_1εℓ'_j')_J is given by Fermi's golden
rule,
Γ = 2π |⟨N'ℓ'_1j'_1 εℓ'j' JM_J | 1/r_12 | Nℓ_1j_1 nℓ j JM_J⟩|^2 ρ(ε),
with ρ(ε) representing the continuum density of states at energy
ε.
The matrix element in Eq. (<ref>) is calculated by expanding the electron-electron repulsion 1/r_12 in multipole
terms <cit.>. Although there is a large number of terms in the expansion, we have observed that only the dipole (q=1) and quadrupole (q=2) terms contribute significantly
to the calculated rates, i.e., Γ≃Γ^(1) + Γ^(2). For each multipole term Γ^(q), carrying
out the integration over all coordinates gives
Γ^(q) = 2π[R_N ℓ_1 j_1^N' ℓ'_1 j'_1, qR_n ℓ^εℓ', q]^2 |B^(q)|^2 ,
with the squared norm of the angular integral B^(q) given by
|B^(q)|^2 = [j_1, j_1', j, j', ℓ_1, ℓ_1', ℓ, ℓ']
× [ ℓ_1' q ℓ_1; 0 0 0 ]^2 [ ℓ' q ℓ; 0 0 0 ]^2
× { j_1 q j'_1; ℓ_1' 1/2 ℓ_1 }^2 { j q j'; ℓ' 1/2 ℓ }^2 { j' j_1' J; j_1 j q }^2 ,
where [ ⋯ ]^2 denotes a squared Wigner 3j symbol and { ⋯ }^2 a squared Wigner 6j symbol.
We used the usual notation [a, b, …] = (2a+1)(2b+1)….
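For reference, the angular factor above can be evaluated exactly with SymPy's Wigner-symbol routines. The sketch below does so for an illustrative quadrupole (q=2) coupling of a (5p_3/2 45ℓ_j)_J state to a 5p_1/2 εℓ' continuum with Δℓ = +2; the quantum numbers are chosen for illustration only and are not taken from the calculations reported here.

from sympy import Rational
from sympy.physics.wigner import wigner_3j, wigner_6j

def b_squared(q, l1, j1, l, j, l1p, j1p, lp, jp, J):
    # |B^(q)|^2 for (N l1_j1 n l_j)_J -> (N' l1'_j1' eps l'_j')_J, as written above
    hats = ((2*j1 + 1) * (2*j1p + 1) * (2*j + 1) * (2*jp + 1)
            * (2*l1 + 1) * (2*l1p + 1) * (2*l + 1) * (2*lp + 1))
    return (hats
            * wigner_3j(l1p, q, l1, 0, 0, 0) ** 2
            * wigner_3j(lp, q, l, 0, 0, 0) ** 2
            * wigner_6j(j1, q, j1p, l1p, Rational(1, 2), l1) ** 2
            * wigner_6j(j, q, jp, lp, Rational(1, 2), l) ** 2
            * wigner_6j(jp, j1p, J, j1, j, q) ** 2)

# illustrative quadrupole coupling (5p_3/2 45 l_j)_J -> 5p_1/2 eps l'_j', Delta l = +2
val = b_squared(q=2, l1=1, j1=Rational(3, 2), l=20, j=Rational(41, 2),
                l1p=1, j1p=Rational(1, 2), lp=22, jp=Rational(45, 2), J=22)
print(float(val))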
The radial integral
R_N ℓ_1 j_1^N' ℓ'_1 j'_1, q = ∫dr_1 u_N'ℓ'_1j'_1(r_1) r_1^q u_Nℓ_1j_1(r_1)
involving the reduced radial wavefunctions u_nℓ j(r) of the core
electron is calculated by considering that the influence of the distant
Rydberg electron on the core electron is minimal, and that the core-electron
radial wavefunction is identical to the one of the bare ion. The integral is
then calculated using the ionic FEM-DVR basis functions obtained, as for the
CI-ECS calculations, by solving the one-electron Schrödinger equation for the
ion (see Sec. <ref>).
The radial integral for the Rydberg electron,
R_n ℓ^εℓ', q = ∫dr_2 u_εℓ'(r_2) r_2^{-q-1} u_nℓ(r_2) ,
is calculated using the fact that the Rydberg electron in a high-ℓ state experiences, to a very good approximation, the Coulomb potential of the singly charged ion core and nothing else. Because of the large centrifugal barrier, it does not penetrate into the core region where the electrostatic potential would depart from the Coulomb case. In other words, the quantum defect is vanishingly small (see <cit.> and <cit.> for details) and the Rydberg-electron radial wavefunction is hydrogenic. The integral is then known analytically in terms of the Appell hypergeometric function F_2 <cit.>,
R_n ℓ^εℓ', q = 𝒩_n,ℓ𝒞_ε, ℓ'Γ(ℓ + ℓ' - q + 2)(1/n + ik)^-(ℓ + ℓ' - q + 2)
× F_2(ℓ + ℓ' - q + 2, -1/ik+ℓ'+1, 2ℓ+2, 2ℓ'+2; 2/1+ikn, 2ikn/1+ikn) ,
with k = √(2ε). The normalization constant 𝒩_n,ℓ of the initial state is given by
𝒩_n,ℓ = 1/(2ℓ+1)! √((n+ℓ)!/[(n-ℓ-1)! 2n]) (2/n)^{ℓ+3/2}
and the one of the final state by
𝒞_ε, ℓ = 1/(2ℓ+1)!2(2k)^ℓ/√(1-exp(-2π/k))∏_s=1^ℓ√(s^2 + 1/k^2) .
The Appell F_2 function, along with all other functions in
Eq. (<ref>), can be calculated to within arbitrary numerical
precision with the mpmath library <cit.>. For the
calculations presented below, a numerical accuracy of 10^-60 was chosen for
the calculation of the radial integrals (<ref>), which have values ranging from about
1 atomic unit at low ℓ to 10^-50 atomic units at high ℓ. The
other quantities entering Eq. (<ref>) have values well above the
numerical precision of the computer and can be calculated using
double-precision arithmetic. The final result is obtained, using a numerical
accuracy of 10^-120, by multiplying and squaring all quantities together to
obtain Γ^(q), by summing over the dipole and quadrupole contributions,
and by summing over all continua accessible from the core-excited Rydberg state
under consideration. Because the squares of the angular integrals have values
of typically 10^-2 or larger, the relative numerical accuracy of the final
results is guaranteed to be at least 10^-10 for values up to 10^-110
atomic units. This means that all the rates shown below, whose values reach as
low as 10^-100, are calculated with sufficient numerical accuracy.
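A minimal sketch of this precision management with mpmath is given below. It only demonstrates the ingredients: the working precision, the factorial-laden normalization constant 𝒩_n,ℓ, and the availability of the Appell F_2 function. The arguments passed to appellf2 are placeholders and not the physical parameters entering the expression above.

from mpmath import mp, mpf, factorial, sqrt, appellf2

mp.dps = 60                                   # 60 decimal digits for the radial integrals

def norm_initial(n, ell):
    # normalization constant N_{n,l} of the bound wavefunction (see above)
    return (1 / factorial(2 * ell + 1)
            * sqrt(factorial(n + ell) / (factorial(n - ell - 1) * 2 * n))
            * (mpf(2) / n) ** (ell + mpf(3) / 2))

print(norm_initial(45, 40))                   # an extremely small number, handled to full precision

# the Appell F_2 function is available to the same precision (placeholder arguments):
print(appellf2(2, mpf('0.5'), mpf('1.5'), 3, 4, mpf('0.3'), mpf('0.4')))

mp.dps = 120                                  # higher precision for assembling Gamma^(q)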
Determining such minuscule autoionization rates is possible only with
theoretical methods and not with experimental measurements, because the
lifetimes involved are far too long and other decay mechanisms will dominate.
Anticipating the results presented below, the rates calculated with the
perturbative approach closely match the ones obtained by solving the full
two-electron Schrödinger equation with CI-ECS for ℓ values in the range
from 6 to 10 (see for example the blue and red circles in
Fig. <ref>). Above ℓ∼ 10, the rates are in
general lower than the numerical accuracy of the CI-ECS calculations and only
the perturbative approach provides reliable results. Below ℓ∼ 6,
perturbations are frequent. Because they cannot be represented within the
single-configuration framework of the perturbative approach, only the CI-ECS
method provides reliable results in this range. The agreement between the two
methods in the ℓ∼ 6-10 range validates the perturbative treatment for
large ℓ values and shows that it is possible, when combining the two
approaches, to accurately calculate the autoionization rates of core-excited
Rydberg states over the entire range of possible ℓ values. The
results provide benchmark data for future studies and, because they permit a
systematic analysis of the role played by the Rydberg-electron angular
momentum, they allow us to gain deep physical insight into the electronic
dynamics responsible for autoionization.
§ RESULTS
We calculated the autoionization rates of all states of Mg, Ca, and Sr with 10
≤ n ≤ 65 and 0 ≤ℓ≤ 45 with the methods described above. The
numerical results are provided in the Supplemental Material. In the following,
we analyze the ℓ dependence of the rates and extract general behaviors
from the large body of calculated data. In most cases, the rates decrease
rapidly with ℓ, as expected from previous
works <cit.>,
but do not always follow a single decay trend. For ℓ≳ 10, the
autoionization rates differ by several orders of magnitude depending on the
values of j and J. For a given ion-core state and fixed values of j - ℓ and J-ℓ, the
evolution of the rates with ℓ is smooth and, anticipating the results
of Sec. <ref>, follows simple scaling laws. We will see that this
behavior is in fact governed by the value of K- ℓ, with K the quantum
number associated with the total angular momentum without Rydberg-electron
spin. This allows us to define branches as ensembles of Rydberg states converging to a given ionization threshold and
with fixed values of K - ℓ, whose autoionization rates behave
in a similar manner. Such branches can exhibit a fine structure due to the
spin-orbit interaction of the Rydberg electron. We first analyze the behavior
of single branches, before considering all branches and later all thresholds of
all species.
§.§ Behavior for a single branch
Figure <ref> shows the autoionization rates of the (
4d_5/245ℓ_j)_J core-excited Rydberg states of Sr with j =
ℓ - 1/2 and J = j +1/2 (K-ℓ=+1/2). The calculated rates decrease
by more than 20 orders of magnitude between ℓ = 1 and ℓ = 44. For
ℓ≳ 25, they are far smaller (Γ≲ 1 s^-1) than those
of other decay mechanisms such as the fluorescence of the Rydberg electron,
which takes place in the milliseconds range.
The decay of the rates in Fig. <ref> does not follow a
single trend and a shoulder (shown by the arrow) is observed around ℓ = 7. Its
origin is, predominantly, the vastly different behavior of autoionization into
the continua above the Sr^+ ( 5s_1/2) and Sr^+ ( 4d_3/2) ionization thresholds, both accessible from the (
4d_5/245ℓ_j)_J states (see Fig. <ref>).
For the former threshold, the partial rates (empty gray circles) are large for
small ℓ and fall very rapidly as ℓ increases. For the latter
threshold, the partial rates (empty gray squares) are significantly smaller
for small ℓ but decrease more slowly with ℓ and thus dominate
the total autoionization rates for ℓ≳ 10.
Autoionization into the continua above both the Sr^+(5s_1/2) and
Sr^+(4d_3/2) thresholds is predominantly caused by the quadrupolar part of
the electron-electron repulsion. The kinetic energy of the emitted electron is
however much larger for the 5s_1/2 continua (1.84 eV) than for the
4d_3/2 ones (0.035 eV). For large kinetic energies, the radial
integral (<ref>) for the Rydberg electron decreases much
faster with ℓ than for smaller kinetic energies, a fact we verified in
a systematic manner for electron kinetic energies from 0.3 eV to 8 eV and
all possible values of ℓ. The other quantities entering the
autoionization rates given by Eq. (<ref>) vary only little or not
at all with ℓ. Therefore, because of the Rydberg-electron radial
integral, the rates decrease faster with ℓ for larger electron kinetic
energies. We verified this property for the other series and thresholds of Sr,
Ca and Mg. In conclusion, the shoulder in the rates of Fig. <ref> comes from the fact that two continua with very different energies are accessible upon autoionization.
Another shoulder can be observed for the partial rates into the continua above
the Sr^+(4d_3/2) threshold (empty gray squares). In this case, it cannot
be attributed to different photoelectron kinetic energies. Instead, it is
explained by a change of the values of the angular integrals in
Eq. (<ref>) which finds its origin in the
evolution of the angular-momentum coupling between the core and Rydberg
electrons discussed in the following section.
§.§ Behavior for all branches
We now consider the behavior of autoionization rates with ℓ for all
possible values of j (ℓ±1/2) and J (j + j_1 ≥ J
≥ |j - j_1|). The value of n is fixed and we consider a single
ionization threshold (N, ℓ_1, j_1 fixed). Figure
<ref> shows the rates of the
(5p_3/245ℓ_j)_J states of Sr. At low ℓ values,
the Rydberg electron penetrates into the ion core region. Its wave function
significantly overlaps with the one of the core electron and the quantum
defects are large. Perturbations between adjacent Rydberg series are frequent
(see Fig. <ref>(b) for example) and the autoionization
rates do not exhibit a regular behavior for ℓ≲ 4 (region I in
Fig. <ref>). In an intermediate region between ℓ∼ 4 and ℓ∼ 8 (region II), the rates decrease monotonically and
their magnitudes are similar for all values of j and J. In the last
region (region III), they split into what appears to be three branches with
very different magnitudes but similar evolution with ℓ.
The apparent branches are further split by the spin-orbit interaction of the Rydberg electron, leading to two fine-structure components with slightly different rates (solid circles and crosses in Fig. <ref>).
The branches can be associated to different values of K-ℓ, K
being the quantum number of the total angular momentum excluding
Rydberg-electron spin. This is not surprising because, when the spin-orbit
interaction of the Rydberg electron is negligible, intermediate (jK)
coupling is the most appropriate coupling scheme for core-excited Rydberg
states <cit.>. The total angular momentum j_1 of the core
electron strongly couples to the orbital angular momentum ℓ of the
Rydberg electron to give K. The jj-coupled states that we calculate
with the methods of Sec. <ref> are related to intermediate-coupling
states by the transformation coefficients <cit.>
|Nℓ_1j_1nℓ jJ⟩ = ∑_K (-1)^{j_1 + ℓ + 1/2 + J} √((2K+1)(2j+1)) { j_1 ℓ K; 1/2 J j } |Nℓ_1j_1nℓ KJ⟩ .
For large ℓ values, there is an almost one-to-one correspondence between
a given jK-coupled state, which we denote as (Nℓ_1_j_1nℓ)_K
below, and the two jj-coupled (Nℓ_1_j_1nℓ_j=ℓ±
1/2)_J=K ± 1/2 states, thus making the assignment straightforward.
For the rates shown in Fig. <ref>, there are four
possible values of K for each value of ℓ (K - ℓ = ±1/2, ±3/2). The four corresponding branches are
distinguished by their color and labelled by their K - ℓ value. The
spin-orbit interaction of the Rydberg electron further splits the K-ℓ
branches into two sub-branches with J = K + 1/2 and J = K - 1/2 (crosses
and solid circles in Fig. <ref>, respectively). This
effect is very small, as expected because the spin-orbit interaction of the
Rydberg electron is small, and it is in fact only visible for the K - ℓ = -
3/2 branch (orange crosses and circles).
The very different magnitudes of the autoionization rates of the branches
trace back to the interplay between the radial and angular parts of the
electron-electron repulsion in Eq. (<ref>). For ℓ≳
10, autoionization into the continua above the 5p_1/2 threshold dominates
over autoionization into those above the 4d_3/2, 5/2 and 5s_1/2
thresholds because the photoelectron kinetic energy is much smaller in the
former case (see Fig. <ref>). The
(5p_3/245ℓ_j)_J states couple to the 5p_1/2 continua
through the quadrupolar (tensor order of 2) part of the electron-electron repulsion (q = 2).
Upon autoionization, ℓ is thus unchanged or changes by 2 (Δℓ = ℓ' - ℓ = 0, ±2). Importantly, the radial integrals
⟨εℓ'|1/r_2^3|nℓ⟩ differ by several orders of magnitude depending on the value
of Δℓ, with Δℓ = + 2 being the largest [see
Fig. <ref>(a)]. The same observation applies for the dipole (tensor order of 1)
part of the electron repulsion (q=1), in which case the radial integrals are
much larger for Δℓ = +1 than for Δℓ = -1. A similar
situation is encountered for the dipole matrix elements describing
photoabsorption and photoionization in hydrogenic systems, for which Δℓ = + 1 transitions dominate over Δℓ = -1
transitions <cit.>.
Because K (or J for jj coupling) must be conserved upon autoionization,
angular-momentum coupling constrains the possible changes of ℓ for a
given branch. Considering the quadrupolar interaction, the initial state
(5p_3/2nℓ)_K = ℓ - 3/2 can only autoionize into
continua above the 5p_1/2 threshold with ℓ' = ℓ - 2. Continua with ℓ' = ℓ or ℓ' = ℓ + 2 are inaccessible because in these
cases angular-momentum coupling between the 5p_1/2 core electron and the
ℓ' ionized electron cannot yield K=ℓ - 3/2 and K cannot be conserved. The autoionization of the states of the - 3/2 branch thus involves a Δℓ = -2 transition only, with a small radial integral translating into small values for the rates (orange crosses and
circles in Fig. <ref>). The opposite observation holds for
the K - ℓ = +3/2 branch. Only the Δℓ = +2 transition is possible and, because the corresponding radial
integral is large, the autoionization rate is large (blue crosses and
circles in Fig. <ref>). For the K - ℓ = ± 1/2 branches (red and green circles and
crosses in Fig. <ref>), the only possible transition is Δℓ = 0, which explains
why the rates of both branches are very similar and lie between those of the
+3/2 (Δℓ = +2) and -3/2 (Δℓ = -2) branches.
The above analysis revealed that the gross structure of the branches is
related to which Δℓ value contributes predominantly to
autoionization. Let us consider another example, the 4d_5/2nℓ
states of Sr, which for large ℓ values autoionize predominantly into
the continua above the Sr^+(4d_3/2) threshold through quadrupole
interactions. As illustrated in Fig. <ref>, six branches
(K - ℓ =±1/2, ±3/2 and ±5/2) can be observed which
are grouped into 3 main components. The lowest branch and component, which is
further split by the spin-orbit interaction of the Rydberg electron, is K -
ℓ = -5/2 and autoionizes through Δℓ = -2 transitions only.
The intermediate branches K - ℓ = -3/2 and -1/2 autoionize through
predominantly Δℓ = 0 transitions, and the branches with the
largest rates, K - ℓ = 1/2, 3/2 and 5/2 autoionize through strong
Δℓ = +2 transitions. The substructure within the three main
Δℓ components is due to differences in angular-momentum coupling
which translate into different values of the angular integrals
in Eq. (<ref>).
The conclusions drawn above implicitly rely on the assumption that the angular
integrals entering the autoionization rates have similar magnitudes for any
Δℓ. As shown in Fig. <ref>, this holds for large ℓ (ℓ≳ 10) and hence large K values. The
underlying reason can be made explicit by considering the large-ℓ
behavior of the angular integrals. We use jK coupling to make the behavior
more apparent; a similar conclusion can, however, be drawn in jj coupling as
well. As shown in Appendix <ref>, the asymptotic
formulas for the Wigner symbols <cit.> allow the squared norm
of the angular integral B^(q) in jK coupling to be reduced to
| B^(q)|^2 ∼ [ℓ_1, ℓ_1', j_1] [ ℓ_1' q ℓ_1; 0 0 0 ]^2 { j_1 q j'_1; ℓ_1' 1/2 ℓ_1 }^2
× [D_0Δℓ^q(0, π/2, 0)]^2 ⟨j_1 (K - ℓ) q (-Δℓ) | j_1' (K - ℓ - Δℓ)⟩^2 ,
where D is the Wigner rotation matrix. Importantly, the integral no longer
depends on the values of ℓ and K but only on their difference
K-ℓ, i.e., on the branch under consideration. They also depend
on Δℓ which, for a given branch, can be taken as the largest value
allowed by angular-momentum coupling because it corresponds to the largest
radial integral.
Equation (<ref>) describes the change of angular
momentum of the core electron (j_1 → j_1') through its q-pole coupling
with the Rydberg electron. It is particularly instructive because, when multiplied by the radial integrals (see Eq. (<ref>)), it describes an electric dipole (q=1) or electric quadrupole (q=2) optical transition of the core electron,
Γ^(q) ∼ 2π [R_n ℓ^εℓ', q D_0Δℓ^q(0, π/2, 0)]^2
× |⟨N ℓ_1 j_1 (K - ℓ) | r_1^q C_q, -Δℓ | N' ℓ_1' j_1' (K - ℓ - Δℓ)⟩|^2
with an effective “light” intensity given by the two terms between the square brackets on the right
hand side. C_q, -Δℓ(θ_1, ϕ_1) is an unnormalized spherical harmonic.
The transition dipole or quadrupole moment of the core electron in equation (<ref>)
involves the projection of the core-electron angular momentum j_1 onto
an axis that is no longer the quantization axis but, rather, another axis
defined by the electron repulsion. The same coefficient also shows that the projection of j_1 onto the new axis is K - ℓ.
We have shown earlier that the different branches correspond to different K-ℓ values.
We can therefore relate the branches to the orientation of the
core-electron angular momentum relative to the axis defined by its coupling to
the Rydberg electron. Like the branch, this orientation has a crucial influence
on the autoionization of high-ℓ core-excited Rydberg states.
For lower values of ℓ, the angular integrals rapidly change with
ℓ (see Fig. <ref>), and their magnitudes differ
significantly. Transitions that change K become nonnegligible (gray lines in
Fig. <ref>). The relative magnitude of the autoionization
rates in different branches can no longer be simply estimated, and we have
observed in all our calculations that the rates become in fact similar
regardless of the branch for ℓ ≲ 10.
§.§ General behavior for alkaline-earth species
In addition to Sr, we also calculated the autoionization rates of core-excited
Rydberg states of Mg and Ca. The ionization thresholds Np_1/2, Np_3/2
and, when applicable, (N-1)d_3/2 and (N-1)d_5/2, were considered
(N=3-5, see Fig. <ref>). The rates of the three species
evolve in a similar way with ℓ, as illustrated by the examples shown in
Fig. <ref>. The similarity is not surprising as all
alkaline-earth-metal ions possess the same electronic structure, with the
exception of the (N-1)d_3/2,5/2 states only present for Ca and the heavier
species. For high ℓ values the Rydberg electron is essentially hydrogenic
regardless of the atomic species. Differences in the values of the rates are
thus due to the different properties of the ion cores, in particular the
energies of the states, which affect the photoelectron kinetic energies, and
the transition dipole moment ⟨ N'ℓ_1'j_1'|r_1|Nℓ_1j_1
⟩ and transition quadrupole moment ⟨ N'ℓ_1'j_1'|r_1^2|Nℓ_1j_1
⟩, which directly enter the calculation of the autoionization rates.
For the series converging to the Np_1/2 thresholds [Fig. <ref>(a)],
the continua above the Ns_1/2 threshold and, except for Mg, the
(N-1)d_3/2 threshold are accessible. The latter dominate at high ℓ
values, a fact we verified by calculating the partial rates. The influence of
the photoelectron kinetic energy on the speed at which the rates decrease is
conspicuous. Indeed, the energy difference between the 3s_1/2 and
3p_1/2 thresholds in Mg (4.422 eV) is much larger than the one between the
3d_3/2 and 4p_1/2 thresholds in Ca (1.431 eV) or the 4d_3/2 and
5p_1/2 thresholds in Sr (1.136 eV), and the rates for Mg decay much faster
than for the other two species. We observe that, as before, the faster the
photoelectron the faster the rates decrease with ℓ.
The rates of series converging to the Np_3/2 thresholds of Mg, Ca, Sr, and
belonging to the branch with K - ℓ = +1/2, are shown in Fig.
<ref>(b). Autoionization proceeds in the continua above the Ns_1/2,
Np_1/2 and, for Ca and Sr, the (N-1)d_3/2,5/2 ionization thresholds.
At low ℓ values, the dipole-type coupling to the Ns_1/2 and
(N-1)d_5/2 continua and the quadrupole-type coupling to the (N-1)d_3/2
continua are all important. We observe, as in Fig. <ref>
and <ref>, a change in the decay trend around ℓ∼ 10 after which autoionization to continua above the Np_1/2 threshold
is the major decay channel. The same observations as for Fig.
<ref>(a) can be made regarding the relationship between the
photoelectron kinetic energy and the speed at which the rates decrease with
ℓ. The spin-orbit splitting of the Mg^+(3p_1/2, 3/2) levels
(11.4 meV) is the smallest of the three species and the rates are the largest
for large ℓ. The rates for Mg decrease by only 5 orders of magnitude in
the range from ℓ = 10 to ℓ = 30, whereas those of Ca and Sr
decrease by 9 and 16 orders of magnitude, respectively. The reasoning holds
for all other branches and all other thresholds of the three species we
investigated.
§.§ Comparison against available high-ℓ experimental data
Quantitative experimental data on the autoionization rates of high-ℓ
core-excited Rydberg states are scarce even for the alkaline-earth-metal atoms
which, in comparison, have been extensively studied for low ℓ values
(see Ref. for a review). Cooke et
al. <cit.> measured the autoionization rates of (5p_j_1 n
ℓ_j)_J states of Sr for n = 16 and ℓ = 3 - 5. Their
data, shown in Fig. <ref>, are in good agreement with our
CI-ECS results (black crosses and red solid circles, respectively). In the
experiment, the autoionization rates were determined from the overall
linewidths of 5p_j_1nℓ states, their j and J substructure
being unresolved. For a given ℓ value, the experimental linewidth is
therefore the result of both the combined autoionization linewidths of all
j and J sublevels, and the small energy differences between these
sublevels. We modeled this with our CI-ECS data by generating Lorentzian line
profiles for each sublevel, with center frequencies and linewidths given by
the results of the calculations. These profiles were then summed and the overall
linewidths determined in a least-square fit to a Lorentzian function. It is
these values that are shown in Fig. <ref> (red solid circles).
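This procedure can be sketched as follows with numpy and scipy; the sublevel positions and widths below are made-up numbers standing in for the CI-ECS values.

import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, x0, fwhm, a):
    # single Lorentzian profile of full width at half maximum fwhm
    return a * (fwhm / 2) ** 2 / ((x - x0) ** 2 + (fwhm / 2) ** 2)

# hypothetical sublevel positions (relative, cm^-1) and autoionization widths
centers = [0.0, 0.8, 1.5]
widths = [2.1, 1.7, 2.6]

x = np.linspace(-15.0, 15.0, 2000)
profile = sum(lorentzian(x, c, w, 1.0) for c, w in zip(centers, widths))

popt, _ = curve_fit(lorentzian, x, profile, p0=[0.5, 2.0, 2.5])
print("overall linewidth (FWHM):", popt[1])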
The (4d_3/2, 5/251c_j)_J circular core-excited Rydberg states of Sr, i.e., those of maximal orbital and magnetic quantum numbers of the Rydberg electron |m| = ℓ = n-1,
have been shown to be stable against autoionization by Teixeira et
al. <cit.>. They could experimentally determine lower bounds for
the autoionization lifetimes of Sr(4d_3/251c) and Sr(4d_5/251c)
circular states of 5 ms and 2 ms, respectively. Our calculations agree
with these lower bounds and in fact predict lifetimes that are longer by many
orders of magnitude (77 and 21, respectively). We can thus confirm that
circular core-excited Rydberg states are completely immune to autoionization.
Our results reveal that, in fact, most states with ℓ≥ 22 are also
immune to autoionization, in the sense that autoionization lifetimes are
longer than even the fluorescence lifetime of the Rydberg electron (millisecond range).
In a recent work, Yoshida et al. <cit.> thoroughly
investigated the autoionization of the 5p_1/2nℓ states of Sr with ℓ =
0-5. To compare our CI-ECS results with the data presented
in <cit.>, we determined the scaled autoionization rates Γ_0
of the (5p_1/2nℓ_j)_J series (ℓ=3-5) by fitting the calculated
rates to the usual formula
Γ(n) = Γ_0/n^3
in the range n=51-75. The scaled rates are compared in
Table <ref> and show good agreement with the results
of <cit.>. For lower ℓ values, the rates do not closely follow
the scaling law (<ref>) because of series perturbations.
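The extraction of Γ_0 amounts to a single-parameter linear least-squares fit, sketched below with placeholder rates in place of the calculated ones.

import numpy as np

n = np.arange(51, 76, dtype=float)
gamma_calc = 2.3e-5 / n ** 3                        # placeholder for the calculated rates (a.u.)

x = n ** -3.0
gamma0 = np.sum(x * gamma_calc) / np.sum(x * x)     # least-squares estimate of Gamma_0
print(gamma0)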
§.§ ℓ and n scaling
Beyond ℓ∼ 4 Rydberg-series perturbations are rare and the behavior of the autoionization rates with ℓ is smooth. We found that the decrease of the rates with ℓ is well described by the empirical exponential law
Γ(ℓ) = Γ_0 e^aℓ^2 + bℓ ,
where Γ_0, a and b are parameters that depend on n and on the
branch under consideration. They can be determined in a least-squares fit of
the calculated rates (ℓ = 5 - 30) yielding, for example, Γ_0 =
3.5(15) · 10^-4 Hartree, a = -0.064(2) and b = -2.69(6) for the
(3p_1/245ℓ_ℓ - 1/2)_j - 1/2 series of Mg. The law is
shown by the solid black line in Fig. <ref> and compared against the
calculated data (red crosses). It reproduces the rates to within 30% or
better over the 60 orders of magnitude that their values span. Increasing the
degree of the polynomial in the exponential leads to a more accurate fit, at
the expense of an increased number of parameters. We find that a second order
polynomial represents a good compromise between accuracy and simplicity.
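Taking the logarithm of the empirical law turns the fit into a quadratic polynomial fit in ℓ, which copes naturally with the many orders of magnitude spanned by the rates. The sketch below generates synthetic data from the Mg parameters quoted above and recovers them; it is an illustration of the fitting procedure, not the code used here.

import numpy as np

ell = np.arange(5, 31, dtype=float)
# synthetic rates built from the Mg fit parameters quoted above (Hartree)
rates = 3.5e-4 * np.exp(-0.064 * ell ** 2 - 2.69 * ell)

a, b, c = np.polyfit(ell, np.log(rates), 2)       # ln Gamma = a l^2 + b l + ln Gamma_0
print(a, b, np.exp(c))                            # recovers a, b and Gamma_0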
Equation (<ref>) describes the rates well when a single
decay trend is observable. When the rates show several decay trends, typically
associated with autoionization into the continua of different ion-core states,
the behavior is well described by the sum of exponential laws
Γ(ℓ) = ∑_i=1^k Γ_0^(i) e^a^(i)ℓ^2 + b^(i)ℓ .
Here k is typically the number of ion-core thresholds with significantly
different energies. The rates obtained by fitting the above equation with
k=2 to the theoretical results for the Ca(4p_3/245ℓ_ℓ+1/2)_ℓ+2 branch (ℓ = 5 - 30)
are shown in Fig. <ref>. As for the single-trend case, the agreement
between the fit results and the calculated values is excellent and better than
20% over the entire range (ℓ = 5-30).
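When several decay trends are present, the fit can be done in log space with a nonlinear least-squares routine; the sketch below uses k = 2 with purely illustrative parameters and synthetic data.

import numpy as np
from scipy.optimize import curve_fit

def log_two_exp(l, g1, a1, b1, g2, a2, b2):
    # log of the k = 2 sum of exponential laws, fitted in log space
    return np.log(g1 * np.exp(a1 * l ** 2 + b1 * l) + g2 * np.exp(a2 * l ** 2 + b2 * l))

ell = np.arange(5, 31, dtype=float)
rates = np.exp(log_two_exp(ell, 1e-3, -0.05, -2.0, 1e-6, -0.005, -0.9))   # synthetic data

p0 = (1e-3, -0.04, -1.8, 1e-6, -0.008, -1.0)
popt, _ = curve_fit(log_two_exp, ell, np.log(rates), p0=p0, maxfev=20000)
print(popt)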
An approximate formula for the ℓ dependence of autoionization rates,
derived in Ref. <cit.>, predicts a polynomial dependence Γ ∝ 1/(n^3 ℓ^{4q-3}), where q is the order of the
prediction has been verified for the low ℓ states of the
Sr(5p_1/2nℓ) series <cit.> (ℓ≲ 5) but, as ℓ
increases, it rapidly deviates from our results. The same is true for other
series and species and, in the case of Mg shown in Fig. <ref>
(dashed gray line), the deviation occurs at even smaller ℓ values. The
deviation can be attributed to the assumptions made in Ref. <cit.> to analytically estimate
the radial integral (<ref>) and the
rate (<ref>), leading to the polynomial scaling, whereas the
integral is calculated exactly in the present work.
The n^-3 scaling of autoionization rates is well established for low ℓ
values but deserves a closer inspection as ℓ increases. When n ≫ℓ, the cubic scaling is verified <cit.> as illustrated in
Fig. <ref>(a) for the (4p_3/2n(ℓ=7)_15/2)_7 series of
Ca. Clear deviations appear when n becomes comparable to ℓ, as
shown in Fig. <ref>(b) for the same branch but a higher ℓ=18
value. In this situation, the rates initially increase with n before
passing through a maximum and eventually
following the expected n^-3 asymptotic form (blue dashed line) as n ≫ℓ. The same observation holds for all other branches, ion-core states,
and atomic species that we studied.
A departure from the n^-3 scaling law is well known for the fluorescence
lifetimes of high-ℓ Rydberg states. In this case, because the
fluorescence only occurs to nearby states of similar n values, a scaling of
n^-5 can be derived <cit.>. Autoionization proceeds, instead,
to continua with the same energy regardless of the value of ℓ and, therefore, an
argument similar to the one for fluorescence cannot be made. The n
dependence of the autoionization rates, encoded in the complicated functions shown in Eq. (<ref>), depends on the
kinetic energy of the ionized electron and does not follow a simple polynomial
scaling law.
§ CONCLUSION
The autoionization rates of core-excited states of Mg, Ca and Sr were
calculated for n = 10 - 65 and ℓ = 0 - 45. For low ℓ values,
we obtained the rates by treating the full extent of correlations between the
two valence electrons with CI-ECS. Both the values of the rates and the
perturbations caused by states belonging to other, adjacent Rydberg series
are in excellent agreement with the available experimental data. Beyond
ℓ∼ 5, the rates drop rapidly and perturbations become much scarcer,
two facts indicating the rapid decrease of the electron-electron repulsion. We
show that a perturbative treatment of dipole- and quadrupole-type electron correlations, which compares
well to the results of the full nonperturbative CI-ECS calculations, is
sufficient at this point. The hydrogenic integrals involved in the
perturbative calculations are computed without approximation and with a
numerical precision of 10^-60, a fact imposed by the rapid decrease of the
rates with ℓ.
The complete picture provided by the results has allowed us to analyze the
autoionization of high-ℓ core-excited Rydberg states in detail and
derive both the quantum number dependencies of their autoionization dynamics and the physical mechanisms responsible for these dependencies. Five general laws have been identified:
First, the decay of the autoionization rates with ℓ is very rapid. Above
ℓ∼ 25, they become negligible compared to all other decay processes,
even those as slow as the fluorescence of the Rydberg electron taking place on
the millisecond timescale.
Second, the values of the rates separate into branches belonging to different
K-ℓ values, i.e., different couplings between the total
angular momentum of the ion core and the orbital angular momentum of the
Rydberg electron. Each branch can be associated to predominantly one change of
ℓ upon autoionization (Δℓ = ± 1 for q=1 and Δℓ = 0, ± 2 for q=2), a property resembling selection rules in
radiative transitions. Depending on the predominant Δℓ, the
values of the rates differ by up to several orders of magnitude which gives
rise to well separated branches.
Third, for each branch the decrease of the rates with ℓ presents a single
decay trend if autoionization proceeds predominantly into the continua above a
single ion-core state. Otherwise, several decay trends can be observed. The
speed at which the rates decrease is determined by the energy of the
autoionized electron, therefore the different trends are particularly
pronounced when autoionization occurs into continua above ion-core states
that have very different energies. In that case, a shoulder is observed around ℓ∼ 8 where the rapid decrease of the rates suddenly turns into a much
slower one. A similar trend has been observed in other species such as the Yb
atom <cit.>.
Fourth, autoionization rates are typically larger when the kinetic energy of
the electron is small, and smaller when this energy is large. This means, for
example, that the rates for the Mg(3p_1/2nℓ) series are smaller
than those of the Ca(4p_1/2nℓ) and Sr(5p_1/2nℓ) series
because the Mg^+(3p_1/2) ion-core state lies the highest in energy.
Fifth, the autoionization rates decrease with ℓ following, to a good
approximation, an exponential law in which the argument is a second-order
polynomial in ℓ. Using this law the rates can be described within a
good relative accuracy over the many orders of magnitude that they span. This
scaling law can be used, in the future, to extrapolate the rates from a small
set of measured high-ℓ autoionization rates to other ℓ values.
The dependence of the rates on n follows the usual n^-3 scaling
law when n ≫ℓ. This is no longer the case when n and ℓ
are similar, in which case no simple scaling law has been found.
The general laws presented above were derived from the extensive data
calculated for the alkaline-earth-metal atoms. For high-ℓ states, the
exact shape of the ion core has only little influence on the Rydberg electron
as, because of the centrifugal barrier, it does not penetrate into the ion-core
region. The conclusions drawn above are therefore not limited to
alkaline-earth-metal species and are expected to apply to high-ℓ
Rydberg states of other atoms, molecules and ions. Generalization to the case
of molecules requires the vibrational and rotational structure of the ion core
to be taken into account, leading to several differences compared to the
atomic case. More branches are expected to form because the electronic angular
momenta also couple to the rotational angular momentum of the ion core
following, typically, Hund's angular-momentum-coupling cases (d) or
(e) <cit.>. Rotational and vibrational autoionization can
occur and typically involve small energies for the ionized electron. If these
decay channels dominate, we expect the decay of the autoionization rates
with ℓ to be significantly slower. A comprehensive study of the
autoionization of the high-ℓ Rydberg states of molecules is
an interesting perspective for future work.
We would like to thank F. Merkt for bringing the subject of the present paper to our attention, and X. Urbain for helpful comments. This work was supported by the Fonds de la Recherche Scientifique (FNRS) under IISN Grant No. 4.4504.10. E.M.B. and M.G. acknowledge support from the Fonds Spéciaux de Recherche (FSR) of UCLouvain. Computational resources have been provided by the supercomputing facilities of the Université catholique de Louvain (CISM/UCL) and the Consortium des Équipements de Calcul Intensif en Fédération Wallonie Bruxelles (CÉCI) funded by the Fond de la Recherche Scientifique de Belgique (F.R.S.-FNRS) under convention 2.5020.11 and by the Walloon Region.
§ ASYMPTOTIC FORMULA FOR THE ANGULAR INTEGRALS IN JK COUPLING
In the perturbative limit and neglecting exchange, each multipole contribution Γ^(q) to the total autoionization rate can be written in the jK coupling scheme as (see, e.g., Refs. <cit.> for details)
Γ^(q) = 2π [R_N ℓ_1 j_1^N' ℓ'_1 j'_1, q R_n ℓ^εℓ', q]^2 [ℓ_1, ℓ_1', ℓ, ℓ', j_1, j_1']
× [ ℓ_1' q ℓ_1; 0 0 0 ]^2 [ ℓ' q ℓ; 0 0 0 ]^2
× { j_1 q j'_1; ℓ_1' 1/2 ℓ_1 }^2 { ℓ' j_1' K; j_1 ℓ q }^2 .
The symbols are defined as in Eq. (<ref>). The limit of large ℓ values implies that K is large because j_1 is small. Because q is small, ℓ' is large and j_1' is small. Using the symmetry of the 6j symbols and the asymptotic formula given in Ref. <cit.>, the last squared Wigner 6j symbol in the above equation simplifies to
{ ℓ' j_1' K; j_1 ℓ q }^2 ≃ ⟨j_1 b j_1' (Δℓ - b) | q Δℓ⟩^2 / [2ℓ(2q+1)] .
We defined b = K - ℓ, which in fact labels the autoionization branch (see Sec. <ref>). The quantity Δℓ = ℓ' - ℓ represents the change in the orbital angular momentum of the Rydberg electron upon autoionization. Using the symmetry properties of the Clebsch-Gordan coefficients, one can rewrite the above equation as
{ ℓ' j_1' K; j_1 ℓ q }^2 ≃ ⟨j_1 b q (-Δℓ) | j_1' (b - Δℓ)⟩^2 / [2ℓ(2j_1'+1)] .
The square of the 3j symbol involving ℓ and ℓ' in Eq. (<ref>) can also be simplified in the limit ℓ, ℓ' ≫ q. Using the asymptotic expression for Clebsch-Gordan coefficients of Ref. <cit.>, we have
[ ℓ' q ℓ; 0 0 0 ]^2 ≃ [D_0Δℓ^q(0, π/2, 0)]^2 / (2ℓ'+1) ,
where D_0Δℓ^q(0, π/2, 0) is the Wigner D matrix. Inserting Eqs. (<ref>) and (<ref>) into Eq. (<ref>), one obtains
Γ^(q) = 2π [R_N ℓ_1 j_1^N' ℓ'_1 j'_1, q R_n ℓ^εℓ', q]^2 [ℓ_1, ℓ_1', j_1]
× [ ℓ_1' q ℓ_1; 0 0 0 ]^2 { j_1 q j'_1; ℓ_1' 1/2 ℓ_1 }^2
× ⟨j_1 b q (-Δℓ) | j_1' (b - Δℓ)⟩^2 [D_0Δℓ^q(0, π/2, 0)]^2 ,
where we used 2ℓ + 1 ≃ 2ℓ. The angular part of this equation,
i.e., the terms on the right hand side that depend on the
angular-momentum quantum numbers, is the one given in
Eq. (<ref>).
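The asymptotic relation above can be checked numerically with SymPy's exact Wigner-symbol routines, as in the sketch below; the quantum numbers are illustrative, and the two sides should approach each other as ℓ grows.

from sympy import Rational
from sympy.physics.wigner import wigner_6j, clebsch_gordan

j1, j1p, q = Rational(3, 2), Rational(1, 2), 2     # core-electron momenta and multipole order
b, dl = Rational(1, 2), 0                          # branch label K - l and Delta l

for l in (10, 20, 40):
    lp, K = l + dl, l + b
    exact = float(wigner_6j(lp, j1p, K, j1, l, q) ** 2)
    asympt = float(clebsch_gordan(j1, q, j1p, b, -dl, b - dl) ** 2 / (2 * l * (2 * j1p + 1)))
    print(l, exact, asympt)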
[camus89] P. Camus, T. F. Gallagher, J. M. Lecomte, P. Pillet, L. Pruvost, and J. Boulmer, Phys. Rev. Lett. 62, 2365 (1989).
[warntjes99] J. B. M. Warntjes, C. Wesdorp, F. Robicheaux, and L. D. Noordam, Phys. Rev. Lett. 83, 512 (1999).
[wehrli19] D. Wehrli, M. Génévriez, and F. Merkt, Phys. Rev. A 100, 012515 (2019).
[cooke78a] W. E. Cooke, T. F. Gallagher, S. A. Edelstein, and R. M. Hill, Phys. Rev. Lett. 40, 178 (1978).
[luc-koenig95] E. Luc-Koenig, M. Aymar, R. Van Leeuwen, W. Ubachs, and W. Hogervorst, Phys. Rev. A 52, 208 (1995).
[aymar96] M. Aymar, C. H. Greene, and E. Luc-Koenig, Rev. Mod. Phys. 68, 1015 (1996).
[pisharody04] S. N. Pisharody and R. R. Jones, Science 303, 813 (2004).
[eichmann92] U. Eichmann, V. Lange, and W. Sandner, Phys. Rev. Lett. 68, 21 (1992).
[jones90] R. R. Jones and T. F. Gallagher, Phys. Rev. A 42, 2655 (1990).
[genevriez23] M. Génévriez and U. Eichmann, Phys. Rev. A 107, 012817 (2023).
[fields18] G. Fields, X. Zhang, F. B. Dunning, S. Yoshida, and J. Burgdörfer, Phys. Rev. A 97, 013429 (2018).
[genevriez21] M. Génévriez, Mol. Phys. 119, e1861353 (2021).
[yoshida23] S. Yoshida, J. Burgdörfer, R. Brienza, G. Fields, and F. B. Dunning, Phys. Rev. A 107, 043112 (2023).
[genevriez21a] M. Génévriez, C. Rosen, and U. Eichmann, Phys. Rev. A 104, 012812 (2021).
[gallagher94] T. F. Gallagher, Rydberg Atoms (Cambridge University Press, Cambridge, 1994).
[roussel90] F. Roussel, M. Cheret, L. Chen, T. Bolzinger, G. Spiess, J. Hare, and M. Gross, Phys. Rev. Lett. 65, 3112 (1990).
[lehec21] H. Lehec, X. Hua, P. Pillet, and P. Cheinet, Phys. Rev. A 103, 022806 (2021).
[teixeira20] R. C. Teixeira, A. Larrouy, A. Muni, L. Lachaud, J.-M. Raimond, S. Gleyzes, and M. Brune, Phys. Rev. Lett. 125, 263001 (2020).
[jones88] R. R. Jones and T. F. Gallagher, Phys. Rev. A 38, 2846 (1988).
[poirier88] M. Poirier, Phys. Rev. A 38, 3484 (1988).
[poirier94a] M. Poirier, Phys. Rev. A 50, 1335 (1994).
[reiser88] G. Reiser, W. Habenicht, K. Müller-Dethlefs, and E. W. Schlag, Chem. Phys. Lett. 152, 119 (1988).
[merkt11] F. Merkt, S. Willitsch, and U. Hollenstein, in Handbook of High-resolution Spectroscopy (Wiley, 2011), pp. 1617–1654.
[chupka93] W. A. Chupka, J. Chem. Phys. 98, 4520 (1993).
[hollenstein01] U. Hollenstein, R. Seiler, H. Schmutz, M. Andrist, and F. Merkt, J. Chem. Phys. 115, 5461 (2001).
[wehrli21] D. Wehrli, M. Génévriez, and F. Merkt, Phys. Chem. Chem. Phys. 23, 10978 (2021).
[lochead13] G. Lochead, D. Boddy, D. P. Sadler, C. S. Adams, and M. P. A. Jones, Phys. Rev. A 87, 053409 (2013).
[millen10] J. Millen, G. Lochead, and M. P. A. Jones, Phys. Rev. Lett. 105, 213004 (2010).
[madjarov20] I. S. Madjarov, J. P. Covey, A. L. Shaw, J. Choi, A. Kale, A. Cooper, H. Pichler, V. Schkolnik, J. R. Williams, and M. Endres, Nat. Phys. 16, 857 (2020).
[mukherjee11] R. Mukherjee, J. Millen, R. Nath, M. P. A. Jones, and T. Pohl, J. Phys. B: At. Mol. Opt. Phys. 44, 184010 (2011).
[mcquillen13] P. McQuillen, X. Zhang, T. Strickler, F. B. Dunning, and T. C. Killian, Phys. Rev. A 87, 013407 (2013).
[muni22] A. Muni, L. Lachaud, A. Couto, M. Poirier, R. C. Teixeira, J.-M. Raimond, M. Brune, and S. Gleyzes, Nat. Phys. 18, 502 (2022).
[pham22] K.-L. Pham, T. F. Gallagher, P. Pillet, S. Lepoutre, and P. Cheinet, PRX Quantum 3, 020327 (2022).
[burgers22] A. P. Burgers, S. Ma, S. Saskin, J. Wilson, M. A. Alarcón, C. H. Greene, and J. D. Thompson, PRX Quantum 3, 020326 (2022).
[genevriez19b] M. Génévriez, D. Wehrli, and F. Merkt, Phys. Rev. A 100, 032517 (2019).
[hansen99] J. E. Hansen, C. Laughlin, H. W. van der Hart, and G. Verbockhaven, J. Phys. B: At. Mol. Opt. Phys. 32, 2099 (1999).
[luc-koenig98] E. Luc-Koenig, M. Aymar, J.-M. Lecomte, and A. Lyras, J. Phys. B: At. Mol. Opt. Phys. 31, 727 (1998).
[luc-koenig97] E. Luc-Koenig, A. Lyras, J.-M. Lecomte, and M. Aymar, J. Phys. B: At. Mol. Opt. Phys. 30, 5213 (1997).
[nicolaides78] C. A. Nicolaides and D. R. Beck, Phys. Lett. A 65, 2 (1978).
[simon79] B. Simon, Phys. Lett. A 71, 211 (1979).
[eiglsperger09] J. Eiglsperger, B. Piraux, and J. Madroñero, Phys. Rev. A 80, 022511 (2009).
[rescigno00] T. N. Rescigno and C. W. McCurdy, Phys. Rev. A 62, 032706 (2000).
[manolopoulos88] D. Manolopoulos and R. Wyatt, Chem. Phys. Lett. 152, 23 (1988).
[xu86] E. Y. Xu, Y. Zhu, O. C. Mullins, and T. F. Gallagher, Phys. Rev. A 33, 2401 (1986).
[bolovinos96] A. Bolovinos, E. Luc-Koenig, S. Assimopoulos, A. Lyras, N. E. Karapanagioti, D. Charalambidis, and M. Aymar, Z. Phys. D 38, 265 (1996).
[schinn91] G. W. Schinn, C. J. Dai, and T. F. Gallagher, Phys. Rev. A 43, 2316 (1991).
[dai90] C. J. Dai, G. W. Schinn, and T. F. Gallagher, Phys. Rev. A 42, 223 (1990).
[xu87] E. Y. Xu, Y. Zhu, O. C. Mullins, and T. F. Gallagher, Phys. Rev. A 35, 1138 (1987).
[assimopoulos94] S. Assimopoulos, A. Bolovinos, A. Jimoyiannis, P. Tsekeris, E. Luc-Koenig, and M. Aymar, J. Phys. B: At. Mol. Opt. Phys. 27, 2471 (1994).
[jimoyiannis92] A. Jimoyiannis, A. Bolovinos, and P. Tsekeris, Z. Phys. D 22, 577 (1992).
[cowan81] R. D. Cowan, The Theory of Atomic Structure and Spectra (University of California Press, Berkeley, 1981).
[matsumoto91] A. Matsumoto, Phys. Scr. 44, 154 (1991).
[thempmathdevelopmentteam23] The mpmath development team, mpmath: a Python library for arbitrary-precision floating-point arithmetic, version 1.3.0 (2023).
[bethe57] H. A. Bethe and E. E. Salpeter, Quantum Mechanics of One- and Two-Electron Atoms (Springer-Verlag, Berlin, Heidelberg, 1957).
[varshalovich88b] D. A. Varshalovich, A. N. Moskalev, and V. K. Khersonskii, Quantum Theory of Angular Momentum (World Scientific, 1988), p. 264.
[cooke79] W. E. Cooke and T. F. Gallagher, Phys. Rev. A 19, 2151 (1979).
[lefebvre-brion04b] H. Lefebvre-Brion and R. W. Field, The Spectra and Dynamics of Diatomic Molecules (Elsevier, 2004), pp. 87–231.
[varshalovich88c] D. A. Varshalovich, A. N. Moskalev, and V. K. Khersonskii, Quantum Theory of Angular Momentum (World Scientific, 1988), p. 306.
|
http://arxiv.org/abs/2307.00296v2
|
20230701103907
|
Accelerated primal-dual methods with enlarged step sizes and operator learning for nonsmooth optimal control problems
|
[
"Yongcun Song",
"Xiaoming Yuan",
"Hangrui Yue"
] |
math.OC
|
[
"math.OC",
"cs.LG",
"cs.NA",
"math.NA"
] |
We consider a general class of nonsmooth optimal control problems with partial differential equation (PDE) constraints, which are very challenging due to their nonsmooth objective functionals and the resulting high-dimensional and ill-conditioned systems after discretization. We focus on the application of a primal-dual method, with which different types of variables can be treated individually in the iterations, so that the main computation at each iteration only requires solving two PDEs. Our target is to accelerate the primal-dual method with either enlarged step sizes or operator learning techniques. The accelerated primal-dual method with enlarged step sizes improves the numerical performance of the original primal-dual method in a simple and universal way, while its convergence can still be proved rigorously. For the operator learning acceleration, we construct deep neural network surrogate models for the involved PDEs. Once a neural operator is learned, solving a PDE requires only a forward pass of the neural network, and the computational cost is thus substantially reduced. The accelerated primal-dual method with operator learning is mesh-free, numerically efficient, and scalable to different types of PDEs. The acceleration effectiveness of both techniques is validated by preliminary numerical results.
Optimal control; nonsmooth optimization; primal-dual method; operator learning; deep neural network
49M41,
35Q90,
35Q93,
65K05,
90C25,
68T07
§ INTRODUCTION
Optimal control problems with partial differential equation (PDE) constraints play a crucial role in various areas such as
physics, chemistry, biology, engineering, and finance. We refer the reader to <cit.> for a few references. Typically, additional nonsmooth constraints are imposed on the control (variable) to promote some desired properties such as boundedness, sparsity, and discontinuity; see <cit.> and references therein. These optimal control problems are numerically challenging. One particular reason is that the PDE constraints and other nonsmooth constraints on the control are coupled together and the resulting algebraic systems after discretization are usually high-dimensional and ill-conditioned. To solve such a nonsmooth optimal control problem, it is desirable to consider different types of variables separately in iterations so that a nonsmooth optimal control problem can be decoupled into some much easier subproblems while there is no need to solve any computationally demanding system after discretization. For this purpose, it becomes necessary to deliberately consider the structure of the model under discussion for algorithmic design.
In this paper, we study the algorithmic design for optimal control problems that can be uniformly and abstractly formulated as
min_u∈U, y∈Y 1/2 y-y_d ^2_Y+α/2u_U^2+ θ(u)
s.t. y=Su,
where u∈ U and y∈ Y, with U and Y being function spaces, are called the control and the state, respectively; y_d∈ Y is a given target; and α>0 is a regularization parameter. In addition, y=Su represents a linear PDE, in which S: U→ Y is the corresponding solution operator. The nonsmooth convex functional θ(u): U→ℝ is employed to impose some additional constraints on the control, such as boundedness <cit.>, sparsity <cit.>, and discontinuity <cit.>. Various optimal control problems with PDE constraints can be covered by (<ref>). For instance, the abstract state equation y=Su can be specified as the parabolic equation <cit.>, the elliptic equation <cit.>, the wave equation <cit.>, etc. Also, the control u can be a distributed control or a boundary control. Moreover, the functional θ(u) can be the indicator function of an admissible set <cit.>, the L^1-regularization term <cit.>, or the total variation regularization <cit.>.
§.§ State-of-the-art
Numerical study for optimal control problems, including (<ref>), has become an increasingly active field in the past decades. In the literature, the semi-smooth Newton (SSN) methods have been studied intensively and extensively; see, e.g., <cit.> for control constrained optimal control problems, and <cit.> for sparse elliptic optimal control problems. In <cit.>, it has been proved that SSN methods possess locally superlinear convergence and they usually can find high-precision solutions provided that some initial values are appropriately chosen. Computationally, it is notable that, at each iteration of SSN methods, one encounters a large-scale and ill-conditioned Newton system. Practically, it is required to solve these Newton systems up to very high accuracy to ensure the convergence, which is numerically challenging, especially for time-dependent problems [The same concerns also apply to interior point methods, e.g., <cit.>, for different types of
optimal control problems.]; see, e.g., <cit.> for more discussions.
On the other hand, the alternating direction method of multipliers (ADMM) <cit.> has been applied to various optimal control problems modeled by (<ref>); see, e.g., <cit.>. At each iteration of the ADMM methods, one only needs to solve a simple optimization subproblem, which generally has a closed-form solution, and a standard unconstrained optimal control problem. The dimensionality of the unconstrained optimal control subproblems after discretization is inevitably high. Hence, these subproblems must be solved iteratively, and acquiring accurate solutions of these subproblems is computationally expensive. To tackle this computational bottleneck, an inexact ADMM was proposed in <cit.>, with an automatically adjustable inexactness criterion for the inner iterations. As a result, the unconstrained optimal control subproblems only need to be solved up to a rather low accuracy by a few inner iterations, while the overall convergence of the inexact ADMM can still be guaranteed rigorously. Notwithstanding, we reiterate that optimal control subproblems still have to be solved for the inexact ADMM in <cit.>.
§.§ Vanilla application of the primal-dual method
To solve the problem (<ref>) efficiently, we aim at algorithms that avoid both complicated Newton systems and unconstrained optimal control subproblems. For this purpose, it suffices to apply the primal-dual method in <cit.>, because it does not require specific initial iterates and its resulting subproblems are easier than the original model. The primal-dual method in <cit.> and its variants have been widely applied in various areas such as PDEs <cit.>, image processing <cit.>, statistical learning <cit.>, and inverse problems <cit.>.
We shall show that the vanilla application of the primal-dual method in <cit.> to the abstract model (<ref>) requires solving two PDEs at each iteration, and thus it differs from the just-mentioned SSN and ADMM approaches in the literature. To fix ideas, we focus on a parabolic control constrained optimal control problem in the following discussion, and all the results can be easily extended to other problems modeled by (<ref>).
Let Ω⊂ℝ^d (d ≥ 1) be a bounded domain and Γ:=∂Ω its boundary. We consider the following parabolic optimal control problem:
min_u∈L^2(𝒪), y∈L^2(Q)1/2 y-y_d ^2_L^2(Q)+α/2u_L^2(𝒪)^2+ θ(u)
subject to the parabolic problem
∂y/∂t-Δy=uχ_𝒪 in Ω×(0,T), y=0 on Γ×(0,T), y(0)=0.
Above, Q=Ω×(0,T) with 0<T<+∞; 𝒪=ω×(0,T) with ω an open subset of Ω; χ_𝒪 is the characteristic function of 𝒪; y_d∈ L^2(Q) is a given target; and α>0 is the regularization parameter. Moreover, we specify θ(u) as the indicator function of the admissible set U_ad:
U_ad:={v∈ L^∞(𝒪)| a≤ v(x,t)≤ b a.e. in Ω×(0,T)},
with a and b given constants. Existence and uniqueness of the solution to problem (<ref>)-(<ref>) have been well studied in <cit.>.
To apply the primal-dual method in <cit.> to problem (<ref>)-(<ref>), we first observe that (<ref>)-(<ref>) can be rewritten as
min_u∈ L^2(𝒪) f(Su)+g(u),
where f(Su)= 1/2 Su-y_d _L^2(Q)^2 and g(u)=α/2u_L^2(𝒪)^2+θ(u).
Introducing an auxiliary variable p∈L^2(Q), it follows from the Fenchel duality <cit.> that the primal-dual formulation of (<ref>) reads as
min_u∈ L^2(𝒪)max_p∈L^2(Q) g(u)+(p, Su)_L^2(Q)-f^*(p),
where (·,·)_L^2(Q) denotes the canonical L^2-inner product, f^*(p):=sup_y∈L^2(Q){(y,p)_L^2(Q)-f(y)} is the convex conjugate of f(y) and can be specified as
f^*(p)=1/2p_L^2(Q)^2+(p, y_d)_L^2(Q).
Then, implementing the primal-dual method in <cit.> to (<ref>), we readily obtain the following scheme:
u^k+1=min_u∈L^2(𝒪) {g(u)+(p^k,S u)_L^2(Q)+1/2ru-u^k_L^2(𝒪)^2},
u̅^k=2u^k+1-u^k,
p^k+1=max_p∈L^2(Q){(p, Su̅^k)_L^2(Q)-f^*(p)-1/2sp-p^k_L^2(Q)^2},
where the parameters r>0 and s>0 can be understood as the step sizes of the primal and dual subproblems, respectively.
For the solutions to subproblems (<ref>) and (<ref>), one can show that
u^k+1=P_U_ad(-(S^*p^k-(1/r)u^k)/(α+1/r)),
p^k+1=(S(2u^k+1-u^k)+1/sp^k-y_d)/(1+1/s),
where S^*: L^2(Q)→ L^2(𝒪) is the adjoint operator of S, and P_U_ad(·) denotes the projection onto the admissible set U_ad, namely, P_U_ad(v)(x,t) := max{a, min{v(x,t), b}} a.e in 𝒪, ∀ v∈ L^2(𝒪).
It follows from (<ref>) that the main computation cost of (<ref>)-(<ref>) consists of solving y^k:=S(2u^k+1-u^k), i.e., the state equation (<ref>) with u=2u^k+1-u^k, and computing q^k|_𝒪:=S^*p^k, where q^k is obtained by solving the adjoint equation:
-∂ q^k/∂ t
-Δ q^k=p^k in Ω×(0,T),
q^k=0 on Γ×(0,T),
q^k(T)=0.
Obviously, the main computation of (<ref>)-(<ref>) is solving only the PDEs (<ref>) and (<ref>). The Newton systems of SSN methods and the unconstrained optimal control subproblems of ADMM methods are both completely avoided. Therefore, the computational load of the primal-dual method (<ref>)-(<ref>) at each iteration is much lower than that of the SSN and ADMM methods. Meanwhile, when θ(u) is an L^1 or a total variation regularization, methods for solving the resulting subproblem (<ref>) can be found in Section <ref> and <cit.>. It can be seen that, for these two cases, the primal-dual method (<ref>)-(<ref>) also only requires solving two PDEs as shown in (<ref>).
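For illustration only, the scheme above can be sketched in a few lines of Python once the two PDE solves are available as black boxes. In the sketch below, solve_state and solve_adjoint are hypothetical wrappers around any discretization of the state and adjoint equations, the iterates are NumPy arrays on the computational grid, and the snippet merely exposes the structure of one iteration rather than the implementation used in our experiments.

import numpy as np

def primal_dual_step(u, p, solve_state, solve_adjoint, y_d, r, s, alpha, a, b):
    """One primal-dual iteration with black-box PDE solves (hypothetical wrappers).

    solve_state(v)   -- returns an approximation of S v on the grid
    solve_adjoint(q) -- returns an approximation of S^* q restricted to the control domain
    """
    # u-update: explicit minimizer of the u-subproblem projected onto the box [a, b]
    u_new = np.clip((u / r - solve_adjoint(p)) / (alpha + 1.0 / r), a, b)
    # extrapolation step
    u_bar = 2.0 * u_new - u
    # p-update: closed-form maximizer of the dual subproblem
    p_new = (solve_state(u_bar) + p / s - y_d) / (1.0 + 1.0 / s)
    return u_new, p_new

Iterating this step until the relative changes of u and p are small reproduces the stopping rule adopted later in the numerical experiments.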
§.§ Enlarging step sizes for (<ref>)–(<ref>)
As analyzed in <cit.>, to ensure the convergence of the primal-dual method (<ref>)–(<ref>), the step sizes r and s are required to satisfy the condition
r· s<1/S^2,
where
S=sup_v_L^2(𝒪)=1{Sv_L^2(Q), ∀ v∈ L^2(𝒪)}. Numerical efficiency of the primal-dual method (<ref>)–(<ref>) certainly depends on the choices of the step sizes r and
s. In the literature, to allow for larger step sizes and hence accelerate convergence, r· s is usually chosen to be very close to, or even equal to, the upper bound 1/S^2. It is thus interesting to ask whether the upper bound 1/S^2 can be further enlarged theoretically, while the convergence of the primal-dual method (<ref>)–(<ref>) can still be guaranteed.
Recently, it has been shown in <cit.> that, for saddle point problems in the generic convex setting, the convergence condition (<ref>) can be optimally improved to
r· s < 4/3·1/S ^2.
We are motivated by the work <cit.> and consider whether or not the upper bound 4/3·1/S ^2 can be further enlarged for the model (<ref>), given that the functionals 1/2 Su-y_d _L^2(Q)^2 and α/2u_L^2(𝒪)^2 are indeed strongly convex. Below, we shall show that, to ensure the convergence of the primal-dual method (<ref>)–(<ref>) for problem (<ref>), the step sizes r and s can be chosen subject to
r· s < (4 + 2 α r)/3·1/S ^2.
As a result, the step sizes r and s can be enlarged for the primal-dual method (<ref>)–(<ref>) and its numerical performance can be accelerated. With (<ref>), the primal-dual method (<ref>)–(<ref>) is accelerated simply by a larger interval of possible choices of the step sizes, and the computational load of each iteration remains unchanged. As we shall show, this is a simple and universal way to accelerate the primal-dual method (<ref>)–(<ref>) by reducing its number of iterations, while its convergence can still be proved rigorously.
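As a hedged illustration of how the enlarged bound can be exploited in practice, one may fix r, estimate S, and then take s close to the corresponding upper bound. In the sketch below, the safety factor is our own choice, and the sample values in the comment correspond to Case I of the numerical experiments reported later.

def enlarged_dual_step(r, S_norm, alpha, safety=0.99):
    """Largest admissible s under r*s < (4 + 2*alpha*r)/(3*S_norm**2), shrunk by a safety factor."""
    return safety * (4.0 + 2.0 * alpha * r) / (3.0 * r * S_norm ** 2)

# e.g., r = 4.0e3, alpha = 1.0e-3, S_norm = 0.05 (Case I below) gives a value close to s = 4e-1.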
§.§ Accelerating (<ref>)–(<ref>) with operator learning
In the context of traditional numerical methods, the PDEs (<ref>) and (<ref>) should be solved repeatedly by certain mesh-based numerical discretization schemes (e.g., finite difference methods (FDM) or finite element methods (FEM)), which require solving large-scale and ill-conditioned algebraic systems. Even a single implementation of such a PDE solver could be expensive; hence the computation cost for solving the PDEs (<ref>) and (<ref>) repeatedly is usually extremely high. Furthermore, given another target y_d∈ L^2(Q), one has to solve the resulting optimal control problem from scratch, and hence solve the state and adjoint equations repeatedly again.
To tackle the computational difficulty above, we advocate to adopt deep learning techniques, which have recently emerged as a new powerful tool for scientific computing problems thanks to the universal approximation property and great expressibility of deep neural networks (DNNs). Indeed, various deep learning approaches have been developed for PDEs; see e.g. <cit.> and references therein. Compared with traditional numerical solvers for PDEs, these deep learning techniques are usually mesh-free, easy to implement, and very flexible to different PDEs. It is arguably accepted that deep learning techniques are helping alleviate human efforts in algorithmic design yet empowering the solvability of a large class of scientific computing problems. Among deep learning techniques, one is to approximate PDE solutions via DNNs, such as the deep Ritz method <cit.>, the deep Galerkin method <cit.>, and physic-informed neural networks <cit.>. Despite that these methods have shown promising results in diversing applications, each of them is tailored for a specific PDE. It is thus necessary to train a new neural network given a different input function (e.g., initial condition, boundary condition, or source term), which is computationally costly and time-consuming. Hence, these methods are not applicable to (<ref>) and (<ref>) because they have to be solved repeatedly with different u^k and p^k.
Another deep learning technique, called operator learning, is to apply a DNN to approximate the solution operator of a PDE, which maps from an input function to the PDE solution, see e.g., <cit.>. To be concrete, consider a PDE solution operator G : X → Y, v ↦ w, where X and Y are two infinite-dimensional Banach spaces and w = G (v). Operator learning aims at approximating G with a neural network 𝒢_θ parameterized by θ. Once a neural solution operator is learned, obtaining a PDE solution 𝒢_θ (v) for a new input function v requires only a forward pass of the neural network. Hence, neural solution operators can be used as effective surrogates for PDEs and are computationally attractive for problems that require repetitive yet expensive simulations, see e.g., <cit.>.
We are thus inspired to consider constructing two DNN surrogates for the PDEs (<ref>) and (<ref>) by operator learning to accelerate the primal-dual method (<ref>)-(<ref>). Precisely, we propose to construct two neural surrogates y=𝒮_θ_s(u) and q=𝒮_θ_a(p) parameterized by θ_s and θ_a for (<ref>) and (<ref>), respectively. Then, replacing S and S^* by 𝒮_θ_s and 𝒮_θ_a in (<ref>), we propose the following primal-dual method with operator learning for solving (<ref>)-(<ref>):
u^k+1=P_U_ad(-𝒮_θ_a(p^k)-1/ru^k/α+1/r), p^k+1=(𝒮_θ_s(2u^k+1-u^k)+1/sp^k-y_d)/(1+1/s).
Different primal-dual methods can be specified from (<ref>) by using different operator learning techniques such as the Deep Operator Networks (DeepONets) <cit.>, the physic-informed DeepONets <cit.>, the Fourier Neural Operator (FNO) <cit.>, the Graph Neural Operator (GNO) <cit.>, and the Laplace Neural Operator (LNO) <cit.>. Note that, given two neural surrogates, these primal-dual methods with operator learning only require implementing two forward passes of the neural networks and some simple algebraic operations. More importantly, given a different y_d∈ L^2(Q), these primal-dual methods with operator learning can be applied directly to the resulting optimal control problems without the need of solving any PDE. Moreover, we reiterate that the resulting primal-dual methods with operator learning for solving (<ref>)-(<ref>) can be easily extended to other various optimal control problems in form of (<ref>), see Section <ref> for more details.
Finally, we mention that some deep learning techniques have been recently developed for solving optimal control problems with PDE constraints, such as the ISMO <cit.>, operator learning methods <cit.>, the amortized finite element analysis <cit.>, and physics-informed neural networks (PINNs) methods <cit.>. All these deep learning methods, however, are designed for only smooth optimal control problems with PDE constraints, and they cannot be directly applied to the nonsmooth problems modeled by (<ref>). To tackle this issue, the ADMM-PINNs algorithmic framework has been recently proposed in <cit.>. With the advantages of both the ADMM and PINNs, the ADMM-PINNs algorithmic framework in <cit.> is applicable to a wide range of nonsmooth optimal control and inverse problems. It is notable that the neural networks in the ADMM-PINNs algorithmic framework have to be re-trained at each iteration.
§.§ Organization
The rest of this paper is organized as follows. In Section <ref>, we prove the convergence of the primal-dual method (<ref>)-(<ref>) with the enlarged step sizes (<ref>). In Section <ref>, we test a parabolic control constrained optimal control problem and validate the efficiency of the accelerated primal-dual method (<ref>)-(<ref>) with (<ref>). In Section <ref>, we showcase extensions to other optimal control problems by a sparse elliptic optimal control problem. In Section <ref>, we focus on the implementation of the primal-dual method with operator learning (<ref>), and report some numerical results to validate its efficiency. Finally, some conclusions and perspectives are given in Section <ref>.
§ CONVERGENCE ANALYSIS OF (<REF>)-(<REF>) WITH (<REF>)
In this section, we rigorously prove the convergence for the primal-dual method (<ref>)–(<ref>) with the enlarged step sizes (<ref>). For this purpose, we first show that the primal-dual method (<ref>)–(<ref>) can be equivalently interpreted as a linearized ADMM. We reiterate that the convergence analysis does not depend on the specific form of the solution operator S and the nonsmooth convex functional θ(u) in (<ref>). Hence, the convergence results can be applied to other optimal control problems with PDE constraints in the form of (<ref>).
§.§ Preliminary
In this subsection, we summarize some known results in the literature for the convenience of
further analysis. We denote by (·,·) and · the canonical L^2-inner product and the associated norm, respectively.
Let λ∈ L^2(Q) be the Lagrange multiplier associated with the constraint y=Su. It is clear that problem (<ref>)-(<ref>) is equivalent to the following saddle point problem:
min_u∈ L^2(𝒪), y ∈ L^2(Q)max_λ∈ L^2(Q) g(u) + f(y) + (λ, Su - y).
Let (u^*, y^*, λ^*)^⊤ be the solution of (<ref>). Then, the first-order optimality condition of (<ref>) reads as
{ ∂θ(u^*)+α u^* + S^*λ^*∋ 0 ,
y^*-y_d-λ^*=0,
-Su^*+ y^*=0,
.
which can be rewritten as the following variational inequalities (VIs):
{ θ(u)-θ(u^*)+(u - u^*, α u^* + S^*λ^* ) ≥ 0, ∀ u∈ L^2(𝒪),
(y - y^*, y^*-y_d- λ^* ) ≥ 0, ∀ y ∈ L^2(Q),
(q - λ^*, -Su^* + y^* ) ≥ 0, ∀ q ∈ L^2(Q).
.
The following lemma will be used later. Its proof can be found in <cit.>, and thus omitted.
Let H be a Hilbert space and ϕ: H → R ∪{+∞} a proper, convex and lower semi-continuous extended real-valued functional on H. Let ϕ^*(v):=sup_w∈ H(v,w)-ϕ(w) be the convex conjugate of ϕ(v). Then, for all w ∈ H, it holds that
w = min_v{ϕ(v)+ 1/2 v - w ^2} + min_v{ϕ^*(v)+ 1/2 v - w ^2}.
For any constant s>0, applying (<ref>) to sϕ(v), instead of ϕ(v), we have
w = min_v{ϕ(v)+ 1/2s v - w ^2} + smin_v{ϕ^*(v)+ s/2 v - 1/sw ^2}.
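The identity above is the extended Moreau decomposition with parameter s. As a quick sanity check (our own illustration, not needed for the analysis), it can be verified numerically on H=ℝ with ϕ(v)=|v|, for which the two minimizers reduce to a soft-thresholding and a projection onto [-1,1]:

import numpy as np

def check_moreau(w, s):
    """Scalar instance of the identity above with phi(v) = |v|, so phi* is the indicator of [-1, 1]."""
    prox_phi = np.sign(w) * max(abs(w) - s, 0.0)   # argmin_v |v| + |v - w|^2 / (2s)
    prox_conj = np.clip(w / s, -1.0, 1.0)          # argmin_v I_[-1,1](v) + (s/2) |v - w/s|^2
    return np.isclose(w, prox_phi + s * prox_conj)

assert all(check_moreau(w, s) for w in (-3.0, -0.2, 0.0, 0.7, 5.0) for s in (0.5, 1.0, 2.0))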
§.§ Equivalence between (<ref>)–(<ref>) and linearized ADMM
In this subsection, we show that the primal-dual method (<ref>)–(<ref>) is equivalent to the following linearized ADMM:
{ y^k+1 = min_y ∈ L^2(Q){f(y)-(y, sSu^k)+ s/2y- 1/sλ^k^2},
u^k+1 =min_u∈ L^2(𝒪){g(u)+(λ^k + s(Su^k - y^k+1) , S u)+1/2 ru-u^k^2},
λ^k+1 = λ^k + s(Su^k+1 - y^k+1) .
.
First, we note that the primal-dual method (<ref>)-(<ref>) can be rewritten as
u^k+1=min_u∈L^2(𝒪) {g(u)+(p^k,S u)+1/2ru-u^k^2},
p^k+1=max_p∈L^2(Q){-f^*(p)-1/2sp-(p^k+ s(2Su^k+1-Su^k))^2}.
Let
λ^k+1= p^k + sS(u^k+1- u^k). Then, (<ref>)-(<ref>) can be written as
u^k+1=min_u∈L^2(𝒪) {g(u)+(p^k,S u)+1/2ru-u^k^2},
λ^k+1= p^k + sS(u^k+1- u^k),
p^k+1=max_p∈L^2(Q){-f^*(p)-1/2sp-(λ^k+1+sSu^k+1)^2}.
Next, taking w=λ^k+1 + sSu^k+1 and ϕ=f^*(p) in (<ref>), we obtain that
λ^k+1 + sSu^k+1 = min_p ∈ L^2(Q){f^*(p)-(p, Su^k+1)+1/2 sp- λ^k+1^2}
+ s min_y ∈ L^2(Q){f(y)- (y, s Su^k+1) + s/2 y - 1/sλ^k+1^2 }.
Clearly, the first term of the right-hand side is exactly p^k+1 obtained by (<ref>). Additionally, we introduce
y^k+2 = min_y ∈ L^2(Q){f(y)-(y, sSu^k+1)+ s/2y- 1/sλ^k+1^2}.
Then, (<ref>) can be rewritten as
λ^k+1 + sSu^k+1 = p^k+1 + sy^k+2,
which implies that p^k = λ^k + s(Su^k - y^k+1). Substituting this result into (<ref>) and (<ref>) to eliminate p^k and p^k+1, we thus have
u^k+1 =min_u∈L^2(𝒪){g(u)+(λ^k + s(Su^k - y^k+1) , S u)+1/2 ru-u^k^2},
λ^k+1 = λ^k + s(Su^k+1 - y^k+1),
y^k+2 = min_y ∈L^2(Q){f(y)-(y, sSu^k+1 )+ s/2y- 1/sλ^k+1^2} .
Swapping the order such that the update of y comes first, we get the linearized ADMM (<ref>) directly.
§.§ Convergence
In this subsection, we prove the convergence of the primal-dual method (<ref>)-(<ref>) with the enlarged step sizes (<ref>) in form of the linearized ADMM (<ref>).
We first see that, for any y ∈ L^2(Q) and u∈ L^2(𝒪), the iterate (y^k+1, u^k+1, λ^k+1)^⊤ generated by the linearized ADMM (<ref>) satisfies the following VIs:
(y - y^k+1, y^k-y_d-s(Su^k - y^k+1)- λ^k ) ≥ 0,
θ(u)-θ(u^k+1)+(u - u^k+1, α u^k+1 + S^*(λ^k + s(Su^k - y^k+1))+ 1/r( u^k+1 - u^k) ) ≥ 0,
1/s(λ^k+1 - λ^k) - (Su^k+1 - y^k+1) = 0.
Though (<ref>) no longer involves p^k, for the convenience of further analysis, we still denote
p^k= λ^k + s(Su^k - y^k+1).
Substituting it into (<ref>)–(<ref>) yields
{ (y - y^k+1,y^k+1-y_d -p^k) ≥ 0, ∀ y ∈ L^2(Q),
θ(u)-θ(u^k+1)+(u - u^k+1, α u^k+1 + S^*p^k + 1/r( u^k+1 - u^k) ) ≥ 0, ∀ u∈ L^2(𝒪),
(λ - p^k, 1/s(λ^k+1 - λ^k) - (Su^k+1 - y^k+1)) ≥ 0, ∀λ∈ L^2(Q).
.
Next, we present some useful lemmas.
Let {(u^k, y^k, λ^k)^⊤} be the sequence generated by the linearized ADMM (<ref>) and (u^*, y^*, λ^*)^⊤ the solution of (<ref>). We have
(S (u^k+1- u^k), λ^k+1-λ^k ) ≥1/r( u^k+1 - u^*, u^k +1 - u^k) + 1/s( λ^k+1 - λ^*,λ^k+1 - λ^k) + αu^k+1-u^*^2 .
First, taking (y,u,λ)^⊤=(y^k+1, u^k+1, p^k)^⊤ in (<ref>) and (y,u,λ)^⊤ = (y^*, u^*, λ^*)^⊤ in (<ref>), respectively, and adding them together, we get
{ (y^* - y^k+1, -p^k + λ^* ) ≥ 0,
(u^* - u^k+1, α u^k+1 - α u^* + S^* ( p^k - λ^*) + 1/r (u^k+1 - u^k)) ≥ 0,
(λ^* - p^k, 1/s(λ^k+1 - λ^k) - S(u^k+1-u^*) + (y^k+1 - y^*) ) ≥ 0.
.
Adding the above three inequalities together, we have
1/r(u^* - u^k+1, u^k+1 - u^k) + 1/s (λ^* - p^k, λ^k+1 - λ^k) ≥αu^k+1 - u^* ^2.
From (<ref>) and (<ref>), we have p^k = λ^k+1 - s S(u^k+1-u^k). Then, the desired result (<ref>) follows from (<ref>) directly.
Let {(u^k, y^k, λ^k)^⊤} be the sequence generated by the linearized ADMM (<ref>), and (u^*, y^*, λ^*)^⊤ the solution point of (<ref>). Then, we have
2(S(u^k+1- u^k),λ^k+1-λ^k )
≥ [(1/r + α ) u^k+1- u^*^2 + 1/sλ^k+1- λ^*^2] - [(1/r + α )u^k- u^*^2 + 1/sλ^k- λ^*^2]
+ [(1/r + α/2 )u^k+1- u^k^2 + 1/sλ^k+1- λ^k^2].
We first note that
2(u^k+1 - u^*, u^k+1 - u^k) = (u^k+1- u^*^2 - u^k- u^*^2) + u^k+1 - u^k ^2,
and
2(λ^k+1 - λ^*, λ^k+1 - λ^k) = (λ^k+1- λ^*^2 - λ^k- λ^*^2) + λ^k+1 - λ^k ^2.
By the Cauchy-Schwarz inequality, we have
2u^k+1 - u^*^2 = (u^k+1- u^*^2 - u^k- u^*^2) + (u^k+1- u^*^2 + u^k- u^*^2)
≥ (u^k+1- u^*^2 - u^k- u^*^2) + 1/2u^k+1- u^k^2.
Substituting the inequalities (<ref>), (<ref>) and (<ref>) into (<ref>), we obtain the desired result (<ref>) directly.
For convenience, we introduce the notations
D = (1/r) I - 3s/(4 + 2 α r) S^*S, and σ = ((2 + 4α r)/(2 + α r)) s.
It is easy to verify that D is self-adjoint and positive definite under the condition (<ref>). Moreover, it holds that
1/rI - s S^* S = D - 1/4σ S^* S.
With the above result, we can obtain the following estimate.
Let {(u^k, y^k, λ^k)^⊤} be the sequence generated by the linearized ADMM (<ref>). We have that
- (S(u^k+1- u^k), λ^k+1 - λ^k)
≥[1/2u^k+1 - u^k^2_D + 1/8σS(u^k+1 - u^k)^2 ] + αu^k+1 - u^k ^2
- [ 1/2u^k - u^k-1^2_D + 1/8σS(u^k - u^k-1)^2 ] - 1/2σS(u^k+1 - u^k)^2.
Substituting (<ref>) into (<ref>) to eliminate y^k+1 , we have
θ(u)-θ(u^k+1)+ (u - u^k+1, α u^k+1 + S^* λ^k+1 + (1/rI - s S^*S )(u^k+1 - u^k) ) ≥ 0, ∀ u∈ L^2(𝒪).
We relabel the superscript k+1 as k in the above VI and obtain
θ(u)-θ(u^k)+ (u - u^k, α u^k + S^* λ^k + (1/rI - s S^*S )(u^k - u^k-1) ) ≥ 0, ∀ u∈ L^2(𝒪).
Taking u = u^k in (<ref>) and u = u^k+1 in (<ref>), and adding the resulting two inequalities, we obtain
- ( S(u^k+1- u^k),λ^k+1 - λ^k)
≥ (u^k+1 - u^k, (1/rI - s S^* S )[(u^k+1 - u^k) - (u^k - u^k-1)] ) + αu^k+1 - u^k ^2
(<ref>)= (u^k+1 - u^k, (D -1/4σ S^* S )[(u^k+1 - u^k) - (u^k - u^k-1)] ) + αu^k+1 - u^k ^2
= u^k+1 - u^k^2_D - (u^k+1 - u^k, D (u^k - u^k-1)) - 1/4σS(u^k+1 - u^k)^2
+ 1/4σ(S(u^k+1 - u^k), S (u^k - u^k-1)) + αu^k+1 - u^k ^2 .
Applying the Cauchy-Schwarz inequality to (<ref>), we have
- ( S (u^k+1- u^k),λ^k+1 - λ^k )
≥ 1/2u^k+1 - u^k^2_D - 1/2u^k - u^k-1^2_D
- 3/8σS(u^k+1 - u^k)^2 - 1/8σS(u^k - u^k-1)^2 + αu^k+1 - u^k ^2,
which is just the desired result (<ref>).
Let {(u^k, y^k, λ^k)^⊤} be the sequence generated by the linearized ADMM (<ref>). Then, for any δ∈ (0, 1/(2s)), we have
- ( S(u^k+1- u^k),λ^k+1 - λ^k) ≥
- (s/4)(1 + 2sδ) S(u^k+1- u^k)^2 - ((1-sδ)/s) λ^k - λ^k+1^2.
By the Cauchy-Schwarz inequality, we have
- ( S(u^k+1- u^k),λ^k+1 - λ^k) ≥
- s/(4(1 -sδ)) S(u^k+1- u^k)^2 - ((1-sδ)/s) λ^k - λ^k+1^2.
For δ∈ (0, 1/(2s)), we have 0<1/(1- sδ)≤ 1 + 2sδ and thus complete the proof.
By adding (<ref>), (<ref>) and (<ref>), we obtain that
0 ≥ [(1/r + α ) u^k+1- u^*^2 + 1/sλ^k+1- λ^*^2 + 1/2u^k+1 - u^k^2_D + 1/8σS(u^k+1 - u^k)^2 ]
- [(1/r + α )u^k- u^*^2 + 1/sλ^k- λ^*^2 + 1/2u^k - u^k-1^2_D + 1/8σS(u^k - u^k-1)^2 ]
+ [(1/r + 3α/2 )u^k+1- u^k^2 - (1/4) [2σ + s(1 + 2sδ)] S(u^k+1- u^k)^2 + δλ^k - λ^k+1^2].
To simplify the notations, we introduce
E_k = (1/r + α )u^k- u^*^2 + 1/sλ^k- λ^*^2 + 1/2u^k - u^k-1^2_D + 1/8σS(u^k - u^k-1)^2,
and
V_k+1 = δ (u^k+1 - u^k ^2 + λ^k+1 - λ^k ^2).
Next, we intend to show that
E_k+1≤ E_k - V_k+1,
which implies ∑_k=1^∞ V_k < + ∞.
Then, the convergence of (<ref>) can be proved from this result.
Let {(u^k, y^k, λ^k)^⊤} be the sequence generated by the linearized ADMM (<ref>) and (u^*, y^*, λ^*)^⊤ the solution point of (<ref>). We have
E_k+1≤ E_k-V_k+1.
It follows from the positive definiteness of D that there exists a sufficiently small constant δ∈ (0, 1/(2s)) such that
D > δ (2/(3α r) I + s^2/(3 α r) S^*S),
which implies that
3α r/2u^k+1 - u^k ^2_D -s^2δ/2S(u^k+1- u^k)^2 ≥δu^k+1- u^k^2.
Recall (<ref>). We thus have
1/ru^k+1 - u^k^2 - 3s/(4 + 2 α r) S (u^k+1 - u^k)^2 ≥ 0.
Moreover, by some simple manipulations, we can show that
3α/2u^k+1 - u^k^2 + ( 3s/(4 + 2 α r) - σ/2 - s/4) S(u^k+1- u^k)^2
= 3α r/2u^k+1 - u^k ^2_D.
Combining the results (<ref>), (<ref>), (<ref>) and (<ref>) together, we get (<ref>) directly .
With the help of preceding lemmas, we now prove the strong global convergence of the linearized ADMM (<ref>) under the condition (<ref>).
Let {(u^k, y^k, λ^k)^⊤} be the sequence generated by the linearized ADMM (<ref>), and (u^*, y^*, λ^*)^⊤ the solution point of (<ref>). If r and s satisfy the condition (<ref>), then {u^k} converges to u^* strongly in L^2(𝒪), {y^k} converges to y^* strongly in L^2(Q), and λ^k converges to λ^* strongly in L^2(Q).
Summing the inequality (<ref>) from k = 1 to k = ∞, we have that
δ∑_k= 1^∞ (u^k+1 - u^k ^2 + λ^k+1 - λ^k ^2) ≤ (1/r + α )u^1- u^*^2 + 1/sλ^1- λ^*^2 + 1/2u^1 - u^0^2_D + 1/8σS(u^1 - u^0)^2
< + ∞.
As a result, we have u^k+1 - u^k→ 0 and λ^k+1 - λ^k→ 0, and {u^k} and {λ^k} are bounded in L^2(𝒪) and L^2(Q), respectively. Recall (<ref>). We have
(S (u^k+1- u^k), λ^k+1-λ^k ) ≥ 1/r( u^k+1 - u^*, u^k +1 - u^k) + 1/s( λ^k+1 - λ^*,λ^k+1 - λ^k) + α u^* - u^k+1^2,
which implies that
α u^* - u^k+1^2 ≤Su^k+1- u^kλ^k+1-λ^k + 1/ru^k+1 - u^* u^k +1 - u^k + 1/sλ^k+1 - λ^*λ^k+1 - λ^k .
Since the solution operator S and the iterates λ^k and u^k are bounded, it follows from u^k+1 - u^k→ 0 and λ^k+1 - λ^k→ 0 that
u^k→ u^* strongly in L^2(𝒪).
It follows from the continuity of the operator S that Su^k→ Su^* strongly in L^2(Q).
Additionally, the fact λ^k+1 - λ^k→ 0 implies that Su^k+1-y^k+1→ 0, and hence
y^k→ y^* strongly in L^2(Q).
Concerning the convergence of λ^k, we note that λ^*=y^*-y_d (see (<ref>)), and it follows from the optimality condition of the y-subproblem in (<ref>) that
λ^k=-sSu^k+y^k+1-y_d+sy^k+1.
We thus have that
λ^k-λ^*=-s(Su^k-y^k)+(y^k+1-y^*)≤ sSu^k-y^k+y^k+1-y^*.
Since Su^k-y^k→ 0 and y^k+1-y^*→ 0, we conclude that
λ^k→λ^* strongly in L^2(Q).
§ NUMERICAL RESULTS
In this section, we solve a parabolic control constrained optimal control problem to validate the acceleration effectiveness of the primal-dual method (<ref>)-(<ref>) with the enlarged step sizes (<ref>). For the numerical discretization for all experiments, we employ the backward Euler finite difference scheme (with step size τ) for the time discretization and the piecewise linear finite element method (with mesh size h) for the space discretization, respectively. Our codes were written in MATLAB R2020b and numerical experiments were conducted on a MacBook Pro with mac OS Monterey, Intel(R) Core(TM) i7-9570h (2.60 GHz), and 16 GB RAM.
We consider the following example:
min_u∈ L^2(Q), y∈ L^2(Q) 1/2y-y_d_L^2(Q)^2+α/2u_L^2(Q)^2+θ(u),
where y and u satisfy the following parabolic equation:
∂ y/∂ t-Δ y=f+u in Ω×(0,T),
y=0 on Γ×(0,T), y(0)=φ.
Above, φ∈ L^2(Ω), and the function f∈ L^2(Q) is a source term that helps us construct the exact solution without affecting the numerical implementation. The nonsmooth term θ(u) is the indicator function of the admissible set (<ref>). We set Ω=(0,1)^2, ω=Ω, T=1, a=-0.5, b=0.5 and
y=(1-t)sinπ x_1sinπ x_2, q=α (1-t)sin 2π x_1sin 2π x_2, φ=sinπ x_1sinπ x_2,
f=-u+dy/dt-Δ y, y_d=y+dq/dt+Δ q, u=max(-0.5,min(0.5,-q/α)).
Then, it is easy to verify that (u^*,y^*)^⊤=(u, y)^⊤ is the optimal solution of (<ref>). The problem (<ref>) has been discussed in, e.g. <cit.>.
For the purpose of numerical comparison, we also test the accelerated primal-dual (APD) method in <cit.> and the inexact ADMM (InADMM) method in <cit.>.
Numerical implementations of the InADMM follow all the settings in <cit.>, including the parameters settings, the solvers for the subproblems, and the stopping criteria. All algorithms to be tested are summarized below.
(1) PD-C: The primal-dual method (<ref>)–(<ref>) with the original convergence condition (<ref>);
(2) PD-I: The primal-dual method (<ref>)–(<ref>) with the enlarged step sizes (<ref>);
(3) APD(k): The accelerated primal-dual method in <cit.>, which adjusts the parameters every k iterations;
(4) InADMM: The inexact ADMM in <cit.> with CG inner iterations.
The initial values for all primal-dual methods are set as (u^0,p^0)^⊤=(0,0)^⊤. For a prescribed tolerance tol>0, we terminate the iterations if
max{u^k+1-u^k_L^2(𝒪)/max{1,u^k_L^2(𝒪)},p^k+1-p^k_L^2(Q)/max{1,p^k_L^2(Q)}}≤ tol.
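In a discrete implementation, this criterion can be evaluated with vector norms of the coefficient arrays as a proxy for the L^2 norms; a minimal sketch (function and variable names are ours) reads:

import numpy as np

def stopped(u_new, u_old, p_new, p_old, tol=1.0e-5):
    """Relative-change stopping test used for all primal-dual variants."""
    du = np.linalg.norm(u_new - u_old) / max(1.0, np.linalg.norm(u_old))
    dp = np.linalg.norm(p_new - p_old) / max(1.0, np.linalg.norm(p_old))
    return max(du, dp) <= tol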
Recall that the upper bound of the step sizes, 1/S^2, is enlarged by the factor (4+2α r)/3. It is clear that the choice of α affects the value of (4+2α r)/3 and thus has a further impact on the performance of PD-I. Intuitively, a relatively large α leads to a large (4+2α r)/3 and hence is more likely to improve the numerical efficiency. To validate this, we consider two different cases for problem (<ref>) in terms of the value of α in the following discussion.
Case I: α=10^-3. Concerning the choices of r and s in all primal-dual methods, we note that, after the space-time discretization, one can estimate that S = S^*≈ 0.05 and this value is not affected
by the mesh sizes τ and h. According to (<ref>), r and s should be chosen such that r· s<1/(S^*S)≈ 400. Here, we choose r=4× 10^3 and s=1×10^-1 for PD-C. In addition, it follows from (<ref>) that the upper bound of r· s can be enlarged by the factor (4+2α r)/3=4. We thus choose r=4× 10^3 and s=4× 10^-1 for PD-I. The parameters for all test algorithms are summarized in Table <ref>.
The numerical results with τ=h=1/2^6 and tol=10^-5 are summarized in Table <ref>. We observe that PD-C is slower than InADMM, while PD-I is comparable to InADMM in terms of the total computational cost.
For APD, it is notable that applying the adaptive step size selection strategy at every iteration is not efficient, and the adjustment frequency should be chosen deliberately in practice; this is validated by the fact that APD(5) converges much faster than APD(1). We see that PD-I is more efficient than PD-C, APD(1), and APD(5). In particular, PD-I is 3 times faster than PD-C, which demonstrates the superiority of the improved condition (<ref>) over the original one (<ref>).
Case II: α=10^-5. The parameters for all primal-dual methods are summarized in Table <ref>. The numerical results with τ=h=1/2^6 and tol=10^-5 are presented in Table <ref> and Figure <ref>.
We observe from Table <ref> that all primal-dual methods require less CPU time than InADMM. More specifically, although InADMM requires only 22 outer iterations, a total of 264 PDEs have to be solved to promote convergence. Compared with PD-C, PD-I improves the numerical efficiency by 19.7% and is even faster than APD. Here, we set α=10^-5, which makes the value of (4+2α r)/3 relatively small; hence, compared with Case I, the efficiency gain of PD-I is smaller.
Next, we recall that both PD-C and PD-I are described on the continuous level
and their convergence is analyzed in function spaces. Hence, a mesh-independence property of these algorithms can be expected in practice, which means that the convergence behavior is independent of the fineness of the discretization. We test PD-C and PD-I with α=10^-3 and τ=h=1/2^i,i=4,⋯,9, and report the iteration numbers in Table <ref>, from which the mesh-independence of PD-C and PD-I can be observed.
Finally, in Table <ref>, we report the L^2-error for the iterate (u, y) obtained by PD-I for various values of h and τ. For succinctness, we only give the results for the case where α=10^-5 and tol= 10^-5. It is clear from Table <ref> that, when PD-I is applied to the problem (<ref>), the iterative accuracy is sufficient and the overall errors of u and y are both dominated by the discretization error.
§ EXTENSION: A SPARSE ELLIPTIC OPTIMAL CONTROL PROBLEM
In the previous sections, we focused on the parabolic optimal control problem (<ref>)–(<ref>) to expose our main ideas more clearly. As mentioned in the introduction, various optimal control problems are covered by the model (<ref>), and all previous discussions can be easily extended to them. In this section, we use a sparse elliptic optimal control problem to delineate how to extend the primal-dual method (<ref>)-(<ref>) with the enlarged step sizes (<ref>) to other optimal control problems. Notations and discussions analogous to previous ones are not repeated for succinctness.
Let us consider the following sparse elliptic optimal control problem:
u∈ L^2(𝒪),y∈ H_0^1(Ω)min J(y,u)=1/2y-y_d_L^2(Ω)^2+α/2u_L^2(Ω)^2+μu_L^1(Ω)+I_U_ad(u),
where y and u satisfy the following state equation:
-Δ y=u in Ω,
y=0 on Γ.
In (<ref>)-(<ref>), Ω⊂ℝ^d(d≥ 1) is a convex polyhedral domain with boundary Γ:=∂Ω, y_d∈ L^2(Ω) is a given target, and the constants α>0 and μ>0 are regularization parameters.
We denote by I_U_ad(·) the indicator function of the admissible set
U_ad:={u∈ L^∞(Ω)| a≤ u(x)≤ b, a.e. in Ω}⊂ L^2(Ω),
where a,b ∈ L^2(Ω) with a < 0 < b almost everywhere. Due to the presence of the nonsmooth L^1-regularization term, the optimal control of (<ref>) has small support <cit.>. Because of this special structural property, such problems capture important applications in various fields such as optimal actuator placement <cit.> and impulse control <cit.>.
§.§ Primal-dual method for (<ref>)-(<ref>)
Similar to what we have done for (<ref>)–(<ref>), implementing the primal-dual method in <cit.> to (<ref>)-(<ref>) yields the following iterative scheme:
u^k+1=P_U_ad(𝕊_μ r/(α r+1)((u^k-rS^*p^k)/(α r+1))),
p^k+1=(S(2u^k+1-u^k)+1/sp^k-y_d)/(1+1/s),
where S:L^2(Ω)→ L^2(Ω) such that y=Su is the solution operator associated with the elliptic state equation (<ref>), S^*: L^2(Ω)→ L^2(Ω) is the adjoint operator of S, P_U_ad(·) denotes the projection onto the admissible set U_ad, namely, P_U_ad(v)(x) := max{a, min{v(x), b}} a.e. in Ω, ∀ v∈ L^2(Ω), and 𝕊 is the Shrinkage operator defined by
𝕊_ζ(v)(x) = sgn(v(x)) (|v(x)|-ζ)_+ a.e. in Ω,
where ζ is a positive constant, “sgn" is the sign function, and (·)_+ denotes the positive part. Under the condition (<ref>) or (<ref>), both PD-C and PD-I can be proposed for problem (<ref>)-(<ref>), and the convergence analysis follows the results in Section <ref> directly. At each iteration of (<ref>), the main computation consists of solving the state equation (<ref>) to compute S(2u^k+1-u^k), and the adjoint equation
-Δ q^k=p^k in Ω, q=0 on Γ,
to compute q^k=S^*p^k.
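The u-update above is a pointwise composition of soft-thresholding and a box projection. Assuming the adjoint solve S^*p^k is already available as an array q (computed by any solver of the adjoint equation), it can be sketched as follows; the function and variable names are ours.

import numpy as np

def sparse_control_update(u, q, r, alpha, mu, a, b):
    """u^{k+1} = P_{U_ad}( Shrink_{mu*r/(alpha*r+1)}( (u - r*q)/(alpha*r+1) ) ), with q = S^* p^k."""
    z = (u - r * q) / (alpha * r + 1.0)
    thresh = mu * r / (alpha * r + 1.0)
    shrunk = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)  # soft-thresholding (shrinkage)
    return np.clip(shrunk, a, b)                               # projection onto the box [a, b]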
§.§ Numerical results
In this subsection, we report some numerical results to validate the efficiency of the primal-dual method (<ref>) for solving (<ref>)-(<ref>).
Example 2. We consider the example given in <cit.>. To be concrete, we set Ω=(0,1)×(0,1), a=-30, b = 30, and y_d =
1/6e^2x_1sin(2π x_1)sin(2π x_2) in (<ref>)-(<ref>). In all numerical experiments, the numerical discretization is implemented by the finite element method described in <cit.>. We test the PD-C and PD-I for two cases in terms of the choice of α. The initial value is set as (u^0,p^0)^⊤=(0,0)^⊤, and all algorithms are terminated if (<ref>) holds with tol=10^-5.
First, we set μ=5×10^-3 and α= 1×10^-3. The parameters are selected as those listed in Table <ref>. We summarize the numerical results in Table <ref>. It is clear that all algorithms are robust to the mesh size, and mesh-independent convergence can be observed. The PD-I improves the numerical efficiency significantly. The numerical results u and y obtained by PD-I with h=1/2^6 are reported in Figure <ref>. As expected, we note that u = 0 on a relatively
large part of Ω due to the presence of the regularization term μu_L^1(Ω).
Second, we still set μ=5×10^-3 but α= 1×10^-5. For this case, the parameters are selected as those listed in Table <ref>. The numerical results of all test algorithms with respect to different mesh sizes are presented in Table <ref>. We observe from these results that the performance of all algorithms is robust to the mesh sizes, while PD-I improves the numerical efficiency of PD-C sharply. The numerical results u and y obtained by PD-I with h=1/2^6 are reported in Figure <ref>.
Next, we study the effect of μ on the performance of PD-C and PD-I. For this purpose, we apply both of them to Example 2 with different μ and α=10^-3.
The results are reported in Table <ref> and the computed optimal controls are depicted in Figure <ref>. These results indicate that all algorithms are robust to the values of μ. Moreover, it was shown in <cit.> that, as μ increases, the size of the nonzero region of u decreases, and when μ is sufficiently large, u is zero on the whole Ω. From Figure <ref>, it is easy to see that the nonzero part of u decreases as μ increases, which coincides with the results in <cit.>.
§ IMPLEMENTATION OF THE PRIMAL-DUAL METHOD WITH OPERATOR LEARNING (<REF>)
In this section, we shall delineate the implementation of the primal-dual method with operator learning (<ref>) and specify some particular algorithms. To this end, we assume that 𝒪=Q in (<ref>)-(<ref>) to simplify the notation. Then, the central concern is constructing two surrogates y=𝒮_θ_s(u) and q=𝒮_θ_a(p) for the state equation y=Su and its adjoint q=S^*p, respectively, by some operator learning methods.
We first elaborate on the main ideas of operator learning.
For this purpose, let G be the solution operator of a generic PDE defined on the domain 𝒟 which takes an input function u, and y=G(u) be the solution of the PDE. Operator learning aims at approximating G with a neural network 𝒢_θ. For any point z∈𝒟, G(u)(z) is a real number (the value of y at z). The neural network 𝒢_θ takes inputs consisting of u and z, and outputs the value 𝒢_θ(u)(z). Suppose that we have a data set {G(u_i)(z_j)}_1≤ i≤ N_1, 1≤ j≤ N_2 of different input functions {u_i} and points {z_j}. Then, the neural network is trained by solving
θ^*=arg min_θ𝒢_θ(u)(z)-G(u)(z)_L^2(𝒟)^2.
Operator learning provides a surrogate model y=𝒢_θ^*(u) for y=G(u). Several operator learning methods, such as the DeepONets <cit.>, the physics-informed DeepONets <cit.>, the FNO <cit.>, the GNO <cit.>, and the LNO <cit.>, have been recently proposed in the PDE literature. In the following discussion, to expose our main ideas clearly, we focus on the DeepONets <cit.> to elaborate on the implementation of (<ref>). Other operator learning methods can be applied in a similar way.
§.§ Primal-dual method with DeepONets for (<ref>)-(<ref>)
The DeepONets <cit.> provide a specialized deep learning framework to learn PDE solution operators. For the convenience of readers, we give a brief overview of the
DeepONets, with a special focus on learning the solution operator of the state equation (<ref>).
Typically, as shown in Figure <ref>, the DeepONets architecture consists of two separate neural networks referred to as the “branch" and “trunk"
networks, respectively. Then the DeepONets can be expressed as
𝒢_θ(u)(z)=∑_i=1^nb_i(u)t_i(z)+b_0.
Above, θ denotes the collection of all trainable weight and bias parameters in the branch and trunk networks. The vector (b_1(u) , ⋯ , b_n(u) )^⊤ is the output of the branch network with the input {u(x_j)}_j=1^m. For each input function u, {u(x_j)}_j=1^m represent the evaluations at the fixed scattered sensors {x_j}_j=1^m⊂Ω×(0,T). The vector (t_1(z) , ⋯, t_n(z))^⊤ is the output of the trunk network with the continuous coordinates z∈Ω× (0,T) as inputs; and b_0 ∈ℝ is a trainable bias. Different from the fixed
locations {x_j}_j=1^m⊂Ω×(0,T), the coordinates z∈Ω×(0,T) may vary for different u. The
final output of the DeepONets is obtained by merging the outputs of the branch and trunk networks via an inner product. The stacked DeepONet has one trunk network and n stacked branch networks, while the unstacked DeepONet has one trunk network and one branch network.
Furthermore, we note that the DeepONets do not restrict the branch and trunk nets to any specific architecture. As z∈Ω×(0,T) is usually low dimensional, a standard fully-connected neural network (FNN) is commonly used as the trunk net. The choice of the branch net depends on the structure of the input function u, and it can be chosen as an FNN, a residual neural network, a convolutional neural network (CNN), or a graph neural network (GNN).
For instance, if {u(x_j)}_j=1^m is defined on a two-dimensional equispaced grid, then a CNN can be used; if {u(x_j)}_j=1^m is given on an unstructured mesh, then a GNN can be used.
We refer to <cit.> for more discussions on the DeepONets.
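As an illustration only, an unstacked DeepONet with fully-connected branch and trunk networks can be sketched in PyTorch as follows. The depths, widths, and the number of basis functions n_basis are placeholders (the widths happen to match those used later in Example 3), and the class is a generic sketch rather than the exact network trained in our experiments.

import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Unstacked DeepONet: G_theta(u)(z) = sum_i b_i(u) * t_i(z) + b_0."""
    def __init__(self, m, dim_z, width=20, n_basis=20):
        super().__init__()
        # branch net: takes the input function u sampled at m fixed sensors
        self.branch = nn.Sequential(
            nn.Linear(m, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, n_basis))
        # trunk net: takes the evaluation coordinate(s) z
        self.trunk = nn.Sequential(
            nn.Linear(dim_z, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, n_basis))
        self.b0 = nn.Parameter(torch.zeros(1))  # trainable bias

    def forward(self, u_sensors, z):
        # u_sensors: (batch, m), z: (batch, dim_z); returns (batch,) values G_theta(u)(z)
        return (self.branch(u_sensors) * self.trunk(z)).sum(dim=-1) + self.b0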
Given a data set of different input functions {u_i} and points {z_j}: {G(u_i)(z_j)}_1≤ i≤ N_1, 1≤ j≤ N_2 , we train the DeepONets by solving
min_θℒ(θ):=1/N_1N_2∑_i=1^N_1∑_j=1^N_2𝒢_θ(u_i)(z_j)-G(u_i)(z_j)_L^2(Ω× (0,T))^2.
Note that one
data point is a triplet (u_i; z_j; G(u_i)(z_j)), and thus one specific input function u_i may appear in multiple data points with different z_j. For example, a dataset of size 400 can be generated from 20 u trajectories, and each evaluates G(u)(z) for 20 z locations.
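With the triplets assembled into tensors U (sensor values), Z (coordinates), and Y (target values, one row per triplet), the loss above can be minimized with Adam as in the following sketch, which reuses the DeepONet class from the previous snippet; full-batch training and the default iteration count are simplifying choices of ours.

import torch

def train_deeponet(model, U, Z, Y, n_iters=20000, lr=1.0e-3):
    """Minimize the mean-squared operator-regression loss, i.e., the empirical form of the objective above."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        loss = torch.mean((model(U, Z) - Y) ** 2)
        loss.backward()
        opt.step()
    return model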
Using the DeepONets, we can obtain a surrogate model y=𝒮_θ_s^*(u) for the state equation y=Su, where θ_s^* is obtained by solving (<ref>) with G replaced by the solution operator S. Similarly, we can also obtain a surrogate model q=𝒮_θ_a^*(p) for the adjoint equation q=S^*p. We thus specify (<ref>) as the following primal-dual method with DeepONets:
u^k+1=P_U_ad(-(𝒮_θ_a^*(p^k)-(1/r)u^k)/(α+1/r)),
p^k+1=(𝒮_θ_s^*(2u^k+1-u^k)+1/sp^k-y_d)/(1+1/s).
Clearly, with the pre-trained surrogate models y=𝒮_θ_s^*(u) and q=𝒮_θ_a^*(p), one only needs to compute 𝒮_θ_a^*(p^k) and 𝒮_θ_s^*(2u^k+1-u^k), and implement some simple algebraic operations. Moreover, given a different target y_d, the primal-dual method with DeepONets (<ref>) can be directly applied to the resulting optimal control problem without solving any PDE. Hence, the primal-dual method with DeepONets (<ref>) is easy and cheap to implement. Finally, it is easy to see that the primal-dual method with DeepONets (<ref>) for solving (<ref>)-(<ref>) can be easily extended to other various optimal control problems modeled by (<ref>), see Example 3 in Section <ref> for more discussions.
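Assuming the two trained surrogates are wrapped as callables surrogate_state and surrogate_adjoint that map a control iterate, respectively a dual iterate, given on the computational grid to the corresponding solution values on the same grid (the wrapping is our own assumption), one possible realization of the iteration above is the following sketch:

import numpy as np

def pd_deeponet(u0, p0, surrogate_state, surrogate_adjoint, y_d,
                r, s, alpha, a, b, tol=1.0e-5, max_iter=500):
    """Primal-dual iteration in which both PDE solves are replaced by surrogate forward passes."""
    u, p = u0.copy(), p0.copy()
    for _ in range(max_iter):
        u_new = np.clip((u / r - surrogate_adjoint(p)) / (alpha + 1.0 / r), a, b)
        p_new = (surrogate_state(2.0 * u_new - u) + p / s - y_d) / (1.0 + 1.0 / s)
        du = np.linalg.norm(u_new - u) / max(1.0, np.linalg.norm(u))
        dp = np.linalg.norm(p_new - p) / max(1.0, np.linalg.norm(p))
        u, p = u_new, p_new
        if max(du, dp) <= tol:
            break
    return u, p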
§.§ Numerical results
In this section, we discuss the implementation of the primal-dual method with DeepONets (<ref>), and validate its effectiveness via some pedagogical numerical examples involving elliptic and parabolic control constrained optimal control problems.
Example 3.
We consider the following elliptic control constrained optimal control problem:
u∈ L^2(Ω), y∈ L^2(Ω)min J(y,u)=1/2y-y_d_L^2(Ω)^2+α/2u_L^2(Ω)^2+θ(u),
where the state y and the control u satisfy the following state equation:
-νΔy+y=u+f in Ω, y=0 on Γ.
Above, Ω⊂ℝ^d(d≥ 1) is a convex polyhedral domain with boundary Γ:=∂Ω, y_d∈ L^2(Ω) is a given target, and f∈ H^-1(Ω) is a given source term. The constant ν>0 is the diffusion coefficient and α>0 is a regularization parameter. We denote by θ(u):=I_U_ad(u) the indicator function of the admissible set U_ad={u∈ L^∞(Ω)| a≤ u(x)≤ b, a.e. in Ω}⊂ L^2(Ω),
where a,b ∈ L^2(Ω).
In our numerical experiments, we set Ω = (0,1), ν=1, α=10^-3, a=-0.5 and b=0.5. We further let
y=k_ssin(π x), q=α k_a sin(2π x), u=max{a,min{b,-q/α}},
f=-u-Δ y+y, y_d=y+Δ q-q,
where k_s and k_a are constants. Then, it is easy to show that (u,y)^⊤ is the solution of problem (<ref>)-(<ref>). By choosing different k_s and k_a, we can obtain different y_d and thus specify a series of elliptic control constrained optimal control problems.
To obtain a surrogate model for (<ref>), we first consider constructing a neural operator 𝒩_θ by a DeepONet to approximate the solution operator S̅ of the following elliptic equation
-νΔy+y=u in Ω, y=0 on Γ.
Then, it is easy to see that y=𝒮_θ_s^*(u):=𝒩_θ^*(u+f) is a surrogate model for (<ref>). Moreover, since the state equation (<ref>) is self-adjoint, we can use q=𝒮_θ^*_a(p):=𝒩_θ^*(p) as a surrogate model for the corresponding adjoint system of (<ref>)-(<ref>).
We employ an unstacked DeepONet to construct y=𝒩_θ^*(u). Both the branch net and the trunk net
are fully-connected neural networks consisting of 2 hidden layers with 20 neurons per hidden layer and equipped with hyperbolic tangent activation functions. We adapt the MATLAB codes used in <cit.> to generate a set of training data {u_i; z_j; S̅(u_i)(z_j)}_1≤ i≤ N_1, 1≤ j≤ N_2. For every u_i, {u_i(x_j)}_j=1^m are the inputs of the branch network. We take N_1=1000, N_2=m=65, {x_j}_j=1^m and {z_j}_j=1^N_2 are equi-spaced grids in [0,1]. We sample zero-boundary functions {u_i}^N_1_i=1∈ L^2(0,1) from a Gaussian random field with a Riesz kernel, i.e.,
u_i ∼𝒢ℛ(0, C), with C=49^2(-Δ+49 I)^-2.5,
where Δ and I represent the Laplacian and the identity operator, respectively.
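One way to draw such samples, assuming homogeneous Dirichlet boundary conditions on (0,1) so that -Δ has the eigenpairs λ_k=(kπ)^2 and φ_k(x)=√2 sin(kπ x), is the truncated Karhunen-Loève expansion sketched below; the truncation level is our own choice.

import numpy as np

def sample_grf(x, n_modes=64, rng=None):
    """Draw u ~ GR(0, C) with C = 49^2 (-Laplace + 49 I)^{-2.5} on (0,1), zero boundary values,
    via u(x) = sum_k sqrt(c_k) xi_k sqrt(2) sin(k pi x), c_k = 49^2 ((k pi)^2 + 49)^{-2.5}, xi_k ~ N(0,1)."""
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(1, n_modes + 1)
    c_k = 49.0 ** 2 * ((k * np.pi) ** 2 + 49.0) ** (-2.5)
    xi = rng.standard_normal(n_modes)
    return (np.sqrt(2.0) * np.sin(np.outer(x, k * np.pi))) @ (np.sqrt(c_k) * xi)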
We then compute the solutions S̅(u_i) exactly in a Fourier space (see <cit.> for the details), and finally evaluate the values of S̅(u_i)(z_j) for every (u_i, z_j). Moreover, we modify the output 𝒩_θ(u_i)(z_j) as 𝒩_θ(u_i)(z_j)x(x-1) so that the homogeneous Dirichlet boundary condition y(0)=y(1)=0 is satisfied automatically. For training the neural networks, we implement 20000 iterations of the Adam <cit.> with learning rate η=10^-3. The training of the DeepONet is performed in Python utilizing the PyTorch framework. The training process is initialized using the default initializer of PyTorch. After the training process, we thus obtain the neural operator 𝒩_θ^* and hence the surrogate models y=𝒩_θ^*(u+f) and q=𝒩_θ^*(p). We then obtain the following primal-dual method with DeepONet for solving (<ref>)-(<ref>):
u^k+1=P_U_ad(-(𝒩_θ^*(p^k)-(1/r)u^k)/(α+1/r)),
p^k+1=(𝒩_θ^*(2u^k+1-u^k+f)+1/sp^k-y_d)/(1+1/s).
We implement (<ref>) to (<ref>)-(<ref>) with different choices of k_s and k_a. We set r=2× 10^3 and s=4× 10^-1 in (<ref>), and terminate the iteration if (<ref>) holds with tol=10^-5. The numerical results are reported in Table <ref> and Figure <ref>. First, it can be observed from Table <ref> that (<ref>) converges fast and the iteration numbers are almost not affected by k_s and k_a. Moreover, the relative errors of u and y are very small for all cases under investigation, which, together with the results in Figure <ref>, imply that the computed controls and the exact ones are in excellent agreement and they are visually indistinguishable. From these results, we may conclude that (<ref>) is efficient and robust enough to pursue highly accurate solutions for control constrained elliptic optimal control problems.
Example 4. We consider the parabolic control constrained optimal control problem (<ref>). In particular, we set Ω=(0,1), T=1, α=10^-3, a=-100, and b=100. Let
y=k_s(e^t-1)sin(π x), q=α k_a(T-t)sin(2π x), u=max{a,min{b,-q/α}},
f=-u+∂ y/∂ t-Δ y, y_d=y+∂ q/∂ t+Δ q,
where k_s and k_a are constants. Then, it is easy to show that (u,y)^⊤ is the solution of problem (<ref>). A series of parabolic control constrained optimal control problems can be specified by choosing different k_s and k_a.
It is easy to see that implementing the primal-dual method with DeepONets (<ref>) to (<ref>) requires two surrogate models y=𝒮_θ_s^*(u) and q=𝒮_θ_a^*(p), respectively, for the state equation (<ref>) and the corresponding adjoint equation:
-∂ q/∂ t
-Δ q =p in Ω×(0,T),
q=0 on Γ×(0,T),
q(T)=0.
To this end, we first discretize (<ref>) and (<ref>) in time by the backward Euler method with the step size τ= T/N, where N is a positive integer. The resulting discretized state equation reads:
y_0=ϕ; for n=1,…,N, with y_n-1 being known, we obtain y_n from the
solution of the following linear elliptic problem:
-τΔy_n+y_n= τ(f_n+u_n)+y_n-1 in Ω, y_n=0 on Γ,
and the resulting discretized adjoint equation reads: q(T)=0; for n=N-1,…,0, with q_n+1 being known, we obtain q_n from the
solution of the following linear elliptic problem:
-τΔq_n+q_n= τp_n+q_n+1 in Ω, q_n=0 on Γ,
where we denote by y_n, f_n, u_n, q_n and p_n the approximate values of y(nτ), f(nτ), u(nτ), q(nτ) and p(nτ), respectively. It is easy to see that the elliptic equations (<ref>) and (<ref>) have the same form as that of (<ref>). Hence, we can follow the same routine presented in Example 3 to construct two DeepONet surrogates for (<ref>) and (<ref>). For the implementation of (<ref>), we set r=8× 10^2 and s=4× 10^-1 and terminate the iteration if (<ref>) holds with tol=10^-5.
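For illustration, assume a single surrogate elliptic_solve(rhs) approximating the solution map of -τΔ w + w = rhs with zero Dirichlet data, which is the common form of the two discretized equations above; the state and adjoint sweeps can then be sketched as follows (array layout and function names are our own assumptions).

import numpy as np

def state_sweep(u, f, phi, elliptic_solve, tau):
    """Forward backward-Euler sweep: y_n solves -tau*Lap(y_n) + y_n = tau*(f_n + u_n) + y_{n-1}, y_0 = phi.
    u[n-1] and f[n-1] store the data at time level n (0-based arrays of length N)."""
    y = [phi]
    for n in range(1, u.shape[0] + 1):
        y.append(elliptic_solve(tau * (f[n - 1] + u[n - 1]) + y[-1]))
    return np.array(y)

def adjoint_sweep(p, elliptic_solve, tau):
    """Backward sweep: q_n solves -tau*Lap(q_n) + q_n = tau*p_n + q_{n+1}, with q_N = 0."""
    q = [np.zeros_like(p[0])]
    for n in range(p.shape[0] - 1, -1, -1):
        q.insert(0, elliptic_solve(tau * p[n] + q[0]))
    return np.array(q)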
The numerical results with respect to different k_s and k_a are reported in Table <ref> and Figure <ref>. Table <ref> shows that the primal-dual method with DeepONets (<ref>) converges fast, with almost the same number of iterations for different values of k_s and k_a. This suggests that the method is highly efficient and robust to the choices of k_s and k_a. Additionally, the relative errors of u and y are very small across all test cases, which, in conjunction with the results presented in Figure <ref>, indicate that the exact and computed controls are in excellent agreement and cannot be distinguished visually. Overall, these results demonstrate that the primal-dual method with DeepONets (<ref>) is capable of producing highly accurate solutions.
§ CONCLUSIONS AND PERSPECTIVES
We proposed two accelerated primal-dual methods for a general class of nonsmooth optimal control problems with partial differential equation (PDE) constraints. Both accelerated methods retain the common advantages of primal-dual type methods: different types of variables can be treated separately, so the main computational load of each iteration is solving two PDEs, while there is no need to solve high-dimensional and ill-conditioned saddle point systems or optimal control subproblems. The accelerated primal-dual method with enlarged step sizes accelerates the primal-dual method in a simple and universal way, yet its convergence can still be proved rigorously. The accelerated primal-dual method with operator learning is mesh-free and easy to implement: surrogate models are constructed for the PDEs by deep neural networks, and once a neural operator is learned, only a forward pass of the neural networks is required to solve the PDEs. The efficiency of both accelerated primal-dual methods is validated by promising numerical results.
Our work leaves interesting questions for future study. First, our philosophy of algorithmic design can be conceptually applied to optimal control problems with nonlinear PDE constraints. This could be achieved by combining the primal-dual method proposed in <cit.> with operator learning techniques. Second, our numerical results promisingly justify the need to investigate theoretical issues such as convergence and error estimates for primal-dual methods with operator learning. Finally, we focused on DeepONets to elaborate our main ideas of accelerating primal-dual methods by operator learning techniques; it would be interesting to consider other operator learning techniques, e.g., those in <cit.>.
andrade2012multigrid
S. G. Andrade and A. Borzì, Multigrid second-order accurate
solution of parabolic control-constrained problems, Computational
Optimization and Applications, 51 (2012), pp. 835–866.
attouch2008augmented
H. Attouch and M. Soueycatt, Augmented Lagrangian and proximal alternating direction methods of multipliers in Hilbert spaces: applications to games, PDE's and control, Pacific Journal of Optimization, 5 (2008),
pp. 17–37.
barry2022
J. Barry-Straume, A. Sarshar, A. A. Popov, and A. Sandu, Physics-informed neural networks for PDE-constrained optimization and control, arXiv preprint arXiv:2205.03377, 2022.
bauschke2011
H. H. Bauschke and P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, Vol. 408, New York: Springer, 2011.
beck2019
C. Beck, W. E, and A. Jentzen, Machine learning approximation algorithms for high-dimensional fully nonlinear partial differential equations and second-order backward
stochastic differential equations, Journal of Nonlinear Science, 29 (2019), pp. 1563–1619.
biccari2022
U. Biccari, Y. Song, X. Yuan and E. Zuazua, A two-stage numerical approach for the sparse initial source identification of a diffusion-advection equation. arXiv preprint arXiv:2202.01589, 2022.
cao2023
Q. Cao, S. Goswami, and G.E. Karniadakis, LNO: Laplace neural operator for solving differential equations, arXiv preprint, arXiv:2303.10528, 2023.
chambolle2011first
A. Chambolle and T. Pock, A first-order primal-dual algorithm for
convex problems with applications to imaging, Journal of mathematical
imaging and vision, 40 (2011), pp. 120–145.
ciaramella2016
G. Ciaramella and A. Borzì, A LONE code for the sparse control of quantum systems, Computer Physics Communications, 200 (2016), pp. 312–323.
clason2017primal
C. Clason and T. Valkonen, Primal-dual extragradient methods for
nonlinear nonsmooth PDE-constrained optimization, SIAM Journal on
Optimization, 27 (2017), pp. 1314–1339.
e2018
W. E and B. Yu, The deep Ritz method: A deep learning-based numerical algorithm for solving
variational problems, Communications in Mathematics and Statistics, 6 (2018), pp. 1–12.
elvetun2016
O. L. Elvetun, and B. F. Nielsen, The split Bregman algorithm applied to PDE-constrained optimization problems with total variation regularization, Computational Optimization and Applications, 64 (2016), pp. 699–724.
gabay1975dual
D. Gabay and B. Mercier, A dual algorithm for the solution of nonlinear variational problems via finite element approximation, Computers & Mathematics with Applications, 2 (1976), pp. 17–40.
glowinski1994exact
R. Glowinski and J. Lions, Exact and approximate controllability for
distributed parameter systems, Part I, Acta Numerica, 3 (1994), pp. 269–378.
glowinski1995exact
R. Glowinski and J. L. Lions, Exact and approximate controllability for distributed parameter systems, Part II, Acta Numerica, 4 (1995), pp. 159–328.
glowinski2008exact
R. Glowinski, J. L. Lions, and J. He, Exact and Approximate
Controllability for Distributed Parameter Systems: A Numerical Approach
(Encyclopedia of Mathematics and its Applications), Cambridge University
Press, 2008.
glowinski1975approximation
R. Glowinski and A. Marroco, Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité d'une classe de problèmes de dirichlet non linéaires, Revue française d'automatique, informatique, recherche
opérationnelle. Analyse Numérique, 9 (1975), pp. 41–76.
GSY2019 R. Glowinski, Y. Song and X. Yuan, An ADMM numerical approach to linear parabolic state constrained optimal control problems, Numerische Mathematik, 144 (2020), pp. 931–966.
glowinski2022
R. Glowinski, Y. Song, X. Yuan, and H. Yue, Application of the alternating direction method of multipliers to control constrained parabolic optimal control problems and beyond, Annals of Applied Mathematics, 38 (2022), pp. 115–158.
goldstein2015adaptive
T. Goldstein, M. Li and X. Yuan,
Adaptive primal-dual splitting methods for statistical learning and
image processing.
In Advances in Neural Information Processing Systems, (2015),
pp. 2089–2097.
han2018
J. Han, A. Jentzen, and W. E, Solving high-dimensional partial differential equations using
deep learning, Proceedings of the National Academy of Sciences, 115 (2018), pp. 8505–8510.
haoBilevel2022
Z. Hao, C. Ying, H. Su, J. Zhu, J. Song, and Z. Cheng, Bi-level physics-informed neural networks for PDE constrained optimization using Broyden's hypergradients, arXiv preprint arXiv:2209.07075, 2022.
he2022
B. He, F. Ma, S. Xu and X. Yuan, A generalized primal-dual algorithm with improved convergence condition for saddle point problems, SIAM Journal on Imaging Sciences, 15 (2022), pp. 1157–1183.
hintermuller2002primal
M. Hintermüller, K. Ito, and K. Kunisch, The primal-dual active
set strategy as a semismooth Newton method, SIAM Journal on Optimization, 13
(2002), pp. 865–888.
kunisch2004
K. Kunisch and M. Hintermüller, Total bounded variation regularization as a bilaterally constrained optimization problem, SIAM Journal on Applied Mathematics, 64 (2004), pp. 1311–1333.
hinze2008optimization
M. Hinze, R. Pinnau, M. Ulbrich, and S. Ulbrich, Optimization with
PDE Constraints, Vol. 23, Springer Science & Business Media, 2008.
hwang2021solving
R. Hwang, J. Y. Lee, J. Y. Shin, and H. J. Hwang,
Solving PDE-constrained control problems using operator learning, arXiv preprint arXiv:2111.04941, 2021.
kingma2015
D. P. Kingma and J. Ba, Adam: A method for stochastic optimization, in the 3rd International Conference on Learning Representations, 2015; preprint available from https://arxiv.org/abs/1412.6980
kroner2011
A. Kröner, K. Kunisch and B. Vexler, Semismooth Newton methods for optimal
control of the wave equation with control constraints, SIAM Journal on Control and Optimization, 49
(2011), pp. 830-858.
kocachki2021
N. Kovachki, Z. Li, B. Liu, K. Azizzadenesheli,
K. Bhattacharya, A. Stuart, and A. Anandkumar, Neural operator: Learning
maps between function spaces. arXiv preprint arXiv:2108.08481, 2021.
KR2002
K. Kunisch and A. Rösch, Primal-dual active set strategy for a general class of constrained optimal control problems, SIAM Journal on Optimization, 13 (2002), pp. 321–334.
li2020FNO
Z. Li, N. Kovachki, K. Azizzadenesheli, B. Liu, K. Bhattacharya, A. Stuart, and A. Anandkumar, Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895, 2020.
li2020GNO
Z. Li, N. Kovachki, K. Azizzadenesheli, B. Liu, K. Bhattacharya, A. Stuart, and A. Anandkumar, Neural operator: Graph kernel network for partial differential equations. arXiv preprint arXiv:2003.03485, 2020.
lions1971optimal
J. L. Lions, Optimal Control of Systems Governed by Partial
Differential Equations (Grundlehren der Mathematischen Wissenschaften),
vol. 170, Springer Berlin, 1971.
liu2023jcp
S. Liu, S. Osher, W. Li, and C. W. Shu, A primal-dual approach for solving conservation laws with implicit in time approximations, Journal of Computational Physics, 472 (2023), pp. 111654.
liu2023arxiv
S. Liu, S. Liu, S. Osher, and W. Li, A first-order computational algorithm for reaction-diffusion type equations via primal-dual hybrid gradient method, arXiv preprint, arXiv:2305.03945, 2023.
lu2021learning
L. Lu, P. Jin, G. Pang, Z. Zhang, and G. E. Karniadakis, Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators, Nature Machine Intelligence, 3(2021), pp. 218–229.
ludeepxde2021
L. Lu, X. Meng, Z. Mao, and G.E. Karniadakis, DeepXDE: A deep learning library for solving differential equations, SIAM Review, 63 (2021), pp. 208–228.
lucomparison2022
L. Lu, X. Meng, S. Cai, Z. Mao, S. Goswami, Z. Zhang, and G. E. Karniadakis. A comprehensive and fair comparison of two neural operators (with practical extensions) based on fair data. Computer Methods in Applied Mechanics and Engineering, 393 (2022), pp. 114778.
lye2021iterative
K. O. Lye, S. Mishra, D. Ray, and P. Chandrashekar, Iterative surrogate model optimization (ISMO): An active learning algorithm for PDE constrained optimization with deep neural networks, Computer Methods in Applied Mechanics and Engineering 374 (2021), pp. 113575.
mowlayi2021
S. Mowlavi and S. Nabi, Optimal control of PDEs using physics-informed neural networks, Journal of Computational Physics, 473 (2023), pp. 111731.
pang2019
G. Pang, L. Lu, and G. E. Karniadakis, fPINNs: Fractional physics-informed neural networks, SIAM Journal on Scientific Computing, 41 (2019), pp. A2603–A2626.
pearson2017
J. W. Pearson and J. Gondzio, Fast interior point solution of quadratic programming
problems arising from PDE-constrained optimization, Numerische Mathematik, 137
(2017), pp. 959–999
pearson2012regularization
J. W. Pearson, M. Stoll, and A. J. Wathen, Regularization-robust preconditioners for time-dependent PDE-constrained optimization problems,
SIAM Journal on Matrix Analysis and Applications, 33 (2012), pp. 1126–1152.
porcelli2015preconditioning M. Porcelli, V. Simoncini, and M. Tani, Preconditioning of active-set Newton methods for PDE-constrained optimal control problems, SIAM Journal on Scientific Computing, 37 (2015), pp. S472–S502.
pougkakiotis2020
S. Pougkakiotis, J. W. Pearson, S. Leveque, and J. Gondzio, Fast solution methods for convex quadratic optimization of fractional differential equations, SIAM Journal on Matrix Analysis and Applications, 41(2020), pp. 1443–1476.
raissi2019physics
M. Raissi, P. Perdikaris, and G. E. Karniadakis,
Physics-informed neural networks: A deep learning framework for
solving forward and inverse problems involving nonlinear partial differential
equations, Journal of Computational physics, 378 (2019), pp. 686–707.
schiela2014operator
A. Schiela and S. Ulbrich, Operator preconditioning for a class of
inequality constrained optimal control problems, SIAM Journal on
Optimization, 24 (2014), pp. 435–466.
sirignano2018dgm
J. Sirignano and K. Spiliopoulos, DGM: A deep learning algorithm for solving partial differential equations. Journal of Computational Physics, 375 (2018), pp. 1339–1364.
song2023admmpinns
Y. Song, Y. Yuan, and H. Yue, The ADMM-PINNs algorithmic framework for nonsmooth PDE-constrained optimization: a deep learning approach, arXiv preprint arXiv:2302.08309, 2023.
stadler2009elliptic
G. Stadler,
Elliptic optimal control problems with L^1-control cost and
applications for the placement of control devices,
Computational Optimization and Applications, 44 (2009), pp. 159–181.
stoll2013one
M. Stoll, One-shot solution of a time-dependent time-periodic
PDE-constrained optimization problem, IMA Journal of Numerical Analysis, 34
(2013), pp. 1554–1577.
sun2022
Y. Sun, U. Sengupta, and M. Juniper, Physics-informed deep learning for simultaneous surrogate modelling and PDE-constrained optimization, Bulletin of the American Physical Society, 2022.
tian2018convergence
W. Tian and X. Yuan, Convergence analysis of primal–dual based
methods for total variation minimization with finite element approximation,
Journal of Scientific Computing, 76 (2018), pp. 243–274.
troltzsch2010optimal
F. Tröltzsch, Optimal Control of Partial Differential Equations:
Theory, Methods, and Applications, Vol. 112, AMS,
2010.
ulbrich2011semismooth
M. Ulbrich, Semismooth Newton Methods for Variational Inequalities and Constrained Optimization Problems in Function Spaces, Vol. 11, SIAM, 2011.
valkonen2014primal
T. Valkonen, A primal–dual hybrid gradient method for nonlinear
operators with applications to MRI, Inverse Problems, 30 (2014), pp. 055012.
wachsmuth2011
G. Wachsmuth and D. Wachsmuth, Convergence and regularization results for optimal control problems with sparsity functional, ESAIM: Control, Optimisation and Calculus of Variations, 17 (2011), pp. 858–886.
wang2021fast
S. Wang, M. A. Bhouri, and P. Perdikaris, Fast PDE-constrained optimization via self-supervised operator learning, arXiv preprint arXiv:2110.13297, 2021.
wang2021
S. Wang, H. Wang, and P. Perdikaris, Learning the solution operator of parametric partial differential equations with physics-informed DeepONets, Science advances, 7(2021), pp. eabi8605.
xue2020
T. Xue, A. Beatson, S. Adriaenssens, and R. Adams, Amortized finite element analysis for fast PDE-constrained optimization, In International Conference on Machine Learning, PMLR, 2020, pp. 10638–10647.
yu2022gPINN
J. Yu, L. Lu, X. Meng, and G. E. Karniadakis, Gradient-enhanced physics-informed neural networks for forward and inverse PDE problems, Computer Methods in Applied Mechanics and Engineering, 393 (2022), pp. 114823.
zhang2017
K. Zhang, J. Li, Y. Song, and X. Wang, An alternating direction method of multipliers
for elliptic equation constrained optimization problem, Science China Mathematics, 60 (2017),
pp. 361–378.
|
http://arxiv.org/abs/2307.03163v2
|
20230706174620
|
Probing the high temperature symmetry breaking with gravitational waves from domain walls
|
[
"Xiu-Fei Li"
] |
hep-ph
|
[
"hep-ph",
"astro-ph.CO"
] |
|
http://arxiv.org/abs/2307.00750v1
|
20230703045617
|
Feasibility of Universal Anomaly Detection without Knowing the Abnormality in Medical Images
|
[
"Can Cui",
"Yaohong Wang",
"Shunxing Bao",
"Yucheng Tang",
"Ruining Deng",
"Lucas W. Remedios",
"Zuhayr Asad",
"Joseph T. Roland",
"Ken S. Lau",
"Qi Liu",
"Lori A. Coburn",
"Keith T. Wilson",
"Bennett A. Landman",
"Yuankai Huo"
] |
cs.CV
|
[
"cs.CV",
"cs.AI"
] |
Vanderbilt University, Nashville TN 37235, USA Vanderbilt University Medical Center, Nashville TN 37215, USA
NVIDIA Corporation, Santa Clara and Bethesda, USA
Feasibility of Universal Anomaly Detection without Knowing the Abnormality in Medical Images
Can Cui1 Yaohong Wang2 Shunxing Bao1 Yucheng Tang3 Ruining Deng1 Lucas W. Remedios1 Zuhayr Asad1 Joseph T. Roland2 Ken S. Lau2
Qi Liu2 Lori A. Coburn2 Keith T. Wilson2 Bennett A. Landman1 Yuankai Huo1
August 1, 2023
=============================================================================================================================================================================================================
Many anomaly detection approaches, especially deep learning methods, have been recently developed to identify abnormal image morphology by only employing normal images during training. Unfortunately, many prior anomaly detection methods were optimized for a specific "known" abnormality (e.g., brain tumor, bone fracture, cell types). Moreover, even though only the normal images were used in the training process, the abnormal images were often employed during the validation process (e.g., epoch selection, hyper-parameter tuning), which might leak the supposed "unknown" abnormality unintentionally. In this study, we investigated these two essential aspects regarding universal anomaly detection in medical images by (1) comparing various anomaly detection methods across four medical datasets, (2) investigating the inevitable but often neglected issue of how to unbiasedly select the optimal anomaly detection model during the validation phase using only normal images, and (3) proposing a simple decision-level ensemble method to leverage the advantage of different kinds of anomaly detection without knowing the abnormality. The results of our experiments indicate that none of the evaluated methods consistently achieved the best performance across all datasets. Our proposed method enhanced the robustness of performance in general (average AUC 0.956).
§ INTRODUCTION
In the context of human perception, individuals are able to summarize and store normal patterns, which enables them to recognize abnormal patterns upon first encounter by comparing them to the normal patterns stored in memory. Especially when certain abnormal cases are infrequent or unknown, the ability to discriminate between normal and abnormal patterns must be acquired by learning from normal data alone. This has driven research on anomaly detection in machine learning, which has been further enhanced by advances in deep learning techniques that improve the ability to generalize to more complex patterns and lead to more effective detection of abnormality. Different from regular classification problems, anomaly detection is a kind of one-class classification: a model must separate normal from abnormal patterns while being trained solely on normal data, and it is tested on its ability to detect abnormal patterns <cit.> <cit.>.
In the medical domain, vast amounts of data are routinely processed, with the identification of abnormal cases being of great value. For instance, images with poor quality or artifacts should be discarded or require repetition, and patterns in images that deviate from normal patterns may indicate a rare disease. Due to the scarcity of labeled data and the infrequency of abnormal cases, conventional classification methods may not be appropriate. Therefore, there is a need for the development of anomaly detection techniques for medical image data.
Numerous anomaly detection methods have already been proposed, with distribution-based and pretext-task-based strategies being two of the primary approaches <cit.>. Distribution-based methods estimate the distribution of normal data or compact the feature space of normal data, so that abnormal data lying outside the distribution or boundary of normal data can be recognized as abnormal. Examples of distribution-based methods include One-Class Support Vector Machines (OCSVM) <cit.>, Deep Support Vector Data Description (DeepSVDD) <cit.>, and Variational Autoencoders (VAE) <cit.>. On the other hand, pretext-task-based methods train models for specific tasks, such as reconstruction <cit.>, inpainting <cit.>, and denoising <cit.>, using normal data only. The model is expected to perform these tasks well on normal data but poorly when presented with abnormal data. Previous works have explored the effectiveness of pretext-task-based methods for anomaly detection.
According to such prior work, the abnormality in medical image analysis is very heterogeneous and complicated, including, but not limited to, the existence, shape, and density of anomalies and other sensory abnormalities. Abnormalities in sensory attributes can also lead to content abnormalities, such as local abnormalities (captured by a pixel-wise reconstruction loss) or more global abnormalities (captured by a perceptual loss at the content level). Unfortunately, most prior anomaly detection methods were optimized for a specific "known" abnormality (e.g., brain tumor, bone fracture, cell types). Moreover, even though only the normal images were used in the training process, the abnormal images were often employed during the validation process (e.g., epoch selection, hyper-parameter tuning), which might leak the supposed "unknown" abnormality unintentionally. Whether an anomaly detection strategy can perform consistently well for various kinds of anomalies remains an open question (Fig. 1).
In this study, we compare the performance of multiple representative anomaly detection methods and introduce a decision-level ensemble to take advantage of different methods so as to capture multiple kinds of anomalies. Meanwhile, an overlooked but crucial problem in training an anomaly detection model is the selection of the training epoch. The choice of a suitable training epoch can influence the results significantly, yet it is neglected by many anomaly detection works. Different from regular classification problems, there may be no anomaly data available in the training phase, so classification accuracy on a validation set may not be applicable; moreover, such a validation set may introduce a bias toward known abnormalities. In this work, we investigated different epoch selection strategies, including a fixed number of epochs, the loss of normal data in the validation set, and a dynamic epoch selection method proposed by Reiss et al. <cit.>, and compared their performance with that of a model selected by a validation set with both normal and abnormal data available.
The contribution of this paper is threefold:
∙ Firstly, we compare multiple representative anomaly detection methods on various medical datasets.
∙ Secondly, we investigate the inevitable but often neglected issues on how to unbiasedly select the optimal anomaly detection model during the validation phase using only normal images.
∙ Thirdly, we propose a simple decision-level ensemble method to leverage anomaly detection without knowing the abnormality. Extensive experiments were conducted on six datasets with five different anomaly detection methods.
§ METHODS
§.§ Anomaly detection benchmarks
In this work, we selected and compared five representative anomaly detection methods for image data. The structures of these methods are displayed in Fig.2.
1) Auto-encoder with pixel-wise loss. The auto-encoder is a basic reconstruction-based method: a model is trained to reconstruct normal images well, so it is expected to reconstruct abnormal images poorly. Images are reconstructed from a bottleneck feature vector obtained by downsampling with convolutional and pooling layers. The pixel-wise loss supervises the reconstruction training and is used as the anomaly score.
2) Auto-encoder with perceptual loss. Instead of the pixel-wise reconstruction loss, the perceptual loss measures the content similarity between real and reconstructed images. Especially for pathology images, dense regions with many edges are hard to reconstruct, so pixel-wise reconstruction errors caused by such regions may be confused with errors caused by anomalous inputs. Measuring similarity in the higher-level embedding space of a pre-trained network can therefore also benefit anomaly detection.
3) SkipGanomaly. <cit.> SkipGanomaly is an enhanced variant of GANomaly <cit.>, which has been developed to generate normal images through the use of both skip connections and adversarial loss. Compared to the auto-encoder-based method, the generated images were found to be less blurry. Zehnder et al <cit.> applied it successfully in anomaly detection in pathology images of breast cancer.
4) IGD. <cit.> The IGD method is based on VAE density estimation for anomaly detection. It constrains a smooth, Gaussian-shaped latent space for normal data with adversarially interpolated training samples. Specifically, it not only constrains the latent space with the regular VAE design but also forces the model to predict the interpolation coefficient of normal embeddings.
5) PANDA. <cit.> The PANDA method draws inspiration from the DeepSVDD method <cit.>. In PANDA, the encoder used for image embedding is an ImageNet pre-trained encoder instead of an encoder learned from reconstruction tasks, because the pre-trained encoder already has strong representational ability. The embedding is then fine-tuned with a distance loss to a fixed embedding center to compact the embedding space, which improves the overall efficiency and effectiveness of the anomaly detection model. In addition, an elastic regularization inspired by continual learning is added to combat the collapse of the compacted space. The distance of an embedding to the fixed center is used as the anomaly score.
The above five representative anomaly detection methods cover both pretext-task-based and distribution-based anomaly detection. By covering multiple perspectives, these methods are expected to be effective in detecting various anomalies, so combining them into an ensemble model can increase the overall robustness of the anomaly detection system. A toy sketch of the reconstruction-based scoring used by methods 1) and 2) is given below.
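The following PyTorch sketch is ours; the layer counts and sizes do not reproduce the architecture used in the experiments. It shows a toy convolutional auto-encoder with a pixel-wise anomaly score; replacing the mean squared error by a distance between pre-trained VGG features of the input and the reconstruction would give the perceptual variant.

import torch
import torch.nn as nn

class ToyConvAE(nn.Module):
    # toy convolutional auto-encoder for 3x64x64 inputs (illustrative only)
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

def pixelwise_anomaly_score(model, images):
    # mean squared reconstruction error per image, used as the anomaly score
    model.eval()
    with torch.no_grad():
        recon = model(images)
        return ((recon - images) ** 2).flatten(1).mean(dim=1)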
§.§ Model selection strategies during the validation stage
An important yet overlooked issue in anomaly detection is the selection of a suitable training epoch for the model. Stopping the training process at different epochs can lead to significantly different outcomes, but previous research has not adequately addressed how the epoch is chosen. Unlike typical classification tasks that use both normal and abnormal data in a validation set for epoch selection, an anomaly detection model should ideally see only normal images, even during the validation stage: not only because abnormal cases can be rare to obtain, but also because the model is expected to be a truly "unbiased" anomaly detector that deals with unknown abnormalities. Unfortunately, many prior works employed abnormal images during the validation phase, which leaks the known abnormality to the model, while others set a fixed number of training epochs, which may not fully exploit the model's performance. In this paper, we evaluate (1) the performance gap when only normal data are used in both the training and validation phases and (2) how to select the optimal anomaly detection model using only normal images.
The first strategy is to employ both normal and abnormal images during the validation stage and then select the best model or tune hyper-parameters based on the best binary classification performance. This was widely used in prior works, yet it leaks the abnormality to the model: the resulting models are selected for the best performance on the "known" abnormality.
Here, we investigate the alternative strategies that only use the normal images during the validation phase:
Strategy 1) Assessing the loss of normal samples in the validation set, which provides an indication of how well the model has been trained for the pretext task, such as image reconstruction.
Strategy 2) Sample-wise early stopping proposed by Reiss et al. <cit.>. First, multiple model checkpoints are saved during the training phase. Then, for each sample in the testing phase, its anomaly score at each checkpoint is normalized by the corresponding average anomaly score of the normal samples in the validation set. The checkpoint with the maximal ratio indicates the best separation for this testing sample, and the maximal ratio is used as the anomaly score of this sample; a minimal sketch is given below.
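A possible vectorized realization of this sample-wise strategy (our sketch; it assumes the raw scores have already been computed at every saved checkpoint) is:

import numpy as np

def samplewise_anomaly_scores(test_scores, val_normal_scores):
    # test_scores[c]: raw anomaly scores of all test samples at checkpoint c
    # val_normal_scores[c]: scores of the normal validation samples at checkpoint c
    # each test score is divided by the mean normal validation score of its
    # checkpoint; the maximum ratio over checkpoints is the final anomaly score
    ratios = np.stack([scores / np.mean(normal)
                       for scores, normal in zip(test_scores, val_normal_scores)])
    return ratios.max(axis=0)          # one score per test sample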
§.§ Model Ensemble
To create an ensemble of different anomaly detection methods, the range of anomaly scores of the normal validation data is used to normalize the anomaly scores on the testing set, as shown in Eq. (<ref>). This prevents a method whose unnormalized scores are small from being drowned out by a method with much larger score magnitudes. Once the anomaly scores α_i have been normalized, the average ensemble strategy is used to combine the scores of the k methods (Eq. (<ref>)):
α_ensemble = 1/k∑_i=1^kα̂_i,
α̂_i = (α_i - min(N_i))/(max(N_i) - min(N_i)),
where N_i is the set of anomaly scores of the normal data in the validation set.
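A minimal sketch of this normalization and averaging (ours) reads:

import numpy as np

def ensemble_anomaly_scores(test_scores, val_normal_scores):
    # test_scores[i]: test-set scores of method i
    # val_normal_scores[i]: scores of the normal validation data (the set N_i)
    normalized = []
    for s_test, s_val in zip(test_scores, val_normal_scores):
        lo, hi = np.min(s_val), np.max(s_val)
        normalized.append((np.asarray(s_test) - lo) / (hi - lo))
    return np.mean(normalized, axis=0)  # average over the k methods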
§ EXPERIMENTS
§.§ Dataset
Four medical datasets and two natural image datasets are used to investigate and evaluate the anomaly detection algorithms in this work. 1) Camelyon breast dataset <cit.>. A public dataset for breast cancer metastases detection in digital pathology. Following the previous work <cit.>, patches of size 768×768 were tiled from either healthy tissue or tumor tissue under 40× magnification and used as normal and abnormal data, respectively. 2) In-house colon dataset. A private pathology dataset of healthy colon tissue and Crohn's disease. Patches of size 1812×1812 were labeled by pathologists as normal or abnormal (with disease). 3) Camelyon dataset with artifacts. Nine kinds of common artifacts/corruptions for pathology images were generated on the normal images of the Camelyon dataset using the toolbox released by Zhang et al. <cit.>. 4) Brain tumor dataset. A public dataset containing 2D MRI slices with or without tumors, of size 512×512. 5) Hazelnut dataset. A prevalent computer vision anomaly detection benchmark from MVTec AD <cit.>. 6) Tile dataset. A prevalent computer vision anomaly detection benchmark from MVTec AD <cit.>.
Patches from a given patient appear in only one of the training, validation, and testing sets. The number of data splits is shown in Table <ref>.
§.§ Experimental Setting
The experiments were divided into three parts.
1) Comparison of different anomaly detection models. In Section 2.1, we introduced five anomaly detection methods, which were separately applied to six datasets. Each method was trained for 250 epochs, with a checkpoint saved every 25 epochs, except for the PANDA method, which was trained for 30 epochs with a checkpoint saved every 2 epochs; PANDA suggests early stopping to avoid collapse, and 30 epochs is already twice the suggested number. The autoencoder with a bottleneck structure consists of 4 down/up convolutional blocks. When the input images were resized to 64×64 or 256×256, the length of the bottleneck was 16 or 128, respectively. The output of the convolutional layers in the fourth block of an ImageNet pre-trained VGG-16 was used for the perceptual loss to train the reconstruction networks.
2) Comparison of four training epoch selection strategies.
The most common approaches are setting a fixed number of training epochs and selecting the epoch at which the complete validation set (with both normal and abnormal data) achieves the highest performance. They were compared with the sample-wise model selection and the strategy using the loss of the normal validation data mentioned in Section 2.2.
3) Comparison of ensemble model and individual models. The five individual anomaly detection models were ensembled as Section 2.3 introduced.
Moreover, all images were processed as three-channel inputs, with the channels copied for grayscale images, and then normalized to [0,1]. The batch sizes for 64×64 and 256×256 inputs were 64 and 8, respectively. The default settings of the prior literature and official code were followed unless otherwise specified. To evaluate the capability of the models to discern normal and abnormal data, the Receiver Operating Characteristic Area Under the Curve (ROC-AUC) score was utilized.
§ RESULTS AND DISCUSSION
The results of the ensemble anomaly detection are compared with the individual methods and presented in Table 2. It can be seen that none of the individual anomaly detection methods outperforms the others across all datasets. Even though some methods achieved the best performance on some datasets, they may fail on others. The average ensemble takes advantage of anomaly detection from different aspects and tends to achieve more robust results. Notably, the ensemble method surpasses the best individual models on the Hazelnut and Tile datasets and demonstrates competitive performance on the other datasets. Moreover, the average AUC of the ensemble method across all datasets outperforms the best individual anomaly detection method in our experiments.
Table 3 compares the different model selection methods for the best training epoch. With the help of labeled abnormal data in the validation set, the selected models outperformed the other strategies on most of the datasets. The sample-wise selection method performed close to the complete validation set, but it requires more storage and multiple inference passes for better performance. The epoch selected by the validation set with normal data only performed better than the baseline using a fixed number of epochs, making it an efficient and practical choice for model selection in anomaly detection.
§ CONCLUSION
In this paper, we assess prevalent anomaly detection approaches on six image cohorts. From the empirical validation, we conclude that (1) none of the evaluated methods consistently achieved the best performance across all datasets; (2) current model selection methods typically include abnormal images during the validation stage and thus leak the abnormality, which unsurprisingly yields better performance compared with a more rigorous model selection method using only normal images during validation; and (3) our proposed simple ensemble method enhanced the performance of anomaly detection without knowing the abnormality.
§ ACKNOWLEDGEMENTS
This work is supported by the Leona M. and Harry B. Helmsley Charitable Trust grant G-1903-03793, NSF CAREER 1452485.
|
http://arxiv.org/abs/2307.02008v1
|
20230705033750
|
Rogue waves and their patterns for the coupled Fokas-Lenells equations
|
[
"Liming Ling",
"Huajie Su"
] |
nlin.SI
|
[
"nlin.SI"
] |
School of Mathematics, South China University of Technology, Guangzhou, China 510641
[email protected]
School of Mathematics, South China University of Technology, Guangzhou, China 510641
[email protected]
In this work, we explore the rogue wave patterns in the coupled Fokas-Lenells equation by using the Darboux transformation.
We demonstrate that when one of the internal parameters is large enough,
the general high-order rogue wave solutions generated at a branch point of multiplicity three can be decomposed into some
first-order outer rogue waves and a lower-order inner rogue wave.
Remarkably, the positions and the orders of these outer and inner rogue waves are intimately related to Okamoto polynomial hierarchies.
Keywords: Coupled Fokas-Lenells equation, Asymptotic analysis, Rogue wave pattern, Darboux transformation.
2020 MSC: 35Q55, 35Q51, 37K10, 37K15, 35Q15, 37K40.
Rogue waves and their patterns for the coupled Fokas-Lenells equations
Huajie Su
August 1, 2023
======================================================================
§ INTRODUCTION
Integrable equations, such as
the classical nonlinear Schrödinger (NLS) equation <cit.>
and the derivative-type NLS equation <cit.>,
play a crucial role in describing nonlinear wave fields. As is well known,
the NLS equation is an applicable model for describing picosecond short pulses,
but it is not effective for subpicosecond or femtosecond pulses.
In this case, higher-order nonlinear effects need to be considered.
In the 1980s, Hasegawa and Kodama proposed the high-order NLS equation,
from which several integrable models can be derived, such as the Hirota equation,
derivative NLS equation, and Sasa-Satsuma equation.
In 2009, after recalling certain aspects of the standard derivation of the NLS equation in nonlinear fiber optics,
Lenells <cit.> derived the following integrable model
u_t-ν u_tx+γ u_xx+ρ |u|^2(u+ν u_x)=0
when taking into account certain terms that are normally ignored. This model was first derived by
Fokas with the aid of bi-Hamiltonian methods <cit.>, so it is named the Fokas-Lenells (FL) equation.
After applying a gauge and coordinate transformation, the equation mentioned above can be reduced to <cit.>
u_xt+u+ |u|^2 u_x=0.
Fokas and Lenells provided contributions by deriving the Hamiltonian structure
and inverse scattering method (ISM) of the integrable FL equation in <cit.>.
Since then, several distinct types of solutions to the FL equation have been constructed using different techniques.
The rogue wave solutions were derived in <cit.>,
the dark soliton solutions were constructed using the Hirota bilinear method <cit.>,
and the algebraic geometry solutions were constructed by Zhao et al. <cit.>.
However, in the birefringent optical fiber systems, two wave packets of different carrier frequencies need to be considered.
The corresponding coupled Fokas-Lenells (CFL) system
which is given in <cit.>
i D_ξ q_1,τ-η/2 q_1,ξξ + (2|q_1|^2+σ|q_2|^2) D_ξ q_1 + σ q_1q_2^* (D_ξ q_2)=0,
i D_ξ q_2,τ-η/2 q_2,ξξ + (2σ|q_2|^2+|q_1|^2) D_ξ q_2 + q_2q_1^* (D_ξ q_1)=0,
can be utilized to describe the propagation of ultrashort optical pulses <cit.>
in the study of ultrafast optics and hydrodynamics,
where η=± 1 is the type of dispersion with σ=± 1, D_ξ=1+ iν∂/∂ξ
is a differential operator and ν is the perturbation parameter of the Manakov system.
The CFL system is also a generalization of the Manakov system that takes into account more physical effects than the latter <cit.>.
The Manakov system includes the terms of group-velocity dispersion and self- and cross-phase modulation.
Additionally, the CFL system takes into account the effects of space-time coupling <cit.>
and self-steepening <cit.>.
These terms are obtained by considering the slowly varying envelope approximation in <cit.>.
The CFL system (<ref>), with ξ = νζ and the transformation
ζ=η x- η t, τ= -2 ν^2 t, q_i= i/2ν e^η ix+η itu_i,
yields the CFL equations
u_1,xt+u_1+ i(|u_1|^2+1/2σ |u_2|^2)u_1,x+ i/2σ u_1u_2^*u_2,x= 0,
u_2,xt+u_2+ i(σ |u_2|^2+1/2|u_1|^2)u_2,x+ i/2u_2u_1^*u_1,x= 0,
which were initially proposed by Guo and Ling <cit.> using the matrix generalization of the Lax pair.
Ling, Feng, and Zhu delved into the integrability of the CFL equations in <cit.>
and constructed multi-Hamiltonian structures using the Tu scheme.
The Lax pair of the CFL equations is
Φ_x= 𝐔(x,t;λ)Φ, 𝐔(x,t;λ)= iλ^-2σ_3+λ^-1𝐐_x,
Φ_t= 𝐕(x,t;λ)Φ, 𝐕(x,t;λ)= i(1/4λ^2σ_3+1/2σ_3(𝐐^2-λ𝐐)),
where
σ_3=[ 1 0 0; 0 -1 0; 0 0 -1 ]
, 𝐐=
[ 0 v_1 σ v_2; u_1 0 0; u_2 0 0 ].
The zero curvature equation
𝐔_t-𝐕_x+[𝐔,𝐕]=0 ([𝐔,𝐕]≡𝐔𝐕-𝐕𝐔
is the commutator) for Lax pair (<ref>) yields the CFL equations
with the symmetric condition v_i=u_i^*,i=1,2 (the superscript ^* denotes the complex conjugate).
Some investigations have already been carried out on the CFL equations, including
the Riemann-Hilbert approach <cit.> and modulation instability <cit.>.
On the other hand, due to the integrability,
we can construct different types of exact solutions for the CFL equations utilizing the methods of integrable systems.
In 2017, Zhang et al. constructed the solitons, breathers, and rogue waves via the Darboux transformation of
the integrable CFL equations <cit.>.
In 2018, Ling et al. utilized the generalized Darboux transformation to obtain general soliton solutions
<cit.>, such as bright solitons, bright-dark solitons, and others.
The general rogue wave solutions were constructed by Ye et al. <cit.> in 2019, and some localized waves were constructed
by Yue et al. <cit.> in 2021.
Drawing upon the rogue wave solutions,
it has been demonstrated that lower-order rogue waves can have special patterns
as evidenced in various graphs <cit.>.
More specifically, for the CFL equations, using the Darboux transformation,
we can construct rogue wave solutions <cit.> at the branch points of multiplicity two and three on the
Riemann surface which is given by the spectral characteristic polynomial.
In these two cases, Ye et al. <cit.> presented figures illustrating
first-order and second-order rogue waves, showcasing their doublet, triplet, quartet, and sextet states.
The question naturally arises as to how to study these particular patterns for high-order rogue waves.
Recently, the study of rogue wave patterns has become popular in the field of rogue waves, since such patterns can be used to predict higher-order rogue wave events
and to recognize their decomposition mechanism.
The roots of special polynomials have been found to be closely associated with rogue wave patterns in various equations,
as evidenced by previous studies.
In 2021, Yang et al. explored the rogue wave patterns of the NLS equation that corresponds to the Yablonskii-Vorob'ev hierarchies in <cit.>.
In their subsequent work <cit.> in 2023, they examined the rogue wave patterns of the Manakov equations
and the three-wave resonant interaction equation associated with Okamoto polynomial hierarchies.
In <cit.>, Zhang et al. demonstrated that the rogue wave patterns of the vector NLS equation are associated with
generalized Wronskian-Hermite polynomials.
To the best of our knowledge, there are no studies on the rogue wave patterns for the CFL equations.
The main contribution of this work is to study the patterns of rogue waves generated at the branch points
of multiplicity three <cit.>.
Actually, the patterns of rogue wave solutions generated at branch points of different multiplicity are associated with different polynomials.
The case of multiplicity two is associated with the Yablonskii-Vorob’ev polynomial hierarchies.
An in-depth analysis of the rogue wave solutions generated in the case of multiplicity two needs to be given separately.
In this work, we concentrate on the case of multiplicity three and analyze the rogue wave patterns,
which are associated with Okamoto polynomial hierarchies <cit.>.
In contrast to previous studies on rogue wave patterns <cit.>,
we utilize the Lax pair and the Darboux transformation to construct rogue wave solutions.
Considering that our research is rooted in the integrability of the CFL equations,
it is conceivable that a similar methodology can be applied to other general integrable systems,
enabling the derivation and analysis of rogue waves and their associated patterns.
We organize this work as follows.
In Section <ref>, we introduce Okamoto polynomial hierarchies and the Darboux transformation for the Lax pair.
In Section <ref>, we introduce the plane wave solutions for the CFL equations and study the branch points of Riemann surfaces given by
the spectral characteristic polynomial. At the branch point of multiplicity three, we construct high-order rogue wave solutions.
In Section <ref>, we analyze the patterns of the rogue wave solutions generated at the branch point of multiplicity three.
By utilizing the root structures of Okamoto polynomial hierarchies, the rogue wave patterns have two parts:
the outer region and the inner region.
We decompose the rogue wave solutions into some first-order rogue wave solutions in the outer region
and a lower-order rogue wave solution in the inner region.
§ PRELIMINARIES
To initiate our analysis of rogue wave patterns in the CFL equations,
we will provide some preliminaries.
This section will cover Okamoto polynomial hierarchies and the Darboux transformations
for the Lax pair (<ref>).
Okamoto polynomial hierarchies, as outlined in the study by Yang et al. <cit.>,
play a crucial role in the analysis of rogue wave patterns.
The rogue wave patterns indicate the specific positions of rogue waves when one of the internal parameters is sufficiently large.
Furthermore, the Darboux transformation is a powerful tool for constructing solitonic solutions <cit.>,
as it enables us to derive the rogue wave solutions that we aim to investigate.
§.§ Okamoto polynomial hierarchies
Okamoto polynomial hierarchies <cit.>
are a generalization of the Okamoto polynomials <cit.>.
Okamoto demonstrated that the logarithmic derivative of the Okamoto polynomials
yields rational solutions to the Painlevé IV equation.
Later, Kajiwara and Ohta discovered the determinant representation of Okamoto polynomials
using Schur polynomials <cit.>.
Building upon this discovery,
the determinant representation of Okamoto polynomials can be generalized to define its hierarchies <cit.>.
Before introducing Okamoto polynomial hierarchies, it is necessary to define Schur polynomials.
Given an infinite dimensional vector
𝐱=(x_1,x_2,⋯)∈ℂ^∞, the Schur polynomials S_n are defined by
∑_n=0^∞S_n(𝐱)ϵ^n=exp(∑_n=1^∞x_nϵ^n).
We also define S_n(𝐱)=0 for n≤ -1.
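For illustration (this sketch is ours and is not part of the derivation), the first few Schur polynomials can be generated symbolically by truncating the generating function above; note that only x_1,…,x_n can influence S_0,…,S_n.

import sympy as sp

def schur_polynomials(n, xs):
    # S_0, ..., S_n generated by exp(sum_k x_k eps^k); xs = [x_1, ..., x_n]
    eps = sp.Symbol('epsilon')
    gen = sp.exp(sum(xk * eps**(k + 1) for k, xk in enumerate(xs)))
    ser = sp.series(gen, eps, 0, n + 1).removeO()
    return [sp.expand(ser.coeff(eps, j)) for j in range(n + 1)]

x1, x2, x3 = sp.symbols('x_1 x_2 x_3')
print(schur_polynomials(3, [x1, x2, x3]))
# -> [1, x_1, x_1**2/2 + x_2, x_1**3/6 + x_1*x_2 + x_3] (up to term ordering)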
In order to analyze the rogue wave patterns, we introduce the following propositions regarding Schur polynomials.
For any complex η≠ 0 and any infinite-dimensional vector 𝐱=(x_1,x_2,⋯)∈ℂ^∞, we have
S_n(x_1,x_2,⋯)=η^nS_n(x_1η^-1,x_2η^-2,⋯).
Given 𝐱=(x_1,x_2,⋯)∈ℂ^∞ and an integer k≥ 2,
if x_i=𝒪(η) for all i≠ k and
x_k=𝒪(η^k), then for n≥ 2 we have the asymptotic expansion
S_n(𝐱)=S_n(𝐯)+𝒪(η^n-1), k≥ 3,
𝒪(η^n-2), k=2,
where 𝐯=(x_1,0,⋯,0,x_k,0,⋯). Especially, S_n(𝐱)=S_n(𝐯) for n=1,2.
For Proposition <ref>, we can establish the following identity
∑_n=0^∞S_n(𝐱)ϵ^n =exp(∑_n=1^∞x_nϵ^n)
=exp(∑_n=1^∞η^-nx_n(ϵη)^n)
=∑_n=0^∞S_n(x_1η^-1,x_2η^-2,⋯)(ϵη)^n
=∑_n=0^∞η^nS_n(x_1η^-1,x_2η^-2,⋯)ϵ^n,
and group terms according to the power of ϵ.
To prove Proposition <ref>, if k≥ 3, we proceed with the following calculations
∑_n=0^∞(S_n(x_1η^-1,⋯,x_k-1η^-(k-1),x_kη^-k,x_k+1η^-(k+1),⋯)-
S_n(x_1η^-1,⋯,0,x_kη^-k,0,⋯))ϵ^n
= exp(x_1η^-1ϵ+x_kη^-kϵ^k)(
exp(∑_n=2,n≠ k^∞x_n(ϵ/η)^n)-1
)
= exp(𝒪(1)ϵ+𝒪(1)ϵ^k)(
exp(∑_n=2,n≠ k^∞𝒪(η^-n+1)ϵ^n)-1
)
= (∑_n=0^∞𝒪(1)ϵ^n)
(𝒪(η^-1)ϵ^2+⋯)
= ∑_n=2^∞𝒪(η^-1)ϵ^n.
If k=2,
∑_n=0^∞(S_n(x_1η^-1,x_2η^-2,x_3η^-3,⋯)-
S_n(x_1η^-1,x_2η^-2,0,⋯))ϵ^n
= exp(x_1η^-1ϵ+x_2η^-2ϵ^2)(
exp(∑_n=3^∞x_n(ϵ/η)^n)-1
)
= exp(𝒪(1)ϵ+𝒪(1)ϵ^2)(
exp(∑_n=3^∞𝒪(η^-n+1)ϵ^n)-1
)
= (∑_n=0^∞𝒪(1)ϵ^n)
(𝒪(η^-2)ϵ^3+⋯)
= ∑_n=3^∞𝒪(η^-2)ϵ^n.
By utilizing Proposition <ref> and grouping the terms with respect to ϵ,
we complete the proof.
Based on Proposition <ref>, it is established that the Schur polynomials can
be expressed as simplified polynomials involving only two parameters x_1,x_k with error terms
when one of the parameters is large enough. This simplification provides the basis
for our investigation into Okamoto polynomial hierarchies
<cit.>. To define the hierarchies,
we consider a special form of Schur polynomials p_j^[m](z) which are defined by
∑_j=0^∞ p_j^[m](z) ϵ^j=exp(z ϵ+ϵ^m),
where z∈ℂ.
Then we define k-type Okamoto polynomial hierarchies <cit.> for k=0,1 respectively
W_N^[k,m](z)=c_N^[k](p_3i-j-k^[m])_1≤ i,j ≤ N=c_N^[k]|
p_2-k^[m](z) p_1-k^[m](z) ⋯ p_3-N-k^[m](z)
p_5-k^[m](z) p_4-k^[m](z) ⋯ p_6-N-k^[m](z)
⋮ ⋮ ⋱ ⋮
p_3N-1-k^[m](z) p_3N-2-k^[m](z) ⋯ p_2N-k^[m](z)
|,
where
c_N^[k]=3^{-N(N-1)/2}(2-k) ! (5-k) ! ⋯(3 N-1-k) !/(0 ! 1 ! ⋯(N-1) !),
which ensures that the leading-order coefficient of W_N^[k,m](z) with respect to z is equal to 1.
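A direct symbolic transcription of the definitions above is given below (our sketch, assuming sympy is available); it can be used, for instance, to check that W_N^[k,m](z) is monic of degree N(N+1-k), as stated in the theorem below.

import sympy as sp

z, eps = sp.symbols('z epsilon')

def p_hierarchy(m, J):
    # p_0^[m](z), ..., p_J^[m](z) generated by exp(z*eps + eps^m)
    ser = sp.series(sp.exp(z * eps + eps**m), eps, 0, J + 1).removeO()
    return [sp.expand(ser.coeff(eps, j)) for j in range(J + 1)]

def okamoto_W(N, k, m):
    # W_N^[k,m](z) = c_N^[k] * det( p_{3i-j-k}^[m](z) )_{1<=i,j<=N}
    J = 3 * N - 1 - k                      # largest index that can appear
    p = p_hierarchy(m, J)
    pj = lambda j: p[j] if 0 <= j <= J else sp.Integer(0)
    M = sp.Matrix(N, N, lambda i, j: pj(3 * (i + 1) - (j + 1) - k))
    c = sp.Integer(3)**sp.Rational(-N * (N - 1), 2)
    for i in range(1, N + 1):              # (2-k)!(5-k)!...(3N-1-k)!
        c *= sp.factorial(3 * i - 1 - k)
    for i in range(N):                     # divided by 0!1!...(N-1)!
        c /= sp.factorial(i)
    return sp.expand(c * M.det())

print(okamoto_W(2, 0, 4))                  # -> z**6 - 40*z**2
print(sp.degree(okamoto_W(2, 0, 4), z))    # -> 6 = N*(N+1-k)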
To study the decomposition of rogue wave solutions, we also define
W_N,1^[k,m](z)=c_N^[k] (
|
p_2-k^[m](z) ⋯ p_4-N-k^[m](z) p_1-N-k^[m](z)
p_5-k^[m](z) ⋯ p_7-N-k^[m](z) p_4-N-k^[m](z)
⋮ ⋮ ⋱ ⋮
p_3N-1-k^[m](z) ⋯ p_2N+1-k^[m](z) p_2N-2-k^[m](z)
| .
.+
|
p_2-k^[m](z) ⋯ p_2-N-k^[m](z) p_3-N-k^[m](z)
p_5-k^[m](z) ⋯ p_5-N-k^[m](z) p_6-N-k^[m](z)
⋮ ⋮ ⋱ ⋮
p_3N-1-k^[m](z) ⋯ p_2N-1-k^[m](z) p_2N-k^[m](z)
|
).
The term W_N,1^[k,m](z) is the sum of two variations of W_N^[k,m](z).
One variation is to subtract two from the indices of the elements in the penultimate column of W_N^[k,m](z),
and the other variation is to subtract two from the indices of the elements in the last column of W_N^[k,m](z).
For the CFL equations, the expression of rogue wave solutions in the outer region includes
terms W_N,1^[k,m](z).
Now we turn to the root structures of the Okamoto polynomial hierarchies (<ref>),
which are important in understanding rogue wave patterns.
Previous studies have demonstrated that all Okamoto polynomials (the case m=2) have simple
roots <cit.>.
When m and N are small, it can be observed that the nonzero roots of the Okamoto polynomial hierarchies are typically simple,
while the zero roots may be multiple roots <cit.>.
However, whether all non-zero roots of the hierarchies are simple remains a conjecture.
Nevertheless, some results have already been obtained.
Yang et al. have studied the root distributions of the Okamoto polynomial hierarchies <cit.>;
their theorem, restated below, also reveals the symmetry in the patterns of rogue waves.
To analyze the root distributions of Okamoto polynomial hierarchies, let N_0 be the remainder of N divided by m,
we define (N_1^[k],N_2^[k]) as follows: if m≡ 1 (mod 3),
(N_1^[0],N_2^[0])=
(N_0,0), 0≤ N_0≤ [m/3],
([m/3],N_0-[m/3]), [m/3]+1≤ N_0≤ 2 [m/3],
(m-1-N_0,m-1-N_0), 2[m/3]+1≤ N_0≤ m-1,
(N_1^[1],N_2^[1])=
(0,N_0), 0≤ N_0≤ [m/3],
([m/3]-1,N_0-1-[m/3]), [m/3]+1≤ N_0≤ 2 [m/3]+1,
(m-1-N_0,m-N_0), 2[m/3]+2≤ N_0≤ m-1.
If m≡ 2 (mod 3),
(N_1^[0],N_2^[0])=
(N_0,0), 0≤ N_0≤ [m/3],
(N_0-1-[m/3],[m/3]), [m/3]+1≤ N_0≤ 2 [m/3]+1,
(m-1-N_0,m-1-N_0), 2[m/3]+2≤ N_0≤ m-1,
(N_1^[1],N_2^[1])=
(0,N_0), 0≤ N_0≤ [m/3]+1,
(N_0-1-[m/3],[m/3]+1), [m/3]+2≤ N_0≤ 2 [m/3]+1,
(m-1-N_0,m-N_0), 2[m/3]+2≤ N_0≤ m-1,
where the symbol [x] denotes the largest integer n∈ℤ with n≤ x.
These notations are used to study the degree of zero roots of Okamoto polynomial hierarchies
and analyze the rogue wave patterns in the inner region.
The following theorem <cit.> holds:
Given an integer m≥2, the Okamoto polynomial hierarchy W_N^[k,m](z) is monic of degree N(N+1-k).
If m is not a multiple of 3, then W_N^[k,m](z) have the decomposition
W_N^[k,m](z)=z^N^[k]q_N^[k,m](z^m),
where q_N^[k,m](ξ) is a monic polynomial with respect to ξ with all real-value coefficients and a nonzero constant term.
The multiplicity of the zero root is
N^[k]=N_1^[k](N_1^[k]-N_2^[k]+1)+(N_2^[k])^2.
If m is a multiple of 3, then
W_N^[k,m](z)=z^N(N+1-k).
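Since the case distinctions above are easy to misread, we add a plain transcription (ours, not from the paper) that computes (N_1^[k],N_2^[k]) and the zero-root multiplicity N^[k] of the theorem above; it assumes m is not a multiple of 3 and k∈{0,1}.

def zero_root_multiplicity(N, k, m):
    # multiplicity N^[k] of the zero root of W_N^[k,m]; assumes m % 3 != 0, k in {0, 1}
    N0, t = N % m, m // 3          # N0 = remainder of N divided by m, t = [m/3]
    if m % 3 == 1:
        if k == 0:
            if N0 <= t:
                N1, N2 = N0, 0
            elif N0 <= 2 * t:
                N1, N2 = t, N0 - t
            else:
                N1, N2 = m - 1 - N0, m - 1 - N0
        else:
            if N0 <= t:
                N1, N2 = 0, N0
            elif N0 <= 2 * t + 1:
                N1, N2 = t - 1, N0 - 1 - t
            else:
                N1, N2 = m - 1 - N0, m - N0
    else:  # m % 3 == 2
        if k == 0:
            if N0 <= t:
                N1, N2 = N0, 0
            elif N0 <= 2 * t + 1:
                N1, N2 = N0 - 1 - t, t
            else:
                N1, N2 = m - 1 - N0, m - 1 - N0
        else:
            if N0 <= t + 1:
                N1, N2 = 0, N0
            elif N0 <= 2 * t + 1:
                N1, N2 = N0 - 1 - t, t + 1
            else:
                N1, N2 = m - 1 - N0, m - N0
    return N1 * (N1 - N2 + 1) + N2**2

For instance, zero_root_multiplicity(2, 0, 4) returns 2, which matches W_2^[0,4](z)=z^2(z^4-40) produced by the earlier symbolic sketch.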
Based on Propositions <ref> and <ref> and the root structures of Okamoto polynomial hierarchies in Theorem <ref>,
we can analyze the rogue wave patterns in Section <ref> when m is not a multiple of 3. The case in which m is a multiple of
3 is excluded from consideration; the reason will become clear in the proof of the rogue wave patterns.
When we consider the rogue wave decomposition in the inner region, the proof is similar
to that of Theorem <ref>.
The root distributions indicate the positions in the rogue wave patterns.
More specifically, up to a linear transformation, the positions of the rogue waves correspond to
the root distributions of Okamoto polynomial hierarchies, and the orders of the rogue waves correspond
to the multiplicities of the roots.
§.§ Darboux transformation
Now we turn to introducing the Darboux transformation <cit.>, which is used to convert the Lax pair (<ref>) into a new one.
We denote the new elements of the Lax pair (<ref>) by adding the superscript ^[N], as in the new potential functions 𝐐^[N].
By establishing a relationship between the original and new potential functions, together with initial seed solutions,
we can construct a variety of new solutions.
In Section <ref>, we will focus on specific parameter selections for the Darboux transformation,
which enable us to generate rogue wave solutions from plane wave solutions.
We introduce the N-fold Darboux transformation 𝐓_N(λ;x,t), as presented in <cit.>.
Let 𝐀_i=|x_i⟩⟨ y_i|𝐉, where |x_i⟩=(x_i,1,x_i,2,x_i,3)^T and
|y_i⟩=(y_i,1,y_i,2,y_i,3)^T are three dimensional complex vectors, and
𝐉=diag (1,-1,-σ). The transformation from |y_i⟩ to |x_i⟩ is
[|x_1,1⟩,|x_2,1⟩,⋯,|x_N,1⟩] =
[|y_1,1⟩,|y_2,1⟩,⋯,|y_N,1⟩]𝐁^-1, 𝐁=(b_ij)_N× N,
[|x_1,k⟩,|x_2,k⟩,⋯,|x_N,k⟩] =
[|y_1,k⟩,|y_2,k⟩,⋯,|y_N,k⟩]𝐌^-1, 𝐌=(m_ij)_N× N, k=2,3,
and the coefficients are given by
b_ij=⟨ y_i|𝐉|y_j⟩/(λ_i^*-λ_j)+⟨ y_i|𝐉σ_3|y_j⟩/(λ_i^*+λ_j), m_ij=⟨ y_i|𝐉|y_j⟩/(λ_i^*-λ_j)-⟨ y_i|𝐉σ_3|y_j⟩/(λ_i^*+λ_j).
Let the superscript ^† denote the complex conjugate transpose;
then the N-fold Darboux transformation has the following form.
By the following N-fold Darboux transformation
𝐓_N(λ;x,t)=𝕀+∑_i=1^N[𝐀_i/(λ-λ_i^*)-σ_3𝐀_iσ_3/(λ+λ_i^*)],
the Lax pair (<ref>) can be converted into a new one.
Then the Bäcklund transformation between old and new potential functions is
𝐐^[N]=𝐐+∑_i=1^N(𝐀_i-σ_3𝐀_iσ_3),
i.e.
u_i^[N]=u_i+2𝐘_i𝐌^-1𝐘^†, i=1,2,
where
𝐘=[ y_1,1 y_2,1 ⋯ y_N,1 ], 𝐘_i=[ y_1,i+1 y_2,i+1 ⋯ y_N,i+1; ].
In this paper, we will consider u_i≠ 0, and it follows that
u_i^[N]=u_i det(𝐌+2u_i^-1𝐘^†𝐘_i)/det(𝐌), i=1,2,
since
1+𝐘_i𝐌^-1𝐘^†=det[ 𝐌 𝐘^†; -𝐘_i 1 ]/det(𝐌).
We proceed to analyze the numerator and denominator of the obtained solution (<ref>).
Specifically, by selecting appropriate |y_i⟩ and seed solutions u_i in Theorem <ref>,
the elements of the numerator and denominator in (<ref>) exhibit quadratic forms, which are helpful in
constructing the rogue wave solutions.
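Purely as a computational aid (our sketch, not the construction used later in the paper), the determinant-ratio formula above can be evaluated numerically once the spectral parameters λ_i, the vectors |y_i⟩ and the seed values u_1,u_2 at a point (x,t) are given; the vectors must of course come from solutions of the Lax pair for the result to solve the CFL equations, and λ_i^*±λ_j must be nonzero. The helper name and input layout below are our own choices.

import numpy as np

def darboux_potentials(lam, Y, u_seed, sigma=1):
    # evaluate u_i^[N] = u_i * det(M + 2 u_i^{-1} Y^dagger Y_i) / det(M) at one (x, t)
    # lam: spectral parameters lambda_1..lambda_N; Y: 3 x N array whose columns are
    # the vectors |y_i>; u_seed = (u_1, u_2): seed values at the same point
    lam = np.asarray(lam, dtype=complex)
    Y = np.asarray(Y, dtype=complex)
    N = len(lam)
    J = np.diag([1.0, -1.0, -float(sigma)])
    Jsig3 = J @ np.diag([1.0, -1.0, -1.0])           # J * sigma_3
    M = np.zeros((N, N), dtype=complex)
    for i in range(N):
        for j in range(N):
            yi, yj = Y[:, i], Y[:, j]
            M[i, j] = (yi.conj() @ J @ yj) / (lam[i].conj() - lam[j]) \
                      - (yi.conj() @ Jsig3 @ yj) / (lam[i].conj() + lam[j])
    Yrow = Y[0, :].reshape(1, N)                     # first components y_{i,1}
    new_u = []
    for comp, u in zip((1, 2), u_seed):              # u_1 uses row 2, u_2 uses row 3
        Yi = Y[comp, :].reshape(1, N)
        num = np.linalg.det(M + (2.0 / u) * (Yrow.conj().T @ Yi))
        new_u.append(u * num / np.linalg.det(M))
    return new_u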
§ ROGUE WAVE SOLUTIONS
In Section <ref>, we have discussed the theorem regarding Okamoto polynomial hierarchies and the Darboux transformation for the CFL equations.
In the subsequent section, we will utilize the properties of Schur polynomials
and the root structures of Okamoto polynomial hierarchies to analyze the rogue wave patterns.
Specifically, in this section, we employ Theorem <ref> to construct the rogue wave solutions.
To achieve this, we introduce the seed solutions u_i and select specific |y_i⟩ vectors.
§.§ Seed solution and spectral characteristic polynomial
We will consider the seed solutions in the form of plane wave solutions in Theorem <ref>.
Through these plane wave solutions, we can transform the Lax pair (<ref>) into a system with
constant coefficients. By simultaneously diagonalizing the transformed matrices for 𝐔 and 𝐕 in (<ref>),
we can effectively solve the Lax pair (<ref>) and obtain the fundamental solutions.
It is worth noting that the choices of the parameters |y_i⟩ are connected to the fundamental solutions.
The fundamental solutions are determined by the spectral characteristic polynomial,
which forms a three-sheet Riemann surface. We will investigate the properties at the branch points on the Riemann surface.
These properties play a crucial role in determining the feasibility of constructing the rogue wave solutions
through the Darboux transformation.
It is accessible to obtain the plane wave solutions for the CFL equations (<ref>):
u_i^[0]=a_i e^ iω_i, i=1,2
where
ω_1= b_1x-1/2(2a_1^2+σ a_2^2-2/b_1+σ a_2^2b_2/b_1)t,
ω_2= b_2x-1/2(2σ a_2^2+a_1^2-2/b_2+a_1^2b_1/b_2)t,
the parameters a_i are real numbers and the b_i are nonzero real numbers.
Inserting the seed solutions (<ref>) into the Lax pair (<ref>) and
introducing z=1/λ^2, we solve the Lax pair (<ref>) as a system of ODEs with constant coefficients. Considering the parameter settings a_i≠ 0 and
b_1≠ b_2,
we have the fundamental solutions for the Lax pair (<ref>)
Φ(λ)=𝐃𝐄diag( e^θ_1, e^θ_2, e^θ_3), 𝐃=diag(1, e^ iω_1, e^ iω_2),
where
θ_i = i(κ_i-z)(x+1/2b_1b_2z(κ_i-z+b_1+b_2)t), i=1,2,3
and
𝐄 =[ 1 1 1; a_1b_1/λ(κ_1+b_1) a_1b_1/λ(κ_2+b_1) a_1b_1/λ(κ_3+b_1); a_2b_2/λ(κ_1+b_2) a_2b_2/λ(κ_2+b_2) a_2b_2/λ(κ_3+b_2); ].
The terms κ_i, i=1,2,3 satisfy the algebraic equation
κ/z-2+a_1^2 b_1^2/(κ+b_1)+σ a_2^2b_2^2/(κ+b_2)=0.
Note that λ is the primary spectral parameter of Lax pair (<ref>), but here we use the parameter z.
The algebraic equation (<ref>)
generate a three-sheet Riemann surface
ℛ={(z,κ)∈𝕊^2:κ/z-2+a_1^2 b_1^2/(κ+b_1)+σ a_2^2b_2^2/(κ+b_2)=0},
with projection p:(z,κ)↦ z, where 𝕊 is the Riemann sphere.
Denote α=a_1^2b_1^2+σ a_2^2b_2^2, β=a_1^2b_1^2b_2+σ a_2^2b_1b_2^2, γ=b_1+b_2, δ=b_1-b_2,
the branch points of (<ref>) are determined by the following quartic equation with respect to z:
A_4z^4+ A_3 z^3+A_2 z^2+A_1 z+A_0=0,
where the LHS is the discriminant of the spectral characteristic polynomial (<ref>) with respect to κ. The coefficients are
given by
A_4 =4 α^2-16 γα+16 δ^2+32 β,
A_3=-4 α^3+20 α^2γ-20 αδ^2-12 αγ^2+16 δ^2γ-36 αβ+24 βγ,
A_2 =3 α^2δ^2-2 α^2γ^2-αδ^2γ-3 αγ^3-2 δ^4+6 δ^2γ^2+18 αβγ-18 βδ^2+6 βγ^2-27 β^2,
A_1 =-1/4γ^4α+1/2γ^3β+γ^3δ^2-γδ^4-3/4αδ^4+γ^2αδ^2-9/2γβδ^2,
A_0=1/16δ^6+1/16δ^2γ^4-1/8δ^4γ^2.
The complex (non-real) roots of the quartic equation (<ref>) correspond to the rogue wave solutions of the
CFL equations (<ref>). But generally, it is hard to analyze the roots of the algebraic equation (<ref>).
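Because the coefficients above are easy to mistype, a quick symbolic cross-check is useful. The sketch below (ours, assuming sympy) clears the denominators of the spectral characteristic equation, takes the discriminant with respect to κ, and compares, e.g., the z^4 and constant coefficients with A_4 and A_0.

import sympy as sp

kappa, z, a1, a2, b1, b2, sigma = sp.symbols('kappa z a_1 a_2 b_1 b_2 sigma')

# spectral characteristic equation multiplied by z*(kappa+b1)*(kappa+b2)
char = sp.expand(kappa * (kappa + b1) * (kappa + b2)
                 - 2 * z * (kappa + b1) * (kappa + b2)
                 + z * a1**2 * b1**2 * (kappa + b2)
                 + sigma * z * a2**2 * b2**2 * (kappa + b1))

# branch points: the discriminant with respect to kappa is a quartic in z
quartic = sp.expand(sp.discriminant(char, kappa))
print(sp.degree(quartic, z))                         # -> 4

alpha = a1**2 * b1**2 + sigma * a2**2 * b2**2
beta = a1**2 * b1**2 * b2 + sigma * a2**2 * b2**2 * b1
gamma, delta = b1 + b2, b1 - b2
A4 = 4*alpha**2 - 16*gamma*alpha + 16*delta**2 + 32*beta
A0 = delta**6/16 + delta**2*gamma**4/16 - delta**4*gamma**2/8
print(sp.simplify(quartic.coeff(z, 4) - A4))         # -> 0
print(sp.simplify(quartic.coeff(z, 0) - A0))         # -> 0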
The discriminant of (<ref>) with respect to z is useful to analyze the roots:
Δ≡1/16(2 β+δ^2-γ^2)[(γα-2 β)^2 -α^2 δ^2]×
[ ( 324 δ^2+108 γ^2) β^2+ ( 54 α^2δ^2+18 α^2γ^2-180 αδ^2γ-108 αγ^3+288 δ^4) β
-8 α^3γ^3+27 δ^4α^2-6 α^2δ^2γ^2+27 α^2γ^4-96 αδ^4γ+64 δ^6] ^3.
The equation Δ=0 can be solved with respect to β.
Our main idea is to determine the cases of the roots of algebraic equation (<ref>) by evaluating different values of β.
In this regard, we establish the following properties:
If the parameters (a_1,a_2,b_1,b_2) belong to Ω={(a_1,a_2,b_1,b_2)| a_1,a_2≠ 0, σ b_2≠ 2a_2^-2-b_1a_1^2a_2^-2, b_1≠ b_2},
there are several cases for the roots of (<ref>):
* If 3 α^2-4γα-4 δ^2≥0, let β_1^[a]≤β_2^[a]≤β_3^[a]≤β_4^[a]≤β_5^[a]
be the real roots of Δ=0; there are three cases:
* If β∈ (-∞,β_1^[a])∪(β_2^[a],β_3^[a])∪(β_4^[a],β_5^[a]),
we obtain two real roots and a pair of complex conjugate roots (Fig. <ref>-a).
* If β∈ (β_1^[a],β_2^[a])∪(β_3^[a],β_4^[a])∪(β_5^[a],∞),
we obtain four real roots (Fig. <ref>-b).
* If β∈{β_1^[a],β_2^[a],β_3^[a],β_4^[a],β_5^[a]}, we obtain
one, two, or three real roots, or a pair of complex conjugate roots,
or one real root and a pair of complex conjugate roots (Fig. <ref>-c).
* If 3 α^2-4γα-4 δ^2<0, let β_1^[a]≤β_2^[a]≤β_3^[a]
be the real roots of Δ=0; there are three cases:
* If β∈ (-∞,β_1^[a])∪(β_2^[a],β_3^[a]), we can
obtain two real roots and a pair of complex conjugate roots (Fig. <ref>-a).
* If β∈ (β_1^[a],β_2^[a])∪ (β_3^[a],∞), we can
obtain four real roots or two pairs of complex conjugate roots (Fig. <ref>-b, c).
* If β∈{β_1^[a],β_2^[a],β_3^[a]}, we obtain
one, two, or three real roots, or a pair of complex conjugate roots,
or one real root and a pair of complex conjugate roots (Fig. <ref>-d).
We use the discriminant (<ref>) to study the quartic equation (<ref>), which is a quartic equation in z with real coefficients.
* If 3 α^2-4γα-4 δ^2≥0, we can obtain the following roots of Δ=0:
β_1^[b]= 1/2(γ^2-δ^2), β_2^[b]= 1/2( γ-δ) α, β_3^[b]= 1/2( γ+δ) α,
β_4^[b]= ( -9 δ^2-3 γ^2) α^2+ ( 30 δ^2γ+18 γ^3) α-
48 δ^4+ | ( 3 α-8 γ) δ^
2+αγ^2| √(3)√(3 α^2-4
γα-4 δ^2)/36(3 δ^2+γ^2),
β_5^[b]= ( -9 δ^2-3 γ^2) α^2+ ( 30 δ^2γ+18 γ^3) α-
48 δ^4- | ( 3 α-8 γ) δ^
2+αγ^2| √(3)√(3 α^2-4
γα-4 δ^2)/36(3 δ^2+γ^2).
We relabel these roots in increasing order as β_1^[a]≤β_2^[a]≤β_3^[a]≤β_4^[a]≤β_5^[a]; then
* If β∈ (-∞,β_1^[a])∪(β_2^[a],β_3^[a])∪(β_4^[a],β_5^[a]), then Δ<0.
The quartic equation (<ref>) has two real roots and a pair of complex conjugate roots.
* If β∈ (β_1^[a],β_2^[a])∪(β_3^[a],β_4^[a])∪(β_5^[a],∞), then
Δ>0, so the quartic equation (<ref>) has either four real roots or
two pairs of complex conjugate roots. Since equation (<ref>) is quadratic
in the parameter β, solving it for β gives the roots
β_1(z)= 64 z^3+ ( 48 γ-72 α) z^2+ ( 36 γα-36 δ^2+12 γ^2) z-9 δ^2γ+γ^3+√(Δ_1 ^3)/108 z
,
β_2(z)= 64 z^3+ ( 48 γ-72 α) z^2+ ( 36 γα-36 δ^2+12 γ^2) z-9 δ^2γ+γ^3-√(Δ_1 ^3)/108 z,
where Δ_1≡ 16 z^2+(8γ-12α) z+3δ^2+γ^2.
To distinguish these two cases, we study the functions β_1(z) and β_2(z). If the union
of the ranges of β_1(z) and β_2(z) is ℝ,
then for every β equation (<ref>) has a real root z, and two pairs of complex conjugate roots cannot occur.
Denote the roots of the equation Δ_1=0 by
z_1=3α-2γ+√(3)√(3 α^2-4γα-4 δ^2)/8,
z_2=3α-2γ-√(3)√(3 α^2-4γα-4 δ^2)/8,
we get z_1z_2>0 and
β_1(z_1)=β_2(z_1),β_1(z_2)=β_2(z_2). By direct calculation, we obtain the limit
lim_z→+∞β_1(z)=lim_z→-∞β_2(z)=+∞ and
lim_z→ 0zβ_1(z)=-γ(9δ^2-γ^2)+|3δ^2+γ^2|√(3δ^2+γ^2)/108,
lim_z→ 0zβ_2(z)=-γ(9δ^2-γ^2)-|3δ^2+γ^2|√(3δ^2+γ^2)/108.
Thus lim_z→ 0^+zβ_1(z)>0 and lim_z→ 0^+zβ_2(z)<0. If z_1>0, then since β_1(z) and β_2(z)
are continuous on (0,z_2], the range of β_1(z) contains [β_1(z_2),+∞) and the range of β_2(z) contains (-∞,β_2(z_2)].
If z_1<0, the range of β_1(z) contains (-∞,β_1(z_1)]
and the range of β_2(z) contains [β_2(z_1),+∞).
Hence for any β there exists a real z satisfying the quartic equation (<ref>). Therefore
(<ref>) cannot have two pairs of complex conjugate roots.
* If β∈{β_1^[a],β_2^[a],β_3^[a],β_4^[a],β_5^[a]}, then Δ=0. We obtain
one, two, or three real roots, or a pair of complex conjugate roots,
or one real root and a pair of complex conjugate roots.
* If 3 α^2-4γα-4 δ^2<0, the equation Δ=0 has only the real roots β_1^[b],
β_2^[b] and β_3^[b].
Relabeling them in increasing order as β_1^[a]≤β_2^[a]≤β_3^[a], we have the following cases.
* If β∈ (-∞,β_1^[a])∪(β_2^[a],β_3^[a]), then Δ<0. There are
two real roots and a pair of complex conjugate roots.
* If β∈ (β_1^[a],β_2^[a])∪ (β_3^[a],∞), then Δ>0.
In this case, these roots alone do not allow us to decide whether there exists a real z
satisfying the quartic equation (<ref>); there are either four real roots or two pairs of complex conjugate roots.
* If β∈{β_1^[a],β_2^[a],β_3^[a]}, then Δ=0. There are
one, two, or three real roots, or a pair of complex conjugate roots,
or one real root and a pair of complex conjugate roots.
Below we provide some examples. In Figure <ref>,
the three subfigures (<ref>-a), (<ref>-b) and (<ref>-c) correspond to
the conditions (<ref>), (<ref>) and (<ref>)
in Proposition <ref>, respectively.
In Figure <ref>, subfigure (<ref>-a) corresponds to condition (<ref>),
subfigures (<ref>-b) and (<ref>-c) correspond to condition
(<ref>), and subfigure (<ref>-d) corresponds to condition (<ref>) in Proposition <ref>.
Now we concentrate on the branch point (z^[0],κ^[0])∈ℛ (<ref>) of multiplicity three.
By straightforward calculation, we obtain
κ^[0] =b_1b_2(-b_1-b_2+ i√(3)(b_1-b_2))/(2(b_1^2-b_1b_2+b_2^2)),
z^[0] =b_1/2-b_2/4-3b_2^2(2b_1-b_2)/(4(b_1^2-b_1b_2+b_2^2))+3√(3) i b_1b_2(b_1-b_2)/(4(b_1^2-b_1b_2+b_2^2)), b_1,b_2>0.
As we can see, the imaginary parts of z^[0] and κ^[0] are not zero.
Moreover, the amplitudes a_i can be expressed in terms of b_1 and b_2:
a_1=√(2 b_1)(b_1-b_2)/(b_1^2-b_1b_2+b_2^2),
a_2=√(2 b_2)(b_1-b_2)/(b_1^2-b_1b_2+b_2^2).
Hence the branch point and the amplitudes a_i are all determined by b_1 and b_2.
It is routine to verify that (<ref>) satisfies the equation (<ref>).
Since z^[0] is a complex root (not real), we can generate high-order rogue waves at the point (z^[0],κ^[0]).
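These formulas can be checked numerically: at a branch point of multiplicity three the characteristic equation and its first two derivatives in κ must vanish simultaneously. The following Python sketch (ours, not from the original paper; it assumes σ=1 and the groupings written above) performs this check.

import numpy as np

def triple_branch_point(b1, b2):
    """Branch point (z0, kappa0) of multiplicity three and the matching
    amplitudes a1, a2, expressed through b1 and b2 (sigma = +1 assumed)."""
    D = b1**2 - b1 * b2 + b2**2
    kappa0 = b1 * b2 * (-b1 - b2 + 1j * np.sqrt(3) * (b1 - b2)) / (2 * D)
    z0 = (b1 / 2 - b2 / 4 - 3 * b2**2 * (2 * b1 - b2) / (4 * D)
          + 3j * np.sqrt(3) * b1 * b2 * (b1 - b2) / (4 * D))
    a1 = np.sqrt(2 * b1) * (b1 - b2) / D
    a2 = np.sqrt(2 * b2) * (b1 - b2) / D
    return z0, kappa0, a1, a2

b1, b2 = 1.0, 2.0
z0, k0, a1, a2 = triple_branch_point(b1, b2)
# at a root of multiplicity three, the characteristic equation and its first
# two kappa-derivatives vanish simultaneously
f   = k0 / z0 - 2 + a1**2 * b1**2 / (k0 + b1) + a2**2 * b2**2 / (k0 + b2)
df  = 1 / z0 - a1**2 * b1**2 / (k0 + b1)**2 - a2**2 * b2**2 / (k0 + b2)**2
d2f = 2 * a1**2 * b1**2 / (k0 + b1)**3 + 2 * a2**2 * b2**2 / (k0 + b2)**3
print(abs(f), abs(df), abs(d2f))   # all three should vanish up to rounding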
To construct the local coordinate chart, we expand (z,κ)∈ℛ at (z^[0],κ^[0])
with the following form with respect to ϵ:
z=z(ϵ)=z^[0]+z^[1]ϵ^3, κ=κ(ϵ)=κ^[0]-2z^[1]ϵμ(ϵ),
where
z^[1]=-1/2b_1b_2( b_1-b_2) /b_1^2-
b_1b_2+b_2^2, μ(ϵ)=∑_i=1^∞μ_iϵ^i-1.
Substituting (z,κ) from (<ref>) into the spectral characteristic polynomial (<ref>) leads to the following equation for μ(ϵ):
μ^3+ϵ^2μ^2+ i√(3)ϵμ-1=0.
Hence the coefficients μ_i can be determined through the following recursive relation:
μ_1=1, μ_i=-1/3(∑_j+k+l=i+2, 1≤ j,k,l≤ i-1μ_jμ_kμ_l+∑_j+k=i-1, j,k≥ 1μ_jμ_k+ i√(3)μ_i-1), i≥ 2,
where empty sums are understood to be zero.
The first several coefficients are
μ_1=1, μ_2=-√(3)/3 i, μ_3=-1/3, μ_4=2/27√(3) i, μ_5=1/27, μ_6=0, μ_7=1/3^5, μ_8=-√(3) i/3^6, μ_9=0.
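The listed values can be reproduced by inserting the series into the cubic relation and matching powers of ϵ, which is what the following Python sketch does (our own illustration using sympy; it is equivalent to the recursion above).

from sympy import I, sqrt, simplify

def mu_coefficients(n_terms):
    """Coefficients mu_1, mu_2, ... of mu(eps) = sum_i mu_i eps^(i-1),
    found by inserting the series into mu^3 + eps^2*mu^2 + I*sqrt(3)*eps*mu - 1 = 0
    and matching powers of eps (c[m] stands for mu_{m+1})."""
    c = [1]                                    # mu_1 = 1, the real cube root of 1
    for n in range(1, n_terms):
        triple = sum(c[p] * c[q] * c[n - p - q]
                     for p in range(n) for q in range(n - p + 1)
                     if q <= n - 1 and p + q >= 1)   # exclude any index equal to n
        square = sum(c[p] * c[n - 2 - p] for p in range(n - 1)) if n >= 2 else 0
        c.append(simplify(-(triple + square + I * sqrt(3) * c[n - 1]) / 3))
    return c

print(mu_coefficients(9))
# expected: [1, -sqrt(3)*I/3, -1/3, 2*sqrt(3)*I/27, 1/27, 0, 1/243, -sqrt(3)*I/729, 0]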
It can be verified that the convergence domain of the series μ(ϵ) is |ϵ|<√(3). With the local coordinate
chart (<ref>) at (z^[0],κ^[0]), the roots of (<ref>) are
κ_i=κ(ϵω^i-1),i=1,2,3, where ω= e^2π i/3 is a root of equation ω^3=1.
Now we turn to constructing the rogue wave solutions.
Note that the Darboux transformation can be used to generate soliton solutions <cit.> from a non-branch point
on the Riemann surface ℛ.
Rogue wave solutions are generated at branch points of multiplicity two or three,
as stated in <cit.>.
In this paper
we focus only on the case of multiplicity three; the case of multiplicity two will be investigated in future work.
§.§ The determinant representation of rogue wave solutions
Based on the seed solutions (<ref>), we consider |y_s⟩=Φ(λ_s)(c_s,1,c_s,2,c_s,3)^T,s=1,2,⋯,N
in Theorem <ref>. For the spectral parameters λ=λ_s in (<ref>), we denote κ_l=κ_l^(s) and
θ_l=θ_l^(s).
If the parameters (a_1,a_2,b_1,b_2) belong to Ω={(a_1,a_2,b_1,b_2)| a_1,a_2≠ 0, σ b_2≠ 2a_2^-2-b_1a_1^2a_2^-2, b_1≠ b_2},
then the determinant elements of the numerator and denominator defined in (<ref>)
have the following quadratic forms, which are given in <cit.>:
m_rs =[ c_r,1 c_r,2 c_r,3 ]^*
(z^(0)_k,l)_1≤ k,l ≤ 3[ c_s,1; c_s,2; c_s,3 ],
m_rs+2a_i^-1 e^- iω_iy_r,1^*y_s,i+1 =[ c_r,1 c_r,2 c_r,3 ]^*
(z^(1)_k,l)_1≤ k,l ≤ 3[ c_s,1; c_s,2; c_s,3 ],
where
z^(0)_k,l =2/λ_sκ_k^(r)*/κ_l^(s)-κ_k^(r)*e^θ_l^(s)+θ_k^(r)*,
z^(1)_k,l =2/λ_sκ_l^(s)/κ_l^(s)-κ_k^(r)*κ_k^(r)*+b_i/κ_l^(s)+b_ie^θ_l^(s)+θ_k^(r)*.
Since dividing both the numerator and denominator by the same factor does not change the value of the solutions (<ref>),
we will consider two new elements by discarding the factors 2/λ_s:
m_rs^(0) =λ_s/2m_rs,
m_rs^(1) =λ_s/2(m_rs+2a_i^-1 e^- iω_iy_r,1^*y_s,i+1).
Using the above formulas (<ref>), we can analyze the concrete form of the solution (<ref>).
To obtain high-order rogue wave solutions,
we use local coordinate chart (<ref>) at (z^[0],κ^[0]), then the terms
κ_l^(s)=κ(ϵ_sω^l-1). Since θ_l^(s) are the functions of κ_l^(s)
and λ_s, we also have θ_l^(s)=θ(ϵ_sω^l-1).
On the other hand, we need to choose special coefficients (c_s,1,c_s,2,c_s,3) in |y_s⟩ to construct the rogue wave solutions; the idea arises from the limit
computation below.
If we take
|y_s⟩=|y_s^(0)⟩:=Φ(λ_s)(c^(0)_s(ϵ_s),ω c^(0)_s(ωϵ_s),ω^2c^(0)_s(ω^2ϵ_s))^T ,
where Φ(λ_s) is defined in (<ref>), then we can write
Φ(λ_s)=(Φ_s(ϵ_s),Φ_s(ωϵ_s),Φ_s(ω^2ϵ_s)),
where Φ_s(ϵ_s) is a column vector. Expanding Φ_s(ϵ_s)=∑_i=0^∞Φ_s^[i]ϵ_s^i,
using 1+ω+ω^2=0, we obtain
|y_s⟩=3∑_k=1^∞Φ_s^[3k-1]ϵ_s^3k-1diag(c^(0)_s(ϵ_s),c^(0)_s(ωϵ_s),c^(0)_s(ω^2ϵ_s)) ,
which has only (3k-1)th order coefficients with respect to ϵ_s.
Furthermore, dividing both the numerator and denominator by the same coefficients in (<ref>) does not alter the value
of the solutions (<ref>).
Considering |y_s⟩/ϵ_s^2, we can obtain a solution that only takes into account the (3k-1)th order terms of ϵ_s.
Similarly, if we consider
|y_s⟩=|y_s^(1)⟩:=Φ(λ_s)(c^(1)_s(ϵ_s),ω^2 c^(1)_s(ωϵ_s),ω c^(1)_s(ω^2ϵ_s))^T
and |y_s⟩/ϵ_s, then the solution is only in terms of the (3k-2)th order coefficients about ϵ_s.
Next, we will conduct precise calculations.
Now we introduce additional internal parameters by considering another form of c^(l)_s(ϵ_s), l=0,1. To simplify
the notation, we first consider a single function c_s(ϵ_s).
Let χ^[i]∈ℂ;
we take c_s(ϵ_s)= e^∑_i=1^∞χ^[i]ϵ_s^i with χ^[3i]=0, i≥ 1.
Under the local coordinate chart (<ref>),
define ϑ(ϵ_s)=θ(ϵ_s)+∑_i=1^∞χ^[i]ϵ_s^i, we obtain
ϑ(ϵ_s)= iκ(ϵ_s)[x+(κ(ϵ_s)+b_1+b_2/z(ϵ_s)-2)t/2b_1b_2]+ln(c_s(ϵ_s))
=∑_i=0^∞ϑ^[i]ϵ_s^i
where
ϑ^[0]= iκ^[0][x+(c_1/z^[0]-2)t/2b_1b_2],
ϑ^[1]= -2 iz^[1]μ_1[x+(κ^[0]+c_1/z^[0]-2)t/2b_1b_2]+χ^[1],
ϑ^[i]= i{[-2z^[1]∑_k+3l=i,k≥1,l≥0μ_kc_2^l+c_1
c_2^i/3δ_i3,0]κ^[0]t/2b_1b_2z^[0]-2z^[1]μ_i[x+(c_1/z^[0]-2)t/2b_1b_2].
.-z^[1]t/b_1b_2z^[0]∑_m+n=iμ_m[-2z^[1]∑_k+3l=n,k≥0,l≥0μ_k
c_2^l+c_1c_2^n/3δ_n3,0]}+χ^[i], i≥ 2,
where c_1=κ^[0]+b_1+b_2, c_2=-z^[1]/z^[0], and δ_i 3,0 denotes the Kronecker delta.
With the above preliminaries, we can construct the high-order rogue waves at the branch point of multiplicity three.
Taking |y_s⟩=|y_s^(0)⟩ and c^(0)_s(ϵ_s)=c_s(ϵ_s) as defined in (<ref>)
and (<ref>) for all s=1,2,⋯,N, and recalling that κ_l^(s)=κ(ϵ_sω^l-1),
it is natural to consider the following two functions of ϵ_s and ϵ_r^*:
ℳ(ϵ_s,ϵ_r^*) =κ̂(ϵ_r^*) e^ϑ(ϵ_s)+ϑ̂(ϵ_r^*)/κ(ϵ_s)-κ̂(ϵ_r^*)=κ^[0]*/κ^[0]-κ^[0]* e^ϑ^[0]+ϑ^[0]*∑_k=0,l=0^∞,∞M_k,lϵ_r^*kϵ_s^l,
𝒢(ϵ_s,ϵ_r^*) =κ̂(ϵ_r^*)+b_i/κ(ϵ_s)+b_iκ(ϵ_s) e^ϑ(ϵ_s)+ϑ̂(ϵ_r^*)/κ(ϵ_s)-κ̂(ϵ_r^*)=κ^[0]*+b_i/κ^[0]+b_iκ^[0]/κ^[0]-κ^[0]* e^ϑ^[0]+ϑ^[0]*∑_k=0,l=0^∞,∞G_k,lϵ_r^*kϵ_s^l,
where κ̂(ϵ_r^*)=κ(ϵ_r)^* and ϑ̂(ϵ_r^*)=ϑ(ϵ_r)^*.
Then we calculate the quadratic forms (<ref>):
m_rs^(0) =∑_k,l=1^3(ω^*)^k-1ω^l-1ℳ(ω^l-1ϵ_s,(ω^k-1ϵ_r)^*)
=9κ^[0]*/κ^[0]-κ^[0]* e^ϑ^[0]+ϑ^[0]*∑_k=1,l=1^∞,∞M_3k-1,3l-1ϵ_r^*(3k-1)ϵ_s^3l-1
and
m_rs^(1) =∑_k,l=1^3(ω^*)^k-1ω^l-1𝒢(ω^l-1ϵ_s,(ω^k-1ϵ_r)^*)
=9κ^[0]*+b_i/κ^[0]+b_iκ^[0]/κ^[0]-κ^[0]* e^ϑ^[0]+ϑ^[0]*∑_k=1,l=1^∞,∞G_3k-1,3l-1ϵ_r^*(3k-1)ϵ_s^3l-1.
As we discussed earlier, the quadratic forms only depend on the (3k-1,3l-1)th order coefficients
with respect to ϵ_s and ϵ^*_r. Similarly, taking |y_s⟩=|y_s^(1)⟩
and c^(1)_s(ϵ_s)=c_s(ϵ_s) for all s=1,2,⋯,N,
the quadratic forms only depend on the (3k-2,3l-2)th order coefficients.
Since a solution of the CFL equations (<ref>) multiplied by a constant of modulus one
is still a solution, we can discard the factors
9κ^[0]*/κ^[0]-κ^[0]* e^ϑ^[0]+ϑ^[0]* in m_rs^(0) and
9κ^[0]*+b_i/κ^[0]+b_iκ^[0]/κ^[0]-κ^[0]* e^ϑ^[0]+ϑ^[0]* in m_rs^(1).
Taking the limit ϵ_s, ϵ^*_r→ 0 in (<ref>), for k=0,1
we obtain the k-type rogue wave solutions
u_i,0^[N]=a_i(det((G_3k-1,3l-1)_1≤ k,l≤ N)/det((M_3k-1,3l-1)_1≤ k,l≤ N)) e^ iω_i,
u_i,1^[N]=a_i(det((G_3k-2,3l-2)_1≤ k,l≤ N)/det((M_3k-2,3l-2)_1≤ k,l≤ N)) e^ iω_i.
For 0-type rogue wave solutions, the free internal parameters are (χ^[1],χ^[2],χ^[4],χ^[5],⋯,χ^[3N-1]) and
for 1-type rogue wave solutions, the free internal parameters are (χ^[1],χ^[2],χ^[4],χ^[5],⋯,χ^[3N-2]).
We will study the rogue wave patterns for these two cases.
Actually, we can generate multi-rogue wave solutions by taking |y_s⟩=|y_s^(0)⟩ for 1≤ s ≤ N_1 and
|y_s⟩=|y_s^(1)⟩ for N_1+1≤ s ≤ N, where N_1 is an integer. Then (3k-1,3l-2)th and (3k-2,3l-1)th order coefficients also appear. In this case, let χ_l^[i]∈ℂ and consider
c_s^(l)(ϵ_s)= e^∑_i=1^∞χ_l^[i]ϵ_s^i and χ_l^[3i]=0, i≥ 1. Then we
can expand ϑ^(l)(ϵ_s)=θ(ϵ_s)+∑_i=1^∞χ_l^[i]ϵ_s^i just like (<ref>).
After taking the limit ϵ_s,ϵ^*_r→ 0 in (<ref>), we obtain the multi-rogue wave solutions
u_i^(N_1,N_2)=a_idet[ 𝐌^(1)_N_1,N_1 𝐌^(1)_N_1,N_2; 𝐌^(1)_N_2,N_1 𝐌^(1)_N_2,N_2 ]/det[ 𝐌^(0)_N_1,N_1 𝐌^(0)_N_1,N_2; 𝐌^(0)_N_2,N_1 𝐌^(0)_N_2,N_2 ] e^ iω_i,
where N_2=N-N_1 and the matrices are
𝐌^(0)_N_p,N_q=
(M_3k-p,3l-q)_1≤ k ≤ N_p,1 ≤ l ≤ N_2, 𝐌^(1)_N_p,N_q=
(G_3k-p,3l-q)_1≤ k ≤ N_p,1 ≤ l ≤ N_2.
For the rogue wave solutions (<ref>),
the internal parameters are (χ_0^[1],χ_0^[2],χ_0^[4],χ_0^[5],⋯,χ_0^[3N_1-1]) in |y_s^(0)⟩ and
(χ_1^[1],χ_1^[2],χ_1^[4],χ_1^[5],⋯,χ_1^[3N_1-2]) in |y_s^(1)⟩. If N_1=0 or N_2=0, then
the multi-rogue wave solutions degenerate to k-type rogue wave solutions (<ref>).
To analyze the rogue wave patterns, we need another form of the rogue wave solutions (<ref>).
To simplify the notations, we consider the k-type rogue wave solutions (<ref>) firstly.
Using the identity κ= e^ln(κ), the coefficients M_k,l and G_k,l can be expressed in terms of Schur polynomials.
Expanding
ln(κ) =ln(κ^[0])+
ln(1-2z^[1]/κ^[0]∑_i=1^∞μ_iϵ_s^i)
=ln(κ^[0])-∑_j=1^∞1/j(2z^[1]/κ^[0])^j(∑_i=1^∞μ_iϵ_s^i)^j
=ln(κ^[0])+∑_j=1^∞H_j^(1)ϵ_s^j,
similarly, we have Taylor expansions
ln(κ/κ^[0])=∑_j=1^∞H_j^(1)ϵ_s^j
, ln(κ^[0]-κ^[0]*/-2z^[1]μ_1ϵ_sκ-κ^[0]/κ-κ^[0]*)=∑_j=1^∞H^(2)_jϵ_s^j,
ln(κ^[0]-κ^[0]*/κ-κ^[0]*)=∑_j=1^∞H^(3)_jϵ_s^j
, ln(κ^[0]+b_i/κ+b_i)=∑_j=1^∞H^(4)_jϵ_s^j.
Here we denote H^(i)=(H^(i)_1,H^(i)_2,⋯),i=1,2,3,4. Now we can reduce the functions (<ref>)
to more useful forms. Firstly, we expand
1/κ-κ^*=κ^[0]*-κ^[0]/(κ-κ^[0]*)(κ^*-κ^[0])1/1-(κ-κ^[0])(κ^*-κ^[0]*)/(κ-κ^[0]*)(κ^*-κ^[0])
=κ^[0]*-κ^[0]/(κ-κ^[0]*)(κ^*-κ^[0])∑_j=0^∞((κ-κ^[0])(κ^*-κ^[0]*)/(κ-κ^[0]*)(κ^*-κ^[0]))^j,
then
the functions (<ref>) can be expressed by
ℳ(ϵ_s,ϵ_r^*)
=κ^[0]*/κ^[0]-κ^[0]* e^ϑ^[0]+ϑ^[0]*∑_k=0^∞(4z^[1]z^[1]*μ_1μ_1^*ϵ_sϵ_r^*/|κ^[0]-κ^[0]*|^2)^k
exp(∑_l=1^∞((kH^(2)_l+H_l^(3)+ϑ^[l])ϵ_s^l+(kH^(2)*_l+H_l^(3)*+H_l^(1)*+ϑ^[l]*)ϵ_r^*l)),
𝒢(ϵ_s,ϵ_r^*)
=κ^[0]*+b_i/κ^[0]+b_iκ^[0]/κ^[0]-κ^[0]* e^ϑ^[0]+ϑ^[0]*∑_k=0^∞(4z^[1]z^[1]*μ_1μ_1^*ϵ_sϵ_r^*/|κ^[0]-κ^[0]*|^2)^k
exp(∑_l=1^∞((kH^(2)_l+H_l^(3)+H_l^(4)+H^(1)_l+ϑ^[l])ϵ_s^l+(kH^(2)*_l+H_l^(3)*-H_l^(4)*+ϑ^[l]*)ϵ_r^*l)).
Denote ϑ=(ϑ^[1],ϑ^[2],⋯); then the coefficients are given by
M_k,l =∑_r=0^min(k,l)C^rS_l-r(rH^(2)+H^(3)+ϑ)S_k-r(rH^(2)*+H^(3)*+H^(1)*+ϑ^*),
G_k,l =∑_r=0^min(k,l)C^rS_l-r(rH^(2)+H^(3)+H^(4)+H^(1)+ϑ)S_k-r(rH^(2)*+H^(3)*-H^(4)*+ϑ^*),
where the constant C=4|z^[1]|^2|μ_1|^2/|κ^[0]-κ^[0]*|^2. Hence for 0-type rogue wave
solutions in (<ref>), the coefficients
M_3k-1,3l-1=
[ S_3k-1(H^(3)+H^(1)+ϑ); C^1/2S_3k-2(H^(2)+H^(3)+H^(1)+ϑ); ⋮ ]_3N× 1^†[ S_3l-1(H^(3)+ϑ); C^1/2S_3l-2(H^(2)+H^(3)+ϑ); ⋮ ]_3N× 1,
where 1≤ k,l ≤ N.
The expression of G_3k-1,3l-1 is similar to M_3k-1,3l-1. Hence the coefficients M_k,l and G_k,l can be expressed
by Schur polynomials. For the multi-rogue wave solutions (<ref>), using the same method,
we can obtain a similar expression.
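For reference, S_n here denotes the elementary Schur polynomials, generated by exp(∑_l≥1 x_lϵ^l)=∑_n≥0 S_n(𝐱)ϵ^n, which we assume is the convention used above (it is the standard one in the rogue-wave-pattern literature). A short Python sketch (our own illustration) that produces them symbolically:

from sympy import symbols, exp, Symbol

def schur_polynomials(n_max, x):
    """Elementary Schur polynomials S_0,...,S_{n_max} of x = (x_1, x_2, ...),
    defined by exp(sum_l x_l * eps^l) = sum_n S_n(x) * eps^n."""
    eps = Symbol('epsilon')
    gen = exp(sum(x[l - 1] * eps**l for l in range(1, n_max + 1)))
    expansion = gen.series(eps, 0, n_max + 1).removeO().expand()
    return [expansion.coeff(eps, n) for n in range(n_max + 1)]

x = symbols('x1:6')        # x1, ..., x5
S = schur_polynomials(5, x)
print(S[2])                # x1**2/2 + x2
print(S[3])                # x1**3/6 + x1*x2 + x3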
To express the multi-rogue wave solutions (<ref>) using Schur polynomials, we introduce
some notations.
For two integers 0≤ N_1,N_2≤ N and N_1+N_2=N, given two
infinite dimensional vector 𝐱^(r)=(x_1^(r),x_2^(r),x_3^(r),⋯),r=1,2, and p,q∈{1,2} we define
𝐘_N_p,N_q^(p,q)(𝐱^(1),𝐱^(2))=(𝐘_N_p^(p)(𝐱^(2)))^†𝐘_N_q^(q)(𝐱^(1)),
where 𝐘_N_p^(p)(𝐱^(r)) is a 3N× N_p matrix whose (i,j)-th element is
Y^(p)_N_p;i,j(𝐱^(r))=C^(i-1)/2S_3j-i-p+1((i-1)H^(2)+ϑ_p+𝐱^(r)).
Here ϑ_p=(ϑ_p-1^[1],ϑ_p-1^[2],⋯), and the coefficients ϑ_p-1^[i]
are given by the expansion ϑ^(p-1)(ϵ_s)=θ(ϵ_s)+∑_i=1^∞χ_p-1^[i]ϵ_s^i
=∑_i=0^∞ϑ_p-1^[i]ϵ_s^i just like (<ref>).
Then the k-type rogue wave solutions (<ref>) can be represented by
u_i,0^[N]=a_i(det(𝐘_N,N^(1,1)(H^(3)+H^(4)+H^(1),H^(3)-H^(4)))/det(𝐘_N,N^(1,1)(H^(3),H^(3)+H^(1)))) e^ iω_i,
u_i,1^[N]=a_i(det(𝐘_N,N^(2,2)(H^(3)+H^(4)+H^(1),H^(3)-H^(4)))/det(𝐘_N,N^(2,2)(H^(3),H^(3)+H^(1)))) e^ iω_i.
For the multi-rogue wave solutions (<ref>),
denote
𝐘_N_1,N_2(𝐱^(1),𝐱^(2))=
[ 𝐘_N_1,N_1^(1,1) 𝐘_N_1,N_2^(1,2); 𝐘_N_2,N_1^(2,1) 𝐘_N_2,N_2^(2,2) ],
we have the following proposition about the multi-rogue wave solutions of the CFL equations (<ref>).
Given two integers N_1,N_2 with 0≤ N_1,N_2≤ N and N_1+N_2=N,
let |y_s⟩=|y_s^(0)⟩ for 1≤ s ≤ N_1 and
|y_s⟩=|y_s^(1)⟩ for N_1+1≤ s ≤ N
in Theorem <ref> and the seed solutions u_i=u_i^[0] in (<ref>). By Bäcklund transformation,
the CFL equations (<ref>) have multi-rogue wave solutions
u_i^(N_1,N_2)=a_i(det(𝐘_N_1,N_2(H^(3)+H^(4)+H^(1),H^(3)-H^(4)))/det(𝐘_N_1,N_2(H^(3),H^(3)+H^(1)))) e^ iω_i.
For the multi-rogue wave solutions (<ref>), the free internal parameters are (χ_0^[1],χ_0^[2],χ_0^[4],χ_0^[5],⋯,χ_0^[3N_1-1]) for |y_s^(0)⟩
in (<ref>) and
(χ_1^[1],χ_1^[2],χ_1^[4],χ_1^[5],⋯,χ_1^[3N_1-2]) for |y_s^(1)⟩ in (<ref>).
The 0-type rogue wave solutions u_i,0^[N] in (<ref>) have the form u_i^(N,0) in (<ref>) and
the 1-type rogue wave solutions u_i,1^[N] have the form u_i^(0,N). For these two cases, we use the notations
(χ^[1],χ^[2],χ^[4],χ^[5],⋯,χ^[3N-1]) and
(χ^[1],χ^[2],χ^[4],χ^[5],⋯,χ^[3N-2]) respectively to
represent the free internal parameters.
Since the choice of the parameters χ^[3i], i≥ 1, does not affect the rogue wave solutions, we set these terms to zero.
The reason will become clear in the proof of the rogue wave patterns in the inner region.
In the next section, we will analyze the k-type rogue wave solutions (<ref>), and show their patterns.
Actually, there are three types of rogue wave solutions. If we take |y_s⟩=Φ(λ_s)(c_s(ϵ_s), c_s(ωϵ_s), c_s(ω^2ϵ_s))^T, then
the quadratic forms (<ref>) only depend on the (3k,3l)th order coefficients of the two functions
(<ref>). In this case, the solution can be converted to the 0-type, since
the first column of 𝐘_N_p^(p)(𝐱^(r)) (<ref>) has only one nonzero element S_0=1.
To summarize this section: starting from the seed solutions (<ref>),
we obtained the fundamental solution (<ref>) of the Lax pair (<ref>)
and the Riemann surface (<ref>). We also established a general proposition about the Riemann surface at its branch points. We then
constructed the rogue wave solutions (<ref>) generated at the branch point of multiplicity three
using the Bäcklund transformation.
To analyze the rogue wave patterns in Section <ref>, we reduced
the multi-rogue wave solutions to the determinant
representation (<ref>).
In the next section, we use the root structures of Okamoto polynomial hierarchies to study
k-type rogue wave solutions (<ref>).
§ THE ROGUE WAVE PATTERNS
In this section, we study the rogue wave patterns
for (<ref>), and our results are as follows.
Under the assumption that the nonzero roots of the Okamoto polynomial hierarchies are all simple, the patterns are divided into two parts:
the outer region and the inner region. In the outer region, the rogue wave decomposes into
first-order rogue waves located far from the origin; in the inner region, it can be viewed as a lower-order rogue wave.
The positions and orders of these rogue waves are determined by the root distributions of the Okamoto polynomial hierarchies.
§.§ The asymptotics of the outer region
Now we study the rogue wave patterns in the outer region for (<ref>).
Let η=(χ^[m])^1/m, m≥ 2, where χ^[m] is an internal parameter of the k-type rogue wave solutions
(<ref>), let √(x^2+t^2)=𝒪(η), and suppose the nonzero roots of W_N^[k,m](z)
are all simple. Then, as |η|→∞, the k-type rogue wave solutions (<ref>) with k=0,1 are approximated, for i=1,2, by the following first-order rogue wave solutions
near the nonzero roots (x_0,t_0) of W_N^[k,m]((ϑ^[1](x,t)-χ^[1]) e^- iη):
u_i,k^asy(x̂,t̂)=
a_i(
|p_1|^2(x̂+(p_1q_1^*)/|p_1|^2t̂+r_3)^2+p_2 (t̂+r_4)^2+4|z^[1]|^2|μ_1|^2/|κ^[0]-κ^[0]*|^2/|p_1|^2(x̂+(p_1q_1^*)/|p_1|^2t̂+r_1)^2+p_2 (t̂+r_2)^2+4|z^[1]|^2|μ_1|^2/|κ^[0]-κ^[0]*|^2)
e^ iω_i+𝒪(η^-1),
where x̂=x-x_0|η|,t̂=t-t_0|η|. The translation terms are
r_1 =(q_2/p_1)+H_1^(1)*/2p_1^*,
r_2=-(p_1q_1^*)/2p_2|p_1|^2(2(q_2 p_1^*)+ iH_1^(1)*p_1),
r_3 =(q_2/p_1)+ i(H_1^(4)/p_1)+H^(1)_1/2p_1,
r_4=-(p_1q_1^*)/2p_2|p_1|^2(2(q_2 p_1^*)-2 i(H_1^(4)p_1^*)- iH^(1)_1p_1^*),
where
p_1 =-2 i z^[1]μ_1,
q_1=- i z^[1](b_1+b_2+2κ^[0]-2z^[0])/b_1b_2z^[0],
p_2 =|q_1|^2-((p_1q_1^*)/|p_1|)^2,
q_2=H^(3)_1+χ^[1]+(ϑ^[2](x_0,t_0)-χ^[2]) e^- iηW_N,1^[k,m]((ϑ^[1](x_0,t_0)-χ^[1]) e^- iη)/W_N^[k,m]'((ϑ^[1](x_0,t_0)-χ^[1]) e^- iη).
We only provide the proof for the 1-type solutions, as the proof for the 0-type is similar.
Our main idea is to estimate the determinants of the numerator and denominator
in (<ref>), i.e. det(𝐘_0,N(𝐱^(1),𝐱^(2))), where (𝐱^(1),𝐱^(2))=(H^(3),H^(3)+H^(1)) or
(𝐱^(1),𝐱^(2))=(H^(3)+H^(4)+H^(1),H^(3)-H^(4)). Since 𝐘_0,N(𝐱^(1),𝐱^(2))
can be expressed through 𝐘_N^(2) (<ref>),
it suffices to estimate
𝐘_N^(2)(𝐱) for a given vector 𝐱=(x_1,x_2,⋯).
By Proposition <ref>, for i≥ 2 we obtain
S_i(kH^(2)+ϑ+𝐱)= S_i(𝐯_1)+
𝒪(η^i-1), m≥ 3,
𝒪(η^i-2), m=2,
for some integer k, where 𝐯_1=(ϑ^[1]+x_1,0,⋯,0,ϑ^[m]+kH^(2)_m+x_m,0,⋯), since H_1^(2)=0.
By Proposition <ref>, since ϑ^[1](x,t)=𝒪(η) and x_1,H^(2)_m,x_m are constants,
we have S_i(𝐯_1)=η^ip_i^[m](η^-1ϑ^[1])+𝒪(η^i-1).
Using Okamoto polynomial hierarchies, it leads to
det_1≤ i,j ≤ N(S_3j-i-1((i-1)H^(2)+ϑ+𝐱))=η^N^2(c_N^[1])^-1W_N^[1,m](η^-1ϑ^[1])
+𝒪(η^N^2-1),
where c_N^[1] is defined in (<ref>).
To calculate the asymptotic expression of the numerator and denominator in the rogue wave solutions (<ref>),
using the Cauchy-Binet formula, we obtain
det(𝐘_0,N(𝐱^(1),𝐱^(2)))
=∑_1≤ v_1<v_2<⋯ <v_N≤ 3Ndet_1≤ i,j ≤ N(Y^(2)_N,v_j,i(𝐱^(2))^†)det_1≤ i,j ≤ N(Y^(2)_N,v_j,i(𝐱^(1)))
=∑_1≤ v_1<v_2<⋯ <v_N≤ 3NC^-∑_i=1^Nv_idet_1≤ i,j ≤ N(S^*_3i-v_j-1((v_j-1)H^(2)+𝐱^(2)+ϑ))det_1≤ i,j ≤ N(S_3i-v_j-1((v_j-1)H^(2)+𝐱^(1)+ϑ)).
Since the degree of S_i(𝐯_1) in η decreases as i decreases,
the leading-order term in η comes from the choice
(v_1,⋯,v_N)=(1,2,⋯, N). Hence the coefficient of the leading-order term is
C^-N(N+1)/2det_1≤ i,j ≤ N(S^*_3i-j-1((j-1)H^(2)+𝐱^(2)+ϑ))
det_1≤ i,j ≤ N(S_3i-j-1((j-1)H^(2)+𝐱^(1)+ϑ)),
which has an asymptotic expansion
C^-N(N+1)/2|η|^2N^2|(c_N^[1])^-1W_N^[1,m](η^-1ϑ^[1])|^2.
Under the condition √(x^2+t^2)=𝒪(η), if η^-1ϑ^[1](x,t)
stays away from the roots of the Okamoto polynomial hierarchies,
then as |η|→∞ the limit of this quantity is nonzero and independent of 𝐱^(1) and 𝐱^(2).
Hence the asymptotic solution of (<ref>) is just a_i e^ iω_i in this case.
To get a nontrivial asymptotic expansion, we take a nonzero root (x_0,t_0) of
W_N^[1,m]((ϑ^[1](x,t)-χ^[1]) e^- iη) and expand (<ref>) near
(x_0,t_0).
We first calculate the leading order term that comes from the choice
(v_1,⋯,v_N)=(1,2,⋯, N).
Making coordinate transformation x=x̂+x_0η e^- iη, t=t̂+t_0η e^- iη
(Note that η e^- iη=|η| is real, then the transformation is reasonable), if χ^[1]=0, we have
exp((ϑ^[1](x̂,t̂)+x_1/η+ϑ^[1](x_0,t_0) e^- iη)ϵ+(ϑ^[2](x̂,t̂)+x_2/η^2+(ϑ^[2](x_0,t_0)-χ^[2]) e^- iη/η)ϵ^2+ϵ^m)
= exp(ϑ^[1](x_0,t_0) e^- iηϵ+ϵ^m)exp(ϑ^[1](x̂,t̂)+x_1/ηϵ+(ϑ^[2](x_0,t_0)-χ^[2]) e^- iη/ηϵ^2+𝒪(η^-2))
= (∑_j=1^∞p_j^[m](ϑ^[1](x_0,t_0) e^- iη)ϵ^j)(1+ϑ^[1](x̂,t̂)+x_1/ηϵ+(ϑ^[2](x_0,t_0)-χ^[2]) e^- iη/ηϵ^2+𝒪(η^-2)).
Hence we have an approximation
η^-nS_n(kH^(2)+^+𝐱)∼ p_n^[m](ϑ^[1](x_0,t_0) e^- iη)
+((ϑ^[1](x̂,t̂)+x_1)p_n-1^[m](ϑ^[1](x_0,t_0) e^- iη)
.
.+(ϑ^[2](x_0,t_0)-χ^[2]) e^- iηp_n-2^[m](ϑ^[1](x_0,t_0) e^- iη))
η^-1+𝒪(η^-2).
If χ^[1] is not zero, we just replace ϑ^[1](x_0,t_0) by ϑ^[1](x_0,t_0)-χ^[1].
Using the properties of determinants,
the coefficient of η^N^2 for (𝐘_N^(2)(𝐱)) is zero since (x_0,t_0) is the
root of W_N^[1,m]((ϑ^[1](x,t)-χ^[1]) e^- iη). For the coefficient of η^N^2-1,
letting z_1=(ϑ^[1](x_0,t_0)-χ^[1]) e^- iη and z_2=(ϑ^[2](x_0,t_0)-χ^[2]) e^- iη,
we can only choose one column be the elements (ϑ^[1](x̂,t̂)+x_1)p_n-1^[m](z_1) or
z_2p_n-2^[m](z_1).
Hence the leading order term of (𝐘_N^(2)(𝐱)) with respect to η is
η^N^2-1((ϑ^[1](x̂,t̂)+x_1)(c_N^[1])^-1W_N^[1,m]'(z_1)+z_2(c_N^[1])^-1W_N,1^[1,m](z_1)).
Another contribution is given by the choice (v_1,⋯,v_N)=(1,2,⋯, N-1,N+1) in (<ref>); then the term
det_1≤ i,j ≤ N(S_3i-v_j-1((v_j-1)H^(2)+𝐱+ϑ))
is
η^N^2-1(c_N^[1])^-1W_N^[1,m]'(z_1).
Combining (<ref>) and (<ref>) and calculating the leading order term of (<ref>) with respect to η,
as |η|→∞, we obtain the asymptotic 1-type rogue wave solution (<ref>)
a_i(ϑ^[1](x̂,t̂)+H^(3)_1+H^(4)_1+H^(1)_1+z_2W_N,1^[1,m](z_1)/W_N^[1,m]'(z_1))
(ϑ^[1](x̂,t̂)+H^(3)_1-H^(4)_1+z_2W_N,1^[1,m](z_1)/W_N^[1,m]'(z_1))^*+C/(ϑ^[1](x̂,t̂)+H^(3)_1+z_2W_N,1^[1,m](z_1)/W_N^[1,m]'(z_1))
(ϑ^[1](x̂,t̂)+H^(3)_1+H^(1)_1+z_2W_N,1^[1,m](z_1)/W_N^[1,m]'(z_1))^*+C e^ iω_i.
By simplifying the aforementioned expression (<ref>), we arrive at the final expression (<ref>)
stated in the proposition.
Moreover, it is routine to verify that the center (x̂_1,t̂_1) of the rogue wave solution (<ref>) is
(-(p_1q_1^*)(p_1q_1^*)/2p_2|p_1|^4(2q_2 p_1^*-H_1^(1)*p_1)-(q_2/p_1+H_1^(1)*/2p_1^*),(p_1q_1^*)/2p_2|p_1|^2((2q_2 p_1^*-H_1^(1)*p_1))).
As the proof shows, when one of the internal parameters is large enough,
the Okamoto polynomial hierarchies arise naturally, as asserted in Proposition <ref>.
Proposition <ref> describes the values of the rogue wave solutions (<ref>) far from the origin:
near points corresponding to the roots of the Okamoto polynomial hierarchies,
the rogue wave solutions can be approximated by first-order rogue waves.
§.§ The asymptotics of the inner region
Now we calculate the asymptotic expression of rogue wave solutions (<ref>) in the inner region.
We have the following proposition, whose proof is related to Theorem <ref>.
Let √(x^2+t^2)=𝒪(1) and write N=km+N_0, where N_0 is the remainder of N divided by m and
m≥ 2. If m is not a multiple of 3,
then as |χ^[m]|→∞, where χ^[m] is an internal parameter of the k-type rogue wave solutions in (<ref>),
we have the lower-order asymptotic rogue wave solutions (<ref>) for i=1,2:
we have lower order asymptotic rogue wave solutions (<ref>) for i=1,2:
u_i,k^(N_1^[k],N_2^[k])=
a_i(
(𝐘_N_1^[k],N_2^[k](H^(3)+H^(4)+H^(1),H^(3)-H^(4)))/(𝐘_N_1^[k],N_2^[k](H^(3),H^(3)+H^(1))))
e^ iω_i+𝒪((χ^[m])^-1).
The free internal parameters are given by χ_l^[i]=χ^[i]+kmH_i^(2),l=0,1 for i m and
χ_0^[m]=χ_1^[m]=kmH_m^(2).
Again we only provide the proof for the 1-type solutions, as the proof for the 0-type is similar. As in the outer region,
we will estimate 𝐘_N^(2)(𝐱) for 𝐱=H^(3),H^(3)+H^(1),H^(3)+H^(4)+H^(1),H^(3)-H^(4).
Using the Cauchy-Binet formula, we also need to calculate the limit of (<ref>) as χ^[m]→∞.
Since the elements of 𝐘_N^(2)(𝐱) can be viewed as the polynomial of χ^[m], we
calculate the coefficients with respect to χ^[m]. We will prove that, the matrix 𝐘_N^(2)(𝐱)
can be simplified to the form
𝐘_N^(2)(𝐱)∼[ 𝐀_1,1 𝐀_1,2 𝐀_1,3 ⋯ 𝐀_1,k 𝐁_1,k+1; 0_m× m 𝐀_2,2 𝐀_2,3 ⋯ 𝐀_2,k 𝐁_2,k+1; 0_m× m 0_m× m 𝐀_3,3 ⋯ 𝐀_3,k 𝐁_3,k+1; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0_m× m 0_m× m 0_m× m ⋯ 𝐀_k,k 𝐁_k,k+1; 0_(2N+N_0)× m 0_(2N+N_0)× m 0_(2N+N_0)× m ⋯ 0_(2N+N_0)× m 𝐂_k+1,k+1 ]
where 𝐂_k+1,k+1=𝐂_k+1,k+1(𝐱) is (2N+N_0)× N_0 matrix.
The m× m matrix 𝐀_i,i,i=1,2,⋯,k can be transformed into an upper triangular matrix
through column transformation, and the diagonal elements are the powers of χ^[m].
Hence the term (<ref>) can be approximated by
det(𝐘_0,N(𝐱^(1),𝐱^(2)))= det((𝐘_N^(2)(𝐱^(2)))^†𝐘_N^(2)(𝐱^(1)))
∼det((𝐂_k+1,k+1(𝐱^(2)))^†𝐂_k+1,k+1(𝐱^(1)))(χ^[m])^l
for some integer l.
Then we can obtain the proposition. Now we conduct precise calculations.
To estimate the asymptotic expression, we need to calculate the leading order terms with respect to χ^[m].
Denote 𝐱̂^(r)=(r-1)H^(2)++𝐱 and
𝐲̂^(r)=𝐱̂^(r)-χ^[m]𝐞_m for some integer r, where
𝐞_m is the m-th unit vector, we have
S_n(𝐱̂^(r))=∑_i=0^[n/m](χ^[m])^i/i!S_n-im(𝐲̂^(r)).
Now we concentrate on the first m columns of 𝐘_N^(2)(𝐱) and obtain 𝐀_1,1.
Since m is not a multiple of 3, the order of 3 in the cyclic group Z_m is m. Then for the first m columns
, with respect to the leading order terms coefficient of χ^[m] i.e. the term S_n-im(𝐲), the subscript
n-im traverse 0 to m-1 (we omit the coefficient 1/i!). On the other hand
, if the row index is decreased by 1, then the term S_n-im be S_n-im-1. Denote the leading order of 1-st row and
j-th column with respect to χ^[m] be k_1,j. We only preserve the coefficient of (χ^[m])^k_1,j in each column.
Then we obtain 𝐀_1,1. For example, if m=3j+1,j≥ 1, the first m columns of 𝐘_N^(2)(𝐱) are
3N× m matrix
[ S_1(𝐱̂^(1)) ⋯ S_3j-2(𝐱̂^(1)) S_3j+1(𝐱̂^(1)) ⋯ S_6j+1(𝐱̂^(1)) S_6j+4(𝐱̂^(1)) ⋯ S_9j+1(𝐱̂^(1)); S_0(𝐱̂^(2)) ⋯ S_3j-3(𝐱̂^(2)) S_3j(𝐱̂^(2)) ⋯ S_6j(𝐱̂^(2)) S_6j+3(𝐱̂^(2)) ⋯ S_9j(𝐱̂^(2)); ⋮ ⋱ ⋮ ⋮ ⋱ ⋮ ⋮ ⋱ ⋮ ].
Then we expand the above elements just like (<ref>). For example,
S_3j-2(𝐱̂^(1))=S_3j-2(𝐲̂^(1)),
S_3j+1(𝐱̂^(1))=S_3j+1(𝐲̂^(1))+S_0(𝐲̂^(1))χ^[m]
and
S_9j+1(𝐱̂^(1))=S_9j+1(𝐲̂^(1))+S_6j(𝐲̂^(1))χ^[m] +S_3j-1(𝐲̂^(1))(χ^[m])^2.
Then we preserve the
term (χ^[m])^0 for the first j columns, with respect to (χ^[m])^1 for j+1 to 2j+1 columns and
with respect to (χ^[m])^2 for 2j+2 to 3j+1=m columns. The first m columns of 𝐘_N^(2)(𝐱) can be
approximated by a 3N× m matrix
[ S_1(𝐲̂^(1)) ⋯ S_3j-2(𝐲̂^(1)) χ^[m] ⋯ S_3j(𝐲̂^(1))χ^[m] S_2(𝐲̂^(1))(χ^[m])^2 ⋯ S_3j-1(𝐲̂^(1))(χ^[m])^2; 1 ⋯ S_3j-3(𝐲̂^(2)) 0 ⋯ S_3j-1(𝐲̂^(2))χ^[m] S_1(𝐲̂^(2))(χ^[m])^2 ⋯ S_3j-2(𝐲̂^(2))(χ^[m])^2; 0 ⋯ S_3j-4(𝐲̂^(3)) 0 ⋯ S_3j-2(𝐲̂^(3))χ^[m] (χ^[m])^2 ⋯ S_3j-3(𝐲̂^(3))(χ^[m])^2; ⋮ ⋱ ⋮ ⋮ ⋱ ⋮ ⋮ ⋱ ⋮ ]
since S_0(𝐲̂^(r))=1. The above matrix is [ 𝐀_1,1 0_(3N-m)× m ]^T.
Through column transformations, the matrix 𝐀_1,1 can be transformed to
[ χ^[m] S_1(𝐲̂^(1)) S_2(𝐲̂^(1))(χ^[m])^2 ⋯ S_m-1(𝐲̂^(1))χ^[m]; 0 1 S_1(𝐲̂^(2))(χ^[m])^2 ⋯ S_m-2(𝐲̂^(2))χ^[m]; 0 0 (χ^[m])^2 ⋯ S_m-3(𝐲̂^(3))χ^[m]; ⋮ ⋮ ⋮ ⋱ S_1(𝐲̂^(m-1))χ^[m]; 0 0 0 ⋯ χ^[m]; ].
Moreover, the m-th order principal minor of 𝐘_N^(2)(𝐱) is det(𝐀_1,1)=(χ^[m])^m, which is independent of the parameters.
Now we look at the other columns. Expand
S_n+3lm(𝐱̂^(r))=∑_i=[n/m]+2l^[n/m]+3l(χ^[m])^i/i!S_n-(i-3l)m(𝐲̂^(r))
+∑_i=0^[n/m]+2l-1(χ^[m])^i/i!S_n-(i-3l)m(𝐲̂^(r))
and denote s_i=(χ^[m])^[n/m]+2i/([n/m]+2i)!S_n-([n/m]-i)m(𝐲̂^(r)), it follows that
S_n+3lm(𝐱̂^(r))= ∑_i=0^l(χ^[m])^i+[n/m]+2l/(i+[n/m]+2l)!S_n-([n/m]-l+i)m(𝐲̂^(r))+𝒪((a^[m])^[n/m]+2l-1)
= ∑_i=0^l(χ^[m])^-i+[n/m]+3l/(-i+[n/m]+3l)!S_n-([n/m]-i)m(𝐲̂^(r))+𝒪((a^[m])^[n/m]+2l-1)
= ∑_i=0^l([n/m]+2i)!/(-i+[n/m]+3l)!(χ^[m])^3(l-i)s_i+𝒪((a^[m])^[n/m]+2l-1).
For fixed l, we can use S_n+3im,0≤ i≤ l-1 to remove the terms (χ^[m])^i,2l+1≤ i ≤ 3l in S_n+3lm.
Moreover, for n,n-1,⋯,[n/m]m, the coefficients of s_i are just ([n/m]+2i)!/(-i+[n/m]+3l)!(χ^[m])^3(l-i).
Since {s_i,1≤ i ≤ l} are linearly independent, and the determinant of column transformation matrices are constant
multiple of the power of χ^[m],
we can only preserve the coefficients of (χ^[m])^[n/m]+2l on j-th m columns, i.e.
([n/m])!/([n/m]+3l)!S_n-([n/m]-l)m(𝐲̂^(r)). For example, we calculate 𝐀_1,2 and 𝐀_2,2.
If m=3j+1,j≥ 1, then we reserve the term (χ^[m])^2,(χ^[m])^3 for m+1 to m+j columns,
with respect to (χ^[m])^3,(χ^[m])^4 for m+j+1 to m+2j+1 columns and
with respect to (χ^[m])^4,(χ^[m])^5 for m+2j+2 to m+3j+1=2m columns. Then m+1 to 2m columns of 𝐘_N^(2)(𝐱)
can be approximated by 3N× m matrix
[ S_m+1(𝐲̂^(1))(χ^[m])^2 S_m(𝐲̂^(2))(χ^[m])^2 S_m-1(𝐲̂^(3))(χ^[m])^2 ⋯; S_m+4(𝐲̂^(1))(χ^[m])^2 S_m+3(𝐲̂^(2))(χ^[m])^2 S_m+2(𝐲̂^(3))(χ^[m])^2 ⋯; ⋮ ⋮ ⋮ ⋱; S_m+3j-2(𝐲̂^(1))(χ^[m])^2 S_m+3j-3(𝐲̂^(2))(χ^[m])^2 S_m+3j-4(𝐲̂^(3))(χ^[m])^2 ⋯; S_m(𝐲̂^(1))(χ^[m])^3 S_m-1(𝐲̂^(2))(χ^[m])^3 S_m-2(𝐲̂^(3))(χ^[m])^3 ⋯; ⋮ ⋮ ⋮ ⋱; S_m+3j(𝐲̂^(1))(χ^[m])^3 S_m+3j-1(𝐲̂^(2))(χ^[m])^3 S_m+3j-2(𝐲̂^(3))(χ^[m])^3 ⋯; S_m+2(𝐲̂^(1))(χ^[m])^4 S_m+1(𝐲̂^(2))(χ^[m])^4 S_m(𝐲̂^(3))(χ^[m])^4 ⋯; ⋮ ⋮ ⋮ ⋱; S_m+3j-1(𝐲̂^(1))(χ^[m])^4 S_m+3j-2(𝐲̂^(2))(χ^[m])^4 S_m+3j-3(𝐲̂^(3))(χ^[m])^4 ⋯; ]^T+
[ (χ^[m])^3𝐀_1,1; 0_(3N-m)× m ].
Define the first matrix of (<ref>) be [ 𝐀_1,2 𝐀_2,2 0_(3N-2m)× m ]^T.
Then (χ^[m])^-2𝐀_2,2 is
[ S_1(𝐲̂^(m+1)) ⋯ S_3j-2(𝐲̂^(m+1)) χ^[m] ⋯ S_3j(𝐲̂^(m+1))χ^[m] S_2(𝐲̂^(m+1))(χ^[m])^2 ⋯; 1 ⋯ S_3j-3(𝐲̂^(m+2)) 0 ⋯ S_3j-1(𝐲̂^(m+2))χ^[m] S_1(𝐲̂^(m+2))(χ^[m])^2 ⋯; 0 ⋯ S_3j-4(𝐲̂^(m+3)) 0 ⋯ S_3j-2(𝐲̂^(m+3))χ^[m] (χ^[m])^2 ⋯; ⋮ ⋱ ⋮ ⋮ ⋱ ⋮ ⋮ ⋱ ]
which has similar form as 𝐀_1,1.
Repeating this process, then we obtain all 𝐀_i,j. Moreover,
all 𝐀_i,i can be transformed to upper triangular matrices through
column transformations. The determinants satisfy det(𝐀_i,i)=(χ^[m])^2m(i-1)det(𝐀_1,1)=(χ^[m])^2mi-m.
Now we need to calculate 𝐂_k+1,k+1 in (<ref>).
Through the above analysis, we can only preserve the coefficient of
(χ^[m])^[n/m]+2l which just ([n/m])!/([n/m]+3k)!S_n-([n/m]-k)m(𝐲̂^(r)) for
the last N_0 columns of 𝐘_N^(2)(𝐱).
Notice that S_n-([n/m]-k)m(𝐲̂^(r))=S_n-[n/m]m+km(𝐲̂^(r)), and n-[n/m]m is the remainder of n mod m.
Since the subscript of S_i minus one if the column index minus one, when the column index reduces km,
the coefficient is S_n-[n/m]m(𝐲̂^(r)). Hence 𝐂_k+1,k+1 have similar form of 𝐀_1,1.
We just need to concentrate on the first m columns to calculate
the last N_0 columns. Let m=3j+1,j≥ 1, then 𝐂_k+1,k+1 is formed by the first 2N+N_0 rows and
N_0 columns of the matrix
[ S_1(𝐲̂^(km+1))(χ^[m])^2k (χ^[m])^2k 0 ⋯; ⋮ ⋮ ⋮ ⋱; S_3j-2(𝐲̂^(km+1))(χ^[m])^2k S_3j-3(𝐲̂^(km+2))(χ^[m])^2k S_3j-4(𝐲̂^(km+3))(χ^[m])^2k ⋯; (χ^[m])^2k+1 0 0 ⋯; ⋮ ⋮ ⋮ ⋱; S_3j(𝐲̂^(km+1))(χ^[m])^2k+1 S_3j-1(𝐲̂^(km+2))(χ^[m])^2k+1 S_3j-2(𝐲̂^(km+3))(χ^[m])^2k+1 ⋯; S_2(𝐲̂^(km+1))(χ^[m])^2k+2 S_1(𝐲̂^(km+2))(χ^[m])^2k+2 (χ^[m])^2k+2 ⋯; ⋮ ⋮ ⋮ ⋱; S_3j-1(𝐲̂^(km+1))(χ^[m])^2k+2 S_3j-2(𝐲̂^(km+2))(χ^[m])^2k+2 S_3j-3(𝐲̂^(km+3))(χ^[m])^2k+2 ⋯ ]_m × (2N+m)^T.
To simplify the notations, we omit the variables and the power of χ^[m] in the following.
For example, the first column of the matrix (<ref>) is of the form
[ S_1 S_4 ⋯ S_3j-2 S_0 ⋯ S_3j S_2 ⋯ S_3j-1 ]_1× m.
If 1≤ N_0≤ j, the last N_0 columns of the first row is of the form (S_1,S_4,⋯,S_3N_0-2)_1× N_0 which can be expressed
by N_0th order 1-type rogue wave solutions.
If j+1≤ N_0≤ 2j+1, the last N_0 columns of the first row is (S_1,S_4,⋯,S_3j-2,S_0,⋯,S_3(N_0-j-1))_1× N_0.
Since there is only one nonzero element in the (j+1)th column and there are two nonzero elements in the first column
of matrix 𝐂_k+1,k+1, when calculating the term
det((𝐂_k+1,k+1(𝐱^(2)))^†𝐂_k+1,k+1(𝐱^(1))) in (<ref>),
we can discard these two columns:
[ S_1 S_4 ⋯ S_3j-2 S_0 S_3 ⋯ S_3(N_0-j-1); S_0 S_3 ⋯ S_3j-3 0 S_2 ⋯ S_3(N_0-j-2); 0 S_2 ⋯ S_3j-4 0 S_1 ⋯ S_3(N_0-j-3); ⋮ ⋮ ⋱ ⋮ ⋮ ⋮ ⋱ ⋮ ]→[ S_2 ⋯ S_3j-4 S_1 ⋯ S_3(N_0-j-3); ⋮ ⋱ ⋮ ⋮ ⋱ ⋮ ].
Denote 𝐃_1 be the submatrix of matrix 𝐂_k+1,k+1 obtained
by removing the first and second rows, the first column, and the (j+1)th column,
we have approximation
det((𝐂_k+1,k+1(𝐱^(2)))^†𝐂_k+1,k+1(𝐱^(1)))∼det((𝐃_1(𝐱^(2)))^†𝐃_1(𝐱^(1)))
in (<ref>).
The matrix 𝐃_1 is a (2N+N_0-2) by (j-1)+(N_0-j-1)=(N_0-2) matrix.
Hence the remaining matrix 𝐃_1 corresponds to (j-1)th order 0-type and (N_0-j-1)th
order 1-type rogue wave solutions.
If 2j+2≤ N_0≤ 3j, for the matrix 𝐂_k+1,k+1, we observe that the columns j+1, 1, 2j+2 contain one, two, and three nonzero
elements respectively. Furthermore, the columns j+2, 2, 2j+3 contain four, five, and six nonzero elements respectively.
Based on this observation, we can continue this process.
Using the same method in (<ref>),
if we denote 𝐃_2 be the submatrix of
matrix 𝐂_k+1,k+1 obtained by removing the first 3(N_0-2j-1) rows,
the first column to the (N_0-2j-1)th column, the (j+1)th column to the (N_0-j-1)th column,
and the (2j+2)th column to the N_0th column, then we have approximation
det((𝐂_k+1,k+1(𝐱^(2)))^†𝐂_k+1,k+1(𝐱^(1)))∼det((𝐃_2(𝐱^(2)))^†𝐃_2(𝐱^(1)))
in (<ref>). Hence the remaining matrix 𝐃_2 corresponds to
(m-1-N_0)th order 0-type and (m-N_0)th order 1-type rogue wave solutions.
For the case m=3j+2, we can use the same method. The conclusion is given in the proposition.
The rogue wave patterns in the inner region reveal that, at points around the origin,
the rogue wave solutions can be approximated by lower-order rogue waves.
Now we can see why the parameters χ^[3i], i≥ 1 (the case in which m is a multiple of 3) do not affect the rogue wave solutions.
Using the expansion (<ref>), the term 𝐘_N^(2)(𝐱) would be a
3N× m matrix
[ S_1(𝐲̂^(1)) ⋯ S_m-2(𝐲̂^(1)) S_m+1(𝐲̂^(1))+S_1(𝐲̂^(1))χ^[m] ⋯; S_0(𝐲̂^(2)) ⋯ S_m-3(𝐲̂^(2)) S_m(𝐲̂^(2))+S_0(𝐲̂^(1))χ^[m] ⋯; ⋮ ⋱ ⋮ ⋮ ⋱ ].
We can eliminate the term with the power of χ^[m] in the (m/3+1)-th row using the first row, and repeat this process.
Then the rogue wave solutions (<ref>) would not contain the term χ^[3i], i≥ 1.
Combining the rogue wave patterns in the inner and outer regions, we obtain the asymptotic expression of rogue waves generated at
the branch point of multiplicity three on the Riemann surface. The distribution of these
first-order rogue waves can be described
by the roots of the Okamoto polynomial hierarchies together with the center (<ref>):
Let η=(χ^[m])^1/m with m≥ 2 not a multiple of 3,
where χ^[m] is an internal parameter of the k-type rogue wave solutions
(<ref>), and suppose the nonzero roots of the Okamoto polynomial
hierarchy W_N^[k,m](z)
are all simple. As |η|→∞, the rogue wave solutions (<ref>) decompose
into N(N+1-k)-N^[k] first-order rogue wave solutions (<ref>) in the outer region
and a lower-order rogue wave solution (<ref>) in the inner region:
u_i,k^[N](x,t)=∑_(x_0,t_0) u_i,k^asy(x-x_0|η|,t-t_0|η|)+u_i,k^(N_1^[k],N_2^[k])(x,t)
+𝒪(η^-1),
where (x_0,t_0) traverses the nonzero roots of W_N^[k,m]((ϑ^[1](x,t)-χ^[1]) e^- iη). The positions of these
first-order rogue waves in the outer region are (x_0|η|+x̂_1,t_0|η|+t̂_1), where
(x̂_1,t̂_1) is defined in (<ref>). The position of the lower rogue wave in the inner region is
the origin.
In conclusion, Theorem <ref> tells us the decomposition of rogue wave solutions (<ref>)
when one of the internal parameters is large enough.
The positions and the orders of rogue waves in the outer region correspond to the roots of Okamoto polynomial hierarchies.
Due to the root distributions of the Okamoto polynomial hierarchies given by (<ref>),
we can observe the symmetry structures by the positions of the rogue wave patterns.
§.§ Examples
Now we give some examples to verify and visualize the rogue wave patterns in Theorem <ref>.
Assuming b_1=1, b_2=2, and N=3, we can calculate
ϑ^[1](x,t)=-2 i/3x+√(3)+ i/9t+χ^[1].
To simplify the calculation, without loss of generality,
we consider the parameter settings where all internal parameters are zero except for one that is nonzero.
Using this assumption, we plot the graph of the norm of rogue wave solutions (<ref>) and the
positions which are given in Theorem <ref>.
* For 0-type rogue waves, we consider three cases
(χ^[5])^1/5=5, (χ^[7])^1/7=5, (χ^[8])^1/8=5,
and the figures are plotted in Figure <ref>. In these cases, the degree of W_3^[0,m](z) with respect
to z is N(N+1)=12.
* For (χ^[5])^1/5=5, the first component of the solution (<ref>) is plotted in Figure (<ref>-a) and
the second component is plotted in Figure (<ref>-d). Since m=5,N=3, we calculate (N_1^[0],N_2^[0])=(1,1) and N^[0]=2
in Theorem <ref>. Hence there are N(N+1)-N^[0]=10 first-order rogue wave solutions in the outer region and
a (1,1)-order rogue wave solution in the inner region.
* For (χ^[7])^1/7=5, the two components are plotted in Figure (<ref>-b,e). The term
(N_1^[0],N_2^[0])=(2,1) and N^[0]=5. Hence there are N(N+1)-N^[0]=7 first-order rogue wave solutions in
the outer region and a (2,1)-order rogue wave solution in the inner region.
* For (χ^[8])^1/8=5, the two components are plotted in Figure (<ref>-c,f). By direct calculation,
there are 8 first-order rogue wave solutions in the outer region and a (0,2)-order rogue wave solution in the inner region,
i.e. a second-order 1-type rogue wave solution.
* For 1-type rogue waves, we consider four cases
(χ^[2])^1/2=5, (χ^[4])^1/4=5, (χ^[5])^1/5=5, (χ^[7])^1/7=5,
and the figures are plotted in Figure <ref>. The degree of W_3^[1,m](z) with respect
to z is N^2=9.
* For (χ^[2])^1/2=5, the first component of the solution (<ref>) is plotted in Figure (<ref>-a) and
the second component is plotted in Figure (<ref>-e). Since m=2, the terms (N_1^[1],N_2^[1])=(0,1) and N^[1]=1
in Theorem <ref>. There are 8 first-order rogue wave solutions in the outer region and
a (0,1)-order rogue wave solution in the inner region, i.e. a first-order 1-type rogue wave solution.
* For (χ^[4])^1/4=5, the two components are plotted in Figure (<ref>-b,f). The
order (N_1^[1],N_2^[1])=(0,1) and N^[1]=1 in Theorem <ref>.
The case (χ^[4])^1/4=5 is the same as (χ^[2])^1/2=5.
* For (χ^[5])^1/5=5, the two components are plotted in Figure (<ref>-c,g). We get
(N_1^[1],N_2^[1])=(2,1) and N^[1]=4. Hence there are 5 first-order rogue wave solutions in
the outer region and a (2,1)-order rogue wave solution in the inner region.
* For (χ^[7])^1/7=5, the two components are plotted in Figure (<ref>-d,h). By straightforward calculation,
there are 7 first-order rogue wave solutions in the outer region and a (1,0)-order rogue wave solution in the inner region,
i.e. a first-order 0-type rogue wave solution.
From Figure <ref> and <ref>, it can be observed that these circles predict the positions of the rogue wave solutions
in Theorem <ref>.
We can use the roots of Okamoto polynomial hierarchies
W_N^[k,m](ϑ^[1](x,t)-χ^[1]) and centers (<ref>) to predict the positions.
Now we vary the argument of (χ^[m])^1/m while keeping its modulus fixed.
Denote θ̃=arg((χ^[m])^1/m), and let
(x_θ̃,t_θ̃) be the roots of W_N^[k,m]((ϑ^[1](x,t)-χ^[1]) e^- iθ̃)
for θ̃∈ [0,2π).
Using the expansion of ϑ in (<ref>), we define the matrix 𝐀 which is given by
[ ϑ^[1]-χ^[1]; (ϑ^[1]-χ^[1])^* ]
=𝐀[ x; t ].
It leads to the relation
[ x_θ̃; t_θ̃ ]
=𝐀^-1diag( e^ iθ̃, e^- iθ̃)
𝐀[ x_θ̃=0; t_θ̃=0 ].
Note that the matrix 𝐀^-1diag( e^ iθ̃, e^- iθ̃)𝐀 has real elements,
since 𝐀 is the transformation matrix between (x,t)^T and a pair of conjugate complex numbers.
Hence for the parameter (χ^[m])^1/m with fixed norm,
we can use the position of case θ̃=0 to calculate the position (x_θ̃,t_θ̃)
for θ̃∈ [0,2π) through a coordinate transformation (<ref>).
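As a concrete illustration of this transformation (a sketch of ours for the parameter choice b_1=1, b_2=2 used in the examples above, for which ϑ^[1](x,t)-χ^[1]=-2ix/3+(√3+i)t/9), the rotated positions can be evaluated as follows.

import numpy as np

# linear map (x, t) -> (theta, theta*), theta = vartheta^[1] - chi^[1];
# for b1 = 1, b2 = 2:  theta = -2i/3 * x + (sqrt(3) + i)/9 * t
A = np.array([[-2j / 3, (np.sqrt(3) + 1j) / 9],
              [ 2j / 3, (np.sqrt(3) - 1j) / 9]])

def rotate_position(x0, t0, theta_tilde):
    """Map a root position obtained for argument 0 to the position for a
    general argument theta_tilde of (chi^[m])^{1/m} (same modulus)."""
    R = np.linalg.inv(A) @ np.diag([np.exp(1j * theta_tilde),
                                    np.exp(-1j * theta_tilde)]) @ A
    assert np.allclose(R.imag, 0.0)       # the combined map is real
    return R.real @ np.array([x0, t0])

print(rotate_position(3.0, 1.0, np.pi / 2))   # rotated rogue-wave position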
Now we give an example with different values of θ̃ and fixed modulus |(χ^[m])^1/m|.
We set b_1=1, b_2=2, N=3 and the parameters χ^[k]=0 for k≠ 5, with four cases
(χ^[5])^1/5=5 e^ iθ̃, θ̃=0,π/2,π,3π/2.
Here we consider 0-type rogue waves, and plot the norm of rogue wave solutions |u_i,0^[3](x,t)| and the positions in Figure <ref>.
As we change θ̃, the rogue wave patterns have rotations, which are given by the formula (<ref>).
§ CONCLUSION
In this paper, we use the Darboux transformation to construct the rogue wave solutions (<ref>)
of the CFL equations (<ref>) based on the paper <cit.>.
With the aid of the form (<ref>) of k-type rogue wave solutions, we can analyze the rogue wave patterns for (<ref>).
The patterns of the rogue wave solutions generated at the branch point of multiplicity three
are determined by the root structures of the
Okamoto polynomial hierarchies with a linear transformation based on the paper <cit.>.
When one of the internal parameters is large enough,
the Okamoto polynomial hierarchies (<ref>) arise naturally and the rogue wave solutions (<ref>) admit the
decomposition (<ref>).
We can predict the positions of the first-order rogue waves in (<ref>) using the root distributions (<ref>)
of the Okamoto polynomial hierarchies.
For the CFL equations, we can also consider the rogue wave solutions generated at a branch point of multiplicity two,
whose rogue wave patterns are associated with the Yablonskii–Vorob'ev polynomial hierarchies.
More generally, in other models of integrable systems, we can use the roots of special polynomials to study
the patterns of the rogue wave solutions with tau function determinant representations.
Specifically, for general integrable models, we can also construct its Darboux transformation
and use the seed solutions to generate the high-order rogue wave solutions at the branch point of the Riemann surface.
For such rogue wave solutions, we can use a similar approach to calculate the asymptotic expressions and
analyze the properties of the associated polynomial hierarchies to study the rogue wave patterns.
| http://arxiv.org/abs/2307.00448v2 | 20230702005517 | Extended superconducting fluctuation region and 6e and 4e flux-quantization in a Kagome compound with a normal state of 3Q-order | ["Chandra M. Varma", "Ziqiang Wang"] | cond-mat.supr-con | ["cond-mat.supr-con"] | |
http://arxiv.org/abs/2307.01453v1 | 20230704031552 | Diverse Retrieval-Augmented In-Context Learning for Dialogue State Tracking | ["Brendan King", "Jeffrey Flanigan"] | cs.CL | ["cs.CL"] |
Diverse Retrieval-Augmented In-Context Learning for Dialogue State Tracking
Brendan King, Jeffrey Flanigan
===========================================================================
There has been significant interest in zero and few-shot learning for dialogue state tracking (DST) due to the high cost of collecting and annotating task-oriented dialogues. Recent work has demonstrated that in-context learning requires very little data and zero parameter updates, and even outperforms trained methods in the few-shot setting <cit.>. We propose RefPyDST, which advances the state of the art with three advancements to in-context learning for DST.
First, we formulate DST as a Python programming task, explicitly modeling language coreference as variable reference in Python. Second, since in-context learning depends highly on the context examples, we propose a method to retrieve a diverse set of relevant examples to improve performance. Finally, we introduce a novel re-weighting method during decoding that takes into account probabilities of competing surface forms, and produces a more accurate dialogue state prediction.
We evaluate our approach using MultiWOZ and achieve state-of-the-art multi-domain joint-goal accuracy in zero and few-shot settings.[Our code: https://github.com/jlab-nlp/RefPyDST]
§ INTRODUCTION
Dialogue state tracking (DST) is an important language understanding task required for supporting task-oriented conversational agents. For each turn in a dialogue, the goal of DST is to extract the intentions and arguments a user communicates into a meaning representation aligned with the capabilities of the system. Often, this can be represented as a set of slot-value pairs, using slots defined in a system schema. For example, if a user asks a hotel booking agent for "a four-star hotel with somewhere to park", the agent could extract the state {(hotel-stars, 4), (hotel-parking, yes)}.
Annotating these turn-level dialogue states is challenging and time-intensive <cit.>. Further, as system capabilities evolve over time, the schema and DST requirements change. As such, flexible and data-efficient DST methods are highly valuable.
For these reasons, recent work has explored zero and few-shot methods for DST. Few-shot methods often fine-tune a pre-trained language model (LM) on DST or a re-framing of the task <cit.>. While these systems are often data efficient, they are inflexible to changing system definitions, requiring re-training as new services are added. To address this, zero-shot methods for domain transfer have been proposed <cit.>, but their performance in new domains can significantly depend on conceptual overlap with training domains <cit.>.
The in-context learning framework (ICL) <cit.> is particularly appealing in this setting given that it is highly data-efficient and flexible: instead of fine-tuning, ICL methods prompt a fixed LM with templated examples for a task. This approach requires no re-training when adapting to schema changes. In recent work, <cit.> find that prompting a language model with examples for DST in a text-to-SQL format can outperform fine-tuned zero and few-shot methods.
In this work, we propose RefPyDST, a retrieval-augmented in-context learning approach to DST for use with language models pre-trained on code, such as OpenAI Codex <cit.>, by building on recent ICL methods for DST <cit.>. Our approach advances the state of the art with three key contributions.
First, we develop a novel in-context prompt that re-frames DST as text-to-python, explicitly modeling slot value coreferents using variables. We provide an overview of this prompt and example of such coreference in <ref>. We demonstrate that this approach significantly improves system performance in the zero and few-shot settings, and particularly improves accuracy on predictions requiring coreference resolution.
Second, we introduce a novel method for diverse supervised example retrieval, which yields a set of in-context examples that are both individually relevant and collectively representative of the output space, inspired by maximum marginal relevance (MMR) <cit.>. Our approach significantly improves performance in few-shot settings, overcoming a failure mode in supervised example retrieval in which examples are each similar to an input x but redundant in the outputs they demonstrate.
Third, we propose a novel scoring method PMI^β which compensates for surface-form competition among sampled LM completions in constrained generation settings. Inspired by <cit.>, we re-weigh each completion y by an estimate of its a priori likelihood in the task context. We find this improves system performance in both the zero and few-shot settings.
Together, our contributions address key challenges in DST and in retrieval-augmented ICL generally. Our method produces state-of-the-art results on MultiWOZ 2.1 and 2.4 DST benchmarks across a variety of few-shot settings. Similarly, we obtain a new zero-shot state-of-the-art in the multi-domain setting.
§ TASK DEFINITION
A task-oriented dialogue consists of turns or paired utterances between a user and an agent which interfaces the user with a programmable system.
At each turn t, the purpose of a DST module is to use the dialogue history up to that turn to predict a dialogue state y_t, which represents the user's goal and progress in using the system. Let A_i be an agent utterance, U_i be a user utterance, and C_t = [(A_1,U_1),(A_2,U_2), ... (A_t,U_t)][For user-initiated dialogues, A_1 may be omitted] be the dialogue history up to turn t. The task is to map the history C_t to a state representation y_t.
In this work, we predict dialogue states y_t which can be represented as slot-value pairs:
y_t = {(s_1, v_1), (s_2, v_2) ... (s_n, v_n)}
where each slot s_i and the types of values it permits are defined in a system schema. For example, an agent supporting hotel reservations might have a slot `hotel-parking' taking boolean values for constraining search to hotels that include parking.
We can equivalently define this task as predicting state changes, as proposed in <cit.>. Let x_t = [y_t-1, (A_t, U_t)] be a dialogue context consisting of the previous dialogue state prediction and utterances for the current turn. Using this turn context x_t, we predict a state change:
Δ y_t = {+(s_i, v_i) ... -(s_j, v_j) ...}
where y_t is computed by applying the difference Δ y_t to y_t-1.
This approach has two advantages for few-shot in-context learning.
First, the turn context x_t requires fewer tokens to represent than the complete history C_t, permitting more in-context examples.
Second, the number of distinct state changes Δ y_t observed in practice is much smaller than the number of distinct states y_t, simplifying the search for relevant examples and the generation problem.
For these reasons, we formulate our DST problem as mapping from the turn context x_t to a state change Δ y_t. For readability, we often use `turn' to refer to this turn context x_t, distinguishing it from the history C_t or turn number t using notation.
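As a concrete illustration of this bookkeeping, here is a minimal Python sketch (ours, not code from the paper; the slot names follow the MultiWOZ-style example above, and the delta representation is a simplification of the paper's notation) that computes a state change and applies it back:

def state_change(prev_state: dict, new_state: dict) -> dict:
    """Delta y_t as additions/updates and deletions between two
    slot-value dictionaries."""
    delta = {"update": {}, "delete": []}
    for slot, value in new_state.items():
        if prev_state.get(slot) != value:
            delta["update"][slot] = value
    for slot in prev_state:
        if slot not in new_state:
            delta["delete"].append(slot)
    return delta

def apply_change(prev_state: dict, delta: dict) -> dict:
    """Recover y_t from y_{t-1} and the predicted state change."""
    state = {k: v for k, v in prev_state.items() if k not in delta["delete"]}
    state.update(delta["update"])
    return state

y_prev = {"hotel-stars": "4", "hotel-parking": "yes"}
y_curr = {"hotel-stars": "4", "hotel-parking": "yes", "hotel-area": "centre"}
delta = state_change(y_prev, y_curr)
assert apply_change(y_prev, delta) == y_curr
print(delta)   # {'update': {'hotel-area': 'centre'}, 'delete': []}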
§ METHODS
Given a dialogue turn t, our method produces a state change Δ y_t by (1) retrieving a set of in-context examples ℰ_k, (2) formatting these into a prompt f_prompt(x_t, ℰ_k), (3) generating and scoring possible program solutions (LM completions) with OpenAI Codex <cit.>, and (4) executing the program to compute a state change Δ y_t. Given the state change, we compute the complete dialogue state y_t by applying the difference to y_t-1. We describe our prompting function f_prompt in <ref>. In <ref>, we describe our method for retrieving a diverse and representative set of examples ℰ_k. Finally, we describe our method for scoring LM completions with a point-wise mutual information estimate in <ref>.
§.§ Prompting with Text-to-Python
We design a novel prompt that re-frames DST as a text-to-Python task, allowing us to explicitly represent coreference phenomena and leverage the unique capabilities of language models pre-trained with code. <ref> provides an overview. Formally, we define a prompting function f_prompt, which takes a test dialogue turn x_t and a set of k in-context examples ℰ_k = {(x_1, Δ y_1), ..., (x_k, Δ y_k)} and produces a string representing the program synthesis task.
Our prompt (<ref>) starts with a task definition represented as a set of Python classes corresponding to each DST domain. Each informable slot is an attribute in the appropriate class. Type hints are used to label categorical slots with their values and non-categorical slots with the most appropriate type. The dialogue state is also represented as an object which can be manipulated, having an attribute per-domain.
We represent instances of our programming synthesis task with in-context python examples. Each in-context example ([y_t-1, A_t, U_t], Δ y_t) is represented as follows: the previous dialogue state y_t-1 is represented as a dictionary, mapping slot names to values.
Non-categorical values such as names are de-lexicalized by replacing their string value with a variable referencing their existing value in the state.
Solutions to the programming task are represented as function calls that manipulate the dialogue state. One of the key benefits of our formulation of the DST task as python is explicit representation of coreference phenomena. For example, the solution corresponding to a user input “find me a restaurant in the same area as my hotel" would be , explicitly modeling the resolution of the linguistic coreference.
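For illustration, here is a heavily simplified mock-up (ours; the real prompt uses the full schema classes and delexicalization described above, so the exact rendering differs) of how one in-context example with such a coreference might be laid out:

def format_example(prev_state: dict, agent_utt: str, user_utt: str, solution: str) -> str:
    """Render one in-context example: the previous state as a dict, the
    turn's utterances as comments, and the state change as a Python call."""
    lines = [
        f"state = {prev_state}",
        f"# agent: {agent_utt}",
        f"# user:  {user_utt}",
        solution,
    ]
    return "\n".join(lines)

print(format_example(
    prev_state={"hotel": {"area": "east", "stars": "4"}},
    agent_utt="I booked the hotel for you. Anything else?",
    user_utt="Find me a restaurant in the same area as my hotel.",
    solution="agent.find_restaurant(area=state.hotel.area)",
))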
§.§ Retrieving Diverse Relevant Examples
We propose a method for in-context example selection that produces an example set ℰ_k that is both relevant to a test turn x_t and diverse, representing the relevant portions of the output space. We first learn an embedding space in which similar state changes have high cosine similarity with one another (<ref>), following <cit.>. Using this, we propose a novel method for decoding such that examples are similar to x_t but dissimilar to each other (<ref>).
§.§.§ Retriever Training
We fine-tune an embedding model to approximate the true similarity between two turn contexts x_i, x_j with the cosine similarity between their encoded representations, following prior works <cit.>. Let D_train be a set of dialogue turns serving as training data for an example retriever and selection pool at inference time. As described in <ref>, each example e_i ∈ D_train is a context state-change pair e_i = (x_i, Δ y_i). A single example e_i is shown in the green box in <ref>.
We encode an example or query turn context x = [y_t-1, (A_t, U_t)] by concatenating each element of the turn context and passing the result through an embedding model[We use all-mpnet-base-v2 <cit.>, available in sentence-transformers <cit.>] . For two example turn contexts x_i, x_j, the cosine similarity between their embeddings cos(x_i, x_j) approximates their relevance to each other. At inference time, we can embed a test turn x_t and retrieve highly similar examples with nearest neighbors search.
We fine-tune our embedding model with a supervised contrastive loss, such that high cosine similarity of representations correlates with high similarity between dialogue state changes, following the procedure in <cit.>. For our learning objective, we assume a metric that gives the true similarity between two dialogue state changes for a pair of turns sim_F_1, which we define below. For each dialogue turn in the training set, we use sim_F_1 to define positive and (hard) negative examples as the top and bottom 5% of the current nearest 200 examples, respectively. We train each retriever for 15 epochs using the hyperparameters detailed in <ref>.
We define the ground-truth similarity sim_F_1 between two dialogue state changes as follows. Let Δ y^a = {(s_1^a, v_1^a)... (s_m^a, v_m^a)} and Δ y^b = {(s_1^b, v_1^b)... (s_n^b, v_n^b)} be two dialogue state changes. For any slot value v_i exhibiting coreference to another slot s_j, we replace v_i with s_j. For example, the state change corresponding to a turn "I need a taxi to my hotel" would become {(taxi-destination, hotel-name)}, regardless of the particular hotel name value.
We then compute true state similarity using the average between the F_1 score comparing updated slots and the F_1 score comparing updated slot-value pairs, as proposed in <cit.>:
sim_F_1(Δ y^a,Δ y^b) = 1/2F_1({s_1^a, ...}, {s_1^b, ...}) +
1/2F_1({(s_1^a, v_1^a), ...}, {(s_1^b, v_1^b), ...})
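A small Python sketch of this similarity (our illustration, not the released code; it assumes state changes are given as slot-value dictionaries in which coreferring values have already been replaced by the referenced slot name):

def f1(pred: set, gold: set) -> float:
    if not pred and not gold:
        return 1.0
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

def state_change_similarity(delta_a: dict, delta_b: dict) -> float:
    """Average of slot-level F1 and slot-value-level F1 between two
    dialogue state changes represented as {slot: value} dicts."""
    slots_a, slots_b = set(delta_a), set(delta_b)
    pairs_a, pairs_b = set(delta_a.items()), set(delta_b.items())
    return 0.5 * f1(slots_a, slots_b) + 0.5 * f1(pairs_a, pairs_b)

print(state_change_similarity({"taxi-destination": "hotel-name"},
                              {"taxi-destination": "hotel-name",
                               "taxi-leaveat": "17:00"}))   # 0.666...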
§.§.§ Decoding Diverse Examples
We propose an adaptation of MMR which uses our learned embedding model to produce a diverse set of examples that maximizes similarity to x_t and minimizes similarity between examples in . Particularly for encoders that are fine-tuned to approximate output similarity, this yields a set of examples that is more representative of the output space than simply selecting the nearest k, which may all have the same label. Formally, we define the ideal set of in-context examples ℰ^*_k for an input x_t to be the k examples satisfying:
ℰ^*_k = argmax_ℰ_k ⊂𝒟_train∑_x_i ∈ℰ_k cos(x_t, x_i)
- α∑_x_i, x_j ∈ℰ_k cos(x_i, x_j)
where the hyperparameter α is a dissimilarity factor and α=0 corresponds to typical nearest-k example selection. We greedily approximate ℰ^*_k by iteratively selecting, at each step, the example which maximizes the objective above.
For more efficient decoding of ℰ_k with large selection pools, we limit the considered examples to the nearest N such that |D_train| >> N >> k. For example, in one run in the 5% MultiWOZ few-shot setting, |D_train| = 2754, N=100, and k=10.
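The greedy approximation can be written in a few lines; the sketch below is ours and assumes L2-normalized embeddings (so cosine similarity is a dot product) and a precomputed candidate list of the nearest N pool indices.

def select_examples(q_emb, pool_embs, candidates, k=10, alpha=0.2):
    # q_emb, pool_embs: L2-normalized vectors; candidates: indices of the nearest N examples
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    selected, remaining = [], list(candidates)
    while len(selected) < k and remaining:
        def score(i):
            relevance = dot(q_emb, pool_embs[i])                                # similarity to x_t
            redundancy = sum(dot(pool_embs[i], pool_embs[j]) for j in selected)  # similarity to chosen examples
            return relevance - alpha * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected  # alpha = 0 recovers plain nearest-k selection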
§.§ Decoding with Point-wise Mutual Information
We introduce a new rescoring function, PMI^β, to mitigate surface form competition when generating from language models, that we use for making predictions in our setting. PMI^β is an extension of PMI_DC, which was proposed in <cit.> for mitigating surface form competition in the classification setting.
We first describe surface form competition and PMI_DC (<ref>), and then describe PMI^β, an adaptation of this method to the constrained generative setting with in-context examples (<ref>).
§.§.§ Surface-form Competition
Conditioned on a prompt, a language model assigns a likelihood to all completing strings, from which we can sample. While string likelihoods can be used as a proxy for output class or structure likelihoods, these are not the same. For example, in our DST formulation, many strings can correspond to the same state change Δ y_t, or may not correspond to a valid state change at all. As such, <cit.> argue string likelihoods can be unreliable for scoring the best among a fixed set of choices which may each contain numerous surface forms in V^*. To compensate for this, they propose scoring with Domain Conditional Point-wise Mutual Information (PMI_DC = P(y|x, domain)/P(y|domain)). This re-weighs choices by a priori likelihood of their string form in the task context P(y|domain).
§.§.§ Scoring with PMI^β
To mitigate surface-form competition, we propose PMI^β: a prompt conditional pointwise mutual information scoring method that adapts PMI_DC to our constrained generative setting with in-context examples. Doing so requires overcoming two key challenges. First, our choices to score amongst are not practically enumerable. Second, the task context we condition on is partly defined by our choice of in-context examples . We overcome these by first generating a small set of plausible completions 𝒞 and their likelihoods according to a language model. Then, we re-weigh these likelihoods according to an estimate of their a priori likelihood conditioned on only the task context and selected examples ℰ_k:
PMI^β(x_t;y|ℰ_k) = P(y | x_t, ℰ_k) / P(y | f'_prompt(ℰ_k))^β
where f'_prompt is a prompt designed for estimating P(y|ℰ_k) without conditioning on x_t, described below, and β is a hyperparameter for adjusting the impact of re-weighing by a priori likelihood.[While only β=1 corresponds neatly to a point-wise mutual information estimate pmi(x_t;y), we find 0 < β < 1 to be more effective in practice. Prior work in terminology extraction has also proposed scaling PMI estimates, though in a different context <cit.>]
To generate the candidate completions 𝒞, we sample a set of plausible candidates using nucleus sampling <cit.>.
While one could simply use the language model to compute P(y) directly, such unconditional estimates tend to vary wildly. Following <cit.>, we instead estimate the probability of the completion in context, but further account for the use of in-context examples. To do this, we construct an additional prompt which contains the same problem definition, but reverses the order of outputs and inputs. Using this, we can estimate the probability of a completion y in the context of our task and examples ℰ_k without x_t, illustrated in <ref>. Finally, we select the completion ŷ which maximizes Eq. <ref>, and parse it to a dialogue state change Δ y_t:
ŷ = argmax_{y ∈𝒞} PMI^β(x_t;y|ℰ_k)
We choose a minimum a priori likelihood between 10^-7 and 10^-5, as estimates for P(y|f'_prompt(ℰ_k)) can be very low, particularly when rare slot values implied by x_t are not present in any example. When constructing our candidate set 𝒞, we choose the five most likely sampled completions under the original prompt. Finally, we canonicalize each completion y when computing P(y|f'_prompt(ℰ_k)) by first parsing it to a dialogue state change, and then re-writing it as a string in the form it would take if it were an example in ℰ_k. In effect, this normalizes mis-spellings and enforces the expected order of keyword arguments in the update string, further controlling for high variance in our estimates.
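The following sketch (ours) illustrates the PMI^β rescoring loop under stated assumptions: lm_logprob is a hypothetical helper returning log-probabilities from the language model, and canonicalize stands in for the parse-and-rewrite step described above.

import math

def pmi_beta_rescore(candidates, prompt_with_x, prior_prompt, lm_logprob,
                     canonicalize=lambda y: y, beta=0.4, min_prior=1e-6):
    # candidates: completions sampled under the full prompt conditioned on x_t and E_k
    # prior_prompt: the reversed prompt f'_prompt(E_k) used to estimate P(y | E_k)
    best, best_score = None, -math.inf
    for y in candidates:
        log_p = lm_logprob(prompt_with_x, y)                          # log P(y | x_t, E_k)
        prior = max(math.exp(lm_logprob(prior_prompt, canonicalize(y))), min_prior)
        score = log_p - beta * math.log(prior)                        # log [ P(y|x_t,E_k) / P(y|E_k)^beta ]
        if score > best_score:
            best, best_score = y, score
    return best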
§ EXPERIMENTS
We describe our zero and few-shot experimental setups, evaluation, and baselines. Hyperparameter and implementation details can be found in <ref>.
§.§ Experimental Settings
We conduct zero and few-shot DST experiments on the MultiWOZ dataset <cit.>, containing over ten thousand multi-domain task-oriented dialogues crowd-sourced in a wizard-of-oz setup. There are five domains in the validation/test sets and a total of thirty informable slots. We evaluate on the newest MultiWOZ 2.4 <cit.>. For comparison with prior work, we also report on MultiWOZ 2.1 <cit.>.
We evaluate performance with standard joint-goal accuracy (JGA) for all of our experiments. For a turn x_t, a dialogue state prediction ŷ_t is considered correct only if all slot names and values exactly match the ground-truth state y_t.
For the few-shot setting, following <cit.>, we sample 1%, 5%, or 10% of the dialogues from the training set to serve as a training set D_train for each experiment. We fine-tune our retriever using D_train and select in-context examples from it. We conduct three independent runs for each sample size and report the average JGA across runs. We also perform a single run in the full setting, using 100% of the training data.
For the zero-shot setting, there are no labeled examples to select from, but a single formatting example is used for all inference turns, as in <cit.>. We consider two evaluation settings. The first is the typical assessment on all test set dialogues, as in few-shot and complete training regimes, which we will refer to as the standard MultiWOZ benchmark. These results allow comparison to few-shot and full-data results, as well as other methods which use zero supervised dialogues in training. We also report results on the MultiWOZ `leave-one-out' benchmark for zero-shot transfer methods <cit.>, reporting JGA considering only slots in each individual domain, as well as the average of these five single-domain results.
We compare to a number of prior state-of-the-art zero-shot and few-shot DST methods as baselines. These include DST specific architectures <cit.>, various fine-tuning methods <cit.>, and a strong ICL baseline <cit.>.
§ RESULTS
Few-shot DST on MultiWOZ We present few-shot and full-shot dialogue state tracking results on MultiWOZ 2.1 & 2.4 in <ref>. We find that our method achieves state-of-the-art in the 1%, 5%, and 10% few-shot settings for both MultiWOZ 2.1 & 2.4, outperforming all fine-tuned methods as well as other in-context learning methods. While all methods considered improve with additional data, our method is remarkably data efficient: RefPyDST achieves 95% of its full-shot performance using only 5% of the training data, on average. In comparison, using 5% of the training data with IC-DST Codex only achieves 89% of its full-shot performance.
Zero-shot DST on MultiWOZ We present zero-shot multi-domain results on MultiWOZ 2.4 in <ref>. We find our method outperforms all zero-shot methods, achieving a 12.4% increase in multi-domain JGA over IC-DST Codex, our strongest performing baseline. Comparisons are limited to methods that use zero training data, as opposed to transfer methods that train on some MultiWOZ domains and evaluate on others.
For comparison with domain transfer methods, we present zero-shot results on the leave-one-out benchmark for MultiWOZ 2.1 & 2.4 in <ref>.
Following prior work, we evaluate only dialogues and slots in the held-out domain.[Prior work on the leave-one-out
setting evaluates using the following method: (1) filter to dialogues which contain the held out domain (this can include dialogues in multiple domains) and (2) only check slots in that domain when computing JGA. <cit.>]
Evaluating average performance in this setting, we find our method outperforms all methods except for the current state-of-the-art transfer method, SDT-seq. Their method outperforms ours by 1.5% on each held-out domain on average. However, transfer methods such as SDT-seq require significant out-of-domain DST training data, while ours requires none. Despite this training data disadvantage, our approach outperforms all other zero-shot transfer methods.
§ ANALYSIS & ABLATIONS
In this section, we further analyze the performance characteristics of our method.
Ablations
In order to assess how each part of our method contributes to performance, we conduct a leave-one-out ablation, as well as reporting the performance of using only our prompting method. Each ablation is conducted using a 20% sample of the development data in the MultiWOZ 2.4 dataset (200 dialogues), sampled independently of the set used to tune hyperparameters. We present results in <ref> for the zero and 5% few-shot setting. In the few-shot setting, we find leaving out our diverse retrieval to be most impactful.
Does using Python improve coreference resolution?
Since our Python prompting method explicitly models coreference through variable reference, we analyzed how our system performed on state predictions requiring coreference resolution. Using coreference annotations released with the 2.3 version of the MultiWOZ dataset <cit.>, we evaluate accuracy on slot values which require coreference to resolve. Our results are presented in <ref>. Overall, our full model improves upon the baseline for coreference. Removing Python greatly reduces our model's performance, demonstrating the benefit of modeling coreference as Python variable reference.
Does our retrieval method improve demonstrated label diversity?
We investigate to what degree our diverse decoding procedure increases diversity in the distribution of demonstrated labels for a given input. To approximate a label, we define S(e_i) as the distinct combination of slot names in the output for an in-context example e_i = (x_i, Δ y_i), ignoring assigned values.
First, we simply count the average number of distinct combinations of slot names in ℰ_k, shown in the upper half of <ref>. For each x_t, we retrieve a set of in-context examples ℰ_k. We count the number of distinct slot combinations across the examples e_i ∈ ℰ_k, and report the development set average. A value of 1 indicates the retriever is fully redundant: all k examples demonstrate the same combination of slots, while a value of k indicates every example in ℰ_k is unique.
Second, we consider the entropy of slot combinations present in ℰ_k, shown in the lower half of <ref>. For each x_t, we again compute S(e_i) for each retrieved example in ℰ_k. We then compute the specific conditional entropy H(S|X = x_t), estimating the probability of each slot combination p(S|x_t) using its frequency in ℰ_k. We report the development set average of this conditional entropy, H(S|X). H(S|X = x_t) = 0 indicates a fully redundant retriever that retrieves the same set of slots for all examples, and a uniform distribution of slot combinations yields H(S|X = x_t) = log_2(k).[While this is true of a uniform distribution over demonstrated slot combinations, we find uniformly sampling from D_train yields an entropy of ∼ 2.6, as the distribution of labels in the training data is not uniform.]
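Both diversity measures are straightforward to compute; the sketch below is our illustration, assuming each retrieved example exposes its state change as a dict of updated slots.

from collections import Counter
import math

def slot_combination(state_change: dict) -> frozenset:
    # S(e_i): the set of slot names updated by the example's state change
    return frozenset(state_change.keys())

def distinct_combinations(retrieved_changes) -> int:
    # 1 = fully redundant retrieval, k = every example demonstrates a different combination
    return len({slot_combination(d) for d in retrieved_changes})

def conditional_entropy(retrieved_changes) -> float:
    # H(S | X = x_t), with p(S | x_t) estimated from frequencies within E_k
    counts = Counter(slot_combination(d) for d in retrieved_changes)
    k = len(retrieved_changes)
    return -sum((c / k) * math.log2(c / k) for c in counts.values())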
We find our retrieval methods increase the diversity of in-context examples across all settings. For a given training set size, we see that diverse decoding increases the number of distinct `labels', measured by S(e_i), as well as the entropy H(S|X). Still, selected examples are not random, as we can see when comparing H(S|X) to a random retriever which uniformly samples from D_train.[In Appendix <ref>, we also compare few-shot task performance for our retrieval method against random retrieval] Finally, we see that as the size of the training set increases, the diversity in exemplified labels for a given choice of α decreases. Increasing training data leads to a higher density of each slot combination, requiring more aggressive discounting to achieve the same diversity in ℰ_k. As such, we increase α with training set size, using α=0.2 for the 1% and 5% settings and α=0.3 & α=0.5 for the 10% and 100% settings, respectively.
§ RELATED WORK
Dialogue State Tracking
There has been a recent increase in work on the zero and few-shot DST systems. Many approaches fine-tune a pretrained language model by re-framing DST as some form of text-to-text or auto-regressive language modeling task <cit.>. Many of these methods often exhibit zero-shot transfer capabilities <cit.>. However, these approaches still require re-training when a domain is added or changed, and zero-shot transfer performance is dependent on the relatedness of the new domain to existing ones.
Some recent works instead model DST as an in-context learning problem <cit.>, bypassing the need for re-training when system definitions change.
In particular, we build on the work of <cit.>, which models DST by predicting dialogue state changes at each turn, relying on only a state summary and agent/user turn utterances for inference.
Their work models DST as a text-to-SQL problem, whereas we model it as a Python programming problem with novel methods for selecting in-context examples and scoring language model completions.
In-Context Learning
Some recent works explore the properties of effective in-context examples. In classification settings, <cit.> find random examples can significantly limit performance, and propose using a pre-trained embedding model to find examples semantically close to x, retrieving one per class.
Other works investigate the role of examples in ICL performance in detail, finding that ICL methods perform best when example inputs and test inputs are as close in distribution as possible, and when the distribution of exemplified labels closely matches the target distribution <cit.>.
Paralleling this, a number of works across NLP tasks propose methods for retrieving relevant in-context examples. <cit.> use an unsupervised embedding model to embed a test input x and all available examples, retrieving the k with highest embedding cosine similarity. Other works use a similar dense retriever but in an embedding space learned with supervision. <cit.> fine-tune an example retriever with contrastive learning in which positive examples maximize p_LM(y|x, e_i). <cit.> propose a contrastive learning objective specific to DST, fine-tuning an embedding model to embed turns with similar state changes in proximity to each other. Rather than use a separate retrieval module, <cit.> use the LM itself to select examples which are most likely when conditioned on x. Each of these works scores the relevance of an individual example e_i to a test input x and then selects the k most relevant ones to include in a prompt. In most cases, this yields a set of examples which are meaningfully similar to x. However, considering examples individually does not necessarily lead to adequate exemplification of the output space. In supervised settings that learn a relevance metric which approximates output similarity, this can lead to degenerate example sets which all exemplify the same output. In contrast to this, we propose a novel method for using this score to construct ℰ_k with examples that are relevant to x while being distinct from each other.
In concurrent work to our own, <cit.> propose a method for decoding diverse examples of explanations from a retriever for use in reasoning problems, also based on maximum-marginal-relevance (MMR) <cit.>. Their work uses unsupervised measures of similarity between explanations, where ours uses a supervised retriever which approximates similarity of outputs. Thus, diversity in our example sets correlates to diversity in exemplified outputs. In another concurrent work to our own, <cit.> propose a method for diverse example selection in a semantic parsing task, using the outputs of selected examples to incrementally cover more structures.
For tasks which can be re-framed as program synthesis, a number of works have also developed ICL methods for use with LMs pre-trained on code such as Codex and Codegen <cit.>. <cit.> use ICL with Codex to generate Lisp-like programs in a dialogue semantic parsing task. <cit.> evaluate such models capabilities in Text-to-SQL problems, and <cit.> use a Text-to-SQL framing to use Codex for DST. Instead of SQL queries, we generate Python programs, allowing for intuitive modeling of phenomena like coreference.
Finally, recent works have considered adjusting how completion strings are scored with an LM. <cit.> normalize log-likelihoods by length before scoring completions. <cit.> re-weigh LM probabilities by learning an affine transformation that yields uniform scores given `content-free inputs'. <cit.> propose PMI_DC, a method for re-scoring completions using pointwise mutual information (pmi), which we adapt to our constrained generative setting.
§ CONCLUSION
We propose RefPyDST, an in-context learning method for DST. Our contributions address key challenges in DST and in retrieval-augmented ICL, producing state-of-the-art results on MultiWOZ DST benchmarks for few-shot and zero-shot setups. Future work could apply methods developed here to other in-context learning problems.
§ LIMITATIONS
While in-context learning methods for DST are promising in their data efficiency and flexibility to new domains, they typically require very large models to perform effectively. At 175 billion parameters, OpenAI Codex <cit.> is much larger than some of the fine-tuned approaches to DST, though with better performance and ability to adapt to new domains without re-training. Despite our advances, there are still significant errors when applying ICL for DST. As such, ICL may not necessarily be relied on in safety-critical settings.
§ ACKNOWLEDGEMENTS
We thank Geetanjali Rakshit, Nilay Patel, Changmao Li, Chris Toukmaji, Rongwen Zhao, and other JLab members for insightful feedback on preliminary drafts of this work, and thank the anonymous reviewers and area chairs for their detailed and helpful feedback. The authors were supported in part by the NSF National
AI Institute for Student-AI Teaming (iSAT) under
grant DRL 2019805. The opinions expressed are
those of the authors and do not represent views
of the NSF. We are thankful for the computing resources provided by the Pacific Research Platform's Nautilus cluster, supported by the National Science Foundation under Award Numbers CNS-1730158, ACI-1540112, ACI1541349, OAC-1826967, the University of California Office of the President, and the University of California San Diego’s California Institute for Telecommunications and Information Technology/Qualcomm Institute.
§ DIALOGUE STATE NORMALIZATION
Real world task oriented dialogue systems can interface users with thousands or more entities, such as restaurants or hotels in MultiWOZ. Since reasoning directly over all such entities is intractable, dialogue understanding modules often first predict a surface form (e.g. a restaurant name mentioned by a user) which another module links to a canonical form (e.g. that restaurants name in a database). While dialogue state trackers evaluated on MultiWOZ do not need to interact with a database, handling of typos and unexpected surface forms is important for a realistic assessment of system performance, since predictions for a slot are evaluated on exact string match.
As such, most research systems including the baselines in this paper use rule-based functions to fix typos and unexpected surface forms. We propose a robust rule-based method for effective linking of surface forms to canonical forms described below.
Mapping to canonical forms
We begin by first reading in canonical forms for every informable slot in the MultiWOZ system. For categorical slots, these are defined in a schema file, as released with MultiWOZ 2.1 <cit.>. For non-categorical slots, we read in values from the database defined with the original MultiWOZ data collection <cit.>. Neither source of information contains dialogue data, only information defining the task. The taxi and train service have informable slots for departure and destination locations. In addition to the locations listed for these slots in a database (i.e. scheduled train journeys), we accept the name of any entity which has an address as a canonical form for these slots. For time slots we consider any time represented in "hh:mm" form as canonical. Overall, this gives us a mapping from a slot name s_i to a set of canonical forms 𝒞_𝒾 for all slot names.
Given a slot name s_i and a slot value surface form v_j, we select the correct canonical form c_j as follows: (1) we first generate a set of aliases A_j for v_j. These are acceptable re-phrasings of v_j, such as adding the leading article "the", a domain specifying suffix such as "hotel" or "museum", or switching numbers to/from digit form (e.g. "one" ↔ "1"). (2) We then consider a surface form v_j as mapped to a canonical form c_j if any of the aliases a_j ∈ A_j is a fuzzy match for the canonical form c_j, using a scorer from the fuzzywuzzy package[<https://pypi.org/project/fuzzywuzzy/>]. We require a score of 90 or higher, and verify in the development data that no surface form maps to more than one canonical form.
Choosing the most likely surface form While in a real world dialogue system we would only need to link to canonical forms, gold dialogue state states in MultiWOZ are themselves annotated with surface forms, not always matching the name of the entity in the database and occasionally disagreeing on an entity name. So as to not alter the evaluation process and make sure we can fairly compare to prior work, we use the training data available in each experimental setting to choose the most likely surface form for a given canonical form c_j. To do this, we simply count the occurrences of each surface form in the gold labels of the training set for that experiment, and select the most frequently occurring one for c_j. However for low data regimes, we often do not observe all canonical forms. Following numerous prior works, we make use of the ontology file released with the dataset <cit.>, which lists all observed surface forms for a slot name, and treat each of these as if we had seen them 10 times. This serves as a smoothing factor for selecting the most likely surface form. For the zero-shot experiments, we use only the counts derived from the ontology file, as we have no training data to observe.
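A rough sketch of the linking step (ours; the alias rules shown and the particular fuzzywuzzy scorer are assumptions, since only the score threshold of 90 is stated above):

from fuzzywuzzy import fuzz

def aliases(value: str, domain: str):
    # illustrative alias rules; the paper's full rule set also swaps digits and words
    return {value, f"the {value}", f"{value} {domain}"}

def link_to_canonical(slot: str, surface: str, canonical_forms: dict, domain: str, threshold=90):
    # canonical_forms: slot name -> iterable of canonical values (from schema/database)
    for canon in canonical_forms.get(slot, []):
        if any(fuzz.ratio(a.lower(), canon.lower()) >= threshold for a in aliases(surface, domain)):
            return canon
    return None  # unlinked surface forms are left as predicted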
Overall, we find this approach to normalization to be robust when compared to other works, which rely on hard-coded fixes for commonly observed typos. Further, our normalization can be initialized with any similarly formatted system definition and data set, allowing for use in other domains.
To verify that our approach to normalization is not the key factor distinguishing our performance from previous methods, we apply it to a faithful re-implementation of our IC-DST Codex baseline <cit.> in our ablation in <ref>.
§ PROMPT EXAMPLES
Please see our GitHub repository for prompt examples: https://github.com/jlab-nlp/RefPyDSThttps://github.com/jlab-nlp/RefPyDST.
§ IMPLEMENTATION DETAILS
§.§ Hyperparameters
All hyperparameter tuning is performed using a 10% split of the development set (100 dialogues) and manual tuning. We find that a smaller choice for p (0.7) in nucleus sampling helps performance in the zero-shot setting. Similarly, we find that in order to select a diverse set of examples, we need to scale α. We use α=0.2 for the 1% & 5% settings, α=0.3 for 10%, and α=0.5 for the full setting. For the full setting, we also increase the number of considered examples from the nearest 100 to the nearest 200. Across all settings, we compute PMI^β with β=0.4. We use a robust approach to normalizing predicted values (i.e. to resolve mis-spellings, etc.) described in Appendix <ref>. We apply this normalization to our strongest baseline (IC-DST Codex) in our ablations (<ref>).
When computing P(y|f'_prompt(ℰ_k)), we clip low token log probabilities at 5e-7 in the few-shot setting and 5e-4 in the zero-shot setting, as the lack of examples leads to poorer calibration in the zero-shot setting. We also clip full-sequence log probabilities at 1e-7 in the few-shot setting and 1e-5 in the zero-shot setting.
§.§ Retriever fine-tuning details
For both our methods and the re-implementation of IC-DST Codex <cit.> used in our ablations (<ref>), we fine-tune the retriever using the package <cit.>, following the procedure of <cit.>. We begin with a pre-trained embedding model, which we use as a retriever with nearest neighbors search[We use the scipy implementation: <https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.KDTree.html>]. Each of our retrievers is trained for 15 epochs with a contrastive loss following <cit.>, computed using only hard positives and hard negatives. For each dialogue turn in the training set, we use sim_F_1 to define positive and (hard) negative examples as the top and bottom 5% of the nearest 200 examples, respectively.
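A hedged sketch of this fine-tuning loop with sentence-transformers is given below; the specific loss class, batch size, and warmup steps are our assumptions based on the description (a contrastive loss over hard positives and negatives), not the released configuration.

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

def finetune_retriever(contrastive_pairs, epochs=15):
    # contrastive_pairs: list of (query_text, example_text, label) with label 1.0 for
    # hard positives (top 5% by sim_F1 among the nearest 200) and 0.0 for hard negatives
    model = SentenceTransformer("all-mpnet-base-v2")
    data = [InputExample(texts=[q, e], label=float(l)) for q, e, l in contrastive_pairs]
    loader = DataLoader(data, shuffle=True, batch_size=32)      # batch size is our assumption
    loss = losses.OnlineContrastiveLoss(model)                  # assumed loss choice (hard pos/neg only)
    model.fit(train_objectives=[(loader, loss)], epochs=epochs, warmup_steps=100)
    return model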
§.§ Arguments to Codex
For all methods, we make requests to OpenAI Codex with fixed decoding arguments and with a stop sequence matching the prompt format (one for the IC-DST Codex baseline replication and one for ours). Methods which utilize nucleus sampling <cit.> do so via the sampling parameter p. In the few-shot setting, we sample several completions, keeping only the most likely results. In the zero-shot setting, we increase the number of sampled completions to 32.
§ RANDOM RETRIEVAL ABLATION
In <ref>, we compare our retrieval methods to random retrieval, on the 20% split of the development set used in our previous ablations. For random retrieval, we sample k examples from D_train uniformly at random to construct ℰ_k. We find this significantly under-performs our learned retrieval methods, whether selecting the top-k examples or using our diverse decoding approach.
[arXiv:2307.02380v1, 2023-07-05. Moments of ideal class counting functions. Kam Cheong Au. Categories: math.NT, math.RT; MSC 11N37, 11R42 (Primary), 11F30, 20C15 (Secondary).]
Kam Cheong Au
Rheinische Friedrich-Wilhelms-Universität Bonn, Mathematical Institute, 53115 Bonn, Germany
[email protected]
2010 Mathematics Subject Classification: Primary 11N37, 11R42; Secondary 11F30, 20C15.
We consider the counting function of ideals in a given ideal class of a number field of degree d. This describes, at least conjecturally, the Fourier coefficients of an automorphic form on GL(d), typically not a Hecke eigenform and not cuspidal. We compute its moments, and also investigate the moments of the corresponding cuspidal projection.
Moments of ideal class counting functions
August 1, 2023
=========================================
§ INTRODUCTION
Given a positive definite binary integral quadratic form g(x,y) of discriminant D, the properties of the numbers r_g(n) = #{(x,y)∈ℤ^2 | g(x,y)=n} have been intensively investigated. The Dirichlet series ∑_n≥ 1 r_g(n) n^-s is the L-function of an elliptic modular form, the moments
∑_n≤ x r_g(n)^2β≍ x (log x)^2^2β-1-1, β∈ℝ^>0,
have been quite well-studied (for example <cit.>, <cit.>, <cit.> and <cit.>). One can define, in a quite natural way, a closely related quantity r_cusp,g(n) such that ∑_n≥ 1r_cusp,g(n)n^-s is now the L-function of a cuspidal modular form.
One expects a slower growth of the following moment
∑_n≤ x |r_cusp,g(n)|^2β≍ x (log x)^A.
It is important to note that neither r_g(n) nor r_cusp,g(n) are in general Fourier coefficients of a Hecke eigenform. Therefore the exponent A cannot be predicted from a suitable Sato-Tate law and is an interesting quantity to consider. Blomer <cit.> gives a formula for A when D is fundamental; we will derive a similar formula for non-fundamental D as an application of our general framework to be described below.
Because r_g(n) is essentially number of integral ideals of norm n in a given ideal class, one can extend this investigation to general number fields. Let K be a number field of degree d, 𝔄 an ideal class of K, a(𝔄,n) the number of integral ideals in the class 𝔄 with norm n. We first consider the moment
∑_n≤ x a(𝔄,n)^2β≍ x (log x)^A_1.
We will give a formula for exponent A_1 (Theorem <ref>). When K/ℚ is Galois, it turns out A_1 = d^2β-1-1; the non-Galois case is more complicated. Being a linear combination of L-functions of automorphic representations of GL(1)/K, one could perform (at least conjecturally) automorphic induction to produce representations of GL(d)/ℚ, where it makes sense to talk about the cusp space. We study this phenomenon in the second section. We first give a natural criterion on which the corresponding Artin L-function should come from a cuspidal automorphic representation (Proposition <ref>). This criterion agrees with proven cases of automorphic induction <cit.>. In general number fields, we can again then define a quantity a_cusp(𝔄,n) that generalizes r_cusp,g(n), and ∑_n≥ 1 a_cusp(𝔄,n)n^-s is a linear combination of L-functions of cuspidal automorphic forms. Consider the same moment problem:
∑_n≤ x |a_cusp(𝔄,n)|^2β≍ x (log x)^A_2.
We will prove (in the majority of cases: Propositions <ref>, <ref>) a formula for A_2. It will involve finer properties of the number field K. In our investigation, the naive language of quadratic forms no longer suffices. Instead, we employ extensively the language of (finite group) representations. Not only does it allow us to state certain formulas more concisely, but it also provides more transparent proofs for the known quadratic cases.
We work mainly with Dirichlet series, the asymptotic of the partial sum is then obtained via Tauberian theorems <cit.>.
Most of the works below are extracted from the author's Master thesis (at University of Bonn) under the same title.
§ FULL PARTIAL ZETA SERIES
§.§ General considerations
In this section, we outline the framework we will be working with throughout this article and then prove several important results. We will temporarily not mention objects related to ideal classes. Doing so will allow us to concentrate on the more important representation-theoretic details. We fix the following notations which will be used in this section. Let L/K be an abelian extension of number fields, M be an extension of L such that M/ℚ is Galois (M can be taken, for example, to be the Galois closure of L over ℚ). Denote
G = Gal(M/ℚ) H = Gal(M/K) N = Gal(L/K)
(Field tower: ℚ ⊂ K ⊂ L ⊂ M, with H = Gal(M/K) and G = Gal(M/ℚ).)
Let χ: N→ℂ^× be a (one-dimensional) character of N; we also write χ for the one-dimensional character of H obtained by composing the natural map H→ N with χ, and we let χ^ be the character of G induced from this character of H (it might not be multiplicative). We will sometimes abuse notations by using the same symbols for characters and their associated representations. For two characters ρ_1,ρ_2 of G, write
⟨ρ_1,ρ_2⟩_G = 1/|G|∑_g∈ G ρ_1(g) ρ̅_2(g)
as the usual inner product between characters.
Denote the Artin L-function associated with a representation ρ of G as L(ρ,s), also let a(ρ,n) be defined by
L(ρ,s) = ∑_n≥ 1a(ρ,n)/n^s.
We have L(χ,s) = L(χ^,s), that is,
a(χ,n) = a(χ^,n),
where on the left χ may be regarded either as a character of N or as the corresponding character of H.
This follows immediately from the fact that the Artin L-function is invariant under inflation (equivalently, composition with the restriction map H→ N) and under induction. (See <cit.>)
For a prime p of ℚ, unramified in M, recall that the Frobenius class Frob(p) is a well-defined conjugacy class in G: take any prime 𝔓 of M lying over p; then Frob(p) is the conjugacy class containing the Frobenius automorphism of 𝔓 over p.
For a prime p unramified in M, let C = Frob(p) be the corresponding conjugacy class in G; then
a(χ^,p) = χ^(C).
This follows immediately by comparing the first order expansion of determinant:
1/(1-χ^(Frob(p)) p^-s) = 1 + χ^(Frob(p))/p^s + ⋯
For real β>0, we are interested in the behaviour of the series ∑_n≥ 1 |a(χ,n)|^2β n^-s near s=1.
Let χ be a 1-dimensional character of N, then as s→ 1 from the right, we have, for some c≠ 0,
∑_n≥ 1|a(χ,n)|^2β/n^s∼ c (s-1)^-ϱ(χ,β)
where
ϱ(χ,β) = 1/|G|∑_g∈ G |χ^(g)|^2β
We write A≈ B if A/B is holomorphic and non-vanishing on ℜ(s)>1/2. Then we have
∑_n=1^∞|a(χ,n)|^2β/n^s = ∏_p (1+|a(χ,p)|^2β/p^s +|a(χ,p^2)|^2β/p^2s+⋯) ≈∏_p (1+|a(χ,p)|^2β/p^s).
Here the last step is valid since a(χ,n) = O(n^ε) for any ε>0. We can neglect the finitely many ramified primes. Letting C denote a conjugacy class in G, the above is
∏_C⊂ G∏_Frob(p) = C(1+ |χ^(C)|^2β/p^s) ≈∏_C⊂ G∏_Frob(p) = C(1+ 1/p^s)^|χ^(C)|^2β.
From Chebotarev's density theorem, the set {p : Frob(p) = C} has (polar) density |C|/|G|, hence ∏_Frob(p) = C (1+ 1/p^s) ≍ (s-1)^-|C|/|G| as s→ 1. So we have
ϱ(χ,β) = ∑_C⊂ G|C|/|G| |χ^(C)|^2β = 1/|G|∑_g∈ G |χ^(g)|^2β.
For β>0, there exists a constant c≠ 0 such that
∑_n≤ x |a(χ,n)|^2β∼ c x (log x)^ϱ(χ,β)-1.
This follows from the above after using a version of Tauberian theorems. See <cit.> or <cit.>.
We record an observation that will be used later:
Let ρ_1,⋯,ρ_k be representations of G, then
∑_n≥ 1a(ρ_1,n)⋯ a(ρ_k,n)/n^s,
originally convergent for ℜ(s)>1, has meromorphic continuation to ℜ(s) > 1/2.
Write A≈ B if A/B is holomorphic and non-vanishing on ℜ(s)>1/2. We can neglect the finitely many ramified primes. Then
∑_n≥ 1a(ρ_1,n)⋯ a(ρ_k,n)/n^s = ∏_p (1+a(ρ_1,p)⋯ a(ρ_k,p)/p^s + a(ρ_1,p^2)⋯ a(ρ_k,p^2)/p^2s + ⋯) ≈∏_p (1+a(ρ_1,p)⋯ a(ρ_k,p)/p^s).
For unramified p, one has a(ρ_1,p)a(ρ_2,p) = a(ρ_1⊗ρ_2,p) with ⊗ meaning tensor product representation. Therefore above equals
∏_p (1+ a(ρ_1⊗⋯⊗ρ_k,p)/p^s) ≈∏_p (1+ a(ρ_1⊗⋯⊗ρ_k,p)/p^s + a(ρ_1⊗⋯⊗ρ_k,p^2)/p^2s+⋯) ≈ L(ρ_1⊗⋯⊗ρ_k,s),
this is the Artin L-function of another representation of G, which is known to admit meromorphic continuation.
Using the main conclusion of <cit.>, it should not be too hard to prove that the Dirichlet series in the above lemma has meromorphic continuation to all of ℂ, but we will not need this fact.
Denote
ϱ(β) = max_χ∈Nϱ(χ,β).
Here the maximum is taken over all (1-dimensional) characters of N. Let 𝔛⊂N be the set of characters at which the maximum is attained; we will see in the proof of the following lemma that 𝔛 is independent of β.
(i) We have
ϱ(β) = 1/|G|∑_C⊂ G |C| ( [G:H] |H∩ C|/|C|)^2β,
here we are summing over all conjugacy classes of G.
(ii) Let χ∈𝔛. Then for each C ⊂ G, χ is constant on C∩ H, and χ^ (g) is of the form ℤ^≥ 0 times a root of unity.
(iii) Given σ∈ N, there exists a conjugacy class C⊂ G such that χ(σ)χ^ (C) ≥ 0 for all χ∈𝔛.
Let C be a conjugacy class of G, ϕ be the indicator function on C. We have
⟨χ^,ϕ⟩_G = 1/|G|∑_g∈ Cχ^(g).
On the other hand, by Frobenius reciporcity,
⟨χ^,ϕ⟩_G = ⟨χ,Res ϕ⟩_H = 1/|H|∑_h∈ H∩ Cχ(h),
thus
χ^(C) = [G:H]/|C|∑_h∈ H∩ Cχ(h).
Hence
ϱ(χ,β) = ∑_C⊂ G |C|/|G| |χ^(C)|^2β
is maximized when χ is the trivial character on N. For any other character χ also maximizing it, we must have |χ^(C)| = |1^(C)| for all C, implying that χ(h) is constant on h∈ H∩ C, and the fact that χ(h) is a root of unity implies χ^(C) is a non-negative integer times a root of unity. For the last assertion, choose any lift σ' ∈ H of σ∈ N, and let C be the conjugacy class of G containing σ'; then this C satisfies the condition.
Consider the following partial zeta coefficient: for σ∈ N, let[we used the notation a(·,n) in two different ways: if σ∈ N, a(σ,n) is defined via the next displayed equation; if χ∈N, a(χ,n) is the coefficient of the Artin L-function L(χ,s). The symbol σ will always denote an element in N; χ will always denote an element in N.]
a(σ,n) = 1/|N|∑_χ∈Nχ̅(σ) a(χ,n).
Let S be a finite collection of finite places in ℚ, i.e. a finite set of prime numbers, write (n,S) = 1 if n is not divisible by each element in S.
For any σ∈ N and n ∈ℤ^≥ 1, one has a(σ,n) ∈ℚ. Let S be the set of primes in ℚ which lie below primes that are ramified in L/K; then we also have a(σ,n) ∈ℤ^≥ 0 for (n,S)=1.
Let τ∈ Gal(ℚ̅/ℚ), and denote by χ^τ the character obtained by composing the values of χ with τ. Then
a(σ,n)^τ = 1/|N|∑_χ∈N χ̅^τ(σ) a(χ,n)^τ = 1/|N|∑_χ∈N χ̅^τ(σ) a(χ^τ,n) = 1/|N|∑_χ∈N χ̅(σ) a(χ,n) = a(σ,n);
this holds for all τ∈ Gal(ℚ̅/ℚ), so a(σ,n)∈ℚ. Let S' be the set of primes in K lying over S ⊂ℚ. Note that
∑_n≥ 1 a(χ,n)/n^s = ∏_𝔭∈ S' L_𝔭(χ,s) ∏_𝔭∉ S' 1/(1-χ(Frob_𝔭)(N𝔭)^-s);
here the Euler factors L_𝔭(χ,s) at the ramified primes lack a uniform description, but for (n,S)=1 they do not affect the value of a(σ,n), so a(σ,n) is a non-negative integer by character orthogonality.
Our first major result is
Let β>0, σ∈ N, then as s→ 1 from the right, we have
0< lim inf_s→ 1 (s-1)^ϱ(β)∑_n≥ 1|a(σ,n)|^2β/n^s≤lim sup_s→ 1 (s-1)^ϱ(β)∑_n≥ 1|a(σ,n)|^2β/n^s < ∞.
The fact that lim sup < ∞ is evident since
|a(σ,n)|^2β≪∑_χ |a(χ,n)|^2β
and Dirichlet series of each term has exponent ≤ϱ(β). Proving lim inf≠ 0 is more involved. Abbreviate χ^(C) := a_C,χ where C is a conjugacy class of G. For χ∈𝔛, by the above lemma, we can choose positive integer l such that a_C,χ^l ≥ 0 for all C. Write μ as a primitive l-th root of unity. For some ρ_i(C)∈ℂ, i=0,⋯,l-1 to be fixed later, define
f(χ,C) = ∑_i=0^l-1ρ_i(C) ∏_ p ∈ C(1+ μ^i a_C,χ/p^s)
also define a'(χ,n) and a'(σ,n) via
∑_n≥ 1a'(χ,n)/n^s := ∏_C⊂ G f(χ,C),
a'(σ,n) = 1/|N|∑_χ∈Nχ̅(σ) a'(χ,n).
Note that the same prime p never occurs in two f(χ,C_1) and f(χ,C_2) with C_1≠ C_2; as usual, we only focus on primes which are not ramified in M.
We claim that we can choose ρ_i(C) ≠ 0 independent of χ such that
* For all j and C, ∑_i=0^l-1ρ_i(C) μ^ij equals either 0 or 1.
* χ̅(σ)a'(χ,n) ≥ 0 for all n and χ∈𝔛.
Assuming above claim, let us prove lim inf > 0. (1) implies a'(χ,n) = a(χ,n) for all χ or a'(χ,n) = 0 for all χ, therefore |a(σ,n)| ≥ |a'(σ,n)|, so it suffices to prove the assertion for ∑_n≥ 1 |a'(σ,n)|^2βn^-s. Since
|a'(σ,n)|^2β ≫|∑_χ∈𝔛χ̅(σ)a'(χ,n) |^2β - ∑_χ∉𝔛 |a'(χ,n)|^2β
≫∑_χ∈𝔛 |a'(χ,n)|^2β - ∑_χ∉𝔛 |a'(χ,n)|^2β by condition (2),
condition (1) also implies
*∑_n≥ 1|a'(χ,n)|^2β/n^s = ∏_C⊂ G( ∑_i=0^l-1ρ_i(C) ∏_ p ∈ C (1+ μ^i |a_C,χ|^2β/p^s) ).
To see why (*) is true, we compute, for example coefficient of p_1^-sp_2^-s for p_1≠ p_2, if p_1 = p_2 = C, then
a'(χ,p_1p_2) = (∑_i=0^l-1ρ_i(C) μ^2i)a_C,χ^2.
if p_1 = C_1 ≠ p_2 = C_2, then
a'(χ,p_1p_2) = (∑_i=0^l-1ρ_i(C_1) μ^i)(∑_i=0^l-1ρ_i(C_2) μ^i) a_C_1,χ a_C_2,χ.
since the numbers in parenthesis are either 0 or 1, we can distribute |·|^2β on both sides while keeping equalities. Above formulas generalize when n=p_1⋯ p_k, so (*) indeed holds. In (*), the
∏_ p ∈ C (1+ μ^i |a_C,χ|^2β/p^s) ≍ (s-1)^-μ^i |a_C,χ|^2β |C|/|G|, s→ 1.
As i varies from 0 to l-1, the dominant term comes from the i for which μ^i has the largest real part, i.e. i=0, so
∑_i=0^l-1ρ_i(C) ∏_ p ∈ C(1+ μ^i |a_C,χ|^2β/p^s) ≍ (s-1)^-|a_C,χ|^2β |C|/|G|
(here we also used ρ_i(C)≠ 0). Multiplying over all C⊂ G gives ∑_n≥ 1 |a'(χ,n)|^2β n^-s is ≍ (s-1)^-ϱ(χ,β), which is (s-1)^-ϱ(β) when χ∈𝔛.
Therefore
∑_n≥ 1|a'(σ,n)|^2β/n^s≫ (s-1)^-ϱ(β) - ∑_χ∉𝔛∑_n≥ 1|a'(χ,n)|^2β/n^s.
by definition of 𝔛, all terms on the right have pole order <ϱ(β), so the above is still ≫ (s-1)^-ϱ(β), proving that the lim inf is positive, assuming the two conditions above.
Now we explain how to choose ρ_i(C) achieving these criteria. We make
ρ_i(C) = l^-1 for all i, or ρ_i(C) = l^-1μ^-i for all i.
Obviously (1) is satisfied. For (2), the lemma above says there exists a conjugacy class C_0 ⊂ G such that χ̅(σ) a_C_0,χ = χ̅(σ) χ^(C_0) ≥ 0 for all χ∈𝔛. For C≠ C_0, we pick the first possibility in (<ref>), giving
f(χ,C) = 1 + ∑_Frob(p_i)∈ C, p_i≠ p_j a_C,χ^l/(p_1^s ⋯ p_l^s) + ∑_Frob(p_i)∈ C, p_i≠ p_j a_C,χ^2l/(p_1^s ⋯ p_2l^s) + ⋯, C≠ C_0,
which has non-negative coefficients (by our choice of l); for C = C_0, we pick the second possibility in (<ref>), giving
f(χ,C_0) = ∑_Frob(p_1)∈ C_0 a_C_0,χ/p_1^s + ∑_Frob(p_i)∈ C_0, p_i≠ p_j a_C_0,χ^l+1/(p_1^s ⋯ p_l+1^s) + ∑_Frob(p_i)∈ C_0, p_i≠ p_j a_C_0,χ^2l+1/(p_1^s ⋯ p_2l+1^s) + ⋯,
so χ̅(σ)f(χ,C_0) has non-negative coefficients. Then
∑_n≥ 1χ̅(σ)a'(χ,n)/n^s = χ̅(σ) f(χ,C_0) ∏_C≠ C_0 f(χ,C)
also has non-negative coefficients, which is (2).
The above proof is an extended version of that used by Blomer in <cit.>, where it is essentially our case specialized to l=2.
Examining the above proof shows that we do not need to sum over all n≥ 1 to make lim inf >0. In fact, for any finite set of primes S in ℚ, replacing ∑_n≥ 1|a(σ,n)|^2β n^-s with
∑_{(n,S)=1, n squarefree} |a(σ,n)|^2β/n^s
produces exactly the same qualitative behaviour, and the statement of above theorem remains unchanged. We only need to modify the proof to exclude primes p ∈ S in the expression f(χ,C) above.
For χ_1,⋯,χ_k ∈N, imitating the proof of Proposition <ref>, one shows easily the pole of
∑_n≥ 1a(χ_1,n)⋯ a(χ_k,n)/n^s
at s=1 has order
ϱ(χ_1,⋯,χ_k) := 1/|G|∑_g∈ G χ_1^(g) ⋯χ_k^(g) ∈ℤ^≥ 0.
Let χ_1,⋯,χ_k ∈N. If ϱ(χ_1,⋯,χ_k) = ϱ(k/2), then each χ_i ∈𝔛 and χ_1⋯χ_k = 1.
Hölder's inequality implies
ϱ(χ_1,⋯,χ_k) ≤(1/|G|∑_g∈ G |χ_1^(g)|^k )^1/k⋯(1/|G|∑_g∈ G |χ_k^(g)|^k )^1/k = ϱ(χ_1,k/2)^1/k⋯ϱ(χ_k,k/2)^1/k
which is ≤ϱ(k/2). Equality holds if and only if ϱ(χ_i,k/2) = ϱ(k/2) for each i, that is, each χ_i ∈𝔛.
For the second assertion, from Lemma <ref>, we know that χ_i is constant on each C∩ H if it's non-empty (otherwise 0), and we have
χ_i^(C) = [G:H]|C∩ H|/|C|χ_i(H∩ C),
hence
ϱ(χ_1,⋯,χ_k) = 1/|G|∑_C⊂ G, C∩ H≠∅( [G:H]|C∩ H|/|C|)^k χ_1(C∩ H) ⋯χ_k(C∩ H),
so if this equals ϱ(k/2), χ_1(C∩ H) ⋯χ_k(C∩ H) must be 1 for all conjugacy classes C of G that intersect H, so χ_1⋯χ_k = 1.
Let S be the set of primes in ℚ which lie below primes that are ramified in L/K. Then for any integer k≥ 1, there exists a constant C_k,S > 0, independent of σ, such that
∑_n≥ 1, (n,S)=1 a(σ,n)^k/n^s ∼ C_k,S (s-1)^-ϱ(k/2), s→ 1,
and
∑_n≤ x, (n,S)=1 a(σ,n)^k ∼C_k,S/ϱ(k/2)! x (log x)^ϱ(k/2)-1.
Since (n,S)=1, each a(σ,n) is non-negative, so we can remove the absolute value in Theorem <ref>. On the other hand, from Lemma <ref>, we know that f(s)=∑_n≥ 1, (n,S)=1 a(σ,n)^k n^-s has meromorphic extension to (s)>1/2, then above theorem implies that it has pole of order exactly ϱ(k/2) there. The constant C_k,S is the leading coefficient of f(s), it remains to prove it is independent of σ. We have
f(s) = 1/|N|^k∑_χ_1,⋯,χ_k ∈N(χ_1 ⋯χ_k)(σ)∑_n≥ 1,(n,S)=1a(χ_1,n)⋯ a(χ_k,n) n^-s.
Here σ only appears through (χ_1 ⋯χ_k)(σ). The inner Dirichlet series has a pole of order ϱ(χ_1,⋯,χ_k); if it contributes to the leading coefficient, the above lemma forces χ_1⋯χ_k = 1, so (χ_1 ⋯χ_k)(σ) = 1, and thus the leading coefficient is independent of σ.
When β∈ℤ/2, we see that lim inf and lim sup in Theorem <ref> are equal, it is very natural to expect this also holds for β>0:
For real β>0, the lim inf and lim sup in Theorem <ref> are equal, and they are independent of σ∈ N.
§.§ Ideal class counting function
In this section, we apply the results proved above to ideal classes in number field.
Let K be a number field, L its Hilbert class field, and M the Galois closure of L/ℚ. Using all previous notations, we see that N is isomorphic to the ideal class group: Gal(L/K) = N corresponds to the ideal classes of K under the inverse of the Artin map. Moreover,
∑_n≥ 1a(χ,n)/n^s = L(χ,s) = ∑_I ⊂𝒪_Kχ(I)/N(I)^s
is the L-series of an ideal class group character and
a(σ,n) = 1/|N|∑_χ∈Nχ̅(σ) a(χ,n)
counts the number of integral ideals of norm n in the class σ.
Let K be a number field, let 𝔄 denote an ideal class of K, a(𝔄,n) denote the number of integral ideals in this class with norm n. For k≥ 1 positive integer, there exists constants C_k>0, independent of 𝔄, such that as s→ 1,
∑_n≥ 1 a(𝔄,n)^k/n^s ∼ C_k (s-1)^-ϱ(k/2), s→ 1,
and
∑_n≤ x a(𝔄,n)^k ∼C_k/ϱ(k/2)! x (log x)^ϱ(k/2)-1,
with the positive integer ϱ(k/2) computed using notations in previous section. Moreover, for general real β>0,
x (log x)^ϱ(β)-1≪∑_n≤ x a(𝔄,n)^2β≪ x (log x)^ϱ(β)-1.
This is a direct translation of results in previous section. Note that we can take S = ∅ since L/K is unramified.
When k=1, one easily computes ϱ(k/2) = 1, so
∑_n≤ x a(𝔄,n) ∼ C_1 x.
This is actually a special case of the following more general result (<cit.>):
∑_n≤ x a(𝔄,n) = κ_K x + O(x^1-1/[K:ℚ]),
where κ_K is the residue of the Dedekind zeta function of K at s=1 divided by the class number h_K of K.
The quantities ϱ(χ, β), ϱ(β) are in general difficult to compute as [M:ℚ] could be very large compared to [K:ℚ]. There is a simpler formula when K/ℚ is assumed to be Galois.
Let K/ℚ be Galois, χ a character of its ideal class group 𝒞,
ϱ(χ,β) = 1/h_K [K:ℚ]∑_𝔄∈𝒞 |l(χ,𝔄)|^2β,
where l(χ,𝔄) = ∑_τ∈(K/ℚ)χ(𝔄^τ). Moreover, for β>0,
ϱ(β) = max_χϱ(χ,β) = [K:ℚ]^2β-1.
When K/ℚ is Galois, L/ℚ is also Galois[If K/ℚ is Galois and L Hilbert class field (= maximal unramified abelian extension) of K, then L/ℚ is also Galois. This is a nice exercise in algebraic number theory, for completeness we quickly recall the proof: we need to show for each σ∈(ℚ/ℚ), σ(L) = L; since L/K is abelian unramified, so is σ(L)/σ(K) = σ(L)/K (we used here the assumption K/ℚ is Galois), but L is maximal under this property, so σ(L)⊂ L, repeating the argument with σ replaced by σ^-1 shows the other inclusion, so σ(L)=L.], so we can take M=L and N=H, thus there is no difference between χ and χ. Also H is a normal subgroup of G, its induced character then takes the following special form:
χ^(g) = 0 if g∉ H, and χ^(g) = ∑_s∈ G/H χ(s^-1gs) if g∈ H,
and
ϱ(χ,β) = 1/|G|∑_g∈ G |χ^(g)|^2β = 1/h_K [K:ℚ]∑_h∈ H |χ^(h)|^2β.
If h∈ H corresponds to an ideal class 𝔄, then as s varies over G/H, the elements s^-1 h s correspond exactly to the ideal classes 𝔄^τ with τ∈ Gal(K/ℚ); this gives the term l(χ,𝔄). For the formula for ϱ(β), we know the maximum is attained when χ is trivial, and l(1,𝔄) = [K:ℚ], so
ϱ(β) = 1/h_K [K:ℚ]∑_h∈ H [K:ℚ]^2 β = [K:ℚ]^2 β - 1.
If K/ℚ is quadratic, Gal(K/ℚ) = {1,σ}, let m be order of χ, 𝔄_j be the set of ideal classes 𝔄 such that χ(𝔄) = e^2π i j/m. Then |𝔄_j| = h/m and
l(χ,𝔄) = χ(𝔄) + χ(𝔄^σ) = χ(𝔄) + χ(𝔄^-1) = 2cos(2π j/m), 𝔄∈𝔄_j.
So
ϱ(χ,β) = 1/2h∑_j=0^m-1∑_𝔄∈𝔄_j |l(χ,𝔄)|^2β = 1/2h∑_j=0^m-1h/m|2cos(2π j/m)|^2β = 1/2m∑_j=0^m-1|2cos(2π j/m)|^2β,
which recovers a formula in Blomer <cit.>.
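This closed form is easy to evaluate numerically; a quick check (ours) in Python:

import math

def rho_quadratic(m: int, beta: float) -> float:
    # rho(chi, beta) = (1/2m) * sum_{j=0}^{m-1} |2 cos(2 pi j / m)|^{2 beta}
    return sum(abs(2 * math.cos(2 * math.pi * j / m)) ** (2 * beta) for j in range(m)) / (2 * m)

# m = 1 (trivial character) gives 2^{2 beta - 1}, the Galois bound [K:Q]^{2 beta - 1};
# m = 2 (real character) gives the same value; larger m gives strictly smaller exponents.
print(rho_quadratic(1, 1.0), rho_quadratic(2, 1.0), rho_quadratic(3, 1.0))  # 2.0 2.0 1.0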
Let K = ℚ[x]/(x^3-21x-28); it is a cubic Galois field. Let σ be a generator of its Galois group. K has class group 𝒞≅ℤ/3ℤ, and 2 splits in K; let 𝔭 be a prime lying above 2, which generates the class group. Let χ∈𝒞 be an ideal class group character such that χ(𝔭) = e^2π i /3. Since the Galois group acts transitively on the prime ideals lying over 2, (2) = 𝔭 𝔭^σ 𝔭^σ^2. Writing 𝔭^σ∼𝔭^i, we get 1+i+i^2 ≡ 0 (mod 3), so i≡ 1 (mod 3); thus 𝔭^σ and 𝔭 lie in the same ideal class. Therefore
l(χ,1) = 3, l(χ,𝔭) = 3χ(𝔭) = 3e^2π i /3, l(χ,𝔭^2) = 3e^-2π i /3.
So for every χ∈𝒞,
ϱ(χ,β) = 1/9 (3^2β + |3e^2π i /3|^2β + |3e^-2π i /3|^2β) = 3^2β -1.
in this case, every character attains the maximum ϱ(β).
Let K = ℚ[x]/(x^3-x^2-54x+169); it is the unique cubic subfield of ℚ(ζ_163). Let σ be a generator of its Galois group. K has class group 𝒞≅ℤ/2ℤ×ℤ/2ℤ, and 5 splits in K; let 𝔭_1, 𝔭_2 be two primes lying above 5, whose classes generate the class group. Let A∈GL_2(ℤ/2ℤ)≅ S_3 be such that, on ideal classes,
([ 𝔭_1; 𝔭_2 ])^σ = A([ 𝔭_1; 𝔭_2 ]).
Because σ has order 3, A must also have order 3, and we can assume it is A = ([ 0 1; 1 1 ]); then 𝔭_1^σ∼𝔭_2 and 𝔭_2^σ∼𝔭_1 𝔭_2. Therefore Gal(K/ℚ) acts transitively on the non-principal ideal classes. Hence for 𝔄≠ 1 in 𝒞, we have
l(χ,𝔄) = ∑_𝔅∈𝒞 χ(𝔅) - 1 = 3 if χ = 1, and -1 if χ≠ 1.
Conclusion:
ϱ(χ,β) = 3^2β-1 if χ = 1, and ϱ(χ,β) = 1/12(3^2β + 1^2β + 1^2β + 1^2β) = 1/4(3^2β-1+1) if χ≠ 1;
in this case, only the trivial character attains maximum ϱ(β).
When K/ℚ is Galois, we have ϱ(β) = [K:ℚ]^2β-1, this is not the case when K/ℚ is non-Galois.
Let K/ℚ be a non-Galois cubic field of class number 1 (so we can take L=K). Let a(n) be the number of integral ideals of norm n in K, we claim
∑_n≤ x a(n)^2β≍ x(log x)^ϱ(β)-1 with ϱ(β) = (1+3^2β-1)/2.
Indeed, G = S_3 has three conjugacy classes, and we can assume H= {(12),1}, [G:H] = 3. Using the formula
ϱ(β) = 1/|G|∑_C⊂ G |C| ( [G:H] |H∩ C|/|C|)^2β,
we obtain
ϱ(β) = 1/6[2(3× 0/2)^2β + 3(3× 1/3)^2β + 1(3× 1/1)^2β],
as claimed.
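Since G and H are small here, the formula for ϱ(β) can also be checked by brute force; the following short script (ours) recomputes the exponent for G = S_3 and H of order 2 and compares it with (1+3^{2β-1})/2.

from itertools import permutations

def conjugacy_classes(G):
    G = [tuple(g) for g in G]
    compose = lambda a, b: tuple(a[b[i]] for i in range(len(a)))
    inverse = lambda a: tuple(sorted(range(len(a)), key=lambda i: a[i]))
    classes, seen = [], set()
    for g in G:
        if g in seen:
            continue
        cls = {compose(compose(s, g), inverse(s)) for s in G}   # conjugates s g s^{-1}
        classes.append(cls)
        seen |= cls
    return classes

def rho(G, H, beta):
    H = {tuple(h) for h in H}
    index = len(G) // len(H)
    return sum(len(C) * (index * len(C & H) / len(C)) ** (2 * beta)
               for C in conjugacy_classes(G)) / len(G)

G = list(permutations(range(3)))       # S_3
H = [(0, 1, 2), (1, 0, 2)]             # {id, (0 1)}
beta = 1.0
print(rho(G, H, beta), (1 + 3 ** (2 * beta - 1)) / 2)   # both equal 2.0 for beta = 1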
Next we present a computational example where K/ℚ is non-Galois, to illustrate that in general finding ϱ(β) is not trivial at all.
Let K = ℚ[X]/(X^4+5X^2-X+1); its class number is 3, and the Galois group of its Galois closure is S_4. We find the formula for ϱ(β) with the help of PARI, MAGMA and GAP.[it seems MAGMA alone suffices for all computations below.]
The Hilbert class field L of K has degree 12 over ℚ; we can compute an absolute defining polynomial of L in PARI, which outputs f(x):= x^12-x^11+x^10-x^9-x^7+x^6-4 x^5-3 x^4+3 x^3+7 x^2+4 x+1. Using a routine that finds the degree 4 subfields of L, including K, we obtain a polynomial g ∈ℤ[x] such that for y = g(x) and L = ℚ[x]/(f(x)), we have K = ℚ(y). Explicitly, g(x) = -3 x^11+4 x^10-4 x^9+4 x^8-x^7+3 x^6-4 x^5+13 x^4+5 x^3-12 x^2-17 x-4.
Next we compute the Galois closure M of L/ℚ; this is a very large field. We switch to MAGMA, since PARI cannot handle the Galois-theoretic computation for f(x) due to its large degree. The computation (the choice made here will be explained below) gives the Galois group of f in terms of permutations of its 12 roots: G = Gal(M/ℚ) ⊂ S_12 is generated by the permutations
{(2, 12)(4, 7)(8, 10),
(2, 3, 4)(5, 10, 7)(8, 11, 12),
(3, 5, 11)(4, 12, 10), (2, 7, 8)(4, 12, 10)
(1, 5)(2, 4)(3, 9)(6, 11)(7, 10)(8, 12),
(1, 7)(2, 9)(3, 4)(5, 10)(6, 8)(11, 12),
(1, 6, 9)(2, 8, 7)(3, 11, 5)(4, 10, 12)}.
which has order 648 = [M:ℚ].
WLOG, we may identify (M/L) as the stabilizer of point 1. We still need to identify what is H = (M/K) under the above representation of G. We can work in any field extension of ℚ which contains all roots of f, for example ℂ or ℚ_p, we choose to work with later. ℚ_p contains all roots of f if p is unramified and f ≡ 0 splits modulo p. Such p has density 1/[M:ℚ] = 1/648 and p=1913 satisfies the condition.
Such an explicit correspondence can be retrieved in MAGMA. It tells us that the index i corresponds to x_i ∈ℚ_1913;
x_1 = 181 + 902 p + 24 p^2 + 665 p^3 + 796 p^4 + O(p^5),
x_2 = 272 + 1462 p + 614 p^2 + 1182 p^3 + 1912 p^4 + O(p^5),
⋯
x_6 = 651 + 750 p + 1492 p^2 + 743 p^3 + 204 p^4 + O(p^5),
⋯
x_9 = 1054 + 367 p + 598 p^2 + 1629 p^3 + 521 p^4 + O(p^5),
⋯
x_12 = 1759 + 843 p + 1836 p^2 + 900 p^3 + 768 p^4+ O(p^5).
Recall the polynomial g we computed above, L = ℚ(x_1) and K = ℚ(y) with y=g(x_1). For σ∈ G = (M/ℚ),
σ∈ H σ(y) = y g(σ(x_1)) = g(x_1).
Now one does a very explicit computation to see that g(x_i) = g(x_1) + O(p^5) precisely for i∈{1,6,9}. Therefore, for σ∈ G, the elements of H should be (if we ignore the error O(p^5)) characterized by σ(1) ∈{1,6,9}.
What remains is a pure group-theoretical computation, so we use GAP here.
One checks in GAP that H is indeed a group and verifies that it has the expected index (=4) in G. This proves H = Gal(M/K) is indeed the group we want to find. We have all the ingredients to compute ϱ(β) from the formula
ϱ(β) = 1/|G|∑_C⊂ G |C| ( [G:H] |H∩ C|/|C|)^2β.
In GAP, we list all conjugacy classes of G; there are 17. Computing |C| and [G:H] |H∩ C|/|C| for each of them and combining, we finally obtain our long-sought formula
ϱ(β) = 1/24(8+6× 2^2β + 4^2β)
Note that this is in contrast to the case where K/ℚ is Galois, in which ϱ(β) is simply [K:ℚ]^2β-1.
When K/ℚ is non-Galois, it would be an interesting topic of further investigation to find an easier way to compute ϱ(β).
§ CUSPIDAL PARTIAL ZETA SERIES
§.§ Criterion of being cuspidal
We consider the following situation: L/K is an abelian extension of number fields, with both K and L being Galois over ℚ. Let
G = Gal(L/ℚ), N = Gal(L/K), Q = Gal(K/ℚ).
That is, we have an exact sequence of groups:
1⟶ N⟶ G⟶ Q⟶ 1
It determines an action of Q on N, hence also on the character group of N. Explicitly: for τ∈ Q, choose a lift of τ to G (denoted again by τ) and define τχ by
(τχ)(h) = χ(τ^-1 h τ), h∈ N.
Evidently this is independent of the chosen lift. For a character χ on N, denote by χ^ the induced character of χ to G; explicitly:
χ^(g) = 0 if g∉ N, and χ^(g) = ∑_s∈ G/N χ(s^-1gs) = ∑_τ∈ Q (τχ)(g) if g∈ N.
For χ_1, χ_2 ∈N, ⟨χ_1^, χ_2^⟩_G equals the number of τ∈ Q such that χ_1 = τχ_2.
We have
⟨χ_1^, χ_2^⟩_G = 1/|G|∑_h∈ N χ_1^(h) χ̅_2^(h)
= 1/|G|∑_h∈ N∑_τ_1,τ_2∈ Q (τ_1 χ_1)(h) (τ_2 χ̅_2)(h)
= |N|/|G|#{(τ_1,τ_2)∈ Q^2 | τ_1 χ_1 = τ_2 χ_2} = |N||Q|/|G| |{τ∈ Q | χ_1 = τχ_2}|
,
then |G| = |N||Q| implies the statement.
Recall our notations on the Artin L-functions L(χ,s), L(χ^,s) and the number a(χ,n) with L(χ,s) = ∑_n≥ 1 a(χ,n)/n^s from first section.
For χ∈N, the following are equivalent:
* χ^ is an irreducible representation of G
* ∑_n≤ x |a(χ,n)|^2 ≪ x
* For each τ≠ 1 in Q, τχ≠χ.
(1) ⟺ (3) follows from the above lemma. Recall that in Proposition <ref>, we have shown
∑_n≤ x |a(χ,n)|^2β≪ x (log x)^ϱ(χ,β)-1,
and since ϱ(χ,1) is exactly ⟨χ^, χ^⟩_G, we have (1) ⟺ (2).
Consider the Artin L-function L(χ,s) coming from an automorphic representation of GL(1)/K, by automorphic induction, there should be an automorphic representation π on GL(n)/ℚ, n = [K:ℚ] realizing L(χ,s). Using the growth rate ∑_n≤ x |a(n,χ)|^2 ≪ x above, we see that π should be cuspidal if and only if χ^ is irreducible. Proving this rigorously would be very hard, and is known only when K/ℚ is cyclic <cit.>[the criterion there is: χ∈N is non-cuspidal if and only if it is invariant under some element of the Galois group, this is our condition (3) of the Proposition]. We simply take above three equivalent criteria as a working definition of cuspidal L-functions for L(χ,s).
If L is the Hilbert class field of K, then we can identify N with the ideal class group of K and characters of N with ideal class group characters; the action of τ∈ Q = Gal(K/ℚ) on characters is then the same as the action of τ on ideal classes of K: (τχ)(𝔄) = χ(𝔄^τ) for an ideal class 𝔄 in K. This is because the isomorphism between the class group and Gal(L/K) is given by the Frobenius map, and τ^-1 Frob(𝔭) τ = Frob(τ𝔭).
When K is quadratic, the non-trivial element of Q acts as inversion on the ideal class group, because 1+τ annihilates the class group. Therefore L(χ,s) should come from a cusp form on GL(2)/ℚ if and only if τχ = χ̅≠χ, i.e. if and only if χ is not real.
Let K=ℚ(√(-23)); it has class number 3. For a non-real character χ of its ideal class group, the above example says L(χ,s) should come from a cuspidal automorphic form on GL(2)/ℚ. Indeed, L(χ,s) = L(f,s), where f is the unique weight 1 normalized newform of level 23; explicitly f(z) = η(z)η(23z) with η(z) the Dedekind eta function. This particular case can be shown without deep facts; see Zagier <cit.> for the computations. The above is an example of a celebrated result of Deligne and Serre <cit.>: in terms of L-functions, odd 2-dimensional irreducible Galois representations and weight 1 newforms are equivalent.
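A quick numerical check (ours) of the classical identity underlying this example: writing A_0(x,y) = x^2+xy+6y^2 and A_1(x,y) = 2x^2+xy+3y^2 for the two reduced forms of discriminant -23, the coefficients of f(z)=η(z)η(23z) satisfy a_f(n) = (r_{A_0}(n) - r_{A_1}(n))/2 (van der Blij's identity). The script below verifies this for small n.

N = 30

def eta_product(N):
    # q-expansion of q * prod_{n>=1} (1 - q^n)(1 - q^{23 n}) truncated at q^N
    coeffs = [0] * (N + 1)
    coeffs[1] = 1                          # the leading factor q
    for step in (1, 23):
        for n in range(step, N + 1, step):
            new = coeffs[:]
            for i in range(N + 1 - n):
                new[i + n] -= coeffs[i]    # multiply by (1 - q^n)
            coeffs = new
    return coeffs

def reps(a, b, c, n):
    # number of (x, y) in Z^2 with a x^2 + b x y + c y^2 = n (forms are positive definite)
    B = int(2 * (n ** 0.5)) + 2
    return sum(1 for x in range(-B, B + 1) for y in range(-B, B + 1)
               if a * x * x + b * x * y + c * y * y == n)

f = eta_product(N)
for n in range(1, N + 1):
    assert f[n] == (reps(1, 1, 6, n) - reps(2, 1, 3, n)) // 2
print("identity verified up to n =", N)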
For τ∈ Q, χ∈N, we have L(τχ,s) = L(χ,s).
This holds for any representation χ : N→ GL(V), not necessarily one-dimensional. The proof is purely formal. By definition,
L(χ,s) = ∏_𝔭 det(1-χ(Frob_𝔓/𝔭) N(𝔭)^-s | V^I_𝔓/𝔭)^-1,
where the product runs over the primes 𝔭 of K, 𝔓 denotes a prime of L over 𝔭, and V^I_𝔓/𝔭 is the subspace of V fixed under χ by the inertia group I_𝔓/𝔭. Note that
(τχ)(Frob_𝔓/𝔭) = χ(Frob_τ𝔓/τ𝔭),
and the space fixed under the action of τχ is V^I_τ𝔓/τ𝔭, therefore
L(τχ,s) = ∏_𝔭 det(1-χ(Frob_τ𝔓/τ𝔭) N(𝔭)^-s | V^I_τ𝔓/τ𝔭)^-1.
As K/ℚ is Galois, τ permutes the prime ideals of K and the norm is Galois invariant, so the two products are equal.
Let
N_0 = {χ∈N|χ^ is irreducible},
then Q acts on N_0 without fixed points (by the third point of Proposition <ref>). Let χ_1,⋯,χ_k be representatives from the different orbits; we have k = |N_0|/|Q|. For i≠ j, Proposition <ref> implies χ_i^ and χ_j^ are two non-isomorphic irreducible representations of G. Moreover, by the above lemma, the Artin L-function L(χ_i,s) depends only on the orbit, not on the representatives chosen.
Let χ_1,⋯,χ_k be as above; then L(χ_1,s),⋯,L(χ_k,s) are linearly independent over ℂ.
Suppose ∑_i c_i L(χ_i,s) = 0. For p unramified in L, comparing the p^-s coefficients gives ∑_i c_i χ_i^(Frob(p)) = 0. Since every conjugacy class of G arises as Frob(p) as p varies, ∑_i c_i χ^_i = 0, and the χ^_i being mutually distinct irreducible representations implies c_i = 0.
Let K be the cubic field in Example <ref>, then we computed that ϱ(χ,β = 1) = 3 for all χ∈N, so none of L(χ,s) will be cuspidal. In this case, |N_0| = 0.
For K the cubic field in Example <ref>, here N = ℤ/2ℤ×ℤ/2ℤ and
ϱ(χ,β=1) = 3 if χ = 1, and ϱ(χ,β=1) = 1 if χ≠ 1,
so for χ≠ 1, χ^ is irreducible. In this case |N_0| = 3 and there is only one orbit. This L-function, namely
1-1/5^s+1/8^s-1/13^s-1/17^s-1/23^s+2/25^s+1/27^s + ⋯,
should come from a cuspidal form on GL(3)/ℚ.
We make two remarks, they are however not used in sequel. [However, Example <ref> below is a special case of these two general facts.]Firstly, if 1→ N→ G→ Q→ 1 splits (i.e. G is a semidirect product of N,Q), the representations χ_1^,⋯, χ_k^ constructed above are exactly those irreducible representations of G with highest dimension (=|Q|). This follows from a general result of linear representation of semidirect product (Serre, <cit.>). When L is Hilbert class field of K and K/ℚ cyclic, global class field theory says above sequence always splits. So in this case, G as an abstract group alone determines ϱ_cusp(σ,β) (defined in next section) that appears in asymptotic of moments.
§.§ Cuspidal version of a(σ,n)
Recall the partial zeta function associated with σ∈ N:
ζ(σ,s) = 1/|N|∑_χ∈Nχ̅(σ) L(χ,s)= ∑_n≥ 1a(σ,n)/n^s.
We remove those terms which are not cuspidal L-functions, obtaining
ζ_cusp(σ,s) = 1/|N|∑_χ∈N_0χ̅(σ) L(χ,s) := ∑_n≥ 1a_cusp(σ,n)/n^s.
Using the fact that L(χ,s) is invariant under the action of Q and that χ^ = ∑_τ∈ Qτχ, one easily sees that
ζ_cusp(σ,s) = 1/|N|∑_i=1^k χ_i^(σ) L(χ_i,s),
here χ_1,⋯,χ_k are representatives of the Q-orbits on N_0. This form turns out to be more amenable than the original definition. We wish to investigate the moment ∑_n≤ x |a_cusp(σ,n)|^2β. Before that, we caution that there might exist σ∈ N such that ζ_cusp(σ,s) is identically zero. By the linear independence of the L(χ_i,s) (Proposition <ref>), this occurs if and only if χ_i^(σ)=0 for all i.
Assume K/ℚ is quadratic. If the class group N is 2-torsion, then N_0 is empty, so ζ_cusp(σ,s) = 0 for all σ∈ N. Quite unexpectedly, there are also examples with non-2-torsion N and σ∈ N such that ζ_cusp(σ,s) = 0. To be precise, note that χ_i^ = χ_i + χ̅_i = 2 Re(χ_i), and χ_i ∈N_0 if and only if its order is not 1 or 2, so if σ is killed by all the χ_i^, then it must have order 4. From this it follows that ζ_cusp(σ,s) = 0 if and only if N ≅ℤ/4ℤ× (ℤ/2ℤ)^n and σ is an element of order 4 in N.
Consequently, unlike the situation for ∑_n≤ x |a(σ,n)|^2β, whose leading asymptotic term is independent of σ∈ N, the leading term of ∑_n≤ x |a_cusp(σ,n)|^2β may depend on σ∈ N. Let
ϱ_cusp(σ,β) = max{ϱ(χ_i,β) : 1≤ i≤ k, χ_i^(σ)≠ 0},
here we used the notation
ϱ(χ_i,β) = 1/|G|∑_g∈ G |χ_i^(g)|^2β,
which is the exponent for ∑_n≥ 1 |a(χ,n)|^2β/n^s at s=1.
Also recall the notation
ϱ(χ_1,⋯,χ_i) = 1/|G|∑_g∈ Gχ_1^(g)⋯χ_i^(g),
which is always a non-negative integer and is the exponent for ∑_n≥ 1 a(χ_1,n)⋯ a(χ_i,n)/n^s at s=1. Note that in both cases the exponent at s=1 remains unchanged if we only sum over squarefree integers n; this follows immediately from the proofs of Proposition <ref>.
Let β be a positive integer and σ∈ N. If ζ_cusp(σ,s) is not identically zero, then
∑_n≥ 1|a_cusp(σ,n)|^2β/n^s
has a pole of exact order ϱ_cusp(σ,β) at s=1.
Hence
∑_n≤ x|a_cusp(σ,n)|^2β∼ C x (log x)^ϱ_cusp(σ,β) -1
for some positive constant C.
The proof for β = 1 is especially elegant: in this case, we need to show that ∑_n≥ 1|a_cusp(σ,n)|^2/n^s has a pole of exact order 1. We write
∑_n≥ 1|a_cusp(σ,n)|^2/n^s = 1/|N|^2∑_i,jχ_i^(σ)χ_j^(σ) ∑_n≥ 1a(χ_i,n) a(χ_j,n)/n^s.
For indices i≠ j, the order of the pole of the inner Dirichlet series is ϱ(χ_i,χ_j) = ⟨χ_i^, χ_j^⟩_G, which is 0 since the χ_i^ are pairwise non-isomorphic irreducible representations. So the order of the pole of ∑_n≥ 1|a_cusp(σ,n)|^2/n^s is the same as that of
1/|N|^2∑_i |χ_i^(σ)|^2 ∑_n≥ 1|a(χ_i,n)|^2/n^s,
where each inner sum has pole order ⟨χ_i^, χ_i^⟩_G = 1. By assumption the projection onto the cuspidal space is non-zero, so at least one |χ_i^(σ)| > 0; hence the leading term is positive, completing the proof when β = 1.
For general positive integers β, we need some notation and a few easy observations. Fix σ∈ N; let 𝔛_σ,β be the set of those χ_i for which the maximum defining ϱ_cusp(σ,β) is attained, and let N_0,σ be the set of those χ_i ∈N_0 for which χ_i^(σ)≠ 0.
Let {ρ_1,⋯,ρ_r}, {ψ_1,⋯,ψ_r}⊂N_0 such that
ϱ(ρ_1,⋯,ρ_r,ψ_1,⋯,ψ_r) = ϱ_cusp(σ,r).
Then
* each ρ_i,ψ_i ∈𝔛_σ,r,
* |ρ_i^(g)| = |ψ_j^(g)| for all g∈ G,
* ρ^_1 ⋯ρ^_r = ψ^_1 ⋯ψ^_r.
Hölder's inequality implies
ϱ(ρ_1,⋯,ρ_r,ψ_1,⋯,ψ_r) = 1/|G|∑_g∈ Gρ^_1(g) ⋯ρ^_r(g)ψ^_1(g)⋯ψ^_r(g)
≤1/|G|∑_g∈ G |ρ^_1(g)| ⋯ |ρ^_r(g)| |ψ^_1(g)|⋯ |ψ_r^(g)|
≤(1/|G|∑_g∈ G |ρ_1^(g)|^2r)^1/2r⋯(1/|G|∑_g∈ G |ψ_r^(g)|^2r)^1/2r = ϱ(ρ_1,r)^1/2r⋯ϱ(ψ_r,r)^1/2r.
In order for this to equal
ϱ_cusp(σ,β) = max{ϱ(χ_i,β) : 1≤ i≤ k, χ_i^(σ)≠ 0},
each term inside the parentheses must equal this common value, proving (1). In the Hölder step, equality holds if and only if there exists λ>0 such that |ϕ_1^(g)| = λ|ϕ_2^(g)| for all g∈ G, where ϕ_1,ϕ_2 ∈{ρ_1,⋯,ρ_r,ψ_1,⋯,ψ_r}; because ϕ_1,ϕ_2 are induced from 1-dimensional characters, ϕ_1^(1) = ϕ_2^(1) = |Q|, thus λ = 1, giving (2). Using polar coordinates, write
(ρ^_1 ⋯ρ^_r)(g) = R(g) e^i θ_1(g), (ψ^_1 ⋯ψ^_r)(g) = R(g) e^i θ_2(g),
then
1/|G|∑_g∈ G R(g)^2 e^iθ_1(g) - iθ_2(g) = ϱ(ρ_1,⋯,ρ_r,ψ_1,⋯,ψ_r) = ϱ_cusp(σ,r) = 1/|G|∑_g∈ G R(g)^2,
this forces e^iθ_1(g) - iθ_2(g) = 1, which is (3).
Because a_cusp(σ,n) is a linear combination of the a(χ_i,n), it is easy to see that the order of the pole of both Dirichlet series is at most ϱ_cusp(σ,β). Therefore it suffices to prove that the order of the pole of the squarefree sum
∑_n≥ 1, n squarefree|a_cusp(σ,n)|^2β/n^s
is exactly this number. Let
X = {(ρ_1,⋯,ρ_β, ψ_1,⋯,ψ_β) ∈ (N_0,σ)^2β|ϱ(ρ_1,⋯,ρ_β,ψ_1,⋯,ψ_β) = ϱ_cusp(σ,β)}
be the set of 2β-tuples that attain the maximum order of pole, this set is non-empty since we assumed a_cusp(σ,n) does not vanish identically. Furthermore,
∑_n≥ 1|a_cusp(σ,n)|^2β/n^s = 1/|N|^2β∑_(ρ_i,ψ_i)∈ (N_0,σ)^2β (ρ^_1 ⋯ρ^_β)(σ)(ψ^_1 ⋯ψ^_β)(σ)
×∑_n≥ 1∏_1≤ i≤β a(ρ_i,n) ∏_1≤ i≤β a(ψ_i,n)/n^s.
We split according to whether (ρ_i,ψ_i) lies in X or not. The tuples outside X contribute only poles of lower order, so we can safely ignore them. For those in X, the lemma gives ρ^_1 ⋯ρ^_β = ψ^_1 ⋯ψ^_β, so modulo terms with lower-order poles we have
∑_n≥ 1|a_cusp(σ,n)|^2β/n^s∼1/|N|^2β∑_(ρ_i,ψ_i)∈ X |(ρ^_1 ⋯ρ^_β)(σ)|^2 ∑_n≥ 1∏_1≤ i≤β a(ρ_i,n) ∏_1≤ i≤β a(ψ_i,n)/n^s.
The same is true if we restrict the sum to square-free n only, in this case, the a(χ_i,n) becomes multiplicative:
∏_1≤ i≤β a(ρ_i,n) ∏_1≤ i≤β a(ψ_i,n) = a(ρ^_1⊗⋯⊗ρ^_β,n) a(ψ^_1⊗⋯⊗ψ^_β,n)= |a(ρ^_1⊗⋯⊗ρ^_β,n)|^2,
which are the coefficients associated with the tensor product representation. Therefore all terms in the above displayed equation have a pole of order ϱ_cusp(σ,β) and positive leading coefficients, hence so does ∑_n≥ 1|a_cusp(σ,n)|^2β/n^s.
The above proof can be directly adapted to prove Theorem <ref> when β is a positive integer.
The above proof only works for integral β. For general real β>0, we require an additional assumption on the given σ∈ N and β>0:
(**) for every χ_i ∈𝔛_σ,β and all g∈ G, χ_i^(g) ∈ℝ·(root of unity).
Here recall that 𝔛_σ,β is the set of characters for which the maximum ϱ_cusp(σ,β) = max{ϱ(χ_i,β) : 1≤ i≤ k, χ_i^(σ)≠ 0} is attained.
Let β > 0 and σ∈ N. If ζ_cusp(σ,s) is not identically zero and condition (<ref>) is satisfied, then, writing ϱ = ϱ_cusp(σ,β),
0 < lim inf_s→ 1^- (s-1)^ϱ∑_n≥ 1|a_cusp(σ,n)|^2β/n^s≤lim sup_s→ 1^- (s-1)^ϱ∑_n≥ 1|a_cusp(σ,n)|^2β/n^s < ∞ .
The proof goes almost exactly the same as that of Theorem <ref>.
The fact that lim sup < ∞ is evident. Proving lim inf≠ 0 is more involved. We start with
ζ_cusp(σ,s) = 1/|N|∑_i=1^k χ_i^(σ) L(χ_i,s).
Abbreviate a_C,χ := χ^(C), where C is a conjugacy class of G. By our assumption (<ref>), we can choose a positive integer l such that a_C,χ^l ≥ 0 for all C and all χ∈𝔛_σ,β; let μ be a primitive l-th root of unity. For some ρ_i(C)∈ℂ, i=0,⋯,l-1, to be fixed later, and for χ∈{χ_1,⋯,χ_k}, define
f(χ,C) = ∑_i=0^l-1ρ_i(C) ∏_ p ∈ C (1+ μ^i a_C,χ/p^s).
Also define a'(χ,n) and a'(σ,n) via
∑_n≥ 1a'(χ,n)/n^s := ∏_C⊂ G f(χ,C),
a'(σ,n) = 1/|N|∑_i=1^k χ_i^(σ) a'(χ_i,n).
Note that the same prime p never occurs in two factors f(χ,C_1) and f(χ,C_2) with C_1≠ C_2; as usual, we only consider primes that are unramified in L.
We claim that we can choose ρ_i(C) ≠ 0 independent of χ such that
* For all j and C, ∑_i=0^l-1ρ_i(C) μ^ij equals 0 or 1.
* χ_i^(σ)a'(χ_i,n) ≥ 0 for all n and i.
Assuming the above claim, let us prove that lim inf > 0. Since the p-th coefficient of L(χ_i,s) is χ_i^(Frob_p), condition (1) implies that either a'(χ,n) = a(χ,n) for all χ or a'(χ,n) = 0 for all χ; therefore |a_cusp(σ,n)| ≥ |a'(σ,n)|, so it suffices to prove the assertion on the lim inf for ∑_n≥ 1 |a'(σ,n)|^2β n^-s. Observe
|a'(σ,n)|^2β ≫|∑_χ∈𝔛_σ,βχ_i^(σ) a'(χ,n) |^2β - ∑_χ∉𝔛_σ,β |a'(χ,n)|^2β
≫∑_χ∈𝔛_σ,β |a'(χ,n)|^2β - ∑_χ∉𝔛_σ,β |a'(χ,n)|^2β by condition (2).
Condition (1) also implies (via the same reasoning as in Theorem <ref>)
∑_n≥ 1|a'(χ,n)|^2β/n^s = ∏_C⊂ G( ∑_i=0^l-1ρ_i(C) ∏_ p ∈ C (1+ μ^i |a_C,χ|^2β/p^s) ).
Each factor ∏_p ∈ C (1+ μ^i |a_C,χ|^2β/p^s) is ≍ (s-1)^-μ^i |a_C,χ|^2β |C|/|G| as s→ 1, so as i varies between 0 and l-1 the dominant term is ≍ (s-1)^-|a_C,χ|^2β |C|/|G| (here we also used ρ_i(C)≠ 0). Multiplying over all C⊂ G shows that the RHS of the above displayed equation is ≍ (s-1)^-ϱ(χ,β), which equals (s-1)^-ϱ_cusp(σ,β) when χ∈𝔛_σ,β,
therefore
∑_n≥ 1|a'(σ,n)|^2β/n^s≫ (s-1)^-ϱ_cusp(σ,β) - ∑_χ∉𝔛_σ,β∑_n≥ 1|a'(χ,n)|^2β/n^s.
By the definition of 𝔛_σ,β, all terms in the last sum have pole order < ϱ_cusp(σ,β), so the above is still ≫ (s-1)^-ϱ_cusp(σ,β), proving that the lim inf is positive, assuming the two conditions above.
Now we explain how to choose ρ_i(C) achieving the criteria. We make
ρ_i(C) = l^-1 ∀ i or ρ_i(C) = l^-1μ^-1-i ∀ i.
Obviously (1) is satisfied. For (2), let C_0 be the conjugacy class containing σ. We make the first choice if σ∉ C and otherwise the second choice. For σ∉ C,
f(χ,C) = 1 + ∑_p_i∈ C, p_i≠ p_ja_C,χ^l/p_1^s ⋯ p_l^s + ∑_p_i∈ C, p_i≠ p_ja_C,χ^2l/p_1^s ⋯ p_2l^s + ⋯ C≠ C_0,
which has non-negative coefficients (by our choice of l); for σ∈ C, i.e. C = C_0, we get
f(χ,C_0) = ∑_p_1∈ C_0a_C,χ/p_1^s + ∑_p_i∈ C_0, p_i≠ p_ja_C_0,χ^l+1/p_1^s ⋯ p_l+1^s + ∑_p_i∈ C_0, p_i≠ p_ja_C_0,χ^2l+1/p_1^s ⋯ p_2l+1^s + ⋯,
so χ_i^(σ) f(χ,C_0) also has non-negative coefficients. Then
∑_n≥ 1χ_i^(σ) a'(χ,n)/n^s = χ_i^(σ) f(χ,C_0) ∏_C≠ C_0 f(χ,C)
also has non-negative coefficients, which is (2).
Just as in the proof of Theorem <ref>, both Theorem <ref> and Theorem <ref> generalize when one replaces ∑_n≥ 1|a_cusp(σ,n)|^2β/n^s with the restricted sum
∑_(n,S)=1, n squarefree |a_cusp(σ,n)|^2β/n^s.
One drawback of Theorem <ref> is that (<ref>) is in general difficult to check for a given field extension L/K. Even when K/ℚ is cyclic (in which case automorphic induction has been proven) and L is the Hilbert class field of K, it might still fail.
Let p ≡ 1 (mod 3) be a prime and let K be the unique cubic subfield of ℚ(ζ_p). Assume K has class number 7 [this is the case when p=313, 877, 1129, ⋯] and let L be its Hilbert class field; we have an exact sequence
1⟶ Gal(L/K) =: N ⟶ Gal(L/ℚ) =: G ⟶ Gal(K/ℚ) =: Q ⟶ 1.
G is a group of order 21, and there are only two such groups up to isomorphism: C_7× C_3 and the non-abelian semidirect product C_7 ⋊ C_3. We claim G cannot be C_7× C_3. Indeed, if it were, then every subfield of L would be Galois over ℚ since G would be abelian; in particular, this would hold for the inertia field F of 𝔓/p, where 𝔓 is a prime of L lying over p. Now F/ℚ has degree 7 and (𝔓∩ F)/p is unramified in F; since F would be Galois, every prime lying above p would be unramified in F, so F/ℚ would be everywhere unramified, contradicting Minkowski's theorem. Thus G is C_7 ⋊ C_3; it has two 3-dimensional irreducible characters.
The action of Q on N_0 ={χ∈N | χ^ irreducible} has 2 orbits; writing ρ_i = χ_i^, the L-functions L(ρ_1,s) and L(ρ_2,s) differ by complex conjugation. For any ideal class σ of N, we have
∑_n≥ 1a_cusp(σ,n)/n^s = 1/7∑_i=1,2ρ_i(σ) L(ρ_i,s).
Looking at the character table, we see that none of the cuspidal projections vanishes. When β∈ℤ^≥ 1, each of them has growth
∑_n≤ x |a_cusp(σ,n)|^2β∼ C x (log x)^ϱ_cusp(σ,β)-1,
where
ϱ_cusp(σ,β) = 1/21(3^2β + 3| -1-√(-7)/2|^2β + 3| -1+√(-7)/2|^2β) = 1/7(3^2β-1 + 2^1+β).
However, for non-integral β > 0 the prerequisite (<ref>) no longer holds: (-1±√(-7))/2 is not a real multiple of a root of unity, so we cannot conclude from Theorem <ref> the same order of growth for such β.
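A quick symbolic verification (ours, not from the original text) of the closed form displayed above: since |(-1±√(-7))/2|^2 = 2, each of those two class contributions reduces to 2^β, and the identity follows. The sketch below spot-checks it with sympy, also at non-integral β.

import sympy as sp

beta = sp.symbols('beta', positive=True)
vals = [3, (-1 - sp.sqrt(-7)) / 2, (-1 + sp.sqrt(-7)) / 2]

# Left-hand side: (1/21)(|3|^{2β} + 3|(-1-√-7)/2|^{2β} + 3|(-1+√-7)/2|^{2β});
# the remaining 14 group elements have character value 0 and drop out.
lhs = sp.Rational(1, 21) * (sp.Abs(vals[0]) ** (2 * beta)
                            + 3 * sp.Abs(vals[1]) ** (2 * beta)
                            + 3 * sp.Abs(vals[2]) ** (2 * beta))
# Right-hand side: the claimed closed form (1/7)(3^{2β-1} + 2^{1+β}).
rhs = sp.Rational(1, 7) * (3 ** (2 * beta - 1) + 2 ** (1 + beta))

# Spot-check the identity at a few (also non-integral) values of β.
for b in (sp.Rational(1, 2), 1, 2, sp.Rational(7, 3)):
    diff = (lhs - rhs).subs(beta, b)
    assert abs(sp.N(diff)) < 1e-12
print("closed form verified at the sampled exponents")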
Similarly to the non-cuspidal case, we expect the following to hold:
For real β>0, Theorem <ref> still holds after removing the condition in equation (<ref>); lim inf and lim sup in Theorem <ref> should also be equal.
Note that, unlike the non-cuspidal case, the lim sup and lim inf might depend on σ.
§.§ Epstein zeta function of non-fundamental discriminant
In all our previous explicit examples, we focused on the case where L is the Hilbert class field of K. The theorems, however, work for any abelian extension of number fields L/K. In this subsection, we apply our previous results to certain abelian extensions closely related to non-maximal orders of K. We first recall some basic facts relating binary quadratic forms to class field theory (cf. <cit.>). Let K be an imaginary quadratic field, 𝒪 an order of K with discriminant D and conductor f, and let L be the associated ring class field (it is characterized by the property that a prime p∤ D splits in L if and only if p is represented by the principal form). Let S' be the set of primes of K lying above f, and let I^S' be the free abelian group generated by the prime ideals of K coprime to f. As in the fundamental-discriminant case, L/ℚ is still Galois. Recall the number a(σ,n) defined previously; by examining the proof of Lemma <ref>, one sees that when (n,f)=1, a(σ,n) is the number of ideals in I^S' of norm n that map to σ under the Frobenius map.
Since G = Gal(L/ℚ) has the normal subgroup N = Gal(L/K), the argument in Example <ref> (which was carried out there only for 𝒪 = 𝒪_K) goes through, and we have the same formula:
ϱ(χ,β) = 1/2m∑_j=0^m-1|2cos(2π j/m)|^2β, m = order of χ.
So ϱ(β) = [K:ℚ]^2β-1 = 2^2β-1. The condition that χ^ is irreducible remains unchanged: it is irreducible if and only if χ is non-real.
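The following small numerical check (ours, not from the original text) evaluates the displayed formula for several orders m; for β = 1 it returns 2 exactly when m ∈ {1,2} (χ real) and 1 otherwise, in line with the irreducibility criterion just stated.

import math

def rho(m, beta):
    """ϱ(χ, β) = (1/(2m)) Σ_{j=0}^{m-1} |2 cos(2π j / m)|^{2β} for a character χ of order m."""
    return sum(abs(2 * math.cos(2 * math.pi * j / m)) ** (2 * beta) for j in range(m)) / (2 * m)

for m in (1, 2, 3, 4, 6, 12):
    print(m, round(rho(m, 1), 6), round(rho(m, 2), 6))
# For β = 1 the value is 2 when m ∈ {1, 2} (χ real, χ^Ind reducible) and 1 otherwise,
# matching the statement that χ^Ind is irreducible exactly for non-real χ.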
Gal(L/K) is isomorphic to the form class group: primitive binary quadratic forms of discriminant D up to equivalence. Let g(x,y) be a primitive binary quadratic form of discriminant D and set
r_g(n) = #{(x,y)∈ℤ^2 | g(x,y) = n}.
The quantity a(σ,n) for (n,S)=1 is essentially r_g(n) for the form g corresponding to σ∈ Gal(L/K). More precisely, we can choose a representative g(x,y) = ax^2+bxy+cy^2 such that 𝔞 = ℤa + ℤ(-b+√(D))/2 is an ideal of 𝒪 prime to the conductor f, and we have
g(x,y) = N(xa + y-b+√(D)/2)/N(𝔞).
So
∑_n≥ 1r_g(n)/n^s = N(𝔞)^s ∑_z∈𝔞1/N(z)^s = N(𝔞)^s w∑_(z)⊂𝔞1/N(z)^s,
with w the number of units in 𝒪. Because ideals in 𝒪 lack unique factorization, it is more convenient to look at those prime to the conductor, which do factor uniquely; this part corresponds to
N(𝔞)^s∑_(z)⊂𝔞, (z,f)=11/N(z)^s = 1/h(𝒪)∑_χχ(𝔞) ∑_I⊂𝒪, (I,f)=1χ(I)/N(I)^s.
Here χ ranges over all characters of the Picard group of 𝒪, i.e. classes of ideals prime to f; this group is isomorphic to Gal(L/K). By lifting I⊂𝒪 to 𝒪_K, the sum
∑_I⊂𝒪, (I,f)=1χ(I)/N(I)^s = ∑_I⊂𝒪_K, (I,f)=1χ(I)/N(I)^s
is the same (apart from a finite Euler product supported on f) as L(χ,s), with χ now interpreted as a character on I^S' modulo the equivalence relation I' ∼ I if and only if I'I^-1 is a principal ideal of the form (a) with a∈ℤ, (a,f)=1. Combining the above equations, we arrive at
∑_n≥ 1, (n,f)=1r_g(n)/n^s = w/h(𝒪)∑_χχ(𝔞) × (some Euler factors supported at f)× L(χ,s).
We define ∑_n≥ 1, (n,f)=1 r_cusp,g(n) n^-s to be the RHS with the real characters χ removed; it is the L-function of an elliptic cusp form.
Corollary <ref> in this case becomes:
For real β>0,
∑_n≤ x, (n,f)=1 r_g(n)^2β≍ x (log x)^2^2β-1-1.
Here ≍ can be replaced by ∼ times a positive constant if β∈ℤ^≥ 1.
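As a brute-force numerical illustration of this corollary (ours, not from the paper), the sketch below counts r_g(n) for a sample form and tracks the second moment against x log x; the particular form g(x,y) = x^2 + xy + 6y^2 (discriminant -23, conductor f = 1) and the cut-offs are our own choices for the demo.

import math
from collections import Counter

def representation_counts(a, b, c, X):
    """r_g(n) = #{(x, y) in Z^2 : a x^2 + b x y + c y^2 = n} for all 1 <= n <= X."""
    counts = Counter()
    bound = int(math.isqrt(4 * X)) + 2   # generous search box for this positive definite form
    for x in range(-bound, bound + 1):
        for y in range(-bound, bound + 1):
            n = a * x * x + b * x * y + c * y * y
            if 1 <= n <= X:
                counts[n] += 1
    return counts

a, b, c = 1, 1, 6   # g(x, y) = x^2 + x*y + 6*y^2, discriminant -23 (illustrative choice)
for X in (10**3, 10**4, 10**5):
    r = representation_counts(a, b, c, X)
    second_moment = sum(v * v for v in r.values())
    print(X, second_moment, second_moment / (X * math.log(X)))

For β = 1 the predicted exponent is 2^{2β-1}-1 = 1, i.e. growth ≍ x log x, so the last printed ratio should be roughly stable as X grows.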
This result looks less impressive, but it generalizes almost verbatim to the cuspidal version r_cusp,g(n): χ^ is always real-valued, so the prerequisite (<ref>) of Theorem <ref> is satisfied. Recall the expected order of cuspidal moment:
ϱ_cusp(σ,β) = max{ϱ(χ_i,β) : 1≤ i≤ k, χ_i^(σ)≠ 0}.
Here χ_1,⋯,χ_k are orbit representatives of Q on N_0: in this case, just pick one from each pair (χ,χ̅) of non-real χ.
Assume r_cusp,g(n) is not identically zero [precisely when this happens is described in Example <ref>], and let
ρ = ϱ_cusp(σ,β) = max{ϱ(χ_i,β) : 1≤ i≤ k, χ_i^(σ)≠ 0}.
For any real β >0, we have
∑_n≤ x, (n,f)=1 |r_cusp,g(n)|^2β≍ x (log x)^ρ-1.
Here ≍ can be replaced by ∼ times a positive constant if β∈ℤ^≥ 1.
The case when D is a fundamental discriminant is proved in <cit.> by an explicit calculation. In our approach, we are able to avoid such calculations by using the language of finite group representations, and we establish the result for all D, fundamental or not, in a uniform way.
§ ACKNOWLEDGEMENTS
I am grateful to my supervisor Prof. Valentin Blomer for introducing me to these topics and for his numerous suggestions that improved the readability of this manuscript. I also thank Dr. Edgar Assing for grading the thesis from which this manuscript originates, as well as my colleague Francisco Araújo for related discussions.
|
http://arxiv.org/abs/2307.01628v1
|
20230704102421
|
Photonic bound states in the continuum governed by heating
|
[
"A. I. Krasnov",
"P. S. Pankin",
"G. A. Romanenko",
"V. S. Sutormin",
"D. N. Maksimov",
"S. Ya. Vetrov",
"I. V. Timofeev"
] |
physics.optics
|
[
"physics.optics"
] |
Kirensky Institute of Physics, Federal Research Center KSC SB
RAS, Krasnoyarsk, 660036 Russia
Siberian Federal University, Krasnoyarsk, 660041 Russia
[email protected]
Kirensky Institute of Physics, Federal Research Center KSC SB RAS, Krasnoyarsk, 660036 Russia
Siberian Federal University, Krasnoyarsk, 660041 Russia
Krasnoyarsk Scientific Center, Siberian Branch, Russian Academy of Sciences, Krasnoyarsk, 660036 Russia
Kirensky Institute of Physics, Federal Research Center KSC SB
RAS, Krasnoyarsk, 660036 Russia
Siberian State University of Science and Technology, Krasnoyarsk, 660037 Russia
Kirensky Institute of Physics, Federal Research Center KSC SB
RAS, Krasnoyarsk, 660036 Russia
Siberian Federal University, Krasnoyarsk, 660041 Russia
Kirensky Institute of Physics, Federal Research Center KSC SB
RAS, Krasnoyarsk, 660036 Russia
Siberian Federal University, Krasnoyarsk, 660041 Russia
Siberian Federal University, Krasnoyarsk, 660041 Russia
Kirensky Institute of Physics, Federal Research Center KSC SB
RAS, Krasnoyarsk, 660036 Russia
Kirensky Institute of Physics, Federal Research Center KSC SB
RAS, Krasnoyarsk, 660036 Russia
Siberian Federal University, Krasnoyarsk, 660041 Russia
[†]These authors equally contributed to this work
[*]Corresponding author: [email protected]
A photonic crystal microcavity with the liquid crystal resonant layer tunable by heating has been implemented. The multiple vanishing resonant lines corresponding to optical bound states in the continuum are observed. The abrupt behaviour of the resonant linewidth near the vanishing point can be used for temperature sensing.
Photonic bound states in the continuum governed by heating
I. V. Timofeev
August 1, 2023
==========================================================
§ INTRODUCTION
One-dimensional photonic crystal (PhC) is a periodic structure formed by layers with different refractive indices (RIs) <cit.>.
The optical thicknesses of the alternating layers are comparable with the wavelength, which leads to the Bragg diffraction of light.
In the photonic bandgap (PBG) spectral region the PhC reflects light with small losses.
The resonant layer embedded between two PhC mirrors forms a microcavity, which supports microcavity (MC) modes <cit.>.
When the resonant layer is an anisotropic material
the MC mode can be transformed into a bound state in the continuum (BIC) <cit.>.
The BIC is a nonradiative localized eigenmode embedded in the continuum of propagating waves <cit.>.
The BIC is a general wave phenomenon, which occurs in quantum mechanics, radio physics, photonics, and acoustics <cit.>.
Variation of parameters of the system near the BIC affects the coupling between the localized mode and the continuum of propagating waves and thereby tunes the radiation decay rate.
The BIC has been used in various applications, such as, lasers <cit.>, waveguides <cit.>, nanocavities <cit.>, amplified chiral response <cit.>, trapping and sorting of nanoparticles <cit.>, perfect absorbers <cit.>, etc.
Due to the narrow spectral line the quasi-BICs have been used for RI sensing <cit.>, mechanical pressure sensing <cit.> as well as for temperature sensing <cit.>. In this work, we demonstrate an optical BIC in a PhC microcavity tunable by heating the anisotropic liquid crystal (LC) resonant layer <cit.> having in mind potential applications for temperature sensing.
§ MODEL
The microcavity consists of a LC layer embedded between two identical 1D PhCs, see Fig. <ref>(a).
The silicon nitride (Si_3N_4) and silicon dioxide (SiO_2) layers are deposited on glass substrate by using the plasma enhanced chemical vapor deposition method.
The number of periods in PhC is 8 plus one unpaired layer of silicon nitride.
Polyvinyl alcohol (PVA) layers are formed on each PhC by the spin-coating method and then mechanically rubbed to ensure a homogeneous planar alignment of the LC.
The gap between PhC mirrors is provided with Teflon spacers.
The 4-pentyl-4'-cyanobiphenyl (5CB) nematic LC is embedded into the gap by the capillary method.
The thicknesses and RIs of all layers are presented in Table <ref>.
In nematic LCs the optical axis (OA) coincides with the
preferred alignment of the long axes of the LC molecules which is described by the unit vector a = [cos(ϕ), sin(ϕ), 0 ], called the director, see Fig. <ref>(b) <cit.>.
The polarizing microscopy images of the optical texture of the LC layer confirm the planar LC alignment, see Fig. <ref>(c).
The uniform dark texture ensures rubbing directions of PVA layers are parallel to the polarizer or the analyzer, while the maximum intensity of the transmitted light is observed upon rotation of the crossed polarizers by 45^∘.
The microcavity is conjugated with hemispherical lenses made of glass with RI n_G = 1.5. The immersion oil with RI of 1.5 is placed between the glass substrates and lenses to eliminate the air gap.
The experimental set-up for measuring the microcavity transmittance spectra is shown in Fig. <ref>(d).
The incoherent radiation from the source propagates through an optical fiber and a polarizer. After passing through the polarizer, the TE-polarized (TE wave) or TM-polarized (TM wave) radiation is focused on the microcavity. The outgoing radiation is collected in a fiber optic collimator connected to the spectrometer.
The microcavity is heated using a thermostat, with the temperature controlled by a thermistor. The azimuthal angle ϕ of the LC OA orientation is changed by rotating the sample on a motorized turntable.
§ RESULTS AND DISCUSSION
Figure <ref>(a) shows the measured PhC transmittance spectra for TE and TM-waves incident at angle θ_B = asin[(n_Si_3N_4/n_G) sin(atan(n_SiO_2/n_Si_3N_4))]≈ 53^∘.
The wide dip corresponding to the PBG for TE waves is observed, while TM waves pass through the PhC due to the Brewster effect at the Si_3N_4/SiO_2 interfaces <cit.>.
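As a small numerical aside (ours, not from the paper): the incidence angle quoted above follows from Snell's law combined with Brewster's condition at the Si_3N_4/SiO_2 interface. The refractive-index values below are typical textbook numbers, since the contents of Table <ref> are not reproduced here; with the actual values from the table one should recover the quoted ≈53°.

import math

def brewster_incidence_angle(n_layer_hi, n_layer_lo, n_glass):
    """Angle of incidence inside the glass hemisphere at which the wave meets the
    Si3N4/SiO2 interfaces at Brewster's angle (TM transmission through the mirror)."""
    theta_brewster_internal = math.atan(n_layer_lo / n_layer_hi)          # inside Si3N4
    sin_theta_glass = (n_layer_hi / n_glass) * math.sin(theta_brewster_internal)
    return math.degrees(math.asin(sin_theta_glass))

# Assumed (typical) refractive indices; the actual values are listed in the paper's Table.
n_Si3N4, n_SiO2, n_glass = 2.05, 1.46, 1.50
print(f"{brewster_incidence_angle(n_Si3N4, n_SiO2, n_glass):.1f} deg")    # close to the quoted ~53 deg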
The transmittance spectra of the microcavity for different values of the azimuthal angle ϕ at room temperature are shown in Figure <ref>(b-f).
In the PBG spectral region the multiple resonant dips corresponding to the MC modes are observed when ϕ ≠ 0, π/2, see Fig. <ref>(c-e).
It can be seen that the position and width of the resonant lines are strongly dependent on the value of ϕ.
The width of the resonant line is determined by the total decay rate γ_tot = γ + γ_0, which is the sum of the radiation decay rate γ into the TM polarized continuum and the material loss rate γ_0.
The cases ϕ = 0, π/2 correspond to symmetry-protected BICs with zero radiation decay rate γ = 0, due to the orthogonality of the localized TE and propagating TM waves <cit.>. The resonant dips do not occur in the corresponding spectra, see Fig. <ref>(b, f).
The measured temperature transformation of the microcavity spectra for fixed values ϕ = π/6, π/4, π/3 is shown in Fig. <ref>(a-c).
It can be seen that position and width of the resonant lines change when temperature T increases from 24^∘C to 35^∘C.
The radiation decay rate γ is determined by the Poynting vector of TM waves of the resonant mode γ ∝ |E_x|^2 at the LC/PVA boundary <cit.>.
The value E_x is a sum of the ordinary (o-wave) and extraordinary (e-wave) waves E_x = E_ox + E_ex.
The polarization vectors E_o,e of the o- and e-waves are determined by the direction of the LC OA a, the permittivities of the o- and e-waves ε_o,e(T), and the unit vectors in the propagation directions κ_o,e(T) = [κ_o,ex; 0; κ_o,ez]. According to <cit.> one can write
E_o = E_o [a ×κ_o],
E_e = E_e [ a - ε_e(α)/ε_oκ_e(κ_ea) ],
where α is the angle between the vectors a and κ_e.
The thermal motion of the LC molecules leads to a change in the permittivities ε_o = ε_⊥(T) and ε_⊥(T) ≤ ε_e(α) ≤ ε_∥(T).
For certain values of temperature T the condition E_x = E_ox + E_ex = 0 can be satisfied. The radiation decay rate γ in this case is equal to zero γ = 0, and the resonant line collapses. This case corresponds to Friedrich–Wintgen BIC <cit.>, also called accidental BIC <cit.> or parametric BIC <cit.>.
At temperature T_c = 35^∘C the phase transition of the LC to the isotropic liquid is observed. The TE and TM waves are not mixed, and resonant lines vanish <cit.>.
Figure <ref>(d-f) presents temperature transformation of the microcavity spectra calculated by the Berreman transfer matrix method <cit.>.
For calculating the spectra the frequency-dependent <cit.> and temperature-dependent refractive indices <cit.> were adjusted within 5% to provide agreement with the experiment.
In Fig. <ref> (a) we show the transmittance in the vicinity of the BIC, marked in Fig. <ref> (b) by the black rectangle.
In Fig. <ref> (b) we show the temperature dependence of the resonant linewidth and its temperature derivative.
In Fig. <ref> (b) one can see that at the BIC temperature T the derivative exhibits an abrupt change.
In Fig. <ref> (c) we show the numerical data obtained by the Berreman method in the same range of parameters.
In the framework of the temporal coupled-mode theory (TCMT) <cit.> the LC layer and the PhCs are considered as a resonator and waveguides. The amplitude a of the MC mode obeys the following equations
da/dt = -(iω_0+γ+γ_0)a + ⟨ d^*| ( [ s_1^in; s_2^in ] ),
( [ s_1^out; s_2^out ] ) = C ( [ s_1^in; s_2^in ] ) + a|d⟩.
Here ω_0 is the resonant frequency, |d⟩ is the column vector of coupling constants, and s_1,2^in (s_1,2^out) are the amplitudes of the incoming (outgoing) TM waves in the PhC waveguides.
In the case of central-plane mirror symmetry, the non-resonant scattering matrix C at the BIC frequency is written as
C = e^iψ ( [ ρ ± iτ; ± iτ ρ ] ),
where ψ and ρ are the phase and amplitude of the complex reflection coefficient, τ is the amplitude of the complex transmission coefficient, ρ^2 + τ^2 = 1.
The coupling constant vector for the case when the MC mode is even with respect to the mirror plane is written as
|d⟩ = ( [ d_1; d_2 ] ) = e^iψ/2√(γ/2(1+ρ)) ( [ ±τ-i(1+ρ); ±τ- i(1+ρ) ] ),
and for the case of odd MC mode it is written as
|d⟩ = ( [ d_1; d_2 ] ) = e^iψ/2√(γ/2(1+ρ)) ( [ ±τ+i(1+ρ); ∓τ- i(1+ρ) ] ).
Assuming that harmonic waves s ∝ e^-iω t propagate in the waveguides, Eqs. (<ref>) and (<ref>) yield the final expression for the scattering matrix S in the following form:
S(T,ω) = C + |d⟩⟨ d^*| / [ i(ω_0(T) - ω) + γ(T) + γ_0 ].
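To illustrate how the linewidth collapse at the BIC shows up in transmission, here is a minimal numerical sketch (ours, with made-up parameter values) of the resonant term of the TCMT scattering matrix; the transmitted amplitude is the off-diagonal element S_21, and the resonant dip narrows (total width γ+γ_0) and loses contrast as the radiative rate γ decreases, disappearing at the BIC (γ = 0).

import numpy as np

def tcmt_transmittance(omega, omega0, gamma, gamma0, psi=0.0, rho=0.6):
    """|S_21|^2 from S = C + |d><d*| / (i(omega0 - omega) + gamma + gamma0),
    using the even-mode coupling vector quoted in the text.
    Note |d><d*| is the outer product d d^T (no extra conjugation)."""
    tau = np.sqrt(1.0 - rho**2)
    C = np.exp(1j * psi) * np.array([[rho, 1j * tau], [1j * tau, rho]])
    d = np.exp(1j * psi / 2) * np.sqrt(gamma / (2.0 * (1.0 + rho))) * np.array(
        [tau - 1j * (1.0 + rho), tau - 1j * (1.0 + rho)])
    S = C + np.outer(d, d) / (1j * (omega0 - omega) + gamma + gamma0)
    return np.abs(S[1, 0]) ** 2

# Illustrative (made-up) numbers in arbitrary frequency units:
omega = np.linspace(-10, 10, 2001)
for gamma in (3.0, 1.0, 0.1, 0.0):     # radiative rate -> 0 when approaching the BIC
    T = np.array([tcmt_transmittance(w, omega0=0.0, gamma=gamma, gamma0=0.05) for w in omega])
    print(f"gamma = {gamma:4.1f}: minimum |S21|^2 = {T.min():.3f}")

With the material loss γ_0 > 0 the achievable linewidth (and hence the Q factor) remains bounded, which is the point made in the following paragraphs.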
The complex eigenfrequency in dependence on temperature ω_r(T) = ω_0(T) -iγ(T) was found analytically by solving the eigenvalue problem for an open system, see Supplementary in <cit.>.
The major contribution to the material loss in the fabricated microcavity is due to the conducting transparent layers of aluminum-doped zinc oxide deposited on the glass substrates.
It cannot be taken into account by solving the eigenvalue problem formulated for the semi-infinite PhCs, therefore the rate of material loss γ_0 is fitted to be consistent with the numerical spectra.
In Fig. <ref> (d) we demonstrate the TCMT solution with fitted γ_0. One can see a good agreement between Fig. <ref> (d) and Fig. <ref> (c).
Although the presence of the conducting layers decreases the maximum possible quality factor Q = ω_0/2γ_tot to the value Q_max = ω_0/2γ_0 at the BIC frequency, it enables the voltage-tunable Q factor that has been demonstrated in <cit.>.
§ CONCLUSION
In this work we demonstrated an optical bound state in the continuum in a photonic crystal microcavity tunable by heating the anisotropic liquid crystal resonant layer. We experimentally measured the temperature dependencies of the transmittance spectrum at Brewster's angle and observed multiple vanishing resonant lines which indicate the occurrence of optical bound states in the continuum.
The obtained dependencies are explained theoretically using temporal coupled-mode theory and the rigorous Berreman transfer-matrix method.
It is found that at the point of the optical bound state in the continuum the resonant linewidth exhibits an abrupt change, which can be employed for engineering temperature sensors.
Acknowledgments. This work was supported by the Russian Science Foundation under grant no 22-22-00687.
The authors would like to express their special gratitude to the Krasnoyarsk Regional Center for Collective Use of the Federal Research Center “Krasnoyarsk Scientific Center, Siberian Branch of the Russian Academy of Sciences” for providing equipment within this project.
Disclosures. The authors declare no conflicts of interest.
Data Availability Statement. The data that support the findings of this study are available from the corresponding author, P.S.P., upon reasonable request.
|
http://arxiv.org/abs/2307.01129v1
|
20230703160810
|
Nitrogen-vacancy magnetometry of CrSBr by diamond membrane transfer
|
[
"Talieh S. Ghiasi",
"Michael Borst",
"Samer Kurdi",
"Brecht G. Simon",
"Iacopo Bertelli",
"Carla Boix-Constant",
"Samuel Mañas-Valero",
"Herre S. J. van der Zant",
"Toeno van der Sar"
] |
cond-mat.mes-hall
|
[
"cond-mat.mes-hall",
"physics.app-ph"
] |
[Correspondence: ][email protected] Kavli Institute of Nanoscience, Delft University of Technology, 2628 CJ Delft, The
Netherlands
Kavli Institute of Nanoscience,
Delft University of Technology, 2628 CJ Delft, The
Netherlands
Kavli Institute of Nanoscience,
Delft University of Technology, 2628 CJ Delft, The
Netherlands
Kavli Institute of Nanoscience,
Delft University of Technology, 2628 CJ Delft, The
Netherlands
Kavli Institute of Nanoscience,
Delft University of Technology, 2628 CJ Delft, The
Netherlands
Institute of Molecular Science, University of Valencia, Catedrático José Beltrán 2, Paterna 46980, Spain
Kavli Institute of Nanoscience, Delft University of Technology, 2628 CJ Delft, The NetherlandsInstitute of Molecular Science, University of Valencia, Catedrático José Beltrán 2, Paterna 46980, Spain
Kavli Institute of Nanoscience,
Delft University of Technology, 2628 CJ Delft, The
Netherlands
[email protected] Institute of Nanoscience,
Delft University of Technology, 2628 CJ Delft, The
Netherlands
Magnetic imaging using nitrogen-vacancy (NV) spins in diamonds is a powerful technique for acquiring quantitative information about sub-micron scale magnetic order. A major challenge for its application in the research on two-dimensional (2D) magnets is the positioning of the NV centers at a well-defined, nanoscale distance to the target material required for detecting the small magnetic fields generated by magnetic monolayers. Here, we develop a diamond `dry-transfer' technique akin to the state-of-the-art 2D-materials assembly methods and use it to place a diamond micro-membrane in direct contact with the 2D interlayer antiferromagnet CrSBr. We harness the resulting NV-sample proximity to spatially resolve the magnetic stray fields generated by the CrSBr, present only where the CrSBr thickness changes by an odd number of layers. From the magnetic stray field of a single uncompensated ferromagnetic layer in the CrSBr, we extract a monolayer magnetization of M_CSB=0.46(2) T, without the need for exfoliation of monolayer crystals or applying large external magnetic fields. The ability to deterministically place NV-ensemble sensors into contact with target materials and detect ferromagnetic monolayer magnetizations paves the way for quantitative analysis of a wide range of 2D magnets assembled on arbitrary target substrates.
Nitrogen-vacancy magnetometry of CrSBr by diamond membrane transfer
T. van der Sar
August 1, 2023
===================================================================
Introduction
The recent emergence of 2D magnetic materials with their potential applications in spin-logic circuitry and memory devices has triggered experimental research to find methods for detection and control of their magnetic ordering <cit.>. This has been a challenge for the past decade because of the low magnetic moment of these magnets at the 2D limit, compared with their bulk counterpart. There have been many techniques introduced so far, commonly used for the detection of magnetic behavior and quantitative study of the magnetism in the 2D magnets, such as magneto-transport measurements <cit.> and electron tunneling <cit.>, magnetic circular dichroism and magneto-optical Kerr effect measurements <cit.>, optical second harmonic generation <cit.>, magnetic force microscopy <cit.>, scanning superconducting quantum interference device (SQUID) microscopy <cit.>, angular electron spin resonance <cit.> and nitrogen-vacancy (NV) magnetometry <cit.>.
Among them, NV-magnetometry provides high sensitivity and nano-precision in the detection of magnetic ordering at a large temperature range from 0.35 to 600 K <cit.>. By this technique, we can detect weak static and dynamic magnetic stray fields that provide quantitative information about the magnetization of a 2D magnet down to the monolayer limit, providing insights into the magnetic domains and localized magnetic defects <cit.>.
A central challenge for achieving high spatial resolution and extracting quantitative results on material magnetization using NV magnetometry is to achieve a nanoscale and well-defined distance between the sample and the NV centers <cit.>. A powerful method is to embed a single NV spin into a diamond scanning probe, which enables imaging with 50 nm spatial resolution and quantitative determination of monolayer magnetizations <cit.>. A second approach is to deposit a 2D magnet directly onto a diamond chip hosting a high density of near-surface NV spins <cit.><cit.>. This approach benefits from a strong signal due to a large number of NV spins. Furthermore, it enables large-area magnetic imaging with 100 μm-scale fields of view at diffraction-limited resolution <cit.>, but requires the ability to fabricate the sample onto the diamond.
For the NV-magnetometry of 2D materials, here we develop a method based on a deterministic placement of a diamond membrane onto the target magnetic flake on a substrate with micrometer lateral precision. We demonstrate a pick-and-place diamond transfer procedure similar to the `dry-transfer' technique commonly used in assembling stacks of 2D materials <cit.>. Using this technique we achieve close proximity between the diamond and the sample that is crucial for detecting the weak magnetic fields of atomically thin magnetic layers with high spatial resolution. By using micron-sized diamond membranes as done in Ref.<cit.>, we strongly increase the probability to achieve nanoscale stand-off distance between diamond and sample with respect to using millimeter-sized diamonds <cit.>. Such large diamonds generally lead to micron-scale diamond-sample distances because of the ∼3 orders of magnitude larger probability to capture spurious particles (e.g. dust) between diamond and sample. Moreover, our transfer technique positions the diamond at a target location with a micrometer lateral precision,
while being compatible with 2D materials assembly techniques. This high degree of control also precludes an incorrect orientation of the diamond, as well as the need to further push/drag the diamond to a target location using micromanipulators, which can damage sensitive samples such as 2D materials. Furthermore, thinning the diamond membrane to a few microns enhances the collection of the NV photoluminescence because of reduced optical aberrations and yields a flexible membrane that can conform more readily to a target surface, which facilitates pickup with the PDMS/PMMA stamp. An additional benefit of our approach is that it enables placing a diamond sensor onto commonly used substrates such as SiO_2/Si that facilitates optical detection and thickness determination of 2D materials. Moreover, these substrates and materials can be equipped with electronic circuitry (e.g. for the microwave resonator) to study and control the various spins in the system.
We use the diamond transfer technique to characterize the magnetization of a 2D interlayer antiferromagnet, Chromium Sulfur Bromide (CrSBr) (Fig. 1a). The diamond membrane contains a layer of NV centers implanted at about 70 nm below the surface (see methods). Each NV has an S=1 electron spin with a zero-field quantization axis (Ŝ_NV) that is along the diamond (111) axis, angled by α = 54.7 degrees relative to the sample-plane normal (Fig. 1a). The Zeeman splitting of the NV spin states m_s=±1 is induced by a small external field (B_ex∼ 5.6 mT) and is modulated by the projection of the local magnetic stray field (B_NV) along Ŝ_NV <cit.>. We detect this field by measuring the NV electron spin resonance (ESR) transition frequencies through the application of a microwave magnetic field via a stripline and readout of the NV's spin-dependent photoluminescence (PL) <cit.>.
As illustrated in Fig. 1b, each CrSBr layer is ferromagnetically ordered along its in-plane magnetic easy-axis (y-axis) below its Néel temperature <cit.>. Therefore, an odd number of CrSBr layers leads to a net magnetic moment that generates a magnetic stray field (purple arrows) at the CrSBr edges (Fig. 1b). We determine the projection of this field onto the NV axis (dB_NV) by measuring the corresponding shift in the NV ESR frequency df_ESR = γ_NV dB_NV, with γ_NV = 28.053 GHz/T the NV gyromagnetic ratio <cit.>. From the stray field, we quantify the monolayer magnetization of the CrSBr flake and extract the NV-sample distance. We note that the small bias field B_ex of 5.6 mT used in our measurements is negligible compared to the crystal anisotropy and exchange fields of CrSBr <cit.>, such that the CrSBr flakes preserve their interlayer antiferromagnetic ordering along the easy axis. Thus, depending on their thickness, the total stray field can have zero vs. finite value for even vs. odd numbers of the CrSBr layers.
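A short numerical sketch (ours, not from the paper) of the field-to-frequency conversion used here: the m_s = 0 → ±1 resonances sit around the standard 2.87 GHz NV zero-field splitting, are split by the bias field, and a stray-field projection dB_NV shifts them by γ_NV dB_NV. The assumption that the 5.6 mT bias field is projected fully onto the NV axis is ours, purely for illustration.

GAMMA_NV = 28.053e9   # Hz per tesla, NV gyromagnetic ratio (from the text)
D_ZFS = 2.87e9        # Hz, NV zero-field splitting (standard value)

def esr_frequencies(B_parallel):
    """ESR frequencies of the m_s = 0 -> ±1 transitions for a field B_parallel (in T)
    projected on the NV axis (neglecting off-axis terms)."""
    return D_ZFS - GAMMA_NV * B_parallel, D_ZFS + GAMMA_NV * B_parallel

f_minus, f_plus = esr_frequencies(5.6e-3)           # bias field assumed along the NV axis
print(f"bias-split resonances: {f_minus/1e9:.3f} GHz and {f_plus/1e9:.3f} GHz")

dB = 60e-6                                          # ~60 uT stray field at a monolayer step
print(f"stray-field shift: {GAMMA_NV * dB / 1e6:.2f} MHz")   # about 1.68 MHz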
Diamond membrane transfer
The main steps for the sample preparation are shown in Fig. 2 with further details in the Methods section. The diamond membrane used in this work is fabricated by ion implantation at a depth of about 70 nm below the diamond surface with an NV density of 10^3 NVs/μm^2 (see Methods). As shown in Fig. 2a, a large diamond chip is etched by O_2 plasma into squares of 50×50 μm^2 with a thickness of 5 μm, which remain connected to the surrounding diamond by small holding bars. Next, the diamond frame is attached to a metallic tip using a UV-curing adhesive. The metallic tip is held by micro-manipulators of a probe station that facilitates a controllable movement of the diamond chip. With that, the diamond chip is brought above a flexible 0.5 mm-thick polydimethylsiloxane (PDMS) layer held on a substrate. As shown in Fig. 2b, another sharp metallic tip (≈10-20 μm in diameter) is then used to break the connecting rod and tip out one of the square diamond membranes onto the PDMS (No. 1). To detach the diamond membrane from the diamond frame, we first bring a target membrane to (almost) touch the PDMS surface, then push the target membrane into contact with the (sticky) PDMS polymer, then lift the frame to detach it from the membrane, and then retract the tip. This procedure eliminates the chance of the membrane flipping upside down. The reason for choosing PDMS is to provide the diamond membrane with a rather flexible substrate with low adhesion to ease the diamond pick-up procedure that follows in the next step.
To pick up the diamond (Fig. 2c), we use a stamp that is made of a layer of poly-methyl methacrylate (PMMA) and PDMS (No. 2) held on a glass slide (see the methods section for the preparation of the PMMA membrane and the stamp). The PDMS-PMMA stamp is then brought in contact with the diamond membrane (held on PDMS 1), using a transfer stage that is equipped with micromanipulators. The good adhesion of the diamond and PMMA helps with the diamond pick-up when retracting the stamp (when in contact, the stack can additionally be annealed up to 80 ^∘C to promote the adhesion). In the next step (Fig. 2d), the diamond membrane is transferred on top of a CrSBr flake previously placed next to the Au stripline (see Methods for details).
When aligned, the diamond membrane and CrSBr flake are brought in contact while heating up the stage to 100 ^∘C. After cooling down the stage to 30 ^∘C and retracting the PDMS, the PMMA and diamond get locally detached from the PDMS in the central area and stay on the SiO_2 substrate. At this step, the stage is heated up to 180 ^∘C to melt the PMMA which relaxes the PMMA-diamond on the CrSBr-substrate and allows for full detachment of PMMA from the PDMS stamp (Fig. 2e). The PMMA in the region of interest is then removed by e-beam lithography, leaving an overlap between the unexposed part of the PMMA and the diamond membrane to avoid diamond displacement in the PMMA-removal procedure. The PMMA removal is not necessary and is done to avoid the possibility of light scattering. We note that the presence of the PMMA has minimal effect on the detected NV response (see SI, section S5). Thus, the last step could be omitted, especially to avoid air exposure in the case of air-sensitive magnetic materials. Note that the dark shadow around the central area of the diamond (in Fig. 2e) is a signature of the diamond membrane being directly in touch with the CrSBr flake <cit.> which is essential for detecting the magnetization. The fringes observed in the optical image are related to the slight gradual variation of the diamond thickness due to the deep etching process of the diamond top surface (see SI, section S1) which does not change the NV-CrSBr distance.
The direct contact with the intact interface between the diamond and the 2D magnet is one of the main advantages of this technique. The method minimizes the chance of contamination (e.g. dust particles) at the interface that has been affecting measurements in previous reports with a wet transfer of the diamond <cit.>. Moreover, this diamond transfer method can be used in the inert atmosphere of a glove box which makes it suitable for air-sensitive 2D magnetic materials. In addition, with the full coverage of the 2D flakes with the diamond and PMMA membranes, further encapsulation of air-sensitive flakes can be avoided.
NV electron spin resonance measurements
The CrSBr flake that we study here consists of regions with various thicknesses that can be distinguished by their optical contrast with respect to the SiO_2/Si substrate (Fig. 3a). The atomic force microscope (AFM) image of the flake (Fig. 3b) and the corresponding AFM height profile (Fig. 3c) show the thickness of different regions of the CrSBr flake across its width. Over the same region, we measure the NV ESR transition frequency (Fig. 3d). The spatial alignment of Fig. 3a-d along the width of the CrSBr flake guides us to determine the ESR signal corresponding to each step in the CrSBr flake. The non-zero modulation of the ESR frequency at some of the edges is a signature for an odd number of CrSBr flakes that would give rise to an uncompensated stray field. For such measurements, as described earlier, we are able to selectively detect the PL response from the NV centers and the CrSBr flake because of their distinct PL spectrum (Fig. 3e and SI section S1).
In Fig. 3f, we show the spatial map of the PL and ESR contrast measured at 70 K for the regions shown in the AFM image in this panel. The PL intensity map indicates a noticeable contrast as the CrSBr thickness changes considerably. On the contrary, the measured ESR signal shows insignificant variation at the large step height of CrSBr (≈ 28 and 72 nm) and changes considerably across the other edges with the thickness of ∼2.4 and ∼15 nm. This is also confirmed by the same magnitude of the ESR signal measured at another 2.4 nm CrSBr step in the middle of the flake (see panels c,d, and g) which corresponds to a tri-layer CrSBr edge <cit.> . Moreover, our AFM characterization indicates two more steps in the CrSBr flake with 1.1±0.23nm and 1.71±0.19nm thicknesses associated with monolayer and bilayer of CrSBr (shown in Fig. S2 in SI). We observe a finite modulation of the ESR signal at the monolayer CrSBr step and no ESR modulation at the bilayer CrSBr. We conclude that the ESR measurement is indeed detecting only the uncompensated stray field generated by the steps in the CrSBr thickness corresponding to an odd number of CrSBr layers, as expected for an antiferromagnetic interlayer stacking order <cit.>. The same magnitude of the ESR modulations at different steps is consistent with the detected stray field being generated by an uncompensated magnetic moment of a single layer of the CrSBr.
At the positions 35 μm and 42 μm (Fig. 3d), we observe a modulation of f_ESR of approximately double the amplitude of the modulations observed at the other steps discussed above. These positions are located in a region where the AFM image shows strong topography changes. In particular, the higher-contrast AFM image in Fig. S2 (in SI) shows that the CrSBr in the region between 36-45 μm is broken into multiple smaller pieces. If two such neighboring pieces have opposite magnetic order, the stray fields generated at their edges would add, which could explain the observed doubling of the ESR shift.
To extract the magnetization, we focus on the central peak in the NV-ensemble-measured stray field of Fig. 3d, because it is well isolated from other peaks and is located in a region where the CrSBr shows a clear, small and isolated step in its height of 2.4 nm. This enables an accurate analysis of the stray field. From the NV ESR measurements, we extract a stray magnetic field of dB_NV ∼ 60 μT generated at the edge of the 2.4 nm CrSBr step (Fig. 3h). For quantitative magnetometry, we need to know the direction of the magnetization of the 2D magnet with respect to the NV axis (Ŝ_NV). For the case of CrSBr, the direction of the magnetic easy axis can be readily distinguished optically, as the mechanical exfoliation of the crystals results in elongated rectangular flakes following the in-plane magnetic anisotropy axes <cit.>. As shown earlier in Fig. 1b, the direction of the CrSBr magnetization and the NV axis in our sample is such that the CrSBr stray fields have a finite projection only along the z-component of the NV spins. Thus, knowing the projection of M_CSB on Ŝ_NV, we can extract the uncompensated saturated magnetization of CrSBr. By fitting the stray-field peak of Fig. 3h (see SI section S3), we find M_s=0.46(2) T, consistent with the M_s=0.48(2) T saturation magnetization reported by SQUID measurements in a magnetic field-polarized bulk CrSBr crystal <cit.>. In contrast to Ref. <cit.>, the M_s value in our work is extracted from the stray field of an uncompensated magnetic layer on top of a multi-layer CrSBr crystal. Furthermore, we extract an NV-sample distance z_0 = 0.13(4) µm (see SI). This is larger than the ∼70 nm NV implantation depth expected from the stopping range of ions in matter (SRIM) <cit.>, corroborating previous work <cit.> that demonstrated that SRIM underestimates the NV implantation depth.
Detection of magnetic phase transition
For bulk CrSBr, the transition temperature from the antiferromagnetic to paramagnetic state is about 130 K, as measured by SQUID magnetometry <cit.>. Here we evaluate the thermal fluctuations of magnetic moments in the CrSBr flake by measuring the NV ESR frequency at various temperatures over the width of the CrSBr flake. In Fig. 4a, we show the stray field modulation across the width of the CrSBr flake measured below the phase transition temperature and in panel b we show the modulation of the corresponding ESR frequency signal as a function of temperature. From there, we extract the temperature-dependence of the magnitude of the stray field at the three studied CrSBr edges, normalized by their maximum value estimated for T = 0 K (dB_T0), shown in panel c. The dB_T0 is estimated by the extrapolation of dB_NV to 0 K. As expected, the stray fields from the three CrSBr edges (with thicknesses of 15 and 2.4 nm) show a gradual decay as the temperature is increased towards the critical temperature. We find the temperature at which the net stray fields from the edges of the CrSBr disappear to be 130 K. The extracted T_c from these measurements agrees well with previously reported values <cit.>
Conclusions
The key benefits of NV magnetometry are its magnetic sensitivity, its well-understood level structure and interaction with auxiliary nuclear spins, and the precisely tunable NV density and implantation depth. Our method integrates these advantages with 2D materials assembly methods. The developed method for the pick-up/transfer of NV-diamond membranes allows for high precision in alignment and direct contact of the diamond and magnetic target, facilitating NV-ensemble magnetometry to detect stray fields down to the monolayer limit. The compatibility of this method with the conventionally used transfer stages and glove-box inert atmosphere will pave the way for further easy and fast exploration of sensitive 2D magnetic materials. Using this technique we have obtained an intact interface at the 2D magnet-diamond interface, resulting in a spatially resolved magnetization profile. From these first-time ODMR measurements on CrSBr, we have quantified the magnetization of a single layer of the CrSBr without the need for exfoliation of a monolayer of the crystal or applying a large external magnetic field. These measurements performed on various CrSBr thicknesses resolve the uncompensated magnetization of the crystal, determining an even versus an odd number of CrSBr layers. Moreover, the temperature dependence of the measured ESR signal suggests that the stray fields generated at the edges of the 2D magnets can be more sensitive to thermal fluctuations. These experimental realizations enabled by the diamond transfer technique open the way for more applications of NV-ensemble magnetometry of 2D magnetic materials, enabling large-area and fast readout of magnetic configurations in 2D spintronic devices.
Methods
Diamond Membrane Preparation. We fabricate the 50 × 50 × 5 μm^3 diamond membrane (shown in Fig. 2a and 2e) from a 4 × 4 × 0.5 mm^3 electronic-grade diamond chip acquired from Element 6 Inc. We start by having the chip cut and polished into 2 mm × 2 mm × 50 μm membranes by Almax Easylabs. After cleaning in nitric acid, we have nitrogen ions implanted by Innovion at a density of 10^13/cm^2 and an energy of 54 keV, yielding an estimated implantation depth of 70±10 nm <cit.>. To create NVs, we vacuum anneal the diamond at a pressure of 3×10^-6 mBar by a 6-hour ramp to 400 deg., a 4-hour hold, a 6-hour ramp to 800 deg., a 2-hour hold and a cool down to room temperature. Assuming a nitrogen-to-NV conversion efficiency of 1% <cit.> during the anneal, we estimate the resulting NV density to be 10^3 NVs/μm^2.
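A one-line sanity check (ours) of that density estimate in code form; the 1% conversion efficiency is the value quoted in the text, the rest is only a unit conversion.

implanted_N_per_cm2 = 1e13          # from the implantation specification above
conversion_efficiency = 0.01        # assumed nitrogen-to-NV yield quoted in the text
um2_per_cm2 = 1e8                   # (10^4 um)^2
nv_per_um2 = implanted_N_per_cm2 / um2_per_cm2 * conversion_efficiency
print(nv_per_um2)   # 1000.0 NVs per square micrometre, matching the quoted estimate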
We etch the microsquares into the NV-side of the diamond using a Ti mask and reactive ion etching (RIE). We deposit 50 nm of Ti using e-beam evaporation, define its shape in a PMMA resist using e-beam lithography, and etch it into shape by 3 min. of reactive ion etching in a SF_6/He plasma at P_RF=30 W RF power. We then transfer the Ti mask 5 μm into the diamond using RIE in an O_2 plasma (P_RF=90 W, P_ICP= 1100 W, etch rate 0.25 μm/min). As the final step, we flip the diamond and etch from the backside using O_2 RIE and a quartz hard mask with a square 1.4 mm opening until the microsquares are released and only remain attached to the bulk diamond via a small holding bar. The Ti is removed using HF <cit.>.
PMMA-PDMS stamp Preparation. For the preparation/suspension of the PMMA membrane, we first spin-coat a layer of a water-soluble polymer, Electra 92 (AR-PC 5090, All-resist), at a rate of 1000 rpm for 3 min. The Electra is then baked on a hotplate at 100 ^∘C for 1 min. Then two layers of PMMA (4%, 950K) are successively spin-coated on top of the Electra layer at 1000 rpm for 3 min. After each spin-coating step, the PMMA layer is annealed on the hot plate at 180 ^∘C for 45 s. At this stage, the PMMA layer (≈ 0.7-1 μm thick) is solidified and is ready for suspension. For that, we use a Scotch tape and cut an open window (7x7 mm^2) in its center. We attach the tape to the PMMA-Electra-SiO_2 sample and immerse the sample in water. The tape holds the PMMA membrane and gives some control over the movement of the sample, while the Electra layer gets dissolved in water. By that, the SiO_2 sample gets detached and the PMMA layer stays on the water surface. In the next step, we take the tape and the PMMA out of the water beaker, let it dry in air for about 10 min, and put it on top of a PDMS layer held on a glass slide <cit.>.
Sample Preparation. Before using the described technique for the transfer of the diamond membrane onto the CrSBr flake, a few steps are taken for the sample preparation. The Au stripline is made by e-beam lithography and deposition of Ti (5 nm) and Au (100 nm) using e-beam evaporation at ultra-high vacuum (<10^-6 mbar), followed by liftoff in acetone and O_2 plasma treatment to remove the PMMA layer and its residues, respectively. The CrSBr flakes are cleaved from their bulk crystals using Nitto tape and are directly exfoliated on a PDMS layer. Then they are transferred from the PDMS onto the target substrate, at a distance of <5 μm from the Au stripline, using the micro-manipulators of a transfer stage. The sample preparation could also be done by lithography of the Au stripline directly on the substrate onto which the CrSBr flake is exfoliated.
CrSBr synthesis. CrSBr crystals are prepared by the direct reaction of their components in a stoichiometric ratio, mixing chromium (99.99 %, Alfa-Aesar), sulfur (99.99 %, Sigma-Aldrich), and bromine (99.9 %, Sigma-Aldrich), with a 3 % in mass excess of bromine, which acts as a transport agent. Crystals are characterized by powder and crystal X-Ray diffraction, energy dispersive X-Ray analysis (EDX), high-resolution TEM, SQUID magnetometry and temperature-dependent single crystal, as reported in Ref. <cit.>.
Photoluminescence microscope setup.
We detect the NV photoluminescence (PL) using a home-built cryogenic microscope setup. The cryostat is a Montana S100 hosting a room-temperature-stabilized NA = 0.85 microscope objective. We position and focus the sample using cryogenic slip-stick positioners (ANPx101 and ANPz101, Attocube), and use a fast steering mirror (FSM300, Newport) to make spatial NV PL maps (Fig. 3d, Fig. 3f). We excite the PL using a continuous-wave 520 nm laser (Obis LX 520, Coherent) and detect it using an avalanche photodiode (Excelitas, SPCM-AQRH-13) after suppressing the laser and the CrSBr PL by band-pass filtering (600-800 nm). The NV+CrSBr PL spectrum shown in Fig. 3e is recorded with an Andor Kimera spectrometer equipped with an Ivac CCD camera using a 550 nm long-pass filter. A Windfreak SynthHDv2 generates the microwaves used for driving the NV spins.
Acknowledgements
We would like to acknowledge the assistance of Allard J. Katan with this project. This project has received funding from the European Union Horizon 2020 research and innovation program under grant agreement No. 863098 (SPRING), and from the Netherlands Research Council (NWO) under grants 740.018.012 (startup), 680.91.115 (Projectruimte) and 193.077 (VIDI). SMV and CBC acknowledge the financial support from the European Union (ERC AdG Mol-2D 788222) and the Spanish MCIN (2D-HETEROS PID2020-117152RB-100, co-financed by FEDER, and Excellence Unit “María de Maeztu” CEX2019-000919-M). SMV thanks the Generalitat Valenciana for a postdoctoral fellow APOSTD-CIAPOS2021/215.
Author Contribution
T.S.G., S.K., H.S.J.v.d.Z., and T.v.d.S. conceived and designed the experiments. M.B. and B.G.S. fabricated the diamond membranes. S.K. fabricated the stripline and developed the diamond tipping process. T.S.G. developed the diamond membrane transfer process, performed AFM characterizations, and fabricated the CrSBr-diamond sample. C.B. and S.M. synthesized the CrSBr crystals. M.B. performed the ESR measurements, with the help of S.K., and I.B.. T.v.d.S. analyzed the ESR data. T.S.G. wrote the manuscript with contributions from all co-authors.
References
[1] M. Gibertini, M. Koperski, A. F. Morpurgo, and K. S. Novoselov, Nature Nanotechnology 14, 408 (2019).
[2] S. Jiang, J. Shan, and K. F. Mak, Nature Materials 17, 406 (2018).
[3] T. Song, X. Cai, M. W.-Y. Tu, X. Zhang, B. Huang, N. P. Wilson, K. L. Seyler, L. Zhu, T. Taniguchi, K. Watanabe, et al., Science 360, 1214 (2018).
[4] Z. Wang, I. Gutiérrez-Lezama, N. Ubrig, M. Kroner, M. Gibertini, T. Taniguchi, K. Watanabe, A. Imamoğlu, E. Giannini, and A. F. Morpurgo, Nature Communications 9, 1 (2018).
[5] D. R. Klein, D. MacNeill, J. L. Lado, D. Soriano, E. Navarro-Moratalla, K. Watanabe, T. Taniguchi, S. Manni, P. Canfield, J. Fernández-Rossier, et al., Science 360, 1218 (2018).
[6] Z. Wang, D. Sapkota, T. Taniguchi, K. Watanabe, D. Mandrus, and A. F. Morpurgo, Nano Letters 18, 4303 (2018).
[7] B. Huang, G. Clark, E. Navarro-Moratalla, D. R. Klein, R. Cheng, K. L. Seyler, D. Zhong, E. Schmidgall, M. A. McGuire, D. H. Cobden, et al., Nature 546, 270 (2017).
[8] C. Gong, L. Li, Z. Li, H. Ji, A. Stern, Y. Xia, T. Cao, W. Bao, C. Wang, Y. Wang, et al., Nature 546, 265 (2017).
[9] Z. Fei, B. Huang, P. Malinowski, W. Wang, T. Song, J. Sanchez, W. Yao, D. Xiao, X. Zhu, A. F. May, et al., Nature Materials 17, 778 (2018).
[10] Z. Sun, Y. Yi, T. Song, G. Clark, B. Huang, Y. Shan, S. Wu, D. Huang, C. Gao, Z. Chen, et al., Nature 572, 497 (2019).
[11] H. Chu, C. J. Roh, J. O. Island, C. Li, S. Lee, J. Chen, J.-G. Park, A. F. Young, J. S. Lee, and D. Hsieh, Physical Review Letters 124, 027601 (2020).
[12] K. Lee, A. H. Dismukes, E. J. Telford, R. A. Wiscons, J. Wang, X. Xu, C. Nuckolls, C. R. Dean, X. Roy, and X. Zhu, Nano Letters 21, 3511 (2021).
[13] J. Yi, H. Zhuang, Q. Zou, Z. Wu, G. Cao, S. Tang, S. Calder, P. Kent, D. Mandrus, and Z. Gai, 2D Materials 4, 011005 (2016).
[14] B. Niu, T. Su, B. A. Francisco, S. Ghosh, F. Kargar, X. Huang, M. Lohmann, J. Li, Y. Xu, T. Taniguchi, et al., Nano Letters 20, 553 (2019).
[15] D. J. Rizzo, A. S. McLeod, C. Carnahan, E. J. Telford, A. H. Dismukes, R. A. Wiscons, Y. Dong, C. Nuckolls, C. R. Dean, A. N. Pasupathy, et al., Advanced Materials, 2201000 (2022).
[16] A. Uri, Y. Kim, K. Bagani, C. K. Lewandowski, S. Grover, N. Auerbach, E. O. Lachman, Y. Myasoedov, T. Taniguchi, K. Watanabe, et al., Nature Physics 16, 164 (2020).
[17] E. Marchiori, L. Ceccarelli, N. Rossi, L. Lorenzelli, C. L. Degen, and M. Poggio, Nature Reviews Physics 4, 49 (2022).
[18] F. Moro, S. Ke, A. G. del Águila, A. Söll, Z. Sofer, Q. Wu, M. Yue, L. Li, X. Liu, and M. Fanciulli, Advanced Functional Materials 32, 2207044 (2022).
[19] P. Maletinsky, S. Hong, M. S. Grinolds, B. Hausmann, M. D. Lukin, R. L. Walsworth, M. Loncar, and A. Yacoby, Nature Nanotechnology 7, 320 (2012).
[20] D. A. Simpson, J.-P. Tetienne, J. M. McCoey, K. Ganesan, L. T. Hall, S. Petrou, R. E. Scholten, and L. C. Hollenberg, Scientific Reports 6, 1 (2016).
[21] F. Casola, T. Van Der Sar, and A. Yacoby, Nature Reviews Materials 3, 1 (2018).
[22] L. Thiel, Z. Wang, M. A. Tschudin, D. Rohner, I. Gutiérrez-Lezama, N. Ubrig, M. Gibertini, E. Giannini, A. F. Morpurgo, and P. Maletinsky, Science 364, 973 (2019).
[23] D. A. Broadway, S. C. Scholten, C. Tan, N. Dontschuk, S. E. Lillie, B. C. Johnson, G. Zheng, Z. Wang, A. R. Oganov, S. Tian, et al., Advanced Materials 32, 2003314 (2020).
[24] Q.-C. Sun, T. Song, E. Anderson, A. Brunner, J. Förster, T. Shalomayeva, T. Taniguchi, K. Watanabe, J. Gräfe, R. Stöhr, et al., Nature Communications 12, 1 (2021).
[25] F. Fabre, A. Finco, A. Purbawati, A. Hadj-Azzem, N. Rougemaille, J. Coraux, I. Philip, and V. Jacques, Physical Review Materials 5, 034008 (2021).
[26] T. Song, Q.-C. Sun, E. Anderson, C. Wang, J. Qian, T. Taniguchi, K. Watanabe, M. A. McGuire, R. Stöhr, D. Xiao, et al., Science 374, 1140 (2021).
[27] A. Laraoui and K. Ambal, Applied Physics Letters 121, 060502 (2022).
[28] I. O. Robertson, C. Tan, S. C. Scholten, A. J. Healey, G. J. Abrahams, G. Zheng, A. Manchon, L. Wang, and J.-P. Tetienne, 2D Materials 10, 015023 (2022).
[29] P. J. Scheidegger, S. Diesch, M. L. Palm, and C. Degen, Applied Physics Letters 120, 224001 (2022).
[30] D. Toyli, D. Christle, A. Alkauskas, B. Buckley, C. Van de Walle, and D. Awschalom, Physical Review X 2, 031001 (2012).
[31] H. Chen, S. Asif, M. Whalen, J. Támara-Isaza, B. Luetke, Y. Wang, X. Wang, M. Ayako, S. Lamsal, A. F. May, et al., 2D Materials 9, 025017 (2022).
[32] N. J. McLaughlin, C. Hu, M. Huang, S. Zhang, H. Lu, G. Q. Yan, H. Wang, Y. Tserkovnyak, N. Ni, and C. R. Du, Nano Letters 22, 5810 (2022).
[33] M. Garsi, R. Stöhr, A. Denisenko, F. Shagieva, N. Trautmann, U. Vogl, B. Sene, F. Kaiser, A. Zappe, R. Reuter, et al., arXiv preprint arXiv:2112.12242 (2021).
[34] G. Q. Yan, S. Li, H. Lu, M. Huang, Y. Xiao, L. Wernert, J. A. Brock, E. E. Fullerton, H. Chen, H. Wang, et al., Advanced Materials 34, 2200327 (2022).
[35] H. Chen, S. Asif, K. Dolui, Y. Wang, J. Támara-Isaza, V. D. P. Goli, M. Whalen, X. Wang, Z. Chen, H. Zhang, et al., ACS Applied Materials & Interfaces (2023).
[36] S. Scholten, A. Healey, I. Robertson, G. Abrahams, D. Broadway, and J.-P. Tetienne, Journal of Applied Physics 130, 150902 (2021).
[37] P. Zomer, M. Guimarães, J. Brant, N. Tombros, and B. Van Wees, Applied Physics Letters 105, 013101 (2014).
[38] Y. Schlussel, T. Lenz, D. Rohner, Y. Bar-Haim, L. Bougas, D. Groswasser, M. Kieschnick, E. Rozenberg, L. Thiel, A. Waxman, et al., Physical Review Applied 10, 034032 (2018).
[39] I. Bertelli, J. J. Carmiggelt, T. Yu, B. G. Simon, C. C. Pothoven, G. E. Bauer, Y. M. Blanter, J. Aarts, and T. Van Der Sar, Science Advances 6, eabd3556 (2020).
[40] L. Rondin, J.-P. Tetienne, T. Hingant, J.-F. Roch, P. Maletinsky, and V. Jacques, Reports on Progress in Physics 77, 056503 (2014).
[41] A. Gruber, A. Drabenstedt, C. Tietz, L. Fleury, J. Wrachtrup, and C. v. Borczyskowski, Science 276, 2012 (1997).
[42] C. Boix-Constant, S. Mañas-Valero, A. M. Ruiz, A. Rybakov, K. A. Konieczny, S. Pillet, J. J. Baldoví, and E. Coronado, Advanced Materials 34, 2204940 (2022).
[43] N. P. Wilson, K. Lee, J. Cenker, K. Xie, A. H. Dismukes, E. J. Telford, J. Fonseca, S. Sivakumar, C. Dean, T. Cao, et al., Nature Materials 20, 1657 (2021).
[44] S. Bogdanović, M. S. Liddy, S. B. van Dam, L. C. Coenen, T. Fink, M. Lončar, and R. Hanson, APL Photonics 2, 126101 (2017).
[45] O. Göser, W. Paul, and H. Kahle, Journal of Magnetism and Magnetic Materials 92, 129 (1990).
[46] A. J. Healey, S. Rahman, S. C. Scholten, I. O. Robertson, G. J. Abrahams, N. Dontschuk, B. Liu, L. C. Hollenberg, Y. Lu, and J.-P. Tetienne, ACS Nano 16, 12580 (2022).
[47] E. J. Telford, A. H. Dismukes, K. Lee, M. Cheng, A. Wieteska, A. K. Bartholomew, Y.-S. Chen, X. Xu, A. N. Pasupathy, X. Zhu, et al., Advanced Materials 32, 2003240 (2020).
[48] D. M. Toyli, C. D. Weis, G. D. Fuchs, T. Schenkel, and D. D. Awschalom, Nano Letters 10, 3168 (2010).
[49] S. Pezzagna, B. Naydenov, F. Jelezko, J. Wrachtrup, and J. Meijer, New Journal of Physics 12, 065017 (2010).
[50] M. Challier, S. Sonusen, A. Barfuss, D. Rohner, D. Riedel, J. Koelbl, M. Ganzhorn, P. Appel, P. Maletinsky, and E. Neu, Micromachines 9, 148 (2018).
[51] A. A. Kaverzin, T. S. Ghiasi, A. H. Dismukes, X. Roy, and B. J. Van Wees, 2D Materials 9, 045003 (2022).
Supporting Information:
Nitrogen-vacancy magnetometry of CrSBr by diamond membrane transfer
S1. Optical micrographs and photoluminescence measurements
S2. Atomic force microscopy (AFM) measurements
In Fig. <ref>, we show the overlap of the AFM height profile images and the optical image of the sample. By presenting the height measurements in the AFM images with adaptive nonlinear color mapping, we highlight the steps in the CrSBr thickness (confirmed by the height profile of the edges in panel b). The clear correspondence between the modulations of the detected stray field (dB_NV) and the locations of the edges in the AFM and optical images shows that a finite dB_NV is detected for the mono-layer and tri-layer CrSBr edges, and that it is absent for the bi-layer CrSBr edge.
We note that the AFM image of the CrSBr flake (Fig. <ref>) shows the presence of bubbles at the interface between the flake and the SiO_2 substrate. To assess a potential effect on the NV-sample distance, we characterized the dimensions of 20 bubbles in the AFM image. We extract a mean bubble height of 10±4 nm and a mean lateral bubble diameter of 3±1 μm, i.e., more than two orders of magnitude larger than the bubble height. This extreme flatness of the bubbles presumably allows for a slight conformation of the diamond membrane such that it locally stays in contact with the CrSBr. The absence of corresponding spatial features in our PL and ESR measurements (Fig. 3, main manuscript) further indicates that the NV-CrSBr distance is not affected by the presence of bubbles.
S3. Extracting M_s from the ensemble ODMR frequencies
As described in the main text, we extract the CrSBr magnetization M_s by analyzing the magnetic stray field at the 2.4 nm step in the CrSBr flake indicated in Fig. 3h of the main manuscript. Here we describe the fitting procedure. The step is oriented parallel to the x-axis and located at y=y_0. For the small magnetic bias fields used, the CrSBr magnetization is expected to be oriented along y (easy axis), i.e., perpendicularly to the step edge. If the step corresponds to a change in the CrSBr thickness by an odd number of layers, the uncompensated layer magnetization leads to a magnetic stray field that emerges from the edge. The z component of this field in the NV layer is given by
B_z(y) = (M_s t/2π) · z_0/[z_0^2+(y-y_0)^2],
where t=0.79 nm is the thickness of a single CrSBr layer <cit.>, M_s is the magnetization in Tesla, and we used t ≪ z_0 with z_0 the NV-implantation depth.
The projection of B_z(y) on the NV-axis dB_NV(y) = B_z(y)cos(54.7^∘) leads to a shift in the ESR frequency of the NV spins given by df_ESR(y) = γ dB_NV(y). We characterize this shift by measuring the optically detected magnetic resonance (ODMR) spectra of our NVs. We do so by sweeping the frequency of a microwave drive applied to the stripline and detecting the spin-dependent NV photoluminescence. The normalized ODMR response of a single NV spin at location y is described by the Lorentzian
S_s(y, df) = 1 - 1/[(df-df_ESR(y))^2 + w^2],
where df is the detuning between the microwave drive and the `offset' ESR frequency associated with the applied bias field, and w is the width of the ESR response. The measured ensemble ODMR response is smeared by our diffraction-limited optical spot size, as described by the convolution of S_s(y) with a Gaussian:
S(y, df) = S_s(y, df) * (1/√(2π s^2)) e^(-y^2/2s^2),
with s the standard deviation of the optical spot. We extract df_ESR(y) by fitting S(y,df) at each y-location with a Lorentzian. We then fit these values for df_ESR(y) to the experimentally extracted values, which we obtain by fitting lorentzians to the measured ensemble ODMR curves.
To extract the NV-sample distance z_0 from the stray-field data, we need to know the optical spot size s. Therefore, we first determine s by the analysis shown in Fig. <ref>. We do so by fitting the photoluminescence across the diamond holding bar (Fig. <ref>a) with a Gaussian e^(-x^2/2s^2) convolved with the 1-μm-wide holding bar, yielding s=451(10) nm (Fig. <ref>b). We then fix s to this value and extract both M_s and z_0 by fitting the stray field (Fig. <ref>c). These fits yield M_s=0.46(2) T and M_s=0.46(4) T. Furthermore, we extract z_0 = 0.13(4) µm and z_0=0.13(9) µm for the two peaks, which is indeed larger than the ∼70 nm expected from SRIM.
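As a cross-check of the fitting procedure described above, the model can be evaluated numerically. The following Python sketch implements the stray-field profile, its projection onto the NV axis, and the Gaussian smearing of the single-NV Lorentzian response; it is an illustrative reconstruction only, and the parameter values, the explicit ODMR contrast and width, and all function names are our own assumptions rather than the analysis code used for the figures.

import numpy as np

def B_z(y_um, Ms_T=0.46, t_um=0.79e-3, z0_um=0.13, y0_um=0.0):
    # z-component of the stray field (tesla) of an uncompensated monolayer
    # edge at y0, evaluated in the NV plane at depth z0 (lengths in um)
    return Ms_T * t_um / (2 * np.pi) * z0_um / (z0_um**2 + (y_um - y0_um)**2)

def df_ESR(y_um, gamma_MHz_per_mT=28.0):
    # ESR frequency shift (MHz) from the field projected onto the NV axis
    dB_NV_mT = B_z(y_um) * np.cos(np.deg2rad(54.7)) * 1e3  # T -> mT
    return gamma_MHz_per_mT * dB_NV_mT

def ensemble_odmr(y_um, df_MHz, w_MHz=1.0, s_um=0.451, contrast=0.02):
    # ODMR response at detuning df, smeared by the Gaussian optical spot:
    # average the single-NV Lorentzian over y, weighted by the spot profile.
    # (An explicit contrast and w^2 normalization are assumed here so that
    # the expression is dimensionless.)
    yy = np.linspace(y_um - 5 * s_um, y_um + 5 * s_um, 801)
    lor = 1 - contrast * w_MHz**2 / ((df_MHz - df_ESR(yy))**2 + w_MHz**2)
    spot = np.exp(-(yy - y_um)**2 / (2 * s_um**2))
    return np.sum(lor * spot) / np.sum(spot)

print(df_ESR(0.0), ensemble_odmr(0.0, 5.0))

Fitting such a model to the measured ensemble ODMR data, with M_s and z_0 free and s fixed, mirrors the procedure described above.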
S4. Stray-field detection of the CrSBr phase transition at different microwave and laser powers
S5. Effect of PMMA-coverage on the NV-related signal
|
http://arxiv.org/abs/2307.02579v2
|
20230705182421
|
d-Fold Partition Diamonds
|
[
"Dalen Dockery",
"Marie Jameson",
"James A. Sellers",
"Samuel Wilson"
] |
math.NT
|
[
"math.NT",
"math.CO",
"11P82 (primary), 11P83 (secondary)"
] |
In this work we introduce new combinatorial objects called d–fold partition diamonds, which generalize both the classical partition function and the partition diamonds of Andrews, Paule and Riese, and we set r_d(n) to be their counting function. We also consider the Schmidt type d–fold partition diamonds, which have counting function s_d(n). Using partition analysis, we then find the generating function for both, and connect the generating functions ∑_n= 0^∞ s_d(n)q^n to Eulerian polynomials. This allows us to develop elementary proofs of infinitely many Ramanujan–like congruences satisfied by s_d(n) for various values of d, including the following family: for all d≥ 1 and all n≥ 0, s_d(2n+1) ≡ 0 (mod 2^d).
§ INTRODUCTION AND STATEMENT OF RESULTS
In 2001, George Andrews, Peter Paule, and Axel Riese <cit.> defined plane partition diamonds, which are partitions whose parts are non-negative integers a_i and b_i which are placed at the nodes of the graph given in Figure <ref>, where each directed edge indicates ≥. They found that the generating function for such partitions is given by <cit.>
∏_n=1^∞1+q^3n-1/1-q^n.
In this paper, we aim to generalize this construction. For any positive integer d, we place d nodes between any two nodes a_k and a_k+1, so that we have the classical partition function p(n) when d=1 and we have the plane partition diamonds of Andrews, Paule, and Riese when d=2.
More precisely, we define d-fold partition diamonds to be partitions whose parts are non-negative integers a_i and b_i,j which are placed at the nodes of the graph given in Figure <ref>, where each directed edge indicates ≥.
Letting r_d(n) denote the number of d-fold partition diamonds of n, we then have that
∑_n=0^∞ r_1(n)q^n = ∏_n=1^∞1/1-q^n,
∑_n=0^∞ r_2(n)q^n = ∏_n=1^∞1+q^3n-1/1-q^n,
∑_n=0^∞ r_3(n)q^n = ∏_n=1^∞1 + 2q^4n-2(1+q) + q^8n-3/1-q^n.
Note that this corresponds to the classical partition function when d=1 and to (<ref>) when d=2. More generally, we have the following result.
The generating function for the number of d-fold partition diamonds is given by
∑_n=0^∞ r_d(n)q^n =∏_n=1^∞F_d(q^(n-1)(d+1)+1,q)/1-q^n ,
where F_d is a polynomial that is defined recursively in Lemma <ref>.
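As a sanity check on these product formulas, r_d(n) can also be computed by directly enumerating the chains of Figure <ref> and comparing with the truncated power series. The following Python sketch (our own illustrative code; all helper names are hypothetical) does this for d = 2 and small n.

from functools import lru_cache
from itertools import product as iproduct

def r_bruteforce(d, N):
    # [r_d(0), ..., r_d(N)] by enumerating a_0 >= b_{1,j} >= a_1 >= b_{2,j} >= ...
    @lru_cache(maxsize=None)
    def ways(top, rem):
        # ways to complete the chain after a link equal to `top`,
        # with `rem` still to be distributed over the later parts
        if top == 0:
            return 1 if rem == 0 else 0
        total = 0
        for bs in iproduct(range(top + 1), repeat=d):      # b_{k,1..d} <= top
            s = sum(bs)
            if s > rem:
                continue
            for a in range(min(bs) + 1):                   # a_k <= min_j b_{k,j}
                if s + a <= rem:
                    total += ways(a, rem - s - a)
        return total
    return [sum(ways(a0, n - a0) for a0 in range(n + 1)) for n in range(N + 1)]

def multiply_series(factors, N):
    # multiply power series given as coefficient lists, truncated at q^N
    out = [1] + [0] * N
    for f in factors:
        new = [0] * (N + 1)
        for i, c in enumerate(out):
            for j, cf in enumerate(f):
                if c and cf and i + j <= N:
                    new[i + j] += c * cf
        out = new
    return out

N = 10
factors = []                       # d = 2: prod_n (1 + q^{3n-1}) / (1 - q^n)
for n in range(1, N + 1):
    num = [0] * (N + 1); num[0] = 1
    if 3 * n - 1 <= N:
        num[3 * n - 1] = 1
    den = [1 if k % n == 0 else 0 for k in range(N + 1)]   # 1/(1 - q^n)
    factors += [num, den]
print(r_bruteforce(2, N))          # the two lists agree
print(multiply_series(factors, N))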
In fact, we can also prove an analogous result regarding Schmidt type d-fold partition diamonds, which are a variant obtained by summing only the links a_0 + a_1 + ⋯ + a_k (i.e., omitting the b_i's when expressing n as a sum) as in <cit.>. Letting s_d(n) denote the number of Schmidt type d-fold partition diamonds of n, we have that
∑_n=0^∞ s_1(n)q^n = ∏_n=1^∞1/(1-q^n)^2,
∑_n=0^∞ s_2(n)q^n = ∏_n=1^∞1+q^n/(1-q^n)^3,
∑_n=0^∞ s_3(n)q^n = ∏_n=1^∞1+4q^n+q^2n/(1-q^n)^4.
Note that the generating functions for d=1 and d=2 given above were known previously <cit.>. More generally, we have the following result.
The generating function for the number of Schmidt type d-fold partition diamonds is given by
∑_n=0^∞ s_d(n)q^n = ∏_n=1^∞A_d(q^n)/(1-q^n)^d+1.
Here, A_d(q) denotes the dth Eulerian polynomial, which is defined by A_0(q)=1 and, for all d≥ 1,
A_d(q) = (1+(d-1)q)A_d-1(q)+q(1-q)A'_d-1(q).
Both Theorems <ref> and <ref> arise as special cases of a more technical theorem, Theorem <ref>, which will be proved using MacMahon's partition analysis.
In Section 2, we will describe the MacMahon operator and use it to understand a generating function for d-fold partition diamonds; this will allow us to state our main technical theorem, Theorem <ref>. In Section <ref>, we will prove some preliminary results needed to prove Theorem <ref>, which we do in Section <ref>. Finally, in Section <ref>, we prove several infinite families of Ramanujan–like congruences satisfied by the functions s_d(n) in elementary fashion. For example, we prove that for all d≥ 1 and all n≥ 0,
s_d(2n+1) ≡ 0 (mod 2^d).
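This family of congruences is easy to probe numerically. The Python sketch below (illustrative only) builds A_d(q) from the recurrence above, expands the product ∏_n A_d(q^n)/(1-q^n)^(d+1) as a truncated series, and checks divisibility of the odd-indexed coefficients by 2^d for small d.

from math import comb

def eulerian(d, N):
    # coefficient list of A_d(q), padded to length N+1, from
    # A_d(q) = (1+(d-1)q) A_{d-1}(q) + q(1-q) A_{d-1}'(q), with A_0 = 1
    A = [1] + [0] * N
    for k in range(1, d + 1):
        dA = [(i + 1) * A[i + 1] for i in range(N)] + [0]   # A'
        B = [0] * (N + 1)
        for i in range(N + 1):
            B[i] += A[i] + (k - 1) * (A[i - 1] if i else 0)                # (1+(k-1)q)A
            B[i] += (dA[i - 1] if i else 0) - (dA[i - 2] if i > 1 else 0)  # q(1-q)A'
        A = B
    return A

def s_series(d, N):
    # coefficients of prod_{n>=1} A_d(q^n)/(1-q^n)^{d+1}, up to q^N
    A = eulerian(d, N)
    out = [1] + [0] * N
    for n in range(1, N + 1):
        fac = [0] * (N + 1)
        for j in range(N // n + 1):  # coefficient of q^{jn} in this factor
            fac[j * n] = sum(A[i] * comb(j - i + d, d) for i in range(j + 1))
        out = [sum(out[i] * fac[k - i] for i in range(k + 1)) for k in range(N + 1)]
    return out

for d in range(1, 5):
    s = s_series(d, 25)
    assert all(s[m] % 2**d == 0 for m in range(1, 26, 2)), (d, s)
print("s_d(2n+1) divisible by 2^d confirmed for d = 1..4 and 2n+1 <= 25")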
§ BACKGROUND AND NOTATION
In order to describe the generating functions that arise in studying d-fold partition diamonds, we start by introducing the MacMahon operator Ω_≥, which is the primary tool for computing these generating functions. The discussion below is inspired by the landmark series of papers on the subject by Andrews, Paule, and others <cit.>.
§.§ The MacMahon operator
First we define MacMahon's Omega operator Ω_≥.
The operator Ω_≥ is given by
Ω_≥ ∑_s_1=-∞^∞⋯∑_s_r=-∞^∞ A_s_1,…,s_r λ_1^s_1⋯λ_r^s_r := ∑_s_1=0^∞⋯∑_s_r=0^∞ A_s_1,…,s_r,
where the domain of the A_s_1,…,s_r is the field of rational functions over ℚ in several complex variables and the λ_i are restricted to a neighborhood of the circle |λ_i|=1. In addition, the A_s_1,…,s_r are required to be such that any of the series involved is absolutely convergent within the domain of the definition of A_s_1,…,s_r.
We will always have Ω_≥ operate on variables denoted by letters λ, μ from the Greek alphabet (so that letters from the Latin alphabet will be unaffected by Ω_≥).
The benefit of using the MacMahon operator comes from our ability to compute it quickly using elimination formulae. MacMahon <cit.> gave a list of some of these, e.g.,
Ω_≥ 1/[(1-λ x)(1-λ^-1 y)] = 1/[(1-x)(1-xy)].
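This elimination rule can be checked by brute force: expand the operand as a truncated double series, keep only the terms whose λ-exponent is non-negative, set λ = 1, and compare with the right-hand side. A minimal Python sketch (illustrative only):

N = 8  # truncation order in x and y

# operand: 1/((1-λx)(1-y/λ)) = sum_{a,b>=0} λ^(a-b) x^a y^b;
# Ω_≥ keeps the terms with a - b >= 0 and then sets λ = 1
lhs = {(a, b): 1 for b in range(N + 1) for a in range(b, N + 1)}

# right-hand side: 1/((1-x)(1-xy)) = sum_{c,d>=0} x^(c+d) y^d
rhs = {}
for d in range(N + 1):
    for c in range(N + 1 - d):
        rhs[(c + d, d)] = rhs.get((c + d, d), 0) + 1

assert lhs == rhs
print("Omega_>= elimination rule checked on all monomials x^a y^b with a, b <=", N)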
This formula can be generalized as follows.
For d a positive integer and j an integer, we have that
Ω_≥ λ^j/[(1-λ x_1)⋯(1-λ x_d)(1-λ^-1 y)] = 1/(1-y)[1/((1-x_1)(1-x_2)⋯(1-x_d))
- y^(j+1)/((1-x_1y)(1-x_2y)⋯(1-x_dy))] .
Expanding the left-hand side using geometric series and applying the definition of Ω_≥, we obtain
Ω_≥ λ^j/[(1-λ x_1)⋯(1-λ x_d)(1-λ^-1 y)] = Ω_≥ ∑_a_1,…,a_d+1≥ 0 λ^(j+a_1+⋯+a_d-a_d+1) x_1^a_1⋯ x_d^a_d y^a_d+1
= ∑_{a_1,…,a_d≥ 0, 0 ≤ k ≤ a_1+⋯+a_d+j} x_1^a_1⋯ x_d^a_d y^(a_1+⋯+a_d+j-k)
= ∑_a_1,…,a_d ≥ 0 x_1^a_1⋯ x_d^a_d ∑_k=0^(a_1+⋯+a_d+j) y^k
= ∑_a_1,…,a_d≥ 0 x_1^a_1⋯ x_d^a_d [(1-y^(a_1+⋯+a_d+j+1))/(1-y)],
where in the second equality we have set k := j+a_1+⋯+a_d-a_d+1, the exponent of λ, so that Ω_≥ retains exactly the terms with 0 ≤ k ≤ a_1+⋯+a_d+j and sets λ = 1. The result now follows by simplifying the geometric series.
§.§ Generating functions for d-fold partition diamonds
Following the work of Andrews, Paule, and Riese in <cit.>, for integers d, n≥ 1 we set
D_d,n ≡ D_d,n(q_0, q_1, …, q_n; w) := ∑ q_0^a_0⋯ q_n^a_n w^∑_j,k b_j,k
where the (outer) sum for D_d,n ranges over all non-negative integers a_i and b_j,k which satisfy the inequalities encoded by Figure <ref>. Thus, D_d,n is the generating function for all d-fold partition diamonds of fixed length n, and each part (i.e., vertex of the graph) corresponds to some q_i or w.
Using this notation, we are now ready to state our main technical theorem.
For d≥ 1 and n≥ 1,
D_d,n(q_0, …, q_n; w) = (∏_k=0^n-1 F_d(Q_k w^dk, w)/[(1-Q_k w^dk)(1-Q_k w^(dk+1))⋯ (1-Q_k w^(dk+d))]) · 1/(1-Q_n w^dn),
where F_d is the polynomial that is defined recursively in Lemma <ref>, and Q_k := q_0 q_1 ⋯ q_k for all k≥ 0.
§.§ The crude form of the generating function
Here, we continue to generalize the objects described in <cit.> as we fix some more notation to describe the crude form of the generating function D_d,n. For integers d, n≥ 1 we set
Λ_n ≡ Λ_d,n := (λ_n,1^(a_n-1-b_n,1) λ_n,2^(a_n-1-b_n,2)⋯λ_n,d^(a_n-1-b_n,d)) (μ_n,1^(b_n,1-a_n) μ_n,2^(b_n,2-a_n)⋯μ_n,d^(b_n,d-a_n)).
Note here that Λ_d,1Λ_d,2⋯Λ_d,n encodes all of the inequalities that must be satisfied in a d-fold diamond of length n. That is, each inequality (i.e., edge of the graph) corresponds to some λ_i,j or μ_i,j. In other words, we immediately have that
D_d,n = Ω_≥ ∑_a_i≥ 0, b_j,k≥ 0 Λ_d,1Λ_d,2⋯Λ_d,n q_0^a_0⋯ q_n^a_n w^∑_j,k b_j,k.
In the proof of Theorem <ref>, we will also need (for integers d, n≥ 1, and ρ≥ 0)
D_d,n^(ρ) ≡ D_d,n^(ρ)(q_0, q_1, …, q_n;w) := ∑ q_0^a_0⋯ q_n^a_n w^∑_j,k b_j,k
where the sum for D_d,n^(ρ) ranges over all non-negative integers a_i and b_j,k which satisfy the inequalities encoded by Λ_d,1Λ_d,2⋯Λ_d,n and where a_n≥ρ.
Now we set (for d,n≥ 1 and 1≤ k≤ n)
h ≡ h_d := 1/(1- λ_1,1⋯λ_1,d q_0)
f_k ≡ f_k,d := 1/[(1-(μ_k,1/λ_k,1)w) ⋯(1-(μ_k,d/λ_k,d)w) ·(1-(λ_k+1,1⋯λ_k+1,d/μ_k,1⋯μ_k,d) q_k)]
g_n ≡ g_n,d := [1-(λ_n+1,1⋯λ_n+1,d/μ_n,1⋯μ_n,d) q_n] / [1-q_n/(μ_n,1⋯μ_n,d)]
and note that one has the following crude form of the generating functions.
For d, n≥ 1, we have that
D_d,n = h· f_1 ⋯ f_n · g_n
D_d,n^(ρ) = h· f_1 ⋯ f_n · g_n (q_n/μ_n,1⋯μ_n,d)^ρ.
The proof follows, mutatis mutandis, as in the proofs of Propositions 2.1 and 2.2 in <cit.>.
§.§ Preliminary Results
In order to prove Theorem <ref>, we must be able to apply some elimination formulae in order to simplify the expression for D_d,n given in Lemma <ref>. We'll first do this in the case where n=1.
For d≥ 1, we have that
D_d,1(q_0, q_1; w) = F_d(q_0,w)/[(1-q_0)(1-q_0w)⋯ (1-q_0w^d)· (1-q_0q_1w^d)],
where F_d(q_0,w) ∈ ℤ[q_0,w] is a polynomial of degree d-1 in q_0 that is given by F_1(q_0,w)=1 and
F_d(q_0,w) = [(1-q_0w^d)F_d-1(q_0,w) - w(1-q_0)F_d-1(q_0w,w)]/(1-w).
Proceed by induction on d. The case d=1 corresponds to classical partitions with at most three parts. We may compute the generating function for such partitions using MacMahon's partition analysis: we have
D_1,1 (q_0,q_1;w) = ∑_a_0,a_1,b_1≥ 0λ_1^a_0-b_1μ_1^b_1-a_1 q_0^a_0 w^b_1 q_1^a_1 = ∑_a_0,a_1,b_1≥ 0 (λ_1q_0)^a_0(μ_1/λ_1 w)^b_1(q_1/μ_1)^a_1
= 1/(1-λ_1q_0)(1-λ_1^-1μ_1 w)(1-μ_1^-1 q_1) = 1/(1-q_0)(1-μ_1q_0w)(1-μ_1^-1q_1)
= 1/(1-q_0)(1-q_0w)(1-q_0q_1w),
by applying (<ref>) to successively eliminate λ_1 and μ_1. Hence, F_1=1.
Now suppose that the conclusion holds for d-1. Then by Lemma <ref> (and the induction hypothesis) we have
D_d,1 (q_0,q_1;w) = 1/(1- _1 ⋯_d q_0) ·(1-_1^-1μ_1w) ⋯(1-_d^-1μ_dw) ·(1-μ_1^-1⋯μ_d^-1 q_1)
= D_d-1,1 (λ_dq_0, μ_d^-1 q_1; w)/1-λ_d^-1μ_dw
= F_d-1(λ_dq_0,w)/(1-λ_dq_0)⋯(1-λ_dq_0w^d-1)(1-λ_dμ_d^-1q_0q_1w^d-1)(1-λ_d^-1μ_dw).
Write F_d-1 as a polynomial in q_0 with coefficients in ℤ[w] as
F_d-1(q_0,w) = ∑_i=0^d-2 a_i(w) q_0^i.
Then
D_d,1 (q_0,q_1;w)
= ∑_i=0^d-2 a_i(w) (λ_dq_0)^i/(1-λ_dq_0)⋯(1-λ_dq_0w^d-1)(1-λ_d/μ_dq_0q_1w^d-1)(1-μ_d/λ_dw)
= ∑_i=0^d-2 a_i(w) q_0^iλ_d^i/(1-λ_dq_0)⋯(1-λ_dq_0w^d-1)(1-λ_d/μ_dq_0q_1w^d-1)(1-μ_d/λ_dw)
= ∑_i=0^d-2 a_i(w) q_0^iλ_d^i/(1-λ_dq_0)⋯(1-λ_dq_0w^d-1)(1-λ_d^-1w)(1-q_0q_1w^d),
by (<ref>). Rearranging,
D_d,1 (q_0,q_1;w) = ∑_i=0^d-2 a_i(w) q_0^i/1-q_0q_1w^dλ_d^i/(1-λ_dq_0)⋯(1-λ_dq_0w^d-1)(1-λ_d^-1w)
= ∑_i=0^d-2 a_i(w) q_0^i/(1-q_0q_1w^d)(1-w)[ 1/(1-q_0)⋯(1-q_0w^d-1) - w^i+1/(1-q_0w)⋯(1-q_0w^d)]
= 1/(1-q_0)⋯(1-q_0w^d)(1-q_0q_1w^d)∑_i=0^d-2 a_i(w) q_0^i[1-q_0w^d/1-w - w^i+1(1-q_0)/1-w]
by applying Proposition <ref>. Thus, we have shown that
F_d(q_0,w) = ∑_i=0^d-2 a_i(w) q_0^i[1-q_0w^d/1-w - w^i+1(1-q_0)/1-w]
= 1-q_0w^d/1-w∑_i=0^d-2 a_i(w)q_0^i - w/1-w(1-q_0) ∑_i=0^d-2 a_i (w) q_0^i w^i
= (1-q_0w^d) F_d-1(q_0,w) - w(1-q_0)F_d-1(q_0w,w)/1-w.
To complete the proof, note that F_d(q_0,w) is indeed a polynomial since w=1 is a root of the numerator given above.
Lemma <ref> will serve as the base case of an induction argument in the proof of Theorem <ref>. We will also need the following results in order to complete that proof.
For k ≥ 1, let y_1, …, y_k, z ≠ 0 be distinct elements of an appropriate field, and define
p(𝐲;z) := ∏_i=1^k(1-y_i/z),
where 𝐲 = (y_1, …, y_k). For 1 ≤ j ≤ k define
p_j(𝐲;z) := ∏_i=1, i≠ j^k (1-y_i/z)^-1.
Then we have
1/p(𝐲;z) = ∑_j=1^kp_j(𝐲;y_j)/1-(y_j/z).
See <cit.>.
Let d,n≥ 1 and ρ≥ 0. Then
D_d,n^(ρ)(q_0,…,q_n;w) = (q_0⋯ q_n)^ρ w^dnρD_d,n(q_0,…,q_n;w).
The above equality is given by the following bijection. Given an arbitrary d-fold partition diamond, adding ρ to each part
assigns to the partition a unique d-fold partition diamond with smallest part at least ρ. Conversely, given an arbitrary d-fold partition diamond with smallest part at least ρ, subtracting ρ from each part assigns to the partition a unique d-fold partition diamond. Thus the set of d-fold partition diamonds and the set of d-fold partition diamonds with smallest part at least ρ are in bijection, which is encoded by the above equality of their generating functions; this proves the result.
§.§ Proof of Theorem <ref>
To prove Theorem <ref>, we follow the same approach as in the proof of Theorem 2.1 of <cit.>.
We proceed by induction on n. The base case (for n=1) has already been discussed in Lemma <ref>. Now, suppose the theorem holds for n and note that by Lemma <ref> we have
D_n+1 = D_d,n+1(q_0,…,q_n;w) = h· f_1⋯ f_n+1· g_n+1
= h· f_1⋯ f_n-11/(1-μ_n,1/λ_n,1w) ⋯(1-μ_n,d/λ_n,dw)·1/(1-λ_n+1,1⋯λ_n+1,d/μ_n,1⋯μ_n,d q_n)
·1/(1-μ_n+1,1/λ_n+1,1w) ⋯(1-μ_n+1,d/λ_n+1,dw)·1/(1-1/μ_n+1,1⋯μ_n+1,d q_n+1).
Now, we apply Lemma <ref> to the last d+2 factors to eliminate _n+1,1, …_n+1,d and μ_n+1,1, …μ_n+1,d and find that
D_n+1 = h· f_1⋯ f_n-11/(1-μ_n,1/λ_n,1w) ⋯(1-μ_n,d/λ_n,dw) D_1(q_n/μ_n,1…μ_n,d, q_n+1;w)
= h· f_1⋯ f_n-11/(1-μ_n,1/λ_n,1w) ⋯(1-μ_n,d/λ_n,dw)·F_d(q_n/μ_n,1⋯μ_n,d,w)/p(𝐲; μ_n,1⋯μ_n,d),
where we set 𝐲 = (y_1, …, y_d+2) := (q_n, q_nw, …, q_nw^d, q_nq_n+1w^d). Then, applying Lemma <ref> and substituting F_d(q_n/μ_n,1⋯μ_n,d, w) = ∑_i=0^d-1 a_i(w) q_n^i/(μ_n,1⋯μ_n,d)^i (where a_i(w) ∈ ℤ[w] depends on d) gives
D_n+1 = h· f_1⋯ f_n-1F_d(q_n/μ_n,1⋯μ_n,d,w)/(1-μ_n,1/λ_n,1w) ⋯(1-μ_n,d/λ_n,dw)·∑_j=1^d+2p_j(𝐲; y_j)/1-y_j/μ_n,1⋯μ_n,d
= ∑_i=0^d-1 a_i(w) q_n^i ∑_j=1^d+2 p_j(𝐲; y_j) 1/y_j^i[h· f_1⋯ f_n-1/(1-μ_n,1/λ_n,1w) ⋯(1-μ_n,d/λ_n,dw)·1/1-y_j/μ_n,1⋯μ_n,d·(y_j/μ_n,1⋯μ_n,d)^i ]
= ∑_i=0^d-1 a_i(w) q_n^i ∑_j=1^d+2 p_j(𝐲; y_j) 1/y_j^i D_n^(i)(q_0, q_1, …, q_n-1, y_j; w)
= ∑_i=0^d-1 a_i(w) Q_n^i w^din∑_j=1^d+2 p_j(𝐲; y_j) D_n(q_0, q_1, …, q_n-1, y_j; w)
= F_d(Q_nw^dn,w) ∑_j=1^d+2 p_j(𝐲; y_j) D_n(q_0, q_1, …, q_n-1, y_j; w),
where we have used Lemma <ref> and Lemma <ref>. By the induction hypothesis, we may observe that
D_n(q_0, q_1, …, q_n-1, y_j; w) = 1-Q_nw^dn/1-Q_n-1y_jw^dnD_n(q_0, q_1, …, q_n-1, q_n; w),
and thus we have
D_n+1 = F_d(Q_nw^dn,w) (1-Q_nw^dn) D_n(q_0, q_1, …, q_n-1, q_n; w) ∑_j=1^d+2p_j(𝐲; y_j)/1-(q_0… q_n-1)y_jw^dn
= F_d(Q_nw^dn,w) (1-Q_nw^dn) D_n(q_0, q_1, …, q_n-1, q_n; w) 1/p(𝐲; q_n/Q_nw^dn)
= F_d(Q_nw^dn,w) (1-Q_nw^dn) D_n(q_0, q_1, …, q_n-1, q_n; w)
·1/(1-Q_nw^dn)(1-Q_nw^dn+1)⋯ (1-Q_nw^dn+d)(1-Q_n+1w^dn+d)
by Lemma <ref> with z=(Q_n-1w^dn)^-1. Thus we have shown that
D_n+1 = F_d(Q_nw^dn,w)/(1-Q_nw^dn+1)⋯ (1-Q_nw^dn+d)(1-Q_n+1w^dn+d) D_n,
which gives the desired result.
§.§ Proof of Theorems <ref> and <ref>
Now, Theorems <ref> and <ref> can be obtained as corollaries of Theorem <ref>.
Theorem <ref> comes from applying Theorem <ref> to
∑_n=0^∞ r_d(n) q^n= lim_n→∞ D_d,n(q,q,…, q;q).
To prove Theorem <ref>, we first apply Theorem <ref> to obtain
∑_n=0^∞ s_d(n) q^n= lim_n→∞ D_d,n(q,q,…, q;1) = ∏_n=1^∞F_d(q^n,1)/(1-q^n)^d+1,
so we must show that F_d(q,1)=A_d(q). Since F_1(q,1) = A_1(q) = 1, it suffices to show that F_d(q,1) and A_d(q) satisfy the same recurrence (<ref>).
Recalling the recurrence from Lemma <ref> and writing F_d-1(q,w)=∑_i=0^d-2a_i(w)q^i gives
F_d(q,w)(1-w) = (1-qw^d)F_d-1(q,w) - w(1-q)F_d-1(qw,w)
= (1-qw^d)∑_i=0^d-2a_i(w)q^i - w(1-q)∑_i=0^d-2a_i(w)q^iw^i
= ∑_i=0^d-2a_i(w)q^i[1-qw^d - w^i+1 + qw^i+1]
= (1-w)∑_i=0^d-2a_i(w)q^i[(1+w+⋯ +w^i) +q(w^i+1 + ⋯ + w^d-1)) ].
Finally, dividing by w-1 and then setting w=1 gives
F_d(q,1) = ∑_i=0^d-2a_i(1)q^i[(i+1) +q(d-i-1)) ]
= ∑_i=0^d-2a_i(1)q^i[1+(d-1)q +i(1-q) ]
= (1+(d-1)q)F_d-1(q, 1)+q(1-q)F'_d-1(q,1),
as desired.
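For small d, the identity F_d(q,1) = A_d(q) established here can also be confirmed symbolically; the following Python/sympy sketch (illustrative only) iterates both recurrences and compares the results.

import sympy as sp

q, w = sp.symbols('q w')

def F(d):
    # F_d(q, w) from the recurrence for F_d above, starting from F_1 = 1
    Fd = sp.Integer(1)
    for k in range(2, d + 1):
        Fd = sp.cancel(((1 - q * w**k) * Fd
                        - w * (1 - q) * Fd.subs(q, q * w)) / (1 - w))
    return sp.expand(Fd)

def A(d):
    # Eulerian polynomial A_d(q) from its recurrence, starting from A_0 = 1
    Ad = sp.Integer(1)
    for k in range(1, d + 1):
        Ad = sp.expand((1 + (k - 1) * q) * Ad + q * (1 - q) * sp.diff(Ad, q))
    return Ad

for d in range(1, 6):
    assert sp.expand(F(d).subs(w, 1) - A(d)) == 0, d
print("F_d(q,1) = A_d(q) confirmed for d = 1..5")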
§ RAMANUJAN–LIKE CONGRUENCES SATISFIED BY S_D(N)
We begin this section by noting the following corollary of Theorem <ref>, which gives an alternate form of the generating function for s_d(n).
We have
∑_n=0^∞ s_d(n) q^n = ∏_n=1^∞( ∑_j=0^∞ (j+1)^d q^jn).
This follows immediately from Theorem <ref>, together with the following identity proved by Euler in <cit.>.
∑_j=0^∞ (j+1)^d q^jn = A_d(q^n)/(1-q^n)^d+1.
One can view Corollary <ref> combinatorially as follows. In order to build an arbitrary Schmidt type d-fold partition diamond, one must pick the difference, j, between a_n-1 and a_n for each positive integer n. This completely determines all of the linking nodes a_0, a_1, …, and the q^jn factors keep track of their contribution. There are then (j+1)^d options for the nodes b_n,1, …, b_n,d (since there are j+1 possibilities for each of those nodes).
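The corollary also gives a convenient way to compute s_d(n) in practice: multiply the truncated factors ∑_j (j+1)^d q^(jn). A short illustrative Python sketch (our own, not taken from the source):

def s_from_power_sums(d, N):
    # coefficients of prod_{n>=1} sum_{j>=0} (j+1)^d q^{jn}, up to q^N
    out = [1] + [0] * N
    for n in range(1, N + 1):
        fac = [0] * (N + 1)
        for j in range(N // n + 1):
            fac[j * n] = (j + 1) ** d
        out = [sum(out[i] * fac[k - i] for i in range(k + 1)) for k in range(N + 1)]
    return out

s2 = s_from_power_sums(2, 12)
print(s2)                                             # s_2(0), ..., s_2(12)
print(all(s2[2 * n + 1] % 4 == 0 for n in range(6)))  # the d = 2 case above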
We utilize this new view of the generating function for s_d(n) to easily prove a variety of arithmetic properties satisfied by these functions. Prior to doing so, we remind the reader of two well–known results which will also be helpful below:
(Euler's Pentagonal Number Theorem)
We have
∏_m=1^∞ (1-q^m) = ∑_n=-∞^∞ (-1)^n q^n(3n+1)/2.
See <cit.>.
(Jacobi)
We have
∏_m=1^∞ (1-q^m)^3 = ∑_n=0^∞ (-1)^n (2n+1) q^n(n+1)/2.
See <cit.>.
We now prove the following theorem which yields an infinite family of Ramanujan–like congruences modulo arbitrarily large powers of 2.
For all d≥ 1 and all n≥ 0,
s_d(2n+1) ≡ 0 (mod 2^d).
For fixed d≥ 1,
∑_n=0^∞ s_d(n) q^n
= ∏_n=1^∞( 1 + 2^dq^n + 3^dq^2n +4^dq^3n + 5^dq^4n + 6^dq^5n + …)
≡ ∏_n=1^∞( 1 + 0q^n + 3^dq^2n +0q^3n + 5^dq^4n + 0q^5n + …) (mod 2^d)
= ∏_n=1^∞( 1 + 3^dq^2n + 5^dq^4n + …).
In the penultimate line above, we have used the fact that 2^d | (2j)^d for all j≥ 1. The last expression above is a function of q^2, and this immediately implies the result.
We note, in passing, that the d=2 case of Theorem <ref> was proven by Andrews and Paule <cit.>.
We next prove the following overarching lemma that will provide us with the machinery needed to prove infinite families of divisibility properties satisfied by s_d(n).
Let k and r be nonnegative integers and let m≥ 2. For all n≥ 0, s_ϕ(m)k+r(n) ≡ s_r(n) (mod m), where ϕ(m) is Euler's totient function.
Note that
∑_n=0^∞ s_ϕ(m)k+r(n)q^n
= ∏_n=1^∞( 1 + 2^ϕ(m)k+rq^n + 3^ϕ(m)k+rq^2n +4^ϕ(m)k+rq^3n.
. + 5^ϕ(m)k+rq^4n + 6^ϕ(m)k+rq^5n + …)
= ∏_n=1^∞( 1 + (2^ϕ(m))^k· 2^r q^n + (3^ϕ(m))^k· 3^rq^2n +(4^ϕ(m))^k· 4^rq^3n.
. + (5^ϕ(m))^k· 5^rq^4n + (6^ϕ(m))^k· 6^rq^5n + …)
≡ ∏_n=1^∞( 1 + 2^r q^n + 3^rq^2n +4^rq^3n + 5^rq^4n + 6^rq^5n + …) (mod m)
= ∑_n=0^∞ s_r(n)q^n.
The penultimate line above follows from Euler's generalization of Fermat's Little Theorem.
We now transition to a consideration of various families of congruences modulo 5. We first prove an infinite family of such congruences thanks to Lemma <ref> which uses the function s_1(n) as its starting point.
For all k≥ 0 and all n≥ 0,
s_4k+1(5n+2) ≡ s_4k+1(5n+3) ≡ s_4k+1(5n+4) ≡ 0 (mod 5).
Note that
∑_n=0^∞ s_1(n)q^n
= ∏_n=1^∞1/(1-q^n)^2
= ∏_n=1^∞(1-q^n)^3/(1-q^n)^5
≡ ∏_n=1^∞(1-q^n)^3/(1-q^5n) (mod 5)
= ( ∑_j=0^∞ (-1)^j(2j+1)q^j(j+1)/2) ∏_n=1^∞1/(1-q^5n)
thanks to Lemma <ref>. Since ∏_n=1^∞1/(1-q^5n) is a function of q^5, the above product will satisfy a congruence modulo 5 in an arithmetic progression of the form 5n+r, for 0≤ r≤ 4, if and only if such a congruence is satisfied by
( ∑_j=0^∞ (-1)^j(2j+1)q^j(j+1)/2).
Since 5n+2 and 5n+4 are never triangular numbers, this immediately tells us that, for all n≥ 0,
s_1(5n+2) ≡ s_1(5n+4) ≡ 0 (mod 5).
Moreover, we know that 5n+3 = j(j+1)/2 if and only if j≡ 2 (mod 5), and in such cases, 2j+1 ≡ 0 (mod 5). Because of the presence of the factor 2j+1 in the sum above, we then see that, for all n≥ 0,
s_1(5n+3) ≡ 0 (mod 5)
since 2· 2+1 = 5 ≡ 0 (mod 5).
The remainder of the proof immediately follows from Lemma <ref>.
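The two facts about triangular numbers used in this proof (that no triangular number is congruent to 2 or 4 modulo 5, and that residue 3 occurs only for j ≡ 2 (mod 5)) amount to a finite check modulo 5, made explicit by the following illustrative Python snippet:

# triangular numbers j(j+1)/2 modulo 5, as a function of j modulo 5
residues = {j % 5: (j * (j + 1) // 2) % 5 for j in range(5)}
print(residues)                        # {0: 0, 1: 1, 2: 3, 3: 1, 4: 0}
print(sorted(set(residues.values())))  # [0, 1, 3]: residues 2 and 4 never occur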
We note, in passing, that Baruah and Sarmah <cit.> also provided a proof of the k=0 case of Theorem <ref>.
We next consider the following modulo 5 congruences, with a focus on the case d=2.
For all k≥ 0 and all n≥ 0,
s_4k+2(25n+23) ≡ 0 (mod 5).
We begin by noting that
∑_n=0^∞ s_2(n) q^n
= ∏_n=1^∞1+q^n/(1-q^n)^3
= ∏_n=1^∞1-q^2n/(1-q^n)^4
= ∏_n=1^∞(1-q^2n)(1-q^n)/(1-q^n)^5
≡ ∏_n=1^∞(1-q^2n)(1-q^n)/(1-q^5n) (mod 5)
= ( ∑_j, k=-∞^∞ q^2(j(3j+1)/2) + k(3k+1)/2) ∏_n=1^∞1/(1-q^5n)
thanks to Lemma <ref>.
At this point, we consider those terms of the form q^25n+23 in the power series representation of the last expression above. There are no terms of the form q^25n+8, q^25n+13, q^25n+18, or q^25n+23 in the double sum above, although there are terms of the form q^25n+3. Thus, once we multiply the double sum above by
∏_n=1^∞1/(1-q^5n) = ∑_n=0^∞ p(n)q^5n,
we see that the only way to obtain a term of the form q^25n+23 which can contribute to s_2(25n+23) modulo 5 is to multiply by a term of the form q^5(5m+4) from the power series representation of (<ref>). This will then contribute, modulo 5, to the value of s_2(25n+23) by multiplying by the value p(5m+4). Thanks to Ramanujan's well–known result that, for all m ≥ 0, p(5m+4) ≡ 0 (mod 5), we then know that, for all n≥ 0, s_2(25n+23) ≡ 0 (mod 5). The theorem then follows thanks to Lemma <ref>.
We can also show that the functions s_4k+3 satisfy a rich set of congruences modulo 5 via the following:
For all k≥ 0 and all n≥ 0,
s_4k+3(5n+2) ≡ s_4k+3(5n+4) ≡ 0 (mod 5).
We begin with the following generating function manipulations:
∑_n=0^∞ s_3(n) q^n
= ∏_n=1^∞1+4q^n+q^2n/(1-q^n)^4
≡ ∏_n=1^∞(1-q^n+q^2n)/(1-q^n)^4 (mod 5)
= ∏_n=1^∞1+(-q^n)+(-q^n)^2/(1-q^n)^4
= ∏_n=1^∞1-(-q^n)^3/(1-(-q^n))(1-q^n)^4
= ∏_n=1^∞1+q^3n/(1+q^n)(1-q^n)^4
= ∏_n=1^∞(1-q^6n)(1-q^n)^2/(1-q^3n)(1-q^2n)(1-q^n)^5
≡ ∏_n=1^∞(1-q^6n)(1-q^n)^2/(1-q^3n)(1-q^2n)∏_n=1^∞1/(1-q^5n) (mod 5)
We now consider the power series representation of
F(q):=∏_n=1^∞(1-q^6n)(1-q^n)^2/(1-q^3n)(1-q^2n),
which turns out to be a well–known modular form; indeed, it appears as the seventh modular form in Mersmann's list of the 14 primitive eta–products which
are holomorphic modular forms of weight 1/2. See <cit.> and <cit.> for additional details. After some elementary calculations (for example, by noting that qF(q^8) appears in <cit.>), we find that
F(q)= ∑_t=0^∞ q^t(t+1)/2 - 3∑_t=0^∞ q^(3t+1)(3t+2)/2.
As noted in the proof of Theorem <ref>, 5n+2 and 5n+4 can never be triangular numbers. Since 1/2(3t+1)(3t+2) is always triangular, we conclude that s_3(5n+2) ≡ 0 (mod 5) and s_3(5n+4) ≡ 0 (mod 5). The case where k > 0 follows from Lemma <ref>.
Related to the above, we can also prove the following additional congruence family modulo 5.
For all k≥ 0 and n≥ 0,
s_4k+3(25n+23) ≡ 0 (mod 5).
As with the proof of Theorem <ref> above, we begin by considering s_3(n). Using the notation from above, recall that
∑_n=0^∞ s_3(n) q^n
≡ ( ∑_t=0^∞ q^t(t+1)/2 - 3∑_t=0^∞ q^(3t+1)(3t+2)/2) ∏_n=1^∞1/(1-q^5n) (mod 5)
= ( ∑_t=0^∞ q^t(t+1)/2 - 3∑_t=0^∞ q^(3t+1)(3t+2)/2) ( ∑_n=0^∞ p(n)q^5n).
We then consider ways that q^25n+23 can arise when (<ref>) is expanded. We note that no numbers of the form 25n+8, 25n+13, 25n+18, or 25n+23 can be represented as a triangular number. Thus, the only way to obtain a term of the form q^25n+23 in (<ref>) is to multiply by a term of the form p(5m+4)q^5(5m+4) (in the same way as was discussed in the proof of Theorem <ref>). This implies that, for all n≥ 0, s_3(25n+23) ≡ 0 (mod 5). The full result follows from Lemma <ref>.
We close this section with one last infinite family of congruences, this time modulo 11.
For all n≥ 0,
s_10k+1(121n+111) ≡ 0 (mod 11).
In the work of Gordon <cit.>, we set k=2 and r=2, so that α=1. We then see that, if 24n ≡ 2 (mod 11^2), then 12n ≡ 1 (mod 121), or n ≡ 111 (mod 121). Gordon's work then implies that, for all n≥ 0, s_1(121n+111) ≡ 0 (mod 11).
The full result then follows from Lemma <ref>.
§ CLOSING THOUGHTS
We close this work with the following set of conjectured congruence families modulo 7:
For all k≥ 0 and n≥ 0,
s_6k+1(49n+17) ≡ s_6k+2(49n+17) ≡ 0 (mod 7),
s_6k+1(49n+31) ≡ s_6k+2(49n+31) ≡ 0 (mod 7),
s_6k+1(49n+38) ≡ s_6k+2(49n+38) ≡ 0 (mod 7),
s_6k+1(49n+45) ≡ s_6k+2(49n+45) ≡ 0 (mod 7).
We would be very interested to see elementary proofs of the above results.
In addition, Theorems <ref> and <ref> provide three Ramanujan congruences for s_3(n), namely
s_3(2n+1) ≡ 0 (mod 2)
s_3(5n+2) ≡ 0 (mod 5)
s_3(5n+4) ≡ 0 (mod 5),
by taking d=3 and k=0, respectively. We would like to know if there are any other Ramanujan congruences for s_3(n) and if any exist for d ≥ 4. We are especially interested in knowing if s_d(n) satisfies only finitely many Ramanujan congruences for fixed d ≥ 3.
alpha
|
http://arxiv.org/abs/2307.00639v1
|
20230702192143
|
Emergent Spatiotemporal Organization in Stochastic Intracellular Transport Dynamics
|
[
"Kunaal Joshi",
"Harrison York",
"Charles S. Wright",
"Rudro R. Biswas",
"Senthil Arumugam",
"Srividya Iyer-Biswas"
] |
physics.bio-ph
|
[
"physics.bio-ph",
"q-bio.QM",
"q-bio.SC"
] |
The interior of a living cell is an active, fluctuating, and crowded environment. Yet, it maintains a high level of coherent organization, which is readily apparent in the intracellular transport network. Membrane-bound compartments called endosomes play a key role in carrying cargo, in conjunction with myriad components including cargo adaptor proteins, membrane sculptors, motor proteins, and the cytoskeleton. These components coordinate to effectively navigate the crowded cell interior and transport cargo to specific intracellular locations, even though the underlying protein interactions and enzymatic reactions exhibit stochastic behavior. A major challenge is to measure, analyze, and understand how, despite the inherent stochasticity of the constituent processes, the collective outcomes show an emergent spatiotemporal order that is precise and robust. This review focuses on this intriguing dichotomy, providing insights into the known mechanisms of noise suppression and noise utilization in intracellular transport processes, and also identifies opportunities for future inquiry.
stochastic dynamics, noise, intracellular transport, endosomal trafficking, emergent order
Emergent Spatiotemporal Organization in Stochastic Intracellular Transport Dynamics
Kunaal Joshi,^1,∗ Harrison York,^2,∗ Charles S. Wright,^1,2 Rudro R. Biswas,^1 Senthil Arumugam^2,3,4,5,†, and Srividya Iyer-Biswas,^1,6,†
^1Department of Physics and Astronomy, Purdue University, West Lafayette, IN 47907, USA
^2Monash Biomedicine Discovery Institute, Faculty of Medicine, Nursing and Health Sciences, Monash University, Clayton/Melbourne, VIC 3800, Australia
^3ARC Centre of Excellence in Advanced Molecular Imaging, Monash University, Clayton/Melbourne, VIC 3800, Australia
^4European Molecular Biology Laboratory Australia (EMBL Australia), Monash University, Clayton/Melbourne, VIC 3800, Australia
^5Single Molecule Science, University of New South Wales, Sydney, NSW 2052, Australia
^6Santa Fe Institute, Santa Fe, NM 87501, USA
^∗These authors contributed equally to this work.
^†To whom correspondence should be addressed: [email protected] and [email protected].
August 1, 2023
================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
Biological functions and chemical reactions within eukaryotic cells are spatially restricted and compartmentalized in both membrane-bound and membrane-less organelles. To reach specific destinations within the cell, molecules and organelles rely on cues for guidance. While these processes display consistency at a macroscopic level, they are intrinsically stochastic due to probabilistic elements at the subcellular level. Thermal fluctuations at the molecular scale influence diffusion, molecule binding, and reaction kinetics. Low numbers of components can lead to significant fluctuations relative to the mean. Even genetically identical cells can exhibit stochastic variations in protein copy numbers, which can be amplified in domains with limited binding capacity like lipid membranes. Interestingly—and importantly—this noise is not debilitating in complex biological processes that involve multiple components with diverse interactions, such as intracellular trafficking.
This review focuses on the interplay between constituent stochastic dynamics and deterministic outcomes in cellular organization. We describe (1) the components and organization of the vesicular transport network, (2) the physical and biochemical processes that govern cargo delivery within the cell, (3) examples of emergent trafficking processes that ensure robust transport outcomes, and (4) methods amenable to the study of fast, stochastic transport processes over sufficiently long periods. Finally, we discuss the future prospects of studying stochastic transport phenomena via whole-cell measurements and integrating imaging, mathematics, and biology to uncover underlying mechanisms.
§ ORGANIZATION OF THE ENDOSOMAL TRANSPORT NETWORK
Eukaryotic cells have a complex organization, which enables precise control of biochemical reactions through compartmentalization, both in membrane-bound organelles and through intracellular positioning. It is readily observable that cells show a heterogeneous distribution of organelles and cytoskeleton, as well as of cytoplasmic proteins and nucleic acids, which can be dynamically adjusted in response to cell identity and state. Cells utilize a vesicular transport system to move cargo between distinct intracellular locations, including movements of cargo between organelles, of products destined to be released via exocytosis, and of materials internalized from the extracellular milieu to be delivered to specific destinations (Fig. <ref>). Furthermore, these transport components have been increasingly identified to play critical roles in the intracellular positioning of almost every membrane-bound organelle.
Multiple layers of organization tightly regulate the transport and motility of these vesicles, ensuring the sorting and precise localization of internalized cargo. In addition to spatial movement, the identity of the transport vesicle, which is reflected in the lipid and protein composition of the cytoplasmic-facing membrane, shifts via endosomal conversion. This dynamic and stochastic process involves the exchange or conversion of proteins and/or lipids, leading to progressive biochemical maturation of the vesicle, and is crucial for the sorting and processing of cargo within the endosomal system <cit.>.
§.§ Cytoskeletal elements and motor proteins
The dynamic structure of mammalian cells is maintained through three complementary cytoskeletal systems, composed of microtubules, actin, and intermediate filaments (IFs). Each of these filament systems is composed of polymerized protein subunits that self-assemble and disassemble in a dynamic fashion. They are able to anchor to membranes and transmit forces, providing structural rigidity and enabling cellular remodeling. By anchoring to organelles, the cytoskeleton also exerts control over subcellular localization, thereby playing a central role in cellular organization. While IFs are largely involved in nuclear structure and cell-cycle control, an emerging body of results implicates IFs in direct control of vesicular transport (reviewed by <cit.>); furthermore, IFs are able to indirectly affect trafficking through crosstalk with the microtubule and actin cytoskeleton.
Long-range transport throughout the cell is primarily enabled by microtubules, elongated cytoskeletal filaments composed of polymerized tubulin dimers with inherent polarity, defined by a minus end at the microtubule organizing center (MTOC) near the nucleus, where the filaments are polymerized, and a plus end toward the plasma membrane, where tubulin subunits associate (and disassociate). Associating with these microtubules are the motor proteins dynein and the kinesin family, which bind to membrane vesicles as well as other organelles as part of multicomponent complexes. Following membrane binding, these motor proteins are able to “walk” along microtubules through sequential ATP-consuming cycles, which induce conformational changes that step the protein along the microtubule <cit.>. These motor proteins sense the directionality of microtubules and step preferentially toward a specific end, with dynein showing minus end-directed trafficking and the kinesin protein family predominantly plus end-directed motility <cit.>.
§.§ Membrane compartment identifiers
A key component governing the structural and functional identities of endosomes is the family of membrane-bound proteins localized to the cytoplasmic-facing membranes. A notable example is the Rab family proteins. These are small GTPases that act as determinants of endosomal character enabling specific binding by a diverse range of proteins <cit.>. The regulation of Rab GTP–GDP binding is crucial for controlling endosomal activity and involves various protein classes, including GDP exchange factors (GEFs), GTPase-activating proteins (GAPs), and GDP dissociation inhibitors (GDIs) <cit.>. These recruited proteins, which bind to specific Rab proteins, influence the fate of the vesicle and its cargo by controlling features such as vesicular motility, via association with motor proteins and adaptors (Sec. <ref>); membrane budding and tubulation, via membrane-shaping proteins (Sec. <ref>); and the modulation of vesicular fusion, via membrane-tethering proteins and the fusion machinery (Sec. <ref>).
§.§ Lipid composition
Endosomal identity is also determined by lipid composition, especially through key signaling lipids such as phosphoinositides (PIs). Despite being a small fraction of cellular membranes (less than 1% of the total phospholipid pool), phosphoinositides play a critical role in organizing the membrane structure of the vesicular transport system, in addition to the plasma membrane, Golgi apparatus, and endoplasmic reticulum (ER). These membrane phospholipids consist of a myo-inositol ring that can undergo reversible phosphorylation and dephosphorylation at the 3-, 4-, and 5-OH groups via specific kinases and phosphatases. These enzymes are recruited to their target organelles through association with specific membrane-bound proteins such as Rab proteins. This establishes a spatial heterogeneity of phosphoinositides across cellular sub-compartments, which become associated with specific vesicular populations within the cell <cit.>. The interconversion of these lipids is highly dynamic and changes as vesicles mature and lipids are exchanged, such as through membrane fusion <cit.>. As such, these lipids are able to act as molecular “signposts” to orchestrate the spatiotemporal recruitment of membrane proteins containing a domain that recognizes a particular phosphoinositide (e.g., the PH and FYVE domains). Despite their essential role in cellular organization and the growing list of diseases associated with phosphoinositide dysfunction, the mechanisms by which phosphoinositide conversion is spatiotemporally controlled remain largely unknown. These aspects are reviewed in <cit.>.
§.§ Effector proteins
A diverse array of endosomal effector proteins interacts with Rab proteins, phosphoinositides, and the cytoplasmic face of transmembrane receptors located within the endosomal membrane. These effector proteins typically exhibit weak affinity for endosomes, allowing for competition and exchange among effectors that recognize the same sequences. Stable binding of effectors can be influenced by factors such as clustering, recognition of membrane curvature, and coincidence detection of specific cognate protein and phosphoinositide species on the same membrane <cit.>. Once bound, these effector proteins can influence cargo transport through endosomal tethering and fusion, as well as cargo sorting followed by subsequent fission.
§.§ Membrane sculpting, fusion, and fission
Alongside vesicular motility, cargo transport depends on transit through the correct endosomal compartments, with most cargoes typically passing through multiple endosomal populations en route to their destination. A major step in this process is the sorting of cargoes at the early endosomal level by tubulation and fission, together with the fusion of newly generated compartments into the next set of compartments. Membrane deformation leading to tubulation, reshaping, or scission results from the action of motor proteins that pull on membranes <cit.>, actin polymerization on the membrane by actin nucleators <cit.>, or the interaction of curvature-inducing membrane-binding proteins, collectively termed here membrane sculptors <cit.>. The main superfamily of such sculptors is the Bin/amphiphysin/Rvs (BAR) domain-containing family, which includes proteins that control membrane curvature in endosomal fission, maturation, and endocytosis; these proteins are also found in other organelles such as mitochondria and at the plasma membrane <cit.>. Endosomal fusion, in turn, depends on Rab GTPases, tether molecules such as EEA1, and SNAREs. Other organelles also play a role in endosomal fission: a cargo-containing tubule is scissioned from the parent endosome through the action of the endoplasmic reticulum and the actin cytoskeleton.
§ STOCHASTIC MODELS FOR INTRACELLULAR TRANSPORT
The complex interplay between membrane-bound proteins, phosphoinositides, and effector proteins—as well as enzymes, motor proteins, force-generating proteins, and cargo-bound transmembrane proteins—enables the precise transport of materials within the cell, regulates vesicular–organellar interactions (such as lysosomal fusion), and modulates interactions with cytosolic proteins, in addition to signal processing of receptors that are internalized following activation. Yet, each of these processes has intrinsic stochasticity. (Basic aspects are reviewed in Appendix A.) The focus of this review is how deterministic outcomes emerge in whole-cell phenomenologies despite noisy constituent dynamics. In addition to the stochastic models discussed in this review, the mean completion times for many other transport processes have also been modeled, such as the mean search times of particles diffusing on a network with given properties <cit.>. However, we restrict ourselves here to examples of processes in which either the full distribution or the fluctuations about the mean of the relevant variables have been characterized, and refer the reader to the excellent review by Mogre et al. for details on other models <cit.>.
§.§ Undirected transport processes
Biological systems utilize various modes of transport, which we broadly classify as “undirected” and “directed”. These processes typically possess different effective descriptions characterizing motion over different length scales, ranging from nanometers to micrometers (Fig. <ref>).
§.§.§ Diffusion (passive)
At the shortest scales of motion, all biomolecules undergo diffusion, a passive mode of transport driven by conversion of fast thermal fluctuations of light molecules, such as water, in the surrounding medium into comparatively slower motion of the heavy biomolecule under observation <cit.>. Heavy biomolecules moving through the cytoplasmic medium continually collide with lighter fluid molecules, leading to a disjointed trajectory termed Brownian motion, characterized by random small ballistic (inertial) movements between successive collision events (Fig. <ref>a). Over timescales slower than those characterizing these rapid collisions, a simpler behavior emerges, which can be modeled using two complementary perspectives <cit.>: the Langevin equation governing the random motion of the biomolecule position x(t),
ẋ(t)=F(x(t))/γ+η(t),
and the equivalent Fokker-Planck (FP) equation governing the time evolution of the probability density of locating the biomolecule:
∂_t p(x,t)=∇·[-F(x,t) p(x,t)/γ + D ∇ p(x,t)].
Herein, F(x) is a generic external force field; γ is the effective inverse mobility of the biomolecule in the medium, yielding a drift velocity F(x(t))/γ; D is the diffusion coefficient or diffusivity; and each Cartesian component of η(t) is an independent Gaussian white noise with correlations ⟨η_i(t) η_j(t')⟩ = 2D δ_ij δ(t-t'). The mobility and diffusivity are related through the Einstein relation, γ D = k_BT <cit.>, where k_B is Boltzmann's constant and T is the temperature of the medium. Since γ grows linearly with the size of the biomolecule (generalized Stokes' law), D is a decreasing function of molecular size.
When the external force is zero, the motion of the biomolecule is characterized as (undirected) diffusion. A characteristic property of undirected diffusive motion is that the mean square of each (Cartesian) component of displacement of a single biomolecule from its starting location increases linearly with time <cit.>:
⟨(x_i(t) - x_i(0))^2⟩=2 D t.
One can contrast this with simple ballistic motion, where the mean square displacement grows as the square of the elapsed time. When large numbers of the same biomolecule undergo diffusion, p(x,t) can be replaced by the number density of biomolecules, n(x,t), in Eq. (<ref>), yielding the familiar drift-diffusion equation (which reduces to Fick's second law when F = 0) governing the passive spreading of biomolecules in a fluid medium:
∂_t n(x,t)=∇·[-F(x) n(x,t)/γ + D ∇ n(x,t)].
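As a concrete illustration of how the Langevin description above connects to the linear growth of the mean square displacement, the short Python sketch below integrates the force-free (F = 0) overdamped Langevin equation with the Euler–Maruyama scheme and checks that the ensemble-averaged mean square displacement grows as 2Dt. This is our own minimal example with arbitrary parameter values, not code from the cited literature.

import numpy as np

rng = np.random.default_rng(0)
D = 0.1                    # diffusivity (arbitrary units)
dt = 1e-3                  # integration time step
n_steps, n_particles = 2000, 2000

# Euler-Maruyama for dx = sqrt(2 D) dW (force-free overdamped Langevin, 1D)
increments = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_particles, n_steps))
x = np.cumsum(increments, axis=1)          # trajectories with x(0) = 0

t = dt * np.arange(1, n_steps + 1)
msd = np.mean(x**2, axis=0)                # ensemble-averaged mean square displacement
print("MSD(t) / (2 D t) at the final time:", msd[-1] / (2 * D * t[-1]))   # close to 1

Each individual trajectory is irregular, yet the ensemble-averaged statistic reproduces the deterministic 2Dt law, anticipating the role of large copy numbers discussed later in this review.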
Since diffusion speeds up for smaller biomolecules, as discussed above, small proteins of 1–5 nm such as motor and adaptor proteins diffuse freely and efficiently, but vesicles (100–1000 nm) are largely confined, with their free motion further compromised by molecular crowding <cit.>. However, diffusion still plays a crucial role in vesicular transport; for example, in motor proteins dispersed in the cytoplasm searching for binding partners and microtubules, and vesicles themselves searching for targets not located directly on the microtubules (see Sec. <ref>).
Diffusion is not as useful as energy-consuming ballistic motion for the long-range movement of cargo within large carriers such as vesicles, but the situation is different for smaller molecules. Even accounting for the reduced diffusion of molecules within the crowded and active intracellular environment, the average time taken for a small protein to diffuse from the plasma membrane to the perinuclear region of a cell (∼ 10μm) is comparable to, or shorter than, the time taken for a motor protein to cross this distance (moving at ∼ 800 nm/s in vivo) <cit.>. Purely considering the speeds of these processes, it appears counterproductive to encapsulate cargo into larger, constrained vesicles. However, the control of interactions through compartmentalization and directed transport is essential to cellular organization, as it limits spurious interactions and the mislocalization of cargo. For example, signals are more faithfully transmitted from activated receptors to the nucleus by trafficking groups of endocytosed receptors toward the perinuclear region than by activating secondary messengers in the periphery of the cell, which must diffuse toward the nucleus and thus participate in additional interactions such as dephosphorylation and deactivation <cit.>. The simple diffusion process is also popular as a theoretical model since intuition-enhancing analytic solutions exist for many problems; see Appendix A for extended discussion.
§.§.§ Subdiffusion
For many intracellular transport processes where biomolecules passively move through a complex quasi-fluid medium, the mean square displacement grows sublinearly with time,
⟨(x_i(t) - x_i(0))^2⟩∝ t^α, 0<α<1.
This kind of motion, termed subdiffusion, lies between the limits of simple diffusion (α = 1, see Eq. (<ref>)) and “caged” motion (α = 0, corresponding to the biomolecule “rattling” inside a small bounded region). As expected, subdiffusive motion is much slower than ballistic motion, which is characterized by α=2.
Subdiffusive motion is found within the crowded cytosolic environment, where soluble proteins, lipids, cytoskeletal filaments, and organelles occupy up to 50% of the volume <cit.>. It is extremely inefficient as a mechanism for long-range transport of material. Examples include the motion of messenger RNA molecules <cit.>, chromosomal loci in bacteria <cit.>, and lipid granules in yeast cells <cit.>. A simple rule of thumb for passive motion inside the cell is that smaller particles (corresponding to tens of nanometers in size) tend to exhibit diffusive motion, while particles an order of magnitude larger tend to exhibit subdiffusive motion <cit.>.
Two mechanisms are commonly invoked to explain the emergence of subdiffusion <cit.>. The first theory assumes that biomolecules pause briefly at binding sites between patches of diffusive motion, giving rise to a non-exponential distribution of wait times between steps in the discrete Brownian motion framework. Such processes are typically modeled as continuous time random walks <cit.> and are weakly non-ergodic. In the second theory, the cytoplasm is assumed to behave like a viscoelastic fluid due to the presence of elastic elements such as nucleic acids and cytoskeletal filaments. This structure leads to long-term correlations in noise, like a form of medium memory <cit.>. The corresponding motion of the biomolecule can then be modeled using fractional Brownian motion or a fractional Langevin equation <cit.>.
In addition to these mechanisms, diffusive motion in the presence of a high concentration of obstacles can also appear subdiffusive at short timescales <cit.>. Theories of (sub)diffusion can also include molecular crowding, traps, and confinement <cit.>. Some studies suggest that multiple mechanisms can coexist <cit.>. Thus, determining which mechanism is the source of the experimentally observed subdiffusive behavior cannot be accomplished by simply considering the scaling of variance. Other measures, such as ergodicity, are needed to deduce the mechanisms underlying a specific instantiation of subdiffusion.
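To make the first (trapping) mechanism concrete, the sketch below simulates a one-dimensional continuous time random walk with Pareto-distributed waiting times of exponent α < 1, for which the ensemble mean square displacement is expected to grow roughly as t^α. The walker and all parameter choices are purely illustrative and are not fit to any of the experiments cited above.

import numpy as np

rng = np.random.default_rng(1)
alpha = 0.6                         # waiting-time exponent (0 < alpha < 1)
n_walkers = 2000
t_obs = np.logspace(0, 3, 10)       # observation times
t_max = t_obs[-1]

msd = np.zeros_like(t_obs)
for _ in range(n_walkers):
    t, x, idx = 0.0, 0.0, 0
    positions = np.zeros_like(t_obs)
    while t < t_max:
        tau = (1.0 - rng.random()) ** (-1.0 / alpha)   # heavy-tailed wait, P(tau > s) ~ s^(-alpha)
        t_new = t + tau
        while idx < len(t_obs) and t_obs[idx] <= t_new:
            positions[idx] = x                         # the walker is stuck at x during the wait
            idx += 1
        x += rng.choice([-1.0, 1.0])                   # unit jump after the wait
        t = t_new
    msd += positions**2
msd /= n_walkers

# the log-log slope should come out near alpha (here 0.6) rather than 1
slope = np.polyfit(np.log(t_obs[2:]), np.log(msd[2:]), 1)[0]
print("measured MSD exponent ~", round(slope, 2))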
§.§.§ Active diffusion
The densely packed cytoplasm and cytoskeletal filaments enable an alternative mode of transport, termed active diffusion. This random intracellular motion is achieved by the active force fluctuations of these cytoskeletal elements <cit.>. The summation of many ATP-driven contractile processes leads to random motion of particles, albeit at higher displacements than those of constrained diffusion due to the “stirring” of the cytoplasm <cit.>. Active diffusion is important in a range of systems, especially in larger cells with sizes up to 100 µm. Active diffusion can be much faster than passive diffusion due to an enhancement of the diffusion constant (the value of α still remains 1 when excluding effects that lead to subdiffusion or drift), and can additionally be modulated in different cells, or even spatially within a single cell <cit.>.
§.§ Directed transport
Although diffusion is effective for transport over short distances without incurring energetic costs, it becomes inefficient over longer intracellular distances for large objects such as vesicles. Additionally, its unbiased nature precludes spatial sorting. These challenges are overcome via the directed transport processes detailed below. Through the active expenditure of energy, these directed transport processes permit precise and directed movement of vesicles and cargo <cit.>.
§.§.§ Advection
Advection refers to a net overall flow of the cytoplasm in a particular direction, usually due to flows generated by the actomyosin cortex. In the simplest models, the cytoplasm is treated as a linearly viscous (i.e., Newtonian) fluid and the advective effect of net flow is incorporated into a background drift velocity, v_fl, which replaces the F/γ term in Eq. (<ref>). The inclusion of this effect makes the mean square displacement increase quadratically with time at large times (i.e., α=2). However, the fluctuations about the time-dependent mean continue to be diffusive, i.e., ⟨(x(t) - ⟨x(t)⟩)^2⟩ ∝ t, which dominates the mean square displacement at short times. The relative importance of drift over diffusion for a particle's advective motion over a lengthscale L is given by the dimensionless Péclet number, Pe = L|v_fl|/D. Thus, the drift term dominates at longer lengthscales (Pe≫ 1), while the motion is predominantly diffusive at shorter lengthscales (Pe≪ 1). Examples where motion is dominated by drift, owing to the long lengthscales involved, include cytoplasmic streaming in plant cells <cit.> and Drosophila oocytes <cit.>.
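For orientation, the following tiny sketch evaluates Pe for two illustrative cases; the function name and all numerical values for D, v_fl, and L are our own rough, assumed orders of magnitude, not measurements.

def peclet(L_um, v_um_per_s, D_um2_per_s):
    """Dimensionless Peclet number Pe = L |v_fl| / D."""
    return L_um * abs(v_um_per_s) / D_um2_per_s

# a vesicle-sized object (assumed D ~ 0.01 um^2/s) advected at ~0.1 um/s across ~10 um
print(peclet(10, 0.1, 0.01))   # Pe = 100.0 : drift-dominated
# a small protein (assumed D ~ 10 um^2/s) under the same flow and over the same distance
print(peclet(10, 0.1, 10))     # Pe = 0.1   : diffusion-dominated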
The preceding model can be improved by treating the cytoplasm as a poroelastic material consisting of a fluid phase interacting with an elastic solid phase <cit.>. The upgraded treatment has been shown to more accurately reproduce the flow patterns arising due to blebbing, motility, indentation, and cytoskeletal contractions <cit.>.
§.§.§ Motor protein driven cargo movement
The directed motion of cargo along cytoskeletal filaments such as microtubules requires energy expenditure in successive ATP-consuming cycles, which power specialized motor proteins capable of stepping along the filaments. Active motor transport has different effective descriptions at short, intermediate and long length- and timescales as discussed below <cit.>.
Short time scales: Stochastic motor movement
At the microscopic level, a stochastic motion of the motor protein on a filament, driven by thermal fluctuations and energy (ATP) consumption, can be modeled by the Brownian ratchet mechanism <cit.>. In this model, the motor stochastically jumps between different conformational states while moving on a single track (cytoskeletal filament) labeled by a one-dimensional (1D) coordinate x. In a given conformational state i, the motor undergoes Brownian motion inside a periodic potential V_i(x) with diffusion constant D_i. Both these quantities are specific to a given state, but all potentials have the same period, which is equal to the step size L of the motor. The central idea is that the states are ordered in such a way that the minima of the potentials are successively shifted forward. Thus, when transitioning cyclically between the states in that order, provided the motor rests long enough after each transition to slide into the forward-shifted minimum of the new potential, it can move forward step by step. A quasi-realistic three-state model of such a motor is sketched in Fig. <ref>e (i).
When no energy is supplied, detailed balance in thermodynamic equilibrium ensures that there cannot be any sustained directed motion of the motor, irrespective of the construction of its different conformational states (see Appendix A). However, motor proteins consume energy from ATP molecules in a nonequilibrium process that allows them to move in a directional manner along cytoskeletal filaments <cit.>. Over longer timescales spanning multiple steps, motor motion can be well approximated by simple diffusion with drift <cit.>, represented through Eq. <ref>, with the overall motor velocity entering as the drift; on this timescale, detailed information about the conformational states is averaged out.
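The essence of this argument can be seen in a minimal flashing-ratchet simulation, a simplified on/off variant of the multi-state scheme above in which an asymmetric sawtooth potential is stochastically switched on and off. All parameters below are arbitrary and the model is a caricature rather than a quantitative description of any real motor; the point is only that directed drift appears when the switching (energy input) is present and vanishes at equilibrium.

import numpy as np

rng = np.random.default_rng(2)

# Asymmetric sawtooth potential of period L and height V0 (in units of kT),
# with its minimum at x = a inside each period (a < L/2 makes one side steep).
L, a, V0 = 1.0, 0.2, 5.0
D, dt = 1.0, 1e-4                 # diffusivity and time step (units with gamma = 1, so kT = D = 1)
k_switch = 20.0                   # on/off switching rate of the potential
n_particles, n_steps = 2000, 20000

def force(x, on):
    """Minus the slope of the sawtooth; zero wherever the potential is switched off."""
    xm = np.mod(x, L)
    f = np.where(xm < a, V0 / a, -V0 / (L - a))   # always points toward the minimum at a
    return np.where(on, f, 0.0)

def mean_displacement(switching):
    x = np.zeros(n_particles)
    on = np.ones(n_particles, dtype=bool)
    for _ in range(n_steps):
        if switching:
            flip = rng.random(n_particles) < k_switch * dt
            on = np.where(flip, ~on, on)
        x += force(x, on) * dt + np.sqrt(2 * D * dt) * rng.normal(size=n_particles)
    return x.mean()

print("always on (equilibrium):   ", round(mean_displacement(False), 3))
print("flashing (nonequilibrium): ", round(mean_displacement(True), 3))

With the potential permanently on, detailed balance leaves the mean displacement near zero; with flashing, a systematic drift emerges, its direction set by the asymmetry of the sawtooth.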
Intermediate time scales: Cargo attached to multiple motors
As vesicular cargo moves along an intracellular cytoskeletal filament, it is usually attached to and moved by multiple motors of different types (Fig. <ref>). Although the motion of each individual motor is biased toward a particular direction, this direction could vary between different motors. For example, kinesin moves towards the (+) end of the microtubule, while dynein moves towards the (-) end <cit.> (Fig. <ref>(ii)). Moreover, motors stochastically attach to and detach from the filament and cargo, allowing the motion of cargo to vary in both magnitude and direction. At any given time, the configuration of the attached motors determines the net diffusivity and drift velocity of the cargo (Fig. <ref>e (ii)). Depending on the specifics of the model, different kinds of realistic cargo motion have been predicted.
Because motor proteins can work cooperatively or antagonistically, a “tug-of-war” scenario, characterized by bidirectional transport and stochastic stalling, has been proposed <cit.>. In the tug-of-war model, where kinesin and dynein motors exert opposing forces, the drift velocity of a particular state of the motor–cargo system is approximately linearly dependent on the net force resulting from the specific motors attached to the filament in that state <cit.>. However, in vivo observations suggest that dynein and kinesin may also exhibit inhibitory protein–protein interactions that contribute to stalling behavior, necessitating a more complex model than purely mechanical opposition <cit.>. An extension of this model that accounts for interactions with other particles moving along the same track, via the constraint that two particles cannot occupy the same site simultaneously, is the totally asymmetric simple exclusion process (TASEP). The simplest version of this model admits exact solutions for the stationary state <cit.>. Additional factors such as adsorption and desorption kinetics can be incorporated but require mean-field approximations or numerical methods to solve.
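A minimal Monte Carlo sketch of the open-boundary TASEP is given below; the entry and exit rates are arbitrary illustrative choices rather than values for any specific motor system. In the low-density phase (entry rate α < 1/2 < exit rate β) the stationary particle current approaches α(1-α) in units of the hop rate.

import numpy as np

rng = np.random.default_rng(3)
L_sites, alpha, beta = 100, 0.3, 0.7     # lattice size, entry rate, exit rate (hop rate = 1)
n_updates = 2_000_000
burn_in = n_updates // 2

occ = np.zeros(L_sites, dtype=int)        # 1 = site occupied by a motor/particle
mid = L_sites // 2
crossings = 0                             # forward hops across the middle bond after burn-in

for step in range(n_updates):
    move = rng.integers(0, L_sites + 1)   # 0 = entry, 1..L-1 = internal bond, L = exit
    if move == 0:
        if occ[0] == 0 and rng.random() < alpha:
            occ[0] = 1
    elif move == L_sites:
        if occ[-1] == 1 and rng.random() < beta:
            occ[-1] = 0
    else:
        i = move - 1
        if occ[i] == 1 and occ[i + 1] == 0:
            occ[i], occ[i + 1] = 0, 1
            if i == mid and step >= burn_in:
                crossings += 1

time_measured = (n_updates - burn_in) / (L_sites + 1)   # one time unit = L+1 attempted moves
print("measured current J ~", round(crossings / time_measured, 3))
print("low-density phase prediction alpha*(1-alpha) =", alpha * (1 - alpha))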
Long timescales: Directed motion with intermittent search
Over long timescales, when transit lengths approach the lengthscales separating the source and target of cargo motion, the linear movements of motor protein-driven vesicles along microtubules are punctuated by frequent pauses, including periodic “turns” as motors desorb from one microtubule and hop to a nearby one. This process, characterized by frequent pauses and intermittent bidirectional motility, is termed a “random intermittent search process” (Fig. <ref>c). Such motion mimics efficient search strategies observed in animal foraging <cit.>. The frequent pauses observed in motor protein-driven vesicles may enable exploratory forays into the cytoplasm to locate target destinations <cit.>. Since most intracellular targets are not located at the MTOC, vesicles need to detach from microtubules in the vicinity of the target organelle in any case. Although the reaction kinetics during processive motion may not favor microtubule desorption, the diffusive state increases the vesicle's residence time near lower-affinity binding sites, allowing the required local reactions to take place. Thus, by incorporating stochastic unbinding, the entire intracellular space can be explored, enabling a “search” for target molecules, which may be cytoplasmic or membrane-bound (encountered through organelle–organelle interactions) and confined to a particular sub-cellular region. Prominent examples include phosphatases that are required to attenuate receptor signaling, and fusion with lysosomes, which are predominantly found within the perinuclear region of the cell <cit.>.
At these long time- and length-scales of vesicular motion where “searching” is necessary, the biologically relevant quantity is the First Passage Time (FPT) distribution of the search process <cit.> (Fig. <ref>b). For a single searcher, this usually yields results similar to those obtained for the 1D diffusion process (see Appendix A). As discussed in Sec. <ref> below, the search process becomes faster and more deterministic when multiple searchers are involved. This strategy is utilized when multiple cargoes are destined for the same target.
§.§ Vesicular maturation and sorting
Vesicular maturation is defined as an endosome losing a specific molecular identifier and gaining a new one; for example, APPL1 to EEA1 <cit.> or Rab5 to Rab7 <cit.>. Although these studies uphold a single endosome-centric view of maturation, it is evident that fission and fusion processes continually occur. Furthermore, the tubulation leading to fission may also involve a cargo-sorting step that provides additional functionality, moving cargoes in synchrony with the maturation.
Vagne and Sens <cit.> have modeled vesicular transport and cisternal maturation through a sequence of irreversible steps in which (a) a membrane-bound compartment receives an influx of a given component A through homotypic fusion (i.e., fusion that occurs when A is already present on the vesicle); (b) A subsequently converts to B through the maturation process; and finally (c) B exits through selective budding. For examples of such processes, see <cit.>. When these processes are modeled as a sequence of elementary Markovian reactions, under the extremely simplified assumption of constant rates, the mean FPT can be determined analytically <cit.>, while the full stochastic distribution is found numerically via the standard Gillespie algorithm <cit.>. The steady-state dynamics were found to be controlled by two parameters: r_1, the ratio of the rate of vesicle injection to that of budding, and r_2, the ratio of the rate of conversion (from A to B) on the compartment's surface to that of budding; r_1 controls the size and r_2 the composition of the vesicle <cit.>.
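The sketch below is our own minimal Gillespie rendering of an injection–conversion–budding scheme of this type, not the actual model of the cited work: the rates, the assumption that homotypic fusion delivers A at a rate proportional to the A already present, and the definition of the maturation time as the first time the compartment contains no A are all illustrative choices.

import numpy as np

rng = np.random.default_rng(4)

# illustrative rates (arbitrary units); fusion is chosen slower than conversion
# so that the compartment eventually loses all of its A
k_fuse, k_conv, k_bud = 0.5, 1.0, 1.0

def maturation_time(nA0=20):
    """Gillespie SSA; returns the first time at which no A remains on the compartment."""
    t, nA, nB = 0.0, nA0, 0
    while nA > 0:
        rates = np.array([k_fuse * nA, k_conv * nA, k_bud * nB])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        r = rng.choice(3, p=rates / total)
        if r == 0:
            nA += 1            # homotypic fusion delivers more A
        elif r == 1:
            nA -= 1            # conversion of one A into B
            nB += 1
        else:
            nB -= 1            # selective budding removes B
    return t

samples = np.array([maturation_time() for _ in range(1000)])
print("mean maturation (first-passage) time:", round(samples.mean(), 2))
print("coefficient of variation:            ", round(samples.std() / samples.mean(), 2))

Replacing these constant-rate, well-mixed assumptions with composition- or clustering-dependent rates (as discussed below) changes both the mean and the spread of this first-passage time, which is the quantity compared against experiments.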
An implicit assumption in the above model is that the components A and B do not tend to cluster on the endosomal surface (i.e., they are no more clustered than a random distribution). Recent experiments investigating the early endosomal maturation characterized by the conversion from APPL1 to EEA1 effector proteins have shown that these proteins form homotypic clusters on the surface of the endosome instead of attaching at random locations <cit.>. To account for a possible utility of such homotypic clustering, the above simple model was enhanced with processes that favor homotypic clusters on the endosomal surface, and the effects of clustering and collisions on the timing of early endosomal maturation were quantified with agent-based simulations in <cit.>; the simulation results suggest that this clustering mechanism significantly reduces the mean and variability of the conversion time (for further details see Secs. <ref> (clustering) and <ref>).
§ MITIGATING VERSUS UTILIZING NOISE
It is natural to view noise as detrimental to order. However, biological systems do not merely have to overcome the stochasticity inherent in molecular interactions—they sometimes also gain advantage from it <cit.>. The intracellular endosomal trafficking network presents a clear example of a system with significant levels and diverse sources of stochasticity that nevertheless achieves robust, reliable cargo transport; for example, the delivery of cargoes to lysosomes for degradation <cit.> or from endosomes to the Golgi network <cit.>.
§.§ Noise suppression
§.§.§ Mitigation of noise through use of large copy numbers
Large numbers of chemical species: Pooling
A widely prevalent strategy for reduction in noise follows from the law of large numbers (see Appendix A). Consequently, when large numbers of biochemicals are present in a cell, their transport or reactive behaviors can be deterministic even though each individual microscopic step is stochastic. Thus, even though a single diffusing biomolecule executes an irregular path governed by a single instantiation of the time evolution of the Langevin equation, Eq. (<ref>), when a large number of such molecules are considered, their density follows the deterministic diffusion law, Eq. (<ref>). The noise in the motion of individual particles is perceived as being eliminated. Similar arguments can also be put forward for chemical reactions between large numbers of biomolecules, when the reactant numbers evolve in a deterministic fashion despite the inherent, often large, stochasticity present in biochemical processes at the molecular level <cit.>. Thus, when reactants are present in large numbers, cellular processes that depend on them (yet are composed of ubiquitous molecule-level steps of chemical reactions and stochastic transport) become deterministic.
A modified strategy of “pooling” can also be used to lower noise in biochemical processes where some critical components are present in small numbers, such as during genetic transcription and translation <cit.>. The noise originating from precursor processes can be suppressed by maintaining a reservoir (pool) of the necessary substances produced by those processes. Keeping the precursor chemistry effectively noise-free, for instance at chemical equilibrium, ensures a steady (constant, in the case of chemical equilibrium) supply of the precursors, so that upstream chemical noise does not propagate into subsequent processes. For example, York et al. <cit.> showed that there exists an epidermal growth factor (EGF)–Ca^2+–APPL1 interaction that leads to the rapid desorption of APPL1 from pre-existing endosomes and its re-binding, via a distinct phosphotyrosine-binding domain, to freshly generated endosomes containing phosphorylated EGF receptor (EGFR). This then allows dynein recruitment and the highly processive re-localization of these endosomes to the ER-rich perinuclear region, which has been shown to facilitate EGFR deactivation <cit.>. This EGF–Ca^2+–APPL1–dynein nexus thereby leads to the tight control of the EGFR signaling window in response to large concentrations of EGF, imparting robustness to the cell's growth factor sensing.
Large numbers of searchers
Search processes can also utilize the presence of a large number of searchers for faster and more deterministic detection of a target (when compared to the same search being performed by a single searcher). Search processes can be modeled as FPT processes (Fig. <ref>d), with the FPT distribution for a single particle obtained from a stochastic process using the formalisms discussed in Sec. <ref>. Given the FPT distribution P(τ) for a single particle, the distribution of the FPT of the first of N independent searchers to reach the target becomes <cit.>:
P^(N)(τ)=N P(τ)[1-∫_0^τ P(τ')dτ']^(N-1).
Interestingly, this expression arises in a different biological context—that of emergent periodicity in the synchronized flashing of fireflies <cit.>, where both the mean and the variance of the net FPT distribution, P^(N)(τ), decrease as N increases, irrespective of the specific model underlying P(τ). Thus, increasing the number of searchers makes the search process faster and more deterministic. For more detailed reading, such as the calculation of the asymptotic limits of the composite FPT distribution of a large number of searchers starting from known single-particle FPT distributions as in Eq. (<ref>), see <cit.>.
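The effect is easy to verify numerically from the expression above. In the sketch below the single-searcher FPT is assumed, purely for illustration, to follow a Gamma distribution; the minimum over N independent draws is compared with the mean computed by integrating the composite N-searcher density.

import numpy as np
from scipy import stats, integrate

rng = np.random.default_rng(5)
single = stats.gamma(a=2.0)        # assumed single-searcher FPT distribution (illustrative)

for N in (1, 5, 25):
    # direct simulation: the arrival time of the first of N independent searchers
    first = rng.gamma(2.0, 1.0, size=(100_000, N)).min(axis=1)

    # composite density P_N(tau) = N P(tau) [1 - CDF(tau)]^(N-1), integrated numerically
    tau = np.linspace(0.0, 20.0, 4001)
    pN = N * single.pdf(tau) * (1.0 - single.cdf(tau)) ** (N - 1)
    mean_from_formula = integrate.trapezoid(tau * pN, tau)

    print(f"N={N:2d}  simulated mean={first.mean():.3f}  std={first.std():.3f}  "
          f"mean from formula={mean_from_formula:.3f}")

Both the mean and the spread of the composite FPT shrink as N grows, and the simulated means agree with the values obtained from the formula.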
§.§.§ Spatiotemporal organization strategies
Clustering
Clustering of molecules on the surface of vesicles also plays an important role in aiding the directed motion of the vesicles themselves through the cytoplasm. For example, it has been shown that the clustering of dynein motors on the membrane of a phagosome allows for the generation of a cooperative force on a single microtubule, resulting in rapid directed transport of the phagosome along the microtubule <cit.>, overcoming the stochasticity associated with opposing motor proteins on an endosome that results in bidirectional motility. The effect of multiple motors attached to a cargo is modeled in Sec. <ref>.
Clustering can increase specificity and recruitment rates, as exemplified by the recruitment of dynamin through the clustering of phosphoinositides <cit.>. Although the effect on conversion time in the absence of clustering has not been experimentally measured (owing to the lack of methods to selectively turn off the self-affinities of the proteins under consideration), simulations show that turning off clustering while keeping all other rates constant significantly increases the mean and variance of the conversion time <cit.>. The clustering of phosphoinositides has also been postulated to be involved in the experimentally measured EEA1 clustering <cit.>, which plays a role in endosomal conversions. In a newly proposed model of seeded endosomal conversions, incoming APPL1 endosomes collide with pre-existing mature EEA1 endosomes, resulting in a “transfer” of EEA1. Clustered EEA1 on mature endosomes ensures that a threshold number of molecules is deposited onto the incoming endosome. Agent-based simulations show that clustering significantly reduces the mean and variance of the conversion time in endosomal maturation <cit.>.
Hierarchical arrangement of timescales
The separation of timescales, along with the energetic coupling between the different molecule types being transported, plays an important role in suppressing noise in membrane transport mechanisms, which involve the transport of molecules between two compartments separated by a membrane. Such mechanisms have been used to model the suppression of noise in intracellular glucose levels through sodium–potassium pumps in combination with sodium–glucose coupled transporters <cit.>. In short, the reaction noise of the transported molecule of interest, say `A', on the `target' side (II) of the membrane is reduced when the timescale of transport across the membrane (i.e., the inverse of the transport rates) is made much longer than the timescale of the reaction noise of A on the `source' side (I) of the membrane.
We discuss below simple examples of this phenomenon when the molecule A undergoes a bursty birth-death reaction on the source side:
ϕ ->[b_A] v_A A_I;    A_I ->[d_A] ϕ,
where the subscript on A denotes the side of the membrane the molecule is located in. In steady state, this process has a super-Poisson Fano factor F=(1+v_A)/2 (see Appendix A). The dynamical timescale is controlled by the death rate, d_A <cit.>.
Uniporter dynamics: A simple uniporter reversibly transports a species A between compartments I and II as follows <cit.>: A_I <=>[k][r × k] A_II.
The value of r controls the equilibration ratio for transport and k sets its inverse timescale. The Fano factor on side II is <cit.>,
F_A_II=1+(v_A-1)/[2(1+r+d_A/k)].
Clearly, the noise in A is suppressed on side II (the Fano factor is decreased), while the equilibrium concentration of A is held constant, if k is made much smaller than d_A, i.e., if transport occurs slowly compared to A_I's fluctuation dynamics.
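The following Gillespie sketch checks this behavior. The kinetic scheme is our reading of the setup above (bursty production and degradation of A only on side I, reversible uniporter transport between the two sides), and the parameter values are arbitrary; with k chosen ten times smaller than d_A, the simulated Fano factor on side II should come out close to the formula and far below the bursty source value (1+v_A)/2.

import numpy as np

rng = np.random.default_rng(6)

b_A, v_A, d_A = 1.0, 5, 1.0     # burst rate, burst size, degradation rate (side I only)
k, r = 0.1, 1.0                 # slow transport (k << d_A) and its reverse-rate ratio
T = 20_000.0

t, nI, nII = 0.0, 5, 5          # start near the expected means
w_sum = y_sum = y2_sum = 0.0    # time-weighted moments of A_II

while t < T:
    rates = np.array([b_A, d_A * nI, k * nI, r * k * nII])
    total = rates.sum()
    dt = rng.exponential(1.0 / total)
    w_sum += dt; y_sum += nII * dt; y2_sum += nII * nII * dt   # accumulate before the jump
    t += dt
    event = rng.choice(4, p=rates / total)
    if event == 0:
        nI += v_A               # burst of A on side I
    elif event == 1:
        nI -= 1                 # degradation on side I
    elif event == 2:
        nI -= 1; nII += 1       # transport I -> II
    else:
        nII -= 1; nI += 1       # transport II -> I

mean_II = y_sum / w_sum
fano_II = (y2_sum / w_sum - mean_II**2) / mean_II
print("simulated Fano factor of A_II:", round(fano_II, 2))
print("formula 1+(v_A-1)/[2(1+r+d_A/k)]:", round(1 + (v_A - 1) / (2 * (1 + r + d_A / k)), 2))
print("bursty source value (1+v_A)/2:", (1 + v_A) / 2)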
Adding nonlinear coupling. Symporter+Antiporter dynamics: The uniporter is unable to achieve sub-Poisson noise suppression (i.e., F_A_II<1). Symporters and antiporters can exceed this performance limit by coupling the transport of A with that of another molecule, B <cit.>:
Symporter: A_I + B_I <=>[k][r× k] A_II + B_II,
Antiporter: A_I + B_II <=>[k][r× k] A_II + B_I.
Assuming that B_I obeys chemical dynamics similar to those of A_I (Eq. <ref> with the rates carrying subscript B), for the simple case v_A=v_B=v, as k varies from 0 to ∞ the Fano factor of A_II ranges between the following bounds:
1/2 ≤ F_A_II ≤ [(1+v)(b_A+b_B)+4√(b_Ab_Br)] / [4(b_A+b_B+2√(b_Ab_Br))].
Thus, by combining a slow transfer process (k ≪ d_A, d_B) with appropriate coupling between A and B, symporters and antiporters can suppress the noise in A to as low as sub-Poisson levels on side II of the membrane!
Coincidence detection
Coincidence detection refers to the weak binding of a protein to two or more membrane components (such as specific proteins and phosphoinositides). A variety of proteins that are specific to endosomal surfaces bind via coincidence detection of binding partners together with specific phosphoinositide binding via specialized motifs such as the PH and FYVE domains <cit.>. Thus, such proteins only localize effectively to membranes that contain all of their binding partners, thereby increasing specificity by reducing spurious binding. This also decreases the number of components required to facilitate effector localization, owing to the sharp increase in the number of binding-partner combinations available to coordinate the binding and unbinding of effectors.
Structural organization
Specific structural features of biomolecules can enhance their affinity to other biomolecules, reducing noise in associated processes. For example, in the case of proteins diffusing across a thermally fluctuating membrane, mismatch between the curvature preferred by the proteins and the surrounding membrane curvature can guide and modulate their motion, imparting greater precision and enhancing lateral diffusion <cit.>. In the cisternal maturation model (cf. Sec. <ref>), the affinity-driven process of homotypic fusion relies on specific interactions between identical components, promoting the fusion of membranes and facilitating maturation. Nanoclusters of activated receptors (phosphorylated EGF receptors) also form discrete packages of signaling information that provide a robust signal via “analogue-to-digital conversion” <cit.>.
§.§.§ Reaction cascades
Endosomal networks transmit and decode extracellular signaling events and so must be specific, multiplexed, robust, and adaptable <cit.>. The network must translate a given input into a specific output, simultaneously process different signals, and resist both internal and external fluctuations—all while tuning itself to suit the cellular identity and state. Propagation of noise between interdependent processes depends on network circuit topology and timescales <cit.>. Many theoretical studies have examined features of network topology that enable signaling cascades to confer emergent behavior <cit.>, concluding that specific network topologies such as long cascades with weak interactions and particular types of feedback motifs may suppress fluctuations <cit.>. We briefly elaborate on this analysis below.
Following <cit.>, consider a signaling cascade of N species s_i in which the production of species s_i depends only on s_i-1, each species undergoes self-degradation, and there is an additive source of independent noise. While the dynamics can depend nonlinearly on the reactants, the fluctuations about steady state can be linearized. In this linearization, the production rate of s_i depends linearly on the deviation of s_i-1 from its steady state, with a differential amplification rate c_i, while s_i is degraded in proportion to its own deviation from steady state, with a degradation rate γ_i. The solution of this analytically tractable linearized problem (see <cit.> for details) shows that, as long as the differential amplification rates are smaller than the corresponding degradation rates (i.e., the timescale of signal propagation along the cascade is slower than the timescale of reaching steady state), the noise in the output species, s_N, depends linearly on the noise in the input species, s_0, with a coefficient ∝ e^(-N/N_0)/√(N) that is exponentially suppressed by the length N of the signaling cascade. Here, N_0 sets the attenuation scale for the cascade length and decreases (i.e., attenuation is faster) when the differential amplification rates are lowered. Thus, reaction cascades with small amplification-to-degradation rate ratios at each step suppress noise transmission along the cascade. This property is useful in designing good signaling networks, as follows.
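A minimal numerical illustration of this attenuation is given below. The cascade is simulated in its linearized form with an Ornstein–Uhlenbeck input; the intrinsic noise of the downstream species is deliberately omitted so that only the transmitted component of the input noise is visible, and all parameters are illustrative choices rather than values from the cited analysis.

import numpy as np

rng = np.random.default_rng(7)

c, gamma = 0.5, 1.0             # differential amplification < degradation at every stage
dt, n_steps = 0.01, 100_000
tau0, sigma0 = 1.0, 1.0         # correlation time and amplitude of the noisy input s_0

for N in (1, 4, 8):
    s = np.zeros(N + 1)         # s[0] is the input, s[N] the output
    out = np.empty(n_steps)
    for step in range(n_steps):
        # Ornstein-Uhlenbeck input followed by an Euler step of the linearized cascade
        s[0] += (-s[0] / tau0) * dt + sigma0 * np.sqrt(2 * dt / tau0) * rng.normal()
        s[1:] += (c * s[:-1] - gamma * s[1:]) * dt
        out[step] = s[N]
    tail_std = out[n_steps // 2:].std()
    print(f"N={N}: std of the transmitted fluctuations in s_N ~ {tail_std:.4f}")

The transmitted standard deviation drops rapidly, roughly geometrically, with the cascade length N whenever c < gamma, in line with the exponential suppression quoted above.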
In general, the dependence of the s_i+1 production rate on s_i can be nonlinear for all reactions in the cascade, leading to multiple stable fixed points of the system (i.e., multiple steady states). The specific fixed point attained in steady state, the “response” of the cascade, is determined by the value of s_0, i.e., the input “signal”. An ideal threshold response is achieved by a noiseless system with two stable fixed points: the output switches when the input signal crosses the threshold separating the two basins of attraction. It has been shown that when the dependence of s_i+1 on s_i is ultrasensitive (i.e., more responsive than hyperbolic Michaelis–Menten kinetics), the robust condition that the differential amplification rates be smaller than the degradation rates produces a desirable, sharp, digital-like output response despite a noisy input, for appropriate cascade lengths <cit.>.
§.§ Benefits of noise
There are notable benefits to the presence of noise in distinct trafficking processes. Inherent stochasticity in directed motor transport plays a crucial role in circumventing roadblocks due to microtubule-associated proteins <cit.>. Some motors like kinesin-1 follow individual protofilaments; thus, the stochastic dissociation of single kinesin-1 motors upon encountering an obstacle enables a cargo bound to a team of such motors to effectively bypass the obstacle <cit.>. Other motors like kinesin-2 and dynein frequently side-step to neighboring protofilaments due to stochasticity, allowing them to successfully bypass obstacles <cit.>.
In search processes involving directed motion with intermittent search (Sec. <ref>), the presence of stochasticity plays a crucial role in effectively locating a target. When the target is not directly positioned along the motor pathway, the vesicle must stochastically detach from the pathway and rely on diffusion to find the target. At low noise levels, characterized by a low diffusivity, the vesicle takes significantly longer to reach the target (in fact, in the absence of any noise, the vesicle remains confined to the motor pathway and never successfully locates the target). At high noise levels, the vesicle detaches too frequently from the motor pathway, leading to prolonged search times. Thus, an intermediate noise level provides the optimum balance to efficiently find the target. Cells may have the ability to modulate the noise level by controlling the binding affinity of the vesicle to the motor pathway, in this way regulating the detachment rate of the vesicle to ensure an optimal level of stochasticity and thus facilitating timely and accurate target localization.
One of the two cisternal maturation pathways discussed in Sec. <ref> <cit.> requires stochasticity to drive the maturation process of the compartment, without which the compartment would perpetually be in steady state with mixed components due to balanced influx and outflux of the respective components.
Stochasticity also plays an important role in mixing. For a large number of particles undergoing diffusive motion and starting from a non-uniform density gradient, Eq. (<ref>) implies that the higher the coefficient of diffusion (k_BT/γ)—in other words, the noisier the individual particle's trajectory—the faster the density gradient evens out to the steady-state density. There is much incentive for cells to maintain a uniform homeostatic concentration of substances. Active diffusion plays a crucial role in facilitating faster mixing. In this process, experimentally observed diffusivities in intracellular transport processes are much greater than expected to arise from purely thermal fluctuations and sharply decrease in the absence of ATP <cit.>.
§ CONCLUDING REMARKS
Scaling up analytically tractable stochastic models of intracellular transport to develop a general conceptual framework remains an ongoing quest <cit.>. However, the identification of hidden simplicities in emergent whole-cell phenomenologies has proved a useful route for relating stochastic models to cell-level phenomena in other systems <cit.>, and may offer a useful framework for approaching similar problems in the context of intracellular transport. The advent of a new suite of quantitative, dynamic live-cell imaging modalities, including highly suitable light-sheet-based techniques, in conjunction with novel data analysis methods, makes this an exciting line of inquiry to pursue at this time <cit.> (Appendix B).
Finally, we note that the endosomal network is context-specific at various levels. For instance, for a given cell type, distinct cargoes show disparate intracellular itineraries such as EGF (degraded in lysosomes) and transferrin (recycled to the plasma membrane) <cit.>. This can be further tuned by features such as concentration; for instance, EGF at lower concentrations instead causes its cognate receptors to be recycled <cit.>. Trafficking of a given cargo may depend on the cellular state; for example, receptor-bound insulin has been shown to display an altered balance of recycling to degradation in insulin target tissues in obese mice <cit.>. Furthermore, distinct cell fates in distinct tissues or in developmental contexts may result from <cit.> either changes in the stoichiometry of key molecules and protein isoforms or simply the morphology of the cell <cit.>. In pathophysiological contexts the endosomal system has also been shown to adopt a new homeostatic state, as both hijacking viruses and bacterial toxins can alter endosomal trafficking and acidification to promote infection. In these contexts, the interplay with energetic costs of cellular dynamics and metabolic homeostasis is likely to prove crucial <cit.>.
In conclusion, while we have gained a wealth of insights into the molecular and structural mechanisms of protein machineries, and have catalogued various behaviors of prominent cargoes and their destinations, significant gaps remain in our knowledge of how interactions, transport, and membrane remodeling come together in space and time to result in the beautiful choreography of robust cargo detection, trafficking, and specific delivery to targets that cells routinely perform.
[SUMMARY POINTS]
* Commensurate with its importance to cellular function, intracellular transport is a complex, multiscale process that utilizes several physical transport modalities, alongside fine control of vesicular identity, to carry a variety of cargo to their destinations.
* Stochasticity at the microscopic level is an intrinsic feature of intracellular transport, with noise arising due to physical transport mechanisms as well as biomolecular interactions.
* Despite the noisiness of constituent dynamics, precise and robust outcomes arise at the whole-cell level.
* Describing cell-level outcomes entails integration across disparate time-, length-, and abundance-scales, which in turn requires data spanning the scales of the relevant phenomena to inform parameter choices and guide development of models with testable hypotheses.
* Advances in imaging methodologies can now be used to observe stochastic, dynamic processes with sufficient spatiotemporal resolution and statistical precision to relate specific molecular processes to cell-level outcomes.
* Given time series of the behaviors of all endosomes carrying a specific set of markers, variance-based measures of noise and FPT distributions are straightforward to calculate. These can be related to global outcomes, such as endpoints of specific maturation steps or internalization of cargo to target compartments.
* Eukaryotic cells seem to both suppress noise and also strategically use it to achieve desired outcomes, but clear-cut examples of each approach still remain largely anecdotal.
[FUTURE ISSUES]
* While the significant differences in the intracellular organizations of terminally differentiated cells have been identified, it remains an open question as to how changes arising in the transport system support signaling during tissue patterning and differentiation.
* In pathophysiological contexts, extending understanding to the dysregulation of trafficking of key biomolecules may provide a mechanistic approach for therapeutic drug and nanoparticle delivery strategies, in turn enabling efficient release at specific subcellular locations.
* Experiments measuring precise timing of cargo delivery are still lacking. Single particle tracking of various cargoes through the endosomal network will be a powerful approach to extract stochastic features of the system.
* A new conceptual framework for spatiotemporal organization of the intracellular transport network that yields quantitative principles that transcend system-specific details remains to be articulated.
§ DISCLOSURE STATEMENT
The authors are not aware of any affiliations, memberships, funding, or financial holdings that might be perceived as affecting the objectivity of this review.
§ ACKNOWLEDGMENTS
S.A. thanks the National Health and Medical Research Council of Australia (APP1182212) and Monash Data Futures Institute Seed Grant. H.M.Y. is supported by an Australian Government Research Training (RTP) Scholarship. The EMBL Australia Partnership Laboratory (EMBL Australia) is supported by the National Collaborative Research Infrastructure Strategy of the Australian Government. K.J., R.R.B. and S.I.-B. thank the Purdue Research Foundation and Purdue University start-up funds for financial support. K.J. and S.I.-B acknowledge support from the College of Science Dean's Special Fund, the Ross-Lynn Fellowship award and the Bilsland Dissertation Award (to K.J.).
§ LITERATURE CITED
allen2010
Allen LJ. 2010.
An introduction to stochastic processes with applications to biology.
CRC press
2011-amoruso
Amoruso C, Lagache T, Holcman D. 2011.
Modeling the early steps of cytoplasmic trafficking in viral infection and gene
delivery.
SIAM Journal on Applied Mathematics 71:2334–2358
Axelrod1981
Axelrod D. 1981.
Cell-substrate contacts illuminated by total internal reflection fluorescence.
J Cell Biol 89:141–145
Axelrod2001
Axelrod D. 2001.
Total internal reflection fluorescence microscopy in cell biology.
Traffic 2:764–774
Bakker2017
Bakker J, Spits M, Neefjes J, Berlin I. 2017.
The egfr odyssey - from activation to destruction in space and time.
J Cell Sci 130:4087–4096
Bartumeus2005
Bartumeus F, da Luz MGE, Viswanathan GM, Catalan J. 2005.
Animal search strategies: A quantitative random-walk analysis.
Ecology 86:3078–3087
Benichou2011
Bénichou O, Loverdo C, Moreau M, Voituriez R. 2011.
Intermittent search strategies.
Rev. Mod. Phys. 83:81–129
Bertalan2015
Bertalan Z, Budrikis Z, La Porta CAM, Zapperi S. 2015.
Navigation strategies of motor proteins on decorated tracks.
PLOS ONE 10:e0136945
Binder2012
Binder B, Holzhütter HG. 2012.
A hypothetical model of cargo-selective Rab recruitment during organelle
maturation.
Cell Biochem. Biophys. 63:59–71
Blue2018
Blue RE, Curry EG, Engels NM, Lee EY, Giudice J. 2018.
How alternative splicing affects membrane-trafficking dynamics.
J. Cell Sci. 131
Blythe2007
Blythe RA, Evans MR. 2007.
Nonequilibrium steady states of matrix-product form: a solver's guide.
J. Phys. A: Math. Theor. 40:R333
Bonifacino2006
Bonifacino JS, Rojas R. 2006.
Retrograde transport from endosomes to the trans-golgi network.
Nat. Rev. Mol. Cell Biol. 7:568–579
Bonucci2023
Bonucci M, Shu T, Holt LJ. 2023.
How it feels in a cell.
Trends in Cell Biology
Bouchaud1990
Bouchaud JP, Georges A. 1990.
Anomalous diffusion in disordered media: Statistical mechanisms, models and
physical applications.
Phys. Rep. 195:127–293
Brandizzi2002
Brandizzi F, Frangne N, Marc-Martin S, Hawes C, Neuhaus JM, Paris N. 2002.
The destination for single-pass membrane proteins is influenced markedly by the
length of the hydrophobic domain.
Plant Cell 14:1077–1092
Brangwynne2009
Brangwynne CP, Gijsje HK, MacKintosh FC, Weitz DA. 2009.
Intracellular transport by active diffusion.
Trends Cell Biol. 19:423–427
bressloff2014
Bressloff PC. 2014.
Stochastic processes in cell biology, vol. 41.
Springer
Bressloff2012
Bressloff PC, Newby JM. 2012.
Filling of a Poisson trap by a population of random intermittent searchers.
Phys. Rev. E 85:031909
Bressloff2013
Bressloff PC, Newby JM. 2013.
Stochastic models of intracellular transport.
Rev. Mod. Phys. 85:135–196
brown2020
Brown AI, Westrate LM, Koslover EF. 2020.
Impact of global structure on diffusive exploration of organelle networks.
Scientific reports 10:4984
Bruggeman2018
Bruggeman FJ, Teusink B. 2018.
Living with noise: On the propagation of noise from molecules to phenotype and
fitness.
Curr. Opin. Syst. Biol. 8:144–150
Burov2011
Burov S, Jeon JH, Metzler R, Barkai E. 2011.
Single particle tracking in systems showing anomalous diffusion: the role of
weak ergodicity breaking.
Phys. Chem. Chem. Phys. 13:1800–1812
Campas2008
Campàs O, Leduc C, Bassereau P, Casademunt J, Joanny JF, Prost J. 2008.
Coordination of kinesin motors pulling on fluid membranes.
Biophys. J. 94:5009–5017
Cardelli2020
Cardelli L, Laurenti L, Csikasz-Nagy A. 2020.
Coupled membrane transporters reduce noise.
Phys. Rev. E 101:012414
Carlton2005
Carlton JG, Cullen PJ. 2005.
Coincidence detection in phosphoinositide signaling.
Trends Cell Biol. 15:540–547
Castro2021
Castro M, Lythe G, Smit J, Molina-París C. 2021.
Fusion and fission events regulate endosome maturation and viral escape.
Sci Rep. 11:7845
Charras2008
Charras GT, Coughlin M, Mitchison TJ, Mahadevan L. 2008.
Life and times of a cellular bleb.
Biophys. J. 94:1836–1853
Chen2014
Chen BC, Legant WR, Wang K, Shao L, Milkie DE, et al. 2014.
Lattice light-sheet microscopy: Imaging molecules to embryos at high spatiotemporal resolution.
Science 346:1257998
Chepyala2016
Chepyala SR, Chen YC, Yan CCS, Lu CYD, Wu YC, Hsu CP. 2016.
Noise propagation with interlinked feed-forward pathways.
Sci. Rep. 6:23607
Chou2011
Chou T, Mallick K, Zia RKP. 2011.
Non-equilibrium statistical mechanics: from a paradigmatic model to biological
transport.
Rep. Prog. Phys. 74:116601
Cullen2008
Cullen PJ. 2008.
Endosomal sorting and signalling: an emerging role for sorting nexins.
Nat. Rev. Mol. Cell Biol. 9:574–582
Cullen2014
Cullen PJ, Carlton JG. 2014.
Phosphoinositides in the mammalian endo-lysosomal network.
Subcell Biochem. 59:65–110
Cullen2018
Cullen PJ, Steinberg F. 2018.
To degrade or not to degrade: mechanisms and significance of endocytic
recycling.
Nat. Rev. Mol. Cell Biol. 19:679–696
Derivery2010
Derivery E, Gautreau A. 2010.
Assaying WAVE and WASH complex constitutive activities toward the arp2/3
complex.
Methods Enzymol. 484:677–695
Derivery2009
Derivery E, Sousa C, Gautier JJ, Lombard B, Loew D, Gautreau A. 2009.
The arp2/3 activator WASH controls the fission of endosomes through a large
multiprotein complex.
Dev. Cell 17:712–723
Dix2008
Dix JA, Verkman AS. 2008.
Crowding effects on diffusion in solutions and cells.
Annu. Rev. Biophys. 37:247–263
Drechsler2017
Drechsler M, Giavazzi F, Cerbino R, Primo L, Lichtenstein L, Ferrari A. 2017.
Active diffusion and advection in drosophila oocytes result from the interplay
of actin and microtubules.
Nat. Commun. 8:1520
1905-einstein-brownian
Einstein A. 1905.
Investigations on the theory of brownian movement.
Ann. Phys. (Leipzig) 17
eling2019
Eling N, Morgan MD, Marioni JC. 2019.
Challenges in measuring and understanding biological noise.
Nature Reviews Genetics 20:536–548
Erdi2016
Érdi P, Lente G. 2016.
Stochastic chemical kinetics.
Springer New York
everitt1998
Everitt B. 1998.
The cambridge dictionary of statistics.
Cambridge University Press, 360th ed.
Ferro2019
Ferro LS, Can S, Turner MA, ElShenawy MM, Yildiz A. 2019.
Kinesin and dynein use distinct mechanisms to bypass obstacles.
eLife 8:e48629
Foret2012
Foret L, Dawson JE, Villaseñor R, Collinet C, Deutsch A, et al. 2012.
A general theoretical framework to infer endosomal network dynamics from
quantitative image analysis.
Curr. Biol. 22:1381–1390
ganguly2012
Ganguly S, Williams LS, Palacios IM, Goldstein RE. 2012.
Cytoplasmic streaming in drosophila oocytes varies with kinesin activity and
correlates with the microtubule cytoskeleton architecture.
Proceedings of the National Academy of Sciences 109:15109–15114
Gardiner2009
Gardiner C. 2009.
Stochastic methods: A handbook for the natural and social sciences.
Springer Berlin, Heidelberg
Gennerich2009
Gennerich A, Vale RD. 2009.
Walking the walk: how kinesin and dynein coordinate their steps.
Curr. Opin. Cell Biol. 21:59–67
Gillespie1976
Gillespie DT. 1976.
A general method for numerically simulating the stochastic time evolution of
coupled chemical reactions.
J. Comp. Phys. 22:403–434
golding2006
Golding I, Cox EC. 2006.
Physical nature of bacterial cytoplasm.
Physical review letters 96:098102
Granger2014
Granger E, McNee G, Allan V, Woodman P. 2014.
The role of the cytoskeleton and molecular motors in endosomal dynamics.
Semin. Cell Dev. Biol. 31:20–29
Guo2014
Guo M, Ehrlicher AJ, Jensen MH, Renz M, Moore JR, et al. 2014.
Probing the stochastic, motor-driven properties of the cytoplasm using force
spectrum microscopy.
Cell 158:822–832
Gupta2017
Gupta SK, Guo M. 2017.
Equilibrium and out-of-equilibrium mechanics of living mammalian cytoplasm.
J. Mech. Phys. Solids 107:284–293
2006-helenius
Helenius J, Brouhard G, Kalaidzidis Y, Diez S, Howard J. 2006.
The depolymerizing kinesin mcak uses lattice diffusion to rapidly target
microtubule ends.
Nature 441:115–119
2009-hirokawa
Hirokawa N, Noda Y, Tanaka Y, Niwa S. 2009.
Kinesin superfamily motor proteins and intracellular transport.
Nature Reviews Molecular Cell Biology 10:682–696
howard2001
Howard J. 2001.
Mechanics of motor proteins and the cytoskeleton: Sinauer assoc.
Sunderland, MA
2009-iyer-biswas-powerlaws
Hu J, Iyer-Biswas S, Sealfon SC, Wetmur J, Jayaprakash C, Hayot F. 2009.
Power-laws in interferon-b mrna distribution in virus-infected dendritic cells.
Biophysical Journal 97:1984–1989
2009-iyer-biswas-dissertation
Iyer-Biswas S. 2009.
Applications of methods of non-equilibrium statistical physics to models of
stochastic gene expression.
Ph.D. thesis, Ohio State University
2014-iyer-biswas-PRL
Iyer-Biswas S, Crooks GE, Scherer NF, Dinner AR. 2014a.
Universality in stochastic exponential growth.
Phys. Rev. Lett. 113:028101
2009-iyer-biswas
Iyer-Biswas S, Hayot F, Jayaprakash C. 2009.
Stochasticity of gene products from transcriptional pulsing.
Phys. Rev. E 79:031911
2014-iyer-biswas-mixedP
Iyer-Biswas S, Jayaprakash C. 2014.
Mixed poisson distributions in exact solutions of stochastic autoregulation
models.
Phys. Rev. E 90:052712
2014-iyer-biswas-PNAS
Iyer-Biswas S, Wright CS, Henry JT, Lo K, Burov S, et al. 2014b.
Scaling laws governing stochastic growth and division of single bacterial
cells.
Proc. Natl. Acad. Sci. U.S.A. 111:15912–15917
2016-iyer-biswas-FPT
Iyer-Biswas S, Zilman A. 2016.
First-passage processes in cellular biology, chap. 5.
John Wiley & Sons, Inc, 261–306
2017-iyer-biswas-intthresh
Jafarpour F, Vennettilli M, Iyer-Biswas S. 2017.
Biological timekeeping in the presence of stochasticity.
ArXiv: 1703.10058
jeon2011
Jeon JH, Tejedor V, Burov S, Barkai E, Selhuber-Unkel C, et al. 2011.
In vivo anomalous diffusion and weak ergodicity breaking of lipid granules.
Physical review letters 106:048103
joshi2023intergenerational
Joshi K, Biswas RR, Iyer-Biswas S. 2023.
Intergenerational scaling law determines the precision kinematics of stochastic
individual-cell-size homeostasis.
bioRxiv: 2023.01.20.525000
joshi2023cellular
Joshi K, Roy S, Biswas RR, Iyer-Biswas S. 2023.
Cellular dynamics under time-varying conditions.
bioRxiv: 2023.03.07.531540
joshi2023emergent
Joshi* K, Wright* CS, Ziegler* KF, Spiers EM, Crosser JT, et al.
2023a.
Emergent simplicities in stochastic intergenerational homeostasis.
bioRxiv: 2023.01.18.524627
joshi2023nonmarkovian
Joshi* K, Ziegler* KF, Roy* S, Wright CS, Gandhi R, et al. 2023b.
Non-markovian memory and emergent simplicities in the stochastic and plastic
adaptation of individual cells to dynamic environments.
bioRxiv: 2023.05.27.542601
Jovic2014
Jović M, Kean MJ, D. A, Boura E, Gingras AC, et al. 2014.
Endosomal sorting of VAMP3 is regulated by PI4K2A.
J. Cell Sci. 127:3745–3756
Julicher1997
Jülicher F, Ajdari A, Prost J. 1997.
Modeling molecular motors.
Rev. Mod. Phys. 69:1269
Kar2023
Kar J, Kar S, Gupta A, Jana SS. 2023.
Assembly and disassembly dynamics of nonmuscle myosin II control endosomal
fission.
Cell Rep. 42:112108
2019-nagel-RMP
Keim NC, Paulsen JD, Zeravcic Z, Sastry S, Nagel SR. 2019.
Memory formation in matter.
Rev. Mod. Phys. 91:035002
Keller2000
Keller D, Bustamante C. 2000.
The mechanochemistry of molecular motors.
Biophys. J. 78:541–556
Klumpp2005
Klumpp S, Nieuwenhuizen TM, Lipowsky R. 2005.
Movements of molecular motors: Ratchets, random walks and traffic phenomena.
Physica E Low Dimens. Syst. Nanostruct. 29:380–389
Kolomeisky2007
Kolomeisky AB, Fisher ME. 2007.
Molecular motors: a theorist's perspective.
Annu. Rev. Phys. Chem. 58:675–695
2012-lagache
Lagache T, Danos O, Holcman D. 2012.
Modeling the step of endosomal escape during cell infection by a
nonenveloped virus.
Biophysical Journal 102:980–989
2017-lagache
Lagache T, Sieben C, Meyer T, Herrmann A, Holcman D. 2017.
Stochastic model of acidification, activation of hemagglutinin and escape of
influenza viruses from an endosome.
Frontiers in Physics 5
Lewis2015
Lewis OL, Zhang S, Guy RD, Del Alamo JC. 2015.
Coordination of contractility, adhesion and flow in migrating Physarum
amoebae.
J. R. Soc. Interface 12:20141359
Liepelt2007
Liepelt S, Lipowsky R. 2007.
Kinesin's network of chemomechanical motor cycles.
Phys. Rev. Lett. 98:258102
Lin2016
Lin C, Schuster M, Guimaraes SC, Ashwin P, Schrader M, et al. 2016.
Active diffusion and microtubule-based transport oppose myosin forces to
position organelles in cells.
Nat. Commun. 7:11814
Lipowsky2005
Lipowsky R, Klumpp S. 2005.
`life is motion': multiscale motility of molecular motors.
Physica A Stat. Mech. Appl. 352:53–112
LippincottSchwartz2018
Lippincott-Schwartz J, Snapp EL, Phair RD. 2018.
The development and enhancement of FRAP as a key tool for investigating
protein dynamics.
Biophys. J. 115:1146–1155
LubyPhelps2013
Luby-Phelps K. 2013.
The physical chemistry of cytoplasm and its influence on cell function: an
update.
Mol. Biol. Cell 24:2593–2596
Mandelbrot1968
Mandelbrot BB, Van Ness JW. 1968.
Fractional Brownian motions, fractional noises and applications.
SIAM Rev. 10:422–437
margiotta2016
Margiotta A, Bucci C. 2016.
Role of intermediate filaments in vesicular traffic.
Cells 5:20
Mayle2012
Mayle KM, Le AM, Kamei DT. 2012.
The intracellular trafficking pathway of transferrin.
Biochim Biophys Acta 1820:264–281
Mim2012
Mim C, Unger VM. 2012.
Membrane curvature and its generation by BAR proteins.
Trends Biochem. Sci. 37:526–533
Mitchison2008
Mitchison TJ, Charras GT, Mahadevan L. 2008.
Implications of a poroelastic cytoplasm for the dynamics of animal cell shape.
Semin. Cell Dev. Biol. 19:215–223
Moeendarbary2013
Moeendarbary E, Valon L, Fritzsche M, Harris AR, Moulding DA, et al. 2013.
The cytoplasm of living cells behaves as a poroelastic material.
Nat. Mater. 12:253–261
Mogilner2018
Mogilner A, Manhart A. 2018.
Intracellular fluid mechanics: Coupling cytoplasmic flow with active
cytoskeletal gel.
Annu. Rev. Fluid Mech. 50:347–370
Mogre2020
Mogre SS, Brown AI, Koslover EF. 2020.
Getting around the cell: Physical transport in the intracellular world.
Phys. Biol. 17:061003
Parmeggiani1999
Parmeggiani A, Jülicher F, Ajdari A, Prost J. 1999.
Energy transduction of isothermal ratchets: Generic aspects and specific
examples close to and far from equilibrium.
Phys. Rev. E 60:2127
Peskin1995
Peskin CS, Oster G. 1995.
Coordinated hydrolysis explains the mechanical behavior of kinesin.
Biophys. J. 68:202S
picas2014
Picas L, Viaud J, Schauer K, Vanni S, Hnia K, et al. 2014.
Bin1/m-amphiphysin2 induces clustering of phosphoinositides to recruit its
downstream partner dynamin.
Nature Communications 5:5647
Posor2022
Posor Y, Jang W, Haucke V. 2022.
Phosphoinositides as membrane organizers.
Nat. Rev. Mol. Cell Biol. 23:797–816
presley1997
Presley JF, Cole NB, Schroer TA, Hirschberg K, Zaal KJ, Lippincott-Schwartz J.
1997.
ER-to-Golgi transport visualized in living cells.
Nature 389:81–85
Prost1994
Prost J, Chauwin JF, Peliti L, Ajdari A. 1994.
Asymmetric pumping of particles.
Phys. Rev. Lett. 72:2652
Radszuweit2013
Radszuweit M, Alonso S, Engel H, Bär M. 2013.
Intracellular mechanochemical waves in an active poroelastic model.
Phys. Rev. Lett. 110:138102
rai2016
Rai A, Pathak D, Thakur S, Singh S, Dubey AK, Mallik R. 2016.
Dynein clusters into lipid microdomains on phagosomes to drive rapid transport
toward lysosomes.
Cell 164:722–734
2022-rajagopal
Rajagopal V, Arumugam S, Hunter PJ, Khadangi A, Chung J, Pan M. 2022.
The cell physiome: What do we need in a computational physiology framework for
predicting single-cell biology?
Annual Review of Biomedical Data Science 5:341–366
Reimann2002
Reimann P. 2002.
Brownian motors: Noisy transport far from equilibrium.
Phys. Rep. 361:57–265
Rink2005
Rink J, Ghigo E, Kalaidzidis Y, Zerial M. 2005.
Rab conversion as a mechanism of progression from early to late endosomes.
Cell 122:735–749
2013-roberts
Roberts AJ, Kon T, Knight PJ, Sutoh K, Burgess SA. 2013.
Functions and mechanics of dynein motor proteins.
Nature Reviews Molecular Cell Biology 14:713–726
RodriguezBoulan2005
Rodriguez-Boulan E, Kreitzer G, Müsch A. 2005.
Organization of vesicular trafficking in epithelia.
Nat. Rev. Mol. Cell Biol. 6:233–247
Rowland2014
Rowland AA, Chitwood PJ, Phillips MJ, Voeltz GK. 2014.
ER contact sites define the position and timing of endosome fission.
Cell 159:1027–1041
Ryu2014
Ryu J, Galan AK, Xin X, Dong F, Abdul-Ghani MA, et al. 2014.
Appl1 potentiates insulin sensitivity by facilitating the binding of irs1/2 to
the insulin receptor.
Cell Reports 7:1227–1238
Sanders2023
Sanders S, Joshi K, Levin PA, Iyer-Biswas S. 2023.
Beyond the average: An updated framework for understanding the relationship
between cell growth, dna replication, and division in a bacterial system.
PLoS Genet. 19:e1010505
Sarfati2023
Sarfati R, Joshi K, Martin O, Hayes JC, Iyer-Biswas S, Peleg O. 2023.
Emergent periodicity in the collective synchronous flashing of fireflies.
eLife 12:e78908
Schadschneider2010
Schadschneider A, Chowdhury D, Nishinari K. 2010.
Stochastic transport in complex systems: from molecules to vehicles.
Elsevier
scher1975
Scher H, Montroll EW. 1975.
Anomalous transit-time dispersion in amorphous solids.
Physical Review B 12:2455
Schnitzer2000
Schnitzer MJ, Visscher K, Block SM. 2000.
Force production by single kinesin motors.
Nat. Cell Biol. 2:718–723
1915-schrodinger
Schrödinger E. 1915.
Zur Theorie der Fall- und Steigversuche an Teilchen mit Brownscher Bewegung.
Physikalische Zeitschrift 16:289–295
Seksek1997
Seksek O, Biwersi J, Verkman A. 1997.
Translational diffusion of macromolecule-sized solutes in cytoplasm and
nucleus.
The Journal of cell biology 138:131–142
2022-iyer-biswas-waddington
Shakiba N, Li C, Garcia-Ojalvo J, Cho KH, Patil K, et al. 2022.
How can waddington-like landscapes facilitate insights beyond developmental
biology?
Cell Systems 13:4–9
Sigismund2008
Sigismund S, Argenzio E, Tosoni D, Cavallaro E, Polo S, Di Fiore PP. 2008.
Clathrin-mediated internalization is essential for sustained egfr signaling but
dispensable for degradation.
Dev Cell 15:209–219
Simunovic2015
Simunovic M, Voth GA, Callan-Jones A, Bassereau P. 2015.
When physics takes over: BAR proteins and membrane curvature.
Rev. Trends Cell Biol. 25:780–792
Smelser2015
Smelser AM, Macosko JC, O'Dell AP, Smyre S, Bonin K, Holzwarth G. 2015.
Mechanical properties of normal versus cancerous breast cells.
Biomech. Model. Mechanobiol 14:1335–1347
Soldati2006
Soldati T, Schliwa M. 2006.
Powering membrane traffic in endocytosis and recycling.
Nat. Rev. Mol. Cell Biol. 7:897–908
Stanoev2018
Stanoev A, Mhamane A, Schuermann KC, Grecco HE, Stallaert W, et al. 2018.
Interdependence between EGFR and phosphatases spatially established by
vesicular dynamics generates a growth factor sensing and responding network.
Cell Syst. 7:295–309.e11
Striepen2022
Striepen JF, Voeltz GK. 2022.
Coronin 1C restricts endosomal branched actin to organize ER contact and
endosome fission.
J. Cell Biol. 221
Thattai2002
Thattai M, Van Oudenaarden A. 2002.
Attenuation of noise in ultrasensitive signaling cascades.
Biophys. J. 82:2943–2950
tolic2004
Tolić-Nørrelykke IM, Munteanu EL, Thon G, Oddershede L, Berg-Sørensen
K. 2004.
Anomalous diffusion in living yeast cells.
Physical Review Letters 93:078102
tominaga2013
Tominaga M, Kimura A, Yokota E, Haraguchi T, Shimmen T, et al. 2013.
Cytoplasmic streaming velocity as a plant size determinant.
Developmental Cell 27:345–352
tsimring2014
Tsimring LS. 2014.
Noise in biology.
Reports on Progress in Physics 77:026601
Vagne2018a
Vagne Q, Sens P. 2018.
Stochastic model of maturation and vesicular exchange in cellular organelles.
Biophys. J. 114:947–957
Villasenor2016
Villaseñor R, Kalaidzidis Y, Zerial M. 2016.
Signal processing by the endosomal system.
Curr. Opin. Cell Biol. 39:53–60
Villasenor2015
Villasenor R, Nonaka H, Del Conte-Zerial P, Kalaidzidis Y, Zerial M. 2015.
Regulation of EGFR signal transduction by analogue-to-digital conversion in
endosomes.
eLife 4:e06156
Visscher1999
Visscher K, Schnitzer MJ, Block SM. 1999.
Single kinesin molecules studied with a molecular force clamp.
Nature 400:184–189
Wallroth2018
Wallroth A, Haucke V. 2018.
Phosphoinositide conversion in endocytosis and the endolysosomal system.
J. Biol. Chem. 293:1526–1535
WandingerNess2014
Wandinger-Ness A, Zerial M. 2014.
Rab proteins and the compartmentalization of the endosomal system.
Cold Spring Harb. Perspect. Biol. 6:a022616
Wang1998
Wang HY, Elston T, Mogilner A, Oster G. 1998.
Force generation in RNA polymerase.
Biophys. J. 74:1186–1202
weber2010
Weber SC, Spakowitz AJ, Theriot JA. 2010.
Bacterial chromosomal loci move subdiffusively through a viscoelastic
cytoplasm.
Physical review letters 104:238102
Wehrens2018
Wehrens M, Büke F, Nghe P, Tans SJ. 2018.
Stochasticity in cellular metabolism and growth: Approaches and consequences.
Curr. Opin. Syst. Biol. 8:131–136
weigel2011
Weigel AV, Simon B, Tamkun MM, Krapf D. 2011.
Ergodic and nonergodic processes coexist in the plasma membrane as observed by
single-molecule tracking.
Proceedings of the National Academy of Sciences 108:6438–6443
woodhouse2013
Woodhouse FG, Goldstein RE. 2013.
Cytoplasmic streaming in plant cells emerges naturally by microfilament
self-organization.
Proceedings of the National Academy of Sciences 110:14132–14137
2021-iyer-biswas-bioenergetics
Yang X, Heinemann M, Howard J, Huber G, Iyer-Biswas S, et al. 2021.
Physical bioenergetics: Energy fluxes, budgets, and constraints in cells.
Proc. Natl. Acad. Sci. U.S.A. 118
York2020
York HM, Coyle J, Arumugam S. 2020.
To be more precise: the role of intracellular trafficking in development and
pattern formation.
Biochem. Soc. Trans. 48:2051–2066
York2022
York HM, Joshi K, Wright CS, Kreplin LZ, Rodgers S, et al. 2022.
Deterministic early endosomal maturations emerge from a stochastic
trigger-and-convert mechanism.
bioRxiv: 2022.04.15.488498
York2021
York HM, Patil A, Moorthi UK, Kaur A, Bhowmik A, et al. 2021.
Rapid whole cell imaging reveals a calcium-APPL1-dynein nexus that regulates
cohort trafficking of stimulated EGF receptors.
Commun. Biol. 4:224
Yu2021
Yu L, Lei Y, Ma Y, Liu M, Zheng J, et al. 2021.
A comprehensive review of fluorescence correlation spectroscopy.
Front. Phys. 9:644450
Zeigerer2012
Zeigerer A, Gilleron J, Bogorad RL, Marsico G, Nonaka H, et al. 2012.
Rab5 is necessary for the biogenesis of the endolysosomal system in vivo.
Nature 485:465–470
Zerial2001
Zerial M, McBride H. 2001.
Rab proteins as membrane organizers.
Nat. Rev. Mol. Cell Biol. 2:107–117
Zhang2021
Zhang ML, Ti HY, Wang PY, Li H. 2021.
Intracellular transport dynamics revealed by single-particle tracking.
Biophys. Rep. 7:413–427
Zoncu2009
Zoncu R, Perera RM, Balkin DM, Pirruccello M, Toomre D, De Camilli P. 2009.
A phosphoinositide switch controls the maturation and signaling properties of
appl endosomes.
Cell 136:1110–1121
§.§ Appendix A: Theoretical preliminaries
§.§.§ Characterizing and quantifying `noise'
Many biologically relevant quantities in cells are characterized by random fluctuations around expected values and are thus stochastic variables <cit.>. Examples include the numbers of molecules, their spatial coordinates viewed in both static and dynamic contexts, and various inter-event time periods <cit.>. Measures of noise serve to quantify deviations from deterministic behavior <cit.>. Below we highlight common measures of noise, each of which characterizes a different aspect of the width of the probability distribution of the relevant random variable.
*Absolute measure: variance and standard deviation.
The variance of a random variable x, with mean value ⟨ x⟩, is defined as Var(x) = ⟨(x-⟨ x⟩)^2⟩, where the angular brackets denote averaging of the enclosed quantity. The variance is an absolute measure of the square of the width of the probability distribution. Its square root, the standard deviation, SD(x)=√(Var(x)), is thus an absolute measure of the width of the probability distribution and an absolute measure of noise. Such absolute measures are physically meaningful in biological contexts involving the same kinds of random variables, such as when comparing levels of fluctuations in the timings of different events. Moreover, they are convenient to use because the variance of a sum of independent random variables equals the sum of the variances of the individual variables. Thus, the noise in each time step of a sequence of random processes, measured using their variances, can simply be summed to yield the time uncertainty of the overall process <cit.>.
*Relative measure: Coefficient of Variation.
The canonical dimensionless measure for noise is the ratio of variance to mean squared, defined as the square of the “Coefficient of Variation”, which describes the relative magnitude of fluctuations <cit.>. This scale-independent measure allows comparisons across variables with different dimensions, including variables characterized by different length- or timescales. The Coefficient of Variation is used to directly characterize the width of the probability distribution as a fraction of the mean value. This measure is physically most meaningful when the random variable is non-negative <cit.>.
*Measure of Poissonian character: Fano factor.
The Fano factor, defined as the ratio of the variance to the mean (F_x = Var(x)/⟨ x⟩), is a useful measure of noise in the stochastic dynamics of biochemicals since it provides insight into the type of process involved. Biochemicals governed by elementary chemical reactions, such as the simple birth-death process (Eq. <ref> with v_A=1), have a steady-state Fano factor of F = 1, corresponding to a Poisson distribution. A succession of such reactions typically results in an increase of the Fano factor beyond 1 in the final product <cit.>. In contrast, F<1 typically indicates the presence of negative feedback <cit.>.
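To make these measures concrete, the following short Python simulation (an illustration of ours, not part of the original analysis; the rate constants and the Gillespie implementation are arbitrary choices) estimates the standard deviation, the coefficient of variation, and the Fano factor for the simple birth-death process, whose steady state is Poissonian and should therefore give a Fano factor close to 1.

import numpy as np

rng = np.random.default_rng(0)

def gillespie_birth_death(k=50.0, gamma=1.0, t_max=2000.0):
    """Simulate n(t) for births at constant rate k and deaths at rate gamma*n.
    Returns dwell-time-weighted samples of the copy number."""
    t, n = 0.0, 0
    values, weights = [], []
    while t < t_max:
        birth, death = k, gamma * n
        total = birth + death
        dt = rng.exponential(1.0 / total)
        values.append(n)
        weights.append(dt)
        t += dt
        n += 1 if rng.random() < birth / total else -1
    return np.array(values), np.array(weights)

n, w = gillespie_birth_death()
mean = np.average(n, weights=w)
var = np.average((n - mean) ** 2, weights=w)
print(f"mean        = {mean:.2f}   (theory: k/gamma = 50)")
print(f"SD          = {np.sqrt(var):.2f}")
print(f"CV^2        = {var / mean**2:.4f} (theory: 1/50 = 0.02)")
print(f"Fano factor = {var / mean:.3f}  (theory: 1 for a Poissonian steady state)")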
§.§.§ First passage processes.
In many biological processes, the first passage time (FPT) provides a useful framework for modeling stochasticity, especially in the context of the timing of events. The FPT refers to the duration required for a specific event to occur for the first time, starting from a well-defined initial condition. Examples include a randomly diffusing particle reaching a target location after starting some distance away, or the concentration of a particular biochemical species surpassing a critical threshold value after starting from some lower value <cit.> (Fig. <ref>b).
The FPT problem can often be cast in the following universal formulation for solution by either analytic or numerical means. Consider a stochastic variable x(t) that evolves according to specified rules, such as diffusion (see Eq. (<ref>) below) or stochastic exponential growth in the case of individual bacterial cell sizes between divisions <cit.>. We keep these definitions general, making no restrictive assumptions about the nature (for instance, the dimensionality) of x. We then wish to find the FPT (τ) distribution, P(τ), for x to start from a certain starting point x(0) and reach a region E. The starting point may even be distributed according to some initial probability distribution, with zero probability to lie inside E for the FPT problem to be meaningful. We now consider the stochastic evolution of x in the presence of absorbing boundary conditions (defined as where the probability distribution of x vanishes) on the boundary of E, calculating the total rate, J(t), of probability absorption by the region E. The FPT distribution is then simply equal to the calculated rate of absorption: P(τ) = J(τ). A specific application of this formalism to 1D diffusion (Sec. <ref>) is outlined below.
Consider a particle diffusing in one dimension with a background drift velocity v_d starting at x(0) = 0. We wish to calculate the FPT distribution to arrive at x=L. This model is relevant for describing lattice diffusion of MCAK, a member of the kinesin-13 family, which performs a 1D diffusion to find the end of microtubules. Upon encountering the tip, MCAK utilizes ATP to depolymerize the microtubules <cit.>. Following the formalism described in Sec. <ref>, we first need to solve Eq. (<ref>) with F/γ→ v_d, with the absorbing boundary condition P(x=L, t)=0 and initial condition P(x,0) = δ(x) (the Dirac delta function centered at x=0). The solution to this problem is possible through the method of images, and the FPT distribution is just the probability current entering x=L <cit.>,
P(τ) = - D ∂_x P(x,τ)|_x=L = L/√(4π D τ^3) e^-(L - v_dτ)^2/(4Dτ).
This solution, known as the inverse Gaussian distribution, displays interesting properties. In the absence of drift, the distribution behaves like a power law, ∼τ^-3/2 at large times τ≫ L^2/D. This distribution is so broad that even its mean diverges. There is a slowly decreasing probability that the particle has not reached x=L, since there is a large region for making excursions in the direction opposite to the target. However, P(τ) is still normalized so that the particle always eventually reaches the target.
The presence of drift dramatically changes this situation. If the particle drifts toward the target, the FPT is normalized to 1 and at long times τ≫ L/v_d, the FPT is exponentially suppressed. Thus, all moments of the distribution are well-defined and the particle reaches the target in finite time. If, on the other hand, the particle moves away from the target, the FPT is no longer normalized to 1, which indicates that there is a finite probability that the particle never reaches the target. Both these long-time behaviors are consistent with common intuition of the FPT process.
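As a sanity check of this inverse Gaussian result, one can simulate drift-diffusion trajectories and compare the empirical first-passage statistics with the analytic prediction. The sketch below is our own illustration (the values of D, v_d, L and the Euler-Maruyama discretization are arbitrary choices); for drift toward the target the mean FPT should approach L/v_d and its standard deviation √(2DL/v_d^3).

import numpy as np

rng = np.random.default_rng(1)

def first_passage_times(D=1.0, v_d=0.5, L=5.0, dt=1e-3, n_traj=5000, n_steps=200_000):
    """Euler-Maruyama simulation of dx = v_d dt + sqrt(2D) dW with an absorbing boundary at x = L."""
    x = np.zeros(n_traj)
    fpt = np.full(n_traj, np.nan)
    alive = np.ones(n_traj, dtype=bool)
    for step in range(1, n_steps + 1):
        x[alive] += v_d * dt + np.sqrt(2 * D * dt) * rng.standard_normal(alive.sum())
        hit = alive & (x >= L)
        fpt[hit] = step * dt
        alive &= ~hit
        if not alive.any():
            break
    return fpt[~np.isnan(fpt)]

tau = first_passage_times()
print(f"fraction absorbed : {len(tau) / 5000:.3f}")
print(f"mean FPT (sim)    : {tau.mean():.2f}   (theory: L/v_d = 10.0)")
print(f"SD of FPT (sim)   : {tau.std():.2f}    (theory: sqrt(2*D*L/v_d^3) = {np.sqrt(2*1.0*5.0/0.5**3):.2f})")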
A useful consequence of the above formulation of the FPT problem is that the FPT distribution of a particle to reach an exit, starting from some specified region, is the same as the distribution of transit times (where return through the exit is disallowed). Thus, the mean FPT is a measure of the inverse rate of transfer of biomolecules across the intervening space <cit.>.
Finally, a major simplification occurs in the formalism when the random variable evolves monotonically, such as via a stochastic growth process without death, e.g., bacterial cell growth between divisions under balanced growth conditions <cit.>. Then, if the time evolution of the probability distribution of the random variable is known, the FPT distribution for hitting a threshold value is simply the negative time derivative of the cumulative probability distribution evaluated at the threshold <cit.>.
§.§.§ Emergence of directed motor motion from energy-consuming stochastic dynamics.
The Langevin equation corresponding to motion in a given conformal state is given by the 1D version of Eq. (<ref>) with F(x) replaced by -V'(x) <cit.>. Denoting by w_i→ j(x) the position-dependent L-periodic transition rate from i→ j, the Fokker-Planck equation for the probability density in i^th state evolves as follows (we have used D_iγ_i = k_BT):
∂_t p_i(x,t)= 1/γ_i∂_x[(V_i'(x)+k_BT∂_x)p_i(x,t)]+∑_j≠ i[w_j→ i(x)p_j(x,t)-w_i→ j(x)p_i(x,t)].
This equation has been solved analytically for simple systems, such as those with two states and with a not unreasonable assumption γ_1=γ_2 = γ <cit.>. Can this system yield a net nonzero motor velocity? The average motor velocity for a two state system in steady state has been calculated to be <cit.>,
v=Lk_BT/γ[∫_0^L∫_y^y+Le^V(z)/k_BTdz/e^V(y)/k_BT-e^V(y+L)/k_BTdy]^-1,
where,
V(x)=∫_0^x[λ(y)V_1'(y)+(1-λ(y))V_2'(y)]dy,
and
λ(x)=∑_n=-∞^∞ p_1(x+nL,t)/∑_m=-∞^∞[p_1(x+mL,t)+p_2(x+mL,t)],
where the right-hand side can be shown to be time-independent for generic transition rates.
If the system is in thermodynamic equilibrium (with no energy input) so that detailed balance holds,
w_1→2(x)/w_2→1(x)=e^[V_1(x)-V_2(x)]/k_BT=p_2(x)/p_1(x),
and the value of λ(x) can be shown to be,
λ(x)=1/(1+e^[V_1(x)-V_2(x)]/k_BT).
Since V_1 and V_2 are both periodic with period L, it follows that λ is now also periodic with the same period, and hence from Eq. (<ref>), V is also periodic with period L. Thus, in Eq. (<ref>), e^V(y)/k_BT-e^V(y+L)/k_BT goes to zero, and hence the motor velocity v is 0.
Thus, analysis of this simple problem confirms the second law of thermodynamics, which mandates that maintaining a non-zero motor velocity requires maintaining energy input (in the form of ATP), which keeps the system out of detailed balance and hence avoids true thermodynamic equilibrium. How the energy input acts in a realistic scenario depends on the specifics of the reaction scheme for the ATP-related processes. Obtaining steady-state solutions to models incorporating such schemes generally necessitates numerical calculations such as shown in <cit.>. From the results of these calculations biologically relevant measures such as noise and efficiency (the mechanical work done per unit of energy consumed) can be evaluated <cit.>. More complex generalized ratchet models involving multiple states have also been explored <cit.>.
§.§.§ The law of large numbers.
According to the law of large numbers, the accumulated outcomes of many repeated independent trials of a random experiment are proportional to the probabilities of those outcomes. While this law does match the prevalent common-sense understanding of probability, we now provide a simple quantitative argument. Using the additive properties of the mean and variance (Sec. <ref>), we conclude that the sum of N independent identical random variables is a random variable whose mean and variance are each N times the mean and variance of a single variable. It follows that the standard deviation is only √(N) times the standard deviation of a single variable. Thus, the fractional width of the probability distribution of the sum of these variables (or of any other random variable proportional to the sum, such as the mean of the random variables) is suppressed by a factor of √(N) compared to the corresponding measure for a single variable. In other words, as N becomes large, the sum of the independent random variables becomes more and more deterministic, when viewed as a fraction of its expected magnitude. When this property is extended to random variables counting the success of a given outcome in repeated independent trials as either 1 (success) or 0 (failure), we find that the accumulated outcome is deterministic after a large number of trials and is proportional to the probability of that outcome, thus proving the law.
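A quick numerical illustration of this 1/√(N) suppression (our own example, using exponentially distributed variables; the trial counts are arbitrary):

import numpy as np

rng = np.random.default_rng(2)

for N in (10, 100, 1000, 10000):
    # Sum N i.i.d. exponential(1) variables, repeated over 1000 independent trials.
    sums = rng.exponential(1.0, size=(1000, N)).sum(axis=1)
    fractional_width = sums.std() / sums.mean()
    print(f"N = {N:5d}: fractional width = {fractional_width:.4f}, 1/sqrt(N) = {1/np.sqrt(N):.4f}")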
§.§ Appendix B: Quantitative imaging and analysis
Fluorescence microscopy of live cells is essential to the study of intracellular transport. Fluorescence recovery after photobleaching (FRAP), in which fluorescent molecules in a small region of interest are photobleached prior to imaging of the movement of unbleached molecules into the same region, permits calculation of molecular diffusivity <cit.>. Similarly, fluorescence correlation spectroscopy (FCS) enables calculation of diffusivities from the autocorrelation functions of fluorescence intensities in an illumination region of interest <cit.>. However, these ensemble-averaged methods are limited in the study of stochasticity in intracellular transport dynamics, where observations of single-vesicle dynamical data are necessary. Such results provide spatiotemporal information about the movements and interactions of specific sets of biomolecules, achieved by labeling these features of interest with specific molecular probes <cit.>.
Traditional epifluorescence microscopy yields such results, but with poor axial resolution and a low signal-to-noise ratio due to out-of-plane illumination, confounding attempts to localize in three dimensions and leading to untenable levels of photobleaching over time. For this reason, other imaging modalities have been developed and adopted. Total internal reflection fluorescence (TIRF) microscopy provides a significant improvement in signal-to-noise ratio by exciting fluorescent molecules within ∼200 nm of the surface <cit.>. For example, it has been successfully used to observe the diffusive motion of molecules on the cell membrane <cit.>. Unfortunately, TIRF is limited to studying phenomena near the cell membrane; for this reason, light sheet microscopy (LSM) approaches that use selective illumination of a thin optical section have gained in popularity. As an example, lattice light-sheet microscopy (LLSM) employs an ultrathin light sheet to acquire images plane-by-plane to generate a three-dimensional (3D) volume, thus permitting rapid volumetric measurements of whole cells <cit.>. Importantly for the study of intracellular trafficking, because LLSM enables whole-cell volumetric imaging, it allows all events within a prolonged measurement to be captured while simultaneously allowing measurement of fast dynamics (on the order of seconds) from high-resolution data (∼220 nm in lateral, ∼320 nm in axial dimensions, and ∼2 s in time). Additionally, as this technique produces negligible photobleaching, specimens may be observed for >30 min. In principle, this permits the complete set of labeled molecules of interest (often consisting of two to four separately labeled species) within a whole cell to be visualized for extended periods of time, providing the observations required to interrogate the stochastic processes highlighted above, at multiple spatiotemporal scales.
Eating sandwiches: Modular and lightweight elimination of transaction reordering attacks
Orestis Alpos[
Institute of Computer Science, University of Bern,
Neubrückstrasse 10, 3012 CH-Bern, Switzerland.]
University of Bern
<[email protected]>
Ignacio Amores-Sesar[1]
University of Bern
<[email protected]>
Christian Cachin[1]
University of Bern
<[email protected]>
Michelle Yeo[
IST Austria, Am Campus 1, 3400 Klosterneuburg, Austria.]
IST Austria
<[email protected]>
August 1, 2023
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================
Traditional blockchains grant the miner of a block full control not only over which transactions are included but also over their order.
This constitutes a major flaw discovered with the introduction of decentralized finance and allows miners to perform
MEV attacks. In this paper, we address the issue of sandwich attacks by providing a construction that takes as input a
blockchain protocol and outputs a new blockchain protocol with the same security but in which sandwich attacks are not
profitable. Furthermore, our protocol is fully decentralized, relies on no trusted third parties or heavy cryptographic
primitives, and incurs only a linear increase in latency and minimal computational overhead.
§ INTRODUCTION
The field of blockchain protocols has proved to be extremely robust. Since its creation with Bitcoin <cit.> and through several enhancements such as Ethereum <cit.>, however, design flaws have started to surface with the appearance of decentralized finance (DeFi). Blockchains ideally allow users to trade tokens with each other in a secure manner, but existing designs did not anticipate users trading tokens of one platform for fiat currency or for tokens of a different platform, which gave rise to arguably one of the major flaws of today's blockchain platforms: maximal extractable value (MEV) <cit.>.
Sandwich attacks are one of the most common types of MEV <cit.> accounting for a loss of 174M USD over the span of 33 months <cit.> for users of Ethereum.
Sandwich attacks leverage the miner's ability to select and position transactions within a block.
Consider the simple example of a sequence of transactions that swap one asset X for another asset Y in a decentralized exchange where exchange rates are computed automatically based on some function of the number of underlying assets in the pool (e.g., a constant product market maker <cit.>).
Now suppose there is a miner that also wants to swap some units of X for Y.
The most favorable position for the miner would be to place their transaction at the start of the sequence, so as to benefit from a lower X to Y exchange rate. We can extend this approach to achieve a simple arbitrage strategy given any sequence of X to Y swaps: the miner can compute the X to Y exchange rate at the start of the sequence and use it to exchange some units of X for, say, k units of Y.
The miner then front-runs (i.e., inserts a transaction at the start) this sequence of X to Y swaps with this transaction.
To finish off the attack, the miner back-runs (i.e., inserts a transaction at the end) the sequence with another transaction that swaps some units of Y to X.
In this way, the miner profits from the low X to Y exchange rate from their first transaction which gives the miner more units of Y (and hence more units of X when converted back) than if the first transaction was placed anywhere else in the sequence.
Refer to <Ref> for a detailed description of exchange rate computation and sandwich attacks.
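For concreteness, the following Python sketch (our own toy model; the pool sizes, trade sizes, and the absence of fees are arbitrary simplifications and are not taken from the paper's appendix) implements a constant product market maker and shows that front- and back-running a victim swap yields a profit for the attacker, whereas a round trip executed after the victim is essentially break-even.

class ConstantProductPool:
    """Toy x*y = k market maker for assets X and Y, with no fees."""
    def __init__(self, reserve_x, reserve_y):
        self.x, self.y = reserve_x, reserve_y

    def swap_x_for_y(self, dx):
        """Deposit dx units of X, receive dy units of Y (the product x*y stays constant)."""
        dy = self.y - (self.x * self.y) / (self.x + dx)
        self.x, self.y = self.x + dx, self.y - dy
        return dy

    def swap_y_for_x(self, dy):
        dx = self.x - (self.x * self.y) / (self.y + dy)
        self.x, self.y = self.x - dx, self.y + dy
        return dx


def attacker_profit(order):
    pool = ConstantProductPool(1000.0, 1000.0)
    profit_x, got_y = 0.0, 0.0
    for step in order:
        if step == "front":        # attacker swaps 50 X -> Y
            profit_x -= 50.0
            got_y = pool.swap_x_for_y(50.0)
        elif step == "victim":     # victim swaps 100 X -> Y
            pool.swap_x_for_y(100.0)
        elif step == "back":       # attacker swaps the acquired Y back to X
            profit_x += pool.swap_y_for_x(got_y)
    return profit_x


print("sandwich  (front, victim, back):", round(attacker_profit(["front", "victim", "back"]), 3))
print("no victim in between           :", round(attacker_profit(["victim", "front", "back"]), 3))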
Since any miner of a given block has full control over the transactions added to the block, as well as the way transactions in the block are ordered, it is trivial for the miner to launch the aforementioned sandwich attack.
Consequently, this gives miners a lot of power as they control precisely the selection and positioning of transactions with every block they mine.
A classic technique to mitigate this attack is thus to remove the control over the positioning of the transactions in the block from the adversary, whether by using a trusted third party to bundle and order the transactions as in
flashbots[<https://www.flashbots.net>], Eden[<https://www.edennetwork.io>], or OpenMEV[<https://openmev.xyz/>],
or imposing a fair ordering of the transactions using some consensus algorithm <cit.>. These classic solutions either compromise the decentralization of the protocol or its efficiency. Furthermore, they cannot easily be implemented on top of existing blockchain protocols.
In this work, we provide an efficient decentralized solution Π^3 (“Partitioned and Permuted Protocol") which does not rely on external resources and can be easily implemented on top of any blockchain protocol Π.
Assume that the final order of transactions in a block B_i that is mined by a miner M_i
is determined by a permutation Σ_i which is chosen uniformly at random.
For simplicity, let us focus on three transactions in B_i, a victim transaction T^*
submitted by a client, and the front-running and back-running transactions, T_1 and T_2, respectively,
created by the miner. Since any relative ordering of these three transactions is equally probable, T_1 will be ordered
before T_2 with the same probability as T_2 before T_1,
hence the miner will profit or make a loss with the same probability.
Protocol uses such a permutation for each block, chosen by a set of leaders,
which consists of recent miners in the blockchain.
We recognize and overcome the following challenges.
First, Σ_i cannot be known before creating B_i, otherwise M_i would have the option to use Σ^-1_i,
the inverse of Σ_i, to initially order the transactions in B_i, so that the final order is the one that benefits M_i.
We overcome this by making Σ_i known only after B_i has been mined.
On the other hand, if Σ_i is chosen after creating B_i, a coalition of leaders would be able to try multiple different
permutations and choose the most profitable one — the number of permutations a party can try is only limited by their processing power.
For these reasons, we have the leaders commit to their contributions to Σ_i before B_i becomes known, producing unbiased randomness.
To incentivize leaders to open their commitments, our protocol Π^3 employs a delayed reward release mechanism that only releases the rewards of leaders when they have generated and opened all commitments.
In some cases, however, performing a sandwich attack might still be more profitable than the block reward, and hence a leader might still choose to not reveal their commitment to bias the resulting permutation.
In general, a coalition of k leaders can choose among 2^k permutations out of the n_t ! possible ones, where n_t denotes the number of transactions in the block.
It turns out that the probability that T_1, T^*, and T_2 appear in that order in one of the 2^k permutations is significant.
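To see why, note that a uniformly random permutation places T_1, T^*, T_2 in the profitable relative order with probability 1/6. If, purely for illustration, the 2^k candidate permutations are treated as independent uniform draws (a simplifying assumption of ours, not the paper's exact analysis), the chance that at least one of them is profitable is 1 - (5/6)^(2^k), which approaches 1 quickly:

for k in range(1, 9):
    candidates = 2 ** k
    p_at_least_one = 1 - (5 / 6) ** candidates
    print(f"k = {k}: 2^k = {candidates:3d} candidate permutations, "
          f"P(at least one profitable ordering) = {p_at_least_one:.3f}")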
Protocol mitigates this by chunking each transaction into m chunks,
which lowers the probability of a profitable permutation in two ways.
First, the number of possible permutations is much larger, (n_t m)! instead of n_t !.
Second, a permutation is now profitable if the majority of chunks of T_1 appear before the chunks of T^*,
and vice versa for the chunks of T_2.
As we show both analytically and empirically, the probability of a profitable permutation approaches zero as the number of chunks m increases.
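The following Monte Carlo sketch illustrates this trend under our own simplified toy model (a fee-less constant-product pool as in the earlier sketch, fixed trade sizes, the attacker's sell amount committed up front, and leftover tokens valued at the final spot price; none of these choices are prescribed by the paper). It estimates the distribution of the attacker's profit when all 3m chunks are permuted uniformly at random.

import numpy as np

rng = np.random.default_rng(3)

class Pool:
    """Toy constant-product pool, as in the earlier sketch (no fees)."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def swap_x_for_y(self, dx):
        dy = self.y - self.x * self.y / (self.x + dx)
        self.x, self.y = self.x + dx, self.y - dy
        return dy
    def swap_y_for_x(self, dy):
        dx = self.x - self.x * self.y / (self.y + dy)
        self.x, self.y = self.x - dx, self.y + dy
        return dx

def sandwich_profit(m, n_samples, attacker_in=50.0, victim_in=100.0, sell_y=47.6):
    """T1 buys attacker_in X->Y, T* buys victim_in X->Y, T2 sells a pre-committed sell_y Y->X;
    each transaction is split into m equal chunks and the 3m chunks are permuted uniformly."""
    profits = []
    for _ in range(n_samples):
        chunks = rng.permutation(["T1"] * m + ["Tstar"] * m + ["T2"] * m)
        pool = Pool(1000.0, 1000.0)
        x_bal, y_bal = -attacker_in, 0.0
        for c in chunks:
            if c == "T1":
                y_bal += pool.swap_x_for_y(attacker_in / m)
            elif c == "Tstar":
                pool.swap_x_for_y(victim_in / m)
            else:  # T2 chunk sells a fixed share of the pre-committed amount
                x_bal += pool.swap_y_for_x(sell_y / m)
                y_bal -= sell_y / m
        profits.append(x_bal + y_bal * pool.x / pool.y)  # value leftover Y at the final spot price
    return np.array(profits)

for m in (1, 2, 5, 10, 20):
    p = sandwich_profit(m, 2000)
    print(f"m = {m:2d}: mean profit = {p.mean():+.2f} X, P(profit > 2 X) = {(p > 2.0).mean():.3f}")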
Organization
In this paper, we introduce a construction that takes as input a blockchain protocol Π and produces a new blockchain protocol Π^3 in which sandwich attacks are no longer profitable.
We begin by revisiting the concept of atomic broadcast <cit.> and setting up the model for the analysis. Secondly, we introduce our construction, justifying how miners are incentivized to follow the protocol, before moving on to analyzing the construction in detail. Thirdly, we show that the construction does not introduce any vulnerability into the protocol by proving that Π^3 implements atomic broadcast if Π does; this part of the analysis is performed in the traditional Byzantine model. Fourthly, we consider the rational model to show that sandwich attacks are no longer profitable in Π^3. We adopt this dual model, Byzantine for the security analysis and rational for the analysis of sandwich attacks, because it captures the worst possible scenario for the protocol in both respects: the security of Π^3 is not weakened even against an adversary that gains nothing from breaking the protocol, and every party is assumed to attempt to extract value from any sandwich attack.
Lastly, we conclude the paper with an empirical analysis of the protocol under real-life data, as well as an analysis of the additional overhead introduced by our protocol.
§ RELATED WORK
A recent line of work <cit.>
formalizes the notion of fair ordering of transactions.
These protocols ensure, at consensus level, that the final order is consistent with the local
order in which transactions are observed by parties.
Similarly, the Hashgraph <cit.> consensus algorithm aims to achieve fairness by having each
party locally build a graph with the received transactions.
As observed by Kelkar <cit.>, a transaction order
consistent with the order observed locally for any pair of transactions is not always possible, as Condorcet cycles may be formed.
As a result, fair-ordering protocols output a transaction order that is consistent with the view of only some fraction of the
parties, while some transactions may be output in a batch, i.e., with no order defined among them.
Moreover, although order-fairness removes the miner's control over the order of transactions,
it does not eliminate front-running and MEV-attacks: a rushing adversary that becomes aware
of some transaction tx early enough can broadcast a transaction tx' and make sure that sufficiently many nodes receive tx' before tx.
Another common defense against front-running attacks is the commit and reveal technique.
The idea is to have a user first commit to a transaction, e.g., by announcing its hash or its encryption,
and, once the order is fixed, reveal the actual transaction.
However, an adversary can choose not to reveal the transaction, should the final order be non-optimal.
To mitigate this, Doweck and Eyal <cit.> employ time-lock puzzle commitments <cit.>, so that a transaction can be brute-force revealed, and protocols such as Unicorn <cit.> and Bicorn <cit.> employ verifiable delay functions <cit.>.
The disadvantages of these solutions are that transactions may be executed much later than they were submitted,
and that a delay for the time-lock puzzle has to be set to match the network delay and the adversary's computational power.
A different line of work <cit.> employs a committee:
transactions are encrypted with the public key of the committee, so that its members can collaboratively decrypt it.
As it is based on threshold encryption <cit.>, this technique requires a threshold setup.
Finally, multi-party computation (MPC) has been used <cit.>
to prevent front-running. MPC protocols used in this setting must be tailor-made so that misbehaving is identified and punished
<cit.>.
A common side effect of all aforementioned techniques is that the validity of a transaction can only be checked after it is revealed.
The protocol presented in this work disincentivizes sandwich attacks without requiring hidden transactions or
employing computationally heavy cryptography.
Another widely deployed solution against front-running involves a trusted third party.
Flashbots[<www.flashbots.net>], Eden[<www.edennetwork.io>],
and OpenMEV[<https://openmev.xyz/>]
allow Ethereum users to submit transactions to their services, then order received
transactions, and forward them to Ethereum miners.
Chainlink's Fair Sequencing Service <cit.>, in a similar fashion, collects encrypted transactions from users,
totally orders them, and then decrypts them.
The drawback with these solutions is that attacks are not eliminated, but trust is delegated to a different set of parties.
An orthogonal but complementary line of research is taken by Heimbach and Wattenhofer <cit.>.
Instead of eliminating sandwich attacks, the authors aim to improve the resilience of ordinary transactions by strategically setting their slippage tolerance, reducing the risk of both transaction failure and sandwich attacks.
Baum <cit.>, and Heimbach and Wattenhofer <cit.> provide surveys of techniques against front-running attacks.
§ MODEL
Notation
For a set X, we denote the set of probability distributions on X by μ(X).
For a probability distribution ν∈μ(X), we denote sampling x from X according to ν by x ∼ν.
§.§ Broadcast primitives
Blockchain protocols need to satisfy certain conditions in order to implement a robust ledger.
The traditional gold standard is atomic broadcast <cit.>, which ensures that all parties deliver the same transactions in the same order. We also use the notion of block-based atomic broadcast to model a blockchain.
This is a modification to the standard atomic broadcast abstraction, where we explicitly include the notions
of a block and a block miner in the interface and properties.
Atomic broadcast
Parties may broadcast a transaction tx by invoking broadcast(tx).
The protocol outputs transactions through deliver(tx) events, in which case we say that a party delivers tx.
We equip our atomic broadcast protocol with a validity predicate on transactions to determine their validity according to the logic of the blockchain protocol.
A protocol implements atomic broadcast with such a validity predicate if it satisfies the following properties, except with negligible probability:
Validity: If some correct party invokes broadcast(tx), then every correct party eventually outputs deliver(tx).
No duplication: No correct party outputs deliver(tx) for a particular tx more than once.
Agreement: If some correct party outputs deliver(tx), then eventually every correct party outputs deliver(tx).
Total order: Given two transactions tx and tx' and two correct parties P_i and P_j that output both deliver(tx) and deliver(tx'), if P_i outputs deliver(tx) before deliver(tx'), then P_j also outputs deliver(tx) before deliver(tx').
External validity: If a correct party outputs deliver(tx), then tx satisfies the validity predicate.
Block-based atomic broadcast
Parties broadcast transactions and deliver blocks using the events
broadcast(tx) and deliver(B), respectively,
where a block B contains a sequence of transactions [tx_1, …, tx_n].
The protocol outputs an additional event mine(B, P),
which signals that block B has been mined by party P, where P is defined as the miner of B.
Notice that mine(B, P) signals only the creation of a block and not its delivery.
In addition to the validity predicate on transactions, we also equip our protocol
with a predicate to determine the validity of a block.
Moreover, we define a block-creation function, which describes how to fill a block:
it gets as input a sequence of transactions and any other data required by the protocol and outputs a block.
These predicates and this function are determined by the higher-level application or protocol.
A protocol implements block-based atomic broadcast with the two validity predicates and the block-creation function if it satisfies the following properties, except with negligible probability:
Validity: If a correct party invokes broadcast(tx), then every correct party eventually outputs deliver(B), for some block B that contains tx.
No duplication: No correct party outputs deliver(B) for a block B more than once.
Integrity: If a correct party outputs deliver(B), then it has previously output the event mine(B, ·) exactly once.
Agreement: If some correct party outputs deliver(B), then eventually every correct party outputs deliver(B).
Total order: Let B and B' be blocks, and P_i and P_j correct parties that output deliver(B) and deliver(B'). If P_i delivers B before B', then P_j also delivers B before B'.
External validity: If a correct party outputs deliver(B),
such that B = [tx_1, …, tx_n],
then B satisfies the block-validity predicate and each tx_i, for i ∈{1, …, n}, satisfies the transaction-validity predicate.
Moreover, any block returned by the block-creation function on (tx_1, …, tx_n) satisfies the block-validity predicate.
Fairness: There exists C ∈ℕ and μ∈ℝ_>0, such that for all N≥ C consecutive delivered blocks,
the fraction of the blocks whose miner is correct is at least μ.
Observe that the properties ensure that mine(B, P) is triggered exactly once for each block B,
hence each block has a unique miner.
For ease of notation, we define on a block B the field B.miner, which contains its miner, and a field containing its transactions.
Since blocks are delivered in total order, we can assign them a height, a sequence number in their order of delivery, accessible by a corresponding field.
Finally, for simplicity we assume that a delivered block allows access to all blocks with smaller height, through an array indexed by height: if B has height i, then the array entry at index i' ≤ i is the block B' of height i'.
§.§ Blockchain and network
Blockchain protocols derive their security from different techniques such as proof of work (PoW) <cit.>, proof of stake (PoS) <cit.>, proof of space-time (PoST) <cit.>, or proof of elapsed time (PoET) <cit.>.
We restrict our model to PoW for simplicity, however, our model can easily be generalized to the other techniques.
For instance, we can use a slashing mechanism to prevent elected leaders from mining in multiple positions for PoS-based blockchains.
In the remainder of the paper, we assume that any protocol Π that we introduce is a PoW blockchain protocol implementing block-based atomic broadcast.
Parties
Similar to previous works, our protocol does not make explicit use
of the number of parties or their identities, and does not require the parties
themselves to know this number. We assume a static network of n_p parties.
We consider the Byzantine model, where f parties may behave arbitrarily, as well as the rational model, where all parties behave so as to maximize their utilities.
Transactions & Blocks
A transaction tx contains a set of inputs, a set of outputs,
and a number of digital signatures.
Transactions are batched into blocks.
A block contains a number of transactions, n_t; for simplicity, we assume n_t to be constant.
A block may also contain parameters specific to protocol Π, such as references to previous blocks, but we abstract the logic of accessing them into block fields, as explained in the context of Definition <ref>. We allow conditional execution of transactions across blocks, i.e., a transaction can be executed conditioned on the existence of another transaction in a previous block.
Network
A diffusion functionality implements communication among the parties, which is structured into synchronous rounds.
The functionality keeps a RECEIVE_i string for each party P_i and makes it available to P_i at the start of every round.
String RECEIVE_i is used to store all messages P_i receives.
When a party P_i instructs the diffusion functionality to broadcast a message, we say that
P_i has finished its round and the functionality tags P_i as finished for this round.
The adversary, detailed in Section <ref>, is allowed to read the string of any party at any moment during the execution and to see any messages broadcast by any party immediately. Furthermore, the adversary can write messages directly and selectively into RECEIVE_i for any P_i, so that only P_i receives the message at the beginning of the next round. This models a rushing adversary.
When all non-corrupted parties have finished their round, the diffusion functionality takes all messages that were broadcast by non-corrupted parties in the round and adds them to RECEIVE_i for all parties; this is the reason for the name synchronous rounds. Every non-corrupted party communicates changes to its local view at the end of each round. If a non-corrupted party creates a block in round r, the new block is received by all parties by round r+1.
Furthermore, even if the adversary causes a block to be received selectively by only some non-corrupted parties in round r, the block is received by all non-corrupted parties by round r+2. The update of the local view also includes the delivery of transactions contained in the blocks that satisfy the conditions to be accepted.
§ PROTOCOL
Our proposed protocol Π^3 (“Partitioned and Permuted Protocol") contains two modifications to a given underlying blockchain protocol Π in order to prevent sandwich MEV attacks.
Our first modification involves randomly permuting the transactions in any given block.
Miners of the immediately preceding blocks, which we refer to as leaders for a block, are in charge of generating random partial seeds.
These partial seeds are then combined to form a seed, which is the input to a PRG that produces a random permutation applied to the transactions in the block.
To ensure that the permutation is random, we need to first ensure that leaders participate in the generation of the partial seeds, and secondly ensure that the partial seeds generated by the leaders are indeed random.
That is, the leaders should not commit to the same partial seed each time or collude with other leaders to generate biased partial seeds.
To incentivize each leader to participate in the generation of the seed, Π^3 stipulates that they commit to their partial seed and present a valid opening during the commitment opening period, otherwise their reward will be burnt.
In typical blockchain protocols, the miner of a block receives the block reward immediately.
In Π^3, the miner does not receive the reward until a certain number of additional blocks has been mined.
We refer to this as a waiting phase and stress that the precise length of the waiting phase is a parameter in our protocol that can be tweaked.
Our second modification is to divide each transaction into smaller chunks before permuting the transactions of a block.
This modification increases the cardinality of the permutation group in order to reduce the effectiveness of any attack aiming to selectively open partial seeds in order to bias the final permutation.
We stress that our proposed modifications incur minimal computational overhead, since the only possible overhead corresponds to transaction delivery and this aspect is computationally cheap. Thus, the construction does not have a noticeable impact on the efficiency of the underlying blockchain protocol.
In <Ref> we provide an in-depth analysis of the efficiency impact of our proposed modifications.
§.§ Transaction permutation
Our protocol Π^3 consists of the following four components (see <Ref>): block mining, generation of the random permutation, reward (re)-distribution, and chunking the transactions.
Appending the partial seeds
Let n_ℓ be the size of the leader set for each block.
The miner M_i of block B_i is part of the leader set of blocks B_i+j, for j ∈ [n_ℓ].
M_i must therefore contribute a partial seed σ_i,j for each of these n_ℓ blocks following B_i.
Hence, M_i needs to create n_ℓ random seeds σ_i, 1, …, σ_i, n_ℓ and commitments to them, C(σ_i, 1), …, C(σ_i, n_ℓ).
The commitments C(σ_i, j), for j ∈ [n_ℓ], are appended to block B_i, while the seeds σ_i, j are stored locally by M_i.
A block that does not contain n_ℓ commitments is considered invalid.
Furthermore, Π^3 requires a deterministic commitment scheme for committing to the partial seeds. Looking ahead, we want any party knowing the committed value to be able to demonstrate it to any other party. Thus, the more standard commitment schemes such as Pedersen commitments <cit.> are ill-suited.
However, simpler hash functions can be used. A collision-resistant hash function is known to constitute a secure commitment scheme when the entropy of the committed values is high enough. Since the parties commit to random partial seeds, hash functions constitute a cheap and safe commitment scheme.
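A minimal sketch of such a hash-based commitment to random partial seeds (illustrative Python, not the paper's concrete instantiation; SHA-256 and 256-bit seeds are our own choices):

import hashlib
import secrets

def commit(num_leader_slots):
    """Generate one random partial seed and one hash commitment per future block slot."""
    seeds = [secrets.token_bytes(32) for _ in range(num_leader_slots)]
    commitments = [hashlib.sha256(s).digest() for s in seeds]
    return seeds, commitments   # seeds kept private by the miner, commitments published in its block

def verify_opening(commitment, seed):
    """Anyone can check a claimed opening against the published commitment."""
    return hashlib.sha256(seed).digest() == commitment

seeds, comms = commit(num_leader_slots=5)
assert verify_opening(comms[0], seeds[0])
assert not verify_opening(comms[0], seeds[1])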
Opening the commitments
Let τ_1, τ_2 ∈ℕ_> 0.
Between τ_1 and τ_1+τ_2 blocks after the creation of some block B_i, the commitments to the partial seeds for block B_i must be opened.
The miners of these blocks also need to append the openings to their blocks, unless a previous block in the chain already contains them (see below for more details).
The parameter τ_1 controls the probability of rewriting block B_i after the commitments have been opened.
The parameter τ_2, in turn, guarantees that there is enough time for all the honest commitments to be opened and added to some block. Any opening appended to a block B_j for j > i+τ_1 + τ_2 is ignored.
We note that specific values of τ_1 and τ_2 might cause our protocol to suffer an increase in latency.
We leave these parameters to be specified by the users of our protocol. Looking ahead, we will discuss latency-security trade-offs in Section <ref>. The period of τ_1 blocks created before the commitments are opened is known as the silent phase, whereas the following τ_2 blocks are known as the loud phase.
The recording of commitments is achieved with the help of a smart contract
providing a method open(i,j,σ_i,j),
where σ_i,j is a (claimed) opening of the j-th commitment h_i,j
published in the i-th block B_i.
We remark that the smart contract serves only as proof that an opening
to a commitment has been provided, and does not add any functionality to the protocol.
The protocol monitors the blockchain for calls to this method.
The arguments to each call, as well as the calling party and the block it appears on, are used to determine the final permutations of the blocks and the distribution of the rewards, which we will detail below.
Deriving the permutation from partial seeds
Let the seed σ_i for block B_i be defined as σ_i-1,1⊕σ_i-2,2⊕…⊕σ_i-n_ℓ, n_ℓ.
Given the seed σ_i, let r_i := G(σ_i), where G: { 0,1 }^λ→{ 0,1 }^ℓ
is a pseudorandom generator. If at least one of the partial seeds σ_i,j, for j ∈ [n_ℓ],
is chosen at random, then σ_i is random as well, and r_i is indistinguishable from a random number <cit.> without the knowledge of σ_i,j.
Algorithm PermFromRandBits <cit.> is a standard algorithm to produce a random permutation from a polynomial number of bits.
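A possible realization of this pipeline in Python (illustrative only: we use SHA-256 in counter mode as a stand-in for the PRG G and a seeded Fisher-Yates shuffle in place of PermFromRandBits; the paper does not prescribe these concrete choices):

import hashlib
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def combine_partial_seeds(partial_seeds):
    """sigma_i = XOR of the opened partial seeds; unopened seeds (None) are skipped."""
    opened = [s for s in partial_seeds if s is not None]
    return reduce(xor_bytes, opened, bytes(32))

def prg_stream(seed):
    """Deterministic byte stream derived from the seed (SHA-256 in counter mode)."""
    counter = 0
    while True:
        block = hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        yield from block
        counter += 1

def permutation_from_seed(seed, n):
    """Fisher-Yates shuffle driven by the PRG stream; returns a permutation of range(n)."""
    stream = prg_stream(seed)
    def next_int(bound):          # rejection sampling to avoid modulo bias
        nbytes = (bound.bit_length() + 7) // 8
        while True:
            val = int.from_bytes(bytes(next(stream) for _ in range(nbytes)), "big")
            if val < (256 ** nbytes // bound) * bound:
                return val % bound
    perm = list(range(n))
    for i in range(n - 1, 0, -1):
        j = next_int(i + 1)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

# Example: three leaders opened their partial seeds, one did not (None).
partials = [bytes([1]) * 32, bytes([2]) * 32, None, bytes([7]) * 32]
print(permutation_from_seed(combine_partial_seeds(partials), n=10))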
Incentivizing the behavior
A crucial factor in the security of Π^3 against sandwich MEV attacks is that the permutation used to order transactions within a block should be truly random.
Thus, the miners should generate all partial seeds uniformly at random.
To incentivize them to do so, we exploit the fact that all leaders remain in the waiting phase for a period of time, which means that they have not yet received the block rewards for mining their block on the blockchain. Note that the waiting phase is n_ℓ+τ_1+τ_2+d blocks long.
This implies that their rewards can be claimed by other miners or burnt if a party diverges from the proper execution, according to the rules described below. Consider a partial seed σ_i,j committed to by miner M_i of block B_i. Recall that σ_i,j contributes to the permutation applied to block B_i+j and that miners can be uniquely identified due to the mine event.
* Before τ_1 blocks have been appended after block B_i+j, any other leader of the leader set of B_i+j who can append a pre-image of h_i,j to the chain can receive the reward corresponding to M_i. This mechanism prevents party M_i from disclosing its partial seed before every other leader has committed its randomness, thus preventing collusion. A miner whose commitment has been discovered by another leader is excluded from all the leader sets.
* If the opening of σ_i,j is not appended to any block, miner M_i loses its reward. This mechanism prevents miners from not opening their commitments. Note that miners are incentivized to include all the valid openings, as discussed below.
* If any of the previous conditions do not apply, party M_i receives an α fraction of the block reward for α∈ (0,1), which would be paid out the moment M_i leaves the waiting phase.
Each miner that appends the opening of M_i's commitments gets (1-α) · w/n_ℓ for each commitment appended.
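These payout rules can be summarized by a small bookkeeping sketch (our own simplification: w denotes the full block reward, we handle a single commitment of M_i, and we ignore the exact timing of payouts within the waiting phase):

def payouts(block_reward, alpha, n_leaders, committer_ok, opening_appended, openings_included_by):
    """Distribute the reward w of miner M_i associated with one of its commitments.

    committer_ok         -- False if another leader exposed M_i's pre-image during the silent phase
    opening_appended     -- True if a valid opening made it on chain during the loud phase
    openings_included_by -- miners who appended openings of M_i's commitments
    """
    rewards = {}
    if not committer_ok:
        rewards["exposer"] = block_reward            # the exposing leader takes M_i's reward
        return rewards
    if not opening_appended:
        rewards["burnt"] = block_reward              # commitment never opened: reward is burnt
        return rewards
    rewards["M_i"] = alpha * block_reward            # committer keeps an alpha fraction
    share = (1 - alpha) * block_reward / n_leaders   # (1-alpha)*w/n_l per appended opening
    for miner in openings_included_by:
        rewards[miner] = rewards.get(miner, 0.0) + share
    return rewards

print(payouts(10.0, alpha=0.8, n_leaders=4, committer_ok=True,
              opening_appended=True, openings_included_by=["P2"]))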
§.§ Chunking the transactions
In all commit-and-open schemes, there exists the vulnerability that a set of malicious parties may decide to not open their commitments to bias the outcome.
In our protocol, any coalition of k leaders can choose between 2^k ways to bias the final permutation.
This situation can worsen when the same miner created several blocks out of
B_i-1, …, B_i-n_ℓ, hence controlling k commitments in the same leader set.
To mitigate this, we could use simultaneous broadcast channels to force miners to open their commitments simultaneously, or time lock puzzles to negate the effect of the delay.
However, we note that the former is next to impossible to implement in practice <cit.>, especially in the blockchain domain. The latter diverts the miners' computational resources from mining new blocks, compromising the security of the protocol.
In the particular case of generating a permutation, there is another alternative.
Let us assume that a block contains n_tx transactions, this means there exist n_tx! possible permutations of them. A coalition of k leaders can choose between 2^k possible permutations among the n_tx! total permutations. Furthermore, in the simplest case the coalition only aims to order the three main transactions that constitute the sandwich attack, thus the fraction of advantageous permutations is 1/6, the fraction of disadvantageous permutations is 1/6 and the remaining are neutral. If k is big enough, the coalition could still extract enough value to compensate for the lost block rewards of those parties that do not open their commitment.
Therefore, we propose to divide each payment generated by a transaction into m payments.
For instance, suppose transaction T_i consists of Alice paying Bob 1 ETH.
In our protocol, each party would locally divide T_i into m transaction chunks T_i^1, …, T_i^m, with each chunk consisting of Alice paying 1/m ETH to Bob.
After all transactions are chunked, the permutation is applied to the much larger set of chunks. There exist (n_tx m)! permutations, and the coalition would need to order the 3m chunks that constitute the involved transactions. Furthermore, for a given permutation with some chunks ordered beneficially, there will, with overwhelming probability, exist chunks ordered in a disadvantageous way. The coalition needs to optimize the good ordering of some chunks while keeping the bad ordering under control. This becomes extremely unlikely to succeed as the number of chunks m grows, as analyzed in Section <ref>.
In general, transactions consist of the execution of some code that may or may not produce a set of payments as output. In this case, each party executes the code of the transaction and chunks each payment as explained above.
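A minimal sketch of the chunking step (our own representation of a payment transaction; the paper's Chunk function is not specified at this level of detail):

from fractions import Fraction

def chunk_payment(sender, receiver, amount, m):
    """Split a single payment into m equal chunks whose amounts sum exactly to the original."""
    per_chunk = Fraction(amount) / m
    return [{"sender": sender, "receiver": receiver, "amount": per_chunk, "chunk": (j + 1, m)}
            for j in range(m)]

chunks = chunk_payment("Alice", "Bob", 1, m=4)   # Alice pays Bob 1 ETH in 4 chunks of 1/4 ETH
assert sum(c["amount"] for c in chunks) == 1
print(chunks[0])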
Conditional execution of chunks
In a typical block mining scenario, an adversarial miner can further utilize fine-grained conditions such as slippage to additionally control the conditional execution of transactions – and in our case transaction chunks – in a given block.
In <Ref> we present an in-depth analysis of how doing so could lead to higher expected revenue, which may also be of independent interest.
To mitigate this, execution of transactions can only be conditional on the state of the blockchain at the end of the previous block.
Specifically, assume a block B_i with transactions
T_1, …, T_n_tx gets delivered, and denote by
state the state of the blockchain after executing B_i-1.
First, the validity of each T_i, for i ∈ [n_tx], is serially checked against state.
If the condition of T_i holds, then all chunks of T_i will be executed,
and if the condition does not hold, then no chunks of T_i will be executed.
In a second step, transactions involving transfer of funds are split into chunks and permuted,
and the chunks of transactions that have been found valid get executed.
This guarantees that atomicity is preserved even after transactions are split into chunks.
A consequence is that transactions cannot depend on other transactions contained in the
same block since the validity check is performed against the state at the previous block.
Another implication is that an honest user who, for example, conditions payment on slippage,
has slightly less control over the slippage incurred from previous transactions on B_i.
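A minimal Python sketch of this delivery rule is given below. It assumes, purely for illustration, that a transaction is a dict with a 'condition' predicate over the state and a list of (sender, receiver, amount) payments; the real transaction format is richer. The point it shows is that validity is decided serially against the previous block's state before any chunk is executed, and that chunks of a transaction are executed all-or-none.

def deliver_block(prev_state, txs, m, permutation):
    # Step 1: validity of each transaction is checked serially against the state
    # at the end of the previous block (prev_state); no intra-block dependencies.
    valid = [tx for tx in txs if tx["condition"](prev_state)]
    # Step 2: chunk every payment of the valid transactions into m equal parts ...
    chunks = [(s, r, a / m) for tx in valid for (s, r, a) in tx["payments"] for _ in range(m)]
    # ... and execute the chunks in the permuted order (all-or-none per transaction,
    # since invalid transactions contribute no chunks at all).
    state = dict(prev_state)
    for s, r, a in (chunks[i] for i in permutation(len(chunks))):
        state[s] = state.get(s, 0.0) - a
        state[r] = state.get(r, 0.0) + a
    return state

# toy usage: one unconditional payment, identity "permutation"
tx = {"condition": lambda st: True, "payments": [("Alice", "Bob", 1.0)]}
print(deliver_block({"Alice": 2.0}, [tx], m=4, permutation=lambda n: range(n)))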
§.§ Details
Implements: atomic broadcast
Uses: block-based atomic broadcast
State:
σ[i,j], for all i ≥ 1, j ∈ [n_ℓ]
c[i,j], for all i ≥ 1, j ∈ [n_ℓ]
upon event ⟨, ⟩ do
invoke ⟨, ⟩
upon event ⟨, , Q ⟩ do
i_open. - τ_1 - 1
for i' ∈ [i_open - n_ℓ - 1, i_open - 1] do
if .[i'].miner = P then
Open(.[i'].[i_open - i'])
upon event ⟨, ⟩ do
i_del. - τ_1 - τ_2
for j ∈ [n_ℓ] do // Read commitments for block
c[i_del,j] .[i_del - j].[j]
for i' ∈ [i_del + τ_1 + 1, i_del + τ_1 + τ_2] do // Read the openings for block
for ∈.[i']. such that = open(k, l, σ) do
if k + l = i_del and H(σ) = c[i_del, l] then
σ[i_del, l] σ
seed 0^λ
for j ∈ [n_ℓ] do // Compute final permutation for block
if σ[i_del, j] ≠ then
seedseed⊕σ[i_del, j]
Σ = PermFromRandBits(G(seed))
txs.() // Chunk and permute transactions in block
chunked_txs [ ]
for ∈txs do
chunked_txschunked_txsChunk(, m)
chunked_and_permuted_txsPermute(Σ,chunked_txs)
for ∈chunked_and_permuted_txs do // Deliver each transaction
invoke ⟨, ⟩
function () :
[ ]
for ∈ do
tx
for j ∈ [n_ℓ] do
σ$^λ
c H(σ)
c
return
function () :
if (∃∈.: ())(∃ j ∈ [n_ℓ]: .[j] = ) then
return
return
Protocol . Code for party P.
In Algorithm <ref> we show the pseudocode for protocol ,
which implements an atomic broadcast () primitive.
The pseudocode assumes an underlying protocol Π, which is modeled as a block-based atomic broadcast () primitive,
as defined in Section <ref>.
The user or high-level application interacts with by invoking () events. These are handled
by invoking the corresponding () event on the underlying protocol Π (lines <ref>-<ref>).
Protocol Π outputs an event (, Q) whenever some party Q mines a new block (line <ref>).
For , the mining of a new block at height i starts the opening phase for the block at height
i_open = i - τ_1 - 1 (line <ref>).
Hence, party P loops through the n_ℓ blocks before i_open and checks whether it is the miner
of each of them (lines <ref>-<ref>).
If this is the case, P must provide a valid opening to the commitment related to block at height i_open.
The opening is achieved by a specific type of transaction, for example through a call to a smart contract.
In the pseudocode we abstract this into a function Open().
Protocol Π outputs an event () whenever a block is delivered (line <ref>).
According to the analysis of our protocol, this allows delivering the block
τ_1 + τ_2 positions below , i.e., the block _del at height i_del = . - τ_1 - τ_2.
To this goal, first reads the commitments related to _del (lines <ref>-<ref>).
By construction of , a commitment c_i,j, written on block _i, is used to order the
transactions in block _i+j.
Hence, the commitments related to _del have been written on the n_ℓ blocks before _del.
Protocol then reads the openings to these commitments (lines <ref>-<ref>). Again by construction of ,
the openings of
the commitments related to _del have been written on the blocks with height i_del+τ_1 + 1 to
i_del+τ_1 + τ_2. For each of these blocks, loops through its transactions that contain an opening.
Line <ref> then checks whether the opening is for a commitment related to block _del
and whether the opening is valid.
Protocol then calculates the final permutation Σ to be applied to block (lines <ref>-<ref>).
As presented in Section <ref>,
Σ = PermFromRandBits(G(seed)),
where seed is the XOR of all valid openings for block ,
G is a pseudorandom generator, and PermFromRandBits an algorithm that derives a permutation from random bits.
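A possible realization of this step is sketched below in Python. The hash-based generator (SHA-256 seeding Python's random module) and the 256-bit opening size are stand-ins for the unspecified G and security parameter; only the overall structure, XOR the available openings, expand the seed, and derive a permutation from the random bits via a Fisher-Yates shuffle, follows the text.

import hashlib
import random

def perm_from_openings(openings, n, lam=256):
    # seed = XOR of all valid openings; missing openings are simply skipped
    seed = 0
    for sigma in openings:
        if sigma is not None:
            seed ^= sigma
    # expand the seed with a PRG (here SHA-256 feeding a Mersenne Twister,
    # a stand-in for the cryptographic generator G)
    digest = hashlib.sha256(seed.to_bytes(lam // 8, "big")).digest()
    rng = random.Random(int.from_bytes(digest, "big"))
    # PermFromRandBits: a Fisher-Yates shuffle driven by the generator's output
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

# example: two opened seeds, one withheld (None), permutation of 8 chunk positions
print(perm_from_openings([0x1234, None, 0xABCD], n=8))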
The remainder of this handler chunks the transactions contained in (lines <ref>-<ref>)
and applies Σ on the chunked transactions (line <ref>). The function Chunk() works as explained in Section <ref>.
Finally, delivers the chunked and permuted transactions through () events (lines <ref>-<ref>).
The function () is an upcall from . It specifies how a block is filled with transactions and additional data.
For simplicity, the pseudocode omits any detail specific to . It first writes all given transactions on the block, then picks uniformly at random n_ℓ bit-strings of length λ.
These are the partial random seeds to be used in the permutation of the following n_ℓ blocks, if the block
that is currently being built gets mined and delivered by . The commitments to these partial seeds are appended on the block.
Finally, the predicate () specifies that a block is valid if all its transactions are valid, as specified by (),
and if it contains n_ℓ commitments. The predicate () is omitted, as its implementation does not affect .
§ ANALYSIS
§.§ Security analysis
We model the adversary as an interactive Turing machine (ITM) that corrupts up to t parties at the beginning of the execution. Corrupted parties follow the instructions of the adversary and may diverge arbitrarily from the execution of the protocol. The adversary also has control over the diffusion functionality. That is, she can schedule the delivery of messages (within the Δ rounds), as well as read the RECEIVE_i of every party at any moment of the execution and directly write in the RECEIVE_i of any party.
We first show that the security of our construction is derived from the security of the original protocol. Given an execution of protocol Π^3, we define the equivalent execution in protocol Π as the execution in which every party follows the same steps but the commitment, opening, and randomization of transactions are omitted. We also recall the parameters τ_1 and τ_2 that denote the length (in blocks) of the silent and loud phase respectively.
The probability that an adversary can rewrite a block after any honest partial permutations have been opened is negligible in τ_1.
Consider an adversary controlling up to t parties and a block B. We know that if τ_1>d, protocol Π delivers block B; thus an adversary cannot revert the chain to modify the order of the transactions stored in B, except with negligible probability.
The probability that an adversary can rewrite a chain omitting the opening of some honest partial permutation is negligible in τ_2.
The fairness quality of protocol Π states that for any consecutive N blocks, if N≥ N_0 the fraction of honest blocks is at least μ. Thus, if τ_2≥max{N_0,1/μ}, there exists at least one honest block containing every opening that is not previously included in the chain. Since honest miners append every valid opening they observe, the adversary can omit an honest opening only if none of these τ_2 blocks is honest, which happens with negligible probability.
Our construction aims to turn any protocol into a protocol robust against sandwich attacks. However, there might be new vulnerabilities. Intuitively, our construction should not introduce any vulnerability because the only modified aspect is the order in which transactions are delivered. Theorem <ref> formalizes this intuition.
If protocol Π implements block-based atomic broadcast, then the Partitioned and Permuted Protocol Π^3 implements atomic broadcast.
Note that by construction a transaction is delivered according to Π^3 if and only if Π delivers a block containing it.
Validity. Assume that an honest party M_i broadcasts a transaction tx that never gets delivered in Π^3. This implies that Π does not deliver a block containing transaction tx, which contradicts the validity property of Π.
No-duplication. Notice that Π^3 only delivers transactions contained in blocks delivered by Π. Thus, consider a transaction tx and an honest party delivering tx more than once in Π^3. There are two possibilities. On the one hand, if tx is delivered twice because the same block is delivered twice according to Π, this violates the no-duplication property of Π. On the other hand, if tx is delivered twice because different blocks containing it are delivered according to Π, this violates the external validity property of Π.
Agreement. Assume that party M_i delivers transaction tx in Π^3, then there exists a block B containing transaction tx which was delivered by party M_i in Π. Due to the agreement property of Π, every honest party M_j eventually delivers block B, in Π.
Hence, party M_j eventually delivers transaction tx in Π^3.
Total order. The total order property of Π guarantees a partial order of delivery for transactions contained in different blocks in Π^3. For transactions contained in the same block, the order is defined by the random permutations. According to Lemmas <ref> and <ref>, all correct parties agree on the same permutation with all but negligible probability. Notice that all the openings of the permutation to be applied to block B_i must be included in blocks B_i+τ_1+1,...,B_i+τ_1+τ_2, which, by the time a party delivers the transactions in block B_i according to Π^3, have been delivered by Π. We conclude by using the agreement and total order properties of Π to guarantee that every party agrees on the same openings, and thus on the same permutations and order of delivered transactions.
External validity. This follows from the external validity of Π.
Having shown that Π^3 is as secure as the original protocol Π, we now turn our attention to analyzing the behavior of Π^3 under sandwich attacks in the upcoming section.
§.§ Game-theoretic analysis
Here, we aim to show that if we assume all miners are rational, i.e., they prioritize maximizing their own payoff, then behaving honestly according to our protocol Π^3 is a stable strategy.
Strategic games
For N∈ℕ, let Γ = (N, (S_i), (u_i)) be an N party game where S_i is a finite set of strategies for each party i ∈ [N].
Let S := S_1 ×⋯× S_N denote the set of outcomes of the game.
The utility function of each party i, u_i: S →ℝ, gives the payoff of party i given an outcome of Γ.
For any party i, a mixed strategy s_i is a distribution in μ(S_i).
A strategy profile of Γ is s := s_1 ×⋯× s_N where s_i is a mixed strategy of party i.
The expected utility of a party i given a mixed strategy profile s is defined as u_i(s) = 𝔼_a_1∼ s_1, …, a_N∼ s_N[u_i(a_1, …, a_N)].
Finally, we note that if s_i is a Dirac distribution over a single strategy a_i ∈ S_i, we say s_i is a pure strategy for party i.
Notation
Let w denote the total reward for mining a block and q the negligible probability that a PPT adversary guesses a correct opening.
Recall in <Ref> that the total block reward w is split between the miner of the block who gets α· w and the miners that append the correct openings who get (1-α)· w/n_ℓ for each correct opening they append.
For a given block, we denote by m the number of chunks for each transaction in the block, and by λ the utility of the sandwich attack on the block.
Specifically, λ refers to the utility of a sandwich attack performed on the original transactions in the order they are in before chunking and permuting them.
We also denote the optimal sandwich utility by Λ, which is the maximum utility one can get by performing a sandwich attack.
Finally, we denote by λ̂_i the average utility of the sandwich attack taken over all blocks on the chain for a specific miner M_i.
This can be computed easily as the transaction mempool is public.
We stress that it is important to look at the average sandwich utility for each miner separately and not the average over all miners as the utility a miner can derive from a sandwich attack depends on their available liquidity (i.e., how much assets they can spare to front-run and back-run the transactions).
Quasi-strong ε-Nash Equilibrium
In terms of game theoretic security, we want our protocols to be resilient to deviations of any subset of miners that form a coalition and deviate jointly.
The security notion we want to achieve is that of a quasi-strong ε-Nash Equilibrium.
Let C denote the coalition of parties.
For any strategy profile s, we denote by u_C(s) the expected utility of the coalition under s.
We denote by u_C(s'_C, s_-C) the expected utility of the coalition when playing according to some other strategy profile s'_C given the other parties that are not part of the coalition play according to s.
(quasi-strong ε-Nash Equilibrium)
A quasi-strong ε-Nash Equilibrium is a mixed strategy profile s such that for any other strategy profile s'_C, u_C(s) ≥ u_C(s'_C, s_-C) - ε for some ε >0.
The notion of a quasi-strong Nash Equilibrium is particularly useful in the context of blockchains as the coalition could potentially be controlled by a single miner with sufficient resources <cit.>.
The notion of an ε-equilibrium is also important in cases where there could be a small incentive (captured by the ε parameter) to deviate from the protocol, and of course the smaller one can make ε, the more meaningful the equilibrium.
Subgame perfection
We also consider games that span several rounds and we model them as extensive-form games (see, e.g., <cit.> for a formal definition).
Extensive form games can be represented as a game tree T where the non-leaf vertices of the tree are partitioned to sets corresponding to the players.
The vertices belonging to each player are further partitioned into information sets I which capture the idea that a player making a move at vertex x∈ I is uncertain whether they are making the move from x or some other vertex x' ∈ I.
A subgame of an extensive-form game corresponds to a subtree in T rooted at any non-leaf vertex x that belongs to its own information set, i.e., there are no other vertices that are the set except for x.
A strategy profile is a quasi-strong subgame perfect ε-equilibrium if it is a quasi-strong ε-Nash equilibrium for all subgames in the extensive-form game.
The induced game
Let us divide our protocol into epochs: each epoch is designed around a given block say B_i and begins with the generation of random seeds for B_i and ends with appending the openings for the committed random seeds for B_i (i.e., block B_i + τ_1 + τ_2).
We define the underlying game Γ induced by any given epoch of our protocol Π^3.
Γ is a (τ_2 +1)-round extensive form game played by n_ℓ + τ_2 parties (n_ℓ leaders comprising the leader set L_i for any block B_i and the τ_2 miners that mine the blocks B_i+τ_1 + 1… B_i + τ_1 + τ_2).
Note that although there are N choose τ_2 possible sets of τ_2 miners that could mine the blocks B_i+τ_1 + 1… B_i + τ_1 + τ_2 (where N is the total number of miners in the chain), we can simply fix any set of τ_2 miners together with L_i to be the parties of Γ, as we assume all miners are rational and so the analysis of the utilities of any set of τ_2 miners will be the same in expectation.
We use A to denote this set of τ_2 miners.
In what follows, we assume an arbitrary but fixed ordering of the miners in A.
Round 1 of Γ consists of only the parties in L_i performing actions, namely picking a random seed and committing to it.
In rounds 2, … , τ_2 +1 of Γ, each member of L_i can act by choosing to open their commitment or not.
However, the moment a member of L_i opens its commitment in a given round, they lose the chance to open their commitment in any subsequent round.
In each of the rounds 2 to τ_2 +1 of Γ, exactly one miner from A acts, following the imposed ordering.
The actions available to this miner are the subsets of the set of existing commitment openings (from members of L_i) to append to their block.
Finally, we note that L_i ∩ A is not necessarily empty, and thus miners in the intersection can choose to open and append their own commitment in the same round.
Let us define the honest strategy profile as the profile in which all members of L_i choose to generate a random seed in round 1 of Γ, all members of L_i open their commitments at round τ_2 (i.e., at block B_i+τ_2 -1), and each member of A appends all existing opened commitments that appear in the previous round.
We denote the honest strategy profile by s.
The security notion we want to achieve for our protocol is a quasi-strong subgame perfect ε-equilibrium (refer to <Ref>).
Looking ahead, we will also prove that ε can be made arbitrarily small by increasing the number m of chunks.
The expected utility of an honest leader is at least (1-q)^n_ℓα w.
The expected utility of a leader following the honest strategy is the sum of the block reward, the expected utility from the ordering of any of their transactions within the block, and the rewards for appending valid openings of committed seeds (if any) to their blocks.
The expected utility from the ordering of transactions is 0 due to symmetry: each possible order is equally likely, and for each order that gives some positive utility, there exists a different order producing the same negative utility.
The expected utility from the block reward is (1-q)^n_ℓα w.
Thus, the total expected utility of an honest miner is at least (1-q)^n_ℓα w.
We outline and analyze two broad classes of deviations or attacks any coalition can attempt in this setting.
The first class happens at round 1 of Γ where the members of the coalition commit to previously agreed seeds to produce a specific permutation of the transactions.
The coalition then behaves honestly from round 2 to τ_2 +1 of Γ.
We call this attack the chosen permutation attack and denote this attack strategy by s_CP.
In the second class, the coalition behaves honestly at round 1 of Γ, but deviates from round 2 onwards where some members selectively withhold opening or appending commitments to bias the final permutation.
We call this attack the biased permutation attack, and denote it by s_BP.
Chosen permutation attack
Before we describe and analyze the chosen permutation attack (say, on a block B_i), we first show a necessary condition for the attack to be successful, that is, for the coalition's desired permutation to occur almost surely: all n_ℓ leaders in L_i have to be involved in the coalition (members of A can also be involved; however, as we will show, this only increases the cost).
To do so, we let S denote the set of permutations over the list of transactions and their chunks, and we define what it means for a protocol Π_perm (involving n parties) to output a random permutation in S via the following indistinguishability game, called random permutation indistinguishability, played between a PPT adversary, a challenger, and the protocol Π_perm. First, the adversary corrupts up to n-1 parties. The adversary has access to the corrupted parties' transcripts. Then, the challenger samples σ_0 uniformly at random from S and sets σ_1 to be the output of Π_perm. After that, the challenger flips a random bit b and sends σ_b to the adversary. The game ends with the adversary outputting a bit b'. If b'=b, the adversary wins the game.
We say a protocol Π_perm outputs a random permutation if the adversary wins the above game with probability at most 1/2 + ε for some negligible ε.
Let us define the output of a single round of Π^3 as the random permutation that is generated from the seeds generated from all leaders in the round according to the algorithm described in <Ref>.
The following lemma states that as long as at least one leader is honest, the output of Π^3 is pseudorandom.
An adversary that corrupts at most n_ℓ-1 leaders in a single round of Π^3 can only win the random permutation indistinguishability game with negligible probability.
The proof follows in the same way as introduced by M. Blum <cit.>, with the addition of the PRG.
Lemma <ref> implies that launching the chosen permutation attack and thus choosing to deviate at round 1 of Γ comes with an implicit cost: either a single miner has to mine n_ℓ blocks in a row so the miner single-handedly forms the coalition, or all leaders in L_i have to be coordinated into playing according to a predefined strategy.
Given the underlying blockchain is secure, the expected utility of the single miner when playing according to s_CP is at most λ/2^n_ℓ more than the expected utility of following the honest strategy.
Since the underlying blockchain is secure, a necessary condition is that a single miner cannot own more than 1/2 of the total amount of resources owned by all miners of the protocol.
Thus, the probability of mining n_ℓ blocks in a row is strictly less than 1/2^n_ℓ.
This means that the expected utility under the attack strategy u_C(s_CP) < λ/2^n_ℓ + n_ℓα w, which is at most λ/2^n_ℓ larger than the expected utility under the honest strategy which is u_C(s) = n_ℓα w.
The attack strategy of a coalition composed of more than one miner is more complex compared to the case of a single miner, as the coalition needs to ensure its members coordinate their strategies.
First, the coalition works with the miner of block B_i to select and fix a permutation generated by a specific PRG seed σ_i. Then, the coalition secret shares σ_i among its members[This not only prevents members from knowing the partial seeds of other members and hence stealing their block reward, but also additionally safeguards the partial seeds of the members against the miner of block B_i who cannot generate a partial seed of their block and hence has nothing to lose.]. After that, the coalition sets up some punishment scheme to penalize members that do not reveal their partial seeds[This ensures that every member will reveal their partial seeds and the permutation will be generated properly.]. Finally, the coalition commits and reveals these partial seeds in accordance with the protocol Π^3.
Let 𝒞 denote the expected cost of coordinating the whole chosen permutation attack for the coalition. For this attack to succeed, the expected coordination cost has to be smaller than the expected profit λ.
The chosen permutation attack fails to be profitable compared to the honest strategy if 𝒞 > λ.
From Lemma <ref>, the expected revenue of an honest miner is (1-q)^n_ℓα w, thus the expected revenue of the coalition when following the honest strategy is u_C(s) = n_ℓ· (1-q)^n_ℓα w.
The expected revenue for the chosen permutation attack strategy is u_C(s_CP) = n_ℓ· (1-q)^n_ℓα w + λ - 𝒞.
Thus, assuming 𝒞 > λ, and since the expected revenue from a mixed strategy is a convex combination of the revenues of the honest and attack strategies, the pure honest strategy gives a strictly larger expected payoff compared to any mixed strategy.
Computing, or even estimating, the coordination cost is non-trivial as it consists of several dimensions and also depends on a myriad of factors and assumptions.
A few notable costs are, firstly, timing costs.
The coalition has to convince and coordinate all the leaders to agree on a permutation and also commit and reveal them during a short interval of d blocks.
This involves the cost of securely communicating with all the leaders and also the computational cost involved in setting up the secret sharing scheme.
A second factor is the choice of the initial order of transactions, which the coalition would have to also agree on with the miner of the attacked block.
Picking transactions greedily would be the simplest choice as finding the optimal set of transactions from the mempool is NP-hard <cit.>.
Finally, the coalition has to set up a punishment scheme to penalize members that do not reveal their permutations.
If we ignore the cost of setting up such a scheme, this can be implemented using a deposit scheme with the size of the deposit at least the value of the expected additional per user profit from the sandwich attack <cit.>.
This implies an opportunity cost at least linear in λ/n_ℓ, as well as the assumption that each member has at least λ/n_ℓ to spare to participate in the attack.
Finally, we note that the coalition could extend to miners from A which are outside the leader set L_i. However, since these miners do not contribute to generating the random seeds, they simply add to the communication cost of the coalition.
Biased permutation attack
The intuition behind this attack is that any coalition that controls k ≤ n_ℓ commitments can select which ones to open or append, which allows the coalition to choose among 2^k possible permutations in order to bias the final ordering.
This can be achieved in two situations: either k out of the n_ℓ leaders of L_i form a coalition and decide which of their commitments to open, or some subset of miners in the loud phase (of size, say, k') form a coalition and end up controlling k' openings; let κ := min{k, k'}.
Unlike in the case of the chosen permutation attack, it suffices to consider the case of a single miner that happens to either occupy the k leader positions among the group of leaders L_i or mine the k' blocks that belong to the coalition in the loud phase.
This is because the case where a coalition of distinct miners that collude only adds additional coordination cost.
The probability that any such coalition gains any additional utility by performing the biased permutation attack compared to the honest strategy can be upper-bounded.
Let revenue denote the utility the coalition would gain from performing the biased permutation attack.
The probability that a coalition of κ members performing the biased permutation attack achieves utility of at least κ w >0 is
Pr[revenue ≥ κ w] ≤ 1-(1-e^-2mκ w/λ)^2^k.
Given a random permutation and a sandwich attack with original utility λ (the utility if the order of the transactions were not randomized), denote by {X_i(σ)}_i=1^m the utility produced by chunk i. The sum of these random variables, X(σ)=∑_i=1^m X_i(σ), represents the total utility of the sandwich attack (after chunking and permuting). X takes values in [-λ,λ]; thus the variables {X_i(σ)} take values in [-λ/m,λ/m], are identically distributed and are independent. We define the random variables Y_i(σ)=X_i(σ)+λ/m∈[0,2λ/m] and Y(σ)=∑_i=1^m Y_i(σ)∈[0,2λ]. Using Lemma <ref>, 𝔼[Y_i(σ)]=λ/m and
𝔼[Y(σ)]=λ. Applying Chernoff's bound <cit.> to Y,
Pr[Y(σ)≥(1+δ)𝔼[Y(σ)]] ≤ e^-2δ^2𝔼[Y(σ)]^2/(m(λ/m)^2) =
e^-2mδ^2
for δ>0. We can rewrite Equation <ref> as follows:
Pr[revenue(σ)≥δλ] = Pr[revenue(σ)+λ≥ (1+δ)λ] = Pr[Y(σ)≥(1+δ)𝔼[Y(σ)]] ≤ e^-2mδ^2.
Taking complements, we obtain Pr[revenue(σ)≤δλ] ≥ 1-e^-2mδ^2.
Considering the maximum over the 2^k possible permutations σ and setting δ=κ w/λ, we conclude that
Pr[revenue≥κ w] = 1-Pr[revenue(σ)≤κ w for every σ] ≤ 1-(1-e^-2mκ w/λ)^2^k.
The probability that a coalition of κ members has positive additional utility is:
Pr[revenue≥ 0] ≤ max_k'≤κ{1-(1-e^-2mk' w/λ)^2^k}.
Lemma <ref> bounds the probability that a coalition of κ parties obtains a utility of at least κ w > 0, which is the penalty for not opening κ commitments. Thus, the general case of a coalition aiming to maximize profit is obtained by taking the maximum over k'≤κ.
Recall that Λ is the maximal utility and
let p_k,λ denote max_k'≤κ{ 1-(1-e^-2m(1-q)^n_ℓk'w/λ)^2^k}.
The expected additional utility from the biased permutation attack of a single miner controlling k leaders is no greater than p_k,λΛ.
This follows from the upper bound on p_k,λ from Lemmas <ref> and <ref>, and the fact that we defined the maximum utility from any permutation as Λ.
Lemmas <ref> and <ref> allow us to prove our main theorem.
Suppose 𝒞 > λ, then the honest strategy s = ((random seed)_i=1^n_ℓ,(open)_i=1^n_ℓ) is a quasi-strong subgame perfect ε-equilibrium in Γ for ε = max{λ/2^n_ℓ, p_k,λΛ}.
We first observe that the expected utility of a coalition that mixes both the chosen and biased permutation attack strategies is no greater than the expected utility of a coalition that performs the chosen permutation attack with a different chosen permutation that accounts for the biasing of the permutation in the second round of Γ.
Hence, it suffices to analyze the expected utility of the coalition when implementing either of these strategies, i.e., deviating at round 1 of Γ or from rounds 2 onwards.
We first analyze the expected utility of a coalition when implementing the chosen permutation attack, which occurs at round 1 of Γ.
Since we assume 𝒞 > λ, from Lemma <ref> and Lemma <ref>, we see that any additional expected payoff of any coalition that deviates only at round 1 of Γ by implementing the chosen permutation attack compared to the expected revenue of behaving honestly is at most λ/2^n_ℓ.
Now we analyze the expected utility of a coalition when implementing the biased permutation attack.
From Lemma <ref>, we see that the strategy that implements the biased permutation attack across all of rounds 2 to τ_2 +1 of Γ only gives at most p_k,λΛ more payoff in expectation compared to following the honest strategy s in these rounds.
As such, if we set ε = max{λ/2^n_ℓ, p_k,λΛ} to be the largest difference in additional expected revenues between both strategies, we see that s = ((random seed)_i=1^n_ℓ,(open)_i=1^n_ℓ) is a quasi-strong subgame perfect ε-equilibrium of Γ.
Recall that ε bounds the additional expected utility an adversary can gain by deviating from the honest strategy profile s.
The security of our protocol therefore improves as ε = max{λ/2^n_ℓ, p_k,λΛ} decreases.
We observe that the first component λ/2^n_ℓ goes to 0 exponentially as the size of the leader set n_ℓ increases.
As for the second component p_k, λ, we conduct an empirical analysis of sandwich attacks on Ethereum, <Ref>, to estimate p_k, λ and we show that this value approaches zero as the number of chunks m increases.
§ CASE STUDY: ETHEREUM MEV ATTACKS
We validate the utility of our results with real-world data from Ethereum.
Specifically, we estimate, using Lemma <ref>, the probability that a coalition of k parties obtains positive revenue,
for various values of k, sandwich revenue λ, and chunks m.
We conclude by analyzing the overhead incurred by protocol as a function of m and its security-efficiency trade-offs.
Empirical security analysis
We obtain the data on the profit of sandwich attacks on Ethereum using the Eigenphi tool[<https://eigenphi.io/mev/ethereum/sandwich>] for October 2022.
To convert between ETH and USD we use the price of ETH as of October 31st, approx. 1,570 USD.
The block reward at this time is 2 ETH[<https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1234.md>], or approx. 3,140 USD.
In Figure <ref> we show the number of attacks in bins of increasing profit,
as returned by Eigenphi.
From this data we make use of two facts.
First, 99.97% of the attacks had profit lower than 10K USD, or approx. 6.37 ETH,
and second, the most profitable sandwich attack had a profit of
170,902.35 USD, or approx. 109 ETH.
Hence, we define λ_99.97 = 6.37 and λ_max = 109.
In Figure <ref> we plot
an upper bound for the probability of positive revenue for a coalition of k leaders,
considering a sandwich revenue of λ_max (Figure <ref>)
and λ_99.97 (Figure <ref>).
We observe that, even for the largest observed sandwich revenue, λ_max,
the probability of a profitable attack drops below 0.5 for m=33.
For λ_99.97, the probability is low even for small values of m. For example, already for
m = 2 we get p_k, 6.37≃ 0.49, and for m = 10 we get p_k, 6.37≃ 0.0038, for all k ≥ 1.
We also remark that Lemma <ref> states an upper bound on the probability that the adversary obtains some positive revenue.
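The two numbers quoted above can be reproduced with a few lines of Python from the bound stated earlier, under the assumptions that a single opening is withheld (k = k' = 1), the guessing probability q is negligible (so the (1-q)^n_ℓ factor is close to 1), and the forfeited reward is the 2 ETH block reward.

import math

def p_bound(m, lam, w=2.0, k=1, k_prime=1):
    # bound from the text: 1 - (1 - e^{-2 m k' w / lambda})^{2^k}
    return 1 - (1 - math.exp(-2 * m * k_prime * w / lam)) ** (2 ** k)

print(round(p_bound(m=2,  lam=6.37), 2))   # ~0.49
print(round(p_bound(m=10, lam=6.37), 3))   # ~0.004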
Overhead
In terms of space,
each block contains exactly n_ℓ commitments and on average n_ℓ openings of partial seeds.
A commitment to a partial seed takes 256 bits of space.
Assuming openings are implemented as a call open(i,j,σ_i,j) to a smart contract,
where i and j are 16-bit integers
and an address is 160 bits long, then each opening consumes
468 bits on the block.
In total, incurs on average an overhead of
724 n_ℓ bits per block.
As an example, for n_ℓ = 10, this results in an average overhead of less than 1 KB per block.
We remark that chunking of transactions happens locally,
and hence adds no space overhead on the block.
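As a quick sanity check of these numbers, the per-block space overhead can be computed directly (a small Python sketch; the per-item sizes are taken from the text):

n_l = 10
per_leader_bits = 256 + 468              # one commitment plus, on average, one opening
overhead_bits = per_leader_bits * n_l
print(overhead_bits, overhead_bits / 8 / 1024)   # 7240 bits, i.e. roughly 0.88 KB per block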
Concerning execution, there are two main sources of overhead in .
First, when a block is delivered, parties compute the final permutation of its n_tx m transactions
using PermFromRandBits(), which has linear-logarithmic bit complexity.
The overhead is thus O(n_tx m ·log(n_tx m)).
Moreover, parties execute n_tx m - n_tx more transactions,
which incurs an overhead of O(n_tx m).
In total, considering n_tx to be a constant and m a parameter to ,
the execution overhead scales as O(m log m).
We remark here that the vast majority of computational resources is used in the mining mechanism,
and this is not changed from Π.
Finally, incurs an increased latency when delivering transactions.
While Π has a latency of d blocks, has a latency of τ_1 + τ_2 + d blocks.
As discussed earlier, τ_1 affects the probability of rewriting a block, after the commitments that
order its transactions have been opened, while τ_2 affects the time frame in which miners can open their commitments.
Security-efficiency tradeoffs
We first observe a tradeoff between security of and computational overhead.
On the one hand, increasing the number of chunks improves the security of Π^3.
Recall that ε = max{λ/2^n_ℓ, p_k,λΛ}, and
in <Ref> we highlight that p_k,λΛ goes to zero as the number of chunks m increases.
In Figure <ref> we see how p_k,λ changes with m, based on historical data.
Specifically, for the vast majority of observed sandwich attacks (in Figure <ref> we use
the 99.97-th percentile) the probability of a coalition to succeed drops exponentially with m.
On the other hand, the execution overhead increases as O(m log m).
Moreover, the size of leader set n_ℓ leads to the following tradeoff.
In <Ref> we show that the security of Π^3 against rational adversaries improves exponentially with n_ℓ.
However, the size of the leader set also determines the number of leader sets each miner needs to be a part of,
and hence the length of time each miner has to wait until it receives its block reward.
§ CONCLUSION
In this paper we introduced a new construction that can be implemented on top of any blockchain protocol, with three main properties. First, the construction does not add any vulnerability to the old protocol, i.e., the security properties remain unchanged. Secondly, performing sandwich attacks in the new protocol is no longer profitable. Thirdly, the construction incurs minimal overhead, with the exception of a minor increase in the latency of the protocol.
Our empirical study of sandwich attacks on the Ethereum blockchain also validates the design principles behind our protocol, demonstrating that our protocol can be easily implemented to mitigate sandwich MEV attacks on the Ethereum blockchain.
§ ACKNOWLEDGMENTS
We thank Alice and Bob for interesting discussions about cryptography and distributed systems.
This work has been funded by the Swiss National Science Foundation (SNSF)
under grant agreement Nr. 200021_188443 (Advanced Consensus Protocols).
§ SANDWICH MEV ATTACKS
Decentralized exchanges
Decentralized exchanges (DEXes) allow users to exchange various cryptocurrencies in a decentralized manner (i.e., in a peer-to-peer fashion without a central authority).
Some examples of DEXes on the Ethereum blockchain are
Uniswap[<https://uniswap.org>] and Sushiswap[<https://sushi.com>].
DEXes typically function as constant product market makers (CPMMs) <cit.>, i.e., the exchange rate between any two underlying assets is automatically calculated such that the product of the amount of assets in the inventory remains constant.
As an example, consider the scenario where a user at time t wants to swap δ_X of asset X for asset Y in the X ⇌ Y liquidity pool, and suppose the pool has X_t and Y_t amount of assets X and Y in its inventory at time t.
The user would receive
δ_Y = Y_t - X_t · Y_t/(X_t + (1-f) δ_X) = Y_t (1-f) δ_X/(X_t + (1-f)δ_X)
amount of asset Y for δ_X amount of asset X, where f is a fee charged by the pool <cit.>.
We can thus compute the exchange rate of X to Y at time t as
ρ^XY_t := δ_X/δ_Y = (X_t + (1-f) δ_X)/(Y_t(1-f)).
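These two formulas translate directly into code. The Python sketch below uses illustrative pool sizes and a 0.3% fee; both are assumptions made for the example rather than values taken from the text.

def swap_x_for_y(X_t, Y_t, delta_x, f=0.003):
    # amount of Y received for delta_x of X in a constant-product pool
    return Y_t * (1 - f) * delta_x / (X_t + (1 - f) * delta_x)

def rate_x_to_y(X_t, Y_t, delta_x, f=0.003):
    # effective exchange rate rho^{XY}_t = delta_x / delta_y
    return (X_t + (1 - f) * delta_x) / (Y_t * (1 - f))

delta_y = swap_x_for_y(1000.0, 1000.0, 10.0)
print(delta_y, rate_x_to_y(1000.0, 1000.0, 10.0), 10.0 / delta_y)  # the last two values agree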
Sandwich attack
As transactions in a block are executed sequentially, the exchange rate for a swap transaction could depend on where the transaction is located in the block.
From <Ref>, we note that the exchange rate from X to Y increases with the size of the trade δ_X.
Thus, if a transaction that swaps X for Y occurs after several similar X to Y swap transactions, the exchange rate for this particular transaction would increase.
Consequently, the user which submitted this transaction would pay more per token of Y as compared to if the transaction occurred before the other similar transactions.
In a sandwich attack, the adversary (usually miner) manipulates the order of the transactions within a block such that they can profit from the manipulated exchange rates.
Specifically, the adversary is given the list 𝒯 of all transactions that can be included in a block and can make two additional transactions t_1 and t_2: transaction t_1 exchanges some amount (say δ_X) of asset X for asset Y, and t_2 swaps the Y tokens from the output of t_1 back to X.
Let us denote the amount of tokens of X the adversary gets back after t_2 by δ'_X.
The goal of the adversary is to output a permutation over 𝒯∪{t_1, t_2} such that δ'_X - δ_X is maximized.
A common technique is to front-run all transactions exchanging X to Y in the block, i.e., place transaction t_1 before all transactions exchanging X to Y and t_2 after <cit.>.
We note that users can protect themselves by submitting a slippage bound sl>0 together with each transaction. However, this protection is only partial.
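The effect described above can be reproduced with a short Python simulation. The fee is set to zero for simplicity (the text later treats f as negligible), and the pool sizes and trade amounts are arbitrary toy values.

def swap_x_for_y(X, Y, dx):
    # zero-fee constant-product swap of dx units of X; returns (dy, new_X, new_Y)
    dy = Y * dx / (X + dx)
    return dy, X + dx, Y - dy

def swap_y_for_x(X, Y, dy):
    dx = X * dy / (Y + dy)
    return dx, X - dx, Y + dy

X, Y = 1000.0, 1000.0        # pool inventory
front, victim = 50.0, 100.0  # attacker front-runs with 50 X, victim swaps 100 X

a_y, X, Y = swap_x_for_y(X, Y, front)    # t_1: attacker buys Y cheaply
_,   X, Y = swap_x_for_y(X, Y, victim)   # victim's trade pushes the X->Y price up
a_x, X, Y = swap_y_for_x(X, Y, a_y)      # t_2: attacker sells Y back at the higher price
print(a_x - front)                       # roughly 9.7 X of profit at the victim's expense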
§ SANDWICH ATTACKS WITH RANDOM PERMUTATION
Here we outline a way an adversary can still launch a sandwich attack even when the transactions in a block are randomly permuted by carefully specifying slippage bounds.
Background for attack
The setting of the attack is as follows: suppose there is a particularly large transaction t^* (that hence impacts the exchange rates) swapping X for Y in the list of transactions 𝒯 in a block, and suppose the adversary is aware of t^* (maybe due to colluding with the miner of the block).
We make two further simplifying assumptions: first, that all other transactions are small and hence have negligible impact on the X ⇌ Y exchange rates, and second, that the swap fee f is negligible.
Like in the case of the classic block sandwich attack, the adversary can create 2 transactions t_1 and t_2 (together with slippage bounds), where t_1 exchanges some amount of X for Y, and t_2 exchanges the Y tokens back to X.
Unlike in the case of the classic sandwich attack, the adversary has no control over the final order of the transactions in the block as the transactions will be randomly permuted.
An advantageous permutation for the adversary would be any permutation such that t_1 comes before t^* and t^* comes before t_2 (hereafter we use the notation a ≺ b to denote a “comes before" b for two transactions a and b).
Permutations that would be disadvantageous to the adversary would be any permutation such that t_2 ≺ t^* ≺ t_1, as this is the precise setting where the adversary would lose out due to unfavorable exchange rates.
Any other permutations outside of these are acceptable to the adversary.
Utilities. Suppose both t_1 and t_2 are executed.
We assume the utility of the adversary is α∈ℝ^+ if the resulting permutation is advantageous, -α if the resulting permutation is disadvantageous, and 0 for all other permutations.
We assume the utility of the adversary is 0 if both their trades did not execute, as fees are negligible.
We further assume the following utilities if only 1 trade executes:
if only t_1 executes, the utility of the adversary is 0 if t^* ≺ t_1 and 0 ≤β < α if t_1 ≺ t^*.
The intuition behind this is that if t_1 executes before t^* in this block, there is a chance that when t_2 executes in the next block the adversary can still benefit from the favorable exchange rates due to advantageous permutation (albeit split over more than 1 block, thus the discount in utility).
In the same vein, if only t_2 executes, the utility of the adversary is 0 if t^* ≺ t_2 and -γ < -α if t_2 ≺ t^*.
The reason why γ > α is to not only take into account the potential loss to the adversary from the disadvantageous permutation, but also the opportunity cost of waiting more than 1 block for t_1 to execute.
Here, we denote by s the strategy where the adversary simply wants both transactions to execute and thus does not care about slippage, i.e., the slippage bounds for both transactions are set to ∞.
It is clear that the expected utility of the adversary under strategy s is 0 due to the fact that both advantageous and disadvantageous permutations occur with equal probability.
Sandwich attack by controlling slippage
We first describe the first attack where the adversary can gain positive expected utility just by being more precise in specifying slippage bounds.
The intuition behind this attack is that by specifying the slippage bounds to be extremely precise, the adversary can ensure that transaction t_1 always executes before t^*, and t_2 only executes if t_1 and t^* have executed.
The attack strategy (denoted by s_slip) is as follows:
* The adversary computes the current exchange rate of X to Y, denoted by ρ^XY_t_0, and sets the slippage bound on t_1 to be ρ^XY_t_0 + ε_1 for some ε_1 > 0.
* The adversary computes the hypothetical exchange rate of X to Y after an execution of t^*. We denote this rate by ρ^XY_t^*. The adversary also computes the hypothetical exchange rate of Y to X after an execution of t_1, t^*, and both t_1 and t^*. We denote these rates by ρ^XY_t_1, ρ^XY_t^*, and ρ^XY_t_1 + t^* respectively.
* The adversary sets the slippage bound for t_2 to be ρ^YX_t^* + ε_2 for some other ε_2 > 0.
If ε_1 < ρ^XY_t^* - ρ^XY_t_0 and ε_2 < min(ρ^YX_t_1, ρ^YX_t^*) - ρ^YX_t_1 + t^*, the expected utility of s_slip is α/6 + β/3 > 0.
Let σ be a random permutation over 𝒯∪{t_1,t_2}.
We denote by [σ_r,i] the position/index of the ith transaction after the permutation.
We will proceed case by case for each of the 6 different orderings of t_1, t_2, t^*.
* t_1 ≺ t^* ≺ t_2: both t_1 and t_2 would be executed. The utility of the adversary is α in this case.
* t_1 ≺ t_2 ≺ t^*: t_1 will be executed.
However, t_2 will not be executed as ρ^YX_t_1 + t^* + ε_2 < ρ^YX_t_1.
Since t_1 ≺ t^*, the utility of the adversary is β in this case.
* t^* ≺ t_1 ≺ t_2: t_1 will not be executed as ρ^XY_t_0 + ε_1 < ρ^XY_t^*.
t_2 will be executed.
Since t^* ≺ t_2, the utility of the adversary in this case is 0.
* t^* ≺ t_2 ≺ t_1: both t_1 and t_2 will not execute as ρ^XY_t_0 + ε_1 < ρ^XY_t^* and ρ^YX_t_1 + t^* + ε_2 < ρ^YX_t^*.
Since both trades will not execute, the utility of the adversary in this case is 0.
* t_2 ≺ t_1 ≺ t^*: t_1 will execute but t_2 will not execute as ρ^YX_t_1 + t^* + ε_2 < ρ^YX_t_0.
Since t_1 ≺ t^*, the utility of the adversary is β in this case.
* t_2 ≺ t^* ≺ t_1: both t_1 and t_2 will not execute as ρ^XY_t_0 + ε_1 < ρ^XY_t^* and ρ^YX_t_1 + t^* + ε_2 < ρ^YX_t_0.
Since both trades will not execute, the utility of the adversary in this case is 0.
Since each ordering is equally likely to occur, the expected utility of the adversary is α/6 + β/3 >0.
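The expected value can be checked by enumerating the six orderings with the utilities assigned in the case analysis above; the concrete α and β below are arbitrary values satisfying 0 ≤ β < α.

from itertools import permutations

alpha, beta = 1.0, 0.4   # any values with 0 <= beta < alpha

# utility of strategy s_slip for each ordering, taken case by case from the proof above
utility = {
    ("t1", "ts", "t2"): alpha,  # both trades execute: advantageous sandwich
    ("t1", "t2", "ts"): beta,   # only t_1 executes, before t^*
    ("ts", "t1", "t2"): 0.0,    # only t_2 executes, after t^*
    ("ts", "t2", "t1"): 0.0,    # neither trade executes
    ("t2", "t1", "ts"): beta,   # only t_1 executes, before t^*
    ("t2", "ts", "t1"): 0.0,    # neither trade executes
}

expected = sum(utility[p] for p in permutations(("t1", "t2", "ts"))) / 6
print(expected, alpha / 6 + beta / 3)   # both evaluate to 0.3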
Long-range sandwich attacks
Long-range sandwich attacks are attacks where an adversary aims to front and back run transactions over multiple blocks.
This can happen when the adversary mines more than 1 block in a row.
However, as the probability of mining more than 1 consecutive block is very small (and grows exponentially smaller in the number of consecutive blocks), the success probability of such an approach is similarly low.
An approach that would lead to a larger probability of success would be for the adversary to create two transactions t_1 and t_2 and split t_1 and t_2 into separate blocks such that t_2 only conditionally executes upon t_1 being on the chain.
Formally, instead of adding both t_1 and t_2 to the transaction list 𝒯 like in the above attack setting, the adversary now only adds t_1 to 𝒯 and waits until t_1 is on the chain to add t_2 to the transaction mempool.
We note that this can be done by wrapping transactions into smart contracts, which can handle the conditional execution of transactions based on some state of the blockchain.
This can also be done in Bitcoin-like blockchains by ensuring that the UTXO of t_1 is given as input to t_2.
The attack strategy (denoted by s_longslip) is as follows:
* The adversary computes the current exchange rate of X to Y, denoted by ρ^XY_t_0, and sets the slippage bound on t_1 to be ρ^XY_t_0 + ε_1 for some ε_1 >0.
* The adversary waits until t_1 has been executed (i.e. when the block which contains t_1 is gossiped), then either wraps t_2 into a smart contract that checks if t_1 is on the blockchain and if so executes t_2, or ensures that the UTXO of t_1 is given as input to t_2.
Recall that if t_1 ≺ t^* in a block and t_2 executes in some block after the block containing t_1, the utility of the adversary is 0 ≤β < α, and that the utility of the adversary is 0 if both trades do not occur.
We now show that the expected utility under s_longslip is also positive.
If ε_1 < ρ^XY_t^* - ρ^XY_t_0, the expected utility of s_longslip is β/2.
We note that the orderings t_1 ≺ t^* and t^* ≺ t_1 are equally likely to occur.
If t_1 ≺ t^*, t_1 would be executed together with t^* and thus t_2 would also be executed in some block after the block containing t_1.
The expected utility of the adversary is β in this case.
If t^* ≺ t_1, t_1 would not be executed, as ρ^XY_t_0 + ε_1 < ρ^XY_t^*.
Since t_1 did not execute, t_2 would not be executed and thus the utility of the adversary in this case is 0.
We can make the computation of expected utilities more precise by assuming that the utility of the adversary if t_2 is executed one block after t_1 is β, and multiplying β by δ^d for some discount factor δ <1 if t_2 is executed d blocks after t_1.
However, this would require detailed assumptions about the probability of transactions being selected from the mempool which can depend on fees and other factors.
This is beyond the scope of our paper, thus we leave this as an interesting direction of future work.
|
http://arxiv.org/abs/2307.02182v1
|
20230705102033
|
A Scheme to resist Fast Correlation Attack for Word Oriented LFSR based Stream Cipher
|
[
"Subrata Nandi",
"Srinivasan Krishnaswamy",
"Pinaki Mitra"
] |
cs.CR
|
[
"cs.CR"
] |
Department Of Computer Science and Engineering,Indian Institute of Technology Guwahati,India
[email protected]
[email protected]
Department of Electrical and Electronics and Electrical Engineering,Indian Institute of Technology Guwahati,India
[email protected]
A Scheme to resist Fast Correlation Attack for Word Oriented LFSR based Stream Cipher
Subrata Nandi1 Srinivasan Krishnaswamy2
Pinaki Mitra1
August 1, 2023
=====================================================================================
In LFSR-based stream ciphers, the knowledge of the feedback equation of the LFSR plays a critical role in most attacks. In word-based stream ciphers such as those in the SNOW series, even if the feedback configuration is hidden, knowing the characteristic polynomial of the state transition matrix of the LFSR enables the attacker to create a feedback equation over GF(2). This, in turn, can be used to launch fast correlation attacks.
In this work, we propose a method for hiding both the feedback equation of a word-based LFSR and the characteristic polynomial of the state transition matrix. Here, we employ a z-primitive σ-LFSR whose characteristic polynomial is randomly sampled from the distribution of primitive polynomials over GF(2) of the appropriate degree. We propose an algorithm for locating z-primitive σ-LFSR configurations of a given degree. Further, an invertible matrix is generated from the key. This is then employed to generate a public parameter which is used to retrieve the feedback configuration using the key. If the key size is n bits, the process of retrieving the feedback equation from the public parameter has an average time complexity of O(2^(n-1)).
The proposed method has been tested on SNOW 2.0 and SNOW 3G for resistance to fast correlation attacks. We have demonstrated that the security of SNOW 2.0 and SNOW 3G increases from 128 bits to 256 bits.
§ INTRODUCTION
Stream ciphers are often used to secure information sent over insecure communication channels. Here, the ciphertext is the XOR sum of a pseudo-random keystream and the plaintext. Many stream ciphers are designed using LFSRs as they are extremely easy to implement both in hardware and software. Various word-oriented stream cipher configurations have been proposed to effectively utilize the word-based architecture of modern processors. These include SNOW 2.0 <cit.>, SNOW 3G <cit.>, SNOW V <cit.> and Sosemanuk <cit.>. These ciphers are based on LFSRs with multi-bit delay blocks and allow for extremely quick software implementations.
The feedback configurations of these LFSRs are publicly known, and this plays a key role in many well-known plaintext attacks such as algebraic attacks, fast correlation attacks, distinguishing attacks, guess and determine attacks etc.
The KDFC scheme <cit.> provides resistance against some of these attacks by concealing the feedback configuration. However, in this scheme, the characteristic polynomial of the LFSR over GF(2) is known. This, in turn, leads to cryptanalysis by a fast correlation attack (FCA) <cit.>. In this article, we propose a scheme to resist such attacks.
§.§ Related Works
Multiple correlation attacks and distinguishing attacks have been proposed for SNOW 2.0 and SNOW 3G<cit.>. The first description of a Fast correlation attack for word-based stream ciphers is found in <cit.>. An enhanced version of this attack is given in <cit.>. These attacks utilize a linear recurring relation with coefficients in F_2.
For SNOW 2.0, this relation has a degree of 512. The attack described in <cit.> has a time complexity of 2^212.38. The attack described in <cit.> considers the LFSR in SNOW 2.0 as being over F_2^8. This results in a linear recurring relation of degree 64. Here, parity check equations are generated using Wagner's k-tree algorithm described in <cit.>. The time complexity of this attack is 2^164.5, which is roughly 2^49 times better than the attack described in <cit.>. However, the feedback function of the LFSR (as an equation over F_2^32) is critical for this attack. <cit.> explains a Mixed Integer Linear Programming (MILP) based linear mask search on SNOW 2.0 to find a better correlation 2^-14.411 for the FSM approximation equation. It recovers the key of SNOW 2.0 with time complexity 2^162.91. <cit.> describes a vectorized linear approximation attack on SNOW 3G with a bias value of 2^-40 and time complexity of 2^177 to find the state of the SNOW 3G LFSR. <cit.> proposes another attack model based on a modified Wagner k-tree algorithm and linear approximation of some composition function with time complexity 2^162.86 for SNOW 2.0 and time complexity 2^222.33 for SNOW 3G. This attack considers the LFSR as a state transition machine over 𝔽_2 and uses the characteristic polynomial of the state transition matrix. This polynomial has degree 512. All these algorithms need a linear recurring relation that the LFSR satisfies. Our main contribution, as stated in the following subsection, is to deny the attacker the knowledge of any such linear recurring relation.
§.§ Our contributions
In this paper, we give an algorithm to generate a z-primitive σ-LFSR configuration whose primitive characteristic polynomial is sampled from the uniform distribution on all primitive polynomials of a given degree over GF(2). In the proposed scheme, there are two levels of randomness: in the choice of the primitive characteristic polynomial of a certain degree (over GF(2)) and in the choice of the feedback configuration for a given characteristic polynomial. The purpose of doing this is to deny the attacker the knowledge of any linear recurring relation that the LFSR satisfies. This is done to counter Fast Correlation Attacks, which use such linear recurring relations to generate parity check equations.
* We have introduced a z-primitive σ-LFSR (<cit.>) enumeration algorithm based on a random primitive polynomial of degree n over GF(2) to generate an M-companion matrix ∈ F_2^n × n. The chosen M-companion matrix of the z-primitive σ-LFSR replaces the existing feedback polynomial of SNOW 2.0 and SNOW 3G over GF(2^32) after the completion of the initial phase.
* The keyspace of SNOW 2.0 and SNOW 3G is increased to 256 bits and is split into two parts, (K_1, IV_1) and (K_2, IV_2), where K_1, IV_1, K_2 and IV_2 are all in F_2^128.
* We generate an invertible matrix ∈ F_2^512 × 512 using the AES-128 algorithm with K_2 and IV_2.
* In this article, we introduce a pre-computation phase (producing a challenge matrix) based on the M-companion matrix and the invertible matrix, carried out before the initialization phase of SNOW 2.0 and SNOW 3G. This step conceals the feedback polynomial over GF(2) as well as the feedback function over M_m(F_2). Moreover, the adversary is challenged to find the M-companion matrix from the challenge matrix, a task based on the search version of the NP-hard Exact Nonzero Matrix Factorization (NMF) problem (<cit.>).
§.§ Organization of the Paper
Section 2 deals with notations, definitions and theorems related to z-primitive σ-LFSRs and Fast Correlation Attacks. In Section 3, we describe a z-primitive σ-LFSR generation algorithm with a proof of correctness and a short example. Section 4 gives a brief description of the stream ciphers SNOW 2.0 and SNOW 3G. Section 5 illustrates the proposed method for pre-processing, initialization and key generation. In Section 6, this method is applied to SNOW 2.0 and SNOW 3G and the resulting resistance to fast correlation attacks is demonstrated. Finally, the conclusions along with some additional discussion and future work are outlined in Section 7.
§ PRELIMINEARIES
In this section, we introduce notations and definitions that are used throughout this article.
§.§ Notations
The following is the list of notations used in this paper:
The companion matrix of a polynomial f(x)=x^b+ c_b-1x^b-1+c_b-2x^b-2+⋯+c_1x+c_0 is given as follows.
P_f =
[ 0 0 0 ⋯ 0 c_0;
  1 0 0 ⋯ 0 c_1;
  0 1 0 ⋯ 0 c_2;
  ⋮ ⋮ ⋮ ⋱ ⋮ ⋮;
  0 0 0 ⋯ 1 c_b-1 ] ∈𝔽_2^b × b
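For concreteness, a small Python helper that builds P_f from the coefficient list (c_0, …, c_b-1) over GF(2) is sketched below; the example polynomial x^4 + x + 1 is chosen arbitrarily for illustration.

def companion_matrix(coeffs):
    # coeffs = [c_0, c_1, ..., c_{b-1}] of f(x) = x^b + c_{b-1}x^{b-1} + ... + c_1 x + c_0 over GF(2)
    b = len(coeffs)
    P = [[0] * b for _ in range(b)]
    for i in range(1, b):
        P[i][i - 1] = 1              # subdiagonal of ones
    for i in range(b):
        P[i][b - 1] = coeffs[i] % 2  # last column carries the coefficients
    return P

# f(x) = x^4 + x + 1  ->  c_0 = 1, c_1 = 1, c_2 = 0, c_3 = 0
for row in companion_matrix([1, 1, 0, 0]):
    print(row)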
§.§ SNOW 2.0 and SNOW 3G
The SNOW series of stream ciphers represents a class of stream ciphers where a word-based Linear Feedback Shift Register (LFSR) is coupled with a Finite State Machine (FSM). The keystream is the sum of the outputs of the LFSR and the FSM.
The main distinction between SNOW 2.0 and SNOW 3G is that the latter has an additional register in the FSM (designated by Rt3).
In both schemes, the LFSR is composed of 16 32-bit delay blocks with the feedback polynomial γ z^16 + z^14+ γ^-1z^5+1 ∈ F_2^32[z], where γ∈ F_2^32 is a root of the primitive polynomial x^4+α^23 x^3+ α^245x^2+α^48x+α^239∈ F_2^8[x], and α is a root of the polynomial y^8+y^7+y^5+y^3+1 ∈ F_2[y].
At time instant t≥ 0, the state of the LFSR is defined as (St_t+15, St_t+14,..., St_t) ∈ (F_2^32)^16.
Let the outputs of the registers in the FSM of SNOW 2.0 at time instant t be denoted by Rt1_t and Rt2_t, and let the output of the FSM at time instant t be denoted by F_t. The FSM of SNOW 2.0 is governed by the following equations
F_t = (St_t+15⊞Rt1_t) ⊕Rt2_t, t≥ 0
Rt1_t+1 = St_t+5⊞ Rt2_t
Rt2_t+1 = SBox(Rt1_t)
where SBox is a bijection over F_2^32, composed of four parallel AES S-boxes followed by the AES MixColumn transformation.
Let the outputs of the registers in the FSM of SNOW 3G at time instant t be denoted by
Rt1_t, Rt2_t and Rt3_t, respectively, and let the output of the FSM at time instant t be denoted by F_t. The FSM is governed by the following equations:
F_t = (St_t+15⊞ Rt1_t)⊕ Rt2_t
Rt1_t+1 = (St_t+5⊕ Rt3_t)⊞ Rt2_t
Rt2_t+1 = SBox1(Rt1_t)
Rt3_t+1 = SBox2(Rt2_t)
where SBox1 in equation <ref> is the same as the SBox of SNOW 2.0, while SBox2 is another bijection over GF(2^32), based on the Dickson polynomial.
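To make the notation concrete, the following Python sketch shows one keystream clock of the SNOW 2.0 variant (the simpler of the two FSMs above). The function sbox32 is a placeholder stand-in added for illustration, not the real AES-based S-box, and the LFSR feedback is omitted.

MASK32 = 0xFFFFFFFF

def sbox32(x):
    # Placeholder permutation of 32-bit words; NOT the AES-based SBox of SNOW 2.0.
    return ((x * 0x9E3779B1) ^ (x >> 16)) & MASK32

def snow2_clock(lfsr, r1, r2):
    """One FSM clock and keystream word, following the equations above.

    lfsr : list of 16 words, lfsr[0] = St_t, ..., lfsr[15] = St_{t+15}
    Returns (keystream word z_t, new Rt1, new Rt2).
    """
    f = ((lfsr[15] + r1) & MASK32) ^ r2   # F_t = (St_{t+15} boxplus Rt1_t) xor Rt2_t
    z = f ^ lfsr[0]                       # keystream word: FSM output xor LFSR output
    r1_new = (lfsr[5] + r2) & MASK32      # Rt1_{t+1} = St_{t+5} boxplus Rt2_t
    r2_new = sbox32(r1)                   # Rt2_{t+1} = SBox(Rt1_t)
    return z, r1_new, r2_new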
§.§ Fast Correlation Attack on Word Based LFSRs
In a Fast Correlation Attack,
every window of the keystream {Y_1,…,Y_N} is seen as a noisy encoding of the initial state of the LFSR. The noise is used to model the effect of the nonlinearity in the FSM of the stream cipher. The parity check equations of the code are generated from the feedback equation of the LFSR along with a linear approximation of the FSM. Thus, the problem of recovering the initial state of the LFSR reduces to that of decoding an [N,l] linear code, where l is the size of the LFSR. Using Wagner's k-tree algorithm, this problem reduces to decoding a smaller code [N_k,l'] with l'<l. In <cit.>, the feedback equation of degree 512 is used to successfully recover the state of the LFSR of SNOW 2.0 and SNOW 3G. In contrast, <cit.> uses a feedback equation of degree 64 over F_2^8.
The keystream is seen as an LFSR sequence Y = {y_1,y_2,…,y_N} transmitted through a Binary Symmetric Channel. The effect of the nonlinearity is modelled as a noise sequence η = {η_1,η_2,…,η_N}. The keystream sequence K = {k_1,k_2,…,k_N} is considered as the sum of the LFSR sequence and the noise, i.e. k_i =y_i⊕η_i.
Consider a stream cipher with an LFSR having an initial state of length t.
A window of length N of the keystream is seen as a noisy encoding of the initial state of the LFSR by a [N,t]-linear code. The generator matrix of the code is a function of the feedback equation of the LFSR.
An FCA proceeds in two stages, a preprocessing stage and a processing stage.
Preprocessing stage:
Typically, the LFSR's initial state size is larger than the key size. To reduce the dimension of the code, a partial k-sum problem is solved. This reduces the decoding problem to that of a [N_k, t'] linear code with t' <t, where N_k is the number of parity check equations.
Processing stage:
In this stage, the Fast Walsh Transform (FWT) is used to generate and evaluate the parity check equations and decode the modified code after guessing a few relevant bits of the initial state of the LFSR. The remaining LFSR initial state bits can be recovered with substantially less difficulty once the target t' bits have been recovered. Thus, the entire initial state of the LFSR is recovered.
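As an illustration of the processing stage, the following Python sketch implements the standard fast Walsh-Hadamard transform and uses it to obtain the bias spectrum of a Boolean function given by its truth table; this is a generic textbook routine (our sketch), not the optimized implementation used in the cited attacks.

def fwht(a):
    """In-place fast Walsh-Hadamard transform of a list whose length is a power of two."""
    n, h = len(a), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

def bias_spectrum(truth_table):
    """Walsh spectrum of a 0/1 Boolean function on k bits; entry w equals
    sum over x of (-1)^(f(x) + w.x), i.e. the (unnormalized) bias for mask w."""
    return fwht([1 - 2 * b for b in truth_table])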
§.§ FCA attack on SNOW 2.0
The core idea of FCA attacks<cit.> on SNOW 2.0 is to approximate the FSM component using linear masking and then, by combining expressions for multiple keystream words, cancel out the impact of the registers Rt1 and Rt2. A biased linear equation between state and keystream is then found:
ζ. K_t ⊕λ. K_t+1=ζ. St_t⊕λ. St_t+1⊕λ. St_t+5⊕ζ. St_t+15⊕λ. St_t+16
where ζ, λ are linear masks over GF(2^32).
Let us denote the correlation of the above equation as ϵ_FSM(ζ, λ).
After calculating ϵ_FSM=2^-14.51 <cit.>, an FCA is mounted on SNOW 2.0 for key recovery using multiple linear approximations and a modified k-tree algorithm. Here, equation <ref> is used to find multiple linear equations as follows:
ζ .K_t ⊕λ. K_t+1=(s_0,s_1,⋯,s_l-1)× a_t
where a_t ∈ F_2^l × 1 is the t-th column of the parity check matrix A=(a_1,a_2,⋯,a_N) ∈ F_2^l × N and t∈[N]. The matrix A is computed from the knowledge of the degree-l feedback polynomial of the LFSR.
Finally, the state of the LFSR can be calculated using the FWHT with time complexity 𝒪(2^162.86).
§.§ FCA attack on SNOW 3G
For SNOW 2.0 and SNOW 3G, the key recovery attack strategy is the same; the only distinction is the generation of some of the equations. As SNOW 3G has three registers Rt1, Rt2, Rt3, the FSM approximation equation is as follows:
Δ. K_t-1⊕ζ. K_t ⊕λ .K_t+1= Δ. St_t-1⊕ζ.St_t⊕λ.St_t+1⊕λ. St_t+5⊕Δ .St_t+14⊕ζ.St_t+15⊕λ.St_t+16
The paper <cit.> reports a correlation of 2^-33.82 for the masks ζ=0x00000101, λ=0x00018001, and Δ=0x00286006/0x00386006 in equation <ref>.
Following that, a number of linear equations are created using equation <ref> and the feedback equation of the LFSR of SNOW 3G.
Δ. K_t-1⊕ζ. K_t ⊕λ .K_t+1=(s_0,s_1,⋯,s_l-1)× A
where A ∈ F_2^l × N is the parity check matrix corresponding to the feedback polynomial of degree l over GF(2). Equation (<ref>) is used to generate a certain number of parity check equations to find the state of the LFSR of SNOW 3G.
The remaining steps are already covered in FCA cryptanalysis of SNOW 2.0.
The key recovery attack can be carried out with a 2^222.33 time complexity.
§.§ FCA attack on Sosemanuk
The feedback equation over GF(2^32) for Sosemanuk is
St_t+10=St_t+9⊕γ^-1.St_t+3⊕γ.St_t
The linear approximation equation relating the state of the LFSR and the keystream of Sosemanuk is as follows
ζ. K_t ⊕ζ K_t+3= ζ. St_t⊕ζ. St_t+2⊕ζ. St_t+3⊕ζ. St_t+10
where ζ∈ GF(2^32). The correlation of equation <ref> is 2^-21.41. Using the above two equations (<ref>,<ref>), we find the following equation
(ζ *γ+ζ)*St_1 +ζ*St_3 + (ζ *γ^-1 + ζ)*St_4+ζ*St_10=ζ*K_1+ ζ*K_4
The next step is to find a certain number of parity check equations from equation (<ref>), where equation (<ref>) plays the important role of reducing the variables to state variables. Finally, the time complexity to find the state of the LFSR using the Walsh-Hadamard transform and other procedures (sorting a list, XORing two words, comparing two words, etc.) is 2^155.47.
To understand the correlation attack, we need the following definitions:
ϵ(Z): The correlation or bias of a random variable Z is defined as:
ϵ(Z)=P(Z=0)-P(Z=1)
The correlation of a boolean function f:F_2^n→ F_2 to zero is defined as
ϵ(f)=(|{Z∈ F_2^n:f(Z)=0}|-|{Z∈ F_2^n:f(Z)=1}|)/2^n
where Z∈ F_2^n is a uniformly distributed random variable.
The correlation of a (p,q)-function F : F_2^p→ F_2^q with a linear output mask ζ∈ F_2^q
and a linear input mask λ∈ F_2^p is defined as
ϵ_F(ζ:λ)=P(ζ.F(Z)=λ.Z)-P(ζ.F(Z)≠λ.Z)
where Z∈ F_2^p is a uniformly distributed random variable.
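Such correlations can be estimated empirically. The following Python sketch performs a Monte-Carlo estimate of ϵ_F(ζ:λ) for a user-supplied function F; the sampling approach and the helper names are our own illustration, not the exact computation used in the cited linear-mask searches.

import random

def dot_bit(mask, x):
    # GF(2) inner product of two integers viewed as bit vectors
    return bin(mask & x).count("1") & 1

def estimate_correlation(F, out_mask, in_mask, in_bits, trials=1 << 16):
    """Monte-Carlo estimate of eps_F(out_mask : in_mask); F maps integers to integers."""
    agree = 0
    for _ in range(trials):
        z = random.getrandbits(in_bits)
        agree += dot_bit(out_mask, F(z)) == dot_bit(in_mask, z)
    return 2 * agree / trials - 1

# Example with a toy stand-in function (not a cipher component):
# estimate_correlation(lambda z: (z * 0x9E3779B1) & 0xFFFFFFFF, 0x1, 0x1, 32)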
Correlation attacks can be resisted by making the feedback equation dependent on the key. A procedure for generating a key-dependent feedback equation is given in <cit.>. This can resist FCAs like the one given in <cit.>, where the feedback equation of the LFSR is seen as an equation with coefficients from extensions of F_2. However, in this scheme the characteristic polynomial of the state transition matrix of the LFSR is the same as that in SNOW 2.0 and SNOW 3G. This polynomial can be used to generate a linear recurring relation with coefficients from 𝔽_2 that the output of the LFSR satisfies. This makes the scheme vulnerable to fast correlation attacks, like the one given in <cit.>, that employ the linear recurring relation over 𝔽_2.
§.§ σ-LFSRs and z-Primitive σ-LFSR
A σ-LFSR is a word based LFSR with multi-input multi-output delay blocks. The outputs of the delay blocks are multiplied by gain matrices and then added. This sum is fed back to the input of the first block. The output of a σ-LFSR with m-input m-output delay blocks is a vector sequence which satisfies the following recurrence relation,
𝐬_𝐧+𝐛 = 𝐬_𝐧+𝐛-1B_b-1 + 𝐬_𝐧+𝐛-2B_b-2 +⋯ + 𝐬_𝐧B_0
where s_i ∈𝐅_2^𝐦 and B_i ∈𝐅_2^𝐦× 𝐦. The B_is are the feedback gain matrices. The following matrix is called the configuration matrix of the σ-LFSR.
C=
[ 0 I 0 ⋯ 0; 0 0 I ⋯ 0; ⋮ ⋮ ⋮ ⋯ ⋮; 0 0 0 ⋯ I; B_0 B_1 B_2 ⋯ B_b-1 ]∈𝔽_2^mb × mb
where 0 ∈𝔽_2^m× m is the all-zero matrix and I∈ F_2^m × m is the identity matrix. We shall refer to the structure of this matrix as the m-companion structure. The characteristic polynomial of a σ-LFSR is the characteristic polynomial of its configuration matrix. The period of the sequence generated by a σ-LFSR is maximal if its characteristic polynomial is primitive.
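The recurrence above can be sketched directly in Python as follows; the bit-list representation is chosen only for readability and is not how a word-oriented cipher would implement it.

def gf2_vecmat(v, B):
    """Row vector (list of bits) times matrix (list of bit rows) over GF(2)."""
    return [sum(v[i] & B[i][j] for i in range(len(v))) & 1 for j in range(len(B[0]))]

def sigma_lfsr_step(state, gains):
    """One step of s_{n+b} = s_n B_0 + ... + s_{n+b-1} B_{b-1} over GF(2).

    state : list of b output vectors (each a list of m bits), oldest first
    gains : list of b gain matrices B_0, ..., B_{b-1} (each m x m)
    Returns the shifted state with the newly computed vector appended.
    """
    new = [0] * len(state[0])
    for s_i, B_i in zip(state, gains):
        t = gf2_vecmat(s_i, B_i)
        new = [x ^ y for x, y in zip(new, t)]
    return state[1:] + [new]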
The vector obtained by appending the outputs of the delay blocks of a σ-LFSR at a given time instant is known as the state vector of the σ-LFSR at that time instant. This vector is an element of 𝔽_2^mb. In this work, we will consider the state vector of a σ-LFSR to be a column vector. Thus, for 0≤ i ≤ b-1, if s_i is the output of the i-th delay block at a given time instant, then the state vector of the σ-LFSR at that time instant is
c = ( s_0; s_1; ⋮; s_b-1)
Two consecutive state vectors of a σ-LFSR are related by the following equation,
c_k+1 = C× c_k
where c_k and c_k+1 are the state vectors at the k-th and k+1-th time instant respectively and C is the configuration matrix of the σ-LFSR.
The output of a σ-LFSR with b, m-input m-output delay blocks can be seen as a collection of m scalar sequences emanating from the m outputs of the first delay block. These scalar sequences are known as the component sequences of the σ-LFSR. Any n = mb consecutive entries of the component sequence constitute a state vector of the component sequences. In this work, we consider state vectors of component sequences to be row vectors. Two consecutive state vectors of any component sequence of a σ-LFSR, say w_i and w_i+1, are related by the following equation.
w_i+1 = w_i × M
where the matrix M is the companion matrix of the σ-LFSR, i.e., if the characteristic polynomial of the σ-LFSR is p(x) = x^n + c_n-1x^n-1 + ⋯+ c_0, then the matrix M is given by
M=
[ 0 0 ⋯ 0 c_0; 1 0 ⋯ 0 c_1; 0 1 ⋯ 0 c_2; ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 ⋯ 1 c_n-1 ]∈𝔽_2^n × n.
The minimal polynomial of a sequence is the characteristic polynomial of the LFSR with the least number of delay blocks that generates the sequence.
If a σ-LFSR is primitive, then each of its component sequences has the same period as the output sequence of the σ-LFSR. Further, the minimal polynomial of each of these sequences is the same as the characteristic polynomial of the σ-LFSR. In fact, these sequences are shifted versions of each other.
The distance between two component sequences S^i and S^j of a primitive σ-LFSR is defined as the number of left shifts needed to get to S^j from S^i. Let δ denote the left shift operator i.e, for a sequence S, δ S(0) = S(1) and δ^i S(0) = S(i).
The distance vector of a primitive σ-LFSR S is defined as the tuple of integers D=(d_1,d_2,⋯,d_m-1), where d_i is the distance between the first and the (i+1)-th component sequences, i.e. S^i+1=δ^d_i S^1.
<cit.>
Let α be a root of a primitive polynomial f(x) = c_0 +c_1x +⋯+c_n-1x^n-1 +x^n over GF(2) having degree n=mb. Let D_m=(d_0,d_1,⋯,d_m-1) be the distance vector of a σ-LFSR with characteristic polynomial f(x) having b, m-input m-output delay blocks. The set
A={1,α,⋯,α^b-1,α^d_1,α^d_1+1,⋯,α^d_1+b-1,⋯,α^d_m-1,α^d_m-1+1,⋯,α^d_m-1+b-1}
is a basis for GF(2^mb) as a vector space over GF(2).
Now, {1,α,α^2,…,α^n-1} is a basis for GF(2^mb) as a vector space over GF(2). With respect to this basis, for 0≤ i ≤ n-1, α^i is represented by the unit vector e_i+1^n, and every vector v ∈𝔽_2^n represents a polynomial in α with degree less than n. Further, the linear map corresponding to multiplication by α is represented by the following matrix.
M_α=
[ 0 1 0 ⋯ 0; 0 0 1 ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ 1; c_0 c_1 c_2 ⋯ c_n-1 ]
As there are no zero divisors in GF(2^mb), if each entry of the set A in Lemma <ref> is multiplied by a nonzero polynomial in α with degree less than n, the resulting set will also be a basis for GF(2^mb). Therefore, given any non-zero vector v ∈𝔽_2^n, the set of vectors {v, vM_α,…, vM_α^b-1, vM_α^d_1, vM_α^d_1 +1,…, vM_α^d_1+b-1,…,vM_α^d_m-1, vM_α^d_m-1 +1,…, vM_α^d_m-1+b-1} is a basis for 𝔽_2^mb. Further, any matrix M with characteristic polynomial f(x) can be derived from M_α by a similarity transformation. Hence, given any non-zero vector v and a matrix M with characteristic polynomial f(x), the set {v, vM,…, vM^b-1, vM^d_1, vM^d_1 +1,…, vM^d_1+b-1,…,vM^d_m-1, vM^d_m-1 +1,…, vM^d_m-1+b-1} is a basis for 𝔽_2^mb. Given such a matrix M and a distance vector (d_1,d_2,…,d_m-1), one can generate a set of vectors {e_1^n, v_1 = e_1^nM^d_1, v_2 = e_1^nM^d_2,…,v_m-1 = e_1^nM^d_m-1}. Such a set can be used to generate an m-companion matrix (and thereby a σ-LFSR configuration) using the following lemma.
<cit.>
Let M be the companion matrix of a given primitive polynomial f(x) of degree n where n=mb. Each m-companion matrix can be uniquely obtained from M by means of a similarity transformation P_1× M× P_1^-1 where P_1 has the following structure:
P_1=[e_1^n; v_1;⋯;v_m-1;e_1^nM; v_1M;⋯;v_m-1M;⋯; e_1^nM^b-1; v_1M^b-1;⋯;v_m-1M^b-1 ],
i.e., the matrix whose rows are e_1^n, v_1,⋯,v_m-1, followed by e_1^nM, v_1M,⋯,v_m-1M, up to e_1^nM^b-1, v_1M^b-1,⋯,v_m-1M^b-1,
where M is the companion matrix for f(x).
Matrix state of a σ-LFSR sequence:
Let S={S(i)∈ F_2^m}_i ∈ℤ be a sequence generated by a σ-LFSR with a primitive characteristic polynomial f(x) of degree n. The i-th matrix state of S is the matrix formed by n consecutive elements of S starting from the i-th element,
Mat_S(i)={S(i),S(i+1),⋯,S(i+n-1)}_m × n
The dimension of a σ-LFSR sequence S is the rank of any of its matrix states Mat_S(i).
Consider a σ-LFSR with b, m-input m-output delay blocks with configuration matrix Q = P_1× M× P_1^-1 where P_1 is as defined in Lemma <ref> and M is the companion matrix of the characteristic polynomial of the σ-LFSR. The first m rows of the matrix P_1 constitute a matrix state of the sequence generated by the σ-LFSR.
For 1≤ i ≤ mb, let the i-th column of P_1 be denoted by c_i. Further, for some 1≤ i < mb, let c_i be a state vector of the σ-LFSR. The next state vector of the σ-LFSR is Q× c_i. This vector is computed as follows
Q× c_i = (P_1 × M × P_1^-1) × c_i=P_1 × M × (P_1^-1× c_i)=P_1 × M × e_i^n= P_1× e_i+1^n=c_i+1
Thus, the next state vector is the next column of P_1. Therefore, the n columns of P_1 are n consecutive state vectors of the σ-LFSR. Consequently, the first m entries of these columns constitute n consecutive outputs of the σ-LFSR. Hence, the first m rows of the matrix P_1 form a matrix state of the sequence generated by the σ-LFSR.
We now discuss a couple of results related to the binary field to motivate the concept of z-primitive sigma-LFSRs.
If α is a primitive element of GF(2^mb), then α^z is a primitive element of GF(2^m), where z=(2^mb-1)/(2^m-1).
As α is a primitive element of GF(2^mb), α generates the cyclic multiplicative group F_2^mb^*. Therefore, α^2^mb-1=1. Further, the order of α^z, i.e. the cardinality of the cyclic group generated by α^z, is (2^mb-1)/GCD(2^mb-1,z). Hence, if z=(2^mb-1)/(2^m-1),
order(α^z) =(2^mb-1)/GCD(z,2^mb-1)
= (2^m-1)(2^m(b-1)+2^m(b-2)+⋯+1)/(2^m(b-1)+2^m(b-2)+⋯+1)
=2^m-1.
As the order of the element α^z is 2^m-1, α^z is a generator of the subgroup F_2^m^*, i.e. α^z is a primitive element of GF(2^m), where z=(2^mb-1)/(2^m-1).
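The quantity z and the order computation in this proof can be checked numerically; the following Python sketch (our illustration) does so for arbitrary m and b.

from math import gcd

def subfield_exponent(m, b):
    """z = (2^(mb)-1)/(2^m-1); checks that an element of order 2^(mb)-1 raised
    to the power z has order 2^m-1, using ord(alpha^z) = (2^(mb)-1)/gcd(z, 2^(mb)-1)."""
    big, small = (1 << (m * b)) - 1, (1 << m) - 1
    assert big % small == 0
    z = big // small
    assert big // gcd(z, big) == small   # order of alpha^z equals 2^m - 1
    return z

print(subfield_exponent(3, 3))   # 73, the value reappearing in the example of Section 3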
This gives rise to the following corollary
Let f(x) be a primitive polynomial of degree n=mb over F_2 and let α be one of its roots. If z=(2^mb-1)/(2^m-1), the degree-m polynomial g(x) with root α^z is a primitive polynomial. Further, the polynomial g(x) is given as follows:
g(x)=(x+α^z)(x+α^2z)⋯(x+α^2^m-1z) mod f(α)
The primitiveness of g(x) follows from Lemma 1. Since the multiplicative order of α is 2^mb-1, the elements α^z, α^2z, α^2^2z,…, α^2^m-1z are all distinct. Further, if α^z is a root of g(x), so are α^2z, α^2^2z,…, α^2^m-1z. Therefore, g(x)=(x+α^z)(x+α^2z)⋯(x+α^2^m-1z) mod f(α).
Given a primitive polynomial g(x)=a_0 +a_1x+a_2x^2+⋯+a_m-1x^m-1+x^m of degree m over GF(2) with root α, we can find another primitive polynomial f(x) of degree n over GF(2), where m|n, as follows
f(x)=(x+α^z)(x+α^2z)(x+α^4z)⋯(x+α^2^n-1z) mod g(α),
where z=(2^n-1)/(2^m-1) and α^2^m-1=1.
Let S be a primitive σ-LFSR with b, m-input m-output delay blocks and let its distance vector be D=(d_1,⋯,d_m-1). If all the elements of the distance vector are divisible by z = (2^mb-1)/(2^m-1), i.e. z|d_i for all 1≤ i ≤ m-1, then S is called a z-primitive σ-LFSR.
Let F_q^mb be a vector space over the finite field F_q and let α∈ F_q^mb. An m-dimensional subspace V is called an α-splitting subspace if
V ⊕α V ⊕α^2 V ⊕⋯⊕α^b-1 V=F_q^mb
Let V be a subspace of F_2^mb split by α, where α∈ F_2^mb is a primitive element of F_2^mb, and let B=(β_0,β_1,⋯,β_m-1) be a basis of V. Then S_i^j=Tr(β_i α^j), i=0,1,⋯,m-1, are the coordinate sequences of the vector sequence of the original σ-LFSR. When V=F_2^m, the σ-LFSR is a z-primitive σ-LFSR.
Indeed, when V=F_2^m with basis B=(β_0,β_1,⋯,β_m-1), every β_i satisfies
β_i^2^m-1 =1.
Writing β_i=α^d_i, i=0,1,⋯,m-1, this gives (α^d_i)^2^m-1 =1. As α is a primitive element of GF(2^n), its multiplicative order is 2^n-1, and therefore
2^mb-1 | d_i(2^m-1),
which is equivalent to z|d_i. Hence the sequence is generated by a z-primitive σ-LFSR.
<cit.> The number of z-primitive σ-LFSRs having b, m-input m-output delay blocks is (|GL_m(GF(2))|/(2^m-1))×(ϕ(2^mb-1)/(mb)), where |GL_m(GF(2))| is the number of m × m invertible matrices over GF(2) and ϕ(2^mb-1)/(mb) is the number of primitive polynomials of degree mb over GF(2).
Note that a full rank matrix of dimension m with first row e_1^m has the following structure
INV_m=[ 0 0 ⋯ 0 1; v_11 v_12 ⋯ v_1,m-1 v_1m; ⋮ ⋮ ⋮ ⋮ ⋮; v_m-1,1 v_m-1,2 ⋯ v_m-1,m-1 v_m-1,m ]_m× m
=
[ 0 1; INV_m-1 θ ],
where INV_m-1 is an (m-1)-dimensional full rank matrix with the same structure and θ is an (m-1)-dimensional vector.
Given m and b, the set of z-primitive σ-LFSR configurations is a subset of the set of primitive σ-LFSR configurations. For the case b=1, both sets have the same cardinality (this follows from Theorem 1 and Theorem 6.3.1 in <cit.>). Therefore, when b=1, every primitive σ-LFSR configuration is a z-primitive σ-LFSR configuration.
Let f(x) be a primitive polynomial of degree mb and let α be a root of f(x). Let S be a primitive σ-LFSR over F_2^m. For any distance vector D_m=(d_0,d_1,⋯,d_m-1) with z|d_i for all i ∈[1,m-1], by Lemma <ref> the set
A=(α^d_0,α^d_0+1,⋯,α^d_0+b-1,α^d_1,α^d_1+1,⋯,α^d_1+b-1,⋯,α^d_m-1,α^d_m-1+1,⋯,α^d_m-1+b-1)
forms a basis of GF(2^mb) over GF(2). Moreover, the vectors (α^d_0,α^d_1,⋯,α^d_m-1) form a basis of GF(2^m) over GF(2). Therefore, the distance vectors of z-primitive σ-LFSR sequences over GF(2^m) correspond to bases drawn from the set
X={1,β^x_1,β^x_2,⋯,β^x_m-1| x_i ∈ [1,2^m-1]},
where β=α^z and z=(2^mb-1)/(2^m-1). If
Y={1,β^y_1,β^y_2,⋯,β^y_m-1}⊆ X is a basis of GF(2^m) over GF(2), any other such basis contained in X can be obtained from Y by an invertible linear transformation. Let the matrix
V_m=[ 0 0 ⋯ 0 1; v_11 v_12 ⋯ v_1,m-1 v_1m; ⋮ ⋮ ⋮ ⋮ ⋮; v_m-1,1 v_m-1,2 ⋯ v_m-1,m-1 v_m-1,m ]
=
[ 0 1; V_m-1 θ ]
represent this invertible linear transformation, where θ is a vector of dimension m-1 and V_m-1 is an (m-1) × (m-1) invertible matrix. Observe that the number of such bases of GF(2^m) over GF(2) is determined by the number of possible matrices V_m-1 together with the number of choices of θ. Since there are 2^m-1 ways to choose θ, there are 2^m-1× |GL_m-1(GF(2))| z-primitive σ-LFSRs for each primitive characteristic polynomial.
Observe that the above theorem shows that the number of z-primitive σ-LFSRs over GF(2^m) is larger than the number of primitive polynomials over GF(2^m), which is ϕ(2^mb-1)/b.
The z-set for a pair of integers m and b and a primitive polynomial f(x) of degree mb, denoted by z_mb^f, is the set of distance vectors of z-primitive σ-LFSRs having b, m-input m-output delay blocks and characteristic polynomial f, i.e.,
z_mb^f={D_m=(d_0,d_1,⋯,d_m-1) | z|d_i}
Note that there is a one to one correspondence between the set of distance vectors and the set of σ-LFSRs. Therefore, the cardinality of the z-set for a given pair of integers m and b and a given primitive polynomial f is equal to the number of z-primitive LFSR configurations with b, m-input m-output delay blocks having characteristic polynomial f.
<cit.> Let α and α^z be roots of primitive polynomials f(x) and g(x) having degrees mb and m respectively. The following map ϕ from z_mb^f to z_m^g is a bijection.
ϕ z_mb^f⟶ z_m^g
(d_0,d_1,⋯,d_m-1)↦(d_0/z,d_1/z,⋯,d_m-1/z)
If α and α^z are roots of primitive polynomials f(x) and g(x) having degrees mb and m respectively, the above theorem proves that there is one to one correspondence between z-primitive σ-LFSRs having b, m-input m-output delay blocks and characteristic polynomial f(x) and σ-LFSR configurations with a single m-input m-output delay block and characteristic polynomial g(x) (This is because every primitive σ-LFSR configuration is a z-primitive σ-LFSR configuration when b=1.)
§ PROPOSED SCHEME
In this section, we describe a method of converting the existing feedback configuration of the LFSR into a random z-primitive σ-LFSR configuration. In contrast to the scheme given in <cit.>, the characteristic polynomial in this scheme is also randomly sampled from the set of primitive polynomials of a given degree. Information about this configuration is embedded in a public parameter. At the decryption side, the configuration is recovered from the public parameter using the secret key. In the proposed scheme, in addition to sharing the secret key and calculating the initial state of the LFSR, the following two steps have to be performed before the start of communication.
* Generating a random z-primitive σ-LFSR configuration.
* Generating the public parameter using the state transition matrix of the z-primitive σ-LFSR and the secret key and declaring it to the entire network.
§.§ Generation of z-primitive σ LFSR Configuration
In this subsection, we propose a method to generate a random configuration matrix of z-primitive σ-LFSR with b, m-input m-output delay blocks. Here, we first randomly sample a primitive polynomial f of degree mb. We then calculate a primitive polynomial g with degree m such that if α is the root of f then α^z is the root of g where z = 2^mb-1/2^m-1. Thereafter, we calculate the distance vector of a primitive σ-LFSR with a single m-input m-output delay block having characteristic polynomial g. This distance vector is used to generate an element of z_mb^f . This is then used to generate a σ-LFSR configuration with characteristic polynomial f. Given a randomly sampled polynomial f of degree mb, this process involves the following algorithms;
* An algorithm to find a primitive polynomial g of degree m.
* An algorithm to generate an element of z_m^g.
* An algorithm to generate the σ-LFSR configuration with characteristic polynomial f from an element of z_m^g.
In the above algorithm, the matrix M_f is a representation of a root of f. The algorithm calculates the minimal polynomial of the matrix M_f^z, which according to Lemma 1 is a primitive polynomial of degree m. We now proceed to generating a random element of z_m^g. Observe that for a σ-LFSR with a single delay block to have characteristic polynomial g, the feedback gain matrix can be any matrix with characteristic polynomial g. This is used to prove the following lemma, which leads to an algorithm that samples an element of z_m^g.
Given a primitive polynomial g of degree m, the set of matrix states of sequences generated by σ-LFSRs with a single delay block having characteristic polynomial g is the set of all full rank matrices. Further, corresponding to each such σ-LFSR configuration there exists a unique matrix state with first row e_1^m.
Consider a σ-LFSR configuration with characteristic polynomial g having a single delay block and feedback gain matrix B. The characteristic polynomial of B is g. Given any nonzero vector v, the matrix M_1 = [v,Bv,…,B^m-1v] is a matrix state of this σ-LFSR configuration. As g is a primitive polynomial, the matrix M_1 is invertible. To prove the first statement of the lemma, we will now prove that any invertible matrix having first column v is a matrix state of a sequence generated by some σ-LFSR configuration with characteristic polynomial g having a single delay block.
Let M_2 be an arbitrary full rank matrix having first column v. Let P = M_2M_1^-1 and B' = PBP^-1. Comparing the first columns on both sides of the equation PM_1 = M_2, it is apparent that v is an eigenvector of P corresponding to the eigenvalue 1. Now,
M_2 = PM_1 = P[v,Bv,…,B^m-1v]
= [Pv,PBv,…,PB^m-1v]
= [v,PBP^-1Pv,…,PB^m-1P^-1Pv]
= [v,PBP^-1v,…,PB^m-1P^-1v]
= [v,B'v,…,B'^m-1v]
Thus, M_2 is a matrix state of a σ-LFSR with a feedback matrix B' having characteristic polynomial g. Hence, any arbitrary full rank matrix having first column v is a matrix state of a σ-LFSR with characteristic polynomial g. Further, as v can be arbitrarily chosen, any full rank matrix is a matrix state of a sequence generated by a σ-LFSR configuration with characteristic polynomial g having a single delay block. In other words, the set of matrix states of sequences generated by σ-LFSRs with a single delay block having characteristic polynomial g is the set of all full rank matrices.
Now, the minimal polynomial of each component sequence is g(x). Therefore, every nonzero string of m consecutive binary values occurs exactly once as a subsequence of each of these sequences in each period <cit.>. Therefore, for each sequence generated by a σ-LFSR with characteristic polynomial g, there is a unique matrix state with the first row equal to e_1^m. As all sequences generated by a primitive σ-LFSR configuration are just shifted versions of each other, this matrix uniquely characterizes the σ-LFSR configuration.
As a consequence of the above lemma, for a primitive polynomial g(x) with degree m and companion matrix M_g, given a random full rank matrix M = [e_1^m;v_1;…;v_m-1] ∈𝔽_2^m × m, the set of integers (d_1,d_2,…,d_m-1) such that v_i = e_1^m M_g^d_i for 1 ≤ i ≤ m-1 is an element of z_m^g. However, given an invertible matrix in 𝔽_2^m × m, calculating the set of integers (d_1,d_2,…,d_m-1) is not trivial. This is illustrated in the following lemma and corollary.
Let a,b ∈ F_2^m and g be a primitive polynomial of degree m over GF(2). Let M_g∈ F_2^m × m be the companion matrix of g. The calculation of the value i in the equation a× M_g^i=b takes 𝒪(m× e_K+e_K ×√(P_K)+m^3) time, where 2^m-1=∏_i=1^kP_i^e_i and P_k is the largest factor.
M_g is a representation of a root of g and can therefore be seen as a primitive element of F_2^m. Therefore, ℬ={M_g,M_g^2,⋯,M_g^m-1,M_g^m} forms a basis for F_2^m. Hence, any M_g^i can be given by a linear combination of the elements in ℬ. Given an equation a × M_g^i=b for a,b ∈𝔽_2^m, the value of M_g^i can be computed by solving the following linear equation for v
v ×[ a; a× M_g; a × M_g^2; ⋮; a × M_g^m-1 ]_m× m=b
If v = (a_0,a_1,…,a_m-1), then M_g^i = a_0 I+a_1 M_g+⋯+a_m-1 M_g^m-1. Solving Equation <ref> takes 𝒪(m^3) operations. The value of i can then be found from M_g^i by calculating the discrete logarithm using the Pohlig-Hellman algorithm <cit.>. The time complexity of the Pohlig-Hellman algorithm is 𝒪(m× e_k+e_k×√(P_k)), where 2^m-1=∏_i=1^kP_i^e_i and P_k is the largest factor. Therefore the total time needed to calculate the value of i is 𝒪(m× e_k+e_k ×√(P_k)+m^3).
Observe that calculating an element of z_m^g from the matrix M involves solving (m-1) instances of the problem discussed in the above lemma. We therefore have the following corollary.
Given a matrix M=(e_1^m,v_1,⋯,v_m-1)∈ F_2^m × m and the companion matrix M_g∈ F_2^m × m of a primitive polynomial g, finding the distances d_1,d_2,⋯,d_m-1 from the equations e_1^m× M_g^d_i=v_i for i∈[1,m-1] using the Pohlig-Hellman algorithm takes 𝒪((m-1)×(m× e_k+e_k ×√(P_k)+m^3)) time.
From Corollary 2 and Lemma 5, it is apparent that calculating an element of z_m^g from a random invertible matrix is computationally expensive.
We therefore present a randomized algorithm (Algorithm <ref>) in which an invertible matrix M_1 and the distance vector are generated simultaneously. The first row of M_1 is taken as e_1^m. The algorithm runs for m-1 iterations. In the i-th iteration, an integer d_i is randomly chosen from the set of integers ranging from 1 to 2^m-1. The vector e_1^m M_g^d_i is then evaluated. If this vector is linearly independent of the previously added rows of M_1, then d_i is appended to the list of entries of the distance vector. Otherwise, the process is repeated with a new choice of d_i. Thus, in each iteration a new entry is added to the distance vector and a new row is appended to M_1. The linear independence of the newly added vector is checked by simultaneously generating a matrix M_2 whose entries are all 0 below the anti-diagonal. Further, for all 1≤ j ≤ m, the span of the first j rows of M_2 is the same as the span of the first j rows of M_1. The linear independence of the rows of M_2 ensures the linear independence of the rows of M_1. The correctness of the algorithm is proved by Theorem <ref>. Moreover, the proof gives an insight into the working of the algorithm.
The main computational challenge in the above algorithm is to compute the matrix power M_g^d. This can be done using the binary exponentiation algorithm (BEA) <cit.> with the matrix M_g and the integer d as inputs. Using BEA, for any d ∈ [1,2^m-1], M_g^d can be calculated in 𝒪(m^2log_2(m)) time. Here, 𝒪(m^2) time is required for multiplying two matrices of size m × m, which is given in Algorithm <ref>.
The COUNTSETBITS(x) function in the above algorithm counts the number of ones in (A[i,:] && B[:,j]). The bitwise AND of this count with the integer 1 gives the parity: 1 when the count is odd and 0 when it is even. It runs in 𝒪(1) time. The Algorithm <ref> for COUNTSETBITS is given below:
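Since the algorithm listings themselves are not reproduced here, the following Python sketch (ours, not the paper's C implementation or its Algorithm listings) illustrates the two ingredients just described: a GF(2) matrix product in which each entry is the parity of a popcount, and M^d by binary exponentiation. Rows are packed into integers, with bit j holding column j.

def gf2_matmul(A, B, m):
    # cols[j] packs column j of B so that row & cols[j] selects the products A[i,k]B[k,j]
    cols = [sum(((B[i] >> j) & 1) << i for i in range(m)) for j in range(m)]
    C = []
    for row in A:
        r = 0
        for j in range(m):
            r |= (bin(row & cols[j]).count("1") & 1) << j   # COUNTSETBITS(...) & 1
        C.append(r)
    return C

def gf2_matpow(M, d, m):
    """M^d over GF(2) by square-and-multiply (binary exponentiation)."""
    R = [1 << i for i in range(m)]        # identity matrix, packed rows
    while d:
        if d & 1:
            R = gf2_matmul(R, M, m)
        M = gf2_matmul(M, M, m)
        d >>= 1
    return R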
Given a primitive polynomial g of degree m over GF(2), Algorithm 2 generates an element of the z_m^g.
As a result of Lemma <ref>, the above theorem stands proved if it is proved that the matrix M_1 is invertible. This is proved by inductively proving that each new row added to M_1 is linearly independent of all its previous rows. We now inductively prove the following statements;
* For all 1≤ i≤ m, the first i rows of M_2 are linear combinations of the first i rows of M_1. Further, M_2(i,m-i+1)=1 and M_2(i,k) = 0 for all k> m-i+1.
* For all 1≤ i≤ m, the first i rows of M_1 are linearly independent.
As the first rows of M_1 and M_2 are e_1^m, both these statements are trivially true when i=1.
Assume that they are true for all i<ℓ<m.
Observe that, when i=ℓ, due to lines 7 and 8 of Algorithm <ref>, M_2[ℓ,:] is initially equal to M_1[ℓ,:]. M_2[ℓ,:] is then modified in the subsequent while loop (Line 11 to Line 15 of Algorithm <ref>) by adding to it some of the previous rows of M_2. By assumption, the first ℓ-1 rows of M_2 are linear combinations of the first ℓ-1 rows of M_1. Hence, at the end of this while loop, M_2(ℓ,:) is a linear combination of the first ℓ rows of M_1. Further, by assumption M_2(j,m-j+1)=1 and M_2(j,k)=0 for all j<ℓ and k>m-j+1. Therefore, when i=ℓ and j = p<ℓ, in Line 12 of Algorithm <ref>, the p-th row of M_2 (whose (m-p+1)-th entry is 1) is added to M_2(ℓ,:) if and only if M_2(ℓ,m-p+1) = 1. Therefore, after this addition M_2(ℓ,m-p+1) becomes zero. Further, all the entries of M_2(ℓ,:) that have been made zero in the previous iterations of the while loop remain zero, as M_2(p,k)=0 for all k>m-p+1. Therefore, at the end of the inner while loop M_2(ℓ,k) = 0 for all k>m-ℓ +1. Now, the value of i is incremented only when
M_2(ℓ,m-ℓ +1) = 1. Therefore, M_2(ℓ,m-ℓ+1)=1 and M_2(ℓ,k) = 0 for all k> m-ℓ+1.
The structure of the first ℓ rows of M_2 ensures that these rows are linearly independent. Further, since these rows are linear combinations of the first ℓ rows of M_1, the first ℓ rows of M_1 are linearly independent. Thus both statements are true when i=ℓ and the theorem stands proved.
The element of z_m^g generated in Algorithm-2 is then used to find an element of z_mb^f using the bijection mentioned in Theorem 2. This is then used to generate the desired σ-LFSR configuration in the following algorithm.
Algorithm-<ref> generates the configuration matrix of a z-primitive σ-LFSR whose characteristic polynomial is the primitive polynomial f considered in Algorithm <ref>.
The set D={d_1,⋯,d_m-1} is an element of z_m^g. Therefore, by Theorem 2 the set D' = {zd_1,…,zd_m-1} is an element of z_mb^f. Therefore, by the arguments given in Section 2, the vectors (e_1^n,w_1,…,w_m-1, e_1^nM_f,w_1M_f,⋯,w_m-1M_f, …,e_1^nM_f^b-1,w_1M_f^b-1,⋯,w_m-1M_f^b-1) are linearly independent. Hence, by Lemma <ref>, the matrix QM_fQ^-1 is an m-companion matrix with characteristic polynomial f.
Now, by Theorem <ref>, the first m rows of the matrix Q constitute a matrix state of the sequence generated by a σ-LFSR with configuration matrix QM_fQ^-1. Therefore, the distance vector of this σ-LFSR is {zd_1,⋯,zd_m-1}. As each entry of this vector is a multiple of z, the σ-LFSR is z-primitive.
Using Corollary 1, Theorem 2 and Lemma 3, the algorithm generates the matrix state B=[e_1^n,w_1,⋯,w_m-1]^T∈ F_2^m × n, where w_i ∈ F_2^1 × n and z=(2^n-1)/(2^m-1). The matrix B is then extended to an invertible matrix Q as follows:
Q=[B;B× M_f;B× M_f^2;⋯; B × M_f^b-1]∈ F_2^n × n
(<cit.>, page 65). The M-companion matrix is then calculated as Q× M_f × Q^-1 <cit.>.
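A compact numerical sketch of the final steps of this construction is given below; it assumes the distance vector and M_f are already available, works with numpy integer arrays reduced mod 2, and the helper names gf2_inv and m_companion are our own, not the authors' code.

import numpy as np

def gf2_inv(A):
    """Inverse of an invertible 0/1 matrix by Gauss-Jordan elimination over GF(2)."""
    n = len(A)
    M = np.concatenate([np.array(A, dtype=np.int64) % 2,
                        np.eye(n, dtype=np.int64)], axis=1)
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r, c])   # assumes A is invertible
        M[[c, piv]] = M[[piv, c]]
        for r in range(n):
            if r != c and M[r, c]:
                M[r] ^= M[c]
    return M[:, n:]

def m_companion(M_f, dists, m, b, z):
    """Build Q from e_1^n and w_i = e_1^n M_f^(z d_i), then return Q M_f Q^{-1} mod 2."""
    n = m * b
    Mf = np.array(M_f, dtype=np.int64)
    e1 = np.zeros(n, dtype=np.int64); e1[0] = 1
    def matpow(A, d):
        R = np.eye(n, dtype=np.int64)
        while d:
            if d & 1: R = (R @ A) % 2
            A = (A @ A) % 2
            d >>= 1
        return R
    B = np.vstack([e1] + [e1 @ matpow(Mf, z * d) % 2 for d in dists])   # m x n
    Q = np.vstack([B @ matpow(Mf, k) % 2 for k in range(b)])            # n x n
    return (Q @ Mf @ gf2_inv(Q)) % 2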
Complexity analysis:
Each M_f^zd_i in Step 4 of Algorithm 4 can be calculated in 𝒪(n^2log_2(n)) time. Since this operation is done (n-1) times, the computational complexity of Step 4 is 𝒪(n^3log_2(n)). Further, Q× M_f× Q^-1 can be calculated in 𝒪(n^3) time. Thus, the overall time complexity of Algorithm 4 is 𝒪(n^3log_2(n)).
For the case where m=3 and b=3, we aim to generate a z-primitive σ-LFSR configuration with characteristic polynomial f(x)=x^9 + x^6 + x^4 + x^3 + x^2 + x + 1. The following is the companion matrix of f(x)
M_f=([ 0 0 0 0 0 0 0 0 1; 1 0 0 0 0 0 0 0 1; 0 1 0 0 0 0 0 0 1; 0 0 1 0 0 0 0 0 1; 0 0 0 1 0 0 0 0 1; 0 0 0 0 1 0 0 0 0; 0 0 0 0 0 1 0 0 1; 0 0 0 0 0 0 1 0 0; 0 0 0 0 0 0 0 1 0 ])
* z=(2^9-1)/(2^3-1)=73
* g(x)=(x+α^73)(x+α^73·2)(x+α^73·4) mod f(α)=x^3 + x^2 + 1, where α is a root of f(x).
* Randomly sample a matrix from linear group GL(3,GF(2)). Let the sampled matrix be
Mat=([ 0 0 1; 1 0 0; 0 1 0 ])
* The distance vector corresponding to the above matrix is
{6,5}
* The subspace V_f in Step 4 of Algorithm 4 is as follows:
V_f={e_1^9,e_1^9*M_f^6,e_1^9*M_f^5}
* The matrix Q in Step 5 of Algorithm 4 is as follows
Q=([ 0 0 0 0 0 0 0 0 1; 0 1 1 0 0 0 1 0 1; 1 0 0 0 0 1 0 1 1; 0 0 0 0 0 0 0 1 0; 1 1 0 0 0 1 0 1 1; 0 0 0 0 1 0 1 1 1; 0 0 0 0 0 0 1 0 0; 1 0 0 0 1 0 1 1 0; 0 0 0 1 0 1 1 1 0 ])
* The configuration matrix of the z-primitive σ-LFSR is calculated as follows,
Q × M_f × Q^-1=([ 0 0 0 1 0 0 0 0 0; 0 0 0 0 1 0 0 0 0; 0 0 0 0 0 1 0 0 0; 0 0 0 0 0 0 1 0 0; 0 0 0 0 0 0 0 1 0; 0 0 0 0 0 0 0 0 1; 1 0 1 1 0 1 0 1 0; 1 0 0 1 0 0 0 0 1; 0 1 0 0 1 0 1 1 0 ])
* The gain matrices for the configuration matrix (Q × M_f × Q^-1) are as follows
B_0=
[ 1 0 1; 1 0 0; 0 1 0 ] B_1=
[ 1 0 1; 1 0 0; 0 1 0 ] B_2=
[ 0 1 0; 0 0 1; 1 1 0 ]
§.§ Generation of the Public Parameter
Recall that in the generation of the configuration matrix, the primitive polynomial in Algorithm <ref> and the elements of the distance vector in Algorithm <ref> are randomly chosen independent of the key. Hence, the information needed to recover the σ-LFSR configuration is not completely contained in the key. Therefore, in addition to the key, we generate a publicly known parameter matrix C to enable the reciever to recover the σ-LFSR configuration.
Note that the proposed scheme has two secret keys K_1, K_2∈ F_2^128 and two initialization vectors IV_1,IV_2∈ F_2^128. We now state the algorithm that generates the public matrix C followed by a brief explanation.
In the above algorithm, the secret keys K_1,K_2 are used to generate a secret matrix S, which is multiplied with the last 32 rows of the M-companion matrix M_1 to produce the matrix C. This C is then made public. In order to generate S, a vector v_1 is first generated using the (key, IV) pair (K_1,IV_1), following the procedure that is used to generate the initial state of SNOW 2.0 and SNOW 3G. v_1 is then divided into 4 equal parts of 128 bits. Each of these parts is encrypted with the AES-128 algorithm using the (key, IV) pair (K_2,IV_2). The encrypted words are then concatenated to produce the next vector v_2. This procedure is followed recursively to generate 512 vectors, which are taken as the rows of a matrix. The elements of this matrix below the diagonal are then discarded and all the diagonal elements are set to 1. This results in an upper triangular matrix U. The matrix S is then generated by multiplying U with its transpose L=U^T, that is, S=U × L.
The receiver computes the matrix S from (K_1, K_2). The feedback gain matrices B_i are recovered by multiplying the public matrix C (generated in Algorithm-5) with the inverse of S. Thus, the receiver regenerates the LFSR configuration from the keys (K_1,K_2) and the public matrix C. Whenever a user intends to transmit confidential data using an LFSR-based word-oriented stream cipher, Algorithm-5 uses (K_1, IV_1, K_2, IV_2) to mask the configuration matrix of the LFSR. The challenge matrix is generated during the pre-computation phase prior to the initialization cycle of the cipher. The σ-LFSR configuration corresponding to the M-companion matrix generated in Algorithm-4 is then used for keystream generation.
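The masking step can be illustrated as follows. In this sketch (a deliberately simplified assumption), the rows of U are filled with pseudo-random bits instead of the AES-128-derived vectors described above; only the shapes and the algebra S = U×U^T and C = (last m rows of M_1)×S are meant to mirror the scheme.

import numpy as np

rng = np.random.default_rng(1)
n, m = 512, 32

# Upper-triangular U with unit diagonal (stand-in for the AES-derived rows).
U = np.triu(rng.integers(0, 2, size=(n, n), dtype=np.int64), k=1)
np.fill_diagonal(U, 1)
S = (U @ U.T) % 2            # symmetric and invertible over GF(2), since det(U) = 1

M1_last = rng.integers(0, 2, size=(m, n), dtype=np.int64)   # placeholder for the last m rows of M_1
C = (M1_last @ S) % 2        # public parameter

# Receiver side: rebuild S from the shared keys and recover the gain-matrix
# rows as (C @ S_inverse) % 2, where S_inverse is computed over GF(2).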
§.§.§ Doing away with the Public Parameter C
For the receiver to recover the σ-LFSR configuration, in addition to the shared key and the publicly known IV, he/she should know the primitive polynomial that is sampled in Algorithm <ref> and the entries of the distance vector. If these entries can be securely generated using the secret key, then the public parameter will not be required.
This will avoid the communication cost incurred for sharing the public parameter.
§.§ Initialization Phase:
The initialization phase of this scheme is the same as the initialization phase of SNOW 2.0 or SNOW 3G. It runs for 32 clock cycles using the same feedback polynomial over GF(2^32), and the adversary is not allowed to access the keystream during this period. At the last clock, the coefficients of the feedback polynomial over GF(2^32) are replaced by the gain matrices B_i∈ F_2^32 × 32 of M_1.
§.§ Keystream Generation Phase:
In the keystream generation phase, the LFSR part of SNOW 2.0 and SNOW 3G is governed by the following equation:
St_15^t+1=B_0St_0^t + B_1St_1^t + ⋯ + B_15St_15^t
where B_i∈ F_2^32 × 32, i ∈ [16], are the gain matrices of M_1. Each delay block St_j, j∈[16], is updated as:
St_k^t+1=
St_k+t+1^0, if 0 ≤ k+t+1 ≤ 15,
∑_i=0^15B_iSt_i^t+k, if k+t+1 >15.
The FSM update is the same as in SNOW 2.0 and SNOW 3G (see Section 4).
§.§ Security of the proposed scheme
Here, in addition to the keystream, the attacker has access to the public matrix C generated by Algorithm <ref>. He/She could potentially use this matrix to retrieve the gain matrices of the LFSR. On average, a brute force attack will require 2^255 guesses to get to the correct key. We now show that other methods of deriving the feedback configuration are computationally more expensive.
Given any symmetric invertible matrix S' ∈𝔽_2^mb × mb, there exists a matrix M' ∈𝔽_2^m × mb such that the public parameter C is the product of M' and S'.
Let M_1 be the configuration matrix of the σ-LFSR and let C = M_1[mb-m+1:mb,:]× S be the public parameter matrix derived in Algorithm <ref>. Given any symmetric invertible matrix S' ∈𝔽_2^mb × mb,
C = M_1[mb-m+1:mb,:]× S × S'^-1× S'
= M'× S'
where M' = M_1[mb-m+1:mb,:]× S × S'^-1∈𝔽_2^m × mb.
To find the gain matrices of the σ-LFSR from the public parameter, one can sample an invertible symmetric matrix S' and find the corresponding values of M' such that C = M'× S'. If the feedback configuration corresponding to M' is z-primitive, then assuming this to be the feedback configuration, one can launch any existing attack to generate the initial state of the LFSR. The number of invertible symmetric matrices in
𝔽_2^mb × mb is 2^mb(mb-1)/2, Hence, the average number of attempts needed to get the correct invertible matrix is 𝒪(2^mb(mb-1)/2-1) which is prohibitively high.
Alternatively, one could sample a z-primitive LFSR configuration and consider the configuration matrix as a potential M_1. One can then check if the public parameter can be written as a product of the last m rows of this matrix and a symmetric matrix S. This involves checking if a set of m× mb linear equations in mb(mb-1)/2 variables has a solution (For SNOW 2 and SNOW 3G the number of equations is 8192 and the number of variables is 130816). If this set of equations has a solution, one can launch any known attack to recover the initial state of the LFSR. For a given value of m and b, the total number of z-primitive LFSR configurations is |GL(m,𝔽_2)|/2^m-1×ϕ(2^mb-1)/mb. For SNOW 2 and SNOW 3 this number turns out to be 2^1493. Therefore, the average number of z-primitive LFSRs that need to be sampled to get to the right configuration is approximately 2^1492.
To understand the added security of the proposed scheme, we need to understand a MQ problem.
Multivariate Quadratic (MQ) Problem: the problem of finding a solution of a system of multivariate quadratic equations over a finite field,
f_k(x_1, x_2, …, x_n) =∑_1≤ i ≤ j≤ na^(k)_ijx_ix_j = d_k, k=1,2,…,m,
where a^(k)_ij∈ GF(2), x_1,x_2,⋯,x_n are variables over GF(2) and f_1,f_2,⋯, f_m are m quadratic equations.
The NP-hardness of the MQ problem follows from the fact that it is a generalization of the Boolean satisfiability problem (SAT), which is a well-known NP-hard problem.
Our proposed scheme publicly shares a matrix C∈ F_2^32 × 512 whose entries can be written as the values of quadratic equations of the form
x_i_1y_j_1 + x_i_2y_j_2+⋯+x_i_512y_j_512=v_k,
where v_k∈ C is known and the x_i∈ M_1 and y_j∈ S are unknown variables over GF(2), with 1≤ k ≤ (32× 512), 1≤ i≤ (32 × 512) and 1 ≤ j ≤ (256 × 513). The security of our proposed scheme therefore depends on solving an underdefined system of multivariate quadratic equations over GF(2) with m=2^14 equations and n=2^17.172 variables.
§.§ Resistance to Attacks:
Several known plaintext attacks, such as algebraic attacks <cit.>, distinguishing attacks <cit.>, fast correlation attacks <cit.>, guess-and-determine attacks <cit.> and cache timing attacks <cit.>, have been reported for SNOW 2.0 and SNOW 3G. All these attacks use either the feedback equation of the σ-LFSR or the linear recurring relation corresponding to the characteristic polynomial of the LFSR. A method of hiding the configuration matrix is explained in <cit.>. However, the characteristic polynomial of the σ-LFSR there is publicly known. The scheme given in <cit.> is therefore vulnerable to attacks that use the characteristic polynomial of the σ-LFSR; this polynomial gives rise to a linear recurring relation of degree 512 with coefficients in 𝔽_2. Such attacks include the fast correlation attack given in <cit.> and the fault attack on SNOW 3G given in <cit.>. As the characteristic polynomial in our scheme is not known, the attacker has to keep sampling from the set of primitive polynomials of degree 512 until he/she gets to the correct characteristic polynomial. As the number of primitive polynomials of degree 512 over 𝔽_2 is 2^502, the attacker will on average sample 2^501 polynomials. Alternatively, the attacker could try to generate the symmetric matrix S in Algorithm <ref> by sampling the key space, which on average takes 2^255 attempts to get to the correct key.
§ EXPERIMENT RESULT
The proposed scheme has been implemented in C (using the GCC compiler) on a machine with an Intel Core i5-1135G7 processor, 8GB RAM and a 512 GB hard drive. The parallel implementation of e_1^n × M_f^zd_i for n=512, M_f∈ F_2^512 × 512 and 1≤ |d_i|≤ 512, where i∈{1,…,31}, took a total of 0.04 seconds. The calculation P × Q × P^-1 (Step 6 in Algorithm <ref>) took another 0.08 seconds. Algorithm <ref> was completed in 0.13 seconds. The total initialization time for our scheme is on average 0.2 seconds (averaged over 200 test cases). Besides, to accelerate the keystream generation process, the 32-bit vector-matrix multiplications in the σ-LFSR are done using Algorithm <ref> with constant time complexity 𝒪(c), where c=32. This leads to an improvement in performance over existing implementations. The key generation times (KGT) for SNOW 3G and our proposed scheme are given in the following table.
A lookup-table based implementation of the LFSR in SNOW 3G can be cryptanalysed by cache timing attacks <cit.>. Hence, the LFSR part of SNOW 3G (especially the field multiplication over GF(2^32)) is implemented without any lookup tables.
§ CONCLUSION AND FUTURE WORK
In this article, we have introduced a z-primitive σ-LFSR generation algorithm that produces an m-companion matrix with b m-input m-output delay blocks for a word-based LFSR. Besides, to hide the feedback polynomial over GF(2), we multiply the m-companion matrix by a key-dependent invertible matrix. Finding the part of the multiplied matrix that is shared as a public parameter is analogous to searching for a symmetric matrix in a space of size 2^256 × 511. Our scheme can resist Fast Correlation Attacks (FCA); we have shown that SNOW 2.0, SNOW 3G, and Sosemanuk equipped with our scheme can withstand FCAs. Besides, our scheme is robust against known plaintext attacks based on the feedback equation of the LFSR. Future work on this scheme includes the following:
* Implementation of word-based stream ciphers with word sizes m=64 and 128, and comparison with SNOW V and SNOW VI <cit.>.
* Developing the scheme without the public parameter C∈ F_2^32 × 512.
|
http://arxiv.org/abs/2307.02335v2
|
20230705144327
|
The Classification of Galaxy Morphology in H-band of COSMOS-DASH Field: a combination-based machine learning clustering model
|
[
"Yao Dai",
"Jun Xu",
"Jie Song",
"Guanwen Fang",
"Chichun Zhou",
"Shuo Ba",
"Yizhou Gu",
"Zesen Lin",
"Xu Kong"
] |
astro-ph.GA
|
[
"astro-ph.GA"
] |
Guanwen Fang
[email protected], [email protected]
0000-0002-4638-0235]Yao Dai (代瑶)
Institute of Astronomy and Astrophysics, Anqing Normal University, Anqing 246133, People's Republic of China;
<[email protected]>
0000-0003-1697-6801]Jun Xu (徐骏)
Jun Xu and Yao Dai contributed equally to this work
Institute of Astronomy and Astrophysics, Anqing Normal University, Anqing 246133, People's Republic of China;
<[email protected]>
0000-0002-0846-7591]Jie Song (宋杰)
Deep Space Exploration Laboratory / Department of Astronomy, University of Science and Technology of China, Hefei 230026, China; <[email protected]>
School of Astronomy and Space Science, University of Science and Technology of China, Hefei 230026, People's Republic of China
0000-0001-9694-2171]Guanwen Fang (方官文)
Institute of Astronomy and Astrophysics, Anqing Normal University, Anqing 246133, People's Republic of China;
<[email protected]>
0000-0002-5133-2668]Chichun Zhou (周池春)
School of Engineering, Dali University, Dali 671003, People's Republic of China
School of Engineering, Dali University, Dali 671003, People's Republic of China
0000-0003-3196-7938]Yizhou Gu (顾一舟)
School of Physics and Astronomy, Shanghai Jiao Tong University, 800 Dongchuan Road, Minhang, Shanghai 200240, People's Republic of China
0000-0001-8078-3428]Zesen Lin (林泽森)
Department of Physics, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong S.A.R., China
0000-0002-7660-2273]Xu Kong (孔旭)
Deep Space Exploration Laboratory / Department of Astronomy, University of Science and Technology of China, Hefei 230026, China; <[email protected]>
School of Astronomy and Space Science, University of Science and Technology of China, Hefei 230026, People's Republic of China
By applying our previously developed two-step scheme for galaxy morphology classification, we present a catalog of galaxy morphology for H-band selected massive galaxies in the COSMOS-DASH field, which includes 17292 galaxies with stellar mass M_⋆>10^10 M_⊙ at 0.5<z<2.5. The classification scheme is designed to provide a complete morphology classification for galaxies via a combination of two machine-learning steps. We first use an unsupervised machine learning method (i.e., bagging-based multi-clustering) to cluster galaxies into five categories: spherical (SPH), early-type disk (ETD), late-type disk (LTD), irregular (IRR), and unclassified (UNC). About 48% of galaxies (8258/17292) are successfully clustered during this step. For the remaining sample, we adopt a supervised machine learning method (i.e., GoogLeNet) to classify them, during which galaxies that are well-classified in the previous step are taken as our training set. Consequently, we obtain a morphology classification result for the full sample. The t-SNE test shows that galaxies in our sample can be well aggregated. We also measure the parametric and nonparametric morphologies of these galaxies. We find that the Sérsic index increases from IRR to SPH and the effective radius decreases from IRR to SPH, consistent with the corresponding definitions. Galaxies from different categories are separately distributed in the G–M_20 space. Such consistencies with other characteristic descriptions of galaxy morphology demonstrate the reliability of our classification result, ensuring that it can be used as a basic catalog for further galaxy studies.
§ INTRODUCTION
Galaxy morphology and how it evolves with time are crucial in understanding the assembling history and evolution of galaxies. Various galaxies exhibit different features (e.g., budge, spiral arm, bar, and tidal tail). By visual inspection of about 400 galaxy photographic images, <cit.> presented a systematic study of galaxy morphology, which found that galaxies can be mainly divided into four categories (i.e., Spiral, Lenticular, Elliptical and Irregular), and proposed the Hubble sequence scheme. These galaxy morphology categories are then found to be connected to other physical parameters. For instance, color, gas content, star formation rate, stellar mass and environment <cit.>. The diverse properties of galaxies in different morphology categories may imply different evolution paths. To understand galaxy evolution, the key is to obtain reliable classification results of galaxies at each epoch in the universe.
There are several ways to derive the morphological type of galaxies. Visual inspection is a commonly used direct way since <cit.> and is still widely used in some projects. The Galaxy Zoo is a significant project of visual inspection that gets nearly half a million volunteers involved. In the project, the morphological type of each source is voted on by a certain number of volunteers by recognizing features in the image <cit.>. This method shows good robustness when the signal-to-noise ratio and resolution change between images, but is in the meanwhile prohibitively time-consuming. Apart from visual inspection, the multidimensional morphological parameter space is a practical tool in galaxy morphology classification when an empirical cutoff is adopted. For example, some non-parametric statistics (e.g., concentration, asymmetry, clumpiness, M_20, and the Gini coefficient) are designed to describe the characteristics of galaxies <cit.>. Galaxy morphology could be distinguished within the parameter space <cit.>. These parameters describe certain morphological features of galaxies quantitatively, but drop much information in the image and thus may lead to failure in classification.
In recent years, machine-learning technology such as the convolutional neural network (CNN) has been applied to derive galaxy morphology
automatically <cit.>. By taking advantage of the abundant information in the raw image, the CNN method has been applied to SDSS <cit.> and CANDELS images <cit.>. Since CNN is a supervised machine learning (SML) method, it highly depends on the prior information from the training set to simulate human perceptions. Meanwhile, unsupervised machine learning (UML) is another kind of machine-learning technology, which does not need a pre-labeled training set. It clusters galaxies by the characteristics of the image itself, even if the machine does not understand the galaxy features. As a result, it is widely used in morphology analysis in the era of big data survey <cit.>. Generally, UML methods work in two steps: (1) extract features from the raw image, and (2) cluster galaxies by similar features.
Various UML methods have been designed in practice. For example, <cit.> and <cit.> extracted features using the growing neural gas algorithm <cit.> and cluster the galaxies with the hierarchical clustering technique.
The convolutional autoencoder (CAE; ) is another effective technique for extracting image features.
<cit.> applied CAE and a Bagging-based multi-clustering model to cluster CANDELS images and obtained a reliable classification result with a cost of rejecting a certain fraction of disputed sources that reach no agreement in the voting of the bagging method. Later, by adopting the classification result of <cit.> as a training set, <cit.> used an SML method to classify the rejected sources in <cit.>. Thus, by combining the UML and SML methods, we are able to classify the galaxy sample into different morphological categories entirely.
COSMOS-DASH is the largest near-infrared (NIR) survey using HST/WFC3, which could help us study the morphology of galaxies at redshift 0.5<z<2.5, where the rest-frame optical emission shifts into NIR. In this paper, we apply both UML (i.e., CAE & bagging-based multi-clustering algorithm) and SML (i.e., GoogLeNet) methods to massive galaxies in the COSMOS-DASH field to get reliable and complete morphology classification result.
The paper is organized as follows. Section <ref> describes the COSMOS-DASH survey and the sample we used. We introduce the UML method and the GoogLeNet model in Section <ref>. In Section <ref>, We present the test of the classification results in the galaxy parameter space and provide a catalog. Finally, a conclusion will be given in Section <ref>.
§ DATA AND SAMPLE SELECTION
§.§ COSMOS-DASH
Wide-field NIR survey is vital in studying galaxies at high redshift, where the rest-frame optical emissions shift into the NIR bands. Various projects are conducted by ground-based facilities (e.g., NMBS, ; UltraVISTA, ) and space facilities (e.g., HST, JWST). For the HST, it is hard to balance resolution, depth, and area for observations. To obtain high-resolution deep-field images, observations should be limited to a tiny field of view, which makes large-scale deep-field NIR sky surveys very difficult. Drift and Shift <cit.> is an efficient technique for wide-field observation with HST. With the DASH technique, <cit.> present a wide-field NIR survey of the COSMOS field, which is also named COSMOS-DASH. It is taken with 57 DASH orbits in the F160W filter of WFC3 and covers an area of 0.49 deg^2 (0.7 deg^2 when combined with archival data), which is much larger than the CANDELS field. Since the exposures are around 300s per pointing, the 5σ source depth of the image is H_160=25.1 ABmag. The COSMOS-DASH field is centered at R.A.=10:00:28.6, decl.=+02:12:21.0 and contains 50000× 50000 pixel (with 01 per pixel). The final mosaic of the image is available from the COSMOS-DASH website.[<https://archive.stsci.edu/hlsp/cosmos-dash/>]
§.§ UVISTA Catalog
To obtain stellar mass and other physical parameters of galaxies, we select our sample from the UltraVISTA K_s-selected catalog <cit.>, which is based on an early release of the NIR data (UltraVISTA DR1). The catalog was generated by PSF-matching images in 30 filters. They divided the UltraVISTA into nine separate pointings depending on the layout of the COSMOS Suprime-Cam, and PSF matching was done separately in each of the nine fields to account for any field-to-field PSF variations. They chose the UltraVISTA K_s band as the selection band and reached a depth of K_s, tot = 23.4 ABmag at 90% completeness. The photometric redshift of each galaxy in the catalog was determined by fitting the spectral energy distributions (SEDs) within 0.1-24 μm using the EAZY code <cit.>. The photometric redshifts had been tested against the spectroscopic redshifts of galaxies from COSMOS.
Moreover, the rest-frame colors were extracted from the outputs of the EAZY code. Along with the redshifts, they also fit the galaxy's SEDs to the <cit.> stellar population synthesis models to derive stellar mass with the FAST code <cit.>. During the fitting process, they assumed a <cit.> initial mass function, an exponentially declining star formation history, a <cit.> dust attenuation curve, and solar metallicity. Although the depth of this catalog is only K_s, tot = 23.4 mag, it is deep enough to select the massive galaxies analyzed in this paper.
§.§ Selection of Galaxies for Analysis
This paper aims to derive the morphological classification of massive galaxies in the COSMOS-DASH field. We adopt the UltraVISTA K_s-selected catalog and HST/F160W images from the COSMOS-DASH survey. We study massive galaxies with M_⋆ > 10^10M_⊙ at 0.5<z<2.5, which are bright enough to derive reliable morphologies. Since there are a few bright stars in the field, we also set the criterion use = 1 to ensure reliable stellar mass estimates. This flag means the objects: 1) are not too faint (i.e., K_s<23.5); 2) are galaxies rather than stars; 3) are not near a bright star; and 4) are missing only a few filters of data, so that their photometric redshifts and stellar population fits are reliable. Finally, 17292 galaxies are selected in our final sample after removing images with bad pixels.
§ THE METHOD FOR MORPHOLOGICAL CLASSIFICATION
In this section, we present the scheme we use to classify the morphologies of these galaxies (as shown in Figure <ref>). In our previous work, <cit.> developed a UML method to automatically group galaxies with similar morphologies in deep-field surveys. The UML method consists of two steps:
(1) use the CAE to compress the dimensions of the original data and extract features; (2) based on the bagging clustering method, group galaxies with similar characteristics together. After discarding the galaxies with inconsistent voting results, the remaining galaxies are grouped into 100 groups. Then, by visual classification, the 100 groups of galaxies with similar features are classified into five categories: spherical (SPH), early-type disk (ETD), late-type disk (LTD), irregular (IRR), and unclassified (UNC). As demonstrated in <cit.>, GoogLeNet has a high classification efficiency in the morphological classification of deep-field galaxies. Therefore, following <cit.>, we use the GoogLeNet model as our SML algorithm to classify the remaining sources discarded by the UML method, so that we can fully utilize the sample and achieve a complete classification. The SML method consists of two steps: (1) the GoogLeNet model is trained by adopting the galaxies successfully classified by the UML method as the training set; (2) the trained GoogLeNet model is applied to classify the sources discarded in the UML step.
§.§ Data Preprocessing
Following <cit.>, we crop the original large-size images to a size of 28 × 28 and place the galaxy at the center of the image so that unnecessary noise is reduced. Then we use the convolutional autoencoder (CAE) algorithm to extract image information and compress the dimensions through convolution and pooling operations at each layer <cit.>. The CAE is an effective technique for extracting image features: it can be used for automatic noise reduction without requiring any label information, which is
achieved by reconstructing the image <cit.>. The details of the CAE architecture are given in Table <ref>. The parameters and loss function of each CAE layer that we adopt are the same as those used in <cit.>. As seen from Figure <ref>, after applying the CAE for noise reduction, image features are effectively extracted and the image quality is significantly improved.
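For concreteness, a minimal PyTorch sketch of such a CAE for 28 × 28 stamps is given below; the filter counts and layer sizes are illustrative assumptions rather than the exact architecture of the CAE table above, but the unsupervised reconstruction objective is the same.

```python
# Minimal sketch of a convolutional autoencoder (CAE) for 28x28 galaxy stamps.
# Layer sizes and filter counts are illustrative assumptions only.
import torch
import torch.nn as nn

class CAE(nn.Module):
    def __init__(self, latent_channels=8):
        super().__init__()
        # Encoder: compress the 28x28x1 stamp into a low-dimensional feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                   # 28 -> 14
            nn.Conv2d(16, latent_channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                   # 14 -> 7
        )
        # Decoder: reconstruct the (denoised) image from the compressed features.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 16, kernel_size=2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Training minimizes the reconstruction error, so no labels are needed.
model = CAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()
batch = torch.rand(32, 1, 28, 28)          # stand-in for normalized galaxy stamps
recon, features = model(batch)
loss = criterion(recon, batch)
loss.backward()
optimizer.step()
```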
§.§ UML clustering process
The process of the UML clustering is illustrated in Figure <ref>. As demonstrated in <cit.>, a single clustering model may be biased and return misclustered results. Thus, after applying the CAE to the pre-processed 28×28 images for noise reduction, we adopt the bagging-based multi-clustering method <cit.> to obtain a more robust clustering result. As shown in Figure <ref>, the same batch of data is fed into three clustering models simultaneously (i.e., K-means, AGG, and BIRCH). Each model clusters the sample into 100 categories. The categories derived by the three models are aligned by taking the K-means labels as the reference and matching each group of the other two models to the K-means label that occurs most frequently within it <cit.>. Once the categories are aligned, we take a majority-wins voting strategy, and the sources for which the three models reach no agreement are discarded.
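The following minimal sketch (using scikit-learn) illustrates the label alignment and majority-wins voting described above; the feature array, the helper align_labels, and the tie-breaking details are illustrative assumptions rather than the exact implementation of <cit.>.

```python
# Minimal sketch of the bagging-based multi-clustering vote on CAE features.
# Variable names and the agreement rule (at least two models agree) are
# assumptions that follow the description in the text.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering, Birch

def align_labels(reference, other, n_clusters=100):
    """Relabel `other` so each of its groups takes the most frequent reference label."""
    mapping = {}
    for g in range(n_clusters):
        members = reference[other == g]
        mapping[g] = np.bincount(members, minlength=n_clusters).argmax() if len(members) else g
    return np.array([mapping[g] for g in other])

features = np.random.rand(1000, 64)          # stand-in for CAE-compressed features
n_clusters = 100

labels_km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
labels_agg = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(features)
labels_birch = Birch(n_clusters=n_clusters).fit_predict(features)

labels_agg = align_labels(labels_km, labels_agg, n_clusters)
labels_birch = align_labels(labels_km, labels_birch, n_clusters)

# Majority-wins vote: keep a source only if at least two models agree on its group.
votes = np.stack([labels_km, labels_agg, labels_birch], axis=1)
agree = (votes[:, 0] == votes[:, 1]) | (votes[:, 0] == votes[:, 2]) | (votes[:, 1] == votes[:, 2])
final = np.where(votes[:, 0] == votes[:, 1], votes[:, 0],
                 np.where(votes[:, 0] == votes[:, 2], votes[:, 0], votes[:, 1]))
kept_labels = final[agree]                   # successfully classified sources
discarded = np.where(~agree)[0]              # later handled by the SML step
```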
For the visual classification, we randomly select a certain number of images (approximately 20 to 50, depending on the number of galaxies in each group) and display them on the same panel. Three collaborators participated in this classification; a classification is accepted when two or more of them assign the same label to a specific galaxy, and otherwise it is considered unclassifiable. In this way, the groups are divided into five categories with physical meanings <cit.>.
As a result, we finally obtain 8258 galaxies with reliable morphological labels, which provide the basis for the following SML classification, and discard 9034 sources with inconsistent voting results in the UML clustering process.
§.§ SML Clustering Process – the GoogLeNet algorithm
To complete the classification of our sample, we take the 8258 sources well classified by the UML method as a training set and apply SML to the remaining 9034 galaxies.
As demonstrated by <cit.>, GoogLeNet performs well among classical neural network models in the classification of deep-field galaxies. Therefore, we adopt GoogLeNet <cit.> here as our supervised classification model.
The structure of GoogLeNet is shown in Figure <ref>. The Inception structure has two main advantages. One is that it stacks more convolutions within the same receptive field and extracts richer features. The other is that it performs convolutions at several kernel sizes simultaneously and re-aggregates them, extracting features on different scales, which makes the classification more accurate and efficient. The Inception module also brings together strongly correlated features, which accelerates convergence. The model parameters of each layer used in this work are described in Table <ref>.
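A minimal PyTorch sketch of a single Inception block is shown below to illustrate the multi-size convolution and re-aggregation; the channel counts are illustrative assumptions, not the values of the parameter table above.

```python
# Minimal sketch of one Inception block as used in GoogLeNet: parallel
# 1x1/3x3/5x5 convolutions plus pooling, concatenated along the channel axis.
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, in_ch, c1, c3_red, c3, c5_red, c5, pool_proj):
        super().__init__()
        self.branch1 = nn.Sequential(nn.Conv2d(in_ch, c1, 1), nn.ReLU())
        self.branch3 = nn.Sequential(nn.Conv2d(in_ch, c3_red, 1), nn.ReLU(),
                                     nn.Conv2d(c3_red, c3, 3, padding=1), nn.ReLU())
        self.branch5 = nn.Sequential(nn.Conv2d(in_ch, c5_red, 1), nn.ReLU(),
                                     nn.Conv2d(c5_red, c5, 5, padding=2), nn.ReLU())
        self.branch_pool = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                         nn.Conv2d(in_ch, pool_proj, 1), nn.ReLU())

    def forward(self, x):
        # Features extracted at several kernel sizes are re-aggregated by concatenation.
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.branch_pool(x)], dim=1)

block = InceptionBlock(64, 32, 48, 64, 8, 16, 16)
out = block(torch.rand(4, 64, 28, 28))   # -> shape (4, 32+64+16+16, 28, 28)
```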
In this part of the work, we use the 8258 well-classified galaxies obtained from the UML step as labeled data to classify the remaining 9034 galaxies. In order to avoid overfitting, we randomly divide the labeled data into training (7412) and verification (846) sets with a fixed proportion of about 9:1, as shown in Table <ref> <cit.>. When training the GoogLeNet model, the step size, learning rate, and depth of the algorithm follow <cit.>.
§ RESULT AND DISCUSSION
§.§ Overall morphological classification results
By combining UML with SML (i.e., the GoogLeNet model), we derive a complete morphology classification result of 17292 galaxies selected in the COSMOS-DASH field (Table <ref>), which includes 5335 SPHs, 3132 ETDs, 2837 LTDs, 1693 IRRs, and 4295 UNCs. Part of the result is shown in Table <ref>.
We randomly inspect the complete classification results and find that SPH galaxies are the most concentrated and brightest; ETDs are slightly dimmer, with a bright nucleus at the center and a relatively concentrated luminosity. Most LTDs have an obvious nuclear sphere and spiral arms, while their luminosity is more diffuse. IRRs have no apparent regular shape but can still be identified as galaxies. UNC sources mostly appear in images with a very low signal-to-noise ratio, in which it is impossible to identify whether there is a galaxy or what kind of galaxy it is. Figure <ref> shows randomly selected galaxies of each morphological type among the labeled samples from the UML method. It can be seen from these images that the morphological types are distinguishable.
In Table <ref>, we present the classification accuracies of the GoogLeNet model, which are larger than 90% for all five types. We also test
the distributions of the verification and training sets in the physical parameter space, and the verification set covers the entire parameter space well. The precision and recall in Figure <ref> are estimated on the verification set. The average precision and recall are both over 90%, indicating that GoogLeNet performs well in classifying galaxy images <cit.>, with a low probability that the various classes of galaxies are confused. Among them, the recognition accuracies of SPH and UNC are higher than those of the other classes. We conclude from our analysis that SPH and UNC have more distinct features and are therefore better learned by the model. It is typical for SPH to be misclassified as ETD, because both galaxy populations exhibit smooth contours and there is no strict boundary between them. Some LTDs have distinct nuclear sphere structures but no distinct spiral arms, leading to misclassification as IRRs.
§.§ t-SNE test
The t-SNE graph is an efficient way to map high-dimensional data to a low-dimensional space and to transform clustering results into dimensions suitable for inspection <cit.>. We apply the t-SNE technique to the five classes of galaxies finally classified by the UML and GoogLeNet models, randomly sampling 2000, 3000, and 4000 sources. As shown in Figure <ref>, the five categories of galaxies show an increasingly clear clustering trend as the number of sampled sources increases: galaxies with similar features are clustered together. Within each category, there is a small amount of overlap at the edges of the populations, which is caused by morphological similarities and is expected from galaxy morphological evolution.
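A minimal sketch of this check with scikit-learn's t-SNE is given below; the feature array, the labels, and the perplexity value are placeholders and illustrative assumptions.

```python
# Minimal sketch of the t-SNE check: project the CAE features of randomly
# drawn sources to 2D and color them by morphological label.
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

features = np.random.rand(17292, 64)              # stand-in for CAE features
labels = np.random.randint(0, 5, size=17292)      # 0..4 -> SPH, ETD, LTD, IRR, UNC
class_names = ["SPH", "ETD", "LTD", "IRR", "UNC"]

for n_sample in (2000, 3000, 4000):
    idx = np.random.choice(len(features), n_sample, replace=False)
    emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features[idx])
    plt.figure(figsize=(4, 4))
    for c, name in enumerate(class_names):
        sel = labels[idx] == c
        plt.scatter(emb[sel, 0], emb[sel, 1], s=2, label=name)
    plt.legend(markerscale=4)
    plt.title(f"t-SNE, {n_sample} sampled sources")
plt.show()
```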
The following conclusions can be drawn from Figure <ref>: (1) the UML method provides a feasible prior sample, reflected in the t-SNE graph by the fact that the distributions of all galaxy types tend to be stable; (2) the GoogLeNet model trained on the UML result successfully classifies the remaining sources and keeps the degree of aggregation consistent with the UML method. There are clearly distinguishable boundaries between all types of galaxies, indicating the reliability of our classification method.
§.§ Test of Morphological Parameters
Galaxy morphological parameters play an essential role in describing the physical properties of galaxies. Different categories of galaxies show different physical properties, and the correspondence between the visual classification results and the physical properties of massive galaxies can effectively reflect the reliability of our result <cit.>. In this section, we analyze the classification results using galaxy morphological parameters. Since most of the UNC images have a very low signal-to-noise ratio, their morphological parameters are difficult to measure and might have large uncertainties. On the other hand, ignoring the UNC sources does not affect the analysis of the other classes, so we do not discuss the nature of UNC in this section.
§.§.§ Parametric Measurements
To derive the galaxy morphology parameters, we used the GALFIT package <cit.> and GALAPAGOS software <cit.> to fit galaxy surface brightness profiles with a single Sérsic model and measure the Sérsic index n and the effective radius r_e for each galaxy.
The distributions of the Sérsic index are shown in Figure <ref>. Panel (a) shows that among the 8258 galaxies successfully clustered by the UML method, the median Sérsic indices of IRR, LTD, ETD, and SPH are 1.03, 1.34, 2.96, and 3.83, respectively, with a gradually increasing trend. In panel (b), the median Sérsic indices of IRR, LTD, ETD, and SPH are 1.29, 1.36, 3.00, and 3.64, respectively; in panel (c), they are 1.17, 1.35, 2.98, and 3.73, respectively. The classification results of GoogLeNet (panel b) and the overall classification results (panel c) share similar distributions for the four galaxy types and the same increasing trend from IRR to SPH as the UML sample, which is consistent with the expected correlation between this parameter and galaxy morphology.
The effective radius distributions of the four classes are shown in Figure <ref>. Among the 8258 galaxies clustered by the UML method (panel a), the median effective radii of the four classes (i.e., SPH, ETD, LTD, and IRR) are 2.09, 2.24, 4.07, and 4.47 kpc, respectively. Among the 9034 galaxies classified by the GoogLeNet model (panel b), the median effective radii of SPH, ETD, LTD, and IRR are 2.19, 2.29, 4.17, and 4.27 kpc. In the total sample of 17,292 galaxies (panel c), the median effective radii of SPH, ETD, LTD, and IRR are 2.14, 2.29, 4.17, and 4.37 kpc, respectively. The median effective radius increases from SPH and ETD to LTD and IRR.
In short, the distributions of the Sérsic index and effective radius of different classes of galaxies derived from our method are consistent with the expected correlations between galaxy morphologies and these structure parameters.
§.§.§ Nonparametric Measurements
Using the Morpheus program <cit.>, we calculate the nonparametric morphological parameters, the Gini coefficient (G) and the normalized second-order moment of the brightest 20% of the galaxy's flux (M_20), for all galaxies in our sample. This allows us to investigate the correspondence between the morphological classification results and the physical properties of the various types of galaxies.
The Gini coefficient (G) indicates the flux distribution of galaxies <cit.>. Following <cit.>, it can be calculated as:
G=1/[f̄ n(n-1)]∑_{i=1}^{n}(2i-n-1)f_i,
where n is the number of pixels of the galaxy, f_i is the pixel flux value sorted in ascending order, and f̄ represents the mean over the pixel values. M_20 is the normalized second-order moment of the brightest 20% of the galaxy's pixels, defined as:
M_tot=∑_{i=1}^{n}M_i=∑_{i=1}^{n}f_i[(x_i-x_c)^2+(y_i-y_c)^2],
M_20=log_10(∑_iM_i/M_tot), while ∑_if_i<0.2f_tot,
where f_tot is the total flux of the galaxy, f_i is the flux value of each pixel i, (x_i, y_i) is the position of pixel i, and (x_c, y_c) is the center of the image. <cit.> developed M_20 to trace the spatial distribution of bright nuclei, bars, and off-center clusters. The G–M_20 diagram is often used to test the separation of different classes of galaxies (e.g., <cit.>; <cit.>).
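For reference, a minimal sketch of these two measurements, following the G and M_20 definitions above, is given below; the input stamp and the adopted center are placeholders, and details such as the segmentation-map masking used by Morpheus are omitted.

```python
# Minimal sketch of the non-parametric G and M_20 measurements. `stamp` stands
# in for the flux values inside the galaxy segmentation map; (xc, yc) is the
# adopted center, both placeholders here.
import numpy as np

def gini(flux):
    f = np.sort(np.abs(flux).ravel())            # pixel fluxes in ascending order
    n = f.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * f) / (f.mean() * n * (n - 1))

def m20(image, xc, yc):
    y, x = np.indices(image.shape)
    m_i = image * ((x - xc) ** 2 + (y - yc) ** 2)  # second-order moment of each pixel
    m_tot = m_i.sum()
    # Sum the moments of the brightest pixels holding 20% of the total flux.
    order = np.argsort(image.ravel())[::-1]
    cumulative = np.cumsum(image.ravel()[order])
    brightest = order[cumulative < 0.2 * image.sum()]
    return np.log10(m_i.ravel()[brightest].sum() / m_tot)

stamp = np.random.rand(28, 28)                   # placeholder galaxy stamp
print(gini(stamp), m20(stamp, xc=13.5, yc=13.5))
```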
We plot the distributions of the four types of galaxies in the G–M_20 space. As shown in Figure <ref>, the various types of galaxies are well distinguished in the G–M_20 space. The Gini coefficient gradually increases from IRR to SPH, while the value of M_20 slowly decreases. SPH galaxies tend to have the largest Gini coefficients and the smallest M_20. The overall trend from IRR to SPH in this diagram is in good agreement with the expected variations among these four morphological types, which further suggests the robustness of our two-step method for morphologically classifying galaxies.
§ CONCLUSION
In this paper, we apply a machine-learning classification method combining UML and SML <cit.> to massive galaxies in the COSMOS-DASH field. Our method classifies the sample data completely and shows good classification accuracy.
The method includes two steps. (1) UML clustering: the data are denoised and their features extracted by the CAE; the bagging-based multi-clustering method is then used to divide galaxies with similar features into 100 groups, which are further classified into five categories by visual inspection. After discarding sources with inconsistent voting, 47.76% (8258) of the sources are successfully classified, including 2664 SPHs, 1485 ETDs, 1227 LTDs, 715 IRRs, and 2167 UNCs. (2) SML (i.e., GoogLeNet) classification: the 8258 galaxies successfully classified by the UML method are taken as the training set to train the neural network, which then successfully classifies the remaining 52.24% of the galaxies. Thus, we achieve a complete morphological classification for our sample.
Our result shows good accuracy on the test set. We also apply the t-SNE graph and the G–M_20 diagram to our classification result, and find that the results of combining the UML and SML methods are consistent with the behavior of the galaxy morphological parameters.
This paper is based on observations made with the NASA/ESA HST, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program HSTGO-14114. Support for GO-14114 is gratefully acknowledged. Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute. The specific observations analyzed can be accessed via [https://doi.org/10.17909/T96Q5M]https://doi.org/10.17909/T96Q5M.
This work is supported by the Strategic Priority Research Program of Chinese Academy of Sciences (Grant No. XDB 41000000), the National Natural Science Foundation of China (NSFC, Grant No. 12233008, 11973038, 62106033), the China Manned Space Project (No. CMS-CSST-2021-A07), the Cyrus Chun Ying Tang Foundations and the Frontier Scientific Research Program of Deep Space Exploration Laboratory. C.C.Z. acknowledges the support from Yunnan Youth Basic Research Projects (202001AU070020). Z.S.L. acknowledges the support from the China Postdoctoral Science Foundation (2021M700137). Y.Z.G. acknowledges support from the China Postdoctoral Science Foundation funded project (2020M681281).
CPDG: A Contrastive Pre-Training Method for Dynamic Graph Neural Networks
Yuanchen Bei^1*, Hao Xu^2*, Sheng Zhou^1†, Huixuan Chi^3,
Haishuai Wang^1, Mengdi Zhang^2, Zhao Li^1, Jiajun Bu^1
(* Both authors contributed equally to this paper. † Corresponding author.)
^1 Zhejiang University, Hangzhou, China
^2 Meituan, Beijing, China
^3 Institute of Computing Technology, Chinese Academy of Science, Beijing, China
[email protected], [email protected],
[email protected],
[email protected],
[email protected],
[email protected],
[email protected],
[email protected]
August 1, 2023
========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Dynamic graph data mining has gained popularity in recent years due to the rich information contained in dynamic graphs and their widespread use in the real world. Despite the advances in dynamic graph neural networks (DGNNs), the rich information and diverse downstream tasks have posed significant difficulties for the practical application of DGNNs in industrial scenarios.
To this end, in this paper, we propose to address them by pre-training and present the Contrastive Pre-Training Method for Dynamic Graph Neural Networks (CPDG).
CPDG tackles the challenges of pre-training for DGNNs, including generalization capability and long-short term modeling capability, through a flexible structural-temporal subgraph sampler along with structural-temporal contrastive pre-training schemes. Extensive experiments conducted on both large-scale research and industrial dynamic graph datasets show that CPDG outperforms existing methods in dynamic graph pre-training for various downstream tasks under three transfer settings.
dynamic graph neural networks, pre-training, contrastive learning
§ INTRODUCTION
Graph is ubiquitous in real-world scenarios and mining graph data has attracted increasing attention from both academic and industry communities <cit.>, such as epidemiology <cit.>, bioinformatic <cit.>, and recommender system <cit.>.
Previous graph mining works have mainly focused on static graphs, neglecting the dynamic nature of real-world graph data, especially in industrial scenarios <cit.>.
For example, in Meituan[<https://meituan.com>], users engage in real-time interactions with various items and shops, forming a rapidly changing large-scale dynamic graph.
Such dynamic and evolving patterns are essential for accurate graph modeling <cit.>, which has gained growing interest in practice.
Recently, the dynamic graph neural networks (DGNNs) have achieved success for dynamic graph mining <cit.>.
A typical DGNN workflow trains the model on dynamic graph data and then directly applies it for inference <cit.>.
However, the learned patterns from training may not always be applicable to new inference graphs, especially in dynamic and large-scale industrial systems.
Furthermore, frequent retraining is especially impractical in large-scale industrial systems, where billions of interactions may occur within a short time interval.
On the one hand, DGNNs require simultaneous consideration of temporal and structural information, resulting in higher computational complexity compared to traditional graph neural networks (GNNs) for static graphs.
On the other hand, the unique requirements of different tasks in dynamic graphs, such as node-level or graph-level and temporal or structural, further limit the possibility of retraining for each new task.
To sum up, rich information and various downstream tasks have posed significant difficulties for the application of DGNNs.
In recent years, the pre-training and fine-tuning strategy has been widely studied in natural language processing (NLP) <cit.>, computer vision (CV) <cit.> and gradually expanded to the graph domain <cit.>.
The goal of graph pre-training is to learn the transferable knowledge representation on large-scale graph data, which can be applied to various downstream applications and tasks.
Coincidentally, such a strategy has the potential to address the aforementioned challenges in applying DGNNs to real-world systems through pre-training on historical dynamic graph data, followed by fine-tuning for specific downstream tasks, rather than retraining from scratch.
Although feasible, the dynamic nature has brought extra challenges to pre-training DGNNs compared with static graph neural networks:
First, Generalization Capability. Generalization is fundamental but essential for pre-training strategies <cit.>, particularly in cases where dynamic information brings various downstream tasks that involve both temporal and structural aspects.
However, current DGNNs are mostly trained for specific tasks like dynamic link prediction and lack the ability to generalize to various downstream tasks <cit.>.
Second, Long-short Term Modeling Capability.
In dynamic graphs, both long-term stable patterns and short-term fluctuating patterns are important characteristics and are crucial for downstream tasks, especially for rapidly changing industrial dynamic graphs.
Existing DGNNs have focused on capturing the long-term stable patterns by utilizing memory buffers or temporal smoothing <cit.>.
However, when pre-training on graphs with large time intervals, the short-term fluctuating patterns can easily be overshadowed by long-term stable patterns.
How to simultaneously capture both long-short term patterns during DGNN pre-training is still under-explored.
To tackle the above challenges, in this paper, we propose CPDG, a novel Contrastive Pre-Training Method for Dynamic Graph Neural Networks.
Figure <ref> illustrates the overall workflow of our proposed method.
More specifically, CPDG first introduces a structural-temporal sampler that extracts context subgraphs with flexible temporal-aware sampling probabilities.
Additionally, both the temporal and structural contrastive pre-training schemes are carefully designed to learn transferable long-short term evolution patterns and discriminative structural patterns in dynamic graphs.
To further benefit the downstream tasks, CPDG explicitly extracts the evolved patterns during pre-training and provides them for downstream fine-tuning.
We conduct extensive experiments on both large-scale real-world dynamic graph datasets and the industrial dataset from Meituan. The results demonstrate that CPDG outperforms state-of-the-art graph pre-training methods on all datasets under various transfer settings and downstream tasks.
Further experimental studies of CPDG prove the effectiveness of the designed method from multiple dimensions. The main contributions of this paper are summarized as follows.
* We highlight the difficulties of applying DGNNs in industrial scenarios: the model complexity and the variety of downstream tasks make it impractical to retrain DGNNs.
We propose to address these difficulties with pre-training for DGNNs, which has received limited attention in the literature.
* We propose CPDG, a novel method that tackles the challenges of pre-training and efficiently learns transferable knowledge on large-scale dynamic graphs through a flexible structural-temporal subgraph sampler along with two views of subgraph contrastive pre-training schemes.
* We conduct extensive experiments on both large-scale real-world dynamic graph datasets and the industrial dataset from Meituan with different transfer settings and downstream tasks. Experimental results demonstrate the effectiveness and the generalization ability of CPDG.
In the following sections, we will first review the previous work related to our method in Section <ref>. Second, we will give some key preliminaries of our work in Section <ref>.
Then, a detailed description of the proposed CPDG will be introduced in Section <ref>.
To further verify the effectiveness of CPDG, we conduct various experiments in Section <ref>.
Finally, the conclusion of this paper is posed in Section <ref>.
§ RELATED WORKS
§.§ Dynamic Graph Neural Networks
Graph Neural Networks (GNNs) have shined a light on graph-structured data mining in recent years to guide the representation learning <cit.>.
However, since these networks are mainly designed for static graphs, they lack the ability to simultaneously capture the temporal and structural patterns within dynamic graphs, which are more common in the real world. Therefore, Dynamic Graph Neural Networks (DGNNs) have gained increasing attention recently, and many methods have been proposed and achieved great success in dynamic graph representation learning <cit.>.
Among them, DyRep <cit.> posits dynamic graph representation learning as a latent mediation process and utilizes the temporal point process for dynamic graph modeling. JODIE <cit.> designs a coupled recurrent model to learn dynamic node embeddings with the update, projection, and prediction components. TGAT <cit.> further designs a self-attention temporal-topological neighborhood aggregator for dynamic graph modeling.
Then, TGN <cit.> designs a generic and state-of-the-art framework for dynamic graph learning with the memory module and unifies most of the above methods into this framework.
§.§ Pre-Training for GNNs
With the continuous success of GNNs in graph mining and the trend of increasing graph data scale in recent years, a growing number of works have begun to note the important role of graph pre-training and explore various solutions for pre-training GNNs on unannotated graph data <cit.>.
Most current works focus on static graphs. Among them, a direct way is to design some unsupervised tasks to let GNNs (e.g. GraphSAGE <cit.> and GAT <cit.>) pre-train on unlabeled data, such as link prediction <cit.>.
Another category of methods is graph contrastive learning-based pre-training: DGI <cit.> maximizes the mutual information between node representations and graph summaries. Recently, GCC <cit.> proposes a contrastive learning scheme on subgraph perspectives with an instance discrimination task to learn transferable universal structural patterns.
Besides, GPT-GNN <cit.> designs a generative graph pre-training paradigm with masked node attribute generation and edge generation tasks inspired by the pre-train language method.
In recent years, few works have begun to pay attention to pre-training DGNNs on dynamic graphs. PT-DGNN <cit.> improves GPT-GNN with temporal-aware masking. DDGCL <cit.> maximizes the time-dependent agreement between a node identity's two temporal views. Recently, SelfRGNN <cit.> proposes a Riemannian reweighting self-contrastive approach for self-supervised learning on dynamic graphs.
The model comparison between our designed CPDG and these state-of-the-art methods is illustrated in Table <ref>.
§ PRELIMINARIES
In this section, we present the key definitions related to pre-training for DGNNs. The main symbols are listed in Table <ref>.
§.§ Dynamic Graph
There exist two main types of dynamic graphs, discrete-time dynamic graphs (DTDG) and continuous-time dynamic graphs (CTDG) <cit.>.
DTDG is a sequence of static graph snapshots taken at intervals in time. CTDG is more general and can be represented as temporal lists of events, which reflects more detailed and fine-grained temporal signals than the coarse-grained DTDG.
In this paper, we focus on the CTDG that is widely used in industrial systems and more demand for pre-training, which can be formulated as:
Dynamic Graph <cit.> is defined as 𝒢=(𝒱^T, ℰ^T), where 𝒱^T is a temporal set of vertices, ℰ^T is the temporal set of edges, and T is the time set.
N=|𝒱^T| denotes the number of vertices in 𝒢.
Each edge e_i, j^t∈ℰ^T is denoted as a triple, e_i, j^t = (i, j, t), where node i, j ∈𝒱^T and time t ∈ T.
Each (i, j, t) means node i has an interaction with node j at time t.
We denote the temporal graph at time t as 𝒢^t = (𝒱^t, ℰ^t), with 𝒱^t and ℰ^t the sets of vertices and edges observed before time t.
We denote 𝒩_i^t = {j | e_i, j^t-∈ℰ^t, t^- ≤ t} as the neighborhood set of node i in the time interval [0, t], and 𝒩_i^k,t as the set of k-hop neighborhoods of node i.
§.§ Dynamic Graph Neural Networks
Given a dynamic graph 𝒢=(𝒱^T, ℰ^T), an existing Dynamic Graph Neural Network (DGNN) encoder learns the temporal embeddings for all nodes at time t, denoted as 𝐙^t = (𝐳_1^t, ..., 𝐳_i^t, ...).
Besides, a memory ℳ stores the memory state for each node at time t, denoted as 𝐒^t = (𝐬_1^t, ..., 𝐬_i^t, ...), in which each state memorizes the temporal evolution of the node at the time interval [0, t] in compressed format.
The paradigm of DGNN encoder can be formulated as a function of node i and its k-hop temporal neighbors 𝒩_i^k, t at time t:
𝐳_i^t = Emb(i, t) = ∑_u∈_i^k, tf(𝐬_i^t, 𝐬_u^t-),
where f(·) denotes a learnable function <cit.>, such as dynamic graph attention, 𝐬_i^t denotes the state of node i stored in the memory at time t, and 𝐬_u^t- is the latest state of node u before time t, which is initialized as a zero vector for newly encountered nodes and updated with batch processing by the three following steps: Message Function, Message Aggregator, and Memory Updater.
(i) Message Function. Given an interaction (i,j,t) involving node i, a message is computed to update state 𝐬_i^t of node i in the memory at time t, which can be expressed as:
𝐦_i^t = Msg(𝐬_i^t-, 𝐬_j^t-, ϕ(Δ t)),
where Msg(·) is the message function, such as identity and MLP <cit.>.
Δ t denotes the last updated time interval between 𝐬_i^t- and 𝐬_j^t-, ϕ(·) represents a generic time encoding <cit.>.
(ii) Message Aggregator.
As each node i may have multiple interaction events in time interval [t_1, t] and each event generates a message, a message aggregator is further used to aggregate all the messages 𝐦_i^t_1, ..., 𝐦_i^t_b for t_1, ..., t_b ≤ t:
𝐦_i^t = Agg(𝐦_i^t_1, ..., 𝐦_i^t_b),
where Agg(·) is an aggregation function, such as mean and last time aggregation <cit.>.
(iii) Memory Updater.
The state 𝐬_i^t of node i at time t is updated upon 𝐦_i^t and its previous state 𝐬_i^t-, which can be formulated as:
𝐬_i^t = Mem(𝐬_i^t-, 𝐦_i^t),
where Mem(·) represents a time series function, such as RNN <cit.>, LSTM <cit.> and GRU <cit.>.
Note that most popular DGNN encoders have followed the above paradigm, such as JODIE <cit.>, DyRep <cit.> and TGN <cit.>. They differ in the implementation of f(·), Msg(·), Agg(·) and Mem(·), as compared in Table <ref>.
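A minimal PyTorch sketch of this generic memory-based paradigm is given below; it is an illustrative skeleton with simple choices for Msg(·), Agg(·), Mem(·), and f(·), not the exact TGN, JODIE, or DyRep implementation.

```python
# Minimal sketch of the memory-based DGNN paradigm: message computation,
# last-message aggregation (implicit in per-event processing), GRU memory
# update, and neighborhood aggregation for temporal embeddings.
import torch
import torch.nn as nn

class MemoryDGNN(nn.Module):
    def __init__(self, num_nodes, dim):
        super().__init__()
        self.memory = torch.zeros(num_nodes, dim)        # s_i^t, zero-initialized
        self.msg = nn.Linear(2 * dim + 1, dim)           # Msg(.): MLP message function
        self.mem_updater = nn.GRUCell(dim, dim)          # Mem(.): GRU memory updater
        self.f = nn.Linear(2 * dim, dim)                 # f(.): simple aggregation

    def update(self, src, dst, delta_t):
        # Message for the source node of one interaction (i, j, t).
        m = self.msg(torch.cat([self.memory[src], self.memory[dst],
                                delta_t.view(-1, 1)], dim=-1))
        # detach keeps this illustrative memory buffer outside the autograd graph.
        self.memory[src] = self.mem_updater(m, self.memory[src]).detach()

    def embed(self, node, neighbors):
        # z_i^t = sum over temporal neighbors u of f(s_i, s_u).
        s_i = self.memory[node].expand(len(neighbors), -1)
        return self.f(torch.cat([s_i, self.memory[neighbors]], dim=-1)).sum(0)

model = MemoryDGNN(num_nodes=1000, dim=32)
model.update(src=torch.tensor([1]), dst=torch.tensor([2]), delta_t=torch.tensor([0.5]))
z = model.embed(node=1, neighbors=torch.tensor([2, 5, 7]))
```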
Pre-Training for Dynamic Graph Neural Networks aims at pre-training a generalizable DGNN encoder f_θ on an unlabeled large-scale dynamic graph 𝒢=(𝒱^T, ℰ^T) in a self-supervised way, so that f_θ can provide a beneficial initialization for models in downstream tasks.
It is worth noting that in the dynamic graph downstream tasks, the newly encountered events may contain nodes that occur in the pre-training stage, thus it is also beneficial to extract the temporal pattern of these nodes and propagate them in the fine-tuning stage.
§ METHODOLOGY
In this section, we introduce our proposed CPDG, which consists of the following parts.
Firstly, a flexible structural-temporal subgraph sampler is applied with various sampling strategies to obtain subgraphs with different preferences. More specifically, we provide an η-BFS sampling strategy and an ϵ-DFS sampling strategy for extracting subgraphs with temporal and structural preference, respectively.
Secondly, with the proposed sampler, a structural-temporal contrastive pre-training module is further proposed with both temporal contrast and structural contrast to learn transferable long-short term evolution patterns and discriminative structural patterns simultaneously.
Finally, an optional evolution information enhanced fine-tuning module is further proposed to extract informative long-short term evolution patterns during pre-training and provide them to downstream tasks.
Figure <ref> illustrates the overall network architecture of CPDG.
§.§ Structural-Temporal Subgraph Sampler
To obtain informative and transferable information for model pre-training, we design two sampling strategies with different focuses, rather than the uniformly random sampling schemes used in most existing methods <cit.>.
The structural-temporal subgraph sampler is designed to flexibly accommodate various structural/temporal-aware subgraph sampling functions on the dynamic graph data, extracting subgraphs with structural-temporal preferences and providing the contrastive pre-training with informative samples.
§.§.§ η-BFS Sampling Strategy
Existing works on DGNNs have successfully modeled the long-term stable patterns within dynamic graphs by utilizing memory buffers or smoothing restrictions <cit.>.
However, in addition to the long-term stable patterns, the short-term fluctuating patterns are also of great significance for rapidly changing real-life industrial graphs.
For example, in recommender systems, both the long-term and short-term interests of users extracting from the user-item interaction graph are important for making optimal recommendation decisions <cit.>.
In order to further capture the short-term fluctuating temporal evolution patterns in dynamic graphs during pre-training, we design a η-BFS sampling strategy to extract temporal subgraphs by accessing various temporal-aware sampling functions.
We first define the set of event times that involve node i by time t as T_i^t = {t_u | (i, u, t_u) ∈ℰ^t, t_u < t}. Given a root node i at time t, the η-BFS sampling first observes all the 1-hop neighbors u∈𝒩_i^t of node i. A sampling probability p_u is then assigned to each neighbor u by a temporal-aware function f_t→ p(·) under T_i^t.
The 1-hop η-neighbors are randomly sampled from 𝒩_i^t according to p_u.
By conducting the above sampling process on each sampled 1-hop η-neighbor, we obtain the 2-hop η-neighbors.
After recursively conducting the above sampling for k times, the η-BFS sampling generates the η-BFS k-hop subgraph.
Figure <ref> illustrates a toy example of η-BFS sampling with η=2 and k=2.
The η-BFS sampling strategy will be utilized with two designed temporal-aware probability functions to generate sample pairs for the following temporal contrastive learning in Subsection <ref>.
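A minimal sketch of the η-BFS sampler is given below; the neighbors(node, t) helper returning the temporal neighbors with their event times is an assumed interface, and the default values of η and k follow those used later in the complexity analysis.

```python
# Minimal sketch of eta-BFS sampling: starting from a root node, at each hop
# draw eta temporal neighbors using a pluggable probability function over the
# event times (e.g., the chronological / reverse-chronological choices below).
import numpy as np

def eta_bfs(root, t, neighbors, prob_fn, eta=20, k=2, rng=np.random.default_rng(0)):
    subgraph, frontier = [], [root]
    for _ in range(k):
        next_frontier = []
        for node in frontier:
            cand = neighbors(node, t)                 # 1-hop temporal neighbors before t
            if not cand:
                continue
            nodes, times = zip(*cand)
            p = prob_fn(np.array(times), t)           # temporal-aware sampling probability
            picked = rng.choice(len(nodes), size=min(eta, len(nodes)),
                                replace=False, p=p)
            next_frontier.extend(nodes[i] for i in picked)
        subgraph.extend(next_frontier)
        frontier = next_frontier
    return subgraph
```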
§.§.§ ϵ-DFS Sampling Strategy
In addition to the temporal evolution pattern, the unique and discriminative structural pattern of each node is also indispensable for the pre-training of DGNNs <cit.>.
A naive way to capture the unique structural pattern is conducting a random walk starting from node i to generate the subgraph <cit.>, which is equivalent to the depth-first search (DFS) rooted by node i.
However, such vanilla DFS will miss the temporal pattern that evolved over time in the dynamic graph scenario.
We further extend the vanilla DFS with temporal-aware selection and propose a structural ϵ-DFS sampler for maintaining both structural and temporal patterns.
Following the definition of T_i^t = {t_u | (i, u, t_u) ∈ℰ^t, t_u < t} in Subsection <ref>, we chronologically sort all the 1-hop neighbors from 𝒩_i^t as 𝒩𝒮_i^t and select the ϵ most recently interacted neighbors:
𝒩𝒮_i^t = [a, b, ..., u, ..., v, ..._ϵ-neighbors],
where t_a≤ t_b≤ ... ≤ t_u≤ t_v≤ ... < t.
It is worth noting that such expansion only considers the most recent temporal information, which is consistent with the η-BFS but in a discrete formulation.
Then, for each selected 1-hop neighbor, we add it to the sampled subgraph of node i and repeat the above sampling process k times.
Figure <ref> illustrates a toy example of ϵ-DFS sampler with ϵ=2 and k=2.
The ϵ-DFS sampling strategy will be utilized to generate sample pairs for the following structural contrastive learning in Subsection <ref>.
Since both η-BFS and ϵ-DFS samplers are independent of the learnable parameters or node embeddings during the pre-training stage, the sampling
operators can be preprocessed before pre-training for efficiency.
§.§ Structural-temporal Contrastive Pre-training
With the help of the proposed structural-temporal subgraph sampler with the designed η-BFS sampling strategy and ϵ-DFS sampling strategy,
the structure-temporal contrastive pre-training is designed to capture the informative and transferable long-short term evolution patterns and structural patterns from these samples.
§.§.§ Temporal Contrast (TC)
The DGNN encoders with the memory module have the ability to learn long-term stable patterns <cit.>.
However, the short-term fluctuating evolution patterns over time are also vital for DGNNs' real-world applications, such as user modeling in industrial scenarios.
Therefore, the temporal contrast is proposed to contrast the recent subgraph (positive sample) with the agelong subgraph (negative sample), so as to capture the short-term fluctuating patterns as time changes.
The motivation for the temporal contrast is that relatively recent events are more consistent with the current state than agelong events.
Concerning the consistency of temporal information, the temporal-aware function f_t→ p(·) can be implemented in two opposite ways for generating the positive and negative samples: chronological probability function f_t→ p^tp(·) and reverse chronological probability function f_t→ p^tn(·). The probabilities are defined as follows:
Chronological Probability is proportional to the time interval between the event time t_u and t:
t̂_u = t_u-min T_i^t/t-min T_i^t,
p_u^tp=exp(t̂_u/τ)/∑_v∈𝒩_i^texp(t̂_v/τ),
where τ is the temperature coefficient.
Reverse Chronological Probability is inversely proportional to the time interval between the event time t_u and t:
p_u^tn=exp(t̃_u /τ)/∑_v∈𝒩_i^texp(t̃_v /τ),
where t̃_u = 1 - t̂_u and t̂_u is defined in Eq. (<ref>).
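A minimal sketch of these two probability functions, usable as the prob_fn of the η-BFS sampler sketched above, is given below; the small constant guarding the normalization is an implementation assumption.

```python
# Minimal sketch of the chronological and reverse chronological sampling
# probabilities defined above (tau is the temperature coefficient).
import numpy as np

def chronological_prob(times, t, tau=1.0):
    # Recent events get higher probability (positive temporal subgraph).
    t_hat = (times - times.min()) / max(t - times.min(), 1e-12)
    w = np.exp(t_hat / tau)
    return w / w.sum()

def reverse_chronological_prob(times, t, tau=1.0):
    # Old events get higher probability (negative temporal subgraph).
    t_hat = (times - times.min()) / max(t - times.min(), 1e-12)
    w = np.exp((1.0 - t_hat) / tau)
    return w / w.sum()
```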
Given an interaction event (i,j,t), CPDG first obtains the embedding z_i^t of node i at time t with the DGNN encoder.
Then, CPDG generates the temporal positive and negative subgraphs by the η-BFS sampling strategy with the chronological and reverse chronological probabilities, denoted as 𝒯𝒫_i^t and 𝒯𝒩_i^t, respectively.
For each node u in 𝒯𝒫_i^t and node v in 𝒯𝒩_i^t, we retrieve the embeddings from the memory ℳ and then pool them into a subgraph-wise embedding with a readout function:
𝐡_tp^t = Readout(𝐬_u^t, u ∈𝒯𝒫_i^t),
𝐡_tn^t = Readout(𝐬_v^t, v ∈𝒯𝒩_i^t),
where Readout(·) is a kind of graph pooling operation <cit.>, such as min, max, and weighted pooling. In this paper, we use mean pooling for simplicity.
The temporal contrastive learning is implemented with a triplet margin loss <cit.> among center node embedding z^t_i, positive temporal subgraph embedding 𝐡_tp^t and negative temporal subgraph embedding 𝐡_tn^t:
ℒ_η = 1/|𝒱^t|∑_i∈𝒱^t𝔼{max{d(𝐳_i^t,𝐡_tp^t)-d(𝐳_i^t,𝐡_tn^t)+α, 0}},
where α is the margin constant, d(·) is the distance metric and we adopt Euclidean distance in this paper.
Note that the long-term stable patterns are captured by the memory module of the DGNN encoder. The temporal contrast and states from the memory module in DGNN collaboratively capture the long-short term evolution patterns.
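A minimal PyTorch sketch of the temporal contrast defined above is given below; the tensors are placeholders and the margin value is an illustrative assumption.

```python
# Minimal sketch of the temporal contrast: mean-pool the memory states of the
# positive/negative temporal subgraphs and apply a triplet margin loss against
# the center-node embedding.
import torch
import torch.nn as nn

triplet = nn.TripletMarginLoss(margin=1.0, p=2)   # alpha = 1.0, Euclidean distance

z_i = torch.randn(256, 32)                        # center-node embeddings z_i^t
pos_states = torch.randn(256, 40, 32)             # memory states of TP_i^t nodes
neg_states = torch.randn(256, 40, 32)             # memory states of TN_i^t nodes

h_tp = pos_states.mean(dim=1)                     # Readout(.) = mean pooling
h_tn = neg_states.mean(dim=1)
loss_eta = triplet(z_i, h_tp, h_tn)               # temporal contrastive loss
```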
§.§.§ Structural Contrast (SC)
Besides the informative temporal evolution patterns, discriminative structural patterns are also important for structure modeling on dynamic graphs.
Following the instance discrimination paradigm, we treat the subgraph generated by ϵ-DFS sampler of node i as the positive sample, and the subgraph generated by ϵ-DFS sampler of another random node i'≠ i as the negative sample for structural contrastive learning.
Analogously, after obtaining the embedding z_i^t of node i at time t, CPDG generates the structural positive and negative subgraphs by the ϵ-DFS sampler, denoted as 𝒮𝒫_i^t and 𝒮𝒩_i'^t.
For each node u in 𝒮𝒫_i^t and node v in 𝒮𝒩_i'^t, we retrieve the embeddings from the memory ℳ and then pool them into a subgraph-wise embedding with a readout function:
𝐡_sp^t = Readout(𝐬_u^t, u ∈𝒮𝒫_i^t),
𝐡_sn^t = Readout(𝐬_v^t, v ∈𝒮𝒩_i'^t).
The structural contrastive learning is implemented with a triplet margin loss among center node embedding z^t_i, positive structural subgraph embedding 𝐡_sp^t and negative structural subgraph embedding 𝐡_sn^t:
ℒ_ϵ = 1/|𝒱^t|∑_i∈𝒱^t𝔼{max{d(𝐳_i^t,𝐡_sp^t)-d(𝐳_i^t,𝐡_sn^t)+α, 0}},
where d(·) is the same as Eq.(<ref>) in temporal contrast.
§.§.§ Overall Pre-Training Objective Function
Following the general setting of unsupervised dynamic graph representation learning <cit.>, CPDG finally adds an auxiliary temporal link prediction pretext task for pre-training.
Given a pair of temporal edge (i,j,t), the affinity score is calculated by:
ŷ_i,j^t = σ(MLP(𝐳_i^t ∥ 𝐳_j^t)),
where σ(·) denotes the sigmoid function.
The temporal link prediction task is optimized by a binary cross-entropy loss <cit.> as follows:
ℒ_tlp=∑_(i,j,j',t)∈𝒬 -[ y_i,j^t·log(ŷ_i,j^t)+(1-y_i,j'^t)·log(1-ŷ_i,j'^t) ],
where 𝒬={(i,j,j',t) | (i,j,t)∈ℰ^t, (i,j',t)∉ℰ^t}, and y_i,j^t∈{0,1} denotes whether there is an edge between nodes i and j at time t.
Overall, the objective function of CPDG can be formulated as a linear combination of ℒ_η, ℒ_ϵ, and ℒ_tlp:
ℒ_pre = (1-β)·ℒ_η+β·ℒ_ϵ+ℒ_tlp,
where β∈ (0, 1) is a hyperparameter that balances the impact of the temporal contrast and the structural contrast. CPDG with any DGNN encoder is pre-trained using gradient descent on the objective function ℒ_pre.
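A minimal PyTorch sketch of the auxiliary link prediction head and the combined objective is given below; the embedding tensors and the contrastive-loss stand-ins are placeholders, and the numerically stable BCEWithLogitsLoss replaces the explicit sigmoid of the affinity score above.

```python
# Minimal sketch of the temporal link prediction pretext task and the overall
# pre-training objective (placeholders everywhere; j' denotes negative samples).
import torch
import torch.nn as nn

mlp = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
bce = nn.BCEWithLogitsLoss()                      # sigmoid + binary cross-entropy

z_i, z_j, z_neg = torch.randn(256, 32), torch.randn(256, 32), torch.randn(256, 32)
pos_logit = mlp(torch.cat([z_i, z_j], dim=-1)).squeeze(-1)     # observed edges
neg_logit = mlp(torch.cat([z_i, z_neg], dim=-1)).squeeze(-1)   # sampled non-edges
loss_tlp = bce(pos_logit, torch.ones(256)) + bce(neg_logit, torch.zeros(256))

beta = 0.5                                        # temporal vs. structural balance
loss_eta, loss_eps = torch.tensor(0.7), torch.tensor(0.6)      # stand-in contrastive losses
loss_pre = (1 - beta) * loss_eta + beta * loss_eps + loss_tlp
loss_pre.backward()
```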
§.§ Evolution Information Enhanced Fine-Tuning
In the pre-training, the memory ℳ stores the long-short term evolution information of each node through the designed pre-training objections, which is beneficial for the downstream dynamic graph tasks.
For example, if node i occurs in both the pre-training and fine-tuning stages, the evolution pattern of node i can be propagated to the neighborhoods in the downstream task for further enhancement.
With the above findings, we further design this evolution information enhanced fine-tuning (short as EIE) module as an optional auxiliary scheme for downstream fine-tuning.
During the pre-training of CPDG, we uniformly store l checkpoints of the memory ℳ and fuse the sequence of checkpoints into the evolution information:
𝐄𝐈=f_EI([𝐒^1, ..., 𝐒^l]),
where 𝐒^l denotes the l-th memory checkpoint from pre-training, and f_EI(·) can be any kind of sequence operation function, such as mean pooling, an attention mechanism, or a GRU <cit.>.
In the fine-tuning stage, we first transform the evolution information 𝐄𝐈 with a two-layer MLP to let it better fit the downstream data; then we combine it with the downstream temporal embeddings 𝐙^down to form the enhanced embeddings:
𝐙^EIE = [𝐙^down ∥ MLP(𝐄𝐈)],
where [ · ∥ · ] is the matrix concatenation operator. The fine-tuning in downstream tasks is then conducted on the enhanced embeddings 𝐙^EIE.
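A minimal PyTorch sketch of the EIE fine-tuning path is given below; mean pooling is chosen as f_EI(·) for illustration, and all tensors are placeholders.

```python
# Minimal sketch of evolution-information-enhanced (EIE) fine-tuning: fuse the
# stored memory checkpoints, transform with a two-layer MLP, and concatenate
# with the downstream temporal embeddings.
import torch
import torch.nn as nn

num_nodes, dim, l = 1000, 32, 10
checkpoints = torch.randn(l, num_nodes, dim)          # S^1, ..., S^l from pre-training

ei = checkpoints.mean(dim=0)                          # EI = f_EI([S^1, ..., S^l]) (mean pooling)
transform = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

z_down = torch.randn(num_nodes, dim)                  # downstream temporal embeddings
z_eie = torch.cat([z_down, transform(ei)], dim=-1)    # enhanced embeddings for fine-tuning
```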
§.§ Model Analysis
§.§.§ Pre-training stage
We define the time complexity of the DGNN as O(D), where O(D) varies across different DGNN backbones. To pre-train the model scalably, the Monte Carlo trick <cit.> widely utilized for DGNNs enables model training under batch processing, which is also suitable for CPDG.
For CPDG, the additional time consumption depends on the sampling and the contrastive pre-training.
For sampling temporal contrastive subgraph pair of N nodes, the time complexity is O(2η^k N), where η and k denote sampling width and depth respectively.
For sampling structural contrastive subgraph pair of N nodes, the time complexity is O(2ϵ^k N), where ϵ and k also denote sampling width and depth.
We set η=20, ϵ=20, and k=2 so that η^k ≪ N and ϵ^k ≪ N. Then, the contrastive learning equipped with a non-parameter mean-pooling operation has the time complexity of O(4N).
Therefore, the overall time complexity of pre-training is O(D+2(η^k+ϵ^k+2)N).
The pseudocode of the overall pre-training procedure of CPDG is given in Algorithm <ref>.
§.§.§ Fine-tuning stage
If we fine-tune without the EIE strategy (i.e., full fine-tuning), the complexity is the same as for the DGNN backbone alone, O(D). If we fine-tune with EIE, the additional complexity comes from Eq. (<ref>) and Eq. (<ref>); the complexity of Eq. (<ref>) depends on the selected operation function, which we discuss in the following table, and the complexity of Eq. (<ref>) is O(N).
Therefore, the complexity can be summarized as in Table <ref>, where d is the pre-trained checkpoint length and is set to 10 by default. The performance comparison with different fine-tuning schemes is conducted in Subsection <ref>.
The procedure of the evolution information enhanced fine-tuning in CPDG is illustrated in Algorithm <ref>.
§ EXPERIMENTS
In this section, we conduct extensive experiments on five public widely-used dynamic graph datasets and an industrial dynamic graph dataset from Meituan to demonstrate the effectiveness of the proposed method from various perspectives.
To evaluate the generalization ability of CPDG, we consider three transfer settings and two downstream tasks: 1) the transfer settings include time transfer, field transfer, and time+field transfer; 2) the downstream tasks include both dynamic link prediction and dynamic node classification.
§.§ Experimental Datasets
The experiments are conducted on five widely-used research datasets and an industrial dataset from Meituan on various downstream tasks.
The details of these datasets are introduced as follows.
Dynamic Link Prediction Datasets:
Amazon Review[<https://jmcauley.ucsd.edu/data/amazon>] <cit.> is a large-scale user-product dynamic graph dataset containing 20.9 million users, 9.3 million products, and 82.8 million reviews (edges) into 29 product fields. The time span of these reviews is in the range of May 1996 - Oct 2018.
We select the Beauty, Luxury, and Arts, Crafts, and Sewing fields for the experiment.
The user-product interaction prediction is set as the downstream dynamic link prediction task for Amazon dataset.
Gowalla[<http://www.yongliu.org/datasets.html>] <cit.> is a famous social network for users check-in at various locations, containing about 36 million check-ins made by 0.32 million users over 2.8 million locations. These check-in records are in the time span of Jan 2009 - June 2011. The locations are grouped into 7 main fields. We select the Entertainment, Outdoors, and Food fields for the experiment.
The check-in prediction is set as the downstream dynamic link prediction task for Gowalla dataset.
The data split of Amazon and Gowalla varies from different transfer settings considering both time and field, which is detailed described in Subsection <ref>.
Meituan is an industrial user-POI interaction (i.e., click and purchase) dataset collected from the Meituan food delivery service in Beijing, China, from Feb. 14th to Mar. 28th, 2022 (42 days), which contains about 0.75 million interaction records (sorted by time).
Each interaction record contains a user ID, a POI ID, a timestamp, and other contexts.
We utilize this industrial dataset for evaluating the dynamic link prediction task with a ratio of 6:4 sorted by time for pre-training and downstream.
Dynamic Node Classification Datasets:
Wikipedia[<http://snap.stanford.edu/jodie/wikipedia.csv>] <cit.> is a dynamic network between users and edited pages for node classification. It contains about 9,300 nodes and around 160,000 temporal edges. Dynamic labels indicate banned users.
MOOC[<http://snap.stanford.edu/jodie/mooc.csv>] <cit.> is a dynamic network of students and online courses for node classification, which contains about 7,200 nodes and 411,749 interactions between them. Dynamic labels indicate drop-out students.
Reddit[<http://snap.stanford.edu/jodie/reddit.csv>] <cit.> is a dynamic graph between active users and their posts under subreddits for node classification, with about 11,000 nodes, about 700,000 temporal edges, and dynamic labels indicating banned users.
We split each dynamic node classification dataset with the ratio of 6:2:1:1 sorted by time for pre-training, downstream training, validation, and testing, respectively.
§.§ Baselines
We compare the proposed method with ten representative methods, including state-of-the-art static graph learning methods and dynamic graph learning methods, which can be organized into four main categories:
Task-supervised Static Graph Learning Models:
* GraphSAGE <cit.> learns node representation by sampling and aggregating features from the node’s local neighborhood without temporal information. It performs link prediction as its pre-training task.
* GAT <cit.> aggregates the neighborhood message with the multi-head attention mechanism. The pre-training task of GAT is the same as GraphSAGE.
* GIN <cit.> is a simple GNN architecture that generalizes the Weisfeiler-Lehman test. It also performs the link prediction task for the model pre-training.
Self-supervised Static Graph Learning Models:
* DGI <cit.> maximizes the mutual information between local patch representation and corresponding global graph summary in a self-supervised manner.
* GPT-GNN <cit.> is a state-of-the-art generative pre-training framework with self-supervised node attribute generation and edge generation tasks on the large-scale graph pre-training.
Task-supervised Dynamic Graph Learning Models:
* DyRep <cit.> is a temporal point process (TPP-based) dynamic graph model, which posits representation learning as a latent mediation process. Following its task setting for dynamic graphs, we adopt temporal link prediction as its pre-training task.
* JODIE <cit.> is a state-of-the-art dynamic graph model, which employs two recurrent neural networks and a novel projection operator to estimate the embedding of a node at any time in the future. The pre-training task of JODIE is the same as DyRep.
* TGN <cit.> is a generic and state-of-the-art dynamic graph learning framework with memory modules and dynamic graph-based operators. The pre-training task of TGN is the same as DyRep.
Self-supervised Dynamic Graph Learning Models:
* DDGCL <cit.> is a self-supervised dynamic graph learning method via contrasting two nearby temporal views of the same node identity, with a time-dependent similarity critic and GAN-type contrastive loss.
* SelfRGNN <cit.> is a state-of-the-art self-supervised Riemannian dynamic graph neural network with the Riemannian reweighting self-contrastive approach for dynamic graph learning.
For all the baselines, we first pre-train them on the same pre-training data as CPDG and utilize them for initialization with the full fine-tuning strategy <cit.> in downstream tasks.
§.§ Experimental Setup
To evaluate the effectiveness of the method, we follow the experimental setting of popular existing pre-training work <cit.> with three transfer settings and two downstream tasks in fine-tuning on these datasets.
Transfer Settings:
* Time transfer. We split the graph by time spans for pre-training and fine-tuning on the same field.
For the Amazon Review dataset, we split the data before 2017 for pre-training and data since 2017 for fine-tuning.
For the Gowalla dataset, we split the data before 2011 for pre-training and data since 2011 for fine-tuning.
For Meituan, Wikipedia, MOOC, and Reddit datasets, due to the lack of field categories, we only conduct the time transfer and adopt the same chronological data split with the first 60% for pre-training and the rest for fine-tuning.
* Field transfer. We pre-train the model from one graph field and transfer it to other graph fields. For the Amazon Review dataset, we use data from Arts, Crafts, and Sewing field for pre-training and data from Beauty and Luxury fields for fine-tuning. For the Gowalla dataset, we use data from the Food field for pre-training and data from Entertainment and Outdoors fields for fine-tuning.
* Time+Field transfer.
In this setting, we consider graph transfer from different time spans and fields simultaneously.
That is, we use the data from one field before a particular time for model pre-training and the data from other fields after that time for fine-tuning. The split times and fields are the same as in the above two settings: for the Amazon dataset, we first pre-train the model on the Arts, Crafts, and Sewing field using data from May 1996 to Dec 2016, and then fine-tune the DGNN model on the Beauty or Luxury fields using data from Jan 2017 to Oct 2018. For the Gowalla dataset, we pre-train the model on the Food field using data from Jan 2009 to Dec 2010, and then fine-tune the DGNN model on the Entertainment or Outdoors fields using data from Jan 2011 to June 2011.
Downstream Task Settings:
* Dynamic link prediction. For Amazon Review, Gowalla, and Meituan datasets, we aim to predict the future user-item interactions at a specific time as the downstream task.
Then, we adopt the widely used AUC and AP metrics as the evaluation metrics.
Note that we conduct the downstream task in Beauty and Luxury fields for Amazon Review dataset, and in Entertainment and Outdoors fields for the Gowalla dataset.
* Dynamic node classification. For Wikipedia, MOOC, and Reddit datasets, we aim to predict the node state with a specific class label at a specific time as the downstream task.
Then, we adopt the widely used AUC metric as the evaluation metric.
We introduce the statistics and settings of the experimental datasets in Table <ref> and Table <ref>.
Environment Configuration and Hyper-parameter Setting:
The model hyper-parameters are optimized via grid search on all the experimental datasets, and the best models are selected by early stopping based on the AUC score on the validation set. Hyper-parameter ranges for the grid search are the following: learning rate in {5 × 10^-4, 1 × 10^-4, 5 × 10^-3, 1 × 10^-3, 5 × 10^-2, 1 × 10^-2}; mini-batch size in {256, 512, 1024, 2048, 4096, 8192}. All weight matrices are initialized by Xavier initialization for all models. For all dynamic graph methods (DyRep, JODIE, TGN, DDGCL, SelfRGNN, and CPDG), the memory states are initialized with zeros.
Note that we run all the experiments five times with different random seeds and report the average results with standard deviation to prevent extreme cases.
All experiments are conducted on the Centos7.0 system equipped with NVIDIA A100 (80G) GPUs.
§.§ Main Results
In this section, we report the comparison results between and the baselines for various downstream tasks, including dynamic link prediction and node classification.
Performance on Dynamic Link Prediction Task:
The results and comparisons on the Amazon Review and Gowalla datasets under the three transfer settings are shown in Table <ref>, and the results and comparisons on Meituan under the time transfer setting are shown in Table <ref>. From the results, we have the following observations.
* Our proposed CPDG achieves the best performance under the different transfer settings. Specifically, the average improvements in the AP metric over all the fields of the Amazon Review and Gowalla datasets are 1.59%, 1.33%, and 1.60% w.r.t. the time transfer, field transfer, and time+field transfer settings, respectively.
Furthermore, on the Meituan industrial dataset, performance gains of CPDG over the DGNN encoders (DyRep, JODIE, and TGN) can also be observed.
These observations demonstrate the effectiveness of CPDG, which stems from the flexible structural-temporal samplers, the temporal-view and structural-view subgraph contrastive pre-training schemes, and the optional evolution information enhanced fine-tuning scheme.
* The dynamic graph methods generally perform significantly better than the static graph methods. Among the compared baselines, the dynamic graph methods (DyRep, JODIE, TGN, DDGCL, and SelfRGNN) generally perform significantly better than the static graph methods (GraphSAGE, GIN, GAT, DGI, and GPT-GNN) on Amazon Review and Gowalla datasets.
This further verifies the necessity of capturing the temporal evolution patterns in the dynamic graphs.
Another interesting finding is that the static generative graph pre-training framework, GPT-GNN, performs worse than some classic graph neural networks, such as GraphSAGE, which has also been observed and discussed in <cit.>.
* The dynamic task-supervised methods generally perform better than dynamic self-supervised methods.
Among the compared baselines, the dynamic task-supervised methods (DyRep, JODIE, and TGN) generally perform better than the dynamic self-supervised methods (DDGCL and SelfRGNN) on experimental datasets.
This demonstrates the importance of memory in the DGNN encoder to capture the long-term evolution of each node in dynamic graphs and the insufficiency of current self-supervised dynamic graph models for pre-training.
Performance on Dynamic Node Classification Task:
In addition to the dynamic link prediction downstream task, we also conduct the dynamic node classification task on the Wikipedia, MOOC, and Reddit datasets, compared with the state-of-the-art dynamic graph methods, to further verify the generalization ability of our method in various downstream tasks.
Table <ref> shows the node classification results. From these results, we have the following observations:
* Our method also achieves the best performance on the dynamic node classification task in most cases, which shows that it generalizes well to various downstream tasks.
Note that its performance is lower than TGN on the MOOC dataset.
The reason is that the structural and temporal patterns are not as pronounced in the MOOC dataset as in the other datasets.
* The dynamic task-supervised methods perform better than the dynamic self-supervised methods.
These results are consistent with the observation in dynamic link prediction, which further verifies the effectiveness of the memory module within DGNNs in learning long-term evolution.
§.§ Model Generalization
To further investigate the generalization of the proposed method, we evaluate its performance with different DGNN encoders.
We try three representative encoders, namely DyRep, JODIE, and TGN.
More specifically, each encoder is pre-trained with our method on Amazon-Beauty and Amazon-Luxury under the three transfer settings.
Table <ref> compares vanilla task-supervised pre-training with our pre-training in terms of the AUC metric.
From the results, we can observe that pre-training the different DGNN encoders with our method is consistently better than the vanilla pre-training strategy.
This demonstrates both the effectiveness and generalization of the proposed method in different DGNN backbones.
To further investigate the generalization of our method on inductive downstream tasks, we also compare its performance with the JODIE encoder under the inductive link prediction task.
The results in Table <ref> show the performance on the three transfer settings for the inductive task. We observe that our method significantly improves AUC under time transfer by at least 10% on both the Entertainment and Outdoors datasets. Moreover, under time+field transfer, the most challenging setting, it still achieves 5.17% and 3.19% AUC gains on the Entertainment and Outdoors datasets, respectively.
Altogether, the results demonstrate that our method also generalizes well to the inductive downstream task.
§.§ Ablation Study
The proposed method consists of three carefully designed modules, including temporal contrast (TC), structural contrast (SC), and evolution information enhanced (EIE) fine-tuning.
To shed light on the effectiveness of these modules, we conduct ablation studies on variants of our method.
As shown in Figure <ref>, we report the results without temporal contrast (w/o TC), without structural contrast (w/o SC), and without the evolution information enhanced fine-tuning (w/o EIE) on the Amazon Review dataset under the time+field transfer setting.
In general, we have the following findings:
(i) The performance of the variants without TC, SC, or EIE is significantly worse than that of the full method, which indicates the effectiveness of all three modules of the proposed method.
(ii) The performance of w/o TC drops more than w/o SC on the Beauty field, indicating that temporal contrast provides more meaningful information for pre-training in this case, while the observation is the opposite on the Luxury field. This suggests that different dynamic graphs place different emphasis on temporal and structural evolution information, so it is meaningful for our method to adopt both temporal and structural contrast collaboratively.
§.§ Discussion on EIE
In this section, we investigate the impact of the optional evolution information enhanced (EIE) fine-tuning strategy.
Different choices of f_EI(·) give rise to different EIE variants, namely EIE-mean, EIE-attn, and EIE-GRU. To obtain the fused evolution information 𝐄𝐈, EIE-mean, EIE-attn, and EIE-GRU implement f_EI(·) with mean pooling, an attention mechanism <cit.>, and a GRU, respectively.
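One possible PyTorch realization of the three f_EI(·) variants is sketched below; the tensor shapes, module names, and the learned query vector for EIE-attn are assumptions for illustration, not the authors' implementation.

```python
# Sketch of the EIE fusion variants: fuse L pre-trained memory checkpoints of
# shape (L, N, d) into a single evolution-information tensor EI of shape (N, d).
import torch
import torch.nn as nn

class EIEFusion(nn.Module):
    def __init__(self, dim, variant="gru"):
        super().__init__()
        self.variant = variant
        if variant == "attn":
            self.query = nn.Parameter(torch.randn(dim))   # learned attention query
        elif variant == "gru":
            self.gru = nn.GRU(dim, dim, batch_first=True)

    def forward(self, checkpoints):          # checkpoints: (L, N, d)
        if self.variant == "mean":           # EIE-mean: average over checkpoints
            return checkpoints.mean(dim=0)
        if self.variant == "attn":           # EIE-attn: attention over checkpoints
            scores = torch.einsum("lnd,d->ln", checkpoints, self.query)
            w = torch.softmax(scores, dim=0).unsqueeze(-1)   # (L, N, 1)
            return (w * checkpoints).sum(dim=0)
        # EIE-GRU: treat the checkpoints as a per-node sequence, keep the last state
        seq = checkpoints.permute(1, 0, 2)   # (N, L, d)
        _, h = self.gru(seq)                 # h: (1, N, d)
        return h.squeeze(0)
```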
The results on two fields of the Amazon dataset under the time+field transfer task are shown in Table <ref>, from which we can observe that:
(i) All the EIE variants perform better than the full fine-tuning strategy on the downstream tasks; even the simple mean pooling operation yields a clear improvement over the full fine-tuning strategy. This indicates that the information stored in the pre-trained memory 𝐒^l contains rich evolution information, which demonstrates the effectiveness of EIE for the downstream task.
(ii) EIE-GRU performs best against other variants. This observation demonstrates that the GRU operator has a better ability to capture the evolution pattern of pre-trained memory checkpoints.
§.§ Parameter Study
We further investigate the impact of the hyper-parameter β in Eq.(<ref>) and report the results in Figure <ref>. We can observe that the performance on the Beauty dataset generally drops as β increases, while that on the Luxury dataset is relatively stable. One possible reason is that the temporal information is more essential on the Beauty dataset, while the temporal information is almost as essential as the structural information on the Luxury dataset.
§ CONCLUSION
In this paper, we address the difficulties of applying DGNNs in industrial scenarios.
We propose a novel method to capture the long-short term temporal evolution patterns and discriminative structural patterns through flexible structural-temporal subgraph samplers along with structural-temporal contrastive pre-training schemes.
Furthermore, we introduce an optional evolution information enhanced fine-tuning strategy to take advantage of the evolved patterns during pre-training. Extensive experiments on widely used dynamic graph datasets and an industrial dataset in Meituan demonstrate the effectiveness of our proposed method.
In future work, we will explore the proposed method in more practical scenarios, such as industrial recommender systems.
IEEEtran
|
http://arxiv.org/abs/2307.01545v1
|
20230704075823
|
EffSeg: Efficient Fine-Grained Instance Segmentation using Structure-Preserving Sparsity
|
[
"Cédric Picron",
"Tinne Tuytelaars"
] |
cs.CV
|
[
"cs.CV"
] |
EffSeg: Efficient Fine-Grained Instance Segmentation using Structure-Preserving Sparsity
Cédric Picron, Tinne Tuytelaars
========================================================================================
Many two-stage instance segmentation heads predict a coarse 28×28 mask per instance, which is insufficient to capture the fine-grained details of many objects. To address this issue, PointRend and RefineMask predict a 112×112 segmentation mask resulting in higher quality segmentations. Both methods however have limitations by either not having access to neighboring features (PointRend) or by performing computation at all spatial locations instead of sparsely (RefineMask). In this work, we propose EffSeg performing fine-grained instance segmentation in an efficient way by using our Structure-Preserving Sparsity (SPS) method based on separately storing the active features, the passive features and a dense 2D index map containing the feature indices. The goal of the index map is to preserve the 2D spatial configuration or structure between the features such that any 2D operation can still be performed. EffSeg achieves similar performance on COCO compared to RefineMask, while reducing the number of FLOPs by 71% and increasing the FPS by 29%. Code will be released.
§ INTRODUCTION
Instance segmentation is a fundamental computer vision task assigning a semantic category (or background) to each image pixel, while differentiating between instances of the same category. Many high-performing instance segmentation methods <cit.> follow the two-stage paradigm. This paradigm consists in first predicting an axis-aligned bounding box called Region of Interest (RoI) for each detected instance, and then segmenting each pixel within the RoI as belonging to the detected instance or not.
Most two-stage instance segmentation heads <cit.> predict a 28×28 mask (within the RoI) per instance, which is too coarse to capture the fine-grained details of many objects. PointRend <cit.> and RefineMask <cit.> both address this issue by predicting a 112×112 mask instead, resulting in higher quality segmentations. In both methods, these 112×112 masks are obtained by using a multi-stage refinement procedure, first predicting a coarse mask and then iteratively upsampling this mask by a factor 2 while overwriting the predictions in uncertain (PointRend) or boundary (RefineMask) locations. Both methods however have limitations.
PointRend <cit.> on the one hand overwrites predictions by sampling coarse-fine feature pairs from the most uncertain locations and by processing these pairs individually using an MLP. Despite only performing computation at the desired locations and hence being efficient, PointRend is unable to access information from neighboring features during the refinement process, resulting in sub-optimal segmentation performance.
RefineMask <cit.> on the other hand processes dense feature maps and obtains new predictions in all locations, though only uses these predictions to overwrite in the boundary locations of the current prediction mask. Operating on dense feature maps enables RefineMask to use 2D convolutions allowing information to be exchanged between neighboring features, which results in improved segmentation performance compared to PointRend. However, this also means that all computation is performed on all spatial locations within the RoI at all times, which is computationally inefficient.
In this work, we propose EffSeg which combines the strengths and eliminates the weaknesses of PointRend and RefineMask by only performing computation at the desired locations while still being able to access features of neighboring locations (<ref>). To achieve this, EffSeg uses a similar multi-stage refinement procedure in combination with our Structure-Preserving Sparsity (SPS) method. SPS separately stores the active features (the features in spatial locations requiring new predictions), the passive features (the non-active features) and a dense 2D index map. More specifically, the active and passive features are stored in N_A × F and N_P × F matrices respectively, with N_A the number of active features, N_P the number of passive features and F the feature size. The index map stores the feature indices (as opposed to the features themselves) in a 2D map, preserving information about the 2D spatial structure between the different features in an efficient way. This allows SPS to have access to neighboring features such that any 2D operation can still be performed. See <ref> for more information about our SPS method.
We evaluate EffSeg and its baselines on the COCO <cit.> instance segmentation benchmark. Experiments show that EffSeg achieves similar segmentation performance compared to RefineMask (the best-performing baseline), while reducing the number of FLOPs by 71% and increasing the FPS by 29%.
§ RELATED WORK
Instance segmentation. Instance segmentation methods can be divided into two-stage (or box-based) methods and one-stage (or box-free) methods. Two-stage approaches <cit.> first predict an axis-aligned bounding box called Region of Interest (RoI) for each detected instance and subsequently categorize each pixel as belonging to the detected instance or not. One-stage approaches <cit.> on the other hand directly predict instance masks over the whole image without using intermediate bounding boxes.
One-stage approaches have the advantage that they are similar to semantic segmentation methods by predicting masks over the whole image instead of inside the RoI, allowing for a natural extension to the more general panoptic segmentation task <cit.>. Two-stage approaches have the advantage that by only segmenting inside the RoI, there is no wasted computation outside the bounding box. As EffSeg aims to only perform computation there where it is needed, the two-stage approach is chosen.
Fine-grained instance segmentation. Many two-stage instance segmentation methods such as Mask R-CNN <cit.> predict rather coarse segmentation masks. There are two main reasons why the predicted masks are coarse. First, segmentation masks of large objects are computed using features pooled from low resolution feature maps. A first improvement found in many methods <cit.> consists in additionally using features from the high-resolution feature maps of the feature pyramid. Second, Mask R-CNN only predicts a 28×28 segmentation mask inside each RoI, which is too coarse to capture the fine details of many objects. Methods such as PointRend <cit.>, RefineMask <cit.> and Mask Transfiner <cit.> therefore instead predict a 112×112 mask within each RoI, allowing for fine-grained segmentation predictions. PointRend achieves this by using an MLP, RefineMask by iteratively using their SFM module consisting of parallel convolutions with different dilations, and Mask Transfiner by using a transformer. All of these methods have limitations however. PointRend has no access to neighboring features, RefineMask performs computation on all locations within the RoI at all times, and Mask Transfiner performs attention over all active features instead of over neighboring features only and it does not have access to passive features. EffSeg instead performs local computation at sparse locations while keeping access to both active and passive features.
Another family of methods obtaining fine-grained segmentation masks, are contour-based methods <cit.>. Contour-based methods first fit a polygon around an initial mask prediction, and then iteratively update the polygon vertices to improve the segmentation mask. Contour-based methods can hence be seen as a post-processing method to improve the quality of the initial mask. Contour-based methods obtain good improvements in mask quality when the initial mask is rather coarse <cit.> (a mask predicted by Mask R-CNN <cit.>), but improvements are limited when the initial mask is already of high-quality <cit.> (a mask predicted by RefineMask <cit.>).
Spatial-wise dynamic networks. In order to be efficient, EffSeg only performs processing at those spatial locations that are needed to obtain a fine-grained segmentation mask, avoiding unnecessary computation in the bulk of the object. EffSeg could hence be considered as a spatial-wise dynamic network. Spatial-wise dynamic networks have been used in many other computer vision tasks such as image classification <cit.>, object detection <cit.> and video recognition <cit.>. These methods differ from EffSeg however, as they apply an operation at sparse locations on a dense tensor (see SparseOnDense method from <ref>), whereas EffSeg uses the Structure-Preserving Sparsity (SPS) method separately storing the active features, the passive features and a 2D index map containing the feature indices.
§ EFFSEG
§.§ High-level overview
EffSeg is a two-stage instance segmentation head obtaining fine-grained segmentation masks by using a multi-stage refinement procedure similar to one used in PointRend <cit.> and RefineMask <cit.>. For each detected object, EffSeg first predicts a 14×14 mask within the RoI and iteratively upsamples this mask by a factor 2 to obtain a fine-grained 112×112 mask.
The 14×14 mask is computed by working on a dense 2D feature map of shape [N_R, F_0, 14, 14] with N_R the number of RoIs and F_0 the feature size at refinement stage 0. The 14×14 mask however is too coarse to obtain accurate segmentation masks, where a single cell from the 14×14 grid might contain both the object and the background, rendering a correct assignment impossible. To solve this issue, higher resolution masks need to be produced, reducing the fraction of ambiguous cells which contain both the foreground and background.
The predicted 14×14 mask is therefore upsampled to a 28×28 mask where in some locations the old predictions are overwritten by new ones and where in the remaining locations the predictions are left unchanged. Features corresponding to the mask locations which require a new prediction, are called active features, whereas features corresponding to the remaining mask locations which are not being updated, are called passive features. Given that a new segmentation prediction is only required for a subset of spatial locations within the 28×28 grid, it is inefficient to use a dense feature map of shape [N_R, F_1, 28, 28] (as done in RefineMask <cit.>). Additionally, when upsampling by a factor 2, every grid cell gets subdivided in a 2 × 2 grid of smaller cells, with the feature from the parent cell copied to the 4 children cells. The dense feature map of shape [N_R, F_1, 28, 28] hence contains many duplicate features, which is a second source of inefficiency. EffSeg therefore introduces the Structure-Preserving Sparsity (SPS) method, which separately stores the active features, the passive features (without duplicates) and a 2D index map containing the feature indices (see <ref> for more information).
EffSeg repeats this upsampling process two more times, resulting in the fine-grained 112×112 mask. Further upsampling the predicted mask is undesired, as 224×224 masks typically do not yield performance gains <cit.> while requiring additional compute. At last, the final segmentation mask is obtained by pasting the predicted 112×112 mask inside the corresponding RoI box using bilinear interpolation.
§.§ Structure-preserving sparsity
Motivation. When upsampling a segmentation mask by a factor 2, new predictions are only required in a subset of spatial locations. The Dense method, which consists of processing dense 2D feature maps as done in RefineMask <cit.>, is inefficient as new predictions are computed over all spatial locations instead of only over the spatial locations of interest. A method capable of performing computation in sparse set of 2D locations is therefore required. We distinguish following sparse methods.
First, the Pointwise method selects features from the desired spatial locations (called active features) and only processes these using pointwise networks such as MLPs or FFNs <cit.>, as done in PointRend <cit.>. Given that the pointwise networks do not require access to neighboring features, there is no need to store information about the 2D spatial relationship between features, making this method simple and efficient. However, the features solely processed by pointwise networks miss context information, resulting in inferior segmentation performance as empirically shown in <ref>. The Pointwise method is hence simple and efficient, but does not perform that well.
Second, the Neighbors method consists in both storing the active features, as well as their 8 neighboring features. This allows the active features to be processed by pointwise operations, as well as by 2D convolution operations (with 3×3 kernel and dilation one) by accessing the neighboring features. The Neighbors method hence combines efficiency with access to the 8 neighboring features, yielding improved segmentation performance compared to the Pointwise method. However, this approach is limited in the 2D operations it can perform. The 8 neighboring features for example do not suffice for 2D convolutions with kernels larger than 3×3 or dilations greater than 1, nor do they suffice for 2D deformable convolutions which require features to be sampled from arbitrary locations. The Neighbors method hence lacks generality in the 2D operations it can perform.
Third, the SparseOnDense method consists in applying traditional operations such as 2D convolutions at sparse locations of a dense 2D feature map, as done in <cit.>. This method allows information to be exchanged between neighboring features (as opposed to the Pointwise method) and is compatible with any 2D operation (as opposed to the Neighbors method). Moreover, it is computationally efficient as it only performs computation there where it is needed. However, the use of a dense 2D feature map of shape [N_R, F, H, W] as data structure is storage inefficient, given that only a subset of the dense 2D feature map gets updated each time, with unchanged features copied from one feature map to the other. Additionally, the dense 2D feature map also contains multiple duplicate features due to passive features covering multiple cells of the 2D grid, leading to a second source of storage inefficiency. Hence, while having good performance and while being computationally efficient, the SparseOnDense method is not storage efficient.
Fourth, the Structure-Preserving Sparsity (SPS) method stores a N_A × F matrix containing the active features, a N_P × F matrix containing the passive features (without duplicates) and a dense 2D index map of shape [N_R, H, W] containing the feature indices. The goal of the index map is to preserve the 2D spatial configuration or structure of the features, such that any 2D operation can still be performed (as opposed to the Neighbors method). Separating the storage of active and passive features, enables SPS to update the active features without requiring to copy the unchanged passive features (as opposed to the SparseOnDense method). The SPS method is hence storage efficient, in addition to being computationally efficient and supporting any 2D operation thanks to the 2D index map.
An overview of the different methods with their properties is found in <ref>. The SPS method will be used in EffSeg as it ticks all the boxes.
Toy example of SPS. In <ref>, a toy example is shown illustrating how a 2D convolution operation (with 3×3 kernel and dilation one) is performed using the Structure-Preserving Sparsity (SPS) method. The example contains 4 active features and 3 passive features, organized in a 3×3 grid according to the dense 2D index map. Notice how the index map contains duplicate entries, with passive feature indices 5 and 6 appearing twice in the grid.
The SPS method applies the 2D convolution operation with 3×3 kernel and dilation 1 to each of the active features, by first gathering its neighboring features into a 3×3 grid and then convolving this feature grid by the learned 3×3 convolution kernel. When a certain neighbor feature does not exist as it lies outside of the 2D index map, a padding feature is used instead. In practice, this padding feature corresponds to the zero vector.
As a result, each of the active features is sparsely updated by the 2D convolution operation, whereas the passive features and the dense 2D index map remain unchanged. Note that performing other types of 2D operations such as dilated or deformable <cit.> convolutions occurs in a similar way, with the only difference being which neighboring features are gathered and how they are processed.
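The toy example above can be translated into code directly. The sketch below applies a 3×3 convolution to the active features of an SPS structure by gathering neighbor indices from a zero-padded index map; the tensor layouts and names are illustrative assumptions, not the released EffSeg implementation.

```python
# 3x3 convolution on the SPS data structure: only active features are updated,
# passive features and the index map stay fixed, out-of-map neighbors use a
# zero "padding feature".
import torch
import torch.nn.functional as F

def sps_conv3x3(active, passive, index_map, active_pos, weight, bias=None):
    """
    active:     (N_A, F_in) features to be updated
    passive:    (N_P, F_in) features kept fixed (stored without duplicates)
    index_map:  (H, W) long tensor of feature indices (< N_A: active, >= N_A: passive)
    active_pos: (N_A, 2) long tensor with the (row, col) of each active cell
    weight:     (F_out, F_in, 3, 3) convolution kernel
    """
    feats = torch.cat([active, passive, active.new_zeros(1, active.size(1))], dim=0)
    pad_idx = feats.size(0) - 1                              # index of the padding feature
    padded = F.pad(index_map, (1, 1, 1, 1), value=pad_idx)   # out-of-map neighbors -> padding
    r = active_pos[:, 0] + 1
    c = active_pos[:, 1] + 1
    offs = torch.tensor([(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)])
    nbr_idx = padded[r[:, None] + offs[:, 0], c[:, None] + offs[:, 1]]      # (N_A, 9)
    patches = feats[nbr_idx].permute(0, 2, 1).reshape(active.size(0), -1)   # (N_A, F_in*9)
    out = patches @ weight.reshape(weight.size(0), -1).t()                  # (N_A, F_out)
    return out + bias if bias is not None else out
```

The row-major ordering of the neighbor offsets matches the row-major flattening of the kernel, and, as in the toy example, only the active features change while the passive features and the index map are left untouched.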
§.§ Detailed overview
<ref> shows a detailed overview of the EffSeg architecture. The overall architecture is similar to the one used in RefineMask <cit.>, with some small tweaks as detailed below. In what follows, we provide more information about the various data structures and modules used in EffSeg.
Inputs. The inputs of EffSeg are the backbone feature maps, the predicted bounding boxes and the query features. The backbone feature maps B_s are feature maps coming from the P_2-P_7 backbone feature pyramid, with backbone feature map B_s corresponding to refinement stage s. The initial backbone feature map B_0 is determined based on the size of the predicted bounding box, following the same scheme as in Mask R-CNN <cit.> where B_0 = P_k_0 with
k_0 = 2 + min( ⌊log_2(√(wh) / 56) ⌋, 3 ),
and with w and h the width and height of the predicted bounding box respectively. The backbone feature maps B_s of later refinement stages use feature maps of twice the resolution compared to previous stage, unless no higher resolution feature map is available. In general, we hence have B_s = P_k_s with
k_s = max (k_0 - s, 2).
Note that this is different from RefineMask <cit.>, which uses k_s = 2 for stages 1, 2 and 3.
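As a sketch of the level-selection rules above, the following helper computes the initial and per-stage pyramid levels; the clamping of very small boxes used in the full Mask R-CNN scheme is omitted here.

```python
# Backbone pyramid level selection per RoI and refinement stage.
import math

def initial_level(w, h):
    """k_0 = 2 + min(floor(log2(sqrt(w*h)/56)), 3)."""
    return 2 + min(math.floor(math.log2(math.sqrt(w * h) / 56)), 3)

def stage_level(k0, s):
    """k_s = max(k_0 - s, 2): a twice finer map per refinement stage, capped at P_2."""
    return max(k0 - s, 2)

# Example: a 224x224 box starts at P_4 and uses P_4, P_3, P_2, P_2 for stages 0-3.
k0 = initial_level(224, 224)
levels = [stage_level(k0, s) for s in range(4)]  # [4, 3, 2, 2]
```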
The remaining two inputs are the predicted bounding boxes and the query features, with one predicted bounding box and one query feature per detected object. The query feature is used by the detector to predict the class and bounding box of each detected object, and hence carries useful instance-level information condensed into a single feature.
Dense processing. The first refinement stage (stage 0) solely consists of dense processing on a 2D feature map.
At first, EffSeg applies the RoIAlign operation <cit.> on the B_0 backbone feature maps to obtain the initial RoI-based 2D feature map of shape [N_R, F_0, H_0, W_0] with N_R the number of RoIs (the number of detected objects), F_0 the feature size, H_0 the height of the map and W_0 the width of the map. Note that the numeral subscripts, as those found in F_0, H_0 and W_0, indicate the refinement stage. In practice, EffSeg uses F_0=256, H_0=14 and W_0=14.
Next, the query features from the detector are fused with the 2D feature map obtained by the RoIAlign operation. The fusion consists in concatenating each of the RoI features with their corresponding query feature, processing the concatenated features using a two-layer MLP and adding the resulting features to the original RoI features. Fusing the query features allows to explicitly encode which object within the RoI box is considered the object of interest, as opposed to implicitly infer this from the delineation of the RoI box. This is hence especially useful when having overlapping objects with similar bounding boxes.
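A possible form of this query-fusion step (concatenate, two-layer MLP, residual add) is sketched below; the feature and query dimensions are assumptions chosen for illustration.

```python
# Query fusion: broadcast the per-object query over the RoI grid, concatenate it
# with the RoI features, pass through a two-layer MLP, and add residually.
import torch
import torch.nn as nn

class QueryFusion(nn.Module):
    def __init__(self, feat_dim=256, query_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + query_dim, feat_dim), nn.ReLU(inplace=True),
            nn.Linear(feat_dim, feat_dim))

    def forward(self, roi_feats, queries):
        # roi_feats: (N_R, F, H, W) from RoIAlign; queries: (N_R, Q) from the detector
        n, f, h, w = roi_feats.shape
        q = queries[:, :, None, None].expand(-1, -1, h, w)      # broadcast query to every cell
        fused = torch.cat([roi_feats, q], dim=1).permute(0, 2, 3, 1)
        return roi_feats + self.mlp(fused).permute(0, 3, 1, 2)  # residual add
```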
After the query fusion, the 2D feature map gets further processed by a Fully Convolutional Network (FCN) <cit.>, similar to the one used in Mask R-CNN <cit.>, consisting of 4 convolution layers separated by ReLU activations.
Finally, the resulting 2D feature map is used to obtain the coarse 14×14 segmentation predictions with a two-layer MLP. Additionally, EffSeg also uses a two-layer MLP to make refinement predictions, which are used to identify the cells (locations) from the 14×14 grid that require higher resolution and hence need to be refined.
Sparse processing. The subsequent refinement stages (stages 1, 2 and 3) solely consist of sparse processing using the Structure-Preserving Sparsity (SPS) method (see <ref> for more information about SPS).
At first, the SPS data structure is constructed or updated from the previous stage. The N_A features corresponding to the cells with the 10,000 highest refinement scores are categorized as active features, whereas the remaining N_P features are labeled as passive features. The active and passive features are stored in N_A × F_s-1 and N_P × F_s-1 matrices respectively, with active feature indices ranging from 0 to N_A - 1 and with passive feature indices ranging from N_A to N_A + N_P - 1. The dense 2D index map of the SPS data structure is constructed from the stage 0 dense 2D feature map or from the index map of the previous stage, while taking the new feature indices into account due to the new split between active and passive features.
Thereafter, the SPS data structure is updated based on the upsampling of the feature grid by a factor 2. The number of active features N_A increases by a factor 4, as each parent cell gets subdivided into 4 children cells. The children active features are computed from the parent active feature using a two-layer MLP, with a different MLP for each of the 4 children. The dense 2D index map is updated based on the new feature indices (as the number of active features increased) and by copying the feature indices from the parent cell of passive features to its children cells. Note that the passive features themselves remain unchanged.
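The per-parent upsampling of the active features could look as follows; the hidden sizes and child ordering are assumptions, and the index-map bookkeeping is omitted for brevity.

```python
# x2 upsampling of active features: four separate two-layer MLPs, one per child cell.
import torch
import torch.nn as nn

class ActiveUpsample(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.child_mlps = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(inplace=True), nn.Linear(dim, dim))
            for _ in range(4)])  # top-left, top-right, bottom-left, bottom-right

    def forward(self, active):                  # active: (N_A, F)
        children = [mlp(active) for mlp in self.child_mlps]
        return torch.stack(children, dim=1)     # (N_A, 4, F): four children per parent cell
```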
Next, the active features are fused with their corresponding backbone feature, which is sampled from the backbone feature map B_s in the center of the active feature cell. The fusion consists in concatenating each of the active features with their corresponding backbone feature, processing the concatenated features using a two-layer MLP and adding the resulting features to the original active features.
Afterwards, the feature size of the active and passive features are divided by 2 using a shared one-layer MLP. We hence have F_s+1 = F_s / 2, decreasing the feature size by a factor 2 every refinement stage, as done in RefineMask <cit.>.
After decreasing the feature sizes, the active features are further updated using the processing module, which does most of the heavy computation. The processing module supports any 2D operation thanks to the versatility of the SPS method. Our default EffSeg implementation uses the Semantic Fusion Module (SFM) from RefineMask <cit.>, which fuses (adds) the features obtained by three parallel convolution layers using a 3×3 kernel and dilations 1, 3 and 5. In <ref>, we compare the performance of EffSeg heads using different processing modules.
Finally, the resulting active features are used to obtain the new segmentation and refinement predictions in their corresponding cells. Both the segmentation branch and the refinement branch use a two-layer MLP, as in stage 0.
Training. During training, EffSeg applies segmentation and refinement losses on the segmentation and refinement predictions from each EffSeg stage s, where each of these predictions are made for a particular cell from the 2D grid. The ground-truth segmentation targets are obtained by sampling the ground-truth mask in the center of the cell, and the ground-truth refinement targets are determined by evaluating whether the cell contains both foreground and background or not. We use the cross-entropy loss for both the segmentation and refinement losses, with loss weights (0.25, 0.375, 0.375, 0.5) and (0.25, 0.25, 0.25, 0.25) respectively for stages 0 to 3.
Inference. During inference, EffSeg additionally constructs the desired segmentation masks based on the segmentation predictions from each stage. The segmentation predictions from stage 0 already correspond to dense 14×14 segmentation masks, and hence do not require any post-processing. In each subsequent stage, the segmentation masks from previous stage are upsampled by a factor 2, and the sparse segmentation predictions are used to overwrite the old segmentation predictions in their corresponding cells. After performing this process for three refinement stages, the coarse 14×14 masks are upsampled to fine-grained 112×112 segmentation masks. Finally, the image-size segmentation masks are obtained by pasting the RoI-based 112×112 segmentation masks inside their corresponding RoI boxes using bilinear interpolation.
The segmentation confidence scores s_seg are computed by taking the product of the classification score s_cls and the mask score s_mask averaged over the predicted foreground pixels, which gives
s_seg = s_cls·1/|ℱ|∑_i^ℱ s_mask, i
with ℱ the set of all predicted foreground pixels.
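The scoring rule above translates directly into code as a small sketch; the 0.5 foreground threshold is an assumption.

```python
# Weight the classification score by the mean mask probability over the
# predicted foreground pixels of one instance.
import torch

def segmentation_score(cls_score, mask_probs):
    """mask_probs: (H, W) per-pixel foreground probabilities of one instance."""
    fg = mask_probs > 0.5                 # predicted foreground set F
    if fg.sum() == 0:
        return 0.0
    return cls_score * mask_probs[fg].mean().item()
```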
§ EXPERIMENTS
§.§ Experimental setup
Datasets. We perform experiments on the COCO <cit.> instance segmentation benchmark. We train on the 2017 training set and evaluate on the 2017 validation and test-dev sets.
Experiment details. Throughout our experiments, we use a ResNet-50+FPN or ResNet50+DeformEncoder backbone <cit.> with the FQDet detector <cit.>. For the ResNet-50 network <cit.>, we use ImageNet <cit.> pretrained weights provided by TorchVision (version 1) and freeze the stem, stage 1 and BatchNorm <cit.> layers (see <cit.> for the used terminology). For the FPN network <cit.>, we use the implementation provided by MMDetection <cit.>. The FPN network outputs a P_2-P_7 feature pyramid, with the extra P_6 and P_7 feature maps computed from the P_5 feature map using convolutions and the ReLU activation function. For the DeformEncoder <cit.>, we use the same settings as in Mask DINO <cit.>, except that we use an FFN hidden feature size of 1024 instead of 2048. For the FQDet detector, we use the default settings from <cit.>.
We train our models using the AdamW optimizer <cit.> with weight decay 10^-4. We use an initial learning rate of 10^-5 for the backbone parameters and for the linear projection modules computing the MSDA <cit.> sampling offsets used in the DeformEncoder and FQDet networks. For the remaining model parameters, we use an initial learning rate of 10^-4. Our models are trained and evaluated on 2 GPUs with batch size 1 each.
On COCO <cit.>, we perform experiments using a 12-epoch and a 24-epoch training schedule, while using the multi-scale data augmentation scheme from DETR <cit.>. The 12-epoch schedule multiplies the learning rate by 0.1 after the 9th epoch, and the 24-epoch schedule multiples the learning rate by 0.1 after the 18th and 22nd epochs.
Evaluation metrics. When evaluating a model, we consider both its performance metrics as well as its computation metrics.
For the performance metrics, we report the Average Precision (AP) metrics <cit.> on the validation set, as well as the validation AP using LVIS <cit.> annotations AP^* and the validation AP using LVIS annotations with the boundary IoU <cit.> metric AP^B*. For the main experiments in <ref>, we additionally report the test-dev Average Precision AP_test.
For the computation metrics, we report the number of model parameters, the number of GFLOPs during inference and the inference FPS. The number of inference GFLOPs and the inference FPS are computed based on the average over the first 100 images of the validation set. We use the tool from Detectron2 <cit.> to count the number of FLOPs and the inference speeds are measured on a NVIDIA A100-SXM4-80GB GPU.
Baselines. Our baselines are Mask R-CNN <cit.>, PointRend <cit.> and RefineMask <cit.>. Mask R-CNN could be considered as the entry-level baseline without any enhancements towards fine-grained segmentation. PointRend and RefineMask on the other hand are two baselines with improvements towards fine-grained segmentation, with RefineMask our main baseline due to its superior performance. We use the implementations from MMDetection <cit.> for both the Mask R-CNN and PointRend models, whereas for RefineMask we use the latest version from the official implementation <cit.>.
In order to provide a fair comparison with EffSeg, we additionally consider the enhanced versions of above baselines, called Mask R-CNN++, PointRend++ and RefineMask++. The enhanced versions additionally perform query fusion and mask-based score weighting as done in EffSeg (see <ref>). For PointRend++, we moreover replace the coarse MLP-based head by the same FCN-based head as used in Mask R-CNN, yielding improved performance without significant changes in computation metrics.
Note that Mask Transfiner <cit.> was not used as baseline, due to irregularities in the reported experimental results and in the experimental settings as discussed in <cit.>.
§.§ Main experiments
<ref> contains the main experiment results on COCO. We make following observations.
Performance. Performance-wise, we can see that Mask R-CNN++ performs the worst, that RefineMask++ and EffSeg perform the best, and that PointRend++ performs somewhere in between. This is in line with the arguments presented earlier.
Mask R-CNN++ predicts a 28×28 mask per RoI, which is too coarse to capture the fine details of many objects. This is especially true for large objects, as can be seen from the significantly lower AP_L values compared to the other segmentation heads.
PointRend++ performs better compared to Mask R-CNN++ by predicting a 112×112 mask, yielding significant gains in the boundary accuracy AP^B*. However, PointRend++ does not access neighboring features during the refinement process, resulting in lower segmentation performance compared to RefineMask++ and EffSeg, which both leverage the context provided by neighboring features.
Finally, we can see that the segmentation performance of both RefineMask++ and EffSeg is very similar. There are some small differences, with RefineMask++ typically having higher AP^* and AP^B* values and EffSeg typically having higher validation AP values, but none of these differences are deemed significant.
Efficiency. In <ref>, we can find the computation metrics of the different models as a whole, containing both the computational costs originating from the segmentation head as well as those originating from the backbone and the detector. To provide a better comparison between the different segmentation heads, we also report the computation metrics of the segmentation heads alone in <ref>.
As expected, we can see that Mask R-CNN++ is computationally the cheapest, given that it only predicts a 28×28 mask instead of a 112×112 mask. From the three remaining heads, RefineMask++ is clearly the most expensive one, as it performs computation at all locations within the RoI instead of sparsely. PointRend++ and EffSeg are lying somewhere in between, being more expensive than Mask R-CNN++, but cheaper than RefineMask++.
Finally, when comparing RefineMask++ with EffSeg, we can see that EffSeg uses 36% fewer parameters, reduces the number of inference FLOPs by 71% and increases the inference FPS by 29%.
Performance vs. Efficiency. <ref> shows three performance vs. efficiency plots, comparing the COCO validation AP against the `Parameters', `Inference GFLOPs' and `Inference FPS' computation metrics. From these, we can see that EffSeg provides the best performance vs. efficiency trade-off for each of the considered computation metrics.
We can hence conclude that EffSeg obtains excellent segmentation performance similar to RefineMask++ (the best performing baseline), while reducing the inference FLOPs by 71% and increasing the number of inference FPS by 29% compared to the latter.
§.§ Comparison between processing modules
In <ref>, we show results comparing EffSeg models with different processing modules (see <ref> for more information about the processing module). All models were trained for 12 epochs using the ResNet-50+FPN backbone. We make following observations.
First, we can see that the MLP processing module performs the worst. This confirms that Pointwise networks such as MLPs yield sub-optimal segmentation performance due to their inability to access information from neighboring locations, as argued in <ref>.
Next, we consider the convolution (Conv), deformable convolution <cit.> (DeformConv) and Semantic Fusion Module <cit.> (SFM) processing modules. We can see that the Conv and DeformConv processing modules reach similar performance, whereas SFM obtains slightly higher segmentation performance. Note that the use of the DeformConv and SFM processing modules was enabled by our SPS method (<ref>), which supports any 2D operation. This is in contrast to the Neighbors method (<ref>), for example, which supports neither DeformConv nor SFM (as the latter contains dilated convolutions). This hence highlights the importance of SPS supporting any 2D operation, allowing for superior processing modules such as the SFM processing module.
Finally, <ref> additionally contains the DenseSFM baseline, applying the SFM processing module over all RoI locations similar to RefineMask <cit.>. Note that DenseSFM uses a slightly modified EffSeg head denoted by EffSeg^†, reducing the sampled backbone feature sizes to F_s-1 (see <ref>) in order to reduce the memory consumption during training. When looking at the results, we can see that densely applying the SFM module (DenseSFM) as opposed to sparsely (SFM), does not yield any performance gains while dramatically increasing the computation cost. We hence conclude that no performance is sacrificed when performing sparse processing instead of dense processing.
§.§ Limitations and future work
We only provide results on the COCO <cit.> instance segmentation benchmark. However, we plan to add results on the Cityscapes <cit.> instance segmentation benchmark for the final paper version. Additionally, we also plan to provide additional results on COCO using larger backbones and longer training schedules.
The 2D operations (convolutions) performed on the SPS data structure, are currently implemented in a naive way using native PyTorch <cit.> operations. Instead, these operations could be implemented in CUDA, which should result in additional speed-ups for our EffSeg models.
EffSeg can currently only be used for the instance segmentation task. Extending it to the more general panoptic segmentation <cit.> task, is left as future work.
§ CONCLUSION
In this work, we propose EffSeg performing fine-grained instance segmentation in an efficient way by introducing the Structure-Preserving Sparsity (SPS) method. SPS separately stores active features, passive features and a dense 2D index map containing the feature indices, resulting in computational and storage-wise efficiency while supporting any 2D operation. EffSeg obtains similar segmentation performance as the highly competitive RefineMask head, while reducing the number of FLOPs by 71% and increasing the FPS by 29%.
ieee_fullname
|
http://arxiv.org/abs/2307.03368v1
|
20230707032739
|
Waveform-Domain Adaptive Matched Filtering: A Novel Approach to Suppressing Interrupted-Sampling Repeater Jamming
|
[
"Hanning Su",
"Qinglong Bao",
"Jiameng Pan",
"Fucheng Guo",
"Weidong Hu"
] |
eess.SP
|
[
"eess.SP"
] |
Waveform-Domain Adaptive Matched Filtering: A Novel Approach to Suppressing Interrupted-Sampling Repeater Jamming
Hanning Su, Qinglong Bao, Jiameng Pan, Fucheng Guo, and Weidong Hu
This work was supported by the National Science Foundation of China under Grant 62231026. (Corresponding author: Qinglong Bao.)
Hanning Su, Qinglong Bao, Jiameng Pan, Fucheng Guo and Weidong Hu are with the School of Electronic Science, National University of Defense Technology, Changsha 410073, China (e-mail:[email protected]; [email protected]; [email protected]; [email protected]; [email protected];)
© 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
DOI: DOI will be provided upon publication.
Please note that this is the accepted version of the paper, which has been accepted for publication by IEEE. The final published version may have undergone additional edits and pagination. For the final published version, please refer to the IEEE journal or conference proceedings.
Abstract
The inadequate adaptability to flexible interference scenarios remains an unresolved challenge in the majority of techniques utilized for mitigating interrupted-sampling repeater jamming (ISRJ). Methods based on the matched filtering system are expected to incorporate anti-ISRJ measures, built on prior ISRJ modeling, either preceding or succeeding the matched filtering. Due to the partial matching nature of ISRJ, its characteristics are revealed during the process of matched filtering. Therefore, this paper introduces an extended domain called the waveform domain within the matched filtering process. On this domain, a novel matched filtering model, known as waveform-domain adaptive matched filtering (WD-AMF), is established to tackle the problem of ISRJ suppression without relying on a pre-existing ISRJ model. The output of the WD-AMF encompasses an adaptive filtering term and a compensation term. The adaptive filtering term contains the adaptive integration outcomes in the waveform domain, which are determined by an adaptive weighting function. This function, akin to a collection of bandpass filters, decomposes the integrated function into multiple components, some of which contain interference while others do not. The compensation term adheres to an integrated guideline for discerning the presence of signal components or noise within the integrated function. The integration results are then concatenated to reconstruct a compensated matched filter output. Simulations are conducted to showcase the exceptional capability of the proposed method in suppressing ISRJ in diverse interference scenarios, even in the absence of a pre-existing ISRJ model.
Interrupted-sampling repeater jamming (ISRJ), ISRJ suppression, wave domain, adaptive matched filtering.
§ INTRODUCTION
The Interrupted-Sampling Repeater Jamming (ISRJ) represents a form of intra-pulse interference, wherein the jammer samples a brief segment of the radar waveform and promptly retransmits it <cit.>. The jamming signals exhibit strong coherence with the actual target echo, resulting in the appearance of both genuine and spurious target peaks in the range profile obtained through matched filtering. By employing flexible jamming parameters, the jammer has the capability to generate a variable number of false targets with varying amplitude and positions <cit.>.
One prominent focus in the research on ISRJ suppression is the enhancement of interference suppression signal-to-noise ratio (SNR) while minimizing the loss of target SNR, thereby improving adaptability to challenging scenarios characterized by limited snapshots, low SNR, flexible signal-to-jamming ratio (SJR), and varying ISRJ modulation schemes. Several approaches have been proposed to address these requirements, such as orthogonal waveforms and filter design methods <cit.>, as well as time-frequency domain filtering techniques <cit.>. A common characteristic among these methods is their reliance on time-domain matched filtering systems, which establish mappings from the time-domain signal to the compressed pulse signal and assume reversibility of these mappings. Exploiting this assumption, the filter's inputs or outputs can be matched with pre-established mappings to mitigate ISRJ and achieve a range profile devoid of interference. Different methods emerge from distinct matching criteria, such as the utilization of a separable convex optimization scheme for joint waveform and mismatched filter design <cit.>, or the implementation of band-pass filtering based on time-frequency analysis for time-frequency domain filtering methods <cit.>. The effectiveness of these methods relies significantly on prior information pertaining to the ISRJ model, encompassing the jammer's operational mode, modulation scheme, and operational parameters.
The time-domain matched filtering system can experience various imperfections attributed to the indirect modulation characteristics of ISRJ. Furthermore, ISRJs can display diverse patterns over different durations, reflecting their specific objectives <cit.>. Consequently, the preformulated mappings of inputs and outputs of the filter in practical systems become considerably complex in ISRJ suppression methods. Some of these imperfections are too complicated to be modeled accurately, and the inaccurate modeling may have a significant negative influence on the performance of ISRJ suppression. To facilitate the implementation of the methods, cognitive models are developed to depict the operational characteristics of flexible interference scenarios <cit.>, while deconvolution processes are employed to estimate the crucial parameters of ISRJ <cit.>.
Indeed, cognitive model-based methods are limited in adaptability due to their reliance on the accuracy of the cognitive model, which restricts their potential for broader application. In reference <cit.>, a neural network is introduced to extract segments of the signal free from jamming interference, enabling the generation of a band-pass filter. This adaptive approach circumvents the need for prior information about the interference. However, it is necessary to further validate the performance of the network using real radar measurements, as it has currently been trained only with simulated data. Additionally, the band-pass filter method is specifically applicable to stretched echoes of linear frequency modulated (LFM) waveforms, necessitating further investigation to address ISRJ suppression in the presence of complex waveforms.
In recent research <cit.>, an integration decomposition method is introduced to tackle the challenge of recognizing false targets caused by ISRJ. It establishes an intrinsic integration sequence starting from the received echo and derives a nonlinear mapping to an antiderivative of an energy function. This derived mapping is subsequently employed to extract the characteristics of ISRJ. This suggests the existence of potential adaptive discriminative features between ISRJ and the actual echo signal within the micro-domain of the matched filtering system. However, in reference <cit.>, only preliminary pattern classification based on these discriminative features is conducted, without delving into a comprehensive discussion of the mathematical principles underlying this phenomenon. As a result, further investigation into this potential micro-domain remains necessary.
ISRJ fully exploits the deficiencies exhibited by the accumulated output of the matched filtering process. Moreover, from the perspective of the data structure involved in the convolution process of matched filtering, the ISRJ, being only partially matched, manifests its characteristics within the data structure. Hence, we define the waveform domain as the domain of the process data and establish a new matched filtering model upon it. In this manuscript, we present a comprehensive method called Waveform-Domain Adaptive Matching Filtering (WD-AMF) as a solution to the broader problem of ISRJ suppression. In WD-AMF, our focus shifts from the macroscopic input and output waveforms of the time-domain matched filtering to the integrated function within the convolution operation in the waveform domain. To effectively suppress ISRJ while preserving the output gain of the echo signal, we employ a robust adaptive algorithm in the waveform domain.
The remaining sections of this paper are organized into six parts. In Section 2, we establish the problem formulation for mitigating ISRJ. Section 3 introduces the framework of the WD-AMF, encompassing the relevant definitions and representations of intermediate variables. Section 4 presents the expressions and characteristics of cumulative waveform coherence functions for the echo signal, ISRJ, and received signal of the LFM waveform. Building upon these findings, a statistical model is formulated to effectively suppress ISRJ. Section 5 outlines the application of the IMM-KF technique to solve the anti-ISRJ model, providing a comprehensive expression of the WD-AMF. In Section 6, simulations are conducted to demonstrate the superior performance of the proposed method in mitigating ISRJ. Finally, in Section 7, we conclude this paper.
§ PROBLEM FORMULATION
§.§ ISRJ model
The ISRJ signal can be expressed as the product of the transmitted waveform w(t) and the interrupted-sampling function p(t), denoted as w_p(t) = w(t) p(t). By utilizing the ambiguity function, the output of the matched filter for the ISRJ signal can be represented by the following equation <cit.>:
w_po(t)=∑_n=-∞^+∞ f_s T_Sa(n π f_s T_) 𝒳(t,-n f_s)
where Sa(x)=sin(x)/x, T_ represents the duration of the sampling pulse, which corresponds to the width of an individual jamming slice pulse. The interrupted-sampling frequency is denoted by f_s=1/T_s, and T_s corresponds to the interrupted-sampling repeater period. The function 𝒳(·, ·) represents the ambiguity function of the transmitted waveform. It is worth noting that for self-defensive forwarding interference, due to the utilization of a time-sharing transmit-receive antenna by the jammer, the condition f_sT_⩽ 0.5 holds.
§.§ anti-ISRJ model
Let's assume that a monostatic pulsed Doppler radar transmits a pulse compression waveform w(t). When the self-defensive jammer forwards a jamming signal with Doppler modulation f_d_, the received signal can be expressed as:
x(t) = A_sw(t-τ_s)e^j2π f_d_s(t-τ_s) + A_ w_p(t-τ_)e^j2π f_d_(t-τ_)
=s(t-τ_s)+(t-τ_)
Here, A_s represents the amplitude of the target echo signal, τ_s denotes the propagation delay of the target, and f_d_s represents the Doppler frequency of the target. On the other hand, A_ represents the amplitude of the jamming signal, and τ_ denotes the delay of the interference signal. The received echo signal and interference signal are denoted by s(t) and (t), respectively.
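To make the signal model concrete, the following sketch simulates an LFM echo plus an interrupted-sampling repeated slice and pulse-compresses the sum; all numerical values (bandwidth, slice width, amplitudes, delays) are illustrative assumptions, not parameters from the paper.

```python
# Toy simulation of the received-signal model: target echo + ISRJ + noise,
# followed by conventional matched filtering (pulse compression).
import numpy as np

fs, T, B = 100e6, 20e-6, 10e6                  # sample rate, pulse width, bandwidth
k = B / T                                      # chirp rate
t = np.arange(-T / 2, T / 2, 1 / fs)
w = np.exp(1j * np.pi * k * t**2)              # transmitted LFM waveform

Ts, Tj = 4e-6, 1e-6                            # repeater period and slice width (f_s*T_j = 0.25)
p = (np.mod(t + T / 2, Ts) < Tj).astype(float) # interrupted-sampling function
jam = 2.0 * np.roll(w * p, int(Tj * fs))       # slice retransmitted right after sampling

echo = 1.0 * w                                 # target echo (zero delay/Doppler for simplicity)
noise = 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size)) / np.sqrt(2)
x = echo + jam + noise                         # received signal

h = np.conj(w[::-1])                           # matched filter
pc = np.convolve(x, h)                         # |pc| shows the true peak plus ISRJ false targets
```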
When x(t) traverses a generalized matched filter h(t), characterized by the system function H[·], within the framework of ISRJ suppression methods based on matched filtering systems, a predefined mapping is employed for both x(t) and h(t). Consequently, an output is generated, which can be expressed as follows:
z_o(t) =KH{G[x(t)]}
= G[x(t)] ⊗K[h(t)]
= G[s(t-τ_s)] ⊗K[h(t)]+G[(t-τ_)] ⊗K[h(t)]
=KH{G[s(t-τ_s)]}+ϱ(t)
Here, KH[·] symbolizes the mapping resulting from the application of K[·] to the system function H[·]. Furthermore, G[·] represents the mapping in relation to x(t). Additionally, ϱ(t) can be defined as the convolution of G[(t-τ_)] and K[h(t)]. Hence, an effective strategy to counteract the interference from ISRJ involves maximizing the amplitude of the main lobe in KH{G[s(t-τ_s)]}, while simultaneously minimizing the amplitude of the side lobes in KH{G[s(t-τ_s)]} and inhibiting the peak amplitude of ϱ(t).
§ WAVEFORM-DOMAIN ADAPTIVE MATCHED FILTERING
Signal time-domain matched filtering is defined as:
x_o(t) = ∫_-∞^∞ x(t-μ) h(μ) d μ
If x(μ) and h(μ) represent signals with finite duration T, (<ref>) can be interpreted as the overall integral of the product of x(t-μ) and h(μ) across the fast time variable μ within the intra-pulse period of h(μ). By considering (<ref>), it can be observed that ISRJ fully exploits the deficiencies exhibited by the accumulated output of the matched filtering process. Even if ISRJ is temporally discontinuous, its output after passing through the matched filter can still be represented in the same form as the target echo. However, from the perspective of the data structure involved in the convolution process of matched filtering, the characteristics of ISRJ, being only partially matched, are manifested within the data structure as depicted in Fig. <ref>.
Consequently, we designate the fast time domain μ within h(μ) as the waveform domain, and the function being integrated as the waveform response function (WRF). This term signifies the response that occurs once the waveform traverses the filter and is defined as follows:
υ^(t)(μ) = x(t-μ) h(μ)
where μ∈[-T/2,T/2]. Next, our point of interest lies in the instantaneous analytical expression of υ^(t)(μ) in the waveform domain. Divergent from temporal matched filtering of signals, we define the cumulative waveform coherence function (CWCF) as the variable upper limit integration of υ^(t)(μ), in the waveform domain:
y^(t)(ρ) = ∫_-∞^ρυ^(t)(μ) d μ
By comparing (<ref>) and (<ref>), it becomes apparent that (<ref>) is not restricted to a single matched filter value in the time domain, but rather encompasses the integration variable of the entire waveform domain. In the scenario where ρ→∞, it follows that y^(t)(ρ)=x_o(t).
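The discrete computation of the WRF and CWCF at one fixed time instant can be sketched as follows (a hypothetical discretization, not the authors' implementation); the check at the end confirms that the final CWCF value reproduces the ordinary matched-filter output sample:

```python
import numpy as np

fs_adc = 15e6; T = 100e-6; B = 6e6; k = B / T
mu = np.arange(-T/2, T/2, 1/fs_adc)              # waveform-domain (fast-time) variable
w  = np.exp(1j*np.pi*k*mu**2)                    # transmitted LFM pulse
h  = np.exp(-1j*np.pi*k*mu**2)                   # matched-filter reference h(mu)

x = np.concatenate([np.zeros(200, complex), w, np.zeros(200, complex)])  # toy record

def wrf_at(x, n_t, h):
    """Samples of v^(t)(mu) = x(t - mu) h(mu) for the time sample n_t."""
    n = h.size
    idx = n_t - np.arange(n)                     # sample indices of x(t - mu)
    valid = (idx >= 0) & (idx < x.size)
    xv = np.where(valid, x[np.clip(idx, 0, x.size - 1)], 0)
    return xv * h

n_t = 200 + w.size - 1                           # a time sample with the pulse fully inside
v = wrf_at(x, n_t, h)
y = np.cumsum(v) / fs_adc                        # CWCF: running integral over mu

# The last CWCF value equals the ordinary matched-filter output at that time sample.
mf = np.convolve(x, h) / fs_adc
print(np.allclose(y[-1], mf[n_t]))               # expected: True
```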
Let a^(t)(ρ) = Y[y^(t)(ρ)] denote the weight function of v^(t), and let δ^(t) = V[y^(t)(ρ)] denote the compensation term. The WD-AMF can then be defined as:
z_o(t) = ∫_-∞^∞ a^(t)(μ)v^(t)(μ) d μ + δ^(t)
The WD-AMF method can be considered as a micro-operation conducted on the conventional matched filter. By adaptively adjusting a^(t)(μ) and δ^(t), the output z_o(t) achieves efficient suppression of interference signals while maintaining the original signal energy in an adaptive fashion.
§ CUMULATIVE WAVEFORM COHERENCE FUNCTION
In the subsequent sections, we will deduce and thoroughly examine the analytical expressions for the CWCF of s(t), (t), and x(t).
§.§ y_s^(t)(ρ)
For a pulsed Doppler radar system, assume that the transmitted waveform is an LFM signal and, for convenience, that the propagation delay is negligible. The baseband echo signal can then be expressed as follows:
s(t)=rect(t/T) A_se^j π k t^2+j 2 π f_d_s t
Here, rect(·) denotes the rectangular window function, T is the pulse width, and k is the chirp rate. The impulse response of the matched filter can be expressed as follows:
h_s(t)=rect(t/T) e^-j π k t^2
Combining (<ref>), we can obtain the expression for CWCF of s(t):
[ y_s^(t)(ρ); =∫_-∞^ρ s(t-μ) h_s(μ) d μ; ={[ c^(t)(ρ), when α_s^(t)⩽ρ⩽β_s^(t); c_0^(t), when ρ>β_s^(t); 0, when ρ<α_s^(t) ]. ]
Here, α_s^(t)=max{-T/2,-T/2+t} and β_s^(t)=min{T/2,T/2+t}. c^(t)(ρ) and c_0^(t) are defined as follows:
c^(t)(ρ) = c_1^(t)(ρ) ·Sa[π(k t+f_d_s)(ρ-α_s^(t))]
c_1^(t)(ρ) = A_s(ρ-α_s^(t))·e^jπ k t^2+j2π f_d_s t/e^j π(k t + f_d_s)(ρ+α_s^(t))
c_0^(t) = c^(t)(β_s^(t))
The derivation of (<ref>) and (<ref>) can be found in (<ref>) to (<ref>). By focusing solely on the magnitudes of y_s^(t)(ρ), it becomes evident that for t ≠ t_s = -f_d_s/k and α_s^(t)⩽ρ⩽β_s^(t),
|y_s^(t)(ρ)| = A_s|sin[π k(t-t_s)(ρ-α_s^(t))]/π k(t-t_s)|
which represents a half-wave rectified function. The magnitude is |A_s/π k(t-t_s)|<A_s|β_s^(t)-α_s^(t)|=A_s(T-|t_s|), and it decreases as |t-t_s| increases. Based on the properties of the Sa(·) function, it can be inferred that in this case, |y_s^(t)(ρ)|≈ 0.
In particular, when t= t_s and α_s^(t_s)⩽ρ⩽β_s^(t_s), |y_s^(t_s)(ρ)| takes the form of a first-order linear function:
|y_s^(t_s)(ρ)| = A_s(ρ-α_s^(t_s))
where the minimum value of |y_s^(t_s)(ρ)| is 0, and the maximum value is A_s(T-|t_s|).
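These two regimes can be verified numerically by evaluating the CWCF through direct integration rather than the closed form; the sketch below uses assumed parameter values:

```python
import numpy as np

fs_adc = 15e6; T = 100e-6; B = 6e6; k = B / T
A_s, fd_s = 1.0, 3e3                             # assumed echo amplitude and Doppler
mu = np.arange(-T/2, T/2, 1/fs_adc)

def cwcf_echo(t):
    """|y_s^(t)(rho)| on the grid rho = mu, by direct numerical integration."""
    s = A_s * np.exp(1j*np.pi*k*(t - mu)**2 + 1j*2*np.pi*fd_s*(t - mu))
    s *= (np.abs(t - mu) <= T/2)                 # rect window of the echo
    h = np.exp(-1j*np.pi*k*mu**2)
    return np.abs(np.cumsum(s * h) / fs_adc)

t_s = -fd_s / k                                  # matched instant t_s = -f_ds/k
ramp   = cwcf_echo(t_s)                          # ~ linear ramp up to A_s*(T - |t_s|)
ripple = cwcf_echo(t_s + 5e-6)                   # ~ half-wave-rectified Sa, much smaller
print(ramp.max(), ripple.max())
```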
§.§ y_^(t)(ρ)
For ISRJ, the interference signal can be regarded as:
(t)
=∑_n = -∞^∞_n (t)
=∑_n = -∞^∞rect(t-nT_s/T_) · A_ w(t)e^j2π f_d_t
Here, _n(t) represents the n-th interference slice and T_ corresponds to the slice width. The CWCF of (t) can then be expressed as:
y_^(t)(ρ) = ∑_n = -∞^∞ y__n^(t)(ρ)
where y__n^(t)(ρ) represents the CWCF of the n-th individual interference slice:
[ y__n^(t)(ρ); ={[ b_n^(t)(ρ), when α__n^(t)⩽ρ⩽β__n^(t); b_n_0^(t), when ρ>β__n^(t); 0, when ρ<α__n^(t) ]. ]
Here, α__n^(t)=max{-T/2,-T/2+t,-T_/2+ nT_s+t} and β__n^(t)=min{T/2,T/2+t,T_/2+nT_s+t}. b_n^(t)(ρ) and b_n_0^(t) are defined as follows:
b_n^(t)(ρ) = b_n_1^(t)(ρ) ·Sa[π(k t+f_d_)(ρ-α__n^(t))]
b_n_1^(t)(ρ) = A_(ρ-α__n^(t))·e^jπ k t^2+j2π f_d_ t/e^j π(k t + f_d_)(ρ+α__n^(t))
b_n_0^(t) = b_n^(t)(β__n^(t))
The derivation of (<ref>) and (<ref>) can be found in (<ref>) to (<ref>). It is straightforward to infer that when β__m-1^(t)<ρ⩽β__m^(t), the following equation holds true:
[ y_^(t)(ρ); ={[ ∑_n = -∞^m-1 b_n_0^(t), when β__m-1^(t)⩽ρ < α__m^(t); ∑_n = -∞^m-1 b_n_0^(t) + b_m^(t), when α__m^(t)⩽ρ⩽β__m^(t) ]. ]
Hence, y_^(t)(ρ) can be expressed as a piecewise function. Specifically, when t = t_ = -f_d_/k, we have:
|y_^(t_)(ρ)|
=[ {[ ∑_n=-∞^m-1A_(β__n^(t_)-α__n^(t_)),; when β__m-1^(t_)⩽ρ<α__m^(t_); ∑_n=-∞^m-1A_(β__n^(t_)-α__n^(t_))+A_(ρ-α__m^(t_)),; when α__m^(t_)⩽ρ⩽β__m^(t_) ]. ]
which is a stepped function; the minimum value of |y_^(t_)(ρ)| is 0 and the maximum value is A_·T_/T_s·(T-|t_|).
When t ≠ t_, the expression for |b_n^(t)(ρ)| is given as follows:
|b_n^(t)(ρ)| = A_|sin[π k(t-t_)(ρ-α__n^(t))]/π k(t-t_)|
which has a period of T_|b_n|^(t) = |1/(k(t-t_))|. In particular, when t = t__n = -(nf_s+f_d_)/k, we have:
|b_n^(t__n)(ρ)| = A_|sin[π nf_s(ρ-α__n^(t__n))]/π nf_s|
and T_|b_n|^(t__n) = 1/(nf_s) = T_s/n. If T_⩽T_s/(2n), then |b_n^(t__n)(ρ)| is a monotonically increasing function. Therefore, |y_^(t__n)(ρ)| can also be thought of as a stepped function similar to |y_^(t_)(ρ)|:
|y_^(t__n)(ρ)|
≈[ {[ ∑_n=-∞^m-1A_(β__n^(t__n)-α__n^(t__n)),; when β__m-1^(t__n)⩽ρ<α__m^(t__n); ∑_n=-∞^m-1A_(β__n^(t__n)-α__n^(t__n))+A_(ρ-α__m^(t__n)),; when α__m^(t__n)⩽ρ⩽β__m^(t__n) ]. ]
When t ≠ t__n, the effective accumulation of |b_n^(t)| is lacking, resulting in |y_^(t)(ρ)| being represented as a piecewise envelope with a smaller maximum amplitude. As the difference |t-t_| increases, |b_n_0^(t)| becomes smaller, and |y_^(t)(ρ)| becomes closer to |y_s^(t)(ρ)|, which can be approximated as a stable half-wave rectification function:
[ |y_^(t)(ρ)|; ≈{[ |y_s^(t)(ρ)| when α__n^(t)⩽ρ⩽β__n^(t); |b_n_0^(t)| when ρ>β__n^(t); 0 when ρ<α__n^(t) ]. ]
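The staircase behaviour of the ISRJ CWCF at t = t_ can likewise be checked numerically; the following sketch uses assumed slice parameters and direct integration:

```python
import numpy as np

fs_adc = 15e6; T = 100e-6; B = 6e6; k = B / T
Ts, Tj = 20e-6, 4e-6                 # repeater period and slice width (assumed)
A_J, fd_J = 2.0, 3e3                 # assumed jamming amplitude and Doppler
mu = np.arange(-T/2, T/2, 1/fs_adc)

def cwcf_isrj(t):
    """|y_J^(t)(rho)|: CWCF of the slice train, by direct numerical integration."""
    arg = t - mu
    gate = (np.abs(arg) <= T/2) & (np.mod(arg + T/2, Ts) < Tj)   # rect * p(t)
    jam = A_J * gate * np.exp(1j*np.pi*k*arg**2 + 1j*2*np.pi*fd_J*arg)
    h = np.exp(-1j*np.pi*k*mu**2)
    return np.abs(np.cumsum(jam * h) / fs_adc)

t_J = -fd_J / k
stair = cwcf_isrj(t_J)               # staircase: flat between slices, linear inside them
print(stair.max(), A_J * (Tj / Ts) * T)   # maximum ~ A_J * duty * (T - |t_J|)
```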
§.§ y_x^(t)(ρ)
Building on the above results, we now examine the characteristics of y_x^(t)(ρ), which can be expressed as follows:
y_x^(t)(ρ) = y_s^(t-τ_s)(ρ) + y_^(t-τ_)(ρ)
Furthermore, |y_x^(t)| exhibits the following relation:
||y_s^(t-τ_s)|-|y_^(t-τ_)|| ⩽ |y_x^(t)| ⩽ |y_s^(t-τ_s)|+|y_^(t-τ_)|
Especially, when t = t_s + τ_s, we have:
|y_x^(t_s+τ_s)(ρ)| = |y_s^(t_s)(ρ)+y_^(t_s+τ_s-τ_)(ρ)|
Based on (<ref>), (<ref>), (<ref>), and (<ref>), it can be deduced that |y_x^(t_s+τ_s)(ρ)| can be approximated as the combination of an autoterm, |y_s^(t_s)(ρ)|, and a crossterm, |y_^(t_s+τ_s-τ_)(ρ)|. In cases where t_s + τ_s-τ_ = t__n, the crossterm exhibits characteristics resembling those of a step function, thus significantly impeding the similarity between |y_x^(t_s+τ_s)| and |y_s^(t_s)|. However, when t_s + τ_s-τ_ ≠ t__n, the crossterm can be approximated as a half-wave rectified function, with its magnitude diminishing as |t_s-t_+τ_s-τ_| increases. At this point, due to the periodic characteristics of the half-wave rectification function, we observe that |y_x^(t_s+τ_s)(β_s^(t_s))|≈|y_s^(t_s)(β_s^(t_s))|.
Similarly, when t = t__n+τ_, we have:
|y_x^(t__n+τ_I)(ρ)| = |y_s^(t__n+τ_-τ_s)(ρ)+y_^(t__n)(ρ)|
From (<ref>), (<ref>), (<ref>), and (<ref>), it becomes apparent that |y_x^(t__n+τ_)(ρ)| can be approximated as an auto-term, |y_^(t__n)(ρ)|, augmented by a cross-term, |y_s^(t__n+τ_-τ_s)(ρ)|. Moreover, we have |y_x^(t__n+τ_I)(β__n^(t__n))|≈|y_^(t__n)(β__n^(t__n))|.
For values of t that do not satisfy t = t_s + τ_s or t = t__n+τ_, it can be deduced from (<ref>), (<ref>), and (<ref>) that the magnitude |y_x^(t)(ρ)| corresponds to a complex envelope, devoid of the distinctive amplitude characteristics exhibited by |y_x^(t_s+τ_s)(ρ)| and |y_x^(t__n+τ_)(ρ)|. Furthermore, its maximum magnitude is significantly inferior to them.
The above analysis shows that |y_x^(t)| exhibits a linear characteristic only when t = t_s + τ_s. Consequently, we can establish the objective function as follows:
[ O^(t)(ρ)
={[ |y_x^(t)(β_s^(t_s))|-|y_x^(t)(α_s^(t_s))|/T-|t_s|· (ρ-α_s^(t_s)),; when α_s^(t)⩽ρ⩽β_s^(t); |y_x^(t)(β_s^(t_s))|,; when ρ>β_s^(t); 0,; when ρ<α_s^(t) ]. ]
It is evident that when the cross-term vanishes and t=t_s+τ_s, the objective function O^(t_s+τ_s)(ρ) coincides with |y_x^(t_s+τ_s)(ρ)| itself. Conversely, when t = t__n+τ_, the slope of the linear segment in |y_x^(t__n+τ_)(ρ)| is T_s/T_·(T-|t_s|)/(T-|t__n|) times greater than that of O^(t__n+τ_)(ρ), thereby exhibiting a distinct and discernible characteristic.
In the scenario where the cross-term is non-zero and there is additive Gaussian white noise, the problem of suppressing ISRJ can be formulated as a hypothesis testing problem, aiming to evaluate the equality between |y_x^(t)(ρ)| and O^(t)(ρ).
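A minimal sketch of how the ramp-shaped objective function can be built from the endpoint values of a measured |y_x^(t)(ρ)| and compared against it is given below; the grid indices and toy signals are illustrative assumptions:

```python
import numpy as np

def objective_ramp(y_abs, alpha_idx, beta_idx):
    """Discrete analogue of O^(t)(rho): a straight ramp whose slope is fixed by the
    endpoint values of |y_x^(t)| over [alpha_s^(t_s), beta_s^(t_s)], flat afterwards."""
    O = np.zeros_like(y_abs)
    n = beta_idx - alpha_idx
    slope = (y_abs[beta_idx] - y_abs[alpha_idx]) / n
    O[alpha_idx:beta_idx + 1] = slope * np.arange(n + 1)
    O[beta_idx + 1:] = y_abs[beta_idx]
    return O

# toy check: a CWCF-like ramp matches its own objective; an ISRJ-like staircase does not
ramp  = np.clip(np.arange(1000.0) - 100.0, 0.0, 700.0)
stair = np.repeat(np.arange(10.0), 100) * 70.0
for y in (ramp, stair):
    O = objective_ramp(y, 100, 800)
    print(np.max(np.abs(y - O)))        # ~0 for the ramp, large for the staircase
```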
§ ADAPTIVE FILTERING TERM AND COMPENSATION TERM
In the following, we derive the adaptive weight a^(t)(μ) and the compensation term δ^(t) of the WD-AMF by statistically exploiting the disparities between |y_x^(t)(ρ)| and O^(t)(ρ).
§.§ Noise model
Assume additive Gaussian white noise ξ(t)∼𝒩(0,σ^2) in the time domain. Substituting ξ(t) into (<ref>) and (<ref>) yields:
wgn^(t)(μ) = ξ(t-μ) h(μ)
bn^(t)(ρ) = ∫_-∞^ρ wgn^(t)(μ) d μ
Clearly, wgn^(t)(μ) remains additive Gaussian white noise, with wgn^(t)(μ)∼𝒩(0,σ^2). In contrast, bn^(t)(ρ) is a standard Brownian noise with bn^(t)(μ)∼𝒩[0,(μ+T/2)σ^2], i.e., a Gauss-Markov random process.
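The linear variance growth of bn^(t)(ρ) can be confirmed with a short Monte Carlo sketch in Python (unit grid spacing and unit-modulus h(μ) assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n = 1.0, 1500                               # noise std and waveform-domain length

# wgn^(t)(mu): white Gaussian noise stays white after the pointwise product with h(mu);
# bn^(t)(rho): its running integral is Brownian, with variance growing linearly in rho.
trials = rng.normal(0.0, sigma, (2000, n))         # 2000 independent noise realizations
bn = np.cumsum(trials, axis=1)                     # running integral (unit grid spacing)
print(np.allclose(bn.var(axis=0), sigma**2 * np.arange(1, n + 1), rtol=0.15))  # True
```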
§.§ Filtering model
Based on the analysis in the preceding sections, the anti-ISRJ problem can be mathematically formulated as a hypothesis testing problem to examine the equality between |y_x^(t)(ρ)| and O^(t)(ρ). However, the noise model of |y_x^(t)(ρ)| is more intricate than that of y_x^(t)(ρ). Moreover, the complex structure of |y_x^(t)(ρ)| implies that different analytical solutions exist at different times t, making it unfeasible to use a single criterion for mathematical modeling.
To overcome this issue, an equivalent mathematical model is proposed: a probabilistic hypothesis test of whether |y_x^(t)(ρ)| contains interference elements. Since ∂ y_x^(t)(ρ)/∂ρ = v^(t)(ρ) = v^(t)(μ), any alteration in the slope of |y_x^(t)(ρ)| is directly governed by v^(t)(ρ). Consequently, this model can in turn be cast as a hypothesis test of whether |v^(t)(ρ)| and E^(t)(ρ) are identical, where E^(t)(ρ) is given by:
E^(t)(ρ) = ∂ O^(t)(ρ)/∂ρ = E^(t)(μ)
Let U_s denote the set of μ at which no jamming is present, U_ the set at which only a jamming slice is present, and U_s+ the set at which both the echo signal and a jamming slice are present. Then the following relationships hold:
|v^(t_s+τ_s)(μ)|/E^(t_s+τ_s)(μ)
={[ 1=𝒜^(t_s+τ_s), when μ∈U_s; A_/A_s = 𝒞^(t_s+τ_s), when μ∈U_; ℳ^(t_s+τ_s)(μ), when μ∈U_s+; 0, else ].
where ℳ^(t)(μ)∈[|𝒜^(t)-𝒞^(t)|,𝒜^(t)+𝒞^(t)], and
|v^(t__n+τ_)(μ)|/E^(t__n+τ_)(μ)
=
{[ T_s/T_·T-|t_s|/T-|t__n|·A_s/A_=𝒞^(t__n+τ_), when μ∈U_s; T_s/T_·T-|t_s|/T-|t__n|=𝒜^(t__n+τ_), when μ∈U_; ℳ^(t__n+τ_)(μ), when μ∈U_s+; 0, else ].
(<ref>) and (<ref>) imply that if |v^(t)(μ)| = E^(t), then t = t_s + τ_s and μ∈U_s. Additionally, for self-defensive forwarding interference, we observe 𝒜^(t__n+τ_)>2, meaning that when μ∈U_, we have |v^(t__n+τ_)(μ)|>2E^(t__n+τ_). As previously stated, our focus centers on |v^(t)(μ)| for μ∈U_, U_s+. Specifically, our objective is to preserve v^(t_s+τ_s)(μ) to the greatest extent while minimizing v^(t__n+τ_)(μ). It is evident that when A_≫ A_s, we have |𝒜^(t)-𝒞^(t)|>2E^(t), leading to |v^(t)(μ)|>2E^(t) for μ∈U_, U_s+. However, in scenarios where A_ is relatively small, it is possible to encounter |𝒜^(t)-𝒞^(t)|<2E^(t)<𝒜^(t)+𝒞^(t), and thus the condition |v^(t)(μ)|>2E^(t) for μ∈U_, U_s+ is not universally valid. Considering that ℳ^(t)(μ) is a continuous function within the range [|𝒜^(t)-𝒞^(t)|,𝒜^(t)+𝒞^(t)], and that the intervals U_ and U_s+ are relatively short, we can extend μ to obtain |v^(t)(μ±γ· dμ)|>2E^(t) for μ∈U_, U_s+, where γ denotes the scaling factor.
Therefore, the anti-ISRJ problem can be reformulated as a probabilistic hypothesis testing problem, determining whether |v^(t)(μ±γ· dμ)|>2E^(t)(μ). Consequently, the anti-ISRJ problem can be transformed into an unbiased estimation problem for v^(t)(μ) and y_x^(t)(ρ). We denote their estimates as v̂^(t)(μ) and ŷ_x^(t)(ρ), respectively.
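The decision rule |v̂^(t)(μ±γ·dμ)|>2E^(t)(μ) amounts to a threshold test followed by a dilation of ±γ samples. A sketch of this step is given below, with a scalar threshold and toy levels standing in for E^(t)(μ) and the estimated WRF magnitude:

```python
import numpy as np

def interference_mask(v_hat_abs, E, gamma_samples):
    """Hypothesis test in the waveform domain: mark samples with |v_hat| > 2E and
    extend each detection by +-gamma samples (the mu +- gamma*dmu of the text)."""
    raw = v_hat_abs > 2.0 * E
    mask = raw.copy()
    for s in range(1, gamma_samples + 1):
        mask[s:]  |= raw[:-s]
        mask[:-s] |= raw[s:]
    return mask

# toy levels: a flat echo contribution of level E with two much stronger jamming slices
E = 1.0
v_abs = np.full(1000, E)
v_abs[200:300] = 6.0 * E
v_abs[600:700] = 6.0 * E
phi = interference_mask(v_abs, E, gamma_samples=5)
a = (~phi).astype(float)                 # adaptive weight: 0 on the jammed set, 1 elsewhere
print(int(phi.sum()), int(a.sum()))      # 220 jammed samples (after dilation), 780 kept
```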
§.§ State estimation model
In scenarios where t=t_s+τ_s or t=t_+τ_ and the impact of the cross-term can be deemed insignificant, y_x^(t)(μ) can be approximated as locally linear. This property enables us to model ŷ_x^(t)(μ) with a linear function model complemented by two impulse function models.
In the subsequent steps, our objective is to establish models for ŷ_x^(t)(μ) and v̂^(t)(μ) using the interacting multiple model Kalman filter (IMM-KF) algorithm<cit.>. Within the IMM-KF framework, the combined state of ŷ_x^(t)(μ) is defined as a weighted combination of three model states, given by the expression:
M̂^(t)(μ|μ) = u_1M̂^(t)_1(μ|μ) + u_2M̂^(t)_2(μ|μ) + u_3M̂^(t)_3(μ|μ)
Here, the probabilities u_1, u_2, and u_3 are determined for each model based on the residuals and residual covariance obtained through the utilization of the Kalman filter. M̂^(t)_1(μ|μ), M̂^(t)_2(μ|μ), and M̂^(t)_3(μ|μ) represent the estimated states of their respective models, and their one-step prediction state equations are expressed as follows:
M̂^(t)_1(μ+dμ|μ)
= F_1M̂^(t)(μ|μ)
=[[ 1 dμ 0 0; 0 1 0 0; 0 0 0 0; 0 0 0 1 ]]
[[ ŷ_x^(t)(μ); v̂^(t)(μ); δ̂_-^(t)(μ); δ̂_+^(t)(μ) ]]
M̂^(t)_2(μ+dμ|μ)
=F_2 M̂^(t)(μ|μ)
=[[ 1 dμ dμ 0; 0 1 dμ 0; 0 -1 0 0; 0 0 0 1 ]]
[[ ŷ_x^(t)(μ); v̂^(t)(μ); δ̂_-^(t)(μ); δ̂_+^(t)(μ) ]]
M̂^(t)_3(μ+dμ|μ)
= F_3 M̂^(t)(μ|μ)
=[[ 1 dμ 0 dμ; 0 1 0 dμ; 0 0 0 0; 0 0 0 1 ]]
[[ ŷ_x^(t)(μ); v̂^(t)(μ); δ̂_-^(t)(μ); δ̂_+^(t)(μ) ]]
Here, F_i, with i=1,2,3, denotes the matrices governing state transitions. The entities δ̂_-^(t)(μ) and δ̂_+^(t)(μ) correspond to distinct impulse functions exerting influence over both the direction and magnitude of v̂^(t)(μ).
For the model M̂_1^(t), v̂^(t)(μ+dμ|μ) is a constant. This model describes the linear integration of ŷ_x^(t)(μ) with a fixed v̂^(t)(μ).
For the model M̂_2^(t), the value of v̂^(t)(μ+dμ|μ) undergoes a linear variation induced by δ̂_-^(t)(μ). It is assumed that the negative impulse function δ̂_-^(t)(μ)dμ is incorporated into ŷ_x^(t)(μ+dμ|μ). As a result, an impulse function δ̂_-^(t)(μ+dμ|μ) = -v̂^(t)(μ) emerges. This model effectively captures the sudden transition process in v̂^(t) and ŷ^(t)(μ) when the signal dissipates.
Regarding the model M̂_3^(t), the value of v̂^(t)(μ+dμ|μ) experiences a linear variation caused by δ̂_+^(t)(μ). It is postulated that the positive impulse function δ̂_+^(t)(μ)dμ is incorporated into ŷ_x^(t)(μ+dμ|μ). Consequently, the impulse function remains constant, denoted as δ̂_+^(t)(μ+dμ|μ) = δ̂_+^(t)(μ). This model effectively captures the abrupt transition process in v̂^(t) and ŷ^(t)(μ) when the signal emerges.
Since the constructed model is a Markov model, we are unable to derive variable δ̂_+^(t)(μ) from the state equation. However, based on prior analysis, we may infer that δ̂_+^(t)(μ) is equal to K· E^(t),K>2. Given that the estimation of ŷ_x^(t)(μ) and v̂^(t)(μ) in subsequent processing involves the weighted sum of multiple modes, we may set δ̂_+^(t)(μ) to a larger value, such as δ̂_+^(t)(μ) = 𝒵· E^(t), 𝒵=20.
Indeed, M̂_1^(t) constitutes a substantial proportion of M̂^(t), considering that only a negligible fraction of time corresponds to high weights of M̂_2^(t) and M̂_3^(t). Hence, the probability transition matrix can be represented as follows:
P^(t) =
[[ 1-2p_0 p_0 p_0; 1 0 0; 1 0 0 ]]
where p_0 represents the probability of a sudden change in |v^(t)|; the zero diagonal elements of the matrix are not strictly zero but are set to a very small value in practice to ensure matrix invertibility.
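A compact sketch of a textbook IMM-KF recursion using the model set and transition matrix described above is given below; the scalar measurement model (a noisy CWCF sample), the process and measurement noise covariances, and the small nonzero entries replacing the zero diagonal of the transition matrix are assumptions, and dμ is normalised to one sample:

```python
import numpy as np

def imm_kf(z, F_list, P_trans, H, Q, R, x0, P0):
    """Minimal interacting multiple model Kalman filter for a scalar measurement.

    z: measurements (CWCF samples along mu); F_list: model matrices; P_trans: model
    transition matrix. Returns combined state estimates and model probabilities."""
    r, n = len(F_list), x0.size
    u = np.full(r, 1.0 / r)                           # model probabilities u_i
    xs = [x0.astype(float) for _ in range(r)]
    Ps = [P0.astype(float) for _ in range(r)]
    X = np.zeros((z.size, n)); U = np.zeros((z.size, r))
    for kk, zk in enumerate(z):
        c = P_trans.T @ u                             # predicted model probabilities
        x_mix, P_mix = [], []
        for j in range(r):                            # --- mixing step ---
            w = P_trans[:, j] * u / c[j]
            xm = sum(w[i] * xs[i] for i in range(r))
            Pm = sum(w[i] * (Ps[i] + np.outer(xs[i] - xm, xs[i] - xm)) for i in range(r))
            x_mix.append(xm); P_mix.append(Pm)
        lik = np.zeros(r)
        for j, F in enumerate(F_list):                # --- per-model Kalman filter ---
            xp = F @ x_mix[j]
            Pp = F @ P_mix[j] @ F.T + Q
            S = H @ Pp @ H + R                        # innovation variance (scalar)
            K = Pp @ H / S                            # Kalman gain
            e = zk - H @ xp                           # innovation
            xs[j] = xp + K * e
            Ps[j] = (np.eye(n) - np.outer(K, H)) @ Pp
            lik[j] = np.exp(-0.5 * e**2 / S) / np.sqrt(2 * np.pi * S)
        u = lik * c; u /= u.sum()                     # --- model probability update ---
        X[kk] = sum(u[j] * xs[j] for j in range(r))   # combined (weighted) estimate
        U[kk] = u
    return X, U

# The three models with d_mu normalised to one sample; Q, R and the small nonzero
# transition-matrix entries are assumed values.
F1 = np.array([[1., 1, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1]])
F2 = np.array([[1., 1, 1, 0], [0, 1, 1, 0], [0, -1, 0, 0], [0, 0, 0, 1]])
F3 = np.array([[1., 1, 0, 1], [0, 1, 0, 1], [0, 0, 0, 0], [0, 0, 0, 1]])
p0 = 0.05
P_trans = np.array([[1 - 2*p0, p0, p0], [0.98, 0.01, 0.01], [0.98, 0.01, 0.01]])
H = np.array([1., 0, 0, 0])
Q = np.diag([1e-4, 1e-4, 1e-6, 1e-6]); R = 1e-2

rng = np.random.default_rng(1)
truth = np.concatenate([np.arange(100.0), np.full(100, 99.0)])   # CWCF-like ramp, then flat
z = truth + rng.normal(0, 0.1, truth.size)
X, U = imm_kf(z, [F1, F2, F3], P_trans, H, Q, R, np.zeros(4), np.eye(4))
print(X[-1, 0], truth[-1])      # the combined y-estimate should settle near the true level
```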
When t equals neither t_s+τ_s nor t_+τ_, v̂^(t)(μ) no longer remains constant. Although the IMM weighted state output still offers an approximation of v̂^(t)(μ), the accuracy of the magnitude estimate gradually diminishes as δ̂_+^(t)(μ) decreases. As a result, the state estimates of ŷ_x^(t)(μ) and v̂^(t)(μ) for μ∈U_,U_s+ become biased.
§.§ a^(t)(μ) and δ^(t)
Given that ŷ_x^(t)(μ) is a biased estimation, we continue to use E^(t) as the decision criterion. Thus, we define the adaptive weight function a^(t)(μ):
[ Y[y_x^(t)(μ)] = a^(t)(μ); ={[ 0, when μ∈{μ±γ· dμ||v̂^(t)(μ)|>2 E^(t)(μ)}; 1, when μ∉{μ±γ· dμ||v̂^(t)(μ)|>2 E^(t)(μ)} ]. ]
By utilizing the adaptive threshold E^(t)(μ) in (<ref>) to detect interference events in the waveform domain, the corresponding echo signals are excluded. However, this inevitably leads to the loss of a segment of the echo signal when only integrating the signal within the region μ∈U_s. To address this issue, a compensation term δ^(t) is introduced to correct z_o(t).
To define δ^(t), we first consider Φ, which consists of the intervals μ±γ· dμ where |v̂^(t)(μ)|>2 E^(t)(μ). The length of these intervals is denoted as L_Φ. The complement of Φ, denoted as Ω, represents intervals with a length of L_Ω. We assume that Ω consists of G disjoint subsets, denoted as Ω_g where g=0,1,2,⋯,G.
Next, we define Ψ as a random continuous subset of Ω with an interval length of min{L_Φ,L_Ω}, denoted as Ψ⊆Ω. Ψ is assumed to consist of Q disjoint subsets, denoted as Ψ_q, where q=0,1,2,⋯,Q.
The expression for δ^(t) is then given by:
δ^(t) = V[y_x^(t)(μ)]= ∑_q=0^Q∫_Ψ_qv̂^(t)(μ) dμ
Then we can redefine z_o(t) as:
z_o(t) =∫_-∞^∞ a^(t)(μ) v^(t)(μ) d μ+δ^(t)
=∑_g=0^G b_g_0^(t)+∑_q=0^Qb̂_q_0^(t)
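Putting the pieces together, the sketch below computes one WD-AMF output sample from toy waveform-domain data: it forms Φ and Ω with the thresholded-and-dilated decision, sets a^(t)(μ), re-integrates a subset Ψ of Ω of length L_Φ as the compensation term δ^(t) (here simply the first L_Φ samples of Ω rather than a random continuous subset), and compares the result with the plain integral; all levels are assumptions:

```python
import numpy as np

def wd_amf_output(v, v_hat_abs, E, gamma_samples, d_mu):
    """One output sample z_o(t) from waveform-domain samples at a fixed t.

    v         : complex WRF samples v^(t)(mu)
    v_hat_abs : |v_hat^(t)(mu)| from the state estimator (here simply |v|)
    E         : objective-function slope E^(t) (scalar stand-in)
    Implements a^(t)(mu) (0 on Phi, 1 on Omega) and the compensation term delta^(t)."""
    raw = v_hat_abs > 2.0 * E
    phi = raw.copy()
    for s in range(1, gamma_samples + 1):          # Phi: detections extended by +-gamma
        phi[s:]  |= raw[:-s]
        phi[:-s] |= raw[s:]
    omega = ~phi
    a = omega.astype(float)                        # adaptive weight a^(t)(mu)
    L_phi, L_omega = phi.sum(), omega.sum()
    psi_idx = np.flatnonzero(omega)[: min(L_phi, L_omega)]   # Psi: subset of Omega
    delta = np.sum(v[psi_idx]) * d_mu              # compensation term delta^(t)
    return np.sum(a * v) * d_mu + delta

# toy illustration: echo-only samples of level A_s plus two jamming bursts of level A_J
fs_adc = 15e6; d_mu = 1 / fs_adc
A_s, A_J, E = 1.0, 6.0, 1.0
v = np.full(1500, A_s + 0j)
v[300:450] = A_J; v[900:1050] = A_J
z_plain = np.sum(v) * d_mu                         # ordinary integration (jammed)
z_amf = wd_amf_output(v, np.abs(v), E, gamma_samples=5, d_mu=d_mu)
print(abs(z_plain), abs(z_amf), A_s * 1500 * d_mu) # z_amf recovers the echo-only level
```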
If x(t) contains the noise term ξ(t), we further apply noise compensation to z_o(t), ensuring that z_o(t) is integrated over the entire waveform domain at any given time t; the numerical result of (<ref>) can then be represented as:
z_o(t)
=∫_-∞^∞ a^(t)(μ) v^(t)(μ) d μ+δ^(t) + ∫_L_Φ wgn_c^(t)(μ)
={[ (T-t_s)A_se^jπ f_d t_s+bn^(t)(L_Ω)+bn_c^(t)(L_Φ) ,
when t = t_s + τ_s
bn^(t)(L_Ω)+bn_c^(t)(L_Φ),
else ].
Here, wgn_c^(t) and bn_c^(t) are Gaussian white noise and Brownian noise, respectively, which have the same distribution as wgn^(t) and bn^(t), but are statistically independent.
§ NUMERICAL EXAMPLES
In this section, numerical illustrations are employed to validate the efficacy of the proposed approach.
§.§ Construction of WD-AMF
Let us assume that the transmitter employs a baseband LFM waveform with a pulse width of 100 μs and a bandwidth of 6 MHz. The receiver operates at a sampling frequency of 15 MHz. The SNR is set to 0 dB. Furthermore, the interrupted-sampling frequency is specified as 50 kHz, with a duty ratio of 0.2. It is important to note that the ISRJ experiences a time delay, denoted as τ_ - τ_s, of 40 μs relative to the echo signal. The SJR is set to -15 dB. Additionally, we set γ· dμ = 0.3 μs and 𝒵 = 20.
Taking the moment when the echo signal emerges as time zero, we designate t_1=0, t_2=20 μs, and t_3=40 μs. Fig. <ref> illustrates the simulation results of the WD-AMF method at these three time points.
Fig. <ref>(a)-(c) depict the simulated intermediate results of WD-AMF applied to x(t) at time t_1. Fig. <ref>(a) and Fig. <ref>(b) illustrate the estimated values of the waveform-domain states |v̂^(t)(μ)| and |ŷ^(t)(μ)|, respectively, where the red markers denote the measured values and the green markers denote the estimated values. The black solid lines in Fig. <ref>(a) and Fig. <ref>(b) represent the adaptive threshold E^(t) and the objective function O^(t), respectively. The simulation results show that the IMM-KF algorithm provides a good estimate of the part μ∈U_s, but a large deviation occurs in the estimation of the parts μ∈U_, U_s+. This is because when A_>δ̂_+^(t), the IMM cannot describe this type of nonlinearity well. However, this does not affect the subsequent processing results. As analyzed earlier, even if some interference elements leak into the adaptive decision interval, their continuous integration value in the waveform domain is extremely small, so the impact on the final filtering result can be neglected. Fig. <ref>(c) shows the labeling results of the interference component and non-interference component in |v^(t)(μ)| obtained through the adaptive threshold E^(t), indicating that the two are well distinguished.
Fig. <ref>(d)-(f) display the simulated intermediate results of WD-AMF applied to x(t) at time t_2. The obtained simulation outcomes confirm that the IMM-KF algorithm continues to provide a reliable estimation of the linear component of |v^(t)(μ)| and |y^(t)(μ)|. Due to the minute magnitude of E^(t), only an insignificantly small portion of the non-interference component enters the adaptive decision interval, as depicted in Fig. <ref>(f).
Fig. <ref>(g)-(i) showcase the simulated intermediate results of WD-AMF applied to x(t) at time t_3. The obtained simulation results demonstrate that the IMM-KF algorithm yields commendable estimations over the entire waveform domain. This favorable outcome arises due to the capability of the IMM to effectively capture this type of nonlinearity when A_s≪δ̂_+^(t) within the algorithm. Notably, Fig. <ref>(i) distinctly exhibits the differentiation between the interference component and non-interference component in |v^(t)(μ)|, thereby enabling the exclusive integration of |z_o(t_1)| over the non-interference component.
Fig. <ref> presents the normalized amplitude output results obtained from WD-AMF. The black solid line corresponds to the output of the matched filter, denoted as |x_o(t)|, while the red line represents the output of WD-AMF, denoted as |z_o(t)|. Analysis of Fig. <ref> reveals that |z_o(t)| effectively suppresses interference signals without compromising the amplification of the echo signal. Additionally, in comparison to the output results of matched filter, the output results of WD-AMF demonstrate a reduced peak level for the first sidelobe. Moreover, the SJR achieved after applying WD-AMF reaches a value of 24 dB, accompanied by a significant 32 dB suppression of interference signal gain by the matched filter.
§.§ Evaluation of ISRJ Resistance
This subsection aims to analyze the system's performance in the presence of a moving point target and multiple sources of jamming. The simulation parameters of the jamming scene can be found in Tab. <ref>. It is assumed that the two jamming sources share the same jamming characteristics. To facilitate a comparative analysis, the anti-ISRJ algorithms described in <cit.> and <cit.> have been selected. The LFM waveform parameters employed by the three algorithms are as follows: the bandwidth (B) is set to 6 MHz, the pulse width (T) is set to 100 μs, the interrupted-sampling frequency (f_s) is set to 100 kHz, and the duty ratio (η) is set to 0.25. Furthermore, for the algorithm introduced in <cit.>, the SNR loss is assumed to be 1 dB. It is important to note that both approaches presented in <cit.> and <cit.> necessitate prior knowledge of the interference signal's parameters. Hence, it is presumed that the parameters of the interfering signals in <cit.> and <cit.> are already known.
Fig. <ref> illustrates the output results of different algorithms, with Fig. <ref>(a) specifically displaying the output results of the MF algorithm. In the simulation scenario described in this paper, the approach introduced in <cit.> yields a substantial number of spurious targets, greatly impairing the detection of weak targets. In contrast, the method presented in <cit.> effectively mitigates false targets, exhibiting an approximate difference of 16 dB between the peak of the interference output and the peak of the target output. Notably, our proposed method attains the lowest sidelobe level, showcasing an approximate difference of 23 dB between the peak of the interference output and the peak of the target output.
We have conducted additional verification of the output performance of our proposed method under various SNRs and SJRs within the scenario presented in Tab. <ref>. To mitigate the influence of noise randomness, we performed 200 Monte Carlo simulations for each SNR and SJR parameter. Let Λ_s, Λ_, and Λ_n respectively denote the average peak levels of the target, interference, and noise. Fig. <ref> illustrates the average target peak value across multiple simulations. It is important to note that, in Fig. <ref>(a), the SNR has been fixed at 0 dB, while in Fig. <ref>(b), the SJR has been fixed at -20 dB.
From Fig. <ref>(a), it can be inferred that when the SNR is sufficiently high, the interference peak and the noise peak are comparable, while the target peak remains relatively constant at 0 dB. In such cases, the numerical results of the WD-AMF algorithm can be approximated to those of the MF algorithm.
When the SNR is low, we can analyze the changes in the target peak and the interference peak separately. Firstly, let's consider the target peak. The target peak initially decreases and then increases with decreasing SNR. At an SNR of -14 dB, the target peak becomes comparable to the interference peak, which negatively impacts target detection. As the SNR further decreases to -18 dB, the target peak approaches the noise peak. This behavior can be attributed to the reduction in the integration space Ω as the SNR decreases. When the interval length of Ω becomes smaller than that of Φ, the noise is compensated by δ^(t), resulting in a gradual decrease in the target power Λ_s. As the SNR continues to decrease, the interval length of Ω tends to zero, and the integrated signal can be considered as noise, leading to Λ_s approaching Λ_n.
Now, let's analyze the interference peak. The interference peak initially increases and then decreases with decreasing SNR. This is because as the SNR decreases, the peak power of the noise gradually approaches that of the interference, making it challenging for the impulse model in the IMM to distinguish between noise and impulse function-induced breakpoints. Consequently, the state estimation performance of the model deteriorates, leading to missed alarms and an increase in Λ_. Similarly, as the noise energy contained in Λ_ increases with further SNR decrease, Λ_ gradually approaches Λ_n.
Turning to Fig. <ref>(b), it is observed that as the SJR increases, the target peak remains nearly constant at 0 dB. At high SJRs, the interference peak approaches the noise peak and remains relatively constant. Conversely, at low SJRs, the interference peak initially increases due to the degradation in the model's state estimation performance, as explained earlier.
Based on the aforementioned analysis, it can be concluded that in the simulated scenario, when the SNR is greater than -8 dB and the SJR is less than 0 dB, the numerical results of the WD-AMF algorithm closely align with those of the MF without interference. Furthermore, the sidelobe peaks are lower than -18 dB, thereby meeting the detection requirements of the scenario.
§.§ Parameter sensitivity analysis
In order to further assess the effectiveness of the proposed method, this section analyzes its sensitivity to two crucial parameters of the ISRJ: the sampling repetition period T_s and the sampling duty cycle η. Experiments are conducted by varying T_s and η using the simulation scenario parameters specified in Section <ref>. Fig. <ref> illustrates the relationship between the average interference peak levels and the varying observational variables.
Fig. <ref>(a) depicts the output peak of the interference signal for different intermittent sampling periods while maintaining a fixed duty cycle of η=20%. It is evident that the peak level remains relatively stable around -25 dB. This observation indicates that the performance of the proposed WD-AMF is minimally affected by the intermittent sampling period of the ISRJ.
Fig. <ref>(b) illustrates the output peak of the interference signal for various duty cycle conditions while keeping the ISRJ sampling repetition period fixed at T_s = 20 μs. Upon observation, it can be inferred that when the duty cycle η is less than 50%, the interference peak remains relatively constant, stabilizing at the level of the noise peak. This behavior indicates effective ISRJ suppression in such cases and shows that the WD-AMF is not significantly influenced by duty cycles below 50%. However, when the duty cycle is set to 50%, the interference signal experiences a rapid increase, reaching -3 dB. This phenomenon signifies the failure of the WD-AMF at a duty cycle of 50%. The cause of this failure lies in the fact that at η=50%, precisely A_ = 2E^(t), rendering (<ref>) ineffective and resulting in a swift escalation of the interference peak. Consequently, in practical applications, it is advisable to appropriately adjust the adaptive threshold in (<ref>) to meet the requirements for ISRJ suppression under different duty cycle conditions.
§ CONCLUSION
This paper presents the waveform-domain adaptive matched filtering (WD-AMF) method as a solution for mitigating interrupted-sampling repeater jamming (ISRJ), aiming to address the limitations of previous matched-filtering-based methods that require accurate prior modeling of the ISRJ. By examining the dissimilarities between ISRJ and the radar-transmitted waveform through the cumulative waveform coherence function (CWCF), we identify the primary disparity as the slope difference of the CWCF. We formulate the anti-ISRJ problem by incorporating a CWCF-based objective function and employ the IMM-KF algorithm for state estimation of the CWCF. Subsequently, the adaptive weight function (AWF) in the waveform domain is derived by hypothesis testing, utilizing the objective function and the estimated state values based on conditional probabilities. The AWF is then used to obtain the adaptive filtering term and the compensation term.
Multiple simulations are conducted to demonstrate the effectiveness of the proposed method, showcasing its superior anti-ISRJ performance and adaptability compared with other matched-filtering-based methods, while requiring no explicit modeling of the ISRJ. Parametric sensitivity simulations reveal that WD-AMF is insensitive to the ISRJ repetition period and to duty ratios below 50%.
Nonetheless, the proposed method does possess certain limitations. It assumes the presence of constant modulus constraints or minimal amplitude variations for both the echo signal and the interference signal, which may prove challenging to achieve in the context of wideband signals or a scintillating target. Furthermore, the high Doppler tolerance of LFM waveforms introduces biased estimations during the state estimation phase, hindering accurate estimations of the target function. Hence, it is worthwhile to investigate and discuss potential waveforms characterized by well-defined CWCF.
§ APPENDIX A: CWCF OF S(T)
When -T⩽ t<0, and -T/2⩽ρ< T/2+t,
y_s^(t)(ρ)
= ∫_-T/2^ρ A_se^jπ k (t-μ)^2 + j2π f_d_s (t-μ)× e^-jπ k μ^2d μ
= c_1^(t)(ρ) ·Sa[π(k t+f_d_s)(ρ-α_s^(t))]
= c^(t)(ρ)
c_1^(t)(ρ) = A_s(ρ-α_s^(t)) ·e^jπ k t^2+j2π f_d_s t/e^j π(k t + f_d_s)(ρ+α_s^(t))
When -T⩽ t⩽0, and T/2+t⩽ρ⩽T/2,
y_s^(t)(ρ)
= ∫_-T/2^T/2+t A_se^jπ k (t-μ)^2 + j2π f_d_s (t-μ)× e^-jπ k μ^2d μ
= c^(t)(β_s^(t))
= c_0^(t)
When 0<t⩽ T, and -T/2⩽ρ< -T/2+t,
y_s^(t)(ρ) = 0
When 0⩽ t⩽ T, and -T/2+t⩽ρ⩽T/2,
y_s^(t)(ρ)
= ∫_-T/2+t^ρ A_se^jπ k (t-μ)^2 + j2π f_d_s (t-μ)× e^-jπ k μ^2d μ
= c_1^(t)(ρ) ·Sa[π(k t+f_d_s)(ρ-α_s^(t))]
= c^(t)(ρ)
Combining (<ref>)-(<ref>), y_s^(t)(ρ) can be expressed as:
[ y_s^(t)(ρ); ={[ c^(t)(ρ) when α_s^(t)⩽ρ⩽β_s^(t); c_0^(t) when ρ>β_s^(t); 0 when ρ<α_s^(t) ]. ]
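The closed-form expression above can be cross-checked numerically; the sketch below compares c^(t)(ρ) with a direct numerical evaluation of the defining integral for assumed parameter values (the small residual is the discretization error of the Riemann sum):

```python
import numpy as np

fs_adc = 150e6                    # dense grid for accurate numerical integration
T = 100e-6; B = 6e6; k = B / T
A_s, fd_s = 1.0, 3e3              # assumed amplitude and Doppler
t = -12e-6                        # any fixed time with -T <= t <= 0 (illustrative)
alpha = max(-T/2, -T/2 + t)
rho = np.arange(alpha, min(T/2, T/2 + t), 1/fs_adc)

# closed form c^(t)(rho); note np.sinc(x) = sin(pi x)/(pi x) = Sa(pi x)
gam = k*t + fd_s
c1 = A_s*(rho - alpha) * np.exp(1j*np.pi*k*t**2 + 1j*2*np.pi*fd_s*t) \
     / np.exp(1j*np.pi*gam*(rho + alpha))
c = c1 * np.sinc(gam*(rho - alpha))

# direct numerical integration of s(t - mu) h_s(mu) over mu in [alpha, rho]
mu = rho
integrand = A_s*np.exp(1j*np.pi*k*(t - mu)**2 + 1j*2*np.pi*fd_s*(t - mu)) \
            * np.exp(-1j*np.pi*k*mu**2)
y_num = np.cumsum(integrand) / fs_adc

print(np.max(np.abs(c - y_num)), np.max(np.abs(c)))   # residual is a small fraction of max|c|
```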
§ APPENDIX B: CWCF OF (T)
When t<-T+T_/2-nT_s,
y__n^(t)(ρ) = 0
When -T+T_/2-nT_s⩽ t⩽-T-T_/2-nT_s, and -T/2⩽ρ⩽T_/2+nT_s+t
y__n^(t)(ρ)
= ∫_-T/2^ρ A_ e^jπ k (t-μ)^2 + j2π f_d_ (t-μ)× e^-jπ k μ^2 d μ
= b_n_1^(t)(ρ) ·Sa[π(k t+f_d)(ρ-α_I_n^(t))]
= b_n^(t)(ρ)
b_n_1^(t)(ρ) = A_(ρ-α__n^(t)) ·e^jπ k t^2+j2π f_d_ t/e^j π(k t + f_d_)(ρ+α__n^(t))
When -T+T_/2-nT_s⩽ t⩽-T-T_/2-nT_s, and ρ>T_/2+nT_s+t
y__n^(t)(ρ)
= ∫_-T/2^T_/2+nT_s+t A_ e^jπ k (t-μ)^2 + j2π f_d_ (t-μ)× e^-jπ k μ^2d μ
= b_n^(t)(β__n^(t))
= b_n_0^(t)
When t>-T-T_/2-nT_s, and -T/2⩽ρ<-T_/2+nT_s+t
y__n^(t)(ρ) = 0
When t>-T-T_/2-nT_s, and -T_/2+nT_s+t⩽ρ⩽T_/2+nT_s+t
y__n^(t)(ρ)
= ∫_-T_/2+nT_s+t^ρ A_ e^jπ k (t-μ)^2 + j2π f_d_ (t-μ)× e^-jπ k μ^2d μ
= b_n_1^(t)(ρ) ·Sa[π(k t+f_d_)(ρ-α__n^(t))]
= b_n^(t)(ρ)
When t>-T-T_/2-nT_s, and ρ>T_/2+nT_s+t
y__n^(t)(ρ)
= ∫_-T_/2+nT_s+t^T_/2+nT_s+t A_ e^jπ k (t-μ)^2 + j2π f_d_ (t-μ)× e^-jπ k μ^2d μ
= b_n_0^(t)
When t>T+T_/2-nT_s,
y__n^(t)(ρ) = 0
When T-T_/2-nT_s⩽ t⩽T+T_j/2-nT_s, and ρ<-T_/2+nT_s+t,
y__n^(t)(ρ) = 0
When T-T_/2-nT_s⩽ t⩽T+T_/2-nT_s, and -T_/2+nT_s+t⩽ρ⩽T/2,
y__n^(t)(ρ)
= ∫_-T_/2+nT_s+t^ρ A_ e^jπ k (t-μ)^2 + j2π f_d_ (t-μ)× e^-jπ k μ^2d μ
= b_n_1^(t)(ρ) ·Sa[π(k t+f_d_)(ρ-α__n^(t))]
= b_n^(t)(ρ)
When t<T-T_/2-nT_s, and -T/2⩽ρ<-T_/2+nT_s+t,
y__n^(t)(ρ) = 0
When t<T-T_/2-nT_s, and -T_/2+nT_s+t⩽ρ⩽T_/2+nT_s+t,
y__n^(t)(ρ)
= ∫_-T_/2+nT_s+t^ρ A_ e^jπ k (t-μ)^2 + j2π f_d_ (t-μ)× e^-jπ k μ^2d μ
= b_n_1^(t)(ρ) ·Sa[π(k t+f_d_)(ρ-α_I_n^(t))]
= b_n^(t)(ρ)
When t<T-T_/2-nT_s, and ρ>T_/2+nT_s+t,
y__n^(t)(ρ)
= ∫_-T_/2+nT_s+t^T_/2+nT_s+t A_ e^jπ k (t-μ)^2 + j2π f_d_ (t-μ)× e^-jπ k μ^2d μ
= b_n_0^(t)
The expression for y__n^(t)(ρ) can be derived by combining formulas (<ref>)-(<ref>). It can be written as:
[ y__n^(t)(ρ); ={[ b_n^(t)(ρ) when α__n^(t)⩽ρ⩽β__n^(t); b_n_0^(t) when ρ>β__n^(t); 0 when ρ<α__n^(t) ]. ]
Hanning Su received the B.Sc degree in electronic engineering from Xidian University in 2018. He is currently working towards the Ph.D. degree in signal and information processing with the National Key Lab of Science and Technology on ATR, National University of Defense Technology. His current research interests include radar signal processing, target tracking, and radar anti-jamming.
Qinglong Bao received his B.Sc and Ph.D degrees from the National University of Defense Technology, Changsha, China, in 2003 and 2010, respectively. Currently, he is an Associate Professor with the School of Electronic Science, National University of Defense Technology. His current research interests include radar data acquisition and signal processing.
Jiameng Pan received the B.E. degree in Zhejiang University in 2013, and the Ph.D. degree in National University of Defense Technology in 2020. He is currently a lecturer with the College of Electronic Science and Technology, National University of Defense Technology. His main research interests include radar signal processing, target tracking, and radar anti-jamming.
Fucheng Guo received the Ph.D. degree in information and communication engineering from the National University of Defense Technology (NUDT), Changsha, Hunan, China, in 2002.,He is now a Professor in the School of Electronic Science, NUDT. His research interests include source localization, target tracking, and radar/communication signal processing.
Weidong Hu was born in September 1967. He received the B.S. degree in microwave technology and the M.S. and Ph.D. degrees in communication and electronic system from the National University of Defense Technology, Changsha, China, in 1990, 1994, and 1997, respectively.
He is currently a Full Professor in the ATR Laboratory, National University of Defense Technology, Changsha. His research interests include radar signal and data processing.